Gloria Zhao

Date: November 16, 2021

Transcript By: Michael Folkson

Tags: Transaction relay policy

Category: Conference

Media: https://www.youtube.com/watch?v=fbWSQvJjKFs

Slides: https://github.com/glozow/bitcoin-notes/blob/master/Tx%20Relay%20Policy%20for%20L2%20Devs%20-%20Adopting%20Bitcoin%202021.pdf

Intro

Hi I’m Gloria, I work on Bitcoin Core at Brink. Today I am going to talk a bit about mempool policy, why you cannot always expect your transactions to propagate or your fee bumps to work, and what we are doing to try to resolve stuff like that. This talk is for those of you who find the mempool a bit murky or unstable or unpredictable. You are not really sure if it is an interface you are comfortable building an application on top of. I would guess that this is most of you. Hopefully today we can make the mempool a bit more clear. My goal is to be as empathetic as possible when it comes to transaction propagation limitations. I am going to try to bring the perspective of a Bitcoin protocol developer, how we reason about the mempool and transaction relay. I am going to define what policy is, how we think about it, why we have certain policies, and define pinning. I will go over some known limitations that we have, maybe clear up some misconceptions about what attack vectors are possible. I’ll mention a specific Lightning attack. My goal is for us to walk out of this today as friends so we can start a conversation about how to build a Layer 1 and Layer 2 with a seamless bridge between them.

Anyone should be able to send a Bitcoin payment

I want to start off from the ground up to make sure we are on the same page about how we think about Bitcoin. We want Bitcoin to be a system where anyone in the world can send money to anybody. That is the whole point. Abstractly people often use terms like permissionless and decentralized and trustless. Perhaps a slightly more concrete framework would be something like this. The first point is we want it to be extremely accessible to run a node. If for example bitcoind doesn’t build on MacOS, or it requires 32GB of RAM, or it has 2000 dollars a month of running costs in order to operate a full node, that is unacceptable to us. We want it to run on your basic Raspberry Pi, easy hardware. The other part is censorship resistance because we are trying to build a system that works for those whom the traditional financial system has failed. If we have built a payment system where it is very easy for some government or wealthy bank or large company to arbitrarily censor transactions from users they don’t agree with then we have failed. The other thing is we want it to be secure, obviously. This is an arena where we are not relying on some government where we can say “This node which corresponds to this real life identity did something bad so now we go and prosecute them”. We don’t have that option. We want anyone to be able to run a node, that is part of the permissionless and decentralized portion of Bitcoin’s design goals. Therefore security is not optional, it is first and foremost when we are designing transaction relay. Fourthly, also relating to permissionlessness and decentralization, we do not force software updates in Bitcoin Core. We cannot force anyone to run a new version of the Bitcoin node software, so if we were to, say, ship an upgrade that improves user privacy but costs way more to run, or requires miners to be willing to lose 50 percent of their revenue in fees, we cannot reasonably expect anyone to upgrade their node to that software. Incentive compatibility is always in mind when we are talking about transaction relay. This is the baseline framework for how we are thinking about things.

P2P Transaction Relay Network

Hopefully this does not come as a surprise to you, but the way that we deal with this in Bitcoin is we have a distributed peer-to-peer network where hopefully anyone can run a Bitcoin node and connect to the Bitcoin network. That comes in very different forms. It could be a background task of bitcoind running on your laptop, it could be a Raspberry Pi that has Umbrel or Raspiblitz installed on it, it could be a large company server that is servicing thousands of smartphone wallets. All of them might be a Bitcoin node. They look very different but on the peer-to-peer network they all look like a Bitcoin node. When you are trying to pay somebody all you do is connect to your node, submit your transaction, and broadcast it to your peers; hopefully they will broadcast it to their peers and then it will propagate and eventually reach a miner, who will include it in a block.

Mempools

One really key part of transaction relay is that everyone who participates in transaction relay keeps a mempool. I’ll define what a mempool is for those of you who want it defined. It is a cache of unconfirmed transactions and it is highly optimized to help pick the sets of transactions that are most incentive compatible, aka highest fee rates. This is helpful for both miners and non-miners. It helps us re-org gracefully, and it allows us to design transaction relay in a more private way because we are doing something smarter than just accept and then forward. It is useful. I have written extensively on why mempools are useful if anyone wants to read more.

Mempool Policy

You may have heard the phrase “There’s no such thing as the mempool” and that is true. Each node keeps its own mempool and each node can run different software. They are allowed to configure it however they want because Bitcoin is freedom, right? Different mempools may look different: your Raspberry Pi 1 may not have as much memory available to allocate for the mempool so you might be more restrictive in your policies, whereas a miner may keep a very large mempool because they want a very long backlog of transactions they can include in their blocks if the mempool empties out. We refer to mempool policy as a node’s set of validation rules for transactions, in addition to consensus, which all unconfirmed transactions must pass in order to be accepted into this node’s mempool. This is about unconfirmed transactions; it has nothing to do with transactions within blocks. It is their prerogative what they are going to restrict coming into their mempool.
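For illustration, here is a minimal sketch of how you might inspect the policy limits your own node is actually enforcing, assuming a local Bitcoin Core node with bitcoin-cli on your PATH; the field names are the ones returned by recent versions of getmempoolinfo.

```python
# Minimal sketch: query a local Bitcoin Core node for its effective mempool policy.
# Assumes bitcoin-cli is on PATH and can reach your node (add -regtest/-testnet as needed).
import json
import subprocess

def cli(*args):
    out = subprocess.run(["bitcoin-cli", *args], capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

info = cli("getmempoolinfo")
print("max mempool size (bytes):", info["maxmempool"])              # default is roughly 300 MB
print("min relay feerate (BTC/kvB):", info["minrelaytxfee"])         # static relay floor
print("dynamic mempool min fee (BTC/kvB):", info["mempoolminfee"])   # rises when the mempool fills up
```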

From a protocol development perspective we focus on protecting the bitcoind user

I want to really hammer home the point that from a Bitcoin protocol development standpoint we are trying to protect the user who is running this Bitcoin node. I get into a lot of fights with application developers who think we should be protecting the user who is broadcasting the transaction and in some cases those two are not aligned. We have an extremely restrictive set of resources available to us in mempool validation. Typically it is just 300 MB of memory and we don’t want to be spending for example even half a second validating a single transaction. That is too long. Once again even though we are trying to support the honest users, the whistleblowers, the activists, the traditionally censored people, we also need to be aware that when we are connecting to a peer by design we need to expect that this peer can be an attacker. The most common thing that we think about is CPU exhaustion. Again if it takes half a second that is too long to validate a transaction because we will have hundreds of transactions coming in per second. Our code is open source so if there is a bug in our mempool code such that you can cause an out of memory error because of some transaction with a specific type of script for example we need to expect that that is going to be exploited. Another thing to note is in the competitive world of mining it might sometimes be advantageous for a miner to try to fill mempools with garbage or render other miners’ mempools useless or try to censor high fee transactions from their mempools. Or maybe they are launching a long range attack where they put things in the mempool such that when they broadcast a new block everyone has to spend half a second which is too long updating their mempool. Whatever it is they stall the network for half a second and that gives them a head start on mining the next block which is significant. Or they are able to launch some other type of attack.

There are other concerns that I will talk about less today. I will be talking a lot about Lightning counterparties: we are aware of the fact that Lightning counterparties with conflicting transactions might have a back and forth with respect to fee bumping races or pinning attacks. We want to make it such that as long as the honest user is broadcasting something incentive compatible they are always going to be able to win. Most of the time, let’s say. And of course transaction relay is an area where privacy is a huge concern. If you are able to find the origin node that a transaction came from and you are able to link an IP address with a wallet address, it doesn’t matter how much you hide onchain, you have already de-anonymized this transaction. The other example of this is that transactions are probably the cheapest way to try to analyze network topology. We are always trying to make sure we are not shooting ourselves in the foot with respect to privacy. I am not going to talk about privacy as much today but it is also a concern.

The Policy Spectrum

We can think about policy on a spectrum where on one extreme end we have the “ideal” perfect mempool. We have infinite resources, we are able to freeze time every time we receive a transaction or need to assemble a block, and we have an infinite amount of time to validate or assemble. Perhaps in this case we accept all consensus valid transactions. On the other side of the spectrum we have the perfectly defensive, conservative mempool policy where we don’t validate anything. We don’t want to spend any validation resources on anything that comes from someone we don’t know. Unfortunately if we want to make this sustainable at a network level we are going to need a central registry of identities that correspond to nodes or wallet addresses. We are just a bank at that point. Neither end of the spectrum really achieves the design goals we want. We want to be somewhere in the middle.

DoS protection isn’t the full story

But it would be too easy if this was the only concern. DoS protection is actually not the only thing that we think about when it comes to mempool policy; that is a common misconception. Let’s go back to thinking about these two ends. We’ll see why this linear approach is not the full picture. Even if we were to have infinite resources, would we want to accept every single consensus valid transaction? The answer is actually no. The best example I can offer is soft forks. While we have this space of all consensus valid transactions today, that space can shrink. With the recent Taproot activation for example we noticed that F2Pool, who mined the activation block, did not actually upgrade their nodes to have Taproot validation logic. The question is what would have happened if somebody sent them an invalid Taproot spend, an invalid v1 witness spending transaction. If they had accepted that into their mempool and mined it into a block then F2Pool, AntPool and all the users with nodes that weren’t upgraded would have forked. We would have had a network split where some of the network is not upgraded and some of it is actually enforcing Taproot. As long as 51 percent of the hash rate is enforcing Taproot we are eventually ok, but this is a disastrous situation that we would want to avoid. So why didn’t it happen? Presumably all miners were running at least version 0.13 and above, which included SegWit. SegWit defined semantics for v0 witnesses and discouraged the use of all witness versions v1 and above. That is why F2Pool did not enforce Taproot rules but also did not accept any v1 witness transactions into their mempool and did not include them in their blocks. This is one reason why this kind of linear view is imperfect.

On the other side of the spectrum we have the perfectly defensive one. The question is: are there policies where we are trying to protect the node from DoS but we harm users? This goes back to all those conversations I’ve had with application devs where policy is harming their users. The example I’m going to give is descendant limits. In Bitcoin Core’s standardness rules, the default descendant limit is that you won’t allow a transaction and all of its in-mempool descendants to exceed 101 kilo virtual bytes. This is a sensible rule because you can imagine a miner broadcasting transactions such that everyone’s mempool is just one transaction and its descendants. They then publish a block that conflicts with that transaction and they’ve just flushed out everyone’s mempools. Everyone else spends a few seconds updating all those entries and now they have an empty mempool and nothing to put in their next block, and this miner just got a head start on the next block. So this is a sensible rule for protecting our limited mempool resources. However, for the Lightning devs out there, perhaps you are aware of this: when anchor outputs were proposed in Lightning the idea was that each counterparty would have an output in the commitment transaction that could be immediately spent, so that each of them could attach a high fee child to fee bump the commitment transaction. That was the background behind anchor outputs. It was a great idea. However, they were thinking “What if one of the counterparties dominates the descendant limit, so they broadcast a 100 kilo virtual byte child and now the other counterparty is not able to fee bump?” This is what is known as a type of pinning attack. A pinning attack is a type of censorship attack in which the attacker takes advantage of mempool policies to either censor a transaction from mempools or prevent a transaction that is in the mempool from being mined. In this case they are preventing the commitment transaction from being mined. We’ve defined pinning.
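To make the numbers concrete, here is a rough sketch (my own illustration, not Bitcoin Core’s actual code) of the default descendant limits being described: a new child is rejected if it would push an in-mempool ancestor’s descendant package past 25 transactions or 101,000 virtual bytes, which is exactly the room a pinning counterparty can use up first.

```python
# Rough sketch of the default descendant-limit check, using Bitcoin Core's
# default limits of 25 descendants and 101 kvB of total descendant size.
DEFAULT_DESCENDANT_COUNT_LIMIT = 25
DEFAULT_DESCENDANT_SIZE_LIMIT_VB = 101_000

def child_fits_descendant_limits(existing_descendant_count: int,
                                 existing_descendant_vsize_vb: int,
                                 child_vsize_vb: int) -> bool:
    """existing_descendant_* describe an in-mempool ancestor's current descendant
    package (the ancestor itself included); child_vsize_vb is the new child."""
    if existing_descendant_count + 1 > DEFAULT_DESCENDANT_COUNT_LIMIT:
        return False
    if existing_descendant_vsize_vb + child_vsize_vb > DEFAULT_DESCENDANT_SIZE_LIMIT_VB:
        return False
    return True

# The pinning scenario: the attacker attaches a ~100 kvB child to the shared
# commitment transaction, so the honest party's small fee-bumping child no longer fits.
print(child_fits_descendant_limits(2, 100_700, 500))  # False: size budget exhausted
print(child_fits_descendant_limits(2, 1_000, 500))    # True: plenty of room left
```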

It is now worth mentioning the CPFP carve out exemption which was shoehorned into mempool policy to address this exact attack on Lightning users. It is a bit convoluted: if the extra descendant has a maximum of two ancestors and it is within 10 kilo virtual bytes etc. then it gets an exemption. It is the cause of a lot of testing bugs and it is an ugly hack. I’m not here to complain but I want to note that the average Bitcoin node user who maybe doesn’t care about Lightning for whatever reason doesn’t really have a reason to have this in their mempool policy. It might not be entirely incentive compatible is what I’m saying. I’ll come back to this later.

Policy designed for incentive compatibility enables fee bumping

Speaking of fee bumping, I do want to mention some examples of policies that people might like. These fee bumping primitives are all brought to you by mempool policy. The first one is RBF and this comes from a behavior in our mempool where if we see a transaction that conflicts with something in our mempool we’re not just going to say “We don’t want that”. We will actually consider that new transaction and if it pays significantly higher in fees we might replace the one in our mempool. This is good because it aligns with incentive compatibility and it allows users to fee bump their transactions. The other one is that the mempool is aware of ancestor and descendant packages, so when we are including transactions in a block we go by ancestor fee rate. This allows a child to pay for a parent, or CPFP. And when we are evicting transactions from the mempool we are not going to evict something that has a low fee rate if it has a high fee child. Again this is incentive compatible but it also helps users.
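As a small worked example of that ancestor fee rate idea (a simplified sketch, not the actual block-building code), a transaction is scored together with its unconfirmed ancestors, which is what lets a high fee child pull a low fee rate parent into a block:

```python
# Simplified sketch of the "ancestor score" behind CPFP: a transaction is
# evaluated together with all of its unconfirmed ancestors.
def ancestor_feerate_sat_per_vb(tx_fee_sat: int, tx_vsize_vb: int,
                                ancestor_fees_sat: int, ancestor_vsize_vb: int) -> float:
    return (tx_fee_sat + ancestor_fees_sat) / (tx_vsize_vb + ancestor_vsize_vb)

# Hypothetical numbers: a 1 sat/vB parent (200 vB, 200 sat) on its own would sit
# near the bottom of the mempool, but a 50 sat/vB child (150 vB, 7,500 sat)
# lifts the pair to ~22 sat/vB when considered as a package.
print(ancestor_feerate_sat_per_vb(7_500, 150, 200, 200))  # ~22.0
```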

I am going to shill my work really quickly. Package RBF is the combination of RBF and CPFP such that you can have a child paying for both of its parents and RBFing the conflicts of its parents. It is pretty cool, you should check it out.

Mempool Policy

Just to summarize what we’ve covered so far. We’ve defined mempool policy as a node’s set of validation rules, in addition to consensus, that they apply to unconfirmed transactions in order to be accepted into their mempool. We have covered a few different reasons for why we have mempool policy. One of the biggest ones is DoS protection. We are also trying to design transaction acceptance logic that is incentive compatible and that allows us to upgrade the network’s consensus rules in a safe way. There is a fourth category that I would have liked to mention but we don’t have time for today: network best practices or standardness, such as the dust limit. That’s the one that I think is more miscellaneous, let’s say.

I did title this talk “Transaction Relay Policy”. That is because we are aware of the fact that the vast majority of nodes on the Bitcoin network run Bitcoin Core. The vast majority of people who respond to polls on Twitter have stated that they use the default settings when it comes to mempool policy. This is kind of scary when you are opening a PR for mempool policy changes, but it is a good thing in that it allows us to have a set of expectations for whether or not our transactions are going to propagate across the network. That is why it might as well be called transaction relay policy.

Policies can seem arbitrary/opaque and make transaction relay unpredictable

Now I am going to get into known issues with our default mempool policy in Bitcoin Core. I have been told that a lot of policy seems arbitrary, the dust limit for example. I want to empathize and that is why I made this meme to show my sympathy. But essentially as an application developer you don’t have a stable or predictable interface for broadcasting your transactions. Especially in Lightning or offchain contracts, where you are relying on being able to propagate and confirm a transaction in time to redeem the funds before your counterparty gets away with stealing from you, this can be a huge headache. I am aware of some issues within the standardness of a transaction itself; the biggest one is the dust limit, and the myriad of convoluted, opaque script rules that transactions need to follow. Another category is how transactions are evaluated within the context of a mempool. I already mentioned very niche exceptions in ancestor and descendant limit counting. I am also aware of BIP 125 RBF giving a lot of people grief because it can be expensive or sometimes even impossible to fee bump your transactions using RBF. And of course one of the biggest ones is that in periods of high transaction volume, and you never know when that is going to happen, you are not always able to fee bump your transactions. I put at the bottom the worst one, which is that every mempool can have different policies. I would say another one is that mempool policy is not well understood by application developers. That is why I’m here.

Commitment Transactions cannot replace one another

I am going to now describe a specific Lightning channel attack. It all stems from the fact that commitment transactions cannot replace each other. In Lightning you have this combination of conditions where you negotiate the fee rate ahead of time, way before you are ever going to broadcast this transaction, because you weren’t planning on broadcasting that transaction. You are also not able to use RBF. If your counterparty is trying to cheat you, or you are doing a unilateral close because they are not responding to you, they are obviously not available to sign a new commitment transaction with you. But also, because mempools only look at replacement transactions one at a time and your commitment transactions are going to be the same fee rate, because why wouldn’t they be, commitment transactions cannot replace each other. Even if a child like Tx2A has a huge fee rate, Tx1A cannot replace Tx1B, because the child’s fees are not considered in the replacement. A small note, I am aware that these are not fully encapsulating all of the script logic in Lightning transactions, please bear with me, the revocation keys and stuff I didn’t include. Just imagine these as simplified Lightning transactions.
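For intuition, here is a hedged sketch of the BIP 125 fee conditions that make this true (simplified, not the full rule set): a replacement has to pay strictly more in absolute fees than everything it evicts, plus enough extra to cover its own relay bandwidth, and only the replacement transaction’s own fees count, not its children’s.

```python
# Simplified sketch of BIP 125 fee rules 3 and 4 (signaling and other rules omitted).
INCREMENTAL_RELAY_FEERATE_SAT_PER_VB = 1  # Bitcoin Core default of 1,000 sat/kvB

def passes_bip125_fee_rules(replacement_fee_sat: int, replacement_vsize_vb: int,
                            evicted_total_fee_sat: int) -> bool:
    if replacement_fee_sat <= evicted_total_fee_sat:   # rule 3: pay more than what is evicted
        return False
    extra = replacement_fee_sat - evicted_total_fee_sat
    if extra < replacement_vsize_vb * INCREMENTAL_RELAY_FEERATE_SAT_PER_VB:
        return False                                   # rule 4: pay for your own relay bandwidth
    return True

# Two commitment transactions negotiated at the same fee rate pay the same fee
# for the same size, so neither can replace the other (hypothetical numbers):
print(passes_bip125_fee_rules(330, 300, 330))    # False
print(passes_bip125_fee_rules(1_000, 300, 330))  # True: 670 extra sats covers 300 vB at 1 sat/vB
```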

Lightning Attacks

We are going to describe an attack between Alice, Bob and Mallory where Alice and Bob have a Lightning channel and Bob and Mallory have a Lightning channel. Alice is going to pay Mallory through Bob, because Lightning. If Lightning is working properly either both Bob and Mallory get paid or neither of them gets paid. That is how it is supposed to work. When the HTLC is offered from Alice to Bob and from Bob to Mallory, each one of them is going to have a transaction in each channel. There is going to be a pair of commitment transactions in each channel. With LN-Penalty we have these asymmetric commitments so we are able to attribute blame for who broadcast the transaction. These are conflicting transactions just to be clear. Tx1A and Tx1B conflict. Tx2B and Tx2M conflict. Only one of each pair can be broadcast and confirmed onchain. We have constructed this such that Alice is paying Bob and Bob is paying Mallory. Mallory can reveal the preimage to redeem the funds or Bob can get his refund at t5. On the other side Bob can reveal the preimage that he got from Mallory and redeem the funds or Alice can get her refund at t6. Bob has that tiny little buffer between the HTLC to Mallory timing out and Alice redeeming her refund. But spoiler alert, the outcome of this attack is going to be that Bob pays Mallory but Alice doesn’t pay Bob. These would be the corresponding transactions: Mallory broadcasts and gets the redemption with the preimage, and Alice broadcasts and gets her refund from Bob. It is going to happen such that at t6 Alice gets her refund but Bob was not able to get his refund.

What happens is Mallory broadcasts her commitment transaction with a huge pinning transaction attached to it at t4. These two transactions are in everyone’s mempools. There are two scenarios: either Bob has a mempool and is watching, so he knows Mallory broadcast these transactions, or he doesn’t. Let’s go over the first scenario first. We require CPFP carve out, but like I said before I don’t really know why every mempool would have CPFP carve out implemented as one of their mempool policies. It allows Bob to get his money back in this case; he is able to fee bump the commitment transaction and then he gets his refund. But in this case CPFP carve out is critical to Lightning security. The second case is Bob doesn’t have a mempool. I think this is a more reasonable security assumption for Lightning nodes because it doesn’t make sense to me that Lightning nodes need to have a Bitcoin mempool or to be watching Bitcoin mempools in general. Bob was not watching a mempool and so at t5 he says “Ok, Mallory hasn’t revealed the preimage to me and she’s not responding to me so now I am going to unilaterally close. I am going to broadcast my commitment transaction plus a fee bump and I am going to get my refund after the to_self_delay.” The problem is that Mallory’s transactions are already in the mempool, so even if Bob broadcasts his, he’s like “Why isn’t it confirming onchain?” He’s watching the blockchain, he’s not seeing his transactions confirm and he’s very confused. It is because Mallory has already put this transaction in the mempool and he is not able to replace it. The solution to this is package relay. In this case package relay is an important part of Lightning security.
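To illustrate why package relay helps here, a conceptual sketch (my own illustration with made-up numbers, not Bitcoin Core code): Bob’s commitment transaction on its own pays the same fee as Mallory’s and cannot replace it, but evaluated as a package together with his fee-bumping child it pays enough to evict Mallory’s pinned transactions.

```python
# Conceptual sketch of package-based replacement: compare the whole package's fees
# against the fees of everything it would evict (other BIP 125-style conditions omitted).
def package_can_pay_for_replacement(package_txs, evicted_total_fee_sat):
    """package_txs: list of (fee_sat, vsize_vb) for a parent and its children."""
    package_fee = sum(fee for fee, _ in package_txs)
    return package_fee > evicted_total_fee_sat

commitment_bob = (330, 700)        # hypothetical: negotiated fee, same as Mallory's commitment
bob_fee_bump_child = (21_000, 150)
mallory_pinned_fees = 330 + 500    # her commitment transaction plus a big low-feerate pin

print(package_can_pay_for_replacement([commitment_bob], mallory_pinned_fees))                      # False
print(package_can_pay_for_replacement([commitment_bob, bob_fee_bump_child], mallory_pinned_fees))  # True
```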

In both of these cases t5 has elapsed and Bob has not been able to successfully redeem his refund from Mallory. At t6 Alice gets her refund but Bob didn’t. Let’s pretend Alice and Mallory are actually the same person. They have managed to steal this HTLC amount from Bob. Hopefully everyone followed that. As a disclaimer this is easily avoided if the commitment transactions are negotiated with a fee rate that was high enough to just confirm immediately in the next block at t4 or t5. This brings me to my recommendations for L2 and application developers.

Let’s try to be friends?

My recommendation is, for the time being, lean towards overestimating on fees rather than underestimating. These pinning attacks only work if you are floating at the bottom of the mempool and you are not able to fee bump. Another one is never rely on zero confirmation transactions that you don’t control. What you have in your mempool does not necessarily match what miners have in their mempools, or even the entirety of the rest of the network might have a different transaction in their mempools. So if you don’t have full control over that transaction don’t rely on it if it has no confirmations. Another one is there is a very nice RPC called testmempoolaccept available with Bitcoin Core. That will test both consensus rules and standard Bitcoin Core default policy. Again, because that is the vast majority of policies throughout the network it should be pretty good for testing. Especially if you are relying on fee bumping working, I recommend having different tests with various mempool contents, maybe conflicts, maybe some set of nodes to test your assumptions. Try to do that on every release, maybe on master every once in a while. I don’t want to put too many expectations on you but these are my recommendations for how to make sure that you are not relying on unsound assumptions when it comes to transaction propagation. I would like to invite you to communicate your grievances, whether it is with our standard mempool policy or with the testing utilities not being sufficient for testing your use cases. Please complain and say something. Hopefully this talk has convinced you that while we do have restrictions for a reason we want to support Lightning applications. Please give feedback on proposals when they are sent to the bitcoin-dev mailing list. On the other side, I think we all agree that Lightning transactions are Bitcoin transactions, so at least in the Bitcoin Core world one big effort is to document our current transaction relay policy and provide a stable testing interface to make sure transactions are passing them. We are also working on various improvements and simplifications of our current mempool policy. Recently we have agreed not to restrict policy without notifying bitcoin-dev first. But of course that doesn’t work unless you guys read bitcoin-dev and then give us your feedback if something is going to harm your application. Together we can get the privacy and scalability and all those wonderful usability things that come with L2 applications without sacrificing security.
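As a quick illustration of the testmempoolaccept suggestion (a minimal sketch assuming a local node and bitcoin-cli on your PATH; the transaction hex is a placeholder, not a real transaction), the RPC dry-runs a raw transaction against your node’s consensus and policy checks without broadcasting it:

```python
# Minimal sketch: dry-run a raw transaction against the node's consensus + policy rules.
import json
import subprocess

def cli(*args):
    out = subprocess.run(["bitcoin-cli", *args], capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

raw_tx_hex = "02000000..."  # placeholder: your own serialized transaction goes here

result = cli("testmempoolaccept", json.dumps([raw_tx_hex]))[0]
if result["allowed"]:
    print("would be accepted; fees:", result.get("fees"))    # includes the computed feerate
else:
    print("rejected:", result.get("reject-reason"))          # e.g. "min relay fee not met"
```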

Q&A

Q - With testmempoolaccept you said it was hardcoded to the default policy or is it based on the policy that I’m running on my node?

A - If you are calling testmempoolaccept on your node it will be your node’s policy. But if you, say, spin up some regtest nodes or you compile bitcoind, that will be whatever the defaults are.

Q - It is not the defaults, it is whatever my node is running. If I want to test other assumptions I could change the policy on my node and that would be reflected in testmempoolaccept?

A - Yes

Q - If the package relay makes it in would that allow us to safely remove the CPFP carve out in the future without harming people with existing Lightning channels and stuff like that?

A - That is a very good question. I think so but I haven’t completely thought it through. I think so.

Q - One question regarding the Lightning nodes monitoring the mempool or not. You said it was not a proper assumption that a Lightning node would be monitoring the mempool because why should it. But if you are forwarding HTLCs and someone may drop the commitment transaction onchain you may see the secret there. You have to accept HTLCs with the same secret. Why would you wait for that transaction to get confirmed instead of taking that from the mempool? You shouldn’t assume that you are going to receive that in your mempool because you may not but if you see it in the mempool you may be able to act quicker.

A - This is how I’m understanding your question. In a lot of these types of attacks, if the honest party is watching the mempool then they are able to mitigate that attack and respond to it. I think there is a difference between “having a mempool helps” and “mempools are required in order for Lightning nodes to be able to secure their transactions”. Yes, on one hand I do agree, having a mempool helps. But I don’t think anybody wants a mempool to be a requirement for Lightning security.

Q - Even then it would mean monitoring the entire mempool of the network, which is not possible.

A - Right.

Q - What’s the story on you and Lisa with the death to the mempool thing?

A - I am happy to talk about this offline. Lisa and I had a discussion about it where we disagreed with each other. She tagged me in her post to the bitcoin-dev mailing list. I think people were under the impression that our discussion was us being in agreement when actually we didn’t agree. I think that is where most of the confusion comes from.

Q - Her stance is that mempools are unreliable so we should get rid of them altogether?

A - Mempools as they stand today are not perfect. It is not an extremely stable interface for application devs. I have heard multiple Lightning devs say “For us a much better interface would just be if our company partnered with a miner and we were always able to submit our transactions to them.” For example all lnd nodes or all rust-lightning nodes were able to just always submit to F2Pool or something. That would resolve the headaches around making sure your transactions are standard and you are going to meet policy. I understand why that’s more comfortable for Lightning devs but I feel like in that case it is not “We need a mempool in order for Lightning to be secure” it is “We need a relationship with a specific miner in order for Lightning to be secure”. I don’t think that is censorship resistant, I don’t think that is more private, I don’t think that is the direction that we want Bitcoin and Lightning to go in. That is my own personal opinion. That’s why I came today to talk about this because I understand there are grievances but I think we can work together to make an interface that works.

darosior: At the end of the day it is pretty similar to fee bumping, you have shortcuts and very often they would increase centralization pressure. Once we lose censorship resistance we will realize that it was precious and we should think before we lose it.

Q - …

A - I can’t offer a specific roadmap but I can tell you what is implemented and what is yet to be implemented. All of the mempool validation code is implemented; it is not all merged, but that part is already written. The next part is defining the P2P protocol, which we get closer to finalizing every day: writing out the spec, putting out the BIP, implementing it and then getting it reviewed and merged. That is what is implemented and what is not. You can make your own estimate on how long that will take.

Q - While you’ve been working on package relay what have you learned that surprised you? Something that is interesting that was unexpected, something more difficult or more easy than you thought?

A - I am surprised by how fun it is. It is a privilege everyday to wake up and get to think about graph theory and incentive compatibility and DoS protection. It is very fun. I didn’t expect it to be this fun. The next layer of surprise is I am very surprised not everyone is interested in mempool policy.

Q - Do you have some examples of DDoS attack vectors that you would introduce, that are not super obvious, when you have a mempool package policy?

A - For example our UTXO set is several gigabytes. Some of it is stored on disk and we have layered caches. We want to be very protective over how peers on the peer-to-peer network can influence how we are caching things. That can impact block validation performance. For example, based on how we limit the size of one transaction we can expect a certain amount of churn in our UTXO caching system. We don’t want that to be way larger if we are validating 25 transactions at a time or 2 transactions at a time or n transactions at a time. That would be an example of a DoS attack. Same thing with signatures: if there can be exponentially more signatures present in a package compared to an individual transaction, we don’t want to be expending exponentially more resources validating a package compared to a transaction.

Q - The Lightning case is a very specific case where you have a package of two transactions. Is it just trying to get that one right first and then think about the general case later?

A - Yes. The single child, single parent versus single child, multi parent. I think most of it was a trade-off between complexity with implementation versus how useful it would be for both L1 and L2 protocols. Given the fact that we know there is a lot of batched fee bumping that happens even on the onchain Bitcoin level it has a very high usefulness score in terms of multi parent, single child. When we were looking at single parent, single child versus multi parent, single child the complexity is not that much worse. Weighing those two things against each other, multi parent, single child was better on those two things.

darosior: It seems to be weird to tailor the mempool rules to a specific use case, that is Lightning. That is the criticism with carve outs, which I kind of agree with. Multi parent fee bumping with CPFP could be useful for other applications as well, and is useful for other applications.

Q - I agree with you that mempool is a super interesting research area. Throughout the entire blockchain space we have seen things blow up because of mempool policy, not necessarily in Bitcoin but in other blockchain ecosystems. Do you spend much time trying to look through what other blockchains have tried and either failed at or even succeeded at that we could bring over to the Bitcoin mempool policy?

A - No I haven’t spent any time. Not because I don’t think there’s good ideas out there. I’m not the only person who has ever thought about a mempool and Bitcoin is not the only place that has ever looked at the mempool. I have noted that a lot of the problems that we have in mempool are specific to the fact that we are a UTXO based model. It is nice because we don’t have the problem where we aren’t able to enforce ordering of transactions. It is trivial for us because parents need to be before children. But it does mean that when we are creating a mempool 75 percent of the mempool dynamic memory usage is allocated for the metadata and not the transactions themselves. We need to be tracking packages of transactions as a whole rather than treating them as atomic units.

Q - That is in a package relay world or that is in Bitcoin Core?

A - In the Bitcoin world.

Q - Today?

A - Yes, 75 percent. It is always surprising to people. I think it is mostly because of the fact that we have a UTXO based model. I don’t think it is a bad thing, this is something that we have to deal with that other people don’t have to deal with. Like I said before there are advantages of this that we have that other people don’t.

Q - Where can I find out more about package relay?

A - Brink has recently released a podcast called The Bitcoin Development Podcast. We have recorded the first 5 episodes and it is all about mempool policy, pinning attacks, package relay and mining improvements. I would recommend going to brink.dev/podcast to learn more.

Q - Could you talk a little bit about the dust limit and mempool acceptance? Lowering the dust limit for mempool policy, is that on the roadmap? Should we have it? Should we not have it? Are there different clients that have different thoughts about what is the current dust limit? I think I had some channel closings because there were some disagreements between Lightning nodes.

A - For those who are not aware, dust refers to uneconomical outputs. If you have an output with say 20 satoshis in it, even at the network rate of 1 sat per vbyte you cannot spend this output in an economical way because the input size to provide the script to spend that output is going to require more than 20 satoshis to pay for it. In Bitcoin Core mempool policy we ban any transaction that has dust in it. This is not just to gatekeep; the idea is that when you have dust in the global UTXO set probably nobody is going to spend it but everyone on the network has to keep it. Because scalability is essentially limited by the UTXO set size, it is an arbitrary choice by the Bitcoin Core developers to say we don’t allow dust to propagate through Bitcoin Core nodes. It is an example of that best practices category that I talked about earlier in the slide, where it is consensus valid and it doesn’t fit into the other categories like DoS protection, although in a way it is a form of DoS: you are asking the entire network to store dust in their UTXO set. It is also an example of this inherent tension between application devs and protocol devs when it comes to mempool policy. For example in Lightning, if you have outputs such as in-flight HTLCs or the channel balances such that one of those outputs is dust, the transaction is not going to propagate. So I think typically in Lightning if you have an output that is going to be dust you drop it to fees. That is considered best practice. But this is an inconvenient thing that they have to deal with when they are implementing Lightning. Again, I am trying to be as empathetic as possible, I understand why this sucks. But there are reasons to have that dust limit. It is a place of tension there. I am not saying it is never going to change, I’m just saying those are the arguments.
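As a back-of-the-envelope sketch of that dust idea (my own illustration, not Bitcoin Core’s exact GetDustThreshold code), an output is treated as dust if spending it later would cost more in fees than it is worth, using Bitcoin Core’s default dust relay feerate of 3 sat/vB:

```python
# Rough sketch of the dust threshold idea: an output is dust if its value is below
# the cost of creating and later spending it at the dust relay feerate.
DUST_RELAY_FEERATE_SAT_PER_VB = 3  # Bitcoin Core default (3,000 sat/kvB)

def is_dust(output_value_sat: int, output_size_vb: int, spending_input_size_vb: int) -> bool:
    cost_to_create_and_spend = (output_size_vb + spending_input_size_vb) * DUST_RELAY_FEERATE_SAT_PER_VB
    return output_value_sat < cost_to_create_and_spend

# Approximate sizes: a P2PKH output is ~34 vB and the input that spends it ~148 vB,
# which gives the familiar 546 sat threshold; the 20 sat output from the example is far below it.
print(is_dust(20, 34, 148))   # True
print(is_dust(546, 34, 148))  # False: exactly at the threshold, so not dust
```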

\ No newline at end of file +https://www.youtube.com/watch?v=fbWSQvJjKFs

Slides: https://github.com/glozow/bitcoin-notes/blob/master/Tx%20Relay%20Policy%20for%20L2%20Devs%20-%20Adopting%20Bitcoin%202021.pdf

Intro

Hi I’m Gloria, I work on Bitcoin Core at Brink. Today I am going to talk a bit about mempool policy and why you cannot always expect your transactions to propagate or your fee bumps to work. What we are doing to try to resolve stuff like that. This talk is for those of you who find the mempool a bit murky or unstable or unpredictable. You are not really sure if it is an interface you are comfortable building an application on top of. I would guess that this most of you. Hopefully today we can make the mempool a bit more clear. My goal is to be as empathetic as possible when it comes to transaction propagation limitations. I am going to try to bring a perspective of a Bitcoin protocol developer, how we reason about mempool and transaction relay. I am going to define what policy is, how we think about it, why we have certain policies and define pinning. I will go over some known limitations that we have, maybe clear up some misconceptions about what attack vectors are possible. I’ll mention a specific Lightning attack. My goal is for us to walk out of this today as friends so we can start a conversation about how to build a Layer 1 and Layer 2 with a seamless bridge between them.

Anyone should be able to send a Bitcoin payment

I want to start off from the ground up to make sure we are on the same page about how we think about Bitcoin. We want Bitcoin to be a system where anyone in the world can send money to anybody. That is the whole point. Abstractly people often use terms like permissionless and decentralized and trustless. Perhaps a slightly more concrete framework would be something like this. The first point we want it to be extremely accessible to run a node. If for example bitcoind doesn’t build on MacOS or it requires 32GB of RAM or it has 2000 dollars a month of running costs in order to operate a full node that is unacceptable to us. We want it to run on your basic Raspberry Pi, easy hardware. The other part is censorship resistance because we are trying to build a system that works for those whom the traditional financial system has failed. If we have built a payment system where it is very easy for some government or wealthy bank or large company to censor arbitrarily transactions from users they don’t agree with then we have failed. The other thing is we want it to be secure, obviously. This is an arena where we are not relying on some government where we can say “This node which corresponds to this real life identity did something bad so now we go and prosecute them”. We don’t have that option. We want anyone to be able to run a node, that is part of the permissionless and decentralized portion of Bitcoin’s design goals. Therefore security is not an option, it is first and foremost when we are designing transaction relay. Fourthly, also relating to permissionless and decentralization we do not force software updates in Bitcoin Core. We cannot force anyone to run a new patch of Bitcoin node so if we were to say have an upgrade that improves user privacy but it costs way more or miners need to be willing to lose 50 percent in revenue and fees we cannot reasonably expect anyone to upgrade their node to that software. Incentive compatibility is always in mind when we are talking about transaction relay. This is the baseline framework for how we are thinking about things.

P2P Transaction Relay Network

Hopefully this has not come as a surprise to you but the way that we deal with this in Bitcoin we have a distributed peer-to-peer network where hopefully anyone can run a Bitcoin node, connect to the Bitcoin network. That comes in very forms. It could be a background task of bitcoind running on your laptop, it could be a Raspberry Pi that has Umbrel or Raspiblitz installed on it, it could be a large company server that is servicing thousands of smartphone wallets. All of them might be a Bitcoin node. They look very different but on the peer-to-peer network they all look like a Bitcoin node. When you are trying to pay somebody all you do is you connect to your node, you submit your transaction, you broadcast it to your peers, hopefully they will broadcast it to their peers and then it will propagate and eventually reach a miner. They will include it in a block.

Mempools

One really key part of transaction relay is everyone who participates in transaction relay keeps a mempool. I’ll define what a mempool is for those of who want it defined. It is a cache of unconfirmed transactions and it is highly optimized to help pick the sets of transactions that are most incentive compatible aka highest fee rates. This is helpful for both miners and non-miners. This helps us gracefully re-org, it allows us to design transaction relay in a more private way because we are doing something smarter than just accept and then forward. It is useful. I have written extensively on why mempools are useful if anyone wants to read more.

Mempool Policy

You may have heard the phrase “There’s no such thing as the mempool” and that is true. Each node keeps a mempool and each node can have different software. They are allowed to configure it however they want because Bitcoin is freedom right? Different mempools may look different, your Raspberry Pi 1 may not have as much memory available to allocate for the mempool so you might be more restrictive in your policies. Whereas a miner may keep a very large mempool because they want a very long backlog of transactions they can include in their blocks if things empty. We refer to mempool policy as a node’s set of validation rules for transactions in addition to consensus which all unconfirmed transactions must pass in order to be accepted into this node’s mempool. This is unconfirmed transactions, it has nothing to do with transactions within blocks. It is their prerogative what they are going to restrict coming into their mempool.

From a protocol development perspective we focus on protecting the bitcoind user

I want to really hammer home the point that from a Bitcoin protocol development standpoint we are trying to protect the user who is running this Bitcoin node. I get into a lot of fights with application developers who think we should be protecting the user who is broadcasting the transaction and in some cases those two are not aligned. We have an extremely restrictive set of resources available to us in mempool validation. Typically it is just 300 MB of memory and we don’t want to be spending for example even half a second validating a single transaction. That is too long. Once again even though we are trying to support the honest users, the whistleblowers, the activists, the traditionally censored people, we also need to be aware that when we are connecting to a peer by design we need to expect that this peer can be an attacker. The most common thing that we think about is CPU exhaustion. Again if it takes half a second that is too long to validate a transaction because we will have hundreds of transactions coming in per second. Our code is open source so if there is a bug in our mempool code such that you can cause an out of memory error because of some transaction with a specific type of script for example we need to expect that that is going to be exploited. Another thing to note is in the competitive world of mining it might sometimes be advantageous for a miner to try to fill mempools with garbage or render other miners’ mempools useless or try to censor high fee transactions from their mempools. Or maybe they are launching a long range attack where they put things in the mempool such that when they broadcast a new block everyone has to spend half a second which is too long updating their mempool. Whatever it is they stall the network for half a second and that gives them a head start on mining the next block which is significant. Or they are able to launch some other type of attack.

The other concerns that I will talk about less today regarding privacy, I will be talking a lot about Lightning counterparties but we are also aware of the fact that Lightning counterparties with conflicting transactions might have a back and forth with respect to fee bumping races or pinning attacks. We want to make it such that as long as the honest user is broadcasting something incentive compatible they are always going to be able to win. Most of the time, let’s say. And of course transaction relay is an area where privacy is a huge concern. If you are able to find the origin node that a transaction came from and you are able to link a IP address with a wallet address it doesn’t matter how much you hide onchain, you have already de-anonymized this transaction. This is an area where privacy is a huge concern. The other example of this is transactions are probably the cheapest way to try to analyze network topology. We are always trying to make sure we are not shooting ourselves in the foot with respect to privacy. I am not going to talk about privacy as much today but it is also a concern.

The Policy Spectrum

We can think about policy on a spectrum where on one extreme end we have the “ideal” perfect mempool. We have infinite resources, we are able to freeze time every time we receive a transaction or need to assemble a block and we have an infinite amount of time to validate or assemble. Perhaps in this case we accept all consensus valid transactions. On the other side of the spectrum we have the perfectly defensive conservative mempool policy where we don’t validate anything. We don’t want to spend any validation resources on anything that comes from someone we don’t know. Unfortunately if we want to make this sustainable at a network level we are going to need a central registry of identities that correspond to nodes or wallet addresses. We are just a bank at that point. Neither end of the spectrum really follows our design goals that we want. We want to be somewhere in the middle.

DoS protection isn’t the full story

But it would be too easy if this was the only concern. DoS protection is actually not the only thing that we think about when it comes to mempool policy, it is a common misconception. Let’s go back to thinking about these two ends. We’ll see why this linear approach is not the full picture. Even if we were to have infinite resources would we want to accept every single consensus valid transaction? The answer is actually no. The best example I can offer is soft forks. While we have this space of all consensus valid transactions today that space can shrink. With the recent Taproot activation for example we noticed that F2Pool who mined the activation block did not actually upgrade their nodes to have Taproot validation logic. The question is what would have happened if somebody sent them an invalid Taproot spend, an invalid v1 witness spending transaction. If they had accepted that into their mempool and mined it into a block then F2Pool, AntPool and all the users with nodes that weren’t upgraded would have forked. We would have had a network split where some of the network is not upgraded and some of it is actually enforcing Taproot. This is disastrous but as long as 51 percent of the hash rate is enforcing Taproot we are ok but this is a very bad situation that we would want to avoid. So presumably all miners were running at least version 0.13 and above which included SegWit. With SegWit had semantics for v0 witnesses, it discouraged the use of all witnesses v1 and above. That is why F2Pool did not enforce Taproot rules but they also did not accept any v1 witness transactions into their mempool and did not include those in their blocks. This is one reason why this kind of linear view is imperfect.

On the other side of the spectrum we have the perfectly defensive one. The question is are there policies where we are trying to protect the node from DoS but we harm users? This goes back to all those conversations I’ve had with application devs where policy is harming their users. The example I’m going to give is descendant limits. In Bitcoin Core’s standardness rules, the default limit for descendants is you won’t allow a transaction and all of its descendants in the mempool to exceed 101 kilo virtual bytes. This is a sensible rule because you can imagine a miner broadcasting transactions such that everyone’s mempool is one transaction and descendants. They publish a block that conflicts with that transaction and they’ve just flushed out everyone’s mempools. They spend a few seconds updating all those entries and now they have an empty mempool and nothing to put in their next block. And this miner just got a head start on the next block. This is a sensible rule in protecting our limited mempool resources. However for Lightning devs out there, perhaps you are aware of this, when anchor outputs were proposed in Lightning the idea was each counterparty would have an output in the commitment transaction that could be immediately spent so that each of them could be attaching a high fee child to fee bump the commitment transaction. That was the background behind anchor outputs. It was a great idea. However, they were thinking “What if one of the counterparties dominates the descendant limit so they broadcast a 100 kilo virtual byte child and now the other counterparty is not able to fee bump?” This is what is known as a type of pinning attack. A pinning attack is a type of censorship attack in which the attacker takes advantage of mempool policies to either censor a transaction from mempools or prevent a transaction that is in the mempool from being mined. In this case they are preventing the commitment transaction from being mined. We’ve defined pinning.

It is now worthy to mention the CPFP carve out exemption which was shoehorned into mempool policy to address this exact attack on Lightning users. It is a bit convoluted if the extra descendant has a maximum of two ancestors in their ancestor limit and it is within 10 kilo virtual bytes etc etc then they get an exemption. It is the cause of a lot of testing bugs and it is an ugly hack. I’m not here to complain but I want to note is that the average Bitcoin node user who maybe doesn’t care about Lightning for whatever reason doesn’t really have a reason to have this in their mempool policy. It might not be entirely incentive compatible is what I’m saying. I’ll come back to this later.

Policy designed for incentive compatibility enables fee bumping

Speaking of fee bumping, I do want to mention some examples of policies that people might like. These fee bumping primitives are all brought to you by mempool policy. The first one is RBF and this comes from a behavior in our mempool where if we see a transaction that conflicts with something in our mempool we’re not just going to say “We don’t want that”. We will actually consider that new transaction and if it pays significantly higher in fees we might replace the one in our mempool. This is good for users because for incentive compatibility it aligns and it allows users to fee bump their transactions. The other one is the mempool is aware of ancestor and descendant packages so when we are including transactions in a block we will go by ancestor fee rate. This allows a child to pay for a parent or CPFP. And when we are evicting transactions from the mempool we are not going to evict something that has a low fee rate if it has a high fee child. Again this is incentive compatible but it also helps users.

I am going to shill my work really quickly. Package RBF is the combination of RBF and CPFP such that you can have a child paying for both of its parents and RBFing the conflicts of its parents. It is pretty cool, you should check it out.

Mempool Policy

Just to summarize what we’ve covered so far. We’ve defined mempool policy as a node’s set of validation rules in addition to consensus that they apply to unconfirmed transactions in order to be accepted into their mempool. We have covered a few different reasons for why we have mempool policy. One of the biggest ones is DoS protection. We are also trying to design a transaction acceptance logic that is incentive compatible and it allows us to upgrade the network’s consensus rules in a safe way. There is a fourth category that I thought I mentioned but we don’t have time today, network best practices or standardness such as the dust limit. That’s the one that I think is more miscellaneous let’s say.

I did title this talk “Transaction Relay Policy”. It is because we are aware of the fact that the vast majority of nodes on the Bitcoin network run Bitcoin Core nodes. The vast majority of people who respond to polls on Twitter have stated that they use the default settings when it comes to mempool policy. This is kind of scary when you are opening a PR for mempool policy changes but it is a good thing in that it allows us to kind of have a set of expectations for whether or not our transactions are going to propagate across the network. It might be called transaction relay policy.

Policies can seem arbitrary/opaque and make transaction relay unpredictable

Now I am going to get into known issues with our default mempool policy in Bitcoin Core. I have been told that a lot of policy seems arbitrary such as the dust limit for example. I want to empathize and that is why I made this meme to show my sympathy. But essentially as an application developer you don’t have a stable or predictable interface for broadcasting your transactions. Especially in Lightning or offchain contracts where you are relying on being able to propagate and confirm a transaction in time to redeem the funds before your counterparty gets away with stealing from you. This can be a huge headache. I am aware of some within the standardness of a transaction itself, the biggest one is the dust limit, and the myriad of convoluted opaque script rules that transactions need to follow. Another category is transactions in evaluation within the context of a mempool. I already mentioned very niche exceptions in ancestor, descendant limit counting. I am also aware of BIP 125 RBF giving a lot of people grief because it can be expensive or sometimes even impossible to fee bump your transactions using RBF. And of course one of the biggest ones is in periods of high transaction volume, you never know when that is going to happen, you are not always able to fee bump your transactions. I put at the bottom the worst one is that every mempool can have different policies. I would say another one is that mempool policy is not well understood by application developers. That is why I’m here.

Commitment Transactions cannot replace one another

I am going to now describe a specific Lightning channel attack. It all stems from the fact that commitment transactions cannot replace each other. In Lightning you have this combination of conditions where you negotiate the fee rate ahead of time, way before you are ever going to broadcast this transaction, because you weren’t planning on broadcasting that transaction. You are also not able to use RBF. If your counterparty is trying to cheat you, or you are doing a unilateral close because they are not responding to you, they are obviously not available to then sign a new commitment transaction with you. But also, because mempools only look at replacement transactions one at a time and your commitment transactions are going to be the same fee rate (because why wouldn’t they be), commitment transactions cannot replace each other. Even if Tx2A has a huge fee rate, Tx1A cannot replace Tx1B. A small note, I am aware that these are not fully encapsulating all of the script logic in Lightning transactions, please bear with me, the revocation keys and stuff I didn’t include. Just imagine these as simplified Lightning transactions.
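For intuition, here is a simplified sketch of the two BIP 125 fee conditions that matter here; it ignores the other replacement rules and the values are illustrative only:

```python
# Simplified sketch of the BIP 125 fee conditions (ignores the other
# replacement rules). Values are illustrative only.
INCREMENTAL_RELAY_FEERATE = 1   # sat/vB, Bitcoin Core default

def can_replace(new_fee, new_vsize, old_fee, old_vsize):
    # The replacement must pay a higher feerate than what it evicts...
    higher_feerate = new_fee / new_vsize > old_fee / old_vsize
    # ...and enough absolute extra fee to pay for relaying itself.
    enough_extra = new_fee >= old_fee + INCREMENTAL_RELAY_FEERATE * new_vsize
    return higher_feerate and enough_extra

# Two commitment transactions negotiated at the same feerate: neither can
# replace the other, no matter how big a fee their children attach, because
# the replacement is evaluated one transaction at a time.
print(can_replace(new_fee=253, new_vsize=253, old_fee=253, old_vsize=253))  # False
```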

Lightning Attacks

We are going to describe an attack between Alice, Bob and Mallory where Alice and Bob have a Lightning channel and Bob and Mallory have a Lightning channel. Alice is going to pay Mallory through Bob because Lightning. If Lightning is working properly either both Bob and Mallory get paid or neither of them get paid. That is how it is supposed to work. When the HTLC is offered from Alice to Bob and from Bob to Mallory each one of them is going to have a transaction in each channel. There is going to be a pair of commitment transactions in each channel. With LN-Penalty we have these asymmetric commitments so we are able to attribute blame for who broadcast the transaction. These are conflicting transactions just to be clear. Tx1A and Tx1B conflict. Tx2B and Tx2M conflict. Only one of them can be broadcast and confirmed onchain. We have constructed this such that Alice is paying Bob and Bob is paying Mallory. Mallory can reveal the preimage to redeem the funds or Bob can get his refund at t5. On the other side Bob can reveal the preimage that he got from Mallory and redeem the funds or Alice can get her refund at t6. Bob has that tiny little buffer timing out between Mallory not paying and Alice redeeming. But spoiler alert, the outcome of this attack is going to be Bob pays Mallory but Alice doesn’t pay Bob. These would be the corresponding transactions where Mallory broadcasts, she got the redemption with the preimage and Alice broadcasts where she got the refund from Bob. It is going to happen such that at t6 Alice gets her refund but Bob was not able to get his refund.

What happens is Mallory broadcasts her commitment transaction with a huge pinning transaction attached to it at t4. These two transactions are in everyone’s mempools. There are two scenarios. Either Bob has a mempool and is watching, so he knows Mallory broadcast these transactions, or he doesn’t. Let’s go over the first scenario first. We require CPFP carve out but like I said before I don’t really know why every mempool would have CPFP carve out implemented as one of their mempool policies. It allows Bob to get his money back in this case, he is able to fee bump the commitment transaction and then he gets his refund. But in this case CPFP carve out is critical to Lightning security. The second case is Bob doesn’t have a mempool. I think this is a more reasonable security assumption for Lightning nodes because it doesn’t make sense to me that Lightning nodes need to have a Bitcoin mempool or to be watching Bitcoin mempools in general. Bob was not watching a mempool and so at t5 he says “Ok Mallory hasn’t revealed the preimage to me and she’s not responding to me so now I am going to unilaterally close. I am going to broadcast my commitment transaction plus a fee bump and I am going to get my refund after the to_self_delay.” The problem is that Mallory’s transactions are already in the mempool, so even when Bob broadcasts he’s like “Why isn’t it confirming onchain?” He’s watching the blockchain, he’s not seeing his transactions confirm and he’s very confused. It is because Mallory has already put this transaction in the mempool and he is not able to replace it. The solution to this is package relay. In this case package relay is an important part of Lightning security.

In both of these cases t5 has elapsed and Bob has not been able to successfully redeem his refund from Mallory. At t6 Alice gets her refund but Bob didn’t. Let’s pretend Alice and Mallory are actually the same person. They have managed to steal this HTLC amount from Bob. Hopefully everyone followed that. As a disclaimer this is easily avoided if the commitment transactions are negotiated with a fee rate that was high enough to just confirm immediately in the next block at t4 or t5. This brings me to my recommendations for L2 and application developers.

Let’s try to be friends?

My recommendation is for the time being lean towards overestimating on fees rather than underestimating. These pinning attacks only work if you are floating at the bottom of the mempool and you are not able to fee bump. Another one is never rely on zero confirmation transactions that you don’t control. What you have in your mempool does not necessarily match what miners have in their mempools, or even the entirety of the rest of the network might have a different transaction in their mempool. So if you don’t have full control over that transaction don’t rely on it if it has no confirmations. Another one is there is a very nice RPC called testmempoolaccept available in Bitcoin Core. That will test both consensus rules and standard Bitcoin Core default policy. Again, because that is the vast majority of policies throughout the network, it should be pretty good for testing. Especially if you are relying on fee bumping working I recommend having different tests with various mempool contents, maybe conflicts, maybe some set of nodes to test your assumptions. Try to do that on every release, maybe on master every once in a while. I don’t want to put too many expectations on you but these are my recommendations for how to make sure that you are not relying on unsound assumptions when it comes to transaction propagation. I would like to invite you to communicate your grievances, whether it is with our standard mempool policy or with the testing utilities not being sufficient for testing your use cases. Please complain and say something. Hopefully this talk has convinced you that while we do have restrictions for a reason we want to support Lightning applications. Please give feedback on proposals when they are sent to the bitcoin-dev mailing list. On the other side I think we all agree that Lightning transactions are Bitcoin transactions so at least in the Bitcoin Core world one big effort is to document our current transaction relay policy and provide a stable testing interface to make sure transactions are passing them. We are also working on various improvements and simplifications of our current mempool policy. Recently we have agreed not to restrict policy without notifying bitcoin-dev first. But of course that doesn’t work unless you guys read bitcoin-dev and then give us your feedback if something is going to harm your application. Together we can get the privacy and scalability and all those wonderful usability things that come with L2 applications without sacrificing security.
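As a rough illustration, something like the following (assuming a running bitcoind with bitcoin-cli on your PATH; the raw transaction hex is a placeholder) lets you check a transaction against your node's policy without broadcasting it:

```python
# Small sketch of checking a transaction against your node's policy before
# broadcasting it. Assumes bitcoind is running and bitcoin-cli is on PATH;
# the raw hex below is a placeholder, not a real transaction.
import json, subprocess

def test_mempool_accept(raw_tx_hex):
    # testmempoolaccept takes a JSON array of raw transaction hex strings and
    # runs consensus plus the node's own policy checks without relaying.
    out = subprocess.run(
        ["bitcoin-cli", "testmempoolaccept", json.dumps([raw_tx_hex])],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)[0]

result = test_mempool_accept("02000000...")   # placeholder hex
if not result["allowed"]:
    print("rejected:", result.get("reject-reason"))
```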

Q&A

Q - With testmempoolaccept you said it was hardcoded to the default policy or is it based on the policy that I’m running on my node?

A - If you are calling testmempoolaccept on your node it will be your node’s policy. But if you say spin up some regtest nodes or you compile bitcoind that will be whatever the defaults are.

Q - It is not the defaults it is whatever my node is running. If I want to test other assumptions I could change the policy on my node and that would be reflected in testmempoolaccept?

A - Yes

Q - If the package relay makes it in would that allow us to safely remove the CPFP carve out in the future without harming people with existing Lightning channels and stuff like that?

A - That is a very good question. I think so but I haven’t completely thought it through. I think so.

Q - One question regarding the Lightning nodes monitoring the mempool or not. You said it was not a proper assumption that a Lightning node would be monitoring the mempool because why should it. But if you are forwarding HTLCs and someone may drop the commitment transaction onchain you may see the secret there. You have to accept HTLCs with the same secret. Why would you wait for that transaction to get confirmed instead of taking that from the mempool? You shouldn’t assume that you are going to receive that in your mempool because you may not but if you see it in the mempool you may be able to act quicker.

A - This is how I’m understanding your question. In a lot of these types of attacks if the honest party is watching the mempool then they are able to mitigate that attack and respond to it. I think it is a difference between having a mempool helps and mempools are required in order for Lightning nodes to be able to secure their transactions. Yes on one hand I do agree. Having a mempool helps. But I don’t think anybody wants a mempool to be a requirement for Lightning security.

Q - Even then it will be a monitoring of the entire mempool of the network which is not possible.

A - Right.

Q - What’s the story on you and Lisa with the death to the mempool thing?

A - I am happy to talk about this offline. Lisa and I had a discussion about it where we disagreed with each other. She tagged me in her post to the bitcoin-dev mailing list. I think people were under the impression that our discussion was us being in agreement when actually we didn’t agree. I think that is where most of the confusion comes from.

Q - Her stance is that mempools are unreliable so we should get rid of them altogether?

A - Mempools as they stand today are not perfect. It is not an extremely stable interface for application devs. I have heard multiple Lightning devs say “For us a much better interface would just be if our company partnered with a miner and we were always able to submit our transactions to them.” For example all lnd nodes or all rust-lightning nodes were able to just always submit to F2Pool or something. That would resolve the headaches around making sure your transactions are standard and you are going to meet policy. I understand why that’s more comfortable for Lightning devs but I feel like in that case it is not “We need a mempool in order for Lightning to be secure” it is “We need a relationship with a specific miner in order for Lightning to be secure”. I don’t think that is censorship resistant, I don’t think that is more private, I don’t think that is the direction that we want Bitcoin and Lightning to go in. That is my own personal opinion. That’s why I came today to talk about this because I understand there are grievances but I think we can work together to make an interface that works.

darosior: At the end of the day it is pretty similar to fee bumping, you have shortcuts and very often they would increase centralization pressure. Once we lose censorship resistance we will realize that it was precious and we should think before we lose it.

Q - …

A - I can’t offer a specific roadmap but I can tell you what is implemented and what is yet to be implemented. All of the mempool validation code is implemented. It is not all merged. That part is already written. The next part is defining the P2P protocol which we get closer to finalizing everyday. Writing out the spec, putting out the BIP, implementing it and then getting it reviewed and merged. That is what is implemented and what is not implemented. You can make your estimate on how long that will take.

Q - While you’ve been working on package relay what have you learned that surprised you? Something that is interesting that was unexpected, something more difficult or more easy than you thought?

A - I am surprised by how fun it is. It is a privilege everyday to wake up and get to think about graph theory and incentive compatibility and DoS protection. It is very fun. I didn’t expect it to be this fun. The next layer of surprise is I am very surprised not everyone is interested in mempool policy.

Q - Do you have some example of DDoS attack vectors that you would introduce that are not super obvious? When you have a mempool package policy.

A - For example our UTXO set is several gigabytes. Some of it is stored on disk and we have layered caches. We want to be very protective over how peers on the peer-to-peer network can influence how we are caching things. That can impact block validation performance. For example if based on how we limit the size of one transaction we can expect churn in our UTXO caching system. We don’t want that to be way larger if we are validating 25 transactions at a time or 2 transactions at a time or n transactions at a time. That would be an example of a DoS attack. Same thing with if there are exponentially more signatures that can be present in a package compared to an individual transaction, we don’t want to be expending exponentially more resources validating a package compared to a transaction.

Q - The Lightning case is a very specific case where you have a package of two transactions. Is it just trying to get that one right first and then think about the general case later?

A - Yes. The single child, single parent versus single child, multi parent. I think most of it was a trade-off between complexity with implementation versus how useful it would be for both L1 and L2 protocols. Given the fact that we know there is a lot of batched fee bumping that happens even on the onchain Bitcoin level it has a very high usefulness score in terms of multi parent, single child. When we were looking at single parent, single child versus multi parent, single child the complexity is not that much worse. Weighing those two things against each other, multi parent, single child was better on those two things.

darosior: It seems weird to tailor the mempool rules to a specific use case, that is Lightning. The criticism of carve outs, which I kind of agree with, is that multi parent fee bumping with CPFP could be useful for other applications as well, and is useful for other applications.

Q - I agree with you that mempool is a super interesting research area. Throughout the entire blockchain space we have seen things blow up because of mempool policy, not necessarily in Bitcoin but in other blockchain ecosystems. Do you spend much time trying to look through what other blockchains have tried and either failed at or even succeeded at that we could bring over to the Bitcoin mempool policy?

A - No I haven’t spent any time. Not because I don’t think there’s good ideas out there. I’m not the only person who has ever thought about a mempool and Bitcoin is not the only place that has ever looked at the mempool. I have noted that a lot of the problems that we have in mempool are specific to the fact that we are a UTXO based model. It is nice because we don’t have the problem where we aren’t able to enforce ordering of transactions. It is trivial for us because parents need to be before children. But it does mean that when we are creating a mempool 75 percent of the mempool dynamic memory usage is allocated for the metadata and not the transactions themselves. We need to be tracking packages of transactions as a whole rather than treating them as atomic units.

Q - That is in a package relay world or that is in Bitcoin Core?

A - In the Bitcoin world.

Q - Today?

A - Yes, 75 percent. It is always surprising to people. I think it is mostly because of the fact that we have a UTXO based model. I don’t think it is a bad thing, this is something that we have to deal with that other people don’t have to deal with. Like I said before there are advantages of this that we have that other people don’t.

Q - Where can I find out more about package relay?

A - Brink has recently released a podcast called The Bitcoin Development Podcast. We have recorded the first 5 episodes and it is all about mempool policy, pinning attacks, package relay and mining improvements. I would recommend going to brink.dev/podcast to learn more.

Q - Could you talk a little bit about the dust limit and mempool acceptance? Lowering the dust limit for mempool policy, is that on the roadmap? Should we have it? Should we not have it? Are there different clients that have different thoughts about what is the current dust limit? I think I had some channel closings because there were some disagreements between Lightning nodes.

A - For those who are not aware, dust refers to uneconomical outputs. If you have an output with say 20 satoshis in it, even at the network rate of 1 sat per vbyte you cannot spend this output in an economical way because the input size to provide the script to spend that output is going to require more than 20 satoshis to pay for it. In Bitcoin Core mempool policy we ban any transaction that has dust in it. This is not just to gatekeep, it is the idea that when you have dust in the global UTXO set probably nobody is going to spend it but everyone on the network has to keep it. Because scalability is essentially limited by the UTXO set size it is an arbitrary choice by the Bitcoin Core developers to say we don’t allow dust to propagate through Bitcoin Core nodes. But it is an example of that best practices thing that I talked about earlier in the slide, where it is consensus valid and doesn’t fit neatly into the other categories like DoS protection, but in a form it is a DoS. You are asking the entire network to store dust in their UTXO set. It is also an example of this inherent tension between application devs and protocol devs when it comes to mempool policy. For example in Lightning if you have outputs such as HTLCs in flight or the channel balance such that one of those outputs is dust, it is not going to propagate. So I think typically in Lightning if you have an output that is going to be dust you drop it to fees. That is considered best practice. But this is an inconvenient thing that they have to deal with when they are implementing Lightning. Again I am trying to be as empathetic as possible, I understand why this sucks. But there are reasons to have that dust limit. It is a place of tension there. I am not saying it is never going to change, I’m just saying those are the arguments.
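As a rough sketch of where the numbers come from (the 3 sat/vB dust relay feerate is Bitcoin Core's default, and the size figures are approximations):

```python
# Rough sketch of how Bitcoin Core's default dust threshold comes about
# (sizes are approximate; the dust relay feerate default is 3 sat/vB).
DUST_RELAY_FEERATE = 3  # sat/vB

def dust_threshold(output_size_vb, input_size_vb):
    # An output is dust if the cost of spending it, priced at the dust relay
    # feerate over the output plus the input needed to spend it, exceeds a
    # meaningful fraction of its value.
    return (output_size_vb + input_size_vb) * DUST_RELAY_FEERATE

print(dust_threshold(34, 148))   # ~546 sats for a P2PKH output
print(dust_threshold(31, 67))    # ~294 sats for a P2WPKH output
```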

Antoine Riard

Date: February 6, 2020

Transcript By: Michael Folkson

Tags: Taproot, Lightning, Ptlc

Category: Conference

Media: https://www.advancingbitcoin.com/video/a-schnorr-taprooted-lightning,11/

Slides: https://www.dropbox.com/s/9vs54e9bqf317u0/Schnorr-Taproot%27ed-LN.pdf

Intro

Today Schnorr and Taproot for Lightning, it is a really exciting topic.

Lightning architecture

The Lightning architecture for those who are not familiar with it. You have the blockchain as the underlying layer. On top of it you are going to build a channel, you have HTLCs and people are going to send onions to you. If you want to be paid you are going to send an invoice to the sender.

What should we design Lightning for?

What should we design Lightning for? When we are doing the Lightning design spec, we are pouring a lot of brainpower into it and everyone has a different view of what Lightning should be. Should Lightning be a fast payment system? Should Lightning be optimized for microtransactions? Is Lightning really cool because you get instant finality of your transactions? Is privacy the reason we are doing Lightning? Lightning may have better privacy properties. When we are talking about privacy for Lightning it would be better to have the privacy of the base layer in mind. On the base layer you are going to broadcast transactions. There is an amount, it is not encrypted. There is an address, it is not encrypted. You are going to link inputs and outputs in the UTXO graph.

What’s the privacy on the base layer?

Privacy for the base layer is not that great today. Lightning may be a way to solve privacy.

What’s the privacy on Lightning?

But on Lightning there is a payment path. Lightning nodes have pubkeys tied to them and that is an identity vector. With HTLCs you may reuse a hash, there are a lot of different privacy vectors. Privacy is I think really important if you want censorship resistant money.

Why should we focus on privacy?

“Cryptography rearranges power, it configures who can do what, from what” The Moral Character of Cryptographic Work (Rogaway)

If you don’t have privacy I can bribe or blackmail you because I know how you are using this tech. That is a huge vector of attack. There is this awesome paper by Phillip Rogaway. I encourage everyone to read it.

EC-Schnorr: efficient signature scheme

Keypair = (x, P) with P = xG and ephemeral keypair (k, R) with R = kG

Message hash e = hash(R | m) and signature = (R, s) with s = k + ex

Verification: sG = R + eP

You can see Schnorr and Taproot as a privacy boost. Modifying the consensus base layer is a lot of work and there are a lot of people involved, so there has to be a good motivation for doing this. Schnorr is a replacement for ECDSA. Originally Satoshi didn’t get Schnorr into Bitcoin because there were some patent issues. Schnorr is really awesome because there is linearity in the verification equation of Schnorr. Linearity means it is easy to sum up components. It is easy to sum up signatures, it is easy to sum up pubkeys and it is easy to sum up nonces between different parties.
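To illustrate the linearity, here is a toy sketch in Python. It uses an insecure stand-in group (integers modulo a prime, with "xG" just meaning x*G mod p) purely to show the algebra; it is not real elliptic curve Schnorr:

```python
# Toy demonstration of Schnorr linearity. Insecure stand-in group: "points"
# are integers modulo a prime and xG is just x*G mod p. Algebra only.
import hashlib, random

p = 2**31 - 1          # toy group order (NOT secp256k1)
G = 7                  # toy generator

def H(*args):
    data = b"".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def keygen():
    x = random.randrange(1, p)
    return x, (x * G) % p              # keypair (x, P) with P = xG

def sign(x, m):
    k = random.randrange(1, p)
    R = (k * G) % p
    e = H(R, m)                        # e = hash(R | m), as on the slide
    return R, (k + e * x) % p          # signature (R, s) with s = k + ex

def verify(P, m, R, s):
    e = H(R, m)
    return (s * G) % p == (R + e * P) % p   # sG = R + eP

x, P = keygen()
R, s = sign(x, b"hello")
assert verify(P, b"hello", R, s)

# Linearity: two signers sharing the same challenge can sum their partial
# signatures, and the sum verifies against the summed keys and nonces.
x1, P1 = keygen(); x2, P2 = keygen()
k1, k2 = random.randrange(1, p), random.randrange(1, p)
R1, R2 = (k1 * G) % p, (k2 * G) % p
Ragg, Pagg = (R1 + R2) % p, (P1 + P2) % p
e = H(Ragg, b"hello")
s1, s2 = (k1 + e * x1) % p, (k2 + e * x2) % p
assert verify(Pagg, b"hello", Ragg, (s1 + s2) % p)
```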

Taproot: privacy preserving Script Tree

Taproot pubkey: Q = P + tG with Q and P curve points

t is the root of a Merkle tree where each leaf is a hash of a script

Spending witness provides Merkle proof and script

The other big new consensus upgrade proposal, nothing has yet been adopted by the network, is Taproot: the idea of building a Merkle tree where every leaf of the Merkle tree is going to be a script. You are going to commit the root of the Merkle tree inside the pubkey. That is cool. Now when you are going to spend a Taproot output you have two options. The first option is to use a keypath spend. The other option is to reveal one of the scripts plus a Merkle proof. This Merkle proof lets the network verify that this script was committed to in the initial commitment of the scriptPubKey, the pubkey of the output being spent.
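Here is a simplified sketch of the script-tree commitment and Merkle proof idea. It uses plain SHA256 rather than the tagged hashes BIP 341 actually specifies, and it leaves out the Q = P + tG key tweak:

```python
# Simplified sketch of the "tree of scripts" idea (plain SHA256 instead of
# BIP 341's tagged hashes, and the P + tG key tweak is omitted).
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def branch(a, b):
    return h(min(a, b) + max(a, b))      # order-independent inner node

# Two leaf scripts the parties could fall back to (placeholder contents).
leaf_a = h(b"script: 2-of-2 multisig path")
leaf_b = h(b"script: timeout refund path")
root   = branch(leaf_a, leaf_b)          # committed to inside the output key

# To spend via leaf_a, the witness reveals the script plus the Merkle proof
# (here just the sibling hash); a verifier recomputes the root.
def verify_proof(leaf, proof, expected_root):
    node = leaf
    for sibling in proof:
        node = branch(node, sibling)
    return node == expected_root

assert verify_proof(leaf_a, [leaf_b], root)
```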

New consensus properties

What are the new consensus properties of this upgrade? Linearity is the one we are going to use for this talk. With Taproot we have cheap complex scripts. Another advantage is under the Taproot assumption, if everyone agrees, you don’t have a disagreement, they can spend a Taproot output in a cooperative way so the script isn’t seen by any external observer.

More Schnorr-Taproot resources

There are BIP numbers for Schnorr, Taproot and Tapscript. I encourage you to read the BIPs. There are also more resources on AJ Towns’ GitHub repo.

Channel: “Plaintext” closing

P2WSH output: 0 <32-byte-hash>

Witness script: 2 <pubkey1> <pubkey2> 2 OP_CHECKMULTISIG

Right now you are going to broadcast a funding transaction onchain. This funding transaction is going to be a pay-to-witness-script-hash (P2WSH). When you close the channel every peer of the network is going to see that was a 2-of-2. By revealing the script you are going to leak that you were using Lightning. How can we solve this?

Schnorr Taproot -Channel: “Discreet” closing

Taproot output: 1 <32-byte-pubkey>

Witness script: <MuSig-sig>

We can embed the script in a Taproot output. This way if both parties agree to do a mutual closing you are not going to be able to disassociate this Lightning funding Taproot output from another Taproot output.

Channel: Worst-case closing

Going further, even if we disagree ideally we would like the channel to not be seen by any party. The blockchain cares about faithful execution of the contract but ideally you shouldn’t learn about the amounts because amounts are part of the contract.

Schnorr Taproot -Channel: Pooled Commitment

I think you can go further with this idea. You can encode the commitment transaction in its own Taptree and every Tapscript would be a HTLC. This Tapscript would spend to a 2nd stage transaction. This 2nd stage transaction would have two outputs. One output paying to the HTLC and the other one paying back to the Taptree minus the Tapscript spend. I think maybe SIGHASH_NOINPUT would be a better fit for this construction but there is a way to make the channel discreet. The blockchain shouldn’t learn about you doing some kind of offchain construction.

HTLC: Payment hash correlation

Every HTLC along the payment path reuses the same script hashlock, i.e.

OP_HASH160 <RIPEMD160(payment_hash)> OP_EQUALVERIFY

Going further right now we are using a payment hash. Any HTLC part of the payment path is reusing the same hash. If you are a Chainalysis company and you are running spy nodes on the network or you are running big processing nodes and these nodes are part of the same payment path they are going to be able to guess “graph nearness” of the sender and receiver. That is really bad because right now payment paths are quite short given the current topology. Ideally we would like to use a different hashlock for every hop.

Schnorr-Taproot: Point Time Locked Contract

partial_sig = sG = R + H(P | R | m)P

adaptor_sig = s’G = T + R + H(P | R | m)P with T the nonce tweak

secret t = adaptor_sig - partial_sig

There is this cool idea of scriptless scripts by Andrew Poelstra who was speaking earlier today. With a scriptless script you are going to tweak the nonce pubkey with a secret. When one of the parties is ready to claim the output she has to reveal the secret to unlock it.
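A toy sketch of the adaptor signature algebra, using the same insecure stand-in group as the Schnorr sketch earlier (integers modulo a prime, for illustration only):

```python
# Toy adaptor-signature sketch in the same insecure stand-in group as above
# (integers modulo a prime; for illustrating the algebra only).
import hashlib, random

p, G = 2**31 - 1, 7

def H(*args):
    data = b"".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

x = random.randrange(1, p); P = x * G % p        # signer's key
k = random.randrange(1, p); R = k * G % p        # nonce
t = random.randrange(1, p); T = t * G % p        # the secret and its point

e = H(P, R, b"claim PTLC")
partial_sig = (k + e * x) % p                    # ordinary Schnorr s
adaptor_sig = (t + k + e * x) % p                # s' = s offset by the secret

# Whoever sees both the adaptor signature and the completed signature learns
# the secret: t = s' - s. That is what makes the exchange atomic.
assert (adaptor_sig - partial_sig) % p == t
assert adaptor_sig * G % p == (T + R + e * P) % p   # s'G = T + R + eP
```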

PTLC protocol: setup phase

(See diagram in slides)

The protocol works like this. You are going to build an aggregated pubkey of 2-of-2. One of the parties is going to submit a modified nonce pubkey. Alice is going to send a partial sig to Bob. Bob is going to send his partial sig… When Bob is ready to claim the output he has to reveal the secret. That is a way to atomically exchange funds against a secret. You can reuse this primitive to build something like Lightning payment paths. PTLCs, point timelocked contracts, should be the replacement for HTLCs. There will be three phases. In the first phase, setup, you send a curve point to every part of the payment path.

PTLC protocol: update phase

(See diagram in slides)

The second phase is the update phase. You are going to exchange partial sigs between every hop of the payment path.

PTLC protocol: settlement phase

(See diagram in slides)

The last phase is the settlement one. Dave is going to reveal the secret that lets Carol learn about her own secret which is going to let Bob learn about his own secret. Bob is going to claim the PTLC from Alice. Alice is going to learn the final secret. This final secret can be reused to solve other issues.

Invoices: proof-of-payment

Right now when a payment succeeds on the network you are going to learn the preimage. The preimage can be used as a proof of payment. But it doesn’t tell you who is the original sender. Every hop of the payment path can claim in front of a judge “I was the guy who made the payment. I have the preimage.” If you are able to also submit the invoice you can’t associate between parts of the payment path.

Schnorr Taproot Invoices: proof-of-payer

Reusing the z value (zG has been signed by the receiver) of the PTLC protocol, you will be able to have this unique secret value. This unique secret value is only going to be learned by the original sender. This could be cool because you could use this to trigger a second stage contract or some kind of consumer protection escrow, something like this.

Onion-packet: simple payment or MPP

MPP has been presented by Joost. Right now MPP is cool to solve liquidity issues but it may be a weakness for privacy because you may be able to do payment path intersection between the different MPP paths used, if a spying node is part of all the MPP payment paths. Ideally you want to use a different value for each payment path.

Schnorr Taproot onion packet: Discreet Log AMP

There is the idea of using the same cryptographic trick of Schnorr linearity. Before setting up the payment path Alice, the sender, will offset the curve point received from Dave, the last hop of the payment path, by her own secret. You are going to send shards of the secret through every onion of the atomic multi-path payment. Only when all of them are locked at the last hop is it going to be possible to combine the shard secrets and claim the payment.
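As a toy illustration of the shard idea (plain modular arithmetic standing in for the curve point offsets the real construction uses):

```python
# Toy sketch of splitting the sender's secret into per-path shards that only
# recombine once every partial payment has arrived (plain modular arithmetic
# stands in for the curve point offsets of the real construction).
import random

N = 2**31 - 1          # toy modulus, illustration only

def split(secret, n_paths):
    shards = [random.randrange(N) for _ in range(n_paths - 1)]
    shards.append((secret - sum(shards)) % N)
    return shards

secret = random.randrange(N)
shards = split(secret, 3)          # one shard rides in each onion
assert sum(shards) % N == secret   # the receiver needs all of them
```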

HTLC: stuck payments

There is another issue right now which is being discussed on the mailing list. You send a payment and one of the hops on the payment path is offline or not available. To cancel the payment and send another one you first have to wait until the HTLC timelock expires for the funds to come back to the original sender. Ideally you want a way for the sender to cancel the payment without waiting.

Schnorr Taproot HTLC: cancellable payments

You can do this again thanks to the PTLC construction. The last secret is only going to be revealed by Alice when Dave, the receiver of the funds, acknowledges that he has received every payment packet. If you do this it is really cool because it may allow you to build higher level protocols, some kind of forward error correction. The idea is you are going to send more packets than needed to fulfill the payment. Thanks to this it is going to be better UX because if one of the packets fails you still have more packets to pay the payee.

HTLC: simple hash-time-locked-contract

The last thing that we can also build thanks to Schnorr… Right now HTLCs are pretty cool but they are pretty simple. There is only one timelock, there is only one hash. Maybe people are interested to have different hashes. One of the hashes is submitted by an escrow. It may be an arbiter in any contract. I am Alice, I am interested to get a shipment of some goods. I am funding a payment today but I never received my goods. You may be able to insert an escrow into your HTLC. By doing this it would mean every hop along the payment path has to support the advanced HTLC. Worse, it is going to learn the semantics of the contract.

Schnorr Taproot: end-to-end payment point contracts

What you can do instead of this is have payment point constructions. The idea is you still use scriptless scripts but you add other primitives thanks to key aggregation or ECDH. You can also do DLCs which is just a curve point. We may be able to build a wider class of HTLC packets or conditional payment packets. I foresee in a few years people doing futures or options on top of Lightning. This class of payments is going to be confidential. Only the endpoints are going to learn about this.

Protocol-side, no silver bullet, a lot of tricks

Schnorr and Taproot, it is not a silver bullet. There are a lot of other leaks. For example when you are doing channel announcements on Lightning right now you are doxing yourself by linking a Lightning pubkey identity to an onchain UTXO. In a few years people are going to wake up and say “This Lightning pubkey was linked to a domain name.” Then you will be able to link a domain name to an onchain UTXO which is really bad. Even if we do PTLC for the payment path we still have issues with the CLTV delta which is the same on every hop. Also the amount stays the same minus the Lightning fees for every hop. Ideally we may want to implement further tricks like random CLTV delta routing algorithms or pad the payment path to always use 10 hops or 20 hops even if it is costlier. That may be better for privacy. Right now people are working on dual funded channels for Lightning. We may do Coinjoin for every funding transaction which would be really cool. Schnorr and Taproot are going to take more than one year to get integrated into Lightning. This will be only the start for building really consistent privacy for Lightning.

Application-side, building private first apps

Privacy is going to be the default for Lightning, I hope so. If you are going to build applications on top of this you should have this holistic approach and think “I have this Lightning protocol which provides me a lot of privacy. I will try to not break privacy for my application users.” You should think about integrating with Tor, identityless login or identityless tokens, that kind of stuff. I think that is a challenge for application developers building on top of Lightning but I think it is worth it. I am excited, Schnorr and Taproot have been proposed as BIPs and should be soft forked into the protocol if the community supports it. If you are interested to contribute to Lightning you are really welcome.

Thanks to Chaincode

Thanks to Chaincode for supporting this work. Thanks to Advancing Bitcoin.

Q&A

Q - How do you see Taproot being implemented in Lightning? Is it still Lightning?

A - There are multiple ways. First you can integrate Taproot for the funding output. Then you can use Taproot for the HTLC output part of the commitment transaction. You can also use Taproot for the output of the second stage HTLC transaction. There are at least multiple outputs that can be concerned with Lightning. I think the first is to fix the funding output because if you do this we will benefit from the Taproot assumption. Using Taproot for commitment transactions you are still going to leak that you are using Lightning. Maybe we could use the pool construction I was talking about but that is harder stuff. I would chase this one first.

Q - You said Lightning has privacy guarantees on its protocol but developers should make sure they don’t ruin the privacy guarantees on top of the base Lightning protocol. Do you see a tendency that applications are taking shortcuts on Lightning and ruining the privacy?

A - Yes. Right now there is this idea of trampoline routing which is maybe great for user experience but on the privacy side it is broken. What gives us a lot of privacy in Lightning is source routing. Going to trampoline routing means the person who does the trampoline routing for you is going to learn who you are if you are using one hop and worse is going to know who you are sending funds to. There is trampoline routing, if you are not using privacy preserving Lightning clients… Nobody has done a real privacy study on Lightning clients. Neutrino, bloom filters, no one has done real research. They are not great, there are privacy leaks if you are using them. There are Lightning privacy issues and there are base layer privacy issues. If you are building an application you should have all of them in mind. It is really hard. Using the node pubkey I don’t think is great. I would like rendez-vous routing to be done on Lightning to avoid announcing my pubkey, having my invoice tied to my pubkey and my pubkey being part of Lightning. And channel announcement of course. I hope at some point we have some kind of proof of ownership so I can prove I own this channel without revealing which UTXO I own.

Kalle Alm

Media: https://www.advancingbitcoin.com/video/signet-in-practice-integrating-signet-into-your-software,5/

Slides: https://www.dropbox.com/s/6fqwhx7ugr3ppsg/Signet%20Integration%20V2.pdf

BIP 325: https://github.com/bitcoin/bips/blob/master/bip-0325.mediawiki

Signet on Bitcoin Wiki: https://en.bitcoin.it/wiki/Signet

Bitcoin dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html

Bitcoin Core PR 16411 (closed): https://github.com/bitcoin/bitcoin/pull/16411

Bitcoin Core PR 18267 (open): https://github.com/bitcoin/bitcoin/pull/18267

Intro

I am going to talk about Signet. Do you guys know what Signet is? A few people know. I will explain it briefly. I have an elevator pitch, I have three actually depending on the height of the elevator. Basically Signet is testnet except all the broken parts are removed. There is no real mining but who needs mining in a test network? We don’t need decentralization in a test network so why bother? If you are a miner testnet is great for you. If you are not a miner, if you are a software developer testnet is broken. It doesn’t work at all. Signet integration, I will give examples of integrating Signet into existing software. This is the logo that one guy in our company did for Signet. It looks like a keyhole, it is pretty nice.

Agenda

My agenda looks like this. There is an elevator pitch like I mentioned, there are three of them. I will go through some integration examples and then I will talk about testing features. Testing features is where this system actually shines. It can be completely integrated into the pull request framework, the way that we do features today, where people can have a dedicated network for a specific feature. As soon as you compile their pull request you will automatically be on that network. This is pretty cool.

Elevator pitch

My elevator pitch for the tiny elevator is Signet good, testnet bad. Two seconds. The five second version is: for Bitcoin testing beyond regtest, Signet is good and testnet is bad. If you know what regtest is, raise your hand. Half of you know what it is so you might actually get something out of this. The 30 second version, tall building, is “For Bitcoin testing beyond regtest Signet is a better alternative than testnet because testnet is unreliable and broken, partially because it has a skewed incentive model (testnet coins = 0, mining testnet is not free).” Testnet is unreliable and broken. You will have no blocks for a long period of time and then you will suddenly have 100,000 blocks. You will have re-orgs and all these things going on. The testnet coins that people are mining on the testnet network, I don’t even know why they are mining them but they are mining them. They are mining them and they sell them at zero dollars. What is the incentive here? There is no incentive. They are just doing it, I don’t know what they are doing here. Again note we don’t need decentralization in a test network. No government is going to come in and say “Give me those testnet coins.” If they do you can just say “Ok I will give you them” and then they walk away.

Steps to integrate Signet testing

I am going to talk about integrating it. It is going to be code on the screen. If you are allergic to code I suggest you take out your phone and play with it for a while. I am going to show code. The first code is c-lightning. c-lightning is using bitcoin-cli, the tool inside of the Bitcoin Core system, to do the Lightning stuff. I am also going to talk about rust-lightning which is a different implementation. It does direct peer-to-peer communication. The implementation is different and much closer to the Bitcoin Core version of Signet.

First what we need to do is add chain parameters. Testnet has its set of chain parameters and mainnet has its set of chain parameters. Then we have to add the option. That’s it. We don’t really have to do a lot here. With c-lightning there is a BIP 32 version, it is four bytes. I don’t really have to say anymore about that.

c-lightning Signet PR

https://github.com/ElementsProject/lightning/pull/2816

These are the chain parameters. If you go through here you are going to see that nothing here is actually hard. Your four year old niece could do this and she would snort at you for giving her such a simple task. You have the network_name, you have the bip173_name, that is bech32 for you. Signet addresses start with sb1 instead of bc1. You have genesis_blockhash and rpc_port and the cli_args which is -signet to run it. This is all you have to do. If you have software that runs on testnet and mainnet you just add another one of these and then you are good. Or preferably you throw out the testnet one and replace it with this. That is up to you. If you want to use testnet I am not going to stop you.
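To make the shape of that parameter set concrete, here is a minimal sketch in Python of the fields just mentioned (the real c-lightning code is a C struct; the values below are illustrative placeholders rather than the exact ones from the PR):

# Sketch of a Signet chainparams entry; placeholder values only.
signet_chainparams = {
    "network_name": "signet",
    "bip173_name": "sb",                                  # bech32 HRP, so addresses start with sb1
    "genesis_blockhash": "<signet genesis block hash>",   # placeholder
    "rpc_port": 38332,                                    # hypothetical; check the PR for the real value
    "cli_args": ["-signet"],                              # what gets passed to bitcoin-cli
}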

Some doc stuff. Some option stuff; lightningd is C code so it is verbose. In the wallet there is --network= followed by one of a number of strings. I actually wrote the chainparams_get_network_names helper function because this was all hardcoded before. That is c-lightning. That is all that is required for c-lightning. After doing this I could run a Lightning node on c-lightning using Signet. Not very hard.

rust-bitcoin PR 291 (not yet merged)

https://github.com/rust-bitcoin/rust-bitcoin/pull/291

rust-bitcoin is a little harder. This is for the genesis block. In rust-bitcoin it derives the genesis block including the coinbase transaction inside of the genesis block. There is some stuff here. You see there is an OP_PUSHNUM_1, a pubkey, another OP_PUSHNUM_1 and an OP_CHECKMULTISIG. Also it is creating new txdata, the hash and Merkle root, and then it derives the block from there. This can be tricky. You have to compare the hashes you get out for the different things. But it is not hard. If you have the code to do this on mainnet you can do it no problem. Then there are the chain parameters split out, including the minimum proof of work value. These are the chain parameters themselves. If we think of the c-lightning one, we have the network name, we have the heights for the different activations: BIP 34, BIP 65 (CHECKLOCKTIMEVERIFY), BIP 66, CHECKSEQUENCEVERIFY. You have the pow_target_spacing. The only difference between mainnet and Signet is that the activation heights are obviously lower here. They all activate at the beginning. When you start Signet you immediately have SegWit and everything enabled and running. We have some constants, we have some assertions in tests and we have some more assertions in tests. We have versions for the legacy address stuff. We don’t really use it a lot. Back in the day Bitcoin had 1… for P2PKH and 3… for P2SH. We have the bech32 names. If someone asks from a string “What is this address?” and it starts with sb then rust-bitcoin will say this is a Signet address. Or what is this string whose version byte is 125? It is a Signet pay-to-pubkey-hash address. These are helper methods to do that, nothing really complicated here. This is displaying stuff for extended public keys. This is the private key version. There are private keys for Signet and private keys for mainnet. If you try to import a mainnet private key into Signet it is going to fail because the version byte at the start is wrong. That is rust-bitcoin. There are some complications with the genesis generation that I didn’t mention here but that is all you need to do with rust-bitcoin. I don’t think you’ll ever do that unless you are writing a full node. If you are writing a full node you may have a little more work to do. The c-lightning example is probably going to be what you are looking at to get support for Signet.

Feature testing dedicated networks

I’ll take a moment just to mention that you can already get coins from the command line and you can use them to test things. You can start up your own Signet if you want to. You are not going to risk a miner coming in and blowing your chain away. You can have a global testing network without worrying about anything. That is where I hope Signet is going to accelerate development. I will show real quick one example of those custom Signets that I am talking about. There is a default one that is using the same consensus rules as mainnet. There are custom ones, one that has Taproot, Tapscript. Eventually also CHECKTEMPLATEVERIFY probably, we’ll see.

What you need to do when you make your own Signet is you tweak the default network and you make that a commit in your pull request to whatever software you are doing and then you are done. If you are doing a soft fork you also enable the soft fork inside of the Signet chain. For example Taproot as a soft fork exists inside of Bitcoin Core but it is not activated. By inside Bitcoin Core I mean the work in progress pull request for Bitcoin Core. Bitcoin Core does not have it yet. If you do that and someone grabs your pull request, compiles it and starts running it with the Signet flag they are going to be on your network. They are going to get blocks from you and they can start using your features. That all works out of the box in this case.
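As a rough sketch of what pointing a node at a custom Signet looks like, using the -signetchallenge and -signetseednode options that exist in Bitcoin Core now that BIP 325 has been merged (at the time of this talk they lived in the open PR); the challenge script and seed node below are hypothetical placeholders:

import subprocess

# Hypothetical 1-of-1 multisig block challenge and seed node; replace with your own.
challenge_hex = "5121<33-byte compressed pubkey hex>51ae"
subprocess.run([
    "bitcoind",
    "-signet",
    f"-signetchallenge={challenge_hex}",
    "-signetseednode=signet.example.org:38333",
])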

Example: signet-taproot network

This is an example of me adding a commit that I mentioned to the Signet pull request. I am removing these parts up here and I am adding this. I am saying “Using default taproot signet network” instead, to make it easy for the user to realize which network they are on. Then a different block challenge and a different seed. That is all I’m doing. Like I said I am enabling Taproot also.

We will go through the process of creating this Signet custom network tomorrow if you are going to the workshop tomorrow morning. We can also do a custom one if you have other ideas you want to try. I hope to see you there. If you have questions I will take them now or I will take them during the time we are here. Thank you for listening.

Q&A

Q - How easy is it to use Signet to do offline testing?

A - I know Stepan Snigirev has done hardware wallet work on Signet where he derives everything offline and then eventually he gets a signature for something. Then he puts it online. You can do stuff offline for sure.

Q - You said you could use Signet to write your own network. There are some networks out there and you can create your own. What is the difference between that and regtest?

A - The difference is that with regtest if you try to use it with other people you are in the situation where someone could come in and blow your entire chain away. Regtest is great for local testing. But as soon as you have people connected then Signet is superior.

Q - You said it is possible to get Signet coins via the command line as any kind of Signet node. Do you use the same RPC call as regtest?

A - It is a shell script that connects to a faucet that I am running that will send you coins. You can integrate it into your test system. If you have a global test system with your software you can get coins, do this, do that, burn the coins, whatever you want to do. There are rate limits on that faucet but you could also set up your own faucet easily. There is code to do faucets and explorers. Eventually you could just use Docker containers to set up a Signet node running over Tor, or a Signet issuer, or whatever you want to do.

Q - Where I can find information about Signet to make more sense of it? You have a recommended way to go to find more?

A - On the Bitcoin wiki there is a Signet article. You should check that out.

Q - Just to clarify the examples you went through, you integrate Signet into rust-bitcoin?

A - Yes. It has not been merged yet. The c-lightning one has been merged.

Q - The integration with c-lightning, you are using Bitcoin Core Signet with c-lightning?

A - Yes. Eventually it is just going to be Bitcoin Core because I hope the PR is merged. But right now the Signet PR has not been merged yet. Right now you have to use the custom Signet. I hope in Bitcoin Core 0.20 that Signet will be in there. I will try to make it be in there. Then you could just use the regular bitcoind -signet and it should work.

Q - If you are running your own version of Signet you don’t have to care about mining whatsoever?

A - Signet also has proof of work. It also has a miner. But the thing is you have to have the private key to the challenge for the block. For every block that is mined there is a signature inside of the block. That signature has to be valid. The only people who can mine on this network are the people who have the private key. Normally one person or two people. Literally what I do right now with the default network is I sleep for 9 minutes and then I mine for 1 minute to hit close to 10 minutes per block. The proof of work difficulty is really, really low.
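A minimal sketch of that mining schedule in Python, assuming a hypothetical mine_one_signet_block() helper (for example a script that builds a block template, signs it with the challenge key and grinds the very low difficulty proof of work):

import time

def mine_one_signet_block():
    # Hypothetical helper: build a block, sign it with the challenge key,
    # grind the low-difficulty proof of work and submit it.
    raise NotImplementedError

# Sleep roughly 9 minutes, then spend roughly 1 minute mining,
# for close to one block every 10 minutes.
while True:
    time.sleep(9 * 60)
    mine_one_signet_block()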

\ No newline at end of file diff --git a/advancing-bitcoin/2020/index.xml b/advancing-bitcoin/2020/index.xml index 1574216797..7ea3f86046 100644 --- a/advancing-bitcoin/2020/index.xml +++ b/advancing-bitcoin/2020/index.xml @@ -21,7 +21,7 @@ Intro Hi everyone. Some of you were at my Bitcoin dev presentation about Miniscr Descriptors It is very nice having this scheduled immediately after Andrew Chow’s talk about output descriptors because there is a good transition here.Signet Integrationhttps://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Thu, 06 Feb 2020 00:00:00 +0000https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Slides: https://www.dropbox.com/s/6fqwhx7ugr3ppsg/Signet%20Integration%20V2.pdf BIP 325: https://github.com/bitcoin/bips/blob/master/bip-0325.mediawiki Signet on Bitcoin Wiki: https://en.bitcoin.it/wiki/Signet -Bitcoin dev mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Bitcoin dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html Bitcoin Core PR 16411 (closed): https://github.com/bitcoin/bitcoin/pull/16411 Bitcoin Core PR 18267 (open): https://github.com/bitcoin/bitcoin/pull/18267 Intro I am going to talk about Signet. Do you guys know what Signet is? A few people know. I will explain it briefly. I have an elevator pitch, I have three actually depending on the height of the elevator. Basically Signet is testnet except all the broken parts are removed. \ No newline at end of file diff --git a/advancing-bitcoin/2022/2022-03-03-michael-folkson-taproot-update/index.html b/advancing-bitcoin/2022/2022-03-03-michael-folkson-taproot-update/index.html index 1922a5364b..43ee16fe7c 100644 --- a/advancing-bitcoin/2022/2022-03-03-michael-folkson-taproot-update/index.html +++ b/advancing-bitcoin/2022/2022-03-03-michael-folkson-taproot-update/index.html @@ -11,7 +11,7 @@ Michael Folkson

Date: March 3, 2022

Transcript By: Michael Folkson

Tags: Taproot

Category: Conference

Media: -https://www.youtube.com/watch?v=8RNYhwEQKxM

Topic: State of the Taproot Address 2022

Slides: https://www.dropbox.com/s/l31cy3xkw0zi6aq/Advancing%20Bitcoin%20presentation%20-%20Michael%20Folkson.pdf?dl=0

Intro

Ok, so good morning. So earlier this week there was the State of the Union Address so I thought I’d make a bad joke first up. This talk is going to be the “State of the Taproot Address”. I promise all the bad jokes that I make in this presentation will all be Drake gifs, there will be no other jokes involved. This is the “State of the Taproot Address 2022”.

Our thoughts with those affected in Ukraine

I briefly wanted to say we’ve got friends who can’t be here who are going through some very hard times at the moment. Obviously we hope that Gleb and others stay safe and they can get back to Bitcoin development activity as soon as possible.

Timeline

On the regularity of soft forks: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-October/019535.html

Stumbling into a contentious soft fork activation attempt: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019728.html

So there is a very long timeline for how we got Taproot into Bitcoin. I wanted to highlight a few points just so that we think about some of these things when we make future soft fork proposals. A couple of points from this timeline: one, it took a very long time. If you go back to 2008 when the Schnorr signature patent expired, people were talking about Schnorr post 2008. The idea for spending from a Merkle tree of scripts dates back to 2012. There were various proposals from that point onwards. There was BIP 114, there was BIP 116, and these all moved the conversation forward and it was really valuable work. I think it is important that we remember that we do need to get the best proposals into Bitcoin and we need to be confident and clear as a community that when we embark on a soft fork activation process that we are as confident as possible that this is the best proposal to enable the functionality that we want and that it is not going to be replaced by something better in the short to medium term. Obviously the idea of using Schnorr and Taproot, the Taproot proposal was early 2018, I certainly think and presumably we all think because we activated this soft fork that the proposal that we activated was the best proposal. If we’d activated some of these earlier proposals perhaps they wouldn’t be used and we’d have to continue to support them in full node implementations like Bitcoin Core. That is not the best way to go, certainly in my opinion. We need the best proposals into Bitcoin, we need overwhelming community support that whatever we’re activating is the best possible proposal for Bitcoin. I suppose the last point I’d like to make, in a decentralized community it is just impossible to have an authority figure that says “Yes we are going to do this soft fork. This soft fork is going ahead.” In a decentralized community it is really hard. Anyone can stand up, I could stand up today and say “We are going to do a soft fork tomorrow for whatever, a new opcode in 6 months”. Hopefully you would all say “Michael you can’t speak for the community, no one person can say this soft fork is going to happen” which does make it tricky. I think the only solution to this problem is that things go really, really slow. Whenever an activation process starts for a soft fork everyone knows about it. If you’ve paid any attention to Bitcoin technology whatsoever you know about it. The only way you get to do that is community engagement, there were various things, Bitcoin Optech, Taproot workshops, structured review, IRC meetings, tonnes of stuff. This was all really valuable work to get the community to a point where they’d heard about Taproot, they knew the benefits and we were all confident that it was the best proposal to get into Bitcoin. I could talk a bit about activation but I’ll leave activation. That’s probably another talk in itself.

More Drake gifs

I did promise the only bad jokes would be Drake gifs. We’ve got another Drake gif, this was fiatjaf on Twitter. Who knows what is going to be in the next soft fork and when it will be? Personally I think it will be a bundle of opcodes, sighash flags or Simplicity. Or perhaps both. That would be my prediction but I could be totally wrong and I have no idea how the community will converge on whatever the next soft fork is. But this is a Taproot presentation and there is still tonnes of Taproot work to be done.

The “real work” begins

Another tweet from Pieter (Wuille). This is back when Taproot activated. He said “the real work will be in building wallets/protocols that build on top of it to make use of its advantages”. The “real work” hasn’t started. He is being self-deprecating, there were 5 to 10 years of hard work from Pieter and others but there is still so much work to do for various projects. I’ll do a high level overview of the various projects and how they can leverage Taproot, or are in the process of leveraging Taproot. There is still so much work to do to take advantage of the benefits.

What do/did we have pre-Taproot?

P2SH, P2WSH address that reveals the entire script when spent from

e.g.

<key_1> OP_CHECKSIG OP_SWAP <key_2> OP_CHECKSIG OP_ADD OP_SWAP
+https://www.youtube.com/watch?v=8RNYhwEQKxM

Topic: State of the Taproot Address 2022

Slides: https://www.dropbox.com/s/l31cy3xkw0zi6aq/Advancing%20Bitcoin%20presentation%20-%20Michael%20Folkson.pdf?dl=0

Intro

Ok, so good morning. So earlier this week there was the State of the Union Address so I thought I’d make a bad joke first up. This talk is going to be the “State of the Taproot Address”. I promise all the bad jokes that I make in this presentation will all be Drake gifs, there will be no other jokes involved. This is the “State of the Taproot Address 2022”.

Our thoughts with those affected in Ukraine

I briefly wanted to say we’ve got friends who can’t be here who are going through some very hard times at the moment. Obviously we hope that Gleb and others stay safe and they can get back to Bitcoin development activity as soon as possible.

Timeline

  • 2008: Schnorr signatures patent expires
  • ~2012: Russell O’Connor discusses the idea of a Merkle tree of spending conditions on IRC
  • April 2016: Johnson Lau creates BIP 114 (Merkelized Abstract Syntax Tree)
  • August 2017: Multiple authors create BIP 116 (OP_MERKLEBRANCHVERIFY). SegWit activates this month too.
  • January 2018: Greg Maxwell posts “Taproot: Privacy preserving switchable scripting” to bitcoin-dev mailing list
  • September 2019: Bitcoin Optech Taproot workshops
  • November 2019: 100+ people participate in Taproot BIP structured review IRC meetings
  • August 2020: Last semantics change to the BIPs
  • October 2020: Merged into the Bitcoin Core codebase (without activation)
  • February 2021:  Taproot activation IRC meetings start
  • June 2021: Taproot locks in after miner signaling threshold reached
  • November 2021: Taproot activates on mainnet

On the regularity of soft forks: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-October/019535.html

Stumbling into a contentious soft fork activation attempt: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019728.html

So there is a very long timeline for how we got Taproot into Bitcoin. I wanted to highlight a few points just so that we think about some of these things when we make future soft fork proposals. A couple of points from this timeline: one, it took a very long time. If you go back to 2008 when the Schnorr signature patent expired, people were talking about Schnorr post 2008. The idea for spending from a Merkle tree of scripts dates back to 2012. There were various proposals from that point onwards. There was BIP 114, there was BIP 116, and these all moved the conversation forward and it was really valuable work. I think it is important that we remember that we do need to get the best proposals into Bitcoin and we need to be confident and clear as a community that when we embark on a soft fork activation process that we are as confident as possible that this is the best proposal to enable the functionality that we want and that it is not going to be replaced by something better in the short to medium term. Obviously the idea of using Schnorr and Taproot, the Taproot proposal was early 2018, I certainly think and presumably we all think because we activated this soft fork that the proposal that we activated was the best proposal. If we’d activated some of these earlier proposals perhaps they wouldn’t be used and we’d have to continue to support them in full node implementations like Bitcoin Core. That is not the best way to go, certainly in my opinion. We need the best proposals into Bitcoin, we need overwhelming community support that whatever we’re activating is the best possible proposal for Bitcoin. I suppose the last point I’d like to make, in a decentralized community it is just impossible to have an authority figure that says “Yes we are going to do this soft fork. This soft fork is going ahead.” In a decentralized community it is really hard. Anyone can stand up, I could stand up today and say “We are going to do a soft fork tomorrow for whatever, a new opcode in 6 months”. Hopefully you would all say “Michael you can’t speak for the community, no one person can say this soft fork is going to happen” which does make it tricky. I think the only solution to this problem is that things go really, really slow. Whenever an activation process starts for a soft fork everyone knows about it. If you’ve paid any attention to Bitcoin technology whatsoever you know about it. The only way you get to do that is community engagement, there were various things, Bitcoin Optech, Taproot workshops, structured review, IRC meetings, tonnes of stuff. This was all really valuable work to get the community to a point where they’d heard about Taproot, they knew the benefits and we were all confident that it was the best proposal to get into Bitcoin. I could talk a bit about activation but I’ll leave activation. That’s probably another talk in itself.

More Drake gifs

I did promise the only bad jokes would be Drake gifs. We’ve got another Drake gif, this was fiatjaf on Twitter. Who knows what is going to be in the next soft fork and when it will be? Personally I think it will be a bundle of opcodes, sighash flags or Simplicity. Or perhaps both. That would be my prediction but I could be totally wrong and I have no idea how the community will converge on whatever the next soft fork is. But this is a Taproot presentation and there is still tonnes of Taproot work to be done.

The “real work” begins

Another tweet from Pieter (Wuille). This is back when Taproot activated. He said “the real work will be in building wallets/protocols that build on top of it to make use of its advantages”. The “real work” hasn’t started. He is being self-deprecating, there were 5 to 10 years of hard work from Pieter and others but there is still so much work to do for various projects. I’ll do a high level overview of the various projects and how they can leverage Taproot, or are in the process of leveraging Taproot. There is still so much work to do to take advantage of the benefits.

What do/did we have pre-Taproot?

P2SH, P2WSH address that reveals the entire script when spent from

e.g.

<key_1> OP_CHECKSIG OP_SWAP <key_2> OP_CHECKSIG OP_ADD OP_SWAP
  <key_3> OP_CHECKSIG OP_ADD OP_SWAP OP_DUP OP_IF <a032>
  OP_CHECKSEQUENCEVERIFY OP_VERIFY OP_ENDIF OP_ADD 3 OP_EQUAL
-

Jimmy gave a great intro to Taproot itself. Obviously before Taproot, a script says how the bitcoin in a UTXO can be spent. Before Taproot you would have a very long script such as the one above and it would be behind a hash. Whenever you spent from that script you’d have to satisfy the conditions from that very long script. The key point was that it was a very long script and whenever you spend from that script it all goes onchain.
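As a concrete illustration of “behind a hash”: for a P2WSH output the scriptPubKey commits to the SHA256 of the witness script, and the full script is only revealed onchain when the output is spent. A minimal Python sketch, with placeholder bytes standing in for the long script above:

import hashlib

# Placeholder witness script (hypothetical <pubkey> OP_CHECKSIG), standing in
# for the long serialized script shown above.
witness_script = bytes.fromhex("21" + "02" * 33 + "ac")
p2wsh_program = hashlib.sha256(witness_script).digest()  # what the P2WSH output commits to
print(p2wsh_program.hex())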

What is Taproot?

With Taproot as Jimmy was saying we can now split this script up into various parts. There is the key path and there is the script path. You would expect to use the key path most of the time, it is a single key spend every time you use the key path in Taproot, only a single key and a single signature are going onchain. Interestingly you can do some multisig schemes, you can use MuSig, perhaps FROST, where it is a multisig protocol but only a single key and a single signature are going onchain. As far as the blockchain knows, whenever you spend from a key path only a single key and a single signature are going onchain. Even if there is some fancy protocol behind how you construct that single key and single signature. The script path would generally be considered the exit scenario when something goes wrong with that key path. I’ve said “Tap me if you need me”. Generally you’d expect to use the key path but you can encode various different spending conditions within this Merkle tree. If you do need them you tap that Merkle root, you prove that the leaf script that you are spending from is in the Taproot tree. Then you satisfy the conditions of that leaf script to be able to spend from that pay-to-taproot (P2TR) address. This really should be imprinted on your minds. Certainly if you are thinking of doing anything with Taproot or leveraging Taproot.

Q = P + H(P | c)G

  • Q is the tweaked public key
  • P is the internal public key (P = xG where x is the private key)
  • H is the hash function
  • | is concatenation
  • c is the commitment to the Taproot tree
  • G is the generator point

https://github.com/bitcoinops/taproot-workshop/blob/master/2.2-taptweak.ipynb

Alternatively if you prefer a math formula, that’s the math formula for Taproot. You are committing to the internal pubkey, the Taproot tree and that’s all combined into the Taproot pubkey which is encoded into the Taproot address.
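A small Python sketch of the tweak computation, assuming placeholder values for the 32-byte x-only internal key P and the Merkle root commitment c. It computes t = H_TapTweak(P | c) using the BIP 340 tagged hash; the final step Q = P + tG needs an elliptic curve library (the taproot-workshop notebook linked above walks through it), so it is only noted in a comment here:

import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP 340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

# Hypothetical placeholder values.
P = bytes.fromhex("aa" * 32)  # x-only internal pubkey
c = bytes.fromhex("bb" * 32)  # Merkle root of the Taproot tree

t = tagged_hash("TapTweak", P + c)  # the tweak; Q = P + t*G is done with EC point math
print(t.hex())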

Sending and receiving to P2TR addresses

So the first step that any project or any business in the space should be thinking about is supporting sending to P2TR addresses, and ideally also receiving to P2TR addresses. Murch has this wiki page where he is monitoring community adoption for the various projects, wallets, protocols etc that are supporting sending to P2TR.

Working towards a circular P2TR address economy

Murch had an interesting point on Twitter where he said it is really important that projects don’t wait for other projects to start adopting Taproot. If a project looks around and says “No one else supports sending to a P2TR address” then it is not going to support receiving to a P2TR address, and adoption stalls.

What is a P2TR address?

What Bitcoin Core and Lightning implementations have done is enable sending to any future SegWit version. There is an example of a P2TR address, the p represents 1 so this is SegWit version 1. Whenever you see a bc1p that is a mainnet Taproot address. It is really important that all businesses, projects support sending to this because if they don’t support this you can’t give out a Taproot address. If there is a wallet that can’t send money to a Taproot address then you can’t give them a Taproot address to send to. As Murch said you can even support sending to future versions because it is the responsibility of the person giving out the Taproot address that they don’t give out Taproot addresses they can’t spend from or that anyone can spend from. Ideally the approach everyone would take is to support sending to any future SegWit version even at this point, whether it is SegWit version 2, 3, 4 and who knows whether we’ll have any of them. We are back into the world of speculating future soft forks.
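A minimal Python sketch of reading the witness version off an address, as described above (the “p” after bc1 maps to 1 in the BIP 173 character set); this is a quick check only, not full bech32m validation:

# BIP 173 character set; the first data character after the separator "1"
# encodes the witness version (q -> 0, p -> 1, ...).
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def witness_version(address: str) -> int:
    hrp, data = address.lower().rsplit("1", 1)
    return CHARSET.index(data[0])

example = "bc1p..."  # placeholder Taproot address
print(witness_version(example))  # 1 -> SegWit version 1 (Taproot)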

Taproot’d wallets (including Bitcoin Core)

So I talked about sending to Taproot addresses; obviously generating Taproot addresses and receiving to those addresses is a bit more complicated. You need to know how to spend from that Taproot address and that is a lot more complicated than just being able to send to a Taproot address. There are various levels of support. Hopefully people will support spending from key paths, script paths and eventually support constructing complex Taproot trees of scripts.

The Taproot descriptor tr()

tr(pk_1,{pk_2,pk_3})

https://bitcoin.stackexchange.com/questions/115227/how-to-represent-a-taproot-output-with-more-than-2-scriptleaves-as-a-descriptor

Support for the tr() descriptor was included in Bitcoin Core 22.0. Only creates tr() descriptor in fresh wallets.

PR 22051 (Basic Taproot derivation support for descriptors): https://github.com/bitcoin/bitcoin/pull/22051

An open PR 23417 to allow old wallets to generate Taproot descriptors: https://github.com/bitcoin/bitcoin/pull/23417

I think the best practice now, especially for a new wallet, is to use descriptors. Andrew Chow has introduced a new Taproot descriptor tr(). In Core it only supports spending from pubkeys on both the key path and the script path. If you go back to that Taproot tree diagram, even if you are spending from the Taproot tree currently the Bitcoin Core wallet will only let you spend from a single pubkey even if you are using a script path, a leaf script. You are only spending from a single pubkey currently. Obviously there is work being done such that that restriction will be lifted. The Core wallet and other wallets will then support spending from complex scripts within a Tapleaf of a Taproot tree.
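As a hedged sketch of working with a tr() descriptor against a Bitcoin Core 22.0+ node from Python (getdescriptorinfo adds the checksum that deriveaddresses requires; KEY_1, KEY_2 and KEY_3 are placeholders and must be replaced with real keys for the calls to succeed):

import json, subprocess

def cli(*args):
    # Assumes bitcoin-cli is on PATH and points at a 22.0+ node.
    return subprocess.run(["bitcoin-cli", *args], capture_output=True, text=True, check=True).stdout

desc = "tr(KEY_1,{pk(KEY_2),pk(KEY_3)})"           # placeholder keys
info = json.loads(cli("getdescriptorinfo", desc))  # returns the descriptor with its checksum
addrs = json.loads(cli("deriveaddresses", info["descriptor"]))
print(addrs)                                       # a bc1p... P2TR address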

multi_a() and sortedmulti_a()

Interestingly Script didn’t change much. “Tapscript” is the terminology we are using for this upgrade to Script. Script hasn’t actually changed much, it was just that Taproot tree structure that we introduced. One of the few changes to Script as part of the Taproot soft fork was the replacement for CHECKMULTISIG: CHECKSIGADD. CHECKSIGADD is not yet supported in the Bitcoin Core wallet but there are descriptors that map to the use of this new multisig opcode. And presumably other projects will also use these updated multisig descriptors that support CHECKSIGADD in the future. Obviously totally different to MuSig and FROST which are doing a multisig protocol but only a single key and single signature going onchain. With CHECKMULTISIG and with these multisig descriptors multiple signatures and multiple keys are still going onchain. There are lots of different areas to multisig and Jimmy did a great job of going through some of those.
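For illustration only, a hedged sketch of what such a descriptor could look like, a 2-of-3 multi_a() leaf under a Taproot key path, with hypothetical key placeholders:

tr(KEY_0,multi_a(2,KEY_1,KEY_2,KEY_3))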

Taproot’d Miniscript

Once we get onto enabling complex scripts, I totally agree with previous speakers, I think Policy and Miniscript is going to be the way to go. Sanket is talking on Miniscript later and doing a workshop tomorrow. I certainly recommend attending those.

Miniscript (recap)

https://bitcoin.sipa.be/miniscript/

Policy: thresh(3,pk(key_1),pk(key_2),pk(key_3),older(12960))

Miniscript: thresh(3,pk(key_1),s:pk(key_2),s:pk(key_3),sdv:older(12960))

Miniscript from C++ compiler has since changed: https://bitcoin.stackexchange.com/questions/115288/why-has-the-miniscript-for-this-particular-policy-changed-over-the-past-few-mont/

When you are constructing a script or tweaking a script I think the future will be interacting at the Policy level. If you compare the complexity of the script to the policy, the policy seems a lot easier to construct. The Miniscript compiler will generate that Miniscript for you and that Miniscript is just an encoding of Script. The problem with trying to construct one of those scripts yourself without using something like Policy and Miniscript is that it is very easy to get things wrong. You can very easily lock up funds accidentally, you can have hidden spending paths that you didn’t realize you had that could allow parties in your multiparty protocol to steal your funds or a hacker to steal your funds. There is just so much that can go wrong when you try to start constructing your own script. There is work to do to finalize the first version of Miniscript. But I certainly think that’s going to be the way to go if we are going to be writing complex scripts ourselves and perhaps tweaking them depending on the demands or interests of particular projects, because they will all want different scripts presumably. Lots of different combinations of multisigs, timelocks, hashlocks, whatever.

Taproot’d Miniscript

Miniscript is not yet finalized or in Core. darosior is doing work to get that into Core, that’s been a long term project, this is the Core wallet obviously. There are a few open questions. Does Policy and Miniscript ignore these multisignature protocols where only one single key and one single signature go onchain? Do you just effectively put a pubkey into your Policy and then Policy and Miniscript doesn’t know anything about these complex multisignature protocols? Or does Miniscript tell you to use MuSig and FROST because it is more efficient and less data is going onchain? They are kind of open questions. Updating the Policy to Miniscript compiler is going to be a big project potentially. This would be the Miniscript compiler constructing that Taproot tree for you. You effectively say “These are all the conditions I want. I am likely to spend from this one but I might not spend from these others.” Then the Miniscript compiler would construct that Taproot tree for you, very ambitious, Sanket will tell you more on whether we’ll be able to get there. The running theme is there is still so much work to do now that we have Taproot and on various projects we are kind of just scratching the surface. We are certainly nowhere near the end destination for enabling a lot of this Taproot functionality.

Taproot’d Lightning

  • Closing a channel to a P2TR address
  • Opening a channel from a P2TR address
  • Taproot multisig for 2-of-2 (OP_CHECKSIGADD)
  • MuSig(2) for cooperative closes (space savings, privacy)
  • Using the Taproot tree for alternative script paths
  • PTLCs
  • Other Taproot ideas (additional spending path if peer comes back online)

Oliver (Gugger) will be talking about Taproot’d Lightning later. I chatted to him a couple of days ago, just to check I wasn’t going to repeat what he was going to say later. I’ll say a couple of things on Lightning. There are some people who are very enthusiastic about upgrading Lightning for Taproot. t-bast has a great doc. I did say all my gifs would be Drake ones but I lied. Anyway that’s t-bast’s. There is a lot of enthusiasm but it is a big project in itself. The Lightning implementations, as far as I know, all support closing a channel to a P2TR address. Again everyone should, Lightning wallets etc. Opening a channel from a P2TR address is a work in progress. Then there are various questions. Are we going to use this new Taproot multisig opcode (CHECKSIGADD) or are we going to wait for MuSig? Are the Lightning template scripts going to use the Taproot tree? PTLCs is a whole other problem. With HTLCs you have a hash and a preimage, the preimage goes from the destination back to the source. If we are going to have PTLCs you are going to have the equivalent of a private key to a public key getting exchanged back from the destination to the source. And so it is not just a question of upgrading a channel between peers, it is can we use PTLCs from source A to destination B. That’s going to be a real problem because it is not just upgrading a channel, it is looking at the whole route and checking that PTLCs are enabled from the source to the destination. Long term project, lots to discuss, barely scratching the surface.

Taproot’d Lightning - upgrading Lightning gossip

https://btctranscripts.com/c-lightning/2021-11-01-developer-call/#future-possible-changes-to-lightning-gossip

One thing there has been quite a lot of work so far on Lightning that I don’t think Oliver is going to cover later, upgrading Lightning gossip. If Lightning was to upgrade to using MuSig, rather than the 2-of-2 with 2 signatures and 2 pubkeys going onchain it would only be one pubkey and one signature going onchain. It would be very difficult to identify onchain what the channel opens and channel closes are. But currently Lightning gossip still gossips which UTXO you are committing to when you are opening a channel. You are getting the privacy benefit on the blockchain but if anybody, Chainalysis or any of these bad guys, is monitoring gossip they still can currently determine which UTXO you’ve committed to. Ideally we would sever that link entirely and we’d have a gossip protocol on the Lightning Network where you are able to prove that you had those funds to open a channel and not actually say which UTXO it is. There have been various discussions on how to do that. So far the conclusion sadly has been that a lot of the cryptographic magic like ring sigs, ZK proofs aren’t mature, have very large proofs which makes it problematic to use new cryptographic magic to achieve that severing of gossip ties. Possibly you could just commit to a small UTXO and then make lots of large channels because you’ve committed to a single small UTXO. But obviously that is opening up a partial DoS vector because you could prove that you own 0.1 Bitcoin and then open channels with 10 Bitcoin and you don’t actually own 10 Bitcoin. Still in discussion but there has been movement in terms of upgrading Lightning gossip. There are other questions like do you use stuff like Minisketch and do a big gossip update or do we just stick to Taproot stuff? Still lots and lots to discuss, still scratching the surface.

Taproot’d Lightning - Views on the way forward

ZmnSCPxj https://bitcoinops.org/en/newsletters/2021/09/01/#preparing-for-taproot-11-ln-with-taproot

AJ Towns https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003278.html

There are differing views on what should be prioritized. I just have two people, Z-man and AJ, but various people have different levels of enthusiasm/time to prioritize this. It is going to take a lot of work, if you went to the Lightning event a couple of days ago, the spec process is 2 implementations implementing a new feature and formalizing a particular spec or BOLT. It is not just the case of an individual implementation implementing it and running with it, we want compatibility across implementations on the Lightning Network ideally.

Taproot’d Lightning - Channel upgrades

BOLT 2 (upgrade protocol on reestablish): https://github.com/lightning/bolts/pull/868

I’ll skip that because I think I’m coming up on time.

Taproot’d Liquid

  • New opcodes enabled through OP_SUCCESS mechanism
  • PSBT v2 for confidential transactions (PSET)
  • The 11-of-15 threshold sig peg in/out using FROST or a similar threshold scheme
  • Simplicity

There is some interesting stuff on Liquid. Taproot is activated on Liquid but there are some new opcodes on Liquid. It would be really cool if we get Simplicity on Liquid. That certainly seems a strong candidate for a future soft fork. Simplicity is a complete replacement of Script itself, addressing a lot of the problems with Script that Miniscript tries to resolve. Simplicity would be a complete replacement of Script, at least for that Tapleaf version, taking Script out completely.

Other protocols

  • Coinswap/Teleport (currently using ECDSA 2-of-2, could use MuSig2 in future)
  • Atomic swaps
  • Scriptless scripts/adaptor signatures
  • DLCs

There are other protocols. Before Taproot was activated there was lots of enthusiasm for all these off-chain protocols using Schnorr and Taproot. Hopefully that enthusiasm will come back. Now we’ve got Schnorr I don’t know why people aren’t revisiting all the ideas they had for building off-chain protocols with Schnorr. Hopefully people will digest that we actually have Schnorr signatures onchain now and the cool ideas they had for using Schnorr are now possible.

Future possible soft fork(s) built on Taproot

  • New sighash flags (e.g. SIGHASH_ANYPREVOUT)
  • New opcode(s) e.g. OP_CTV, OP_TLUV, OP_CAT, OP_CHECKSIGFROMSTACK etc
  • Simplicity enabled on a new Tapleaf version
  • Graftroot, G’root etc
  • Cross input signature aggregation (needs Taproot successor)

Lots of ideas for future possible soft forks. We will hear about some of them at the conference. But as I said let’s, as a community, be very slow and cautious and ensure that whatever is in the next soft fork, whenever that is, is the best possible proposal for Bitcoin and won’t be replaced by something superior a year or two afterwards.

Q&A

Q - How often do you look at other wallet implementations and other node implementations?

A - I try to follow Bitcoin Core and Lightning spec. I feel as if I’m stretched way too thin doing that. I do follow Murch’s wiki site to see what the community adoption of Taproot is. I am aware that Laolu and Oliver have got a bunch of Taproot stuff into btcd, the Go alternative full node implementation to Core. I try to monitor it at a high level but I feel as if I’m stretched way too thin just trying to follow Core and Lightning but doing my best.

\ No newline at end of file +

Jimmy gave a great intro to Taproot itself. Obviously before Taproot, a script says how the bitcoin in a UTXO can be spent. Before Taproot you would have a very long script such as the one above and it would be behind a hash. Whenever you spent from that script you’d have to satisfy the conditions from that very long script. The key point was that it was a very long script and whenever you spend from that script it all goes onchain.

What is Taproot?

With Taproot as Jimmy was saying we can now split this script up into various parts. There is the key path and there is the script path. You would expect to use the key path most of the time, it is a single key spend every time you use the key path in Taproot, only a single key and a single signature are going onchain. Interestingly you can do some multisig schemes, you can use MuSig, perhaps FROST, where it is a multisig protocol but only a single key and a single signature are going onchain. As far as the blockchain knows, whenever you spend from a key path only a single key and a single signature are going onchain. Even if there is some fancy protocol behind how you construct that single key and single signature. The script path would generally be considered the exit scenario when something goes wrong with that key path. I’ve said “Tap me if you need me”. Generally you’d expect to use the key path but you can encode various different spending conditions within this Merkle tree. If you do need them you tap that Merkle root, you prove that the leaf script that you are spending from is in the Taproot tree. Then you satisfy the conditions of that leaf script to be able to spend from that pay-to-taproot (P2TR) address. This really should be imprinted on your minds. Certainly if you are thinking of doing anything with Taproot or leveraging Taproot.

Q = P + H(P | c)G

https://github.com/bitcoinops/taproot-workshop/blob/master/2.2-taptweak.ipynb

Alternatively if you prefer a math formula, that’s the math formula for Taproot. You are committing to the internal pubkey, the Taproot tree and that’s all combined into the Taproot pubkey which is encoded into the Taproot address.

Sending and receiving to P2TR addresses

So the first step that any project or any business in the space should be thinking about is supporting sending to P2TR addresses, and ideally also receiving to P2TR addresses. Murch has this wiki page where he is monitoring community adoption for the various projects, wallets, protocols etc that are supporting sending to P2TR.

Working towards a circular P2TR address economy

Murch had an interesting point on Twitter where he said it is really important that projects don’t wait for other projects to start adopting Taproot. If a project looks around and says “No one else supports sending to a P2TR address” then it is not going to support receiving to a P2TR address, and adoption stalls.

What is a P2TR address?

What Bitcoin Core and Lightning implementations have done is enable sending to any future SegWit version. There is an example of a P2TR address, the p represents 1 so this is SegWit version 1. Whenever you see a bc1p that is a mainnet Taproot address. It is really important that all businesses, projects support sending to this because if they don’t support this you can’t give out a Taproot address. If there is a wallet that can’t send money to a Taproot address then you can’t give them a Taproot address to send to. As Murch said you can even support sending to future versions because it is the responsibility of the person giving out the Taproot address that they don’t give out Taproot addresses they can’t spend from or that anyone can spend from. Ideally the approach everyone would take is to support sending to any future SegWit version even at this point, whether it is SegWit version 2, 3, 4 and who knows whether we’ll have any of them. We are back into the world of speculating future soft forks.

Taproot’d wallets (including Bitcoin Core)

So I talked about sending to Taproot addresses; obviously generating Taproot addresses and receiving to those addresses is a bit more complicated. You need to know how to spend from that Taproot address and that is a lot more complicated than just being able to send to a Taproot address. There are various levels of support. Hopefully people will support spending from key paths, script paths and eventually support constructing complex Taproot trees of scripts.

The Taproot descriptor tr()

tr(pk_1,{pk_2,pk_3})

https://bitcoin.stackexchange.com/questions/115227/how-to-represent-a-taproot-output-with-more-than-2-scriptleaves-as-a-descriptor

Support for the tr() descriptor was included in Bitcoin Core 22.0. Only creates tr() descriptor in fresh wallets.

PR 22051 (Basic Taproot derivation support for descriptors): https://github.com/bitcoin/bitcoin/pull/22051

An open PR 23417 to allow old wallets to generate Taproot descriptors: https://github.com/bitcoin/bitcoin/pull/23417

I think the best practice now, especially for a new wallet, is to use descriptors. Andrew Chow has introduced a new Taproot descriptor tr(). In Core it only supports spending from pubkeys on both the key path and the script path. If you go back to that Taproot tree diagram, even if you are spending from the Taproot tree currently the Bitcoin Core wallet will only let you spend from a single pubkey even if you are using a script path, a leaf script. You are only spending from a single pubkey currently. Obviously there is work being done such that that restriction will be lifted. The Core wallet and other wallets will then support spending from complex scripts within a Tapleaf of a Taproot tree.

multi_a() and sortedmulti_a()

Interestingly Script didn’t change much. “Tapscript” is the terminology we are using for this upgrade to Script. Script hasn’t actually changed much, it was just that Taproot tree structure that we introduced. One of the few changes to Script as part of the Taproot soft fork was the replacement for CHECKMULTISIG: CHECKSIGADD. CHECKSIGADD is not yet supported in the Bitcoin Core wallet but there are descriptors that map to the use of this new multisig opcode. And presumably other projects will also use these updated multisig descriptors that support CHECKSIGADD in the future. Obviously totally different to MuSig and FROST which are doing a multisig protocol but only a single key and single signature going onchain. With CHECKMULTISIG and with these multisig descriptors multiple signatures and multiple keys are still going onchain. There are lots of different areas to multisig and Jimmy did a great job of going through some of those.

Taproot’d Miniscript

Once we get onto enabling complex scripts, I totally agree with previous speakers, I think Policy and Miniscript is going to be the way to go. Sanket is talking on Miniscript later and doing a workshop tomorrow. I certainly recommend attending those.

Miniscript (recap)

https://bitcoin.sipa.be/miniscript/

Policy: thresh(3,pk(key_1),pk(key_2),pk(key_3),older(12960))

Miniscript: thresh(3,pk(key_1),s:pk(key_2),s:pk(key_3),sdv:older(12960))

Miniscript from C++ compiler has since changed: https://bitcoin.stackexchange.com/questions/115288/why-has-the-miniscript-for-this-particular-policy-changed-over-the-past-few-mont/

When you are constructing a script or tweaking a script I think the future will be interacting at the Policy level. If you compare the complexity of the script to the policy, the policy seems a lot easier to construct. The Miniscript compiler will generate that Miniscript for you and that Miniscript is just an encoding of Script. The problem with trying to construct one of those scripts yourself without using something like Policy and Miniscript is that it is very easy to get things wrong. You can very easily lock up funds accidentally, you can have hidden spending paths that you didn’t realize you had that could allow parties in your multiparty protocol to steal your funds or a hacker to steal your funds. There is just so much that can go wrong when you try to start constructing your own script. There is work to do to finalize the first version of Miniscript. But I certainly think that’s going to be the way to go if we are going to be writing complex scripts ourselves and perhaps tweaking them depending on the demands or interests of particular projects, because they will all want different scripts presumably. Lots of different combinations of multisigs, timelocks, hashlocks, whatever.

Taproot’d Miniscript

Miniscript is not yet finalized or in Core. darosior is doing work to get that into Core, that’s been a long term project, this is the Core wallet obviously. There are a few open questions. Does Policy and Miniscript ignore these multisignature protocols where only one single key and one single signature go onchain? Do you just effectively put a pubkey into your Policy and then Policy and Miniscript doesn’t know anything about these complex multisignature protocols? Or does Miniscript tell you to use MuSig and FROST because it is more efficient and less data is going onchain? They are kind of open questions. Updating the Policy to Miniscript compiler is going to be a big project potentially. This would be the Miniscript compiler constructing that Taproot tree for you. You effectively say “These are all the conditions I want. I am likely to spend from this one but I might not spend from these others.” Then the Miniscript compiler would construct that Taproot tree for you, very ambitious, Sanket will tell you more on whether we’ll be able to get there. The running theme is there is still so much work to do now that we have Taproot and on various projects we are kind of just scratching the surface. We are certainly nowhere near the end destination for enabling a lot of this Taproot functionality.

Taproot’d Lightning

Oliver (Gugger) will be talking about Taproot’d Lightning later. I chatted to him a couple of days ago, just to check I wasn’t going to repeat what he was going to say later. I’ll say a couple of things on Lightning. There are some people who are very enthusiastic about upgrading Lightning for Taproot. t-bast has a great doc. I did say all my gifs would be Drake ones but I lied. Anyway that’s t-bast’s. There is a lot of enthusiasm but it is a big project in itself. The Lightning implementations, as far as I know, all support closing a channel to a P2TR address. Again everyone should, Lightning wallets etc. Opening a channel from a P2TR address is a work in progress. Then there are various questions. Are we going to use this new Taproot multisig opcode (CHECKSIGADD) or are we going to wait for MuSig? Are the Lightning template scripts going to use the Taproot tree? PTLCs is a whole other problem. With HTLCs you have a hash and a preimage, the preimage goes from the destination back to the source. If we are going to have PTLCs you are going to have the equivalent of a private key to a public key getting exchanged back from the destination to the source. And so it is not just a question of upgrading a channel between peers, it is can we use PTLCs from source A to destination B. That’s going to be a real problem because it is not just upgrading a channel, it is looking at the whole route and checking that PTLCs are enabled from the source to the destination. Long term project, lots to discuss, barely scratching the surface.

Taproot’d Lightning - upgrading Lightning gossip

https://btctranscripts.com/c-lightning/2021-11-01-developer-call/#future-possible-changes-to-lightning-gossip

One thing there has been quite a lot of work so far on Lightning that I don’t think Oliver is going to cover later, upgrading Lightning gossip. If Lightning was to upgrade to using MuSig, rather than the 2-of-2 with 2 signatures and 2 pubkeys going onchain it would only be one pubkey and one signature going onchain. It would be very difficult to identify onchain what the channel opens and channel closes are. But currently Lightning gossip still gossips which UTXO you are committing to when you are opening a channel. You are getting the privacy benefit on the blockchain but if anybody, Chainalysis or any of these bad guys, is monitoring gossip they still can currently determine which UTXO you’ve committed to. Ideally we would sever that link entirely and we’d have a gossip protocol on the Lightning Network where you are able to prove that you had those funds to open a channel and not actually say which UTXO it is. There have been various discussions on how to do that. So far the conclusion sadly has been that a lot of the cryptographic magic like ring sigs, ZK proofs aren’t mature, have very large proofs which makes it problematic to use new cryptographic magic to achieve that severing of gossip ties. Possibly you could just commit to a small UTXO and then make lots of large channels because you’ve committed to a single small UTXO. But obviously that is opening up a partial DoS vector because you could prove that you own 0.1 Bitcoin and then open channels with 10 Bitcoin and you don’t actually own 10 Bitcoin. Still in discussion but there has been movement in terms of upgrading Lightning gossip. There are other questions like do you use stuff like Minisketch and do a big gossip update or do we just stick to Taproot stuff? Still lots and lots to discuss, still scratching the surface.

Taproot’d Lightning - Views on the way forward

ZmnSCPxj https://bitcoinops.org/en/newsletters/2021/09/01/#preparing-for-taproot-11-ln-with-taproot

AJ Towns https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-October/003278.html

There are differing views on what should be prioritized. I just have two people, Z-man and AJ, but various people have different levels of enthusiasm/time to prioritize this. It is going to take a lot of work, if you went to the Lightning event a couple of days ago, the spec process is 2 implementations implementing a new feature and formalizing a particular spec or BOLT. It is not just the case of an individual implementation implementing it and running with it, we want compatibility across implementations on the Lightning Network ideally.

Taproot’d Lightning - Channel upgrades

BOLT 2 (upgrade protocol on reestablish): https://github.com/lightning/bolts/pull/868

I’ll skip that because I think I’m coming up on time.

Taproot’d Liquid

There is some interesting stuff on Liquid. Taproot is activated on Liquid but there are some new opcodes on Liquid. It would be really cool if we get Simplicity on Liquid. That certainly seems a strong candidate for a future soft fork. Simplicity is a complete replacement of Script itself, addressing a lot of the problems with Script that Miniscript tries to resolve. Simplicity would be a complete replacement of Script, at least for that Tapleaf version, taking Script out completely.

Other protocols

There are other protocols. Before Taproot was activated there was lots of enthusiasm for all these off-chain protocols using Schnorr and Taproot. Hopefully that enthusiasm will come back. Now we’ve got Schnorr I don’t know why people aren’t revisiting all the ideas they had for building off-chain protocols with Schnorr. Hopefully people will digest that we actually have Schnorr signatures onchain now and the cool ideas they had for using Schnorr are now possible.

Future possible soft fork(s) built on Taproot

Lots of ideas for future possible soft forks. We will hear about some of them at the conference. But as I said let’s, as a community, be very slow and cautious and ensure that whatever is in the next soft fork, whenever that is, is the best possible proposal for Bitcoin and won’t be replaced by something superior a year or two afterwards.

Q&A

Q - How often do you look at other wallet implementations and other node implementations?

A - I try to follow Bitcoin Core and Lightning spec. I feel as if I’m stretched way too thin doing that. I do follow Murch’s wiki site to see what the community adoption of Taproot is. I am aware that Laolu and Oliver have got a bunch of Taproot stuff into btcd, the Go alternative full node implementation to Core. I try to monitor it at a high level but I feel as if I’m stretched way too thin just trying to follow Core and Lightning but doing my best.

\ No newline at end of file diff --git a/austin-bitcoin-developers/2019-06-29-hardware-wallets/index.html b/austin-bitcoin-developers/2019-06-29-hardware-wallets/index.html index 7eb5aa74c5..c6503cd232 100644 --- a/austin-bitcoin-developers/2019-06-29-hardware-wallets/index.html +++ b/austin-bitcoin-developers/2019-06-29-hardware-wallets/index.html @@ -10,4 +10,4 @@ Stepan Snigirev

Date: June 29, 2019

Transcript By: Bryan Bishop

Tags: Hardware wallet, Security problems

Category: Meetup

Media: -https://www.youtube.com/watch?v=rK0jUeHeDf0

https://twitter.com/kanzure/status/1145019634547978240

see also:

Background

A bit more than a year ago, I went through Jimmy Song’s Programming Blockchain class. That’s where I met M, who was the teaching assistant. Basically, you write a python bitcoin library from scratch. The API for this library and the classes and functions that Jimmy uses is very easy to read and understand. I was happy with that API, and what I did afterwards was that I wanted to move out of quantum physics to working on bitcoin. I started writing a library that had a similar API- took an arduino board and wrote a similar library that had the same features, and extra things like HD keys and a few other things. I wanted to make a hardware wallet that is easy to program, inspired by Jimmy’s class. That’s about when I met and started CryptoAdvance. Then I made a refactored version that doesn’t require arduino dependencies, so now it works with arduino, Mbed, bare metal C, with a real-time operating system, etc. I also plan to make a micropython binding and embedded rust binding.

Introduction

I am only considering three hardware wallets today- Trezor, Ledger and ColdCard. Basically all others have similar architectures to one of these. You might know that Trezor uses a general purpose microcontroller, like those used in microwaves or in our cars. These are more or less in every device out there. They made the choice to use only this, without a secure element, for a few reasons- they wanted to make the hardware wallet completely open-source and be able to say for sure what is running on it. I do not think they will have any secure element in the hardware wallet unless we develop an open-source secure element. I think our community could make that happen. Maybe we could cooperate with Trezor and Ledger and at some point develop a secure element based on the RISC-V architecture.

Secure elements

I think we need several types of secure elements for different security models. You want to diversify the risk. You want to use multisig with Schnorr signatures. You want different devices with different security models, and ideally each key should be stored on different hardware, with a different security model for each one as well. How do vulnerabilities appear? It could be a poorly designed protocol; hopefully you won’t have a bug in that, but sometimes hardware wallets fail. It could be a software vulnerability where the people who wrote the software made a mistake, like overflows or implementation bugs or some not-very-secure cryptographic primitives like leaking information through a sidechannel. The actual hardware can be vulnerable to hardware attacks, like glitching. There’s ways to make microcontrollers not behave according to their specification. There can also be hardware bugs, which happen from time to time, because the manufacturer of the chip can also make mistakes- most of the chips are still designed not automatically but by humans. When humans place transistors and optimize this by hand, they can also make mistakes. There’s also the possibility of government backdoors, which is why we want an open-source secure element.

Backdoors…

There was a talk some time ago about instructions in x86 processors where basically they have a specific set of instructions that is not documented- they call it Appendix H. They share this appendix only with trusted parties ((laughter)). Yeah. These instructions can do weird things, we don’t know exactly what, but some guy was able to find all of the instructions. He was even able to escalate privileges from the user level, not just to the root level but to ring -2, complete control that even the operating system doesn’t have access to. Even if you run tails, it doesn’t mean that the computer is stateless. There is still a bunch of crap running under the OS that your OS doesn’t know about. You should be careful with that one. On librem computers, they have not only PureOS but also Qubes that you can run, and they also use the – which is also open– basically you can have … check that it is booting the real tails. The librem tool is called heads, not tails. You should look at that if you are particularly paranoid.

Librem computers have several options. You can either run PureOS or you can run Qubes or tails if you want. The librem key checks the bootloader.

Decapping

Ledger hardware wallets use a secure element. They have a microcontroller also. There are two different architectures to talk about. Trezor just uses a general-purpose MCU. Then we have ColdCard which is using a general-purpose MCU plus it adds on top of that a secure– I wouldn’t call it a secure element, but it is secure key storage. The thing that is available on the market, ColdCard guys were able to convince the manufacturer to open-source this secure key storage device. So we hope we know what is running on the microcontrollers, but we can’t verify that. If we give you the chip, we can’t verify it. We could in theory do some decapping. With decapping, imagine a chip and you have some epoxy over the semiconductor and the rest of the space is just wires that go into the device. If you want to study what is inside the microcontroller, what you do is you put it into the laser cutter and you first make a hole on the device and then you can put here the nitric acid that you heat up to 100 degrees and it will dissolve all the plastic around it. Then you have a hole to the microcontroller, then you can put this into an optical microscope or electron microscope or whatever you have and actually study the whole surface there. There was an ATmega that someone decapped and it was still able to run. There was a talk at defcon where the guys showed how to make DIY decappers. You could take a trezor or other hardware wallet and you put it on a mount, and then you just put the stream of nitric acid to the microcontroller and it dissolves the plastic but the device itself can still operate. So while you’re doing some cryptography there, you can get to the semiconductor level and put it under the microscope and observe how exactly it operates. Then when the microcontroller operates, you can see how the plastic– not only with the microscope but even also with— like when a transistor flips between 0 and 1, it has a small chance of emitting a photon. So you can watch for emitted photons and there’s probably some information about the keys given by that. Eventually you would be able to extract the keys. You cut the majority of the plastic then you put in the nitric acid to get to the semiconductor level. In this example, the guy was looking at the input and output buffer on the microcontroller. You can also look at the individual registers. It’s slightly different for secure elements or secure key storage, though. They put engineering effort into the hardware side to make sure that it is not easy to do any decapping. When the cryptography is happening on the secure element, they do have certain regions that are dummy parts of the microcontroller. So they are operating and doing something, but they are trying to fool you about where the keys are. They have a bunch of other interesting things there. If you are working with security-focused chips, then it’s much harder to determine what’s going on there. The other thing though is that in the ColdCard the key storage device is pretty old so that is why the manufacturer was more willing to open-source it. If we are able to see what is running there, that means the attacker will also be able to extract our keys from there. So being able to verify the chips, also shows that it is not secure to users. So being able to verify with decapping might not be a good thing. So it’s tricky.

Secure key storage element (not a secure element)

Normally the microcontroller asks the secure storage to give it the key, the key moves to the main microcontroller, the cryptographic operations occur, and then it is wiped from the memory of the microcontroller. How can you get this key from the secure key storage? Obviously you need to authenticate yourself. In ColdCard, you do this with a PIN code. How would you expect the wallet to behave when you enter the wrong PIN code? In Trezor, you increase a counter and increase the delay between PIN entries, or like in Ledger where you have a limited number of entry attempts before your secrets get wiped. ColdCard is using the increased delay mechanism. The problem is that this time delay is enforced not by the secure element but by the general-purpose microcontroller. To guess the correct PIN code, if you are able to somehow stop the microcontroller from increasing the delay, then you would be able to bruteforce the PIN code. To communicate with the key storage, the microcontroller has a secret stored on it, and whenever it uses the secret to communicate with the key storage, the key storage will respond. If the attacker can get this secret, then he can throw away the microcontroller and use his own equipment with that secret and try all the PIN combinations until he finds the right one. This is because of the choice they made where basically you can have any number of tries for the PIN code on the secure element. This particular secure key storage has the option to limit the number of PIN entry attempts. But the problem is that it is not resettable. This means that for the whole lifetime of the device you can have a particular number of wrong PIN entries, and as you reach this limit you throw away the device. This is a security tradeoff that they made. I would prefer actually to set this limit to say 1000 PIN codes, or 1000 tries, and I doubt that I would fail to enter the PIN code 1000 times. For the ColdCard, they use a nice approach for PIN codes where you have it split into two parts. You can use arbitrary length, but they recommend something like 10 digits. They show some verification words, which help you verify that the secure element is still the right one. So if someone swaps your device for another one, in an evil maid attack, then you would see that there are different words and you can stop entering the PIN. There was actually an attack on the ColdCard to bruteforce this. The words are deterministic from the first part of your PIN code. Maybe you try multiple digits, and then you write down the words, and you make a table of this, and then when you do the evil maid attack, then you use that table so that it shows the expected words to the user. You need to bruteforce the first few PIN digits to get those words, but the later words are hard to figure out without knowing the PIN code.
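
To make the point about where the retry counter lives concrete, here is a toy sketch. The class, the numbers and the PBKDF2 construction are made up for illustration and are not the actual ColdCard or secure-chip protocol; the idea is simply that if the counter sits inside the secure storage itself, replacing the general-purpose microcontroller does not reset it.

```python
# Toy model of a secure key store that enforces its own PIN retry counter.
# Names and numbers are illustrative only; real secure chips do this in hardware.
import hashlib, hmac, secrets

class SecureKeyStore:
    MAX_ATTEMPTS = 1000  # lifetime limit on wrong PINs (illustrative, as discussed above)

    def __init__(self, pin: str, secret: bytes):
        self._salt = secrets.token_bytes(16)
        self._pin_tag = hashlib.pbkdf2_hmac("sha256", pin.encode(), self._salt, 100_000)
        self._secret = secret
        self._failed = 0  # lives inside the secure chip, not on the general-purpose MCU

    def unlock(self, attempt: str) -> bytes:
        if self._failed >= self.MAX_ATTEMPTS:
            raise RuntimeError("key store permanently locked")
        tag = hashlib.pbkdf2_hmac("sha256", attempt.encode(), self._salt, 100_000)
        if not hmac.compare_digest(tag, self._pin_tag):
            self._failed += 1  # the counter survives even if the attacker swaps out the MCU
            raise ValueError("wrong PIN")
        self._failed = 0
        return self._secret

store = SecureKeyStore("1234567890", b"\x01" * 32)
try:
    store.unlock("0000000000")
except ValueError:
    pass
print(store.unlock("1234567890").hex())
```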

Evil maid attacks

No matter what hardware wallet you have, even a Ledger with the nicest hardware– say I take it and put it in my room and put a similar device there that has wireless connectivity to an attacker’s computer, then it’s bridged to the real Ledger, and I can have this bidirectional communication and do a man-in-the-middle attack. Whatever the real Ledger shows, I can display on this device and fool the user. I can become an ultimate man-in-the-middle here. Whatever the user enters, I can see what he enters and I know the PIN code and then I can do everything. There are two ways to mitigate this attack— you can use a Faraday cage every time you’re using the hardware wallet, and the second way is to enforce it in the hardware and I think the Blockstream guys suggested this– you have a certain limit, the speed of light, so you can’t communicate faster than the speed of light, right? Your device is here. If you are communicating with this device here, then you can enforce that the reply should come within a few nanoseconds. Then it is very unlikely that– it is not possible by the laws of physics to get the signal to your real device and then get back.

What about using an encrypted communication channel with the hardware wallet? The attacker is trying to get the PIN code. It’s still vulnerable. Instead of entering the PIN code, you instead use a computer— then you have your hardware wallet connected to it, and then in the potentially compromised situation we have this malicious attacker that has wireless connectivity to the real hardware wallet. Say our computer is not compromised. Over this encrypted channel, you can get the mask for your PIN code– like some random numbers that you need to add to your PIN code in order to enter it on the device. Your MITM doesn’t know about this unless he compromises your computer as well. Or a one-time pad situation. Every time you enter the PIN code, you enter the PIN code plus this number. It could work with a one-time pad. When you are operating with your hardware wallet, you probably want to sign a particular transaction. If the attacker is able to replace this transaction with his own, and display to you your own transaction, then you are screwed. But if the transaction is passed encrypted to the real hardware wallet, then he can’t replace it because it would be unauthenticated.
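
A rough sketch of the PIN-masking idea, assuming the trusted computer already shares a secure channel with the real wallet (the function names and digit-wise mod-10 scheme are just for illustration, not a real wallet protocol): the host shows a random digit mask, the user types PIN plus mask, and only the real wallet can strip the mask, so the man-in-the-middle device never learns the PIN.

```python
# Sketch of masking a PIN with a one-time pad of digits (illustrative only).
import secrets

def make_mask(length: int) -> str:
    return "".join(str(secrets.randbelow(10)) for _ in range(length))

def mask_pin(pin: str, mask: str) -> str:
    # What the user actually types on the untrusted keypad.
    return "".join(str((int(p) + int(m)) % 10) for p, m in zip(pin, mask))

def unmask_pin(masked: str, mask: str) -> str:
    # Only the real wallet, which received the mask over the encrypted channel, can do this.
    return "".join(str((int(c) - int(m)) % 10) for c, m in zip(masked, mask))

pin = "4815"
mask = make_mask(len(pin))   # delivered to the user via the trusted computer
typed = mask_pin(pin, mask)  # this is all the man-in-the-middle ever observes
assert unmask_pin(typed, mask) == pin
print(mask, typed)
```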

Ephemeral disposable hardware wallets

Another way to get rid of the secure key storage is to make the whole hardware wallet ephemeral, so you focus on guarding the seed and the passphrase and enter it each time. The hardware wallet is disposable then. You might never return to use that hardware wallet again. I was thinking about this regarding secure generation of mnemonics on a consumer device. If you have a disposable microcontroller but everything else we’re sure doesn’t have any digital components or memory, then each time we could replace the microcontroller with a new one and the old one we just crush with a hammer. If you already remember the mnemonic, well the PIN discourages you from remembering it. If you use disposable hardware, and it’s not stored in the vault then when someone accesses the vault then they don’t see anything and don’t know what your setup really is.

Offline mnemonic generation and Shamir secret sharing

I prefer 12 word mnemonics because they are easier to remember and they still have good entropy, like 128-bit entropy. I can still remember the 12 words. I would back it up with a Shamir secret sharing scheme. We take our mnemonic and split it into pieces. If you remember the mnemonic, then you don’t need to go recover your shares. Not all shares are required to recover the secret, but you can configure how many shares are required.
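
A minimal k-of-n split over a prime field, just to show the mechanics of what Shamir secret sharing does with the 128 bits behind a 12-word mnemonic. This is toy code; a real backup should use a reviewed scheme such as SLIP-39 rather than anything home-rolled.

```python
# Minimal 2-of-3 Shamir secret sharing over a prime field (toy; use SLIP-39 in practice).
import secrets

P = 2**521 - 1  # a Mersenne prime, comfortably larger than 128-bit seed entropy

def split(secret: int, k: int = 2, n: int = 3):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    evaluate = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, evaluate(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

entropy = int.from_bytes(secrets.token_bytes(16), "big")  # 128 bits, like a 12-word mnemonic
shares = split(entropy)
assert recover(shares[:2]) == entropy
assert recover([shares[0], shares[2]]) == entropy
print("any 2 of 3 shares recover the secret")
```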

If you are worried about your random number generator, then you should be adding user entropy. If you have a tampered random number generator and you’re adding user entropy anyway, then it doesn’t matter. Older laptops can be a problem, like 32-bit machines, and the wallets might not support 32-bit CPUs anymore or something. If your wallet was written for python2.6…. now you have to write something to handle big integers, etc. This is a lot of bitrot in just 5-7 years.

Regarding offline mnemonic generation or secure mnemonic generation… use a dart board, use dice- do you trust yourself to generate your entropy? I am thinking about something like this– we have true random number generators in the chips; we can ask the chips for the random numbers, then use those numbers, and then display them to the user: the word and the corresponding index. We also know that circuits degrade over time, so random number generators could be compromised in a predictable way based on hardware failure statistics.

Entropy verification

You can roll some dice, then take a photo of it. If I’m building a malicious hardware wallet and I want to steal your bitcoin, this is by far the easiest way. Maybe the hardware is great, but the software I’m running isn’t using that hardware random number generator. Or maybe there’s a secret known to the attacker. You wait a few years, and then you have a great retirement account. You could also enforce a secure protocol that allows you to use a hardware wallet even if you don’t trust it. You can generate a mnemonic with user entropy and verify that the entropy was in fact used.
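
One way such a protocol could look, sketched under assumptions of my own (the commit-then-reveal ordering here is illustrative, not a specific wallet’s scheme): the device commits to its own randomness before it ever sees the dice rolls, mixes both into the seed, and then reveals its randomness so the host can recompute the seed and confirm the user entropy really went in.

```python
# Sketch of verifiable entropy mixing (illustrative protocol, not any vendor's exact scheme).
import hashlib, secrets

# 1. Device draws its own entropy and publishes a commitment before seeing user entropy.
device_entropy = secrets.token_bytes(32)
commitment = hashlib.sha256(device_entropy).hexdigest()

# 2. User supplies entropy, e.g. transcribed dice rolls typed into the host.
user_entropy = "41523366125415632234".encode()

# 3. Device derives the seed from both inputs and only then reveals its entropy.
seed = hashlib.sha256(device_entropy + user_entropy).digest()

# 4. Host verifies: the revealed entropy matches the commitment, and the seed
#    really is a function of the user's dice rolls.
assert hashlib.sha256(device_entropy).hexdigest() == commitment
assert hashlib.sha256(device_entropy + user_entropy).digest() == seed
print(seed.hex())
```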

Signatures leaking private keys due to non-random nonces chosen by the hardware wallet

If I am still an evil manufacturer of the hardware wallet, and I was forced by the community to include this entropy verification mechanism, and I also want the community to like me, so we say we have a completely airgapped solution with QR codes and cameras… and say you can use any software wallet you want because it works with everything. Now you have a setup like this; say you have an uncompromised computer, only a compromised hardware wallet which was manufactured by evil means. You are preparing the unsigned transaction here, you pass it to the hardware wallet using the QR codes. The hardware wallet then displays the information and you verify everything is correct, and you verify on the computer as well. Then you sign it, and get back the signed transaction. Then you get a perfectly valid bitcoin transaction that you verify is correct and you broadcast it to the network. It’s the nonce attack. Yes, exactly. The problem is that the signature in bitcoin has two numbers, and one of them is derived from a random nonce that blinds the private key. Our hardware wallet can choose a deterministically-derived nonce or a random nonce. If this nonce is chosen insecurely, either because you’re using a bad random number generator that isn’t producing uniformly random values, or because you’re evil, then just by looking at signatures I will be able to get information about your private key. Something like this happened with yubikey recently. There were FIPS-certified yubikeys and they were leaking your private keys within 3 signatures. The stupid thing is that they introduced the vulnerability by mistake when they were preparing their device for the certification process, which asks you to use random numbers- but actually you don’t want to use random numbers, you want to use deterministic derivation of nonces because you don’t trust your random numbers. You can still use random numbers, but you should use them together with deterministic nonce derivation. Say you have your private key that nobody knows, and you want to sign a certain message…. ideally it should be HMAC something…. This is the nice way, but you can’t verify your hardware wallet is actually doing this. You would have to know your private key to verify this, and you don’t want to put your private key in some other device. Also, you don’t want your device to be able to switch to malicious nonce generation. You want to make sure your hardware wallet cannot choose arbitrary nonces on its own. You can force the hardware wallet to use not just the numbers it likes, but also the numbers that you like.
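
A simplified sketch of deterministic nonce derivation in the spirit of RFC 6979 (this is not the exact RFC 6979 construction; the HMAC layout and the example key are my own, for illustration). The nonce becomes a fixed function of the private key and the message, so a broken RNG cannot produce repeated or biased nonces, and extra randomness can still be mixed in on top.

```python
# Simplified deterministic nonce derivation (illustrative, not the exact RFC 6979 algorithm).
import hashlib, hmac

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def derive_nonce(private_key: int, msg_hash: bytes, extra_entropy: bytes = b"") -> int:
    # The nonce is a deterministic function of key and message, so the same pair can
    # never be signed twice with different nonces, and the RNG does not need to be trusted.
    data = private_key.to_bytes(32, "big") + msg_hash + extra_entropy
    counter = 0
    while True:
        k = int.from_bytes(hmac.new(data, counter.to_bytes(4, "big"), hashlib.sha256).digest(), "big")
        if 1 <= k < N:
            return k
        counter += 1

d = 0x1E99423A4ED27608A15A2616A2B0E9E52CED330AC530EDCC32C8FFC6A526AEDD  # example private key
h = hashlib.sha256(b"message to sign").digest()
assert derive_nonce(d, h) == derive_nonce(d, h)           # deterministic
assert derive_nonce(d, h) != derive_nonce(d, h, b"salt")  # extra entropy can still be added on top
print(hex(derive_nonce(d, h)))
```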

You could send the hash of the random number you will be using, such that the hardware wallet doesn’t need to trust its RNG alone; it can use a deterministic algorithm to derive this R value that also commits to your number. It would be a pretty secure scheme: it protects you if either the software or the hardware is compromised, just not both of them at the same time. One of them should be fine.

All the hardware wallets currently ignore this. I am preparing a proposal to include this field into bip174 PSBT. Our hardware wallet will definitely support it. I want to build a system where you don’t need to trust the hardware wallet too much, even if it is compromised or if there are bugs. All hardware wallets are hacked from time to time.

With dummy keys, you can check that the signatures are generated deterministically and make sure it is happening, and then maybe you feel safe with the hardware wallet. But switching from this deterministic algorithm to the malicious one can happen at any time. It could be triggered by a firmware update or some phase of the moon, you can’t be sure.

Another solution was that you can use verifiable generation of these random nonces, I think this was proposed by Pieter Wuille. For this you need a particular hashing function that supports zero-knowledge proofs that you were using this algorithm without exposing your private keys. The problem here is that it is very heavy computation for a microcontroller, so you’re probably not going to get it into a microcontroller.

There is also an idea about using sign-to-contract as an anti-nonce-sidechannel measure.

Ledger and multisig

If you have p2sh multisig and this wallet holds only one of the keys, then even if it is malicious it doesn’t control all the keys– and ideally you’re using different hardware for the other keys. The problem with multisignature is that… well, Trezor supports it nicely. I am very happy with Trezor and multisig. ColdCard released firmware that supports multisig about a day ago. Ledger has a terrible implementation of multisignature. What I expect the wallet to show when you’re using multisignature: you want to see your bitcoin address, the amount you are actually sending or signing, the change output, and the amounts. With Ledger multisig, you always see two outputs and you don’t know which one you are spending and which one is the change address, if any. With two Ledgers in a multisig setup, you are less secure than using a single Ledger. If anyone wants to make a pull request to the bitcoin app of Ledger, please do so. People have been complaining about this issue for more than a year I think. Ledger is not very good at multisignature.

Practicality

I know about a few supply chain attacks, but those relied on users doing stupid things. I’m not aware of any targeted hardware attacks. Right now the easiest way to attack someone’s hardware wallet is to convince them to do something stupid. I think at the moment there’s just not enough hackers looking into this field. But the value is going to increase. There have definitely been software wallet attacks.

walletrecoveryservices.com sells some of this as a service for hardware wallet recovery. The Wired magazine editor lost his PIN code or something, and he did a lot of work to get the data off the device. So this isn’t a malicious attack, but it is nonetheless an attack.

You should be wary of closed-source third-party hardware devices without a brand name. How do you trust any of this?

Right now it might be easier to get cryptocurrency by launching malware or starting a new altcoin or something or a hard-fork of another altcoin. Those are the easiest ways. Right now it’s easy to target lightning nodes and take money there; you know their public addresses and how much money they have, so you know that if you can get to the server then you can recover your cost of attack. So those targets are much more obvious and easier to attack at this point. There are easier remote attacks at this point, than attacking hardware wallets.

Downsides of microcontrollers

http://diyhpl.us/wiki/transcripts/breaking-bitcoin/2019/extracting-seeds-from-hardware-wallets/

How do secure elements work? They seem like a silver bullet that does everything, right? I can tell you the difference between normal microcontrollers and secure elements. The normal microcontrollers are made for speed and efficiency, and they are made easy to develop for. There are so-called security bits that you set when you’re done programming your microcontroller. When the microcontroller boots, how you would imagine it is that it should boot with no-read no-write permissions, then check the security bits to see whether you should be able to communicate with it, and then allow read/write access. But sometimes it’s done the other way around, where the microcontroller starts in an open mode for read-write, then checks the security bits and then locks itself. The problem is that if you talk to the microcontroller before it has read those bits, you might be able to extract a single byte from the microcontroller’s flash memory. You can keep doing this by rebooting over and over again; if you are fast and the microcontroller is slow, you can do this even faster. I think this is something that Ledger is referencing in all their talks– this “unfixable attack” on all microcontrollers like Trezor and others. I think it’s related to this, because this is exactly the thing that is broken by design and cannot be fixed just because the system evolved like this. No, they don’t need to use low temperature here. You just need to be faster than the microcontroller, which is easy because the microcontrollers used in hardware wallets are 200 MHz or so. So if you use a GPU or a modern computer then you would be able to do something.

So the threat is that the microcontroller’s memory can be read before it locks itself down? The problem is that you can read out the whole flash memory. This means that even if it is encrypted, you have a key somewhere to decrypt it. What is stored on the flash memory? What is being protected here? Some have secret keys. All the IoT devices probably have your wifi password. There are plenty of different secrets that you might want to protect. The decryption key could in theory be stored somewhere else. Sounds like the threat is that the microcontroller can expose data written on it; if you care about that because it’s proprietary data or a secret or a plaintext bitcoin private key, then that’s a huge problem. If you have a microcontroller in some computing device that you have programmed… this threat sounds less interesting. Yes, that’s why most people don’t care about the microcontroller. In consumer devices, people normally do even easier things. They forget to disable the debug interface, and then you have direct full access to the microcontroller with JTAG or something. So you can read out flash memory and other stuff, and reprogram it if you want.

There’s also the JTAG interface. There’s another standard too, serial wire debug interface (SWDI). These interfaces are used for debugging. They allow you during development to see and completely control the whole microcontroller, you can set breakpoints, you can observe the memory, you can trigger all the pins around the device. You can do whatever you want using these interfaces. There’s a way to disable this, and that’s what manufacturers of hardware wallets do. Or another security bit, but again the security bit is not checked at the moment of boot, but a little bit later, so it’s another race condition. Ledger forgot to disable JTAG interface on the microcontroller that controls their display, some time ago. But they still had a secure element, so it wasn’t a big deal. Yes, disabling it is a software thing. All the security bits are software-measures. You just set the flags for your firmware to disable certain features.

Also, the normal microcontrollers are designed to work in certain conditions: in a temperature range from here to here, with a power supply voltage of, say, 1.7 volts +- 0.2 volts, and with a clock rate within a certain range. What happens if this environment changes? You can get undefined behavior, which is exactly what the attacker wants. A microcontroller operated beyond those limits can skip instructions, make miscalculations, or do lots of other different things. It can also reboot.

As an example, one of the attacks on the Trezor hardware wallets using this stuff was ….. when you connect the Trezor to my computer over USB, the computer asks the device: who are you and how can I help you? What kind of device are you? The Trezor says, I am Trezor model G for example. What the attacker was able to do- even before you unlock your hardware wallet- is this: when this data is sent over to the computer, Trezor is basically calculating what length it should send to the computer, and this length is calculated during certain instructions. If you glitch the microcontroller at this time and make this calculation return something random, like more than the 14 bits the hardware wallet is expecting, you get not only the Trezor model information but in addition to that you would be able to get the mnemonic and the full content of the memory. The model information was stored right next to the mnemonic information. They fixed this, though. Right now, you have to unlock the Trezor with the PIN, it doesn’t send any data out at all until unlocked. There’s also some non-readable part of memory that the microcontroller can’t read; so if there’s an overflow, it will throw an error and not be able to read those bits. So this is also a good approach. In principle, they made a lot of fixes recently. This was a software fix, not a hardware fix.

The mnemonic phrase on the hardware wallet was stored plaintext and the PIN verification was vulnerable to a sidechannel attack. Another big attack for microcontrollers is sidechannel attacks. When microcontrollers are comparing numbers, they can leak information by just consuming different amounts of power, or taking a slightly different amount of time to calculate something. Trezor was vulnerable to this as well, some time ago, in particular to PIN code verification. So they were verifying this by you entering a PIN and comparing to a stored PIN. This comparison was consuming different cycles, different patterns were causing different– by observing the emission of this sidechannel from the microcontroller, LedgerHQ was able to distinguish between different digits in the PIN. They built a machine learning system to distinguish between the systems and after trying 5 different PINs, this program was able to say your real PIN. 5 PINs was still doable in terms of delay, so you can do it in a few hours. This was also fixed. Now PIN codes aren’t stored in plaintext; now they use it to derive a decryption key that decrypts the mnemonic phrase. This way is nicer because even if you have a wrong PIN, the decryption key is wrong and then you can’t decrypt the mnemonic and it’s not vulnerable to any kind of sidechannel attack.
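
A sketch of the fix described above: instead of comparing the entered PIN against a stored copy (which gives a sidechannel something to measure), stretch the PIN into a key that decrypts the mnemonic, so a wrong PIN simply yields garbage. The XOR keystream and fixed salt here are purely for illustration; a real device would use an authenticated cipher and random salt.

```python
# Sketch: derive the mnemonic-decryption key from the PIN instead of comparing PINs.
# XOR keystream used only for illustration -- a real device would use an authenticated cipher.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def pin_key(pin: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

salt = b"\x00" * 16                      # fixed here for reproducibility; random in practice
mnemonic = b"abandon abandon ... about"  # placeholder plaintext
stored = bytes(a ^ b for a, b in zip(mnemonic, keystream(pin_key("1234", salt), len(mnemonic))))

def decrypt(pin: str) -> bytes:
    # Correct PIN -> mnemonic decrypts; wrong PIN -> garbage, nothing secret to compare against.
    return bytes(a ^ b for a, b in zip(stored, keystream(pin_key(pin, salt), len(stored))))

print(decrypt("1234"))
print(decrypt("9999"))
```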

On the hardware side, there’s race conditions, sidechannel attacks, operating environment manipulation, and debug interfaces for microcontrollers. You could also decap it and shoot it with lasers or something, and make it behave in a weird way. So this can be exploited by an attacker.

Secure elements again

On the other hand, what does a secure element do? They are similar to microcontrollers, but they don’t have debug interfaces. They don’t have the read-write flags. They also have a bunch of different countermeasures against these attacks, for example hardware measures. There is a watchdog that monitors the voltage on the power supply PIN and as soon as it sees it goes below some value, it triggers the alarm and you can erase the keys as soon as you see this occur. Or you just stop operating. If you see the voltage supply is varying, you just stop operation. If you see the temperature is changing too much, you can also stop operation. You could either stop, or erase your secrets. There’s also this mechanism that allows the microcontroller to detect if you are trying to decap, like with a simple light sensor. If you decap the chip, you have access to the semiconductor and you can see a lot of light coming out and then you stop operations. Here you definitely want to wipe your secrets clean and delete all of those. They also use a bunch of interesting techniques against sidechannel attacks. For example, they don’t do just constant power consumption and constant timing, but then on top of that they introduce additional random delays and random noise on the power lines making it more and more difficult for the attacker to get any data from there. They also normally have a very limited capacity on bits. You have power supply pin, ground, maybe a few more to drive something simple like an LED on the ColdCard or on the modern Ledger Model X they are actually able to talk to the display driver to control the display which is a nice improvement on the hardware. In principle, it is not very capable. You can’t expect the secure element to drive a large display or to react to user input. Button is probably fine, but definitely not the best thing.

The reason why the Ledger has a tiny screen with low resolution is because they are trying to drive everything from the secure element, at least on Ledger Model X. Previously this was not the case, where they had a normal microcontroller talking to the secure element where you unlock it and then it signs whatever you want. And then this microcontroller controls the display. This was actually a big point that was pointed out by — … this architecture is not perfect because you have this man-in-the-middle that controls this display. You have to trust your hardware wallet has a trusted display, but with this architecture you can’t because there’s a man-in-the-middle. It’s hard to mitigate this and figure out how to tradeoff between complete security versus user usability. I hope you get the idea of why secure elements are actually secure.

Problems with secure elements

There are in fact some problems with secure elements, though. They have all of these nice anti-tampering mechanisms, but they also like to hide other stuff. The common practice in the security field is that when you close-source your security solution, you get some extra points on the security certification, like EAL5 and other standards. Just by closing the source of what you wrote or what you did, you get extra points. Now what we have is the problem that we can’t really find out what is running inside these devices. If you want to work with a secure element, you have to be big enough to talk to these companies and get the keys required to program it. And you also have to sign their non-disclosure agreement. Only at that point will they give you documentation; and then the problem is that you can’t open-source what you wrote. Alternatively, you use a secure element that is running a Java Card OS, which is something like a subset of Java developed for the banking industry because bankers like Java for some reason. Basically they have this Java VM that can run your applet… so you have no idea how the thing is operating, you just trust them because it’s certified and it has been there for 20-30 years already and we know all the security research institutes are trying very hard to even get a single …. and then you can completely open-source the Java Card applet that you upload to the secure element, but you don’t know what’s running underneath it. Java Card is considered a secure element, yes.

By the way, the most secure Java Cards or secure elements were normally developed for the … like when you buy this card, you have a secret there that allows you to watch TV from a certain account. They were very good at protecting the secrets because they had the same secret everywhere. The signal is coming from space or satellite, the signal is always the same. You’re forced to use the same secret on all the devices. This means that if even one is hacked, you get free TV for everyone, so they put a lot of effort into securing this kind of chip because as soon as it is hacked then you’re really screwed and you have to replace all devices.

Also, let’s talk about the Sony Playstation 3 key compromise attack. They use ECDSA signing, and you’re not supposed to get all the games for free, right? The only way to get a game running is to have a proper signature on the game, a signature from Sony. The problem was that they supposedly didn’t hire a cryptographer or anyone decent at cryptography… they implemented the digital signing algorithm in a way where they were reusing the same nonce over and over again. It’s the same problem with the hardware wallets we described earlier today. If you are reusing the same nonce, then I can extract your private key just by having two signatures from you. I can then get your private key and then I can run whatever game I want because I have the Sony private key. I think it was the fastest hack of a gaming console ever.
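
The algebra behind this kind of break is short enough to show. With two ECDSA signatures that share a nonce, anyone can solve for the nonce and then the private key. The sketch below only does the modular arithmetic: all the numeric values are made up, and r is just a stand-in for the x-coordinate of k·G, because the curve point itself plays no role in the recovery.

```python
# Recovering an ECDSA private key from two signatures that reuse a nonce (algebra only).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

d = 0x1E99423A4ED27608A15A2616A2B0E9E52CED330AC530EDCC32C8FFC6A526AEDD  # example private key
k = 0x75BCD15                                                           # the reused nonce
r = 0xC0FFEE                                                            # stand-in for x(k*G)
z1, z2 = 0xAAAA, 0xBBBB                                                 # hashes of two signed messages

s1 = pow(k, -1, N) * (z1 + r * d) % N
s2 = pow(k, -1, N) * (z2 + r * d) % N

# Anyone who sees (r, s1, z1) and (r, s2, z2) with the same r can now do this:
k_rec = (z1 - z2) * pow((s1 - s2) % N, -1, N) % N
d_rec = (s1 * k_rec - z1) * pow(r, -1, N) % N
assert (k_rec, d_rec) == (k, d)
print("recovered private key:", hex(d_rec))
```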

QR code wallets

Constraining information flow to be unidirectional- bits can’t flow backwards. The only place a decrypted key should be is in a running hardware wallet. There’s a slip39 python implementation. Chris Howe contributed a slip39 C library.

With multisig and each key sharded, can you mix the shards from the different keys and is that safe?

The qr code is json -> gzip -> base64 and it fits like 80-90 outputs and it’s fine. Animated QR codes could be cool, but there’s some libraries like color QR codes that give you a boost. It’s packetization. Is the user going to be asked to display certain QR codes in a certain order, or will it be negotiated graphically between the devices? You can use high contrast cameras.
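
A sketch of that json -> gzip -> base64 pipeline together with naive “i-of-n” packetization for animated or multiple QR codes. The field names, packet framing and chunk size are made up for illustration; real wallet formats differ.

```python
# Sketch of the json -> gzip -> base64 pipeline with naive "i-of-n" packetization.
import base64, gzip, json

psbt_like = {"inputs": [{"txid": "ab" * 32, "vout": 0}],
             "outputs": [{"addr": "bc1q...", "amount": 10000}]}

payload = base64.b64encode(gzip.compress(json.dumps(psbt_like).encode())).decode()

CHUNK = 80  # how many characters fit comfortably in one QR code (arbitrary here)
chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
packets = [f"{i + 1}of{len(chunks)}:{c}" for i, c in enumerate(chunks)]

# The receiver can scan packets in any order and reassemble once all n have been seen.
received = {p.split(":", 1)[0]: p.split(":", 1)[1] for p in reversed(packets)}
reassembled = "".join(received[f"{i + 1}of{len(chunks)}"] for i in range(len(chunks)))
assert json.loads(gzip.decompress(base64.b64decode(reassembled))) == psbt_like
print(len(packets), "packets")
```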

Printed QR code… each packet can say it’s 1-of-n, and as it reads it, it figures out which one it is, and then it figures out which one is done or not done yet.

Signatures could be batched into a bigger output QR code on the device. So it’s not a huge bottleneck yet. Packetized QR codes are an interesting area. When you parse the QR code from Christopher Allen’s stuff, the QR code says what it is saying.

Recent attacks

There were some recent attacks that showed that even if you are using secure hardware, it doesn’t mean you’re secure. When you have an attacker that can get to your device, then you’re in trouble and they can do nasty attacks against the microcontroller. Another idea is to wipe the device every time. There are wallets that use secure elements, such as the Ledger hardware wallet.

On the hardware side, it’s wonderful. The team is strong on the hardware side. They came from the security industry. They know about certification of secure elements, they know how to hack microcontrollers, and they keep showing interesting attacks on Trezor and other wallets. They are extremely good on the hardware side, but that doesn’t mean they can’t screw up on the software side. It actually happened a few times.

One of the more scary attacks on Ledger happened at the end of last year, and it was change address related. When you are sending money to someone, what you expect is that you have your inputs— one bitcoin, say— and then you normally have two outputs, one for the payment and the other the change output. How do you verify this is the change address? You should be able to derive the corresponding private key and public key that will control that output. If you get the same address for the change output, then you are sure the money goes back to you. Normally what you do is provide the derivation path for the corresponding private key, because we have this hierarchical deterministic tree of keys. So the hardware wallet just needs to know how to derive the key. So you send to the wallet the derivation path as well, like the bip32 derivation path. Then the hardware wallet can derive the corresponding key and see that exactly this output will be controlled by this key, so it has the right address. …. So what Ledger did is that they didn’t do the verification…. they just assumed that if there was an output with some derivation path, then it is probably right. This means that the attacker could replace the address for this output with any address at all, just by putting in any derivation path, and all the money could go to the attacker: you’re sending some small amount of bitcoin and all the change goes to the attacker. It was disclosed last year, and it was discovered by the Mycelium guys because they were working on transfer of funds between different accounts on Ledger and they found that somehow it is too easy to implement this in Ledger, so something is going wrong here, and they discovered the attack. It was fixed, but who knows how long this problem was there. From the hardware wallet perspective, if someone doesn’t tell me it’s a change output or prove that to me, then I should say it’s not a change output. This was one problem.
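
What the fix amounts to, as a sketch: when the host claims an output is change and supplies a derivation path, the wallet must derive the key itself and check that it really produces that output; anything it cannot derive is treated as a spend and shown to the user. Real code would run BIP32 derivation from the wallet’s own seed; here a hypothetical lookup table stands in for that derivation, and all field names are illustrative.

```python
# Sketch of change-output verification (lookup table stands in for real BIP32 derivation).
def derive_scriptpubkey(path: str) -> str:
    # Hypothetical stand-in for deriving a key along `path` and building its scriptPubKey.
    wallet_keys = {"m/84'/0'/0'/1/7": "0014aabbccdd..."}
    return wallet_keys.get(path, "")

def classify_outputs(outputs):
    spends = []
    for out in outputs:
        claimed_path = out.get("change_path")
        if claimed_path and derive_scriptpubkey(claimed_path) == out["script"]:
            continue            # verified change, no need to show it to the user
        spends.append(out)      # everything else must be confirmed on the screen
    return spends

tx_outputs = [
    {"script": "0014aabbccdd...", "amount": 90_000, "change_path": "m/84'/0'/0'/1/7"},
    {"script": "0014deadbeef...", "amount": 10_000},
    {"script": "0014deadbeef...", "amount": 5_000, "change_path": "m/84'/0'/0'/1/99"},  # bogus claim
]
for out in classify_outputs(tx_outputs):
    print("CONFIRM SPEND:", out["amount"], "->", out["script"])
```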

There was a minor issue too, where they didn’t read the documentation of the microcontroller. The problem was, how do they verify the firmware that is running on this microcontroller? Basically…. when there is new firmware, the Ledger has a specific region in memory where they had a magic sequence of bytes- in particular for Ledger it was some hex magic number- and they store that there. What happens next is that when you’re updating the Ledger firmware, the Ledger first erases this, then flashes the firmware, and at the end verifies whether the signature of this firmware is correct. If the signature was generated by the Ledger key, then they put back this magic number into the register and then you are able to start this firmware and make it work. Sounds good, right? If you provide a wrong signature, then these magic bytes are all zeroes at that moment, so it won’t run this firmware, it will just roll back to the previous firmware. The problem is that if you read the documentation for the microcontroller, you see there are two different addresses to access this memory region where they store these magic bytes. One was completely blocked from external read-write, such that if you try to write to these registers then you fail because only the microcontroller can write it. But there was another one that had access to the same memory region and you could write any bytes there, and then you could make the microcontroller run any firmware you give it. Someone was able to play a game of snake on the Ledger hardware wallet as a result of this. If you get control of the display and the buttons with custom firmware, you can hide arbitrary outputs. You can fool the user in different ways, because you’re controlling what the user will see when he is signing. So I think it’s a pretty big problem. It was a hard problem to exploit, but still a problem.

Another super serious fuckup happened with Bitbox… you know this one? Some of the wallets have a nice hidden wallet feature. The idea is that if someone takes your hardware wallet and tells you to please unlock it, otherwise they will hit you with a wrench, you will probably unlock it, and then spend the money to the attacker because you don’t want to die. The hidden wallet feature is supposed to secure your money in such a way that there is also a hidden wallet that the attacker doesn’t know about, so they will only get a fraction of your money. Normally you use the same mnemonic but a different passphrase. The Bitbox guys did it slightly differently, and it was a bad idea to reinvent the protocol with their own rules. What they did was: you have this master private key, like a bip32 xprv. It has a chaincode and the private key in there. When you have the master public key, you have the same chaincode and just the public key corresponding to this private key. Given the master public key, you can derive all your addresses but not spend, and if you have the private key you can spend. For the hidden wallet they used the same key material but flipped the chaincode and the key. That means that if your software wallet knows the master public key of both the normal wallet and the hidden wallet, then it basically knows both the chaincode and the private key, so it can get all your money. If you are using this hidden wallet feature, then you are screwed.

Is the attacker not supposed to know about the hidden wallet feature? How is this supposed to work? In principle, this hidden wallet feature is questionable. As an attacker, I would keep hitting you with a wrench until you give me all the hidden wallets you have. I would keep hitting you until you give me the next password or next passphrase and so on; they would never trust that you don’t have a next wallet. The wallet would have to be sufficiently funded so that the attacker would think it is likely everything. You could also do the replace-by-fee race where you burn all your money to the miners ((hopefully you sign the fee possibilities correctly)). The attacker isn’t going to stop physically attacking you. But there’s still a big difference between physically hitting someone and killing them. Murder seems like a line that fewer people would be willing to cross.

TrueCrypt had plausible deniability in the encryption because you could have multiple volumes encrypted, but you didn’t know how many there are. It might be suspicious that a 1 GB encrypted volume has only a single 10 kb file… but the idea is to put something really incriminating along with your 10 BTC, and you just say, I’m so embarrassed, and this makes it seem more legitimate that it is actually your full coin amount.

Having secure hardware doesn’t mean you’re not vulnerable to attacks. I really think the best thing to do is to use multisignature.

Timelock

If you are willing to wait for a delay, you can use a timelock, or spend instantly with 2-of-2 multisig keys. You would enforce on the hardware wallet that it only makes these timelocked transactions. The attacker provides the address. It doesn’t matter what your wallet says; at best, your wallet has already locked it. You can’t spend it in a way that locks it, because presumably the attacker wants to use their own address. You could pre-sign the transaction and delete your private key– hope you got that fee right.

If you can prove to a bank that you will get $1 billion one year from now, then they will front you the money. You get the use of the money, you negotiate with the attacker and pay them a percentage. But this gets into K&R insurance stuff…. You could also use a bank, that is 2-of-2 multisig or my key but with a delay of 6 months. So this means every time you need to make a transaction, you go to the bank and make a transaction— you can still get your money back if the bank disappears, and then the attacker can’t get anything because they probably don’t want to go with you to the bank or cross multiple borders as you travel around the world to get all your keys or something.

The best protection is to never tell anyone how much bitcoin you hold.

How could you combine a timelock with a third-party? Timelock with multisig is fine.

We’re planning to add support for miniscript, which could include timelocks. But no hardware wallet currently enforces timelocks, to my knowledge.

Miniscript

Miniscript (or here) was introduced by Pieter Wuille. It’s not a one-to-one mapping to all possible bitcoin scripts; it’s a subset of bitcoin script, but it covers like 99.99% of all use cases observed on the network so far. The idea is that you describe the logic of your script in a convenient form such that a wallet can parse this information and figure out what keys or information it needs in order to produce a valid spend. This also works for many of the lightning scripts and also various multisig scripts. You can then compile this miniscript policy into bitcoin script. Then you can analyze it and say this branch is the most probable one that I will use most of the time, and order the branches in the script to make it more efficient to execute on average in terms of sigops. It can optimize the script in such a way that the fees or the data that you have when you’re signing with this script will be minimal according to your priorities. So if you’re mostly spending with this branch, then this will be superoptimal and this other branch might be a bit longer.

After implementing miniscript, it will be possible to use timelock. Until then, you need something like a raspberrypi with custom firmware. We can try to implement a timelock feature together tomorrow if you will still be here.

Pieter has a proof-of-concept on his website where you can type out policies and get an actual bitcoin script. I don’t think he has the demonstration of going the other way around; but it is described in many details how this all works. I think they are finishing their multiple implementations at the moment and I think it’s almost ready to really get started. Some pull requests have been merged for output descriptors. In Bitcoin Core you can provide a script descriptor and feed this into the wallet, like whether it’s segwit or legacy, nested segwit or native segwit etc. You can also use script descriptors for multisignature wallets, you can already use Bitcoin Core with existing hardware wallets… it’s still a bit of a trouble because you need to run a command line interface and it’s not super user friendly and not in the GUI yet, but if you’re fine with command line interfaces and if you’re willing to make a small script that will do it for you, then you’re probably fine. I think integration with Bitcoin Core is very important and nice that we have this moving forward.

Advanced features for hardware wallets

http://diyhpl.us/wiki/transcripts/breaking-bitcoin/2019/future-of-hardware-wallets/

Something we could do is coinjoin. Right now hardware wallets only support situations where all inputs belong to the hardware wallet. In coinjoin transactions, that’s not the case. If we can fool the hardware wallet into displaying something wrong, then we can potentially steal the funds. How can the hardware wallet tell whether an input belongs to it or not? It needs to derive the key and check whether it is able to sign. It requires some help from the software wallet to do this. The user needs to sign transactions twice for this protocol.

It is pretty normal to have multiple signing of a coinjoin transaction in a short period of time, because sometimes the coinjoin protocol stalls due to users falling off the network or just being too delayed.

Signing transactions with external inputs is tricky.

Hardware wallet proof of (non)-ownership

Say we’re a malicious wallet. I am not a coinjoin server, but a client application. I can put two identical user inputs, which is usually common in coinjoin, and you put them in the inputs and you put only one user output and then the others are other people’s outputs. How can the hardware wallet decide if an input belongs to the user or not? Right now there’s no way, so we trust the software to mark the inputs it needs to sign. The attack is to mark only one of the user inputs as mine; the hardware wallet signs it and we get the signature for the first input. The software wallet then pretends the coinjoin transaction failed, and sends to the hardware wallet the same transaction but with the second input marked as ours. So the hardware wallet doesn’t have a way to determine which inputs were its own. You could do SPV proofs to prove that an input is yours. We need a reliable way to determine whether an input belongs to the hardware wallet or not. Trezor is working on this with achow101.

https://github.com/satoshilabs/slips/blob/slips-19-20-coinjoin-proofs/slip-0019.md

We could make a proof for every input, and we need to sign this proof with a key. The idea is to prove that you can spend and prove that… it can commit to the whole coinjoin transaction to prove to the server that this input is owned, and it helps the server defend against denial of service attacks because now the attacker has to spend his own UTXOs. The proof can only be signed by the hardware wallet itself. You also have a unique transaction identifier. It’s sign(UTI||proof_body, input_key). They can’t take this proof and send it to another coinjoin round. This technique proves that we own the input. The problem arises from the fact that we have this crazy derivation path. Use a unique identity key, which can be a normal bitcoin key with a fixed derivation path. The proof body will be HMAC(id_key, txid || vout). This can be wallet-specific and the host may collect them for UTXOs. You can’t fake this because the hardware wallet is the only thing that can generate this proof.
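
A simplified sketch of that proof-body idea from SLIP-0019 (only the HMAC part; the real proposal also signs UTI||proof_body with the input’s key, which is omitted here, and the identity-key handling is illustrative):

```python
# Sketch of the ownership-proof idea (simplified from SLIP-0019; illustrative only).
import hmac, hashlib

id_key = b"\x07" * 32  # fixed identity key kept inside the hardware wallet

def proof_body(txid: str, vout: int) -> bytes:
    return hmac.new(id_key, bytes.fromhex(txid) + vout.to_bytes(4, "little"), hashlib.sha256).digest()

def is_mine(txid: str, vout: int, proof: bytes) -> bool:
    # The wallet can always re-create proofs for its own inputs; nobody else can forge them.
    return hmac.compare_digest(proof, proof_body(txid, vout))

my_proof = proof_body("aa" * 32, 1)
print(is_mine("aa" * 32, 1, my_proof))  # True  -> this input is ours
print(is_mine("bb" * 32, 0, my_proof))  # False -> external input, don't treat it as ours
```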

This could be extended to multisig or even MuSig key aggregation.

We can ask all participants of this coinjoin transaction to sign a certain message with the private key that controls this input. So we have some message, and a signature. The signature proves to us, to everyone, that the guy who put this message there actually controls the corresponding private key. This is the signature from the key that controls this input. On the message side, we can put whatever the hardware wallet wants. The hardware wallet is the guy who can sign this proof. He is the only one that controls this key. So what it can do is generate a particular message that it will be able to recognize afterwards. So I take the transaction hash and hash it together with my fixed key that I store in my memory, and then I get a unique message that looks random but I will be able to reproduce it whenever I see it and I will be able to make sure it was my input because I was the guy who generated this inside the message. Once we provide all these proofs for every input, our hardware wallet can go through each input and make sure which inputs are mine and which are not mine. This can then help detect when the software wallet is trying to fool you.

I hope hardware wallets will be able to do coinjoins fairly soon. Trezor will probably deploy it first because we’re working with them on that.

Q: What’s the use case for this? Say I want to leave something connected to the internet to make money from something like joinmarket? Or I want to be a privacy taker?

A: It works in both cases. If you want to participate in coinjoin and earn something- right now it doesn’t work this way. Right now all the revenue is going to the Wasabi Wallet guys; their servers take fees to connect people together. At the moment, if you want to use coinjoin to get some privacy, then you need this kind of protocol, so you either connect your hardware wallet to do this or you can still do it using the airgap.

In our case for example, I was thinking about having a screen on the computer and then a QR code and they can communicate over QR codes like this is a webcam and this is a screen. I was also thinking about audio output, like a 3.5mm jack from the hardware wallet to the computer. The bandwidth there is pretty good. You could also just play audio on a speaker. But then your hardware wallet needs a speaker, and it can just send your private key out. But a 3.5mm audio jack makes sense.

Q: What about coinshuffle, or coinswap?

A: I only know a little about this. For wasabi wallet, it doesn’t know which inputs correspond to which outputs because it registers them separately. You get back a blinded signature, and you give them a blinded output or something. They generate a blind signature and they don’t know what they are signing. It allows the coinjoin server to verify that: yes, I signed something, and this guy wants to register this output, so it looks right and I should put it into the coinjoin. For all this communication they use Schnorr signatures because there you can use blind signatures. In principle this means you have two virtual identities that are not connected to each other; your inputs and outputs are completely disconnected, even for the coinjoin server. They also generate outputs of the same value and then they make another section of the outputs with a different value so you can also get anonymity for some amount of change.

Wasabi wallet supports hardware wallets, but not for coinjoin. Then the only remaining benefit of using Wasabi is having complete coin control and being able to pick coins to send to people.

Q: How does Wasabi deal with privacy when fetching your UTXOs?

A: I think they are using the Neutrino protocol, they ask for the filters from the server and then they download blocks from random bitcoin nodes. You don’t need to trust their central server at that point. I think it’s already enabled to connect to your own node, awesome that’s great. Cool. Then now you can actually get it from your Bitcoin Core node.

Lightning for hardware wallets

Lightning is still in development, but it’s already running live on mainnet. We know the software isn’t super stable yet, but people were excited and they started to use it with real money. Not a lot of real money, it’s like a few hundred bitcoin on the whole lightning network at this point.

Right now lightning network works only with hot wallets with the keys on your computer. It’s probably not an issue for us, but for normal customers buying coffee daily this is a problem. It might be okay to store a few hundred dollars on your mobile wallet and maybe it gets stolen, that’s fine it’s a small amount of money. But for merchants and payment processors, then you care about losing coins or payments, and you want to have enough channels open so that you don’t have to close any and you won’t have any liquidity issues on the network. You have to store your private keys on an online computer that has a particular IP address, or maybe through tor, and certain ports open, and you’re broadcasting to the whole world how much money you have in those channels. Not very good, right? So this is another thing we’re working on that would be nice to get the lightning private keys on a hardware wallet. Unfortunately, here you can’t really use an airgap.

You could partially use cold storage. You could at least make sure that when the channel is closed that the money goes to cold storage. There’s a nice separation of different keys in lightning. When you open a channel, you should verify what address to use when closing the channel. Then even if your node is hacked, if the attacker tries to close the channels and get the money, then it fails because all the money goes to the cold storage.

But if he is able to get all the money through the lightning network to his node, then you’re probably screwed. To store these private keys on the hardware wallet is challenging, but you could have a …. he can also do a signed channel update. If you provide enough information to the hardware wallet that you are actually routing a transaction and that your balance is increasing, then the hardware wallet could sign automatically. If the amount is decreasing then you definitely need to ask the user to confirm. So we’re working on that.

Schnorr signatures for hardware wallets

The advantage of Schnorr signatures for hardware wallets is key aggregation. Imagine you’re using normal multisig transactions, like 3-of-5. This means that every time you’re signing the transaction and putting it into the blockchain, you see there’s 5 pubkeys and 3 signatures. It’s a huge amount of data, and everyone in the world can see that you’re using a 3-of-5 multisig setup. Terrible for privacy, and terrible in terms of fees.

With Schnorr signatures, you can actually combine these keys together into a single key. You can have several devices or signers that generate signatures, and then you can combine the signatures and corresponding public keys into a single public key and a single signature. Then most transactions on the blockchain would look similar: just a public key and a signature.
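
Roughly, the linearity behind this works as follows. This is a simplified two-party sketch that leaves out the rogue-key protections and nonce-exchange rules that real protocols such as MuSig add:

```latex
% Two signers with keys x_1, x_2 and nonces k_1, k_2 (scalars); G is the base point
P = P_1 + P_2 = (x_1 + x_2)\,G, \qquad R = R_1 + R_2 = (k_1 + k_2)\,G
e = H(R \parallel P \parallel m), \qquad s_i = k_i + e\,x_i
s = s_1 + s_2 \quad\Rightarrow\quad s\,G = R + e\,P
% so (R, s) verifies as an ordinary single-key Schnorr signature for P
```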

With taproot, it’s even better. You can add the scripting functionality there as well. If everything goes well, like in lightning where maybe you and your counterparty are cooperating, you don’t need to do a unilateral close. You could do a 2-of-2 multisig mutual close, and then it looks exactly like a public key and a single signature. If someone is not cooperating and things are going wrong, then you can reveal a branch in the taproot script that shows you are allowed to claim the money, but this script is only revealed if you have to go down this path. Otherwise you get a single public key and a single signature on the blockchain.

We can use chips on a single hardware wallet device with different architectures and heterogeneous security models: put three different chips with three different keys on there, and make sure we can spend the bitcoin only if each of these chips signs in the hardware wallet. So one can be a proprietary secure element, and then a few other microcontrollers in the same hardware wallet, and the output is only a single public key and a single signature. We could also do one chip from Russia and one chip from China. So even if there is a backdoor, it is unlikely that both governments will cooperate to attack your wallet. From the user’s perspective, it looks like only a single key and a single signature.

All these watchdogs and anti-tamper mechanisms and preventions of fault injections and stuff…. they are not implemented yet, but I know there’s a few companies that are working on security peripherals around the RISC-V architecture. So hopefully we will get secure elements soon. The only problem at the moment is that some of the companies take this open-source RISC-V architecture and put a bunch of proprietary closed-source modules on top of it, and this kind of ruins the whole idea. We need a fully open-source RISC-V chip. I would definitely recommend looking at RISC-V.

The IBM Power9s are also open-source at this point. Raptor Computing Systems is one of the few manufacturers that will actually sell you the device. It’s a server, so not ideal, but it is actually open-source. It’s a $2000 piece of equipment, it’s a full computer. So it’s not an ideal consumer device for hardware wallets. I believe the CPU and most of the devices on the board are open-source, including the core. It’s an IBM architecture. Okay, I should probably look at that. Sounds interesting.

Best practices

I want to talk about best practices, and then talk about developing our own hardware wallet. But first about best practices.

Never store mnemonics as plaintext. Never load the mnemonic into the memory of the microcontroller before it’s unlocked. There was something called the “frozen Trezor attack”. You take your Trezor, you power it on, and the first thing it does is load your mnemonic into memory. What you can then do is freeze the Trezor at a low temperature to make sure the memory containing the key stays readable. Then you update the firmware to custom firmware, which Trezor allows, and which is normally OK because your flash memory is wiped and they assume your RAM is decaying, but at cold temperatures the data stays there. Next, once you have your firmware there, you don’t have the mnemonic in flash, but you can still dump the leftover RAM over serial and get the data out of there. The problem was that they were loading the mnemonic into RAM before the PIN was checked. So never do that.

Ultimately, the one thing that prevents an attacker from spending your funds is the PIN code or whatever verification method you’re using. It is a bad idea to store the correct PIN code on the device: comparing the stored correct PIN code to an entered PIN code leaks information through a sidechannel. Instead, you want to use the PIN code and any other authentication method to derive the decryption key that decrypts the encrypted mnemonic. This way, you remove those sidechannel attacks and you don’t have the mnemonic as plaintext.
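
A minimal sketch of that idea, using PBKDF2 and AES-GCM from the third-party cryptography package purely for illustration; a real device would use its own hardware KDF and storage:

```python
# Never store or compare the PIN itself; stretch it into a key that decrypts
# the stored (encrypted) mnemonic. A wrong PIN simply fails to decrypt, so
# there is nothing to compare against.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def kdf(pin, salt):
    # 32-byte key derived from the PIN; iteration count is illustrative
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

def seal_mnemonic(mnemonic, pin):
    salt, nonce = os.urandom(16), os.urandom(12)
    blob = AESGCM(kdf(pin, salt)).encrypt(nonce, mnemonic.encode(), None)
    return salt, nonce, blob            # safe to keep in flash

def unseal_mnemonic(salt, nonce, blob, pin):
    # Raises on a wrong PIN (the authentication tag fails to verify).
    return AESGCM(kdf(pin, salt)).decrypt(nonce, blob, None).decode()
```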

Another nice feature that people should use is, have you heard about physically unclonable functions? It’s a really nice feature. Say you have a manufactured microcontroller… when the RAM is manufactured, there are certain fluctuations in the environment such that the RAM comes out slightly different for every bit. When you power on the microcontroller, the state of your memory will be random. Then you erase it and start using it as normal. But this randomness has a certain pattern, and this pattern is unclonable because you cannot observe it and it cannot be reproduced in another RAM device. You can use this pattern as a fingerprint, as the unique key of the device. This is why it’s called a physically unclonable function: it’s due to variations in the manufacturing process. You can use that together with a PIN code and other stuff to encrypt your mnemonic. When this device is booted, it will be able to decrypt the mnemonic. But extracting the full flash memory will not help, because the attacker still needs the physically unclonable function which is in the device. The only way to get that is to flash custom firmware, read the key and extract it over serial or whatever, which requires defeating both the read protection and the write protection.
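
A short sketch of combining the SRAM power-up pattern with the PIN-derived key, so a dumped flash image alone is useless; the helper names are hypothetical, and the error-correction step that real PUF designs need is omitted:

```python
import hashlib, hmac

def device_key(puf_pattern, pin_derived_key):
    """Mix the chip-unique PUF readout with the PIN-derived key. Real designs
    run the noisy PUF bits through error correction (fuzzy extraction) first;
    that step is left out of this sketch."""
    return hmac.new(puf_pattern, pin_derived_key, hashlib.sha256).digest()
```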

Q: Why not get rid of the PIN and the mnemonic storage, and require the user to enter that and wipe the device?

A: You could. So it’s a secure signing device, but not a secure storage device. So there’s secure storage, and then secure signing. So you store the passphrase or the wallet encrypted on paper or CryptoSteel in a bank vault or something with the mnemonic or something… and it’s encrypted, and you remember the passphrase. So you never store the passphrase anywhere.

The secure storage problem and the secure signing problem should be separated. So you could use replaceable commodity hardware for signing, and the mnemonic should be stored encrypted on paper or something. The problem with entering a mnemonic every time is that you could have an evil maid attack. The prevention here is to not have wifi or anything. But maybe the attacker is clever and they put in a battery pack and some kind of broadcast mechanism, but this goes back to having a disposable commodity wallet.

Besides RAM, you can use a piece of glass to make a physically unclonable function. You could put a piece of glass into the wallet; it has a laser that measures the imperfections in the glass and uses this to derive the decryption key for your mnemonic. This isn’t a fingerprint. Glass might degrade over time though. A piece of plastic can have a unique pattern that generates an interference pattern which can then be used to extract a decryption key. But that’s not going to happen for a while.

This other thing about evil maid attacks and hardware implants: how can we prevent that? There’s a way to put an anti-tamper mesh around the device such that whenever someone is trying to get into the device, like drilling a hole, your security measures are automatically activated. HSMs in the banking industry basically have the device constantly connected to power, and it monitors the current going through this conductive mesh. If they detect a change in current in this mesh, then they wipe the device. The problem here is that when external power is lost you have to rely on the battery, and before the battery is fully drained you have to wipe the keys anyway.

There’s a better way, where you don’t just check the current, but you also check the capacitance of the wires in the mesh and you use that as a unique fingerprint to generate the unique decryption key for your secrets. Even if someone drills a 100 micron hole in here, the decryption key changes and they will not be able to extract the secrets anymore. You can’t just buy a device like this at the moment, but it’s a very promising approach. It’s probably for people who really care about large amounts of money, because this is really expensive.

If you are going to wipe the keys, then you might as well pre-sign a transaction before you wipe the keys, to send the coins to backup cold storage or something.

Rolling your own hardware security

You should not expect that rolling your own hardware wallet is a good idea. Your device will probably not be as secure as Trezor or Ledger because they have large teams. But if there is a bug in the Trezor firmware, then attackers will probably try to exploit it across all Trezor users. Whereas if you have a custom implementation that might not be super secure, what are the chances that the guy who comes to your place finds a weird looking device, figures out it’s a hardware wallet, and then figures out how to break it? Another thing you can do is hide your hardware wallet in a Trezor shell ((laughter)).

Someone in a video suggested making a fake hardware wallet, and when it is powered on by someone, then it sends an alert message to a telegram group and says call 911 I’m being attacked. You could put this into the casing of Trezor. When the guy connects to it, it sends the message. Another thing you could do is install malware on the attacker’s computer, and then track them and do various surveillance things. You could also claim yeah I need to use Windows XP with this setup or something equally insecure, which is plausible because maybe you set this system up 10 years ago.

Options for making a hardware wallet prototype

What can we use to make a hardware wallet? If you think making hardware is hard, it’s not. You just write firmware and upload it. You can also use FPGAs, which are fun to develop with. I like the boards that support micropython, which is a limited version of python. You can talk to peripherals, display QR codes and so on. Trezor and ColdCard are using micropython for their firmware. I think micropython has a long way to go though, because as soon as you move away from what has already been implemented you end up having problems where you need to dive into the internals of micropython and write new C code or something. But if everything you need is already there, then it is extremely easy to work with.

Another option is to work with arduinos. This is a framework developed maybe 20 years ago, I don’t know exactly, and it’s used throughout the whole do-it-yourself community. It makes it extremely easy to start writing code. I know people who learned programming by using arduino. It’s C++, and not as user friendly as python, but the way they make all the libraries and all the modules is extremely friendly to the user. They didn’t develop this framework with security in mind, though.

There’s also the Mbed framework. They support a huge variety of boards. This framework was developed by ARM. You’re again writing C++ code, you compile it into a binary, and then when you connect the board you drag and drop the binary onto it. It’s literally drag-and-drop. Even more, you don’t need to install any toolchain. You can go online and use their browser compiler. It’s not very convenient, but for getting started and getting some LEDs blinking, you don’t even need to install anything on your computer.

Something else to pay attention to is the rust language, which focuses on memory safety. It makes a lot of sense. They also have rust for embedded systems. So you can start writing rust for microcontrollers, but you can still access the libraries written by the manufacturer, like for talking to the displays, LEDs and all this stuff. In the microcontroller world, everything is written in C. You can write your hardware wallet logic in rust, and then still have the bindings to the C libraries.

There’s a very interesting project called TockOS. This is an operating system written completely in rust, but you can keep writing your code in C or C++; the OS itself, the management system, can make sure that even if one of your libraries is completely compromised you’re still fine, because it can’t access memory from other programs. So I think that’s really nice. At the moment, I think there are not too many people that know rust, but that’s improving. Definitely a very interesting toolchain.

Another nice thing you can do with DIY hardware wallets, or not just DIY but with flexible hardware, is custom authentication. If you’re not happy with just a PIN code, you can use a longer password, or you can have an accelerometer and flip the hardware wallet in a certain way that only you know, or for example you can enforce one-time passwords and multi-factor authentication, as in the sketch below. You don’t only require the PIN but also a signature from your yubikey, and all these weird kinds of things, or even your fingerprint, but that’s a bad idea because fingerprints have low entropy and people can just take your finger anyway or steal your fingerprints.
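
A sketch of adding a one-time-password factor on top of the PIN, in the spirit of RFC 6238 TOTP (simplified: 30-second steps, 6 digits, HMAC-SHA1 as in most authenticator apps); the shared secret would live in the device and in your yubikey or phone app:

```python
import hmac, hashlib, struct, time

def totp(secret, t=None, step=30, digits=6):
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def unlock(pin_ok, supplied_code, secret):
    # Require both the PIN check and a fresh one-time code.
    return pin_ok and hmac.compare_digest(supplied_code, totp(secret))
```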

You could use yubikey, Google Titan, or even some banking cards. You could do multi-factor authentication and different private key storage devices to do multisig having nothing to do with bitcoin, to authenticate to get into a hardware wallet.

Note that all of these boards I am talking about are not super secure. They all use microcontrollers and they don’t have a secure element. You can get a really cheap board that costs like $2. Keep in mind, it’s manufactured and designed in China. It’s very widespread, but who knows, maybe there’s still a backdoor in the device somewhere. Also, it has bluetooth and wifi, so that’s something to be aware of. If you want a not very secure version of the Ledger X, then you could do it. It would probably be safer than storing the money on your laptop that is constantly connected. All the other developer boards tend to have simple application-specific microcontrollers. This one here has the same chip that Trezor has, so in theory you could port Trezor to this. Then you get the security of a Trezor wallet, a much larger screen and maybe some additional functionality that you might like. So it might make sense in some cases. I wouldn’t rely completely on DIY hardware for security.

There are also some cheap security-focused chips available on the market. The one that is used in the ColdCard is on the market, some sort of ECC-something from Microchip. You can also get it in the arduino form factor. It can provide you secure key storage for your bitcoin keys, and the rest can be done on the normal microcontroller.

No secure elements are available on the market at the moment that would allow you to use elliptic curve cryptography for the bitcoin curve. They haven’t been built yet.

Making a fully secure element that is completely open-source from the very bottom to the very top would cost like $20 million. What we are releasing is what is accessible to us at the moment. So what we can do is get a secure element that has a proprietary Java Card OS on top of it, and then on top of this we can write a bitcoin-specific applet that can talk to the hardware and use all the elliptic curve accelerators and hardware features, and that applet can still be open-source. Because we don’t know exactly how this Java Card OS works, it’s not fully open-source; we’re just open-sourcing everything we can. In the ColdCard, they can’t use elliptic curve cryptography on the secure key storage element, but in other secure elements, yes, you can run ECDSA and other elliptic curve cryptography.

My design choices for my hardware wallet

I wanted to talk about how we designed our hardware wallet and get your feedback about whether this makes sense or not and how we can improve, especially Bryan and M. I also want to make sure that I don’t build something that isn’t very usable. After that, if you guys have your laptops with you, we can set up the development environment for tomorrow, and we can even give out the boards to try in the evening and take them home. If you just promise not to steal it, you can take some home if you want to, if you’re actually going to try it. Note that I will be disturbing you all the time tomorrow, so tonight is your chance to look at this alone. Tomorrow we can work on some crazy hardware wallets.

What we decided to do is this: we have some hardware partners that can manufacture custom chips for us. We can take the components of the normal microcontrollers and put them in a nice way into a single package. So what components do hardware wallets normally have? The ones that also have a secure element, at least. We decided first to go open-source. We’re not going to work on a bare metal secure element with a closed-source OS that we can’t open-source due to NDAs. So we are using a Java Card OS, and even though we don’t know how it works internally, it seems to work, so it should be pretty secure hopefully. On top of that, we’re writing a bitcoin applet that works with the secure element. We can only put in there stuff that we sign and that we upload using our administration keys. This means that we can develop and upload software, and then you can suggest certain changes that we can enable for upload. It requires certain communication with us first. I can’t give the keys to anyone else, because if they get leaked then we’re in trouble with the government, because they are worried about criminals that will compromise super secure communication channels or use them to organize their illegal activities. This is unfortunately the state of the industry. We wanted to develop an open-source secure element, but for now we can just make a hardware wallet that is perhaps more secure and a bit more flexible.

So we are using the Java Card smartcard, and then we have two other microcontrollers. We also have a somewhat large display to show all the information about the transaction and format it nicely, and to enable this QR code airgap we need to be able to display pretty large QR codes. We have to use another general purpose microcontroller because only they can handle large screens at the moment. Then we have a third microcontroller that does all the dirty work that is not security-critical, like the communication over USB, talking to the SD cards, processing images from the camera to read the QR codes and things like that. This makes for complete physical isolation of the large codebase that is not security-critical and that handles all of the user data and the data from the computer. We also have another microcontroller that is dedicated to driving the display so that you can have a somewhat trusted display. All of these microcontrollers are packed into the same package. Inside, we have layered semiconductor devices in the package, and they are layered in a secure sandwich structure. The top one is the secure element, and the two others are underneath. So in theory, heat from one chip in the package can be detected in the other, but the smartcard presumably has a lot of work done on power analysis and sidechannel prevention.

Even if the attacker gets access to this chip and decaps it, he will first hit the secure element, which has all of these anti-tampering mechanisms like watchdogs and voltage detection and memory mapping and other stuff, and it shares this capability with the other chips. They obviously share the same voltage supply, so the other microcontrollers gain a little bit of the secure element’s protection. Even if the attacker tries to get into the memory of the not very secure microcontrollers, the secure element is in the way and it is hard to get underneath it. The previous customers they made this packaging for were satellite projects, where you have issues with radiation and stuff. For us this helps because it means, first, that no electromagnetic radiation goes from the chip to the outside, which eliminates some sidechannel attacks, and secondly that only limited electromagnetic radiation gets into the chip. And as I said, they are all in the same package. In our development boards, they are not in the same package because we don’t have the money to develop the chip yet, but we will start soonish. Still, it has all of this hardware already.

The normal microcontrollers have these debug interfaces. Even if they are disabled with the security bits, the attacker can still do a lot of race condition stuff and actually even re-enable them. So we included a fuse in there that physically breaks the connection from the JTAG interface to the outside world. Just having the chip, the attacker is not able to use the JTAG interface. This is a nice thing that is normally not available. On the developer board, we expose a bunch of connections for development purposes, but in the final product the only connections will be the display, the touch screen and the camera, and for the camera we haven’t decided which microcontroller we want to use. Normally the camera is used to scan the QR code, which is obviously communication related and should be on the communication microcontroller. But we can also take a picture of dice as user-defined entropy.

We have a completely airgapped mode that works with QR codes to scan transactions and QR codes to transmit signatures. We call this the M feature because he’s the one that suggested this. It’s so nice and fluent how it works that I have this nice video I can show you. So first we scan the QR code of the address we want to send to; the watchonly wallet on the phone knows all the UTXOs, prepares the unsigned transaction and shows it to the hardware wallet. Next, on the hardware wallet we scan that QR code. Now we see information about the transaction on the hardware wallet, we can sign it, and then we get the signed transaction back. We then scan it with the watchonly wallet, and then we broadcast it. The flow is pretty convenient.

This is uni-directional data flow. It only goes in one direction and is highly controlled. It is also limited in terms of the data you can pass back and forth, which is both a good thing and a bad thing. For a large transaction, it might be a problem. For small data, this is great. We haven’t tried animated QR codes yet. I was able to do a PSBT with a few inputs and outputs, so it was pretty large. With partially-signed bitcoin transactions and legacy inputs, your data is going to blow up in size pretty fast. Before segwit, you had to put a lot of information into the hashing function to derive the sighash value. Now with segwit, if the wallet lies about the amounts, it will just generate an invalid signature. In order to calculate the fee on the hardware wallet, I have to pass the whole previous transaction, and it can be huge. Maybe you get the coins from an exchange, and the exchange might create a big transaction with thousands of outputs, and you would have to pass all of it to the hardware wallet. If you are using legacy addresses, you will probably have trouble with QR codes and then you will need to do animations. For segwit transactions and a reasonable number of inputs and outputs, you’re fine. You could just say, don’t use legacy anymore, and sweep those coins. The only reasonable attack vector there is a mining pool tricking you into broadcasting a transaction that overpays the fee and robbing you that way; for anyone else it hurts you but doesn’t profit the attacker, so there isn’t really an incentive. You can also show a warning saying that if you want to see the fee, use bech32, or at least check it here. You can show the fee on the hot machine, but that’s probably compromised. A sketch of the fee check is below.
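
A hedged sketch of why the fee check differs between input types; the dict layout is purely illustrative (not a real PSBT parser), though the witness_utxo / non_witness_utxo distinction follows BIP174:

```python
import hashlib

def tx_hash(raw_tx):
    # double SHA-256 of the serialized previous transaction (its txid, internal byte order)
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def input_amount(psbt_input):
    if "witness_utxo" in psbt_input:
        # Segwit: the amount is committed in the BIP143 sighash, so a host
        # that lies about it only causes an invalid signature.
        return psbt_input["witness_utxo"]["amount"]
    # Legacy: the device must be handed the whole previous transaction (which
    # can be huge) and check that it really hashes to the referenced txid.
    prev = psbt_input["non_witness_utxo"]
    assert tx_hash(prev["raw"]) == psbt_input["prev_txid"]
    return prev["outputs"][psbt_input["prev_index"]]["amount"]

def fee(psbt):
    return sum(input_amount(i) for i in psbt["inputs"]) - sum(
        o["amount"] for o in psbt["outputs"])
```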

With the airgapped mode, you have cold storage, it’s an airgap, and it’s reasonably secure. Nothing is perfect; there will be bugs and attacks on our hardware wallet as well. We just hope to make fewer mistakes than what we see at the moment. Also, another thing I didn’t talk about is the …. within smartcards, we have to reuse the Java Card OS just because applets have to be written in java. Then we decided to go with embedded rust, and even though it’s a little harder to develop and not many people know rust, this part is really security critical and we don’t want to shoot ourselves in the foot with mistakes there. Also, it brings a lot of isolation and best practices from the security world. Third, the communication microcontroller is open to developers and runs micropython. This means that if you want custom functionality on the hardware wallet, like a custom communication scheme such as bluetooth, which we don’t want, but if you want it then you’re welcome to do it yourself: you write a python script that handles this communication type, and then just use it. Another case is if you’re using a particular custom script that we’re not aware of, and we’re showing this transaction in an ugly way that is maybe hard to read; then what you can do is add some metadata and send it to the other microcontroller to display something. It will still be marked as not super trusted because it’s coming from this python app, but if you trust the developers of the app, then you can make a few choices. At least you can see some additional information about this transaction if our stuff wasn’t able to parse it; like if it’s a coinjoin transaction that is increasing your anonymity set by 50 or whatever, you still see the inputs and outputs, plus the extra information. So this might help the user experience.

Besides the airgapped mode, there is another completely different use case where you use this as a hot wallet, like a lightning hardware wallet. You keep the wallet connected to the computer, the connected computer runs a watchonly lightning wallet, and then it communicates with the hardware wallet for signatures on transactions that strictly increase the balance of your channel. Then you could be in the loop for any transactions reducing your balance. Note that this is not an airgapped mode, it’s a different security model. You probably want most of your funds in cold storage, and then some amount can be in this hot hardware wallet device. Also, one of the problems with coinjoin is that it can take some time, like a few minutes, especially if you need to retry multiple times, so that is also sort of a hot wallet use case.

You probably don’t want to carry your hardware wallet around all the time, or go home to press a button to confirm your lightning payment or something. So what you can do is setup a phone with an app that is paired to the hardware wallet, so the hardware wallet is aware of the app and the app stores some secret to authenticate itself and then you can set a limit in the morning for your phone. So the hardware wallet should authorize payments up to a certain amount if it’s coming from this secure app. Even more, you can set particular permissions to the mobile app. Like, the exchange wallet should be able to ask the hardware wallet for an invoice but only an invoice to pay you. Same with bitcoin addresses, it can only ask for bitcoin addresses for a particular path. You can make it more flexible and get better user experience if you want to. The hardware wallet is connected to the internet, and the requests are forwarded through the cloud in this scheme.

The first thing that we are releasing is the developer board and the secure element. Another thing I want to discuss is the API of the first version of the secure element. In particular, as developers what would you like it to have? I have a few ideas about what may be useful. Maybe you can think of something in addition.

Obviously, it should store the mnemonic, passwords, passphrases, and it should be able to do all the bip32 derivation calculations and store bip32 public keys. We also want ECDSA and the standard elliptic curve signing that we’re using at the moment. Also, I want to include this anti-nonce attack mitigation protocol. We don’t want to trust the proprietary secure element; we want to be sure that it is not trying to leak our private keys using a chosen nonce attack. So we want to enforce this protocol. Also, we want to use Schnorr signatures, in particular with Shamir secret sharing. This would allow you to split the Schnorr key into shares and still get a signature out of it that verifies against the combined key. For key aggregation with Shamir, you need a fancy function to combine each of the contributions from the parties.

It makes sense to use Shamir here because you can do thresholds. With MuSig you can combine keys too, but that’s just k-of-k. Verifiable Shamir secret sharing uses pedersen commitments. Say we have 3 keys living on our hardware, and these other parts are on some other backup hardware elsewhere. These can all communicate with each other. Each one generates their own key, and then using pedersen commitments and some other fancy algorithms, they can be sure that each of them ends up with a share of some common secret but none of them knows what the common secret is; a toy version of this is sketched below. So we have a virtual private key, a full secret, that is split up with the Shamir secret sharing scheme across all of these guys, and none of them knows the full secret. The only problem is how do you back it up? You can’t see a mnemonic corresponding to this. Nobody knows this key, so nobody can display it. So the only way to do it is to somehow- let’s say this guy controls the display, then it can opt to display its own key, but then what about the others? They don’t want to communicate it to this guy to show it, because then he could reconstruct it…. so you could use something like a display that you attach to the different devices to show each share, but then the display driver can steal all of them. Ideally the display driver is a very simple device that only has memory and a display. You could also use disposable hardware to generate the private key, but then how do you recover it?
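
A toy sketch of that kind of distributed key generation, with the verification step (the Pedersen or Feldman commitments) left out; the party indices and threshold are arbitrary illustrative choices:

```python
import secrets

Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order
PARTIES, THRESHOLD = [1, 2, 3], 2        # any 2 of the 3 final shares reconstruct the key

def deal(secret, threshold, parties):
    """Shamir-share `secret`: a random polynomial of degree threshold-1 with
    `secret` as its constant term, evaluated at each party's index."""
    poly = [secret] + [secrets.randbelow(Q) for _ in range(threshold - 1)]
    return {i: sum(c * pow(i, k, Q) for k, c in enumerate(poly)) % Q for i in parties}

# Each device acts as a dealer of its own random secret and sends one share
# of it to every other device.
dealer_secrets = {i: secrets.randbelow(Q) for i in PARTIES}
dealings = {i: deal(dealer_secrets[i], THRESHOLD, PARTIES) for i in PARTIES}

# A device's final share is the sum of the shares it received; these are
# Shamir shares of the sum of all dealer secrets, i.e. of the joint key,
# which is never assembled on any single device.
final_share = {i: sum(dealings[j][i] for j in PARTIES) % Q for i in PARTIES}
joint_secret = sum(dealer_secrets.values()) % Q   # computed here only to illustrate;
                                                  # any THRESHOLD final shares interpolate to it
```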

Another idea is seedless backups: if one of the chips breaks, and you’re not using 5-of-5 but say 3-of-5, then you could still make these guys communicate with each other, re-negotiate shares of the same key, generate a new shard for the new member and then replace all the old parts of this Shamir secret sharing scheme with new ones. Instead of the old polynomial, you choose a new polynomial, and there’s a way such that each of these devices can switch to its new share, and instead of the broken or compromised one we get the new member. This is also useful if one of the shards or keys gets compromised or lost, as long as you have enough keys to reconstruct it.

For signing, you don’t need to reassemble the Shamir secret share private key because it’s Schnorr. So the signatures are partially generated, and then they can be combined together into the correct final signature without reassembling the private key on a single machine.

Say you are doing SSS over a master private key. With each shard, we generate a partial signature over a transaction. A Schnorr signature’s s value is the random nonce plus the hash times the private key, and it’s linear. We can apply the same recombination function used for the key shares to the signature parts: apply it to s1, s2 and s3 and you get the full signature S this way, without ever combining the parts of the key into the full key. A numeric sketch of this is below.
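
A numeric sketch of that recombination, working only with the scalar part of the signature; the curve points, nonce commitments and the other machinery of real threshold-Schnorr protocols (e.g. FROST) are left out, and the indices and threshold are illustrative:

```python
import secrets

Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order
PARTIES, T = [1, 2, 3], 2

def share(secret, t, parties):
    # Shamir shares: random degree-(t-1) polynomial with `secret` as constant term.
    poly = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, Q) for k, c in enumerate(poly)) % Q for i in parties}

def interpolate_at_zero(points):
    # Lagrange interpolation at x=0: the same recombination function used for key shares.
    total = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num, den = num * (-xj) % Q, den * (xi - xj) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

x = secrets.randbelow(Q)         # master private key (never assembled in a real setup)
k = secrets.randbelow(Q)         # nonce
e = secrets.randbelow(Q)         # stands in for the challenge hash H(R || P || m)

x_shares, k_shares = share(x, T, PARTIES), share(k, T, PARTIES)
partial_sigs = {i: (k_shares[i] + e * x_shares[i]) % Q for i in PARTIES}

# Recombine any T partial signatures; the full key never exists on one machine.
s = interpolate_at_zero({i: partial_sigs[i] for i in (1, 3)})
assert s == (k + e * x) % Q      # identical to signing with the full key
```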

The multisignature is nice when you’re not the only owner of the private keys, like escrow with your friends and family or whatever. The Shamir secret sharing scheme with Schnorr is great if you are the only owner of the key, so you only need the pieces of the key. There’s a paper I can give you that explains how the shards are generated without the virtual master private key ever being generated on a single machine. Multisig is better for commercial custody, and Shamir is better for self-custody and cold storage. Classic multisignature will still be available with Schnorr; you don’t have to use key aggregation, you could still use CHECKMULTISIG. I think you can still use it. ((Nope, you can’t- see the “Design” section of bip-tapscript or CHECKDLSADD.)) From a mining perspective, CHECKMULTISIG makes it more expensive to validate the transaction because it has many signatures.

I was thinking about using miniscript policies to unlock the secure element. To unlock the secure element, you could just use a PIN code, or you could require signatures from other devices or one-time codes from some other authentication mechanism. We needed to implement miniscript anyway. We’re not restricted to bitcoin sigop limits or anything here, so the secure element should be able to verify this miniscript script with whatever authentication keys or passwords you are using. It can even be CHECKMULTISIG with the 15 key limit removed. A hypothetical policy is sketched below.
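
A hypothetical illustration of the idea: describe the unlock condition as a miniscript-style policy over authentication keys and let the device interpret it. The key names are placeholders, not a real API.

```python
# Unlock with the PIN-derived key alone, or with any 2 of 3 auxiliary
# authenticators (yubikey, phone app, backup card). Illustrative only.
UNLOCK_POLICY = "or(pk(PIN_KEY),thresh(2,pk(YUBIKEY),pk(PHONE_APP),pk(BACKUP_CARD)))"
```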

I will send Bryan the paper on linear secret sharing for Schnorr threshold signatures, where each shard can be used to generate partial signatures that can later be recombined. Maybe also pedersen’s paper on verifiable secret sharing from 1991. And then there’s a response to that paper about how one party can bias the public key and how to fix it, although biasing the public key doesn’t really matter here, but whatever. GJKR'99 section 4 has it. It’s not a Shamir secret sharing scheme if you don’t reassemble the private key; it’s just a threshold signature scheme that happens to use linear secret sharing. Calling it something else is dangerous because it might encourage people to implement SSSS where the key can be recovered.

Q: What if you got rid of the secure element, and make the storage ephemeral and wipe it on shutdown? Basically what tails does. The user has to enter their mnemonic and passphrase. You can even save the mnemonic and just require a passphrase. Would it be easier and cheaper if we get rid of the secure element?

A: Well, what about an evil maid attack? He uploads certain firmware to your microcontroller. How do you verify he didn’t take the password?

Q: It is possible, but this evil maid has to come back and get access again in the future. But in the mean time, you could destroy it, and take it apart and confirm and install software each time.

A: I really want a disposable hardware wallet. As long as the signature is valid, that is all you need. You’re going to destroy the hardware device after using it.

Q: If you’re using this approach without a secure element, what about a hot wallet situation like for lightning?

A: If they don’t have access to the hardware wallet, then it is fine. But if they do have access, then it will be even worse in the scenario without the secure element. It is using the private keys for the cryptographic operations, and if that happens on the normal microcontroller then you can observe power consumption and extract the keys. It’s pretty hard to get rid of this. Trezor is still vulnerable to this problem. The Ledger guys discovered a sidechannel attack where, when you derive a public key from your private key, you leak some information about your private key. They are working on fixing this, but basically to derive a public key you need to unlock the device, and that means you already know the PIN code, so it’s not an issue for them. But in automatic mode, if your device is locked but still doing cryptography on a microcontroller, then it is still an issue. I would prefer to only do cryptography operations on a secure element. If we implement something that can store the keys and erase the keys as well, and it can do cryptographic operations without sidechannels, then I think you can move to a normal developer board or an extended microcontroller and do everything there already. I think it’s not purely a constraint, it’s more like– it has some limitations because we can’t connect the display to the secure element, but it’s tradeoffs that we’re thinking about. For disposable use, I think it’s perfectly fine to make a very cheap disposable thing. The microcontrollers used in Trezor cost like $2 or something like that. So we can just have a bunch of them, buy them in bulk on digikey or something by the thousands, and then you put them on the board, and you need to solder a bit. Ideally you want a chip with a DIP package or a socket and you just put the controller there. It would be nice to try it out.

Q: If you are worried about an evil maid attack, then you could use a tamper-evident bag. The evil maid could replace your hardware wallet; but in theory you would know because of the tamper-evident bag. You can use glitter, take a picture of it, and store that, and when you go back to your device you check your glitter or something. Or maybe it has an internet connection and if it’s ever tampered with, then it says hey it was disconnected and I don’t trust it anymore.

These fingerprint scanners on laptops are completely stupid. Besides, an easy place to find those fingerprints is on the keyboard itself ((laughter)). You should be encrypting your hard drive with a very long passphrase that you type in.

Statechains

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-07-statechains/

With statechains, you can transfer the private key to someone else, and then you enforce that only the other person can make a signature. Ruben came up with an interesting construction where on top of this you can do lightning transactions where you only transfer part of the money, and you can do rebalancing and stuff. But it requires Schnorr signatures and blind signatures for Schnorr, and if it does happen, it won’t be for a while.

How we can help with this is that we can provide functionality on the secure element that extracts the key and then– such that you are sure that the private key has moved. You still need to trust the manufacturer, but you can diversify that trust, so that you don’t have to fully trust the federation that is enforcing this policy.

Follow-up

https://bitcoin-hardware-wallet.github.io/

https://www.youtube.com/watch?v=rK0jUeHeDf0

https://twitter.com/kanzure/status/1145019634547978240

see also:

Background

A bit more than a year ago, I went through Jimmy Song’s Programming Blockchain class. That’s where I met M; he was the teaching assistant. Basically, you write a python bitcoin library from scratch. The API for this library and the classes and functions that Jimmy uses is very easy to read and understand. I was happy with that API, and afterwards I wanted to move out of quantum physics to working on bitcoin. I started writing a library that had a similar API: I took an arduino board and wrote a similar library that had the same features, plus extra things like HD keys and a few other things. I wanted to make a hardware wallet that is easy to program, inspired by Jimmy’s class. That’s about when I met and started CryptoAdvance. Then I made a refactored version that doesn’t require arduino dependencies, so now it works with arduino, Mbed, bare metal C, with a real-time operating system, etc. I also plan to make a micropython binding and an embedded rust binding.

Introduction

I am only considering three hardware wallets today: Trezor, Ledger and ColdCard. Basically all others have similar architectures to one of these. You might know that Trezor uses a general purpose microcontroller, like those used in microwaves or in our cars. These are more or less in every device out there. They made the choice to use only this, without a secure element, for a few reasons: they wanted to make the hardware wallet completely open-source and to be able to say for sure what is running on the hardware wallet. I do not think they will have any secure element in the hardware wallet unless we develop an open-source secure element. I think our community could make that happen. Maybe we could cooperate with Trezor and Ledger and at some point develop a secure element based on the RISC-V architecture.

Secure elements

I think we need several types of secure elements for different security models. You want to diversify the risk. You want to use multisig with Schnorr signatures. You want different devices with different security models, and ideally each key should be stored on different hardware, with a different security model for each one as well. How will the vulnerabilities appear? It could be a poorly designed protocol; hopefully you won’t have a bug in that, but sometimes hardware wallets fail. It could be a software vulnerability where the people who wrote the software made a mistake, like overflows or implementation bugs or some not-very-secure cryptographic primitives like leaking information through a sidechannel. The actual hardware can be vulnerable to hardware attacks, like glitching. There are ways to make microcontrollers not behave according to their specification. There can also be hardware bugs, which happen from time to time, because the manufacturer of the chip can also make mistakes; most of the chips are still designed not automatically but by humans. When humans place transistors and optimize this by hand, they can also make mistakes. There’s also the possibility of government backdoors, which is why we want an open-source secure element.

Backdoors…

There was a talk some time ago about instructions in x86 processors where basically they have a specific set of instructions that is not documented; they call it Appendix H. They share this appendix only with trusted parties ((laughter)). Yeah. These instructions can do weird things, we don’t know exactly what, but some guy was able to find all of the instructions. He was even able to escalate privileges from the user level, not just to the root level but to ring -2, complete control that even the operating system doesn’t have access to. Even if you run tails, it doesn’t mean that the computer is stateless. There is still a bunch of crap running under the OS that your OS doesn’t know about. You should be careful with that one. On librem computers, they have not only PureOS but also Qubes that you can run, and also they use the – which is also open– basically you can … check that it is booting the real tails. The librem tool is called heads, not tails. You should look at that if you are particularly paranoid.

Librem computers have several options. You can either run PureOS or you can run Qubes or tails if you want. The librem key checks the bootloader.

Decapping

Ledger hardware wallets use a secure element. They have a microcontroller also. There are two different architectures to talk about. Trezor just uses a general-purpose MCU. Then we have ColdCard, which is using a general-purpose MCU plus, on top of that, a secure– I wouldn’t call it a secure element, but it is secure key storage. It is available on the market, and the ColdCard guys were able to convince the manufacturer to open-source this secure key storage device. So we hope we know what is running on the microcontrollers, but we can’t verify that. If you are given the chip, you can’t verify it. We could in theory do some decapping. With decapping, imagine a chip: you have some epoxy over the semiconductor and the rest of the space is just wires that go into the device. If you want to study what is inside the microcontroller, you put it into a laser cutter, first make a hole in the package, and then you can put in nitric acid heated to 100 degrees, and it will dissolve all the plastic around it. Then you have a hole down to the microcontroller, and you can put this into an optical microscope or electron microscope or whatever you have and actually study the whole surface. There was an ATmega that someone decapped and it was still able to run. There was a talk at defcon where the guys showed how to make DIY decappers. You could take a trezor or other hardware wallet and put it on a mount, and then you just direct the stream of nitric acid at the microcontroller and it dissolves the plastic, but the device itself can still operate. So while it is doing some cryptography, you can get to the semiconductor level, put it under the microscope and observe how exactly it operates. Then when the microcontroller operates, you can watch it not only with the microscope but even also with— like when a transistor flips between 0 and 1, it has a small chance of emitting a photon. So you can watch for emitted photons and there’s probably some information about the keys given by that. Eventually you would be able to extract the keys. You cut away the majority of the plastic, then you put in the nitric acid to get to the semiconductor level. In this example, the guy was looking at the input and output buffer of the microcontroller. You can also look at the individual registers. It’s slightly different for secure elements or secure key storage, though. They put engineering effort into the hardware side to make sure that it is not easy to do any decapping. When the cryptography is happening on the secure element, they have certain regions that are dummy parts of the microcontroller. They are operating and doing something, but they are trying to fool you about where the keys are. They have a bunch of other interesting things there. If you are working with security-focused chips, then it’s much harder to determine what’s going on there. The other thing, though, is that in the ColdCard the key storage device is pretty old, and that is why the manufacturer was more willing to open-source it. If we are able to see what is running there, that means the attacker will also be able to extract our keys from there. So being able to verify the chips also shows that they are not secure for users. Being able to verify with decapping might not be a good thing. So it’s tricky.

Secure key storage element (not a secure element)

Normally the microcontroller asks the secure storage to hand over the key, moving it to the main microcontroller; the cryptographic operations happen there, and then the key is wiped from the memory of the microcontroller. How can you get this key from the secure key storage? Obviously you need to authenticate yourself. In ColdCard, you do this with a PIN code. How would you expect the wallet to behave when you enter the wrong PIN code? In Trezor, you increase a counter and increase the delay between PIN entries; in Ledger you have a limited number of entry attempts before your secrets get wiped. ColdCard is using the increased delay mechanism. The problem is that this time delay is enforced not by the secure element but by the general-purpose microcontroller. So to guess the correct PIN code, if you are able to somehow stop the microcontroller from increasing the delay, you would be able to bruteforce the PIN code. To communicate with the key storage, the microcontroller has a secret stored in it, and whenever it uses this secret to talk to the key storage, the key storage will respond. If the attacker can get this secret, then he can throw away the microcontroller, use his own equipment with that secret and try all the PIN combinations until he finds the right one. This is because of the design choice they made where basically you can have any number of tries for the PIN code on the secure element. This particular secure key storage actually has an option to limit the number of PIN entry attempts. But the problem is that it is not resettable. This means that over the whole lifetime of the device you can have a particular number of wrong PIN entries, and once you reach this limit you throw away the device. This is a security tradeoff that they made. I would actually prefer to set this limit to say 1000 tries; I doubt that I would fail to enter the PIN code 1000 times. For the ColdCard, they use a nice approach for PIN codes where you have it split into two parts. You can use arbitrary length, but they recommend something like 10 digits. After the first part they show some verification words, which helps you verify that the secure element is still the right one. So if someone swaps your device for another one, in an evil maid attack, then you would see different words and you can stop entering the PIN. There was actually an attack on the ColdCard to bruteforce this. The words are deterministic from the first part of your PIN code. Maybe you try multiple prefixes, write down the words, and make a table of this; then when you do the evil maid attack, you put that table into the fake device so that it shows the expected words to the user. You need to bruteforce the first few PIN digits to get those words, but the later words are hard to figure out without knowing the PIN code.

Evil maid attacks

No matter what hardware wallet you have, even a Ledger with the nicest hardware: say I take it, put it in my room, and put a similar device in its place that has wireless connectivity to an attacker’s computer, so it is bridged to the real Ledger. I can have this bidirectional communication and do a man-in-the-middle attack. Whatever the real Ledger shows, I can display on this device and fool the user. I can become the ultimate man-in-the-middle here. Whatever the user enters, I can see, so I know the PIN code and then I can do everything. There are two ways to mitigate this attack: you can use a Faraday cage every time you’re using the hardware wallet, or you can enforce it in the hardware, and I think the Blockstream guys suggested this: you have a certain limit, the speed of light, so you can’t communicate faster than the speed of light, right? Your device is here. If you are communicating with this device here, then you can enforce that the reply should come within a few nanoseconds. Then it is not possible by the laws of physics to get the signal to your real device and back.

What about using an encrypted communication channel with the hardware wallet? The attacker is trying to get the PIN code. It’s still vulnerable. Instead of entering the PIN code directly, you use a computer: you have your hardware wallet connected to it, and in the potentially compromised situation we have this malicious attacker that has wireless connectivity to the real hardware wallet. Say our computer is not compromised. Over this encrypted channel, you can get a mask for your PIN code, like some random numbers that you need to add to your PIN code in order to enter it on the device. Your MITM doesn’t know about this unless he compromises your computer as well. It’s a one-time pad situation: every time you enter the PIN code, you enter the PIN code plus this number. That could work, like a one-time pad; a sketch is below. When you are operating with your hardware wallet, you probably want to sign a particular transaction. If the attacker is able to replace this transaction with his own, and display your own transaction to you, then you are screwed. But if the transaction is passed encrypted to the real hardware wallet, then he can’t replace it because it would be unauthenticated.
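
A minimal sketch of the masked-PIN idea: the real device (over the encrypted channel) gives you a fresh random mask, you type pin+mask digit by digit mod 10, and the device strips the mask. A man-in-the-middle watching the keypad learns nothing about the real PIN as long as each mask is used only once.

```python
import secrets

def new_mask(length):
    return [secrets.randbelow(10) for _ in range(length)]

def masked_entry(pin, mask):
    # What the user actually types on the possibly-fake keypad.
    return "".join(str((int(d) + m) % 10) for d, m in zip(pin, mask))

def unmask(entered, mask):
    # What the real hardware wallet recovers.
    return "".join(str((int(d) - m) % 10) for d, m in zip(entered, mask))

mask = new_mask(6)
assert unmask(masked_entry("123456", mask), mask) == "123456"
```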

Ephemeral disposable hardware wallets

Another way to get rid of the secure key storage is to make the whole hardware wallet ephemeral, so you focus on guarding the seed and the passphrase and enter it each time. The hardware wallet is disposable then. You might never return to use that hardware wallet again. I was thinking about this regarding secure generation of mnemonics on a consumer device. If you have a disposable microcontroller but everything else we’re sure doesn’t have any digital components or memory, then each time we could replace the microcontroller with a new one and the old one we just crush with a hammer. If you already remember the mnemonic, well the PIN discourages you from remembering it. If you use disposable hardware, and it’s not stored in the vault then when someone accesses the vault then they don’t see anything and don’t know what your setup really is.

Offline mnemonic generation and Shamir secret sharing

I prefer 12 word mnemonics because they are easier to remember and they still have good entropy, like 128-bit entropy. I can still remember the 12 words. I would back it up with a Shamir secret sharing scheme. We take our mnemonic and split it into pieces. If you remember the mnemonic, then you don’t need to go recover your shares. Not all shares are required to recover the secret, but you can configure how many shares are required.

If you are worried about your random number generator, then you should be adding user entropy. If you have a tampered random number generator and you’re adding user entropy anyway, then it doesn’t matter. Older laptops can be a problem, like 32-bit machines, and the wallets might not support 32-bit CPUs anymore or something. If your wallet was written for python2.6…. now you have to write something to handle big integers, etc. This is a lot of bitrot in just 5-7 years.

Regarding offline mnemonic generation or secure mnemonic generation… use a dart board, use dice, do you trust yourself to generate your entropy? I am thinking about something like this: we have true random number generators in the chips; we can ask the chips for random numbers, use those numbers, and then display them to the user, the word and the corresponding index. We also know that circuits degrade over time, so random number generators could be compromised in a predictable way based on hardware failure statistics.

Entropy verification

You can roll some dice, then take a photo of it. If I’m building a malicious hardware wallet and I want to steal your bitcoin, the random number generator is by far the easiest way. Maybe the hardware is great, but the software I’m running isn’t using that hardware random number generator. Or maybe there’s a secret known to the attacker. You wait a few years, and then you have a great retirement account. You could also enforce a secure protocol that allows you to use a hardware wallet even if you don’t trust it. You can generate a mnemonic with user entropy and verify that entropy was in fact used, as in the sketch below.
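
A sketch of one way such verification could work, as a commit-and-reveal: the device commits to its own randomness before seeing yours, the final seed mixes both, and everything can be recomputed afterwards on another machine. This is an illustration of the general idea, not any specific wallet’s protocol.

```python
import hashlib, secrets

def sha256(b):
    return hashlib.sha256(b).digest()

device_entropy = secrets.token_bytes(32)
commitment = sha256(device_entropy)          # device shows this first

user_entropy = b"dice rolls: 4 2 6 1 ..."    # your dice / dart board / photo

seed = sha256(device_entropy + user_entropy) # becomes the mnemonic's entropy

# Later, the device reveals device_entropy; on another machine you check both
# that it matches the earlier commitment and that it was really used:
assert sha256(device_entropy) == commitment
assert sha256(device_entropy + user_entropy) == seed
```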

Signatures leaking private keys due to non-random nonces chosen by the hardware wallet

If I am still an evil manufacturer of the hardware wallet, and I was forced by the community to include this entropy verification mechanism, and I also want the community to like me, then we say we have a completely airgapped solution with QR codes and cameras… and say you can use any software wallet you want because it works with everything. Now you have a setup like this: say you have an uncompromised computer, only a compromised hardware wallet which was manufactured by evil means. You are preparing the unsigned transaction here, you pass it to the hardware wallet using the QR codes. The hardware wallet then displays the information and you verify everything is correct, and you verify on the computer as well. Then you sign it, and get back the signed transaction. Then you get a perfectly valid bitcoin transaction that you verify is correct and you broadcast it to the network. It’s the nonce attack. Yes, exactly. The problem is that the signature in bitcoin has two numbers, and one involves a random nonce. Our hardware wallet can choose a deterministically-derived nonce or a random nonce to blind the private key. If this nonce is chosen insecurely, either because you’re using a bad random number generator that isn’t producing uniformly random values, or because you’re evil, then just by looking at signatures I will be able to get information about your private key. Something like this happened with yubikey recently. There were FIPS-certified yubikeys and they were leaking your private keys within 3 signatures. The stupid thing is that they introduced the vulnerability by mistake when they were preparing their device for the certification process, which asks you to use random numbers; but actually you don’t want to use random numbers, you want deterministic derivation of nonces, because you don’t trust your random numbers. You can still use random numbers, but you should use them together with deterministic nonce derivation. Say you have your private key that nobody knows, and you want to sign a certain message…. ideally it should be HMAC something…. This is the nice way (sketched below), but you can’t verify your hardware wallet is actually doing this. You would have to know your private key to verify this, and you don’t want to put your private key in some other device. Also, you don’t want your device to be able to switch to malicious nonce generation. You want to make sure your hardware wallet cannot choose arbitrary nonces. You can force the hardware wallet to use not just the numbers it likes, but also the numbers that you like.
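
A simplified sketch of deterministic nonce derivation in that HMAC spirit; the full RFC 6979 retry loop and the verifiable host-contributed-randomness protocols discussed next are omitted.

```python
import hmac, hashlib

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # curve order

def deterministic_nonce(privkey, msg_hash):
    """Nonce depends only on the private key and the message, so the device has
    no channel to leak key bits through nonce choice, and re-signing the same
    message reuses the same (safe) nonce instead of a fresh random one."""
    k = int.from_bytes(hmac.new(privkey, msg_hash, hashlib.sha256).digest(), "big")
    return k % (N - 1) + 1               # keep it in [1, N-1]
```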

You could send the hash of the random number you will be using, so that the hardware wallet doesn’t need to use an RNG; it can use a deterministic algorithm to derive this R value. The scheme stays secure whether the software or the hardware is compromised, as long as it isn’t both of them at the same time– one of them being honest should be fine.

All the hardware wallets currently ignore this. I am preparing a proposal to include this field into bip174 PSBT. Our hardware wallet will definitely support it. I want to build a system where you don’t need to trust the hardware wallet too much, even if it is compromised or if there are bugs. All hardware wallets are hacked from time to time.

With dummy keys, you can check that the signatures are generated deterministically and make sure it is happening, and then maybe you feel safe with the hardware wallet. But switching from this deterministic algorithm to a malicious one can happen at any time. It could be triggered by a firmware update or some phase of the moon; you can’t be sure.

Another solution was that you can use verifiable generation of these random nonces, I think this was proposed by Pieter Wuille. For this you need a particular hashing function that supports zero-knowledge proofs that you were using this algorithm without exposing your private keys. The problem here is that it is very heavy computation for a microcontroller, so you’re probably not going to get it into a microcontroller.

There is also an idea about using sign-to-contract as an anti-nonce-sidechannel measure.

Ledger and multisig

If you have p2sh multisig and this wallet holds only one of the keys, then even if it is malicious it doesn’t control all the keys– and ideally you’re using different hardware for the other keys. The problem with multisignature is that… well, Trezor supports it nicely. I am very happy with Trezor and multisig. ColdCard released firmware that supports multisig about a day ago. Ledger has a terrible implementation of multisignature. What I expect a wallet to show when you’re using multisignature: you want to see your bitcoin address, the amount you are actually sending or signing, which output is the change, and the amounts. With Ledger multisig, you always see two outputs and you don’t know which one is the payment and which one is the change address, if any. With two Ledgers in a multisig setup, you are less secure than using a single Ledger. If anyone wants to make a pull request to the bitcoin app of Ledger, please do so. It’s there; people have been complaining about this issue for more than a year, I think. Ledger is not very good at multisignature.

Practicality

I know about a few supply chain attacks, but those relied on users doing stupid things. I’m not aware of any targeted hardware attacks. Right now the easiest way to attack someone’s hardware wallet is to convince them to do something stupid. I think at the moment there’s just not enough hackers looking into this field. But the value is going to increase. There have definitely been software wallet attacks.

walletrecoveryservices.com sells some of this as a service for hardware wallet recovery. The Wired magazine editor lost his PIN code or something, and he did a lot of work to get the data off the device. So this isn’t a malicious attack, but it is nonetheless an attack.

You should be wary of closed-source third-party hardware devices without a brand name. How do you trust any of this?

Right now it might be easier to get cryptocurrency by launching malware or starting a new altcoin or something or a hard-fork of another altcoin. Those are the easiest ways. Right now it’s easy to target lightning nodes and take money there; you know their public addresses and how much money they have, so you know that if you can get to the server then you can recover your cost of attack. So those targets are much more obvious and easier to attack at this point. There are easier remote attacks at this point, than attacking hardware wallets.

Downsides of microcontrollers

http://diyhpl.us/wiki/transcripts/breaking-bitcoin/2019/extracting-seeds-from-hardware-wallets/

How do secure elements work? Seems like a silver bullet that does everything, right? I can tell you the difference between normal microcontrollers and secure elements. Normal microcontrollers are made for speed and efficiency, and they are made easy to develop for. There are so-called security bits that you set when you’re done programming your microcontroller. How you would imagine it works is that the microcontroller boots with no-read, no-write permissions, checks the security bits to see whether you should be able to communicate with it, and only then allows read/write access. But sometimes it’s done the other way around: the microcontroller boots in an open read-write mode, then checks the security bits, and then locks itself. The problem is that if you talk to the microcontroller before it has been able to read those bits, you might be able to extract a single byte from the microcontroller’s flash memory. You can keep doing this by rebooting over and over again; if you are fast and the microcontroller is slow, you can do this even faster. I think this is something that Ledger is referencing in all their talks– this “unfixable attack” on all microcontrollers like Trezor and others. I think it’s related to this, because this is exactly the thing that is broken by design and cannot be fixed, just because the system evolved like this. No, they don’t need to use low temperature here. You just need to be faster than the microcontroller, which is easy because the microcontrollers used in hardware wallets run at 200 MHz or so. So if you use a GPU or a modern computer then you would be able to do something.

So the threat is that the microcontroller’s memory can be read before it locks itself down? The problem is that you can read out the whole flash memory. This means that even if it is encrypted, you have a key somewhere to decrypt it. What is stored on the flash memory? What is being protected here? Some have secret keys. All the IoT devices probably have your wifi password. There are plenty of different secrets that you might want to protect. The decryption key could in theory be stored somewhere else. Sounds like the threat is that the microcontroller can expose data written on it; if you care about that because it’s proprietary data or a secret, or a plaintext bitcoin private key, then that’s a huge problem. If you have a microcontroller in some computing device that you have programmed… sounds like this threat is less interesting. Yes, that’s why most people don’t care about the microcontroller. In consumer devices, people normally do even easier things. They forget to disable the debug interface, and then you have direct full access to the microcontroller with JTAG or something. So you can read out the flash memory and other stuff, and reprogram it if you want.

There’s also the JTAG interface, and another standard too, serial wire debug (SWD). These interfaces are used for debugging. They allow you, during development, to see and completely control the whole microcontroller: you can set breakpoints, observe the memory, and toggle all the pins around the device. You can do whatever you want using these interfaces. There’s a way to disable this, and that’s what manufacturers of hardware wallets do– another security bit– but again the bit is not checked at the moment of boot but a little bit later, so it’s another race condition. Ledger forgot to disable the JTAG interface on the microcontroller that controls their display, some time ago. But they still had a secure element, so it wasn’t a big deal. Yes, disabling it is a software thing. All the security bits are software measures. You just set the flags for your firmware to disable certain features.

Also, normal microcontrollers are designed to work under certain conditions: a temperature range from here to here, a supply voltage of, say, 1.7 volts plus or minus 0.2 volts, and a clock rate within a certain range. What happens if this environment changes? You can get undefined behavior, which is exactly what the attacker wants. A microcontroller operated beyond those limits can skip instructions, make miscalculations, or reboot– it can do lots of different things.

As an example, one of the attacks on the Trezor hardware wallets using this stuff was… when you connect the Trezor to your computer over USB, the computer asks the device: who are you and how can I help you? What kind of device are you? The Trezor says, I am Trezor model G, for example. What the attacker was able to do– even before you unlock your hardware wallet– relates to how this data is sent over to the computer. The Trezor is basically checking what length it should send to the computer, and this length is calculated during certain instructions. If you glitch the microcontroller at this moment and make this calculation return something random– more than the 14 bits the hardware wallet is expecting– you get not only the Trezor model information but, in addition, the mnemonic and the full content of the memory. The model information was stored right next to the mnemonic information. They fixed this, though. Right now, you have to unlock the Trezor with the PIN; it doesn’t send any data out at all until it is unlocked. There’s also a non-readable part of memory that the microcontroller can’t read, so if there’s an overflow it will throw an error and not be able to read those bits. So this is also a good approach. In principle, they made a lot of fixes recently. This was a software fix, not a hardware fix.

The mnemonic phrase on the hardware wallet was stored in plaintext, and the PIN verification was vulnerable to a sidechannel attack. Another big class of attacks on microcontrollers is sidechannel attacks. When microcontrollers compare numbers, they can leak information just by consuming different amounts of power, or by taking a slightly different amount of time to calculate something. Trezor was vulnerable to this as well some time ago, in particular for PIN code verification. They were verifying the PIN by comparing what you entered to a stored PIN. This comparison consumed different numbers of cycles for different digit patterns, and by observing this sidechannel emission from the microcontroller, LedgerHQ was able to distinguish between different digits in the PIN. They built a machine learning system to distinguish between the patterns, and after trying 5 different PINs this program was able to tell you the real PIN. 5 PINs is still doable in terms of the delay, so you can do it in a few hours. This was also fixed. Now PIN codes aren’t stored in plaintext; the PIN is used to derive a decryption key that decrypts the mnemonic phrase. This way is nicer because if you enter a wrong PIN, the decryption key is wrong, you can’t decrypt the mnemonic, and it’s not vulnerable to any kind of sidechannel attack.
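For illustration only, here is a tiny sketch of why a naive digit-by-digit comparison leaks through timing, and what a constant-time comparison looks like in Python; as described above, the better design is to never store or compare the PIN at all and instead derive a decryption key from it.

```python
import hmac

def naive_compare(entered: str, stored: str) -> bool:
    # Early exit: the running time reveals how many leading digits matched.
    if len(entered) != len(stored):
        return False
    for a, b in zip(entered, stored):
        if a != b:
            return False
    return True

def constant_time_compare(entered: str, stored: str) -> bool:
    # hmac.compare_digest takes the same time no matter where the inputs differ.
    return hmac.compare_digest(entered.encode(), stored.encode())
```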

On the hardware side, there’s race conditions, sidechannel attacks, operating environment manipulation, and debug interfaces for microcontrollers. You could also decap it and shoot it with lasers or something, and make it behave in a weird way. So this can be exploited by an attacker.

Secure elements again

On the other hand, what does a secure element do? They are similar to microcontrollers, but they don’t have debug interfaces and they don’t have the read-write flags. They also have a bunch of countermeasures against these attacks, for example hardware measures. There is a watchdog that monitors the voltage on the power supply pin, and as soon as it sees the voltage go below some value, it triggers an alarm and you can erase the keys as soon as this occurs. Or you just stop operating. If you see the supply voltage varying, you stop operation. If you see the temperature changing too much, you can also stop operation. You can either stop, or erase your secrets. There’s also a mechanism that allows the chip to detect if someone is trying to decap it, like a simple light sensor: if you decap the chip, the semiconductor is exposed, the sensor sees light coming in, and you stop operations. Here you definitely want to wipe your secrets and delete all of them. They also use a bunch of interesting techniques against sidechannel attacks. For example, they don’t just aim for constant power consumption and constant timing; on top of that they introduce additional random delays and random noise on the power lines, making it more and more difficult for the attacker to get any data out. They also normally have a very limited number of pins. You have a power supply pin, ground, and maybe a few more to drive something simple like an LED on the ColdCard; on the modern Ledger Model X they are actually able to talk to the display driver and control the display, which is a nice hardware improvement. In principle, though, it is not very capable. You can’t expect the secure element to drive a large display or react to user input. A button is probably fine, but it’s definitely not the most capable thing.

The reason why the Ledger has a tiny screen with low resolution is that they are trying to drive everything from the secure element, at least on the Ledger Model X. Previously this was not the case: they had a normal microcontroller talking to the secure element, where you unlock it and then it signs whatever you want, and that microcontroller controls the display. This was actually a big point that was pointed out by — … this architecture is not perfect, because you have a man-in-the-middle that controls the display. You have to be able to trust your hardware wallet’s display, but with this architecture you can’t, because there’s a man-in-the-middle. It’s hard to mitigate this and to figure out the tradeoff between complete security and usability. I hope you get the idea of why secure elements are actually secure.

Problems with secure elements

There are in fact some problems with secure elements, though. They have all of these nice anti-tampering mechanisms, but they also like to hide other stuff. The common practice in the security field is that when you close-source your security solution, you get some extra points on security certifications like EAL5 and other standards. Just by not publishing what you wrote or what you did, you get extra points. Now we have the problem that we can’t really find out what is running inside these devices. If you want to work with a secure element, you have to be big enough to talk to these companies and get the keys required to program it. You also have to sign their non-disclosure agreement. Only at that point will they give you documentation, and then the problem is that you can’t open-source what you wrote. Alternatively, you use a secure element that runs a Java Card OS– something like a subset of Java developed for the banking industry, because bankers like Java for some reason. Basically they have this Java VM that can run your applet… so you have no idea how the thing underneath is operating; you just trust it because it’s certified, it has been there for 20-30 years already, and we know all the security research institutes are trying very hard to even get a single …. You can then completely open-source the Java Card applet that you upload to the secure element, but you don’t know what’s running underneath it. Java Card is considered a secure element, yes.

By the way, the most secure Java Cards or secure elements were normally developed for the … like when you buy this card, you have a secret there that allows you to watch TV from a certain account. They were very good at protecting the secrets because they had the same secret everywhere. The signal is coming from space or satellite, the signal is always the same. You’re forced to use the same secret on all the devices. This means that if even one is hacked, you get free TV for everyone, so they put a lot of effort into securing this kind of chip because as soon as it is hacked then you’re really screwed and you have to replace all devices.

Also, let’s talk about the Sony PlayStation 3 key compromise attack. They used ECDSA signing, and you’re not supposed to get all the games for free, right? The only way to get a game running is to have a proper signature on the game– the signature from Sony. The problem was that they apparently didn’t hire a cryptographer or anyone decent at cryptography… they implemented the digital signing algorithm in a way that reused the same nonce over and over again. It’s the same problem as with the hardware wallets we described earlier today. If you reuse the same nonce, then I can actually extract your private key just by having two signatures from you. I can then take that private key and run whatever game I want, because I have the Sony private key. I think it was the fastest hack of a gaming console ever.
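To show why nonce reuse is fatal, here is a short sketch of the recovery math over the secp256k1 group order: given two ECDSA signatures that share the same r (and therefore the same nonce k), anyone can solve first for the nonce and then for the private key. This is the generic textbook attack, not code taken from the PS3 exploit.

```python
# secp256k1 group order
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_private_key(r, s1, z1, s2, z2, n=SECP256K1_N):
    # ECDSA: s_i = k^-1 * (z_i + r*d) mod n, with the same k in both signatures.
    k = (z1 - z2) * pow(s1 - s2, -1, n) % n   # recover the shared nonce
    d = (s1 * k - z1) * pow(r, -1, n) % n     # recover the private key
    return d
```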

QR code wallets

Constraining the unidirectionality of information. Bits can’t flow backwards. The only place a decrypted key should be is in a running hardware wallet. There’s a slip39 python implementation. Chris Howe contributed a slip39 C library.

With multisig and each key sharded, can you mix the shards from the different keys and is that safe?

The QR code is json -> gzip -> base64, and it fits like 80-90 outputs, which is fine. Animated QR codes could be cool, and there are some techniques like color QR codes that give you a boost. It’s packetization. Is the user going to be asked to display certain QR codes in a certain order, or will it be negotiated graphically between the devices? You can use high-contrast cameras.

Printed QR code… each packet can say it’s 1-of-n, and as it reads it, it figures out which one it is, and then it figures out which one is done or not done yet.

Signatures could be batched into a bigger output QR code on the device. So it’s not a huge bottleneck yet. Packetized QR codes are an interesting area. When you parse the QR code from Christopher Allen’s stuff, the QR code says what it contains.
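Here is a minimal sketch of the “json -> gzip -> base64” packetization described above, with each chunk prefixed by an “i/n” header so the receiver can reassemble chunks scanned in any order; the chunk size and header layout are made up for illustration, not taken from any particular wallet.

```python
import base64, gzip, json

def to_qr_chunks(obj, chunk_size=300):
    # Compress and encode the payload, then split it into "i/n <data>" packets.
    blob = base64.b64encode(gzip.compress(json.dumps(obj).encode())).decode()
    parts = [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
    return [f"{i + 1}/{len(parts)} {part}" for i, part in enumerate(parts)]

def from_qr_chunks(chunks):
    # Chunks may arrive in any order; the header tells us how to reassemble them.
    ordered = sorted(chunks, key=lambda c: int(c.split("/", 1)[0]))
    blob = "".join(c.split(" ", 1)[1] for c in ordered)
    return json.loads(gzip.decompress(base64.b64decode(blob)))

tx = {"outputs": [{"addr": "bc1...", "amount": 10000}]}
assert from_qr_chunks(to_qr_chunks(tx)) == tx
```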

Recent attacks

There were some recent attacks that showed that even if you are using secure hardware, it doesn’t mean you’re secure. When you have an attacker that can get to your device, then you’re in trouble and they can do nasty attacks against the microcontroller. Another idea is to wipe the device every time. There are wallets that use secure elements, such as the Ledger hardware wallet.

On the hardware side, Ledger is wonderful. The team is strong on the hardware side; they came from the security industry. They know about certification of secure elements, they know how to hack microcontrollers, and they keep showing interesting attacks on Trezor and other random wallets. They are extremely good on the hardware side, but that doesn’t mean they can’t screw up on the software side. It has actually happened a few times.

One of the scarier attacks on Ledger happened at the end of last year, and it was change address related. When you are sending money to someone, you have your inputs– let’s say one bitcoin– and then you normally have two outputs: one is for the payment and the other is the change output. How do you verify this is the change address? You should be able to derive the corresponding private key and public key that control that output. If you get the same address for the change output, then you are sure the money goes back to you. Normally what you do is provide the derivation path for the corresponding private key, because we have this hierarchical deterministic tree of keys, so the hardware wallet just needs to know how to derive the key. You send the wallet the derivation path as well, like the bip32 derivation path. Then the hardware wallet can derive the corresponding key and see that exactly this output will be controlled by this key, so it has the right address. …. What Ledger did is that they didn’t do the verification… they just assumed that if there was an output with some derivation path, then it is probably right. This means that the attacker could replace the address of this output with any address at all– just attach any derivation path– and all the money could go to the attacker: you’re sending some small amount of bitcoin and all the change goes to the attacker. It was disclosed last year, and it was discovered by the Mycelium guys because they were working on transfers of funds between different accounts on Ledger, and they found that it was somehow too easy to implement this on Ledger, so something had to be going wrong, and they discovered the attack. It was fixed, but who knows how long this problem was there. From the hardware wallet’s perspective, if someone doesn’t tell me it’s a change output, or prove that to me, then I should treat it as not a change output. This was one problem.
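Here is a rough sketch of the check that was missing: for an output the host claims is change, the hardware wallet must itself derive the key from the supplied BIP32 path and compare the resulting script with the output’s script, instead of trusting the host’s claim. `derive_pubkey` stands in for the wallet’s own BIP32 derivation and is an assumption here, not a real API; the P2WPKH script layout is the only concrete detail.

```python
import hashlib

def hash160(data: bytes) -> bytes:
    # RIPEMD160(SHA256(x)); ripemd160 availability depends on the OpenSSL build.
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

def derive_pubkey(master_key, path: str) -> bytes:
    # Placeholder: the wallet's own BIP32 derivation from its seed goes here.
    raise NotImplementedError

def is_our_change(master_key, claimed_path: str, output_script: bytes) -> bool:
    # Derive the key ourselves from the claimed path and rebuild the script.
    pubkey = derive_pubkey(master_key, claimed_path)
    expected = b"\x00\x14" + hash160(pubkey)   # P2WPKH: OP_0 <20-byte pubkey hash>
    return expected == output_script           # anything else must NOT be shown as change
```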

There was a minor issue too, where they didn’t read the documentation of the microcontroller. The problem was: how do they verify the firmware that is running on this microcontroller? Basically… when there is new firmware, the Ledger has a specific region in memory where they had a magic sequence of bytes; in particular, for Ledger it was some hex magic number, and they store it there. When you’re updating the Ledger firmware, the Ledger first erases this, then flashes the firmware, and at the end verifies whether the signature of this firmware is correct. If the signature was generated by the Ledger key, then they put this magic number back into the register, and then you are able to start this firmware and make it work. Sounds good, right? If you provide a wrong signature, then these magic bytes are all zeroes at that moment, so it won’t run this firmware; it just rolls back to the previous firmware. The problem is that if you read the documentation for the microcontroller, you see there are two different addresses to access the memory region where these magic bytes are stored. One was completely blocked from external read-write, such that if you try to write to these registers you fail, because only the microcontroller can write there. But there was another address mapped to the same memory region where you could write any bytes, and then you could make the microcontroller run any firmware you give it. Someone was able to play a game of Snake on the Ledger hardware wallet as a result of this. If you get control of the display and the buttons with custom firmware, you can hide arbitrary outputs. You can fool the user in different ways, because you’re controlling what the user sees when he is signing. So I think it’s a pretty big problem. It was a hard problem to exploit, but still a problem.

Another super serious fuckup happened with Bitbox… you know this one? Some wallets have a nice hidden wallet feature. The idea is that if someone takes your hardware wallet and tells you to please unlock it, otherwise they will hit you with a wrench, you will probably unlock it and then spend the money to the attacker, because you don’t want to die. The hidden wallet feature is supposed to secure your money in such a way that there is also a hidden wallet the attacker doesn’t know about, so they only get a fraction of your money. Normally you use the same mnemonic but a different passphrase. The Bitbox guys did it slightly differently, and it was a bad idea to reinvent the protocol with their own rules. What they did was: you have this master private key, like a bip32 xprv. It has a chaincode and the private key in there. The master public key has the same chaincode and just the public key corresponding to this private key. Given the master public key, you can derive all your addresses but not spend; if you have the private key, you can spend. For the hidden wallet they used the same extended key data but swapped the chaincode and the key. That means that if your software wallet knows the master public keys of both the normal wallet and the hidden wallet, then it basically knows both the chaincode and the private key, so it can take all your money. If you are using this hidden wallet feature, then you are screwed.

Is the attacker not supposed to know about the hidden wallet feature? How is this supposed to work? In principle, this hidden wallet feature is questionable. As an attacker, I would keep hitting you with a wrench until you give me all the hidden wallets you have. I would keep hitting you until you give me the next password or next passphrase and so on; they would never trust that you don’t have a next wallet. The hidden wallet would have to be sufficiently funded so that the attacker thinks it is likely to be everything. You could also do the replace-by-fee race where you burn all your money to the miners ((hopefully you got the fees right)). The attacker isn’t going to stop physically attacking you. But there’s still a big difference between physically hitting someone and killing them. Murder seems like a line that fewer people would be willing to cross.

TrueCrypt had plausible deniability in its encryption because you could have multiple volumes encrypted, and you couldn’t tell how many were there. It might be suspicious that a 1 GB encrypted volume has only a single 10 kb file… but the idea is to put something really incriminating along with your 10 BTC, and you just say “I’m so embarrassed”, and this makes it seem more plausible that this is actually your full coin amount.

Having secure hardware doesn’t mean you’re not vulnerable to attacks. I really think the best thing to do is to use multisignature.

Timelock

If you are willing to wait for a delay, you can use a timelock, or spend instantly with 2-of-2 multisig keys. You would enforce on the hardware wallet that it only makes these timelocked transactions. The attacker provides the address. It doesn’t matter what your wallet says; at best, your wallet has already locked it. You can’t spend it in a way that locks it, because presumably the attacker wants to use their own address. You could pre-sign the transaction and delete your private key– hope you got that fee right.

If you can prove to a bank that you will get $1 billion one year from now, then they will front you the money. You get the use of the money, you negotiate with the attacker and pay them a percentage. But this gets into K&R insurance stuff…. You could also use a bank: 2-of-2 multisig, or my key alone but with a delay of 6 months. This means that every time you need to make a transaction, you go to the bank and make a transaction– you can still get your money back if the bank disappears, and the attacker can’t get anything, because they probably don’t want to go with you to the bank, or cross multiple borders as you travel around the world to get all your keys or something.

The best protection is to never tell anyone how much bitcoin you hold.

How could you combine a timelock with a third-party? Timelock with multisig is fine.

We’re planning to add support for miniscript, which could include timelocks. But no hardware wallet currently enforces timelocks, to my knowledge.

Miniscript

Miniscript (or here) was introduced by Pieter Wuille. It’s not a one-to-one mapping to all possible bitcoin scripts; it’s a subset of bitcoin script, but it covers like 99.99% of all use cases observed on the network so far. The idea is that you describe the logic of your script in a convenient form, such that a wallet can parse this information and figure out what keys or other information it needs in order to produce a signature. This also works for many of the lightning scripts and for various multisig scripts. You can then compile this miniscript policy into bitcoin script. You can also analyze it and say this branch is the one I will use most of the time, and then order the branches in the script so that it executes more efficiently on average in terms of sigops. It can optimize the script in such a way that the fees, or the data you need when signing with this script, will be minimal according to your priorities. So if you’re mostly spending with one branch, that branch will be super-optimal, and the other branch might be a bit longer.
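As a rough illustration, here is a hypothetical policy in the policy language that miniscript compilers accept, describing the kind of vault discussed earlier: spend immediately with 2-of-2, or with a single recovery key after roughly 90 days’ worth of blocks. The key names and the block count are placeholders, not an example taken from Pieter’s site.

```python
# Hypothetical spending policy: immediate 2-of-2, or the recovery key after ~12960 blocks.
POLICY = "or(thresh(2,pk(user_key),pk(cosigner_key)),and(pk(recovery_key),older(12960)))"
```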

After implementing miniscript, it will be possible to use timelocks. Until then, you need something like a Raspberry Pi with custom firmware. We can try to implement a timelock feature together tomorrow if you are still here.

Pieter has a proof-of-concept on his website where you can type out policies and get actual bitcoin script. I don’t think he has a demonstration of going the other way around, but it is described in much detail how this all works. I think they are finishing their multiple implementations at the moment, and I think it’s almost ready to really get started. Some pull requests have been merged for output descriptors. In Bitcoin Core you can provide a script descriptor and feed it into the wallet– whether it’s segwit or legacy, nested segwit or native segwit, etc. You can also use script descriptors for multisignature wallets, and you can already use Bitcoin Core with existing hardware wallets… it’s still a bit of trouble because you need to use a command line interface; it’s not super user friendly and not in the GUI yet, but if you’re fine with command line interfaces and willing to write a small script that does it for you, then you’re probably fine. I think integration with Bitcoin Core is very important, and it’s nice that this is moving forward.

Advanced features for hardware wallets

http://diyhpl.us/wiki/transcripts/breaking-bitcoin/2019/future-of-hardware-wallets/

Something we could do is coinjoin. Right now hardware wallets only support situations where all the inputs belong to the hardware wallet. In coinjoin transactions, that’s not the case. If we can fool the hardware wallet into displaying something wrong, then we can potentially steal the funds. How can the hardware wallet tell whether an input belongs to it or not? It needs to derive the key and check whether it is able to sign. It requires some help from the software wallet to do this. The user needs to sign transactions twice for this protocol.

It is pretty normal to have multiple signing of a coinjoin transaction in a short period of time, because sometimes the coinjoin protocol stalls due to users falling off the network or just being too delayed.

Signing transactions with external inputs is tricky.

Hardware wallet proof of (non)-ownership

Say we’re a malicious wallet– not the coinjoin server, but the client application. I can put two identical user inputs, which is fairly common in coinjoin, into the transaction, with only one user output, and the rest are other people’s outputs. How can the hardware wallet decide whether an input belongs to the user or not? Right now there’s no way, so we trust the software to mark which inputs need to be signed. The attack is to mark only one of the user inputs as mine; the hardware wallet signs it and we get the signature for the first input. The software wallet then pretends the coinjoin transaction failed, and sends the hardware wallet the same transaction but with the second input marked as ours. The hardware wallet has no way to determine which inputs were its own. You could do SPV proofs to prove that an input is yours. We need a reliable way to determine whether an input belongs to the hardware wallet or not. Trezor is working on this with achow101.

https://github.com/satoshilabs/slips/blob/slips-19-20-coinjoin-proofs/slip-0019.md

We could make a proof for every input, and we need to sign this proof with a key. The idea is to prove that you can spend and prove that… it can commit to the whole coinjoin transaction to prove to the server that this input is owned, and it helps the server defend against denial-of-service attacks, because now the attacker has to spend his own UTXOs. The proof can only be signed by the hardware wallet itself. You also have a unique transaction identifier. It’s sign(UTI||proof_body, input_key). They can’t take this proof and send it to another coinjoin round. This technique proves that we own the input. The problem arises from the fact that we have this crazy derivation path. Use a unique identity key, which can be a normal bitcoin key with a fixed derivation path. The proof body will be HMAC(id_key, txid || vout). This can be wallet-specific, and the host may collect them for its UTXOs. You can’t fake this, because the hardware wallet is the only thing that can generate this proof.
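A minimal sketch of that construction, following the SLIP-0019 idea described above: the proof body is an HMAC over the outpoint keyed with a fixed identity key that only the hardware wallet knows, and the whole thing would then be signed with the input’s key (the signing step is omitted here). The field layout and function names are illustrative, not the exact SLIP-0019 serialization.

```python
import hashlib, hmac

def proof_body(id_key: bytes, txid: bytes, vout: int) -> bytes:
    # HMAC(id_key, txid || vout): only the wallet holding id_key can produce this.
    return hmac.new(id_key, txid + vout.to_bytes(4, "little"), hashlib.sha256).digest()

def is_mine(id_key: bytes, txid: bytes, vout: int, candidate: bytes) -> bool:
    # On a later signing attempt the wallet recomputes the HMAC and recognizes
    # its own inputs without trusting the host's "this one is yours" flags.
    return hmac.compare_digest(proof_body(id_key, txid, vout), candidate)
```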

This could be extended to multisig or even MuSig key aggregation.

We can ask all participants in this coinjoin transaction to sign a certain message with the private key that controls their input. So we have some message, and a signature. The signature proves to everyone that the guy who put this message there actually controls the corresponding private key– it’s a signature from the key that controls this input. On the message side, we can put whatever the hardware wallet wants. The hardware wallet is the one who signs this proof; it is the only one that controls this key. So what it can do is generate a particular message that it will be able to recognize afterwards. I take the transaction hash and hash it together with a fixed key that I store in my memory, and then I get a unique message that looks random, but I will be able to reproduce it whenever I see it, and I will be able to make sure it was my input, because I was the one who generated what’s inside the message. Once we have these proofs for every input, our hardware wallet can go through each input and determine which inputs are mine and which are not. This can then help detect when the software wallet is trying to fool you.

I hope hardware wallets will be able to do coinjoins fairly soon. Trezor will probably deploy it first because we’re working with them on that.

Q: What’s the use case for this? Say I want to leave something connected to the internet to make money from something like joinmarket? Or I want to be a privacy taker?

A: It works in both cases. If you want to participate in coinjoin and earn something– well, right now it doesn’t work this way. Right now all the revenue goes to the Wasabi Wallet guys; their server takes fees to connect people together. At the moment, if you want to use coinjoin to get some privacy, then you need this kind of protocol, so you probably need to either connect your hardware wallet to do this, or you can still do it using the airgap.

In our case for example, I was thinking about having a screen on the computer and then a QR code and they can communicate over QR codes like this is a webcam and this is a screen. I was also thinking about audio output, like a 3.5mm jack from the hardware wallet to the computer. The bandwidth there is pretty good. You could also just play audio on a speaker. But then your hardware wallet needs a speaker, and it can just send your private key out. But a 3.5mm audio jack makes sense.

Q: What about coinshuffle, or coinswap?

A: I only know a little about this. With Wasabi wallet, the server doesn’t know which inputs correspond to which outputs, because it registers them separately. You get back a blinded signature, and you give them a blinded output or something. They generate a blind signature and they don’t know what they are signing. It allows the coinjoin server to verify: yes, I signed something, and this guy wants to register this output, so it looks right and I should put it into the coinjoin. For all this communication they use Schnorr signatures, because there you can use blind signatures. In principle this means you have two virtual identities that are not connected to each other; your inputs and outputs are completely disconnected, even for the coinjoin server. They also generate outputs of the same value, and then they make another set of outputs with a different value, so you can also get anonymity for some amount of the change.

Wasabi wallet supports hardware wallets, but not for coinjoin. Then the only remaining benefit of using Wasabi is having complete coin control and being able to pick coins to send to people.

Q: How does Wasabi deal with privacy when fetching your UTXOs?

A: I think they are using the Neutrino protocol, they ask for the filters from the server and then they download blocks from random bitcoin nodes. You don’t need to trust their central server at that point. I think it’s already enabled to connect to your own node, awesome that’s great. Cool. Then now you can actually get it from your Bitcoin Core node.

Lightning for hardware wallets

Lightning is still in development, but it’s already running live on mainnet. We know the software isn’t super stable yet, but people were excited and they started to use it with real money. Not a lot of real money, it’s like a few hundred bitcoin on the whole lightning network at this point.

Right now the lightning network works only with hot wallets, with the keys on your computer. It’s probably an issue for us, but for normal customers buying coffee daily it’s not such a problem. It might be okay to store a few hundred dollars on your mobile wallet, and maybe it gets stolen– that’s fine, it’s a small amount of money. But for merchants and payment processors, you care about losing coins or payments, and you want to have enough channels open so that you don’t have to close any and you won’t have liquidity issues on the network. You have to store your private keys on an online computer that has a particular IP address, or maybe sits behind tor, with certain ports open, and you’re broadcasting to the whole world how much money you have in those channels. Not very good, right? So this is another thing we’re working on: it would be nice to get the lightning private keys onto a hardware wallet. Unfortunately, here you can’t really use an airgap.

You could partially use cold storage. You could at least make sure that when a channel is closed, the money goes to cold storage. There’s a nice separation of different keys in lightning. When you open a channel, you can specify what address to use when closing the channel. Then even if your node is hacked and the attacker tries to close the channels and take the money, it fails, because all the money goes to cold storage.

But if he is able to move all the money through the lightning network to his own node, then you’re probably screwed. Storing these private keys on the hardware wallet is challenging, but you could have a …. it can also do a signed channel update. If you provide enough information to the hardware wallet that you are actually routing a transaction and that your balance is increasing, then this hot hardware wallet could sign automatically. If the amount is decreasing, then you definitely need to ask the user to confirm. So we’re working on that.

Schnorr signatures for hardware wallets

The advantage of Schnorr signatures for hardware wallets is key aggregation. Imagine you’re using normal multisig transactions, like 3-of-5. This means that every time you’re signing the transaction and putting it into the blockchain, you see there’s 5 pubkeys and 3 signatures. It’s a huge amount of data, and everyone in the world can see that you’re using a 3-of-5 multisig setup. Terrible for privacy, and terrible in terms of fees.

With Schnorr signatures, you can actually combine these keys into a single key. So you can have several devices or signers that generate signatures, and then you can combine the signatures and the corresponding public keys into a single public key and a single signature. Then most of the transactions on the blockchain would look similar: just a public key and a signature.

With taproot (or here), it’s even better. You can add the scripting functionality there as well. If everything goes well, like in lightning maybe you and your counterparty are freely cooperating and you don’t need to do a unilateral close. You could do a 2-of-2 multisig mutual close, and then it looks exactly like a public key and a single signature. If someone is not cooperating and things are going wrong, then you can show a branch in the taproot script that shows that you are allowed to claim the money but this script is only revealed if you have to go down this path. Otherwise you get a single public key and a single signature on the blockchain.

We can use chips with different architectures and heterogeneous security models on a single hardware wallet device: put three different chips with three different keys on there, and make sure the bitcoin can be spent only if each of these chips in the hardware wallet signs. One can be a proprietary secure element, and the others ordinary microcontrollers in the same hardware wallet, and the output is only a single public key and a single signature. We could also use one chip from Russia and one chip from China. Then even if there is a backdoor, it is unlikely that the Russian government and the Chinese government will cooperate to attack your wallet. From the user’s perspective, it looks like a single key and a single signature.

All these watchdogs and anti-tamper mechanisms and preventions of fault injections and stuff…. they are not implemented yet, but I know there’s a few companies that are working on security peripherals around the RISC-V architecture. So hopefully we will get secure elements soon. The only problem at the moment is that most of– some of the companies, I would say, they take this open-source RISC-V architecture and they put on top of it a bunch of proprietary closed-source modules and this kind of ruins the whole idea. We need a fully open-source RISC-V chip. I would definitely recommend looking at RISC-V.

There’s also the IBM Power9’s are also open-source at this point. Raptor Computing Systems is one of the few manufacturers that will actually sell you the device. It’s a server, so not ideal, but it is actually open-source. It’s a $2000 piece of equipment, it’s a full computer. So it’s not an ideal consumer device for hardware wallets. I believe the CPU and most of the device on the board are open-source including the core. It’s an IBM architecture. Okay, I should probably look at that. Sounds interesting.

Best practices

I want to talk about best practices, and then talk about developing our own hardware wallet. But first about best practices.

Never store mnemonics in plaintext. Never load the mnemonic into the memory of the microcontroller before the device is unlocked. There was something called the “frozen Trezor attack”. You take your Trezor and power it on, and the first thing it does is load your mnemonic into RAM. What you can then do is freeze the Trezor at a low temperature to make sure the contents of RAM remain readable, and then update the firmware to custom firmware, which Trezor allows. That is normally OK, because your flash memory is wiped and they assume your RAM decays– but at cold temperatures it stays there. Once your firmware is on the device, the mnemonic is no longer in flash, but you can print what is still in RAM over serial and get the data out of there. The problem was that they were loading the mnemonic into RAM before the device was even unlocked. So never do that.

The ultimate thing, the one thing that prevents an attacker from spending your funds, is the PIN code or whatever verification method you’re using. It’s much better not to store the correct PIN code on the device: comparing the correct PIN code to the entered PIN code is bad, because during these comparisons there’s a sidechannel attack. Instead, you want to use the PIN code and any other authentication data to derive the decryption key that decrypts the encrypted mnemonic. This way, you remove all the sidechannel attacks and you don’t have the mnemonic in plaintext.
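A minimal sketch of the “derive, don’t compare” approach: the PIN goes through a key derivation function to produce the key that decrypts the stored mnemonic, so there is nothing stored to compare against and a wrong PIN simply fails to decrypt. This uses the third-party `cryptography` package and illustrative parameters; a real device would add things like a retry counter and hardware-bound salts.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def key_from_pin(pin: str, salt: bytes) -> bytes:
    # Slow KDF so guessing PINs offline is expensive; returns a 256-bit key.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

def decrypt_mnemonic(pin: str, salt: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # A wrong PIN yields a wrong key and decryption raises InvalidTag;
    # no secret is ever compared digit by digit.
    return AESGCM(key_from_pin(pin, salt)).decrypt(nonce, ciphertext, None)
```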

Another nice feature people should use– have you heard about physically unclonable functions? It’s a really nice feature. When a microcontroller’s RAM is manufactured, there are certain fluctuations in the environment, so every bit of the RAM comes out slightly different. When you power on the microcontroller, the state of this memory is random. Then you erase it and start using it as normal. But this randomness has a certain pattern, and this pattern is unclonable, because you cannot fully observe it and it cannot be reproduced in another RAM device. You can use this pattern as a fingerprint, as a key unique to the device. That is why it’s called a physically unclonable function: it comes from variations in the manufacturing process. You can use it, together with the PIN code and other data, to encrypt your mnemonic. When this particular device boots, it will be able to decrypt the mnemonic. But extracting the full flash memory will not help, because the attacker still needs the physically unclonable function, which is in the device. The only way to get that is to flash custom firmware, read the key and extract it over serial or whatever– which means defeating both the read protection and the write protection.

Q: Why not get rid of the PIN and the mnemonic storage, and require the user to enter that and wipe the device?

A: You could. Then it’s a secure signing device, but not a secure storage device. So there’s secure storage, and there’s secure signing. You store the wallet encrypted– on paper or CryptoSteel, in a bank vault or something– with the mnemonic, and it’s encrypted, and you remember the passphrase. So you never store the passphrase anywhere.

The secure storage problem and the secure signing problem should be separated. So you could use replaceable commodity hardware for signing, and the mnemonic should be stored encrypted on paper or something. The problem with entering a mnemonic every time is that you could have an evil maid attack. The prevention here is to not have wifi or anything. But maybe the attacker is clever and they put in a battery pack and some kind of broadcast mechanism, but this goes back to having a disposable commodity wallet.

Besides RAM, you can use a piece of glass to make a physically unclonable function. You could put a piece of glass into the wallet, with a laser that measures the imperfections in the glass and uses them to derive the decryption key for your mnemonic. This isn’t a fingerprint reader. Glass might degrade over time, though. A piece of plastic can have a unique pattern that generates an interference pattern, and a decryption key can be extracted from that. But that’s not going to happen for a while.

This other thing about evil maid attacks and hardware implants– how can we prevent those? There’s a way to put an anti-tamper mesh around the device, such that whenever someone tries to get into the device, for example by drilling a hole, your security measures are automatically activated. HSMs in the banking industry basically have a device constantly connected to power that monitors the current going through this conductive mesh. If they detect a change in the current in this mesh, they wipe the device. The problem is that when you’re running out of power you have to rely on the battery, and before the battery is drained you have to wipe the keys anyway.

There’s a better way, where you don’t just check the current, but you also measure the capacitance of the wires in the mesh and use that as a unique fingerprint to generate the decryption key for your secrets. Then even if someone drills a 100 micron hole in it, the decryption key changes and they will not be able to extract the secrets anymore. You can’t just buy a device like this at the moment, but it’s a very promising approach. It’s probably for people who really care about large amounts of money, because it is really expensive.

If you are going to wipe the keys, then you might as well pre-sign a transaction before you wipe the keys, to send the coins to backup cold storage or something.

Rolling your own hardware security

You should not expect that rolling your own hardware wallet is a good idea. Your device will probably not be as secure as a Trezor or Ledger, because they have large teams. But if there is a bug in the Trezor firmware, attackers will probably try to exploit it across all Trezor users; whereas if you have a custom implementation that might not be super secure but is custom, what are the chances that the guy who comes to your place finds a weird-looking device, figures out it’s a hardware wallet, and then figures out how to break it? Another thing you can do is hide your hardware wallet in a Trezor shell ((laughter)).

Someone in a video suggested making a fake hardware wallet, and when it is powered on by someone, then it sends an alert message to a telegram group and says call 911 I’m being attacked. You could put this into the casing of Trezor. When the guy connects to it, it sends the message. Another thing you could do is install malware on the attacker’s computer, and then track them and do various surveillance things. You could also claim yeah I need to use Windows XP with this setup or something equally insecure, which is plausible because maybe you set this system up 10 years ago.

Options for making a hardware wallet prototype

What can we use to make a hardware wallet? If you think making hardware is hard, it’s not. You just write firmware and upload it. You can also use FPGAs which are fun to develop with. I like the boards that support micropython, which is a limited version of python. You can talk to peripherals, display QR codes and so on. Trezor and ColdCard are using micropython for their firmware. I think micropython has a long way to go though, because as soon as you move away from what has already been implemented then you end up having problems where you need to dive into the internals of micropython and you end up having to write new C code or something. But if you like everything already there, then it is extremely easy to work with.

Another option is to work with arduinos. This is the framework developed maybe 20 years ago I don’t know, and it’s used throughout the whole do-it-yourself community. It became extremely easy to start writing code. I know people who learned programming by using arduino. It’s C++, and not as user friendly as python, but still the way how they make all the libraries and all the modules, it is extremely friendly to the user. They didn’t develop this framework with security in mind, though.

There’s also the Mbed framework. They support a huge variety of boards. This framework was developed by ARM. You’re again writing C++ code, and then you compile it into the binary, and then when you connect the board you drag-and-drop it into the board. It’s literally drag-and-drop. Even more, you don’t need to install any toolchain. You can go online and use their online browser compiler. It’s not very convenient, except for getting started and getting some LEDs blinking. You don’t even need to install anything on your computer.

Something else to pay attention to is the Rust language, which focuses on memory safety. It makes a lot of sense. There is also Rust for embedded systems. So you can start writing Rust for microcontrollers, and you can still access the libraries normally written by the manufacturer– for talking to the displays, LEDs and all this stuff. In the microcontroller world, everything is written in C. You can write your hardware wallet logic in Rust and still have bindings to the C libraries.

There’s a very interesting project called TockOS. This is an operating system written in rust completely, but you can keep writing in C or C++ but the OS itself like the management system can make sure that even if one of your libraries is completely compromised then you’re still fine and it can’t access memory from other programs. So I think that’s really nice. At the moment, I think there’s not too many people that know rust, but that’s improving. Definitely very interesting toolchain.

Another nice thing you can do with DIY hardware wallets, or not just DIY but with flexible hardware, is custom authentication. If you’re not happy with just a PIN code, like you want a longer password, or you want to have an accelerometer and you want to flip the hardware wallet in a certain way that only you know or something, or for example you can enforce one-time passwords and multi-factor authentication. You don’t only require the PIN but also a signature from your yubikey, and all these weird kinds of things, or even your fingerprint but that’s a bad idea because fingerprints have a low entropy and people can just take your finger anyway or steal your fingerprints.

You could use yubikey, Google Titan, or even some banking cards. You could do multi-factor authentication and different private key storage devices to do multisig having nothing to do with bitcoin, to authenticate to get into a hardware wallet.

Note that all of these boards I am talking about are not super secure. They all use microcontrollers and they don’t have a secure element. You can get a really cheap board for like $2. Keep in mind, it’s manufactured and designed in China. It’s very widespread, but who knows, maybe there’s a backdoor in the device somewhere. Also, it has bluetooth and wifi, so that’s something to be aware of. If you want a not-very-secure version of the Ledger X, you could do it. It would probably still be safer than storing the money on your laptop that is constantly connected. All the other developer boards tend to have simple application-specific microcontrollers. This one here has the same chip that Trezor has, so in theory you could port the Trezor firmware to it. Then you get the security of a Trezor wallet, a much larger screen, and maybe some additional functionality that you might like. So it might make sense in some cases. I wouldn’t rely completely on DIY hardware for security.

There are also some cheap security-focused chips available on the market. The one used in the ColdCard is on the market– some sort of ECC-something chip from Microchip. You can also get it in the Arduino form factor. It can provide secure key storage for your bitcoin keys, and the rest can be done on a normal microcontroller.

No secure elements are available on the market at the moment that would allow you to use elliptic curve cryptography for the bitcoin curve. They haven’t been built yet.

To make a fully secure element that is completely open-source from the very bottom to the very top would cost like $20 million. What we are releasing is what is accessible to us at the moment. So what we can do is take this secure element that has a proprietary Java Card OS on top of it, and on top of that write a bitcoin-specific applet that can talk to the hardware and use all the elliptic curve accelerators and hardware features. It can still be open-source– we don’t know how exactly this Java Card OS works, so the whole thing is not fully open-source, but we’re open-sourcing everything we can. In the ColdCard, they can’t use elliptic curve cryptography on the secure key storage element, but in other secure elements, yes, you can run ECDSA and other elliptic curve cryptography.

My design choices for my hardware wallet

I wanted to talk about how we designed our hardware wallet and get your feedback about whether this makes sense or not and how we can improve, especially Bryan and M. I also want to make sure that I don’t build something that isn’t usable. After that, if you have your laptops with you, we can set up the development environment for tomorrow, and we can even give out the boards to try this evening and take home. If you just promise not to steal it, you can take some home if you want to– if you’re actually going to try it. Note that I will be disturbing you all the time tomorrow, so tonight is your chance to look at this alone. Tomorrow we can work on some crazy hardware wallets.

What we decided to do: we have some hardware partners that can manufacture custom chips for us. We can take the components of the normal microcontrollers and put them in a nice way into a single package. So what components do hardware wallets normally have– the ones that also have a secure element, at least? We decided first of all to go open-source. We’re not going to work on a bare-metal secure element with a closed-source OS that we can’t open-source due to NDAs. So we are using a Java Card OS, and even though we don’t know how it works internally, it seems to work, so hopefully it is pretty secure. On top of that, we’re writing a bitcoin applet that works with the secure element. We can only put in there stuff that we sign and upload using our administration keys. This means that we can develop and upload software, and you can suggest certain changes that we can enable for upload; it requires certain communication with us first. I can’t give the keys to anyone else, because if they get leaked then we’re in trouble with the government– they are worried about criminals compromising supposedly secure communication channels or using them to organize their illegal activities. This is unfortunately the state of the industry. We wanted to develop an open-source secure element, but for now we can just make a hardware wallet that is perhaps more secure and a bit more flexible.

So we are using the Java Card smartcard, and then we have two other microcontrollers. We also have a somewhat large display to show all the information about the transaction and format it nicely, and to enable this QR code airgap we need to be able to display pretty large QR codes. We have to use another general-purpose microcontroller because only those can handle large screens at the moment. Then we have a third microcontroller that does all the dirty work that is not security-critical, like communication over USB, talking to the SD cards, processing images from the camera to read the QR codes, and things like that. This gives complete physical isolation of the large codebase that is not security-critical and that handles all of the user data and the data from the cold computer. We also have another microcontroller dedicated to driving the display so that you can have a somewhat trusted display. All of these microcontrollers are packed into the same package. Inside, we have layered semiconductor dies in the package, arranged in a secure-sandwich structure. The top one is the secure element, and the two others are underneath. So in theory, heat from one chip in the package can be detected in the other, but the smartcard presumably has a lot of work done on power analysis and sidechannel prevention.

Even if the attacker gets access to this chip and decaps it, he will first hit the secure element, which has all of these anti-tampering mechanisms like watchdogs and voltage detection and memory mapping and other stuff, and it shares this capability with the other chips. They obviously share the same voltage supply, so the other microcontrollers gain a little bit of security from the secure element. Even if the attacker tries to get into the memory of the not-very-secure microcontrollers, the secure element is in the way and it is hard to get underneath it. The previous customers they made this packaging for were satellite projects, and in that case you have issues with radiation and stuff. For us this helps because, first, no electromagnetic radiation goes from the chip to the outside, which eliminates some sidechannel attacks, and secondly, limited electromagnetic radiation gets into the chip. And as I said, they are all in the same package. In our development boards, they are not in the same package because we don’t have the money to develop the chip yet, but we will start soonish. Still, it has all of this hardware already.

The normal microcontrollers have these debug interfaces. Even if they are disabled with the security bits, the attacker can still do a lot of race-condition stuff and actually even re-enable them. So we included a fuse in there that physically breaks the connection from the JTAG interface to the outside world. So just having the chip, the attacker is not able to use the JTAG interface. This is a nice thing that is normally not available. On the developer board, we expose a bunch of connections for development purposes, but in the final product the only connections will be the display, the touch screen, and the camera, for which we haven’t decided which microcontroller we want to use. Normally the camera is used to scan the QR code and it is obviously communication-related, so it should be on the communication microcontroller. We can also take a picture of dice as user-defined entropy.

We have a completely airgapped mode that works with QR codes to scan transactions and QR codes to transmit signatures. We call this the M feature because he’s the one that suggested it. It’s so nice and fluent how it works that I have a video I can show you. First we scan the QR code of the address we want to send money to; the watchonly wallet on the phone knows all the UTXOs, prepares the unsigned transaction and shows it to the hardware wallet. Next, on the hardware wallet we scan that QR code. Now we see information about the transaction on the hardware wallet, we can sign, and then we get the signed transaction back as another QR code. We then scan it with the watchonly wallet, and broadcast it. The flow is pretty convenient.
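To make the shape of that flow concrete, here is a minimal sketch of the watch-only side, assuming the Python `qrcode` package and base64-encoded PSBTs; the PSBT strings and the signing step on the device are hypothetical placeholders, not our actual firmware or app.

```python
# Minimal sketch of the QR airgap round trip, watch-only side only.
# Assumes the `qrcode` package is installed; PSBT strings are placeholders.
import qrcode

# 1. The watch-only wallet builds an unsigned PSBT and shows it as a QR code.
unsigned_psbt_b64 = "cHNidP8B..."              # placeholder base64 PSBT
qrcode.make(unsigned_psbt_b64).save("unsigned.png")

# 2. The hardware wallet scans that QR code, displays the outputs and fee on
#    its own trusted screen, signs after confirmation, and shows the signed
#    PSBT back as another QR code.

# 3. The watch-only wallet scans the signed PSBT and broadcasts the finalized
#    transaction through its own node.
signed_psbt_b64 = "cHNidP8B..."                # placeholder scanned from the device
```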

This is uni-directional data flow. It only goes in one direction. It’s highly controlled data flow. It is also limited in terms of the data you can pass back and forth. This is both a good thing and a bad thing. For a large transaction, it might be a problem. For small data, this is great. We haven’t tried animated QR codes yet. I was able to do a PSBT with a few inputs and outputs, and it was pretty large. With partially-signed bitcoin transactions and legacy inputs, your data is going to blow up in size pretty fast. Before segwit, you had to put a lot of information into the hashing function to derive the sighash value. Now with segwit, if the wallet lies about the amounts, it will just generate an invalid signature. In order to calculate the fee on the hardware wallet for legacy inputs, I have to pass the whole previous transaction, and it can be huge. Maybe you get the coins from an exchange, and the exchange might create a big transaction with thousands of outputs, and you would have to pass all of it to the hardware wallet. So if you are using legacy addresses, you will probably have trouble with QR codes and then you will need to do animations. For segwit transactions and a reasonable number of inputs and outputs, you’re fine. You could just say, don’t use legacy anymore, and sweep those coins. The only reasonable attack vector there is a mining pool tricking you into broadcasting a transaction with a wrong fee and robbing you that way; otherwise it hurts you but doesn’t profit the attacker, so there isn’t really an incentive. You can also show a warning saying if you want to verify the fee, then use bech32, or at least check it here. You can show the fee on the hot machine, but that’s probably compromised.

With the airgapped mode, you have cold storage with an airgap and it’s reasonably secure. Nothing is perfect; there will be bugs and attacks on our hardware wallet as well. We just hope to make fewer mistakes than what we see at the moment. Also, another thing I didn’t talk about is the … within smartcards, we have to reuse the Java Card OS just because applets have to be written in Java. For the second part we decided to go with embedded rust, and even though it’s a little harder to develop and not many people know rust, this part is really security-critical and we don’t want to shoot ourselves in the foot with mistakes there. It also gives you a lot of isolation and best practices from the security world. Third, the part we open to developers is running micropython. This means that if you want custom functionality on the hardware wallet, like a custom communication scheme such as bluetooth, which we don’t want, but if you want it then you’re welcome to do it yourself: you write a python script that handles this communication type, and then just use it. Another thing you can do is, if you’re using a particular custom script that we’re not aware of, and we’re showing this transaction in an ugly way that is maybe hard to read, then maybe you can add some metadata and send it to the other microcontroller to display something extra. It will still be marked as not super trusted because it’s coming from this python app, but if you’re trusting the developers of the app, then you can make a few choices. At least you can see some additional information about the transaction if our stuff wasn’t able to parse it; like if it’s a coinjoin transaction that is increasing your anonymity set by 50 or whatever, you still see the inputs and outputs, plus the extra information. So this might help the user experience.

Besides the airgapped mode, there is another, completely different use case where you use this as a hot wallet, for example as a lightning hardware wallet. You keep the wallet connected to the computer, the connected computer runs a watchonly lightning wallet, and it communicates with the hardware wallet for signatures. Transactions that strictly increase the balance of your channel could be signed automatically, and you would be in the loop for any transactions reducing your balance. Note that this is not an airgapped mode, it’s a different security mode. You probably want most of your funds in cold storage, and then some amount can be on this hot hardware wallet device. Also, one of the problems with coinjoin is that it can take some time, like a few minutes, especially if you need to retry multiple times, so the keys have to stay online, which is sort of like a hot wallet as well.

You probably don’t want to carry your hardware wallet around all the time, or go home to press a button to confirm your lightning payment or something. So what you can do is setup a phone with an app that is paired to the hardware wallet, so the hardware wallet is aware of the app and the app stores some secret to authenticate itself and then you can set a limit in the morning for your phone. So the hardware wallet should authorize payments up to a certain amount if it’s coming from this secure app. Even more, you can set particular permissions to the mobile app. Like, the exchange wallet should be able to ask the hardware wallet for an invoice but only an invoice to pay you. Same with bitcoin addresses, it can only ask for bitcoin addresses for a particular path. You can make it more flexible and get better user experience if you want to. The hardware wallet is connected to the internet, and the requests are forwarded through the cloud in this scheme.

The first thing that we are releasing is the developer board and the secure element. Another thing I want to discuss is the API of the first version of the secure element. In particular, as developers what would you like it to have? I have a few ideas about what may be useful. Maybe you can think of something in addition.

Obviously, it should store the mnemonic, passwords, passphrases, and it should be able to do all the bip32 derivation calculations and store bip32 public keys. We also want ECDSA and the standard elliptic curve signing that we’re using at the moment. Also, I want to include this anti-nonce-attack mitigation protocol. We don’t want to trust the proprietary secure element; we want to be sure that it is not trying to leak our private keys using a chosen-nonce attack, so we want to enforce this protocol. Also, we want Schnorr signatures, in particular with Shamir secret sharing. This would allow you to take the Schnorr key shares and get a signature out of them that uses the correct combined point. For key aggregation with Shamir, you need a fancy function to combine each of the points from the parties.

It makes sense to use Shamir here because you can do thresholds. With MuSig you can do key aggregation too, but that’s just k-of-k. Then there is verifiable Shamir secret sharing using Pedersen commitments. Say we have 3 keys living on our hardware, and these other parts are on some other backup hardware elsewhere. These can all communicate with each other. Each one generates their own key, and then using Pedersen commitments and some other fancy algorithms, they can be sure that each of them ends up with a share of some common secret but none of them knows what the common secret is. So we have a virtual private key, or full secret, that is split up with the Shamir secret sharing scheme across all of these devices, and none of them knows the full secret. The only problem is how do you back it up? You can’t see a mnemonic corresponding to this. Nobody knows this key, so nobody can display it. The only way to do it is to somehow– let’s say this device controls the display, then it can opt to display its own private key, but then what about the others? They don’t want to communicate theirs to this device to show them, because then it could reconstruct the secret. So you could use something like a display that you attach to the different devices to display each mnemonic, but then the display driver could steal all of them. Ideally the display driver is a very simple device that only has memory and a display. You could also use disposable hardware to generate the private key, but then how do you recover it?

Another idea is seedless backups: if one of the chips breaks, and you’re not using 5-of-5 but say 3-of-5, then you could still make the remaining devices communicate with each other, re-negotiate on the same key, generate a new shard for the new member and then replace all the old parts of this Shamir secret sharing scheme with new ones. Instead of the old polynomial, you choose a new polynomial, and there’s a way such that each of these devices can switch to its new share, and in place of the broken one we get the new member. This is also useful if one of the shards or keys gets compromised or lost, as long as you have enough keys to reconstruct it.

For signing, you don’t need to reassemble the Shamir secret share private key because it’s Schnorr. So the signatures are partially generated, and then they can be combined together into the correct final signature without reassembling the private key on a single machine.

Say you are doing SSS over a master private key. With each shard, we generate a partial signature over a transaction. A Schnorr signature is the random nonce plus the hash times the private key, and it’s linear. So we can apply the same Shamir recombination function to the partial signatures s1, s2 and s3, and you get the full signature s this way without ever combining the parts of the keys into the full key.
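To illustrate that linearity with a toy example (plain modular arithmetic standing in for operations on the curve order, and made-up small numbers rather than real keys), recombining the partial signatures with Lagrange interpolation gives exactly the full k + e·x:

```python
# Toy demo: partial Schnorr signatures over Shamir shares recombine linearly.
# Integers mod a prime stand in for scalars mod the curve order; the numbers
# are made up and nothing here touches a real curve or real keys.
# Requires Python 3.8+ for pow(x, -1, p).
p = 2**31 - 1          # stand-in for the group order (a Mersenne prime)
x, k, e = 123456789, 987654321, 555555555   # private key, nonce, challenge hash

def share(secret, coeff, i):
    """Degree-1 (2-of-n) Shamir share of `secret` evaluated at point i."""
    return (secret + coeff * i) % p

# Shares of the private key and of the nonce at points 1, 2, 3.
x_shares = {i: share(x, 42, i) for i in (1, 2, 3)}
k_shares = {i: share(k, 99, i) for i in (1, 2, 3)}

# Each shard produces a partial signature without ever seeing the full key.
s_shares = {i: (k_shares[i] + e * x_shares[i]) % p for i in (1, 2, 3)}

def lagrange_at_zero(points):
    """Recombine any two shares with Lagrange interpolation at zero."""
    total = 0
    for i, y in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = (num * -j) % p
                den = (den * (i - j)) % p
        total = (total + y * num * pow(den, -1, p)) % p
    return total

subset = {1: s_shares[1], 3: s_shares[3]}      # any 2 of the 3 shards
assert lagrange_at_zero(subset) == (k + e * x) % p
print("recombined partial signatures match k + e*x")
```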

The multisignature is nice when you’re not the only owner of the private keys, like escrow with your friends and family or whatever. The Shamir secret sharing scheme with Schnorr is great if you are the only owner of the key, so you only need the pieces of the key. There’s a paper I can give you that explains how the shards are generated, or how it works if the virtual master private key is generated on a single machine. Multisig is better for commercial custody, and Shamir is better for self-custody and cold storage. Classic multisignature will still be available with Schnorr; you don’t have to use key aggregation, you could still use CHECKMULTISIG. I think you can still use it. ((Nope, you can’t- see the “Design” section of bip-tapscript or CHECKDLSADD.)) From a mining perspective, CHECKMULTISIG makes it more expensive to validate that transaction because it has many signatures.

I was thinking about using miniscript policies to unlock the secure element. To unlock the secure element, you could just use a PIN code or you could use it in a way where you need signatures from other devices or one-time codes from some other authentication mechanism. We needed to implement miniscript anyway. We’re not restricted to bitcoin sigop limits or anything here; so the secure element should be able to verify this miniscript script with whatever authentication keys or passwords you are using. It can even be CHECKMULTISIG with the 15 key limit removed.

I will send Bryan the paper on linear secret sharing for Schnorr threshold signatures, where each shard can be used to generate partial signatures that can later be recombined. Maybe also Pedersen’s paper on verifiable secret sharing from 1991. And then there’s a response to that paper where you can fix how one party can bias the public key; biasing the public key doesn’t really matter, but whatever. GJKR'99 section 4 has it. It’s not a Shamir secret sharing scheme if you don’t reassemble the private key; it’s just a threshold signature scheme that happens to use linear secret sharing. Calling it something else is dangerous because it might encourage people to implement SSSS where the key can be recovered.

Q: What if you got rid of the secure element, and make the storage ephemeral and wipe it on shutdown? Basically what tails does. The user has to enter their mnemonic and passphrase. You can even save the mnemonic and just require a passphrase. Would it be easier and cheaper if we get rid of the secure element?

A: Well, what about an evil maid attack? He uploads certain firmware to your microcontroller. How do you verify he didn’t take the password?

Q: It is possible, but this evil maid has to come back and get access again in the future. But in the mean time, you could destroy it, and take it apart and confirm and install software each time.

A: I really want a disposable hardware wallet. As long as the signature is valid, that is all you need. You’re going to destroy the hardware device after using it.

Q: If you’re using this approach without a secure element, what about a hot wallet situation like for lightning?

A: If they don’t have access to the hardware wallet, then it is fine. But if they do have access, then it will be even worse in the scenario without the secure element. It is using the private keys for the cryptographic operations, and if that is happening on the normal microcontroller then you can observe power consumption and extract the keys. It’s pretty hard to get rid of this. Trezor is still vulnerable to this problem. The Ledger guys discovered a sidechannel attack where, when you derive a public key from your private key, you leak some information about your private key. They are working on fixing this, but basically to derive a public key you need to unlock the device, and this means you already know the PIN code, so it’s not an issue for them. But in automatic mode, if your device is locked but still doing cryptography on a microcontroller, then it is still an issue. I would prefer to only do cryptographic operations on a secure element. If we implement something that can store the keys or erase the keys as well, and it can do cryptographic operations without sidechannels, then I think you can move to a normal developer board or an extended microcontroller and do everything there already. I think it’s not purely a constraint, it’s more like– it has some limitations because we can’t connect the display to the secure element, but these are tradeoffs that we’re thinking about. For disposable, I think it’s perfectly fine to use the— make a very cheap disposable thing. The microcontrollers used in Trezor cost like $2 or something like that. So we can just have a bunch of them, buy them in bulk on digikey or something by the thousands, and then you put them on the board and solder a bit. Ideally you want a chip with a DIP package or a socket and you just put the controller there. It would be nice to try it out.

Q: If you are worried about an evil maid attack, then you could use a tamper-evident bag. The evil maid could replace your hardware wallet; but in theory you would know because of the tamper-evident bag. You can use glitter, take a picture of it, and store that, and when you go back to your device you check your glitter or something. Or maybe it has an internet connection and if it’s ever tampered with, then it says hey it was disconnected and I don’t trust it anymore.

These fingerprint scanners on laptops are completely stupid. Besides, an easy place to find those fingerprints is on the keyboard itself ((laughter)). You should be encrypting your hard drive with a very long passphrase that you type in.

Statechains

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-07-statechains/

With statechains, you can transfer the private key to someone else, and then you enforce that only the other person can make a signature. Ruben came up with an interesting construction where on top of this you can do lightning transactions where you only transfer part of the money, and you can do rebalancing and stuff. But it requires Schnorr signatures and blind signatures for Schnorr, so if it does happen, it won’t be for a while.

How we can help with this is that we can provide functionality on the secure element that extracts the key such that you are sure the private key has moved. You still need to trust the manufacturer, but you can diversify that trust, so that you don’t have to fully trust the federation that is enforcing this policy.

Follow-up

https://bitcoin-hardware-wallet.github.io/

\ No newline at end of file diff --git a/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html b/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html index e87a206543..aff63981bb 100644 --- a/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html +++ b/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html @@ -8,4 +8,4 @@ < Austin Bitcoin Developers < Socratic Seminar 2

Socratic Seminar 2

Date: August 22, 2019

Transcript By: Bryan Bishop

Tags: Research, Hardware wallet

Category: -Meetup

https://twitter.com/kanzure/status/1164710800910692353

Introduction

Hello. The idea was to do a more socratic style meetup. This was popularized by Bitdevs NYC and spread to SF. We tried this a few months ago with Jay. The idea is we run through research news, newsletters, podcasters, talk about what happened in the technical bitcoin community. We’re going to have different presenters.

Mike Schmidt is going to talk about some optech newsletters that he has been contributing to. Dhruv will talk about Hermit and Shamir secret sharing. Flaxman will teach us how to set up a multisig hardware wallet with Electrum. He is going to show us how you can actually do this and some of the things we have learned. Bryan Bishop will talk about his vaults proposal that was recently made. Ideally we can keep these each to about 10 minutes, though they will probably go over a little. Let’s get a lot of audience participation and keep it really interactive.

Bitcoin Optech newsletters

I don’t have anything prepared, but we can open up some of these links and I can introduce my perspective or what my understanding is. If people have ideas or questions, then just speak up.

Newsletter 57: Coinjoin and joinmarket

https://bitcoinops.org/en/newsletters/2019/07/31/

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017169.html

Fidelity bonds for providing sybil resistance to joinmarket. Has anyone used joinmarket before? No? Nobody? Nice try… Right. So, joinmarket is a wallet that is specifically designed for doing coinjoins. A coinjoin is a way to do a little bit of mixing or tumbling of coins to increase the privacy or fungibility of your coins. There’s a few different options to it. It essentially uses IRC chatbots to solicit makers and takers. If you really want to mix your coins, you’re a taker, and a maker on the other side puts up funds to mix with your funds. So there’s this maker/taker model which is interesting. I haven’t used it, but it looks to be facilitated by IRC chat. The maker, the person putting in money, doesn’t necessarily need privacy and makes a small percentage on their bitcoin. It’s all done with smart contracts, and your coins aren’t at risk at any point, except in as much as they are stored in a hot wallet to interact with the protocol. The sybil resistance they are talking about here is that– so, Chris Belcher has a great privacy entry on the bitcoin wiki, check that out sometime. He’s one of the joinmarket developers. He notices that it costs very little for a malicious actor to flood the network with a bunch of makers, and this breaks privacy because it raises the chance of you running into a malicious or fraudulent chainalysis-type company. It’s not that they can take your coins, but they would be invading your privacy. The cost of them doing this is quite low, so the chance of them doing it is quite high as a result.

Similar to bitcoin mining, this is sybil resistance by burning energy for proof-of-work. There’s two types of potential proof-of-work in this scenario for sybil resistance. One is that you can burn bitcoin, and the other is that you can lock up bitcoin, both of which are proof that you have some skin in the game. You can prove both of these things on chain, and it’s a way of showing that you locked up these coins, you locked them up once for this IRC nickname, and this gives you credibility to trade as a regular person. So you can’t just have 1000 chatbots to snoop… It’s 30 to 80,000 BTC. That would be the lock-up. This is locking up this much BTC to take up some total capacity of the joinmarket situation. It would be no worse than the current situation, where they have the capability to do it anyway, so this makes it more expensive. It also makes it more expensive for the average user, which is the downside. The cost of legitimate makers staking or locking up or burning their coins is going to be passed on to the takers. In the way that it is set up now, the mining fee is substantially more than what these makers are making by doing the mixing, so the theory according to Chris is that people would be willing to pay a higher fee for the mixing, because they are already paying 10x that for the mining fees. I don’t know how many coinjoins you can do in a day, but there’s public listings of makers and what they will charge and what their capacity is. Some people are putting up 750 BTC and you can mix with them, and they charge 0.0001% or something. The higher cost is for sybil protection, it’s a natural rate. If you’re paying 10x more to process the transaction on the bitcoin network, then maybe you’re willing to put in a few more sats to pay for this sybil resistance.

Samourai and Wasabi wallet teams had some interesting discussions. They were talking about address reuse and how much it really reduces privacy. I don’t think it’s a resolved issue, they are both still going back and forth attacking each other. All of these coinjoin implementations are exposed to this to a certain extent. So there’s always tradeoffs. Higher cost, some protection, still not perfect: a company could still be willing to lock up those coins. An interesting thing about that is that it increases the cost for Chainalysis-type services– they will have to charge more to their customers; so this squeezes their margins and maybe we can put them out of business.

Newsletter 57: signmessage

https://github.com/bitcoin/bitcoin/issues/16440

https://github.com/bitcoin/bips/blob/master/bip-0322.mediawiki

Bitcoin Core has the ability to do signmessage, but this functionality was only for single-key pay-to-pubkeyhash (P2PKH). Kallewoof has opened a pull request that allows you to have that same functionality with other address types. So for segwit, P2SH, etc. I think it’s interesting, and it’s forward compatible with future versions of segwit, so taproot and Schnorr are included and would have the ability to sign for those scripts with these keys, and it’s backwards compatible because it has the same functionality for signing with a single key. Yeah, it could be used for a proof of reserves. Steven Roose did the fake output with the message, that’s his proof-of-reserves tool. You construct an invalid transaction to do proof-of-reserve. If Coinbase wanted to prove they had the coins, they could create a transaction that is very close to a valid transaction but is technically wrong, but still with valid signatures. bip322 is just signing the message.
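For reference, the single-key flow that exists in Bitcoin Core today looks roughly like this (the address, message, and signature are placeholders); bip322 is about generalizing the same idea beyond P2PKH:

```
$ bitcoin-cli signmessage "<P2PKH address>" "hello"
<base64-encoded signature>
$ bitcoin-cli verifymessage "<P2PKH address>" "<base64-encoded signature>" "hello"
true
```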

All you can do with signing is prove that you at some point had that private key. If someone stole your private keys, you can still sign with your private key, but you no longer have the coins. You have to prove that you don’t have it; or that someone else doesn’t have it. Or that, at the current blockheight, you had the funds. That’s the real challenge of proof-of-reserves, most of the proposals are about moving the funds.

Newsletter 57: Bloom filter discussion

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017145.html

https://github.com/bitcoin/bitcoin/issues/16152

There was some consternation here about disabling bloom filters as a default in Bitcoin Core. Right now you might be using an SPV wallet or client, like Breadwallet, which is one of the major ones using SPV. The discussion was that in the previous week, there was a merged pull request that disables bloom filters by default. A newer version of Bitcoin Core would no longer serve these bloom filters to lite clients by default. My Breadwallet wouldn’t be able to connect to Bitcoin Core nodes and do this by default, but again someone could turn it back on.

Someone was arguing that Chainalysis is already running all these bloom filter nodes anyway, and they will continue to collect that information. Many people are running Bitcoin Core nodes that are over a year old, so those aren’t going to go anywhere soon. You will still be able to run some lite clients. I think Breadwallet runs some nodes too. You could always run your own node too, and serve those bloom filters yourself.
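If I remember the option name right, the switch in question is peerbloomfilters, so opting back in on your own node would be a one-line bitcoin.conf change (treat the exact name and default behavior as an assumption about the release being discussed):

```
# bitcoin.conf: keep serving bip37 bloom filters to lite clients
peerbloomfilters=1
```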

Does anyone use a wallet or is aware that you are using a wallet that is an SPV lite client? Electrum doesn’t do the bip37 bloom filters, it uses a trust model.

Is the idea that bip157 will be on by default, to replace that? Is Neutrino going to be on by default? It is for btcd. I imagine bitcoind will have something similar. It’s just network commands. You store a little bit more locally, with the Neutrino filters. You have to keep track. If there’s a coinbase commitment or whatever, you’re going to need to check that too. That would have to be a soft-fork.

Lightning news (Buck Perley)

I am going to go through the lightning topics on the socratic seminar list. I was on a plane for a few hours yesterday, so I prepared some questions and hopefully we can spark some questions around those.

Watchtowers

https://blog.bitmex.com/lightning-network-part-4-all-adopt-the-watchtower/

Bitmex had something about running watchtowers. The nice thing about their article is that they go through the different scenarios when you’re running lightning and what enforces good behavior in lightning. If you look at the justice transaction— in lightning, when you have two parties that enter into the channel and rebalance funds constantly without having to publish a transaction, the way you enforce non publication of the older state is by a penalty or justice transaction. If someone tries to publish an old state, you can steal all of the funds from the channel as a way to punish them. This is called the justice transaction. One of the issues with this, and lightning in general, is that your node has to be online all the time because the only way to publish the justice transaction is if you’re online and your node notices that the incorrect state had been published.

lnd v0.7 was recently published with watchtowers available. What a watchtower is, is it lets you go offline. Basically, if you’re offline, you can hire another node to keep an eye on the blockchain for you and they will publish the justice transaction on your behalf. There’s some interesting constructions where you can pay back the watchtower so you can kind of split the punishment transaction funds that are in that transaction. They don’t talk about that here in the article.

One of the things that is interesting is comparing justice transactions to eltoo where they have SIGHASH_NOINPUT. I thought that was an interesting point of discussion.

Q: I upgraded my node so that I could utilize that functionality. How do you transact with other nodes and set them up to be a watchtower? It’s not clear to me how that works.

A: You have to basically find one, and you point your node at it and you say this is going to be my watchtower node. It does add this interesting aspect as to how the economics of lightning kind of… the incentives of people to be routing nodes, and be routing funds and earning fees just for having good uptime. Casa published their node heartbeat thing where they actively reward people; I forget the mechanics of how they keep them honest. So you give the watchtower updates of the justice transaction; right now there’s a privacy tradeoff. Without eltoo, they have to have every single state update in the channel. What’s nice about eltoo is that you basically don’t have to store all that state. With eltoo, you don’t need to remember the intermediate states, just the latest one.

Q: So the other nodes are providing watchtower services for me; and unless I upgrade my node to have the watchtower, then other people can do the same thing. Do you have to open a channel?

A: No, you’re just giving them the raw transaction.

Q: Are they encrypting the justice transaction?

A: I’m not sure. There were mechanisms discussed to split it up even further. The idea was the watchtower would only be aware of your transaction once it is published; they wouldn’t be able to know what the transaction details are, beforehand. They would constantly be watching the transaction and try to decrypt all the time.

Q: Has anyone tried to commercialize any of this?

A: Well, nobody has been able to commercialize anything about lightning. lnbig has locked up $5m in the lightning network and is making $20/month. At this point, you’re just doing it for altruistic reasons.

Q: One of the arguments is that if the bitcoin fees go way up, you have the benefit of having these routing nodes and channels already setup.

A: Yes, but right now it’s not a viable business. It could be in the future. Right now, my sense is that you’re not making money on the fees, but you’re making liquidity and this makes it more viable for your customers to use lightning. So really your business model is more about creating liquidity and helping utility rather than making money. The idea is that people would make fees as watchtowers, routing fees, increasing liquidity, and there’s another business model where people can pay for inbound liquidity. Those are the three main lightning network business models that I know about.

Steady state model for LN

https://github.com/gr-g/ln-steady-state-model

LN guide

https://blog.lightning.engineering/posts/2019/08/15/routing-quide-1.html

Is anyone here running a lightning node? Okay, a few. One of the big knocks against lightning is that it’s not super usable. Part of that they are trying to help on the engineering side with watchtowers and autopilot and new GUIs. Another big part of it is just guides on how to run a lightning node. They go through useful flags that you could enable. If you ever just run the lnd CLI help, it’s a huge menu of stuff to be aware of. Any horror stories or headaches of dealing with lightning and things that would be helpful?

Q: More inbound liquidity.

A: What do you think could help that? Is there anything in the pipeline that would helpful?

Q: Grease. If you download their wallet, they give you $100 of liquidity. After you run out, you have to get more channels. It’s kind of a strange incentive problem. It’s kind of like a line of credit. Locking up funds is costly on the other end, so they need a good reason to think they should be doing this.

One of the problems with lightning is essentially you have to lock up funds. In order to receive funds, you need to have a channel where someone has their own funds locked up. Otherwise, your channel cannot be rebalanced when you earn more on your side. If Jimmy has $100 of bitcoin on his side of the channel, and I have $100 on my side, someone can pay Jimmy up to $100 through me. Once he has made $100 and our channel has been rebalanced, he can no longer receive any more money. For on-chain payments, anyone can pay you immediately, and on lightning that’s not the same. Liquidity is a major problem.

They have one Loop server. It’s basically submarine swaps. It’s a tool that leverages submarine swaps, built by Lightning Labs; it’s a way to move funds out of your off-chain lightning wallet. You loop out by sending to another wallet which will send you on-chain funds, and this will give you inbound liquidity. Or you can pay your own lightning wallet from on-chain funds. If you have seen those store interfaces where you can pay with lightning, or pay with on-chain bitcoin to the lightning wallet, for that they are using submarine swaps. It’s not cheap either, because there are fees associated with it. You get those funds back on chain, but you have to pay transaction fees at that point. And then there’s fees associated with– the Loop server is charging fees for this service, which is another business model.

They have a mechanism called Loopty Loop where you recursively continue to loop out. You loop funds out, you get on-chain funds, and you loop out again. You can keep on doing that and get inbound liquidity, but again it’s not cheap, and it’s not instant. So you’re losing some of the benefits of lightning.

Static channel backups

Lightning Labs was talking about its mobile app now. One of the interesting things about this update was that they have static channel backups to iCloud. I was kind of curious if anyone has thoughts on that. I think it’s cool you can do cloud backup for these. It stores the state of the channel, including what the balance is. If your bitcoin node goes down and you just have your mnemonic, that’s fine. But with LN, you have off-chain state where there’s no record of it on the blockchain. The only other record is with the counterparty, but you don’t want to trust them. If you don’t have backups of your state, your counterparty could publish a theft transaction and you wouldn’t know about it. You might accidentally publish an old state too, which would give your counterparty a chance to steal all the funds in the channel, which is another thing that eltoo can prevent. If you have the app on iOS, you’re automatically updating these things and you don’t have to worry about it, but you’re trusting Apple iCloud.

Suredbits playground

This lets you pay lightning micropayments for things like spot prices or NBA stats, and if I were to press something… basically, it’s paying per API call for small requests. So it would be almost like AWS on demand, that’s how I think about it.

Boltwall

https://github.com/Tierion/boltwall

On the topic of API stuff, this was something that I recently just built and published called boltwall. This is nodejs express-based middleware that can be put in front of routes that you want protected. It’s simple to setup. If you have your lightning node setup, then you can pass the necessary config in. These configs are only stored on your server. The client never sees any of this. Or you can use opennode, which for those who haven’t used it, it’s a custodial lightning system where they manage the LN node and you put your API key into boltwall. I think that’s best for machine-to-machine payments.

I used macaroons as part of the authorization mechanism. Macaroons are used in lnd for their authorization and authentication. Macaroons are basically cookies with a lot more fine grained detail. Web cookies are usually just a json blob that says here’s your permissions and it’s signed by the server and you authenticate the signature. What you do with macaroons, is they are basically HMACs so you can have chains of signed macaroons that are connected to each other. I have one built in here that is a time-based macaroon where you can pay one satoshi for one second of access. When I’m thinking about lightning, there’s a lot of consumer-level pain points involved.
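As a rough sketch of the time-based idea using the pymacaroons package (the key, identifier, and caveat format here are made up for illustration and are not boltwall’s actual code):

```python
# Sketch: mint a macaroon that expires, and verify it server-side.
# Uses the pymacaroons package; secrets and caveat format are illustrative only.
import time
from pymacaroons import Macaroon, Verifier

ROOT_KEY = "server-side-secret"          # never leaves the server

def mint(seconds_paid):
    m = Macaroon(location="boltwall-demo", identifier="session-123", key=ROOT_KEY)
    # First-party caveat: access only until now + the number of seconds paid for.
    m.add_first_party_caveat("time < %d" % (time.time() + seconds_paid))
    return m.serialize()

def check_time(caveat):
    if not caveat.startswith("time < "):
        return False
    return time.time() < float(caveat[len("time < "):])

def allowed(serialized):
    v = Verifier()
    v.satisfy_general(check_time)
    return v.verify(Macaroon.deserialize(serialized), ROOT_KEY)

token = mint(30)          # e.g. 30 satoshis paid -> 30 seconds of access
print(allowed(token))     # True while the window is open; verification fails afterwards
```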

Q: Why time based, instead of paying for a request?

A: It depends on the market. Rather than saying I’m paying for a single request, you could say instead of the back-and-forth handshake, you get access for 30 seconds and be done with it until the time expires. I built a proof-of-concept application that is like a yalls (which is like a medium.com) for reading content where rather than paying a bulk amount for a piece of content, you say oh I’m going to pay for 30 seconds and see if you want to keep reading based on that. It allows for more flexible pricing mechanisms where you can have a lot more fine grained price discrimination based off of the demand.

Hardware wallet rant

https://stephanlivera.com/episode/97/

I have been talking about multisig on hardware wallets lately. We are going to start with something that is bad, and then show something better. Go ahead and fire up electrum. Pass the --testnet flag. We’re not going to do the personal server stuff.

https://github.com/gwillen/bitcoin/tree/feature-offline-v2

https://github.com/gwillen/bitcoin/tree/feature-offline-v1

https://github.com/gwillen/bitcoin/tree/feature-offline

We have a Bitcoin Core full node here. We have QT up right now, but you could use the cli. There’s a spendable component, and that’s nothing because all I have is watchonly. I’m using Bitcoin Core for my consensus rules, but I’m not using it as a wallet. I’m just watching addresses, keeping track of transaction history, balances, that kind of thing. So we have electrum personal server running and bitcoin core running. So I boot up electrum and run it on testnet. I also set a flag to say, don’t accidentally connect to someone else and tell them what all my addresses are. You put this in the config file, and also on the command line again just to be sure … Yes, you could also use firewall rules which might be smart in the future.
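Concretely, the invocation I am describing is something like the following, where the server is your own Electrum Personal Server rather than a public one (the exact host and port are placeholders):

```
$ electrum --testnet --oneserver --server localhost:50002:s
```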

We can look at a transaction in electrum, so it’s saying that I have this bitcoin, which we can see I have here, and my recent transactions, and see that I received it. Now if I was going to receive more bitcoin, there’s this cool “show on trezor” button. If I hit this, it pops up on the trezor and it shows it. This is an essential part of receiving bitcoin; you don’t ask for your receiving address on your malware-infected computer. You want to do this check on a quorum of hardware wallets. Do you really want to go to 3 different hardware wallet locations before receiving funds? If you’re receiving $100m, then yes you do. If you’re doing 3-of-5, and you only confirm on 2-of-5, then the attacker might have 3-of-5, but at least the 2-of-5 have confirmed they are participants. Coldcard will do a thing where you register the group of pubkeys so that you know we’re all in this… Coldcard has like 3 options: one is upload a text file with the pubkeys. Another one is that when you send it a multisig output, it will offer to create this and show you the xpubs, and ask if you want to register it; and the third one is trust, which is what the others do. Casa gives you all the xpubs… it’s another way this works; you can put those into an offline airgapped electrum client that never touches the internet, and it can generate receiving addresses. So you can say, well, this machine that has never touched the internet says that these xpubs will give me these addresses, so 2-of-5 plus offline electrum then maybe I’m willing to move forward. There’s QR codes built in for setting these up.

I don’t like when trezor is plugged into this machine, because I think the machine might be malware infected. But this device could be a fake trezor, it might be a keyboard that installs malware or something and I don’t even see it type in the malware urls. If we have three different hardware devices, I want four laptops. One that is connected to the bitcoin network; and each of the other three laptops are connected to the hardware wallet. I pass them bitcoin transactions by QR code. That whole ecosystem of computer and hardware wallet can be eternally quarantined and never connected to the internet. So we can build a hardware airgap into this.

I recommend a laptop because they have webcams and batteries. In this demo, we have to pick up the laptops and aim the screens at the cameras. A nice portable hardware wallet with a QR code scanner, so you don’t have to pick up a whole laptop, would be nice. With desktops, this is going to be painful because you have to lug your desktop to the safety deposit box. Do note that many banks don’t have power outlets in their vaults, so you need a battery. Really any 64-bit machine should be fine. Historically, I used 32-bit machines, but tails is no longer compatible with that and some Ubuntu versions are complaining. In this demo, we’re going to use native segwit, and it’s a multisig setup, so choose that option.

Electrum is very finnicky. I hit the back button. I went back to see if this was the correct one and then I lost everything. I am using a hardware wallet with deterministic key derivation, so I can get back to that. The back button should actually prompt you and ask do you really want to undo all this work. The big warning is, do not hit the back button.

You may have seen my twitter threads. I would accept a very bad hardware wallet, if it allowed multisig. Adding a second hardware wallet is only additive and can help protect against hardware wallet errors. On twitter, the wallet manufacturers said no big deal. There are three big issues with Ledger. It does not support testnet. They take the public key and they show the representation on mainnet, and they ask if you want to send there. It’s not that they don’t support it; they supported it in the past, and then they stopped supporting it. So, no testnet. They also don’t have a mechanism for verifying a receive address. Only if you want to use it insecurely will it show you. The third issue is that they don’t support sending either, because they don’t do a sum of the inputs and the sum of the outputs thing. They don’t validate what’s change and what’s going to someone else. They just show you a bunch of outputs and ask you whether they look correct, but as a human you have no idea of knowing what all the inputs and outputs are, unless you’re extremely careful and very careful and taking notes. Otherwise they might be sending change to your attacker or something. Trezor can’t verify that the multisig address belongs to the same bip32 chain; they can’t verify the quorum, but they can verify their own key. So let’s say it’s 3-of-5, you can go on 3 devices that will say yes I’m one of the five on this 3-of-5, but you have to sign on 3 different devices so that you now know you’re 3 of those 5. You can always prove a wallet is a member of the quorum, except on Ledger. They used to export the data without telling you what the bip32 path was, which is a huge hole. Most wallets can prove they are in a quorum– do they understand one of the outputs is in the same quorum as the input? As far as I can understand, it’s only Coldcard that understands that today. Trezor knows it’s part of it, but they don’t know which part. So if you’re signing with a quorum of devices that you control with Trezor then you’re not at risk, but if you have a trusted third party countersigning then it gets a little squirrely because they could be increasing the number of keys that a trusted third party has. Native segwit would have allowed us to do the xpubs out of order.

Bitcoin Core can display a QR code, but it can’t scan them. The issue has been open for like 2 years.

2-of-2 is bad because if there’s a bug in either of them (not even both) then your funds are lost forever and you can’t recover it. An attacker might also do a Cryptolocker-style attack against you, and force you to give them some amount of the funds to recover whatever you negotiate.

Each of these hardware wallets have firmware, udev rules, and computer-side things. Some of them are wonky like, connect to a web browser and install some shit. Oh god, my hardware wallet is connected to the internet? Well, install it once, and then use it only on the airgap.

You need to verify your receive address on the hardware wallets. Don’t just check the last few characters of the address on the hardware device’s screen.

When verifying the transaction on the Ledger device, the electrum wallet has a popup occluding the address. The Ledger wallet also doesn’t display the value in the testnet address version, it’s showing a mainnet address. I would have to write a script to check if this address is actually the same testnet address.

Chainalysis is probably running all the electrum nodes for testnet anyway.

I would like to say this was easy, but it was not. A lot of this was one-time setup costs. It’s not perfect. It’s buggy. It has issues. But it does technically work. At the end of the day, you have to get signatures from two different pieces of hardware. You can be pretty chill about how you set things up.

Q: If you use the coldcard, can you get the xpub without having to plug in the coldcard?

A: I think you can write xpubs to the sdcard.

A: What I really want is… there seems to be some things, like showing an address, which you can only do if you plug it in. A lot of the options are buried in the menus. Coldcard is definitely my favorite.

Q: The Trezor only showed one output because the other one was change, right? So if you were creating three outputs, two that were going to addresses that didn’t belong to trezor or the multisig quorum, would it show the two outputs on the trezor?

A: I think so. But only way to know is to do it.

A: I was talking with instagibbs who has been working on HWI. He says the trezor receives a template for what is the change address; it doesn’t do any verification to define what is change, it just trusts what the client says. So it might not be better than a Ledger. It just trusts what the hot machine knew. Ledger seems to do a better job because– the trezor could be hiding the— Coldcard is clearly doing the best job, so you can teach it about the xpubs and it can make the assertion for itself without having to trust others.

Vaults (Bryan Bishop)

Cool, thanks for describing that.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html

https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft

https://bitcoinmagazine.com/articles/revamped-idea-bitcoin-vaults-may-end-exchange-hacks-good

Hermit

https://github.com/unchained-capital/hermit

Originally at Unchained we started with using Electrum. At that point, we weren’t doing collaborative custody. We had all the keys; it was multisig. So in theory we had employees and signing teams using electrum, and it was a terrible experience. Using electrum every day, there were always weird issues. I also don’t like how Electrum constructs these addresses.

Today I am not going to talk about multisig, instead I am focusing on this new tool called Hermit. I want to give a small demo. I’ll do a few slides. It’s a command line sharded wallet for airgap deployments. There’s some similarity to the QR code airgap wallets. At Unchained, we’re focusing on collaborative custody where we have some of the keys and the customers have some of the keys as well. This would not work if we were using Electrum ourselves, it would be too painful. We had to build software that streamlines it for us. The net result is that we’re protecting keys. There’s a big difference between an organization protecting keys, and an individual protecting keys.

In particular, for us, we’re not a hot wallet oriented company. Our transactions are planned at least weeks if not months or years in advance. We like to keep everything multisig and cold storage. We like airgaps. Hardware wallets have a temporary airgap, which can be nice. We don’t want to be spending thousands of dollars on HSMs to keep a number secret. The way we view it, we have so many devices scattered around the company for testing and so on– every developer has many different devices of each type. These hardware wallets can’t be more than $100 each, otherwise it’s too expensive to trust new products. We don’t like Trezor Connect, where they know the transaction we’re signing; that’s deeply frustrating. Again, we’re not individuals here, this is an organization. We’re a company. We have to write some stuff down explicitly or else it will get lost. As a person, you might remember, but you should write it down too. Also as an organization, we have to coordinate. As a person, you remember the keys, locations, things like that. You don’t need to send an email to yourself to let you know to take step 2 in the process, whereas the company has this requirement. Most companies have employee turnover; we don’t seem to, but we could. There’s also a lot of information about us that is public, like the address of this commercial building. We have a lot of hardware wallets here at this location, but none of them matter. There’s also scheduling issues like people going on vacation and other things. No single person can handle all the signing requests, they would burn out. So we have to rotate, and have uptime, and so on.

So what are the options? Well, hardware wallets. We want to encourage customers to use hardware wallets, and hopefully there are better hardware wallets coming down the line. These are better than paper wallets. Because of the multisig product we offer, we believe that even bad wallets when put together have a multiplicative effect on security.

Today, I don’t think it’s reasonable to have customers using Hermit; it’s too much for them. They will probably use hardware wallets for the foreseeable future. We have been using hardware wallets and we would like to move to something like Hermit. One candidate we looked at and really liked was a project called Subzero at Square, which was meant to be an airgapped tool and serve as cold storage. It’s a really good product, but it wasn’t quite sufficient for us. We needed something a little more complex.

I am showing here a diagram of two different ways of thinking about protecting a key: multisig and Shamir secret sharing. Can you get some redundancy without using multisig? Sure, you can use Shamir secret sharing. There are some cool properties, like requiring 2-of-3 shares to be combined together in the same location. One amazing aspect of this scheme is that if you have one shard, you have precisely zero pieces of information. It’s a discrete jump: as soon as you have enough shards, you get the secret. It’s not just cutting up pieces of a mnemonic or whatever, which would reduce the search space for an attacker. That’s not how the secret shares work.
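A toy sketch of that property, with a small made-up prime field rather than anything production-grade: any two of the three shares recover the secret, while one share on its own is consistent with every possible secret.

```python
# Sketch: 2-of-3 Shamir split over a prime field.
# Requires Python 3.8+ for pow(x, -1, P); field size and secret are toy values.
import secrets

P = 2**127 - 1                      # prime field (illustrative, not a real parameter)

def split(secret):
    a = secrets.randbelow(P)        # random degree-1 coefficient
    return {i: (secret + a * i) % P for i in (1, 2, 3)}

def recover(i, yi, j, yj):
    # Lagrange interpolation at x = 0 from two points (i, yi) and (j, yj).
    li = (-j) * pow(i - j, -1, P) % P
    lj = (-i) * pow(j - i, -1, P) % P
    return (yi * li + yj * lj) % P

shares = split(42)
assert recover(1, shares[1], 3, shares[3]) == 42
# One share alone reveals nothing: for any guessed secret g, there is a line
# through (i, share_i) whose constant term is g, so all secrets remain possible.
```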

SLIP 39 makes it more convenient to do encrypted Shamir shards with mnemonics. SLIP 39 was put out by the Trezor folks. As much as we poop on hardware wallets, I have to salute the SatoshiLabs team for being ahead of everyone and releasing foundational code like bip32 and implementing and doing it in an open source way. Reading their code was how I understood some of their ideas. Another thing they have done is they have released a 2 level Shamir shard system. You want to create a way to do Shamir secret sharding, without all shards being equal. You can distinguish more or less valuable shards or distribute them to people you trust more or less depending on the security level of each person or each group. So you can have a 1-of-3 secret, and the second group can have a different setup like 3-of-5. This is not multisig– where you can do this asynchronously in different locations and you’re never forced to be in one spot with all your keys. This is not that, but it gives you flexibility.

I am going to give you a quick demo of what Hermit looks like.

Hermit is open-source software. It’s “standards compliant” but it’s a new standard. SLIP 0039 is not really cryptographically reviewed yet. We have contributed not only Hermit as an application that uses SLIP 39, but we have also been pushing code back at the library layer to say this is the Shamir implementation; so far this is the one that people seem to be going with, which is exciting. It’s designed for airgapped deployments, which is nice.

Hermit is not multisig. Multisig and sharding are complementary. For an individual, instead of managing shards, maybe manage more than one key. For us, we’re already in a multisig context here at Unchained, and we want to be able to do a better job and have better key management controls. Hermit is also not an online wallet. How did it know what to put in here? It had no idea. Something else has to produce the QR code with bip174 PSBTs. Next month, I am excited to get some time to present what we think is the other half of this, a tool for wallets. An online wallet is producing these PSBTs and honestly, I suggest printing them out. Print out all the metadata, and come into the room and then sign.

Hermit is not an HSM. It is a piece of python software that runs on a commodity laptop, which is not an HSM. The Ledger is a high-security little enclave that lives on the electronic device and has interesting properties. In particular, the ways to communicate in and out of there are really restrictive and it will never reveal the key. If you think about it, that’s what a Hermit installation is. You control the hardware, it’s completely open-source. This is basically what you wanted from an HSM, especially if you run it in a context that is extremely secure. HSMs are presumably secure even if you plug them into a malware-infected laptop, though.

Q: So you have a signing ceremony and the shard owners walk in and enter their part and then move on?

A: Yes, that’s one way to do it.

Q: So to produce a bitcoin signature, you need a qourum of shards from each group.

A: Correct, it’s unlocking all shards together in memory at one location and then acting with that. What we like with that is it’s a nice mix for us because we can create adversarial signing teams that are watching each other and limiting the opportunities for collusion. Using SLIP 39 is really nice and flexible for organizations.

Trezor claims that they will support SLIP 39 by the end of the summer, which is really interesting because you can recover shards one at a time in a Trezor and just walk around to each shard and collect those and get the full secret.

Jimmy stuff

Last but not least, Jimmy has something to sell us. This is the little bitcoin book. It’s available on Amazon right now. I had seven coauthors on this. We wrote the book in four days, which was a really fun experience. It’s meant for someone who doesn’t know anything about bitcoin. It’s a very short read, at 115 pages. About 30 pages of those are Q&A. Another 10 are glossary and things like that. So it’s really more like 75 pages that you can read pretty quickly. We had in mind a person that doesn’t know anything about bitcoin. I have given this book to my wife who doesn’t know much about what’s going on; it’s meant to be that sort of book that is understandbale. The first chapter is, “what is wrong with money today?” and what’s going on with the current system? Doesn’t mention bitcoin once, and then it goes to what is bitcoin. We tell the story of the Lehman bank and talk about what drove Satoshi to build bitcoin. The other chapter is about price and volatility. We asked a lot of people that we knew that didn’t know about bitcoin, they ask what is it backed by? Why does it have a market price? Chapter four is about why bitcoin matters for human rights and this is just talking about it globally and why it matters right now. There’s a very Silicon Valley centric perspective on bitcoin which is that it’s going to disrupt or whatever, but there’s real people right now that are benefiting from bitcoin that did not have financial tools or bank accounts available before. Right now there are people escaping from Venezuela for bitcoin. There’s a discount of bitcoin in Colombia right now, because there’s so many refugees coming out of Venezuela with their wealth in bitcoin and they sell it immediately in Colombia to get started on their new life. There are cab drivers in Iran asking me about bitcoin. This is a real thing, guys. Getting that global perspective is a big goal of this book. Chapter five is a tale of two futures and this is where we speculate about what the future would be like without bitcoin, and then what the future would be like with bitcoin. Lastly, so that’s where the book ends, and then we have a bunch of Q&A and stuff that you might want to know about. There’s questions like, who is Satoshi? Who controls bitcoin? Isn’t it too volatile? How can it be trusted? Why have so many exchanges been hacked? There’s a whole section on the energy question. All kinds of stuff like that. Additional resources, like lopp’s page, podcasts, books, websites, things like that. I am going to probably send this with my Christmas cards or something. Half my friends have no idea what I am doing here. This is my way of informing them. It’s number on Amazon for the category of digital currencies.

\ No newline at end of file +Meetup

https://twitter.com/kanzure/status/1164710800910692353

Introduction

Hello. The idea was to do a more socratic style meetup. This was popularized by BitDevs NYC and spread to SF. We tried this a few months ago with Jay. The idea is we run through research, news, newsletters, podcasts, and talk about what happened in the technical bitcoin community. We're going to have different presenters.

Mike Schmidt is going to talk about some optech newsletters that he has been contributing to. Dhruv will talk about Hermit and Shamir secret sharing. Flaxman will teach us how to set up a multisig hardware wallet with Electrum. He is going to show us how you can actually do this and some of the things we have learned. Bryan Bishop will talk about his vaults proposal that was recently made. Ideally we can keep these each to about 10 minutes, though they will probably go over a little. Let's get a lot of audience participation and keep it really interactive.

Bitcoin Optech newsletters

I don’t have anything prepared, but we can open up some of these links and I can introduce my perspective or what my understanding is. If people have ideas or questions, then just speak up.

Newsletter 57: Coinjoin and joinmarket

https://bitcoinops.org/en/newsletters/2019/07/31/

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017169.html

Fidelity bonds for providing sybil resistance to joinmarket. Has anyone used joinmarket before? No? Nobody? Nice try… Right. So, joinmarket is a wallet that is specifically designed for doing coinjoins. A coinjoin is a way to do a little bit of mixing or tumbling of coins to increase the privacy or fungibility of your coins. There are a few different options to it. It essentially uses IRC chatbots to solicit makers and takers. If you really want to mix your coins, you're a taker, and a maker on the other side puts up funds to mix with your funds. So there's this maker/taker model, which is interesting. I haven't used it, but it looks to be facilitated by IRC chat. The maker, the person putting in money, doesn't necessarily need privacy and makes a small percentage on their bitcoin. It's all done with smart contracts, and your coins aren't at risk at any point, except insofar as they are stored in a hot wallet to interact with the protocol. The sybil resistance they are talking about here: Chris Belcher, one of the joinmarket developers, has a great privacy entry on the bitcoin wiki, so check that out sometime. He notices that it costs very little for a malicious actor to flood the network with a bunch of makers, and this breaks privacy because you might end up mixing with a malicious or fraudulent chainalysis-type company. It's not that they can take your coins, but they would be invading your privacy. The cost of them doing this is quite low, so the chance of them doing it is quite high as a result.

Similar to bitcoin mining, which gets sybil resistance by burning energy for proof-of-work, there are two potential kinds of proof in this scenario. One is that you can burn bitcoin, and the other is that you can lock up bitcoin, both of which are proof that you have some skin in the game. You can prove both of these things on chain, and it's a way of showing that you locked up these coins once for this IRC nickname, which gives you credibility to trade as a regular person. So you can't just have 1000 chatbots to snoop… It's 30 to 80,000 BTC. That would be the lock-up. This is locking up this much BTC to take up some total capacity of the joinmarket situation. It would be no worse than the current situation, where they have the capability to do it anyway, so this makes it more expensive. It also makes it more expensive for the average user, which is the downside. The cost of legitimate makers staking or locking up or burning their coins is going to be passed on to the takers. In the way that it is set up now, the mining fee is substantially more than what these makers are making by doing the mixing, so the theory according to Chris is that people would be willing to pay a higher fee for the mixing, because they are already paying 10x that in mining fees. I don't know how many coinjoins you can do in a day, but there are public listings of makers and what they will charge and what their capacity is. Some people are putting up 750 BTC and you can mix with them, and they charge 0.0001% or something. The higher cost is for sybil protection, it's a natural rate. If you're paying 10x more to process the transaction on the bitcoin network, then maybe you're willing to put in a few more sats to pay for this sybil resistance.

The Samourai and Wasabi wallet teams had some interesting discussions. They were talking about address reuse and how much it really reduces privacy. I don't think it's a resolved issue; they are both still going back and forth attacking each other. All of these coinjoin implementations are exposed to these problems to a certain extent. So there are always tradeoffs. Higher cost, some protection, still not perfect: a company could still be willing to lock up those coins. An interesting thing about that is that it increases the cost for chainalysis services– they will have to charge more to their customers; so this squeezes their margins and maybe we can put them out of business.

Newsletter 57: signmessage

https://github.com/bitcoin/bitcoin/issues/16440

https://github.com/bitcoin/bips/blob/master/bip-0322.mediawiki

Bitcoin Core has the ability to do signmessage, but this functionality was only for single key pay-to-pubkeyhash (P2PKH). Kallewoof has opened a pull request that allows you to have that same set of functionality with other address types, so for segwit, P2SH, etc. I think it's interesting, and it's forward compatible with future versions of segwit, so taproot and Schnorr would be included and would have the ability to sign for those scripts with their keys, and it's backwards compatible because it has the same functionality for signing with a single key. Yeah, it could be used for a proof of reserve. Steven Roose did the fake output with the message, that's his proof of reserve tool. You construct an invalid transaction to do proof-of-reserve. If Coinbase wanted to prove they had the coins, they could create a transaction that is very close to a valid transaction but is technically wrong, but still with valid signatures. bip322 is just signing the message.
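For reference, here is a minimal sketch of the existing single-key flow that bip322 generalizes, assuming a running bitcoind with a loaded wallet; shelling out to bitcoin-cli is just for illustration, and the message text is a placeholder:

```python
# Sketch of the legacy single-key flow that bip322 generalizes: Bitcoin Core's
# signmessage/verifymessage RPCs only work for P2PKH addresses today.
# Assumes bitcoind is running with a wallet loaded.
import subprocess

def cli(*args):
    return subprocess.check_output(["bitcoin-cli", *args]).decode().strip()

address = cli("getnewaddress", "", "legacy")                 # P2PKH address
signature = cli("signmessage", address, "hello world")
print(cli("verifymessage", address, signature, "hello world"))  # "true"
```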

All you can do with signing is prove that you at some point had that private key. If someone stole your private keys, you can still sign with your private key, but you no longer have the coins. You would have to prove that someone else doesn't also have the key, or that at the current block height you had the funds. That's the real challenge of proof-of-reserves; most of the proposals are about moving the funds.

Newsletter 57: Bloom filter discussion

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017145.html

https://github.com/bitcoin/bitcoin/issues/16152

There was some consternation here about disabling bloom filters by default in Bitcoin Core. Right now if you're using an SPV wallet or client, like Breadwallet, that's one of the major ones using SPV… The discussion was that in the previous week, there was a merged pull request that disables bloom filters by default. A newer version of Bitcoin Core would no longer serve up these bloom filters to lite clients. My Breadwallet wouldn't be able to connect to Bitcoin Core nodes and do this by default, but again someone could turn it back on.

Someone was arguing that chainalysis companies are already running all these bloom filter nodes anyway, and they will continue to collect that information. Many people are running Bitcoin Core nodes that are over a year old, so those aren't going to go anywhere soon; you will still be able to run some lite clients. I think Breadwallet runs some nodes too. You could always run your own nodes too, and serve those bloom filters yourself.

Does anyone use a wallet or is aware that you are using a wallet that is an SPV lite client? Electrum doesn’t do the bip37 bloom filters, it uses a trust model.

Is the idea that bip157 will be on by default, to replace that? Is Neutrino going to be on by default? It is for btcd. I imagine bitcoind will have something similar. It's just network commands. You store a little bit more locally, with the Neutrino filters. You have to keep track. If there's a coinbase commitment or whatever, you're going to need to check that too. That would have to be a soft-fork.

Lightning news (Buck Perley)

I am going to go through the lightning topics on the socratic seminar list. I was on a plane for a few hours yesterday, so I prepared some questions and hopefully we can spark some questions around those.

Watchtowers

https://blog.bitmex.com/lightning-network-part-4-all-adopt-the-watchtower/

BitMEX had something about running watchtowers. The nice thing about their article is that they go through the different scenarios when you're running lightning and what enforces good behavior in lightning. Look at the justice transaction: in lightning, when you have two parties that enter into a channel and rebalance funds constantly without having to publish a transaction, the way you enforce non-publication of an older state is by a penalty or justice transaction. If someone tries to publish an old state, you can take all of the funds in the channel as a way to punish them. This is called the justice transaction. One of the issues with this, and lightning in general, is that your node has to be online all the time, because the only way to publish the justice transaction is if you're online and your node notices that the incorrect state has been published.

lnd v0.7 was recently released with watchtowers available. What a watchtower does is let you go offline. Basically, if you're offline, you can hire another node to keep an eye on the blockchain for you and they will publish the justice transaction on your behalf. There are some interesting constructions where you can pay the watchtower by splitting the funds in the justice transaction with them. They don't talk about that here in the article.

One of the things that is interesting is comparing justice transactions to eltoo where they have SIGHASH_NOINPUT. I thought that was an interesting point of discussion.

Q: I upgraded my node so that I could utilize that functionality. How do you transact with other nodes and set them up to be a watchtower? It’s not clear to me how that works.

A: You have to basically find one, and you point your node at it and you say this is going to be my watchtower node. It does add an interesting aspect to the economics of lightning… the incentives of people to be routing nodes, routing funds and earning fees just for having good uptime. Casa published their node heartbeat thing where they actively reward people; I forget the mechanics of how they keep them honest. So you give the watchtower updates of the justice transaction; right now there's a privacy tradeoff. Without eltoo, they have to have every single state update in the channel. What's nice about eltoo is that you basically don't have to store all that state. With eltoo, you don't need to remember the intermediate states, just the latest one.

Q: So the other nodes are providing watchtower services for me; and unless I upgrade my node to have the watchtower, then other people can do the same thing. Do you have to open a channel?

A: No, you’re just giving them the raw transaction.

Q: Are they encrypting the justice transaction?

A: I’m not sure. There were mechanisms discussed to split it up even further. The idea was the watchtower would only be aware of your transaction once it is published; they wouldn’t be able to know what the transaction details are, beforehand. They would constantly be watching the transaction and try to decrypt all the time.

Q: Has anyone tried to commercialize any of this?

A: Well, nobody has been able to commercialize anything about lightning. lnbig has locked up $5m in the lightning network and is making $20/month. At this point, you’re just doing it for altruistic reasons.

Q: One of the arguments is that if the bitcoin fees go way up, you have the benefit of having these routing nodes and channels already setup.

A: Yes, but right now it's not a viable business. It could be in the future. Right now, my sense is that you're not making money on the fees, but you're providing liquidity and this makes it more viable for your customers to use lightning. So really your business model is more about creating liquidity and helping utility rather than making money. The idea is that people could make fees as watchtowers, make routing fees by increasing liquidity, and there's another business model where people can pay for inbound liquidity. Those are the three main lightning network business models that I know about.

Steady state model for LN

https://github.com/gr-g/ln-steady-state-model

LN guide

https://blog.lightning.engineering/posts/2019/08/15/routing-quide-1.html

Is anyone here running a lightning node? Okay, a few. One of the big knocks against lightning is that it's not super usable. Part of that they are trying to help on the engineering side with watchtowers and autopilot and new GUIs. Another big part of it is just guides on how to run a lightning node. They go through useful flags that you could enable. If you ever just run lncli help, it's a huge menu of stuff to be aware of. Any horror stories or headaches of dealing with lightning and things that would be helpful?

Q: More inbound liquidity.

A: What do you think could help that? Is there anything in the pipeline that would helpful?

Q: Grease. If you download their wallet, they give you $100 bucks of liquidity. After you run out, you have to get more channels. It’s kind of a strange incentive problem. It’s kind of like a line of credit. Locking up funds is costly on the other end, so they need a good reason to think they should be doing this.

One of the problems with lightning is essentially you have to lock up funds. In order to receive funds, you need to have a channel where someone has their own funds locked up. Otherwise, your channel cannot be rebalanced when you earn more on your side. If Jimmy has $100 of bitcoin on his side of the channel, and I have $100 on my side, someone can pay Jimmy up to $100 through me. Once he has made $100 and our channel has been rebalanced, he can no longer receive any more money. For on-chain payments, anyone can pay you immediately, and on lightning that’s not the same. Liquidity is a major problem.
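To make that concrete, here is a toy sketch of how balances shift inside a single channel and why a fully shifted channel can no longer receive. This is a hypothetical model, not code from any real lightning implementation:

```python
# Toy model of channel liquidity: balances only move between the two sides of
# a channel, so a node can receive at most what currently sits on the
# counterparty's side.

class Channel:
    def __init__(self, balance_a, balance_b):
        self.balance = {"A": balance_a, "B": balance_b}

    def pay(self, sender, receiver, amount):
        if self.balance[sender] < amount:
            raise ValueError("not enough liquidity on " + sender + "'s side")
        self.balance[sender] -= amount
        self.balance[receiver] += amount

# A (the routing node) and B (Jimmy) each start with $100 on their side.
chan = Channel(balance_a=100, balance_b=100)
chan.pay("A", "B", 100)      # someone pays Jimmy $100 through A
print(chan.balance)          # {'A': 0, 'B': 200}

try:
    chan.pay("A", "B", 1)    # Jimmy can no longer receive through this channel
except ValueError as err:
    print(err)
```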

They have one Loop server. It's basically submarine swaps. It's a tool that leverages submarine swaps, built by Lightning Labs; it's a way to move funds out of your off-chain lightning wallet. You loop out by sending to another wallet which will send you on-chain funds, and this will give you inbound liquidity. Or you can pay your own lightning wallet from on-chain funds. If you have seen those store interfaces where you can pay with lightning, or pay on-chain bitcoin to a lightning wallet, they are using submarine swaps for that. It's not cheap either, because there are fees associated with it. You get those funds back on chain, but you have to pay transaction fees at that point. And then there's fees associated with– the Loop server is charging fees for this service, which is another business model.

They have a mechanism called Loopty Loop where you recursively continue to loop out. You loop funds out, you get on-chain funds, you loop out, and you loop out again. You can keep on doing that and get inbound liquidity, but again it's not cheap, and it's not instant. So you're losing some of the benefits of lightning.

Static channel backups

Lightning Labs was talking about its mobile app now. One of the interesting things about this update was that they have static channel backups to iCloud. I was kind of curious if anyone has thoughts on that. I think it's cool you can do cloud backup for these. It stores the state of the channel, including what the balance is. If your bitcoin node goes down, and you just have your mnemonic, that's fine. But with LN, you have off-chain state where there's no record of it on the blockchain. The only other record is with the counterparty, but you don't want to trust them. If you don't have backups of your state, your counterparty could publish a theft transaction and you wouldn't know about it. You might accidentally publish an old state too, which would give your counterparty a chance to take all the funds in the channel, which is another thing that eltoo can prevent. If you have the app on iOS, you're automatically updating these things and you don't have to worry about it, but you're trusting Apple iCloud.

Suredbits playground

This is– this lets you pay lightning micropayments for, you can pay for spot prices, or NBA stats, and if I were to press something… basically, it’s paying for an API call for small requests. So it would be like almost an AWS on demand, that’s how I think about it.

Boltwall

https://github.com/Tierion/boltwall

On the topic of API stuff, this was something that I recently just built and published called boltwall. This is nodejs express-based middleware that can be put in front of routes that you want protected. It’s simple to setup. If you have your lightning node setup, then you can pass the necessary config in. These configs are only stored on your server. The client never sees any of this. Or you can use opennode, which for those who haven’t used it, it’s a custodial lightning system where they manage the LN node and you put your API key into boltwall. I think that’s best for machine-to-machine payments.

I used macaroons as part of the authorization mechanism. Macaroons are used in lnd for their authorization and authentication. Macaroons are basically cookies with a lot more fine grained detail. Web cookies are usually just a json blob that says here’s your permissions and it’s signed by the server and you authenticate the signature. What you do with macaroons, is they are basically HMACs so you can have chains of signed macaroons that are connected to each other. I have one built in here that is a time-based macaroon where you can pay one satoshi for one second of access. When I’m thinking about lightning, there’s a lot of consumer-level pain points involved.

Q: Why time based, instead of paying for a request?

A: It depends on the market. Rather than saying I’m paying for a single request, you could say instead of the back-and-forth handshake, you get access for 30 seconds and be done with it until the time expires. I built a proof-of-concept application that is like a yalls (which is like a medium.com) for reading content where rather than paying a bulk amount for a piece of content, you say oh I’m going to pay for 30 seconds and see if you want to keep reading based on that. It allows for more flexible pricing mechanisms where you can have a lot more fine grained price discrimination based off of the demand.
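As a sketch of that sats-for-seconds idea: the amount paid sets an expiry, and requests after that time are rejected. The helper names and the 1 sat/second rate here are illustrative, not boltwall's actual API:

```python
# Time-based access: one satoshi buys one second, so the amount paid
# determines an expiry timestamp that can be baked into the token as a caveat.
import time

RATE_SATS_PER_SECOND = 1

def expiry_for_payment(amount_sats, now=None):
    now = time.time() if now is None else now
    return now + amount_sats / RATE_SATS_PER_SECOND

def access_allowed(expiry_timestamp, now=None):
    now = time.time() if now is None else now
    return now < expiry_timestamp

expiry = expiry_for_payment(30)   # pay 30 sats, get 30 seconds of access
print(access_allowed(expiry))     # True until the 30 seconds elapse
```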

Hardware wallet rant

https://stephanlivera.com/episode/97/

I have been talking about multisig on hardware wallets lately. We are going to start with something that is bad, and then show something better. Go ahead and fire up electrum. Pass the --testnet flag. We're not going to do the personal server stuff.

https://github.com/gwillen/bitcoin/tree/feature-offline-v2

https://github.com/gwillen/bitcoin/tree/feature-offline-v1

https://github.com/gwillen/bitcoin/tree/feature-offline

We have a Bitcoin Core full node here. We have QT up right now, but you could use the cli. There’s a spendable component, and that’s nothing because all I have is watchonly. I’m using Bitcoin Core for my consensus rules, but I’m not using it as a wallet. I’m just watching addresses, keeping track of transaction history, balances, that kind of thing. So we have electrum personal server running and bitcoin core running. So I boot up electrum and run it on testnet. I also set a flag to say, don’t accidentally connect to someone else and tell them what all my addresses are. You put this in the config file, and also on the command line again just to be sure … Yes, you could also use firewall rules which might be smart in the future.

We can look at a transaction in electrum, so it's saying that I have this bitcoin which we can see I have here and my recent transactions and see that I received it. Now if I was going to receive more bitcoin, there's this cool "show on trezor" button. If I hit this, it pops up on trezor and it shows it. This is an essential part of receiving bitcoin; you don't ask for your receiving address on your malware-infected computer. You want to do this check on a quorum of hardware wallets. Do you really want to go to 3 different hardware wallet locations before receiving funds? If you're receiving $100m, then yes you do. If you're doing 3-of-5, and you only confirm on 2-of-5, then the attacker might have 3-of-5, but at least 2-of-5 have confirmed they are participants. Coldcard will do a thing where you register the group of pubkeys so that you know we're all in this… Coldcard has like 3 options: one is upload a text file with the pubkeys. Another one is that when you send it a multisig output, it will offer to create this and show you the xpubs, and ask if you want to register it; and the third one is trust, which is what the others do. Casa gives you all the xpubs… it's another way this works; you can put those into an offline airgapped electrum client that never touches the internet, and it can generate receiving addresses. So you can say, well, this machine that has never touched the internet says that these xpubs will give me these addresses, so 2-of-5 plus offline electrum then maybe I'm willing to move forward. There are QR codes built in for setting these up.

I don’t like when trezor is plugged into this machine, because I think the machine might be malware infected. But this device could be a fake trezor, it might be a keyboard that installs malware or something and I don’t even see it type in the malware urls. If we have three different hardware devices, I want four laptops. One that is connected to the bitcoin network; and each of the other three laptops are connected to the hardware wallet. I pass them bitcoin transactions by QR code. That whole ecosystem of computer and hardware wallet can be eternally quarantined and never connected to the internet. So we can build a hardware airgap into this.

I recommend a laptop because they have webcams and batteries. In this demo, we have to pick up the laptops and aim the screens at the cameras. A nice portable hardware wallet with a QR code scanner, you have to pick them up or something, that would be nice. With desktops, this is going to be painful because you have to lug your desktop to the safety deposit box. Do note that many banks don’t have power outlets in their vaults, so you need a battery. Really any 64-bit machine should be fine. Historically, I used 32-bits, but tails is no longer compatible with that and some Ubuntu versions are complaining. In this demo, we’re going to use native segwit, and it’s a multisig setup so choose that option.

Electrum is very finicky. I hit the back button. I went back to see if this was the correct one and then I lost everything. I am using a hardware wallet with deterministic key derivation, so I can get back to that. The back button should actually prompt you and ask if you really want to undo all this work. The big warning is, do not hit the back button.

You may have seen my twitter threads. I would accept a very bad hardware wallet if it allowed multisig. Adding a second hardware wallet is only additive and can help protect against hardware wallet errors. On twitter, the wallet manufacturers said no big deal. There are three big issues with Ledger. It does not support testnet. They take the public key and they show the representation on mainnet, and they ask if you want to send there. It's not that they never supported it; they supported it in the past, and then they stopped supporting it. So, no testnet. They also don't have a mechanism for verifying a receive address. Only if you want to use it insecurely will it show you. The third issue is that they don't really support sending either, because they don't do a sum-of-the-inputs and sum-of-the-outputs thing. They don't validate what's change and what's going to someone else. They just show you a bunch of outputs and ask you whether they look correct, but as a human you have no way of knowing what all the inputs and outputs are, unless you're extremely careful and taking notes. Otherwise they might be sending change to your attacker or something. Trezor can't verify that the multisig address belongs to the same bip32 chain; they can't verify the quorum, but they can verify their own key. So let's say it's 3-of-5, you can go on 3 devices that will each say yes, I'm one of the five in this 3-of-5, and since you have to sign on 3 different devices you now know you're 3 of those 5. You can always prove a wallet is a member of the quorum, except on Ledger. They used to export the data without telling you what the bip32 path was, which is a huge hole. Most wallets can prove they are in a quorum– but do they understand one of the outputs is in the same quorum as the input? As far as I can understand, it's only Coldcard that understands that today. Trezor knows it's part of it, but they don't know which part. So if you're signing with a quorum of devices that you control with Trezor then you're not at risk, but if you have a trusted third party countersigning then it gets a little squirrely because they could be increasing the number of keys that the trusted third party has. Native segwit would have allowed us to do the xpubs out of order.

Bitcoin Core can display a QR code, but it can’t scan them. The issue has been open for like 2 years.

2-of-2 is bad because if there’s a bug in either of them (not even both) then your funds are lost forever and you can’t recover it. An attacker might also do a Cryptolocker-style attack against you, and force you to give them some amount of the funds to recover whatever you negotiate.

Each of these hardware wallets have firmware, udev rules, and computer-side things. Some of them are wonky like, connect to a web browser and install some shit. Oh god, my hardware wallet is connected to the internet? Well, install it once, and then use it only on the airgap.

You need to verify your receive address on the hardware wallets. Don’t just check the last few characters of the address on the hardware device’s screen.

When verifying the transaction on the Ledger device, the electrum wallet has a popup occluding the address. The Ledger wallet also doesn’t display the value in the testnet address version, it’s showing a mainnet address. I would have to write a script to check if this address is actually the same testnet address.

Chainalysis is probably running all the electrum nodes for testnet anyway.

I would like to say this was easy, but it was not. A lot of this was one-time setup costs. It’s not perfect. It’s buggy. It has issues. But it does technically work. At the end of the day, you have to get signatures from two different pieces of hardware. You can be pretty chill about how you set things up.

Q: If you use the coldcard, can you get the xpub without having to plug in the coldcard?

A: I think you can write xpubs to the sdcard.

A: What I really want is… there seems to be some things, like showing an address, which you can only do if you plug it in. A lot of the options are buried in the menus. Coldcard is definitely my most favorite.

Q: The Trezor only showed one output because the other one was change, right? So if you were creating three outputs, two that were going to addresses that didn’t belong to trezor or the multisig quorum, would it show the two outputs on the trezor?

A: I think so. But the only way to know is to do it.

A: I was talking with instagibbs who has been working on HWI. He says the trezor receives a template for what is the change address; it doesn’t do any verification to define what is change, it just trusts what the client says. So it might not be better than a Ledger. It just trusts what the hot machine knew. Ledger seems to do a better job because– the trezor could be hiding the— Coldcard is clearly doing the best job, so you can teach it about the xpubs and it can make the assertion for itself without having to trust others.

Vaults (Bryan Bishop)

Cool, thanks for describing that.

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html

https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft

https://bitcoinmagazine.com/articles/revamped-idea-bitcoin-vaults-may-end-exchange-hacks-good

Hermit

https://github.com/unchained-capital/hermit

Originally at Unchained we started with using Electrum. At that point, we weren't doing collaborative custody. We had all the keys; it was multisig. So in theory we had employees and signing teams using electrum, and it was a terrible experience. Using electrum every day, there were always weird experiences. I also don't like how Electrum constructs these addresses.

Today I am not going to talk about multisig, instead I am focusing on this new tool called Hermit. I want to give a small demo. I’ll do a few slides. It’s a command line sharded wallet for airgap deployments. There’s some similarity to the QR code airgap wallets. At Unchained, we’re focusing on collaborative custody where we have some of the keys and the customers have some of the keys as well. This would not work if we were using Electrum ourselves, it would be too painful. We had to build software that streamlines it for us. The net result is that we’re protecting keys. There’s a big difference between an organization protecting keys, and an individual protecting keys.

In particular, for us, we’re not a hot wallet oriented company. Our transactions are at least weeks if not months or years and planned in advance. We like to keep everything multisig and cold storage. We like airgaps. Hardware wallets have a temporary airgap, which can be nice. We don’t want to be spending $1000s on HSMs to keep a number secret. The way we view it, we have so many devices scattered around the company for testing and so on– every developer has many different devices of each type. These hardware wallets can’t be more than $100 each, otherwise it’s too expensive to trust new products. We don’t like Trezor Connect where they know the transaction we’re signing, that’s deeply frustrating. Again, we’re not individuals here, this is an organization. We’re a company. We have to write some stuff down explicitly or else it will get lost. As a person, you might remember, but you should write down too. Also as an organization, we have to coordinate. As a person, you remember the keys, locations, things like that. You don’t need to send an email to yourself to let you know to take step 2 in the process, whereas the company has this requirement. Most companies have employee turnover, we don’t seem to, but we could. There’s also a lot of information about us that is public, like the address of this commercial building. We have a lot of hardware wallets here at this location, but none of them matter. There’s also scheduling issues like people going on vacation and other things. No single person can handle all the signing requests, they would burn out. So we have to rotate, and have uptime, and so on.

So what are the options? Well, hardware wallets. We want to encourage customers to use hardware wallets, and hopefully there are better hardware wallets coming down the line. These are better than paper wallets. Because of the multisig product we offer, we believe that even bad wallets when put together have a multiplicative effect on security.

Today, I don’t think it’s reaosnable to have customers using Hermit it’s too much for them. They will probably use hardware wallets for the foreseeable future. We have been using hardware wallets and we would like to move to something like Hermit. One candidate we looked at and really liked was a project called Subzero at Square which was meant to be an airgapped tool and serve as cold storage. It’s a really good product, but it wasn’t quite sufficient for us. We needed something a little more complex.

I am showing here a diagram of two different ways of thinking about protecting a key: multisig and Shamir secret sharing. Can you get some redundancy without using multisig? Sure, you can use Shamir secret sharing. There are some cool properties, like in a 2-of-3 scheme two shares are required to be combined together in the same location. One amazing aspect of this scheme is that if you have one shard, you have precisely zero information about the secret. It's a discrete jump: as soon as you have the threshold number of shards, you get the whole secret. It's not like just cutting up pieces of a mnemonic or whatever, which would reduce the search space for an attacker; that's not how secret shares work.
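Here is a toy sketch of that threshold property: a 2-of-3 split over a prime field. This is only to illustrate the idea and is not SLIP 39, which adds checksums, encryption and mnemonic encoding:

```python
# Toy 2-of-3 Shamir secret sharing: any 2 shares recover the secret, while a
# single share alone carries no information about it.
import secrets

P = 2**127 - 1  # a prime large enough for this demo

def split_2_of_3(secret):
    a1 = secrets.randbelow(P)                    # random degree-1 coefficient
    return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

def recover(two_shares):
    # Lagrange interpolation at x = 0 using any two shares
    (x1, y1), (x2, y2) = two_shares
    inv = pow(x2 - x1, -1, P)                    # modular inverse of (x2 - x1)
    return (y1 * x2 * inv - y2 * x1 * inv) % P

secret = 123456789
shares = split_2_of_3(secret)
assert recover(shares[:2]) == secret             # any two shares work
assert recover([shares[0], shares[2]]) == secret
```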

SLIP 39 makes it more convenient to do encrypted Shamir shards with mnemonics. SLIP 39 was put out by the Trezor folks. As much as we poop on hardware wallets, I have to salute the SatoshiLabs team for being ahead of everyone, releasing foundational work like bip32 and implementing it in an open source way. Reading their code was how I understood some of their ideas. Another thing they have done is release a 2-level Shamir shard system. You want a way to do Shamir secret sharding without all shards being equal. You can distinguish more or less valuable shards, or distribute them to people you trust more or less depending on the security level of each person or each group. So one group can have a 1-of-3 setup, and a second group can have a different setup like 3-of-5. This is not multisig– with multisig you can sign asynchronously in different locations and you're never forced to be in one spot with all your keys. This is not that, but it gives you flexibility.

I am going to give you a quick demo of what Hermit looks like.

Hermit is open-source software. It's "standards compliant" but it's a new standard; SLIP 0039 is not really cryptographically reviewed yet. We have contributed not only Hermit as an application that uses SLIP 39, but we have also been pushing code back to the underlying Shamir layer; so far that is the implementation people seem to be going with, which is exciting. It's designed for airgapped deployments, which is nice.

Hermit is not multisig. Multisig and sharding are complementary. For an individual, instead of managing shards, maybe manage more than one key. For us, we're already in a multisig context here at Unchained, and we want to be able to do a better job and have better key management controls. Hermit is also not an online wallet. How did it know what to put in here? It had no idea. Something else has to produce the QR code with bip174 PSBTs. Next month, I am excited to get some time to present what we think is the other half of this, a tool for wallets. An online wallet is producing these PSBTs and honestly, I suggest printing them out. Print out all the metadata, and come in the room and then sign.

Hermit is not an HSM. It is a piece of python software that runs on a commodity laptop, which is not an HSM. The Ledger is a high-security little enclave that lives on the electronic device and has interesting properties. In particular, the ways to communicate in and out of there are really restrictive and it will never reveal the key. If you think about it, that's what a Hermit installation is. You control the hardware, and it's completely open-source. This is basically what you wanted from an HSM, especially if you run it in a context that is extremely secure. HSMs are presumably secure even if you plug them into a malware-infected laptop, though.

Q: So you have a signing ceremony and the shard owners walk in and enter their part and then move on?

A: Yes, that’s one way to do it.

Q: So to produce a bitcoin signature, you need a quorum of shards from each group.

A: Correct, it’s unlocking all shards together in memory at one location and then acting with that. What we like with that is it’s a nice mix for us because we can create adversarial signing teams that are watching each other and limiting the opportunities for collusion. Using SLIP 39 is really nice and flexible for organizations.

Trezor claims that they will support SLIP 39 by the end of the summer, which is really interesting because you can recover shards one at a time in a Trezor and just walk around to each shard and collect those and get the full secret.

Jimmy stuff

Last but not least, Jimmy has something to sell us. This is The Little Bitcoin Book. It's available on Amazon right now. I had seven coauthors on this. We wrote the book in four days, which was a really fun experience. It's meant for someone who doesn't know anything about bitcoin. It's a very short read, at 115 pages. About 30 pages of those are Q&A. Another 10 are glossary and things like that. So it's really more like 75 pages that you can read pretty quickly. We had in mind a person that doesn't know anything about bitcoin. I have given this book to my wife who doesn't know much about what's going on; it's meant to be that sort of book that is understandable. The first chapter is "what is wrong with money today?" and what's going on with the current system. It doesn't mention bitcoin once, and then it goes to what is bitcoin. We tell the story of the Lehman collapse and talk about what drove Satoshi to build bitcoin. The other chapter is about price and volatility. We asked a lot of people we knew that didn't know about bitcoin; they ask, what is it backed by? Why does it have a market price? Chapter four is about why bitcoin matters for human rights, and this is just talking about it globally and why it matters right now. There's a very Silicon Valley centric perspective on bitcoin which is that it's going to disrupt or whatever, but there are real people right now benefiting from bitcoin who did not have financial tools or bank accounts available before. Right now there are people escaping from Venezuela with bitcoin. Bitcoin trades at a discount in Colombia right now, because there are so many refugees coming out of Venezuela with their wealth in bitcoin and they sell it immediately in Colombia to get started on their new life. There are cab drivers in Iran asking me about bitcoin. This is a real thing, guys. Getting that global perspective is a big goal of this book. Chapter five is a tale of two futures, and this is where we speculate about what the future would be like without bitcoin, and then what the future would be like with bitcoin. That's where the book ends, and then we have a bunch of Q&A and stuff that you might want to know about. There are questions like: who is Satoshi? Who controls bitcoin? Isn't it too volatile? How can it be trusted? Why have so many exchanges been hacked? There's a whole section on the energy question. All kinds of stuff like that. Additional resources, like lopp's page, podcasts, books, websites, things like that. I am going to probably send this with my Christmas cards or something. Half my friends have no idea what I am doing here. This is my way of informing them. It's number one on Amazon for the category of digital currencies.


Socratic Seminar 5

Date: January 21, 2020

Transcript By: Bryan Bishop

Tags: Lightning

Category: Meetup

https://www.meetup.com/Austin-Bitcoin-Developers/events/267941700/

https://bitdevs.org/2019-12-03-socratic-seminar-99

https://bitdevs.org/2020-01-09-socratic-seminar-100

https://twitter.com/kanzure/status/1219817063948148737

LSATs

We usually start off with a demo of a lightning-based project that the presenter has been working on for a few months.

This is not an original idea of mine. This was presented by roasbeef, co-founder of Lightning Labs, at the Lightning conference last October. I worked on a project that did something similar. When he presented this as a more formalized specification, it made a lot of sense based on what I was working on. So I just finished an initial version of some tooling that puts this into practice and lets people build on top of it. I will just give a brief overview of what you can do with this.

Quick outline. I’m going to talk about API keys and the state of authentication today. Then what macaroons are, which are a big part of how LSATs work.

LSAT is a lightning service authentication token.

Then we’ll talk about use cases and a few of the tools like lsat-js and another one. Hopefully you can use those. A lot of the content here, you can see the original presentation that Laolu (roasbeef) gave and put together. Some of the content is at least inspired by that presentation.

State of authentication today: Anyone on the internet should be familiar with our authentication problems. If you’re doing login and authentication, you’re probably doing email passwords or OAUTH or something. It’s gross. You can also have more general API keys. If you create an AWS account or if you want to use a Twilio API then you get a key and that key goes into the request to show that you are authenticated.

API keys don’t really have any built-in sybil resistance. If you get an API key, then you can use it anywhere, depending on service-side restrictions. They add sybil restrictions by having to login through email or something. The key itself does not have sybil resistance, it’s just a string of letters and numbers and that’s it.

API keys and cookies as well– cookies being an initial inspiration for macaroons– don't have an ability to delegate. If you have an API key and you want to share that API key and share your access with someone, they have full access to what that API key provides. Some services will give you a read-only API key, read-write API key, admin-level API key, and so on. It's possible, but it has some problems and it's not as flexible as it could be.

The whole idea of logging in and getting authentication tokens is cumbersome. Credit card, email, street addresses, this is not so great when you just want to read a WSJ or NYT article. Why do we have to give all this information just to get access?

So somebody might be using something that appears to do things the right way… like HTTPS, the communication is encrypted, that's great. But once you give them private information, you have no way to audit how they store that private information. We see major hacks at department stores and websites that leak private information. An attacker only needs to attack the weakest system holding your personal private information. This is the source of the problem. Ashley Madison knowing about your affair isn't a big deal, but someone hacking in and exposing that information is really bad.

I highly recommend reading about macaroons. The basic idea is that macaroons are like cookies, for anyone who works with them in web development. They encode some information to share with the server, like authentication levels and timeouts and stuff like that, but taken to the next level. lnd talks about macaroons a lot, but this is not an lnd-specific thing; lnd just happens to use them for delegated authentication to lnd. The Lightning Labs team also uses macaroons in their Loop service in a totally different way from how lnd uses them. These could be used in place of cookies; it's just sad that almost nobody is using them.

It works based on chained HMACs. When you're creating a macaroon, you have a secret signing key, just like when you do cookies. You sign and commit to a version of a macaroon. This ties into delegation— you can add new restrictions, called caveats, and sign using the previous signature, and that locks in the new caveat. Nobody who then receives the new version of the macaroon, with a caveat that has been signed with the previous signature, can change it. It's like a blockchain. It just can't be reversed. It's pretty cool.
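A minimal sketch of that chaining idea, with made-up caveat strings and ignoring the real macaroon serialization format:

```python
# Chained-HMAC macaroons: each added caveat is signed with the previous
# signature, so a holder can add restrictions but can never remove them.
import hashlib
import hmac

def sign(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

root_key = b"server-side secret"
identifier = "payment_hash=abc123"

sig = sign(root_key, identifier)        # mint the macaroon
caveats = []
for caveat in ["expires=1579650000", "path=/protected"]:
    caveats.append(caveat)
    sig = sign(sig, caveat)             # attenuate: chain the new caveat

def verify(root_key, identifier, caveats, presented_sig):
    s = sign(root_key, identifier)
    for c in caveats:
        s = sign(s, c)
    return hmac.compare_digest(s, presented_sig)

assert verify(root_key, identifier, caveats, sig)
```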

Proof-carrying changes the architecture around authentication. You’re putting the authorization into the macaroon itself. So you’re saying what permissions somebody who has this macaroon has, rather than putting a lot of logic into the server. So if you present this macaroon and it is verified by the server for certain levels of access, then this simplifies backend authentication and authorization a lot more. It decouples authorization logic.

lsat is for lightning service authentication tokens. This could be useful for pay as you go billing (no more subscriptions), no personal information required, it’s tradeable (with the help of a service provider)– this is something that roasbeef has proposed. You can make it tradeable; unless you’re doing it on a blockchain, you need a central server that facilitates it. Then there’s also machine-to-machine authentication. Because of the built-in sybil resistance not tied to identity, you can have machines paying for access to other stuff without having your credit card number on the device. You can also attenuate privileges. You can sell privileges.

I’ll introduce a few key concepts to talk about how this works. There’s status codes– anybody who has navigated the web is familiar with HTTP 404 which is for resource not found. There’s a whole bunch of these numbers. HTTP 402 was supposed to be for “payment required” and it took them decades to do this at the protocol level without an internet-native money. So LSAT will leverage this and use HTTP 402 to send messages across the wire.

There’s a lot of information in HTTP headers that describe requests and responses. This is where we’re going to set information for LSAT. In the response, you will have a challenge issued by a server when there’s an HTTP 402 payment required. This is a WWW-Authenticate header. There’s also another one Authorized-request: authorization. The only thing unique to this is how you’re going to read the values associated with these HTTP header keys. After you get the challenge, you send an authorization that satisfies that challenge.

You pay a lightning invoice, a BOLT 11 payment request. This gets put into the WWW-Authenticate challenge. The preimage is a random 32 byte string that is generated and is part of every lightning payment, but it's hidden until the payment has been satisfied. This is essentially how you can trustlessly send second-layer payments in an instantaneous way. Then there's a payment hash. Anyone who has received a payment invoice has this payment hash. The preimage is revealed after you pay. The payment hash is generated by hashing the preimage… which means you can't guess the preimage from the payment hash. But once you have the preimage, you can prove that only that preimage could generate that payment hash. This is important for LSAT validation.

Say the client wants to access some protected content. The server then says… there's no authentication associated with this request. I'm going to bake a macaroon, and it is going to have information that will indicate what is required for access. This is going to include generating a lightning payment invoice. Then we send a WWW-Authenticate challenge back. Once you pay the invoice, you get a preimage in return, which you need to satisfy the LSAT, because when you send the token back it's the macaroon, then a colon, then that preimage. What's happening is that the invoice information, the payment hash, is embedded in the macaroon. So the server looks at the payment hash and the preimage, and then it checks H(preimage) == payment_hash and boom, it's done.
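Here is a bare-bones sketch of that server-side check, assuming a simplified token format and skipping macaroon signature and caveat verification:

```python
# Core LSAT check: the client presents "<macaroon>:<preimage>", and the server
# verifies that sha256(preimage) equals the payment hash committed in the
# macaroon. Token parsing here is simplified for illustration.
import hashlib

def verify_lsat(authorization_header, payment_hash_from_macaroon):
    # header looks like: "LSAT <macaroon>:<preimage_hex>"
    token = authorization_header.split(" ", 1)[1]
    _macaroon, preimage_hex = token.split(":")
    preimage = bytes.fromhex(preimage_hex)
    return hashlib.sha256(preimage).hexdigest() == payment_hash_from_macaroon

preimage = bytes(32)                                   # stand-in 32-byte preimage
payment_hash = hashlib.sha256(preimage).hexdigest()
header = "LSAT some-macaroon:" + preimage.hex()
assert verify_lsat(header, payment_hash)
```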

Depending on what limitations you want to put on the paywall, this is stateful verification. You know that the person who has that preimage had to have paid the invoice associated with that hash. The hash is in the macaroon token.

This helps decouple payment from authorization. The server could know that the payment was satisfied by using lnd or something, but this helps decouple it. Also it helps other services to check the authorization too.

The current version of LSATs has a version identifier in the macaroons that it generates. The way that the Lightning Labs team has done it is that they have a version number and it will be incremented as functionality is added on to it.

In my tool, we have pre-built configurations to add expirations. So you can get 1 second of access for every satoshi paid, or something. Service levels are something that the Loop team has been working on.

The signature is made at macaroon baking time. So you have a secret key, it signs a macaroon, and the signature gets passed around with the macaroon.

This allows for sybil-resistant machine-to-machine invoices. HODL invoices are something I have implemented. HODL invoices are basically a way to pay an invoice without it being immediately settled. It's an escrow service built with lightning, but it does create some issues in the lightning network. There are ways to use them that don't hinder the network, as long as they aren't held for long periods of time. I was using this for one-time use tokens. If you try to access it while an invoice is being held but not settled, then as soon as it is settled it is no longer valid. There's also a way to split payments and pay a single invoice, but then you have some coordination problems. I think this is similar to the lightning-rod release that Reese did, which was for offline payments. They have a service where you can do trustless third party payments.

I made lsat-js which is a client-side library for interacting with LSAT services. If you have a web application that has this implemented, then you can decode them, get the payment hash, see if there’s any expirations on them. Then there’s BOLTWALL where you add a single line to a server, and you put it around a route that you want to require payment for, then BOLTWALL catches it when you get a request. It’s just nodejs middleware, so it could work with load balancers.

NOW-BOLTWALL is for a serverless framework for deploying websites and normal serverless functions; this is a CLI tool that will configure it. The easiest way to do this is to run btcpay, deployed with LunaNode for $8/mo, and then you can configure NOW-BOLTWALL. Then using Zeit, which has a free tier, you can deploy a server out there and they are running load balancers themselves. You can pass it a url you want to protect. So if you have a server somewhere else, you can just deploy this on the command line.

And then there’s lsat-playground, which I am going to demonstrate real quick. This is just the UX that I put together.

LSAT would be useful for a service provider that hosts blogs from different authors, and the author can be convinced that the user paid and got access to the content- and that the user paid specifically that author, not the service provider.

I’ll put up some slides on the meetup page.

Socratic seminar

This is going to be a rapid-fire run down of things going on in bitcoin development. I’ll facilitate this and talk about topics, and pull some knowledge out of the audience. I only understand like 10% of this, and some of you understand a lot more. So just interrupt me and jump in, but don’t give a speech.

We missed a month because we had the taproot workshop last month. BitDevsNYC is a meetup in NYC that publishes these lists of links about what happened that month. I read through some of these.

OP_CHECKTEMPLATEVERIFY

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-November/017494.html

This is Jeremy Rubin’s work. The idea is that it’s a covenant proposal for bitcoin. The idea is that the UTXO can only… ((bryan took this one)). This workshop is going to be at the end of the month. Says he is going to sponsor people so if you’re into this then consider it. Because it can be self-referential, you can have accidental turing completeness. The initial version had this problem. It might also be used by exchanges on withdrawal transactions to prevent or blacklist your future transactions.

Watchtower BOLT

It is pretty interesting. I was in London and met with these guys. They have a whole python implementation of this. It was nice and simple, not sure if it’s open-source yet. There’s like three watchtower implementations now, and they should be standardized.

PlusToken scam

ErgoBTC did some research on the PlusToken scam. It was a scam in China that netted like 200,000 BTC. The people running it weren’t sophisticated. So they tried to do some mixing… but they shuffled their coins in a bad way. They got caught. Some whitehat figured it out and was able to track where the funds left an exchange and so on. Here’s a twitter thread talking about how the movement of these BTC might have affected the price. A month ago, some of the guys were caught. This guy’s theory is that they arrested the underlings, and the guy with the keys didn’t get arrested. So the mixing continued, clearly this other guy has the keys. They also had a bunch of ETH and it was moved like a month ago, and the market got spooked- the ETH price dropped. So maybe he took a big short position, then moved coins, rather than selling. 200,000 BTC is a lot, you can really move the price with this.

PlusToken scammed $1.9 billion across all the coins, with just like a landing page. They had people on the streets going to these Chinese people saying buy bitcoin and we multiply it, it’s this new mining thing. MtGox was like 500,000 BTC, which was 7% of the circulating supply at the time. So this might be 2-3% of the supply.

The guy also appeared on a podcast where he talked about the tools he used to figure this out. This is an interesting topic. Coinjoins are going to be a topic on a lot of these. This is just one side of coinjoin. Obviously the coinjoin he was using was imperfect.

txstats.com

This is a transaction stats visualizer from BitMex research.

Here’s murch reporting on some exchange dumping. He does wallet development for bitgo. He often talks about UTXO consolidation stuff. Someone dumped 4 MB of transactions at 1 sat/vbyte. Someone was consolidating a bunch of dust when fees were really cheap.

Here’s a lightning data site. Pierre had the number one node. It has capacity, different types of channel closes. BitMex wrote an article reporting on uncooperative closes, because you can see the force-close operations on the blockchain.

Jameson Lopp has some bitcoin implementation comparisons. This is an analog. This is the different versions of Bitcoin Core like v0.8 and on. He then looks at how long initial block download takes, for the current blockchain. There’s another one for how long it takes to sync to the blockchain on the date it was released. There was a big drop-out when they switched from openssl to libsecp256k1. So it was hugely more performant.

Insecure shortcuts in MuSig

This is about some of the interactivity in Schnorr MuSig. There’s three rounds in this protocol. In this article, he is discussing all the ways you can mess up with it. MuSig is pretty complex and there’s a lot of footguns, that’s the summary here I guess.

ZenGo time-based threshold signature

Multisig in concept is get a few different entities, where you can do on-chain multisig or off-chain multisig where you aggregate the signatures together and come together. These guys have something like that, but the keys rotate over time. You can have a scenario where all your parties lose a key over a given year, but since the keys are rotating, none of them lose a threshold amount over a certain amount. So the wallet would still be preserved even if all the people have lost their keys. This is called “proactive secret sharing”. Seems like it would be more practical to do 3-of-5 and just setup a new 3-of-5 when 1-of-5 loses it. Binance likes this because it’s shitcoin-compatibility that they like. Ledger too.

Attack on coldcard

The way this attack works is that you can trick it into generating a change address into something like this… a derivation path where you take the 44th child, 0th hardened, and then the last one is a huge number. So you stick it on a leaf way on the edge of the private key, such that it would be hard to find it again if you look for it. You still technically own the coins, but it would be hard to actually spend them. So it was a clever exploit. Basically an attacker can convince your coldcard that it’s being sent to “your” address, but it’s really a random bip32 path or something. No hardware wallets currently track what change addresses they give out. So the idea is to restrict it to some lookahead gap.. don’t go further than the gap or something. Or you might be on a website generating a lot of addresses, in advance, for users or payments or something. There was also something about 32-bit + 1 or something, beyond the MAX value.

Trezor bug

Trezor had a bug where if you had a– if you’re trying to do a single-sig output, and then you had a multi-sig input and then multi-sig change, you could inject your own multisig change address or something. Your host machine could do this. This was like a month ago or a month-and-half ago. They don’t show the change, if you own it. In this scenario, the multisig change address is something you don’t own, and it should treat that as a double spend or something. This was a remote exploit. It treated someone else’s multisig address as your address. It just wasn’t included in fee calculation or something.

Monero thread

Someone got a bad hash on their software. So it is a detective story trying to figure out what went wrong, like whether the website is bad or something. Turns out the binary was malicious. You can see the detective work in real-time. Someone was able to get the binary and do some analysis. Two functions were added; one of these would send your seed to the attacker. So if you ran this binary and you had any monero then the attacker now has that monero. It’s pretty fascinating. The binary made it on to Monero’s website. That’s pretty terrifying. This is a good example of why you need to check signatures and hashes when you download a wallet. Monero was serving this to people who were downloading the software. It was getmonero.org that was serving malware. It’s interesting that they got access to the site, and they didn’t update the md5 hashes or something. Well, maybe they were thinking users would check against the website not the binary they actually downloaded.

SIM swappers arrests

This was just a news article. SIM swapping is when you go into a Verizon store and say you have a number, and then they put your phone number on your phone and then you can steal someone’s bitcoin or whatever. They use the usual questions like what’s your maiden name and other public information.

Vertcoin 51% attack

This has happened twice now. We had a discussion when this happened six months ago. Somehow this coin survives the 51% attacks. Why are they surviving? Maybe it’s just so speculative that people are shrugging it off. What about bitcoin or ethereum being 51% attacked? So maybe it’s all speculative trading, or the users are too stupid or something.

The role of dandelion routing in “breaking mimblewimble’s privacy model”

Bitcoin Core 0.19.1

There was some kind of problem right after v0.19.0, and then came out v0.19.1. There’s some new RPC commands. getbalance is a lot better. A lot of RPC fixes. That’s cool, so download that.

Remove OpenSSL

OpenSSL was completely removed. It started in 2012. A lot of this sped up initial block download. Funny thing is that gavinandresen didn’t like the idea. But it turned into a big improvement. It took a few years to completely remove OpenSSL, because it was supplying all the crypto primitives for signatures, hashing, keys. It took 10 years to remove openssl. The last thing needing it was bip70. They needed it for something.

Branch-and-bound coin selection

It’s a better way to do coin selection when composing transactions. You want to optimize for fees in the long-term. So he wrote his thesis to prove that this would be a good way to this. Murch also pointed out that random coin selection was actually better than the stochastic approximation solution.

joostjager - routing allow route …

You can pay someone by including them in a route, without them having to give you an invoice. Alex Bosworth made a library to do this. You have to be directly connected to them; so you can route to yourself to a person you’re connected to.

Optional last hop to payments

So here you can say, you can define a route by saying I want to pay this person and the last hop has to be from point n-1 to n. So if for some reason you want, like if you didn’t trust somebody… So I wanted to pay you, but I wanted to choose who was the last hop. I don’t know why you would want to do that, though.

lnrpc and other rpc commands for lnd

joinmarket-clientserver 0.6.0

Does anyone actually use joinmarket? If I did, I wouldn’t tell you. What?

There’s a lot of work going on joinmarket. There’s a lot of changes there. The really cool thing about joinmarket- I have never used it– it seems to offer the promise of having a passive hot wallet sitting there earning you a yield on your bitcoin. Joinmarket has maker-taker fees. Happy that people are working on this.

Wallet standards organization meeting

This was a transcript Bryan made from a previous Austin Bitcoin Developers meetup.

Lightning on mobile: Neutrino in the palm of your hand

He showed how to create a react-native app that could do neutrino. You want a non-custodial, full-participant mobile user to be a full participant in the network without downloading 200 GB of stuff. I think the main innovation is not neutrino but instead of having to write custom to build lnd binary for mobile, it’s an SDK where you can just type “import lnd” and that’s all you need to go.

New mailing list for lnd development that roasbeef announced

Probably related to Linux Foundation migration…

Hashrate derivatives

Jeremy Rubin has another project which is a hashrate derivatives platform. The idea is that you can timelock transactions either in minutes or blocks, and whichever one gets there faster gets the payout. It’s an interesting way to implement a derivative. It’s basically DeFi. So you could probably play with this if you had some bitcoin mining capacity. Over a month… Uh, well the market will price it. That’s a good point.

New stratum protocol for mining pools (stratum v2)

Here’s some marketing speak about stratum v2.

The more interesting thing is this reddit thread. The people from slushpool are in this thread with petertodd and bluematt. Some of the benefits are that it will help you operate on less than ideal internet connections. You get blocks and stuff faster I believe. One of the interesting things bluematt pointed out is that if you’re mining you’re not sure if your ISP is stealing some of your hashrate because there’s no authentication between you and the pool and the miners.

The protocol now will send blocktemplates ahead of time. Then when new blocks are observed, they will just send the previoushash field. They are trying to load the block template ahead of time, and then send 64 bytes to fill in the thing so you can immediately start mining. That’s an interesting optimization. It’s a cool thread if you want to learn more about mining.

lnurl

This is another way of encoding HTTP invoices in HTTP query strings.

BOLT-android

Some LN hackathons

Bitfinex LN withdrawals

Aanlysis of bech32 bug

A single typo can drive a lot of problems. One of the interesting things is that changing one constant in bech32 implementation will fix the problem. How did that guy find that bug? Wasn’t Blockstream doing fuzz testing to prevent this? Millions of dollars of budget on fuzz testing.

Unequal coinjoins

Some guy withdrew from Binance and withdrew and then did coinjoins on Wasabi. Binance banned him. So in the coinjoin world, there’s discussion about how to deal with that. The fact that coinjoins are very recognizable. If you take money out of an exchange and do a coinjoin, the exchange is going to know. So what bout doing coinjoins with non-equal values? Right now coinjoins use equal values which makes them very recognizable. You just look for these unnatural transactions and you’re done. But what about doing coinjoins with non-equal amounts so that it might look like an exchange’s batching transaction or doing payouts to users? Coinjoiners are being discriminated against. The person who got slapped on their wrist was withdrawing directly into a coinjoin. Don’t get me wrong, they don’t like coinjoins, but also don’t be stupid. Don’t just send straight to a coinjoin. At the same time, it does show a weakness of this approach.

Wasabi hosted a research club. Right after the coinjoin-binance issue, a week later Wasabi was doing some hosted youtube things to dig up old research on unequal amount coinjoining. This is an interesting topic. Someone has a reference implementation in rust, and the code is very readable. There’s an 1.5 hour discussion where Adam is just grilling him. It’s pretty good. He found a bug in one of the papers… he could never get his implementation to work, and then he realized there was an off-by-one bug in the spec in the paper, he fixed it and then got it to work.

Blind merged mining with covenants and OP_CTV

This is basically what Paul Sztorc was talking about when he visited us a few months ago. This is about having another chain secured by bitcoin which bitcoin would be unaware of, and there would be some token involved. Ruben’s proposal is interesting because it’s blind merged mining, which is what Paul needs for his truthcoin stuff. So you get another thing for free if we actually get OP_CTV.

An argument that some people make for any new features into bitcoin is that we don’t know what else we might be able to come up with, to use this for. Like hte original version OP_SECURETHEBAG which turned out you can do turing completeness with. Maybe it’s a use case we want; but a lot of people think blind merged mining is not what we want- I forget why. A lot of thought goes into whether soft-forks should go in.

ZmnSCPxj on path privacy

Not really sure how to pronounce his name. Zeeman? It’s ZmnSCPxj. You can deduce a lot of information about what happened in the payments route. The first part of the email is like how you can use this to figure stuff out. So he talks about an evil surveillance on one node along the route, but if what if they are two nodes around the route. You can develop reverse routing tables if you have enough clout in the network. He goes into talking about some of the stuff that will happen with Schnorr, like path decorrelation and so on.

ZmnSCPxj on taproot and lightning

This is just insane. This was a good one.

Bitcoin Optech newsletters

c-lightning went from defaulting to testnet to defaulting to mainnet. They added support for payment secrets. You can do this thing, probing, where you try to route fake payments through a node and try to assess and figure out what it can do. You can generate random preimages and then creating a payment hash from that preimage even though it’s invalid. I assume this is a mitigation for that.

Here’s a thread about what watchtowers have to store, in eltoo. One of the benefits of eltoo is that you don’t have to store the complete history of the channel, only the most recent update. So do they have to store the latest update, or also the settlement transaction? Any comments about that? I don’t really know eltoo well enough to speculate on that.

c-lightning added createonion and spendonion RPC methods to allow for encrypted LN messages that the node itself doesn’t have to understand. This lets plugins use lightning network more arbitrarily to send messages of some sort, and they are tor-encrypted messages.

whatsat is a text/chat app. They are trying to get the same functionality over c-lightning.

All three LN implementations now have multi-path payments. This allows you to… say you have one bitcoin in three different channels. Even though you have 3 BTC, you can only send 1 BTC. Multipath allows you to send three missiles to the same target. lnd, eclair and c-lightning all support this now in some state. Can you use this on mainnet? Should you even do it? It might be in phoenix. lnd’s implementation has it in the code, but they only allow you to specify one path anyway. So they haven’t actually tested it in something that people can run sending multiple paths, but the code has been refactored to allow for it.

Andrew Chow gave a good answer about max bip32 depth, which is 256.

Bitcoin Core added a powerpc architecture.

There’s now a rpc whitelist. If you have credentials to do RPC with a node, you can do basically anything. But this command allows you to whitelist certain commands. Say you want your lightning node to not be adding new peers on the p2p level, which would allow you to be eclipse attacked. Lightning should only be able to do blockchain monitoring queries. Nicolas Dorier says my block explorer only relies on sendrawtransaction for broadcasting. So you want to whitelist, this is per user credential. Do they have multiple user credentials for bitcoin.conf?

This is why lnd is using macaroons. It solves this problem. You don’t need to have a list of people in the config file, you can just have people that have macaroons that give them that access.

Here’s what Bryan was talking about, which is the year-end review. I encourage you to read this, if you’re going to read only one thing it’s the 2019 year-end review newsletter from Bitcoin Optech. Every month in the last year there’s been some big innovation. It’s really crazy to read through this. Erlay is really big too, like an 80% bandwidth reduction.

Gleb Naumenk gave a nice talk at London Bitcoin Devs on erlay. He talked about p2p network stuff. I encourage you to check that out if you’re interested.

The script descriptor language is like a better version of addresses for describing a range of addresses. They sort of look like code with parenthesis and stuff. Chris Belcher has proposed base64 encoding it, because right now if you try to copy a script descriptor by default it wont highlight. It makes script descriptors more ergonomical for people who don’t know what they mean.

BitMex LN tool

Here’s a tool from BitMex which is a live alert system for channels. This was the BitMex forkmonitor.

Caravan

Unchained’s caravan got a shoutout.

Anonymous coinjoin ransactions with

This is a paper that wasabi dug out from like 2 years ago.

Luke-jr’s full node chart

The script to produce this is closed-source so you can’t game it. But there’s multiple implementations out there. I was suspicious of this because luke-jr is kind of crazy, but gleb seems to think it’s correct. We’re at the same number of full nodes as we were at in mid 2017. So that’s kind of interesting. The top line is the total number of full nodes, and the bottom line is the number of listening nodes which will respond to you if you try to initiate a connection with them. You probably want to be unreachable for selfish reasons, but you do need to be reachable to help people sync with the blockchain. Mid 2017 might be peak segwit when people were spinning up nodes, or it might be related to the market price. There was also a June price spike too. Maybe some of these are for lightning nodes. I bet a lot of people don’t do that anymore.

Clark Moody’s dashboard

Moody’s dashboard has a lot of real-time stats that you can see updating in realtime.

Bitcoin Magazine year-end review

We had a 10% growth in the number of commits, an 85% price increase, bitcoin dominance went up 15%, our inflation rate is 3.8%. Daily volume went up. Segwit went from 32% to 62%. Value of daily transactions went up. The blockchain size grew 29%. Bitcoin node count plummeted, it went down, which is not so good. It might be because a lot of people had only 256 GB hard drives maybe that’s why they dropped out… yeah, but pruning?

arxiv: new channel construction manuscript

List of hardware wallet attacks (from Shift Crypto)

It’s a pretty cool list. This is why you do multi-vendor multisig, maybe. This is pretty terrifying.

The pitfalls of multisig when using hardware wallets

One of the ideas people don’t realize is that if you use multisig and you lose redeemSripts or ability to compute them, you lose the multisig capability. You need to backup more than just your seed if you’re doing multisig. You need to backup redeemScripts. Some vendors try to show everything on the screen, and some others infer things. Manufacturers don’t want to be stateful, but multisig requires you to maintain state like not changing other multisig paricipants or swapping out information out from under you. If you’re thinking about implementing multisig, look at the article.

bunnie’s talk about secure hardware

The big point of this talk was – you can’t hash hardware. You can hash a computer program and check, but you can’t do that with hardware. So basically having open-source hardware doesn’t really make it more secure necessarily. He goes through this big all the ramifications of that and what you can do. He has a device, a text-message phone that is as secure as he can make it, and the interesting thing is that you could turn it into a hardware.

CCC conference and talks

SHA-1 collission

For $70k they were able to create two PGP keys, using the legacy version of PGP that uses sha-1 for hashing, and they were able to create two keys that had different user ids with colliding certificates.

Bitcoin Core fuzz testing

There is a fuzz testing stats page, and then Bitcoin Core PR review group.

lncli + invoices - experimental key send mode

It’s just a way to send to someone, you can send to another invoice. They had this feature for like a year and then it finally got merged.

\ No newline at end of file +Meetup

https://www.meetup.com/Austin-Bitcoin-Developers/events/267941700/

https://bitdevs.org/2019-12-03-socratic-seminar-99

https://bitdevs.org/2020-01-09-socratic-seminar-100

https://twitter.com/kanzure/status/1219817063948148737

LSATs

So we usually start off with a demo of a lightning-based project; this is something he has been working on for a few months.

This is not an original idea of mine. It was presented by roasbeef, co-founder of Lightning Labs, at the Lightning Conference last October. I had worked on a project that did something similar, so when he presented this as a more formalized specification, it made a lot of sense based on what I was working on. I just finished an initial version of some tooling that puts this into practice and lets people build on top of it. I will give a brief overview of what you can do with this.

Quick outline. I’m going to talk about API keys and the state of authentication today. Then what macaroons are, which are a big part of how LSATs work.

LSAT is a lightning service authentication token.

Then we'll talk about use cases and a few of the tools like lsat-js and another one. Hopefully you can use those. For a lot of the content here, you can look at the original presentation that Laolu (roasbeef) gave and put together; some of this content is at least inspired by that presentation.

State of authentication today: anyone on the internet should be familiar with our authentication problems. If you're doing login and authentication, you're probably doing email and passwords or OAuth or something. It's gross. You can also have more general API keys. If you create an AWS account, or if you want to use a Twilio API, then you get a key and that key goes into the request to show that you are authenticated.

API keys don't really have any built-in sybil resistance. If you get an API key, then you can use it anywhere, depending on service-side restrictions. Services add sybil resistance by requiring you to log in through email or something. The key itself does not have sybil resistance; it's just a string of letters and numbers and that's it.

API keys and cookies as well– cookies being a precursor to macaroons– don't have an ability to delegate. If you have an API key and you want to share that API key and share your access with someone, they have full access to whatever that API key provides. Some services will give you a read-only API key, a read-write API key, an admin-level API key, and so on. It's possible, but it has some problems and it's not as flexible as it could be.

The whole idea of logging in and getting authentication tokens is cumbersome. Credit card, email, street addresses, this is not so great when you just want to read a WSJ or NYT article. Why do we have to give all this information just to get access?

So somebody might be using something that appears to do things the right way… like HTTPS, the communication is encrypted, that's great. But once you give them private information, you have no way to audit how they store that private information. We see major hacks at department stores and websites that leak private information. An attacker only needs to attack the weakest system holding your personal private information. This is the source of the problem. Ashley Madison knowing about your affair isn't a big deal, but someone hacking in and exposing that information is really bad.

I highly recommend reading about macaroons. The basic idea is that macaroons are like cookies, for anyone who works with those in web development. They encode some information that gets shared with the server, like authentication levels and timeouts and stuff like that. lnd talks about macaroons a lot, but this is not an lnd-specific thing; lnd just happens to use them for delegated authentication to lnd. These LSAT tools are using macaroons too, and the loop service uses them in a totally different way from how lnd uses them. These could be used in place of cookies; it's just sad that almost nobody is using them.

It works based on chained HMACs. When you're creating a macaroon, you have a secret signing key, just like when you do cookies. You sign and commit to a version of the macaroon. This ties into delegation— you can add what are called caveats, sign using the previous signature, and that locks in the new caveat. Nobody who then receives the new version of the macaroon, with a caveat that has been signed with the previous signature, can change it. It's like a blockchain. It just can't be reversed. It's pretty cool.
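
Here is a minimal sketch of that chained-HMAC idea (illustrative only; this is not lsat-js or lnd code, and the macaroon identifier and caveats are made up):

import { createHmac } from "crypto";

function hmac(key: Buffer, msg: string): Buffer {
  return createHmac("sha256", key).update(msg).digest();
}

// The issuer signs the macaroon identifier with a root secret it never shares.
const rootKey = Buffer.from("issuer-root-secret");
let sig = hmac(rootKey, "macaroon-id:payment_hash=<hash>");

// Anyone holding the macaroon can append a caveat; each new signature commits to it.
for (const caveat of ["expires=2020-02-01", "service=blog:read"]) {
  sig = hmac(sig, caveat);
}

// Without the previous signature (or the root key), nobody can strip a caveat back
// out, because they cannot recompute the earlier HMAC in the chain.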

Proof-carrying authorization changes the architecture around authentication. You're putting the authorization into the macaroon itself. So you're saying what permissions somebody who has this macaroon has, rather than putting a lot of logic into the server. If you present this macaroon and it is verified by the server for certain levels of access, then this simplifies backend authentication and authorization a lot. It decouples the authorization logic.

LSAT stands for lightning service authentication token. This could be useful for pay-as-you-go billing (no more subscriptions), no personal information is required, and it's tradeable (with the help of a service provider)– this is something that roasbeef has proposed. You can make it tradeable; unless you're doing it on a blockchain, you need a central server that facilitates it. Then there's also machine-to-machine authentication. Because the built-in sybil resistance is not tied to identity, you can have machines paying for access to other stuff without having your credit card number on the device. You can also attenuate privileges. You can sell privileges.

I'll introduce a few key concepts to talk about how this works. There are status codes– anybody who has navigated the web is familiar with HTTP 404, which is for resource not found. There's a whole bunch of these numbers. HTTP 402 was supposed to be for "payment required", but it sat unused for decades because there was no internet-native money to build it on at the protocol level. LSAT leverages this and uses HTTP 402 to send messages across the wire.

There's a lot of information in HTTP headers that describe requests and responses. This is where we're going to set information for LSAT. In the response, you will have a challenge issued by a server when there's an HTTP 402 payment required; this is the WWW-Authenticate header. On the request side there is the Authorization header. The only thing unique to LSAT is how you're going to read the values associated with these HTTP header keys. After you get the challenge, you send an authorization that satisfies that challenge.
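
Roughly (the exact field names come from the LSAT spec, so treat this as an approximation rather than the authoritative format), the exchange looks like this:

HTTP/1.1 402 Payment Required
WWW-Authenticate: LSAT macaroon="<base64 macaroon>", invoice="<BOLT11 invoice>"

GET /protected HTTP/1.1
Authorization: LSAT <base64 macaroon>:<hex preimage>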

You pay a lightning payment request, a BOLT11 invoice. This gets put into the WWW-Authenticate challenge. The preimage is a random 32-byte string that is generated and is part of every lightning payment, but it's hidden until the payment has been satisfied. This is essentially how you can trustlessly send second-layer payments in an instantaneous way. Then there's a payment hash. Anyone who has received a payment invoice has this payment hash. The preimage is revealed after you pay. The payment hash is generated by hashing the preimage, which means you can't guess the preimage from the payment hash. But once you have the preimage, you can prove that only that preimage could generate that payment hash. This is important for LSAT validation.

Say the client wants to access some protected content. The server sees there is no authentication associated with this request, so it says: I'm going to bake a macaroon, and it is going to have information that will indicate what is required for access. This includes generating a lightning payment invoice. Then we send a WWW-Authenticate challenge back. Once you pay the invoice, you get a preimage in return, which you need to satisfy the LSAT, because when you send the token back it's the macaroon, then a colon, and then that preimage. What's happening is that the invoice information, the payment hash, is embedded in the macaroon. So the server looks at the payment hash and the preimage, checks that H(preimage) == payment_hash, and it's done.
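
A minimal sketch of that server-side check (hypothetical helper names, not BOLTWALL's actual API; it assumes the payment hash has already been extracted from the macaroon the server baked):

import { createHash } from "crypto";

// Token format described above: "LSAT <macaroon>:<preimage>"
function validateLsat(authorizationHeader: string, paymentHashFromMacaroon: string): boolean {
  const token = authorizationHeader.replace(/^LSAT /, "");
  const [macaroon, preimage] = token.split(":");
  if (!macaroon || !preimage) return false;
  // The preimage is 32 random bytes chosen by the payee; sha256(preimage) must
  // equal the payment hash that was committed to in the macaroon at baking time.
  const hash = createHash("sha256").update(Buffer.from(preimage, "hex")).digest("hex");
  return hash === paymentHashFromMacaroon;
}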

Depending on what limitations you want to put on the paywall, this is stateful verification. You know that the person who has that preimage had to have paid the invoice associated with that hash. The hash is in the macaroon token.

This helps decouple payment from authorization. The server could know that the payment was satisfied by using lnd or something, but this helps decouple it. Also it helps other services to check the authorization too.

The current version of LSATs has a version identifier in the macaroons that it generates. The way that the Lightning Labs team has done it is that they have a version number and it will be incremented as functionality is added on to it.

In my tool, we have pre-built configurations to add expirations. So you can get 1 second of access for every satoshi paid, or something. Service levels are something that the Loop team has been working on.

The signature is made at macaroon baking time. So you have a secret key, it signs the macaroon, and the signature gets passed around with the macaroon.

This allows for sybil-resistant machine-to-machine invoices. HODL invoices are something I have implemented. HODL invoices are basically a way to pay an invoice without it being immediately settled. It's an escrow service built with lightning, but it does create some issues in the lightning network. There are ways to use them that don't hinder the network, as long as they aren't held for long periods of time. I was using this for one-time-use tokens. If you try to access it while an invoice is being held but not settled, then as soon as it is settled, it is no longer valid. There's also a way to split payments and pay a single invoice, but then you have some coordination problems. I think this is similar to the lightning-rod release that Reese did, which was for offline payments. They have a service where you can do trustless third party payments.

I made lsat-js which is a client-side library for interacting with LSAT services. If you have a web application that has this implemented, then you can decode them, get the payment hash, see if there’s any expirations on them. Then there’s BOLTWALL where you add a single line to a server, and you put it around a route that you want to require payment for, then BOLTWALL catches it when you get a request. It’s just nodejs middleware, so it could work with load balancers.
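
As a rough sketch of that pattern (illustrative only, not BOLTWALL's real API), node middleware sitting in front of a route turns unauthenticated requests into 402 challenges and only calls through once the LSAT checks out:

import express from "express";

// Hypothetical paywall middleware in the style described above.
function paywall(req: express.Request, res: express.Response, next: express.NextFunction) {
  const auth = req.header("Authorization");
  if (!auth) {
    // No token yet: bake a macaroon, create an invoice, and send the challenge back.
    res.status(402)
      .set("WWW-Authenticate", 'LSAT macaroon="<macaroon>", invoice="<invoice>"')
      .end();
    return;
  }
  // Token present: verify the macaroon's caveats and the preimage, then continue.
  next();
}

const app = express();
app.use("/protected", paywall);
app.get("/protected", (_req, res) => res.send("paid content"));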

NOW-BOLTWALL is a serverless framework for deploying websites and normal serverless functions; this is a CLI tool that will configure it. The easiest way to do this is to run btcpay, using a deployment with LunaNode for $8/mo, and then you can configure NOW-BOLTWALL. Then using Zeit, which has a free tier, you can deploy a server out there and they run the load balancers themselves. You can pass it a URL you want to protect. So if you have a server somewhere else, you can just deploy this on the command line.

And then there’s lsat-playground, which I am going to demonstrate real quick. This is just the UX that I put together.

LSAT would be useful for a service provider that hosts blogs from different authors, and the author can be convinced that the user paid and got access to the content- and that the user paid specifically that author, not the service provider.

I’ll put up some slides on the meetup page.

Socratic seminar

This is going to be a rapid-fire run down of things going on in bitcoin development. I’ll facilitate this and talk about topics, and pull some knowledge out of the audience. I only understand like 10% of this, and some of you understand a lot more. So just interrupt me and jump in, but don’t give a speech.

We missed a month because we had the taproot workshop last month. BitDevsNYC is a meetup in NYC that publishes these lists of links about what happened that month. I read through some of these.

OP_CHECKTEMPLATEVERIFY

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-November/017494.html

This is Jeremy Rubin's work. It's a covenant proposal for bitcoin. The idea is that the UTXO can only… ((bryan took this one)). This workshop is going to be at the end of the month. He says he is going to sponsor people, so if you're into this then consider it. Because it can be self-referential, you can have accidental turing completeness; the initial version had this problem. It might also be used by exchanges on withdrawal transactions to restrict or blacklist your future transactions.

Watchtower BOLT

It is pretty interesting. I was in London and met with these guys. They have a whole python implementation of this. It was nice and simple, not sure if it’s open-source yet. There’s like three watchtower implementations now, and they should be standardized.

PlusToken scam

ErgoBTC did some research on the PlusToken scam. It was a scam in China that netted like 200,000 BTC. The people running it weren’t sophisticated. So they tried to do some mixing… but they shuffled their coins in a bad way. They got caught. Some whitehat figured it out and was able to track where the funds left an exchange and so on. Here’s a twitter thread talking about how the movement of these BTC might have affected the price. A month ago, some of the guys were caught. This guy’s theory is that they arrested the underlings, and the guy with the keys didn’t get arrested. So the mixing continued, clearly this other guy has the keys. They also had a bunch of ETH and it was moved like a month ago, and the market got spooked- the ETH price dropped. So maybe he took a big short position, then moved coins, rather than selling. 200,000 BTC is a lot, you can really move the price with this.

PlusToken scammed $1.9 billion across all the coins, with just like a landing page. They had people on the streets going to these Chinese people saying buy bitcoin and we multiply it, it’s this new mining thing. MtGox was like 500,000 BTC, which was 7% of the circulating supply at the time. So this might be 2-3% of the supply.

The guy also appeared on a podcast where he talked about the tools he used to figure this out. This is an interesting topic. Coinjoins are going to be a topic on a lot of these. This is just one side of coinjoin. Obviously the coinjoin he was using was imperfect.

txstats.com

This is a transaction stats visualizer from BitMex research.

Here’s murch reporting on some exchange dumping. He does wallet development for bitgo. He often talks about UTXO consolidation stuff. Someone dumped 4 MB of transactions at 1 sat/vbyte. Someone was consolidating a bunch of dust when fees were really cheap.

Here’s a lightning data site. Pierre had the number one node. It has capacity, different types of channel closes. BitMex wrote an article reporting on uncooperative closes, because you can see the force-close operations on the blockchain.

Jameson Lopp has some bitcoin implementation comparisons. He looks at the different versions of Bitcoin Core, like v0.8 and on, and how long initial block download takes for the current blockchain. There's another one for how long each version took to sync the blockchain as it existed on the date that version was released. There was a big drop when they switched from openssl to libsecp256k1; it was hugely more performant.

Insecure shortcuts in MuSig

This is about some of the interactivity in Schnorr MuSig. There are three rounds in this protocol. In this article, he discusses all the ways you can mess it up. MuSig is pretty complex and there are a lot of footguns; that's the summary here, I guess.

ZenGo time-based threshold signature

Multisig, in concept, is getting a few different entities together, where you can do on-chain multisig, or off-chain multisig where you aggregate the signatures and come together. These guys have something like that, but the keys rotate over time. You can have a scenario where all of your parties lose a key over a given year, but since the keys are rotating, a threshold amount is never lost within any single period. So the wallet would still be preserved even if all the people have lost a key at some point. This is called "proactive secret sharing". It seems like it would be more practical to do 3-of-5 and just set up a new 3-of-5 when 1-of-5 loses theirs. Binance likes this because of the shitcoin compatibility. Ledger too.

Attack on coldcard

The way this attack works is that you can trick it into generating a change address with something like this… a derivation path where you take the 44th child, 0th hardened, and then the last index is a huge number. So you stick it on a leaf way out on the edge of the derivation tree, such that it would be hard to find it again if you went looking for it. You still technically own the coins, but it would be hard to actually spend them. So it was a clever exploit. Basically an attacker can convince your coldcard that it's being sent to "your" address, but it's really an address at some random bip32 path or something. No hardware wallets currently track what change addresses they give out. So the idea is to restrict it to some lookahead gap.. don't go further than the gap or something. Or you might be on a website generating a lot of addresses in advance, for users or payments or something. There was also something about 32-bit + 1 or something, beyond the MAX value.
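
For illustration only (the exact path in the disclosure may differ), a malicious change path would look something like this:

m/44'/0'/0'/1/4000000000
// a valid-looking change branch, but with an address index so far past any gap limit
// that no wallet scanning for your coins would ever find it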

Trezor bug

Trezor had a bug where, if you were trying to do a single-sig output, and you had a multisig input and multisig change, you could inject your own multisig change address or something; your host machine could do this. This was like a month or a month-and-a-half ago. They don't show the change output if you own it. In this scenario, the multisig change address is something you don't own, and it should treat that as a double spend or something. This was a remote exploit. It treated someone else's multisig address as your address, so it just wasn't included in the fee calculation or something.

Monero thread

Someone got a bad hash when checking their downloaded software. So it is a detective story trying to figure out what went wrong, like whether the website is bad or something. Turns out the binary was malicious. You can see the detective work in real-time. Someone was able to get the binary and do some analysis. Two functions were added; one of these would send your seed to the attacker. So if you ran this binary and you had any monero, then the attacker now has that monero. It's pretty fascinating. The binary made it onto Monero's website. That's pretty terrifying. This is a good example of why you need to check signatures and hashes when you download a wallet. Monero was serving this to people who were downloading the software; it was getmonero.org that was serving malware. It's interesting that they got access to the site and didn't update the md5 hashes or something. Well, maybe they were thinking users would check against the website, not the binary they actually downloaded.

SIM swappers arrests

This was just a news article. SIM swapping is when you go into a Verizon store and claim someone's number is yours; they move that phone number onto your phone, and then you can steal their bitcoin or whatever. They use the usual questions like what's your maiden name and other public information.

Vertcoin 51% attack

This has happened twice now. We had a discussion when this happened six months ago. Somehow this coin survives the 51% attacks. Why are they surviving? Maybe it’s just so speculative that people are shrugging it off. What about bitcoin or ethereum being 51% attacked? So maybe it’s all speculative trading, or the users are too stupid or something.

The role of dandelion routing in “breaking mimblewimble’s privacy model”

Bitcoin Core 0.19.1

There was some kind of problem right after v0.19.0, and then v0.19.1 came out. There are some new RPC commands. getbalance is a lot better. A lot of RPC fixes. That's cool, so download that.

Remove OpenSSL

OpenSSL was completely removed. This effort started in 2012. A lot of it sped up initial block download. The funny thing is that gavinandresen didn't like the idea, but it turned into a big improvement. It took years to completely remove OpenSSL, because it was supplying all the crypto primitives: signatures, hashing, keys. The last thing needing it was bip70; they needed it for something.

Branch-and-bound coin selection

It's a better way to do coin selection when composing transactions. You want to optimize for fees in the long term. So he wrote his thesis to prove that this would be a good way to do this. Murch also pointed out that random coin selection was actually better than the stochastic approximation solution.

joostjager - routing allow route …

You can pay someone by including them in a route, without them having to give you an invoice. Alex Bosworth made a library to do this. You have to be directly connected to them; so you can route to yourself to a person you’re connected to.

Optional last hop to payments

So here you can say, you can define a route by saying I want to pay this person and the last hop has to be from point n-1 to n. So if for some reason you want, like if you didn’t trust somebody… So I wanted to pay you, but I wanted to choose who was the last hop. I don’t know why you would want to do that, though.

lnrpc and other rpc commands for lnd

joinmarket-clientserver 0.6.0

Does anyone actually use joinmarket? If I did, I wouldn’t tell you. What?

There's a lot of work going on in joinmarket. There are a lot of changes there. The really cool thing about joinmarket– I have never used it– is that it seems to offer the promise of having a passive hot wallet sitting there earning you a yield on your bitcoin. Joinmarket has maker-taker fees. Happy that people are working on this.

Wallet standards organization meeting

This was a transcript Bryan made from a previous Austin Bitcoin Developers meetup.

Lightning on mobile: Neutrino in the palm of your hand

He showed how to create a react-native app that could do neutrino. You want a non-custodial mobile user to be a full participant in the network without downloading 200 GB of stuff. I think the main innovation is not neutrino, but that instead of having to write a custom build of the lnd binary for mobile, it's an SDK where you can just type "import lnd" and that's all you need to go.

New mailing list for lnd development that roasbeef announced

Probably related to Linux Foundation migration…

Hashrate derivatives

Jeremy Rubin has another project which is a hashrate derivatives platform. The idea is that you can timelock transactions either in minutes or blocks, and whichever one gets there faster gets the payout. It’s an interesting way to implement a derivative. It’s basically DeFi. So you could probably play with this if you had some bitcoin mining capacity. Over a month… Uh, well the market will price it. That’s a good point.

New stratum protocol for mining pools (stratum v2)

Here’s some marketing speak about stratum v2.

The more interesting thing is this reddit thread. The people from slushpool are in this thread with petertodd and bluematt. Some of the benefits are that it will help you operate on less than ideal internet connections. You get blocks and stuff faster I believe. One of the interesting things bluematt pointed out is that if you’re mining you’re not sure if your ISP is stealing some of your hashrate because there’s no authentication between you and the pool and the miners.

The protocol now will send blocktemplates ahead of time. Then when new blocks are observed, they will just send the previoushash field. They are trying to load the block template ahead of time, and then send 64 bytes to fill in the thing so you can immediately start mining. That’s an interesting optimization. It’s a cool thread if you want to learn more about mining.

lnurl

This is another way of encoding lightning payment requests in HTTP query strings.

BOLT-android

Some LN hackathons

Bitfinex LN withdrawals

Analysis of bech32 bug

A single typo can cause a lot of problems. One of the interesting things is that changing one constant in the bech32 implementation will fix the problem. How did that guy find that bug? Wasn't Blockstream doing fuzz testing to prevent this? Millions of dollars of budget on fuzz testing.

Unequal coinjoins

Some guy withdrew from Binance and then did coinjoins on Wasabi. Binance banned him. So in the coinjoin world, there's discussion about how to deal with that, and with the fact that coinjoins are very recognizable. If you take money out of an exchange and do a coinjoin, the exchange is going to know. So what about doing coinjoins with non-equal values? Right now coinjoins use equal values, which makes them very recognizable: you just look for these unnatural transactions and you're done. But what about doing coinjoins with non-equal amounts, so that it might look like an exchange's batching transaction or payouts to users? Coinjoiners are being discriminated against. The person who got slapped on the wrist was withdrawing directly into a coinjoin. Don't get me wrong, they don't like coinjoins, but also don't be stupid; don't just send straight to a coinjoin. At the same time, it does show a weakness of this approach.

Wasabi hosted a research club. Right after the coinjoin-Binance issue, a week later, Wasabi was doing some hosted youtube sessions to dig up old research on unequal-amount coinjoining. This is an interesting topic. Someone has a reference implementation in rust, and the code is very readable. There's a 1.5 hour discussion where Adam is just grilling him. It's pretty good. He found a bug in one of the papers… he could never get his implementation to work, and then he realized there was an off-by-one bug in the spec in the paper; he fixed it and then got it to work.

Blind merged mining with covenants and OP_CTV

This is basically what Paul Sztorc was talking about when he visited us a few months ago. This is about having another chain secured by bitcoin which bitcoin would be unaware of, and there would be some token involved. Ruben’s proposal is interesting because it’s blind merged mining, which is what Paul needs for his truthcoin stuff. So you get another thing for free if we actually get OP_CTV.

An argument that some people make for adding any new feature to bitcoin is that we don't know what else we might be able to come up with to use it for. Like the original version, OP_SECURETHEBAG, which it turned out you could do turing completeness with. Maybe it's a use case we want; but a lot of people think blind merged mining is not what we want– I forget why. A lot of thought goes into whether soft-forks should go in.

ZmnSCPxj on path privacy

Not really sure how to pronounce his name. Zeeman? It's ZmnSCPxj. You can deduce a lot of information about what happened in a payment's route. The first part of the email is about how you can use this to figure stuff out. He talks about an evil surveillor on one node along the route, but what if they control two nodes along the route? You can develop reverse routing tables if you have enough clout in the network. He goes into some of the stuff that will happen with Schnorr, like path decorrelation and so on.

ZmnSCPxj on taproot and lightning

This is just insane. This was a good one.

Bitcoin Optech newsletters

c-lightning went from defaulting to testnet to defaulting to mainnet. They added support for payment secrets. You can do this thing, probing, where you try to route fake payments through a node to assess and figure out what it can do: you can generate a random preimage and then create a payment hash from that preimage, even though it's not a real payment. I assume payment secrets are a mitigation for that.

Here’s a thread about what watchtowers have to store, in eltoo. One of the benefits of eltoo is that you don’t have to store the complete history of the channel, only the most recent update. So do they have to store the latest update, or also the settlement transaction? Any comments about that? I don’t really know eltoo well enough to speculate on that.

c-lightning added createonion and sendonion RPC methods to allow for encrypted LN messages that the node itself doesn't have to understand. This lets plugins use the lightning network more arbitrarily to send messages of some sort, and they are onion-encrypted messages.

whatsat is a text/chat app. They are trying to get the same functionality over c-lightning.

All three LN implementations now have multi-path payments. This allows you to… say you have one bitcoin in three different channels. Even though you have 3 BTC, you can only send 1 BTC. Multipath allows you to send three missiles to the same target. lnd, eclair and c-lightning all support this now in some state. Can you use this on mainnet? Should you even do it? It might be in phoenix. lnd’s implementation has it in the code, but they only allow you to specify one path anyway. So they haven’t actually tested it in something that people can run sending multiple paths, but the code has been refactored to allow for it.

Andrew Chow gave a good answer about max bip32 depth, which is 256.

Bitcoin Core added support for the powerpc architecture.

There's now an RPC whitelist. If you have credentials to do RPC with a node, you can do basically anything, but this option allows you to whitelist certain commands per credential. Say you don't want your lightning node to be able to add new peers at the p2p level, which could let you be eclipse attacked; lightning should only be able to do blockchain monitoring queries. Nicolas Dorier says his block explorer only relies on sendrawtransaction for broadcasting. So you want to whitelist, and this is per user credential. Do they have multiple user credentials in bitcoin.conf?
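
As a rough example (check the release notes for the exact option syntax before relying on this), bitcoin.conf might restrict one set of RPC credentials to a few watch-only calls plus broadcasting:

rpcauth=lightning:<salted password hash>
rpcwhitelist=lightning:getblockcount,getbestblockhash,getrawtransaction,sendrawtransaction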

This is why lnd is using macaroons. It solves this problem. You don’t need to have a list of people in the config file, you can just have people that have macaroons that give them that access.

Here’s what Bryan was talking about, which is the year-end review. I encourage you to read this, if you’re going to read only one thing it’s the 2019 year-end review newsletter from Bitcoin Optech. Every month in the last year there’s been some big innovation. It’s really crazy to read through this. Erlay is really big too, like an 80% bandwidth reduction.

Gleb Naumenko gave a nice talk at London Bitcoin Devs on erlay. He talked about p2p network stuff. I encourage you to check that out if you're interested.

The script descriptor language is like a better version of addresses for describing a range of addresses. They sort of look like code, with parentheses and stuff. Chris Belcher has proposed base64 encoding them, because right now if you try to copy a script descriptor, by default it won't highlight as a single token. It would make script descriptors more ergonomic for people who don't know what they mean.
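
For a rough idea of what these look like (an illustrative 2-of-3 example with the keys elided, not one from the discussion):

wsh(multi(2,xpubA.../0/*,xpubB.../0/*,xpubC.../0/*))

Belcher's suggestion is just to base64-encode that whole string so it copies and pastes as one opaque token.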

BitMex LN tool

Here’s a tool from BitMex which is a live alert system for channels. This was the BitMex forkmonitor.

Caravan

Unchained’s caravan got a shoutout.

Anonymous coinjoin transactions with

This is a paper that wasabi dug out from like 2 years ago.

Luke-jr’s full node chart

The script to produce this is closed-source so you can’t game it. But there’s multiple implementations out there. I was suspicious of this because luke-jr is kind of crazy, but gleb seems to think it’s correct. We’re at the same number of full nodes as we were at in mid 2017. So that’s kind of interesting. The top line is the total number of full nodes, and the bottom line is the number of listening nodes which will respond to you if you try to initiate a connection with them. You probably want to be unreachable for selfish reasons, but you do need to be reachable to help people sync with the blockchain. Mid 2017 might be peak segwit when people were spinning up nodes, or it might be related to the market price. There was also a June price spike too. Maybe some of these are for lightning nodes. I bet a lot of people don’t do that anymore.

Clark Moody’s dashboard

Moody's dashboard has a lot of stats that you can watch updating in real time.

Bitcoin Magazine year-end review

We had 10% growth in the number of commits, an 85% price increase, bitcoin dominance went up 15%, and the inflation rate is 3.8%. Daily volume went up. Segwit usage went from 32% to 62%. The value of daily transactions went up. The blockchain size grew 29%. Bitcoin node count plummeted, which is not so good. It might be because a lot of people only had 256 GB hard drives and that's why they dropped out… yeah, but what about pruning?

arxiv: new channel construction manuscript

List of hardware wallet attacks (from Shift Crypto)

It’s a pretty cool list. This is why you do multi-vendor multisig, maybe. This is pretty terrifying.

The pitfalls of multisig when using hardware wallets

One of the things people don't realize is that if you use multisig and you lose the redeemScripts, or the ability to compute them, you lose the multisig capability. You need to back up more than just your seed if you're doing multisig; you need to back up the redeemScripts. Some vendors try to show everything on the screen, and some others infer things. Manufacturers don't want to be stateful, but multisig requires you to maintain state, like making sure other multisig participants aren't changed or information swapped out from under you. If you're thinking about implementing multisig, look at the article.

bunnie’s talk about secure hardware

The big point of this talk was that you can't hash hardware. You can hash a computer program and check it, but you can't do that with hardware. So having open-source hardware doesn't necessarily make it more secure. He goes through all the ramifications of that and what you can do. He has a device, a text-message phone, that is as secure as he can make it, and the interesting thing is that you could turn it into a hardware wallet.

CCC conference and talks

SHA-1 collision

For $70k, using the legacy version of PGP that uses sha-1 for hashing, they were able to create two PGP keys that had different user ids with colliding certificates.

Bitcoin Core fuzz testing

There is a fuzz testing stats page, and then Bitcoin Core PR review group.

lncli + invoices - experimental key send mode

It's just a way to send to someone; you can send to another node without needing an invoice from them. They had this feature for like a year and then it finally got merged.

Merkleized Abstract Syntax Trees

Core dev tech

https://twitter.com/kanzure/status/907075529534328832

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html

I am going to talk about the scheme I posted to the mailing list yesterday, which is to implement MAST (merkleized abstract syntax trees) in bitcoin in as minimally invasive a way as possible. It's broken into two major consensus features that together give us MAST. I'll start with the last BIP.

This is tail-call evaluation. Can we generalize P2SH to give us more general capabilities than just a single redeem script? So instead of committing to a single script, what about committing to multiple scripts? The semantics are super simple. All it does is that when you reach the end of execution for a script, if the stack is not clean, meaning there are at least two items, then it will recurse into the top-most item. This is actually the same thing P2SH does… so we recurse into scripts; it's like two levels of P2SH. To keep this safe, we limit recursion to one tail call evaluation.

To get the rest of MAST, we have to introduce a new opcode which I am calling OP_MERKLEBRANCHVERIFY. It takes 3 arguments: the root, the leaf hash, and a proof. This is the same kind of extension opcode as OP_CSV and OP_CLTV. MBV will be NOP4. It will fail if there aren’t 3 items on the stack. The leaf and the root are both 32-byte hashes, and the other one has to be a serialized proof, which is specified in the first BIP. It checks the types to see if they can be deserialized, and then it checks that the leaf and proof actually match the root, either failing or continuing.
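
As a rough illustration of what the opcode checks, here is a simplified pairwise-SHA256 sketch in Python; the real BIP98 fast merkle tree uses a different inner-node hash and a specific proof serialization, so this only conveys the shape of the check:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_branch_verify(root, leaf_hash, proof):
    """proof is a list of (sibling_hash, sibling_is_on_right) pairs, leaf level first.
    Folding the leaf up through the siblings must reproduce the committed root."""
    node = leaf_hash
    for sibling, sibling_on_right in proof:
        node = sha256(node + sibling) if sibling_on_right else sha256(sibling + node)
    return node == root
```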

Together, these two capabilities give us generalized MAST because we can construct a witness that looks like this. We push the merkle proof. So the witness looks like arg1..argN and then [script] and then [proof].

Witness: arg1 … argN [script] proof

For the redeemScript, we want to copy the script, we capture the second element and hash it… we add the hash of the root, then MBV. This gives you generalized MAST capabilities with these two features. After executing this, we will have …. oh, also we have to drop the MBV parameters, to make it soft-fork compatible.

Redeem script: OVER HASH256 root MERKLEBRANCHVERIFY 2DROP DROP

We will have proven that this script was committed to in the original script. We drop everything we did here, with the proof, leaving just the script on the stack and the arguments to pass into it, and we recurse exactly once into this script and do whatever it wants to do. We can generate a complex contract that has multiple execution pathways. You have a tool that generates each possible execution path you’re interested in. At redeem time, you specify the path you want to use, show the path, and then execute it.
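
A hedged sketch of that workflow, again using plain pairwise SHA256 rather than the BIP98 serialization, with placeholder scripts and signatures:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Hash the leaf scripts into a simple pairwise tree and collect the sibling
    hashes needed to prove that leaves[index] is committed to by the root."""
    layer = [sha256(leaf) for leaf in leaves]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])       # toy rule: duplicate the last node on odd layers
        sibling = index ^ 1
        proof.append((layer[sibling], sibling > index))
        layer = [sha256(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return layer[0], proof

# Hypothetical spending conditions; only the one actually used is revealed at spend time.
scripts = [b"<timeout refund script>", b"<2-of-3 script>", b"<everyone agrees script>"]
root, proof = merkle_root_and_proof(scripts, 2)

# The output commits to `root`; the witness at spend time is arg1..argN, script, proof.
witness = [b"<sig alice>", b"<sig bob>", scripts[2], proof]
```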

You can blind certain code pathways by only showing the one that you do use. Or you can do something like sipa’s key trees where you specify the merkle tree of the keys you want to use, and just pull out the ones you want to use. Merklebranchverify uses most of the logic from somewhere else, although it would make it consensus-critical. And the tail-call recursion is very simple because it’s limited to being just tail calls. You reach the end of the script, you reset some of the variables, set your script to the new script you’re recursing into, and then perform a GOTO.

When you get to the end of your execution, if there is something still on your stack, you execute it. If there’s two or more. If you go through your execution and find you still have stuff on your stack, you take the top thing and execute it as a script. This is safe because, first, there are no scripts out there, at least on bitcoin mainnet, that do not have cleanstack. It is not relayed at the moment. That’s not consensus. “It’s a consensus rule for witness.” This would not work in SegWit, right. When you terminate execution right now with 2 or more items on the stack, which is allowed except for SegWit…. cleanstack is a consensus rule in SegWit, it’s not just standardness. This is only for v0 of course. Part of the benefit of this is that it… reduces the need for script versioning. Well that might interact there. At least without SegWit, this is a soft-fork because any interesting script you push on the stack, except an empty script or a series of 0 pushes, would currently just evaluate to true.

The redeem script is part of the witness. How do you avoid being limited to .. levels. One is that, the merkle branch, without CAT or something you can’t get around it. But what you can do is that this merkle tree structure was designed for chaining calls. You can just pull out a middle hash and you verify it’s part of the next one. And because you can make balanced trees this way… and because the unary case is handled as a pass-through, you can have just one pass-through. What about tail recursing log N times? Well, only one level of recursion depth on the final execution. But that will probably not reach bitcoin because it is hard to bound the computation of that. But the size of the script will be linear on the… no, OP_DUP and generate data, you can make a script that generates itself and calls DUP or whatever, and it keeps going. Write a script that pushes to the stack a copy of itself, and then you run that and it runs forever. There’s no hash in the execution in this proposal.

Tail recursion as a model of handling general recursion in stack-based languages makes a lot of sense, and it would require cost modeling, and there would probably be run-forever scripts and bloat-memory-forever scripts. We get away with a lot in bitcoin script because we’re limited by no recursion and no looping, so we can almost ignore the cost of the memory usage or everything, there’s no pattern of options that can become a problem. But with unbounded recursion, that’s not the case. One level of recursion is the only thing you need for getting something out of a hash tree. It’s hard to come up with a use-case where you want the generalized recursion. Delegability? If you have OP_CHECKSIGFROMSTACK, then that would solve that. If we had a CHECKSIGFROMSTACK this makes the merkle branch verify stuff even more powerful, where you can run a script and sign for whatever.

What are the issues with OP_CHECKSIGFROMSTACK? … You could do covenants, well with OP_CAT. Here’s a coin that can only be spent by a certain transaction, so you could lock a coin into a chain.

If you don’t mind the exponential blow-up, then all things reduce to “long list of ways to unlock this, pick one”. The exponential blow-up in a merkle tree turns into a linear blow-up in ways to unlock things, but still exponential work to construct it.

One of the advantages of this over jl2012’s original bip114 ((but see also)) is that, besides being decomposed into two simpler components… go fetch pubkeys out of a tree, then have key tree signatures, it also helps you deal with exponential blow-up when you start hitting those limits, you could put more logic into the top-level script. How hard is it to make this … pull out multiple things from the tree at once, because there’s sharing to look at. Yes that was some of the feedback, it’s doable, and you have to work out the proof structure because it’s meant for single outputs. In the root you might have n which is the number of items to pull out. So it might be 3 root leaf leaf leaf proof. But without knowing what that n was, you basically have to use it as a constant in your script, the root is a constant. I think it would be interesting to have a fixed n here.

The outer version is the hash type or explaining what the hashes are. So the history behind this is that at some point we were thinking about these … recoverable hashes, which I don’t think anyone is seriously considering at this point, but historically the reason for extending the size … I think my idea at the time when this witness versioning came up, we only need 16 versions there because we only need the version number for what hashing scheme to use. You don’t want to put the hashing version inside the witness which is constrained by that hash itself because now someone finds a bug and writes ripemd160 and now there’s a preimage attack there and now someone can take a witness program but claim it’s a ripemd160 hash and spend it that way. So at the very least the hashing scheme itself should be specified outside the witness. But pretty much everything else can be inside, and I don’t know what structure that should have, like maybe a bitfield of features (I am not serious about this), but there could be a bit field where the last two are hashed into the program.

One of the advantages of script versioning that we did is that it’s actually hard to be super confident that an implementation of a new flag is actually a soft-fork. It’s easier to be sure that these things are soft-forks. Mark is saying as a principle we don’t merge multiple unrelated things except the most reasonable things. CHECKSEQUENCEVERIFY and CHECKLOCKTIMEVERIFY, sure. Segwit had bip147 nulldummy. But generally combining multiple things, there’s no end to that and you get this exponential penalty on review where you’re saying okay there’s this boatload of unrelated features… good luck.

In your scheme I couldn’t do a tree, I couldn’t do a 19-of-20 multisig… a 600-byte serialization can’t go on the stack. Any single item on the stack. There’s a push and the push is… … This tree plus tail recursion ends up with the next script on the stack, so it’s subject to the 520-byte push limit. It could be limited in v1 or something. You can’t push something larger than that. Even in SegWit, the SegWit script is not pushed onto the stack, it’s just separate. Segwit encodes the stack— it’s not a script that runs anything. It’s not the proof that’s the problem, it’s the script itself. The script can’t contain all the pubkeys. A tree of 19-of-20 checkmultisig, you can’t put that kind of redeem script in there, and this could be fixed in v1 or something. The 520-byte limit for P2SH has been a pain. You can only get up to 17-of-17 or something. There are reasons to not use… not accountability, but also you have to do setup in advance. So if you have some script that pulls keys from a bunch of different trees, and says go do a multisig with that. It’s been a limit, but it would be much more of a limit, limited in other ways. That limit becomes more favorable as you soft-fork other stuff into the script.
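
To see roughly where that limit bites, a back-of-the-envelope calculation assuming compressed 33-byte pubkeys (the exact cutoff quoted from memory above may differ):

```python
# Size of a bare k-of-n CHECKMULTISIG redeem script with compressed pubkeys:
# OP_k (1 byte) + n * (1 push byte + 33-byte key) + OP_n (1 byte) + OP_CHECKMULTISIG (1 byte)
def multisig_redeem_script_size(n):
    return 1 + n * 34 + 1 + 1

for n in (15, 16, 19, 20):
    size = multisig_redeem_script_size(n)
    print(n, "keys ->", size, "bytes:",
          "fits" if size <= 520 else "exceeds the 520-byte push limit")
```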

For the people that interact with this, like some application writer trying to write a script, this is much more complex than the original MAST thing where you just have a data structure. People working with this are going to have a harder time. Someone should go figure out a proper proposal of what the alternative looks like.

Merklebranchverify is more useful than MAST generally. This is a slight generalization of P2SH and this is what P2SH should have been originally. P2SH was designed explicitly to (not?) be a nop eval… code dealing with what code will be executed. P2SH with user-specified hashing. You do the hashing, and it handles the execution, but only one level of execution just like P2SH. The user provides some outer level of code, and then provides the next level. In P2SH you are confined to only running a template, and this eliminates the template. With the merklebranchverify you can make use of this. It’s 3 layers for technical reasons, because of P2SH and compatibility. If we had meditated more on P2SH we might have ended up with this tail-recurse design too. It’s almost indistinguishable from P2SH in the sense that… if the P2SH template was slightly different, then you could say it always did that.

The delta from P2SH is that the script would be a DUP, that’s the difference. If P2SH had a DUP in front of the hashequalverify and then you ran these consensus rules, then it would just work, because P2SH consumes the script. Or had a special opcode that doesn’t actually remove the thing being hashed, then it would be compatible with this. This would be a reinterpretation of what P2SH is doing, and it would make P2SH make sense. … A script and OP_HASH160 and then spend to it.. I put OP_NOP and then it worked? People complain about the magicness of P2SH, sort of suggesting that people might be confused like this. There’s pushonly magic, right.

Is there a clean way to do multiple outputs for merklebranchverify? Pull 3 out of a tree of a thousand. Pretty cool. You could repeat the proof and do it again, provide 3 separate proofs or something. The concern with multi-merklebranchverify is the verification scripts.. but maybe not an issue.

Compact serialization could optimize that away… only doing it at the top level. Prevents you from doing the gather operation and pulling 3 items out, and… No, that’s not true. You can still have the merklebranchverify as a nopcode for doing keytreesignatures where you don’t need to do recursion. You can do both.

If you use a scheme like bip114, then it would be less, because in the output itself… and it mainly solves the push size issue.

Once you check the top of it, you can grab part of the tree, and then you can merklebranchverify and– this is more like merklebranchget, because you already did the verify. You do an encoding of this where, from a partial merkle tree, you compute a hash, and then do GETs on it to go get items from the tree. You have an opcode that verifies a pruned merkle tree data structure is what you think it is. It only does verification on the hash expected. And then you get the thing you wanted out of the tree. So you can do the shared branching down. You take a proof, an opcode that takes it, then you do get operations to get a branch and attach another branch. This is close to what bip114 does. It was not clear why he was pulling multiple things out, but this makes sense. If you are doing multiple gets out of the merkle tree then you definitely want to share levels. If you’re pulling two things out, then half the time it’s going to be on the same left branch, etc. And in practice you’re… and if you intentionally order your tree such that they are, you can make your common cases do this anyway. Notice how again a system that is just, let’s just rewrite that script input as a merkle tree, might be easier to implement; there are edge cases at the end where you can check any branches that were provided but not pruned and never accessed, if yes then fail the script and that’s your ((what?)) rule. I’m not quite getting that.

With a serialization merkle tree where you have pruned and unpruned branches, malleability is someone taking a pruned branch and making it unpruned by adding in the extra data, which they might be able to do in some cases. So you want to get rid of the malleability by something like the cleanstack rule: if you didn’t pull it out, … is it in the minimum possible serialization for what you did. The actual implementation of the tree can say how many values can be… and just keep track of that. The minimum serialization size is different between… in these kinds of trees. Either something is or isn’t unpruned. In the serialization, there is only one possible serialization. That’s why cleanstack got …. and the motivation is to prevent malleability, so that a miner can’t add garbage to your witness. In a different type of serialization coding for transactions, you could argue that malleability matters because the wallet can just throw that away. We do have residual witness malleability and I’d like to have code in the relaying network that autostrips witnesses. Arguably this might be a proof point for malleability being an expansion point. Unfortunately it’s both. Malleability is software expansion space… The midground is to make things policy-prohibited rather than….

Can you serialize the size of the.. in a checksig? Yes. In v1, we would probably do something where the– something gets serialized, like the size of your witness, so you can’t malleate it larger.. and still validate the signature. And this idea has come up a couple times in the past. Sign your signature size, essentially. Malleability in both cases isn’t an issue, it just reduces the efficiency of relay. The one thing that you could do is take a valid transaction that is being relayed, and I make it larger to the point that it is … resulting in policy rejects of that transaction. Segwit kills tail-call recursion because of cleanstack. In v1 maybe we can change that. Well what if we want general recursion later? Specific attack here– you can sign the size, or provide the maximum. The witness will be no longer than this value, right. You can serialize the maximum that you used. The maximum that you used when you signed was 200 bytes, your signature might be less, and the verifier needs to know what number to use in the hash. This whole thing with witness malleability is like different from…

The review process for turning NOPs into soft-forks is painful.

What exactly is going into bitcoin script v1 proposals? ((lots of mumbling/groaning))

We could have an OP_SHA256VERIFY instead of OP_SHA256. For a lot of things, it’s conceptually… … If every operation always verifies, with no non-verifies, then you could validate scripts in perfectly parallel. You take every operation in a script and validate it on its own. You don’t need sequential verify. It’s bad for serialization– you can’t do that. But if we didn’t care about bandwidth and storage, then it would be a better design.

If you had a hash256verify here in the merklebranchverify… some could turn into OP_DROP. The nopdrop ((laughter)). OP_RETURNTRUE, not just OP_RETURN. All the nopcodes should be turned into OP_RETURNTRUEs. If you hit this opcode, then you’re done for good. This is a generalization of version. The downside is that you don’t know if you’ve hit one of these until you’re in the script. This basically lets you have a merkle tree where some scripts have different versions in them. This is a way to do witness versioning by saying … my first opcode is OP_RETURN15 and 15 isn’t defined yet, so it’s true, and after defining it, now it has a meaning. Some of them might turn into NOPs… Let’s not call it OP_RETURN, it should be OP_EVIL or OP_VER. Right now a script that doesn’t deserialize is never valid, because you can’t get to the end. Run to the end, but have a saturating counter that says whatever happens, I’m already done. In DEX, everything is deserialization. So you have your pointers in a stream. The upgrade path for that would be the data structure you just deserialize… What parts do you need to deserialize if you haven’t? OP_RETURN in a scriptsig would return whatever is on top of the stack, which would be bad.

It’s not good to mix code and data. Can you hash what is on the stack and not duplicate it? The overhead would go away. Otherwise you wouldn’t need the large push in merklebranchverify.

The version of merklebranchverify in the link is on top of Core and that’s a hard-fork because of the witness details. But also there was one in elements.git where there’s a merklebranch RPC and others. Takes a list and turns it into a tree, but want to do arbitrary JSON objects.

So merklebranchverify maybe should be deployed with the non-SegWit version.. but maybe that would send a conflicting message to the users of bitcoin. Segwit’s cleanstack rule prevents us from doing this immediately. Only in v0.

We need some candidate soft-forks that are highly desirable by users. Maybe signature aggregation, maybe luke-jr’s suggested anti-replay OP_CHECKBLOCKATHEIGHT proposal. It needs to be highly desirable by users to qualify for this particular case though. It needs to be a small change, so maybe not signature aggregation, but maybe signature aggregation since it’s still highly desirable.

They can break a CHECKSIGFROMSTACK… in a hard-fork. CHECKBLOCKHASH has other implications, like transactions aren’t– in the immediately prior block, you can’t reinsert the transaction, it’s not reorg-safe. It should be restricted to like 100 blocks back at least. +[a]I approve this chatham house rule violation

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014979.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015022.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014963.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014960.html

https://www.reddit.com/r/Bitcoin/comments/7p61xq/the_first_mast_pull_requests_just_hit_the_bitcoin/

bip98 “Fast Merkle Trees” https://github.com/bitcoin/bips/blob/master/bip-0098.mediawiki

bip116 “MERKLEBRANCHVERIFY” https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki

bip117 “Tail Call Execution Semantics” https://github.com/bitcoin/bips/blob/master/bip-0117.mediawiki

implementation of bip98 and bip116 https://github.com/bitcoin/bitcoin/pull/12131

implementation of bip117 https://github.com/bitcoin/bitcoin/pull/12132

https://bitcointechtalk.com/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast-33fdf2da5e2f

\ No newline at end of file diff --git a/bitcoin-core-dev-tech/2017-09/index.xml b/bitcoin-core-dev-tech/2017-09/index.xml index a6a1b0af09..50d23dd746 100644 --- a/bitcoin-core-dev-tech/2017-09/index.xml +++ b/bitcoin-core-dev-tech/2017-09/index.xml @@ -1,5 +1,5 @@ Bitcoin Core Dev Tech 2017 on ₿itcoin Transcriptshttps://btctranscripts.com/bitcoin-core-dev-tech/2017-09/Recent content in Bitcoin Core Dev Tech 2017 on ₿itcoin TranscriptsHugo -- gohugo.ioenThu, 07 Sep 2017 00:00:00 +0000Merkleized Abstract Syntax Treeshttps://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/Thu, 07 Sep 2017 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/https://twitter.com/kanzure/status/907075529534328832 -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html I am going to talk about the scheme I posted to the mailing list yesterday which is to implement MAST (merkleized abstract syntax trees) in bitcoin in a minimally invasive way as possible. It&rsquo;s broken into two major consensus features that together gives us MAST. I&rsquo;ll start with the last BIP. This is tail-call evaluation. Can we generalize P2SH to give us more general capabilities, than just a single redeem script.Signature Aggregationhttps://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/Wed, 06 Sep 2017 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/https://twitter.com/kanzure/status/907065194463072258 Sipa, can you sign and verify ECDSA signatures by hand? No. Over GF(43), maybe. Inverses could take a little bit to compute. Over GF(2). diff --git a/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html b/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html index 7c10223216..758796db3d 100644 --- a/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html +++ b/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html @@ -1,7 +1,7 @@ Cross Curve Atomic Swaps | ₿itcoin Transcripts
\ No newline at end of file +Core dev tech

https://twitter.com/kanzure/status/971827042223345664

Draft of an upcoming scriptless scripts paper. This was at the beginning of 2017. But now an entire year has gone by.

post-schnorr lightning transactions https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html

An adaptor signature.. if you have different generators, then the two secrets to reveal, you just give someone both of them, plus a proof of a discrete log, and then you say learn the secret to one that gets the reveal to be the same. It’s a proof of discrete log equivalence. You decompose a secret key into bits. For these purposes, it’s okay to have a 128-bit secret key, it’s only used once. 128 bits is much smaller than secp and stuff. We can definitely decompose it into bits. You need a private key, but lower than the group ordering in both. I am going to treat the public key… I’m going to assume– it’s going to be small enough, map from every integer into a range, a bijection into the set of integers and the set of scalars in secp and the set of scalars in… It’s just conceptual because otherwise it doesn’t make sense to do that. In practice, it’s all the same numbers. You split it into bits, similar to the Monero Ring-CT thing, which is an overly complicated way to describe it.

What about Schnorr ring signatures? Basically, the way this works, a Schnorr signature has a choice of a nonce, you hash it, then you come up with an S value that somehow satisfies some equation involving the hash and the secret nonce and the secret key. The idea is that because the hash commits to everything including the public key and the nonce, the only way you can make this equation work is if you use the secret key, and then you can compute S. And if the hash didn’t commit, then you could just solve for what the public nonce should be. But you can’t do that because you have to choose a nonce before a hash. In a Schnorr ring signature, you have a pile of public keys, you choose a nonce, you get a hash, but then for the next key you have to use that hash, and eventually you get back to the beginning and it wraps around. You start from one of the secret keys, you start one key past that, you make up random signatures and solve them and go back, and eventually you can’t make a random signature and solve it, and that’s the final hash, which is already determined, and you’re not allowed to make it again, you have to do the right thing, you need a secret key. The verifier doesn’t know the order, they can’t distinguish them. What’s cool about this is that you’re doing a bunch of algebra, you’re throwing some shit into a hash, and then you’re doing more algebra again and repeating. You could do this as long as you want, into the not-your-key ring signature, you just throw random stuff into this hash. You could show a preimage and prove that it was one of those people, it’s unlinkability stuff.

You could build a range proof out of this Schnorr ring signature. Suppose you have a pedersen commitment between 0 and 10, call it commitment C. If the commitment is to 0, then I know the discrete log of C. If the commitment is to 1, then I know C - H, and if it’s 2 then it’s C - 2H, and then I make a ring with C - H, up to C - 10H, and if the range is in there, then I will know one of those discrete logs. You split your number into bits, you have a commitment to either 0 or 1, or 0 or 2, or 0 or 4, or 0 or 8, and you add all of these up. Each of these individual ring signatures is linear in the number of things. You can get a log sized proof by doing this. These hashes– because we’re putting points into these hashes, and the hashes are just using this data.
I can do a simultaneous ring signature where at every step I’m going to share a hash function: I will do both ring signatures, but I’m using the same hash for both of them, so I choose a random nonce over here, I hash them both, and then I compute an S value on that hash on both sides, I get another nonce and I put both of those into the next hash, and eventually I’ll have to actually solve on both sides. So this is clearly two different ring signatures that both are sharing some…. But it’s true that the same index.. I have to know the secret key of the same index on both of them. One way to think about this is that I have secp and ed, and I am finding a group structure in this in the obvious way, and then my claim is that these two points, like Tsecp and Ted, have the same discrete log. In this larger group, I’m claiming the discrete log of this multipoint is T, T, and both the components are the same. I do a ring signature in this cartesian product group, and I’m proving that they are the same on both sides, and this is equivalent to the same thing where I was combining hashes.
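
For readers who want the single-curve version of that ring signature spelled out, here is a toy Python sketch over secp256k1 (hashing only x-coordinates, ignoring edge cases, not safe for real use); the simultaneous cross-group variant described above would run the same loop while feeding both groups’ nonces into one shared hash:

```python
import hashlib, secrets

# Minimal secp256k1 arithmetic, just enough for a demo (not constant-time, not for production).
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    lam = (3 * x1 * x1 * pow(2 * y1, -1, P)) % P if a == b else ((y2 - y1) * pow(x2 - x1, -1, P)) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt=G):
    out = None
    while k:
        if k & 1: out = add(out, pt)
        pt = add(pt, pt)
        k >>= 1
    return out

def challenge(point, msg):
    return int.from_bytes(hashlib.sha256(point[0].to_bytes(32, "big") + msg).digest(), "big") % N

keys = [secrets.randbelow(N) for _ in range(4)]
pubs = [mul(k) for k in keys]
j, msg = 2, b"ring signature demo"            # the real signer's index stays hidden

# Sign: random nonce at index j, forge (s, e) pairs around the ring, close the ring back at j.
n_ring = len(pubs)
e, s = [0] * n_ring, [0] * n_ring
a = secrets.randbelow(N)
e[(j + 1) % n_ring] = challenge(mul(a), msg)
i = (j + 1) % n_ring
while i != j:
    s[i] = secrets.randbelow(N)
    e[(i + 1) % n_ring] = challenge(add(mul(s[i]), mul(e[i], pubs[i])), msg)
    i = (i + 1) % n_ring
s[j] = (a - e[j] * keys[j]) % N               # only possible with the secret key at index j

# Verify: walk the whole ring from e[0]; the chain of hashes has to close on itself.
e_i = e[0]
for i in range(n_ring):
    e_i = challenge(add(mul(s[i]), mul(e_i, pubs[i])), msg)
assert e_i == e[0]
```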

Say we’re trying to do an atomic swap. I have an adaptor signature where I can put some coins into a multisig. You give me an adaptor signature, which gives me the ability to translate your signature, which you will later reveal to take the coins, into some discrete log challenge. You give me an adaptor signature and some values, and I say yeah, as long as this happens then I will sign. So the coins can use one curve, and you can claim the coins on the other chain. On the other chain we do it the same way, so you give me an adaptor signature with the same t value. I only have the adaptor signatures now. You sign to take your coins, I use your signature to learn the secret key, and then I can use that to take coins on my end. What if this was an ed25519 coin and a secp coin? It’s dependent on using the same t on both sides. I want you to give me two t’s, one on secp and one on ed25519, and a proof that they are using the same private key. How do you make the key– how do you limit the… When you make this ring signature thing, you have only so many digits. You divide it up into digits, you add up all the digits. For each digit, you give me a– you say this is either the secret key of 0 or 1, or 0 or 2, etc. Doesn’t have to be 128-bits. These proofs are pretty big. Each ring signature is like… if you do it in binary, 96 bytes per digit, 96 bytes * 128 in this case. This is just p2p. It’s like 10-20 kb. It’s not very much.
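
A toy sketch of the adaptor-signature mechanics on a single curve, showing how broadcasting the completed signature reveals t to the counterparty; the cross-curve swap discussed here additionally needs the bitwise discrete-log equivalence proof from above. The helper names and hashing are ours, and this is not a production scheme:

```python
import hashlib, secrets

# Minimal secp256k1 arithmetic, for illustration only (same caveats as the previous sketch).
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    lam = (3 * x1 * x1 * pow(2 * y1, -1, P)) % P if a == b else ((y2 - y1) * pow(x2 - x1, -1, P)) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt=G):
    out = None
    while k:
        if k & 1: out = add(out, pt)
        pt = add(pt, pt)
        k >>= 1
    return out

def challenge(nonce_point, pubkey, msg):
    data = nonce_point[0].to_bytes(32, "big") + pubkey[0].to_bytes(32, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

msg = b"spend from the swap multisig"
x = secrets.randbelow(N); pub = mul(x)        # the signer's key
t = secrets.randbelow(N); T = mul(t)          # the swap secret and its public point

# Adaptor signature: a Schnorr signature whose challenge commits to R+T while s' only contains r.
r = secrets.randbelow(N); R = mul(r)
e = challenge(add(R, T), pub, msg)
s_adaptor = (r + e * x) % N
assert mul(s_adaptor) == add(R, mul(e, pub))              # verifiable against R, T and pub

# Whoever knows t can complete it into an ordinary signature (R+T, s) and broadcast it.
s = (s_adaptor + t) % N
assert mul(s) == add(add(R, T), mul(e, pub))

# Seeing s on-chain, the holder of the adaptor learns t and can claim the coins on the other side.
assert (s - s_adaptor) % N == t
```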

Do people currently have cross-chain atomic swaps for different curves? You could just use hashes, but anyone could see the hashes. They can link them.

The signer could give you a signature and the adaptor signature. There’s a challenger-responder protocol in that draft of the scriptless script.

We need a new definition or letter for sG. We use that a lot.

\ No newline at end of file diff --git a/bitcoin-core-dev-tech/2018-03/index.xml b/bitcoin-core-dev-tech/2018-03/index.xml index 56de5f1294..d9904ab8e3 100644 --- a/bitcoin-core-dev-tech/2018-03/index.xml +++ b/bitcoin-core-dev-tech/2018-03/index.xml @@ -8,5 +8,5 @@ You take every single path it has; so instead, it becomes &hellip; certain c Graftroot The idea of graftroot is that in every contract there is a superset of people that can spend the money. This assumption is not always true but it&rsquo;s almost always true. Say you want to lock up these coins for a year, without any conditionals to it, then it doesn&rsquo;t work. But assume you have&ndash; pubkey recovery? No&hellip; pubkey recovery is inherently incompatible with any form of aggregation, and aggregation is far superior.
Bellare-Nevenhttps://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/See also http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/ It&rsquo;s been published, it&rsquo;s been around for a decade, and it&rsquo;s widely cited. In Bellare-Neven, it&rsquo;s itself, it&rsquo;s a multi-signature scheme which means multiple pubkeys and one message. You should treat the individual authorizations to spend inputs, as individual messages. What we need is an interactive aggregate signature scheme. Bellare-Neven&rsquo;s paper suggests a trivial way of building an aggregate signature scheme out of a multisig scheme where interactively everyone signs everyone&rsquo;s message.Cross Curve Atomic Swapshttps://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/https://twitter.com/kanzure/status/971827042223345664 Draft of an upcoming scriptless scripts paper. This was at the beginning of 2017. But now an entire year has gone by. -post-schnorr lightning transactions https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html +post-schnorr lightning transactions https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html An adaptor signature.. if you have different generators, then the two secrets to reveal, you just give someone both of them, plus a proof of a discrete log, and then you say learn the secret to one that gets the reveal to be the same.
\ No newline at end of file diff --git a/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html b/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html index 22f69246cf..af697b05c1 100644 --- a/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html +++ b/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html @@ -1,5 +1,5 @@ Great Consensus Cleanup | ₿itcoin Transcripts - \ No newline at end of file +Core dev tech

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html

https://twitter.com/kanzure/status/1136591286012698626

Introduction

There’s not much new to talk about. Unclear about CODESEPARATOR. You want to make it a consensus rule that transactions can’t be larger than 100 kb. No reactions to that? Alright. Fine, we’re doing it. Let’s do it. Does everyone know what this proposal is?

Validation time for any block– we were lazy about fixing this. Segwit was a first step to fixing this, by giving people a way to do this in a more efficient way. So the goal of the big consensus cleanup is to fix that, and also fix timewarp, which we have known about for a long time and should fix. Three, we should do a soft-fork that is not the most critical thing and thus can be bikeshedded a little bit, and if it doesn’t activate then nobody is too sad. We can do it for the sake of doing a soft-fork and get something through and work our way through any problems that might emerge, prior to doing something more involved like Schnorr signatures.

Is there a risk of normalizing soft-forks? Maybe, but this isn’t frivolous.

Timewarp

The soft-fork fix for timewarp is actually simple. I would like to get that pull request in. What is the fix exactly? The issue in timewarp is there’s an off-by-one in the calculation of the height for difficulty adjustment. You can jump backwards or forwards significantly, not sure, just on the difficulty adjustment block change. If you just constrain those two blocks to be well ordered, then fine, timewarp is gone. So you just add a new rule saying these blocks must be well-ordered. If you set that rule such that it can go backwards up to 2 hours, then there should be no impact where a miner would generate such a block. The timestamp is allowed to go backwards no more than 7200 seconds.

Having a rule in between the two difficulty periods, and any constant limit on how much the time can go backwards, fixes the timewarp attack. Based on how many seconds it’s allowed to go backwards, there’s a small factor in block speedup that can be achieved but it’s always bounded. If it’s 600 seconds going backward, then actually the bound is exactly what we would have wanted in the first place, namely 2016 blocks per 2 weeks. If you allow it to go back more, you allow slightly more blocks. If you allow less, interestingly, you have fewer blocks.
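
A sketch of the boundary rule as described here (the 7200-second constant is the number mentioned above, and this is not the exact consensus code from the proposal):

```python
MAX_BOUNDARY_BACKSTEP = 7200  # seconds the first block of a period may step back, per this discussion

def retarget_boundary_ok(last_timestamp_prev_period, first_timestamp_new_period):
    """The two blocks straddling a difficulty adjustment must be (nearly) well ordered,
    which removes the large backwards jump that the timewarp attack relies on."""
    return first_timestamp_new_period >= last_timestamp_prev_period - MAX_BOUNDARY_BACKSTEP
```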

What about using timewarp for forward blocks? Someone asked about doing forward blocks on litecoin. But litecoin had merged a timewarp fix a long time ago.

Is the timewarp bug something we should fix? Slushpool had feedback. Is it only Mark that wants to use it?

The best nTime rolling you have is to generate a bunch of midstates, and increment the timer by one second. Nobody does nTime rolling in hardware yet, but people want to do it. This is probably why the slushpool people were concerned. Median time past might be in the past, and the prior block might be 2 hours in the future, and therefore you’re more than 2 hours behind the prior block. This is why it’s allowed to go back by 2 hours. Median time past can be in the past, and the prior block can be 2 hours in the future. If you wanted as much nTime rolling as possible, you start with median time past, and right now people start with current time and they don’t really do nTime rolling. There’s basically no reason to nTime roll significantly into the future, as long as you can store a bunch of midstates for every 1 second. You could argue there’s no reason to do this, or you can’t do this. There’s no reason to do this, and nobody is doing it as far as I can tell.

The only evidence we saw of ntime rolling was up to maybe 10 minutes or something, we looked. The only times that jumped backwards were probably other issues. Do people mine blocks with timestamps in the past? Yes, but that’s a different thing. No, it’s not. It’s old work. I only update you every 30 seconds. Someone else can screw you by mining a block 2 hours in the future. If you start at the current time and someone mines a block 2 hours in the future, that’s always okay. You would have rejected a block more than 2 hours in the future. You’ll meet the rule. They do mine in the past, but it starts with the time in the header, but they find it later in the future. Nobody mines a block whose timestamp is older than the actual time for the prior block? No, nobody does that. With miners, you do see that because stratum updates periodically, pretty much all miners are mining 15 seconds behind because they update every 30 seconds so on average that’s 15 seconds. But you always start with the current time when the stratum job is generated and sent to the client.

How much does current time vary between miners? A few seconds. Harold crawled the bitcoin network and found that 90% of listening bitcoin nodes have a timestamp that is within the measurement accuracy of me. People without an accurate clock are like 2 days off. So yeah. When you connect, the version message has the current unix time, system time. It’s network-adjusted time, I think so? That would be infectious, wouldn’t it. You announce system time, but you use the network-adjusted time. Your crawl data is still useful because it tells you the network-adjusted time. Pool servers probably do their own thing.

Does bfgminer or cgminer currently have any code to reject their pool telling them to participate in a timewarp based on previous headers? Clients don’t do anything, nor should they. An ASIC should not be interpreting data based on consensus rules. It’s stupid. bfgminer does some insane shit but that’s because bfgminer is insane. In general you want the miner to be as dumb as a brick, they just throw out work to an ASIC. Right now they are too smart for their own good because they see the coinbase transaction.

So you crawled node times, but did you look at pool times? No, I didn’t get around to that. I was just looking at the bitcoin network and comparing their clock to my local clock. Almost all pools have a listening bitcoin node on the same IP address, which is weird. But for those who don’t, they have a pool, and then a node in the same rack or data center running a bitcoin node and then maybe an internal bitcoin node that is not listening. In general, the clocks seem to be within a second, which is roughly what you would expect from NTP plus or minus a few seconds. A whole bunch of people are an hour out. It’s probably buggy timezone. Buggy daylight savings time, probably. And Windows setting local time instead of setting to UTC. There’s a 15-minute time zone. North Korea has the worst timezone stuff ever. Well, that’s because they don’t care about the rest of the world.

More on the great consensus cleanup proposal

There’s a bunch of rules, including 64-byte transactions, which primarily impacts segwit transactions. Yeah, that’s another thing. The codesep thing is only ever intended to be for non-segwit transactions but maybe we will drop that in favor of a 100k rule. But the 100k rule is only for legacy? I haven’t thought that far ahead. It should be 100k base size. You could just say 100k base size, and then it applies to segwit as well. Or 400k witness, really. 100k stripped size is what you want, right? Base size is weird language to me. Witness-stripped size, no more than 100kb, should be the proposal. That’s the only thing that matters to the sighashing rules. But you could also do it by witness weight because that’s how we calculate size now. You’re right, if you want to attack the exact thing that is causing the problem, you would do it by…. we don’t want to do it to the other thing because if you have a giant witness. Do we want witnesses larger than 400 kb?
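
One way to picture the proposed check, using numbers nodes already track (the 100,000-byte figure is just the limit being debated here, not a settled rule):

```python
# BIP141 defines weight = 3 * stripped_size + total_size, so the witness-stripped size
# can be recovered from weight and total serialized size without reserializing.
def stripped_size(weight, total_size):
    return (weight - total_size) // 3

def within_proposed_limit(weight, total_size, limit=100_000):
    return stripped_size(weight, total_size) <= limit
```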

Are you proposing this as the next soft-fork, or are you arguing this should be the next soft-fork? Given where we are on taproot, it seems possible that trying to do something first would only push that out further. Well, it was intended to be faster, but nobody really reviewed it or gave a shit, so whatever. It is completely orthogonal. It could happen in parallel, there’s not really any harm to that. Also, if community agreement is there, there’s no reason why they couldn’t be activated at the same time if the agreement is there.

These DoS vectors, the timewarp bug and the sighash stuff– how much time would we have to deploy a soft-fork like this and have it ready? If something bad happens… we’ve seen bad things happen, we’ve seen blocks that take way too long to validate. It’s been exploited before. Timewarp is a different question. Timewarp, can we deal with that when it happens? No, suddenly a timewarp block comes out and then we know it has happened. Exploiting timewarp is a very slow process. It takes multiple months, so we could choose to deploy the soft-fork then. If timewarp is happening then it’s because the miners are doing it, whom you need for the soft-fork activation in that case. For sighashes, you need to set up hundreds of thousands of UTXOs and we can see that particular one. But the f2pool-style dust cleanup isn’t really that bad, but it’s pretty nasty. Large scriptpubkey size, 10 kb scriptpubkeys or something.

If you want to do something non-controversial and easy to do as a soft-fork, then I believe a 100kb size limit will make it very controversial. Any kind of size limit on transactions I expect to be controversial. It’s been non-standard for 5-7 years now. Would this interfere with confidential transactions? Well, arguably it should not be affecting witness data. It wouldn’t apply to that anyway.

What about the big giant transaction per block, a giant coinjoin transaction with Schnorr? If you have a new signature scheme and this other stuff, then maybe. The new sighash type in segwit generally prevents this, so you could say only non-witness … only the data that is relevant to non-witness inputs, which mostly corresponds to the witness-stripped size. Only then do you check if it’s over 100kb and if it is, then you fail. You want to be able to do it without validating the scripts. Counting the number of legacy inputs and putting a limit on that. You can look up the UTXOs, but you don’t have to validate them. A second type of witness that gets added… you would not be counting that, so it’s kind of dangerous, if we predict that.

Another comment, you really need numbers to show people that this change reduces the worst case. Yes, I have numbers. Having those numbers would be great for convincing people. I haven’t gotten around to rewriting it yet. Nobody appeared to really care.

Soft-fork

Since the timewarp rule has not been violated, maybe nobody upgrades- how do you tell? Well, if they mine invalid blocks then it won’t be accepted by the network. You don’t have the assurance, except that everyone is running Bitcoin Core. If you start losing money, you’re going to fix your shit very quickly.

The BIP says use bip9 versionbits with an explicit promise that if the nodes and ecosystem appear to upgrade robustly, and bip9 times out, then there will be a second bip9 signalling window in a future release where that bip9 signalling window times out with an activation instead of a timeout. With the signalling of bip9. Yeah, that’s bip8. So yeah, first don’t make it explicitly activate, just see that everyone wants it, then you could see activation and see in the network and there’s still– we still want to use bip9 but you have to meet us there, otherwise bip8 and the carrot and the stick. The client could have an option to let you, in the current release without upgrading, add the new signalling parameters. That’s kind of crazy. Then there might be people too impatient for bip9 to time out… configuration file consensus is not the right way to go.

I don’t want to get into an argument about UASF. It’s better to have the conservative and slow approach for making sure we really have consensus for these changes. Why are we in a rush? Will there be any resistance to a soft-fork at all? I think this isn’t simple enough. The codeseparator stuff is going to get removed from the proposal. There’s a lot of undefined details still. It’s a good idea. If we didn’t have taproot potentially ready also close… be careful how you phrase that. What’s the reasoning for fixing multiple vulnerabilities instead of just one? There’s a lot of overhead to doing a soft-fork, and I don’t think a soft-fork just to remove 64-byte transactions sets the precedent for how soft-forks happen. We shouldn’t be setting a precedent of doing a soft-fork every 6 months. Yes, there’s overhead and risk to a soft-fork. If you have a bunch of uncontroversial things, then sure it is helpful to bundle them. Also these have to be super simple. Matt has a few things that are pretty straightforward, and they have to be adequately socialized to tell people why it’s important. Well, we want to avoid omnibus soft-forks. Not that this is happening here. There’s a balance, of course.

It should be uncontroversial and simple, which will make it go faster. But we also need to be highly motivated, which makes it go faster or smoother. I think this is an argument for taproot. I think that has been socialized for way longer. I think people understand, well, understand might be a strong word, they get that there’s a benefit to them for this, whereas this stuff might be less clear. There will be drama around the activation method. The twitter community is all the UASF morons.

You guys don’t like bip8? Doing bip8 out of the gate is shitty. This is like saying Bitcoin Core developers have made the decision. You need to make it real for people before you can know there is consensus. This is what bip9 allows for. I think you roll out with bip9, and then it should be likely you see adoption from nodes, miners might be slow, but who is going to object to it?

If we’re going to throw manpower at something, why not something like taproot? Well, you have to fix these real vulnerabilities at some point. It’s a question of sequencing and when do you do this? This might be true if we do a good job of motivating this and explaining it to people.

I would make a superlong bip9 window. If it happens before taproot that’s great, if it happens after taproot that’s fine. We would say we would like this to get out there, but it’s not urgent. We don’t want to set a precedent for setting two soft-forks because if you upgrade you get… we start with one, and then later we do the taproot one. They have to be separated one year or whatever. I think this discussion is premature until we have a clear idea of whether the ecosystem wants these changes. You can talk about activation mechanisms in the abstract, sure.

The taproot BIPs are relatively new. I would try to make the argument you don’t get a clear picture of that until you write the pull requests and start writing code and merging, and that’s when you get people to understand this is real. Reviewing and publishing and merging the code is independent from the decision to add an activation mechanism. Don’t you at least need a plan? I guess you can change what the activation mechanism is going to be.

I would like to see more code for the cleanup soft-fork. Yes, there is a patch and pull request. It was just in one blob. No, there’s tests and everything. The code is fairly readable. When I last looked it didn’t seem separated. Fanquake, it needs to be tagged “needs author action”. It’s 5 commits, I’m looking at it right now.

You could also pull a Satoshi and do a commit “change makefile” ((laughter)).

There’s a mining pool censoring shielded transactions for zcash. Bitcoin Cash miners have already activated a fork for Schnorr signatures so there’s at least some interest there.

Do a bip9 signaling mechanism, do a release with it, really publicize it. The potential for controversy is higher with the great consensus cleanup because it’s closer to bikeshed. But Schnorr-Taproot is Pieter bringing the tablets off the mountain ((laughter; nobody wants that)).

What about splitting up the great consensus cleanup? You overestimate how much people follow any of this stuff. They generally don’t. What would be terrible about having multiple soft-forks? Well it becomes 32 combinations to test. Partial activation nightmares.

I don’t think these soft-forks can be done in parallel. We’re not good at managing social expectations in this project or communicating them in a good way. If we try to do this for two things, I think we’re going to do an even worse job. I think it’s better to focus on one thing, focus on how we’re communicating it, how we’re signaling there’s consensus on this first before just throwing it out there. If we’re doing two soft-forks, it’s going to be confusing to us whether each of them has consensus or whatever, forget about how everyone else perceives it. None of the problems in the great cleanup soft-fork are particularly sexy, so you will struggle to get anyone motivated and interested. But if you have numbers, then maybe. I don’t know how much to prioritize this because I don’t know what the numbers are. It’s weird to say “it’s hard to justify, so we should just do it”. I’m not saying it’s hard to justify, it’s just not sexy. It would be hard socially to get an OP_CHECKMULTISIG soft-fork bug fix deployed, because it’s just a byte and most people don’t care. Well, we haven’t tried to get people excited. It’s hard for even me to feel this is important, it’s just not an exciting topic. If this group isn’t superexcited about it, then how could the world get excited about it? I can’t even remember the full thing and I wrote it.

It’s significantly less exciting than taproot, and it could help build an understanding of soft-forks going forward, and then you do something that is more likely to result in drama. This soft-fork doesn’t have any direct benefits so it’s easier to argue against it. Well, we don’t even have the performance numbers.

If you want community consent for these changes, which is what you need to make the changes, then despite how new the taproot stuff is, the general idea has been socialized for so long that we’re in a good place for moving forward especially since it takes a long time anyway. But these cleanup ideas haven’t been socialized, and packaging them up into something like this is recent. If we were going to leave the cleanup sitting out there for a while and signal that we’re as a group think this is something we should do, it’s probably the next thing we will do after taproot, then people will expect it. They expect Schnorr right now, and they should expect these cleanups. If there’s determined opposition for those changes, we will have time to see what those arguments are.

We know the worst case performance is bad, but what we don’t know is how much the cleanup soft-fork would improve things, like validation attacks. What if this was exploited, and we tell people ah well, we knew about the vulnerability and we just did nothing about it. Timewarp has been well documented for 8 years now. It’s theoretically possible for someone to construct a block that takes a long time to validate and there’s negative repercussions from that. Is it going to be one block, a lot of blocks? How likely is this to actually happen? To be fair, it’s been documented for years and most altcoins have fixed it. “Bitcoin may never do it because it’s a hard-fork” used to be the old argument. Very early on I think people thought it was a hard-fork but that hasn’t been the case for a while.

There was a large timewarp attack in 2013 against an altcoin that didn’t fix the timewarp vulnerability. There was also a recent one, the two proof-of-work balancing thing and a timewarp attack.

As for activation ordering, I think it might be reasonable, because it’s a common pattern that people are used to, to maybe suggest doing one feature fork, one cleanup fork. This could prepare people in a way with not so many drawbacks. I like that, and the suggestion to socialize it, which hasn’t happened yet. We’ve been socializing removing CODESEPARATOR for like 4 years. At a developer meetup in SF a few months ago, CODESEPARATOR was news to them. There was no discussion about the timewarp fix there, but about CODESEPARATOR and the sighash modes I think– what were the three things? 64 byte, codeseparator, pushdata requirement, scriptsig push only. I think 80% of the discussion there was about OP_CODESEPARATOR. If that’s any indication; but it might be random crowd behavior. I think that’s the correct indication, I think. It’s hard to understand the utility of OP_CODESEPARATOR, and lots of people think they have found a use case, but it almost never works for anyone. It’s not that it’s useful, but that someone might think it’s useful and therefore might lock up their funds permanently, which makes this discussion even more difficult. You could get around this with, well, any transaction created before this time and after this time, and then you have context problems…

Also, for the 100kb transaction limit, there’s discussion to be had about setting precedent. Ignoring OP_CODESEPARATOR for a moment, what do you do if you want to remove something like that? It turns out it’s a major vulnerability, but some people might have funds locked up with it, so what do you do about it? We haven’t done much with soft-fork cleanups or vulnerability-fixing soft-forks, so how do you set precedent for removing something like that? I was a fan of doing something like, from activation of the soft-fork, it’s another 5 years before it takes effect. Or it only applies to UTXOs created after the activation date. I’m also a fan of activating it for all UTXOs created after the soft-fork, and then 5 years later we activate it for all the old UTXOs from before the soft-fork activated.

Probably going to drop the OP_CODESEPARATOR change. It turns out to not be…. there are some classes of transactions where it’s really annoying for total hash data usage, but if you want to maximize total hash data usage, it’s not really the immediately worst thing, it’s not the way you would actually do it, especially if you add a 100 kb transaction limit. I have a text document where I wrote all of this up, it’s been a while.

Does anyone really adamantly think this should be deployed before the taproot soft-fork? It depends on activation. The question I’m most interested in, is this a good idea or not? I feel like that question hasn’t been answered. If the soft-fork is useful, we should do it. Does anyone not like the idea of doing these cleanups? It depends on how much it improves the situation. 64-byte transactions- you don’t want to soft-fork those out? I definitely want to soft-fork out the 64-byte transactions. I want to fix timewarp, too. We need to clarify things more, get some numbers, and then let things sink in. Also we have to figure out how to communicate these unsexy things to the Bitcoin community. Who is going to champion this to the community? We might be overestimating how much users care about this; why didn’t we fix this earlier, okay just fix it. The way to do it is brand it, call it time bleed ((laughter; no don’t do that)). Timewarp is a pretty good brand, except that it sounds cool or good.

\ No newline at end of file diff --git a/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html b/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html index c93880f8a2..8a2c6a175b 100644 --- a/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html +++ b/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html @@ -1,6 +1,6 @@ Taproot | ₿itcoin Transcripts \ No newline at end of file +Core dev tech

https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html

https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/

previously: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/

https://twitter.com/kanzure/status/1136616356827283456

Introduction

Okay, so, first question- who put my name on that list and what do they want? It wasn’t me. I’ll ask questions. I can give a summary, but there’s been a lot of talk already and I don’t know what to focus on. What would sipa like us to review in particular about it? What design decisions do you feel least confident about? Is there anything where you would like other people to investigate design decisions before charging ahead?

Taproot overview

Let me make a list of all the things that are included in bip-taproot. Should I also give a brief summary of what the broad idea of taproot is? Is it already well known? Okay, five minute overview of taproot. Okay, where is roasbeef? This is hard, knowing where to start. Let’s start with the taproot assumption. That’s a good start.

The history before taproot is that we have had P2SH, where we moved the actual script code out of the output; it’s only revealed in the input and you commit to it ahead of time. People realized you could do this recursively. Instead of just hashing the script, make it a merkle tree of multiple scripts, commit to just the merkle root and then reveal only the part of the script you’re going to use, under the assumption that you’re going to split your script into a collection of disjunctive statements. You only reveal a log-scale number of branches. So you can have a root to the script tree, then scripts here and scripts here. So there’s a bit of a scaling improvement from this and also a bit of a privacy improvement because you’re only revealing the parts you’re using.
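
To put a rough number on that log-scale claim, here is a minimal sketch (assuming 32-byte hashes and a balanced tree; the constants are illustrative, not taken from the proposal):

import math

def merkle_proof_size(num_scripts, hash_len=32):
    # in a balanced tree you reveal one sibling hash per level
    levels = math.ceil(math.log2(num_scripts)) if num_scripts > 1 else 0
    return levels * hash_len

# 256 alternative scripts -> only 8 * 32 = 256 bytes of proof,
# plus the one script actually being used, instead of revealing everything.
print(merkle_proof_size(256))  # 256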

Pay-to-contract

Greg Maxwell and andytoshi realized at a breakfast place in Mountain View, while I was in the restroom, that there’s a function called the pay-to-contract function which takes as input an elliptic curve point or public key and a script, which is defined as the public key plus the hash of the public key and S… it’s a cryptographic commitment. Nobody, even knowing P and S, can find a different input that produces the same output value. It’s a weird hash value, but it has all the properties of a hash function. Second, it has a property that if you know the private key for this public key P then you also know the private key to the output of this function if you also know S. What you can do with this is say, well we’re going to use this merkle tree still but we’re going to add one special level on top of it where you have P here and you do this pay-to-contract of those two things instead. Now we can have consensus rules that allow you to spend in two different ways. One way is, pay-to-contract is an elliptic curve point, it’s a public key. So you could sign directly with that public key, which you can do if you know the private key. It’s like taking the right branch, but it’s a special branch and you don’t have to reveal that there was another branch or any branching at all. Or you reveal P plus a branch leading down to one of the scripts and you can verify that this thing commits to it. This is the key spending path, and these are the script spending paths. You reveal the public key. When spending through script, you reveal the public key plus the merkle path and nothing else. It’s the internal public key. There’s an output public key, and then this is the internal public key. You only reveal the internal public key when spending using a script. When you spend using the internal public key, you are actually spending with the output public key but it’s a tweaked version of the internal one.
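
As a rough sketch of the commitment just described, on the scalar side only (the actual proposal uses tagged hashes and a specific key serialization, so treat the plain SHA256 here as a stand-in):

import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def p2c_tweak(pubkey_bytes, committed_data):
    # hash of the public key together with the committed data, reduced mod the order
    h = hashlib.sha256(pubkey_bytes + committed_data).digest()
    return int.from_bytes(h, "big") % N

# If the internal key is P = p*G, the output key is Q = P + t*G with t = p2c_tweak(P, S).
# Whoever knows p and S therefore knows the output private key q = (p + t) mod N,
# which is exactly what makes the key spending path work.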

Taproot assumption

What we’re calling the taproot assumption is that, any kind of more complex script construction in bitcoin really just encodes conditions under which coins can be spent. In any sort of smart contract on top, it’s possible to add a branch that says “everyone agrees” or “at least some people agree”. You can always add a unanimity branch, or a certain set of people. It’s a script that consists only of public keys. With adaptor signatures, you can do some things still. No hash locks or timelocks, etc. Spending using this is the cheapest thing; the only thing that goes on the blockchain is one public key and one signature. You don’t reveal to the world even the existence of the other scripts that were also allowed to spend. Even when you spend using a script, you’re not revealing what other scripts existed or whether a public key existed. The assumption is that most spends on the network can be written as ones that just consist of public keys. There’s always the possibility of spending using a script, but we believe that if the cheapest option for spending is one that just has a public key, then people will optimize their scripts to actually have this public key on the unanimity branch, and with the argument that- there’s never a problem with having a branch for when everyone agrees- there should never be a problem with that.

To your point though, there are reasons why you might not want to do that, like expecting everyone to be online at signing time. I expect that to be fairly common, like a two-of-three multisig, maybe you pick the two most– you expect it’s almost always going to be A and B then you put A and B on this branch and then put another branch that says A and C which is less likely.

How am I doing on my five minutes? Why not use threshold signatures in that specific case? I’m talking about multiple public keys, but in taproot only one is there. In musig or other key aggregation schemes where we can represent a combination of multiple keys, really just with a single public key, and there are— with musig, you can do very simple and fairly straightforward security arguments on n-of-n multisig. You can also do non-accountable thresholds like 3-of-5 or any boolean expression over multiple keys you can encode in a single public key. But if it’s not everyone, then you always have an interactive setup where the different participants need to share secrets with each other that they need to securely store and I expect this to be a major hurdle in practical implementations. Probably not in something like lightning where you already have lots of interactivity; but for most wallet applications, this probably won’t fly. With n-of-n, you can do non-interactive setup, but there is interaction at signing time.

Everything becomes a merkle tree with this one special branch at the top that is mandatory. Given that it is mandatory and very cheap to use, it’s reasonable that people will optimize for it. It also gives uniformity. Every output is going to look identical now. They are all just public keys or elliptic curve points. It’s not really mandatory, you can use a non-existing public key. It’s mandatory to put a point there, but it doesn’t need to be a valid key of course. That’s a bit of a tradeoff. There are probably constructions where it is more efficient to publish this without a required public key, but this breaks the uniformity and you get less privacy. This is a tradeoff between a mild scaling advantage, or bandwidth advantage really, versus privacy.

Key aggregation you use is not part of consensus; you can do whatever down the road people come up with. Yes, wallets decide. The only requirement we have at the consensus level is that it’s a signature scheme that permits aggregation easily. Schnorr does this much more easily than ECDSA, so that’s why Schnorr is in there. I know people will say 2-party ECDSA is a thing, but it’s a couple orders of magnitude harder. Calling 2p-ECDSA a thing is strong, there are some papers. Maybe people are already massively and widely using this, specifically bitconner.

Does the pay-to-contract thing need more eyes? If you model the hash as a random oracle, then it follows trivially. The pay-to-contract scheme was introduced at the 2013 conference in San Jose by Timo Hanke. It didn’t include the public key here, which made it not a commitment and therefore it was trivially broken.

So that’s the explanation for why including merkle branches, which is that if you’re already making this taproot execution structure and Schnorr signatures, then merkle branches are literally maybe 10 lines of consensus code. It makes very big scripts scale logarithmically instead of linearly. Such an obvious win, if you’re going to change the structure anyway.

Have you analyzed how much historical scripts would have … no, I haven’t. I’ve made numbers about what we can expect for various types of multisig constructions. It’s hard to analyze because the biggest advantage is probably going to hopefully be in the way that people use the scripting system, not so much in doing the exact same thing as was possible before.

bip-taproot proposal

In the proposal, there’s a whole bunch of smaller and bigger things included. I guess I will go over those. We tried to really think about extensibility without going too far, and maybe we went too far. The reasoning here is that it’s already a couple of things; there are many more ideas, even just about script execution structure, there’s graftroot, g’root, and delegation mechanisms that people have thought about. There are incentives as an engineer to try to pack everything together in one proposal. One of the reasons is well, then you can analyze how they all interact and you can make sure all the combinations are done the most efficient ways and that they are all possible, and it also has some fungibility improvements because now you don’t create dozens of new observable script versions. I think we have to recognize that incentive exists, and also pick a tradeoff for picking some features but not everything. As the complexity of the proposal goes up, the political difficulty of convincing the ecosystem of the necessity of everything goes up. This field is evolving, so for some things maybe it’s better to wait. To compensate for having a bunch of missing things, we thought about extensibility in terms of making sure that some of the things we can imagine at least wouldn’t cause a drop in efficiency if they were done separately.

One of them is that the scripts at the bottom of the tree are all given a version number; we are calling that the leaf version. Really the reason for doing this was because we had 5 or 6 bits available in the serialization, instead of saying they have to be this value, if it’s an unknown version number then it’s an unencumbered spend. The difference between this version number, and witness version number which goes at the top, is that these are only revealed along with the script that is actually executed. You can have a tree with a whole bunch of satisfactions and they are all using boring old scripts, and one uses a new feature that exists in a future version, you’re not going to tell the network that you’re doing anything with that new version unless you reveal that script. So this allows you to plan for future upgrades ahead of time, without telling anyone or revealing anything.

There’s a few changes to the scripting language, such as to support batch validation, like making sure the Schnorr signatures and pay-to-contract (p2c) validation is all batch verifiable. This gives us a factor of 2-4x speedup when verifying all transactions or a whole block or the whole blockchain. You can aggregate millions of signatures together. The speedup goes up only logarithmically but we’re really in a use case where batch validation is the thing we– we actually do have millions of signatures, and the only thing we care about is that they are all valid. Making sure that there’s nothing in the script system that breaks that ability to batch verify. To give an example of something not batch verifiable is OP_CHECKSIGNOT… you could write a script that depends on giving an invalid signature for a given private key; it’s allowed in the scripting language even though I can’t think why you want that. So we have a rule that says all signatures that pass must be a valid one or be empty. This makes the checksig fail without making the script fail; this is a current standardness rule. Nulldummy is the checkmultisig one, I think this is nullfail.
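
For context, the batch check being referred to is roughly the standard Schnorr batch verification equation: with random weights $a_i$ picked by the verifier, the whole set of signatures $(R_i, s_i)$ under keys $P_i$ and challenges $e_i$ is accepted only if

$$\left(\sum_i a_i s_i\right) G = \sum_i a_i R_i + \sum_i a_i e_i P_i,$$

so everything collapses into one large multi-scalar multiplication instead of one verification per signature, which is where the 2-4x figure comes from.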

The ECDSA opcodes are gone and replaced with Schnorr opcodes. There’s no OP_CHECKMULTISIG anymore because it’s not batch validatable. It actually tries multiple combinations in there, and if you don’t have the information in there to say what matches with what, you can’t batch verify it. Instead, there’s CHECKSIGADD, and it increments a counter with whether the signature check succeeded. You can write the signature check as: key CHECKSIG, key CHECKSIGADD, key CHECKSIGADD, number EQUALVERIFY. This is batch validatable. There’s still a limit of 201 opcodes in the scripting language, and simulating this would cost 3 opcodes per key, and with this special opcode it’s only one. With respect to the 201 opcode limit, you’re still counting them the same? Yes, nothing has changed about that limit. CHECKSIGADD is just a lower cost. If you use v0 segwit, you have ECDSA opcodes. If you use v1 or taproot, you only have the Schnorr opcodes in there.

There’s a whole range of unusable opcodes in today’s bitcoin scripting language which automatically fail your script, which are turned into OP_SUCCESS. It’s an opcode that makes the script automatically succeed. It’s an unencumbered output again, even when encountered in an unexecuted IF branch. There’s reasons for doing so; the advantage of this is we’re no longer restricted to redefining NOP opcodes to introduce new functionality. You could always have a new leaf version to completely replace the scripting language, but if you want to replace just one of the opcodes, you don’t need the version mechanism, you can just use the OP_SUCCESS opcode and redefine it to have a new meaning. It makes the script return true, and it can have any semantics, rather than “don’t touch the stack” semantics for redefining NOPs. Does that make sense?

One more thing is public keys that start with an unknown byte are treated as an automatic success as well. This means that if a— not in the top level one, only in the scripts or leafs. The reason for this is that it lets you introduce new public key crypto systems, new sighash modes, or anything without adding another 3 opcodes for doing signature checks, instead you just encode it in the public key itself where we have a byte anyway. This is not like OP_SUCCESS, it makes just that signature check succeed. I forget the actual rationale for it. The pubkey is an argument passed to a CHECKSIG in a script. It doesn’t need to be a push, it can come from anywhere. It’s a stack element passed to the CHECKSIG opcode. How do you avoid someone changing a transaction in flight? You’re committing to the public key in script. If you’re passing it in on the stack, you have another problem, without otherwise constraining it. Oh, right. Any questions about this part?

Also, uncompressed public keys are gone because really why do we still have those.

If you’re doing a soft-fork later, you have to turn previously valid things into invalid things. So OP_SUCCESS is useful for that. When redefining a NOP opcode, the only thing you can do is not change the stack but it can observe it. You are restricted to opcodes that can’t modify the stack. This is why CHECKLOCKTIMEVERIFY leaves the data on the stack. There could be a variation of OP_SUCCESS called OP_RELATIVESUCCESS where if you hit the opcode then it’s success but otherwise you don’t. The reason why it doesn’t do that is that, you want an OP_SUCCESS that redefines the whole script language in a potential soft-fork. It lets you introduce an OP_SUCCESS that changes how you parse opcodes, which is something you do before any execution. The rule is that you iterate through all opcodes and if you encounter OP_SUCCESS without failing to parse, you don’t execute anything at all, it just succeeds. You don’t continue to parse either.

Also, there’s the part about getting rid of the sigops limit. It’s not entirely removing it. Instead of having two separate resource limitations in a block- the weight limit and the sigops limit, which leads to an annoying (if minor) miner optimization problem- there’s just an allowance for having one sigop per 50 bytes of witness. Given every signature requires 64 bytes in the witness being checked, plus bytes for the public key, plus overhead from just the input itself, this shouldn’t restrict any features but it gets rid of the two-dimensional problem. Now, what if at some point there is a reason to introduce an opcode that is very expensive, like say someone wants OP_CHECKSNARKVERIFY or something more expensive like execute something or OP_VERIFYETHEREUM or OP_JAVASCRIPT. You can imagine that because of this taproot assumption where we essentially assume that most spends are going to use the simple keypath, it might be reasonable to have fairly expensive exception clauses in your taproot leaf scripts to satisfy. Well, what do you do if such an opcode is orders of magnitude more expensive than a sigop now? You would want that to correspond to a weight increase, because of proportionality: you don’t want to go beyond more than some amount of CPU per unit of bandwidth really. The problem is that we feared introducing that would incentivize people to stuff the transaction with hundreds of zero bytes or something just to not hit this limit. Say we introduce an opcode that costs a hundred sigops; now you need a 5000 byte witness which would be a waste to the network just to stuff it up. So an idea we had is, if we had a way to amend a transaction’s weight in an unconditional way, we could just set a marker on the transaction that says compute the weight but incremented by this value. That increment should have the property that it is signed by all the keys, otherwise people could change the transaction weight in-flight, and it should also be recognizable out-of-context. Even when you don’t have the UTXO being spent, you should unconditionally be able to do this weight adjustment. For that reason, we have an annex, which is a witness element when spending a taproot output that has no consensus meaning except it goes into the signature and it is otherwise skipped. This is an area where weight increments could go. Say the transaction didn’t have a sequence number, and we wanted to do something like relative locktime; what we now call the sequence field could have been put in this annex as well. It’s essentially adding fields to an input of a transaction that the script doesn’t care about. The only consensus rule included in the taproot proposal now is that you can have an annex identified in this certain unique way with a certain prefix byte, and if it is, then it’s skipped. It lives in the witness, it’s the last witness stack element, and it has to start with byte 0x50 in hex. There’s no valid witness spend right now that can start with byte 0x50 in p2wsh or p2wpkh.
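
As a back-of-the-envelope check on that "one sigop per 50 bytes of witness" figure (numbers taken from this discussion, not from any final rule text):

def sigop_allowance(witness_bytes):
    # allowance as described above: one signature check per 50 bytes of witness
    return witness_bytes // 50

# Each Schnorr signature alone is 64 bytes, plus public keys and input overhead,
# so a script-path spend doing three signature checks already carries
# 3 * 64 = 192 bytes of signatures and gets an allowance of 192 // 50 = 3
# without any padding at all.
print(sigop_allowance(3 * 64))  # 3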

Another thing is, say we want to do another script execution change like graftroot or things in that domain. We may want to reuse tapscript inside those where the script execution semantics actually remain the same and maybe increments to the scripting language apply to both. If those would need an annex, then… I think it’s useful to think of the leaf versions as the real script version, and the version at the top is really the execution structure version and they might be independent from each other. There’s a number of execution mechanisms, and then there’s a number of script versions. This might add to the reasons why we would want to have this annex attached to the tapscripts.

I think that’s it. Oh, the transaction digest algorithm is updated. Jonas mentioned, there’s a number of sighash improvements. There’s more precomputed things to reduce the impact of large scripts. Tagged hashing, yep. Why? Couldn’t you just tag it differently if you wanted to change it? Yes, you can but for example you don’t want to introduce new tags for– you don’t want the introduction of new tags to be a common thing. Likely you want optimized implementations for them, and it increases code size. For simple things that get shared, you want to put it in the data, and you use the tags to make sure different domains don’t interact. I don’t care much about the epoch bytes.
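
For reference, the tagged-hash construction being discussed looks roughly like this (the exact tag strings and leaf serialization are an assumption here, following the drafts):

import hashlib

def tagged_hash(tag, data):
    # prefixing two copies of sha256(tag) keeps hashes from different domains
    # (different tags) from ever being confused with one another
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + data).digest()

example = tagged_hash("TapLeaf", b"<leaf serialization goes here>")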

It seems like the signer requires a lot more context. Actually, it’s less. The biggest change in the sighash is SIGHASH_ALL is now signing the amounts being spent of all inputs, rather than just the amount of the input being spent. This is because of a particular attack against hardware wallets which is exploitable today. I think this was discussed on the mailing lists, actually, a couple months ago. You need to give the hardware wallet the unspent outputs being spent… you need the scripts of the outputs being spent anyway so that the hardware wallet can… say a 1000 person coinjoin, you have one input, but now you need all the other context data? Yes, that’s a good point. I need to re-read that thread on that mailing list. This is a good point, and I hadn’t considered it. This would make PSBTs very very large before you could even consider signing. You already need the vouts and txids. You also need this data right now for computing the fee; you don’t necessarily have to compute the fee, but you certainly should and you certainly should want to.

In addition, there’s a change where you always sign the scriptpubkey being spent which protects against the sort of concern of can I mutate the sighash that was designed for a P2SH thing into a thing that is not P2SH which is possible today? You fix this by including a bit that says explicitly whether this is P2SH but to categorically remove this concern you sign the scriptpubkey being spent.

There’s a couple more pre-computed values. Any kind of variable-length data is pre-hashed, so if you have multiple checksigs it doesn’t need to be recomputed. The annex is variable length. The hash of the annex is computed once and included wherever it is needed. The number of inputs and number of outputs are pre-hashed, always. The data fed into the sighash computation has a bounded size, which is 200-something bytes.

This is the first time any kind of merkle inclusion proof is covered by consensus. You have made a change to that in that you order the pairs. Did you consider any other changes? Tagged hashes, sure. It could theoretically be any kind of accumulator, right? So what John is referring to is that, in this merkle tree we don’t care about the position of things, only that something is in there. Someone suggested, why don’t you sort the inputs to the hash function at every level? Now you don’t need to reveal to the network whether you’re going left or right; you just give the other branch and you could always combine it. This gets rid of one bit of witness data for every level, that’s not a big issue, but there’s a bit of complexity avoided about how to deal with the serialization of all those bits. Given sufficiently randomized scripts, it actually gives a tiny bit of privacy as well because you don’t leak information through the ordering of position in the tree anymore. So why not any other kind of accumulator? Do you have any suggestion? It’s semi-serious. Is there anything else out there? I think even a simple RSA accumulator, which has problems with trusted setup… but in this case, the setup is the person who owns the private key, right? It’s a private accumulator? You do a setup for every key? I did math for this at some point and concluded it only makes sense if you have more than 16 million leaves, just in terms of size and CPU I don’t want to think about even. You can have up to 4 billion leaves, actually. Above that, you will have difficulty computing your address.
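
A minimal sketch of that position-independent branch hashing (same caveat as above about the exact tags):

import hashlib

def tagged_hash(tag, data):
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

def branch_hash(a, b):
    # sort the two child hashes so no left/right bit ever needs to be revealed
    if b < a:
        a, b = b, a
    return tagged_hash("TapBranch", a + b)

def merkle_root(leaf_hash, path):
    # verification just folds each sibling hash into the running node hash
    node = leaf_hash
    for sibling in path:
        node = branch_hash(node, sibling)
    return node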

The deterministic ordering of the leaves, can you not order your leaves by likelihood? You can still do that. You want to build your tree as a huffman tree based on likelihood, and then there’s sorting that flips at various levels, but the depth of every leaf remains the same. So through that, you still leak information obviously, but that’s desirable I think.

Post-quantum security

https://twitter.com/pwuille/status/1133838842690060289

One of the interesting things about post-quantum security is that, if you have a taproot output generated from an internal public key with a known discrete log and it ever gets spent using the script path, you have a transferable proof that ECDLP is broken. Someone with an ECDLP break might choose to not use it for this, because using it would convince the world that they really do have an ECDLP break.

This isn’t specific to quantum computers. If there’s ever a substantial threat of ECDLP being broken, there is no choice but to blacklist spending using ECDLP-based constructions. So at that point, either you say well there’s just these coins and they can’t move anymore at all, or you go towards a fairly complicated post-quantum zero-knowledge proof to show this public key was derived from this seed or this hardened path. If a convention, and this is unenforceable, if a convention on the use of taproot is that the public key must always appear… the public key doesn’t have to appear there, though. You can just define that as another way to spend it. You could do that at a hard-fork; that’s the least of our concerns. The idea is that, a best case scenario is that, if ECDLP gets broken, like 10 years from now there’s sufficient research that we have confidence that there’s a fairly acceptable tradeoff for a post-quantum secure scheme, a new output type gets defined, you can spend it and all good. Then 20 years later than that, people start to say there’s actually a 500 qubit quantum computer, only a factor this much more before our funds are at risk… but by that time, pretty much everyone has moved to a post-quantum scheme anyway. In the taproot tree you put some big hash thing for a post-quantum situation, as a backup. Have a branch somewhere that is like an unspendable thing but with a hash of your private key in it? Not unspendable; you can define a Lamport-kind of signature, which nobody should want to use because it’s huge, but you could always put it in the script tree and it’s there for years and decades and nobody touches it. It can be added later. If you have this in for 10 or 20 years, all the wallets put it in but never spend from it. So if we kill ECDLP stuff, then it makes sense because wallets have already been using this. I somewhat agree, but it just has to be enough time in advance. The big problem is that in post-quantum secure schemes, we can’t do fun stuff. But that happens no matter what. Well, not necessarily. Maybe post-quantum secure schemes will be found later that allow those. That’s not crazy. There’s very minimal cost for doing this just-in-case, and if you find a better post-quantum scheme that has fun features then you can soft-fork that in later. You need another branch, but you never see it. It adds complexity, though.

If the public keys involved are always derived from some seed which we know, then you could have a zero-knowledge proof of knowledge of whatever generated that seed which will be enormous but that’s fine because most post-quantum stuff is enormous. This would be a hard-fork, but we don’t need to do anything except make sure that our private keys have known seeds that are known to wallets. Or your private key is just the hash of another private key.

Would the bip32 derivation be too crazy for the purpose you’re describing? I don’t know what a zero-knowledge proof here would look like. It’s difficult to predict what the efficiency would be. It’s going to be at least 10’s of kilobytes probably. You could use a simple post-quantum scheme like Lamport signatures.

Most wallets use one of like five libraries, so as soon as you have it in bitcoinjs you’re fine (regrettably). Part of the point of taproot is that you won’t really be able to tell which wallets people are using anymore. You have a financial incentive to have those Lamport signatures in there.

\ No newline at end of file diff --git a/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html b/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html index d2a96ba616..4d7c165b76 100644 --- a/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html +++ b/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html @@ -1,5 +1,5 @@ Signet | ₿itcoin Transcripts - \ No newline at end of file +Core dev tech

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html

https://twitter.com/kanzure/status/1136980462524608512

Introduction

I am going to talk a little bit about signet. Does anyone not know what signet is? The idea is to have a signature of the block or the previous block. The idea is that testnet is horribly broken for testing things, especially testing things for long-term. You have large reorgs on testnet. What about testnet with a less broken difficulty adjustment? Testnet is for miner testing really. One of the goals is that you want predictable unreliability and not world-shattering unreliability. Signet would be like a new series of networks. Anyone can spin up a signet; it’s a public network but only people with the private key can create blocks. Anyone can create blocks but it would be invalid if you look at the coinbase. You could fool SPV clients I guess. You could have a taproot signet and spin that up, or a Schnorr signet, or whatever you want to do.

Q&A

Q: It still does proof-of-work?

A: The blockheader is still valid, yes. It still does proof-of-work. People are lazy and they want to support this in their software, but they don’t want to go in and hack around their consensus validation software. Instead, they can keep this as is. How to store and download headers also stays the same, you don’t want to change those things.

Q: Is this regtest proof-of-work?

A: It’s like difficulty 1.

Q: Then you could easily be fooled by header changes?

A: Yes, you can. This is for testing, so you shouldn’t be connected to random nodes.

Implementations do not need to implement the signature check and it works with all existing software. You have a coinbase output that has a signature saying my consensus is configured and you configure what the scriptpubkey is for that in the scriptsig. Instead of signing the transaction you sign the block. The changes are not very big at all. It’s the same as transaction signing. There’s a new signer that takes a hash and you create a signature. The hash is the blockhash.

Q: Is your goal for people to spin up random signets, or for there to be a global one?

A: One idea is to have a reliable signet that people can use for testing. This permanent signet would have a web interface and we could ask it to double spend you or something and then it would double spend your address. All of this is outside of the proposal, this is just a tool that does it. It’s double spending as a service (DSaaS).

You have a circular dependency- it can’t be the blockhash. The best way would be to remove the witness commitment manually. In segwit, they set it to 0000 in the merkle… But you probably don’t want to do that here because you still want to sign your coinbase. You could do something like, compute the would-be blockhash if that commitment was removed, and then that’s what you sign. Zeroed out or removed, either way.

You could sign the previous block instead of the current block. You sign everything except the signature itself of course, and probably the nonce in the header. The thing with this is that you are going to have to create a signature every time, because you are going to do PoW and do one signature per nonce. So you don’t sign the nonce. You could do the signature, and then still roll the nonce. With difficulty 1, you’re only going to do one on average anyway. It’s going to be mainnet difficulty 1.

Regtest vs signet

Regtest is bad, because anyone can go and make a billion blocks. You have to get the headers and then the block and then check the signature.

What’s so bad about having the signature in the header? Everyone would have to change their consensus code to be able to parse and validate this. It would be easier if they don’t have to modify any software to use this. It could either be out of the box, or they make changes for signet. There’s little motivation to add signature verification to different tools when this is not used in production for anything. It’s literally only to test new protocols, or to test your exchange integration to be sure that you’re handling reorgs properly- but you could use regtest for that case.

You can run bitcoind enforcing signet, and you connect to your own node. You don’t really care that you’re vulnerable to– because you’re not checking, you’re only getting blocks from your own node. The same is true for regtest, but anyone else who connects to that regtest network can blow away your blocks. You could just use regtest and only trust certain nodes, which means block relay would be from a single node running the thing.

You don’t need to protect a signet network though. On signet, you’re still connected to a node that is validating. A node that is validating on regtest will see the reorg and see that it is still valid and consensus-valid, unless you do whitelist-only for regtest, which everyone would have to setup. Regtest is context-sensitive. Signet users still need to validate signatures, you connect to bitcoind running signet. So you do have to use the signet software, but they don’t require other changes to their other software stacks if the new header format breaks something. You opt into a particular signet based on its scriptsig. It doesn’t matter what software you run internally, but you use bitcoind as an edge router.

What about having a regular header, and a separate set of signatures? It’s the segwit trick. How many changes is Bitcoin Core going to accept for this signet testing-only thing? It’s super simple if it’s just “a signature in a certain place”. If you don’t like it, you don’t have to use it. Well, if it’s going to be part of Bitcoin Core then that means we’re maintaining it.

regtest has no proof-of-work? No, it has proof-of-work but it’s one bit of work. You have to set it to zero. Half the time, you get it on the first try.

If your goal is to have 10 minute blocks, you don’t need to change the difficulty rules at all. You can just use the mainnet rules. And then the signer, if you have a high-profile signet somewhere, they have 10 ASICs available, they can choose a higher difficulty if they want and it will have that security. The difficulty will be exactly what the signer chooses or can produce. He can also choose minimal and it’s less secure… The signer can have a cronjob and make a minimum-difficulty block at that time. You just mine the whole time, and it gets you to some difficulty.

How are you going to do reorg on demand if the difficulty is exactly what they can do? Well, it will take 10-20 minutes to make the reorg. That’s fine. It would be nice for faster reorgs. 10 minutes is only for difficulty adjustment.

Have a chainparam serialization and make it easy to send that out. That’s the pull request that someone was thinking about– it’s a custom chain like regtest but you can change all the chainparams to whatever you want, like a custom genesis or whatever. A configure arg or command line parameter that has the file for chainparams.

Applications

It’s superior in every way to testnet, I think. The only thing testnet is useful for is mining testing and testing miner equipment. If you want really fast blocks and really fast reorgs, then use testnet.

If you are testing protocols like eltoo protocols across many different people, then regtest is way too fragile for that, and testnet is also way too fragile for that if you want to get anything productive done. But you still want to be able to do things like double spending as a service, because eltoo needs to be robust enough to be able to handle expected reorgs but not necessarily earth-shattering reorgs. Another application is that, as an exchange, I always wanted my customers to join regtest and test with my arbitrary reorgs.

We can take bip-taproot and just slap it in there. We could either just run the branch itself on signet… or the signer can enforce other consensus rules and now those consensus rules are active there. Taproot can be a soft-fork and you can just say this soft-fork is enabled on this network, sure. During the development of segwit, there were a few different test networks for segwit called segnet. Not a typo, there was segnet and now there is signet. Nobody remembers segnet except Pieter.

It’s also useful for testing wallet software. Say an exchange running a semi-private signet. It’s extremely common to visit exchanges and you look at their wallet code, and they aren’t even checking for reorgs at all. So here’s an easy way for them to check their work against reorgs. It could be very educational.

Implementation

The pull request for signet is in limbo. I am planning on going back to it. There’s an older implementation that modifies the blockheaders. I am going to replace that with something that doesn’t do that. It doesn’t seem too hard to do.

\ No newline at end of file diff --git a/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html b/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html index c7442819ea..2e86f8ba86 100644 --- a/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html +++ b/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html @@ -1,6 +1,6 @@ Blind statechains: UTXO transfer with a blind signing server | ₿itcoin Transcripts \ No newline at end of file +Core dev tech

https://twitter.com/kanzure/status/1136992734953299970

“Formalizing Blind Statechains as a minimalistic blind signing server” https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html

overview: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39

statechains paper: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf

previous transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/

Introduction

I am going to talk casually through the whole statechains thing. If you want to interject, please do. I’ll get started. The current idea is to make it completely blind. It’s blinded statechains. The goal is to allow people to transfer a UTXO without changing anything on-chain. The concept I use to describe it is a blind signing server. The idea is that the server only has two functions: you can generate a new key for a user, which is kind of like generating a new keychain and it’s a linear chain that only goes in one direction and you can’t split up the coins, and you can request from the server a blind signature and point to the next user that gets to request the next signature, which is how it gets to be a chain.

The heavy lifting is done by the user. The server only signs things. There could be a user with a single key, but that key could be a threshold signature and be a federation. Instead of 2-of-3, it could be 3-of-5 plus one more person that always has to sign.

Example

sigUserB(blindedMessageB, userC) is the user putting a signature on a blinded message and on the next user that will get to request the next message. The blinded message gets signed by the server, with the key. It returns blindSignatureB. Money goes from A to B. You repeat it to get from C to D by using sigUserC(blindedMessageC, userD). It’s a simple server where you’re creating– it’s just a chain of signatures. It’s an ECC linked list, basically.
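
A toy sketch of the server’s two operations as described here (the blind-signature crypto is stubbed out, and all names are made up for illustration):

class BlindSigningServer:
    def __init__(self):
        self.owner_of = {}   # server key id -> user currently allowed to request
        self.log = []        # every request gets published so users can audit it

    def new_key(self, user):
        key_id = len(self.owner_of)          # stand-in for generating a real server key
        self.owner_of[key_id] = user
        return key_id

    def request_signature(self, key_id, user, blinded_msg, next_user):
        # only the current owner may request, and the right to request moves on
        assert self.owner_of[key_id] == user, "not the current owner"
        self.owner_of[key_id] = next_user
        self.log.append((key_id, user, blinded_msg, next_user))
        return blind_sign(key_id, blinded_msg)   # hypothetical blind-signing primitive

def blind_sign(key_id, blinded_msg):
    raise NotImplementedError("stand-in for the actual blind Schnorr signing")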

It’s like transferring signing rights. The key is what they request a signature with, and you get to transfer who gets to request the next signature. The server signs something on behalf of one user, then on behalf of another user.

Joint key ownership with the server

If the second user creates another key, and we call it a transitory key because we’re going to give it to someone else. You can use musig to create another key which is a 2-of-2 multisig between A and X. In order to sign with AX and utilizing this server, you request a blind signature from A, then you complete the signature by signing with this key X. If you want to transfer the ownership of key AX, you give X privkey to someone and tell the server to transfer the signing rights.

A more concrete example

On the statechain, say we have user B who controls what the server gets to sign. User B requests an eltoo signature on this off-chain transaction. Say the money goes to AX or after a timeout to B. So this is basically like lightning in the sense that if there’s a timeout then the money goes back to B. So now that B has that guarantee, he now sends money on the bitcoin blockchain to this. He already has the signature, so he’s guaranteed to be able to redeem the money on-chain without the help of the server. If you want to transfer the money, you request another signature from the server. This is another eltoo update transaction with another state number. Instead of B getting to sign, it’s now user C who gets to sign. We can keep going like this. You change the eltoo update transactions, and basically this is how you transfer money off-chain.

In eltoo, you can spend an output with a transaction. Even if this goes to the blockchain, you can still send the updated transaction to the blockchain. This is because of NOINPUT and the sequence number enforcement and the timelocks in eltoo.

It is still reactive security, yes. If you don’t pay attention, it’s the same as lightning. You have to pay attention to the blockchain and know when someone is broadcasting a transaction to close the channel.

Practical results

We can change UTXO ownership without changing the key. The server doesn’t know X, so it has no full control. If the server disappears, you can redeem on-chain. Since you’re doing blind signatures over everything, the server doesn’t know it’s signing anything like bitcoin. It just puts a blind signature on something, it doesn’t verify any transaction data or anything. It’s not aware of it. It’s up to users to unblind and verify these transactions.

Q: What does the server verify before making a blind signature?

A: It doesn’t verify anything. Give it a message, and it signs it. The user unblinds the signatures and can choose to not accept a transaction. This is similar to petertodd’s client-side validation work.

Q: Doesn’t the server only need to enforce that it will not create a signature for B after it was transferred to C?

A: Yes, it will only sign once for the user. It enforces who it signs for, and that it only signs once.

Q: Does it need to enforce that the sequence numbers increase?

A: The receiver checks that.

Q: But she doesn’t know what the previous sequence numbers were.

A: All the blinded signatures that happened before, that the server signed, will go to the next user, the receiver.

Q: So you have a linearly growing ownership package?

A: Yes, that’s the same with petertodd’s client-side validation proposals. The unblinding is the same secret that has been passed on from one user to the next. You can hash the private key and those are secrets you use for blinding. You need two secrets for blinding. You can pass on the unblinded versions of the transactions, that might be enough. It kind of depends on what you want to do. The blinded signatures could come from the server or the users could pass them on. Maybe you prefer the server to keep the blinded messages, you download them and unblind them. You pass on X and what you get from the server. Either method works.

A user asks for a signature, and says the receiver gets it next time. When that next user asks for a blind signature, then the server knows the chain of transfers. That’s correct. But it doesn’t know the identity of the public key owners. But there is definitely history. The path is known, yes. It doesn’t know what UTXO it is, though. But it does know that if it is a UTXO, then this next recipient is now the current owner. It’s a one-time token. The receiver could be the same person as the spender, the server wouldn’t really know.

You could make the path not known if you use ecash tokens. You exchange the token for a signature and get a new token back, like chaumian ecash. Okay, we will talk about that.

With eltoo you can do transaction cut-through to the final UTXO or whoever has the final transaction. All of this is blinded. The statechain just sees a chain of related user requests but it doesn’t know what.

The role of the server

You’re trusting the server to only cooperate with the latest owner. The server promises to only cooperate with the latest owner. You’re relying on the server to do this. The server is a federation, it’s a Schnorr threshold signature using musig or something. It must publish all blind signature requests (the statechain). This way people could audit the server and see that it never signed twice. Make sure the server signs only for user requests, and make sure the server never signs twice for a user.

This is a public blockchain or public statechains. It’s centralized so not a big deal to just use HTTPS, json-rpc, whatever.

Atomicity via adaptor signatures

Once you see a signature, you learn a secret. The server has to give either all of the signatures or none of the signatures. If it tries to give half, then that doesn’t work because you would be able to complete the other signatures. We use this to make it possible to transfer multiple coins on multiples of these statechains. If you have a chain with one bitcoin and another chain with one bitcoin and another one with two bitcoin, you can swap them. These atomic swaps can also be done across coins.

For swapping to smaller values… the server has all the signatures from everybody, except for the adaptor secrets. Once it receives all the secrets from everybody, it can complete all the signatures and can publish all of them. If they choose to only publish half of them, then the users also have their adaptor signatures.
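
The scalar relationship behind “once you see a signature, you learn a secret” is, very loosely (a toy sketch that ignores nonces and the rest of the Schnorr equation):

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def complete(adaptor_s, secret_t):
    # the holder of the secret turns the pre-signature into a valid signature
    return (adaptor_s + secret_t) % N

def extract_secret(full_s, adaptor_s):
    # anyone holding the pre-signature learns the secret from the published signature
    return (full_s - adaptor_s) % N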

Comparison to federated sidechains

It’s non-divisible, for full UTXOs only. It’s nearly the same trust model as federated sidechains. It’s non-custodial because of the 2-of-2 and off-chain eltoo. A watchtower or full node is needed to watch for close transactions. It’s not a money transmitter because it’s just blind signing which could be anything. Don’t blame me, basically. It’s still a federation. Lightning is safer. If the federation really tried, they could get your private key by like doing the swap and they are one of the users. If they get one of the transient keys then they can get your money.

Here, you can send people bitcoin on a statechain- they would need to trust the statechain, and they would need to like bitcoin, but there’s no encumbrance and it’s not like lightning in that aspect.

Worst case scenarios

Server obtains a bunch of transitory keys, unblinds the signatures, notices the bitcoin transactions, proceeds to provably steal the coins, all the other users (keys not stolen) withdraw on-chain as a result. But this is harmless without the transitory keys. Court order to freeze or confiscate coins, they can’t really comply.

Microtransactions

You can’t send anything smaller than an economically viable UTXO. They would never be able to redeem it on-chain. So you’re limited by transaction fees on-chain really. As a statechain, you want to charge fees, and this is needed when swapping between currencies. There will be some fractional amounts when swapping between altcoins. There needs to be some method to pay, that is smaller than the UTXOs that you’re transferring. You could give the statechain one of these UTXOs, well, you could pay with lightning or a credit card. Or satellite API with chaumian tokens for payments, I guess that’s not deployed yet.

Lightning on statechains: eltoo and channel factories

If you have a channel factory, you could add and remove participants. Eltoo supports any number of participants. Doesn’t that make it a factory? The idea of a factory is you have a secondary protocol running in addition to eltoo. But in eltoo it’s this flat thing where you can rearrange funds between individual participants and you don’t need this second layer of second layers all the way down. We should have called it turtle.

The server can be inside a statechain itself without knowing.

Channel updated together with multi atomic swap. Uncooperative close similar to regular eltoo. Statechains use a simplified version of eltoo, where you only have update transactions and you have settlement transactions another way. If you want to rebalance your channel, you can add a bitcoin into the channel by swapping and then moving over the channel. All of this is possible. We just talked about channel factories to add/remove members too.

Lightning is limited in throughput; you have routes and you can only send so much money through it. It’s divisible and you can send fractional amounts no problem. In statechains, there’s infinite throughput, but it isn’t divisible. If you combine the two, assuming you accept the trust assumptions, you have a perfect mix of being able to send anything frictionlessly.

Nobody has to put up money to support the protocol. You could have a fixed fee. The eltoo fees are up to the users. In lightning, the only ones you pay are the intermediates and here there are no intermediates. You onboard them into your own group and pay them directly. You all have to be online to do that. This doesn’t apply without statechains; it allows us to have dynamic membership of instances of eltoo, which is really cool.

You could make the server a party to the lightning channel and then pay them. Yes, sure. Let’s assume we trust the server, then we’re good. If one of the parties disappears and stops cooperating, you’re forced on-chain. So you do add to your on-chain risk as you add more members. That’s the tradeoff. But that’s always the case, even with just eltoo, you sort of have to know they are online when it comes to signing time. If you add the server to your eltoo channel then they know the UTXO and it’s still kind of blind but they have more information. You could have one channel to the server, expose that one channel to them and pay them fees through that channel. But not through the other channels.

As the number of members in the statechain increases, cooperation becomes more expensive so maybe they want to earn fees for that.

If you trust some servers, and another user trusts different servers in a different federation, is that something possible? You can’t upgrade the security, you can only downgrade it. You can have threshold signatures to do this. But we could have an intermediate step that ends up on-chain, if you do it on-chain that’s fine. We would need to replay that on-chain anyway, right. Well, that’s unfortunate.

Use cases

You could do off-chain value transfer, lightning channels (balancing), betting channels (using multisig or discreet log contracts), or non-fungible RGB tokens (using single use seals). You use pay-to-contract to put a colored coin into a UTXO and you put the UTXO on the statechain, and now you can move this non-fungible token off-chain. Kind of solves that.

There are also non-bitcoin usecases that I haven’t really thought of; the server is just a dumb blind signing server that doesn’t know what it’s doing. It’s likely that it is used for other things, and it’s better if that’s the case because then you could argue that you’re not a money transmitter or something. At the very least, it’s a timestamp server. So we can timestamp stuff.

It does require more trust because the off-chain transaction concept is not something you can emulate without blockchain. For non-bitcoin usecases, it’s weird to think about, but up until now if someone had a private key then you would assume he couldn’t give it to someone else without both of them having that, but the server lets them do that and the assumption breaks. If you see a private key, it can be given to someone else. The ownership can change by moving it through the statechain.

Further topics

You could use hardware security modules to transfer transitory keys, like attestation. You have a private key inside of an HSM or hardware wallet. You have another hardware device and you want to transfer the private key over. As long as the private key doesn’t go out of the device, you can do off-chain transfer of money. This is like teechan, yeah. HSMs are terrible, but the thing is, we’re transferring the transitory key in a way that is even less secure than that. If you’re adding an HSM then it’s more secure, and if the HSM breaks then you’re just back to the model we have now. The user could collude with the federation to steal money… Everyone would have their own hardware wallet device, and my device talks to your device, and my transitory key is inside of it, and it never comes out, or if it does come out then it refuses to transfer to your hardware device. This requires trust in the HSM of course. You could run a program by the server that attests that it doesn’t sign old states. I don’t know if that would be equivalent or better security, but yes that’s a good point. You can at least share the trust- split the trust into the hardware developer and the server, instead of just trusting the server.

What if the opendime had a transitory key, and it could sign. You could physically hand it off. Yeah let me think about that. I’m not sure. I think it should work as long as it can do partially-signed bitcoin transactions. I don’t see how it doesn’t work, so that’s interesting. Very literally, that’s the only copy of the key if it’s an opendime. I’m sure you could design something like that with opendime-like properties where there’s some security guarantees around nobody having actually seen the private key. The blinded information can be in that chip as well, and maybe a verification header. This adds an additional assumption that, if you don’t go online at all, there’s an additional thing. Yeah just trust me, plug this into USB. I’m giving you money, just trust me… sure, that’s what’s happening.

You could also do graftroot withdrawal, which allows redeeming forks or an ETF. Instead of withdrawing from the statechain by– the cooperative withdrawal would be a blind signature where the money just goes to you on-chain, without the eltoo nonsense and you throw that out. But if you can withdraw through graftroot, then assuming we had graftroot, assuming after graftroot some hard-fork occurs then you now have a graftroot key with which you could get all the hard-fork coins out. Because the assumption is there’s some kind of replay protection but graftroot is the same. Your graftroot key will work on all hard-forked chains but you need to create a different transaction on that other chain. If you withdraw through graftroot, you can withdraw from all the chains. Assuming they support graftroot. The assumption is that it’s a bitcoin fork and graftroot is already there and they just copy those features and soft-forks.

This could also be used for an ETF. With an ETF, the problem with a hard-fork is which coins are you going to be given, well with graftroot you could be given all the coins without knowing the hard-forks or how many. You might have one utxo with more hard-forks and have a different value or something. But this can be the case right now anyway.

An open problem is, you could verify only the history of the coins you own or receive. But you need some kind of guarantee that there aren’t two histories. So you need to succinctly store and relay statechain history. You need to be able to know all the chains that exist and know that yours is uniquely… one key from the server that is only signing this. How do you prove a negative though? You throw a merkle tree in there. There are various ways. There was a proposal about preventing double spending by forcing signing with the same k twice, then if they ever sign something twice they lose money or something. I’m not sure if you can detect that they signed with the same k. The punishment doesn’t matter, it’s already there- if they sign twice, the reputation is shattered. They are already punished, we just need to detect it. One way would be to know all the keys that are being signed with and get a list of them and make sure you’re only in there once. Another way is a sparse merkle tree, which I haven’t looked at.

At best you can make fraud proofs easier to make and prove, but why would they want to give enough data to prove that a fraud occurred. How do you know there’s not two histories? Well, on-chain there can only be one history. Once people find out, the whole reputation is shattered. Off-chain you can have inflation, but once you go on-chain then only one of the histories gets written. The server signs a specific history of the statechain and gives it to you. If you have the whole list of keys it ever gave out, and your key is only in there once, I think that’s enough evidence.

See also

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/statechains/

\ No newline at end of file diff --git a/bitcoin-core-dev-tech/2019-06/index.xml b/bitcoin-core-dev-tech/2019-06/index.xml index 392b74af25..da8845c891 100644 --- a/bitcoin-core-dev-tech/2019-06/index.xml +++ b/bitcoin-core-dev-tech/2019-06/index.xml @@ -1,6 +1,6 @@ Bitcoin Core Dev Tech 2019 on ₿itcoin Transcriptshttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/Recent content in Bitcoin Core Dev Tech 2019 on ₿itcoin TranscriptsHugo -- gohugo.ioenFri, 07 Jun 2019 00:00:00 +0000AssumeUTXOhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/https://twitter.com/kanzure/status/1137008648620838912 Why assumeutxo assumeutxo is a spiritual continuation of assumevalid. Why do we want to do this in the first place? At the moment, it takes hours and days to do initial block download. Various projects in the community have been implementing meassures to speed this up. Casa I think bundles datadir with their nodes. Other projects like btcpay have various ways of bundling this up and signing things with gpg keys and these solutions are not quite half-baked but they are probably not desirable either.Blind statechains: UTXO transfer with a blind signing serverhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html overview: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 statechains paper: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf previous transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ @@ -10,18 +10,18 @@ https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-Design-Philosophy &ldquo;Elligator Squared: Uniform Points on Elliptic Curves of Prime Order as Uniform Random Strings&rdquo; https://eprint.iacr.org/2014/043 Previous talks https://btctranscripts.com/scalingbitcoin/milan-2016/bip151-peer-encryption/ https://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/ -Introduction This proposal has been in progress for years. Many ideas from sipa and gmaxwell went into bip151. Years ago I decided to try to move this forward. There is bip151 that again most of the ideas are not from myself but come from sipa and gmaxwell. The original proposal was withdrawn because we figured out ways to do it better.Signethttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Introduction This proposal has been in progress for years. Many ideas from sipa and gmaxwell went into bip151. Years ago I decided to try to move this forward. There is bip151 that again most of the ideas are not from myself but come from sipa and gmaxwell. 
The original proposal was withdrawn because we figured out ways to do it better.Signethttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html https://twitter.com/kanzure/status/1136980462524608512 Introduction I am going to talk a little bit about signet. Does anyone not know what signet is? The idea is to have a signature of the block or the previous block. The idea is that testnet is horribly broken for testing things, especially testing things for long-term. You have large reorgs on testnet. What about testnet with a less broken difficulty adjustment? Testnet is for miner testing really.General discussion on SIGHASH_NOINPUT, OP_CHECKSIGFROMSTACK, and OP_SECURETHEBAGhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/SIGHASH_NOINPUT, ANYPREVOUT, OP_CHECKSIGFROMSTACK, OP_CHECKOUTPUTSHASHVERIFY, and OP_SECURETHEBAG https://twitter.com/kanzure/status/1136636856093876225 There&rsquo;s apparently some political messaging around OP_SECURETHEBAG and &ldquo;secure the bag&rdquo; might be an Andrew Yang thing. -SIGHASH_NOINPUT A bunch of us are familiar with NOINPUT. Does anyone need an explainer? What&rsquo;s the difference from the original NOINPUT and the new one? NOINPUT is kind of scary to at least some people. If we just do NOINPUT, does that start causing problems in bitcoin?Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +SIGHASH_NOINPUT A bunch of us are familiar with NOINPUT. Does anyone need an explainer? What&rsquo;s the difference from the original NOINPUT and the new one? NOINPUT is kind of scary to at least some people. If we just do NOINPUT, does that start causing problems in bitcoin?Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introduction There&rsquo;s not much new to talk about. Unclear about CODESEPARATOR. You want to make it a consensus rule that transactions can&rsquo;t be larger than 100 kb. No reactions to that? Alright. Fine, we&rsquo;re doing it. Let&rsquo;s do it. Does everyone know what this proposal is? Validation time for any block&ndash; we were lazy about fixing this. Segwit was a first step to fixing this, by giving people a way to do this in a more efficient way.Maintainers view of the Bitcoin Core projecthttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-maintainers/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-maintainers/https://twitter.com/kanzure/status/1136568307992158208 How do the maintainers think or feel everything is going? Are there any frustrations? Could contributors help eliminate these frustrations? 
That&rsquo;s all I have. It would be good to have better oversight or overview about who is working in what direction, to be more efficient. Sometimes I have seen people working on the same thing, and both make a similar pull request with a lot of overlap. This is more of a coordination issue.Taproothttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ previously: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/bitcoin-developers-miners-meeting-2016/cali2016/index.html b/bitcoin-developers-miners-meeting-2016/cali2016/index.html index f0799e0c47..b7f0312ab5 100644 --- a/bitcoin-developers-miners-meeting-2016/cali2016/index.html +++ b/bitcoin-developers-miners-meeting-2016/cali2016/index.html @@ -11,4 +11,4 @@ You can update and continue proof of work.

Changing from proof of work to proof of stake changes the economics of the system; all the rules change and it will impact everything. The next point is the construction of Ethereum: it’s built so that a node at the current point of the blockchain has to possess and process all the smart contracts. And smart contracts written in Solidity consume a lot of CPU power. If you get a lot of smart contracts, the blockchain gets bigger and you would not be able to download the blockchain and synchronize new nodes. It requires a lot of CPU power. And over time, if the smart contracts get bigger, a regular node would not be able to process all the smart contracts. At that point you would somehow have to perform another hard-fork and split the network. Each of these points is practically opening Pandora’s box. If they don’t solve it, it’s basically the death of Ethereum. If they solve it, meaning with a hard-fork, it means changing the rules of the whole system, and the users will be …

All the users who already decided to use Ethereum will be impacted and will lose trust at this point. The further forward in time you go, the more users you will lose. Ethereum is too young to brave this… They are not thinking about the users, because they are testing the network as they go and modifying the rules.

And also rolling back transactions, when it’s money, is quite risky. Someone has to decide: is it a fraudulent transaction or not? It also changes a lot of rules, impacts the users, and creates dissatisfaction. Some users will not be satisfied.

I was surprised about the Ethereum process, that there wasn’t a document. They have their own equivalent of BIPs, and they didn’t use it.

They were trying to fix the problem as fast as possible.

Yes, but they still had time to go. It wasn’t last minute. They had some time.

A separate issue with this is that their response time to the issue seemed slow to me. I don’t quite understand why. When there were network incidents in the Bitcoin network in the past, I think that the response time of the Bitcoin community was much faster. I saw basically no response from the Ethereum technical community other than telling exchanges to stop trading. For days after, funds were being drained out of The DAO and they had not given patches to the miners to block the transactions. Why didn’t they reorg in the first hour? There were some simple things that they could have done. There were some things that they could have done well. I think they took wrong action. But they did take action. Credit where credit is due. Their hard-fork was bilateral, meaning that the new hard-fork chain would reject blocks created by Ethereum Classic, and Ethereum Classic would of course reject blocks from the new chain. The prior hard-fork proposals for bitcoin like 109, 101, 102, none of those were bilateral hard-forks. What this means is that if Ethereum Classic gets more total work than Ethereum, which to the market looks like it could possibly happen, it will not reorg. You will not have Ethereum Classic get more work and then the ethereum other chain gets erased. This is because of the bilateral fork. If they did a fork like the BIP 101 fork, and you had a situation where the classic chain got more work than the other chain, then it would have erased the chain history from the other one. Presumably they would have made a fork to erase the history and bring it back. It was probably easier for them to do it this way, but they did it well. The replay situation I think is quite interesting and has a parallel in Bitcoin. A year and a half ago, there was a discussion in the #bitcoin-dev chat channel when Gavin and Hearn were talking about their fork proposals. Some people began to discuss how to do replay prevention between forks of bitcoin. This made Gavin mad because he rejected the idea that another chain would exist at all after the fork. We had an extensive discussion about replay. I think we did that better than Ethereum because bitcoin has less replay risk, inherently. In bitcoin, we would not have as many replay problems if we had a fork like this. In addition to this, this was something we were thinking about when we started to think about hard-forks in Bitcoin.

Why is that?

The UTXO model is why we would be better off. Ethereum has accounts instead, so in Bitcoin the anti-replay countermeasure is more inherent. With accounts, if you make a transaction it can be spent equally on both chains, because the same accounts exist on both. If bitcoin forked like ethereum did, there would be some replay. You could change things in the transaction format in the hard-fork, and use other tricks to reduce the chance of replay. Ethereum developers knew about replay. They in fact apparently had a conversation with Coinbase about replay before the hard-fork, and their position was that there would not be two surviving chains, so don’t worry about replay. So I think the failure mode was a lack of sufficient paranoia, being overly confident. Many things in Bitcoin we over-engineer a little bit, but this over-engineering is a good thing because we can’t predict all the failure modes.

We cannot fix things in two seconds. It takes us some time.

But for example, replay is still not fixed in Ethereum world even though it is causing many problems. I am sure it will get fixed in some numbers of weeks or months. Replay could even be an attack against another chain, so some users might consider it a good thing. It’s only a good thing if the other chain actually dies.

One point I would like to make, as an interesting thought experiment, is that it’s important for replay protection to still allow transactions signed before the fork. There was discussion about changing the transaction type while allowing the prior transaction types. Particularly in bitcoin you need to do that. One challenge is that in bitcoin you might have locktime transactions that are pre-signed which we don’t want to invalidate. Some Bitcoin ATM vendors have some timelocked coins so that they can recover them in the event of the ATM being stolen, and they can’t do this any other way.

So the interesting wrinkle in this is that if you have nlocktime transactions, and there is a hard-fork construction that allows for a new transaction type and allows the prior ones, then the hard-fork transaction format would not be replayable. Well, there was a suggestion in that earlier discussion to prevent replay without any need to change any transaction formats, but it was somewhat complicated. Did it require nesting or something like that? You would require miners to start producing 0-value TXouts that anyone could spend, and then transactions would pick those up and spend them. This is a lot of code. You don’t have to require them, they could do this voluntarily. Well, it’s complicated, it would work, and it doesn’t have those downsides. When we had those discussions and realized it was complicated, we realized how much work it would be. It’s easy to do this as a soft-fork with, for instance, a signature hashing flag that says that there should be on the stack a previous block hash that is earlier than today minus 144 blocks, which would be equivalent to that other proposal with zero-value outputs. The challenge we had with that is that the only people who were interested in solving that problem at the time were the people who were not interested in hard-fork arguments. We talked through some of this, but Gavin was really angry that we were talking about that; he saw it as an effort to undermine a hard-fork, but it wasn’t.
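To make the anyone-can-spend output idea a bit more concrete, here is a minimal, purely illustrative Python sketch (not from the meeting; the `OutPoint` and `is_replayable` names are invented for the example). The point is simply that a transaction can only be replayed on the other chain if every output it spends also exists there, so mixing in a zero-value output that only post-fork miners create makes the transaction chain-specific.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OutPoint:
    """Illustrative (txid, vout) pair identifying a spendable output."""
    txid: str
    vout: int

def is_replayable(tx_inputs, utxo_set_other_chain):
    """A transaction can only be replayed on the other chain if every
    outpoint it spends also exists in that chain's UTXO set."""
    return all(op in utxo_set_other_chain for op in tx_inputs)

# Outputs that exist on both sides of the fork (pre-fork coins).
shared_utxos = {OutPoint("aa" * 32, 0)}

# A zero-value "anyone can spend" output produced only by post-fork miners;
# it never exists on the legacy chain, so spending it breaks replay.
fork_only_marker = OutPoint("bb" * 32, 0)

replayable_tx = [OutPoint("aa" * 32, 0)]
protected_tx = [OutPoint("aa" * 32, 0), fork_only_marker]

print(is_replayable(replayable_tx, shared_utxos))  # True: spends only shared outputs
print(is_replayable(protected_tx, shared_utxos))   # False: the marker input is unknown there
```

This is why the transcript calls it "a lot of code" in practice: wallets would have to locate and include such outputs, and miners would have to produce them, even though the underlying idea is this simple.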

Seems like we have not translated in a while. They are reading this transcript, live. I would be curious to hear their perspectives on any of that discussion.

Whether the minority chain should be allowed to exist or not. We see that in Ethereum Vitalik said that it is fine for two chains to coexist. But some might think that the minority chain should not exist at all.

I think a big problem you have with that attitude is that the moment you have any disagreement it does look like a 51% attack against a minority chain. At least with the obvious way to do it with a soft-fork. At worst, you could end up with legal problems.

Me personally, I would be dubious about participating in a forced fork like that. We should distinguish between should exist and should be prevented. It is bad in Ethereum that the other chain exists, but that doesn’t mean… any particular action to prevent it, such an action might not be valid.

Because our difficulty adjustment is relatively slow, if we were to say that a hard-fork would only happen after 99% miner support, and assuming that was achieved by 99% of miners supporting it rather than by a forced soft-fork to get there, you would be in a situation where it would take so long for the difficulty to adjust on the minority chain that it would not be usable over there.

The challenge with this point is that, in the case of ethereum, it’s not just that they moved, which was a problem too; the go-ethereum client would not synchronize a shorter chain even if it’s the only one with valid blocks. Ethereum Classic had to blacklist the hard-fork block immediately. Ethereum Classic also forked at that point, though it was a soft-fork. If you started a pre-fork Ethereum client and it only happened to connect to Classic nodes, it wouldn’t know at all. It’s a complex set of variables for that scenario. You can’t count on retarget to [ … ] – on a minority side, under bitcoin rules, it’s very difficult to make trades because you’re waiting such a long time. You’re in a position where it’s less likely than ethereum for people to start making trades and creating a viable currency. One thing that we learned is that there’s a large economic incentive for traders to encourage the splitting of assets. Poloniex alone in the first several hours after opening up ETC made $200k in trading fees. Traders have made money on the volatility as well. If nothing else occurs, an incident like this is an opportunity for exchanges and traders to potentially make a lot of money. If they need to take some actions to make that outcome occur, they might do so, even if the market cap of the currency goes down. Ethereum currency holders lost out on this, but the exchanges and traders made some money, although not universally. BTC-E and Coinbase are probably not doing so great, although others might have better outcomes so far. It might be easy to get into a situation where you can do an anti-replay mechanism. One exchange might be in a situation where they lose a ton of money if the fork has value. Who is responsible for the lack of care that led to this situation? Is it the exchanges’ fault for not protecting from replay? Is it the developers’ fault for not foreseeing this as a problem? In the U.S. legal system, the way to answer that is by court cases. I think that in this case, because it is designed mostly by the Ethereum team, it’s not like open source; they might have liability because legally Ethereum looks more like a commercial project, because they are selling a piece like a commercial company. So technically for a lawyer, it’s a commercial project (all open source is commercial). The specific economics around ethereum, like the pre-mine, maybe make a stronger argument there.

I think we are learning as we go, but I have noticed that only the native English speakers are talking. I would encourage everyone if you want to say something, please jump in and make yourself heard. Or if you feel like people are going too quickly, ask them to pause and stop.

In such a fork, in bitcoin, if that happens, lots of bitcoin miners will start to protect the viability of the majority chain. They will volunteer to attack the minority chain.

So one of the interesting things in Ethereum is that immediately after the hard fork occurred they declared the other side the loser, but both sides did that.

I wouldn’t go and assume that it would be easy to get away with 51% attacks against minority chains. And they might start taking legal action against anyone here. I would never do that with my name attached. Pretty good chance that I would end up in jail for that. We would all be liable to some extent.

Maybe if you mine an empty block on that chain, then basically it has no value. Otherwise they will compete against each other long-term for value. If we just work together to technically somehow make sure the minority chain cannot exist. There were some proposals made such that the minority chain could not exist, or such that the minority would have to hard-fork themselves, and that is fine.

Technically this is easy to do. This is called a forced fork. The problem is that there’s a good chance, no matter how you did that, that if there was someone in the minority who said “no, you’re attacking us, you’re preventing us from transacting”, that could then result in legal action. It’s such a grey area. We have no idea how courts would rule on this. Anyone with their names on this publicly, especially in western countries… The other thing is that we do know how to increase capacity through soft-fork methods, so we should prefer that. But we are getting off-topic. Ok.

The bigger point that we need to be careful about is that in the Ethereum community, they argued that this is a safety mechanism in Ethereum, that if there’s a disagreement about the rules, then life can go on and yes the economic effects of two competing chains are bad – I agree about that although they don’t all agree — that’s a balancing protection in the system perhaps. If the majority of Ethereum miners want to do something bad, as some argued that they did, that the network can continue on without them. And it’s a way for people to feel more comfortable with their Ethereum investments, and that if the worst happens, then they could at least stay on in a diluted split system. I don’t know if it’s possible, in bitcoin, to really make it so that the minority fork cannot happen in Bitcoin. Even if we could make it impossible, we would lose an argument for Bitcoin’s long-term safety. We should be conservative in how we talk about and think about that. The fact that the miners are counterbalanced by the users enforcing the rules is an important part of Bitcoin’s security model.

In addition to that, not just the long-term effects of how that affects the pseudo-political view of how the system works, any effort to try to create a situation where the minority or one particular chain cannot continue to exist, there will be many people that strongly hold that view and they will fight to make sure that chain can continue. I know there have been proposals to make it less likely that a minority chain can continue, I have not seen anything that makes it absolutely impossible to continue. If you attack a minority chain, then basically bitcoin is no longer permissionless. This also plays poorly with concerns about mining centralization. If the system is balanced, it’s okay to say the other forces are balancing, but if you need permission from other miners to run another chain, then the centralization of mining is much more of a concern.

In the next few months maybe a fork is likely to happen. Say I have only 1% of the hashrate and I decide to fork away. What would happen to that chain? It’s hard to say. And do exchanges need to care about that? They can profit from that, independent of whether that 1% chain is technically secure. If some random person with 1% makes a fork, and an exchange doesn’t know about it, and they didn’t protect themselves against replay, do they have liability for any random person that makes their own fork? Should there be a standard mechanism for preventing replay in bitcoin hard-forks? Even if we don’t want to do a hard-fork, we should recommend an anti-replay mechanism. The transactions should commit to a blockhash, which is dangerous to do with the most recent blockhash because then reorgs cause problems. But if it’s a blockhash from 1 day ago, like 144 blocks, then it does close the time window in which a replay attack could happen, and the chances are that this wouldn’t happen for situations that might show up in court. We should properly fix replay in bitcoin, so that if someone wanted to create a fork, there wouldn’t be a replay issue.
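As a rough sketch of that blockhash-commitment idea, assuming a hypothetical rule where each transaction commits to a block hash that must be buried at least 144 blocks deep on the chain validating it (the function and variable names below are invented for illustration; this is not real Bitcoin Core code):

```python
def commitment_is_valid(committed_hash: str, chain: list, min_depth: int = 144) -> bool:
    """chain is a list of block hashes, oldest first; the committed hash must
    appear in the chain and be buried by at least min_depth blocks."""
    if committed_hash not in chain:
        return False                       # the other fork never had this block
    depth = len(chain) - 1 - chain.index(committed_hash)
    return depth >= min_depth              # very recent hashes are rejected to avoid reorg problems

# Two forks share history up to block height 200, then diverge.
common = [f"b{i}" for i in range(200)]
chain_a = common + [f"a{i}" for i in range(200)]
chain_b = common + [f"c{i}" for i in range(200)]

tx_commitment = "a10"                              # a post-fork block hash on chain A, buried >144 blocks
print(commitment_is_valid(tx_commitment, chain_a))  # True on chain A
print(commitment_is_valid(tx_commitment, chain_b))  # False on chain B: the replay is rejected
```

The 144-block depth requirement is exactly the trade-off described above: right after the split, commitments still point at shared pre-fork blocks and replay remains possible, but roughly a day later every new transaction commits to a chain-specific block and the replay window closes.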

But couldn’t the person who is forking choose not to use that mechanism? Well, that’s on them. But it’s not on them. [cross-talk]

We can’t force people to be sensible. We could do some work to encourage them to be sensible. I think at a bare minimum, on a cultural level, it’s good to establish the precedent that exchanges, in the event of a fork, have an obligation to give coins on the old fork back. This is more of a cultural norm, of course. One of the things we have to be mindful about, in the infrastructure of bitcoin, is that when we do things that change how bitcoin works, it imposes costs on all the users of bitcoin. And they have their own business concerns. If we impose too much cost on them too quickly, then they will respond in dumb ways like not listening to us, adopting strange altcoins, ignoring us, etc.

Can we let the miners talk a little bit?

I think that taking a blockchain built up over multiple years and splitting it into another chain that continues to live independently creates a lot of potential issues. Maybe we should take the point of view that if someone likes an alternative chain, they should just start a fresh blockchain, like a new starting point, and then continue. Splitting the results of the previous years … you could never prevent exchanges from trading on this split. You could never prevent the users, because you mostly rely on users’ belief in the money, and on one side of the blockchain they have the private keys. There are a lot of users. This should never happen. Most likely the current blockchain doesn’t fit your requirements, and you would just like to create something.

Okay. Miners, could you talk a little bit about your perspective?

I don’t think it’s reasonable to assume that exchanges can give back the other coin. The interesting thing with the Ethereum Classic scenario is that the thing they didn’t give back was the original coin. Someone created something new, they jumped to the new thing, and didn’t give back the original asset. Some of the exchanges have given it back already, like Poloniex. The ones that aren’t giving it back, that’s the situation they are in.

It’s like two copies of the lottery ticket, but the first to show the lottery ticket gets the prize. It’s possible right now for Coinbase to give it back. Even though they lost it, they can buy it back. Let’s say the price went back up, well then there’s no way. They lost it because of replay in this case. If we’re looking at what the ultimate lessons from this are: what should be the standard of care from developers? Some exchanges profited from the volatility from this. Think about the victims, though. Users, other exchanges, other businesses, TheDAO hacker; it’s a tremendous number of systems that you have to change very quickly if you’re an exchange in a very fast moving and confusing situation like this. I’m not even talking about legal responsibility. What about an ethical responsibility of the developers? What should be the future standard of care that we expect for cryptocurrency?

I would widen that from developers to infrastructure. Together, developers and miners are part of the infrastructure that make Bitcoin go, and to a lesser extent the exchanges and infrastructure providers. They are the ones that make the bitcoin currency usable, we have a duty of care to make the system stable and make sure it upholds its value. We have to make sure that nobody loses money.

For example, the client should avoid the cases where you’re entering the wrong address and you’re able to send the money for nothing, like destroying it. The point here is that Ethereum doesn’t do that either.

A standard or duty of care: the community should learn from history, from incidents like this. They should ask the question: are the developers moving too fast? Are there reasons for this change, and are those reasons good?

Many people would see TheDAO hack reversal as a social good. And that good could outweigh the potential harm. However, I would much rather see duty of care as much more important for Bitcoin. I would see this as an immutable ledger. It’s a social good that outweighs these harms.

This reminds me of one of the announcements on the p2pfoundation site from Satoshi which was that the advantage of cryptography is that it gives you security that cannot be taken back no matter what no matter the reason. (See link for quote)

http://p2pfoundation.ning.com/forum/topics/bitcoin-open-source

It has clearly poisoned the other chain. A lot of the community has found it less interesting because it no longer lives up to what other people joined the community to do. That was not analyzed when people said “just return TheDAO coins”. You cannot control the markets. A lot of users lost money in the Ethereum hard-fork. Some of them lost trust in blockchain technology, maybe forever.

Other aspects of duty of care could include the vigorousness of technical review, having a specification, promises made to users when a system is not ready. Clear communication to users.

Has ethereum scared users away from cryptocurrency in general? I think it has some, at least. Did we, the bitcoin community, do enough to warn people in advance of these events with ethereum? If we had done more, would it have scared away fewer people because “we told you so”? I don’t know. I have kept my criticisms of ethereum relatively quiet, because I do not benefit from criticizing their efforts. If I say that ethereum has a bad design, then some people would say hooray, and the ethereum people would throw more rocks. There are enough people that are throwing rocks at me already anyway.

There has been a lot of schadenfreude in this community. Damage in ethereum hurts us all to a very significant level. But at the same time I could have warned people more. I think we are all guilty of this. When it pertains to legal risks, if they do things that could complicate things for us… Also keep in mind that this situation is not over yet. It’s a competitive situation; it’s like blaming one side or the other side. Only independent experts, independent technical experts who understand how blockchain works, might mark the potential risks for the technology. But from our side, it doesn’t look like a … even if we provide any indication, this could be interpreted by many sides, but neither party would be just blaming.

The hard thing is that experts don’t exist. Asking someone to create, like, a certification or qualification, … it would also be harmful for there to be certification. This is a great risk in many industries. If cryptocurrency as a whole doesn’t mature and behave responsibly, the result will be that either we fail entirely, or there will be regulatory pressure that creates enormous cost for all of us. We don’t want that. In the past, I have reached out to competing cryptocurrencies and given them advice about how they might be screwing things up. Some have listened, others have not. There are places where cryptocurrency as a whole needs to get its act together so that we don’t end up getting regulated in bad ways.

This kind of risk is a particularly bad one. We can’t function in an environment where developers as a whole can’t operate in the public. If we were in a position where we had to all be pseudonymous, anonymous like Satoshi and so on, and if developers would be politically and legally blamed for their actions, nothing would get done. Notice how outside the room it does not say “bitcoin meeting”. FBI knows. I bet your plane ticket says.

We need to let the miners talk. Anyone else who hasn’t been speaking?

I would actually say that regarding the eth hard fork, yes, the situation is still ongoing, but even once it is completed, the analysis should keep going for some time. We haven’t heard all the stories about how various companies were affected by the hard fork, but that doesn’t mean they don’t exist. There should be a post-mortem.

If we wait long enough they will probably do another hard-fork anyway.

It’s important to understand that all the confusion and losses and broken promises were entirely foreseeable. And many people had foreseen these problems. The whole Ethereum debacle is really bad for Bitcoin because it shows that some cryptocurrencies are not trustable. For the general public, it’s hard for them to make the distinction at all. The public doesn’t know the difference.

There’s a question of product differentiation that arises here. I think the Ethereum split is an opportunity for bitcoin to be product-differentiated from ethereum. But I don’t know what that product message is exactly. I don’t want to say something like “bitcoin never has any hard-forks ever” because I can’t make that happen anyway. How about “Bitcoin has a vigorous duty of care”?

Once we have AI and machines manufacturing mining rigs and setting up automated facilities, and AI writing code, and we take out the human element, then bitcoin is really immutable. The ethereum mission statement was basically exactly that: removing the human element. We could say that we’re doing the same thing, and we are; script validation is law. If you redefine law, then that’s meaningless. This is profound for me, because the ethereum website had the strongest language and it’s still there. The ethereum website and ethereum README said … they had the strongest statement that code is law, much stronger than anything on bitcoin.org, and there was a contract saying code is law. A lesson we should take from this is that saying these things doesn’t make them true. And it doesn’t even persuade public opinion, at least not enough to prevent a mess.

I don’t know what it is that causes this expectation. Ultimately it is the expectation of the community: if a community assumes that a hard-fork, for whatever purpose, is hard, then it is hard. If people assume they are not hard, then they are not difficult. In the case of ethereum, or TheDAO, there was explicit language and an explicit contract that said the opposite. Compare it to the Bitcoin event where MtGox happened, where the relative economic impact was about the same. I cannot remember anyone even suggesting a similar approach. So clearly at the time in Bitcoin there was an expectation difference. Was there just one address to undo in the MtGox problem? Well yes, that was a difference. Would a hard-fork have been considered if there was a single address in the MtGox problem? Well, there was a concern about MtGox destroying 3000 BTC because they sent it to a zero-length script. There was discussion much later on where some random person on IRC discussed a hard-fork to recover those 3000 BTC. It never got any traction.

It’s a conflict of interest when there are some insiders that have a vested interest in the outcome of an event … if you have a strong separation of concerns, where the different players are not crossing over; well, we all have kind of a vested interest. It’s different because of TheDAO. The Ethereum Foundation has something like 20% of all ether. It gives them a lot of weight. With MtGox, if we reversed the MtGox stuff, I would have made a profit from the MtGox insolvency coins, but I never suggested that because it would be wrong to do.

A lot of the fintech stuff I have worked on, the data to do a reversal simply does not exist in public. Fungibility actually boosts immutability, because if you can’t target the coins to take back, then you can’t even make a proposal to undo it. At minimum, it requires a lot of upfront effort to do that change. If I wanted to reverse a MtGox scenario, I would spend a couple of months petitioning users to collect data about the MtGox scenario, and perhaps this would be stretched out even longer, which would be helpful for preventing a hard-fork.

Talking about what we can learn: The expectation regarding the ethereum hard-fork, we are talking about making it hard to do reversals in Bitcoin in such a case as if it was always the right thing. And it might not always be the case. But I think what could happen is that if you build a system where it is nearly impossible to reverse in case of a mistake being made, people are more diligent about what people do with their money. This is the outcome that you want. You want people to use the system in a way where the people are sufficiently diligent such that there is no demand for reversal in the first place. This could be done by education, for example.

If you want a reversible system, you could always build it on a second layer on top of the immutable layer. This is not technically hard. I don’t think that people in the community realize this. There’s a special element in TheDAO where it was described as “perfectly safe”, whereas nobody in the Bitcoin world described MtGox as “perfectly safe”. TheDAO was described as perfectly safe, and if you didn’t like it you could take out the coins. This is something we should be mindful about for future smart contract work.

Both TheDAO and MtGox were perceived as “too big to fail” to some extent. But yes, MtGox failed.

I had an idea for an underhanded smart contract contest – compete to come up with the best safe-looking script that actually steals money. This would help increase awareness regarding how difficult it is to create safe smart contracts. I stopped working on this after TheDAO hack because it would be in poor taste.

At the Financial Crypto conference, there was a short talk about a particular script in Ethereum that was a provable pyramid scheme. It was not only a pyramid scheme, it was advertised as one, and it was also provably a pyramid scheme, and yet people still sent money to it. Yes, it’s gambling. It’s the ethereum investment thesis. Some people want to play that game. People like the redistribution-of-wealth game. It’s a game. But there are probably many things that the bitcoin community could do to increase awareness of various risks, which might reduce the risk of thinking things like “the network needs to reverse to save TheDAO”. “Of course it was full of bugs”. It would have played out differently in the Ethereum world because of that. Well, with the exception of the Ethereum Foundation conflict of interest.

We saw the poll before the Ethereum Foundation hard-fork. There was 10-20% against it, but we see right now the market cap of ETC is about 20% of the market cap of the ETHF currency. We can see this disagreement between two parties, because one party says we should have “code is law” and the other party says “since this DAO hack is not acceptable and lots of people lost money, we should save it because they are saying save it”. It’s a difference of opinion between two parties. That’s why ETC can gain momentum. Right now, for Bitcoin, we’re in a different situation: increase the block size or not. Do people in Core still agree, 10-20%, is that still significant?

There are a few points to make here. 20% of ethereum people voted for it and 80% voted against… but it was only 5% that voted at all. And then 20% of the 5% was… so it’s very unclear what the actual numbers would have been.

…..

With bitcoin, again, we can try to know. But keep in mind that people saw a hard-fork go horribly wrong. It went wrong in a way where a lot of people invested in Bitcoin would consider it going really wrong. Perhaps to Ethereum investors they might have other expectations. I don’t think I could convince a lot of Bitcoiners to do a hard-fork, anymore, after this.

The outcome for Ethereum is nowhere near as bad as it would be for Bitcoin, which is actually used by people. Ethereum is not really used in any retail or p2p money transfer cross-border stuff right now. People are buying it on an exchange and then they put it into TheDAO or make test contracts that don’t have real value. Whereas in Bitcoin there is real money to be lost by mistakes.

But arguing the other way, in Bitcoin there is more incentive to get the details right. Ethereum failed even the most basic due diligence in terms of even specifying what they were doing. In Bitcoin land, we wouldn’t fail at that part. In Bitcoin, what we do matters a lot more than what they do in Ethereum today. But you have a point at least.

It’s still difficult to get accurate views on any of these. I’m not sure we have any clever way to poll the community that Ethereum didn’t try before the hard-fork. They spent a few days of polling, whereas in Bitcoin we would take many months or years to gather the election data. I don’t think we have a clever way to poll the community.

Part of the problem is that opinions change, and they are often conditional, and it’s hard to collect all the conditions of those opinions. If you start with one idea, then change it, other people might no longer find those opinions to be representative of their beliefs.

It’s difficult to reason about dynamic systems that involve these non-linear incentive relationships and we just need to target stability. In the bitcoin community, we favor soft-forks because we can build safety mechanisms where it is less likely to result in two chains. We can do this with consensus building as well. I think that in a world where everyone wants a hard-fork to happen, and they all agree, then it’s not a big deal. But it’s a grey area between everyone and not everyone, and how do you determine this?

The act of the Ethereum Foundation releasing the hard-fork was partly the cause of the trouble. It seemed official that there was a vote and everything was decided. And then miners/pools piled on. And in some cases, miners voted against it, but their pool didn’t want to be the one mining the minority chain, and ignored the user vote. That happened and it’s not representative. That’s the Buterin Effect.

The official air of it gave it more traction than it would have had organically. If it had been done by saying “here’s another client, over here, to download”, then it may have taken months and months, if ever, for the fork to activate, because not enough people would have cared. Pools were just caught up in this out-of-control effect that just happens.

If that one pool didn’t switch to the other fork, they would have made a lot of Ethereum Classic coins. This is not unique to this kind of situation. It’s the same in almost every vote on anything, like elections in any major country. People are often heavily influenced by who they are told who will win. Exit polling, even, can have major effect on this. If someone is on an exit poll showing the other way, it could swing the other way.

The value of money is in consensus. So the Buterin Effect here is way stronger here than in other political systems. It makes sense to go with the majority because otherwise my money might not be worth very much. This could cause a big amplification effect. It looked for a while that the Ethereum Classic market price would go up, it would be more profitable to mine, some hashrate would join, the price would go up, then more hashrate would join, and at some point it reached some equilibrium where it would stop going up. The hashrate got to the point where … there were these run-away effects. It’s psychological. It’s a psychological cause for price increase. It caused people to explore the supply-demand curve in a way that moved the price up. Some people were saying “oh it has 12% the hashrate, so why is it only 10% of the market cap therefore it is underpriced”.

I thought it was interesting that there were probably a lot of miners that had no opinion either way, but they mined it because they could make profit. Yep, that’s what happened. That’s how it works. For a future possible split, it’s something to consider.

There is a lot of profit to be made by stirring the pot and inciting people to go off on a different fork, or maybe inciting people to do things that might not be in the best interest of the system. As infrastructure providers of bitcoin, we should be careful to avoid stirring the pot, and we should be careful not to put ourselves in a position where it is incredibly profitable to cause things to happen.

Pretty graph of hash rate and price of ETC / ETH: http://slacknation.github.io/medium/13/13.html

…Social aspects are incredibly important to understanding whether the social motivation for a contentious hard fork exists in the first place. It’s interesting because it is Hobbesian. You can have different social constructs.

… and by avoiding that, you can create new social systems that are more fair, etc. And ethereum is trying to build a society where their justifications are related to social norms. However you go about it, you have the political justification, like a “majority” or “majority belief”, that the coins should be taken back. And we need to be cognizant that throughout the history and future history of cryptocurrency, this is going to be a defining factor in terms of justification for doing this.

This idea of, like, what’s going to come next after that: we’re already increasing the block size with segwit. That’s the reality of it. With segwit, the block size increase that we’re doing is a one-time thing. The underlying technical detail is that we cannot repeat segwit again, although we get a boost from Schnorr signatures in the near future. The specific technique only works once. With hard-forks, you could attempt to do that indefinitely and create a new coin every single day. There is a trade-off between decentralization and size, and doing a hard-fork from that standpoint could be quite controversial depending on the social precedent. It’s hard to avoid that level of social controversy if you’re trying to do a block size increase hard-fork. Equally, just the notion that anyone could do a hard-fork at all would be controversial, and it’s tricky to make it less controversial.

To bring it back to ethereum, since ethereum classic has such a lower gas rate, could we get the gas rate raised up to infinity and test out a fully scaled up chain? That could be really interesting, because then we don’t have to test bitcoin with transaction fees only. They didn’t fork their testnet, in ethereum, they had a testnet but they didn’t fork it first. Has ethereum forked their testnet?

They set up a continuous integration network called Hive to do testing. You have to do live network testing. You learn different things in different environments. None of them teach you things that you learn in the production network.

It’s more of an opportunity for security researchers to say “aha! I broke your test network”, you know instead of saying “aha! I broke your coin” which would be decidedly worse.

You can answer some economic questions with a test network. One of the things I wanted Gavin to do with his block size increase proposal, was to backdate the date of the fork sufficiently far back so that we could see whether anyone bothers to run test nodes. You would need a couple hundred gigs of hard drive space to even test this. He said, in private, “of course nobody would run it”. But he didn’t think that was important. He had a point: the fact that nobody would run testnet, with lots of 8 MB blocks, doesn’t mean that nobody would run bitcoin with 8 MB blocks.

During the Ethereum hard-fork, exchanges played a very important role in the success of Ethereum Classic. After the fork, we did mine a number of blocks on the old chain, until we gave up and nobody gave us any …… but Ethereum Classic succeeded once a major exchange supported the chain. If we fork bitcoin, I think the situation would maybe be similar.

By the way, it is very interesting that the exchange price was based on bitcoin. The exchange rate is based on the bitcoin price. So which currency do you use to set up the price? Kraken pairs are all bitcoin-based now.

I have talked with Ethereum developers about this. They are all getting paid in bitcoin. Their prices are denominated in bitcoin? The way that they get paid is by transferring bitcoin. They have USD denominated salaries, but it’s paid in BTC.

Some of this ethereum activity has impacted the bitcoin price. On the exchanges, everyone who made an investment in ethereum, … the volume of fiat for ethereum is very small. People exchange it for bitcoin and then exchange the bitcoin for fiat, for dollars or euros, to use. This is the reason why the bitcoin price has been under pressure all this time; the selling volume is higher. So the people just leaving ethereum, at least some of them [are driving down the price].

The suggestion that they gave up on mining Ethereum Classic because it wasn’t necessarily as tradeable, to me that raises the question: is it the obligation of exchanges to always, by default, continue listing the pre-fork coin? Even with 99% of …. Well, remember there’s a lot of money to be made by listing all the things. But let’s say you have a preference for one. At minimum, because you bought in, you should deliver.

Say you are a gold depository. And there’s a new thing called lead. And you’re switching to lead. Yeah you should switch to lead. But at minimum you should deliver the gold bars.

A better example is stock splits. It’s more clear that you should give both parts back after a stock split. It’s property. As long as there are some blocks on the other chain, it’s hard to argue in a court of law that you could get away without delivering both coin properties.

They could have waited a month with some best effort to hold coins to see how things play out.

Have we beat this topic to death yet?

So one thing to say about ethereum is that what ethereum did well is that they made lots of positive media and PR while things were failing in the background. They continued to tout success while everything was failing in the background. In bitcoin, we tend to say things like “I don’t like the way that’s going” and have our arguments in public. It might be good for bitcoin confidence if we were more positive, like, let’s say, companies working together to do joint positive marketing. That kind of PR and marketing in the bitcoin world has the ability to be self-fulfilling. With a little bit more of it, the ethereum side could have maybe avoided some of this if they were better at it. At a technical level, we could be very upfront with each other.

You have to be careful about how far you go with this. You have to build trust with external communities. They have to know that when they are talking with you, they are getting the real deal. Previously at a geophysics startup, they liked to bring the investors to the engineers who talk frankly about what’s working and not working, because we’re talking about a 12 year project with $150 billion poured into it. The investors want to hear that something real is happening for the next year, not something that is being filtered. I think there is a balance.

You have to look at other industries where nobody would dream of letting engineers speak in public, which could backfire for those industries. It’s not necessarily the way to go for some industries.

There will be a reception in 20 minutes. We can keep talking. We can eat and talk.

Maybe this kind of hard-fork is very difficult to prevent in this kind of cryptocurrency system. The idea of a currency is given by the consent of the people who participate in the system. In a hard-fork there might be three groups of people. Two kinds might be: I will persist with A, and the other party will persist with B, and there will be some people who say both parties have won and now there are two coins. In that situation, the split of the community will be very hard to prevent. Before the ethereum hard-fork, lots of people still had the illusion that okay, maybe one is the genuine ethereum… actually that’s a lie, it will not survive. But after the hard-fork we just see that there’s actually no genuine chain. And for the miners, to kill the minority chain, is it the right thing to do? Any other hard-fork is now more difficult as a result of the ethereum hard-fork.

Ethereum’s hard-fork, for example, is very controversial. It’s against their own advertisement that code is law. In bitcoin, increasing the block size is a much less important rule; it’s just a technical rule that was written by Satoshi in 2010. It’s technical, but it’s not related to the philosophy or the value of the system. I think this means that maybe in the future any hard-fork will be contentious no matter what.

Can you explain why you don’t think it’s tied to value? Well, I think he meant moral value. Why do you think the block size change is not tied to moral value? If we did a hard-fork to unlimited block size, I would quit and I would be done. The block size at a technical level determines the decentralization and whether people can participate in running nodes.

But perhaps we should look at this in terms of fungibility. If there was a hypothetical break in fungibility, then people would find bitcoin uninteresting and move on. Block size can break fungibility, and that’s why it’s controversial.

The fact that you’re hard-forking for a block size change obviously presupposes that you’re hard-forking. But we have a soft-fork for a block size increase. The fork itself, …. looking at hard-forks from the point of view of right now, in the future any hard-fork will be contentious, because people can always argue “oh, this fork I disagree with”, and both sides persist.

Did you say soft-fork as well?

If we activate a soft-fork with 95% of hashrate, and someone says “this soft-fork I don’t agree with”, they would have to hard-fork. Maybe some people don’t accept a soft-fork.

There’s a common argument that with a soft-fork there is less opportunity for people to disagree. There is also no problem with disagreeing. You are free to. Let’s let it play out – say that a soft-fork is blocked. 6% don’t run it. What happens? It doesn’t activate.

Then it just doesn’t happen. Then say this situation persists, and there are people who are angry and they want the additional capacity and features. Then I think that situation is similar to what happens if miners are blocking some kinds of transactions, because they are effectively blocking the segwit transactions, which people cannot make without that change. So how would users respond if miners were blocking and censoring these transactions? How would the stakeholders respond? I don’t think we know.

There have been guesses that people have made about this in the past. Perhaps there would be a hard-fork or something. But we don’t know.

This is not about disagreeing with soft-forks. The solution we have for this is fungibility. If transactions are not identifiable, then this problem goes away entirely. That’s sort of an aside, though.

Going back to the other point though, maybe after Ethereum all hard-forks are controversial, perhaps. I hope that is not true. One thing I proposed when hard-forks came up in Bitcoin was that this block size stuff, whether you like it or not, has controversy. There are changes that we could do that would be as uncontroversial as possible; if we were going to talk about a hard-fork in bitcoin, then we should first get experience with a change that is purely technical. I think that block size is not a purely technical measure. Perhaps a change that everyone is capable of agreeing with, for a first hard-fork.

Because of this split in Ethereum, it sets a precedent for Bitcoin for the possible future of hard-forks. For bitcoin, with a hard-fork there can only be two options. One side must accept multiple chains, multiple attacks from multiple vectors, or we just stay on the main chain and try to kill the forks and the minority chains. There can only be two possibilities. Either you accept that there will be multiple chains, or you accept that you can attack.

Attacks could be stopped, because you change the PoW on the other chain until the attack is not successful. And botnets might have PoW to attack anyway on a new chain.

I have 10% in this direction and you – the American government says I like this direction. Companies could lead their own chains because of the hashing power they have. …

… price increase, and it may have more than 50% of hashrate, to become the main chain.

I don’t think we want multiple chains. I think that would be bad for bitcoin.

His point is that if you don’t want multiple chains, you have to attack the other chains. No, this is untrue. There are ways to prevent those attacks.

In the past, people thought that a hard-fork is good because there is A and B. In a soft-fork, … if something is not controversial, maybe it’s better to use a soft-fork.

Hold on. I reject the premise that a hard-fork necessarily results in multiple chains. Many hard-forks would result in multiple chains. There are technical ways to prevent this. Anyone can make a new hard-fork. It’s always a possibility, even if no change is made, like we could walk out of this room with 10 hard-forks of Bitcoin in the next 10 minutes.

I don’t think that’s the real spectrum of choices. Attacks will not be successful, if people want another chain to exist, if you use technical means to prevent those attacks from being successful.

I think it is unnecessarily pessimistic to assume that any hard-fork in the future will result in two chains or be contentious. It’s hypothetically possible to have a carefully-prepared plan with all stakeholders somehow in agreement, over some time period sufficient to satisfy a duty of care, to make some change that has no philosophical or economic effect, or very little; those things shouldn’t result in any more drama than we could create without a triggering factor.

Do you think exchanges and traders profiting from it, would they create drama? But they can do that now.

It requires users to believe that, at any point, there would be some point, …. right?

.. because of the… anyone who wants to start a new coin business could control a certain portion of the hashrate, fork off the current chain and perform a hard-fork to lead the chain in the other direction that they would like it to go. And there could be only two ways to deal with that: either we have to accept that fork or we have to suppress it.

No, no we’ve been saying strongly that there are technical measures that can be taken to prevent there from being that choice between follow and attack.

PoW change. Ok stop, one possibility is that there is no fork created. One proposal that was made as a hard-fork change: in every bitcoin block header there are 32 bits, in the field that contains the prior block hash, which are always zero because of the minimum difficulty. There was a proposal to hard-fork the system such that the values that have to be zero right now could be any value. This would give you extra nonce space in the header, which would give you things like the asicboost optimization without having to use version space. It would also allow more efficient mining hardware. It would change nothing else about the operation of the system. If this was done, then I think it would just happen and there would not be multiple chains. However, the asicboost patents make this political, whereas earlier it would not have been political.
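A toy illustration of that extra-nonce idea, under the assumption described above that the 32 always-zero bits of the prev-hash field would become free for miners to use (the numbers and the `links_to` function are invented for this sketch; real headers also store the hash as 32 little-endian bytes, which this sketch ignores):

```python
# Because every valid block hash has 32 leading zero bits due to the minimum
# difficulty, those bits of the prev-hash field carry no information today.
# The proposal lets miners put arbitrary data there and has validators ignore
# those bits when checking that a block links to its parent.

LOW_224_MASK = (1 << 224) - 1   # keeps the low 224 bits, clears the top 32

def links_to(prev_field: int, actual_prev_hash: int) -> bool:
    """Hypothetical post-fork rule: only the low 224 bits must match the
    real previous block hash; the top 32 bits are free extra-nonce space."""
    return (prev_field & LOW_224_MASK) == (actual_prev_hash & LOW_224_MASK)

prev_hash = 0x00000000_1234 << 100            # toy "hash" whose top 32 bits are zero
extra_nonce = 0xDEADBEEF << 224               # miner stuffs data into the zero bits
print(links_to(prev_hash | extra_nonce, prev_hash))  # True: still links to the same parent
```

Under this rule the nonce space in the header grows by 32 bits, which is the whole point of the proposal as described in the discussion; nothing else about block validation changes.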

If someone has the economic motivation to create a fork, then the choice fundamentally is to allow the fork or to disallow the fork. That’s true for altcoins as well, though. Effectively a hard-fork is an altcoin, but it’s pre-distributed in the same way that bitcoin is distributed. This has existed in the past in bitcoin, like Clams. It’s not a problem. I suspect that if you did a hard-fork like that one, with a reasonable amount of notice, and an ability to get strong indications from the stakeholders and coin holders, say 1 bit in nsequence, and if you saw that for 90% of the transactions for the past 12 months, then I could see that yeah, the old version of bitcoin might be traded, but you would be in a situation where the social consensus would be to mostly ignore it. I think that has a much higher chance, in comparison to ethereum with short notice for dubious reasons.

We had the … which did eventually fork off older nodes in March 2013; no fork was created in actuality, because it was in the software for many months beforehand, it was uncontroversial, there were announcements, and there were fixes that you could apply to old clients. Occasionally there are old clients that fork off, and they try to get back on. So yeah….

As I said, I would not define that as a hard-fork. From a technical point of view it was a hard-fork rule change, but there was no actual fork created. I think usually when we say hard-fork, we mean the rule change; we don’t mean an actual fork. When people talk about a hard-fork in a colloquial sense, they are referring to the other thing. Thus this is inherently difficult to talk about in a clear way.

I don’t think hashrate attacks are technically an option that makes sense. Plus there are the legal options. We accept that altcoins exist. It’s no different; it is the same. Mastercoin…. Clams is a better example here.

In ethereum, Ethereum Classic says they are the real coin, and Ethereum Fork says they are the real coin. Using the word “altcoin” might be politically loaded, though. It’s unfortunate.

I am sorry for using this word again, but I would say that the reason why the ethereum community split in this way ultimately goes back to a violation of a duty of care. If they had been adequately careful in how they approached all their decisions and promises, then they would have had more universal support of the community in one direction or the other. I don’t know, that is the fundamental issue. Well, had they done that sufficiently well, then they wouldn’t have done a TheDAO bailout.

8 conversations at the same time. Need more cephalopod arms. More chains needed.

A failure of the duty of care in Ethereum’s case was not being careful enough in engineering to prevent the disaster from happening. There’s more than that. They were tempted to violate their own principles. There was also a conflict of interest in that they were invested in TheDAO as well. The outcome could have been different without the conflict of interest from their TheDAO investments.

What he was saying was that people in this room are angels. Don’t assume devils don’t exist. You can’t assume some actors will not fork to a minority chain to disrupt the market.

EVAs don’t exist ????

Could you explain, he’s making a point about game theory, or, I am not following. Yes, game theory.

I think there’s a misunderstanding. I don’t think anyone is saying that we think it’s bad to attack and so attacks won’t happen. The argument I was making was that attacks to suppress an altcoin won’t be successful, because they can change their technical rules to suppress the attacks using technical defenses. Political and market attacks might be much more successful. 51% attack an altcoin and it will adapt; it will change its PoW, it will deploy technical defenses.

I think it is important to emphasize that in the same way as bitcoin, altcoins can deploy technical defenses as well.

The history of this is that there was an altcoin that was killed because someone said it was dead. That’s mostly what he did. He was also 90% of the hashrate on this. But he said it was dead and that stopped it.

I have to point out there was an example of an altcoin that was attacked by a 99% hashrate attack. It had like a 72 block time warping reorg. They made zero changes and survived for 2 years before changing the PoW. I think this was feathercoin.

There was another one where the developers attacked it. There is lots of obscure history here that is hard to document in a timely manner. Have you ever played a game called Illuminati? Let’s bust out the board game and we can all play Illuminati the board game.

Short reception break soon or now.

Mining software

(Back from break)

We could continue our discussion tomorrow, if we want to. Any closing remarks? What about problems with Bitcoin Core? I have never been a miner, so for miners, what has been the biggest problem? What are the issues that need to get fixed?

For people who are programming Bitcoin, we use the software, and sometimes we don’t like a problem and work on it to fix it and make it better. Some developers are mining, but definitely not at the scale that the professional miners are doing. Who in here is mining? Who has ever mined? Who has set up Core in order to mine with some actual hardware? Does that include failing to do so? I just wanted to make the point that it may be surprising how few developers are setting up miners in the current environment. It fucking sucks. What could we do to make this better?

There was a period of time when development was actively ignoring mining considerations. It was sort of assumed that the mining industry was big and commercially successful and could take care of itself. As a result, developer attention went to other areas, and now I am not happy with the state of mining configuration in Bitcoin Core. But I am sure the issues that pool operators and miners encounter go beyond just the configuration of mining, and their usage probably differs from ours. I would love to hear about any issues.

For a miner, he can just set up a Core software instance and then start mining. But one problem that cannot be resolved [that arises] is the orphan rate.

Hmm. Hm. Mm. Mmm. Okay. I see.

So there are mining farms. They want to set up mining pools. But the orphan rate and the stability issues in the system forced them to give up on that. So they have to join mining pools to mine blocks.

What stability issues? Besides being a pain in the ass?

bitcoind hangs sometimes.

When were you doing this?

Maybe at least two years ago.

That may have been, for example, HTTP connection errors. RPC errors. So these have been fixed. We fixed that particular issue since then.

If you can improve one thing: it’s very hard to prioritize transactions and set up our own manual rules to select transactions. The prioritizations (rules?) do not persist if I restart bitcoind. Maybe provide a better way to customize the options here. I would also like to see bitcoind connect to the outside world using the normal network and another network at the same time.

You can do that. Maybe there needs to be documentation about this.

If you set a proxy… ? It’s -onion. It’s not -proxy, it’s called -onion. There are a bunch of different options. The behavior you want is possible, but we need better documentation to explain how; better documentation would be helpful for you here.

It will use the normal network for normal connections and the tor proxy for tor: to connect to tor nodes it will use tor, and for normal nodes it will use the normal network. I want to have my bitcoin nodes behind that, connecting only to some known trusted nodes.

He wants -connect combined with -onion. Ah, it doesn’t do that right now, but it could be done. He wants to connect to a local node, but then all other…. This makes the connection much more reliable. You still have a slow link, but it’s a link.

((Developers mumble amongst themselves. Solution found. Can fix.))
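A minimal, hedged bitcoin.conf sketch of the setup being discussed (the addresses are placeholders, and as noted above the -connect plus -onion combination was not fully supported at the time):

# use a local Tor SOCKS5 proxy only for reaching .onion peers
onion=127.0.0.1:9050
# clearnet peers keep using the normal network (no -proxy set)
# restrict outbound connections to a few known, trusted nodes
connect=203.0.113.5:8333
connect=exampleonionaddress.onion:8333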

The request to persist transactions… prioritization? Persist the mempool, persist prioritization, and a better prioritization API in general.
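For reference, a hedged example of the existing in-memory prioritization interface being discussed (the txid is a placeholder; this is the 0.12/0.13-era RPC signature, which later versions changed):

# treat the transaction as if it paid 100,000 satoshis more fee when building block templates
bitcoin-cli prioritisetransaction "<txid>" 0 100000
# the adjustment is held only in memory, which is why it is lost when bitcoind restarts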

Also, it would be better to have more proxy settings, where one could be a backup for another. Our nodes have to be set up outside of China on a remote server. We have to set up two tor proxies to make them much more reliable. With only one proxy, if it fails, then everything fails.

Yes, I think we can do that. You need the ability to add proxies without restarting? Do you need the ability to add proxies without restarting? Well, we will send you an email and get your requirements. I think this should be no problem to do everything you just asked for.

This would make bitcoind as reliable as possible while keeping the IP secret.

When the networking library gets merged it would be good to have a tool that analyzes the network config to make sure we do not connect to the wrong networks.

In 0.13 there are compact blocks, which can help lower orphan rates somewhat. This has been deployed in the FIBRE network for world-wide block broadcast, and it can help reduce the orphan rate. There will always be some orphan rate in the network by design, but we can make it very small. The problem we would like to solve is the one where you have to join a big pool to get out of orphaning. That’s an example where big pools have an advantage due to their size, and better block propagation could reduce this advantage that big pools have. Larger blocks that take longer to propagate contribute to this problem, which is a problem for block size increase proposals.

This is part of a general project that we would like miners to be mining locally off of a local bitcoin instance. So they said they were trying to do that, but it didn’t work for them. They were talking earlier that they want to open-source their pool software for this purpose. If you see any hang-ups to this, then we need to know about those bugs so that we can fix those. Some of the tor configuration things can be alleviated.

At the time of Satoshi, everyone was running their own bitcoin nodes on their desktop computers… but now only exchanges and major businesses, they run this software on their machines. Bitcoin Core is still basically designed for desktop.

Well, not really. I think most developers run Bitcoin Core instances without the GUI and this makes Wladimir sad often.

I think he means small devices?

You can have several Bitcoin Core configurations for different purposes. Like the default dbcache size, or the number of connections. If the db cache size is too low, it’s super slow. The default, until 0.13, has been 100 megabytes, and that’s slow. It hasn’t been made larger because we expect people not only to run it on desktops, which have lots of RAM, but also on VPSes where RAM is more limited.

There are many things where, if it was only running on desktops, we would have made changes and made it faster. But we haven’t, due to concern about running it on small VPS devices ((sorry, browser crashed while typing)).
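A hedged example of the kind of per-machine tuning being described (values are illustrative, not recommendations):

# bitcoin.conf on a desktop with spare RAM
dbcache=2000        # database/UTXO cache in MB; the default discussed above was 100 MB
maxconnections=40   # trim peer slots if bandwidth is limited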

This is a longer-term goal of Bitcoin Core: to provide the basic consensus part through libconsensus, so that it is easy for people to go and write full node packages which are completely compatible with the rest of bitcoin but have different APIs, different feature sets, and are written in different languages. Well, we are in the weeds here. This is something we hope to work on as developers.

The C++ code is difficult to maintain and it’s slow. It’s an important goal for us to have a consensus package that could be used from any language and integrated into whatever application people want. So maybe btcd could have different wallets on different networks and everything else. Well, particularly he was talking about mining policy, which I assume is the kind of development he is interested in.

I am working on libconsensus specifically because a customer asked me to, to have a node that is easier to customize in C#; that’s the same reason why we were talking about making libconsensus.

I think we only have 10 minutes left. If anyone would like to make some closing remarks, then we could wrap up today’s discussions. We should also mention topics that we would like to talk about tomorrow specifically.

Some topic ideas for Day 2

    • Fungibility
    • block withholding attack
    • soft-fork to prevent block withholding attack
    • soft-forks
    • maybe we should discuss latest innovations in block propagation
    • how to communicate better
    • Bitcoin Core communication issues regarding updates and progress
    • lightning network stuff
    • miner profitability regarding second layer solutions
    • block size, HK agreement, etc.
    • AsicBoost and patents (and soft-fork prevention??)
    • Patent pools, patents, defensive patent strategy
    • regulatory pressures on miners if any
    • regulatory pressures and legal considerations for developers
    • decentralized variance reduction
    • weak blocks
    • long-term mining profitability (like when transaction fees might become higher than the block subsidy)
    • new APIs for bitcoind for wallets and blockchain services
    • overview of segwit and segwit security review (re: their question regarding how do we know it is secure)
    • the future of transaction fees, wallet fee estimation
    • mining policy rules and expression and loading that into Core
    • getblocktemplate upgrades
    • bip9 version bits stuff
    • replace-by-fee things

Google Bus tour will require you to sign your name on the sheet of paper. When is the tour? On Monday afternoon at 3:30pm.

Mempool synchronization. There was a request for historical mempool snapshots and that data to be made available. Maybe we will ask Chainalysis, do any of us know them?

Day 2

For developers I would like to request coordination using IRC regarding allocation of talk time. We need explicit pauses and “catch up” time. Transcription has a slight delay and projection has an additional (internet) delay. Time must be given for reply.

Good morning. Does everyone have a wifi connection? Who does not have a wifi connection?

((going through list of topics enumerated on the previous day))

There might be topics that people are not aware of that developers are working on, things like segwit and other scaling items.

What about the miners? Would you like to start out with some topics? Anyone else?

Developers meet in public every week with each other on the internet. We talk regularly with each other. It would be better to have input from the miners regarding what they are interested in talking about.

What about requirements for bitcoin? We all want bitcoin to succeed. We have in mind things that we want. It would be good for users to be able to buy ASICs. We often don’t talk about how that would be done. We would like more people to be able to use bitcoin, or for bitcoin transaction fees to be lower, but changing the format in a certain way is not a requirement. I think that if we could collect a list of requirements, that we would be surprised how much we might agree on those requirements. It would be a good exercise to go through.

What do you mean by buying ASICs?

You can go and buy a bitmain S9 on a website. I could put in a credit card, and some time later an S9 will arrive by mail. For example, there are not many manufacturers that sell ASICs in a form factor that work in a house. There used to be KNC, they don’t exist anymore. These guys make shipping containers. KNC is out of the market.

The housing market is just for reputation.

The implementation was worse. They don’t sell the miners because inside there is dangerous high voltage and it’s not possible to certify them in the EU or the US (for consumer use?). If you are making them smaller, then you are just using the type, and there’s no reason to sell it. So you would be going bankrupt; you would not be able to make it. Maybe it was just a wrong decision on our side. It would be good to create the product without the certification and some specific requirements; they just solve the technical problem, but it doesn’t pass the certification. And lately, it was just a question of survival. There is no time to split the team and there is the other percent. Right now there is some other problem with containers. But the chips, there is the key I think; they will release them and they will be available. I know about creating a USB efficiency stick; it will not be so efficient for these functions.

The idea of the requirement is just that many people should have ASICs.

What do you mean? You want to buy ASIC or you want to buy miners?

What I am saying is that it would be better if there were more manufacturers.

Maybe we should reformulate it. Mining hosting? Keeping the miner at home, I think, will become a more complicated task going forward.

What I think he is trying to say is that for the long-term survival of the Bitcoin system, we think that it is important for participation in mining to be very widely distributed. And that’s sort of a requirement. But how we get more mining more distributed is an open question.

In the future perhaps we will have better distribution of ASICs and mining among users.

Yesterday we saw a graph of the mining companies: in 2015, of 10 companies, 8 are … Because it grew very fast, the consolidation is very, very high. Then it came to stagnation, so maybe we will see other players.

There is also Avalon 3, right? They got bought by somebody? I cannot talk about them I think.

The bigger thing that he was trying to ask about is that we all think about things that Bitcoin needs to achieve in the future. A requirement example is that Bitcoin mining is widely distributed. Another example requirement is that bitcoin should be easy to use. Perhaps another requirement would be “bitcoin fees should be under X threshold”. It is often useful to think and ask about requirements, and not technical mechanisms, because there are often multiple technical mechanisms to achieve requirements. There is also often more agreement about requirements than there is agreement about mechanisms. So perhaps we should talk about requirements that we find interesting.

Mining data centers are not centralized; we already drew them up and tried to show how the miners are distributed across the map. I could also mark his data center, because I did not have any information; they could release that and it would be great PR, because it would show that mining is not just one point. There are 60 persons located in China, for example, and it’s very distributed, and the statistics can show that. It’s great PR.

Multiple locations does not distribute it. It’s still under the same control of the same company or government.

No, even with the data centers, what they were saying yesterday was that … they cannot just launch an attack so easily; they perform maintenance, and to combine all the power … so I think it’s technically very hard to organize like that. In our case, a huge part belongs to investors. It’s not our power. It’s not just one or two investors, it’s hundreds or thousands. It’s even more than the number of Core developers all together.

Keep in mind that if the developers go away, nothing happens in Bitcoin Core development for a while. No, that’s not correct. Well, it’s not the same as mining issues where bitcoin fails immediately. I mean that nothing goes wrong if developers vanish; the software keeps working, for a while. Whereas if mining disappears, bitcoin breaks immediately.

If someone were to make a hidden modification, nobody would discover it.

No, it’s not that easy. It’s very difficult to get things to pass peer review. I would say that I am much more worried about […], in particular because if something happens to the developers, there’s a big pool of talent in the crypto community. It would take a few months.

Let’s just agree that both our concerns, that we want decentralization.

If we want bitcoin to be a success in 5 years, what characteristics do we think it has to have to be a success? Perhaps mining should be reasonably decentralized. We could say it should support many users. It should have reasonable fees. Which of these do the miners agree with? What are they interested in? Do we see fungibility as a pretty existential, interesting item? Maybe if you say mining, that might not be the requirement; the actual requirement is perhaps fungibility.

The main requirement for bitcoin mining is a low cost of electricity. A lot of those low-cost power sources are located in remote regions. They are distributed around the world. That advantage by itself supports the decentralization model.

Progress and organizations

I think we should try to make a short-term plan and a long-term plan. We should try to improve our image to investors. We could also talk about the image presented to investors. All the internal battles and conflicts hurt that image. There is no centralized PR for developers; the developers are actually somewhat opposed to centralized PR. Venture investors say many times that they are not really technical. They are not following technical presentations. They would like to get some financial information about the status of the project, and it does not yet exist. Maybe we should hire some people to represent the Bitcoin community’s interest.

I recognize that someone mentioned the opinions on the decentralization effort. I would like to make two points. I think that the R&D in bitcoin currently needs some improvement. I believe that it’s a little slow in pace. We need to improve and accelerate the R&D effort in Bitcoin. The second point I would like to make is: could we have multiple versions of the software and multiple parallel R&D teams getting involved in Bitcoin development to improve its R&D? For example, in Ethereum the R&D efforts are very active and funded by the Ethereum Foundation, and we could learn from their example, draw lessons from it, and improve our own R&D effort.

What is Ethereum doing that we’re not? Some concrete examples would be useful.

I would like to ask how many in your room has submitted more than 1000 lines of code into the Bitcoin repository in the last year.

Lines of code is not a good measure of progress in development. It is difficult to measure what progress is. What about when people review source code? How do you measure that? What does it mean whether someone contributes by lines written versus time spent reviewing or designing? Code review can be much more useful than writing code.

Can we get a translation regarding what Ethereum Foundation has been doing that we have not regarding R&D? What is the perception here?

Okay, so. He thinks that because … they have more versions of the software, and also the price of ethereum, that proves it has more activity in terms of R&D and system improvement.

His point is that there is correlation between price and ethereum’s R&D activity.

Well, bitcoin’s price is much higher.

Bitcoin’s current rate of R&D is much higher than it has ever been. We could use statistics to show this if we would like.

I definitely agree that there are communication problems in the Bitcoin space, for example about what is going on in development. Awareness among the public is not particularly good. We need help to fix this. The actual level of R&D activity is high. Since Bitcoin Core 0.12.1, which was just a few months ago, the diff between the master branch and 0.12.1 is 184,000 lines of patch. So that’s a considerable amount of development activity going on. This work includes major features, including features which are important to the mining ecosystem. Maybe we don’t talk about these improvements often enough? Perhaps we do not speak clearly enough about these developments? We are often talking to other developers. From a development perspective, our focus is on development activity and making sure things are running smoothly, but we do not often communicate to drive the price, for example. We would like to do more.

There is a fair amount of hostility in the Bitcoin ecosystem. This creates pressure on the development community not to speak about progress. Every time the developers speak, there is a hornet’s nest of negative responses. We get a lot of negativity, even for talking about forward progress. This is very demoralizing. In development, our culture is often that we would prefer to do good work, and not necessarily promote or talk about our work.

I counted the bitcoin repository lines of code. I only found 120,000 lines of C/C++ code including empty lines and header files. Where did you see 184,000 lines of change? Where did it come from?

It was a “diff” (the difference) between the older version and the new version. It includes both additions and deletions. Between two versions, there are added lines and deleted lines, and that adds up to 184,000 in total. Someone can work for 1 month on a new feature to replace an old feature; this can mean 1,000 lines of new code and 20,000 lines of removed code. The lines-of-code metric is not a good measure of R&D quality or progress. You can see this result yourself in git; we can show you how to measure it: after checking out master, you just type git diff v0.12.1 and you can see for yourself the differences between the master branch and v0.12.1.

git clone https://github.com/bitcoin/bitcoin && cd bitcoin
 git diff v0.12.1 | wc -l

… more details can be read on http://bitcoincore.org/ ( or https://github.com/bitcoin/bitcoin/compare/v0.12.0...v0.13.0rc2 )

I would like to thank you for your efforts on R&D. You are the main development team in the Bitcoin space. I recognize that, as you just mentioned, there is a lot of negativity and attacks on Core developers. From the outsider perspective, this contributes to the image that we are not unified. We have divisions among ourselves because of this. This in turn is compromising the development of Bitcoin. At this point, I am not sure whether the Core development team is going to improve its PR effort, or whether you will dig in and bury yourselves in the work and not try to improve your PR.

One clarifying remark is that some developers feel that they should not participate in PR (public relations).

It is very difficult in a project like this to do PR. One of the reasons is that … they have massive investment; they have money available to market themselves and Ethereum. In Bitcoin Core, we do not have this funding. All of us work as volunteers. We do not have funding. We are not experts on marketing. We are not PR experts. Is there a way to improve this? The Bitcoin Foundation did not work very well. We are open to ideas. You have to understand that we are developers. We are not good at PR. We are not funded.

All of you are volunteers? Okay.

The claim that Bitcoin’s creator has a premine is untrue. It should not be circulated. This claim is false. No, we did not say that. We were joking about something else. Oh, okay. Yes. In bitcoin there was no premine. In ethereum, they premined and funded their marketing and development. We don’t have that in bitcoin.

My belief is that for such a large development effort, it is not feasible to move forward without financial resources. The Bitcoin Foundation fell not because the model was wrong, but because of its poor management.

Because they got arrested. Only two of them.

Maybe the bitcoin community needs to rethink about how to fund the development effort. He proposes that maybe we can fund another foundation to provide financial resources to development efforts to make things easier for you guys, to make it feasible and sustainable in the long run.

.. and it should involve the major players in the bitcoin space in this foundation.

Bitcoin Core has a sponsorship program in place. We have worked a little bit on that. There is also an unfortunate barrier of entry regarding education for developers. One of my projects is using libconsensus such that people with less technical skills can make their own full node. That’s what I’m trying to do right now. I hope that in the future there will be more work there.

Are you doing that full time or part time? Part time.

He believes that for the Bitcoin development effort, relying just on donations, sponsorship, and part-time work, and on a couple of highly skilled developers, is not enough. It’s not sustainable in the long run.

An interesting question for the people in the room here, of the developers, who is working on bitcoin full time?

We definitely agree that there needs to be a more sustainable model. It cannot be just one approach; we need to do multiple things to make R&D sustainable. One problem that we have had in the past, which has made our PR efforts more restrained, is that we have had problems with initiatives to do public outreach. These initiatives have used our work to try to take control of Bitcoin. They have used our work to argue for their own authority over Bitcoin. This was the case with the Bitcoin Foundation. This was a bad experience for many of us in Bitcoin Core. The Bitcoin Foundation failed, and failed to be sustainable. But also, the Bitcoin Foundation used its influence in ways that were harmful to the long-term sustainability of bitcoin.

As someone who worked for Bitcoin Foundation, I did not like telling people that I was employed by BF. I went out of my way to avoid even mentioning it, as a developer. Even though I was free to work on code without being involved in the other stuff they were doing. They tried to express control over Bitcoin [like in a social way].

I don’t mean to say that Bitcoin Foundation took bad actions. However, many outsiders perceived it to be in control of Bitcoin. That was a problem. As an example, Ethereum Foundation is perceived to be in control of Ethereum. This kind of control over Bitcoin must not exist. We need sustainability without control, and we need this without the perception of control too. The perception of control is also a major problem. By stating that Ethereum is forking, a statement made by Ethereum Foundation, they were able to silence people who had disagreements with that hard-fork plan, through social means. Also, see the related Buterin effect.

Okay. He feels that the reason why people think some initiative would try to take control of Bitcoin through Bitcoin Core development, and why people blame you for that, is that the interests being represented by Core are still narrow. (Something about narrow business interest?) The interest group you represent is still too narrow. The community is not being represented. I think he means that you are not representing the majority. What he is proposing is that we need to form another foundation that could engage most of the companies in the Bitcoin industry, so that they can all fund that foundation, which in turn would fund you.

Some developers care less about earning money, which is why we do not […].

The foundation may not be able to represent most of the industry in the Bitcoin space, because those companies might have their own private interests. They might be antagonistic to each other, like Coinbase versus Blockstream, so it may not be inclusive. It is hard to have one foundation that represents the majority of interests in the Bitcoin space because of the conflicts of interest. How could you set up one organization that can….

It is not possible to have one organization that represents everybody.

We should do what is right; as a developer who does not own a company, I do not like the idea of companies controlling a foundation.

https://bitcoincore.org/en/about/sponsorship/programme/

We need to design a system where there is nobody in control. This is perhaps not best represented by the companies that exist today. Business interest is important, as well.

Let’s separate the issues first. Let’s not talk about the format of the organization or this foundation. Let’s instead revisit what I just mentioned before. My points are that, first, for such a complex and large-scale development effort, you must have financial resources; otherwise, you cannot be sustainable in the long term. You must have full-time staff. You cannot be part-time; that is also not sustainable. We need to involve more people. We need more than a few smart developers. Based on these, we can talk about how we can form that foundation or some other entity. I just strongly doubt that this amorphous and loose organization can really be sustained in the long run, given the complexity of the R&D effort.

I think we have to discuss whether such an organization is necessary. And then we can discuss how to run such an organization, as a later step.

My personal experience from the Linux industry and especially… I agree that we cannot rely on volunteers for sustained progress. The trouble here is that we have seen the dangers of a single centralized organization. Since that time, it has been confusing to not have a single centralized organization. However, since that time, we have made more progress than in the past from the contributions of a mix of contributing orgs, some private companies and some non-profit organizations, that have worked together, such as the two developers working at MIT DCI. There are some engineers at ChainCode Labs, as another example. There are also some engineers at other companies. I have heard something about a Chinese company wanting to train an engineer to become a Core developer. (That has not been going so well.) Oh, I see. Well, you have to continue that investment; it takes a while. I hear that a Japanese company now wants to begin the long-term investment to also train Core engineers.

And, there is no company that controls Linux. There are many companies that contribute engineers full-time to work on the upstream open-source development projects that Linux relies on. Linux Foundation is more of a coordinator. Linux Foundation does not control Linux.

Sure, I can explain. Linux has many companies. There are a number of other Linux vendors in addition to Red Hat. There is Intel, AMD, ARM, and many 100s of other companies in the Linux software ecosystem or in the system integration companies like IBM, HP, there are many of these companies all over the world and they all decided over time to devote full-time engineers to the upstream development projects of the Linux kernel project and the many other thousands of pieces of software that are used in the Linux stack.

There is really no one company in control of Linux, and the Linux Foundation serves a coordinating function for the Linux software development community. It is a little confusing there. To simplify how people understand Linux, it is sometimes described as the Linux Foundation making decisions, but I think the way things actually work in Linux is that the Linux engineers make decisions based upon peer review. They won’t let a large moneyed interest override what they think is a sound, technical decision.

One thing I would add is that the Linux Foundation has helped other open-source industries to better coordinate. The Linux Foundation has offered to help as a neutral process facilitating function regarding Bitcoin. They theoretically do not have anyone inside that cares either way about Bitcoin. It is an option to ask them for help. However, I don’t know if it is the right approach.

I think Linux is a good open-source example. The Linux Foundation helps Linux do PR, and the engineers focus on the development direction. I think Bitcoin development can take an example from Linux. I want to modify my first request: whether such an organization, or many organizations, is necessary for Bitcoin.

Okay. I think Linux has set a good example. The Linux Foundation works on PR for Linux, and the engineers set the rules for system development. Based on that, I would like to modify my first proposal. Can we learn from the Linux example and maybe set up one or multiple foundations for Bitcoin, like in Linux where Fedora, Red Hat and others each have their own foundation? Is that true?

That is not exactly how it works.

I would like to add that, I personally see potential for process facilitation and communications as early roles that are easier to agree upon.

One thing to say perhaps is that there is a perception that there is no Bitcoin Foundation-like entity guiding bitcoin. However, there are many different organizations that are supporting, in limited capacities, different parts of Bitcoin. MIT DCI is paying for a number of developers to work on Core on a full-time basis. They are running classes. They have been organizing events that help to promote bitcoin. They are supporting only three developers, right? Yes, but there are others being supported by other organizations. Bitmain is currently paying for one of the current developers. Blockstream pays Pieter to work on Bitcoin full-time. Ciphrex pays Eric to work on Bitcoin Core. There are other organizations doing political lobbying (like Coin Center in the U.S. for political changes).

It is a fact, though, that when it comes to talking about the technological work that we are doing, the community is failing to communicate adequately. We had this discussion earlier today about comparing to Ethereum. There was some laughter from the developer side earlier during that discussion when someone made a comparison to Ethereum. It was not meant to be insulting laughter. Rather, we feel that we do a lot of dev work compared to Ethereum. A few minutes ago I looked at the data and found that there were 3x more commits to Bitcoin Core than to go-ethereum (the numbers are 27 contributors to go-ethereum vs 96 to Core, and 1294 commits to Core vs 490 to go-ethereum since January 1st). There were also more developers working on Bitcoin Core than on Ethereum in this case. We could be doing more work to communicate this more widely. There is more work to be done to communicate this to the public. Perhaps there are some additional needs for organizational efforts around that? But we need to keep in mind that we have had very specific problems in the past with these organizational efforts, such as the earlier stories about the Bitcoin Foundation that we have explained. [And despite the lack of funding, we are still more productive than Ethereum Foundation.]
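A hedged sketch of how such commit and contributor counts can be reproduced (the cut-off matches the “since January 1st” above; exact numbers depend on when you run it and which branch you are on):

git clone https://github.com/bitcoin/bitcoin && cd bitcoin
git log --oneline --since=2016-01-01 | wc -l      # rough commit count this year
git shortlog -sn --since=2016-01-01 | wc -l       # rough count of distinct contributors
# repeat in a clone of https://github.com/ethereum/go-ethereum for the comparison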

If ethereum is written in Go, is it a higher-level language and do Go commits compare to C++ commits directly?

Can we take a break? Yes.

From my perspective, we, the development community, are successful if you never hear about us. However, this is not always the most useful perspective. We agree. Some organizational effort to promote Bitcoin technology could help a lot.

I am curious about the miners in the room, who mine Ethereum, how many different client implementations do you use? Just one.

Lunch will be setup outside this room. We will continue to have the meeting in here.

Apparently, Ethereum literally has more marketing employees than developers. There are literally more people working for the Ethereum Foundation in marketing roles than there are employees doing development. This is surprising, but it also explains a lot. Bitcoin has 3x more engineers, much more value, much more code, and approximately zero marketing. There has been some improvement, perhaps.

Something else about marketing. So the discussion about setting up a new foundation. To me, it sounded like one of the intents would be to have some Bitcoin marketing. Maybe from companies. One of the problems in the Bitcoin ecosystem is that some companies are saying negative things about Bitcoin. They are anti-selling bitcoin. They are saying negative things about Bitcoin or negative things about each other. In Ethereum, they fight only in private. However, the marketing people still continue to say positive things in Ethereum, even while the hard-fork was failing. It was very positive marketing. We all want Bitcoin to succeed. How about a marketing alliance that advertises and says positive things to people who would buy Bitcoin or who would buy miners or buy services from everyone’s companies? Why are we not doing this?

Why not a standardized foundation in that sense? If you are proposing a marketing alliance, why not use a foundation which could have a template format we could use? It’s more expedient.

We are not good at marketing.

Those are very different goals. Mixing the two in one type of organization is what we’re afraid of. We do not want this to have the perception of control over Bitcoin. It tends to create the perception of control, of controlling the narrative and the way that people start to look at it. The previous proposal for a marketing alliance very specifically did not include development.

No control, like a Linux Foundation?

What is needed is not so much control of development, but rather process facilitation between the stakeholders in the industry and community, such as for marketing purposes and advertising purposes. When I say process facilitation, it could be to better coordinate marketing, but it could also help to better coordinate — a very common problem is that people are not talking to each other. There are very simple problems that for example, miners or exchanges could have solved, if they would have asked (requested) the developers for help. Historically, that request has not happened. This would be solved by process facilitation. [Regular, scheduled contact and communication.]

What does facilitation mean? One example of facilitation is regularly scheduled one-on-one meetings between industry members whoever they may be. Some stakeholders don’t naturally communicate with other stakeholders, and perhaps having an intermediary or other coordinating roles could help. When you don’t have this, all kinds of assumptions happen, often these assumptions are incorrect. For most developers communication is not a natural skill. Having others to fill this role and fill up calendars with scheduled meetings would be profoundly helpful. Having others to be available to work on these problems, to help assemble a big picture view, so to know who to connect to on a given issue. Developers are not naturally going to put those on their calendars without being requested. You cannot rely on volunteers for this sort of effort to be a sustained benefit to the ecosystem. Instead, there should be people who are paid to have these roles. So when I suggest an org that is only process facilitation and maybe also a marketing alliance, this is what I mean by “facilitation” and what I think about in general.

My question is, first I say that facilitation is an important role and to make it sustainable it has to be someone doing it full-time. Is there one or more people that everyone can agree could do this with neutrality because they do not work at any company?

There are companies in Bitcoin who are more effective at marketing. BTCC does some marketing. Blockchain.info did a promotional video. I think I have seen videos from other companies. Maybe the companies that are good at that could form an alliance for marketing and persuade other companies to join. Those companies could probably get some help in producing technical progress reports. They could simplify the Bitcoin Core meeting notes. There was a release candidate for Bitcoin Core yesterday, but it has not reached mass visibility; perhaps it has not even reached the people in this room. This would be a good example of a task for such a marketing alliance to focus on.

Who here is subscribed to the Bitcoin Core mailing list?

Would it be helpful if we publish mailing list summaries periodically? We need a place to talk where it is in our language. But perhaps summaries could be more targeted to a general audience? Maybe a digest?

Your weekly release for developers, it’s not for common people to read. It’s not for non-technical people to read. That’s why they hope you can make them more transparent.

We can definitely make communications more directed to a general audience. That feedback is very useful, so thank you for sharing that.

I wanted to share my perspective as a developer of an alternative client. One thing about “we need more bitcoin developers”: it’s actually an arduous process to get up to the level where a developer can contribute (not to Bitcoin Core, but to bitcoin). I have done this recently myself as a developer. btcd is funded by a company called Company Zero, and we have far fewer developers than Bitcoin Core. The intersection of developers who know bitcoin and Go is smaller than for C/C++. We have been working to catch up with Bitcoin Core on soft-forks, but btcd is behind at the moment on the soft-forks. I think bitcoin development is going quickly. We have been working on BIPs 68, 112, 113, 9, and all the segwit BIPs. The CSV package (68/112/113) has pull requests outstanding. They are pending review (btcd is behind on BIP9/68/112/.. but has segwit in progress). We don’t have enough reviewers. There are maybe 5 active developers for btcd. It’s a different skillset. I think the btcd code base and its sub-packages are themselves a big investment in bitcoin infrastructure; they are good libraries, and Lightning development is enabled by btcd and the infrastructure investment there. Maybe btcd does not do enough marketing for itself; it’s a little less known than Bitcoin Core in that respect. However, if people want to contribute, they could join the development team.

I have been testing segwit for the past 4 or 5 months. We have tested on segnet and testnet, and we have made lightning channels. One developer made the first lightning channels on both segnet and testnet. The pull requests (segwit, csv-package) themselves have not received as much review, but we’ve been testing them for the last six months or so. He made the big blocks on segnet and testnet to make sure that we could verify them properly. I’ll slow down.

How big was the segnet big block? It was about 3.6 megabytes. We had a competition between ourselves. We wrote good spamming tools.

The other thing is that I think one thing that is needed amongst the different development teams for Bitcoin is collaboration. The recent segwit BIPs have helped with this. It’s good to have multiple eyes reviewing the BIPs themselves. By implementing the large changes, we found some things about segwit, and we gave feedback to the other developers. We made a transaction on segnet that had 19,000 inputs to test how well the sighash function works. Before that, it took about 29 seconds for Bitcoin Core to validate it. We implemented sighash mid-state caching in btcd and then it took 3 milliseconds or something. Core has a PR to implement hash caching also, and as a result of our testing has some great examples to benchmark against.

This was an example of how other implementations can help because btcd used a different approach to implementing the Bitcoin protocol. From this, we were able to see that they were compatible with segregated witness, and showed some corner cases. This was helpful and very useful. This work was good and we’re thankful that they did this work on segregated witness.

Maybe I will make a couple of comments here. I am not against creating a separate team for developing the project. But if we have multiple teams, then coordination between teams takes a lot of time, cost, and resources. You cannot go to war with 10 battalions and no general to lead them. When developing a product, you should always have a general architecture and a lead engineer. The Bitcoin network is more about the standard; it’s a communication standard. You cannot create ten different software projects because they will not be able to communicate.

What good do you see coming from having multiple implementations?

From the perspective of venture investors, it will be very complex to explain what you are doing and how you are doing it. If there were a battle between Core and other implementations, it would decrease trust in Bitcoin, decrease the market price of Bitcoin, and it would not help make R&D go faster. If someone thinks that they can make a new split from scratch, then they should name it something else, like an altcoin. I think it is much, much better if we create a team like Lightning, where we can inspire the Bitcoin Core team, and then coordination between different teams will increase productivity overall. They can internally discuss an idea and agree, and if they all internally agree that it would be helpful and efficient and stable, then it can be merged back into Bitcoin. For Bitcoin Core, I think it would also be helpful to add more efficient developers and more leaders and support them in a more transparent way. Right now some companies are paying for some developers. Right now that’s not the issue, but there might later be a lot of questions; it should be really transparent, like one central point, whatever you call it, where like a business you donate your money and this money supports developers [like the Bitcoin Core sponsorship program?]. That’s it.

Thank you for the recap. Lunch is setup outside.

Development is somewhat hampered when you increase the number of implementations. It takes more coordination and more effort to have developers reviewing multiple implementations of consensus-related software. It is enough work to do it in Bitcoin Core. If you are asking for multiple implementations, then this request is incompatible with having faster and more R&D, because it divides our efforts, our time, and our attention, and weakens our ability to make R&D progress. The Ethereum Foundation might seem like a fast mover on R&D, but that is perhaps a misperception generated by their marketing efforts; they have more marketing employees than developer employees. “Let’s split your engineers across 3 different projects so that we go faster” is nonsensical, because software engineering does not work like that. Perhaps we are not understanding what the other side is saying.

(Note that this was from lunch chit chat, the following was not discussed as a group.)

What do the miners see as the future of Bitcoin in the long term? What is their take on this? Yesterday there were two answers, but we would like to hear more from the miners.

Can we please also get an assessment from both sides regarding the level of misunderstanding that they think is continuing to occur here today? This would be valuable feedback.

Wallets should probably show the moving window average of number of zeroes appearing in blockhashes. This would be useful for improving knowledge about current difficulty. However, consumers might get concerned when that number does (or does not) change.

The short-term interest that some have regarding bitcoin market price would be to crank block size to infinity and pump the price. But what about the long-term interest in bitcoin? What is the long-term value of bitcoin? Why is bitcoin valuable in the first place? What is it about bitcoin that is different from other products? The concern that some developers have, regarding constructing an organization exclusively devoted to PR and marketing, is that control can unintentionally degrade the value of Bitcoin even if we each individually have good intentions going into such an effort, because of how contrary it is to the ethos of Bitcoin, decentralization and fungibility. Core developers don’t have a way to design this type of organization, while preventing the negative outcomes that are obviously applicable, and for this reason we have not made a proposal for this type of organization.

Block size and hard-forks

We had a good discussion in the morning on a number of topics. In the afternoon, the miners would like to shift the discussion to block size scaling.

I think that we can start by talking about the Hong Kong agreement. I know that it is not an agreement with Bitcoin Core. I know that Bitcoin Core cannot sign an agreement. I know that it was individuals that signed. We know that there is a disagreement even within Bitcoin Core whether to scale the block size, how much, how large, how to do it, whether to do it. Those individuals promised to propose a hard-fork proposal to the Core community for reviewing.

With everyone joining forces, it is good for marketing, it is good for the economy, but at this time we feel … with the promise of those individuals … several of them are joining us here today as well. I also recognize that you will have to spend resources; maybe you have salaries, and that means a lot of budget and… some… forum. I think it’s kind of a promise. Maybe you are not representing Core, and we understand that. But as a personal promise, I think this is a promise that you guys need to fulfill. There might be some delay, but I think we should talk frankly to see what’s going on here.

Keep in mind that we do have segwit, which is a block size increase. We need to get segwit out and implemented. After that, what the Hong Kong agreement really was, was a change in how segwit’s block size increase would be allocated.

Let’s answer the question. Let me keep going.

I think that right now, I think the biggest concern is that Ethereum has shown that this is looking a lot more risky. When we met in New York, we talked a lot about how we would get consensus and show consensus. The miners were not in New York. I am not sure if people are aware about this.

Many of the HK agreement signers spent a week in NY. We did a lot of design work. We talked about how to properly construct a hard-fork. We talked about how we would do this in a way where we would not have the same risks that Ethereum has recently experienced. We talked in HK about how important it is for Bitcoin to remain unified and how important that is to the long-term value of Bitcoin. In HK, and in NY, there was no desire to do anything that would be controversial. We would need consensus around any kind of hard-fork that would happen. It would have to be incredibly non-controversial. While it’s certainly the case that a lot of that research and many of those discussions should be made more broadly available, there is certainly a lot of concern now, even from people outside this room, that it would be very difficult to get that level of consensus around a hard-fork. Proposing a hard-fork, or at least proposing some details about how a hard-fork should look, is one thing, and I think that should still happen. But as of the last week I do not think we would be able to get the kind of consensus we need, for example from Bitpay and their investors who saw what happened with Ethereum Classic, for Bitcoin Core or for Bitcoin to have a hard-fork that does not result in a significant loss of value.

I think it’s worth mentioning too that we were heavily disagreeing amongst ourselves, even in the New York office, about how something might be deployed. Much of that was due to talk of signaling. Since then, we have revisited several assumptions. One assumption that someone originally proposed was that “nobody would follow a minority chain” or “nobody would attack another chain”. There were discussions about those different scenarios. It has been enlightening in the last few weeks to see that some of those things are in fact very possible and are more concerning than we thought before. It’s something that requires thought. It provides a new data point. It requires some new analysis, I think.

I wanted to point out that hard-forks are very disruptive to markets. They are disruptive to merchants, to markets, to entire ecosystems. We have to take this into account. Unless there is an overwhelmingly strongly justified reason to do a hard-fork, then the costs outweigh the benefits. We have been looking at ways to solve these problems in Bitcoin without having a hard-fork. This has been a huge focus of our engineering work over the last several years. We have been working on ways to make everyone happy in the ecosystem [although we are not maximizing happiness].

Also, yes I have been writing code for the hard-fork. It’s not ready at the moment but I do want you to know that I have been honoring that commitment and have been writing code towards that.

https://github.com/luke-jr/bips/blob/bip-mmhf/bip-mmhf.mediawiki

https://github.com/luke-jr/bitcoin/compare/bc94b87...luke-jr:hardfork2016

Regardless of whether we have reached broad consensus on a hard-fork, based on the recent Ethereum hard-fork experience, a Bitcoin hard-fork would inevitably take place at some point in the future; it’s bound to happen. Based on the ethereum hard-fork experience, they believe that a hard-fork splitting Bitcoin is inevitably bound to happen.

Only if we try to make it happen. Already happened with Clams, right? A controversial, disharmonious hard-fork does not need to happen.

This divergence in opinion is kind of hard-wired into human nature; even amongst this small group we have divergence of opinion, not to mention in the broader community. Because of this divergence of opinion, the public is going to split into two or more user groups and opinion camps with different ideologies of how bitcoin should be implemented. Some may belong to Core and some may not have a development team. That might be hard-wired into human nature and politics.

Okay. Even though there may not be such a development team at this point, people are driven by interest, and this drive might compel or invite another development team to jump in and do this kind of work.

To offset this trend, it’s necessary for us to increase the user base. If we treat Bitcoin as a reserve currency, then the possibility for it to split is high. But if we treat it as a payment network, then people are unwilling to let it split, because that usage increases its usability.

When you say payment network, what does that mean in 5 years or 10 years?

If we treat bitcoin as a payment method, then we have to support new tech development such as the Lightning Network and a bigger block size to support that functionality. Based on this logic, all of our effort is to defend ourselves against such a hard-fork effort that may occur in the future. In order to defend against a potential hard-fork in the future, we need to provide some kind of forum or unified basis so that different voices in the Bitcoin community have a platform to communicate with each other, resolve their differences, reach consensus, and protect ourselves against such a malicious hard-fork in the future.

It’s not that easy to get everyone together to communicate and discuss.

It’s not that it’s hard, it’s that you did not do it. (Scaling Bitcoin conferences?)

I agree. I think we need to work to make sure that Bitcoin is both a system for payments and a reserve currency. As a system for payments alone, payments do not have sticky long-term value. You (the customer) buy coins to pay, then you pay to purchase some item, then you (the merchant) sell coins. This does not result in long-term preservation of value. Also, to be a good payment network, Bitcoin must have tech like lightning because no reasonable amount of block size would make Bitcoin large enough to scale to the payment needs of the world (like 1 million transactions/second). We must preserve the long-term value of Bitcoin while growing the user base. We need to find a way, as a community of Bitcoin users, to come to strong agreements in order to defend against malicious hard-forks that would damage Bitcoin’s long-term value.

The first link is the BIP draft. The second link is some of the code to implement that BIP. This is related to the hard-fork commitment.

I would like to apologize to you, and to the developers, for undermining their efforts to produce this material for you regarding their commitments. I did so because their efforts in New York came right after some public comments about blocking segwit regarding a hard-fork. In that environment, I felt very uncomfortable about hard-fork proposals slowing the scaling of bitcoin through segwit. I regret the climate that my comments created. I am sorry for that and for my comments.

I think I need to clarify this. The segwit block also came from the … that the HK agreement would not be respected. It’s a very bad spiral that we have gotten into, in terms of bad communication. Maybe both parties don’t want to do something under pressure. Maybe both parties don’t want to be threatening.

One problem I did have was that a lot of people in the Bitcoin community told me that they would never want to hard-fork. So despite my agreement with you, I had others telling me they were concerned about a closed door meeting. I was hearing that a lot.

After the HK agreement, both parties were unhappy. The big blockers want really big blocks. 2 MB would never make them happy. They were not happy. And it was a bad agreement for them. And for those who want to control block size growth, they …. and I think at the meeting we both agreed that we should try to convince other people that this is kind of a best compromise.

There are a lot of people that don’t want a hard-fork at all. It’s a hard-fork. Everyone has to agree to a hard-fork. It’s difficult.

Do people not want it? Or rather, do they not want it now? People have said “oh, they never want a hard-fork”.

Oh, that’s not true [of my beliefs] at all. The negative discussion about hard-forks makes future hard-forks harder. That frightens me. The system will need hard-forks in the future. But they need to be harmonious hard-forks. People fighting against hard-forks makes this less likely. I think that we will need hard-forks in the future. I hope there will be hard-forks in the future.

The HK agreement did not happen the way I would have liked, but it happened anyway. It would be better to have a better way to come to agreement about how hard-forks happen. It would be good to have lots of community voices coming together. Hopefully we can set a precedent regarding what is the right way to do a hard-fork.

One observation here is that it sounds like we could have avoided much drama. When I heard about the “let’s block segwit” statement, if I had reached out to you directly and clarified in private, it sounds like this could have avoided problems and improved Bitcoin’s public image. I think there are many opportunities for us as the Bitcoin industry to try to settle our disputes one-on-one. This may improve Bitcoin’s public image as well and be more productive for all of us.

Maybe someone could describe the MMHF hard-fork. And someone else had some BIPs about that. What is in these BIPs? What is the structure?

I think a lot of the opposition I was getting was regarding a closed door agreement. They wanted it to be something that develops organically.

I think that if I can expand on that, there is a political challenge where if someone external to Bitcoin tries to enforce a hard-fork on bitcoin, then it must be rejected for bitcoin’s long-term survival. When we are talking about designing a hard-fork, we must make it clear to the community that the desire for the hard-fork is organic from the Bitcoin space. I think that what he found was that the structure of the HK agreement undermined that understanding. Perhaps that could be overcome by getting more support for it, but still it is something to keep in mind for how we handle harmonious hard-forks in the future.

I think that people believe that communication should be open. Nobody noticed that BIP draft. There was no place to publish things like that. There was no blog post about it, or a tweeted link or something, to the draft. Anyway, please describe what it does.

I had summarized some of the ideas we had discussed in Zurich. I think we first need to know, it seems like there’s some discrepancy even between a few people, regarding the … maybe he could explain his BIP draft.

If you scroll down to the specification section; well basically, don’t read the document. What features does it have in it? Pretty much most of it is making the hard-fork safe. How does it make it safe? The simplest possible hard-fork would be one where old nodes continue on the old chain and do not follow the hard-fork. If people did not upgrade, then those nodes would get stuck and the chain would be vulnerable to attack in the worst case. So what you are saying is that one of the concerns you discussed in New York was that un-upgraded systems would be vulnerable to hashrate spinning up and mining fake confirmations where people don’t know about the fork; automated withdrawal systems, for example, would unfortunately be compromised. So your hard-fork proposal is one where, from the perspective of un-upgraded original nodes, miners are still mining on the original chain. The way this is designed, the old nodes will see the old chain as empty blocks. They will follow that chain and they will not be left vulnerable to fake confirmation attacks. This is done by defining the serialization of these headers separately from the hash algorithm of the block hash. So the block hash is then calculated with a more complex algorithm than currently. Have you implemented this in source code? Yes, partially. The p2p code will at the moment not talk to the old nodes; this needs to be fixed, which is similar to segwit. It has comparable complexity to the segwit change.
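
To make that mechanism a bit more concrete, here is a minimal sketch in Python, under my own assumptions (hypothetical names and structure, not the actual bip-mmhf code): the legacy-serialized header that old nodes parse commits to an empty transaction set, while the real block is committed through an extended header, and the block hash that miners grind on is computed with a different, layered algorithm over both pieces.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Hypothetical illustration only: the legacy header that old nodes parse commits
# to an empty transaction set, so un-upgraded nodes follow a chain of empty
# blocks and cannot be shown fake confirmations.
def legacy_header(prev_hash: bytes, time: int, bits: int, nonce: int) -> bytes:
    empty_merkle_root = b"\x00" * 32   # no transactions visible to old nodes
    return ((4).to_bytes(4, "little") +    # version
            prev_hash +
            empty_merkle_root +
            time.to_bytes(4, "little") +
            bits.to_bytes(4, "little") +
            nonce.to_bytes(4, "little"))

# The real block hash is computed over the legacy header plus an extended header
# carrying the true commitments: the serialization old nodes relay is defined
# separately from the (more complex) hash algorithm new miners grind on.
def new_block_hash(legacy_hdr: bytes, extended_hdr: bytes) -> bytes:
    return dsha256(dsha256(legacy_hdr) + dsha256(extended_hdr))
```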

I think it’s important to clarify that this is a difference from the Ethereum hard-fork. In that hard-fork, if you ran an old client, transactions still appeared to confirm; in comparison, this one will not allow transactions to be confirmed for old nodes. The old nodes are forced to make a decision: do they want to do the hard-fork, or do they want to do a different soft-fork to prevent the hard-fork? They are not left vulnerable. They must make a decision one way or another.

You could say that this proposal is not much of a fork. It is blocking transactions. There must be a fork to block the transactions. Anyway, this is the main design principle. There are other interesting elements in the design of this. The new header structure gives more nonce space to the miners, which they could use in ASICs without having to do the whole merkle tree for the transactions in the ASICs. So it would lower the communication cost to mining devices. However, it does not put the extra nonce space in the compression run. It’s an asicboost [extranonce?] nonce. It repurposes 3 of the version bytes as nonce space as well. You have version bytes in the new header, so this proposal also gives nonce space in the version field. It’s basically enabling asicboost. It’s an interesting discussion that we should have separately in another venue. Does it have other interesting features?

It fixes the timestamp overflow issue by cleanly overflowing. It’s using 32 bits to represent a 64-bit timestamp. It makes a long-term improvement for timestamp handling, because we want bitcoin to last many thousands of years. Part of bitcoin’s value is that it’s forever. If we can fix the time/clock issues, then we might as well fix that.
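
As a rough illustration of what “cleanly overflowing” could mean here, a minimal sketch under my own assumptions (not the BIP’s actual encoding): the header stores only the low 32 bits of a 64-bit timestamp, and validators reconstruct the full value by picking the 64-bit candidate closest to the previous block’s time, so the field can wrap past 2^32 seconds without ambiguity.

```python
def encode_time(t64: int) -> int:
    """Store only the low 32 bits of the 64-bit timestamp in the header."""
    return t64 & 0xFFFFFFFF

def decode_time(t32: int, prev_t64: int) -> int:
    """Pick the 64-bit time whose low 32 bits match t32 and which lies closest
    to the previous block's known 64-bit time; consecutive blocks are never
    ~68 years apart, so the wrap-around resolves unambiguously."""
    base = prev_t64 - (prev_t64 & 0xFFFFFFFF)
    candidates = [base - (1 << 32) + t32, base + t32, base + (1 << 32) + t32]
    return min(candidates, key=lambda c: abs(c - prev_t64))

# Example: a block 600 seconds after a parent that sits just past the 2^32
# boundary (the year-2106 rollover of a plain 32-bit field) decodes correctly.
prev = (1 << 32) + 16
assert decode_time(encode_time(prev + 600), prev) == prev + 600
```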

Another thing that the proposal does is that it’s designed for merge mining natively. So namecoin won’t need to have a whole bitcoin transaction bogging down every one of their blocks. Merge mining is more efficient, which we could also use for sidechains. Version bits have been expanded to have separate bits for more hard-forks in the future, so that we don’t need to repeat the complexity of this R&D, and so that it can simplify the idea of a soft-hard fork. Anything else? He’s looking at his BIP draft.

It redefines the merkle tree algorithm to fix the duplicate transaction issue that we worked around. It fixes a bunch of technical minutiae and it’s hygienic: it cleans things up and reduces technical debt. Obviously we need to .. for the old blocks. Also it improves things for SPV nodes and lightweight clients.

What were the problems with this proposal?

Well (laughter), I think the tricky thing with this is that because it’s a firm fork…. No, let’s translate. I would say that the tricky thing with this is that because it’s a firm fork, …. a firm fork being, that it is a soft-fork that forces everyone else off the network (those who do not upgrade). If you try to do that with 50/50 approval, then politically it looks like it’s an attack against the minority. From a technical level, it means that miners can do this by veto. The problem is that we want to avoid political ugliness. Do people understand a firm fork? It’s combining a soft-fork and hard-fork. There is zero possibility of two coins afterwards. By default, it would not be two coins. If I am running a bitcoin node and I do nothing during the hard-fork, then I am guaranteed to be forced off the network. Then I have to take action to either accept or decline the hard-fork. This is coercive. However, if we got solid indication that we got consent from the bitcoin community, then it would not raise major issues.

A lot of what I talked about in New York and previously, was how do we see whether coin holders have approved this? Are they going to use this new chain? Are we going to not end up with Ethereum and Ethereum Classic situation? Or are we going to end up with a unified bitcoin? I think it’s important that we find a good way to have coin holders to show that yes we actually approve of this. We can actually use coin voting as a way to argue that the whole economy has approved of this. We can otherwise set a precedent where miners can push through changes without consent, which calls into question the value proposition of Bitcoin. So that is why coin readiness signalling methods would be useful to avoid having the value proposition violated so directly.

It’s not necessarily that the whole community is going to vote on this. How are we going to look at those results? What would it mean if some voted but not others?

If you have a wallet and you did not upgrade, then you are at risk. It’s easy for that wallet to end up on a chain other than the one you expected. We must respect people’s rights to not go with this consensus. It would be dangerous for us to try to question the consensus they want to go with. Who is in control here? Is it users? Or is it the group of bitcoin miners? It could infringe on the value proposition of this being a currency with a stable meaning.

He believes that in this case, if those wallets refuse to upgrade, then they would be taken off the network.

It’s only if they neglect to upgrade, then they would be left off the network. If they choose not to take the hard-fork, which they must do actively, then they can continue on the chain without the hard-fork. They must do something to accept it, or to reject it, but there is no “default” behavior.

The existing chain will have blocks, but with zero transactions.

So his proposal is that, people who are using the current software, their software stops working. New software is released, and then if you install it, it defaults to the chain with the most hashrate. And there is a button that would say “no I want the other chain” …. no, there would be no default. You must choose A or B. It’s a user interface question. So who is the origin chain? Nobody would be the origin chain. And if we are doing our job right, hopefully nobody would choose reject, because otherwise we failed our community by choosing something without consensus.

Ethereum’s voting turn-out was so low (5.5%), and the people who voted were probably very invested in TheDAO. It’s possible that they had hashrate majority, but not most of the users.

Can I give a bigger picture here? If this were designed such that the greater hashrate could decide, it would be a powerful weapon to befuddle bitcoin in the future; people would look at bitcoin and say “oh the government just needs to buy a bunch of hashrate and rewrite the rules at whim”. We need to be able to respond to that and say no, lots of hashrate can attack the network, but no, it cannot rewrite the rules at whim. So this BIP design still supports the existence of the original chain; however, through non-technical means we should make sure that perhaps nobody wants that original chain to exist. “Hashrate deciding” is a long-term threat to Bitcoin’s value and fungibility. It’s each person that must decide for themselves. Nobody should decide for them.

In this fork, I think the economic or market cap, …. we want everyone to make their own decision to join the hard-fork. But yes, market cap would matter. Importantly, on the security points mentioned earlier, we need to build a public process to make sure that everyone knows that this has nearly unanimous support, and then it would be very easy. If there is doubt, then there is opportunity to trade on the doubt and make a political stink about this.

Their point yesterday was that people would try to do that. Well, we can minimize that. A very clear signal that there is no doubt would be to show, through some coin signalling method, that a significant percentage of UTXOs active in the last year, or something like that, agree. That’s what coin signalling can show. It can show that the people who use the chain and own coins agree with this.

You should explain that proposal. Have they heard it?

I know that ethereum with their hard-fork did a limited version of coin signalling. They used their coins to say “yes I agree/disagree with this change”. The concept is simple. If you own some BTC, then you should have a voice and you should be able to say I own BTC and I would be willing to use this new definition of what BTC is, and I am not going to oppose it. Wallets would have a button for which way do you want to go. Some BTC might be in cold storage, etc. It should be about coins that are spent over the last year, not about old cold wallet coins stored a long time ago.

Before the hard-fork, there should be a new version of the wallet where you could signal with your coins, whether you like the proposal or not. If that is reasonably high through this measurement, … we would work with all wallet vendors, exchanges, hosted wallets, everything would be using this coin signalling mechanism. The point is not to trigger the hard-fork. The point is to build political consensus so that any adversarial fork created from this has the least chance of surviving.
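
To sketch what such a tally might look like (hypothetical message format and field names; the discussion here did not fix the details): wallets would sign a yes/no statement with the keys controlling their UTXOs, and observers would weight the signals by the value of coins that moved within the last year.

```python
from dataclasses import dataclass

ONE_YEAR = 365 * 24 * 3600

@dataclass
class Signal:
    utxo_value_sat: int    # value of the coin doing the signalling
    last_moved: int        # unix time the coin was created or last spent
    approves: bool         # "I would use the new definition of BTC"
    signature_valid: bool  # assume the UTXO-key signature was checked elsewhere

def tally(signals, now):
    """Weight yes/no signals by coin value, counting only coins that moved in
    the last year so long-dormant cold-storage coins do not dominate."""
    yes = no = 0
    for s in signals:
        if not s.signature_valid or now - s.last_moved > ONE_YEAR:
            continue
        if s.approves:
            yes += s.utxo_value_sat
        else:
            no += s.utxo_value_sat
    total = yes + no
    return yes / total if total else None  # fraction of signalling value in favor

# Example: 60 BTC signalling yes against 40 BTC signalling no gives 0.6.
sigs = [Signal(60 * 100_000_000, 1_000_000, True, True),
        Signal(40 * 100_000_000, 1_000_000, False, True)]
print(tally(sigs, now=1_500_000))
```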

Coin signalling is very easily gameable by malicious entities. The method is to purchase some BTC and then signal in a way that does not represent the wishes of the community.

There would be public outreach, coin voting, and using every means at the community’s disposal to make sure that everyone is on the same page and such that the harmonious hard-fork would be as clean as possible. Ethereum should not be the point of reference, it should be testnet4. Yes, just a second, I am almost done making testnet4.

He actually believes that what has been proposed here is something that makes the hard-fork easier. You almost make the hard-fork a not-so-hard-fork. That is why he believes that you may have opened up some other issues.

Yes.

If we are comfortable with this approach, then in time we would be inclined to perform more and more hard-forks in the future as we get more comfortable with it, because it would not be as difficult to execute any more, based on your method. So this might open up a pandora’s box of more hard-forks in the future, which might alter the immutability principles of the Bitcoin blockchain. Although I believe that we need a hard-fork eventually, we have to do it in a way that does not set a bad precedent for the future. We must do it with precaution to minimize such jeopardy and pitfalls in the future.

The proposal does not change that the hard-fork needs consensus. This is still a requirement.

Many people in the development community share these concerns, and this is part of why there was not much published after the meeting in New York as well.

When this idea was originally thought of, it got termed an “evil fork”. That was the term. Precisely because if you have 95% hashrate then you can perform any type of change you want. But if we had a simple hard-fork, that means the miners are kind of threatening the community. What if it is not accepted by the market or the users? So in that sense it is threatening. But miners want to kill the old chain.

We don’t agree. Part of the proposal makes it easy for the side rejecting the fork to fire the miners. It has a switch to ignore all the blocks produced on the “evil fork” chain side. There has been effort to make sure that the miners don’t control it. There is a risk, I agree. It has that risk, but he was saying that when you upgrade, you have to make a choice. There is no default choice. If “A” is the high hashrate side, then “B” will make a checkpoint or change the PoW or something, but it would be implemented and ready for that eventuality. Let’s say that we don’t know what the users want, even after coin signalling. Perhaps the old chain doesn’t exist, maybe it does, but we make it possible for it to exist anyway, without making it a default.

Is it possible to switch from PoW to a mixed PoW+PoS system? Would a mixed system be much more difficult to attack?

Research does not support that.

If someone proposes an evil fork, then perhaps there are attacks on the old system. […..] …. that mechanism could be used to coordinate that fork away.

My main point was that the mechanism of people signalling their support for a hard-fork, with their coins and UTXOs, could be used to also change the PoW function. If miners want to go against the economic majority, then the users can show support for a PoW switch and continue on, and leave the miners behind. I hope this will never happen. But it makes it easier to do so. I have bobchain and it’s the best thing ever. Because this thing could happen, I think it incentivizes people to cooperate.

There is a research question in there. Could these hybrid systems be used? All of the PoW+PoS hybrid proposals in the past were very obviously broken. Each one that was reviewed opened up new serious attack vectors. Perhaps in the future someone will come up with a strong hybrid. I look forward to the research, but we don’t know a way like that.

Using coin signalling to start a new chain could in theory work, but we don’t know how to do both of them at the same time. We know how to say, at this point, start this new thing, and it’s PoW. But not both things at the same time in the same change.

It gets into a lot of complexity. All of the hybrid altcoins end up doing checkpointing with a central point of control. It’s interesting to note that.

[Short break.]

Long-term goals for Bitcoin and fungibility

So what about some long-term success goals for Bitcoin? What are the things we hope to achieve over the next 5 or 10 years?

We have to remain secure. We need efficient ASICs. I argue that we need fungibility. It’s one of the important distinguishing properties of bitcoin, that it is permissionless and global.

We need competitive fees. They can’t be too high because they might prevent use cases. We need lightning and Bitcoin on-chain. We want, presumably, many users. We want everyone to benefit from this technology and for everyone to own bitcoin.

We need long-term confidence for this to function as a store of value. We need people to be confident that these properties will survive. We need to also survive regulation risk. There are fewer regulatory risks than there were years ago, but there are still some.

I would say that positive marketing, like the marketing discussion earlier, could help these things. We could attract new users. We could explain the benefits to any users. We could figure out what people like. If we had positive marketing, it could help confidence. A nice positive bitcoin business ecosystem could help regulators feel more comfortable and help reduce regulatory risk on our industry.

If we look at this at a high level, it helps developers figure out what to work on in the short-term. We care about fungibility, therefore we work on new protocols to create fungibility in various ways. Maybe we try to make mining sufficiently decentralized to achieve fungibility in practice. And perhaps we are careful [Chrome crashed].

I am suggesting that it is useful to have a framework for discussions. For people who are making ASICs or doing payment processing; if they all agree that over the next 5 to 10 years these are the objectives, then great. I think that sometimes we get stuck in a mode where we don’t think about these long-term objectives. What do the miners think about this? It’s like me going to the ASIC manufacturers and saying we need 10 nm tech. But maybe 14 nm is better. How would I know? I should let them inform us. If we agree on the high level details, then we can let people who specialize on the details do what they are good at, including people who are good at development, or people who are good at marketing especially regarding why people should buy bitcoin.

Perhaps miners feel that there are long-term priorities that are different; perhaps for bitcoin transactions you would pay more because it is a permissionless system. Perhaps you would pay more for this transaction because other systems would block those transactions. In an ideal world, perhaps transaction fees would be low. Yes, externally, competitive. They can’t be punitive of course, only those who would be extremely

What is their vision for what we need or want in 5 years?

Fungibility is a relatively new term to us. I think the decentralized money is– money should be decentralized. Specifically, that’s something I agree with. Fungibility is a new word to this community. What does it mean? I think it will create lots of disagreement in this term. We want a united network. Store of value. Maybe we need to keep the principle as simple as possible. If it is too complicated, it needs to be in a single term that everyone can understand, and not something that will create lots of disagreement.

What is fungibility? It is a big word. We can give some examples of things that are not fungible. Paypal has a bad reputation because they suddenly one day stop accounts, even though you had done nothing wrong. And then you have to argue with them for 6 months to get your money back. That is not fungible.

Fungibility technically is the property that every coin has the same value as every other coin. Specifically, in the paypal example, someone who receives money as a paypal user may fear that paypal will revert or block the transaction, freeze the account, or undo their ownership of their own money. So fungibility, in bitcoin, translates to the inability to censor transactions. This relates to privacy and decentralization. Maybe fungibility is too high level of a term to describe what we want here.

I can give an example. Let’s say that some people give me 1 BTC. It should not matter which person gave me the 1 BTC to pay you. They should all be 1 BTC. That’s what fungibility means. There is no “dirty coins”. There is no blacklisting.

If coins are not the same value as each other, then they would be worthless.

Paper cash does not have this problem. They have serial numbers, but the serial numbers on the dollar bills are not used. It’s not your fault. If somebody gives you some money that they have obtained in a crime (or ethnic crime), in most places you are not in trouble for spending that.

To be clear, this is not necessarily about crime. It could also be for funny examples. Let’s say that if you have 25 BTC that are freshly mined in a block, then they should not be any different from any other 25 BTC. What are you talking about? You mean 12.5 BTC. No, 3.125 BTC, look to the future my friend (where testnet is).

Another related concept to fungibility is that Bitcoin is permissionless. You don’t need to ask for permission to sign up. You don’t need permission to send a payment to a person. There are things that reduce fungibility. There are some companies that are trying to analyze the blockchain to claim some coins are worse than others because of their association with people and payments and transactions. That is very bad for bitcoin. We should make sure that people do not have to be involved with a government or bank in order to send a transaction. It should be cash-like.

Maybe a remark here. I think that point number 4 on the board, “many users”, should be higher on your list of examples; I think the second point, fungibility, should be in that place. More people using it, and more people knowing about the technology, immediately creates bigger fungibility. A very good example is what I saw happen in just a couple of months in Ukraine: a bank included an option where you can keep bitcoin on your account and buy it through the bank. People started to buy it even just to try it, and they created huge volume in just one single month. And right now they are creating a lot of … and I am receiving a lot of email from one small country. They have been creating 10 different events, and people start reading about the technology and start trying it. What bitcoin is still missing is good documentation and good representation by someone. So the users should be educated, and to educate the users you need lectures, reading and talking about bitcoin, marketing materials explaining bitcoin, and so on. So “many users” depends on documentation, on lectures, on books, on mentions in the news, in multiple ways.

Among this group, only I am running an exchange, other than the person that had to leave a moment ago. I have first-hand experience with regulators. Regulation in China is big but … in the U.S. it might be even worse; the pressure of regulation is heavy. A lot of those exchanges have problems with coins getting frozen because of association with money laundering. If bitcoin becomes fungible, I am afraid that you would be even more subject to regulators.

He feels that the regulation pressure in China, is big, but the regulation in the United States is even more. There are even more regulations here in the U.S..

Bitcoin today in practice is fungible. You can trace it, but it doesn’t matter which BTC I give you. It’s still a BTC.

For example, in my exchange, there are some funds frozen by the government because of money laundering suspicions. We hired experts that analyzed the blockchain to trace the source and movements of those coins through the blockchain, to prove that we are unrelated to the money laundering activity. So that is why we need the traceability of bitcoin in the blockchain.

I think bitcoin might not be comparable to cash for this, because in bitcoin anyone can track the source, and with cash it’s very hard to trace the source.

That is exactly what fungibility is about. It’s about making bitcoin more like cash.

Yes it’s like cash, but it’s not…

The bitcoin whitepaper calls it “p2p electronic cash”. We think it needs to be more cash-like.

Maybe like cash, but anyone can trace where it is coming from.

Yes, but not reliably.

Yes, but we are talking about long-term goals.

What I heard is that, what I understood, so please correct me if I was wrong, if Bitcoin becomes more fungible then regulators will be more burdensome to you because they would not be able to track Bitcoin. I think it’s the reverse. If Bitcoin becomes fungible, then regulators will not go to you because they will not be able to extract information.

I don’t know how the U.S. police do it, but the way law enforcement in China works, if you cannot provide the source-of-funds information to them, that makes them even more suspicious of the outcome and the source of the funds. This might make them take action to freeze funds. This is why he thinks fungibility in Bitcoin could create even more regulatory pressure for the exchanges in China.

If the coins are fungible, then they cannot be frozen. By technical means, regulation cannot be…

You do not enforce the regulations in the protocol itself. The enforcement would happen at higher levels. You could audit businesses, without modifying the fungibility of bitcoin itself. [The AML/KYC happens at the business level, not on the blockchain.]

Bitcoin in many ways behaves like cash. On this particular issue, if you want to change the protocol and make bitcoin even more fungible at the protocol level, I believe it’s just a wish. You may not be able to make it happen. It’s because, you know, there’s a difference between coins on the blockchain. Some coins are dirty. Some coins are clean. That’s just how they are. You can’t erase this difference. You can’t make them the same.

We can erase these differences.

Let me put another angle on this. The other part of fungibility is that, even if you could trace the origin of coins, there should be an expectation in Bitcoin that if I create a transaction and I pay a competitive fee, the coins should go through. The transaction should happen regardless of whether governments want to block that transaction and prevent it from happening. In ChainAnchor they wanted to go and block transactions that did not have correct AML/KYC. To some in the Bitcoin community, this is more concerning than fungibility itself. Even if you could trace the origin of the coins, at the very least you could not prevent the transaction from happening. So you could still trace the origin of the coins, but the transaction would never be stoppable.

Gold is legal and is highly fungible. If there is tainted gold, and tainted serial numbers, you can still melt the metals down into liquid and get untainted gold. So we can do the same with bitcoin.

In a system with good fungibility, regulatory compliance can be achieved even better than it is today. If payment protocols between exchanges allowed for the sharing of identifying information on that other network, that would improve regulatory compliance. If the system is not fungible, then we have international uncertainty created by different policies in different jurisdictions, which creates uncertainty about coin value. It is perfectly technically possible to make coins that are absolutely always equal with equal origins. There are competitors like zerocash (zcash) that do this as their competitive basis. If bitcoin is poor at this, then it is not as good as digital gold and not as good as a store of value, and we could see bitcoin out-competed by those competitors.

The fungibility aspect is a huge competitive advantage of Bitcoin versus the existing financial system, including Visa, Paypal and Mastercard. The greatest advantage that Bitcoin has is that it eliminates counterparty risk. It can eliminate counterparty risk along with legal risk. You do not need to underwrite your counterparty’s legal standing. The moment that they deliver your bitcoin, you have the good. The underwriting in today’s financial system has a significant cost.

Maybe explain underwriting? Well, it’s the idea of taking responsibility. If I take a bitcoin from you, it’s not up to me to determine if you are complying with the laws of some other jurisdiction. Not having to underwrite means you are not responsible for the liability of the other party (the counterparty).

The point is that it’s cash. If I pay you bitcoin, you own the bitcoin. It doesn’t matter where it comes from. There are technical ways to make the origins indistinguishable (the same and equal).

People talk about chargebacks. It goes deeper than Visa chargebacks. The interesting thing about this is that when you are dealing with transactions worth millions and billions of dollars, they are not concerned about chargebacks, they are concerned about solvency of the counterparty. That’s why we have title insurance. With money itself, there is a lot of underwriting necessary similar to title insurance. When you remove that need, when you make bitcoin fully fungible, that’s a huge advantage that bitcoin has over all other monetary systems.

This is also very important when we think about automated payments, machine-to-machine transactions, and smart contracts. A smart contract or a machine can’t evaluate the counterparty risk or AML counterparty risk in a transaction; they can only look at the transaction. The only way to make a machine automatically do this is to make the system more permissioned and try to eject “bad” users from the system, which would harm the permissionlessness of bitcoin. So we, the developers in the community, think that being cash-like is a very important competitive advantage of Bitcoin which supports other competitive advantages like smart contracts and machine-to-machine transactions, and that if we want bitcoin to grow in the world then we need to protect this advantage and find ways to further it. If we don’t do this, then Bitcoin might be supplanted in the market by alternatives (like zcash, monero, etc.) which do.

Legally or technically?

If legally, then this might not be universal across the countries.

You make it work technically so that the legal choice becomes irrelevant. You don’t need to make that kind of decision then.

If you are trying to make this technologically fungible, then you have to make a serious change to how bitcoin works?

So there are ways with no protocol changes to achieve much better fungibility in Bitcoin. For example, this is done with lightning network. Some technology, like coinjoin, is built into bitcoin from day one. One of the things this results in is that someone who is engaged in criminal activity can already get pretty good privacy in the system. They can mine in order to get fresh coins. They can use coinjoins. They can swap their coins with other users. So the criminal actors already have fungibility enough for them. So the remaining question is what about the non-criminals? What fungibility do they have? What risk are we placing on users in our system?

Criminal users do not need as much fungibility as you might think. They need money laundering. The irony is that they are trying to make dirty money look clean. They actually want a paper trail. Criminal users want a paper trail. That’s the weird paradox here. They do not want an invisible trail.

Most of the people doing coinjoin, you think, why do they do that? I think most of them do it because they are probably not very clean, so they use coinjoin, ….

Criminals make false transactions that make them look like real transactions like “I sold a car” or “I sold a t-shirt”.

I have been running my joinmarket coinjoin client the whole time on my laptop for this whole event. Why should I want to use a system that has poor privacy? I would rather use monero or zcash. Why would I want money that I cannot use to pay my obligations or to buy goods and services? You have competition from centralized services (which often have good privacy), and decentralized things which offer better privacy than bitcoin.

I received payments for moderating a forum online. Some of the coins from those payments came from totally lawful gambling sites, and the coins went to the forum, and then the forum paid me. And I deposited to Coinbase, and then it took weeks of arguing with Coinbase to get it fixed. It was not Coinbase’s fault. They exist in a regulatory environment where they are forced to act in a certain way. The United States is very negative about lawful gambling services from other countries. Coinbase had to do this because they live under the jurisdiction of U.S. law. It’s unfortunate because it’s an application that U.S. law hates, even though I was paid for something completely unrelated to gambling. I never gamble. This is an example of getting hurt by a lack of fungibility in Bitcoin.

I think that responsibility should be applied to the user, and not to punish the entire system or the exchanges for the behavior of bad actors. Fungibility protects the innocent. Fungibility is about protecting the innocent. The criminals want to money launder, and they want public records.

I will tell you a true story. Some user a few months ago deposited 1000 BTC to an exchange. They believe the coin comes from a darknet coin mixing service. So he seized the money and asked the user for their photo ID and their AML document. After receiving the document, what did you… Did you release the coins? Yes.

Wouldn’t you like a system where you can’t even tell whether it’s from a darknet market in its history?

Well they might be required by the police to do this.

I have an interesting experience: my employer is regulated as a commodities trading platform. …The interesting thing is that even if bitcoin is completely fungible, the regulators are completely okay with that. The regulators require us to perform AML and KYC regardless of the fungibility of the underlying coins. So if the coins are all equal, with equal origins that are indistinguishable, the regulators are okay with this. We are still obligated to investigate who our customers are, but the actual network-level blockchain sources are completely unimportant.

….. once it’s in your wallet, it’s moving around like hand-to-hand cash in an economy, and in an exchange it’s more like a bank account where you are transferring it to someone else. …. cash on hand is more fungible than cash in a bank account.

We were talking about fungibility, AML and KYC for exchanges. Could you share some experiences from your exchange about that?

If there is a problem, then the police and government will come in and want to find out where the coins went.

The assertion was that fungibility is a desirable long-term goal or property. We are trying to talk about long-term characteristics that are interesting to all of us as a group.

It’s hard to say I think. If you have fungibility, then it is what it is. It could be the argument that, if everything is the same, then there is nothing you could do. You can’t really say it’s good or bad right now, but right now the regulators do ask for where the coins go or where they were from. If it’s not to a known address, then there’s nothing you could do about it anyway.

Confidential transactions? Could we have updates?

As some people here know, there are a number of technologies for improving confidentiality and privacy in bitcoin. Coinjoin is one of these R&D efforts. Coinjoin has a limitation, which is that it does not hide the values of the amounts being moved. Commercially, the amounts can be valuable information. The difference in values from before and after a coinjoin can be used to dis-entangle the coinjoin and find out the information. There is an R&D effort for signature aggregation which can increase blockchain capacity by 30%. It will also let you save fees by using coinjoin; it’s a side-effect of aggregation.
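
As a toy illustration of that amount-based dis-entangling (purely hypothetical numbers, and a naive coinjoin with no amount blinding): an observer who sees only the input and output values can often find a subset of inputs that balances a subset of outputs, which undoes the mix.

```python
from itertools import combinations

def unmix(inputs, outputs):
    """Try to link one participant's inputs to their outputs in a naive coinjoin
    by matching subset sums (fees ignored). Unblinded amounts leak this linkage."""
    for i in range(1, len(inputs)):
        for in_subset in combinations(range(len(inputs)), i):
            target = sum(inputs[k] for k in in_subset)
            for j in range(1, len(outputs)):
                for out_subset in combinations(range(len(outputs)), j):
                    if sum(outputs[k] for k in out_subset) == target:
                        return in_subset, out_subset
    return None

# Two participants join: 1.0 BTC -> 0.7 + 0.3, and 2.0 BTC -> 2.0 (values in satoshis).
# The distinct amounts make the partition obvious to any blockchain observer.
print(unmix([100_000_000, 200_000_000], [70_000_000, 30_000_000, 200_000_000]))
```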

About a year ago, there was a publication for confidential transactions. It exists in various sidechain systems now. CT (confidential transactions) makes the values or amounts of bitcoin transactions completely private. We have been working on making this more efficient. Unfortunately the previous construction of CT added size. We have since made it 20% faster. We have made it natively support colored coins and assets, making them private, which is perhaps more important for other systems than it is for bitcoin. We have also more recently come up with better ways to combine it with coinjoin that make it easier to deploy. We have been working on improving that technology. One of the great things about CT technology is that it only has a constant factor cost of scaling. If you were to apply confidential transactions to the bitcoin network, the transactions that use CT would be larger, but that would be the only downside. The ring signatures in monero and the zcash tech have worse long-term scaling characteristics; CT is better in this aspect. I hope that this confidentiality work will benefit bitcoin in the near future, and if not bitcoin then at least sidechains.
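
For readers less familiar with CT, here is a toy sketch of the Pedersen commitment idea underneath it, with tiny insecure parameters chosen only for illustration (real CT uses elliptic-curve groups plus range proofs): commitments hide the amounts but remain additively homomorphic, so a verifier can check that inputs and outputs balance without ever seeing the values.

```python
# Toy Pedersen commitments in a small multiplicative group (NOT secure, demo only).
p = 2**61 - 1   # a prime modulus, far too small for real use
g, h = 3, 7     # assume nobody knows log_g(h); real systems derive h carefully

def commit(value, blinding):
    """C = g^value * h^blinding mod p hides `value` behind the blinding factor."""
    return (pow(g, value, p) * pow(h, blinding, p)) % p

# One input of 50 units, two outputs of 30 and 20; blinding factors chosen to cancel.
c_in   = commit(50, 1000)
c_out1 = commit(30, 400)
c_out2 = commit(20, 600)

# The verifier checks that inputs and outputs commit to the same total without
# learning 50, 30 or 20: the commitments simply multiply out to be equal.
assert c_in == (c_out1 * c_out2) % p
print("amounts balance, values stay hidden")
```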

You could say that the fungibility in an exchange is different from the fungibility of a coin in your pocket. This is the same for physical cash and bitcoin. There are a lot of people who really love fungibility. They would be very sad if bitcoin became less fungible. If you were to ask many of the bitcoin holders: if bitcoin lost this feature, would it upset you or would you stop using bitcoin? If it lost fungibility, many people would get upset. It’s something that many people are passionate about. That’s why you hear all this excitement about new tech, and about decentralization as important, because decentralization is the current mechanism that bitcoin is using to have fungibility. It’s the assurance that someone will eventually process your transaction. If one miner has a government that says don’t process this transaction, then someone with a few terahashes in their garage in another country will still be able to take that transaction because of decentralization. So you need some reasonable level of decentralization to guarantee that all transactions will get processed.

Lot of stuff. But government and police may not share the same view. If we change the current fungibility technically, governments may change their attitude toward bitcoin.

Properties of good money: http://contrarianinvestorsjournal.com/?p=391

He thinks there are more people doing bank transactions than people using bitcoin. So does bitcoin really need fungibility? He believes that maybe we should focus more on the underlying technology for bitcoin payment processing, like the lightning network, and on making the infrastructure more robust.

Keep in mind that if you don’t need fungibility in a system, then you do not need mining. You can use banks. You don’t need infrastructure. We have a severe risk of competition from highly centralized, efficient systems.

We need … very easy to … .. banks and governments… still… maybe we need fungibility, but maybe we can make this fungibility in another layer of this network, like sidechains maybe, and make it into the mining network, but on the main chain we should keep the protocol as simple as possible.

I want to make more of a point there. If the main chain is fungible in the sense that you can do a transaction and it will always be mined, then you can make layers on top to make it more fungible. You can do coinjoin and lightning as second layers. The protocol does not have to be complex. If someone wants to make bitcoin less fungible, they are not able to just go to you and say here’s a list of addresses to blacklist. I think right now it’s not a good story. When we look at the hashrate graphs on blockchain.info, it’s not a true story, but it hurts investor confidence. We need to assure them that they will be able to spend their money. At the exchange level the privacy might not be good, but they need to be able to move money around.

Machine-to-machine transactions and smart contracts, it might be difficult to make alternative protocols reliable if bitcoin is not sufficiently fungible. Automated decisions made by wallets will not be able to respect the non-fungibility of a coin, which is an invisible property, which makes all of these bitcoin systems less usable. However, I agree that we should move complexity to other layers and keep the base layers simple.

My impression is that we are more in agreement than it seems.

A point there is that in bitcoin, even with no change to the system, fungibility changes over time. Originally in bitcoin wallet software, a new address was used for every transaction. Back when bitcoin started, there were no companies like Chainalysis or Elliptic that connected to every node in the network and monitored everything. Sometimes tech has to adapt to the world, in the same way that the block size has to change to adapt to the world.

Like a VPN, you run it on top. I think we are in agreement on this. I agree. It’s not a magic bullet. In a bad world where bitcoin was very unfungible, then lightning and sidechains might not even be possible, or they might be unreliable. You cannot build a fungible system on top of one that is non-fungible. It’s not possible. It’s effectively impossible. Anything built on top of it cannot be fungible.

If you want to break those privacy properties, you break the underlying layer. This is not a detail. This is important. Building something “strong” on top of something “weak” is silly. We should make the base bitcoin layer as strong as possible. This sounds like a detail, but it’s not a detail.

His point is that he still has to wait and see, after the exchange has used this fungibility functionality, before they will know the actual effect on their operations and systems. They think it is too premature to talk about its actual effect. So let’s not spend too much time on this. Let’s move on to something else.

Let’s take a 5 or 10 minute break and then we can continue.

Hard-forks

What topics do the miners consider important that have not been discussed?

Please have a seat so that we can continue and wrap up. Long day, lot of information, exhausted. We still have a few more topics to touch before we wrap up the meeting. Let’s go with the miners first. Who would like to start?

Under the existing situation, the … of Ethereum… have already given us an example of a hard-fork. Bitcoin could have a similar situation in the near future. How should we solve this problem? In the long-term, we should promote the use cases of bitcoin. In the short-term, we should build a wider, broader consensus. We need to form a platform to communicate and talk. It should include Core developers, miners, Bitcoin companies, bitcoin exchanges, and other users. Based on all the lessons we have learned from the other coins and their histories, we should give up the fighting and conflict. We should pay attention to communication and cooperation. We should build such a platform.

Another topic we would like to talk about is that right now it’s July 31st and it’s the last day of the HK agreement. We think that individuals should give an explanation to the community.

How should we make that communication platform? What explanation should we give to the community? The miners need to give some kind of explanation to the community as well. We have those pressures as well.

Personally I am happy to see work on that.

If someone was to post about the status of this, to draw attention to the documents and code that he has written, would this be the communication you are looking for about the agreement?

I believe that would help a lot. Okay, I can do that. One thing that I wanted to ask about that, on that subject, because of the people opposed to the hard-fork from the agreement, and because of the people who are saying deadline time is up time is up all the time, maybe we should remove the pressure, and I will continue to work on a hard-fork anyway?

We should research this. He doesn’t want to be working under pressure. There are many open problems still. It’s better to do this without the auspices of pressure.

Okay a better communication would be, how would you like to do this work? I think it’s important that people do not continue to perceive that work as a closed door Hong Kong agreement to do a hard-fork. The hard-fork itself must be designed organically and normally.

So perhaps a question would be, on a personal level, would you want to work on a hard-fork proposal? Yes I am going to do that, but if there’s an agreement, then the community perceives that negatively.

So maybe present about the deliverable, and now it is better for the larger community to collaborate on it. So further collaboration and work would be open collaboration and work. Would that be agreeable and good?

You can make a proposal to the wider community. The proposal might not be completely finished, it could be a draft, it’s obviously something that requires further work, it’s the proposal so far. The HK agreement said after segwit, too.

It is much better to admit that the time is delayed. It’s not a problem. We can just say frankly, it’s delayed. For lots of engineering reasons; ethereum stuff, we have lessons learned; segwit is delayed, etc. Are we okay with saying this? It’s not just him, it’s everyone.

It is not fair that the attention has shifted to him. It’s not all on you. We want everyone to be aware of this. If this work going forward is seen as a forced outcome of a closed room agreement, then it will be opposed on that basis alone. I think this work is good and useful, but we have to remove the specter of “this is a forced change on the network”. We need to collaborate to improve that. We can say that the process is delayed, segwit caused delay, complexity of this caused delays, and that’s fine, but we need this to go from an “HK agreement” proposal to being a community proposal. I am saying this for the sake of the proposal. Without this, it cannot get widespread public support.

May I ask a question. Why is he the one working on this but it was never communicated. Why wasn’t it shared? Why did we not hear about this work? Why did you not hear about this work?

I can’t stop them from working on things. He published about it on the bitcoin-dev mailing list. I think we all just are very bad at communication. So within the developer community, some of us were not aware either. None of us were eager to talk about this when it sounded like this was going to be used to block segwit.

His belief is that what has been proposed, trying to turn this HK agreement into community-based consensus work, is somewhat hard to do and impractical. There could be another way to realize that. We could do it through a foundation that we were proposing earlier. It could be the consensus of a new foundation to try to realize that.

I think that if you tried to do that through a new foundation, it would lead to a lot of backlash against that hard-fork plan and organization.

You would have to get people to accept and embrace the foundation. That would take time.

We could explore the possibility of doing that. We can be open to working with other efforts.

It would be extremely challenging to do it that way.

We should talk more about it.

At the bare minimum, the best suitability for a foundation is things like marketing efforts, not hard-forks.

Regardless of which plan we are trying to adopt, it’s not reasonable to expect 100% consensus. So the question is to what extent we want to reach consensus. How much effort would we like to put into engaging the community to get there?

When you said you wanted to use the HK agreement to make it a community proposal for a hard-fork, you meant the development community? Or do you mean everyone?

I mean everyone. The point is that politically, the Bitcoin ecosystem should not accept imposed rule-changes on the network. And so, a hard-fork that comes out of a closed-door meeting sounds like an imposed rule change on the network. There are many people who will principally reject this, reflexively. I want there to be collaboration. Most people will ignore it. But I want there to be collaboration so that we can say this is a product of the Bitcoin community. It cannot be a closed-door agreement.

This was Luke’s post on the mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012389.html

At this point, it would be a little bit difficult or challenging to reframe the HK agreement as a broader community-based effort. Trying to reframe the HK agreement as an open community agreement would be difficult. The simplest way is to just base it on the HK agreement, then try to pull people into that and gain consensus on it. That would be the simplest way.

My understanding of the plan in February was that we would make something, we would then propose it to people, and then hope they would like it. We would hope the community would reach consensus on it, and we would ask them to discuss it and build on it and so on. So I think that’s good, to me personally.

In order to make bitcoin grow sustainably in the future, it’s important for us to build a platform here as I mentioned before. We need communication. We need to resolve differences. We need to bridge gaps so that we don’t have those situations again. A lot of us in this room did not even know about that work. We should all try to prevent that from happening in the future, and a platform could facilitate that.

We should work to improve communication.

Tomorrow we don’t go to Stanfordland until 11. Perhaps tomorrow we could talk about how miners can get into better communication with Core and how Core can have better communication. Yeah we could spend a few hours on this tomorrow morning. We have breakfast at 9am tomorrow again.

My point is that the market cap for Bitcoin is now almost over $10 billion USD. It is big. Driven by private interest and technological interest, there will be a lot of … popping up one after another. And a lot of Core developers are here; many of you are the most influential figures in the Bitcoin community. So the sooner you form such an organization, the better positioned you are to protect yourself from those potential competitors in the future. So, don’t mess up.

We could post on the medium blog and link to his code and proposal and write a few things. Maybe, I think, it’s himself who should tell the community. He can sign his name on the blog post. The update should be provided in the same location. We just need some channel to announce it to the public for mass media and so on.

I think it would be good to post it on the bitcoin-dev mailing list.

He should also mention the contributions from the other contributors. “In the future we need to move forward and have better communication”. Does that sound good?

We need to be careful to not make this sound like bitcoin is talking about hard-forking immediately after Ethereum Foundation’s blunders.

We have refreshments in the back.

Breakfast and the meeting start at 9am tomorrow morning. We can talk for an hour or two, then we will head out to Stanfordland.

Day 3

We don't have a lot of time for discussion in the morning; we have a hard stop. We will have a wrap-up discussion, then we will leave for Stanford and we can share rides. I have already sent the parking email, and we can all park at the same location and walk over to the building.

Block withholding attacks

I had a brief chat with a developer during the Hong Kong Scaling Bitcoin event. I think a miner and I share the opinion that it is quite a challenge for any pool operators to … I think most of the developers haven’t recognized how risky this could be. I would like to hear more from you about this.

From my side, I am not familiar with mining. The attack is when one of your participants finds something with PoW and it does not broadcast it to the pool? Okay.

And this is particularly acute for pay-per-share pools because they continue to get effectively the same return until the pool goes bankrupt.

We think there have been real attacks against ghash.io and we also think it's the main reason why they had such bad luck for a long time. There is also one more possibility: if one pool suffers from this kind of attack, they may launch the same attack against another pool if they think that other pool was attacking them. If this happens, many pools may just attack each other with withholding, and that's quite dangerous for the whole industry. It's like a death spiral.

https://petertodd.org/2016/block-publication-incentives-for-miners

I don't think you need to talk about a death spiral; block withholding is important anyway. One of the challenges with fixing this is that there's no fix that we know about that doesn't also kill p2pool, or kill a totally decentralized pool as a viable option. Several years ago, this was much more of an issue than it is now, because it seems that p2pool has died its own death independently of this attack. There are several ways to fix block withholding. The mining pool could retain some secret which it gives out only after you return the block to it. And then this is made part of the Bitcoin consensus rules. The most straightforward way to deploy this would require a hard-fork. Unfortunately this hard-fork would be incompatible with the safe hard-fork method that was discussed yesterday. There might be some ways to fix that and improve that.

There is a quasi soft-fork way to fix block withholding attacks. Unfortunately it's kind of ugly. I am not sure if it's a path we want to go down, although it's easier to deploy. The basic idea in all of these fixes is that you can make it so that the hasher can tell if it has a valid share or not, but it can't tell if it has a valid block. So it emits the share, and only with the secret data can the final check be done to see if it is a valid block. How CPU-costly is checking? It's the same as validating a share.
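
To make that split concrete, here is a minimal sketch in Python, assuming a simple hash-commitment construction; the targets, the commitment, and the extra-entropy size are illustrative assumptions, not any concrete proposal or BIP.

```python
# Minimal sketch of the "hasher can check shares but not blocks" idea.
# All parameters and the commitment scheme are illustrative assumptions.
import hashlib
import os

SHARE_TARGET = 1 << 240   # easy target the hasher checks locally
BLOCK_TARGET = 1 << 224   # stand-in for the consensus target
EXTRA_BITS = 16           # entropy contributed by the pool secret

def h(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

# Pool side: keep a secret and commit to it in the block template so the
# hasher cannot learn it from the work it is given.
pool_secret = os.urandom(32)
secret_commitment = hashlib.sha256(pool_secret).digest()

def hasher_has_share(header: bytes) -> bool:
    # The hasher can tell whether it found a valid share...
    return h(header) < SHARE_TARGET

def pool_has_block(header: bytes) -> bool:
    # ...but only the pool, which holds the secret, can tell whether that
    # share also satisfies the (modified) block-validity rule, so a
    # withholder cannot know which shares to withhold.
    return h(header) < BLOCK_TARGET and h(header, pool_secret) < (1 << (256 - EXTRA_BITS))
```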

As far as I know, there is no way to detect all the possibilities that signal to the attacker. There are also some … that the detector can detect from…

One of the things that fixing block withholding helps with is the issue of “accidental block withholding”. Because there is not enough nonce space in the block header, mining software has become sophisticated about reaching into the bitcoin block and mucking around with its internals. This has made it easy for authors of mining software and device firmware to mess up their handling of mining such that they correctly return shares (because if they didn't do this then they would notice immediately) but they don't correctly return blocks. There have been a couple of cases where this has occurred. One reason for this is that stratum encodes the difficulty as a floating point number in JSON or JSON-like format, which has made it easy for people to mess up type handling and do dumb things. There are some cases which I am confident were accidents. But from a pool's perspective, it's even worse than malicious withholding: a malicious withholder will at least be strategic, so it's kind of worse if it's not even a malicious event. I think it is fair to say that I would like to fix it. When we talked about fixing block withholding several years ago with the mining community back then, there was relatively little interest in fixing it. Part of the reason for that lack of interest was that, at the time, there were no pay-per-share pools. Many of the existing pool operators (at that time) felt that they did not have to worry much about withholding and that the attackers would only be harming themselves. Since then, the pooling climate has changed. Some people have published analysis showing that, particularly with the existence of very large pools, there are ways to strategically mine that profit from withholding rather than merely being destructive with it.
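
As a hedged illustration of that floating-point pitfall, here is a toy example; the message shape below is a simplification of a stratum-style difficulty update, not the exact wire format.

```python
# Toy illustration of how a floating-point difficulty field in a
# JSON(-like) mining message can be mishandled. The message below is a
# simplified assumption, not the exact stratum protocol.
import json

msg = '{"method": "mining.set_difficulty", "params": [1000.0]}'
difficulty = json.loads(msg)["params"][0]   # arrives as a Python float

# Difficulty-1 target commonly used when deriving pool share targets.
DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

# Doing the target computation in floating point silently loses precision
# on a 224-bit number; an exact integer computation is shown for contrast.
# Bugs of this flavor can let shares keep validating while block handling
# quietly breaks.
approx_target = int(DIFF1_TARGET / difficulty)    # float math: rounded
exact_target = DIFF1_TARGET // int(difficulty)    # integer math: exact

print(hex(approx_target))
print(hex(exact_target))
print(approx_target == exact_target)              # False: the float path drifts
```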

In terms of choosing payout mechanisms, like pay-per-share, pay-per-last-share, all these different things, is this user driven? Do people want X or Y method? How do pools decide on this? What has been the recent demand or drive behind this?

You said it’s mostly pay per share?

F2pool and antpool and BW are pay-per-share. Is that mostly driven from the pool who says this is easier and better for us?

Only users want this. If you use PPS, maybe the mining pool doesn't care about operating it. Users do not like the fluctuation of the income. That's most of the reason why there are pools.

That’s the reason pools exist in the first place.

We were the first major mining pool in China to do PPS. You can see everyone else copied their FAQ from us.

It was surprising to me to see the migration back to PPS, because there was a period early on in Bitcoin's life where PPS was used, it was attacked, and then people stopped using it. I was surprised to see pools transferring back to that. I understand, though, the user concerns. One of the biggest block withholding attacks we knew about with PPS was that BtcGuild lost at least 1400 bitcoin due to block withholding. It seemed like it was inadvertent.

… ((lost connection to google doc, briefly commandeered another laptop))

Could you fix this in a soft-fork?

There is a soft-fork way to fix this. It's a little bit ugly. It's not a pure soft-fork. The normal way to fix withholding is a hard-fork that changes how someone decides whether a block is valid or not; it changes the definition of a passing block. The way you do this with a soft-fork is you impose both the original rule and a new rule. The new rule starts with zero difficulty and you ramp it up slowly over time. It lowers the network hashrate by a small fraction of a percent. You ramp it up over time to the point where it actually has enough bits of entropy to effectively stop withholding. It has the soft-fork-like advantage that it cannot encourage a network split. But in every other way it has disadvantages.
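
A rough sketch of that ramp-up idea, under assumed parameters; the heights, bit counts, and the secret-dependent rule are all hypothetical, not a concrete proposal.

```python
# Rough sketch of the "quasi soft-fork": a block must meet the original PoW
# rule AND a second rule whose strictness starts at zero and ramps up over
# time. All parameters and the secret-dependent check are assumptions.
import hashlib

ORIGINAL_TARGET = 1 << 224   # stand-in for the existing consensus target

def extra_bits_required(height: int, start: int = 500_000,
                        ramp: int = 400_000, max_bits: int = 16) -> int:
    """Extra zero bits the new rule demands at a given height (0 at first)."""
    if height < start:
        return 0
    return min(max_bits, (height - start) * max_bits // ramp)

def block_valid(header: bytes, pool_secret: bytes, height: int) -> bool:
    pow_hash = int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")
    if pow_hash >= ORIGINAL_TARGET:          # original rule, unchanged
        return False
    bits = extra_bits_required(height)
    if bits == 0:                            # ramp not started yet
        return True
    # New rule: a secret-dependent hash must clear a slowly tightening bar,
    # discarding only a tiny fraction of otherwise-valid blocks early on.
    secret_hash = int.from_bytes(hashlib.sha256(header + pool_secret).digest(), "big")
    return secret_hash < (1 << (256 - bits))
```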

So don’t fixes to block withholding inherently make selfish mining worse? Because the defense against selfish mining is that the miner can broadcast the block, and now they can’t do that.

In practice, no, they don't. One of the potential answers to selfish mining is that you can imagine a pool that is mining but not announcing the new blocks. One of the ways to avoid this is to have the hashers leak the blocks that the pool solves. With stratum, this is not possible. With GBT mining, it's possible to do that.

Is that block withholding? Or is that share withholding?

No, that’s selfish mining. If you imagine a pool with a lot of hashrate, more than 1/3rd. If a pool finds a block and instead of announcing a block, perhaps they keep the block for themselves and…after retargeting and so on. And really there’s no interaction here with block withholding. …so the solution to selfish mining cannot work with stratum. I don’t know if pooled selfish mining is a major concern, because it would be very detectable. All the miners would see that the blocks aren’t being announced, and at that point action could be taken.

Do mining pools keep statistics on the luck of different users? It’s not like you can do a whole lot.

Yes, they do that, but it's not useful. Particularly… you can't kick them off, it's not actionable. An attacker would spread themselves across many low-hashrate accounts.

Basically, if they are actively trying to hide from you, the only remedy is to close access to the pool to only friends and family. Which is not helpful to the idea of pooling.

We should publish a proposal (or two) on fixing block withholding. I would be happy to work on this. In the next 30 to 60 days, I will put out some kind of proposal on this.

Concluding discussions

We only have 30 minutes left. Who would like to give a summary talk? We don’t have a lot of time. So could you keep it short and simple?

Maybe we could close with a discussion about communication platforms, where we come together, and how we stay current with each other.

I think we have had very good discussions this weekend. I hope we have improved our relationships greatly. I would like to talk about how we could continue to do that going forward, how we can continue to have open collaboration.

My suggestion would be, not reddit.

I believe that because the market cap of Bitcoin is continuing to grow, Bitcoin enterprises and companies are the engines behind that market cap growth. They are the ones most incentivized to protect Bitcoin and its ecosystem. I am proposing for this platform or organization that the Bitcoin companies and enterprises, whether miners or exchanges or application developers, are the major participants. To start out with, as the first step, we can set up some kind of social circle. It's kind of like a consortium. We can connect through email and wechat or skype. There are all kinds of mediums and technology available for us to set up this circle so that we can start communication channels first. So the participants should be the major players in the Bitcoin industry, and they have to pay a certain fee to join this social circle. So it's semi-public, but it's not free. On the Core side, they can send some representative to represent Core development and join this social circle. And then within this circle we try to work together, with coordination, and try to establish some kind of foundation-like entity. I know you guys hate the idea of a foundation. For lack of a better word, I am using the word foundation as a reference. So this social circle is monitored by its chairman and its secretary, to host the communication between different parties and companies within that circle. So that is the proposal as the first step towards building this communication platform.

What do you guys think?

Sounds reasonable. I guess nothing seriously wrong with that. I have heard some conversations where people said let’s create that. Nothing has happened so far.

Who would like to take the initiative? Someone has to drive it.

Maybe using the Bitcoin roundtable as the name of the org.

They can organize everything on the legal level: how it will be most efficient to open it, and then we can just decide… or how they choose the chairman, some leaders, etc. And who will it be? So, any suggestions from the Core developer side about this proposal?

The pay-for-access component of it may have bad optics to the public when it's presented that way. Participation in any group has costs, regardless. There are a lot of costs that people have incurred to come here to this meeting. We should be cautious in how this organization is set up and presented, to avoid a bad image.

Maybe it’s not paid, maybe it’s sponsorship.

I think that there is, it’s useful to have a pay component to provide a neutral access control mechanism. The argument would be that “if the pay level is too high, then that access control mechanism is not particularly neutral”. I think this can be worked through and solved. I think it’s a potential source of issues that should be considered.

This circle is trying to protect the commercial interests of the Bitcoin companies and industry. I believe that bitcoin in the long run is driven by those enterprises and companies.

In the initial stage of bitcoin development, it's still a hacker culture. It's informal. Casual. It's relaxed. But now, at this point in its history, he believes that it should become more formal and formally structured, driven by commercial interest. So there could be a transition from a hacker culture to…

I think that there may be a mistaken belief that these are incompatible cultures. I think that if you look at all the protocols of the internet, then you will see that almost all of them are developed in the context of IETF, like the HTTP protocols. Most of the attendees at the IETF meetings are working for some of the largest tech companies in the world like Google, Microsoft and Cisco. They participate in an open environment, talking about technology. There is no fee to participate in IETF. Individual meetings themselves have conference fees, but the mailing lists are open. This environment is the one in which the Internet is developed. I agree that bitcoin as it grows needs to become more formal and professional. However, there are many forms that this can take. It definitely needs to be heavily driven by commercial players, not volunteerism. However, this does not necessarily mean that you have a hierarchical system of authority where an elected body is in charge of Bitcoin or something like that. An example of this is how the Internet is organized.

It's worth pointing out that we are all here. This meetup was able to be arranged without that central organization. It was not a top-down meeting.

This idea of “hacker culture” is actually not really what the IETF and these kinds of process organizations… I think this is actually just the outcome of professional development in a situation where you cannot impose on other people. I think this is what the Internet protocol saw as well. There are professionals developing these technologies. They are not in a situation where they can impose.

Decentralized professional development, which happens to look hackery, because the hacker behavior is decentralized, but not necessarily professional. So it is kind of similar.

He believes that this hybrid model is a good idea. He also believes that they can co-exist in a healthy and organic way. He was thinking that maybe there could be two payment and donation structures so that we can collect and solicit funding for this forum. Some industry companies could pay a fee to join this circle, and others could just donate to or sponsor this entity. For those who pay a fee, they would have a voting right. For those who donate, they would not have a voting right. And the opinion or voice coming out of this organization or circle would only represent themselves; it would not represent the entire Bitcoin community, because there are other organizations and other groups within that community. So that's his idea.

So….. like, one thing I want to say, … let’s discuss more, we’re not going to solve all of this in 5 minutes.

So… the one thing to think about is that today, bitcoin is small and friendly. We know each other. We know those companies. It's relatively small. Wait a few years. You haven't seen “more”, my friend. The point is that, if you look at HTTP, now there are very big, powerful companies involved, like Microsoft, and they want to control HTTP. And IBM wants to control it separately. Bitcoin is finance. When we think about setting up an organization, we're thinking about ourselves. But in 5 years' time, that will include Goldman Sachs and Microsoft. And they will each send 20 brilliantly manipulative people and they will bribe politicians and spread mayhem and chaos. We, the Bitcoin world, will lose control of this organization. Goldman Sachs will create 40 subsidiaries, they will each pay the X membership fee, and then they will have the entire vote of the organization. So we need to be mindful of this sort of situation. The Internet IETF bodies, for example, have evolved to allow technical development to proceed in a way that promotes good technical outcomes for users, even though some of the engineers work for Big Evil Corps like Microsoft.

Are they independent even though they are working for big companies?

Somewhat. That’s the objective, anyway, with IETF. Oh really?

This discussion is critical and important. There have been many good ideas expressed. We can’t solve everything here right now. There’s too much to discuss. We should not have the expectation of solving everything here.

We have three minutes. We have to find a place to park, then we have to walk. Stanford is down the street but it will take 30 minutes to get there. The Stanford campus is huge. I sent the parking instructions. That’s the closest to the building, so please check your email.

His point is that it doesn't matter if in the future some kind of bigger company like Goldman Sachs tries to take over. It's inevitable that some company will try to take control of the organization. At that point, we just have to move on and form another entity and another group. It's not going to be the only social circle in the Bitcoin ecosystem. There could be many of these. It's not that they take over one and then they get to take over Bitcoin. They can try whatever they want to take Bitcoin over. We can always change our tactics. We can move to other mediums.

As long as the organization doesn’t give the impression it has control over the protocol.

They will try to setup a circle. They are asking you whether to do it at this point. They don’t know if you are willing to join an organization.

They are offering a proposal that we can form this together at this point.

I think we are interested in any and all opportunities to collaborate.

We will try to setup a commercial circle. We hope that the Core developers will join.

In the afternoon, we can come back after we visit Stanford. We have an hour gap and we can come back here. I have reserved the conference room for the whole day. We can still come back here. Googleplex is not very far from here. It takes 30min to get there or less. We can still come back.

Dan Boneh discussion

http://diyhpl.us/wiki/transcripts/2016-july-bitcoin-developers-miners-meeting/dan-boneh/

Google Tech Talk

http://diyhpl.us/wiki/transcripts/2016-july-bitcoin-developers-miners-meeting/jihan-wu-google-tech-talk/

Other session

Miners have some other events later this evening. Perhaps as a group we should think about any closing summaries?

What do we want to say to any journalists that ask? I think it is important to present this as some people gathering together who have been in the space for a long time, who have had trouble in the past with communication. By seeing people face to face, and talking about things, it helps us discuss.

A comment I made before, a story to take to a journalist later, that bitcoin is a global decentralized system and it works just fine as long as we do our own thing. But it works better if we collaborate. With this weekend, we were able to get to know each other better, we were able to understand many of the things we had in common. We were able to open more lines of communication. We were able to improve our friendships and these things in general are good for bitcoin.

In Zurich, we did a full transcript, but we also published a summary of the meeting, in the following format: https://bitcoincore.org/en/meetings/2016/05/20/ .. if we do the same for our gathering, it would be much shorter than the above example.

We published the Zurich summary first, and the full transcript was published weeks later. In Zurich, it took about two weeks I think to go over the whole thing.

Event Summary

Over the last few days, some Bitcoin developers and miners got together for a social gathering to improve communication, friendship, and to do some California sightseeing. We talked about where bitcoin is and where bitcoin is going. We learned a lot from each other. We also visited Stanford to attend a cryptography talk to learn more about potential improvements for Bitcoin, as well as the Google campus to give a presentation and talk about Bitcoin. We had many informal discussions amongst ourselves about topics such as mining decentralization, evolution of the Bitcoin protocol, safety improvements and progress for both soft- and hard-forks, as well as improving communication and cooperation across the Bitcoin ecosystem such as new venues to work together in unison. We think that Bitcoin’s strength comes from the consensus of its participants. Many of us plan to attend Scaling Bitcoin 3 in Milan, Italy and everyone would like to continue with gatherings like these and others in the future with all parts of the Bitcoin ecosystem. We hope to be releasing notes in the near future for parts of the community that were not attending.

在过去几天里,一些比特币开发人员和比特币矿工们在一起进行了社交聚会来增进友谊,改善沟通,以及共同进行了一些观光交流活动。我们谈论了比特币的现在情况以及未来的发展。我们互相之间学到了很多。我们在斯坦福大学还参加了一个密码学的讲座,学到了比特币将来一些可能的改进。然后我们又去了谷歌总部在那里做了一个比特币的演讲和讨论。我们还有很多非正式的讨论,关于矿业去中心化,比特币协议演变,软分叉与硬分叉的安全性的提高和进度,以及改进比特币生态系统里的沟通与合作,比如通过新的渠道来共同行动。我们认为比特币的强处来源于所有所有参与者的共识。我们其中许多人有计划参加在意大利米兰的2016 Scaling Bitcoin 。我们每一个人都希望在未来和比特币生态系统的所有组成部分继续进行这样的聚会。对于那些没有能够参加这次聚会的社区组成部分,我们会在近期公布我们的谈话记录。


… more details can be read on http://bitcoincore.org/ ( or https://github.com/bitcoin/bitcoin/compare/v0.12.0...v0.13.0rc2 )

I would like to thank you for your efforts on R&D. You are the main development team in the Bitcoin space. I recognize that, as you just mentioned, there is a lot of negativity and attacks on Core developers. From the outsider perspective, this contributes to the image that we are not unified. We have divisions among ourselves because of this. This is in turn compromising the development of Bitcoin. At this point, I am not sure whether the Core development team is going to improve its PR effort, or whether you will dig in and bury yourselves into the work and not try to improve your PR.

One clarifying remark is that some developers feel that they should not participate in PR (public relations).

It is very difficult in a project like this to get PR. One of the reasons is that … they have massive investment, they have money available to market themselves and Ethereum. In Bitcoin Core, we do not have this funding. All of us work as volunteers. We do not have funding. We are not experts on marketing. We are not PR experts. Is there a way to improve this? The Bitcoin Foundation did not work very well. We are open to ideas. You have to understand that we are developers. We are not good at PR. We are not funded.

All of you are volunteers? Okay.

The claim that Bitcoin’s creator has a premine is untrue. It should not be circulated. This claim is false. No, we did not say that. We were joking about something else. Oh, okay. Yes. In bitcoin there was no premine. In ethereum, they premined and funded their marketing and development. We don’t have that in bitcoin.

My belief is that for such a large development effort, without financial resources, it's not feasible to move forward. The Bitcoin Foundation failed not because the model was wrong, but because of its poor management.

Because they got arrested. Only two of them.

Maybe the bitcoin community needs to rethink how to fund the development effort. He proposes that maybe we can form another foundation to provide financial resources to development efforts, to make things easier for you guys, to make it feasible and sustainable in the long run.

.. and it should involve the major players in the bitcoin space in this foundation.

Bitcoin Core has a sponsorship program in place. We have worked a little bit on that. There is also an unfortunate barrier to entry regarding education for developers. One of my projects is using libconsensus such that people with less technical skill can make their own full node. That's what I'm trying to do right now. I hope that in the future there will be more work there.

Are you doing that full time or part time? Part time.

He believes that the Bitcoin development effort cannot just rely on donations, sponsorship, and part-time work. Also, a couple of highly skilled developers is not enough. It's not sustainable in the long run.

An interesting question for the people in the room here, of the developers, who is working on bitcoin full time?

We definitely agree that there needs to be a more sustainable model. It cannot be just one approach; we need to do multiple things to make R&D sustainable. One problem that we have had in the past, which has made our PR efforts more restrained, is that we have had problems with initiatives to do public outreach. These initiatives have used our work to try to take control of Bitcoin. They have used our work to try to argue for their own authority over Bitcoin. This was the case with the Bitcoin Foundation. This was a bad experience for many of us in Bitcoin Core. The Bitcoin Foundation failed to be sustainable. But also, the Bitcoin Foundation used its influence in ways that were harmful to the long-term sustainability of bitcoin.

As someone who worked for the Bitcoin Foundation, I did not like telling people that I was employed by BF. I went out of my way to avoid even mentioning it, as a developer, even though I was free to work on code without being involved in the other stuff they were doing. They tried to exert control over Bitcoin [like in a social way].

I don’t mean to say that Bitcoin Foundation took bad actions. However, many outsiders perceived it to be in control of Bitcoin. That was a problem. As an example, Ethereum Foundation is perceived to be in control of Ethereum. This kind of control over Bitcoin must not exist. We need sustainability without control, and we need this without the perception of control too. The perception of control is also a major problem. By stating that Ethereum is forking, a statement made by Ethereum Foundation, they were able to silence people who had disagreements with that hard-fork plan, through social means. Also, see the related Buterin effect.

Okay. He feels that the reason why some initiative would try to take control of Bitcoin through Bitcoin Core development, and why people blame you for that, is because the interest being represented by Core is still narrow. (Something about narrow business interest?) The interest group you represent is still too narrow. The community is not being represented. I think he means that you are not representing the majority. What he is proposing is that we need to form another foundation that could engage most of the companies in the Bitcoin industry, so that they can all fund that foundation, which in turn would fund you.

Some developers care less about earning money, which is why we do not […].

The foundation may not be able to represent most of the industry in the Bitcoin space, because those companies might have their own private interests. They might be antagonistic to each other, like Coinbase versus Blockstream, so it may not be inclusive itself. It is hard to have one foundation that represents the majority of interests in the Bitcoin space because of the conflicts of interest. How could you set up one organization that can…

It is not possible to have one organization that represents everybody.

We should do what is right; as a developer who does not own a company, I do not like the idea of companies controlling a foundation.

https://bitcoincore.org/en/about/sponsorship/programme/

We need to design a system where there is nobody in control. This is perhaps not best represented by the companies that exist today. Business interest is important, as well.

Let's separate the issues first. Let's not talk about the format of the organization or this foundation. Let's instead revisit what I just mentioned before. My points are that, first, for such a complex and large-scale development effort, you must have financial resources. Otherwise, you cannot be sustainable in the long term. You must have full-time staff. You cannot be part-time; that is also not sustainable. We need to involve more people. We need more than a few smart developers. Based on these, we can talk about how we can form that foundation or some other entity. I just strongly doubt that this amorphous and loose organization can really be sustained in the long run, given the complexity of the R&D effort.

I think we have to discuss whether such an organization is necessary. And then we can discuss how to run such an organization, as a later step.

My personal experience from the Linux industry and especially… I agree that we cannot rely on volunteers for sustained progress. The trouble here is that we have seen the dangers of a single centralized organization. Since that time, it has been confusing to not have a single centralized organization. However, since then, we have made more progress than in the past from the contributions of a mix of organizations, some private companies and some non-profits, working together, such as the two developers working at MIT DCI. There are some engineers at Chaincode Labs, as another example. There are also some engineers at other companies. I have heard something about a Chinese company wanting to train an engineer to become a Core developer. (That has not been going so well.) Oh, I see. Well, you have to continue that investment. It takes a while. I hear that a Japanese company now wants to begin the long-term investment to also train Core engineers.

And, there is no company that controls Linux. There are many companies that contribute engineers full-time to work on the upstream open-source development projects that Linux relies on. Linux Foundation is more of a coordinator. Linux Foundation does not control Linux.

Sure, I can explain. Linux has many companies. There are a number of other Linux vendors in addition to Red Hat. There is Intel, AMD, ARM, and many hundreds of other companies in the Linux software ecosystem, or in system integration companies like IBM and HP. There are many of these companies all over the world, and they all decided over time to devote full-time engineers to the upstream development projects of the Linux kernel and the many thousands of other pieces of software that are used in the Linux stack.

There is really no one company in control of Linux. And the Linux Foundation serves a coordinating function for the Linux software development community. It is a little confusing there. To simplify how people understand Linux, it is sometimes described as the Linux Foundation making decisions, but I think the way things actually work in Linux is that the Linux engineers make decisions based upon peer review. They won't let a large moneyed interest override what they think is a sound technical decision.

One thing I would add is that the Linux Foundation has helped other open-source industries to better coordinate. The Linux Foundation has offered to help as a neutral process facilitating function regarding Bitcoin. They theoretically do not have anyone inside that cares either way about Bitcoin. It is an option to ask them for help. However, I don’t know if it is the right approach.

I think Linux is a good open-source example. The Linux Foundation helps Linux do PR, and the engineers focus on the development direction. I think Bitcoin development can take Linux as an example. I want to modify my first request: the question of whether such an organization, or many organizations, is necessary for Bitcoin.

Okay. I think Linux has set a good example. The Linux Foundation works on the PR for Linux, and the engineers are just setting the rules for the system development. Based on that, I would like to modify my first proposal. Can we learn from the Linux example and maybe set up one or multiple foundations for Bitcoin, the way Linux has Fedora and Red Hat, each with their own foundation? Is that true?

That is not exactly how it works.

I would like to add that, I personally see potential for process facilitation and communications as early roles that are easier to agree upon.

One thing to say, perhaps, is that there is a perception that there is no Bitcoin Foundation-like entity guiding bitcoin. However, there are many different organizations that are supporting, in limited capacities, different parts of Bitcoin. MIT DCI is paying for a number of developers to work on Core on a full-time basis. They are running classes. They have been organizing events that help to promote bitcoin. They are supporting only three developers, right? Yes, but there are others being supported by other organizations. Bitmain is currently paying for one of the current developers. Blockstream pays Pieter to work on Bitcoin full-time. Ciphrex pays Eric to work on Bitcoin Core. There are other organizations that are doing political lobbying (like Coin Center in the U.S. for political changes).

It is a fact, though, that when it comes to talking about the technological work that we are doing, the community is failing to communicate this adequately. We had this discussion earlier today about comparing to Ethereum. There was some laughter from the developer side earlier during that discussion when someone made a comparison to Ethereum. It was not meant to be insulting laughter. Rather, we feel that we do a lot of dev work compared to Ethereum. A few minutes ago, I looked at the data. I found that there were 3x more commits to Bitcoin Core than to go-ethereum (the numbers are 27 contributors to go-ethereum vs 96 to Core, and 1294 commits to Core vs 490 to go-ethereum since January 1st). There were also more developers for Bitcoin Core than for Ethereum in this case. We could be doing more to communicate this more widely to the public. Perhaps there are some additional needs for organizational efforts around that? But we need to keep in mind that we have had very specific problems in the past with these organizational efforts, such as the earlier stories about the Bitcoin Foundation that we have explained. [And despite the lack of funding, we are still more productive than Ethereum Foundation.]

If ethereum is written in Go, is it a higher-level language and do Go commits compare to C++ commits directly?

Can we take a break? Yes.

From my perspective, we, the development community, are successful if you never hear about us. However, this is not always the most useful perspective. We agree. Some organizational effort to promote Bitcoin technology could help a lot.

I am curious about the miners in the room, who mine Ethereum, how many different client implementations do you use? Just one.

Lunch will be setup outside this room. We will continue to have the meeting in here.

Apparently, Ethereum literally has more marketing employees than developers: there are more people working for the Ethereum Foundation in marketing roles than there are employees doing development. This is surprising, but it also explains a lot. Bitcoin has 3x more engineers, much more value, much more code, and approximately zero marketing. There has been some improvement, perhaps.

Something else about marketing. So the discussion about setting up a new foundation. To me, it sounded like one of the intents would be to have some Bitcoin marketing. Maybe from companies. One of the problems in the Bitcoin ecosystem is that some companies are saying negative things about Bitcoin. They are anti-selling bitcoin. They are saying negative things about Bitcoin or negative things about each other. In Ethereum, they fight only in private. However, the marketing people still continue to say positive things in Ethereum, even while the hard-fork was failing. It was very positive marketing. We all want Bitcoin to succeed. How about a marketing alliance that advertises and says positive things to people who would buy Bitcoin or who would buy miners or buy services from everyone’s companies? Why are we not doing this?

Why not a standardized foundation in that sense? If you are proposing a marketing alliance, why not use a foundation which could have a template format we could use? It’s more expedient.

We are not good at marketing.

Those are very different goals. Mixing the two in one type of organization is what we're afraid of. We do not want this to have the perception of control over Bitcoin. This tends to create the perception of control, and of controlling the narrative and the way that people start to look at it. The previous proposal for a marketing alliance very specifically did not include development.

No control, like a Linux Foundation?

What is needed is not so much control of development, but rather process facilitation between the stakeholders in the industry and community, such as for marketing purposes and advertising purposes. When I say process facilitation, it could be to better coordinate marketing, but it could also help to better coordinate — a very common problem is that people are not talking to each other. There are very simple problems that for example, miners or exchanges could have solved, if they would have asked (requested) the developers for help. Historically, that request has not happened. This would be solved by process facilitation. [Regular, scheduled contact and communication.]

What does facilitation mean? One example of facilitation is regularly scheduled one-on-one meetings between industry members whoever they may be. Some stakeholders don’t naturally communicate with other stakeholders, and perhaps having an intermediary or other coordinating roles could help. When you don’t have this, all kinds of assumptions happen, often these assumptions are incorrect. For most developers communication is not a natural skill. Having others to fill this role and fill up calendars with scheduled meetings would be profoundly helpful. Having others to be available to work on these problems, to help assemble a big picture view, so to know who to connect to on a given issue. Developers are not naturally going to put those on their calendars without being requested. You cannot rely on volunteers for this sort of effort to be a sustained benefit to the ecosystem. Instead, there should be people who are paid to have these roles. So when I suggest an org that is only process facilitation and maybe also a marketing alliance, this is what I mean by “facilitation” and what I think about in general.

My question is: first, I'd say that facilitation is an important role, and to make it sustainable it has to be someone doing it full-time. Is there one or more people that everyone can agree could do this with neutrality because they do not work at any company?

There are companies in Bitcoin who are more effective at marketing. BTCC does some marketing. Blockchain.info did a promotional video. I think I have seen videos from other companies. Maybe the companies that are good at that could form an alliance for marketing and persuade other companies to join. Probably those companies could get some help in producing technical progress reports. They could simplify the Bitcoin Core meeting notes. There was a Bitcoin Core release candidate yesterday, but it has not reached mass visibility; perhaps it has not even reached visibility to people in this room. This would be a good example of a task for such a marketing alliance to focus on.

Who here is subscribed to the Bitcoin Core mailing list?

Would it be helpful if we publish mailing list summaries periodically? We need a place to talk where it is in our language. But perhaps summaries could be more targeted to a general audience? Maybe a digest?

Your weekly release for developers, it’s not for common people to read. It’s not for non-technical people to read. That’s why they hope you can make them more transparent.

We can definitely make communications more directed to a general audience. That feedback is very useful, so thank you for sharing that.

I wanted to share my perspective as a developer of an alternative client. One thing about “we need more bitcoin developers”: it's actually an arduous process to get up to the level where a developer can contribute (not to Bitcoin Core, but to bitcoin). I have done this recently myself as a developer. btcd is funded by a company called Company Zero, and we have far fewer developers than Bitcoin Core. The intersection of developers who know bitcoin and Go is smaller than for C/C++. We have been working to catch up with Bitcoin Core on soft-forks, but btcd is behind at the moment. I think bitcoin development is going quickly. We have been working on BIPs 68, 112, 113, 9, and all the segwit BIPs. The CSV package (68/112/113) has pull requests outstanding. They are pending review (btcd is behind on BIP 9/68/112/… but has segwit in progress). We don't have enough reviewers. There are maybe 5 active developers for btcd. It's a different skillset. I think the btcd code base and its sub-packages are themselves a big investment in terms of bitcoin infrastructure; they are good libraries, and Lightning development is enabled by btcd and the infrastructure investment there. Maybe btcd does not do enough marketing for itself. It's a little less known than Bitcoin Core in that respect. However, if people want to contribute, they can join the development team.

I have been testing segwit for the past 4 or 5 months. We have tested on segnet and testnet, and we have made lightning channels; one developer made the first lightning channels on both segnet and testnet. The pull requests (segwit, the CSV package) themselves have not received as much review, but we've been testing them for the last six months or so. He made the big blocks on segnet and testnet to make sure that we could verify them properly. I'll slow down.

How big was the segnet big block? It was about 3.6 megabytes. We had a competition between ourselves. We wrote good spamming tools.

The other thing I think is needed amongst the different development teams for Bitcoin is collaboration. The recent segwit BIPs have helped with this. It's good to have multiple eyes reviewing the BIPs themselves. In implementing the large changes, we found some things about segwit and gave feedback to the other developers. We made a transaction on segnet that had 19,000 inputs to test how well the sighash function works. Before that, it took about 29 seconds for Bitcoin Core to validate it. We implemented sighash mid-state caching in btcd and then it took 3 milliseconds or something. Core has a PR to implement hash caching also, and as a result of our testing has some great examples to benchmark against.
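
As a hedged sketch of what that mid-state caching buys, here is a simplified, BIP143-flavoured layout; the field serialization is an illustrative assumption, and only the asymptotics are the point.

```python
# Simplified sketch of segwit (BIP143-style) sighash computation with and
# without caching of the shared digests. The field layout is an assumption.
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def sighashes_naive(inputs, outputs):
    # Recomputes the shared digests for every input: O(n) work per input,
    # so O(n^2) hashing for a transaction with n inputs.
    results = []
    for txin in inputs:
        hash_prevouts = dsha256(b"".join(i["outpoint"] for i in inputs))
        hash_sequence = dsha256(b"".join(i["sequence"] for i in inputs))
        hash_outputs = dsha256(b"".join(outputs))
        results.append(dsha256(hash_prevouts + hash_sequence +
                               txin["outpoint"] + hash_outputs))
    return results

def sighashes_cached(inputs, outputs):
    # Computes the shared digests once ("mid-state" style reuse), so the
    # whole transaction needs only O(n) hashing.
    hash_prevouts = dsha256(b"".join(i["outpoint"] for i in inputs))
    hash_sequence = dsha256(b"".join(i["sequence"] for i in inputs))
    hash_outputs = dsha256(b"".join(outputs))
    return [dsha256(hash_prevouts + hash_sequence +
                    i["outpoint"] + hash_outputs) for i in inputs]
```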

This was an example of how other implementations can help because btcd used a different approach to implementing the Bitcoin protocol. From this, we were able to see that they were compatible with segregated witness, and showed some corner cases. This was helpful and very useful. This work was good and we’re thankful that they did this work on segregated witness.

Maybe I will make a couple of comments here. I am not against creating a separate team for developing the project. But if we have multiple teams, then coordination between teams takes a lot of time, cost, and resources. You cannot go to war with 10 battalions and no general to lead them. Developing a product, you should always have a general architecture and a general engineer. The Bitcoin network is more about the standard; it's a communication standard. You cannot create ten different software projects because they will not be able to communicate.

What good do you see coming from having multiple implementations?

From the perspective of venture investors, it will be very complex to explain what you are doing and how you are doing it. If there were a battle between Core and other implementations, it would decrease trust in Bitcoin, decrease the market price of Bitcoin, and it would not help make R&D go faster. If someone thinks they can make a new split from scratch, then they should name it something else, like an altcoin. I think it is much, much better if we create a team like Lightning, where we can inspire the Bitcoin Core team, and then coordination between different teams will increase productivity overall. They can internally discuss the idea and agree, and if they all internally agree that it would be helpful and efficient and stable, then it can be merged back into Bitcoin. For Bitcoin Core, I think it would also be helpful to add more effective developers and more leaders and support them in a more transparent way. Right now, some companies are paying for some developers. Okay, right now that's not the issue, but later there might be a lot of questions; it should be really transparent, like one central point, I don't know, whatever you call it. It should be transparent, like a business where you donate your money and this money supports developers [like the Bitcoin Core sponsorship program?]. That's it.

Thank you for the recap. Lunch is setup outside.

Development is somewhat hampered when you increase the number of implementations. It takes more coordination and more effort to have developers reviewing multiple implementations of consensus-related software; it is enough work to do it in Bitcoin Core. If you are asking for multiple implementations, then this request is incompatible with having faster and more R&D, because it divides our efforts, our time, and our attention, and weakens our ability to make R&D progress. Ethereum Foundation might seem like a fast mover on R&D, but that is perhaps due to the misperception generated by their marketing efforts; they have more marketing employees than developer employees. “Let's split your engineers across 3 different projects so that we go faster” is nonsensical, because software engineering does not work like that. Perhaps we are not understanding what the other side is saying.

(Note that this was from lunch chit chat, the following was not discussed as a group.)

What do the miners see as the future of Bitcoin in the long term? What is their take on this? Yesterday there were two answers, but we would like to hear more from the miners.

Can we please also get an assessment from both sides regarding the level of misunderstanding that they think is continuing to occur here today? This would be valuable feedback.

Wallets should probably show the moving window average of number of zeroes appearing in blockhashes. This would be useful for improving knowledge about current difficulty. However, consumers might get concerned when that number does (or does not) change.
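
A small sketch of what such a wallet gauge might compute, assuming "zeroes" here means leading zero bits of the block hash; get_recent_block_hashes() below is a hypothetical data source.

```python
# Hedged sketch of the wallet-UI idea above: a moving-window average of
# leading zero bits in recent block hashes, as a rough difficulty gauge.
# Interpreting "zeroes" as leading zero bits is an assumption, and
# get_recent_block_hashes() is a hypothetical data source.
def leading_zero_bits(block_hash_hex: str) -> int:
    value = int(block_hash_hex, 16)
    return 256 - value.bit_length() if value else 256

def moving_average_zero_bits(block_hashes, window: int = 144) -> float:
    recent = block_hashes[-window:]
    return sum(leading_zero_bits(h) for h in recent) / len(recent)

# Example (hypothetical wiring):
# avg = moving_average_zero_bits(get_recent_block_hashes())
# print(f"avg leading zero bits over the last 144 blocks: {avg:.1f}")
```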

The short-term interest that some have regarding bitcoin market price would be to crank block size to infinity and pump the price. But what about the long-term interest in bitcoin? What is the long-term value of bitcoin? Why is bitcoin valuable in the first place? What is it about bitcoin that is different from other products? The concern that some developers have, regarding constructing an organization exclusively devoted to PR and marketing, is that control can unintentionally degrade the value of Bitcoin even if we each individually have good intentions going into such an effort, because of how contrary it is to the ethos of Bitcoin, decentralization and fungibility. Core developers don’t have a way to design this type of organization, while preventing the negative outcomes that are obviously applicable, and for this reason we have not made a proposal for this type of organization.

Block size and hard-forks

We had a good discussion in the morning on a number of topics. In the afternoon, the miners would like to shift the discussion to block size scaling.

I think that we can start by talking about the Hong Kong agreement. I know that it is not an agreement with Bitcoin Core. I know that Bitcoin Core cannot sign an agreement. I know that it was individuals that signed. We know that there is a disagreement even within Bitcoin Core whether to scale the block size, how much, how large, how to do it, whether to do it. Those individuals promised to propose a hard-fork proposal to the Core community for reviewing.

With everyone joining forces, it is good for marketing, it is good for the economy, but at this time we feel … with the promise of those individuals … several are joining us here today as well. I also recognize that we will have to spend resources; for you guys, maybe you have salaries, and that means a lot of budget and… some… forum. I think it's kind of a promise. Maybe you are not representing Core, and we understand that. But as your personal promise, I think this is a promise that you guys need to fulfill. There might be some delay, but I think we should talk frankly to see what's going on here.

Keep in mind that we do have segwit, which is a block size increase. We need to get segwit out and implemented. After that, what the Hong Kong agreement was, was a change in how segwit's block size increase would be allocated.

Let’s answer the question. Let me keep going.

I think that right now the biggest concern is that Ethereum has shown that this is looking a lot more risky. When we met in New York, we talked a lot about how we would get consensus and show consensus. The miners were not in New York. I am not sure if people are aware of this.

Many of the HK agreement signers spent a week in NY. We did a lot of design work. We talked about how to properly construct a hard-fork. We talked about how we would do this in a way where we would not have the same risks that Ethereum has recently experienced. We talked in HK about how important it is for Bitcoin to remain unified and how important that is to the long-term value of Bitcoin. In HK, and in NY, there was no desire to do anything that would be controversial. We would need consensus around any kind of hard-fork that would happen. It would have to be incredibly non-controversial. It's certainly the case that a lot of that research and many of those discussions should be made more broadly available. But there's certainly a lot of concern now, even from people outside this room, that it would be very difficult to get that level of consensus around a hard-fork. Proposing a hard-fork, or at least proposing some details about how a hard-fork should look, is one thing, and I think that should still happen. But as of the last week, I do not think we would be able to get the kind of consensus we need, such as from Bitpay and their investors who saw what happened with Ethereum Classic, for Bitcoin Core or for Bitcoin to have a hard-fork that does not result in a significant loss of value.

I think it's worth mentioning too that we were heavily disagreeing amongst ourselves, even in the New York office, about how something might be deployed. Much of that was due to talk of signaling. Since then, we have talked about several assumptions. One assumption that someone originally proposed was that “nobody would follow a minority chain” or “nobody would attack another chain”. There were discussions about those different scenarios. It has been enlightening in the last few weeks to see that some of those things are in fact very possible and are more concerning than we thought before. It's something that requires thought. It provides a new data point. It requires some new analysis, I think.

I wanted to point out that hard-forks are very disruptive to markets. They are disruptive to merchants, to markets, to entire ecosystems. We have to take this into account. Unless there is an overwhelmingly strongly justified reason to do a hard-fork, then the costs outweigh the benefits. We have been looking at ways to solve these problems in Bitcoin without having a hard-fork. This has been a huge focus of our engineering work over the last several years. We have been working on ways to make everyone happy in the ecosystem [although we are not maximizing happiness].

Also, yes I have been writing code for the hard-fork. It’s not ready at the moment but I do want you to know that I have been honoring that commitment and have been writing code towards that.

https://github.com/luke-jr/bips/blob/bip-mmhf/bip-mmhf.mediawiki

https://github.com/luke-jr/bitcoin/compare/bc94b87...luke-jr:hardfork2016

Regardless of whether we have reached broad consensus on a hard-fork, based on the recent Ethereum hard-fork experience, they believe that a Bitcoin hard-fork that splits the chain is inevitably bound to happen at some point in the future.

Only if we try to make it happen. Already happened with Clams, right? A controversial, disharmonious hard-fork does not need to happen.

This divergence in opinion is kind of hard-wired into human nature; even amongst this small group we have divergence in opinion, not to mention the broader community. Because of this divergence of opinion, the public is going to split into two or more user groups and opinion camps, with different ideologies of how bitcoin should be implemented. Some may belong to Core and some may not have a development team. That might be hard-wired into human nature and politics.

Okay. Even though there may not be such a development team at this point, because people are driven by interest and this drive might compel or invite another development team to jump in and do this kind of work.

To offset this trend, it’s necessary for us to increase the user base. And if we treat Bitcoin as a reserve currency, then the possibility for it to split is high. But if we treat it as a payment network, then people are unwilling to let it split because you would increase the usability.

When you say payment network, what does that mean in 5 years or 10 years?

If we treat bitcoin as a payment method, then we have to support new tech development such as the Lightning Network and a bigger block size to support that functionality. Based on this logic, all of our effort is to defend ourselves against a hard-fork effort that may occur in the future. In order to defend against a potential hard-fork in the future, we need to provide some kind of forum or unified basis so that different voices in the Bitcoin community have a platform to communicate with each other, resolve their differences, reach consensus, and protect ourselves against such a malicious hard-fork in the future.

It’s not that easy to get everyone together to communicate and discuss.

It’s not that it’s hard, it’s that you did not do it. (Scaling Bitcoin conferences?)

I agree. I think we need to work to make sure that Bitcoin is both a system for payments and a reserve currency. As a system for payments alone, payments do not have sticky long-term value: you (the customer) buy coins to pay, then you pay to purchase some item, then you (the merchant) sell coins. This does not result in long-term preservation of value. Also, to be a good payment network, Bitcoin must have tech like lightning, because no reasonable block size would make Bitcoin large enough to scale to the payment needs of the world (like 1 million transactions/second). We must preserve the long-term value of Bitcoin while growing the user base. We need to find a way, as a community of Bitcoin users, to come to strong agreements in order to defend against malicious hard-forks that would damage Bitcoin's long-term value.

The first link is the BIP draft. The second link is some of the code to implement that BIP. This is related to the hard-fork commitment.

I would like to apologize to you, and to the developers, for undermining their efforts to produce this material for you regarding their commitments. I did so because their efforts in New York came right after some public comments about blocking segwit regarding a hard-fork. In that environment, I felt very uncomfortable about hard-fork proposals slowing the scaling of bitcoin through segwit. I regret the climate that my comments created. I am sorry for that and for my comments.

I think I need to clarify this. The segwit blocking also came from the … that the HK agreement would not be respected. It's a very bad spiral that we have gotten into, in terms of bad communication. Maybe both parties don't want to do something under pressure. Maybe both parties don't want to be threatening.

One problem I did have was that a lot of people in the Bitcoin community told me that they would never want to hard-fork. So despite my agreement with you, I had others telling me they were concerned about a closed door meeting. I was hearing that a lot.

After the HK agreement, both parties were unhappy. The big blockers want really big blocks. 2 MB would never make them happy. They were not happy. And it was a bad agreement for them. And for those who want to control block size growth, they …. and I think at the meeting we both agreed that we should try to convince other people that this is kind of a best compromise.

There are a lot of people that don’t want a hard-fork at all. It’s a hard-fork. Everyone has to agree to a hard-fork. It’s difficult.

Do people not want it? Or rather, do they not want it now? People have said “oh, they never want a hard-fork”.

Oh, that’s not true [of my beliefs] at all. The negative discussion about hard-forks makes future hard-forks harder. That frightens me. The system will need hard-forks in the future. But they need to be harmonious hard-forks. People fighting against hard-forks makes this less likely. I think that we will need hard-forks in the future. I hope there will be hard-forks in the future.

The HK agreement did not happen the way I would have liked, but it happened anyway. It would be better to have a better way to come to agreement about how hard-forks happen. It would be good to have lots of community voices coming together. Hopefully we can set a precedent regarding what is the right way to do a hard-fork.

One observation here is that it sounds like we could have avoided much drama. When I heard about the “let’s block segwit” statement, if I had reached out to you directly and clarified in private, it sounds like this could have avoided problems and improved Bitcoin’s public image. I think there are many opportunities for us as the Bitcoin industry to try to settle our disputes one-on-one. This may improve Bitcoin’s public image as well and be more productive for all of us.

Maybe someone could describe the, like the MMHF hard-fork. And someone else had some BIPs about that. What are in these BIPs? What is the structure?

I think a lot of the opposition I was getting was regarding a closed door agreement. They wanted it to be something that develops organically.

I think that if I can expand on that, there is a political challenge where if someone external to Bitcoin tries to enforce a hard-fork on bitcoin, then it must be rejected for bitcoin’s long-term survival. When we are talking about designing a hard-fork, we must make it clear to the community that the desire for the hard-fork is organic from the Bitcoin space. I think that what he found was that the structure of the HK agreement undermined that understanding. Perhaps that could be overcome by getting more support for it, but still it is something to keep in mind for how we handle harmonious hard-forks in the future.

I think that people believe that communication should be open. Nobody noticed that BIP draft. There was no place to publish things like that. There was no blog post about it, or a tweeted link or something, to the draft. Anyway, please describe what it does.

I had summarized some of the ideas we had discussed in Zurich. I think we first need to know, it seems like there’s some discrepancy even between a few people, regarding the … maybe he could explain his BIP draft.

If you scroll down to the specification section; well basically, don't read the document. What features does it have in it? Pretty much most of it is making the hard-fork safe. How does it make it safe? The simplest possible hard-fork would be one where old nodes continue on the old chain and do not follow the hard-fork. If people did not upgrade, then those nodes would get stuck and their chain would be vulnerable to attack in the worst case. So what you are saying is that one of the concerns you discussed in New York was that un-upgraded systems would be vulnerable to hashrate spinning up and mining fake confirmations where people don't know about the fork; automated withdrawal systems, for example, would unfortunately be compromised. So your hard-fork proposal is one where, from the perspective of un-upgraded original nodes, they are still mining on the original chain. The way this is designed, the old nodes will see the old chain as empty blocks. They will follow that chain and they will not be left vulnerable to fake confirmation attacks. This is done by defining the serialization of these headers separately from the hash algorithm of the block hash. So the block hash is then calculated with a more complex algorithm than currently. Have you implemented this in source code? Yes, partially. The p2p stuff will at the moment not talk to the old nodes; this needs to be fixed, which is similar to segwit. It has comparable complexity to the segwit change.
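
To make that mechanism more concrete, here is a purely conceptual Python sketch, not the BIP's actual serialization; the field layout, the empty-root stand-in, and the extended hashing are illustrative assumptions. Old nodes keep parsing a legacy-format header that commits to no transactions, while upgraded nodes compute the block hash over an extended serialization that commits to the real transaction tree, so both views are backed by the same proof of work.

```python
# Conceptual sketch only (not the real proposal's serialization).
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

EMPTY_MERKLE_ROOT = b"\x00" * 32  # stand-in for "this block carries no transactions"

def legacy_header(prev_hash: bytes, time: int, bits: int, nonce: int) -> bytes:
    # What un-upgraded nodes parse: an 80-byte header committing to an empty tx list,
    # so they follow a valid-looking but empty chain instead of being attacked.
    return (b"\x01\x00\x00\x00" + prev_hash + EMPTY_MERKLE_ROOT +
            time.to_bytes(4, "little") + bits.to_bytes(4, "little") +
            nonce.to_bytes(4, "little"))

def new_block_hash(prev_hash: bytes, real_merkle_root: bytes,
                   time: int, bits: int, nonce: int, extra_nonce: bytes) -> bytes:
    # Upgraded nodes hash an extended structure that also commits to the real
    # transaction tree (and extra nonce space), using the same proof of work.
    extended = (legacy_header(prev_hash, time, bits, nonce) +
                real_merkle_root + extra_nonce)
    return dsha256(extended)
```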

I think it's important to clarify that this is a difference from the Ethereum hard-fork. In that hard-fork, if you ran an old client, transactions still appeared to confirm; in comparison, this one will not allow transactions to be confirmed for old nodes. The old nodes are forced to make a decision: do they want to do the hard-fork, or do they want to do a different soft-fork to prevent the hard-fork? They are not left vulnerable. They must make a decision one way or another.

You could say that this proposal is not much of a fork. It is blocking transactions. There must be a fork to block the transactions. Anyway, this is the main design principle. There are other interesting elements in the design. The new header structure gives more nonce space to the miners, which they could use in ASICs without having to do the whole merkle tree for the transactions in the ASICs. So it would lower the communication cost to mining devices. However, it does not put the extra nonce space in the compression run. It's asicboost [extranonce?] nonce. It repurposes 3 of the version bytes as nonce space as well. There are version bytes in the new header, so this proposal also gives nonce space in the version field. It's basically enabling asicboost. It's an interesting discussion that we should have separately in another venue. Does it have other interesting features?

It fixes the timestamp overflow issue by cleanly overflowing. It’s using 32 bits to represent a 64-bit timestamp. It makes a long-term improvement for timestamp handling, because we want bitcoin to last many thousands of years. Part of bitcoin’s value is that it’s forever. If we can fix the time/clock issues, then we might as well fix that.
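
As a rough illustration of that clean-overflow idea, and only a sketch under assumptions rather than the proposal's exact rule: a 32-bit on-the-wire field can represent a full 64-bit time if nodes expand it relative to a reference time they already know from recent blocks, picking the 64-bit value with matching low bits that is closest to that reference.

```python
# Minimal sketch: expand a wrapped 32-bit timestamp to 64 bits using a reference
# time (e.g. something like median time past) known from recent blocks.
def expand_timestamp(time32: int, reference64: int) -> int:
    period = 1 << 32
    base = reference64 - (reference64 % period)
    candidates = (base - period + time32, base + time32, base + period + time32)
    return min(candidates, key=lambda t: abs(t - reference64))

# A wrapped value shortly after the 2106 rollover still expands to the right
# 64-bit time when the reference is nearby.
print(expand_timestamp(5, (1 << 32) - 10))  # -> 4294967301, i.e. 2**32 + 5
```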

Another thing the proposal does is that it's designed for merge mining natively. So namecoin won't need to have a whole bitcoin transaction bogging down every block of theirs. Merge mining is more efficient, which we could also use for sidechains. Version bits have been expanded to have separate bits for more hard-forks in the future, so that we don't need to repeat the complexity of this R&D, and it can simplify the idea of a soft-hard fork. Anything else? He's looking at his BIP draft.

It redefines the merkle tree algorithm to fix the duplicate transaction issue that we worked around. It fixes a bunch of technical minutiae and it's hygienic: it cleans things up and reduces technical debt. Obviously we need to .. for the old blocks. Also it improves things for SPV nodes and lightweight clients.
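
The duplicate transaction issue being referred to is the well-known merkle-tree quirk (CVE-2012-2459): because the construction duplicates the last hash on odd-length levels, two different transaction lists can produce the same root. A small Python demonstration, using arbitrary byte strings as stand-in transactions:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    level = [dsha256(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the last entry on odd levels
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx-a", b"tx-b", b"tx-c"]
# Same root for two different transaction lists; this is what has to be worked around today.
assert merkle_root(txs) == merkle_root(txs + [b"tx-c"])
```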

What were the problems with this proposal?

Well (laughter), I think the tricky thing with this is that because it’s a firm fork…. No, let’s translate. I would say that the tricky thing with this is that because it’s a firm fork, …. a firm fork being, that it is a soft-fork that forces everyone else off the network (those who do not upgrade). If you try to do that with 50/50 approval, then politically it looks like it’s an attack against the minority. From a technical level, it means that miners can do this by veto. The problem is that we want to avoid political ugliness. Do people understand a firm fork? It’s combining a soft-fork and hard-fork. There is zero possibility of two coins afterwards. By default, it would not be two coins. If I am running a bitcoin node and I do nothing during the hard-fork, then I am guaranteed to be forced off the network. Then I have to take action to either accept or decline the hard-fork. This is coercive. However, if we got solid indication that we got consent from the bitcoin community, then it would not raise major issues.

A lot of what I talked about in New York and previously was: how do we see whether coin holders have approved this? Are they going to use this new chain? Are we going to end up with an Ethereum and Ethereum Classic situation? Or are we going to end up with a unified bitcoin? I think it's important that we find a good way for coin holders to show that yes, we actually approve of this. We can use coin voting as a way to argue that the whole economy has approved of this. Otherwise we set a precedent where miners can push through changes without consent, which calls into question the value proposition of Bitcoin. So that is why coin readiness signalling methods would be useful, to avoid having the value proposition violated so directly.

It’s not necessarily that the whole community is going to vote on this. How are we going to look at those results? What would it mean if some voted but not others?

If you have a wallet and you did not upgrade, then you are at risk. It's easy for that wallet to end up on a chain other than the one you expected. We must respect people's right to not go with this consensus. It would be dangerous for us to try to question the consensus they want to go with. Who is in control here? Is it users? Or is it the group of bitcoin miners? It could infringe on the value proposition of this being a currency with a stable meaning.

He believes that in this case, if those wallets refuse to upgrade, then they would be taken off the network.

It’s only if they neglect to upgrade, then they would be left off the network. If they choose not to take the hard-fork, which they must do actively, then they can continue on the chain without the hard-fork. They must do something to accept it, or to reject it, but there is no “default” behavior.

The existing chain will have blocks, but with zero transactions.

So his proposal is that, people who are using the current software, their software stops working. New software is released, and then if you install it, it defaults to the chain with the most hashrate. And there is a button that would say “no I want the other chain” …. no, there would be no default. You must choose A or B. It’s a user interface question. So who is the origin chain? Nobody would be the origin chain. And if we are doing our job right, hopefully nobody would choose reject, because otherwise we failed our community by choosing something without consensus.

Ethereum’s voting turn-out was so low (5.5%), and the people who voted were probably very invested in TheDAO. It’s possible that they had hashrate majority, but not most of the users.

Can I give a bigger picture here? If this was designed such that the greater hashrate could decide, that would be a powerful weapon to befuddle bitcoin in the future; people would look at bitcoin and say "oh the government just needs to buy a bunch of hashrate and rewrite the rules at whim". We need to be able to respond to that and say no, lots of hashrate can attack the network, but it cannot rewrite the rules at whim. So this BIP design still supports the existence of the original chain; however, through non-technical means we should make sure that perhaps nobody wants that original chain to exist. "Hashrate deciding" is a long-term threat to Bitcoin's value and fungibility. It's each person that must decide for themselves. Nobody should decide for them.

In this fork, I think the economic or market cap, …. we want everyone to make their own decision to join the hard-fork. But yes, market cap would matter. Importantly, as was mentioned on security earlier, we need to build a public process to make sure that everyone knows that this has nearly unanimous support; then it would be very easy. If there is doubt, then there is opportunity to trade on the doubt and make a political stink about this.

Their point yesterday was that people would try to do that. Well, we can minimize that. A very clear signal that there is no doubt would be to show, through some coin signalling method, that a significant percentage of UTXOs active in the last year agree. That's what coin signalling can show: that the people who use the chain and own coins agree with this.

You should explain that proposal. Have they heard it?

I know that ethereum with their hard-fork did a limited version of coin signalling. They used their coins to say “yes I agree/disagree with this change”. The concept is simple. If you own some BTC, then you should have a voice and you should be able to say I own BTC and I would be willing to use this new definition of what BTC is, and I am not going to oppose it. Wallets would have a button for which way do you want to go. Some BTC might be in cold storage, etc. It should be about coins that are spent over the last year, not about old cold wallet coins stored a long time ago.

Before the hard-fork, there should be a new version of the wallet where you could signal with your coins, whether you like the proposal or not. If that is reasonably high through this measurement, … we would work with all wallet vendors, exchanges, hosted wallets, everything would be using this coin signalling mechanism. The point is not to trigger the hard-fork. The point is to build political consensus so that any adversarial fork created from this has the least chance of surviving.
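
To make the signalling idea tangible, here is a purely hypothetical sketch; the message format, the value weighting, and every field name are assumptions for illustration, not part of any actual specification. A wallet signs a human-readable statement with the key that controls a UTXO, and observers tally approvals weighted by the coins behind the signatures.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    txid: str          # UTXO being used to signal (hypothetical format)
    vout: int
    value_btc: float
    statement: str     # e.g. "I would follow hard-fork proposal X"
    signature: str     # signature by the key controlling the UTXO (verification omitted here)

def tally(signals, proposal_statement):
    # Weight each signal by the coin value behind it; a real scheme might
    # instead count only coins that moved within the last year, as suggested above.
    return sum(s.value_btc for s in signals if s.statement == proposal_statement)

signals = [
    Signal("aa" * 32, 0, 1.5, "I would follow hard-fork proposal X", "<sig>"),
    Signal("bb" * 32, 1, 0.25, "I would follow hard-fork proposal X", "<sig>"),
]
print(tally(signals, "I would follow hard-fork proposal X"))  # -> 1.75
```

A real deployment would also have to verify each signature against the UTXO set, which is omitted here.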

Coin signalling is very easily gameable by malicious entities. The method is to purchase some BTC and then signal in a way that does not represent the wishes of the community.

There would be public outreach, coin voting, and using every means at the community’s disposal to make sure that everyone is on the same page and such that the harmonious hard-fork would be as clean as possible. Ethereum should not be the point of reference, it should be testnet4. Yes, just a second, I am almost done making testnet4.

He actually believes that what has been proposed here is something that makes the hard-fork easier. You almost make the hard-fork a not-so-hard-fork. That is why he believes that you may have opened up some other issues.

Yes.

If we are comfortable with this approach, then in time we would be inclined to perform more and more hard-forks in the future as we get more comfortable with it, because it would not be as difficult to execute any more, based on your method. So this might open up a pandora's box for more hard-forks in the future, which might alter the immutability principles of the Bitcoin blockchain. I believe that we need a hard-fork eventually, but we have to do it in a way that does not set a bad precedent for the future. We must do it with precaution to minimize such jeopardy and pitfalls in the future.

The proposal does not change that the hard-fork needs consensus. This is still a requirement.

Many people in the development community share these concerns, and this is part of why there was not much published after the meeting in New York as well.

When this idea was originally thought of, it got termed an "evil fork". That was the term, precisely because if you have 95% of the hashrate then you can perform any type of change you want. But even with a simple hard-fork, the miners are kind of threatening the community. What if it is not accepted by the market or the users? So in that sense it is threatening. But miners want to kill the old chain.

We don't agree. Part of the proposal makes it easy for the side rejecting the fork to fire the miners. It has a switch to ignore all the blocks produced on the "evil fork" chain side. There has been effort to make sure that the miners don't control it. There is a risk, I agree. It has that risk, but he was saying that when you upgrade, you have to make a choice. There is no default choice. If "A" is the high-hashrate side, then "B" could make a checkpoint or change the PoW or something, but it would be implemented and ready for that eventuality. Let's say that we don't know what the users want, even after coin signalling. Perhaps the old chain doesn't exist, maybe it does, but we make it possible for it to exist anyway, without making it a default.

Is it possible to switch from PoW to a mixed PoW+PoS system? Wouldn't a mixed system be much more difficult to attack?

Research does not support that.

If someone proposes an evil fork, then perhaps there are attacks on the old system. […..] …. that mechanism could be used to coordinate that fork away.

My main point was that the mechanism of people signalling their support for a hard-fork, with their coins and UTXOs, could be used to also change the PoW function. If miners want to go against the economic majority, then the users can show support for a PoW switch and continue on, and leave the miners behind. I hope this will never happen. But it makes it easier to do so. I have bobchain and it’s the best thing ever. Because this thing could happen, I think it incentivizes people to cooperate.

There is a research question in there. Could these hybrid systems be used? All of the PoW+PoS hybrid proposals in the past were very obviously broken: they opened up serious new attack vectors in each proposal that was reviewed. Perhaps in the future someone will come up with a strong hybrid. I look forward to the research, but we don't know a way like that.

Using coin signalling to start a new chain could in theory work, but we don't know how to do both of them at the same time. We know how to say, at this point, start this new thing, and it's PoW. But not how to do both things at the same time in the same change.

It gets into a lot of complexity. All of the hybrid altcoins end up doing checkpointing with a central point of control. It’s interesting to note that.

[Short break.]

Long-term goals for Bitcoin and fungibility

So what about some long-term success goals for Bitcoin? What are the things we hope to achieve over the next 5 or 10 years?

We have to remain secure. We need efficient ASICs. I argue that we need fungibility. It’s one of the important distinguishing properties of bitcoin, that it is permissionless and global.

We need competitive fees. They can’t be too high because they might prevent use cases. We need lightning and Bitcoin on-chain. We want, presumably, many users. We want everyone to benefit from this technology and for everyone to own bitcoin.

We need long-term confidence for this to function as a store of value. We need people to be confident that these properties will survive. We need to also survive regulation risk. There are fewer regulatory risks than years ago, but there are still some.

I would say that positive marketing, like the marketing discussion earlier, could help these things. We could attract new users. We could explain the benefits to any users. We could figure out what people like. If we had positive marketing, it could help confidence. A nice positive bitcoin business ecosystem could help regulators feel more comfortable and help reduce regulatory risk on our industry.

If we look at this at a high level, it helps developers figure out what to work on in the short-term. We care about fungibility, therefore we work on new protocols to create fungibility in various methods. Maybe we try to make mining sufficiently decentralized to achieve fungibility in practice. And perhaps we are careful [Chrome crashed].

I am suggesting that it is useful to have a framework for discussions. For people who are making ASICs or doing payment processing; if they all agree that over the next 5 to 10 years these are the objectives, then great. I think that sometimes we get stuck in a mode where we don’t think about these long-term objectives. What do the miners think about this? It’s like me going to the ASIC manufacturers and saying we need 10 nm tech. But maybe 14 nm is better. How would I know? I should let them inform us. If we agree on the high level details, then we can let people who specialize on the details do what they are good at, including people who are good at development, or people who are good at marketing especially regarding why people should buy bitcoin.

Perhaps miners feel that there are long-term priorities that are different; perhaps for bitcoin transactions you would pay more because it is a permissionless system. Perhaps you would pay more for this transaction because other systems would block those transactions. In an ideal world, perhaps transaction fees would be low. Yes, externally, competitive. They can’t be punitive of course, only those who would be extremely

What is their vision for what we need or want in 5 years?

Fungibility is a relatively new term to us. I think the idea of decentralized money, that money should be decentralized, is specifically something I agree with. Fungibility is a new word to this community. What does it mean? I think this term will create lots of disagreement. We want a united network. Store of value. Maybe we need to keep the principle as simple as possible. If it is too complicated, it needs to be expressed in a single term that everyone can understand, and not something that will create lots of disagreement.

What is fungibility? It is a big word. We can give some examples of things that are not fungible. Paypal has a bad reputation because they suddenly one day stop accounts, even though you had done nothing wrong. And then you have to argue with them for 6 months to get your money back. That is not fungible.

Fungibility technically is the property that every coin has the same value as every other coin. Specifically, in the paypal example, as someone who receives money as a paypal user, you fear that paypal will revert or block the transaction, or freeze the account, or undo your ownership of your own money. So fungibility, in bitcoin, translates to the inability to censor transactions. This relates to privacy and decentralization. Maybe fungibility is too high-level a term to describe what we want here.

I can give an example. Let’s say that some people give me 1 BTC. It should not matter which person gave me the 1 BTC to pay you. They should all be 1 BTC. That’s what fungibility means. There is no “dirty coins”. There is no blacklisting.

If some coins are not the same value as other coins, then those coins would be worth less.

Paper cash does not have this problem. They have serial numbers, but the serial numbers on the dollar bills are not used. It’s not your fault. If somebody gives you some money that they have obtained in a crime (or ethnic crime), in most places you are not in trouble for spending that.

To be clear, this is not necessarily about crime. It could also be for funny examples. Let’s say that if you have 25 BTC that are freshly mined in a block, then they should not be any different from any other 25 BTC. What are you talking about? You mean 12.5 BTC. No, 3.125 BTC, look to the future my friend (where testnet is).

Another related concept to fungibility is that Bitcoin is permissionless. You don’t need to ask for permission to sign up. You don’t need permission to send a payment to a person. There are things that reduce fungibility. There are some companies that are trying to analyze the blockchain to claim some coins are worse than others because of their association with people and payments and transactions. That is very bad for bitcoin. We should make sure that people do not have to be involved with a government or bank in order to send a transaction. It should be cash-like.

Maybe a remark here. I think that point number 4 on the board, many users, should be higher on your list; I think the second point, fungibility, should be … in that place. More people using it and more people knowing about the technology immediately creates bigger fungibility. A very good example is what I saw happen in just a couple of months in Ukraine: a bank simply included an option where you can keep bitcoin in your account and buy it right there in the bank. People started to buy it even just to try it, and they created huge volume in just one single month. And right now they are creating a lot of … and I am receiving a lot of email, in one small country. They have been creating 10 different events, and people start reading about the technology and start trying it. What bitcoin is still missing is good documentation and good representation by someone. So the users should be educated, and to educate the users you need lectures, reading and talks about bitcoin, marketing materials explaining bitcoin, and so on. So "many users" depends on documentation, on lectures, on books, on mentions in the news, in multiple ways.

Among this group, only I am running an exchange, other than the person who had to leave a moment ago. I have first-hand experience with regulators. Regulation in China is big but … in the U.S. it might be even worse; the pressure of regulation is heavy. A lot of those exchanges have problems with coins getting frozen because of association with money laundering. If bitcoin becomes fungible, I am afraid that you would be even more subject to regulators.

He feels that the regulation pressure in China is big, but the regulation in the United States is even bigger. There are even more regulations here in the U.S.

Bitcoin today in practice is fungible. You can trace it, but it doesn’t matter which BTC I give you. It’s still a BTC.

For example, in my exchange, there are some funds frozen by the government because of money laundering suspicions. We hired experts who analyzed the blockchain to trace the source and movements of those coins through the blockchain, to prove that we have no relation to the money laundering activity. So that is why we need the traceability of bitcoin on the blockchain.

I think bitcoin might not be comparable to cash here, because with bitcoin anyone can track the source, and with cash it's very hard to trace the source.

That is exactly what fungibility is about. It’s about making bitcoin more like cash.

Yes it’s like cash, but it’s not…

The bitcoin whitepaper calls it “p2p electronic cash”. We think it needs to be more cash-like.

Maybe like cash, but anyone can trace where it is coming from.

Yes, but not reliably.

Yes, but we are talking about long-term goals.

What I heard is that, what I understood, so please correct me if I was wrong, if Bitcoin becomes more fungible then regulators will be more burdensome to you because they would not be able to track Bitcoin. I think it’s the reverse. If Bitcoin becomes fungible, then regulators will not go to you because they will not be able to extract information.

I don't know how the U.S. police operate, but the way law enforcement in China works, if you cannot provide the source information or source-of-funds information to them, that makes them even more suspicious of the funds and where they came from. This might make them take action to freeze funds. This is why he thinks fungibility in Bitcoin could create even more regulatory pressure for the exchanges in China.

If the coins are fungible, then they cannot be frozen. By technical means, regulation cannot be…

You do not enforce the regulations in the protocol itself. The enforcement would happen at higher levels. You could audit businesses, without modifying the fungibility of bitcoin itself. [The AML/KYC happens at the business level, not on the blockchain.]

Bitcoin in many ways behaves like cash. On this particular issue, regarding the characteristics of bitcoin: if you want to change the protocol and make bitcoin even more fungible at the protocol level, I believe it's just a wish. You may not be able to make it happen. It's because, you know, there's a difference between coins on the blockchain. Some coins are dirty. Some coins are clean. That's just how they are. You can't erase this difference. You can't make them the same.

We can erase these differences.

Let me put another angle on this. The other part of fungibility is that, even if you could trace the origin of coins, there should be an expectation in Bitcoin that if I create a transaction and I pay a competitive fee, the coins should go through. So the transaction should happen regardless of whether governments want to block that transaction and prevent it from happening. In ChainAnchor they wanted to go and block transactions that did not have correct AML/KYC. To some in the Bitcoin community, this is more concerning than fungibility itself. Even if you could trace the origin of the coins, at the very least you could not prevent the transaction from happening. So you could still trace the origin of the coins, but the transaction would never be stoppable.

Gold is legal and is highly fungible. If there is tainted gold, and tainted serial numbers, you can still melt the metals down into liquid and get untainted gold. So we can do the same with bitcoin.

In a system with good fungibility, regulatory compliance can be achieved even better than it is today. If payment protocols between exchanges allowed for the sharing of identifying information on that other network, that would improve regulatory compliance. If the system is not fungible, then different policies in different jurisdictions create uncertainty about coin value. It is perfectly technically possible to make coins that are absolutely always equal, with equal origins. There are competitors like zerocash (zcash) that do this as their competitive basis. If bitcoin is poor at this, then it is not as good as digital gold and not as good a store of value, and we could see bitcoin out-competed by those competitors.

The fungibility aspect is a huge competitive advantage of Bitcoin versus the existing financial system, including Visa, Paypal and Mastercard. The greatest advantage that Bitcoin has is that it eliminates counterparty risk, along with the legal risk: you do not need to underwrite your counterparty's legal standing. The moment that they deliver your bitcoin, you have the good. The underwriting in today's financial system has a significant cost.

Maybe explain underwriting? Well, it's the idea of taking responsibility. If I take a bitcoin from you, it's not up to me to determine if you are complying with the laws of some other jurisdiction. Not having to underwrite means you are not responsible for the liability of the other party (the counterparty).

The point is that it’s cash. If I pay you bitcoin, you own the bitcoin. It doesn’t matter where it comes from. There are technical ways to make the origins indistinguishable (the same and equal).

People talk about chargebacks. It goes deeper than Visa chargebacks. The interesting thing about this is that when you are dealing with transactions worth millions and billions of dollars, they are not concerned about chargebacks, they are concerned about solvency of the counterparty. That’s why we have title insurance. With money itself, there is a lot of underwriting necessary similar to title insurance. When you remove that need, when you make bitcoin fully fungible, that’s a huge advantage that bitcoin has over all other monetary systems.

This is also very important when we think about automated payments, machine-to-machine transactions, and smart contracts. A smart contract or a machine can't evaluate counterparty risk or AML risk in a transaction; it can only look at the transaction. The only way to make a machine automatically do this is to make the system more permissioned and try to eject "bad" users from the system, which would harm the permissionlessness of bitcoin. So we, the developers in the community, think that being cash-like is a very important competitive advantage of Bitcoin which supports other competitive advantages like smart contracts and machine-to-machine transactions, and that if we want bitcoin to grow in the world then we need to protect this advantage and find ways to further it. If we don't do this, then Bitcoin might be supplanted in the market by alternatives (like zcash, monero, etc.) which do.

Legally or technically?

If legally, then this might not be universal across the countries.

You make it work technically so that the legal choice becomes irrelevant. You don’t need to make that kind of decision then.

If you are trying to make this technologically fungible, then you have to make a serious change to how bitcoin works?

So there are ways with no protocol changes to achieve much better fungibility in Bitcoin. For example, this is done with lightning network. Some technology, like coinjoin, is built into bitcoin from day one. One of the things this results in is that someone who is engaged in criminal activity can already get pretty good privacy in the system. They can mine in order to get fresh coins. They can use coinjoins. They can swap their coins with other users. So the criminal actors already have fungibility enough for them. So the remaining question is what about the non-criminals? What fungibility do they have? What risk are we placing on users in our system?

Criminal users do not need as much fungibility as you might think. They need money laundering. The irony is that they are trying to make dirty money look clean. They actually want a paper trail. Criminal users want a paper trail. That’s the weird paradox here. They do not want an invisible trail.

For most of the people doing coinjoin, you think: why do they do that? I think most of them do it because they are probably not very clean, so they use coinjoin, ….

Criminals make false transactions that make them look like real transactions like “I sold a car” or “I sold a t-shirt”.

I have been running my joinmarket coinjoin client the whole time on my laptop for this whole event. Why should I want to use a system that has poor privacy? I would rather use monero or zcash. Why would I want money that I cannot use to pay my obligations or to buy goods and services? You have competition from centralized systems (which often have good privacy), and decentralized things which offer better privacy than bitcoin.

I received payments for moderating a forum online. Some of the coins from those payments came from totally lawful gambling sites, and the coins went to the forum, and then the forum paid me. And I deposited to Coinbase, and then it took weeks of arguing with Coinbase to get it fixed. It was not Coinbase's fault. They exist in a regulatory environment where they are forced to act in a certain way. The United States is very negative about lawful gambling services from other countries. Coinbase had to do this because they live under the jurisdiction of U.S. law. It's unfortunate because it's an application that U.S. law hates, even though I was paid for something completely unrelated to gambling. I never gamble. This is an example of getting hurt by a lack of fungibility in Bitcoin.

I think that responsibility should be applied to the user, and not to punish the entire system or the exchanges for the behavior of bad actors. Fungibility protects the innocent. Fungibility is about protecting the innocent. The criminals want to money launder, and they want public records.

I will tell you a true story. Some user a few months ago deposited 1000 BTC to an exchange. They believe the coin comes from a darknet coin mixing service. So he seized the money and asked the user for their photo ID and their AML document. After receiving the document, what did you… Did you release the coins? Yes.

Wouldn’t you like a system where you can’t even tell whether it’s from a darknet market in its history?

Well they might be required by the police to do this.

I have an interesting experience where my employer is regulated as a commodities trading platform, … the interesting thing is that even if bitcoin is completely fungible, the regulators are completely okay with that. The thing is that the regulators require us to perform AML and KYC regardless of the fungibility of the underlying coins. So if the coins are all equal, with equal origins that are indistinguishable, the regulators are okay with this. We are still obligated to investigate who our customers are, but the actual network-level blockchain sources are completely unimportant.

….. once it’s in your wallet, it’s moving around hand-to-hand cash in an economy, and in an exchange it’s more like a bank account where you are transferring it to someone else and someone. …. cash on hand is more fungible than cash in a bank account.

We were talking about fungibility, AML and KYC for exchanges. Could you share some experiences from your exchange about that?

If there is a problem, then the police and government will come in and want to find out where the coins went.

The assertion was that fungibility is a desirable long-term goal or property. We are trying to talk about long-term characteristics that are interesting to all of us as a group.

It’s hard to say I think. If you have fungibility, then it is what it is. It could be the argument that, if everything is the same, then there is nothing you could do. You can’t really say it’s good or bad right now, but right now the regulators do ask for where the coins go or where they were from. If it’s not to a known address, then there’s nothing you could do about it anyway.

Confidential transactions? Could we have updates?

As some people here know, there are a number of technologies for improving confidentiality and privacy in bitcoin. Coinjoin is one of these R&D efforts. Coinjoin has a limitation, which is that it does not hide the values of the amounts being moved. Commercially, the amounts can be valuable information. The difference in values from before and after a coinjoin can be used to dis-entangle the coinjoin and find out the information. There is an R&D effort for signature aggregation which can increase blockchain capacity by 30%. It will also let you save fees by using coinjoin; it's a side-effect of aggregation.
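
As a toy illustration of the amount-linkage limitation just described (the amounts and matching logic here are made up for the example): when values are visible, an observer can often match subsets of inputs to outputs by their sums alone and undo the mixing.

```python
from itertools import combinations

inputs = [0.7, 0.3, 1.2, 0.05]   # visible input amounts (two participants, unknown split)
outputs = [1.0, 1.25]            # visible output amounts (fees ignored for simplicity)

# Find a subset of inputs whose sum matches the first output exactly,
# linking those inputs to that output despite the "mixing".
for r in range(1, len(inputs) + 1):
    for combo in combinations(inputs, r):
        if abs(sum(combo) - outputs[0]) < 1e-9:
            print("inputs", combo, "likely belong to the owner of output", outputs[0])
```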

About a year ago, there was a publication for confidential transactions. It exists in various sidechain systems now. CT (confidential transactions) makes the values or amounts of bitcoin transactions completely private. We have been working on making this more efficient; unfortunately the previous construction of CT added size. We have since then made it 20% faster. We have made it natively support colored coins and assets, and making them private, which is perhaps more important for other systems than it is for bitcoin. We have also more recently come up with better ways to combine it with coinjoin that make it easier to deploy. We have been working on improving that technology. One of the great things about CT technology is that it only has a constant-factor cost of scaling. If you were to apply confidential transactions to the bitcoin network, the transactions that use CT would be larger, but that would be the main downside. The ring signatures in monero and the zcash tech have worse long-term scaling characteristics; CT is better in this respect. I hope that this confidentiality work will benefit bitcoin in the near future, and if not bitcoin then at least sidechains.
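
For intuition about how CT can hide amounts while still letting validators check that nothing is created out of thin air: real CT uses Pedersen commitments over an elliptic curve (roughly r*G + v*H) plus range proofs. The toy sketch below substitutes plain modular arithmetic for the curve, which is insecure and only meant to show the balancing property.

```python
# Toy, insecure illustration of homomorphic commitments (NOT real CT).
import secrets

P = (1 << 127) - 1      # toy modulus; a real system uses an elliptic-curve group
G = 2**64 + 13          # stand-in "generators"
H = 2**80 + 57

def commit(value: int, blinding: int) -> int:
    return (blinding * G + value * H) % P

# One input of 5, two outputs of 3 and 2. Blinding factors are chosen so they
# cancel; a validator only checks that the commitments balance, never the amounts.
r_in = secrets.randbelow(P)
r_out1 = secrets.randbelow(P)
r_out2 = (r_in - r_out1) % P

c_in = commit(5, r_in)
c_out = (commit(3, r_out1) + commit(2, r_out2)) % P
assert c_in == c_out    # inputs equal outputs, with no amount revealed
```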

You could say that the fungibility of coins in an exchange is different from the fungibility of a coin in your pocket. This is the same for physical cash and bitcoin. There are a lot of people who really love fungibility. They would be very sad if bitcoin became less fungible. If you talked to many of the bitcoin holders and asked them, "if bitcoin lost this feature, would it upset you or would you stop using bitcoin?", then for fungibility, many people would get upset. It's something that many people are passionate about. That's why you hear all this excitement about new tech and about decentralization being important: decentralization is the current mechanism that bitcoin uses to have fungibility. It's the assurance that someone will eventually process your transaction. If one miner has a government that says don't process this transaction, then someone with a few terahashes in their garage in another country will still be able to take that transaction, because of decentralization. So you need some reasonable level of decentralization to guarantee that all transactions will get processed.

Lot of stuff. But government and police may not share the same view. If we change the current fungibility technically, governments may change their attitude toward bitcoin.

Properties of good money: http://contrarianinvestorsjournal.com/?p=391

He thinks there are more people doing bank transactions than people using bitcoin. So does bitcoin really need fungibility? He believes that maybe we should focus more on the underlying technology to make bitcoin processing, like lightning network and making the infrastructure more robust.

Keep in mind that if you don't need fungibility in a system, then you do not need mining. You can use banks. You don't need infrastructure. We have a severe risk of competition from highly centralized, efficient systems.

We need … very easy to … .. banks and governments… still… maybe we need a fungibility, but maybe we can make this fungibility in another layer of this network, like sidechains maybe, and make it into the mining network, but on the main chain we should make the protocol as simple as possible.

I want to make more of a point there. If the main chain is fungible in the sense that you can do a transaction and it will always be mined, then you can make layers on top to make it more fungible. You can do coinjoin and lightning as second layers. The protocol does not have to be complex. If someone wants to make bitcoin less fungible, then it should not be possible to just go to you and say here's a list of addresses to blacklist. I think right now it's not a good story. When we look at the hashrate graphs on blockchain.info, it's not the true story, but it hurts investor confidence. We need to assure them that they will be able to spend their money. At the exchange level the privacy might not be good, but they need to be able to move money around.

Machine-to-machine transactions and smart contracts, it might be difficult to make alternative protocols reliable if bitcoin is not sufficiently fungible. Automated decisions made by wallets will not be able to respect the non-fungibility of a coin, which is an invisible property, which makes all of these bitcoin systems less usable. However, I agree that we should move complexity to other layers and keep the base layers simple.

My impression is that we are more in agreement than it seems.

A point there is that in bitcoin, even with no change to the system, fungibility changes over time. Originally in bitcoin wallet software, a new address was used for every transaction. Back when bitcoin started, there were no companies like Chainalysis or Elliptic that connected to every node in the network and monitored everything. Sometimes tech has to adapt to the world, in the same way that the block size has to change to adapt to the world.

Like a VPN, you run it on top. I think we are in agreement on this. I agree. It’s not a magic bullet. In a bad world where bitcoin was very unfungible, then lightning and sidechains might not even be possible, or they might be unreliable. You cannot build a fungible system on top of one that is non-fungible. It’s not possible. It’s effectively impossible. Anything built on top of it cannot be fungible.

If you want to break those privacy properties, you break the underlying layer. This is not a detail. This is important. Building something “strong” on top of something “weak” is silly. We should make the base bitcoin layer as strong as possible. This sounds like a detail, but it’s not a detail.

His point is that he still has to wait to see until after the exchange has used this fungibility functionality, then they will know the actual effect on their operations and systems. They think it is too premature to talk about its actual effect. So let’s not spend too much time on this. Let’s move on to something else.

Let’s take a 5 or 10 minute break and then we can continue.

Hard-forks

What topics do the miners consider important that have not been discussed?

Please have a seat so that we can continue and wrap up. Long day, lot of information, exhausted. We still have a few more topics to touch before we wrap up the meeting. Let’s go with the miners first. Who would like to start?

Under the existing situation, the … of Ethereum… have already given us an example of a hard-fork. Bitcoin could have a similar situation in the near future. How should we solve this problem? In the long-term, we should promote the use cases of bitcoin. In the short-term, we should build a wider, broader consensus. We need to form a platform to communicate and talk. It should include Core developers, miners, Bitcoin companies, bitcoin exchanges, and other users. Based on all the lessons we have learned from the other coins and their histories, we should give up the fighting and conflict. We should pay our attention to communication and cooperation. We should build such a platform.

Another topic we would like to talk about is that right now it’s July 31st and it’s the last day of the HK agreement. We think that individuals should give an explanation to the community.

How should we make that communication platform? What explanation should we give to the community? The miners need to give some kind of explanation to the community as well. We have those pressures as well.

Personally I am happy to see work on that.

If someone was to post about the status of this, to draw attention to the documents and code that he has written, would this be the communication you are looking for about the agreement?

I believe that would help a lot. Okay, I can do that. One thing that I wanted to ask on that subject: because of the people opposed to the hard-fork from the agreement, and because of the people who keep saying the deadline is up, maybe we should remove the pressure, and I will continue to work on a hard-fork anyway?

We should research this. He doesn't want to be working under pressure. There are many open problems still. It's better to do this without that pressure.

Okay a better communication would be, how would you like to do this work? I think it’s important that people do not continue to perceive that work as a closed door Hong Kong agreement to do a hard-fork. The hard-fork itself must be designed organically and normally.

So perhaps a question would be, on a personal level, would you want to work on a hard-fork proposal? Yes I am going to do that, but if there’s an agreement, then the community perceives that negatively.

So maybe present about the deliverable, and now it is better for the larger community to collaborate on it. So further collaboration and work would be open collaboration and work. Would that be agreeable and good?

You can make a proposal to the wider community. The proposal might not be completely finished, it could be a draft, it’s obviously something that requires further work, it’s the proposal so far. The HK agreement said after segwit, too.

It is much better to admit that the time is delayed. It’s not a problem. We can just say frankly, it’s delayed. For lots of engineering reasons; ethereum stuff, we have lessons learned; segwit is delayed, etc. Are we okay with saying this? It’s not just him, it’s everyone.

It is not fair that the attention has shifted to him. It’s not all on you. We want everyone to be aware of this. If this work going forward is seen as a forced outcome of a closed room agreement, then it will be opposed on that basis alone. I think this work is good and useful, but we have to remove the specter of “this is a forced change on the network”. We need to collaborate to improve that. We can say that the process is delayed, segwit caused delay, complexity of this caused delays, and that’s fine, but we need this to go from an “HK agreement” proposal to being a community proposal. I am saying this for the sake of the proposal. Without this, it cannot get widespread public support.

May I ask a question. Why is he the one working on this, and why was it never communicated? Why wasn't it shared? Why did we not hear about this work? Why did you not hear about this work?

I can’t stop them from working on things. He published about it on the bitcoin-dev mailing list. I think we all just are very bad at communication. So within the developer community, some of us were not aware either. None of us were eager to talk about this when it sounded like this was going to be used to block segwit.

My belief is that what has been proposed, trying to turn this HK agreement into community-based consensus work, is somewhat hard to do and impractical. There could be another way to realize that. We could do it through a foundation such as we were proposing earlier. It could be the consensus of a new foundation to try to realize that.

I think that if you tried to do that through a new foundation, it would lead to a lot of backlash against that hard-fork plan and organization.

You would have to get people to accept and embrace the foundation. That would take time.

We could explore the possibility of doing that. We can be open to working with other efforts.

It would be extremely challenging to do it that way.

We should talk more about it.

At the bare minimum, the best suitability for a foundation is things like marketing efforts, not hard-forks.

Regardless of which plan we are trying to adopt, we cannot, it’s not reasonable to expect 100% consensus. So the question is to what extent we want to reach consensus. How much effort would we like to put into engaging the community to get there?

When you said you wanted to use the HK agreement to make it a community proposal for a hard-fork, you meant the development community? Or do you mean everyone?

I mean everyone. The point is that politically, the Bitcoin ecosystem should not accept imposed rule-changes on the network. And so, a hard-fork that comes out of a closed-door meeting sounds like an imposed rule change on the network. There are many people who will principally reject this, reflexively. I want there to be collaboration. Most people will ignore it. But I want there to be collaboration so that we can say this is a product of the Bitcoin community. It cannot be a closed-door agreement.

This was Luke’s post on the mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012389.html

At this point, it would be a little bit difficult or challenging to reframe the HK agreement as a broader community-based effort. Trying to reframe the HK agreement as an open community agreement would be difficult. The simplest way is to start from the HK agreement, then try to pull people into that and gain consensus on it. That would be the simplest way.

My understanding of the plan in February was that we would make something, we would then propose it to people, and then hope they would like it. We would hope the community would reach consensus on it, and we would ask them to discuss it and build on it and so on. So I think that’s good, to me personally.

In order to make bitcoin grow sustainably in the future, it’s important for us to build a platform here as I mentioned before. We need communication. We need to resolve differences. We need to bridge gaps so that we don’t have those situations again. A lot of us in this room did not even know about that work. We should all try to prevent that from happening in the future, and a platform could facilitate that.

We should work to improve communication.

Tomorrow we don’t go to Stanfordland until 11. Perhaps tomorrow we could talk about how miners can get into better communication with Core and how Core can have better communication. Yeah we could spend a few hours on this tomorrow morning. We have breakfast at 9am tomorrow again.

My point is that now the market cap for Bitcoin is almost over $10 billion USD. It is big. Driven by private interest and technological interest, there will be a lot of … that will be popping up one after another. And of the Core developers here, many of you are the most influential figures in the Bitcoin community. So the sooner you form such an organization, the better positioned you are to protect yourself from those potential competitors in the future. So, don't mess up.

We could post on the medium blog and link to his code and proposal and write a few things. Maybe, I think, it should be him who tells the community. He can sign his name on the blog post. The update should be provided in the same location. We just need some channel to announce it to the public, for mass media and so on.

I think it would be good to post it on the bitcoin-dev mailing list.

He should also mention the contributions from the other contributors. "In the future we need to move forward and have better communication". Does that sound good?

We need to be careful to not make this sound like bitcoin is talking about hard-forking immediately after Ethereum Foundation’s blunders.

We have refreshments in the back.

Breakfast and meeting starts at 9am tomorrow morning. We can talk for an hour or two, then we will head out to Stanfordland.

Day 3

We don’t have a lot of time in the morning for discussion. We have a hard stop for discussion in the morning. In the morning, we will have a wrap-up discussion. We will leave for Stanford and we can share rides. I have already sent the parking email and we can all park at the same location and walk over to the building.

Block withholding attacks

I had a brief chat with a developer during the Hong Kong Scaling Bitcoin event. I think a miner and I share the opinion that it is quite a challenge for any pool operators to … I think most of the developers haven’t recognized how risky this could be. I would like to hear more from you about this.

From my side, I am not familiar with mining. The attack is when one of your participants finds a valid proof-of-work and does not return it to the pool? Okay.

And this is particularly acute for pay-per-share pools because they continue to get effectively the same return until the pool goes bankrupt.

We think there have been real attacks against ghash.io and we also think it's the main reason why they had such bad luck for a long time. And also, there is one more possibility: if one pool suffers from this kind of attack, then they may launch the same attack against another pool if they think the other pool was launching that attack against them. And if this happens, maybe many people just attack each other with this withholding attack, and it's quite dangerous for the whole industry. It's like a death spiral.

https://petertodd.org/2016/block-publication-incentives-for-miners

I don't think you need to talk about a death spiral; block withholding is important anyway. One of the challenges with fixing this is that there's no fix that we know about that doesn't also kill p2pool or kill a totally decentralized pool as a viable option. Several years ago, this was much more of an issue than it is now, because it seems that p2pool has died its own death independently of this attack. There are several ways to fix block withholding. The mining pool could retain some secret which it gives out only after you return the block to it. And then this is made part of the Bitcoin consensus rules. The most straightforward way to deploy this would require a hard-fork. Unfortunately this hard-fork would be incompatible with the safe hard-fork method that was discussed yesterday. There might be some ways to fix that and improve that.

There is a quasi soft-fork way to fix block withholding attacks. Unfortunately it’s kind of ugly. I am not sure if it’s a path we want to go down, although it’s easier to deploy. The basic idea in all of these fixes is that you can make it so that the hasher can tell if it has a valid share or not, but it can’t tell if it has a valid block. So it emits the share, and only with the secret data can it do the final check to see if it has a valid block. How CPU costly is checking? It’s .. same as validating a share.
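
A conceptual sketch of the share-versus-block split just described (the targets, names, and commitment scheme are illustrative assumptions, not a concrete consensus rule): the hasher can check share validity on its own, but telling whether a share is also a valid block requires a secret held only by the pool.

```python
import hashlib, os

SHARE_TARGET = 1 << 235        # easy target the hasher can check by itself
SECRET_TARGET = 1 << 252       # extra check worth a few bits, evaluated with the pool's secret

pool_secret = os.urandom(32)                            # known only to the pool
pool_commitment = hashlib.sha256(pool_secret).digest()  # would be committed in the block template

def is_valid_share(header: bytes) -> bool:
    # The hasher can evaluate this locally for every nonce it tries.
    return int.from_bytes(hashlib.sha256(header).digest(), "big") < SHARE_TARGET

def is_valid_block(header: bytes, secret: bytes) -> bool:
    # Only someone holding the secret can tell which shares are full blocks,
    # so a saboteur cannot selectively discard the block-solving shares.
    if hashlib.sha256(secret).digest() != pool_commitment:
        return False
    second = int.from_bytes(hashlib.sha256(header + secret).digest(), "big")
    return is_valid_share(header) and second < SECRET_TARGET
```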

As far as I know, there is no way to detect all the possibilities that signal to the attacker. There are also some .. that the detector can detect from…

One of the things that fixing block withholding helps with is this issue of “accidental block withholding”. Because there is not enough nonce space in the blockheader, mining software has become sophisticated with reaching into the bitcoin block and mucking around with the internals of the block. This has made it easy for authors of mining software and device firmware authors to mess up their handling of mining, such that they correctly return shares (because if they didn’t do this then they would notice immediately) but they don’t correctly return blocks. There have been a couple of cases where this has occurred. One reason for this is that stratum encodes the difficulty as a floating point number in JSON or JSON-like format, which has made it easy for people to mess up type handling and do dumb things. There are some cases which I am confident were accidents. But from a pool’s perspective, it’s even worse than a malicious withholding. A malicious withholder will at least be strategic, so it’s kind of worse if it’s not even a malicious event. I think it is fair to say that I would like to fix it. When we talked about fixing block withholding several years ago, with the mining community back then, there was relatively little interest in fixing it. Part of the reason for that lack of interest was that, at the time, there were no pay-per-share pools. Many of the existing pool operators (at that time) felt that they did not have to worry much about withholding and that the attackers would only be harming themselves. Since then, the pooling climate has changed. Some people have published an analysis that particularly with the existence of Very Large Pools there are ways to mine strategically that profit from withholding rather than merely being destructive with withholding.

In terms of choosing payout mechanisms, like pay-per-share, pay-per-last-share, all these different things, is this user driven? Do people want X or Y method? How do pools decide on this? What has been the recent demand or drive behind this?

You said it’s mostly pay per share?

F2pool and antpool and BW are pay-per-share. Is that mostly driven from the pool who says this is easier and better for us?

Only users want this. If you use PPS, maybe the mining pool doesn’t care, as the operator. Users do not like the fluctuation of their income. That’s most of the reason why there are pools.

That’s the reason pools exist in the first place.

We were the first major mining pool in China to do PPS. You can see everyone else copied their FAQ from us.

It was surprising to me to see the migration back to PPS because there was a period early on in Bitcoin’s life, where PPS was used, and it was attacked and then people stopped using it. I was surprised to see pools transferring back to that. I understand, though, the user concerns. One of the biggest block withholding attacks we knew about with PPS was that BtcGuild lost at least 1400 bitcoin due to block withholding. It seemed like it was inadvertent.

… ((lost connection to google doc, briefly commandeered another laptop))

Could you fix this in a soft-fork?

There is a soft-fork way to fix this. It’s a little bit ugly. It’s not a pure soft-fork. The normal way to fix withholding is a hard-fork that changes how someone decides whether a block is valid or not. It changes the definition of a passing block. The way you do this with a soft-fork is you impose a new rule, so you have the original rule and a new rule. The new rule starts with zero difficulty and you ramp it up slowly over time. It lowers the network hashrate by a small fraction of a percent. You ramp it up over time to the point where it actually has enough bits of entropy to effectively stop withholding. It has the soft-fork-like advantage that it cannot encourage a network split over it. But in every other way it has disadvantages.
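As a rough illustration of the ramping idea, and again with hypothetical parameters rather than the actual proposal, the extra rule could start at a target that rejects essentially nothing and then be tightened on a fixed schedule, with normal difficulty retargeting absorbing the blocks it orphans along the way:

```python
# Toy sketch of the quasi soft-fork: keep the original PoW rule and add a
# second, secret-dependent rule whose own difficulty ramps up from zero.
import hashlib

def h(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

BLOCK_TARGET = 2**224    # the original consensus rule (toy value)
START_HEIGHT = 500_000   # hypothetical activation height
STEP_BLOCKS  = 2016      # tighten the extra rule once per retarget window
MAX_STEPS    = 1100      # stop once the rule has enough bits of entropy (illustrative)

def extra_target(height: int) -> int:
    # Starts at "everything passes"; each window it gets ~1% tighter, so only a
    # small fraction of otherwise-valid blocks is orphaned per step while the
    # regular difficulty adjustment catches up. The schedule is purely illustrative.
    if height < START_HEIGHT:
        return 2**256
    steps = min((height - START_HEIGHT) // STEP_BLOCKS, MAX_STEPS)
    target = 2**256
    for _ in range(steps):
        target = target * 99 // 100
    return target

def block_is_valid(header: bytes, pool_secret: bytes, height: int) -> bool:
    # Rule 1: the unchanged PoW check. Rule 2: the secret-dependent check that
    # eventually makes withholding infeasible.
    return (h(header) < BLOCK_TARGET
            and h(header, pool_secret) < extra_target(height))
```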

So don’t fixes to block withholding inherently make selfish mining worse? Because the defense against selfish mining is that the miner can broadcast the block, and now they can’t do that.

In practice, no they don’t. One of the potential answers to selfish mining is that you can imagine a pool that is mining and not announcing the new blocks. One of the ways to avoid this is to have the hashers leak the blocks that the pool solves. With stratum, this is not possible. With GBT mining, it’s possible to do that.

Is that block withholding? Or is that share withholding?

No, that’s selfish mining. If you imagine a pool with a lot of hashrate, more than 1/3rd. If a pool finds a block and instead of announcing a block, perhaps they keep the block for themselves and…after retargeting and so on. And really there’s no interaction here with block withholding. …so the solution to selfish mining cannot work with stratum. I don’t know if pooled selfish mining is a major concern, because it would be very detectable. All the miners would see that the blocks aren’t being announced, and at that point action could be taken.

Do mining pools keep statistics on the luck of different users? It’s not like you can do a whole lot.

Yes, they do that, but it’s not useful. Particularly… you can’t kick them off, it’s not actionable. An attacker would spread themselves across many low hashrate accounts.
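A back-of-the-envelope illustration of why per-account luck statistics are weak evidence against a withholder who splits up their hashrate; the numbers are made up and the model is a simple Poisson approximation:

```python
import math

def prob_zero_blocks(account_share: float, pool_blocks: int) -> float:
    """P(an honest account finds zero blocks) under a Poisson model."""
    expected = account_share * pool_blocks
    return math.exp(-expected)

# One attacker with 2% of the pool over 500 pool blocks: ~10 expected blocks,
# so finding zero has probability ~4.5e-5 and looks very suspicious.
print(prob_zero_blocks(0.02, 500))

# The same hashrate split across 40 accounts of 0.05% each: only 0.25 expected
# blocks per account, so ~78% of perfectly honest accounts that size also find
# zero blocks and the withholder blends in.
print(prob_zero_blocks(0.0005, 500))
```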

Basically, if they are actively trying to hide from you, the only remedy is to close access to the pool to only friends and family. Which is not helpful to the idea of pooling.

We should publish a proposal (or two) on fixing block withholding. I would be happy to work on this. In the next 30 to 60 days, I will put out some kind of proposal on this.

Concluding discussions

We only have 30 minutes left. Who would like to give a summary talk? We don’t have a lot of time. So could you keep it short and simple?

Maybe we could close by having a discussion about communication platforms, where we come together and how we stay current with each other.

I think we have had very good discussions this weekend. I hope we have improved our relationships greatly. I would like to talk about how we could continue to do that going forward, how we can continue to have open collaboration.

My suggestion would be, not reddit.

I believe that, because the market cap of Bitcoin is continuing to grow, Bitcoin enterprises and companies are the engines behind that growth. They are the ones most incentivized to protect Bitcoin and its ecosystem. I am proposing for this platform or organization that the Bitcoin companies and enterprises, whether miners or exchanges or application developers, are the major participants. To start out with, as the first step, we can set up some kind of social circle. It’s kind of like a consortium. We can connect through email and wechat or skype. There are all kinds of mediums and technology available for us to set up this circle so that we can start communication channels first. So the participants should be the major players in the Bitcoin industry and they have to pay a certain fee to join this social circle. So it’s semi-public, but it’s not free. On the Core side, they can send some representative to represent Core development and to join this social circle. And then within this circle we try to work together, with coordination, and try to establish some kind of foundation-like entity. I know you guys hate the idea of a foundation. For lack of a better word, I am using the word foundation as a reference. So this social circle is monitored by its chairman and its secretary, to host the communication between different parties and companies within that circle. So that is the proposal as the first step towards building this communication platform.

What do you guys think?

Sounds reasonable. I guess there’s nothing seriously wrong with that. I have heard some conversations where people said let’s create that. Nothing has happened so far.

Who would like to take the initiative? Someone has to drive it.

Maybe using the Bitcoin roundtable as the name of the org.

They can organize everything on the legal level. How it will be more efficient to open it. And then we can just decide.. Or how they choose the Chairman, some leaders, etc. And who will it be? So any suggestions from the Core developer side about this proposal?

The pay-for-access component of it may have bad optics to the public when it’s presented that way. Participation in any group has costs, regardless. There are a lot of costs that people have incurred to come here to this meeting. We should be cautious in how this organization is set up and presented, to avoid a bad image.

Maybe it’s not paid, maybe it’s sponsorship.

I think it’s useful to have a pay component to provide a neutral access control mechanism. The argument would be that “if the pay level is too high, then that access control mechanism is not particularly neutral”. I think this can be worked through and solved. I think it’s a potential source of issues that should be considered.

This circle is trying to protect the commercial interests of the Bitcoin companies and industry. I believe that bitcoin in the long run is driven by those enterprises and companies.

We are still in the initial stage of Bitcoin’s development. It’s still a hacker culture. It’s informal. Casual. It’s relaxed. But now, at this point in its history, he believes that it should become more formal and formally structured, driven by commercial interest. So there could be a transition from a hacker culture to…

I think that there may be a mistaken belief that these are incompatible cultures. I think that if you look at all the protocols of the internet, then you will see that almost all of them are developed in the context of IETF, like the HTTP protocols. Most of the attendees at the IETF meetings are working for some of the largest tech companies in the world like Google, Microsoft and Cisco. They participate in an open environment, talking about technology. There is no fee to participate in IETF. Individual meetings themselves have conference fees, but the mailing lists are open. This environment is the one in which the Internet is developed. I agree that bitcoin as it grows needs to become more formal and professional. However, there are many forms that this can take. It definitely needs to be heavily driven by commercial players, not volunteerism. However, this does not necessarily mean that you have a hierarchical system of authority where an elected body is in charge of Bitcoin or something like that. An example of this is how the Internet is organized.

It’s worth pointing out that we are all here. Well, all of us here are here. This meetup was able to be arranged, without that central organization. It was not a top-down meeting.

This idea of “hacker culture” is actually not really what the IETF and these kinds of process organizations… I think this is actually just the outcome of professional development in a situation where you cannot impose on other people. I think this is what the Internet protocol saw as well. There are professionals developing these technologies. They are not in a situation where they can impose.

Decentralized professional development, which happens to look hackery, because the hacker behavior is decentralized, but not necessarily professional. So it is kind of similar.

He believes that this hybrid model is a good idea. He also believes that they can co-exist in a healthy and organic way. He was thinking that maybe there could be two payment and donation structures so that we can collect and solicit funding for this forum. Maybe some industry companies can pay a fee to join this circle or some others can just donate or sponsor this entity. They would pay a fee to join. For those who pay a fee, they would have a voting right. For those who donate, they do not have a voting right. And the opinions or voices coming out from this organization or circle only represent themselves; they do not represent the entire Bitcoin community because there are other organizations or other groups within that community. So that’s his idea.

So….. like, one thing I want to say, … let’s discuss more, we’re not going to solve all of this in 5 minutes.

So… the one thing to think about is that today, bitcoin is small and friendly. We know each other. We know those companies. Relatively small. Wait a few years. You haven’t seen “more” yet, my friend. So the point is that, if you look at HTTP, now there are very big powerful companies involved like Microsoft. And they want to control HTTP. And IBM wants to control it separately. Bitcoin is finance. When we think about setting up an organization, we’re thinking about ourselves. But in 5 years’ time, that will include Goldman Sachs and Microsoft. And they will each send 20 brilliantly manipulative people and they will bribe politicians and spread mayhem and chaos. We the Bitcoin world will lose control of this organization. Goldman Sachs will create 40 subsidiaries, they will each pay the X membership fee, and then they will have the entire vote of the organization. So we need to be mindful of this sort of situation. The Internet IETF things for example have evolved to allow for technical development to proceed in a way that promotes good technical outcomes that are good for users even though some of the engineers work for Big Evil Corps like Microsoft.

Are they independent even though they are working for big companies?

Somewhat. That’s the objective, anyway, with IETF. Oh really?

This discussion is critical and important. There have been many good ideas expressed. We can’t solve everything here right now. There’s too much to discuss. We should not have the expectation of solving everything here.

We have three minutes. We have to find a place to park, then we have to walk. Stanford is down the street but it will take 30 minutes to get there. The Stanford campus is huge. I sent the parking instructions. That’s the closest to the building, so please check your email.

His point is that it doesn’t matter if in the future some kind of bigger company like Goldman Sachs tries to take over. It’s inevitable that some company will try to take control of the organization. At that point, we just have to move on and form another entity and another group. It’s not going to be the only social circle in the Bitcoin ecosystem. There could be many of these. It’s not going to be that they take over one and then they get to take over Bitcoin. They can try whatever they want to take Bitcoin over. We can always change our tactics. We can move to other mediums.

As long as the organization doesn’t give the impression it has control over the protocol.

They will try to setup a circle. They are asking you whether to do it at this point. They don’t know if you are willing to join an organization.

They are offering a proposal that we can form this together at this point.

I think we are interested in any and all opportunities to collaborate.

We will try to setup a commercial circle. We hope that the Core developers will join.

In the afternoon, we can come back after we visit Stanford. We have an hour gap and we can come back here. I have reserved the conference room for the whole day. We can still come back here. Googleplex is not very far from here. It takes 30min to get there or less. We can still come back.

Dan Boneh discussion

http://diyhpl.us/wiki/transcripts/2016-july-bitcoin-developers-miners-meeting/dan-boneh/

Google Tech Talk

http://diyhpl.us/wiki/transcripts/2016-july-bitcoin-developers-miners-meeting/jihan-wu-google-tech-talk/

Other session

Miners have some other events later this evening. Perhaps as a group we should think about any closing summaries?

What do we want to say to any journalists that ask? I think it is important to present this as some people gathering together who have been in the space for a long time, who have had trouble in the past with communication. By seeing people face to face, and talking about things, it helps us discuss.

A comment I made before, a story to take to a journalist later, that bitcoin is a global decentralized system and it works just fine as long as we do our own thing. But it works better if we collaborate. With this weekend, we were able to get to know each other better, we were able to understand many of the things we had in common. We were able to open more lines of communication. We were able to improve our friendships and these things in general are good for bitcoin.

In Zurich, we did a full transcript, but we also published a summary of the meeting, in the following format: https://bitcoincore.org/en/meetings/2016/05/20/ .. if we do the same for our gathering, it would be much shorter than the above example.

We published the Zurich summary first, and the full transcript was published weeks later. In Zurich, it took about two weeks I think to go over the whole thing.

Event Summary

Over the last few days, some Bitcoin developers and miners got together for a social gathering to improve communication, friendship, and to do some California sightseeing. We talked about where bitcoin is and where bitcoin is going. We learned a lot from each other. We also visited Stanford to attend a cryptography talk to learn more about potential improvements for Bitcoin, as well as the Google campus to give a presentation and talk about Bitcoin. We had many informal discussions amongst ourselves about topics such as mining decentralization, evolution of the Bitcoin protocol, safety improvements and progress for both soft- and hard-forks, as well as improving communication and cooperation across the Bitcoin ecosystem such as new venues to work together in unison. We think that Bitcoin’s strength comes from the consensus of its participants. Many of us plan to attend Scaling Bitcoin 3 in Milan, Italy and everyone would like to continue with gatherings like these and others in the future with all parts of the Bitcoin ecosystem. We hope to be releasing notes in the near future for parts of the community that were not attending.

在过去几天里,一些比特币开发人员和比特币矿工们在一起进行了社交聚会来增进友谊,改善沟通,以及共同进行了一些观光交流活动。我们谈论了比特币的现在情况以及未来的发展。我们互相之间学到了很多。我们在斯坦福大学还参加了一个密码学的讲座,学到了比特币将来一些可能的改进。然后我们又去了谷歌总部在那里做了一个比特币的演讲和讨论。我们还有很多非正式的讨论,关于矿业去中心化,比特币协议演变,软分叉与硬分叉的安全性的提高和进度,以及改进比特币生态系统里的沟通与合作,比如通过新的渠道来共同行动。我们认为比特币的强处来源于所有所有参与者的共识。我们其中许多人有计划参加在意大利米兰的2016 Scaling Bitcoin 。我们每一个人都希望在未来和比特币生态系统的所有组成部分继续进行这样的聚会。对于那些没有能够参加这次聚会的社区组成部分,我们会在近期公布我们的谈话记录。

diff --git a/bitcoin-explained/taproot-activation-lockinontimeout/index.html b/bitcoin-explained/taproot-activation-lockinontimeout/index.html index d2f8476bd9..cf807321c5 100644 --- a/bitcoin-explained/taproot-activation-lockinontimeout/index.html +++ b/bitcoin-explained/taproot-activation-lockinontimeout/index.html @@ -11,4 +11,4 @@ Sjors Provoost, Aaron van Wirdum

Date: February 26, 2021

Transcript By: Michael Folkson

Tags: Taproot, Soft fork activation

Category: Podcast

Media: https://www.youtube.com/watch?v=7ouVGgE75zg

BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki

Arguments for LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

Additional argument for LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html

Aaron van Wirdum article on LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation

Intro

Aaron van Wirdum (AvW): Live from Utrecht, this is the van Wirdum Sjorsnado. Sjors, make the pun.

Sjors Provoost (SP): We have a “lot” to talk about.

AvW: We have a “lot” to discuss. In this episode we are going to discuss the Taproot activation process and the debate surrounding it on the parameter lot, lockinontimeout which can be set to true and false.

SP: Maybe as a reminder to the listener we have talked about Taproot in general multiple times but especially in Episode 2. And we have talked about activating Taproot, activating soft forks in general in Episode 3 so we might skip over a few things.

AvW: In Episode 3 we discussed all sorts of different proposals to activate Taproot but it has been over half a year at least right?

SP: That was on September 25th so about 5 months, yeah.

AvW: It has been a while and by now the discussion has reached its final stage I would say. At this point the discussion is about the LOT parameter, true or false. First, to recap very briefly, Sjors can you explain what we are doing here? What is a soft fork?

What is a soft fork?

SP: The idea of a soft fork is you make the rules more strict. That means that from the point of view of a node that doesn’t upgrade nothing has changed. They are just seeing transactions that are valid to them from the nodes that do upgrade. Because they have stricter rules they do care about what happens. The nice thing about soft forks is that as a node user you can upgrade whenever you want. If you don’t care about this feature you can upgrade whenever you want.

AvW: A soft fork is a backwards compatible protocol upgrade and the nice thing about it is that if a majority of miners enforce the rules then that automatically means all nodes on the network will follow the same blockchain.

SP: That’s right. The older nodes don’t know about these new rules but they do know that they’ll follow the chain with the most proof of work, as long as it is valid. If most of the miners are following the new rules then most of the proof of work will be following the new rules. And so an old node will by definition follow that.

AvW: The nice thing about soft forks is that if a majority of hashpower enforces the new rules the network will remain in consensus. Therefore the last couple of soft forks were activated through hash power coordination. That means that miners could include a bit in the blocks they mined signaling that they were ready for the upgrade. Once most miners, 95 percent in most cases, indicated that they were ready nodes would recognize this and enforce the upgrade.

SP: That’s right. A node would check, for example every two weeks, how many blocks signaled this thing and if yes, then it says “Ok the soft fork is now active. I am going to assume that the miners will enforce this.”
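For illustration, a stripped-down sketch of that per-window check; the real logic lives in Bitcoin Core’s version bits code and has more states, and the bit number here is just a placeholder:

```python
WINDOW = 2016        # roughly two weeks of blocks
THRESHOLD = 1916     # ~95% of 2016, as used for earlier soft forks
BIT = 1              # hypothetical version bit assigned to the upgrade

def signals(version: int, bit: int) -> bool:
    return (version >> bit) & 1 == 1

def window_locks_in(block_versions: list[int]) -> bool:
    """block_versions: the nVersion fields of one full retarget window."""
    assert len(block_versions) == WINDOW
    return sum(signals(v, BIT) for v in block_versions) >= THRESHOLD
```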

The ability for miners to block a soft fork upgrade

AvW: Right. The problem with this upgrade mechanism is that it also means miners can block the upgrade.

SP: Yeah that’s the downside.

AvW: Even if everyone agrees with the upgrade, for example in this case Taproot, it seems to have broad consensus, but despite that broad consensus miners could still block the upgrade, which is what happened with SegWit a couple of years ago.

SP: Back then there was a lot of debate about the block size and lots of hard fork proposals and lots of hurt feelings. Eventually it was very difficult to get SegWit activated because miners were not signaling for it, probably mostly intentionally. Now it could also happen that miners just ignore an update, not because they don’t like it, just because they’re busy.

AvW: Yeah. In the case of SegWit that was in the end resolved through UASF, or at least that was part of it. We are not going to get into that in depth. That basically meant that a group of users said “On this day (some date in the future, it was August 1st 2017) we are going to activate the SegWit rules no matter how much hash power supports it.”

SP: Right, at the same time and perhaps as a consequence of that, a group of miners and other companies agreed that they would start signaling for SegWit. There were a whole bunch of other things going on at the same time. Whatever happened on 1st August, the thing activated, or a little bit earlier I think.

The lockinontimeout (LOT) parameter

AvW: Now we are four years ahead in time, it is four years later and now the Taproot upgrade is ready to go. What happened a couple of years ago is now spurring new debate on the Taproot upgrade. That brings us to the lockinontimeout (LOT) parameter which is a new parameter. Although it is inspired by things from that SegWit upgrade period.

SP: It is basically a built in UASF option which you can decide to use or not. There is now a formal way in the protocol to do this to activate a soft fork at a cut off date.

AvW: LOT has two options. The first option is false, LOT is false. That means that miners can signal for the upgrade for one year, and if in that year the 90 percent threshold for the upgrade is met it will activate as we just explained. By the way 1 year and 90 percent aren’t set in stone but it is what people seem to have settled on. For convenience’s sake that is what I’m going to use for discussing this. Miners have 1 year to activate the upgrade. If after that year they have not activated it, the Taproot upgrade will expire. It will just not happen, that is LOT is false.

SP: And of course there is always the option then of shipping a new release, trying again. It is not a “no” vote, it is just nothing happens.

AvW: Exactly. Then there is LOT=true where again miners have 1 year to signal support (readiness) for the upgrade. If a 90 percent threshold is met then the upgrade will activate. However the big difference is what happens if miners don’t reach this threshold, if they don’t signal for the upgrade. In that case when the year is almost over, nodes that have LOT=true will start to reject all blocks that don’t signal for the upgrade. In other words they will only accept blocks that signal for the upgrade, which means of course that the 90 percent threshold will be met and therefore Taproot, or any other soft fork using this mechanism, will activate.

SP: If enough blocks are produced.

AvW: If enough blocks are produced, yes, that’s true. A little bit of nuance for those who find it interesting, even LOT=true nodes will accept up to 10 percent of blocks that don’t signal. That’s to avoid weird chain split scenarios.

SP: Yeah. If it activates in the normal way only 90 percent has to signal. If you mandate signaling then it would be weird to have a different percentage suddenly.

AvW: They are going to accept the first 10 percent of non-signaling blocks but after that every block that doesn’t signal is going to be rejected. So the 90 percent threshold will definitely be reached. The big reason for LOT=true, to set it to true, is that this way miners cannot block the upgrade. Even if they try to block the upgrade, once the year is over nodes will still enforce Taproot. So it is guaranteed to happen.
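A rough sketch of what that enforcement could look like from the node’s side, modelled loosely on BIP 8’s mandatory-signaling phase and simplified to the 90 percent figure used in this discussion:

```python
WINDOW = 2016
THRESHOLD = int(WINDOW * 0.90)           # the 90% discussed here (simplified rounding)
MAX_NON_SIGNALING = WINDOW - THRESHOLD   # the ~10% tolerance mentioned above

def must_signal_block_valid(signaled: bool, non_signaling_so_far: int) -> bool:
    """Validity of one block in the mandatory-signaling window for a LOT=true node.

    non_signaling_so_far counts earlier non-signaling blocks in this window.
    """
    if signaled:
        return True
    # Tolerate the first ~10% of non-signaling blocks, reject every one after that.
    return non_signaling_so_far < MAX_NON_SIGNALING
```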

SP: If enough blocks are produced. We can get into some of the risks with this but I think you want to continue explaining a bit.

AvW: The reason some people like LOT=true is that that way miners don’t have a veto. The counterargument there, you already suggested it, is that miners don’t have a veto anyway. Even if we use LOT=false the upgrade will expire after a year, but after that year we can just deploy a new upgrade mechanism and a new signaling period, this time maybe using LOT=true.

SP: Or even while this thing is going on. You could wait half a year with LOT=false and half a year later say “This is taking too long. Let’s take a little more risk and set LOT=true.” Or lower the threshold or some other permutation that slightly increases the risk but also increases the likeliness of activation.

AvW: Yeah, you’re right. But that is actually also one of the arguments against using LOT=false. LOT=true proponents say that, as you’ve suggested, we can do it after 6 months, but there is another group of users that might say “No. First wait until the year is over and then we will just redeploy again.” Let’s say after 6 months Taproot hasn’t activated. Now all of a sudden you get a new discussion between people that want to start deploying LOT=true clients right away and groups of users that want to wait until the year is over. It is reintroducing the discussion we are having now except by then we only have 6 months to resolve it. It is like a ticking time bomb kind of situation.

SP: Not really to resolve it. If you don’t do anything for 6 months then there is only one option left which is to try again with a new activation bit.

AvW: But then you need to agree on when you are going to do that. Are you going to do that after 6 months or are you going to do that later? People might disagree.

SP: Then you’d be back to where we are now except you’d know a little bit more because now you know that the miners weren’t signaling.

AvW: And you don’t have a lot of time to resolve it because after 6 months anything might happen. Some group of users might run LOT=true or….

SP: The thing you’re talking about here is the possibility of say anarchy in the sense that there is no consensus about when to activate this thing. One group, I think we discussed this at length in the third episode, just gets really aggressive and says “No we are going to activate this earlier.” Then nobody knows when it is going to happen.

AvW: Let me put it differently. If right now we are saying “If after 6 months miners haven’t activated Taproot then we will just upgrade to LOT=true clients” then LOT=true proponents will say “If that’s the plan anyways let’s just do it now. That’s much easier. Why do we have to do that halfway?” That is the counterargument to the counterargument.

SP: I can see that. But of course there is also the scenario where we never do this, Taproot just doesn’t activate. It depends on what people want. There is something to be said for a status quo bias where you don’t do anything if it is too controversial for whatever reason. There is another side case here that is useful to keep in mind. There might be a very good reason to cancel Taproot. There might be a bug that is revealed later.

AvW: You are getting ahead of me. There are a bunch of arguments in favor of LOT=false. One argument is we’ve already done LOT=false a bunch of times, the previous miner activated soft forks, and most of the times it went fine. There was just this one time with SegWit in the midst of a big war, we don’t have a big war now. There is no reason to change what we’ve been doing successfully until now. That is one argument. The counterargument would for example be “Yeah but if you choose LOT=false now that could draw controversy itself. It could be used to drive a wedge. We are not in a war right now but it could cause a war.”

SP: I don’t see how that argument doesn’t apply to LOT=true. Anything could cause controversy.

AvW: That is probably fair. I tend to agree with that. The other argument for LOT=false is that miners and especially mining pools have already indicated that they are supportive of Taproot, they’ll activate it. It is not necessary to do the LOT=true thing as far as we can tell. The third argument is what you just mentioned. It is possible that someone finds a bug with Taproot, a software bug or some other problem is possible. If you do LOT=false it is fairly easy to just let it expire and users won’t have to upgrade their software again.

SP: The only thing there is that you’d have to recommend that miners do not install that upgrade. It is worth noting, I think we pointed this out in Episode 3, people don’t always review things very early. A lot of people have reviewed Taproot code but others might not bother to review it until the activation code is there because they just wait for the last minute. It is not implausible that someone very smart starts reviewing this very late, perhaps some exchange that is about to deploy it.

AvW: Something like that happened with the predecessor to P2SH, OP_EVAL. It was just about to be deployed and then a pretty horrible bug was found.

SP: We’ve seen it with certain altcoins too, right before deployment people find zero days, either because they were… or just because the code was shipped in a rush and nobody checked it. There is definitely always a risk, whatever soft fork mechanism you use, that a bug is discovered at the last minute. If you are really unlucky then it is too late, it is deployed and you need a hard fork to get rid of it which would be really, really bad.

AvW: I don’t think that’s true. There are other ways to get rid of it.

SP: Depending on what the bug is you may be able to soft fork it out.

AvW: There are ways to fix it even in that case. The other counterargument to that point would be “If we are not sure that it is bug free and we are not sure that this upgrade is correct then it shouldn’t be deployed either way, LOT=true or LOT=false or anything. We need to be sure of that anyway.”

SP: Yeah but like I said some people won’t review something until it is inevitable.

AvW: I am just listing the arguments. The fourth argument against LOT=true is that LOT=true could feed into the perception that Bitcoin and especially Bitcoin Core developers control the protocol, have power over the protocol. They are shipping code and that necessarily becomes the new protocol rules in the case of Taproot.

SP: There could be some future soft fork where really nobody in the community cares about it, just a handful of Bitcoin Core developers do, and they force it onto the community. Then you get a bunch of chaos. The nice thing about having at least the miners signal is that they are part of the community and at least they are ok with it. The problem is it doesn’t reflect what other people in the community think about it. It just reflects what they think about it. There are a bunch of mechanisms. There is discussion on the mailing list, you see if people have problems. Then there is miner signaling which is a nice indication that people are happy. You get to see that, there are as many people consenting as possible. It would be nice if there were other mechanisms of course.

AvW: The other point is that Bitcoin Core developers, while they decide which code they include in Bitcoin Core, don’t decide what users actually end up running.

SP: Nobody might download it.

AvW: Exactly. They don’t actually have power over the network. In that sense the argument is bunk but it could still feed into the perception that they do. Even that perception, if you can avoid it maybe that’s better. That is an argument.

SP: And the precedent. What if Bitcoin Core is compromised at some point and ships an update and says “If you don’t stop it then it is going to activate.” Then it is nice if the miners can say “No I don’t think so.”

AvW: Users could say that as well by not downloading it like you said. Now we get to the fifth argument. This is where it gets pretty complex. The fifth argument against LOT=true is that it could cause all sorts of network instability. If it happens that the year is over and there are LOT=true clients on the network it is possible that they would split off from the main chain and there could be re-orgs. People could lose money and miners could mine an invalid block and lose their block reward and all that sort of stuff. The LOT=true proponents argue that that risk is actually best mitigated if people adopt LOT=true.

SP: I’m skeptical, that sounds very circular. Maybe it is useful to explain what these bad scenarios look like? Then others can decide whether a) they think those bad scenarios are worth risking and b) how to make them less likely. Some of that is almost political. You all have these discussions in society, should people have guns or not, what are the incentives, you may never figure that out. But we can talk about some of the mechanics here.

AvW: To be clear, if somehow there would be complete consensus on the network over either LOT=true, all nodes run LOT=true, or all nodes run LOT=false then I think that would be totally fine. Either way.

SP: Yeah. The irony is of course that if there is complete consensus and everybody runs LOT=true then it will never be used. You’re right in theory. I don’t see a scenario where miners would say “We are happy with LOT=true but we are deliberately not going to signal and then signal at the very last moment.”

AvW: You are right but we are digressing. The point is that the really complicated scenarios arise when some parts of the network are running LOT=true, some parts of the network are running LOT=false or some parts of the network are running neither because they haven’t upgraded. Or some combination of these, half of the network has LOT=true, half of the network has neither. That’s where things get very complicated and Sjors, you’ve thought about it, what do you think? Tell me what the risks are.

The chain split scenario

SP: I thought about this stuff during the SegWit2x debacle as well as the UASF debacle which were similar in a way but also very different because of who was explaining and whether it was a hard fork or a soft fork. Let’s say you are running the LOT=true version of Bitcoin Core. You downloaded it, maybe it was released by Bitcoin Core or maybe you self compiled it, but it says LOT=true. You want Taproot to activate. But the scenario here is that the rest of the world, the miners aren’t really doing this. The day arrives and you see a block, it is not signaling correctly but you want it to signal correctly so you say “This block is now invalid. I am not going to accept this block.” I’m just going to wait until another miner comes with a block that does meet my criteria. Maybe that happens once in every 10 blocks for example. You are seeing new blocks but they are coming in very, very slowly. So somebody sends you a transaction, you want Bitcoin from somebody, they send you a transaction and this transaction has a fee and it is probably going to be wrong. Let’s say you are receiving a transaction from somebody who is running a node with LOT=false. They are on a chain that is going ten times faster than you are, in this intermediate state. Their blocks might be just full, their fees are pretty low, and you are receiving it. But you are on this shorter, slower moving chain so your mempool is really full and your blocks are completely full, so that transaction probably won’t confirm on your side. It is just going to be sitting in the mempool, that is one complexity. That’s actually a relatively good scenario because you don’t accept unconfirmed transactions. You will have a disagreement with your counterparty, you’ll say “It hasn’t confirmed” and they’ll say “It has confirmed”. Then at least you might realize what is going on, you read about the LOT war or whatever. So that’s one scenario. The other scenario is where somehow it does confirm on your side and it also confirms on the other side. That is kind of good because then you are safe either way. If it is confirmed on both sides then whatever happens in a future re-org that transaction is actually on the chain, maybe in a different block. Another scenario could be because there are two chains, one short chain and one long chain, but they are different. If you are receiving coins that are sent from a coinbase transaction on one side or the other then there is no way it can be valid on your side. This can also be a feature, it is called replay protection essentially. You receive a transaction and you don’t even see it in your mempool, you call the other person and say “This doesn’t make any sense.” That’s good. But now suddenly the world changes its mind and says “No we do want Taproot, we do want LOT=true, we are now LOT=true diehards” and all the miners start mining on top of your shorter chain. Your short chain becomes the very long chain. In that case you are pretty happy in most of the scenarios we’ve discussed.

AvW: Sounds good to me.

SP: You had a transaction that was maybe in your tenth block and on the other side it was in the first block. It is still yours. There were some transactions floating in the mempool for a very long time, they finally confirm. I think you are fairly happy. We were talking about the LOT=true node. As a LOT=true node user, in these scenarios you are happy. Maybe not if you’re paying someone.

AvW: You are starting to make the case for LOT=true Sjors, I know that is not your intention but you are doing a good job at it.

SP: For the full node user who knows what they are doing in general. If you are a full node user and you know what you are doing then I think you are going to be fine in general. This is not too bad. Now let’s say you are a LOT=false user and let’s say you don’t know what you are doing. In the same scenario you are on the longest chain, you are receiving coins from an exchange and you’ve seen these headers out there for this shorter chain. You might have seen them, it depends on whether they reach you or not. But it is a shorter chain and it is valid according to you because it follows a more strict ruleset. You are fine, this other chain has Taproot and yours probably doesn’t. You are accepting transactions and you are a happy camper but then all of a sudden, because the world changes, everything disappears from under you. All the transactions you’ve seen confirmed in a block are now back in the mempool and they might even have been double spent.

AvW: Yeah the reason for that is that we are talking about a chain split that has happened. You have a LOT=false node, but if at any point the LOT=true chain becomes the longest chain then your LOT=false node will still accept that chain. It will still consider it valid. The other way round is not true. The LOT=false node will always consider the LOT=true chain valid. So in your scenario where you are using Bitcoin on the longest chain, on the LOT=false chain, we’re happy, we received money, we did a hard day’s work and got our paycheck at the end, paid in Bitcoin. We think we’re safe, we got a bunch of confirmations, but then all of a sudden the LOT=true chain becomes longer which means your node switches to the LOT=true chain. That money you received on the LOT=false chain which you thought was the Bitcoin chain is just gone. Poof. That is the problem you are talking about.

SP: Exactly.
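A toy model of that asymmetry, assuming nothing more than that every node follows the most-work chain that is valid under its own rules; names and numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Chain:
    name: str
    work: int                  # total accumulated proof of work (toy units)
    mandatory_signaling: bool  # does it satisfy the stricter LOT=true rule?

def best_chain(chains: list[Chain], enforce_lot_true: bool) -> Chain:
    # A chain that signals is valid under both rule sets; a non-signaling chain
    # is only valid for nodes that do not enforce mandatory signaling.
    valid = [c for c in chains if c.mandatory_signaling or not enforce_lot_true]
    return max(valid, key=lambda c: c.work)

signaling = Chain("signaling (LOT=true rules)", work=90, mandatory_signaling=True)
non_signaling = Chain("non-signaling", work=100, mandatory_signaling=False)

print(best_chain([signaling, non_signaling], enforce_lot_true=False).name)  # non-signaling
print(best_chain([signaling, non_signaling], enforce_lot_true=True).name)   # signaling
# If the signaling chain later overtakes in work, the LOT=false node re-orgs onto
# it; the reverse can never happen for the LOT=true node.
```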

AvW: I will add to this very briefly. I think this is an even bigger problem for non-upgraded nodes.

SP: I was about to get to that. Now we are talking about the LOT=false people. You could still say “Why did you download the LOT=false version?” Because you didn’t know. Now we are talking about an unupgraded node. For the unupgraded node Taproot does not exist so it has no preference for which of the chains, it will just pick the longest one.

AvW: It is someone in Korea, he doesn’t keep up with discussion.

SP: Let’s not be mean to Korea.

AvW: Pick a country where they don’t speak English.

SP: North Korea.

AvW: Someone doesn’t keep up with their Bitcoin discussion forums, maybe doesn’t read English, doesn’t really care. He just likes this Bitcoin thing, downloaded the software a couple of years ago, put in his hard day’s work, gets paid and the money disappears.

SP: Or his node might be in a nuclear bunker that he put it in 5 years ago under 15 meters of concrete, airgapped, and somehow it can download blocks because it is watching the Blockstream satellite or something but it cannot be upgraded. And he doesn’t know about this upgrade. Which would be odd if you are into nuclear bunkers and full nodes. Anyway somebody is running an out of date node, in Bitcoin we have the policy that you don’t have to upgrade, it is not a mandatory thing. It should be safe or at least relatively safe to run an unupgraded node. You are receiving a salary, the same as the LOT=false person, then suddenly there’s a giant re-org that comes out of nowhere. You have no idea why people bother to re-org because you don’t know about this rule change.

AvW: You don’t see the difference.

SP: And poof your salary just disappeared.

AvW: That’s bad. That’s basically the worst case scenario that no one wants I think.

SP: Yeah. You can translate this also to people who are using hardware wallet software that hasn’t been updated, they are using remote nodes or they are using SPV nodes that don’t check the rules but only check the work. They’ll have similar experiences where suddenly the longest chain changes so your SPV wallet, which we explained in an earlier episode, its history disappears. At least for the lightweight nodes you could do some victim shaming and say “You should be running a full node. If bad things happen you should have run a full node.” But I still don’t think that’s good safety engineering, to tell people “If you don’t use your safety belt in the correct position the car might explode.” But at least for the unupgraded full node that is an explicit case that Bitcoiners want to support. You want to support people not upgrading and not suddenly losing their coins in a situation like this. That is why I’m not a LOT=true person.

Avoiding a chain split scenario

AvW: That’s what I want to get at. Everyone agrees, or at least we both agree and I think most people would agree, that the scenario we just painted is horrible, we don’t want that. So the next question is how do you avoid this scenario? That’s also one of the things where LOT=true and LOT=false people differ in their opinions. LOT=false proponents like yourself argue against LOT=true because the chain split was caused by LOT=true, and therefore if we don’t want chain splits we don’t want LOT=true and the thing we just described won’t happen. The worst case scenario is that we don’t have Taproot, it will just expire. That’s not as bad as this poor Korean guy losing his honest day’s work.

SP: Exactly and we might have Taproot later.

AvW: The LOT=true proponents will argue Bitcoin is an open network and any peer can run any software they want. For better or worse LOT=true is a thing that exists. If we want to avoid a chain split the best way to avoid it is to make sure that everyone uses LOT=true, or at least that the majority of miners upgrade in time, and LOT=true is the best way to ensure that. Getting critical mass for LOT=true is actually the safer option despite the fact that LOT=true also introduces the risk. If I want to give an analogy it is kind of like the world would be a safer place without nuclear weapons, but there are nuclear weapons. It seems like it is safer to actually have one in that case.

SP: I think that analogy breaks down very quickly but yeah I get the idea.

AvW: It is not a perfect analogy I’m sure. The point is LOT=true exists and now we have to deal with that. It might be a better world, a safer place if LOT=true didn’t exist, if UASFs didn’t exist. But it does and now we have to deal with that fact. There is an argument to be made that making sure the soft fork succeeds is actually the best way to go to save that poor Korean guy.

SP: I am always very skeptical of this type of game theory because it sounds rhetorically nice but I’m not sure it is really true. One of the obvious problems is how do you know you’ve reached the entire Bitcoin community. We talked about this hypothetical person in this other country who is not reading Twitter and Reddit, has no idea that this is going on let alone most of the lightweight wallet users. The number of people who use Bitcoin is much, much greater than the number of people who are even remotely interested in these discussions. Also to even explain the risk to those people, even if we could reach them, to explain why they should upgrade, that alone is a rather big challenge. In this episode we roughly try to explain what would go wrong if they don’t upgrade. We can’t just tell them they must upgrade. That violates the idea that you persuade people with arguments and let them decide what they want to do rather than tell them based on authority.

AvW: Keep in mind that in the end all of this is avoided if a majority of hash power upgrades. With LOT=true actually any majority would be fine in the end. If miners themselves use LOT=true then they’ll definitely get the longest chain by the end of the year.

SP: The game theory is kind of narrowed to say you want to convince the miners to do this. The problem though is if it fails we just explained the disaster that happens. Then the question is what is that risk? Can you put a percentage on that? Can you somehow simulate the world and find out what happens?

AvW: I am on the fence on this. I see compelling arguments on both sides. I was leaning towards LOT=false at first but the more I think about it… The argument is if you include LOT=true in Bitcoin Core then that practically guarantees that everything will be fine because the economic majority will almost certainly run it. Exchanges and most users.

SP: I’m not even sure that is true. That assumes that this economic majority is quick to upgrade and not ignoring things.

AvW: At least within a year.

SP: There might be companies that are running 3 or 4 year old nodes because they have 16 different s***coins. Even that, I would not assume that. We know from the network in general that lots of people don’t upgrade nodes and one year is pretty short. You can’t tell from the nodes whether they are the economic majority or not. That might be a few critical players that would do the trick here.

AvW: Yes I can’t be sure. I’m not sure. I am speculating, I am explaining the argument. But the opposite is also true, now that LOT=true exists some group of users will almost certainly run it and that introduces maybe greater risks than if it was included in Core. That would increase the chances of success for LOT=true, the majority upgrading.

SP: It really depends on who that group is. Because if that group is just some random people who are not economically important then they are experiencing the problems and nobody else notices anything.

AvW: That is true. If it is a very small group that might be true but the question is how small or how big does that group need to be for this to become a problem. They have an asymmetry, this advantage because their chain can never be re-orged away while the LOT=false chain can be re-orged away.

SP: But their chain may never grow so that’s also a risk. It is not a strict advantage.

AvW: I think it is definitely a strict advantage.

SP: The advantage is you can’t be re-orged. The disadvantage is your chain might never grow. I don’t know which of those two…

AvW: It would probably grow. It depends on how big that group is again. That’s not something we can objectively measure. I guess that’s what it comes down to.

SP: Even retroactively we can’t. We still don’t know what really caused the SegWit activation even four years afterwards. That gives you an idea of how difficult it is to know what these forces really are.

AvW: Yes, that’s where we agree. It is very difficult to know either way. I am on the fence about it.

SP: The safest thing to do is to do nothing.

AvW: Not even that. There might still be a LOT=true minority or maybe even majority that might still happen.

SP: Another interesting thought experiment then is to say “There is always going to be a LOT=true group for any soft fork. What about a soft fork that has no community support? What if an arbitrary group of people decides to run their own soft fork because they want to? Maybe someone wants to shrink the coin supply. Set the coin issuance to zero tomorrow or reduce the block size to 300 kilobytes.” They could say “Because it is a soft fork and because I run a LOT=true node, there could be others who run a LOT=true. Therefore it must activate and everybody should run this node.” That would be absurd. There is a limit to this game theory. You can always think of some soft fork and some small community that will say this and completely fail. You have to estimate how big and how powerful this thing is. I don’t even know what the metric is.

AvW: But also how harmful the upgrade is, because I would say that is the answer to your point there. If the upgrade itself is considered valuable then there is very little cost for people to just switch to the other chain, the chain that can’t be re-orged and that has the upgrade that is valuable. That’s a pretty good reason to actually switch. Whereas switching to a chain that screws with the coin limit or that kind of stuff, even if it can’t be re-orged, is a much bigger disincentive, and also a disincentive for miners to switch.

SP: Some people might say that a smaller block size is safer.

AvW: They are free to fork off, that is also possible. We didn’t even discuss that but it is possible that the chain split is lasting, that it will forever be a LOT=true minority chain and a LOT=false majority chain. Then we have the Bitcoin, Bitcoin Cash split or something like that. We just have two coins.

SP: With the big scary sword of Damocles hanging above it.

AvW: Then maybe a checkpoint would have to be included in the majority chain which would be very ugly.

SP: You could come up with some sort of incompatible soft fork to prevent a re-org in the future.

AvW: Let’s work towards the end of this episode.

SP: I think we have covered a lot of different arguments and explained that this is pretty complicated.

AvW: What do you think is going to happen? How do you anticipate this playing out?

SP: I started looking a little bit at the nitty gritty, one of the pull requests that Luke Dashjr opened to implement BIP 8 in general, not specifically for Taproot I believe. There’s already complexity with this LOT=true thing engaged because you need to think about how the peer-to-peer network should behave. From a principle of least action, what is the least work for developers to do, setting LOT to false probably results in easier code which will get merged earlier. And even if Luke is like “I will only do this if it is set to true” then somebody else will make a pull request that sets it to false and gets merged earlier. So thinking about what happens when people are lazy, and I mean lazy in the most respectful way, what is the path of least resistance? It is probably LOT=false, just from an engineering point of view.

AvW: So LOT=false in Bitcoin Core is what you would expect in that case.

SP: Yes. And somebody else would implement LOT=true.

AvW: In some alt client for sure.

SP: Yeah. And that might have no code review.

AvW: It is just a parameter setting right?

SP: No it is more complicated because how it is going to interact with its peers and what it is going to do when there’s a chain split etc.

AvW: What do you think about this scenario that neither is implemented in Bitcoin Core? Do you see that happening?

SP: Neither. LOT=null?

AvW: Just no activation mechanism because there is no consensus for one.

SP: No I think it will be fine. I can’t predict the future but my guess is that a LOT=false won’t be objected to as much as some people might think.

AvW: We’ll see then I guess.

SP: Yeah we’ll see. This might be the dumbest thing I’ve ever said.

\ No newline at end of file +https://www.youtube.com/watch?v=7ouVGgE75zg

BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki

Arguments for LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

Additional argument for LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html

Aaron van Wirdum article on LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation

Intro

Aaron van Wirdum (AvW): Live from Utrecht, this is the van Wirdum Sjorsnado. Sjors, make the pun.

Sjors Provoost (SP): We have a “lot” to talk about.

AvW: We have a “lot” to discuss. In this episode we are going to discuss the Taproot activation process and the debate surrounding it on the parameter lot, lockinontimeout which can be set to true and false.

SP: Maybe as a reminder to the listener we have talked about Taproot in general multiple times but especially in Episode 2. And we have talked about activating Taproot, activating soft forks in general in Episode 3 so we might skip over a few things.

AvW: In Episode 3 we discussed all sorts of different proposals to activate Taproot but it has been over half a year at least right?

SP: That was on September 25th so about 5 months, yeah.

AvW: It has been a while and by now the discussion has reached its final stage I would say. At this point the discussion is about the lot parameter, true or false. First to recap very briefly Sjors can you explain what are we doing here? What is a soft fork?

What is a soft fork?

SP: The idea of a soft fork is you make the rules more strict. That means that from the point of view of a node that doesn’t upgrade nothing has changed. They are just seeing transactions that are valid to them from the nodes that do upgrade. Because they have stricter rules they do care about what happens. The nice thing about soft forks is that as a node user you can upgrade whenever you want. If you don’t care about this feature you can upgrade whenever you want.

AvW: A soft fork is a backwards compatible protocol upgrade and the nice thing about it is that if a majority of miners enforce the rules then that automatically means all nodes on the network will follow the same blockchain.

SP: That’s right. The older nodes don’t know about these new rules but they do know that they’ll follow the chain with the most proof of work, as long as it is valid. If most of the miners are following the new rules then most of the proof of work will be following the new rules. And so an old node will by definition follow that.

AvW: The nice thing about soft forks is that if a majority of hashpower enforces the new rules the network will remain in consensus. Therefore the last couple of soft forks were activated through hash power coordination. That means that miners could include a bit in the blocks they mined signaling that they were ready for the upgrade. Once most miners, 95 percent in most cases, indicated that they were ready nodes would recognize this and enforce the upgrade.

SP: That’s right. A node would check, for example every two weeks, how many blocks signaled this thing and if yes, then it says “Ok the soft fork is now active. I am going to assume that the miners will enforce this.”
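
To make the tallying concrete, here is a minimal sketch in Python of how a node could count signals over one retarget period. This is not Bitcoin Core’s actual code; the bit position is illustrative and the 95 percent threshold is just the conventional number mentioned above.

    # Minimal sketch (not Bitcoin Core code) of tallying version-bits signaling
    # over one 2016-block retarget period. Each block is represented only by its
    # 32-bit nVersion field; the bit position is illustrative.
    PERIOD = 2016
    TOP_BITS_MASK = 0xE0000000
    TOP_BITS = 0x20000000          # "version bits" prefix used for signaling
    EXAMPLE_BIT = 2                # bit position assigned to the deployment

    def signals(n_version: int, bit: int) -> bool:
        """True if this block's version signals readiness for the given bit."""
        return (n_version & TOP_BITS_MASK) == TOP_BITS and bool((n_version >> bit) & 1)

    def period_locks_in(versions: list[int], bit: int, threshold: int) -> bool:
        """Tally one full retarget period and compare against the threshold."""
        assert len(versions) == PERIOD
        count = sum(1 for v in versions if signals(v, bit))
        return count >= threshold

    # 95 percent of 2016 blocks is 1916, the threshold used for most past soft forks.
    example = [TOP_BITS | (1 << EXAMPLE_BIT)] * 1916 + [TOP_BITS] * 100
    print(period_locks_in(example, EXAMPLE_BIT, threshold=1916))  # True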

The ability for miners to block a soft fork upgrade

AvW: Right. The problem with this upgrade mechanism is that it also means miners can block the upgrade.

SP: Yeah that’s the downside.

AvW: Even if everyone agrees with the upgrade, for example in this case Taproot, it seems to have broad consensus, but despite that broad consensus miners could still block the upgrade, which is what happened with SegWit a couple of years ago.

SP: Back then there was a lot of debate about the block size and lots of hard fork proposals and lots of hurt feelings. Eventually it was very difficult to get SegWit activated because miners were not signaling for it, probably mostly intentionally. Now it could also happen that miners just ignore an update, not because they don’t like it, just because they’re busy.

AvW: Yeah. In the case of SegWit that was in the end resolved through UASF, or at least that was part of it. We are not going to get into that in depth. That basically meant that a group of users said “On this day (some date in the future, it was August 1st 2017) we are going to activate the SegWit rules no matter how much hash power supports it.”

SP: Right, at the same time and perhaps as a consequence of that, a group of miners and other companies agreed that they would start signaling for SegWit. There were a whole bunch of other things going on at the same time. Whatever happened on 1st August, the thing activated, or a little bit earlier I think.

The lockinontimeout (LOT) parameter

AvW: Now we are four years ahead in time, it is four years later and now the Taproot upgrade is ready to go. What happened a couple of years ago is now spurring new debate on the Taproot upgrade. That brings us to the lockinontimeout (LOT) parameter which is a new parameter. Although it is inspired by things from that SegWit upgrade period.

SP: It is basically a built in UASF option which you can decide to use or not. There is now a formal way in the protocol to do this to activate a soft fork at a cut off date.

AvW: LOT has two options. The first option is false, LOT=false. That means that miners can signal for the upgrade for one year and then in that year if the 90 percent threshold for the upgrade is met it will activate as we just explained. By the way 1 year and 90 percent aren’t set in stone but it is what people seem to settle on. For convenience’s sake that is what I’m going to use for discussing this. Miners have 1 year to activate the upgrade. If after that year they have not activated it, the Taproot upgrade will expire. It will just not happen; that is LOT=false.

SP: And of course there is always the option then of shipping a new release, trying again. It is not a “no” vote, it is just nothing happens.

AvW: Exactly. Then there is LOT=true, where again miners have 1 year to signal support (readiness) for the upgrade. If a 90 percent threshold is met then the upgrade will activate. However the big difference is what happens if miners don’t reach this threshold, if they don’t signal for the upgrade. In that case when the year is almost over nodes that have LOT=true will start to reject all blocks that don’t signal for the upgrade. In other words they will only accept blocks that do signal for the upgrade, which means of course that the 90 percent threshold will be met and therefore Taproot, or any other soft fork using this mechanism, will activate.

SP: If enough blocks are produced.

AvW: If enough blocks are produced, yes, that’s true. A little bit of nuance for those who find it interesting, even LOT=true nodes will accept up to 10 percent of blocks that don’t signal. That’s to avoid weird chain split scenarios.

SP: Yeah. If it activates in the normal way only 90 percent has to signal. If you mandate signaling then it would be weird to have a different percentage suddenly.

AvW: They are going to accept the first 10 percent of non-signaling blocks but after that every block that doesn’t signal is going to be rejected. So the 90 percent threshold will definitely be reached. The big reason for LOT=true, to set it to true, is that this way miners cannot block the upgrade. Even if they try to block the upgrade, once the year is over nodes will still enforce Taproot. So it is guaranteed to happen.

SP: If enough blocks are produced. We can get into some of the risks with this but I think you want to continue explaining a bit.
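
To illustrate the difference, here is a rough sketch of the mandatory signaling idea behind LOT=true. It is not the actual BIP 8 code; the 90 percent threshold is just the number people seem to be settling on, and the tolerance for non-signaling blocks follows from it.

    # Hedged sketch of mandatory signaling (LOT=true), not actual BIP 8 code.
    # In the last retarget period before timeout, a LOT=true node tolerates at
    # most (PERIOD - THRESHOLD) non-signaling blocks and rejects any further ones.
    PERIOD = 2016
    THRESHOLD = 1815               # roughly 90 percent of 2016

    def mandatory_period_verdicts(block_signals: list[bool]) -> list[bool]:
        """Per-block verdicts (True = accept) during the mandatory signaling period."""
        verdicts = []
        non_signaling_seen = 0
        allowed_non_signaling = PERIOD - THRESHOLD   # roughly the 10 percent tolerance
        for sig in block_signals:
            if sig:
                verdicts.append(True)
            else:
                non_signaling_seen += 1
                verdicts.append(non_signaling_seen <= allowed_non_signaling)
        return verdicts

    # Example: the 202nd non-signaling block in the period is the first one rejected.
    blocks = [False] * 202 + [True] * (PERIOD - 202)
    print(mandatory_period_verdicts(blocks).count(False))  # 1 block rejected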

AvW: The reason some people like LOT=true is that that way miners don’t have a veto. The counterargument there, you already suggested it, is that miners don’t have a veto anyway: even if we use LOT=false and the upgrade expires after a year, after that year we can just deploy a new upgrade mechanism and a new signaling period. This time maybe use LOT=true.

SP: Or even while this thing is going on. You could wait half a year with LOT=false and half a year later say “This is taking too long. Let’s take a little more risk and set LOT=true.” Or lower the threshold or some other permutation that slightly increases the risk but also increases the likeliness of activation.

AvW: Yeah, you’re right. But that is actually also one of the arguments against using LOT=false. LOT=true proponents say that as you’ve suggested we can do it after 6 months, but there is another group of users that might say “No. First wait until the year is over and then we will just redeploy again.” Let’s say after 6 months Taproot hasn’t activated. Now all of a sudden you get a new discussion between people that want to start deploying LOT=true clients right away and groups of users that want to wait until the year is over. It is reintroducing the discussion we are having now except by then we only have 6 months to resolve it. It is like a ticking time bomb kind of situation.

SP: Not really to resolve it. If you don’t do anything for 6 months then there is only one option left which is to try again with a new activation bit.

AvW: But then you need to agree on when you are going to do that. Are you going to do that after 6 months or are you going to do that later? People might disagree.

SP: Then you’d be back to where we are now except you’d know a little bit more because now you know that the miners weren’t signaling.

AvW: And you don’t have a lot of time to resolve it because after those 6 months something might happen. Some group of users might run LOT=true or…

SP: The thing you’re talking about here is the possibility of say anarchy in the sense that there is no consensus about when to activate this thing. One group, I think we discussed this at length in the third episode, just gets really aggressive and says “No we are going to activate this earlier.” Then nobody knows when it is going to happen.

AvW: Let me put it differently. If right now we are saying “If after 6 months miners haven’t activated Taproot then we will just upgrade to LOT=true clients” then LOT=true proponents will say “If that’s the plan anyways let’s just do it now. That’s much easier. Why do we have to do that halfway?” That is the counterargument to the counterargument.

SP: I can see that. But of course there is also the scenario where we never do this, Taproot just doesn’t activate. It depends on what people want. There is something to be said for a status quo bias where you don’t do anything if it is too controversial for whatever reason. There is another side case here that is useful to keep in mind. There might be a very good reason to cancel Taproot. There might be a bug that is revealed after.

AvW: You are getting ahead of me. There are a bunch of arguments in favor of LOT=false. One argument is we’ve already done LOT=false a bunch of times, the previous miner activated soft forks, and most of the times it went fine. There was just this one time with SegWit in the midst of a big war, we don’t have a big war now. There is no reason to change what we’ve been doing successfully until now. That is one argument. The counterargument would for example be “Yeah but if you choose LOT=false now that could draw controversy itself. It could be used to drive a wedge. We are not in a war right now but it could cause a war.”

SP: I don’t see how that argument doesn’t apply to LOT=true. Anything could cause controversy.

AvW: That is probably fair. I tend to agree with that. The other argument for LOT=false is that miners and especially mining pools have already indicated that they are supportive of Taproot, they’ll activate it. It is not necessary to do the LOT=true thing as far as we can tell. The third argument is what you just mentioned. It is possible that someone finds a bug with Taproot, a software bug or some other problem is possible. If you do LOT=false it is fairly easy to just let it expire and users won’t have to upgrade their software again.

SP: The only thing there is that you’d have to recommend that miners do not install that upgrade. It is worth noting, I think we pointed this out in Episode 3, people don’t always review things very early. A lot of people have reviewed Taproot code but others might not bother to review it until the activation code is there because they just wait for the last minute. It is not implausible that someone very smart starts reviewing this very late, perhaps some exchange that is about to deploy it.

AvW: Something like that happened with the predecessor to P2SH, OP_EVAL. It was just about to be deployed and then a pretty horrible bug was found.

SP: We’ve seen it with certain altcoins too, right before deployment people find zero days, either because they were… or just because the code was shipped in a rush and nobody checked it. There is definitely always a risk, whatever soft fork mechanism you use, that a bug is discovered at the last minute. If you are really unlucky then it is too late, it is deployed and you need a hard fork to get rid of it which would be really, really bad.

AvW: I don’t think that’s true. There are other ways to get rid of it.

SP: Depending on what the bug is you may be able to soft fork it out.

AvW: There are ways to fix it even in that case. The other counterargument to that point would be “If we are not sure that it is bug free and we are not sure that this upgrade is correct then it shouldn’t be deployed either way, LOT=true or LOT=false or anything. We need to be sure of that anyway.”

SP: Yeah but like I said some people won’t review something until it is inevitable.

AvW: I am just listing the arguments. The fourth argument against LOT=true is that LOT=true could feed into the perception that Bitcoin developers, and especially Bitcoin Core developers, control the protocol, have power over the protocol. They are shipping code and that necessarily becomes the new protocol rules in the case of Taproot.

SP: There could be some future soft fork where really nobody in the community cares about it, just a handful of Bitcoin Core developers do, and they force it onto the community. Then you get a bunch of chaos. The nice thing about having at least the miners signal is that they are part of the community and at least they are ok with it. The problem is it doesn’t reflect what other people in the community think about it. It just reflects what they think about it. There are a bunch of mechanisms. There is discussion on the mailing list, you see if people have problems. Then there is miner signaling which is a nice indication that people are happy. You get to see that, there are as many people consenting as possible. It would be nice if there were other mechanisms of course.

AvW: The other point is that while Bitcoin Core developers decide which code they include in Bitcoin Core, they don’t decide what users actually end up running.

SP: Nobody might download it.

AvW: Exactly. They don’t actually have power over the network. In that sense the argument is bunk but it could still feed into the perception that they do. Even that perception, if you can avoid it maybe that’s better. That is an argument.

SP: And the precedent. What if Bitcoin Core is compromised at some point and ships an update and says “If you don’t stop it then it is going to activate.” Then it is nice if the miners can say “No I don’t think so.”

AvW: Users could say that as well by not downloading it like you said. Now we get to the fifth argument. This is where it gets pretty complex. The fifth argument against LOT=true is that it could cause all sorts of network instability. If it happens that the year is over and there are LOT=true clients on the network it is possible that they would split off from the main chain and there could be re-orgs. People could lose money and miners could mine an invalid block and lose their block reward and all that sort of stuff. The LOT=true proponents argue that that risk is actually best mitigated if people adopt LOT=true.

SP: I’m skeptical, that sounds very circular. Maybe it is useful to explain what these bad scenarios look like? Then others can decide whether a) they think those bad scenarios are worth risking and b) how to make them less likely. Some of that is almost political. You all have these discussions in society, should people have guns or not, what are the incentives, you may never figure that out. But we can talk about some of the mechanics here.

AvW: To be clear, if somehow there would be complete consensus on the network over either LOT=true, all nodes run LOT=true, or all nodes run LOT=false then I think that would be totally fine. Either way.

SP: Yeah. The irony is of course that if there is complete consensus and everybody runs LOT=true then it will never be used. You’re right in theory. I don’t see a scenario where miners would say “We are happy with LOT=true but we are deliberately not going to signal and then signal at the very last moment.”

AvW: You are right but we are digressing. The point is that the really complicated scenarios arise when some parts of the network are running LOT=true, some parts of the network are running LOT=false or some parts of the network are running neither because they haven’t upgraded. Or some combination of these, half of the network has LOT=true, half of the network has neither. That’s where things get very complicated and Sjors, you’ve thought about it, what do you think? Tell me what the risks are.

The chain split scenario

SP: I thought about this stuff during the SegWit2x debacle as well as the UASF debacle which were similar in a way but also very different because of who was explaining and whether it was a hard fork or a soft fork. Let’s say you are running the LOT=true version of Bitcoin Core. You downloaded it, maybe it was released by Bitcoin Core or maybe you self compiled it, but it says LOT=true. You want Taproot to activate. But the scenario here is that the rest of the world, the miners aren’t really doing this. The day arrives and you see a block, it is not signaling correctly but you want it to signal correctly so you say “This block is now invalid. I am not going to accept this block. I’m just going to wait until another miner comes with a block that does meet my criteria.” Maybe that happens once in every 10 blocks for example. You are seeing new blocks but they are coming in very, very slowly. So somebody sends you a transaction, you want Bitcoin from somebody, they send you a transaction and the fee on it is probably going to be wrong. Let’s say you are receiving a transaction from somebody who is running a node with LOT=false. They are on a chain that is going ten times faster than you are, in this intermediate state. Their blocks might be just full, their fees are pretty low, and you are receiving it. But you are on this shorter, slower moving chain so your mempool is really full and your blocks are completely full, so that transaction probably won’t confirm on your side. It is just going to be sitting in the mempool, that is one complexity. That’s actually a relatively good scenario because you don’t accept unconfirmed transactions. You will have a disagreement with your counterparty, you’ll say “It hasn’t confirmed” and they’ll say “It has confirmed”. Then at least you might realize what is going on, you read about the LOT war or whatever. So that’s one scenario. The other scenario is where somehow it does confirm on your side and it also confirms on the other side. That is kind of good because then you are safe either way. If it is confirmed on both sides then whatever happens in a future re-org that transaction is actually on the chain, maybe in a different block. Another scenario arises because there are two chains, one short chain and one long chain, and they are different. If you are receiving coins that are sent from a coinbase transaction on one side or the other then there is no way it can be valid on your side. This can also be a feature, it is called replay protection essentially. You receive a transaction and you don’t even see it in your mempool, you call the other person and say “This doesn’t make any sense.” That’s good. But now suddenly the world changes its mind and says “No we do want Taproot, we do want LOT=true, we are now LOT=true diehards” and all the miners start mining on top of your shorter chain. Your short chain becomes the very long chain. In that case you are pretty happy in most of the scenarios we’ve discussed.

AvW: Sounds good to me.

SP: You had a transaction that was in maybe in your tenth block and on the other side it was in the first block. It is still yours. There were some transactions floating in the mempool for a very long time, they finally confirm. I think you are fairly happy. We were talking about the LOT=true node. As a LOT=true node user, in these scenarios you are happy. Maybe not if you’re paying someone.

AvW: You are starting to make the case for LOT=true Sjors, I know that is not your intention but you are doing a good job at it.

SP: For the full node user who knows what they are doing in general. If you are a full node user and you know what you are doing then I think you are going to be fine in general. This is not too bad. Now let’s say you are a LOT=false user and let’s say you don’t know what you are doing. In the same scenario you are on the longest chain, you are receiving coins from an exchange and you’ve seen these headers out there for this shorter chain. You might have seen them, depends on whether they reach you or not. But it is a shorter chain and it is valid according to you because it is a more strict ruleset. You are fine, this other chain has Taproot and you don’t probably. You are accepting transactions and you are a happy camper but then all of a sudden because the world changes everything disappears from under you. All the transactions you’ve seen confirmed in a block are now back in the mempool and they might have been double spent even.

AvW: Yeah the reason for that is that we are talking about a chain split that has happened. You have a LOT=false node and if at any point the LOT=true chain becomes the longest chain then your LOT=false node would still accept that chain. It would still consider it valid. The other way round is not true. But the LOT=false node will always consider the LOT=true chain valid. So in your scenario where you are using Bitcoin on the longest chain, on the LOT=false chain, we’re happy, we received money, we did a hard day’s work and got our paycheck at the end, paid in Bitcoin. We think we’re safe, we got a bunch of confirmations but then all of a sudden the LOT=true chain becomes longer which means your node switches to the LOT=true chain. That money you received on the LOT=false chain which you thought was the Bitcoin chain is just gone. Poof. That is the problem you are talking about.

SP: Exactly.

AvW: I will add to this very briefly. I think this is an even bigger problem for non-upgraded nodes.

SP: I was about to get to that. Now we are talking about the LOT=false people. You could still say “Why did you download the LOT=false version?” Because you didn’t know. Now we are talking about an unupgraded node. For the unupgraded node Taproot does not exist so it has no preference for which of the chains, it will just pick the longest one.

AvW: It is someone in Korea, he doesn’t keep up with discussion.

SP: Let’s not be mean to Korea.

AvW: Pick a country where they don’t speak English.

SP: North Korea.

AvW: Someone doesn’t keep up with their Bitcoin discussion forums, maybe doesn’t read English, doesn’t really care. He just likes this Bitcoin thing, downloaded the software a couple of years ago, put in his hard day’s work, gets paid and the money disappears.

SP: Or his node might be in a nuclear bunker that he put there 5 years ago under 15 meters of concrete, airgapped, and somehow it can download blocks because it is watching the Blockstream satellite or something but it cannot be upgraded. And he doesn’t know about this upgrade. Which would be odd if you are into nuclear bunkers and full nodes. Anyway somebody is running an out of date node, in Bitcoin we have the policy that you don’t have to upgrade, it is not a mandatory thing. It should be safe or at least relatively safe to run an unupgraded node. You are receiving a salary, the same as the LOT=false person, then suddenly there’s a giant re-org that comes out of nowhere. You have no idea why people bother to re-org because you don’t know about this rule change.

AvW: You don’t see the difference.

SP: And poof your salary just disappeared.

AvW: That’s bad. That’s basically the worst case scenario that no one wants I think.

SP: Yeah. You can translate this also to people who are using hardware wallet software that hasn’t been updated, they are using remote nodes or they are using SPV nodes that don’t check the rules but only check the work. They’ll have similar experiences where suddenly the longest chain changes so your SPV wallet, which we explained in an earlier episode, its history disappears. At least for the lightweight nodes you could do some victim shaming and say “You should be running a full node. If bad things happen you should have run a full node.” But I still don’t think that’s good safety engineering, to tell people “If you don’t use your safety belt in the correct position the car might explode.” But at least for the unupgraded full node that is an explicit case that Bitcoiners want to support. You want to support people not upgrading and not suddenly losing their coins in a situation like this. That is why I’m not a LOT=true person.

Avoiding a chain split scenario

AvW: That’s what I want to get at. Everyone agrees or at least we both agree and I think most people would agree that this scenario we just painted, that’s horrible, we don’t want that. So the next question is how do you avoid this scenario? That’s also one of the things where LOT=true and LOT=false people differ in their opinions. LOT=false proponents like yourself argue against LOT=true because the chain split would be caused by LOT=true, and therefore if we don’t want chain splits we don’t want LOT=true and the thing we just described won’t happen. The worst case scenario is that we don’t have Taproot, it will just expire. That’s not as bad as this poor Korean guy losing his honest day’s work.

SP: Exactly and we might have Taproot later.

AvW: The LOT=true proponents will argue Bitcoin is an open network and any peer can run any software they want. For better or worse LOT=true is a thing that exists. If we want to avoid a chain split the best way to avoid that is to make sure that everyone uses LOT=true or at least the majority of miners upgrade in time and LOT=true is the best way to ensure that. Getting critical mass for LOT=true is actually the safer option despite the fact that LOT=true also introduced the risk. If I want to give an analogy it is kind of like the world would be a safer place without nuclear weapons but there are nuclear weapons. It seems like it is safer to actually have one in that case.

SP: I think that analogy breaks down very quickly but yeah I get the idea.

AvW: It is not a perfect analogy I’m sure. The point is LOT=true exists and now we have to deal with that. It might be a better world, a safer place if LOT=true didn’t exist, if UASFs didn’t exist. But it does and now we have to deal with that fact. There is an argument to be made that making sure the soft fork succeeds is actually the best way to go to save that poor Korean guy.

SP: I am always very skeptical of this type of game theory because it sounds rhetorically nice but I’m not sure it is really true. One of the obvious problems is how do you know you’ve reached the entire Bitcoin community. We talked about this hypothetical person in this other country who is not reading Twitter and Reddit and has no idea that this is going on, let alone all the lightweight wallet users. The number of people who use Bitcoin is much, much greater than the number of people who are even remotely interested in these discussions. Also to even explain the risk to those people, even if we could reach them, to explain why they should upgrade, that alone is a rather big challenge. In this episode we roughly try to explain what would go wrong if they don’t upgrade. We can’t just tell them they must upgrade. That violates the idea that you persuade people with arguments and let them decide what they want to do rather than tell them based on authority.

AvW: Keep in mind that in the end all of this is avoided if a majority of hash power upgrades. With LOT=true actually any majority would be fine in the end. If miners themselves use LOT=true then they’ll definitely get the longest chain by the end of the year.

SP: The game theory is kind of narrowed to say you want to convince the miners to do this. The problem though is if it fails we just explained the disaster that happens. Then the question is what is that risk? Can you put a percentage on that? Can you somehow simulate the world and find out what happens?

AvW: I am on the fence on this. I see compelling arguments on both sides. I was leaning towards LOT=false at first but the more I think about it… The argument is if you include LOT=true in Bitcoin Core then that practically guarantees that everything will be fine because the economic majority will almost certainly run it. Exchanges and most users.

SP: I’m not even sure that is true. That assumes that this economic majority is quick to upgrade and not ignoring things.

AvW: At least within a year.

SP: There might be companies that are running 3 or 4 year old nodes because they have 16 different s***coins. Even that, I would not assume that. We know from the network in general that lots of people don’t upgrade nodes and one year is pretty short. You can’t tell from the nodes whether they are the economic majority or not. That might be a few critical players that would do the trick here.

AvW: Yes I can’t be sure. I’m not sure. I am speculating, I am explaining the argument. But the opposite is also true, now that LOT=true exists some group of users will almost certainly run it and that introduces maybe greater risks than if it was included in Core. That would increase the chances of success for LOT=true, the majority upgrading.

SP: It really depends on who that group is. Because if that group is just some random people who are not economically important then they are experiencing the problems and nobody else notices anything.

AvW: That is true. If it is a very small group that might be true but the question is how small or how big does that group need to be for this to become a problem. They have an asymmetry, this advantage because their chain can never be re-orged away while the LOT=false chain can be re-orged away.

SP: But their chain may never grow so that’s also a risk. It is not a strict advantage.

AvW: I think it is definitely a strict advantage.

SP: The advantage is you can’t be re-orged. The disadvantage is your chain might never grow. I don’t know which of those two…

AvW: It would probably grow. It depends on how big that group is again. That’s not something we can objectively measure. I guess that’s what it comes down to.

SP: Even retroactively we can’t. We still don’t know what really caused the SegWit activation even four years afterwards. That gives you an idea of how difficult it is to know what these forces really are.

AvW: Yes, that’s where we agree. It is very difficult to know either way. I am on the fence about it.

SP: The safest thing to do is to do nothing.

AvW: Not even that. There might still be a LOT=true minority or maybe even majority that might still happen.

SP: Another interesting thought experiment then is to say “There is always going to be a LOT=true group for any soft fork. What about a soft fork that has no community support? What if an arbitrary group of people decides to run their own soft fork because they want to? Maybe someone wants to shrink the coin supply. Set the coin issuance to zero tomorrow or reduce the block size to 300 kilobytes.” They could say “Because it is a soft fork and because I run a LOT=true node, there could be others who run a LOT=true. Therefore it must activate and everybody should run this node.” That would be absurd. There is a limit to this game theory. You can always think of some soft fork and some small community that will say this and completely fail. You have to estimate how big and how powerful this thing is. I don’t even know what the metric is.

AvW: But also how harmful the upgrade is because I would say that is the answer to your point there. If the upgrade itself is considered valuable then there is very little cost for people to just switch to the other chain, the chain that can’t be re-orged and that has the upgrade that is valuable. That’s a pretty good reason to actually switch. While switching to a chain even if it can’t be re-orged that screws with the coin limit or that kind of stuff, that is a much bigger disincentive and also a disincentive for miners to switch.

SP: Some people might say that a smaller block size is safer.

AvW: They are free to fork off, that is also possible. We didn’t even discuss that but it is possible that the chain split is lasting, that it will forever be a LOT=true minority chain and a LOT=false majority chain. Then we have the Bitcoin, Bitcoin Cash split or something like that. We just have two coins.

SP: With the big scary sword of Damocles hanging above it.

AvW: Then maybe a checkpoint would have to be included in the majority chain which would be very ugly.

SP: You could come up with some sort of incompatible soft fork to prevent a re-org in the future.

AvW: Let’s work towards the end of this episode.

SP: I think we have covered a lot of different arguments and explained that this is pretty complicated.

AvW: What do you think is going to happen? How do you anticipate this playing out?

SP: I started looking a little bit at the nitty gritty, one of the pull requests that Luke Dashjr opened to implement BIP 8 in general, not specifically for Taproot I believe. There’s already complexity with this LOT=true thing engaged because you need to think about how the peer-to-peer network should behave. From a principle of least action, what is the least work for developers to do, setting LOT to false probably results in easier code which will get merged earlier. And even if Luke is like “I will only do this if it is set to true” then somebody else will make a pull request that sets it to false and gets merged earlier. I think from a what happens when lazy people, I mean lazy in the most respectful way, what is the path of least resistance? It is probably a LOT=false, just from an engineering point of view.

AvW: So LOT=false in Bitcoin Core is what you would expect in that case.

SP: Yes. And somebody else would implement LOT=true.

AvW: In some alt client for sure.

SP: Yeah. And that might have no code review.

AvW: It is just a parameter setting right?

SP: No it is more complicated because how it is going to interact with its peers and what it is going to do when there’s a chain split etc.

AvW: What do you think about this scenario that neither is implemented in Bitcoin Core? Do you see that happening?

SP: Neither. LOT=null?

AvW: Just no activation mechanism because there is no consensus for one.

SP: No I think it will be fine. I can’t predict the future but my guess is that a LOT=false won’t be objected to as much as some people might think.

AvW: We’ll see then I guess.

SP: Yeah we’ll see. This might be the dumbest thing I’ve ever said.

Sjors Provoost, Aaron van Wirdum

Date: March 12, 2021

Transcript By: Michael Folkson

Tags: Taproot, Soft fork activation

Category: Podcast

Media: https://www.youtube.com/watch?v=oCPrjaw3YVI

Speedy Trial proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Intro

Aaron van Wirdum (AvW): Live from Utrecht this is the van Wirdum Sjorsnado. Sjors, what is your pun of the week?

Sjors Provoost (SP): I actually asked you for a pun and then you said “Cut, re-edit. We are going to do it again.” I don’t have a pun this week.

AvW: Puns are your thing.

SP: We tried this LOT thing last time.

AvW: Sjors, we are going to talk a lot this week.

SP: We are going to get blocked for this.

AvW: We talked a lot two weeks ago. LOT was the parameter we discussed two weeks ago, LOT=true, LOT=false, about Taproot activation. We are two weeks further in and now it seems like the community is somewhat reaching consensus on an activation solution called “Speedy Trial”. That is what we are going to discuss today.

SP: That’s right.

Speedy Trial proposal

Speedy Trial proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Proposed timeline: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018594.html

AvW: Should we begin with Speedy Trial, what is Speedy Trial Sjors?

SP: I think that is a good idea to do. With the proposals that we talked about last time for activating Taproot, basically Bitcoin Core would release some software, maybe in April or something, and then the miners will start signaling using that software in, I think, August or something. Then they can signal for a year and at the end of the year the whole thing ends.

AvW: That was LOT=true or LOT=false. The debate was on whether or not it should end with forced signaling or not. That’s the LOT=true, LOT=false thing.

SP: The thing to keep in mind is that the first signaling, it would be a while before that starts happening. Until that time we really don’t know essentially. What Speedy Trial proposes is to say “Rather than discussing whether or not there is going to be signaling and having lots of arguments about it, let’s just try that really quickly.” Instead there would be a release maybe around April, of course there’s nobody in charge of actual timelines. In that case the signaling would start much earlier, I’m not entirely sure when, maybe in May or pretty early. The signaling would only be for 3 months. At the end of 3 months it would give up.

AvW: It would end on LOT=false basically.

SP: Yes. It is the equivalent of LOT=false or just how it used to be with soft forks. It signals but only for a couple of months.

AvW: What if it isn’t activated within these months by hash power, which is probably going to be 90 percent? It is going to require 90 percent of hash power to activate Taproot. If that doesn’t happen then the proposal expires and when it expires we can continue our discussion on how to activate Taproot. Or if it does activate then what happens?

SP: The thing is because you still want to give miners enough time to really upgrade their software the actual Taproot rules won’t take effect until September or August.

AvW: Miners and actual Bitcoin users.

SP: Yes. You want to give everybody plenty of time to upgrade. The idea is we would start the signaling very quickly. Also miners can signal without installing the software. Once the signal threshold has been reached then the soft fork is set in stone. It is going to happen, at least if people run the full nodes. Then there is still some time for people to upgrade and for miners to really upgrade and run that new software rather than just signal for it. They could run that software but they might not. That is why it is sort of ok to release a bit early.

AvW: They should really be running the software if they are signaling?

SP: No. We can get into that later.

AvW: For now, to recap really briefly, Speedy Trial means release the software fairly fast and quickly after it is released start the signaling period for 3 months, which is relatively short for a signaling period. See if 90 percent of miners agree, if they do Taproot activates 6 months after the initial release of the software. If 90 percent of miners don’t activate within 3 months the proposal expires and we can continue the discussion on how to activate Taproot.

SP: We are then back to where we were a few weeks ago but with more data.
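
As a rough sketch of the shape of the proposal as recapped here, the parameters could be written down like this. The dates and numbers are placeholders for illustration, not the values that were eventually chosen.

    # Illustrative summary of the Speedy Trial shape described above; field names,
    # dates and heights are placeholders, not final parameters.
    from dataclasses import dataclass

    @dataclass
    class SpeedyTrialParams:
        signal_start: str        # when miners may start signaling (shortly after release)
        signal_timeout: str      # roughly 3 months later: give up if not locked in
        threshold_percent: int   # share of blocks in a period that must signal
        min_activation: str      # rules take effect no earlier than this, even if locked in early

    taproot_speedy_trial = SpeedyTrialParams(
        signal_start="around May 2021 (illustrative)",
        signal_timeout="around August 2021 (illustrative)",
        threshold_percent=90,
        min_activation="around September 2021 (illustrative)",
    )
    print(taproot_speedy_trial)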

The evolution of the Speedy Trial proposal

AvW: Exactly. I want to briefly touch on how we got here. We discussed the whole LOT=true and LOT=false thing and there appeared to be a gridlock. Some people definitely didn’t want LOT=true, some people definitely didn’t want LOT=false and then a third proposal entered the mix. It wasn’t brand new but it wasn’t a major part of the discussion, a simple flag day. A simple flag day would have meant that the Bitcoin Core code would have included a date in the future or a block height in the future, at which point the Taproot upgrade would activate regardless of hash power up until that point.

SP: I find this an even worse idea. When there is a lot of debate people start proposing stuff.

AvW: I think the reason that we reached this gridlock situation where people feel very strongly about different ideas has a lot to do with what happened during the SegWit upgrade. We discussed this before but people have very different ideas of what actually happened. Some people feel very strongly that users showed their muscles. Users claimed their sovereignty, users claimed back the protocol and they basically forced miners to activate the SegWit upgrade. It was a huge victory for Bitcoin users. Then other people feel very strongly that Bitcoin came near to a complete disaster with a fractured network and people losing money, a big mess. The first group of people really likes doing a UASF again or starting with LOT=false and switching to LOT=true or maybe just starting with LOT=true. The people who think it was a big mess, they prefer to use a flag day this time. Nice and safe in a way, use a flag day, none of this miner signaling, miners can’t be forced to signal and all of that. These different views on what actually happened a couple of years ago now mean people can’t really agree on a new activation proposal. After a lot of discussion all factions were sort of willing to settle on Speedy Trial even though no one really likes it for a couple of reasons which we will get into. The UASF people, they are ok with Speedy Trial because it doesn’t get in the way of the UASF. If the Speedy Trial fails they will still do the UASF next year. The flag day people are sort of ok because the 3 months doesn’t allow for a big enough window to do the UASF probably. The UASF people have said that that is too fast and let’s do this Speedy Trial.

SP: There is also still the LOT=false, let’s just do soft forks the way we’ve done them before where they might just expire. A group of people that were quietly continuing to work on the actual code that could do that. Just from mailing lists and Twitter it is hard to gauge what is really going on. This is a very short timescale.

AvW: The LOT=false people, this is basically LOT=false just on a shorter timescale. Everyone is sort of willing to settle on this even though no one really likes it.

SP: From the point of view that I’m seeing, I’m actually looking at the code that is being written, what I have noticed is that once the Speedy Trial came out more people came out of the woodwork and started writing code that could actually get this done. Whereas before it was mostly Luke I think writing that one pull request.

AvW: BIP 8?

SP: Yeah BIP 8. I guess we can get into the technical details, what I am trying to say is one thing that shows that Speedy Trial seems like a good idea is that there are more developers from different angles cooperating on it and getting things done a little bit more quickly. When you have some disagreement then people start procrastinating, not reviewing things or not writing things. That’s a vague indicator that this seems to be ok. People are working on it quickly and it is making progress so that is good.

AvW: Some technical details you want to get into?

Different approaches of implementing Speedy Trial

Stack Exchange on block height versus mix of block height and MTP: https://bitcoin.stackexchange.com/questions/103854/should-block-height-or-mtp-or-a-mixture-of-both-be-used-in-a-soft-fork-activatio/

PR 21377 implementing mix of block height and MTP: https://github.com/bitcoin/bitcoin/pull/21377

PR 21392 implementing block height: https://github.com/bitcoin/bitcoin/pull/21392

SP: The idea of Speedy Trial can be implemented in two different ways. You can use the existing BIP 9 system that we already have. The argument for that would be that’s far less code because it already works. It is just for 3 months so why not just use the old BIP 9 code?

AvW: BIP 9 used dates in the future?

SP: Yes. You can tell when the signaling could start, when the signaling times out. There are some annoying edge cases where if it ends right around the deadline but then there is a re-org and it ends right before the deadline, people’s money might get lost if they try to get into the first Taproot block. This is difficult to explain to people.

AvW: The thing is the signaling happens per difficulty period of 2016 blocks. At least up until now 95 percent of blocks needed to signal support. But these 2016-block periods, they don’t neatly fit into exact dates or anything. They just happen. The signaling period, though, does start and end on specific dates, and that is why you can get weird edge cases.

SP: Let’s do an example there, it is fun to illustrate. Let’s say the deadline of this soft fork is on September 1st, pick a date, for signaling. On September 1st at midnight UTC. A miner mines block number 2016 or some multiple of 2016, that’s when the voting ends. They mine this block one second before midnight UTC. They signal “Yes.” Everyone who sees that block says “Ok we have 95 percent or whatever it is and right before midnight Taproot is active.” They have this automatic script that says “I am now going to put all my savings in a Taproot address because I want to be in the first block and I am feeling reckless, I love being reckless.” Then there is another miner who mines 2 seconds later because they didn’t see that recent block. There can be stale blocks. Their block arrives one second past midnight. It votes positive too but it is too late and so the soft fork does not activate because the signaling was not done before midnight, the deadline. That is the subtlety you get with BIP 9. Usually it is not a problem but it is difficult to explain these edge cases to people.

AvW: It is a bigger problem with shorter signaling periods as well?

SP: Yes of course. If there is a longer signaling period it is less likely that the signal is going to arrive at the edge of a period.

AvW: The threshold, I thought it was going to be 90 percent this time?

SP: That’s a separate thing. First let’s talk about, regardless of the threshold, these two mechanisms. One is based on time, that’s BIP 9, easy because we already have the code for it, the downside is all these weird things that you need to explain to people. Nowadays soft forks in Bitcoin are so important, maybe CNN wants to write about it, it is nice if you can actually explain it without sounding like a complete nerd. But the alternative is to say “Let’s just use this new BIP 8 that was proposed anyway and uses height.” We ignore all the LOT=true stuff but the height stuff is very useful. Then it is much simpler. As of this block height that’s when the signaling ends. That height is always at the edge of these retargeting periods. That’s just easier to reason about. You are saying “If the signaling is achieved by block 700,321 then it happens, or it doesn’t happen.” If there is a re-org, that could still be a problem by the way, there could be a re-org at the same height. But then the difference would be that it would activate because we just made precisely 95 percent. Then there is a re-org and that miner votes no and then it doesn’t activate. That is an edge case.

AvW: That is also true with BIP 9. You remove one edge case, you have one edge case less which is better.

SP: Right, with BIP 9 you could have the same scenario if it is just at the edge, exactly one miner vote. But the much bigger problem with BIP 9 is that it matters whether the time on the block is 1 second before or after midnight, even if they are way over the threshold. They might have 99.999 percent but that last block comes in too late and so the entire period is disqualified. With an election you are looking at all the votes. You are saying “It has got 97 percent support, it is going to happen” and then that last block is just too late and it doesn’t happen. It is difficult to explain but we don’t have this problem with height based activation.
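
A simplified sketch of the two deadline styles being contrasted here, in Python. This is not the exact BIP 9 or BIP 8 state machine, only the shape of the edge case: with a time-based deadline a period that straddles the timeout is disqualified, while with a height-based deadline the last signaling period is unambiguous.

    PERIOD = 2016
    THRESHOLD = 1815

    def time_based_period_result(signal_count: int, period_end_time: int,
                                 timeout: int) -> str:
        """BIP 9 style: a period that ends at or after the timeout no longer counts."""
        if period_end_time >= timeout:
            return "FAILED"                 # even overwhelming signaling is disqualified
        return "LOCKED_IN" if signal_count >= THRESHOLD else "STARTED"

    def height_based_period_result(signal_count: int, period_end_height: int,
                                   timeout_height: int) -> str:
        """BIP 8 style: the deadline is a block height, always on a period boundary."""
        if signal_count >= THRESHOLD:
            return "LOCKED_IN"
        return "FAILED" if period_end_height >= timeout_height else "STARTED"

    # A 97 percent signaling period that ends one second past the deadline fails:
    print(time_based_period_result(1956, period_end_time=1630454401,
                                   timeout=1630454400))        # FAILED
    print(height_based_period_result(1956, period_end_height=705600,
                                     timeout_height=705600))   # LOCKED_IN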

AvW: I guess the biggest disadvantage of using BIP 8 is that it is a bigger change as far as code comes.

SP: Yeah but I’ve looked at that code yesterday and wrote some tests for it. Andrew Chow and Luke Dashjr have already implemented a lot of it. It has already been reviewed by people. It is actually not too bad. It looks like 50 lines of code. However, if there is a bug in it it is really, really bad. Just because it is only a few lines of code, it might be safer to use something that is already out there. But I am not terribly worried about it.

The hash power threshold

AvW: Then there is the hash power threshold. Is it 90 or 95?

SP: What is being implemented now in Bitcoin Core is the general mechanism. It is saying “For any soft fork that you call Speedy Trial you could for example use 90 percent.” But for Taproot the code for Taproot in Bitcoin Core, it just says “It never activates.” That is the way you indicate that this soft fork is in the code but it is not going to happen yet. These numbers are arbitrary. The code will support 70 percent or 95 percent, as long as it is not some imaginary number or more than 100 percent.
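
One way to picture this: the signaling machinery is generic, and a specific deployment can be defined while being left inactive until its parameters are filled in. The structure below is only illustrative; it does not mirror Bitcoin Core’s actual data structures.

    # Illustrative only: a deployment table where Taproot is defined but has no
    # activation parameters yet, so it can never activate until they are set.
    NEVER_ACTIVE = None   # illustrative sentinel for "parameters not chosen yet"

    deployments = {
        "example_fork": {"bit": 28, "start_time": 1600000000, "timeout": 1610000000,
                         "threshold_percent": 95},
        "taproot":      {"bit": 2,  "start_time": NEVER_ACTIVE, "timeout": NEVER_ACTIVE,
                         "threshold_percent": 90},
    }

    def can_ever_activate(name: str) -> bool:
        return deployments[name]["start_time"] is not NEVER_ACTIVE

    print(can_ever_activate("taproot"))  # False, until real parameters land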

AvW: It is worth pointing out that in the end it is always 51 percent effectively because 51 percent of miners can always decide to orphan non-signaling blocks.

SP: And create a mess. But they could.

AvW: It is something to be aware of that miners can always do that if they choose to.

SP: But the general principle that is being built now is that at least we could do a slightly lower threshold. There might be still some discussion on whether that is safe or not.

AvW: It is not settled yet? 90 or 95 as far as you know?

SP: I don’t think so. You could have some arguments in favor of it but we will get into that with the risk section.

AvW: Or we can mention really briefly that the benefit of having the higher threshold is a lower risk of orphaned blocks after activation. That’s mainly the reason.

SP: But because we are doing a delayed activation, there’s a long time between signaling and activation, whereas normally you signal and immediately, or at least within 2 weeks, it activates. Right now it can take much, much longer. That means miners have a longer time to upgrade. There is a little less risk of orphaning even if you have a lower signaling threshold.

Delayed activation

AvW: True. I think that was the third point you wanted to get at anyway. The delayed activation.

SP: What happens normally is you tally the votes in the last difficulty period. If it is more than whatever the threshold is then the state of the soft fork goes from STARTED, as in we know about it and we are counting, to LOCKED_IN. The state LOCKED_IN will normally last for 2 weeks or one retargeting period, and then the rules actually take effect. What happens with Speedy Trial, the delayed activation part, is that this LOCKED_IN state will go on for much longer. It might go on for months. It is LOCKED_IN for months and then the rules actually take effect. This change is only two lines of code which is quite nice.
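
A sketch of the delayed activation tweak described here, assuming a minimum activation height: the deployment stays LOCKED_IN until that height is reached. It is a simplified state machine, not the actual Bitcoin Core implementation, and the heights are example numbers.

    PERIOD = 2016

    def next_state(state: str, threshold_met: bool, height: int,
                   min_activation_height: int) -> str:
        if state == "STARTED" and threshold_met:
            return "LOCKED_IN"
        if state == "LOCKED_IN":
            # The essence of the change: stay LOCKED_IN until the height passes.
            if height >= min_activation_height:
                return "ACTIVE"
            return "LOCKED_IN"
        return state

    # Locked in early, but the rules only take effect months later:
    state = "LOCKED_IN"
    for boundary in range(683424, 709633, PERIOD):   # example period boundaries
        state = next_state(state, threshold_met=True, height=boundary,
                           min_activation_height=709632)
    print(state)  # ACTIVE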

Downsides and risks for this proposal

AvW: Ok. Shall we get to some of the downsides of this proposal?

SP: Some of the risks. The first one we briefly mentioned. Because this thing is deployed quite quickly and because it is very clear that the activation of the rules is delayed, there is an incentive for miners to just signal rather than actually install the code. Then they could procrastinate on actually installing the software. That is fine unless they procrastinate so long that they forget to actually enforce the rules.

AvW: Which sounds quite bad to me Sjors.

SP: Yeah. That is bad, I agree. It is always possible for miners to just signal and not actually enforce the rules. This risk exists with any soft fork deployment.

AvW: Yes, miners can always just signal, fake signal. That has happened in the past. We have seen fake signaling. It was the BIP 66 soft fork where we learnt later that miners were fake signaling because we saw big re-orgs on the network. That is definitely something we would want to avoid.

SP: I think we briefly explained this earlier but we can explain it again. Bitcoin Core, if you use that to create your blocks as a miner, there are some safety mechanisms in place to make sure that you do not create a block that is invalid. However if another miner creates a block that is invalid you will mine on top of it. Then you have a problem because the full nodes that are enforcing Taproot will reject your block. Presumably most of the ecosystem, if this signaling works, will upgrade. Then you get into this whole very scary situation where you really hope that is true and that a massive part of the economy isn’t too lazy to upgrade, or you get a complete mess.

AvW: Yes, correct.

SP: I think the term we talked about is the idea of a troll. You could have a troll user. Let’s say I’m a mean user and I’m going to create a transaction that looks like a Taproot transaction but is actually invalid according to Taproot rules. The way that works, the mechanism in Bitcoin to do soft forks is you have this version number in your SegWit transaction. You say “This is a SegWit version 1 transaction.” Nodes know that when you see a higher SegWit version that you don’t know about…

AvW: Taproot version?

SP: SegWit version. The current version of SegWit is version 0 because we are nerds. If you see a SegWit version transaction with 1 or higher you assume that anybody can spend that money. That means that if somebody is spending from that address you don’t care. You don’t consider the block invalid as an old node. But a node that is aware of the version will check the rules. What you could do as a troll is create a broken Schnorr signature for example. You take a Schnorr signature and you swap one byte. Then if that is seen by an old node it says “This is SegWit version 1. I don’t know what that is. It is fine. Anybody can spend this so I am not going to check the signature.” But the Taproot nodes will say “Hey, wait a minute. That is an invalid signature, therefore that is an invalid block.” And we have a problem. There is a protection mechanism there that normal miners will not mine SegWit transactions that they don’t know about. They will not mine SegWit version 1 if they are not upgraded.
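
A toy illustration of the “anyone can spend” upgrade hook being described, in Python. The functions stand in for real script validation and are invented for this example; they are not how Bitcoin Core is structured.

    def old_node_accepts(witness_version: int, signature_valid: bool) -> bool:
        # Pre-Taproot consensus rules: version 0 is checked, higher witness versions
        # are treated as anyone-can-spend, so a broken signature is never examined.
        if witness_version == 0:
            return signature_valid
        return True

    def upgraded_node_accepts(witness_version: int, signature_valid: bool) -> bool:
        # A node enforcing the new rules actually validates version 1 spends.
        if witness_version in (0, 1):
            return signature_valid
        return True   # still anyone-can-spend for versions it does not know yet

    # The troll's SegWit version 1 spend with one byte of the signature flipped:
    print(old_node_accepts(1, signature_valid=False))       # True: block looks fine
    print(upgraded_node_accepts(1, signature_valid=False))  # False: block is invalid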

AvW: Isn’t it also the case that regular nodes will just not forward the transaction to other nodes?

SP: That is correct, that is another safety mechanism.

AvW: There are two safety mechanisms.

SP: They are basically saying “Hey other node, I don’t think you want to just give your money away.” Or alternatively “You are trying to do something super sophisticated that I don’t understand”, something called standardness. If you are doing something that is not standard I am not going to relay it. That is not a consensus rule. That’s important. It means you can compile your node to relay those things and you can compile your miner to mine these things but it is a footgun if you don’t know what you are doing. But it is not against consensus. However, when a transaction is in a block then you are dealing with consensus rules. That again means that old nodes will look at it and say “I don’t care. I’m not going to check the signature because that is a higher version than I know about.” But the nodes that are upgraded will say “Hey, wait a minute. This block contains a transaction that is invalid. This block is invalid.” And so a troll user doesn’t really stand a chance to do much damage.

AvW: Because the transaction won’t make it over the peer-to-peer network and even if it does it would only make it to miners that will still reject it. A troll user probably can’t do much harm.

SP: Our troll example of a user that swaps one byte in a Schnorr signature, he tries to send this transaction, he sends it to a node that is upgraded. That node will say “That’s invalid, go away. I am going to ban you now.” Maybe not ban but definitely gets angry. But if he sends it to a node that is not upgraded, that node will say “ I don’t know about this whole new SegWit version of yours. Go away. Don’t send me this modern stuff. I am really old school. Send me old stuff.” So the transaction doesn’t go anywhere but maybe somehow it does end up with a miner. Then the miner says “ I am not going to mine this thing that I don’t know about. That is dangerous because I might lose all my money.” However you might have a troll miner. That would be very, very expensive trolling but we have billionaires in this ecosystem. If you mine a block that is invalid it is going to cost you a few hundred thousand euros, I think at the current prices, maybe even more.

AvW: Yeah, 300,000 something.

SP: If you have 300,000 euros to burn you could make a block like that and challenge the ecosystem, say “Hey, here’s a block. Let me see if you actually verify it.” Then if that block goes to nodes that are upgraded, those will reject it. If that block goes to nodes that are not upgraded, it is fine, it is accepted. But then if somebody mines on top of it, if that miner has not upgraded they will not check it, they will build on top of it. Eventually the ecosystem will probably reject that entire chain and it becomes a mess. Then you really, really, really want a very large majority of miners to check the blocks, not just mine blindly. In general, there are already problems with miners just blindly mining on top of other miners, even for a few seconds, for economic reasons.

AvW: That was a long tangent on the problems with false signaling. All of this would only happen if miners are false signaling?

SP: To be clear false signaling is not some malicious act, it is just a lazy, convenient thing. You say “Don’t worry, I’ll do my homework. I will send you that memo in time, don’t worry.”

AvW: I haven’t upgraded yet but I will upgrade. That’s the risk of false signaling.

SP: It could be deliberate too but that would have to be a pretty large conspiracy.

AvW: One other concern, one risk that has been mentioned is that using LOT=false in general could help users launch a UASF because they could run a UASF client with LOT=true and incentivize miners to signal, like we just mentioned. That would not only mean they would fork off to their own soft fork themselves but basically activate a soft fork for the entire economy. That’s not a problem in itself but some people consider it a problem if users are incentivized to try a UASF. Do you understand that problem?

SP: If we go for this BIP 8 approach, if we switch to using block height rather than timestamps…

AvW: Or flag day.

SP: The Speedy Trial doesn’t use a flag day.

AvW: I know. I am saying that if you do a flag day you cannot do a UASF that triggers something else.

SP: You could maybe, why not?

AvW: What would you trigger?

SP: There is a flag day out there but you deploy software that requires signaling.

AvW: That is what UASF people would be running.

SP: They can run that anyway. Even if there is a flag day they can decide to run software that requires signaling, even though nobody would signal probably. But they could.

AvW: Absolutely but they cannot “co-opt” to call it that LOT=false nodes if there is only a flag day out there.

SP: That’s true. They would require the signaling but the flag day nodes that are out there would be like “I don’t know why you’re not accepting these blocks. There’s no signal, there’s nothing to activate. There is just my flag day and I am going to wait for my flag day.”

AvW: I don’t want to get into the weeds too much but if there are no LOT=false nodes to “co-opt” then miners could just false signal. The UASF nodes are activating Taproot but the rest of the network still hasn’t got Taproot activated. If the UASF nodes ever send coins to a Taproot address they are going to lose their coins at least on the rest of the network.

SP: And they wouldn’t get this re-org advantage that they think they have. This sounds even more complicated than the stuff we talked about 2 weeks ago.

AvW: Yes, that’s why I mentioned I’m getting a little bit into the weeds now. But do you get the problem?

SP: Is this an argument for or against a flag day?

AvW: It depends on your perspective Sjors.

SP: That of somebody who does not want Bitcoin to implode in a huge fire and would like to see Taproot activated.

AvW: If you don’t like UASFs, if you don’t want people to do UASFs then you might also not want LOT=false nodes out there.

SP: Yeah ok, you’re saying “If you really want to not see UASF exist at all.” I’m not terribly worried about these things existing. What I talked about 2 weeks ago, I am not going to contribute to them probably.

AvW: I just wanted to mention that that is one argument against LOT=false that I’ve seen out there. Not an argument I agree with myself either but I have seen the argument.

SP: Accurately what you are saying is it is an argument for not using signaling but using a flag day.

AvW: Yes. Even Speedy Trial uses signaling. While it is shorter, it might still be long enough to throw a UASF against it for example.

SP: And it is compatible with that. Because it uses signaling it is perfectly compatible with somebody deploying a LOT=true system and making a lot of noise about it. But I guess in this case, even the strongest LOT=true proponents, one of them at least, argued that would be completely reckless to do that.

AvW: There are no UASF proponents out there right now who think this is a good idea. As far as I know at least.

SP: So far there are not. But we talked about, in September I think, this cowboy theory. I am sure there is somebody out there that will try a UASF even on the Speedy Trial.

Speedy Trial as a template for future soft fork activations?

AvW: You can’t exclude the possibility at least. There is another argument against Speedy Trial, I find this argument quite compelling actually, which is we came out of 2017 with a lot of uncertainty. I just mentioned the uncertainty at the beginning of this episode, some of it at least. Some people thought UASF was a great success, some people thought it was reckless. Both are partly true, there is truth in both. Now we have a soft fork, Taproot, that everyone seems to love, users seem to like it, developers seem to like it, miners seem to like it, everyone likes it. The only thing we need to do is upgrade it. Now might be a very good opportunity to clean up the mess from 2017 in a way. Agree on what soft forks are exactly, what is the best way to deploy a soft fork and then use that. That way it becomes a template that we can use in more contentious times in the future when maybe there is another civil war going or there is more FUD being thrown at Bitcoin. We seem to be in calm waters right now. Maybe this is a really good time to do it right which will help us moving into the future. While Speedy Trial, no one thinks this is actually the right way. It is fine, we need something so let’s do it. It is arguably kicking the can of the really big discussion we need to have down the road.

SP: Yeah, maybe. One scenario I could see is where the Speedy Trial goes through, activates successfully and the Taproot deployment goes through and everything is fine. Then I think that would remove that trauma. The next soft fork would be done in the nice traditional LOT=false BIP 8. We’ll release something and then several months later miners start signaling and it will activate. So maybe it is a way to get over the trauma.

AvW: You think this is a way to get over the post traumatic stress disorder? Let everyone see that miners can actually activate.

SP: It might be good to get rid of that tension because the downside of releasing regular say BIP 8 LOT=false mechanism is that it is going to be 6 months of hoping that miners are going to signal and then hopefully just 2 weeks and it is done. That 6 months where everybody is anticipating it, people are going to go even crazier than they are now perhaps. I guess it is a nice way to say “Let’s get this trauma over with” But I think there are downsides. For one thing, what if in the next 6 months we find a bug in Taproot? We have 6 months to think about something that is already activated.

AvW: We can soft fork it out.

SP: If that is a bug that can be fixed in a soft fork, yes.

AvW: I think any Taproot, you could just burn that type.

SP: I guess you could add a soft fork that says “No version 1 addresses can be mined.”

AvW: Yes exactly. I think that should be possible right?

SP: Yeah. I guess it is possible to nuke Taproot but it is still scary because old nodes will think it is active.

AvW: This is a pretty minor concern for me.

SP: It is and it isn’t. Old nodes, nodes that are released now basically who know about this Speedy Trial, they will think Taproot is active. They might create receive addresses and send coins. But their transactions won’t confirm or they will confirm and then get unconfirmed. They won’t get swept away because the soft fork will say “You cannot spend this money.” It is not anyone-can-spend, it is “You cannot spend this.” It is protected in that sense. I suppose there are soft fork ways out of a mess but that are not as nice as saying “Abort, abort, abort. Don’t signal.” If we use the normal BIP 8 mechanism, until miners start signaling you can just say “Do not signal.”

AvW: Sure. Any final thoughts? What are your expectations? What is going to happen?

SP: I don’t know, I’’m happy to see progress on the code. At least we’ve got actual code and then we’ll decide what to do with it. Thank you for listening to the van Wirdum Sjorsnado.

AvW: There you go.

Media: https://www.youtube.com/watch?v=oCPrjaw3YVI

Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Intro

Aaron van Wirdum (AvW): Live from Utrecht this is the van Wirdum Sjorsnado. Sjors, what is your pun of the week?

Sjors Provoost (SP): I actually asked you for a pun and then you said “Cut, re-edit. We are going to do it again.” I don’t have a pun this week.

AvW: Puns are your thing.

SP: We tried this LOT thing last time.

AvW: Sjors, we are going to talk a lot this week.

SP: We are going to get blocked for this.

AvW: We talked a lot two weeks ago. LOT was the parameter we discussed two weeks ago, LOT=true, LOT=false, about Taproot activation. We are two weeks further in and now it seems like the community is somewhat reaching consensus on an activation solution called “Speedy Trial”. That is what we are going to discuss today.

SP: That’s right.

Speedy Trial proposal

Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Proposed timeline: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018594.html

AvW: Should we begin with Speedy Trial, what is Speedy Trial Sjors?

SP: I think that is a good idea to do. With the proposals that we talked about last time for activating Taproot, basically Bitcoin Core would release some software, maybe in April or something, and then the miners would start signaling using that software in, I think, August or something. Then they can signal for a year and at the end of the year the whole thing ends.

AvW: That was LOT=true or LOT=false. The debate was on whether or not it should end with forced signaling or not. That’s the LOT=true, LOT=false thing.

SP: The thing to keep in mind is that the first signaling, it would be a while before that starts happening. Until that time we really don’t know essentially. What Speedy Trial proposes is to say “Rather than discussing whether or not there is going to be signaling and having lots of arguments about it, let’s just try that really quickly.” Instead there would be a release maybe around April, of course there’s nobody in charge of actual timelines. In that case the signaling would start much earlier, I’m not entirely sure when, maybe in May or pretty early. The signaling would only be for 3 months. At the end of 3 months it would give up.

AvW: It would end on LOT=false basically.

SP: Yes. It is the equivalent of LOT=false or just how it used to be with soft forks. It signals but only for a couple of months.

AvW: What if it isn’t activated within those months by hash power, which is probably going to be 90 percent hash power? It is going to require 90 percent of hash power to activate Taproot. If that doesn’t happen then the proposal expires, and when it expires we can continue our discussion on how to activate Taproot. Or if it does activate, then what happens?

SP: The thing is because you still want to give miners enough time to really upgrade their software the actual Taproot rules won’t take effect until September or August.

AvW: Miners and actual Bitcoin users.

SP: Yes. You want to give everybody plenty of time to upgrade. The idea is we would start the signaling very quickly. Also miners can signal without installing the software. Once the signal threshold has been reached then the soft fork is set in stone. It is going to happen, at least if people run the full nodes. Then there is still some time for people to upgrade and for miners to really upgrade and run that new software rather than just signal for it. They could run that software but they might not. That is why it is sort of ok to release a bit early.

AvW: They should really be running the software if they are signaling?

SP: No. We can get into that later.

AvW: For now, to recap really briefly, Speedy Trial means release the software fairly fast and quickly after it is released start the signaling period for 3 months, which is relatively short for a signaling period. See if 90 percent of miners agree, if they do Taproot activates 6 months after the initial release of the software. If 90 percent of miners don’t activate within 3 months the proposal expires and we can continue the discussion on how to activate Taproot.

SP: We are then back to where we were a few weeks ago but with more data.

The evolution of the Speedy Trial proposal

AvW: Exactly. I want to briefly touch on how we got here. We discussed the whole LOT=true and LOT=false thing and there appeared to be a gridlock. Some people definitely didn’t want LOT=true, some people definitely didn’t want LOT=false and then a third proposal entered the mix. It wasn’t brand new but it wasn’t a major part of the discussion, a simple flag day. A simple flag day would have meant that the Bitcoin Core code would have included a date in the future or a block height in the future, at which point the Taproot upgrade would activate regardless of hash power up until that point.

SP: I find this an even worse idea. When there is a lot of debate people start proposing stuff.

AvW: I think the reason that we reached this gridlock situation where people feel very strongly about different ideas has a lot to do with what happened during the SegWit upgrade. We discussed this before but people have very different ideas of what actually happened. Some people feel very strongly that users showed their muscles. Users claimed their sovereignty, users claimed back the protocol and they basically forced miners to activate the SegWit upgrade. It was a huge victory for Bitcoin users. Then other people feel very strongly that Bitcoin came near to a complete disaster with a fractured network and people losing money, a big mess. The first group of people really likes doing a UASF again or starting with LOT=false and switching to LOT=true or maybe just starting with LOT=true. The people who think it was a big mess, they prefer to use a flag day this time. Nice and safe in a way, use a flag day, none of this miner signaling, miners can’t be forced to signal and all of that. These different views on what actually happened a couple of years ago now mean people can’t really agree on a new activation proposal. After a lot of discussion all factions were sort of willing to settle on Speedy Trial even though no one really likes it for a couple of reasons which we will get into. The UASF people, they are ok with Speedy Trial because it doesn’t get in the way of the UASF. If the Speedy Trial fails they will still do the UASF next year. The flag day people are sort of ok because the 3 months doesn’t allow for a big enough window to do the UASF probably. The UASF people have said that that is too fast and let’s do this Speedy Trial.

SP: There is also still the LOT=false, let’s just do soft forks the way we’ve done them before where they might just expire. A group of people that were quietly continuing to work on the actual code that could do that. Just from mailing lists and Twitter it is hard to gauge what is really going on. This is a very short timescale.

AvW: The LOT=false people, this is basically LOT=false just on a shorter timescale. Everyone is sort of willing to settle on this even though no one really likes it.

SP: From the point of view that I’m seeing, I’m actually looking at the code that is being written, what I have noticed is that once the Speedy Trial came out more people came out of the woodwork and started writing code that could actually get this done. Whereas before it was mostly Luke I think writing that one pull request.

AvW: BIP 8?

SP: Yeah BIP 8. I guess we can get into the technical details, what I am trying to say is one thing that shows that Speedy Trial seems like a good idea is that there are more developers from different angles cooperating on it and getting things done a little bit more quickly. When you have some disagreement then people start procrastinating, not reviewing things or not writing things. That’s a vague indicator that this seems to be ok. People are working on it quickly and it is making progress so that is good.

AvW: Some technical details you want to get into?

Different approaches of implementing Speedy Trial

Stack Exchange on block height versus mix of block height and MTP: https://bitcoin.stackexchange.com/questions/103854/should-block-height-or-mtp-or-a-mixture-of-both-be-used-in-a-soft-fork-activatio/

PR 21377 implementing mix of block height and MTP: https://github.com/bitcoin/bitcoin/pull/21377

PR 21392 implementing block height: https://github.com/bitcoin/bitcoin/pull/21392

SP: The idea of Speedy Trial can be implemented in two different ways. You can use the existing BIP 9 system that we already have. The argument for that would be that’s far less code because it already works. It is just for 3 months so why not just use the old BIP 9 code?

AvW: BIP 9 used dates in the future?

SP: Yes. You can tell when the signaling could start, when the signaling times out. There are some annoying edge cases where if it ends right around the deadline but then there is a re-org and it ends right before the deadline, people’s money might get lost if they try to get into the first Taproot block. This is difficult to explain to people.

AvW: The thing is the signaling happens per difficulty period of 2016 blocks. At least up until now 95 percent of blocks needed to signal support. But these 2016-block periods, they don’t neatly fit into exact dates or anything. They just happen. While the signaling period does start and end on specific dates, that is why you can get weird edge cases.

SP: Let’s do an example there, it is fun to illustrate. Let’s say the deadline of this soft fork is on September 1st, pick a date, for signaling. On September 1st at midnight UTC. A miner mines block number 2016 or some multiple of 2016, that’s when the voting ends. They mine this block one second before midnight UTC. They signal “Yes.” Everyone who sees that block says “Ok we have 95 percent or whatever it is and right before midnight Taproot is active.” They have this automatic script that says “I am now going to put all my savings in a Taproot address because I want to be in the first block and I am feeling reckless, I love being reckless.” Then there is another miner who mines 2 seconds later because they didn’t see that recent block. There can be stale blocks. Their block arrives one second past midnight. It votes positive too but it is too late and so the soft fork does not activate because the signaling was not done before midnight, the deadline. That is the subtlety you get with BIP 9. Usually it is not a problem but it is difficult to explain these edge cases to people.

AvW: It is a bigger problem with shorter signaling periods as well?

SP: Yes of course. If there is a longer signaling period it is less likely that the signal is going to arrive at the edge of a period.

AvW: The threshold, I thought it was going to be 90 percent this time?

SP: That’s a separate thing. First let’s talk about, regardless of the threshold, these two mechanisms. One is based on time, that’s BIP 9, easy because we already have the code for it, the downside is all these weird things that you need to explain to people. Nowadays soft forks in Bitcoin are so important, maybe CNN wants to write about it, it is nice if you can actually explain it without sounding like a complete nerd. But the alternative is to say “Let’s just use this new BIP 8 that was proposed anyway and uses height.” We ignore all the LOT=true stuff but the height stuff is very useful. Then it is much simpler. As of this block height that’s when the signaling ends. That height is always at the edge of these retargeting periods. That’s just easier to reason about. You are saying “If the signaling is achieved by block 700,321 then it happens, or it doesn’t happen.” If there is a re-org, that could still be a problem by the way, there could be a re-org at the same height. But then the difference would be that it would activate because we just barely made precisely 95 percent. Then there is a re-org and that miner votes no and then it doesn’t activate. That is an edge case.

AvW: That is also true with BIP 9. You remove one edge case, you have one edge case less which is better.

SP: Right, with BIP 9 you could have the same scenario if it is right at the edge, exactly one vote, one miner’s vote making the difference. But the much bigger problem with BIP 9 is that whether the time on the block is 1 second after midnight or before matters, even if they are way over the threshold. They might have 99.999 percent but that last block comes in too late and so the entire period is disqualified. With an election you are looking at all the votes. You are saying “It has got 97 percent support, it is going to happen” and then that last block is just too late and it doesn’t happen. It is difficult to explain but we don’t have this problem with height based activation.
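
(Note: to make the contrast concrete, here is a toy sketch of the difference being described. It is not the actual Bitcoin Core or BIP 8 state machine, which works on median-time-past and a formal state table; the function and field names are made up for illustration.)

```python
# Toy illustration of time-based vs height-based signaling deadlines.
# NOT the real BIP 9 / BIP 8 code; "blocks" is a list of dicts with
# illustrative "time", "height" and "signal" fields.

PERIOD = 2016      # one retargeting period
THRESHOLD = 0.90   # Speedy Trial lowers BIP 9's usual 95% to 90%

def time_based_period_locks_in(blocks, timeout_time):
    """BIP 9 style: the period only counts if it finishes before the timeout
    timestamp, no matter how strong the signal was."""
    if blocks[-1]["time"] >= timeout_time:   # one second late and the period is void
        return False
    votes = sum(b["signal"] for b in blocks)
    return votes >= THRESHOLD * PERIOD

def height_based_period_locks_in(blocks, timeout_height):
    """BIP 8 style: the timeout is a height, which always falls on a period
    boundary, so a period either exists before the timeout or it doesn't."""
    if blocks[0]["height"] >= timeout_height:
        return False
    votes = sum(b["signal"] for b in blocks)
    return votes >= THRESHOLD * PERIOD
```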

AvW: I guess the biggest disadvantage of using BIP 8 is that it is a bigger change as far as the code goes.

SP: Yeah, but I looked at that code yesterday and wrote some tests for it. Andrew Chow and Luke Dashjr have already implemented a lot of it. It has already been reviewed by people. It is actually not too bad. It looks like 50 lines of code. However, if there is a bug in it it is really, really bad. Even though it is only a few lines of code, it might be safer to use something that is already out there. But I am not terribly worried about it.

The hash power threshold

AvW: Then there is the hash power threshold. Is it 90 or 95?

SP: What is being implemented now in Bitcoin Core is the general mechanism. It is saying “For any soft fork that you call Speedy Trial you could for example use 90 percent.” But the code for Taproot in Bitcoin Core itself just says “It never activates.” That is the way you indicate that this soft fork is in the code but it is not going to happen yet. These numbers are arbitrary. The code will support 70 percent or 95 percent, as long as it is not some imaginary number or more than 100 percent.
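
(Note: as a rough sketch of what “the general mechanism” with a configurable threshold might look like; the field names are illustrative, not Bitcoin Core’s actual identifiers.)

```python
# Illustrative only: the shape of a per-deployment configuration with a
# tunable signaling threshold and a "never activates" placeholder.
PERIOD = 2016

deployments = {
    "taproot": {
        "bit": 2,            # version bit miners signal on
        "threshold": 0.90,   # could be 0.95 or 0.70; anything sane up to 100%
        "scheduled": False,  # "never activates" until real parameters are set
    },
}

def period_locks_in(signaling_blocks, params):
    """Count signaling blocks in one retargeting period against the threshold."""
    return params["scheduled"] and signaling_blocks >= params["threshold"] * PERIOD
```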

AvW: It is worth pointing out that in the end it is always 51 percent effectively because 51 percent of miners can always decide to orphan non-signaling blocks.

SP: And create a mess. But they could.

AvW: It is something to be aware of that miners can always do that if they choose to.

SP: But the general principle that is being built now is that at least we could do a slightly lower threshold. There might be still some discussion on whether that is safe or not.

AvW: It is not settled yet? 90 or 95 as far as you know?

SP: I don’t think so. You could have some arguments in favor of it but we will get into that with the risk section.

AvW: What we can mention really briefly is that the benefit of having the higher threshold is a lower risk of orphaned blocks after activation. That’s mainly the reason.

SP: But because we are doing a delayed activation, there’s a long time between signaling and activation, whereas normally you signal and immediately, or at least within 2 weeks, it activates. Right now it can take much, much longer. That means miners have a longer time to upgrade. There is a little less risk of orphaning even if you have a lower signaling threshold.

Delayed activation

AvW: True. I think that was the third point you wanted to get at anyway. The delayed activation.

SP: What happens normally is you tally the votes in the last difficulty period. If it is more than whatever the threshold is then the state of the soft fork goes from STARTED, as in we know about it and we are counting, to LOCKED_IN. The state LOCKED_IN will normally last for 2 weeks or one retargeting period, and then the rules actually take effect. What happens with Speedy Trial, the delayed activation part, is that this LOCKED_IN state will go on for much longer. It might go on for months. It is LOCKED_IN for months and then the rules actually take effect. This change is only two lines of code which is quite nice.
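
(Note: a simplified sketch of the state transitions just described, with the delayed-activation tweak where LOCKED_IN persists until a minimum activation height. This only illustrates the idea and is not the actual versionbits implementation.)

```python
# Simplified state machine for one deployment, evaluated once per 2016-block
# retargeting period. Illustrative only.
PERIOD = 2016

def next_state(state, signaling_count, threshold, next_period_height, min_activation_height):
    if state == "STARTED":
        if signaling_count >= threshold * PERIOD:
            return "LOCKED_IN"
        return "STARTED"
    if state == "LOCKED_IN":
        # Normally LOCKED_IN lasts exactly one period; with delayed activation
        # it persists until the chain reaches the minimum activation height.
        if next_period_height >= min_activation_height:
            return "ACTIVE"
        return "LOCKED_IN"
    return state
```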

Downsides and risks for this proposal

AvW: Ok. Shall we get to some of the downsides of this proposal?

SP: Some of the risks. The first one we briefly mentioned. Because this thing is deployed quite quickly and because it is very clear that the activation of the rules is delayed, there is an incentive for miners to just signal rather than actually install the code. Then they could procrastinate on actually installing the software. That is fine unless they procrastinate so long that they forget to actually enforce the rules.

AvW: Which sounds quite bad to me Sjors.

SP: Yeah. That is bad, I agree. It is always possible for miners to just signal and not actually enforce the rules. This risk exists with any soft fork deployment.

AvW: Yes, miners can always just signal, fake signal. That has happened in the past. We have seen fake signaling. It was the BIP 66 soft fork where we learnt later that miners were fake signaling because we saw big re-orgs on the network. That is definitely something we would want to avoid.

SP: I think we briefly explained this earlier but we can explain it again. Bitcoin Core, if you use that to create your blocks as a miner, there are some safety mechanisms in place to make sure that you do not create a block that is invalid. However if another miner creates a block that is invalid you will mine on top of it. Then you have a problem because the full nodes that are enforcing Taproot will reject your block. Presumably most of the ecosystem, if this signaling works, will upgrade. Then you get into this whole very scary situation where you really hope that is true, that there isn’t a massive part of the economy that is too lazy to upgrade, because otherwise you get a complete mess.

AvW: Yes, correct.

SP: I think the term we talked about is the idea of a troll. You could have a troll user. Let’s say I’m a mean user and I’m going to create a transaction that looks like a Taproot transaction but is actually invalid according to Taproot rules. The way that works, the mechanism in Bitcoin to do soft forks is you have this version number in your SegWit transaction. You say “This is a SegWit version 1 transaction.” Nodes know that when you see a higher SegWit version that you don’t know about…

AvW: Taproot version?

SP: SegWit version. The current version of SegWit is version 0 because we are nerds. If you see a SegWit transaction with version 1 or higher you assume that anybody can spend that money. That means that if somebody is spending from that address you don’t care. You don’t consider the block invalid as an old node. But a node that is aware of the version will check the rules. What you could do as a troll is create a broken Schnorr signature for example. You take a Schnorr signature and you swap one byte. Then if that is seen by an old node it says “This is SegWit version 1. I don’t know what that is. It is fine. Anybody can spend this so I am not going to check the signature.” But the Taproot nodes will say “Hey, wait a minute. That is an invalid signature, therefore that is an invalid block.” And we have a problem. There is a protection mechanism there that normal miners will not mine SegWit transactions that they don’t know about. They will not mine SegWit version 1 if they are not upgraded.

AvW: Isn’t it also the case that regular nodes will just not forward the transaction to other nodes?

SP: That is correct, that is another safety mechanism.

AvW: There are two safety mechanisms.

SP: They are basically saying “Hey other node, I don’t think you want to just give your money away.” Or alternatively “You are trying to do something super sophisticated that I don’t understand”, something called standardness. If you are doing something that is not standard I am not going to relay it. That is not a consensus rule. That’s important. It means you can compile your node to relay those things and you can compile your miner to mine these things but it is a footgun if you don’t know what you are doing. But it is not against consensus. However, when a transaction is in a block then you are dealing with consensus rules. That again means that old nodes will look at it and say “I don’t care. I’m not going to check the signature because that is a higher version than I know about.” But the nodes that are upgraded will say “Hey, wait a minute. This block contains a transaction that is invalid. This block is invalid.” And so a troll user doesn’t really stand a chance to do much damage.
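
(Note: a toy sketch of the two layers being described here, consensus versus standardness. Function names and structure are made up for the example; this is not actual node code.)

```python
# Toy illustration: consensus rules vs. standardness (policy).

def consensus_valid_spend(witness_version, node_knows_taproot, taproot_rules_ok):
    """Consensus: what may appear in a valid block."""
    if witness_version >= 1 and not node_knows_taproot:
        # Old node: an unknown witness version is "anyone can spend",
        # so it does not even look at the signature.
        return True
    if witness_version == 1 and node_knows_taproot:
        # Upgraded node: version 1 spends must satisfy the Taproot rules.
        return taproot_rules_ok
    return witness_version == 0  # pretend version 0 spends passed their own checks

def standard_for_relay_or_mining(witness_version, highest_known_version):
    """Policy: nodes do not relay, and miners do not mine, spends of witness
    versions they do not understand, even though consensus would allow it."""
    return witness_version <= highest_known_version
```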

AvW: Because the transaction won’t make it over the peer-to-peer network and even if it does it would only make it to miners that will still reject it. A troll user probably can’t do much harm.

SP: Our troll example of a user that swaps one byte in a Schnorr signature, he tries to send this transaction, he sends it to a node that is upgraded. That node will say “That’s invalid, go away. I am going to ban you now.” Maybe not ban but definitely gets angry. But if he sends it to a node that is not upgraded, that node will say “ I don’t know about this whole new SegWit version of yours. Go away. Don’t send me this modern stuff. I am really old school. Send me old stuff.” So the transaction doesn’t go anywhere but maybe somehow it does end up with a miner. Then the miner says “ I am not going to mine this thing that I don’t know about. That is dangerous because I might lose all my money.” However you might have a troll miner. That would be very, very expensive trolling but we have billionaires in this ecosystem. If you mine a block that is invalid it is going to cost you a few hundred thousand euros, I think at the current prices, maybe even more.

AvW: Yeah, 300,000 something.

SP: If you have 300,000 euros to burn you could make a block like that and challenge the ecosystem, say “Hey, here’s a block. Let me see if you actually verify it.” Then if that block goes to nodes that are upgraded, those will reject it. If that block goes to nodes that are not upgraded, it is fine, it is accepted. But then if somebody mines on top of it, if that miner has not upgraded they will not check it, they will build on top of it. Eventually the ecosystem will probably reject that entire chain and it becomes a mess. Then you really, really, really want a very large majority of miners to check the blocks, not just mine blindly. In general, there are already problems with miners just blindly mining on top of other miners, even for a few seconds, for economic reasons.

AvW: That was a long tangent on the problems with false signaling. All of this would only happen if miners are false signaling?

SP: To be clear false signaling is not some malicious act, it is just a lazy, convenient thing. You say “Don’t worry, I’ll do my homework. I will send you that memo in time, don’t worry.”

AvW: I haven’t upgraded yet but I will upgrade. That’s the risk of false signaling.

SP: It could be deliberate too but that would have to be a pretty large conspiracy.

AvW: One other concern, one risk that has been mentioned is that using LOT=false in general could help users launch a UASF because they could run a UASF client with LOT=true and incentivize miners to signal, like we just mentioned. That would not only mean they would fork off to their own soft fork themselves but basically activate a soft fork for the entire economy. That’s not a problem in itself but some people consider it a problem if users are incentivized to try a UASF. Do you understand that problem?

SP: If we go for this BIP 8 approach, if we switch to using block height rather than timestamps…

AvW: Or flag day.

SP: The Speedy Trial doesn’t use a flag day.

AvW: I know. I am saying that if you do a flag day you cannot do a UASF that triggers something else.

SP: You could maybe, why not?

AvW: What would you trigger?

SP: There is a flag day out there but you deploy software that requires signaling.

AvW: That is what UASF people would be running.

SP: They can run that anyway. Even if there is a flag day they can decide to run software that requires signaling, even though nobody would signal probably. But they could.

AvW: Absolutely, but they cannot “co-opt”, to call it that, the LOT=false nodes if there is only a flag day out there.

SP: That’s true. They would require the signaling but the flag day nodes that are out there would be like “I don’t know why you’re not accepting these blocks. There’s no signal, there’s nothing to activate. There is just my flag day and I am going to wait for my flag day.”

AvW: I don’t want to get into the weeds too much but if there are no LOT=false nodes to “co-opt” then miners could just false signal. The UASF nodes are activating Taproot but the rest of the network still hasn’t got Taproot activated. If the UASF nodes ever send coins to a Taproot address they are going to lose their coins at least on the rest of the network.

SP: And they wouldn’t get this re-org advantage that they think they have. This sounds even more complicated than the stuff we talked about 2 weeks ago.

AvW: Yes, that’s why I mentioned I’m getting a little bit into the weeds now. But do you get the problem?

SP: Is this an argument for or against a flag day?

AvW: It depends on your perspective Sjors.

SP: That of somebody who does not want Bitcoin to implode in a huge fire and would like to see Taproot activated.

AvW: If you don’t like UASFs, if you don’t want people to do UASFs then you might also not want LOT=false nodes out there.

SP: Yeah ok, you’re saying “If you really want to not see UASF exist at all.” I’m not terribly worried about these things existing. What I talked about 2 weeks ago, I am not going to contribute to them probably.

AvW: I just wanted to mention that that is one argument against LOT=false that I’ve seen out there. Not an argument I agree with myself either but I have seen the argument.

SP: More accurately, what you are saying is that it is an argument for not using signaling but using a flag day.

AvW: Yes. Even Speedy Trial uses signaling. While it is shorter, it might still be long enough to throw a UASF against it for example.

SP: And it is compatible with that. Because it uses signaling it is perfectly compatible with somebody deploying a LOT=true system and making a lot of noise about it. But I guess in this case, even the strongest LOT=true proponents, one of them at least, argued that it would be completely reckless to do that.

AvW: There are no UASF proponents out there right now who think this is a good idea. As far as I know at least.

SP: So far there are not. But we talked about, in September I think, this cowboy theory. I am sure there is somebody out there that will try a UASF even on the Speedy Trial.

Speedy Trial as a template for future soft fork activations?

AvW: You can’t exclude the possibility at least. There is another argument against Speedy Trial, I find this argument quite compelling actually, which is we came out of 2017 with a lot of uncertainty. I just mentioned the uncertainty at the beginning of this episode, some of it at least. Some people thought UASF was a great success, some people thought it was reckless. Both are partly true, there is truth in both. Now we have a soft fork, Taproot, that everyone seems to love, users seem to like it, developers seem to like it, miners seem to like it, everyone likes it. The only thing we need to do is upgrade to it. Now might be a very good opportunity to clean up the mess from 2017 in a way. Agree on what soft forks are exactly, what is the best way to deploy a soft fork and then use that. That way it becomes a template that we can use in more contentious times in the future when maybe there is another civil war going on or there is more FUD being thrown at Bitcoin. We seem to be in calm waters right now. Maybe this is a really good time to do it right which will help us moving into the future. Whereas with Speedy Trial, no one thinks this is actually the right way. It is fine, we need something so let’s do it. It is arguably kicking the can of the really big discussion we need to have down the road.

SP: Yeah, maybe. One scenario I could see is where the Speedy Trial goes through, activates successfully and the Taproot deployment goes through and everything is fine. Then I think that would remove that trauma. The next soft fork would be done in the nice traditional LOT=false BIP 8. We’ll release something and then several months later miners start signaling and it will activate. So maybe it is a way to get over the trauma.

AvW: You think this is a way to get over the post traumatic stress disorder? Let everyone see that miners can actually activate.

SP: It might be good to get rid of that tension because the downside of releasing a regular, say BIP 8 LOT=false, mechanism is that it is going to be 6 months of hoping that miners are going to signal and then hopefully just 2 weeks and it is done. That 6 months where everybody is anticipating it, people are going to go even crazier than they are now perhaps. I guess it is a nice way to say “Let’s get this trauma over with.” But I think there are downsides. For one thing, what if in the next 6 months we find a bug in Taproot? We have 6 months to think about something that is already activated.

AvW: We can soft fork it out.

SP: If that is a bug that can be fixed in a soft fork, yes.

AvW: I think for any Taproot output, you could just burn that output type.

SP: I guess you could add a soft fork that says “No version 1 addresses can be mined.”

AvW: Yes exactly. I think that should be possible right?

SP: Yeah. I guess it is possible to nuke Taproot but it is still scary because old nodes will think it is active.

AvW: This is a pretty minor concern for me.

SP: It is and it isn’t. Old nodes, nodes that are released now basically who know about this Speedy Trial, they will think Taproot is active. They might create receive addresses and send coins. But their transactions won’t confirm or they will confirm and then get unconfirmed. They won’t get swept away because the soft fork will say “You cannot spend this money.” It is not anyone-can-spend, it is “You cannot spend this.” It is protected in that sense. I suppose there are soft fork ways out of a mess but they are not as nice as saying “Abort, abort, abort. Don’t signal.” If we use the normal BIP 8 mechanism, until miners start signaling you can just say “Do not signal.”

AvW: Sure. Any final thoughts? What are your expectations? What is going to happen?

SP: I don’t know, I’m happy to see progress on the code. At least we’ve got actual code and then we’ll decide what to do with it. Thank you for listening to the van Wirdum Sjorsnado.

AvW: There you go.

Sjors Provoost, Aaron van Wirdum

Date: April 23, 2021

Transcript By: Michael Folkson

Tags: Taproot

Category: Podcast

Media: https://www.youtube.com/watch?v=SHmEXPvN6t4

Previous episode on lockinontimeout (LOT): https://btctranscripts.com/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout/

Previous episode on Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/

Aaron van Wirdum on “There are now two Taproot activation clients, here’s why”: https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why

Intro

Aaron van Wirdum (AvW): Live from Utrecht this is the van Wirdum Sjorsnado.

Sjors Provoost (SP): Hello

AvW: Sjors, today we have a “lot” more to discuss.

SP: We’ve already made this pun.

AvW: We’ve already made it twice I think. It doesn’t matter. We are going to discuss the final implementation details of Speedy Trial today. We have already covered Speedy Trial in a previous episode. This time we are also going to contrast it to the LOT=true client which is an alternative that has been released by a couple of community members. We are going to discuss how they compare.

SP: That sounds like a good idea. We also talked about Taproot activation options in general in a much earlier episode.

AvW: One of the first ones?

SP: We also talked about this idea of this cowboy mentality where somebody would eventually release a LOT=true client whatever you do.

AvW: That is where we are.

SP: We also predicted correctly that there would be a lot of bikeshedding.

AvW: Yes, this is also something we will get into. First of all as a very brief recap, we are talking about Taproot activation. Taproot is a proposed protocol upgrade for compact and potentially privacy preserving smart contracts on the Bitcoin protocol. Is that a good summary?

SP: Yeah I think so.

AvW: The discussion on how to upgrade has been going on for a while now. The challenge is that on a decentralized open network like Bitcoin without a central dictator to tell everyone what to run when, you are not going to get everyone to upgrade at the same time. But we do want to keep the network in consensus somehow or other.

SP: Yeah. The other thing that can work when it is a distributed system is some sort of conventions, ways that you are used to doing things. But unfortunately the convention we had ran into problems with the SegWit deployment. Then the question is “Should we try something else or was it just a freak accident and we should try the same thing again?”

AvW: I think the last preface before we really start getting into Speedy Trial, I’d like to point out the general idea with a soft fork, a backwards compatible upgrade which Taproot is, is that if a majority of hash power is enforcing the new rules that means that the network will stay in consensus.

SP: Yeah. We can repeat that if you keep making transactions that are pre-Taproot then those transactions are still valid. In that sense, as a user you can ignore soft forks. Unfortunately if there is a problem you cannot ignore that as a user even if your transactions don’t use Taproot.

AvW: I think everyone agrees that it is very nice if a majority of hash power enforces the rules. There are coordination mechanisms to measure how many miners are on board with an upgrade. That is how you can coordinate a fairly safe soft fork. That is something everyone agrees on. Where people start to disagree is on what happens if miners don’t actually cooperate with this coordination. We are not going to rehash all of that. There are previous episodes about that. What we are going to explain is that in the end the Bitcoin Core development community settled on a solution called “Speedy Trial.” We already mentioned that as well in a previous episode. Now it is finalized and we are going to explain what the finalized parameters are for this.

SP: There was a slight change.

AvW: Let’s hear it Sjors. What are the finalized parameters for Speedy Trial? How is Bitcoin Core going to upgrade to Taproot?

Bitcoin Core finalized activation parameters

Bitcoin Core 0.21.1 release notes: https://github.com/bitcoin/bitcoin/blob/0.21/doc/release-notes.md

Speedy Trial activation parameters merged into Core: https://github.com/bitcoin/bitcoin/pull/21686

SP: Starting on I think it is this Sunday (April 25th, midnight), the first time the difficulty readjusts, that happens every two weeks, probably one week after Sunday…

AvW: It is Saturday

SP:… the signaling starts. In about two weeks the signaling starts, no earlier than one week from now.

AvW: Just to be clear, that’s the earliest it can start.

SP: The earliest it can start is April 24th but because it only starts at a new difficulty adjustment period, a new retargeting period, it probably won’t start until two weeks from now.

AvW: It will start at the first new difficulty period after April 24th which is estimated I think somewhere early May. May 4th, may the fourth be with you Sjors.

SP: That would be a cool date. That is when the signaling starts and the signaling happens in you could say voting rounds. A voting round is two weeks or one difficulty adjustment period, one retargeting period. If 90 percent of the blocks in that voting period signal on bit number 2, if that happens Taproot is locked in. Locked in means that it is going to happen, imagine the little gif with Ron Paul, “It’s happening.” But the actual Taproot rules won’t take effect immediately, they will take effect at block number 709632.

AvW: Which is estimated to be mined when?

SP: That will be November 12th this year.

AvW: That is going to differ a bit of course based on how fast blocks are going to be mined over the coming months. It is going to be November almost certainly.

SP: Which would be 4 years after the SegWit2x effort imploded.

AvW: Right, nice timing in that sense.

SP: Every date is nice. That’s what the Speedy Trial does. Every two weeks there is a vote, if 90 percent of the vote is reached then that’s the activation date. It doesn’t happen immediately and because it is a “Speedy” Trial it could also fail quickly and that is in August, around August 11th. If the difficulty period after that or before that, I always forget, doesn’t reach the goal, I think it is after…

AvW: The difficulty period must have ended by August 11th right?

SP: When August 11th is passed it could still activate but then the next difficulty period, it cannot. I think the rule is at the end of the difficulty period you start counting and if the result is a failure then if it is after August 11th you give up but if it is not yet August 11th you enter the next round.

AvW: If the first block of a new difficulty period is mined on August 10th will that difficulty period still count?

SP: That’s right. I think that’s one of the subtle changes made to BIP 9 to make BIP 9 easier to reason about. I think it used to be the other way around where you first check the date but if it was past the date you would give up but if it was before the date you would still count. Now I think it is the other way round, it is a bit simpler.
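
(Note: as described here, the subtle change is the order in which the vote count and the timeout are checked. A minimal sketch of that ordering, illustrative only and glossing over the fact that the real rules use median-time-past.)

```python
# Illustrative ordering of the timeout check relative to the vote count,
# not the actual implementation.

def old_style(timeout_passed, votes, threshold):
    # Check the deadline first: once the timeout has passed, give up,
    # even if the period that just ended had enough signaling.
    if timeout_passed:
        return "FAILED"
    return "LOCKED_IN" if votes >= threshold else "STARTED"

def speedy_trial_style(timeout_passed, votes, threshold):
    # Count first: the period that straddles the deadline can still lock in;
    # only a failed count after the timeout ends the deployment.
    if votes >= threshold:
        return "LOCKED_IN"
    return "FAILED" if timeout_passed else "STARTED"
```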

AvW: I see. There is going to be a signaling window of about 3 months.

SP: That’s right.

AvW: If in any difficulty period within that signaling window of 3 months 90 percent of hash power is reached Taproot will activate in November of this year.

SP: Yes

AvW: I think that covers Speedy Trial.

SP: The threshold is 90 percent as we said. Normally with BIP 9 it is 95 percent, it has been lowered to 90.
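
(Note: putting the numbers from this episode together, the finalized Bitcoin Core 0.21.1 parameters look roughly like this. The field names are illustrative; the values are the ones mentioned in the conversation.)

```python
# Speedy Trial parameters as discussed in this episode (illustrative field names).
speedy_trial = {
    "signal_bit": 2,
    "threshold": 0.90,                 # 90% of a 2016-block retargeting period
    "start_time": "2021-04-24",        # signaling starts at the first retarget after this
    "timeout_time": "2021-08-11",      # roughly three months of two-week voting rounds
    "min_activation_height": 709_632,  # ~November 12th 2021: rules take effect here
}
```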

AvW: What happens if the threshold is not met?

SP: Nothing. Which means anything could still happen. People could deploy new versions of software, try another bit etc

AvW: I just wanted to clarify that.

SP: It does not mean that Taproot is cancelled.

AvW: If the threshold isn’t met then this specific software client will just do nothing but Bitcoin Core developers and the rest of the Bitcoin community can still figure out new ways to activate Taproot.

SP: Exactly. It is a low cost experiment. If it wins we get Taproot. If not then we have some more information as to why we don’t…

AvW: I also want to clarify: we don’t know what that next attempt is going to look like yet. That would have to be figured out then. We could start figuring it out now but it hasn’t been decided yet what that deployment would look like.

Alternative to Bitcoin Core (Bitcoin Core 0.21.0-based Taproot Client)

Update on Taproot activation releases: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018790.html

AvW: Another client was also launched. There is a lot of debate on the name. We are going to call it the LOT=true client.

SP: That sounds good to me.

AvW: It derives from this technical and philosophical difference about how soft forks should be activated in the first place. This client uses, you guessed it, LOT=true. LOT=true means that if the signaling window is over, by the end of the signaling window nodes will start to reject any blocks that don’t signal. They only accept blocks that signal. That is the main difference. Let’s get into the specifics of the LOT=true client.

SP: In the beginning it is the same, in principle it is the same. There is a theoretical possibility that it is not if miners do something really crazy. It starts at a certain block height…

AvW: We just mentioned that Bitcoin Core 0.21.1 starts its signaling window on the first difficulty period after April 24th. This LOT=true client will also in practice start its signaling window on the first difficulty period after April 24th except that April 24th isn’t specified specifically. They just picked the specific block height that is expected to be the first one after April 24th.

SP: Exactly, they picked block 681,408.

AvW: It is specified as a block height instead of indirectly through using a date.

SP: But in all likelihood that is going to be the exact same moment. Both Speedy Trial (Core) and the LOT=true client will start the signaling, the voting periods at the same time. The voting periods themselves, they vote on the same bit. They both vote on bit 2. They both have a threshold of 90 percent. Also if the vote is true then it also has a delayed activation. The delayed activation is a block height in both scenarios in both the Speedy Trial (Core) and the LOT=true variant.

AvW: Both are November 12th, a November activation anyway. If miners signal readiness for Taproot within the Speedy Trial period both of these clients will activate Taproot in November on that exact same date, exact same block.

SP: So in that sense they are identical. But they are also different.

AvW: Let’s get into the first big difference. We already mentioned one difference which is the very subtle difference between starting it at height, the thing we just mentioned. Let’s get into a bigger difference.

SP: There is also a height for the timeout in this LOT=true client and that is also a block height. That could make a slight difference, especially when it is a long time period. At the beginning, whether you use block height or a date you can guess very accurately, but if it is a year ahead then you can’t. And this is almost two years ahead, this block height (762048, approximately November 10th 2022) that they have in there. It goes on much longer.

AvW: Two years from now you mean, well one and a half.

SP: Exactly. In that sense it doesn’t really matter that they are using height because it is such a big difference anyway. But this is important. They will keep signaling much longer than the Speedy Trial. We can get into the implications later but basically they will signal much later.

AvW: Let’s stick to the facts first and the implications later. Speedy Trial (Bitcoin Core), it will last for 3 months. And this one, the LOT=true client will allow signaling for 18 months.

SP: The other big difference is that at the end of that 18 months, where the Speedy Trial will simply give up and continue, the LOT=true will wait for miners who do signal. This could be nobody or could be everybody.

AvW: They will only accept signaling blocks after these 18 months. For those who are aware of the whole block size war, it is a little bit like the BIP 148 client.

SP: It is pretty much the same with slightly better tolerance. The UASF client required every single block to signal whereas this one requires 90 percent to signal. In practice if you are the miners at the last 10 percent of that window you need to pay a bit more attention. Other than that it is the same.
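
(Note: a rough sketch of the forced-signaling behaviour being described, using the heights mentioned in this episode. This only illustrates the idea; it is not the actual LOT=true client code and the tolerance bookkeeping is simplified.)

```python
# Illustrative sketch of LOT=true forced signaling near the timeout height.
START_HEIGHT = 681_408     # first signaling period (expected to match Core's start)
TIMEOUT_HEIGHT = 762_048   # ~November 10th 2022
PERIOD = 2016
MAX_NON_SIGNALING = PERIOD - int(0.90 * PERIOD)   # the roughly 10% tolerance

def in_must_signal_period(height):
    """The final retargeting period before the timeout height."""
    return TIMEOUT_HEIGHT - PERIOD <= height < TIMEOUT_HEIGHT

def block_acceptable(height, signals_bit_2, non_signaling_seen_this_period):
    if not in_must_signal_period(height) or signals_bit_2:
        return True
    # Unlike BIP 148, which required every block to signal, roughly 10% of the
    # blocks in the final period may omit the signal before rejection kicks in.
    return non_signaling_seen_this_period < MAX_NON_SIGNALING
```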

AvW: That is why some people would call this the UASF client. The BIP 148 client was the UASF client for SegWit, this is the UASF client for Taproot. I know that for example Luke Dashjr, who has been contributing to this client, doesn’t like the term UASF in this context because there is 18 months of regular miner signaling.

SP: So did the UASF. It is a bit more patient than the UASF.

AvW: There is a lot of discussion on the name of the client and what people should call it or not call it. In general some people have been calling it the UASF client and this is why.

SP: You could call it the “Slow UASF” or something.

Implications of having two alternative clients

AvW: I have also seen the name User Enforced Miner Activated Soft Fork (UEMASF). People are coming up with names. The basic facts are clear now I hope. Let’s get into the implications. There are some potential incompatibilities between these two activation clients. Everyone agrees that Taproot is great. Everyone wants Taproot. Everyone agrees that it would be preferable if miners activate it. The only thing there is some disagreement on is what the backup plan is. That is where the incompatibilities come in. Do you agree?

SP: I think so.

AvW: What are the incompatibilities? First of all and I already mentioned this, to emphasize this, if Speedy Trial activates Taproot there are no incompatibilities. Both clients are happily using Taproot starting in November. This seems pretty likely because 90 percent of mining pools have already indicated that they support Taproot. Likely there is no big deal here, everything will turn out fine. If Speedy Trial doesn’t succeed in activating Taproot that is where we enter a phase where we are going to start to look at potential incompatibilities.

SP: For sure. Imagine one scenario where Speedy Trial fails. Probably Bitcoin Core people will think about that for a while and think about some other possibilities. For some reason miners get wildly enthusiastic right after Speedy Trial fails and start signaling at 90 percent. As far as Bitcoin Core is concerned Taproot never activated. As far as the UASF or LOT=true client is concerned, Taproot did just activate.

AvW: Let’s say in month 4, we have 3 months of Speedy Trial and then in month 4 miners suddenly signal readiness for Taproot. Bitcoin Core doesn’t care anymore, Bitcoin Core 0.21.1 isn’t looking at the signaling anymore. But this LOT=true client is. On the LOT=true client Taproot will be activated in November while on this Bitcoin Core client it will not.
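
(Note: to spell out the divergence in this hypothetical, the two clients would answer “is Taproot active?” differently once a late lock-in happens. A toy model, purely illustrative, with made-up function names.)

```python
# Toy model of the "month 4" scenario: miners only reach 90% after Core's
# Speedy Trial has timed out, but within the LOT=true client's window.
ACTIVATION_HEIGHT = 709_632

def core_0_21_1_thinks_active(locked_in_during_speedy_trial, height):
    # Bitcoin Core 0.21.1 stops looking at signaling after ~August 11th 2021.
    return locked_in_during_speedy_trial and height >= ACTIVATION_HEIGHT

def lot_true_client_thinks_active(locked_in_before_block_762048, height):
    # The LOT=true client keeps counting signals until block 762,048 (~Nov 2022).
    return locked_in_before_block_762048 and height >= ACTIVATION_HEIGHT

# Month-4 lock-in: locked_in_during_speedy_trial = False but
# locked_in_before_block_762048 = True, so from block 709,632 onwards the two
# clients disagree about whether the Taproot rules are being enforced.
```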

SP: Then of course if you are using that LOT=true client and you start immediately using Taproot at that moment because you are very excited, you see all these blocks coming in, you may or may not lose your money. Anybody who is running the regular Bitcoin Core client will accept those thefts from Taproot addresses essentially.

AvW: In this case it matters what miners are doing as well. If miners signal readiness because they are actually ready and they are actually going to enforce Taproot then it is fine. There is no issue because they will enforce the soft fork and even Bitcoin Core 0.21.1 nodes will follow this chain. The LOT=true client will enforce and everybody is happy on the same chain. The only scenario where this is a problem, what you just described, is if miners do signal readiness but they aren’t actually going to enforce the Taproot rules.

SP: The problem is of course in general with soft forks, but especially if everyone is not on exactly the same page about what the rules are, you only know that it is enforced when it is actually enforced. You don’t know if it is going to be enforced in the future. This will create a conundrum for everybody else because then the question is what to do? One thing you could do at that point is say “Obviously Taproot is activated so let’s just release a new version of Bitcoin Core that just says retroactively it activated.” It could just be a BIP 9 soft fork repeating the same bit but slightly later or it could just say “We know it activated. We will just hardcode the flag date.”

AvW: It could just be a second Speedy Trial. Everything would work in that case?

SP: There is a problem with reusing the same bit number within a short period of time. (Note: AJ Towns stated on IRC that this would only be a problem if multiple soft forks were being deployed in parallel.) Because it would be exactly the week after in the scenario we talked about it may not be possible to use the same bit. Then you will have a problem because you can’t actually check that specific bit but there’s no signal on any of the other bits. That would create a bit of a headache. The other solution would be very simple, to say it apparently activated so we will just hardcode the block date and activate it then. The problem is what if between the time that the community decides “Let’s do this” and the moment that software is released and somewhat widely deployed one or more miners say “Actually we are going to start stealing these Taproot coins.” You get a massive clusterf*** in terms of agreeing on the chain. Now miners will not be incentivized to do this because why would you deliberately create complete chaos if you just signaled for a soft fork? But it is a very scary situation and it might make it scary to make the release. If you do make the release but miners start playing these shenanigans what do you do then? Do you accept a huge re-org at some point? Or do you give up and consider it not deployed? But then people lose their money and you’ve released a client that you now have to hard fork from technically. It is not a good scenario.

AvW: It gets complicated in scenarios like this, also with game theory and economics. Even if miners would choose to steal they risk stealing coins on a chain that could be re-orged. They have just mined a chain that could be re-orged if other miners do enforce these Taproot rules. It gets weird, it is a discussion about economic incentives and game theory in that scenario. Personally I think it is pretty unlikely that something like this would happen but it is at least technically possible and it is something to be aware of.

SP: It does make you wonder whether as a miner it is smart to signal immediately after this Speedy Trial. This LOT=true client allows for two years anyway. If the only reason you are signaling is because this client exists then I would strongly suggest not doing it immediately after the Speedy Trial. Maybe wait a little bit until there is some consensus about what to do next.

AvW: One thing you mentioned and I want to quickly address that, this risk always exists for any soft fork. Miners can always false signal, they could have done it with SegWit for example, false signal and then steal coins from SegWit outputs. Old nodes wouldn’t notice the difference. That is always a risk. I think the difference here is that Bitcoin Core 0.21.1 users in this scenario might think they are running a new node, from their perspective they are running an updated node. They are running the same risks as previously only outdated nodes would run.

SP: I would be mostly worried about the potential 0.21.2 users who are installing the successor to Speedy Trial which retroactively activates Taproot perhaps. That group is very uncertain of what the rules are.

AvW: Which group is this?

SP: If the Speedy Trial fails and then it is signaled, there might be a new release and people would install that new release, but then it is not clear whether that new release would be safe or not. That very new release would be the only one that would actually think that Taproot is active, as well as the LOT=true client. But now we don’t know what the miners are running and we don’t know what the exchanges are running because this is very new. This would be done in a period of weeks. Right now we have a 6 month… I guess the activation date would still be November.

AvW: It would still be November so there is still room to prepare in that case.

SP: Ok, then I guess what I said before is nonsense. The easier solution would be to do a flag date where the new release would say “It is going to activate on November 12th or whatever that block height is without any signaling.” Signaling exists but people have different interpretations about it. That could be a way.

Recap

AvW: I am still completely clear about what we are talking about it here but I am not sure if our listeners are catching up at this point. Shall we recap? If miners activate during the Speedy Trial then everything is fine, everyone is in consensus.

SP: And the new rules take effect in November.

AvW: If miners activate after the Speedy Trial period then there is a possibility that the LOT=true client and the Bitcoin Core 0.21.1 client won’t be in consensus if an invalid Taproot block is ever mined.

SP: They have no forced signaling, you’re right. If an invalid Taproot transaction shows up after November 12th….

AvW:…. and if that is mined and enforced by a majority of miners, a majority of miners must have false signaled, then the two clients can get out of consensus. Technically this is true, I personally think it is fairly unlikely. I am not too concerned about this but it is at least technically true and something people should be aware of.

SP: That scenario could be prevented by saying “If we see this “false” signaling, if we see this massive signaling a week after the Speedy Trial then you could decide to release a flag date client which just says we are going to activate this November 12th because apparently the miners want this. Otherwise we have no idea what to make of this signal.”

AvW: I find it very hard to predict what Bitcoin Core developers in this case are going to decide.

SP: I agree but this is one possibility.

A likelier way that the two clients could be incompatible?

AvW: That was one way the two clients can become incompatible potentially. There is another way which is maybe more likely or at least it is not as complicated.

SP: The other one is “Let’s imagine that the Speedy Trial fails and the community does not have consensus over how to proceed next.” Bitcoin Core developers can see that, there is ongoing discussion and nobody agrees. Maybe Bitcoin Core developers decide to wait and see.

AvW: Miners aren’t signaling…

SP: Or erratically etc. Miners aren’t signaling. The discussion still goes on. Nothing happens. Then this LOT=true mechanism kicks in…

AvW: After 18 months. We are talking about November 2022, it is a long way off but at some point the LOT=true mechanism will kick in.

SP: Exactly. Those nodes will then, assuming that the miners are still not signaling, they will stop…

AvW: That is if there are literally no LOT=true signaling blocks.

SP: In the other scenario where miners do massively start signaling, now we are back to that previous scenario where suddenly there is a lot of miner signaling on bit 2. Maybe the soft fork is active but now there is no delay. If the signaling happens anywhere after November 12th the LOT=true client will activate Taproot after one adjustment period.

AvW: I am not sure I am following you.

SP: Let’s say in this case in December of this year the miners suddenly start signaling. After the minimum activation height. In December they all start signaling. The Bitcoin Core client will ignore it but the LOT=true client will say “Ok Taproot is active.”

AvW: This is the same scenario we just discussed? There is only a problem if there is a false signaling. Otherwise it is fine.

SP: There is a problem if there is false signaling but it is more complicated to resolve this one because that option of just releasing a new client with a flag day in it that is far enough into the future, that is not longer there. It is potentially active immediately. If you do a release but then suddenly a miner starts not enforcing the rules you get this confusion that we talked about earlier. Then we are able to solve it by just making a flag date. This would be even messier. Maybe it is also even less likely.

AvW: It is pretty similar to the previous scenario but a little more difficult, less obvious how to solve this.

SP: I think it is messier because it is less obvious how you would do a flag day release in Bitcoin Core in that scenario because it immediately activates.

AvW: That is not where I wanted to go with this.

SP: You wanted to go for some scenario where miners wait all the way to the end until they start signaling?

AvW: Yes that is where I wanted to go.

SP: This is where the mandatory signaling kicks in. If there is no mandatory signaling then the LOT=true nodes will stop until somebody mines a block that they’d like to see, a block that signals. If they do see this block that signals we are back in the previous example where suddenly the regular Bitcoin Core nodes see this signaling but they ignore it. Now there is a group of nodes that believe that Taproot is active and there is a group of nodes that don’t. Somebody then has to decide what to do with it.

AvW: You are still talking about false signaling here?

SP: Even if the signaling is genuine you still want there to be a Bitcoin Core release, probably, that actually says “We have Taproot now.” But the question is when do we have Taproot according to that release? What is a safe date to put in there? You could do it retroactively.

AvW: Whenever they want. The point is if miners are actually enforcing the new rules then the chain will stay together. It is up to Bitcoin Core to implement whenever they feel like it.

SP: The problem with this signaling is you don’t know if it is active until somebody decides to try to break the rules.

AvW: My assumption was that there wasn’t false signaling. They’ll just create the longest chain with the valid rules anyway.

SP: The problem with that is that it is unknowable.

AvW: The scenario I really wanted to get to Sjors is the very simple scenario where the majority of miners doesn’t signal when the 18 months are up. In that case they are going to create the longest chain that the Bitcoin Core 0.21.1 nodes are going to follow while the LOT=true nodes are only going to accept blocks that do signal which maybe zero or at least fewer. If it is a majority then there is no split. But if it is not a majority then we have a split.

SP: And that chain would get further and further behind. The incentive to make a release to account for that would be quite small I think. It depends, this is where your game theory comes in. From a safety point of view, if you now make a release that says “By the way we retroactively consider Taproot active” that would cause a giant re-org. If you just activate it that wouldn’t cause a giant re-org. But if you say “By the way we are going to retroactively mandate that signaling that you guys care about” that would cause a massive re-org. This would be unsafe, that would not be something that would be released probably. That is a very messy situation.

AvW: There are messy potential scenarios. I want to emphasize to our dear listeners, none of this is going to happen in the next couple of months.

SP: And hopefully never. We will throw in a few other bad scenarios and then I guess we can go onto some other topics.

AvW: I want to mention real quick is the reason I’m not too concerned about these really bad scenarios playing out is because I think if it seems even slightly likely that there might be a coin split or anything like that there will probably be futures markets. These futures markets will probably make it very clear to everyone the alternative chain that stands a chance, that will inform miners on what to mine and prevent a split that way. I feel pretty confident about the collective wisdom of the market to warn everyone about potential scenarios so it will probably work out fine. That’s my general perception.

SP: The problem with this sort of stuff is if it doesn’t work out fine it is really, really bad. Then we get to say retroactively “I guess it didn’t work out fine.”

Development process of LOT=true client

AvW: I want to bring something up before you bring up whatever you wanted to bring up. I have seen some concerns by Bitcoin Core developers about the development process of the LOT=true client. I think this gets down to the Gitian building, Gitian signing which we also discussed in another episode.

SP: We talked about the need for software to be open source, to be easy to audit.

AvW: Can you give your view on that in this context?

SP; The change that they made relative to the main Bitcoin Core client is not huge. You can see it on GitHub. In that sense that part of open source is reasonably doable to verify. I think that code has had less review but not zero review.

AvW: Less than Bitcoin Core’s?

SP: Exactly but much more than the UASF, much more than the 2017 UASF.

AvW: More than that?

SP: I would say. The idea has been studied a bit longer. But the second problem is how do you know that what you are downloading isn’t malware. There are two measures there. There is release signatures, the website explains pretty well how to check those. I think they were signed by Luke Dashjr and by the other developer. You can check that.

AvW: Bitcoin Mechanic is the other developer. Actually it is released by Bitcoin Mechanic and Shinobi and Luke Dashjr is the advisor, contributor.

SP: Usually there is a binary file that you download and then there is a file with checksums in it and that file with checksums is also signed by a known person. If you have Luke’s key or whoever, their key and you know them, you can check that at least the binary you downloaded is not coming from a hacked website. The second thing, you just have a binary and you know they signed it but who are they? The second thing is you want to check that this code matches the binary and that is where Gitian building comes in which we talked about in an earlier episode. Basically deterministic builds. It takes the source code and it produces the binary. Multiple people can then sign that indeed according to them this source code produces this binary. The more people that confirm that the more likely it is that they are not colluding. I think there are only two Gitian signatures for this other release.

AvW: So the Bitcoin Core software is being Gitian signed by…

SP: I think 10 or 20 people.

AvW: A lot of the experienced Bitcoin Core developers that have been developing the Bitcoin Core software for a while. Including you? Did you sign it?

SP: The most recent release, yes.

AvW: You are trusting that they are not all colluding and spreading malware. It still comes down to trust in that sense for most people.

SP: If you are really contemplating running this alternative software you really should know what you are doing in terms of all these re-org scenarios. If you already know what you are doing in those terms then just compile the thing from source. Why not? If you are not able to compile things from source you probably shouldn’t be running this. But that is up to you. I am not worried that they are shipping malware but in general it is just a matter of time before somebody says “I have a different version with LOT=happy and please download it here” and it steals all your Bitcoin. It is more the precedent this is setting that I’m worried about than that this thing might actually have malware in it.

AvW: That is fair. Maybe sign it Sjors?

SP: No because I don’t think this is a sane thing to release.

AvW: Fair enough.

SP: That’s just my opinion. Everyone is free to run whatever they want to run.

AvW: Was there anything else you wanted to bring up?

What would Bitcoin Core release if Speedy Trial failed to activate?

SP: Yeah we talked about true signaling or false signaling on bit 1 but a very real possibility I think if this activation fails and we want to try something else then we probably don’t want to use the same bit if it is before the timeout window. That could create a scenario where you might start saying “Let’s use another bit to do signaling.” Then you could get some confusion where there is a new Bitcoin Core release that activates using bit 3 for example but the LOT=true folks don’t see it because they are looking at bit 1. That may or may not be an actual problem. The other thing is that there could be all sorts of other ways to activate this thing. One could be a flag day. If Bitcoin Core were to release a flag day then there won’t be any signaling. The LOT=true client won’t know that Taproot is active and they will demand signaling at some point even though Taproot is already active.

AvW: Your point being that we don’t know what Bitcoin Core will release after Speedy Trial and what they might release might not necessarily be compatible with the LOT=true client. That works both ways of course.

SP: Sure. I am just reasoning from one point here. I would also say that in the event that Bitcoin Core releases something else that has pretty wide community support, I would imagine the people who are running the BIP 8 clients are not sitting in a cave somewhere. They are probably relatively active users that can decide “I am going to run this Bitcoin Core version again because there is a flag day in it which is earlier than the forced signaling.” I could imagine they would decide to run it or not.

AvW: That also works both ways.

SP: No, not really. I am much more worried about people who are not following this discussion who just default to whatever the newest version of Core is. Or don’t upgrade at all, they are still running say Bitcoin Core v0.15. I am much more worried about that group than about the group that actively takes a stance on this thing. If you actively take a stance by running something else then you know what you are doing. It is up to you to stay up to date. But we have a commitment towards all the users that if you are still running in your bunker the 0.15 version of Bitcoin Core that nothing bad should happen to you if you follow the most proof of work within the rules that you know.

AvW: That could also mean making it compatible with the LOT=true client.

SP: No, as far as the v0.15 node is concerned there is no LOT=true client.

AvW: Do we want to get into all sorts of scenarios? The scenario that I am most concerned about is the LOT=true chain to call it that if there is ever a split will win but only after a while because you get long re-orgs. This gets back to the LOT=true versus LOT=false discussion in the first place.

SP: I can only see that happening with a massive price collapse of Bitcoin itself. If the scenario comes to be where LOT=true starts winning after a delay which requires a big re-org… if it is more likely to win its relative price will go up because it is more likely to win. But because a bigger re-org is more disastrous for Bitcoin the longer the re-org the lower the Bitcoin price. That would be the bad scenario. If there is a 1000 block re-org or more then I think the Bitcoin price will collapse to something very low. We don’t really care whether the LOT=true client wins or not. That doesn’t matter anymore.

AvW: I agree with that. The reason I am not concerned is what I mentioned before, I think these things will be sorted out by futures markets well before it actually happens.

SP: I guess the futures market would predict exactly that. That would not be good. Depending on your confidence in futures markets which for me is not that amazing.

Block height versus MTP

https://github.com/bitcoin/bitcoin/pull/21377#issuecomment-818758277

SP: We could still talk about this nitty difference between block height versus block time. There was a fiasco but I don’t think it is an interesting difference.

AvW: We might as well mention it.

SP: When we first described the Speedy Trial we assumed everything would be based on block height. There would be a transformation from the way soft forks work right now which is based on these median times to block heights which is conceptually simpler. Later on there was some discussion between the people who were working on that, considering maybe the only Speedy Trial difference should be the activation height and none of the other changes. From the point of the view of the existing codebase it is easier to make the Speedy Trial adjust one parameter which is a minimum activation height versus the change where you change everything into block heights which is a bigger change from the existing code. Even though the end result is easier. A purely block height based approach is easier to understand, easier to explain what it is going to do, when it is going to do it. Some edge cases are also easier. But to stay closer to the existing codebase is easier for reviewers somewhat. The difference is pretty small so I think some people decided on a coin toss and other people I think agreed without the coin toss.

Media: https://www.youtube.com/watch?v=SHmEXPvN6t4

Previous episode on lockinontimeout (LOT): https://btctranscripts.com/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout/

Previous episode on Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/

Aaron van Wirdum on “There are now two Taproot activation clients, here’s why”: https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why

Intro

Aaron van Wirdum (AvW): Live from Utrecht this is the van Wirdum Sjorsnado.

Sjors Provoost (SP): Hello

AvW: Sjors, today we have a “lot” more to discuss.

SP: We’ve already made this pun.

AvW: We’ve already made it twice I think. It doesn’t matter. We are going to discuss the final implementation details of Speedy Trial today. We have already covered Speedy Trial in a previous episode. This time we are also going to contrast it to the LOT=true client which is an alternative that has been released by a couple of community members. We are going to discuss how they compare.

SP: That sounds like a good idea. We also talked about Taproot activation options in general in a much earlier episode.

AvW: One of the first ones?

SP: We also talked about this idea of this cowboy mentality where somebody would eventually release a LOT=true client whatever you do.

AvW: That is where we are.

SP: We also predicted correctly that there would be a lot of bikeshedding.

AvW: Yes, this is also something we will get into. First of all as a very brief recap, we are talking about Taproot activation. Taproot is a proposed protocol upgrade for compact and potentially privacy preserving smart contracts on the Bitcoin protocol. Is that a good summary?

SP: Yeah I think so.

AvW: The discussion on how to upgrade has been going on for a while now. The challenge is that on a decentralized open network like Bitcoin without a central dictator to tell everyone what to run when, you are not going to get everyone to upgrade at the same time. But we do want to keep the network in consensus somehow or other.

SP: Yeah. The other thing that can work in a distributed system is some sort of convention, ways that you are used to doing things. But unfortunately the convention we had ran into problems with the SegWit deployment. Then the question is "Should we try something else or was it just a freak accident and we should try the same thing again?"

AvW: I think the last preface before we really start getting into Speedy Trial, I’d like to point out the general idea with a soft fork, a backwards compatible upgrade which Taproot is, is that if a majority of hash power is enforcing the new rules that means that the network will stay in consensus.

SP: Yeah. We can repeat that if you keep making transactions that are pre-Taproot then those transactions are still valid. In that sense, as a user you can ignore soft forks. Unfortunately if there is a problem you cannot ignore that as a user even if your transactions don’t use Taproot.

AvW: I think everyone agrees that it is very nice if a majority of hash power enforces the rules. There are coordination mechanisms to measure how many miners are on board with an upgrade. That is how you can coordinate a fairly safe soft fork. That is something everyone agrees on. Where people start to disagree is on what happens if miners don't actually cooperate with this coordination. We are not going to rehash all of that. There are previous episodes about that. What we are going to explain is that in the end the Bitcoin Core development community settled on a solution called "Speedy Trial." We already mentioned that as well in a previous episode. Now it is finalized and we are going to explain what the finalized parameters are for this.

SP: There was a slight change.

AvW: Let’s hear it Sjors. What are the finalized parameters for Speedy Trial? How is Bitcoin Core going to upgrade to Taproot?

Bitcoin Core finalized activation parameters

Bitcoin Core 0.21.1 release notes: https://github.com/bitcoin/bitcoin/blob/0.21/doc/release-notes.md

Speedy Trial activation parameters merged into Core: https://github.com/bitcoin/bitcoin/pull/21686

SP: Starting on, I think it is this Sunday (April 25th, midnight), the first time the difficulty readjusts, that happens every two weeks, probably one week after Sunday…

AvW: It is Saturday

SP:… the signaling starts. In about two weeks the signaling starts, no earlier than one week from now.

AvW: Just to be clear, that’s the earliest it can start.

SP: The earliest it can start is April 24th but because it only starts at a new difficulty adjustment period, a new retargeting period, it probably won’t start until two weeks from now.

AvW: It will start at the first new difficulty period after April 24th which is estimated I think somewhere early May. May 4th, may the fourth be with you Sjors.

SP: That would be a cool date. That is when the signaling starts and the signaling happens in you could say voting rounds. A voting round is two weeks or one difficulty adjustment period, one retargeting period. If 90 percent of the blocks in that voting period signal on bit number 2, if that happens Taproot is locked in. Locked in means that it is going to happen, imagine the little gif with Ron Paul, “It’s happening.” But the actual Taproot rules won’t take effect immediately, they will take effect at block number 709632.

AvW: Which is estimated to be mined when?

SP: That will be November 12th this year.

AvW: That is going to differ a bit of course based on how fast blocks are going to be mined over the coming months. It is going to be November almost certainly.

SP: Which would be 4 years after the SegWit2x effort imploded.

AvW: Right, nice timing in that sense.

SP: Every date is nice. That’s what the Speedy Trial does. Every two weeks there is a vote, if 90 percent of the vote is reached then that’s the activation date. It doesn’t happen immediately and because it is a “Speedy” Trial it could also fail quickly and that is in August, around August 11th. If the difficulty period after that or before that, I always forget, doesn’t reach the goal, I think it is after…

AvW: The difficulty period must have ended by August 11th right?

SP: When August 11th is passed it could still activate but then the next difficulty period, it cannot. I think the rule is at the end of the difficulty period you start counting and if the result is a failure then if it is after August 11th you give up but if it is not yet August 11th you enter the next round.

AvW: If the first block of a new difficulty period is mined on August 10th will that difficulty period still count?

SP: That’s right. I think that’s one of the subtle changes made to BIP 9 to make BIP 9 easier to reason about. I think it used to be the other way around where you first check the date but if it was past the date you would give up but if it was before the date you would still count. Now I think it is the other way round, it is a bit simpler.

AvW: I see. There is going to be a signaling window of about 3 months.

SP: That’s right.

AvW: If in any difficulty period within that signaling window of 3 months 90 percent of hash power is reached Taproot will activate in November of this year.

SP: Yes

AvW: I think that covers Speedy Trial.

SP: The threshold is 90 percent as we said. Normally with BIP 9 it is 95 percent, it has been lowered to 90.

AvW: What happens if the threshold is not met?

SP: Nothing. Which means anything could still happen. People could deploy new versions of software, try another bit etc

AvW: I just wanted to clarify that.

SP: It does not mean that Taproot is cancelled.

AvW: If the threshold isn’t met then this specific software client will just do nothing but Bitcoin Core developers and the rest of the Bitcoin community can still figure out new ways to activate Taproot.

SP: Exactly. It is a low cost experiment. If it wins we get Taproot. If not then we have some more information as to why we don’t…

AvW: I also want to clarify. We don’t know what it is going to look like yet. That will have to be figured out then. We could start figuring it out now but that hasn’t been decided yet, what the deployment would look like.

Alternative to Bitcoin Core (Bitcoin Core 0.21.0-based Taproot Client)

Update on Taproot activation releases: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018790.html

AvW: Another client was also launched. There is a lot of debate on the name. We are going to call it the LOT=true client.

SP: That sounds good to me.

AvW: It derives from this technical and philosophical difference about how soft forks should be activated in the first place. This client uses, you guessed it, LOT=true. LOT=true means that by the end of the signaling window nodes will start to reject any blocks that don't signal. They only accept blocks that signal. That is the main difference. Let's get into the specifics of the LOT=true client.

SP: In the beginning it is the same, in principle it is the same. There is a theoretical possibility that it is not if miners do something really crazy. It starts at a certain block height…

AvW: We just mentioned that Bitcoin Core 0.21.1 starts its signaling window on the first difficulty period after April 24th. This LOT=true client will also in practice start its signaling window on the first difficulty period after April 24th except that April 24th isn’t specified specifically. They just picked the specific block height that is expected to be the first one after April 24th.

SP: Exactly, they picked block 681,408.

AvW: It is specified as a block height instead of indirectly through using a date.

SP: But in all likelihood that is going to be the exact same moment. Both Speedy Trial (Core) and the LOT=true client will start the signaling, the voting periods at the same time. The voting periods themselves, they vote on the same bit. They both vote on bit 2. They both have a threshold of 90 percent. Also if the vote is true then it also has a delayed activation. The delayed activation is a block height in both scenarios in both the Speedy Trial (Core) and the LOT=true variant.

AvW: Both are November 12th, a November activation anyway. If miners signal readiness for Taproot within the Speedy Trial period both of these clients will activate Taproot in November on that exact same date, exact same block.

SP: So in that sense they are identical. But they are also different.

AvW: Let’s get into the first big difference. We already mentioned one difference which is the very subtle difference between starting it at height, the thing we just mentioned. Let’s get into a bigger difference.

SP: There is also a height for the timeout in this LOT=true client and that is also a block height. That could make a slight difference, especially over a long time period. At the beginning, whether you use a block height or a date you can guess very accurately, but if it is a year ahead then you can't. And this is almost two years ahead, this block height (762048, approximately November 10th 2022) that they have in there. It goes on much longer.

AvW: Two years from now you mean, well one and a half.

SP: Exactly. In that sense it doesn’t really matter that they are using height because it is such a big difference anyway. But this is important. They will keep signaling much longer than the Speedy Trial. We can get into the implications later but basically they will signal much later.

AvW: Let’s stick to the facts first and the implications later. Speedy Trial (Bitcoin Core), it will last for 3 months. And this one, the LOT=true client will allow signaling for 18 months.

SP: The other big difference is that at the end of that 18 months, where the Speedy Trial will simply give up and continue, the LOT=true will wait for miners who do signal. This could be nobody or could be everybody.

AvW: They will only accept signaling blocks after these 18 months. For those who are aware of the whole block size war, it is a little bit like the BIP 148 client.

SP: It is pretty much the same with slightly better tolerance. The UASF client required every single block to signal whereas this one requires 90 percent to signal. In practice if you are the miners at the last 10 percent of that window you need to pay a bit more attention. Other than that it is the same.
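In other words, the difference between the 2017 BIP 148 client and this one is just the tolerance during mandatory signaling. A toy sketch, assuming a 2016-block period and the 90 percent threshold mentioned above, so that at most 201 non-signaling blocks would be tolerated per period:

```python
PERIOD = 2016
THRESHOLD = 1815                   # 90 percent of 2016, rounded up
TOLERANCE = PERIOD - THRESHOLD     # 201 non-signaling blocks per period

def bip148_accepts(block_signals: bool) -> bool:
    # The 2017 UASF client: every block in the enforcement window must signal.
    return block_signals

def lot_true_accepts(non_signaling_so_far: int, block_signals: bool) -> bool:
    # This client: a non-signaling block is still accepted as long as fewer than
    # 10 percent of the period has already failed to signal, which is why miners
    # near the end of a period have to pay more attention.
    return block_signals or non_signaling_so_far < TOLERANCE
```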

AvW: That is why some people would call this the UASF client. The BIP 148 client was the UASF client for SegWit, this is the UASF client for Taproot. I know that for example Luke Dashjr, who has been contributing to this client, doesn't like the term UASF in this context because there is 18 months of regular miner signaling.

SP: So did the UASF. It is a bit more patient than the UASF.

AvW: There is a lot of discussion on the name of the client and what people should call it or not call it. In general some people have been calling it the UASF client and this is why.

SP: You could call it the “Slow UASF” or something.

Implications of having two alternative clients

AvW: I have also seen the name User Enforced Miner Activated Soft Fork (UEMASF). People are coming up with names. The basic facts are clear now I hope. Let's get into the implications. There are some potential incompatibilities between these two activation clients. Everyone agrees that Taproot is great. Everyone wants Taproot. Everyone agrees that it would be preferable if miners activate it. The only thing there is some disagreement on is what the backup plan should be. That is where the incompatibilities come in. Do you agree?

SP: I think so.

AvW: What are the incompatibilities? First of all and I already mentioned this, to emphasize this, if Speedy Trial activates Taproot there are no incompatibilities. Both clients are happily using Taproot starting in November. This seems pretty likely because 90 percent of mining pools have already indicated that they support Taproot. Likely there is no big deal here, everything will turn out fine. If Speedy Trial doesn’t succeed in activating Taproot that is where we enter a phase where we are going to start to look at potential incompatibilities.

SP: For sure. Imagine one scenario where Speedy Trial fails. Probably Bitcoin Core people will think about that for a while and think about some other possibilities. For some reason miners get wildly enthusiastic right after Speedy Trial fails and start signaling at 90 percent. As far as Bitcoin Core is concerned Taproot never activated. As far as the UASF or LOT=true client Taproot did just activate.

AvW: Let’s say in month 4, we have 3 months of Speedy Trial and then in month 4 miners suddenly signal readiness for Taproot. Bitcoin Core doesn’t care anymore, Bitcoin Core 0.21.1 isn’t looking at the signaling anymore. But this LOT=true client is. On the LOT=true client Taproot will be activated in November while on this Bitcoin Core client it will not.

SP: Then of course if you are using that LOT=true client and you start immediately using Taproot at that moment because you are very excited, you see all these blocks coming in, you may or may not lose your money. Anybody who is running the regular Bitcoin Core client will accept those thefts from Taproot addresses essentially.

AvW: In this case it matters what miners are doing as well. If miners signal readiness because they are actually ready and they are actually going to enforce Taproot then it is fine. There is no issue because they will enforce the soft fork and even Bitcoin Core 0.21.1 nodes will follow this chain. The LOT=true client will enforce and everybody is happy on the same chain. The only scenario where this is a problem, what you just described, is if miners do signal readiness but they aren’t actually going to enforce the Taproot rules.

SP: The problem is of course in general with soft forks, but especially if everyone is not on exactly the same page about what the rules are, you only know that it is enforced when it is actually enforced. You don’t know if it is going to be enforced in the future. This will create a conundrum for everybody else because then the question is what to do? One thing you could do at that point is say “Obviously Taproot is activated so let’s just release a new version of Bitcoin Core that just says retroactively it activated.” It could just be a BIP 9 soft fork repeating the same bit but slightly later or it could just say “We know it activated. We will just hardcode the flag date.”

AvW: It could just be a second Speedy Trial. Everything would work in that case?

SP: There is a problem with reusing the same bit number within a short period of time. (Note: AJ Towns stated on IRC that this would only be a problem if multiple soft forks were being deployed in parallel.) Because it would be exactly the week after in the scenario we talked about it may not be possible to use the same bit. Then you will have a problem because you can’t actually check that specific bit but there’s no signal on any of the other bits. That would create a bit of a headache. The other solution would be very simple, to say it apparently activated so we will just hardcode the block date and activate it then. The problem is what if between the time that the community decides “Let’s do this” and the moment that software is released and somewhat widely deployed one or more miners say “Actually we are going to start stealing these Taproot coins.” You get a massive clusterf*** in terms of agreeing on the chain. Now miners will not be incentivized to do this because why would you deliberately create complete chaos if you just signaled for a soft fork? But it is a very scary situation and it might make it scary to make the release. If you do make the release but miners start playing these shenanigans what do you do then? Do you accept a huge re-org at some point? Or do you give up and consider it not deployed? But then people lose their money and you’ve released a client that you now have to hard fork from technically. It is not a good scenario.

AvW: It gets complicated in scenarios like this, also with game theory and economics. Even if miners would choose to steal they risk stealing coins on a chain that could be re-orged. They have just mined a chain that could be re-orged if other miners do enforce these Taproot rules. It gets weird, it is a discussion about economic incentives and game theory in that scenario. Personally I think it is pretty unlikely that something like this would happen but it is at least technically possible and it is something to be aware of.

SP: It does make you wonder whether as a miner it is smart to signal immediately after this Speedy Trial. This LOT=true client allows for two years anyway. If the only reason you are signaling is because this client exists then I would strongly suggest not doing it immediately after the Speedy Trial. Maybe wait a little bit until there is some consensus about what to do next.

AvW: One thing you mentioned and I want to quickly address that, this risk always exists for any soft fork. Miners can always false signal, they could have done it with SegWit for example, false signal and then steal coins from SegWit outputs. Old nodes wouldn’t notice the difference. That is always a risk. I think the difference here is that Bitcoin Core 0.21.1 users in this scenario might think they are running a new node, from their perspective they are running an updated node. They are running the same risks as previously only outdated nodes would run.

SP: I would be mostly worried about the potential 0.21.2 users who are installing the successor to Speedy Trial which retroactively activates Taproot perhaps. That group is very uncertain of what the rules are.

AvW: Which group is this?

SP: If the Speedy Trial fails and then it is signaled, there might be a new release and people would install that new release, but then it is not clear whether that new release would be safe or not. That very new release would be the only one that would actually think that Taproot is active, as well as the LOT=true client. But now we don’t know what the miners are running and we don’t know what the exchanges are running because this is very new. This would be done in a period of weeks. Right now we have a 6 month… I guess the activation date would still be November.

AvW: It would still be November so there is still room to prepare in that case.

SP: Ok, then I guess what I said before is nonsense. The easier solution would be to do a flag date where the new release would say “It is going to activate on November 12th or whatever that block height is without any signaling.” Signaling exists but people have different interpretations about it. That could be a way.

Recap

AvW: I am still completely clear about what we are talking about here but I am not sure if our listeners are catching up at this point. Shall we recap? If miners activate during the Speedy Trial then everything is fine, everyone is in consensus.

SP: And the new rules take effect in November.

AvW: If miners activate after the Speedy Trial period then there is a possibility that the LOT=true client and the Bitcoin Core 0.21.1 client won’t be in consensus if an invalid Taproot block is ever mined.

SP: They have no forced signaling, you’re right. If an invalid Taproot transaction shows up after November 12th….

AvW:…. and if that is mined and enforced by a majority of miners, a majority of miners must have false signaled, then the two clients can get out of consensus. Technically this is true, I personally think it is fairly unlikely. I am not too concerned about this but it is at least technically true and something people should be aware of.

SP: That scenario could be prevented by saying “If we see this “false” signaling, if we see this massive signaling a week after the Speedy Trial then you could decide to release a flag date client which just says we are going to activate this November 12th because apparently the miners want this. Otherwise we have no idea what to make of this signal.”

AvW: I find it very hard to predict what Bitcoin Core developers in this case are going to decide.

SP: I agree but this is one possibility.

A likelier way that the two clients could be incompatible?

AvW: That was one way the two clients can become incompatible potentially. There is another way which is maybe more likely or at least it is not as complicated.

SP: The other one is “Let’s imagine that the Speedy Trial fails and the community does not have consensus over how to proceed next.” Bitcoin Core developers can see that, there is ongoing discussion and nobody agrees. Maybe Bitcoin Core developers decide to wait and see.

AvW: Miners aren’t signaling…

SP: Or erratically etc. Miners aren’t signaling. The discussion still goes on. Nothing happens. Then this LOT=true mechanism kicks in…

AvW: After 18 months. We are talking about November 2022, it is a long way off but at some point the LOT=true mechanism will kick in.

SP: Exactly. Those nodes will then, assuming that the miners are still not signaling, they will stop…

AvW: That is if there are literally no LOT=true signaling blocks.

SP: In the other scenario where miners do massively start signaling, now we are back to that previous scenario where suddenly there is a lot of miner signaling on bit 2. Maybe the soft fork is active but now there is no delay. If the signaling happens anywhere after November 12th the LOT=true client will activate Taproot after one adjustment period.

AvW: I am not sure I am following you.

SP: Let’s say in this case in December of this year the miners suddenly start signaling. After the minimum activation height. In December they all start signaling. The Bitcoin Core client will ignore it but the LOT=true client will say “Ok Taproot is active.”

AvW: This is the same scenario we just discussed? There is only a problem if there is a false signaling. Otherwise it is fine.

SP: There is a problem if there is false signaling but it is more complicated to resolve this one because that option of just releasing a new client with a flag day in it that is far enough into the future, that is no longer there. It is potentially active immediately. If you do a release but then suddenly a miner starts not enforcing the rules you get this confusion that we talked about earlier. There we were able to solve it by just making a flag date. This would be even messier. Maybe it is also even less likely.

AvW: It is pretty similar to the previous scenario but a little more difficult, less obvious how to solve this.

SP: I think it is messier because it is less obvious how you would do a flag day release in Bitcoin Core in that scenario because it immediately activates.

AvW: That is not where I wanted to go with this.

SP: You wanted to go for some scenario where miners wait all the way to the end until they start signaling?

AvW: Yes that is where I wanted to go.

SP: This is where the mandatory signaling kicks in. If there is no mandatory signaling then the LOT=true nodes will stop until somebody mines a block that they’d like to see, a block that signals. If they do see this block that signals we are back in the previous example where suddenly the regular Bitcoin Core nodes see this signaling but they ignore it. Now there is a group of nodes that believe that Taproot is active and there is a group of nodes that don’t. Somebody then has to decide what to do with it.

AvW: You are still talking about false signaling here?

SP: Even if the signaling is genuine you still want there to be a Bitcoin Core release, probably, that actually says “We have Taproot now.” But the question is when do we have Taproot according to that release? What is a safe date to put in there? You could do it retroactively.

AvW: Whenever they want. The point is if miners are actually enforcing the new rules then the chain will stay together. It is up to Bitcoin Core to implement whenever they feel like it.

SP: The problem with this signaling is you don’t know if it is active until somebody decides to try to break the rules.

AvW: My assumption was that there wasn’t false signaling. They’ll just create the longest chain with the valid rules anyway.

SP: The problem with that is that it is unknowable.

AvW: The scenario I really wanted to get to Sjors is the very simple scenario where the majority of miners doesn't signal when the 18 months are up. In that case they are going to create the longest chain that the Bitcoin Core 0.21.1 nodes are going to follow while the LOT=true nodes are only going to accept blocks that do signal, which may be zero or at least fewer. If it is a majority then there is no split. But if it is not a majority then we have a split.

SP: And that chain would get further and further behind. The incentive to make a release to account for that would be quite small I think. It depends, this is where your game theory comes in. From a safety point of view, if you now make a release that says “By the way we retroactively consider Taproot active” that would cause a giant re-org. If you just activate it that wouldn’t cause a giant re-org. But if you say “By the way we are going to retroactively mandate that signaling that you guys care about” that would cause a massive re-org. This would be unsafe, that would not be something that would be released probably. That is a very messy situation.

AvW: There are messy potential scenarios. I want to emphasize to our dear listeners, none of this is going to happen in the next couple of months.

SP: And hopefully never. We will throw in a few other bad scenarios and then I guess we can go onto some other topics.

AvW: What I want to mention real quick is that the reason I'm not too concerned about these really bad scenarios playing out is because I think if it seems even slightly likely that there might be a coin split or anything like that there will probably be futures markets. These futures markets will probably make it very clear to everyone whether an alternative chain stands a chance, and that will inform miners on what to mine and prevent a split that way. I feel pretty confident about the collective wisdom of the market to warn everyone about potential scenarios so it will probably work out fine. That's my general perception.

SP: The problem with this sort of stuff is if it doesn’t work out fine it is really, really bad. Then we get to say retroactively “I guess it didn’t work out fine.”

Development process of LOT=true client

AvW: I want to bring something up before you bring up whatever you wanted to bring up. I have seen some concerns by Bitcoin Core developers about the development process of the LOT=true client. I think this gets down to the Gitian building, Gitian signing which we also discussed in another episode.

SP: We talked about the need for software to be open source, to be easy to audit.

AvW: Can you give your view on that in this context?

SP: The change that they made relative to the main Bitcoin Core client is not huge. You can see it on GitHub. In that sense that part of open source is reasonably doable to verify. I think that code has had less review but not zero review.

AvW: Less than Bitcoin Core’s?

SP: Exactly but much more than the UASF, much more than the 2017 UASF.

AvW: More than that?

SP: I would say. The idea has been studied a bit longer. But the second problem is how do you know that what you are downloading isn't malware? There are two measures there. There are release signatures, the website explains pretty well how to check those. I think they were signed by Luke Dashjr and by the other developer. You can check that.

AvW: Bitcoin Mechanic is the other developer. Actually it is released by Bitcoin Mechanic and Shinobi and Luke Dashjr is the advisor, contributor.

SP: Usually there is a binary file that you download and then there is a file with checksums in it and that file with checksums is also signed by a known person. If you have Luke's key or whoever, their key and you know them, you can check that at least the binary you downloaded is not coming from a hacked website. But then you just have a binary and you know they signed it, but who are they? The second thing is you want to check that this code matches the binary and that is where Gitian building comes in which we talked about in an earlier episode. Basically deterministic builds. It takes the source code and it produces the binary. Multiple people can then sign that indeed according to them this source code produces this binary. The more people that confirm that the more likely it is that they are not colluding. I think there are only two Gitian signatures for this other release.
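As a rough illustration of the checksum step described here, a minimal Python sketch that recomputes a downloaded file's SHA-256 and looks it up in the accompanying checksum file. The file names are hypothetical, and verifying the signature on the checksum file itself is a separate step done with a tool like GPG.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_checksum_file(binary_name: str, sums_path: str) -> bool:
    """True if the binary's digest appears next to its name in the checksum file,
    assuming the usual '<hex digest>  <file name>' line format."""
    digest = sha256_of(binary_name)
    with open(sums_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[1].lstrip("*") == binary_name:
                return parts[0] == digest
    return False

# Hypothetical usage:
# matches_checksum_file("bitcoin-0.21.0-x86_64-linux-gnu.tar.gz", "SHA256SUMS")
```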

AvW: So the Bitcoin Core software is being Gitian signed by…

SP: I think 10 or 20 people.

AvW: A lot of the experienced Bitcoin Core developers that have been developing the Bitcoin Core software for a while. Including you? Did you sign it?

SP: The most recent release, yes.

AvW: You are trusting that they are not all colluding and spreading malware. It still comes down to trust in that sense for most people.

SP: If you are really contemplating running this alternative software you really should know what you are doing in terms of all these re-org scenarios. If you already know what you are doing in those terms then just compile the thing from source. Why not? If you are not able to compile things from source you probably shouldn’t be running this. But that is up to you. I am not worried that they are shipping malware but in general it is just a matter of time before somebody says “I have a different version with LOT=happy and please download it here” and it steals all your Bitcoin. It is more the precedent this is setting that I’m worried about than that this thing might actually have malware in it.

AvW: That is fair. Maybe sign it Sjors?

SP: No because I don’t think this is a sane thing to release.

AvW: Fair enough.

SP: That’s just my opinion. Everyone is free to run whatever they want to run.

AvW: Was there anything else you wanted to bring up?

What would Bitcoin Core release if Speedy Trial failed to activate?

SP: Yeah we talked about true signaling or false signaling on bit 2 but a very real possibility I think if this activation fails and we want to try something else then we probably don't want to use the same bit if it is before the timeout window. That could create a scenario where you might start saying "Let's use another bit to do signaling." Then you could get some confusion where there is a new Bitcoin Core release that activates using bit 3 for example but the LOT=true folks don't see it because they are looking at bit 2. That may or may not be an actual problem. The other thing is that there could be all sorts of other ways to activate this thing. One could be a flag day. If Bitcoin Core were to release a flag day then there won't be any signaling. The LOT=true client won't know that Taproot is active and they will demand signaling at some point even though Taproot is already active.

AvW: Your point being that we don’t know what Bitcoin Core will release after Speedy Trial and what they might release might not necessarily be compatible with the LOT=true client. That works both ways of course.

SP: Sure. I am just reasoning from one point here. I would also say that in the event that Bitcoin Core releases something else that has pretty wide community support, I would imagine the people who are running the BIP 8 clients are not sitting in a cave somewhere. They are probably relatively active users that can decide “I am going to run this Bitcoin Core version again because there is a flag day in it which is earlier than the forced signaling.” I could imagine they would decide to run it or not.

AvW: That also works both ways.

SP: No, not really. I am much more worried about people who are not following this discussion who just default to whatever the newest version of Core is. Or don’t upgrade at all, they are still running say Bitcoin Core v0.15. I am much more worried about that group than about the group that actively takes a stance on this thing. If you actively take a stance by running something else then you know what you are doing. It is up to you to stay up to date. But we have a commitment towards all the users that if you are still running the 0.15 version of Bitcoin Core in your bunker, nothing bad should happen to you as long as you follow the chain with the most proof of work that is valid under the rules you know.

AvW: That could also mean making it compatible with the LOT=true client.

SP: No, as far as the v0.15 node is concerned there is no LOT=true client.

AvW: Do we want to get into all sorts of scenarios? The scenario that I am most concerned about is that the LOT=true chain, to call it that, will win if there is ever a split, but only after a while, so you get long re-orgs. This gets back to the LOT=true versus LOT=false discussion in the first place.

SP: I can only see that happening with a massive price collapse of Bitcoin itself. If the scenario comes to be where LOT=true starts winning after a delay which requires a big re-org… if it is more likely to win, its relative price will go up. But because a bigger re-org is more disastrous for Bitcoin, the longer the re-org the lower the Bitcoin price. That would be the bad scenario. If there is a 1000 block re-org or more then I think the Bitcoin price will collapse to something very low. Then we don’t really care whether the LOT=true client wins or not. That doesn’t matter anymore.

AvW: I agree with that. The reason I am not concerned is what I mentioned before, I think these things will be sorted out by futures markets well before it actually happens.

SP: I guess the futures market would predict exactly that. That would not be good. It depends on your confidence in futures markets, which for me is not that amazing.

Block height versus MTP

https://github.com/bitcoin/bitcoin/pull/21377#issuecomment-818758277

SP: We could still talk about this nitty difference between block height versus block time. There was a fiasco but I don’t think it is an interesting difference.

AvW: We might as well mention it.

SP: When we first described the Speedy Trial we assumed everything would be based on block height. There would be a transformation from the way soft forks work right now, which is based on these median times, to block heights, which is conceptually simpler. Later on there was some discussion between the people who were working on that, considering that maybe the only Speedy Trial difference should be the activation height and none of the other changes. From the point of view of the existing codebase it is easier to make the Speedy Trial adjust one parameter, a minimum activation height, versus changing everything into block heights, which is a bigger change from the existing code even though the end result is simpler. A purely block height based approach is easier to understand, easier to explain what it is going to do and when it is going to do it. Some edge cases are also easier. But staying closer to the existing codebase is somewhat easier for reviewers. The difference is pretty small, so I think some people decided on a coin toss and other people I think agreed without the coin toss.
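
For readers unfamiliar with the “median times” mentioned here: BIP 9 windows are bounded by a block’s median-time-past, the median of its previous 11 block timestamps, while the compromise that was chosen only adds a minimum activation height on top of that. A small illustrative sketch of the median-time-past calculation, under that assumption:

```python
# Median-time-past (MTP): the median of the previous 11 block timestamps.
# BIP 9 start and timeout are compared against this value rather than
# against a block's own timestamp or its height.
from statistics import median

def median_time_past(prev_11_timestamps: list[int]) -> int:
    return int(median(prev_11_timestamps))

# Eleven hypothetical timestamps spaced ten minutes apart:
print(median_time_past([1617000000 + 600 * i for i in range(11)]))
```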

AvW: There are arguments on both sides but they seem to be pretty subtle, pretty nuanced. Like we mentioned Speedy Trial is going to start on the same date anyway so it doesn’t seem to matter that much. At some point some developers were seriously considering deciding through a coin flip using the Bitcoin blockchain for that, picking a block in the near future and seeing if it ends with an even or odd number. I don’t know if that was literally what they did but that would be one way of doing it. I think they did do the coin flip but then after that the champions for both solutions ended up agreeing anyway.

SP: They agreed on the same thing that the coin flip said.

AvW: The main dissenter was Luke Dashjr who feels strongly about using block heights consistently. He also is of the opinion that the community had found consensus on that and that Bitcoin Core developers not using that is going back on or breaking community consensus.

SP: That is his perspective. If you look at the person who wrote the original pull request that was purely height based, I think that was Andrew Chow, he closed his own pull request in favor of the mixed solution that we have now. If the person writing the code removes it himself I think that’s pretty clear. From my point of view the people who are putting in the most effort should probably decide when it is something this trivial. I don’t think it matters that much.

AvW: It seems like a minor point to me but clearly not everyone agrees it is a minor point.

SP: That is what bikeshedding is about right? It wouldn’t be bikeshedding if everybody thought it was irrelevant which color the bike shed had.

AvW: Let’s leave the coin flip and the time, block height thing behind us Sjors because I think we covered everything and maybe we shouldn’t dwell on this last point. Is that it? I hope this was clear.

SP: I think we can very briefly still interject one thing that was brought up which is the timewarp attack.

AvW: We didn’t mention that, that is somewhat relevant in this context. An argument against using block time is that it opens the door to timewarp attacks where miners are faking the timestamps on the blocks they mine to pretend it is a different time and date. That way they can for example just skip the signaling period altogether, if they collude in doing that.

SP: That sounds like an enormous amount of effort for no good reason but it is an interesting scenario. We did an episode about the timewarp attack a long time ago, back when I understood it. There is a soft fork proposal to get rid of it that I don’t think anyone objected to but also nobody bothered to actually implement. One way to deal with this hypothetical scenario is if it were to happen then we deploy the soft fork against the timewarp attack first and then we try Taproot activation again.

AvW: The argument against that from someone like Luke is of course you can fix any bug but you can also just not include the bug in the first place.

SP: It is nice to know that miners would be willing to use it. If we know that miners are actually willing to exploit the timewarp attack that is incredibly valuable information. If they have a way to collude and a motivation to use that attack… The cost of that attack would be pretty low, it would be delaying Taproot by a few months but we would have this massive conspiracy unveiled. I think that is a win.

AvW: The way Luke sees it is that there was already consensus on all sorts of things, using BIP 8 and this LOT=true thing, he saw this as somewhat of a consensus effort. Using block times frustrates that, in his opinion. I don’t want to speak for him but if I am trying to channel Luke a little bit or explain his perspective that would be it. In his view consensus was already forming and now it is a different path.

SP: I don’t think this new approach blocks any of the LOT=true stuff that much. We went through all the scenarios here and the confusion wasn’t around block height versus time, it was on all sorts of things that could go wrong depending on how things evolved. But not that particular issue. As for consensus, consensus is in the eye of the beholder. I would say if multiple people disagree then there isn’t consensus.

AvW: That would also work the other way around. Where Luke disagrees with using block time.

SP: But he cannot say there was consensus on something. If people disagree by definition there wasn’t consensus.

AvW: It was my impression that there was no consensus because people disagree. Let’s wrap up. For our listeners that are confused and worried I am going to emphasize the next 3 months Speedy Trial is going to run on both clients. If miners activate through Speedy Trial we are going to have Taproot in November and everyone is going to be happy. We will continue the soft fork discussion with the next soft fork.

SP: We’ll have the same arguments all over again because we’ve learnt absolutely nothing.

Eric Lombrozo, Luke Dashjr

Date: August 3, 2020

Transcript By: Michael Folkson

Tags: Taproot, Soft fork activation

Category: Podcast

Media: https://www.youtube.com/watch?v=yQZb0RDyFCQ

Location: Bitcoin Magazine (online)

Aaron van Wirdum in Bitcoin Magazine on BIP 8, BIP 9 or Modern Soft Fork Activation: https://bitcoinmagazine.com/articles/bip-8-bip-9-or-modern-soft-fork-activation-how-bitcoin-could-upgrade-next

David Harding on Taproot activation proposals: https://gist.github.com/harding/dda66f5fd00611c0890bdfa70e28152d

Intro

Aaron van Wirdum (AvW): Eric, Luke welcome. Happy Bitcoin Independence Day. How are you doing?

Eric Lombrozo (EL): We are doing great. How are you doing?

AvW: I am doing fine thanks. Luke how are you?

Luke Dashjr (LD): OK. How are you?

AvW: Good thanks. It is cool to have you guys on Bitcoin Independence Day. Obviously both of you played a big role in the UASF movement. You were maybe two of the most prominent and influential supporters. This Bitcoin Independence Day, this August 1st thing from a couple of years ago, this was about SegWit activation, soft fork activation. This is getting relevant again because we are looking at a new soft fork that might be coming up, Taproot. Just in general the conversation of how to activate soft forks is getting started again. So what happened a couple of years ago is becoming relevant again.

EL: We don’t want to repeat what happened a few years ago. We want to do it better this time.

Previous soft fork activations

AvW: That’s what I want to discuss with you guys and why it is great to have you on. Let’s start with that Eric. A couple years ago, apparently something went wrong. Shall we first very briefly mention there was this process called BIP 9. Do you want to explain very briefly what that was and then you can get into why that was a problem or what went wrong?

EL: The first soft forks that were activated were with a flag date, just written into the code. Later on to make the transition more smooth miner signaling was incorporated. Then BIP 9 was this proposal that allowed several soft forks to be under activation at the same time with the idea that there would be extensibility to the protocol and there would be this whole process that we could use to add new features. But it turned out to be very messy and I am not sure that that process is sustainable.

AvW: To be clear the idea with BIP 9 was there is this soft fork, there is a change to the protocol, and the challenge then is to get the network upgraded to that change without splitting the network between upgraded and non-upgraded nodes. The idea was we’ll let activation coordination depend on hashpower. Once enough miners have signaled that they have upgraded the network recognizes this, all upgraded nodes recognize this and they start enforcing the new rules. The reason that is a good idea is because if most miners do this there is no risk of chain splits. Even unupgraded nodes will follow the “upgraded” version of the blockchain.

EL: It would be the longest chain. By default all the other clients would follow that chain automatically.

AvW: That is the benefit of it. But apparently something went wrong. Or at least you think so.

EL: The first few soft forks were really not political at all. There was no contention. SegWit was the first soft fork that really led to some controversy because of the whole block size stuff that happened before. It was a really contentious time in the Bitcoin space. The activation became politicized which was a really serious problem because BIP 9 was not designed to deal with politics. It was designed to deal with activation by miner signaling just for the sake of what you were talking about, so that the network doesn’t split. It is really a technical process to make sure that everyone is on the same page. The idea was that everyone was supposed to already be onboard for this before the release without any politics. It wasn’t any kind of voting system or anything like that. It was not designed or intended to be a voting system. Some people misunderstood it that way and some people took advantage of the confusion to make it seem like it was a voting process. It became very abused and miners started to manipulate the signaling process to mess with the Bitcoin price and other stuff. It became really, really messy. I don’t think we want to do it that way this time again.

AvW: This coordination mechanism, this was basically granted to miners in a way in order to make upgrades happen smoothly. Your perspective is that they started to abuse this right that they were given. Is that a good summary?

EL: Yes. It had a 95 percent threshold which was something that was actually pretty reasonable before when it was mostly just technical upgrades that were not politicized at all. It was a safety threshold where if 95 percent of the hashpower is signaling for it then almost for sure the network is not going to split. But really you only need more than the majority of hashpower in order for the chain to be the longest chain. You could lower that threshold and slightly increase the risk of a chain split. The 95 percent threshold, if more than 5 percent of the hashpower did not signal for it they would get a veto. They could veto the whole activation throughout. It gave a very small minority the ability to veto a soft fork. The default was that if it didn’t activate it would just not be locked in and it would fail. This was a problem because this was in the middle of the block size thing and everyone wanted a resolution to this. Failure wasn’t really an option.
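
The arithmetic behind that veto is simple; here is a hedged sketch using the mainnet numbers (a 2016-block retarget period and a 95 percent threshold, i.e. 1916 signaling blocks):

```python
# BIP 9 lock-in check, per retarget period: at least 95% of the 2016 blocks
# must signal. Anything over roughly 5% of hashpower withholding signaling
# can therefore block lock-in until the deployment times out.
PERIOD = 2016
THRESHOLD = 1916  # ceil(0.95 * 2016)

def locks_in(signaling_blocks: int) -> bool:
    return signaling_blocks >= THRESHOLD

print(locks_in(1916))  # True
print(locks_in(1915))  # False: just over 5% of miners vetoed this period
```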

AvW: Luke do you agree with this analysis? Is this how you see it?

LD: More or less yeah.

SegWit and BIP 148

AvW: At some point there was this solution to break the deadlock which was a UASF. This ultimately took shape in the form of BIP 148. Luke I think you were involved with the early process of deciding this. What was the idea behind BIP 148?

LD: I wasn’t really involved that early. I was actually opposed to it originally.

AvW: Why?

LD: I didn’t really analyze the implications fully.

AvW: Can you explain what BIP 148 did and why it was designed the way it was designed?

LD: It essentially took the decision back from the miners. “August 1st we are going to begin the process of activating this and that’s all there is to it.”

AvW: I think the specific way it did that was BIP 148 nodes would start rejecting blocks that didn’t actually signal support for SegWit. Is that right?

LD: Yes. That is how most user activated soft forks previously had been deployed. If the miners didn’t signal the new version their blocks would be invalid.
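
For illustration, a rough sketch of the BIP 148 rule being described, with the enforcement window expressed as approximate timestamps from memory (roughly August 1 to November 15, 2017, judged by median-time-past): inside that window a BIP 148 node treats any block that does not signal the SegWit bit as invalid.

```python
# Rough sketch of the BIP 148 rule: during the enforcement window, reject
# blocks that do not signal SegWit (version bit 1). Timestamps approximate.
START = 1501545600  # ~2017-08-01 00:00 UTC
END = 1510704000    # ~2017-11-15 00:00 UTC
SEGWIT_BIT = 1
TOP_BITS, TOP_MASK = 0x20000000, 0xE0000000

def bip148_valid(block_version: int, median_time_past: int) -> bool:
    if not (START <= median_time_past < END):
        return True  # outside the window the extra rule does not apply
    return ((block_version & TOP_MASK) == TOP_BITS
            and (block_version & (1 << SEGWIT_BIT)) != 0)
```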

AvW: There is a nuanced difference between actually activating SegWit itself and enforcing SegWit or enforcing signaling for SegWit right?

LD: Previous soft forks had pretty much done both all the time. Before BIP 9 it was an incremental version number. The miners had to have the correct version number or their blocks were invalid. When version 3 was released all blocks version 2 and before became invalid.

AvW: At some point you did start supporting BIP 148? Why was that?

LD: At one point there was enough of the community that were saying “We are going to do BIP 148 no matter how many other people are onboard.” From that point forward once it was a sizable minority the only way to avoid a chain split was to go all in.

EL: It was all or nothing at that point.

AvW: How big does a minority like that need to be? Is there any way to measure this or think about this or reason about this? What made you realize that there were enough users supporting this?

LD: It really comes down to not so much the number of people but their relevance to the economy. All these people that are going to do BIP 148, can you just ignore them and economically pressure them or cut them off from everyone else? Or is that not viable anymore?

AvW: How do you make that decision? How do you tell the difference between non-viable and viable?

EL: At this point what we did was really try to talk to a lot of exchanges and a lot of other people in the ecosystem to assess their level of support. A lot of users were very much in favor of it but when it became more clear that also a lot of big economic nodes in the system were going to be using BIP 148 it became clear it was do or die, it was all or nothing. At that point it was get everyone onboard or this is not going to work.

LD: Ironically one of the earliest BIP 148 supporters was BitPay who were actually relevant back then.

AvW: Were they?

LD: I think they lost relevance soon after then. But back then they were more of a big deal.

AvW: You mentioned this do or die situation, what are the risks of die? What could go wrong with something like BIP 148?

EL: At this point we were pretty sure that if people wanted to fork they were going to fork. There was a big push to the BCH thing. It was “Ok if people want to fork off let them fork off.” Wherever the economy goes that is where people are going to gravitate towards. We believed that they would gravitate towards using BIP 148, insisting that everyone else did because that was in everyone’s best economic interest. That is what happened. It was a risk but I think it was a calculated risk. I think there would have been a chain split no matter what. The question was how can we make sure that the chain split has the least amount of economic repercussions for the people that want to stay on the SegWit chain.

AvW: To be clear about that, BIP 148 could have also led to a chain split and possibly a re-org. It was a risk in that sense.

EL: Sure, we were in uncharted territory. I think the theory was pretty sound but there is always a risk. I think at this point the fact that there was a risk actually was part of the motivation for people to want to run BIP 148 because the more people that did that the lower the risk would be.

AvW: That was an interesting game theoretic advantage that this BIP had. There is disagreement about this to this day, what do you guys think BIP 148 actually did? Some people said it did nothing in the end. Miners were just the ones who upgraded the protocol. How do you see this?

LD: It was very clearly BIP 148 that got SegWit activated. If it hadn’t been it would’ve been very clear to everyone because the blocks would’ve been rejected by BIP 148 nodes. There is no way you could have 100 percent miner signaling without BIP 148.

AvW: Eric do you agree with that?

EL: I think BIP 148 played a huge role but I think there were a lot of factors that were very important. For instance the fact that SegWit2x was going on and the whole New York agreement was going on. I think that rallied people to want to support a user activated soft fork even more. It is kind of ironic. Had the whole SegWit2x thing not been happening people might have been more complacent and said “Let’s hold off a little bit and wait and see what happens.” I think this pressured everyone to take action. It seemed like there was an imminent threat so people needed to do something. That was the moment I think the game theory really started to work out because then it would be possible to cross that threshold where it is do or die, the point of no return.

AvW: Let me rephrase the question a little bit. Do you guys think SegWit would’ve been activated if BIP 148 hadn’t happened? Would we have SegWit today?

EL: It is hard to say. I think that eventually it might have activated. At that point people wanted it and it was a decisive moment where people wanted a resolution. The longer it dragged on the more uncertainty there was. Uncertainty in the protocol is not good for the network. There could have been other issues that arose from that. I think the timing happened to be the right moment, the right place at the right time for this to happen. Could it have happened later? Possibly. But I think it would have been much more risky.

LD: This conference today would probably be celebrating SegWit’s final activation instead of Taproot.

AvW: One last question. To sum up this part of Bitcoin history, what were the lessons of this episode? What are we taking with us from this period of Bitcoin history moving forward?

EL: I think it is very important that we try to avoid politicizing this kind of stuff. Ideally we don’t want the protocol base layer to be changing very much. Anytime there is a change to that it introduces an attack vector. Right away people could try to insert vulnerabilities or exploits or try to split the community or create social engineering attacks or stuff like that. Anytime you open the door to changing the rules you are opening yourself up to attacks. We are at a point now where it is impossible to upgrade the protocol without having some kind of process. But the more we try to scale this the more that it tends to become political. I am hoping that Taproot does not create the same kind of contention and it does not become politicized. I don’t think it is a good precedent to set that these things are controversial. At the same time I don’t want this to be a regular process. I don’t want to make a habit out of activating soft forks all the time. I do think Taproot is a very important addition to Bitcoin and it seems to have a lot of community support. It would be really nice to include it now. I think if we wait it is going to get harder to activate it later.

BIP 8 as a possible improvement to BIP 9

AvW: You mentioned that 2017 was this period of controversy, the scaling civil war, whatever you want to call it. Was there really a problem with BIP 9 or was it just a controversial time and by now BIP 9 would be fine again?

EL: I think the problem with BIP 9 was that it was very optimistic: it assumed people would play nice and cooperate. BIP 9 does not work in an uncooperative scenario very well at all. It defaults to failing. I don’t know if that is something we want to do in the future because if it fails that means there is still controversy, it did not get resolved. With BIP 8 it does have a flag date where it does have to activate by a certain time. It defaults to activation whereas BIP 9 defaulted to not activating. It removes the veto power from miners. I think that BIP 9 is very problematic. In the best of circumstances it could possibly work but it is just too easy to attack.

LD: And it creates an incentive for that attack.

AvW: What you are saying is that even in a period where there is no civil war going on, no huge controversy, using something like BIP 9 would actually invite controversy. Is that what you are concerned about?

EL: Possibly yes.

LD: Because now the miners can hold the soft fork hostage that has presumably already been agreed on by the community. If it hasn’t been agreed on by the community we shouldn’t be deploying it with any activation, period.

AvW: Luke you have been working on BIP 8 which is an alternative way to activate soft forks. Can you explain what BIP 8 is?

LD: Rather than an alternative I look at it more of taking BIP 9 and fixing the bugs in it.

AvW: How are the bugs fixed? What does BIP 8 do?

LD: The most obvious bug showed up with BIP 9 when we went to do BIP 148. Originally it had been set for November activation, not August. Then people realized that if the hashpower was too low we could force signaling during November and it still wouldn’t activate because the time would lapse too quickly. This was because BIP 9 used timestamps, real world time, to determine when the BIP would expire.

AvW: Because blocks can be mined faster or slower so the real world time can get you in trouble.

LD: If blocks mine too slowly then the timeout would occur before the difficulty period had finished, which was one of the requirements for activation. The most obvious bug that BIP 8 fixed was to use heights instead of time. That way if the blocks slowed down the timeout would be later as well. BIP 148 worked around this by moving the mandatory signaling period up to August, which was very fast. I think everyone agreed on that; it was an unfortunate necessity due to that bug. The other one is, as we were talking about, that BIP 9 creates an incentive for miners to block the soft fork. If it activates after the timeout then that incentive is gone. There are risks to that if a bug is found: we can’t back out of it once the activation is set. Of course we really should be finding bugs before we set the activation anyway so hopefully that won’t matter.
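
The fix described here boils down to what the timeout is compared against. A hedged sketch of the difference:

```python
# With BIP 9 a deployment expires once median-time-past passes a wall-clock
# timeout, so if blocks come in slowly the timeout can hit before the last
# full retarget period ends and mandatory signaling never gets its chance.
# With BIP 8 the timeout is a block height, so slow blocks push the
# deadline back along with them.
def bip9_expired(median_time_past: int, timeout_time: int) -> bool:
    return median_time_past >= timeout_time

def bip8_expired(block_height: int, timeout_height: int) -> bool:
    return block_height >= timeout_height
```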

AvW: Is it the case with BIP 8 that it is configurable? You can force signaling at the end or not. How does this work exactly?

LD: That was a more recent change now that we have got the activation topic back going again. It is designed so that you could deploy it with an activation that does timeout and abort and then later on change that flag to a UASF. If the UASF is set later, as long as it has sufficient community support to do the UASF, all the nodes that were configured without the UASF will still go along with it.

AvW: Who sets this flag? Is it embedded in a software release or is it something users can manually do? Or is it something you compile?

LD: That is an implementation detail. At the end of the day the users can modify the code or someone can provide a release that has that modified.

AvW: How would you do that? How would you like BIP 8 to be used?

LD: Because we shouldn’t be deploying any activation parameters without sufficient community support I think we should just set it upfront. There was Bitcoin Core UASF with BIP 148 code in it. I think it would be best to do all soft forks that way from now on and leave the vanilla Bitcoin Core releases without any soft fork activation.

AvW: That is an interesting perspective. Would Bitcoin Core not include any BIP 8 activation thing at all? It is completely embedded in alternative clients like the BIP 148 client and not in Bitcoin Core at all? With UASF this is the way to do it from now on?

LD: I think that would be best. Before 2017 I would’ve been hesitant that the community would actually go along with it but it worked fine with BIP 148 so I don’t see any reason not to continue it.

AvW: Shouldn’t Bitcoin Core in that case at least include BIP 8 without forced signaling so it will still enforce the soft fork and go along?

LD: Possibly. There is also a middle ground where it could detect that it has been activated.

AvW: Then what? Then users know that they should upgrade?

LD: It can activate at that point.

AvW: What is the benefit of detecting that it was upgraded?

LD: So that it can activate the rules and remain a full node. If you are not enforcing the most recent soft forks you are no longer a full node. That can be a security issue.

AvW: That is what I meant with BIP 8 without forced signaling. I thought that was the same thing.

LD: There is a pull request to BIP 8 that may do the same thing. I would have to look at that pull request, I am not sure it is quite identical yet.

Activating through alternative clients

AvW: Eric, what do you think of this idea of activating soft forks through alternative clients like BIP 148 did?

EL: It is probably a good idea. I think it is best for Bitcoin Core to remain out of the whole process. The less political Bitcoin Core is the better. The people that have worked on these BIPs for the most part don’t really want to get too involved in this public stuff. I think it is for the best. I think it would set a really horrible precedent for Bitcoin Core to be deploying protocol changes by itself. It is very important that the community support be there and be demonstrated outside of the Bitcoin Core project and that it be decisive. That way it doesn’t become politicized once it is deployed. I think it is important that there is enough support for it prior to deployment and it is pretty sure that it is going to happen. At that point there are no more decisions to make or anything like that, because any other decision that is added to the process just adds more potential points where people could try to add controversy.

AvW: What is the problem if Bitcoin Core activates soft forks or soft forks are implemented in the Bitcoin Core client?

EL: It sets a really bad precedent because Bitcoin Core has become the reference implementation. It is the most widely used node software. It is very dangerous, it is kind of like a separation of powers kind of thing. It would be really dangerous for Bitcoin Core to have the ability to implement these kinds of things and deploy them, especially under the radar. It is very important that this stuff be reviewed and that everyone gets a chance to look at it. I think that right now the people who are working on Bitcoin Core might be good, reliable, honest people but eventually it could get infiltrated by people who might not have the best of intentions. It could become dangerous at some point if it becomes a habit to do it that way.

AvW: Or Bitcoin Core developers could be coerced or get phone calls from three letter agencies.

LD: Especially if there is a precedent or even an appearance of this supposed power. It is going to cause people who are malicious to think they can pressure developers and attempt to do that even if it doesn’t work out.

AvW: The obvious downside is that if not enough people upgrade to this alternative client in this case, it could partition the network. It could split the network, it could cause the chaos that we discussed before with chain re-orgs. Is this not a risk you are concerned about?

EL: If it doesn’t have enough support prior then probably it shouldn’t be done at all. The support should be there and it should be very clear that there is almost unanimous agreement. A very huge amount of the community wants this before any actual code is deployed. At the point the code is deployed I think it would be pretty reasonable to expect that people would want to run it. Otherwise I don’t think it should be done at all.

AvW: But it does put a deadline for people to upgrade their software. People can’t be too lazy. People need to do something otherwise risks will emerge.

LD: That is true no matter what release it is in.

AvW: I want to put a hypothetical in front of you then. Let’s say this solution is chosen. There is this alternative client that includes the soft fork with forced activation in some way or another. It doesn’t actually get the support you guys are predicting it will get or hoping it will get and it does cause a chain split. Which version is Bitcoin in that case?

LD: The community would have to decide on that. It is not something that any one person can rule on or predict. Either the rest of the community will upgrade or the people who did upgrade will have no choice but to revert.

EL: At that point we are in uncharted territory in a lot of ways. We have to see if the incentives align for enough people to really want to get behind one particular version of Bitcoin or not. If there is a significant risk that it could split the network permanently in a way that does not lead to a decisive outcome then I would not support it. I think it is important that there is a theoretical high probability that the network will tend to converge. The strong economic nodes will tend to converge on one particular blockchain.

LD: Also keep in mind it is not enough to simply not upgrade with something like this. It would have to explicitly lock in a block that rejects Taproot. Otherwise there would be a risk of the Taproot chain overtaking their chain and replacing it.

AvW: What kind of timeline would you think in this regard? Luke I think I have seen you mention a year? Is that right?

LD: It is important that the full nodes upgrade not just the miners. From the date that the first release is made I think there needs to be at least 3 months before signaling can even begin. Once it begins I think maybe a year would be good. Not too long, not too short.

EL: I am thinking that a year might actually be a little bit long. There is a trade-off here obviously. The shorter the timeframe the more risk that there won’t be enough time for people to upgrade. But at the same time the longer this goes on the more uncertainty there is and that also causes problems. It is good to find the right balance. I think a year might be a little bit too long, I don’t think it is going to take that long. I would prefer this to happen more quickly. I would actually like this to happen as quickly as possible without causing a chain split, or at least in a way that reduces the risk of a chain split. That would be my preference.

LD: Don’t forget that miners can signal still and activate it sooner than one year.

EL: Sure. But there is also the veto thing and people possibly using the uncertainty to mess with the markets or other kind of stuff like that.

LD: I don’t know if there is an incentive to try to veto when it is just going to delay it a year.

EL: Yeah.

Modern Soft Fork Activation

AvW: The other perspective in this debate would be for example Matt Corallo’s Modern Soft Fork Activation. I assume that you are aware of that. I will explain it real quick myself. Modern Soft Fork Activation, the idea is that basically you use the old fashioned BIP 9 upgrade process for a year. Let miners activate it for a year. If it doesn’t work then developers will reconsider for 6 months, see if there was a problem with Taproot in this case after all, something that they had missed, some concern miners had with it. Review it for 6 months, and if after the 6 months it is found there was no actual problem and miners were delaying for whatever reason, then activation is redeployed with a hard deadline for activation after 2 years. What do you think of this?

LD: If there is possibly a problem we shouldn’t even get to that first step.

EL: I am not a big fan of this because I think there are too many questions raised. If it doesn’t feel decisive and there isn’t a decision that has been made then people are going to be totally confused. It would cause a lot of uncertainty. I think that this is something where we either need to be very aggressive about it and get everyone onboard and be decisive and say “Yes this is going to happen” or it is not worth doing at all. I don’t like this idea of seeing what happens in 6 months or whatever. We either decide right away that yes we are going to do it or no.

LD: It kind of invites controversy.

AvW: The idea though is that by laying out the plan out like this then there is still the guarantee that it will ultimately activate. It might take a little longer than a year but at the end of the road it will activate.

LD: You could do that with BIP 8, just not locking in on the timeout. Then setting the timeout later if you decide to do that.

AvW: There are variations. There are different ways of thinking about this. The real question I want to get at is what is the rush? Aren’t we in it for the long run? Aren’t we here planning to build something that will be here for 200 years? What does an extra year or two matter?

EL: I think for the testing and review of the actual Taproot proposal for sure it should have enough time. We should not rush that. We should not try to deploy anything until the developers that are working on it and are reviewing and testing it are confident that it is at a level where it is safe to deploy, independent of all the activation stuff. I think that the activation though should be quick. I don’t think it will take that long to bring people onboard. We are talking probably a few months at most to get everyone onboard if it is going to happen. If it is taking longer than that then chances are it doesn’t have enough support. We probably shouldn’t be doing it in the first place. If people can’t do it that quickly then I think the whole process is questionable. The more variables that we add to the process later the more it invites controversy and the more that people might just get more confused and think it is more uncertain. With this kind of stuff I think people are looking for decisiveness and looking for resolution. Keeping this uncertain and on hold for a very long period of time is not healthy for the network. The activation part I think should be quick. The deployment should take as long as is necessary to make sure the code actually is good. We should not be deploying code that has not been fully tested and reviewed. But once we decide that “Yes this is what should be done” then I think at that point it should be quick. Otherwise we shouldn’t do it at all.

LD: The decision of course being made by the community. I agree with Eric.

AvW: Another argument in favor of Matt’s perspective here is that the code should be good before you deploy it but maybe in reality in the real world a lot of people will only really start looking at the code, will only really start looking at the upgrade once it is actually out there. Once there is a software release that includes it. By not immediately enforcing the activation this allows realistically more time for review. What do you think?

LD: I think if people have something to review they should review it before it gets to that point, before there is any activation set. Ideally before it gets merged.

AvW: Eric do you see any merit in this argument?

EL: For sure the more eyes that look at it the better, but I think nothing would presumably get merged, deployed or released until enough eyeballs that are competent in this have been able to review it. Right now I think the people that are most motivated are the people that are working on it directly. There might be more eyeballs that look at it later. For sure once it is out there and once the activation has been set it will get more attention. But I don’t think that is the time that people should start reviewing. I think the review should be taking place before that.

LD: At that point if there was an issue found there is going to be a question of “Is it worth redoing all this just to fix that issue or should we just go ahead anyway?” If the issue is bad enough miners could activate simply because of the issue because they want to take advantage of it. We don’t want it to activate if there is a big issue. We need to be completely sure there are no issues before we get to the activation stage.

BIP 8 with forced signaling

AvW: Let me throw out another idea that has been circulating. What if we do a BIP 8 with forced signaling towards the end but give it a long time? After a while you can always speed it up with a new client that includes something forcing miners to signal sooner.

LD: With the current BIP 8 you can’t do that but Anthony Towns has a pull request that will hopefully fix that.

AvW: Do you see merit in this?

EL: I don’t really think it is a good idea. The more we can reduce variables once it has been deployed the better. I think that we should really be looking at getting all this stuff ironed out before. If it has not been done before then someone hasn’t done their job right, that is my way of looking at it. If things are done right then by the time it is out for activation there should be no controversy. It should be “Let’s get this thing out there and get it done.”

LD: That might be a reason to start with a 1 year timeout. Then we can move it to 6 months if that turns out to be too long.

EL: Then we are inviting more controversy potentially. Last time it was kind of a mess with having to deploy BIP 148 then BIP 91 and then all this other stuff to patch it. The less patches necessary there the better. I think it sets a bad precedent if it is not decisive. If the community has not decided to do this it should not be done at all. If it has decided to do it then it should be decisive and quick. I think that is the best precedent we can set. The more we delay stuff and the more there is controversy, it just invites a lot more potential for things to happen in the future that could be problematic.

AvW: I could throw another idea out there but I expect your answer will be the same. I will throw it out there anyway. I proposed this idea where you have a long BIP 8 period with forced signaling towards the end and you can speed it up later if you decide to. The opposite of that would be to have a long BIP 8 signaling period without forced signaling towards the end. Then at some point we are going to do forced signaling anyway. I guess your answer would be the same. You don’t like solutions that need to be patched along the way?

EL: Yeah.

Protocol ossification

AvW: Last question I think. A lot of people out there like protocol ossification. They would like it if Bitcoin, at least at some point in the future, proves unable to upgrade.

EL: I would like that.

AvW: Why would you like that?

EL: Because that removes the politics out of the protocol. As long as you can change stuff it opens the door to politics. All the constitutional forms of government, even a lot of Abrahamic religions are based on this idea that you have this thing that is a protocol that doesn’t change. Whenever there is a change there is some kind of schism or some kind of break in the system. When it comes to something where the network effect is such a huge component of this whole thing and splits can be problematic in terms of people not being able to trade between themselves, I think the less politics that are involved the better. People are going to become political about it. The higher the stakes the more that people are going to want to attack it. The less vectors of attack there are the safer. It would be nice if we could just get it so that there are no more changes to the protocol at the base layer. There are always going to be improvements that can be proposed because we always learn from hindsight. We could always say “We can make this better. We can do this sort of thing better.” The question is where do we draw the line. Where do we say “It is good enough and no more changes from now on”? Can that really be done? Is it possible for people to really decide that this is what the protocol is going to be? Then there is also another issue which is what if later on some issue is discovered that requires some protocol change to fix? Some kind of catastrophic bug or some vulnerability or something. At that point there could be an emergency measure to do it which might not set precedent for this kind of thing to become a regular thing. Anytime you invoke any kind of emergency measures it opens the door for attacks. I don’t know where that line is. I think that is a really difficult question.

AvW: I think Nick Szabo has described it as social scalability. That is what you are referring to. You think ossification would benefit social scalability.

LD: I think in the far future it can be considered but in Bitcoin’s current stage, where it is today, if Bitcoin stops improving that opens the door for altcoins to replace it and Bitcoin eventually just becoming irrelevant.

AvW: You say maybe at some point in the future?

LD: In the far future.

EL: It is too early to say.

AvW: Do you have any idea when that would be?

EL: I think anyone who is sure of anything like that is full of s***. I don’t think anyone really understands this problem well enough yet.

LD: There is too much we don’t understand about how Bitcoin should work. There are too many improvements that could be made. At this point it really seems like it should be out of the question.

AvW: Why would it be a problem? Let’s say this soft fork fails for some reason. It doesn’t activate. We learn in the next 2 years that Bitcoin can’t upgrade anymore and this is it. This is what we are going to live with. Why is this a problem? Or is this a problem?

EL: There are certain issues that exist with the protocol right now. Privacy is a huge one. There could be some other potential improvements to the crypto that allow for much more sophisticated compression or more privacy or other kind of stuff that would be significantly beneficial. If some other coin is able to launch that has these features it does have the issue of not having the same kind of founding story as Bitcoin and that kind of myth I think is necessary to create a movement like this. It creates incentives for the early players to… What has happened with the altcoins and all these ICOs and stuff like that, the founders basically end up having too much control over the protocol. That is an issue where I think Bitcoin really maintains its network effect just because of that. But there could be some technical improvement at some point that is so significant that Bitcoin really would be at a serious disadvantage if it did not incorporate something like that.

LD: At this point I think it is a given that there will be.

AvW: You started out by mentioning privacy. Isn’t this something you think could ultimately be solved well enough on second layers as the protocol is right now?

EL: Possibly. I am not sure. I don’t think anyone has a complete answer to this unfortunately.

LD: The definition of “good enough” may change as privacy invading technology improves. If governments, or not even governments, anyone, gets better at invading everyone’s privacy, then what we need in order to protect against that could very well raise the bar.

Taproot

AvW: We can discuss Taproot itself a little bit. Is that a good idea? What is great about Taproot? Why do you support it if you support it?

LD: It significantly simplifies what has to happen onchain. Right now we have got all these smart contract capabilities, which we have had since 2009, but in most cases you don’t need to have the smart contract if both parties see that it is going to end up a certain way. They can just both sign the single transaction and all the smart contract stuff can be bypassed. That is pretty much the main benefit of Taproot. You can bypass all the smart contract stuff in most cases. Then all the full nodes have cheaper verification, very little overhead.

EL: And all the transactions would look the same so nobody would be able to see what the contracts are. All the contracts would run offchain which would be a significant improvement for scalability and privacy. I think it is a win all round.

LD: Instead of everybody running the smart contract it is just participants.

EL: Which is the way it should’ve been in the beginning but I think that it took a while until people realized that that is what the script should be doing. Initially it was thought we could have these scripts that run onchain. This is the way it was done because I don’t think Satoshi thought this completely through. He just wanted to launch something. We have a lot of hindsight now that we didn’t have back then. Now it is obvious that really the blockchain is about authorizing transactions, it is not about processing the conditions of contracts themselves. That can all be done offchain and that can be done very well offchain. In the end the only thing is that the participants need to sign off that it did happen and that’s it. That is all that everyone really cares about. Everyone agreed so what is the big deal? If everyone agrees there is no big issue.

AvW: There doesn’t appear to be any downside to Taproot? Is there any downside to it, have you heard about any concern? I think there was an email on the mailing list a while ago with some concerns.

LD: I can’t think of any. There is no reason not to deploy it at least, I can’t think of any reason not to use it either. It does carry over the bias that SegWit had toward bigger blocks, but that is something that has to be considered independently. There is no reason to tie the features to the block size.

AvW: Blocks should still be smaller Luke, is that still your position?

LD: Yeah but that is independent of Taproot.

AvW: Are there any other soft forks you guys are excited about or that you could see be deployed on Bitcoin in the next couple of years?

LD: There is ANYPREVOUT which was formerly called NOINPUT, I think that is making good progress. There is CHECKTEMPLATEVERIFY which I still haven’t looked too much into but it seems like it has got significant community support building up at least. Once Taproot is deployed there will probably be a bunch of other improvements on top of that.

EL: After the last soft fork, when SegWit activated, I was so burnt out by the whole process. This thing was really a more than two year process. 2015 was really when this whole thing kind of started and it didn’t activate until August 1st 2017. That is more than two years of this kind of stuff going on. I don’t know if I have an appetite for a very protracted battle with this at all. I want to see what happens with Taproot first before weighing in on other soft forks. Taproot is the one that seems to have the most support right now as far as something that is a no brainer, this would be good to have. I would like to see that first. Once that activates then maybe I will have other ideas of what to do next. Right now I don’t know, it is too early.

AvW: What are your final thoughts on soft fork activation? What do you want to tell our viewers?

EL: I think it is really important that we set a good precedent here. I talked about three different categories of risks. The first risk category is just the technical stuff, making sure the code doesn’t have any bugs and stuff like that. The second one is with the activation methodology and making sure that the network doesn’t split. The third one is with precedent. Inviting potential attacks in the future by people exploiting the process itself. The third part is the one that I think is the least well understood. The first part is the part that is most understood even back in 2015. The second category was the one that the whole SegWit activation showed us a lot of things about although we were discussing BIP 8 and BIP 9. The category 3 risks right now I think are very unknown. That is my biggest concern. I would like to see that there isn’t any kind of precedent established where this kind of stuff could be exploited in the future. It is a really tough line to draw exactly how aggressive we should be with this and whether that sets something up for someone else in the future to do something bad. I think we can only learn by doing it and we have to take risks. We are already in uncharted territory with Bitcoin. Bitcoin is already a risky proposition to begin with. We have to take some risks. They should be calculated risks and we should learn as we go along and correct ourselves as quickly as possible. Other than that I don’t really know exactly what the process will end up looking like. We are learning a lot. Right now the category 3 stuff I think is the key stuff we are going to see if this thing actually happens. That is really important to consider.

AvW: Luke any final thoughts?

LD: Not really.


EL: At this point what we did was really try to talk to a lot of exchanges and a lot of other people in the ecosystem to assess their level of support. A lot of users were very much in favor of it but when it became more clear that also a lot of big economic nodes in the system were going to be using BIP 148 it became clear it was do or die, it was all or nothing. At that point it was get everyone onboard or this is not going to work.

LD: Ironically one of the earliest BIP 148 supporters was BitPay who were actually relevant back then.

AvW: Were they?

LD: I think they lost relevance soon after then. But back then they were more of a big deal.

AvW: You mentioned this do or die situation, what are the risks of die? What could go wrong with something like BIP 148?

EL: At this point we were pretty sure that if people wanted to fork they were going to fork. There was a big push to the BCH thing. It was “Ok if people want to fork off let them fork off.” Wherever the economy goes that is where people are going to gravitate towards. We believed that they would gravitate towards using BIP 148, insisting that everyone else did because that was in everyone’s best economic interest. That is what happened. It was a risk but I think it was a calculated risk. I think there would have been a chain split no matter what. The question was how can we make sure that the chain split has the least amount of economic repercussions for the people that want to stay on the SegWit chain.

AvW: To be clear, BIP 148 could have also led to a chain split and possibly a re-org. It was a risk in that sense.

EL: Sure, we were in uncharted territory. I think the theory was pretty sound but there is always a risk. I think at this point the fact that there was a risk actually was part of the motivation for people to want to run BIP 148 because the more people that did that the lower the risk would be.

AvW: That was an interesting game theoretic advantage that this BIP had. There is disagreement about this to this day, what do you guys think BIP 148 actually did? Some people said it did nothing in the end. Miners were just the ones who upgraded the protocol. How do you see this?

LD: It was very clearly BIP 148 that got SegWit activated. If it hadn’t been it would’ve been very clear to everyone because the blocks would’ve been rejected by BIP 148 nodes. There is no way you could have 100 percent miner signaling without BIP 148.

AvW: Eric do you agree with that?

EL: I think BIP 148 played a huge role but I think there were a lot of factors that were very important. For instance the fact that SegWit2x was going on and the whole New York agreement was going on. I think that rallied people to want to support a user activated soft fork even more. It is kind of ironic. Had the whole SegWit2x thing not been happening people might have been more complacent and said “Let’s hold off a little bit and wait and see what happens.” I think this pressured everyone to take action. It seemed like there was an imminent threat so people needed to do something. That was the moment I think the game theory really started to work out because then it would be possible to cross that threshold where it is do or die, the point of no return.

AvW: Let me rephrase the question a little bit. Do you guys think SegWit would’ve been activated if BIP 148 hadn’t happened? Would we have SegWit today?

EL: It is hard to say. I think that eventually it might have activated. At that point people wanted it and it was a decisive moment where people wanted a resolution. The longer it dragged on the more uncertainty there was. Uncertainty in the protocol is not good for the network. There could have been other issues that arose from that. I think the timing happened to be the right moment, the right place at the right time for this to happen. Could it have happened later? Possibly. But I think it would have been much more risky.

LD: This conference today would probably be celebrating SegWit’s final activation instead of Taproot.

AvW: One last question. To sum up this part of Bitcoin history, what were the lessons of this episode? What are we taking with us from this period of Bitcoin history moving forward?

EL: I think it is very important that we try to avoid politicizing this kind of stuff. Ideally we don’t want the protocol base layer to be changing very much. Anytime there is a change to that it introduces an attack vector. Right away people could try to insert vulnerabilities or exploits or try to split the community or create social engineering attacks or stuff like that. Anytime you open the door to changing the rules you are opening yourself up to attacks. We are at a point now where it is impossible to upgrade the protocol without having some kind of process. But the more we try to scale this the more that it tends to become political. I am hoping that Taproot does not create the same kind of contention and it does not become politicized. I don’t think it is a good precedent to set that these things are controversial. At the same time I don’t want this to be a regular process. I don’t want to make a habit out of activating soft forks all the time. I do think Taproot is a very important addition to Bitcoin and it seems to have a lot of community support. It would be really nice to include it now. I think if we wait it is going to get harder to activate it later.

BIP 8 as a possible improvement to BIP 9

AvW: You mentioned that 2017 was this period of controversy, the scaling civil war, whatever you want to call it. Was there really a problem with BIP 9 or was it just a controversial time and by now BIP 9 would be fine again?

EL: I think the problem with BIP 9 was that it was very optimistic: it assumed people would play nice and cooperate. BIP 9 does not work in an uncooperative scenario very well at all. It defaults to failing. I don’t know if that is something we want to do in the future because if it fails that means there is still controversy, it did not get resolved. With BIP 8 it does have a flag date where it does have to activate by a certain time. It defaults to activation whereas BIP 9 defaulted to not activating. It removes the veto power from miners. I think that BIP 9 is very problematic. In the best of circumstances it could possibly work but it is just too easy to attack.

LD: And it creates an incentive for that attack.

AvW: What you are saying is that even in a period where there is no civil war going on, no huge controversy, using something like BIP 9 would actually invite controversy. Is that what you are concerned about?

EL: Possibly yes.

LD: Because now the miners can hold the soft fork hostage that has presumably already been agreed on by the community. If it hasn’t been agreed on by the community we shouldn’t be deploying it with any activation, period.

AvW: Luke you have been working on BIP 8 which is an alternative way to activate soft forks. Can you explain what BIP 8 is?

LD: Rather than an alternative I look at it more of taking BIP 9 and fixing the bugs in it.

AvW: How are the bugs fixed? What does BIP 8 do?

LD: The most obvious bug showed up with BIP 9 when we went to do BIP 148. Originally BIP 148 had been set for November activation, not August. Then people realized that if the hashpower is too low we could force signaling during November and it still won’t activate because the time will lapse too quickly. This was because BIP 9 used timestamps, real world time, to determine when the BIP would expire.

AvW: Because blocks can be mined faster or slower so the real world time can get you in trouble.

LD: If blocks mine too slow then the timeout would occur before the difficulty period had finished, which was one of the requirements for activation. The most obvious bug that BIP 8 fixed was to use heights instead of time. That way if the blocks slowed down the timeout would be later as well. BIP 148 worked around this by moving the mandatory signaling period up to August, which was very fast. I think everyone agreed on that. It was an unfortunate necessity due to that bug. The other one is, as we were talking about, that it creates an incentive for miners to block the soft fork. If it activates after the timeout then that incentive is gone. There are risks to that if a bug is found. We can’t back out of it once the activation is set. Of course we really should be finding bugs before we set the activation anyway so hopefully that won’t matter.

AvW: Is it the case with BIP 8 that it is configurable? You can force signaling at the end or not. How does this work exactly?

LD: That was a more recent change now that we have got the activation topic back going again. It is designed so that you could deploy it with an activation that does timeout and abort and then later on change that flag to a UASF. If the UASF is set later, as long as it has sufficient community support to do the UASF, all the nodes that were configured without the UASF will still go along with it.

AvW: Who sets this flag? Is it embedded in a software release or is it something users can manually do? Or is it something you compile?

LD: That is an implementation detail. At the end of the day the users can modify the code or someone can provide a release that has that modified.

AvW: How would you do that? How would you like BIP 8 to be used?

LD: Because we shouldn’t be deploying any activation parameters without sufficient community support I think we should just set it upfront. There was Bitcoin Core UASF with BIP 148 code in it. I think it would be best to do all soft forks that way from now on and leave the vanilla Bitcoin Core releases without any soft fork activation.

AvW: That is an interesting perspective. Would Bitcoin Core not include any BIP 8 activation thing at all? It is completely embedded in alternative clients like the BIP 148 client and not in Bitcoin Core at all? With UASF this is the way to do it from now on?

LD: I think that would be best. Before 2017 I would’ve been hesitant that the community would actually go along with it but it worked fine with BIP 148 so I don’t see any reason not to continue it.

AvW: Shouldn’t Bitcoin Core in that case at least include BIP 8 without forced signaling so it will still enforce the soft fork and go along?

LD: Possibly. There is also a middle ground where it could detect that it has been activated.

AvW: Then what? Then users know that they should upgrade?

LD: It can activate at that point.

AvW: What is the benefit of detecting that it was upgraded?

LD: So that it can activate the rules and remain a full node. If you are not enforcing the most recent soft forks you are no longer a full node. That can be a security issue.

AvW: That is what I meant with BIP 8 without forced signaling. I thought that was the same thing.

LD: There is a pull request to BIP 8 that may do the same thing. I would have to look at that pull request, I am not sure it is identical quite yet.

Activating through alternative clients

AvW: Eric, what do you think of this idea of activating soft forks through alternative clients like BIP 148 did?

EL: It is probably a good idea. I think it is best for Bitcoin Core to remain out of the whole process. The less political Bitcoin Core is the better. The people that have worked on these BIPs for the most part don’t really want to get too involved in this public stuff. I think it is for the best. I think it would set a really horrible precedent for Bitcoin Core to be deploying protocol changes by itself. It is very important that the community support be there and be demonstrated outside of the Bitcoin Core project, and that it be decisive so it doesn’t become politicized once it is deployed. I think it is important that there is enough support for it prior to deployment and it is pretty sure that it is going to happen. At that point there are no more decisions to make or anything like that, because any other decision that is added to the process just adds more potential points where people could try to add controversy.

AvW: What is the problem if Bitcoin Core activates soft forks or soft forks are implemented in the Bitcoin Core client?

EL: It sets a really bad precedent because Bitcoin Core has become the reference implementation. It is the most widely used node software. It is very dangerous; it is kind of like a separation of powers thing. It would be really dangerous for Bitcoin Core to have the ability to implement these kinds of things and deploy them, especially under the radar. It is very important that this stuff be reviewed and that everyone gets a chance to look at it. I think that right now the people who are working on Bitcoin Core might be good, reliable, honest people but eventually it could get infiltrated by people who might not have the best of intentions. It could become dangerous at some point if it becomes a habit to do it that way.

AvW: Or Bitcoin Core developers could be coerced or get phone calls from three letter agencies.

LD: Especially if there is a precedent or even an appearance of this supposed power. It is going to cause people who are malicious to think they can pressure developers and to attempt to do that even if it doesn’t work out.

AvW: The obvious downside is that if not enough people upgrade to this alternative client in this case it could partition the network. It could split the network, it could cause the chaos that we discussed before with chain re-orgs. Is this not a risk you are concerned about?

EL: If it doesn’t have enough support prior then probably it shouldn’t be done at all. The support should be there and it should be very clear that there is almost unanimous agreement. A very huge amount of the community wants this before any actual code is deployed. At the point the code is deployed I think it would be pretty reasonable to expect that people would want to run it. Otherwise I don’t think it should be done at all.

AvW: But it does put a deadline for people to upgrade their software. People can’t be too lazy. People need to do something otherwise risks will emerge.

LD: That is true no matter what release it is in.

AvW: I want to put a hypothetical in front of you then. Let’s say this solution is chosen. There is this alternative client that includes the soft fork with forced activation in some way or another. It doesn’t actually get the support you guys are predicting it will get or hoping it will get and it does cause a chain split. Which version is Bitcoin in that case?

LD: The community would have to decide on that. It is not something that any one person can rule on or predict. Either the rest of the community will upgrade or the people who did upgrade will have no choice but to revert.

EL: At that point we are in uncharted territory in a lot of ways. We have to see if the incentives align for enough people to really want to get behind one particular version of Bitcoin or not. If there is a significant risk that it could split the network permanently in a way that does not lead to a decisive outcome then I would not support it. I think it is important that there is a theoretical high probability that the network will tend to converge. The strong economic nodes will tend to converge on one particular blockchain.

LD: Also keep in mind it is not enough to simply not upgrade with something like this. It would have to explicitly lock in a block that rejects Taproot. Otherwise there would be a risk of the Taproot chain overtaking their chain and replacing it.

AvW: What kind of timeline would you think in this regard? Luke I think I have seen you mention a year? Is that right?

LD: It is important that the full nodes upgrade not just the miners. From the date that the first release is made I think there needs to be at least 3 months before signaling can even begin. Once it begins I think maybe a year would be good. Not too long, not too short.

EL: I am thinking that a year might actually be a little bit long. There is a trade-off here obviously. The shorter the timeframe the more risk that there won’t be enough time for people to upgrade. But at the same time the longer this goes on the more uncertainty there is and that also causes problems. It is good to find the right balance. I think a year might be a little bit too long, I don’t think it is going to take that long. I would prefer this to happen more quickly. I would actually like this to happen as quickly as possible without causing a chain split, or at least while minimizing the risk of one. That would be my preference.

LD: Don’t forget that miners can signal still and activate it sooner than one year.

EL: Sure. But there is also the veto thing and people possibly using the uncertainty to mess with the markets or other kind of stuff like that.

LD: I don’t know if there is an incentive to try to veto when it is just going to delay it a year.

EL: Yeah.

Modern Soft Fork Activation

AvW: The other perspective in this debate would be for example Matt Corallo’s Modern Soft Fork Activation. I assume that you are aware of that. I will explain it real quick myself. Modern Soft Fork Activation, the idea is that basically you use the old-fashioned BIP 9 upgrade process for a year. Let miners activate it for a year. If it doesn’t work then developers will reconsider for 6 months, see if there was a problem with Taproot in this case after all, something that they had missed, some concern miners had with it. Review it for 6 months, and if after the 6 months it is found there was no actual problem and miners were delaying for whatever reason then activation is redeployed with a hard activation deadline at 2 years. What do you think of this?

LD: If there is possibly a problem we shouldn’t even get to that first step.

EL: I am not a big fan of this because I think there are too many questions raised. If it doesn’t feel decisive and there isn’t a decision that has been made then people are going to be totally confused. It is going to cause a lot of uncertainty. I think that this is something where we either need to be very aggressive about it and get everyone onboard and be decisive and say “Yes this is going to happen” or it is not worth doing at all. I don’t like this idea of seeing what happens in 6 months or whatever. We either decide right away that yes we are going to do it or no.

LD: It kind of invites controversy.

AvW: The idea though is that by laying the plan out like this there is still the guarantee that it will ultimately activate. It might take a little longer than a year but at the end of the road it will activate.

LD: You could do that with BIP 8, just not locking in on the timeout. Then setting the timeout later if you decide to do that.

AvW: There are variations. There are different ways of thinking about this. The real question I want to get at is what is the rush? Aren’t we in it for the long run? Aren’t we here planning to build something that will be here for 200 years? What does an extra year or two matter?

EL: I think for the testing and review of the actual Taproot proposal for sure it should have enough time. We should not rush that. We should not try to deploy anything until the developers that are working on it and are reviewing and testing it are confident that it is at a level where it is safe to deploy, independent of all the activation stuff. I think that the activation though should be quick. I don’t think it will take that long to bring people onboard. We are talking probably a few months at most to get everyone onboard if it is going to happen. If it is taking longer than that then chances are it doesn’t have enough support. We probably shouldn’t be doing it in the first place. If people can’t do it that quickly then I think the whole process is questionable. The more variables that we add to the process later the more it invites controversy and the more that people might just get more confused and think it is more uncertain. With this kind of stuff I think people are looking for decisiveness and looking for resolution. Keeping this uncertain and on hold for a very long period of time is not healthy for the network. The activation part I think should be quick. The deployment should take as long as is necessary to make sure the code actually is good. We should not be deploying code that has not been fully tested and reviewed. But once we decide that “Yes this is what should be done” then I think at that point it should be quick. Otherwise we shouldn’t do it at all.

LD: The decision of course being made by the community. I agree with Eric.

AvW: Another argument in favor of Matt’s perspective here is that the code should be good before you deploy it but maybe in reality in the real world a lot of people will only really start looking at the code, will only really start looking at the upgrade once it is actually out there. Once there is a software release that includes it. By not immediately enforcing the activation this allows realistically more time for review. What do you think?

LD: I think if people have something to review they should review it before it gets to that point, before there is any activation set. Ideally before it gets merged.

AvW: Eric do you see any merit in this argument?

EL: For sure the more eyes that look at it the better but I think nothing would presumably get merged and deployed or released until at least enough eyeballs that are competent in this are able to do it. Right now I think the people that are most motivated are the people that are working on it directly. There might be more eyeballs that look at it later. For sure once it is out there and once the activation has been set it will get more attention. But I don’t think that is the time that people should start reviewing. I think the review should be taking place before that.

LD: At that point if there was an issue found there is going to be a question of “Is it worth redoing all this just to fix that issue or should we just go ahead anyway?” If the issue is bad enough miners could activate simply because of the issue because they want to take advantage of it. We don’t want it to activate if there is a big issue. We need to be completely sure there are no issues before we get to the activation stage.

BIP 8 with forced signaling

AvW: Let me throw out another idea that has been circulating. What if we do a BIP 8 with forced signaling towards the end but give it a long time? After a while you can always speed it up with a new client that includes something forcing miners to signal sooner.

LD: With the current BIP 8 you can’t do that but Anthony Towns has a pull request that will hopefully fix that.

AvW: Do you see merit in this?

EL: I don’t really think it is a good idea. The more we can reduce variables once it has been deployed the better. I think that we should really be looking at getting all this stuff ironed out before. If it has not been done before then someone hasn’t done their job right, that is my way of looking at it. If things are done right then by the time it is out for activation then there should be no controversy. It should be “Let’s get this thing out there and get it done.”

LD: That might be a reason to start with a 1 year timeout. Then we can move it to 6 months if that turns out to be too long.

EL: Then we are inviting more controversy potentially. Last time it was kind of a mess with having to deploy BIP 148 then BIP 91 and then all this other stuff to patch it. The less patches necessary there the better. I think it sets a bad precedent if it is not decisive. If the community has not decided to do this it should not be done at all. If it has decided to do it then it should be decisive and quick. I think that is the best precedent we can set. The more we delay stuff and the more there is controversy, it just invites a lot more potential for things to happen in the future that could be problematic.

AvW: I could throw another idea out there but I expect your answer will be the same. I will throw it out there anyway. I proposed this idea where you have a long BIP 8 period with forced signaling towards the end and you can speed it up later if you decide to. The opposite of that would be to have a long BIP 8 signaling period without forced signaling towards the end. Then at some point we are going to do forced signaling anyway. I guess your answer would be the same. You don’t like solutions that need to be patched along the way?

EL: Yeah.

Protocol ossification

AvW: Last question I think. A lot of people out there like protocol ossification. They would like it if Bitcoin, at least at some point in the future, proves unable to upgrade.

EL: I would like that.

AvW: Why would you like that?

EL: Because that removes the politics from the protocol. As long as you can change stuff it opens the door to politics. All the constitutional forms of government, even a lot of Abrahamic religions are based on this idea that you have this thing that is a protocol that doesn’t change. Whenever there is a change there is some kind of schism or some kind of break in the system. When it comes to something where the network effect is such a huge component of this whole thing and splits can be problematic in terms of people not being able to trade between themselves, I think the less politics that are involved the better. People are going to become political about it. The higher the stakes the more that people are going to want to attack it. The fewer vectors of attack there are the safer. It would be nice if we could just get it so that there are no more changes to the protocol at the base layer. There are always going to be improvements that can be proposed because we always learn from hindsight. We could always say “We can make this better. We can do this sort of thing better.” The question is where do we draw the line. Where do we say “It is good enough and no more changes from now on”? Can that really be done? Is it possible for people to really decide that this is what the protocol is going to be? Then there is also another issue which is what if later on some issue is discovered that requires some protocol change to fix? Some kind of catastrophic bug or some vulnerability or something. At that point there could be an emergency measure to do it which might not set precedent for this kind of thing to become a regular thing. Anytime you invoke any kind of emergency measures it opens the door for attacks. I don’t know where that line is. I think that is a really difficult question.

AvW: I think Nick Szabo has described it as social scalability. That is what you are referring to. You think ossification would benefit social scalability.

LD: I think in the far future it can be considered but in Bitcoin’s current stage, where it is today, if Bitcoin stops improving that opens the door for altcoins to replace it and Bitcoin eventually just becoming irrelevant.

AvW: You say maybe at some point in the future?

LD: In the far future.

EL: It is too early to say.

AvW: Do you have any idea when it is done?

EL: I think anyone who is sure of anything like that is full of s***. I don’t think anyone really understands this problem well enough yet.

LD: There is too much we don’t understand about how Bitcoin should work. There are too many improvements that could be made. At this point it really seems like it should be out of the question.

AvW: Why would it be a problem? Let’s say this soft fork fails for some reason. It doesn’t activate. We learn in the next 2 years that Bitcoin can’t upgrade anymore and this is it. This is what we are going to live with. Why is this a problem? Or is this a problem?

EL: There are certain issues that exist with the protocol right now. Privacy is a huge one. There could be some other potential improvements to the crypto that allow for much more sophisticated compression or more privacy or other kind of stuff that would be significantly beneficial. If some other coin is able to launch that has these features it does have the issue of not having the same kind of founding story as Bitcoin and that kind of myth I think is necessary to create a movement like this. It creates incentives for the early players to… What has happened with the altcoins and all these ICOs and stuff like that, the founders basically end up having too much control over the protocol. That is an issue where I think Bitcoin really maintains its network effect just because of that. But there could be some technical improvement at some point that is so significant that Bitcoin would really be at a serious disadvantage if it did not incorporate something like that.

LD: At this point I think it is a given that there will be.

AvW: You started out by mentioning privacy. Isn’t this something you think could ultimately be solved well enough on second layers as the protocol is right now?

EL: Possibly. I am not sure. I don’t think anyone has a complete answer to this unfortunately.

LD: The definition of “good enough” may change as privacy invading technology improves. If the governments, not even governments, anyone gets better at invading everyone’s privacy what we need to protect against that could very well raise the bar.

Taproot

AvW: We can discuss Taproot itself a little bit. Is that a good idea? What is great about Taproot? Why do you support it if you support it?

LD: It significantly simplifies what has to happen onchain. Right now we have got all these smart contract capabilities, which we have had since 2009, but in most cases you don’t need to have the smart contract if both parties see that it is going to end up a certain way. They can just both sign the single transaction and all the smart contract stuff can be bypassed. That is pretty much the main benefit of Taproot. You can bypass all the smart contract stuff in most cases. Then all the full nodes get cheaper verification, very little overhead.

EL: And all the transactions would look the same so nobody would be able to see what the contracts are. All the contracts would run offchain which would be a significant improvement for scalability and privacy. I think it is a win all round.

LD: Instead of everybody running the smart contract it is just participants.

EL: Which is the way it should’ve been in the beginning but I think that it took a while until people realized that that is what the script should be doing. Initially it was thought we could have these scripts that run onchain. This is the way it was done because I don’t think Satoshi thought this completely through. He just wanted to launch something. We have a lot of hindsight now that we didn’t have back then. Now it is obvious that really the blockchain is about authorizing transactions, it is not about processing the conditions of contracts themselves. That can all be done offchain and that can be done very well offchain. In the end the only thing is that the participants need to sign off that it did happen and that’s it. That is all that everyone really cares about. Everyone agreed so what is the big deal? If everyone agrees there is no big issue.

AvW: There doesn’t appear to be any downside to Taproot? Is there any downside to it, have you heard about any concern? I think there was an email on the mailing list a while ago with some concerns.

LD: I can’t think of any. There is no reason not to deploy it at least, I can’t think of any reason not to use it either. It does carry over the bias that SegWit has toward bigger blocks but that is something that has to be considered independently. There is no reason to tie the features to the block size.

AvW: Blocks should still be smaller Luke, is that still your position?

LD: Yeah but that is independent of Taproot.

AvW: Are there any other soft forks you guys are excited about or that you could see be deployed on Bitcoin in the next couple of years?

LD: There is ANYPREVOUT which was formerly called NOINPUT, I think that is making good progress. There is CHECKTEMPLATEVERIFY which I still haven’t looked too much into but it seems like it has got significant community support building up at least. Once Taproot is deployed there will probably be a bunch of other improvements on top of that.

EL: After the last soft fork, when SegWit activated, I was so burnt out by the whole process. This thing was really a more than two year process. 2015 was really when this whole thing kind of started and it didn’t activate until August 1st 2017. That is more than two years of this kind of stuff going on. I don’t know if I have an appetite for a very protracted battle with this at all. I want to see what happens with Taproot first before weighing in on other soft forks. Taproot is the one that seems to have the most support right now as far as something that is a no brainer, this would be good to have. I would like to see that first. Once that activates then maybe I will have other ideas of what to do next. Right now I don’t know, it is too early.

AvW: What are your final thoughts on soft fork activation? What do you want to tell our viewers?

EL: I think it is really important that we set a good precedent here. I talked about three different categories of risks. The first risk category is just the technical stuff, making sure the code doesn’t have any bugs and stuff like that. The second one is with the activation methodology and making sure that the network doesn’t split. The third one is with precedent. Inviting potential attacks in the future by people exploiting the process itself. The third part is the one that I think is the least well understood. The first part is the part that is most understood even back in 2015. The second category was the one that the whole SegWit activation showed us a lot of things about although we were discussing BIP 8 and BIP 9. The category 3 risks right now I think are very unknown. That is my biggest concern. I would like to see that there isn’t any kind of precedent established where this kind of stuff could be exploited in the future. It is a really tough line to draw exactly how aggressive we should be with this and whether that sets something up for someone else in the future to do something bad. I think we can only learn by doing it and we have to take risks. We are already in uncharted territory with Bitcoin. Bitcoin is already a risky proposition to begin with. We have to take some risks. They should be calculated risks and we should learn as we go along and correct ourselves as quickly as possible. Other than that I don’t really know exactly what the process will end up looking like. We are learning a lot. Right now the category 3 stuff I think is the key stuff we are going to see if this thing actually happens. That is really important to consider.

AvW: Luke any final thoughts?

LD: Not really.

diff --git a/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/index.html b/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/index.html index d6038bcdd2..99e7b862a3 100644 --- a/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/index.html +++ b/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/index.html @@ -1,7 +1,7 @@ Minisketch and Lightning gossip

Media: https://www.youtube.com/watch?v=e0u59hSsmio

Location: Bitcoin++

Slides: https://endothermic.dev/presentations/magical-minisketch

Rusty Russell on using Minisketch for Lightning gossip: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html

Minisketch library: https://github.com/sipa/minisketch

Bitcoin Core PR review club on Minisketch (3 sessions):

https://bitcoincore.reviews/minisketch-26

https://bitcoincore.reviews/minisketch-26-2

https://bitcoincore.reviews/minisketch

Introduction

I’m pretty new to Lightning Network development; I joined the Core Lightning team. I’m going to present on Minisketch and what I’ve learnt about the gossip network.

Topics

I am going to go over the role of gossip in the Lightning Network. Is everyone familiar with Lightning or at least semi enthusiast? I am not going to spend too much time on the background of how Lightning works. But I will explain how gossip ties into it. I’ll try to give a high level overview of how Minisketch works, go through an example and then explain how we can use this to improve Lightning gossip.

Role of gossip

The role of gossip in the Lightning Network, I thought it would be fun to explain this through examples. We are going to dive into a basic Lightning transaction.

Make a Lightning payment

For an average user I figured it is probably like pull out your mobile app, you are presented with a QR code or an invoice so you scan that. Then on your app it gives you the memo with the notes, you verify the amount looks roughly correct and you hit Send. With any luck a moment later you get your little green check box showing that it was successful. Given we are enthusiasts here I think we should go a level deeper and see what was going on behind the scenes.

Lightning Payment - Enhance

In Core Lightning at least there are a lot of CLI commands we can use to get some more information.

lightning-cli listpays <BOLT 11>

In this case we can see that it did indeed go through. It looks complete, there’s the preimage, that is kind of like our receipt of payment on the Lightning Network. Interestingly down here we see that this was split into 12 different parts. That’s 12 different routes that were calculated and all filled in order to fulfil that invoice.

lightning-cli paystatus <BOLT 11>

Let’s go one level deeper. We can see that it wasn’t just 12 routes that were attempted, there were actually 53 different payment attempts. That is part of a multipart payment. Some of those timed out. We got some errors before it was successful. This particular CLI command, it prints out pages of data. But we can actually tell on each failure how many hops deep that was and we get this text encoded string here (0x1007…). If we feed that into another tool (devtools/decodemsg) we can actually decode this. This is part of BOLT 4. It specifies that 1007 means we had a temporary_channel_failure. The error actually includes a channel update (0x1000, new channel update enclosed). There is a complete channel update packet. That intermediate node, it doesn’t know who we are because everything is onion routed, it doesn’t know where we are trying to get. It is saying “Whatever you are doing I think you are barking up the wrong tree. You obviously have some outdated information and I think you need to see this.” In this particular case we can see we’ve got information on what sort of fees this particular channel charges, what the cltv_expiry_delta is, how many blocks we need to add for that timeout. The important part is right here, channel_flags=1. It doesn’t sound like much but this is the flag you look up in the protocol and it says this channel is disabled. Obviously when we were calculating a route we didn’t have that information and that was why this particular route failed.

Lightning Payment - Takeaway

So basic example, what did we learn? There were 70 different channels that were utilized to make this payment, even in a fairly simple scenario. On some of them we had outdated information but we were able to retrieve some new gossip there and update the routing graph, which is kind of like our map of what the network looks like. With that new information we were able to recalculate our paths such that we were able to ultimately succeed and make a payment.

Role of Gossip

Backing up, starting from a high level now, what is generally the role of gossip in the network? It is kind of just like in real life. Gossip is second hand information that you share about peers who you are not directly communicating with. In the Lightning Network we have got all these different nodes that are connected. We might have direct connections to a handful of them but if we are down here in that lower left corner we need to collect information on all the different nodes that are out there. We do this in the form of a gossip message called the node_announcement. This one gives the public key of every node and it also gives addresses for how to reach them if we wanted to connect to them directly. That’s like your IP address, onion address or web socket, anything you can dream up you can include that in the node announcement. The next thing we need to know is where the channels are. We have got a channel_announcement type. All that really does is it says between node A and node B there exists a channel and it was funded with a transaction on the blockchain. It lists the short channel ID which is everything you need to look it up and see which UTXO on the blockchain funded it. That part in particular is not great for privacy but it is really good to prevent denial of service. You have to prove that you have a UTXO to open the channel. Now we have the basic topology of the network if we wanted to route across it. It helps to have a little bit more information. We get this in the form of a channel_update. This is stuff like fees or if the channel has a max HTLC size set too low or is otherwise disabled like we saw in those examples. Those are things where we would want to eliminate that from our choices when we go to calculate a route.

Now we have everything we need to be able to construct our route. Why did I call this talk the Lightning Gossip Network? Aren’t we just talking about the Lightning Network? The short answer is not really. It is a little bit different in that we see all the connections we have, the channels between nodes on the Lightning Network. But when we talk about gossip we are actually only communicating with a subset of those. Depending on the implementation, Core Lightning uses 5 peers I think, it is anywhere between 3 and 5 that you’re connected with and gossiping with. It doesn’t have to be one of your peers that you are gossiping with. You can gossip with any nodes. It doesn’t even have to be a node so long as there is a machine out there that knows the protocol and is providing new information that you didn’t have from your other peers about gossip. We are totally happy communicating back and forth with you and gathering new information on the state of the network. The last important detail about it, it operates by flood propagation. Say you are connected to 5 peers, you receive information from one peer and if it is new information you rebroadcast that to 4 other peers. This is really efficient early on in the propagation because you quickly fan out the information to all the peers. But even after several hops it starts to lose efficacy. All of a sudden from multiple sources you are seeing the same data, you are losing efficiency.

Gossip Statistics

Some basic statistics on the state of the Lightning Network. Right now we’ve got 80,000 different channels and 17,000 different public nodes. Going back to that flood propagation, if we are looking at the bare minimum of three connected gossiping peers, that means in order to fan out and reach the entire network we are looking at a minimum of 14 hops. There is kind of like rate limiting built in to gossip propagation. You actually batch all the gossip you receive and you wait 60-90 seconds. Core Lightning uses 60 seconds but I think LND maybe uses a 90 second cycle. You periodically broadcast all the new gossip you receive in that time to your gossip peers. That means that we are looking at 14 hops, 60-90 seconds between each one. It takes a while to fully propagate to the whole network. In practice we are seeing something like 95 percent of the network receives it within 13 minutes. You probably need 20 minutes, a good rule of thumb at minimum, before you can count on everyone seeing the new information.
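As a rough back-of-the-envelope check on those numbers (my own arithmetic, using a simple doubling model): with 3 gossiping peers you receive a message from one peer and rebroadcast it to the other 2, so the number of nodes reached roughly doubles every hop.

2^14 = 16,384 ≈ 17,000 public nodes, hence the ~14 hops
14 hops * 60-90 seconds of batching per hop ≈ 14-21 minutes

That is in the same ballpark as the 13 minutes observed for 95 percent of the network and the 20 minute rule of thumb.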

What is Minisketch?

Switching gears, I am going to try to explain Minisketch. I will do my best here. It is a set reconciliation protocol. Set reconciliation, we have two datasets, you can see with this Venn diagram that they are largely overlapping. In this case they both have valid data and we want to make sure that we get some of that data, make sure both of them are updated with it. In this case we are interested in the symmetric difference between the sets. That is what Minisketch helps us do. If you are like I was a few months ago you might think that this is a hard problem to solve, there is going to be some overhead in trying to reconcile these two sets. You are going to send more information than is strictly required. You don’t know what your peer doesn’t have. But like me you would be wrong. Minisketch has some really cool properties.

Background

It actually comes from a family of error correction codes called BCH codes. It uses a method like the Berlekamp-Massey algorithm. I am going to go through a really brief high level example.

BCH Example

Imagine we have two sets of data. They contain the elements (1,2,3) and (1,2,3,4). A cool trick we can do if we want to reconcile these sets and make sure both of them have all the data is to take a sum of the elements. Here we have got 6 and 10. If we are on the left side we want to transmit to the set on the right: “Here is my sum. It is 6.” We can take the difference (10-6) and we arrive at 4, which is our missing element. Ok, cool property, but we are basically just doing subtraction here. This is how it works with exactly one missing element. Any more and there is obviously no way we can do anything. But we can actually do another trick where say we want to encode 2 differences, we can take the simple sum of the elements in this array… Here’s our array, the first element is the sum of the elements. The next element of our array is the sum of the squares. I’ve done this basic math. This is the data in this array that we are going to transmit to the other guy. It is twice as much data to reconcile two differences now. We can take the difference here. We say “That difference is the sum of the elements and this one is the sum of the squares.” Thinking back to algebra we’ve got two equations, two unknowns. That means it is solvable. As this order gets higher it becomes a really difficult problem. That’s where the Berlekamp-Massey algorithm comes in. It is a really efficient way to solve this.
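To make the two-difference case concrete, here is a small worked example I have added (plain integer arithmetic; the actual scheme works over a finite field, but the idea is the same):

Set A = (1, 2) and Set B = (1, 2, 3, 4)
A sends [sum, sum of squares] = [1+2, 1+4] = [3, 5]
B computes [1+2+3+4, 1+4+9+16] = [10, 30] and takes the differences: 10-3 = 7 and 30-5 = 25
The two missing elements x and y then satisfy x + y = 7 and x^2 + y^2 = 25, which solves to x = 3 and y = 4.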

Constructing a large sketch

We can use this for an arbitrarily large set size. We can go all the way up to an order of n. Obviously it takes a little longer the higher this order goes. You have to encode every entry that you have up to the nth order. But it does work and we can resolve a large number of differences with it.

Minisketch

Minisketch library: https://github.com/sipa/minisketch

Minisketch is a C++ library that was developed by Pieter Wuille. It is implementing the PinSketch algorithm. It runs on a wide range of architectures and hardware. It is optimized with some tables to help solve the roots in a time efficient manner. He has also got a pure Python implementation which is really impressive in itself. It can do all the calculations in like 500 lines of code, which kind of blows my mind. It is pretty accessible, I encourage you to check out this GitHub page.

Using Minisketch

But supposing you are just an engineer like me. How do we use this fancy math in practice? It looks something like this. First we initialize a sketch and in order to do that we need to know the width of the data that we want to encode. In this case we are talking 64 bits wide. We give it 64 bits and then we need to tell it the capacity. This is how many differences we want to be able to reconcile between the two sets. If we think we are going to have about 5 differences between them we might choose something like 8 just to make sure we are covered. But we don’t want to get too carried away. Then we add our data and we calculate the syndromes. That is like each line of the “Constructing a large sketch” array. Then we serialize and transmit that over from Alice to Bob. Bob goes through the exact same procedure. He builds his sketch and he merges the two together. It is pretty interesting. He needs to choose the same data width but his capacity can actually be different. The way that the math works out is you can actually truncate this array at any point so that they’re equal and then you can merge those two sketches. You lose some of your capacity there but they are able to be solved just fine. Then you use that Berlekamp-Massey algorithm, that solves it and the result is all the data that you are missing between the two.
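A rough sketch of that flow in C, loosely following the example in the library's README (treat the exact calls and parameters as illustrative rather than authoritative; minisketch.h is the reference):

#include <stdint.h>
#include <stdio.h>
#include <minisketch.h>

int main(void) {
    /* Alice: 64-bit elements, default implementation, capacity for 8 differences */
    minisketch *alice = minisketch_create(64, 0, 8);
    for (uint64_t i = 1000; i < 1020; ++i) minisketch_add_uint64(alice, i);

    /* Serialized size = capacity * data width = 8 * 64 bits = 64 bytes on the wire */
    unsigned char buf[64];
    minisketch_serialize(alice, buf);
    minisketch_destroy(alice);

    /* Bob: same data width (his capacity may differ), builds his own sketch,
       deserializes Alice's and merges the two */
    minisketch *bob = minisketch_create(64, 0, 8);
    for (uint64_t i = 1004; i < 1024; ++i) minisketch_add_uint64(bob, i);

    minisketch *from_alice = minisketch_create(64, 0, 8);
    minisketch_deserialize(from_alice, buf);
    minisketch_merge(bob, from_alice);

    /* Decode recovers the symmetric difference (1000..1003 and 1020..1023 here),
       or returns a negative value if the differences exceed the capacity */
    uint64_t diff[8];
    ssize_t n = minisketch_decode(bob, 8, diff);
    for (ssize_t i = 0; i < n; ++i) printf("only one side has: %llu\n", (unsigned long long)diff[i]);

    minisketch_destroy(from_alice);
    minisketch_destroy(bob);
    return 0;
}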

Black Box Properties

What properties does this have as we use it? It can support anywhere from 2 to 64 bit wide data. The really cool thing is that the serialized size that you are transmitting between the peers is actually the sketch capacity, the number of entities that you want to resolve, times the data width. If you choose that sketch capacity just right it is like 100 percent efficient in how much data you can get out of it.

Serialized size = sketch capacity * data width

That part really blew my mind. Some other things to keep in mind. As you increase sketch capacity the reconciliation time scales (linearly) up. Depending on the number of differences that you actually have to resolve, that scales quadratically. You want to keep the differences bounded. You don’t want to get too carried away with how big the set differences are there.
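To put a number on that formula: a 64-bit sketch with capacity 8 serializes to 8 * 64 bits = 64 bytes, exactly what sending 8 raw 64-bit elements directly would cost, and that size stays the same no matter how many thousands of elements were added to the sketch.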

Limitations

A couple of basic considerations to keep in mind. You can’t encode an element of zero when we initialize our sketch. You can choose any number apart from zero. The other thing is it is also generally helpful to make sure that the number of elements that are in each sketch, that they are not so different that they exceed the sketch capacity. Normally this isn’t a problem but if you imagine you have 50 entries in one sketch and 100 entries in the other and your sketch capacity is only 10. There is no way it is ever going to solve because the capacity is just not there. The cool thing is that it gives you a warning if it can’t solve it. It says “I can’t find any polynomial that actually satisfies this equation”. That works most of the time.

Erlay

Some background on how we are going to use this. Bitcoin ran into a fairly similar problem that Lightning Network gossip is facing. They were faced with the issue of transaction relay also not scaling very well with flood propagation. If you are familiar with the Erlay protocol, that’s what Minisketch was actually developed for. Erlay uses this to encode each transaction that is in the mempool. You want to share that with peer Bitcoin nodes. Those are 32 byte transaction IDs. Obviously we are limited to only 8 bytes with Minisketch. So we need to compress that down to a smaller fingerprint where we know what transaction we are talking about. There is a hash function to do that. They reconcile inventory sets in order to do that. That’s another element that complicates a little. Say we have the whole mempool, one way you could do it is encode every single transaction in there into a sketch. All the peers do that. They resolve it and they see what differences they have. That’s not actually how Erlay works. What Erlay does is it keeps track of each peer that it is communicating with and it says “It has been 15 seconds since we last talked. I am keeping an inventory of everything that I wanted to tell you about but I haven’t yet. Here’s 5 items that I’ve been meaning to tell you about.” Meanwhile your peer, Bob over there, he’s doing the same but maybe he collected 7 items. Instead of encoding however many thousand items in the mempool, it is really just the 5 items and the 7 items. We reconcile those and maybe we find that there is only 3 differences between the two. Once those are reconciled those inventory sets are zeroed out and they are ready for the next round that is going to take place in another 15 seconds.
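To illustrate that per-peer bookkeeping, here is a loose sketch in C. The struct and function are hypothetical, just to show the shape of a round; the 15 second interval and the small pending counts are the ones described above:

#include <stdint.h>
#include <stddef.h>
#include <minisketch.h>

#define MAX_PENDING 256

/* Hypothetical per-peer state: items we meant to announce since the last round */
struct peer_inventory {
    uint64_t pending[MAX_PENDING];
    size_t count;
};

/* Every ~15 seconds: sketch only the pending items, not the whole mempool,
   then clear the inventory for the next round */
size_t build_round_sketch(struct peer_inventory *inv, unsigned char *out, size_t outlen) {
    minisketch *sk = minisketch_create(64, 0, 16);   /* capacity sized for a typical round */
    for (size_t i = 0; i < inv->count; ++i) minisketch_add_uint64(sk, inv->pending[i]);
    size_t len = minisketch_serialized_size(sk);
    if (len <= outlen) minisketch_serialize(sk, out);
    minisketch_destroy(sk);
    inv->count = 0;
    return len;
}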

LN Gossip vs Bitcoin tx relay

The problem that Bitcoin faced with transaction relay was pretty similar but there are a few differences. For one, any time you introduce that short hash function that produces a 64 bit fingerprint you have to be concerned with hash collisions. Someone could potentially take advantage of that and grind out a hash that would resolve to the same fingerprint. That would be a denial of service vector. There’s some concerns like that. There were also concerns about timing analysis on transactions that were meant to be private. We don’t have to deal with hiding those things with gossip because it is public by nature. Also instead of having to use that hash function we have a trick that I’ll get to in a moment. We don’t just have a single data type, we have 3 different messages that we want to relay.

Application to Gossip

Those messages are the channel_update, the node_announcement and the channel_announcement. Of these the channel_update, that’s your fees, whether the channels are active or disabled, what’s the maximum HTLC it can support, that kind of thing. That is like 97 percent of the network traffic, that is the big one that we want to make more efficient. These top two (channel_update, node_announcement), they are only valid for a 2 week period. These are repeatedly broadcast.

Challenge

Our challenge is to encode all 3 of these message types. We can only use 8 bytes but we have a tool in the form of the short channel ID (SCID). That is what links a channel to the funding transaction on the blockchain. By its nature it points to one unique transaction on the blockchain. That’s a cool shortcut we can use. The node_announcement doesn’t have any channels linked to it so it is kind of hard to use the short channel ID here. Ideally we would like to refer to the node ID, the public key of the node. But unfortunately that is 32 bytes so we’d have to hash it or something.

Encoding Scheme

But what we can actually do when we encode this is use these elements here (block height, transaction index, output index); this is the short channel ID. We can refer to a node_announcement by just saying “Here’s the node’s oldest channel” and we point to which side of the channel that node is on. That way we have a way to canonically identify which node we are talking about. The short channel ID is normally 8 bytes of data and we are trying to fit the whole thing into 8 bytes. What we have done is shaved off a few bits that we don’t really need. If we can get 32,000 transactions into a block that’s probably sufficient. If you are funding a Lightning channel you are probably not going to have like 1000 outputs. We have saved a few bits there. Now we have room left over for a couple of bits to say which message type we are talking about and which side of the channel the node is on. This is the way we can identify in just 64 bits exactly which gossip message we are talking about. Finally we have the timestamp. For those periodic messages we need to know which one is the new one and which one is the old one. Having some bits down there to differentiate those is helpful.
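
As a rough illustration of that bit packing: the bounds on transaction index and output index echo the numbers mentioned above, but the exact widths for block height, message type, side and timestamp are my own guesses for illustration, not the proposal’s final layout.

```python
def encode_gossip_fingerprint(block_height, tx_index, output_index,
                              msg_type, side, timestamp_bits):
    """Pack a gossip identifier into a single 64-bit sketch element (illustrative layout)."""
    assert block_height < (1 << 24)    # plenty of headroom for block heights
    assert tx_index < (1 << 15)        # "32,000 transactions in a block is probably sufficient"
    assert output_index < (1 << 10)    # a funding tx is unlikely to have ~1000 outputs
    assert msg_type < (1 << 2)         # channel_announcement / channel_update / node_announcement
    assert side < (1 << 1)             # which side of the channel the node is on
    assert timestamp_bits < (1 << 12)  # enough to tell a newer update from an older one

    value = block_height
    value = (value << 15) | tx_index
    value = (value << 10) | output_index
    value = (value << 2) | msg_type
    value = (value << 1) | side
    value = (value << 12) | timestamp_bits
    assert value < (1 << 64)           # 24+15+10+2+1+12 = 64 bits total
    return value
```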

Set Reconciliation benefits

What does this buy us? The short answer is we can save at least 60 percent on bandwidth for gossip. Now that we’ve done this we can communicate with more peers. Communicating with more peers is really great because that increases the reliability of gossip propagation. In particular node announcements have suffered in the past because for the other two types you have some simple heuristics to figure out if you are probably missing one of them. For a channel update, as we saw in that example at the beginning, the worst case scenario is we fail a route and that error packet would give us the update anyway. We can get some of the other data types back, but the node announcement tells you the IP address of the peer to connect to. If they change that and you never see it you might not be able to connect to your peer. That one is really catastrophic and should really benefit from the increased reliability.

What’s next?

I still have a couple outstanding issues to decide upon. One of these is we can keep either global sketches or we can do the per-peer sketches with inventory sets. There are arguments for both. Another thing is we also have rate limits when we accept gossip. If we are rate limiting gossip pretty aggressively this really cuts into our sketch capacity. It would be nice for the implementations that opt into this set reconciliation if everyone can agree on a common criteria for when to rate limit. I’m for using the block height. I’m trying to get everyone onboard, if we can link each gossip message to the current block height. Let’s say you get one message per block height or for every 100 blocks or something. That’s really easy for anyone to validate since you are already running a full node anyway. Then finally in the future we’d really like to get away from the short channel ID in gossip because it does leak privacy. You don’t really want to dox all your onchain UTXOs. We are not quite ready for that just yet but when we do it is going to lead to this per peer inventory set solution. That will probably inform this decision.

Conclusion

Gossip is what lets us build our graph view of the entire Lightning Network and find a route. Minisketch is really cool, everyone should check it out. Hopefully we will be able to use it to improve the reliability of Lightning Network gossip. Here’s my contact info if you want to follow my work. You can see some of it on the lightning-dev mailing list. I’m endothermicdev on GitHub and Twitter. Feel free to reach out.

\ No newline at end of file diff --git a/bitcoinplusplus/2022/index.xml b/bitcoinplusplus/2022/index.xml index af1d495538..b37d1b7b2e 100644 --- a/bitcoinplusplus/2022/index.xml +++ b/bitcoinplusplus/2022/index.xml @@ -1,6 +1,6 @@ Bitcoin++ 2022 on ₿itcoin Transcriptshttps://btctranscripts.com/bitcoinplusplus/2022/Recent content in Bitcoin++ 2022 on ₿itcoin TranscriptsHugo -- gohugo.ioenTue, 07 Jun 2022 00:00:00 +0000Minisketch and Lightning gossiphttps://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Tue, 07 Jun 2022 00:00:00 +0000https://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Location: Bitcoin++ Slides: https://endothermic.dev/presentations/magical-minisketch -Rusty Russell on using Minisketch for Lightning gossip: https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html +Rusty Russell on using Minisketch for Lightning gossip: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html Minisketch library: https://github.com/sipa/minisketch Bitcoin Core PR review club on Minisketch (3 sessions): https://bitcoincore.reviews/minisketch-26 diff --git a/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/index.html b/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/index.html index bc70fb1d3a..2208150003 100644 --- a/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/index.html +++ b/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/index.html @@ -13,4 +13,4 @@ Olaoluwa Osuntokun

Date: January 30, 2018

Transcript By: Bryan Bishop

Tags: Security, Lightning

Category: Conference

Media: https://www.youtube.com/watch?v=V3f4yYVCxpk

https://twitter.com/kanzure/status/959155562134102016

slides: https://docs.google.com/presentation/d/14NuX5LTDSmmfYbAn0NupuXnXpfoZE0nXsG7CjzPXdlA/edit

previous talk (2017): http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2017/lightning-network-security-analysis/

previous slides (2017): https://cyber.stanford.edu/sites/default/files/olaoluwaosuntokun.pdf

Introduction

I am basically going to go over some ways that I’ve been thinking about basically hardening the lightning network, in terms of making security better and making the client more scalable itself, and then I’ll talk about some issues and pain points that come up when you’re doing fees on mainnet. And there’s going to be some relatively minor changes that I propose to Bitcoin in this talk. We’ll have to see if they have a chance of getting in. They are not big sweeping changes. One is a new sighash type and one is a new opcode and then there’s covenants but that’s an entirely separate story itself of course.

Talk overview

A quick overview first. I am going to give an overview of lightning’s security model. I am not going to go much into lightning’s overview because I kind of assume that you at least vaguely know what lightning network is. Payment channels, you connect them, you can route across them and that’s the gist of it. We’re going to talk about hardening the contract breach event. Some of you have talked about ways that we can do that in terms of adding more consensus changes, or going at it from the point of view of a strategy when a breach actually happens and there’s a large mempool backlog. Then, I am going to introduce a new lightning channel type and a new channel design. Basically making the channels more succinct, meaning you’d have to store less history and things like making outsourcing a lot more efficient as well. And then I am going to talk about kind of like outsourcing and a newer model and a model that assumes that– that maintains client privacy as much as possible, because if you want this to scale in the outsourcing to support a large number of clients then we want them to store as little state as possible. And then we’re going to go into making lightning more succinct on chain. If it’s an off-chain protocol, then we want to make sure it has the smallest on-chain footprint possible, otherwise it’s not really scaling because you’re hitting the chain every single time. It should be off-chain itself.

Security model

There’s a diagram of the layers of lightning over here. People always say it’s like “layer 2”. But to me there’s a lot more layers on top of that. To me, layer 1 is bitcoin and the blockchain itself. Layer 2 is the link layer between channels. This is basically how do I open a channel between myself and Bob and how do Bob and I actually update the channels. Then there’s end-to-end routing and the HTLCs and onion routing and whatever else is involved there. And then you have an application layer for things being built on top of lightning, such as exchanges.

I had emojis on this slide, but I guess it didn’t translate somehow. That’s okay.

The way that lightning works is that it uses bitcoin or another blockchain as a dispute-mediation system. Instead of having every transaction go on bitcoin itself, we do contract creation on bitcoin itself. We can do enforcement there. But the execution is off-chain. We do the initialization on the blockchain, then we do the execution on the side off of the chain. This makes it succinct and that’s fine. We treat the chain as a trust anchor. Basically the way it works is that any time that we have a dispute, we can go to the blockchain with our evidence and transactions and then we can publish these transactions more broadly and we can show that hey my counterparty is defrauding me he is violating some clause of the lightning smart contract. And then I get all my money back, basically.

We have the easy way and the hard way. Optimistically, we can do everything off-chain. If we want to exit the contract, then we do signatures between each other and it goes to the chain. The hard way is in the case where I have to go on-chain to actually do a property dispute because of a violation in the contract. When we’re handling disputes, the way we think about it is that we can write to the chain “eventually”. This is configured by a time parameter called T which is like a bip112 CSV value. It means that every time someone goes to the chain there’s a contestation period that is open for time period T to basically refute their claim. We say “eventually” because T can be configured. You want to configure “T” based on the size of the channel. If it’s a $10 channel then maybe you want a few hours, and if it’s a million dollar channel then maybe you want T to be closer to a month or whatever.

The other thing we assume is that miners or pool operators are not colluding against us. They could censor all of our transactions on the chain. As part of this assumption, we assume a certain minimum level of decentralization of the blockchain miners because otherwise you can get blacklisted or something. There are ways that you could try to make the channels blend in with everything else and all of the other transactions occurring on the blockchain, and there’s some cool work on that front too.

Strategy: Hardening contract breach defense

Moving on, let’s talk about hardening contract breach defense strategy. Something that comes up a lot and people ask this is, how do I handle contract breach in the case of massive backlog? This would be the case where, for whatever reason, my fees are sky high, someone is spamming the chain and there’s a contract breach. What am I going to do from there on? There’s some things that people have worked on for adding new consensus changes to bitcoin, such as the timelock would stop after the block was ever so full or possibly you would have some pre-allocated arena where you could do battle for your smart contract. This is looking from a bit more strategic standpoint and looking at the dynamics on the fees in terms of handling the commitment transaction itself.

Whenever someone tries to broadcast a prior state, perhaps out of order, they are basically locked to that particular state. Bob basically had $2 on the newest state and he had $10 on the other state. He’s going to go back to the $10 state. At that point, Bob can only access just his money from the channel itself. Bob revoked some of the outputs. What his counterparty can do is basically start to progressively siphon Bob’s money into miner fees. This is now a situation where there’s some strategy there because Bob has two options: he can either stop, or I can keep going and I’m eventually going to siphon all of his money into miner’s fees. The only way that he will actually succeed in getting that prior state transaction into the chain is if he pays more in miner’s fees than he actually has in his balance himself. He could do that by using child-pays-for-parent. I think this is a pretty good approach. What we’re doing here is that the justice transaction is going to be using replace-by-fee, and we’re going to use that to progressively bump up the fee, moving the cheater’s money into miner fees. I think this is a pretty good stopgap for the way things work right now. If you have someone that can be malicious at all, you can almost always penalize this person. So even if there’s a massive backlog, assuming that the miner wants the higher fee rate, then well you know Bob basically gets nothing. This further disincentivizes people from trying to cheat, because now there is this strategy where basically I can just give away your money to miner’s fees and the adversary gets nothing. I’m still made whole through this entire ordeal, so whatever happens I’m okay. I want to penalize and punish Bob in the worst way possible and I’m going to jump the queue in front of everything else in the mempool. I could have put maybe 20 BTC towards fees- I’m fine, I’m just punishing my counterparty, it’s scorched earth, and then I wash my hands and walk away and everything’s okay.
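
A toy sketch of that scorched-earth strategy. This is purely conceptual; `rebroadcast` is a placeholder for signing and broadcasting a replacement justice transaction, not a real wallet or RPC call.

```python
def punish_with_rbf(cheater_balance, my_balance, current_fee, fee_step, rebroadcast):
    """Keep replacing the justice transaction, paying fees out of the cheater's balance.

    The cheater can only win the race by paying more in fees than he has in the
    channel, so every bump either confirms our justice transaction or burns more
    of his money to miners. Our own balance is never touched.
    """
    fee = current_fee
    while fee < cheater_balance:
        fee += fee_step  # a BIP 125 replacement must pay a strictly higher fee
        swept_from_cheater = max(cheater_balance - fee, 0)  # his money shrinks, ours stays whole
        rebroadcast(fee=fee, my_output=my_balance, swept_from_cheater=swept_from_cheater)
    # At this point the cheater's entire balance has gone to fees; we are still made whole.
```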

Reducing client side history storage

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=5m45s

Now we’re going to talk about scaling lightning itself. Basically, from the client side. The way it works is that because contract execution is local, we have a local transcript. Every time we do a new state update, there’s basically a shadow chain, and by shadow chain I mean that we have a “blockchain” in the sense that each state refers to previous state in a particular way but also we can do state transitions between these states themselves. The shadow chain is only manifested on the chain in the case of a contract breach or if I just want to force-close. Typically in the normal case we do a cooperative close which is where both sides sign a multisig, we go on chain, you see nothing else. The shadow chain is only manifested on the blockchain in the case of a breakdown or some sort of breach remedy situation.

These state transitions in the channel can be very generic; later this year we might get fancier with some cool stuff. Right now it’s just adding HTLCs, removing HTLCs, and keeping track of prior state. The goal here is to reduce the amount of state that the client needs to keep track of. It would be good if they could keep track of less state, because then they could have more high-throughput transactions. It has implications for outsourcing too. If the state requirements for the client to actually act on the contract are reduced, then it makes the outsourcers more succinct as well, and if the outsourcers are more succinct then people are going to run them, and if people run them then we’re going to have more security, so it’s a reasonable goal to pursue this.

Overview of commitment invalidation

Before we get into reducing the amount of state, I am going to talk about some of the history of how to do commitment invalidation on lightning. You have a series of states. You walk forward in these states one-by-one. Each time you go to a new state, you revoke the old one. You move from state 1, you revoke it, state 2 revoked, now you’re on state 3. The central question in how we do channels is how we do invalidation. One thing to note is that this only matters for bi-directional channels where you’re going both ways. If it’s a uni-directional channel, every single state update I do is basically benefiting the other participant in some way and they don’t really have incentive to go back on the prior states, but if you have a bi-directional channel there’s probably some point in the history where one of the two counterparties was better off, where they had some incentive to try to cheat and try to go back to that prior state. We solve this by invalidating every single state once we make a new state. The penalty here is that if I ever catch you broadcasting a prior state then you basically get slashed, your money all goes away and there’s some strategy there which I have already talked about a bit. Naively, you keep all the prior states. But that’s not very good because now you have this linearly growing storage as you do these updates. People like lightning and other off-chain protocols because they can be super fast, but if you have to store a new state for every single update then that’s not very good. The other thing is that, say you’re going to the blockchain to close out the prior state, but that’s not very good because now you have control transactions going into the chain, which isn’t really good if you’re trying to make everything succinct itself.

History of succinct commitment invalidation

So let’s get into the history of how we currently do commitment invalidation.

When one of the first bi-directional channels was proposed, it was basically a bip68 mechanism with decrementing sequence locks. Use relative timelocks such that later states can go in before the prior states. The drawback was that this had a limit on the number of possible state updates. bip68 is basically this relative timelock mechanism. You’d start with a 30 day timelock, do an update, go to 29 days, then do an update, then it’s 28. The way that this enforced the latest state was that you could only broadcast the latest state using the timelocks. If you had state 30 then I’m going to broadcast state 28 before you can broadcast that because the timelock isn’t going to be expired yet. The drawback is that this has a limited number of possible updates, like if it’s a 30 day locktime then with a 1-day period that’s only 30 updates at most. You have to bake that into the lifetime of the channel.

Moving on, there was another proposal called a commitment invalidation tree, which is used in duplex payment channels (cdecker). You keep the decrementing timelock thing, but then you add another layer on top of this which is basically a tree layer. Because of the way that timelocks work, you can’t broadcast the leaf until you broadcast the root itself. Also, because they were bi-directional channels they had to be reset periodically. They were constructed using two uni-directional channels and when the balances got out of whack then you had to reset that. Every time you reset it, you had to decrement the timelock, and then every time you do that you had to measure this root thing, and that worked out pretty well because that would let you, with a tweak, call the kick-off transaction and then you have an indefinite lifetime channel. Pretty cool, but the drawback is that you have this additional off-chain footprint. If you had this massive tree that you needed to broadcast to get to your certain state, then it’s not very succinct because you have to broadcast all of those transactions, which is not particularly good if our goal is succinctness.

What lightning does now is called commitment revocations (hash or key based). You must reveal the secret of the prior state when accepting a new state. The drawbacks are that you must store O(log n) state for the remote party, the key derivation is more complex, and there’s asymmetric state. We use commitment revocations where, the way it works, every single state has a public key and when you transition to the next state you basically must give up that private key. It’s a little more complex than that. The cool part of this is that we had figured out a way to generate the public key in a deterministic way. The client constantly has a state and there’s this tree structure: choose a random number generator, generate all these secrets, and you as the receiver can collapse all of these down into basically the particular states themselves.

The goal here is to develop a commitment scheme with symmetric state. Commitment revocations in lightning right now have asymmetric state. This is basically due to the way we ascribe– we know what our transactions look like, if you broadcast mine on chain then I already have what I need. But that can get worrisome like multiparty channels… if we could make them symmetric, then multiparty channels wouldn’t have this weird combinatorial state blowup, and also we could basically make all of the state much smaller itself.

Commitment invalidation in lightning v1.0

So here’s a review of basically the way we do commitment invalidation in lightning right now. Well, I put v1.1 on the slide. But we’re on v1.0, I don’t know. I guess I was thinking ahead. The way that it works is that every single state has this thing called a “commitment point”, which is just like an EC base point. The way we derive each of the private keys for each of these points is using a shachain. You can think of a shachain as having a key k and an index i which gives you a random element at index i, and me as a receiver, because of this particular structure, I can collapse them. Any time I have shachain element 10 I can forget everything else and re-derive the data by only knowing element 10 and the parameters, more or less.
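
For reference, a sketch of per-commitment secret derivation along the lines of the BOLT 3 shachain scheme. It is simplified and should be read as illustrative rather than a drop-in implementation.

```python
import hashlib

def derive_secret(seed: bytes, index: int) -> bytes:
    """Derive the per-commitment secret for `index` from a 32-byte seed.

    Flipping one bit of the running value and hashing for every set bit of the
    index is what lets the receiver store only O(log n) intermediate values and
    still re-derive every older secret.
    """
    p = bytearray(seed)
    for bit in range(47, -1, -1):
        if (index >> bit) & 1:
            p[31 - bit // 8] ^= 1 << (bit % 8)   # flip bit `bit`, counting from the last byte
            p = bytearray(hashlib.sha256(bytes(p)).digest())
    return bytes(p)
```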

We do this key derivation scheme which is kind of complex-ish, but the important takeaway here is that when we do the state update, you give me a public key, and then I do some elliptic curve math where it turns out that once I reveal the private key to this thing, then only you can actually sign for that state. We make one of these commitment points, and one of the revocation points, and when we go to the next state then you basically reveal that to me and that’s how it works.

This one has a few drawbacks. It gets involved because we were trying to kind of defend against rogue key attacks and things like that. But I think we can make this simpler. The client has to store the current state and this log k state, and it’s log k where k is actually the number of state updates ever. The outsourcer needs a signature for every single state and in addition to that needs a signature for any other HTLC we have and it basically needs to collapse the log k state itself. So we need to make this simpler and more succinct.

OP_CHECKSIGFROMSTACK

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=12m15s

And now for a brief diversion.

I am proposing an addition to bitcoin called OP_CHECKSIGFROMSTACK. Bitcoin checks signatures. You have lots of checksigs when you’re validating the blockchain itself. One thing about bitcoin is that, it’s always assumed to be this thing called the sighash. Any time there’s a signature operation, implicitly we generate this thing called the sighash which is derived from the transaction itself. It’s a heuristic function where you can control what is actually being signed itself. That’s cool, but it’s a little bit restrictive. What if we could add the ability to validate signatures on arbitrary messages? This is super powerful because it can let you do things like delegation, have someone’s public key if it’s signed then take this output. We could also use this to make oracles, like an integer signed by Bitfinex and we could use this inside of a smart contract. We could also do things like having these “blessed” message structures where your protocol has a message that might be opaque but has a particular structure and it can only be signed by both participants. So you could sign this at some point in the future and use this as evidence some point later on.

This proposal isn’t particularly soft-fork safe for bitcoin yet. But basically you have message, signature, public key, and maybe a version and different versions in the future. Or we could have ECDSA or Schnorr signatures or whatever else. The opcode would tell you if it’s valid or whatever.

Signed sequence commitments

Now on to a new commitment invalidation method. This is something I call signed sequence commitments. Rather than now us using this revocation model, what we do is every single state has a state number. We then commit to that state number, and then we sign the commitment. That signed commitment goes into the script itself. This is cool because we have this random number R so when you’re looking at the script you don’t know which state we’re on. To do revocation, we could say, if you can open this commitment itself and because it’s signed you can’t forge it because it’s a 2-of-2 multisig, so we can open the commitment and then show me another commitment with an R sequence value that is greater than the one in this commitment and that means that there was some point in history where two of you cooperated but then one of the counterparties went back to this prior state and tried to cheat. So this is a little bit of a simpler construction. We have a signed sequence number, you can prove a new sequence number, and we can prove it because we hide the state of it, because it can be the case that when we go to the chain we don’t necessarily want to reveal how many state updates we’ve actually done.

Signing is pretty simple: you have this number R which we can derive from some deterministic method. We increment the state number. We have c, which is the commitment. The signature is important, it’s actually an aggregate signature. It’s the signature between both of us. There’s a few techniques for this, like two-party ECDSA multiparty computation techniques, there’s some signature aggregation stuff you could do, just somehow you collaborate and make a signature and it works.
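
A toy version of that invalidation check might look like the following. It uses a hash-based commitment and a stubbed `verify_2of2` callback standing in for the aggregate 2-of-2 signature; it is only meant to show the shape of the logic, not the real construction.

```python
import hashlib

def commit(state_number: int, r: bytes) -> bytes:
    # C = H(n || R): hides the state number until R is revealed
    return hashlib.sha256(state_number.to_bytes(8, 'big') + r).digest()

def proves_cheating(published, newer, verify_2of2):
    """`published` is the state someone put on chain, `newer` is the tuple I kept.

    Each tuple is (commitment, state_number, r, signature). If I can open a
    commitment that both of us signed with a *higher* sequence number, the
    published state was revoked at some point and the penalty clause applies.
    """
    c_pub, n_pub, r_pub, sig_pub = published
    c_new, n_new, r_new, sig_new = newer
    assert commit(n_pub, r_pub) == c_pub and verify_2of2(c_pub, sig_pub)
    assert commit(n_new, r_new) == c_new and verify_2of2(c_new, sig_new)
    return n_new > n_pub
```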

One cool part of this is that whenever I have state 10, I don’t need the prior states anymore. I don’t need any of the other extraneous states. 10 is greater than 9 and all the prior states. I now have constant client storage which is really cool. So we can have a million different states but I only need to keep the latest and this tuple thing with the signature, the commitment, opening the new commitment itself. That’s pretty cool because now outsourcers have constant-sized state per client. Before, it would grow by some factor the size of the history, but now it’s constant in this scheme, which is a pretty big deal. It’s also simpler in terms of key derivation, the script, things like that. We’re just checking a signature.

There’s a state machine in BOLT 2 where you update the channel state, but that doesn’t need to be changed to implement signed sequence commitments. We’ve dramatically simplified the state. The cool thing is that, because the state is symmetric now, in multiparty channels there’s no longer this combinatorial blowup of knowing or tracking who has published and what was their state number when I published and who spent from… it’s just way simpler to do signed sequence commitments.

Review of state outsourcing

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=15m55s

Signed sequence commitments can make state outsourcing more scalable. Outsourcing is used when a client isn’t necessarily able to watch the chain themselves. In the protocol, the channels are configured with a timelock parameter T. If the channel user is offline and the outsourcer detects an invalid state broadcast on chain, then the outsourcer can act within this period of time T and punish the other counterparty. The cool part is that because we can outsource this we can have lite clients. In the prior model, we send them these initial base points that allow them to re-derive the initial redeem script itself. What we do is encode the state number of the commitment into the sequence number and the locktime. The outsourcer sees the transaction on chain, looks at the sequence number, basically extracts that and sees what state they are on. Depending on what state they are on, they need a different payload. This is what we do for the HTLCs and this is cool because they can collapse down the revocation storage into a single thing, and we also give the outsourcers a description of what the justice transaction looks like. That’s where the transaction pulls away the money from the cheating party or whatever. We use bip69 which basically allows us to have deterministic ordering of the inputs and outputs for everything. For HTLCs, they require a new signature for every HTLC that was there. We do something with the txid. We have the full txid of the state, which we assume they can’t predict because it’s random what’s in there. We encrypt the blob keyed to that txid. Now when they see the txid on the chain, they can see if it decrypts; if it doesn’t then nothing happens, and if it does then they can decrypt it and act on it.
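
A toy illustration of the encrypted-blob idea, stdlib only, with a simple SHA256 keystream standing in for a real cipher. This is not the actual watchtower wire format, just the shape of the scheme described above.

```python
import hashlib

def make_hint_and_blob(breach_txid: bytes, justice_payload: bytes):
    """The outsourcer stores (hint, blob) and learns nothing until the breach appears on chain."""
    hint = breach_txid[:16]                     # enough to match the breach tx when it shows up
    key = hashlib.sha256(breach_txid).digest()  # the full txid acts as the decryption key
    keystream = b''
    counter = 0
    while len(keystream) < len(justice_payload):
        keystream += hashlib.sha256(key + counter.to_bytes(4, 'big')).digest()
        counter += 1
    blob = bytes(a ^ b for a, b in zip(justice_payload, keystream))
    return hint, blob

# On seeing a transaction on chain, the outsourcer checks its txid against stored
# hints; only a matching full txid lets it rebuild the key and decrypt the justice blob.
```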

There’s some open questions here about doing authentication. You don’t want someone to connect to the outsourcer and send a bunch of garbage, like sending them gigabytes of like random data. Instead, we want to make it somewhat more costly, which could be done with a ring signature scheme or it could be a bond or a bunch of other things. There’s also some conversations and questions there like do you pay per state, do you pay when they do the action, is it a subscription model? We’re working on these questions and they will be answered.

Lighter outsourcers

For outsourcers, this is the really cool part. Every single outsourcer before had to store either the encrypted blob or the prior state for every state. If you did a million states, that state storage can get really large. If they have a thousand clients with a million states each, it starts to get really infeasible and prohibitive.

With signed sequence commitments, the situation for outsourcers is much better. We only need a single tuple per client. The outsourcer sees the commitment, the state number, s is the signature, R is the randomness used in the commitment. We’re going to replace this every single time. The outsourcer has 200 bytes constant storage for each client. That’s all that needs to be updated per state update. No additional state needs to be stored.
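
The per-client record in that scheme is tiny; something on the order of the following, where the field sizes are rough and only meant to show that storage stays constant per client.

```python
from dataclasses import dataclass

@dataclass
class WatchtowerRecord:
    commitment: bytes    # c: 32-byte commitment to the current state number
    state_number: int    # n: latest sequence number seen for this client
    signature: bytes     # s: aggregate 2-of-2 signature over the commitment (~64 bytes)
    randomness: bytes    # R: 32-byte blinding used in the commitment
    # Each state update overwrites these fields in place: roughly 200 bytes per
    # client, no matter how many million updates the channel has gone through.
```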

What’s different from shachain is that in the old scheme, you had to give the outsourcer every single state contiguously. If you ever skipped a state, it wouldn’t be able to collapse the tree. You had leaves and you could collapse them to the parent. Without one of the leaves you can’t collapse the tree. But with signed sequence commitments, I can give you whatever state I want, and I can basically live with that little particular risk if I want to skip sending some updates to the outsourcer.

In the signed sequence commitments scheme, we’re just going to use that now encrypted blob approach, we’re going to add revocation type which tells outsourcers how to act on the channel itself, we’re going to give them the commitment, and the revocation info which is how they do the dispatching itself. This is pretty cool itself because now I just have constant state. But it’s a little bit different because if I want to enforce how the outsourcer can sweep the funds, then I need a little bit more, I need to give them the signatures again. As is right now, hey outsourcer if you do your job then you can get all of Bob’s money. It’s kind of a scorched earth approach. But maybe I want to get some money and maybe there’s some compensation or something and we could fix that as well.

Covenants

I skipped or deleted a slide. That’s too bad. You can basically use covenants (here’s a paper) on this approach to eliminate the signature completely. You only have one constant-sized signature. The covenant would sign Bob’s key and Bob can sweep this output but when he sweeps the output he has to do it in a particular way that pays me and pays him. This is cool because I can give him this public key and this tuple and you can only spend it in that one particular way. I can add another clause that says if Bob doesn’t act within three fourths of the way to the timelock expiration then anyone can sweep this. If you’re willing to sacrifice a bit of privacy, then you don’t have to worry about breaches because anyone can sweep the money and if anyone tries to breach then miners are going to do so immediately themselves anyway.

Outsourcer incentivization

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=20m35s

Ideally the outsourcers are compensated because we want people to have a good degree of security and we want them to feel that everything is safe and if someone tries some infidelity then they will be punished. Do you do it on a per state basis, or do you give the outsourcers a retribution bonus? Using a retribution bonus, you could say maybe they say alright I get 10% of the value of Bob’s funds or whatever and we could negotiate that. And they have to stick with this because the signature I give them covers this 10% and if they ever try to take more than 10% then it would be an invalid signature. If anything is violated then everything else will fail itself.

Ideally, breaches are unlikely. They shouldn’t happen often. Maybe there’s like 3 breaches a year, everyone will freak out, and then everything will be fine afterwards. Maybe we should actually do a pay-per-state model where I would pay the outsourcer for every single state. Periodically, they could actually remove this, but the cool thing about this new channel design is that the state is constant so you’re only replacing something. They are not allocating more state per client in terms of history.

There’s a cool idea floating around on the lightning-dev mailing list and this guy Anton implemented this as well, it’s basically each outsourcer has its own specific ecash token. I pay them maybe over lightning, maybe with a zero-knowledge contingent payment, or maybe I just give them cash, but I have these outsourcer’s ecash tokens now. I basically unblind the token if I receive them from them. The cool thing is that I can use the tokens and they aren’t linked to my initial purchase. Every time I send a new state, I also pay the outsourcer with the token and he’s fine with that because he got compensated for it for some reason.

I think this is, we could do both, it just depends on how frequently we’re going to see breaches in the future and how this market is going to develop around the conversation of outsourcers and paying outsourcers.

Scaling outsourcing with outsourcer static backups

One thing we could do is that some people talk about how do you actually do backups in lightning itself. This is a little bit of a challenge because wallets are more stateful. In the case of a regular on-chain wallet, you basically just have your bip32 seed and if you have that seed you can scan the chain and get everything else back. But in lightning, you also have channel data, which includes parameters of the channel, which pubkeys were used, who you opened the channel with, the sizes, etc. There’s static state, and then there’s dynamic state which you outsource after every state update with the outsourcer.

Maybe we could ask the outsourcer to also store this very small payload, like a few hundred bytes at most. We can reuse our sybil-resistant authentication scheme to allow the user to retrieve the blob from the outsourcer. The user can lose all of their data, and as long as they have their original seed, they can reconstruct the data deterministically, authenticate with the outsourcer, get their backup blob, and then rejoin the network. There’s a risk that the backup is out of date. In the protocol, when two clients reconnect, they try to sync channel state. If the channel state is off, they can prove to each other that they are on a particular state, and the value that they give to the user allows the user to sweep the transaction on chain. I think this covers the backup scenario pretty well and it depends on the people who are running the outsourcers or the watchtowers themselves.

There’s a question of basically, who watches the watchtowers? Who makes sure they are actually storing the data? You can use a proof-of-retrievability protocol over the static and dynamic states they are storing. So ask a watchtower to make sure that they are storing data, and if they are, provide me with the data. And if not, then you can pay for this again just like paying the watchtower but this time it’s a watchwatchtowertower.
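
A minimal challenge-response sketch of that proof-of-retrievability idea. The helper names are hypothetical; a real protocol would need authentication and payment on top.

```python
import os, hashlib, hmac

def make_challenge() -> bytes:
    return os.urandom(32)

def tower_response(stored_blob: bytes, challenge: bytes) -> bytes:
    # The tower can only produce this digest if it still holds the full blob.
    return hashlib.sha256(stored_blob + challenge).digest()

def client_verify(regenerated_blob: bytes, challenge: bytes, response: bytes) -> bool:
    # The client can regenerate the blob from its seed, so it can check the answer.
    expected = hashlib.sha256(regenerated_blob + challenge).digest()
    return hmac.compare_digest(expected, response)
```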

Second stage HTLCs

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=24m5s

The way that HTLCs work currently is that they aren’t just shoved into the commitment transaction because we realized there were some issues with shoving them into the timelocks. What happened before is that if you wanted to have a longer CSV value which was your security parameter in terms of you being able to act on something in a timely manner, then you needed a longer timelock. As you had longer routes, you would have timelocks of 30, 40, 50 days, a very long time. So we wanted to separate that.

Now claiming an HTLC is a two-stage process. Once you broadcast the second-stage HTLC transaction, you wait out the claim process, which is a CSV delay value, and afterwards you can claim. The CSV value and HTLC value are in two different scripts and they are fully decoupled from each other. This is the current commitment design in BOLT 3, where CSV and CLTV are decoupled in HTLCs.

There’s some downsides, which is that we have a distinct transaction for every HTLC. Every time we update a state, if we have 1000 HTLCs, then I need to give you a thousand signatures and you need to verify all of those signatures. And we need to store every single signature for every HTLC if we want to be able to handle breaches in the future, and the outsourcer needs to be able to handle these HTLCs as well.

The solution here is to use covenants in the HTLC outputs. This eliminates signature + verify with commitment creation, and eliminates signature storage of the current state. We’re basically making an off-chain covenant with 2-of-2 multisig. If we actually have real covenants then we don’t need those signatures anymore. The goal of the covenants was to force them to wait the CSV delay when they were trying to claim the output. But with this, they can only spend the output if the output created in the spending transaction actually has a CSV delay clause. You basically add an independent script for the HTLC revocation clause reusing the commitment invalidation technique.

As a stopgap, you could do something with sighash flags to allow you to coalesce these transactions together. Right now if I have 5 HTLCs, I have to do that on chain, there’s 5 different transactions. We could allow you to coalesce these into a single transaction by using more liberal sighash flags, which is cool because then you have this single transaction get confirmed and after a few blocks you can sweep those into your own outputs and it works out fine.

Multi-party channels

There’s some cool work by some people at ETH doing work on multi-party channels ("Scalable funding of bitcoin micropayment channel networks"). It’s a different flavor of channels because there’s some issues about how to make the hierarchy flat, paying for punishment, if I have to update my own state then do I have to wait for everyone else to be online. What they did instead was they made this hierarchical model of channels itself.

There’s a few different concepts. There’s “the hook”. It’s the- if there was five of us, there’s a 5-of-5 multisig. Assuming signature aggregation, we can aggregate that down into a single signature and it looks like a regular channel and even though it might be 200-party channel, it can be made more succinct using signature aggregation.

Once you have the “hook” which creates the giant multisig itself, then you have “allocations” and what the allocations do is that they decide on the distribution of funds inside each channel. If you look at the diagram, it’s just a series of 2-of-2 channels. You can keep doing this over and over again. The cool takeaway here is that they realized you could embed a channel inside another channel and then do recursively deep channels. Every time you make a new channel, it can have its own completely different rules.

I could have a bi-directional channel on one side, and on the other side I could have a multi-party channel and all of these other things. Nobody outside would know. We could do all of this collaboratively off-chain without using any on-chain transactions themselves.

This is pretty cool but there’s some issues with state blowup on chain with this. One of the reasons why they didn’t do key-based revocation with this is because you have asymmetric state and you have this blowup that scales with the number of participants and that gets prohibitive. In the paper, they used the invalidation tree again. Any time you want to update the hook, like if you want to do different allocations or decide that you want to get rid of one of the parties, then you have to update the hook eventually depending on the distribution of the channel itself. And this would require an invalidation mechanism. They use tree-based invalidation using the invalidation tree but the underlying problem is that it can get really large. As you update the hook more and more, you have a tree with a bigger depth. You need to traverse a huge tree to get to the hook, but even below that there might be even more transactions. So you could have hundreds of transactions on chain just to force-close.

However, we could reuse the signed sequence commitments to solve some of these problems. We could use this for the hook update. This makes it more scalable. Everyone would have constant sized state independent of the number of participants in the channel, assuming that I’m keeping track of my own individual transactions. This has the same benefits as far as outsourcing. If we’re using signature aggregation, then everything is very very tiny, and that works out well too.

Fee control

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=29m

One issue that we realized is that it’s not super easy to modify the fees in the channel once it’s been broadcast. Let’s say that my peer is offline and I want to force-close the channel. The fee on the channel itself is actually locked ahead of time. Say I was offline for two days and fees on the chain go up; my transaction might have insufficient fee to get confirmed in a timely manner. At this point, I could use child-pays-for-parent where I create a bigger package with a higher fee rate, but I can’t do that because the output is timelocked, so I would need to wait for the timelock in order to sweep it, and in order to sweep it I would need it to be confirmed, and in order for it to be confirmed… you’re in a weird position right? So one idea is to have a reserve. What this means right now is that, the way the design works, we ensure that each participant has a certain percentage of money inside the channel at all times. The idea here is that you create an anyonecanspend reserve output that anyone can use to anchor the transaction. Even if I messed up guessing my fees initially, I can still use the anyonecanspend output.

We can eliminate the fee guessing game. In the protocol, there’s an update fee message, which can go because of the fee going up or down. If there’s very rapid changes in the fee climate on the chain, then this can get kind of difficult. Fee estimation is a hard problem.

The solution is that you don’t want to apply fees to the commitment transaction at all. It has zero fees. You have to add a new input to get it confirmed. So we propose using more liberal sighash flags to allow you to add an input or an input and an output. After I broadcast, that’s when I need to decide what the fees are. Whenever I add this fee input (or whatever), the txid of the transaction actually changes, and if the txid changes and if we had any dependent transactions then those all become invalidated. So what we do is we sign that with SIGHASH_NOINPUT.

I don’t know if you remember it, but SIGHASH_NOINPUT was proposed a long time ago. Normally a signature commits to the txid of the transaction that created the input and the position of that input. With SIGHASH_NOINPUT we’re saying don’t sign that. Only sign the script. I could have a dependent transaction that is still valid, even if the parent transaction changes, because the pubkeyscript stays the same. You don’t have to worry about fees and you can regulate fees completely, which gives you more control over the confirmation time of your output.
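
To make the difference concrete, here is a toy comparison of what the two signature digests cover. This is not real BIP 143-style sighash code, just the idea.

```python
import hashlib

def sighash_classic(prev_txid: bytes, prev_vout: int, script: bytes, outputs: bytes) -> bytes:
    # Ordinary sighash: commits to the exact outpoint being spent, so if the
    # parent's txid changes (e.g. a fee input was added), this signature dies.
    return hashlib.sha256(prev_txid + prev_vout.to_bytes(4, 'little') + script + outputs).digest()

def sighash_noinput(script: bytes, outputs: bytes) -> bytes:
    # SIGHASH_NOINPUT idea: don't commit to the outpoint, only to the script
    # and outputs, so a dependent transaction stays valid even after the
    # parent is re-signed with an extra fee input.
    return hashlib.sha256(script + outputs).digest()
```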

Q&A

Q: To sum it up, the upgrades for bitcoin script that would be helpful are covenants, SIGHASH_NOINPUT, and OP_CHECKSIGFROMSTACK.

A: Yeah. With SIGHASH_NOINPUT and OP_CHECKSIGFROMSTACK we could do a lot. Covenants are the holy grail and I didn’t even get into what you could do there. There’s some controversy around that. I think it’s more likely for OP_CHECKSIGFROMSTACK to get in. These are minor additions, unlike covenants. The possibilities blow up because you have signed data. It’s extremely powerful. If you have OP_CHECKSIGFROMSTACK then you might accidentally enable covenants.

Q: There have been a bunch of discussions about how large centralized lightning network hubs could lead to censorship. Have you done any research on how to keep the lightning network decentralized?

A: There’s zero barrier to entry. There’s some small costs, it’s not high. The capital requirements to have a massive hub are really prohibitive. There’s not really censorship because, depending on the routing layer, we basically use onion routing at that point. They don’t really know exactly who you’re sending money to. They could cancel the transaction but then you route around them.

Q: I was curious if there was a limit to the number of state channels that you can have at one point. If not, would the creation of unlimited state channels in a sybil manner be an attack vector?

A: The upper bound is the number of UTXOs you can have in the UTXO set. Here we had one output with 20 channels. It’s effectively unbounded. You need to get into the chain. With sybil attacks, you’re paying money to get into the chain so there’s some limits there.

Q: There was an assumption about attackers not colluding with miners.

A: Let’s say there was a miner and there’s this massive channel and my counterparty colludes with them. They could choose not to mine my justice transactions. You could say that if there’s only a 1% chance to get into the chain then that might be enough for fraud to occur. This can also get weird if miners are reorging blocks with my transactions. With a committed transactions format, maybe with a zero-knowledge proof, only later on would it be revealed what’s being spent. Commit-reveal could get past this problem of miner censorship or whatever.

Q: I am curious about the- as you upgrade the protocol, how do you anticipate an upgrade schedule for I guess there’s the wallets, hubs, outsourcers?

A: There’s actually a bunch of extension points in the protocol, including the onion protocol and the service bits on lightning node messages. We share the layer 3, HTLC layer. Individual links might have different protocols but they share the same HTLCs. We could have feature bits that we can check to see if someone knows the new fancy channel tech. I foresee it being gradual. Even in the future there will be many channel types and many routing layers. Everything is at the edges, and that’s all that really matters, and that’s a cool design element of the way it turned out.

Q: I like this multi-party channel design.

A: They are very cool.

Q: Can you elaborate on some of the research work done for secure multi-party computation?

A: There’s a bunch of protocols that are enabled by OP_CHECKSIGFROMSTACK where inside of the multi-party computation if you can generate signatures and those signatures sign particular states then you could use that for invalidation as well. Multi-party channels can be used to pool liquidity. There’s opportunities for gaming, like one in each geographic region and all of the economy is done through a single multi-party channel and then you could route commerce between those giant channels. It still works itself out.

Cool, thank you.

https://www.reddit.com/r/Bitcoin/comments/7u79l9/hardening_lightning_olaoluwa_osuntokun_roasbeef/

\ No newline at end of file +https://www.youtube.com/watch?v=V3f4yYVCxpk

https://twitter.com/kanzure/status/959155562134102016

slides: https://docs.google.com/presentation/d/14NuX5LTDSmmfYbAn0NupuXnXpfoZE0nXsG7CjzPXdlA/edit

previous talk (2017): http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2017/lightning-network-security-analysis/

previous slides (2017): https://cyber.stanford.edu/sites/default/files/olaoluwaosuntokun.pdf

Introduction

I am basically going to go over some ways that I’ve been thinking about basically hardening lightning network, in terms of making security better and making the client more scalable itself, and then I’ll talk about some issues and pain points that come up when you’re doing fees on mainnet. And there’s going to be some relatively minor changes that I propose to Bitcoin in this talk. We’ll have to see if they have a chance of getting in. They are not big sweeping changes. One is a new sighash type and one is a new opcode and then there’s covenants but that’s an entirely separate story itsef of course.

Talk overview

A quick overview first. I am going to give an overview of lightning’s security model. I am not going to go much into lightning’s overview because I kind of assume that you at least vaguely know what lightning network is. Payment channels, you connect them, you can route across them and that’s the jist of it. We’re going to talk about hardening the contract breach event. Some of you have talked about ways that we can do that in terms of addin more consensus changes, or going at it from the point of view of a stratey when a breach actually happens and there’s a large mempool backlog. Then, I am going to introduce a new lightning channel type and a new channel design. Basically making the channels more succinct, meaning you’d have to store less history and things like making outsourcing a lot more efficient as well. And then I am going to talk about kind of like outsourcing and a newer model and a model that assumes that– that maintains client privacy as much as possible, because if you want this to scale in the outsourcing to support a large number of clients then we want them to store as little state as possible. And then we’re going to go into making lightning more succinct on chain. If it’s an off-chain protocol, then we want to make sure it has the smallest on-chain footprint possible, otherwise it’s not really scaling because you’re hitting the chain every single time. It should be off-chain itself.

Security model

There’s a diagram of the layers of lightning over here. People always say it’s like “layer 2”. But to me there’s a lot more layers on top of that. To me, layer 1 is bitcoin and the blockchain itself. Layer 2 is the link layer between channels. This is basically how do I open a channel between myself and Bob and how do Bob and I actually update the channels. Then there’s end-to-end routing and the HTLCs and onion routing and whatever else is involved there. And then you have an application layer for things being built on top of lightning, such as exchanges.

I had emojis on this slide, but I guess it didn’t translate somehow. That’s okay.

The way that lightning works is that it uses bitcoin or another blockchain as a dispute-mediation system. Instead of having every transaction go on bitcoin itself, we do contract creation on bitcoin itself. We can do enforcement there. But the execution is off-chain. We do the initialization on the blockchain, then we do the execution on the side off of the chain. This makes it succinct and that’s fine. We treat the chain as a trust anchor. Basically the way it works is that any time that we have a dispute, we can go to the blockchain with our evidence and transactions and then we can publish these transactions more broadly and we can show that hey my counterparty is defrauding me he is violating some clause of the lightning smart contract. And then I get all my money back, basically.

We have the easy way and the hard way. Optimistically, we can do everything off-chain. If we want to exit the contract, then we do signatures between each other and it goes to the chain. The hard way is in the case where I have to go on-chain to actually do a property dispute because of a violation in the contract. When we’re handling disputes, the way we think about it is that we can write to the chain “eventually”. This is configured by a time parameter called T which is like a bip112 CSV value. It means that every time someone goes to the chain there’s a contestation period that is open for time period T to basically refute their claim. We say “eventually” because T can be configured. You want to configure “T” based on the size of the channel. If it’s a $10 channel then maybe you want a few hours, and if it’s a million dollar channel then maybe you want T to be closer to a month or whatever.

The other thing we assume is that miners or pool operators are not colluding against us. They could censor all of our transactions on the chain. As part of this assumption, we assume a certain minimum level of decentralization of the blockchain miners because otherwise you can get blacklisted or something. There are ways that you could try to make the channels blend in with everything else and all of the other transactions occurring on the blockchain, and there’s some cool work on that front too.

Strategy: Hardening contract breach defense

Moving on, let’s talk about hardening contract breach defense strategy. Something that comes up a lot and people ask this is, how do I handle contract breach in the case of massive backlog? This would be the case where, for whatever reason, my fees are sky high, someone is spamming the chain and there’s a contract breach. What am I going to do from there on? There’s some things that people have worked on for adding new consensus changes to bitcoin, such as the timelock would stop after the block was ever so full or possibly you would have some pre-allocated arena where you could do battle for your smart contract. This is looking from a bit more strategic standpoint and looking at the dynamics on the fees in terms of handling the commitment transaction itself.

Whenever someone tries to broadcast a prior state, perhaps out of order, they are basically locked to that particular state. Bob basically had $2 on the newest state and he had $10 on the other state. He’s going to go back to the $10 state. At that point, Bob can only access just his money from the channel itself. Bob revoked some of the outputs. What his counterparty can do is basically start to progressively siphon Bob fees basically into miner fees. This is now a situation where there’s some strategy there because Bob has two options: he can either stop, or I can keep going and I’m eventually going to siphon all of his money to miner’s fees. The only way that his will actually succeed to get that prior state transaction into the chain is if he pays more in miner’s fees than he actually has in his balance himself. You can either do that by using child-pays-for-parent. I think this is a pretty good approach. What we’re doing here is that the justice transaction is going to be using replace-by-fee, we’re going to use that to progressively bump up and the fees the money of the cheater basically into the fees. I think this is a pretty good stopgap for the way things work right now. If you have someone that can be malicious at all, you can almost always penalize this person. So even if there’s a massive backlog, assuming that the miner wants the higher fee rate, then well you know Bob basically gets nothing. This adds further incentive from actually, people trying to do cheating, because now there is this strategy where basically I can just give away your money to miner’s fees and the adversary gets nothing. I’m still made whole through this entire ideal, so whatever happens I’m okay, I want to penalize and punish Bob in the worst way possible and I’m going to jump into the queue in front of everything else in the mempool. I could have put maybe 20 BTC towards fees- I’m fine, I’m just punishing my counterparty, it’s scorched earth, and then I wash my hands and walk away and everything’s okay.

Reducing client side history storage

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=5m45s

Now we’re going to talk about scaling lightning itself, basically from the client side. The way it works is that because contract execution is local, we have a local transcript. Every time we do a new state update, there's basically a shadow chain, and by shadow chain I mean that we have a "blockchain" in the sense that each state refers to the previous state in a particular way, and we can do state transitions between these states themselves. The shadow chain is only manifested on the chain in the case of a contract breach or if I just want to force-close. Typically, in the normal case, we do a cooperative close, where both sides sign with the multisig, it goes on chain, and you see nothing else. The shadow chain is only manifested on the blockchain in the case of a breakdown or some sort of breach remedy situation.

These state transitions in the channel can be very generic; later this year we might get fancier with some cool stuff. Right now it's just adding HTLCs, removing HTLCs, and keeping track of prior state. The goal here is to reduce the amount of state that the client needs to keep track of. It would be good if they could keep track of less state, because then they could do more high-throughput transactions. It has implications for outsourcing too. If the state requirements for the client to actually act on the contract are reduced, then the outsourcers become more succinct as well, and if the outsourcers are more succinct then more people are going to run them, and if more people run them then we get more security, so it's a reasonable goal to pursue.

Overview of commitment invalidation

Before we get into reducing the amount of state, I am going to talk about some of the history of how to do commitment invalidation in lightning. You have a series of states. You walk forward through these states one-by-one. Each time you go to a new state, you revoke the old one. You move from state 1, you revoke it, state 2 gets revoked, now you're on state 3. The central question of how we do channels is how we do invalidation. One thing to note is that this only matters for bi-directional channels where you're going both ways. If it's a uni-directional channel, every single state update I do basically benefits the other participant in some way, so they don't really have an incentive to go back to prior states. But if you have a bi-directional channel, there's probably some point in the history where one of the two counterparties was better off, where they had some incentive to try to cheat and go back to that prior state. We solve this by invalidating every single state once we make a new state. The penalty is that if I ever catch you broadcasting a prior state, you basically get slashed: your money all goes away, and there's some strategy there which I already talked about a bit. Naively, you keep all the prior states. But that's not very good, because now you have storage growing linearly as you do these updates. People like lightning and other off-chain protocols because they can be super fast, but if you have to store a new state for every single update then that's not very good. The other option is going to the blockchain to close out each prior state, but that's not very good either, because now you have control transactions going into the chain, which isn't great if you're trying to make everything succinct.

History of succinct commitment invalidation

So let’s get into the history of commitment invalidation and how we currently do it.

When one of the first bi-directional channels was proposed, it was basically a bip68 mechanism with decrementing sequence locks: use relative timelocks such that later states can get in before prior states. bip68 is basically this relative timelock mechanism. You'd start with a 30-day timelock, do an update, go to 29 days, do an update, then it's 28. The way this enforced the latest state was through the timelocks: if your old state has a 30-day lock and my latest state has a 28-day lock, mine can go in before yours, because your timelock won't have expired yet. The drawback is that this has a limited number of possible updates; if it's a 30-day locktime with a 1-day step, that's only 30 updates at most. You have to bake that into the lifetime of the channel.

Moving on, there was another proposal called the commitment invalidation tree, which is used in duplex payment channels (cdecker). You keep the decrementing timelock thing, but you add another layer on top, which is basically a tree layer. Because of the way the timelocks work, you can't broadcast a leaf until you broadcast the root itself. Also, because these were bi-directional channels constructed from two uni-directional channels, they had to be reset periodically when the balances got out of whack. Every time you reset it, you decremented the timelock and redid this root thing, and that worked out pretty well because, with a tweak called the kick-off transaction, you get an indefinite-lifetime channel. Pretty cool, but the drawback is this additional off-chain footprint. If you have a massive tree that you need to broadcast to get to your particular state, it's not very succinct, because you have to broadcast all of those transactions, which is not particularly good if our goal is succinctness.

What lightning does now is called commitment revocations (hash-based or key-based): you must reveal the secret of the prior state when accepting a new state. The drawbacks are that you must store on the order of log(n) state for the remote party, the key derivation is more complex, and the state is asymmetric. We use commitment revocations where every single state has a public key, and when you transition to the next state you basically must give up the corresponding private key. It's a little more complex than that. The cool part is that we figured out a way to generate the public keys in a deterministic way. The client keeps a seed and there's this tree-type structure: generate all these secrets from it, and you as the receiver can collapse all of these down into basically the particular states themselves.

The goal here is to develop a commitment scheme with symmetric state. Commitment revocations in lightning right now have asymmetric state. This is basically due to the way we assign the transactions: we each know what our own transactions look like, so if you broadcast mine on chain then I already have what I need. But that can get worrisome for things like multiparty channels. If we could make the state symmetric, then multiparty channels wouldn't have this weird combinatorial state blowup, and we could also make all of the state much smaller.

Commitment invalidation in lightning v1.0

So here’s a review of basically the way we do commitment invalidation in lightning right now. Well, I put v1.1 on the slide, but we're on v1.0; I guess I was thinking ahead. The way it works is that every single state has this thing called a "commitment point", which is just an EC point. The way we derive the private keys for each of these points is using a shachain. You can think of a shachain as having a key k and an index i which gives you a random element at index i, and as the receiver, because of this particular structure, I can collapse them. Any time I have shachain element 10, I can forget everything else and re-derive that data knowing only element 10 and the parameters, more or less.
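For reference, here is a minimal sketch of the derivation the shachain is built on, following the BOLT 3 "generate_from_seed" description; details such as the 48-bit index width are assumptions on my part.

    import hashlib

    def generate_from_seed(seed: bytes, index: int) -> bytes:
        """Derive the per-commitment secret for an index by flipping the set bits
        of the 48-bit index into the seed, hashing after each flip (BOLT 3 sketch)."""
        p = bytearray(seed)
        for b in range(47, -1, -1):
            if (index >> b) & 1:
                p[b // 8] ^= 1 << (b % 8)
                p = bytearray(hashlib.sha256(bytes(p)).digest())
        return bytes(p)

Roughly, a received secret whose index ends in zero bits can act as the root for re-deriving the secrets "below" it, which is why the receiver only needs on the order of log(n) storage rather than one secret per state.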

We do this key derivation scheme which is kind of complex-ish, but the important takeaway is that when we do the state update, you give me a public key, and then I do some elliptic curve math where it turns out that once I reveal the private key to this thing, only you can actually sign for that state. We make one of these commitment points and one of the revocation points, and when we go to the next state you basically reveal that to me, and that's how it works.

This one has a few drawbacks. It gets involved because we were trying to defend against rogue key attacks and things like that. But I think we can make this simpler. The client has to store the current state plus this log(k) state, where k is actually the number of state updates ever. The outsourcer needs a signature for every single state, and in addition needs a signature for every HTLC we have, and it basically needs to collapse the log(k) state itself. So we need to make this simpler and more succinct.

OP_CHECKSIGFROMSTACK

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=12m15s

And now for a brief diversion.

I am proposing an addition to bitcoin called OP_CHECKSIGFROMSTACK. Bitcoin checks signatures; you have lots of checksigs when you're validating the blockchain. One thing about bitcoin is that the message is always assumed to be this thing called the sighash. Any time there's a signature operation, we implicitly generate this sighash, which is derived from the transaction itself, and you have some control over what is actually being signed. That's cool, but it's a little bit restrictive. What if we could add the ability to validate signatures on arbitrary messages? This is super powerful because it lets you do things like delegation: if this other public key has signed, then you can take this output. We could also use this to make oracles, like an integer signed by Bitfinex that we could use inside of a smart contract. We could also have these "blessed" message structures, where your protocol has a message that might be opaque but has a particular structure and can only be signed by both participants, so you could sign it at some point and use it as evidence later on.

This proposal isn’t particularly soft-fork safe for bitcoin yet. But basically you have a message, a signature, a public key, and maybe a version, with different versions in the future; we could have ECDSA or Schnorr signatures or whatever else. The opcode tells you whether the signature is valid.
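Conceptually (this is my sketch, not a specification), the difference from OP_CHECKSIG is just where the message comes from:

    def op_checksigfromstack(message: bytes, signature: bytes, pubkey: bytes, verify) -> bool:
        """Unlike OP_CHECKSIG, which implicitly signs the transaction's sighash,
        the message here is an arbitrary stack item; `verify` stands in for
        whichever signature scheme a version byte would select."""
        return verify(pubkey, message, signature)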

Signed sequence commitments

Now on to a new commitment invalidation method. This is something I call signed sequence commitments. Rather than using the revocation model, every single state has a state number. We commit to that state number, and then we sign the commitment. That signed commitment goes into the script itself. This is cool because we have this random number R, so when you're looking at the script you don't know which state we're on. To do revocation, we say: open this commitment (because it's signed by the 2-of-2 multisig, you can't forge it), and then show me another commitment with a sequence value greater than the one in this commitment. That means there was some point in history where the two of you cooperated on a later state, but then one of the counterparties went back to this prior state and tried to cheat. So this is a little bit of a simpler construction. We have a signed sequence number, you can prove a newer sequence number, and we hide the state number in the commitment, because when we go to the chain we don't necessarily want to reveal how many state updates we've actually done.

Signing is pretty simple: you have this number R, which we can derive with some deterministic method. We increment the state number. We have c, which is the commitment. The signature is important: it's actually an aggregate signature between both of us. There are a few techniques for this, like two-party ECDSA multiparty computation techniques, or signature aggregation schemes; somehow you collaborate, make a signature, and it works.
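Here is a minimal sketch of the update step as I understand it; the hash-based commitment encoding is an assumption for illustration, and aggregate_sign stands in for whatever 2-of-2 aggregate signing technique the two parties use.

    import hashlib, os

    def next_state(prev_state_number: int, aggregate_sign):
        """Produce the (state number, R, commitment, signature) tuple for the next state."""
        n = prev_state_number + 1
        r = os.urandom(32)                                      # hides how many updates we have done
        c = hashlib.sha256(n.to_bytes(8, "big") + r).digest()   # commitment to the state number
        sig = aggregate_sign(c)                                 # 2-of-2 aggregate signature over c
        return n, r, c, sig

Revocation is then: open the commitment found in the script, and exhibit another validly signed commitment that opens to a strictly greater state number.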

One cool part of this is that whenever I have state 10, I don't need the prior states anymore; 10 is greater than 9 and all the prior states. I now have constant client storage, which is really cool. We can have a million different states, but I only need to keep the latest one and this tuple with the signature, the commitment, and the opening of the new commitment. That's pretty cool because now outsourcers have constant-sized state per client. Before, it would grow with the size of the history, but now it's constant in this scheme, which is a pretty big deal. It's also simpler in terms of key derivation, the script, things like that. We're just checking a signature.

There’s a state machine in BOLT 2 where you update the channel state, but that doesn't need to be changed to implement signed sequence commitments. We've dramatically simplified the state. The cool thing is that, because the state is now symmetric, in multiparty channels there's no longer this combinatorial blowup of tracking who has published, what their state number was when I published, who spent from what... it's just way simpler with signed sequence commitments.

Review of state outsourcing

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=15m55s

Signed sequence commitments can make state outsourcing more scalable. Outsourcing is used when a client isn't necessarily able to watch the chain themselves. In the protocol, channels are configured with a timelock parameter T. If the channel user is offline and the outsourcer detects an invalid state broadcast on chain, then the outsourcer can act within this period T and punish the other counterparty. The cool part is that because we can outsource this, we can have lite clients. In the prior model, we send the outsourcer the initial base points that allow them to re-derive the initial redeem script. We encode the state number of the commitment into the sequence number and locktime fields, so when the outsourcer sees a transaction on chain, it looks at those fields, extracts the state number, and sees which state it is. Depending on the state, it needs a different payload. This is what we do for the HTLCs, and this is cool because they can collapse the revocation storage down into a single thing, and we also give the outsourcers a description of what the justice transaction looks like; that's the transaction that pulls the money away from the cheating party. We use bip69, which gives us deterministic ordering of the inputs and outputs. For HTLCs, they require a new signature for every HTLC that was there. We also do something with the txid: we take the full txid of the state, which we assume they can't predict because what's in it is random, and we encrypt the blob under basically that txid. When they see the txid on chain, they check whether the blob decrypts; if it doesn't, nothing happens, and if it does, they can act on it.
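For the state-number extraction step, here is a sketch of how a 48-bit (obscured) commitment number can be packed into the locktime and sequence fields, in the spirit of BOLT 3; the obscuring XOR derived from both parties' payment basepoints is omitted.

    def pack_state_number(obscured_n: int) -> tuple:
        locktime = 0x20000000 | (obscured_n & 0xFFFFFF)           # low 24 bits
        sequence = 0x80000000 | ((obscured_n >> 24) & 0xFFFFFF)   # high 24 bits
        return sequence, locktime

    def unpack_state_number(sequence: int, locktime: int) -> int:
        return ((sequence & 0xFFFFFF) << 24) | (locktime & 0xFFFFFF)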

There’s some open questions here about doing authentication. You don’t want someone to connect to the outsourcer and send a bunch of garbage, like sending them gigabytes of like random data. Instead, we want to make it somewhat more costly, which could be done with a ring signature scheme or it could be a bond or a bunch of other things. There’s also some conversations and questions there like do you pay per state, do you pay when they do the action, is it a subscription model? We’re working on these questions and they will be answered.

Lighter outsourcers

For outsourcers, this is the really cool part. Before, every single outsourcer had to store either the encrypted blob or the prior state for every state. If you did a million states, that storage can get really large. If they have a thousand clients with a million states each, it starts to get really infeasible and prohibitive.

With signed sequence commitments, the situation for outsourcers is much better. We only need a single tuple per client: the commitment, the state number, s the signature, and R the randomness used in the commitment. We replace this tuple every single time. The outsourcer has 200 bytes of constant storage for each client. That's all that needs to be updated per state update; no additional state needs to be stored.
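The per-client record is then something like this; the field names are mine and the sizes approximate.

    from dataclasses import dataclass

    @dataclass
    class OutsourcedChannelState:
        commitment: bytes    # c: commitment to the current state number (32 bytes)
        state_number: int    # latest state/sequence number seen
        signature: bytes     # s: aggregate 2-of-2 signature over the commitment
        randomness: bytes    # R: randomness that opens the commitment (32 bytes)
        # Overwritten in place on every update: constant storage per client.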

What’s cool compared to shachain is that in the old scheme, you had to give the outsourcer every single state contiguously. If you ever skipped a state, it wouldn't be able to collapse the tree: you had leaves and you could collapse them into the parent, but without one of the leaves you can't collapse the tree. With signed sequence commitments, I can give you whatever state I want, and I can live with that little bit of risk if I want to skip sending some updates to the outsourcer.

In the signed sequence commitments scheme, we just use the encrypted blob approach again: we add a revocation type which tells outsourcers how to act on the channel, we give them the commitment, and the revocation info which is how they do the dispatching. This is pretty cool because now I just have constant state. But it's a little bit different if I want to enforce how the outsourcer can sweep the funds; then I need a little bit more, I need to give them the signatures again. As it is right now, it's: hey outsourcer, if you do your job then you get all of Bob's money. It's kind of a scorched-earth approach. But maybe I want to get some of the money, maybe there's some compensation or something, and we could fix that as well.

Covenants

I skipped or deleted a slide, that's too bad. You can basically use covenants (there's a paper on this) in this approach to eliminate the signature completely. You only have one constant-sized signature. The covenant would cover Bob's key: Bob can sweep this output, but when he sweeps it he has to do it in a particular way that pays me and pays him. This is cool because I can give him this public key and this tuple and he can only spend it in that one particular way. I can add another clause that says if Bob doesn't act within three-fourths of the way to the timelock expiration, then anyone can sweep this. If you're willing to sacrifice a bit of privacy, then you don't have to worry about breaches, because anyone can sweep the money, and if anyone tries to breach then miners are going to sweep it immediately anyway.

Outsourcer incentivization

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=20m35s

Ideally the outsourcers are compensated, because we want people to have a good degree of security and to feel that everything is safe, and that if someone tries some infidelity they will be punished. Do you do it on a per-state basis, or do you give the outsourcers a retribution bonus? With a retribution bonus, they might say: alright, I get 10% of the value of Bob's funds, and we could negotiate that. And they have to stick with this, because the signature I give them covers that 10%, and if they ever try to take more than 10% then the signature is invalid. If they violate anything, everything else fails.

Ideally, breaches are unlikely. They shouldn't happen often. Maybe there are like 3 breaches a year, everyone will freak out, and then everything will be fine afterwards. Maybe we should actually do a pay-per-state model where I pay the outsourcer for every single state. Periodically, they could actually remove this, but the cool thing about this new channel design is that the state is constant, so you're only replacing something; they are not allocating more state per client in terms of history.

There’s a cool idea floating around on the lightning-dev mailing list, and a guy named Anton implemented this as well: each outsourcer has its own specific ecash token. I pay them, maybe over lightning, maybe with a zero-knowledge contingent payment, or maybe I just give them cash, and now I have this outsourcer's ecash tokens. I unblind the tokens when I receive them. The cool thing is that when I spend the tokens, they aren't linked to my initial purchase. Every time I send a new state, I also pay the outsourcer with a token, and he's fine with that because he's been compensated for it.

I think we could do both; it just depends on how frequently we're going to see breaches in the future and how this market develops around outsourcers and paying outsourcers.

Scaling outsourcing with outsourcer static backups

One other thing: some people talk about how you actually do backups in lightning. This is a little bit of a challenge because the wallets are more stateful. With a regular on-chain wallet, you basically just have your bip32 seed, and if you have that seed you can scan the chain and get everything else back. But in lightning you also have channel data, which includes the parameters of the channel, which pubkeys were used, who you opened the channel with, the sizes, etc. There's static state, and then there's dynamic state which you outsource to the outsourcer after every state update.

Maybe we could ask the outsourcer to also store this very small payload, a few hundred bytes at most. We can reuse our sybil-resistant authentication scheme to allow the user to retrieve the blob from the outsourcer. The user can lose all of their data, and as long as they have their original seed, they can reconstruct the data deterministically, authenticate with the outsourcer, get the backup blob, and rejoin the network. There's a risk that the backup is out of date. In the protocol, when two clients reconnect, they try to sync channel state. If the channel state is off, they can prove to each other that they are on a particular state, and the value they hand over allows the user to sweep the transaction on chain. I think this covers the backup scenario pretty well, and it depends on the people who are running the outsourcers or the watchtowers.

There’s a question of: who watches the watchtowers? Who makes sure they are actually storing the data? You can use a proof-of-retrievability protocol over the static and dynamic state they are storing. So you ask a watchtower to prove that it is storing the data and, if it is, to provide you with the data. And if not, then you can pay for this again, just like paying the watchtower, but this time it's a watch-watchtower-tower.

Second stage HTLCs

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=24m5s

The way that HTLCs work currently is that they aren't just shoved into the commitment transaction, because we realized there were some issues with entangling their timelocks. What happened before was that if you wanted a longer CSV value, which is your security parameter in terms of being able to act on something in a timely manner, then you also needed longer HTLC timelocks. As you had longer routes, you would end up with timelocks of 30, 40, 50 days, a very long time. So we wanted to separate the two.

Now claiming an HTLC is a two-stage process. You broadcast the second-stage HTLC transaction, wait out the CSV delay, and afterwards you can claim the funds. The CSV value and the HTLC's CLTV value are in two different scripts and are fully decoupled from each other. This is the current commitment design in BOLT 3, where CSV and CLTV are decoupled in HTLCs.

There are some downsides, namely that we have a distinct transaction for every HTLC. Every time we update a state, if we have 1000 HTLCs, then I need to give you a thousand signatures and you need to verify all of those signatures. And we need to store every single signature for every HTLC if we want to be able to handle breaches in the future, and the outsourcer needs to be able to handle these HTLCs as well.

The solution here is to use covenants in the HTLC outputs. This eliminates the signing and verification during commitment creation, and eliminates signature storage for the current state. Right now we're basically making an off-chain covenant with the 2-of-2 multisig; if we actually had real covenants then we wouldn't need those signatures anymore. The goal of the covenant is to force them to wait out the CSV delay when claiming the output: they can only spend the output if the output created in the spending transaction actually has a CSV delay clause. You then add an independent script branch for the HTLC revocation clause, reusing the commitment invalidation technique.

As a stopgap, you could do something with sighash flags to allow you to coalesce these transactions together. Right now if I have 5 HTLCs, I have to do that on chain, there’s 5 different transactions. We could allow you to coalesce these into a single transaction by using more liberal sighash flags, which is cool because then you have this single transaction get confirmed and after a few blocks you can sweep those into your own outputs and it works out fine.

Multi-party channels

There’s some cool work by people at ETH on multi-party channels ("Scalable funding of bitcoin micropayment channel networks"). It's a different flavor of channels, because with a flat structure there are issues like how you pay for punishment, and whether I have to wait for everyone else to be online just to update my own state. What they did instead was make this hierarchical model of channels.

There are a few different concepts. There's "the hook": if there are five of us, it's a 5-of-5 multisig. Assuming signature aggregation, we can aggregate that down into a single signature so it looks like a regular channel, and even though it might be a 200-party channel, it can be made succinct using signature aggregation.

Once you have the “hook” which creates the giant multisig itself, then you have “allocations” and what the allocations do is that they decide on the distribution of funds inside each channel. If you look at the diagram, it’s just a series of 2-of-2 channels. You can keep doing this over and over again. The cool takeaway here is that they realized you could embed a channel inside another channel and then do recursively deep channels. Every time you make a new channel, it can have its own completely different rules.

I could have a bi-directional channel on one side, and on the other side I could have a multi-party channel and all of these other things. Nobody outside would know. We could do all of this collaboratively off-chain without using any on-chain transactions.

This is pretty cool, but there are some issues with state blowup on chain. One of the reasons they didn't do key-based revocation is that you have asymmetric state, and you get a blowup that scales with the number of participants, which gets prohibitive. In the paper, they used the invalidation tree again. Any time you want to update the hook, like if you want different allocations or you decide to remove one of the parties, you eventually have to update the hook depending on the distribution in the channel, and this requires an invalidation mechanism. They use tree-based invalidation, but the underlying problem is that the tree can get really large. As you update the hook more and more, the tree gets deeper. You need to traverse a huge tree to get to the hook, and even below that there might be more transactions. So you could have hundreds of transactions on chain just to force-close.

However, we could reuse the signed sequence commitments to solve some of these problems. We could use this for the hook update. This makes it more scalable. Everyone would have constant sized state independent of the number of participants in the channel, assuming that I’m keeping track of my own individual transactions. This has the same benefits as far as outsourcing. If we’re using signature aggregation, then everything is very very tiny, and that works out well too.

Fee control

https://www.youtube.com/watch?v=V3f4yYVCxpk&t=29m

One issue that we realized is that it's not super easy to modify the fees on the commitment transaction. Let's say that my peer is offline and I want to force-close the channel. The fee on that transaction was actually locked in ahead of time. Say I was offline for two days and fees on the chain went up; my transaction might have insufficient fee to get confirmed in a timely manner. At this point I would like to use child-pays-for-parent, where I create a bigger package with a higher fee rate, but I can't, because my output is timelocked: to sweep it I would need to wait for the timelock, for the timelock to start counting I would need the transaction confirmed, and to get it confirmed I would need the fee bump, so you're in a weird position, right? So one idea is to have a reserve. The way the design works right now, we ensure that each participant has a certain percentage of money inside the channel at all times. The idea here is to create an anyone-can-spend reserve output that anyone can use to anchor the transaction. Even if I messed up guessing my fees initially, I can still use the anyone-can-spend output.

We can eliminate the fee guessing game. In the protocol, there’s an update fee message, which can go because of the fee going up or down. If there’s very rapid changes in the fee climate on the chain, then this can get kind of difficult. Fee estimation is a hard problem.

The solution is to not apply fees to the commitment transaction at all. It has zero fees. You have to add a new input to get it confirmed. So we propose using more liberal sighash flags to allow you to add an input, or an input and an output. When I'm about to broadcast, that's when I decide what the fees are. But whenever I add this fee input (or whatever), the txid of the transaction changes, and if the txid changes then any dependent transactions become invalidated. So what we do is sign those with SIGHASH_NOINPUT.

I don’t know if you remember it, but SIGHASH_NOINPUT was proposed a long time ago. Normally a signature commits to the txid of the transaction that created the output you're spending and the position of that output. With SIGHASH_NOINPUT we're saying: don't sign that, only sign the script. That way I can have a dependent transaction that is still valid even if the parent transaction changes, because the pubkey script stays the same. This gives you more control over the confirmation time of your output: you don't have to worry about guessing fees ahead of time, and you can regulate fees entirely at broadcast time.
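Very loosely (a conceptual sketch, not the actual sighash algorithm), the difference is which fields the signature commits to:

    def fields_committed_to(tx, input_index: int, noinput: bool) -> dict:
        """With NOINPUT the signature no longer commits to the specific outpoint
        (txid, vout) being spent, so a presigned child survives the parent's txid
        changing when a fee input is added later."""
        fields = {"outputs": tx.outputs, "script": tx.inputs[input_index].script}
        if not noinput:
            fields["outpoint"] = tx.inputs[input_index].outpoint
        return fields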

Q&A

Q: To sum it up, the upgrades to bitcoin script that would be helpful are covenants, SIGHASH_NOINPUT, and OP_CHECKSIGFROMSTACK.

A: Yeah. With SIGHASH_NOINPUT and OP_CHECKSIGFROMSTACK we could do a lot. Covenants are the holy grail and I didn’t even get into what you could do there. There’s some controversy around that. I think it’s more likely for OP_CHECKSIGFROMSTACK to get in. These are minor additions, unlike covenants. The possibilities blow up because you have signed data. It’s extremely powerful. If you have OP_CHECKSIGFROMSTACK then you might accidentally enable covenants.

Q: There have been a bunch of discussions about how large centralized lightning network hubs could lead to censorship. Have you done any research on how to keep the lightning network decentralized?

A: There’s basically zero barrier to entry; there are some small costs, but they're not high. The capital requirements to run a massive hub are really prohibitive. There's not really censorship because, depending on the routing layer, we use onion routing at that point: they don't really know exactly who you're sending money to. They could cancel the transaction, but then you route around them.

Q: I was curious if there is a limit to the number of state channels that you can have at one point. If not, would the creation of unlimited state channels in a sybil manner be an attack vector?

A: The upper bound is the number of UTXOs you can have in the UTXO set. Here we had one output with 20 channels. It’s effectively unbounded. You need to get into the chain. With sybil attacks, you’re paying money to get into the chain so there’s some limits there.

Q: There was an assumption about attackers not colluding with miners.

A: Let’s say there's a miner, there's this massive channel, and my counterparty colludes with them. They could choose not to mine my justice transactions. You could say that even if there's only a 1% chance of getting into the chain, that might be enough for fraud to occur. This can also get weird if miners are reorging out blocks with my transactions. A committed transactions format, maybe with a zero-knowledge proof, where only later on is it revealed what's being spent, a commit-reveal scheme, could get past this problem of miner censorship.

Q: I am curious about, as you upgrade the protocol, how do you anticipate an upgrade schedule? I guess there are the wallets, hubs, outsourcers?

A: There’s actually a bunch of extension points in the protocol, including the onion protocol and the service bits on lightning node messages. We share the layer 3, HTLC layer. Individual links might have different protocols but they share the same HTLCs. We could have feature bits that we can check to see if someone knows the new fancy channel tech. I foresee it being gradual. Even in the future there will be many channel types and many routing layers. Everything is at the edges, and that’s all that really matters, and that’s a cool design element of the way it turned out.

Q: I like this multi-party channel design.

A: They are very cool.

Q: Can you elaborate on some of the research work done for secure multi-party computation?

A: There are a bunch of protocols that are enabled by OP_CHECKSIGFROMSTACK where, inside of the multi-party computation, if you can generate signatures and those signatures sign particular states, then you could use that for invalidation as well. Multi-party channels can be used to pool liquidity. There are opportunities for gaming this out, like one channel in each geographic region where all of the economy is done through a single multi-party channel, and then you could route commerce between those giant channels. It still works itself out.

Cool, thank you.

https://www.reddit.com/r/Bitcoin/comments/7u79l9/hardening_lightning_olaoluwa_osuntokun_roasbeef/

\ No newline at end of file diff --git a/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/index.html b/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/index.html index ce4aae0b27..ab2f8e1dd7 100644 --- a/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/index.html +++ b/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/index.html @@ -12,4 +12,4 @@ Pieter Wuille

Date: January 31, 2018

Transcript By: Bryan Bishop

Tags: Research, Schnorr signatures, Adaptor signatures

Category: Conference

Media: -https://www.youtube.com/watch?v=oTsjMz3DaLs

slides: https://prezi.com/bihvorormznd/schnorr-signatures-for-bitcoin/

https://twitter.com/kanzure/status/958776403977220098

https://blockstream.com/2018/02/15/schnorr-signatures-bpase.html

Introduction

My name is Pieter Wuille. I work at Blockstream. I contribute to Bitcoin Core and bitcoin research in general. I have worked on various proposals for the bitcoin system for some time now. Today I will be talking about Schnorr signatures for bitcoin, which is a project we've been working on, on-and-off, for a couple of years now. I'll talk about the cool things that we can do that might be non-obvious, and also some non-obvious challenges that we ran into while doing this. As this talk covers things we've done over a long time, there are many other people who have contributed to this work, including Greg Maxwell, Andrew Poelstra, myself, Russell O'Connor, and some external contributors including Peter Dettman and others. I wanted to mention them. And Jonas Nick.

Benefits of Schnorr signatures for bitcoin

Schnorr signatures have been talked about for a while. The usual mentioned advantages of this approach are that we can decrease the on-chain size of transactions in bitcoin. We can speed up validation and reduce the computational costs. There are privacy improvements that can be made. I’ll be talking about those and the problems we’ve encountered.

Bitcoin

For starters, let’s begin by talking about bitcoin itself. Transactions consist of inputs and outputs. The outputs provide conditions for spending. Russell was talking about this in his previous talk. They are effectively predicates that need to be satisfied. Inputs provide the arguments to those predicates. Typically, the predicate included in the output is "a signature with key X is required". This is the most common, but it's by no means the only thing that we can do.

Bitcoin also supports threshold signatures in a very naive way. Threshold signatures are schemes where you have a group of n possible keys and you decide ahead of time that any subset of k out of those n is able to produce a valid signature, and with fewer it's not possible. Bitcoin does this naively by giving a list of all the keys and all the signatures. It's an obvious, naive way of implementing this construction, but it's by no means the best that we can do.

Predicates and signature validation

Important for what I'll be talking about later is that in the blockchain model, the chain itself, meaning all the full nodes that validate the chain, does not actually care who signs. There may be multiple possible signers: for example, I have a wallet on my desktop computer, but to protect against software attacks I also want a hardware device, and I want a system where both wallets need to sign off on a transaction. This is a 2-of-2 or 2-of-3 multisig construction. If I want someone to pay me, I am the one who decides what conditions they should be creating when sending the money. I'll tell them "create an output of this much money to an address that encodes 2-of-3 multisig and these are the keys for that", and we have a compact 2-of-3 pay-to-scripthash (P2SH) implementation for that. But it is me who cares who those signers are, not the blockchain. The chain only sees that yep, there was supposed to be a key with this signature, and it simply validates the presence of this.

Scripts

Bitcoin accomplishes this through scripting. It’s a scripting language called Bitcoin script. It’s a stack-based machine language. The most simple example you can come up with is an output that says “pubkey CHECKSIG” and then an input that contains a signature. The execution model is that you first execute the input, which results in a signature on the stack. Next, you execute the output commands which pushes the public key on to the stack and the CHECKSIG looks at both the signature and the pubkey and checks whether the transaction is good to go.

In practice, what happens is that I don't tell my senders an actual public key. Instead, I give them a hash of my public key. This was originally for compactness reasons but there are other advantages. You can see the script that is being used for that. Effectively what the script does is take two inputs, a signature and a public key, verify that the hash of the public key is x, and verify that the signature is valid for that public key.
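The script being described is the standard pay-to-pubkey-hash pattern, roughly:

    # The output commits to a hash of the key; the spender reveals the key plus a signature.
    script_pubkey = ["OP_DUP", "OP_HASH160", "<20-byte pubkey hash>", "OP_EQUALVERIFY", "OP_CHECKSIG"]
    script_sig    = ["<signature>", "<pubkey>"]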

Going forward, we will be talking about threshold signatures. Bitcoin’s way of dealing with threshold signatures is through an opcode called OP_CHECKMULTISIG which takes a number of keys and a number of signatures, matches them all, and here you can see how this works.
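For example, the naive 2-of-3 case looks roughly like this; the leading OP_0 is the well-known workaround for OP_CHECKMULTISIG consuming an extra stack element.

    script_pubkey = ["OP_2", "<pubkey1>", "<pubkey2>", "<pubkey3>", "OP_3", "OP_CHECKMULTISIG"]
    script_sig    = ["OP_0", "<sig1>", "<sig2>"]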

Other things that bitcoin scripting language can do includes hash preimages and timelocks which are used in various higher-level protocols. I should also say that bitcoin script uses ECDSA. It’s a common standard. But let’s talk about Schnorr signatures.

Schnorr signatures

Schnorr signatures are a well-known signature scheme that only relies on the discrete logarithm assumption just like ECDSA does. There are various advantages for Schnorr signatures over ECDSA. Some of them are that it supports native multisig, where multiple parties jointly produce a single signature. This is very nice because we can reduce the number of keys and number of signatures that need to go into the chain. There are various schemes that enable threshold signatures on top of Schnorr signatures. In fact, it is known that there are constructions to effectively implement any monotone boolean function. I’ll talk a bit about that.

Monotone boolean functions are the class of functions from booleans to booleans that can be written using only AND and OR gates. As long as we restrict ourselves to spending conditions that consist of some group of people signing or some other group signing, then this is exactly the class that we want to model. It is in fact known that there are schemes that might have complex setup protocols but it’s actually possible to negotiate keys ahead of time in such a way that A and B or B and C and D, or D and F, or whatever, can eventually sign for this key.

We recently published a paper about a scheme called MuSig which does native multisignatures but without any setup. I’ll talk a bit more about this later.

Other advantages of Schnorr signatures is that they support batch validation where you have multiple sets of keys and messages and you can verify them all at once in a way that is more computationally efficient than doing single validation.

Schnorr signatures have a security proof, which is not true for ECDSA. In addition, they are also non-malleable, so a third-party that does not have access to a private key cannot modify the signature without invalidating it.

Simply by virtue of introducing a new signature scheme, we could get a number of advantages for free. One of them is that ECDSA signatures in bitcoin right now use the DER encoding, which adds 6 bytes of completely unnecessary data to the chain for every signature. We can just get rid of this once we introduce another signature scheme.

Most of the things on this slide are in theory also possible with ECDSA but it comes with really complex multi-party computation protocols. I’ll talk about in a minute where this is not the case.

Can we add this to Bitcoin script?

Seems like an almost obvious question and win. We can keep the same security assumptions, and there are only advantages. Ignoring the politics for a second, it seems like we could add a new opcode to the scripting language, which is especially easy since bitcoin now has segwit activated; part of that system added script versioning, which means we can introduce new scripts with new semantics from scratch without much effort. There are other advantages that come from this, though. I'll talk about two of them.

Taproot

One scheme that can benefit from this sort of new Schnorr signature validation opcode is taproot, which, if you've been following the bitcoin-dev mailing list over the past few days, you may have seen mentioned. Taproot is a proposal by Greg Maxwell where effectively the realization is that almost all cases where a script gets satisfied (where an actual spend occurs) with multiple parties involved can be written as "either everyone involved agrees, or some more complex conditions are satisfied". Taproot encodes a public key, or a public key plus the hash of a script, inside just one public key that goes on to the chain. You cannot tell from the key whether it's just a key or a key that also commits to a script. The proposed semantics allow you to either spend it by providing a signature with the key that is there, or reveal that it's a commitment to a script and then give the inputs to satisfy that script. If the signature used in the taproot proposal were a Schnorr signature, then we get all the advantages I talked about for Schnorr signatures. So not only could this be used for a single signer, it could also be the "everyone agrees" case by using a native Schnorr multi-signature.

Scriptless scripts

Another area that Schnorr signatures can help with is the topic of scriptless scripts, an area that Andrew Poelstra has been working on. There was a talk about this recently at RealWorldCrypto 2018 which I think was very good. The idea here is: how many of the features of an actual scripting language can we accomplish without having a scripting language? It turns out, quite a lot. In particular there is a construction called a cross-chain atomic swap, which I won't go into the details of here, but: I want to sell someone some bitcoin, someone else wants to sell me some litecoin, I don't know why, but assume it's the case, and we want to do this in lockstep across the chains so that neither party can defraud the other; either both payments go through or neither does. A cool construction for this was proposed a couple of years ago. It's a cross-chain atomic swap where the second payment depends on a hash preimage which gets revealed by the other transaction. We put the coins into a construction where they are locked, and when one party takes out their part of the coins, they reveal the information that I need in order to take their other coins. This makes the whole construction atomic. The normal formulation of this always relies on the hash preimage and revealing it and so on. But it's possible with just a Schnorr signature, and this makes it indistinguishable from a normal payment and also makes it smaller. Schnorr signatures will fit in well with the scriptless scripts scheme and cross-chain atomic swaps. There are many things we can do with Schnorr signatures. We want this.

Cross-input aggregation

Why stop at one signature per input? Bitcoin transactions have multiple independent inputs, and we don't want to restrict the ability for someone to choose them independently. All of these have public keys and signatures that are required. Why can't we combine all of those signatures into one? Schnorr signatures support multisig, so this seems like an obvious win. Well, not so fast.

I’ll show a few equations. The message is m. The public key is X, where X = x * G, x is the private key and G is the generator. The signature is a tuple (R, s) which is valid if s * G equals R + Hash(X, R, m) * X. What you can notice about this equation is that it is linear: the public keys all appear at the top level, which means that if multiple parties each produce an s independently for the same R value (or for a sum of R values), and you add up the s values, the result is a signature that is valid for the sum of their keys. This is how all multisignature constructions for Schnorr work. They are all based on this principle.
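Spelled out in the notation above (a sketch assuming both parties use the same challenge Hash(X, R, m), with X = X1 + X2 and R = R1 + R2): if s1 * G = R1 + Hash(X, R, m) * X1 and s2 * G = R2 + Hash(X, R, m) * X2, then (s1 + s2) * G = (R1 + R2) + Hash(X, R, m) * (X1 + X2) = R + Hash(X, R, m) * X, so (R, s1 + s2) verifies as a single signature under the combined key X.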

Unfortunately there is a caveat here, called the rogue key attack. Assume Alice has key A and Bob has key B. Bob claims that his key is B', which is really B minus A. People believe him. A naive multisignature would use the sum of the keys, and B' plus A is really just B, which Bob can sign for without Alice's cooperation. Everyone sees Alice's key, Bob says "send to the sum of these keys", and you assume this will only be spendable by Alice and Bob together, and that is wrong. The normal way to prevent this is to require the keys to sign themselves. This is effectively an enrollment or certification procedure: you include with the public keys a signature that signs itself. There are various constructions, but you must guarantee that the parties actually have the private keys corresponding to the public keys that they claim to have.
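In symbols, the rogue-key cancellation is simply A + B' = A + (B - A) = B, so the "aggregate" key is really just Bob's key B, which he can sign for without Alice.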

This works for multisig within a single input, because the people who care about it are just the participants, and they can internally prove to each other: here's my key and here's a proof that it's my key, and it doesn't go into the blockchain. But for cross-input aggregation, where we want to reduce all of the inputs in the transaction to one signature, this is actually not possible, because the sets of keys are under the control of the attacker. So the example again is that Alice has a number of coins in an output with key A, and Bob wants to steal them. Say we use a naive multisig approach with one signature for the sum of all the keys we see. Bob can create an output in a transaction himself, of some marginally small amount, addressed to the key B minus A, and then create a follow-up transaction that spends both Alice's coin and Bob's coin in such a way that the keys cancel out. So this is a completely insecure situation, and I believe the only way to prevent it would be to include the self-certification signatures inside the blockchain itself, which would undo all the scaling and performance advantages we assumed to have.

What we need is security in the plain public key model where there is no key setup procedure beyond just users claiming that they have a particular key. They are allowed to lie about what their key is, and the system should remain secure. This was something we noticed and we tried to come up with a solution for this rogue-key attack. We tried to publish about it, got rejected, and we were told that we should look at a paper from Bellare-Neven 2006 which exactly solved this problem.

Bellare-Neven signatures

The Schnorr multisignature verification was s * G = R + H(X, R, m) * X, where X is the sum of the public keys. Bellare-Neven introduced a multisignature where you use a separate hash for every signer, and into every hash goes the set of all the signers. The great thing about this paper is that it gives a very wide security proof where the attacker is allowed to do pretty much anything: an attacker can participate in multiple signing attempts with multiple people simultaneously. This looks exactly like the security model that we want. So let's go for this and start thinking about how to integrate it into Bitcoin script.
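Following that description, the verification equation becomes roughly s * G = R + sum over i of H(L, Xi, R, m) * Xi, where L is the set of all signer keys (X1, ..., Xn); the exact ordering of the hash inputs here is my assumption, the point being that each signer i gets its own challenge hash that also commits to the full signer set L.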

Again, not so fast. There’s another hurdle. We need to consider the distinction between a multisignature and an interactive aggregate signature. The distinction is that a multisig is where you have multiple signers that all sign the same message. In an interactive aggregate signature, every signer has their own message. Seems simple. In the context of bitcoin, every input is signing its own message that authorizes or specifies the claim authorizing the spend. There is a very simple conversion suggested by Bellare-Neven themselves in their paper where you can turn the multisignature scheme into an interactive aggregate signature scheme where you just concatenate the messages of all the participants. This seems so obviously correct that we didn’t really think about it until my colleague Russell O’Connor pointed something out.

Russell’s attack

https://www.youtube.com/watch?v=oTsjMz3DaLs&t=22m27s

Russell pointed out the following. Let's assume that Alice has two outputs, o1 and o2, and Bob has an output o3. Assume m1 is the message that authorizes a spend of o1, m2 the same for o2, and so on. Alice wants to spend o1 in a coinjoin with Bob. So there's a multi-party protocol going on; as mentioned in an earlier talk, coinjoin is where participants get together and move their coins at the same time so you can't tell which outputs belonged to which inputs. It's a reasonable thing for Alice and Bob to want to do. In this protocol, Bob can claim he has the same key as Alice; that's perfectly allowed in the plain public key model. And he chooses as his message m2, the message that authorizes the spend of Alice's second output, instead of m3 for his own output. You may say, well, it's perfectly possible to modify the protocol to say: never sign anything when someone else is claiming to have your key. But this is a higher-level construction; we would like the underlying protocol to protect against this sort of situation. If you now look at what the validation equation becomes, you see that Alice's public key appears twice and the concatenation of the two messages appears twice, and those two hashes are identical. So Bob can duplicate all of Alice's messages in a multi-party protocol and end up with a signature that actually authorizes the spend of Alice's second output, which was unintended.

Mitigating Russell’s attack

A better solution that we are proposing is that instead of this L, which is the hash committing to all the participant public keys in the set, you include the messages themselves, and then in the top-level hashes you include your position within that set. Russell's attack doesn't work anymore because the messages in every hash are different, so Bob can't just repeat the message and steal things. So something to learn from this, at least for myself, is that attack models in multi-party schemes can be very subtle. This was not at all an obvious construction.

Bitcoin integration

Here I guess I should do the slide that I had before. Sorry if I'm making you sea sick. Concretely, how do we integrate this Bellare-Neven style interactive aggregate signature scheme into bitcoin? It seems to give us a lot of advantages. We can turn all of the signatures in one input into a single signature using multisig and threshold signatures. And we can use cross-input aggregation across multiple inputs to reduce that even further and have only one signature for the entire transaction.

How do we do this? There’s a hurdle here. Bitcoin transactions are independent. We have this model where there is an output with a predicate, you provide an input with all the arguments needed to satisfy it and the transaction is valid if all of the predicates are satisfied plus a number of other constraints like “you’re not double spending” and “you’re not creating money out of nothing” and all those things.

For cross-input aggregation, we want one signature overall. The way to do it, or at least what I would propose, is to have the CHECKSIG operator and the related operators always succeed. Right now they take a public key and a signature from the stack and validate whether they correspond. Instead, make them always succeed, remember the public key and compute what the message would have been, and continue with validation for all the inputs in the transaction. The transaction is then valid if all the input predicates still succeed and, in addition, an overall Bellare-Neven interactive aggregate signature provided in the transaction is valid for all the delayed checks. This is a bit of a layer violation, I guess, but I believe it's one that is valuable because we get all these savings.
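Here is a sketch of that validation flow; helper names like run_script and verify_aggregate are hypothetical stand-ins, not Bitcoin Core functions.

    def validate_transaction(tx, run_script, verify_aggregate) -> bool:
        """CHECKSIG-like opcodes succeed immediately but record (pubkey, message)
        pairs; a single aggregate signature over all recorded pairs is checked at
        the end of transaction validation."""
        delayed_checks = []
        for txin in tx.inputs:
            ok = run_script(txin, on_checksig=lambda pk, msg: delayed_checks.append((pk, msg)))
            if not ok:
                return False  # the rest of each input's predicate must still succeed
        return verify_aggregate(delayed_checks, tx.aggregate_signature)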

Performance

I want to talk a bit about the actual work we've been doing towards that end. I want performance. Andrew Poelstra, Jonas Nick and myself have been looking at various algorithms for doing the scalar multiplications in the Bellare-Neven verification equation, and there are various algorithms that give you a better-than-constant speedup: you can compute the total faster in aggregate or batch than computing the multiplication operations separately and adding them up. This is a well-known result, but there's a variety of algorithms, and we experimented with several of them. In this graph you can see how many keys were involved in the whole transaction, and the speedup you get over just validating those keys independently. There are two algorithms: one is Strauss and the other is Pippenger. After various benchmarks we tweaked where the correct point is to switch over from one to the other. For small numbers, Strauss' algorithm is significantly faster, but at some point Pippenger gets faster, and it really goes up logarithmically in the number of keys. This seems to continue for quite a while. Our overall validation cost for n keys is really n over log n if we're talking about large numbers.

You may think, well, there’s never going to be a transaction with 10,000 keys in it, right? You’re already doing cool threshold scheme so there’s only one key left so you don’t need to think about the extreme cases. But this is where batch validation comes in because Bellare-Neven’s validation equation can also be batch validated where you have multiple instances of the equation that can be validated in parallel and you only need to care if they all fail not which specific one fails because that’s the block validity requirement in bitcoin. You’re seeing multiple transactions in a block and all that you care about is whether the block is valid.

These performance numbers apply to all the public keys and signatures you see in a transaction, within a block, rather than just within a transaction. And within a block, we potentially see several thousands of keys and signatures, so this is a nice speedup to have.

Space savings

Furthermore, there are also space savings. This chart is from a simulation where we assume that if this proposal would have been active since day 1 then how much smaller would the blockchain be. Note that this does not do anything with threshold signatures or multisig and it doesn’t try to incorporate how people would have differently used the system (which is where the advantages really are) but this is purely from being left with one signature per transaction and everything else is left in place. You can see between a 25% and 30% reduction in blockchain size. This is mostly a storage improvement and a bandwidth improvement in this context. It’s nice.

Ongoing work

We’re working on a BIP for Bellare-Neven based interactive aggregate signatures. We can present this as a signature scheme on its own. There’s a separate BIP for incorporating this cross-input capable CHECKSIG and its exact semantics would be– I lost a word here, but the recommended approaches for doing various kinds of threshold signings so that we don’t need to stick with this “everyone involved in a contract needs to be independently providing a signature” scheme.

That’s all.

bip-schnorr

Q&A

https://www.youtube.com/watch?v=oTsjMz3DaLs&t=33m3s

Dan Boneh: We have time for questions. Any hope of aggregating signatures across transactions? Leading question.

A: I expected something like that from you. So, there is a proposal by Tadge Dryja where you can effectively combine even across transactions- you can do a batch validation ahead of time and look at what multipliers you would apply and on the R value you can combine the whole R value into a single one. However, this is even more of a layer violation in that transaction validation comes with extra complications like what if you have a transaction that has been validated ahead of time and its validation was cached but now you see it in inside of a block and you need to minus– what you’re aiming for is less signatures where you can arbitrarily and non-interactively combine all signatures. I think that’s something we should look into, but I would rather use all of the possibilities with the current existing security assumptions and then perhaps at some point later consider what could be done.

Q: Hi. I was wondering one question about taproot. The introduction of this standard case where everyone signs and agrees would basically reduce the number of times where you see the contract being executed at all. Wouldn’t this reduce the anonymity set?

A: I don’t think so because in the alternative case where those people would have a contract that explicitly stated everyone agrees or more complex setup– you would still see, you’re going from three cases. One is just single signature single key, to everyone signs with multiple keys and third more complex constructions. What we’re doing with taproot is unifying the first and second branch but the third isn’t effected. I think this is strictly not the case.

Q: You had alluded to political reasons why this wouldn’t get merged. What are the reasons against this?

A: I would very much like to see what I’ve been talking about today to be merged into bitcoin. It’s going to be a lengthy process. There’s a long review cycle. This is one of the reasons why I prefer to stick with proposals that don’t change the security assumptions at all. None of what I’ve been talking about introduces any new assumptions that ECDSA doesn’t already have. So this hopefully makes it relatively easy to see that there are little downsides to deploying this kind of upgrade.

gmaxwell: An extra elaboration on the taproot point… you’re not limited to have the “all agree” case. It can be two-of-three. If your policy was 2-of-3 or 1-of-3 with a timelock then the case that looks like just a single key could just be the 2-of-3 at the top. This would be another factor that could help the anonymity set situation.

Q: Does the Schnorr signature.. I’m wondering about its availability for open-source or widely available software like openssl? My business case would be just.. update.. signature, done by multiple parties.

A: The most commonly deployed Schnorr-like signature is ed25519 which is very well known and used in a number of cases. I believe there are higher-level protocols that specify how to do aggregating multiple keys together and sign for them at once. You may want to look into a system called cosi.

Design approaches for cross-input signature aggregation

https://www.youtube.com/watch?v=oTsjMz3DaLs

slides: https://prezi.com/bihvorormznd/schnorr-signatures-for-bitcoin/

https://twitter.com/kanzure/status/958776403977220098

https://blockstream.com/2018/02/15/schnorr-signatures-bpase.html

Introduction

My name is Pieter Wuille. I work at Blockstream. I contribute to Bitcoin Core and bitcoin research in general. I have worked on various proposals for the bitcoin system for some time now. Today I will be talking about Schnorr signatures for bitcoin, which is a project we’ve been working on, on-and-off, for a couple of years now. I’ll talk about the cool things that we can do that might be non-obvious, and also some non-obvious challenges that we ran into while doing this. As this talk covers things we’ve done over a long time, there are many other people who have contributed to this work, including Greg Maxwell, Andrew Poelstra, myself, Russell O’Connor, Jonas Nick, and some external contributors including Peter Dettman and others. I wanted to mention them.

Benefits of Schnorr signatures for bitcoin

Schnorr signatures have been talked about for a while. The usually mentioned advantages of this approach are that we can decrease the on-chain size of transactions in bitcoin, we can speed up validation and reduce the computational costs, and there are privacy improvements that can be made. I’ll be talking about those and the problems we’ve encountered.

Bitcoin

For starters, let’s begin by talking about bitcoin itself. Transactions consist of inputs and outputs. The outputs provide conditions for spending. Russell was talking about this in his previous talk. They are effectively predicates that need to be satisfied. Inputs provide the arguments to those predicates. Typically, the predicate included in an output is “a signature with key X is required”. This is the most common, but it’s by no means the only thing that we can do.

Bitcoin also supports threshold signatures in a very naive way. Threshold signatures are schemes where you have a group of n possible keys and you decide ahead of time that any k out of those n are able to provide a valid signature, and with fewer it’s not possible. Bitcoin does this naively by giving a list of all the keys and all the signatures. It’s an obvious, naive way of implementing this construction, but it’s by no means the best that we can do.

Predicates and signature validation

Important for what I’ll be talking about later is that in the blockchain model, the chain itself, meaning all the full nodes that validate the chain, does not actually care who signs. There may be multiple possible signers: for example, I have a wallet on my desktop computer, but I want to make sure that I’m protected against software attacks, so maybe I also want a hardware device and I want to use a system where both wallets need to sign off on a transaction. This is a 2-of-2 or 2-of-3 multisig construction. If I want someone to pay me, I am the one who is going to decide what conditions they should be creating when sending the money. I’ll tell them “create an output of this much money to an address that encodes 2-of-3 multisig and these are the keys for that” and we have a compact 2-of-3 pay-to-scripthash (P2SH) implementation for that. But it is me who cares who those signers are, not the blockchain. The chain only sees that there was supposed to be a key and a signature, and it simply validates their presence.

Scripts

Bitcoin accomplishes this through scripting. It’s a scripting language called Bitcoin script. It’s a stack-based machine language. The most simple example you can come up with is an output that says “pubkey CHECKSIG” and then an input that contains a signature. The execution model is that you first execute the input, which results in a signature on the stack. Next, you execute the output commands which pushes the public key on to the stack and the CHECKSIG looks at both the signature and the pubkey and checks whether the transaction is good to go.

In practice, what happens is that I don’t tell my senders an actual public key. Instead, I give them a hash of my public key. This was originally for compactness reasons but there are other advantages. You can see the script that is being used for that. Effectively what the script does is take two inputs, a signature and a public key, verify that the hash of the public key is the committed value, and that the signature is valid for that public key.
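
To make that concrete, here is a minimal Python sketch of the check just described. The helper names are illustrative, not Bitcoin Core’s, and verify_sig is a stand-in for the real ECDSA check over secp256k1.

import hashlib

def hash160(data: bytes) -> bytes:
    # hash160 = RIPEMD160(SHA256(data)), the hash committed to in a pay-to-pubkey-hash output
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

def verify_sig(pubkey: bytes, sig: bytes, msg: bytes) -> bool:
    # stand-in for the real ECDSA verification over secp256k1
    return True

def check_p2pkh(committed_hash: bytes, pubkey: bytes, sig: bytes, msg: bytes) -> bool:
    # 1. the revealed public key must hash to the value committed in the output
    # 2. the signature must be valid for that public key and the spending message
    return hash160(pubkey) == committed_hash and verify_sig(pubkey, sig, msg)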

Going forward, we will be talking about threshold signatures. Bitcoin’s way of dealing with threshold signatures is through an opcode called OP_CHECKMULTISIG which takes a number of keys and a number of signatures, matches them all, and here you can see how this works.

Other things that the bitcoin scripting language can do include hash preimages and timelocks, which are used in various higher-level protocols. I should also say that bitcoin script uses ECDSA. It’s a common standard. But let’s talk about Schnorr signatures.

Schnorr signatures

Schnorr signatures are a well-known signature scheme that only relies on the discrete logarithm assumption just like ECDSA does. There are various advantages for Schnorr signatures over ECDSA. Some of them are that it supports native multisig, where multiple parties jointly produce a single signature. This is very nice because we can reduce the number of keys and number of signatures that need to go into the chain. There are various schemes that enable threshold signatures on top of Schnorr signatures. In fact, it is known that there are constructions to effectively implement any monotone boolean function. I’ll talk a bit about that.

Monotone boolean functions are the class of functions from booleans to booleans that can be written using only AND and OR gates. As long as we restrict ourselves to spending conditions that consist of some group of people signing or some other group signing, then this is exactly the class that we want to model. It is in fact known that there are schemes that might have complex setup protocols but it’s actually possible to negotiate keys ahead of time in such a way that A and B or B and C and D, or D and F, or whatever, can eventually sign for this key.

We recently published a paper about a scheme called MuSig which does native multisignatures but without any setup. I’ll talk a bit more about this later.

Another advantage of Schnorr signatures is that they support batch validation, where you have multiple sets of keys and messages and you can verify them all at once in a way that is more computationally efficient than doing single validation.

Schnorr signatures have a security proof, which is not true for ECDSA. In addition, they are also non-malleable, so a third-party that does not have access to a private key cannot modify the signature without invalidating it.

Simply by virtue of introducing a new signature scheme, we could get a number of advantages for free. One of them is that ECDSA signatures in bitcoin right now use DER encoding, which adds 6 bytes of completely unnecessary data to the chain for every signature. We can just get rid of this once we introduce another signature scheme.

Most of the things on this slide are in theory also possible with ECDSA, but they come with really complex multi-party computation protocols. I’ll talk in a minute about where this is not the case.

Can we add this to Bitcoin script?

This seems like an almost obvious win: the security assumptions stay the same, and there are only advantages. Ignoring the politics for a second, it seems like we could add a new opcode to the scripting language, which is especially easy since bitcoin now has segwit activated, and part of that system added script versioning, which means we can introduce new script versions with new semantics from scratch without much effort. There are other advantages that come from this, though. I’ll talk about two of them.

Taproot

One scheme that can benefit from this sort of new Schnorr signature validation opcode is taproot, which if you’ve been following the bitcoin-dev mailing list over the past few days you may have seen mentioned. Taproot is a proposal by Greg Maxwell where effectively the realization is that almost all cases where a script gets satisfied (where an actual spend occurs) and there are multiple parties involved can almost always be written as “either everyone involved agrees, or some more complex conditions are satisfied”. Taproot encodes a public key or the hash of a script inside just one public key that goes on to the chain. You cannot tell from the key whether it’s just a key or if it’s a key that also commits to a script. The proposed semantics for this allow you to either just spend it by providing a signature with the key that is there, or you reveal that it’s a commitment to a script and then you give the inputs to satisfy that script. If that signature used in the taproot proposal was a Schnorr signature, then we get all the advantages I talked about for Schnorr signatures. So not only could this be used for a single signer, but it could also be the “everyone agrees automatically” by using a native Schnorr multi-signature.

Scriptless scripts

Another area that Schnorr signatures can help with is the topic of scriptless scripts, an area that Andrew Poelstra has been working on. There was a talk about this recently at RealWorldCrypto 2018 which I think was very good. The idea here is: how much of the functionality of an actual scripting language can we accomplish without having a scripting language? It turns out, quite a lot. In particular there is a construction called a cross-chain atomic swap, which I won’t go into in detail here. Say I want to sell someone some bitcoin and someone else wants to sell me some litecoin, I don’t know why, but assume it’s the case, and we want to do this in lockstep across the chains so that no party can defraud the other: either both payments go through or neither does, so one party can’t back out after the other has paid. A cool construction for this was proposed a couple of years ago. It’s a cross-chain atomic swap where the second payment is dependent on a hash preimage which gets revealed by the other transaction. We put the coins into a construction where they are locked, and then when one party takes out their part of the coins, they reveal the information that I need in order to take their coins on the other chain. This makes the whole construction atomic. The normal formulation of this always relies on the hash preimage being revealed and so on. But it’s possible with just a Schnorr signature, and this makes it indistinguishable from a normal payment and it also makes it smaller. Schnorr signatures fit in well with the scriptless scripts scheme and cross-chain atomic swaps. There are many things we can do with Schnorr signatures. We want this.

Cross-input aggregation

Why stop at one signature per input? Bitcoin transactions have multiple independent inputs, and we don’t want to restrict the ability for someone to choose them independently. All of these have public keys and signatures that are required. Why can’t we combine all of those signatures into one? Schnorr signatures support multisig, so this seems like an obvious win. Well, not so fast.

I’ll show a few equations. The message is m. The public key is X where X = x * G, where x is the private key and G is the generator. The signature is a tuple (R, s) which is valid if s * G is equal to R + Hash(X, R, m) * X. What you can notice about this equation is that it is linear: all of the public keys appear at the top level. That means that if multiple parties each produce an s independently, for a shared R value (or for R values that get summed), and you add up the s values, the result is a signature that is valid for the sum of their keys. This is how all multisignature constructions for Schnorr work. They are all based on this principle.
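
To sketch that linearity in the notation above (glossing over how the nonces are negotiated): if each party i has key X_i = x_i * G and nonce R_i = k_i * G, and computes s_i = k_i + Hash(X, R, m) * x_i with R = R_1 + ... + R_n and X = X_1 + ... + X_n, then s = s_1 + ... + s_n satisfies s * G = R + Hash(X, R, m) * X, so (R, s) is a valid signature for the sum of the keys.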

Unfortunately there is a caveat here, called the rogue key attack. Assume Alice has key A and Bob has key B. Bob claims that his key is B’, which is really B minus A. So Bob claims that’s his key and people believe him. A naive multisignature would use the sum of the keys, and the sum of B’ and A is really just B, which Bob can sign for without Alice’s cooperation. Everyone sees Alice’s key, Bob says “I send to the sum of these keys”, and the assumption that this is only spendable by both Alice and Bob together turns out to be wrong. The normal way to prevent this is to require the keys to sign themselves. This is effectively an enrollment procedure or certification procedure: you include with the public keys a signature that signs itself. There are various constructions, but you must guarantee that the parties actually have the private keys corresponding to the public keys that they claim to have.
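
Spelled out: with B’ = B - A, the naive aggregate key is A + B’ = A + (B - A) = B, i.e. just Bob’s own key, so Bob can produce a valid signature for the “aggregate” without Alice.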

This works for multisig within a single input, because the people who care about it are just the participants and they can internally prove to each other “here’s my key and here’s a proof that it’s my key” and it doesn’t go into the blockchain. But for cross-input aggregation, where we want to reduce all of the inputs in the transaction to one signature, this is actually not possible, because the sets of keys are under the control of the attacker. So the example again is that Alice has a number of coins associated with an output with key A, and Bob wants to steal them. We use a naive multisig approach with one signature for the sum of all the keys that we can see. Bob can create an output in a transaction himself, of some marginally small amount, addressed to the key B minus A, and then create a follow-up transaction that spends both Alice’s coin and Bob’s coin in such a way that the keys cancel out. So this is a completely insecure situation, and I believe the only way to prevent it is by including the self-certification signature inside of the blockchain itself, which would undo all the scaling and performance advantages we assumed to have.

What we need is security in the plain public key model where there is no key setup procedure beyond just users claiming that they have a particular key. They are allowed to lie about what their key is, and the system should remain secure. This was something we noticed and we tried to come up with a solution for this rogue-key attack. We tried to publish about it, got rejected, and we were told that we should look at a paper from Bellare-Neven 2006 which exactly solved this problem.

Bellare-Neven signatures

The Schnorr multisignature check is s * G = R + H(X, R, m) * X, where X is the sum of the public keys. Bellare-Neven introduced a multisignature where you use a separate hash for every signer. Into every hash goes the set of all the signers. The great thing about this paper is that it gives a very wide security proof where the attacker is allowed to do pretty much anything. An attacker can participate in multiple signing attempts with multiple people simultaneously. This looks exactly like the security model that we want. So let’s go for this and start thinking about how to integrate this into Bitcoin script.
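
Roughly, if L is a commitment to the set of all participating keys (for example L = Hash(X_1, ..., X_n)), the verification equation described here has the shape s * G = R + Hash(L, X_1, R, m) * X_1 + ... + Hash(L, X_n, R, m) * X_n. The exact ordering of the hash inputs follows the paper; the point is only that every signer gets their own hash and every hash commits to the whole signer set.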

Again, not so fast. There’s another hurdle. We need to consider the distinction between a multisignature and an interactive aggregate signature. The distinction is that a multisig is where you have multiple signers that all sign the same message. In an interactive aggregate signature, every signer has their own message. Seems simple. In the context of bitcoin, every input is signing its own message that authorizes the spend. There is a very simple conversion suggested by Bellare and Neven themselves in their paper, where you turn the multisignature scheme into an interactive aggregate signature scheme by just concatenating the messages of all the participants. This seemed so obviously correct that we didn’t really think about it until my colleague Russell O’Connor pointed something out.

Russell’s attack

https://www.youtube.com/watch?v=oTsjMz3DaLs&t=22m27s

Russell pointed out that we’ve– let’s assume that Alice has two outputs, o1 and o2. Bob has an output o3. And we assume m1 is the message that authorizes a spend of o1, m2 the same for o2, and so on. Alice wants to spend o1 in a coinjoin with Bob. So there’s a multi-party protocol going on; as mentioned in an earlier talk, coinjoin is where participants get together and move their coins at the same time, so that you can’t tell which outputs belonged to which inputs. It’s a reasonable thing that Alice and Bob would want to do this. In this protocol, Bob would be able to claim he has the same key as Alice. That’s perfectly allowed in the plain public key model. And he chooses as his message m2, the message that authorizes the spend of Alice’s second output, instead of m3 for his own output. You may claim it’s perfectly possible to modify the protocol to say “don’t ever sign something where someone else is claiming to have your keys”, but that is a higher-level construction, and we would like the underlying protocol to protect against this sort of situation. If you now look at what the validation equation becomes, you see that Alice’s public key appears twice, and the concatenation of the two messages appears twice, but these two hashes are identical. So Bob can duplicate all of Alice’s messages in a multi-party protocol and end up with a signature that actually authorizes the spend of Alice’s second output, which was unintended.

Mitigating Russell’s attack

A better solution that we are proposing is that instead of this L, which is the hash committing to all the participant public keys in the set, you include the messages themselves as well, and then in each top-level hash you include your own position. Russell’s attack doesn’t work anymore because the hash inputs for every signer are different, so Bob can’t just repeat the message and steal things. Something to learn from this, at least for myself, is that attack models in multi-party schemes can be very subtle. This was not at all an obvious construction.
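
In other words, instead of each signer’s hash looking like Hash(L, X_i, R, m_1 || ... || m_n) with L only committing to the keys, each signer’s term commits to the full list of (key, message) pairs plus that signer’s own index, something like Hash(i, R, (X_1, m_1), ..., (X_n, m_n)) * X_i for the i-th signer. No two signers can then end up with identical hash values, which is what Russell’s attack relied on. This is only meant to convey the shape of the fix; the precise construction is what the proposed scheme specifies.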

Bitcoin integration

Here I guess I should go back to the slide that I had before. Sorry if I’m making you seasick. Concretely, how do we integrate this Bellare-Neven-like interactive aggregate signature scheme into bitcoin? It seems to give us a lot of advantages. We can turn all of the signatures in one input into a single signature using multisig and threshold signatures. And we can use cross-input aggregation across multiple inputs to reduce that even further and only have one signature for the entire transaction.

How do we do this? There’s a hurdle here. Bitcoin transactions are independent. We have this model where there is an output with a predicate, you provide an input with all the arguments needed to satisfy it and the transaction is valid if all of the predicates are satisfied plus a number of other constraints like “you’re not double spending” and “you’re not creating money out of nothing” and all those things.

For cross-input aggregation, we want one signature overall. The way to do it, or at least what I would propose, is to have the CHECKSIG operator and the related operators always succeed. Right now they take a public key and a signature from the stack and validate whether they correspond. Instead, make them always succeed, remember the public key, and compute what message would have been signed. Continue with validation for all the inputs in a transaction. The transaction is then valid if all the input predicates still succeed and, in addition, there is an overall Bellare-Neven interactive aggregate signature provided in the transaction that is valid for all the delayed checks. This is a bit of a layer violation, I guess, but I believe it’s one that is valuable because we get all these savings.
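
A minimal Python sketch of that two-phase idea, using hypothetical helper names (run_script, bn_aggregate_verify) rather than any actual Bitcoin Core API:

from collections import namedtuple

TxIn = namedtuple('TxIn', 'pubkey sighash')
Tx = namedtuple('Tx', 'inputs aggregate_sig')

def run_script(txin, delayed_checks):
    # stand-in for the script interpreter: a CHECKSIG-type opcode pushes "true"
    # but records the (pubkey, message) pair it would have verified
    delayed_checks.append((txin.pubkey, txin.sighash))
    return True

def bn_aggregate_verify(checks, aggregate_sig):
    # stand-in for verifying one Bellare-Neven style interactive aggregate
    # signature over all recorded (pubkey, message) pairs
    return aggregate_sig is not None

def validate_transaction(tx):
    delayed_checks = []
    # Phase 1: every input's predicate must still succeed on its own
    if not all(run_script(txin, delayed_checks) for txin in tx.inputs):
        return False
    # Phase 2: one aggregate signature must cover all the delayed checks
    return bn_aggregate_verify(delayed_checks, tx.aggregate_sig)

# e.g. validate_transaction(Tx(inputs=[TxIn(b'key1', b'msg1'), TxIn(b'key2', b'msg2')], aggregate_sig=b'sig'))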

Performance

I want to talk a bit about the actual work we’ve been doing towards that end. I want performance. Andrew Poelstra, Jonas Nick and myself have been looking at various algorithms for doing the scalar multiplications in the Bellare-Neven verification equation, and there are various algorithms where you get better than a constant speedup: you can compute the total faster in aggregate or batch than by computing the multiplications separately and adding them up. This is a well-known result, but there’s a variety of algorithms, and we experimented with several of them. In this graph you can see how many keys were involved in the whole transaction, and then the speedup you get over just validating those keys independently. There are two algorithms: one is Strauss and the other is Pippenger. After various benchmarks we tweaked the point at which to switch over from one to the other. For small numbers, Strauss’ algorithm is significantly faster, but at some point Pippenger gets faster, and the speedup really goes up logarithmically in the number of keys. This seems to continue for quite a while. Our overall validation cost for n keys is really n over log n if we’re talking about large numbers.
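
As a toy illustration of the shared-work idea (not the libsecp256k1 implementation), here is the joint double-and-add underlying Strauss’ method, in Python with plain integers standing in for curve points so the result can be checked by eye. The point is that all terms share one run of doublings instead of each term doing its own double-and-add.

def multi_scalar_mul(scalars, points, add=lambda a, b: a + b, double=lambda a: 2 * a, zero=0):
    # Compute sum_i scalars[i] * points[i], sharing one run of doublings across
    # all terms instead of doing an independent double-and-add per term.
    acc = zero
    nbits = max((s.bit_length() for s in scalars), default=0)
    for bit in reversed(range(nbits)):
        acc = double(acc)
        for s, p in zip(scalars, points):
            if (s >> bit) & 1:
                acc = add(acc, p)
    return acc

# Sanity check with integers as the stand-in group:
assert multi_scalar_mul([3, 5], [7, 11]) == 3 * 7 + 5 * 11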

You may think, well, there’s never going to be a transaction with 10,000 keys in it, right? You’re already doing a cool threshold scheme so there’s only one key left, so you don’t need to think about the extreme cases. But this is where batch validation comes in, because Bellare-Neven’s validation equation can also be batch validated: you have multiple instances of the equation that can be validated together, and you only need to know whether they are all valid, not which specific one fails, because that’s the block validity requirement in bitcoin. You’re seeing multiple transactions in a block and all that you care about is whether the block is valid.
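
As a sketch of the standard batching technique (not necessarily the exact formulation used here): pick random weights a_1, ..., a_n, multiply the i-th verification equation by a_i, and check the single summed equation (a_1 * s_1 + ... + a_n * s_n) * G = a_1 * (R_1 + ...) + ... + a_n * (R_n + ...). If any individual equation is false, the combined one fails except with negligible probability, and the combined check is again one big multi-scalar multiplication that benefits from the algorithms above.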

These performance numbers then apply to all the public keys and signatures you see within a block, rather than just within a transaction. And within a block, we potentially see several thousands of keys and signatures, so this is a nice speedup to have.

Space savings

Furthermore, there are also space savings. This chart is from a simulation asking: had this proposal been active since day 1, how much smaller would the blockchain be? Note that this does not do anything with threshold signatures or multisig, and it doesn’t try to incorporate how people would have used the system differently (which is where the advantages really are); this is purely from being left with one signature per transaction, with everything else left in place. You can see between a 25% and 30% reduction in blockchain size. This is mostly a storage improvement and a bandwidth improvement in this context. It’s nice.

Ongoing work

We’re working on a BIP for Bellare-Neven based interactive aggregate signatures. We can present this as a signature scheme on its own. There’s a separate BIP for incorporating this cross-input capable CHECKSIG and its exact semantics would be– I lost a word here, but the recommended approaches for doing various kinds of threshold signings so that we don’t need to stick with this “everyone involved in a contract needs to be independently providing a signature” scheme.

That’s all.

bip-schnorr

Q&A

https://www.youtube.com/watch?v=oTsjMz3DaLs&t=33m3s

Dan Boneh: We have time for questions. Any hope of aggregating signatures across transactions? Leading question.

A: I expected something like that from you. So, there is a proposal by Tadge Dryja where you can effectively combine even across transactions: you can do a batch validation ahead of time, look at what multipliers you would apply, and combine the whole R value into a single one. However, this is even more of a layer violation, in that transaction validation comes with extra complications, like what if you have a transaction that has been validated ahead of time and its validation was cached, but now you see it inside of a block and you need to subtract– what you’re aiming for is fewer signatures, where you can arbitrarily and non-interactively combine all signatures. I think that’s something we should look into, but I would rather use all of the possibilities with the current existing security assumptions and then perhaps at some point later consider what could be done.

Q: Hi. I was wondering one question about taproot. The introduction of this standard case where everyone signs and agrees would basically reduce the number of times where you see the contract being executed at all. Wouldn’t this reduce the anonymity set?

A: I don’t think so, because in the alternative case those people would have a contract that explicitly stated everyone agrees, or a more complex setup. You’re going from three cases: one is just a single signature with a single key, the second is everyone signs with multiple keys, and the third is more complex constructions. What we’re doing with taproot is unifying the first and second branch, but the third isn’t affected. So I think this is strictly not the case.

Q: You had alluded to political reasons why this wouldn’t get merged. What are the reasons against this?

A: I would very much like to see what I’ve been talking about today to be merged into bitcoin. It’s going to be a lengthy process. There’s a long review cycle. This is one of the reasons why I prefer to stick with proposals that don’t change the security assumptions at all. None of what I’ve been talking about introduces any new assumptions that ECDSA doesn’t already have. So this hopefully makes it relatively easy to see that there are little downsides to deploying this kind of upgrade.

gmaxwell: An extra elaboration on the taproot point… you’re not limited to the “all agree” case. It can be two-of-three. If your policy was 2-of-3, or 1-of-3 with a timelock, then the case that looks like just a single key could just be the 2-of-3 at the top. This would be another factor that could help the anonymity set situation.

Q: Does the Schnorr signature.. I’m wondering about its availability for open-source or widely available software like openssl? My business case would be just.. update.. signature, done by multiple parties.

A: The most commonly deployed Schnorr-like signature is ed25519, which is very well known and used in a number of cases. I believe there are higher-level protocols that specify how to aggregate multiple keys together and sign for them at once. You may want to look into a system called cosi.

Design approaches for cross-input signature aggregation

diff --git a/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin/index.html b/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin/index.html index d0068ca921..40d0f10503 100644 --- a/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin/index.html +++ b/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin/index.html @@ -12,4 +12,4 @@ Eric Lombrozo

Date: September 10, 2017

Transcript By: Bryan Bishop

Tags: Soft fork activation

Category: Conference

Media: -https://www.youtube.com/watch?v=0WCaoGiAOHE&t=32min9s

https://twitter.com/kanzure/status/1005822360321216512

Introduction

I’d like to talk about … we actually discovered we can replace the script completely with soft-forks.

It’s important to note this quote from satoshi, from summer 2010: “I don’t believe a second compatible implementation will ever” ……

Comparing open-source development to bitcoin’s blockchain

… a lot of the inspiration came from the development of open-source software. All source code is public, analogous to all of the blockchain being public. All changes are logged, same as all ledger additions, and subject to audit. The way that git works, our main tool, is by chaining these commits using hashes. It’s very similar to a blockchain, except instead of using double sha256 proof-of-work, it’s actually proof of actual developer work and people working on code. The typical workflow is to clone the codebase, similar to syncing the blockchain. And pull requests are similar to broadcasting transactions: you’re suggesting that you want to change the state of the system. Then there’s code review (validity), and merging the changes (including the changes in some block or some future block). There’s a lot of parallels.

Code forks vs chain forks

But there’s a huge difference around code forks versus chain forks. It’s hugely different. You have these chains, yes, but you also have arrows merging into nodes. We don’t have a way in bitcoin to merge two blockchains that have already diverged. There have been some ideas for blockchains that could be able to do this, like DAGs in general, but it’s a hard problem to merge different histories and still have the same kind of security that bitcoin has right now.

Compatibility

It’s important to note changes that affect compatibility. In other code bases, compatibility might not matter. In the case of consensus rules, compatibility matters. Also in other aspects too, such as the p2p network layer. There needs to be some level of compatibility otherwise you get partitions of the network. Bitcoin doesn’t work well when you have permanent partitions.

BIP process

https://github.com/bitcoin/bips

bip1 https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki

bip2 https://github.com/bitcoin/bips/blob/master/bip-0002.mediawiki

The BIP process was developed to document proposals and ideas and specifically ideas that involve compatibility issues. If you want to submit code and it doesn’t impact compatibility, then it’s probably not necessary to submit a BIP. The thing is that different BIPs have different levels of compatibility requirements. This was not really clear in the original way that the BIP process was documented, which is why I worked on bip123, which separates these different kinds of changes.

bip123

The first distinction I made was the difference between consensus and non-consensus changes. In bip123, there’s the consensus layer, the most fundamental level and hardest one to change, and then the peer services layer including message structures and peer discovery which could be upgraded. You could have a migration path where you deprecate older calls and then once nobody is using the old stuff you don’t have to support it anymore. Then there’s the RPC layer for interfacing with node software at the higher layer, and then there’s the application layer for bip39 or bip32 where you want to have wallets that share encrypted keys or stuff like that.

Consensus forks

Consensus forks, you know, split the blockchain, and we don’t have a merge process in bitcoin. There might be some ideas in future cryptocurrencies to allow for merging chains together, but for now there’s no way to merge incompatible chains. The way it’s resolved is that if the rules and validity are all the same on competing chains, then the consensus rule is to follow the chain with the most work. If there’s any kind of partition, then as soon as the partition is resolved, there’s a reorg and the chain which has less work is not followed. The history of the chain with the most work becomes the one that is accepted. It resolves temporary partitions, but not permanent partitions. If you have an incompatibility problem at the p2p network layer, and nodes are banning each other and creating a network partition, then this kind of rule isn’t going to be able to resolve that partition. It’s not going to fix it. Or once it fixes it, you’re going to get a really really bad reorg. The great firewall of China suddenly goes up and blocks all bitcoin traffic and then the next morning everyone’s transactions are screwed or whatever…

Soft-forks guarantee eventual consistency as long as there’s majority hashrate enforcing the new rules. Hard-forks can create permanent partitions. This is all basic stuff. This is obvious now. We’re aware of it now, that hard-forks always split the chain unless the legacy chain is either abandoned or destroyed. I think that in 2017, we can all look at this and say yes and we’ve seen plenty of examples of this phenomena. This wasn’t obvious back when Satoshi was writing those earlier posts.

Even Satoshi broke Bitcoin a few times

Before we all knew this, Satoshi introduced hard-forks and broke consensus. I don’t think this is well known. Cory Fields found this commit 757f0769d8360ea043f469f3a35f6ec204740446 and informed me of it. I wasn’t aware of it. There was actually a hard-fork in bitcoin that Satoshi did in a way we would never do it today; it’s a pretty bad way to do it. If you look at the git commit description here, he says something about reverted makefile.unix wx-config version 0.3.6. Right. That’s all it says. It has no indication that it contains a breaking change at all. He was basically hiding it in there. He also posted to bitcointalk and said, please upgrade to 0.3.6 ASAP. We fixed an implementation bug where it is possible that bogus transactions can be displayed as accepted. Do not accept bitcoin payments until you upgrade to 0.3.6. If you can’t upgrade right away, then it would be best to shut down your bitcoin node until you do. And then on top of that, and I don’t know why he decided to do this as well, he added some optimizations in the same commit. Fix a bug and add some optimizations.. okay, well, he did it.

One of the interesting things about this commit, which isn’t really described in Satoshi’s bitcointalk post, is the addition of the OP_NOP opcodes. Before, they were just illegal: if one was found by the script interpreter then it would exit false and fail. Now they basically don’t do anything, they are just NOPs. This is technically a hard-fork that he introduced ((if any transaction was to use these, and/or if any transaction in the past had used these in a way that breaks the chain given the new rules, especially since this was not phased in with an activation height or grandfather clause)). The previous rule was that they were illegal, and now transactions that would have been invalid before are valid. At the time, nobody was using the NOPs so it probably didn’t affect anyone ((that we know about– someone might have had a pre-signed transaction that was not broadcasted that used the previous features)). The network was really small and probably everyone knew each other that was running a node. So it was easy to get everyone to upgrade quickly. Nobody seemed to care.

There was a genius to this: by doing this hard-fork, it actually enabled future soft-forks. We can now repurpose these NOPs to do other things. I’ll get into that in a second.

BIP classification

Early BIPs had no classification. It’s hard to really prioritize them or see which ones might be more difficult to review if you just look at them by name. By adding bip123, it makes it more clear which ones are consensus-critical. bip16, bip17 and bip18 are touching consensus-critical code. The URI scheme is probably very localized to a single unit that is easy to review. So it makes it much easier to know what to expect in terms of how hard it is going to be to review a particular BIP.

Some of these over here are later BIPs. We see that– we have several soft-forks that were actually deployed, and other stuff that was not soft-forks like peer services changes that were added. And then over here, we see consensus changes are easy to spot. I’m not going to get into this story too much, but bip64 is not a consensus change. This was the reason that bitcoinXT was created actually. There was a dispute. Bitcoin Core developers did not believe that the getutxo feature should be at the p2p level. It was not a consensus change. It became a much more serious situation later on with the whole block size thing which I’ll get into in a second. Here you can see it’s easy to see which particular BIPs are probably– need to be reviewed more rigorously.

Hard-fork mania

In 2015-2016 there was this hard-fork craze that came about. There was a whole bunch of BIP submissions about hard-forks. None of these were actually deployed. But it’s interesting to note that there was this period of time that there was such a mania for hard-forks. During this time it’s also interesting to note that people were still working on some ideas for soft-forks.

There were some breakthrough discoveries about things that we could do with soft-forks that we didn’t think were possible or that we thought were really hard to do. It was a generalization of what kinds of things could we add to the protocol using only soft-forks. It might be obvious now, but it wasn’t always.

A brief history of bitcoin forks

The first set of forks were all activated by basically just updating the software. There was no activation mechanism coordinated or whatever. Satoshi would just say hey I released a new version, install it and don’t send bitcoin until everyone has upgraded. This works well when you have a network with only like ten machines. It doesn’t work very well when you have a network with thousands or millions of machines. It’s really hard to get everyone to upgrade all at once. ((Also, there are security reasons why it’s very important to not include auto-updating code everywhere.))

That didn’t work so well. But that’s the example of 0.3.6 that Satoshi did. That was done that way.

Blockheight activation

After that, it was decided that using blockheight as an activation trigger would be better because it would give people some time to upgrade their software so it wouldn’t be right away. Satoshi’s 1 megabyte blocksize limit in September 2010 was an example of this where it was using the blockheight to activate the rule.

It’s also possible to activate with a flag date using the median time. This made it easier to predict when it would happen. This was used for bip16 and bip30 deployment.

Fork: imposing 1 MB blocksize limit

In 2010, Satoshi added the 1 megabyte block size limit. This was an early soft-fork attempt. It was hardcoding a particular blockheight and if it’s above that blockheight then enforce it and if not, don’t. It assumes that everyone has upgraded their software by this time. This was the first crude mechanism which wasn’t just wait until everyone upgrades and don’t send transactions until then.

bip16 pay-to-scripthash (P2SH)

And then came bip16, right. And bip16 started to use other mechanisms like for instance like the anyonecanspend idea where old nodes would accept the transactions as valid and new nodes could have new rules on that and you could add functionality to this. This idea was reapplied in segwit and it made a comeback in a pretty strong way.

Soft-forks vs hard-forks

By this time the distinction between hard and soft forks started to congeal. At the beginning there was not a good distinction. But there was a post from gavinandresen where he said there are “soft rule changes and hard rule changes.. soft rule changes tighten up the rules, old software will accept all the blocks created by new software, but the opposite might not be true. Soft changes do not require everyone to upgrade. Hard changes modify the rules in a way that un-upgraded software considers illegal. At this point it is hard to deploy hard changes and they require every merchant and miner to upgrade”. It was appreciated at this point that it was a difficult problem. And then a few years later, I guess gavinandresen didn’t think it was that hard anymore.

Non-standardness

Then there were other recommendations, like “consider transactions non-standard” if they have an unknown version number. You would bump the transaction version number. You could still include such a transaction in a block, but it wouldn’t propagate over the p2p network. This would reduce the risk of sending transactions that might not validate correctly if some nodes aren’t enforcing the new rules yet.

isSuperMajority: First used miner activation mechanism

This was around the first time that the miner activation concept was invented, or started to be used. Track the last 1000 blocks: if a certain number of them signal a higher block version, then you know miners are going to be enforcing a new rule, and then it’s time to upgrade if you want to be safe. Unfortunately there are a lot of issues with this. If there’s a cooperative scenario where the incentives of the miners are aligned, then it helps to make a smooth transition. When the miner and user interests are not aligned then there can be a lot of problems here, such as false signaling and validationless mining… just because someone is signaling something doesn’t mean anything in particular. Miners might signal for a change but after activation they could trivially choose not to enforce it. And then there’s this 95% threshold warning which indicates hey, something might be up. isSuperMajority used 95% as a safety threshold. I think that unfortunately this established a narrative that miners are the ones that are able to change the protocol. This was the first time that it was provided and it was to smooth out the process. It wasn’t politicized at this point because for most of the soft-forks at the time… there weren’t any conflicts or obvious conflicts there.

We had this super majority mechanism which incremented the block version but you could only deploy one soft-fork at a time. You’re stuck on that one, and then what do you do if it doesn’t get activated? 3 different soft-forks deployed that way: bip34, bip65, and bip66.
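
A simplified Python sketch of that counting rule (the real IsSuperMajority check in Bitcoin Core took its window and thresholds from the chain parameters; on mainnet, 950 of the last 1000 blocks was the enforcement threshold):

def is_super_majority(min_version, recent_block_versions, threshold=950, window=1000):
    # Look at the last `window` block versions and count how many signal at
    # least `min_version`; the new rule kicks in once the threshold is reached.
    recent = recent_block_versions[-window:]
    return sum(1 for v in recent if v >= min_version) >= threshold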

Versionbits (bip9)

Versionbits bip9 was developed after that, with the idea that we want to be able to deploy multiple soft-forks at the same time. When I first got into bitcoin, I thought bitcoin would get obsolete, some better tech would come around and it would be better structured and I didn’t see an upgrade path for bitcoin that could remain backwards compatible. When we started to look into versionbits, I thought hey this is a version upgrade path where we could actually add new features and deploy multiple soft-forks simultaneously and maybe we could even scale up the process a little bit. We did 2– we did the checklocktimeverify bip65 + checksequenceverify bip112 bundle, and then came segwit, and segwit is where everything changed.

checklocktimeverify

In checklocktimeverify (bip65), the transaction version is bumped, which indicates it’s going to be using this new feature potentially. And there’s new rules. It’s basically redefining NOP2 which Satoshi enabled with the hard-fork he did before. Had Satoshi not done that hard-fork, then there wouldn’t be NOP2 available, and it wouldn’t have been possible to implement checklocktimeverify like that. Old nodes treat it as a NOP (no operation).

This is an example script: notice that if OP_CHECKLOCKTIMEVERIFY is a NOP then the script interpreter just drops the entire line after the ELSE and that’s how old nodes would evaluate the script.
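
A sketch of why redefining a NOP this way is a soft fork, in Python: old nodes run the opcode as a no-op, new nodes add an extra condition on top, so anything a new node accepts an old node also accepts. This is simplified; the real bip65 rule also distinguishes height-based from time-based locktimes and checks the input's nSequence.

def op_checklocktimeverify(stack, tx_locktime, enforce_bip65):
    # Old nodes: this is NOP2, which does nothing.
    if not enforce_bip65:
        return True
    # New nodes: fail unless the transaction's nLockTime is at least the value
    # on top of the stack. The value stays on the stack, so scripts follow the
    # opcode with a DROP.
    if not stack:
        return False
    return tx_locktime >= stack[-1]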

Origin of soft-fork for segwit

In this context, of seeing all these interesting developments happening with soft-forks, I thought it would be neat if we could have a more abstract soft-forking framework, like a plugin kind of thing, where rather than having a few people review every single soft-fork proposal that everyone makes, we instead have a screening process and modularize it more, so that it’s possible to have more unit tests and a clear execution pathway. I proposed this idea. We were concerned about the sequence of rules.. so if there’s a set of rules that need to be checked and it doesn’t matter in what order you check them, then you can just chain all the different soft-forks. I thought this would be a good way to generalize this, the execution flow is easy to follow with this architecture, and it’s encapsulated (chat log excerpt: “the execution flow is even easier to follow with this kind of architecture” and “because in the stable consensus code itself the specifics of the rule are encapsulated” and “and in the rule definition itself there’s nothing else BUT the rule definition” and some comments about extension blocks). So the rule definition could be this module that whoever wrote it could write unit tests for, and just make sure it works perfectly.

And then luke-jr, just out of the blue, as a matter of fact, it was pretty crazy, he just said to me: say, a soft-fork for segregated witness. And I was like, could we really do that with a soft-fork? At this point, Pieter Wuille had been working on a segwit implementation for the Elements Alpha project, where it was done as a hard-fork, and even gmaxwell didn’t think it would be possible to do this as a soft-fork. And according to luke-jr it was obvious that it would be doable as a soft-fork, and he blurted out the answer. Note that sipa is Pieter Wuille. I was skeptical about this and luke-jr said in theory it should be possible, and sipa was wondering, well, how do you do that? He said it probably entails p2p changes… we could do it with external data blobs and blocks like extension blocks.

Luke-Jr mentioned that it could be done like p2sh’s mechanism in bip16. The changes in transactions referring to each other… so you have to make the transaction not contain scriptsigs. That was the lightbulb moment. It took another 2 weeks for it to sink in. I was just trying to do a simple soft-fork plugin thing and I did not expect segwit to come out of this. I was just thinking about consensus structure and luke-jr ends by breaking soft-fork plugins and then he runs off to bed. This was the end of that chat.

I recently looked this up for a new article that Aaron van Wirdum published in bitcoinmagazine where he looks over the history of segwit. I was trying to find the original chat history for this, and just looking back at this it was funny because at that point none of us had realized what we had stumbled across. It wasn’t until a few weeks later that we realized what we found. This happened between the Montreal and Hong Kong scaling bitcoin conferences. At Hong Kong we didn’t have the transaction malleability fix yet, and people wanted bigger blocks and lightning, and this solved all these things and we were able to do it with a soft-fork. It totally changed the roadmap at this point.

Eventually came the segregated witness segwit BIPs. There were 7 of them. Only 4 of them here are consensus. bip142 was replaced by bip173 (bech32) that sipa has been working on.

versionbits in practice

Once we had segwit implemented, we were thinking we’d deploy with versionbits bip9, which made the most sense. We talked with some miners and after discussion it seemed that everyone agreed it would be a good idea to deploy this. ….. bip9 was designed in a way such that instead of incrementing the version every time we do an upgrade, we use different bits in the version number, so we could parallelize this and do multiple upgrades at the same time. Here’s the state transition diagram. It was designed so that if it fails, the failure mode is that nothing changes. Status quo was favored. If miners are not going to signal for it then it just fails after a certain timeout. We did not anticipate other stuff like asicboost and other issues that came up. This was assuming that in the worst of cases, okay, no change, stay with the status quo. We were happy with this but it caused some problems because miners started to misconstrue this as a vote.. in litecoin, this is segwit signaling in this chart here, and then the price is over here, and you would have to be stupid to not see a correlation here. Miners were starting to get close to the threshold and then drop their hashrate on it. This is an attack vector and it’s not sustainable forever.
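
The signaling check itself is simple; a sketch of the bip9 convention, where the top three version bits are set to 001 and each deployment gets its own bit:

def bip9_signaling(nversion, deployment_bit):
    # bip9 blocks use versions of the form 001xxxx...x; a deployment is
    # signaled when its assigned bit is set while the top bits equal 001
    top_bits_ok = (nversion & 0xE0000000) == 0x20000000
    return top_bits_ok and bool((nversion >> deployment_bit) & 1)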

Flag dates and user-activated soft-forks

So then we started to think maybe miner-activated soft-forks on their own aren’t good enough. Well, what about going back to user-activated soft-forks like the flag date? This is when shaolinfry proposed bip148, which required that miners signal segwit. It was kind of like a rube goldberg device where one device triggers another. This was a way to trigger all the nodes already out there ready to activate. And so we did some game theory analysis on this, and NicolasDorier actually did some nice diagrams here. Here’s the decision tree if you decide not to run a bip148 node: if the industry or miners decide not to go along with it, then you get a chain split and possibly a massive reorg. On the other hand, if you do run a bip148 node, then they would have to collude for you to get a permanent chain split. The game theory here– it’s a game of chicken, yes, and assuming it’s not in their interest to do that, they will opt to not go for the chain split. But if they do split the chain then it will probably be for reasons related to real economic interests, like bcash, where it’s controversial and some people might wonder. My personal take is that I think it was inevitable that some miners would have interests that encourage them to have another chain. It didn’t really adversely affect bitcoin too much and some of us got free money from bcash, so thank you.

The problem with bip148 is that the segwit2x collusion agreement came up and it was too late to activate with just bip9. James Hilliard proposed a reduction of the activation threshold to 80% using bip91. It was a way to avoid the chain split with the segwit2x collusion agreement, but it did not avoid the chainsplit with the bcash thing which I think it was inevitable at that point. The bcash hard-fork was a separate proposal unrelated to bip91 and bip8.

Some of the Bitcoin Core developers preferred bip8 over bip148. You can see the distinction here. The main difference is that bip8 does not have a transition from the start state to a failed state. In bip8, miners get a chance to signal and activate, but after a certain threshold rather than going to the failed state it goes to the locked in state. This does not allow miners to stall forever or to stall to the point where it fails. But it does still feed the narrative that miners are activating the fork and it still allows miners to stall for a long time. Others weren’t too happy about that with bip8.

Future of soft-fork activation

This is a big dilemma about how we are going to deploy soft-forks in the future. I’m really happy that segwit activated because now at least we can see some protocol development in second layer or higher-layer stuff that does not require tinkering with the consensus layer. I think this will usher in more innovation. At some point we’re probably going to want to add more features to the bitcoin protocol. This is a big philosophical question we’re asking ourselves. Do we do a UASF for the next one? What about a hybrid approach? Miner activated by itself has been ruled out. bip9 we’re not going to use again.

Interesting changes requiring soft-forks

Near-term soft-forkable changes that people have been looking into include things like: Schnorr signatures, signature aggregation, which is much more efficient than the currently used ECDSA scheme. And MASTs for merkleized abstract syntax trees, and there are at least two different proposals for this. MAST allows you to compress scripts where there’s a single execution pathway that might be taken so you can have this entire tree of potential execution pathways and the proof to authorize a transaction only needs to include one particular leaf of the tree.
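
A toy Python sketch of the MAST idea (illustrative helper names, not one of the actual proposals): commit to the merkle root over a set of possible scripts, and spend by revealing just one script plus its merkle path.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(scripts):
    # Hash each possible script and combine pairwise up to a single root,
    # duplicating the last node on odd-sized levels.
    level = [h(s) for s in scripts]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_leaf(root, script, path, index):
    # `path` lists the sibling hashes from the revealed script up to the root.
    node = h(script)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Spending with scripts[1] only reveals that script and two sibling hashes,
# not the other possible execution paths.
scripts = [b'branch A', b'branch B', b'branch C', b'branch D']
root = merkle_root(scripts)
path = [h(scripts[0]), h(h(scripts[2]) + h(scripts[3]))]
assert verify_leaf(root, scripts[1], path, 1)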

We’re looking at new script versions. At this point we don’t need to do the OP_DROP stuff; we can add new opcodes and a new scripting language, or replace it with something that’s not a stack machine if we wanted to– not that we want to, but in principle we could replace it with another language. It gives us that option. It’s something interesting to think about. Satoshi thought that bitcoin script would be the extensibility mechanism to support smart contracts, and now we’re looking at the opposite, where the extensibility comes from the way the proofs are constructed. Adding new scripting languages turns out to be simple.

Potential changes requiring hard-forks

Some changes in the future might require a hard-fork. Here are a few that are on the wishlist. Structural changes to the blockheader could allow for adding an extra nonce or chaining headers so you can add more fields or commit to other kinds of data structures, insead of having to stick stuff into the coinbase transaction.

Withholding attack fixes, proof-of-work changes, things like that.

Another change that would be nice is splitting the transaction inputs and outputs into separate merkle trees. You could construct much shorter proofs. If you’re interested in whether a particular UTXO is in a block, you don’t need to download the entire transaction. This would be a nice little feature.

Also if we do want to increase blocksize then my particular take is that it seems silly to bump it 2x once, and then blocks fill up and then we hard-fork again? Every time we hit the limit we hard-fork again? We might as well add a percent increase per annum and have something where it grows gradually. Agreeing on the exact numbres and timeframe are obviously some areas where it might cause disagreement.

Hard-forks and inevitable chain splits

Hard-forks might inevitable cause chain splits. I think this is something that we have come to accept. I think that back in 2015, we thought hard-forks might be a way to upgrade the protocol and assume everyone has the incentive to switch. We’ve seen several hard-forks now. The ethereum hard-fork is an example where people with different ideologies decided to stick it out and mine the old chain. I think we’re going to see that in bitcoin. Even a tiny group of people tha tdon’t want change, we’re going to see a chain split. I think at this point, avoiding a chain split is not an option, but rather we have to work to mitigate the chain splits. This means including things like replay protection, using different address formats, protecting SPV clients, things like that. Market mechanisms and price discovery, liquidity, trying to see whether or not the market actually supports the changes.

Are there situations where the legacy chain will be voluntarily abandoned? I think if we were doing a hard-fork that was a win for everyone and maybe just some guy that just hasn’t connected to bitcoin in years and has a node that he hasn’t spun up… there’s going to be a few of these people out there. There could be a hard-fork where everyone agrees, but still the logistics and coordination is difficult and it probably needs to be done with a lot of anticipation and some mechanism to mitigate any kind of issues there.

Is it practical or ethical to kill the legacy chain? My personal belief is that as long as people want to use the chain, even a few people, then attacking the chain (like mining empty blocks indefinitely) is tantamount to attacking people’s personal property. I think this could be a serious problem.

If both chains survive, which one gets to keep the brand? This is the multi-billion dollar question. My personal view is that whoever is proposing the change, the onus is on them to demonstrate widespread support. The people who want to keep the status quo don’t have to demonstrate anything. The change needs to demonstrate overwhelming support.

Conclusion

I’m not sure how things are going to develop now. We’ve learned a lot in the past few years. Hopefully we will be able to do at least a few more upgrades. If not, then I’m happy we got segwit in so that we can start to do second layer stuff.

Q&A

https://www.youtube.com/watch?v=0WCaoGiAOHE&t=1h8m10s

Q: What about the research going on spoonnet and spoonnet2 for hard-forks?

A: There’s a lot of interesting stuff there. jl2012 has been working on interesting stuff there. I think that at the end of the day there’s still going to be political issues even if you have a technical solution to the logistical issues… if you can’t get people to agree, then you’re still going to have problems. But it’s good to have ideas for replay protection. If you’re going to split, then split for good, and make sure there’s no way to merge again, and make sure you have a good way of coordinating that. This doesn’t fix the political aspects though.

https://www.youtube.com/watch?v=0WCaoGiAOHE&t=32min9s

https://twitter.com/kanzure/status/1005822360321216512

Introduction

I’d like to talk about … we actually discovered we can replace the script completely with soft-forks.

It’s important to note this quote from satoshi, from summer 2010: “I don’t believe a second compatible implementation will ever” ……

Comparing open-source development to bitcoin’s blockchain

… a lot of the inspiration came from the development of open-source. All source code is public, analogous to the whole blockchain being public. All changes are logged, same as all ledger additions, and subject to audit. The way that git works, our main tool, is by chaining these commits together using hashes. It’s very similar to a blockchain, except instead of using double-SHA256 proof-of-work, it’s proof of actual developer work and people working on code. The typical workflow is to clone the codebase, similar to syncing the blockchain. And pull requests are similar to broadcasting transactions: you’re suggesting that you want to change the state of the system. Then there’s code review (validity), and merging the changes (including the changes in some block or some future block). There are a lot of parallels.

Code forks vs chain forks

But there’s a huge difference between code forks and chain forks. You have these chains, yes, but you also have arrows merging into nodes. We don’t have a way in bitcoin to merge two blockchains that have already diverged. There have been some ideas for blockchains that could do this, like DAGs in general, but it’s a hard problem to merge different histories and still have the same kind of security that bitcoin has right now.

Compatibility

It’s important to note changes that affect compatibility. In other code bases, compatibility might not matter. In the case of consensus rules, compatibility matters. Also in other aspects too, such as the p2p network layer. There needs to be some level of compatibility otherwise you get partitions of the network. Bitcoin doesn’t work well when you have permanent partitions.

BIP process

https://github.com/bitcoin/bips

bip1 https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki

bip2 https://github.com/bitcoin/bips/blob/master/bip-0002.mediawiki

The BIP process was developed to document proposals and ideas and specifically ideas that involve compatibility issues. If you want to submit code and it doesn’t impact compatibility, then it’s probably not necessary to submit a BIP. The thing is that different BIPs have different levels of compatibility requirements. This was not really clear in the original way that the BIP process was documented, which is why I worked on bip123, which separates these different kinds of changes.

bip123

The first distinction I made was the difference between consensus and non-consensus changes. In bip123, there’s the consensus layer, the most fundamental level and hardest one to change, and then the peer services layer including message structures and peer discovery which could be upgraded. You could have a migration path where you deprecate older calls and then once nobody is using the old stuff you don’t have to support it anymore. Then there’s the RPC layer for interfacing with node software at the higher layer, and then there’s the application layer for bip39 or bip32 where you want to have wallets that share encrypted keys or stuff like that.

Consensus forks

Consensus forks, you know, split the blockchain and we don’t have a merge process in bitcoin. There might be some future ideas in future cryptocurrencies to allow for merging chains together. But for now there’s no way to merge incompatible chains. The way it’s resolved is that if the rules and validity are all the same on competing chains then the consensus rule is to follow the chain with the most work. If there’s any kind of partition, then as soon as the partition is resolved, there’s a reorg and the chain which has less work is not followed. The history of the chain with the most work becomes the one that is accepted. It resolves temporary partitions, but not permanent partitions. If you have an incompatibility problem at the p2p network layer, and nodes are banning each other and creating a network partition, then this kind of rule isn’t going to be able to resolve that partition. It’s not going to fix it. Or once it fixes it, you’re going to get a really, really bad reorg. The great firewall of China suddenly goes up and blocks all bitcoin traffic and then the next morning everyone’s transactions are screwed or whatever…
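To make the most-work rule concrete, here is a minimal sketch (illustrative Python, not Bitcoin Core’s actual code) of scoring competing chains by cumulative work and following whichever has the most, which is exactly what resolves a temporary partition once it heals:

```python
# Illustrative sketch of the "follow the chain with the most work" rule;
# not Bitcoin Core's implementation.

def block_work(target: int) -> int:
    # Expected hashes to find a block is roughly 2**256 / (target + 1).
    return 2**256 // (target + 1)

def best_tip(candidate_chains):
    # Each candidate chain is a list of per-block targets from genesis to tip.
    # The node follows the chain with the most cumulative work, regardless of
    # which chain it heard about first.
    return max(candidate_chains, key=lambda chain: sum(block_work(t) for t in chain))

# A longer chain of easy blocks can still lose to a shorter chain of harder
# (lower-target) blocks: work decides, not height.
easy = [2**230] * 10
hard = [2**224] * 8
assert best_tip([easy, hard]) is hard
```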

Soft-forks guarantee eventual consistency as long as there’s majority hashrate enforcing the new rules. Hard-forks can create permanent partitions. This is all basic stuff. This is obvious now. We’re aware of it now, that hard-forks always split the chain unless the legacy chain is either abandoned or destroyed. I think that in 2017, we can all look at this and say yes and we’ve seen plenty of examples of this phenomenon. This wasn’t obvious back when Satoshi was writing those earlier posts.

Even Satoshi broke Bitcoin a few times

Before we all knew this, Satoshi introduced hard-forks and broke consensus. I don’t think this is well known. Cory Fields found this commit 757f0769d8360ea043f469f3a35f6ec204740446 and informed me of it. I wasn’t aware of it. There was actually a hard-fork in bitcoin that Satoshi did that we would never do it this way- it’s a pretty bad way to do it. If you look at the git commit description here, he says something about reverted makefile.unix wx-config version 0.3.6. Right. That’s all it says. It has no indication that it has a breaking change at all. He was basically hiding it in there. He also posted to bitcointalk and said, please upgrade to 0.3.6 ASAP. We fixed an implementation bug where it is possible that bogus transactions can be displayed as accepted. Do not accept bitcoin payments until you upgrade to 0.3.6. If you can’t upgrade right away, then it would be best to shutdown your bitcoin node until you do. And then on top of that, I don’t know why he decided to do this as well, he decided to add some optimizations in the same code. Fix a bug and add some optimizations.. okay, well, he did it.

One of the interesting things about this commit which isn’t really described in Satoshi’s bitcointalk post is the addition of the OP_NOP opcodes. Before, they were just illegal. If one of these was found by the script interpreter then it would exit false and fail. And now they basically don’t do anything, they are just NOPs. This is technically a hard-fork that he introduced ((if any transaction was to use these, or if any transaction in the past had used these in a way that breaks the chain given the new rules, especially since this was not phased in with an activation height or grandfather clause)). The previous rule was that they were illegal, so transactions that would have been invalid before are now valid. At the time, nobody was using the NOPs so it probably didn’t affect anyone ((that we know about– someone might have had a pre-signed transaction that was not broadcasted that used the previous features)). The network was really small and probably everyone knew each other that was running a node. So it was easy to get everyone to upgrade quickly. Nobody seemed to care.

There was a genius to this: by doing this hard-fork, it actually enabled future soft-forks. We can now repurpose these NOPs to do other things. I’ll get into that in a second.

BIP classification

Early BIPs had no classification. It’s hard to really prioritize them or see which ones might be more difficult to review if you just look at them by name. By adding bip123, it makes it more clear which ones are consensus-critical. bip16, bip17 and bip18 are touching consensus-critical code. The URI scheme is probably very localized to a single unit that is easy to review. So it makes it much easier to know what to expect in terms of how hard it is going to be to review a particular BIP.

Some of these over here are later BIPs. We see that– we have several soft-forks that were actually deployed, and other stuff that was not soft-forks like peer services changes that were added. And then over here, we see consensus changes are easy to spot. I’m not going to get into this story too much, but bip64 is not a consensus change. This was the reason that bitcoinXT was created actually. There was a dispute. Bitcoin Core developers did not believe that the getutxo feature should be at the p2p level. It was not a consensus change. It became a much more serious situation later on with the whole block size thing which I’ll get into in a second. Here you can see it’s easy to see which particular BIPs are probably– need to be reviewed more rigorously.

Hard-fork mania

In 2015-2016 there was this hard-fork craze that came about. There was a whole bunch of BIP submissions about hard-forks. None of these were actually deployed. But it’s interesting to note that there was this period of time that there was such a mania for hard-forks. During this time it’s also interesting to note that people were still working on some ideas for soft-forks.

There were some breakthrough discoveries about things that we could do with soft-forks that we didn’t think were possible or that we thought were really hard to do. It was a generalization of what kinds of things could we add to the protocol using only soft-forks. It might be obvious now, but it wasn’t always.

A brief history of bitcoin forks

The first set of forks were all activated by basically just updating the software. There was no activation mechanism coordinated or whatever. Satoshi would just say hey I released a new version, install it and don’t send bitcoin until everyone has upgraded. This works well when you have a network with only like ten machines. It doesn’t work very well when you have a network with thousands or millions of machines. It’s really hard to get everyone to upgrade all at once. ((Also, there are security reasons why it’s very important to not include auto-updating code everywhere.))

That didn’t work so well. But that’s the example of 0.3.6 that Satoshi did. That was done that way.

Blockheight activation

After that, it was decided that using blockheight as an activation trigger would be better because it would give people some time to upgrade their software so it wouldn’t be right away. Satoshi’s 1 megabyte blocksize limit in September 2010 was an example of this where it was using the blockheight to activate the rule.

It’s also possible to activate with a flag date using the median time. This made it easier to predict when it would happen. This was used for bip16 and bip30 deployment.

Fork: imposing 1 MB blocksize limit

In 2010, Satoshi added the 1 megabyte block size limit. This was an early soft-fork attempt. It was hardcoding a particular blockheight and if it’s above that blockheight then enforce it and if not, don’t. It assumes that everyone has upgraded their software by this time. This was the first crude mechanism which wasn’t just wait until everyone upgrades and don’t send transactions until then.
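To illustrate the shape of that kind of height-gated rule, here is a toy sketch in Python; the activation height shown is a placeholder, not the historical value, and the real change was of course made inside the C++ client:

```python
# Toy illustration of a height-activated rule like the 1 MB limit: below the
# activation height the new rule is not enforced, at or above it, violating
# blocks are rejected. Placeholder numbers, not the original client code.

MAX_BLOCK_SIZE = 1_000_000
ACTIVATION_HEIGHT = 100_000   # illustrative placeholder

def block_size_rule_ok(height: int, serialized_size: int) -> bool:
    if height < ACTIVATION_HEIGHT:
        return True                            # old rule: limit not enforced yet
    return serialized_size <= MAX_BLOCK_SIZE   # new, stricter rule
```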

bip16 pay-to-scripthash (P2SH)

And then came bip16, right. And bip16 started to use other mechanisms, for instance the anyonecanspend idea where old nodes would accept the transactions as valid while new nodes could enforce new rules on them, and you could add functionality this way. This idea was reapplied in segwit and it made a comeback in a pretty strong way.
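Here is a minimal sketch of that bip16-style trick, with a stand-in hash function and a stand-in script interpreter (the real rules use HASH160 and the full interpreter); the point is only that upgraded nodes accept a strict subset of what old nodes accept, which is what makes it a soft-fork:

```python
# Illustrative sketch of the pay-to-scripthash idea, not the real validation code.
import hashlib

def script_hash(script: bytes) -> bytes:
    # Stand-in for Bitcoin's HASH160 (SHA256 followed by RIPEMD160).
    return hashlib.sha256(script).digest()[:20]

def old_node_accepts(redeem_script: bytes, committed_hash: bytes) -> bool:
    # Pre-bip16 view of the output: just "hash the provided script and compare".
    return script_hash(redeem_script) == committed_hash

def new_node_accepts(redeem_script: bytes, committed_hash: bytes, run_script) -> bool:
    # bip16 view: same check, plus actually execute the inner (redeem) script.
    # 'run_script' stands in for a real script interpreter returning True/False.
    return old_node_accepts(redeem_script, committed_hash) and run_script(redeem_script)
```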

Soft-forks vs hard-forks

By this time the distinction between hard and soft forks started to congeal. At the beginning there was not a good distinction. But there was a post from gavinandresen where he said there are “soft-rule changes and hard rule changes.. soft rule changes tighten up the rules, old software will accept all the blocks created by new software, but the opposite might not be true. Soft changes do not require everyone to upgrade. Hard changes modify the rules in a way that un-upgraded software considers illegal. At this point it is hard to deploy hard changes and they require every merchant and miner to upgrade”. It was appreciated at this point that it was a difficult problem. And then a few years later, I guess gavinandresen didn’t think it was that hard anymore.

Non-standardness

Then there were other recommendations like “consider transactions non-standard” if they have an unknown version number. You would bump the transaction version number. You could still include such a transaction in a block, but it wouldn’t propagate over the p2p network. This would reduce the risk of broadcasting transactions that might not validate correctly on nodes that haven’t upgraded yet.
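A minimal sketch of that policy-versus-consensus split, with made-up version numbers just to show the shape of the check (relay policy gates propagation, while block validity is a separate question):

```python
# Illustrative relay-policy sketch; version numbers here are hypothetical.

KNOWN_TX_VERSIONS = {1, 2}

def is_standard_version(tx_version: int) -> bool:
    return tx_version in KNOWN_TX_VERSIONS

def should_relay(tx_version: int) -> bool:
    # A transaction with an unknown version is treated as non-standard and not
    # relayed, even though a miner could still include it in a valid block.
    return is_standard_version(tx_version)
```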

isSuperMajority: First used miner activation mechanism

This was around the first time that the miner activation concept was invented or started to be used. Track the last number of blocks, the last 1000 blocks, and if a certain number of them signal a higher block version then you know miners are going to be enforcing a new rule and then it’s time to upgrade if you want to be safe. Unfortunately there are a lot of issues with this. If there’s a cooperative scenario where the incentives with the miners are aligned then it helps to make a smooth transition. When the miner and user interests are not aligned then there can be a lot of problems here, such as false signaling, validationless mining… just because someone is signaling something doesn’t mean anything in particular. Miners might signal for a change but after activation they could trivially choose not to enforce it. And then there’s this 95% threshold warning which indicates hey, something might be up. isSuperMajority used 95% as a safety threshold. I think that unfortunately this established a narrative that miners are the ones that are able to change the protocol. This was the first time that it was provided and it was to smooth out the process. It wasn’t politicized at this point because most of the soft-forks at the time… there weren’t any conflicts or obvious conflicts there.

We had this super majority mechanism which incremented the block version, but you could only deploy one soft-fork at a time. You’re stuck on that one, and then what do you do if it doesn’t get activated? Three different soft-forks were deployed that way: bip34, bip65, and bip66.
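A minimal sketch of an isSuperMajority-style trigger; the window and threshold below are only illustrative (the actual deployments used 750-of-1000 to start enforcing the new rule and 950-of-1000 to reject blocks still claiming the old version):

```python
# Illustrative isSuperMajority-style check, not the actual client code.

WINDOW = 1000
THRESHOLD = 950   # the 95% "safety" threshold mentioned above

def is_super_majority(recent_block_versions, min_version: int) -> bool:
    window = recent_block_versions[-WINDOW:]
    signalling = sum(1 for v in window if v >= min_version)
    return signalling >= THRESHOLD

# e.g. once is_super_majority(versions, 3) is true, a bip66-style rule would be
# enforced and blocks still claiming an older version would be rejected.
```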

Versionbits (bip9)

Versionbits bip9 was developed after that, with the idea that we want to be able to deploy multiple soft-forks at the same time. When I first got into bitcoin, I thought bitcoin would get obsolete, some better tech would come around and it would be better structured and I didn’t see an upgrade path for bitcoin that could remain backwards compatible. When we started to look into versionbits, I thought hey this is a version upgrade path where we could actually add new features and deploy multiple soft-forks simultaneously and maybe we could even scale up the process a little bit. We did 2– we did the checklocktimeverify bip65 + checksequenceverify bip112 bundle, and then came segwit, and segwit is where everything changed.

checklocktimeverify

In checklocktimeverify (bip65), the transaction version is bumped, which indicates it’s going to be using this new feature potentially. And there’s new rules. It’s basically redefining NOP2 which Satoshi enabled with the hard-fork he did before. Had Satoshi not done that hard-fork, then there wouldn’t be NOP2 available, and it wouldn’t have been possible to implement checklocktimeverify like that. Old nodes treat it as a NOP (no operation).

This is an example script: notice that if OP_CHECKLOCKTIMEVERIFY is a NOP then the script interpreter just drops the entire line after the ELSE and that’s how old nodes would evaluate the script.
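The slide itself is not reproduced in this transcript, but here is a minimal sketch of the old-node versus new-node behaviour being described: redefining a NOP only ever adds a failure condition, so everything new nodes accept, old nodes accept too.

```python
# Illustrative sketch of redefining OP_NOP2 as OP_CHECKLOCKTIMEVERIFY (bip65);
# the real rule has more checks (locktime type, sequence numbers, etc.).

def cltv_passes(stack_top: int, tx_locktime: int, enforce_cltv: bool) -> bool:
    if not enforce_cltv:
        return True                 # old node: the opcode is still just a NOP
    # New node: fail unless the spending transaction's locktime has reached
    # the value committed on the stack (both treated here as block heights).
    return tx_locktime >= stack_top
```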

Origin of soft-fork for segwit

In this context, of seeing all these interesting developments happening with soft-forks, I thought it would be neat if we could have a more abstract soft-forking framework, like a plugin kind of thing, where rather than having a few people reviewing every single soft-fork proposal that everyone makes you instead have a screening process and modularize it more, so that it’s possible to have more unit tests and a clear execution pathway. I proposed this idea. We were concerned about the sequence of rules.. so if there’s a set of rules that need to be checked and it doesn’t matter in what order you check them, then you can just chain all the different soft-forks. I thought this would be a good way to generalize this, the execution flow is easy to follow with this architecture, and it’s encapsulated (chat log excerpt: “the execution flow is even easier to follow with this kind of architecture” and “because in the stable consensus code itself the specifics of the rule are encapsulated” and “and in the rule definition itself there’s nothing else BUT the rule definition” and some comments about extension blocks). So the rule definition could be this module that whoever wrote it could write unit tests for and just make sure it works perfectly.

And then luke-jr, just out of the blue as a matter of fact, it was pretty crazy, he just said to me: say, a soft-fork for segregated witness. And I was like, could we really do that with a soft-fork? At this point, Pieter Wuille had been working on a segwit implementation for the Elements Alpha project and it was done there as a hard-fork, and even gmaxwell didn’t think it would be possible to do this as a soft-fork. And according to luke-jr it was obvious that it would be doable as a soft-fork and he blurted out the answer. Note that sipa is Pieter Wuille. I was skeptical about this and luke-jr said in theory it should be possible and sipa was wondering, well, how do you do that? He said it probably entails p2p changes… we could do it with external data blobs and blocks like extension blocks.

Luke-Jr mentioned that it could be done like p2sh’s mechanism in bip16. The changes in transactions referring to each other… so you have to make the transaction not contain scriptsigs. That was the lightbulb moment. It took another 2 weeks for it to sink in. I was just trying to do a simple soft-fork plugin thing and I did not expect segwit to come out of this. I was just thinking about consensus structure and luke-jr ends by breaking soft-fork plugins and then he runs off to bed. This was the end of that chat.

I recently looked this up for a new article that Aaron van Wirdum published in bitcoinmagazine where he looks over the history of segwit. I was trying to find the original chat history for this and just looking back at this it was funny because at this point none of us had realized what we had stumbled across. It wasn’t until a few weeks later that we realized what we found. This happened between the Montreal and Hong Kong scaling bitcoin conferences. At Hong Kong we didn’t have the transaction malleability fix yet and people wanted bigger blocks and lightning, and this solved all these things and we were able to do it with a soft-fork. It totally changed the roadmap at this point.

Eventually came the segregated witness segwit BIPs. There were 7 of them. Only 4 of them here are consensus. bip142 was replaced by bip173 (bech32) that sipa has been working on.

versionbits in practice

Once we had segwit implemented, then we were thinking we’d deploy with versionbits bip9 which made the most sense. We talked with some miners and after discussion it seemed that everyone agreed it would be a good idea to deploy this. ….. bip9 was designed in a way such that instead of incrementing the version every time we do an upgrade, we used different bits in the version number and we could parallelize this and do multiple upgrades at the same time. Here’s the state transition diagram. It was designed so that if it fails, the failure mode was that nothing changes. Status quo was favored. If miners are not going to signal for it then it just fails after a certain timeout. We did not anticipate other stuff like asicboost and other issues that came up. This was assuming that in the worst of cases, okay, no change, stay with the status quo. We were happy with this but it caused some problems because miners started to misconstrue this as a vote.. in litecoin, this is segwit signaling in this chart here, and then the price is over here, and you would have to be stupid to not see a correlation here. Miners were starting to get close to the threshold and then drop their hashrate on it. This is an attack vector and it’s not sustainable forever.
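For reference, a minimal sketch of how bip9 versionbits signalling works: the top three version bits mark the versionbits scheme, each deployment gets its own bit, and a deployment locks in when enough blocks in a retarget period signal it (the window and threshold below are the usual mainnet parameters, shown only for illustration):

```python
# Illustrative bip9 versionbits check, not the actual state machine code.

TOP_MASK = 0xE0000000
TOP_BITS = 0x20000000   # "this block uses versionbits" marker

def signals(block_version: int, bit: int) -> bool:
    return (block_version & TOP_MASK) == TOP_BITS and (block_version >> bit) & 1 == 1

def period_locks_in(block_versions, bit: int, threshold: int = 1916, period: int = 2016) -> bool:
    # At the end of a 2016-block period, count the signalling blocks; reaching
    # the threshold moves the deployment from STARTED to LOCKED_IN.
    assert len(block_versions) == period
    return sum(signals(v, bit) for v in block_versions) >= threshold
```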

Flag dates and user-activated soft-forks

So then we started to think maybe miner-activated soft-forks on their own aren’t good enough. Well what about going back to user-activated soft-forks like the flag date? This is when shaolinfry proposed bip148 which required that miners signal segwit. It was kind of like a rube goldberg device where one device triggers another. This was a way to trigger all the nodes already out there ready to activate. And so we did some game theory analysis on this and actually NicolasDorier did some nice diagrams here. Here’s the decision tree if you decide not to run a bip148 node: if the industry or miners decide not to go along with it, then you get a chain split and possibly a massive reorg. On the other hand if you do run a bip148 node, then they would have to collude for you to get a permanent chain split. The game theory here, it’s a game of chicken yes, and assuming it’s not in their interest to do that, then they will opt to not go for the chain split. But if they do split the chain then it will probably be for reasons related to real economic interests, like bcash, where it’s controversial and some people might wonder. My personal take is that I think it would be inevitable that some miners would have interests that encourage them to have another chain. It didn’t really adversely affect bitcoin too much and some of us got free money from bcash so thank you.

The problem with bip148 is that the segwit2x collusion agreement came up and it was too late to activate with just bip9. James Hilliard proposed a reduction of the activation threshold to 80% using bip91. It was a way to avoid the chain split with the segwit2x collusion agreement, but it did not avoid the chain split with the bcash thing, which I think was inevitable at that point. The bcash hard-fork was a separate proposal unrelated to bip91 and bip8.

Some of the Bitcoin Core developers preferred bip8 over bip148. You can see the distinction here. The main difference is that bip8 does not have a transition from the start state to a failed state. In bip8, miners get a chance to signal and activate, but after a certain threshold rather than going to the failed state it goes to the locked in state. This does not allow miners to stall forever or to stall to the point where it fails. But it does still feed the narrative that miners are activating the fork and it still allows miners to stall for a long time. Others weren’t too happy about that with bip8.
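Under the simplified description above, the bip9-versus-bip8 difference comes down to what happens at the timeout; a minimal sketch:

```python
# Illustrative comparison of the timeout behaviour described above; the real
# BIPs have more states and parameters.

def next_state(state: str, threshold_met: bool, timed_out: bool, use_bip8: bool) -> str:
    if state == "STARTED":
        if threshold_met:
            return "LOCKED_IN"
        if timed_out:
            # bip9: status quo wins, the deployment fails.
            # bip8: the timeout itself locks the deployment in, so miners can
            # delay activation but not block it indefinitely.
            return "LOCKED_IN" if use_bip8 else "FAILED"
        return "STARTED"
    if state == "LOCKED_IN":
        return "ACTIVE"
    return state
```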

Future of soft-fork activation

This is a big dilemma about how we are going to deploy soft-forks in the future. I’m really happy that segwit activated because now at least we can see some protocol development in second layer or higher-layer stuff that does not require tinkering with the consensus layer. I think this will usher in more innovation. At some point we’re probably going to want to add more features to the bitcoin protocol. This is a big philosophical question we’re asking ourselves. Do we do a UASF for the next one? What about a hybrid approach? Miner activated by itself has been ruled out. bip9 we’re not going to use again.

Interesting changes requiring soft-forks

Near-term soft-forkable changes that people have been looking into include things like: Schnorr signatures, signature aggregation, which is much more efficient than the currently used ECDSA scheme. And MASTs for merkleized abstract syntax trees, and there are at least two different proposals for this. MAST allows you to compress scripts where there’s a single execution pathway that might be taken so you can have this entire tree of potential execution pathways and the proof to authorize a transaction only needs to include one particular leaf of the tree.
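The compression MAST gives comes from ordinary merkle proofs: commit to all the possible spending scripts with one root, then reveal only the leaf that is actually used plus its branch. A generic sketch (plain SHA256 hashing, not tied to any particular MAST proposal):

```python
# Generic merkle commitment and single-leaf proof, as an illustration of the
# MAST idea; not a specific BIP's tree construction.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_leaf(leaf: bytes, branch, root: bytes) -> bool:
    # 'branch' is the list of (sibling_hash, sibling_is_right) pairs for the
    # revealed leaf; only this one execution pathway ever gets published.
    node = h(leaf)
    for sibling, sibling_is_right in branch:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root
```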

We’re looking at new script versions. At this point we don’t need to do the OP_DROP stuff, we can add new opcodes and a new scripting language, or replace it with something that’s not a stack machine if we wanted– not that we want to, but in principle we could replace it with another language. It gives us that option. It’s something interesting to think about. Satoshi thought that bitcoin script would be the extensibility mechanism to support smart contracts, and now we’re looking at the opposite: the extensibility comes from the way the proofs are constructed, and adding new scripting languages turns out to be simple.

Potential changes requiring hard-forks

Some changes in the future might require a hard-fork. Here are a few that are on the wishlist. Structural changes to the blockheader could allow for adding an extra nonce or chaining headers so you can add more fields or commit to other kinds of data structures, instead of having to stick stuff into the coinbase transaction.

Withholding attack fixes, proof-of-work changes, things like that.

Another change that would be nice is splitting the transaction inputs and outputs into separate merkle trees. You could construct much shorter proofs. If you’re interested in whether a particular UTXO is in a block, you don’t need to download the entire transaction. This would be a nice little feature.

Also if we do want to increase the blocksize then my particular take is that it seems silly to bump it 2x once, and then blocks fill up and then we hard-fork again? Every time we hit the limit we hard-fork again? We might as well add a percent increase per annum and have something where it grows gradually. Agreeing on the exact numbers and timeframe are obviously some areas where it might cause disagreement.
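As a quick back-of-the-envelope for the per-annum idea (the rate here is purely an example, not a proposal):

```python
# Compounding growth of a hypothetical blocksize limit; example numbers only.

def limit_after(years: int, start_bytes: int = 1_000_000, annual_growth: float = 0.177) -> float:
    return start_bytes * (1 + annual_growth) ** years

print(round(limit_after(4)))    # ~1.9 MB after 4 years (17.7%/yr roughly doubles every 4-5 years)
print(round(limit_after(12)))   # ~7.1 MB after 12 years
```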

Hard-forks and inevitable chain splits

Hard-forks might inevitably cause chain splits. I think this is something that we have come to accept. I think that back in 2015, we thought hard-forks might be a way to upgrade the protocol, assuming everyone has the incentive to switch. We’ve seen several hard-forks now. The ethereum hard-fork is an example where people with different ideologies decided to stick it out and mine the old chain. I think we’re going to see that in bitcoin. Even with a tiny group of people that don’t want the change, we’re going to see a chain split. I think at this point, avoiding a chain split is not an option, but rather we have to work to mitigate the chain splits. This means including things like replay protection, using different address formats, protecting SPV clients, things like that. Market mechanisms and price discovery, liquidity, trying to see whether or not the market actually supports the changes.

Are there situations where the legacy chain will be voluntarily abandoned? I think if we were doing a hard-fork that was a win for everyone and maybe just some guy that just hasn’t connected to bitcoin in years and has a node that he hasn’t spun up… there’s going to be a few of these people out there. There could be a hard-fork where everyone agrees, but still the logistics and coordination is difficult and it probably needs to be done with a lot of anticipation and some mechanism to mitigate any kind of issues there.

Is it practical or ethical to kill the legacy chain? My personal belief is that as long as people want to use the chain, even a few people, then attacking the chain (like mining empty blocks indefinitely) is tantamount to attacking people’s personal property. I think this could be a serious problem.

If both chains survive, which one gets to keep the brand? This is the multi-billion dollar question. My personal view is that whoever is proposing the change, the onus is on them to demonstrate widespread support. The people who want to keep the status quo don’t have to demonstrate anything. The change needs to demonstrate overwhelming support.

Conclusion

I’m not sure how things are going to develop now. We’ve learned a lot in the past few years. Hopefully we will be able to do at least a few more upgrades. If not, then I’m happy we got segwit in so that we can start to do second layer stuff.

Q&A

https://www.youtube.com/watch?v=0WCaoGiAOHE&t=1h8m10s

Q: What about the research going on spoonnet and spoonnet2 for hard-forks?

A: There’s a lot of interesting stuff there. jl2012 has been working on interesting stuff there. I think that at the end of the day there’s still going to be political issues even if you have a technical solution to the logistical issues… if you can’t get people to agree, then you’re still going to have problems. But it’s good to have ideas for replay protection. If you’re going to split, then split for good, and make sure there’s no way to merge again, and make sure you have a good way of coordinating that. This doesn’t fix the political aspects though.

diff --git a/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark/index.html b/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark/index.html index 5a13c02709..44f24aed88 100644 --- a/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark/index.html +++ b/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark/index.html @@ -12,4 +12,4 @@ Adam Back, Elizabeth Stark

Date: September 10, 2017

Transcript By: Bryan Bishop

Tags: Lightning

Category: Conference

Media: https://www.youtube.com/watch?v=0WCaoGiAOHE&t=1h58m3s


https://twitter.com/kanzure/status/1005842855271784449

stark: Thank you Giacomo. Who here had too much fun at the party last night? My voice is going. Hopefully it will stay for this interview. We are lucky to have Adam Back here today, co-founder of Blockstream and inventor of Hashcash (history). He just happened to appear and there was an available spot. Thank you for making this work.

adam3us: Thank you for inviting me.

stark: I’m a co-organizer of this event. I am also founder of Lightning Labs. We thought we wanted to make a really cypherpunk event. Who better to have speaking with us than one of the original cypherpunks. You were involved in the early days. What was going on back then and how did you get involved?

adam3us: At that time, PGP was pretty new. People were pretty excited about the change in the balance of power in that an individual could exchange encrypted email such that a government couldn’t intercept or decrypt them. For many people it was about “doing”. Just building things, rather than seeking permission to build things. This is a concept from startups. Skype didn’t go to seek permission from regulators and telephony is a very regulated space. They built technology, and then users adopted it. Technology that is deployed affects regulations and changes societal norms. Regulations and laws are trailing by maybe 50 years or something. Cypherpunks wanted to effect positive social change by deploying technology without permission. PGP was one of those technologies. Many of these things are based on protecting rights that you have under law. Another form of privacy and freedom is freedom of association; to have free association online you need anonymous communication. The cypherpunks also built the first anonymous remailers. It was through running one of those that I discovered the spam problem. Normally, spam is dealt with by identity, and if you associate identity too much with email then that creates other problems like non-intentional repudiation, audit logs that can show up in court, stuff like that. The remailers at the time allowed you to post on USENET groups. People would spam through the remailer and send to usenet groups and it would get broadcasted everywhere and it would have a high expense to the world. I looked into ways to reduce spam or throttle spam and that’s where hashcash came from. The idea was to have a computational postage stamp that you could create with 30 seconds or a few minutes of work and attach this electronic postage stamp to the email, and the recipients could check it locally with local information. It had good scalability.
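A minimal sketch of that hashcash-style stamp (simplified format and SHA256 here, not the exact hashcash header, which originally used SHA-1): the sender grinds a counter until the hash has enough leading zero bits, and the recipient verifies with a single hash.

```python
# Illustrative hashcash-style proof-of-work stamp; simplified, not the real format.
import hashlib

def _leading_zero_bits_ok(stamp: str, bits: int) -> bool:
    digest = hashlib.sha256(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

def make_stamp(resource: str, bits: int = 20) -> str:
    # Costs on the order of 2**bits hashes to create (about a million for 20 bits).
    counter = 0
    while not _leading_zero_bits_ok(f"{resource}:{counter}", bits):
        counter += 1
    return f"{resource}:{counter}"

def check_stamp(stamp: str, resource: str, bits: int = 20) -> bool:
    # Verification is a single hash plus a prefix check: cheap for the recipient.
    return stamp.startswith(resource + ":") and _leading_zero_bits_ok(stamp, bits)

# stamp = make_stamp("alice@example.com")   # slow to create
# check_stamp(stamp, "alice@example.com")   # instant to verify
```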

stark: Hashcash is one of the few citations in the Satoshi whitepaper. When did you hear about that document and what was your reaction to being cited in there?

adam3us: I think it was in August 2008 or something like that. It might have been the first email that Satoshi sent to someone. I thought it was interesting and I cited for him some work by Wei Dai (b-money) which was an earlier idea using proof-of-work to make a reusable currency. It dated back to 1998, shortly after hashcash. Nick Szabo’s similar idea Bitgold was around from the same time. Both of those things were rough designs without specific blueprints or specifications, and no implementation. I think that Satoshi contacted Wei Dai and that’s how the b-money reference got into the whitepaper. There were previous electronic cash systems dating back to mid 1995. DigiCash had a strong privacy-centric electronic cash using blind signatures by David Chaum. It was pretty interesting. People were excited about it at that time. It was very centralized and it got shut down simply by the company going out of business. I think people took that as a lesson, that decentralization is important for survivability. DigiCash was much more private than bitcoin as such, but not decentralized. I think the lesson was that it’s better to have something deployed as a first step that could survive. The design by Hal Finney and Wei Dai… sorry, I mean Nick Szabo and Wei Dai…. Hal Finney provided a design later. DigiCash had a go at deploying a beta server and they promised they would not issue more than 1 million currency units called beta-bucks. People on the cypherpunk list had a go at bootstrapping value. They figured maybe they could use the promise of the company to not issue more than that, and let’s sell some t-shirts or something and see if we could bootstrap some value. Unfortunately the company went bankrupt and if you had any coins then they would be impossible to verify without the server. The architecture is such that the company has a database of double spends and you require access to that to verify your money. Bitcoin, bitgold and others get around that by requiring a peer-to-peer (p2p) network to broadcast the double spend database, effectively.

stark: What led you to pursue bitcoin full-time? You’ve been involved in these communities, you have a PhD, you were working on other aspects of applied crypto. What brought you to bitcoin?

adam3us: People were working on trying to bootstrap and deploy an electronic cash system that was survivable, p2p, and robust. Of all the cypherpunk tech like remailers, tor, and the things that came before it, one of the gaps was an electronic cash system for paying for these services. People figured tor would be more robust if you could pay for the channel capacity. Tor nodes are run by volunteers. People were viewing electronic cash as the holy grail of cypherpunk technology. When it got deployed and it started to have a market price and show that it could hold a stable value, myself, and everybody else, thought this was interesting. If you think about it, something as simple as social media platforms like blogging and facebook and twitter are credited with having an impact, allowing some parts of the world to have revolutions and regain control of their country for a free society. If just a blogging platform can have a meaningful effect, then a free global internet money… it’s early days now, but in the next 10, 15 years, it would be interesting to see what effects a free internet money would have.

stark: The theme of this conference is a focus on security. What are the biggest security threats to the bitcoin protocol today?

adam3us: Apart from the technical security threats, bitcoin is great and the finality is great, but it depends on computer security which hasn’t been in a great state of affairs, with different operating systems not putting great focus on security. I think we have good solutions with carefully reviewed code and hardware wallets that we heard about yesterday at this conference. I think that’s interesting. There are still gaps in authentication, like the address substitution problem, but that could be addressed with end-to-end authentication. There are some question marks about the social dynamic of blockchains. I think it’s important for security for people to run their own full nodes and verify the transaction set.

stark: Who here runs a full node? Show of hands. Nice.

adam3us: That’s part of the interest with the Blockstream satellite product. It’s a network of satellites; we’re leasing bandwidth on commercial satellites with the idea that the cost of running a full node can be made a lot lower. With maybe $100 of equipment you could have free download of the bitcoin blockchain and transactions in real-time. The cost of running a full node can therefore be made low. Many smartphone wallets like the GreenAddress smartphone wallet can be configured to connect to your own full node over tor. So reducing the cost of receiving the blockchain is not just for emerging markets, it allows for people who can’t afford to run a full node due to bandwidth; it’s also useful for people with high speed internet because it’s more private. There are people monitoring the bitcoin network looking for which IP addresses are broadcasting transactions. This can place you at risk of geolocation, burglary, and other thefts. By using a satellite, it’s completely passive and download-only. The bulk of the data is coming in a passive mode. We don’t know how many people are using it, because we can’t tell. You can send transactions over SMS, tor, or other networks. This leaves a much smaller footprint, it saves you bandwidth, and presents another redundant link to the internet. If you have a network split caused by an undersea cable disruption or there’s a tornado in the southeast US which causes problems for internet connectivity, it’s good for bitcoin security to have a robust network with redundant links and a satellite is a very robust independent link. In phase 1, we have one teleport uplink. In the second phase, by the end of 2017, we will have a couple more uplinks. The satellites can listen to each other in a ring. If the internet connection on one uplink cuts out, then it can download blocks from the neighboring satellites and it can be more robust to local internet failures on the uplinks.

stark: The year is 2017. The Satoshi whitepaper came out 9 years ago and the software came out 8 years ago. What do you think the threats are that we are going to see in the future in 5 or 10 years assuming bitcoin continues on its trajectory?

adam3us: As long as you run your own full node and configure your wallet to point at it, all changes are opt-in. A lot of discussions have been had recently about unusual viewpoints about hashrate defining the protocol. But it doesn’t. The hashrate is a service that follows the economics and it’s the users that provide value to the coins. As long as people are running full nodes, the protocol upgrades can be opt-in. I am optimistic about scale, I think we can adopt new technology going forward. I think the enthusiasm about lightning is interesting for high-scale. There’s also a lot of flexibility and extensibility coming into bitcoin with the script extensions that Eric Lombrozo talked about. I think in the next few years we will see sidechains. The ability to incorporate more sweeping alternative opt-in blockchains that are bitcoin-denominated. You could see tech like SNARKs or zerocash, which provide strong privacy or other novel features that might be difficult to incorporate into bitcoin directly because of technology risk, deployed into a sidechain that is still bitcoin-denominated. Many of the short-term complications about conflict of interest between miners and users will be resolved because, if someone wants to focus on medium-security chains, they could just do that and make a sidechain and set the parameters as they wish and see if it succeeds and draws users in the market.

stark: There’s never a dull moment in bitcoin. There’s been discussion in the community about upgrades and block size. How do you think we can most securely upgrade the bitcoin protocol?

adam3us: I think soft-forks are a safe way to upgrade bitcoin. With hard-forks, it’s opt-in and it’s not clear if everyone will opt-in, thus creating two chains when someone doesn’t opt-in. There are ways to combine soft-forks and hard-forks, called evil forks, which is even more coercive. People didn’t want to talk about it for a while but it’s been public for a while now. It’s basically the idea that you make a new block… Johnson Lau calls this forcenet on the bitcoinhardforkresearch.github.io page… You make a new block, the original block has a transaction that has a hash referring to the new block. The consensus rule is that the original block has to be empty. The original clients and nodes see no transactions. If miners enforce that consensus rule, then it forces a change. In that circumstance, if a change was made that users didn’t like, then they would just soft-fork it out; they would have to make a change to avoid it.

stark: We’ve seen proposals from the community about ways to upgrade bitcoin. How would you do those differently?

adam3us: It’s important that all parts of the community have a voice. I’m not in favor of the segwit2x collusion meeting. Proposals that go beyond being proposals and become edicts are not something I’m in favor of. I think the timeframe is also important. If you ask any of the developers, they would typically want to see 18 months or 2 years lead-time on something with as widespread an impact on all the software and hardware out there as a hard-fork. For a hard-fork, basically every piece of software and equipment needs to upgrade, which requires a lot of lead time. I think replay protection is important. In the bcash proposal, there was mandatory replay protection and the timeframe on that was super rushed. It only got checked in 5 days before their fork date, which is on the extreme end of a short timeframe.

stark: One of the big questions in the community is around fungibility. Can you use the same coin and avoid taint? What is your vision for how we should treat fungibility in bitcoin and what problems are you seeing?

adam3us: Fungibility is an important problem. It’s related to privacy. It can be achieved separately. What provides fungibility in bitcoin today is that mining is decentralized. If miners in one country are ordered by their government to not process some transactions then other miners will process them. If mining gets too centralized then it puts fungibility at risk. If you end up with some coins and maybe some government doesn’t like you, say you’re Wikileaks or you’re a lobbyist or politician, that’s problematic for a currency. You expect that everyone will accept the money no matter who previously had it. Fungibility is important. I think we could have more technology to guarantee fungibility. For someone to block a transaction, they first have to know who made it, in order to block it. The digicash protocol worked this way: each payment was unlinkable to previous payments. The people processing each transaction, even though they were centralized, had no information that would allow them to know if they should block it. They had no way to discriminate. What causes problems for bitcoin is the linkability between transactions and change UTXOs. There are some protocols in flight like lightning which has improved fungibility inside the network. There’s also TumbleBit and ZeroLink, and confidential transactions is also interesting. There are some space overhead problems with that one. Confidential transactions interacts well with coinjoin, and you can join coins without having to worry about the specific amounts making it obvious which inputs correspond to which outputs, because confidential transactions hides the values. You can still validate the blockchain but the values are hidden using a form of homomorphic encryption. The inputs and outputs can add up without revealing the values themselves.
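
For readers who want to see the “inputs and outputs add up without revealing the values” idea concretely, here is a minimal sketch in Python. It is a toy: it uses modular exponentiation with made-up parameters instead of the elliptic curve points and range proofs that real confidential transactions require, so it only demonstrates the homomorphic balance check.

    # Toy Pedersen-style commitment over integers mod p (illustration only, NOT secure;
    # real confidential transactions use elliptic curve points plus range proofs).
    p = 2**127 - 1            # a prime modulus (assumption: toy parameter)
    g, h = 5, 7               # two "generators" (assumption: toy parameters)

    def commit(value, blinding):
        return pow(g, value, p) * pow(h, blinding, p) % p

    # Inputs 60 + 40, outputs 75 + 25, with blinding factors that also balance (11+22 = 14+19).
    in1, in2 = commit(60, 11), commit(40, 22)
    out1, out2 = commit(75, 14), commit(25, 19)

    # Homomorphism: the product of input commitments equals the product of output commitments
    # exactly when both the values and the blinding factors sum to the same totals, so a
    # verifier can check the balance without learning any individual amount.
    assert in1 * in2 % p == out1 * out2 % p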

stark: What are the protocol-related technologies that you are most excited about coming down the road?

adam3us: For the short term it would be great if companies could integrate lightning into their wallets and exchanges and merchants. It solves some usability problems by providing instant confirmations. Also it improves fungibility and privacy. It should be cheaper too because it provides a lot more scale. If lightning provides a high degree of recirculation in the network then that should be equivalent to a much larger block size in terms of throughput or scale which is otherwise implausible. If we put everything on to the blockchain then it would eventually saturate the internet basically. It’s a broadcast mechanism. You have n^2 scaling problems. If we could get 1000x recirculation within lightning then that’s great, but with segwit at around 2 megabytes you would need 2 gigabyte blocks to match that on chain, which is not plausible, and you would only be able to do the validation computations in a data center. If you can’t see the transactions and verify them yourself due to scale, then that’s not far from today’s world where you are trusting a bank that they had an auditor to check their internal ledgers.
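
As a rough sanity check of the numbers mentioned above (the figures are assumptions for illustration, not measurements):

    # ~2 MB effective blocks with segwit; matching 1000x off-chain recirculation with plain
    # broadcast transactions would need roughly 1000x the on-chain capacity.
    block_mb = 2
    recirculation = 1000
    print(block_mb * recirculation, "MB, i.e. roughly 2 GB blocks to match on chain")

    # Broadcast relay also grows with participants: every transaction has to reach all n
    # nodes, so total relay work is roughly n^2 when most of the n nodes are also transacting.
    n = 10_000
    print(n * n, "approximate message deliveries if all n nodes transact once")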

stark: What are the upgrades you would like to see most to the bitcoin chain?

adam3us: It would be interesting to see some more opcodes reintroduced. Script extensibility. The merkle tree proposals are interesting because they provide more privacy, like the MAST proposals. It allows you to have a syntax tree represented in the blockchain where the normal case is just a simple transaction but then there are some hidden extra clauses that can get exercised. If the exceptional situation happens then you reveal the more complex program. If the transaction is multisig then basically as long as both parties agree, the full contract never needs to get revealed, and that saves space on the blockchain, and it’s more private because transactions are indistinguishable unless their clauses are exercised.
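
A hedged sketch of the merkleized script idea described above, in Python. The hashing and the script strings are placeholders (real proposals use tagged hashes and proper script encoding); the point is only that spending the common clause reveals one branch plus a sibling hash, while the unused clauses stay private.

    import hashlib

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # Two spending clauses: the "normal" cooperative clause and a hidden fallback clause.
    clauses = [b"<A> <B> 2-of-2 multisig", b"<A> alone after 1000 blocks"]
    leaves = [H(c) for c in clauses]
    root = H(leaves[0] + leaves[1])          # only this root is committed in the output

    # Spending the normal clause reveals that clause plus its sibling hash; the fallback
    # clause itself is never published unless it is exercised.
    revealed, sibling = clauses[0], leaves[1]
    assert H(H(revealed) + sibling) == root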

stark: Okay, let’s open it up to audience questions. Thanks.

https://www.youtube.com/watch?v=0WCaoGiAOHE&t=2h22m42s


c-lightning developer call

Date: October 18, 2021

Transcript By: Michael Folkson

Tags: Lightning, C lightning

Category: Meetup

Topic: Various topics

Location: Jitsi online

Date: October 18th 2021

Video: No video posted online

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

c-lightning v0.10.2 release

https://medium.com/blockstream/c-lightning-v0-10-2-bitcoin-dust-consensus-rule-33e777d58657

We are nominally past the release date. The nominal release date is usually the 10th of every second month. This time I’m release captain so I am the one who is to blame for any delays. In this case we have two more changes that we are going to apply for the release itself. One being the dust fix. If you’ve been following, two weeks ago there was an announcement about a vulnerability in the specification itself. All implementations were affected. Now we are working on a mitigation. It turns out however that the mitigation that was proposed for the specification is overly complex and has some weird corner cases. We are actually discussing both internally and with the specification itself how to address this exactly. Rusty has a much cleaner solution. We are trying to figure out how we can have this simple dust fix and still be compatible with everybody else who already released theirs. The hot fix has been a bit messy here. Communication could have gone better.

I have been trying to ignore the whole issue. I knew there was an issue, I was like “Everyone else will handle it” and I have learned my lesson. The spec fix doesn’t work in general. It was messy. When I read what they were actually doing, “No we can’t actually do that”. I proposed a much simpler fix. Unfortunately my timing was terrible, I should have done it a month ago. I apologize for that. I read Lisa’s PR and went back and read what the spec said, “Uh oh”.

The RFC thing didn’t get released until a few weeks ago. Our actual RFC patch wasn’t released until very recently.

There was a whole mailing list discussion I was ignoring too which I assume covered this.

It wasn’t quite the same.

The whole mailing list discussion, the details of how to address this were really hidden in there. I didn’t catch it either. I guess we are all to blame in this case. That aside, we are working on a fix and that is the second to last thing that we are going to merge for the release. The other one being the prioritization of the channels by their size. There Rene and I are finishing up our testing, thank you to everyone who is running the paytest plugin. We have been testing with 20,000 payments over the last couple of days. We are currently evaluating them. That will be the headline feature for the release blog post that I am going to write tomorrow. I have a meeting with Rene discussing the results and how to evaluate them and how to showcase what the actual improvements are. It is not always easy in randomized processes to show that there has been an improvement. There definitely has been so we have to come up with a good visualization of what that looks like.

Changing the timing for future meetings

We have been discussing this before, we might want to move this meeting by a couple of hours. I think Rusty is already on summer time, southern hemisphere, and Europe will be on winter time starting next Sunday. We could move it by 2 hours, making it more palatable for us without cutting into Rusty’s already quite busy work day. If there is no opposition I would probably move the next meeting by 2 hours. Rusty will be one hour earlier and everybody else will be one hour earlier as well.

It will be 2 hours earlier when the time changes in Europe?

Yes but in UTC it will be one hour earlier.

It is 19:00 UTC.

The same time as the Lightning protocol meetings. They are 5:30am Adelaide time?

Protocol meetings are in Adelaide time but this one is in UTC.

We want to have different ones?

Now it will change and it will be the same time as the protocol meeting.

We can probably figure out how relative timezone shifts work out eventually.

It is 19:00 UTC.

Lightning Core dev meetings

With that difficulty out of the way let’s continue. Lisa and I were at Core dev last week. We discussed a number of issues. One of the ones that we made some progress on, for me the big highlight, was that there has been some movement on RBF pinning. I think we understand the issue now much better and how we could resolve these kinds of deadlocks where part of the network sees one thing and the other part of the network sees something else. The issue being that there is no total order between transactions. There are two criteria, there is the fee rate and the absolute fee. They don’t always have a clear winner when comparing. You might end up with a split network, one part seeing the low fee rate but really big fee transaction and the other part seeing the high fee rate but really low overall fee one. That’s this eclipse attack that we have seen a couple of times before. For pinning that is really bad because you believe your transaction is going to confirm but in reality it is being blocked by a competing one that you haven’t seen yourself. We are looking into the details of how to solve that. There have been a couple of proposals. Gloria Zhao is working on these RBF policies especially when it comes to package relay. We are looking into how to solve that, still a couple of things to resolve but I am optimistic that we can come to a conclusion that works much better for Lightning. We also had the Lightning dev meetings where we discussed a couple of things, among which is BOLT 12 which will hopefully replace BOLT 11 at some point. We discussed how we could roll out eltoo eventually and what the issues are, various discussions about sighashes that could enable that kind of functionality. It has been a productive week. Sadly that has also slowed down our release cycle a bit. I could have probably done more there. We are still in good time for the release itself. Now I’ve realized that the notes I wrote up before this would have been nice, but I forgot to save them before restarting my notebook.
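
A small illustration of the “no total order” point, with made-up numbers: one conflicting transaction wins on absolute fee, the other on fee rate, so neither cleanly replaces the other under today’s BIP 125-style rules.

    # Two conflicting spends of the same output (illustrative numbers only).
    tx_a = {"fee_sat": 50_000, "vsize_vb": 50_000}   # huge tx: 1 sat/vB but 50,000 sat total fee
    tx_b = {"fee_sat": 10_000, "vsize_vb": 200}      # small tx: 50 sat/vB but only 10,000 sat total

    def feerate(tx):
        return tx["fee_sat"] / tx["vsize_vb"]

    print(feerate(tx_a), feerate(tx_b))        # 1.0 vs 50.0  -> B wins on fee rate
    print(tx_a["fee_sat"], tx_b["fee_sat"])    # 50,000 vs 10,000 -> A wins on absolute fee

    # A mempool already holding A won't accept B (a replacement must not lower the absolute
    # fee), while a mempool that saw B first prefers it on fee rate: the two views can persist.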

Individual updates

I worked on the onion message and now I will be working on the web proxy so I can send the message to a node on the network. I am mostly busy with that.

What is the web proxy onion message exactly? What are you doing there?

The onion message is BOLT 4. The web proxy, I send a message on behalf of the node on the network.

Are you sending onion messages over web proxy to a node?

Yeah eventually. This is the complete everything you need to do BOLT 12 from nothing in a web browser. Javascript that actually can speak the Lightning Network, he’s got that bit. He can speak to a node through a web socket proxy but the next step is to actually be able to construct an onion message to send through the Lightning Network in Javascript. That’s that whole other level, it is BOLT 4, onion message and all that as well. Unfortunately the web socket stuff did not make it into this release. I did update the spec. You need to use a third party web socket proxy. There are some. You can also create your own and c-lightning will ship with one as well that you can turn on if you want to speak web socket.

This means that you could have an app in your browser that could send onion messages to your c-lightning node?

Onion messages to anyone really.

We were talking about a descriptor for web sockets.

The onion message could end up going wherever on the Lightning Network. You could easily send it to a c-lightning node and forward it on.

I just added it to the milestones since we have a couple of changes still in flight we might as well get that one in. It has been in draft since I last saw it so I will review and merge it as soon as possible.

Lightning Core dev meetings (cont.)

My updates from the meetings last week. We had an interesting discussion about channel jamming. We didn’t have any huge insights into how to fix it but there was a proposal to start changing the fees… There were two things there. There are two variables here. One of the problems with channel jamming is that there is a limit on how many HTLCs can be currently inflight on a channel, that is 483. The other problem is that right now every single HTLC that gets committed to a commitment transaction whether it is dust or not, a dust HTLC won’t actually show up in a transaction but the balance still gets updated. It still counts towards that 483 number. What this means in practice is that you could send very small 1 millisat payments and put them on for a very long time period, maybe 2 weeks of timeout. Put 483 of them through a channel, that channel would not be able to process anymore HTLCs. The amount of money that someone would have to commit to a channel and commit to this attack is very, very small. A couple of things that we looked at changing. One, is there a way that we can make the minimum amount of money someone pays to jam a channel higher, make it more expensive? Every payment attempt is kind of like a bond of Bitcoin you’ve committed to a channel at some point. In order to originate a payment you have to have locked Bitcoin in a channel that you’re pushing to a peer. If you look at that as like a bond or an expense to run this attack, if we can make the amount of money you have to commit to Bitcoin higher, make the attack more expensive. Even if you aren’t spending any money you are locking your own capital up for that long. Anything we can do to make that higher makes the attack more expensive. The other thing, is there any way we can make the number 483 larger so that in order to jam a channel you have to send more payments through at whatever rate, that also makes it more expensive. Then you have to commit more capital to a single channel to lock it up. Based off of these two knobs you can turn to make it more expensive to jam a channel, one suggestion was that we no longer count dust HTLCs towards this 483 number. That makes the total number of payments that are eligible to go through a channel larger. It also raises the amount of money that you need to commit in order to get one of those 483 slots. That makes it slightly more expensive. Another thing we looked at is there anything we can do to change the 483 number, make it bigger. I don’t think there are any concrete proposals on how to make it such that you could have larger than 483. I think there were some suggestions that maybe we have trees of HTLCs that you have buckets for. Another suggestion was that maybe there are bands of how big a HTLC you have to have in order to use up certain bands of that capacity. Dust has its own bucket of how many HTLCs you can put through it before it gets jammed and then larger bands. You stop being able to use it based on the amount of HTLC that it is consuming. The other suggestion was to make payments more expensive in terms of capital locked up in them by adding a variable to the fee calculation for a route that puts a dollar value so to speak on the HTLC lockup time. It would cost more to lock up a payment amount for a longer period of time. The CLTV diff of the payment, we add a factor that you multiply by when you are calculating a fee for a payment. That is added at every hop, a percentage of that fee. This makes it more expensive to lock up capital for longer periods of time. 
The ACINQ team had some pushback about making people pay more money for safer payments. I think there is definitely a longer discussion to be had there on the trade-offs for adding that. I thought it was interesting because we also have this HODL invoice thing that we’re trying to disincentivize. If you add an actual cost to that length of time that you allow people to lock up capital for, if the payment fails you don’t actually pay it but at least you have them commit their capital and lock up a little bit more when they attempt it. Making it more expensive for a longer lockup period. It is something worth considering to make those attacks more expensive.
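
As a sketch of the proposal to price the lockup time, one hypothetical shape for the hop fee (names and rates are made up; nothing like this is in the spec today) would be the usual base-plus-proportional fee plus a term that grows with the CLTV delta:

    # Hypothetical routing fee with an extra lockup term (sketch of the idea discussed above).
    def hop_fee_msat(amount_msat, base_fee_msat, prop_millionths,
                     cltv_delta_blocks, lockup_rate_millionths_per_block):
        current = base_fee_msat + amount_msat * prop_millionths // 1_000_000
        lockup = amount_msat * cltv_delta_blocks * lockup_rate_millionths_per_block // 1_000_000
        return current + lockup

    # Locking 100k sat up for 2016 blocks (~2 weeks) costs more than for 144 blocks (~1 day):
    print(hop_fee_msat(100_000_000, 1_000, 100, 144, 1))    # 25,400 msat
    print(hop_fee_msat(100_000_000, 1_000, 100, 2016, 1))   # 212,600 msat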

Individual updates (cont.)

Pinning stuff was interesting, I had an interesting chat about descriptors, just an overview of the state of multisig composition, not really related to Lightning. I’ve been working on the HTLC dust thing, it sounds like we are going to have some changes. There are two interesting things that happened recently. One is that someone’s database went down and they lost all their USB stuff. They had a problem, one of the channels was one where I had done a liquidity lease ad to their node, I had paid them for some capital. If this happens to you and your node goes down, there is a HSM tool called guess_to_remote that I believe darosior wrote. Provided you know which output is yours on the unilateral close transaction that your peer commits to chain, and you have the hsm_secret, it will attempt to guess what your spending key is. The script wasn’t updated for anchor outputs or for liquidity ads. It wasn’t working for the channel that I had with this guy. The reason it wasn’t working: all dual funded channels use anchor outputs, which is the most likely way you will be using anchor outputs on c-lightning today. Anchor outputs changed our to_remote script from a P2WPKH to a P2WSH, so the hashes weren’t matching. We were looking for a pubkey but we should be looking for a witness script. If it has been leased, if the person who closes the channel has leased funds to the person who does a unilateral close, there is an extra thing we need to grind for, how much time was left in the lease. I have a patch to fix this, I just need to test it to make sure it is working as expected. I don’t know if it needs to be in this release, hopefully that will be up soon. Future work, I have got some very delayed stuff that I should have been working on. There is currently a bug: if you are dual funding a channel and your peer decides to RBF it, the acceptor side doesn’t correctly reselect UTXOs to go into that open transaction if you’ve committed funds to it. Successfully calling the RBF code for a dual fund open is very difficult right now, I need to get that fixed for next release. After that I had a good conversation with Christian last week about some stuff for accounting, getting an accounting plugin out. I managed to sketch out some nice updates to the architecture of how we’re doing events. I sat down and went through from first principles how I would do the accounting stuff now as opposed to a year and a half ago when I did it the first time. I think I am going to make some changes to the event structure, I am fairly certain very few people are using it. I don’t think it will be a huge breaking change. Luckily I think there is a version there, I’ll add a new version, the struct already has a version. Hopefully that will be for next release. I think I’m going to get a splicing proof of concept done. All the pieces are there, the dual funding was a lot of heavy lifting, we’ve got a PR draft of it done. I spent some time this morning pulling in the draft stuff and the drafted messages. It looks like there are a couple of message definitions missing from the draft, it needs to be cleaned up. I don’t think it is going to be too much to get that done for the next release.
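
To illustrate what the extra grinding means in practice, here is a heavily hedged sketch of the search a guess_to_remote-style tool has to do once anchors and leases are involved. The script strings and the 4032-block bound are placeholders, not real Bitcoin script serialization; the real tool derives the key from hsm_secret and builds the actual witness scripts.

    import hashlib

    def hash160(b: bytes) -> bytes:
        # needs ripemd160 support in hashlib; fine on most builds
        return hashlib.new("ripemd160", hashlib.sha256(b).digest()).digest()

    def sha256(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def candidate_programs(remote_pubkey: bytes, max_lease_blocks: int = 4032):
        yield hash160(remote_pubkey)                                   # pre-anchor: P2WPKH of the key
        yield sha256(remote_pubkey + b" OP_CHECKSIGVERIFY 1 OP_CSV")   # anchors: P2WSH, 1-block CSV
        for lease_csv in range(max_lease_blocks + 1):                  # leased channel: CSV to grind
            yield sha256(remote_pubkey + b" OP_CHECKSIGVERIFY %d OP_CSV" % lease_csv)

    def find_our_output(output_program: bytes, remote_pubkey: bytes) -> bool:
        # Compare each candidate against the scriptPubKey program seen on chain.
        return any(prog == output_program for prog in candidate_programs(remote_pubkey))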

That user who had his USB stick deleted, he was completely without a database, thanks to Lisa’s help he was able to recover 99 percent of his funds. That’s a huge win for us. We might be looking into formalizing those recovery steps and make it easier in future. Although we don’t want to encourage people too much to run all of their infrastructure off of a single USB stick.

LND do something called static channel backups, it is literally a list of everyone you have a channel with. Someone has written a tool so that every time you open or close a channel it updates this list and sends it to you on Telegram. LND has written a workflow where you give it the list of peers you have channels with and walks through a reconnection attempt with them which causes the other peer to do the channel closure. c-lightning will do this if you have an old version of your database and you start up from it. Since an old version of your database has a list of what channels you are connected to at the time of the snapshot, it will walk through a list of them, attempt to reconnect to them and then do the exact same thing as you would get with LND when you attempt to use your static channel backup. We don’t have as much tooling as the LND solution does. I think a lot of people don’t like that as much.

Someone lost a lot of money buying liquidity using c-lightning last week. There is an experimental feature that you can run, if you have experimental dual funding turned on which is liquidity ads, these are cool because they let you advertise how much money you want someone to pay you for you to put money into a channel when they open it with you. You can lease out your Bitcoin to a peer for a rate over like a month. Every lease is 4000 blocks long. Someone did not read the ad. In order to accept an ad, there is a command you can use on your gossip and it will print out all the ads that are there and their JSON objects. They tell you the base fee rate and a rate that you pay per basis point, per amount of capital that you add. In order to accept a rate you have to copy out a condensed form of this ad and put it in the command that you run to fund the channel. You have at least acknowledged that you’ve looked at the ad and have copied out a part of it when you go to take up the person’s offer of liquidity. Someone had done a couple of them, hadn’t been reading the ads and some enterprising c-lightning node runner decided that their base fee for making liquidity ads was going to be 0.1 Bitcoin. Someone copied the liquidity ad and sent it to the peer and paid them 0.1 Bitcoin as their base fee rate for accepting liquidity. They were upset when the liquidity ad pushed all my money to the other side. The nice thing about liquidity ads is your fee ends up being more inbound liquidity for you. It is quite an expensive way to get inbound liquidity but you can now use it as inbound liquidity. I don’t know if this guy managed to find the person who put this in, politely ask them to pay him back, push the money back over the channel. I suggested this. My warning is there are people who don’t read the ads. Maybe you should try setting up liquidity ads to see how much money you can make. The other side is if you are accepting a liquidity ad maybe you should read it. I have heard a rumor that one of the Lightning router service websites is adding a slider so you can look through all the nodes that are offering liquidity ads and figure out how much it will cost to take up each of those ads. Hopefully there will be a pretty UI and a third party tool for these soon. Exciting to see that people are using it even if they are getting hosed.
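
A rough sketch of how that kind of ad adds up, assuming the fee is simply a base amount plus a basis-points rate on the leased capital (the actual liquidity-ads draft also accounts for funding weight, so treat this as illustrative only):

    def lease_fee_sat(base_fee_sat, lease_fee_basis_points, leased_amount_sat):
        # buyer pays the advertised base fee plus a proportional charge on the leased amount
        return base_fee_sat + leased_amount_sat * lease_fee_basis_points // 10_000

    # The incident described above: a 0.1 BTC (10,000,000 sat) base fee dwarfs everything else.
    print(lease_fee_sat(10_000_000, 25, 1_000_000))   # 10,002,500 sat for a 0.01 BTC lease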

Thank you for spending time looking at the database optimizations, how to clean it up. I am running always the latest version so everything works.

We are also looking into alternatives for the backup because the current backup code just takes a snapshot and applies incremental changes. That can grow over time. We’ve added compaction already and a smaller database overall is definitely something that we might be able to replicate in the backup as well.

vincenzopalazzo: I am working on some pull request review and also a small change to c-lightning. I am also working on a library that works with c-lightning.

I have undrafted the DNS address descriptor PR and it can be reviewed now. I also started working on the announce remote address feature which I implemented in our code. There is another PR where your node now reports to the other side, in response to a connection, what the remote IP was. Now we can decide what to do when we get this information. Proposals please. There were some discussions already on just sending every remote address, unless it is a private network obviously, and announcing it up to a certain limit. No checks at all because it is not so problematic to announce a few incorrect addresses. Different options are to wait for a number of same responses or a threshold, a minimum number of peers that are responding with the same remote address. Or even try stuff like making a connection to the address in order to check if it responds. That might not always work.

I’m pretty sure my router doesn’t allow me to connect from inside of the network to the NAT address. I won’t be the only one. To reiterate the issues that there might be, what we want to avoid is trusting a peer on what address they see us as and then announcing that. It may lead to connection delays if they told us a lie. It is not a security issue because the connections are authenticated and we make sure that we are talking to the node that we expect to be talking to. It is only an attempt to avoid some sort of denial of service attack where I tell you “You are this node in China” and this node in China doesn’t actually run a c-lightning node.

You can report addresses like the FBI’s address and then your node is starting to make connections to the FBI address?

No it is I tell you that you are the FBI.

We had this DNS stuff, when you have a lot of channels, a lot of peers, and somehow you are tricked into believing a different address and you announce that. A lot of people are now making a connection attempt there.

If it is a black holed IP then that can lead to considerable delays. If it is a reachable IP address then it will fail after a couple of seconds.

For black holed IP how is it delayed?

For black holed IPs you are sending a SYN packet that never ever gets a reply. Whereas if there is some device at the end of the IP address then you get a RST packet right away. It is one round trip to discover that there is nothing there. If it is black holed it may hang for several minutes, depending on your kernel settings. I think it is a minute by default on most modern Linux kernels.
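
If a node did want to probe a candidate address without risking that multi-minute hang, a bounded connection attempt is enough (host and port here are placeholders):

    import socket

    def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        # Cap the wait so a black-holed address can't hang for the kernel's SYN retry window.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:        # refused (RST) or timed out (no reply at all)
            return False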

Should we check that? Is it reasonable to wait that long?

If you open a connection to somewhere it spawns a separate openingd process that can hang forever and doesn’t interfere with the rest of the operation. It is not really that much of an issue, it is only when you count on being able to contact that node. I wouldn’t add too much complexity to prevent this.

If you have multiple addresses launch 3 openingd and wait for the first one?

Sort of. We only ever allow one.

I think I would wait for 3 nodes to announce the same address or 10 percent or something like this and then just announce it.
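
A minimal sketch of that “wait for agreement” policy, assuming we simply count identical reports from peers and ignore private ranges; the threshold of 3 is the made-up knob mentioned above:

    import ipaddress
    from collections import Counter

    def is_private(address: str) -> bool:
        return ipaddress.ip_address(address.rsplit(":", 1)[0]).is_private

    class RemoteAddressTracker:
        def __init__(self, min_reports: int = 3):
            self.reports = Counter()
            self.min_reports = min_reports

        def peer_reported(self, address: str):
            # Record a peer's view of our address; return it once enough peers agree.
            if is_private(address):
                return None
            self.reports[address] += 1
            best, count = self.reports.most_common(1)[0]
            return best if count >= self.min_reports else None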

There is also interference with the command line option. If we set a hostname via the command line options that should always stick around and not be replaced by whatever our peers tell us. I guess you’ve probably thought about that already.

I am very happy with the current address setting for c-lightning. I am happy to set the exact IP address and I can use ways to figure out my IP address very easily when it changes. I was doing some tests at home and it took me 3 minutes when I restarted a DSL router to announce the new IP address. The time was the DSL connection initiation. Immediately when the internet works I check what’s my IP address and then announce my correct IP address. This requires restarting lightningd.

We should fix this. We need some background job that scans for change network configuration.

Or we can make a RPC. The current way of doing things is not going away. Your solution will definitely continue working. DNS is just useful for users that might not have that sort of access to their router or not have the right tools to get to it. DNS is definitely a nice usability feature for home users. If you want to do something more elaborate that will always be an option.

I like the idea of a RPC command for changing the address while the daemon is running.

We need some dynamic way without stopping, that would be good.

If it is possible. I don’t know all the details.

Everything is possible. t-bast implemented one of the things on ACINQ. Since those are protocol dependent am I just testing with him then?

Yeah. The specification process is exactly as you described. We usually have a proposal, then we have two implementations to make sure everything is inside the proposal, everything is well laid out and we can build a second implementation based on the proposal. Once two or more implementations have cross verified that they speak the same protocol then we can merge it usually.

We don’t merge it earlier in our code?

We could although it always has the risk that it might get changed before it gets merged. That is usually where we hide stuff behind the experimental flags. The web socket isn’t based on a RFC if I remember correctly. That’s exactly the risk that we have. Things might change based on the feedback that we get from other implementations. If we roll out the status quo we might end up in a situation where we have to change the stuff later on. Then we suddenly have two implementations that compete. We’ve had that with the onion messages but for the DNS entries in the gossip messages, those stick around a bit longer so I’d like to move that after the milestone so that we don’t have a release that has a feature that is about to get into the specification and might have to slightly change. I don’t think it is going to change here but better to be safe than sorry in this case.

We have successfully released the Raspiblitz version which now includes c-lightning. We had a Twitter Spaces with Christian, that went well, the recording will be released. It was very focused on the c-lightning implementation. There are always small things here and there that can be improved, that is why we iterate. I have seen a couple of people using it so trying to collect as much feedback as possible. We have an update feature so when there is a new release from c-lightning it should be a smooth update within the Raspiblitz menu. It went as planned I think. I have been keeping an eye on the c-lightning Telegram channel. Regarding the backups, it is great that you can achieve the same restoring of the database but the database can be very large. You wouldn’t have that sent by an email or a quick sync to the server or even a Telegram bot. Given that people are used to it, if they switch to try out c-lightning they would want to have something similar there.

That has been a huge release, thank you for inviting me to the Twitter Space. You have a busy Telegram chat.

It is like 1600 people. Some of them are technical so they come with suggestions or custom things that they have done. That is how we learn. A lot of them become contributors.

It is exactly that kind of feedback that we are looking for so that is awesome. I do think we can emulate the static channel backups quite easily by building some tooling around guess_to_remote. It is always a bit of a balance between making it too easy for users to skimp on stuff and not do what they should be doing. My hope is that by doing a similar tool we can have a belt and suspenders deal instead of going with a single belt.

I think the threat of force closing channels, especially during a higher fee environment, is enough to make people want to preserve their channel database. All the liquidity that they have built up if they are running a routing node, a lot to worry about. If there are recurring questions I try to update that c-lightning FAQ on the Raspiblitz. There are a couple of things coming up there and will be extended.

That’s excellent. Especially with the user that lost their full database, we definitely felt the need for static channel backups. By making it easier to run the backup plugin as well, less space, more efficient, less churn on the SD card or wherever you store that. We should also make it much easier for people to choose the right solution instead of the easy solution.

I wanted to ask about the paytest plugin, it is trying to implement the Pickhardt payment?

Yes we do have a pull request 4567 which is a first step towards the Pickhardt payments idea of biasing towards using larger channels. That has in the model of that paper a higher probability of succeeding the payment. While we aren’t using the splitting strategy just yet we do already prioritize the channels according to Rene’s proposal. Like I mentioned before we are looking into evaluating that and if it works out that together with the optimal splitting strategy that Rene proposes we get even more we might end up implementing the entirety of that. The paytest plugin itself is a generic system to test how well performing our pay plugin is. What it does is it creates an invoice as if it were from the destination. If I want to test my reachability from me to you then you start the paytest plugin and I start the paytest plugin, I tell my paytest plugin to test a payment going to you. My paytest plugin creates an invoice as if it were coming from you and your paytest plugin will then receive incoming HTLCs and hold onto them for up to a minute. Then it reports back if the real payment would have succeeded or not. On the recipient side the only part that is needed is this holding on and collecting all of the pieces before letting them go again. Without that your node would immediately say “I don’t know what this is about” and tell me to shut up. This is generic and can be used for c-lightning now and I am hoping that lnd and eclair might be offering a similar facility eventually so that we can get a larger view of what the reachability and performance looks like in the network. We will use that going forward to experiment with the parameterization of our pay plugin and different mechanisms of splitting and prioritizing channels and all that kind of stuff. This is the first step but running a paytest plugin will help us make incremental changes in the future. There is also discussion on how to measure this stuff and what kind of bias we should be using. It is a very interesting discussion.
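
For a flavour of the recipient side, here is a heavily simplified sketch using the c-lightning htlc_accepted plugin hook via pyln-client. The real paytest plugin does much more (creating the invoice on the sender’s behalf, collecting all the parts, reporting results); this only shows the “hold, then fail back” shape, and the hook details are from memory, so check the plugin documentation before relying on them.

    #!/usr/bin/env python3
    import time
    from pyln.client import Plugin

    plugin = Plugin()
    HOLD_SECONDS = 60   # hold incoming test HTLCs for up to a minute

    @plugin.hook("htlc_accepted")
    def on_htlc_accepted(onion, htlc, plugin, **kwargs):
        plugin.log("holding a test HTLC for up to %d seconds" % HOLD_SECONDS)
        time.sleep(HOLD_SECONDS)      # the HTLC stays pending while the hook is blocked
        return {"result": "fail"}     # it was only a probe, so fail it back instead of claiming

    plugin.run()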

Last week I opened the PR for GraphQL and got lots of great review comments on that. This week I am working on implementing those changes. I was also following the c-lightning chat and I was almost getting angry seeing some of the negative comments. It made me want to multiply my efforts to fix some of these issues, legitimate things that are hanging people up. c-lightning is a great project and deserves a lot of attention. I am happy to be able to learn a little bit more and hopefully to contribute more as time goes on.

What are you referring to on the negative comments? You mean on IRC?

In the Telegram chat, on the database topic mainly. Basically reports on platforms that were moving off of c-lightning in favor of LND and things like that.

Most people are moving from c-lightning to ACINQ because they came from LND to c-lightning being frustrated. We do have a couple of people in the channel that got burnt badly in the past. There is a certain resentment, it is something that I keep in mind whenever I read those stories. We try to do our best but we can’t do everything.

I made a short script which gets you an invoice when you have LNURL text. I was missing this, we don’t support LNURL but sometimes people communicate this to get payments. It is very easy, I will be happy to share and help if anyone has questions on how to get an invoice from LNURL.

We are getting a couple of proposals that are sort of competing, some are complementary. We will probably end up supporting everything and nothing. Having a way to bridge from one standard to the other is definitely something that is very much valuable.

If you know the LNBits project which is kind of accounting software that allows you to create multiple accounts on top of a funding source. The funding source could be c-lightning, it would be a next step to get it into the Raspiblitz because we already have LNBits on LND. That has a lot of full scale LNURL support with a plugin system. LNURL is probably something higher level than something that would need to be plugged into the protocol or the base layer.

I also got into contact with one of the authors of LNBits looking for help to get full hosted channels working. This would extend this accounting feature to also include keysends which as far as I understood are currently not possible with today’s LNBits. LNBits basically differentiates who the recipient of an incoming payment is based on who created the invoice that got paid. With full hosted channel support we could have virtual nodes really be part of a c-lightning node. Then you could run a single c-lightning node for your friends and family. This reminds me I still need to set up an appointment with the author.

fiatjaf is a big contributor to LNBits and it is Ben Arc who started it, there are a couple of others who are working on it.

Full RBF in Core

One of the discussions at the Core dev meeting was on getting full RBF in Core. This is obviously going to be challenging because some businesses, I’m not sure how many, do use zero confirmation transactions. Thoughts on how important this would be for the Lightning protocol to get full RBF in Core?

We are already moving towards RBF on Lightning with the dual funded channel proposal. All those transactions that we create are RBFable. I feel like the question you are asking, not explicitly, is the zero conf channel proposal which is currently spec’ed. We didn’t talk about it but full RBF would interact quite poorly with zero conf channels. RBF means that any transaction that gets published to the mempool can then be replaced before it is mined in a block. Zero conf kind of assumes that whatever transaction you publish to the mempool will end up in a block. There is tension there. I don’t think there is an easy answer to that other than maybe zero conf channels aren’t really meant for general consumption. The general idea with zero conf in general is that it is between two semi trusted parties. I don’t think that’s a great answer but I think there is definitely a serious concern there where zero conf channels are concerned.

I was speaking to someone at ACINQ and they said they do use zero conf channels but there is no risk because all the funds are on the side of the party who could possibly cheat. I don’t know about other businesses who use zero conf channels.

Breez uses it. The logic is whoever opens the channel would have change from the opening transaction and would be able to send a CPFP to have it confirmed.

Now I’m thinking about it, any opening transaction, at this point there is only funds from one side. Only one side would be eligible to RBF it. You would only need to worry about it when two parties fund it. The dual funded channel would be the place where you worried about the counterparty RBFing you. Currently those are RBFable anyway. Having RBF everywhere wouldn’t be a new consideration, at least on channel opening. There might be other places where RBF is a problem, maybe on closes?

I guess dual funding is also not really an issue because in dual funding you would have to have both parties collude to RBF themselves. They would know that they are RBFing and know the new alias. There really isn’t a risk of one party providing funds and then the other party RBFing it without telling the first one. We end up with a wrong funding transaction ID which has happened a couple of times in the past when people were using the PSBTs and RBF bumping those.

That is not totally true because you could RBF an open and create a different transaction that is not an open transaction with the same input. It is not that they would RBF it and make a new open channel but if they RBF it and use it elsewhere and you have already been using that channel. It gets replaced by a channel that is unrelated, then you’re in trouble right?

If you use the channel before it is confirmed that is true of course.

A channel is opened to you, you receive a payment and then the channel is gone. Your received payment is gone.

Zero conf channels are a pretty specific use case anyway that does require a certain amount of trust. Definitely don’t run everything on zero conf.

Not the suggested default policy.

For example RBF would be really useful for Greenlight. We have the routing node and we have the user node. If the user node was to try to mess with our zero conf channels we can prevent them from doing that. The API doesn’t allow that so we are safe on that side. We would forward payments but we would have the ability to track that much more closely than any node. We are basically rebuilding the ACINQ model here.

I need to understand how Breez uses it. It is weird with policy because obviously someone could just make a different version of Core and change the RBF policy in that version. There’s that consideration. There’s also the consideration that the vast majority of nodes on the network are Core, so how much of a security guarantee is that if you flip a large percentage of the network to do full RBF rather than opt-in RBF.

Full RBF is definitely what miners might choose. The idea is that we want to keep that gentleman’s agreement as long as possible because there are users that rely on it. But eventually we might end up using it because miners will ask for it. Hopefully we can inform users in time to move away from a trust model that was not incentive aligned so to speak.

RBF is really good for miners because it gives people the opportunity to bid more on their transactions. If transaction fees start going up then everyone all of a sudden has the ability to start bidding more for block space. Where before they didn’t have that opportunity, you’d have had to add more bytes with CPFP. I am RBF bullish.

CPFP leads to a situation where the mempool is clogged and that would help the miners even more. I bet the miners like RBF less.

Sort of.

It is more optimal than just creating more transactions with higher fees.

The average fee rate is still going to be that, and having less throughput is definitely bad news for miners because it might be seen as an ecosystem weakness. Whether they are rational and take all things into account, that is a different story I guess. From a user side RBF is definitely something that we want because it allows us to lowball fees initially and, as we learn more about the mempool and how full the blocks were over the last couple of minutes, we can fee bump. We don’t have to guesstimate where the fee might be in a couple of hours so that we can get it confirmed. We can start low and move up. It takes a lot of guess work out of the transaction confirmations. And makes a fair fee market as well. Price discovery.
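
A hedged sketch of that start-low-then-bump flow against Bitcoin Core’s JSON-RPC, assuming the wallet is configured to signal opt-in RBF (for example via -walletrbf); the URL, credentials and address are placeholders:

    import base64, json, urllib.request

    RPC_URL = "http://127.0.0.1:8332"                          # placeholder node endpoint
    AUTH = "Basic " + base64.b64encode(b"user:pass").decode()  # placeholder credentials

    def rpc(method, params=None):
        body = json.dumps({"jsonrpc": "1.0", "id": "bump", "method": method,
                           "params": params or []}).encode()
        req = urllib.request.Request(RPC_URL, body, {"Content-Type": "application/json",
                                                     "Authorization": AUTH})
        return json.loads(urllib.request.urlopen(req).read())["result"]

    # Lowball the initial bid, then re-bid for block space only if it hasn't confirmed yet.
    txid = rpc("sendtoaddress", ["bc1qexampleaddress0000000000000000000000", 0.01])
    if rpc("gettransaction", [txid])["confirmations"] == 0:
        print("replacement:", rpc("bumpfee", [txid])["txid"])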

You can’t have c-lightning updating the Bitcoin Core client to opt into RBF? They would have to do that manually?

I don’t even know if there is a flag to enable full RBF in Bitcoin Core as of now. Is there?

You can opt in to using it.

Right now you wouldn’t want to RBF your channel opens at all. Don’t RBF your channel opens right now. If you are in dual funded channels you can RBF to your heart’s content, it is part of the protocol. Just make sure you are going through the actual RPC commands to make it happen and you are fine. On normal v1 channel opens you can’t RBF it because part of the open is that you tell your peer what the transaction ID they should be looking for is and RBF changes the txid. All your commitment transactions will be invalid so any money you’ve committed to that channel will be lost forever. So don’t RBF channel opens now even if you can do it. It is not a good thing to do.

All channel opens are RBF enabled, as far as I know, LND for sure.

But you should definitely not use it.

There has been a red letter warning to not RBF any opens, absolutely.

Now I’m thinking about it, it is probably best policy to make the transactions our wallet builds not RBFable by default. That is hard because of the way stuff is constructed. Probably the best policy when you are building an open transaction, if it is a v1 open, is to make it not RBFable. By default all the v2 ones are.

Dual funding is safe for RBF. We should get that merged sometime into the spec.

It is only safe because when you redo the dual funding thing you and your peer both update your commitment transaction. There is a part where you do the commitment transaction sig exchange on the RBF transaction that you built. That is why it is safe. You update your expectations for commit sigs. This is how you get your money out of a 2-of-2 funding transaction. You have to update your commitment sigs.

When we finally get dual funding merged and out of experimental, since single funding is the trivial case of dual funding, is dual funding replacing single funding? Or will there be a RPC command that uses the dual funding code with an option saying this is actually single funding?

By default if both peers are signaling that they support the v2 channel opens every open goes through the v2 thing. Right now you use fundchannel and depending on what your peer supports it either uses the current v1 protocol or the v2 protocol. It transparently upgrades you so to speak. When you go through the v2 transaction thing the other side might opt not to include inputs and outputs in which case the transaction you build will be the same as if it was through the v1 stuff.

What is v1 referring to?

Dual funding is the technical name, the v2 channel establishment protocol. Right now when you establish a channel there is a series of messages that you and your peer exchange. When we are moving to v2 we’ve changed what messages we exchange and the order that they happen in and the content of the messages. The end result is that you end up with a channel which is the same as the v1 set of messages. The only difference is that in the v2 your peer has the option to also contribute inputs and outputs to the funding transaction. This is what enables you to do liquidity leases or liquidity ads because they now have the option to also add funds to the transaction.

Dual funding is considered v2 rather than this is the second version of dual funding.

That’s right. Dual funding is possible on the v2 protocol.

The v1 code will stay around for a very long time.

The idea is that if every node is supporting v2 then we would take the v1 code out. But that requires every node on the network to support v2 at which point we could remove v1.

Quite a while.

But you can never be sure?

I think we would be breaking compatibility with very old nodes.

We finished a Chaincode seminar on Bitcoin protocol development last week. Generally what I got from it is that we don’t do such things in Bitcoin. We don’t do backward incompatible changes.

That’s one of the advantages of not having a global consensus like Bitcoin does. We are able to do quick iterations with breaking changes without endangering the security of everybody. The difference between Bitcoin onchain and offchain is exactly that Bitcoin requires us to have quasi perfect backwards compatibility because otherwise we are breaking other implementations and we are introducing the risk of having an error in the consensus itself which would result in part of the network hard forking off. That is obviously bad and that is why there is mostly one implementation for Bitcoin itself. At least for the consensus critical part it is good to have consistent behavior even if that is sometimes wrong. Like it has been for CHECKMULTISIG. In offchain protocols we are dealing with peer-to-peer communication where we can agree on speaking a completely different protocol without breaking anyone else in the network. We have brought back that flexibility to do quick iteration, experimental stuff and deprecate stuff that we no longer require exactly because we move away from having a global consistent consensus. Now we are more peer-to-peer based.

Can you know what other nodes are running on the Lightning Network?

Yes. Those features are announced in the node announcement. We therefore know what part of the public network at least is speaking what sort of extensions. Once the number of nodes in the network that speak only v1 drops below 1 percent we can be reasonably sure that we will not encounter any nodes that don’t speak the v2 protocol.

Is there any chance of congestion in gossip if there are thousands of nodes around the world? It seems like everyone knows everything?

The issue of congested gossip is definitely there. Ultimately we don’t want every node to announce, simply because they might not be online 24/7. We will see a separation into the different concerns. There will be nodes that are more routing focused, those will announce their availability and their channels. There will be a large cloud of users that are not stable enough to route or value their privacy and therefore don’t announce this kind of information to the wider world. Ultimately we think there will be this sort of split between public nodes that are available for routing and private or unannounced nodes that consume the routing services of others and don’t announce. Currently almost everybody is announcing.

I think the Phoenix and mobile wallets are not announcing?

Exactly. That’s also why whenever we do a census of what everybody is running, we had a paper a couple of months ago from the University of Vienna that I co-wrote, that always specifies that we are only looking at the public part of the network. We do have some ways of finding traces onchain about channels that might have existed but there is no way for us to attribute that to a specific implementation. Whenever we do a measurement about what part of the network runs which implementation we need to be very careful about saying “This is just the announced part”. We can reasonably believe that it is representative for the kind of features we have. Especially for ACINQ they have more visibility into what version a node is using.

Category: Meetup

Topic: Various topics

Location: Jitsi online

Date: October 18th 2021

Video: No video posted online

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

c-lightning v0.10.2 release

https://medium.com/blockstream/c-lightning-v0-10-2-bitcoin-dust-consensus-rule-33e777d58657

We are nominally past the release date. The nominal release date is usually the 10th of every second month. This time I’m release captain so I am the one who is to blame for any delays. In this case we have two more changes that we are going to apply for the release itself. One being the dust fix. If you’ve been following along, two weeks ago there was an announcement about a vulnerability in the specification itself. All implementations were affected. Now we are working on a mitigation. It turns out however that the mitigation that was proposed for the specification is overly complex and has some weird corner cases. We are actually discussing, both internally and on the specification side, how to address this exactly. Rusty has a much cleaner solution. We are trying to figure out how we can have this simple dust fix and still be compatible with everybody else who already released theirs. The hot fix has been a bit messy here. Communication could have gone better.

I have been trying to ignore the whole issue. I knew there was an issue, I was like “Everyone else will handle it” and I have learned my lesson. The spec fix doesn’t work in general. It was messy. When I read what they were actually doing I thought “No, we can’t actually do that”. I proposed a much simpler fix. Unfortunately my timing was terrible, I should have done it a month ago. I apologize for that. I read Lisa’s PR and went back and read what the spec said and thought “Uh oh”.

The RFC thing didn’t get released until a few weeks ago. Our actual RFC patch wasn’t released until very recently.

There was a whole mailing list discussion I was ignoring too which I assume covered this.

It wasn’t quite the same.

The whole mailing list discussion, the details of how to address this were really hidden in there. I didn’t catch it either. I guess we are all to blame in this case. That aside, we are working on a fix and that is the second to last thing that we are going to merge for the release. The other one being the prioritization of the channels by their size. There Rene and I are finishing up our testing, thank you to everyone who is running the paytest plugin. We have been testing with 20,000 payments over the last couple of days. We are currently evaluating them. That will be the headline feature for the release blog post that I am going to write tomorrow. I have a meeting with Rene discussing the results and how to evaluate them and how to showcase what the actual improvements are. It is not always easy in randomized processes to show that there has been an improvement. There definitely has been so we have to come up with a good visualization of what that looks like.

Changing the timing for future meetings

We have discussed this before, we might want to move this meeting a couple of hours. I think Rusty is already on summer time in the southern hemisphere and Europe will be on winter time starting next Sunday. We could move it by 2 hours, making it more palatable for us without cutting into Rusty’s already quite busy work day. If there is no opposition I would probably move the next meeting by 2 hours. Rusty will be one hour earlier and everybody else will be one hour earlier as well.

It will be 2 hours earlier when the time changes in Europe?

Yes, but in UTC it will be one hour earlier.

It is 19:00 UTC.

The same time as the Lightning protocol meetings. They are at 5:30am Adelaide time?

Protocol meetings are in Adelaide time but this one is in UTC.

We want to have different ones?

Now it will change and it will be the same time as the protocol meeting.

We can probably figure out how relative timezone shifts work out eventually.

It is 19:00 UTC.

Lightning Core dev meetings

With that difficulty out of the way let’s continue. Lisa and I were at Core dev last week. We discussed a number of issues. One of the ones that we made some progress on, for me the big highlight, was that there has been some movement on RBF pinning. I think we understand the issue now much better and how we could resolve these kinds of deadlocks where part of the network sees one thing and the other part of the network sees something else. The issue being that there is no total order between transactions. There are two criteria, there is a fee rate and an absolute fee. They don’t always have a clear winner when comparing. You might end up with a split network, one part seeing the transaction with the low fee rate but really big absolute fee and the other part seeing the one with the high fee rate but really low overall fee. That’s this eclipse attack that we have seen a couple of times before. For pinning that is really bad because you believe your transaction is going to confirm but in reality it is being blocked by a competing one that you haven’t seen yourself. We are looking into the details of how to solve that. There have been a couple of proposals. Gloria Zhao is working on these RBF policies especially when it comes to package relay. We are looking into how to solve that, still a couple of things to resolve but I am optimistic that we can come to a conclusion that works much better for Lightning. We also had the Lightning dev meetings where we discussed a couple of things, among which is BOLT 12, which should replace BOLT 11 at some point. We discussed how we could roll out eltoo eventually and what the issues are, various discussions about sighashes that could enable that kind of functionality. It has been a productive week. Sadly that has also slowed down our release cycle a bit. I could have probably done more there. We are still in good time for the release itself. Now I’ve realized that the notes I wrote up before this would have been nice to have; I forgot to save them before restarting my notebook.
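To make the “no total order” point concrete, here is a toy comparison with made-up numbers: one conflicting transaction wins on fee rate, the other on absolute fee, so nodes weighing the two criteria differently can prefer different transactions.

```python
# Toy illustration (made-up numbers): two conflicting transactions where
# neither dominates the other on both RBF criteria at once.
tx_a = {"fee_sat": 500,  "vsize_vb": 200}    # small tx, high fee rate
tx_b = {"fee_sat": 2000, "vsize_vb": 20000}  # huge tx, high absolute fee

def feerate(tx):
    """Fee rate in sat/vB."""
    return tx["fee_sat"] / tx["vsize_vb"]

print(feerate(tx_a), feerate(tx_b))        # 2.5 vs 0.1 sat/vB -> A wins on fee rate
print(tx_a["fee_sat"], tx_b["fee_sat"])    # 500 vs 2000 sat   -> B wins on absolute fee
# A node insisting on a higher absolute fee keeps B, while a miner maximizing
# sat/vB of the next block would rather have A. With no total order, parts of
# the network can end up preferring different conflicting transactions, which
# is the pinning deadlock described above.
```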

Individual updates

I worked on the onion message and now I will be working on the web proxy so I can send the message to a node on the network. I am mostly busy with that.

What is the web proxy onion message exactly? What are you doing there?

The onion message is BOLT 4. The web proxy, I send a message on behalf of the node on the network.

Are you sending onion messages over web proxy to a node?

Yeah eventually. This is the complete everything you need to do BOLT 12 from nothing in a web browser. Javascript that actually can speak the Lightning Network, he’s got that bit. He can speak to a node through a web socket proxy but the next step is to actually be able to construct an onion message to send through the Lightning Network in Javascript. That’s that whole other level, it is BOLT 4, onion message and all that as well. Unfortunately the web socket stuff did not make it into this release. I did update the spec. You need to use a third party web socket proxy. There are some. You can also create your own and c-lightning will ship with one as well that you can turn on if you want to speak web socket.

This means that you could have an app in your browser that could send onion messages to your c-lightning node?

Onion messages to anyone really.

We were talking about a descriptor for web sockets.

The onion message could end up going wherever on the Lightning Network. You could easily send it to a c-lightning node and forward it on.

I just added it to the milestones since we have a couple of changes still in flight we might as well get that one in. It has been in draft since I last saw it so I will review and merge it as soon as possible.

Lightning Core dev meetings (cont.)

My updates from the meetings last week. We had an interesting discussion about channel jamming. We didn’t have any huge insights into how to fix it but there was a proposal to start changing the fees. There are two variables here. One of the problems with channel jamming is that there is a limit on how many HTLCs can be currently in flight on a channel, that is 483. The other problem is that right now every single HTLC that gets committed to a commitment transaction counts, whether it is dust or not; a dust HTLC won’t actually show up in the transaction but the balance still gets updated and it still counts towards that 483 number. What this means in practice is that you could send very small 1 millisat payments and hold them for a very long time period, maybe 2 weeks of timeout. Put 483 of them through a channel and that channel would not be able to process any more HTLCs. The amount of money that someone would have to commit to a channel to run this attack is very, very small. A couple of things that we looked at changing. One, is there a way that we can make the minimum amount of money someone pays to jam a channel higher, make it more expensive? Every payment attempt is kind of like a bond of Bitcoin you’ve committed to a channel at some point. In order to originate a payment you have to have locked Bitcoin in a channel that you’re pushing to a peer. If you look at that as a bond or an expense to run this attack, then if we can make the amount of bitcoin you have to commit higher we make the attack more expensive. Even if you aren’t spending any money you are locking your own capital up for that long. Anything we can do to make that higher makes the attack more expensive. The other thing, is there any way we can make the number 483 larger so that in order to jam a channel you have to send more payments through at whatever rate? That also makes it more expensive; you then have to commit more capital to a single channel to lock it up. Based on these two knobs you can turn to make it more expensive to jam a channel, one suggestion was that we no longer count dust HTLCs towards this 483 number. That makes the total number of payments that are eligible to go through a channel larger. It also raises the amount of money that you need to commit in order to get one of those 483 slots. That makes it slightly more expensive. Another thing we looked at is whether there is anything we can do to change the 483 number, make it bigger. I don’t think there are any concrete proposals on how to make it such that you could have larger than 483. I think there were some suggestions that maybe we have trees of HTLCs that you have buckets for. Another suggestion was that maybe there are bands of how big a HTLC has to be in order to use up certain bands of that capacity. Dust has its own bucket of how many HTLCs you can put through it before it gets jammed and then there are larger bands. You stop being able to use a band based on the amount the HTLC is consuming. The other suggestion was to make payments more expensive in terms of capital locked up in them by adding a variable to the fee calculation for a route that puts a dollar value, so to speak, on the HTLC lockup time. It would cost more to lock up a payment amount for a longer period of time. For the CLTV delta of the payment we add a factor that you multiply by when you are calculating a fee for a payment. That is added at every hop, a percentage of that fee. This makes it more expensive to lock up capital for longer periods of time.
The ACINQ team had some pushback because people would be paying more money for safer payments. I think there is definitely a longer discussion to be had there on the trade-offs of adding that. I thought it was interesting because we also have this HODL invoice thing that we’re trying to disincentivize. You add an actual cost to the length of time that you allow people to lock up capital for; if the payment fails you don’t actually pay it, but at least they have to commit their capital and lock up a little bit more when they attempt it. Making it more expensive for a longer lockup period is something worth considering to make those attacks more expensive.
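As a rough sketch of the “charge for lockup time” idea: the base and proportional fee fields below are the existing BOLT 7 ones, while the time-value parameter and its scale are invented for illustration.

```python
# Hypothetical per-hop fee with an extra term charging for lockup time.
# base_fee_msat and proportional_millionths are today's BOLT 7 fee fields;
# time_value_millionths_per_block is a made-up parameter for illustration.
def hop_fee_msat(amount_msat, base_fee_msat, proportional_millionths,
                 cltv_expiry_delta, time_value_millionths_per_block=0):
    fee = base_fee_msat + amount_msat * proportional_millionths // 1_000_000
    # New term: the longer (in blocks) a hop may have its funds locked up,
    # the more the sender pays to that hop.
    fee += (amount_msat * cltv_expiry_delta *
            time_value_millionths_per_block // 1_000_000)
    return fee

# 100k sat payment over a hop with a 144-block CLTV delta
print(hop_fee_msat(100_000_000, 1_000, 100, 144,
                   time_value_millionths_per_block=10))  # 155000 msat
```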

Individual updates (cont.)

Pinning stuff was interesting, I had an interesting chat about descriptors, just an overview of the state of multisig composition, not really related to Lightning. I’ve been working on the HTLC dust thing, it sounds like we are going to have some changes. There are two interesting things that happened recently. One is that someone’s node went down and they lost the database on their USB stick. They had a problem: one of the channels was one where I had done a liquidity lease to their node, I had paid them for some capital. If this happens to you and your node goes down, there is a HSM tool called guess_to_remote, which I believe darosior wrote. Provided you know which output is yours on the unilateral close transaction that your peer commits to chain, and you have the hsm_secret, it will attempt to guess what your spending key is. The script wasn’t updated for anchor outputs or for liquidity ads, so it wasn’t working for the channel that I had with this guy. The reason it wasn’t working: all dual funded channels use anchor outputs, which is the most likely way you will end up using anchor outputs on c-lightning today. Anchor outputs changed our to_remote output from a P2WPKH to a P2WSH, so the hashes weren’t matching. We were looking for a pubkey but we should have been looking for a witness script. If it has been leased, if the person who closes the channel has leased funds to the person who does a unilateral close, there is an extra thing we need to grind for: how much time was left in the lease. I have a patch to fix this, I just need to test it to make sure it is working as expected. I don’t know if it needs to be in this release, hopefully that will be up soon. Future work, I have got some very delayed stuff that I should have been working on. There is currently a bug: if you are dual funding a channel and your peer decides to RBF it, the acceptor side reselects UTXOs to go into that open transaction incorrectly if you’ve committed funds to it. Successfully calling the RBF code for a dual fund open is very difficult right now, I need to get that fixed for next release. After that, I had a good conversation with Christian last week about some stuff for accounting, getting an accounting plugin out. I managed to sketch out some nice updates to the architecture of how we’re doing events. I sat down and went through from first principles how I would do the accounting stuff now as opposed to a year and a half ago when I did it the first time. I think I am going to make some changes to the event structure, I am fairly certain very few people are using it. I don’t think it will be a huge breaking change. Luckily there is a version there, I’ll add a new version, the struct already has a version. Hopefully that will be for next release. I think I’m going to get a splicing proof of concept done. All the pieces are there, the dual funding was a lot of the heavy lifting and we’ve got a draft PR of it done. I spent some time this morning pulling in the draft stuff and the drafted messages. It looks like there are a couple of message definitions missing from the draft, it needs to be cleaned up. I don’t think it is going to be too much to get that done for the next release.
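For reference, a rough sketch of the script change being described, with the anchor-style to_remote witness script roughly per BOLT 3; this is not the recovery tool itself, just the shape of what it now has to grind for.

```python
# Sketch of why the old recovery tool stopped matching: with anchor outputs
# the to_remote output is a P2WSH of a small script rather than a P2WPKH of
# the bare pubkey, so grinding for hash160(pubkey) finds nothing.
import hashlib

def hash160(b: bytes) -> bytes:
    return hashlib.new("ripemd160", hashlib.sha256(b).digest()).digest()

def to_remote_spk_pre_anchor(remote_pubkey: bytes) -> bytes:
    # P2WPKH: OP_0 <20-byte hash160(pubkey)>
    return b"\x00\x14" + hash160(remote_pubkey)

def to_remote_spk_anchor(remote_pubkey: bytes) -> bytes:
    # Anchor-output channels use a witness script of roughly
    # <remote_pubkey> OP_CHECKSIGVERIFY 1 OP_CHECKSEQUENCEVERIFY,
    # wrapped in P2WSH: OP_0 <32-byte sha256(witness_script)>.
    witness_script = bytes([33]) + remote_pubkey + bytes([0xAD, 0x51, 0xB2])
    return b"\x00\x20" + hashlib.sha256(witness_script).digest()
```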

That user who had his USB stick wiped was completely without a database; thanks to Lisa’s help he was able to recover 99 percent of his funds. That’s a huge win for us. We might be looking into formalizing those recovery steps and making it easier in future. Although we don’t want to encourage people too much to run all of their infrastructure off of a single USB stick.

LND does something called static channel backups; it is literally a list of everyone you have a channel with. Someone has written a tool so that every time you open or close a channel it updates this list and sends it to you on Telegram. LND has written a workflow where you give it the list of peers you have channels with and it walks through a reconnection attempt with them, which causes the other peer to do the channel closure. c-lightning will do this if you have an old version of your database and you start up from it. Since an old version of your database has a list of what channels you were connected to at the time of the snapshot, it will walk through the list of them, attempt to reconnect to them and then do the exact same thing as you would get with LND when you use your static channel backup. We don’t have as much tooling as the LND solution does. I think a lot of people don’t like that as much.

Someone lost a lot of money buying liquidity using c-lightning last week. There is an experimental feature that you can run if you have experimental dual funding turned on, which is liquidity ads. These are cool because they let you advertise how much money you want someone to pay you for you to put money into a channel when they open it with you. You can lease out your Bitcoin to a peer for a rate, over roughly a month; every lease is 4000 blocks long. Someone did not read the ad. In order to accept an ad, there is a command you can run over your gossip and it will print out all the ads that are there as JSON objects. They tell you the base fee and a rate that you pay per basis point of the amount of capital that you add. In order to accept a rate you have to copy out a condensed form of this ad and put it in the command that you run to fund the channel. You have at least acknowledged that you’ve looked at the ad and have copied out a part of it when you go to take up the person’s offer of liquidity. Someone had done a couple of them, hadn’t been reading the ads, and some enterprising c-lightning node runner decided that their base fee for making liquidity ads was going to be 0.1 Bitcoin. Someone copied the liquidity ad, sent it to the peer and paid them 0.1 Bitcoin as the base fee for accepting liquidity. They were upset when the liquidity ad pushed all their money to the other side. The nice thing about liquidity ads is your fee ends up being more inbound liquidity for you. It is quite an expensive way to get inbound liquidity but you can now use it as inbound liquidity. I don’t know if this guy managed to find the person who put this in, politely ask them to pay him back, push the money back over the channel. I suggested this. My warning is there are people who don’t read the ads. Maybe you should try setting up liquidity ads to see how much money you can make. The other side is if you are accepting a liquidity ad maybe you should read it. I have heard a rumor that one of the Lightning router service websites is adding a slider so you can look through all the nodes that are offering liquidity ads and figure out how much it will cost to take up each of those ads. Hopefully there will be a pretty UI and a third party tool for these soon. Exciting to see that people are using it even if they are getting hosed.
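As a back-of-the-envelope check on what accepting an ad costs, with field names loosely following the liquidity ads proposal and made-up numbers:

```python
# Rough cost of accepting a liquidity ad: a flat base fee plus a proportional
# fee in basis points of the leased amount (field names are illustrative).
def lease_cost_sat(requested_sat: int, lease_fee_base_sat: int,
                   lease_fee_basis: int) -> int:
    return lease_fee_base_sat + requested_sat * lease_fee_basis // 10_000

# A "normal" ad: 2,000 sat base + 50 bps on a 1,000,000 sat lease
print(lease_cost_sat(1_000_000, 2_000, 50))        # 7,000 sat
# The ad from the story: a 0.1 BTC (10,000,000 sat) base fee dwarfs the rest
print(lease_cost_sat(1_000_000, 10_000_000, 50))   # 10,005,000 sat
```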

Thank you for spending time looking at the database optimizations, how to clean it up. I am running always the latest version so everything works.

We are also looking into alternatives for the backup because the current backup code just takes a snapshot and applies incremental changes. That can grow over time. We’ve added compaction already and a smaller database overall is definitely something that we might be able to replicate in the backup as well.

vincenzopalazzo: I am working on some pull request review and also a small change to c-lightning. I am also working on a library that works with c-lightning.

I have undrafted the DNS address descriptor PR and it can be reviewed now. I also started working on the announce remote address feature, which I implemented in our code. There is another PR where your node now reports to the other side, in response to a connection, what its remote IP was. Now we can decide what to do when we get this information. Proposals please. There were some discussions already on just announcing every remote address, unless it is a private network obviously, up to a certain limit. No checks at all because it is not so problematic to announce a few incorrect addresses. Different options are to wait for a number of identical responses or a threshold, a minimum number of peers that are responding with the same remote address. Or even try stuff like making a connection to the address in order to check if it responds. That might not always work.

I’m pretty sure my router doesn’t allow me to connect from inside of the network to the NAT address. I won’t be the only one. To reiterate the issues that there might be, what we want to avoid is trusting a peer on what address they see us as and then announcing that. It may lead to connection delays if they told us a lie. It is not a security issue because the connections are authenticated and we make sure that we are talking to the node that we expect to be talking to. It is only an attempt to avoid some sort of denial of service attack where I tell you “You are this node in China” and this node in China doesn’t actually run a c-lightning node.

You can report addresses like the FBI’s address and then your node is starting to make connections to the FBI address?

No it is I tell you that you are the FBI.

We had this DNS stuff, when you have a lot of channels, a lot of peers, and somehow you are tricked into believing a different address and you announce that. A lot of people are now making a connection attempt there.

If it is a black holed IP then that can lead to considerable delays. If it is a reachable IP address then it will fail after a couple of seconds.

For black holed IP how is it delayed?

For black holed IPs you are sending a SYN packet that never ever gets a reply. Whereas if there is some device at the end of the IP address then you get a RST packet right away. It is one round trip to discover that there is nothing there. If it is black holed it may hang for several minutes, depending on your kernel settings. I think it is a minute by default on most modern Linux kernels.
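A minimal sketch of guarding against that hang at the application level, assuming a plain TCP probe with its own timeout instead of the kernel default:

```python
# Try to reach an announced address without hanging on black-holed IPs: a
# refused or unreachable address fails quickly, an unanswered SYN gives up
# after timeout_s seconds instead of the kernel's default retry period.
import socket

def try_connect(host: str, port: int, timeout_s: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```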

Should we check that? Is it reasonable to wait that long?

If you open a connection to somewhere it spawns a separate openingd process that can hang forever and doesn’t interfere with the rest of the operation. It is not really that much of an issue, it is only when you count on being able to contact that node. I wouldn’t add too much complexity to prevent this.

If you have multiple addresses launch 3 openingd and wait for the first one?

Sort of. We only ever allow one.

I think I would wait for 3 nodes to announce the same address or 10 percent or something like this and then just announce it.
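A minimal sketch of that kind of threshold logic, with the threshold of 3 taken from the discussion and everything else invented for illustration:

```python
# Collect what peers report as our remote address and only announce once
# enough of them agree; private/LAN addresses are never considered.
from collections import Counter
from ipaddress import ip_address

class RemoteAddressTracker:
    def __init__(self, min_agreeing_peers: int = 3):
        self.reports = {}                    # peer_id -> reported address
        self.min_agreeing_peers = min_agreeing_peers

    def record(self, peer_id: str, addr: str):
        """Return the address to announce, or None if we are not sure yet."""
        if ip_address(addr).is_private:
            return None
        self.reports[peer_id] = addr
        best, count = Counter(self.reports.values()).most_common(1)[0]
        return best if count >= self.min_agreeing_peers else None
```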

There is also interference with the command line option. If we set a hostname via the command line options that should always stick around and not be replaced by whatever our peers tell us. I guess you’ve probably thought about that already.

I am very happy with the current address setting for c-lightning. I am happy to set the exact IP address and I can use ways to figure out my IP address very easily when it changes. I was doing some tests at home and it took me 3 minutes when I restarted a DSL router to announce the new IP address. The time was the DSL connection initiation. Immediately when the internet works I check what’s my IP address and then announce my correct IP address. This requires restarting lightningd.

We should fix this. We need some background job that scans for changed network configuration.

Or we can make a RPC. The current way of doing things is not going away. Your solution will definitely continue working. DNS is just useful for users that might not have that sort of access to their router or not have the right tools to get to it. DNS is definitely a nice usability feature for home users. If you want to do something more elaborate that will always be an option.

I like the idea of a RPC command for changing the address while the daemon is running.

We need some dynamic way without stopping, that would be good.

If it is possible. I don’t know all the details.

Everything is possible. t-bast implemented one of the things on ACINQ. Since those are protocol dependent am I just testing with him then?

Yeah. The specification process is exactly as you described. We usually have a proposal, then we have two implementations to make sure everything is inside the proposal, everything is well laid out and we can build a second implementation based on the proposal. Once two or more implementations have cross verified that they speak the same protocol then we can merge it usually.

We don’t merge it earlier in our code?

We could although it always has the risk that it might get changed before it gets merged. That is usually where we hide stuff behind the experimental flags. The web socket isn’t based on a RFC if I remember correctly. That’s exactly the risk that we have. Things might change based on the feedback that we get from other implementations. If we roll out the status quo we might end up in a situation where we have to change the stuff later on. Then we suddenly have two implementations that compete. We’ve had that with the onion messages but for the DNS entries in the gossip messages, those stick around a bit longer so I’d like to move that after the milestone so that we don’t have a release that has a feature that is about to get into the specification and might have to slightly change. I don’t think it is going to change here but better to be safe than sorry in this case.

We have successfully released the Raspiblitz version which now includes c-lightning. We had a Twitter Spaces with Christian, that went well, the recording will be released. It was very focused on the c-lightning implementation. There are always small things here and there that can be improved, that is why we iterate. I have seen a couple of people using it so I am trying to collect as much feedback as possible. We have an update feature so when there is a new release from c-lightning it should be a smooth update within the Raspiblitz menu. It went as planned I think. I have been keeping an eye on the c-lightning Telegram channel. Regarding the backups, it is great that you can achieve the same restore with the database but the database can be very large. You wouldn’t have that sent by email or a quick sync to a server or even a Telegram bot. Given that people are used to it, if they switch to try out c-lightning they would want to have something similar there.

That has been a huge release, thank you for inviting me to the Twitter Space. You have a busy Telegram chat.

It is like 1600 people. Some of them are technical so they come with suggestions or custom things that they have done. That is how we learn. A lot of them become contributors.

It is exactly that kind of feedback that we are looking for so that is awesome. I do think we can emulate the static channel backups quite easily by building some tooling around guess_to_remote. It is always a bit of a balance between making it too easy for users to skimp on stuff and not do what they should be doing. My hope is that by doing a similar tool we can have a belt and suspenders deal instead of going with a single belt.

I think the threat of force closing channels, especially during a higher fee environment, is enough to make people want to preserve their channel database. All the liquidity that they have built up if they are running a routing node, a lot to worry about. If there are recurring questions I try to update that c-lightning FAQ on the Raspiblitz. There are a couple of things coming up there and will be extended.

That’s excellent. Especially with the user that lost their full database, we definitely felt the need for static channel backups. By making it easier to run the backup plugin as well, less space, more efficient, less churn on the SD card or wherever you store that. We should also make it much easier for people to choose the right solution instead of the easy solution.

I wanted to ask about the paytest plugin, it is trying to implement the Pickhardt payment?

Yes, we do have pull request 4567 which is a first step towards the Pickhardt payments idea of biasing towards using larger channels, which in the model of that paper have a higher probability of the payment succeeding. While we aren’t using the splitting strategy just yet we do already prioritize the channels according to Rene’s proposal. Like I mentioned before we are looking into evaluating that and, if it works out that together with the optimal splitting strategy that Rene proposes we get even more, we might end up implementing the entirety of that. The paytest plugin itself is a generic system to test how well performing our pay plugin is. What it does is it creates an invoice as if it were from the destination. If I want to test my reachability from me to you then you start the paytest plugin and I start the paytest plugin, and I tell my paytest plugin to test a payment going to you. My paytest plugin creates an invoice as if it were coming from you and your paytest plugin will then receive the incoming HTLCs and hold onto them for up to a minute. Then it reports back whether the real payment would have succeeded or not. On the recipient side the only part that is needed is this holding on and collecting all of the pieces before letting them go again. Without that your node would immediately say “I don’t know what this is about” and tell me to shut up. This is generic and can be used for c-lightning now and I am hoping that lnd and eclair might be offering a similar facility eventually so that we can get a larger view of what the reachability and performance looks like in the network. We will use that going forward to experiment with the parameterization of our pay plugin and different mechanisms of splitting and prioritizing channels and all that kind of stuff. This is the first step but running a paytest plugin will help us make incremental changes in the future. There is also discussion on how to measure this stuff and what kind of bias we should be using. It is a very interesting discussion.
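The recipient-side “hold the HTLCs” piece could look roughly like this as a pyln-client plugin; this is not the real paytest plugin, just the shape of the hook it relies on:

```python
# Rough sketch of holding incoming HTLCs for a while and then failing them,
# so the sender exercises a full payment attempt without money moving. A real
# plugin would hold them asynchronously instead of blocking the hook.
import time
from pyln.client import Plugin

plugin = Plugin()
HOLD_SECONDS = 60  # hold parts of the probe payment for up to a minute

@plugin.hook("htlc_accepted")
def on_htlc_accepted(onion, htlc, plugin, **kwargs):
    time.sleep(HOLD_SECONDS)
    # 0x2002 is temporary_node_failure, a harmless way to fail the probe.
    return {"result": "fail", "failure_message": "2002"}

plugin.run()
```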

Last week I opened the PR for GraphQL and got lots of great review comments on that. This week I am working on implementing those changes. I was also following the c-lightning chat and I was almost getting angry seeing some of the negative comments. It made me want to multiply my efforts to fix some of these issues, legitimate things that are hanging people up. c-lightning is a great project and deserves a lot of attention. I am happy to be able to learn a little bit more and hopefully to contribute more as time goes on.

What are you referring to on the negative comments? You mean on IRC?

In the Telegram chat, on the database topic mainly. Basically reports on platforms that were moving off of c-lightning in favor of LND and things like that.

Most people are moving from c-lightning to ACINQ because they came from LND to c-lightning being frustrated. We do have a couple of people in the channel that got burnt badly in the past. There is a certain resentment, it is something that I keep in mind whenever I read those stories. We try to do our best but we can’t do everything.

I made a short script which gets you an invoice when you have LNURL text. I was missing this, we don’t support LNURL but sometimes people communicate this to get payments. It is very easy, I will be happy to share and help if anyone has questions on how to get an invoice from LNURL.
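The LNURL-pay flow such a script follows is roughly: decode the bech32 string to a URL, fetch the pay parameters, then request an invoice from the callback. A sketch, assuming the requests package; the inline bech32 decode skips checksum verification and the 90 character limit that segwit addresses enforce:

```python
# Turn an LNURL-pay string into a BOLT 11 invoice for a given amount (msat).
import requests

CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def lnurl_to_url(lnurl: str) -> str:
    lnurl = lnurl.lower()
    hrp, data_part = lnurl.rsplit("1", 1)      # last "1" separates hrp and data
    assert hrp == "lnurl", "not an LNURL string"
    data = [CHARSET.index(c) for c in data_part[:-6]]  # drop 6 checksum chars
    acc = bits = 0
    out = bytearray()
    for value in data:                          # regroup 5-bit symbols into bytes
        acc = (acc << 5) | value
        bits += 5
        while bits >= 8:
            bits -= 8
            out.append((acc >> bits) & 0xFF)
    return out.decode()

def invoice_from_lnurl(lnurl: str, amount_msat: int) -> str:
    params = requests.get(lnurl_to_url(lnurl), timeout=10).json()
    assert params.get("tag") == "payRequest", "not an LNURL-pay endpoint"
    assert params["minSendable"] <= amount_msat <= params["maxSendable"]
    resp = requests.get(params["callback"], params={"amount": amount_msat},
                        timeout=10).json()
    return resp["pr"]  # the BOLT 11 invoice you can hand to `lightning-cli pay`
```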

We are getting a couple of proposals that are sort of competing, some are complementary. We will probably end up supporting everything and nothing. Having a way to bridge from one standard to the other is definitely something that is very much valuable.

If you know the LNBits project which is kind of accounting software that allows you to create multiple accounts on top of a funding source. The funding source could be c-lightning, it would be a next step to get it into the Raspiblitz because we already have LNBits on LND. That has a lot of full scale LNURL support with a plugin system. LNURL is probably something higher level than something that would need to be plugged into the protocol or the base layer.

I also got into contact with one of the authors of LNBits looking for help to get full hosted channels working. This would extend this accounting feature to also include keysends which as far as I understood are currently not possible with today’s LNBits. LNBits basically differentiates who the recipient of an incoming payment is based on who created the invoice that got paid. With full hosted channel support we could have virtual nodes really be part of a c-lightning node. Then you could run a single c-lightning node for your friends and family. This reminds me I still need to set up an appointment with the author.

fiatjaf is a big contributor to LNBits and it is Ben Arc who started it, there are a couple of others who are working on it.

Full RBF in Core

One of the discussions at the Core dev meeting was on getting full RBF in Core. This is obviously going to be challenging because some businesses, I’m not sure how many, do use zero confirmation transactions. Thoughts on how important this would be for the Lightning protocol to get full RBF in Core?

We are already moving towards RBF on Lightning with the dual funded channel proposal. All those transactions that we create are RBFable. I feel like the question you are asking, not explicitly, is about the zero conf channel proposal which is currently being spec’ed. We didn’t talk about it but full RBF would interact quite poorly with zero conf channels. RBF means that any transaction that gets published to the mempool can then be replaced before it is mined in a block. Zero conf kind of assumes that whatever transaction you publish to the mempool will end up in a block. There is tension there. I don’t think there is an easy answer to that other than maybe zero conf channels aren’t really meant for general consumption. The general idea with zero conf is that it is between two semi trusted parties. I don’t think that’s a great answer but I think there is definitely a serious concern there where zero conf channels are concerned.

I was speaking to someone at ACINQ and they said they do use zero conf channels but there is no risk because all the funds are on the side of the party who could possibly cheat. I don’t know about other businesses who use zero conf channels.

Breez uses it. The logic is whoever opens the channel would have change from the opening transaction and would be able to send a CPFP to have it confirmed.

Now I’m thinking about it, in any opening transaction at this point there are only funds from one side. Only one side would be eligible to RBF it. You would only need to worry about it when two parties fund it. The dual funded channel would be the place where you worried about the counterparty RBFing you. Currently those are RBFable anyway. Having RBF everywhere wouldn’t be a new consideration, at least on channel opening. There might be other places where RBF is a problem, maybe on closes?

I guess dual funding is also not really an issue because in dual funding you would have to have both parties collude to RBF themselves. They would know that they are RBFing and know the new funding txid. There really isn’t a risk of one party providing funds and then the other party RBFing it without telling the first one. We would end up with a wrong funding transaction ID, which has happened a couple of times in the past when people were using PSBTs and RBF bumping those.

That is not totally true because you could RBF an open and create a different transaction that is not an open transaction with the same input. It is not that they would RBF it and make a new channel open, but if they RBF it and use the input elsewhere and you have already been using that channel, it gets replaced by a transaction that is unrelated, then you’re in trouble right?

If you use the channel before it is confirmed that is true of course.

A channel is opened to you, you receive a payment and then the channel is gone. Your received payment is gone.

Zero conf channels are a pretty specific use case anyway that does require a certain amount of trust. Definitely don’t run everything on zero conf.

Not the suggested default policy.

For example RBF would be really useful for Greenlight. We have the routing node and we have the user node. If the user node was to try to mess with our zero conf channels we can prevent them from doing that. The API doesn’t allow that so we are safe on that side. We would forward payments but we would have the ability to track that much more closely than any node. We are basically rebuilding the ACINQ model here.

I need to understand how Breez uses it. It is weird with policy because obviously someone could just make a different version of Core and change the RBF policy in that version. There’s that consideration. There’s also the consideration that the vast majority of nodes on the network are Core, so how much of a security guarantee is that if you flip a large percentage of the network to do full RBF rather than opt-in RBF?

Full RBF is definitely what miners might choose. The idea is that we want to keep that gentleman’s agreement as long as possible because there are users that rely on it. But eventually we might end up using it because miners will ask for it. Hopefully we can inform users in time to move away from a trust model that was not incentive aligned so to speak.

RBF is really good for miners because it gives people the opportunity to bid more on their transactions. If transaction fees start going up then everyone all of a sudden has the ability to start bidding more for block space. Where before they didn’t have that opportunity, you’d have had to add more bytes with CPFP. I am RBF bullish.

CPFP leads to a situation where the mempool is clogged and that would help the miners even more. I bet the miners like RBF less.

Sort of.

It is more optimal than just creating more transactions with higher fees.

The average fee rate is still going to be about the same and having less throughput is definitely bad news for miners because it might be seen as an ecosystem weakness. Whether they are rational and take all things into account, that is a different story I guess. From the user side RBF is definitely something that we want because it allows us to lowball fees initially and, as we learn more about the mempool and how full the blocks were over the last couple of minutes, we can fee bump. We don’t have to guesstimate where the fee might be in a couple of hours so that we can get it confirmed. We can start low and move up. It takes a lot of guess work out of transaction confirmation. And it makes a fairer fee market as well. Price discovery.

You can’t have c-lightning updating the Bitcoin Core client to opt into RBF? They would have to do that manually?

I don’t even know if there is a flag to enable full RBF in Bitcoin Core as of now. Is there?

You can opt in to using it.

Right now you wouldn’t want to RBF your channel opens at all. Don’t RBF your channel opens right now. If you are in dual funded channels you can RBF to your heart’s content, it is part of the protocol. Just make sure you are going through the actual RPC commands to make it happen and you are fine. On normal v1 channel opens you can’t RBF because part of the open is that you tell your peer what transaction ID they should be looking for, and RBF changes the txid. All your commitment transactions will be invalid so any money you’ve committed to that channel will be lost forever. So don’t RBF channel opens now even if you can do it. It is not a good thing to do.

All channel opens are RBF enabled, as far as I know, LND for sure.

But you should definitely not use it.

There has been a red letter warning to not RBF any opens, absolutely.

Now I’m thinking about it, it is probably best policy to make it so that by default the transactions our wallet builds are not RBFable. That is hard because of the way stuff is constructed. Probably the best policy is, when you are building an open transaction, if it is a v1 open make it not RBFable. By default all the v2 ones are.

Dual funding is safe for RBF. We should get that merged sometime into the spec.

It is only safe because when you redo the dual funding thing you and your peer both update your commitment transaction. There is a part where you do the commitment transaction sig exchange on the RBF transaction that you built. That is why it is safe. You update your expectations for commit sigs. This is how you get your money out of a 2-of-2 funding transaction. You have to update your commitment sigs.

When we finally get dual funding merged and out of experimental, since single funding is the trivial case of dual funding, is dual funding replacing single funding? Or will there be a RPC command that uses the dual funding code with an option that this is actually single funding?

By default if both peers are signaling that they support the v2 channel opens every open goes through the v2 thing. Right now you use fundchannel and depending on what your peer supports it either uses the current v1 protocol or the v2 protocol. It transparently upgrades you so to speak. When you go through the v2 transaction thing the other side might opt not to include inputs and outputs in which case the transaction you build will be the same as if it was through the v1 stuff.

What is v1 referring to?

Dual funding is the technical name, the v2 channel establishment protocol. Right now when you establish a channel there is a series of messages that you and your peer exchange. When we are moving to v2 we’ve changed what messages we exchange and the order that they happen in and the content of the messages. The end result is that you end up with a channel which is the same as the v1 set of messages. The only difference is that in the v2 your peer has the option to also contribute inputs and outputs to the funding transaction. This is what enables you to do liquidity leases or liquidity ads because they now have the option to also add funds to the transaction.

So dual funding is considered v2, rather than this being the second version of dual funding?

That’s right. Dual funding is possible on the v2 protocol.

The v1 code will stay around for a very long time.

The idea is that if every node is supporting v2 then we would take the v1 code out. But that requires every node on the network to support v2 at which point we could remove v1.

Quite a while.

But you can never be sure?

I think we would be breaking compatibility with very old nodes.

We finished a Chaincode seminar on Bitcoin protocol development last week. Generally what I got from it is that we don’t do such things in Bitcoin. We don’t do backward incompatible changes.

That’s one of the advantages of not having a global consensus like Bitcoin does. We are able to do quick iterations with breaking changes without endangering the security of everybody. The difference between Bitcoin onchain and offchain is exactly that Bitcoin requires us to have quasi perfect backwards compatibility because otherwise we are breaking other implementations and we are introducing the risk of having an error in the consensus itself which would result in part of the network hard forking off. That is obviously bad and that is why there is mostly one implementation for Bitcoin itself. At least for the consensus critical part it is good to have consistent behavior even if that is sometimes wrong. Like it has been for CHECKMULTISIG. In offchain protocols we are dealing with peer-to-peer communication where we can agree on speaking a completely different protocol without breaking anyone else in the network. We have brought back that flexibility to do quick iteration, experimental stuff and deprecate stuff that we no longer require exactly because we move away from having a global consistent consensus. Now we are more peer-to-peer based.

Can you know what other nodes are running on the Lightning Network?

Yes. Those features are announced in the node announcement. We therefore know what part of the public network at least is speaking what sort of extensions. Once the number of nodes in the network that speak only v1 drops below 1 percent we can be reasonably sure that we will not encounter any nodes that don’t speak the v2 protocol.

Is there any chance of congestion in gossip if there are thousands of nodes around the world? It seems like everyone knows everything?

The issue of congested gossip is definitely there. Ultimately we don’t want every node to announce, simply because they might not be online 24/7. We will see a separation of concerns. There will be nodes that are more routing focused; those will announce their availability and their channels. There will be a large cloud of users that are not stable enough to route or value their privacy and therefore don’t announce this kind of information to the wider world. Ultimately we think the network will settle into this sort of relationship between public nodes that are available for routing and private or unannounced nodes that consume the routing services of others and do not announce. Currently almost everybody is announcing.

I think the Phoenix and mobile wallets are not announcing?

Exactly. That’s also why, whenever we do a census of what everybody is running (we had a paper a couple of months ago from the University of Vienna that I co-wrote), it always specifies that we are only looking at the public part of the network. We do have some ways of finding traces onchain of channels that might have existed but there is no way for us to attribute that to a specific implementation. Whenever we do a measurement of what part of the network runs which implementation we need to be very careful to say “This is just the announced part”. We can reasonably believe that it is representative for the kind of features we have. ACINQ especially has more visibility into what version a node is using.


c-lightning developer call

Date: November 1, 2021

Transcript By: Michael Folkson

Tags: Lightning, C lightning

Category: Meetup

Topic: Various topics

Location: Jitsi online

Video: No video posted online

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

c-lightning v0.10.2 release

https://medium.com/blockstream/c-lightning-v0-10-2-bitcoin-dust-consensus-rule-33e777d58657

We’ve tagged RC1 to get people testing. The testing found 2 regressions, both related to the database handling. One is Rusty’s crash outside of a transaction. I quickly wrapped that in a transaction, conditionally on whether there is already one or not. The other one is when we do our switcheroo to remove columns from a table, which SQLite requires from us. That apparently de-syncs the counters on auto-increment fields. We ended up with duplicate entries in the database itself. That also has a quick fix now. Yesterday I tagged release candidate 2. If all goes well we should have a final release sometime this week.

Do we use auto-increment fields in the database?

We use auto-increment fields whenever we auto-assign a new ID in a column. Most of our tables actually have an ID column that is auto incremented. Whenever we use a BIGSERIAL or sequence, that is one of these sequences that can end up out of sync if we do our rename, copy-into and drop.

I am going to have to look back through the PR because I’m not sure how that happened. I would assume that it would keep incrementing from where it left off.

What you do is you rename the table, create a new one with new counters and then you select and insert into the new table and drop the old one. When you insert with values pre-filled the sequence doesn’t increment itself.

But that is only one time? Say it was 5, you copied across, create a new table, you copy it back, it is still 5?

The sequence is still counting from 1.

How do you fix it?

We fix it by manually setting the value to the maximum plus one of the column.

Wouldn’t the select statement have populated it with that? I will look through the code.

The insert with values in that column will not call the increment on the sequence. That is the collision.

I’m not sure I get it but I will look at the code until I understand it. I have a hack that does some database handling, helpers to delete columns and rename columns. This is easy in Postgres and really hard in SQLite. I would like to fix that case rather than have us continuously stumble over it.
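The shape of the “set the counter to the maximum plus one” repair, hedged for both database backends c-lightning supports; the real migration code is more involved and lives in the wallet code.

```python
# Sketch only: after copying rows with explicit ids into a freshly created
# table, re-sync the id counter so the next auto-assigned id does not collide.
def resync_id_counter(conn, table: str, backend: str):
    if backend == "sqlite3":
        # AUTOINCREMENT counters live in the internal sqlite_sequence table.
        conn.execute(
            f"UPDATE sqlite_sequence SET seq = (SELECT MAX(id) FROM {table}) "
            "WHERE name = ?", (table,))
    elif backend == "postgres":
        # Sequences behind BIGSERIAL columns are advanced with setval(), so the
        # next nextval() call returns max(id) + 1.
        conn.execute(
            f"SELECT setval(pg_get_serial_sequence('{table}', 'id'), "
            f"(SELECT MAX(id) FROM {table}))")
```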

Individual updates

I just opened a maximum size channel on signet to my friend who I am teaching c-lightning to. We will have new students soon.

I was thinking I have to fire up my signet node and get that working. I have been running a signet Bitcoin node for a long time but I haven’t actually fired up c-lightning. Maybe after I upgrade my machine. I run my nodes under Valgrind and it is very heavy.

I am running the latest version of c-lightning on signet, testnet and mainnet. It is all running very well.

There seems to be an active signet community which is nice.

The next Raspiblitz release will likely be in the next 2 weeks. We have ironed out a few bugs. We added c-lightning, added parallel testnet and signet. We updated the image, there were some pressing bugs. I have updated to c-lightning 0.10.2 RC1 and RC2 from the menu so it works with the Raspiblitz deployment, the updates are working. That’s available for everyone even if you don’t update the image. That will be coming with the next release of course. On my smaller size routing node I didn’t find any issues, which is good I guess. Another interesting development, waxwing has used c-lightning onion messages in Joinmarket. That’s an exciting thing to look at and explore. Also there is a proposal to use actual payments later on for the coinjoin fees paid to the makers but that is far off. It is a more involved, complicated implementation but using the onion messages in parallel with the IRC coordination is quite exciting. That just uses a couple of Lightning instances. That is the other thing I have been looking at.

I agree, that is really exciting, I look forward to the Joinmarket integration. That’s pretty sweet. Lisa is always on about people not doxxing their channels and their nodes.

I don’t think we need channels to use onion messages. Those would be just ephemeral nodes that use the layer for communication. It will be exciting to see how it functions with the same machine running a c-lightning node and at least one more as a taker as well. We will see how much pressure it puts on the Lightning Network itself. These messages, all the peers you want to communicate with will be running the same version so I don’t know if there will be any hops involved at all really. There could be but the hops will be between peers which are also running the same kind of Joinmarket software, a Joinmarket cluster on Lightning. It shouldn’t affect the rest of the network as I see it. Same with the payments together, that involves liquidity issues so there is far to go with that.

If you are doing a coinjoin you are obfuscating your UTXO set which is definitely a direction we want to head in.

What is the release cadence of c-lightning? Is it the same as Core?

It is every 2 or 3 months.

For a minor release?

We don’t really distinguish. I’ve actually been thinking about switching us to a dated release system, 21 dot whatever and 22 dot whatever, rather than justifying a major or a minor bump; that’s an arbitrary last minute decision. We look at the list and go “I think this feels like an 11 or something”. I don’t know that in retrospect they’ve been justified. The important thing is to get a release out, the number is secondary. We’ve generally tried to go for a fairly rapid release cycle. The idea is that the RC drops on the 10th of whatever month, a 2 month release cycle. But then often in practice if there are a couple of RCs it can be 3 weeks before the final drops. Now we’ve only got 5 weeks until the next RC, is that too tight? It has been 2 or 3 months and it has been up to the release captain to decide what the cadence is going to be. Sometimes it has been longer than that but we do try to keep a fairly regular cadence. If we went for date numbering it would become more obvious when we’ve missed our cadence. But nobody cares about version numbers. Version names, they are important (Joke) but version numbers, bigger numbers are better generally. I think date based is probably something we should switch to at some point.

Block height based?

It has the advantage of neither clarity nor… At least it only goes up so that is good.

But shorter than dates.

Less memorable too for most of us. I don’t reminisce generally about block 400,000 or whatever.

I have been a little lazy the last 2 weeks, just some rebasing of my 2 PRs I have open. And a little progress on the one where we tell our peer his remote address. Now this information is filtered down to the Lightning daemon in order to be used there because connectd can’t use it, it doesn’t know the other addresses. These will be merged after the release. For the DNS one we need another implementation to pick it up because I think no one is working on that. I think Sebastian is working on the remote address part on ACINQ.

Michael is not only modifying c-lightning, he has also been advocating on the spec side to have this spec’ed because this is a change to the messages we send out. The idea is that you’ll tell the peer what you see them as: “We see this as your IP address”, where that information is available and useful. This is something that Bitcoin Core does now, which I hadn’t realized. They used to use myipaddress.com or something; you used to reach out to that to get the IP address of your node. Now they gossip their view I believe: “Hey, this is what I see you as”. If enough of your peers tell you that, that seems to be your IP address.

I think we should also maybe think about adding a UPnP library or something like that. If the router has it enabled then we can use it. Bitcoin Core does it?

Did it, had a security hole in it, dropped it, I think reenabled it. There have been issues around UPnP, it is a mess. I also tried out this library a couple of times, not that great. It seems to have some bugs so maybe reimplement it, I don’t know. It doesn’t work very well for all routers, maybe for a certain subset and certain devices it fails.

I think the argument is that it is better than nothing. It is nice if you have an IP address. I like the autoconf nature, I like the fact that if people manually override things we should respect that, but if people run c-lightning out of the box then it should probably believe consensus when it talks to peers and they see an IP address. Ideally it should try to punch a hole so that works.

Maybe I’ll check the Bitcoin Core code for the UPnP stuff, I am pretty sure they did it well enough.

We should just steal their code, that’s an excellent idea.

Or we could tell the Bitcoin daemon to open the port for us if they already have it (Joke)

If you think changing the spec is hard try getting a new RPC call into bitcoind. Definitely looking at what Bitcoin Core is doing there would be nice. Just knowing our IP address would be useful so that we can gossip that. I’m not clear what the defaults should be here. If they don’t set it up and they are behind a private IP address, at the moment we won’t advertise any addresses for them. If we believe we have an address should we start advertising in the hope that people can maybe reach us on that IP address? Are there issues there? People may have been relying on the fact that wasn’t…

I think we can broadcast one or two addresses where we aren’t certain, just try it. As long as we don’t broadcast 10 addresses or more it is not a big deal.

You have to have a public channel to advertise your node anyway. By definition if you’ve got a public channel you are at least announcing your existence. I just worry about people who didn’t want to advertise their IP address. Certainly if they have any Tor things enabled I wonder if we shouldn’t advertise their IP address. We are going to have to think carefully about changing behavior in those cases. We can think about that offline.

What PRs are we talking about?

4829 is the DNS one and the other one is 4864, that one is still in draft. It is working for sending and reading this information but it is not doing anything useful yet so it is a draft.

Last week I worked on some internal Blockstream stuff. I have also been updating our guess to remote stuff to include anchor outputs. Someone recently fried their node and didn’t have a backup. I am on working on recovering stuff. They weren’t able to use our existing recovery tools to recover any of the funds they had put into lease channels on either side. The reason for that is those use anchor outputs and our tooling did not take anchor output scripts into account. I’ve updated those and created an extra tool. The change to anchor outputs changes the to_remote from a public key hash to a script hash. Now you need to know what the script is. It used to be you had a pubkey, it is pretty easy to use existing wallet software to spend any output using a pubkey, private key, that is pretty straightforward, a lot of wallets support that workflow. They don’t really seem to support the script hash thing so I added an extra tool, I haven’t tested it yet and I’m not 100 percent sure it works. In theory there is now a tool where if you give it a PSBT it has the output you want to spend and all the metadata associated with it. This tool will sign that input on the PSBT that you provided and return a PSBT with the proper scriptSig information filled out so you can spend it. I trust it signs correctly and it will do whatever but I haven’t tested that it spends correctly. I pinged the person who had the problem. I need to talk them into giving me their private key data for the output of the channel that I had with them. I can just send it to wherever they want. That is up in a PR.

When I was in Zurich a few weeks ago I spent some time talking to Christian about how to update our accounting stuff. I would really like to get an accounting plugin done soon. I did some rethinking about how we do events, it is an event based system. Coins move around, c-lightning emits an event. I am going to make some changes to how we are keeping track of things. I think the biggest notable change is we will no longer be emitting events about chain fees which kind of sucks. There is a good reason to not do that. Instead the accounting plugin will have to do fee calculations on its own which I think is fine. That is probably going to be the biggest change. Working through that today, figuring out what needs to change. Hopefully the in c-lightning stuff will be quite lightweight and then I can spend a lot of time getting the accounting plugin exactly where I want it. That will be really exciting. I am also going to be in Atlanta, Wednesday through Sunday at the TAB conference. I am giving a talk, appearing on a panel and running some other stuff. I will probably be a little busy this week preparing for the myriad of things someone has signed me up for. If anyone has suggestions about topics to talk about you have 24 hours to submit submissions if there are things on Lightning you want to hear about.

Something to get rid of next after we’ve got rid of the mempool.

That was brave. I think it is great. I think you are wrong but that is ok.

I think it is ok to be wrong.

It was a productive discussion.

It was very interesting actually.

I have had a bunch of conversations with people around it about interesting things. I think it sparked a lot of really good discussion and rethinking about exactly what the mempool’s purpose is. I have been meaning to write a follow up post to the one I sent to the mailing list summarizing everyone’s takes. Some of the arguments I was expecting to see but didn’t.

If people don’t lynch you occasionally you are not doing your job properly.

Someone posted on the mailing list, is someone moderating this list? How is this allowed to be posted? Are they even reading the posts?

Didn’t you know the mempool was part of Satoshi’s doctrine, you can’t question it (Joke).

There have been some pretty good discussions in a private chat I’ve been in about changing the RBF rules. This has been headed up by Murch and Gloria. I have been talking to them pushing for a certain angle on how those rules get changed. I think I’ve managed to convince at least one person. We are looking at making pinning less possible. I don’t know if I want to talk about that though.

RBF pinning is one of those obscure black art areas. People just want you to fix it, they don’t want you to talk about it. That’s the response I’ve had. “That sounds really complicated, tell me when it is fixed”.

Pinning is the one I always forget, the setup is not complicated but there are definitely a number of assumptions you make before your transaction is pinned. I need to remind myself of that. I could talk about accounting but maybe it should after I’ve built it.

Accounting is a big hole unfortunately that we don’t have a really great answer for. It is nice to have thorough blow by blow, here’s where all your money went. Mostly people ignore fees or sum them as if they happened all at once but in a number of scenarios like filing your tax you should be annotating all of the expenses when they happened approximately.

There are a couple of reasons why I like accounting. One way of looking at it, really what I’m working on is a data logging project right now. I am making c-lightning such that it will effectively log all of the data movements. That is really nice for accounting when you have taxes etc but it is also really nice for control or making sure you know where your money is. I talked to the ACINQ guys pretty briefly about how they were doing accounting when I met up with them a few weeks ago. They are doing more checkpoint style. There are a couple of ways you can do it. You can do checkpoint style where you take a snapshot and you compare it to previous snapshots and see what’s changed. The approach I am taking is emit events as we go. There will probably have to be a checkpoint emitted at some point, a checkpoint will be one of the emissions but ideally you’ll have more of a realtime log of events that are happening. The problem is if you have two checkpoints and something happens between them, you have to go back and figure what happened in between. If you have a log of events in theory you should be able to look through the events and see what happened, you don’t get surprises between checkpoints.

There is the meta problem of lack of finalization. Nothing is final ever in Bitcoin.

One of the ways we are going to kind of get around that is I am going to push some of the finalization accounting onto the plugin itself. c-lightning is going to emit events and then to some extent it will be the responsibility of the plugin to keep track of when things are final.

That makes sense. 6 blocks is final(ish) and 100 blocks is forever.

But that means the plugin needs to have a block store of its own that it keeps track of. It is not beautiful but I think it will be fine.

Did you have a prototype of an accounting plugin before? It didn’t make it to the plugin repo?

There is the data logging aspect of c-lightning and I started working on a plugin and ran into problems. The problems I ran into were the fact that c-lightning replays events. One way I am going to fix it is by making it such that the accounting plugin expects duplicates and deduplicates them. There is some complicated stuff that I had to do but there were some things in the original event emissions that we were doing that I don’t think are sustainable long term. Especially now that we have dual funding and there are multi funded transaction opens and splicing on the roadmap soon, fingers crossed. All those things make some of the assumptions that we were making about where fees are going and who owns certain inputs and outputs on a transaction are no longer valid. I am revisiting some of those as well. No there hasn’t been a plugin, there has been events being emitted but as far as I know no one is consuming them.

That’s good because you are about to break them all.

Yes that’s correct. The data format is fine. I thought I was going to have to change the version, they have a version in it already. There is a version number in the data thing. I was looking at it today, I think the data is fine, it is just when we emit them and for what purpose, what events we are emitting is going to change.

I stumbled across it in the outpoint rewrite.

vincenzopalazzo: In the last two weeks I have worked on some c-lightning PR review. Also I started working on an issue where we accepted opening a channel with an amount lower than the minimum amount that the other side would accept. I think we need to query the gossip map in some way to get this information. I think we miss this check where the amount that we are requiring to open the channel is greater than the minimum than the other side would accept. To get this information we need to query inside the fundchannel command to get the information from the gossip map. I am not sure about this.

They do not have a generic way of gossiping to say what their minimum channel size is which is unfortunate. It has been a common request. We didn’t do it because there is a whole heap of other conditions it could have on things. What happens is you get an error back which is a “human readable” code to say I didn’t like things. There is no reliable way of telling it. The human operator reads the code and says “It must be this.”

I think at some point I was working on a spec proposal to send your minimum channel in the init message. There is an init TLV and that way you have it at open. It is not gossip necessarily but when you connect to a channel you immediately get the data, I don’t know.

There are other things you might not like. There was also the proposal to have errors specify what they were complaining about and a suggested value. That would also address this and a pile of other things. You could say “This field should be this” and that would be “Here’s your minimum value” in this case. We cover other cases where you don’t like something that they’re offering you. In a more generic way, perfect is maybe the enemy of the good here, maybe that is why it never happened. “It doesn’t cover all the cases”. That maybe it. For the moment we are still falling back to human readable messages which is terrible. You could hack something in to try to interpret the different things that LND, LDK, us and other people say and do something smarter but good luck.

Just tell them upfront. Whenever you connect to someone, tell them upfront in the message that is formatted and then you don’t have to worry… I guess you’re saying there are other reasons it won’t open but I think the min channel size is a really common one.

Min channel size is actually really common. I do like the error proposal where you specify exactly what is wrong. It was a proposal, I don’t know if it went anywhere. At risk of making you do a lot more work you could perhaps look at that. They don’t broadcast any information where you can immediately tell what the channel size should be. Sorry, you would expect it to work the way you say but it doesn’t.

vincenzopalazzo: I’ve finished my first version of the Matrix plugin with the server. In the next week I will publish the website with some fun information. I am relying on the information that I am receiving because I don’t want to make a real server with authentication, I want someone to put the data and I verify this data is from the node that sent me this information. The things I am thinking about is to use onion messages. I receive the data, I get the hash to check this data and with the onion message I send back the message to the node that is the owner of this payload. “This is your data with a check on the hash” and I receive back ACK or NACK. I don’t know if this is the correct way to do it.

I am a fan of anything that uses onion messages so I’m probably biased here.

Future possible changes to Lightning gossip

I did do a couple of things now I think about it. One was this idea of upending gossip. We are going to have to change gossip for Taproot. The gossip is very much nailed to the fact that with a channel you prove you control the UTXO and that is a 2-of-2. Obviously that is changing when we have Taproot channels, hand wave. We do need to rev our gossip in some way. The question is do we get more ambitious and rev it entirely. One thing I really dislike about gossip is that chain analysis have started to use our gossip to tie UTXOs to node IDs. This is fine for your public channels while they exist but of course that data leaks. When you spend those UTXOs in other ways you have now compromised your privacy. A more radical proposal is to switch around the way we do gossip and not advertise UTXOs directly for a channel. One reason to advertise UTXOs is to avoid spam. At the moment you have to prove that you control these funds in some way. It doesn’t have to be a Lightning channel but it does have to be a 2-of-2 of a certain form and you have to be able to sign it. If we go to a Taproot world and it is a bit more future proof, you control UTXOs and we believe you, then we could loosen it even more and say “You can announce a node if you prove ownership of a UTXO and that would entitle you to announce this many channels.” You just announce them “I’ve got a channel with Joe and it is this size max”. There needs to be some regulation on that. You need to limit it somehow but you can get a bit looser. You can say “Every UTXO that you prove that you own as a node you can advertise one channel plus 8 times its capacity” or something. You still have to expose some UTXO but it doesn’t need to be related. It could be cold storage, could be something else. In theory there are ways that you could get another node to sign for you, there are trust issues with that though. Would you pay them to lease their signature? What if they spend it etc? There are some interesting models around that. Unique ownership of a UTXO, there is some zero knowledge stuff that you could do here too, they are a bit vague. The idea that you prove a UTXO and lets you join the network, announce your node and then you can announce a certain number of channels. With the channels you don’t give away the UTXO information at all necessarily. A naive implementation would just do what it does now and a node advertises some UTXOs or its channels. You wouldn’t associate them directly so it would still be an improvement. There are downsides to this approach. One, there are arbitrary numbers. How many UTXOs? What sized UTXOs have you got to advertise? How many channels are you allowed to advertise? They are a bit handwavey. Ideally we would use BIP 322, there is a proposed way to do signed messages for generic things but it has not been extended for Taproot. I’m not quite sure what that would look like. Ideally we would use the generic signed message system to say “This is how you sign your Lightning node announcement with your UTXO”. That piece is missing from the Bitcoin side. It would be nice if that BIP were finalized and we could start using that.

BIP 322, you would want that updated for Taproot?

And some consensus that that was the way to go. Then we would just use that. At the moment we have a boutique thing because we know it is a 2-of-2 and we produce both signatures. It would be nice to have a generic solution, use that BIP and sign your thing. That would be pretty cool. Whether we would only use the new gossip for Taproot, we’d probably need to do both. There would have to be a migration and everything else so we’d need real buy-in on the idea that that was worth doing. I am really annoyed with the Chain analysis people so it would be good to fix that and this is one way of doing it. Other people may have other ideas if we open up the way we do gossip. The other thing is I haven’t run the numbers on what it would do to our gossip size. Your node announcements obviously get bigger, you don’t have channel announcements anymore, you only have channel update. But you’ve lost the ability to do short channel IDs so now you need to put both node IDs in each one. But there are some compact encodings that we can use and things like that. I may be able to scrape some of that back and make it at least no worse than we are now. Maybe even a little better. That analysis has to be done.

Individual updates (cont.)

On the c-lightning front I said I did the database stuff, I have that working. And ambitiously I looked at doing this wait API that we’ve talked about for a long time in c-lightning. There are a whole pile of events that we would like to wait for. We have a wait invoice already and a wait anyinvoice specifically to wait for invoices to be paid which is the most obvious thing. But there are other things that you might want to wait for, in particular with BOLT 12 rather than invoices being paid you may want to wait for invoices to be generated. And you might want to wait for payments that are outgoing. I’ve got this rough infrastructure, for every event that you can list effectively has a creation index and a change index. Then you can say “Wait for the next invoice change index” and it will say “Here’s the change”. Technically there is a deleted index too. You can say “Wait” and the wait RPC will return and say “Yes this is incremented, you’ve got a new change”. You also need to enhance the list things to list in create order or change order and presumably a limit, it will give you a certain number of paginations. That API where you have an universally incrementing index is really good to avoid missing events. It means your plugin can go to sleep, it comes back and it says “I was up to this event”. Then you’d get all the changes since then, all the changes that have happened. It doesn’t quite work if something is deleted in that time though. That’s the API it would be nice to head towards. It would subsume the wait invoice API and it would let you wait for almost anything that we have. That has been a long term thing, to work on that and have this generic wait API where you can wait for things to change or things to be created. Then you get notified when it does and you can go in and figure out exactly what happened. It is pretty simple to implement. It is both simpler to use and more efficient. A lot of things like Spark end up doing these complete lists and try to figure out what changed since last time they ran. It would be nice to get this kind of mechanism in place. There are a few twists, as I implement it we will see how it goes. That was a random aside, something I wanted to do for ages. There may be a PR eventually for that as well.

Different networks and default ports

In Bitcoin Core when you run a signet node it runs immediately by default on a specified signet port. We use 9735 for any network in lightningd. Would it be possible to switch to some network specific port?

I have done that in the Raspiblitz, following Bitcoin Core’s logic for testnet I add a 1 before the 9735 and for signet I add a 3.

Maybe we should switch to defaults. It makes sense. In theory you can run multiple networks on a single port, I don’t know if anyone does that and our implementation doesn’t support it. But in theory you could have a node that runs both testnet and mainnet off of the same port. We have all the distinguishing messages, you can tell if you are only interested in this kind of gossip. I don’t think any implementation actually supports it.

I had a problem doing that for Bitcoin Core when my node picked up some testnet block hashes and thought it was on a fork. I was quite worried for a moment.

We do distinguish them all, we will ignore them properly. In fact now we will hang up. It used to be that you could connect and you would both furiously gossip with each other about stuff and you’d be ignoring all the replies. “That’s the wrong chain, that’s the wrong chain, that’s the wrong chain”. These days in the init message we now send what chains you support and if there is no overlap between the two sets you just hangup. In theory you can support multiple chains at once, I don’t think it has been done.

Q&A

Someone, I was talking to them, they opened a 20,00 satoshi channel to me. They couldn’t see it in Umbrel what the reserve was of the channel. Maybe on the command line but not in the web user interface. I showed them what’s the reserve, it was the same for both me and them. I didn’t have anything so I said “Now you send me 579 satoshis”. I still cannot send anything to you because it fills my reserve. We managed actually to send through me and so I did not lose any satoshis, I am routing for free. It was very interesting because I never realized these reserves of channels. Today I opened the signet channel with another friend, he filled my reserve and I was still not able to send him 20 satoshis. The reserve was 10,000 satoshis. I checked it was filled, the reserve was filled on my side and I had 20 extra satoshis, I couldn’t send them. It was reported in c-lightning.

Who opened the channel?

He did.

So it is not a fee problem. If it says spendable msat is 20 you should be able to send 20 msats. File a bug report.

He said he was trying to route it. The router didn’t see him able to send that. I am quite sure you would be able to send it to the other side without routing it.

We have a test for this in our test suite that makes sure that you can’t spent 1 msat more and you can spend the amount that it says so it should work. It is quite a complicated calculation to see exactly how much you can spend. I can believe that there are issues with it. You should file a bug report.

I wasn’t trying to route. I was just trying to send to him. There was only one channel, he didn’t have any other channels.

The routing engine is invoked.

Probably because I was sending it and I have many other signet channels.

It should definitely work though.

I’ll file a bug report.

Is there a difference if he uses keysend?

I opened another channel from another signet node to him, I did keysend and it worked. It was a big channel open from my side. It was with lnd on the other side. On mainnet he could spend every single sat.

On the ports conversation Core has just merged a relaxation of using the default port for mainnet. This seems to suggest they are moving away enforcing default ports, currently for mainnet but possibly for testnet etc as well. That’s just if you want to use a different port. You can obviously still use the default port.

I was running bitcoind, Bitcoin Core on different ports and it worked very well. My idea was that when someone plays with different networks on lightningd, they open that port and firewall and it is gossiped that this is their Lightning port for signet. Then they say “But actually this is the port I want to run the mainnet on”.

I think all of us have had the same problem where you have to manually move your other ones out the way. I am happy to follow the Raspiblitz convention and by default we should run testnet on the default port. It isn’t 9735, it is 19735 and whatever they do for signet. People can always play with it. Making the default sane is a win and probably something we should have done a long time ago. I look forward to your PR to change the default port.

I have a question on the encryption of the hsm_secret. If it is encrypted what would be the best way to detect it. As I understand there is no CLI error shown. What I am doing at the moment is parsing the logs to see the error but there could be a better way than that?

I don’t know what happens when it is wrong, what does it do?

It just says it is not available. If you try to call getinfo then it will just say it is not available.

So the daemon doesn’t start?

Yes.

Fatal.

It is fatal. Then you go to the error logs and it says that the password wasn’t provided or wrong password provided or something like that.

I think there should be something out to stderror. But also we should probably use a specific exit error code in the case… That would be a lot easier to detect. PRs welcome. It is a simple one but it is the kind of thing that no one thought about. Just document it in the lightningd man page. Pick error codes, as long as nothing else gets that error code it would be quite reliable. I think 1 is our general error code, 0 is fine. 1 is something went wrong, anything up to about 125 is probably a decent error, exit code for hsm decoding issues. As you say it makes perfect sense because it is a common use case. If you ever want to automate it you need to know.

Minisketch looks as if it is close to merge, if you want to use Minisketch for gossip stuff. You posted an idea before on the mailing list. I have an old branch that pulls in a previous version of Minisketch.

You did actually code it up as well? It was more than just a mailing list post? I didn’t see the branch.

I never pushed the branch, I never got to that point. Just played with it. Minisketch was really nice, the library is very Pieter Wuille, very sweet. Everyone should play with Minisketch just because they can, it is really cool. To some extent it is always secondary, it is just an efficiency improvement. There aren’t many protocol changes required. The whole network doesn’t have to upgrade to start using Minisketch gossip between nodes.

When sending and receiving the remote address I use some struct which I get from the wire. In order to parse it I create the struct in C and say it is packed. I need to say it is packed so it is aligned and not something shifted in order to read it correctly. Is there a proper way to do this? Using some directive gcc that it is packed is not the correct way of doing this.

No it is not the correct way. You should be writing from wire and to wire routines for your struct. That specifies exactly what it should look like on the wire, Reading straight into a struct is generally a bad idea.

Meetup

Topic: Various topics

Location: Jitsi online

Video: No video posted online

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

c-lightning v0.10.2 release

https://medium.com/blockstream/c-lightning-v0-10-2-bitcoin-dust-consensus-rule-33e777d58657

We’ve tagged RC1 to get people testing. The testing found 2 regressions, both related to the database handling. One is Rusty’s crash outside of a transaction. I quickly wrapped that in a transaction, conditionally on whether there is already one or not. The other one is in the switcheroo that SQLite requires from us to remove columns from a table. That apparently de-syncs the counters on auto-increment fields and we ended up with duplicate entries in the database itself. That also has a quick fix now. Yesterday I tagged release candidate 2. If all goes well we should have a final release sometime this week.

Do we use auto-increment fields in the database?

We use auto-increment fields whenever we auto-assign a new ID in a column. Most of our tables actually have an ID column that is auto-incremented. Whenever we use a BIGSERIAL or sequence, that is one of these sequences that can end up out of sync if we do our rename, drop and copy into.

I am going to have to look back through the PR because I’m not sure how that happened. I would assume that it would keep incrementing from where it left off.

What you do is you rename the table, create a new one with new counters and then you select and insert into the new table and drop the old one. When you insert with values pre-filled the sequence doesn’t increment itself.

But that is only one time? Say it was 5, you copied across, create a new table, you copy it back, it is still 5?

The sequence is still counting from 1.

How do you fix it?

We fix it by manually setting the sequence to the maximum value of the column plus one.

Wouldn’t the select statement have populated it with that? I will look through the code.

The insert with values in that column will not call the increment on the sequence. That is the collision.

I’m not sure I get it but I will look at the code until I understand it. I have a hack that does some database handling, helpers to delete columns and rename columns. This is easy in Postgres and really hard in SQLite. I would like to fix that case rather than have us continuously stumble over it.
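
To make the failure mode concrete, here is a minimal sketch assuming a Postgres backend, where a BIGSERIAL sequence genuinely does not advance when rows are inserted with explicit ids. The table and column names are hypothetical; this illustrates the problem described above, not c-lightning’s actual migration code.

```python
# Minimal sketch (not c-lightning's actual migration code): a rename/copy/drop
# "column removal" migration can leave a Postgres sequence behind the data,
# leading to duplicate ids on the next default insert. Names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=lightningd")  # hypothetical DSN
cur = conn.cursor()

# Typical column-removal switcheroo: rename, recreate without the old column,
# copy rows across with their existing ids, drop the old table.
cur.execute("ALTER TABLE channels RENAME TO channels_old")
cur.execute("CREATE TABLE channels (id BIGSERIAL PRIMARY KEY, short_channel_id TEXT)")
cur.execute("INSERT INTO channels (id, short_channel_id) "
            "SELECT id, short_channel_id FROM channels_old")
cur.execute("DROP TABLE channels_old")

# Inserting explicit ids does not advance the new table's sequence, so the
# next default insert would reuse an existing id. Re-sync it to max(id) + 1.
cur.execute("""
    SELECT setval(pg_get_serial_sequence('channels', 'id'),
                  COALESCE((SELECT MAX(id) + 1 FROM channels), 1),
                  false)
""")
conn.commit()
```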

Individual updates

I just opened a maximum size channel on signet to my friend who I am teaching c-lightning to. We will have new students soon.

I was thinking I have to fire up my signet node and get that working. I have been running a signet Bitcoin node for a long time but I haven’t actually fired up c-lightning. Maybe after I upgrade my machine. I run my nodes under Valgrind and it is very heavy.

I am running the latest version of c-lightning on signet, testnet and mainnet. It is all running very well.

There seems to be an active signet community which is nice.

The next Raspiblitz release will likely be in the next 2 weeks. We have ironed out a few bugs. We added c-lightning, added parallel testnet and signet. We updated the image, there were some pressing bugs. I have updated to c-lightning 0.10.2 RC1 and RC2 from the menu so it works with the Raspiblitz deployment, the updates are working. That’s available for everyone even if you don’t update the image. That will be coming with the next release of course. On my smaller size routing node I didn’t find any issues, which is good I guess. Another interesting development, waxwing has used c-lightning onion messages in Joinmarket. That’s an exciting thing to look at and explore. Also there is a proposal to use actual payments later on for the coinjoin fees paid to the makers but that is far off. It is a more involved, complicated implementation but using the onion messages in parallel with the IRC coordination is quite exciting. That just uses a couple of Lightning instances. That is the other thing I have been looking at.

I agree, that is really exciting, I look forward to the Joinmarket integration. That’s pretty sweet. Lisa is always on about people not doxxing their channels and their nodes.

I don’t think we need channels to use onion messages. Those would be just ephemeral nodes that use the layer for communication. It will be exciting to see how it functions with the same machine running a c-lightning node and at least one more as a taker as well. We will see how much pressure it puts on the Lightning Network itself. These messages, all the peers you want to communicate with will be running the same version so I don’t know if there will be any hops involved at all really. There could be but the hops will be between peers which are also running the same kind of Joinmarket software, a Joinmarket cluster on Lightning. It shouldn’t affect the rest of the network as I see it. Same with the payments together, that involves liquidity issues so there is far to go with that.

If you are doing a coinjoin you are obfuscating your UTXO set which is definitely a direction we want to head in.

What is the release cadence of c-lightning? Is it the same as Core?

It is every 2 or 3 months.

For a minor release?

We don’t really distinguish. I’ve actually been thinking about switching us to a dated release system, 21 dot whatever and 22 dot whatever, rather than justifying a major or a minor bump; that’s an arbitrary last minute decision. We look at the list and go “I think this feels like an 11 or something”. I don’t know that in retrospect they’ve been justified. The important thing is to get a release out, the number is secondary. We’ve generally tried to go for a fairly rapid release cycle. The idea is that the RC drops on the 10th of whatever month, a 2 month release cycle. But then often in practice if there are a couple of RCs it can be 3 weeks before the final drops. Now we’ve only got 5 weeks until the next RC, is that too tight? It has been 2 or 3 months and it has been up to the release captain to decide what the cadence is going to be. Sometimes it has been longer than that but we do try to keep a fairly regular cadence. If we went for date numbering it would become more obvious when we’ve missed our cadence. But nobody cares about version numbers. Version names, they are important (Joke) but version numbers, bigger numbers are better generally. I think date based is probably something we should switch to at some point.

Block height based?

It has the advantage of neither clarity nor… At least it only goes up so that is good.

But shorter than dates.

Less memorable too for most of us. I don’t reminisce generally about block 400,000 or whatever.

I have been a little lazy the last 2 weeks, just some rebasing of my 2 PRs I have open. And a little progress on the one where we tell our peer his remote address. Now this information is filtered down to the Lightning daemon in order to be used there because connectd can’t use it, it doesn’t know the other addresses. These will be merged after the release. For the DNS one we need another implementation to pick it up because I think no one is working on that. I think Sebastian is working on the remote address part on ACINQ.

Michael is not only modifying c-lightning but he has also been advocating on the spec side to have this spec’ed because this is a change to the messages we send out. The idea is that you’ll tell the peer what you see them as: “We see this as your IP address”, where that information is available and useful. This is something that Bitcoin Core does now which I hadn’t realized. They used to use myipaddress.com or something. You used to reach out to that to get the IP address of your node. They gossip their view I believe. “Hey this is what I see you as”. If enough of your peers tell you the same thing, that seems to be your IP address.
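
As a toy illustration of that last heuristic, here is a minimal sketch, not taken from either codebase, of adopting whichever address a sufficient number of peers report back to us. The threshold and data structures are invented for the example.

```python
# Toy illustration of the "believe your peers" heuristic described above:
# each connected peer reports what address it sees us as, and once enough
# peers agree on the same address we treat it as our public address.
from collections import Counter

MIN_AGREEING_PEERS = 4  # made-up threshold

def guess_our_address(peer_reports):
    """peer_reports: iterable of (peer_id, reported_address) tuples."""
    votes = Counter(addr for _, addr in peer_reports)
    if not votes:
        return None
    address, count = votes.most_common(1)[0]
    return address if count >= MIN_AGREEING_PEERS else None

reports = [
    ("peer1", "203.0.113.5:9735"),
    ("peer2", "203.0.113.5:9735"),
    ("peer3", "203.0.113.5:9735"),
    ("peer4", "203.0.113.5:9735"),
    ("peer5", "198.51.100.7:9735"),  # one peer disagrees (e.g. NAT weirdness)
]
print(guess_our_address(reports))  # -> 203.0.113.5:9735
```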

I think we should also maybe think about adding a UPnP library or something like that. If the router is enabled then we can use it. Bitcoin Core does it?

Did it, had a security hole in it, dropped it, I think reenabled it. There have been issues around UPnP, it is a mess. I also tried out this library a couple of times, not that great. It seems to have some bugs so maybe reimplement it, I don’t know. It doesn’t work very well for all routers, maybe for a certain subset and certain devices it fails.

I think the argument is that it is better than nothing. It is nice if you have an IP address. I like the autoconf nature, I like the fact that if people manually override things we should respect that, but if people run c-lightning out of the box then it should probably believe the consensus when it talks to peers and they see an IP address. Ideally it should try to punch a hole so that works.

Maybe I’ll check the Bitcoin Core node about the UPnP stuff, I am pretty sure they did well enough.

We should just steal their code, that’s an excellent idea.

Or we could tell the Bitcoin daemon to open the port for us if they already have it (Joke)

If you think changing the spec is hard try getting a new RPC call into bitcoind. Definitely looking at what Bitcoin Core is doing there would be nice. Just knowing our IP address would be useful so that we can gossip that. I’m not clear what the defaults should be here. If they don’t set it up and they are behind a private IP address, at the moment we won’t advertise any addresses for them. If we believe we have an address should we start advertising in the hope that people can maybe reach us on that IP address? Are there issues there? People may have been relying on the fact that wasn’t…

I think we can broadcast one or two addresses where we aren’t certain, just try it. As long as we don’t broadcast 10 addresses or more it is not a big deal.

You have to have a public channel to advertise your node anyway. By definition if you’ve got a public channel you are at least announcing your existence. I just worry about people who didn’t want to advertise their IP address. Certainly if they have any Tor things enabled I wonder if we shouldn’t advertise their IP address. We are going to have to think carefully about changing behavior in those cases. We can think about that offline.

What PRs are we talking about?

4829 is the DNS one and the other one is 4864, that one is still in draft. It is working for sending and reading this information but it is not doing anything useful yet so it is a draft.

Last week I worked on some internal Blockstream stuff. I have also been updating our guess-to-remote stuff to include anchor outputs. Someone recently fried their node and didn’t have a backup. I am working on recovering stuff. They weren’t able to use our existing recovery tools to recover any of the funds they had put into lease channels on either side. The reason for that is those use anchor outputs and our tooling did not take anchor output scripts into account. I’ve updated those and created an extra tool. The change to anchor outputs changes the to_remote from a public key hash to a script hash. Now you need to know what the script is. It used to be that you had a pubkey; it is pretty easy to use existing wallet software to spend any output using a pubkey and private key, that is pretty straightforward, a lot of wallets support that workflow. They don’t really seem to support the script hash thing so I added an extra tool. I haven’t tested it yet and I’m not 100 percent sure it works. In theory there is now a tool where if you give it a PSBT that has the output you want to spend and all the metadata associated with it, this tool will sign that input on the PSBT that you provided and return a PSBT with the proper scriptSig information filled out so you can spend it. I trust it signs correctly and it will do whatever but I haven’t tested that it spends correctly. I pinged the person who had the problem. I need to talk them into giving me their private key data for the output of the channel that I had with them. I can just send it to wherever they want. That is up in a PR.
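
To make the to_remote change concrete: with anchor outputs (per BOLT 3) the to_remote output becomes a P2WSH whose witness script is `<remote_pubkey> OP_CHECKSIGVERIFY 1 OP_CHECKSEQUENCEVERIFY`, so a recovery tool has to reconstruct that script before it can sign. The sketch below, with a placeholder pubkey, is illustrative rather than the actual tool.

```python
# Rough sketch of why old recovery tooling breaks with anchor outputs: the
# to_remote output is no longer a simple pay-to-pubkey-hash but a P2WSH whose
# witness script must be reconstructed before signing. Pubkey is a placeholder.
import hashlib

def anchor_to_remote_script(remote_pubkey: bytes) -> bytes:
    # <remote_pubkey> OP_CHECKSIGVERIFY OP_1 OP_CHECKSEQUENCEVERIFY
    assert len(remote_pubkey) == 33
    OP_CHECKSIGVERIFY, OP_1, OP_CSV = 0xAD, 0x51, 0xB2
    return bytes([len(remote_pubkey)]) + remote_pubkey + bytes([OP_CHECKSIGVERIFY, OP_1, OP_CSV])

def p2wsh_script_pubkey(witness_script: bytes) -> bytes:
    # OP_0 <sha256(witness_script)>
    return bytes([0x00, 0x20]) + hashlib.sha256(witness_script).digest()

remote_pubkey = bytes.fromhex("02" + "11" * 32)  # placeholder, not a real key
script = anchor_to_remote_script(remote_pubkey)
print("witness script:", script.hex())
print("scriptPubKey:  ", p2wsh_script_pubkey(script).hex())
```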

When I was in Zurich a few weeks ago I spent some time talking to Christian about how to update our accounting stuff. I would really like to get an accounting plugin done soon. I did some rethinking about how we do events, it is an event based system. Coins move around, c-lightning emits an event. I am going to make some changes to how we are keeping track of things. I think the biggest notable change is we will no longer be emitting events about chain fees, which kind of sucks. There is a good reason not to do that. Instead the accounting plugin will have to do fee calculations on its own which I think is fine. That is probably going to be the biggest change. Working through that today, figuring out what needs to change. Hopefully the in-c-lightning stuff will be quite lightweight and then I can spend a lot of time getting the accounting plugin exactly where I want it. That will be really exciting. I am also going to be in Atlanta, Wednesday through Sunday at the TAB conference. I am giving a talk, appearing on a panel and running some other stuff. I will probably be a little busy this week preparing for the myriad of things someone has signed me up for. If anyone has suggestions about topics to talk about you have 24 hours to submit them if there are things on Lightning you want to hear about.

Something to get rid of next after we’ve got rid of the mempool.

That was brave. I think it is great. I think you are wrong but that is ok.

I think it is ok to be wrong.

It was a productive discussion.

It was very interesting actually.

I have had a bunch of conversations with people around it about interesting things. I think it sparked a lot of really good discussion and rethinking about exactly what the mempool’s purpose is. I have been meaning to write a follow up post to the one I sent to the mailing list summarizing everyone’s takes. Some of the arguments I was expecting to see but didn’t.

If people don’t lynch you occasionally you are not doing your job properly.

Someone posted on the mailing list, is someone moderating this list? How is this allowed to be posted? Are they even reading the posts?

Didn’t you know the mempool was part of Satoshi’s doctrine, you can’t question it (Joke).

There have been some pretty good discussions in a private chat I’ve been in about changing the RBF rules. This has been headed up by Murch and Gloria. I have been talking to them pushing for a certain angle on how those rules get changed. I think I’ve managed to convince at least one person. We are looking at making pinning less possible. I don’t know if I want to talk about that though.

RBF pinning is one of those obscure black art areas. People just want you to fix it, they don’t want you to talk about it. That’s the response I’ve had. “That sounds really complicated, tell me when it is fixed”.

Pinning is the one I always forget, the setup is not complicated but there are definitely a number of assumptions you make before your transaction is pinned. I need to remind myself of that. I could talk about accounting but maybe that should wait until after I’ve built it.

Accounting is unfortunately a big hole that we don’t have a really great answer for. It is nice to have a thorough blow by blow, here’s where all your money went. Mostly people ignore fees or sum them as if they happened all at once, but in a number of scenarios, like filing your taxes, you should be annotating all of the expenses with approximately when they happened.

There are a couple of reasons why I like accounting. One way of looking at it, really what I’m working on is a data logging project right now. I am making c-lightning such that it will effectively log all of the data movements. That is really nice for accounting when you have taxes etc but it is also really nice for control, for making sure you know where your money is. I talked to the ACINQ guys pretty briefly about how they were doing accounting when I met up with them a few weeks ago. They are doing it more checkpoint style. There are a couple of ways you can do it. You can do checkpoint style where you take a snapshot and you compare it to previous snapshots and see what’s changed. The approach I am taking is to emit events as we go. There will probably have to be a checkpoint emitted at some point, a checkpoint will be one of the emissions, but ideally you’ll have more of a realtime log of events that are happening. The problem is if you have two checkpoints and something happens between them, you have to go back and figure out what happened in between. If you have a log of events, in theory you should be able to look through the events and see what happened, you don’t get surprises between checkpoints.

There is the meta problem of lack of finalization. Nothing is final ever in Bitcoin.

One of the ways we are going to kind of get around that is I am going to push some of the finalization accounting onto the plugin itself. c-lightning is going to emit events and then to some extent it will be the responsibility of the plugin to keep track of when things are final.

That makes sense. 6 blocks is final(ish) and 100 blocks is forever.

But that means the plugin needs to have a block store of its own that it keeps track of. It is not beautiful but I think it will be fine.

Did you have a prototype of an accounting plugin before? It didn’t make it to the plugin repo?

There is the data logging aspect of c-lightning and I started working on a plugin and ran into problems. The problem I ran into was the fact that c-lightning replays events. One way I am going to fix it is by making it such that the accounting plugin expects duplicates and deduplicates them. There is some complicated stuff that I had to do, but there were some things in the original event emissions that we were doing that I don’t think are sustainable long term, especially now that we have dual funding and there are multi-funded transaction opens and splicing on the roadmap soon, fingers crossed. All those things mean some of the assumptions that we were making about where fees are going and who owns certain inputs and outputs on a transaction are no longer valid. I am revisiting some of those as well. No, there hasn’t been a plugin; there have been events being emitted but as far as I know no one is consuming them.
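
For a sense of what “expects duplicates and deduplicates them” could look like on the plugin side, here is a minimal sketch using pyln-client. The `coin_movement` notification name is my recollection of the existing c-lightning topic; everything else (the hash-based dedup key, the in-memory set) is invented for illustration and is not the planned accounting plugin.

```python
# Illustrative sketch of consuming coin movement notifications while
# tolerating replayed duplicates, as discussed above. Assumes pyln-client;
# the dedup key hashes the whole payload rather than relying on field names.
import hashlib
import json
from pyln.client import Plugin

plugin = Plugin()
seen = set()  # a real plugin would persist this across restarts

@plugin.subscribe("coin_movement")
def on_coin_movement(plugin, coin_movement, **kwargs):
    key = hashlib.sha256(
        json.dumps(coin_movement, sort_keys=True).encode()
    ).hexdigest()
    if key in seen:
        plugin.log("duplicate coin_movement, ignoring", level="debug")
        return
    seen.add(key)
    plugin.log("new coin_movement event: {}".format(coin_movement))

plugin.run()
```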

That’s good because you are about to break them all.

Yes that’s correct. The data format is fine. I thought I was going to have to change the version, they have a version in it already. There is a version number in the data thing. I was looking at it today, I think the data is fine, it is just when we emit them and for what purpose, what events we are emitting is going to change.

I stumbled across it in the outpoint rewrite.

vincenzopalazzo: In the last two weeks I have worked on some c-lightning PR review. Also I started working on an issue where we accepted opening a channel with an amount lower than the minimum amount that the other side would accept. I think we need to query the gossip map in some way to get this information. I think we are missing the check that the amount we are using to open the channel is greater than the minimum the other side would accept. To get this information we need to query the gossip map from inside the fundchannel command. I am not sure about this.

They do not have a generic way of gossiping to say what their minimum channel size is which is unfortunate. It has been a common request. We didn’t do it because there is a whole heap of other conditions it could have on things. What happens is you get an error back which is a “human readable” code to say I didn’t like things. There is no reliable way of telling it. The human operator reads the code and says “It must be this.”

I think at some point I was working on a spec proposal to send your minimum channel size in the init message. There is an init TLV and that way you have it at open. It is not gossip necessarily, but when you connect to a peer you immediately get the data, I don’t know.

There are other things you might not like. There was also the proposal to have errors specify what they were complaining about and a suggested value. That would also address this and a pile of other things. You could say “This field should be this” and that would be “Here’s your minimum value” in this case. It would cover other cases where you don’t like something that they’re offering you, in a more generic way. Perfect is maybe the enemy of the good here, maybe that is why it never happened: “It doesn’t cover all the cases”. That may be it. For the moment we are still falling back to human readable messages which is terrible. You could hack something in to try to interpret the different things that LND, LDK, us and other people say and do something smarter, but good luck.

Just tell them upfront. Whenever you connect to someone, tell them upfront in the message that is formatted and then you don’t have to worry… I guess you’re saying there are other reasons it won’t open but I think the min channel size is a really common one.

Min channel size is actually really common. I do like the error proposal where you specify exactly what is wrong. It was a proposal, I don’t know if it went anywhere. At risk of making you do a lot more work you could perhaps look at that. They don’t broadcast any information where you can immediately tell what the channel size should be. Sorry, you would expect it to work the way you say but it doesn’t.

vincenzopalazzo: I’ve finished my first version of the Matrix plugin with the server. In the next week I will publish the website with some fun information. I am relying on the information that I am receiving because I don’t want to make a real server with authentication; I want someone to submit the data and I verify this data is from the node that sent me this information. The thing I am thinking about is using onion messages. I receive the data, I take the hash to check this data, and with an onion message I send back a message to the node that is the owner of this payload: “This is your data with a check on the hash”, and I receive back an ACK or NACK. I don’t know if this is the correct way to do it.

I am a fan of anything that uses onion messages so I’m probably biased here.

Future possible changes to Lightning gossip

I did do a couple of things now I think about it. One was this idea of upending gossip. We are going to have to change gossip for Taproot. The gossip is very much nailed to the fact that with a channel you prove you control the UTXO and that is a 2-of-2. Obviously that is changing when we have Taproot channels, hand wave. We do need to rev our gossip in some way. The question is do we get more ambitious and rev it entirely. One thing I really dislike about gossip is that chain analysis companies have started to use our gossip to tie UTXOs to node IDs. This is fine for your public channels while they exist but of course that data leaks. When you spend those UTXOs in other ways you have now compromised your privacy. A more radical proposal is to switch around the way we do gossip and not advertise UTXOs directly for a channel. One reason to advertise UTXOs is to avoid spam. At the moment you have to prove that you control these funds in some way. It doesn’t have to be a Lightning channel but it does have to be a 2-of-2 of a certain form and you have to be able to sign it. If we go to a Taproot world and it is a bit more future proof, you control UTXOs and we believe you, then we could loosen it even more and say “You can announce a node if you prove ownership of a UTXO and that would entitle you to announce this many channels.” You just announce them: “I’ve got a channel with Joe and it is this size max”. There needs to be some regulation on that. You need to limit it somehow but you can get a bit looser. You can say “Every UTXO that you prove that you own as a node you can advertise one channel plus 8 times its capacity” or something. You still have to expose some UTXO but it doesn’t need to be related. It could be cold storage, could be something else. In theory there are ways that you could get another node to sign for you, there are trust issues with that though. Would you pay them to lease their signature? What if they spend it etc? There are some interesting models around that. Unique ownership of a UTXO, there is some zero knowledge stuff that you could do here too, but those ideas are a bit vague. The idea is that proving a UTXO lets you join the network and announce your node, and then you can announce a certain number of channels. With the channels you don’t give away the UTXO information at all necessarily. A naive implementation would just do what it does now and a node advertises some UTXOs and its channels. You wouldn’t associate them directly so it would still be an improvement. There are downsides to this approach. One, there are arbitrary numbers. How many UTXOs? What sized UTXOs have you got to advertise? How many channels are you allowed to advertise? They are a bit handwavey. Ideally we would use BIP 322, there is a proposed way to do signed messages for generic things but it has not been extended for Taproot. I’m not quite sure what that would look like. Ideally we would use the generic signed message system to say “This is how you sign your Lightning node announcement with your UTXO”. That piece is missing from the Bitcoin side. It would be nice if that BIP were finalized and we could start using that.
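
Those numbers are, as said, handwavey. As one possible reading of the “one channel plus 8 times its capacity per proven UTXO” rule, here is a sketch of the budget check a node might apply before accepting announcements; all constants and structure are invented for illustration.

```python
# One possible reading of the handwavey anti-spam budget described above:
# each UTXO a node proves ownership of entitles it to announce one channel,
# and to announce total channel capacity up to 8x that UTXO's value.
CHANNELS_PER_UTXO = 1
CAPACITY_MULTIPLIER = 8

def announcement_budget(proven_utxo_values_sat):
    max_channels = CHANNELS_PER_UTXO * len(proven_utxo_values_sat)
    max_capacity_sat = CAPACITY_MULTIPLIER * sum(proven_utxo_values_sat)
    return max_channels, max_capacity_sat

def within_budget(proven_utxo_values_sat, announced_channel_sizes_sat):
    max_channels, max_capacity = announcement_budget(proven_utxo_values_sat)
    return (len(announced_channel_sizes_sat) <= max_channels
            and sum(announced_channel_sizes_sat) <= max_capacity)

# A node proving two UTXOs of 1,000,000 sat each could announce up to 2
# channels totalling at most 16,000,000 sat under this reading.
print(within_budget([1_000_000, 1_000_000], [5_000_000, 9_000_000]))  # True
```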

BIP 322, you would want that updated for Taproot?

And some consensus that that was the way to go. Then we would just use that. At the moment we have a boutique thing because we know it is a 2-of-2 and we produce both signatures. It would be nice to have a generic solution, use that BIP and sign your thing. That would be pretty cool. Whether we would only use the new gossip for Taproot, we’d probably need to do both. There would have to be a migration and everything else so we’d need real buy-in on the idea that that was worth doing. I am really annoyed with the chain analysis people so it would be good to fix that and this is one way of doing it. Other people may have other ideas if we open up the way we do gossip. The other thing is I haven’t run the numbers on what it would do to our gossip size. Your node announcements obviously get bigger; you don’t have channel announcements anymore, you only have channel updates. But you’ve lost the ability to do short channel IDs so now you need to put both node IDs in each one. But there are some compact encodings that we can use and things like that. I may be able to scrape some of that back and make it at least no worse than we are now. Maybe even a little better. That analysis has to be done.

Individual updates (cont.)

On the c-lightning front I said I did the database stuff, I have that working. And ambitiously I looked at doing this wait API that we’ve talked about for a long time in c-lightning. There are a whole pile of events that we would like to wait for. We have a wait invoice already and a wait anyinvoice specifically to wait for invoices to be paid, which is the most obvious thing. But there are other things that you might want to wait for, in particular with BOLT 12 rather than invoices being paid you may want to wait for invoices to be generated. And you might want to wait for payments that are outgoing. I’ve got this rough infrastructure where every event that you can list effectively has a creation index and a change index. Then you can say “Wait for the next invoice change index” and it will say “Here’s the change”. Technically there is a deleted index too. You can say “Wait” and the wait RPC will return and say “Yes this is incremented, you’ve got a new change”. You also need to enhance the list commands to list in create order or change order and presumably take a limit, so it will give you a certain number at a time for pagination. That API where you have a universally incrementing index is really good to avoid missing events. It means your plugin can go to sleep, it comes back and it says “I was up to this event”. Then you’d get all the changes since then, all the changes that have happened. It doesn’t quite work if something is deleted in that time though. That’s the API it would be nice to head towards. It would subsume the wait invoice API and it would let you wait for almost anything that we have. That has been a long term thing, to work on that and have this generic wait API where you can wait for things to change or things to be created. Then you get notified when it does and you can go in and figure out exactly what happened. It is pretty simple to implement. It is both simpler to use and more efficient. A lot of things like Spark end up doing these complete lists and trying to figure out what changed since last time they ran. It would be nice to get this kind of mechanism in place. There are a few twists, as I implement it we will see how it goes. That was a random aside, something I wanted to do for ages. There may be a PR eventually for that as well.
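
As a sketch of how a plugin might consume such an API: assume a hypothetical `wait` RPC taking a subsystem, index name and next value, plus an enhanced `listinvoices` that can list from a creation index. None of this exists yet; the names and shapes are made up from the description above.

```python
# Hypothetical consumer of the generic "wait" API described above. The "wait"
# RPC and the indexed listinvoices parameters do not exist yet; names and
# result shapes are invented from the description for illustration.
from pyln.client import LightningRpc

rpc = LightningRpc("/run/lightning/lightning-rpc")  # socket path is illustrative

last_created_index = 0  # a real plugin would persist this across restarts

while True:
    # Block until at least one invoice has a creation index beyond ours.
    result = rpc.call("wait", {"subsystem": "invoices",
                               "indexname": "created",
                               "nextvalue": last_created_index + 1})
    # Then fetch everything created since we last looked, in create order.
    new_invoices = rpc.call("listinvoices", {"index": "created",
                                             "start": last_created_index + 1})
    for inv in new_invoices.get("invoices", []):
        print("new invoice:", inv.get("label"))
    last_created_index = result["created"]
```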

Different networks and default ports

In Bitcoin Core when you run a signet node it immediately runs by default on a specified signet port. We use 9735 for any network in lightningd. Would it be possible to switch to some network-specific port?

I have done that in the Raspiblitz, following Bitcoin Core’s logic for testnet I add a 1 before the 9735 and for signet I add a 3.
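
For reference, the Raspiblitz convention just described maps out as follows. This is a minimal sketch: the signet prefix of 3 follows the statement above, and the resulting numbers are simple concatenations rather than anything official.

```python
# The Raspiblitz convention described above: keep 9735 for mainnet and prefix
# it per network, mirroring Bitcoin Core's 8333/18333/38333 pattern.
DEFAULT_LIGHTNING_PORTS = {
    "bitcoin": 9735,   # mainnet, the spec'd default
    "testnet": 19735,  # prefix 1
    "signet": 39735,   # prefix 3
}

def default_port(network: str) -> int:
    return DEFAULT_LIGHTNING_PORTS.get(network, 9735)

print(default_port("signet"))  # 39735
```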

Maybe we should switch the defaults. It makes sense. In theory you can run multiple networks on a single port; I don’t know if anyone does that and our implementation doesn’t support it. But in theory you could have a node that runs both testnet and mainnet off of the same port. We have all the distinguishing messages, you can tell if you are only interested in this kind of gossip. I don’t think any implementation actually supports it.

I had a problem doing that for Bitcoin Core when my node picked up some testnet block hashes and thought it was on a fork. I was quite worried for a moment.

We do distinguish them all, we will ignore them properly. In fact now we will hang up. It used to be that you could connect and you would both furiously gossip with each other about stuff and you’d be ignoring all the replies. “That’s the wrong chain, that’s the wrong chain, that’s the wrong chain”. These days in the init message we now send what chains you support and if there is no overlap between the two sets you just hang up. In theory you can support multiple chains at once, I don’t think it has been done.
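The behaviour being described is just a set intersection over the chain hashes exchanged in the init message; a minimal sketch, with the chain identifiers simplified to strings:

```python
# Sketch of the init-message behaviour described above: each side advertises
# the chains it supports; if there is no overlap, disconnect immediately.
def should_disconnect(our_chains: set, their_chains: set) -> bool:
    return not (our_chains & their_chains)

assert should_disconnect({"mainnet"}, {"testnet"})          # no overlap: hang up
assert not should_disconnect({"mainnet", "testnet"}, {"testnet"})
```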

Q&A

Someone I was talking to opened a 20,000 satoshi channel to me. They couldn’t see in Umbrel what the reserve was of the channel. Maybe on the command line but not in the web user interface. I showed them what the reserve was, it was the same for both me and them. I didn’t have anything so I said “Now you send me 579 satoshis”. I still cannot send anything to you because it fills my reserve. We actually managed to send through me and so I did not lose any satoshis, I am routing for free. It was very interesting because I never realized these reserves of channels. Today I opened a signet channel with another friend, he filled my reserve and I was still not able to send him 20 satoshis. The reserve was 10,000 satoshis. I checked it was filled, the reserve was filled on my side and I had 20 extra satoshis, I couldn’t send them. It was reported in c-lightning.

Who opened the channel?

He did.

So it is not a fee problem. If it says spendable msat is 20 you should be able to send 20 msats. File a bug report.

He said he was trying to route it. The router didn’t see him able to send that. I am quite sure you would be able to send it to the other side without routing it.

We have a test for this in our test suite that makes sure that you can’t spend 1 msat more and that you can spend the amount that it says, so it should work. It is quite a complicated calculation to see exactly how much you can spend. I can believe that there are issues with it. You should file a bug report.

I wasn’t trying to route. I was just trying to send to him. There was only one channel, he didn’t have any other channels.

The routing engine is invoked.

Probably because I was sending it and I have many other signet channels.

It should definitely work though.

I’ll file a bug report.

Is there a difference if he uses keysend?

I opened another channel from another signet node to him, I did keysend and it worked. It was a big channel open from my side. It was with lnd on the other side. On mainnet he could spend every single sat.

On the ports conversation Core has just merged a relaxation of using the default port for mainnet. This seems to suggest they are moving away from enforcing default ports, currently for mainnet but possibly for testnet etc as well. That’s just if you want to use a different port. You can obviously still use the default port.

I was running bitcoind, Bitcoin Core on different ports and it worked very well. My idea was that when someone plays with different networks on lightningd, they open that port and firewall and it is gossiped that this is their Lightning port for signet. Then they say “But actually this is the port I want to run the mainnet on”.

I think all of us have had the same problem where you have to manually move your other ones out of the way. I am happy to follow the Raspiblitz convention and by default we should run testnet on its own default port. It isn’t 9735, it is 19735, and whatever they do for signet. People can always play with it. Making the default sane is a win and probably something we should have done a long time ago. I look forward to your PR to change the default port.
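The Raspiblitz convention mentioned above (prefix the mainnet port 9735 with 1 for testnet and 3 for signet) amounts to a simple per-network default; a sketch of that mapping, with the regtest value an arbitrary illustrative choice:

```python
# Per-network default port convention discussed above (Raspiblitz style).
DEFAULT_PORTS = {
    "bitcoin": 9735,   # mainnet
    "testnet": 19735,
    "signet": 39735,
    "regtest": 19846,  # assumption: any otherwise-unused port would do
}

def default_port(network: str) -> int:
    return DEFAULT_PORTS.get(network, 9735)
```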

I have a question on the encryption of the hsm_secret. If it is encrypted what would be the best way to detect that the wrong password was given? As I understand there is no CLI error shown. What I am doing at the moment is parsing the logs to see the error but is there a better way than that?

I don’t know what happens when it is wrong, what does it do?

It just says it is not available. If you try to call getinfo then it will just say it is not available.

So the daemon doesn’t start?

Yes.

Fatal.

It is fatal. Then you go to the error logs and it says that the password wasn’t provided or wrong password provided or something like that.

I think there should be something out to stderr. But also we should probably use a specific exit error code in that case… That would be a lot easier to detect. PRs welcome. It is a simple one but it is the kind of thing that no one thought about. Just document it in the lightningd man page. Pick error codes, as long as nothing else gets that error code it would be quite reliable. I think 1 is our general error code, 0 is fine. 1 is something went wrong, anything up to about 125 is probably a decent error exit code for hsm decoding issues. As you say it makes perfect sense because it is a common use case. If you ever want to automate it you need to know.
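Once a dedicated exit code exists, an automated setup could detect the failure roughly as below. The specific exit code value is purely hypothetical, since none had been picked at the time of this call.

```python
# Hypothetical sketch: detecting a failed hsm_secret decryption from the exit
# status of lightningd. HSM_DECRYPT_FAILED is an invented value; no dedicated
# exit code had been chosen yet. On success lightningd keeps running, so in
# practice this would be wrapped differently (e.g. with a supervisor).
import subprocess

HSM_DECRYPT_FAILED = 10  # assumption, not a real lightningd exit code

proc = subprocess.run(
    ["lightningd", "--network=bitcoin", "--encrypted-hsm"],
    capture_output=True, text=True,
)
if proc.returncode == HSM_DECRYPT_FAILED:
    print("wrong or missing hsm_secret password:", proc.stderr.strip())
elif proc.returncode != 0:
    print("lightningd exited with error", proc.returncode)
```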

Minisketch looks as if it is close to merge, if you want to use Minisketch for gossip stuff. You posted an idea before on the mailing list. I have an old branch that pulls in a previous version of Minisketch.

You did actually code it up as well? It was more than just a mailing list post? I didn’t see the branch.

I never pushed the branch, I never got to that point. Just played with it. Minisketch was really nice, the library is very Pieter Wuille, very sweet. Everyone should play with Minisketch just because they can, it is really cool. To some extent it is always secondary, it is just an efficiency improvement. There aren’t many protocol changes required. The whole network doesn’t have to upgrade to start using Minisketch gossip between nodes.

When sending and receiving the remote address I use some struct which I get from the wire. In order to parse it I create the struct in C and say it is packed. I need to say it is packed so there is no padding and nothing is shifted, in order to read it correctly. Is there a proper way to do this? Using some gcc directive to say it is packed is not the correct way of doing this.

No it is not the correct way. You should be writing from-wire and to-wire routines for your struct. That specifies exactly what it should look like on the wire. Reading straight into a struct is generally a bad idea.
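The same principle, illustrated in Python rather than C: serialize and deserialize field by field rather than relying on the compiler’s in-memory struct layout. The field layout here is invented for the example, not the actual message format.

```python
# Illustration of "write explicit to-wire / from-wire routines": a made-up
# message with a type byte, a 4-byte IPv4 address and a 2-byte port.
import struct

def towire_remote_addr(addr_type: int, ipv4: bytes, port: int) -> bytes:
    # Big-endian, field by field: no reliance on struct packing or padding.
    return struct.pack("!B4sH", addr_type, ipv4, port)

def fromwire_remote_addr(wire: bytes):
    addr_type, ipv4, port = struct.unpack("!B4sH", wire)
    return addr_type, ipv4, port

msg = towire_remote_addr(1, bytes([127, 0, 0, 1]), 9735)
assert fromwire_remote_addr(msg) == (1, b"\x7f\x00\x00\x01", 9735)
```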


c-lightning developer call

Date: March 7, 2022

Transcript By: Michael Folkson

Tags: Lightning, C lightning

Category: Meetup

Name: c-lightning developer call

Topic: Various topics

Location: Jitsi online

Video: No video posted online

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Minisketch and gossip spam

I’ve been working on the multi channel connect stuff, trying to not get distracted by Tether talk and other stuff like that.

It was an awesome write up. I enjoyed it.

I wrote it up and then I thought do I really want to post this? I don’t really want to interact with people over this. I just wanted to drop it and run. My thinking was possibly worth exposing it, I don’t know if anyone cares. Nobody should care. Rene kept bugging me and poking the bear.

I’m still working on Minisketch stuff, I think I have the encoding figured out. I have a draft BOLT 7 update documenting what I’m doing there. Rusty shared an issue on GitHub, I was digging into the logs a little bit, I started logging some of the gossip I was receiving. I think the issue stemmed from node announcements not propagating. I was also looking into in general how often we filter spam gossip. I was surprised, I parsed the logs and it looks like over a 24 hour period that number is something like 2,100. I’ve gathered a bit more data since I started doing that, I haven’t seen how variable it is. It looks like there are about 2,100 unique channels that we may be out of date on, gossip-wise, at any point in time simply because they’ve exceeded their allocated number of updates recently. I guess that doesn’t necessarily mean a lot to the traditional gossip mechanism but in terms of integrating Minisketch, we are definitely going to have to tighten up the rules around rejection of too frequent updates.

Are those 2,100 updates or 2,100 separate channels?

2,100 separate channels. It was 6,400 separate updates. I filtered those for redundancy and it was down to 4,000. It was 2,100 unique channels over a 1 day period. We are allocating tokens right now, you get one new update per day. I’m not sure that we’d have to synchronize with the other implementations in terms of updates in general. If we are trying to coordinate a Minisketch that will be something we’ll have to discuss, tighten up the spec.

I just merged a PR that increases that number to 2 a day. Apparently there are LND nodes that are going down for 20 minutes to compact the databases every day causing 2 updates. We now allow 2 updates, that was the simplest fix.

I was a little surprised. We actually propagate the gossip regardless of the frequency. It is just for internal usage, updating our own gossip store. If it looks like valid gossip we’ll still rebroadcast it.

No the gossip store is the mechanism by which you rebroadcast. If it doesn’t go in the gossip store it doesn’t get rebroadcast at all.

That was what I thought but I’ll take another look. It wasn’t clear to me.

If it doesn’t go in the gossip store it won’t get rebroadcast, that’s traditionally true. There are a couple of holes in that but generally that’s the case. There’s no other mechanism to send gossip other than direct queries. If we drop it it won’t get transmitted. We know LND has much looser tolerances if any for stopping gossip spam so the level creeps up on the network in general. Allowing twice as much gossip spam is a thing. It is funny to have people complain about the potential bandwidth waste of onion messages when they are not filtering gossip spam which is wasting everyone’s bandwidth today. Somebody, their ISP changed their IP address and they lost a whole heap of channels because their node announcement didn’t propagate. The latest release is a bandaid, it is more aggressive with node announcements. We rebroadcast them at every startup and we refresh them every 24 hours. New timestamp, even if it is the same information. Part of the reason is that it is really hard to tell, if you are missing node announcements you can’t tell because they are optional. You can see the whole graph but you can’t see the node announcements. We have some heuristics to try to figure out if we are missing some and then go back and ask for more. But obviously across the whole network that’s not working all that well. For the moment we are going to be more aggressive with gossiping node announcements. In the long term things like the Minisketch gossip reconciliation should fix this. You will get a complete set of your matching peers. That’s a longer term thing hence the bandaid for the next release.
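The per-channel update “tokens” mentioned above boil down to a small rate-limit budget per channel: updates beyond the budget are treated as spam and never make it into the gossip store, so they are not rebroadcast. A simplified sketch of that idea follows; the numbers and structure are illustrative, not c-lightning’s actual code.

```python
# Simplified token-bucket sketch of the per-channel update limit discussed
# above: each channel accrues a couple of channel_update tokens per day and an
# update that arrives with no token left is dropped (not stored, not relayed).
import time

TOKENS_PER_DAY = 2   # the limit was just raised from 1 to 2 per day
MAX_TOKENS = 4       # allow a small burst

class UpdateLimiter:
    def __init__(self):
        self.state = {}  # short_channel_id -> (tokens, last_refill_time)

    def allow(self, scid: str, now: float = None) -> bool:
        now = now or time.time()
        tokens, last = self.state.get(scid, (MAX_TOKENS, now))
        tokens = min(MAX_TOKENS, tokens + (now - last) * TOKENS_PER_DAY / 86400)
        if tokens < 1:
            self.state[scid] = (tokens, now)
            return False   # spam: do not add to the gossip store
        self.state[scid] = (tokens - 1, now)
        return True        # accept and rebroadcast
```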

Personal updates

I spent the last week in London. Before that, I mentioned I was going to implement the RPC methods, hooks and subscription interfaces for cln-plugin. That took a bit longer because I wanted to make them async/await which in Rust are really complicated types. That took quite a while to dig into but I am really happy now that we can also call the RPC which is async/await from inside one of these methods and hooks and notification handlers. That completes the scaffolding for cln-plugin and we can finally get it rolling. An additional issue that I had was that some of Rust’s internals trigger Valgrind. I had to figure out how to add suppressions for Valgrind inside of Rust which took me down a weird path. I wanted to make them minimal instead of just copy-pasting whatever Valgrind spat out at the end of the run. That took a bit longer. In London there was Core Dev and then we had Advancing Bitcoin, both excellent venues. We looked a bit into stuff like the RBF pinning attacks. We made some progress there though we will have to discuss this stuff with the wider community because no decisions are taken at Core Dev when there are people who might not have been able to join. There was general support for the idea of separating the denial of service part out of the RBF rules. This is the only reason why Rule 3 is in there. That is hopefully something that we can address eventually. Funnily enough one of the proposals was the staggered broadcast again which we introduced for denial of service in Lightning all of these years ago and we seem to be quite happy with it. Even though we add these tokens for replacements. Other than that I’ve mostly been working on Greenlight stuff, backporting a couple of things namely for the gRPC interface. We now have automatically generated mTLS certificates. If you use gRPC you can use mutual authentication between server and client. That was a nice piece of code, it works quite nicely. If you don’t have the certificates it will generate them, very similar to what LND does at first start. It generates a couple of certificates that you can copy-paste to wherever you want to use them and end up with a secure connection.

I have a drive by bug report. Every so often I get changes to the Rust generated protofiles. Randomly it adds stuff and I’m like “Nothing changed”. I’m not quite sure whether there is a Heisen-bug in the generator or something. Next time it happens I’ll take a look. I git clean and then I make again and it is all happy.

I saw one enum being generated which was new. I’ll look into it.

As I jump back and forth on my patch series sometimes I do a make and I get Rust stuff.

I was very happy to see that enum pop up. I was like “Somebody changed the JSON schema files and my code worked”. But apparently it is a Heisen-enum.

There may be. I am not changing those files and it seemed to jump out every so often. Be aware that there may be a bug coming down when I’ve got some details.

On the RBF stuff you said you take the DoS RBF rule out. Does that just mean rather than having whatever the five rules were, we are splitting them up now so there are denial of service rules and transaction fee maximization rules? Is that what you mean when you say split up? We are thinking about them in completely separate spheres now rather than having a bunch of RBF rules that are there for different purposes.

None of this is final, this is going to be discussed in public and probably somebody will come along and say this is a stupid idea. But the general idea is that currently the RBF rules attempt to cover multiple facets. One being denial of service attacks, one being optimization of block composition for miners. That creates this multidimensional problem where we have one hammer but two different instruments we’d like to hit with it. Either we hit one completely or we do something in the middle or we hit the other completely. By realizing that these are completely different contexts we can now go back and say “Do we have something else that might be usable for the anti DoS mechanism?” There were quite a few proposals, some of them involving our position in the mempool which kind of isn’t optimal because that’s not feasible for light clients. Some of them were saying “Let’s rate limit how much traffic can be generated by an attacker”. Effectively what we want to prevent in an attacker attempting to DoS the network is not the individual connection, we want to prevent an attacker from being able to inject a packet and have that spread in the wider network, giving us this quadratic blowup from a single packet being sent to n squared packets being sent around the network. One solution would be to rate limit what an attacker can inject into the network. We’d still be kind of quadratic but we’d have a very low constant factor in front of it. The other one would be to give it a time delay as to what an attacker can forward. What interests me out of all of this is to destroy the confidence of an attacker that he successfully pinned a transaction. By adding delays, by rate limiting and by delaying how quickly he can infect the entire network with his pinned transaction we might get some of the benign transactions through. One silly proposal that I made was what happens if all the nodes drop 50 percent of their transactions every minute or so. That means you drop away all of these transactions, some of them might have been pinned, so these slots are now free again for the replacement transaction itself. It is a silly proposal but it would achieve the effect of removing the certainty for an attacker about whether or not they pinned a transaction. There are more efficient ways to do that obviously.

RBF doesn’t make sense for light clients anyway, they don’t have a mempool, they can’t validate things so you have to ignore them.

Mobile client for c-lightning

As some of you may have heard I have been working on the first mobile client for c-lightning. I wish I could say that I’ve had more issues but it has been pretty reliable. I’ll give one example. I’ve got BOLT 12 support working so you can tip and it will do a payment. This is just calling pay and it works. The only issue I had was Commando, it seems to block. If I’m doing a long payment it will prevent any other request from happening so the app kind of bugs out. I’ve had to set the retry limit to 30 seconds instead of 60 seconds. Maybe even lower, 15 seconds. Usability wise, I don’t know if you’d want to wait a minute on your phone to do a transaction and then block the connection for 60 seconds. Other than that it has been super stable. Pretty impressed with how well it is working.

I am pretty sure I can fix that, I just have to make the command async.

I noticed there was an async PR for plugins from like 2019. You just need to add an async to the command hook. Another thing that came up with runes is this idea: read only runes are restricted to methods starting with list or get. This is specific to crazy people who want to give me read only access to their node. If we ever introduce a list or get method that is sensitive in future we don’t have a way of revoking runes right now. I have been working on a way to change the master secret on runes, just revoke them all.

We do have a way to revoke them, they have a unique ID so you can go through and revoke them without having to change the whole master secret as well. It is not implemented but you can blacklist them individually. We made a mistake because there are a couple of list commands that are sensitive and now are read only.

Like datastore.

Yeah, list datastore, that’s got all the secrets in it, that would be dumb. We may end up doing something like a whitelist rather than a blacklist.

I was thinking maybe we should encourage people, if you are going to make a read only rune just do it for specific methods. Not a blanket read only.

This is true. Or have some explicit differentiation. We could have a plugin that wraps commands. There are ways. It is good to see runes and commando being used in the wild. It is what it is intended for.
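The suggestion above, restricting a rune to specific methods rather than a blanket “anything starting with list or get”, is essentially an allowlist check. A conceptual sketch follows; this illustrates the policy, not commando’s actual rune restriction syntax.

```python
# Conceptual allowlist check for a "read only" rune restricted to specific
# methods. listdatastore, for example, simply would not be on the list even
# though it starts with "list".
SAFE_METHODS = {"getinfo", "listinvoices", "listfunds", "listpeers"}

def rune_allows(method: str) -> bool:
    return method in SAFE_METHODS

assert rune_allows("listfunds")
assert not rune_allows("listdatastore")   # sensitive despite the "list" prefix
```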

I’m pretty happy with it other than that blocking issue.

Have you filed a bug?

I have an issue open on the plugin repo right now. If you want to try it out on the Testflight go to https://jb55.com/lnlink/. I need more testers.

Personal updates (cont.)

As you mentioned earlier I did the second part of the remote IP address discovery which implements the most basic algorithm I could come up with. It just tries to get the confirmation from two nodes we have a channel with, two different channels. If they both report the same IPv4 or IPv6 address we assume it is a default port and we send node announcements, that works. Rusty did some review, there are some issues with the code. The question I had was do we want a config switch? Or don’t we?

I don’t think so. It is best effort as you pointed out in your PR. Worst case we advertise something that is bogus.

If we had a config switch obviously we wouldn’t create node announcements that wouldn’t be usable, because not all of the people will be able to open the port on their router. Then we just have an additional announced address that is not usable which changes over time. That’s the trade-off there. If we suppress it, we don’t have it, it makes it slightly easier for people to figure out. Maybe it depends on the documentation we’ll add later when we move it out of experimental.

If it was going to be an option it would have to be a negative option. One thing worth checking is if somebody is trying to run a Tor only node they must not advertise this and stuff like that.

That’s a good point.

You might want to suppress it by default if they are advertising anything explicitly. And have it only if there is no other advertisement. That would be fairly safe, it doesn’t make things worse. Assuming they are not forcing through a proxy. If they turn proxy for everything on, that’s the setting for Tor, you should not tell them your IP address. How’s the spec going? It can’t be moved out of experimental until the spec is finalized.

It is already merged.

Then take it off experimental, put it in.

I can do that. We didn’t have a release with it.

Experimental is for stuff that would violate the spec or that we can’t do because of spec concerns. That may change. Once it is in the spec it should not be experimental unless we are really terrified it is going to do something horrible to people. That’s fine.

The DNS is still not merged in the spec. There was another implementation that was working on it so maybe we can get this in the spec soonish.
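The remote address discovery described a few paragraphs up (only announce an address once at least two different peers report the same remote IP for us) is a small confirmation heuristic; a rough, illustrative sketch, not the actual lightningd code:

```python
# Sketch of the "two peers agree" check for remote IP address discovery.
from collections import Counter
from typing import Optional

def confirmed_remote_addr(peer_reports: dict) -> Optional[str]:
    """peer_reports maps peer_id -> the remote address that peer reported for us."""
    counts = Counter(peer_reports.values())
    if not counts:
        return None
    addr, seen = counts.most_common(1)[0]
    return addr if seen >= 2 else None

# Two peers agree, a third disagrees: announce 203.0.113.7 (with the default port).
reports = {"peer_a": "203.0.113.7", "peer_b": "203.0.113.7", "peer_c": "198.51.100.1"}
assert confirmed_remote_addr(reports) == "203.0.113.7"
```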

We had a great meeting in Istanbul before London. We had two LND developers, Carla and Oliver, spoke about BOLT 12 in a PR review club fashion which was great. Spoke about the onion messages, spam protection. Paid for a lot of Lightning beers with my c-lightning node. It is a node behind Tor, it was connected through a VPN to my phone with the new Zeus wallet. It was a release candidate then, now the 0.6 release is out, I can recommend. The Raspiblitz 1.7.2 has been so far a good update, we were focusing on stability improvements. We have a connection menu that exposes some endpoints for connecting Spark wallet, Sparko and the Zeus wallet through different interfaces: the c-lightning REST interface, the Spark wallet interface. We would be interested in a facilitated connection towards this iPhone wallet. I would be very happy to assist. If you give us a spec then we just put it into the menu.

Bookkeeper plugin

I have been working on a plugin that I’m now calling the bookkeeper plugin which will help you with your Lightning bookkeeping. I pushed up a PR late night Friday, thanks Rusty for getting some of the prerequisites in. I am currently working through the little things that CI wants me to do. Hoping to get that done by end of day today. The bookkeeping stuff, there are a couple of commands in it that I’m pretty excited about but I’m hoping people will take some time to look at them and run it on their node, give me some feedback about what other information they want to have. The novel thing with the bookkeeper is the level of data collection that it is doing. There is probably a little more data we could add. For example when you pay an invoice there’s the possibility for us to write down how much of that invoice goes to fees versus to the actual invoice that you are paying and collect data on that. It is not something we are doing currently. The idea is this will give you a really easy way to print out all the events on your node that made you money, lost you money, where you spent your money, where you earned money at a millisat level of resolution. It is a big PR, a lot of the PR is just printing out different ways of viewing the data, there is a lot of data there now. Hopefully other people will run it and say “Wouldn’t it be great if we also had this view?” It is definitely going to make the on disk data usage a little bit higher than it has been. We are now saving a copy of stuff. I don’t really have a good answer of how to fix that. There is definitely a way you can roll up events that have passed a long time in the past, you checkmark things. Events that are 3 years old you probably don’t need the exact data on, that’s future work. It also keeps track of when a channel closes and has been resolved onchain. It is possible to go and roll up all the channels that you’ve already closed and have been resolved onchain, you already have the data for, after however many blocks e.g. 1000 blocks. There are opportunities for making the data more compact as time goes on.
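An illustrative sketch of the “roll up old events” idea mentioned above: events older than some horizon are collapsed into a single summary row per account, keeping only the net msat effect. This is not the bookkeeper plugin’s code, just the shape of the future work being described.

```python
# Roll up bookkeeping events older than a horizon into per-account summaries.
from dataclasses import dataclass

@dataclass
class Event:
    account: str
    credit_msat: int
    debit_msat: int
    timestamp: int

def rollup(events, horizon_ts):
    recent = [e for e in events if e.timestamp >= horizon_ts]
    old = [e for e in events if e.timestamp < horizon_ts]
    summaries = {}
    for e in old:
        c, d = summaries.get(e.account, (0, 0))
        summaries[e.account] = (c + e.credit_msat, d + e.debit_msat)
    rolled = [Event(acct, c, d, horizon_ts) for acct, (c, d) in summaries.items()]
    return rolled + recent
```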

This week I am going to work on some outstanding bugs and cleanups I need to do in the funder plugin, the way it handles PSBTs needs to be fixed up a little bit. I think I was going to look at making our RBF bump for v2 channels easier to use. Z-man has submitted a big request for that. That’s probably going to be what I’m working on this week unless anything else pops up.

With the bookkeeper plugin I wonder if we mark the APIs unstable for one release.

Right now it automatically turns on and starts running. Maybe there’s a set of plugins I could add it to that’s optional to opt into. You probably want the data but maybe we should make it opt into the data.

My general philosophy with these things is more switches is just confusing. As a general rule the only button should be install and it should just work. It is the same with data collection. I generally prefer to over collect data and then have tools to reduce it or heuristics to reduce it later than not have the data and wonder what the hell happened. The data problem tends to be proportional, as long as it is not ridiculous, people running really big nodes who have data problems either have serious storage or have ways of managing it and are prepared to go to that effort. People running small nodes don’t have a data problem. My general emphasis is to store all the data, storage is cheap. At some point if you are running a serious node you’ll need serious hardware or we need to do some maintenance. I think your instinct that you can throw away data later is probably the correct one. I am excited about having it but I wonder if we want to put a bit of a caveat on the APIs to start with and not guarantee backward compatibility. We generally do and that may be painful to add later if you decide to redo it. Ideally you won’t but it is nice to have the asterisk there.

At least for one release.

This is the first release. Give us feedback and try to keep a little bit of wriggle room.

What’s the best way to indicate on the APIs? Add experimental, exp before the actual thing and then delete it later?

That is almost worse. You’ve done quite a lot of API work so I would say it is unlikely you’re going to do a major change. Ideally of course you won’t or it will be backwards compatible, a new annotation or something. That’s the plan but it is not the promise. There’s a difference between those two. Maybe I’m backing out of this commitment. It is hard to indicate this in a way that isn’t going to hurt people. In the release notes I’ll make sure it is clear this is a new thing, please play around with it but it will be used in anger next release. We are still in the data gathering phase maybe. I look forward to reviewing that PR. It is big so I’ve been waiting for you to rebase it before I review it multiple times. It is a lot of code.

vincenzopalazzo: Last week I was also in London. The last two weeks I spent some time on lnprototest, I’m adding a new test for cooperative closing. In the last spec meeting we discussed the spec being underspecified in some way. It is good to have some spec tests from this side. This resulted in me finding a bug in lnprototest. When a test fails we don’t stop Bitcoin Core. After a different failure my machine blew up from all the leftover Bitcoin Core daemons. I fixed that, please take a look in case I’ve made a mistake. This week I will merge it and continue my work on the integration testing. I am also working on splitting the listpeers command. I found a mistake in my understanding. The listpeers command uses the intersection between the gossip map and the peers that we are connected with. I was assuming that listpeers shows the whole network and this is wrong. I need to refactor the logic that I proposed in the pull request.

Reckless plugin manager

I talked in London with Antoine about the Reckless plugin manager. I propose to migrate this Reckless plugin to Rust because now we have Rust inside c-lightning. I will write a mail on the mailing list on the idea. The idea is splitting the Reckless plugin into three different binaries. One is the core library that handles updating the plugins. Also I want Reckless to be a plugin of c-lightning but also a separate binary like Lightning tools. You can instantiate a new plugin, if you want to write a plugin in Python you can give it the language and the name of the plugin. It is a big project but I have some ideas.

I have some ideas on Reckless. It has been on my to do list as well since 2019. I believe that yes, it needs to be a separate binary. I would tend towards Python because it is going to have to deal with filesystem stuff. You can do that in C, you can do it in Rust but you could also just do it in Python. It also needs to do web stuff to obtain plugins and stuff like that. It is a pretty Python-y kind of problem and you have already got to have Python in tree. But I have no problem with a Rust dependency, that works just as well.

vincenzopalazzo: With Rust there is this cargo environment that gives you the possibility to avoid the circular dependency that we have in Python. We have pyln, proto and the BOLTs that are dependent on each other like lnprototest. With Rust we get Cargo and the workspace which takes care of all of that stuff. With Python the end user needs to install all the dependencies and with Reckless the dependencies are different so we can get resolution errors. From my point of view Rust is the safe way to go. Maybe there is a little bit more work from our side but we can avoid the user having trouble with Python dependencies.

Whatever it does it does need to be standalone, it does need to be minimal dependency wise. Ideally you want it shipped with c-lightning, it is installed like anything else. The plugin ecosystem is very rich, I feel like it is badly served because it is so hard to install plugins. It should have a reckless install whatever. And also manage them, activate them, deactivate them, upgrade them in particular is important, stuff like that. It is basically a shell tool, I would have written it in shell 5 years ago. It basically just manipulates the filesystem and runs a few commands to restart things. It doesn’t need to be particularly sophisticated, it does need to do some web stuff and some discovery stuff but that is about it. It is not c-lightning specific really. There are a couple of things it needs to do with c-lightning but it is a separate tool that pokes c-lightning regularly. It doesn’t really matter. The important thing is that it is self contained, it just works. I would like to ship it by default. It becomes standard for people to say “Just reckless install blah”. It does it, it starts the plugin, does everything.

vincenzopalazzo: I think the most important thing is to introduce inside the plugin ecosystem a manifest of the plugin like a npm install. We can have different languages working in different ways. For instance in Java you need to create a bash script. In some other language you need to create other stuff. With a manifest we can have this second layer that you can work with. This is the first thing I will do, start to think about how to make a good manifest. The second thing, how to build a GitHub page with this manifest, query the GitHub page and get the list of plugins, a fake API. All with GitHub and maybe we can make a fake repository system without a server.

That’s not a problem. Thinking about different types of plugins, we already have two types in the plugin repository and we do test them in isolation. We have some code that already does create virtual environments that are populated by Poetry or by pip. Adding more languages is all down to detecting which environments are present. It pretty much boils down to how well we structure the manifest, maybe we can merge them through a CI job and push them onto the web where they can be grabbed from the clients.

I’m happy with languages like Python, Rust, Go. Make it a standard, if you are going to write a plugin it will be done like this. It will be configure, make etc. You can hard code things as much as you want. The other thing that I think is important, versioning is obviously important so you can upgrade. Upgrade maintenance is important. And dependencies between them. We are seeing more stuff needing Commando for example. If you have that, install this. They are the key issues. Just removing the maintenance burden of having to do this stuff manually. Discovery is nice but I don’t think it is critical. Being able to search for plugins, it is much easier for us to produce a web page that highlights plugins and a plugin of the week rather than putting it in the tooling.
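For a sense of the kind of plugin manifest being proposed above, here is a purely hypothetical example; every field name is invented for illustration, and in practice it would likely live as JSON, TOML or YAML alongside the plugin rather than as Python.

```python
# Hypothetical plugin manifest covering the points raised: language, versioning,
# dependencies between plugins, and an entrypoint the manager can start.
MANIFEST = {
    "name": "summary",
    "version": "0.3.1",
    "language": "python",                    # tells the manager how to set it up
    "entrypoint": "summary.py",
    "dependencies": ["pyln-client>=0.10"],   # language-level dependencies
    "requires_plugins": ["commando"],        # other c-lightning plugins needed
    "min_clightning_version": "0.10.2",
}

REQUIRED_FIELDS = {"name", "version", "language", "entrypoint"}

def validate(manifest: dict) -> bool:
    return REQUIRED_FIELDS.issubset(manifest)
```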

Q&A

I wanted to mention one thing about Commando. I noticed there was an experimental web socket. I know Aditya was working on web socket stuff. I was going to try compiling my lnsocket library with Emscripten. It compiles raw socket code to web sockets. How well tested is that? Is there any test code for people using web sockets?

It really works. It is weird but it does work. You can tunnel the whole thing over web sockets. Your problem is going to be in the browser, you may or may not be able to connect to an unencrypted web socket. You may need a proxy or you need a certificate, which is the whole thing we are trying to avoid.

You need a HTTPS connection?

Yeah. This is really annoying. I am probably going to run on bootstrap.bolt12.org, at some point there will be a proxy. You connect to that proxy because that’s got the certificate and then you tell it where you want to go. It means I can snoop your traffic but it is encrypted.

You can’t reverse proxy a Lightning connection because there is no SNI or anything to key off of.

Lightning is a parallel infrastructure so there isn’t any real overlap. Providing proxies is a fairly straightforward thing to do. You can use Lightning to pay for them. There’s a possibility there eventually.

The HTTPS thing is really unfortunate. I am just running my node off my home connection. I don’t have a web server and I was thinking of writing a plugin that lists all the payments to an offer. It would have been cool to do that with just lnsocket and Javascript code. But maybe I’ll just bite the bullet and run a proxy or something.

Once we have a public proxy you can do it. That’s the aim.
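The proxy being described is essentially a TLS websocket endpoint that relays raw bytes to a Lightning node’s TCP port, so browser code can reach it while the Noise handshake stays end-to-end. A very rough sketch follows, using the third-party `websockets` library; the certificate paths, bind address and node address are placeholders.

```python
# Rough sketch of a wss -> TCP relay for browser clients. The proxy only sees
# encrypted Lightning transport bytes; it cannot read the traffic it forwards.
import asyncio, ssl, websockets

NODE_HOST, NODE_PORT = "127.0.0.1", 9735  # placeholder node address

async def relay(ws, path=None):           # path kept for older websockets versions
    reader, writer = await asyncio.open_connection(NODE_HOST, NODE_PORT)

    async def ws_to_tcp():
        async for data in ws:
            writer.write(data)
            await writer.drain()

    async def tcp_to_ws():
        while chunk := await reader.read(4096):
            await ws.send(chunk)

    await asyncio.gather(ws_to_tcp(), tcp_to_ws())

async def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("fullchain.pem", "privkey.pem")  # placeholder cert paths
    async with websockets.serve(relay, "0.0.0.0", 443, ssl=ctx):
        await asyncio.Future()  # run forever

asyncio.run(main())
```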

What was the point about not having SNI?

This is something Warren Togami was talking about, in terms of hiding your IP. Imagine if you had a privacy proxy where you could have an IP that is in front of your actual node. I was trying to think of a way to do that. I think the only way would be if you had a SNI type system where you could reverse proxy the Lightning connection. I don’t think it is possible.

From a browser you should be able to.

This is separate from web browser.

We are actually trying to build a SNI based gRPC proxy in front of Greenlight.

It would work for TLS but I was trying to connect directly to the Lightning Network.

Maybe ngrok works? You can do pure TCP connections over ngrok, that might be an option.

If you have one IP that could potentially reverse proxy to multiple Lightning nodes, you would identify SNI based on the pubkey or something. That is what I was thinking, I didn’t think there was a way to do it in the protocol.

\ No newline at end of file +Meetup

Name: c-lightning developer call

Topic: Various topics

Location: Jitsi online

Video: No video posted online

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Minisketch and gossip spam

I’ve been working on the multi channel connect stuff, trying to not get distracted by Tether talk and other stuff like that.

It was an awesome write up. I enjoyed it.

I wrote it up and then I thought do I really want to post this? I don’t really want to interact with people over this. I just wanted to drop it and run. My thinking was possibly worth exposing it, I don’t know if anyone cares. Nobody should care. Rene kept bugging me and poking the bear.

I’m still working on Minisketch stuff, I think I have the encoding figured out. I have a draft BOLT 7 update documenting what I’m doing there. Rusty shared an issue on GitHub, I was digging into the logs a little bit, I started logging some of the gossip I was receiving. I think the issue stemmed from node announcements not propagating. I was also looking into in general how often do we filter spam gossip. I was surprised, I parsed the logs and it looks like over a 24 hour period that number is something like 2,100. I’ve gathered a bit more data since I started doing that, I haven’t seen how variable it is. It looks like there is about 2,100 unique channels that we maybe out of date on gossip at any point in time simply because they’ve exceeded their allocated number of updates recently. I guess that doesn’t necessarily mean a lot to the traditional gossip mechanism but in terms of integrating Minisketch, definitely going to have to tighten up the rules around rejection of too frequent updates.

Are those 2,100 updates or 2,100 separate channels?

2,100 separate channels. It was 6,400 separate updates. I filtered those for redundancy and it was down to 4,000. It was 2,100 unique channels over a 1 day period. We are allocating tokens right now, you get one new update per day. I’m not sure that we’d have to synchronize with the other implementations in terms of updates in general. If we are trying to coordinate a Minisketch that will be something we’ll have to discuss, tighten up the spec.

I just merged a PR that increases that number to 2 a day. Apparently there are LND nodes that are going down for 20 minutes to compact the databases every day causing 2 updates. We now allow 2 updates, that was the simplest fix.

I was a little surprised. We actually propagate the gossip regardless of the frequency. It is just for internal usage, updating our own gossip store. It looks like a valid gossip, we’ll still rebroadcast.

No the gossip store is the mechanism by which you rebroadcast. If it doesn’t go in the gossip store it doesn’t get rebroadcast at all.

That was what I thought but I’ll take another look. It wasn’t clear to me.

If it doesn’t go in the gossip store it won’t get rebroadcast, that’s traditionally true. There are a couple of holes in that but generally that’s the case. There’s no other mechanism to send gossip other than direct queries. If we drop it it won’t get transmitted. We know LND has much looser tolerances if any for stopping gossip spam so the level creeps up on the network in general. Allowing twice as much gossip spam is a thing. It is funny to have people complain about the potential bandwidth waste of onion messages when they are not filtering gossip spam which is wasting everyone’s bandwidth today. Somebody, their ISP changed their IP address and they lost a whole heap of channels because their node announcement didn’t propagate. The latest release is a bandaid, it is more aggressive with node announcements. We rebroadcast them at every startup and we refresh them every 24 hours. New timestamp, even if it is the same information. Part of the reason is that it is really hard to tell, if you are missing node announcements you can’t tell because they are optional. You can see the whole graph but you can’t see the node announcements. We have some heuristics to try to figure out if we are missing some and then go back and ask for more. But obviously across the whole network that’s not working all that well. For the moment we are going to be more aggressive with gossiping node announcements. In the long term things like the Minisketch gossip reconciliation should fix this. You will get a complete set of your matching peers. That’s a longer term thing hence the bandaid for the next release.

Personal updates

I spent the last week in London. Before that I mentioned to implement the RPC methods, hooks and subscription interfaces for cln-plugin. That took a bit longer because I wanted to make them async/await which in Rust are really complicated types. That took quite a while to dig into but I am really happy now that we can also call the RPC which is async/await from inside one of these methods and hooks and notification handlers. That completes the scaffolding for cln-plugin and we can finally get it rolling. An additional issue that I had was that some of Rust’s internals triggers Valgrind. I had to figure out how to add suppressions for Valgrind inside of Rust which took me down a weird path. I wanted to make them minimal instead of just copy, pasting whatever Valgrind spat out at the end of the run. That took a bit longer. In London there was Core Dev and then we had Advancing Bitcoin, both excellent venues. We looked a bit into stuff like the RBF pinning attacks. We made some progress there though we will have to discuss this stuff with the wider community because there are no decisions being taken at Core Dev when there are people that might not join. There was general support for the idea of separating the denial of service part out of the RBF rules. This is the only reason why Rule 3 is in there. That is hopefully something that we can address eventually. Funnily enough one of the proposals was the staggered broadcast again which we introduced for denial of service in Lightning all of these years ago and we seem to be quite happy with it. Even though we add these tokens for replacements. Other than that I’ve mostly been working on Greenlight stuff, backporting a couple of things namely for the gRPC interface. We now have automatically generated mTLS certificates. If you use gRPC you can use mutual authentication between server and client. That was a nice piece of code, it works quite nicely. If you don’t have the certificates it will generate them, very similar to what LND does at first start. It generates a couple of certificates that you can copy, paste to wherever you want to use them and end up with a secure connection.

I have a drive by bug report. Every so often I get changes to the Rust generated protofiles. Randomly it adds stuff and I’m like “Nothing changed”. I’m not quite sure whether there is a Heisen-bug in the generator or something. Next time it happens I’ll take a look. I git clean and then I make again and it is all happy.

I saw one enum being generated which was new. I’ll look into it.

As I jump back and forth on my patch series sometimes I do a make and I get Rust stuff.

I was very happy to see that enum pop up. I was like “Somebody changed the JSON schema files and my code worked”. But apparently it is a Heisen-enum.

There may be. I am not changing those files and it seemed to jump out every so often. Be aware that there may be a bug coming down when I’ve got some details.

On the RBF stuff you said you take the DoS RBF rule out. Does that just mean rather than having whatever the five rules were we are splitting them up now so there’s denial of service rules, maximization of transaction fee rules. Is that what you mean when you say split up? We are thinking about them in completely separate spheres now rather than having a bunch of RBF rules that are there for different purposes.

None of this is final, this is going to be discussed in public and probably somebody will come along and say this is a stupid idea. But the general idea is that currently RBF rules attempt to cover multiple facets. One being denial of service attacks, one being optimization for block composition for miners. That creates this multidimensional problem where we have one hammer but two different instruments we’d like to hit with it. Either we hit one completely or we do something in the middle or we hit the other completely. By realizing that these are completely different contexts we can now go back and say “Do we have something else that might be usable for the anti DoS mechanism. There were quite a few proposals, some of them involving our position in the mempool which kind of isn’t optimal because that’s not feasible for light clients. Some of them were saying “Let’s rate limit how much traffic we can generate by an attacker”. Effectively what we want to prevent in an attacker attempting to DoS the network is not the individual connection but we want to prevent an attacker from being able to inject a packet and have that spread in the wider network giving us this quadratic blowup from a single packet being sent to n squared packets being sent around the network. One solution would be to rate limit what an attacker can inject into the network. We’d still be kind of quadratic but we’d have a very low constant factor in front of it. The other one would be to give it a time delay as to what an attacker can forward. What interests me out of all of this is to destroy the confidence of an attacker that he successfully pinned a transaction. By adding delays, by rate limiting and by delaying how quickly he can infect the entire network with his pinned transaction we might get some of the benign transactions. One silly proposal that I made was what happens if all the nodes drop 50 percent of their transactions every minute or so. That means you drop away all of these transactions, the sum of them might have been pinned so these slots are now free again for the replacement transaction itself. It is a silly proposal but it would achieve the effect of removing the certainty for an attacker, whether they pinned or not a transaction. There are more efficient ways to do that obviously.

Light clients and RBF doesn’t make sense anyway, they don’t have a mempool, they can’t validate things so you have to ignore them.

Mobile client for c-lightning

As some of you may have heard I have been working on the first mobile client for c-lightning. I wish I could say that I’ve had more issues but it has been pretty reliable. I’ll give one example. I’ve got BOLT 12 support working so you can tip and it will do a payment. This is just calling pay and it works. The only issue I had was Commando, it seems to block. If I’m doing a long payment it will prevent any other request from happening so the app kind of bugs out. I’ve had to set the retry limit to 30 seconds instead of 60 seconds. Maybe even lower, 15 seconds. Usability wise, I don’t know if you’d want to wait a minute on your phone to do a transaction and then block the connection for 60 seconds. Other than that it has been super stable. Pretty impressed with how well it is working.

I am pretty sure I can fix that, I just have to make the command async.

I noticed there was an async PR for plugins from like 2019. You just need to add an async to the command hook. Another thing that came up with runes is this idea, we are restricted on if it starts with list or get, the read only runes. This is specific to crazy people who want to give me read only access to their node. If we ever introduce a list or get method that is sensitive in future we don’t have a way of revoking runes right now. I have been working on a way to change the master secret on runes, just revoke them all.

We do have a way to revoke them, they have a unique ID so you can go through and revoke them without having to change the whole master secret as well. It is not implemented but you can blacklist them individually. We made a mistake because there are a couple of list commands that are sensitive and now are read only.

Like datastore.

Yeah, list datastore, that’s got all the secrets in it, that would be dumb. We may end up doing something like a whitelist rather than a blacklist.

I was thinking maybe we should encourage people, if you are going to make a read only rune just do it for specific methods. Not a blanket read only.

This is true. Or have some explicit differentiation. We could have a plugin that wraps commands. There are ways. It is good to see runes and commando being used in the wild. It is what it is intended for.

I’m pretty happy with it other than that blocking issue.

Have you filed a bug?

I have an issue open on the plugin repo right now. If you want to try it out on the Testflight go to https://jb55.com/lnlink/. I need more testers.

Personal updates (cont.)

As you mentioned earlier I did the second part of the remote IP address discovery which implements the most basic algorithm I could come up with. It just tries to get the confirmation from two nodes we have a channel with, two different channels. If they both report the same IPv4 or IPv6 address we assume it is a default port and we send node announcements, that works. Rusty did some review, there are some issues with the code. The question I had was do we want a config switch? Or don’t we?

I don’t think so. It is best effort as you pointed out in your PR. Worst case we advertise something that is bogus.

If we had a config switch obviously we wouldn’t create node announcements that wouldn’t be usable because not all of the people will be able to open their router board. Then we just have an additional announced address that is not usable which changes over time. That’t the trade-off there. If we suppress it, we don’t have it, it makes it slightly easier for people to figure out. Maybe it depends on the documentation we’ll add later when we move the experimental.

If it was going to be an option it would have to be a negative option. One thing worth checking is if somebody is trying to run a Tor only node they must not advertise this and stuff like that.

That’s a good point.

You might want to suppress it by default if they are advertising anything explicitly. And have it only if there is no other advertisement. That would be fairly safe, it doesn’t make things worse. Assuming they are not forcing through a proxy. If they turn proxy for everything on, that’s the setting for Tor, you should not tell them your IP address. How’s the spec going? It can’t be moved out of experimental until the spec is finalized.

It is already merged.

Then take it off experimental, put it in.

I can do that. We didn’t have a release with it.

Experimental for stuff that would violate spec or that we can’t do because of spec concerns. That may change. Once it is in the spec it should not be experimental unless we are really terrified it is going to do something horrible to people. That’s fine.

The DNS is still not merged in the spec. There was another implementation that was working on it so maybe we can get this in the spec soonish.

We had a great meeting in Istanbul before London. We had two LND developers, Carla and Oliver, spoke about BOLT 12 in a PR review club fashion which was great. Spoke about the onion messages, spam protection. Paid for a lot of Lightning beers with my c-lightning node. It is a node behind Tor, it was connected through a VPN to my phone with the new Zeus wallet. It was a release candidate then, now the 0.6 release is out, I can recommend. The Raspiblitz 1.7.2 has been so far a good update, we were focusing on stability improvements. We have a connection menu that exposes some endpoints for connecting Spark wallet, Sparko and the Zeus wallet through different interfaces: the c-lightning REST interface, the Spark wallet interface. We would be interested in a facilitated connection towards this iPhone wallet. I would be very happy to assist. If you give us a spec then we just put it into the menu.

Bookkeeper plugin

I have been working on a plugin that I’m now calling the bookkeeper plugin which will help you with your Lightning bookkeeping. I pushed up a PR late night Friday, thanks Rusty for getting some of the prerequisites in. I am currently working through the little things that CI wants me to do. Hoping to get that done by end of day today. The bookkeeping stuff, there are a couple of commands in it that I’m pretty excited about but I’m hoping people will take some time to look at them and run on their node, give me some feedback about what other information they want to have. The novel thing with the bookkeeper is the level of data collection that it is doing. There is probably a little more data we could add. For example when you pay an invoice there’s the possibility for us to write down how much of that invoice goes to fees versus to the actual invoice that you are paying and collect data on that. It is not something we are doing currently. The idea is this will give you a really easy way to print out all the events on your node that made you money, lost you money, where you spent your money, where you earned money at a millisat level of resolution. It is a big PR, a lot of the PR is just printing out different ways of viewing the data, there is a lot of data there now. Hopefully other people will run it and say “Wouldn’t it be great if we also had this view?” It is definitely going to make the on disk data usage a little bit higher than it has been. We are now saving a copy of stuff. I don’t really have a good answer of how to fix that. There is definitely a way you can rollup events that have passed a long time in the past, you checkmark things. Events that are 3 years old you probably don’t need the exact data on, that’s future work. It also keeps track of when a channel closes and has been resolved onchain. It is possible to go and rollup all the channels that you’ve already closed and have been resolved onchain, you already have the data for, after however many blocks e.g. 1000 blocks. There are opportunities for making the data less compact as time goes on.

This week I am going to work on some outstanding bugs and cleanups I need to do in the funder plugin, the way it handles PSBTs needs to be fixed up a little bit. I think I was going to look at making our RBF bump for v2 channels easier to use. Z-man has submitted a big request for that. That’s probably going to be what I’m working on this week unless anything else pops up.

With the bookkeeper plugin I wonder if we mark the APIs unstable for one release.

Right now it automatically turns on and starts running. Maybe there’s a set of plugins I could add it to that’s optional to opt into. You probably want the data but maybe we should make it opt into the data.

My general philosophy with these things is more switches is just confusing. As a general rule the only button should be install and it should just work. It is the same with data collection. I generally prefer to over collect data and then have tools to reduce it or heuristics to reduce it later than not have the data and wonder what the hell happened. The data problem tends to be proportional, as long as it is not ridiculous, people running really big nodes who have data problems either have serious storage or have ways of managing it and prepared to go to that effort. People running small nodes don’t have a data problem. My general emphasis is to store all the data, storage is cheap. At some point if you are running a serious node you’ll need serious hardware or we need to do some maintenance. I think your instinct that you can throw away data later is probably the correct one. I am excited about having it but I wonder if we want to go through a bit of a caveat on the APIs to start with and not guarantee backward compatibility. We generally do and that may be painful to add later if you decide to redo it. Ideally you won’t but it is nice to have the asterisk there.

At least for one release.

This is the first release. Give us feedback and try to keep a little bit of wriggle room.

What’s the best way to indicate on the APIs? Add experimental, exp before the actual thing and then delete it later?

That is almost worse. You’ve done quite a lot of API work so I would say it is unlikely you’re going to do a major change. Ideally of course you won’t or it will be backwards compatible, a new annotation or something. That’s the plan but it is not the promise. There’s a difference between those two. Maybe I’m backing out of this commitment. It is hard to indicate this in a way that isn’t going to hurt people. In the release notes I’ll make sure it is clear this is a new thing, please play around with it but it will be in anger next release. We are still in the data gathering phase maybe. I look forward to reviewing that PR. It is big so I’ve been waiting for you to rebase it before I review it multiple times. It is a lot of code.

vincenzopalazzo: Last week I was also in London. The last two weeks I spent some time on lnprototest; I’m adding a new test for cooperative closing. In the last spec meeting we discussed the spec being underspecified in some way, so it is good to have some spec tests from this side. This resulted in me finding a bug in lnprototest: when a test fails we don’t stop Bitcoin Core, and after a few failures my machine blew up from the leftover Bitcoin Core daemons. I fixed that, please take a look in case I’ve made a mistake. This week I will merge it and continue my work on the integration testing. I am also working on splitting the listpeers command. I found a mistake in my understanding: the listpeers command uses the intersection between the gossip map and the peers that we are connected with. I was assuming that listpeers shows the whole network and this is wrong, so I need to refactor the logic that I proposed in the pull request.

Reckless plugin manager

I talked in London with Antoine about the Reckless plugin manager. I propose to migrate the Reckless plugin to Rust because now we have Rust inside c-lightning. I will write a mail to the mailing list about the idea. The idea is to split Reckless into three different binaries. One is the core library that handles updating plugins. I also want Reckless to be a plugin of c-lightning but also a separate binary, like the Lightning tools. You can instantiate a new plugin: if you want to write a plugin in Python you can pass it the language and the name of the plugin. It is a big project but I have some ideas.

I have some ideas on Reckless. It has been on my to do list as well since 2019. I believe that yes, it needs to be a separate binary. I would tend towards Python because it is going to have to deal with filesystem stuff. You can do that in C, you can do it in Rust but you could also just do it in Python. It also needs to do web stuff to obtain plugins and stuff like that. It is a pretty Python-y kind of problem and you have already got to have Python in tree. But I have no problem with a Rust dependency, that works just as well.

vincenzopalazzo: With Rust there is the Cargo environment that gives you the possibility to avoid the circular dependencies that we have in Python. We have pyln, proto and the BOLTs that are dependent on each other, like lnprototest. With Rust we get Cargo and the workspace, which takes care of all of that. With Python the end user needs to install all the dependencies, and with Reckless the dependencies are different so we can get resolution errors. From my point of view Rust is the safe way to go. Maybe there is a little bit more work on our side but we avoid the user having trouble with Python dependencies.

Whatever it does it does need to be standalone, it does need to be minimal dependency wise. Ideally you want it shipped with c-lightning, installed like anything else. The plugin ecosystem is very rich but I feel like it is badly served because it is so hard to install plugins. It should just be a reckless install whatever. And also manage them: activate them, deactivate them, upgrade them (upgrading in particular is important), stuff like that. It is basically a shell tool, I would have written it in shell 5 years ago. It basically just manipulates the filesystem and runs a few commands to restart things. It doesn’t need to be particularly sophisticated, it does need to do some web stuff and some discovery stuff but that is about it. It is not c-lightning specific really. There are a couple of things it needs to do with c-lightning but it is a separate tool that pokes c-lightning regularly. It doesn’t really matter. The important thing is that it is self contained and it just works. I would like to ship it by default. It becomes standard for people to say “Just reckless install blah”. It does it, it starts the plugin, does everything.

vincenzopalazzo: I think the most important thing is to introduce a manifest for each plugin inside the plugin ecosystem, like npm install. We can have different languages working in different ways. For instance in Java you need to create a bash script; in some other language you need to create other stuff. With a manifest we can have this second layer that you can work with. This is the first thing I will do, start to think about how to make a good manifest. The second thing is how to build a GitHub page with this manifest, so you can query the GitHub page and get the list of plugins, a fake API. All with GitHub, and maybe we can make a fake repository system without a server.
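
To make the manifest idea concrete, here is a rough sketch of what a per-plugin manifest and the code that reads it could look like; every field name here is invented for illustration and is not an agreed format.

```python
# Hypothetical plugin manifest: all field names are invented for illustration.
import json

EXAMPLE_MANIFEST = """
{
  "name": "summary",
  "version": "0.2.1",
  "language": "python",
  "entrypoint": "summary.py",
  "dependencies": ["pyln-client>=0.10"],
  "requires_plugins": ["commando"]
}
"""

manifest = json.loads(EXAMPLE_MANIFEST)
print(f"installing {manifest['name']} {manifest['version']} ({manifest['language']})")
# A real manager would resolve the language-specific dependencies (pip, cargo, ...),
# place the entrypoint where lightningd can find it, and register the plugin.
```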

That’s not a problem. Thinking about different types of plugins, we already have two types in the plugin repository and we do test them in isolation. We have some code that already creates virtual environments that are populated by Poetry or by pip. Adding more languages is all down to detecting which environments are present. It pretty much boils down to how well we structure the manifest; maybe we can merge them through a CI job and push them onto the web where they can be grabbed by the clients.

I’m happy with languages like Python, Rust, Go. Make it a standard: if you are going to write a plugin it will be done like this. It will be configure, make etc. You can hard code things as much as you want. The other thing that I think is important, versioning is obviously important so you can upgrade. Upgrade maintenance is important. And dependencies between them. We are seeing more stuff needing Commando for example. If you have that, install this. Those are the key issues. Just removing the maintenance burden of having to do this stuff manually. Discovery is nice but I don’t think it is critical. Being able to search for plugins: it is much easier for us to produce a web page that highlights plugins and a plugin of the week than to put that in the tooling.

Q&A

I wanted to mention one thing about Commando. I noticed there was an experimental web socket. I know Aditya was working on web socket stuff. I was going to try compiling my lnsocket library with Emscripten. It compiles raw socket code to web sockets. How well tested is that? Is there any test code for people using web sockets?

It really works. It is weird but it does work. You can tunnel the whole thing over web sockets. Your problem is going to be in the browser: you may or may not be able to connect to an unencrypted web socket. You may need a proxy, or you need a certificate, which is the whole thing you are trying to avoid.

You need a HTTPS connection?

Yeah. This is really annoying. I am probably going to run one on bootstrap.bolt12.org, at some point there will be a proxy there. You connect to that proxy because that’s got the certificate and then you tell it where you want to go. It means I can snoop your traffic but it is encrypted.

You can’t reverse proxy a Lightning connection because there is no SNI or anything to key off of.

Lightning is a parallel infrastructure so there isn’t any real overlap. Providing proxies is a fairly straightforward thing to do. You can use Lightning to pay for them. There’s a possibility there eventually.

The HTTPS thing is really unfortunate. I am just running my node off my home connection. I don’t have a web server and I was thinking of writing a plugin that lists all the payments to an offer. It would have been cool to do that with just lnsocket and Javascript code. But maybe I’ll just bite the bullet and run a proxy or something.

Once we have a public proxy you can do it. That’s the aim.

What was the point about not having SNI?

This is something Warren Togami was talking about, in terms of hiding your IP. Imagine if you had a privacy proxy where you could have an IP that is in front of your actual node. I was trying to think of a way to do that. I think the only way would be if you had a SNI type system where you could reverse proxy the Lightning connection. I don’t think it is possible.

From a browser you should be able to.

This is separate from web browser.

We are actually trying to build a SNI based gRPC proxy in front of Greenlight.

It would work for TLS but I was trying to connect directly to the Lightning Network.

Maybe ngrok works? You can do pure TCP connections over ngrok, that might be an option.

If you have one IP that could potentially reverse proxy to multiple Lightning nodes, you would identify SNI based on the pubkey or something. That is what I was thinking, I didn’t think there was a way to do it in the protocol.

\ No newline at end of file diff --git a/c-lightning/index.xml b/c-lightning/index.xml index 423686da8c..9fbd9f9eb9 100644 --- a/c-lightning/index.xml +++ b/c-lightning/index.xml @@ -44,7 +44,7 @@ Topic: Various topics Location: Jitsi online Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://lists.
c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics +Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://gnusha.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics Location: Jitsi online Video: No video posted online Agenda: https://hackmd.io/@cdecker/Sy-9vZIQt diff --git a/categories/conference/index.xml b/categories/conference/index.xml index 1da6b9c23f..d33e1d2853 100644 --- a/categories/conference/index.xml +++ b/categories/conference/index.xml @@ -21,7 +21,7 @@ Introduction Elizabeth Crites (EC): My name is Elizabeth Crites, I’m a postdoc Chelsea Komlo (CK): I’m Chelsea Komlo, I’m at the University of Waterloo and I’m also a Principal Researcher at the Zcash Foundation. Today we will be giving you research updates on FROST. What is FROST? (Chelsea Komlo) You might not know what FROST is and that’s totally ok.Lightning Panelhttps://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2022/lightning-panel/Tue, 05 Jul 2022 00:00:00 +0000https://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2022/lightning-panel/The discussion primarily revolved around the Lightning Network, a scaling solution for Bitcoin designed to enable faster, decentralized transactions. Rene Pickhardt and Lisa Neigut shared their insights, highlighting Lightning&rsquo;s potential as a peer-to-peer electronic cash system and a future payment infrastructure. They emphasized its efficiency for frequent transactions between trusted parties but noted challenges in its current infrastructure, such as the need for continuous online operation and the risk of losing funds if a node is compromised. The panelists discussed the scalability of the network, indicating that while millions could use it self-sovereignly, larger-scale adoption would likely involve centralized service providers. The conversation also touched on the impact of Taproot on privacy and channel efficiency, and the technical intricacies of maintaining state and preventing fraud within the network.Tradeoffs in Permissionless Systemshttps://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2022/tradeoffs-in-permissionless-systems/Tue, 05 Jul 2022 00:00:00 +0000https://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2022/tradeoffs-in-permissionless-systems/Introduction Hello. I wanted to make a talk about what I work on because I consider it the, well, the area of code that I work in, because I considered it to be one of the most fascinating and definitely the most underrated part of Bitcoin that no one ever talks about. 
And the reason is I kind of see it as where one of the most important ideological goals of Bitcoin translates into technical challenges, which also happens to be very, very interesting.Minisketch and Lightning gossiphttps://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Tue, 07 Jun 2022 00:00:00 +0000https://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Location: Bitcoin++ Slides: https://endothermic.dev/presentations/magical-minisketch -Rusty Russell on using Minisketch for Lightning gossip: https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html +Rusty Russell on using Minisketch for Lightning gossip: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html Minisketch library: https://github.com/sipa/minisketch Bitcoin Core PR review club on Minisketch (3 sessions): https://bitcoincore.reviews/minisketch-26 @@ -98,7 +98,7 @@ Intro Hi everyone. Some of you were at my Bitcoin dev presentation about Miniscr Descriptors It is very nice having this scheduled immediately after Andrew Chow’s talk about output descriptors because there is a good transition here.Signet Integrationhttps://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Thu, 06 Feb 2020 00:00:00 +0000https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Slides: https://www.dropbox.com/s/6fqwhx7ugr3ppsg/Signet%20Integration%20V2.pdf BIP 325: https://github.com/bitcoin/bips/blob/master/bip-0325.mediawiki Signet on Bitcoin Wiki: https://en.bitcoin.it/wiki/Signet -Bitcoin dev mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Bitcoin dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html Bitcoin Core PR 16411 (closed): https://github.com/bitcoin/bitcoin/pull/16411 Bitcoin Core PR 18267 (open): https://github.com/bitcoin/bitcoin/pull/18267 Intro I am going to talk about Signet. Do you guys know what Signet is? A few people know. I will explain it briefly. I have an elevator pitch, I have three actually depending on the height of the elevator. Basically Signet is testnet except all the broken parts are removed.Replacing Payment Hashes with Payment Pointshttps://btctranscripts.com/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points/Sun, 20 Oct 2019 00:00:00 +0000https://btctranscripts.com/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points/Slides: https://docs.google.com/presentation/d/15l4h2_zEY4zXC6n1NqsImcjgA0fovl_lkgkKu1O3QT0/ @@ -301,7 +301,7 @@ Introductions Hi everybody. This is a talk on how much privacy is enough, evalua https://twitter.com/kanzure/status/1048446183004233731 Introduction I am going to be talking about fraud proofs. It allows lite clients to have a leve lof security of almost the level of a full node. Before I describe fraud proofs, how about we talk about motivations. Motivations There&rsquo;s a large tradeoff between blockchain decentralization and how much on-chain throughput you can get. 
The more transactions you have on the chain, the more resources you need to validate the chain.Instantiating (Scriptless) 2P-ECDSA: Fungible 2-of-2 MultiSigs for Today's Bitcoinhttps://btctranscripts.com/scalingbitcoin/tokyo-2018/scriptless-ecdsa/Sat, 06 Oct 2018 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/tokyo-2018/scriptless-ecdsa/https://twitter.com/kanzure/status/1048483254087573504 -maybe https://eprint.iacr.org/2018/472.pdf and https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html +maybe https://eprint.iacr.org/2018/472.pdf and https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html Introduction Alright. Thank you very much. Thank you Pedro, that was a great segue into what I&rsquo;m talking about. He has been doing work on formalizing multi-hop locks. I want to also talk about what changes might be necessary to deploy this on the lightning network. History For what it&rsquo;s worth, these dates are rough. Andrew Poelstra started working on this and released something in 2016 for a Schnorr-based scriptless script model.Multi-Hop Locks for Secure, Privacy-Preserving and Interoperable Payment-Channel Networkshttps://btctranscripts.com/scalingbitcoin/tokyo-2018/multi-hop-locks/Sat, 06 Oct 2018 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/tokyo-2018/multi-hop-locks/Giulio Malavolta (Friedrich-Alexander-University Erlangen-Nuernberg), Pedro Moreno-Sanchez (Purdue University), Clara Schneidewind (Vienna University of Technology), Aniket Kate (Purdue University) and Matteo Maffei (Vienna University of Technology) https://eprint.iacr.org/2018/472.pdf @@ -935,7 +935,7 @@ Gavin Andresen, MIT Digital Currency Initiative Vitalik Buterin, Ethereum Foundation Eric Lombrozo, Ciphrex and Bitcoin Core Neha Narula, MIT Media Lab Digital Currency Initiative -Please silence your cell phone during this session. Thank you. Please silence your cell phones during this session. Thank you. Ladies and gentlemen, please take your seats. The session is about to begin.Redesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html +Please silence your cell phone during this session. Thank you. Please silence your cell phones during this session. Thank you. Ladies and gentlemen, please take your seats. The session is about to begin.Redesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html https://www.reddit.com/r/Bitcoin/comments/72qi2r/redesigning_bitcoins_fee_market_a_new_paper_by/ paper: https://arxiv.org/abs/1709.08881 He will be exploring alternative auction markets. diff --git a/categories/core-dev-tech/index.xml b/categories/core-dev-tech/index.xml index 6e076a241b..e69bfc432a 100644 --- a/categories/core-dev-tech/index.xml +++ b/categories/core-dev-tech/index.xml @@ -92,7 +92,7 @@ We&rsquo;re going to talk a little bit about bip324. This is a BIP that has &lt;bitcoin-otc.com&gt; continues to be the longest operating PGP web-of-trust using public key infrastructure. 
Rumplepay might be able to bootstrap a web-of-trust over time. Stealth addresses and silent payments Here&rsquo;s something controversial. Say you keep an in-memory map of all addresses that have already been used.AssumeUTXOhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/https://twitter.com/kanzure/status/1137008648620838912 Why assumeutxo assumeutxo is a spiritual continuation of assumevalid. Why do we want to do this in the first place? At the moment, it takes hours and days to do initial block download. Various projects in the community have been implementing meassures to speed this up. Casa I think bundles datadir with their nodes. Other projects like btcpay have various ways of bundling this up and signing things with gpg keys and these solutions are not quite half-baked but they are probably not desirable either.Blind statechains: UTXO transfer with a blind signing serverhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html overview: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 statechains paper: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf previous transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ @@ -102,18 +102,18 @@ https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-Design-Philosophy &ldquo;Elligator Squared: Uniform Points on Elliptic Curves of Prime Order as Uniform Random Strings&rdquo; https://eprint.iacr.org/2014/043 Previous talks https://btctranscripts.com/scalingbitcoin/milan-2016/bip151-peer-encryption/ https://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/ -Introduction This proposal has been in progress for years. Many ideas from sipa and gmaxwell went into bip151. Years ago I decided to try to move this forward. There is bip151 that again most of the ideas are not from myself but come from sipa and gmaxwell. The original proposal was withdrawn because we figured out ways to do it better.Signethttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Introduction This proposal has been in progress for years. Many ideas from sipa and gmaxwell went into bip151. Years ago I decided to try to move this forward. There is bip151 that again most of the ideas are not from myself but come from sipa and gmaxwell. 
The original proposal was withdrawn because we figured out ways to do it better.Signethttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html https://twitter.com/kanzure/status/1136980462524608512 Introduction I am going to talk a little bit about signet. Does anyone not know what signet is? The idea is to have a signature of the block or the previous block. The idea is that testnet is horribly broken for testing things, especially testing things for long-term. You have large reorgs on testnet. What about testnet with a less broken difficulty adjustment? Testnet is for miner testing really.General discussion on SIGHASH_NOINPUT, OP_CHECKSIGFROMSTACK, and OP_SECURETHEBAGhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/SIGHASH_NOINPUT, ANYPREVOUT, OP_CHECKSIGFROMSTACK, OP_CHECKOUTPUTSHASHVERIFY, and OP_SECURETHEBAG https://twitter.com/kanzure/status/1136636856093876225 There&rsquo;s apparently some political messaging around OP_SECURETHEBAG and &ldquo;secure the bag&rdquo; might be an Andrew Yang thing. -SIGHASH_NOINPUT A bunch of us are familiar with NOINPUT. Does anyone need an explainer? What&rsquo;s the difference from the original NOINPUT and the new one? NOINPUT is kind of scary to at least some people. If we just do NOINPUT, does that start causing problems in bitcoin?Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +SIGHASH_NOINPUT A bunch of us are familiar with NOINPUT. Does anyone need an explainer? What&rsquo;s the difference from the original NOINPUT and the new one? NOINPUT is kind of scary to at least some people. If we just do NOINPUT, does that start causing problems in bitcoin?Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introduction There&rsquo;s not much new to talk about. Unclear about CODESEPARATOR. You want to make it a consensus rule that transactions can&rsquo;t be larger than 100 kb. No reactions to that? Alright. Fine, we&rsquo;re doing it. Let&rsquo;s do it. Does everyone know what this proposal is? Validation time for any block&ndash; we were lazy about fixing this. Segwit was a first step to fixing this, by giving people a way to do this in a more efficient way.Maintainers view of the Bitcoin Core projecthttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-maintainers/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-maintainers/https://twitter.com/kanzure/status/1136568307992158208 How do the maintainers think or feel everything is going? Are there any frustrations? Could contributors help eliminate these frustrations? 
That&rsquo;s all I have. It would be good to have better oversight or overview about who is working in what direction, to be more efficient. Sometimes I have seen people working on the same thing, and both make a similar pull request with a lot of overlap. This is more of a coordination issue.Taproothttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ previously: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 @@ -148,9 +148,9 @@ You take every single path it has; so instead, it becomes &hellip; certain c Graftroot The idea of graftroot is that in every contract there is a superset of people that can spend the money. This assumption is not always true but it&rsquo;s almost always true. Say you want to lock up these coins for a year, without any conditionals to it, then it doesn&rsquo;t work. But assume you have&ndash; pubkey recovery? No&hellip; pubkey recovery is inherently incompatible with any form of aggregation, and aggregation is far superior.Bellare-Nevenhttps://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/See also http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/ It&rsquo;s been published, it&rsquo;s been around for a decade, and it&rsquo;s widely cited. In Bellare-Neven, it&rsquo;s itself, it&rsquo;s a multi-signature scheme which means multiple pubkeys and one message. You should treat the individual authorizations to spend inputs, as individual messages. What we need is an interactive aggregate signature scheme. Bellare-Neven&rsquo;s paper suggests a trivial way of building an aggregate signature scheme out of a multisig scheme where interactively everyone signs everyone&rsquo;s message.Cross Curve Atomic Swapshttps://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/https://twitter.com/kanzure/status/971827042223345664 Draft of an upcoming scriptless scripts paper. This was at the beginning of 2017. But now an entire year has gone by. -post-schnorr lightning transactions https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html +post-schnorr lightning transactions https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html An adaptor signature.. 
if you have different generators, then the two secrets to reveal, you just give someone both of them, plus a proof of a discrete log, and then you say learn the secret to one that gets the reveal to be the same.Merkleized Abstract Syntax Treeshttps://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/Thu, 07 Sep 2017 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/https://twitter.com/kanzure/status/907075529534328832 -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html I am going to talk about the scheme I posted to the mailing list yesterday which is to implement MAST (merkleized abstract syntax trees) in bitcoin in a minimally invasive way as possible. It&rsquo;s broken into two major consensus features that together gives us MAST. I&rsquo;ll start with the last BIP. This is tail-call evaluation. Can we generalize P2SH to give us more general capabilities, than just a single redeem script.Signature Aggregationhttps://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/Wed, 06 Sep 2017 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/https://twitter.com/kanzure/status/907065194463072258 Sipa, can you sign and verify ECDSA signatures by hand? No. Over GF(43), maybe. Inverses could take a little bit to compute. Over GF(2). diff --git a/categories/meetup/index.xml b/categories/meetup/index.xml index 2f0812e70e..38bba8ac95 100644 --- a/categories/meetup/index.xml +++ b/categories/meetup/index.xml @@ -94,7 +94,7 @@ Location: Bitcoin Sydney (online) Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-11 -Package Mempool Accept and Package RBF: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html +Package Mempool Accept and Package RBF: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html With illustrations: https://gist.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-11-01-developer-call/Mon, 01 Nov 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-11-01-developer-call/Topic: Various topics Location: Jitsi online Video: No video posted online @@ -111,7 +111,7 @@ Topic: Various topics Location: Jitsi online Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. 
-Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://lists.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics +Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://gnusha.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics Location: Jitsi online Video: No video posted online Agenda: https://hackmd.io/@cdecker/Sy-9vZIQt @@ -139,7 +139,7 @@ Location: Bitcoin Sydney (online) Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-07 -First IRC workshop on L2 onchain support: https://lists.Sydney Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Tue, 01 Jun 2021 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Topic: Agenda in Google Doc below +First IRC workshop on L2 onchain support: https://gnusha.Sydney Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Tue, 01 Jun 2021 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Topic: Agenda in Google Doc below Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1E9mzB7fmzPxZ74WZg0PsJfLwjpVZ7OClmRdGQQFlzoY/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. @@ -148,7 +148,7 @@ This is Bitcoin Problems, it is a Jekyll website I put on GitHub.< Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.
Socratic Seminar 20https://btctranscripts.com/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20/Mon, 30 Nov 2020 00:00:00 +0000https://btctranscripts.com/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20/SF Bitcoin Devs socratic seminar #20 ((Names anonymized.)) XX01: For everyone new who has joined- this is an experiment for us. We&rsquo;ll see how this first online event goes. So far no major issues. Nice background, Uyluvolokutat. Should we let Harkenpost in? @@ -183,7 +183,7 @@ Intro Hey. Thanks for coming on your Saturday morning, I appreciate everyone mak Video: No video posted online Reddit link of the resources discussed: https://old.reddit.com/r/chibitdevs/ The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -Tainting, CoinJoin, PayJoin, CoinSwap Bitcoin dev mailing list post (Nopara) https://lists.Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2020-06-23-socratic-seminar/Tue, 23 Jun 2020 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2020-06-23-socratic-seminar/Name: Socratic Seminar +Tainting, CoinJoin, PayJoin, CoinSwap Bitcoin dev mailing list post (Nopara) https://gnusha.Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2020-06-23-socratic-seminar/Tue, 23 Jun 2020 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2020-06-23-socratic-seminar/Name: Socratic Seminar Topic: Agenda in Google Doc below Location: Bitcoin Sydney (online) Video: No video posted online @@ -195,7 +195,7 @@ Introductions Michael Folkson (MF): This is London BitDevs, this is a Socratic S Location: LA BitDevs (online) CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199 Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860 -Greg Sanders Bitcoin dev mailing list post in April 2017: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html +Greg Sanders Bitcoin dev mailing list post in April 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html The vulnerability The way Bitcoin transactions are encoded in the software is there is a list of coins essentially and then there is a list of destinations and the amount being sent to each destination. The destinations do not include the fee. Nothing in the transaction tells you what the fee of the transaction is.Taproot and Schnorr Multisignatureshttps://btctranscripts.com/london-bitcoin-devs/2020-06-17-tim-ruffing-schnorr-multisig/Wed, 17 Jun 2020 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2020-06-17-tim-ruffing-schnorr-multisig/Location: London Bitcoin Devs (online) Slides: https://slides.com/real-or-random/taproot-and-schnorr-multisig Transcript of the previous day’s Socratic Seminar on BIP-Schnorr: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr/ @@ -283,10 +283,10 @@ About me Okay. So here&rsquo;s some things about myself. I&rsquo;ve been SF Lightning Devs LND 0.6 Beta Deep Dive Intro lnd 0.6 was about seven months in the making. 
There’s more things that we could fit into this talk alone but I cherrypicked some of the highlights and things that will have some real world impact on users of the network or people who are using the API or just generally users of lnd. The high level impact on the network going forward and how we plan to scale up for the next 100K channels that come onto the network.Hardware Wallets (History of Attacks)https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Wed, 01 May 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Slides: https://www.dropbox.com/s/64s3mtmt3efijxo/Stepan%20Snigirev%20on%20hardware%20wallet%20attacks.pdf -Pieter Wuille on anti covert channel signing techniques: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html +Pieter Wuille on anti covert channel signing techniques: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html Introduction This talk is the second in the series after my previous talk in London a few months ago at the Advancing Bitcoin conference. There I was talking mostly about general attacks on hardware, more from the theoretical perspective. I didn’t say anything bad hardware wallets that exist and I didn’t specify anything on the bad side. Here I feel a bit more free as it is a meetup not a conference so I can say bad things about everyone.Partially Signed Bitcoin Transactionshttps://btctranscripts.com/sf-bitcoin-meetup/2019-03-15-partially-signed-bitcoin-transactions/Fri, 15 Mar 2019 00:00:00 +0000https://btctranscripts.com/sf-bitcoin-meetup/2019-03-15-partially-signed-bitcoin-transactions/Topic: PSBT Location: SF Bitcoin Devs -Introduction Andrew: So as Mark said I&rsquo;m Andrew Chow and today I&rsquo;m going to be talking about BIP 174, partially signed Bitcoin transactions, also known as PSBT. Today I&rsquo;m going to be talking a bit about why we need PSBTs, or Partially Signed Bitcoin Transactions, what they are, and the actual format of the transaction itself and how to use a PSBT and the workflows around that.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html +Introduction Andrew: So as Mark said I&rsquo;m Andrew Chow and today I&rsquo;m going to be talking about BIP 174, partially signed Bitcoin transactions, also known as PSBT. Today I&rsquo;m going to be talking a bit about why we need PSBTs, or Partially Signed Bitcoin Transactions, what they are, and the actual format of the transaction itself and how to use a PSBT and the workflows around that.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html Draft BIP: https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki Intro I am going to talk about BetterHash this evening. If you are coming to Advancing Bitcoin don’t worry I am talking about something completely different. 
You are not going to get duplicated content. That talk should be interesting as well though admittedly I haven’t written it yet. We’ll find out. BetterHash is a project that unfortunately has some naming collisions so it might get renamed at some point, I’ve been working on for about a year to redo mining and the way it works in Bitcoin.Threshold Signatures and Accountabilityhttps://btctranscripts.com/sf-bitcoin-meetup/2019-02-04-threshold-signatures-and-accountability/Mon, 04 Feb 2019 00:00:00 +0000https://btctranscripts.com/sf-bitcoin-meetup/2019-02-04-threshold-signatures-and-accountability/Slides: https://download.wpsoftware.net/bitcoin/wizardry/2019-02-sfdevs-threshold/slides.pdf Transcript completed by: Bryan Bishop Edited by: Michael Folkson diff --git a/categories/podcast/index.xml b/categories/podcast/index.xml index afc22767ce..71bddf19c1 100644 --- a/categories/podcast/index.xml +++ b/categories/podcast/index.xml @@ -435,9 +435,9 @@ That&rsquo;s right. Aaron: 00:01:57 Segregated Witness, which was the previous soft fork, well, was the last soft fork. We&rsquo;re working towards a Taproot soft fork now. Sjors: 00:02:06 -It&rsquo;s the last soft fork we know of.How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html -T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -F7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +It&rsquo;s the last soft fork we know of.How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html +T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Transcript by: Stephan Livera Edited by: Michael Folkson Intro Stephan Livera (SL): Luke, welcome to the show. Luke Dashjr (LD): Thanks. diff --git a/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part1/index.html b/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part1/index.html index a7d09daaee..e103253324 100644 --- a/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part1/index.html +++ b/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part1/index.html @@ -16,4 +16,4 @@ Pieter Wuille

Date: January 28, 2020

Transcript By: Michael Folkson

Tags: Bitcoin core

Category: Podcast

Media: -https://www.youtube.com/watch?v=s0XopkGcN9U

Part 2: https://www.youtube.com/watch?v=Q2lXSRcacAo

Jonas: Welcome to the podcast

John: Hi Pieter

Pieter: Hello John and Jonas

John: Thank you for being the first guest on our podcast.

Jonas: So far the most important guest we’ve had.

Pieter: That’s an amazing honor. Thank you so much for having me.

John: We’re here to talk about Bitcoin and Bitcoin Core development. We have Pieter Wuille as our guest who is a Bitcoin Core contributor of many years standing. Pieter you’ve had over 500 PRs merged into Bitcoin Core and I think over 11,000 review comments in the repo.

Pieter: That is possible.

John: That’s quite a lot over 11 years. No not quite, sorry. We’ll cut that bit so no one knows. Let’s say 9 years.

Pieter: I don’t like the implication.

John: We have a few questions for you. The first question is of all of your PRs that you’ve done we’ve picked out a few that we think are interesting and we’d like to hear from you your inspiration for those and interesting thoughts about those. The first one we picked was headers first syncing. So can you first of all tell us what that is?

Pieter: Sure. Historically in the Bitcoin protocol and Bitcoin Core implementation blocks were learned about and fetched from peers using the getblocks message which you would send to a peer telling them “Hey I know about this block hash. Tell me what’s more.” They would send you a list of block hashes and you’d start fetching them. At the end when you’ve done all of them you would ask again “What more blocks should I ask about?” This works fine as long as you are fetching blocks from one peer. The problem is this mechanism really does not parallelize well to multiple connections because there is no way to interleave. I guess you could come up with some complicated mechanism where I know this peer has these blocks and this peer has these blocks. I’m going to ask for one from each. It is really a mess because you don’t know where you are going. You start off at the beginning and you ask what’s next? There is huge attack potential there because a peer could just be like “Trust me I have a very good chain for you. In the end it is going to have high difficulty” but it just keeps giving you low difficulty blocks for starters. This was also a problem in practice. Around the time maybe 0.6, 0.7 this started to become an issue because downloading blocks started taking longer than ten minutes. That may have been a problem before that time even.

John: You mean downloading the entire blockchain took more than ten minutes?

Pieter: Yes. You’d start off downloading blocks from one peer. You’d ask one peer, intentionally you’d only ask one because we knew this mechanism didn’t parallelize. Then another peer would announce to you “Hey I have a new block” and you’d ask them “Give me that block.” You’d be like “I have no idea what its parent is, can you tell me something about its parent.” The result was that you’d basically start off a complete parallel second block downloading process with that other peer.

John: That new block is called an orphan block?

Pieter: That’s another issue that the pre-headers first mechanism had. You’d learn about blocks and have no way of knowing what its parents were until you had actually fully synced those parents. There used to be a pool where these downloaded blocks without parents were kept called orphan blocks unrelated to…

John: Stale blocks?

Pieter: Stale blocks which are just blocks in the chain that were abandoned because the majority hashrate forked away. Around the time of 0.7, 0.8, 0.9 I think we kept adding hacks on top of the block downloading mechanism, trying to put in heuristics to prevent it from having 8 connections and downloading all blocks from all of them simultaneously. At some point syncing got so slow that you’d end up with so many orphans that you could go out of memory while downloading. You’re still trying to catch up and you’re learning of all these new blocks that were just mined during the time you were syncing. They would all be kept in memory. Then we introduced a limit on how many of those were kept. The oldest ones would be deleted. That led to even more problems where those orphans were actually downloaded over and over again. Overall this was a mess and it was clear this wasn’t going to keep working.

John: For context this is 2013, 2014ish?

Pieter: Possibly. This was fixed in 0.10. Headers first synchronization was introduced in 0.10. What it did was split the synchronization process into two steps that were performed in parallel. One is that the normal synchronization process, which was just “from beginning to end, give me whatever”, was replaced with synchronizing just the headers. You’d build the best header chain by asking peers “give me headers. You have more headers, give me more headers.” The same mechanism as was previously used for blocks would now just be used for headers, which takes on the order of minutes because it was at the time a couple of dozen megabytes, maybe a bit more now.
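
A quick back-of-the-envelope check of that figure, assuming roughly 330,000 blocks around the 0.10 release (the exact height is an assumption):

```python
# 80-byte headers over the whole chain circa the 0.10 release.
blocks = 330_000          # approximate chain height at the time (assumption)
header_size = 80          # bytes per block header
print(blocks * header_size / 1e6, "MB")   # ~26 MB, i.e. "a couple of dozen megabytes"
```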

John: The reason for that is that the vast majority of the time doing an initial block download and initial sync is spent checking the signatures in the transactions.

Pieter: Right. Plus actually downloading the data because headers are 80 bytes per block rather than…

John: 1MB at the time, 2MB now.

Pieter: At the time they weren’t quite full yet. Then there would be a second phase which would just be a background process where, during the main loop of Bitcoin Core’s network processing, it would try to figure out which headers it had heard about from which peers and see that this one has a chain that is actually better than my current fully validated block tip. It would ask for a couple of blocks there. By limiting how many blocks were asked of each peer this parallelizes quite well because I think there is a limit of maybe 8 or 16 blocks per peer that are ever queued up. You’d ask “You have this header chain, I’ll ask the next 16 blocks of you. Someone else has them too, I will ask for the 16 after that from someone else.” Together with that there was a heuristic at the time which was very simple but I think has worked fairly well, which is that we’d never download a block that is more than 1024 blocks ahead of our current tip. We’re now starting to download blocks in parallel from multiple peers and validating as they come in. We don’t have the problem of orphan blocks any more because we already have their headers by the time we ask for them. We know they are part of the best chain assuming that chain is valid. There are still denial of service concerns there but they are much less severe in a headers first model.

John: One of the reasons that those DOS concerns are less is that the headers are very cheap to verify but expensive to create.

Pieter: To construct, exactly. As a general principle you try to validate things with the highest cost for an attacker divided by cost of validation. You do those tests first, and if you can bail out early this can massively reduce attack potential. In order to attack now you still have to first create an actual best header chain, or of course sybil attack the node during its synchronization. There are some techniques to avoid that as well. Ignoring those, we already have a headers chain so we can just ask for blocks from everyone in parallel and see when they come in. As soon as we have all blocks up to a certain point we can actually run the full script and transaction validation and continue. The question of course is how do you pick good peers? During IBD you don’t care so much about partition resistance. You’re still catching up with the network and you are not fully functional until you’ve caught up. Your primary concern is how do I pick fast peers to synchronize from? The mechanism we picked is never download a block that’s more than 1024 blocks ahead of your current tip. You have a window of blocks that starts at your current tip and extends 1024 blocks ahead. In that window you try to fetch as many blocks as possible from all your peers. If that window can’t move because of one peer, which means you have downloaded all the blocks in that window except blocks that are still outstanding in a request with one peer, you disconnect that peer. Conceptually this means that if you have one peer that is so much slower that it is preventing you from making progress that the other peers are allowing you to make, a factor of 10 slower than the rest or something, you would kick that peer and find another one. This mechanism works reasonably well and it will find decent peers. It can get stuck with moderate peers. If they are all equally slow this doesn’t do anything.
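
To make the moving-window heuristic a bit more concrete, here is a toy sketch of the stall detection it implies; the 1024-block constant matches the number mentioned in the talk, but the code is a simplified illustration, not Bitcoin Core’s actual logic.

```python
# Toy sketch of the heuristic described above, not Bitcoin Core code: blocks are
# only requested inside a 1024-block window ahead of the validated tip, and a peer
# that single-handedly stops the window from moving gets disconnected.
WINDOW = 1024

def pick_stalling_peer(tip, requested, received):
    """requested maps height -> peer it was asked from; received is a set of heights.
    Return the single peer blocking the window, if there is one."""
    window_heights = range(tip + 1, tip + 1 + WINDOW)
    missing_unrequested = any(h not in requested and h not in received
                              for h in window_heights)
    blockers = {requested[h] for h in window_heights
                if h in requested and h not in received}
    if not missing_unrequested and len(blockers) == 1:
        return blockers.pop()   # the one peer holding everything up
    return None

# Example: the first 16 blocks of the window are outstanding with a slow peer,
# everything else has already arrived from faster peers -> kick "slow".
requested = {h: "slow" for h in range(101, 117)}
requested.update({h: "fast" for h in range(117, 101 + WINDOW)})
received = set(range(117, 101 + WINDOW))
print(pick_stalling_peer(100, requested, received))  # -> slow
```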

John: There’s nothing to do?

Pieter: You don’t know. It might be because your own connection is limited or you’ve picked rather bad but not terrible peers. In any case that mechanism is still being used today I believe.

John: For that PR was that the first time we were tracking per peer state or per peer performance in order to do that kind of calculation?

Pieter: I think so. There was per peer state before that, in particular to prevent the same transaction from being downloaded from multiple peers. There was already ask-for caching where you’d at most ask for the same transaction once every two minutes or something. That already existed, that was there since forever. But as an actual performance optimization I think this was the first and maybe still the only real heuristic for finding good peers, as opposed to heuristics for safe, secure peers. There are a bunch of those nowadays when trying to create outgoing connections. When a new incoming connection comes in but all our incoming slots are already full, there are some heuristics used to determine: is this peer better? Should we maybe consider kicking one of our inbound peers in favor of this new one? The rules are things like don’t kick the last peer that gave you a block, or prefer peers that come from a variety of network sources, and so on. I think this 1024 window move prevention kicking heuristic is the only actual performance optimizing thing.
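
A small illustrative sketch of that ask-for caching idea; the two-minute figure comes from the talk, everything else is assumed rather than taken from the actual implementation.

```python
# Sketch of "don't re-request the same transaction from another peer for a while".
import time

REREQUEST_DELAY = 120        # seconds; "once every two minutes" per the talk
last_requested = {}          # txid -> timestamp of the last getdata we sent

def should_request(txid, now=None):
    now = time.time() if now is None else now
    last = last_requested.get(txid)
    if last is not None and now - last < REREQUEST_DELAY:
        return False         # already asked someone recently, wait for it
    last_requested[txid] = now
    return True

print(should_request("txid_1", now=0.0))    # True: first announcement
print(should_request("txid_1", now=60.0))   # False: request still outstanding
print(should_request("txid_1", now=200.0))  # True: earlier request timed out
```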

John: I think in 0.17 there were a few checks added for peers that were obviously on a different chain. Or trying to follow different consensus rules from you.

Pieter: There were a bunch of other rules added because we were concerned about islands of nodes connected to each other that would share the same consensus rules but would all be surrounded by nodes with different consensus rules, and would not actually figure out that no blocks were coming in.

John: That was around the time of SegWit2x and Bcash hard forks which is I think where that concern came from.

Pieter: Yes

Jonas: For such a major change though this was actually pretty quick. You opened the PR in July 2014 and it was merged in October 2014. Compared to today’s review process that is a pretty quick turnaround for some major changes.

Pieter: I think I started working on it significantly earlier than opening the PR though. I’m not sure. I remember it back then as a slow thing but it is all relative.

John: How has Bitcoin Core development culture changed over those 8 or 9 years that you’ve been contributing?

Pieter: It has certainly become harder. When I first started contributing Bitcoin Core had no tests. Testing meant manual testing like “I tried to synchronize and it still works.” There were no unit tests, no functional tests. I don’t know when the unit test framework was introduced. That was fairly early, but it is limited in how much you can do with just unit tests. For the interactions between nodes, the first major piece of infrastructure that tested larger scale behavior was Matt Corallo’s…

John: Pull tester?

Pieter: Pull tester I think was just a bot that would test pull requests. But one of the things it did was have a test implemented in bitcoinj that simulated things like reorgs and so on and see that Bitcoin Core would follow the right path under all sorts of scenarios. It was much later that that eventually got rewritten in Python.

John: That test still exists as featureblock.py

Pieter: Correct. That is now one of the many functional tests. There have been dozens added.

John: I think about 130, 140 right now.

Pieter: How do you call that? A dozen dozen?

John: A score

Pieter: A score is 20?

John: A gross, sorry. I apologize, we’ll cut that bit. I have one final question on headers first sync which is did you see an immediate uptick in performance? If you hadn’t done that, if that hadn’t been done, what would Bitcoin look like right now?

Pieter: Not so long ago I think Bitmex published a report of trying to synchronize various historical versions and I was surprised to not see headers first make a big difference there. As far as I remember there was no big difference between 0.9 and 0.10. At the time I believed it was an enormous difference. It would only download every block once. I don’t know why they didn’t observe that.

John: It is possible that the methodology was that everything was on their local network or they had one peer.

Pieter: Possibly. I think they synchronized from random peers on the network but I’m not sure. I remember it as a very big difference, in particular for IBD. Outside of IBD it wasn’t big.

John: If you’re at the tip it doesn’t make a huge difference.

Jonas: Ultraprune?

Pieter: I can talk about what ultraprune is.

Jonas: Go ahead.

Pieter: This was in 0.8. Ultraprune is the name of the patch set I made that effectively introduced the concept of an explicit UTXO set to Bitcoin’s validation logic. Before that time there was a database that kept, for every transaction output ever created, whether or not it was already spent and even where it was spent, using 12 bytes of data in the database per output.

John: That is a txo set not a utxo set.

Pieter: Right. It was mutable. It was a database from txid to a list of its outputs and whether or not they were spent and where they were spent. By the time I started working on this this database had grown to several gigabytes. This was a problem. It was fairly slow, but also the database was indirect in that when you wanted to do validation you had to first check this database to see if those outputs were not already spent, and if they weren’t you still had to go find the transaction in the block files to find those utxos. You wouldn’t be able to validate the script before you could fetch the utxo. Effectively your working set was this whole database plus the whole blockchain. This couldn’t work with pruning or anything. You had to have all blocks available because you were using the blockchain data as the utxo data. The motivation was that someone had started working, I think, on a patch that would go through this database and delete all txids whose outputs were already fully spent. Clearly these weren’t needed anymore. Ultraprune started as a proof of concept: if we take this to the extreme, how small can we make that database? Instead of storing something for every output, why don’t we actually switch to something where you just store the unspent ones, because those are the only ones you still need afterwards. Then there was this performance consideration where everything is indirect, we always need this indirection to the blockchain data. The utxos are actually small, they are just an amount and a small script usually. Why don’t we copy that to the database as well so everything you need for validation is right there? It depended on what kind of I/O speed you had, but at the time it reduced the amount of data you had to access from several gigabytes to maybe tens of megabytes.
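
As a rough illustration of the model change described here (not Bitcoin Core’s actual data structures), the old index tracked spentness per output of every transaction ever made, while the ultraprune-style set keeps only unspent outputs together with everything needed to validate a spend:

```python
from dataclasses import dataclass

# Old model: one entry per transaction ever made, with a spent flag per output;
# the script and amount still had to be read back out of the block files.
old_spentness_index = {}      # txid -> list of booleans, one per output

# Ultraprune-style model: only unspent outputs are kept, and everything needed
# to validate a spend (amount and script) is stored right in the entry.
@dataclass
class Coin:
    amount: int               # in satoshis
    script_pubkey: bytes

utxo_set = {}                 # (txid, output_index) -> Coin

def apply_tx(txid, inputs, outputs):
    """Update the UTXO set for one confirmed transaction."""
    for prev_txid, prev_index in inputs:
        del utxo_set[(prev_txid, prev_index)]        # spent outputs simply disappear
    for i, (amount, script) in enumerate(outputs):
        utxo_set[(txid, i)] = Coin(amount, script)

# A coinbase-like transaction creating one output, then a transaction spending it.
apply_tx("aa" * 32, inputs=[], outputs=[(50_0000_0000, b"\x51")])
apply_tx("bb" * 32, inputs=[("aa" * 32, 0)], outputs=[(49_9990_0000, b"\x51")])
assert ("aa" * 32, 0) not in utxo_set and ("bb" * 32, 0) in utxo_set
```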

John: If you extrapolate that to today it changes from 300 gigabytes or whatever the blockchain is to 3 gigabytes?

Pieter: Something like that, exactly. This was not only a performance improvement, it was fairly fundamental as a scaling thing because your utxo set hopefully does not grow as fast as your blockchain. There have been times in the past where it has shrunk. Not as much as I would like. The utxo set is much more correlated with actual usage, while the blockchain is clearly append only and cumulative and ever growing based on activity. Of course ultraprune was combined with the switch from BDB to LevelDB. They were developed independently and then turned into one PR before being merged. This had the well known effect of having caused a fork in the chain in March 2013 I believe. So the problem here was that 0.8 was so much faster that miners switched over to it almost immediately but much of the network had not switched from 0.7 to 0.8. The BDB database that was used in 0.7 for the tx index with all this spending information had an issue, and had always had an issue, in that BDB requires you to configure how many lock objects you need. The number of lock objects is correlated with the number of pages in the database that are simultaneously affected by a single atomic transaction.

John: Where a transaction here is a database update?

Pieter: Correct. It has nothing to do with Bitcoin transactions. This is a database transaction, and the whole update of applying a block to the database was done as one atomic update so that either the block would validate and you would be able to continue, or there would be a failure and the whole thing would never be applied. Let me rant a bit about the BDB documentation, whose guidance on how to pick this number is: run your database with a reasonable load and use this function to determine how many locks are used. There was no way to predict ahead of time what your actual absolute maximum number of locks is. This was combined with a bug in our code on the Bitcoin Core side where a failure to grab a lock would be treated as that block being invalid. Things would have been somewhat, but not all that, different if we hadn’t had that bug.

John: The crucial difference there is that the block failed but instead of attributing that to a local failure in your own system you’d attribute it to a consensus failure.

Pieter: Correct. It would just permanently mark the block as invalid when it somehow needed too many locks. This was non-deterministic across platforms. As we later found out it was even exploitable, because during a reorg the whole reorg would be done as one atomic update, which means that the number of locks you need is actually even proportional to the size of your reorg. This means that by feeding different forks to different nodes you could probably always, before 0.8, have selectively forked nodes off by triggering this behavior. Then came 0.8, which switched to a completely different database model as well as to LevelDB, which is a local database with no locking whatsoever. BDB is a cross process database system. What happened of course was someone produced a block that, for a wide range of nodes on the network, needed more locks than they had configured. The network rejected the block but the miner that created it, as well as a majority of other miners, were all happily continuing because they were on 0.8 which had no concern about these locks. What had happened was we had unintentionally removed a consensus rule, one that was admittedly inconsistent, but still it shouldn’t have been removed without being aware of it, and thereby we actually introduced a hard fork. It is debatable whether it is a hard fork given that the old code was actually inconsistent with itself all the time. In any case it caused an actual consensus failure on the network. Miners quickly agreed to temporarily revert back to 0.7, which allowed overwriting the chain with one that everybody would accept. 0.8.1 was released, which added to 0.8 something simulating the lock limit that BDB had, in the hope that people could run 0.8.1 with the same, or at least similar, restrictions.

John: Miners could use 0.8.1 so they wouldn’t be creating blocks that old nodes would reject.

Pieter: This was temporary. I believe in two or three months this rule expired and I believe it took until August 2013 until another block was produced that might have triggered the 0.7 issue. By then the network had largely updated to 0.8 and later versions.

John: Ok. There’s really a lot to dig into in all of that. My first reaction would be I’m a little hesitant to call that a hard fork which I think you said. I don’t think the word hard fork has much meaning in this context really.

Pieter: Yeah I agree. Let’s keep it as an unintentional consensus failure.

Part 2

John: Ok I have a bunch of questions from that. One is what are the lessons from that?

Pieter: One of the things I think we learned from that is that specifying what your consensus rules are is really hard. That doesn’t mean you can’t try, but who would’ve thought that a configuration setting in the database layer you are using actually leaked semantically into Bitcoin’s implicitly defined consensus rules? You can attribute that to human failure of course. We should’ve read the documentation and been aware of that.

John: Would testing have caught this?

Pieter: Probably things like modern fuzzing could’ve found this. Who knows right? There could be a bug in your C library. There can be a bug in your kernel. There can even be a bug in your CPU.

John: In your hardware, anywhere.

Pieter: Exactly. We can talk about the boundary in trying to abstract out the part of the codebase that intentionally contributes to consensus, but it is very hard to say clearly this code has no impact on consensus, because bugs can leak. I think one of the things to learn there is that you really want software that is intended for use in a consensus system, where not only do you have the requirement that if everyone behaves correctly everybody accepts the right answer, but also that everybody rejects an invalid piece of data in lockstep.

John: That condition is much harder.

Pieter: That’s much harder. It is not a usual thing you design things for. Maybe a good thing to bring up is the BIP66 DER signature failure. You also had getting rid of OpenSSL on the list of things to talk about. Validation of signatures in Bitcoin’s reference code used to use OpenSSL, and signatures were encoded in whatever format OpenSSL expects.

John: Let’s take a step back and talk about Satoshi implementing Bitcoin. Satoshi wrote a white paper and then produced a reference implementation of Bitcoin. In that reference implementation there was a dependency on OpenSSL that was used for many things.

Pieter: Correct. It was even used for computing the difficulty adjustment I think. It was used for signing. At some point it was used for mining.

John: OpenSSL is a very widely used open source library. It has been deployed in many applications for many years. It wasn’t a bad choice to use OpenSSL.

Pieter: I think it was an obvious choice from a standard software engineering perspective. It was a very reasonable thing to do without the things we’ve since learned. What this meant was that even though ECDSA and the secp256k1 curve have nicely written up specifications, it wasn’t actually these specifications that defined Bitcoin’s signature validation rules. It was whatever the hell OpenSSL implemented. It turns out what OpenSSL implemented isn’t exactly what the specification says.

John: And isn’t exactly consistent across different platforms.

Pieter: Exactly. What we learned is that the OpenSSL signature parser, at the time, this has since been fixed, allowed certain violations of the DER encoding specification, which is a way of encoding structured data in a parsable format that the ECDSA specification refers to. OpenSSL followed the philosophy of being liberal in what you accept and strict in what you produce, which I think is now widely considered a bad idea exactly because of the inconsistencies it introduces. OpenSSL allowed signatures that violated the spec. This didn’t mean that this permitted forging a signature. Someone without a private key still could not construct anything that OpenSSL would accept. The problem was that someone with a private key might construct a signature that some versions would accept and others wouldn’t. Indeed, one of these permitted violations of DER involved a bound on the size of a length field, and that bound was 32 bits on 32 bit platforms and 64 bits on 64 bit platforms. You could construct a signature at the time that says “The length of this integer is given in the next 5 bytes.” Those 5 bytes would just contain the number 32 or 33.

John: To get a bit more specific. When we create a signature in ECDSA we have two values, an r value and an s value. Together they form a signature. When we talk about encoding we’re talking about how we put those values into bits that we transmit across the network. DER encoding has a bunch of fields as well as the r and the s fields, which are saying this is the length of the thing…

Pieter: It would start by saying “Here is a concatenation of two things and it is this many bytes.” Then it would say “The first element is an integer and it is this many bytes.” Then you would actually have the data. “The next thing is an integer. It is this many bytes and here is the data.” Then Bitcoin adds a signature hash flag at the end but that is not part of the DER thing. This encoding of the r and s values could either say “It is the next n bytes up to 126” or something but if it is more than that it would include a marker that says “The length of the next field is given in the next n bytes.” The maximum length of that indirect size field was platform dependent in OpenSSL.
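
For readers who want to see the shape of the strictness that was eventually enforced, here is a Python sketch of a strict DER check along the lines of what BIP66 later standardized (the bounds and field layout follow the BIP; treat it as illustrative rather than normative):

```python
def is_strict_der(sig: bytes) -> bool:
    """Check a signature (DER sequence plus one sighash byte) for strict encoding."""
    if len(sig) < 9 or len(sig) > 73:       # smallest/largest possible signature
        return False
    if sig[0] != 0x30:                      # compound (sequence) type marker
        return False
    if sig[1] != len(sig) - 3:              # length covers everything but the sighash byte
        return False
    len_r = sig[3]
    if 5 + len_r >= len(sig):               # R length must leave room for S
        return False
    len_s = sig[5 + len_r]
    if len_r + len_s + 7 != len(sig):       # sizes must add up exactly
        return False
    if sig[2] != 0x02 or len_r == 0:        # R must be a non-empty integer
        return False
    if sig[4] & 0x80:                       # R must not be negative
        return False
    if len_r > 1 and sig[4] == 0x00 and not (sig[5] & 0x80):
        return False                        # no unnecessary leading zero in R
    if sig[len_r + 4] != 0x02 or len_s == 0:   # S must be a non-empty integer
        return False
    if sig[len_r + 6] & 0x80:               # S must not be negative
        return False
    if len_s > 1 and sig[len_r + 6] == 0x00 and not (sig[len_r + 7] & 0x80):
        return False                        # no unnecessary leading zero in S
    return True
```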

John: So what do you do about that? You’ve discovered that Bitcoin is inconsistent with itself.

Pieter: In a similar way that 0.7 and everything before it were inconsistent with themselves due to this BDB lock issue. This was a much more concrete thing. You’d know exactly that I can construct a signature that these platforms will accept and these won’t. This wasn’t non-deterministic, this was deterministic. It was just dependent on the platform. The problem was that fixing this wasn’t just a database update, this was implicitly part of our consensus rules. So what we needed to do was fix those consensus rules. That is what BIP66 was designed to do. The full rationale for BIP66 wasn’t revealed until long after it was deployed because this was so trivial to exploit. We did keep that hidden for a long time. BIP66’s stated goal, which was correct in part, was being able to move off OpenSSL: let’s switch to a very well specified subset of signatures which everybody already produces. The signing code that people were using was sufficiently strict; apart from a few other implementations this was generally not a problem. There were concerns at the time about miners that didn’t actually do full validation, which would have made it even easier to broadcast such a signature on the network and get it included. That was interesting. Again it taught us that even when you think you have a specification of what your consensus rules are, you may not. Everybody would’ve thought there’s this document that specifies ECDSA and secp256k1, that is our specification. It turns out it wasn’t.

John: Consensus is slippery and touches everything.

Jonas: When you’re sitting on an exploit like that, when you’re aware that something is open for exploitation how does that change the process and how do you think about coming up with a solution? You have a time constraint I guess.

Pieter: Given it had been there for a long time there are trade-offs like who do you tell? How fast do you move to fix this? Moving too fast might draw suspicion, moving too slowly might leave it exploitable. Really these things always need to be considered on a case by case basis.

John: I think that brings us nicely to the third PR or family of PRs that I have on my list or projects that you’ve contributed to which is libsecp. Can you give us a bit of background on what the genesis of that project was and where it came from?

Pieter: It is not actually known, I think, why Satoshi picked the secp256k1 curve, which was standardized but a very uncommon choice even at the time. I don’t dare say when, maybe 2012, there was a post on Bitcointalk by Hal Finney about the special properties that this curve has, and presumably why it was picked, because it had this promise of an accelerated implementation. What this was was a particular technique that allows a faster implementation of elliptic curve multiplication using an efficiently computable endomorphism. I won’t go into the details unless you want me to, but it is a technique that gives you a percentage speedup for multiplication. It also places certain requirements on the curve that not everyone is as happy with. It also gives you a small speedup for attackers, but generally you want an exponential gap between the time it takes for an attacker and an honest user anyway. Hal made this post saying “I looked into how to actually implement this particular optimization for the curve. Here is a bit of the math.” I think maybe he had some proof of concept code to show it, but I was curious to see how much speedup this was actually going to give. I first tried to look at: can I integrate this in OpenSSL itself? Because OpenSSL didn’t have any specialized implementation for this curve nor for this optimization technique in general. I started doing that but OpenSSL was an annoying codebase to work with, to put it mildly. I thought how about I just make my own implementation from scratch just to see what the effect is. This started as a small hobby project thinking about… To be fair it is a much easier problem if you are only trying to implement one algorithm for one curve compared to a general library that tries to do everything in cryptography. I had the option of picking a specific field representation for how you are going to represent the x and y coordinates. I learned some techniques from how other curves like ed25519 were implemented and used some of those techniques. I started off by only implementing this optimized construction actually. It turned out when I was done it was maybe a factor of 4 faster than OpenSSL, which was a very unexpected result. I hadn’t imagined that with fairly little work it would immediately be that much better. I guess it made sense just by being so specialized and being able to pick data structures that were specifically chosen for this curve rather than generic. You actually get a huge advantage.
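
A hedged sketch of the curve property behind that optimization (standard secp256k1 constants; this is a numeric sanity check, not libsecp code): multiplying a point’s x coordinate by a cube root of unity in the field maps curve points to curve points, and that very cheap map is what the GLV endomorphism trick exploits.

```python
p = 2**256 - 2**32 - 977          # the secp256k1 field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def on_curve(x, y):
    return (y * y - (x ** 3 + 7)) % p == 0    # y^2 = x^3 + 7

# Find a non-trivial cube root of unity mod p: a root of z^2 + z + 1 = 0,
# i.e. z = (-1 + sqrt(-3)) / 2. Since p % 4 == 3, sqrt(a) = a^((p+1)/4).
sqrt_m3 = pow(p - 3, (p + 1) // 4, p)
assert sqrt_m3 * sqrt_m3 % p == p - 3
beta = (p - 1 + sqrt_m3) * pow(2, p - 2, p) % p
assert pow(beta, 3, p) == 1 and beta != 1

# The generator is on the curve, and so is (beta*x, y): the map
# (x, y) -> (beta*x, y) costs only one field multiplication. The full GLV
# trick additionally uses that this map equals multiplication by a fixed
# scalar, which lets you split a scalar multiplication into two half-size ones.
assert on_curve(Gx, Gy)
assert on_curve(beta * Gx % p, Gy)
print("endomorphism check passed")
```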

John: At this point this was still just a …

Pieter: Yes. This was 2013 probably.

John: Just a personal project. You weren’t thinking about it being part of Bitcoin Core.

Pieter: I open sourced this. It attracted some contributions from Greg Maxwell, not Hal Finney, and from Peter Dettman, who is a major contributor to the Bouncy Castle cryptographic library and who by now has probably come up with half of the algorithms in libsecp. Sometimes incremental improvements, sometimes original research and algebraic techniques to optimize things here and there. That has pushed the performance every time a couple of percent here and there, and it adds up. There were assembly implementations for some routines added by people. After a while, including lots and lots of testing, it was added to Bitcoin Core, I think in 0.10, when the signing code was switched to it, and then in 0.12 the validation code was switched to it. This was after BIP66 had activated and we knew the rules on the network are exactly this DER encoding and nothing else. Interestingly, by that time this efficient endomorphism GLV optimization was made optional and off by default in libsecp because of potential concern about patents around it. It is kind of ironic that this project started as an attempt to see what the benefit of this optimization was, and in the end we chose not to use it. But despite that it was still a very significant performance improvement over OpenSSL.

John: Did you feel some urgency after BIP66 to move across to libsecp?

Pieter: Not really. There was until very recently this vague concern that OpenSSL is a huge library with a huge attack surface. It was not designed with these consensus-like applications in mind. At least as far as the signature validation parsing went, I think at the time we felt that we now understood the scope of what OpenSSL does here and we had restricted it sufficiently. We were fairly confident that that exactly wasn’t going to be a problem anymore. It was more the unknown unknowns for all the other things. I don’t know how fast we moved, I don’t remember.

John: To enumerate some of the benefits of switching to libsecp. It is extremely well tested. It has almost 100% code coverage I believe?

Pieter: I think so.

John: It is much faster than OpenSSL.

Pieter: I think OpenSSL has caught up a bit since.

John: There are many things about libsecp that make the API safe for users I think.

Pieter: It is very much designed to be a hard to misuse library so it doesn’t really expose many low level operations that you might want from a generic cryptographic toolkit. It is designed with fairly high level APIs in mind like validate a signature, parse a signature, create a signature, derive a key and so forth.

John: And lots of thought about constant time and avoiding…

Pieter: Yes, it was also from the start designed to be side channel resistant, or at least against the typical side channels you can protect against in software, namely not having code paths that depend on secret data and not having memory accesses that depend on secret data. Despite that, at the start it didn’t actually fully achieve that. There was some timing leak in very early code that was probably very hard to exploit. There was some table with precomputed values and you need to pick one of them based on secret data, which is a problem. I think what we did was spread out the data so that, say there are 16 table entries, the first 16 bytes contain the first byte of every entry. Then the next 16 bytes contain the second byte of every entry and so on. You would think now it needs to access all groups of 16 bytes, and given reasonable assumptions about architectures that generally have cache lines of 64 bytes, you would think it is going to access every cache line so there shouldn’t be any leak anymore. It turns out there is a paper that actually shows that even in this case you leak information, because within a cache line there is a very small difference in timing for when the first byte and the second byte become available, and that can be observed. The fix is to actually use a conditional move construction where you always read through every byte.
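
A toy illustration of the access-pattern difference (Python can only show the pattern, not real timing behaviour; real implementations do this with masks in C): instead of indexing the table with the secret, every entry is read and the wanted one is kept with arithmetic, the software analogue of a conditional move.

```python
TABLE = [i * 1000 + 7 for i in range(16)]   # stand-in for a table of precomputed points

def lookup_leaky(secret_index):
    # Which memory gets touched depends directly on the secret index.
    return TABLE[secret_index]

def lookup_constant_time(secret_index):
    # Read every entry unconditionally; keep the wanted one with a mask so the
    # access pattern does not depend on the secret.
    result = 0
    for i, entry in enumerate(TABLE):
        mask = -(i == secret_index)      # -1 (all ones) when selected, else 0
        result |= entry & mask
    return result

assert lookup_leaky(5) == lookup_constant_time(5)
```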

Jonas: As you talk about the history and certainly you’ve forgotten more about Bitcoin than most of us will ever learn, as you go back and you think about Satoshi’s reference implementation what are things that you would imagine you would want to do from the beginning? Things that are baked into the software that are difficult to shake even now as you’ve made contributions over the years that you would want to have done differently from the beginning?

Pieter: You mean in code design or actually how Bitcoin works?

Jonas: I think either. I think in terms of code design putting most of the code in one file wasn’t particularly helpful from the beginning. In terms of design choices.

Pieter: It is of course slow to change things, but I think given enough time, if we are just talking about code design questions, if we have agreement that we need to move to this other design we can do it, whatever it is. You mention of course everything being in one file; in the 2010 codebase the wallet and the consensus validation were all in one file, including direct calls to the UI. That was really hard to reason about. The wallet tracking which outputs had been spent was actually done through a callback from the script verifier that would tell the wallet “Hey, I’ve seen a validation with this input.” This was really hard to reason about. Of course these days there is a lot more complexity so I don’t want to claim that it is easier to reason about things today. But relative to its complexity and how much it was actually doing it was fairly hairy back then.

John: Yeah I think we’ve made enormous strides. There’s a well defined interface between the wallet and the node so we can be confident that it is not involved in consensus.

Pieter: Yes exactly. There is still a lot of work to do there too but I think we are getting there. Your question about how Bitcoin could have been designed differently, that is a very hard question because you inevitably run into philosophical questions like: if it were to have been designed differently would it have taken off? Especially if you go into questions like economic policy. That’s really hard to guess. I think there are lots of things we have learned. The concept of P2SH, for example, was clearly not present in the original design, and things could have been done in a much simpler way if there had been something like P2SH from the beginning. Yet it seems so obvious that this is preferable. Before P2SH, if you personally have some multisig policy, and nobody was using multisig at the time, I guess that was part of the reason, but if you would’ve wanted to use a multisig policy to protect your coins with a device or cold storage key and an online key, you would’ve somehow needed to convey to anyone who wanted to pay you how to construct this script that includes your policy. That is annoying for multiple reasons: a) it is none of their business, why do I need to tell you “Hey look, I’m using a multisig policy, it is just for my own protection.” Secondly, you would be paying the fees for paying to my complex script. That should not have been a concern of yours either. Lastly, everything you put in an output leaks into the utxo set, and as we now know the size of the utxo set is a critical scaling parameter of the system.

John: I’ll add fourthly you would have really long addresses which would be kind of annoying.

Pieter: Exactly, you would need a standard for conveying that information, which would inevitably be variable length if you go for big scripts. I don’t even think that all of these advantages were talked about at the time when P2SH was created. I think it was just the last one. We have no address for this, it is really hard to create one. We can make it simpler by hashing the script first. I think the other advantages were things that were only realized later, how much of a better design this is. Of course we have since iterated on that. SegWit I think is clearly something that should have been done from the beginning. The fact that signatures leak into the txid made it really hard to do all kinds of more complex constructions. At the same time Bitcoin was the first thing in its class and it is unrealistic to expect it to get everything right from the beginning. Thankfully I think we’ve learned very well how to do safe upgrades to some of this.
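
As a hedged sketch of the mechanism being discussed (helper names are made up; the opcode bytes follow the standard P2SH template): the payer only commits to a 20-byte hash of the redeem script, so the script itself stays out of the payment request and out of the UTXO set until it is spent.

```python
import hashlib

def hash160(data: bytes) -> bytes:
    # HASH160 = RIPEMD160(SHA256(data)); hashlib's ripemd160 needs OpenSSL
    # support, which some recent builds disable.
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

def p2sh_script_pubkey(redeem_script: bytes) -> bytes:
    # OP_HASH160 <20-byte hash> OP_EQUAL: fixed size no matter how large or
    # complicated the redeem script is, so none of it ends up in the UTXO set.
    return b"\xa9\x14" + hash160(redeem_script) + b"\x87"

# The payer never needs to see the policy, only its hash.
redeem_script = b"<imagine a 2-of-3 multisig script here>"
print(p2sh_script_pubkey(redeem_script).hex())
```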

John: I agree entirely that this isn’t really an exercise for faulting Satoshi for the mistakes. But if I could wave a magic wand SegWit from the genesis block would be great because then the block could commit to the signatures, the wtxid whereas now it doesn’t.

Pieter: In SegWit it does. In SegWit there is a coinbase output that contains a hash with the root of a Merkle tree that commits to all wtxids.

John: Right, yes. But you don’t know that until you deserialize the transactions from the block which is a little bit annoying.
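
For concreteness, here is a rough sketch of the commitment Pieter refers to, following BIP141 (function names and the example wtxids are illustrative): the coinbase carries an OP_RETURN output committing to the Merkle root of all wtxids, with the coinbase’s own wtxid taken as 32 zero bytes.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes):
    # Bitcoin-style Merkle tree: pair up hashes, duplicating the last one on odd levels.
    layer = list(hashes)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [sha256d(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def witness_commitment_script(wtxids, witness_reserved_value=b"\x00" * 32):
    # Per BIP141 the coinbase's own wtxid is taken to be 32 zero bytes.
    leaves = [b"\x00" * 32] + list(wtxids[1:])
    commitment = sha256d(merkle_root(leaves) + witness_reserved_value)
    # OP_RETURN, push of 36 bytes: the 0xaa21a9ed header plus the commitment.
    return b"\x6a\x24\xaa\x21\xa9\xed" + commitment

wtxids = [b"\x11" * 32, b"\x22" * 32, b"\x33" * 32]   # made-up example values
print(witness_commitment_script(wtxids).hex())
```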

Jonas: How do you think about how to spend your time on Bitcoin? There are so many ways to get nerd sniped and so many directions you could contribute to. What are you excited about and then how do you feel the pull of your personal excitement versus the pull of what’s necessary for someone like you to contribute to Bitcoin?

Pieter: That is a good question, I don’t have a good answer. I try to work on things I’m excited about but sometimes this also means following through on something after you’ve lost some of the excitement about it because you have worked on this and people expect you to continue. It is a hard question. I expect this in general in open source to be a problem. There is no set direction and ultimately people choose how to spend their own time themselves. What am I excited about? I’m happy with the progress we’ve made with Taproot review and how that is going. I’m excited to see that progress further. There are some interesting changes people are working on related to the peer-to-peer protocol, things like Erlay that I contributed to. There are too many things to mention.

John: I think Taproot, Schnorr plus Erlay is a good start for things to get excited about. Shall we wrap up there? Thank you Pieter.

Media: https://www.youtube.com/watch?v=s0XopkGcN9U

Part 2: https://www.youtube.com/watch?v=Q2lXSRcacAo

Pieter: To construct, exactly. As a general principle you try to validate things with the highest cost for an attacker divided by cost of validation. You do those tests first and if you can bail out early this can massively reduce attack potential. In order to attack now you still have to first create an actual best header chain or of course sybil attack the node during its synchronization. There are some techniques to avoid that as well. Ignoring those we already have a headers chain so we can just ask for blocks from everyone in parallel, see when they come in. As soon as we have all blocks up to a certain point we can actually the run full script and transaction validation and continue. This heuristic we have is… the question is of course is how do you pick good peers? During IBD you don’t care so much about partition resistance. You’re still catching up with the network and you are not fully functional until you’ve caught up. Your primary concern is how do I pick fast peers to synchronize from? The mechanism we picked is never download a block that’s more than 1024 blocks ahead of your current tip. You have a window of blocks that starts at your current tip and 1024 blocks ahead. In that window you try to fetch blocks as possible from all your peers. If that window can’t move because of one peer, which means you have downloaded all the blocks in that window except blocks that are still outstanding in a request with one peer you would disconnect that peer. Conceptually this means that if you have one peer that is so much slower that it is preventing you from making progress that the other peers are allowing you to make, a factor of 10 slower than the rest or something, you would kick that peer and find another one. This mechanism works reasonably well and it will find decent peers. It can get stuck with moderate peers. If they are all equally slow this doesn’t do anything.

John: There’s nothing to do?

Pieter: You don’t know. It might be because your own connection is limited or you’ve picked rather bad but not terrible peers. In any case that mechanism is still being used today I believe.

John: For that PR was that the first time we were tracking per peer state or per peer performance in order to do that kind of calculation?

Pieter: I think so. There was per peer state before that. In particular to prevent the same transaction from being downloaded from multiple peers. There was already ask for caching where you’d at most ask for the same transaction once every two minutes or something. That already existed, that was there since forever. But as an actual performance optimization I think this was the first and maybe still the only real heuristic for finding good peers. As opposed to heuristics for safe, secure peers. There are a bunch of those nowadays when trying to create outgoing connections. When a new incoming connection comes but all our incoming slots are already full there are some heuristics that are used to determine is this peer better? Should we maybe consider kicking one of our inbound peers in favor of this new one? Their rules are don’t kick the last peer that have given you a block or prefer peers that are from a variety of network sources or so on. I think this 1024 window move prevention kicking heuristic is the only actual performance optimizing thing.

John: I think in 0.17 there were a few checks added for peers that were obviously on a different chain. Or trying to follow different consensus rules from you.

Pieter: There were a bunch of other rules added where we were concerned about islands of nodes connected to each other that would share the same consensus rules but they would all be surrounded by nodes with different consensus rules. They did not actually figure out that there were no blocks coming in.

John: That was around the time of SegWit2x and Bcash hard forks which is I think where that concern came from.

Pieter: Yes

Jonas: For such a major change though this was actually pretty quick. You opened the PR in July 2014 and it was merged in October 2014. Compared to today’s review process that is a pretty quick turnaround for some major changes.

Pieter: I think I started working on it significantly earlier than opening the PR though. I’m not sure. I remember it back then as a slow thing but it is all relative.

John: How has Bitcoin Core development culture changed over those 8 or 9 years that you’ve been contributing?

Pieter: It has certainly become harder. We started off with, Bitcoin Core had no tests at the time when I first started contributing. Testing meant manual testing like “I tried to synchronize and it still works.” There were no unit tests, no functional tests. I don’t know when the unit test framework was introduced. This was fairly early but it is limited in how much you can do with just these unit tests. The interactions between nodes, the first major piece of infrastructure that tested larger scale behavior was Matt Corallo’s…

John: Pull tester?

Pieter: Pull tester I think was just a bot that would test pull requests. But one of the things it did was have a test implemented in bitcoinj that simulated things like reorgs and so on and see that Bitcoin Core would follow the right path under all sorts of scenarios. It was much later that that eventually got rewritten in Python.

John: That test still exists as featureblock.py

Pieter: Correct. That is now one of the many functional tests. There have been dozens added.

John: I think about 130, 140 right now.

Pieter: How do you call that? A dozen dozen?

John: A score

Pieter: A score is 20?

John: A gross, sorry. I apologize, we’ll cut that bit. I have one final questions on headers first sync which is did you see an immediate uptick in performance? If you hadn’t done that, if that hadn’t been done what would Bitcoin look like right now?

Pieter: Not so long ago I think Bitmex published a report of trying to synchronize various historical versions and I was surprised to not see headers first make a bit difference there. As far as I remember there was no big difference between 0.9 and 0.10. At the time I believed it was an enormous difference. It would only download every block once. I don’t know why they didn’t observe that.

John: It was possible that the methodology was that everything was in their local network or they had one peer.

Pieter: Possibly. I think they synchronized from random peers on the network but I’m not sure. I remember it as a very big difference, in particular for IBD. Outside of IBD it wasn’t big.

John: If you’re at the tip it doesn’t make a huge difference.

Jonas: Ultraprune?

Pieter: I can talk about what ultraprune is.

Jonas: Go ahead.

Pieter: This was in 0.8. Ultraprune is the name of the patch set I made that effectively introduced the concept of an explicit UTXO set to Bitcoin’s validation logic. Before that time there was a database that kept for every transaction output ever created whether or not it was already spent and even where it was spent using 12 bytes of data in the database per output ever created.

John: That is a txo set not a utxo set.

Pieter: Right. It was mutable. It was a database from txid to list of its outputs and whether or not they were spent and where they were spent. By the time I started working on this this database had grown to several gigabytes. This was a problem. It was fairly slow but also the database was indirect in that when you wanted to do validation you had to go first check this database to see if those outputs were not already spent and if they weren’t you still had to go find the transaction in the block files to find those utxos. You wouldn’t be able to validate the script before you could fetch the utxo. Effectively your working set was this whole database plus the whole blockchain. This couldn’t work with pruning or anything. You had to have all blocks available because you were using the blockchain data as the utxo data. The motivation was someone had started working I think on a patch that would go through this database and delete all txids whose outputs were already fully spent. Clearly these weren’t needed anymore. Ultraprune started as a proof of concept of if we take this to the extreme how small can we make that database? Instead of storing something for every output why don’t we actually switch to something where you just store the unspent ones because those are the only ones you still need afterwards. Then there was this performance consideration where everything is indirect, we always need this indirection to the blockchain data. The utxos are actually small, they are just an amount and a small script usually. Why don’t we copy that to the database as well so everything you need for validation is right there? It depended on what kind of I/O speed you had. At the time it reduced the amount of data you had to access from several gigabytes to maybe in the tens of megabytes at the time.

John: If you extrapolate that to today it changes from 300 gigabytes or whatever the blockchain is to 3 gigabytes?

Pieter: Something like that, exactly. This not only was a performance improvement it was fairly fundamental as a scaling thing because your utxo set hopefully does not grow as fast as your blockchain. There have been times in the past where it has shrunk. Not as much as I would like. The utxo set is much more correlated with actual usage while the blockchain is clearly append only and cumulative and ever growing based on activity. Of course ultraprune was combined with the switch from BDB to LevelDB. They were developed independently and then turned into one PR before being merged. This had the well known effect of having caused a fork in the chain in March 2013 I believe. So the problem here was that 0.8 was so much faster that miners switched over to it almost immediately but much of the network had not switched from 0.7 to 0.8. The BDB database that was used for the tx index with all this spending information in 0.7 had an issue and always had an issue that BDB requires you to configure how many lock objects you need. The number of lock objects is correlated with the number of pages in the database that are simultaneously affected by a single atomic transaction.

John: Where a transaction here is a database update?

Pieter: Correct. It has nothing to do with Bitcoin transactions. This is a database transaction and the whole update of applying a block to the database was done as one atomic update so that either the block would validate and you would be able to continue or there would be a failure and the whole thing would never be applied. Let me rant a bit about BDB documentation which tells you in guiding how to pick this number is run your database with a reasonable load and use this function to determine how many locks are used. There was no way you can predict ahead of time how many locks your actual absolute maximum is. This was combined with a bug in our code on the Bitcoin Core side that a failure to grab a lock would be treated as that block being invalid. Things would have been somewhat but not all that different if we wouldn’t have had that bug.

John: The crucial difference there is that the block failed but instead of attributing that to a local failure in your own system you’d attribute it to a consensus failure.

Pieter: Correct. It would just permanently mark the block as invalid when it somehow needed too many locks. This was non-deterministic across platforms. As we later found out even exploitable because during a reorg the whole reorg would be done as one atomic update which means that the number of locks you need is actually even proportional to the size of your reorg. This means that by feeding different forks to different nodes you could probably have always before 0.8 selectively forked nodes off by triggering this behavior. What happened was 0.8 which switched to a completely different database model as well as LevelDB which is a local database with no locking whatsoever. BDB is a cross process database system. What happened of course was someone produced a block that for a wide range of nodes on the network exceeded the number of locks that were needed. The network rejected the block but the miner that created it as well as a majority of other miners were all happily continuing because they were on 0.8 that had no concern about these locks. What had happened was we had unintentionally removed a consensus rule which was already consistent but still it shouldn’t have been removed without being aware of it and thereby actually introduced a hard fork. It is debatable whether it is a hard fork given that the old code was actually inconsistent with itself all the time. In any case it caused an actual consensus failure on the network. Miners quickly agreed to temporarily revert back to 0.7 which allowed overwriting a chain with one that everybody would accept. 0.8.1 was released that in 0.8 added something simulating the locks limit that BDB had in the hope that people could use 0.8.1 that had the same restrictions or at least similar restrictions.

John: Miners could use 0.8.1 so they wouldn’t be creating blocks that old nodes would reject.

Pieter: This was temporary. I believe in two or three months this rule expired and I believe it took until August 2013 until another block was produced that might have triggered the 0.7 issue. By then the network had largely updated to 0.8 and later versions.

John: Ok. There’s really a lot to dig into in all of that. My first reaction would be I’m a little hesitant to call that a hard fork which I think you said. I don’t think the word hard fork has much meaning in this context really.

Pieter: Yeah I agree. Let’s keep it as an unintentional consensus failure.

Part 2

John: Ok I have a bunch of questions from that. One is what are the lessons from that?

Pieter: One of the things I think learned from that is specifying what your consensus rules is really hard. That doesn’t mean you can’t try but who would’ve thought that a configuration setting in the database layer you are using actually leaked semantically into Bitcoin’s implicitly defined consensus rules. You can attribute that to human failure of course. We should’ve read the documentation and been aware of that.

John: Would testing have caught this?

Pieter: Probably things like modern fuzzing could’ve found this. Who knows right? There could be a bug in your C library. There can be a bug in your kernel. There can even be a bug in your CPU.

John: In your hardware, anywhere.

Pieter: Exactly. We can talk about the boundary in trying to abstract the part of the codebase that intentionally contributes to consensus but it is very hard to say clearly this code has no impact on consensus code because bugs can leak. I think one of the things to learn there is you really want software that is intended for use in a consensus system where not only you have the requirement that if everyone behaves correctly everybody accepts the right answer but also that everybody will disagree about what is an invalid piece of data in lockstep.

John: That condition is much harder.

Pieter: That’s much harder. It is not a usual thing you design things for. Maybe a good thing to bring up is BIP66 DER signature failure. You also had getting rid of OpenSSL on the list of things to talk about. Validation of signatures in Bitcoin’s reference code used to use OpenSSL for validation. Signatures were encoded in whatever data OpenSSL expects.

John: Let’s take a step back and talk about Satoshi implementing Bitcoin. Satoshi wrote a white paper and then produced a reference implementation of Bitcoin. In that reference implementation there was a dependency on OpenSSL that was used for many things.

Pieter: Correct. It was even used for computing the difficulty adjustment I think. It was used for signing. At some point it was used for mining.

John: OpenSSL is a very widely used open source library. It has been deployed in many applications for many years. It wasn’t a bad choice to use OpenSSL.

Pieter: I think it was an obvious choice from a standard software engineering perspective. It was a very reasonable thing to do without things we’ve since learned. What this meant that was even though ECDSA and secp256k1 curve have nicely written up specifications it wasn’t actually these specifications that defined Bitcoin signature validation rules. It was whatever the hell OpenSSL implemented. It turns out what OpenSSL implemented isn’t exactly what the specification says.

John: And isn’t exactly consistent across different platforms.

Pieter: Exactly. What we learned is that the OpenSSL signature parser, at the time, this has since been fixed, at the time allowed certain violations of the DER encoding specification which is a way of structured data in a parsable way that ECDSA specification refers to. OpenSSL used the I think now widely considered bad idea philosophy of being flexible in what you expect and being strict in your output exactly because of the inconsistencies it introduced. OpenSSL allowed signatures that violated the spec. This didn’t mean that this permitted forging a signature. Someone without a private key still could not construct anything that OpenSSL would accept. The problem was that someone with a private key might construct a signature that some versions would accept and others wouldn’t. Indeed in one of these permitted violations of DER it had a bound on the size of a length field and that bound was 32 bits for 32 bit platforms and 64 bits for 64 bit platforms. You could construct a signature at the time that says “The length of this integer is the next 5 bytes.” Those 5 bytes would just contain the number 32 or 33.

John: To get a bit more specific. When we create a signature in ECDSA we have two values, a r value and a s value. Together that forms a signature. When we talk about encoding we’re talking about how we put those values into bits that we transmit across the network. DER encoding has a bunch of fields as well as the r and the s fields which are saying this is the length of the thing…

Pieter: It would start by saying “Here is a concatenation of two things and it is this many bytes.” Then it would say “The first element is an integer and it is this many bytes.” Then you would actually have the data. “The next thing is an integer. It is this many bytes and here is the data.” Then Bitcoin adds a signature hash flag at the end but that is not part of the DER thing. This encoding of the r and s values could either say “It is the next n bytes up to 126” or something but if it is more than that it would include a marker that says “The length of the next field is given in the next n bytes.” The maximum length of that indirect size field was platform dependent in OpenSSL.

John: So what do you do about that? You’ve discovered that Bitcoin is inconsistent with itself.

Pieter: In a similar way that 0.7 and everything before it were inconsistent with itself due to this BDB lock issue. This was a much more concrete thing. You’d know exactly that I can construct a signature that these platforms will accept and these won’t. This wasn’t non-deterministic, this was deterministic. It was just dependent on the platform. The problem was fixing this wasn’t just a database update, this was implicitly part of our consensus rules. So what we needed to do was fix those consensus rules. That is what BIP66 was designed to do. The full rationale for BIP66 wasn’t revealed until long after it was deployed because this was so trivial to exploit. We did keep that hidden for a long time. BIP66’s stated goal which was correct in part was being able to move off OpenSSL. Let’s switch to a very well specified subset of signatures which everybody already produces. The signing code that people were using was sufficiently strict apart from a few other implementations this was generally not a problem. There were concerns at the time about miners that didn’t actually do full validation which would have made it even easier to broadcast such a signature on the network and get it included. That was interesting. Again taught us that even when you think you have a specification of what your consensus rules are everybody would’ve thought there’s this document that specifies ECDSA and secp256k1, that is our specification. It turns out it wasn’t.

John: Consensus is slippery and touches everything.

Jonas: When you’re sitting on an exploit like that, when you’re aware that something is open for exploitation how does that change the process and how do you think about coming up with a solution? You have a time constraint I guess.

Pieter: Given it had been there for a long time there are trade-offs like who do you tell? How fast do you move to fix this because moving too fast might draw suspicion, moving too slow might get exploitable. Really these things always need to be considered on a case by case basis.

John: I think that brings us nicely to the third PR or family of PRs that I have on my list or projects that you’ve contributed to which is libsecp. Can you give us a bit of background on what the genesis of that project was and where it came from?

Pieter: It is not actually known I think why Satoshi picked the secp256k1 curve which was standardized but a very uncommon choice even at the time. I don’t dare to say when, maybe 2012, a post on Bitcointalk by Hal Finney about the special properties that this curve has and presumably why it was picked because it had this promise of accelerated implementation. What this was a particular technique that would allow faster implementation of elliptic curve implementation using an efficiently computable endomorphism. I won’t go into the details unless you want me to but it is a technique that gives you a percentage speedup for multiplication. It also makes certain requirements on the curve that not everyone is as happy with. It also gives you a small speedup for attackers but generally you want an exponential gap between the time it takes for an attacker and honest user anyway. Hal made this post saying “I looked into actually how to implement this particular optimization for the curve. Here is a bit of the math.” I think maybe he had some proof of concept code to show it but I was curious to see how much speed up is this actually going to give. I first tried to look at can I integrate this in OpenSSL itself? Because OpenSSL didn’t have any specialized implementation for this curve nor for this optimization technique in general. I started doing that but OpenSSL was an annoying codebase to work with to say it mildly. I thought how about I just make my own implementation from scratch just to see what the effect is. This started as a small hobby project thinking about… To be fair it is a much easier problem if you are only trying to implement one algorithm for one curve compared to a general library that tries to do everything in cryptography. I had the option of picking specific field representation for how are you going to represent the x and y coordinate. I learned some techniques from how other curves like ed25519 were implemented. I used some of those techniques. I started off by only implementing this optimized construction actually. It turned out when I was done it was maybe a factor of 4 faster than OpenSSL which was a very unexpected result. I hadn’t imagined that with fairly little work it would immediately be that much better. I guess it made sense just by being so specialized and being able to pick data structures that were specifically chosen for this curve rather than generic. You get actually a huge advantage.

John: At this point this was still just a …

Pieter: Yes. This was 2013 probably.

John: Just a personal project. You weren’t thinking about it being part of Bitcoin Core.

Pieter: I open sourced this. It attracted some contributions from Greg Maxwell, not Hal Finney, and Peter Dettman, who is a major contributor to the Bouncy Castle cryptographic library and who by now has probably come up with half of the algorithms in libsecp. Sometimes incremental improvements, sometimes original research and algebraic techniques to optimize things here and there. That has pushed the performance up every time a couple of percent here and there; it adds up. There were assembly implementations for some routines added by people. After a while, including lots and lots of testing, it was adopted: I think in 0.10 the signing code in Bitcoin Core was switched to it and then in 0.12 the validation code was switched to it. This was after BIP66 had activated and we knew the rules on the network are exactly this DER encoding and nothing else. Interestingly by that time this efficient endomorphism GLV optimization was made optional and off by default in libsecp because of potential concern about patents around it. It is kind of ironic that this project started as an attempt to see what the benefit of this optimization was and in the end we chose not to use it. But despite that it was still a very significant performance improvement over OpenSSL.

John: Did you feel some urgency after BIP66 to move across to libsecp?

Pieter: Not really. There was until very recently this vague concern that OpenSSL is a huge library with a huge attack surface. It was not designed with these consensus-like applications in mind. At least as far as the signature validation and parsing went I think at the time we felt that now we understood the scope of what OpenSSL does here and we had restricted it sufficiently. We were fairly confident that that exactly wasn’t going to be a problem anymore. It was more the unknown unknowns for all the other things. I don’t know how fast we moved, I don’t remember.

John: To enumerate some of the benefits of switching to libsecp. It is extremely well tested. It has almost 100% code coverage I believe?

Pieter: I think so.

John: It is much faster than OpenSSL.

Pieter: I think OpenSSL has caught up a bit since.

John: There are many things about libsecp that make the API safe for users I think.

Pieter: It is very much designed to be a hard to misuse library so it doesn’t really expose many low level operations that you might want from a generic cryptographic toolkit. It is designed with fairly high level APIs in mind like validate a signature, parse a signature, create a signature, derive a key and so forth.

John: And lots of thought about constant time and avoiding…

Pieter: Yes, it was also from the start designed to be side channel resistant, or at least against the typical side channels you can protect against in software, namely not having code paths that depend on secret data and not having memory accesses that depend on secret data. Despite that, from the start it didn’t actually do that perfectly. There was some timing leak in very early code that was probably very hard to exploit. There was some table with precomputed values and you need to pick one of them based on secret data, which is a problem. I think what we did was spread out the data so that there’s one byte of every table entry in each group: say there are 16 table entries, then the first 16 bytes contain the first byte of every entry, the next 16 bytes contain the second byte of every entry and so on. You would think now it needs to access all groups of 16 bytes and, given reasonable assumptions about architectures that generally have cache lines of 64 bytes, you would think it is going to access every cache line so there shouldn’t be any leak anymore. It turns out there is a paper that actually shows that even in this case you leak information, because for the first byte and the second byte of things in a cache line there is a very small difference in timing in when they become available, and that can be observed. The fix is to actually access every byte, using a conditional move construction where you actually read through every byte always.

Jonas: As you talk about the history and certainly you’ve forgotten more about Bitcoin than most of us will ever learn, as you go back and you think about Satoshi’s reference implementation what are things that you would imagine you would want to do from the beginning? Things that are baked into the software that are difficult to shake even now as you’ve made contributions over the years that you would want to have done differently from the beginning?

Pieter: You mean in code design or actually how Bitcoin works?

Jonas: I think either. I think in terms of code design putting most of the code in one file wasn’t particularly helpful from the beginning. In terms of design choices.

Pieter: It is of course slow to change things but I think given enough time, if you are just talking about code design questions, if we have agreement that we need to move to this other design we can do it, whatever it is. You mention of course everything being in one file: in the 2010 codebase the wallet and the consensus validation were all in one file, including direct calls to the UI. That was really hard to reason about. The wallet tracking which outputs had been spent was actually done through a callback from the script verifier that would tell the wallet “Hey I’ve seen a validation with this input.” This was really hard to reason about. Of course these days there is a lot more complexity so I don’t want to claim that it is easier to reason about things today. Relative to its complexity and how much it was actually doing it was fairly hairy back then.

John: Yeah I think we’ve made enormous strides. There’s a well defined interface between the wallet and the node so we can be confident that it is not involved in consensus.

Pieter: Yes exactly. There is still a lot of work to do there too but I think we are getting there. You talk about how Bitcoin could have been designed differently; that is a very hard question because you inevitably run into philosophical questions like if it were to have been designed differently would it have taken off? Especially if you go into questions like economic policy. That’s really hard to guess. I think there are lots of things we have learned. The concept of P2SH was clearly not present in the original design; it could have been done in a much simpler way if there would’ve been something like P2SH from the beginning. Yet it seems so obvious that this is preferable. Before P2SH, if you personally had some multisig policy, and nobody was using multisig at the time, I guess that was part of the reason, but if you would’ve wanted to use a multisig policy to protect your coins with a device or cold storage key and an online key, you would’ve somehow needed to convey to anyone who wanted to pay you how to construct this script that includes your policy. That is annoying for multiple reasons: a) that is none of their business, why do I need to tell you “Hey look I’m using a multisig policy, it is just for my own protection.” Secondly you would be paying the fees for paying to my complex script. That should not have been a concern of yours either. Lastly everything you put in an output leaks into the utxo set and as we now know the size of the utxo set is a critical scaling parameter of the system.

John: I’ll add fourthly you would have really long addresses which would be kind of annoying.

Pieter: Exactly, you would need a standard for conveying that information, which would inevitably be variable length if you go for big scripts. I don’t even think that all of these advantages were talked about at the time when P2SH was created. I think it was just the last one: we have no address format for this, it is really hard to create one, we can make it simpler by hashing the script first. I think the other advantages were things that were only realized later, how much of a better design this is. Of course we have since iterated on that. SegWit I think is clearly something that should have been done from the beginning. The fact that signatures leak into the txid made it really hard for all kinds of more complex constructions. At the same time Bitcoin was the first thing in its class and it is unrealistic to expect it to get everything right from the beginning. Thankfully I think we’ve learned very well how to do safe upgrades to some of this.

John: I agree entirely that this isn’t really an exercise for faulting Satoshi for the mistakes. But if I could wave a magic wand SegWit from the genesis block would be great because then the block could commit to the signatures, the wtxid whereas now it doesn’t.

Pieter: In SegWit it does. In SegWit there is a coinbase output that contains a hash with the root of a Merkle tree that commits to all wtxids.

John: Right, yes. But you don’t know that until you deserialize the transactions from the block which is a little bit annoying.

Jonas: How do you think about how to spend your time on Bitcoin? There are so many ways to get nerd sniped and so many directions you could contribute to. What are you excited about and then how do you feel the pull of your personal excitement versus the pull of what’s necessary for someone like you to contribute to Bitcoin?

Pieter: That is a good question, I don’t have a good answer. I try to work on things I’m excited about but sometimes this also means following through on something after you’ve lost some of the excitement about it because you have worked on this and people expect you to continue. It is a hard question. I expect this in general in open source to be a problem. There is no set direction and ultimately people choose how to spend their own time themselves. What am I excited about? I’m happy with the progress we’ve made with Taproot review and how that is going. I’m excited to see that progress further. There are some interesting changes people are working on related to the peer-to-peer protocol, things like Erlay that I contributed to. There are too many things to mention.

John: I think Taproot, Schnorr plus Erlay is a good start for things to get excited about. Shall we wrap up there? Thank you Pieter.

\ No newline at end of file diff --git a/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part2/index.html b/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part2/index.html index b6827dea99..47c528805b 100644 --- a/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part2/index.html +++ b/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille-part2/index.html @@ -11,4 +11,4 @@ Pieter Wuille

Date: January 28, 2020

Transcript By: Michael Folkson

Tags: Bitcoin core

Category: Podcast

Media: -https://www.youtube.com/watch?v=Q2lXSRcacAo

\ No newline at end of file +https://www.youtube.com/watch?v=Q2lXSRcacAo

Jonas: We are gonna pick up where we left off in episode 1 with a discussion of lessons learned from the 0.8 consensus failure. We then go on to cover libsecp and Pieter’s thoughts about Bitcoin in 2020. We hope you enjoy this as much as we did.

John: Ok I have a bunch of questions from that. One is what are the lessons from that?

Pieter: One of the things I think we learned from that is that specifying what your consensus rules are is really hard. That doesn’t mean you can’t try but who would’ve thought that a configuration setting in the database layer you are using actually leaked semantically into Bitcoin’s implicitly defined consensus rules. You can attribute that to human failure of course. We should’ve read the documentation and been aware of that.

John: Would testing have caught this?

Pieter: Probably things like modern fuzzing could’ve found this. Who knows right? There could be a bug in your C library. There can be a bug in your kernel. There can even be a bug in your CPU.

John: In your hardware, anywhere.

Pieter: Exactly. We can talk about the boundary in trying to abstract the part of the codebase that intentionally contributes to consensus but it is very hard to say clearly this code has no impact on consensus code because bugs can leak. I think one of the things to learn there is you really want software that is intended for use in a consensus system where not only you have the requirement that if everyone behaves correctly everybody accepts the right answer but also that everybody will disagree about what is an invalid piece of data in lockstep.

John: That condition is much harder.

Pieter: That’s much harder. It is not a usual thing you design things for. Maybe a good thing to bring up is the BIP66 DER signature failure. You also had getting rid of OpenSSL on the list of things to talk about. Validation of signatures in Bitcoin’s reference code used to use OpenSSL for validation. Signatures were encoded in whatever format OpenSSL expects.

John: Let’s take a step back and talk about Satoshi implementing Bitcoin. Satoshi wrote a white paper and then produced a reference implementation of Bitcoin. In that reference implementation there was a dependency on OpenSSL that was used for many things.

Pieter: Correct. It was even used for computing the difficulty adjustment I think. It was used for signing. At some point it was used for mining.

John: OpenSSL is a very widely used open source library. It has been deployed in many applications for many years. It wasn’t a bad choice to use OpenSSL.

Pieter: I think it was an obvious choice from a standard software engineering perspective. It was a very reasonable thing to do without the things we’ve since learned. What this meant was that even though ECDSA and the secp256k1 curve have nicely written up specifications, it wasn’t actually these specifications that defined Bitcoin’s signature validation rules. It was whatever the hell OpenSSL implemented. It turns out what OpenSSL implemented isn’t exactly what the specification says.

John: And isn’t exactly consistent across different platforms.

Pieter: Exactly. What we learned is that the OpenSSL signature parser, at the time, this has since been fixed, allowed certain violations of the DER encoding specification, which is a way of structuring data in a parsable way that the ECDSA specification refers to. OpenSSL used the philosophy, I think now widely considered a bad idea exactly because of the inconsistencies it introduces, of being flexible in what you accept and strict in what you output. OpenSSL allowed signatures that violated the spec. This didn’t mean that this permitted forging a signature. Someone without a private key still could not construct anything that OpenSSL would accept. The problem was that someone with a private key might construct a signature that some versions would accept and others wouldn’t. Indeed one of these permitted violations of DER had a bound on the size of a length field, and that bound was 32 bits on 32-bit platforms and 64 bits on 64-bit platforms. You could construct a signature at the time that says “The length of this integer is given in the next 5 bytes.” Those 5 bytes would just contain the number 32 or 33.

John: To get a bit more specific. When we create a signature in ECDSA we have two values, an r value and an s value. Together they form a signature. When we talk about encoding we’re talking about how we put those values into bits that we transmit across the network. DER encoding has a bunch of fields as well as the r and the s fields, which are saying this is the length of the thing…

Pieter: It would start by saying “Here is a concatenation of two things and it is this many bytes.” Then it would say “The first element is an integer and it is this many bytes.” Then you would actually have the data. “The next thing is an integer. It is this many bytes and here is the data.” Then Bitcoin adds a signature hash flag at the end but that is not part of the DER thing. This encoding of the r and s values could either say “It is the next n bytes up to 126” or something but if it is more than that it would include a marker that says “The length of the next field is given in the next n bytes.” The maximum length of that indirect size field was platform dependent in OpenSSL.
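To make the encoding Pieter describes concrete, here is a rough Python sketch of a strict DER check in the spirit of BIP66’s reference pseudocode. It is an illustration, not Bitcoin Core’s actual code, and the byte strings in the usage lines are made-up dummies rather than real signatures.

```python
def is_strict_der(sig: bytes) -> bool:
    # Expected layout: 0x30 [total-len] 0x02 [R-len] [R] 0x02 [S-len] [S] [sighash]
    if len(sig) < 9 or len(sig) > 73:
        return False
    if sig[0] != 0x30:                        # compound structure tag
        return False
    if sig[1] != len(sig) - 3:                # single-byte length, excludes tag, length byte and sighash
        return False
    len_r = sig[3]
    if 5 + len_r >= len(sig):
        return False
    len_s = sig[5 + len_r]
    if len_r + len_s + 7 != len(sig):
        return False
    if sig[2] != 0x02 or sig[len_r + 4] != 0x02:   # both values must be DER integers
        return False
    if len_r == 0 or sig[4] & 0x80:                # R non-empty and not negative
        return False
    if len_r > 1 and sig[4] == 0x00 and not sig[5] & 0x80:
        return False                               # no unnecessary leading zero in R
    if len_s == 0 or sig[len_r + 6] & 0x80:        # S non-empty and not negative
        return False
    if len_s > 1 and sig[len_r + 6] == 0x00 and not sig[len_r + 7] & 0x80:
        return False                               # no unnecessary leading zero in S
    return True

# A minimal well-formed encoding (dummy r = s = 1, sighash flag 0x01) passes:
print(is_strict_der(bytes([0x30, 0x06, 0x02, 0x01, 0x01, 0x02, 0x01, 0x01, 0x01])))   # True
# The OpenSSL-style trick Pieter mentions, "the length of this integer is given in
# the next bytes", uses a long-form length byte (0x85) and is rejected:
print(is_strict_der(bytes([0x30, 0x0b, 0x02, 0x85, 0x00, 0x00, 0x00, 0x00, 0x01,
                           0x01, 0x02, 0x01, 0x01, 0x01])))                           # False
```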

John: So what do you do about that? You’ve discovered that Bitcoin is inconsistent with itself.

Pieter: In a similar way that 0.7 and everything before it were inconsistent with themselves due to this BDB lock issue. This was a much more concrete thing. You’d know exactly that I can construct a signature that these platforms will accept and these won’t. This wasn’t non-deterministic, this was deterministic. It was just dependent on the platform. The problem was fixing this wasn’t just a database update, this was implicitly part of our consensus rules. So what we needed to do was fix those consensus rules. That is what BIP66 was designed to do. The full rationale for BIP66 wasn’t revealed until long after it was deployed because this was so trivial to exploit. We did keep that hidden for a long time. BIP66’s stated goal, which was correct in part, was being able to move off OpenSSL: let’s switch to a very well specified subset of signatures which everybody already produces. The signing code that people were using was sufficiently strict, so apart from a few other implementations this was generally not a problem. There were concerns at the time about miners that didn’t actually do full validation, which would have made it even easier to broadcast such a signature on the network and get it included. That was interesting. Again it taught us that even when you think you have a specification of what your consensus rules are, you may not. Everybody would’ve thought there’s this document that specifies ECDSA and secp256k1, that is our specification. It turns out it wasn’t.

John: Consensus is slippery and touches everything.

Jonas: When you’re sitting on an exploit like that, when you’re aware that something is open for exploitation how does that change the process and how do you think about coming up with a solution? You have a time constraint I guess.

Pieter: Given it had been there for a long time there are trade-offs like who do you tell? How fast do you move to fix this? Moving too fast might draw suspicion, moving too slowly might get it exploited. Really these things always need to be considered on a case by case basis.

John: I think that brings us nicely to the third PR or family of PRs that I have on my list or projects that you’ve contributed to which is libsecp. Can you give us a bit of background on what the genesis of that project was and where it came from?

Pieter: It is not actually known I think why Satoshi picked the secp256k1 curve, which was standardized but a very uncommon choice even at the time. I don’t dare to say when, maybe 2012, there was a post on Bitcointalk by Hal Finney about the special properties that this curve has and presumably why it was picked, because it had this promise of an accelerated implementation. What this was was a particular technique that would allow a faster implementation of elliptic curve multiplication using an efficiently computable endomorphism. I won’t go into the details unless you want me to but it is a technique that gives you a percentage speedup for multiplication. It also makes certain requirements on the curve that not everyone is as happy with. It also gives you a small speedup for attackers but generally you want an exponential gap between the time it takes for an attacker and an honest user anyway. Hal made this post saying “I looked into actually how to implement this particular optimization for the curve. Here is a bit of the math.” I think maybe he had some proof of concept code to show it but I was curious to see how much speedup this was actually going to give. I first tried to look at whether I could integrate this in OpenSSL itself, because OpenSSL didn’t have any specialized implementation for this curve nor for this optimization technique in general. I started doing that but OpenSSL was an annoying codebase to work with, to put it mildly. I thought how about I just make my own implementation from scratch just to see what the effect is. This started as a small hobby project thinking about… To be fair it is a much easier problem if you are only trying to implement one algorithm for one curve compared to a general library that tries to do everything in cryptography. I had the option of picking a specific field representation for how you are going to represent the x and y coordinates. I learned some techniques from how other curves like ed25519 were implemented. I used some of those techniques. I started off by only implementing this optimized construction actually. It turned out when I was done it was maybe a factor of 4 faster than OpenSSL which was a very unexpected result. I hadn’t imagined that with fairly little work it would immediately be that much better. I guess it made sense just by being so specialized and being able to pick data structures that were specifically chosen for this curve rather than generic. You actually get a huge advantage.
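As a rough sketch of the endomorphism trick being referred to (textbook affine arithmetic for illustration only, nothing like libsecp256k1’s optimized code): on secp256k1 there is a value β, a cube root of unity modulo the field prime, such that mapping (x, y) to (β·x, y) is the same as multiplying the point by some scalar λ, a cube root of unity modulo the group order. That cheap map is what a GLV-style speedup builds on.

```python
# Sketch: check that phi(x, y) = (beta*x, y) equals scalar multiplication by lambda.
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # point at infinity
    if P == Q:
        s = 3 * x1 * x1 * pow(2 * y1, p - 2, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, p - 2, p) % p    # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def cube_root_of_unity(m):
    # m is prime with m % 3 == 1, so a nontrivial cube root of 1 exists.
    g = 2
    while pow(g, (m - 1) // 3, m) == 1:
        g += 1
    return pow(g, (m - 1) // 3, m)

beta = cube_root_of_unity(p)
lam = cube_root_of_unity(n)

P = mul(123456789, G)                    # arbitrary point for the check
phi_P = (beta * P[0] % p, P[1])          # the cheap map: multiply x by beta
if mul(lam, P) != phi_P:
    lam = pow(lam, 2, n)                 # the other nontrivial root pairs with this beta
assert mul(lam, P) == phi_P
print("phi(P) == lam*P holds")
```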

John: At this point this was still just a …

Pieter: Yes. This was 2013 probably.

John: Just a personal project. You weren’t thinking about it being part of Bitcoin Core.

Pieter: I open sourced this. It attracted some contributions from Greg Maxwell, not Hal Finney, and Peter Dettman, who is a major contributor to the Bouncy Castle cryptographic library and who by now has probably come up with half of the algorithms in libsecp. Sometimes incremental improvements, sometimes original research and algebraic techniques to optimize things here and there. That has pushed the performance up every time a couple of percent here and there; it adds up. There were assembly implementations for some routines added by people. After a while, including lots and lots of testing, it was adopted: I think in 0.10 the signing code in Bitcoin Core was switched to it and then in 0.12 the validation code was switched to it. This was after BIP66 had activated and we knew the rules on the network are exactly this DER encoding and nothing else. Interestingly by that time this efficient endomorphism GLV optimization was made optional and off by default in libsecp because of potential concern about patents around it. It is kind of ironic that this project started as an attempt to see what the benefit of this optimization was and in the end we chose not to use it. But despite that it was still a very significant performance improvement over OpenSSL.

John: Did you feel some urgency after BIP66 to move across to libsecp?

Pieter: Not really. There was until very recently this vague concern that OpenSSL is a huge library with a huge attack surface. It was not designed with these consensus-like applications in mind. At least as far as the signature validation and parsing went I think at the time we felt that now we understood the scope of what OpenSSL does here and we had restricted it sufficiently. We were fairly confident that that exactly wasn’t going to be a problem anymore. It was more the unknown unknowns for all the other things. I don’t know how fast we moved, I don’t remember.

John: To enumerate some of the benefits of switching to libsecp. It is extremely well tested. It has almost 100% code coverage I believe?

Pieter: I think so.

John: It is much faster than OpenSSL.

Pieter: I think OpenSSL has caught up a bit since.

John: There are many things about libsecp that make the API safe for users I think.

Pieter: It is very much designed to be a hard to misuse library so it doesn’t really expose many low level operations that you might want from a generic cryptographic toolkit. It is designed with fairly high level APIs in mind like validate a signature, parse a signature, create a signature, derive a key and so forth.

John: And lots of thought about constant time and avoiding…

Pieter: Yes, it was also from the start designed to be side channel resistant, or at least against the typical side channels you can protect against in software, namely not having code paths that depend on secret data and not having memory accesses that depend on secret data. Despite that, from the start it didn’t actually do that perfectly. There was some timing leak in very early code that was probably very hard to exploit. There was some table with precomputed values and you need to pick one of them based on secret data, which is a problem. I think what we did was spread out the data so that there’s one byte of every table entry in each group: say there are 16 table entries, then the first 16 bytes contain the first byte of every entry, the next 16 bytes contain the second byte of every entry and so on. You would think now it needs to access all groups of 16 bytes and, given reasonable assumptions about architectures that generally have cache lines of 64 bytes, you would think it is going to access every cache line so there shouldn’t be any leak anymore. It turns out there is a paper that actually shows that even in this case you leak information, because for the first byte and the second byte of things in a cache line there is a very small difference in timing in when they become available, and that can be observed. The fix is to actually access every byte, using a conditional move construction where you actually read through every byte always.
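The fix described amounts to touching every table entry and selecting the wanted one with arithmetic masking instead of indexing by the secret value. A very loose Python sketch of that access pattern only; Python gives no real constant-time guarantees, and in libsecp256k1 this is done in C with conditional-move style code:

```python
def masked_table_lookup(table, secret_index):
    # Read every byte of every entry; keep only the wanted entry via a mask,
    # so the sequence of memory accesses does not depend on secret_index.
    out = [0] * len(table[0])
    for i, entry in enumerate(table):
        mask = -(i == secret_index) & 0xFF   # 0xFF if selected, 0x00 otherwise
        for j, b in enumerate(entry):
            out[j] |= b & mask
    return bytes(out)

table = [bytes([i] * 4) for i in range(16)]  # toy precomputed table
print(masked_table_lookup(table, 5))         # b'\x05\x05\x05\x05'
```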

Jonas: As you talk about the history and certainly you’ve forgotten more about Bitcoin than most of us will ever learn, as you go back and you think about Satoshi’s reference implementation what are things that you would imagine you would want to do from the beginning? Things that are baked into the software that are difficult to shake even now as you’ve made contributions over the years that you would want to have done differently from the beginning?

Pieter: You mean in code design or actually how Bitcoin works?

Jonas: I think either. I think in terms of code design putting most of the code in one file wasn’t particularly helpful from the beginning. In terms of design choices.

Pieter: It is of course slow to change things but I think given enough time, if you are just talking about code design questions, if we have agreement that we need to move to this other design we can do it, whatever it is. You mention of course everything being in one file: in the 2010 codebase the wallet and the consensus validation were all in one file, including direct calls to the UI. That was really hard to reason about. The wallet tracking which outputs had been spent was actually done through a callback from the script verifier that would tell the wallet “Hey I’ve seen a validation with this input.” This was really hard to reason about. Of course these days there is a lot more complexity so I don’t want to claim that it is easier to reason about things today. Relative to its complexity and how much it was actually doing it was fairly hairy back then.

John: Yeah I think we’ve made enormous strides. There’s a well defined interface between the wallet and the node so we can be confident that it is not involved in consensus.

Pieter: Yes exactly. There is still a lot of work to do there too but I think we are getting there. You talk about how Bitcoin could have been designed differently; that is a very hard question because you inevitably run into philosophical questions like if it were to have been designed differently would it have taken off? Especially if you go into questions like economic policy. That’s really hard to guess. I think there are lots of things we have learned. The concept of P2SH was clearly not present in the original design; it could have been done in a much simpler way if there would’ve been something like P2SH from the beginning. Yet it seems so obvious that this is preferable. Before P2SH, if you personally had some multisig policy, and nobody was using multisig at the time, I guess that was part of the reason, but if you would’ve wanted to use a multisig policy to protect your coins with a device or cold storage key and an online key, you would’ve somehow needed to convey to anyone who wanted to pay you how to construct this script that includes your policy. That is annoying for multiple reasons: a) that is none of their business, why do I need to tell you “Hey look I’m using a multisig policy, it is just for my own protection.” Secondly you would be paying the fees for paying to my complex script. That should not have been a concern of yours either. Lastly everything you put in an output leaks into the utxo set and as we now know the size of the utxo set is a critical scaling parameter of the system.

John: I’ll add fourthly you would have really long addresses which would be kind of annoying.

Pieter: Exactly, you would need a standard for conveying that information, which would inevitably be variable length if you go for big scripts. I don’t even think that all of these advantages were talked about at the time when P2SH was created. I think it was just the last one: we have no address format for this, it is really hard to create one, we can make it simpler by hashing the script first. I think the other advantages were things that were only realized later, how much of a better design this is. Of course we have since iterated on that. SegWit I think is clearly something that should have been done from the beginning. The fact that signatures leak into the txid made it really hard for all kinds of more complex constructions. At the same time Bitcoin was the first thing in its class and it is unrealistic to expect it to get everything right from the beginning. Thankfully I think we’ve learned very well how to do safe upgrades to some of this.
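For reference, “hashing the script first” looks roughly like this: the payer only ever sees a 20-byte hash of the redeem script, wrapped in an address. A sketch with a placeholder script blob rather than a real multisig script; it also assumes hashlib exposes ripemd160, which some OpenSSL builds do not:

```python
import hashlib

def hash160(data: bytes) -> bytes:
    # RIPEMD160(SHA256(data)); may raise on builds without ripemd160 support.
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

def base58check(payload: bytes) -> str:
    alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
    data = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    num = int.from_bytes(data, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = alphabet[rem] + out
    pad = len(data) - len(data.lstrip(b"\x00"))   # leading zero bytes become '1's
    return "1" * pad + out

# Placeholder blob standing in for a real redeem script encoding the multisig policy.
redeem_script = bytes.fromhex("52")
p2sh_payload = bytes([0x05]) + hash160(redeem_script)   # 0x05 = mainnet P2SH version byte
print(base58check(p2sh_payload))                        # a "3..." address committing to the script
```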

John: I agree entirely that this isn’t really an exercise for faulting Satoshi for the mistakes. But if I could wave a magic wand SegWit from the genesis block would be great because then the block could commit to the signatures, the wtxid whereas now it doesn’t.

Pieter: In SegWit it does. In SegWit there is a coinbase output that contains a hash with the root of a Merkle tree that commits to all wtxids.

John: Right, yes. But you don’t know that until you deserialize the transactions from the block which is a little bit annoying.

Jonas: How do you think about how to spend your time on Bitcoin? There are so many ways to get nerd sniped and so many directions you could contribute to. What are you excited about and then how do you feel the pull of your personal excitement versus the pull of what’s necessary for someone like you to contribute to Bitcoin?

Pieter: That is a good question, I don’t have a good answer. I try to work on things I’m excited about but sometimes this also means following through on something after you’ve lost some of the excitement about it because you have worked on this and people expect you to continue. It is a hard question. I expect this in general in open source to be a problem. There is no set direction and ultimately people choose how to spend their own time themselves. What am I excited about? I’m happy with the progress we’ve made with Taproot review and how that is going. I’m excited to see that progress further. There are some interesting changes people are working on related to the peer-to-peer protocol, things like Erlay that I contributed to. There are too many things to mention.

John: I think Taproot, Schnorr plus Erlay is a good start for things to get excited about. Shall we wrap up there? Thank you Pieter.

\ No newline at end of file diff --git a/chaincode-labs/chaincode-podcast/chaincode-decoded-bech32m/index.html b/chaincode-labs/chaincode-podcast/chaincode-decoded-bech32m/index.html index ad80f24c1c..ac0fe63bfe 100644 --- a/chaincode-labs/chaincode-podcast/chaincode-decoded-bech32m/index.html +++ b/chaincode-labs/chaincode-podcast/chaincode-decoded-bech32m/index.html @@ -76,7 +76,7 @@ If they were using the same address format, other than the version zero and version one, there’s no difference between the space that they’re allowed to be in. By downgrading to v0, they would essentially be creating a valid output type that is to be interpreted as a P2WSH output, but it cannot be resolved as such because there’s no script hash that resolves to it, because taproot uses a pubkey. That’s not compatible. -People wanted to prevent that from happening on the one hand, and on the other hand, about a year or two years ago, someone discovered that bech32 actually had a length extension mutation weakness.

Bech32 length extension mutation weakness

Mark Erhardt: 00:11:26

You can think of bech32 addresses as just as a huge polynomial. +People wanted to prevent that from happening on the one hand, and on the other hand, about a year or two years ago, someone discovered that bech32 actually had a length extension mutation weakness.

Bech32 length extension mutation weakness

Mark Erhardt: 00:11:26

You can think of bech32 addresses as just a huge polynomial. The character set, the 32 characters, are just encoding numbers from 0 to 31, and each character is the coefficient of one of the polynomial terms. Now it turns out that because the checksum was using a constant of 1, when you have an address that ends on the character p, which encodes the value 1, you can insert q’s or remove q’s right in front of the last letter p. Q encodes the value zero. diff --git a/chaincode-labs/chaincode-podcast/package-relay/index.html b/chaincode-labs/chaincode-podcast/package-relay/index.html index a3f71b96e9..10d33f64d5 100644 --- a/chaincode-labs/chaincode-podcast/package-relay/index.html +++ b/chaincode-labs/chaincode-podcast/package-relay/index.html @@ -16,7 +16,7 @@ Enjoy.
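To see the bech32 weakness Mark describes above in code, here is a sketch built on the BIP173 reference checksum routines: it searches for a data part whose bech32 string happens to end in “p”, then shows that inserting “q” characters right before that final “p” still verifies (bech32m changes the checksum constant precisely to close this off).

```python
import random

CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"   # 'q' encodes 0, 'p' encodes 1

def bech32_polymod(values):
    GEN = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32_verify(hrp, data):
    return bech32_polymod(bech32_hrp_expand(hrp) + data) == 1   # bech32's constant is 1

def bech32_create_checksum(hrp, data):
    values = bech32_hrp_expand(hrp) + data
    polymod = bech32_polymod(values + [0] * 6) ^ 1
    return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

hrp = "bc"
while True:                                   # find some string whose last character is 'p'
    data = [random.randrange(32) for _ in range(8)]
    data += bech32_create_checksum(hrp, data)
    if data[-1] == 1:
        break

print(bech32_verify(hrp, data))                              # True: valid as created
mutated = data[:-1] + [0, 0, 0] + [data[-1]]                 # insert three 'q's before the final 'p'
print(bech32_verify(hrp, mutated))                           # True: the checksum does not catch it
print(hrp + "1" + "".join(CHARSET[d] for d in mutated))      # the mutated string
```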

Adam Jonas: 00:00:35

Hey, Gloria.

Gloria Zhao: 00:00:36

Hello.

Adam Jonas: 00:00:37

Welcome back to Chaincode. This is our second time recording, but this one’s going to be comprehensible, I think (Gloria’s laughter). I think we’re going to make it work.

Mark Erhardt: 00:00:44

Yeah, I promise not to break all the discussions.

Adam Jonas: 00:00:48

It’ll be fine, we don’t have to… let’s not get too in over our head.

Gloria Zhao: 00:00:51

It’s a deep topic.

Adam Jonas: 00:00:52

It’s a deep topic. -What I like about this conversation is you have something very specific to talk about.

Gloria Zhao: 00:00:58

Right.

Adam Jonas: 00:00:58

So you wrote to the mailing list.

Gloria Zhao: 00:01:00

Yes.

What’s package relay?

Adam Jonas: 00:01:01

And you proposed package relay. +What I like about this conversation is you have something very specific to talk about.

Gloria Zhao: 00:00:58

Right.

Adam Jonas: 00:00:58

So you wrote to the mailing list.

Gloria Zhao: 00:01:00

Yes.

What’s package relay?

Adam Jonas: 00:01:01

And you proposed package relay. What’s package relay?

Gloria Zhao: 00:01:05

I proposed some implementation of package relay, which is a concept that’s been talked about for at least seven, maybe nine years. It’s the concept of requesting, announcing, and downloading groups of transactions together, namely related transactions. A package is a widely used term for some set of transactions that have a dependency relationship. @@ -65,7 +65,7 @@ How do we then make it even more complicated and add package relay? So it requires a lot of exploring.
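As a toy illustration of why related transactions are considered together (made-up numbers, not Bitcoin Core’s actual acceptance logic): a parent below a node’s feerate floor can become worth accepting once a fee-bumping child is evaluated with it as a package.

```python
MIN_FEERATE = 3.0  # sat/vB, a hypothetical mempool floor

def feerate(txs):
    # Aggregate feerate across a set of transactions (fees in sats, sizes in vbytes).
    return sum(t["fee"] for t in txs) / sum(t["vsize"] for t in txs)

parent = {"txid": "parent", "fee": 200,   "vsize": 200}   # 1 sat/vB, rejected on its own
child  = {"txid": "child",  "fee": 2_000, "vsize": 150}   # high-fee child spending the parent

print(feerate([parent]))          # 1.0  -> below the floor individually
print(feerate([parent, child]))   # ~6.3 -> the package feerate clears the floor
```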

Adam Jonas: 00:09:30

As you’re thinking about the complexity, have you been able to clean up things as you’re complicating things?

Gloria Zhao: 00:09:36

Yeah, I think that that’s my approach. Not to make it more spaghetti, and also I think refactoring helps clarify the interface for everyone. -So for example, part of Package RBF was modularizing and documenting our current Replace-by-Fee policy and pushing that into its own module. +So for example, part of Package RBF was modularizing and documenting our current Replace-by-Fee policy and pushing that into its own module. Now it’s just five helper functions and Package RBF, as I’ve implemented it now, is just calling those same functions with a few different arguments. So that’s nice, and now we hopefully understand RBF better. But that kind of also opened a can of worms, and now people want all of the RBF pinning attacks to be solved. diff --git a/chaincode-labs/chaincode-residency/2019-06-17-john-newbery-security-models/index.html b/chaincode-labs/chaincode-residency/2019-06-17-john-newbery-security-models/index.html index b296202069..34d84d7ca8 100644 --- a/chaincode-labs/chaincode-residency/2019-06-17-john-newbery-security-models/index.html +++ b/chaincode-labs/chaincode-residency/2019-06-17-john-newbery-security-models/index.html @@ -13,6 +13,6 @@ John Newbery

Date: June 17, 2019

Transcript By: Caralie Chrisco

Tags: Security, Lightweight client

Category: Residency

Media: https://youtu.be/6gGcS4N5Rg4

Topic: Security Models

Location: Chaincode Labs 2019 Residency

Slides: https://residency.chaincode.com/presentations/bitcoin/security_models.pdf

Intro

John Newbery: Alright, Security Models. This is going to be like a quick whistle-stop tour of various things, very high-level view. I’m going to start by giving you some kind of framework to think about things. So in cryptography, we often talk about security proofs in terms of existing schemes, we talk about assumptions.

So, for example… If you can break new scheme B, then you’ll also be able to break old scheme A. That’s how we make security proofs. So for example, under the random oracle model, where we have a hash function that we model as being random, if you can break taproot, then you can break the discrete log problem over the elliptic curve in general.

The contrapositive is true that if old scheme A is secure then new scheme B is secure. If you can break new scheme B, you can break old scheme A. So, why do we think the discrete log problem is hard over elliptic curves?

Elichai: Because exponentiating is very easy but doing logarithms is very hard. We don’t know any algorithms that can do logarithms efficiently, so the second you have big numbers, you’re stuck.

John: Yeah, because we haven’t broken it yet. That’s kind of an interesting framework to think about these things when you’re talking about security models, you have to ask, compared to what? So that’s what we can start with. We’re going to start with the gold standard, which is the full node, and we’ll talk about other schemes, and we’ll say how does this compare to a full node?

A lot of these things are about lowering the cost of being able to interact with the Bitcoin network. All of them are. Running a full node is expensive in terms of bandwidth, memory, compute. So how can we allow people to interact with the network while maintaining an acceptable security level? There will be trade-offs here, and that’s what we’re going to talk about.

So full and pruned nodes are where we’re going to start. We’re going to talk about light clients, of course, SPV. We’re going to touch on checkpoints, assumevalid, assumeutxo, which I assume you’ve all heard of. Then we’re going to talk about some alternative UTXO set proposals. Then a bit of further reading.

Full Nodes

So what does a full node do? That’s a question as well. What does a full node do?

Various Audience Members: Validate, relay, store transactions for the full blockchain.

John: So downloads all headers and blocks, validate and relay transactions in those blocks in the most-work chain. They may validate and relay unconfirmed transactions. What do you think about that? If I have a full node that is not validating unconfirmed transactions, is that a full node?

Audience Member: From the perspective of the person who wants to verify the transaction, I guess it would be.

Audience Member: If a transaction is not confirmed, it’s not confirmed, right? If you only cared about confirmed transactions and you downloaded all the blocks, then you might not need to relay or validate unconfirmed.

John: Yeah, so I think it’s useful to think about the transaction relay network separately from the block propagation network. I would call a full node something that is on the propagation network and is validating the blockchain, and then on top of that, you can also have a mempool and relay transactions. Those are two separate functions and two separate networks. They just happen to share the same mode of transport right now. But that’s not necessary. Lucianna?

Lucianna: From a security standpoint, full nodes simply valid transactions. The full nodes would only be sending you valid transactions? (inaudible…)

John: You can run a full node that is not relaying unconfirmed transactions, using blocks-only as an option, and then you basically don’t have a mempool (you do, but you won’t fill it from your peers, and you won’t relay).

Audience Member: So there’s no validity guarantee, right?

John: Right.

Audience Member: Somebody could be running a different node. There’s no guarantee that there is consensus, given the same rule set, there’s eventual consistency, but there’s no guarantee that at any given time, I’ve caught up.

John: I’m just talking about the difference between relaying unconfirmed transactions and syncing the blockchain. You’re right that there’s no guarantee that - we have, as an assumption that you’re connected to at least one honest peer, and if you are, then you will sync to the most-work blockchain.

Pruned Nodes

Okay, so pruned nodes. What does a pruned node do?

Antoine: Blocks are validated, and you just keep the range of the last block near the tip.

Audience Member: To save space.

John: Yep, downloads, and validates all blocks and headers and discards all block files over a certain storage limit. Is that still a full node?

Antoine: Yeah.

John: Lucianna?

Lucianna: Question, also the answer would be no if the reorg is larger than how much they have stored. Do they keep the headers or just discard the bodies or discard the full data?

John: It discards the full data, the full serialized block. You keep the block index, which is your view of the most-work chain. You discard all of the serialized data.

Audience member: You also keep all the undo files. So you can efficiently undo a block.

John: You discard all of the full block files and all of the undo files over a certain limit. So Lucianna raised an interesting point, if there’s a reorg too deep, then you can’t reorg back to the most-work chain. That is a difference in security assumptions. But the default here is… 550 megabytes. So the idea is that you have at least some amount of reorg that you can go back to.

Audience Member: What will the pruned full node do if there is a deep reorg?

John: I think it will stop.

Audience Member: Stop working?

John: Yes.

Audience Member: It won’t ask for blocks from others?

John: No. There’s no mechanism for getting old blocks and re-orging that. You need the undo data. You need to know about the transactions that were spent in the block, and you can’t get that over the peer-to-peer network.

Audience Member: So 550 megabytes you said?

John: That’s right.

Audience Member: That’s only like what, like 200, 300 blocks, 400, right?

Audience cross talk…

John: Yeah, you need, I think, a day or two of blocks plus undo data, because the undo data is about the same size; again, a block is 1.5 megabytes.
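As a rough back-of-the-envelope check of that estimate, using the figures mentioned here (these are assumptions for illustration; real block and undo sizes vary):

```python
# Assumed figures from the discussion above: ~1.5 MB blocks, undo data of similar size.
prune_target_mb = 550   # the prune target being discussed
block_mb = 1.5
undo_mb = 1.5

blocks_kept = prune_target_mb / (block_mb + undo_mb)
print(round(blocks_kept))           # ~183 blocks retained
print(round(blocks_kept / 144, 1))  # ~1.3 days' worth, at 144 blocks per day
```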

Audience Member: That’s kind of a low limit.

John: Yeah, I should have looked up what the default is, I think it’s 550.

Audience Member: The minimum is 550.

Audience cross talk…

Audience Member: We never had a chance to test out if this number is the right one.

John: There is a functional test that will…

Audience Member: I mean in a real scenario.

John: Right, so the assumption here is that you’re not going to reorg deeper than a day or a week or whatever it is.

Audience Member: That’s a big assumption.

John: Is that a fair assumption? What do you think?

Audience Member: If you have to reorg that much, I think bitcoin is broken.

Audience Member: Not 500 megabytes, but a day or two?

Audience Member: There was a full day reorg in testnet.

John: Right, that’s testnet.

Audience Member: But what I’m saying - the reason there was one was because of the inflation bug. It could have happened in mainnet too if miners hadn’t upgraded in time. I don’t think it’s such a stretch from reality.

[group noise]

Antoine: How long was the reorg?

Audience Member: 24 blocks.

John: Yeah, something like that.

Jon: I’m just making sure I understand. In the functional pruning test, there is that magical value of 550, and it’s all over the test. I actually rewrote the test, but then thought it was too trivial to submit a pull request. But I always wondered, why 550? So that’s the explanation?

John: The explanation is some certain amount of depth that you want to be able to reorg back to, plus some buffer, and that’s what was chosen.

Audience Member: Assuming the future is unknowable, right? We can speculate about the reorg depths, but every time we change the node blocks…

John: Node blocks?

Audience Member: What do you call them?

John: Checkpoints?

Audience Member: So we have checkpoints, and we don’t validate up until the node blocks.

John: We’ll get onto that. That’s assumevalid.

Audience Member: Sorry, but that implies in a sense…

John: We’ll talk about what the assumptions are around checkpoints. The additional assumption here is we don’t have a deep reorg, and we could debate whether that’s a good assumption that we can make about Bitcoin and what other things would break if we did have a deep reorg?

Audience Member: Question - Can you tell that a peer is a full node versus a pruned node if it acts just like a full node?

John: Yeah, there’s a service bit.

Audience Member: The node network?

John: Yeah, node network, meaning you would be able to serve old blocks; pruned nodes won’t.

Audience Member: But you can lie about it.

John: You can lie about it, and then your peers will ask you for all blocks, and you don’t have them.

Audience Member: Or you can lie about being a pruned node, but you actually have all the blocks.

John: I wouldn’t call that lying. It’s just saying, “I’m not giving you that data.”

Audience Member: Can pruned nodes relay new blocks?

John: Yes. And there is a proposal to allow a pruned node to serve blocks up to some level deep. Jonas Schnelli has some of that merged into Bitcoin Core, but it’s not fully merged.

SPV Nodes

Alright, SPV nodes. The term SPV was introduced in this document, which is the Bitcoin white paper, and Satoshi said, “It is possible to verify payments without running a full network node. A user only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes until he’s convinced he has the longest chain, and obtain the Merkle branch linking the transaction to the block it’s timestamped in. He can’t check the transaction for himself, but by linking it to a place in the chain, he can see that a network node has accepted it, and blocks added after it further confirm the network has accepted it.

As such, the verification is reliable as long as honest nodes control the network, but is more vulnerable if the network is overpowered by an attacker. While network nodes can verify transactions for themselves, the simplified method can be fooled by an attacker’s fabricated transactions for as long as the attacker can continue to overpower the network. One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user’s software to download the full block and alerted transactions to confirm the inconsistency. Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.”

Any responses or reactions to that?

Audience member: When fraud-proof?

[Laughter]

John: When fraud-proof. What’s a fraud-proof? Anyone?

Audience member: Verifying a block, I assume, without having the chain checked from the first block. Somebody could prove that to you without you going all the way through the chain. That would be a valid fraud-proof.

John: If someone could present a short, easy to verify, proof that a block is invalid without you having to validate the entire block.

Audience member: The challenge with fraud proofs is there’s no cost to sending them out, so it can be used as an attack vector. You could spam the entire network with, “Hey, this is invalid, this is invalid, this is invalid.”

John: This idea of having alerts from network nodes, there’s no DoS protection against that. This is not even talking about fraud-proofs. This is saying, if someone sends you a block pretending it’s valid, someone else can say that the block is invalid, and then you download the block and validate it. It doesn’t really work.

Audience member: Question - What’s a way to say a block is not valid, that we cannot say already? If we know there were additional bitcoin created aside from the coinbase transactions, we can say that the block is invalid already by just looking at the block.

John: You have to validate the whole block. The idea with fraud-proof would be some compact proof where you wouldn’t need to validate the entire block, but you’d still be able to know that it’s invalid.

Audience member: You say validate the whole block, because if I show you a block with a certain height in the future, and I say, “oh look, this is 50 bitcoin.” It could still be that there were 50 bitcoin of fees in all the transactions. In order to verify that, you have to then download all the blocks and the whole chain because you don’t have the input values. Now with SegWit, you can verify that the input values are not being lied about because the input values are signed. But before, there was no way to verify that you did not claim more fees than you should have.

John: You still need that old data if you’re validating, right?

John: Okay, so what can an SPV node do? Carla?

Carla: They can pick up when you receive transactions, they can broadcast transactions…

John: Yeah, so they can verify the transaction has been confirmed by a certain amount of work. Presumably, they’re synching the headers chain, so they know what the most accumulated work chain is. So they know there’s work there, they don’t know that it’s valid or has valid blocks.

Audience member: Question - They know it’s valuable work?

John: They know it’s valid work. You can’t fake work. If you’re synching the headers, you know the timestamps and the nBits, so you know how much work a header should include, by looking at the hash.
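As a minimal sketch of what checking that work looks like (simplified: it skips the sign and overflow edge cases of the compact encoding, and it does not check that nBits itself follows the difficulty-adjustment rules):

```python
import hashlib

def target_from_nbits(nbits: int) -> int:
    """Expand the compact nBits field from a block header into the full 256-bit target."""
    exponent = nbits >> 24
    mantissa = nbits & 0x007fffff
    return mantissa << (8 * (exponent - 3))

def header_hash(header80: bytes) -> int:
    """Double-SHA256 of the 80-byte header, read as a little-endian integer."""
    h = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    return int.from_bytes(h, "little")

def check_pow(header80: bytes, nbits: int) -> bool:
    """A header proves work iff its hash is at or below the target encoded in nBits."""
    return header_hash(header80) <= target_from_nbits(nbits)

def expected_work(nbits: int) -> int:
    """Roughly how many hashes meeting this target represents (what 'most work' sums up)."""
    return 2**256 // (target_from_nbits(nbits) + 1)

print(hex(target_from_nbits(0x1d00ffff)))  # the genesis-era target
print(expected_work(0x1d00ffff))           # about 2**32 hashes
```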

Audience member: They can’t verify if the money supply has been inflated. There could be a block in the past that’s been inflated, but they wouldn’t know that.

John: Correct. So what can they do? They can check that a transaction chain is valid, and when I say transaction chain, I mean the transaction and its ancestors, going back to the coinbase where those coins were minted, as long as you’ve got a Merkle proof for every step in there.
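To make the Merkle proof step concrete, here is a minimal sketch (illustrative, with made-up helper names) of how a light client checks that a txid is committed to by a block header’s Merkle root. Note it proves inclusion under a header, not that the transaction or its block is valid, which is exactly the limitation being discussed.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(txid: bytes, branch: list, index: int, merkle_root: bytes) -> bool:
    """Recompute the Merkle root from a txid and its branch (the sibling hashes on the
    path up the tree). index is the transaction's position in the block and tells us,
    at each level, whether our running hash is the left or right input. All values are
    32-byte digests in internal byte order."""
    h = txid
    for sibling in branch:
        if index & 1:                 # we are the right child at this level
            h = dsha256(sibling + h)
        else:                         # we are the left child
            h = dsha256(h + sibling)
        index >>= 1
    return h == merkle_root
```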

Fraud-Proofs

So, we were talking about fraud proofs. In general, they’re pretty difficult. This is a post from Luke on the mailing list. The generalized case of fraud proofs is likely impossible, but he had an implementation of a fraud-proof scheme showing the block isn’t over a certain size. This was during the 2x time period. People were worried that a 2x chain might fool SPV clients. This would be a way to tell an SPV client that this header you’ve got is committing to a block that is larger than a certain size. And you don’t need to download the entire block. That’s a narrow example of fraud-proof.

Audience member: Does this solution protect you against the DoS attack vector?

John: That people can send you fake fraud proofs? You can just ban them or just disconnect them.

Audience member: The point is you don’t have to check the block if the proof is an actual proof.

Audience member: I guess I struggle to understand the cost of validation is what? IBD is a big chunk, bandwidth is a big chunk, subverting an SPV node, I could assume a certain utxo set and then from there on, just cause a single block isn’t expensive to validate?

John: Well, the ongoing costs might be quite high in terms of bandwidth if you’re trying to run a full node on a limited network, like a mobile network.

Audience member: Seems to me like the big cost would be IBD and bandwidth, and this is attacking a part that’s not.

John: I agree, I guess this was implemented by Luke, and as far as I know, it was not incorporated into a lot of projects, but I’ve included it as an example of an implemented fraud-proof.

Audience member: Can we backtrack a little bit and talk about DoS protection and maybe, in this kind of scenario, it would feel like having outbound peers that you trust, that you have control over, that you wouldn’t be as concerned about DOS protection with. So in terms of, if that’s the big blocker for fraud proofs, why don’t we just fallback to the kinds of relationships that we trust with our nodes.

John: Well, the big blocker for fraud proofs is that they don’t exist. We don’t have a compact way of saying this block is invalid according to these consensus rules. You can imagine some kind of SNARK or ZK system where you could. I don’t know what that would look like. I’m not an expert in cryptography.

Audience member: These are different than accumulators and belonging to a set and things like that? These aren’t related?

John: They’re not related, I don’t think. What you want is a proof where someone could present you a small piece of data, and that data proves they have validated the block according to the rules you want to validate that block according to. And you can look at that proof and not have to validate the proof itself and know that it’s been validated. This is a very narrow example of that. Maybe you don’t want blocks to be greater than one megabyte, and you can get a proof that shows it’s not more than one megabyte. It doesn’t tell you that no new bitcoin is being produced or that some other consensus rule has been broken. For that generalized case, it’s probably impossible, and we don’t know.

Audience Member: I’m really curious how we actually implement this because even for valid fraud-proofs - we sound the alarm when something is going wrong, but the question is, how long do we sound this alarm? You can keep sounding that alarm for a week. No one tells me to stop, so I keep sounding this alarm. So that could also be a DOS attack, right? A fraud-proof is alerting you to an event or a snapshot in the network. But how do you keep broadcasting this fraud-proof?

John: The difference between a fraud-proof and what the white paper said was that the whitepaper said your peers could alert you without giving a proof. So they just say, this block is bad, and you download the block and check yourself. That’s expensive for you to do.

Audience Member: My question is, how long do you do that? Watch out for this block, it’s invalid.

John: I don’t know, maybe as long as it’s got the most work on it.

Audience Member: I can take a look at that.

John: So SPV nodes can’t determine whether a block is valid, they can’t enforce consensus rules. In general, most of us use the same consensus rules in Bitcoin. In some circumstances, you might care about your consensus rules being specific consensus rules. An example of this is the 2x fork, where the proponents of the 2x fork announced it as a feature, because changing the size of the block has no impact on light clients. So if you thought SegWit 2x was not bitcoin, you’d see this as an attack on SPV clients, because you’re sending them something that is not a bitcoin block, but they perceive it as a bitcoin block. Does that make sense? SPV clients are not enforcing any consensus rules.

Again, SPV clients can’t ensure your preferred monetary policy is enforced. That’s a consensus rule. So we might all believe that 21 million bitcoin is the total number of bitcoin. If you run an SPV node, you aren’t enforcing that anywhere. That is particularly important if you think about a network where all of the users are running an SPV node. If it becomes too expensive to run a fully validating node, and only miners are running fully validating nodes, we are now trusting miners with monetary policy.

Audience Member: Do we have any sort of idea of the ratio?

John: I don’t know

Audience Member: Some general order for proportions.

Audience Member: For ones that are accepting incoming connections, it’s like 95 percent.

Audience Member: 95 percent?

Audience Member: Non-pruned.

Audience Member: It kind of makes sense if you have incoming connections, you might as well want to serve them the whole blockchain. I don’t know. I don’t have any stats for the outgoing nodes.

Audience Member: I haven’t thought about this too much, but what is the economic incentive to run a public node? Like inbound, why do I care?

Audience Member: You want to spread the gospel, right?

[Laughter]

Audience Member: Secure the network.

Audience Member: Is there a rule for that? If you’re only accepting blocks, but never transmitting to others, are you cut out of the network? That would be a kind of incentive.

Audience Member: Once I’m connected, the connection all remains the same. Why do I need to open ports? Why do I care?

Audience Member: You want the network to work because you own bitcoins!

Audience Member: But again, if I want to optimize for just me?

Audience Member: There are some things that make sense on the individual level.

Audience Member: It’s like voting, why do you vote?

Audience Member: Yes. It’s game theory. That’s the Nash equilibrium, the prisoner’s dilemma. It’s the same thing. What’s good for you will be bad for you if everyone does the same thing. It’s a big problem in game theory. I don’t know where the Nash equilibrium is in the peer-to-peer network. There is an equilibrium somewhere.

Audience Member: There’s an argument whether you shouldn’t even run a full node if you’re using it as an economic node.

Audience Member: Like to accept payments.

Audience Member: But again, I don’t need to accept inbound. I can just fire up my node, validate my transaction, and disappear, if I’m truly, truly selfish.

Audience Member: What do you lose from accepting inbound?

Audience Member: Cost.

Audience Member: Bandwidth?

Audience Member: Not just bandwidth, but cost.

Audience Member: I disagree. You still want bitcoin to work because you own bitcoins. What is the selfish thing? I don’t think it’s that straightforward.

John: We’ll talk about this a bit more later. I think Lucianna has one last point.

Lucianna: I was going to say if you just go online, check the transaction, and go away, you don’t really know what’s happening when you’re not online. There’s a selfish motive in the network that you have a stake in when you go away.

Audience Member: Fair point.

Audience Member: What was that stat? Percentage of inbound?

Audience Member: I don’t know that, but 10,500 accept incoming connections.

Audience Member: On bitnodes?

Audience Member: On bitnodes.

John: Let’s move on, we’ve had half an hour, and I’m not very far into my slides. SPV nodes also can’t give you the same kind of privacy. There are things we can do to improve upon privacy for light clients, but again, the gold standard is running a full node. Do we want to talk about that? Any thoughts?

Audience Member: This is sort of grouped in with the bloom filters, right?

John: Whatever your strategy is for downloading transactions that you’re interested in, even if you are downloading a subset of data, you’re leaking something. You can’t be information-theoretically perfectly private unless you’re validating the full blockchain.

John: And again, SPV nodes can’t detect false negatives. A peer who is serving you, if you are a light client, they can lie by omission – they can just not give you data.

Okay, What about Satoshi’s vision?

[Laughter]

Audience Member: Quickly on that last point, so the fact that you can’t detect a false negative, that has consequences for the lightning network, right?

John: Yes. With bloom filters, yes. With BIP 157/158, a slightly different model, because you’re downloading all of your filters…

I didn’t mention fee estimation. The way fee estimation works in Bitcoin Core is that we look at the mempool. Felix is going to talk more about fee estimation this afternoon. But if you don’t have a mempool, if you aren’t using unconfirmed transactions, you have no way of estimating fees. You can look at blocks, but that’s not a lifeline.

Audience Member: How do SPV wallets do fee estimation?

Audience Member: Public APIs?

Audience Member: Someone runs the server and tells you.

Audience Member: Most of the android wallets, the fee estimation is so bad.

Audience Member: Like Samourai Wallet, that’s one of the features of their server. They’re serving up their own fee estimations.

John: Let’s pause this one till this afternoon. Let’s talk about the estimation stuff this afternoon.

So this is a bit tongue in cheek. Some people talk about all nodes being SPV nodes. What is the point of us running full nodes at all? Satoshi said, “If the network becomes very large, like over 100,000 nodes, this is what we’ll use to allow common users to do transactions without being full-blown nodes. At that stage, most users should start running client-only software, and only the specialist server farms keep running full network nodes, kind of like how the usenet network has consolidated.”

This was back in 2010, but also the white paper states, “businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.”

So this conversation has been going on for ten years, but the white paper, even back in 2009, says that the full node is the gold standard, and SPV nodes are, I would say, second class citizens.

Bloom Filters

Okay, we’ll talk about Bloom filters. I’m only going to touch this briefly because I believe Amiti talked about these last week. They’re defined in BIP 37, implemented in Bitcoin Core in August 2012. They allow light clients to request their transactions without revealing everything about their addresses. They’re using probabilistic filters, so you’re requesting more data than you need, and that should give you some level of privacy. But in fact, they are not very good at giving privacy.
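For intuition, here is a toy Bloom filter (not the BIP 37 construction, which uses MurmurHash3 with per-function tweaks and a tuned size; this is just the generic data structure) showing how membership tests can yield false positives, which is where the intended plausible deniability comes from:

```python
import hashlib

class BloomFilter:
    """A toy Bloom filter: k hash functions each set one bit in an m-bit array.
    Lookups can return false positives by design, but never false negatives."""
    def __init__(self, m_bits: int = 1024, k_hashes: int = 5):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "little") + item).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# The light client adds its scripts and outpoints, sends the filter to a full node,
# and the node returns every transaction matching the filter, false positives included.
f = BloomFilter()
f.add(b"my scriptPubKey")
assert b"my scriptPubKey" in f  # always true once added; unrelated items match only occasionally
```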

This is the implementation. Quite a big change in that PR.

Then this blog post from Jonas Nick talks about how an Android wallet that uses Bloom filters gave you almost no privacy because of the way the filter was constructed.

And another paper by Arthur Gervais of ETH talks about Bloom filters.

“We show that considerable information about users who possess a modest number of Bitcoin addresses is leaked by a single Bloom filter in existing SPV clients.

We show that an adversary can easily link different Bloom filters, which embed the same elements—irrespective of the -target false positive rate. This also enables the adversary to link, with high confidence, different Bloom filters which pertain to the same originator.

We show that a considerable number of the addresses of users are leaked if the adversary can collect at least two Bloom filters issued by the same SPV client—irrespective of the target false positive rate and of the number of user addresses.”

In terms of achieving its target, Bloom filters do not actually give you very good privacy. Any thoughts about that?

Audience Member: In the end, we just wrote a very complicated thing that gave us nothing?

John: Maybe? Yeah.

“…target false positive rate. This also enables the adversary to link, with high confidence, different Bloom filters which pertain to the same originator.

We show that a considerable number of the addresses of users are leaked if the adversary can collect at least two Bloom filters issued by the same SPV client—irrespective of the target false positive rate and of the number of user addresses.”

In terms of achieving its target, Bloom filters do not actually give you very good privacy. Any thoughts about that?

Audience Member: In the end, we just wrote a very complicated thing that gave us nothing?

John: Maybe? Yeah.

Audience Member: So I touched on this the other day, is there a process for reverting changes? If we know for sure it’s not helping, why keep this in the codebase?

Cross talk from the audience…

John: I will answer that in just a second. The question is, is there a way of reversing this? Let’s have a look.

Before we do that, first of all, it’s not very good at preserving privacy, and also places load on the server. So if you’re a server serving up these Bloom filters, you’re doing the compute work to create a new Bloom filter for every client that connects to you.

Audience Member: And you have to run through all the blocks, no?

John: You have to run through every block. Yep. Going back to your question about altruism and selfishness. I think I would separate this from block and transaction propagation in that at the margin with transaction propagation, you probably wouldn’t do it if you were entirely selfish, but there’s a global good.

Locally it’s altruistic, but globally we get a shared benefit from it. Whereas this locally is fully altruistic, but there’s no global shared benefit. So Bloom filters are much worse than transaction and block propagation in terms of selfishness. And transaction and block propagation have within them some kind of DoS protection, like the block must contain work, the transaction must contain a fee. Whereas constructions like this, there’s no DoS protection at all.
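For intuition, here is a minimal, illustrative Bloom filter sketch (not the BIP 37 construction, which uses MurmurHash3 with a per-filter tweak; the class and function names here are just for illustration). The point is that the serving node has to test every element of every transaction in every block against each connected client’s filter, which is where the asymmetric cost comes from.

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, data):
        # Derive num_hashes bit positions from the element.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + data).digest()
            yield int.from_bytes(digest[:4], "little") % self.size

    def add(self, data):
        for pos in self._positions(data):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, data):
        # False positives are possible (tunable via size/num_hashes); false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(data))

# The serving node repeats this for every connected light client, for every block
# the client asks about.
def matching_txs(block_txs, client_filter):
    # block_txs: list of transactions, each a list of byte elements (scriptPubKey data, outpoints, ...)
    return [tx for tx in block_txs if any(client_filter.maybe_contains(e) for e in tx)]
```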

Audience Member: Were there ever large DoS attacks on the network using Bloom filters?

John: I don’t know about that.

Cross talk…

John: I know there is a GitHub repo from Peter Todd called bloom-io-attack so maybe you can try that at home if you want. But, “a single syncing wallet causes 80GB of disk reads and a large amount of CPU time to be consumed processing this data.” [source] So it seems trivial to DoS.

Audience member: Highly asymmetric.

John: Very asymmetric.

Audience Member: This issue came up when I was researching BIP 21/70… It’s not always as simple as getting rid of bad code.

John: So BIP 70 is application layer. It’s really down to the application or the client. This is p2p, so again it’s down to the client. It’s not consensus at all. So individual clients can decide not to participate in bloom filters.

There was a question last week about SegWit, and the answer is, it doesn’t work with SegWit because if the pubkey is in the witness, then that’s not included in the Bloom filter. I think intentionally, we didn’t update Bloom filters to support it. Generally, it’s not advised. If you’re running a full node, it will probably be disabled in the next version of Bitcoin Core.

John: Sorry, I misspoke. Disabled by default.

Audience Member: Wouldn’t it be nice to still have it, but using the encrypted Jonas Schnelli authentication? So if I use my Android wallet, I can authenticate to my own node.

John: Yeah, and Nicolas Dorier also wants that for BTC Pay Server where you connect to a trusted node, and the trusted node serves you Bloom filters.

Amiti: I don’t understand that. Why do you want a Bloom filter if it’s a trusted node?

Audience Member: You can’t leak privacy because it’s your node.

Amiti: But if you’re trusting it.

Audience Member: Couldn’t your trusted node just have a -

Audience Members: I do not think there is a fully secure way in Bitcoin to communicate with a full node. The only way is with an Electrum server, which is not part of Bitcoin Core. Luke is trying to implement that.

John: So here’s Nicolas Dorier, he maintains BTCPay Server and he’s saying there’s no reason not to use Bloom filters for whitelisted peers. So if you preconfigure the IP address of your peer, you should be able to.

John: This was in response to BlueMatt’s PR.

Audience Member: So, in those statistics, how many nodes have it enabled?

Audience Member: Almost all, like 90%.

John: Alright, we talked about the incentive issues, the DoS attack issues, and Bloom filters are generally not fantastic. A newer proposal is compact block filters, as defined in BIPs 157/158. Again, I’m not going to talk very much because Fabian talked about this last week.

This flips the BIP 37 proposal around, so instead of the client asking for the filter, the server creates a filter based on the block and can serve that to all clients. It uses Golomb-Rice coding instead of Bloom filters, and it’s first fully implemented in btcd which is a Go implementation. There is BIP 157, on the roasbeef fork initially, but now being merged upstream to btcd. In Bitcoin Core, we have the block filter construction and we have an index, and the P2P part of that is a work in progress.
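A minimal sketch of the data flow, under simplifying assumptions: the real BIP 158 filter hashes elements with SipHash and encodes the sorted values with Golomb-Rice coding, whereas here a plain Python set stands in for the compressed filter. The function names are illustrative only.

```python
import hashlib

def build_block_filter(spks_created_and_spent):
    # Server side: built once per block from the scriptPubKeys created and spent
    # in that block, then served unchanged to every client.
    return {hashlib.sha256(spk).digest()[:8] for spk in spks_created_and_spent}

def client_is_interested(block_filter, wallet_spks):
    # Client side: if any wallet scriptPubKey matches, download the full block
    # and scan it locally, so the server never learns which addresses are yours.
    return any(hashlib.sha256(spk).digest()[:8] in block_filter for spk in wallet_spks)
```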

Audience Member: How do these work with SegWit transactions?

Audience Member: How do you create a filter that includes SegWit? Addresses, I guess.

Audience Member: There are different filter types. You can basically put anything in the filters that you want. You just need to use a different filter type. You can make a filter type that only has TXIDs or anything.

Audience Member: You’re saying the hash function has to cover the witness data in some way when the TXID is computed?

John: I believe these cover the script pubkey.

Audience Members: I don’t think the witness is committed.

I think it’s hashed, that’s enough.

There’s no commitment to the witness data in the txid. That’s why it’s so malleable. You’re filtering with txids. You can filter with anything else.

Fabian: Yeah, but you can filter for anything basically. You can throw any data in there that you want to… It’s super flexible.

John: I think the one that is used includes all of the script pubkeys that have been created and spent in the block. Is that right? And that would cover the addresses if you’re watching for an address.

Audience Member: Is this like neutrino?

John: This is the same thing. Neutrino is an implementation of BIP 157/158, and often people call the protocol neutrino.


Socratic Seminar

Date: August 12, 2020

Transcript By: Michael Folkson

Tags: P2p, Research, Threshold signature, Sighash anyprevout, Altcoins

Category: Meetup

Topic: Agenda below

Video: No video posted online

BitDevs Solo Socratic 4 agenda: https://bitdevs.org/2020-07-31-solo-socratic-4

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Bitcoin Core P2P IRC Meetings

https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-IRC-meetings

They are starting P2P meetings for the Bitcoin protocol now. The trend for organizing backbone development for Bitcoin is starting to become more modular. They have wallet meetings for the wallet in Bitcoin Core now. You are going to have P2P network meetings for strictly the networking portion of Bitcoin and all of the technical stuff that goes into that. They had one on August 11th and they are doing them every two weeks. I thought I would give a public service announcement there.

Clark Moody Dashboard

https://bitcoin.clarkmoody.com/dashboard/

The next thing, our Bitcoin dashboard that Clark Moody puts on. It is a high level view of what is going on in the Bitcoin ecosystem. Price, 11,500 USD, the GBTC Premium which is a regulated OTC product that Grayscale offers. 17.8 percent is the premium on this. That means it trades at a 17 percent premium to spot Bitcoin. It looks like he has added Bitcoin priced in gold if that is your thing. We are around block height 643,000. UTXO Set Size 66 million, Block Time, we are still under our 10 minute desired threshold that signifies people are adding hash rate to the network and finding blocks faster than we would expect. Lightning Network capacity, mostly the same, at around 1000 BTC. The same with BitMEX’s Lightning node. There looks like there is a little bit more money being pegged into Liquid which is interesting. Transaction fees…

Is there a dashboard where some of these metrics specifically Lightning and Liquid capacity where there are time series charts?

I don’t know that. That would be very nice to have. Then meeting after meeting we could look at the diff since we last talked. I don’t know.

For fee estimates, these fee estimates for Bitcoin seem a little high to me. 135 satoshis per virtual byte, is the network that congested? There are 6000 transactions in the mempool so maybe it is. That is what is going on on the network.

The fees for the next block are completely dominated by a somewhat small set of participants who go completely nuts.

Sure, but even for the hour. That is 6 blocks, which according to this is 129 sats per virtual byte, which also seems very high to me. I didn’t realize that Bitcoin fees were that high.

I think that’s a trailing average.

I was looking at the mempool space and the fees are lower than they are on this thing.

Most of it is 1 satoshi per byte so I think it is a trailing estimate.

Dynamic Commitments: Upgrading Channels Without On-Chain Transactions (Laolu Osuntokun)

https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html

Here is our first technical topic. This is to do with upgrading already existing Lightning channels.

The idea here is that when you are changing the actual commitment scheme or anything else about your Lightning channel, so long as no changes are required for your funding transaction and you are not changing your revocation mechanism, you don’t have to go onchain for anything. After some discussion about various ways of doing this update offchain they decided upon introducing a new message which will be used I think also during channel opening to explicitly negotiate features instead of implicitly negotiating features using TLV stuff. You can update those features with an update channel message. The proposal is to activate these channel updates once there are no HTLCs left. You just have the to_local, to_remote balances. Then you can send one of these messages and do a commitment signed revoke and ack handshake but where the commitment being signed is now the new format or the new kind of commitment transaction. The ideas for using this are various things. Updating to use new features like static remote keys. That is a feature that improves channels which is somewhat new. People on a lot of channels are not using it because it would require that they close and reopen their channels. That stuff is now unnecessary. The limit on how many HTLCs can live in a given channel can now be changed dynamically without closing channels. You can start with a smaller number. If there is good behavior you can make that a bigger number as you go on. In the future you should be able to do other various fancy things like Antoine (Riard) wants to do things with anchor outputs on commitment transactions. I think some of that stuff is already implemented at least in some places. You would be able to update your channels to use those things as well without going onchain. Part of the motivation is that once we get to Taproot onchain then once we’ve designed what the funding transaction looks like for Taproot we can hash out the details later of what actual format we want to use on the commitment transaction. We can push updates without forcing thousands of channels to close.

That last thing is a really key insight. Every time we come up with a new cool Lightning feature we don’t want to have to make everybody close their channels, reopen them, go through that process. We want to dynamically change the rules. If two parties consent to changing the rules of the channel there is no reason they shouldn’t be able to do that just by saying “You are cool with this rule, I am cool with this rule. Let’s switch over to these new rules rather than the old rules that we had before.” Of course if they don’t agree that won’t happen. It makes a much smoother upgrade process for the Lightning Network.

The key constraint is that you can’t change the revocation mechanism. You can’t go for example from Poon-Dryja channels to eltoo channels using this mechanism. But you can do most other things that don’t require changing the revocation mechanism.

You wouldn’t be able to go to PTLCs or MuSig post Schnorr?

You would actually. For example, if people decided they wanted to use ECDSA adaptor signatures today, should this mechanism also exist, we could use it with current Lightning channels to upgrade to PTLC enabled channels. The same goes if we have Taproot and the Taproot funding transaction has been specified but people are still using HTLCs: you can move to PTLCs without requiring channels to be shut down so long as the revocation mechanism and the funding transaction don’t change.

For negotiating a new funding channel, can’t you spend the old funding transaction so you don’t have to close and open? Just spend the old funding transaction so you just do a new open from the old one.

Yeah and that is discussed later in this thread. ZmnSCPxj was talking about going from Poon-Dryja to eltoo using a single transaction. Anything that requires funding transaction you don’t have to close and open, you can do it simultaneously in a splicing kind of fashion.
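A rough sketch of the flow being described, with hypothetical message and method names (the actual spec work is still being discussed on the mailing list): once no HTLCs are pending, the peers explicitly re-negotiate features, then run the usual commitment dance where the newly signed commitment uses the new format, with no on-chain transaction involved.

```python
def upgrade_channel(channel, new_features):
    # Sketch only: "channel" and its methods are illustrative stand-ins.
    if channel.pending_htlcs:
        return False                         # wait until only to_local/to_remote remain
    channel.send("update_channel", features=new_features)   # explicit feature negotiation
    if not channel.peer_accepts():
        return False                         # peer declined, keep the old rules
    new_commitment = channel.build_commitment(format=new_features)
    channel.send("commitment_signed", commitment=new_commitment)
    channel.expect("revoke_and_ack")         # the old-format commitment is revoked
    channel.features = new_features
    return True
```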

Advances in Bitcoin Contracting: Uniform Policy and Package Relay (Antoine Riard)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018063.html

This is in the Bitcoin P2P development space: package relay policy across the Bitcoin P2P network. What this is trying to solve is that there are certain Layer 2 protocols that are built on top of Bitcoin such as Lightning that require transactions to be confirmed in a timely manner. Also sometimes it is three transactions that all spend from each other that need to be confirmed at the same time to make sure that you can get your money back, specifically with Lightning. As Antoine writes here, “Lightning, the most deployed time-sensitive protocol as of now, relies on the timely confirmations of some of its transactions to enforce its security model.” Lightning boils down to if you cheat me on the Lightning Network I go back down to the Bitcoin blockchain and take your money. The assumption there is that you can actually get a transaction confirmed in the Bitcoin blockchain. If you can’t do that the security model for Lightning crumbles. Antoine also writes here that to be able to do this you sometimes need to adjust the fee rate of a transaction. As we all know blockchains have dynamic fee rates depending on what people are doing on the network. It could be 1 satoshi per byte and other times it could be 130 satoshis per byte. Or what we saw with this Clark Moody dashboard, one person may think it is 130 satoshis per byte while another person is like “I have a better view of the network and this person doesn’t know what they are talking about. It is really 10 satoshis per byte.” You can have these disagreements on these Layer 2 protocols too. It is really important that you have an accurate view of what it takes to enforce your Layer 2 transactions and get them confirmed in the network. The idea that is being tossed around to do this is this package relay policy. Antoine did a really good job of laying out exactly what you need here. You need to be able to propagate a transaction across the network so that everyone can see the transaction in a timely manner. Each node has different rules for transactions that they allow into the mempool. The mempool is the staging area where nodes hold transactions before they are mined in a block. Depending on your node settings, you could be running a node on a Raspberry Pi or one of these high end servers with like 64GB of RAM. Depending on what kind of hardware you are running you obviously have limitations on how big your mempool can be. On a Raspberry Pi maybe your mempool is limited to 500MB. On these high end servers you could have 30GB of transactions or something like that. Depending upon which node you are operating your view of the network is different. In terms of Layer 2 protocols you don’t want that because you want everybody to have the same view of the network so they can confirm your transactions when you need them to be confirmed.

“These mempool rules diverge across implementations and even versions of Bitcoin Core and a subset of them can be tightened or relaxed by node operators.”

I can set my bitcoin.conf file to be different another person on the network. We can have different rules to determine what is a valid transaction for our mempool.

“This heterogeneity is actually where the risk is scored for higher protocols.”

If node operators configure things differently that is not good for things like Lightning.

“Your LN’s full node might be connected to tx-relay peers with more constraining policies than yours and thus will always reject your time-sensitive transactions, silently breaking security of your channels.”

That is very bad if you have these time sensitive protocols like Lightning. That will soundly break the security assumptions of your channel.

“Therefore moving towards such stable tx-relay/bumping API, I propose:

a) Identifying and documenting the subset of policy rules on which upper layers have to rely on to enforce their security model

b) Guaranteeing backward-compatibility of those rules, or, in case of tightening change, making sure there is ecosystem coordination with some minimal warning period (1 release?)”

Making sure that there isn’t a Bitcoin Core release that gets pushed out that fundamentally breaks all Layer 2 solutions. This is one of those things that we are learning as we go with Layer 2 development and mempool policy is very important. He also goes on to write about what this means for network nodes. He says small mempools won’t discover the best feerate bid which falsifies their fee estimator as we saw with the Clark Moody dashboard. CPFP users, their feerate bump having a chance to fail is especially concerning for LN where concurrent broadcast for the same UTXO can be leveraged by a counterparty to steal channel funds. This is really complex, non-obvious stuff that we didn’t need to think about in Bitcoin development until the last year or so. What does it even mean to have a transaction relayed across the network? It needs to be consensus valid. There must be UTXOs associated with the transaction but then it also must meet your relay policy. If your transaction doesn’t get relayed across the network there is no way that a miner is going to be able to confirm it. I think this is a really interesting active area of research going on in Bitcoin right now. There is a big GitHub issue where a lot of discussion is taking place currently. I am not going to walk through that here. But if it is something you are interested in I highly suggest taking a look at it.

It seems out of scope in a way because ultimately there is what is explicitly part of consensus and then there is everything else where the design of Bitcoin is that node operators have total autonomy outside of what is agreed upon by consensus. I don’t really understand what the actual proposal is here. It sounds like there is a discussion about getting users to agree to running certain code which is impossible to check that they are running and building a secure system on top of that assumption. That seems a surprising line of inquiry. I could’ve misunderstood.

It lets you get a more perfect mempool. Say you broadcast a 1 sat per byte transaction and then the fee rate jumps to 100 sats per byte. You want to child-pays-for-parent (CPFP) that, so the second transaction pays 200 sats per byte so the pair will be in the next block. Right now a lot of nodes see the first transaction, the fee rate is too low, so they drop it out of their mempool. They never accept the second one that bumps the pair up to a high enough fee rate to be added to the mempool.
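A quick worked illustration of the package feerate arithmetic behind that example (the numbers and function name are made up for illustration):

```python
def feerate(fee_sats, vsize_vb):
    return fee_sats / vsize_vb

parent_fee, parent_vsize = 200, 200      # 1 sat/vB parent, below the prevailing rate
child_fee, child_vsize = 40_000, 200     # 200 sat/vB child spending the parent

parent_alone = feerate(parent_fee, parent_vsize)                          # 1.0 sat/vB
package = feerate(parent_fee + child_fee, parent_vsize + child_vsize)     # 100.5 sat/vB

# A node that dropped (or never accepted) the low-fee parent can never see the
# package feerate, so the child cannot "pay for" the parent -- which is the gap
# package relay is trying to close.
```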

I understand what the goal is and what the implementation recommendations are up to a point but why should we expect to get to a point where we can depend on this sort of thing? It seems fishy. What reason do you have to think that random Bitcoin node operators are going to have this policy?

I think at the very least it makes sense for mining nodes to implement these things because it means they get better fees. I agree that it gets more complicated beyond that. It does seem like at least a win in that scope.

I want this stuff too but it is very different than other Bitcoin technology related discussion which has to do more with what are the other pre-agreed upon rules that are enforceable as a software consensus.

I don’t know if things are necessarily that straightforward. What do you do with a double spend? Both of them are consensus valid on the network. We are racing, you and me, we are both in a situation where we want to spend from a UTXO and we both can. Do you as a default on the P2P network say that somebody that is trying to double spend is a malicious node and ban them or do you allow the double spend and open yourself up to spam?

It is out of scope, it is persuasion.

It is not out of scope. Then you open yourself up to spam which means you can crash other people’s nodes and you end up not having a P2P network.

I don’t mean it is out of scope as a node operator. All we can really do is try to put together good arguments why people should adopt some policy but I don’t see it as something as dependable as the UTXO set that I validate as I run my node. It is not even a matter of degree, it is a matter of kind. Being able to trust other people’s mempool management, I don’t know how you would do it. It is weak subjectivity at that point.

On the Lightning Network you broadcast in your gossip for node discovery or the features you support. If you claim to support certain features and then someone can detect that you don’t then you put that node on your blacklist. I don’t know if a similar kind of thing would work here.

I think the proposal isn’t to add package relay as one of those flags.

If you could do that in your node discovery, find the people who are supporting the things that they are supposed to, you can retroactively detect bad behavior.

That is getting closer to the sort of thing that would make me more comfortable. I worry that this is conflating working on Bitcoin Core with making dependable changes to Bitcoin as a software system that people run independently and autonomously.

What transactions should be relayed across the P2P network is what this is trying to answer. Depending on context a lot of things can be consensus valid but going back to my double spend example can open you up to various attack vectors or maybe not be useful in the first place. One fundamental premise that I think we all agree on is that you can relay a transaction across the P2P network. I also think we are on the same page that this isn’t the same as consensus rules but it is also very important because we need this assumption of being able to relay transactions. I think you have some valid points. It is a really interesting area of discussion. I am personally interested in it so I will be following it.

I think at the very least since so many people don’t customize this part of their node it would be neat if CPFP worked sometimes which it currently doesn’t really.

Ideally you want your own node to have the best mempool possible and this gets you closer to that. That is just better for yourself. If everyone acts in self interest…

Why is it that having an accurate mempool is important to your average node?

When you get a block, if you don’t have a transaction in your mempool you will have to revalidate it, and sometimes you will have to download it if you are using compact blocks.

Fee estimation as well.

Here there is the trade-off: does a good mempool mean knowing about all plausible transactions? Then the spam risk is really huge. There are DoS concerns.

If you are limiting it to 100 MB you still want the highest fee 100 MB of transactions because they are the most likely to be included in blocks.
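A toy illustration of that point, under heavy simplification: Bitcoin Core’s real eviction logic works on ancestor/descendant package feerates, whereas this sketch just keeps the highest individual feerates within a size budget.

```python
def trim_mempool(txs, max_vbytes):
    # txs: list of (txid, fee_sats, vsize_vb) tuples; names are illustrative.
    by_feerate = sorted(txs, key=lambda tx: tx[1] / tx[2], reverse=True)
    kept, used = [], 0
    for tx in by_feerate:
        if used + tx[2] <= max_vbytes:
            kept.append(tx)
            used += tx[2]
    return kept  # the low-feerate tail is what a small mempool never sees
```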

You want Bitcoin to succeed and to work well. As a node operator part of what you are doing is you are trying to estimate what would make Bitcoin the most useful. If there is a really clear cogent argument for having some potentially complex mempool management policy potentially involving additional coordination with your peers or even an overlay network, I agree that we shouldn’t be surprised if people adopt that. Getting from there to saying it is something you can depend on, I don’t know how to quantify that.

Ethereum gas prices

https://twitter.com/rossdefi/status/1293606969752924162?s=20

In other news gas prices on Ethereum are going bananas. According to this guy it was 30 dollars to submit a Ethereum transaction earlier today. Maybe it has gone up or down since then. I think this is good for Ethereum because that proves it is viable if they ever go to…. is Ethereum deflationary still?

They have EIP 1559 I think, that makes it so that all fees are burned if it is over a certain limit or something like that.

There is a really weird situation, the Ethereum community seems to be rooting for Ethereum 2.0. What is going to happen is that it is going to be a completely separate blockchain. After five years they are going to merge the two together. We have no idea what the issuance rate is or if ETH is deflationary. The community has no idea what is happening with ETH 2.0.

There is an open question, are blockchains sustainable over the long term? I don’t know if ETH necessarily answers this but I think it is encouraging to see people are willing to pay high fees for transactions. It means you are providing something valuable. Some day maybe we will get to this level on Bitcoin again. Ethereum is validating its valuation now.

On the fees I bet the distribution looks like… the people who regularly pay the highest fees are people who have not optimized their fee paying logic.

I disagree with that but we are going to talk about that later.

It is very interesting that people are willing to pay these fees. I think it is interesting when you look at the use cases for paying these fees on Ethereum, I think the majority of people are involved in the new DeFi yield farming. They are not using it for a real business case, they are using it to get access to this new token that exists and then trying to be early for that. They are willing to pay whatever fee necessary to get in early.

High transaction usage on Ethereum

https://twitter.com/juscamarena/status/1285006400792354816?s=20

I didn’t realize that according to Justin Camarena it takes four transactions to send a ERC20 token. Is this right? Does it take four transactions like he claims to send and withdraw from Coinbase?

It depends what he is doing. If he is taking a ERC20 off of Coinbase, that takes one transaction. Usually if he wants to use it with a particular Dapp then he would need to do an approved transaction. That would be two. And deposit it, that would be three. I’m not sure how he is getting four from that, I see how it could be three.

That is very interesting to me because I didn’t realize that there was so much overhead in making a ERC20 transaction on the blockchain. They could use some of this package relay stuff. I guess it doesn’t actually reduce the number of transactions. Today on Twitter I have seen a lot of talk about Ethereum and Layer 2 solutions but we will see how serious they are.

When you say transaction I guess I don’t know enough about ETH to fully comprehend, is it true that with ERC20 tokens you are not sending ETH you are updating a state? Is this somehow different with a lower fee than I would expect coming from Bitcoin world?

It is a state update but it is essentially fixed size so it scales linearly.

If it is true that it is a third the amount I expect from a value transfer transaction and it takes three transactions and that is also a fixed number this seems fine.

A value transfer of ETH by itself is the cheapest transaction you can do. Anything that includes any amount of data costs extra. It is the other way.

Cloudflare DNS outage

https://twitter.com/lopp/status/1284275353389862914?s=20

Cloudflare had a DNS outage and there was a noticeable drop in Bitcoin transactions during this Cloudflare outage. I found that fascinating. It just goes to show who is making transactions on the network. It is probably Coinbase, Blockchain, all the big exchanges etc. It just goes to show how much influence Cloudflare has over the entire internet.

Altcoin traceability (Ye, Ojukwu, Hsu, Hu)

https://eprint.iacr.org/2020/593.pdf

There is a class project that came out a few months ago and made its way to the news. I wouldn’t really mention this otherwise except that you have probably heard about it in the news at some point. It is an altcoin traceability paper, some news agencies referred to it as the CMU paper. One thing to remember is that this is a pre-print. Even though it has been revised a few times to make it a lot better than the initial version I still take significant issue with many parts of it. This is an example of a paper where you might look at work and it might look simple enough on the high level. “I can try to understand this chart.” The problem is that it is approachable enough where it seems super simple but there is an extra layer on top of it that you might not be thinking about. That is very difficult to capture. I want to stress that when you are looking at certain research papers especially ones where they seem approachable… Here is an example. They look at Zcash and Monero predominantly in this paper. Here you can say “Here is the effective anonymity set for different mixing sizes.” You might say “It is still quite ineffective for large rings.” All of these are pre RingCT outputs where the deducibility was already 90 percent or so. This is an example where this has had no relevance since 2017. That isn’t made super clear as you are reading the research paper. The only reason I am mentioning this and I don’t want to spend that much time on it is that it was referenced in a lot of media. There is a new version out that does make certain parts more accurate but at the end of the day it was a pre-print and it was a class project. Take that for what it is worth. It wasn’t meant to be a super in depth research paper. If you want to look at certain types of things to keep in mind when you are doing analysis of blockchains it repeats a lot of the same methods as other papers.

What was the original conclusion of the paper that the authors claimed?

It was so biased. They made claims that were very qualitative and opinion oriented. They were like “We are going to say this without significant evidence” and it is completely extrapolating. I took significant issue with a lot of the claims they had. The whole point of the paper is to replicate a lot of things that other people have done. You have probably heard of some of the work by Andrew Miller. This is a really easy paper to understand on a super high level. That is partially why it is so dangerous because you might think that you get it more than you might. It is really easy to read. They will talk about some types of heuristics with Monero. “The first output is most likely to be the real spend” and “when a transaction has two or more transaction outputs and two or more of those outputs are included in different inputs of another transaction then those included outputs are assumed to be the real inputs.” They do what other papers have done. This particular paper did not really introduce new analysis. That was another reason it was so dangerous in the media, repeating this as new research even though it is applicable only many years ago. I wanted to point it out just because it was in the media a lot compared to what you would typically expect from a pre-print.

If this was presented in a fraud case or something like that would it be taken seriously? If there was a legal case involving blockchain analysis would a paper like this be admissible?

It is better than nothing. I am sure that even in the current state there would be substantial push back by other experts. One expert against a bunch of other experts saying it is ridiculous. For what it is worth they have made revisions to the paper to make it better. It is difficult to present some of the evidence as is. It is very set to certain time periods. This chart shows the applicability. After 2017 the whole application of the paper is not very useful but this is a paper from this year. Media doesn’t know how to interpret this is one thing I wanted to point out. This is a really easy high level paper to read, just be really careful to make sure you get your time points correct. They try to look at Zcash for example and they didn’t have proper tooling available to continue to update it. They didn’t really update a lot of the data and then extrapolated it. I would love to see them run the real data not just extrapolate old findings. That would be much more useful.

Separate coinbase and non-coinbase rings on Monero

https://github.com/monero-project/monero/issues/6688

I have been focusing on the idea of separating coinbase and non-coinbase rings. With Monero and Bitcoin you have something called a coinbase output. No these are not outputs related to Coinbase the exchange. What they actually are is block reward money. When you mine a block you are granted the right to a specific type of output that includes new monetary issuance and inclusion of the fees people pay in transactions. Whether an output is a coinbase output or not has real implications for what type of user it belongs to. If I sent someone money and my entropy set was related to these specific types of outputs, either I am running my own mining pool or solo mining, or those actually aren’t my funds that I’m sending you, it is other funds I’m sending you. This is an example of a point of metadata that might not be a convincing spend if you are using it as potential entropy. Think about it like you are using a mixer. If you were mixing with a bunch of other mining pools and you are the only normal user in the mixing process you’d probably not get that much entropy from mixing in there because people would know that you would be the one contributing the non-mining related funds. With Monero this happens relatively often because it is a non-interactive process. People who do generate coinbase outputs are participating by having their outputs available as decoys. When I started looking at this a few years ago it was a bigger issue because about 20 percent of the outputs were coinbase outputs. Now it is really closer to 3 percent. It really isn’t a significant issue anymore. The issue goes away in part by just having bigger rings. But really it is based off greater network usage in general. The number of coinbase outputs generated per day is constant. As network activity goes up the proportional percentage of coinbase outputs goes down. We also looked at a few things to see what would happen if we tried to adjust our selection algorithm for coinbase or non-coinbase rings. The idea was we can help protect normal users by having them not spend coinbase outputs. If you are a mining pool then you would only make coinbase rings. That is how we could enforce it at a consensus level. We noticed that there was no significant difference in the spend distribution of coinbase and non-coinbase outputs. I found this really interesting. I did not expect this result. I thought coinbase outputs would be spent faster on average because a mining pool mines a block and they send the funds to another user as a payout. I assumed that would be pretty instant but it actually is very similar to normal spend patterns. Of course for Monero this is only deducible up to 2017. We cannot test this on current Monero usage because that data is not available. It is too private.

Going back to the previous paper we just talked about. Are you drawing your analysis from the same dataset they were? Did Monero make a consensus change that makes it unanalyzable now?

What I am talking about with coinbase rings should be considered pretty independent of this. This doesn’t really talk about coinbase outputs. Monero used to have really weak ring signatures up until 2017. We can actually look at Monero transactions up until 2017 and for most transactions we can tell what specific output is being spent. This is not tied to an address, it is not a Monero address that is sending this but we know which Monero output is spent in the transaction. If I go back to this paper you can see here for example in Figure 6 that up until April the vast majority were quite deducible. The green line is the deducible. Frankly that did continue until about January 2017 when there was a steep drop off. Until then, for about 90 percent of transactions we are able to tell which real output is spent. After that point we can’t really tell anymore. This was a consensus change that enabled the new type of output that provided higher levels of protection and therefore we can no longer deduce this information. We have to look at Monero data up until 2017 and Bitcoin data to determine spends. This is something that Moser et al and other papers have looked at too, in order to determine what the input selection algorithm should be. Ultimately the big takeaway is that coinbase outputs are interesting because they are not a type of output that a normal user might spend. You could separate them and say normal users will actually by network decree not spend these funds. However as network activity increases this becomes a moot point. To what extent do you need to care as a network protocol designer? That really depends on activity and use. It is never good but it can be small enough for you to say the extra complication this would add to consensus is not worth the change. The real impact is very, very small. This is still an ongoing discussion after about two years in the Monero community. Of course the Monero network has grown in number of transactions since then.

What would it mean for a coinbase output not to be spendable? Wouldn’t it just not have any value then?

Suppose that you run a public mining pool. People will send you hash rate and you occasionally mine a Monero block. I am just a user of the network. I am not even mining on your pool. I am just buying Monero on an exchange and sending to another user. At the moment when either of us sends transactions what we do is we look at all of the outputs on the blockchain and we will semi randomly according to a set process select other outputs on the blockchain which include both coinbase and non-coinbase outputs in the spend. When you are spending funds, since you are a public mining pool and you frequently mine actual coinbase outputs it is understandable for you to actually spend these coinbase outputs. It is also understandable for you to spend non-coinbase outputs. Let’s say you’ll spend a coinbase output to a miner and then you will receive a non-coinbase output as change which you will then use to spend and pay other users. Meanwhile since I am not directly mining I am never being the money printer. I am never actually the person that would convincingly hold a coinbase output myself. Whenever I send transactions that include a coinbase output outside observers can pretty reliably say “This isn’t a real output that is being spent because why would they ever have possession of this output.” It is not convincing, it is not realistic. This person is probably not running a mining pool or probably isn’t getting super lucky solo mining.

The actual proposal is to partition the anonymity sets between coinbase and non-coinbase?

Exactly. As you do that there are quite a few things you can do. Mining pools publish a lot of information. Not only is it the case that coinbase outputs are not convincing for an individual to spend, most mining pools will publish what blocks they mine and they will publish lists of transactions that they make as payouts. You might see that this coinbase output was produced by this mining pool, so what is the chance that this payment is coming from the mining pool rather than somebody else? Typically it is going to be pretty large. These are pretty toxic deducible outputs to begin with. Depending on network activity we can increase the effective ring size, not the real true ring size but the post-heuristic effectiveness, by about 3-4 percent. This seems small at the moment but it doesn’t cost any performance. It is just a smarter way of selecting outputs. That is one thing we can do. It is an option where we can stratify the different types of transactions by a type of metadata that we are required to have onchain anyway.
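A sketch of the idea being proposed (this is not the actual Monero decoy selection code, and the names are illustrative): partition the decoy pool so that ordinary wallets only build rings out of non-coinbase outputs, while pool wallets build coinbase-only rings.

```python
import random

def select_ring(real_output, candidate_outputs, ring_size=11):
    # Only draw decoys from the same class (coinbase vs non-coinbase) as the
    # real spend; assumes enough same-class candidates exist. The real wallet
    # also weights decoys by age (a gamma distribution), which is omitted here.
    same_class = [o for o in candidate_outputs
                  if o.is_coinbase == real_output.is_coinbase and o != real_output]
    ring = random.sample(same_class, ring_size - 1) + [real_output]
    random.shuffle(ring)
    return ring
```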

You have been talking about this like it is a new consensus rule to change it. Shouldn’t this be a wallet feature where your wallet doesn’t select coinbase UTXOs. It seems like a soft fork is extreme.

That is absolutely true. There are a few reasons why you may want to have a network upgrade. One, each wallet is going to do their own thing. If a user is sending funds that includes a coinbase decoy for example and suppose only one wallet does this, all the others have updated, then you know it is a user of that wallet. This is not ideal. Also we have the opportunity to do things like say coinbase outputs are pretty toxic anyway, let’s inform users that coinbase outputs should not have a ring. Perhaps we say you shouldn’t directly spend coinbase outputs to users if you do care about privacy in that transaction. Then just for those outputs we can make the ring size one and that would make the network more efficient but it would also require a consensus change. I also think in general even though we can make it a wallet change, as far as privacy is concerned it is better to enforce behavior rather than allow wallets to do their own thing. In the Monero community in general people are a little more open to the idea of forcing best practice behavior instead of simply encouraging it. Definitely a cultural difference between Monero and a few other communities.

In my mind saying that this wallet’s transactions aren’t valid now is way more extreme than forcing them to upgrade their wallet and making a backward compatibility change. If that is what Monero does it is interesting.

There are pros and cons.

Do you recommend the paper? I wasn’t sure if there was anything tangible as a result of it.

I wouldn’t advise that someone not read it but I also wouldn’t consider it an essential read. It is most useful for people who already have the context of the other papers. It updates on some of their data. It doesn’t really present much new compared to those. Part of the reason why we like to enforce behavior, and this is perhaps controversial among Bitcoin implementations, is that enforcing behavior gives much better results for privacy than not. Zcash, we constantly have people arguing Monero is better, Zcash is better. You can see the proportion of transactions that are shielded on the two networks comparatively and you can see one is a little bit more adopted. Monero has over 100 times as many transactions that hide the sender, receiver and amount as Zcash. That’s because when you allow backwards compatibility and exchanges and users are not forced to use the best practices, people typically don’t. I think that is because they don’t really care. Think about Bitcoin SegWit adoption. Ideally people should switch right away. But people don’t. If people aren’t going to switch for a financial incentive why are exchanges going to switch to get rid of a point of metadata that they don’t care about unless you force them? With privacy, in my opinion, it is not just whether a feature is available but whether it is enforced. More importantly whether people are actually using it. As you are evaluating implementations you should see if they are adopted or not. I know a lot of people talk about Samourai for example. This is some data on Samourai wallet usage. I want to be really clear. These numbers are not very comparable. These are interactive processes. For a 5 million satoshi pool 602 rounds occurred but those each include several participants. It is not just one participant that shows up. It might be ten participants or something. Even so if you stack all of these on top of each other, which you cannot really do because they are different amounts, they are denominated and they each have their own anonymity sets, it is still pretty darn tiny. The implementation and encouraging good use is critically important. People think pretty highly of Samourai in general. I know the Samourai vs Wasabi feud, sometimes friendly, sometimes not so friendly. Ultimately the actual adoption is kind of small. One thing I will mention is we are building these networks, we can talk about them in a research sense but we also need to talk about the sense that these are decentralized networks where it is permissionless and anybody can send a transaction. People are going to do some weird stuff and people are not going to follow the best practices. That does matter for privacy adoption quite significantly.

Monero CLSAG audit results

https://web.getmonero.org/2020/07/31/clsag-audit.html

This is the big thing coming up with CLSAG. These are a more efficient form of Monero ring signature. They had an audit report come out where JP Aumasson and Antony Vennard reviewed it. You can see the whole version here. They had some proposed changes to the actual paper and how they have the proofs. But ultimately they were saying “This should be stronger.” It resulted in no material code changes for the actual implementation. The paper changed but the code didn’t. You can see the write up here. They are slightly more efficient transactions that will be out this October.

Are you talking size efficient or time efficient?

Both. They have stronger security proofs. They have about 25 percent smaller transaction size and they have 10-20 percent more efficient verification time. It is really a win-win-win across the board there.

FROST: Flexible Round-Optimized Schnorr Threshold Signatures

https://eprint.iacr.org/2020/852.pdf

This paper came out, FROST, which is a technique for doing a multisig to produce a Schnorr signature. I don’t know about the review that has gone on here. I wasn’t able to find any review of it. It is a pre-print. There is always a certain caveat. You should take a pre-print with a grain of salt but it is pretty serious. What distinguishes this Schnorr multisig is that it is a proposal for a two round Schnorr multisig. MuSig is probably the Schnorr multisig that most people are familiar with in this circle. MuSig started out life as a two round multisig scheme which had a vulnerability. The current best replacement for it is a three round. There are some rumors that there is a fix for that. I haven’t seen anything yet. If anybody else has please speak.

I have seen a pre-print that I don’t think is public yet. They have a way of doing MuSig with deterministic nonces in a fancy way. Not in any straightforward way. It is not too fancy but it uses bulletproofs that gets it back down to two rounds because the nonce no longer requires its own round.

I am looking forward to seeing that. If you have a math bent this paper is actually very nice. The constructions are beautiful. For those who are familiar with Shamir’s Secret Sharing, the idea is that you construct a polynomial, you give your users certain point values and then together a threshold subset of the users can reconstruct the secret. There is actually an interesting interplay between Shamir’s Secret Sharing and just additive secret sharing, where you just add the shares, under certain situations. They have leveraged this duality that exists in a certain setting in order to get the low number of rounds. There is a key setup where what you are doing is everyone chooses a secret. I am going to ignore the binding and commitments for now. They construct a Shamir’s Secret Share for everybody else. Then they all add up the Shamir’s Secret Shares that all the other users give them and this gives them jointly a Shamir’s Secret Share for the sum of the individual secrets. That’s the key. Nobody knows the key although any threshold t of them can reconstruct the secret directly. The point of the protocol though is that by doing signing nobody learns the secret. The signing procedure doesn’t leak the secret although in principle if they got together they could reconstruct it. Of course if you reconstruct the secret then every user who was in that reconstruction now has unilateral access to the secret. The signing procedure is somewhat similar to what you might be expecting. You have to construct a nonce. The way that you construct the nonce is you have to take care to bind everything properly to avoid certain classes of attack. What you do is everyone in this subset t where t is a threshold chooses a random nonce and then they construct a Shamir’s Secret Share for it and share it out to everybody. There is a nonce commitment phase which you have to do. In this setting what they describe it as is a preprocessing step. You might get together with the group of users that you think is going to be the most common signers. You are doing a 2-of-3, you have two signers that you think are the most likely, they get together, they construct a thousand nonces in one go and then they can use those nonces in the protocol later on and make the protocol from that point on one round. Signing uses this trick where you can convert between additive secret shares. The nonces shared are additive secret shares. The key that is shared is a Shamir’s Secret Share because you are using this threshold technique. There is a method for combining them. If you have an additive share for t people you can convert it into a Shamir’s Secret Share without any additional interaction. That’s the kernel of the idea. I think there is a small error in this paper although I could easily be the one who is wrong, that is the most likely. I think that the group response, you actually have to reconstruct this as a Shamir’s Secret Share. I think this is a small error.
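A toy sketch of the Shamir structure used in that key generation, over a small prime field rather than the secp256k1 group order, and with all commitments and proofs omitted: each participant deals shares of their own secret, and summing the received shares gives each participant a Shamir share of the never-reconstructed joint secret.

```python
import random

P = 2**127 - 1  # toy prime modulus, standing in for the real curve order

def deal_shares(secret, t, participants):
    # Degree t-1 polynomial with constant term = secret, evaluated at each index.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in participants}

def lagrange_at_zero(shares):
    # Reconstruct f(0) from a dict {index: share} containing at least t shares.
    total = 0
    for i, yi in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

participants, t = [1, 2, 3], 2
secrets = {i: random.randrange(P) for i in participants}
dealt = {i: deal_shares(secrets[i], t, participants) for i in participants}
# Each participant sums the shares dealt to them: a Shamir share of the joint key.
joint_share = {i: sum(dealt[j][i] for j in participants) % P for i in participants}
assert lagrange_at_zero({1: joint_share[1], 2: joint_share[2]}) == sum(secrets.values()) % P
```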

How do they do the multiplication with the hash and the secret shard?

They have this commitment which is broken up into two pieces. The thing that they use is the fact that you can reconstruct the challenge using just the commitment values. It is bound to the round and the message so there is no possibility of the Drijvers attack. That is not a good answer.

With most of these threshold schemes everyone is happy with the addition and all of the complexity is always “We need to go through a multiplication circuit or something.”

This one is weird because the multiplication doesn’t play much of a role actually. It is more the reconstruction. If you have the group key that is shared as a Shamir’s Secret Share it is not clear what to do about the nonces so you can combine them and take advantage of Schnorr linearity. They found a way to do that.

Are public versions of the shards shared at any point? Is that how this is being done?

Let me show you the key generation. If you are asking about the key generation the public key is the Shamir’s Secret and then the public shares are not additive secret shares. They are public commitments to the Shamir shares. My verdict for that paper is do read, it is interesting. Having Schnorr multisig that has low interaction has proven to be extremely useful in the design of cryptocurrency stuff.

Minsc - A Miniscript based scripting language for Bitcoin contracts

https://min.sc/

For those who don’t know Miniscript is a language that encodes to Bitcoin Script. The thing that is cool about Miniscript is that it tracks a number of stack states as part of its type system. When you have a Miniscript expression you can understand how the subexpressions will edit the stack. The reason why you would want to do this is you have a table of satisfactions. Typically if you have some script it is complicated to calculate what allows you to redeem it. With Miniscript, for every script that you write the type system tells you exactly what conditions you need to redeem that script. All of this is geared towards making analysis like this much more user friendly. It is very cool. Pieter Wuille put together a Policy to Miniscript compiler. Policy is just a simplified version without any of the typing rules. It has got this very simple set of combinators that you can use. Shesek put together a version of the Policy language with variable abstraction, functions and some nicer syntax. This is going to be really useful for people who are experimenting with Bitcoin Script policies. It doesn’t have the full power of Miniscript because you can’t tell what is happening on the stack. If you have a Rust project you can include this as a library. But otherwise you can also compile sources using the compiler that Shesek put together. The website is also very nice. You can put in an expression, you get syntax highlighting and you get all the compiled artifacts. It is a really cool tool for playing with script. I recommend that. The GitHub repo, unless you are a real Rust fan there is no need to look into the details. Most of the meat of this is in sipa’s implementation of the Policy to Miniscript compiler.

BIP 118 (ANYPREVOUT)

https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki

BIP 118 used to be SIGHASH_NOINPUT, now it is SIGHASH_ANYPREVOUT. The update that is being discussed right now is for integration with Taproot. There are two main design decisions that you should think about. One is fairly straightforward if you know how Tapscript works. If you want to use one of these new sighashes you have to use a new key type. In Taproot there is the notion of key types and this means that in order to construct a key post Taproot it is not enough to have a private key, you also have to have a marker that tells you what the capabilities of the corresponding public key are. It is part of a general move to make everything a lot more explicit and incumbent on programmers to explicitly tell the system what they are trying to do and to fail otherwise. Hashes post Taproot have a tag so you can’t reuse hashes from one Taproot setting in another one. And keys have a byte tag so that you can’t reuse public keys in different settings. The semantics of the two sighashes, ANYPREVOUT and ANYPREVOUTANYSCRIPT, if you are really into the guts of Taproot, affect what the transaction digest is that you sign. You should check out what the exact details are. As far as I could tell it is pretty straightforward. You are just not including certain existing Taproot fields. On the BIP, right above the security section, there is a summary of what these two sighashes do. The main thing is that ANYPREVOUT is like anyone-can-pay except you don’t commit to the outpoint. With ANYPREVOUT you are not committing to the outpoint because that’s something you want to allow to be arbitrary. ANYPREVOUTANYSCRIPT is even weaker. Not only are you not committing to the outpoint but you are also not committing to the spend script.
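
A very loose sketch of that distinction, using only what is described above; the field names are illustrative and the real BIP 118 digest contains many more fields and rules:

```python
# Illustrative only: which (simplified) signature-message fields each sighash
# drops, per the description above. See BIP 118 for the actual digest.
def committed_fields(sighash: str) -> set:
    fields = {"outpoint", "spend_script", "other_tx_data"}
    if sighash in ("ANYPREVOUT", "ANYPREVOUTANYSCRIPT"):
        fields.discard("outpoint")       # signature can be rebound to another prevout
    if sighash == "ANYPREVOUTANYSCRIPT":
        fields.discard("spend_script")   # not bound to the script being spent either
    return fields
```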

I’m looking at the signature replay section and I know one objection to SIGHASH_NOINPUT, ANYPREVOUT is that it interacts badly with address reuse. Let’s say you are a big exchange and you have three HSMs that you are using as your multisig wallet. It is super secure but you can’t regenerate addresses because there is only one key on each HSM. You end up with a static address. If somehow you end up signing an ANYPREVOUT input with these HSMs your wallet could be drained, which is what they are hinting at with the signature replay section here. Is that addressed at all in the newer iteration?

No it is not. Nobody knows a way of dealing with it that is better than allowing it. I omitted something important which is that you can’t use these keys with a key path, you can only use them on a script path. Taproot has key path and script path spending modes which the user chooses at spend time. ANYPREVOUT is only available in the script path spending mode. This new key type, which according to this proposal is the only key type with which you can use the new sighashes, is only available in a Tapscript script. This is interesting but I still think that we are probably quite a way from actual deployment and deployment of eltoo. That is the main use case for this new sighash. This is nice because it can be included in another batch of BIPs. It is nice and self contained and clear. You are adding semantics but you are not going to care much about it unless other features become available. That’s the reality here as far as I can see.

BIP 118 is now ANYPREVOUT?

This is not merged and it looks like it might get a new BIP number. SIGHASH_NOINPUT, the name is almost certainly going to change. The SIGHASH_NOINPUT proposal is probably just going to remain in the historical record as an obsolete BIP.

We have been using NOINPUT for our vaults protocol but ANYPREVOUT was also mentioned in the paper. I am not sure on the actual differences.

At a high level they do the same thing. This says “Here is one way that you can incorporate it into a post Taproot Bitcoin.”

If you are on the Lightning dev mailing list people use NOINPUT and ANYPREVOUT interchangeably.

They are synonyms. You are mostly talking about higher level stuff. You want to be able to rebind your transactions. In some sense it doesn’t matter if you have to use a Tapscript or whatever special key type. You want the functionality.

Bitcoin Mining Hashrate and Power Analysis (BitOoda research)

https://medium.com/@BitOoda/bitcoin-mining-hashrate-and-power-analysis-bitooda-research-ebc25f5650bf

This is FCAT which is a subsidiary of Fidelity. They had a really great mining hash rate and power analysis. I am going to go through it pretty quickly. There are some interesting components to take away from this. They go into what the common mining hardware is and what its efficiency is. Roughly 50 percent of mining capacity is in China right now. I’m not really surprised there. They get into cost analysis. In their assessment 50 percent of all Bitcoin mining capacity pays 3 cents or less per kilowatt hour. They are doing energy arbitrage. They are finding pockets where it is really cheap to mine. According to them it is about 5000 dollars to mine a Bitcoin these days. Then there are the S9 class rigs that I think have been in the field for 3-5 years at this point. According to this research group you need sub 2 cents per kilowatt hour to break even with these S9s that are pretty old at this point. Here is the most interesting takeaway. A significant portion of the Chinese capacity migrates to take advantage of lower prices during the flood season. They go into a bunch of explanation here. What is this flood season? I was reading this and I didn’t know it was a thing. I had heard about it but I hadn’t investigated. The Southwestern provinces of Sichuan and Yunnan face heavy rainfall from May to October. This leads to huge inflows to the dams in these provinces causing a surge in production of hydroelectric power during this time. This power is sold cheaply to Bitcoin miners as the production capacity exceeds demand. Excess water is released from overflowing dams so selling cheap power is a win-win for both utilities and miners. This access to cheaper electricity attracts miners who migrate from nearby provinces to take advantage of the low price. Miners pay roughly 2-3 cents per kWh in northern China during the dry months but sub 1 cent per kWh in Sichuan and Yunnan during the May to October wet season. They will move their mining operations from May to October down to these other provinces to take advantage of this hydroelectric power, which is going on right now. This is the flood season in these provinces. Fascinating stuff. For some context this is Yunnan right here and Sichuan is here. I think they are moving from up in the northern parts here.

A mysterious group has hijacked Tor exit nodes to perform SSL stripping attacks

https://www.zdnet.com/article/a-mysterious-group-has-hijacked-tor-exit-nodes-to-perform-ssl-stripping-attacks/

A mysterious group has been hijacking Tor exit nodes to perform SSL stripping attacks. From my understanding these Tor exit relays are trying to replace people’s Bitcoin addresses, more or less. If you are using the Tor network all your traffic gets encrypted and routed through this network, but it has to eventually exit the network and get back into plaintext. What these exit nodes are doing is looking at this plaintext, seeing if there is a Bitcoin address in there and, if so, copy-paste replacing it with the attacker’s Bitcoin address rather than the intended address.

In the West most of the internet is now happily routed through TLS connections. One of the things this attack is trying to do is redirect users to a non-TLS version. This is somewhat difficult now because you have an end-to-end TLS connection over Tor if it works properly. It is not possible as far as we know to eavesdrop on that connection or to modify it as a man in the middle. You need to downgrade first.

I think they saw similar attacks to this in the 2018 timeframe but I guess they are picking back up.

It was the scale of this that was so incredible. One of the things that I think is a shame, if anybody is feeling very public spirited and doesn’t mind dealing with a big headache, is that running a Tor exit node is a really good service. This is where you are going to have a hose that is spewing sewage out onto the internet. Tonnes of random trolls are going to be doing things that are going to get you complaints. Or at least that is the worry. I have a friend who runs a Tor exit relay and he said that they have actually never had a complaint, which I found shocking.

I have had complaints when my neighbor who uses my wifi forgets to VPN when he downloads Game of Thrones or something. It is surprising that that wouldn’t happen.

Is he credible? Does he actually run the exit node? It is like spewing sewage.

I should practice what I preach but I would love more people to run Tor relays. If they get complaints be like “**** you I am running a Tor relay. This is good for freedom.” When it is your own time that you have to waste on complaints it is hard.

Taproot activation proposals

https://en.bitcoin.it/wiki/Taproot_activation_proposals

There has been a big debate on how to activate Taproot in Bitcoin. Communities take different approaches to this.

There are like ten different proposals on the different ways to do it. Most of them are variations of BIP 8 or BIP 9 kind of things. It is picking how long we want to wait for activation and whether we want to overlap them and stuff like that. Currently talk has died down. The Schnorr PR is getting really close to merge and with that the Taproot implementation gets closer to being merged into Core. The discussion will probably then pick back up. Right now as far as I can tell a lot of people are leaning towards either Modern Soft Fork Activation or “Let’s see what happens” where we do a BIP 8 of one year and see what happens. Once the PRs get merged people will have to make a decision.

I listened to Luke Dashjr and Eric Lombrozo and I think I am now leaning away from the multiphase, long period activation like Modern Soft Fork Activation. When you said it is split between those two, what is your feeling on the percentages either way? Is it 50, 50?

I would say it is like 25 percent on Modern Soft Fork Activation and 75 percent on a BIP8 1 year. That is what I gathered but I haven’t gone into the IRC for a week or so. The conversation has died down a lot. I don’t think there has been much talk about it since.

I am a BIP9 supporter for what it is worth. Tar and feather me if you must.

The fastest draw on the blockchain: Next generation front running on Ethereum

https://medium.com/@amanusk/the-fastest-draw-on-the-blockchain-bzrx-example-6bd19fabdbe1

With all this DeFi going on on Ethereum there are obviously these large arbitrage opportunities. Since everything is on a blockchain none of this activity is private. It is fun to watch people bid up Ethereum blockchain fees, and also very clever folks are using bots to take advantage of token allocations in these DeFi protocols. This post is a big case study of the BZRX token launch. It is done onchain and everybody has got to try to buy into the token allocation at a specific time. It goes through the strategies that people could use to get a very cheap token allocation. If I’m not mistaken this guy made half a million dollars because he was clever enough to get in on the allocation and then sell at the right time too. The post talks about what a mempool is, which we were talking about earlier. How can you guarantee that your transaction to purchase tokens is confirmed as close as possible to the transaction that opens up the auction process? They go through a bunch of different tactics that you could take to increase your odds and do the smart thing. There was some stuff that wasn’t obvious to me. The naive thing I thought was pay a very high gas price and you are likely to get it confirmed close to this auction opening transaction. But that is actually not true. You want to pay the exact same gas price as the transaction that opens the auction so you are guaranteed to be confirmed at the same time. You don’t want to have your transaction confirmed before one of these auction transactions.

One of the things that has recently been very interesting with this front running situation is that there are now bots running on Ethereum where anytime you do any type of arbitrage that is profitable those bots will instantly spot that transaction in the mempool and create a new transaction with a higher fee in order to take advantage of your arbitrage opportunity and have their transaction go through first. I thought that was one of the interesting things that people have been doing recently. I am not too familiar with this one actually.

I think it is a very fun space to see this play out onchain. Very interesting game theory.

It is also really relevant for us as we are messing with Layer 2 protocols. People here are hacking the mempool in this very adversarial setting and we are just talking about it for the most part. We could learn a lot.

Shouldn’t the miners be able to do all this themselves if they have enough capital? They can do the arbitrage because they decide what goes in the block?

There are reports that some miners have been performing liquidations themselves, or favoring the ordering of transactions so that particular positions get liquidated ahead of the professional liquidators that are trying to liquidate them. Giving themselves an advantage in that way. There have been reports of that happening recently.

I agree that there is a tonne of stuff for us to learn in the Bitcoin ecosystem from what is going down in Ethereum right now. Especially around mempool stuff. Let’s look at this stuff and learn from it.

In terms of front running I used to work at a place where we were doing some Ethereum integration. At the time one of the suggestions that was catching a lot of traction was to have a commit, reveal workflow for participating in a decentralized order book. Does anybody know if that ever gained traction? I stopped paying attention. You would commit to the operation you’d want to do in one round, get that confirmed and then reveal in a future round. Your commits determine the execution order.

I am not familiar with that.

That sounds like a better idea.

It is a way to get rid of front running.

That is interesting. I think the majority of protocols aren’t doing that. The only thing that sounds familiar to that is the auction system in Maker DAO where you call to liquidate a position first. Then it doesn’t get confirmed until an hour later when the auction is finished. This has its own issues but that seems more similar to a commit, reveal. That is the only thing I can think of that is remotely similar.

Evidence of Mempool Manipulation on Black Thursday: Hammerbots, Mempool Compression and Spontaneous Stuck Transactions

https://blog.blocknative.com/blog/mempool-forensics

More mempool manipulation, this is sort of the same topic. These guys are looking into Maker DAO liquidations between March 12th and 13th which was a very volatile time in the cryptocurrency markets. Of the 4000 liquidation auctions during Black Thursday, 1500 of them were won by zero bids over a 12 hour period; 8 million dollars in aggregate locked collateralized debt positions was lost to these zero bid auctions. When they say a zero bid auction is it a trivial price being placed on the order book and they just got lucky it got filled? What exactly is meant by that?

I think what was happening here is there were a bunch of bots that came in. If you have an auction in Maker DAO, when they first implemented it the auction only lasted 10 minutes. What they would do is as soon as the value of a particular position went below the minimum collateralization ratio they would make a bid on that position for zero dollars. Usually what would happen is that in that 10 minutes someone else would outbid them. But what they would do is they would spam the entire network with a bunch of transactions at the same time so no one would be able to get a transaction in. At the end of 10 minutes they would win the bid. The person who had the position got screwed over.

This is fascinating stuff watching this play out on a live network and seeing people making substantial money from it too.

If you want to read more on how Maker DAO responded to this incident you can check this out.

Working with Binance to return 10,000 dollars of stolen crypto to victim

https://medium.com/mycrypto/working-with-binance-to-return-10-000-of-stolen-crypto-to-a-victim-3048bcc986a9

Another thing that has happened since we last talked was the cryptocurrency hack on Twitter, for those who live in a hole or whatever. Somebody hacked Twitter and was posting crypto scams. Joe Biden’s and Barack Obama’s accounts were posting this scam. What these people are talking about is how exchanges can be useful in returning people’s property when things are stolen. They claim Coinbase blocked user attempts to send over 280,000 dollars to the scam address. I think this is really a trade-off with centralized exchanges. Obviously some people don’t necessarily like centralized exchanges but I think we can all agree that it is probably good that they were censoring these transactions to keep people from paying in to some Twitter hacker’s crypto scam on Joe Biden’s account or Barack Obama’s account or whoever’s account.

If you are going to scam make sure you reuse addresses so Coinbase can’t block you.

Don’t reuse.

This is perfect marketing for financial regulation. “Look it sometimes doesn’t hurt people.”

That should be the Coinbase advertisement on TV or whatever.

Samourai Wallet Address Reuse Bug

https://medium.com/@thepiratewhocantbenamed/samourai-wallet-address-reuse-bug-1d64d311983d

There is a Samourai wallet address reuse bug that causes the same address to be reused in some cases, due to a null pointer exception in the wallet handling code. The author of this post was not too thrilled with Samourai’s response and handling of the situation, from my understanding. Especially for a project that touts itself as privacy preserving.

The bug is fixed now. If you are using Samourai you are good.

BasicBlocker: Redesigning ISAs to Eliminate Speculative-Execution Attacks

https://arxiv.org/abs/2007.15919

Other things in the hardware realm. They are trying to fix all these vulnerabilities that are out there for speculative execution on modern processors. For those who don’t know, hardware is a lot faster than software. What hardware and software engineers have realized is that we should speculatively execute code on the hardware just in case that’s the branch people want to take in software. That can have security implications. Say you have an IF ELSE statement: if Alice has permission to the Bitcoin wallet, allow her in to touch funds or to touch a private key, else send her a rejected request. With speculative execution these processors will actually execute both sides of that branch before checking if Alice has access to the private keys, and the results can end up cached on the processor, that is my understanding. That means you can maybe get access to something you shouldn’t get access to. This BasicBlocker, I think it is redesigning these instruction sets and asking for compiler updates and hardware updates to simplify the analysis of speculative execution. I don’t know, I didn’t have any strong opinions on it. I think we’ve got to solve the problem eventually but nobody wants to take a 30 percent haircut on their performance.
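
To make that branch scenario concrete, here is an illustrative Python-flavored sketch; the actual leak happens at the microarchitectural level (cache state), not in the source code:

```python
# Illustrative sketch of the branch described above. A CPU may *speculatively*
# execute the permission-granted path before the check resolves; the data it
# touches can leave microarchitectural traces (e.g. cache state) that a local
# attacker can measure, even though the architectural result is thrown away.
def read_key(alice_has_permission: bool, private_key: bytes, probe: bytearray) -> bytes:
    # `probe` is assumed to be a large attacker-observable buffer
    if alice_has_permission:               # branch predictor may guess "taken"
        secret_byte = private_key[0]
        _ = probe[secret_byte * 4096]      # secret-dependent access leaves a cache footprint
        return private_key
    return b""                             # architecturally, no permission means no key
```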

Building formal tools to analyze this sounds like it would be awesome. The class of attack is a side channel attack and by definition a side channel attack is a channel that you weren’t expecting an attacker to use. I think hardening against speculative execution attacks is necessary but you really need to be very careful about applying security principles like minimum privilege.

It is tough. Things will get a lot slower if we decided to actually fix this stuff.

Thanks for attending. We will do it again in a month or so. The next one will be our 12th Socratic Seminar. It will have been a year so maybe we will have to figure out something special for that.

Meetup

Topic: Agenda below

Video: No video posted online

BitDevs Solo Socratic 4 agenda: https://bitdevs.org/2020-07-31-solo-socratic-4

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Bitcoin Core P2P IRC Meetings

https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-IRC-meetings

They are starting P2P meetings for the Bitcoin protocol now. The trend for organizing backbone development for Bitcoin is starting to become more modular. They have wallet meetings for the wallet in Bitcoin Core now. You are going to have P2P network meetings for strictly the networking portion of Bitcoin and all of the technical stuff that goes into that. They had one on August 11th and they are doing them every two weeks. I thought I would give a public service announcement there.

Clark Moody Dashboard

https://bitcoin.clarkmoody.com/dashboard/

The next thing is our Bitcoin dashboard that Clark Moody puts on. It is a high level view of what is going on in the Bitcoin ecosystem. Price, 11,500 USD; the GBTC Premium, which is a regulated OTC product that Grayscale offers, is at 17.8 percent. That means it trades at a 17 percent premium to spot Bitcoin. It looks like he has added Bitcoin priced in gold if that is your thing. We are around block height 643,000. UTXO set size 66 million. Block time, we are still under our 10 minute desired threshold, which signifies people are adding hash rate to the network and finding blocks faster than we would expect. Lightning Network capacity, mostly the same, at around 1000 BTC. The same with BitMEX’s Lightning node. It looks like there is a little bit more money being pegged into Liquid which is interesting. Transaction fees…

Is there a dashboard where some of these metrics specifically Lightning and Liquid capacity where there are time series charts?

I don’t know that. That would be very nice to have. Then meeting after meeting we could look at the diff since we last talked. I don’t know.

For fee estimates, these fee estimates for Bitcoin seem a little high to me. 135 satoshis per virtual byte, is the network that congested? There are 6000 transactions in the mempool so maybe it is. That is what is going on on the network.

The fees for the next block are completely dominated by a somewhat small set of participants who go completely nuts.

Sure but even the hour estimate. That is 6 blocks, which according to this is 129 sats per virtual byte, which also seems very high to me. I didn’t realize that Bitcoin fees were that high.

I think that’s a trailing average.

I was looking at the mempool space and the fees are lower than they are on this thing.

Most of it is 1 satoshi per byte so I think it is a trailing estimate.

Dynamic Commitments: Upgrading Channels Without On-Chain Transactions (Laolu Osuntokun)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html

Here is our first technical topic. This is to do with upgrading already existing Lightning channels.

The idea here is that when you are changing the actual commitment scheme or anything else about your Lightning channel, so long as no changes are required for your funding transaction and you are not changing your revocation mechanism, you don’t have to go onchain for anything. After some discussion about various ways of doing this update offchain they decided upon introducing a new message, which I think will also be used during channel opening, to explicitly negotiate features instead of implicitly negotiating features using TLV stuff. You can update those features with an update channel message. The proposal is to activate these channel updates once there are no HTLCs left. You just have the to_local, to_remote balances. Then you can send one of these messages and do a commitment signed, revoke and ack handshake but where the commitment being signed is now in the new format or the new kind of commitment transaction. The ideas for using this are various. Updating to use new features like static remote keys. That is a feature that improves channels and is somewhat new. A lot of channels are not using it because it would require that they close and reopen their channels. That is now unnecessary. The limit on how many HTLCs can live in a given channel can now be changed dynamically without closing channels. You can start with a smaller number. If there is good behavior you can make that a bigger number as you go on. In the future you should be able to do other various fancy things; for example Antoine (Riard) wants to do things with anchor outputs on commitment transactions. I think some of that stuff is already implemented at least in some places. You would be able to update your channels to use those things as well without going onchain. Part of the motivation is that once we get to Taproot onchain, then once we’ve designed what the funding transaction looks like for Taproot we can hash out the details later for what actual format we want to use on the commitment transaction. We can push updates without forcing thousands of channels to close.

That last thing is a really key insight. Every time we come up with a cool new Lightning feature we don’t want to have to make everybody close their channels, reopen them, and go through that process. We want to dynamically change the rules. If two parties consent to changing the rules of the channel there is no reason they shouldn’t be able to do that just by saying “You are cool with this rule, I am cool with this rule. Let’s switch over to these new rules rather than the old rules that we had before.” Of course if they don’t agree that won’t happen. It makes a much smoother upgrade process for the Lightning Network.

The key constraint is that you can’t change the revocation mechanism. You can’t go for example from Poon-Dryja channels to eltoo channels using this mechanism. But you can do most other things that don’t require changing the revocation mechanism.

You wouldn’t be able to go to PTLCs or MuSig post Schnorr?

You would actually. For example, if people decided they wanted to use ECDSA adaptor signatures today, we could use this, should it also exist, with current Lightning channels to update to PTLC enabled channels. The same goes if we have Taproot and the Taproot funding transaction has been specified but people are still using HTLCs: you can move to PTLCs without requiring channels be shut down so long as the revocation mechanism and the funding transaction don’t change.

For negotiating a new funding channel, can’t you spend the old funding transaction so you don’t have to close and open? Just spend the old funding transaction so you just do a new open from the old one.

Yeah and that is discussed later in this thread. ZmnSCPxj was talking about going from Poon-Dryja to eltoo using a single transaction. For anything that requires a new funding transaction you don’t have to close and open, you can do it simultaneously in a splicing kind of fashion.
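
Pulling that discussion together, a minimal sketch of the upgrade flow, assuming hypothetical method and message names rather than whatever the final spec will use:

```python
# Hedged sketch of the offchain channel-upgrade flow described above; the
# message and method names here are hypothetical stand-ins.
def upgrade_channel(channel, new_features):
    assert not channel.pending_htlcs            # only to_local / to_remote remain
    channel.send("update_channel", features=new_features)   # explicit re-negotiation
    # The usual commitment dance, but the commitment transaction being signed
    # already uses the newly negotiated format (e.g. static remote key,
    # a different max-HTLC limit, anchor outputs, ...).
    channel.send("commitment_signed", tx=channel.build_commitment(new_features))
    channel.expect("revoke_and_ack")
    channel.features = new_features             # upgraded without any onchain transaction
```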

Advances in Bitcoin Contracting: Uniform Policy and Package Relay (Antoine Riard)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018063.html

This is in the P2P Bitcoin development space: package relay and uniform policy across the Bitcoin P2P network. The problem this is trying to solve is that there are certain layer 2 protocols built on top of Bitcoin, such as Lightning, that require transactions to be confirmed in a timely manner. Also sometimes it is three transactions that all spend from one another that need to be confirmed at the same time to make sure that you can get your money back, specifically with Lightning. As Antoine writes here, “Lightning, the most deployed time-sensitive protocol as of now, relies on the timely confirmations of some of its transactions to enforce its security model.” Lightning boils down to: if you cheat me on the Lightning Network I go back down to the Bitcoin blockchain and take your money. The assumption there is that you can actually get a transaction confirmed in the Bitcoin blockchain. If you can’t do that the security model for Lightning crumbles. Antoine also writes here that to be able to do this you sometimes need to adjust the fee rate of a transaction. As we all know blockchains have dynamic fee rates depending on what people are doing on the network. It could be 1 satoshi per byte and other times it could be 130 satoshis per byte. Or, as we saw with this Clark Moody dashboard, one person may think it is 130 satoshis per byte while another person is like “I have a better view of the network and this person doesn’t know what they are talking about. It is really 10 satoshis per byte.” You can have these disagreements on these Layer 2 protocols too. It is really important that you have an accurate view of what it takes to enforce your Layer 2 transactions and get them confirmed in the network. The idea that is being tossed around to do this is this package relay policy. Antoine did a really good job of laying out exactly what you need here. You need to be able to propagate a transaction across the network so that everyone can see the transaction in a timely manner. Each node has different rules for transactions that they allow into the mempool. The mempool is the staging area where nodes hold transactions before they are mined in a block. Depending on your node settings, you could be running a node on a Raspberry Pi or one of these high end servers with like 64GB of RAM. Depending on what kind of hardware you are running you obviously have limitations on how big your mempool can be. On a Raspberry Pi maybe your mempool is limited to 500MB. On these high end servers you could have 30GB of transactions or something like that. Depending upon which node you are operating your view of the network is different. In terms of Layer 2 protocols you don’t want that because you want everybody to have the same view of the network so they can confirm your transactions when you need them to be confirmed.

“These mempool rules diverge across implementations and even versions of Bitcoin Core and a subset of them can be tightened or relaxed by node operators.”

I can set my bitcoin.conf file to be different another person on the network. We can have different rules to determine what is a valid transaction for our mempool.

“This heterogeneity is actually where the risk is scored for higher protocols.”

If node operators configure things differently that is not good for things like Lightning.

“Your LN’s full node might be connected to tx-relay peers with more constraining policies than yours and thus will always reject your time-sensitive transactions, silently breaking security of your channels.”

That is very bad if you have these time sensitive protocols like Lightning. That will soundly break the security assumptions of your channel.

“Therefore moving towards such stable tx-relay/bumping API, I propose:

a) Identifying and documenting the subset of policy rules on which upper layers have to rely on to enforce their security model

b) Guaranteeing backward-compatibility of those rules, or, in case of tightening change, making sure there is ecosystem coordination with some minimal warning period (1 release?)”

Making sure that there isn’t a Bitcoin Core release that gets pushed out that fundamentally breaks all Layer 2 solutions. This is one of those things that we are learning as we go with Layer 2 development, and mempool policy is very important. He also goes on to write about what this means for network nodes. He says small mempools won’t discover the best feerate bid, which falsifies their fee estimator, as we saw with the Clark Moody dashboard. For CPFP users, their feerate bump having a chance to fail is especially concerning for LN where concurrent broadcast for the same UTXO can be leveraged by a counterparty to steal channel funds. This is really complex stuff, non-obvious until the last year or so, that we need to think about in Bitcoin development. What does it even mean to have a transaction relayed across the network? It needs to be consensus valid. There must be UTXOs associated with the transaction, but then it also must meet your relay policy. If your transaction doesn’t get relayed across the network there is no way that a miner is going to be able to confirm it. I think this is a really interesting active area of research going on in Bitcoin right now. There is a big GitHub issue where a lot of discussion is taking place currently. I am not going to walk through that here. But if it is something you are interested in I highly suggest taking a look at it.

It seems out of scope in a way because ultimately there is what is explicitly part of consensus and then there is everything else, where the design of Bitcoin is that node operators have total autonomy outside of what is agreed upon by consensus. I don’t really understand what the actual proposal is here. It sounds like there is a discussion about getting users to agree to running certain code, which is impossible to check that they are running, and building a secure system on top of that assumption. That seems a surprising line of inquiry. I could’ve misunderstood.

It lets you get a more perfect mempool. Say you broadcast a 1 sat per byte transaction and then the fee rate jumps to 100 sats per byte. You want to child-pays-for-parent (CPFP) that, so the second transaction has a 200 sats per byte fee rate so it will be in the next block. Right now a lot of nodes see the first transaction, its fee rate is too low, so they drop it out of their mempool. They never accept the second one that bumps the pair up to a high enough fee rate to be added to the mempool.
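
A back-of-the-envelope sketch of that CPFP situation, with all numbers invented for illustration:

```python
# Toy numbers illustrating the CPFP / package-relay problem described above.
def feerate(fee_sats, vsize):
    return fee_sats / vsize

parent = {"fee": 110,    "vsize": 110}   # 1 sat/vB  -- too cheap on its own
child  = {"fee": 22_000, "vsize": 110}   # 200 sat/vB -- meant to bump the parent

package = feerate(parent["fee"] + child["fee"],
                  parent["vsize"] + child["vsize"])          # ~100 sat/vB overall

min_feerate = 5.0  # sat/vB floor on some hypothetical node with a full mempool
accepts_parent_alone = feerate(parent["fee"], parent["vsize"]) >= min_feerate  # False
accepts_as_package   = package >= min_feerate                                  # True
# Evaluating the parent by itself, the node drops it and never sees the child
# that would have paid for it; evaluated as a package, the pair clears the floor.
```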

I understand what the goal is and what the implementation recommendations are up to a point, but why should we expect to get to a point where we can depend on this sort of thing? It seems fishy. What reason do you have to think that random Bitcoin node operators are going to follow this policy?

I think at the very least it makes sense for mining nodes to implement these things because it means they get better fees. I agree that it gets more complicated beyond that. It does seem like at least a win in that scope.

I want this stuff too but it is very different than other Bitcoin technology related discussion which has to do more with what are the other pre-agreed upon rules that are enforceable as a software consensus.

I don’t know if things are necessarily that straightforward. What do you do with a double spend? Both of them are consensus valid on the network. We are racing, you and me, we are both in a situation where we want to spend from a UTXO and we both can. Do you as a default on the P2P network say that somebody that is trying to double spend is a malicious node and ban them, or do you allow the double spend and open yourself up to spam?

It is out of scope, it is persuasion.

It is not out of scope. Then you open yourself up to spam which means you can crash other people’s nodes and you end up not having a P2P network.

I don’t mean it is out of scope as a node operator. All we can really do is try to put together good arguments why people should adopt some policy but I don’t see it as something as dependable as the UTXO set that I validate as I run my node. It is not even a matter of degree, it is a matter of kind. Being able to trust other people’s mempool management, I don’t know how you would do it. It is weak subjectivity at that point.

On the Lightning Network you broadcast in your gossip for node discovery or the features you support. If you claim to support certain features and then someone can detect that you don’t then you put that node on your blacklist. I don’t know if a similar kind of thing would work here.

I think the proposal isn’t to add package relay as one of those flags.

If you could do that in your node discovery, you could find the people who are supporting the things that they are supposed to. You can retroactively detect bad behavior.

That is getting closer to the sort of thing that would make me more comfortable. I worry that this is conflating working on Bitcoin Core with making dependable changes to Bitcoin as a software system that people run independently and autonomously.

What transactions should be relayed across the P2P network is what this is trying to answer. Depending on context a lot of things can be consensus valid but, going back to my double spend example, can open you up to various attack vectors or maybe not be useful in the first place. One fundamental premise that I think we all agree on is that you can relay a transaction across the P2P network. I also think we are on the same page that this isn’t the same as consensus rules, but it is also very important because we need this assumption of being able to relay transactions. I think you have some valid points. It is a really interesting area of discussion. I am personally interested in it so I will be following it.

I think at the very least since so many people don’t customize this part of their node it would be neat if CPFP worked sometimes which it currently doesn’t really.

Ideally you want your own node to have the best mempool possible and this gets you closer to that. That is just better for yourself. If everyone acts in self interest…

Why is it that having an accurate mempool is important to your average node?

When you get a block if you don’t have a transaction in a mempool you will have to revalidate it and sometimes you will have to download it if you are using compact blocks.

Fee estimation as well.

Here there is the trade-off: does a good mempool mean knowing about all plausible transactions? Then the spam risk is really huge. There are DoS concerns.

If you are limiting it to 100 MB you still want the highest fee 100 MB transactions because they are most likely to be included in blocks.

You want Bitcoin to succeed and to work well. As a node operator part of what you are doing is you are trying to estimate what would make Bitcoin the most useful. If there is a really clear cogent argument for having some potentially complex mempool management policy potentially involving additional coordination with your peers or even an overlay network, I agree that we shouldn’t be surprised if people adopt that. Getting from there to saying it is something you can depend on, I don’t know how to quantify that.

Ethereum gas prices

https://twitter.com/rossdefi/status/1293606969752924162?s=20

In other news gas prices on Ethereum are going bananas. According to this guy it was 30 dollars to submit a Ethereum transaction earlier today. Maybe it has gone up or down since then. I think this is good for Ethereum because that proves it is viable if they ever go to…. is Ethereum deflationary still?

They have EIP 1559 I think, which makes it so that all fees are burned if it is over a certain limit or something like that.

There is a really weird situation, the Ethereum community seems to be rooting for Ethereum 2.0. What is going to happen is that it is going to be a completely separate blockchain. After five years they are going to merge the two together. We have no idea what the issuance rate is or if ETH is deflationary. The community has no idea what is happening with ETH 2.0.

There is an open question, are blockchains sustainable over the long term? I don’t know if ETH necessarily answers this but I think it is encouraging to see people are willing to pay high fees for transactions. It means you are providing something valuable. Some day maybe we will get to this level on Bitcoin again. Ethereum is validating its valuation now.

On the fees I bet the distribution looks like… the people who regularly pay the highest fees are people who have not optimized their fee paying logic.

I disagree with that but we are going to talk about that later.

It is very interesting that people are willing to pay these fees. I think it is interesting when you look at the use cases for paying these fees on Ethereum, I think the majority of people are involved in the new DeFi yield farming. They are not using it for a real business case, they are using it to get access to this new token that exists and then trying to be early for that. They are willing to pay whatever fee necessary to get in early.

High transaction usage on Ethereum

https://twitter.com/juscamarena/status/1285006400792354816?s=20

I didn’t realize that according to Justin Camarena it takes four transactions to send an ERC20 token. Is this right? Does it take four transactions like he claims to send and withdraw from Coinbase?

It depends what he is doing. If he is taking an ERC20 off of Coinbase, that takes one transaction. Usually if he wants to use it with a particular Dapp then he would need to do an approve transaction. That would be two. And deposit it, that would be three. I’m not sure how he is getting four from that, I see how it could be three.

That is very interesting to me because I didn’t realize that there was so much overhead in making a ERC20 transaction on the blockchain. They could use some of this package relay stuff. I guess it doesn’t actually reduce the number of transactions. Today on Twitter I have seen a lot of talk about Ethereum and Layer 2 solutions but we will see how serious they are.

When you say transaction I guess I don’t know enough about ETH to fully comprehend, is it true that with ERC20 tokens you are not sending ETH you are updating a state? Is this somehow different with a lower fee than I would expect coming from Bitcoin world?

It is a state update but it is essentially fixed size so it scales linearly.

If it is true that it is a third the amount I expect from a value transfer transaction and it takes three transactions and that is also a fixed number this seems fine.

A value transfer of ETH by itself is the cheapest transaction you can do. Anything that includes any amount of data costs extra. It is the other way.
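
As a rough illustration of that ordering, here is some back-of-the-envelope gas arithmetic; every number below is an assumption for the sake of the calculation, not data from the thread:

```python
# Back-of-the-envelope gas arithmetic; the gas price and ETH price are made up.
GWEI_IN_ETH = 1e-9
eth_price_usd  = 400     # assumed spot price
gas_price_gwei = 250     # assumed congested gas price

def tx_cost_usd(gas_used):
    return gas_used * gas_price_gwei * GWEI_IN_ETH * eth_price_usd

plain_eth_transfer = tx_cost_usd(21_000)   # fixed base cost, ~$2.10 here
erc20_transfer     = tx_cost_usd(60_000)   # extra calldata/storage, ~$6.00 here
```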

Cloudflare DNS outage

https://twitter.com/lopp/status/1284275353389862914?s=20

Cloudflare had a DNS outage and there was a noticeable drop in Bitcoin transactions during this Cloudflare outage. I found that fascinating. It just goes to show who is making transactions on the network. It is probably Coinbase, Blockchain, all the big exchanges etc. It just goes to show how much influence Cloudflare has over the entire internet.

Altcoin traceability (Ye, Ojukwu, Hsu, Hu)

https://eprint.iacr.org/2020/593.pdf

There is a class project that came out a few months ago and made its way to the news. I wouldn’t really mention this otherwise except that you have probably heard about it in the news at some point. It is an altcoin traceability paper; some news agencies referred to it as the CMU paper. One thing to remember is that this is a pre-print. Even though it has been revised a few times to make it a lot better than the initial version I still take significant issue with many parts of it. This is an example of a paper where you might look at the work and it might look simple enough at a high level. “I can try to understand this chart.” The problem is that it is approachable enough that it seems super simple, but there is an extra layer on top of it that you might not be thinking about. That is very difficult to capture. I want to stress that when you are looking at certain research papers, especially ones that seem approachable… Here is an example. They look at Zcash and Monero predominantly in this paper. Here you can say “Here is the effective anonymity set for different mixing sizes.” You might say “It is still quite ineffective for large rings.” But all of these are pre RingCT outputs where the deducibility was already 90 percent or so. This is an example where this has had no relevance since 2017. That isn’t made super clear as you are reading this research paper. The only reason I am mentioning this, and I don’t want to spend that much time on it, is that it was referenced in a lot of media. There is a new version out that does make certain parts more accurate but at the end of the day it was a pre-print and it was a class project. Take that for what it is worth. It wasn’t meant to be a super in depth research paper. If you want to look at certain types of things to keep in mind when you are doing analysis of blockchains, it repeats a lot of the same methods as other papers.

What was the original conclusion of the paper that the authors claimed?

It was so biased. They made claims that were very qualitative and opinion oriented. They were like “We are going to say this without significant evidence” and it is completely extrapolating. I took significant issue with a lot of the claims they had. The whole point of the paper is to replicate a lot of things that other people have done. You have probably heard of some of the work by Andrew Miller. This is a really easy paper to understand on a super high level. That is partially why it is so dangerous, because you might think that you get it more than you do. It is really easy to read. They will talk about some types of heuristics with Monero. “The first output is most likely to be the real spend” and “when a transaction has two or more transaction outputs and two or more of those outputs are included in different inputs of another transaction then those included outputs are assumed to be the real inputs.” They do what other papers have done. This particular paper did not really introduce new analysis. That was another reason it was so dangerous in the media, repeating this as new research even though it is only applicable to many years ago. I wanted to point it out just because it was in the media a lot compared to what you would typically expect for a pre-print.
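
A small sketch of the second heuristic quoted there; the data layout is invented purely for illustration:

```python
# Sketch of the "output merging" heuristic described above: if two or more
# outputs created by the same earlier transaction show up as ring members in
# *different* inputs of one spending transaction, guess those are the real spends.
def flag_merged_inputs(input_rings, outputs_by_source_tx):
    flagged = []
    for source_tx, outs in outputs_by_source_tx.items():
        hits = [(i, member) for i, ring in enumerate(input_rings)
                            for member in ring if member in outs]
        if len({i for i, _ in hits}) >= 2:   # same source feeds 2+ distinct inputs
            flagged.append((source_tx, hits))
    return flagged
```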

If this was presented in a fraud case or something like that would it be taken seriously? If there was a legal case involving blockchain analysis would a paper like this be admissible?

It is better than nothing. I am sure that even in its current state there would be substantial push back by other experts. One expert against a bunch of other experts saying it is ridiculous. For what it is worth they have made revisions to the paper to make it better. It is difficult to present some of the evidence as is. It is very tied to certain time periods. This chart shows the applicability. After 2017 the whole application of the paper is not very useful, but this is a paper from this year. The media doesn’t know how to interpret this, which is one thing I wanted to point out. This is a really easy high level paper to read, just be really careful to make sure you get your time points correct. They try to look at Zcash for example and they didn’t have proper tooling available to continue to update it. They didn’t really update a lot of the data and then extrapolated it. I would love to see them run the real data, not just extrapolate old findings. That would be much more useful.

Separate coinbase and non-coinbase rings on Monero

https://github.com/monero-project/monero/issues/6688

I have been focusing on the idea of separating coinbase and non-coinbase rings. With Monero and Bitcoin you have something called a coinbase output. No, these are not outputs related to Coinbase the exchange. What they actually are is block reward money. When you mine a block you are granted the right to a specific type of output that includes the new monetary issuance and the fees people pay in transactions. Whether an output is a coinbase output or not has real implications for what type of user it is. If I sent someone money and my entropy set was related to this specific type of output, either I am running my own mining pool or solo mining funds, or those actually aren’t my funds that I’m sending you, it is other funds I’m sending you. This is an example of a point of metadata that might not make for a convincing spend if you are using it as potential entropy. Think about it like you are using a mixer. If you were mixing with a bunch of other mining pools and you are the only normal user in the mixing process you’d probably not get that much entropy from mixing in there, because people would know that you would be the one contributing the non-mining related funds. With Monero this happens relatively often because it is a non-interactive process. People who do generate coinbase outputs are participating by having their outputs available as decoys. When I started looking at this a few years ago it was a bigger issue because about 20 percent of the outputs were coinbase outputs. Now it is really closer to 3 percent. It really isn’t a significant issue anymore. The issue goes away in part by just having bigger rings, but really based off greater network usage in general. The number of coinbase outputs generated per day is roughly constant regardless of total network activity. As network activity goes up, the proportional percent of coinbase outputs goes down. We also looked at a few things to see what would happen if we tried to adjust our selection algorithm for coinbase or non-coinbase rings. The idea was we can help protect normal users by having them not spend coinbase outputs. If you are a mining pool then you would only make coinbase rings. That is how we could enforce it at a consensus level. We noticed that there was no significant difference in the spend distribution of coinbase and non-coinbase outputs. I found this really interesting. I did not expect this result. I thought coinbase outputs would be spent faster on average because a mining pool mines a block and they send the funds to another user as a payout. I assumed that would be pretty instant but it actually is very similar to normal spend patterns. Of course for Monero this is only deducible up to 2017. We cannot test this on current Monero usage because that data is not available. It is too private.

Going back to the previous paper we just talked about. Are you drawing your analysis from the same dataset they were? Did Monero make a consensus change that makes it unanalyzable now?

What I am talking about with coinbase rings should be considered pretty independent of this. This doesn’t really talk about coinbase outputs. Monero used to have really weak ring signatures up until 2017. We can actually look at Monero transactions up until 2017 and for most transactions we can tell which specific output is being spent. This is not tied to an address, it is not a Monero address that is sending this, but we know which Monero output is spent in the transaction. If I go back to this paper you can see here, for example in Figure 6, that up until April the vast majority were quite deducible. The green line is the deducible portion. Frankly that did continue until about January 2017 when there was a steep drop off. Until then, that is 90 percent of transactions where we are able to tell which real output is spent. After that point we can’t really tell anymore. This was a consensus change that enabled a new type of output that provided higher levels of protection, and therefore we can no longer deduce this information. We have to look at Monero data up until 2017 and Bitcoin data to determine spends. This is something that Moser et al and other papers have looked at too, in order to determine what the input selection algorithm should be. Ultimately the big takeaway is that coinbase outputs are interesting because they are not a type of output that a normal user might spend. You could separate them and say normal users will actually, by network decree, not spend these funds. However as network activity increases this becomes a moot point. To what extent do you need to care as a network protocol designer? That really depends on activity and use. It is never good but it can be small enough for you to say the extra complication this would add to consensus is not worth the change. The real impact is very, very small. This is still an ongoing discussion after about two years in the Monero community. Of course the Monero network has grown in number of transactions since then.

What would it mean for a coinbase output not to be spendable? Wouldn’t it just not have any value then?

Suppose that you run a public mining pool. People will send you hash rate and you occasionally mine a Monero block. I am just a user of the network. I am not even mining on your pool. I am just buying Monero on an exchange and sending it to another user. At the moment when either of us sends transactions, what we do is look at all of the outputs on the blockchain and we will semi randomly, according to a set process, select other outputs on the blockchain, which include both coinbase and non-coinbase outputs, as part of the spend. When you are spending funds, since you are a public mining pool and you frequently mine actual coinbase outputs, it is understandable for you to actually spend these coinbase outputs. It is also understandable for you to spend non-coinbase outputs. Let’s say you’ll spend a coinbase output to a miner and then you will receive a non-coinbase output as change which you will then use to spend and pay other users. Meanwhile since I am not directly mining I am never the money printer. I am never actually the person that would convincingly hold a coinbase output myself. Whenever I send transactions that include a coinbase output, outside observers can pretty reliably say “This isn’t a real output that is being spent because why would they ever have possession of this output.” It is not convincing, it is not realistic. This person is probably not running a mining pool or probably isn’t getting super lucky solo mining.

The actual proposal is to partition the anonymity sets between coinbase and non-coinbase?

Exactly. As you do that there are quite a few things you can do. Mining pools publish a lot of information. Not only is it the case that coinbase outputs are not convincing for an individual to spend, most mining pools will publish what blocks they mine and they will publish lists of transactions that they make as payouts. You might see that this coinbase output was produced by this mining pool, so what is the chance that this person is making a payment and is not the mining pool? Typically it is going to be pretty large. These are pretty toxic, deducible outputs to begin with. Depending on network activity we can increase the effective ring size, not the real true ring size but the effectiveness after applying these heuristics, by about 3-4 percent. This seems small at the moment but it doesn’t cost any performance. It is just a smarter way of selecting outputs. That is one thing we can do. It is an option where we can stratify the different types of transactions by a type of metadata that we are required to have onchain anyway.
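
A toy sketch of what that partitioning could look like; this is not Monero’s real decoy selection algorithm, which weights recent outputs, just the class-matching idea:

```python
import random

# Toy sketch of partitioned ring construction: decoys are drawn only from the
# same class (coinbase vs non-coinbase) as the output actually being spent, so
# an ordinary user's ring can never be given away by an unconvincing coinbase decoy.
def build_ring(real_output, chain_outputs, ring_size=11):
    same_class = [o for o in chain_outputs
                  if o["is_coinbase"] == real_output["is_coinbase"]
                  and o["id"] != real_output["id"]]
    decoys = random.sample(same_class, ring_size - 1)
    ring = decoys + [real_output]
    random.shuffle(ring)   # position of the real spend is not revealed
    return ring
```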

You have been talking about this like it is a new consensus rule change. Shouldn’t this be a wallet feature where your wallet doesn’t select coinbase UTXOs? It seems like a soft fork is extreme.

That is absolutely true. There are a few reasons why you may want to have a network upgrade. One, each wallet is going to do their own thing. If a user is sending funds that includes a coinbase decoy for example and suppose only one wallet does this, all the others have updated, then you know it is a user of that wallet. This is not ideal. Also we have the opportunity to do things like say coinbase outputs are pretty toxic anyway, let’s inform users that coinbase outputs should not have a ring. Perhaps we say you shouldn’t directly spend coinbase outputs to users if you do care about privacy in that transaction. Then just for those outputs we can make the ring size one and that would make the network more efficient but it would also require a consensus change. I also think in general even though we can make it a wallet change, as far as privacy is concerned it is better to enforce behavior rather than allow wallets to do their own thing. In the Monero community in general people are a little more open to the idea of forcing best practice behavior instead of simply encouraging it. Definitely a cultural difference between Monero and a few other communities.

In my mind saying that this wallet’s transactions aren’t valid now is way more extreme than forcing them to upgrade their wallet and making a backward compatibility change. If that is what Monero does it is interesting.

There are pros and cons.

Do you recommend the paper? I wasn’t sure if there was any tangible as a result of it.

I wouldn’t advise someone not to read it, but I also wouldn’t consider it an essential read. It is most useful for people who already have the context of the other papers. It updates some of their data. It doesn’t really present much new compared to those. Part of the reason why we like to enforce behavior, and this is perhaps controversial among Bitcoin implementations, is that enforcing behavior gives much better results for privacy than not. Zcash: we constantly have people arguing Monero is better, Zcash is better. You can see the proportion of transactions that are shielded on the two networks comparatively and you can see one is a little bit more adopted. Monero has over 100 times as many transactions that hide the sender, receiver and amount as Zcash. That’s because when you allow backwards compatibility and exchanges and users are not forced to use the best practices, people typically don’t. Think about Bitcoin SegWit adoption. Ideally people should switch right away. But people don’t. If people aren’t going to switch for a financial incentive, why are exchanges going to switch to get rid of a point of metadata that they don’t care about unless you force them? With privacy, in my opinion, it is not just whether a feature is available but whether it is enforced. More importantly, whether people are actually using it. As you are evaluating implementations you should see if they are adopted or not. I know a lot of people talk about Samourai for example. This is some data on Samourai wallet usage. I want to be really clear: these numbers are not very comparable. These are interactive processes. For a 5 million satoshi pool 602 rounds occurred, but those each include several participants. It is not just one participant that shows up. It might be ten participants or something. Even so, if you stack all of these on top of each other, which you cannot do because they are different amounts, they are denominated and they each have their own anonymity sets, it is still pretty darn tiny. The implementation and encouraging good use is critically important. People think pretty highly of Samourai in general. I know the Samourai vs Wasabi feud, sometimes friendly, sometimes not so friendly. Ultimately the actual adoption is kind of small. One thing I mention is we are building these networks, we can talk about them in a research sense but we also need to talk about them in the sense that these are decentralized networks where it is permissionless and anybody can send a transaction. People are going to do some weird stuff and people are not going to follow the best practices. That does matter for privacy adoption quite significantly.

Monero CLSAG audit results

https://web.getmonero.org/2020/07/31/clsag-audit.html

This is the big thing coming up with CLSAG. These are a more efficient form of Monero ring signature. They had an audit report come out where JP Aumasson and Antony Vennard reviewed it. You can see the whole version here. They had some proposed changes to the actual paper and how they have the proofs. But ultimately they were saying “This should be stronger.” It resulted in no material code changes for the actual implementation. The paper changed but the code didn’t. You can see the write up here. They are slightly more efficient transactions that will be out this October.

Are you talking size efficient or time efficient?

Both. They have stronger security proofs. They have about 25 percent smaller transaction size and they have 10-20 percent more efficient verification time. It is a really win-win-win across the board there.

FROST: Flexible Round-Optimized Schnorr Threshold Signatures

https://eprint.iacr.org/2020/852.pdf

This paper came out, FROST, which is a technique for doing a multisig to produce a Schnorr signature. I don’t know about the review that has gone on here. I wasn’t able to find any review of it. It is a pre-print. There is always a certain caveat. You should take a pre-print with a grain of salt but it is pretty serious. What distinguishes this Schnorr multisig is that it is a proposal for a two round Schnorr multisig. MuSig is probably the Schnorr multisig that most people are familiar with in this circle. MuSig started out life as a two round multisig scheme which had a vulnerability. The current best replacement for it is a three round. There are some rumors that there is a fix for that. I haven’t seen anything yet. If anybody else has please speak.

I have seen a pre-print that I don’t think is public yet. They have a way of doing MuSig with deterministic nonces in a fancy way. Not in any straightforward way. It is not too fancy but it uses bulletproofs that gets it back down to two rounds because the nonce no longer requires its own round.

I am looking forward to seeing that. If you have a math bent this paper is actually very nice. The constructions are beautiful. For those who are familiar with Shamir’s Secret Sharing, the idea is that you construct a polynomial, you give your users certain point values and then together a threshold subset of the users can reconstruct the secret. There is actually an interesting interplay between Shamir’s Secret Sharing and just additive secret sharing where, under certain situations, you just add the shares. They have leveraged this duality that exists in a certain setting in order to get the low number of rounds. There is a key setup where what you are doing is everyone chooses a secret. I am going to ignore the binding and commitments for now. They construct a Shamir’s Secret Share for everybody else. Then they all add up the Shamir’s Secret Shares that all the other users give them and this gives them jointly a Shamir’s Secret Share for the sum of the individual secret shares. That’s the key. Nobody knows the key although any threshold t of them can reconstruct the secret directly. The point of the protocol though is that by doing signing nobody learns the secret. The signing procedure doesn’t leak the secret although in principle, if they got together, they could reconstruct it. Of course if you reconstruct the secret then every user who was in that reconstruction now has unilateral access to the secret. The signing procedure is somewhat similar to what you might be expecting. You have to construct a nonce. The way that you construct the nonce is you have to take care to bind everything properly to avoid certain classes of attack. What you do is everyone in this subset of size t, where t is the threshold, chooses a random nonce and then they construct a Shamir’s Secret Share for it and share it out to everybody. There is a nonce commitment phase which you have to do. In this setting they describe it as a preprocessing step. You might get together with the group of users that you think is going to be the most common signers. You are doing a 2-of-3, you have two signers that you think are the most likely, they get together, they construct a thousand nonces in one go and then they can use those nonces in the protocol later on and make the protocol from that point on one round. Signing uses this trick where you can convert between additive and Shamir secret shares. The nonce that is shared is an additive secret share. The key that is shared is a Shamir’s Secret Share because you are using this threshold technique. There is a method for combining them. If you have an additive share for t people you can convert it into a Shamir’s Secret Share without any additional interaction. That’s the kernel of the idea. I think there is a small error in this paper although I could easily be the one who is wrong, that is the most likely. I think that the group response, you actually have to reconstruct this as a Shamir’s Secret Share. I think this is a small error.
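To make the Shamir piece concrete, here is a minimal Python sketch (toy prime field, made-up parameters, nothing taken from the FROST paper itself): it splits a secret into shares, reconstructs it from a threshold subset by Lagrange interpolation, and shows that adding shares pointwise gives a sharing of the sum, which is the additive/Shamir interplay being described.

```python
import random

P = 2**127 - 1  # a prime; toy field modulus, not the secp256k1 group order

def make_shares(secret, t, n):
    """Split `secret` into n Shamir shares with threshold t (degree t-1 polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return {x: f(x) for x in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at x=0 from a dict {x: f(x)} of at least t shares."""
    secret = 0
    for xi, yi in shares.items():
        num, den = 1, 1
        for xj in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

a, b = 1234, 5678
shares_a = make_shares(a, t=2, n=3)
shares_b = make_shares(b, t=2, n=3)

# Any 2 of the 3 shares recover the secret.
assert reconstruct({1: shares_a[1], 3: shares_a[3]}) == a

# Adding shares pointwise gives a valid sharing of a+b: this is the kind of
# linearity the key-generation step leans on when participants sum the shares
# that everybody else sent them.
summed = {x: (shares_a[x] + shares_b[x]) % P for x in shares_a}
assert reconstruct({1: summed[1], 2: summed[2]}) == (a + b) % P
```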

How do they do the multiplication with the hash and the secret shard?

They have this commitment which is broken up into two pieces. The thing that they use is the fact that you can reconstruct the challenge using just the commitment values. It is bound to the round and the message so there is no possibility of the Drijvers attack. That is not a good answer.

With most of these threshold schemes everyone is happy with the addition and all of the complexity is always “We need to go through a multiplication circuit or something.”

This one is weird because the multiplication doesn’t play much of a role actually. It is more the reconstruction. If you have the group key that is shared as a Shamir’s Secret Share it is not clear what to do about the nonces so you can combine them and take advantage of Schnorr linearity. They found a way to do that.

Are public versions of the shards shared at any point? Is that how this is being done?

Let me show you the key generation. If you are asking about the key generation the public key is the Shamir’s Secret and then the public shares are not additive secret shares. They are public commitments to the Shamir shares. My verdict for that paper is do read, it is interesting. Having Schnorr multisig that has low interaction has proven to be extremely useful in the design of cryptocurrency stuff.

Minsc - A Miniscript based scripting language for Bitcoin contracts

https://min.sc/

For those who don’t know Miniscript is a language that encodes to Bitcoin Script. The thing that is cool about Miniscript is that it tracks a number of stack states as part of its type system. When you have a Miniscript expression you can understand how the subexpressions will edit the stack. The reason why you would want to do this is you have a table of satisfactions. Typically if you have some script it is complicated to work out what it takes to redeem it. With Miniscript, for every script that you write the type system tells you exactly what conditions you need to redeem that script. All of this is geared towards making analysis like this much more user friendly. It is very cool. Pieter Wuille put together a Policy to Miniscript compiler. Policy is just a simplified version without any of the typing rules. It has got this very simple set of combinators that you can use. Shesek put together a version of the Policy language with variable abstraction, functions and some nicer syntax. This is going to be really useful for people who are experimenting with Bitcoin Script policies. It doesn’t have the full power of Miniscript because you can’t tell what is happening on the stack. If you have a Rust project you can include this as a library. But otherwise you can also compile sources using the compiler that Shesek put together. The website is also very nice. You can put in an expression, you get syntax highlighting and you get all the compiled artifacts. It is a really cool tool for playing with script. I recommend that. The GitHub repo, unless you are a real Rust fan there is no need to look into the details. Most of the meat of this is in sipa’s implementation of the Policy to Miniscript compiler.

BIP 118 (ANYPREVOUT)

https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki

BIP 118 used to be SIGHASH_NOINPUT, now it is SIGHASH_ANYPREVOUT. The update that is being discussed right now is for integration with Taproot. The main design decisions that you should think about, there are two. One is fairly straightforward if you know how Tapscript works. If you want to use one of these new sighashes you have to use a new key type. In Taproot there is the notion of key types and this means that in order to construct a key post Taproot it is not enough to have a private key, you also have to have a marker that tells you what the capabilities of the corresponding public key are. It is part of a general move to make everything a lot more explicit and incumbent on programmers to explicitly tell the system what you are trying to do and to fail otherwise. Hashes post Taproot have a tag so you can’t reuse hashes in one Taproot setting in another one. And keys have a byte tag so that you can’t reuse public keys in different settings. The semantics of the two sighashes, ANYPREVOUT and ANYPREVOUTANYSCRIPT, if you are really into the guts of Taproot they affect what the transaction digest is that you sign. You should check out what the exact details are. As far as I could tell it is pretty straightforward. You are just not including certain existing Taproot fields. On the BIP right above the security section there is a summary for what these two sighashes do. The main thing is that ANYPREVOUT is anyone can pay except you don’t commit to the outpoint. ANYPREVOUT, you are not committing to the outpoint because that’s something you want to allow to be arbitrary. ANYPREVOUTANYSCRIPT is even weaker. Not only are you not committing to the outpoint but you are also not committing to the spend script.
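As a rough illustration of that summary, here is a Python sketch of a toy signature-message builder that drops fields depending on the mode. The field names and serialization are invented for illustration; this is not the actual BIP 118 or BIP 341 digest algorithm.

```python
import hashlib

def sighash_msg(tx_fields, mode):
    """Illustrative only: build the message a signature commits to.
    `tx_fields` is a dict with hypothetical keys; this is NOT the real
    BIP 118 / BIP 341 transaction digest construction."""
    parts = [tx_fields["version"], tx_fields["outputs"], tx_fields["locktime"]]
    if mode == "ALL":
        parts.append(tx_fields["outpoint"])      # which specific coin is spent
        parts.append(tx_fields["spend_script"])  # the leaf script being satisfied
    elif mode == "ANYPREVOUT":
        parts.append(tx_fields["spend_script"])  # outpoint left out: the signature can be
                                                 # rebound to another output with this script/key
    elif mode == "ANYPREVOUTANYSCRIPT":
        pass                                     # neither outpoint nor spend script committed
    return hashlib.sha256(b"|".join(parts)).digest()

tx = {
    "version": b"\x02",
    "outpoint": b"txid:0",
    "spend_script": b"<apo-key> OP_CHECKSIG",
    "outputs": b"1 BTC to Alice",
    "locktime": b"\x00",
}
# Same transaction, three different commitments: the weaker the mode,
# the more prevouts the resulting signature can later be attached to.
for m in ("ALL", "ANYPREVOUT", "ANYPREVOUTANYSCRIPT"):
    print(m, sighash_msg(tx, m).hex()[:16])
```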

I’m looking at the signature replay section and I know one objection to SIGHASH_NOINPUT, ANYPREVOUT is that if you do address reusage, let’s say you are a big exchange and you have three HSMs that you are using as your multisig wallet. It is super secure but you can’t regenerate addresses because there is only one key on the HSM. You end up with a static address. If somehow you end up signing an ANYPREVOUT input with these HSMs your wallet could be drained which is what they are hinting at with the signature replay section here. Is that addressed at all in the newer iteration?

No it is not. Nobody knows a way of dealing with it that is better than allowing it. I omitted something important which is that these keys, you can’t use them with a key path, you can only use them on a script path. Taproot has key path and script path spending modes which the user chooses at spend time. ANYPREVOUT is only available on the script path spending mode. This new key type which is the only key type with which you can use the new sighashes according to this proposal are only available in a Tapscript script. This is interesting but I still think that we are probably quite a way from actual deployment and deployment of eltoo. That is the main use case for this new sighash. This is nice because it can be included in another batch of BIPs. It is nice and self contained and clear. You are adding semantics but you are not going to care much about it unless other features become available. That’s the reality here as far as I can see.

BIP 118 is now ANYPREVOUT?

This is not merged and it looks like it might get a new BIP number. SIGHASH_NOINPUT, the name is almost certainly going to change. The SIGHASH_NOINPUT proposal is probably just going to remain in the historical record as an obsolete BIP.

We have been using NOINPUT for our vaults protocol but ANYPREVOUT was also mentioned in the paper. I am not sure on the actual differences.

At a high level they do the same thing. This says “Here is one way that you can incorporate it into a post Taproot Bitcoin.”

If you are on the Lightning dev mailing list people use NOINPUT and ANYPREVOUT interchangeably.

They are synonyms. You are mostly talking about higher level stuff. You want to be able to rebind your transactions. In some sense it doesn’t matter if you have to use a Tapscript or whatever special key type. You want the functionality.

Bitcoin Mining Hashrate and Power Analysis (BitOoda research)

https://medium.com/@BitOoda/bitcoin-mining-hashrate-and-power-analysis-bitooda-research-ebc25f5650bf

This is FCAT which is a subsidiary of Fidelity. They had a really great mining hash rate and power analysis. I am going to go through it pretty quickly. There are some interesting components to take away from this. They go into what is the common mining hardware, what is its efficiency. Roughly 50 percent of mining capacity is in China right now. I’m not really surprised there. They get into cost analysis. In their assessment 50 percent of all Bitcoin mining capacity pays 3 cents or less per kilowatt hour. They are doing energy arbitrage. They are finding pockets where it is really cheap to mine. According to them it is about 5000 dollars to mine a Bitcoin these days. Also the S9 class rigs have, I think, been in the field for 3-5 years at this point. According to this research group you need sub 2 cents per kilowatt hour to break even with these S9s that are pretty old at this point. Here is the most interesting takeaway. A significant portion of the Chinese capacity migrates to take advantage of lower prices during the flood season. They go into a bunch of explanation here. What is this flood season? I was reading this and I didn’t know much about it. I had heard about it but I hadn’t investigated. The Southwestern provinces of Sichuan and Yunnan face heavy rainfall from May to October. This leads to huge inflows to the dams in these provinces causing a surge in production of hydroelectric power during this time. This power is sold cheaply to Bitcoin miners as the production capacity exceeds demand. Excess water is released from overflowing dams so selling cheap power is a win-win for both utilities and miners. This access to cheaper electricity attracts miners who migrate from nearby provinces to take advantage of the low price. Miners pay roughly 2-3 cents per kWh in northern China during the dry months but sub 1 cent per kWh in Sichuan and Yunnan during the May to October wet season. They will move their mining operations from May to October down to these other provinces to take advantage of this hydroelectric power which is going on right now. This is the flood season in these provinces. Fascinating stuff. For some context this is Yunnan right here and Sichuan is here. I think they are moving from up in the northern parts here.
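As a sanity check on the roughly 5000 dollar figure, here is a back-of-the-envelope Python calculation. Every input (fleet efficiency, network hashrate, power price) is an assumption picked for illustration, not a number taken from the BitOoda report.

```python
# Back-of-the-envelope electricity cost per mined BTC. All inputs are
# illustrative, roughly 2020-era assumptions, not figures from the report.
network_hashrate = 120e18        # hashes per second (~120 EH/s, assumed)
fleet_efficiency_j_per_th = 45   # joules per terahash (newer-generation rigs, assumed)
power_price_per_kwh = 0.03       # dollars, the "3 cents or less" tier
subsidy_btc_per_block = 6.25
blocks_per_day = 144

power_watts = network_hashrate / 1e12 * fleet_efficiency_j_per_th
kwh_per_day = power_watts * 24 / 1000
electricity_cost_per_day = kwh_per_day * power_price_per_kwh
btc_per_day = subsidy_btc_per_block * blocks_per_day

print(f"network draw: {power_watts / 1e9:.1f} GW")
print(f"electricity cost per BTC: ${electricity_cost_per_day / btc_per_day:,.0f}")
# With these assumptions this lands in the same ballpark as the ~$5000 figure;
# an S9-class fleet at roughly 98 J/TH needs much cheaper power to break even.
```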

A mysterious group has hijacked Tor exit nodes to perform SSL stripping attacks

https://www.zdnet.com/article/a-mysterious-group-has-hijacked-tor-exit-nodes-to-perform-ssl-stripping-attacks/

A mysterious group hijacking Tor exit nodes to perform SSL stripping attacks. From my understanding Tor exit relays are trying to replace people’s Bitcoin addresses more or less. If you are using the Tor network and all your traffic gets encrypted and routed through this network it has to eventually exit the network and get back into plaintext. What these exit nodes are doing is they are looking at this plaintext, seeing if there is a Bitcoin address here, if so copy paste replace with my Bitcoin address rather than the intended address.

In the West most of the internet is now happily routed through TLS connections. One of the things this is trying to do is trying to redirect users to a non TLS version. This is somewhat difficult now because you have an end-to-end TLS connection over Tor if it works properly. It is not possible as far as we know to eavesdrop on that connection or to modify it as a man in the middle. You need to downgrade first.

I think they saw similar attacks to this in the 2018 timeframe but I guess they are picking back up.

It was the scale of this that was so incredible. One of the things that I think is a shame, if anybody is feeling very public spirited and doesn’t mind dealing with a big headache, running a Tor exit node is a really good service. This is where you are going to have a hose that is spewing sewage out onto the internet. Tonnes of random trolls are going to be doing things that are going to get you called. Or at least that is the worry. I have a friend who runs a Tor exit relay and he said that they actually have never had a complaint which I found shocking.

I have had complaints when my neighbor who uses my wifi forgets to VPN when he downloads Game of Thrones or something. It is surprising that that wouldn’t happen.

Is he credible? Does he actually run the exit node? It is like spewing sewage.

I should practice what I preach but I would love more people to run Tor relays. If they get complaints be like “**** you I am running a Tor relay. This is good for freedom.” When it is your own time that you have to waste on complaints it is hard.

Taproot activation proposals

https://en.bitcoin.it/wiki/Taproot_activation_proposals

There has been a big debate on how to activate Taproot in Bitcoin. Communities take different approaches to this.

There are like ten different proposals on the different ways to do it. Most of them are variations of BIP 8 or BIP 9 kind of things. It is picking how long we want to wait for activation and whether we want to overlap them and stuff like that. Currently talk has died down. The Schnorr PR is getting really close to merge and with that the Taproot implementation gets closer to be merged into Core. The discussion will probably then pick back up. Right now as far as I can tell a lot of people are leaning towards either Modern Soft Fork Activation or “Let’s see what happens” where we do a BIP8 of one year and see what happens. Once the PRs get merged people will have to make a decision.

I listened to Luke Dashjr and Eric Lombrozo and I think I am now leaning away from the multiphase, long period activation like Modern Soft Fork Activation. When you said it’s split between those two, what is your feeling on the percentages either way? Is it 50/50?

I would say it is like 25 percent on Modern Soft Fork Activation and 75 percent on a BIP8 1 year. That is what I gathered but I haven’t gone into the IRC for a week or so. The conversation has died down a lot. I don’t think there has been much talk about it since.

I am a BIP9 supporter for what it is worth. Tar and feather me if you must.

The fastest draw on the blockchain: Next generation front running on Ethereum

https://medium.com/@amanusk/the-fastest-draw-on-the-blockchain-bzrx-example-6bd19fabdbe1

With all this DeFi going on in Ethereum there are obviously these large arbitrage opportunities. Since everything is on a blockchain there is no privacy at all in doing this stuff. It is fun to watch people bid up Ethereum blockchain fees, and also very clever folks using bots and stuff to game allocations in these DeFi protocols. This is a whole big case study of getting into this BZRX token launch. It is done onchain and everybody has got to try to buy into the token allocation at a specific time. It goes through these strategies that people could use to get a very cheap token allocation. If I’m not mistaken this guy made half a million dollars because he was clever enough to get in on the allocation and then sell at the right time too. The post talks about what a mempool is, which we were talking about earlier. How can you guarantee that your transaction to purchase tokens is confirmed as close as possible to the transaction that opens up the auction process? They go through a bunch of different tactics that you could take to increase your odds and do the smart thing. There was some stuff that wasn’t obvious to me. The naive thing I thought was pay a very high gas price and you are likely to get it confirmed close to this auction opening transaction. But that is actually not true. You want to pay the exact same gas price as the transaction that opens the auction so you are guaranteed to be confirmed at the same time. You don’t want to have your transaction being confirmed before one of these auction transactions.

One of the things that recently was very interesting with this front running situation is that there are now bots running on Ethereum where anytime you do any type of arbitrage that is profitable those bots will instantly see that transaction in the mempool and create a new transaction with a higher fee in order to take advantage of your arbitrage opportunity and have their transaction go through first. I thought that was one of the interesting things that people have been doing recently. Not too familiar with this one actually.

I think it is a very fun space to see this play out onchain. Very interesting game theory.

It is also really relevant for us as we are messing with Layer 2 protocols. People here are hacking the mempool in this very adversarial setting and we are just talking about it for the most part. We could learn a lot.

Shouldn’t the miners be able to do all this themselves if they have enough capital? They can do the arbitrage because they decide what goes in the block?

There are reports that some miners have been performing liquidations or favoring the ordering of transactions based on liquidating particular positions over professional liquidators that are trying to liquidate positions. Giving themselves an advantage in that way. There have been reports of that happening recently.

I agree that there is a tonne of stuff for us to learn in the Bitcoin ecosystem from what is going down in Ethereum right now. Especially around mempool stuff. Let’s look at this stuff and learn from it.

In terms of front running I used to work at a place where we were doing some Ethereum integration. At the time one of the suggestions that was catching a lot of traction was to have a commit, reveal workflow for participating in a decentralized order book. Does anybody know if that ever gained traction? I stopped paying attention. You would commit to the operation you’d want to do in one round, get that confirmed and then reveal in a future round. Your commits determine the execution order.

I am not familiar with that.

That sounds like a better idea.

It is a way to get rid of front running.

That is interesting. I think the majority of protocols aren’t doing that. The only thing that sounds similar to that is the auction system in Maker DAO where you call to liquidate a position first. Then it doesn’t get confirmed until an hour later when the auction is finished. This has its own issues but that seems more similar to a commit, reveal. That is the only thing I can think of that is remotely similar.
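For reference, a minimal Python sketch of the commit, reveal workflow mentioned above, using a plain salted hash commitment; the two-round structure is the point, everything else (order matching, on-chain mechanics) is left out.

```python
import hashlib, os

def commit(order: bytes, salt: bytes) -> bytes:
    # Round 1: publish only the hash; nobody can front run an order they cannot read.
    return hashlib.sha256(salt + order).digest()

def reveal_ok(commitment: bytes, order: bytes, salt: bytes) -> bool:
    # Round 2: publish order + salt; anyone can check it matches the earlier commitment.
    return hashlib.sha256(salt + order).digest() == commitment

order = b"BUY 100 TOKEN @ 1.05"
salt = os.urandom(16)           # prevents guessing small or likely orders from the hash
c = commit(order, salt)         # this is what gets confirmed first
assert reveal_ok(c, order, salt)            # honest reveal is accepted
assert not reveal_ok(c, b"BUY 200", salt)   # a changed order is rejected
# Execution order is then decided by commitment order, not by who paid the most gas
# at reveal time, which removes the simple front running strategies.
```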

Evidence of Mempool Manipulation on Black Thursday: Hammerbots, Mempool Congestion and Spontaneous Stuck Transactions

https://blog.blocknative.com/blog/mempool-forensics

More mempool manipulation, this is sort of the same topic. These guys are looking into Maker DAO liquidations between March 12th and 13th which was a very volatile time in the cryptocurrency markets. Of the 4000 liquidation auctions during Black Thursday, 1500 of them were won by zero bids over a 12 hour period, and 8 million dollars in aggregate locked collateralized debt positions was lost to these zero bid auctions. When they say a zero bid auction is it a trivial price being placed on the order book and they just got lucky it got filled? What exactly is meant by that?

I think what was happening here is there were a bunch of bots that came in. If you have an auction in Maker DAO, when they first implemented it the auction only lasted 10 minutes. What they would do is as soon as the value of a particular position went below the minimum collateralization ratio they would make a bid on that position for zero dollars. Usually what would happen is that in that 10 minutes someone else would outbid them. But what they would do is they would spam the entire network with a bunch of transactions at the same time so no one would be able to get a transaction in. At the end of 10 minutes they would win the bid. The person who had the position got screwed over.

This is fascinating stuff watching this play out on a live network and seeing people making substantial money from it too.

If you want to read more on how Maker DAO responded to this incident you can check this out.

Working with Binance to return 10,000 dollars of stolen crypto to victim

https://medium.com/mycrypto/working-with-binance-to-return-10-000-of-stolen-crypto-to-a-victim-3048bcc986a9

Another thing that has happened since we last talked was the cryptocurrency hack on Twitter, for those who live in a hole or whatever. Somebody hacked Twitter and was posting crypto scams. Joe Biden and Barack Obama’s accounts were posting this scam. What these people are talking about is how exchanges can be useful in returning people’s property when things are stolen. They claim Coinbase blocked user attempts to send over 280,000 dollars to the scam address. I think this is a real trade-off with centralized exchanges. Obviously some people don’t necessarily like centralized exchanges but I think we can all agree that it is probably good that they were censoring these transactions to keep people from paying in to some Twitter hacker’s crypto scam on Joe Biden’s account or Barack Obama’s account or whoever’s account.

If you are going to scam make sure you reuse addresses so Coinbase can’t block you.

Don’t reuse.

This is perfect marketing for financial regulation. “Look it sometimes doesn’t hurt people.”

That should be the Coinbase advertisement on TV or whatever.

Samourai Wallet Address Reuse Bug

https://medium.com/@thepiratewhocantbenamed/samourai-wallet-address-reuse-bug-1d64d311983d

There is a Samourai wallet address reuse bug that can be triggered in some cases, where the wallet reuses the same address due to a null pointer exception in the wallet handling code. The author of this post was not too thrilled with Samourai’s response and handling of the situation, from my understanding. Especially for a project that touts itself as privacy preserving.

The bug is fixed now. If you are using Samourai you are good.

BasicBlocker: Redesigning ISAs to Eliminate Speculative-Execution Attacks

https://arxiv.org/abs/2007.15919

Other things in the hardware realm. They are trying to fix all these vulnerabilities that are out there for speculative execution on modern processors. For those who don’t know, hardware is a lot faster than software. What hardware and software engineers have realized is that we should speculatively execute code on the hardware just in case that’s the branch people want to take in software. That can have security implications because if you have an IF ELSE statement, say if Alice has permission to the Bitcoin wallet then allow her in to touch funds or to touch a private key, else send her a rejected request. With speculative execution these processors will actually execute both sides of that branch for checking if Alice has access to private keys and that can now be cached on the processors, that is my understanding. That means you can maybe get access to something you shouldn’t get access to. This BasicBlocker, I think it was redesigning these instruction sets and asking for compiler updates and hardware updates to simplify the analysis of speculative execution. I don’t know, I didn’t have any strong opinions on it. I think we’ve got to solve the problem eventually but nobody wants to take a 30 percent haircut on their performance.

Building formal tools to analyze this sounds like it would be awesome. The class of attack is a side channel attack and by definition a side channel attack is a channel that you weren’t expecting an attacker to use. I think hardening against speculative execution attacks is necessary but you really need to be very careful about applying security principles like minimum privilege.

It is tough. Things will get a lot slower if we decided to actually fix this stuff.

Thanks for attending. We will do it again in a month or so. Next one will be our 12th Socratic Seminar. It will be a year so maybe we will have to figure something special for that.

\ No newline at end of file diff --git a/chicago-bitdevs/index.xml b/chicago-bitdevs/index.xml index 70b1a4e90c..c18700a13f 100644 --- a/chicago-bitdevs/index.xml +++ b/chicago-bitdevs/index.xml @@ -7,4 +7,4 @@ They are starting P2P meetings for the Bitcoin protocol now.
\ No newline at end of file +Tainting, CoinJoin, PayJoin, CoinSwap Bitcoin dev mailing list post (Nopara) https://gnusha. \ No newline at end of file diff --git a/edgedevplusplus/2018/taproot-and-graftroot/index.html b/edgedevplusplus/2018/taproot-and-graftroot/index.html index b173bf48a1..9810ec0e46 100644 --- a/edgedevplusplus/2018/taproot-and-graftroot/index.html +++ b/edgedevplusplus/2018/taproot-and-graftroot/index.html @@ -10,4 +10,4 @@ Greg Sanders

Date: October 4, 2018

Transcript By: Bryan Bishop

Tags: Mast, P2c, Taproot

Category: Conference

Media: -https://www.youtube.com/watch?v=h2bvOal1u5k

https://twitter.com/kanzure/status/1047764770265284608

Introduction

Taproot and graftroot are recent proposals in the bitcoin world. Every script type looks different and they are distinct. From a privacy perspective, that’s pretty bad. You watermark or fingerprint yourself all the time by showing unique scripts on the blockchain or in your transactions. This causes censorship risks and other problems for fungibility. Often you are paying for contingencies that are never used, like the whoopsies branches in your scripts. If you are storing those scripts on the blockchain then you have to pay for those all the time.

Bitcoin script

There are two interesting cases to consider: the cooperative case where everyone behaves, signs, and updates state. In the m-of-n Schnorr case, that would be one signature for one public key. But then you might have some complex backup script as a backup measure. Perhaps you expect a threshold is not enough and too many signers go offline; after a year or so, you want to recover the funds using a smaller threshold or some alternative scheme. You can use timelocks, hash pre-images, and auditable telescoping multisig cases like I mentioned.

What if we could instead optimize for the first case (cooperation) which we believe to be far more common? Participants would not have to pay for the contingency case, which also might be a much larger script.

Example

In the lightning network, there are the punishment or justice transactions, and also the unilateral closing transaction. You have to spend more money to get your money back. That’s not great. Telescoping multisigs like n-of-m, with some timeout to a smaller multisig. So there’s a couple cases.

Merkleized abstract syntax trees (MASTs)

For privacy reasons, you could use a merkleized abstract syntax tree (MAST). It’s a merkle tree with a logical OR, and you expose one of the leaves of that tree, and that leaf would commit to a script and as long as you satisfy that script then you could spend those funds. You could have a merkle tree with two leaves, where the left branch is the common case, and the right side is the complex script case which you never really want to share unless it really happens. You only reveal the script and lose the privacy when the contingency actually comes up and you spend the funds that way. You can also have a subtree of many possible conditions attached. For this to be privacy preserving, everyone would have to commit to a MAST tree, even if you don’t have a contingency. Most transactions don’t have contingencies like that, it’s just simple multisig for the most part. So to maintain privacy, everyone would have to use it.
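A small Python sketch of that two-leaf case: hash each script into a leaf, combine into a root, and spend by revealing one script plus the sibling hash. The hashing here is plain SHA256 for illustration, not the tagged-hash construction actual proposals use, and the scripts are placeholders.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Two spending conditions: the common cooperative one, and the contingency.
common_script = b"<A> <B> 2 OP_CHECKMULTISIG"               # left leaf (placeholder)
contingency_script = b"<timeout> OP_CSV <C> OP_CHECKSIG"     # right leaf, rarely revealed

left, right = h(common_script), h(contingency_script)
root = h(left + right)   # this single hash is what the output commits to

# To spend via the common path, reveal only that script and the sibling hash.
def verify_leaf(script: bytes, sibling: bytes, expected_root: bytes, left_side: bool) -> bool:
    leaf = h(script)
    combined = h(leaf + sibling) if left_side else h(sibling + leaf)
    return combined == expected_root

assert verify_leaf(common_script, right, root, left_side=True)
# The contingency script stays private unless that branch is actually used.
```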

Most transactions don’t really have a requirement to use this, and the bytes used grows with the size of the tree. People would not be motivated to adopt these. And it might be hard to get it deployed in Bitcoin.

Pay-to-contract

But what if we can remove the additional cost of contingencies, without losing privacy?

There’s a concept called pay-to-contract (p2c). It’s a method of committing to a specific message within a public key. For example, the Elements sidechain software uses this technique to give money to the federation’s public keys while committing to another scriptPubKey that is then redeemed on the sidechain.

Q = P + Hash(P || script) * G

Only one binding using this protocol is possible at a time. Only those who can sign for P can also sign for Q. In bitcoin, consensus doesn’t care about this, but you’re provably committing to this. If you can sign for P and you have the script that is being hashed, then you know how to spend Q, and that’s an if and only if.
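Here is a self-contained Python sketch of that relation, using a minimal and deliberately slow secp256k1 implementation and a simplified point serialization; the script and the secret key are placeholders, purely for illustration.

```python
import hashlib

# Minimal secp256k1 arithmetic (slow, unhardened, illustration only).
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(A, B):
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # point at infinity
    if A == B:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p    # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p     # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R, Q = None, P
    while k:
        if k & 1: R = add(R, Q)
        Q = add(Q, Q)
        k >>= 1
    return R

def ser(P):  # simplified 64-byte serialization, not the x-only encoding real proposals use
    return P[0].to_bytes(32, "big") + P[1].to_bytes(32, "big")

# Pay-to-contract: commit to a script inside an ordinary-looking key.
x_p = 0xC0FFEE                          # the signers' combined secret (placeholder)
P = mul(x_p, G)
script = b"<contingency script>"        # hypothetical encumbrance
t = int.from_bytes(hashlib.sha256(ser(P) + script).digest(), "big") % n
Q = add(P, mul(t, G))                   # Q = P + Hash(P || script) * G

# Whoever can sign for P can also sign for Q, because the secret for Q is just x_p + t.
assert mul((x_p + t) % n, G) == Q
```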

Taproot

We can commit to this contingency we talked about inside of bitcoin, while having the output script look like any other pay-to-pubkey style output. Q = P + Hash(P || script) * G where the script is the encumbrance. The Q is being signed by an m-of-n federation of some sort that has done a secret key sharing scheme of some kind. Every participant in that scheme knows how to compute their own shards of P and they also know the hash of P appended to the script. They are all able to sign for this.

The common case is that you sign for Q, verifiers have no knowledge of the existence or type of contingency. It looks like a normal pay-to-pubkey being spent.

The contingency case is that you reveal P, script, and witness to satisfy the script. Again, there is no setup time interactivity required.

Graftroot

Q = P + Hash(P || script) * G

A revival of an old idea called delegation. This is probably the reason behind OP_CODESEPARATOR, but you’d have to ask Satoshi. OP_CODESEPARATOR isn’t really used for anything in bitcoin except by Nicolas Dorier I think. At Q creation time, have all the parties delegate another script by signing the message Hash(Q || script2) where script2 is the new encumbrance.

There can be any number of delegations, but only reveal one.

This requires that the signatures be stored. Before, we had non-interactivity where as long as you knew your partner’s public key shards ahead of time, you could do it without extra storage or interactivity. But in graftroot you need everyone sending signatures and storing them indefinitely. This clearly requires interactivity at setup time.

Two schools of thought

When it comes to script, if you add expressiveness, it tends to lower privacy as every new script is a privacy leak. And also, if not carefully designed, it increases the denial-of-service and consensus failure risk. Every node has to compute the answer just as fast as you’d hope while also agreeing with other nodes, especially alternative implementations. The other way to look at this is to have less script or no script at all. With taproot or graftroot, you might not even need script, maybe just a template like hashlock and key, or checksequenceverify and key. You would optimize for the common cases, and add more privacy by default.

References

\ No newline at end of file diff --git a/edgedevplusplus/2019/blockchain-design-patterns/index.html b/edgedevplusplus/2019/blockchain-design-patterns/index.html index 9e8b45623b..9e6cdd53ed 100644 --- a/edgedevplusplus/2019/blockchain-design-patterns/index.html +++ b/edgedevplusplus/2019/blockchain-design-patterns/index.html @@ -9,4 +9,4 @@ < Blockchain Design Patterns: Layers and scaling approaches

Blockchain Design Patterns: Layers and scaling approaches

Speakers: Andrew Poelstra, David Vorick

Date: September 10, 2019

Transcript By: Bryan Bishop

Tags: Taproot, Scalability

Category: -Conference

https://twitter.com/kanzure/status/1171400374336536576

Introduction

Alright. Are we ready to get going? Thumbs up? Alright. Cool. I am Andrew and this is David. We’re here to talk about blockchain design patterns: layers and scaling approaches. This will be a tour of a bunch of different scaling approaches in bitcoin in particular but it’s probably applicable to other blockchains out there. Our talk is in 3 parts. We’ll talk about some existing scaling tech and related things like caching transactions and mempool propagation. We’ll talk about some of the scaling proposals out there like taproot and erlay and other things in the news. In the third section, we talk about the more speculative bleeding edge crypto things.

Scaling: goals and constraints

So what do we mean by scaling?

It’s kind of hard to define scaling. People mean different things, like maybe processing transactions in bitcoin. There are a lot of different stages that transactions can be at, and for different use cases those transactions can be in those stages for a long time. As an example, I can make a trillion one-year timelock transactions and I am not sure if it’s reasonable to claim that bitcoin can support a trillion transactions if I can do that.

One thing that scaling is not, at least for the purposes of this talk, is things like relaxing limits. Increasing the bitcoin block size, relaxing that limit, isn’t scaling. So you get 2x the throughput, for the cost of 2x all the resources. This is maybe technically possible, but not technically interesting. It has quite a serious tradeoff in terms of increasing the requirements for people to join and interact with the network. In particular, with the current limits, we’ve seen in a few talks earlier that it’s already impossible to verify the whole blockchain on your smartphone. It’s unfortunate that you can’t download the 300 GB blockchain and verify all the signatures.

I would say that when we think about scalability, one of the key restraints is– who can view our blockchain? Who can be a participant on our blockchain? These are two different things. This isn’t covered in the slides. I wanted to get across that if you can validate the chain, even if it’s too expensive for you to transact, you can verify the honesty of your bank still. So when we talk about increasing the block size or increasing resource limits, you want to make sure that you’re not excluding people from verification. In bitcoin today, that’s probably the biggest contention. When we talk about when things can scale or can’t scale, we want to make sure we don’t exclude people from being able to validate the chain. In order to do that, there’s no way to get around every user has to validate every single transaction. If you want to catch up and run a full node today, you start from the genesis block and you go through the whole history and do all the UTXOs and you validate everything. It’s a multi-day process, and then you get caught up.

A second order concern, and alternative cryptocurrencies address this differently, is the issue of miner safety. Block propagation is important. If it takes a long time for blocks to get around the network, then the miners can use selfish mining. The longer it takes for miners to talk with each other, the longer the period for bigger miners to get an advantage over smaller miners. We want the well-experienced large miners to be on the same playing field as a newer, less well funded entrant to the mining ecosystem.

When we talk about scaling, there’s a few angles we’re coming from. Everything that we talk about, we have to view from the perspective of where we’re limited in the adversarial case instead of the average case. Often when we look at bitcoin network operations today, there are many optimizations that work super well when everyone is being friendly with each other. The relay network or the FIBRE network eliminates a lot of work for blocks, but we can’t use those metrics to decide that it’s safe to scale because these optimizations break down when the network is under attack. The limitation is, how do things perform when things aren’t going well? Continuous observation of the network gives you data about right now; it doesn’t give you data about how it would behave under attack.

https://diyhpl.us/~bryan/irc/bitcoin/scalingbitcoin-review.pdf

Bitcoin today: UTXOs

The first part of this talk is about bitcoin today. Things that are deployed and exist in the bitcoin network today. Let me give a quick summary of how bitcoin handles coins and transactions. This is the UTXO model– unspent transaction outputs. All bitcoins are assigned to objects called UTXOs. These are individual indivisible objects that have a spending policy, and if you know the secret key you can spend the coins. They also have an amount. To spend coins, you have to take a bunch of UTXOs that you own, you spend all of the UTXOs, and then you create new UTXOs belonging to your recipient(s). Since these are indivisible, you usually have to create an extra output for change to send it back to yourself. Other blockchains have accounts instead of UTXOs, like ethereum.
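A toy Python sketch of that bookkeeping: spending consumes whole outputs and creates new ones (including change), and a conflict is simply two transactions naming the same outpoint. Names and amounts are made up.

```python
# Toy UTXO-set bookkeeping; outpoints are (txid, index), values in satoshis.
utxos = {
    ("aaaa", 0): {"value": 100_000, "owner": "alice"},
    ("bbbb", 1): {"value": 250_000, "owner": "alice"},
}

def apply_tx(utxo_set, txid, inputs, outputs):
    """Spend whole UTXOs named in `inputs`, create the new ones in `outputs`."""
    if any(op not in utxo_set for op in inputs):
        raise ValueError("missing or already-spent input (possible conflict)")
    if sum(utxo_set[op]["value"] for op in inputs) < sum(o["value"] for o in outputs):
        raise ValueError("outputs exceed inputs")
    for op in inputs:
        del utxo_set[op]                       # inputs are consumed atomically
    for i, out in enumerate(outputs):
        utxo_set[(txid, i)] = out              # new indivisible outputs, incl. change

# Alice pays Bob 120,000 sats and sends the rest back to herself as change.
apply_tx(utxos, "cccc",
         inputs=[("aaaa", 0), ("bbbb", 1)],
         outputs=[{"value": 120_000, "owner": "bob"},
                  {"value": 220_000, "owner": "alice"}])   # 10,000 sats left as the fee

# A second transaction naming ("aaaa", 0) would now fail: that is a conflict.
```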

There are a number of reasons that bitcoin chose the design that it did. When you see a new transaction, you know the only way it will affect the validity of other transactions is if they both try to spend the same UTXOs, because UTXOs are atomic. So you check each transaction to see if it conflicts with existing transactions. If it conflicts, then throw it away. If not, bring it into the mempool and check its validity. You never need to validate it again if you see it like in a confirmed block or whatever. But in an account based model, you don’t have this simplicity. You can have multiple transactions spending from the same account. If in aggregate they spend more than the account holds, then only some will be valid and some will be invalid and you can’t tell in advance which ones will be the ones that get into the block.

There’s also things about proving properties of existence of UTXOs. In bitcoin, UTXOs get created and destroyed. Those are the only two events in the UTXO’s lifecycle. If you want to prove that a UTXO existed at some point, you provide the transaction that created it and a merkle path for the inclusion of the transaction in a blockheader. For things like blockexplorers and other historical analysis auditing applications, this is very convenient. It’s very difficult to do this in an account-based model because you have to maintain the balance of every single account at every single point of time and there’s no nice compact way to tie that back to individual events in the blockchain which makes proving things about the ethereum chain historically quite difficult.

Bitcoin today: Headers-first, transaction relay, and caching

Headers first is very primitive technology today. But when you join the blockchain, you download all the blockheaders which are 80 byte data objects and you try to find the longest chain. The chain part of the blockchain is entirely contained in 80 byte headers. All 600,000 of these are like less than 60 megs of data or something. So you can validate this chain, and then later you download the block data, which is much more data. So you start by downloading the headerchain, and it’s low bandwidth– you can get it over SMS or over the satellite, and then you know the right chain, and then you go find the blocks. Maybe you get it from less trustable sources, since you already have the headers nobody can lie to you about the blocks. The headers cryptographically commit to the blocks. In 2014, headers first got implemented. Before that, you had to join the network and get lucky about choosing peers because you wouldn’t know that someone was lying to you until you were done with the sync.
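The size figure checks out with a quick calculation (600,000 is an approximate chain height for the time of the talk):

```python
headers = 600_000          # approximate chain height at the time of the talk
header_size = 80           # bytes per block header
print(headers * header_size / 1e6, "MB")   # 48.0 MB, well under 60 MB
```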

Another interesting thing is transaction caching. You see a transaction and check its validity. One thing worth mentioning here is that this is complicated by transaction malleability. We used to hear this word a lot before segwit. You can change the witness data in a transaction. Prior to segwit, if you did that, it would change the txid and it would look like a conflicting transaction and it would require re-validation. Even after segwit, it’s possible for someone to replace the witness data on a transaction. This no longer affects the validity of the transaction or the txid, but what it means is that the transaction that appears in a block might not match the transaction data you have in your system, which you should be aware of when you’re designing applications or wallets. You need to design for the adversarial case, which is something you need to think about when you are developing transaction caching.

When you see a block and everything was previously validated, then you don’t need to validate them. With compact blocks, you don’t even need to download the transaction. You get a summary of the block contents and you’re able to quickly tell oh this summary only contains things that I know about and this block is good.

One last thing I’ll mention is dandelion, which is a transaction propagation scheme. The way this works is that you send a transaction along some one-dimensional path of peers during a stem phase. Rather than flooding the transaction to the network, which means the origin node can be determined by spy nodes on the network, you do the stem phase and then a fluff phase which makes it much more difficult to do that kind of spying analysis.
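A toy Python sketch of the stem and fluff idea, with a made-up peer topology and no real networking; this is not the BIP 156 or Bitcoin Core relay logic, just the shape of it.

```python
import random

def propagate(tx, peers_of, stem_hops=3):
    """Toy Dandelion-style relay: a private stem phase, then a public fluff phase."""
    route, node = [], "me"
    for _ in range(stem_hops):
        node = random.choice(peers_of[node])   # stem: forward to exactly one peer,
        route.append(node)                     # so spy nodes can't triangulate the origin
    fluff_targets = peers_of[route[-1]]        # fluff: the last hop floods it to everyone
    return route, fluff_targets

topology = {                                    # made-up peer graph
    "me": ["a", "b"], "a": ["c", "me"], "b": ["d", "me"],
    "c": ["e", "a"], "d": ["e", "b"], "e": ["c", "d"],
}
stem, fluff = propagate("some-txid", topology)
print("stem path:", stem, "| fluffed to:", fluff)
```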

Compact blocks and FIBRE

We’ve talked about compact blocks and FIBRE. People sometimes get this confused. Compact blocks is about propagating blocks to nodes in the hopes that they already have the transactions. FIBRE is about miners communicating blocks to each other, and its goal is to minimize latency. Miners learn new blocks as quickly as possible. FIBRE is super wasteful of bandwidth. It uses an error correcting code to blindly send the block along as many paths as it can to sort of stream the data. As long as it sees enough data from many different nodes, it can reconstruct the block basically. So one is for nodes, one is for miners.

On-chain and off-chain state

In a moment, we’re going to talk about layer 2.

There’s a philosophical difference between the bitcoin community and what used to be the ethereum community even if they’re coming around to this idea now with eth2 and sharding. The idea is that the blockchain is not a data storage or communication layer. If you can avoid it in any way, you should not be putting data on the blockchain. Once you put data on the blockchain, it needs to be globally replicated and verified. This is very expensive to the world and to the network, and this in turn means there are more network fees, and it creates even more scarcity of block space, and now you have to bid for the space to get into a block quickly.

Rather than trying to extend the capabilities and the space available within bitcoin blocks and transactions, much of the innovation in the bitcoin space has been about finding creative ways to avoid using existing technology and just anchor it in a minimal way into the blockchain. The purest example of this is the opentimestamps project. Have people heard of opentimestamps? A smattering. Do people use opentimestamps? You should totally use it. It’s open-source and free. I think petertodd is still paying the fees out of his pocket. It’s cheap to run. You can commit to data laying around on your hard drive like websites you visited, emails you have received, chat logs, whatever, you produce a cryptographic hash committing to that data. None of your data leaves your system. You send this hash to the opentimestamps server, it aggregates them, it produces a merkle root committing to millions of documents. petertodd was also timestamping the entire contents of the Internet Archive. Then you commit this 32 byte hash to the blockchain, and you can commit to an arbitrary amount of data. A commitment in bitcoin is a proof that the data existed at a certain time. The consensus assumption is that everyone saw the block at roughly the timestamp that the block was labeled with, or that everyone can agree when the block was created so you get a timestamp out of that.

You’re anchoring data to the blockchain, not publishing it. Putting data on the blockchain is a so-called proof-of-publication. If you need to prove that some data was available to everyone in the world at some point, you can put it in the blockchain. But typically when people think they need to do this, what they really need to do is anchor their data to the chain. There are some use cases where you really need proof-of-publication but it’s much more rare. I’m not aware of a real trustless alternative to using a blockchain for this, but I really wish people wouldn’t put all this arbitrary data in the blockchain.

Hash preimages and atomic swaps

We can guarantee that a cascade of events will all happen together or not happen together. This is kind of an extension of proof-of-publication. We have something where if a piece of data gets published, like a signature or something, we can guarantee that this cascade will trigger. This is a layer 1 tech primitive that leads into some very nice things that we can do in layer 2 that make blockchains more scalable. Publishing hashes and hash preimages is an example of where you need proof-of-publication because you need to make sure that if someone takes in money and the hash shows up in public, then the other person should be able to take their money on the other side of the atomic swap.
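A toy Python sketch of the hash-preimage mechanic: the same preimage unlocks both sides, so claiming one side publicly hands the counterparty what they need to claim the other. Scripts, timeouts and the two actual chains are omitted.

```python
import hashlib, os

secret = os.urandom(32)                      # Alice's preimage
payment_hash = hashlib.sha256(secret).digest()

def can_claim(preimage: bytes) -> bool:
    # Both chains lock coins behind the same hash; either side pays out
    # only to someone who can present the matching preimage.
    return hashlib.sha256(preimage).digest() == payment_hash

# Alice claims Bob's coins on chain B by publishing `secret` on-chain...
assert can_claim(secret)
# ...and because that publication is public, Bob now has the preimage too
# and can claim Alice's coins on chain A: the swap is all-or-nothing.
bobs_learned_preimage = secret
assert can_claim(bobs_learned_preimage)
```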

Replacing transactions: payment and state channels

gmaxwell in a 2013 bitcointalk post– it’s funny how old a lot of this stuff is. A theme that we’re not going to talk about in this talk is that all the underlying blockchain tech and moonmath and crypto tricks are relatively simple, it’s coordination that is hard. If you do the atomic swap protocol that David just described, where they ensure their swap is atomic. Say two parties do this, but rather than publishing the final transaction with the hash preimages, the parties exchange this data offline with each other. At this point, they have avoided publication because they have only sent the data to one another. Now they can both get their money if the other party leads. As a last step, they throw out the transactions. Well, they replace the transactions cooperatively with an ordinary transaction that spends their coins to the right places without revealing any hash preimages. This is a little bit smaller because they are no longer revealing so much data. This is of course not an atomic operation; one party can sign and take the coins. But the blockchain is there ready to enforce the hash preimage thing to take the coins. So now they can do an unregulated atomic swap knowing that if anything goes wrong, they can fall back to on-chain transactions.

This idea of replacing transactions not published to the blockchain, with the idea that you are only going to publish the final version, is a very important concept. This can be generalized to the idea of a payment channel. At a high level, ignoring all the safety features, the idea of a payment channel is that two parties put some coins into a 2-of-2 multisig policy. They repeatedly sign transactions that give some money to the other party or whatever. They keep doing this replacement over and over again, without publishing these states to the blockchain because they expect further updates. There’s a lot of complexity to get from this to an HTLC and a modern payment channel, but that’s the basic idea.

These payment channels and this replacement trick get you a lot of scalability because you’re not limited in the number of transactions you can do between now and the next block that gets mined. You’re replacing transactions rather than creating new ones that need to get committed, because of the way you set up the 2-of-2 multisig.
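
A minimal bookkeeping sketch of that replacement idea (ignoring signatures, revocation, HTLCs, and all the safety machinery; ToyChannel is a hypothetical illustration, not real channel code):

```python
# A toy sketch: many off-chain state replacements, only the latest would ever
# be co-signed into a closing transaction and broadcast.
class ToyChannel:
    def __init__(self, balance_a: int, balance_b: int):
        self.state_number = 0
        self.balance_a = balance_a
        self.balance_b = balance_b

    def pay_a_to_b(self, amount: int):
        """Both parties agree to replace the previous state with a new one."""
        assert 0 < amount <= self.balance_a
        self.balance_a -= amount
        self.balance_b += amount
        self.state_number += 1
        # In a real channel, both parties would now co-sign a transaction
        # spending the 2-of-2 funding output to these balances.
        return {"state": self.state_number, "a": self.balance_a, "b": self.balance_b}

channel = ToyChannel(balance_a=100_000, balance_b=0)
for _ in range(1000):            # a thousand payments, zero on-chain transactions
    latest = channel.pay_a_to_b(10)
print(latest)                    # only this final state needs to hit the chain
```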

Payment channels and revoking old state

One problem with payment channels is that if you keep creating new transactions repeatedly, there’s the concern that one of the parties can publish an old state. None of these transactions are inherently better than the others. Because they are always spending the same coins, only one will be valid. But the blockchain isn’t going to do anything for you to ensure that the “right one” is valid. This seems like a no-go because you want to cancel all of the old states each time you replace. That’s what replacement should mean. You could double spend the coins and re-start, but then you would have to create a transaction and publish it to the blockchain, so you don’t gain anything. But there’s another way, where you create a new transaction that undoes the first transaction. So you create a second transaction that spends the original coins and gives them back to the original owner, and the original transaction is now revoked. If that first transaction hits the blockchain, then the other counterparty has a penalty transaction or other transaction they can publish to get their money back. This is fine, but this requires a separate revocation transaction to be created every time you do one of these state updates. This is in fact how the lightning network works today; every payment channel has some extra state that parties need to store for every single channel update. There are some ways to mitigate that.

This idea shows up quite a bit when people are proposing extensions to bitcoin: why don’t we provide a way to cancel old transactions directly? What if transactions have a timeout, and after that timeout, they are no longer valid? Or what about a signature you can add that will invalidate the transactions? What about a way to make transactions invalid after the fact? There’s a few reasons why this doesn’t work. One is that it breaks the ability to cache transactions in your mempool. If you have some transactions cached in your mempool that can be invalidated by some event, like maybe a new block appearing and invalidating it, then each time one of these events happens you have to rescan all of your mempool and update the transactions or remove them. This requires expensive scans or complex reindexing or something. One further complication is that this is made worse by the fact that the blockchain can reorg. You have to rescan the chain to see which transactions have been invalidated, unconfirmed or absent. It’s a big mess. It also creates a fungibility risk because if a transaction could be invalidated– like say a transaction was in a blockchain, a reorg happens and it becomes invalid. Every transaction that spent its coins, the whole transaction graph below that point, becomes invalid too. Coins with an expiry mechanism are riskier to accept than other coins. This is true by the way of coinbase transactions and miner rewards. In bitcoin, we require 100 blocks go by before miners can spend their coins because we want to make sure it’s next to impossible for coins to be reorged out if a reorg would cause those coins to be invalidated. A cancellation mechanism has the same issue. A lot of these proposals just don’t work. It’s difficult to create a revocation mechanism that actually gets broadcast to the whole network. If you revoke a transaction and it gets confirmed anyway, without a consensus layer, which a blockchain is supposed to be anyway, how do you complain about that?

Linking payment channels

This is how the lightning network works. It uses HTLCs. It’s a hashlock preimage trick to make transactions atomic. You link multiple payment channels with the same hash. You can send money through long paths. This proof-of-publication mechanism guarantees that if the last person on the path can take their money, then so can everyone all the way back to the original sender, so the money goes all the way through or it doesn’t go at all. Each of these individual linked payment channels can be updated independently by the mechanism of creating revocation transactions and replacing the original transaction. Finally, there are some other security mechanisms, like timelocks, so that when anything goes wrong in the protocol the money will eventually return to where it came from. There’s thus a limited hold-up risk related to this timelock period. I’m not going to talk about the details of the timelocks or about linking payment channels across chains, or multi-asset lightning.

I just want to talk about the specific scalability tricks that go into forming these payment channels. These are quite general tools you can use, and you can use it to create cool projects on the bitcoin network without requiring a lot of resources from the bitcoin chain.

Proposed improvements

This section is about proposals that are generally speaking accepted, things that have not been deployed yet but we believe they will be deployed because the tradeoffs make sense and we like the structure. The first one we’re going to talk about is erlay.

Erlay

Erlay deals with the transaction propagation layer. It’s consensus critical in the sense that if you’re not getting blocks or transactions then you can’t see the state of the network and you can be double spent. If you’re not aware of what the most recent block is, or what the best chain is, you can be double spent and you’re at risk. On the network today, the transaction propagation system takes the most bandwidth. Because of that, we put a small cap on the number of peers that we actively connect to, which is 8, although a pull request has increased this to 10 for block propagation only, not transaction propagation. 8 is enough to create a well-connected robust graph, but sometimes you see eclipse attacks and it often feels fragile.

What erlay does is it’s a set reconciliation mechanism where two different peers can say in very brief terms what transactions they have. They can do set reconciliation and they can figure out what each of them have that the others don’t. The trick here is that we substantially decrease the bandwidth we take, to stay up to date with the most recent transactions and the most recent blocks. This means it would be easy to increase from 8 peers to more. Today, when you quadruple your peers, you quadruple your bandwidth overhead. So it’s linear. But with erlay, it’s closer to constant. The amount of bandwidth with 100 peers, with erlay, is not as high as the previous linear bandwidth overhead. This reduces the cost of running a node and staying well connected.
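
A back-of-the-envelope toy model of the bandwidth claim (all byte sizes and counts below are assumptions for illustration, not measured protocol data or the real minisketch encoding): with flooding, every transaction is announced separately on every link, while with reconciliation the announcement cost per transaction barely grows as you add peers.

```python
# A minimal sketch, assuming rough per-entry costs, of why announcement
# overhead grows roughly linearly with peer count under flooding but much
# more slowly under sketch-based reconciliation.
TXS_PER_DAY = 300_000
INV_BYTES = 36                    # assumed cost of one flooded announcement entry
SKETCH_ENTRY_BYTES = 8            # assumed per-missing-tx cost in a sketch
PER_PEER_FIXED_BYTES = 500_000    # assumed daily framing/round overhead per peer

def flooding_bytes(peers: int) -> int:
    # Every transaction is announced separately to every peer.
    return TXS_PER_DAY * INV_BYTES * peers

def reconciliation_bytes(peers: int) -> int:
    # Each transaction only shows up as a sketch difference about once in total,
    # plus a small fixed per-peer overhead, so adding peers is cheap.
    return TXS_PER_DAY * SKETCH_ENTRY_BYTES + peers * PER_PEER_FIXED_BYTES

for peers in (8, 32, 100):
    print(peers, flooding_bytes(peers), reconciliation_bytes(peers))
```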

This was developed in part by gleb naumenko. Ah, he’s hiding in the audience. Hopefully he wasn’t shaking his head because of anything wrong we might have said.

Q: If we’re talking adversarially, then what’s the adversarial model here? It’s optimistically constant time. But what about the adversarial case?

A: I believe it remains constant time. You’re connected to peers. If you have an adversarial peer and they are consuming more bandwidth, you can detect that they are an adversarial peer. You can decrease their score and ban them. Under the erlay model, I believe an adversary cannot interrupt or degrade your connections with good peers, and you also get tools for detecting which ones are good peers.

Q: So you have to hold these transactions in order to give those sketches? Can I spoof having the exact same set as you?

A: No. You cannot spoof the sketches unless you have the actual transaction data. Or at least unless you have the transaction hashes. What you’re doing by spoofing those, you’re increasing your score. I see. I don’t think you would accomplish a lot by such an attack, other than avoiding getting yourself banned for being a spy node. If you were a spy node, and you had no transactions and you weren’t broadcasting new transactions, then you can’t spoof these sketches but even if you could then that’s a minimal harm thing to do.

Q: Is this the minisketch library from Pieter Wuille?

A: Yes. Erlay is transaction propagation– the proposal for transaction propagation in the bitcoin network. A key component of it is a set reconciliation algorithm, which is how the minisketch comes in. It’s for set equality and reconciliation. Minisketch is part of erlay.

Q: Does increasing the number of connected nodes, give privacy concerns to be connected to more spy nodes?

A: So the concern with connecting to more spy nodes is that, those nodes– so if a spy node can create a lot of connections, then it can see where transactions are being broadcast from and infer things about their origin. By use of erlay, if you connected to more spy nodes and not more real nodes, then I guess that would make things worse. But if everyone is connected to more nodes, then transactions can get further into the network using fewer hops, which gives less information to spy nodes even if they have more connections. Dandelion can resolve a lot of these issues too. If you have a mechanism that makes transactions appear from random places in the network, being connected to more spy nodes is not going to harm your privacy.

https://diyhpl.us/wiki/transcripts/lets-talk-bitcoin-podcast/2019-06-09-ltb-pieter-wuille-jonas-nick/

https://diyhpl.us/wiki/transcripts/tftc-podcast/2019-06-18-andrew-poelstra-tftc/

Compact threshold signatures

The next few slides are building up to a proposal called taproot but let me talk about some components first.

The first big hyped component of the taproot proposal for bitcoin, and hopefully– it’s an almost complete proposal. In any world except bitcoin, it would already be considered complete. Anyway, the first component is something called Schnorr signatures. As Ruben Somsen described, Schnorr signatures are an alternative to ECDSA. It’s a digital signature algorithm. It’s a replacement for ECDSA. It’s functionally the same. They are in many ways very similar. With both Schnorr and ECDSA, you can in principle get compact 64 byte signatures and do batch validation. But with Schnorr signatures, a lot of these things are much simpler and require fewer cryptographic assumptions. Secondly, the ECDSA signatures deployed in bitcoin are openssl-style ECDSA signatures. Satoshi originally created these by using the openssl library and encoding whatever came out of it. As a result, ECDSA signatures on bitcoin today are 72 bytes instead of 64 bytes and there’s literally no value gained by this. These are extra encoding bytes that aren’t needed at all. This is useful for larger crypto systems where you have objects that aren’t signatures and you don’t care about a few bytes; but in bitcoin we certainly do care about several bytes times 400 million transactions.

Another unfortunate thing about ECDSA signatures in bitcoin is that you can’t batch validate them. This is partly due to ECDSA and partly due to how bitcoin does things. Basically the ECDSA verification equation only checks the x coordinate of some of the curve points that are involved. It’s a true statement that every x coordinate that corresponds to a curve point actually corresponds to two curve points. There are actually two different points that could be used in ECDSA signatures that would both validate. This is not a security loss, it’s not interesting in any way except that this fact means that when you try to batch validate ECDSA signatures and you have 1000 of them and you’re trying to combine them into an equation, the inputs to that equation are ambiguous. There’s this factor of 2 where you have to guess which points you’re working on; as a result, there’s a good chance your batch validation will fail even if the signatures are valid. We could in principle fix this with ECDSA. But we don’t want to; if we’re going to replace ECDSA signatures, all the complexity comes from the consensus layer, so we might as well replace the whole thing with Schnorr signatures anyway.

The first feature of Schnorr signatures that is really genuinely much easier with Schnorr signatures is this thing called threshold signatures. Ruben almost talked about this. Say you have multiple parties that together want to sign a transaction. The way that this works in bitcoin today is that they both contribute keys, they both contribute signatures, and validators verify keys and signatures independently because they can’t batch them. It’s 2x the participants and 2x the resource usage.

But with Schnorr signatures, it’s very simple for these parties to instead combine the public keys, get a joint public key that represents both of them, and then jointly produce a signature such that neither of the individual parties is able to make a signature by themselves. This is identical to 2-of-2 in semantics, but what hits the blockchain is a single key and a single signature. This has a whole pile of benefits, like scalability because there’s much less data hitting the chain. Also, this improves privacy because blockchain validators no longer know the number of participants in this transaction. They can’t distinguish normal transactions from 2-of-2 things. Another benefit is that you are no longer restricted to what the blockchain supports. The blockchain supports signatures; you don’t have to worry about how many signatures you’re allowed to use or whatever. There’s all sorts of questions you have to ask right now to use native multisig in bitcoin, which you can avoid.
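
A toy sketch of that algebra over a small multiplicative group (this is not secp256k1 and not the real MuSig protocol: there is no rogue-key protection and no secure nonce handling, the parameters are assumptions chosen only so the arithmetic is visible). The point it shows is that keys add, partial signatures add, and the verifier sees one key and one signature.

```python
# A minimal sketch of Schnorr 2-of-2 aggregation in a toy group of prime order q.
import hashlib, random

p, q, g = 2039, 1019, 4          # toy group: g has prime order q modulo p

def h(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def verify(P, msg, R, s) -> bool:
    """Standard Schnorr check: g^s == R * P^e with e = H(R, P, msg)."""
    e = h(R, P, msg)
    return pow(g, s, p) == (R * pow(P, e, p)) % p

# Two signers with secret keys x1, x2 and an aggregate public key.
x1, x2 = random.randrange(1, q), random.randrange(1, q)
P_agg = (pow(g, x1, p) * pow(g, x2, p)) % p

msg = "send 1 BTC to carol"
k1, k2 = random.randrange(1, q), random.randrange(1, q)
R_agg = (pow(g, k1, p) * pow(g, k2, p)) % p          # combined nonce
e = h(R_agg, P_agg, msg)
s_agg = (k1 + e * x1 + k2 + e * x2) % q              # partial signatures summed

assert verify(P_agg, msg, R_agg, s_agg)              # looks like a single-signer sig
```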

With some complexity, you can generalize this further. I said 3-of-3 and 5-of-5, but you can actually do thresholds as well. You can do k-of-n for any k and any n provided n > k. Similarly, this produces one key and one signature. You can generalize this even further, and this could be an hour long talk itself. You could do arbitrary monotone functions where you can define– you can take a big set of signers and define any collection of subsets who should be able to sign, and you could produce a threshold-signature-like algorithm for which any of the admissible subsets would be allowed to sign and no other subsets. Monotone just means that if some set of signers is allowed, then any larger set is also allowed. It’s almost tautological in the blockchain context.

Adaptor signatures

Ruben gave a whole talk on adaptor signatures. We use the proof-of-publication trick and we encode this in the signature. Rather than a hash preimage in the script, you have a point and an extra secret. This completely replaces the hash and the preimage. What’s cool about this is that once again all you need is signature support from the blockchain, you don’t need additional consensus rule changes. You just throw a signature on the chain. Another benefit is privacy. There’s a lot of cool privacy tricks. One signature hits the chain, no indication that adaptor signatures were involved. You can also do reblinding and other tricks.

The proof-of-publication requirement we had for the hashes and the preimages now becomes a proof-of-publication requirement for the signatures. That’s kind of inherent to a blockchain. Blockchains have transactions with signatures. There’s no way with existing tech to eliminate the requirement that all the signatures get published, because the chain needs to be publicly verifiable and everyone needs to be able to verify the transactions, which means they need to be able to validate the signatures. We get more capabilities with adaptor signatures, and we require less out of the blockchain. That’s my definition of scalability.
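
A toy adaptor-signature sketch in the same spirit as the earlier toy Schnorr example (assumed small group, not secp256k1, and not any production adaptor-signature scheme). The thing it demonstrates is the trade: the pre-signature commits to a point T, completing it requires the secret t, and publishing the completed signature necessarily reveals t to the counterparty.

```python
# A minimal sketch: completing the signature on-chain leaks the secret t,
# which replaces the hash-preimage trick.
import hashlib, random

p, q, g = 2039, 1019, 4

def h(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q); P = pow(g, x, p)     # signer's keypair
t = random.randrange(1, q); T = pow(g, t, p)     # the traded secret and its "point"

msg = "atomic swap payout"
k = random.randrange(1, q)
R = pow(g, k, p)
e = h((R * T) % p, P, msg)                       # challenge uses the combined nonce R*T
s_pre = (k + e * x) % q                          # pre-signature: not yet valid on its own

# Anyone can check the pre-signature is consistent with T:  g^s_pre == R * P^e
assert pow(g, s_pre, p) == (R * pow(P, e, p)) % p

# Whoever knows t can complete it into an ordinary, valid Schnorr signature...
s = (s_pre + t) % q
assert pow(g, s, p) == ((R * T) % p * pow(P, e, p)) % p

# ...and once (R*T, s) is published, the counterparty learns t.
assert (s - s_pre) % q == t
```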

Taproot: main idea

Before I talk about taproot… Up til now, I have been talking about Schnorr signatures and how you can combine these to have arbitrarily complex policies or protocols. It’s one key, one signature. The idea behind taproot is that if you can really do so much with one key and one signature, why don’t we just make all the outputs in bitcoin be one key and all the witnesses be one signature? Instead of bringing in this whole script evaluation apparatus, we can bring in one key one signature and then that’s great. Your validation condition is a nice little algebraic equation and there’s little room for weird consensus edge cases. You also get batch validation. I should mention that this is kind of how mimblewimble works: you only have the ability to verify signatures. Historically, this whole adaptor signature business came out of mimblewimble and in particular the question of how to add scripting to mimblewimble. As soon as I came up with this idea, I brought it to bitcoin and forgot about mimblewimble. As soon as I could do it with bitcoin, I ran away.

But bitcoin isn’t mimblewimble, and people use scripts like timelocks and lightning channels. petertodd has some weird things like coins out there that you can get if you collide sha1 or sha256 or basically any of the hash functions that bitcoin supports. You can implement hash collision bounties and the blockchain enforces it.

(An explanation of the following Q&A exchange can be found here.)

Q: Can you do timelocks with adaptor signatures?

A: There’s a few ways to do it, but they aren’t elegant. You could have a clock oracle that produces a signature.

Q: You make an adaptor signature for the redeem part, but then you do a joint musig signature on another transaction with a locktime of just… rather than having to do script.

A: That’s cool.

Q: You didn’t know that?

A: This is one way it’s being proposed by mimblewimble; but this requires the ability to aggregate signatures across transactions.

Q: No, there’s two transactions already existing. Before locktime, you can spend with the adaptor signature one like atomic swaps. After locktime, the other one becomes valid and you can spend with that. They just double spend each other.

A: You’d have to diagram that out for me. There’s a few ways to do this, some that I know, but yours isn’t one of them.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-September/017316.html

For timelocks, it appears that you need script support. That’s my current belief at this time of day.

Taproot does still support script. The cool thing is that taproot hides the script inside the key. You take the public key, and you can sign with it. But you can transform this key into a commitment to arbitrary data. The way you do this technically is you take the key and the extra data, you hash it all up, you multiply by the curve generator, and you add the result to your original key. So now you have a key that looks and acts just like an ordinary key. It’s still 32 bytes. It’s still something that can sign. There’s nothing remarkable about it. From the outside, you can’t tell there’s anything special about it. But secretly, it has committed to extra script data. If you need to use the extra script features, then you have the option to do so. So you reveal the script, and the validators, once they see that information, are able to check that the pubkey tweak was produced correctly and that the key was committing to this script, and then they will allow the script, together with a valid witness for that script, to spend the coins. If you need the script, you reveal a witness. If you don’t need the script, you publish nothing, not even the hash of the script. Pretty much all of these applications can fit in these keys, except the hash bounty.
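
A toy sketch of that tweak construction, using the same small multiplicative group as the earlier examples rather than secp256k1 (the hashing details and group are assumptions, not the actual BIP-taproot tweak): the output key is the internal key shifted by a hash of the key and the script, so the key path still works while the script path can be proven later.

```python
# A minimal sketch of committing to a script inside an ordinary-looking key.
import hashlib

p, q, g = 2039, 1019, 4

def h(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

internal_secret = 700
P = pow(g, internal_secret, p)                   # ordinary internal public key
script = "OP_CSV 144 OP_DROP <backup_key> OP_CHECKSIG"   # hypothetical script text

tweak = h(P, script)                             # commit to the script in the tweak
Q = (P * pow(g, tweak, p)) % p                   # output key: looks like any other key

# Key-path spend: the holder signs with internal_secret + tweak, which matches Q.
assert pow(g, (internal_secret + tweak) % q, p) == Q

# Script-path spend: reveal P and the script; anyone can recompute the commitment.
def script_path_ok(Q, P, revealed_script) -> bool:
    return Q == (P * pow(g, h(P, revealed_script), p)) % p

assert script_path_ok(Q, P, script)
```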

Q: … MAST?

A: Does taproot supercede MAST?

I have a slide about this. Does taproot supercede MAST? MAST is this idea that has been floating around since 2012. It might have been russell o’connor on bitcointalk. I should stop repeating this history until I double check. The idea of a merkleized abstract syntax tree is that you have a script that has a whole bunch of independent conditions. You put them into a merkle tree, and you only reveal the branches that you want to use at spending time. The other branches stay hidden under the other cryptographic hash commitments. There’s almost no tradeoff to using MAST over not using MAST, except that there’s many ways to do MAST. There’s perpetual bikeshedding and so many competing designs for MAST and none of them are particularly better than the others. In the early days, nobody was using script for anything interesting, so the practical benefit of MAST back then wouldn’t have been useful.

Instead of committing to a script, you can commit to a merkle tree of scripts. So there you go, that’s MAST. So in fact, the taproot proposal includes MAST as part of it. It’s separate from the commitment structure I described, but it’s there, with a couple of advanced cool things that we noticed as part of implementation. We have a form of MAST where you provide a proof showing where in the tree the script you’re using is, and we have a way of hiding the direction it’s taken. So it’s difficult to tell if a tree is unbalanced, and you learn very little information about the shape of the tree, and a few other efficiency things like that which I don’t want to get into. They are in Pieter Wuille’s fork of the bips repo, which has a branch with a draft BIP that has a few pull requests open on it. It’s the first hit on google for bip-taproot so you can find it.
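
A minimal sketch of the MAST idea (the pair-sorting hash and proof format here are assumptions for illustration, not the exact bip-taproot tree encoding): commit to several scripts with one root, then reveal only the branch actually being spent plus a short path of sibling hashes.

```python
# A toy merkle tree of scripts with a single-branch reveal.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

scripts = [b"2-of-2 cooperative close", b"timeout refund after 144 blocks",
           b"revocation path", b"emergency cold-key sweep"]
leaves = [H(s) for s in scripts]

def merkle_root_and_proof(leaves, index):
    """Return the root plus the sibling hashes needed to prove one leaf."""
    proof, level, idx = [], list(leaves), index
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        proof.append(level[idx ^ 1])
        level = [H(min(level[i], level[i + 1]) + max(level[i], level[i + 1]))
                 for i in range(0, len(level), 2)]
        idx //= 2
    return level[0], proof

def verify_branch(root, script, proof) -> bool:
    node = H(script)
    for sibling in proof:
        node = H(min(node, sibling) + max(node, sibling))   # sorted-pair hashing
    return node == root

root, proof = merkle_root_and_proof(leaves, index=1)
assert verify_branch(root, scripts[1], proof)   # reveal one script plus two hashes
```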

One other thing I wanted to mention here is that we eliminate the CHECKMULTISIG opcode in bip-taproot. I mentioned that this is the way in bitcoin how you provide multiple keys and multiple signatures. This opcode kinda sucks. The way it works is that the verifier goes through the list of keys, and for every key they try a signature until they find one that validates with that signature. So you wind up doing a lot of unnecessary signature validations; they just fail. They are unnecessary. On its own, this prevents batch verification even if the signature algorithm itself was supposed to support batch validation. This wastes a lot of time waiting for invalid ones to fail. Russell O’Connor came up with a way to avoid this with pubkey recovery. It’s a very irritating opcode. It’s irritating for a pile of other reasons. I could do a full talk about why I hate OP_CHECKMULTISIG. Another issue is that there’s an opcode limit in bitcoin, pushes don’t count, you have a limit of 201 opcodes. You can read the script and pick out the opcodes. But there’s an exception: if you have a CHECKMULTISIG opcode, then you have to go back and evaluate all the branches, and if the CHECKMULTISIG is executed, you add the number of keys in the CHECKMULTISIG. If it’s not executed, then you do nothing. There’s a bunch of extra code dealing with this one opcode which is just, who knows, a historical accident, and it makes everything more complex.

Q: So CHECKMULTISIG is not going to be in tapscript?

A: CHECKMULTISIG does have one big benefit over interactive signatures, which is that CHECKMULTISIG is non-interactive for multisig. With the threshold multisignatures I’ve been describing, all the people need to interact and they need their keys online, and this isn’t always practical because maybe you have keys in vaults or something. If you want to do multisig with Schnorr and taproot, we have OP_CHECKDLSADD. It acts like a CHECKSIG, but when it passes, it adds 1 to your accumulator. So you just count the number of these, and then you check if the accumulator is greater than or equal to whatever your threshold is. This gives you exactly the same use cases as CHECKMULTISIG but with much simpler semantics, with the ability to batch verify, and without the insane special cases in the validation code.
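
A toy sketch of that accumulator behavior (the signature check below is a non-cryptographic stand-in and all names are hypothetical; this is not tapscript semantics verbatim, just the counting idea described above): each key slot either contributes a valid signature and adds one to the accumulator, or is left empty and contributes nothing, and at the end the accumulator is compared against the threshold.

```python
# A minimal sketch of accumulator-style k-of-n checking.
import hashlib

def toy_sig(secret: str, msg: str) -> str:
    # Stand-in for a real signature: NOT cryptography, just enough to show the flow.
    return hashlib.sha256(f"{secret}|{msg}".encode()).hexdigest()

def toy_verify(pubkey: str, msg: str, sig: str) -> bool:
    # In this toy, the "pubkey" doubles as the signing secret.
    return sig == toy_sig(pubkey, msg)

def checksigadd_style(pubkeys, maybe_sigs, msg, threshold) -> bool:
    """One slot per key; an empty slot simply doesn't add to the accumulator."""
    acc = 0
    for key, sig in zip(pubkeys, maybe_sigs):
        if sig is not None:
            if not toy_verify(key, msg, sig):
                return False          # a present-but-invalid signature fails the script
            acc += 1
    return acc >= threshold

keys = ["alice", "bob", "carol"]
msg = "spend utxo 42"
sigs_by_slot = [toy_sig("alice", msg), None, toy_sig("carol", msg)]   # a 2-of-3 spend
assert checksigadd_style(keys, sigs_by_slot, msg, threshold=2)
```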

I think that’s the last taproot slide.

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/

Revocations and SIGHASH_NOINPUT

Let’s double back a little bit to lightning.

The taproot talk was about how we can get rid of the hash preimages in lightning. There’s another issue in lightning, which is the revocation transactions. Basically, every time you do a state update, there’s an extra transaction that both parties need to hold forever. If you’re doing watchtowers, then the watchtowers need to keep all this evergrowing state.

One proposal for bitcoin that would eliminate this complexity is this thing called SIGHASH_NOINPUT, and basically what it allows you to do is create a signature that is valid for spending any UTXO, any coin, that has the same public key. Then there’s a proposal for lightning called eltoo. I think there might be other proposals that use this. It uses a feature of script to restrict this a little bit. The idea is that when you update your state in a payment channel, you create a new transaction using the SIGHASH_NOINPUT flag, and this new transaction is allowed to undo the old state and also every state that came before it. So you’re still doing these updates, but each update accomplishes the work of every previous revocation or update. You have state to keep around, but it’s just one transaction and it scales with O(1) instead of O(n). This eliminates one of the only scalability issues in lightning that is asymptotically really bad.
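
A toy bookkeeping comparison of the storage difference being described (no real transactions or signatures, just counting what each party has to keep; the data layout is an assumption for illustration): penalty-based channels accumulate one revocation entry per past state, while an eltoo-style channel only ever needs the latest update because any later update can override any earlier one on-chain.

```python
# A minimal sketch: O(n) revocation storage versus O(1) latest-update storage.
penalty_channel_storage = []        # grows with every state update
eltoo_channel_storage = None        # always just the most recent update

for state_number in range(1, 10_001):
    update = {"state": state_number, "balances": (100_000 - state_number, state_number)}
    penalty_channel_storage.append({"revocation_for_state": state_number - 1})
    eltoo_channel_storage = update  # later updates can replace any earlier one on-chain

print(len(penalty_channel_storage))    # O(n): 10000 revocation entries kept forever
print(eltoo_channel_storage["state"])  # O(1): only state 10000 is kept
```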

The lightning people are really excited about this. They are asking for SIGHASH_NOINPUT if we do tapscript or taproot. Unfortunately people are scared of NOINPUT. It’s very dangerous to have the ability to produce a signature for every single output with your key in it. Naively, anyone can send coins to your old address, and then use that NOINPUT signature to scoop those coins away from you. You would need a really badly designed wallet using NOINPUT, and people worry that these wallets would get created and users would be harmed. There’s been a lot of discussion about how to make SIGHASH_NOINPUT safe. Hopefully this will get into bitcoin at the same time that taproot does. There’s a lot of design iteration discussion on SIGHASH_NOINPUT that didn’t need to happen for all the other features of taproot. But this is a really important layer 1 update.

Q: Doesn’t this make it harder to reason about the security of bitcoin or bitcoin script?

A: Yeah, I do. The existence of sighash noinput makes it harder in general to think about bitcoin script. … With noinput, maybe somebody tricks me and asks me to do a blind signature. It turns out this blind signature is a NOINPUT signature, and now they can get all the money at once. This is the kind of problem that complicates the script model. I have a solution to this on the mailing list somewhere. It’s extra thought, extra design work. Chaperone signatures…

Initially I was opposed to SIGHASH_NOINPUT because I thought it would break a use case for various privacy tech. The fact that it doesn’t break it, is not trivial. NOINPUT is difficult. Let’s move on.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/edgedevplusplus/sighash-noinput/

Speculative improvements

For a lot of these improvements we’re only going to touch the surface. If you want to ask questions or go deeper on a particular one, we’re certainly happy to do that. But there’s a lot of scalability proposals out there. Some of them are more interesting than others. We tried to cherrypick things that are very beneficial for scalability that we liked, and then some that are innovative or maybe they break things. Some of them we put in here because they are extremely popular even though we don’t like them.

utreexo

Utreexo is the most practical and most realistic of the speculative improvements. Today one of the big bottlenecks with bitcoin is blockchain validation. In particular, you need a ton of disk space and you need to keep track of the set of coins that are currently spendable. With utreexo, we merkleize this whole thing and we keep one merkle root and we get rid of the need to keep a database. Today to run bitcoin consensus, you need database software and keeping things on disk. That’s rough. Database optimization is incredibly involved and it’s difficult to be certain that it’s implemented correctly. If we switch to utreexo, then we can throw away our database software. Every time someone spends, they would provide a merkle proof showing that the coin was in the merkle root. This means we essentially don’t need to keep things on disk. We can fit an entire blockchain validator on a single ASIC. We could create a chip that can validate the bitcoin blockchain just as fast as you can download it. The major tradeoff is that every transaction needs a merkle proof alongside it, which is a couple hundred bytes. There’s a few ways to help with that though.
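
A toy sketch of the stateless-validation idea (this is not the real utreexo forest structure or proof format, just a single fixed-size merkle tree with an assumed proof encoding): the validator keeps only a 32-byte root, and every spend ships a proof that the coin it consumes is under that root.

```python
# A minimal sketch: validate a spend against a merkle root, with no database.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(root: bytes, leaf: bytes, proof) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left) pairs up to the root."""
    node = H(leaf)
    for sibling, sibling_is_left in proof:
        node = H(sibling + node) if sibling_is_left else H(node + sibling)
    return node == root

# Build a tiny "UTXO set" of 4 coins; the validator keeps only the root.
coins = [b"utxo:aa:0", b"utxo:bb:1", b"utxo:cc:0", b"utxo:dd:3"]
l = [H(c) for c in coins]
n01, n23 = H(l[0] + l[1]), H(l[2] + l[3])
root = H(n01 + n23)                       # the only state the validator stores

# A transaction spending coins[2] carries this proof alongside it.
proof_for_coin2 = [(l[3], False), (n01, True)]
assert verify_inclusion(root, coins[2], proof_for_coin2)
```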

Tadge did a talk yesterday on utreexo.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/accumulators/

Client side validation

There is currently no ongoing research into client side validation. The idea is to not put transactions on the blockchain at all. Most output history wouldn’t be visible. Since we’re not putting it on the blockchain, that means the blockchain takes less bandwidth and takes less computation to verify. It’s much simpler. To validate an output, when someone spends coins to someone else, the sender provides whoever is receiving the output with the entire history of that output back to the coinbase where it was originally mined.

This has a few practical challenges that can be solved with a lot of interesting engineering and a marketplace for small outputs or linearized outputs. But basically, the general idea is that people instead of having the whole history of the chain, you have your coins’ whole history and you send this history when you send coins to a recipient.

This proposal is impractical for a few reasons. One problem is that if you lose your history, you lose your coins. So you can’t do seed-based recovery. There’s a lot of mission critical data that needs to be maintained. That’s a UX tradeoff which is not preferred.

The second thing that really harms client side validation is these histories– if you spend multiple outputs, then you have to include all of their histories. So these histories can blow up a lot. You can avoid this by doing a process called linearization where you make sure you never combine outputs. So you just put in like a soft rule that outputs can never be combined, and then if you need to spend a certain amount, you do trades on marketplaces. There’s a lot of engineering that has to happen to make that work. When it comes to making blocks, you get a lot of savings by doing complex signature aggregation, which implementation-wise also ended up being very difficult.

All of this together gets you at most a 5x scalability improvement over what we have today. With all the complexities, it’s not practical. But in theory, it’s sound, and it would give us a 5x scalability improvement.

Q: Aren’t the UX challenges here very similar to the off-chain techniques like in lightning network? You have to store some channel updates you can’t recover purely from seed data. I agree these are challenges, but aren’t we solving these anyway for off-chain second layer things? Even scriptless script stuff has stuff you can’t derive from the seed. It’s comparable to other problems we’re solving anyway.

A: It’s very similar in a lot of ways. The problem here is a lot stronger because for example if we switch to client side validation, then everyone has to save everything. Whereas with lightning, it’s opt-in and fractionally opt-in. I don’t need to put my whole portfolio into lightning. I can just do an amount I can afford to lose and not put a lot of coins at risk. I think the challenge is stronger here, but it’s a similar flavor.

Q: As a miner, I would just stuff random invalid transactions into my block with as much fee as I can manage. You as a non-mining client can’t do much about this.

A: I glossed over that part. So the way we stop this, is that every– there’s two directions. The first thing is that every pubkey put into a transaction has to have a signature saying the owner of this pubkey authorizes that yes I appear in that block. Basically when I receive a proof, like here’s a giant history, and every point in that history, I have to ask the question, was it double spent between the time you say it was spent and the time that I received it? You can answer that question if you can look through the blocks and see if these public keys don’t appear anywhere in the blocks, then I know there’s no double spend. If it does appear, then you need to prove that this public key isn’t related to this output it was some other output. The other side of the defense is that if miners are making up random public keys and throwing in garbage, when the miner tries to spend that, they will not be able to provide a full historical proof. If a miner poofs coins out of nowhere, there will be no valid proof of those coins back to a coinbase transaction. To solve this, this requires a lot of signatures, and ideally you aggregate them so it doesn’t take up chain space.

Q: Other than the 5x scaling advantages, aren’t there other advantages like breaking SPV? So no more ambiguity that miners are validating. Now miners just prove publication without validation rules. Everyone really must run a full node. Plus, I guess, you would have a lot of privacy advantages not necessarily scalability.

A: I would say there are privacy advantages certainly. One thing I think about from a security model is what can the miners do that is malicious? Miners still have a validation rule. Every pubkey that gets spent in the block has to be listed in the block. The miner must include an aggregate signature or a series of aggregated signatures demonstrating that this pubkey was added to the block with permission. This gives miners less room to do manipulation. There are so many engineering challenges for linearizing history and doing signature aggregation, and it just seems like too big a challenge relative to the scalability benefits you get, plus you have to move the whole chain to this model which is a non-starter.

Q: .. if you can’t rely on the miners to validate transactions and to enforce the new rules… then basically every… right?

A: No. You can add soft-forks because— a soft-fork..

Q: A soft-fork from the perspective that all the nodes… but nodes that…

A: Basically, you would get versioned outputs. Basically, if I’m on some soft-fork version, or even some hard-fork version, I have a different set of rules by which I accept outputs. When a sender sends me coins, I apply my rules to the history of those coins. You could get into situations where you have to ask someone to upgrade. If it’s a soft-fork, then you don’t have to ask them to upgrade they will blindly accept. If it’s a hard-fork, that’s another one. It allows you to add breaking changes to bitcoin without breaking everyone simultaneously. You break outputs progressively as they get spent in the new paradigm. It’s an interesting model. I haven’t thought about it before. I think that’s an interesting property of client-side validation.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/client-side-validation/

DAG-based blockchain

This is another popular one. Ethereum uses a not-correct version of something called Ghost DAG. The bitcoin blockchain is a straight line; if you get forks or competing blocks then only one of them wins. You discard everything else. I think there was a 2015 paper called “Inclusive blockchain protocols” which presented a directed acyclic graph approach to blockchains. Multiple miners can release blocks at the same time, and then there’s an algorithm to linearize those blocks, and the invalid transactions you just ignore. The DAG approach solves particular miner challenges. It allows you to have faster block times, lower safe confirmation times, and it reduces the pressure on miners to have fast propagation. So fast propagation is no longer as important. It doesn’t help with initial block validation, you still have to check every output and every spend. That resource bottleneck doesn’t change, therefore this isn’t seen as true scalability, at least today, because it doesn’t solve our biggest bottleneck.

Another thing that slows down DAGs from being accepted in bitcoin is that it’s a pretty serious hard-fork. It’s a big deal to change how the blockheaders work. This makes SPV a lot more expensive because there’s a lot more blocks. The other thing really holding this back is that it has deep game theory problems. I think there’s a research paper that went and fully fleshed out the incentives. I’ve had it on my reading list for a while but I can’t find it. Until we recover that paper, or someone else writes a new one, we don’t have a formal analysis of the incentives and we don’t have a convincing formal proof that selfish mining techniques are strictly less effective on DAG-based protocols than they are on bitcoin. I currently think this is true, but I don’t think there’s a proof out there. You wouldn’t want to switch to this without that. The whole point of DAGs is to make selfish mining less bad, and we need a proof that it actually achieves this, and I don’t think we have a fully fleshed out system yet.

Q: Would it be correct to say, a 51% adversary, block time isn’t really reduced– meaning the safe confirmation time?

A: Yes. Correct. DAGs are only advantageous in terms of confirmation security if we’re assuming that nobody has greater than 51% of the hashrate. If someone does have greater than 51% of the hashrate, then things are falling more back to meta incentives anyway.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain/

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/breaking-the-chain/

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/chainbreak/

Confidential transactions

Confidential transactions is the idea that you can replace the amounts with homomorphic commitments that hide what the amounts are, but they still bind to the amounts so nobody can pretend they are something they are not. Homomorphic means they can be added. Validators can add up all the inputs and all the output amounts and they can tell whether or not they balance to zero, by checking those homomorphic commitments, or by using a proof of zeroness.
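
A toy sketch of the homomorphic balance check (a tiny multiplicative group with assumed parameters, not curve points; note that in a real system the two generators must have an unknown discrete log relation, which this toy does not provide, and real CT handles the blinding difference with a commitment-to-zero or signature trick rather than handing it over):

```python
# A minimal sketch of Pedersen-style commitments cancelling out when amounts balance.
p, q = 2039, 1019
g, h_gen = 4, 16          # toy generators of the order-q subgroup (assumed)

def commit(value: int, blinding: int) -> int:
    return (pow(g, value % q, p) * pow(h_gen, blinding % q, p)) % p

# A transaction spending a 70-coin input into outputs of 50 and 20.
r_in = 123
r_out1, r_out2 = 40, 50
c_in = commit(70, r_in)
c_out1, c_out2 = commit(50, r_out1), commit(20, r_out2)

# The validator multiplies the output commitments and checks they balance against
# the input commitment, without ever learning the amounts themselves.
blinding_diff = (r_in - r_out1 - r_out2) % q
assert c_in == (c_out1 * c_out2 * pow(h_gen, blinding_diff, p)) % p
```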

Confidential transactions aren’t an asymptotic hit to validation time or size. They add a 650 byte object called rangeproofs. These originally used to be 3-4 kilobytes but thanks to bulletproofs they are now “only” 650 bytes each which is cool. They can also be accumulated across outputs so in some contexts it could be 700 bytes per transaction, but that’s still not great.

If we had confidential transactions in bitcoin, and then you remove all the other things in bitcoin, you get left with mimblewimble, which has some cool scalability properties like removing old spent outputs. That’s cool, but you would have to remove script from bitcoin. That’s not going to happen. Bitcoin blocks aren’t going to be replaced by mimblewimble blocks ever due to the loss of functionality.

The other issue is that these things aren’t quantum secure. Right now in bitcoin all of our signatures will be broken as soon as there are practical quantum computers that exist. If we had confidential transactions, then not only would the signatures be broken and coins vulnerable to theft, but also the soundness of the system would be in question. Anyone could claim to own any amount of coins; instantly everything is gibberish. petertodd has argued this is a benefit for something like mimblewimble– if a quantum computer shows up, then every single output that ever existed on the mimblewimble chain could suddenly be opened to any value. So the entire chain then deletes itself. So that’s pretty cool. That’s kind of crazy. But it doesn’t fit into the bitcoin ethos super well…. a very vocal segment of bitcoiners don’t want everything to go away. So until we have a quantum secure version of something like bulletproofs or confidential transactions, my feeling is that it’s not going to happen in bitcoin.

https://diyhpl.us/wiki/transcripts/gmaxwell-confidential-transactions/

STARKs and general zero-knowledge proofs

In confidential transactions, we have a rangeproof attached to every output. This is an example of a zero-knowledge proof, specifically one to prove a range. The purpose of this is to make sure that the values aren’t negative numbers and aren’t going to overflow.

But in general, it’s possible to prove anything you want in zero knowledge. This has been generally known since the 1980s. Since the early 90s, it was known to be possible to do within the bounds of the known universe. More recently, since 2013, there’s been a tremendous amount of development in really practical general zero-knowledge proof systems.

STARKs are developed by Eli Ben-Sasson and others here in Tel Aviv. STARKs are very fast to verify. They are asymptotically very small– they grow with the logarithm of the size of your program execution. Unfortunately, they have large constants, like 10s or 100s of kilobytes. Practical STARK programs end up being 50 to 100 kilobytes. They are quantum secure, and they are fast to verify. They scale really well.

One application of STARKs if they were to be much smaller would be a quantum resistant rangeproof which would be interesting, which could lead to a confidential transaction proposal for bitcoin. But they are much more general than this. In theory, you could prove script conditions. You could hide your amount but also your spending conditions. You get something like taproot without all the tricks and the interaction and all these caveats about how sometimes you still need to reveal your scripts.

STARKs can also be used to basically do batch validation. You can produce a STARK proof of the validation of every single signature in the bitcoin blockchain for example. This is a little bit outside of feasibility but with special purpose ASICs you could probably produce such a proof. Now validation of every single signature comes down to validating this small STARK proof which is a few hundred kilobytes of size, rather than dealing with the 100s of gigabytes of data you presently need to do EC operations on. You could compress the whole blockchain signature validation workload into a single proof. We’re stretching what’s practical to encode; you would have to implement an entire script interpreter into a single zero-knowledge proof system which is very far away from our current ability to design programs.

It’s promising tech, and it’s practical for a lot of purposes even right now. It’s mindblowing and surprising that we can even do this. There’s lots of ways in the immediate future that these types of things could be used to improve the scalability of the current system, and also they interact pretty well with the other proposals. Utreexo proofs could be replaced with STARK proofs. The utreexo proofs are small, but STARKs could do aggregation of those. Also, you can do it piecemeal and you don’t need to get everyone to do that.

At some point, this will be mature and efficient enough that we will start seeing it in the blockchain and see real scalability improvements.

Sharding

We have two slides left, sharding and proof-of-stake. My two favorite things.

Sharding is the idea that we have the same coins and the same consensus, but we push it into different validator zones where transactions can happen that not everyone needs to verify. Sharding overcomes the challenge of “everyone needs to see everything”. Generally speaking, we only have one really widely accepted way of doing sharding and that’s a federated sidechain with some multisig quorum that signs off on what happens off-chain. If the federation decides to steal the money, the mainchain has no way of knowing if the federation is being honest or not.

There’s also SPV proofs, which are generally considered to be not that viable or too risky. I personally don’t think that miners should be trusted. Boiling any consensus down to an SPV proof makes me very nervous, I don’t think we should go down that road.

Another sharding proposal is plasma. I’m not comfortable with the security boundaries of plasma. Many of the plasma people will assert that the security model is identical to lightning which I don’t think is correct.

I’ve found that almost every sharding proposal can be broken by asking what happens if the shard does an information withholding attack. Just doing that analysis usually lets you figure out how to totally break the system.

Q: What about gmaxwell’s coin witness proposal? Use a STARK to prove the state of this sidechain without asking miners to vote or something like that.

A: There’s a lot of technologies that like– fundamental tech steps that would need to be done.

A: The challenge on that one is not just information withholding, but double spending. If you have multiple proofs that are all valid, and you see multiple histories that transpired. How do we pick which one transpired? If you combine sharding with STARKs, maybe you find some wiggle room to make it work.

Proof-of-stake

Proof-of-stake is incredibly popular in the altcoin world. It’s almost unconditionally popular. It falls back to traditional byzantine fault tolerance models. In proof-of-stake, you see 51% assumptions. Normally things fall away from incentive compatibility. I’m just going to say there’s many issues.

In 2012 and 2013 a lot of bitcoin developers did a deeper dive into proof-of-stake and identified a bunch of issues that transcend any particular proposal. Like holding on to your keys after you spend your coins, that’s a fundamental issue. Andrew wrote two papers about the fundamental challenges with proof-of-stake. I think it’s fair to say since 2013 none of those issues have been addressed by proof-of-stake proposals.

Unfortunately, there are so many ways to approach proof-of-stake paradigms that it’s hard to talk through because once you dismantle any implementation everyone is like “well yeah that was the broken one”. At some point, I hope there’s a proof-of-stake implementation that everyone likes and we can attack it directly. But it’s just too much.

https://download.wpsoftware.net/bitcoin/pos.pdf

https://download.wpsoftware.net/bitcoin/alts.pdf

https://download.wpsoftware.net/bitcoin/asic-faq.pdf

Multiple blockchains

For scalability, you can have independent blockchains like bitcoin and litecoin. You don’t need sharding or anything. As a dogecoiner, I don’t need to care about what happens on the bitcoin blockchain. If we use cross-chain lightning, I receive doge not bitcoin, and that’s scalability. We can have on-chain transactions on multiple chains, and then each chain is a separate security domain with its own problems.


https://twitter.com/kanzure/status/1171400374336536576

Introduction

Alright. Are we ready to get going? Thumbs up? Alright. Cool. I am Andrew and this is David. We’re here to talk about blockchain design patterns: layers and scaling approaches. This will be a tour of a bunch of different scaling approaches in bitcoin in particular but it’s probably applicable to other blockchains out there. Our talk is in 3 parts. We’ll talk about some existing scaling tech and related things like caching transactions and mempool propagation. We’ll talk about some of the scaling proposals out there like taproot and erlay and other things in the news. In the third section, we talk about the more speculative bleeding edge crypto things.

Scaling: goals and constraints

So what do we mean by scaling?

It’s kind of hard to define scaling. People mean different things, like maybe processing transactions in bitcoin. There’s a lot of different stages that transactions can be at, and for different use cases those transactions can be in those stages for a long time. As an example, I can make a trillion one-year timelock transactions and I am not sure if it’s reasonable to claim that bitcoin can support a trillion transactions if I can do that.

One thing that scaling is not, at least for the purposes of this talk, is things like relaxing limits. Increasing the bitcoin block size, relaxing that limit, isn’t scaling. You get 2x the throughput, for the cost of 2x all the resources. This is maybe technically possible, but not technically interesting. It has quite a serious tradeoff in terms of increasing the requirements for people to join and interact with the network. In particular, with the current limits, we’ve seen in a few talks earlier that it’s already impossible to verify the whole blockchain on your smartphone. It’s unfortunate that you can’t download the 300 GB blockchain and verify all the signatures.

I would say that when we think about scalability, one of the key constraints is– who can view our blockchain? Who can be a participant on our blockchain? These are two different things. This isn’t covered in the slides. I wanted to get across that if you can validate the chain, even if it’s too expensive for you to transact, you can still verify the honesty of your bank. So when we talk about increasing the block size or increasing resource limits, you want to make sure that you’re not excluding people from verification. In bitcoin today, that’s probably the biggest contention. When we talk about when things can scale or can’t scale, we want to make sure we don’t exclude people from being able to validate the chain. In order to do that, there’s no way to get around every user having to validate every single transaction. If you want to catch up and run a full node today, you start from the genesis block and you go through the whole history and all the UTXOs and you validate everything. It’s a multi-day process, and then you get caught up.

A second order concern, and alternative cryptocurrencies address this differently, is the issue of miner safety. Block propagation is important. If it takes a long time for blocks to get around the network, then miners can use selfish mining. The longer it takes for miners to talk with each other, the longer the period for bigger miners to get an advantage over smaller miners. We want the well-experienced large miners to be on the same playing field as a newer less well funded entrant to the mining ecosystem.

When we talk about scaling, there’s a few angles we’re coming from. Everything that we talk about, we have to view from the perspective of where we’re limited in the adversarial case instead of the average case. Often when we look at bitcoin network operations today, there are many optimizations that work super well when everyone is being friendly with each other. The relay network or the FIBRE network eliminates a lot of work for blocks, but we can’t use those metrics to decide that it’s safe to scale because these optimizations breakdown when the network is under attack. The limitation is, how do things perform when things aren’t going well? Continuous observation of the network to give you data right now, doesn’t give you data about how it behaves under attack right now.

https://diyhpl.us/~bryan/irc/bitcoin/scalingbitcoin-review.pdf

Bitcoin today: UTXOs

The first part of this talk is about bitcoin today. Things that are deployed and exist in the bitcoin network today. Let me give a quick summary of how bitcoin handles coins and transactions. This is the UTXO model– unspent transaction outputs. All bitcoins are assigned to objects called UTXOs. These are individual indivisible objects that have a spending policy, and if you know the secret key you can spend the coins. They also have an amount. To spend coins, you have to take a bunch of UTXOs that you own, you spend all of the UTXOs, and then you create new UTXOs belonging to your recipient(s). Since these are indivisible, you usually have to create an extra output for change to send it back to yourself. Other blockchains have accounts instead of UTXOs, like ethereum.

There are a number of reasons that bitcoin chose the design that it did. When you see a new transaction, you know the only way it will affect the validity of other transactions is if they both try to spend the same UTXOs, because UTXOs are atomic. So you check each transaction to see if it conflicts with existing transactions. If it conflicts, then throw it away. If not, bring it into the mempool and check its validity. You never need to validate it again if you see it, like in a confirmed block or whatever. But in an account based model, you don’t have this simplicity. You can have multiple transactions spending from the same account. If in aggregate they spend more than is in the account, then only some will be valid and some will be invalid and you can’t tell in advance which ones will be the ones that get into the block.
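
A toy mempool sketch of that conflict check (a simplified data model with hypothetical names, not Bitcoin Core's actual mempool logic): because each outpoint can only be spent once, detecting a conflict is a set lookup, and a transaction validated once never needs its scripts re-checked.

```python
# A minimal sketch of UTXO-based conflict detection and validation caching.
class ToyMempool:
    def __init__(self):
        self.spent_outpoints = set()    # outpoints spent by transactions we hold
        self.transactions = {}

    def accept(self, txid: str, inputs: list, valid_scripts: bool) -> bool:
        if any(outpoint in self.spent_outpoints for outpoint in inputs):
            return False                # conflicts with something we already have
        if not valid_scripts:
            return False                # expensive check, done once and then cached
        self.transactions[txid] = inputs
        self.spent_outpoints.update(inputs)
        return True                     # no need to re-validate when it confirms

mempool = ToyMempool()
assert mempool.accept("tx_a", [("prev_tx", 0)], valid_scripts=True)
assert not mempool.accept("tx_b", [("prev_tx", 0)], valid_scripts=True)  # double spend
```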

There’s also things about proving properties of existence of UTXOs. In bitcoin, UTXOs get created and destroyed. Those are the only two events in the UTXO’s lifecycle. If you want to prove that a UTXO existed at some point, you provide the transaction that created it and a merkle path for the inclusion of the transaction in a blockheader. For things like blockexplorers and other historical analysis auditing applications, this is very convenient. It’s very difficult to do this in an account-based model because you have to maintain the balance of every single account at every single point of time and there’s no nice compact way to tie that back to individual events in the blockchain which makes proving things about the ethereum chain historically quite difficult.

Bitcoin today: Headers-first, transaction relay, and caching

Headers first is very primitive technology today. But when you join the blockchain, you download all the blockheaders which are 80 byte data objects and you try to find the longest chain. The chain part of the blockchain is entirely contained in 80 byte headers. All 600,000 of these are like less than 60 megs of data or something. So you can validate this chain, and then later you download the block data, which is much more data. So you start by downloading the headerchain, and it’s low bandwidth– you can get it over SMS or over the satellite, and then you know the right chain, and then you go find the blocks. Maybe you get it from less trustable sources, since you already have the headers nobody can lie to you about the blocks. The headers cryptographically commit to the blocks. In 2014, headers first got implemented. Before that, you had to join the network and get lucky about choosing peers because you wouldn’t know that someone was lying to you until you were done with the sync.
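
A toy sketch of that headers-first check (simplified header fields and a one-hex-digit "difficulty" rule, not the real 80-byte serialization or target arithmetic): verify that each header commits to its parent and meets a proof-of-work condition before downloading any block bodies.

```python
# A minimal sketch of validating a header chain before fetching block data.
import hashlib

def header_hash(header: dict) -> str:
    serialized = f"{header['prev']}|{header['merkle_root']}|{header['nonce']}".encode()
    return hashlib.sha256(serialized).hexdigest()

def valid_header_chain(headers: list, difficulty_prefix: str = "0") -> bool:
    prev = "00" * 32                                  # pretend genesis parent
    for header in headers:
        if header["prev"] != prev:                    # must commit to its parent
            return False
        digest = header_hash(header)
        if not digest.startswith(difficulty_prefix):  # toy stand-in for the PoW target
            return False
        prev = digest
    return True

# Build a tiny chain by grinding nonces (cheap with a one-hex-digit target).
chain, prev = [], "00" * 32
for merkle_root in ("block1", "block2", "block3"):
    nonce = 0
    while not header_hash({"prev": prev, "merkle_root": merkle_root, "nonce": nonce}).startswith("0"):
        nonce += 1
    header = {"prev": prev, "merkle_root": merkle_root, "nonce": nonce}
    chain.append(header)
    prev = header_hash(header)

assert valid_header_chain(chain)
```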

Another interesting thing is transaction caching. You see a transaction and check its validity. One thing worth mentioning here is that this is complicated by transaction malleability. We used to hear this word a lot before segwit. You can change the witness data in a transaction. Prior to segwit, if you did that, it would change the txid and it would look like a conflicting transaction and it would require re-validation. Even after segwit, it’s possible for someone to replace the witness data on a transaction. This no longer effects the validity of the transaction or the txid, but what it means is that the transaction that appears in a block might not match the transaction data you have in your system, which you should be aware of when you’re designing applications or wallets. You need to design for the adversarial case, which is something you need to think about when you are developing transaction caching.

When you see a block and everything was previously validated, then you don’t need to validate them. With compact blocks, you don’t even need to download the transaction. You get a summary of the block contents and you’re able to quickly tell oh this summary only contains things that I know about and this block is good.

One last thing I’ll mention is dandelion, which is a transaction propagation scheme. The way this works is that you send a transaction along some one-dimensional path of peers during a stem phase. Rather than flooding the transaction to the network, which means the origin node can be determined by spy nodes on the network, you do the stem phase and then a fluff phase which makes it much more difficult to do that kind of spying analysis.

Compact blocks and FIBRE

We've talked about compact blocks and FIBRE. People sometimes get these confused. Compact blocks is about propagating blocks to nodes in the hope that they already have the transactions. FIBRE is about miners communicating blocks to each other, and its goal is to minimize latency, so that miners learn about new blocks as quickly as possible. FIBRE is super wasteful with bandwidth. It uses an error correcting code to blindly send data along as many paths as it can, sort of streaming the data. As long as a node sees enough data from the different paths, it can reconstruct the block. So one is for nodes, one is for miners.

On-chain and off-chain state

In a moment, we’re going to talk about layer 2.

There's a philosophical difference between the bitcoin community and what used to be the ethereum community, even if they're coming around to this idea now with eth2 and sharding. The idea is that the blockchain is not a data storage or communication layer. If you can avoid it in any way, you should not be putting data on the blockchain. Once you put data on the blockchain, it needs to be globally replicated and verified. This is very expensive to the world and to the network, and this in turn means more network fees: it creates even more scarcity for block space, and now you have to bid for that space to get into a block quickly.

Rather than trying to extend the capabilities and the space available within bitcoin blocks and transactions, much of the innovation in the bitcoin space has been about finding creative ways to avoid using the chain, using existing technology and anchoring data into the blockchain in a minimal way. The purest example of this is the opentimestamps project. Have people heard of opentimestamps? A smattering. Do people use opentimestamps? You should totally use it. It's open-source and free. I think petertodd is still paying the fees out of his pocket. It's cheap to run. You can commit to data lying around on your hard drive, like websites you visited, emails you have received, chat logs, whatever; you produce a cryptographic hash committing to that data. None of your data leaves your system. You send this hash to the opentimestamps server, it aggregates them, and it produces a merkle root committing to millions of documents. petertodd was also timestamping the entire contents of the Internet Archive. Then you commit this 32 byte hash to the blockchain, and you can commit to an arbitrary amount of data. A commitment in bitcoin is a proof that the data existed at a certain time. The consensus assumption is that everyone saw the block at roughly the timestamp that the block was labeled with, or that everyone can agree when the block was created, so you get a timestamp out of that.
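A toy version of the aggregation idea only (the real OpenTimestamps proof format and server protocol are more involved than this):

```python
import hashlib

def aggregate_commitment(documents):
    """Hash each document locally, build a Merkle tree over the digests;
    only the 32-byte root ever needs to reach the chain."""
    level = [hashlib.sha256(doc).digest() for doc in documents]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])        # duplicate the last digest on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]                        # commit this single hash in a transaction

root = aggregate_commitment([b"email contents", b"chat log", b"website snapshot"])
```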

You’re anchoring data to the blockchain, not publishing it. Putting data on the blockchain is a so-called proof-of-publication. If you need to prove that some data was available to everyone in the world at some point, you can put it in the blockchain. But typically when people think they need to do this, what they really need to do is anchor their data to the chain. There are some use cases where you really need proof-of-publication but it’s much more rare. I’m not aware of a real trustless alternative to using a blockchain for this, but I really wish people wouldn’t put all this arbitrary data in the blockchain.

Hash preimages and atomic swaps

We can guarantee that a cascade of events will all happen together or not happen together. This is kind of an extension of proof-of-publication. We have something where if a piece of data gets published, like a signature or something, we can guarantee that this cascade will trigger. This is a layer 1 tech primitive that leads into some very nice things that we can do in layer 2 that make blockchains more scalable. Publishing hashes and hash preimages is an example of where you need proof-of-publication because you need to make sure that if someone takes in money and the hash shows up in public, then the other person should be able to take their money on the other side of the atomic swap.

Replacing transactions: payment and state channels

gmaxwell described this in a 2013 bitcointalk post; it's funny how old a lot of this stuff is. A theme that we're not going to dwell on in this talk is that the underlying blockchain tech and moonmath and crypto tricks are relatively simple; it's coordination that is hard. Say two parties do the atomic swap protocol that David just described, where they ensure their swap is atomic, but rather than publishing the final transaction with the hash preimages, the parties exchange this data offline with each other. At this point, they have avoided publication because they have only sent the data to one another. Now they can both get their money if the other party walks away. As a last step, they throw out the transactions. Well, they replace the transactions cooperatively with an ordinary transaction that spends their coins to the right places without revealing any hash preimages. This is a little bit smaller because they are no longer revealing so much data. This is of course not an atomic operation; one party can sign and take the coins. But the blockchain is there, ready to enforce the hash preimage condition to take the coins. So now they can do the atomic swap without touching the chain, knowing that if anything goes wrong, they can fall back to on-chain transactions.

This idea of replacing transactions that haven't been published to the blockchain, with the intention of only publishing the final version, is a very important concept. It can be generalized to the idea of a payment channel. At a high level, ignoring all the safety features, the idea of a payment channel is that two parties put some coins into a 2-of-2 multisig policy. They repeatedly sign transactions that give some money to the other party or whatever. They keep doing this replacement over and over again, without publishing each intermediate state to the blockchain, because they expect further updates. There's a lot of complexity to get from this to an HTLC and a modern payment channel, but that's the basic idea.
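A toy of the replacement idea alone, with no revocation or penalty machinery (so nothing like a real lightning channel; names are made up for illustration):

```python
class ToyChannel:
    def __init__(self, balance_a, balance_b):
        self.version = 0
        self.balance_a, self.balance_b = balance_a, balance_b

    def pay(self, amount, a_to_b=True):
        if a_to_b:
            assert amount <= self.balance_a
            self.balance_a -= amount
            self.balance_b += amount
        else:
            assert amount <= self.balance_b
            self.balance_b -= amount
            self.balance_a += amount
        self.version += 1
        # Both parties would now co-sign a transaction spending the same 2-of-2
        # funding output with this new split; only the latest version is meant
        # to ever hit the chain.
        return (self.version, self.balance_a, self.balance_b)
```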

These payment channels and this replacement technique get you a lot of scalability, because you're no longer limited in the number of transactions you can do between now and the next block that gets mined. You're replacing transactions rather than creating new ones that need to get committed, because of the way the 2-of-2 multisig is set up.

Payment channels and revoking old state

One problem with payment channels is that if you keep creating new transactions repeatedly, there's the concern that one of the parties can publish an old state. None of these transactions is inherently better than the others. Because they are always spending the same coins, only one will be valid. But the blockchain isn't going to do anything for you to ensure that the "right one" is the one that becomes valid. This seems like a no-go, because you want to cancel all of the old states each time you replace; that's what replacement should mean. You could double spend the coins and re-start, but then you would have to create a transaction and publish it to the blockchain, so you don't gain anything. But there's another way, where you create a new transaction that undoes the first transaction. So you create a second transaction that spends the original coins and gives them back to the original owner, and the original transaction is now revoked. If that first transaction hits the blockchain anyway, then the other counterparty has a penalty transaction or some other transaction they can publish to get their money back. This is fine, but it requires a separate revocation transaction to be created every time you do one of these state updates. This is in fact how the lightning network works today; every payment channel has some extra state that parties need to store for every single channel update. There are some ways to mitigate that.

This idea shows up quite a bit when people are proposing extensions to bitcoin: why don't we provide a way to cancel old transactions directly? What if transactions had a timeout, and after that timeout they are no longer valid? Or what about a signature you can add that invalidates a transaction? What about a way to make transactions invalid after the fact? There are a few reasons why this doesn't work. One is the ability to cache transactions in your mempool. If you have transactions cached in your mempool that can be invalidated by some event, like a new block appearing, then each time one of these events happens you have to rescan your whole mempool and update or remove transactions. This requires expensive scans or complex reindexing. A further complication is that this is made worse by the fact that the blockchain can reorg. You have to rescan the chain to see which transactions have become invalid, unconfirmed or absent. It's a big mess. It also creates a fungibility risk, because if a transaction could be invalidated (say a transaction was in the blockchain, a reorg happens and it becomes invalid), then every transaction that spent its coins, the whole transaction graph below that point, becomes invalid too. Coins with an expiry mechanism are riskier to accept than other coins. This is true, by the way, of coinbase transactions and miner rewards: in bitcoin, we require that 100 blocks go by before miners can spend their coins, because we want to make it next to impossible for a reorg to invalidate those coins. A cancellation mechanism has the same issue. A lot of these proposals just don't work. It's also difficult to create a revocation mechanism that actually gets broadcast to the whole network. If you revoke a transaction and it gets confirmed anyway, without a consensus layer, which is what a blockchain is supposed to be anyway, how do you complain about that?

Linking payment channels

This is how the lightning network works. It uses HTLCs, a hashlock preimage trick to make transactions atomic. You link multiple payment channels with the same hash, so you can send money through long paths. The proof-of-publication mechanism guarantees that if the last person on the path can take their money, so can everyone all the way back to the original sender: the money goes all the way through or it doesn't go at all. Each of these individual linked payment channels can be updated independently by the mechanism of creating revocation transactions and replacing the original transaction. Finally, there are some other safety mechanisms like timelocks, so that when anything goes wrong in the protocol, the money eventually returns to where it came from. There's thus a limited hold-up risk related to this timelock period. I'm not going to talk about the details of the timelocks or about linking payment channels across chains, or multi-asset lightning.

I just want to talk about the specific scalability tricks that go into forming these payment channels. These are quite general tools you can use, and you can use it to create cool projects on the bitcoin network without requiring a lot of resources from the bitcoin chain.

Proposed improvements

This section is about proposals that are, generally speaking, accepted: things that have not been deployed yet, but we believe they will be deployed because the tradeoffs make sense and we like the structure. The first one we're going to talk about is erlay.

Erlay

Erlay deals with the transaction propagation layer. It's consensus critical in the sense that if you're not getting blocks or transactions then you can't see the state of the network and you can be double spent. If you're not aware of what the most recent block is, or what the best chain is, you can be double spent and you're at risk. On the network today, the transaction propagation system takes the most bandwidth. Because of that, we put a small cap on the number of peers that we actively go out and connect to, which is 8, although a pull request has increased this to 10 for block propagation only, not transaction propagation. 8 is enough to create a well-connected, robust graph, but sometimes you see eclipse attacks, and it often feels fragile.

What erlay does is use a set reconciliation mechanism where two peers can say, in very brief terms, what transactions they have. They do set reconciliation and figure out what each of them has that the other doesn't. The trick here is that we substantially decrease the bandwidth it takes to stay up to date with the most recent transactions and the most recent blocks. This means it would be easy to increase from 8 peers to more. Today, when you quadruple your peers, you quadruple your bandwidth overhead, so it's linear. But with erlay, it's closer to constant. The amount of bandwidth with 100 peers, with erlay, is not nearly as high as the previous linear bandwidth overhead. This reduces the cost of running a node and staying well connected.
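A naive illustration of what reconciliation computes, not of how erlay computes it: erlay, via the minisketch library, exchanges a sketch whose size scales with the size of the difference between the two sets rather than with the size of the whole mempool.

```python
def reconcile(my_txids: set, their_txids: set):
    missing_here = their_txids - my_txids     # transactions to request
    missing_there = my_txids - their_txids    # transactions to announce/send
    return missing_here, missing_there
```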

This was developed in part by gleb naumenko. Ah, he's hiding in the audience. Hopefully he wasn't shaking his head because of anything wrong we might have said.

Q: If we’re talking adversarially, then what’s the adversarial model here? It’s optimistically constant time. But what about the adversarial case?

A: I believe it remains constant time. You're connected to peers. If you have an adversarial peer and they are consuming more bandwidth, you can detect that they are an adversarial peer. You can decrease their score and ban them. Under the erlay model, I believe an adversary cannot interrupt or degrade your connections with good peers, and you also get tools for detecting which ones are good peers.

Q: So you have to hold these transactions in order to give those sketches? Can I spoof having the exact same set as you?

A: No. You cannot spoof the sketches unless you have the actual transaction data. Or at least unless you have the transaction hashes. What you’re doing by spoofing those, you’re increasing your score. I see. I don’t think you would accomplish a lot by such an attack, other than avoiding getting yourself banned for being a spy node. If you were a spy node, and you had no transactions and you weren’t broadcasting new transactions, then you can’t spoof these sketches but even if you could then that’s a minimal harm thing to do.

Q: Is this the minisketch library from Pieter Wuille?

A: Yes. Erlay is transaction propagation– the proposal for transaction propagation in the bitcoin network. A key component of it is a set reconciliation algorithm, which is how the minisketch comes in. It’s for set equality and reconciliation. Minisketch is part of erlay.

Q: Does increasing the number of connected nodes, give privacy concerns to be connected to more spy nodes?

A: So the concern with connecting to more spy nodes is that if a spy node can create a lot of connections, then it can see where transactions are being broadcast from and infer things about their origin. With erlay, if you connected to more spy nodes and not more real nodes, then I guess that would make things worse. But if everyone is connected to more nodes, then transactions can get further into the network using fewer hops, which gives less information to spy nodes even if they have more connections. Dandelion can resolve a lot of these issues too. If you have a mechanism that makes transactions appear from random places, being connected to more spy nodes is not going to harm your privacy.

https://diyhpl.us/wiki/transcripts/lets-talk-bitcoin-podcast/2019-06-09-ltb-pieter-wuille-jonas-nick/

https://diyhpl.us/wiki/transcripts/tftc-podcast/2019-06-18-andrew-poelstra-tftc/

Compact threshold signatures

The next few slides are building up to a proposal called taproot but let me talk about some components first.

The first big hyped component of the taproot proposal for bitcoin is something called Schnorr signatures. The proposal is almost complete; in any world except bitcoin, it would already be considered complete. As Ruben Somsen described, Schnorr signatures are an alternative to ECDSA. It's a digital signature algorithm, a replacement for ECDSA, and functionally the same; they are in many ways very similar. With both Schnorr and ECDSA, you can get compact 64 byte signatures. In both Schnorr and ECDSA you can do batch validation. But with Schnorr signatures, a lot of these things are much simpler and require fewer cryptographic assumptions. Secondly, the ECDSA signatures deployed in bitcoin are openssl-style ECDSA signatures. Satoshi originally created these by using the openssl library and encoding whatever came out of it. As a result, ECDSA signatures on bitcoin today are 72 bytes instead of 64 bytes, and there's literally no value gained by this. These are extra encoding bytes that aren't needed at all. This encoding is useful for larger crypto systems where you have objects that aren't signatures and you don't care about a few bytes; but in bitcoin we certainly do care about several bytes times 400 million transactions.

Another unfortunate thing about ECDSA signatures in bitcoin is that you can't batch validate them. This is partly from ECDSA itself and partly from how bitcoin does things. Basically, the ECDSA verification equation only checks the x coordinate of some of the curve points that are involved. Every x coordinate that corresponds to a curve point actually corresponds to two curve points. So there are two different points that could be used in an ECDSA signature that would both validate. This is not a security loss; it's not interesting in any way except that when you try to batch validate ECDSA signatures and you have 1000 of them and you're trying to combine them into one equation, the inputs to that equation are ambiguous. There's a factor of 2 where you have to guess which points you're working with; as a result, there's a good chance your batch validation will fail even if the signatures are valid. We could in principle fix this with ECDSA. But we don't want to; if we're going to replace ECDSA signatures, all the complexity comes from the consensus layer, so we might as well replace the whole thing with Schnorr signatures anyway.

The first feature of Schnorr signatures that is genuinely much easier with Schnorr is threshold signatures. Ruben almost talked about this. Say you have multiple parties that together want to sign a transaction. The way this works in bitcoin today is that they each contribute keys and they each contribute signatures, and validators verify the keys and signatures independently because they can't batch them. It's 2x the participants and 2x the resource usage.

But with Schnorr signatures, it's very simple for these parties to instead combine their public keys, get a joint public key that represents both of them, and then jointly produce a signature such that neither of the individual parties is able to make a signature by themselves. This is identical to 2-of-2 in semantics, but what hits the blockchain is a single key and a single signature. This has a whole pile of benefits, like scalability, because there's much less data hitting the chain. It also improves privacy, because blockchain validators no longer know the number of participants in the transaction. They can't distinguish normal transactions from 2-of-2 spends. Another benefit is that you are no longer restricted to what the blockchain supports. The blockchain supports signatures; you don't have to worry about how many signatures you're allowed to use or whatever. There are all sorts of questions you have to ask right now to use native multisig in bitcoin, which you can avoid.
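Loosely, and glossing over the nonce exchange and the rogue-key protections that a real protocol such as MuSig adds, two-party aggregation with private keys x1, x2 and nonces r1, r2 looks like this:

```latex
\begin{aligned}
P &= P_1 + P_2 = (x_1 + x_2)\,G &&\text{(aggregate public key)}\\
R &= R_1 + R_2 = (r_1 + r_2)\,G &&\text{(aggregate nonce)}\\
e &= H(R \,\|\, P \,\|\, m)\\
s &= s_1 + s_2 = (r_1 + r_2) + e\,(x_1 + x_2)\\
\text{verify:}\quad s\,G &= R + e\,P
\end{aligned}
```

Each party only ever contributes its own half of s, so neither can sign alone, yet the chain sees one key and one signature.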

With some complexity, you can generalize this further. I said 3-of-3 and 5-of-5, but you can actually do thresholds as well. You can do k-of-n for any k and any n with k less than n. Again, this produces one key and one signature. You can generalize this even further, and this could be an hour-long talk itself. You could do arbitrary monotone functions: you take a big set of signers and define any collection of subsets who should be able to sign, and you can produce a threshold-signature-like algorithm for which any of the admissible subsets is allowed to sign and no other subset is. Monotone just means that if some set of signers is allowed, then any larger set is also allowed. It's almost tautological in the blockchain context.

Adaptor signatures

Ruben gave a whole talk on adaptor signatures. We use the proof-of-publication trick and we encode it in the signature. Rather than a hash and preimage in the script, you have a point and an extra secret. This completely replaces the hash and the preimage. What's cool about this is that once again all you need is signature support from the blockchain; you don't need additional consensus rule changes. You just put a signature on the chain. Another benefit is privacy. There are a lot of cool privacy tricks. One signature hits the chain, with no indication that adaptor signatures were involved. You can also do reblinding and other tricks.

The proof-of-publication requirement we had for the preimages and the hashes and the preimage, now become the proof-of-publication requirement for the signatures. That’s kind of inherent to a blockchain. Blockchains have transactions with signatures. There’s no way with existing tech to eliminate the requirement that all the signatures get published, because the chain needs to be publicly verifiable and everyone needs to be able to verify the transactions which means they need to be able to validate the signatures. We get more capabilities with adaptor signatures, and we require less out of the blockchain. That’s my definition of scalability.

Taproot: main idea

Before I talk about taproot… Up till now, I have been talking about Schnorr signatures and how you can combine these to have arbitrarily complex policies or protocols. It's one key, one signature. The idea behind taproot is that if you can really do so much with one key and one signature, why don't we just make all the outputs in bitcoin be one key and all the witnesses be one signature? Instead of bringing in this whole script evaluation apparatus, we can use one key and one signature, and that's great. Your validation condition is a nice little algebraic equation and there's little room for weird consensus edge cases. You also get batch validation. I should mention that this is kind of how mimblewimble works: you only have the ability to verify signatures. Historically, this whole adaptor signature business came out of mimblewimble, and in particular the question of how to add scripting to mimblewimble. As soon as I came up with this idea, I brought it to bitcoin and forgot about mimblewimble. As soon as I could do it with bitcoin, I ran away.

But bitcoin isn’t mimblewimble, and people use scripts like timelocks and lightning channels. petertodd has some weird things like coins out there that you can get if you collide sha1 or sha256 or basically any of the hash functions that bitcoin supports. You can implement hash collision bounties and the blockchain enforces it.

(An explanation of the following Q&A exchange can be found here.)

Q: Can you do timelocks with adaptor signatures?

A: There’s a few ways to do it, but they aren’t elegant. You could have a clock oracle that produces a signature.

Q: You make an adaptor signature for the redeem part, but then you do a joint musig signature on another transaction with a locktime of just… rather than having to do script.

A: That’s cool.

Q: You didn’t know that?

A: This is one way it’s being proposed by mimblewimble; but this requires the ability to aggregate signatures across transactions.

Q: No, there's two transactions already existing. Before the locktime, you can spend with the adaptor signature one, like in atomic swaps. After the locktime, the other one becomes valid and you can spend with that. They just double spend each other.

A: You’d have to diagram that out for me. There’s a few ways to do this, some that I know, but yours isn’t one of them.

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-September/017316.html

For timelocks, it appears that you need script support. That’s my current belief at this time of day.

Taproot does still support script. The cool thing is that taproot hides the script inside the key. You take the public key, and you can sign with it. But you can transform this key into a commitment to arbitrary data. The way you do this technically is you take the key and the extra data, you hash it all up, you multiply the hash by the curve generator, and you add the result to your original key. So now you have a key that looks and acts just like an ordinary key; it's still 32 bytes, it's still something that can sign, and there's nothing remarkable about it. From the outside, you can't tell there's anything special about it. But secretly, it has committed to extra script data. If you need to use the extra script features, then you have the option to do so. You reveal the script, and the validators, once they see that information, are able to check that the pubkey tweak was produced correctly: they can see that the key was committing to this script, and then they will allow the coins to be spent by a valid witness for that script. If you need the script, you reveal it along with a witness. If you don't need the script, you publish nothing, not even the hash of the script. Pretty much all of these applications can fit in these keys, except the hash collision bounty.
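A rough sketch of the tweak just described, assuming a hypothetical point library with addition, scalar multiplication, and a to_bytes() serializer; the real bip-taproot construction uses tagged hashes and x-only keys, but the shape is the same:

```python
import hashlib

def taproot_style_tweak(P, G, script_commitment: bytes, order: int):
    t = int.from_bytes(
        hashlib.sha256(P.to_bytes() + script_commitment).digest(), "big") % order
    Q = P + t * G   # looks like an ordinary key, but secretly commits to the script
    return Q, t

# Key-path spend: whoever holds x with P = x*G can sign for Q using x + t.
# Script-path spend: reveal P and the script, show Q == P + H(P || script)*G,
# then satisfy the script with a valid witness.
```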

Q: … MAST?

A: Does taproot supersede MAST?

I have a slide about this. Does taproot supersede MAST? MAST is an idea that has been floating around since 2012. It might have been russell o'connor on bitcointalk; I should stop repeating this history until I double check. The idea of a merkleized abstract syntax tree is that you have a script with a whole bunch of independent conditions. You put them into a merkle tree, and you only reveal the branches that you want to use at spending time. The other branches stay hidden under the cryptographic hash commitments. There's almost no tradeoff to using MAST over not using MAST, except that there are many ways to do MAST. There's perpetual bikeshedding and so many competing designs for MAST, and none of them is particularly better than the others. In the early days, nobody was using script for anything interesting, so the practical benefit of MAST back then would have been small.

Instead of committing to a single script, you can commit to a merkle tree of scripts. So there you go, that's MAST. In fact, the taproot proposal includes MAST as part of it. It's separate from the commitment structure I described, but it's there, with a couple of advanced cool things that we noticed as part of implementation. We have a form of MAST where you provide a proof showing where in the tree the script you're using is, and we have a way of hiding the direction the path takes. So it's difficult to tell if a tree is unbalanced, you learn very little information about the shape of the tree, and there are a few other efficiency things like that which I don't want to get into. They are in Pieter Wuille's fork of the bips repo, which has a branch with a draft BIP and a few pull requests open on it. It's the first hit on google for bip-taproot, so you can find it.

One other thing I wanted to mention here is that we eliminate the CHECKMULTISIG opcode in bip-taproot. I mentioned that this is the way in bitcoin that you provide multiple keys and multiple signatures. This opcode kind of sucks. The way it works is that for each signature, the verifier goes through the list of keys, trying keys until it finds one that validates that signature. So you wind up doing a lot of unnecessary signature validations that just fail. On its own, this prevents batch verification, even if the signature algorithm itself supports batch validation, and it wastes a lot of time waiting for invalid checks to fail. Russell O'Connor came up with a way to avoid this with pubkey recovery. It's a very irritating opcode, and it's irritating for a pile of other reasons; I could do a full talk about why I hate OP_CHECKMULTISIG. Another issue is that there's an opcode limit in bitcoin: pushes don't count, and you have a limit of 201 opcodes. You can read the script and pick out the opcodes. But there's an exception: if you have a CHECKMULTISIG opcode, you have to go back and evaluate all the branches, and if the CHECKMULTISIG is executed, you add the number of keys in the CHECKMULTISIG to the count; if it's not executed, you do nothing. There's a bunch of extra code dealing with this one opcode, which is just a historical accident, and it makes everything more complex.

Q: So CHECKMULTISIG is not going to be in tapscript?

CHECKMULTISIG does have one big benefit over interactive signatures, which is that CHECKMULTISIG is non-interactive. With the threshold multisignatures I've been describing, all the participants need to interact and they need their keys online, and this isn't always practical because maybe you have keys in vaults or something. If you want to do non-interactive multisig with Schnorr and taproot, we have OP_CHECKDLSADD. It acts like a CHECKSIG, but when the signature check passes, it adds 1 to an accumulator. So you just count these up, and then you check whether the accumulator is greater than or equal to your threshold. This gives you exactly the same use cases as CHECKMULTISIG but with much simpler semantics, with the ability to batch verify, and without the insane special cases in the validation rules.
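A toy model of this counting semantics (what the talk calls OP_CHECKDLSADD, later standardized in tapscript as OP_CHECKSIGADD); the verify callback stands in for a real signature check:

```python
def threshold_script(keys, sigs, msg, k, verify):
    acc = 0
    for key, sig in zip(keys, sigs):
        if not sig:
            continue                 # empty signature: contributes nothing
        if not verify(key, sig, msg):
            return False             # a present-but-invalid signature fails the script
        acc += 1                     # valid signature: accumulator += 1
    return acc >= k                  # one threshold comparison at the end
```

Because every non-empty signature must verify against its stated key, the verifier never has to guess which key matches which signature, which is what makes batch verification possible.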

I think that’s the last taproot slide.

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/

Revocations and SIGHASH_NOINPUT

Let’s double back a little bit to lightning.

The taproot discussion was about how we can get rid of the hash preimages in lightning. There's another issue in lightning, which is the revocation transactions. Basically, every time you do a state update, there's an extra transaction that both parties need to hold forever. If you're using watchtowers, then the watchtowers need to keep all this ever-growing state.

One proposal for bitcoin that would eliminate this complexity is this thing called SIGHASH_NOINPUT. Basically, what it allows you to do is create a signature that is valid for spending any UTXO, any coin, that has the same public key. Then there's a proposal for lightning called eltoo; I think there might be other proposals that use this. It uses a feature of script to restrict this a little bit. The idea is that when you update your state in a payment channel, you create a new transaction using the SIGHASH_NOINPUT flag, and this new transaction is allowed to undo the old state and also every state that came before it. So you're still doing these updates, but each update accomplishes the work of every previous revocation or update. You have state to keep around, but it's just one transaction, and it scales with O(1) instead of O(n). This eliminates one of the only scalability issues in lightning that is asymptotically really bad.

The lightning people are really excited about this. They are asking for SIGHASH_NOINPUT if we do tapscript or taproot. Unfortunately people are scared of NOINPUT. It’s very dangerous to have the ability to produce a signature for every single output with your key in it. Naively, anyone can send coins to your old address, and then use that NOINPUT signature to scoop those coins away from you. You would need a really badly designed wallet using NOINPUT, and people worry that these wallets would get created and users would be harmed. There’s been a lot of discussion about how to make SIGHASH_NOINPUT safe. Hopefully this will get into bitcoin at the same time that taproot does. There’s a lot of design iteration discussion on SIGHASH_NOINPUT that didn’t need to happen for all the other features of taproot. But this is a really important layer 1 update.

Q: Doesn’t this make it harder to reason about the security of bitcoin or bitcoin script?

A: Yeah, it does. The existence of SIGHASH_NOINPUT makes it harder in general to think about bitcoin script. With NOINPUT, maybe somebody tricks me into doing a blind signature. It turns out this blind signature is a NOINPUT signature, and now they can get all the money at once. This is the kind of problem that complicates the script model. I have a solution to this on the mailing list somewhere. It's extra thought, extra design work. Chaperone signatures…

Initially I was opposed to SIGHASH_NOINPUT because I thought it would break a use case for various privacy tech. The fact that it doesn’t break it, is not trivial. NOINPUT is difficult. Let’s move on.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/edgedevplusplus/sighash-noinput/

Speculative improvements

For a lot of these improvements we're only going to touch the surface. If you want to ask questions or go deeper on a particular one, we're certainly happy to do that. But there are a lot of scalability proposals out there. Some of them are more interesting than others. We tried to cherrypick things that are very beneficial for scalability and that we like, and then some that are innovative or that maybe break things. Some of them we put in here because they are extremely popular, even though we don't like them.

utreexo

Utreexo is, out of all of these, the most practical and most realistic of the speculative improvements. Today one of the big bottlenecks with bitcoin is blockchain validation. In particular, you need a ton of disk space and you need to keep track of the set of coins that are currently spendable. With utreexo, we merkleize this whole thing, keep only the merkle roots, and get rid of the need to keep a database. Today, to run bitcoin consensus, you need database software and you need to keep things on disk. That's rough. Database optimization is incredibly involved, and it's difficult to be certain that it's implemented correctly. If we switch to utreexo, then we can throw away our database software. Every time someone spends a coin, they would provide a merkle proof showing that the coin was in the accumulator. This allows us to essentially keep nothing on disk. We could fit an entire blockchain validator on a single ASIC; we could create a chip that validates the bitcoin blockchain as fast as you can download it. The major tradeoff is that every transaction needs a merkle proof alongside it, which is a couple hundred bytes. There are a few ways to help with that, though.
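A minimal sketch of the accumulator idea only: the node keeps a few Merkle roots, and the spender supplies the UTXO plus a Merkle path that hashes up to one of them. Real utreexo maintains a forest of perfect trees and also handles additions and deletions efficiently.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def utreexo_style_verify(leaf: bytes, path, roots) -> bool:
    """path: list of (sibling_hash, sibling_is_right) pairs, leaf to root."""
    node = h(leaf)
    for sibling, sibling_is_right in path:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node in roots
```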

Tadge did a talk yesterday on utreexo.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/accumulators/

Client side validation

There's currently no ongoing research into client-side validation. The idea is to not put transactions on the blockchain at all. Most output history wouldn't be visible. Since we're not putting it on the blockchain, the blockchain takes less bandwidth and less computation to verify. It's much simpler. To validate an output, when someone spends coins to someone else, the sender provides the recipient with the entire history of that output, back to the coinbase where it was originally mined.

This has a few practical challenges that can be solved with a lot of interesting engineering and a marketplace for small or linearized outputs. But the general idea is that, instead of everyone having the whole history of the chain, you have your own coins' whole history, and you send that history along when you send coins to a recipient.

This proposal is impractical for a few reasons. One problem is that if you lose your history, you lose your coins. So you can’t do seed-based recovery. There’s a lot of mission critical data that needs to be maintained. That’s a UX tradeoff which is not preferred.

The second thing that really harms client-side validation is these histories: if you spend multiple outputs, then you have to include the history of each, so the histories can blow up a lot. You can avoid this by doing a process called linearization, where you make sure you never combine outputs. So you put in a soft rule that outputs can never be combined, and then if you need to spend a certain amount, you do trades on marketplaces. There's a lot of engineering that has to happen to make that work. When it comes to making blocks, you get a lot of savings by doing complex signature aggregation, which implementation-wise also ended up being very difficult.

All of this together gets you at most a 5x scalability improvement over what we have today. With all the complexities, it’s not practical. But in theory, it’s sound, and it would give us a 5x scalability improvement.

Q: Aren’t the UX challenges here very similar to the off-chain techniques like in lightning network? You have to store some channel updates you can’t recover purely from seed data. I agree these are challenges, but aren’t we solving these anyway for off-chain second layer things? Even scriptless script stuff has stuff you can’t derive from the seed. It’s comparable to other problems we’re solving anyway.

A: It’s very similar in a lot of ways. The problem here is a lot stronger because for example if we switch to client side validation, then everyone has to save everything. Whereas with lightning, it’s opt-in and fractionally opt-in. I don’t need to put my whole portfolio into lightning. I can just do an amount I can afford to lose and not put a lot of coins at risk. I think the challenge is stronger here, but it’s a similar flavor.

Q: As a miner, I would just stuff random invalid transactions into my block with as much fee as I can manage. You as a non-mining client can't do much about this.

A: I glossed over that part. The way we stop this has two parts. The first is that every pubkey put into a transaction has to have a signature saying the owner of this pubkey authorizes that, yes, I appear in that block. Basically, when I receive a proof, like here's a giant history, at every point in that history I have to ask the question: was it double spent between the time you say it was spent and the time that I received it? You can answer that question by looking through the blocks: if these public keys don't appear anywhere in the blocks, then I know there's no double spend. If one does appear, then you need to prove that this public key isn't related to this output, that it was some other output. The other side of the defense is that if miners are making up random public keys and throwing in garbage, then when the miner tries to spend that, they will not be able to provide a full historical proof. If a miner poofs coins out of nowhere, there will be no valid proof of those coins back to a coinbase transaction. To make this work, it requires a lot of signatures, and ideally you aggregate them so they don't take up chain space.

Q: Other than the 5x scaling advantages, aren’t there other advantages like breaking SPV? So no more ambiguity that miners are validating. Now miners just prove publication without validation rules. Everyone really must run a full node. Plus, I guess, you would have a lot of privacy advantages not necessarily scalability.

A: I would say there are privacy advantages certainly. One thing I think about from a security model is what can the miners do that is malicious? Miners still have a validation rule. Every pubkey that gets spent in the block has to be listed in the block. The miner must include an aggregate signature or a series of aggregated signatures demonstrating that this pubkey was added to the block with permission. This gives miners less room to do manipulation. There are so many engineering challenges for linearizing history and doing signature aggregation and it just seems like too big of challenges relative to the scalability benefits you get, plus you have to move the whole chain to this model which is a non-starter.

Q: .. if you can’t rely on the miners to validate transactions and to enforce the new rules… then basically every… right?

A: No. You can add soft-forks because— a soft-fork..

Q: A soft-fork from the perspective that all the nodes… but nodes that…

A: Basically, you would get versioned outputs. Basically, if I’m on some soft-fork version, or even some hard-fork version, I have a different set of rules by which I accept outputs. When a sender sends me coins, I apply my rules to the history of those coins. You could get into situations where you have to ask someone to upgrade. If it’s a soft-fork, then you don’t have to ask them to upgrade they will blindly accept. If it’s a hard-fork, that’s another one. It allows you to add breaking changes to bitcoin without breaking everyone simultaneously. You break outputs progressively as they get spent in the new paradigm. It’s an interesting model. I haven’t thought about it before. I think that’s an interesting property of client-side validation.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/client-side-validation/

DAG-based blockchain

This is another popular one. Ethereum uses a not-quite-correct version of something called GHOST. The bitcoin blockchain is a straight line; if you get forks or competing blocks, then only one of them wins and you discard everything else. I think there was a 2015 paper called "Inclusive blockchain protocols" which presented a directed acyclic graph approach to the blockchain. Multiple miners can release blocks at the same time, and then there's an algorithm to linearize those blocks, and the invalid transactions you just ignore. The DAG approach solves mining-specific challenges. It allows faster block times and lower safe confirmation times, and it reduces the pressure on miners to have fast propagation, so fast propagation is no longer as important. It doesn't help with initial block validation, though; you still have to check every output and every spend. That resource bottleneck doesn't change, and therefore this isn't seen as true scalability, at least today, because it doesn't solve our biggest bottleneck.

Another thing that slows down DAGs from being accepted in bitcoin is that it's a pretty serious hard-fork. It's a big deal to change how the block headers work. It also makes SPV a lot more expensive, because there are a lot more blocks. The other thing really holding this back is that it has deep game theory problems. I think there's a research paper that went and fully fleshed out the incentives; I've had it on my reading list for a while but I can't find it. Until we recover that paper, or someone else writes a new one, we don't have a formal description of the incentives, and we don't have a convincing formal proof that selfish mining techniques are strictly less effective on DAG-based protocols than they are on bitcoin. I currently think this is true, but I don't think there's a proof out there, and you wouldn't want to switch to this without one. The whole point of DAGs is to make selfish mining less bad, and we need a proof that it actually achieves this, and I don't think we have a fully fleshed out system yet.

Q: Would it be correct to say, a 51% adversary, block time isn’t really reduced– meaning the safe confirmation time?

A: Yes. Correct. DAGs are only advantageous in terms of confirmation security if we’re assuming that nobody has greater than 51% of the hashrate. If someone does have greater than 51% of the hashrate, then things are falling more back to meta incentives anyway.

https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain/

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/breaking-the-chain/

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/chainbreak/

Confidential transactions

Confidential transactions is the idea that you can replace the amounts with homomorphic commitments that hide what the amounts are, but still bind to the amounts so nobody can pretend they are something they are not. Homomorphic means they can be added. Validators can add up all the input commitments and all the output commitments and tell whether or not they balance to zero, by checking those homomorphic commitments, or by using a proof of zeroness.
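Roughly, with value v, blinding factor r, and two independent generators G and H, the commitments and the balance check look like this:

```latex
\begin{aligned}
C(v, r) &= v\,H + r\,G\\
\sum_{\text{inputs}} C_i \;-\; \sum_{\text{outputs}} C_j
  &= \Big(\textstyle\sum v_{\text{in}} - \sum v_{\text{out}}\Big)\,H
   \;+\; \Big(\textstyle\sum r_{\text{in}} - \sum r_{\text{out}}\Big)\,G
\end{aligned}
```

If the amounts balance, the H term vanishes and what is left is a commitment to zero, which can be proven or checked without revealing any individual amount.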

Confidential transactions aren't an asymptotic hit to validation time or size, but they add a roughly 650 byte object called a rangeproof to each output. These originally used to be 3-4 kilobytes, but thanks to bulletproofs they are now "only" about 650 bytes each, which is cool. They can also be aggregated across outputs, so in some contexts it could be around 700 bytes per transaction, but that's still not great.

If we had confidential transactions in bitcoin, and then you removed all the other things in bitcoin, you would get left with mimblewimble, which has some cool scalability properties like removing old spent outputs. That's cool, but you would have to remove script from bitcoin. That's not going to happen. Bitcoin blocks aren't going to be replaced by mimblewimble blocks, ever, due to the loss of functionality.

The other issue is that these things aren't quantum secure. Right now in bitcoin, all of our signatures will be broken as soon as practical quantum computers exist. If we had confidential transactions, then not only would the signatures be broken and coins vulnerable to theft, but the soundness of the system would also be in question: anyone could claim to own any amount of coins, and instantly everything is gibberish. petertodd has argued this is a benefit for something like mimblewimble: if a quantum computer shows up, then every single output that ever existed on the mimblewimble chain can suddenly be opened to any value, so the entire chain essentially deletes itself. So that's pretty cool. That's kind of crazy. But it doesn't fit into the bitcoin ethos super well; a very vocal segment of bitcoiners don't want everything to go away. So until we have a quantum secure version of something like bulletproofs or confidential transactions, my feeling is that it's not going to happen in bitcoin.

https://diyhpl.us/wiki/transcripts/gmaxwell-confidential-transactions/

STARKs and general zero-knowledge proofs

In confidential transactions, we have a rangeproof attached to every output. This is an example of a zero-knowledge proof, specifically one that proves a value lies in a range. The purpose of this is to make sure that the hidden values aren't negative and that the sums aren't going to overflow.

But in general, it's possible to prove anything you want in zero knowledge. This has been known since the 1980s. Since the early 90s, it was known to be possible within the bounds of the known universe. More recently, since around 2013, there's been a tremendous amount of development in really practical general zero-knowledge proof systems.

STARKs were developed by Eli Ben-Sasson and others here in Tel Aviv. STARKs are very fast to verify. They are asymptotically very small: they grow with the logarithm of the size of your program execution. Unfortunately, they have large constants, like tens or hundreds of kilobytes; practical STARK proofs end up being 50 to 100 kilobytes. They are quantum secure, they are fast to verify, and they scale really well.

One application of STARKs if they were to be much smaller would be a quantum resistant rangeproof which would be interesting, which could lead to a confidential transaction proposal for bitcoin. But they are much more general than this. In theory, you could prove script conditions. You could hide your amount but also your spending conditions. You get something like taproot without all the tricks and the interaction and all these caveats about how sometimes you still need to reveal your scripts.

STARKs can also be used to basically do batch validation. You can produce a STARK proof of the validation of every single signature in the bitcoin blockchain for example. This is a little bit outside of feasibility but with special purpose ASICs you could probably produce such a proof. Now validation of every single signature comes down to validating this small STARK proof which is a few hundred kilobytes of size, rather than dealing with the 100s of gigabytes of data you presently need to do EC operations on. You could compress the whole blockchain signature validation workload into a single proof. We’re stretching what’s practical to encode; you would have to implement an entire script interpreter into a single zero-knowledge proof system which is very far away from our current ability to design programs.

It's promising tech, and it's practical for a lot of purposes even right now. It's mindblowing and surprising that we can even do this. There are lots of ways in the immediate future that these types of things could be used to improve the scalability of the current system, and they also interact pretty well with the other proposals. Utreexo proofs could be replaced with STARK proofs; the utreexo proofs are small, but STARKs could aggregate them. Also, you can do this piecemeal, and you don't need to get everyone to do it.

At some point, this will be mature and efficient enough that we will start seeing it in the blockchain and see real scalability improvements.

Sharding

We have two slides left, sharding and proof-of-stake. My two favorite things.

Sharding is the idea that we have the same coins and the same consensus, but we push them into different validator zones where transactions can happen that not everyone needs to verify. Sharding tries to overcome the challenge of "everyone needs to see everything". Generally speaking, we only have one really widely accepted way of doing sharding, and that's a federated sidechain with some multisig quorum that signs off on what happens off-chain. If the federation decides to steal the money, the mainchain has no way of knowing whether the federation is being honest or not.

There’s also SPV proofs, which are generally considered to be not that viable or too risky. I personally don’t think that miners should be trusted. Boiling any consensus down to an SPV proof makes me very nervous, I don’t think we should go down that road.

Another sharding proposal is plasma. I’m not comfortable with the security boundaries of plasma. Many of the plasma people will assert that the security model is identical to lightning which I don’t think is correct.

I’ve found that almost every sharding proposal can be broken by asking what happens if the shard does an information withholding attack. Just doing that analysis usually lets you figure out how to totally break the system.

Q: What about gmaxwell’s coin witness proposal? Use a STARK to prove the state of this sidechain without asking miners to vote or something like that.

A: There are a lot of fundamental technology steps that would need to happen for that.

A: The challenge on that one is not just information withholding, but double spending. If you have multiple proofs that are all valid, you can see multiple histories that transpired. How do we pick which one counts? If you combine sharding with STARKs, maybe you find some wiggle room to make it work.

Proof-of-stake

Proof-of-stake is incredibly popular in the altcoin world; it's almost unconditionally popular. It falls back to traditional byzantine fault tolerance models. In proof-of-stake, you see 51% assumptions, and things normally fall away from incentive compatibility. I'm just going to say there are many issues.

In 2012 and 2013 a lot of bitcoin developers did a deeper dive into proof-of-stake and identified a bunch of issues that transcend any particular implementation. Like holding on to your keys after you spend your coins: that's a fundamental issue. Andrew wrote two papers about the fundamental challenges with proof-of-stake. I think it's fair to say that since 2013 none of those issues have been addressed by proof-of-stake proposals.

Unfortunately, there are so many ways to approach proof-of-stake paradigms that it’s hard to talk through because once you dismantle any implementation everyone is like “well yeah that was the broken one”. At some point, I hope there’s a proof-of-stake implementation that everyone likes and we can attack it directly. But it’s just too much.

https://download.wpsoftware.net/bitcoin/pos.pdf

https://download.wpsoftware.net/bitcoin/alts.pdf

https://download.wpsoftware.net/bitcoin/asic-faq.pdf

Multiple blockchains

For scalability, you can have independent blockchains like bitcoin and litecoin. You don't need sharding or anything. As a dogecoiner, I don't need to care about what happens on the bitcoin blockchain. If we use cross-chain lightning, I receive doge, not bitcoin, and that's scalability. We can have on-chain transactions on multiple chains, and then each chain has a separate security domain and its own problems.


Statechains

Speakers: Ruben Somsen

Date: September 10, 2019

Transcript By: Bryan Bishop

Tags: Sidechains

Category: Conference

Schnorr signatures, adaptor signatures and statechains

https://twitter.com/kanzure/status/1171345418237685760

Introduction

If you want to know the details of statechains, I recommend checking out my talk from Breaking Bitcoin 2019 Amsterdam. I’ll give a quick recap of Schnorr signatures and adaptor signatures and then statechains. I think it’s important to understand Schnorr signatures to the point where you’re really comfortable with it. A lot of the cool stuff in bitcoin right now is related to or using Schnorr signatures. I’ll talk about adaptor signatures and I’ll tie it into how I am using adaptor signatures in statechains to have multiple transactions occur simultaneously.

Statechains

It’s a federated sidechain with a 2-of-2 channel between the “statechain entity” and the users. You transfer entire UTXOs, one chain each. There’s no split amounts. You have one statechain per UTXO. If you have 10 UTXOs, then you have 10 statechains.

It’s more secure than a federated sidechain in the sense that they can’t freeze your coins at a low threshold. With a federated sidechain, if some percent of the federation refuses to sign your pegouts, then your coins are frozen. But this is not the case in statechains because you have an on-chain transaction that you can use for redemption of the coins. Statechains have minimal complexity because it’s a linear blockchain moving from one person to one person to one person.

You can swap between chains and you can also swap coins into smaller denominations.

Money can get stolen if not atomic!

You need this to happen atomically. If one of these transactions doesn't go through, then this whole thing doesn't work and someone gets screwed. If it does work, then you can do things like coinswap. Also, there's good support for lightning channel creation, and this works with a form of a lightning network. You can create a lightning channel on top of statechains. Even if the statechain doesn't allow splitting of UTXOs, you can do it on a second layer like lightning.

Schnorr signatures

http://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/

Everything we can do with elliptic curve cryptography boils down to a few simple concepts. Ring signatures, confidential transactions, pedersen commitments, adaptor signatures, mimblewimble, they are all related to this math. Bulletproofs too, but that’s really complicated. All of these things are actually simple, if you know Schnorr signatures well. Don’t just try to understand it, try to grok it.

The only assumption that you have for elliptic curve cryptography is that you have these special numbers, curve points. You can only add and subtract these values, and nothing else. It’s a form of math where you have just those two operations.

We want to show that you can only calculate in one direction, and not be able to reverse the calculation. Something like 5A = E is bruteforceable because it's just xA = E with a tiny x: you can keep computing E - A, E - 2A, and so on, and do trial and error. Since we chose such a low entropy value, you get the answer quickly. If you pick a bigger number, then it becomes very difficult; it's effectively impossible because the calculation takes forever. Even all the computers in the world would not be able to do it. But we still need to be able to calculate forward. Isn't that going to be slow for big numbers? It isn't, because you can do a little trick where you double and then add, so 2A + 2A = 4A and then 4A + 4A = 8A and so on. This is a one-way function: the forward direction is fast, and the other way is impossibly slow.
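A sketch of that double-and-add trick, assuming an add(P, Q) function that implements the group operation and identity for its neutral element:

```python
def scalar_mult(k: int, P, add, identity):
    """Computes k*P in roughly log2(k) doublings plus additions."""
    result, addend = identity, P
    while k > 0:
        if k & 1:
            result = add(result, addend)
        addend = add(addend, addend)       # doubling: A -> 2A -> 4A -> 8A ...
        k >>= 1
    return result
```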

Private and public keys

We started out with a curve point. Everyone knows G. It’s a generator, a NUMS (nothing-up-my-sleeve) number. Then you pick your private key as a huge random number, and you multiply that by G and that’s how we get a public key. This is essentially a pseudonymous identity on the blockchain. You create a new key for each transaction.

How do you prove to somebody that you know a private key? You sign a message and then they have to verify the message. I am going to go through a broken method first, though. Say we pick another huge random number r with r*G = R. Then we calculate s = r + a where a is our secret. Then we give (R, s) to the verifier, who checks that s*G = R + A. This supposedly proves that you know the private key of A, which is the value “a”: there are two unknowns and you can’t know both. The flaw, though, is that you can calculate R in such a way that a is actually not part of it. Instead of an honest R you present R - A and hand over s = r, and the check still passes because (R - A) + A is just R. We can fix this flaw, and have the complete Schnorr signature protocol, by adding the hash of R to the mix: e = H(R) and s = r + e*a. Now it’s impossible to cheat because e = H(R) depends on R, so R effectively appears on both sides of the equation and you can no longer choose it to cancel out A, even in the variant where the R value isn’t given to the verifier directly.
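Here is a toy sketch of that fixed protocol in Python. It uses plain integers modulo a prime in place of real curve points, purely to show the algebra; it has none of the security of actual Schnorr, and deployed schemes (e.g. BIP340) also hash the public key and the message into the challenge.

import hashlib, random

p = 2**127 - 1          # stand-in modulus (toy only)
G = 7                   # stand-in generator; a "point" xG is just x*G mod p

def H(*vals):
    return int.from_bytes(hashlib.sha256(repr(vals).encode()).digest(), 'big') % p

a = random.randrange(1, p); A = a * G % p      # private key a, public key A

def sign(a, m):
    r = random.randrange(1, p); R = r * G % p
    e = H(R, m)                                # the fix: the challenge commits to R
    return R, (r + e * a) % p

def verify(A, m, sig):
    R, s = sig
    return s * G % p == (R + H(R, m) * A) % p

print(verify(A, "hello", sign(a, "hello")))    # True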

Adaptor signatures

https://diyhpl.us/wiki/transcripts/scalingbitcoin/stanford-2017/using-the-chain-for-what-chains-are-good-for/

https://diyhpl.us/wiki/transcripts/chaincode-labs/2019-08-16-elichai-turkel-schnorr-signatures/

https://diyhpl.us/wiki/transcripts/realworldcrypto/2018/mimblewimble-and-scriptless-scripts/

See also

https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-07-statechains/

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html

Antoine Riard

Date: February 6, 2020

Transcript By: Michael Folkson

Translation By: Blue Moon

Tags: Taproot, Lightning, Ptlc

Category: Conference

Media: https://www.advancingbitcoin.com/video/a-schnorr-taprooted-lightning,11/

Name: Antoine Riard

Topic: A Schnorr-Taproot’ed Lightning

Location: Advancing Bitcoin

Date: February 6, 2020

Slides: https://www.dropbox.com/s/9vs54e9bqf317u0/Schnorr-Taproot%27ed-LN.pdf

Introduction

Today: Schnorr and Taproot for Lightning. It is a really exciting topic.

Lightning architecture

The Lightning architecture, for those who are not familiar with it: you have the blockchain as the underlying layer. On top of it you build a channel, you have HTLCs, and people route payments towards you. If you want to get paid you send an invoice to the sender.

What should we design Lightning for?

What should we design Lightning for? When we work on the Lightning design specification we are investing a lot in it, and everybody has a different vision of what Lightning should be. Should Lightning be a fast payment transaction system? Should Lightning be optimized for microtransactions? Is Lightning really cool because you get instant finality on your transactions? Is privacy the reason we are doing Lightning? Lightning can have better privacy properties. When we talk about privacy for Lightning it is better to keep the privacy of the base layer in mind. On the base layer transactions are broadcast. There is an amount, it is not encrypted. There is an address, it is not encrypted. You are going to link inputs and outputs in the UTXO graph.

What is the privacy of the base layer?

Privacy on the base layer is not that great today. Lightning may be a way to solve privacy.

What is the privacy of Lightning?

But on Lightning there is a payment path. Lightning nodes have pubkeys tied to them and that is an identity vector. With HTLCs you may reuse a hash; there are a lot of different privacy vectors. Privacy is, in my opinion, very important if you want censorship-resistant money.

Why should we focus on privacy?

“Cryptography rearranges power, it configures who can do what, from what” - The Moral Character of Cryptographic Work (Rogaway)

If you don’t have privacy I can bribe you or blackmail you because I know how you are using this technology. That is a huge attack vector. There is an awesome paper by Phillip Rogaway. I encourage everyone to read it.

EC-Schnorr: an efficient signature scheme

Keypair = (x,P) with P = xG and ephemeral keypair (k,R) with R = kG

Message hash = e = hash(R | m) and Signature = (R,s) with s = k + ex

Verification: sG = R + eP

You can see Schnorr and Taproot as a privacy boost. The reason for modifying the consensus base layer, which is a lot of work and involves a lot of people, is that there has to be a strong motivation to do it. Schnorr is a replacement for ECDSA. Originally Satoshi did not introduce Schnorr in Bitcoin because there were some patent issues. Schnorr is really awesome because there is linearity in the Schnorr verification equation. Linearity means it is easy to add components. It is easy to add signatures, it is easy to add pubkeys and it is easy to add nonces across different parties.
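To make the linearity point concrete, here is a naive two-party aggregation sketch in Python, again using integers modulo a prime as stand-in “points”. It only shows that keys, nonces and partial signatures add up into one ordinary Schnorr signature; a real scheme such as MuSig/MuSig2 additionally has to defend against rogue-key and nonce attacks, which this toy ignores.

import hashlib, random
p, G = 2**127 - 1, 7
def H(*v): return int.from_bytes(hashlib.sha256(repr(v).encode()).digest(), 'big') % p

x1, x2 = random.randrange(1, p), random.randrange(1, p)   # the two private keys
P = (x1*G + x2*G) % p                                     # aggregated pubkey P = P1 + P2
k1, k2 = random.randrange(1, p), random.randrange(1, p)   # each party's nonce
R = (k1*G + k2*G) % p                                     # aggregated nonce R = R1 + R2
e = H(R, P, "msg")
s = ((k1 + e*x1) + (k2 + e*x2)) % p                       # partial signatures simply add
print(s*G % p == (R + e*P) % p)                           # True: verifies as one signature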

Taproot: a privacy-preserving script tree

Taproot pubkey: Q = P + tG with Q and P curve points

t is the root of a Merkle tree where every leaf is a hash of a script

The spending witness provides the Merkle proof and the script

The other big consensus upgrade proposal, nothing has been adopted by the network yet. Taproot is the idea of building a Merkle tree where every leaf of the Merkle tree is a script. You commit the root of the Merkle tree inside the pubkey. That is great. Now when you spend a Taproot output you have two options. The first option is a keypath spend. The other option is to reveal one of the scripts plus a Merkle proof. This Merkle proof lets the network verify that this script was committed to in the original scriptPubKey commitment, the pubkey the transaction is spending.
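As a rough illustration of the Merkle tree part, here is a short Python sketch that hashes a list of scripts pairwise up to a root. It deliberately simplifies BIP341: the real rules use tagged hashes and leaf versions, and the output key commits to H(P || root) rather than the bare root.

import hashlib
def sha(b): return hashlib.sha256(b).digest()

def merkle_root(leaf_scripts):
    level = [sha(s) for s in leaf_scripts]
    while len(level) > 1:
        if len(level) % 2: level.append(level[-1])
        # order each pair so the proof does not depend on left/right position
        level = [sha(min(a, b) + max(a, b)) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

scripts = [b"<A> OP_CHECKSIG", b"<B> OP_CHECKSIG", b"2 <A> <B> 2 OP_CHECKMULTISIG"]
t = int.from_bytes(merkle_root(scripts), 'big')
# The output key then commits to t (Q = P + tG in the slide's notation), and a
# spender reveals one script plus the sibling hashes proving it sits under t.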

New consensus properties

What are the new consensus properties of this upgrade? Linearity is the one we are going to use for this talk. With Taproot we get cheap complex scripts. Another advantage is that under the Taproot assumption, if everyone agrees and there is no disagreement, you can spend a Taproot output cooperatively so the script is never seen by any external observer.

More Schnorr-Taproot resources

There are BIP numbers for Schnorr, Taproot and Tapscript. I encourage you to read the BIPs. There are also more resources in AJ Towns’ review repo on GitHub (https://github.com/ajtowns/taproot-review).

Channel: “plaintext” close

P2WSH output: 0 <32-byte-hash>

Witness script: 2 <pubkey1> <pubkey2> 2 OP_CHECKMULTISIG

Right now you broadcast an onchain funding transaction. This funding transaction is a pay-to-witness-script-hash (P2WSH). When you close the channel every peer on the network sees that it was a 2-of-2. By revealing the script you leak that you were using Lightning. How can we solve this?

Schnorr Taproot - Channel: “discreet” close

Taproot output: 1 <32-byte-pubkey>

Witness: <MuSig-sig>

We can embed the script in a Taproot output. That way, if both parties agree to do a mutual close, you will not be able to dissociate this Lightning funding Taproot output from any other Taproot output.

Channel: worst case close

Going further, even if we do not agree, ideally the channel would not be visible to anyone. The blockchain cares about faithful execution of the contract, but ideally it would not learn the amounts, because the amounts are part of the contract.

Schnorr Taproot - Channel: shared commitment

I think you can go further with this idea. You can encode the commitment transaction in its own Taptree and every Tapscript would be an HTLC. This Tapscript would spend to a 2nd-stage transaction. This second-stage transaction would have two outputs: one output paying the HTLC and the other paying to the Taptree minus the spent Tapscript. I think maybe SIGHASH_NOINPUT would be a better fit for this construction, but there is a way to make the channel discreet. The blockchain should not learn that you are doing some kind of offchain construction.

HTLC: payment hash correlation

Every HTLC along the payment path reuses the same hashlock script i.e.

OP_HASH160 <RIPEMD160(payment_hash)> OP_EQUALVERIFY

Right now we are using a payment hash. Every HTLC along the payment path reuses the same hash. If you are a chain analysis company and you have spy nodes on the network, or you run big processing nodes and these nodes are part of the same payment path, they will be able to guess the “graph closeness” of the sender and the receiver. That is really bad because right now payment paths are quite short given the current topology. Ideally you would use a different hashlock for every hop.

Schnorr-Taproot: timelocked contract

partial_sig = sG = R + H(P | R | m)P

adaptor_sig = s’G = T + R + H(P | R | m)P with T the nonce tweak

secret t = adaptor_sig - partial_sig

There is a cool idea, scriptless scripts, from Andrew Poelstra, who spoke earlier today. With a scriptless script you tweak the nonce pubkey with a secret. When one of the parties is ready to claim the output they have to reveal the secret to unlock it.
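Here is a toy of the adaptor signature algebra above, using the same integer stand-in for curve points. It only demonstrates the slide's equations (the names s, s_, t, T are just local variables); the real construction sits inside a 2-of-2 MuSig exchange with proper nonce handling.

import hashlib, random
p, G = 2**127 - 1, 7
def H(*v): return int.from_bytes(hashlib.sha256(repr(v).encode()).digest(), 'big') % p

x = random.randrange(1, p); P = x*G % p     # signer's key
k = random.randrange(1, p); R = k*G % p     # nonce
t = random.randrange(1, p); T = t*G % p     # the secret and its point

e = H(P, R, "tx")
s  = (k + e*x) % p                          # partial_sig:  sG  = R + eP
s_ = (s + t) % p                            # adaptor_sig:  s'G = T + R + eP
print(s_*G % p == (T + R + e*P) % p)        # True
print((s_ - s) % p == t)                    # seeing both signatures reveals the secret t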

PTLC protocol: setup phase

(See diagram in slides)

The protocol works like this. You build a 2-of-2 aggregated pubkey. One of the parties sends a tweaked nonce pubkey. Alice sends a partial sig to Bob. Bob sends his partial sig… When Bob is ready to claim the output he has to reveal the secret. This is a way to atomically swap funds against a secret. You can reuse this primitive to build something like Lightning payment paths. PTLCs, point timelocked contracts, should be the replacement for HTLCs. There will be three phases. The first phase, setup: a curve point is sent to every party on the payment path.

PTLC protocol: update phase

(See diagram in slides)

The second phase is the update phase. You exchange partial sigs between every hop of the payment path.

PTLC protocol: settlement phase

(See diagram in slides)

The last phase is settlement. Dave reveals the secret which lets Carol learn her own secret, which lets Bob learn his own secret. Bob claims the PTLC from Alice. Alice learns the final secret. This final secret can be reused to solve other problems.
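A much-simplified two-hop sketch of the point algebra behind this, with the usual integer stand-in and no onions or adaptor machinery: Alice gives Bob a blinding scalar r, so the lock on Bob's incoming PTLC is unlinkable to the one he offers Dave, yet Dave's reveal still lets Bob claim upstream.

import random
p, G = 2**127 - 1, 7

z = random.randrange(1, p); Z = z*G % p     # Dave's (receiver's) secret and point
r = random.randrange(1, p)                  # per-hop blinding Alice gives to Bob

T_out = Z                                   # Bob -> Dave lock point
T_in  = (Z + r*G) % p                       # Alice -> Bob lock point (decorrelated)

# Dave claims from Bob by revealing z; Bob adds his blinding r to claim from Alice.
print(z*G % p == T_out)                     # True
print(((z + r) % p) * G % p == T_in)        # True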

Invoices: proofs of payment

Right now, when you make a payment on the network, you learn the preimage. The preimage can be used as a proof of payment. But it does not tell you who the original sender was. Every hop on the payment path can claim in front of a judge “I was the one who made the payment. I have the preimage.” Even if the invoice can be presented as well, you cannot tell the parties on the payment path apart.

Schnorr Taproot invoices: proof of payment

By reusing the z value (zG has been signed by the receiver) from the PTLC protocol you can have a unique secret value. This unique secret value is only going to be learned by the original sender. That could be great because you could use it to trigger a second-stage contract or some kind of consumer-protection escrow, something like that.

Onion packet: simple payment or MPP

MPP was presented by Joost. Right now MPP is great for solving liquidity issues, but it can be a privacy weakness, because you may be able to intersect the payment paths across the different MPP shards used if a spy node is part of all the MPP payment paths. Ideally you would use a different value for each payment path.

Schnorr Taproot onion packet: discrete log AMP

There is the idea of using the same cryptographic trick of Schnorr linearity. Before setting up the payment path, Alice the sender tweaks the curve point received from Dave, the last hop of the payment path, by her own secret. She sends shards of the secret through the onion to every party on the atomic payment path. Only when all of them are locked in, at the last hop, is it possible to combine the shards of the secret and claim the payment.

HTLC: stuck payments

Right now there is another problem being discussed on the mailing list. If you send a payment, one of the hops on the payment path may be offline or unavailable. To cancel the payment and try to send another one you first have to wait for the HTLC timelock to expire so the funds come back to the original sender. Ideally the sender would be able to cancel the payment without having to wait.

Schnorr Taproot HTLC: cancellable payments

You can do this again thanks to the PTLC construction. The last secret is only revealed by Alice once Dave, the receiver of the funds, acknowledges that he received every payment packet. If you do this it is really cool because it can let you build higher-level protocols, some kind of forward error correction. The idea is that you send more packets than are needed to fulfil the payment. Thanks to this the UX improves, because if one of the packets fails you still have more packets to pay the payee.

HTLC: simple hash timelocked contract

The last thing we can also build thanks to Schnorr… Right now HTLCs are pretty cool but they are quite simple. There is only one timelock, there is only one hash. Maybe people are interested in having different hashes. One of the hashes is provided by an arbiter. It can be an arbiter for any contract. I am Alice, I am interested in getting some goods shipped. Today I am funding a payment but I never receive the goods. You can insert an escrow into your HTLC. Doing this would mean that every hop on the payment path has to support the advanced HTLC. Worse, they are going to learn the semantics of the contract.

Schnorr Taproot: end-to-end payment point contracts

What you can do instead is have payment point constructions. The idea is that you keep using scriptless scripts but you add other primitives thanks to key aggregation or ECDH. You can also do DLCs, which are nothing more than a curve point. We may be able to build a wider class of HTLC packets or conditional payment packets. I foresee that in a few years people will do futures or options on Lightning. This class of payments is going to be confidential. Only the endpoints are going to know about it.

Protocol framework, no silver bullet, lots of tricks

Schnorr and Taproot are not a silver bullet. There are a lot of other leaks, like when you do channel announcements in Lightning: right now you are doxxing yourself by linking a Lightning pubkey identity and an onchain UTXO. In a few years people are going to wake up and say “this Lightning pubkey was tied to a domain name.” Then you will be able to link a domain name to an onchain UTXO, which is really bad. Even if we do PTLCs for the payment path we still have issues with the CLTV delta being the same at every hop. Also, the amount stays the same minus the Lightning fees at every hop. Ideally we would implement other tricks, like randomized CLTV delta routing algorithms, or padding the payment path to always use 10 or 20 hops, even if that is more expensive. That may be better for privacy. Right now people are working on dual-funded channels for Lightning. We may do a coinjoin for every funding transaction, which would be really cool. Schnorr and Taproot are going to take more than a year to integrate into Lightning. This will only be the beginning of building really consistent privacy for Lightning.

The application side, building the first private applications

Privacy is going to be the default in Lightning, I hope. If you are building applications on top of this you should take a holistic approach and think “I have this Lightning protocol that gives me a lot of privacy. I will try not to break the privacy of my application’s users.” You should think about Tor integration, identity-less login or identity-less tokens, that kind of thing. I think it is a challenge for application developers building on top of Lightning, but I think it is worth it. I am excited: Schnorr and Taproot have been proposed as BIPs and should be soft forked into the protocol if the community supports it. If you are interested in contributing to Lightning you are really welcome.

Thanks to Chaincode

Thanks to Chaincode for supporting this work. Thanks to Advancing Bitcoin.

Q&A

Q - How do you see Taproot being implemented in Lightning? Is it still Lightning?

A - There are several ways. First you can integrate Taproot for the funding output. Then you can use Taproot for the HTLC output part of the commitment transaction. You can also use Taproot for the output of the second-stage HTLC transaction. There are at least multiple outputs in Lightning that you can address. I think the first one is to fix the funding output, because if you do this we benefit from the Taproot assumption. Using Taproot for the commitment transactions is still going to leak that you are using Lightning. Maybe we could use the pool construction that was being discussed, but that is somewhat harder. I would pursue the funding output first.

Q - You said that Lightning has privacy guarantees in its protocol but that developers need to make sure they do not ruin the privacy guarantees on top of the base Lightning protocol. Do you see a trend of applications taking shortcuts on Lightning and ruining privacy?

A - Yes. Right now there is this idea of trampoline routing, which is maybe great for user experience but on the privacy side it is broken. What gives us a lot of privacy in Lightning is source routing. Going to trampoline routing means the person doing the trampoline routing for you is going to know who you are, if you are using one hop, and worse, is going to know who you are sending funds to. There is trampoline routing, and if you are not using privacy-preserving Lightning clients… Nobody has done a real privacy study of Lightning clients. Neutrino, bloom filters, nobody has done real research. They are not great, there are privacy leaks if you use them. There are Lightning privacy issues and there are base layer privacy issues. If you are building an application you should keep all of them in mind. It is really hard. Using the node pubkey, I do not think that is great. I would like rendez-vous routing to be done in Lightning to avoid announcing my pubkey, having my invoice tied to my pubkey and my pubkey being part of Lightning. And the channel announcement of course. I hope at some point we have some kind of proof of ownership so you can prove you own a channel without revealing which UTXO you own.

Stepan Snigirev

Date: June 29, 2019

Transcript By: Bryan Bishop

Translation By: Blue Moon

Tags: Hardware wallet, Security problems

Category: Meetup

Media: https://www.youtube.com/watch?v=rK0jUeHeDf0

https://twitter.com/kanzure/status/1145019634547978240

See also:

Background

A bit more than a year ago I went through Jimmy Song’s Programming Blockchain class. That is where I met M, who was the teaching assistant there. Basically, you write a bitcoin library in python from scratch. The API of this library and the classes and functions Jimmy uses are very easy to read and understand. I was happy with that API, and what I did next was, I wanted to move out of quantum physics to work on bitcoin. I started writing a library with a similar API - I took an arduino board and wrote a similar library with the same features, plus extras like HD keys and a few other things. I wanted to make a hardware wallet that was easy to program, inspired by Jimmy’s class. That is when I met and started CryptoAdvance. Then I made a refactored version that does not require arduino dependencies, so now it works with arduino, Mbed, bare metal C, with a real-time operating system, etc. I also plan to do a micropython binding and an embedded rust binding.

Introduction

Today I am only considering three hardware wallets: Trezor, Ledger and ColdCard. Basically everything else has an architecture similar to one of these. You may know that Trezor uses a general purpose microcontroller, like the ones used in microwaves or in our cars. These are in more or less every device out there. They made the decision to use only this, without a secure element, for a few reasons: they wanted to make the hardware wallet fully open source and be able to say with certainty what is running on the hardware wallet. I do not think they will have any secure element in the hardware wallet unless we develop an open source secure element. I think our community could make that happen. Maybe we could cooperate with Trezor and Ledger and at some point develop a secure element based on the RISC-V architecture.

Secure elements

I think we need several kinds of secure elements for different security models. You want to diversify risk. You want to use multisig with Schnorr signatures. You want different devices with different security models, and ideally each key should be stored on different hardware, with different security models on each one as well. How will vulnerabilities appear? It could be a badly designed protocol; hopefully you will not have a bug there, but sometimes hardware wallets fail. It could be a software vulnerability where the people who wrote the software made a mistake, like overflows or implementation bugs or some not very secure cryptographic primitives that leak information through a side channel. The actual hardware can be vulnerable to hardware attacks, like glitching. There are ways to make microcontrollers not behave according to their specification. There can also be hardware bugs, which happen from time to time, simply because the chip manufacturer can also make mistakes - most chips are still designed not automatically but by humans. When humans place transistors and optimize this by hand, they can make mistakes too. There is also the possibility of government backdoors, and that is why we want an open source secure element.

Some time ago there was talk about the instructions in x86 processors, where basically they have a specific instruction set that is not documented and… they call it Appendix H. They share this appendix only with trusted people ((laughter)). Yeah. These instructions can do weird things, we do not know exactly what, but one guy was able to find all the instructions. He was even able to escalate privileges from user level, not just to root level but to ring -2 level, full control that even the operating system does not have access to. Even if you run Tails, it does not mean the computer is stateless. There is still a lot of crap running below the operating system that your operating system does not know about. You should be careful with that. On Librem computers you not only have PureOS but also Qubes that you can run, and they also use the – which is also open – basically you can verify that it is booting the real Tails. The Librem tool is called Heads, not Tails. You should look at that if you are particularly paranoid.

Librem computers have several options. You can run PureOS, or you can run Qubes or Tails if you want. The Librem Key checks the bootloader.

Decapping

Ledger hardware wallets use a secure element. They also have a microcontroller. We need to talk about two different architectures. Trezor only uses a general purpose MCU. Then we have ColdCard, which uses a general purpose MCU and additionally adds a secure element… I would not call it a secure element, but it is secure key storage. It is a part available on the market; the ColdCard guys were able to convince the manufacturer to open source this secure key storage device. So we hope we know what is running on the microcontrollers, but we cannot verify it. If they give us the chip we cannot verify it. In theory we could do decapping. With decapping, imagine a chip: you have some epoxy over the semiconductor and the rest of the space is just wires going into the device. If you want to study what is inside the microcontroller, what you do is put it in a laser cutter, first make a hole in the device, and then you can apply nitric acid heated to 100 degrees and it will dissolve all the plastic around it. Then you have a hole down to the microcontroller, and you can put this under an optical microscope or an electron microscope or whatever you have and really study the whole surface there. There was an ATmega that someone decapped and it was still able to run. There was a talk at defcon where guys showed how to make DIY decappers. You could take a Trezor or another hardware wallet, put it in a mount, and then just aim the stream of nitric acid at the microcontroller; it dissolves the plastic but the device itself can still work. So while you are doing some cryptography there, you can get down to the semiconductor level, put it under the microscope and watch exactly how it works. Then, when the microcontroller is running, you can watch it - not just with the microscope, but also because when a transistor switches from 0 to 1 there is a small chance it emits a photon. So you can observe the emitted photons and there is probably some information about the keys given away by that. Eventually you could extract the keys. You cut away most of the plastic and then apply the nitric acid to get to the semiconductor level. In this example the guy was looking at the input and output buffer of the microcontroller. You can also look at the individual registers. However, it is slightly different for secure elements or secure key storage. They put engineering effort in on the hardware side to make sure it is not easy to do any decapping. When cryptography is happening on the secure element, they have certain regions that are dummy parts of the microcontroller. So they are operating and doing something, but they are trying to mislead you about where the keys are. They have a bunch of other interesting things in there. If you are working with security-focused chips then it is much harder to figure out what is going on in there. The other thing is that on the ColdCard the key storage device is quite old, so that is why the manufacturer was more willing to open it up. If we are able to see what is running there, that means an attacker will also be able to extract our keys from there.
So being able to verify the chips also shows that it is not secure for users. So being able to verify with decapping might not be a good thing. So it is complicated.

Secure key storage element (not a secure element)

Normally the microcontroller asks the secure storage to hand over the key so it can be moved to the main microcontroller, then the cryptographic operations happen, and then it is wiped from the microcontroller’s memory. How do you get this key out of the secure key storage? Obviously, you have to authenticate. On the ColdCard you do it with a PIN code. How do you expect the wallet to behave when you enter a wrong PIN code? On Trezor, a counter is increased and the delay between PIN entries grows; or like on Ledger, where you have a limited number of entry attempts before your secrets are wiped. ColdCard uses the increasing-delay mechanism. The problem is that this delay is enforced not by the secure element but by the general purpose microcontroller. To guess the correct PIN code, if you are somehow able to prevent the microcontroller from increasing the delay, then you would be able to brute force the PIN code. To communicate with the key storage, the microcontroller has a secret stored here, and whenever it uses that secret to talk to the key storage, the key storage will respond. If the attacker can get this secret, they can throw away the microcontroller, use their own equipment with that secret, and try all the PIN combinations until they find the right one. This is because of the choice they made where you can basically have any number of attempts at the PIN code against the secure element. This particular secure key storage does have an option to limit the number of PIN entry attempts. But the problem is that it is not resettable. It means that over the whole lifetime of the device you can have a certain number of wrong PIN entries, and when you hit this limit you can throw away the device. This is a security trade-off they made. I would actually prefer to set this limit to, say, 1000 PIN codes, or 1000 attempts, and I doubt I will fail to enter the PIN code 1000 times. In the case of the ColdCard, they use a nice approach for PIN codes where they split it into two parts. You can use an arbitrary length, but they recommend something like 10 digits. They show some verification words, which help you verify that the secure element is still the right one. So if someone swaps your device for another one, in an evil maid attack, you would see that the words are different and you can stop entering the PIN. In fact, there was an attack on the ColdCard to brute force it. The words are deterministic from the first part of your PIN code. Maybe you try several digits, write down the words, make a table of this, and then when you do the evil maid attack you load that table so it shows the expected information to the user. You have to brute force the first digits of the PIN to get those words, but the later words are hard to figure out without knowing the PIN code.
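A toy model of the difference being described, with hypothetical names: if the retry limit lives inside the key storage chip itself it cannot be skipped, whereas a delay that only the general purpose MCU enforces (a sleep between attempts) disappears once an attacker extracts the MCU’s pairing secret and talks to the chip directly.

class SecureKeyStore:
    # Toy model: the attempt counter is kept and decremented by the chip itself.
    def __init__(self, pin, key, max_attempts=1000):
        self._pin, self._key = pin, key
        self._attempts_left = max_attempts      # not resettable over the device lifetime

    def get_key(self, pin_attempt):
        if self._attempts_left == 0:
            raise RuntimeError("key storage permanently locked")
        if pin_attempt != self._pin:
            self._attempts_left -= 1            # enforced inside the secure chip
            return None
        return self._key

# Contrast: if the host MCU merely sleeps between guesses, an attacker holding the
# MCU's pairing secret can query the key store at full speed and brute force the PIN.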

La criada malvada ataca

No importa el monedero de hardware que tengas, incluso un Ledger con el mejor hardware - digamos que lo cojo y lo pongo en mi habitación y pongo un dispositivo similar allí que tiene una conectividad inalámbrica con el ordenador de un atacante, entonces es un puente al Ledger real, y puedo tener esta comunicación bidireccional y hacer un ataque man-in-the-middle. Lo que sea que muestre el Ledger real, puedo mostrarlo en este dispositivo y engañar al usuario. Puedo convertirme en el último hombre en el medio aquí. Cualquier cosa que el usuario ingrese, puedo ver lo que ingresa y conozco el código PIN y entonces puedo hacer todo. Hay dos maneras de mitigar este ataque - se puede utilizar una jaula de Faraday cada vez que está utilizando la cartera de hardware, y la segunda manera es hacer cumplir en el hardware y creo que los chicos Blockstream sugirió esto - usted tiene un cierto límite en la velocidad de la luz, por lo que no puede comunicarse más rápido que la velocidad de la derecha? Tu dispositivo está aquí. Si te estás comunicando con este dispositivo aquí, entonces puedes obligar a que la respuesta llegue en unos pocos nanosegundos. Entonces es muy poco probable que - no es posible por las leyes de la física para obtener la señal a su dispositivo real y luego volver.

What if you use an encrypted communication channel with the hardware wallet? The attacker is trying to get the PIN code. It is still vulnerable. Instead of entering the PIN code on the device, you use a computer - you have your hardware wallet connected to it, and in the potentially compromised situation we have this malicious attacker who has wireless connectivity to the real hardware wallet. Say our computer is not compromised. Over this encrypted channel you can get a mask for your PIN code, like some random numbers that you should add to your PIN code before entering it on the device. Your MITM doesn't know this unless he also compromises your computer. Or a one-time pad situation: every time you enter the PIN code, you enter the PIN code plus this number. It could work with a one-time pad. When you are operating your hardware wallet, you probably want to sign one particular transaction. If the attacker is able to replace this transaction with his own, and show you your own transaction, then you are screwed. But if the transaction is passed encrypted to the real hardware wallet, then he cannot replace it because it would fail authentication.

Ephemeral disposable hardware wallets

Another way to get rid of secure key storage is to make the whole hardware wallet ephemeral, so you focus on storing the seed and the passphrase and you enter it every time. The hardware wallet is then disposable. You may never use that particular hardware wallet again. I was thinking about this with respect to secure mnemonic generation on a consumer device. If you have a disposable microcontroller, and we are confident that everything else has no digital components or memory, then each time we could swap the microcontroller for a new one and just smash the old one with a hammer. If you already remember the mnemonic - well, having a PIN discourages you from memorizing it. If you use disposable hardware, and it is not stored in the vault, then when someone gets into the vault they see nothing and they don't know what your setup actually is.

Offline mnemonic generation and Shamir secret sharing

I prefer 12-word mnemonics because they are easier to remember and still have good entropy, like 128 bits. I can still remember 12 words. I would back it up with a Shamir secret sharing scheme. We take our mnemonic and split it into shares. If you remember the mnemonic, then you don't need to go recover your shares. Not all shares are required to recover the secret; you can configure how many shares are required.
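
As a toy illustration of the threshold idea (a minimal sketch over a prime field, not the SLIP-39 wordlist encoding a real wallet backup would use), here is 2-of-3 Shamir splitting of a small secret:

```python
import random

P = 2**127 - 1   # a Mersenne prime, large enough for a toy 16-byte secret

def split_2_of_3(secret):
    """Split `secret` into 3 shares, any 2 of which recover it.
    Degree-1 polynomial f(x) = secret + a1*x means the threshold is 2."""
    a1 = random.SystemRandom().randrange(1, P)
    return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any 2 shares."""
    (x1, y1), (x2, y2) = shares
    inv = pow((x2 - x1) % P, -1, P)
    return (y1 * x2 * inv - y2 * x1 * inv) % P

secret = 0xDEADBEEF
shares = split_2_of_3(secret)
assert recover(shares[:2]) == secret
assert recover([shares[0], shares[2]]) == secret
```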

If you are worried about your random number generator, then you should add user entropy. If you have a tampered random number generator but you are adding user entropy anyway, then it doesn't matter. Older laptops can be a problem, like 32-bit machines, and wallets may no longer support 32-bit CPUs or something. If your wallet was written for python2.6... now you have to write something to handle big integers, etc. That is a lot of bitrot in just 5-7 years.

As for offline mnemonic generation, or secure mnemonic generation... use a dartboard, use dice - do you trust yourself to generate your own entropy? I am thinking of something like this: we have true random number generators on the chips; we can ask the chips for random numbers, then use those numbers and show them to the user, the word and the corresponding index. We also know that circuits degrade over time, so random number generators could become compromised in predictable ways based on hardware failure statistics.

Verifying entropy

You can roll some dice and then take a photo. If I am building a malicious hardware wallet and I want to steal your bitcoin, this is by far the easiest way. Maybe the hardware is great, but the software I am running is not actually using that hardware random number generator. Or maybe there is a secret known to the attacker. You wait a few years, and then you have a nice retirement account. You could also apply a secure protocol that lets you use a hardware wallet even if you don't trust it. You can generate a mnemonic with user entropy and verify that the entropy was actually used.
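
A minimal sketch of the "verify the entropy was used" idea (my own illustration, not any specific wallet's protocol): the device commits to its internal entropy, the user supplies their own, and the final seed is a hash of both, so the host can recompute it and detect a wallet that quietly ignored the user's contribution.

```python
import hashlib, os

# Device side (could be malicious): commits to its entropy before
# it ever sees the user's contribution.
device_entropy = os.urandom(16)
commitment = hashlib.sha256(device_entropy).hexdigest()

# User side: entropy from dice rolls, a dartboard, etc.
user_entropy = b"31415926535897932384"   # example dice digits

# Both sides derive the seed the same way.
seed = hashlib.sha256(device_entropy + user_entropy).digest()

# Verification on the host: ask the device to reveal device_entropy,
# check it against the earlier commitment, and recompute the seed.
assert hashlib.sha256(device_entropy).hexdigest() == commitment
assert hashlib.sha256(device_entropy + user_entropy).digest() == seed
```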

Signing that leaks private keys due to nonces chosen by the hardware wallet

If I am still an evil hardware wallet manufacturer, and the community forced me to include this entropy verification mechanism, and I also want the community to like me, then we say we have a fully airgapped solution with QR codes and cameras... and we say you can use any software wallet you want because it works with everything. Now you have a setup like this: say you have an uncompromised computer, only a compromised hardware wallet that was manufactured with evil intent. You prepare the unsigned transaction here, you pass it to the hardware wallet using the QR codes. The hardware wallet then shows you the information and you verify that everything is correct, and you verify it on the computer as well. Then you sign it, and you get back the signed transaction. So you get a perfectly valid bitcoin transaction that you verify is correct and you broadcast it to the network. It's the nonce attack. Yes, exactly. The problem is that a signature in bitcoin consists of two numbers, and one of them involves a random nonce. Our hardware wallet can pick a deterministically derived nonce or a random nonce to blind the private key. If this nonce is chosen insecurely, either because you are using a bad random number generator that is not producing uniformly random values, or because you are evil, then just by looking at the signatures I will be able to learn information about your private key. Something like this happened with yubikey recently. There were FIPS-certified yubikeys and they were leaking your private keys within 3 signatures. The stupid thing is that they introduced the vulnerability by mistake while preparing their device for the certification process, which asks you to use random numbers - but you don't actually want to use random numbers: you want to use deterministic nonce derivation because you don't trust your random numbers. You can still use the random numbers, but you should use them together with deterministic nonce derivation. Say you have your private key that nobody knows, and you want to sign a particular message... ideally it should be HMAC-something... This is the nicest way, but you cannot verify that your hardware wallet is actually doing this. You would need to know your private key to verify it, and you don't want to put your private key on some other device. Also, you don't want your device to be able to switch to malicious nonce generation. You want to make sure your hardware wallet cannot pick arbitrary random nonces. You can force the hardware wallet to use not just the numbers it likes, but also numbers that you like.
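
A minimal sketch of deterministic nonce derivation in the spirit of RFC 6979 (simplified here to a single HMAC, so it is an illustration of the idea rather than the exact standard): the nonce depends only on the private key and the message, so the same message always yields the same nonce and a broken or malicious RNG cannot bias it.

```python
import hashlib, hmac

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def deterministic_nonce(private_key: int, msg_hash: bytes) -> int:
    """Simplified RFC 6979-style nonce: HMAC over (key, message).
    Real implementations run the full HMAC-DRBG loop from the RFC."""
    key_bytes = private_key.to_bytes(32, "big")
    k = int.from_bytes(hmac.new(key_bytes, msg_hash, hashlib.sha256).digest(), "big")
    return k % N or 1   # avoid the degenerate nonce 0

msg = hashlib.sha256(b"send 1 BTC to Alice").digest()
assert deterministic_nonce(12345, msg) == deterministic_nonce(12345, msg)
```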

You could send the hash of the random number you are going to use, such that the hardware wallet doesn't need to use an RNG; it can use a deterministic algorithm to derive this R value. It would be a fairly secure communication scheme in either case, whether the software or the hardware is compromised, but not if both are compromised at the same time. One of them should be fine.

Currently, all hardware wallets ignore it. I am preparing a proposal to include this field in bip174 PSBT. Our hardware wallet will definitely support it. I want to build a system where you don't need to trust the hardware wallet too much, even if it is compromised or has bugs. Every hardware wallet gets hacked from time to time.

With fake keys, you can check that the signatures are generated deterministically and convince yourself that this is the case, and then maybe you feel safe with the hardware wallet. But the switch from this deterministic algorithm to a malicious one can happen at any time. It could be triggered by a firmware update or some phase of the moon; you can't be sure.

Another solution is that you can use verifiable generation of these random nonces; I think this was proposed by Pieter Wuille. For this you need a particular hash function that supports zero-knowledge proofs that you were using this algorithm, without exposing your private keys. The problem here is that this is a very heavy computation for a microcontroller, so you are probably not going to fit it on a microcontroller.

There is also an idea about using sign-to-contract as a measure against nonce covert channels.

Ledger and multisig

If you have p2sh multisig and this wallet was only one of the signers, then even if it was malicious it doesn't control all the keys... and ideally you are using different hardware for the other keys. The problem with multisig is that... well, Trezor supports it quite well. I am very happy with Trezor and multisig. ColdCard released firmware that supports multisig a day ago. Ledger has a terrible multisig implementation. What I would expect from the wallet is that it shows when you are using multisig; you want to see your bitcoin address, and you want to see what amount you are actually sending or signing. What is the change output? What are the amounts? With Ledger multisig, you always see two outputs and you don't know which one you are spending and which one is the change address, if any. With two Ledgers in a multisig setup, you are less secure than with a single Ledger. If anyone wants to make a pull request to the Ledger bitcoin app, please do. It's there; people have been complaining about this issue for more than a year, I think. Ledger is not very good at multisig.

Practicality

I know of some supply chain attacks, but those relied on users doing stupid things. I am not aware of any attacks targeting the hardware itself. Right now the easiest way to attack someone's hardware wallet is to convince them to do something stupid. I think at the moment there are not enough hackers working in this field. But the value is going to increase. There have definitely been attacks on software wallets.

walletrecoveryservices.com sells something like this as a hardware wallet recovery service. The editor of Wired magazine lost his PIN code or something, and it took a lot of work to get the data off the device. So that is not a malicious attack, but it is still an attack.

You should be wary of unbranded, closed-source, third-party hardware devices. How can you trust any of this?

Right now it may be easier to get cryptocurrency by deploying malware, or by launching a new altcoin or something, or hard-forking another altcoin. Those are the easier ways. Right now it is easy to target lightning nodes and take the money there; you know their public addresses and how much money they have, so you know that if you can get to the server then you can recoup your attack cost. So those targets are much more obvious and easier to attack at this point. There are easier remote attacks at this point than attacking hardware wallets.

Downsides of microcontrollers

https://btctranscripts.com/breaking-bitcoin/2019/extracting-seeds-from-hardware-wallets/

How do secure elements work? It looks like a silver bullet that does everything, right? I can tell you the difference between regular microcontrollers and secure elements. Regular microcontrollers are made for speed and efficiency and they are easy to develop for. There are so-called security bits which you set when you are done programming the microcontroller. When the microcontroller boots, what you would imagine is that it boots with no-read-no-write permissions, then checks the security bits to see whether you should be able to talk to it, and only then allows read/write access. But sometimes it is done the other way around, where the microcontroller starts in open read-write mode, then checks the security bits, and then locks itself down. The problem is that if you talk to the microcontroller before it has been able to read those bits, you might be able to extract a single byte from the microcontroller's flash memory. You can keep doing this by resetting it over and over; if you are fast and the microcontroller is slow, you can do this even faster. I think this is what Ledger refers to in all their talks: this "unfixable attack" on all the microcontrollers like Trezor's and others'. I think it is related to this, because this is exactly what is broken by design and cannot be fixed, simply because the system evolved that way. No, they don't need to use low temperature here. They just need to be faster than the microcontroller, which is easy because the microcontrollers used in hardware wallets run at 200 MHz or so. So if you use a GPU or a modern computer then you could do something.

So the threat is that you can read the microcontroller's memory before it gets locked down? The problem is that you can read the whole flash memory. This means that even if it is encrypted, you have a key somewhere to decrypt it. What is stored in the flash memory? What is being protected here? Some devices have secret keys. Every IoT device probably has your wifi password. There are lots of different secrets you might want to protect. The decryption key could, in theory, be stored somewhere else. It seems the threat is that the microcontroller can expose the data written to it, and maybe you care about that because it is proprietary data or a secret or a plaintext bitcoin private key, and then that is a big problem. If you have a microcontroller in some computing device that you programmed yourself... this threat seems less interesting. Yes, that's why most people don't worry about the microcontroller. On consumer devices, people usually do even easier things. They forget to disable the debug interface, and then you have full direct access to the microcontroller with JTAG or something. So you can read the flash memory and these other things, and reprogram it if you want.

There is also the JTAG interface. And there is another standard, the Serial Wire Debug (SWD) interface. These interfaces are used for debugging. During development they let you see and fully control the whole microcontroller: you can set breakpoints, you can watch memory, you can toggle all the pins around the device. You can do whatever you want using these interfaces. There is a way to disable this, and that is what hardware wallet manufacturers do. Or another security bit - but again the security bit is not checked at boot time but a little later, so it is another race condition. Ledger forgot to disable the JTAG interface on the microcontroller that drives their screen, some time ago. But they still had a secure element, so it wasn't a big deal. Yes, disabling it is a software thing. All the security bits are software measures. You just set the flags in your firmware to disable certain features.

Also, regular microcontrollers are designed to work under certain conditions: in a temperature range from here to here, with a supply voltage of, say, 1.7 volts plus or minus 0.2 volts, and with a clock speed within a certain range. What happens if this environment changes? You can get undefined behavior, which is exactly what the attacker wants. What this means is that a microcontroller operated beyond those limits might skip instructions or make wrong calculations; it can do many different things. It can also reset. It can skip instructions, or compute things incorrectly.

As an example, one of the attacks on Trezor hardware wallets that used this stuff was... when you connect the Trezor to my computer over USB, the computer asks the device: who are you and how can I help you? What kind of device are you? The Trezor says "I am a Trezor, model such-and-such", for example. Then what the attacker was able to do - even before you unlock your hardware wallet, since this data is sent to the computer... Trezor is basically checking what length it should send to the computer... and this length is computed during certain instructions. If you glitch the microcontroller at exactly this moment, and make this calculation produce something random, larger than the length the hardware wallet is supposed to send, you get not just the Trezor model information, but you could also get the mnemonic and the full contents of memory. The model information was stored right next to the mnemonic. They have fixed this, though. Right now you have to unlock the Trezor with the PIN; it doesn't send any data until it is unlocked. There is a non-readable part of memory that the microcontroller cannot read; so if there is an overflow, it will throw an error and not be able to read those bytes. So this is also a good approach. In general, they have made a lot of fixes recently. This was a software fix, not a hardware fix.

The hardware wallet's mnemonic phrase was stored in plaintext, and the PIN verification was vulnerable to a side-channel attack. Another big class of attacks on microcontrollers is side-channel attacks. When microcontrollers compare numbers, they can leak information simply by consuming different amounts of power, or by taking slightly different amounts of time to compute something. Trezor was vulnerable to this too, some time ago, in particular in the PIN code verification. They were verifying it by taking the entered PIN and comparing it against a stored PIN. This comparison was consuming different cycles; different patterns were causing different behavior - by observing this side-channel emission from the microcontroller, LedgerHQ was able to distinguish between different digits in the PIN. They built a machine learning system to distinguish between them, and after trying 5 different PINs this program was able to tell your real PIN. 5 PINs was still feasible in terms of the delay, so it can be done in a few hours. This was also fixed. Now PIN codes are not stored in plaintext; instead the PIN is used to derive a decryption key which decrypts the mnemonic phrase. This way is nicer because even if you enter a wrong PIN, the decryption key is wrong, then you can't decrypt the mnemonic, and it isn't vulnerable to this kind of side-channel attack.
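
A minimal sketch of the "derive a key from the PIN instead of comparing PINs" approach (my own illustration; real devices use hardware KDFs, device-unique secrets and a proper AEAD cipher): there is no stored reference PIN to compare against, so a wrong PIN simply produces a key that fails to decrypt.

```python
import hashlib, hmac, os

def pin_to_key(pin: str, salt: bytes) -> bytes:
    # Slow KDF; a real device would also mix in a device-unique secret.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

def encrypt_mnemonic(mnemonic: bytes, pin: str):
    salt = os.urandom(16)
    key = pin_to_key(pin, salt)
    stream = hashlib.sha256(key + b"stream").digest() * 4   # toy stream cipher
    ct = bytes(a ^ b for a, b in zip(mnemonic, stream))
    tag = hmac.new(key, ct, hashlib.sha256).digest()        # detects a wrong PIN
    return salt, ct, tag

def decrypt_mnemonic(salt, ct, tag, pin: str):
    key = pin_to_key(pin, salt)
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("wrong PIN: derived key does not authenticate")
    stream = hashlib.sha256(key + b"stream").digest() * 4
    return bytes(a ^ b for a, b in zip(ct, stream))

blob = encrypt_mnemonic(b"zoo zoo zoo ... wrong", "123456")
assert decrypt_mnemonic(*blob, "123456") == b"zoo zoo zoo ... wrong"
```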

On the hardware side, there are race conditions, side-channel attacks, manipulation of the operating environment, and debug interfaces for microcontrollers. You can also decap the chip and shoot it with lasers or something, and make it behave strangely. So all of this can be exploited by an attacker.

Back to secure elements

On the other hand, what does a secure element do? They are similar to microcontrollers, but they have no debug interfaces. They don't have read/write flags. They also have a bunch of different countermeasures against these attacks, for example hardware measures. There is a watchdog that monitors the voltage on the power supply pin, and as soon as it sees it drop below some value, it raises an alarm and you can wipe the keys as soon as you see this happening. Or you just stop operating. If you see the supply voltage varying, you just stop operating. If you see the temperature varying too much, you can also stop operating. You can stop, or wipe your secrets. There is also a mechanism that lets the chip detect if you are trying to decap it, such as a simple light sensor. If you decap the chip, you get access to the semiconductor, the chip can see that a lot of light is coming in, and then it stops operating. Here you definitely want to wipe your secrets and erase them all. They also use a lot of interesting techniques against side-channel attacks. For example, they don't just aim for constant power consumption and constant timing; they additionally introduce extra random delays and random noise on the power lines, which makes it harder and harder for the attacker to extract data from there. Also, they usually have a very limited number of pins. They have a power pin, ground, maybe a few more to drive something simple like an LED on the ColdCard, or on the modern Ledger Model X they are actually able to talk to the display controller to drive the screen, which is a nice hardware improvement. In general, it is not very capable. You can't expect the secure element to drive a big display or react to user input. A button is probably fine, but it is definitely not the best.

The reason the Ledger has a tiny low-resolution screen is that they are trying to drive everything from the secure element, at least on the Ledger Model X. Previously it wasn't like that: they had a regular microcontroller talking to the secure element, where you unlock it and then it signs whatever you want, and then this microcontroller drives the display. This was actually a big point that was raised by - ... this architecture is not perfect because you have this man in the middle controlling the display. You have to trust that your hardware wallet has a trusted display, but with this architecture you can't, because there is a man in the middle. It is hard to mitigate this and figure out how to trade off full security against usability. I hope you get an idea of why secure elements are actually secure.

Problems with secure elements

However, there are some problems with secure elements. They have all these nice anti-tamper mechanisms, but they also like to hide other things. Common practice in the security field is that when you close-source your security solution, you get some extra points in security certification, like EAL5 and other standards. Just for closing the source of what you wrote or what you built, you get extra points. So now we have the problem that we can't really find out what is running inside these devices. If you want to work with a secure element, you have to be big enough to talk to these companies and get the keys needed to program it. And you also have to sign their non-disclosure agreement. Only at that point will they give you the documentation; and then the problem is that you can't open-source the code you wrote. As an alternative, you use a secure element that runs a [Java Card operating system](https://en.wikipedia.org/Java_Card), which is something like a subset of Java developed for the banking industry, because bankers like Java for some reason. They basically have this Java virtual machine that can run your applet... so you have no idea how the thing underneath is operating, you just trust them because it is certified and it has been around for 20-30 years, and we know all the security research institutes are trying very hard to get even a single... and then you can fully open-source the Java Card applet that you upload to the secure element, but you don't know what is running underneath it. Java Card is considered a secure element, yes.

By the way, the most secure Java cards or secure elements were usually developed for pay TV... like when you buy one of those cards, you have a secret on it that lets you watch television from a certain account. They were very good at protecting secrets because they had the same secret everywhere. The signal comes from space, from the satellite; the signal is always the same. You are forced to use the same secret in every device. This means that if even one is hacked, you have free TV for everyone, so they put a lot of effort into securing this kind of chip, because as soon as it is hacked you are really screwed and you have to replace all the devices.

Also, let's talk about the Sony PlayStation 3 key compromise attack. They use ECDSA signatures, and not all games are supposed to be free, right? The only way to get a game to run is to have a proper signature on the game, meaning Sony's signature. The problem is that apparently they didn't hire a cryptographer, or anyone decent at cryptography... they implemented the digital signature algorithm in a way that reused the same nonce over and over. It is the same problem as with the hardware wallets we described today. If you are reusing the same nonce, then I can extract your private key from just two of your signatures. Then I can get your private key and I can run any game I want, because I have Sony's private key. This was the Sony PlayStation 3. I think it was the fastest hack of a game console.
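
As a worked illustration of why nonce reuse is fatal (toy numbers over the secp256k1 order, not Sony's actual keys): given two ECDSA signatures (r, s1) and (r, s2) over message hashes z1 and z2 that share a nonce, anyone can solve for the nonce and then for the private key.

```python
# ECDSA: s = (z + r*x) / k  (mod n). Two signatures sharing k give
#   k = (z1 - z2) / (s1 - s2)   and   x = (s1*k - z1) / r      (mod n)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_private_key(r, s1, z1, s2, z2):
    k = (z1 - z2) * pow((s1 - s2) % N, -1, N) % N   # recover the shared nonce
    return (s1 * k - z1) * pow(r, -1, N) % N        # recover the private key

# Toy example: fabricate two signatures with the same nonce and recover x.
x, k = 0xC0FFEE, 0xBADC0DE                          # private key and reused nonce
r = 0x1234567890ABCDEF                              # stands in for (k*G).x mod n
z1, z2 = 111, 222                                   # two message hashes
s1 = (z1 + r * x) * pow(k, -1, N) % N
s2 = (z2 + r * x) * pow(k, -1, N) % N
assert recover_private_key(r, s1, z1, s2, z2) == x
```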

QR code wallets

Enforcing one-way flow of information. Bits cannot flow backwards. The only place a decrypted key should live is in a running hardware wallet. There is a python implementation of slip39. Chris Howe contributed a slip39 library in C.

With multisig and each key sharded, can you mix the shards of the different keys, and is that safe?

The QR code is json -> gzip -> base64 and it fits something like 80-90 outputs, and that's fine. Animated QR codes could be cool, but there are some libraries, like colored QR codes, that give you a boost. It's the packetization. Is the user going to be asked to show certain QR codes in a certain order, or is it going to be negotiated graphically between the devices? You can use high-contrast cameras.
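
A minimal sketch of that json -> gzip -> base64 pipeline, plus naive fixed-size chunking into "1 of n" packets (my own illustration of the packetization question; real airgapped wallets use more robust schemes such as fountain codes):

```python
import base64, gzip, json, math

def encode_payload(obj: dict, chunk_size: int = 200) -> list:
    """json -> gzip -> base64, then split into numbered QR-sized packets."""
    blob = base64.b64encode(gzip.compress(json.dumps(obj).encode())).decode()
    total = math.ceil(len(blob) / chunk_size)
    return [f"p{i + 1}of{total}:" + blob[i * chunk_size:(i + 1) * chunk_size]
            for i in range(total)]

def decode_payload(packets: list) -> dict:
    ordered = sorted(packets, key=lambda p: int(p.split("of")[0][1:]))
    body = "".join(p.split(":", 1)[1] for p in ordered)
    return json.loads(gzip.decompress(base64.b64decode(body)))

tx = {"outputs": [{"addr": "bc1q-example", "amount": 10_000}] * 90}
packets = encode_payload(tx)          # scan these one by one, in any order
assert decode_payload(packets) == tx
```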

Printed QR codes... each packet can say it is 1-of-n, and as it reads them, it figures out which one it is, and then figures out which ones are done or not done yet.

The signatures could be batched into a bigger output QR code on the device. So it is still not a big bottleneck. Packetized QR codes are an interesting area. When you parse the QR code in Christopher Allen's stuff, the QR code says what it is.

Recent attacks

There were some recent attacks which showed that even if you are using secure hardware, it doesn't mean you are secure. When there is an attacker who can get to your device, then you are in trouble and they can run nasty attacks against the microcontroller. Another idea is to wipe the device every time. There are wallets that use secure elements, like the Ledger hardware wallet.

On the hardware side, it is wonderful. The team is strong on the hardware side. They come from the security industry. They know secure element certification, they know how to hack microcontrollers, and they keep showing interesting attacks on Trezor and other random wallets. They are extremely good on the hardware side, but that doesn't mean they can't screw up on the software side. In fact, it has happened a few times.

One of the scariest attacks on Ledger happened late last year, and it was related to change addresses. When you are sending money to someone, what you expect is that you have your inputs - say an input of one bitcoin - and then you normally have two outputs, one for the payment and the other being the change output. How do you verify that this is the change address? You should be able to derive the corresponding private key and the public key that will control that output. If you get the same address for the change output, then you are sure the money comes back to you. Normally what you do is provide the derivation path for the corresponding private key, because we have this hierarchical deterministic tree of keys. So the hardware wallet only needs to know how to derive the key. So you send the wallet the derivation path too, like the bip32 derivation path. Then the hardware wallet can derive the corresponding key and see that exactly this output will be controlled by this key, so it has the correct address. ... So what Ledger did is that they didn't do the verification... they just assumed that if there was an output with some derivation path, then it is probably correct. This means the attacker could replace the address of this output with any address; just attach any derivation path, and all the money could go to the attacker: when you are sending a small amount of bitcoin, all the change goes to the attacker. It was disclosed last year, and it was discovered by the Mycelium guys, because they were working on transferring funds between different accounts on Ledger and found that somehow it was too easy to implement this on Ledger, so something must be wrong here, and they discovered the attack. It was fixed, but who knows how long the problem was there. From the hardware wallet's perspective, if someone doesn't tell me it is a change output, or prove it to me, then I should treat it as not being a change output. This was a problem.
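
A minimal sketch of the check the talk says was missing (the logic only, not Ledger's actual code; `derive_pubkey` and `script_for_pubkey` are toy stand-ins for a real BIP32/script library): before treating an output as change, derive the key from our own master secret at the claimed path and require that it actually matches the output's script.

```python
import hashlib

# Toy stand-ins for a real BIP32/script library (assumptions, not real APIs):
def derive_pubkey(master_secret: bytes, path: str) -> bytes:
    return hashlib.sha256(master_secret + path.encode()).digest()

def script_for_pubkey(pubkey: bytes) -> bytes:
    return b"\x00\x14" + hashlib.sha256(pubkey).digest()[:20]   # p2wpkh-like

def is_our_change(output_script: bytes, claimed_path: str, master: bytes) -> bool:
    """Only trust an output as change if a key derived from OUR OWN master
    secret at the claimed path really controls that output."""
    return script_for_pubkey(derive_pubkey(master, claimed_path)) == output_script

master = b"device master secret"
change_script = script_for_pubkey(derive_pubkey(master, "m/84'/0'/0'/1/7"))
attacker_script = script_for_pubkey(hashlib.sha256(b"attacker").digest())

assert is_our_change(change_script, "m/84'/0'/0'/1/7", master)        # safe to hide
assert not is_our_change(attacker_script, "m/84'/0'/0'/1/7", master)  # must display!
```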

There was also a smaller issue, where they didn't read the microcontroller documentation. The problem was: how did they verify the firmware running on this microcontroller? Basically... when we have our new firmware, the Ledger has a specific region in memory holding a particular magic sequence of bytes; for Ledger it was some magic hex number. So they store that there. What happens is that when you update the firmware on the Ledger, it first erases this, then flashes the firmware, and then at the end verifies whether the signature on this firmware is correct. If the signature was generated by the Ledger key, then they put this magic number back into the register and then you are able to boot this firmware and get it running. Sounds fine, right? If you provide a wrong signature, then these magic bytes are all zeros at that point, so it would not run this firmware; it would just roll back to the previous firmware. The problem is that if you read the microcontroller documentation, you see that there are two different addresses for accessing this memory region where they store these magic bytes. One of them was completely locked against external read-write, such that if you tried to write to these registers you would fail, because only the microcontroller could do it. But there was another one that mapped to the same memory region, and you could write any bytes there, and then you could make the microcontroller run any firmware you gave it. Someone was able to play a game of Snake on the Ledger hardware wallet as a result of this. If you get control of this screen and the buttons, with custom firmware, you can hide arbitrary outputs. You can fool the user in different ways, because you are controlling what the user will see when they are signing. So I think it is a pretty big problem. It is a hard problem to exploit, but it is still a problem.

Another super serious screwup happened with Bitbox... do you know it? Some wallets have a nice hidden wallet feature. The idea is that if someone grabs your hardware wallet and says "please unlock it, otherwise I'll hit you with a wrench", you will probably unlock it and then spend the money to the attacker because you don't want to die. The hidden wallet feature is supposed to secure your money in such a way that there is also a hidden wallet the attacker doesn't know about, so he only gets a fraction of your money. Normally you use the same mnemonic but with a different passphrase. The Bitbox guys did it slightly differently, and it was a bad idea to reinvent the protocol with their own rules. So what they did was: you have this master private key, a bip32 xprv. It has a chain code and the private key in there. And when you have the master public key, you have the same chaincode and just the public key corresponding to this private key. Given the master public key, you can derive all your addresses but not spend, and if you have the private key you can spend. For the hidden wallet they used the same xpub with the same chaincode, but they swapped the chaincode and the key. That means that if your software wallet knows the master public keys of both the normal wallet and the hidden wallet, then it basically knows both the chaincode and the private key, so it can take all your money. If you are using this hidden wallet feature, then you are screwed.

Is the attacker assumed not to know about the hidden wallet feature? How is it supposed to work? In principle, this hidden wallet feature is questionable. As an attacker, I would keep hitting you with the wrench until you gave me all the hidden wallets you have. I would keep hitting you until you gave me the next password or the next passphrase and so on; they would never be convinced that you don't have a next wallet. The wallet would have to be funded well enough that the attacker thinks it is probably everything. You could also do the replace-by-fee race where you burn all your money to the miners ((hopefully you get the fees right)). The attacker is not going to stop attacking you physically. But there is still a big difference between physically beating someone and killing them. Murder seems like a line fewer people would be willing to cross.

TrueCrypt had plausible deniability in its encryption because you could have several encrypted volumes, but you couldn't tell how many there were. It might be suspicious for a 1 GB encrypted volume to contain only a single 10 kb file... but the idea is to put something genuinely embarrassing next to your 10 BTC, and you just say "I'm so ashamed", and this makes it look more plausible that this is actually your entire stash of coins.

Having secure hardware doesn't mean you are not vulnerable to attacks. I really think the best thing is to use multisig.

Timelock

If you are willing to wait out a delay, you can use a timelock, or spend instantly with 2-of-2 multisig keys. You would enforce in the hardware wallet that it only makes these timelocked transactions. The attacker provides the address. It doesn't matter what your wallet says; at best, your wallet has already locked it. You can't be forced to spend in a non-timelocked way, because presumably the attacker wants the money to go to his own address. You could pre-sign the transaction and delete your private key... hopefully you got that fee right.

If you can prove to a bank that you will receive 1 billion dollars a year from now, then they will advance you the money. You get the use of the money, you negotiate with the attacker and pay him a percentage. But this gets into kidnap and ransom (K&R) insurance territory... You could also use a bank, with a setup like "2-of-2 multisig, or my key alone but only after a 6-month delay". So this means that every time you need to make a transaction, you go to the bank and make a transaction... you can still recover your money if the bank disappears, and the attacker can't get anything, because he probably doesn't want to go to the bank with you, or cross multiple borders while you travel around the world to collect all your keys or whatever.

The best protection is to never tell anyone how much bitcoin you have.

How can you combine a timelock with a third party? Timelock with multisig is fine.

We are planning to add miniscript support, which could include timelocks. But as far as I know, no hardware wallet currently enforces timelocks.

Miniscript

Miniscript (or here) was introduced by Pieter Wuille. It is not a one-to-one mapping of every possible bitcoin script; it is a subset of bitcoin script, but it covers something like 99.99% of all the use cases observed on the network so far. The idea is that you describe the logic of your script in a convenient form, such that a wallet can parse this information and figure out which keys or what information it needs to obtain to produce a signature. This also works for many of the lightning scripts and for various multisig scripts. You can then compile this miniscript policy into bitcoin script. It can analyze it, decide that this branch is the one you will most likely use most of the time, and then order the branches in the script to make it execute more efficiently on average in terms of sigops. It can optimize the script in such a way that your fees, or the data you have to provide when signing against this script, will be minimal according to your priorities. So if you are mostly spending through this branch, then this will be super-optimal, and this other branch might be a little longer.
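
To make the "describe the logic, then compile" idea concrete, here is a hypothetical policy in the notation used by Pieter Wuille's policy compiler (my example, not one from the talk): spend with 2 keys together most of the time, or with a single recovery key after roughly a week's worth of blocks; the 99@ weight tells the compiler which branch to optimize for.

```
or(99@and(pk(key_user),pk(key_service)),and(older(1008),pk(key_recovery)))
```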

After miniscript is implemented, it will be possible to use timelocks. Until then, you need something like a raspberrypi with custom firmware. We can try to implement a timelock feature together tomorrow if you are still here.

Pieter has a proof of concept on his website where you can write the policies and get real bitcoin script out. I don't think it has a demo of going the other way, but it describes in a lot of detail how this all works. I think they are finishing up their multiple implementations right now and I think it is almost ready to really get going. Some pull requests have been merged for output descriptors. In Bitcoin Core you can provide a script descriptor and feed it into the wallet, saying whether it is segwit or legacy, nested segwit or native segwit, etc. You can also use script descriptors for multisig wallets, and you can already use Bitcoin Core with existing hardware wallets... it is still a bit of a hassle because you need to use a command line interface and it is not super user-friendly and it is not in the GUI yet, but if you are okay with command line interfaces and willing to write a small script that does it for you, then you are probably fine. I think the integration with Bitcoin Core is very important and it is good that we have this going.

Advanced features for hardware wallets

https://btctranscripts.com/breaking-bitcoin/2019/future-of-hardware-wallets/

Something we could do is coinjoin. Right now hardware wallets only support situations where all the inputs belong to the hardware wallet. In coinjoin transactions, that is not the case. If we can fool the hardware wallet into displaying something incorrect, then we can potentially steal funds. How could the hardware wallet figure out whether a given input belongs to it or not? It needs to derive the key and check whether it is able to sign. For that it needs the software wallet's help. The user needs to sign the transaction twice for this protocol.

It is quite normal for a coinjoin transaction to be signed several times within a short period, because sometimes the coinjoin protocol stalls due to users dropping off the network or simply taking too long.

Signing transactions with external inputs is tricky.

Proof of (non-)ownership for hardware wallets

Say we are a malicious wallet - not the coinjoin server, but a client application. I can put two identical user inputs, which is quite common in coinjoin, into the inputs, and put only one user output, with the rest being other people's outputs. How can the hardware wallet decide whether an input belongs to the user or not? Right now there is no way. So we rely on the software to flag the input that needs to be signed. The attack is to flag only one of the user's inputs as mine; the hardware wallet signs it and we get the signature for the first input. The software wallet then pretends that the coinjoin transaction failed, and sends the hardware wallet the same transaction but flagging the second input as ours. So the hardware wallet has no way to determine which inputs were its own. It could use SPV proofs to prove that an input is yours. We need a reliable way to determine whether an input belongs to the hardware wallet or not. Trezor is working on this with achow101.

https://github.com/satoshilabs/slips/blob/slips-19-20-coinjoin-proofs/slip-0019.md

We could make a proof for each input, and we need to sign this proof with a key. The idea is to prove that you can spend, and to prove that... it can commit to the whole coinjoin transaction to prove to the server that this input is owned, and it helps the server defend against denial-of-service attacks, because now the attacker has to spend his own UTXOs. The proof can only be signed by the hardware wallet itself. It also has a unique transaction identifier. It is sign(UTI||proof_body, input_key). They cannot take this proof and submit it to another coinjoin round. This technique proves that we own the input. The problem comes from the fact that we have this crazy derivation path. It uses a unique identity key, which can be a normal bitcoin key with a fixed derivation path. The proof body will be HMAC(id_key, txid || vout). This can be wallet-specific, and the host can collect them for UTXOs. You can't fake this, because the hardware wallet is the only one that can generate this proof.
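
A minimal sketch of the proof structure just described (following the talk and the SLIP-0019 draft linked above; `sign_with_input_key` is a hypothetical stand-in for the wallet's normal signing routine): the HMAC makes the proof body recognizable only to the wallet holding the identity key, and the outer signature binds it to the input being spent in this round.

```python
import hashlib, hmac

def proof_body(id_key: bytes, txid: bytes, vout: int) -> bytes:
    """Deterministic, wallet-specific tag for one UTXO: HMAC(id_key, txid||vout)."""
    return hmac.new(id_key, txid + vout.to_bytes(4, "little"), hashlib.sha256).digest()

def ownership_proof(uti: bytes, id_key: bytes, txid: bytes, vout: int,
                    sign_with_input_key) -> bytes:
    """sign(UTI || proof_body, input_key): binds the proof to this coinjoin round."""
    return sign_with_input_key(uti + proof_body(id_key, txid, vout))

def is_mine(id_key: bytes, txid: bytes, vout: int, claimed_body: bytes) -> bool:
    """The wallet recognizes its own inputs by recomputing the HMAC."""
    return hmac.compare_digest(claimed_body, proof_body(id_key, txid, vout))
```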

This could be extended to multisig key aggregation or even MuSig.

We can ask every participant in this coinjoin transaction to sign a particular message with the private key that controls their input. So we have a message and a signature. The signature proves to us, to everyone, that the guy who put this message there really controls the corresponding private key. This is the signature with the key that controls this input. On the message side, we can put whatever the hardware wallet wants. The hardware wallet is the one who can sign this proof. It is the only one that controls this key. So what it can do is generate a particular message that it will be able to recognize later. So I take the hash of the transaction and combine it with my fixed key that I store in my memory, and then I get a unique message that looks random but that I will be able to reproduce whenever I see it, and I can be sure that it was my input because I was the one who generated this thing inside the message. Once all these proofs are provided for every input, our hardware wallet can go through every input and be sure which inputs are mine and which are not. This can help detect when the software wallet is trying to fool you.

I hope hardware wallets will be able to do coinjoins pretty soon. Probably Trezor will deploy it first because we are working with them on it.

Q: What is the use case for this? Say I want to leave something connected to the internet to earn money with something like joinmarket? Or do I want to be a privacy taker?

A: It works in both cases. If you want to participate in coinjoin and earn something - but right now it doesn't work like that. Right now all the revenue goes to the Wasabi Wallet guys. Their servers charge for connecting people. At the moment I think if you want to use coinjoin to get some privacy, then you need this kind of protocol, so you probably need to connect your hardware wallet to do this, or you can still do it using the airgap.

In our case, for example, I was thinking of having a screen on the computer and then a QR code, and they can communicate through QR codes: this is a webcam and this is a screen. I was also thinking about audio output, like a 3.5mm jack from the hardware wallet to the computer. The bandwidth there is pretty good. You could also play audio over a speaker. But then your hardware wallet needs a speaker, and it could just broadcast your private key. But a 3.5mm audio jack makes sense.

Q: What about coinshuffle or coinswap?

A: I only know a little about this. For wasabi wallet, it doesn't know which inputs correspond to which outputs because they are registered separately. They give you back a blind signature and you give them a blinded output or something like that. They generate a blind signature and they don't know what they are signing. This lets the coinjoin server verify: yes, I signed something, and this guy wants to register this output, so it looks fine and it goes into the coinjoin. For all this communication they use Schnorr signatures because you can do blind signatures with them. In principle this means they have two virtual identities that are not connected to each other; your inputs and outputs are completely unlinked, even for the coinjoin server. They also generate outputs of the same value, and then do another set of outputs with a different value, so you can also get some anonymity for some amount of the change.

Wasabi wallet supports hardware wallets, but not for coinjoin. So the only remaining benefit of using Wasabi is having full coin control and being able to pick the coins you send to people.

Q: How does Wasabi handle privacy when fetching your UTXOs?

A: I think they use the Neutrino protocol; they ask the server for the filters and then download blocks from random bitcoin nodes. You don't need to trust their central server at that point. I think it is already possible to connect to your own node - awesome, that's great. Cool. So now you can get it from your own Bitcoin Core node.

Lightning for hardware wallets

Lightning is still under development, but it is already running live on mainnet. We know the software is not super stable yet, but people were excited and started using it with real money. It is not a lot of real money; it is something like a few hundred bitcoin across the whole lightning network right now.

Right now the lightning network only works with hot wallets, with the keys on your computer. It is probably not a problem for us, but for regular customers buying coffee every day this is a problem. It may be fine to store a few hundred dollars on your mobile wallet, and maybe it gets stolen - fine, it is a small amount of money. But for merchants and payment processors, you worry about losing coins or payments, and you want to have enough channels open that you don't have to close any and don't run into liquidity problems on the network. You have to store your private keys on an online computer with a particular IP address, or maybe behind tor, with certain ports open, and you are broadcasting to the whole world how much money you have in those channels. Not great, right? So this is another thing we are working on: it would be nice to get the lightning private keys onto a hardware wallet. Unfortunately, here you can't really use an airgap.

You could partially use cold storage. You could at least make sure that when the channel is closed the money goes to cold storage. There is a nice separation of the different keys in lightning. When you open a channel, you should specify which address to use when closing the channel. Then even if your node is hacked, if the attacker tries to close the channels and take the money, he fails, because all the money goes to cold storage.

But if he is able to move all the money over the lightning network to his own node, then you are probably screwed. Storing these private keys on the hardware wallet is challenging, but you could have a... it can also sign a channel update. If you provide enough information to the hardware wallet to show that you are really routing a transaction and that your balance is increasing, then the hardware wallet could sign automatically, like a hot wallet. If the amount is decreasing, then it definitely has to ask the user to confirm. So we are working on that.

Schnorr signatures for hardware wallets

The advantage of Schnorr signatures for hardware wallets is key aggregation. Imagine you use normal multisig transactions, like 3-of-5. This means that every time you sign a transaction and put it on the blockchain, you see there are 5 pubkeys and 3 signatures. That is a lot of data, and everyone can see that you are using a 3-of-5 multisig setup. Terrible for privacy, and terrible in terms of fees.

With Schnorr signatures, you can combine these keys into one. So you can have several devices or signers that generate signatures, and then you can combine the signatures and the corresponding public keys into a single public key and a single signature. Then all the transactions on the blockchain - or most of them - would look similar: just one public key and one signature.
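
A toy illustration of why the aggregated result looks like a single-signer signature (my own sketch in a small multiplicative group mod p, not secp256k1, and a naive sum of keys: a real scheme like MuSig adds per-key coefficients to prevent rogue-key attacks):

```python
import hashlib, random

# Toy Schnorr group: p = 2q + 1 with p and q prime; g = 4 generates
# the order-q subgroup. For illustration only.
p, q, g = 2039, 1019, 4
rng = random.SystemRandom()

def H(*parts) -> int:
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def verify(P, m, R, s) -> bool:
    # Standard Schnorr check: g^s == R * P^e
    return pow(g, s, p) == R * pow(P, H(R, P, m), p) % p

# Two signers; the aggregate looks exactly like a single key and signature.
x1, x2 = rng.randrange(1, q), rng.randrange(1, q)
P = pow(g, x1, p) * pow(g, x2, p) % p          # aggregated public key
k1, k2 = rng.randrange(1, q), rng.randrange(1, q)
R = pow(g, k1, p) * pow(g, k2, p) % p          # aggregated nonce
e = H(R, P, "msg")
s = (k1 + e * x1 + k2 + e * x2) % q            # each signer adds s_i = k_i + e*x_i
assert verify(P, "msg", R, s)
```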

With taproot (or here), it is even better. You can add scripting functionality there too. If everything goes well - say on lightning, maybe you and your counterparty are cooperating happily and you don't need to do a unilateral close - you could do a mutual 2-of-2 multisig close, and then it looks exactly like one public key and one signature. If someone isn't cooperating and things go wrong, then you can reveal a branch in the taproot script tree showing that you are allowed to claim the money, but this script is only revealed if you have to go down that path. Otherwise, you get a single public key and a single signature on the blockchain.

We can use chips with different architectures and different, heterogeneous security models in a single hardware wallet device, put three different chips with three different keys in there, and make sure we can spend the bitcoin only if every one of these chips signs inside the hardware wallet. So one can be a proprietary secure element, alongside other microcontrollers in the same hardware wallet, and the output is just a single public key and a single signature. We could also have one chip from Russia and another from China. So even if there is a backdoor, it is unlikely that the Russian and the US governments will cooperate to attack your wallet. From the user's perspective, it looks like there is just a single key and a single signature.

All these watchdogs and anti-tamper mechanisms and fault injection prevention and stuff... they are not implemented yet, but I know there are some companies working on security peripherals around the RISC-V architecture. So hopefully we will have open-source secure elements soon. The only problem at the moment is that most... some of the companies, I would say, take this open-source RISC-V architecture and put a bunch of closed-source proprietary modules on top of it, and that ruins the whole idea. We need an open-source RISC-V chip. I would definitely recommend looking at RISC-V.

The IBM Power9 machines are also open source at this point. Raptor Computing Systems is one of the few manufacturers that will actually sell you the device. It is a server, so it is not ideal, but it actually is open source. It is 2000 dollars of equipment; it is a complete computer. So it is not an ideal consumer device for hardware wallets. I think the CPU and most of the board design are open source, including the core. It is an IBM architecture. Okay, I should probably look at that. Sounds interesting.

Best practices

I want to talk about best practices, and then talk about rolling our own hardware wallet. But first, best practices.

Never store mnemonics as plaintext. Never load the mnemonic into the microcontroller's memory before the device is unlocked. There was something called the "frozen Trezor attack". You take your Trezor and power it on; the first thing it does is load your mnemonic into RAM, and then what you can do is freeze the Trezor at a low temperature to make sure the memory keeps its contents. Then you update the firmware to a custom one, which Trezor allows - normally that is fine because your flash memory gets erased and they assume your RAM decays, but at low temperatures it stays there. Then, once you have your firmware on there, you don't have the mnemonic protection any more and you can print over serial... anyway, you can still get the data out. The problem was that they loaded the mnemonic into RAM before checking that bit. So never do that.

The last thing standing between an attacker and your funds is the PIN code, or whatever verification method you are using. It is far better not to store the correct PIN code on the device at all. In principle, comparing the correct PIN code with the entered PIN code is bad, because during these comparisons there is a side-channel attack. Instead, you want to use the PIN code, and whatever other authentication method, to derive the decryption key that decrypts the encrypted mnemonic. This way, you eliminate all the side-channel attacks and you don't have the mnemonic in plaintext.

Another nice feature people should use - have you heard of physically unclonable functions? It is a really nice feature. Say you have a microcontroller being manufactured... when the RAM is manufactured, there are certain fluctuations in the environment such that the RAM comes out slightly different for every bit. When you power on the microcontroller, the state of your memory will be random. Then you erase it and start using it normally. But this randomness has a certain pattern, and this pattern is unclonable because you can't observe it from outside and it cannot be reproduced on another RAM device. You can use this pattern as a fingerprint, as the unique key of the device. That is why it is called a physically unclonable function: it comes from variations in the manufacturing process. You can use that, together with a PIN code and other things, to encrypt your mnemonic. When the device boots, it will be able to decrypt the mnemonic. But extracting the full flash memory will not help, because you still need to get the physically unclonable function that lives on the device. The only way to get that is to flash new firmware, read the key, and extract it over serial or whatever. It requires defeating both the read protection and the write protection.

Q: Why not remove the PIN and the mnemonic storage entirely, require the user to enter it, and wipe the device each time?

A: You could. Then it is a secure signing device, but not a secure storage device. So there is secure storage and there is secure signing. You store the passphrase or the encrypted wallet on paper or CryptoSteel in a bank vault or something like that with the mnemonic or whatever... and it is encrypted, and you remember the passphrase. So you never store the passphrase anywhere.

The secure storage problem and the secure signing problem should be separated. You could use replaceable hardware for signing, and the mnemonic should be stored encrypted on paper or something like that. The problem with entering a mnemonic every time is that you could suffer an evil maid attack. The mitigation here is to have no wifi or anything like that on the device. But maybe the attacker is clever and adds a battery pack and some kind of transmission mechanism; that again brings you back to a disposable wallet.

Besides RAM, you can use a piece of glass to build a physically unclonable function. You could put a piece of glass in the wallet, the wallet has a laser, and it can measure the imperfections in the glass and use that to derive the decryption key for your mnemonic. This is not a fingerprint as such. However, glass can degrade over time. A piece of plastic can have a unique pattern that generates an interference pattern, and you can extract a decryption key from that. But that is not going to happen for a while.

This other thing about evil maid attacks and hardware implants, how can we prevent it? There is a way to build an anti-tamper mesh around the device, such that when someone tries to get into it, for example by drilling a hole, the security measures trigger automatically. HSMs in the banking industry basically have a device that is constantly powered and monitors the current running through this conductive mesh. If they detect a change in current in the mesh, they wipe the device. The problem is that when you lose mains power you have to rely on the battery, and when the battery is running low you have to wipe the keys anyway before it dies.

There is a better way, where you not only check the current but also measure the capacitance of the wires in the mesh, and use that as a unique fingerprint to generate the unique decryption key for your secrets. Even if someone drills a 100 micron hole here, the decryption key changes and they can no longer extract the secrets. You cannot buy such a device at the moment, but it is a very promising approach. It is probably for people who really care about large amounts of money, because it is really expensive.

If you are going to wipe the keys, then you might as well pre-sign a transaction before wiping them, to send the coins to a backup cold storage or something.

Rolling your own hardware security

You shouldn't expect rolling your own hardware wallet to be a good idea. Your device will probably not be as secure as a Trezor or a Ledger, because they have large teams. But if there is a bug in the Trezor firmware, attackers will probably try to exploit it against every Trezor user. Whereas if you have a custom implementation that may not be super secure but is custom, then what are the chances that the guy who breaks into your house, finds a weird-looking device, realizes it is a hardware wallet and then figures out how to break it? Another thing you can do is hide your custom hardware wallet inside a Trezor case ((laughter)).

Someone in a video suggested making a fake hardware wallet which, when powered on by someone, sends an alert message to a telegram group saying call 911, I am being attacked. You could put this inside a Trezor case. When the guy plugs into it, it sends the message. Another thing you could do is install malware on the attacker's computer, and then track him and do various surveillance things. You could also claim that yes, I need to use Windows XP with this configuration or something equally insecure, which is plausible because maybe you set this system up 10 years ago.

Options for prototyping a hardware wallet

What can we use to build a hardware wallet? If you think making hardware is hard, it isn't. You just have to write firmware and upload it. You can also use FPGAs, which are fun to develop on. I like boards that support micropython, which is a limited version of python. You can talk to peripherals, display QR codes, and so on. Trezor and ColdCard use micropython for their firmware. However, I think micropython still has a long way to go, because as soon as you move away from what has already been implemented, you end up having problems where you have to dive into the internals of micropython and end up writing new C code or something. But if you are happy with everything that already exists, then it is extremely easy to work with.

Another option is to work with arduinos. It is a framework developed maybe 20 years ago, I don't know, and it is used all over the DIY community. It made it extremely easy to start writing code. I know people who learned to program using arduino. It is C++, and it is not as easy to use as python, but still, the way they structure all the libraries and all the modules is extremely user friendly. However, they did not develop this framework with security in mind.

There is also the Mbed framework. It supports a large variety of boards. This framework was developed by ARM. Again you write C++ code, compile it to a binary, and when you plug the board in, you just drag and drop it onto the board. It is literally drag and drop. What's more, you don't need to install any toolchain. You can go online and use their in-browser compiler. It is not very convenient, except for getting started and making some LEDs blink. You don't even need to install anything on your computer.

Another thing worth paying attention to is the rust language, which focuses on memory safety. It makes a lot of sense. They also have rust for Mbed systems. So you can start writing rust for microcontrollers, and you can still access the libraries normally written by the manufacturer, like for talking to displays, LEDs and all that. In the microcontroller world, everything is written in C. You can write your hardware wallet logic in rust and still keep the bindings to the C libraries.

There is a very interesting project called TockOS. It is an operating system written entirely in rust, but you can still write in C or C++; the operating system itself, as the management system, can make sure that even if one of your libraries is completely compromised, you are still fine and it cannot access the memory of other programs. I think that is very nice. At the moment not many people know rust, but that is improving. It is definitely a very interesting toolchain.

Another nice thing you can do with DIY hardware wallets, or not just DIY but flexible hardware, is custom authentication. If you are not happy with just a PIN code, say you want a longer password, or you want an accelerometer so you can rotate the hardware wallet in a particular way that only you know, or for example you can apply one-time passwords and multi-factor authentication. You not only require the PIN but also a signature from your yubikey, and all these kinds of weird things, or even your fingerprint, though that is a bad idea because fingerprints have low entropy and people can just take your finger anyway or steal your fingerprints.

You could use a yubikey, a Google Titan, or even some bank cards. You could do multi-factor authentication and use different private key storage devices to do a multisig that has nothing to do with bitcoin, just to authenticate to unlock a hardware wallet.

Keep in mind that all the boards I am talking about are not super secure. They all use microcontrollers and do not have a secure element. You can get a very cheap board that costs about 2 dollars. Keep in mind that it is manufactured and designed in China. It is very widespread, but who knows, maybe there is still a backdoor in the device somewhere. Who knows. Also, it has bluetooth and wifi, so that is something to keep in mind. If you want a not-very-secure version of the Ledger X, then you could build one. It would probably still be safer than keeping the money on your laptop, which is constantly connected. All the other developer boards tend to have simple application-specific microcontrollers. This one here has the same chip that the Trezor has; in theory you could port Trezor to it. That way you get the security of a Trezor wallet, a much bigger screen and maybe some additional functionality you might like. So it could make sense in some cases. I would not fully trust DIY hardware for security.

There are also some cheap security-focused chips available on the market. The one used in the ColdCard is on the market, some Microchip ECC-blahblahblah part. It can also be used in the arduino form factor. So it can give you secure key storage for your bitcoin keys, and the rest can be done on a normal microcontroller.

There are currently no secure elements on the market that let you do elliptic curve cryptography on the bitcoin curve. They haven't been built yet.

Building a fully secure element that is completely open source from the bottom up would cost something like 20 million dollars. What we are releasing is what is accessible to us right now. So what we can do is take this secure element that has a proprietary Java Card operating system on top of it, and then on top of that we can write a bitcoin-specific applet that can talk to the hardware, use all the elliptic curve accelerators and the hardware features, and can still be open source; we don't know exactly how this Java Card operating system works, so it is not fully open source, we are just opening everything we can. On the ColdCard you cannot do elliptic curve cryptography on the secure key storage element, but on other secure elements you can actually run ECDSA and other elliptic curve cryptography.

My design choices for my hardware wallet

I wanted to talk about how we designed our hardware wallet and get your feedback about whether this makes sense or not and how we can improve, especially Bryan and M. After that, I think we can, if you have your laptops with you, set up the development environment for tomorrow, so that we can even hand out the boards to try out tonight and you can take them home. If you promise not to steal them, you can take one home if you want, as long as you are actually going to try it. Keep in mind that tomorrow I will be bothering you the whole time, so tonight is your chance to look at this alone. Tomorrow we can work on some crazy hardware wallets.

What we decided to do is, we have some hardware partners that can manufacture custom chips for us. We can take the components of normal microcontrollers and put them nicely into a single package. What components do hardware wallets normally have? The ones that also have a secure element, at least. So we decided first of all to go open source. We are not going to work with a bare metal secure element and a closed-source OS that we cannot open because of NDAs. So we are using a Java Card operating system and, even though we don't know how it works, it seems to work, so it should be reasonably secure. Then, on top of that, we are writing a bitcoin applet that runs on the secure element. We can only put things on there that we sign and upload using our admin keys. This means we can develop and upload the software, and then you can suggest certain changes that we can enable for upload. This requires some communication with us first. I cannot give the keys to anyone else, because if they leak, we get in trouble with the government, because they are worried about criminals compromising super-secure communication channels or using them to organize their illegal activities. Unfortunately, this is the state of the industry. We wanted to develop an open source secure element, but for now all we can do is make a hardware wallet that is maybe more secure and a bit more flexible.

So we are using the Java Card smartcard, and then we have two other microcontrollers. We also have a fairly large display, to show all the information about the transaction with nice formatting, and to enable this QR code airgap we need to be able to display fairly large QR codes. We have to use another general-purpose microcontroller because only those can drive large displays at the moment. Then we have a third microcontroller that does all the dirty work that is not security critical, like communication over USB, talking to SD cards, processing camera images to read QR codes and things like that. This physically isolates the large codebase that is not security critical and that handles all the user data and the data from the cold computer. We also have another microcontroller dedicated to driving the display, so that you can have a somewhat trusted display. All these microcontrollers are bundled in the same package. Inside, we have the semiconductor dies layered in the package, and they are layered in a security structure. The top one is the secure element, and the other two are below it. So in theory the heat from one chip in the package can be detected on the other, but presumably the smartcard has a lot of work done on power analysis and side-channel prevention.

Even if the attacker has access to this chip and decaps it, he will first hit the secure element, which has all these anti-tamper mechanisms like watchdogs and voltage detection and memory mapping and other things, and it shares this capability with the other chips. Obviously they share the same voltage supply, so the other microcontrollers gain a bit of the secure element's security. Even if the attacker tries to get into the memory of the not-very-secure microcontrollers, the secure element is in the way and it is hard to get underneath it. The previous customers they did this for were satellites, and in that case you have problems with radiation and such. For us this helped, because it means that, first, no electromagnetic radiation goes from the chip to the outside, which removes some side-channel attacks, and second, only limited electromagnetic radiation gets into the chip. And as I said, they are all in the same package. On our development boards, they are not in the same package because we don't have the money to develop the chip yet, but we will start soon. Even so, it already has all this hardware.

Normal microcontrollers have these debug interfaces. Even if they are disabled with the security bits, the attacker can do a bunch of race-condition tricks and even re-enable them. So we include a fuse that physically breaks the connection from the JTAG interface to the outside world. So just by having the chip, the attacker is not able to use the JTAG interface. This is a nice thing that is normally not available. On the developer board, we expose a bunch of connections for development purposes, but in the final product the only connections will be the display, the touchscreen, and for the camera we haven't decided which one we want to use. Normally the camera is used to scan QR codes, and that is obviously related to communication, so it should be on the communication microcontroller. We can take a picture of dice as user-defined entropy.

We have a fully airgapped mode that works with QR codes to scan transactions and QR codes to transmit signatures. We call it the M feature because he was the one who suggested it. The way it works is so nice and fluid that I have this nice video I can show you. First we scan the QR code of the money we want to send; the watch-only wallet on the phone knows all the UTXOs, prepares the unsigned transaction and shows it to the hardware wallet. Then on the hardware wallet we scan the QR code. Now we see the information about the transaction on the hardware wallet, we can sign it, and then we get the signed transaction. Then we scan it back with the watch-only wallet and broadcast it. The flow is pretty convenient.
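Roughly, the two halves of that flow look like this; `sign_psbt` and `confirm_on_screen` are hypothetical callbacks standing in for the wallet's own signing and display logic, not any specific API:

```python
import base64

# --- watch-only phone wallet (online, holds no keys) ---
def prepare_unsigned(psbt_bytes: bytes) -> str:
    # Encode the unsigned PSBT as text so it fits in a QR code.
    return base64.b64encode(psbt_bytes).decode()

# --- hardware wallet (offline, holds the keys) ---
def handle_incoming_qr(qr_text: str, sign_psbt, confirm_on_screen) -> str:
    psbt = base64.b64decode(qr_text)
    if not confirm_on_screen(psbt):          # user reviews outputs, amounts, fee
        raise RuntimeError("user rejected transaction")
    signed = sign_psbt(psbt)                 # signing happens only on the airgapped side
    return base64.b64encode(signed).decode() # displayed back as a QR code for the phone
```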

It is a one-way data flow. It only goes in one direction. It is a very controlled data flow. It is also limited in how much data you can pass back and forth. That is both a good thing and a bad thing. For a large transaction it can be a problem. For small data it is great. We haven't tried animated QR codes yet. I was able to do PSBTs with a few inputs and outputs, so they were pretty large. With partially signed bitcoin transactions and legacy inputs, your data blows up in size pretty quickly. Before segwit, you had to feed a lot of information into the hash function to derive the sighash value. Now with segwit, if the wallet lies about the amounts, it will simply generate an invalid signature. To compute the fee on the hardware wallet, I have to pass the whole previous transaction, and it can be huge. Maybe you received the coins from an exchange, and the exchange could create a big transaction with thousands of outputs, and you would have to pass the whole thing to the hardware wallet. If you are using legacy addresses, you will probably have problems with QR codes and then you have to do animations. For segwit transactions and a reasonable number of inputs and outputs, you are fine. You could just say, stop using legacy, and sweep those coins. The only reasonable attack vector there is if you are a mining pool and someone tricks you into broadcasting that; otherwise it hurts you but does not benefit the attacker, so they have no real incentive. You can also display a warning and say that if you want to see the fees, then use bech32 or at least check it here. You can show them the fee on the hot machine, but that is probably compromised.
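To make the difference concrete, here is a sketch with simplified transaction dictionaries (just input references and output amounts, not real serialization):

```python
def fee_legacy(tx: dict, prev_txs: dict) -> int:
    # Legacy inputs: the signer must be handed every full previous transaction
    # just to learn the input amounts, and those transactions can be huge.
    in_total = sum(prev_txs[txid]["outputs"][vout] for (txid, vout) in tx["inputs"])
    return in_total - sum(tx["outputs"])

def fee_segwit(tx: dict, claimed_amounts: list) -> int:
    # Segwit (BIP143) commits to each input amount in the sighash, so if the host
    # lies about an amount the signature is simply invalid; no previous txs needed.
    return sum(claimed_amounts) - sum(tx["outputs"])

# tx = {"inputs": [("txid...", 0)], "outputs": [90_000]}; prev_txs maps txid -> {"outputs": [...]}
```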

With the airgapped mode, you have cold storage, it is an airgap and it is reasonably secure. Nothing is perfect; there will be bugs and attacks on our hardware wallet too. We just hope to make fewer mistakes than the ones we see right now. Also, another thing I didn't talk about is the .... inside the smartcards, we have to reuse the Java Card OS simply because the applets have to be written in java. Then we decided to go with embedded rust, and even though it is a bit harder to develop and not many people know rust, this part is really security critical and we don't want to shoot ourselves in the foot with bugs there. It also gives you a lot of isolation and best practices from the security world. Third, we open up to developers and it runs micropython. This means that if you want custom functionality in the hardware wallet, like a custom communication scheme such as bluetooth, which we don't want, but if you want it then you are welcome to do it yourself. Then you write a python script that handles this kind of communication, and you just use it. Another way to do it is, if you are using a particular custom script that we don't know about, and we are showing this transaction in an ugly way that is maybe hard to read, then maybe what you can do is add some metadata to send to the other microcontroller to display something. It will still be marked as not super trusted because it comes from this python app, but if you trust the developers of the app, then you can make some choices. But at least you can see some additional information about this transaction if our code was not able to parse it; like whether it is a coinjoin transaction that increases your anonymity set by 50 or whatever - you still see the inputs and outputs, plus the extra information normally. So this could help with the user experience.

Besides the airgapped mode, there is another completely different use case where it is used as a hot wallet, for example as a lightning hardware wallet. You keep the wallet connected to the computer, the connected computer runs a watching lightning wallet, and then it communicates with the hardware wallet for signatures of the transactions that strictly increase the balance of your channel. Then you could be in the loop for any transaction that decreases your balance. Keep in mind that this is not an airgapped mode, it is a different security mode. You probably want most of your funds in cold storage, and then some amount can be on this hot hardware wallet device. Also, one of the problems with coinjoin is that it can take some time, like a few minutes, especially if you need to retry several times, so that is kind of a hot wallet as well.

You probably don't want to carry your hardware wallet around all the time, or go home and press a button to confirm your lightning payment or something. So what you can do is set up a phone with an app that is paired with the hardware wallet; the hardware wallet knows about the app, the app stores some secret to authenticate itself, and then you can set a limit in the morning for your phone. The hardware wallet will then authorize payments up to a certain amount if they come from this secure app. Even more, you can set particular permissions for the mobile app. For example, the exchange wallet should be able to ask the hardware wallet for an invoice, but only an invoice that pays you. The same with bitcoin addresses: it can only ask for bitcoin addresses for a particular derivation path. If you want, you can make it more flexible and get a better user experience. The hardware wallet is connected to the internet, and the requests are forwarded through the cloud in this scheme.

The first thing we are going to release is the developer board and the secure element. Another thing I want to discuss is the API of the first version of the secure element. Specifically, as developers, what would you like it to have? I have some ideas about what might be useful. Maybe you can think of something else.

Obviously, it should store the mnemonic, passwords and passphrases, and it should be able to do all the bip32 derivation computations and store bip32 public keys. We also want ECDSA and the standard elliptic curve signing we are using at the moment. In addition, I want to include this anti-nonce attack mitigation protocol. We don't want to trust the proprietary secure element; we want to be sure it is not trying to leak our private keys using this chosen nonce attack. So we want to implement that protocol. Also, we want to use Schnorr signatures, in particular with Shamir secret sharing. That would let you take the Schnorr keys and then get a signature that uses the right point. Key aggregation with Shamir needs a fancy function to combine the points from each of the parties.
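For the bip32 part, the hardened-derivation step is small enough to sketch with the standard library (ignoring the rare IL ≥ n edge case, and non-hardened derivation, which additionally needs the parent public key):

```python
import hmac, hashlib

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def ckd_priv_hardened(k_par: int, chain_code: bytes, index: int):
    """BIP32 hardened child derivation: index must have the top bit set (>= 0x80000000)."""
    assert index >= 0x80000000
    data = b"\x00" + k_par.to_bytes(32, "big") + index.to_bytes(4, "big")
    I = hmac.new(chain_code, data, hashlib.sha512).digest()
    child_key = (int.from_bytes(I[:32], "big") + k_par) % N
    return child_key, I[32:]   # child private key, child chain code
```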

It makes sense to use Shamir here because you can do thresholds. With Musig you can also do it, but with plain Schnorr it is only k-of-k. Verifiable Shamir secret sharing using pedersen commitments. Say we have 3 keys living on our hardware, and the other shares are on some other backup hardware somewhere else. They can all communicate with each other. Each one generates its own key, and then, using pedersen commitments and some other fancy algorithm, each of them can be sure that they end up with a share of some common secret, but none of them knows what the common secret is. So we have a virtual private key, a full secret, that is split with the Shamir secret sharing scheme across all these guys, and none of them knows the full secret. The only problem is, how do you back it up? You cannot see a mnemonic corresponding to it. Nobody knows this key, so nobody can display it. So the only way to do it is somehow - say this guy controls the display, then he can choose to show his own private key, but what about the others? They don't want to communicate their shares to this guy just to display them, because then he could reconstruct it... so you could use something like a display that you connect to the different devices to show this mnemonic, but then the display controller can steal them all. Ideally the display controller would be a very simple device that only has memory and a display. You could also use disposable hardware to generate the private key, but then how do you recover it?

Another idea is seedless backups, so that if one of the chips breaks, if you are not doing 5-of-5 but 3-of-5, you can have these guys talk to each other, renegotiate the same key, generate a new share for the new member, and replace all the old shares of this Shamir secret sharing scheme with the new ones. Instead of the old polynomial, you pick a new polynomial, and there is a way such that each of these devices can switch to the new share, and instead of the compromised one we get the new member. This is also useful if one of the shares or keys is compromised or lost, as long as you have enough keys to reconstruct.

For signing, you don't need to reassemble the Shamir shared private key, because it is Schnorr. The signatures are generated partially, and then they can be combined into the correct final signature without reassembling the private key on a single machine.

Say you are doing SSS over a master private key. With each share, we generate a partial signature over a transaction. A Schnorr signature is this random point plus the hash times the private key, and it is linear. We can apply the same function to recombine the signature shares. We can apply the same function to s1, s2 and s3 and then you get the full signature S this way, without ever combining the key shares into the full key.
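To make that linearity concrete, here is a toy demonstration with field arithmetic only (no curve points or hashing), assuming both the key and the nonce are Shamir-shared across the same set of devices:

```python
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def share(secret, k, n):
    """Split `secret` into n Shamir shares with threshold k (degree k-1 polynomial)."""
    coeffs = [secret] + [secrets.randbelow(N) for _ in range(k - 1)]
    return {i: sum(c * pow(i, j, N) for j, c in enumerate(coeffs)) % N for i in range(1, n + 1)}

def lagrange_at_zero(idxs):
    """Lagrange coefficients for interpolating f(0) from the share indices `idxs`."""
    lams = {}
    for i in idxs:
        num, den = 1, 1
        for j in idxs:
            if j != i:
                num = num * (-j) % N
                den = den * (i - j) % N
        lams[i] = num * pow(den, -1, N) % N
    return lams

x = secrets.randbelow(N)            # "virtual" master private key (never assembled)
k = secrets.randbelow(N)            # nonce, shared the same way
e = secrets.randbelow(N)            # challenge hash, the same for everyone
x_shares, k_shares = share(x, 3, 5), share(k, 3, 5)

signers = [1, 3, 5]                 # any 3 of the 5 devices
lam = lagrange_at_zero(signers)
partial = {i: (k_shares[i] + e * x_shares[i]) % N for i in signers}   # s_i = k_i + e*x_i
s = sum(lam[i] * partial[i] for i in signers) % N

assert s == (k + e * x) % N         # same signature as if one machine held x and k
```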

Multisig is fine when you are not the only owner of the private keys, like for custody with your friends and family or whatever. The Shamir Secret Sharing scheme with Schnorr is great if you are the sole owner of the key, so you only need the shares of the key. There is a paper I can give you that explains how the shares are generated, or whether the virtual master private key is generated on a single machine. Multisig is better for commercial custody, and Shamir is better for self-custody and personal cold storage. Classic multisig will still be available with Schnorr, you don't have to use key aggregation, you would still use CHECKMULTISIG. I think you can still use it. ((No, you can't - see the "Design" section of bip-tapscript, or CHECKDLSADD.)) From a miner's perspective, CHECKMULTISIG makes that transaction more expensive to validate because it has many signatures.

I was thinking about using miniscript policies to unlock the secure element. To unlock the secure element, you could use just a PIN code, or you could set it up so that you need signatures from other devices or one-time codes from some other authentication mechanism. We had to implement miniscript anyway. We are not restricted to bitcoin's sigop limits or anything here, so the secure element should be able to verify this miniscript with whatever authentication keys or passwords you are using. It could even be CHECKMULTISIG with the 15-key limit removed.
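As a toy illustration of the idea, not miniscript itself, an unlock policy over arbitrary authentication factors could be evaluated like this:

```python
# Hypothetical unlock policy: (PIN and yubikey) or (PIN and 2-of-3 backup codes).
policy = ("or",
          ("and", "pin", "yubikey"),
          ("and", "pin", ("thresh", 2, "code_a", "code_b", "code_c")))

def satisfied(node, provided):
    """Evaluate a policy node against the set of authentication factors presented."""
    if isinstance(node, str):
        return node in provided
    op, *args = node
    if op == "and":
        return all(satisfied(a, provided) for a in args)
    if op == "or":
        return any(satisfied(a, provided) for a in args)
    if op == "thresh":
        k, *subs = args
        return sum(satisfied(s, provided) for s in subs) >= k
    raise ValueError(op)

assert satisfied(policy, {"pin", "code_a", "code_c"})
assert not satisfied(policy, {"yubikey"})
```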

I will send Bryan the paper on linear secret sharing for Schnorr threshold signatures, where each share can be used to generate partial signatures that can then be recombined. Maybe the Pedersen paper on verifiable secret sharing from 1999. And then there is a response to that paper about how a party can bias the public key, though biasing the public key doesn't matter, but whatever. GJKR'99 section 4 has it. It is not a Shamir secret sharing scheme if you never assemble the private key; it is just a threshold signature scheme that happens to use linear secret sharing. Calling it something else is dangerous, because it might encourage people to implement SSSS where the key can actually be recovered.

Q: What if you get rid of the secure element, make the storage ephemeral, and wipe it on shutdown? Basically what Tails does. The user has to enter their mnemonic and passphrase. You could even store the mnemonic and only require a passphrase. Would it be easier and cheaper if we got rid of the secure element?

A: Well, what about the evil maid attack? Someone uploads certain firmware to your microcontroller. How do you verify it hasn't grabbed the passphrase?

Q: It's possible, but this evil maid has to come back and get access again in the future. But in the meantime, you could destroy it, or disassemble it and confirm and reinstall the software every time.

A: I really want a disposable hardware wallet. As long as the signature is valid, that is all you need. You would destroy the hardware device after using it.

Q: If you use this approach without a secure element, what about a hot wallet situation like for lightning?

A: If they don't have access to the hardware wallet, then it's fine. But if they do have access, then it will be even worse in the scenario without the secure element. It is using the private keys for cryptographic operations, and if that happens on a normal microcontroller, then you can observe the power consumption and extract the keys. It is quite hard to get rid of this. Trezor is still vulnerable to this problem. The Ledger guys discovered a side-channel attack where, when you derive a public key from your private key, you leak some information about your private key. They are working on fixing it, but basically, to derive a public key you need to unlock the device, and that means you already know the PIN code, so it is not a problem for them. But in an automated mode, if your device is locked but still doing cryptography on a microcontroller, then it is still a problem. I would prefer to only do cryptographic operations on a secure element. If we implement something that can store the keys or wipe the keys as well, and that can do crypto operations without side channels, then I think you can move to a normal developer board or an extended microcontroller and do everything there already. I think it is not purely a restriction, it is more like... it has some limitations because we cannot connect the display to the secure element, but these are tradeoffs we are thinking about. For disposables, I think it is perfectly fine to make a very cheap disposable thing. The microcontrollers used in Trezor cost like 2 dollars or something. So we can have a bunch of them, buy them in bulk from digikey or something, thousands of them, and then you put them on the chip and you have to do a bit of soldering. Ideally it would be a chip with a DIP package or a socket and you just put the controller in there. It would be nice to try it.

Q: If you are worried about an evil maid attack, you can use a tamper-evident bag. The evil maid could replace your hardware wallet, but in theory you would know because of the tamper-evident bag. You can use glitter, take a picture of it and store it, and when you come back to your device you check your glitter or something. Or maybe it has an internet connection, and if it is ever tampered with, it says hey, it was disconnected and I don't trust it anymore.

These fingerprint scanners on laptops are completely stupid. Besides, an easy place to find those fingerprints is the keyboard itself ((laughter)). You should encrypt your hard drive with a very long passphrase that you type in.

Statechains

https://btctranscripts.com/bitcoin-core-dev-tech/2019-06-07-statechains/

With statechains, you can transfer the private key to someone else, and then you make it so that only the other person can produce a signature. Ruben came up with an interesting construction where, on top of this, you can do lightning transactions where you only transfer part of this money, and you can do rebalancing and things. But it requires Schnorr signatures, blind signatures for Schnorr, and if it happens, it won't be for a while.

The way we can help with this is, we can provide functionality in the secure element that extracts the key and then - in such a way that you are sure the private key has really moved. You still need to trust the manufacturer, but you can diversify the trust, so that you don't have to fully trust the federation that is enforcing this policy.

Follow-up

https://bitcoin-hardware-wallet.github.io/

https://www.youtube.com/watch?v=rK0jUeHeDf0

https://twitter.com/kanzure/status/1145019634547978240

See also:

Background

A bit more than a year ago, I went through Jimmy Song's Programming Blockchain class. That is where I met M, who was the teaching assistant. Basically, you write a bitcoin library in python from scratch. The API of this library and the classes and functions that Jimmy uses are very easy to read and understand. I was happy with that API, and what I did next was, I wanted to move out of quantum physics to work on bitcoin. I started writing a library that had a similar API - I took an arduino board and wrote a similar library with the same features, plus additional things like HD keys and some other stuff. I wanted to make a hardware wallet that was easy to program, inspired by Jimmy's class. That is when we met and started CryptoAdvance. Then I did a refactored version that does not require arduino dependencies, so now it works with arduino, Mbed, bare metal C, with a real-time operating system, etc. I also plan to make a micropython binding and an embedded rust binding.

Introduction

Today I am only considering three hardware wallets: Trezor, Ledger and ColdCard. Basically everyone else has an architecture similar to one of these. You may know that Trezor uses a general-purpose microcontroller, like the ones used in microwaves or in our cars. These are more or less in every device out there. They made the decision to use just that, without a secure element, for a few reasons: they wanted to make the hardware wallet fully open source and be able to say with confidence what is running on it. I don't think they will put a secure element in their hardware wallet unless we develop an open source secure element. I think our community could make that happen. Maybe we could cooperate with Trezor and Ledger and at some point develop a secure element based on the RISC-V architecture.

Secure elements

I think we need several kinds of secure elements for different security models. You want to diversify risk. You want to use multisig with Schnorr signatures. You want different devices with different security models, and ideally each key should be stored on different hardware, with a different security model on each as well. How do vulnerabilities appear? It could be a badly designed protocol; hopefully you won't have a bug there, but sometimes hardware wallets fail at that. It could be a software vulnerability, where the people who wrote the software made a mistake, like overflows or implementation bugs or cryptographic primitives that are not very secure, like leaking information through a side channel. The actual hardware can be vulnerable to hardware attacks, like glitching. There are ways to make microcontrollers misbehave relative to their specification. There can also be hardware bugs, which happen from time to time, simply because the chip manufacturer can also make mistakes - most chips are still designed not automatically but by humans. When humans place transistors and optimize this by hand, they can also make mistakes. There is also the possibility of government backdoors, and that is why we want an open source secure element.

Some time ago there was talk about the instructions in x86 processors, where basically they have a specific instruction set that is not documented and not... they call it Appendix H. They share this appendix only with trusted people ((laughter)). Yes. These instructions can do weird things, we don't know exactly what, but one guy was able to find all the instructions. He was even able to escalate privileges from user level, not just to root level but to ring -2 level, complete control that not even the operating system has access to. Even if you run tails, it doesn't mean your computer is stateless. There is still a lot of crap running below the operating system that your operating system doesn't know about. You should be careful about that. On librem computers, they not only have PureOS but also Qubes that you can run, and they also use the -- which is also open -- basically you can... verify that it is booting the real tails. The librem tool is called heads, not tails. You should look at that if you are particularly paranoid.

Librem computers have several options. You can run PureOS, or you can run Qubes or Tails if you want. The librem key checks the bootloader.

Decapping

Ledger hardware wallets use a secure element. They also have a microcontroller. We have to talk about two different architectures. Trezor only uses a general-purpose MCU. Then we have ColdCard, which uses a general-purpose MCU and additionally adds a secure element... I wouldn't call it a secure element, but it is secure key storage. It is the thing available on the market; the ColdCard guys were able to convince the manufacturer to open source this secure key storage device. So we hope we know what is running on the microcontrollers, but we cannot verify it. If they hand us the chip, we cannot verify it. In theory we could do decapping. With decapping, imagine a chip: you have some epoxy over the semiconductor and the rest of the space is just wires going into the device. If you want to study what is inside the microcontroller, what you do is put it in the laser cutter and first make a hole in the device, and then you can apply nitric acid heated to 100 degrees, and it will dissolve all the plastic around it. Then you have a hole down to the microcontroller, and you can put it under an optical microscope or an electron microscope or whatever you have and really study the whole surface there. There was an ATmega that someone decapped and it was still able to run. There was a talk at defcon where some guys showed how to make DIY decappers. You could take a trezor or another hardware wallet and put it in a fixture, then just aim the stream of nitric acid at the microcontroller and it dissolves the plastic, but the device itself can still work. So while it is doing some cryptography there, you can get down to the semiconductor level, put it under the microscope and observe exactly how it works. Then, when the microcontroller is running, you can see how the... not only with the microscope, but also... like when a transistor switches from 0 to 1, there is a small chance it emits a photon. So you can observe the emitted photons, and there is probably some information about the keys given away by that. Eventually you could extract the keys. You cut away most of the plastic and then apply the nitric acid to get down to the semiconductor level. In this example, the guy was looking at the input and output buffer of the microcontroller. You can also look at individual registers. It is slightly different for secure elements or secure key storage, though. They put engineering effort in on the hardware side to make sure it is not easy to do any decapping. When cryptography is happening on the secure element, they have certain regions that are dummy parts of the microcontroller. So they are operating and doing something, but they are trying to mislead you about where the keys are. They have a bunch of other interesting things in there. If you are working with security-focused chips, it is much harder to figure out what is going on in there. The other thing is that on the ColdCard the key storage device is fairly old, which is why the manufacturer was more willing to open it up. If we are able to see what is running there, that means the attacker will also be able to extract our keys from there.
So being able to verify the chips also shows that it is not secure for users. So being able to verify with decapping might not be a good thing. So it's complicated.

Secure key storage element (not a secure element)

Normally the microcontroller asks the secure storage to hand over the key, the key is moved to the main microcontroller, the cryptographic operations happen there, and then it is erased from the microcontroller's memory. How do you get this key out of the secure key storage? Obviously, you have to authenticate. On the ColdCard, you do that with a PIN code. How do you expect the wallet to behave when you enter a wrong PIN code? On Trezor, a counter is increased and the delay between PIN entries grows; or like on Ledger, where you have a limited number of entry attempts before your secrets are wiped. ColdCard uses the increasing-delay mechanism. The problem is that this delay is enforced not by the secure element but by the general-purpose microcontroller. To brute force the correct PIN code, if you can somehow prevent the microcontroller from increasing the delay, then you would be able to brute force the PIN. To communicate with the key storage, the microcontroller has a secret stored here, and whenever it uses the secret to talk to the key storage, the key storage will respond. If the attacker can get this secret, then he can throw away the microcontroller, use his own equipment with that secret, and try all the PIN combinations until he finds the right one. This is due to a choice they made where basically you can have any number of attempts at the PIN code against the secure element. This particular secure key storage actually has an option to limit the number of PIN entry attempts. But the problem is that it is not resettable. That means that over the whole lifetime of the device you can have a certain number of wrong PIN entries, and when you hit that limit you can throw the device away. This is a security tradeoff they made. I would actually prefer to set this limit to, say, 1000 PIN codes, or 1000 attempts; I doubt I would fail to enter my PIN code 1000 times. On the ColdCard, they use a nice approach for PIN codes where they split it into two parts. You can use an arbitrary length, but they recommend something like 10 digits. They show you some verification words, which help you verify that the secure element is still the right one. So if someone swaps your device for another one, in an evil maid attack, you would see different words and you can stop entering the PIN. In fact, there was an attack on the ColdCard to brute force it. The words are deterministic from the first part of your PIN code. Maybe you try several digit prefixes, write down the words, build a table of this, and then when you do the evil maid attack, you load that table so it shows the right information to the user. You have to brute force the first digits of the PIN to get those words, but the later words are hard to figure out without knowing the PIN code.

Evil maid attacks

No matter which hardware wallet you have, even a Ledger with the best hardware - say I take it, put it in my room, and put a similar device in its place that has wireless connectivity to an attacker's computer; then it is a bridge to the real Ledger, and I can have this two-way communication and do a man-in-the-middle attack. Whatever the real Ledger displays, I can display on this device and fool the user. I can become the ultimate man in the middle here. Whatever the user enters, I can see what they enter, I know the PIN code, and then I can do everything. There are two ways to mitigate this attack - you can use a Faraday cage whenever you are using the hardware wallet, and the second way is to enforce it in hardware, and I think the Blockstream guys suggested this - you have a certain limit on the speed of light, so you cannot communicate faster than light, right? Your device is here. If you are communicating with this device here, then you can require the response to arrive within a few nanoseconds. Then it is very unlikely that - it is not possible by the laws of physics to get the signal to your real device and back.

What if you use an encrypted communication channel with the hardware wallet? The attacker is trying to get the PIN code. It is still vulnerable. Instead of entering the PIN code, you use a computer instead - you have your hardware wallet connected to it, and in the potentially compromised situation we have this malicious attacker who has wireless connectivity to the real hardware wallet. Let's say our computer is not compromised. Over this encrypted channel, you can get the mask for your PIN code, like some random numbers you have to add to your PIN code before entering it into the device. Your MITM doesn't know this unless he also compromises your computer. Or a one-time pad situation. Each time you enter the PIN code, you enter the PIN code plus this number. It could work with a one-time pad. When you are operating your hardware wallet, you probably want to sign a particular transaction. If the attacker is able to replace this transaction with his own, and show you your own transaction, then you are screwed. But if the transaction is passed encrypted to the real hardware wallet, then he cannot replace it, because it would be unauthenticated.
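The one-time-pad masking could look something like this sketch, where the mask is agreed over the encrypted channel and the MITM only ever sees the masked digits:

```python
import secrets

def fresh_mask(n_digits: int):
    # One-time mask shared over the encrypted channel before each PIN entry.
    return [secrets.randbelow(10) for _ in range(n_digits)]

def mask_pin(pin: str, mask):
    # What the user actually types on the possibly-MITMed keypad.
    return [(int(d) + m) % 10 for d, m in zip(pin, mask)]

def unmask_pin(entered, mask) -> str:
    # The real hardware wallet removes the mask; the MITM only sees `entered`.
    return "".join(str((d - m) % 10) for d, m in zip(entered, mask))

mask = fresh_mask(6)
typed = mask_pin("123456", mask)
assert unmask_pin(typed, mask) == "123456"
```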

Ephemeral, disposable hardware wallets

Another way to get rid of secure key storage is to make the whole hardware wallet ephemeral, so that you focus on keeping the seed and the passphrase safe and you enter them every time. The hardware wallet is then disposable. You might never use that hardware wallet again. I was thinking about this with respect to secure mnemonic generation on a consumer device. If you have a disposable microcontroller, but everything else we are sure has no digital components or memory, then each time we could replace the microcontroller with a new one and just smash the old one with a hammer. If you already remember the mnemonic, well, the PIN discourages you from remembering it. If you use disposable hardware, and it is not stored in the vault, then when someone gets into the vault they see nothing and they don't know what your setup actually is.

Offline mnemonic generation and Shamir secret sharing

I prefer 12-word mnemonics because they are easier to remember and still have good entropy, like 128 bits. I can still remember the 12 words. I would back it up with a Shamir secret sharing scheme. We take our mnemonic and split it into shares. If you remember the mnemonic, then you don't need to go recover your shares. Not all the shares are required to recover the secret; you can configure how many shares are required.
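A toy 2-of-3 split of the 128 bits of entropy behind a 12-word mnemonic might look like this (this is just the idea, not SLIP-39 or any standardized share format):

```python
import secrets

P = 2**255 - 19   # a prime comfortably larger than the 128-bit secret

def split_2_of_3(secret_int: int):
    """Degree-1 polynomial f(x) = secret + a*x; shares are (i, f(i)) for i = 1..3."""
    a = secrets.randbelow(P)
    return [(i, (secret_int + a * i) % P) for i in (1, 2, 3)]

def recover(share1, share2) -> int:
    """Lagrange interpolation at x = 0 from any two shares."""
    (x1, y1), (x2, y2) = share1, share2
    l1 = (-x2) * pow(x1 - x2, -1, P) % P
    l2 = (-x1) * pow(x2 - x1, -1, P) % P
    return (y1 * l1 + y2 * l2) % P

entropy = secrets.randbits(128)          # the entropy behind a 12-word mnemonic
shares = split_2_of_3(entropy)
assert recover(shares[0], shares[2]) == entropy   # any single share alone reveals nothing
```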

If you are worried about your random number generator, then you should add user entropy. If you have a tampered random number generator and you are adding user entropy anyway, then it doesn't matter. Older laptops can be a problem, like 32-bit machines, and wallets might not support 32-bit CPUs anymore or something. If your wallet was written for python2.6... now you have to write something to handle big integers, etc. That is a lot of bitrot in just 5-7 years.

Regarding offline mnemonic generation or secure mnemonic generation... using a dart board, using dice, do you trust yourself to generate your own entropy? I am thinking about something like this... we have true random number generators on the chips; we can ask the chips for the random numbers, then use those numbers and then show them to the user. The word and the corresponding index. We also know that circuits degrade over time, so random number generators could become compromised in a predictable way based on hardware failure statistics.

Verifying the entropy

You can roll some dice and then take a picture. If I am building a malicious hardware wallet and I want to steal your bitcoin, this is by far the easiest way. Maybe the hardware is great, but the software I am running is not actually using that hardware random number generator. Or maybe there is a secret known to the attacker. You wait a few years, and then you have a nice retirement account. You could also apply a secure protocol that lets you use a hardware wallet even if you don't trust it. You can generate a mnemonic with user entropy and verify that the entropy was actually used.
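One simple shape such a protocol could take (a sketch, not any particular wallet's scheme): the device commits to its entropy before seeing yours, the final seed mixes both, and a second machine can later check that the commitment was honored:

```python
import hashlib, secrets

# Device side: commit to its own entropy before the user contributes anything.
device_entropy = secrets.token_bytes(16)
commitment = hashlib.sha256(device_entropy).hexdigest()   # shown to the user first

# User side: dice rolls, coin flips, etc., entered after seeing the commitment.
user_entropy = b"dice rolls go here"

# Final seed mixes both contributions.
seed = hashlib.sha256(device_entropy + user_entropy).digest()

# Verification on an independent machine: the device reveals device_entropy,
# and you check it matches the commitment and reproduces the seed.
assert hashlib.sha256(device_entropy).hexdigest() == commitment
assert hashlib.sha256(device_entropy + user_entropy).digest() == seed
```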

Signing that leaks private keys through non-random nonces chosen by the hardware wallet

If I am still an evil manufacturer of the hardware wallet, and I was forced by the community to include this entropy verification mechanism, and I also want the community to like me, then we say we have a fully airgapped solution with QR codes and cameras... and we say you can use any software wallet you want because it works with everything. Now you have a setup like this: say you have an uncompromised computer, only a compromised hardware wallet that was manufactured by evil means. You prepare the unsigned transaction here, you pass it to the hardware wallet using the QR codes. The hardware wallet then shows you the information and you verify that everything is correct, and you verify it on the computer too. Then you sign it, and you get back the signed transaction. Then you get a perfectly valid bitcoin transaction, you verify it is correct and broadcast it to the network. It is the nonce attack. Yes, exactly. The problem is that the signature in bitcoin has two numbers, and one of them is a random nonce. Our hardware wallet can choose a deterministically derived nonce or a random nonce to blind the private key. If this nonce is chosen insecurely, either because you are using a bad random number generator that is not producing uniformly random values, or because you are evil, then just by looking at the signatures I will be able to get information about your private key. Something like this happened with yubikey recently. There were FIPS-certified yubikeys and they were leaking your private keys within 3 signatures. The stupid thing is that they introduced the vulnerability by mistake while preparing their device for the certification process, which asks you to use random numbers, but you don't actually want to use random numbers: you want to use deterministic nonce derivation because you don't trust your random numbers. You can still use random numbers, but you should use them together with deterministic nonce derivation. Say you have your private key that nobody knows, and you want to sign a certain message... ideally it should be HMAC something... This is the nicest way, but you cannot verify that your hardware wallet is actually doing this. You would need to know your private key to verify it, and you don't want to put your private key on some other device. Also, you don't want your device to be able to switch to malicious nonce generation. You want to make sure your hardware wallet cannot choose arbitrary random nonces. You can force the hardware wallet to use not only the numbers it likes, but also numbers that you like.
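A simplified stand-in for deterministic nonce derivation, just to show the shape (real wallets use the full RFC6979 HMAC-DRBG construction rather than a single HMAC call):

```python
import hmac, hashlib

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def deterministic_nonce(privkey: int, msg_hash: bytes) -> int:
    """Same key + same message -> same nonce, so there is no RNG to sabotage."""
    data = privkey.to_bytes(32, "big") + msg_hash
    k = int.from_bytes(hmac.new(b"nonce", data, hashlib.sha256).digest(), "big") % N
    return k or 1   # avoid the degenerate k = 0 case
```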

You could send the hash of the random number you are going to use, in such a way that the hardware wallet does not need to use an RNG; it can use a deterministic algorithm to derive this R value. It would be a reasonably secure communication scheme against either a compromised software wallet or a compromised hardware wallet, but not both at the same time. One of them being fine should be enough.
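The message flow of that commit-and-reveal idea might look like the sketch below; integers stand in for the public nonce points, so this only shows the ordering of the messages, and in a real protocol the host's check happens on R = kG rather than on k itself:

```python
import hashlib, secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def H(*parts) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

# 1. The host commits to its randomness before the wallet reveals anything.
host_rand = secrets.token_bytes(32)
host_commitment = hashlib.sha256(host_rand).digest()          # sent to the wallet

# 2. The wallet derives its nonce deterministically from key, message and commitment,
#    so it has no freedom left to pick a key-leaking value.
privkey = secrets.randbelow(N)
msg_hash = hashlib.sha256(b"unsigned tx").digest()
k_wallet = H(privkey.to_bytes(32, "big"), msg_hash, host_commitment)

# 3. The host reveals host_rand and the effective nonce mixes both contributions.
k_final = (k_wallet + H(host_rand)) % N

# In the real protocol the wallet publishes R_wallet = k_wallet*G before seeing
# host_rand, and the host checks R_final = R_wallet + H(host_rand)*G, which it can
# verify without ever learning any of the secret nonces.
```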

Currently, all hardware wallets ignore this. I am preparing a proposal to include this field in the bip174 PSBT. Our hardware wallet will definitely support it. I want to build a system where you don't need to trust the hardware wallet too much, even if it is compromised or has bugs. All hardware wallets get hacked from time to time.

With fake keys, you can check that the signatures are generated deterministically, make sure that is the case, and then maybe you feel safe with the hardware wallet. But the switch from this deterministic algorithm to the malicious one can happen at any moment. It could be triggered by a firmware update or some phase of the moon, you can't be sure.

Another solution is to use verifiable generation of these random nonces; I think this was proposed by Pieter Wuille. For this you need a particular hash function that supports zero-knowledge proofs that you were using this algorithm, without exposing your private keys. The problem here is that it's a very heavy computation for a microcontroller, so you're probably not going to fit it into a microcontroller.

There is also an idea about using sign-to-contract as a countermeasure against nonce covert channels.

Ledger and multisig

If you have p2sh multisig and this wallet was only one of the signers, then even if it was malicious it doesn't control all the keys… and ideally you're using different hardware for the other keys. The problem with multisig is that… well, Trezor supports it pretty well. I'm very happy with Trezor and multisig. ColdCard released firmware that supports multisig a day ago. Ledger has a terrible multisig implementation. What I would expect from the wallet is that it shows, when you're using multisig, your bitcoin address, and the amount you're actually sending or signing. What is the change output? What are the amounts? With Ledger multisig, you always see two outputs and you don't know which one you're spending and which one is the change address, if there even is one. With two Ledgers in a multisig setup, you are less secure than with a single Ledger. If anyone wants to make a pull request to the Ledger bitcoin app, please do. It's there; people have been complaining about this issue for more than a year, I think. Ledger is not very good at multisig.

Practicality

I know of a few supply chain attacks, but those relied on users doing stupid things. I don't know of any attacks targeting the hardware itself. Right now the easiest way to attack someone's hardware wallet is to convince them to do something stupid. I think at the moment there aren't enough hackers working in this field. But the value is going to go up. There have definitely been attacks on software wallets.

walletrecoveryservices.com sells something like this as a hardware wallet recovery service. The editor of Wired magazine lost his PIN code or something, and it took a lot of work to get the data off the device. So that's not a malicious attack, but it's still an attack.

You should be wary of closed-source, unbranded, third-party hardware devices. How can you trust any of that?

Right now it may be easier to get cryptocurrency by deploying malware, or launching a new altcoin or hard-forking another altcoin or something. Those are the easier ways. Right now it's easy to target lightning nodes and take the money there; you know their public addresses and how much money they have, so you know that if you can get to the server then you can recoup your attack cost. So those targets are much more obvious and easier to attack at this point. There are easier remote attacks at this point than attacking hardware wallets.

Downsides of microcontrollers

https://btctranscripts.com/breaking-bitcoin/2019/extracting-seeds-from-hardware-wallets/

How do secure elements work? It sounds like a silver bullet that does everything, right? I can tell you the difference between normal microcontrollers and secure elements. Normal microcontrollers are built for speed, efficiency and ease of development. There are so-called security bits that get set when you finish programming the microcontroller. When the microcontroller boots, you would imagine it should boot with no-read/no-write permissions, then check the security bits to see whether you should be able to talk to it, and only then allow read/write access. But sometimes it's done the other way around, where the microcontroller starts in open read-write mode, then checks the security bits and then locks itself down. The problem is that if you talk to the microcontroller before it is able to read those bits, then you might be able to extract a single byte from the microcontroller's flash memory. You can keep doing this by rebooting over and over; if you're fast and the microcontroller is slow, you can do this even faster. I think this is what Ledger references in all their talks: this "unfixable attack" on all microcontrollers, like Trezor's and others. I think it's related to this, because this is exactly what is broken by design and can't be fixed, simply because the system evolved that way. No, they don't need to use low temperatures here. They just need to be faster than the microcontroller, which is easy because the microcontrollers used in hardware wallets run at around 200 MHz. So if you use a GPU or a modern computer you could do something.

So the threat is that you can read the microcontroller's memory before it can lock itself down? The problem is that you can read the whole flash memory. This means that even if it's encrypted, you have a key somewhere to decrypt it. What is stored in flash memory? What is being protected here? Some devices hold secret keys. All IoT devices probably have your wifi password. There are plenty of different secrets you might want to protect. The decryption key could, in theory, be stored somewhere else. It seems the threat is that the microcontroller can expose the data written to it, and maybe you care about that because it's proprietary data, or a secret, or a plaintext bitcoin private key, and then it's a big problem. If you have a microcontroller in some computing device that you programmed yourself… that threat seems less interesting. Yes, that's why most people don't worry about the microcontroller. In consumer devices, people usually do things that are even easier. They forget to disable the debug interface, and then you have full direct access to the microcontroller with something like JTAG. So you can read the flash memory or all these other things, and reprogram it if you want.

There's also the JTAG interface. And there's another standard, the serial wire debug (SWD) interface. These interfaces are used for debugging. During development they let you see and fully control the entire microcontroller: you can set breakpoints, you can watch memory, you can drive all the pins around the device. You can do whatever you want using these interfaces. There's a way to disable this, and that's what hardware wallet manufacturers do. Or there's another security bit, but again the security bit isn't checked at boot time but a bit later, so it's another race condition. Ledger forgot to disable the JTAG interface on the microcontroller that drives their screen, some time ago. But they still had a secure element, so it wasn't a big deal. Yes, disabling it is a software thing. All the security bits are software measures. You just set flags in your firmware to disable certain features.

Also, normal microcontrollers are designed to work under certain conditions: within a temperature range from here to here, with a supply voltage of, say, 1.7 volts plus or minus 0.2 volts, and with a clock speed within a certain range. What happens if this environment changes? You can get undefined behavior, which is exactly what the attacker wants. What this means is that a microcontroller operated beyond those limits might skip instructions, make wrong calculations, reboot, or do many other strange things.

As an example, one of the attacks on Trezor hardware wallets that used this stuff was… when the Trezor is connected to the computer over USB, the computer asks the device: who are you, and how can I help you? What kind of device are you? The Trezor says, I'm a Trezor model such-and-such, for example. What the attacker was able to do, even before the hardware wallet was unlocked, since this data is sent to the computer… Trezor is basically checking what length it should send to the computer… and this length is computed during certain instructions. If you glitch the microcontroller at exactly that moment and make this computation do something random, like return more than the 14 bits the hardware wallet expects, you get not just the "Trezor model X" information, but you could also get the mnemonic and the full contents of memory. The model information was stored right next to the mnemonic. They have fixed this though. Right now you have to unlock the Trezor with the PIN; it doesn't send any data until it's unlocked. There's also a non-readable part of memory that the microcontroller can't read; so if there's an overflow, it throws an error and can't read those bits. So that's also a good approach. They have made a lot of fixes recently. This was a software fix, not a hardware fix.

The hardware wallet's mnemonic phrase was stored in plaintext and the PIN verification was vulnerable to a side-channel attack. Another big class of attacks on microcontrollers is side-channel attacks. When microcontrollers compare numbers, they can leak information just by consuming different amounts of power, or by taking a slightly different amount of time to compute something. Trezor was vulnerable to this too, some time ago, in particular in the PIN code verification. They were verifying it by taking the entered PIN and comparing it with a stored PIN. This comparison was taking different numbers of cycles, different patterns were causing different emissions - by observing this side-channel emission from the microcontroller, LedgerHQ was able to distinguish between different digits in the PIN. They built a machine learning system to tell them apart, and after trying 5 different PINs this program was able to tell your real PIN. 5 PINs was still feasible in terms of the retry delay, so it can be done in a few hours. This was also fixed. Now the PIN codes aren't stored in plaintext; instead the PIN is used to derive a decryption key that decrypts the mnemonic phrase. This way is nicer because even if you enter a wrong PIN, the decryption key is wrong, you can't decrypt the mnemonic, and it isn't vulnerable to any kind of side-channel attack.
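
For illustration, a tiny sketch of the general technique (not Trezor's code): a naive equality check is not guaranteed to run in constant time, while a constant-time comparison does not leak how many characters matched; better still is not storing the PIN at all, as described in the best-practices section later.

```python
import hmac

def naive_pin_check(entered: str, stored: str) -> bool:
    # Bad: '==' is not guaranteed to take constant time, so timing or power
    # traces can leak how close the guess was.
    return entered == stored

def constant_time_pin_check(entered: str, stored: str) -> bool:
    # Better: constant-time comparison. Better still: don't store the PIN,
    # derive the mnemonic decryption key from it instead.
    return hmac.compare_digest(entered.encode(), stored.encode())
```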

On the hardware side, for microcontrollers there are race conditions, side-channel attacks, manipulation of the operating environment, and debug interfaces. You can also decap the chip and shoot it with lasers or something, and make it behave strangely. So all of this can be exploited by an attacker.

Back to secure elements

On the other hand, what does a secure element do? They are similar to microcontrollers, but they don't have debug interfaces. They don't have read/write flags. They also have a bunch of different countermeasures against these attacks, for example hardware measures. There's a watchdog that monitors the voltage on the power supply pin, and as soon as it sees it drop below some value, it triggers an alarm and you can wipe the keys as soon as you see this happening. Or you just stop operating. If you see the supply voltage varying, you just stop operating. If you see the temperature varying too much, you can also stop operating. You can stop, or wipe your secrets. There's also a mechanism that lets the chip detect that you're trying to decap it, such as a simple light sensor. If you decap the chip, you expose the semiconductor, it sees a lot of light coming in and it stops operating. Here you definitely want to wipe your secrets and erase them all. They also use a lot of interesting techniques against side-channel attacks. For example, they don't just aim for constant power consumption and constant timing; they additionally introduce random delays and random noise on the power lines, which makes it harder and harder for the attacker to get data out of there. Also, they usually have very limited capabilities in terms of pins. They have a power pin, ground, maybe a few more to drive something simple like the LED on the ColdCard; or on the modern Ledger Model X they're actually able to talk to the display controller to drive the screen, which is a nice hardware improvement. In principle, it's not very capable. You can't expect the secure element to drive a big display or react to user input. A button is probably fine, but it's definitely not the greatest.

The reason the Ledger has a tiny screen with low resolution is because they're trying to drive everything from the secure element, at least on the Ledger Model X. Previously it wasn't like that: they had a normal microcontroller talking to the secure element, where you unlock it and then it signs whatever you want. And then this microcontroller drives the screen. This was actually a big point that was raised by — … this architecture isn't perfect because you have this man in the middle controlling the screen. You have to trust that your hardware wallet has a trusted display, but with this architecture you can't, because there's a man in the middle. It's hard to mitigate this and figure out how to trade off full security against usability. I hope you get an idea of why secure elements are actually secure.

Problems with secure elements

However, there are some problems with secure elements. They have all these nice anti-tamper mechanisms, but they also like to hide other things. Common practice in the security field is that when you close-source your security solution, you get some extra points in security certifications like EAL5 and other standards. Just for closing the source of what you wrote or what you made, you get extra points. So now we have the problem that we can't really find out what is running inside these devices. If you want to work with a secure element, you have to be big enough to talk to these companies and get the keys needed to program it. And you also have to sign their non-disclosure agreement. Only then would they give you the documentation; and then the problem is that you can't open-source what you've written. The alternative is to use a secure element running a [Java Card operating system](https://en.wikipedia.org/Java_Card), which is something like a subset of Java developed for the banking industry, because bankers like Java for some reason. Basically they have this Java virtual machine that can run your applet… so you have no idea how the thing underneath is operating, you just trust them because it's certified and it's been around for 20-30 years, and we know all the security research institutes are trying really hard to get even a single… and then you can fully open-source the Java Card applet that you upload to the secure element, but you don't know what is running underneath it. Java Card is considered a secure element, yes.

By the way, the most secure Java cards or secure elements were usually developed for the… like when you buy a pay-TV card, you have a secret on it that allows you to watch television on a certain account. They were very good at protecting secrets because they had the same secret everywhere. The signal comes from space, from the satellite, and the signal is always the same. You're forced to use the same secret in every device. This means that if even one is hacked, you get free TV for everyone, so they put a lot of effort into securing this kind of chip, because as soon as it's hacked you're really screwed and you have to replace all the devices.

Also, let's talk about the Sony PlayStation 3 key compromise attack. They use ECDSA signatures, and all the games aren't supposed to be free, right? The only way to get a game to run is to have a proper signature on the game, so Sony's signature. The problem is that apparently they didn't hire a cryptographer, or anyone decent at cryptography… they implemented the digital signature algorithm in a way that reused the same nonce over and over. It's the same problem as with the hardware wallets we described today. If you're reusing the same nonce, then I can extract your private key just by having two of your signatures. Then I can get your private key and run any game I want, because I have Sony's private key. That was the Sony PlayStation 3. I think it was the fastest hack of a game console.
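
The underlying algebra fits in a few lines. The sketch below (toy numbers, not the real Sony keys) recovers an ECDSA private key from two signatures that share the same nonce, using only modular arithmetic: k = (z1 - z2)/(s1 - s2) and d = (s1*k - z1)/r, all mod the curve order.

```python
# Toy demonstration of why ECDSA nonce reuse is fatal.
# ECDSA: s = k^-1 * (z + r*d) mod n, where z is the message hash,
# d the private key, k the nonce and r the x-coordinate of k*G.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

d = 0x1234567890ABCDEF   # pretend private key
k = 0xDEADBEEF           # the nonce, reused for both signatures
r = 0x42                 # in reality the x-coordinate of k*G; any value works for the algebra
z1, z2 = 111111, 222222  # two different message hashes

inv = lambda x: pow(x % N, -1, N)    # modular inverse (Python 3.8+)
s1 = (inv(k) * (z1 + r * d)) % N
s2 = (inv(k) * (z2 + r * d)) % N

# The attacker sees (r, s1, z1) and (r, s2, z2) with the same r:
k_rec = ((z1 - z2) * inv(s1 - s2)) % N
d_rec = ((s1 * k_rec - z1) * inv(r)) % N
assert (k_rec, d_rec) == (k, d)
print("recovered private key:", hex(d_rec))
```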

QR code wallets

Unidirectional information flow as a constraint. Bits cannot flow backwards. The only place a decrypted key should ever live is on a running hardware wallet. There is a python implementation of slip39. Chris Howe contributed a slip39 library in C.

With multisig and each key sharded, can you mix the shards from the different keys, and is that safe?

The QR code is json -> gzip -> base64 and it fits something like 80-90 outputs, which is fine. Animated QR codes could be cool, but there are some libraries, like colored QR codes, that give you a boost. It's packetization. Are you going to ask the user to show certain QR codes in a certain order, or is it going to be negotiated graphically between the devices? You can use high-contrast cameras.
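
As a concrete illustration of that encoding pipeline (a generic sketch with a made-up transaction structure, not any specific wallet's exact format): the payload is serialized as JSON, compressed with gzip, then base64-encoded so it fits in a QR code.

```python
import json, gzip, base64

def encode_qr_payload(tx_like: dict) -> str:
    """json -> gzip -> base64, as described above."""
    raw = json.dumps(tx_like, separators=(",", ":")).encode()
    return base64.b64encode(gzip.compress(raw)).decode()

def decode_qr_payload(payload: str) -> dict:
    return json.loads(gzip.decompress(base64.b64decode(payload)))

# Round-trip example with a made-up unsigned-transaction structure.
tx = {"inputs": [{"txid": "ab" * 32, "vout": 0}],
      "outputs": [{"address": "bc1q...", "amount": 10_000}]}
assert decode_qr_payload(encode_qr_payload(tx)) == tx
```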

Printed QR code… each packet can say it is 1-of-n, and as it reads them, it figures out which one each is, and then figures out which ones are done or not done yet.

The signatures could be bundled into one bigger output QR code on the device. So it's still not a big bottleneck. Packetized QR codes are an interesting area. When you parse the QR code from Christopher Allen's material, the QR code says what it says.

Recent attacks

There have been some recent attacks showing that even if you're using secure hardware, it doesn't mean you're secure. When there's an attacker who can get to your device, then you're in trouble and they can pull off nasty attacks against the microcontroller. Another idea is to wipe the device every time. There are wallets that use secure elements, like the Ledger hardware wallet.

On the hardware side, it's wonderful. The team is strong on the hardware side. They come from the security industry. They know secure element certification, they know how to hack microcontrollers, and they keep showing interesting attacks on Trezor and other random wallets. They are extremely good on the hardware side, but that doesn't mean they can't screw up on the software side. In fact, it has happened a few times.

One of the scariest attacks on Ledger happened late last year, and it was related to the change address. When you're sending money to someone, what you expect is that you have your inputs, say inputs of… one bitcoin, say, and then you normally have two outputs: one is the payment and the other is the change output. How do you verify that this is the change address? You should be able to derive the corresponding private key and the public key that will control that output. If you get the same address for the change output, then you're sure the money comes back to you. Normally what you do is provide the derivation path for the corresponding private key, because we have this hierarchical deterministic tree of keys. So the hardware wallet just needs to know how to derive the key. So you also send the wallet the derivation path, like the bip32 derivation path. Then the hardware wallet can derive the corresponding key and see that exactly this output will be controlled by this key, so it has the correct address. …. So what Ledger did is they didn't do the verification… they just assumed that if there was an output with some derivation path attached, then it's probably correct. This means the attacker could replace the address of this output with any address, just attach any derivation path, and all the money could go to the attacker: when you're sending a small amount of bitcoin, all the change goes to the attacker. It was disclosed last year, and it was discovered by the Mycelium guys because they were working on transferring funds between different accounts on Ledger and found that it was somehow too easy to implement this on Ledger, so something must be going wrong here, and they discovered the attack. It was fixed, but who knows how long the problem was there. From the hardware wallet's perspective, if someone doesn't tell me it's a change output or prove it to me, then I should say it's not a change output. This was one problem.
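
The fix is conceptually simple: the device must actually derive the key at the claimed path from its own seed and check that it matches the output, rather than trusting the mere presence of a derivation path. A sketch of that check, with hypothetical derive_pubkey() and script_for_pubkey() helpers standing in for real bip32 derivation and script construction:

```python
def derive_pubkey(master_seed: bytes, path: str) -> bytes:
    """Hypothetical stand-in for real bip32 derivation from the device's own seed."""
    raise NotImplementedError

def script_for_pubkey(pubkey: bytes) -> bytes:
    """Hypothetical stand-in: build the scriptPubKey (e.g. p2wpkh) for a pubkey."""
    raise NotImplementedError

def is_really_my_change(master_seed: bytes, output_script: bytes, claimed_path: str) -> bool:
    # Vulnerable behaviour: "a derivation path is present, so it must be change".
    # Correct behaviour: derive the key at that path ourselves and compare.
    my_pubkey = derive_pubkey(master_seed, claimed_path)
    return script_for_pubkey(my_pubkey) == output_script
```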

There was also a smaller problem, because they didn't read the microcontroller documentation. The problem was, how did they verify the firmware running on this microcontroller? Basically… when we have our new firmware, the Ledger has a specific region in memory where there's a particular magic sequence of bytes; for Ledger it was some magic hex number. So they store that there. What happens is that when you update the Ledger's firmware, the Ledger first erases this, then flashes the firmware, and then at the end verifies whether the signature on this firmware is correct. If the signature was generated by Ledger's key, then they put this magic number back in the register, and then you're able to boot this firmware and get it running. Sounds fine, right? If you provide a wrong signature, then these magic bytes are all zeroes at that point, so it wouldn't run this firmware, it would just roll back to the previous firmware. The problem is that if you read the microcontroller documentation, you see there are two different addresses for accessing the memory region where they store these magic bytes. One of them was completely locked against external read-write, so if you tried to write to these registers you would fail, because only the microcontroller could do that. But there was another one that accessed the same memory region and you could write any bytes there, and then you could make the microcontroller run any firmware you gave it. Someone was able to play a game of Snake on the Ledger hardware wallet as a result of this. If you get control of the screen and the buttons, with custom firmware, you can hide arbitrary outputs. You can fool the user in different ways, because you're controlling what the user sees when they're signing. So I think it's a pretty big problem. It's a hard problem to exploit, but it's still a problem.

Another super serious screw-up happened with Bitbox… do you know it? Some wallets have a nice hidden wallet feature. The idea is that if someone grabs your hardware wallet and tells you to please unlock it, otherwise they'll hit you with a wrench, you'll probably unlock it, and then spend the money to the attacker because you don't want to die. The hidden wallet feature is supposed to secure your money such that there's also a hidden wallet the attacker doesn't know about, so they only get a fraction of your money. You use the same mnemonic but a different passphrase, usually. The Bitbox guys did it slightly differently, and it was a bad idea to reinvent the protocol with their own rules. So what they did is, you have this master private key as a bip32 xprv. It has a chaincode and the private key in there. And when you have the master public key, you have the same chaincode and just the public key corresponding to this private key. Given the master public key, you can derive all your addresses but not spend, and if you have the private key you can spend. For the hidden wallet they used the same material, but with the chaincode and the key swapped. So that means that if your software wallet knows the master public key of both the normal wallet and the hidden wallet, then it basically knows both the chaincode and the private key, so it can get all your money. If you're using this hidden wallet feature, then you're screwed.
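
To see why swapping the two fields is fatal, here is a toy sketch (simplified field names and a stub pubkey function, not the actual BitBox code): once the host has seen the "public" halves of both the normal and the hidden wallet, the hidden wallet's chaincode field hands it the normal wallet's master private key.

```python
# Toy model of a bip32 master key: a 32-byte private key plus a 32-byte chaincode.
normal_xprv = {"key": b"\x01" * 32, "chaincode": b"\x02" * 32}

def pubkey_of(privkey: bytes) -> bytes:
    """Hypothetical stand-in for privkey -> pubkey on secp256k1."""
    return b"pub:" + privkey

def xpub_of(xprv: dict) -> dict:
    # The "public" half: same chaincode, private key replaced by the public key.
    return {"pubkey": pubkey_of(xprv["key"]), "chaincode": xprv["chaincode"]}

# The flawed hidden-wallet construction: chaincode and key swapped.
hidden_xprv = {"key": normal_xprv["chaincode"], "chaincode": normal_xprv["key"]}

# A software wallet that has seen both xpubs now knows the normal chaincode
# (from the normal xpub) *and* the normal private key (it sits in plain sight
# in the hidden xpub's chaincode field), i.e. the full normal xprv.
leaked = xpub_of(hidden_xprv)["chaincode"]
assert leaked == normal_xprv["key"]
```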

Is the attacker not supposed to know about the hidden wallet feature? How is this supposed to work? In principle, this hidden wallet feature is questionable. As an attacker, I would keep hitting you with the wrench until you gave me all the hidden wallets you have. I would keep hitting you until you gave me the next password or the next passphrase and so on; they would never trust that you don't have a next wallet. The wallet would have to be funded well enough that the attacker thinks it's likely to be everything. You could also do the replace-by-fee race where you burn all your money to the miners ((hopefully you get the fee right)). The attacker isn't going to stop physically attacking you. But there's still a big difference between physically hitting someone and killing them. Murder seems like a line fewer people would be willing to cross.

TrueCrypt had plausible deniability in its encryption because you could have several encrypted volumes, but you didn't know how many there were. It might be suspicious for a 1 GB encrypted volume to only contain a single 10 kb file… but the idea is to put something genuinely incriminating next to your 10 BTC, and you just say "I'm so embarrassed", and this makes it look more plausible that this is actually your entire stash of coins.

Having secure hardware doesn't mean you aren't vulnerable to attacks. I really think the best thing is to use multisig.

Timelock

If you're willing to wait out a delay, you can use a timelock, or spend instantly with 2-of-2 multisig keys. You would enforce on the hardware wallet that it only makes these timelocked transactions. The attacker provides the address. It doesn't matter what his wallet says; in the best case, your wallet has already timelocked it. You can't spend it in a way that locks it up, because presumably the attacker wants the funds to go only to his own address. You could pre-sign the transaction and delete your private key… hopefully you got that fee right.

If you can prove to a bank that you'll receive $1 billion within a year, then they'll advance you the money. You get the use of the money, negotiate with the attacker and pay him a percentage. But this gets into K&R insurance territory…. You could also use a bank, where it's 2-of-2 multisig, or just my key but with a 6 month delay. So this means that every time you need to make a transaction, you go to the bank and make a transaction… you can still recover your money if the bank disappears, and the attacker can't get anything because he probably doesn't want to go to the bank with you, or cross multiple borders while you travel around the world to collect all your keys or something.

The best protection is to never tell anyone how much bitcoin you have.

How can you combine a timelock with a third party? Timelock with multisig is fine.

We're planning to add miniscript support, which could include timelocks. But as far as I know, no hardware wallet currently enforces timelocks.

Miniscript

Miniscript (or here) was introduced by Pieter Wuille. It isn't a one-to-one mapping of every possible bitcoin script; it's a subset of bitcoin script, but it covers something like 99.99% of all the use cases observed on the network so far. The idea is that you describe the logic of your script in a convenient form, such that a wallet can parse this information and figure out what keys or other information it needs to obtain in order to satisfy the script. This also works for many of the lightning scripts and for various multisig scripts. You can then compile this miniscript policy into bitcoin script. It can then analyze it and say this branch is the one I will most likely use most of the time, and then order the branches in the script so that it executes more efficiently on average in terms of sigops. You can optimize the script so that the fees, or the data you need when signing against this script, will be minimal according to your priorities. So if you're mostly spending with this branch, then this one will be super optimal and this other branch might be a bit longer.
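
As a rough illustration of what such a policy looks like (hypothetical key names, syntax in the spirit of Pieter Wuille's policy language rather than anything a specific wallet ships today): a wallet that usually spends with a single key, but has a 2-of-2 recovery branch behind a timelock.

```python
# Hypothetical wallet policy, written roughly in the policy language that
# miniscript compilers accept: spend with the hot key, or, after ~1 week
# (1008 blocks), with 2-of-2 of two backup keys.
policy = "or(pk(key_hot),and(older(1008),thresh(2,pk(key_backup1),pk(key_backup2))))"

# A compiler such as Pieter Wuille's proof of concept would turn this into a
# concrete bitcoin script and can order the branches so that the branch you
# expect to use most often is the cheapest one to satisfy.
```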

After miniscript is implemented, it will be possible to use timelocks. Until then, you need something like a raspberrypi with custom firmware. We can try to implement a timelock feature together tomorrow if you're still going to be here.

Pieter has a proof of concept on his website where you can write the policies and get an actual bitcoin script. I don't think he has a demo of going the other way; but it describes in a lot of detail how all of this works. I think they're finishing up their multiple implementations right now and I think it's almost ready to really get going. Some pull requests have been merged for output descriptors. In Bitcoin Core you can provide a script descriptor and feed that into the wallet, like whether it's segwit or legacy, nested segwit or native segwit, etc. You can also use script descriptors for multisig wallets; you can already use Bitcoin Core with existing hardware wallets… it's still a bit of a hassle because you need to run a command-line interface, it's not super user-friendly and it's not in the GUI yet, but if you're fine with command-line interfaces, and if you're willing to write a small script that does it for you, then you're probably fine. I think the Bitcoin Core integration is very important and it's good that we have this going.

Advanced features for hardware wallets

https://btctranscripts.com/breaking-bitcoin/2019/future-of-hardware-wallets/

Something we could do is coinjoin. Right now hardware wallets only support situations where all the inputs belong to the hardware wallet. In coinjoin transactions, that's not the case. If we can fool the hardware wallet into displaying something incorrect, then we can potentially steal the funds. How could the hardware wallet figure out whether an input belongs to it or not? It needs to derive the key and check whether it's able to sign. For that it needs the help of the software wallet. The user needs to sign the transaction twice for this protocol.

It's fairly normal to have several signings of a coinjoin transaction within a short period of time, because sometimes the coinjoin protocol stalls due to users dropping off the network or just taking too long.

Signing transactions with external inputs is tricky.

Proof of (non-)ownership for hardware wallets

Say we are a malicious wallet. Not the coinjoin server, but a client application. I can take two identical user inputs, which is pretty common in coinjoin, put them in the inputs, and include only one user output; the others are other people's outputs. How can the hardware wallet decide whether an input belongs to the user or not? Right now there's no way. So we trust the software to mark which input needs to be signed. The attack is to mark only one of the user's inputs as mine, so the hardware wallet signs it and we get the signature for the first input. The software wallet then pretends the coinjoin transaction failed, and sends the hardware wallet the same transaction but with the second input marked as ours. So the hardware wallet has no way to determine which inputs were its own. It could use SPV proofs to prove that an input is yours. We need a reliable way to determine whether an input belongs to the hardware wallet or not. Trezor is working on this with achow101.

https://github.com/satoshilabs/slips/blob/slips-19-20-coinjoin-proofs/slip-0019.md

We could make a proof for each input, and we need to sign this proof with a key. The idea is to prove that you can spend, and to prove that… it can commit to the whole coinjoin transaction to prove to the server that this input is owned, and it helps the server defend against denial-of-service attacks, because now the attacker has to spend their own UTXOs. The proof can only be signed by the hardware wallet itself. It also has a unique transaction identifier. It is sign(UTI||proof_body, input_key). They can't take this proof and send it to another coinjoin round. This technique proves that we own the input. The problem comes from the fact that we have these crazy derivation paths. It uses a unique identity key, which can be a normal bitcoin key with a fixed derivation path. The proof body will be HMAC(id_key, txid || vout). This can be wallet-specific, and the host can collect them for the UTXOs. You can't forge this, because the hardware wallet is the only one that can generate this proof.
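
A sketch of those two formulas in Python (stub signing function and hypothetical field layout; see the SLIP linked above for the actual specification):

```python
import hmac
import hashlib

def sign(message: bytes, input_key: bytes) -> bytes:
    """Hypothetical stand-in for signing with the key that controls the input."""
    raise NotImplementedError

def proof_body(id_key: bytes, txid: bytes, vout: int) -> bytes:
    # HMAC(id_key, txid || vout): only the wallet holding id_key can produce it,
    # and it can recognise its own inputs later by recomputing the HMAC.
    return hmac.new(id_key, txid + vout.to_bytes(4, "little"), hashlib.sha256).digest()

def ownership_proof(uti: bytes, id_key: bytes, txid: bytes, vout: int,
                    input_key: bytes) -> bytes:
    # sign(UTI || proof_body, input_key): the unique transaction identifier binds
    # the proof to this coinjoin round so it cannot be replayed elsewhere.
    return sign(uti + proof_body(id_key, txid, vout), input_key)
```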

This could be extended to multisig key aggregation or even MuSig.

We can ask every participant in this coinjoin transaction to sign a certain message with the private key that controls that input. So we have a message and a signature. The signature proves to us, to everyone, that the guy who put in this message really controls the corresponding private key. This is the signature from the key that controls this input. On the message side, we can put whatever the hardware wallet wants. The hardware wallet is the one that can sign this proof. It's the only one that controls this key. So what it can do is generate a particular message that it will be able to recognize later. So I take the hash of the transaction and combine it with a fixed key that I store in my memory, and then I get a unique message that looks random, but that I'll be able to reproduce every time I see it, and I can be sure it was my input because I was the one who generated what's inside the message. Once all these proofs are provided for every input, our hardware wallet can go through every input and be certain about which inputs are mine and which are not. This can help detect when the software wallet is trying to fool you.

I hope hardware wallets will be able to do coinjoins pretty soon. Probably Trezor will deploy it first, because we're working with them on it.

Q: What's the use case for this? Say I want to leave something connected to the internet to earn money with something like joinmarket? Or I want to be a privacy taker?

A: It works in both cases. If you want to participate in coinjoin and earn something- but right now it doesn't work like that. Right now all the revenue goes to the Wasabi Wallet guys. Their servers charge for connecting people. At the moment I think that if you want to use coinjoin to get some privacy, then you need this kind of protocol, so you probably need to connect your hardware wallet to do this, or you can still do it using the airgap.

In our case, for example, I was thinking of having a screen on the computer and then a QR code, and they can communicate via QR codes, like this is a webcam and this is a screen. I was also thinking about audio output, like a 3.5mm jack from the hardware wallet to the computer. The bandwidth there is pretty good. You could also play audio over a speaker. But then your hardware wallet needs a speaker, and it could just broadcast your private key. But a 3.5mm audio jack makes sense.

Q: What about coinshuffle or coinswap?

A: I only know a little about this. For Wasabi wallet, it doesn't know which inputs correspond to which outputs because they are registered separately. They give you back a blind signature and you give them a blinded output or something like that. They generate a blind signature without knowing what they're signing. This lets the coinjoin server verify that yes, I signed something, and this guy wants to register this output, so it looks fine and goes into the coinjoin. For all this communication they use Schnorr signatures, because there you can use blind signatures. In principle this means they have two virtual identities that aren't connected to each other; your inputs and outputs are completely unlinked, even for the coinjoin server. They also generate outputs of the same value, and then do another set of outputs with a different value, so you can also get some anonymity for some amount of the change.

Wasabi wallet supports hardware wallets, but not for coinjoin. So the only remaining benefit of using Wasabi is having full coin control and being able to choose which coins to send to people.

Q: How does Wasabi handle privacy when it fetches your UTXOs?

A: I think they use the Neutrino protocol: they ask the server for the filters and then download blocks from random bitcoin nodes. You don't need to trust their central server at that point. I think it's already possible to connect to your own node; awesome, that's great. Cool. So now you can get it from your own Bitcoin Core node.

Lightning for hardware wallets

Lightning is still under development, but it's already live on mainnet. We know the software isn't super stable yet, but people were excited and started using it with real money. It's not a lot of real money, it's something like a few hundred bitcoin on the whole lightning network right now.

Right now the lightning network only works with hot wallets, with the keys on your computer. That's probably not a problem for us, but for normal customers buying coffee every day, this is a problem. It might be fine to store a few hundred dollars on your mobile wallet, and maybe it gets stolen; no big deal, it's a small amount of money. But for merchants and payment processors, you worry about losing coins or payments, and you want to have enough channels open so that you don't have to close any and don't run into liquidity problems on the network. You have to store your private keys on an online computer with a specific IP address, or maybe behind tor, with certain ports open, and you're broadcasting to the whole world how much money you have in those channels. Not great, right? So this is another thing we're working on: it would be nice to get the lightning private keys onto a hardware wallet. Unfortunately, here you can't really use an airgap.

You could partially use cold storage. You could at least make sure that when the channel closes, the money goes to cold storage. There's a nice separation of the different keys in lightning. When you open a channel, you should verify which address will be used when closing the channel. Then, even if your node is hacked, if the attacker tries to close the channels and take the money, it fails because all the money goes to cold storage.

But if he's able to move all the money over the lightning network to his own node, then you're probably screwed. Storing these private keys on the hardware wallet is a challenge, but you could have a…. it can also do a signed channel update. If you provide enough information to the hardware wallet that you really are routing a transaction and your balance is increasing, then the hardware wallet could sign automatically. If the amount is decreasing, then it definitely has to ask the user to confirm. So we're working on that.
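
A toy version of that signing policy (an assumption about how such firmware might behave, not any shipping implementation):

```python
def should_sign_channel_update(old_balance_msat: int, new_balance_msat: int,
                               user_confirmed: bool) -> bool:
    # Routing and other updates that only increase our balance can be signed
    # automatically; anything that decreases it needs explicit user confirmation.
    if new_balance_msat >= old_balance_msat:
        return True
    return user_confirmed
```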

Schnorr signatures for hardware wallets

The advantage of Schnorr signatures for hardware wallets is key aggregation. Imagine you use normal multisig transactions, say 3-of-5. This means that every time you sign a transaction and put it on the blockchain, you see 5 pubkeys and 3 signatures there. That's a lot of data, and everyone can see that you're using a 3-of-5 multisig setup. Terrible for privacy, and terrible in terms of fees.

With Schnorr signatures, you can combine these keys into a single one. So you can have several devices or signers that generate signatures, and then you can combine the signatures and the corresponding public keys into a single public key and a single signature. Then all, or most, transactions on the blockchain would look alike: just one public key and one signature.

With taproot (or here), it's even better. You can add scripting functionality there too. If everything goes well, like in lightning, maybe you and your counterparty are cooperating happily and you don't need to do a unilateral close. You can do a mutual 2-of-2 multisig close, and then it looks exactly like a single public key and a single signature. If someone isn't cooperating and things go wrong, then you can reveal a branch in the taproot script showing that you're allowed to claim the money, but this script is only revealed if you have to go down that path. Otherwise, you get a single public key and a single signature on the blockchain.

We can use chips with different architectures and different, heterogeneous security models in a single hardware wallet device, put three different chips with three different keys in there, and make sure we can spend the bitcoin only if each of these chips signs inside the hardware wallet. So one can be a proprietary secure element, and then other microcontrollers in the same hardware wallet, and the output is just a single public key and a single signature. We could also use one chip from Russia and another from China. So even if there's a backdoor, it's unlikely that, say, the Russian and American governments would cooperate to attack your wallet. From the user's perspective, it looks like just a single key and a single signature.

All these watchdogs and anti-tamper mechanisms and fault injection prevention and stuff…. they aren't implemented yet, but I know there are some companies working on security peripherals around the RISC-V architecture. So hopefully we'll have open-source secure elements soon. The only problem right now is that most… some of the companies, I would say, take this open-source RISC-V architecture and put a bunch of proprietary closed-source modules on top of it, and this ruins the whole idea. We need an open-source RISC-V chip. I would definitely recommend looking at RISC-V.

IBM Power9 is also open source at the moment. Raptor Computing Systems is one of the few manufacturers that will actually sell you the device. It's a server, so it's not ideal, but it really is open source. It's a $2000 piece of equipment, a full computer. So it's not an ideal consumer device for hardware wallets. I think the CPU and most of the board are open source, including the core. It's an IBM architecture. OK, I should probably look at that. Sounds interesting.

Best practices

I want to talk about best practices, and then talk about rolling our own hardware wallet. But first, best practices.

Never store mnemonics as plaintext. Never load the mnemonic into the microcontroller's memory before the device is unlocked. There was something called the "frozen Trezor attack". You take your Trezor and turn it on; the first thing it does is load your mnemonic into RAM, and then what you can do is freeze the Trezor at a low temperature to make sure the memory keeps its contents. Then you update the firmware to a custom firmware, which Trezor allows, and which is normally fine because your flash memory gets erased and they assume your RAM decays, but at low temperatures it stays there. Then, once your firmware is on there, you don't have the mnemonic but you can print to serial… and you can still get the data out of there. The problem is that they loaded the mnemonic into RAM before checking that bit. So never do that.

The last thing standing between an attacker and your funds is the PIN code, or whatever verification method you're using. It is much better not to store the correct PIN code on the device at all. In principle, comparing the correct PIN code against the entered PIN code is bad, because during these comparisons there's a side-channel attack. Instead, you want to use the PIN code and another authentication method to derive the decryption key that decrypts the encrypted mnemonic. This way you eliminate all the side-channel attacks and you don't have the mnemonic in plaintext.
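
A minimal sketch of that pattern (illustrative parameters only, not any vendor's firmware): the PIN is never stored or compared; it is stretched into a key, and a wrong PIN simply produces a key that fails to authenticate the stored blob. A real device would feed the derived key into an AEAD cipher and mix in device-unique secrets.

```python
import hashlib
import hmac

def key_from_pin(pin: str, salt: bytes) -> bytes:
    # Stretch the PIN into a key; the correct PIN is never stored anywhere,
    # so there is nothing to compare against (and no comparison to leak).
    return hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

def unlock(pin: str, salt: bytes, encrypted_mnemonic: bytes, tag: bytes) -> bytes:
    key = key_from_pin(pin, salt)
    expected = hmac.new(key, encrypted_mnemonic, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("wrong PIN: derived key does not authenticate the stored blob")
    # With the right key you would now decrypt with an AEAD cipher
    # (e.g. AES-GCM or ChaCha20-Poly1305); omitted in this sketch.
    raise NotImplementedError("decryption step omitted in this sketch")
```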

Another nice feature people should use: have you heard of physically unclonable functions? It's a really nice feature. Say you have a microcontroller being manufactured… when the RAM is manufactured, there are certain fluctuations in the environment such that the RAM comes out slightly different for every bit. When you power on the microcontroller, the state of your memory will be random. Then you erase it and start using it normally. But this randomness has a certain pattern, and this pattern is unclonable, because you can't observe it from outside and it can't be reproduced on another RAM device. You can use this pattern as a fingerprint, as the device's unique key. That's why it's called a physically unclonable function: it comes from the variations in the manufacturing process. You can use that together with a PIN code and other things to encrypt your mnemonic. When the device boots, it will be able to decrypt the mnemonic. But extracting the full flash memory won't help here, because you still need to get the physically unclonable function that's on the device. The only way to get that is to flash firmware, read the key, and extract it over serial or whatever. That requires a failure of both read protection and write protection.

Q: Why not get rid of the PIN and mnemonic storage, require the user to enter it, and wipe the device each time?

A: You could. Then it's a secure signing device, but not a secure storage device. So there's secure storage and secure signing. So you store the passphrase, or the encrypted wallet, on paper or CryptoSteel in a bank vault or something, with the mnemonic or whatever… and it's encrypted, and you remember the passphrase. So you never store the passphrase anywhere.

The secure storage problem and the secure signing problem should be separated. So you could use replaceable hardware for signing, and the mnemonic should be stored encrypted on paper or something. The problem with entering a mnemonic every time is that you could be hit by an evil maid attack. The prevention here is to have no wifi or anything. But maybe the attacker is clever and puts in a battery pack and some kind of transmission mechanism; but this comes back to having a disposable wallet.

Besides RAM, you can use a piece of glass to make a physically unclonable function. You could put a piece of glass in the wallet, and it has a laser and can measure the imperfections of the glass and use that to derive the decryption key for your mnemonic. This isn't a fingerprint, though. The glass can also degrade over time. A piece of plastic can have a unique pattern that generates an interference pattern, and then you can use that to extract a decryption key from it. But that isn't going to happen for a while.

This other thing about evil maid attacks and hardware implants, how can we prevent those? There's a way to make an anti-tamper mesh around the device, such that when someone tries to get into it, for example by drilling a hole, the security measures trigger automatically. In banking-industry HSMs, they basically have a device that is constantly powered and monitors the current flowing through this conductive mesh. If they detect a change in the current in the mesh, they wipe the device. The problem here is that when you lose power, you have to rely on the battery, and then when the battery is running out, you have to wipe the keys before it dies anyway.

There's a better way, where you don't just check the current, you also check the capacitance of the wires in the mesh and use that as a unique fingerprint to generate the unique decryption key for your secrets. Even if someone drills a 100 micron hole here, the decryption key changes and they can no longer extract the secrets. You can't buy a device like this at the moment, but it's a very promising approach. It's probably for people who really care about large amounts of money, because this is really expensive.

If you're going to wipe the keys, then you might as well pre-sign a transaction before wiping them, to send the coins to a backup cold storage or something.

Rolling your own hardware security

You shouldn't expect that rolling your own hardware wallet is a good idea. Your device probably won't be as secure as a Trezor or Ledger, because they have big teams. But if there's a bug in the Trezor firmware, then attackers will probably try to exploit it against every Trezor user. Whereas if you have a custom implementation that may not be super secure but is custom, then what are the chances that the guy who comes to your house finds a weird-looking device, figures out it's a hardware wallet, and then figures out how to break it? Another thing you can do is hide your hardware wallet inside a Trezor case ((laughter)).

Someone in a video suggested making a fake hardware wallet, and when it's turned on by someone, it sends an alert message to a telegram group saying call 911, I'm being attacked. You could put this inside the Trezor case. When the guy connects to it, it sends the message. Another thing you could do is install malware on the attacker's computer, and then track him and do various surveillance things. You could also claim that yes, I need to use Windows XP with this setup, or something equally insecure, which is plausible because maybe you set this system up 10 years ago.

Options for prototyping a hardware wallet

What can we use to make a hardware wallet? If you think making hardware is hard, it isn't. You just write firmware and upload it. You can also use FPGAs, which are fun to develop for. I like boards that support micropython, which is a limited version of python. You can talk to peripherals, display QR codes, etc. Trezor and ColdCard use micropython for their firmware. However, I think micropython still has a long way to go, because as soon as you move away from what's already been implemented, you run into problems where you have to dive into the internals of micropython and end up having to write new C code or something. But if you're happy with everything that's already there, then it's extremely easy to work with.

Another option is to work with arduinos. This is a framework developed maybe 20 years ago, I don't know, and it's used throughout the DIY community. It made it extremely easy to start writing code. I know people who learned to program using arduino. It's C++, and it's not as easy to use as python, but still, the way they've built all the libraries and all the modules is extremely user-friendly. However, they didn't develop this framework with security in mind.

There's also the Mbed framework. It supports a wide variety of boards. This framework was developed by ARM. Again you write C++ code, compile it to a binary, and when you plug in the board you just drag and drop it onto the board. It's literally drag and drop. What's more, you don't need to install any toolchain. You can go online and use their in-browser compiler. It's not very convenient, except for getting started and getting some LEDs blinking. You don't even need to install anything on your computer.

Another thing to pay attention to is the rust language, which focuses on memory safety. It makes a lot of sense. They also have rust for Mbed systems. So you can start writing rust for microcontrollers, but you can still access the libraries normally written by the manufacturer or whoever, like for talking to displays, LEDs and all that stuff. In the microcontroller world, everything is written in C. You can write your hardware wallet logic in rust, and still have the bindings to the C libraries.

There's a very interesting project called TockOS. It's an operating system written entirely in rust, but you can still write in C or C++; the operating system itself, the management layer, can make sure that even if one of your libraries is completely compromised, you're still fine and it can't access the memory of other programs. So I think that's really nice. At the moment, I don't think many people know rust, but that's improving. It's definitely a very interesting toolchain.

Another nice thing you can do with DIY hardware wallets, or not just DIY but flexible hardware, is custom authentication. If you're not happy with just a PIN code, like you want a longer password, or you want an accelerometer so you can rotate the hardware wallet in a certain way that only you know, or something like that; or for example you can enforce one-time passwords and multi-factor authentication. You require not just the PIN but also a signature from your yubikey, and all these kinds of weird things, or even your fingerprint, but that's a bad idea because fingerprints have low entropy and people can just take your finger anyway, or steal your fingerprints.

You could use a yubikey, Google Titan, or even some bank cards. You could do multi-factor authentication, and use different private key storage devices that have nothing to do with bitcoin to do multisig, to authenticate yourself to get into a hardware wallet.

Ten en cuenta que todas estas placas de las que hablo no son súper seguras. Todas usan microcontroladores y no tienen un elemento seguro. Puedes conseguir una placa muy barata que cuesta como 2 dólares. Ten en cuenta que está fabricada y diseñada en China. Está muy extendida, pero quién sabe, tal vez todavía hay una puerta trasera en el dispositivo en alguna parte. Quién sabe. Además, tiene bluetooth y wifi así que eso es algo a tener en cuenta. Si quieres una versión no muy segura del Ledger X, entonces podrías hacerlo. Probablemente sería más seguro que guardar el dinero en tu portátil que está constantemente conectado. Todas las demás placas para desarrolladores tienden a tener microcontroladores simples para aplicaciones específicas. Esta de aquí tiene el mismo chip que tiene Trezor, en teoría podrías portar Trezor a esto. Así que obtienes la seguridad de un monedero Trezor, una pantalla mucho más grande y tal vez alguna funcionalidad adicional que te pueda gustar. Así que podría tener sentido en algunos casos. Yo no confiaría completamente en el hardware DIY para la seguridad.

También hay algunos chips baratos enfocados a la seguridad disponibles en el mercado. El que se utiliza en la ColdCard está en el mercado, una especie de ECC blahblahblha de Microchip. También se puede aplicar en el factor de forma arduino. Así que puede proporcionarle un almacenamiento de claves seguro para sus claves de bitcoin, y el resto se puede hacer en el microcontrolador normal.

Actualmente no hay elementos seguros en el mercado que permitan utilizar la criptografía de curva elíptica para la curva de bitcoin. Todavía no se han construido.

Hacer un elemento totalmente seguro que sea completamente de código abierto desde la base hasta la cima, costará como 20 millones de dólares. Lo que estamos liberando es lo que es accesible para nosotros en este momento. Así que lo que podemos hacer es conseguir este elemento seguro que tiene un sistema operativo de tarjeta Java propietario en la parte superior, y luego en la parte superior de esto podemos escribir un bitcoin específico que puede hablar con el hardware y utilizar todos los aceleradores de curva elíptica y las características de hardware y todavía puede ser de código abierto porque no tenemos saber exactamente este sistema operativo de tarjeta Java funciona por lo que no es totalmente de código abierto, sólo estamos abriendo todo lo que podemos. En la ColdCard, no se puede utilizar la criptografía de curva elíptica en el elemento de almacenamiento seguro de claves, pero en otros elementos seguros sí se puede ejecutar ECDSA y otra criptografía de curva elíptica.

My hardware wallet design choices

I wanted to talk about how we designed our hardware wallet and get your feedback about whether it makes sense or not and how we can improve, especially from Bryan and M. After that, I think, if you have your laptops with you, we can set up the development environment for tomorrow so we can even hand out the boards to try tonight and take home. If you promise not to steal them, you can take one home if you want, if you are actually going to try it. Keep in mind that tomorrow I will be bothering you the whole time, so tonight is your chance to look at this on your own. Tomorrow we can then work on some crazy hardware wallets.

What we decided to do is this: we have some hardware partners who can make custom chips for us. We can take the components of normal microcontrollers and put them nicely into a single package. What components do hardware wallets normally have? The ones that also have a secure element, at least. So we decided to go open source first. We are not going to work with a bare-metal secure element and a closed-source operating system that we cannot open because of NDAs. So we are using a Java Card operating system and, even though we don't know how it works, it seems to work, so it should be reasonably secure. Then, on top of that, we are writing a bitcoin applet that runs on the secure element. We can only put things there that we sign and upload using our admin keys. This means we can develop and upload the software, and then you can suggest certain changes that we can enable for upload. That requires some communication with us first. I cannot give the keys to anyone else, because if they leak we get in trouble with the government, because they are worried about criminals compromising super-secure communication channels or using them to organize their illegal activities. Unfortunately, that is the state of the industry. We wanted to develop an open source secure element, but for now we can only make a hardware wallet that is maybe more secure and a bit more flexible.

So we are using the Java Card smartcard, and then we have two other microcontrollers. We also have a fairly large display to show all the information about the transaction and format it nicely, and to enable this QR-code airgap we need to be able to display fairly large QR codes. We have to use another general-purpose microcontroller because only those can drive large displays at the moment. Then we have a third microcontroller that does all the dirty work that is not security critical, like communication over USB, talking to SD cards, processing camera images to read QR codes and things like that. This physically isolates the large, non-security-critical codebase that handles all the user data and the data from the cold computer. We also have another microcontroller dedicated to driving the display so that you can have a somewhat trusted display. All these microcontrollers are packed into the same package. Inside, we have the semiconductor dies stacked in the package, and they are stacked in a security-minded structure. The top one is the secure element, and the other two are underneath. So in theory the heat from one chip in the package can be sensed on the other, but presumably the smartcard has had a lot of work done on power analysis and side-channel prevention.

Even if the attacker has access to this chip and decaps it, they first hit the secure element, which has all these anti-tamper mechanisms like watchdogs and voltage detection and memory mapping and so on, and it shares this capability with the other chips. Obviously they share the same voltage supply, so the other microcontrollers gain a bit of the secure element's protection. Even if the attacker tries to get into the memory of the not-very-secure microcontrollers, the secure element is in the way and it is hard to get underneath it. The previous customers they did this for were satellites, and in that case you have problems with radiation and so on. For us, this helped because it means that, first, no electromagnetic radiation goes from the chip to the outside, which removes some side-channel attacks, and second, only limited electromagnetic radiation gets into the chip. And as I said, they are all in the same package. On our developer boards they are not in the same package, because we don't have the money to develop the chip yet, but we will start soon. Even so, it already has all this hardware.

Normal microcontrollers have these debug interfaces. Even if they are disabled with the security bits, the attacker can do a bunch of race-condition tricks and even re-enable them. So we include a fuse that physically breaks the connection from the JTAG interface to the outside world. So just by having the chip, the attacker is not able to use the JTAG interface. That is a nice feature that is not normally available. On the developer board we expose a lot of connections for development purposes, but in the final product the only connections will be the display, the touchscreen, and for the camera we haven't decided which one we want to use. Normally the camera is used to scan QR codes and it is obviously related to communication, so it should be on the communication microcontroller. We can take a picture of dice as user-provided entropy.

We have a fully airgapped mode that works with QR codes to scan transactions and QR codes to transmit signatures. We call it the M feature because he was the one who suggested it. It works so nicely and smoothly that I have this nice video I can show you. So first we scan the QR code for the money we want to send; the watch-only wallet on the phone knows all the UTXOs, prepares the unsigned transaction and shows it to the hardware wallet. Then, on the hardware wallet, we scan the QR code. Now we see the information about the transaction on the hardware wallet, we can sign it, and then we get the signed transaction. Then we scan that with the watch-only wallet and broadcast it. The flow is quite convenient.

This is a unidirectional data flow. It only goes in one direction. It is a very controlled data flow. It is also limited in how much data can be passed back and forth. That is both a good and a bad thing. For a large transaction it can be a problem. For small data, it is great. We haven't tried animated QR codes yet. I was able to do a PSBT with a few inputs and outputs, so it was fairly large. With partially signed bitcoin transactions and legacy inputs, your data is going to explode in size pretty quickly. Before segwit, you had to put a lot of information into the hash function to derive the sighash value. Now with segwit, if the wallet lies about the amounts, it will simply produce an invalid signature. To calculate the fee, the hardware wallet has to be given the whole previous transaction, and that can be huge. Maybe you get the coins from an exchange, and the exchange might create a big transaction with thousands of outputs, and you would have to pass all of that to the hardware wallet. If you are using legacy addresses you will probably have problems with the QR codes and then you have to do animations. For segwit transactions and a reasonable number of inputs and outputs, you are fine. You could just say: no more legacy, and sweep those coins. The only reasonable attack vector there is if you are a mining pool and you get tricked into broadcasting that; otherwise it hurts you without benefiting the attacker, so they have no real incentive. You can also show a warning and say that if you want accurate fees, use bech32 or at least check it here. You could show them the fee on the hot machine, but that is probably compromised.
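
As a rough illustration of why legacy inputs are a problem for a QR-code airgap, here is a small Python sketch. The single-QR capacity figure (about 2,953 bytes for a version 40, error-correction-level-L code in byte mode) is a standard QR limit, and the example sizes are made-up assumptions for the demo.

```python
# Rough check of whether a PSBT fits in a single QR code.
QR_V40_L_BYTES = 2953  # approx. byte-mode capacity of a version 40-L QR code

def fits_in_one_qr(psbt_base64: str) -> bool:
    # Wallets usually transfer the base64 string itself, so compare its length,
    # not the decoded binary size.
    return len(psbt_base64) <= QR_V40_L_BYTES

# Toy example: a legacy input drags the whole previous transaction into the PSBT,
# so one input funded by a large (e.g. exchange batch) transaction can blow past the limit.
legacy_prev_tx_chars = 50_000      # hypothetical 50 kB batch transaction, base64-encoded
segwit_input_chars = 200           # rough per-input PSBT overhead for a segwit input

print(fits_in_one_qr("A" * legacy_prev_tx_chars))  # False -> needs animated QR codes
print(fits_in_one_qr("A" * segwit_input_chars))    # True
```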

With airgapped mode, you have cold storage and an airgap, and it is reasonably secure. Nothing is perfect; there will be bugs and attacks against our hardware wallet too. We just hope to make fewer mistakes than we see at the moment. Also, another thing I didn't talk about is the ... inside the smartcards, we have to reuse the Java Card OS simply because the applets have to be written in Java. Then we decided to go with embedded Rust, and although it is a bit harder to develop and not many people know Rust, this part is really security critical and we don't want to shoot ourselves in the foot with bugs there. It also gives you a lot of isolation and best practices from the security world. Third, we open up to developers, and it runs MicroPython. So this means that if you want custom functionality on the hardware wallet, like a custom communication scheme such as bluetooth, which we don't want, but if you want it then you are welcome to do it yourself. You write a python script that handles this kind of communication, and then you just use it. Another way to use it is this: if you are using a particular custom script that we don't know about, and we are displaying this transaction in an ugly way that is maybe hard to read, then maybe what you can do is add some metadata to send to the other microcontroller to show you something. It will still mark it as not super trusted because it comes from this python app, but if you trust the app developers then you have some options. At least you can see some additional information about this transaction if our code wasn't able to parse it; for instance that it is a coinjoin transaction that is increasing your anonymity set by 50 or whatever - you still see the inputs and outputs, plus the extra information as usual. So this could help the user experience.

Besides the airgapped mode, there is another completely different use case where you use it as a hot wallet, for example as a lightning hardware wallet. You keep the wallet connected to the computer, the connected computer runs a watching lightning wallet, and then it communicates with the hardware wallet for signatures on transactions that strictly increase the balance of your channel. Then you could be in the loop for any transaction that decreases your balance. Keep in mind this is not an airgapped mode, but it is a different security mode. You probably want most of your funds in cold storage, and then some amount can be on this hot hardware wallet device. Also, one of the problems with coinjoin is that it can take some time, like a few minutes, especially if you need to retry several times, so that is also kind of a hot wallet.

You probably don't want to carry your hardware wallet around all the time, or go home to press a button to confirm your lightning payment or something. So what you can do is set up a phone with an app that is paired with the hardware wallet, so the hardware wallet is aware of the app and the app stores some secret to authenticate itself, and then you can set a limit in the morning for your phone. The hardware wallet will then authorize payments up to a certain amount if they come from this secure app. Even more, you can set particular permissions for the mobile app. For example, the exchange wallet should be able to ask the hardware wallet for an invoice, but only an invoice that pays you. Same with bitcoin addresses: it can only ask for bitcoin addresses for a particular derivation path. If you want, you can make it more flexible and get a better user experience. The hardware wallet is connected to the internet, and requests are forwarded through the cloud in this scheme.

The first thing we are going to release is the developer board and the secure element. Another thing I want to discuss is the API of the first version of the secure element. Specifically, as developers, what would you like it to have? I have some ideas about what could be useful. Maybe you can think of something else.

Obviously, it should store the mnemonic, passwords and passphrases, and it should be able to do all the bip32 derivation calculations and store bip32 public keys. We also want ECDSA and the standard elliptic curve signing we are using right now. In addition, I want to include this anti-nonce attack mitigation protocol. We don't want to trust the proprietary secure element; we want to be sure it is not trying to leak our private keys using this chosen-nonce attack. So we want to implement that protocol. Also, we want to use Schnorr signatures, in particular with Shamir secret sharing. This would let you take the Schnorr keys and then get a signature that uses the right point. For key aggregation with Shamir, you need a fancy function to combine each of the parties' points.

It makes sense to use Shamir here because you can do thresholds. You can do that with MuSig too, but with plain Schnorr it is only k-of-k. Verifiable Shamir secret sharing uses Pedersen commitments. Say we have 3 keys living on our hardware, and the other shares are on some other backup hardware somewhere else. They can all communicate with each other. Each one generates its own key, and then, using Pedersen commitments and some other fancy algorithm, they can be sure that each of them ends up with a share of some common secret, but none of them knows what the common secret is. So we have a virtual private key, or full secret, that is split with the Shamir secret sharing scheme across all these devices, and none of them knows the full secret. The only problem is: how do you back it up? You cannot see a mnemonic corresponding to it. Nobody knows this key, so nobody can display it. So the only way to do it is somehow - say this device controls the screen, then it can choose to display its own share, but then what about the others? They don't want to send theirs to this device to display, because then it could reconstruct the key... so you could use something like a display that you connect to the different devices to show this mnemonic, but then the display controller could steal them all. Ideally the display controller is a very simple device that only has memory and a screen. You could also use disposable hardware to generate the private key, but then how do you recover it?

Another idea is seedless backups, so that if one of the chips breaks, if you are not using 5-of-5 but 3-of-5, you can have these devices communicate with each other, renegotiate the same key, generate a new share for the new member and replace all the old shares of this Shamir secret sharing scheme with new ones. Instead of the old polynomial you pick a new polynomial, and there is a way for each of these devices to switch to the new share, and instead of the compromised one we get the new member. This is also useful if one of the shares or keys gets compromised or lost, as long as you have enough shares to reconstruct it.
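
To make the threshold property concrete, here is a toy Shamir split-and-reconstruct in Python over a large prime field. This is only a sketch: a real wallet would work over the secp256k1 group order and add the verifiable (Pedersen-commitment) layer described above.

```python
import random

# Toy 3-of-5 Shamir secret sharing over a prime field. Illustration only.
P = 2**127 - 1  # a Mersenne prime used as the field modulus for the demo

def split(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789
shares = split(secret, 3, 5)
assert reconstruct(shares[:3]) == secret    # any 3 of the 5 shares recover it
assert reconstruct(shares[1:4]) == secret
```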

For signing, you don't need to reassemble the Shamir-shared private key, because it is Schnorr. So the signatures are generated partially, and then they can be combined into the correct final signature without ever reassembling the private key on a single machine.

Say you are doing SSS over a master private key. With each share, we generate a partial signature over a transaction. A Schnorr signature is this random nonce plus the hash times the private key, and it is linear. We can apply the same recombination function to the signature shares. We apply the same function to s1, s2 and s3 and you get the full signature S that way, without ever combining the key shares into the full key.
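
A tiny worked example of that linearity, using plain modular arithmetic instead of an elliptic curve and made-up numbers. It only shows that partial signatures of the form s_i = k_i + e·x_i recombine into the signature made with the full key; it is not a complete threshold-Schnorr protocol.

```python
# Linearity of Schnorr partial signatures, toy version (integers mod a prime).
P = 2**31 - 1        # stand-in for the group order

x = 1234567          # "master" private key (never assembled in practice)
k = 7654321          # aggregate nonce
e = 42               # challenge hash

# 2-of-2 additive shares of x and k, so the recombination weights are all 1.
x1, k1 = 1000000, 3000000
x2, k2 = (x - x1) % P, (k - k1) % P

s1 = (k1 + e * x1) % P   # partial signature from share 1
s2 = (k2 + e * x2) % P   # partial signature from share 2

# Recombining the partial signatures equals signing with the full key.
assert (s1 + s2) % P == (k + e * x) % P
```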

Multisignature is fine when you are not the sole owner of the private keys, like custody with your friends and family or whatever. The Shamir secret sharing scheme with Schnorr is great if you are the sole owner of the key, so you only need the shares of the key. There is a paper I can send you that explains how the shares are generated, or whether the virtual master private key is generated on a single machine. Multisig is better for commercial custody, and Shamir is better for self-custody and your own cold storage. Classic multisig will still be available with Schnorr; you don't have to use key aggregation, you would still use CHECKMULTISIG. I think you can still use it. ((No, you can't - see the "Design" section of bip-tapscript, or CHECKDLSADD.)) From a mining perspective, CHECKMULTISIG makes that transaction more expensive to validate because it has many signatures.

I was thinking of using miniscript policies to unlock the secure element. To unlock the secure element, you could use just a PIN code, or you could set it up so that you need signatures from other devices or one-time codes from some other authentication mechanism. We had to implement miniscript anyway. We are not restricted to bitcoin's sigop limits or anything here, so the secure element should be able to verify this miniscript with whatever authentication keys or passwords you are using. It could even be CHECKMULTISIG with the 15-key limit removed.
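
For instance, a hypothetical unlock policy written in miniscript policy syntax might look like the following; the key names are placeholders and this is only meant to illustrate the idea of combining a PIN key with other authentication factors, not anything the device actually ships with.

```python
# Hypothetical secure-element unlock policy, expressed in miniscript policy syntax:
# either the PIN key alone, or any 2 of 3 secondary authentication keys
# (e.g. a yubikey-style signer, a one-time-password device, and a backup device).
unlock_policy = "or(pk(PIN_KEY),thresh(2,pk(YUBIKEY_KEY),pk(OTP_KEY),pk(BACKUP_KEY)))"
print(unlock_policy)
```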

I will send Bryan the paper on linear secret sharing for Schnorr threshold signatures, where each share can be used to generate partial signatures that can then be recombined. Maybe the Pedersen paper on verifiable secret sharing from 1999. And then there is a response to that paper showing how one party can bias the public key; biasing the public key doesn't actually matter, but whatever. GJKR'99 section 4 has it. It is not a Shamir secret sharing scheme if you never assemble the private key; it is just a threshold signature scheme that happens to use linear secret sharing. Calling it something else is dangerous because it could encourage people to implement SSSS where the key can be recovered.

Q: What if you get rid of the secure element, make the storage ephemeral, and wipe it on shutdown? Basically what Tails does. The user has to enter their mnemonic and passphrase. You could even store the mnemonic and only require a passphrase. Would it be easier and cheaper if we got rid of the secure element?

A: Well, what about the evil maid attack? Someone uploads certain firmware to your microcontroller. How do you verify it hasn't grabbed the password?

Q: It's possible, but this evil maid has to come back and get access again in the future. But in the meantime, you could destroy it, or disassemble it, verify it and reinstall the software every time.

A: I really want a disposable hardware wallet. As long as the signature is valid, that's all you need. You destroy the hardware device after using it.

Q: If you use this approach without a secure element, what about a hot wallet situation like for lightning?

A: If they don't have access to the hardware wallet, then it's fine. But if they do have access, then it will be even worse in the scenario without the secure element. It is using the private keys for cryptographic operations, and if that happens on the normal microcontroller, then you can observe the power consumption and extract the keys. It is quite hard to get rid of that. Trezor is still vulnerable to this problem. The Ledger guys discovered a side-channel attack where, when you derive a public key from your private key, you leak some information about your private key. They are working on fixing it, but basically to derive a public key you need to unlock the device, which means you already know the PIN code, so it is not a problem for them. But in automatic mode, if your device is locked but still doing cryptography on a microcontroller, then it is still a problem. I would prefer to do cryptographic operations only on a secure element. If we implement something that can store the keys, or wipe the keys too, and can do cryptographic operations without side channels, then I think you can move to a normal developer board or an extended microcontroller and do everything there already. I think it is not purely a restriction, it is more like... it has some limitations because we cannot connect the display to the secure element, but these are trade-offs we are thinking about. For disposables, I think it is perfectly fine to make a very cheap disposable thing. The microcontrollers used in the Trezor cost like 2 dollars or something. So we can have a bunch of them, buy them in bulk on DigiKey or something by the thousands, and then you put one on the board and you have to do a bit of soldering. Ideally it would be a chip with a DIP package or a socket, and you just drop the controller in there. It would be nice to try it.

Q: If you are worried about the evil maid attack, you can use a tamper-evident bag. The evil maid could replace your hardware wallet, but in theory you would know thanks to the tamper-evident bag. You can use glitter, take a picture of it and store it, and when you come back to your device you check your glitter or something like that. Or maybe it has an internet connection and if it is ever tampered with, it says hey, it was disconnected and I don't trust it anymore.

These fingerprint scanners on laptops are completely stupid. Also, an easy place to find those fingerprints is the keyboard itself ((laughter)). You should encrypt your hard drive with a very long passphrase that you type in.

Statechains

https://btctranscripts.com/bitcoin-core-dev-tech/2019-06-07-statechains/

With statechains, you can transfer the private key to someone else, and then you make it so that only the other person can produce a signature. Ruben came up with an interesting construction where, on top of this, you can do lightning transactions where you only transfer part of this money, and you can do rebalancing and things like that. But it requires Schnorr signatures, blind signatures for Schnorr, and if it happens it won't be any time soon.

The way we can help with this is that we can provide functionality in the secure element that extracts the key and then - in such a way that you are sure the private key has actually moved. You still need to trust the manufacturer, but you can diversify that trust, so that you don't have to fully trust the federation that is enforcing this policy.

Follow-up

https://bitcoin-hardware-wallet.github.io/

\ No newline at end of file diff --git a/es/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html b/es/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html index 09ab31e654..82f57f4c9c 100644 --- a/es/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html +++ b/es/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/index.html @@ -20,4 +20,4 @@ < Bit Block Boom < Seminario Socrático 2

Socratic Seminar 2

Date: August 22, 2019

Transcript By: Bryan Bishop

Translation By: Blue Moon

Tags: Research, Hardware wallet

Category: Meetup

https://twitter.com/kanzure/status/1164710800910692353

Introduction

Hi. The idea was to hold a more Socratic-style meetup. This was popularized by BitDevs NYC and spread to SF. We tried it a few months ago with Jay. The idea is that we go through research news, newsletters and podcasts, and talk about what has happened in the bitcoin technical community. We are going to have different presenters.

Mike Schmidt will talk about some of the optech newsletters he has contributed to. Dhruv will talk about Hermit and Shamir secret sharing. Flaxman will show us how to set up a multisig hardware wallet with Electrum. He will show how this can be done and some of the things we have learned. Bryan Bishop will talk about his vaults proposal that came out recently. Ideally each of these topics will last about 10 minutes, though they will probably run a bit long. We'll have a lot of audience participation and keep it really interactive.

Bitcoin Optech newsletters

I don't have anything prepared, but we can open some of these links and I'll give my perspective or what I understand. If people have ideas or questions, just speak up.

Newsletter 57: Coinjoin and joinmarket

https://bitcoinops.org/en/newsletters/2019/07/31/

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017169.html

Fidelity bonds to provide sybil resistance for joinmarket. Has anyone used joinmarket before? No? Nobody? Nice try... OK. So, joinmarket is a wallet that is designed specifically for doing coinjoins. A coinjoin is a way of doing some mixing or coin shuffling to increase the privacy or fungibility of your coins. There are a few different options for it. It essentially uses IRC chatbots to solicit makers and takers. So if you actually want to mix your coins, you are a taker, and a maker on the other side puts up funds to mix with yours. So there is this maker/taker model, which is interesting. I haven't used it, but it appears to be facilitated over IRC chat. The maker, the person putting up the money, doesn't necessarily need privacy and earns a small percentage on their bitcoin. It is all done with smart contracts, and your coins are never at risk at any point, except to the extent that they are stored in a hot wallet in order to interact with the protocol. The sybil resistance they are talking about here is - Chris Belcher has a great privacy entry on the bitcoin wiki, so check that out at some point. He is one of the joinmarket developers. He realized that it costs very little for a malicious actor to flood the network with lots of makers, and this breaks the privacy, because the chances that you run into a malicious or snooping chainalysis-type company - it's not that they can take your coins, but they would be invading your privacy. The cost for them to do it is quite low, so the chances that they are doing it are quite high.

Like bitcoin mining, this is sybil resistance by burning energy for proof of work. There are two potential kinds of proof in this scenario for sybil resistance. One is that you can burn bitcoin, and the other is that you can lock up bitcoin; both are proofs that you have some skin in the game. So you can prove either of those on-chain, and it is a way of showing that you have locked these coins, and locked them once for this IRC nick, and this gives you credibility to trade as a normal participant. So you can't run 1000 chatbots to snoop... It is 30 to 80,000 BTC. That would be the locked amount. It is about locking up that amount of BTC to take up a share of the total capacity of the joinmarket. It wouldn't be worse than the current situation, where they have the ability to do this anyway, so this just makes it more expensive. It also makes it more expensive for the average user, which is the downside. The cost for legitimate makers to commit, lock or burn their coins is going to be passed on to the takers. The way it is set up now, the mining fee is substantially more than what these makers are earning for doing the mixing, so the theory according to Chris is that people would be willing to pay a higher fee for mixing, because they are already paying 10x that in mining fees. I don't know how many coinjoins can be done in a day, but there are public lists of the makers, what they charge and what their capacity is. There are people putting up 750 BTC that you can mix with, and they charge 0.0001% or something like that. The higher cost is for sybil protection; it is a natural fee. If you are paying 10x to process the transaction on the bitcoin network, then maybe you are willing to put up a few more sats to pay for this sybil resistance.

The Samourai and Wasabi wallet teams had some interesting discussions. They were talking about address reuse and how much it actually reduces privacy. I don't think it is a settled issue; they are still going back and forth attacking each other. For any of these coinjoins, everyone is exposed to the coinjoin to some extent. So there are always trade-offs. Higher cost, some protection, still not perfect; a company might be willing to lock up those coins. One interesting thing about this is that it raises the cost of Chainalysis-style services - they will have to charge their customers more; so this reduces their margins and maybe we can put them out of business.

Newsletter 57: signmessage

https://github.com/bitcoin/bitcoin/issues/16440

https://github.com/bitcoin/bips/blob/master/bip-0322.mediawiki

Bitcoin Core has the ability to do signmessage, but this functionality was only for single-key pay-to-pubkeyhash (P2PKH). Kallewoof has opened a pull request that allows that same functionality with other address types. So for segwit, P2SH, etc... I think it is interesting, and it is forward compatible with future segwit versions, so taproot and Schnorr are included; you would have the ability to sign for scripts with these keys, and it is backward compatible because it keeps the same functionality for signing with a single key. Yes, it could be used for proof of reserves. Steven Roose made the fake output with the message; that is his proof-of-reserves tool. It builds an invalid transaction to perform a proof of reserves. If Coinbase wanted to prove it has the coins, it could create a transaction that looks a lot like a valid transaction but is technically invalid, yet still has valid signatures. bip322 is just about signing the message.
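
For reference, the existing single-key flow looks roughly like this. This is only a sketch, assuming a running testnet bitcoind with a legacy wallet; the address and message are placeholders, and `signmessage`/`verifymessage` are the long-standing Bitcoin Core RPCs that the bip322 work generalizes.

```python
import subprocess

# Placeholder P2PKH address you control in the loaded wallet, and an arbitrary message.
addr = "<your legacy testnet address>"
msg = "proof-of-keys demo"

# Sign with the wallet key behind the address, then verify the signature.
sig = subprocess.check_output(
    ["bitcoin-cli", "-testnet", "signmessage", addr, msg]).decode().strip()
ok = subprocess.check_output(
    ["bitcoin-cli", "-testnet", "verifymessage", addr, sig, msg]).decode().strip()
print(sig)
print(ok)  # "true" if the signature matches the address
```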

All you can do with the signature is prove that at some point you had that private key. If someone steals your private keys, you can still sign with your private key, but you no longer have the coins. You have to prove that you don't have it; or that someone else doesn't have it. Or that, at the current block height, you had the funds. That is the real challenge of proof of reserves; most of the proposals deal with moving the funds.

Newsletter 57: Bloom filter discussion

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017145.html

https://github.com/bitcoin/bitcoin/issues/16152

There was some consternation here about disabling bloom filters by default in Bitcoin Core. Right now, if you are using an SPV wallet or client, like Breadwallet which uses SPV... The discussion was that in the previous week a pull request was merged that disables bloom filters by default. A new Bitcoin Core release would no longer serve these bloom filters to lite clients. My Breadwallet would not be able to connect to Bitcoin Core nodes and do this by default, but then again, anyone could turn it back on.

Someone argued that Chainalysis is already running all of these bloom filter nodes anyway, and they will continue collecting that information. Many Bitcoin Core nodes are running versions that are more than a year old, so they are not going anywhere soon. You will still be able to run some lite clients. I think Breadwallet runs some nodes too. You can always run your own nodes as well, and serve the bloom filters to yourself.

Does anyone use a wallet, or know they are using a wallet, that is a lite SPV client? Electrum doesn't do bip37 bloom filters; it uses a different trust model.

Is the idea that bip157 would be on by default, to replace it? Will Neutrino be on by default? It is for btcd. I imagine bitcoind will have something similar. They are just network commands. You store a bit more locally, with the Neutrino filters. You have to keep track of it. If there is a coinbase commitment or whatever, you are going to have to check that too. That would have to be a soft fork.

Lightning news (Buck Perley)

I am going to go through the lightning topics from the socratic seminar list. I was on a plane for a few hours yesterday, so I prepared some questions and hopefully we can spark some discussion around them.

Watchtowers

https://blog.bitmex.com/lightning-network-part-4-all-adopt-the-watchtower/

Bitmex had something about running watchtowers. The nice thing about their article is that they go through the different scenarios when you are running lightning and what enforces good behavior on lightning. If you look at the justice transaction - in lightning, when you have two parties entering a channel and constantly rebalancing funds without having to publish a transaction, the way non-publication of old state is enforced is through a penalty or justice transaction. If someone tries to publish an old state, you can take all the funds in the channel as a way of punishing them. This is called the justice transaction. One of the problems with this, and with lightning in general, is that your node has to be online all the time, because the only way to publish the justice transaction is if you are online and your node notices that the wrong state has been published.

lnd 0.7 was recently released with watchtowers available. What a watchtower does is allow you to be offline. Basically, if you are offline, you can hire another node to watch the blockchain for you, and it will publish the justice transaction on your behalf. There are some interesting constructions where you can pay the watchtower by splitting the funds of the penalty transaction with it. They don't talk about that in the article.

One interesting thing is comparing justice transactions with eltoo, where they have SIGHASH_NOINPUT. I thought that was an interesting discussion point.

Q: I upgraded my node so I can use that functionality. How do you transact with other nodes and set them up to be a watchtower? It's not clear to me how that works.

A: Basically you have to find one, point your node at it, and say this is going to be my watchtower node. This adds an interesting aspect as far as the economics of lightning are concerned: ... people's incentives to run routing nodes, route funds and earn fees just for having good uptime. Casa published their node heartbeat thing where they actively reward people. I forget the mechanics of how they keep them honest. So you give them updates of the justice transaction; right now there is a privacy trade-off. Without eltoo, they have to have every state update in the channel. The nice thing about eltoo is that you basically don't have to store all that state. With eltoo, you don't need to remember the intermediate states, just the latest one.

Q: So other nodes are providing watchtower services to me; and unless I upgrade my node to be a watchtower, other people can do the same. Do you have to open a channel?

A: No, you just give them the raw transaction.

Q: Are they encrypting the justice transaction?

A: I'm not sure. Mechanisms were discussed to split it up even further. The idea was that the watchtower would only learn about your transaction once it is published; they couldn't know the details of the transaction beforehand. They would be constantly watching transactions and trying to decrypt them all the time.

Q: Has anyone tried to commercialize any of this?

A: Well, nobody has been able to commercialize anything in lightning. lnbig has locked up 5 million dollars in the lightning network and is making 20 dollars a month. At this point they are only doing it for altruistic reasons.

Q: One of the arguments is that if bitcoin fees go way up, you have the advantage of having these routing nodes and channels already set up.

A: Yes, but right now it is not a viable business. It could be in the future. Right now, my feeling is that you are not making money from fees, but you are creating liquidity, and this makes it more viable for your customers to use lightning. So really their business model is more about creating liquidity and improving utility rather than making money. The idea is that people earn fees as watchtowers, earn routing fees, and earn by increasing liquidity, and there is another business model where people can pay for inbound liquidity. Those are the three main lightning network business models I know of.

Steady-state model for LN

https://github.com/gr-g/ln-steady-state-model

LN guide

https://blog.lightning.engineering/posts/2019/08/15/routing-quide-1.html

Is there anyone here running a lightning node? OK, a few. One of the big knocks against lightning is that it is not super usable. Part of that is being addressed on the engineering side with watchtowers, autopilot and new GUIs. Another big part is simply guides on how to run a lightning node. They go through the useful flags you can turn on. If you ever run lncli help, it is a huge menu of things to keep in mind. Any horror stories about the headaches of dealing with lightning, and things that would be helpful?

Q: More inbound liquidity.

A: What do you think could help with that? Is there anything in the pipeline that could be useful?

Q: Grease. If you download their wallet, they give you 100 dollars of liquidity. When you run out, you have to get more channels. It is kind of a strange incentive problem. It is like a line of credit. Locking up funds is costly on the other end, so they need a good reason to think they should do it.

One of the problems with lightning is that you have to lock up funds. To receive funds, you need a channel where someone has their own funds locked on the other side. Otherwise, your channel can't be rebalanced as you earn more on your side. If Jimmy has 100 dollars of bitcoin on his side of the channel, and I have 100 dollars on my side, someone can pay Jimmy up to 100 dollars through me. Once he has earned 100 dollars and our channel has rebalanced, he can no longer receive any more money. With on-chain payments, anyone can pay you immediately, and on lightning that is not the case. Liquidity is a big problem.

They have a Loop server. It is basically submarine swaps. It is a tool built by Lightning Labs that takes advantage of submarine swaps; it is a way to move funds from your off-chain lightning wallet. You loop out by sending to another wallet that will send you funds on-chain, and this gives you inbound liquidity. Or you can pay into your own lightning wallet from on-chain funds. If you have seen those store interfaces where you can pay with lightning, or pay with on-chain bitcoin to a lightning wallet, that is what they are using submarine swaps for. It is not cheap either, because there are fees involved. You receive those funds back on-chain, but you have to pay transaction fees at that point. And then there are fees associated with - the Loop server charges fees for this service, which is another business model.

They have a mechanism called Loopty Loop where you keep recursively looping out. You loop funds out, get on-chain funds, loop out again and again. You can keep doing that and get inbound liquidity, but again it is not cheap, and it is not instant. So you are losing some of the benefits of lightning.

Static channel backups

Lightning Labs was just talking about their mobile app. One of the interesting things about this update is that they have static channel backups in iCloud. I was curious whether anyone had thoughts on that. I think it is great that you can have cloud backups for these. It stores the channel state, including what the balance is. If your bitcoin node goes down and all you have is your mnemonic, that's fine. But with LN, you have off-chain state with no record of it on the blockchain. The only record is with the counterparty, but you don't want to trust them. If you don't have backups of your state, your counterparty could publish a theft transaction and you wouldn't know. You could also accidentally publish an old state, which would give your counterparty the opportunity to take all the funds in the channel, which is another thing eltoo can prevent. If you have the app on iOS, these things get updated automatically and you don't have to worry about it, but you are trusting Apple iCloud.

Suredbits playground

This is... this lets you make micropayments over lightning; you can pay for spot prices, or for NBA stats, and if I were to hit something... basically, it is paying per API call for small requests. So it would be almost like on-demand AWS, that's how I think of it.

Boltwall

https://github.com/Tierion/boltwall

On the topic of API stuff, this is something I recently built and published called boltwall. It is nodejs/express-based middleware that you can put in front of routes you want to protect. It is simple to set up. If you have your lightning node configured, you can pass in the necessary configuration. These configs are only stored on your server. The client never sees any of this. Or you can use OpenNode which, for those who haven't used it, is a custodial lightning system where they run the LN node and you put your API key into boltwall. I think it is best suited for machine-to-machine payments.

I used macaroons as part of the authorization mechanism. Macaroons are used in lnd for its authorization and authentication. Macaroons are basically cookies with a lot more detail. Web cookies are normally a json blob that says here are your permissions, it is signed by the server, and you authenticate the signature. What you do with macaroons is that they are basically HMACs, so you can have chains of signed macaroons that are linked together. I have one built here that is a time-based macaroon where you can pay one satoshi for one second of access. When I think about lightning, there are a lot of consumer-level pain points involved.
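
A minimal sketch of that HMAC-chaining idea in Python. This is not boltwall's or lnd's actual implementation; the root key and caveat strings are made up, and it only illustrates how caveats can be appended by anyone while verification still requires the server's root key.

```python
import hmac, hashlib

ROOT_KEY = b"server-root-key"   # hypothetical; known only to the server

def mint(identifier: bytes):
    sig = hmac.new(ROOT_KEY, identifier, hashlib.sha256).digest()
    return {"id": identifier, "caveats": [], "sig": sig}

def add_caveat(mac, caveat: bytes):
    # Anyone holding the macaroon can attenuate it by chaining another HMAC.
    mac["caveats"].append(caveat)
    mac["sig"] = hmac.new(mac["sig"], caveat, hashlib.sha256).digest()
    return mac

def verify(mac):
    # Only the server can recompute the chain from the root key.
    sig = hmac.new(ROOT_KEY, mac["id"], hashlib.sha256).digest()
    for caveat in mac["caveats"]:
        sig = hmac.new(sig, caveat, hashlib.sha256).digest()
    return hmac.compare_digest(sig, mac["sig"])

m = add_caveat(mint(b"user=alice"), b"expires=1566500000")  # e.g. paid for 30 seconds
assert verify(m)
```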

Q: Why is it time-based, rather than paying per request?

A: It depends on the market. Instead of saying I pay per individual request, you could say that instead of the round-trip handshake, you get access for 30 seconds and you are done when the time expires. I built a proof-of-concept app that is like yalls (which is like a medium.com) for reading content, where instead of paying a lump sum for a piece of content, you say oh, I'll pay for 30 seconds and see whether I want to keep reading based on that. It allows more flexible pricing mechanisms where you can have much finer price discrimination based on demand.

Hardware wallet rant

https://stephanlivera.com/episode/97/

Lately I have been talking about multisig on hardware wallets. Let's start with something that is bad, and then show something better. Go ahead and fire up electrum. Pass the --testnet flag. We are not going to do the personal server thing.

https://github.com/gwillen/bitcoin/tree/feature-offline-v2

https://github.com/gwillen/bitcoin/tree/feature-offline-v1

https://github.com/gwillen/bitcoin/tree/feature-offline

We have a Bitcoin Core full node here. We have QT right now, but you can use the cli. There is a spendable component, and that is nothing because everything I have is watch-only. I am using Bitcoin Core for my consensus rules, but I am not using it as a wallet. I am just watching addresses, keeping track of transaction history, balances, that sort of thing. So we have electrum personal server running and Bitcoin Core running. So I start electrum and run it on testnet. I also set a flag to say: don't accidentally connect to someone else and tell them what all my addresses are. You put this in the config file, and also on the command line again just to be sure... Yes, you could also use firewall rules; those could be smart in the future.

We can look at a transaction in electrum, so it is saying I have this bitcoin which we can see I have here and my recent transactions, and we can see that I received it. Now, if I am going to receive more bitcoin, there is this cool "show on trezor" button. If I hit it, it pops up on the trezor and displays it. This is an essential part of receiving bitcoin; you don't ask for your receive address on your malware-infected computer. You want to do this check on a quorum of hardware wallets. Do you really want to go to 3 different hardware wallet locations before receiving funds? If you are receiving 100 million dollars, then yes you do. If you are doing 3-of-5 and you only confirm on 2-of-5, then the attacker could hold 3-of-5 while the 2-of-5 you checked have confirmed they are participants. Coldcard will do a thing where it registers the group of pubkeys so it knows we are all in this together... Coldcard has something like 3 options: one is uploading a text file with the pubkeys. Another is that when you send it a multisig output, it will offer to create it, show you the xpubs and ask whether you want to register it; and the third is to just trust, which is what the others do. Casa gives you all the xpubs... that is another way this works; you can put those into an offline, airgapped electrum client that never touches the internet, and it can generate receive addresses. So you can say, well, this machine that has never touched the internet says these xpubs give me these addresses, so 2-of-5 plus offline electrum, then maybe I am willing to go ahead. There are QR codes built in for setting these up.

I don't like it when the trezor is connected to this machine, because I think the machine may be infected with malware. But this device could also be a fake trezor; it could be a keyboard that installs malware or something, and I wouldn't even see it typing the malware urls. If we have three different hardware devices, I want four laptops. One that is connected to the bitcoin network; and each of the other three laptops is connected to one hardware wallet. I pass the bitcoin transactions between them by QR code. That whole ecosystem of computer plus hardware wallet can be eternally quarantined and never connected to the internet. So we can build a hardware airgap into this.

I recommend a laptop because they have webcams and batteries. In this demo, we have to pick up the laptops and point the screens at the cameras. A nice portable hardware wallet with a QR code scanner, where you just pick them up or something, that would be nice. With desktops this is going to be painful, because you have to drag your computer to the safety deposit box. Keep in mind that many banks don't have power outlets in their vaults, so you need a battery. Really, any 64-bit machine should be fine. Historically I have used 32-bit, but tails no longer supports that and some Ubuntu versions complain. In this demo we are going to use native segwit, and it is a multisig setup, so choose that option.

Electrum is very finicky. I hit the back button. I went back to check whether this was the right one and then I lost everything. I am using a hardware wallet with deterministic key derivation, so I can get back to it. The back button should ask whether you really want to undo all this work. The big caveat is: do not press the back button.

You may have seen my Twitter threads. I would accept a really bad hardware wallet if it allowed multisig. Adding a second hardware wallet is purely additive and can help protect against hardware wallet bugs. On twitter, the wallet manufacturers said it wasn't a big deal. There are three big problems with Ledger. It doesn't support testnet. They take the public key and show the mainnet representation, and ask whether you want to send there. It's not that they never supported it; they supported it in the past, and then dropped support. So no testnet. They also have no mechanism to verify a receive address. Only if you want to use it insecurely will it show it to you. The third problem is that they don't really support sending either, because they don't do the sum-of-inputs and sum-of-outputs thing. They don't validate what is change and what goes to someone else. They just show you a bunch of outputs and ask whether they look right, but as a human you have no idea what all the inputs and outputs are, unless you are extremely careful and take notes. Otherwise, they could be sending change to your attacker or something. Trezor cannot verify that the multisig address belongs to the same bip32 chain; it cannot verify the quorum, but it can verify its own key. So say it is 3-of-5, you can go to 3 devices that will each say yes, I am one of the five in this 3-of-5, but you have to sign on 3 different devices so that you know you hold 3 of those 5. You can always prove that a wallet is a member of the quorum, except on Ledger. They used to export the data without telling you what the bip32 path was, which is a huge hole. Most wallets can prove they are in a quorum... do they understand that one of the outputs is in the same quorum as the input? As far as I can tell, only Coldcard understands that today. Trezor knows that it is a participant, but it doesn't know which one. So if you are signing with a quorum of devices you control with Trezor then you are not at risk, but if you have a trusted third party co-signing then it gets a bit weird, because they could be increasing the number of keys a trusted third party holds. Native segwit would have allowed us to do the xpubs out of order.

Bitcoin Core can display a QR code, but it can't scan them. That issue has been open for about 2 years.

2-of-2 is bad, because if there is a bug in either of them (not even in both) then your funds are lost forever and you cannot recover them. An attacker could also run a Cryptolocker-style attack against you and force you to give them some amount of the funds to recover whatever you negotiate.

Each of these hardware wallets has firmware, udev rules, and things on the computer side. Some of them are clunky, like connecting to a web browser and installing some crap. Oh God, is my hardware wallet connected to the internet? Well, install it once, and then only use it on the airgap.

You have to verify your receive address on the hardware wallets. Don't just check the last few characters of the address on the hardware device's screen.

When verifying the transaction on the Ledger device, the electrum wallet has a popup that covers the address. The Ledger wallet also doesn't show the value in the testnet address format; it shows a mainnet address. I would have to write a script to check whether this address is really the same testnet address.

Anyway, Chainalysis is probably running all the electrum nodes for testnet.

I would like to say it was easy, but it wasn't. A lot of this was one-time setup cost. It is not perfect. It has bugs. It has problems. But it technically works. At the end of the day, you have to get signatures from two different pieces of hardware. You can be fairly relaxed about how you set things up.

P: Si se utiliza coldcard, ¿se puede obtener el xpub sin tener que enchufar la coldcard?

R: Creo que puedes escribir xpubs en la tarjeta sd.

R: Lo que realmente quiero es… parece que hay algunas cosas como mostrar una dirección, que sólo puedes hacer si la conectas. Muchas de las opciones están enterradas en los menús. La Coldcard es definitivamente mi favorita.

P: El Trezor sólo mostraba una salida porque la otra era de cambio, ¿verdad? Así que si usted estaba creando tres salidas, dos que iban a direcciones que no pertenecían al trezor o al quórum multisig, ¿mostraría las dos salidas en el trezor?

R: Creo que sí. Pero la única manera de saberlo es haciéndolo.

R: Estuve hablando con instagibbs que ha estado trabajando en HWI. Él dice que el trezor recibe una plantilla para lo que es la dirección de cambio; no hace ninguna verificación para definir lo que es el cambio, sólo confía en lo que el cliente dice. Así que puede que no sea mejor que un Ledger. Sólo confía en lo que la máquina caliente sabía. Ledger parece hacer un mejor trabajo porque– el trezor podría estar ocultando el— Coldcard está claramente haciendo el mejor trabajo, así que puedes enseñarle sobre los xpubs y puede hacer la afirmación por sí mismo sin tener que confiar en otros.

Bóvedas (Bryan Bishop)

Genial, gracias por describirlo.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html

https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft

https://bitcoinmagazine.com/articles/revamped-idea-bitcoin-vaults-may-end-exchange-hacks-good

Ermitaño

https://github.com/unchained-capital/hermit

Al principio, en Unchained empezamos usando Electrum. En ese momento, no estábamos haciendo custodia colaborativa. Teníamos todas las claves, era multisig. Así que, en teoría, teníamos empleados y equipos de firmas usando electrum, y era una experiencia terrible. Era todos los días usando electrum, siempre eran experiencias raras. Tampoco me gusta cómo Electrum construye estas direcciones.

Hoy no voy a hablar de multisig, sino que me voy a centrar en esta nueva herramienta llamada Hermit. Quiero dar una pequeña demostración. Haré algunas diapositivas. Es un monedero sharded de línea de comandos para despliegues de airgap. Tiene cierta similitud con los monederos airgap de código QR. En Unchained, nos centramos en la custodia colaborativa donde tenemos algunas de las claves y los clientes también tienen algunas de las claves. Esto no funcionaría si utilizáramos Electrum nosotros mismos, sería demasiado doloroso. Tuvimos que crear un software que lo agilizara para nosotros. El resultado neto es que estamos protegiendo las claves. Hay una gran diferencia entre una organización que protege las claves y un individuo que las protege.

En particular, para nosotros, no somos una empresa orientada a la cartera caliente. Nuestras transacciones son como mínimo de semanas, si no de meses o años, y se planifican con antelación. Nos gusta mantener todo en multisig y con almacenamiento en frío. Nos gustan los airgaps. Los monederos de hardware tienen un espacio de aire temporal, que puede ser agradable. No queremos gastar miles de dólares en HSMs para mantener un número secreto. Desde nuestro punto de vista, tenemos muchos dispositivos repartidos por la empresa para hacer pruebas, etc. Cada desarrollador tiene muchos dispositivos diferentes de cada tipo. Estas carteras de hardware no pueden costar más de 100 dólares cada una, de lo contrario es demasiado caro confiar en los nuevos productos. No nos gusta Trezor Connect, donde saben la transacción que estamos firmando, eso es profundamente frustrante. De nuevo, no somos individuos aquí, esto es una organización. Somos una empresa. Tenemos que escribir algunas cosas explícitamente o de lo contrario se perderá. Como persona, puede que te acuerdes, pero también debes anotarlo. También como organización, tenemos que coordinar. Como persona, recuerdas las llaves, las ubicaciones, cosas así. No necesitas enviarte un correo electrónico para avisar de que tienes que dar el paso 2 en el proceso, mientras que la empresa tiene este requisito. La mayoría de las empresas tienen rotación de empleados, nosotros parece que no, pero podríamos. También hay mucha información sobre nosotros que es pública, como la dirección de este edificio comercial. Tenemos un montón de carteras de hardware aquí en esta ubicación, pero ninguno de ellos importa. También hay problemas de programación, como gente que se va de vacaciones y otras cosas. Una sola persona no puede atender todas las solicitudes de firma, se quemaría. Así que tenemos que rotar, y tener tiempo de actividad, y así sucesivamente.

¿Cuáles son las opciones? Bueno, los monederos de hardware. Queremos animar a los clientes a utilizar carteras de hardware, y esperamos que haya mejores carteras de hardware en el futuro. Son mejores que los monederos de papel. Debido al producto multisig que ofrecemos, creemos que incluso los monederos malos cuando se juntan tienen un efecto multiplicador en la seguridad.

Hoy en día, no creo que sea razonable tener clientes que usen Hermit es demasiado para ellos. Probablemente utilizarán carteras de hardware en el futuro inmediato. Hemos estado utilizando carteras de hardware y nos gustaría pasar a algo como Hermit. Un candidato que miramos y que nos gustó mucho fue un proyecto llamado Subzero en Square, que pretendía ser una herramienta de airgapped y servir como almacenamiento en frío. Es un producto realmente bueno, pero no era suficiente para nosotros. Necesitábamos algo un poco más complejo.

Aquí muestro un diagrama de dos formas diferentes de pensar en la protección de una clave como multisig y Shamir secret sharing. ¿Se puede conseguir algo de redundancia sin usar multisig? Claro que sí, puedes utilizar la compartición de secretos de Shamir. Hay algunas propiedades interesantes como que se requiere que 2 de 3 acciones se combinen juntas en el mismo lugar. Un aspecto sorprendente de este esquema es que si tienes un fragmento, tienes precisamente cero piezas de información. Es un salto discreto en el que en cuanto tienes n fragmentos, lo consigues. No es sólo cortar trozos de una mnemotecnia o lo que sea, y reduce el espacio de búsqueda para un atacante. No es así como funcionan las acciones secretas.

SLIP 39 hace que sea más conveniente hacer fragmentos de Shamir encriptados con mnemónicos. SLIP 39 fue sacado por la gente de Trezor. Por mucho que nos caguemos en las carteras de hardware, tengo que saludar al equipo de SatoshiLabs por adelantarse a todo el mundo y liberar código fundacional como bip32 e implementarlo y hacerlo de una manera de código abierto. Leyendo su código fue como entendí algunas de sus ideas. Otra cosa que han hecho es liberar un sistema de fragmentos Shamir de 2 niveles. Quieren crear una forma de hacer sharding secreto Shamir, sin que todos los shards sean iguales. Puedes distinguir shards más o menos valiosos o distribuirlos a gente de confianza más o menos en función del nivel de seguridad de cada persona o de cada grupo. Así que puedes tener un secreto 1-de-3, y el segundo grupo puede tener una configuración diferente como 3-de-5. Esto no es multisig… donde puedes hacer esto de forma asíncrona en diferentes lugares y nunca estás obligado a estar en un lugar con todas tus claves. Esto no es eso, pero te da flexibilidad.

Voy a hacer una demostración rápida de cómo es el Ermitaño.

Hermit es un software de código abierto. Es “compatible con los estándares” pero es un nuevo estándar. SLIP 0039 no está realmente revisado criptográficamente todavía. Hemos contribuido no sólo con Hermit como una aplicación que utiliza SLIP 39, sino que hemos estado impulsando el código en la capa para decir que esta es la implementación de Shamir que… hasta ahora esta es la que la gente parece estar eligiendo, lo cual es emocionante. Está diseñada para despliegues en el aire, lo cual es bueno.

Hermit no es multisig. Multisig y shard son complementarios. Para un individuo, en lugar de gestionar shards, tal vez gestionar más de una clave. Para nosotros, ya estamos en un contexto de multisig aquí en Unchained, y queremos ser capaces de hacer un mejor trabajo y tener mejores controles de gestión de claves. Hermit tampoco es un monedero online. ¿Cómo supo qué poner aquí? No tiene ni idea. Algo más tiene que producir el código QR con bip174 PSBTs. El mes que viene, estoy emocionado de tener tiempo para presentar lo que creemos que es la otra mitad de esto, una herramienta para los monederos. Un monedero online está produciendo estos PSBTs y honestamente, sugiero imprimirlos. Imprima todos los metadatos, y venga a la sala y luego firme.

Hermit no es un HSM. Es una pieza de software python que se ejecuta en un ordenador portátil básico, que no es un HSM. El Ledger es un pequeño cónclave de alta seguridad que vive en el dispositivo electrónico y tiene propiedades interesantes. En particular, las formas de comunicarse dentro y fuera de él son realmente restrictivas y nunca revelará la clave. Si lo piensas, eso es lo que es una instalación Hermit. Tú controlas el hardware, es completamente de código abierto. Esto es básicamente lo que querías de un HSM, especialmente si lo ejecutas en un contexto que es extremadamente seguro. Sin embargo, los HSM son presumiblemente seguros incluso si los conectas a un portátil infectado con malware.

P: ¿Así que se celebra una ceremonia de firma y los propietarios de los fragmentos entran en la sala, introducen su parte y siguen adelante?

R: Sí, esa es una forma de hacerlo.

P: Así que para producir una firma de bitcoin, se necesita un quórum de fragmentos de cada grupo.

R: Correcto, es desbloquear todos los fragmentos juntos en la memoria en un lugar y luego actuar con eso. Lo que nos gusta de esto es que es una buena mezcla para nosotros porque podemos crear equipos de firma adversarios que se vigilan mutuamente y limitan las oportunidades de colusión. El uso de SLIP 39 es realmente agradable y flexible para las organizaciones.

Trezor afirma que soportará SLIP 39 a finales de verano, lo cual es realmente interesante porque puedes recuperar fragmentos de uno en uno en un Trezor y simplemente caminar hacia cada fragmento y recogerlos y obtener el secreto completo.

Jimmy stuff

Por último, pero no menos importante, Jimmy tiene algo que vendernos. Este es El pequeño libro de bitcoin. Está disponible en Amazon ahora mismo. Tuve siete coautores en esto. Escribimos el libro en cuatro días, lo que fue una experiencia muy divertida. Está pensado para alguien que no sabe nada de bitcoin. Es una lectura muy corta, de 115 páginas. Unas 30 páginas son de preguntas y respuestas. Otras 10 son glosario y cosas por el estilo. Así que son más bien 75 páginas que se pueden leer muy rápidamente. Teníamos en mente a una persona que no sabe nada de bitcoin. Le he dado este libro a mi esposa, que no sabe mucho sobre lo que está pasando; está destinado a ser ese tipo de libro que es entendible. El primer capítulo es “¿qué pasa con el dinero hoy en día?” y ¿qué pasa con el sistema actual? No menciona el bitcoin ni una sola vez, y luego pasa a lo que es el bitcoin. Se cuenta la historia del banco Lehman y se habla de lo que llevó a Satoshi a crear bitcoin. El otro capítulo es sobre el precio y la volatilidad. Preguntamos a mucha gente que conocíamos y que no sabía nada de bitcoin, se preguntan ¿con qué está respaldado? ¿Por qué tiene un precio de mercado? El capítulo cuatro es sobre por qué el bitcoin es importante para los derechos humanos y esto es sólo hablar de ello a nivel mundial y por qué es importante en este momento. Hay una perspectiva muy centrada en Silicon Valley sobre el bitcoin que es que va a interrumpir o lo que sea, pero hay personas reales en este momento que se están beneficiando de bitcoin que no tenían herramientas financieras o cuentas bancarias disponibles antes. Ahora mismo hay gente que escapa de Venezuela por el bitcoin. Hay un descuento de bitcoin en Colombia ahora mismo, porque hay muchos refugiados que salen de Venezuela con su riqueza en bitcoin y lo venden inmediatamente en Colombia para empezar su nueva vida. Hay taxistas en Irán que me preguntan por el bitcoin. Esto es algo real, chicos. Conseguir esa perspectiva global es un gran objetivo de este libro. El capítulo cinco es una historia de dos futuros y aquí es donde especulamos sobre cómo sería el futuro sin bitcoin, y luego cómo sería el futuro con bitcoin. Por último, aquí es donde termina el libro, y luego tenemos un montón de preguntas y respuestas y cosas que usted puede querer saber. Hay preguntas como, ¿quién es Satoshi? ¿Quién controla el bitcoin? ¿No es demasiado volátil? ¿Cómo se puede confiar en él? ¿Por qué se han pirateado tantos intercambios? Hay toda una sección sobre la cuestión energética. Todo tipo de cosas por el estilo. Recursos adicionales, como la página de Lopp, podcasts, libros, sitios web, cosas así. Probablemente voy a enviar esto con mis tarjetas de Navidad o algo así. La mitad de mis amigos no tienen ni idea de lo que estoy haciendo aquí. Esta es mi manera de informarles. Es el número en Amazon para la categoría de monedas digitales.

+Meetup

https://twitter.com/kanzure/status/1164710800910692353

Introduction

Hi. The idea was to do a more Socratic-style meetup. This was popularized by BitDevs NYC and spread to SF. We tried it a few months ago with Jay. The idea is that we go through research news, newsletters, podcasts, and talk about what has been happening in the bitcoin technical community. We are going to have different presenters.

Mike Schmidt will talk about some of the Optech newsletters he has contributed to. Dhruv will talk about Hermit and Shamir secret sharing. Flaxman will show us how to set up a multisig hardware wallet with Electrum. He will show how this can be done and some of the things we have learned. Bryan Bishop will talk about his vaults proposal that came out recently. Ideally each of these topics would run about 10 minutes, but they will probably go a bit long. We want a lot of audience participation and to keep it really interactive.

Bitcoin Optech newsletters

I don't have anything prepared, but we can open some of these links and I'll give my perspective or my understanding. If people have ideas or questions, just speak up.

Newsletter 57: Coinjoin and joinmarket

https://bitcoinops.org/en/newsletters/2019/07/31/

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017169.html

Fidelity bonds for providing sybil resistance to joinmarket. Has anyone used joinmarket before? No? Nobody? Nice try... Okay. So, joinmarket is a wallet designed specifically for doing coinjoins. A coinjoin is a way of doing some mixing or coin flipping to increase the privacy or fungibility of your coins. There are a few different options for it. It essentially uses IRC chatbots to solicit makers and takers. So if you actually want to mix your coins, you are a taker, and a maker on the other side puts up funds to mix with your funds. So there is this maker/taker model, which is interesting. I haven't used it, but it appears to be facilitated over IRC chat. The maker, the person putting up the money, doesn't necessarily need privacy, and earns a small percentage on their bitcoin. It is all done with smart contracts, and your coins are never at risk, except to the extent that they sit in a hot wallet to interact with the protocol. On the sybil resistance they are talking about here: Chris Belcher has a great privacy entry on the bitcoin wiki, so check that out at some point. He is one of the joinmarket developers. He realizes that it costs very little for a malicious actor to flood the network with a bunch of makers, and this breaks the privacy, because the chances that you run into a malicious or Chainalysis-type snooping company go up. It's not that they can take your coins, but they would be invading your privacy. The cost for them to do this is quite low, so the chances that they are doing it are quite high.

Like bitcoin mining, this is sybil resistance through burning energy for proof-of-work. There are two potential kinds of proof of work in this scenario for sybil resistance. One is that you can burn bitcoin, and the other is that you can lock up bitcoin; both are proof that you have some skin in the game. You can prove either of these on-chain, and it is a way of tying the fact that you have locked these coins, and locked them once, to this IRC nick, and that gives you credibility to trade as a normal person. So you can't have 1000 chatbots snooping around... It is 30 to 80,000 BTC. That would be the lockup. The idea is to lock up this amount of BTC to occupy some share of the total capacity of the joinmarket. It wouldn't be worse than the current situation, where they have the ability to do this anyway, so this makes it more expensive. It also makes it more expensive for the average user, which is the downside. The cost of legitimate makers locking or burning their coins is going to be passed on to the takers. The way it is set up now, the mining fee is substantially more than what these makers earn for doing the mixing, so the theory according to Chris is that people would be willing to pay a higher fee for mixing, because they are already paying 10x that in mining fees. I don't know how many coinjoins you can do in a day, but there are public lists of the makers, what they will charge, and what their capacity is. There are people putting up 750 BTC that you can mix with, and they charge 0.0001% or something. The higher cost pays for the sybil protection; it's a natural fee. If you are paying 10 times as much to get the transaction processed on the bitcoin network, then maybe you are willing to put up a few more sats to pay for this sybil resistance.
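
To make the sybil math concrete, here is a toy simulation (my own illustration, not the exact construction in Belcher's post): assume a taker picks each of k makers with probability proportional to a maker's bond weight, and assume that weight grows quadratically in the value locked, mimicking the proposal's superlinear weighting. The specific numbers (50 honest makers with 10 BTC each, an attacker with 100 BTC total) are hypothetical.

```python
import random

def pick_makers(weights, k):
    """Draw k distinct makers, each pick weighted by fidelity-bond weight."""
    remaining = dict(weights)
    chosen = []
    for _ in range(k):
        total = sum(remaining.values())
        r = random.uniform(0, total)
        acc = 0.0
        pick = None
        for name, w in remaining.items():
            acc += w
            if r <= acc:
                pick = name
                break
        chosen.append(pick)
        del remaining[pick]
    return chosen

def attacker_owns_all(n_sybils, exponent, trials=5000, k=3):
    """Estimate P(all k makers in a coinjoin belong to the attacker).

    Toy scenario: 50 honest makers lock 10 BTC each; the attacker locks
    100 BTC total, split evenly over n_sybils identities.
    weight = (locked value) ** exponent  (exponent=2 mimics superlinear bonds).
    """
    weights = {f"honest{i}": 10.0 ** exponent for i in range(50)}
    for j in range(n_sybils):
        weights[f"sybil{j}"] = (100.0 / n_sybils) ** exponent
    hits = sum(
        all(name.startswith("sybil") for name in pick_makers(weights, k))
        for _ in range(trials)
    )
    return hits / trials

for n in (3, 10, 50):
    print(n, attacker_owns_all(n, exponent=1), attacker_owns_all(n, exponent=2))
```

Under linear weighting, splitting the lockup across many sybil identities costs the attacker nothing; under the superlinear weighting it collapses their total weight, which is the point of the design.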

The Samourai and Wasabi wallet teams had some interesting discussions. They were talking about address reuse and how much it actually degrades privacy. I don't think it is a settled topic; they both keep going back and forth attacking each other. For any of these coinjoins, everyone is exposed to the coinjoin to some extent. So there are always tradeoffs. Higher cost, some protection, still not perfect; a company could be willing to lock up those coins. One interesting thing about this is that it raises the cost of Chainalysis services - they will have to charge their customers more; so this reduces their margins and maybe we can put them out of business.

Newsletter 57: signmessage

https://github.com/bitcoin/bitcoin/issues/16440

https://github.com/bitcoin/bips/blob/master/bip-0322.mediawiki

Bitcoin Core has the ability to do signmessage, but this functionality was only for single-key pay-to-pubkeyhash (P2PKH). Kallewoof has opened a pull request that gives you that same functionality with other address types. So for segwit, P2SH, etc... I think it is interesting, and it is forward compatible with future versions of segwit, so taproot and Schnorr are included; you would have the ability to sign for those scripts with these keys, and it is backwards compatible because it has the same functionality for single-key signing. Yes, it could be used for a proof of reserves. Steven Roose made the fake output with the message; that is his proof-of-reserves tool. It builds an invalid transaction to do a proof of reserves. If Coinbase wanted to prove it has the coins, it could create a transaction that looks a lot like a valid transaction but is technically invalid, yet still has valid signatures. bip322 is just signing the message.

All you can do with the signature is prove that at some point you had that private key. If someone steals your private keys, you can still sign with your private key, but you no longer have the coins. You have to prove that you don't have it; or that someone else doesn't have it. Or that, at the current block height, you had the funds. That is the real challenge of proof of reserves; most of the proposals try to move the funds.

Newsletter 57: Bloom filter discussion

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-July/017145.html

https://github.com/bitcoin/bitcoin/issues/16152

There was some consternation here about disabling bloom filters by default in Bitcoin Core. Right now if you are using an SPV wallet or client, like Breadwallet which uses SPV... The discussion was that in the prior week, a pull request was merged that disables bloom filters by default. A new Bitcoin Core release would no longer serve these bloom filters to lite clients. My Breadwallet would not be able to connect to Bitcoin Core nodes and do this by default, but again somebody could turn it back on.
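
For context, a BIP37-style bloom filter is just a probabilistic set: the lite client inserts its scripts or outpoints, the full node tests every transaction against it, and the only privacy comes from false positives. A minimal toy version (SHA256 in place of BIP37's murmur3 hashing, made-up parameters):

```python
import hashlib

class ToyBloomFilter:
    """Illustrative bloom filter: m bits, k hash functions (not BIP37's exact scheme)."""
    def __init__(self, m_bits=1024, k=4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, item: bytes) -> bool:
        # False means definitely not inserted; True means "probably inserted".
        # Those false positives are the wallet's only privacy margin.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

f = ToyBloomFilter()
f.add(b"my_script_pubkey_1")
f.add(b"my_script_pubkey_2")
print(f.maybe_contains(b"my_script_pubkey_1"))    # True
print(f.maybe_contains(b"someone_elses_output"))  # almost certainly False
```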

Somebody argued that Chainalysis is already running all of these bloom filter nodes anyway, and they will keep collecting that information. A lot of Bitcoin Core nodes out there are running versions that are more than a year old, so they aren't going anywhere soon. You will still be able to run some lite clients. I think Breadwallet runs some nodes too. You can always run your own nodes as well, and serve the bloom filters to yourself.

Does anyone use a wallet, or know that they are using a wallet, that is a lite SPV client? Electrum doesn't do bip37 bloom filters; it uses a trusted-server model.

Is the idea that bip157 would be on by default, to replace it? Will Neutrino be on by default? It is for btcd. I imagine bitcoind will get something similar. They are just network commands. You store a bit more locally with the Neutrino filters. You have to keep track of it. If there is a coinbase commitment or whatever, you are going to have to check that too. That would have to be a soft fork.

Lightning news (Buck Perley)

I am going to go through the lightning topics from the socratic seminar list. I was on a plane for a few hours yesterday, so I prepared some questions and hopefully we can tease out some issues around them.

Watchtowers

https://blog.bitmex.com/lightning-network-part-4-all-adopt-the-watchtower/

Bitmex had something about running watchtowers. The nice thing about their article is that they go through the different scenarios when you are running lightning and what enforces good behavior on lightning. Look at the justice transaction - in lightning, when you have two parties who enter a channel and constantly rebalance funds without having to publish a transaction, the way you enforce not publishing a previous state is with a penalty or justice transaction. If someone tries to publish an old state, you can take all the funds in the channel as a way of punishing them. That is called the justice transaction. One of the problems with this, and with lightning generally, is that your node has to be online all the time, because the only way to publish the justice transaction is if you are online and your node notices that the wrong state has been published.

lnd 0.7 was recently released with watchtowers available. What a watchtower does is let you be offline. Basically, if you are offline, you can hire another node to watch the blockchain for you, and it will publish the justice transaction on your behalf. There are some interesting constructions where you can pay the watchtower by splitting the funds of the penalty transaction with it. They don't talk about that in the article.

One thing that is interesting is comparing justice transactions with eltoo, where they have SIGHASH_NOINPUT. I thought that was an interesting discussion point.

Q: I upgraded my node so I can use that functionality. How do you transact with other nodes and set them up to be a watchtower? It isn't clear to me how that works.

A: You basically have to find one, point your node at it, and say this is going to be my watchtower node. This adds an interesting aspect to the economics of lightning: people's incentives to run routing nodes, route funds, and earn fees just for having good uptime. Casa published their node heartbeat thing where they actively reward people. I forget the mechanics of how they keep them honest. So you give the watchtower updates of the justice transaction; right now there is a privacy tradeoff. Without eltoo, they have to hold every state update in the channel. The nice thing about eltoo is that you basically don't have to store all that state. With eltoo, you don't need to remember the intermediate states, only the latest one.

Q: So other nodes are providing me with watchtower services; and unless I upgrade my node to be a watchtower, other people can do the same. Do you have to open a channel?

A: No, you just give them the raw transaction.

Q: Are they encrypting the justice transaction?

A: I am not sure. There were mechanisms discussed to split it up even further. The idea was that the watchtower would only learn about your transaction once it is published; they couldn't know the details of the transaction ahead of time. They would be constantly watching transactions and trying to decrypt them all the time.
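
The construction usually described for this is roughly: for each revoked state, the client hands the tower a hint taken from the breach txid plus a blob encrypted under a key also derived from that txid, so the tower can only decrypt once the breach actually confirms. A minimal toy sketch of that idea (not lnd's actual wire format, and using a SHA256-derived keystream purely for illustration rather than real authenticated encryption):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Client side: for each revoked state, hand the tower (hint, blob).
def make_backup(breach_txid: bytes, justice_tx: bytes):
    hint = breach_txid[:16]      # enough for the tower to spot the breach txid
    enc_key = breach_txid[16:]   # remaining bytes act as the decryption key
    return hint, xor(justice_tx, enc_key)

# Tower side: when a confirmed txid matches a stored hint, decrypt and broadcast.
def tower_check(seen_txid: bytes, stored: dict):
    blob = stored.get(seen_txid[:16])
    if blob is not None:
        justice_tx = xor(blob, seen_txid[16:])
        print("breach detected, broadcasting:", justice_tx)

breach_txid = hashlib.sha256(b"revoked commitment tx").digest()
hint, blob = make_backup(breach_txid, b"<serialized justice tx>")
tower_check(breach_txid, {hint: blob})
```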

Q: Has anyone tried to commercialize any of this?

A: Well, nobody has been able to commercialize anything on lightning. lnbig has locked up $5 million on the lightning network and is making $20 a month. At this point, he is just doing it for altruistic reasons.

Q: One of the arguments is that if bitcoin fees go way up, you have the advantage of already having these routing nodes and channels set up.

A: Yes, but right now it is not a viable business. It could be in the future. Right now, my sense is that you are not making money from fees, but you are creating liquidity, and that makes it more viable for your customers to use lightning. So really their business model is more about creating liquidity and helping utility rather than making money. The idea is that people earn fees as watchtowers, routing fees, increasing liquidity, and there is another business model where people can pay for inbound liquidity. Those are the three main lightning network business models I know of.

Steady-state model for LN

https://github.com/gr-g/ln-steady-state-model

LN guide

https://blog.lightning.engineering/posts/2019/08/15/routing-quide-1.html

Is there anyone here running a lightning node? Okay, a few. One of the big knocks against lightning is that it is not super usable. Part of the response is that they are trying to help on the engineering side with watchtowers and autopilot and new GUIs. Another big part is simply guides on how to run a lightning node. They go through the useful flags you can turn on. If you ever run lncli help, it is a huge menu of things to keep track of. Any horror stories or headaches from dealing with lightning, and things that would be helpful?

Q: More inbound liquidity.

A: What do you think could help with that? Is there anything in the pipeline that could be useful?

Q: Grease. If you download their wallet, they give you $100 of liquidity. When you run out, you have to get more channels. It is kind of a weird incentive problem. It is like a line of credit. Locking up the funds is costly on the other end, so they need a good reason to think they should do it.

One of the problems with lightning is that you have to lock up funds. To receive funds, you need a channel where someone else has their own funds locked up. Otherwise, your channel can't be rebalanced as you earn more on your side. If Jimmy has $100 of bitcoin on his side of the channel, and I have $100 on my side, someone can pay Jimmy up to $100 through me. Once he has earned $100 and our channel has rebalanced, he can't receive any more. With on-chain payments, anybody can pay him immediately; on lightning that is not the case. Liquidity is a big problem.
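
A toy model of that arithmetic (hypothetical numbers, a single channel, ignoring fees and reserve requirements):

```python
class Channel:
    """Toy payment channel: total capacity is fixed, only the split moves."""
    def __init__(self, local, remote):
        self.local, self.remote = local, remote

    def receive(self, amount):
        # You can only receive what the other side has available to push to you.
        if amount > self.remote:
            raise ValueError("not enough inbound liquidity")
        self.remote -= amount
        self.local += amount

ch = Channel(local=100, remote=100)   # Jimmy's side vs. the routing peer's side
ch.receive(100)                       # fine: the peer pushes its whole $100 across
print(ch.local, ch.remote)            # 200 0 -> no inbound liquidity left
try:
    ch.receive(1)
except ValueError as e:
    print(e)                          # "not enough inbound liquidity"
```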

They have a Loop server. It is basically submarine swaps. It is a tool built by Lightning Labs that leverages submarine swaps; it is a way to move funds out of your off-chain lightning wallet. You loop out by paying another wallet that sends you funds on-chain, and this gives you inbound liquidity. Or you can pay your own lightning wallet from on-chain funds. If you have seen those store interfaces where you can pay with lightning, or pay on-chain bitcoin to a lightning wallet, that is what they are using submarine swaps for. It is not cheap either, because there are fees involved. You get those funds back on-chain, but you have to pay transaction fees at that point. And then the Loop server charges fees for this service, which is another business model.
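
What makes a swap like this trustless is that both legs are locked to the same payment hash, so claiming one side necessarily reveals the preimage that settles the other. A stripped-down illustration of just that linkage (toy code, not Loop's actual protocol or transaction format):

```python
import os, hashlib

# The looping-out user generates the preimage and only shares its hash.
preimage = os.urandom(32)
payment_hash = hashlib.sha256(preimage).digest()

# Off-chain: the user pays the swap server an HTLC conditioned on payment_hash.
# On-chain: the server funds an output the user can only sweep with the preimage.
def sweep_onchain_htlc(candidate_preimage: bytes) -> bool:
    return hashlib.sha256(candidate_preimage).digest() == payment_hash

assert sweep_onchain_htlc(preimage)  # user claims the on-chain funds...
# ...and by doing so publishes the preimage, which lets the server settle the
# off-chain HTLC it is owed. Neither side can take one leg without the other.
```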

They have a mechanism called Loopty Loop where you recursively keep looping out. You loop funds out, get on-chain funds, loop out again and again. You can keep doing that and get inbound liquidity, but again it is not cheap, and it is not instant. So you are losing some of the benefits of lightning.

Static channel backups

Lightning Labs was just talking about their mobile app. One of the interesting things in this update is that they have static channel backups on iCloud. I was curious whether anyone had opinions on that. I think it is great that you can make a cloud backup for these. It stores the channel state, including what the balance is. If your bitcoin node goes down and all you have is your mnemonic, that's fine. But with LN you have off-chain state, and there is no record of it on the blockchain. The only record is with your counterparty, but you don't want to trust them. If you don't have backups of your state, your counterparty could publish a theft transaction and you wouldn't know. You could also accidentally publish an old state, which would give your counterparty the opportunity to take all the funds in the channel, which is another thing eltoo can prevent. If you have the app on iOS, this stuff updates automatically and you don't have to worry about it, but you are trusting Apple iCloud.

Suredbits playground

This is... it lets you pay micropayments over lightning; you can pay for spot prices, or NBA stats, and if I were to click something... basically, it is paying per API call for small requests. So it is almost like on-demand AWS, that is how I think about it.

Boltwall

https://github.com/Tierion/boltwall

On the topic of API things, this is something I recently built and published called boltwall. It is nodejs express-based middleware that you can put in front of routes you want to protect. It is simple to set up. If you have your lightning node configured, you pass in the necessary configuration. These configs are only stored on your server. The client never sees any of this. Or you can use OpenNode, which for those who haven't used it is a custodial lightning system where they run the LN node and you put your API key into boltwall. I think it is best suited for machine-to-machine payments.

I used macaroons as part of the authorization mechanism. Macaroons are what lnd uses for its authorization and authentication. Macaroons are basically cookies with a lot more detail. Web cookies are normally a json blob that says here are your permissions; it is signed by the server, and you authenticate the signature. Macaroons are basically HMACs, so you can have chains of signed macaroons that are tied to each other. I have one built here that is a time-based macaroon where you can pay one satoshi for one second of access. When I think about lightning, there are a lot of consumer-level pain points involved.
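
Here is a minimal self-contained sketch of that HMAC-chaining idea (a toy, not the exact libmacaroons or lnd format): the service mints a macaroon whose signature is an HMAC chain over its caveats, for example an expiry computed from the satoshis paid at 1 sat per second, and later verifies it using only its root key.

```python
import hmac, hashlib, time

ROOT_KEY = b"server root key (never leaves the service)"

def chain(sig: bytes, caveat: str) -> bytes:
    return hmac.new(sig, caveat.encode(), hashlib.sha256).digest()

def mint(identifier: str, sats_paid: int) -> dict:
    caveats = [f"expires={int(time.time()) + sats_paid}"]  # 1 sat buys 1 second
    sig = hmac.new(ROOT_KEY, identifier.encode(), hashlib.sha256).digest()
    for c in caveats:
        sig = chain(sig, c)
    return {"id": identifier, "caveats": caveats, "sig": sig}

def verify(mac: dict) -> bool:
    sig = hmac.new(ROOT_KEY, mac["id"].encode(), hashlib.sha256).digest()
    for c in mac["caveats"]:
        sig = chain(sig, c)
        if c.startswith("expires=") and time.time() > int(c.split("=")[1]):
            return False  # caveat no longer satisfied
    return hmac.compare_digest(sig, mac["sig"])

m = mint("session-123", sats_paid=30)   # 30 sats -> 30 seconds of access
print(verify(m))                        # True until it expires
```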

Q: Why is it time-based, rather than paying per request?

A: It depends on the market. Instead of paying per single request, you could say that instead of the back-and-forth handshake, you get access for 30 seconds and you are done when the time expires. I built a proof-of-concept app that is like yalls (which is like a medium.com) for reading content, where instead of paying a lump sum for a piece of content, you say oh I'll pay for 30 seconds and decide whether you want to keep reading based on that. It allows more flexible pricing mechanisms where you can have much finer price discrimination based on demand.

Hardware wallet rant

https://stephanlivera.com/episode/97/

Lately I have been talking about multisig on hardware wallets. We are going to start with something that is bad, and then show something better. Go ahead and fire up electrum. Pass the --testnet flag. We are not going to do the personal server thing.

https://github.com/gwillen/bitcoin/tree/feature-offline-v2

https://github.com/gwillen/bitcoin/tree/feature-offline-v1

https://github.com/gwillen/bitcoin/tree/feature-offline

We have a Bitcoin Core full node here. We have the QT GUI right now, but you can use the cli. There is a spendable component, and that is nothing, because everything I have is watchonly. I am using Bitcoin Core for my consensus rules, but I am not using it as a wallet. I am just watching addresses, keeping track of transaction history, balances, that kind of thing. So we have electrum personal server running and Bitcoin Core running. So I start electrum and run it on testnet. I also set a flag to say, don't accidentally connect to someone else and tell them what all my addresses are. You put this in the config file, and also on the command line again just to be safe... Yes, you could also use firewall rules, which could be smart in the future.

We can look at a transaction in electrum, so it is saying I have this bitcoin, which we can see I have here, and my recent transactions, and see that I received it. Now, if I am going to receive more bitcoin, there is this cool "show on trezor" button. If I hit this, it pops up on the trezor and displays it. This is an essential part of receiving bitcoin; you don't ask for your receive address on your malware-infected computer. You want to do this check on a quorum of hardware wallets. Do you really want to go to 3 different hardware wallet locations before receiving funds? If you are receiving $100 million, then yes you do. If you are doing 3-of-5, and you only confirm on 2-of-5, then the attacker could have 3-of-5, but the 2-of-5 have confirmed that they are participants. Coldcard will do a thing where it registers the group of pubkeys so that it knows we are all in this together... Coldcard has something like 3 options; one is to upload a text file with the pubkeys. Another is that when you send it a multisig output, it will offer to create it, show you the xpubs, and ask if you want to register it; and the third is to just trust it, which is what the others do. Casa gives you all the xpubs... that is another way this works; you can put those into an offline airgapped electrum client that never touches the internet, and it can generate receive addresses. So you can say, well, this machine that has never touched the internet says these xpubs give me these addresses, so 2-of-5 plus offline electrum, then maybe I am willing to go ahead. There are QR codes built in for setting these up.

I don't like it when the trezor is plugged into this machine, because I think the machine may be infected with malware. But this device could also be a fake trezor; it could be a keyboard that installs malware or something, and I wouldn't even see it typing the malware urls. If we have three different hardware devices, I want four laptops. One that is connected to the bitcoin network; and each of the other three laptops is connected to a hardware wallet. I pass the bitcoin transactions between them by QR code. That whole ecosystem of computer plus hardware wallet can be eternally quarantined and never connected to the internet. So we can build a hardware airgap into this.

I recommend a laptop because they have webcams and batteries. In this demo, we have to pick up the laptops and point the screens at the cameras. A nice portable hardware wallet with a QR code scanner that you just pick up, that would be nice. With desktops, this is going to be painful because you have to drag your computer to the safe deposit box. Keep in mind that a lot of banks don't have power outlets in their vaults, so you need a battery. Really, any 64-bit machine should be fine. Historically, I have used 32-bit, but tails no longer supports that and some versions of Ubuntu complain. In this demo, we are going to use native segwit, and it is a multisig setup, so pick that option.

Electrum is really finicky. I hit the back button. I went back to check whether this was the right one and then I lost everything. I am using a hardware wallet with deterministic key derivation, so I can get back to it. The back button should ask whether you really want to throw away all this work. The big caveat is don't hit the back button.

You may have seen my twitter threads. I would accept a really bad hardware wallet if it allowed multisig. Adding a second hardware wallet is only additive and can help protect against hardware wallet bugs. On twitter, the wallet manufacturers said it wasn't a big deal. There are three big problems with Ledger. It doesn't support testnet. They take the public key and show the mainnet representation, and ask if you want to send there. It's not that they never supported it; they supported it in the past, and then dropped it. So no testnet. They also don't have a mechanism to verify a receive address. Only if you want to use it insecurely will it show it to you. The third problem is that they don't support sending either, because they don't do the sum-of-inputs and sum-of-outputs thing. They don't validate what is change and what goes to someone else. They just show you a bunch of outputs and ask if they look right, but as a human you have no idea what all the inputs and outputs are, unless you are extremely careful and take notes. Otherwise, you could be sending change to your attacker or something. Trezor can't verify that the multisig address belongs to the same bip32 chain; it can't verify the quorum, but it can verify its own key. So say it is 3-of-5: you can go to 3 devices and each will say yes, I am one of the five in this 3-of-5, but you have to sign on 3 different devices so that you now know you are 3 of those 5. You can always prove that a wallet is a member of the quorum, except on Ledger. They used to export the data without telling you what the bip32 path was, which is a big hole. Most wallets can prove they are in a quorum... but do they understand that one of the outputs is in the same quorum as the input? As far as I can tell, only Coldcard understands that today. Trezor knows it is a party, but it doesn't know which party. So if you are signing with a quorum of devices that you control with Trezor, then you are not at risk, but if you have a trusted third party cosigning, then it gets a little weird, because they could be increasing the number of keys that a trusted third party holds. Native segwit would have allowed us to do the xpubs out of order.

Bitcoin Core can display a QR code, but it can't scan them. The issue has been open for about 2 years.

2-of-2 is bad because if there is a bug in either one of them (not even both), then your funds are lost forever and you can't recover them. An attacker could also run a Cryptolocker-style attack against you, and force you to give them some amount of the funds to get back whatever you negotiate.

Each of these hardware wallets has firmware, udev rules, and computer-side stuff. Some of them are clunky, like connect to a web browser and install some crap. Oh god, is my hardware wallet connected to the internet? Well, install it once, and then only use it on the airgap.

You have to verify your receive address on the hardware wallets. Don't just check the last few characters of the address on the hardware device's screen.

When verifying the transaction on the Ledger device, the electrum wallet has a popup that occludes the address. The Ledger wallet also doesn't show the value in the testnet address version; it is showing a mainnet address. I would have to write a script to check whether this address is really the same testnet address.

Anyway, Chainalysis is probably running all the electrum nodes for testnet.

I would like to say it was easy, but it wasn't. A lot of this was one-time setup cost. It's not perfect. It has bugs. It has issues. But it technically works. At the end of the day, you have to get signatures from two different pieces of hardware. You can be pretty relaxed about how you set things up.

Q: If you use coldcard, can you get the xpub without having to plug in the coldcard?

A: I think you can write xpubs to the sd card.

A: What I really want is... it seems there are some things, like displaying an address, that you can only do if you plug it in. A lot of the options are buried in menus. The Coldcard is definitely my favorite.

Q: The Trezor only displayed one output because the other one was change, right? So if you were creating three outputs, two going to addresses that didn't belong to the trezor or the multisig quorum, would it show those two outputs on the trezor?

A: I believe so. But the only way to know is to do it.

A: I was talking with instagibbs, who has been working on HWI. He says the trezor receives a template for what the change address is; it doesn't do any verification to determine what the change is, it just trusts what the client says. So it may be no better than a Ledger. It just trusts what the hot machine knew. Ledger seems to do a better job because... the trezor could be hiding the... Coldcard is clearly doing the best job, so you can teach it about the xpubs and it can make the assertion on its own without having to trust others.

Vaults (Bryan Bishop)

Cool, thanks for describing it.

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html

https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft

https://bitcoinmagazine.com/articles/revamped-idea-bitcoin-vaults-may-end-exchange-hacks-good

Hermit

https://github.com/unchained-capital/hermit

In the beginning, at Unchained we started out using Electrum. At the time, we weren't doing collaborative custody. We had all the keys; it was multisig. So in theory we had employees and signing teams using electrum, and it was a terrible experience. Every day using electrum, there were always weird experiences. I also don't like how Electrum constructs these addresses.

Today I am not going to talk about multisig; instead I am going to focus on this new tool called Hermit. I want to give a little demo. I'll do some slides. It is a command-line sharded wallet for airgapped deployments. It has some similarity to QR-code airgap wallets. At Unchained, we focus on collaborative custody where we have some of the keys and the clients also have some of the keys. This wouldn't work if we used Electrum ourselves; it would be too painful. We had to build software that streamlined this for us. The net result is that we are protecting keys. There is a big difference between an organization protecting keys and an individual protecting them.

In particular, for us, we are not a hot-wallet oriented company. Our transactions are at minimum weeks out, if not months or years, and planned in advance. We like to keep everything in multisig and in cold storage. We like airgaps. Hardware wallets have a temporary airgap, which can be nice. We don't want to spend thousands of dollars on HSMs to hold a secret number. From our point of view, we have a lot of devices spread around the company for testing and so on. Every developer has many different devices of every type. These hardware wallets can't cost more than $100 each, otherwise it is too expensive to trust new products. We don't like Trezor Connect, where they know the transaction we are signing; that is deeply frustrating. Again, we are not individuals here; this is an organization. We are a company. We have to write some things down explicitly or they get lost. As a person, you might remember, but you should write it down too. Also, as an organization, we have to coordinate. As a person, you remember the keys, the locations, things like that. You don't need to email yourself a reminder that you have to do step 2 in the process, whereas the company has that requirement. Most companies have employee turnover; we apparently don't, but we could. There is also a lot of information about us that is public, like the address of this commercial building. We have a bunch of hardware wallets here at this location, but none of them matter. There are also scheduling problems, like people going on vacation and so on. A single person can't service every signing request; they would burn out. So we have to rotate, and have uptime, and so on.

What are the options? Well, hardware wallets. We want to encourage clients to use hardware wallets, and we hope there will be better hardware wallets in the future. They are better than paper wallets. Because of the multisig product we offer, we think even bad wallets, when put together, have a multiplicative effect on security.

Today, I don't think it is reasonable to have clients use Hermit; it is too much for them. They will probably use hardware wallets for the immediate future. We have been using hardware wallets and would like to move to something like Hermit. One candidate we looked at and really liked was a project called Subzero at Square, which was meant to be an airgapped tool and serve as cold storage. It is a really good product, but it wasn't enough for us. We needed something a bit more complex.

Here I am showing a diagram of two different ways of thinking about protecting a key: multisig and Shamir secret sharing. Can you get some redundancy without using multisig? Sure, you can use Shamir secret sharing. There are some interesting properties, like requiring 2-of-3 shares to be combined together in the same place. A surprising aspect of this scheme is that if you have one shard, you have precisely zero pieces of information. It is a discrete jump where as soon as you have n shards, you get it. It is not just cutting up pieces of a mnemonic or whatever, which reduces the search space for an attacker. That is not how secret shares work.
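
A minimal illustration of that threshold property, using a toy 2-of-3 Shamir split over a prime field (for exposition only; SLIP 39 itself works over GF(256) and encodes shares as checksummed mnemonics):

```python
import random

P = 2**127 - 1  # a Mersenne prime; the secret and shares live in the field mod P

def split_2_of_3(secret: int):
    a = random.randrange(P)                     # random slope keeps the secret hidden
    return [(x, (secret + a * x) % P) for x in (1, 2, 3)]

def recover(share_a, share_b):
    (x1, y1), (x2, y2) = share_a, share_b
    # Lagrange interpolation at x = 0 for a degree-1 polynomial
    a = (y2 - y1) * pow(x2 - x1, -1, P) % P
    return (y1 - a * x1) % P

secret = 0xDEADBEEF
shares = split_2_of_3(secret)
print(hex(recover(shares[0], shares[2])))       # any 2 of the 3 shares recover it
# A single share is one point on a random line: every possible secret remains
# equally likely, so one shard carries exactly zero information.
```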

SLIP 39 makes it more convenient to do Shamir shards encoded as mnemonics. SLIP 39 was put out by the Trezor folks. As much as we crap on hardware wallets, I have to salute the SatoshiLabs team for getting ahead of everyone and releasing foundational code like bip32, implementing it, and doing it in an open-source way. Reading their code is how I understood some of their ideas. Another thing they have done is release a 2-level Shamir shard system. They want to create a way of doing Shamir secret sharding where not all shards are equal. You can distinguish more or less valuable shards, or distribute them to more or less trusted people based on the security level of each person or each group. So you can have a 1-of-3 secret, and the second group can have a different configuration like 3-of-5. This is not multisig... where you can do this asynchronously in different places and you are never required to be in one place with all your keys. This isn't that, but it gives you flexibility.

I am going to do a quick demo of what Hermit looks like.

Hermit is open-source software. It is "standards compliant" but it is a new standard. SLIP 0039 is not really cryptographically reviewed yet. We have contributed not only Hermit as an application that uses SLIP 39, but we have been pushing code into the layer underneath to say this is the Shamir implementation that... so far this is the one people seem to be choosing, which is exciting. It is designed for airgapped deployments, which is good.

Hermit is not multisig. Multisig and sharding are complementary. For an individual, instead of managing shards, maybe manage more than one key. For us, we are already in a multisig context here at Unchained, and we want to be able to do a better job and have better key management controls. Hermit is also not an online wallet. How did it know what to put here? It has no idea. Something else has to produce the QR code with bip174 PSBTs. Next month, I am excited to have time to present what we think is the other half of this, a tool for wallets. An online wallet produces these PSBTs, and honestly, I suggest printing them. Print all the metadata, and come into the room and then sign.

Hermit is not an HSM. It is a piece of python software that runs on a commodity laptop, which is not an HSM. The Ledger is a little high-security enclave that lives on the electronic device and has interesting properties. In particular, the ways to communicate into and out of it are really restrictive, and it will never reveal the key. If you think about it, that is what a Hermit installation is. You control the hardware; it is completely open source. This is basically what you wanted from an HSM, especially if you run it in a context that is extremely secure. However, HSMs are presumably secure even if you plug them into a malware-infected laptop.

Q: So you hold a signing ceremony and the shard owners come into the room, enter their share, and move on?

A: Yes, that is one way to do it.

Q: So to produce a bitcoin signature, you need a quorum of shards from each group.

A: Right, it is unlocking all the shards together in memory in one place and then acting with that. What we like about this is that it is a good mix for us, because we can create adversarial signing teams that watch each other and limit the opportunities for collusion. Using SLIP 39 is really nice and flexible for organizations.

Trezor claims they will support SLIP 39 by the end of summer, which is really interesting because you could recover shards one at a time on a Trezor and just walk to each shard, collect them, and get the full secret.

Jimmy stuff

Last but not least, Jimmy has something to sell us. This is The Little Bitcoin Book. It is available on Amazon right now. I had seven co-authors on this. We wrote the book in four days, which was a really fun experience. It is meant for someone who knows nothing about bitcoin. It is a really short read, 115 pages. About 30 pages are Q&A. Another 10 are a glossary and that kind of thing. So it is really more like 75 pages that you can read very quickly. We had in mind a person who knows nothing about bitcoin. I have given this book to my wife, who doesn't know much about what is going on; it is meant to be that kind of book, one that is understandable. The first chapter is "what is wrong with money today?" and what is wrong with the current system. It doesn't mention bitcoin a single time, and then it goes into what bitcoin is. It tells the story of the Lehman bank and talks about what led Satoshi to create bitcoin. The other chapter is about price and volatility. We asked a lot of people we knew who didn't know anything about bitcoin; they wonder, what is it backed by? Why does it have a market price? Chapter four is about why bitcoin matters for human rights, and this is just talking about it at a global level and why it matters right now. There is a very Silicon Valley-centric perspective on bitcoin, which is that it is going to disrupt whatever, but there are real people right now benefiting from bitcoin who didn't have financial tools or bank accounts available before. Right now there are people escaping Venezuela because of bitcoin. There is a bitcoin discount in Colombia right now, because there are a lot of refugees leaving Venezuela with their wealth in bitcoin, and they sell it immediately in Colombia to start their new life. There are taxi drivers in Iran asking me about bitcoin. This is a real thing, guys. Getting that global perspective is a big goal of this book. Chapter five is a tale of two futures, and this is where we speculate about what the future would look like without bitcoin, and then what the future would look like with bitcoin. Finally, that is where the book ends, and then we have a bunch of Q&A and things you might want to know. There are questions like, who is Satoshi? Who controls bitcoin? Isn't it too volatile? How can you trust it? Why have so many exchanges been hacked? There is a whole section on the energy question. All sorts of things like that. Additional resources, like Lopp's page, podcasts, books, websites, things like that. I will probably send this out with my Christmas cards or something. Half of my friends have no idea what I am doing here. This is my way of letting them know. It is the number on Amazon for the digital currencies category.

\ No newline at end of file diff --git a/es/austin-bitcoin-developers/2020-01-21-socratic-seminar-5/index.html b/es/austin-bitcoin-developers/2020-01-21-socratic-seminar-5/index.html index 14e72fe1b2..3f87bbf69d 100644 --- a/es/austin-bitcoin-developers/2020-01-21-socratic-seminar-5/index.html +++ b/es/austin-bitcoin-developers/2020-01-21-socratic-seminar-5/index.html @@ -23,4 +23,4 @@

Socratic Seminar 5

Date: January 21, 2020

Transcript By: Bryan Bishop

Translation By: Blue Moon

Tags: Lightning

Category: -Meetup

https://www.meetup.com/Austin-Bitcoin-Developers/events/267941700/

https://bitdevs.org/2019-12-03-socratic-seminar-99

https://bitdevs.org/2020-01-09-socratic-seminar-100

https://twitter.com/kanzure/status/1219817063948148737

LSATs

Así que solemos empezar con una demostración de un proyecto basado en un rayo en el que ha estado trabajando durante unos meses.

Esta no es una idea original para mí. Fue presentada por roasbeef, cofundador de Lightning Labs en la conferencia Lightning del pasado octubre. Trabajé en un proyecto que hacía algo similar. Cuando presentó esto como una especificación más formalizada, tuvo mucho sentido basado en lo que yo estaba trabajando. Así que acabo de terminar una versión inicial de una herramienta que pone esto en práctica y permite a la gente construir sobre esto. Voy a dar un breve resumen de lo que puede hacer con esto.

Un resumen rápido. Voy a hablar de las claves de la API y el estado de la autenticación hoy en día. A continuación, lo que son los macarrones, que son una gran parte de cómo funcionan las LSATs.

LSAT es un token de autentificación del servicio lightning..

Luego hablaremos de casos de uso y de algunas herramientas como lsat-js y otra. Esperemos que puedas usarlas. Gran parte del contenido aquí, se puede ver la presentación original que Laolu (roasbeef) dio y puso juntos. Algunos de los contenidos están inspirados en esa presentación.

Estado de la autenticación hoy en día: Cualquiera que esté en Internet debería estar familiarizado con nuestros problemas de autenticación. Si estás haciendo login y autenticación, probablemente estás haciendo contraseñas de correo electrónico o OAUTH o algo así. Es asqueroso. También puedes tener claves de API más generales. Si creas una cuenta de AWS o si quieres usar una API de Twilio, entonces obtienes una clave y esa clave va en la solicitud para mostrar que estás autenticado.

Las claves API no tienen realmente ninguna resistencia sibilina incorporada. Si obtienes una clave API, entonces puedes usarla en cualquier lugar, dependiendo de las restricciones del lado del servicio. Añaden restricciones sibilinas al tener que iniciar sesión a través del correo electrónico o algo así. La clave en sí no tiene resistencia sibilina, es sólo una cadena de letras y números y eso es todo.

Las claves de API y las cookies también -que era una forma inicial de lo que son los macarrones- no tienen la capacidad de delegar. Si tienes una clave de API y quieres compartir esa clave de API y compartir tu acceso con alguien, ellos tienen acceso completo a lo que esa clave de API proporciona. Algunos servicios te darán una clave de API de sólo lectura, una clave de API de lectura-escritura, una clave de API de nivel de administrador, etc. Es posible, pero tiene algunos problemas y no es tan flexible como podría ser.

La idea de iniciar sesión y obtener tokens de autenticación es engorrosa. La tarjeta de crédito, el correo electrónico, la dirección postal… todo esto no es tan bueno cuando sólo se quiere leer un artículo del WSJ o del NYT. ¿Por qué tenemos que dar toda esta información sólo para tener acceso?

Así que alguien puede estar utilizando algo que parece las formas correctas … como HTTPS, la commnication está encriptada, eso es genial .. Pero una vez que les das información privada, no tienes forma de auditar cómo almacenan esa información privada. Vemos grandes hacks en grandes almacenes y sitios web que filtran información privada. Un atacante sólo necesita atacar el sistema más débil que contenga tu información personal privada. Este es el origen del problema. Que Ashley Madison se entere de tu aventura no es un gran problema, pero que alguien piratee y exponga esa información es realmente malo.

Recomiendo encarecidamente leer sobre los macarrones. La idea básica es que los macarrones son como las cookies, para cualquiera que trabaje con ellas en el desarrollo web. Codifican cierta información que comparte con el servidor, como los niveles de autenticación y los tiempos de espera y cosas por el estilo, al siguiente nivel. lnd habla mucho de macaroons, pero esto no es algo específico de lnd. lnd simplemente utiliza esto, para la autenticación delegada a lnd. Ellos están usando macarrones, estas herramientas están usando macarrones. Están utilizando macarrones en su servicio de bucle de una manera totalmente diferente a su servicio de bucle. Estas podrían ser usadas en lugar de las cookies, es triste que casi nadie las esté usando.

Funciona en base a HMACs encadenados. Cuando creas un macarrón, tienes una clave de firma secreta, igual que cuando haces cookies. Firmas y te comprometes con una versión de un macarrón. Esto se relaciona con la delegación… puedes añadir lo que se llama advertencias y firmar usando una firma anterior y eso bloquea la nueva advertencia. Nadie que reciba la nueva versión del macarrón con una advertencia que ha sido firmada con una firma anterior, puede cambiarla. Es como una cadena de bloques. Simplemente no se puede revertir. Es muy chulo.

El transporte de pruebas cambia la arquitectura en torno a la autenticación. Estás poniendo la autorización en el propio macarrón. Así que estás diciendo qué permisos tiene alguien que tiene este macarrón, en lugar de poner un montón de lógica en el servidor. Así que si presentas este macarrón y es verificado por el servidor para ciertos niveles de acceso, entonces esto simplifica la autenticación y la autorización del backend mucho más. Desacopla la lógica de la autorización.

lsat es para los tokens de autentificación del servicio lightning. Esto podría ser útil para la facturación de pago por uso (no más suscripciones), no se requiere información personal, es comerciable (con la ayuda de un proveedor de servicios) - esto es algo que roasbeef ha propuesto. Puedes hacer que sea comerciable; a menos que lo hagas en una cadena de bloques, necesitas un servidor central que lo facilite. También está la autenticación de máquina a máquina. Debido a la resistencia sibilina incorporada que no está ligada a la identidad, puedes tener máquinas que paguen por el acceso a otras cosas sin tener tu número de tarjeta de crédito en el dispositivo. También puedes atenuar los privilegios. Puedes vender privilegios.

Voy a introducir algunos conceptos clave para hablar de cómo funciona esto. Hay códigos de estado - cualquiera que haya navegado por la web está familiarizado con el HTTP 404 que es para el recurso no encontrado. Hay un montón de estos números. HTTP 402 se supone que es para “pago requerido” y les tomó décadas para hacer esto a nivel de protocolo sin un dinero nativo de Internet. Así que LSAT aprovechará esto y utilizará HTTP 402 para enviar mensajes a través del cable.

Hay mucha información en las cabeceras HTTP que describen las peticiones y las respuestas. Aquí es donde vamos a establecer la información para LSAT. En la respuesta, tendrá un desafío emitido por un servidor cuando hay un pago HTTP 402 requerido. Esta es una cabecera WWW-Authenticate. También hay otro Authorized-request: autorización. La única cosa única es cómo vas a leer los valores asociados a estas claves de cabecera HTTP. Después de obtener el reto, envías una autorización que satisface ese reto.

Usted paga una solicitud de factura relámpago utilizando un determinado BOLT. Esto se pone en el desafío WWW-Authenticate. La preimagen es una cadena aleatoria de 32 bytes que se genera y forma parte de cada pago relámpago, pero se oculta hasta que el pago ha sido satisfecho. Así es como se puede confiar en los pagos de segunda capa de forma instantánea. Luego hay un hash de pago. Cualquier persona que haya recibido una factura de pago tiene este hash de pago. La preimagen se revela después de pagar. Esto básicamente, el hash de pago generado a partir del hashing de la preimagen… lo que significa que no puedes adivinar la preimagen, a partir del hash de pago. Pero una vez que tienes la preimagen, puedes probar que sólo esa preimagen pudo generar ese hash de pago. Esto es importante para la validación del LSAT.

Digamos que el cliente quiere acceder a algún contenido protegido. Digamos que el servidor entonces dice… que no hay autenticación asociada a esta petición. Voy a hornear un macarrón, y va a tener información que indicará lo que se requiere para el acceso. Esto va a incluir la generación de una factura de pago relámpago. Entonces enviamos un reto WWW-Autenticado de vuelta. Una vez que se paga una factura, se obtiene una preimagen a cambio, que se necesita para satisfacer el LSAT porque cuando se envía el token de vuelta es el macarrón y luego dos puntos y luego esa preimagen. Porque lo que sucede es que la información de la factura, el hash del pago está incrustado en el macarrón. Así que el servidor busca el hash de pago, y la preimagen, y luego comprueba H(preimagen) == hash de pago boom está hecho.

Dependiendo de las limitaciones que quieras poner en el muro de pago, se trata de una verificación de estado. Sabes que la persona que tiene esa preimagen tuvo que haber pagado la factura asociada a ese hash. El hash está en el token del macarrón.

Esto ayuda a desvincular el pago de la autorización. El servidor podría saber que el pago fue satisfecho usando lnd o algo así, pero esto ayuda a desacoplarlo. También ayuda a otros servicios a comprobar la autorización.

La versión actual de LSATs tiene un identificador de versión en los macarrones que genera. La forma en que el equipo de Lightning Labs ha hecho es que tienen un número de versión y se incrementará como la funcionalidad se añade a ella.

En mi herramienta, tenemos configuraciones de pre-construcción para añadir vencimientos. Así que puedes obtener 1 segundo de acceso por cada satoshi pagado, o algo así. Los niveles de servicio es algo en lo que el equipo de Loop ha estado trabajando.

La firma se hace en el momento de la cocción del macarrón. Así que tienes una clave secreta, se asigna un macarrón, y la firma se pasa con el macarrón.

Esto permite que las facturas de máquina a máquina sean resistentes a los sibilinos. Las facturas HODL son algo que he implementado. Las facturas HODL son básicamente una forma de pagar una factura sin que se liquide inmediatamente. Es un servicio de custodia construido con lightning, pero crea algunos problemas en la red de lightning. Hay formas de utilizarlas que no entorpecen la red, siempre que no se utilicen durante largos periodos de tiempo. Yo usaba esto para los tokens de un solo uso. Si tratas de acceder a ellos, y una factura está siendo retenida pero no liquidada, entonces tan pronto como se liquida entonces ya no es válida. También hay una manera de dividir los pagos y pagar una sola factura, pero entonces tienes algunos problemas de coordinación. Creo que esto es similar a la liberación del pararrayos que hizo Reese, que era para los pagos fuera de línea. Tienen un servicio en el que puedes hacer pagos de terceros sin confianza.

Hice lsat-js que es una biblioteca del lado del cliente para interactuar con los servicios de LSAT. Si tienes una aplicación web que tiene esto implementado, entonces puedes decodificarlos, obtener el hash de pago, ver si hay algún vencimiento en ellos. Luego está BOLTWALL donde añades una sola línea a un servidor, y lo pones alrededor de una ruta para la que quieres requerir el pago, entonces BOLTWALL lo coge cuando recibes una petición. Es sólo un middleware nodejs, por lo que podría funcionar con balanceadores de carga.

NOW-BOLTWALL es un marco sin servidor para desplegar sitios web y funciones normales sin servidor; esta es una herramienta CLI que lo configurará. La forma más fácil de hacerlo es btcpay y usar el despliegue con luna node por $8/mes, y luego puedes configurar NOW-BOLTWALL. Luego usando zyke que tienen un nivel gratuito, puedes desplegar un servidor por ahí y ellos mismos están ejecutando balanceadores de carga. Puedes pasarle una url que quieras protocolizar. Así que si usted tiene un servidor en otro lugar, sólo puede desplegar esto en la línea de comandos.

Y luego está lsat-playground, que voy a demostrar rápidamente. Esto es sólo la UX que puse juntos.

LSAT sería útil para un proveedor de servicios que aloja blogs de diferentes autores, y el autor puede estar convencido de que el usuario pagó y obtuvo acceso al contenido - y que el usuario pagó específicamente ese autor, no el proveedor de servicios.

Pondré algunas diapositivas en la página de la reunión.

Seminario socrático

Esto va a ser un repaso rápido de las cosas que están sucediendo en el desarrollo de bitcoin. Yo facilitaré esto y hablaré de los temas, y sacaré algo de conocimiento de la audiencia. Sólo entiendo como el 10% de esto, y algunos de ustedes entienden mucho más. Así que interrúmpeme y salta, pero no des un discurso.

Nos perdimos un mes porque tuvimos el taller de taproot el mes pasado. BitDevsNYC es un meetup en NYC que publica estas listas de enlaces sobre lo que pasó ese mes. He leído algunos de ellos.

OP_CHECKTEMPLATEVERIFY

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-November/017494.html

Este es el trabajo de Jeremy Rubin. La idea es que es una propuesta de pacto para bitcoin. La idea es que el UTXO sólo puede… ((bryan tomó esta)). Este taller va a ser a finales de mes. Dice que va a patrocinar a la gente, así que si estás en esto entonces considéralo. Debido a que puede ser auto-referencial, puedes tener una completitud de turing accidental. La versión inicial tenía este problema. También podría ser utilizado por los intercambios en las transacciones de retiro para prevenir o lista negra de sus futuras transacciones.

Atalaya BOLT

Es bastante interesante. Estuve en Londres y me reuní con estos tipos. Tienen una implementación completa de esto en python. Era agradable y simple, no estoy seguro si es de código abierto todavía. Hay como tres implementaciones de watchtower ahora, y deben ser estandarizadas.

Estafa de PlusToken

ErgoBTC hizo algunas investigaciones sobre la estafa PlusToken. Fue una estafa en China que obtuvo como 200.000 BTC. La gente que la dirigía no era sofisticada. Así que trataron de hacer alguna mezcla… pero barajaron sus monedas de mala manera. Los atraparon. Algún whitehat lo descubrió y fue capaz de rastrear dónde salían los fondos de un intercambio y demás. Aquí hay un hilo de Twitter que habla de cómo el movimiento de estos BTC podría haber afectado al precio. Hace un mes, algunos de los chicos fueron atrapados. La teoría de este tipo es que detuvieron a los subordinados, y el que tenía las llaves no fue detenido. Así que la mezcla continuó, claramente este otro tipo tiene las llaves. También tenían un montón de ETH y fue movido como hace un mes, y el mercado se asustó- el precio de ETH cayó. Así que tal vez tomó una gran posición corta, y luego movió las monedas, en lugar de vender. 200.000 BTC es mucho, realmente se puede mover el precio con esto.

PlusToken estafó 1.900 millones de dólares en todas las monedas, con una página de aterrizaje. Tenían gente en las calles yendo a estos chinos diciendo que compraran bitcoin y lo multiplicaran, es esta nueva cosa de la minería. MtGox era como 500.000 BTC, que era el 7% de la oferta en circulación en el momento. Así que esto podría ser el 2-3% de la oferta.

El tipo también apareció en un podcast donde habló de las herramientas que utilizó para averiguar esto. Este es un tema interesante. Los coinjoins van a ser un tema en muchos de estos. Esta es sólo una cara de coinjoin. Obviamente, el coinjoin que estaba usando era imperfecto.

txstats.com

Este es un visualizador de estadísticas de transacciones de la investigación de BitMex.

Aquí está Murch informando sobre un poco de dumping de intercambio. Él hace el desarrollo de la cartera para bitgo. A menudo habla de cosas de consolidación de UTXO. Alguien volcó 4 MB de transacciones a 1 sat/vbyte. Alguien estaba consolidando un montón de polvo cuando las tarifas eran realmente baratas.

Aquí hay un sitio de datos de rayos. Pierre tenía el nodo número uno. Tiene capacidad, diferentes tipos de canal se cierra. BitMex escribió un artículo informando sobre los cierres no cooperativos, porque puedes ver las operaciones de cierre forzado en el blockchain.

Jameson Lopp tiene algunas comparaciones de implementación de bitcoin. Esto es un análogo. Se trata de las diferentes versiones de Bitcoin Core como la v0.8 y siguientes. A continuación, analiza cuánto tiempo tarda la descarga del bloque inicial, para el blockchain actual. Hay otro para el tiempo que tarda en sincronizarse con el blockchain en la fecha en que fue lanzado. Hubo una gran caída cuando cambiaron de openssl a libsecp256k1. Entonces era enormemente más performante.

Atajos inseguros en MuSig

Se trata de parte de la interactividad en Schnorr MuSig. Hay tres rondas en este protocolo. En este artículo, él está discutiendo todas las maneras que usted puede estropear con él. MuSig es bastante complejo y hay un montón de armas de pie, que es el resumen aquí supongo.

ZenGo firma umbral basada en el tiempo

Multisig en el concepto es conseguir algunas entidades diferentes, donde se puede hacer en la cadena multisig o fuera de la cadena multisig donde se agregan las firmas juntos y se unen. Estos chicos tienen algo así, pero las claves rotan con el tiempo. Puedes tener un escenario en el que todas las partes pierden una clave durante un año determinado, pero como las claves son rotativas, ninguna de ellas pierde una cantidad mínima por encima de una determinada cantidad. Así que el monedero seguiría conservándose aunque todas las personas hayan perdido sus claves. Esto se llama “compartir el secreto proactivamente”. Parece que sería más práctico hacer 3-de-5 y simplemente configurar un nuevo 3-de-5 cuando 1-de-5 lo pierde. A Binance le gusta esto porque es la compatibilidad de shitcoin que les gusta. Ledger también.

Ataque a la tarjeta fría

La forma en que este ataque funciona es que puedes engañarlo para que genere una dirección de cambio en algo como esto… una ruta de derivación en la que tomas el hijo número 44, 0 endurecido, y luego el último es un número enorme. Entonces lo pones en una hoja muy al borde de la clave privada, de tal manera que sería difícil encontrarla de nuevo si la buscas. Técnicamente sigues siendo dueño de las monedas, pero sería difícil gastarlas. Así que era un exploit inteligente. Básicamente, un atacante puede convencer a tu tarjeta fría de que está siendo enviada a “tu” dirección, pero en realidad es una ruta bip32 aleatoria o algo así. Ningún monedero de hardware actualmente rastrea las direcciones de cambio que dan. Así que la idea es restringirlo a alguna brecha de búsqueda… no ir más allá de la brecha o algo así. O podría estar en un sitio web generando un montón de direcciones, por adelantado, para los usuarios o los pagos o algo así. También había algo sobre 32 bits + 1 o algo así, más allá del valor MAX.

Error de Trezor

Trezor tenía un error en el que si tenías una… si estabas tratando de hacer una salida single-sig, y luego tenías una entrada multi-sig y luego un cambio multi-sig, podías inyectar tu propia dirección de cambio multisig o algo así. Tu máquina anfitriona podría hacer esto. Esto fue como hace un mes o un mes y medio. No muestran el cambio, si es que lo tienes. En este escenario, la dirección de cambio de multisig es algo que no posees, y debería tratar eso como un gasto doble o algo así. Esto era un exploit remoto. Trató la dirección multisig de otra persona como tu dirección. Simplemente no se incluyó en el cálculo de la tarifa o algo así.

Hilo de Monero

Alguien tiene un hash malo en su software. Así que es una historia de detectives tratando de averiguar lo que salió mal, como si el sitio web es malo o algo así. Resulta que el binario era malicioso. Puedes ver el trabajo de detective en tiempo real. Alguien fue capaz de conseguir el binario y hacer algunos análisis. Se añadieron dos funciones; una de ellas enviaría su semilla al atacante. Así que si ejecutas este binario y tienes algún monero, entonces el atacante ahora tiene ese monero. Es bastante fascinante. El binario llegó al sitio web de Monero. Eso es bastante aterrador. Este es un buen ejemplo de por qué necesitas comprobar las firmas y los hashes cuando descargas una cartera. Monero estaba sirviendo esto a la gente que estaba descargando el software. Era getmonero.org que estaba sirviendo el malware. Es interesante que tuvieran acceso al sitio, y no actualizaran los hashes md5 o algo así. Bueno, tal vez pensaron que los usuarios comprobarían el sitio web y no el binario que realmente descargaron.

Detenciones de intercambiadores de SIM

Esto era sólo un artículo de noticias. El SIM swapping es cuando entras en una tienda de Verizon y dices que tienes un número, y entonces ponen tu número de teléfono en tu teléfono y entonces puedes robar el bitcoin de alguien o lo que sea. Utilizan las preguntas habituales como cuál es tu nombre de soltera y otra información pública.

Ataque de Vertcoin 51%

Esto ha sucedido ya dos veces. Tuvimos una discusión cuando esto ocurrió hace seis meses. De alguna manera esta moneda sobrevive a los ataques del 51%. ¿Por qué sobreviven? Tal vez es tan especulativo que la gente se encoge de hombros. ¿Qué pasa con el bitcoin o el ethereum que son atacados en un 51%? Así que tal vez todo es comercio especulativo, o los usuarios son demasiado estúpidos o algo así.

El papel del enrutamiento del diente de león en “romper el modelo de privacidad de mimblewimble”

Bitcoin Core 0.19.1

Hubo algún tipo de problema justo después de la v0.19.0, y luego salió la v0.19.1. Hay algunos nuevos comandos RPC. getbalance es mucho mejor. Un montón de correcciones RPC. Eso es genial, así que descárgalo.

Eliminar OpenSSL

OpenSSL fue eliminado por completo. Comenzó en 2012. Mucho de esto aceleró la descarga inicial de bloques. Lo curioso es que a gavinandresen no le gustaba la idea. Pero se convirtió en una gran mejora. Se tardó unos años en eliminar completamente OpenSSL, porque estaba suministrando todas las primitivas criptográficas para firmas, hashing, claves. Se tardó 10 años en eliminar openssl. Lo último que lo necesitaba era bip70. Lo necesitaban para algo.

Selección de monedas por rama y por límite

Es una mejor manera de hacer la selección de monedas al componer las transacciones. Quiere optimizar las comisiones a largo plazo. Así que escribió su tesis para demostrar que esta sería una buena forma de hacerlo. Murch también señaló que la selección aleatoria de monedas era realmente mejor que la solución de aproximación estocástica.

joostjager - enrutamiento permitir ruta …

Puedes pagar a alguien incluyéndolo en una ruta, sin que tenga que darte una factura. Alex Bosworth creó una biblioteca para hacer esto. Tienes que estar directamente conectado a ellos; así que puedes dirigirte a ti mismo a una persona con la que estés conectado.

Último salto opcional a los pagos

Así que aquí puedes decir, puedes definir una ruta diciendo quiero pagar a esta persona y el último salto tiene que ser del punto n-1 al n. Así que si por alguna razón quieres, como si no confiaras en alguien… Entonces quería pagarle, pero quería elegir quién era el último salto. Aunque no sé por qué querrías hacer eso.

lnrpc y otros comandos rpc para lnd

joinmarket-clientserver 0.6.0

¿Alguien usa realmente joinmarket? Si lo hiciera, no te lo diría. ¿Qué?

Hay mucho trabajo en joinmarket. Hay muchos cambios allí. Lo que realmente mola de joinmarket- nunca lo he usado- parece que ofrece la promesa de tener una cartera caliente pasiva sentada ahí ganando un rendimiento de tu bitcoin. Joinmarket tiene tasas de fabricante y tomador. Me alegro de que la gente esté trabajando en esto.

Reunión de la organización de estándares de carteras

Esta fue una transcripción que Bryan hizo de una reunión anterior de desarrolladores de Austin Bitcoin.

Lightning en el móvil: Neutrino en la palma de la mano

Mostró cómo crear una aplicación react-nativa que pudiera hacer de neutrino. Quiere que un usuario móvil no custodio y participante de pleno derecho sea un participante de pleno derecho en la red sin descargar 200 GB de cosas. Creo que la principal innovación no es neutrino, sino que en lugar de tener que escribir a medida para construir el binario de lnd para el móvil, es un SDK en el que sólo tienes que escribir “importar lnd” y eso es todo lo que necesitas para ir.

Nueva lista de correo para el desarrollo de lnd que roasbeef anunció

Probablemente relacionada con la migración de la Fundación Linux…

Derivados de Hashrate

Jeremy Rubin tiene otro proyecto que es una plataforma de derivados de hashrate. La idea es que puedes bloquear las transacciones en minutos o en bloques, y el que llegue más rápido se lleva el pago. Es una forma interesante de implementar un derivado. Es básicamente DeFi. Así que probablemente podrías jugar con esto si tuvieras alguna capacidad de minería de bitcoins. En un mes… Uh, bueno el mercado le pondrá precio. Ese es un buen punto.

Nuevo protocolo stratum para pools de minería (stratum v2)

Aquí se habla de marketing sobre stratum v2.

Lo más interesante es este hilo de reddit. La gente de slushpool está en este hilo con petertodd y bluematt. Algunos de los beneficios son que te ayudará a operar en conexiones de internet menos que ideales. Obtienes bloques y cosas más rápido creo. Una de las cosas interesantes que señaló bluematt es que si estás minando no estás seguro si tu ISP está robando parte de tu hashrate porque no hay autenticación entre tú y el pool y los mineros.

El protocolo ahora enviará plantillas de bloques por adelantado. Entonces, cuando se observan nuevos bloques, sólo enviarán el campo previoushash. Tratan de cargar la plantilla de bloques antes de tiempo, y luego envían 64 bytes para rellenar la cosa para que puedas empezar a minar inmediatamente. Es una optimización interesante. Es un hilo genial si quieres aprender más sobre la minería.

lnurl

Esta es otra forma de codificar las facturas HTTP en las cadenas de consulta HTTP.

BOLT-android

Algunos hackathons de LN

Retiros de LN en Bitfinex

Análisis del bug bech32

Un solo error tipográfico puede acarrear muchos problemas. Una de las cosas interesantes es que cambiando una constante en la implementación de bech32 se arregla el problema. ¿Cómo encontró ese tipo ese error? ¿No estaba Blockstream haciendo pruebas fuzz para prevenir esto? Millones de dólares de presupuesto en pruebas fuzz.

Coinjoins desiguales

Un tipo se retiró de Binance y se retiró y luego hizo coinjoins en Wasabi. Binance lo prohibió. Así que en el mundo de los coinjoins, hay una discusión sobre cómo lidiar con eso. El hecho de que los coinjoins son muy reconocibles. Si sacas dinero de un intercambio y haces un coinjoin, el intercambio lo va a saber. Entonces, ¿qué hay de hacer coinjoins con valores no iguales? Ahora mismo los coinjoins utilizan valores iguales, lo que los hace muy reconocibles. Sólo tienes que buscar estas transacciones no naturales y ya está. Pero, ¿qué pasa con hacer coinjoins con cantidades no iguales para que pueda parecer una transacción de intercambio por lotes o hacer pagos a los usuarios? Los coinjoiners están siendo discriminados. La persona a la que le han dado un tirón de orejas estaba retirando directamente en un coinjoin. No me malinterpretes, no les gustan los coinjoins, pero tampoco seas estúpido. No envíes directamente a un coinjoin. Al mismo tiempo, muestra una debilidad de este enfoque.

Wasabi organizó un club de investigación. Justo después de la cuestión de coinjoin-binance, una semana después Wasabi estaba haciendo algunas cosas alojadas en youtube para desenterrar viejas investigaciones sobre coinjoin de cantidades desiguales. Este es un tema interesante. Alguien tiene una implementación de referencia en rust, y el código es muy legible. Hay una discusión de una hora y media en la que Adam lo interroga. Es bastante bueno. Encontró un error en uno de los documentos… nunca pudo conseguir que su implementación funcionara, y entonces se dio cuenta de que había un error en la especificación del documento, lo arregló y consiguió que funcionara.

Minería fusionada a ciegas con pactos y OP_CTV

Esto es básicamente de lo que hablaba Paul Sztorc cuando nos visitó hace unos meses. Se trata de tener otra cadena asegurada por bitcoin de la que bitcoin no sería consciente, y habría algún token involucrado. La propuesta de Rubén es interesante porque se trata de minería ciega fusionada, que es lo que necesita Paul para sus cosas de truthcoin. Así que se consigue otra cosa gratis si conseguimos OP_CTV.

Un argumento que algunas personas hacen para cualquier nueva característica en bitcoin es que no sabemos qué más podríamos ser capaces de llegar, para usar esto. Como la versión original OP_SECURETHEBAG con la que resultó que se puede hacer turing completo. Tal vez es un caso de uso que queremos; pero mucha gente piensa que la minería fusionada a ciegas no es lo que queremos - no recuerdo por qué. Se ha pensado mucho en si los soft-forks deberían entrar.

ZmnSCPxj en la privacidad del camino

No estoy muy seguro de cómo pronunciar su nombre. ¿Zeeman? Es ZmnSCPxj. Puedes deducir mucha información sobre lo que pasó en la ruta de pagos. La primera parte del correo electrónico es como se puede utilizar esto para averiguar cosas. Así que habla de una vigilancia maligna en un nodo a lo largo de la ruta, pero si lo que si son dos nodos alrededor de la ruta. Puedes desarrollar tablas de enrutamiento inverso si tienes suficiente influencia en la red. Entra a hablar de algunas de las cosas que sucederán con Schnorr, como la descorrelación de la ruta y demás.

ZmnSCPxj en taproot y lightning

Esto es una locura. Esta fue una buena.

Boletines de Bitcoin Optech

c-lightning pasó de estar por defecto en testnet a estar por defecto en mainnet. Han añadido soporte para los secretos de pago. Puedes hacer esta cosa, el sondeo, donde tratas de enrutar pagos falsos a través de un nodo y tratar de evaluar y averiguar lo que puede hacer. Puedes generar preimágenes aleatorias y luego crear un hash de pago a partir de esa preimagen aunque sea inválida. Supongo que esto es una mitigación para eso.

Aquí hay un hilo sobre lo que las torres de vigilancia tienen que almacenar, en eltoo. Una de las ventajas de eltoo es que no tiene que almacenar el historial completo del canal, sólo la actualización más reciente. Entonces, ¿tienen que almacenar la última actualización, o también la transacción de liquidación? ¿Algún comentario al respecto? La verdad es que no conozco demasiado bien elto como para especular sobre eso.

c-lightning agregó métodos RPC de createonion y spendonion para permitir mensajes LN encriptados que el nodo mismo no tiene que entender. Esto permite que los plugins usen la red de rayos de forma más arbitraria para enviar mensajes de algún tipo, y son mensajes encriptados por Tor.

whatsat es una aplicación de texto/chat. Están tratando de conseguir la misma funcionalidad sobre c-lightning.

Las tres implementaciones de LN tienen ahora pagos de rutas múltiples. Esto te permite… digamos que tienes un bitcoin en tres canales diferentes. Aunque tengas 3 BTC, sólo puedes enviar 1 BTC. Multipath te permite enviar tres misiles al mismo objetivo. lnd, eclair y c-lightning soportan esto ahora en algún estado. ¿Se puede usar esto en mainnet? ¿Debería hacerlo? La implementación de lnd lo tiene en el código, pero sólo permiten especificar una ruta de todos modos. Así que en realidad no lo han probado en algo que la gente pueda ejecutar enviando múltiples rutas, pero el código ha sido refactorizado para permitirlo.

Andrew Chow dio una buena respuesta sobre la profundidad máxima de bip32, que es de 256.

Bitcoin Core añadió una arquitectura powerpc.

Ahora hay una lista blanca de rpc. Si tienes credenciales para hacer RPC con un nodo, puedes hacer básicamente cualquier cosa. Pero este comando te permite poner en la lista blanca ciertos comandos. Digamos que quieres que tu nodo lightning no añada nuevos peers en el nivel p2p, lo que te permitiría ser atacado por eclipse. Lightning sólo debe ser capaz de hacer consultas de monitoreo de blockchain. Nicolas Dorier dice que mi explorador de bloques sólo se basa en sendrawtransaction para la difusión. Así que usted quiere a la lista blanca, esto es por credencial de usuario. ¿Tienen múltiples credenciales de usuario para bitcoin.conf?

Esta es la razón por la que lnd utiliza macarrones. Resuelve este problema. No necesitas tener una lista de personas en el archivo de configuración, puedes simplemente tener personas que tengan macarrones que les den ese acceso.

Aquí está lo que Bryan estaba hablando, que es la revisión de fin de año. Te animo a que leas esto, si vas a leer sólo una cosa es el boletín de revisión de fin de año 2019 de Bitcoin Optech. Cada mes del año pasado ha habido alguna gran innovación. Es realmente una locura leer esto. Erlay es realmente grande también, como una reducción del 80% del ancho de banda.

Gleb Naumenk dio una bonita charla en London Bitcoin Devs sobre erlay. Habló de cosas de la red p2p. Te animo a comprobarlo si estás interesado.

El lenguaje descriptor de scripts es como una versión mejor de las direcciones para describir un rango de direcciones. Se parecen a un código con paréntesis y demás. Chris Belcher ha propuesto codificarlo en base64, porque ahora mismo si intentas copiar un descriptor de script por defecto no se resalta. Esto hace que los descriptores de script sean más ergonómicos para la gente que no sabe lo que significan.

Herramienta LN de BitMex

Esta es una herramienta de BitMex que es un sistema de alerta en vivo para los canales. Este era el monitor de horquillas de BitMex.

Caravana

La caravana de Unchained recibió un saludo.

Transacciones anónimas de coinjoin con

Este es un documento que wasabi desenterrado de hace como 2 años.

Luke-jr’s full node chart

El script para producir esto es de código cerrado, por lo que no se puede jugar con él. Pero hay múltiples implementaciones por ahí. Yo sospechaba de esto porque luke-jr está un poco loco, pero gleb parece pensar que es correcto. Estamos en el mismo número de nodos completos que a mediados de 2017. Así que eso es interesante. La línea superior es el número total de nodos completos, y la línea inferior es el número de nodos de escucha que te responderán si intentas iniciar una conexión con ellos. Probablemente quieras ser inalcanzable por razones egoístas, pero necesitas estar localizable para ayudar a la gente a sincronizarse con el blockchain. Mediados de 2017 podría ser el pico de segwit cuando la gente estaba girando nodos, o podría estar relacionado con el precio del mercado. También hubo un pico de precios en junio. Tal vez algunos de estos son para los nodos de relámpago. Apuesto a que mucha gente ya no lo hace.

El tablero de Clark Moody

El dashboard de Moody tiene un montón de estadísticas en tiempo real que puedes ver actualizadas en tiempo real.

Revisión de fin de año de Bitcoin Magazine

Tuvimos un crecimiento del 10% en el número de commits, un aumento del precio del 85%, el dominio de bitcoin subió un 15%, nuestra tasa de inflación es del 3,8%. El volumen diario subió. Segwit pasó del 32% al 62%. El valor de las transacciones diarias subió. El tamaño de la cadena de bloques creció un 29%. El recuento de nodos de Bitcoin se desplomó, bajó, lo que no es tan bueno. Puede ser porque mucha gente tenía sólo discos duros de 256 GB, tal vez por eso se retiraron… sí, ¿pero podar?

arxiv: nueva construcción del canal manuscrito

Lista de ataques a carteras de hardware (de Shift Crypto)

Es una lista bastante interesante. Esta es la razón por la que haces multisig de múltiples vendedores, tal vez. Esto es bastante aterrador.

Las trampas de multisig cuando se usan carteras de hardware

Una de las ideas que la gente no se da cuenta es que si usas multisig y pierdes los redeemSripts o la capacidad de computarlos, pierdes la capacidad de multisig. Necesitas hacer una copia de seguridad de algo más que tu semilla si estás haciendo multisig. Necesita hacer una copia de seguridad de redeemScripts. Algunos proveedores intentan mostrar todo en la pantalla, y otros infieren cosas. Los fabricantes no quieren tener un estado, pero el multisig requiere que mantengas un estado como el de no cambiar a otros participantes del multisig o el de intercambiar información por debajo de ti. Si estás pensando en implementar multisig, mira el artículo

charla de bunnie sobre hardware seguro

El gran punto de esta charla fue: no se puede hacer hash del hardware. Puedes hacer hash de un programa de ordenador y comprobarlo, pero no puedes hacerlo con el hardware. Así que, básicamente, tener hardware de código abierto no lo hace más seguro necesariamente. Él va a través de este gran todas las ramificaciones de que y lo que puede hacer. Él tiene un dispositivo, un teléfono de mensajes de texto que es tan seguro como él puede hacerlo, y lo interesante es que usted podría convertirlo en un hardware.

Conferencia y charlas de la CCC

SHA-1 colisión

Por 70k dólares fueron capaces de crear dos claves PGP, usando la versión heredada de PGP que usa sha-1 para el hash, y fueron capaces de crear dos claves que tenían diferentes ids de usuario con certificados que colisionaban.

Pruebas de falsificación de Bitcoin Core

Hay una página de estadísticas de fuzz testing, y luego un grupo de revisión de Bitcoin Core PR.

lncli + facturas - modo de envío experimental de llaves

Es sólo una manera de enviar a alguien, puede enviar a otra factura. Tuvieron esta función como un año y finalmente se fusionó.

\ No newline at end of file +Reunión

https://www.meetup.com/Austin-Bitcoin-Developers/events/267941700/

https://bitdevs.org/2019-12-03-socratic-seminar-99

https://bitdevs.org/2020-01-09-socratic-seminar-100

https://twitter.com/kanzure/status/1219817063948148737

LSATs

So we usually start with a demo of a lightning-based project that someone has been working on for a few months.

This is not an original idea of mine. It was presented by roasbeef, co-founder of Lightning Labs, at the Lightning Conference last October. I had been working on a project that did something similar. When he presented this as a more formalized specification, it made a lot of sense based on what I was working on. So I just finished an initial version of a tool that puts this into practice and lets people build on top of it. I'll give a quick overview of what you can do with it.

A quick summary. I am going to talk about API keys and the state of authentication today. Then, what macaroons are, which are a big part of how LSATs work.

LSAT stands for Lightning Service Authentication Token.

Then we'll talk about use cases and some tools like lsat-js and others. Hopefully you can use them. For a lot of the content here, you can look at the original presentation that Laolu (roasbeef) gave and put together. Some of the content is inspired by that presentation.

State of authentication today: anyone who has been on the internet should be familiar with our authentication problems. If you are doing login and authentication, you are probably doing email-and-password or OAuth or something like that. It's gross. You can also have more general API keys. If you create an AWS account, or if you want to use a Twilio API, then you get a key and that key goes in the request to show that you are authenticated.

API keys don't really have any sybil resistance built in. If you get an API key, then you can use it anywhere, depending on the restrictions on the service side. They add sybil restrictions by making you sign up with an email or something like that. The key itself has no sybil resistance; it's just a string of letters and numbers and that's it.

API keys and cookies too - which were an early form of what macaroons are - don't have the ability to delegate. If you have an API key and you want to share that API key and share your access with someone, they get full access to whatever that API key provides. Some services will give you a read-only API key, a read-write API key, an admin-level API key, etc. It's possible, but it has some problems and it's not as flexible as it could be.

The idea of signing up and getting auth tokens is cumbersome. Credit card, email, mailing address... none of this is great when you just want to read one article from the WSJ or the NYT. Why do we have to hand over all this information just to get access?

So someone might be doing things that look like the right way... like HTTPS, the communication is encrypted, that's great... But once you give them private information, you have no way to audit how they store that private information. We see big hacks at big retailers and websites leaking private information. An attacker only needs to attack the weakest system that holds your private personal information. That is the root of the problem. Ashley Madison knowing about your affair is not a big deal, but someone hacking in and exposing that information is really bad.

I strongly recommend reading up on macaroons. The basic idea is that macaroons are like cookies, for anyone who works with those in web development. They encode certain information that you share with the server, like authentication levels and timeouts and things like that, taken to the next level. lnd talks a lot about macaroons, but this is not something specific to lnd. lnd just uses them, for delegated authentication to lnd. They are using macaroons, these tools are using macaroons. They are using macaroons in their Loop service in a totally different way than lnd does. These could be used in place of cookies; it's sad that almost nobody is using them.

It works based on chained HMACs. When you create a macaroon, you have a secret signing key, just like when you make cookies. You sign and commit to a version of a macaroon. This ties into delegation... you can add what are called caveats and sign using the previous signature, and that locks in the new caveat. Nobody who receives the new version of the macaroon, with a caveat that has been signed with the previous signature, can change it. It's like a blockchain. You just can't roll it back. It's really cool.
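
To make the chained-HMAC idea concrete, here is a minimal sketch (a toy layout invented for illustration, not lnd's or any real macaroon library's actual format): each new caveat is signed with the previous signature as the key, so a holder can attenuate the macaroon but can never strip earlier restrictions.

```typescript
import { createHmac } from "node:crypto";

// Toy macaroon: an identifier, an ordered list of caveats, and a running HMAC.
interface ToyMacaroon {
  identifier: string;
  caveats: string[];
  signature: Buffer;
}

// The service mints the macaroon with a secret root key that it never shares.
function mint(rootKey: Buffer, identifier: string): ToyMacaroon {
  const signature = createHmac("sha256", rootKey).update(identifier).digest();
  return { identifier, caveats: [], signature };
}

// Anyone holding the macaroon can add a caveat: the new signature is an HMAC
// keyed by the *previous* signature, so earlier caveats cannot be removed.
function addCaveat(m: ToyMacaroon, caveat: string): ToyMacaroon {
  const signature = createHmac("sha256", m.signature).update(caveat).digest();
  return { ...m, caveats: [...m.caveats, caveat], signature };
}

// Only the minting service (which knows rootKey) can replay the chain and
// confirm the signature; it then evaluates each caveat against the request.
function verify(rootKey: Buffer, m: ToyMacaroon): boolean {
  let sig = createHmac("sha256", rootKey).update(m.identifier).digest();
  for (const caveat of m.caveats) {
    sig = createHmac("sha256", sig).update(caveat).digest();
  }
  return sig.equals(m.signature);
}
```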

Proof-carrying tokens change the architecture around authentication. You are putting the authorization in the macaroon itself. So you are saying what permissions someone holding this macaroon has, instead of putting a bunch of logic on the server. So if you present this macaroon and it is verified by the server for certain access levels, then this simplifies backend authentication and authorization a lot. It decouples the authorization logic.

LSAT stands for Lightning Service Authentication Token. This could be useful for pay-per-use billing (no more subscriptions), no personal information is required, and it is tradeable (with the help of a service provider) - that is something roasbeef has proposed. You can make it tradeable; unless you do it on a blockchain, you need a central server to facilitate it. There is also machine-to-machine authentication. Because of the built-in sybil resistance that is not tied to identity, you can have machines pay for access to other things without having your credit card number sitting on the device. You can also attenuate privileges. You can sell privileges.

I'm going to introduce a few key concepts to talk about how this works. There are status codes - anyone who has browsed the web is familiar with HTTP 404, which is for resource not found. There are a bunch of these numbers. HTTP 402 is supposed to be for "payment required", and it has sat there for decades without being usable at the protocol level because there was no internet-native money. So LSAT takes advantage of this and uses HTTP 402 to send messages over the wire.

There is a lot of information in HTTP headers describing requests and responses. This is where we are going to set the information for LSAT. On the response side, you will have a challenge issued by a server when an HTTP 402 payment is required. That is a WWW-Authenticate header. Then on the authorized request there is the Authorization header. The only unique thing is how you read the values associated with these HTTP header keys. After you get the challenge, you send an authorization that satisfies that challenge.
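
As a rough sketch of what that exchange could look like (the `macaroon:preimage` shape of the Authorization value comes from the flow described below; the exact challenge grammar is the spec's business, so treat the field names here as illustrative):

```typescript
// Build the value of the WWW-Authenticate header for an HTTP 402 response.
function buildChallenge(macaroonBase64: string, invoice: string): string {
  return `LSAT macaroon="${macaroonBase64}", invoice="${invoice}"`;
}

// Parse the client's Authorization request header: "LSAT <macaroon>:<preimage>".
function parseAuthorization(header: string): { macaroon: string; preimage: string } | null {
  const match = /^LSAT\s+([^:]+):([0-9a-f]{64})$/i.exec(header.trim());
  return match ? { macaroon: match[1], preimage: match[2] } : null;
}
```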

You pay a lightning invoice - a payment request using a certain BOLT. That gets put in the WWW-Authenticate challenge. The preimage is a random 32-byte string that is generated as part of every lightning payment, but it is hidden until the payment has been satisfied. This is how second-layer payments can be trusted instantly. Then there is a payment hash. Anyone who has received a payment invoice has this payment hash. The preimage is revealed after you pay. Basically, the payment hash is generated by hashing the preimage... which means you cannot guess the preimage from the payment hash. But once you have the preimage, you can prove that only that preimage could have generated that payment hash. This is important for LSAT validation.
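
The one-way relationship described here is just SHA-256: hashing the 32-byte preimage gives the payment hash, and the hash reveals nothing about the preimage. A minimal sketch:

```typescript
import { createHash, randomBytes } from "node:crypto";

const preimage = randomBytes(32); // revealed to the payer only once the invoice settles
const paymentHash = createHash("sha256").update(preimage).digest(); // what the invoice commits to

// Holding the preimage proves the invoice with this payment hash was paid,
// but the payment hash alone gives you no way to recover the preimage.
console.log(createHash("sha256").update(preimage).digest().equals(paymentHash)); // true
```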

Say the client wants to access some protected content. Say the server then says... there is no authentication associated with this request. I'm going to bake a macaroon, and it is going to have information indicating what is required for access. That is going to include generating a lightning invoice. Then we send a WWW-Authenticate challenge back. Once an invoice is paid, you get a preimage in return, which you need in order to satisfy the LSAT, because when you send the token back it is the macaroon, then a colon, then that preimage. What happens is that the invoice information, the payment hash, is embedded in the macaroon. So the server looks up the payment hash and the preimage, and then checks H(preimage) == payment hash, boom, it's done.
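
The server-side check at the end of that flow can then be as small as the sketch below, assuming the payment hash was embedded in the macaroon when it was baked and the macaroon's own HMAC chain has already been verified (parameter names are hypothetical):

```typescript
import { createHash } from "node:crypto";

// macaroonPaymentHashHex: the payment hash the server baked into the macaroon.
// preimageHex: the preimage the client presents after paying the invoice.
function lsatPaymentProven(macaroonPaymentHashHex: string, preimageHex: string): boolean {
  const hash = createHash("sha256")
    .update(Buffer.from(preimageHex, "hex"))
    .digest("hex");
  // H(preimage) == payment hash: the invoice tied to this macaroon was paid. Boom, done.
  return hash === macaroonPaymentHashHex.toLowerCase();
}
```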

Depending on what constraints you want to put on the paywall, this can be a stateless check. You know that whoever holds that preimage must have paid the invoice associated with that hash. The hash is in the macaroon token.

This helps decouple the payment from the authorization. The server could know the payment was satisfied using lnd or something like that, but this helps decouple it. It also lets other services check the authorization.

The current version of LSATs has a version identifier in the macaroons it generates. The way the Lightning Labs team has done it is that they have a version number, and it will be incremented as functionality gets added.

In my tool, we have pre-built configurations for adding expirations. So you can get 1 second of access for every satoshi paid, or something like that. Service levels are something the Loop team has been working on.
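
A time-based caveat like that ("1 second of access per satoshi paid") could be attached to the toy macaroon from the earlier sketch roughly like this (illustrative only; real tools define their own caveat grammar):

```typescript
// Grant one second of access per satoshi paid, encoded as an expiration caveat.
function expirationCaveat(satoshisPaid: number, nowMs: number = Date.now()): string {
  const expiresAt = new Date(nowMs + satoshisPaid * 1000).toISOString();
  return `expires_at=${expiresAt}`;
}

// e.g. addCaveat(macaroon, expirationCaveat(600)) -> roughly 10 minutes of access
```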

The signing is done at macaroon baking time. So you have a secret key, a macaroon gets minted, and the signature is passed along with the macaroon.

This lets machine-to-machine invoices be sybil resistant. HODL invoices are something I have implemented. HODL invoices are basically a way to pay an invoice without it being settled immediately. It's like an escrow service built on lightning, but it creates some problems on the lightning network. There are ways to use them that don't clog up the network, as long as they aren't held open for long periods of time. I was using this for single-use tokens. If you try to use one while an invoice is being held but not settled, then as soon as it settles it is no longer valid. There is also a way to split payments and pay a single invoice, but then you have some coordination problems. I think this is similar to the Lightning Rod release that Reese did, which was for offline payments. They have a service where you can do trustless third-party payments.

I made lsat-js, which is a client-side library for interacting with LSAT services. If you have a web app that has this implemented, then you can decode them, get the payment hash, and see if there are any expirations on them. Then there is BOLTWALL, where you add a single line to a server and put it around a route that you want to require payment for, and BOLTWALL catches it when a request comes in. It's just nodejs middleware, so it could work with load balancers.

NOW-BOLTWALL is a serverless framework for deploying normal websites and serverless functions; this is a CLI tool that will set it up. The easiest way to do it is btcpay, using the deployment on Luna Node for $8/month, and then you can configure NOW-BOLTWALL. Then using Zeit, which has a free tier, you can deploy a server out there and they run load balancers themselves. You can pass it a URL that you want to put behind the paywall. So if you have a server somewhere else, you can just deploy this from the command line.

And then there is lsat-playground, which I will demo quickly. This is just the UX I put together.

LSAT would be useful for a service provider that hosts blogs by different authors, where the author can be convinced that the user paid and got access to the content - and that the user paid that specific author, not the service provider.

I'll put some slides up on the meetup page.

Socratic seminar

This is going to be a quick rundown of things happening in bitcoin development. I'll facilitate and talk through the topics, and try to pull some knowledge out of the audience. I only understand like 10% of this, and some of you understand a lot more. So interrupt me and jump in, but don't give a speech.

We missed a month because we had the taproot workshop last month. BitDevsNYC is a meetup in NYC that publishes these link lists about what happened that month. I have read through some of them.

OP_CHECKTEMPLATEVERIFY

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-November/017494.html

This is Jeremy Rubin's work. The idea is that it is a covenant proposal for bitcoin. The idea is that the UTXO can only... ((bryan took this one)). This workshop is going to be at the end of the month. He says he is going to sponsor people, so if you are into this then consider it. Because it can be self-referential, you can get accidental turing completeness. The initial version had this problem. It could also be used by exchanges on withdrawal transactions to restrict or blacklist your future transactions.

Watchtower BOLT

It's pretty interesting. I was in London and met up with these guys. They have a full implementation of this in python. It was nice and simple; I'm not sure if it is open source yet. There are something like three watchtower implementations now, and they ought to be standardized.

PlusToken scam

ErgoBTC did some research on the PlusToken scam. It was a scam in China that took in something like 200,000 BTC. The people running it were not sophisticated. So they tried to do some mixing... but they shuffled their coins badly. They got caught. Some whitehat figured it out and was able to trace where the funds were leaving an exchange and so on. Here is a Twitter thread talking about how the movement of this BTC could have affected the price. A month ago, some of the people involved were caught. His theory is that the subordinates were arrested, and the one holding the keys was not. So the mixing continued; clearly this other guy has the keys. They also had a bunch of ETH, and it was moved about a month ago, and the market freaked out - the ETH price dropped. So maybe he took a big short position and then moved the coins, instead of selling. 200,000 BTC is a lot; you really can move the price with that.

PlusToken scammed $1.9 billion across all coins, with a landing page. They had people out on the streets in China telling people to buy bitcoin and multiply it, it's this new mining thing. MtGox was something like 500,000 BTC, which was 7% of the circulating supply at the time. So this could be 2-3% of the supply.

The guy also went on a podcast where he talked about the tools he used to figure this out. This is an interesting topic. Coinjoins are going to be a theme in a lot of these. This is just one side of coinjoin. Obviously, the coinjoin they were using was imperfect.

txstats.com

This is a transaction statistics visualizer from BitMEX Research.

Here is Murch reporting on a bit of exchange dumping. He does wallet development for BitGo. He often talks about UTXO consolidation. Someone dumped 4 MB of transactions at 1 sat/vbyte. Someone was consolidating a bunch of dust while fees were really cheap.
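
For a rough sense of scale, a back-of-the-envelope sketch of what such a consolidation costs (the per-input and per-output vbyte figures are the usual P2WPKH approximations, not numbers from the talk):

```typescript
// Estimate the vsize and fee of sweeping many P2WPKH inputs into a single output.
function consolidationFeeSats(numInputs: number, feerateSatPerVb = 1): number {
  const overheadVb = 11; // version, locktime, counts, segwit marker (approximate)
  const inputVb = 68;    // per P2WPKH input (approximate)
  const outputVb = 31;   // the single consolidation output (approximate)
  const vsize = overheadVb + numInputs * inputVb + outputVb;
  return Math.ceil(vsize * feerateSatPerVb);
}

// e.g. sweeping 1,000 dust outputs at 1 sat/vbyte costs about 68,042 sats
console.log(consolidationFeeSats(1000));
```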

Here is a lightning data site. Pierre had the number one node. It has capacity, different kinds of channel closes. BitMEX wrote an article reporting on non-cooperative closes, because you can see the force-close transactions on the blockchain.

Jameson Lopp has some bitcoin implementation comparisons. This one is similar: it looks at the different versions of Bitcoin Core, v0.8 and onward. It then measures how long initial block download takes against the current blockchain. There is another one for how long it takes to sync up to the blockchain as of the date each version was released. There was a big drop when they switched from OpenSSL to libsecp256k1. It was hugely more performant after that.

Insecure shortcuts in MuSig

This is about some of the interactivity in Schnorr MuSig. There are three rounds in this protocol. In this article, he goes through all the ways you can mess it up. MuSig is pretty complex and there are a lot of footguns, which I guess is the summary here.

ZenGo time-based threshold signatures

Multisig as a concept is getting a few different entities together, where you can do it as on-chain multisig, or off-chain multisig where you aggregate the signatures together and combine them. These guys have something like that, but the keys rotate over time. You can have a scenario where every party loses a key in some given year, but because the keys are rotating, no more than the tolerated number are ever lost at the same time. So the wallet would still be preserved even though every person has lost keys at some point. This is called "proactive secret sharing". It seems like it would be more practical to do 3-of-5 and just set up a new 3-of-5 when one of the five loses theirs. Binance likes this because of the shitcoin compatibility they like. Ledger too.

Coldcard attack

The way this attack works is that you can trick it into generating a change address at something like this... a derivation path where you take child number 44, 0 hardened, and then the last one is some enormous number. So you put it on a leaf way out at the edge of the key tree, such that it would be hard to find again if you went looking for it. Technically you still own the coins, but it would be hard to spend them. So it was a clever exploit. Basically, an attacker can convince your Coldcard that it is sending to "your" address, but it is actually some random bip32 path or something. No hardware wallet currently keeps track of the change addresses it hands out. So the idea is to restrict it to some search gap... don't go beyond the gap limit or something like that. Or it could be a website generating a bunch of addresses up front, for users or payments or something. There was also something about 32 bits + 1 or something, past the MAX value.
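
A sketch of the kind of sanity check being suggested (a hypothetical helper, not Coldcard's actual firmware logic): reject a proposed change path whose final index is far beyond anything the wallet has handed out, or outside the non-hardened range.

```typescript
const HARDENED = 0x80000000; // 2^31: indexes at or above this are hardened

// path: the proposed change output path, e.g. m/44'/0'/0'/1/2147483646
function changePathLooksSane(path: number[], highestUsedChangeIndex: number, gapLimit = 20): boolean {
  const last = path[path.length - 1];
  if (last >= HARDENED) return false; // non-hardened child indexes must stay below 2^31
  // Refuse indexes far past anything this wallet has handed out (gap-limit style check).
  return last <= highestUsedChangeIndex + gapLimit;
}

// e.g. changePathLooksSane([44 + HARDENED, HARDENED, HARDENED, 1, 2_147_483_646], 5) === false
```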

Trezor bug

Trezor had a bug where if you had a... if you were trying to make a single-sig output, and you had a multisig input and then multisig change, you could inject your own multisig change address or something like that. Your host machine could do this. This was like a month or a month and a half ago. They don't display the change output, if there is one. In this scenario, the multisig change address is something you don't own, and it should treat that as a double spend or something like that. This was a remote exploit. It treated someone else's multisig address as your own address. It just wasn't included in the fee calculation or something.

Monero thread

Someone got a bad hash on their software. So it's a detective story trying to figure out what went wrong, like whether the website is bad or something. It turns out the binary was malicious. You can watch the detective work in real time. Someone was able to get the binary and do some analysis. Two functions had been added; one of them would send your seed to the attacker. So if you ran this binary and had some monero, then the attacker now has that monero. It's pretty fascinating. The binary made it onto the Monero website. That is pretty scary. This is a good example of why you need to check signatures and hashes when you download a wallet. Monero was serving this to people who were downloading the software. It was getmonero.org that was serving the malware. It's interesting that they had access to the site, and didn't update the md5 hashes or whatever. Well, maybe they figured users would check the website and not the binary they actually downloaded.

Detenciones de intercambiadores de SIM

Esto era sólo un artículo de noticias. El SIM swapping es cuando entras en una tienda de Verizon y dices que tienes un número, y entonces ponen tu número de teléfono en tu teléfono y entonces puedes robar el bitcoin de alguien o lo que sea. Utilizan las preguntas habituales como cuál es tu nombre de soltera y otra información pública.

Ataque de Vertcoin 51%

Esto ha sucedido ya dos veces. Tuvimos una discusión cuando esto ocurrió hace seis meses. De alguna manera esta moneda sobrevive a los ataques del 51%. ¿Por qué sobreviven? Tal vez es tan especulativo que la gente se encoge de hombros. ¿Qué pasa con el bitcoin o el ethereum que son atacados en un 51%? Así que tal vez todo es comercio especulativo, o los usuarios son demasiado estúpidos o algo así.

El papel del enrutamiento del diente de león en “romper el modelo de privacidad de mimblewimble”

Bitcoin Core 0.19.1

There was some kind of problem right after v0.19.0, and then v0.19.1 came out. There are some new RPC commands. getbalance is much better. A bunch of RPC fixes. That's great, so go download it.

Removing OpenSSL

OpenSSL has now been removed entirely. This started back in 2012. A lot of this sped up initial block download. The funny thing is that gavinandresen didn't like the idea. But it turned into a huge improvement. It took years, the better part of a decade, to completely remove OpenSSL, because it was supplying all of the cryptographic primitives for signatures, hashing and keys. The last thing that needed it was bip70. They needed it for something.

Branch and bound coin selection

It is a better way of doing coin selection when composing transactions. It tries to optimize fees over the long term. So he wrote his thesis to show that this would be a good way to do it. Murch also pointed out that random coin selection actually did better than the stochastic approximation solution.
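
To make the idea concrete, here is a toy sketch of branch-and-bound coin selection (not the actual Bitcoin Core code; the window and amounts are made up): recursively include or exclude each UTXO while looking for an input set that lands inside a small window above the target, so that no change output is needed.

```python
# Toy branch-and-bound coin selection: look for an input set whose total lands
# in [target, target + cost_of_change], so no change output is needed.
# Simplified sketch, not the Bitcoin Core implementation.
def bnb_select(utxos, target, cost_of_change):
    utxos = sorted(utxos, reverse=True)

    def search(i, selected, total):
        if target <= total <= target + cost_of_change:
            return selected                      # close-enough match found
        if total > target + cost_of_change or i == len(utxos):
            return None                          # overshoot or nothing left
        # branch 1: include utxo i
        with_it = search(i + 1, selected + [utxos[i]], total + utxos[i])
        if with_it is not None:
            return with_it
        # branch 2: exclude utxo i
        return search(i + 1, selected, total)

    return search(0, [], 0)

# Example: pick inputs summing to (almost exactly) 0.7 BTC in sats, if possible.
print(bnb_select([50_000_000, 30_000_000, 20_000_000, 7_000_000], 70_000_000, 5_000))
```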

joostjager - routing allow route …

You can pay someone by including them in a route, without them having to give you an invoice. Alex Bosworth made a library to do this. You have to be directly connected to them; so you can route to yourself through a person you are connected to.

Optional last hop for payments

So here you can define a route by saying I want to pay this person and the last hop has to be from node n-1 to node n. So if for some reason you want to, like if you don't trust someone... So he wanted to pay them, but he wanted to choose who the last hop was. Although I don't know why you would want to do that.

lnrpc and other rpc commands for lnd

joinmarket-clientserver 0.6.0

Does anyone actually use joinmarket? If I did, I wouldn't tell you. What?

There is a lot of work going into joinmarket. There are a lot of changes in there. What's really cool about joinmarket- I've never used it- is that it seems to offer the promise of having a passive hot wallet sitting there earning a yield on your bitcoin. Joinmarket has maker and taker fees. I'm glad people are working on this.

Wallet standards organization meeting

This was a transcript that Bryan did of an earlier Austin Bitcoin developers meetup.

Lightning on mobile: Neutrino in the palm of your hand

He showed how to make a react-native app that could do neutrino. He wants a non-custodial mobile user to be a full participant in the network without downloading 200 GB of stuff. I think the main innovation is not neutrino, but that instead of having to write custom tooling to build the lnd binary for mobile, it's an SDK where you just write "import lnd" and that's all you need to get going.

New mailing list for lnd development announced by roasbeef

Probably related to the Linux Foundation migration...

Hashrate derivatives

Jeremy Rubin has another project which is a hashrate derivatives platform. The idea is that you can timelock transactions in minutes or in blocks, and whichever arrives first takes the payout. It's an interesting way to implement a derivative. It's basically DeFi. So you could probably play with this if you had some bitcoin mining capacity. In a month... Uh, well, the market will price it. That's a good point.

New stratum protocol for mining pools (stratum v2)

This is marketing talk about stratum v2.

The most interesting thing is this reddit thread. The slushpool people are in the thread with petertodd and bluematt. Some of the benefits are that it will help you operate over less-than-ideal internet connections. You get blocks and things faster, I think. One of the interesting things bluematt pointed out is that if you are mining, you aren't sure whether your ISP is stealing some of your hashrate, because there is no authentication between you, the pool and the miners.

The protocol will now send block templates ahead of time. Then, when new blocks are observed, they only send the previoushash field. They try to load the block template ahead of time, and then they send 64 bytes to fill the thing in so that you can start mining immediately. It's an interesting optimization. It's a cool thread if you want to learn more about mining.

lnurl

This is another way of encoding invoices into HTTP query strings.

BOLT-android

Some LN hackathons

LN withdrawals on Bitfinex

Analysis of the bech32 bug

A single typo can cause a lot of problems. One of the interesting things is that changing one constant in the bech32 implementation fixes the problem. How did that guy even find that bug? Wasn't Blockstream fuzz testing to prevent this? Millions of dollars of fuzz testing budget.

Unequal coinjoins

A guy withdrew from Binance and then did coinjoins in Wasabi. Binance banned him. So in the coinjoin world there is a discussion about how to deal with that. The fact is that coinjoins are very recognizable. If you pull money off an exchange and do a coinjoin, the exchange is going to know. So what about doing coinjoins with unequal values? Right now coinjoins use equal values, which makes them very recognizable. You just look for these unnatural-looking transactions and that's it. But what about doing coinjoins with unequal amounts so that they could look like an exchange's batched withdrawal transaction or payouts to users? Coinjoiners are being discriminated against. The person who got dinged was withdrawing directly into a coinjoin. Don't get me wrong, they don't like coinjoins, but also don't be stupid. Don't send directly into a coinjoin. At the same time, it shows a weakness of this approach.

Wasabi organized a research club. Right after the binance-coinjoin issue, a week later Wasabi was hosting some things on youtube to dig up old research on unequal-amount coinjoins. This is an interesting topic. Someone has a reference implementation in rust, and the code is very readable. There is an hour-and-a-half discussion where Adam grills him. It's pretty good. He found a bug in one of the papers... he could never get his implementation to work, and then he realized there was a bug in the paper's specification, fixed it, and got it working.

Blind merged mining with covenants and OP_CTV

This is basically what Paul Sztorc was talking about when he visited us a few months ago. It's about having another chain secured by bitcoin that bitcoin would not be aware of, and there would be some token involved. Ruben's proposal is interesting because it is about blind merged mining, which is what Paul needs for his truthcoin stuff. So you get another thing for free if we get OP_CTV.

One argument some people make for any new feature in bitcoin is that we don't know what else we might come up with to use it for. Like the original OP_SECURETHEBAG version that turned out to let you do turing completeness. Maybe it is a use case we want; but a lot of people think blind merged mining is not something we want - I don't remember why. A lot of thought goes into whether such soft-forks should go in.

ZmnSCPxj on path privacy

I'm not really sure how to pronounce his name. Zeeman? It's ZmnSCPxj. You can deduce a lot of information about what happened along a payment route. The first part of the email is how you can use this to figure things out. So he talks about one evil surveilling node along the route, but what if there are two nodes along the route. You can build reverse routing tables if you have enough influence over the network. He gets into some of the things that will happen with Schnorr, like path decorrelation and so on.

ZmnSCPxj on taproot and lightning

This one is crazy. This was a good one.

Bitcoin Optech newsletters

c-lightning went from defaulting to testnet to defaulting to mainnet. They have added support for payment secrets. You can do this thing, probing, where you try to route fake payments through a node to gauge and figure out what it can do. You can generate random preimages and then create a payment hash from such a preimage even though it is invalid. I guess this is a mitigation for that.

Here is a thread about what watchtowers have to store, in eltoo. One of the advantages of eltoo is that you don't have to store the full channel history, only the most recent update. So, do they have to store the latest update transaction, or also the settlement transaction? Any comments on that? I honestly don't know eltoo well enough to speculate on that.

c-lightning added createonion and sendonion RPC methods to allow encrypted LN messages that the node itself doesn't have to understand. This lets plugins use the lightning network more freely to send messages of some kind, and the messages are onion-encrypted, Tor style.

whatsat is a text/chat app. They are trying to get the same functionality on top of c-lightning.

All three LN implementations now have multi-path payments. This lets you... say you have one bitcoin in each of three different channels. Even though you have 3 BTC, you can only send 1 BTC. Multipath lets you send three missiles at the same target. lnd, eclair and c-lightning all support this now in some state. Can you use this on mainnet? Should you? The lnd implementation has it in the code, but they only let you specify a single route anyway. So they haven't really tested it in something people can run that sends over multiple routes, but the code has been refactored to allow it.
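
As a rough illustration of the splitting idea (this is not how lnd, eclair or c-lightning actually choose shards; the greedy strategy and balances below are made up), a multi-path payment just divides one amount over several channels:

```python
# Toy sketch: greedily split a payment across channels with given spendable
# balances. Returns (channel_index, partial_amount) shards, or None if the
# total outbound liquidity is not enough.
def split_payment(amount_sat, channel_balances):
    shards = []
    remaining = amount_sat
    for i, balance in sorted(enumerate(channel_balances), key=lambda x: -x[1]):
        if remaining == 0:
            break
        part = min(balance, remaining)
        if part > 0:
            shards.append((i, part))
            remaining -= part
    return shards if remaining == 0 else None

# e.g. three channels with 1 BTC each can carry a 2.5 BTC payment as 3 shards
print(split_payment(250_000_000, [100_000_000, 100_000_000, 100_000_000]))
```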

Andrew Chow gave a good answer about the maximum bip32 depth, which is 256.

Bitcoin Core added a powerpc architecture.

There is now an RPC whitelist. If you have credentials to do RPC against a node, you can do basically anything. But this option lets you whitelist certain commands. Say you want your lightning node to not be able to add new peers at the p2p level, which could get you eclipse attacked. Lightning should only be able to do blockchain-monitoring queries. Nicolas Dorier says my block explorer only relies on sendrawtransaction for broadcast. So you want to whitelist that; this is per user credential. Do they have multiple user credentials for bitcoin.conf?

This is why lnd uses macaroons. It solves this problem. You don't need a list of people in the config file, you can just have people hold macaroons that grant them that access.

Here is what Bryan was talking about, the year-in-review. I encourage you to read this; if you read only one thing, it should be the Bitcoin Optech 2019 year-in-review newsletter. Every month of the past year there has been some big innovation. It is really crazy to read through. Erlay is really big too, something like an 80% bandwidth reduction.

Gleb Naumenko gave a nice talk at London Bitcoin Devs about erlay. He talked about p2p network stuff. I encourage you to check it out if you are interested.

The script descriptor language is like a better version of addresses for describing a range of addresses. They look like code, with parentheses and so on. Chris Belcher has proposed encoding them in base64, because right now if you try to copy a script descriptor it doesn't get selected as one unit. This would make script descriptors more ergonomic for people who don't know what they mean.

BitMEX LN tool

This is a tool from BitMEX that is a live alerting system for channels. This was the BitMEX fork monitor.

Caravan

Unchained's Caravan got a shout-out.

Anonymous coinjoin transactions with …

This is a paper that Wasabi dug up from about 2 years ago.

Luke-jr’s full node chart

The script to produce this is closed source, so you can't play with it. But there are multiple implementations out there. I was suspicious of this because luke-jr is a bit crazy, but gleb seems to think it is correct. We are at the same number of full nodes as in mid-2017. So that is interesting. The top line is the total number of full nodes, and the bottom line is the number of listening nodes that will answer if you try to open a connection to them. You probably want to be unreachable for selfish reasons, but you need to be reachable to help people sync the blockchain. Mid-2017 might be the segwit peak when people were spinning up nodes, or it could be related to the market price. There was also a price spike in June. Maybe some of these are for lightning nodes. I bet a lot of people don't bother anymore.

Clark Moody's dashboard

Moody's dashboard has a bunch of real-time stats you can watch update live.

Bitcoin Magazine year-in-review

We had 10% growth in the number of commits, an 85% price increase, bitcoin dominance went up 15%, our inflation rate is 3.8%. Daily volume went up. Segwit went from 32% to 62%. Daily transaction value went up. Blockchain size grew 29%. The Bitcoin node count slumped, went down, which is not so great. It might be because a lot of people only had 256 GB hard drives, maybe that's why they dropped off... yeah, but what about pruning?

arxiv: new channel construction paper

List of hardware wallet attacks (from Shift Crypto)

It is a pretty interesting list. This is why you do multi-vendor multisig, maybe. This is pretty scary.

The pitfalls of multisig when using hardware wallets

One of the things people don't realize is that if you use multisig and you lose the redeemScripts or the ability to compute them, you lose the ability to spend from the multisig. You need to back up more than just your seed if you are doing multisig. You need to back up the redeemScripts. Some vendors try to show everything on the screen, and others infer things. The manufacturers don't want to keep state, but multisig requires you to keep state, like not swapping out the other multisig participants or changing information out from under you. If you are thinking about implementing multisig, look at the article.

bunnie's talk on secure hardware

The big point of this talk was: you can't hash hardware. You can hash a computer program and check it, but you can't do that with hardware. So basically, having open-source hardware doesn't necessarily make it more secure. He goes through all the ramifications of that and what you can do about it. He has a device, a text-messaging phone that is as secure as he can make it, and the interesting thing is that you could turn it into a hardware wallet.

CCC conference and talks

SHA-1 collision

For 70k dollars they were able to create two PGP keys, using the legacy version of PGP that uses sha-1 for hashing, and they were able to create two keys that had different user ids with colliding certificates.

Bitcoin Core fuzz testing

There is a fuzz testing stats page, and then a Bitcoin Core PR review club.

lncli + invoices - experimental keysend mode

It's just a way to send to someone without them having to hand you an invoice. They had this feature for about a year and it finally got merged.

\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/index.html b/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/index.html index bda1388624..990cb73722 100644 --- a/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/index.html +++ b/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/index.html @@ -1,6 +1,6 @@ Árboles de sintaxis abstracta merkleizados | Transcripciones de ₿itcoin \ No newline at end of file +Core dev tech

https://twitter.com/kanzure/status/907075529534328832

Merkleized abstract syntax trees (MAST)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html

I am going to talk about the scheme I posted to the mailing list yesterday, which is about implementing MAST (merkleized abstract syntax trees) in bitcoin in the least invasive way possible. It is split into two big consensus features that together give us MAST. I will start with the last BIP.

This is tail-call evaluation. We can generalize P2SH to give us more general capabilities than a single redeem script. So instead of committing to a single script, how about committing to multiple scripts. The semantics are super simple. All it does is that when you get to the end of a script's termination, if the stack is not clean, meaning there are at least two items, then it recurses into the top one. This is the same thing P2SH does... So we recurse into scripts. There's two levels of P2SH. To keep it safe, we limit the recursion to one tail-call evaluation.

To get the rest of MAST, we have to introduce a new opcode that I am calling OP_MERKLEBRANCHVERIFY. It takes 3 arguments: the root, the leaf hash and a proof. This is the same kind of opcode extension as OP_CSV and OP_CLTV. MBV would be nop4. It fails if there are not 3 elements on the stack. The leaf and the root are both 32-byte hashes. And the other one has to be a serialized proof as specified in the first BIP. It checks the types to see whether they can be serialized. And then it checks whether those two hash up to the root, and makes sure to get the value and either fail or continue.

Together, these two capabilities give us generalized MAST, because we can build something that looks like this, where our witness looks like this. We push the proof, the merkle proof. So the witness looks like arg1..argN and then [script] and then [proof].

Witness: arg1 … argN [script] proof

For the redeemScript, we want to copy the script, grab that second item and hash it... push the root hash, and then MBV. This gives you generalized MAST capabilities with these two features. After executing this we will have.... oh, we also have to drop the MBV parameters, to make it soft-fork compatible.

Redeem script: OVER HASH256 root MERKLEBRANCHVERIFY 2DROP DROP
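
A minimal sketch of the check that OP_MERKLEBRANCHVERIFY performs, assuming a plain double-SHA256 parent rule for illustration (BIP98's fast merkle trees differ in the exact hashing details):

```python
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_branch_verify(root, leaf_hash, path, directions):
    """Recompute the root from a leaf hash and its sibling path.
    directions[i] is True when the running hash is the right child.
    Toy parent rule: double-SHA256(left || right)."""
    h = leaf_hash
    for sibling, is_right in zip(path, directions):
        h = dsha256(sibling + h) if is_right else dsha256(h + sibling)
    return h == root

# In the witness pattern above (arg1..argN [script] [proof]), the redeem
# script OVERs the [script], hashes it, and asks MBV whether that hash is
# one of the leaves committed to by the hard-coded root.
```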

We will have proven that this script was committed to in the original script. We drop everything we did here, including the proof, leaving only the script on the stack plus the arguments to pass to it, and we recurse exactly once into that script and do whatever it wants to do. We can build a complex contract that has multiple execution paths. You have a tool that generates each of the possible execution paths you care about. At redemption time, you specify the path you want to use, reveal that path, and then execute it.

You can blind certain code paths by only revealing the one you use. Or you can do something like sipa's key trees where you specify a merkle tree of the keys you want to use, and only pull out the ones you actually use. Merklebranchverify reuses most of its logic from elsewhere, though it would make it consensus-critical. And the tail-call recursion is very simple because it is restricted to being only tail calls. You get to the end of the script, you reset some of the variables, you set your script to the new script you are recursing into, and then you do a GOTO.

When it gets to the end of its execution, if there is something still on your stack, you execute it. If there are two or more items. If you go through your execution and find you still have stuff on your stack, you take the top item and execute it as a script. This is safe because, first of all, there are no scripts out there, at least on bitcoin mainnet, that don't have a clean stack. It isn't relayed at the moment. That's not consensus. "It is a consensus rule for witnesses." So this wouldn't work in SegWit, correct. When you finish execution right now with 2 or more items on the stack, that is allowed except in SegWit.... cleanstack is a consensus rule in SegWit, it is not just policy. That is only for v0, of course. Part of the benefit of this is that... it reduces the need for script versioning. Well, that might interact there. At least without SegWit, this is a soft-fork, because any interesting script pushed onto the stack, other than an empty script or a series of 0 pushes, would evaluate as true.
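
A rough sketch of that tail-call rule, assuming a hypothetical run_script(script, stack) interpreter standing in for normal evaluation:

```python
def eval_with_tail_call(script, stack, run_script):
    # run_script(script, stack) is a stand-in for the ordinary script interpreter.
    run_script(script, stack)
    if len(stack) >= 2:                  # stack not "clean": two or more items remain
        next_script = stack.pop()        # take the top element ...
        run_script(next_script, stack)   # ... and recurse into it exactly once
    return bool(stack) and bool(stack[-1])  # succeed if the final top item is truthy
```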

The redeem script is part of the witness. How do you avoid being limited to .. levels. One thing is that, the merkle branch, without CAT or something you can't get around it. But what you can do is that these merkle tree structures were designed for chaining calls. You can pull a hash out of the middle and verify that it is part of the next one. And because you can build balanced trees this way... and because the unary case is handled as a pass-through, you can have just a pass-through. What about tail-recursing log N times? Well, only one level of recursion depth in the final execution. But that probably won't make it into bitcoin because it is hard to bound the computation of that. But the script size will be linear in the... no, OP_DUP and generating data, you can make a script that generates itself and calls DUP or whatever, and keeps going. Write a script that pushes a copy of itself onto the stack, and then you execute that and it runs forever. There is no hashing at execution time in this proposal.

Tail recursion as a model for handling general recursion in stack-based languages makes a lot of sense, and it would require cost modeling, and there would probably be "run-forever" scripts and "bloat-memory-forever" scripts. We get away with it in bitcoin script because we are constrained to no recursion and no looping and we can pretty much ignore the cost of memory usage and everything; there is no pattern of opcodes that can become a problem. But with unbounded recursion, that is not the case. One level of recursion is all you need to pull something out of a hash tree. It is hard to think of a use case where you would want generalized recursion. Delegation? If you had OP_CHECKSIGFROMSTACK, then that would solve that. If we had a CHECKSIGFROMSTACK, that makes the merkle branch verify stuff even more powerful, where you can run a script signed off by whatever.

What are the problems with OP_CHECKSIGFROMSTACK? ... You could do covenants, well, with OP_CAT. Here is a coin that can only be spent by a certain transaction, so you could lock a coin into a chain.

If you don't care about the exponential blowup, then everything reduces to "a long list of ways to unlock this, pick one". The exponential blowup of a merkle tree becomes a linear increase in the ways to unlock things, but it is still exponential work to construct it.

One of the advantages of this over jl2012's original bip114 ((but see also)) is that, besides being decomposed into two simpler components... fetching pubkeys out of a tree, then doing key-tree signatures, it also helps you deal with the exponential blowup when you start hitting those limits; you could push more logic into the top-level script. How hard is it to do this ... pull several things out of the tree at once, because you have to share. Yes, that was one of the comments; it is doable, and you have to work on the structure of the proof, since it is intended for individual leaves. In the root you could have n, which is the number of items to pull out. So it could be a proof of 3 leaves from the root. But without knowing what that n was, you basically have to use it as a constant in your script; the root is a constant. I think it would be interesting to have a fixed n here.)

The outer version is the hash type, or explains what the hashes are. So the story behind this is that at some point we were thinking about these ... recoverable hashes that I don't think anyone is seriously considering at this point, but historically the reason for expanding the size ... I think my idea at the time this witness versioning came up was that we only need 16 versions there, because we only need the version number for which hashing scheme to use. You don't want to put the hashing version inside the witness that is bounded by that hash itself, because now someone finds a bug and writes ripemd160, and now there is a preimage attack there, and now someone can take a witness program but claim it is a ripemd160 hash and spend it that way. So at a minimum the hashing scheme itself should be specified outside the witness. But almost everything else can be inside, and I don't know what structure it should have, like maybe a feature bitfield (not being serious), but there could be a bitfield where the last two get hashed, in the program.

One of the advantages of the script versioning we did is that it is actually hard to be super sure that an implementation of a new flag really is a soft-fork. It is easier to be sure these things are soft-forks. Mark is stating as a principle that we don't merge multiple unrelated things, except the most reasonable ones. CHECKSEQUENCEVERIFY and CHECKLOCKTIMEVERIFY, sure. Segwit had bip147 nulldummy. But in general, bundling several things has no end, and you get an exponential penalty in review where it's a pile of unrelated features... good luck.

You couldn't do a tree in your scheme, you couldn't do a multisig... A 600-byte serialization can't go on the stack. Any single stack element. There is a push and the push is... ... This tree plus the tail recursion ends with the next script on the stack, so it is subject to the 520-byte push limit. It could be lifted in v1 or something. You can't push anything bigger than that. Even in SegWit, the SegWit script isn't pushed onto the stack, it is just kept separate. Segwit encodes the stack... it isn't a script that executes anything. The problem isn't the proof, it's the script itself. The script can't hold all the pubkeys. A 19-of-20 checkmultisig tree, you can't put that kind of redeem script in, and this could be fixed in v1 or something. The 520-byte limit for P2SH has been a pain. You can only get up to 17-of-17 or something. There are reasons not to use it... not liability, but you also have to do the setup up front. So if you have some script that pulls keys out of a bunch of different trees, and says go do a multisig with those. It has been a limit, but it would be much more of a limit, limited in other ways. That limit becomes more favorable as other things get introduced into script.

The way people interact with this, like some application writer trying to write a script, this is a lot more complex than the original MAST thing where you just have one data structure. People working with this are going to have a harder time. Someone should make a proper proposal of what the alternative looks like.

Merklebranchverify is more useful than MAST in general. This is a slight generalization of P2SH, and this is what P2SH should have been originally. P2SH was explicitly designed to (not?) be an eval nopcode... which deals with what code gets executed. P2SH with user-specified hashing. You do the hashing, and it handles the execution, but only one level of execution, just like P2SH. The user provides some outer level of code, and then provides the next level. In P2SH you are confined to only executing a template, and this removes the template. With merklebranchverify you can make use of this. It is 3 layers for technical reasons, because of P2SH and for compatibility. If we had thought harder about P2SH we might have ended up at this tail-recurse design too. It is almost indistinguishable from P2SH in the sense that... if the P2SH template were slightly different, then you could say it always did this.

The delta from P2SH is that the script would have a DUP, that is the difference. If P2SH had a DUP in front of the hash equalverify and then executed these consensus rules, it would just work, because P2SH consumes the script. Or if it had a special opcode that doesn't actually remove the thing being hashed, then it would be compatible with this. This would be a reinterpretation of what P2SH does, and it would make P2SH make sense. ... A script and OP_HASH160 and then pass into it.. I put OP_NOP and then it works? People complain about the magic of P2SH, kind of suggesting people can get confused this way. There is the push-only magic, right.

Is there a clean way to do a multi-pull with merklebranchverify? Pulling 3 out of a tree of a thousand. Pretty much. You could repeat the proof and do it again, provide 3 separate proofs or something. The concern with a multi-merklebranchverify is script verification ... but maybe it isn't a problem.

A compact serialization could optimize that... only doing it at the top level. It keeps you from doing the gather operation and pulling out 3 elements, and... No, that's not true. You can still have merklebranchverify as a nopcode to do key tree signatures where you don't need recursion. You can do both.

If you use a scheme like bip114, then it would be less, because in the output itself... and it mostly solves the push-size issue.

Once you check the top of it, you can grab part of the tree, and then you can merklebranchverify and - this is more like a merklebranchget, because you already did the verify. You do an encoding of this where, from a partial merkle tree, you compute a hash, and then you do GETs on it to go fetch elements of the tree. You have an opcode that verifies a pruned merkle tree data structure is what you think it is. It only does the verification against the expected hash. And then you get whatever you wanted out of the tree. So you can do the branch sharing downward. You take a proof, an opcode that takes it, then you do the operations to get one branch and attach another branch. This is close to what bip114 does. It wasn't clear why it was pulling out multiple things, but this makes sense. If you are doing multiple gets from the merkle tree, then you definitely want to share the levels. If you are pulling out two things, then half the time they are going to be in the same left branch, etc. And in practice you are... and if you intentionally sort your tree so that they are, you can make your common cases do this anyway. Note that, again, a system that is just "let's rewrite that script input as a merkle tree" might be easier to implement; there are edge cases at the end where you can check whether any branch was provided but not pruned and never accessed, and if so then fail the script, and that's your rule ((what?)). I'm not following that. With a merkle tree serialization where you have pruned and unpruned branches, the malleability is that someone takes a pruned branch and makes it unpruned by adding extra data, which they could do in some cases. So you want to get rid of the malleability with something like the cleanstack rule: if you didn't pull it out, ... it is in the minimal possible serialization for what you did. The actual tree implementation can tell how many values there can be... and just keep count of that. The minimal serialization size differs between... in these kinds of trees. Either something is unpruned or it isn't. In the serialization, there is only one possible serialization. That is why cleanstack has .... and the motivation is to prevent malleability, where a miner can't add garbage to your witness. With a different kind of serialization encoding for transactions, you could argue that malleability matters because the wallet can just strip that off. We have residual witness malleability, and I would like to have code on the relay network that strips witnesses down. It could be a test point for malleability by being an expansion point. Unfortunately, it is both. Malleability is the software expansion space... The middle ground is to make things forbidden by policy rather than ....

Can you serialize the size of the .. into a checksig? Yes. In v1, we would probably do something where the ... something gets serialized, like the size of your witness, something that can't be malleated to be bigger.. validate the signature. And this idea has come up a couple of times in the past. Signing the size of your signature, essentially. Malleability in both cases is not a safety problem; it reduces relay efficiency. The only thing you could do is take a valid transaction that is being relayed and make it bigger to the point that it is ... resulting in policy rejections of that transaction. Segwit kills tail-call recursion because of cleanstack. In v1 maybe we can change that. Well, what if we want general recursion later? Specific attack here... you can sign the size, or provide the maximum. The witness will be no longer than this value, right. You can serialize the maximum you used. The maximum you used when you signed was 200 bytes, your signature could be smaller, and the verifier needs to know which number to use, in the hash. This whole witness malleability thing is sort of different from...

The review process for turning NOPs into soft-forks is painful.

What exactly goes into the bitcoin v1 script proposals? ((lots of mumbling/grumbling))

We could have an OP_SHA256VERIFY instead of OP_SHA256. For a lot of things, it is conceptually... ... If every operation always verifies, with no non-verify variants, then you could validate scripts in perfect parallel. You take each operation of a script and validate it on its own. You don't need sequential verification. It is bad for serialization... you can't do that. But if we didn't care about bandwidth and storage, then it would be a better design.

If you had a hash256verify here in the merklebranchverify... some could become OP_DROP. The nopdrop ((laughter)). OP_RETURNTRUE, not just OP_RETURN. All the nopcodes should become OP_RETURNTRUEs. If you hit this opcode, then you are done forever. This is a generalization of versioning. The downside is that you don't know whether you have hit one of these until you are inside the script. This basically lets you have a merkle tree where some scripts have different versions in them. This is a way of doing witness versioning by saying ... my first opcode is OP_RETURN15 and 15 isn't defined yet, so it is true, and after you define it, now it has a meaning. Some of them could become NOPs... Let's not call it OP_RETURN, it should be OP_EVIL or OP_VER. Right now a script that doesn't deserialize is never valid, because you can't get to the end. It gets to the end, but it has a saturating counter that says no matter what happens, I'm already done. In DEX, everything is deserialization. So you have your pointers into a stream. The upgrade path for that would be the data structure you just deserialized... What parts do you need to deserialize if you haven't? OP_RETURN in a scriptsig returns whatever would be on top of the stack, which would be bad.

It is not good to mix code and data. You can hash what is on the stack and not duplicate it. The overhead would go away. Then you wouldn't need the big push in merklebranchverify.

The merklebranchverify version at the link is on top of Core and that is a hard-fork because of the witness details. But there was also one in elements.git where there is a merklebranch RPC and others. It takes a list and turns it into a tree, but he wants to do arbitrary JSON objects.

So merklebranchverify maybe should be deployed with the non-SegWit version.. but maybe that would send a conflicting message to bitcoin users. Segwit's cleanstack rule prevents us from doing this immediately. Only in v0.

We need some soft-fork candidates that are strongly desired by users. Maybe signature aggregation, maybe luke-jr's anti-replay suggestion via the OP_CHECKBLOCKATHEIGHT proposal. However, it has to be very desirable to users so that it can be applied in this particular case. It has to be a small change, so maybe not signature aggregation, but then again signature aggregation, since it is still very desirable.

They can break a CHECKSIGFROMSTACK... in a hard-fork. CHECKBLOCKHASH has other implications, like transactions not being... in the immediately prior block, you can't reinsert the transaction, it is not reorg-safe. It should be restricted to about 100 blocks back at the least. +[a]I approve this breach of the Chatham House rule

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014979.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015022.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014963.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014960.html

https://www.reddit.com/r/Bitcoin/comments/7p61xq/the_first_mast_pull_requests_just_hit_the_bitcoin/

bip98 “Fast Merkle Trees” https://github.com/bitcoin/bips/blob/master/bip-0098.mediawiki

bip116 “MERKLEBRANCHVERIFY” https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki

bip117 “Tail call execution semantics” https://github.com/bitcoin/bips/blob/master/bip-0117.mediawiki

bip98 and bip116 implementation https://github.com/bitcoin/bitcoin/pull/12131

bip117 implementation https://github.com/bitcoin/bitcoin/pull/12132

https://bitcointechtalk.com/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast-33fdf2da5e2f

\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2017-09/index.xml b/es/bitcoin-core-dev-tech/2017-09/index.xml index beafc873a0..775c444636 100644 --- a/es/bitcoin-core-dev-tech/2017-09/index.xml +++ b/es/bitcoin-core-dev-tech/2017-09/index.xml @@ -1,5 +1,5 @@ Bitcoin Core Dev Tech 2017 on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/Recent content in Bitcoin Core Dev Tech 2017 on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 07 Sep 2017 00:00:00 +0000Árboles de sintaxis abstracta merkleizadoshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/Thu, 07 Sep 2017 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/https://twitter.com/kanzure/status/907075529534328832 -Árboles de sintaxis abstracta merkleizados (MAST) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html +Árboles de sintaxis abstracta merkleizados (MAST) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html Voy a hablar del esquema que publiqué ayer en la lista de correo, que consiste en implementar MAST (árboles de sintaxis abstracta merkleizados) en bitcoin de la forma menos invasiva posible. Está dividido en dos grandes rasgos de consenso que juntos nos dan MAST. Empezaré con el último BIP. Esto es la evaluación de la llamada de cola. Podemos generalizar P2SH para darnos capacidades más generales, que un solo script de redimensionamiento.Agregación de firmashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/Wed, 06 Sep 2017 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/https://twitter.com/kanzure/status/907065194463072258 Agregación de firmas Sipa, ¿puedes firmar y verificar firmas ECDSA a mano? No. Sobre GF(43), tal vez. Los inversos podrían tomar un poco de tiempo para computar. Sobre GF(2). diff --git a/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html b/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html index 4e8823dfda..f9bf35f701 100644 --- a/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html +++ b/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/index.html @@ -1,7 +1,7 @@ Intercambios atómicos de curvas transversales | Transcripciones de ₿itcoin
\ No newline at end of file +Core dev tech

https://twitter.com/kanzure/status/971827042223345664

Draft of an upcoming scriptless scripts paper. This was in early 2017. But a whole year has gone by already.

post-schnorr lightning transactions https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html

An adaptor signature.. if you have different generators, then for the two secrets to be revealed, you just give someone both of them, plus a proof of a discrete log, and then you say, learning the secret to one gets you the same reveal for the other. It is a discrete log equivalence proof. You decompose a secret key into bits. For these purposes, it is fine to have a 128-bit secret key, it is only used once. 128 bits is much smaller than secp and so on. We can definitely decompose it into bits. You need a private key smaller than the group order in both. I am going to treat the public key... I am going to assume... it is going to be small enough, map each integer in a range, a bijection onto the set of integers and the set of scalars in secp and the set of scalars in... This is only conceptual, because otherwise it makes no sense to do that. In practice, they are all just the same numbers. You split it into bits, similar to the Monero ring-CT thing, which is an overly complicated way to describe it. What about Schnorr ring signatures? Basically, the way it works, a Schnorr signature has a choice of nonce, you take a hash, then you get an S value that somehow satisfies some equation involving the hash and the secret nonce and the secret key. The idea is that because the hash commits to everything, including the public key and the nonce, the only way to make the equation work out is if you use - the secret key - and then you can compute S. And if the hash didn't commit, then you could solve for what the public nonce must be. But you can't do that, because you have to choose a nonce before the hash. In a schnorr ring signature, you have a pile of public keys, you choose a nonce and get a hash, but then for the next key you have to use that hash but the next key, and eventually you get back to the beginning and it wraps around. You start from one of the secret keys, you start one key past it, you make random signatures and solve them and go around, and eventually you can't make a random signature and solve it, and that is the final hash, which is already determined, and you can't do it again, you have to do the real thing, you need a secret key. The verifier doesn't know the order and can't tell them apart. What is cool about this is that you are doing a bunch of algebra, you are shoving crap into a hash, and then you are doing more algebra again and repeating. You could do this as much as you wanted; in the not-your-key ring signature you just throw random stuff into this hash. You could reveal a preimage and prove you were one of those people, it's unlinkability stuff. You could build a range proof out of this schnorr ring signature. Suppose you have a pedersen commitment to a value between 0 and 10, call it commitment C. If the commitment is to 0, then I know the discrete log of C. If the commitment is to 1, then I know the discrete log of C - H, and if it is to 2 then of C - 2H, and so I make a ring with C - H, up to C - 10H, and if the value is in range, then I will know one of those discrete logs. You split your number into bits, you have a commitment to 0 or 1, or 0 or 2, or 0 or 4, or 0 or 8, and you add all of these up. Each of these individual ring signatures is linear in the number of things. You can get a logarithmic-sized proof by doing this. These hashes... because we are putting points into these hashes, and the hashes are using this data.
I can do a simultaneous ring signature where at each stage I am going to share a hash function, I am going to do both ring signatures, but I am using the same hash for both, so I choose a random nonce over here, I take a hash of both, and then I compute an S value against that hash on both sides, I get another nonce and put both into the next hash, and eventually I will have to actually solve on both sides. So this is clearly two different ring signatures that are both sharing some.... But it is true that the same index.. I have to know the secret key at the same index in both. One way to think about it is that I have secp and ed, and I am finding a group structure on this in the obvious way, and then my claim is that these two points, like Tsecp and Ted, have the same discrete log. In this bigger group, I am claiming that the discrete log of this multi-point is T, T and both components are the same. I do a ring signature in this cartesian product group, and I am proving they are the same at every step, and this is equivalent to the same thing where I was combining hashes.

Say we are trying to do an atomic swap. I have an adaptor signature where I can put some coins into a multisig. You give me an adaptor signature, which gives me the ability to translate your signature, which you will later reveal to take the coins, into some discrete log challenge. You give me an adaptor signature and some values, and I say yes, as long as this happens, then I will sign. So the coins can use one curve, and you can claim the coins on the other curve. On the other chain we do the same thing, so you give me an adaptor signature with the same value t. Now I only have the adaptor signatures. You sign to take your coins, I use your signature to learn the secret key, and then I can use it to take the coins on my end. What if it was an ed25519 coin and a secp coin? It depends on using the same t on both sides. I want you to give me two t's, one on secp and one on ed25519, and a proof that they are using the same private key. How do you do the key... how do you constrain the... When you do this ring signature thing, you only have so many digits. You split it into digits, you add up all the digits. For each digit, you give me a... you say this is the secret key of 0 or 1, or 0 or 2, etc. It doesn't have to be 128 bits. These proofs are pretty big. Each ring signature is like... if you do it in binary, 96 bytes per digit, 96 bytes * 128 in this case. This is only p2p. It is like 10-20 kb. That is not much.
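
A toy sketch of the adaptor-signature mechanics described here, using Schnorr over a tiny multiplicative group instead of secp256k1 or ed25519, and leaving out the cross-curve equivalence proof; all values are made up. The pre-signature commits to the adapted nonce R*T, only someone who knows t can complete it, and the completed signature reveals t to the other side:

```python
import hashlib

# Toy group: subgroup of prime order q = 233 inside Z_467*, generator g = 4.
p, q, g = 467, 233, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = 57                      # Alice's secret key
X = pow(g, x, p)            # Alice's public key
t = 101                     # the swap secret Bob wants to learn
T = pow(g, t, p)

# Alice's pre-signature ("adaptor signature") on message m, offset by T.
m = "swap tx on chain A"
k = 77                      # demo nonce (random in practice)
R = pow(g, k, p)
e = H((R * T) % p, X, m)    # challenge commits to the adapted nonce R*T
s_pre = (k + e * x) % q

# Whoever holds t can complete it into a real Schnorr signature (R*T, s).
s = (s_pre + t) % q
assert pow(g, s, p) == (R * T % p) * pow(X, e, p) % p   # standard verification

# Once the completed signature appears on-chain, Alice extracts t from it.
assert (s - s_pre) % q == t
```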

Do people currently have atomic swaps across chains with different curves? You could just use hashes, but anyone could see the hashes. They can link them.

The signer could give you a signature and the adaptor signature. There is a challenge-response protocol in that scriptless scripts draft write-up.

We need a new definition or letter for sG. We use it a lot.

\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2018-03/index.xml b/es/bitcoin-core-dev-tech/2018-03/index.xml index 3fd26c3b07..df3d92591d 100644 --- a/es/bitcoin-core-dev-tech/2018-03/index.xml +++ b/es/bitcoin-core-dev-tech/2018-03/index.xml @@ -7,6 +7,6 @@ Con graftroot y taproot, nunca para hacer cualquier scripts (que eran un hack pa Tomas todos los caminos que tiene; así que en lugar de eso, se convierte en &hellip; cierta condición, o ciertas no condiciones&hellip; Tomas todos los posibles ifs, usas esto, dices que es uno de estos, luego especificas cuál, y lo muestras y todos los demás pueden validar esto.
Bellare-Nevenhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/Ver también http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/ Se ha publicado, ha existido durante una década, y es ampliamente citado. En Bellare-Neven, es en sí mismo, es un esquema multi-firma que significa múltiples pubkeys y un mensaje. Debe tratar las autorizaciones individuales para gastar entradas, como mensajes individuales. Lo que necesitamos es un esquema de firma agregada interactiva. El artículo de Bellare-Neven sugiere una forma trivial de construir un esquema de firma agregada a partir de un esquema multisig donde interactivamente todos firman el mensaje de todos.Intercambios atómicos de curvas transversaleshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/https://twitter.com/kanzure/status/971827042223345664 Borrador de un próximo documento de guiones sin guión. Esto fue a principios de 2017. Pero ya ha pasado todo un año. -transacciones lightning posteriores a schnorr https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html +transacciones lightning posteriores a schnorr https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html Una firma adaptadora.. si tienes diferentes generadores, entonces los dos secretos a revelar, simplemente le das a alguien los dos, más una prueba de un log discreto, y entonces dices aprende el secreto a uno que consiga que la revelación sea la misma.Taproot, Graftroot, Etc (2018-03-06)https://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-06-taproot-graftroot-etc/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-06-taproot-graftroot-etc/https://twitter.com/kanzure/status/972468121046061056 Graftroot La idea del graftroot es que en cada contrato hay un superconjunto de personas que pueden gastar el dinero. Esta suposición no siempre es cierta, pero casi siempre lo es. Digamos que quieres bloquear estas monedas durante un año, sin ninguna condición para ello, entonces no funciona. Pero suponga que tiene&hellip; ¿recuperación pubkey? No&hellip; la recuperación pubkey es inherentemente incompatible con cualquier forma de agregación, y la agregación es muy superior.
\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html b/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html index bc6aea86fc..f0cbdb779e 100644 --- a/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html +++ b/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/index.html @@ -1,5 +1,5 @@ Gran limpieza de consenso | Transcripciones de ₿itcoin - \ No newline at end of file +Core dev tech

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html

https://twitter.com/kanzure/status/1136591286012698626

Introduction

There is not much new to talk about. The CODESEPARATOR thing is unclear. There is a desire to make it a consensus rule that transactions cannot be larger than 100 kb. No reactions to that? Okay. Fine, we'll do it. Let's do it. Does everyone know what this proposal is?

The validation time for any given block, we were lazy about fixing this. Segwit was a first step toward fixing this, giving people a way to do this more efficiently. So the goal of the great consensus cleanup is to fix that, and also fix timewarp, which we have known about for a long time and should fix. Three, we should do a soft-fork that is not the most critical thing and can therefore be bikeshedded a bit, and if it doesn't activate then nobody is too sad. We can do it for the sake of doing a soft-fork, getting something through, and working out whatever problems come up, before doing something more complicated like Schnorr signatures.

Is there a risk of normalizing soft-forks? Maybe, but this is not a frivolous one.

Timewarp.

The soft-fork fix for timewarp is really simple. I would like to make that pull request. What is the fix exactly? The problem in timewarp is that there is a bug in the height calculation for the difficulty adjustment. You can jump backwards or forwards significantly, I'm not sure, only at the difficulty-adjustment block boundary. If you just constrain those two blocks to be well ordered, then fine, the timewarp goes away. So you just add a new rule saying those blocks have to be well ordered. If you set that rule so that it can go back up to 2 hours, then there should be no impact where a miner would ever generate such a block. The timestamp cannot go back more than 7200 seconds.

Having a rule between the two difficulty periods, with any constant bound on how far back the time can go, fixes the timewarp attack. Depending on how many seconds you allow it to go back, there is a small block-acceleration factor that can be achieved, but it is always bounded. If you allow it to go back 600 seconds, the bound is exactly what we wanted, namely 2016 blocks every 2 weeks. If you allow it to go back further, you allow slightly more blocks. If you allow less, curiously, you get fewer blocks.
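
A sketch of the rule as described (a hypothetical helper, not the actual patch): only at the first block of a retarget period, require the timestamp to be no more than 7200 seconds before the previous block's timestamp.

```python
RETARGET_INTERVAL = 2016
MAX_TIMEWARP_BACK = 7200  # seconds, i.e. 2 hours

def timewarp_rule_ok(height, block_time, prev_block_time):
    """Extra check applied only at the retarget boundary; elsewhere the
    existing median-time-past rule applies unchanged."""
    if height % RETARGET_INTERVAL == 0:
        return block_time >= prev_block_time - MAX_TIMEWARP_BACK
    return True
```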

What about using timewarp for forward blocks? Someone asked about doing forward blocks on litecoin. But litecoin merged a timewarp fix a long time ago.

Should we fix the timewarp bug? Slushpool has given their opinion. Is it only Mark who wants to use it?

The best ntime rolling you can do is to generate a bunch of midstates and increment the timer by one second. Nobody does ntime rolling in hardware yet, but people want to. That's probably why the slushpool folks were concerned. The median time past could be in the past, and the previous block could be 2 hours in the future, and therefore you would be more than 2 hours behind the previous block. That's why it is allowed to go back 2 hours. The median time past can be in the past, and the previous block can be 2 hours in the future. If you wanted the most ntime rolling possible, you would start at the median time past, and right now people start at the current time and don't really do ntime rolling. There is basically no reason to ntime roll significantly into the future, as long as you can store a bunch of midstates for every 1 second. You could argue there's no reason to do this, or that you can't do this. There's no reason to do it, and nobody is doing it as far as I can tell.

The only evidence of ntime rolling we saw was up to maybe 10 minutes or so, from what we looked at. The only timestamps that jumped backwards were probably other problems. Do people mine blocks with timestamps in the past? Yes, but that's something else. No it isn't. It's stale work. I only update you every 30 seconds. Someone else can screw you over by mining a block 2 hours in the future. If you start at the current time and someone mines a block 2 hours in the future, that's always fine. You would have rejected a block more than 2 hours in the future. You will comply with the rule. They do mine in the past, but it starts with the time in the header and they just find it later in the future. Does nobody mine a block whose timestamp is older than the actual time of the previous block? No, nobody does. For miners, since stratum updates periodically, almost all miners mine about 15 seconds behind, because they update every 30 seconds, which averages out to 15 seconds. But you always start with the current time when the stratum work is generated and sent to the client.

How much does the current time vary between miners? A few seconds. Harold crawled the bitcoin network and found that 90% of listening bitcoin nodes have a timestamp that is within measurement accuracy of mine. The people without an accurate clock are something like 2 days off. So, yeah. When you connect, the version message has the current unix time, the system time. It's the network-adjusted time, I think? That would be infectious, no. You announce the system time, but you use the network-adjusted time. Your crawl data is still useful because it tells you the network-adjusted time. Pool servers probably do their own thing.

Do bfgminer or cgminer currently have any code to refuse a pool telling them to participate in a timewarp based on previous headers? The clients don't do anything, nor should they. An ASIC shouldn't be interpreting data based on consensus rules. It's stupid. bfgminer does some crazy things, but that's because bfgminer is crazy. In general you want the miner to be as dumb as a brick; they just throw the work at an ASIC. Right now they are too smart for their own good because they look at the coinbase transaction.

So you crawled node times, but did you look at pool times? No, I didn't get around to it. I just looked at the bitcoin network and compared their clocks to my local clock. Almost all pools have a bitcoin node listening on the same IP address, which is weird. But for those that don't, they have a pool, and then a node in the same rack or datacenter running a bitcoin node, and then maybe an internal bitcoin node that isn't listening. In general, the clocks seem to be within a second, which is about what you would expect from NTP, plus or minus a few seconds. A bunch of people are an hour off. That's probably a timezone bug. Buggy daylight savings, probably. And Windows setting local time instead of UTC. There is a 15-minute timezone. North Korea has the worst timezone ever. Well, that's because they don't care about the rest of the world.

More on the great consensus cleanup proposal

There are a bunch of rules, including the 64-byte transactions one, which mostly affects segwit transactions. Yes, that's something else. The codesep thing is only intended for non-segwit transactions, but maybe we drop it in favor of a 100k rule. But is the 100k rule only for legacy? I haven't thought about that. It should be 100k base size. You could just say 100k base size, and then it applies to segwit too. Or 400k witness, actually. 100k stripped size is what you want, right? Base size is weird language to me. Witness-stripped size, no more than 100kb, should be the proposal. That's the only thing that matters for the sighashing rules. But you could also do it by witness weight because that's how we compute size now. You're right, if you want to attack the exact thing that is causing problems, you would do it by.... we don't want to do it to the other thing because if you have a giant witness. Do we want witnesses larger than 400 kb?
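
One possible reading of the limit being debated here, as a hedged sketch (the cutoff and helper name are placeholders, not part of any agreed proposal):

    # Hypothetical sketch: measure the transaction with its witnesses
    # stripped and reject it above 100,000 bytes.
    MAX_STRIPPED_TX_SIZE = 100_000

    def within_proposed_limit(stripped_tx_bytes):
        # stripped_tx_bytes: the transaction serialized without witness data
        return len(stripped_tx_bytes) <= MAX_STRIPPED_TX_SIZE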

Are you proposing this as the next soft fork, or are you arguing that this should be the next soft fork? Given where we are with taproot, it seems possible that trying to do something earlier would only delay it further. Well, it was intended to be faster, but nobody reviewed it or gave a crap, so whatever. It's completely orthogonal. It could be done in parallel, there's nothing wrong with that. Also, if there is community agreement, there's no reason they couldn't activate at the same time.

These DoS vectors, the timewarp bug and the sighash stuff -- how much time would we have to deploy a soft fork like this and have it ready? If something bad happens... we have seen bad things happen, we have seen blocks that take too long to validate. It has been exploited before. Timewarp is a different question. Can we deal with timewarp when it happens? No, suddenly a timewarp block shows up and then we know it has happened. Exploiting timewarp is a very slow process. It plays out over several months, so we could choose to deploy the soft fork then. If timewarp is happening it's because miners are doing it, and those are who you need for soft fork activation in that case. For sighashes, you need to set up hundreds of thousands of UTXOs, and we can watch for that in particular. The f2pool-style dust cleanup isn't really that bad, but it's pretty nasty. Large scriptpubkey size, 10 kb scriptpubkeys or something.

If you want to do something uncontroversial and easy to soft fork, then I think a 100kb size limit will make it very controversial. Any kind of size limit on transactions I expect to be controversial. It has been non-standard for 5-7 years. Would this interfere with confidential transactions? Well, arguably it shouldn't affect witness data. It wouldn't apply to that anyway.

What about the big giant transaction per block, a giant coinjoin transaction with Schnorr? If you have a new signature scheme, and these other things, then maybe. The new sighash type in segwit generally prevents this, so you could say only non-witness... only the data that is relevant to non-witness inputs, which mostly corresponds to the witness-stripped size. Only then do you check whether it is more than 100kb, and if it is, you fail. You want to be able to do this without validating the scripts. Count the number of legacy inputs and put a limit on that. You can look up the UTXOs, but you don't have to validate them. A second witness type that gets added later... you wouldn't be counting that, so it's a bit dangerous if we predict that.

Another comment: you really need numbers to show people that this change reduces the worst case. Yes, I have numbers. Having those numbers would be great for convincing people. I haven't gotten around to rewriting it yet. Nobody seems to care.

Soft-fork

Since the timewarp rule hasn't been violated, maybe nobody upgrades -- how do you know? Well, if they mine invalid blocks then they won't be accepted by the network. You don't have the security, except that everyone is running Bitcoin Core. If you start losing money, you are going to fix your stuff very fast.

The BIP says to use bip9 versionbits with an explicit promise that, if the nodes and the ecosystem appear to upgrade robustly and bip9 times out, then there will be a second bip9 signaling window in a future release where that signaling window ends with an activation instead of a timeout. With bip9 signaling. Yes, that's bip8. So yes, first don't make it explicitly activate, just see that everyone wants it, then you could look at activation and look at the network and there's still -- we still want to use bip9, but you have to meet us there, otherwise bip8 and the carrot and the stick. The client could have an option so that the current version, without upgrading, can add the new signaling parameters. That's crazy. Then there could be people too impatient for the bip9 timeout... a consensus config file is not the right way to go.

I don't want to get into a UASF argument. It's better to take a conservative, slow approach to make sure we really have consensus for these changes. Why the rush? Will there be any resistance to a soft fork? I think this isn't simple enough. The code separator thing is going to be removed from the proposal. There are still many undefined details. It's a good idea. If we didn't have taproot potentially ready as well, also close... be careful how you phrase that. What is the reasoning for fixing multiple vulnerabilities rather than just one? There's a lot of overhead to doing a soft fork, and a soft fork just to remove 64-byte transactions -- I don't think it sets a precedent for how soft forks happen. We shouldn't set a precedent of doing a soft fork every 6 months. Yes, there's the overhead and the soft fork risk. If you have a bunch of uncontroversial things, then sure, it's useful to bundle them. They also have to be super simple. Matt has some things that are fairly simple, and they have to be properly socialized to tell people why this is important. Well, we want to avoid omnibus soft forks. Not that that's what is happening here. There's a balance, of course.

It should be uncontroversial and simple, which will make it go faster. But you also have to be strongly motivated, which makes it go faster or smoother. I think this is an argument in favor of taproot. I think that has been socialized for much longer. I think people understand -- understand might be a strong word -- they understand there's a benefit to them from it, whereas these things may be less clear. There will be drama around the activation method. The twitter community is all the UASF jerks.

You don't like bip8? Doing bip8 from the start sucks. It's like saying the Bitcoin Core developers have made the decision. You need to make it real for people before you know there is consensus. That's what bip9 allows. I think you release with bip9, and then you should be likely to see node adoption; miners might be slow, but who is going to oppose it?

If we are going to throw manpower at something, why not something like taproot? Well, you have to fix these real vulnerabilities at some point. It's a question of sequencing and of when to do it. This could be true if we do a good job motivating this and explaining it to people.

I would do a super-long bip9 window. If it happens before taproot, great; if it happens after taproot, that's fine. We would say we'd like this to go out, but it isn't urgent. We don't want to set a precedent for setting up two soft forks, because if you upgrade you get... we start with one, and later do the taproot one. They have to be a year apart or whatever. I think this discussion is premature until we have a clear idea of whether the ecosystem wants these changes. You can talk about activation mechanisms in the abstract, sure.

The taproot BIPs are relatively new. I would try to argue that you don't have a clear picture of it until the pull requests are written and you start writing code and merging; that's when people understand this is real. Reviewing, publishing and merging the code is independent of the decision to add an activation mechanism. Don't you need at least a plan? I guess you can change what the activation mechanism is going to be.

I would like to see more code for the cleanup soft fork. Yes, there is a patch and a pull request. It was just in one blob. No, there are tests and everything. The code is quite readable. Last time I looked it didn't seem split up. Fanquake, it needs to be tagged as "needs author action". It has 5 commits, I'm looking at it right now.

You could also pull a Satoshi and do a "change makefile" commit ((laughter)).

There is a mining pool censoring shielded transactions for zcash. Bitcoin Cash miners have already activated a fork for Schnorr signatures, so at least there is some interest there.

Do a bip9 signaling mechanism, do a release with it, really publicize it. The potential for controversy is greater with the great consensus cleanup because it's closer to a bikeshed. But Schnorr-Taproot is Pieter bringing the tablets down from the mountain ((laughter; nobody wants that)).

What if we split the great consensus cleanup up? You overestimate how much people follow any of these things. In general, they don't. What would be so terrible about having multiple soft forks? Well, it becomes 32 combinations to test. Partial activation nightmares.

I don't think these soft forks can be done in parallel. We are not good at managing social expectations in this project or communicating them properly. If we try to do this for two things, I think we will do even worse. I think it's better to focus on one thing, focus on how we are communicating it, how we are signaling that there is consensus on it, before throwing it out there. If we are doing two soft forks, it's going to be confusing for us whether each of them has consensus or whatever, never mind how everyone else perceives it. None of the problems in the great cleanup soft fork are especially sexy, so you will have a hard time motivating and interesting anyone. But if you have numbers, then maybe. I don't know how much to prioritize this because I don't know what the numbers are. It's weird to say "it's hard to justify, so we should do it". I'm not saying it's hard to justify, it's just not sexy. Socially it would be hard to get the OP_CHECKMULTISIG bug-fix soft fork deployed, because it's only one byte and most people don't care. Well, we haven't tried to get people excited. It's hard even for me -- this is important, it's just not an exciting topic. If this group isn't super excited about it, then how could the world get excited? I can't even remember the full subject and I wrote it.

It's significantly less exciting than taproot, and it could help build an understanding of soft forks going forward, and then you do something that is more likely to result in drama. This soft fork has no direct benefit, which makes it easier to argue against. Well, we don't even have the performance numbers.

If you want community consent for these changes, which is what you need to make the changes, then despite how new taproot is, the general idea has been socialized for so long that we are in a good place to move forward, especially since it takes a long time anyway. But these cleanup ideas haven't been socialized, and packaging them into something like this is recent. If we left the cleanup out there for a while and signaled that as a group we think this is something we should do, that it's probably the next thing we do after taproot, then people would come to expect it. Right now they expect Schnorr, and they should also expect these cleanups. If there is determined opposition to those changes, we will have time to see what those arguments are.

We know the worst-case performance is bad, but what we don't know is how much the cleanup soft fork would improve things, like the validation attacks. What if this got exploited, and we have to tell people, ah well, we knew about the vulnerability and just didn't do anything about it. Timewarp has been well documented for 8 years. Theoretically it's possible for someone to construct a block that takes a very long time to validate, and that has negative repercussions. Will it be one block, many blocks? How likely is it that this actually happens? To be fair, it has been documented for years and most altcoins have fixed it. Bitcoin may never fix it because it's a hard fork -- that used to be the old argument. I think early on people thought it was a hard fork, but that hasn't been the case for a long time.

There was a big timewarp attack in 2013 against an altcoin that hadn't fixed the timewarp vulnerability. There was also a recent one, the one with two proofs-of-work and a timewarp attack.

Regarding activation ordering, I think it could be reasonable -- because it's a common pattern people are used to -- to suggest maybe doing a feature fork, then a cleanup fork. This could prepare people in a way without as many downsides. I like that, and the suggestion to socialize it, which hasn't happened yet. We have been socializing the removal of CODESEPARATOR for about 4 years. At a developer meetup in SF a few months ago, CODESEPARATOR was new to them. There was no discussion of the timewarp fix there, but about CODESEPARATOR and the sighash modes, I think -- what were the three things? 64 bytes, code separator, the pushdata requirement, scriptsig push-only. I think 80% of the discussion was about OP_CODESEPARATOR. If that's any indication; it could be random crowd behavior. I think that's the right indication. It's hard to understand the usefulness of OP_CODESEPARATOR, and a lot of people think they have found a use case for it, but it almost never works out for anyone. It's not that it is useful, it's that someone might think it is useful and could therefore lock their funds permanently, which makes this discussion even harder. You could get around that with: well, any transaction created before this time versus after this time, and then you have context problems...

Also, regarding the 100kb transaction limit, there is the precedent-setting argument. Ignoring OP_CODESEPARATOR for a moment, what do you do if you want to remove something like that? It turns out to be a major vulnerability, but some people might have funds locked up with it, so what do you do about that? We haven't done much with cleanup soft forks or vulnerability-fix soft forks, so how do you set a precedent for removing something like that? I was in favor of doing something like: from the soft fork activation, it's now 5 years before the rule takes effect. Or it only applies to UTXOs created after the activation date. I am also a fan of activating it for all UTXOs created after the soft fork, and then 5 years later activating it for all the old UTXOs from before the soft fork activated.

The OP_CODESEPARATOR change will probably get dropped. It turns out it isn't.... there are some classes of transactions where it's really annoying for total hashed data usage, but if you want to maximize total hashed data usage, it's not really the immediately worst thing, it's not the way you would actually do it, especially if you add a 100 kb transaction limit. I have a text document where I wrote all of this down; it's been a while.

Does anyone strongly believe this should be deployed before the taproot soft fork? It depends on activation. The question I'm most interested in: is it a good idea or not? I feel like that question hasn't been answered. If the soft fork is useful, we should do it. Does anyone dislike the idea of doing these cleanups? It depends on how much it improves the situation. 64-byte transactions -- don't you want to soft fork those out? I definitely want to remove 64-byte transactions. I also want to fix timewarp. We need to clarify things further, get some numbers and then let them sink in. We also need to figure out how to communicate these things to the Bitcoin community. Who is going to champion this to the community? We may be overestimating how much users care about this; why haven't we fixed it before? The way to do it is to put a brand on it, call it time bleed ((laughter; don't do that)). Timewarp is a pretty good brand, except that it sounds cool or good.

\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html b/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html index cfe359f723..85d08156b4 100644 --- a/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html +++ b/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/index.html @@ -1,6 +1,6 @@ Taproot | Transcripciones de ₿itcoin \ No newline at end of file +Core dev tech

https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html

https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/

Previamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/

https://twitter.com/kanzure/status/1136616356827283456

Introduction

Okay, first question: who put my name on that list and what do they want? It wasn't me. I'm going to ask questions. I can give a summary, but a lot has already been said and I don't know what to focus on. What would sipa like us to review about it in particular? Which design decisions are you least confident about? Is there anything where you would like other people to look into the design decisions before moving forward?

Taproot overview

Let me make a list of all the things that are included in bip-taproot. Should I also give a brief summary of what the general idea of taproot is? Is it already well known? Okay, five-minute taproot summary. Alright, where is roasbeef? This is hard, knowing where to start. Let's start with the taproot assumption. That's a good place to start.

The history before taproot is that we had p2sh, where we moved the actual script code -- we don't put it in the output, it's only revealed in the input, and you commit to it ahead of time. People realized you could do this recursively. Instead of just hashing the script, make it a merkle tree of multiple scripts, commit only to the merkle root, and then reveal only the part of the script you are going to use, under the assumption that you split your script into a collection of disjunctive statements. You only reveal a number of branches that scales logarithmically. So you can have a root for the script tree, then scripts here and scripts here. So there's a bit of a scaling improvement from this, and also a bit of privacy, because you are only revealing the parts you are using.

Pay-to-contract

What Greg Maxwell and andytoshi realized at a breakfast place in Mountain View while I was in the restroom was that there is a function called the pay-to-contract function, which takes as input an elliptic curve point or public key and a script, and is defined as the public key plus the hash of the public key and S... it's a cryptographic commitment. Nobody else, knowing P and S, can find the same output value. It's a weird hash value, but it has all the properties of a hash function. Second, it has the property that if you know the private key of this public key P, then you also know the private key of the output of this function, provided you also know S. What you can do with this is say, well, we're still going to use this merkle tree, but we're going to add a special level at the top of it where you have P here and you do this pay-to-contract of those two things instead. Now we can have consensus rules that let you spend in two different ways. One way is: the pay-to-contract result is an elliptic curve point, it's a public key. So you could sign directly with that public key, which you can do if you know the private key. It's like taking the right branch, but it's a special branch, and you don't have to reveal that there was another branch or any branching at all. Or you reveal P plus a path leading to one of the scripts, and you can verify that this thing commits to it. That is the key spending path, and these are the script spending paths. You reveal the public key. When you spend via a script, you reveal the public key plus the merkle path and nothing else. It's the internal public key. There is an output public key, and then this is the internal public key. You only reveal the internal one when you spend using a script. When you spend using the internal public key, you are actually spending with the output public key, but it's a tweaked version of the internal one.
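
A minimal sketch of the commitment being described, on the private-key side only so it stays self-contained; note the actual proposal uses tagged hashes and x-only keys, so this only illustrates the shape of the idea (parameter names are placeholders):

    import hashlib

    # Q = P + H(P || S)*G commits to both the internal key P and the script
    # root S. Whoever knows the private key p of P (and knows S) also knows
    # the private key of Q, namely (p + H(P || S)) mod n.
    SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def p2c_tweak(ser_P: bytes, script_root: bytes) -> int:
        # Hash commitment over the serialized internal key and the script root.
        return int.from_bytes(hashlib.sha256(ser_P + script_root).digest(), 'big')

    def tweaked_private_key(p: int, ser_P: bytes, script_root: bytes) -> int:
        # Private key corresponding to the output key Q.
        return (p + p2c_tweak(ser_P, script_root)) % SECP256K1_N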

The taproot assumption

What we are calling the taproot assumption is that any kind of more complex script construction in bitcoin really just encodes the conditions under which coins can be spent. In any kind of smart contract on top, it is possible to add a branch that says "everyone agrees" or "at least some set of people agree". You can always add a unanimity branch, or a certain set of people. It's a script consisting only of public keys. With adaptor signatures, you can still do some things. There are no hashlocks or timelocks, etc. Spending using this is the cheapest option; the only thing that goes on the blockchain is one public key and one signature. You don't reveal to the world even the existence of the other scripts that were also allowed to spend. Even when you spend using a script, you are not revealing what other scripts existed or whether a public key existed. The assumption is that most spends on the network can be written as ones consisting only of public keys. There is always the option of spending using a script, but we believe that if the cheapest way to spend is the one with only a public key, then people will optimize their scripts to actually have this public key in the unanimity branch, and with the argument that -- there is never a problem with having a branch for when everyone agrees -- there should never be a problem with that.

However, there are reasons you might not want to do that, like expecting everyone to be online at signing time. I expect it will be pretty common, like a two-of-three multisig: maybe you pick the two most likely -- you expect it will almost always be A and B, so you put A and B in this branch and then put another branch that says A and C, which is less likely.

How am I doing on my five minutes? Why not use threshold signatures in that specific case? I'm talking about multiple public keys, but in taproot there is just one. In musig or other key aggregation schemes we can represent a combination of multiple keys with really just a single public key, and there are -- with musig, you can make very simple and fairly direct security arguments for n-of-n multisig. You can also do non-counting thresholds like 3-of-5, or any boolean expression over multiple keys that you can encode into a single public key. But if it isn't everyone, then you always have an interactive setup where the different participants have to share secrets with each other that they need to store securely, and I expect this to be a major obstacle in practical deployments. Probably not in something like Lightning, where there is already a lot of interactivity; but for most wallet applications, this probably won't work. With n-of-n, you can do a non-interactive setup, but there is interaction at signing time.

Everything becomes a merkle tree with this special branch at the top, which is mandatory. Since it's mandatory and very cheap to use, it's reasonable for people to optimize for it. It also gives uniformity. Every output is going to look identical now. They are all public keys or elliptic curve points. It's not really mandatory: you can use a nonexistent public key. It is mandatory to put a point there, but it doesn't need to be a valid key, of course. That is a bit of a tradeoff. There are probably constructions where it's more efficient to publish this without a required public key, but that breaks the uniformity and you get less privacy. It's a tradeoff between a slight scaling advantage -- a bandwidth advantage, really -- and privacy.

The key aggregation you use is not part of consensus; you can do whatever people come up with. Yes, wallets decide. The only requirement we have at the consensus level is that it's a signature scheme that allows aggregation easily. Schnorr does this much more easily than ECDSA, which is why Schnorr is in there. I know people will say 2-party ECDSA is a thing, but it's a couple of orders of magnitude harder. Calling 2p-ECDSA a thing is strong; there are some papers. Maybe people are already massively and widely using it, specifically bitconner.

Does the pay-to-contract thing need more eyes? If you model the hash as a random oracle, then it follows trivially. The pay-to-contract scheme was presented at the 2013 conference in San Jose by Timo Hanke. He didn't include the public key in there, which made it not a commitment and therefore it was trivially broken.

So that is the explanation of why to include merkle branches: if you are already doing this taproot execution structure and Schnorr signatures, then merkle branches are literally maybe 10 lines of consensus code. It makes very large scripts scale logarithmically instead of linearly. Such an obvious win, if you are going to change the structure anyway.

Have you analyzed how much historical scripts would have saved? No, I haven't. I have done numbers on what we can expect for various kinds of multisig constructions. It's hard to analyze, because probably the biggest advantage will be in how people use the script system, not so much in doing exactly the same thing they could do before.

bip-taproot proposal

In the proposal, there are a bunch of small and big things included. I think I'll go through them. We tried to really think about extensibility without going too far, and maybe we went too far. The reasoning here is that it's already a couple of things; there are many more ideas, even just about the script execution structure -- there's graftroot, g'root, and the delegation mechanisms people have thought about. There is an incentive as an engineer to try to package everything together into one proposal. One reason is, well, then you can analyze how they all interact, you can make sure all the combinations are done in the most efficient way and that they are all possible, and you also get some fungibility improvements because you don't create dozens of new observable script versions. I think we have to acknowledge that this incentive exists, and also pick a tradeoff by choosing some features but not everything. As the complexity of the proposal grows, the political difficulty of convincing the ecosystem of the need for all of it grows. This field is evolving, so for some things maybe it's better to wait. To compensate for leaving a bunch of things out, we thought about extensibility in terms of making sure that at least some of the things we can imagine wouldn't suffer an efficiency drop if done separately.

One of them is that the scripts at the bottom of the tree are given a version number, which we are calling the leaf version. Really, the reason for doing this was that we had 5 or 6 bits available in the serialization; instead of saying they have to be this value, if it's an unknown version number then it's a free spend. The difference between this version number and the witness version number that goes on top is that these are only revealed along with the script that actually gets executed. You can have a tree with a bunch of satisfactions where they all use boring old scripts, and one uses a new feature that exists in a future version: you are not going to tell the network you are doing anything with that new version unless you reveal that script. So this lets you plan future upgrades ahead of time, without telling anyone or revealing anything.

There are some changes to the scripting language, like batch validation, which guarantees that Schnorr signatures and the pay-to-contract (p2c) validation can be batch verified. This gives us a 2-4x speedup when verifying all the transactions in a whole block or the whole blockchain. Millions of signatures can be aggregated. The speedup only grows logarithmically, but we really are in a use case where batch validation is what... we really do have millions of signatures, and the only thing we care about is that they are all valid. We make sure there is nothing in the script system that breaks that batch-verification ability. To give an example of something not batch-verifiable: OP_CHECKSIGNOT... you could write a script that depends on providing an invalid signature for a given private key; it's allowed in the script language even though I can't think of why you would want that. So we have a rule that says all signatures that pass must be valid or empty. An empty signature makes the checksig fail without making the script fail; this is a current standardness rule. Nulldummy is the checkmultisig one; I think this one is nullfail.

The ECDSA opcodes are gone and have been replaced by Schnorr opcodes. There is no longer OP_CHECKMULTISIG because it is not batch-verifiable. It actually tries multiple combinations, and if you don't have the information to say which signature matches which key, you can't batch verify it. Instead there is CHECKSIG_ADD, which increments a counter with whether the signature check succeeded. You can write the signature check as: key checksigadd, key checksigadd, ..., number equalverify. This is batch verifiable. There is still a 201-opcode limit in the scripting language, and simulating this would cost 3 opcodes per key, whereas with this special opcode it's just one. Regarding the 200-opcode limit, do you still count them the same way? Yes, nothing has changed about that limit. CHECKSIGADD is just a lower cost. If you use segwit v0, you have the ECDSA opcodes. If you use v1, or taproot, you only have the Schnorr opcodes.
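
As a hedged illustration of the counter semantics just described (a toy interpreter step, not Bitcoin Core's code; `verify` stands in for whatever Schnorr verification against the sighash does):

    def op_checksigadd(stack, verify):
        pubkey = stack.pop()   # top of stack: public key
        n = stack.pop()        # then the running counter
        sig = stack.pop()      # then the signature (possibly empty)
        if len(sig) == 0:
            stack.append(n)        # empty signature: counter unchanged
        elif verify(sig, pubkey):
            stack.append(n + 1)    # valid signature: increment the counter
        else:
            # a non-empty but invalid signature fails the whole script
            raise ValueError("invalid signature")

A k-of-n could then be expressed along the lines of pk_1 CHECKSIG pk_2 CHECKSIGADD ... pk_n CHECKSIGADD k NUMEQUAL, which is the pattern being described above.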

There is a whole range of unusable opcodes in the current bitcoin scripting language that automatically make your script fail; these become OP_SUCCESS. It's an opcode that makes the script automatically succeed. It's an unencumbered, anyone-can-spend output again, even when it sits in an unexecuted IF branch. There are reasons for doing this; the advantage is that we are no longer restricted to redefining NOP opcodes to introduce new functionality. You could always use a new leaf version to completely replace the script language, but if you want to replace just one of the opcodes, you don't need the version mechanism; you can just use the OP_SUCCESS opcode and redefine it to have a new meaning. This makes the script return true, and it can be given any semantics, instead of the "don't touch the stack" semantics of redefining NOPs. Does that make sense?

One more thing is that public keys starting with an unknown byte are treated as an automatic success as well. This means that if a -- not at the top level, only in the scripts, the leaves. The reason for this is that it lets you introduce new public key cryptosystems, new sighash modes, or anything else without adding another 3 opcodes to do signature checks; instead you just encode it in the public key itself, where we have a byte anyway. This isn't like OP_SUCCESS, it only makes the signature check succeed. I forgot the rationale for this. The public key is an argument passed to a CHECKSIG in a script. It doesn't need to be a push, it can come from anywhere. It's a stack element that gets passed to the CHECKSIG opcode. How do you prevent someone from changing a transaction in flight? You are committing to the public key in the script. If you are passing it on the stack, you have another problem, unless you constrain it some other way. Oh, right. Any questions about this part?

Also, uncompressed public keys are gone because, really, why do we still have them?

If you later do a soft fork, you have to turn things that were previously valid into invalid things. So OP_SUCCESS is useful for that. When redefining a NOP opcode, the only thing you can do is not modify the stack, though it can observe it. You are restricted to opcodes that can't modify the stack. That's why CHECKLOCKTIMEVERIFY leaves its data on the stack. There could be a variation of OP_SUCCESS called OP_RELATIVESUCCESS where, if you hit the opcode, it's a success, but otherwise not. The reason it doesn't do that is that you want an OP_SUCCESS that can redefine the entire script language in a potential soft fork. It lets you introduce an OP_SUCCESS that changes how opcodes are parsed, which is something you do before any execution. The rule is that you iterate through all the opcodes, and if you find an OP_SUCCESS without failing the parse, you don't execute anything at all, it just succeeds. You don't continue parsing either.
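
A sketch of the parse-time rule just described (the set of opcode values here is only a partial illustration, not the full list in the proposal):

    # If an OP_SUCCESSx opcode appears anywhere in a script that parses,
    # the spend succeeds immediately and nothing is executed at all.
    OP_SUCCESS_EXAMPLES = {0x50, 0x62, 0x89, 0x8a}  # a few illustrative values

    def tapscript_succeeds_immediately(opcodes):
        return any(op in OP_SUCCESS_EXAMPLES for op in opcodes)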

There is also the part about removing the sigops limit. It's not removing it entirely. Instead of having two separate resource limits on a block -- the weight limit and the sigops limits, which leads to an annoying minor optimization problem for miners -- you are only allowed one sigop per 50 bytes of witness. Since every signature being checked requires 64 bytes of witness, plus bytes for the public key, plus the overhead of the input itself, this shouldn't restrict any feature, but it removes the two-dimensional problem. Now, what if at some point there is a reason to introduce an opcode that is very expensive, like say someone wants OP_CHECKSNARKVERIFY, or something more expensive like running something, or OP_VERIFYETHEREUM or OP_JAVASCRIPT. You can imagine that, because of this taproot assumption where we essentially assume most spends will use the simple keypath, it could be reasonable to have fairly expensive escape clauses in your taproot leaf scripts. Well, what do you do if such an opcode is orders of magnitude more expensive than a sigop is now? You would want that to correspond to a weight increase, because proportionally you don't want to go beyond more than some amount of CPU per unit of bandwidth, really. In doing that, we were afraid that introducing it would incentivize people to pad the transaction with hundreds of zero bytes or something just so they don't hit this limit. Say we introduce something that costs a hundred sigops; now you need a 5000-byte witness, which would be wasteful for the network just for the padding. So one idea we had is: if we had a way to modify the weight of a transaction unconditionally, where we could just set a marker on the transaction that says compute the weight but incremented by this value -- and that increment must have the property that it is signed by all keys, otherwise people could change the weight of the transaction in flight, and it must also be recognizable out of context. Even when you are not spending the UTXO, you should be able to do this weight adjustment unconditionally. For that reason, we have an annex, which is a witness element when spending a taproot output that has no consensus meaning except that it goes into the signature and is otherwise skipped. This is an area where those weight increments could go. Say the transaction didn't have a sequence number and we wanted to do something like relative locktime: what we now call the sequence field could have been put in this annex too. It is essentially adding fields to a transaction input that the script doesn't care about. The only consensus rule included in the taproot proposal right now is that you can have an annex, identified in a certain unique way with a certain prefix byte, and if present, it gets skipped. It lives in the witness, it's the last element of the witness stack, and it has to start with the byte 0x50 in hex. There is no valid witness spend right now that can start with the byte 0x50 in p2wsh or p2wpkh.
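
A small sketch of how the annex convention described above could be detected when looking at a taproot input's witness stack (the helper name is made up):

    ANNEX_TAG = 0x50

    def split_annex(witness_stack):
        # If there are at least two witness elements and the last one starts
        # with 0x50, treat it as the annex: skipped by script execution but
        # committed to by signatures.
        if len(witness_stack) >= 2 and witness_stack[-1][:1] == bytes([ANNEX_TAG]):
            return witness_stack[:-1], witness_stack[-1]
        return witness_stack, None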

Another thing: say we want to do another script execution change like graftroot, or things in that domain. Maybe we want to reuse tapscript inside those, where the script execution semantics actually stay the same and maybe the improvements to the scripting language apply to both. If those needed an annex, then... I think it is useful to think of the leaf version as the actual script version, and the version at the top as really the version of the execution structure, and they could be independent of each other. There are a number of execution mechanisms, and then there are a number of script versions. This could increase the reason to have this annex attached to tapscripts.

I think that's everything. Oh, the transaction digest algorithm is updated. Jonas mentioned there are a number of sighash improvements. There are more precomputed things to reduce the impact of large scripts. Tagged hashing, yes. Why? Couldn't you tag it another way if you wanted to change it? Yes, you can, but for example we don't want to introduce new tags for... we don't want introducing new tags to be a routine thing. You probably want optimized implementations for them, and it increases the code size. For the simple things that are shared, you want to put them in the data, and use the tags to make sure different domains don't interact. I don't care much about the epoch bytes.

It sounds like the signer requires a lot more context. Actually, it's less. The biggest change in the sighash is that SIGHASH_ALL now signs the amounts being spent by all inputs, instead of only the inputs you are spending. This is because of a particular attack against hardware wallets that is exploitable today. I think this was discussed on the mailing lists, actually, a couple of months ago. You need to give the hardware wallet the unspent outputs being spent... you need the scripts of the outputs being spent anyway so the hardware wallet can... say a 1000-person coinjoin, you have one input, but now you need all the other context data... Yes, that's a good point. I need to re-read that thread on that mailing list. It's a good point, and I hadn't considered it. This would make PSBTs very, very large before you could even consider signing. You already need the vouts and txids. You also need this data right now to compute the fee; you don't necessarily have to compute the fee, but you certainly should and you certainly should want to.

Also, there is a change where you always sign the scriptpubkey being spent, which protects against the kind of concern of whether I can mutate a sighash that was designed for a P2SH thing into a non-P2SH thing, which is possible today. That is fixed by including a bit that explicitly says whether this is P2SH, but to categorically remove this concern you sign the scriptpubkey being spent.

There are a couple more precomputed values. Any kind of variable-length data is pre-hashed, so if you have multiple checksigs it doesn't need to be recomputed. The annex is variable length. The hash of the annex is computed once and included wherever needed. The number of inputs and the number of outputs are always pre-hashed. The data fed into the sighash computation has a bounded size, which is 200-and-some bytes.

This is the first time any kind of merkle inclusion proof is covered by consensus. You have made a change in that you sort the pairs. Have you considered any other changes? Tagged hashes, sure. Theoretically it could be any kind of accumulator, right? So what John is referring to is that, in this merkle tree, we don't care about the position of things, only that something is in there. Someone suggested, why not sort the inputs to the hash function at each level? Then you don't need to reveal to the network whether you go left or right; you just give it the other branch and you can always combine it. This gets rid of one bit of witness data per level, which isn't a big deal, but there is a bit of avoided complexity about how to deal with serializing all those bits. With sufficiently random scripts, it actually gives a bit of privacy too, because you no longer leak information through the positional ordering in the tree. So why not any other kind of accumulator? Do you have a suggestion? It's semi-serious. Is there anything else out there? I think even a simple RSA accumulator, which has trusted setup problems... but in this case, the setup is the person who owns the private key, right? Is it a private accumulator? Do you do a setup per key? I did the math on this at some point and came to the conclusion that it only makes sense if you have more than 16 million leaves, just in terms of size, and the CPU I don't even want to think about. You can have up to 4 billion leaves, actually. Above that, you will have difficulty computing your address.
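
A minimal sketch of the position-independent Merkle step being discussed, using BIP340-style tagged hashes (the tag name follows the proposal, but treat the details as illustrative rather than normative):

    import hashlib

    def tagged_hash(tag, data):
        t = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(t + t + data).digest()

    def merkle_parent(a, b):
        # Sort the two children before hashing, so a proof never needs to
        # say whether the sibling sits on the left or on the right.
        lo, hi = sorted([a, b])
        return tagged_hash("TapBranch", lo + hi)

    def merkle_root_from_proof(leaf_hash, proof):
        # Walk from the leaf up to the root using the sibling hashes only.
        h = leaf_hash
        for sibling in proof:
            h = merkle_parent(h, sibling)
        return h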

On deterministic sorting of the leaves: can't you sort your leaves by probability? You can still do that. You want to build your tree as a Huffman tree based on probability, and then there is sorting that flips things around at various levels, but the depth of each leaf stays the same. So through that you do still leak information, obviously, but that is desirable, I think.

Post-quantum security

https://twitter.com/pwuille/status/1133838842690060289

One of the interesting things about post-quantum security is that if you have a taproot output generated from an internal public key with a known discrete log, and it ever gets spent using the script path, you have a transferable proof that ECDLP is broken. Someone with an ECDLP break might choose not to use it for this, because using it would convince the world that they really have an ECDLP break.

This isn't specific to quantum computers. If there is ever a substantial threat of ECDLP being broken, there is no choice but to blacklist spending using ECDLP-based constructions. So at that point, either you say there are just these coins and they cannot move at all anymore, or you go toward a fairly complicated post-quantum zero-knowledge proof to show that this public key was derived from this seed or this hardened path. If a convention -- and this is unenforceable -- if a convention around taproot use is that the public key must always appear..... the public key doesn't have to appear there, though. You can just define it as another way of passing it. You could do it in a hard fork; that is the least of our worries. The idea is that, best case, if ECDLP is broken, within 10 years there will be enough research that we are confident there is a fairly acceptable tradeoff for a post-quantum secure scheme, a new output type gets defined, you can spend to it and everything is fine. Then, 20 years after that, people start saying there is actually a 500-qubit quantum computer, only this factor away before our funds are at risk... but by then, almost everyone has moved to a post-quantum scheme anyway. In the taproot tree you put some big hash thing for a post-quantum situation, as a fallback. Do you have a branch somewhere that is like an unspendable thing but with a hash of your private key in it? Not unspendable; you can define a Lamport-style signature, which nobody will want to use because it is huge, but you can always put it in the script tree and it will sit there for years and decades and nobody will touch it. It can be added later. If you have this for 10 or 20 years, all wallets put it in but never spend from it. So if we then kill ECDLP spending, it makes sense because wallets have already been using it. I sort of agree, but it has to be far enough in advance. The big problem is that with post-quantum secure schemes we can't do the fun stuff. But that happens no matter what. Well, not necessarily. Maybe post-quantum secure schemes will be found later that allow it. That's not crazy. There is a minimal cost to doing this just in case, and if you find a better post-quantum scheme that has fun features, then you can introduce it later. You need another branch, but you never see it. It does add complexity, though.

If the public keys involved are always derived from some seed we know, then you could have a zero-knowledge proof of knowledge of whatever generated that seed, which will be huge, but that's fine because most post-quantum things are huge. This would be a hard fork, but we don't need to do anything except make sure our private keys have known seeds that are known to wallets. Or your private key is just the hash of another private key.

Would bip32 derivation be too crazy for the purpose you are describing? I don't know what a zero-knowledge proof of it would look like. It's hard to predict what the efficiency would be. It will probably be at least 10 kilobytes. You could use a simple post-quantum scheme like Lamport signatures.

Most wallets use one of about five libraries, so as soon as you have it in bitcoinjs you are fine (sadly). Part of the goal of taproot is that you will no longer be able to tell which wallets people are using. You have a financial incentive to have those Lamport signatures in there.

\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html b/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html index e04bb6267c..181c6e0e79 100644 --- a/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html +++ b/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/index.html @@ -1,5 +1,5 @@ Signet | Transcripciones de ₿itcoin - \ No newline at end of file +Core dev tech

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html

https://twitter.com/kanzure/status/1136980462524608512

Introduction

I'm going to talk a bit about signet. Does anyone not know what signet is? The idea is to have a signature of the block or of the previous block. The idea is that testnet is horribly broken for testing things, especially for testing things long term. There are huge reorgs on testnet. What about a testnet with a less broken difficulty adjustment? Testnet is really for testing miners. One of the goals is that you want predictable unreliability rather than world-shattering unreliability. Signet would be like a new series of networks. Anyone can create a signet; it's a public network, but only the people with the private key can create blocks. Anyone can create blocks, but they would be invalid if you look at the coinbase. You could fool SPV clients, I suppose. You could have a taproot signet and spin it up, or a Schnorr signet, or whatever you want to do.

Q&A

Q: Does it still do proof-of-work?

A: The block header is still valid, yes. It still does proof-of-work. People are lazy and want to include this in their software, but they don't want to hack up their consensus validation software. Instead, they can keep it as it is. The way headers are stored and downloaded also stays the same; you don't have to change those things.

Q: Is this regtest-style proof-of-work?

A: It's like difficulty 1.

Q: So can it be easily fooled with header changes?

A: Yes, it can. This is for testing, so you shouldn't be connected to random nodes.

Implementations don't need to implement the signature check, and it works with all existing software. You have a coinbase output that carries a signature saying my consensus is configured, and you configure what the scriptpubkey for that is in the scriptsig. Instead of signing a transaction, you sign the block. The changes are not very big at all. It's the same as signing transactions. There is a new signer that takes a hash and a signature is created. The hash is the blockhash.

Q: Is your goal for people to create random signets, or for there to be one global one?

A: One idea is to have a reliable signet that people can use for testing. This permanent signet would have a web interface, and we could ask it to double-spend you or something, and then it would double-spend your address. All of this is outside the proposal; it's just a tool that does it. It's double-spending as a service (DSaaS).

You have a circular dependency -- it can't be the blockhash. The best way would be to strip out the witness commitment manually. In segwit, they set it to 0000 in the merkle... But you probably don't want to do that here, because you still want to sign your coinbase. You could do something like computing the would-be blockhash if that commitment were removed, and then that is what you sign. Zeroed or removed, either way.
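
A very rough sketch of the signing step as just described (all names are hypothetical; the point is only that the digest is computed as if the commitment were absent, so the signature can then live in the coinbase without changing block headers):

    def make_signet_signature(block, hash_without_commitment, sign):
        # hash_without_commitment: computes the block hash as if the signet
        # commitment were zeroed out / removed, avoiding the circularity.
        digest = hash_without_commitment(block)
        return sign(digest)   # the signature is then embedded in the coinbase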

You could sign the previous block instead of the current block. You sign everything except the signature itself, of course, and probably the nonce in the header. The problem with that is you would have to create a signature every time, because you would be doing PoW and making one signature per nonce. So you don't sign the nonce. You could make the signature, and then grind the nonce. With difficulty 1, you are only going to do one on average anyway. It's going to be mainnet difficulty 1.

Regtest vs signet

Regtest is bad, because anyone can go and make a billion blocks. You have to get the headers and then the block and then check the signature.

What's wrong with having the signature in the header? Everyone would have to change their consensus code to be able to parse and validate it. It would be easier if they didn't have to modify any software to use it. Either it works out of the box, or they make changes for signet. There is little motivation to add signature verification to different tools when this isn't used in production for anything. It's literally just for testing new protocols, or for testing your exchange integration to make sure you are handling reorgs correctly -- but you could use regtest for that case.

Puedes ejecutar bitcoind haciendo cumplir signet, y te conectas a tu propio nodo. Realmente no te importa que seas vulnerable, porque no estás comprobando, sólo estás recibiendo bloques de tu propio nodo. Lo mismo ocurre con regtest, pero cualquier otra persona que se conecte a esa red regtest puede acabar con tus bloques. Podrías usar regtest y sólo confiar en ciertos nodos, lo que significa que la retransmisión de bloques sería de un único nodo que ejecuta la cosa.

Sin embargo, no necesitas proteger una red signet. En signet, sigues conectado a un nodo que está validando. Un nodo que está validando en regtest verá el reorg y verá que todavía es válido y válido por consenso, a menos que hagas una lista blanca sólo para regtest, que todo el mundo tendría que configurar. Regtest es sensible al contexto. Los usuarios de Signet todavía necesitan validar firmas, te conectas a bitcoind ejecutando signet. Así que tienes que usar el software signet, pero no requieren otros cambios en sus otras pilas de software si el nuevo formato de encabezado rompe algo. Usted opta por un signet en particular sobre la base de su scriptsig. No importa qué software ejecutes internamente, pero usas bitcoind como enrutador de borde.

¿Qué tal tener una cabecera normal y un conjunto de firmas separado? Es el truco de segwit. ¿Cuántos cambios va a aceptar Bitcoin Core sólo para esto de probar la firma? Es súper simple si es solo «una firma en un lugar determinado». Si no te gusta, no tienes que usarla. Bueno, si va a ser parte de Bitcoin Core entonces eso significa que lo estamos manteniendo nosotros.

¿Regtest no tiene prueba de trabajo? No, tiene proof-of-work pero es un bit de trabajo. Tienes que ponerlo a cero. La mitad de las veces, lo consigues en el primer intento.

Si tu objetivo es tener bloques de 10 minutos, no necesitas cambiar las reglas de dificultad en absoluto. Puedes usar las reglas de la red principal. Y entonces el firmante, si tienes un firmante de alto perfil en algún lugar, tienen 10 ASICs disponibles, pueden elegir una dificultad más alta si quieren y tendrá esa seguridad. La dificultad será exactamente la que el firmante elija o pueda producir. También puede elegir mínima y es menos seguro… El firmante puede tener un cronjob y hacer un bloque de dificultad mínima en ese momento. Sólo mina todo el tiempo y llega a cierta dificultad.

¿Cómo vas a hacer reorgs bajo demanda si la dificultad es exactamente la que ellos pueden producir? Bueno, tomará de 10 a 20 minutos hacer el reorg. Eso está bien. Estaría bien tener reorgs más rápidos. Lo de los 10 minutos es sólo por el ajuste de la dificultad.

Tener una serialización de los chainparams y hacer que sea fácil enviarla. Ese es el pull request en el que alguien estaba pensando: una cadena personalizada como regtest, pero donde puedes cambiar todos los chainparams a lo que quieras, como un génesis personalizado o lo que sea. Un argumento de configuración o un parámetro de línea de comandos que apunte al archivo de chainparams.

Aplicaciones

Creo que es superior en todos los sentidos a testnet. Para lo único que sirve testnet es para hacer pruebas de minería y probar el equipo de los mineros. Si quieres bloques realmente rápidos y reorgs realmente rápidos, entonces usa testnet.

Si estás probando protocolos como los de eltoo entre muchas personas diferentes, entonces regtest es demasiado frágil para eso, y testnet también es demasiado frágil para eso si quieres conseguir algo productivo. Pero aún así quieres poder hacer cosas como el doble gasto como servicio, porque eltoo necesita ser lo suficientemente robusto como para poder manejar reorgs esperados, pero no necesariamente reorgs que hagan temblar la tierra. Otra aplicación es que, como exchange, siempre he querido que mis clientes se conectaran a mi regtest y probaran contra mis reorgs arbitrarios.

Podemos tomar bip-taproot y simplemente meterlo ahí. Podríamos ejecutar la propia rama en signet… o el firmante puede aplicar otras reglas de consenso y ahora esas reglas de consenso están activas allí. Taproot puede ser un soft-fork y puedes simplemente decir que este soft-fork está habilitado en esta red, seguro. Durante el desarrollo de segwit, hubo algunas redes de prueba diferentes para segwit llamadas segnet. No es un error tipográfico, había segnet y ahora hay signet. Nadie se acuerda de segnet excepto Pieter.

También es útil para probar software de monederos. Digamos un exchange que ejecuta un signet semiprivado. Es extremadamente común visitar exchanges y mirar el código de su monedero, y ni siquiera están comprobando los reorgs en absoluto. Así que esta es una manera fácil de que comprueben su trabajo frente a reorgs. Podría ser muy educativo.

Implementación

El pull request para signet está en el limbo. Estoy planeando volver a ello. Hay una implementación antigua que modifica los encabezados de bloque. Voy a reemplazarla con algo que no lo haga. No parece muy difícil de hacer.

\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html b/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html index c7ce152810..cd558c03fb 100644 --- a/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html +++ b/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/index.html @@ -1,6 +1,6 @@ Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegas | Transcripciones de ₿itcoin \ No newline at end of file +Core dev tech

https://twitter.com/kanzure/status/1136992734953299970

Formalización de Blind Statechains como servidor de firma ciega minimalista https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html

Visión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39

Documento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf

Transcripción anterior: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/

Introducción

Voy a hablar casualmente de todo el asunto de las cadenas de estados. Si quieres intervenir, por favor hazlo. Voy a empezar. La idea actual es hacerlo completamente ciego. Es blinded statechains. El objetivo es permitir a la gente transferir un UTXO sin cambiar nada en la cadena. El concepto que utilizo para describirlo es un servidor de firma ciega. La idea es que el servidor sólo tiene dos funciones: puedes pedirle que genere una nueva clave para un usuario, que es algo así como abrir una nueva cadena (una cadena lineal que sólo va en una dirección y en la que no puedes dividir las monedas), y puedes solicitar al servidor una firma ciega indicando qué usuario podrá solicitar la siguiente firma, que es como esto llega a formar una cadena.

El usuario hace el trabajo pesado. El servidor sólo firma las cosas. Podría haber un usuario con una sola clave, pero esa clave podría ser una firma umbral y ser una federación. En lugar de 2-de-3, podría ser 3-de-5 más una persona más que siempre tiene que firmar.

Ejemplo

sigUserB(blindedMessageB, userC) es el usuario B poniendo una firma sobre un mensaje cegado y sobre el siguiente usuario, que será quien pueda solicitar la siguiente firma. El mensaje cegado es firmado por el servidor con su clave y devuelve la firma cegada B. El dinero va de A a B. Lo repites para llegar de C a D usando sigUserC(blindedMessageC, userD). Es un servidor simple donde estás creando… es sólo una cadena de firmas. Es una lista enlazada ECC, básicamente.

Es como transferir derechos de firma. La clave es aquello con lo que se solicita una firma, y lo que se transfiere es quién puede solicitar la siguiente firma. El servidor firma algo en nombre de un usuario, luego en nombre de otro usuario.
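A modo de ilustración, un esbozo hipotético en Python de las dos funciones del servidor que se acaban de describir: crear una clave para un usuario y firmar a ciegas indicando quién podrá solicitar la siguiente firma. La «firma» es ficticia y los nombres son inventados; sólo pretende mostrar cómo las peticiones encadenadas forman la statechain.

```python
import hashlib, secrets

class BlindSigningServer:
    def __init__(self):
        self.keys = {}   # key_id -> (clave del servidor, usuario autorizado actual)
        self.log = []    # historial público de peticiones (la "statechain")

    def new_key(self, user: str) -> str:
        key_id = secrets.token_hex(8)
        self.keys[key_id] = (secrets.token_bytes(32), user)
        return key_id

    def blind_sign(self, key_id: str, user: str, blinded_msg: bytes, next_user: str) -> bytes:
        server_key, current = self.keys[key_id]
        assert user == current, "sólo el propietario actual puede pedir una firma"
        # Firma ficticia sobre el mensaje cegado; el servidor no ve el contenido real.
        sig = hashlib.sha256(server_key + blinded_msg).digest()
        self.keys[key_id] = (server_key, next_user)   # transfiere el derecho de firma
        self.log.append((key_id, user, blinded_msg, next_user))
        return sig

# A abre la cadena, firma a ciegas y cede el derecho a B; B hace lo mismo hacia C.
srv = BlindSigningServer()
kid = srv.new_key("A")
srv.blind_sign(kid, "A", b"blindedMessageA", next_user="B")
srv.blind_sign(kid, "B", b"blindedMessageB", next_user="C")
```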

Propiedad conjunta de la clave con el servidor

Si el segundo usuario crea otra clave, y la llamamos clave transitoria porque se la vamos a dar a otra persona. Puede utilizar musig para crear otra clave que es una multisig 2-de-2 entre A y X. Para firmar con AX y utilizar este servidor, solicita una firma ciega a A, luego completa la firma firmando con esta clave X. Si desea transferir la propiedad de la clave AX, le da la clave privada X a alguien y le dice al servidor que transfiera los derechos de firma.
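Un apunte numérico de la idea de la clave conjunta AX, usando una suma ingenua de claves (no es MuSig real, que añade coeficientes para evitar ataques de clave falsa) y el paquete `ecdsa` de Python: la clave pública combinada es A + X, y entregar la clave privada x es lo que permite al siguiente usuario completar firmas.

```python
from ecdsa import SECP256k1
import secrets

G, n = SECP256k1.generator, SECP256k1.order

a = secrets.randbelow(n - 1) + 1   # clave del servidor (A)
x = secrets.randbelow(n - 1) + 1   # clave transitoria (X), la que se entrega al siguiente usuario
A, X = G * a, G * x

# Agregación ingenua: la clave conjunta AX es simplemente A + X.
AX = A + X
combinada = G * ((a + x) % n)
assert int(combinada.x()) == int(AX.x()) and int(combinada.y()) == int(AX.y())

# Transferir la propiedad = entregar x al nuevo usuario y pedir al servidor que,
# a partir de ahora, sólo firme para él; el servidor nunca llega a conocer x.
```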

Un ejemplo más concreto

En la cadena de estados, digamos que tenemos al usuario B, que controla lo que el servidor llega a firmar. El usuario B solicita una firma eltoo sobre esta transacción fuera de la cadena. Digamos que el dinero va a AX o, después de un tiempo de espera, a B. Así que esto es básicamente Lightning en su forma más básica, en el sentido de que si hay un tiempo de espera el dinero vuelve a B. Ahora que B tiene esa garantía, envía dinero en la blockchain de bitcoin a esa salida. Él ya tiene la firma, por lo que tiene garantizado poder canjear el dinero en la cadena sin la ayuda del servidor. Si quiere transferir el dinero, solicita otra firma al servidor. Se trata de otra transacción de actualización de eltoo con otro número de estado. En lugar de que B consiga firmar, ahora es el usuario C el que consigue firmar. Podemos seguir así. Cambias las transacciones de actualización de eltoo, y básicamente así es como transfieres dinero fuera de la cadena.

En eltoo, puedes gastar una salida con una transacción de actualización posterior. Incluso si una de ellas llega a la blockchain, puedes enviar después la transacción de actualización más reciente. Esto se debe a NOINPUT y a la imposición de los números de secuencia y los timelocks en eltoo.
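Un esbozo puramente conceptual de la regla que se acaba de mencionar: cada transacción de actualización de eltoo lleva un número de estado creciente y la más reciente puede re-anclarse sobre cualquier actualización anterior publicada, así que prevalece la de mayor estado. No representa el script real.

```python
# Cada transacción de actualización lleva un número de estado creciente.
updates = [
    {"state": 7, "outputs": "a AX, o a B tras el timeout"},
    {"state": 9, "outputs": "a AX, o a C tras el timeout"},
    {"state": 8, "outputs": "a AX, o a C tras el timeout"},
]

def prevailing_update(candidates):
    # Con eltoo, una actualización con estado mayor puede gastar la salida de
    # cualquier actualización anterior publicada, así que gana la más reciente.
    return max(candidates, key=lambda tx: tx["state"])

assert prevailing_update(updates)["state"] == 9
```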

Sigue siendo seguridad reactiva, sí. Si no prestas atención, es lo mismo que en Lightning. Tienes que prestar atención a la cadena de bloques y saber cuándo alguien está emitiendo una transacción para cerrar el canal.

Resultados prácticos

Podemos cambiar la propiedad de UTXO sin cambiar la clave. El servidor no conoce X, por lo que no tiene control total. Si el servidor desaparece, se puede canjear on-chain. Como estás haciendo firmas ciegas sobre todo, el servidor no sabe que está firmando algo como bitcoin. Sólo pone una firma ciega en algo, no verifica ningún dato de la transacción ni nada. No es consciente de ello. Son los usuarios los que tienen que descifrar y verificar estas transacciones.

P: ¿Qué verifica el servidor antes de realizar una firma ciega?

R: No verifica nada. Le das un mensaje y lo firma. El usuario quita el cegado de las firmas y puede elegir no aceptar una transacción. Esto es similar al trabajo de validación del lado del cliente de Peter Todd.

P: ¿No es necesario que el servidor garantice que no creará una firma para B después de que se haya transferido a C?

R: Sí, sólo firmará una vez para el usuario. Impone para quién firma y que sólo firma una vez.

P: ¿Es necesario comprobar que los números de secuencia aumentan?

R: Eso lo comprueba el receptor.

P: Pero no sabe cuáles eran los números de secuencia anteriores.

R: Todas las firmas ciegas anteriores que haya firmado el servidor llegarán al usuario receptor.

P: ¿Así que tiene un paquete de propiedad que crece linealmente?

R: Sí, es lo mismo que ocurre con las propuestas de validación del lado del cliente de Peter Todd. El descifrado es el mismo secreto que se ha pasado de un usuario a otro. Puedes hacer un hash de la clave privada y esos son los secretos que usas para cegar. Necesitas dos secretos para cegar. Puedes pasar las versiones no cegadas de las transacciones, eso podría ser suficiente. Depende de lo que quieras hacer. Las firmas cegadas podrían venir del servidor o los usuarios podrían pasarlas. Tal vez prefieras que el servidor guarde los mensajes cegados, tú los descargas y los descifras. Pasas X y lo que obtienes del servidor. Cualquiera de los dos métodos funciona.

Un usuario pide una firma y dice quién será el receptor que podrá pedir la siguiente. Cuando ese siguiente usuario pide una firma ciega, el servidor conoce la cadena de transferencias. Eso es correcto. Pero no conoce la identidad de los propietarios de la clave pública. Aunque definitivamente hay un historial. La ruta es conocida, sí. Sin embargo, no sabe qué UTXO es. Pero sabe que, si es un UTXO, el siguiente destinatario es el propietario actual. Es un token de un solo uso. El receptor podría ser la misma persona que el gastador; el servidor realmente no lo sabría.

Podrías hacer que la ruta no se conozca si usas tokens de ecash. Cambias el token por una firma y obtienes un nuevo token de vuelta, como chaumian ecash. Bien, hablaremos de eso.

Con eltoo puedes hacer cut-through de transacciones hasta el UTXO final o hasta quien tenga la transacción final. Todo esto es ciego. La cadena de estados sólo ve una cadena de solicitudes de usuarios relacionados, pero no sabe qué es.

El papel del servidor

Confías en que el servidor sólo coopere con el último propietario. El servidor promete que sólo cooperará con el último propietario. Confías en que el servidor haga esto. El servidor es una federación, es una firma umbral Schnorr usando musig o algo así. Debe publicar todas las peticiones de firma ciega (la statechain). De esta manera la gente podría auditar el servidor y ver que nunca firmó dos veces. Asegúrate de que el servidor firma sólo para las peticiones de los usuarios, y asegúrate de que el servidor nunca firma dos veces para un usuario.
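Un esbozo de cómo podría auditarse ese registro público, siguiendo lo descrito: comprobar que cada petición la hace el propietario vigente y que el servidor nunca firma dos veces para el mismo usuario sobre la misma clave. El formato del registro es hipotético y reutiliza el del esbozo anterior.

```python
def audit_statechain(log, first_owner):
    """log: lista de tuplas (key_id, usuario_firmante, mensaje_cegado, siguiente_usuario)."""
    expected_owner = {}   # key_id -> quién tiene derecho a la próxima firma
    seen = set()          # pares (key_id, usuario) ya atendidos

    for key_id, user, _blinded, next_user in log:
        owner = expected_owner.get(key_id, first_owner[key_id])
        assert user == owner, f"firma para {user}, pero el propietario era {owner}"
        assert (key_id, user) not in seen, "el servidor firmó dos veces para el mismo usuario"
        seen.add((key_id, user))
        expected_owner[key_id] = next_user
    return True

# Con el servidor de ejemplo de antes:
# audit_statechain(srv.log, {kid: "A"})
```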

Esto es un blockchain público o statechains público. Está centralizado, así que no es un gran problema usar HTTPS, json-rpc, lo que sea.

Atomicidad mediante firmas de adaptadores

Una vez que ves una firma, aprendes un secreto. El servidor tiene que dar o todas las firmas o ninguna. Si intenta dar sólo la mitad, no funciona, porque serías capaz de completar las otras firmas. Usamos esto para hacer posible la transferencia de múltiples monedas en varias de estas statechains. Si tienes una cadena con un bitcoin, otra cadena con un bitcoin y otra con dos bitcoin, puedes intercambiarlas. Estos intercambios atómicos también se pueden hacer entre monedas distintas.
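Un juguete numérico (inseguro, sólo para ilustrar la frase «una vez que ves una firma, aprendes un secreto»): en una firma adaptadora al estilo Schnorr, la prefirma s' y la firma completa s difieren exactamente en el secreto t del adaptador, así que publicar s revela t a quien ya tenía s'. Sólo usa el paquete `ecdsa` para los parámetros de la curva.

```python
from hashlib import sha256
from ecdsa import SECP256k1
import secrets

G, n = SECP256k1.generator, SECP256k1.order

def challenge(point_r, point_p, msg: bytes) -> int:
    data = int(point_r.x()).to_bytes(32, "big") + int(point_p.x()).to_bytes(32, "big") + msg
    return int.from_bytes(sha256(data).digest(), "big") % n

x = secrets.randbelow(n - 1) + 1   # clave de firma
k = secrets.randbelow(n - 1) + 1   # nonce
t = secrets.randbelow(n - 1) + 1   # secreto del adaptador
P, R, T = G * x, G * k, G * t

c = challenge(R + T, P, b"intercambio atomico")   # el reto compromete el nonce combinado R+T
s_pre = (k + c * x) % n            # prefirma que se entrega a la contraparte
s = (s_pre + t) % n                # firma completa que acaba publicada

# Quien tenía la prefirma aprende el secreto al ver la firma publicada.
assert (s - s_pre) % n == t
```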

Para intercambiar a valores más pequeños… el servidor tiene todas las firmas de todo el mundo, excepto los secretos del adaptador. Una vez que recibe todos los secretos de todos, puede completar todas las firmas y puede publicarlas todas. Si decide publicar sólo la mitad de ellas, entonces los usuarios también tienen las firmas de sus adaptadores.

Comparación con las cadenas laterales federadas

No es divisible, es sólo para UTXOs completos. Es casi el mismo modelo de confianza que las sidechains federadas. Es no-custodial gracias al 2-de-2 y a eltoo fuera de la cadena. Se necesita una torre de vigilancia o un nodo completo para vigilar las transacciones de cierre. No es un transmisor de dinero porque es sólo una firma ciega que podría ser cualquier cosa. «No me culpes», básicamente. Sigue siendo una federación. Lightning es más seguro. Si la federación realmente lo intentara, podría conseguir tu clave privada, por ejemplo haciendo un intercambio en el que ellos son uno de los usuarios. Si consiguen una de las claves transitorias, pueden conseguir tu dinero.

Aquí puedes enviar a la gente bitcoin en un statechain: tendrían que confiar en el statechain y tendría que gustarles bitcoin, pero no hay ningún gravamen y no es como Lightning en ese aspecto.

Peores escenarios

El servidor obtiene un puñado de claves transitorias, desenmascara las firmas, se da cuenta de las transacciones de bitcoins, procede a robar de forma demostrable las monedas, todos los demás usuarios (claves no robadas) se retiran en cadena como resultado. Pero esto es inofensivo sin las claves transitorias. Orden judicial de congelar o confiscar monedas, realmente no pueden cumplirla.

Microtransacciones

No puedes enviar nada más pequeño que un UTXO económicamente viable. Nunca podrían canjearlo en la cadena. Así que en realidad estás limitado por las tasas de transacción en la cadena. Como statechain, quieres cobrar comisiones, y esto es necesario cuando se intercambia entre monedas. Habrá algunas cantidades fraccionarias al intercambiar entre altcoins. Tiene que haber algún método para pagar, que sea más pequeño que los UTXOs que estás transfiriendo. Podrías darle a la statechain uno de estos UTXOs, bueno, podrías pagar con lightning o una tarjeta de crédito. O API satélite con tokens chaumian para pagos, supongo que eso no está desplegado todavía.

Lightning en las cadenas de estados: eltoo y las fábricas de canales

Si tienes una fábrica de canales, puedes añadir y eliminar participantes. Eltoo admite cualquier número de participantes. ¿No es eso una fábrica? La idea de una fábrica es que tienes un protocolo secundario funcionando además de eltoo. Pero en eltoo es esta cosa plana donde puedes reorganizar fondos entre participantes individuales y no necesitas esta segunda capa de segundas capas hasta el final. Deberíamos haberlo llamado tortuga.

El servidor puede estar dentro de un statechain él mismo sin saberlo.

Canal actualizado junto con multi atomic swap. Cierre no cooperativo similar al eltoo regular. Las statechains usan una versión simplificada de eltoo, donde sólo tienes transacciones de actualización y tienes transacciones de liquidación de otra manera. Si quieres reequilibrar tu canal, puedes añadir un bitcoin al canal haciendo swap y luego moviéndote por el canal. Todo esto es posible. Acabamos de hablar de las fábricas de canales para añadir/eliminar miembros también.

Lightning tiene un rendimiento limitado; tienes rutas y sólo puedes enviar una cantidad limitada de dinero a través de ellas. Es divisible y puedes enviar cantidades fraccionarias sin problema. En statechains el rendimiento es ilimitado, pero no es divisible. Si combinas los dos, suponiendo que aceptas los supuestos de confianza, tienes una mezcla perfecta: poder enviar cualquier cosa sin fricción.

Nadie tiene que poner dinero para apoyar el protocolo. Podría tener una cuota fija. Las cuotas de eltoo dependen de los usuarios. En lightning, los únicos que pagan son los intermediarios y aquí no hay intermediarios. Los incorporas a tu propio grupo y les pagas directamente. Todos tenéis que estar conectados para hacerlo. Esto no se aplica sin statechains, nos permite tener membresía dinámica de instancias de eltoo, que es realmente genial.

Podrías hacer que el servidor fuera parte del canal Lightning y luego pagarle. Sí, claro. Asumamos que confiamos en el servidor, entonces estamos bien. Si una de las partes desaparece y deja de cooperar, estás forzado a entrar en la cadena. Así que aumentas tu riesgo en la cadena a medida que añades más miembros. Esa es la compensación. Pero ese es siempre el caso, incluso sólo con eltoo: tienes que saber que están en línea cuando llega el momento de firmar. Si añades el servidor a tu canal de eltoo, entonces conocen el UTXO; sigue siendo algo ciego, pero tienen más información. Podrías tener un canal hacia el servidor, exponerles ese canal y pagarles a través de él. Pero no a través de los demás canales.

A medida que aumenta el número de miembros en la cadena de estados, la cooperación se vuelve más cara, así que quizá quieran ganar dinero por ello.

Si confías en unos servidores y otro usuario confía en otros servidores de una federación diferente, ¿es algo posible? No puedes aumentar la seguridad, sólo puedes reducirla. Puedes tener firmas de umbral para hacer esto. Pero podríamos tener un paso intermedio que termine en la cadena, si lo haces en la cadena está bien. Tendríamos que repetirlo en la cadena de todos modos, correcto. Bueno, eso es desafortunado.

Casos de uso

Podrías hacer transferencia de valor fuera de la cadena, canales Lightning (rebalanceo), canales de apuestas (usando multisig o contratos de registro discreto), o tokens RGB no fungibles (usando sellos de un solo uso). Usas pay-to-contract para poner una moneda de color en un UTXO, pones el UTXO en la statechain y ahora puedes mover este token no fungible fuera de la cadena. Eso lo soluciona.

Requiere más confianza porque el concepto de transacción fuera de la cadena no es algo que se pueda emular sin blockchain. Para los casos de uso que no son bitcoin es extraño pensar en ello, pero hasta ahora, si alguien tenía una clave privada, se suponía que no podía dársela a otra persona sin que ambos la tuvieran; el servidor les permite hacerlo y la suposición se rompe. Si ves una clave privada, se la puedes dar a otra persona. La propiedad puede cambiar moviéndola a través de la cadena de estados.

Otros temas

Puedes utilizar módulos de seguridad de hardware para transferir claves transitorias, como la atestación. Tienes una clave privada dentro de un HSM o un monedero hardware. Tienes otro dispositivo de hardware y quieres transferir la clave privada. Mientras la clave privada no salga del dispositivo, puedes hacer transferencia de dinero fuera de la cadena. Esto es como teechan, sí. Los HSM son terribles, pero la cosa es que estamos transfiriendo la clave transitoria de una manera que es incluso menos segura que eso. Si estás añadiendo un HSM entonces es más seguro, y si el HSM se rompe entonces estás de vuelta al modelo que tenemos ahora. El usuario podría confabularse con la federación para robar dinero… Todo el mundo tendría su propio dispositivo hardware monedero, y mi dispositivo habla con tu dispositivo, y mi clave transitoria está dentro de él, y nunca sale, o si sale entonces se niega a transferirse a tu dispositivo hardware. Esto requiere confianza en el HSM, por supuesto. Podrías ejecutar un programa por el servidor que atestigüe que no firma estados antiguos. No sé si eso sería equivalente o mejor seguridad, pero sí que es un buen punto. Al menos puedes compartir la confianza, dividir la confianza entre el desarrollador del hardware y el servidor, en lugar de confiar sólo en el servidor.

¿Qué pasa si el opendime tuviera una clave transitoria y pudiera firmar? Podrías entregarla físicamente. Sí, déjame pensarlo. No estoy seguro. Creo que debería funcionar siempre y cuando pueda hacer transacciones bitcoin parcialmente firmadas. No veo cómo podría no funcionar, así que eso es interesante. Muy literalmente, esa es la única copia de la clave si es un opendime. Estoy seguro de que podrías diseñar algo así con propiedades similares a las del opendime, donde hay ciertas garantías de seguridad en torno a que nadie ha visto realmente la clave privada. La información de cegado puede estar en ese chip también, y quizás una cabecera de verificación. Esto añade una suposición adicional si no te conectas en absoluto: hay una cosa más en la que confiar. Sí, sólo confía en mí, conecta esto al USB. Te estoy dando dinero, sólo confía en mí… claro, eso es lo que está pasando.

También podrías hacer graftroot withdrawal, que permite canjear forks o un ETF. En lugar de retirar de la statechain por la vía cooperativa, que sería una firma ciega donde el dinero simplemente va a ti en la cadena, sin todo el asunto de eltoo, descartando eso. Pero si puedes retirar a través de graftroot, entonces, suponiendo que tuviéramos graftroot y que después ocurriera algún hard-fork, ahora tienes una clave graftroot con la que podrías sacar todas las monedas del hard-fork. Porque la suposición es que hay algún tipo de protección contra replay, pero graftroot es lo mismo. Tu clave graftroot funcionará en todas las cadenas de los hard-forks, pero necesitas crear una transacción diferente en esa otra cadena. Si retiras a través de graftroot, puedes retirar de todas las cadenas. Suponiendo que soporten graftroot. La suposición es que se trata de una bifurcación de bitcoin en la que graftroot ya está ahí, porque simplemente copió esas características y soft-forks.

Esto también podría ser utilizado para un ETF. Con un ETF, el problema con un hard-fork es qué monedas te van a dar, bueno con graftroot te podrían dar todas las monedas sin saber los hard-forks o cuántas. Podrías tener un utxo con más hard-forks y tener un valor diferente o algo así. Pero de todas formas este puede ser el caso ahora mismo.

Un problema abierto es que podrías verificar sólo el historial de las monedas que posees o recibes. Pero necesitas algún tipo de garantía de que no hay dos historiales. Así que necesitas almacenar y retransmitir sucintamente el historial de la cadena de estados. Necesitas ser capaz de conocer todas las cadenas que existen y saber que la tuya es única… una clave del servidor que sólo está firmando esto. ¿Pero cómo pruebas un negativo? Tiras un árbol merkle ahí. Hay varias maneras. Hubo una propuesta para prevenir el doble gasto forzando a firmar con la misma k dos veces; así, si alguna vez firman algo dos veces, pierden dinero o algo así. El castigo no importa, ya está ahí: si firman dos veces, la reputación se hace añicos. Ya están castigados, sólo hay que detectarlo. Una forma sería conocer todas las claves con las que se está firmando, obtener una lista de ellas y asegurarnos de que cada una aparece sólo una vez. Otra forma es un árbol de Merkle disperso, que no he mirado.

En el mejor de los casos puedes hacer que las pruebas de fraude sean más fáciles de construir y de verificar, pero ¿por qué iban a querer dar datos suficientes para demostrar que se produjo un fraude? ¿Cómo sabes que no hay dos historiales? Bueno, en la cadena sólo puede haber un historial. Una vez que la gente se entera, toda la reputación se viene abajo. Fuera de la cadena se puede inflar, pero una vez dentro de la cadena sólo se escribe uno de los historiales. El servidor firma un historial específico del statechain y te lo da. Si tienes toda la lista de claves que ha dado, y tu clave sólo está ahí una vez, creo que es prueba suficiente.

Ver también

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/statechains/

\ No newline at end of file diff --git a/es/bitcoin-core-dev-tech/2019-06/index.xml b/es/bitcoin-core-dev-tech/2019-06/index.xml index 367b68a243..0f78d92513 100644 --- a/es/bitcoin-core-dev-tech/2019-06/index.xml +++ b/es/bitcoin-core-dev-tech/2019-06/index.xml @@ -1,6 +1,6 @@ Bitcoin Core Dev Tech 2019 on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/Recent content in Bitcoin Core Dev Tech 2019 on Transcripciones de ₿itcoinHugo -- gohugo.ioesFri, 07 Jun 2019 00:00:00 +0000AssumeUTXOhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/https://twitter.com/kanzure/status/1137008648620838912 Por qué assumeutxo assumeutxo es la continuación espiritual de assumevalid. ¿Por qué queremos hacer esto en primer lugar? En la actualidad, la descarga inicial de bloques tarda horas y días. Varios proyectos en la comunidad han estado implementando medidas para acelerar esto. Casa creo que incluye datadir con sus nodos. Otros proyectos como btcpay tienen varias formas de agrupar esto y firmar las cosas con claves gpg y estas soluciones no son del todo a medias, pero probablemente tampoco son deseables.Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -Formalización de Blind Statechains como servidor de firma ciega minimalista https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +Formalización de Blind Statechains como servidor de firma ciega minimalista https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html Visión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 Documento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf Transcripción anterior: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ @@ -9,16 +9,16 @@ https://twitter.com/kanzure/status/1136939003666685952 https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-Design-Philosophy &ldquo;Elligator al cuadrado: Puntos uniformes en curvas elípticas de orden primo como cadenas aleatorias uniformes&rdquo; https://eprint.iacr.org/2014/043 Introducción Esta propuesta lleva años en marcha. Muchas ideas de sipa y gmaxwell fueron a parar al bip151. Hace años decidí intentar sacar esto adelante. Hay bip151 que de nuevo la mayoría de las ideas no son mías sino que vienen de sipa y gmaxwell. La propuesta original fue retirada porque descubrimos formas de hacerlo mejor.Hardware Walletshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-hardware-wallets/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-hardware-wallets/https://twitter.com/kanzure/status/1136924010955104257 -¿Cuánto debería hacer Bitcoin Core y cuánto otras bibliotecas? Andrew Chow escribió la maravillosa herramienta HWI. Ahora mismo tenemos un pull request para soportar firmantes externos. 
El script HWI puede hablar con la mayoría de los monederos hardware porque tiene todos los controladores incorporados, y puede obtener claves de ellos, y firmar transacciones arbitrarias. Eso es más o menos lo que hace. Es un poco manual, sin embargo. Tienes que introducir algunos comandos python; llamar a algún RPC de Bitcoin Core para obtener el resultado; así que escribí algunos métodos RPC de conveniencia para Bitcoin Core que te permiten hacer las mismas cosas con menos comandos.Signethttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +¿Cuánto debería hacer Bitcoin Core y cuánto otras bibliotecas? Andrew Chow escribió la maravillosa herramienta HWI. Ahora mismo tenemos un pull request para soportar firmantes externos. El script HWI puede hablar con la mayoría de los monederos hardware porque tiene todos los controladores incorporados, y puede obtener claves de ellos, y firmar transacciones arbitrarias. Eso es más o menos lo que hace. Es un poco manual, sin embargo. Tienes que introducir algunos comandos python; llamar a algún RPC de Bitcoin Core para obtener el resultado; así que escribí algunos métodos RPC de conveniencia para Bitcoin Core que te permiten hacer las mismas cosas con menos comandos.Signethttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html https://twitter.com/kanzure/status/1136980462524608512 Introducción Voy a hablar un poco de signet. ¿Alguien no sabe lo que es signet? La idea es tener una firma del bloque o del bloque anterior. La idea es que testnet está horriblemente roto para probar cosas, especialmente para probar cosas a largo plazo. Hay grandes reorgs en testnet. ¿Qué pasa con testnet con un ajuste de dificultad menos roto? Testnet es realmente para probar mineros. Uno de los objetivos es que quieras una falta de fiabilidad predecible y no una falta de fiabilidad que destroce el mundo.Discusión general sobre SIGHASH_NOINPUT, OP_CHECKSIGFROMSTACK, and OP_SECURETHEBAGhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/SIGHASH_NOINPUT, ANYPREVOUT, OP_CHECKSIGFROMSTACK, OP_CHECKOUTPUTSHASHVERIFY, and OP_SECURETHEBAG https://twitter.com/kanzure/status/1136636856093876225 Al parecer, hay algunos mensajes políticos en torno a OP_SECURETHEBA y &ldquo;asegurar la bolsa&rdquo; podría ser una cosa de Andrew Yang -SIGHASH_NOINPUT Muchos de nosotros estamos familiarizados con NOINPUT. ¿Alguien necesita una explicación? ¿Cuál es la diferencia entre el NOINPUT original y el nuevo? NOINPUT asusta al menos a algunas personas. Si sólo hacemos NOINPUT, ¿empezará a causar problemas en bitcoin? 
¿Significa que los intercambios tienen que empezar a mirar el historial de transacciones y poner en la lista negra NOINPUT en el historial reciente hasta que esté profundamente enterrado?Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +SIGHASH_NOINPUT Muchos de nosotros estamos familiarizados con NOINPUT. ¿Alguien necesita una explicación? ¿Cuál es la diferencia entre el NOINPUT original y el nuevo? NOINPUT asusta al menos a algunas personas. Si sólo hacemos NOINPUT, ¿empezará a causar problemas en bitcoin? ¿Significa que los intercambios tienen que empezar a mirar el historial de transacciones y poner en la lista negra NOINPUT en el historial reciente hasta que esté profundamente enterrado?Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introducción No hay mucho nuevo de que hablar. No está claro lo de CODESEPARATOR. Se quiere convertir en regla de consenso que las transacciones no pueden ser mayores de 100 kb. ¿No hay reacciones a eso? De acuerdo. Bien, lo haremos. Hagámoslo. ¿Todos saben cuál es esta propuesta? El tiempo de validación para cualquier bloque, éramos perezosos a la hora de arreglar esto. Segwit fue un primer paso para arreglar esto, dando a la gente una manera de hacer esto de una manera más eficiente.Taproothttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ Previamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/es/bitcoin-explained/index.xml b/es/bitcoin-explained/index.xml index 81e276f1fa..3807ef5961 100644 --- a/es/bitcoin-explained/index.xml +++ b/es/bitcoin-explained/index.xml @@ -5,13 +5,13 @@ Episodio anterior en lockinontimeout (LOT): https://btctranscripts.com/bitcoin-m Episodio anterior sobre Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/ Aaron van Wirdum en &ldquo;Ahora hay dos clientes de activación de Taproot, aquí está el porqué&rdquo;: https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why Transcripción por: Michael Folkson -Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html +Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado. Sjors, ¿cuál es tu juego de palabras de la semana? Sjors Provoost (SP): En realidad te pedí un juego de palabras y me dijiste &ldquo;Corta, reedita. Vamos a hacerlo de nuevo&rdquo;. No tengo un juego de palabras esta semana. AvW: Los juegos de palabras son lo tuyo. SP: La última vez intentamos esto de LOT.Activación de Taproot y LOT=true vs LOT=falsehttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/Fri, 26 Feb 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki -Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -Argumento adicional para LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +Argumento adicional para LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Artículo de Aaron van Wirdum en LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation Introducción Aaron van Wirdum (AvW): En directo desde Utrecht, este es el van Wirdum Sjorsnado. Sjors, haz el juego de palabras. Sjors Provoost (SP): Tenemos &ldquo;mucho&rdquo; que hablar. diff --git a/es/bitcoin-explained/taproot-activation-lockinontimeout/index.html b/es/bitcoin-explained/taproot-activation-lockinontimeout/index.html index ce7d72a773..81a959bdef 100644 --- a/es/bitcoin-explained/taproot-activation-lockinontimeout/index.html +++ b/es/bitcoin-explained/taproot-activation-lockinontimeout/index.html @@ -1,7 +1,7 @@ Activación de Taproot y LOT=true vs LOT=false | Transcripciones de ₿itcoin Episodio 2. Y hemos hablado de la activación de Taproot, de la activación de las horquillas suaves en general en el Episodio 3, así que puede que nos saltemos algunas cosas.

AvW: En el Episodio 3 hablamos de todo tipo de propuestas diferentes para activar Taproot, pero ha pasado más de medio año al menos, ¿no?

SP: Eso fue el 25 de septiembre, así que unos cinco meses, sí.

AvW: Ha pasado un tiempo y ahora la discusión ha llegado a su fase final, diría yo. En este momento la discusión es sobre el parámetro del lote, true o false. En primer lugar, para recapitular muy brevemente, Sjors, ¿puedes explicar qué estamos haciendo aquí? ¿Qué es una bifurcación suave?

¿Qué es una bifurcación suave?

SP: La idea de una bifurcación suave es que haces las reglas más estrictas. Eso significa que desde el punto de vista de un nodo que no se actualiza nada ha cambiado. Sólo ven las transacciones que son válidas para ellos de los nodos que sí se actualizan. Debido a que tienen reglas más estrictas se preocupan por lo que sucede. Lo bueno de las bifurcaciones suaves es que como usuario de un nodo puedes actualizar cuando quieras. Si no te importa esta característica, puedes actualizar cuando quieras.

AvW: Una bifurcación suave es una actualización del protocolo compatible con el pasado y lo bueno es que si la mayoría de los mineros aplican las reglas, eso significa automáticamente que todos los nodos de la red seguirán la misma cadena de bloques.

SP: Así es. Los nodos más antiguos no conocen estas nuevas reglas, pero sí saben que seguirán la cadena con más pruebas de trabajo, siempre que sean válidas. Si la mayoría de los mineros siguen las nuevas reglas, entonces la mayoría de las pruebas de trabajo seguirán las nuevas reglas. Y por lo tanto, un nodo antiguo seguirá eso por definición.

AvW: Lo bueno de las bifurcaciones suaves es que si la mayoría del poder de hash aplica las nuevas reglas, la red permanecerá en consenso. Por lo tanto, las últimas bifurcaciones suaves se activaron mediante la coordinación del poder de hash. Eso significa que los mineros podían incluir un bit en los bloques que minaban señalando que estaban listos para la actualización. Una vez que la mayoría de los mineros, el 95% en la mayoría de los casos, indicaban que estaban preparados, los nodos lo reconocían y aplicaban la actualización.

SP: Así es. Un nodo comprobaría, por ejemplo cada dos semanas, cuántos bloques señalaron esta cosa y si es así, entonces dice “Ok la bifurcación suave está ahora activa. Voy a suponer que los mineros van a aplicar esto”.
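Como referencia, un esbozo simplificado de esa comprobación periódica: al final de cada periodo de reajuste de 2016 bloques el nodo cuenta cuántos bloques señalizaron el bit correspondiente y, si se alcanza el umbral, considera la bifurcación suave bloqueada (locked in). Los números son los del ejemplo; no es el código real de BIP 9.

```python
RETARGET = 2016
THRESHOLD = 0.95   # umbral usado en las últimas bifurcaciones suaves; para Taproot se habla del 90%

def locked_in(period_versions, bit):
    """period_versions: lista de campos `version` de un periodo completo de 2016 bloques."""
    assert len(period_versions) == RETARGET
    signaling = sum(1 for version in period_versions if version & (1 << bit))
    return signaling >= THRESHOLD * RETARGET

# Ejemplo: 1930 de 2016 bloques señalizando el bit 2 (~95,7%) -> se bloquea la activación.
versions = [0x20000004] * 1930 + [0x20000000] * 86
print(locked_in(versions, bit=2))   # True
```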

La capacidad de los mineros para bloquear una actualización de la bifurcación suave

AvW: Correcto. El problema con este mecanismo de actualización es que también significa que los mineros pueden bloquear la actualización.

SP: Sí, ese es el inconveniente.

AvW: Incluso si todo el mundo está de acuerdo con la actualización, por ejemplo en este caso Taproot, parece tener un amplio consenso, pero a pesar de ese amplio consenso los mineros podrían bloquear la actualización, que es lo que ocurrió con SegWit hace un par de años.

SP: Por aquel entonces hubo mucho debate sobre el tamaño del bloque y muchas propuestas de hard fork y muchos sentimientos heridos. Al final fue muy difícil conseguir que se activara SegWit porque los mineros no estaban señalando para ello, probablemente en su mayoría de forma intencionada. Ahora también puede ocurrir que los mineros simplemente ignoren una actualización, no porque no les guste, simplemente porque están ocupados.

AvW: Sí. En el caso de SegWit eso se resolvió al final a través de UASF, o al menos eso fue parte de ello. No vamos a entrar en eso en profundidad. Eso significó básicamente que un grupo de usuarios dijo “En este día (alguna fecha en el futuro, fue el 1 de agosto de 2017) vamos a activar las reglas de SegWit sin importar el poder del hash que lo soporte.”

SP: Correcto, al mismo tiempo y tal vez como consecuencia de eso, un grupo de mineros y otras empresas acordaron que comenzarían a señalar para SegWit. Hubo un montón de otras cosas que sucedieron al mismo tiempo. Lo que ocurrió el 1 de agosto, la cosa se activó, o un poco antes creo.

El parámetro lockinontimeout (LOT)

AvW: Ahora nos adelantamos en el tiempo, han pasado cuatro años y ahora la actualización de Taproot está lista para salir. Lo que ocurrió hace un par de años está provocando un nuevo debate sobre la actualización de Taproot. Eso nos lleva al parámetro lockinontimeout (LOT) que es un parámetro nuevo. Aunque está inspirado en cosas de ese periodo de actualización de SegWit.

SP: Es básicamente una opción incorporada en el UASF que puedes decidir utilizar o no. Ahora hay una manera formal en el protocolo de hacerlo para activar un soft fork en una fecha límite.

AvW: LOT tiene dos opciones. La primera opción es false, LOT=false. Eso significa que los mineros pueden señalizar la actualización durante un año y, si en ese año se cumple el umbral del 90 por ciento, la actualización se activará como acabamos de explicar. Por cierto, 1 año y 90 por ciento no son algo fijo, pero es lo que la gente parece estar fijando. Por conveniencia es lo que voy a usar para discutir esto. Los mineros tienen 1 año para activar la actualización. Si después de ese año no la han activado, la actualización de Taproot expirará. Simplemente no ocurrirá; esto es LOT=false.

SP: Y por supuesto siempre está la opción entonces de enviar una nueva versión, intentándolo de nuevo. No es un “no”, es que no pasa nada.

AvW: Exactamente. Luego está LOT=true que, de nuevo, los mineros tienen 1 año para señalar su apoyo (disposición) a la actualización. Si se alcanza un umbral del 90 por ciento, la actualización se activará. Sin embargo, la gran diferencia es lo que ocurre si los mineros no alcanzan este umbral. Si no dan la señal para la actualización. En ese caso, cuando el año esté a punto de terminar, los nodos que tengan LOT=true empezarán a rechazar todos los bloques que no señalen la actualización. En otras palabras, sólo aceptarán bloques que señalen para la actualización, lo que significa, por supuesto, que se cumplirá el umbral del 90 por ciento y, por tanto, se activará Taproot, o cualquier otra bifurcación suave de este mecanismo.

SP: Si se producen suficientes bloques.

AvW: Si se producen suficientes bloques, sí, es cierto. Un pequeño matiz para los que lo encuentren interesante, incluso los nodos LOT=true aceptarán hasta un 10% de bloques que no señalicen. Eso es para evitar escenarios extraños de división de la cadena.

SP: Sí. Si se activa de forma normal, sólo el 90% tiene que señalar. Si se ordena la señalización, sería raro tener un porcentaje diferente de repente.
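Un esbozo de la regla que se acaba de describir para un nodo con LOT=true en el último periodo de señalización: tolera hasta un 10% de bloques sin señal (para que el recuento cuadre con el umbral del 90%) y a partir de ahí rechaza todo bloque que no señalice. Es una simplificación conceptual, no el código de BIP 8.

```python
PERIOD = 2016
MAX_NON_SIGNALING = PERIOD // 10   # margen de ~10% de bloques sin señal

def accept_block(signals: bool, non_signaling_so_far: int) -> bool:
    """Decisión de un nodo LOT=true durante el último periodo de señalización."""
    if signals:
        return True
    # Se aceptan bloques sin señal sólo mientras quede margen del 10%.
    return non_signaling_so_far < MAX_NON_SIGNALING

# Ejemplo de juguete: 250 bloques sin señal seguidos; los que exceden el margen se rechazan.
non_signaling, rejected = 0, 0
for signals in [False] * 250 + [True] * 100:
    if accept_block(signals, non_signaling):
        non_signaling += 0 if signals else 1
    else:
        rejected += 1
print(non_signaling, rejected)   # 201, 49: agotado el margen, sólo se aceptan bloques que señalizan
```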

AvW: Van a aceptar el primer 10% de los bloques que no emitan señales, pero después se rechazarán todos los bloques que no emitan señales. Así que el umbral del 90% se alcanzará sin duda. La gran razón para LOT=true, para ponerlo en true, es que de esta manera los mineros no pueden bloquear la actualización. Incluso si intentan bloquear la actualización, una vez que el año ha terminado los nodos seguirán aplicando Taproot. Así que está garantizado que se produzca.

SP: Si se producen suficientes bloques. Podemos entrar en algunos de los riesgos de esto, pero creo que quieres seguir explicando un poco.

AvW: La razón por la que a algunas personas les gusta LOT=true es porque así los mineros no tienen veto. El contraargumento, que ya has sugerido, es que los mineros no tienen un veto de todos modos, incluso si usamos LOT=false la actualización expirará después de un año, pero después de ese año podemos desplegar un nuevo mecanismo de actualización y un nuevo período de señalización. Esta vez tal vez usar LOT=true.

SP: O incluso mientras esto se lleva a cabo. Podrías esperar medio año con LOT=false y medio año después decir “Esto está tardando demasiado. Arriesguémonos un poco más y pongamos LOT=true”. O bajar el umbral o alguna otra permutación que aumente ligeramente el riesgo pero también la probabilidad de activación.

AvW: Sí, tienes razón. Pero en realidad ese es también uno de los argumentos contra el uso de LOT=false. Los defensores de LOT=true dicen que, como tú has sugerido, podemos hacerlo después de 6 meses, pero hay otro grupo de usuarios que podría decir “No. Primero hay que esperar a que pase el año y luego volveremos a desplegar”. Digamos que después de 6 meses Taproot no se ha activado. Ahora, de repente, se produce una nueva discusión entre la gente que quiere empezar a desplegar los clientes LOT=true de inmediato y los grupos de usuarios que quieren esperar hasta que termine el año. Se reintroduce la discusión que tenemos ahora, salvo que para entonces sólo tenemos 6 meses para resolverla. Es una especie de bomba de relojería.

SP: En realidad, no para resolverla. Si no se hace nada durante 6 meses, sólo queda una opción, que es volver a intentarlo con un nuevo bit de activación.

AvW: Pero entonces hay que acordar cuándo se va a hacer eso. ¿Van a hacerlo después de 6 meses o van a hacerlo más tarde? La gente podría no estar de acuerdo.

SP: Entonces volveríamos a estar como ahora, salvo que sabríamos un poco más porque ahora sabemos que los mineros no estaban señalizando.

AvW: Y no tienes mucho tiempo para resolverlo porque después de 6 meses podría pasar. Algún grupo de usuarios podría ejecutar LOT=true o….

SP: De lo que hablas aquí es de la posibilidad de, digamos, la anarquía en el sentido de que no hay consenso sobre cuándo activar esta cosa. Un grupo, creo que lo discutimos ampliamente en el tercer episodio, se pone muy agresivo y dice “No, vamos a activar esto antes”. Entonces nadie sabe cuándo va a ocurrir.

AvW: Permítanme expresarlo de otra manera. Si ahora mismo decimos “Si después de 6 meses los mineros no han activado Taproot, entonces simplemente actualizaremos a los clientes de LOT=true”, entonces los defensores de LOT=true dirán “Si ese es el plan de todos modos, hagámoslo ahora. Es mucho más fácil. ¿Por qué tenemos que hacerlo a medias?”. Ese es el contraargumento del contraargumento.

SP: Lo entiendo. Pero, por supuesto, también está el escenario en el que nunca hacemos esto, Taproot simplemente no se activa. Depende de lo que la gente quiera. Hay algo que decir sobre el sesgo del statu quo en el que no se hace nada si es demasiado controvertido por la razón que sea. Hay otro caso secundario que es útil tener en cuenta. Puede haber una muy buena razón para cancelar Taproot. Puede haber un error que se revele después.

AvW: Te estás adelantando. Hay un montón de argumentos a favor de LOT=false. Un argumento es que ya hemos hecho LOT=false un montón de veces con las anteriores bifurcaciones suaves activadas por los mineros, y la mayoría de las veces salió bien. Sólo hubo una vez, con SegWit, en medio de una gran guerra; ahora no tenemos una gran guerra. No hay razón para cambiar lo que hemos estado haciendo con éxito hasta ahora. Ese es un argumento. El contraargumento sería, por ejemplo, "Sí, pero si eliges LOT=false ahora, eso podría atraer la controversia en sí mismo. Podría utilizarse para abrir una brecha. Ahora mismo no estamos en guerra, pero podría provocar una guerra".

SP: No veo cómo ese argumento no se aplica a LOT=true. Cualquier cosa puede causar controversia.

AvW: Probablemente sea justo. Estoy de acuerdo con eso. El otro argumento a favor de LOT=false es que los mineros, y especialmente los pools de minería, ya han indicado que apoyan a Taproot, lo activarán. No es necesario hacer lo de LOT=true por lo que se ve. El tercer argumento es el que acabas de mencionar. Es posible que alguien encuentre un error con Taproot, un error de software o algún otro problema es posible. Si haces LOT=false es bastante fácil dejar que expire y los usuarios no tendrán que actualizar su software de nuevo.

SP: Lo único que hay es que habría que recomendar a los mineros que no instalen esa actualización. Vale la pena señalar, creo que lo señalamos en el Episodio 3, que la gente no siempre revisa las cosas muy pronto. Mucha gente ha revisado el código de Taproot, pero otros pueden no molestarse en revisarlo hasta que el código de activación esté ahí porque simplemente esperan al último minuto. No es inverosímil que alguien muy inteligente comience a revisar esto muy tarde, tal vez algún intercambio que esté a punto de desplegarlo.

AvW: Algo así ocurrió con el predecesor de P2SH, OP_EVAL: estaba a punto de ser desplegado y entonces se encontró un bug bastante horrible.

SP: También lo hemos visto con ciertas altcoins, justo antes de su despliegue la gente encuentra días cero, ya sea porque fueron… o simplemente porque el código fue enviado con prisa y nadie lo revisó. Definitivamente siempre hay un riesgo, sea cual sea el mecanismo de soft fork que utilices, de que se descubra un bug en el último minuto. Si se tiene muy mala suerte, es demasiado tarde, se despliega y se necesita un hard fork para deshacerse de él, lo que sería muy, muy malo.

AvW: No creo que eso sea cierto. Hay otras formas de deshacerse de él.

SP: Dependiendo de cuál sea el fallo, se puede hacer una bifurcación suave.

AvW: Hay formas de arreglarlo incluso en ese caso. El otro contraargumento a ese punto sería: “Si no estamos seguros de que no tiene errores y no estamos seguros de que esta actualización es correcta, entonces no debería desplegarse de ninguna manera, LOT=true o LOT=false o lo que sea. Tenemos que estar seguros de eso de todos modos”.

SP: Sí, pero como dije, algunas personas no revisarán algo hasta que sea inevitable.

AvW: Sólo estoy enumerando los argumentos. El cuarto argumento contra LOT=true es que LOT=true podría alimentar la percepción de que Bitcoin, y especialmente los desarrolladores de Bitcoin Core, controlan el protocolo, tienen poder sobre el protocolo. Ellos publican código y, en el caso de Taproot, eso se convierte necesariamente en las nuevas reglas del protocolo.

SP: Podría haber una futura bifurcación suave en la que realmente nadie en la comunidad se preocupe por ello, sólo un puñado de desarrolladores de Bitcoin Core lo hacen, y lo imponen a la comunidad. Entonces se produciría un montón de caos. Lo bueno de tener al menos la señal de los mineros es que son parte de la comunidad y al menos están de acuerdo con ello. El problema es que no refleja lo que otras personas de la comunidad piensan al respecto. Sólo refleja lo que ellos piensan al respecto. Hay un montón de mecanismos. Hay discusiones en la lista de correo, se ve si la gente tiene problemas. También está la señalización de los mineros, que es una buena indicación de que la gente está contenta. Se ve que hay el mayor número posible de personas que consienten. Sería bueno que hubiera otros mecanismos, por supuesto.

AvW: El otro punto es que los desarrolladores de Bitcoin Core, aunque deciden qué código se incluye en Bitcoin Core, no deciden lo que los usuarios realmente terminan ejecutando.

SP: Puede que nadie lo descargue.

AvW: Exactamente. En realidad no tienen poder sobre la red. En ese sentido el argumento es falso, pero podría alimentar la percepción de que sí lo tienen. Si se puede evitar incluso esa percepción, mejor. Ese es un argumento.

SP: Y el precedente. ¿Qué pasa si Bitcoin Core se ve comprometido en algún momento y envía una actualización y dice “Si no lo paras, se va a activar”. Entonces es bueno que los mineros puedan decir “No, no lo creo”.

AvW: Los usuarios también podrían decir eso al no descargarlo como has dicho. Ahora llegamos al quinto argumento. Aquí es donde la cosa se pone bastante compleja. El quinto argumento contra LOT=true es que podría causar todo tipo de inestabilidad en la red. Si sucede que el año se acaba y hay clientes LOT=true en la red es posible que se separen de la cadena principal y podría haber re-orgs. La gente podría perder dinero y los mineros podrían minar un bloque inválido y perder su recompensa de bloque y todo ese tipo de cosas. Los defensores de LOT=true argumentan que ese riesgo se mitiga mejor si la gente adopta LOT=true.

SP: Soy escéptico, eso suena muy circular. ¿Quizás sea útil explicar cómo son esos malos escenarios? Entonces otros pueden decidir si a) creen que vale la pena arriesgarse a esos malos escenarios y b) cómo hacerlos menos probables. Algo de eso es casi político. En la sociedad se discute si la gente debe tener armas o no, cuáles son los incentivos, y puede que nunca se resuelva. Pero podemos hablar de algunos de los mecanismos aquí.

AvW: Para ser claros, si de alguna manera hubiera un consenso completo en la red sobre LOT=true, todos los nodos ejecutan LOT=true, o todos los nodos ejecutan LOT=false, entonces creo que estaría totalmente bien. De cualquier manera.

SP: Sí. La ironía es, por supuesto, que si hay un consenso total y todo el mundo ejecuta LOT=true, entonces nunca se utilizará. En teoría tienes razón. No veo un escenario en el que los mineros digan “Estamos contentos con LOT=true pero deliberadamente no vamos a señalar y luego señalamos en el último momento”.

\ No newline at end of file +https://www.youtube.com/watch?v=7ouVGgE75zg

BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki

Arguments for LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

Additional argument for LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html

Aaron van Wirdum article on LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation

Introduction

Aaron van Wirdum (AvW): Live from Utrecht, this is the van Wirdum Sjorsnado. Sjors, make the pun.

Sjors Provoost (SP): We have a “lot” to talk about.

AvW: We have a “lot” to talk about. In this episode we are going to discuss the Taproot activation process and the debate around the LOT parameter, lockinontimeout, which can be set to true or false.

SP: Perhaps as a reminder for the listener, we have talked about Taproot in general several times, especially in Episode 2. And we talked about Taproot activation, about soft fork activation in general, in Episode 3, so we may skip over a few things.

AvW: In Episode 3 we discussed all sorts of different proposals for activating Taproot, but that was at least half a year ago, wasn't it?

SP: That was September 25th, so about five months, yes.

AvW: It has been a while and the discussion has now reached its final phase, I would say. At this point the discussion is about the LOT parameter, true or false. First, to recap very briefly, Sjors, can you explain what we are doing here? What is a soft fork?

What is a soft fork?

SP: The idea of a soft fork is that you make the rules stricter. That means that from the point of view of a node that doesn't upgrade, nothing has changed. They only see transactions that are valid to them coming from the nodes that do upgrade. Because upgraded nodes have stricter rules, they do care about what happens. The nice thing about soft forks is that as a node user you can upgrade whenever you want. If you don't care about this feature, you can upgrade whenever you like.

AvW: A soft fork is a backwards compatible protocol upgrade, and the nice thing about it is that if a majority of miners enforce the rules, that automatically means all nodes on the network will follow the same blockchain.

SP: That's right. Older nodes don't know about these new rules, but they do know they will follow the chain with the most proof of work, as long as it is valid to them. If the majority of miners follow the new rules, then the majority of the proof of work will follow the new rules, and therefore an old node will follow that by definition.

AvW: The nice thing about soft forks is that if a majority of hash power enforces the new rules, the network stays in consensus. That is why the last several soft forks were activated through hash power coordination. That means miners could include a bit in the blocks they mined signaling that they were ready for the upgrade. Once a majority of miners, 95 percent in most cases, indicated they were ready, nodes would recognize this and enforce the upgrade.

SP: That's right. A node would check, say every two weeks, how many blocks signaled this thing, and if enough did, it would say “OK, the soft fork is now active. I am going to assume that miners are going to enforce this.”
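To make the mechanics concrete, here is a minimal sketch (in Python rather than Bitcoin Core's C++) of the per-period tally just described. The function names and the 95 percent example threshold are illustrative assumptions, not actual Bitcoin Core interfaces.

```python
# Illustrative sketch of per-period version-bits signal counting; not Bitcoin Core code.
RETARGET_PERIOD = 2016   # blocks per difficulty period, roughly two weeks
THRESHOLD = 1916         # 95% of 2016, the threshold used by earlier soft forks

def signals(block_version: int, bit: int) -> bool:
    """Return True if the given signaling bit is set in the block version."""
    return (block_version >> bit) & 1 == 1

def period_locks_in(block_versions: list[int], bit: int) -> bool:
    """Tally one full retarget period of block versions against the threshold."""
    assert len(block_versions) == RETARGET_PERIOD
    return sum(signals(v, bit) for v in block_versions) >= THRESHOLD
```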

Miners' ability to block a soft fork upgrade

AvW: Right. The problem with this upgrade mechanism is that it also means miners can block the upgrade.

SP: Yes, that is the downside.

AvW: Even if everyone agrees with the upgrade, for example in this case Taproot, it seems to have broad consensus, but despite that broad consensus miners could still block the upgrade, which is what happened with SegWit a couple of years ago.

SP: Back then there was a lot of debate about the block size, a lot of hard fork proposals and a lot of hurt feelings. In the end it was very hard to get SegWit activated because miners were not signaling for it, probably mostly intentionally. It can also happen that miners simply ignore an upgrade, not because they dislike it, but just because they are busy.

AvW: Yes. In the case of SegWit that was eventually resolved through the UASF, or at least that was part of it. We won't go into that in depth. It basically meant that a group of users said “On this day (some date in the future, it was August 1st 2017) we are going to activate the SegWit rules regardless of how much hash power supports it.”

SP: Right, and at the same time, perhaps as a consequence of that, a group of miners and other companies agreed they would start signaling for SegWit. There were a lot of other things happening at the same time. What happened is that the thing activated on August 1st, or a little earlier I think.

The lockinontimeout (LOT) parameter

AvW: Now we fast forward in time, four years have passed and the Taproot upgrade is ready to go. What happened a couple of years ago is fueling a new debate about the Taproot upgrade. That brings us to the lockinontimeout (LOT) parameter, which is a new parameter, although it is inspired by things from that SegWit upgrade period.

SP: It is basically the UASF option built in, which you can decide to use or not. There is now a formal way in the protocol to activate a soft fork at a deadline.

AvW: LOT has two options. The first option is false, LOT is false. That means miners can signal for the upgrade for one year, and during that year, if the 90 percent threshold for the upgrade is met, it will activate as we just explained. By the way, 1 year and 90 percent aren't set in stone, but it is what people seem to be settling on, so for convenience that is what I will use to discuss this. Miners have 1 year to activate the upgrade. If after that year they haven't done so, the Taproot upgrade expires. It just doesn't happen. That is LOT is false.

SP: And of course there is always the option of shipping a new release then and trying again. It is not a “no”, it is just that nothing happens.

AvW: Exactly. Then there is LOT=true where, again, miners have 1 year to signal support for (readiness for) the upgrade. If the 90 percent threshold is reached, the upgrade activates. The big difference, however, is what happens if miners do not reach this threshold, if they don't signal for the upgrade. In that case, when the year is about to end, nodes with LOT=true will start rejecting all blocks that do not signal for the upgrade. In other words, they will only accept blocks that signal for the upgrade, which of course means the 90 percent threshold will be met and Taproot, or any other soft fork using this mechanism, will activate.

SP: If enough blocks are produced.

AvW: If enough blocks are produced, yes, that's true. A small nuance for those who find it interesting: even LOT=true nodes will accept up to 10 percent of blocks that don't signal. That is to avoid weird chain split scenarios.

SP: Yes. If it activates the normal way, only 90 percent has to signal. If signaling is mandatory, it would be strange to suddenly require a different percentage.

AvW: They will accept the first 10 percent of blocks that don't signal, but after that every non-signaling block is rejected, so the 90 percent threshold will definitely be reached. The big reason for LOT=true, for setting it to true, is that this way miners cannot block the upgrade. Even if they try to block it, once the year is over nodes will enforce Taproot anyway, so it is guaranteed to happen.
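As a rough illustration of where the two settings diverge, here is a simplified sketch of the state logic just described. It is loosely modeled on BIP 8 but uses assumed names and omits details such as heights and grace periods; it is not the reference implementation.

```python
# Simplified sketch of the lockinontimeout (LOT) behaviour described above; not BIP 8 reference code.
THRESHOLD = 0.90  # 90% of blocks in a retarget period must signal

def next_state(state: str, signaling_share: float, last_period: bool, lot: bool) -> str:
    if state == "STARTED":
        if signaling_share >= THRESHOLD:
            return "LOCKED_IN"                       # miners activated it by signaling
        if last_period:
            # The only place LOT matters: give up, or force signaling.
            return "MUST_SIGNAL" if lot else "FAILED"
        return "STARTED"
    if state == "MUST_SIGNAL":
        # LOT=true nodes now reject non-signaling blocks (beyond the roughly 10%
        # tolerance mentioned above), so as long as blocks keep being produced
        # the threshold is met by construction and the deployment locks in.
        return "LOCKED_IN"
    return state
```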

SP: If enough blocks are produced. We can get into some of the risks of this, but I think you want to keep explaining a bit.

AvW: The reason some people like LOT=true is that this way miners don't have a veto. The counterargument, which you already suggested, is that miners don't have a veto anyway; even if we use LOT=false the upgrade will expire after a year, but after that year we can deploy a new upgrade mechanism and a new signaling period, this time perhaps using LOT=true.

SP: Or even while this one is going on. You could wait half a year with LOT=false and after half a year say “This is taking too long. Let's take a bit more risk and set LOT=true.” Or lower the threshold, or some other permutation that slightly increases the risk but also the likelihood of activation.

AvW: Yes, you're right. But that is actually also one of the arguments against using LOT=false. LOT=true proponents say that, as you suggested, we can do it after 6 months, but there is another group of users who might say “No. First we wait for the year to pass and then we redeploy.” Say that after 6 months Taproot hasn't activated. Now suddenly there is a new discussion between people who want to start deploying LOT=true clients right away and user groups that want to wait until the year is over. You reintroduce the discussion we are having now, except by then we only have 6 months to resolve it. It is a kind of ticking time bomb.

SP: Not to resolve it, really. If nothing is done for 6 months, only one option remains, which is to try again with a new activation bit.

AvW: But then you have to agree on when you are going to do that. Are you going to do it after 6 months, or later? People might disagree.

SP: Then we would be back to where we are now, except we would know a bit more, because by then we would know that miners weren't signaling.

AvW: And you don't have much time to resolve it, because after 6 months it could happen. Some group of users might run LOT=true or…

SP: What you are talking about here is the possibility of, let's say, anarchy, in the sense that there is no consensus on when to activate this thing. One group, I think we discussed this at length in the third episode, gets very aggressive and says “No, we are going to activate this sooner.” Then nobody knows when it is going to happen.

AvW: Let me put it another way. If right now we say “If after 6 months miners haven't activated Taproot, then we will just upgrade to LOT=true clients,” then LOT=true proponents will say “If that is the plan anyway, let's just do it now. It is much easier. Why do it in this halfway manner?” That is the counterargument to the counterargument.

SP: I understand that. But of course there is also the scenario where we never do this, where Taproot just doesn't activate. It depends on what people want. There is something to be said for a status quo bias where nothing is done if it is too controversial for whatever reason. There is another side case that is useful to keep in mind. There could be a very good reason to cancel Taproot. There could be a bug that is revealed later.

AvW: You are getting ahead of things. There are a bunch of arguments for LOT=false. One argument is that we have done LOT=false plenty of times already, with the previous miner-activated soft forks, and most of the time it went fine. There was only that one time with SegWit in the middle of a big war, and we don't have a big war now. There is no reason to change what we have been doing successfully so far. That is one argument. The counterargument would be, for example, “Yes, but if you choose LOT=false now, that could attract controversy in itself. It could be used to drive a wedge. We are not at war right now, but it could cause a war.”

SP: I don't see how that argument doesn't apply to LOT=true as well. Anything can cause controversy.

AvW: That is probably fair. I agree with that. The other argument for LOT=false is that miners, and especially mining pools, have already indicated they support Taproot and will activate it. By the looks of it, there is no need to do the LOT=true thing. The third argument is the one you just mentioned. It is possible that someone finds a bug in Taproot, a software bug or some other problem. If you do LOT=false it is quite easy to just let it expire, and users won't have to upgrade their software again.

SP: The only thing is that miners would have to be advised not to install that upgrade. It is worth noting, I think we pointed this out in Episode 3, that people don't always review things very early. A lot of people have reviewed the Taproot code, but others may not bother to review it until the activation code is there, because they simply wait until the last minute. It is not implausible that some very smart person starts reviewing this very late, maybe some exchange that is about to deploy it.

AvW: Something like that happened with P2SH's predecessor. OP_EVAL was about to be deployed and then a pretty horrible bug was found.

SP: We have also seen it with certain altcoins, where right before deployment people find zero days, either because they were… or simply because the code was shipped in a hurry and nobody reviewed it. There is definitely always a risk, whatever soft fork mechanism you use, that a bug is discovered at the last minute. If you are very unlucky, it is too late, it gets deployed and you need a hard fork to get rid of it, which would be very, very bad.

AvW: I don't think that is true. There are other ways to get rid of it.

SP: Depending on what the bug is, you could soft fork it away.

AvW: There are ways to fix it even in that case. The other counterargument to that point would be: “If we are not sure it is bug free and we are not sure this upgrade is correct, then it shouldn't be deployed at all, LOT=true or LOT=false or whatever. We need to be sure of that anyway.”

SP: Yes, but like I said, some people won't review something until it is inevitable.

AvW: I am just listing the arguments. The fourth argument against LOT=true is that LOT=true could feed into the perception that Bitcoin, and especially Bitcoin Core developers, control the protocol, have power over the protocol. They ship code and that necessarily becomes the new protocol rules, in this case for Taproot.

SP: There could be a future soft fork where really nobody in the community cares about it, only a handful of Bitcoin Core developers do, and they impose it on the community. Then a lot of chaos would ensue. The nice thing about at least having miner signaling is that miners are part of the community and at least they agree with it. The problem is that it doesn't reflect what other people in the community think about it. It only reflects what they think about it. There are a bunch of mechanisms. There are mailing list discussions, you see whether people have objections. There is also miner signaling, which is a good indication that people are happy. You try to see that as many people as possible consent. It would be nice if there were other mechanisms, of course.

AvW: The other point is that Bitcoin Core developers, while they decide what code goes into Bitcoin Core, don't decide what users actually end up running.

SP: Maybe nobody downloads it.

AvW: Exactly. They don't actually have power over the network. In that sense the argument is false, but it could feed the perception that they do. Even that perception, if it can be avoided, the better. That is one argument.

SP: And the precedent. What if Bitcoin Core is compromised at some point and ships an update saying “If you don't stop it, this is going to activate.” Then it is good if miners can say “No, I don't think so.”

AvW: Users could also say that by not downloading it, as you said. Now we get to the fifth argument. This is where it gets quite complex. The fifth argument against LOT=true is that it could cause all sorts of instability on the network. If it so happens that the year runs out and there are LOT=true clients on the network, it is possible they split off from the main chain and there could be re-orgs. People could lose money, miners could mine an invalid block and lose their block reward, and all that sort of stuff. LOT=true proponents argue that this risk is best mitigated if people adopt LOT=true.

SP: I am skeptical, that sounds very circular. Maybe it is useful to explain what those bad scenarios look like? Then others can decide whether a) they think those bad scenarios are worth the risk and b) how to make them less likely. Some of that is almost political. In society people argue about whether people should have guns or not, what the incentives are, and it may never be resolved. But we can talk about some of the mechanisms here.

AvW: To be clear, if somehow there were complete consensus on the network on LOT=true, all nodes run LOT=true, or all nodes run LOT=false, then I think it would be totally fine. Either way.

SP: Yes. The irony, of course, is that if there is full consensus and everybody runs LOT=true, then it will never be used. In theory you are right. I don't see a scenario where miners say “We are happy with LOT=true but we are deliberately not going to signal and then we signal at the very last moment.”

AvW: You are right, but we are digressing. The point is that the really complicated scenarios arise when some parts of the network have LOT=true, some parts of the network have LOT=false, or some parts of the network have neither because they haven't upgraded. Or a combination of these, say half the network has LOT=true and half the network has neither. That is where things get really complicated, and Sjors, you have thought about this. What do you think? Tell me what the risks are.

The chain split scenario

SP: I thought about these things during the SegWit2x debacle, as well as the UASF debacle, which were similar in some ways but also very different because of who was driving them and whether it was a hard fork or a soft fork. Let's say you are running the LOT=true version of Bitcoin Core. You downloaded it, maybe it was released by Bitcoin Core or maybe you compiled it yourself, but it says LOT=true. You want Taproot to activate. But the scenario here is that the rest of the world, the miners, are not doing this. The day arrives and you see a block; it isn't signaling correctly, but you want it to signal correctly, so you say “This block is now invalid. I am not going to accept this block. I am going to wait until another miner comes along with a block that does meet my criteria.” Maybe that happens once every 10 blocks, for example. You are seeing new blocks, but they are coming in very, very slowly. Now someone sends you a transaction. You want Bitcoin from somebody, they send you a transaction, and this transaction has a fee that is probably going to be wrong. Let's say you are receiving a transaction from someone who is running a LOT=false node. They are on a chain that is moving ten times faster than yours, in this intermediate state. Their blocks may be barely full, their fees are quite low, and you are receiving that transaction. But you are on this shorter, slower-moving chain, so your mempool is really full and your blocks are completely full, so that transaction will probably not confirm on your side. It is just going to sit in the mempool, which is one complication. That is actually a relatively good scenario, because you don't accept unconfirmed transactions. You will have a disagreement with your counterparty: you will say “It hasn't confirmed” and they will say “It has confirmed.” Then at least you will notice what is going on, you will read about the LOT war or whatever. So that is one scenario. The other scenario is when it somehow confirms on your side and also confirms on the other side. That is pretty good, because then you are safe either way. If it confirms on both sides, then whatever happens in a future reorganization, that transaction is really in the chain, maybe in a different block. Another scenario could arise because there are two chains, a short one and a long one, but they are different: if you are receiving coins that descend from a coinbase transaction on one side or the other, then there is no way the transaction can be valid on your side. That can also be a feature; it is essentially called replay protection. You receive a transaction and you don't even see it in your mempool, you call the other person and say “This makes no sense.” That is good. But now suddenly the world changes its mind and says “No, we do want Taproot, we do want LOT=true, we are now LOT=true diehards,” and all the miners start mining on top of your shorter chain. Your short chain becomes the very long chain. In that case you are pretty happy in most of the scenarios we have discussed.

AvW: Sounds good to me.

SP: You had a transaction that was maybe in your tenth block, and on the other side it was in the first block. It is still yours. There were some transactions floating around in the mempool for a long time, and they finally confirm. I think you are pretty happy. We were talking about the LOT=true node. As a LOT=true node user, in these scenarios you are happy. Maybe not if you are paying someone.

AvW: You are starting to make the case for LOT=true, Sjors. I know that is not your intention, but you are doing a good job of it.

SP: For the full node user who knows what they are doing, in general. If you are a full node user and you know what you are doing, then I think you are going to be fine in general. This is not so bad. Now let's say you are a LOT=false user and let's say you don't know what you are doing. In the same scenario you are on the longer chain, you are receiving coins from an exchange, and you have seen these headers out there for this shorter chain. You may have seen them, it depends on whether they reach you or not. But it is a shorter chain, and it is valid according to you because it is a stricter rule set. You are fine, this other chain has Taproot and yours probably doesn't. You are accepting transactions and you are a happy camper, but suddenly, because the world changes, everything disappears from under you. All the transactions you saw confirmed in a block are now back in the mempool and may even have been double spent.

AvW: Yes, the reason is that we are talking about a chain split that has happened. You have a LOT=false node, but as soon as the LOT=true chain becomes the longest chain, your LOT=false node will also accept that chain. It will consider it valid. The reverse is not true: the LOT=true node will not accept the LOT=false chain, but the LOT=false node will always consider the LOT=true chain valid. So in your scenario where you are using Bitcoin on the longer chain, on the LOT=false chain, we are happy, we received money, we did a hard day's work and got our paycheck at the end, paid in Bitcoin. We think we are safe, we got a bunch of confirmations, but suddenly the LOT=true chain gets longer, which means your node switches to the LOT=true chain. That money you received on the LOT=false chain, which you thought was the Bitcoin chain, just disappears. Poof. That is the problem you are talking about.
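The asymmetry comes from how nodes select their best chain: every node follows the most-work chain that is valid under its own rules. A toy illustration follows, with invented names and a simple work counter standing in for accumulated proof of work; it is not real chain selection code.

```python
# Toy illustration of the asymmetry described above; not real chain selection code.
from dataclasses import dataclass

@dataclass
class Branch:
    name: str
    work: int              # stand-in for accumulated proof of work
    signals_taproot: bool  # did it signal during the mandatory signaling phase?

def best_branch(branches, lot_true: bool):
    # A LOT=true node considers non-signaling branches invalid; other nodes accept both.
    valid = [b for b in branches if b.signals_taproot or not lot_true]
    return max(valid, key=lambda b: b.work, default=None)

false_chain = Branch("LOT=false branch", work=100, signals_taproot=False)
true_chain = Branch("LOT=true branch", work=60, signals_taproot=True)

# A LOT=false (or un-upgraded) node initially follows the heavier non-signaling branch...
assert best_branch([false_chain, true_chain], lot_true=False) is false_chain
# ...but re-orgs as soon as the signaling branch overtakes it.
true_chain.work = 120
assert best_branch([false_chain, true_chain], lot_true=False) is true_chain
# The LOT=true node only ever considered the signaling branch valid,
# so it can never be re-orged onto the other one.
assert best_branch([false_chain, true_chain], lot_true=True) is true_chain
```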

SP: Exactly.

AvW: I will add something to that very briefly. I think it is an even bigger problem for un-upgraded nodes.

SP: I was about to get to that. We were just talking about the LOT=false people. You could still say “Why did you download the LOT=false version?” Because you didn't know any better. Now we are talking about an un-upgraded node. For the un-upgraded node Taproot doesn't exist, so it has no preference for either chain, it will simply pick the longest one.

AvW: It is someone in Korea, they don't follow the discussion.

SP: Let's not be mean to Korea.

AvW: Pick a country where they don't speak English.

SP: North Korea.

AvW: Someone doesn't keep up with the Bitcoin discussion forums, maybe they don't read English, they don't really care. They just like this Bitcoin thing, they downloaded the software a couple of years ago, they put in a hard day's work, they get paid, and the money disappears.

SP: Or their node might be in a nuclear bunker where they put it 5 years ago, under 15 meters of concrete, air gapped, and somehow it can download blocks because it is watching the Blockstream satellite or something, but it cannot be upgraded. And they don't know about this upgrade. Which would be strange if you are into nuclear bunkers and full nodes. Anyway, someone is running an outdated node. In Bitcoin we have the policy that you don't have to upgrade, it is not mandatory. It should be safe, or at least relatively safe, to run an un-upgraded node. You are receiving a salary, same as the LOT=false person, and suddenly there is a gigantic re-org that comes out of nowhere. You have no idea why people are bothering to reorganize, because you know nothing about this rule change.

AvW: You don't see the difference.

SP: And poof, your paycheck is gone.

AvW: That is bad. I think that is the worst case scenario that nobody wants.

SP: Yes. This also carries over to people using hardware wallet software that hasn't been upgraded, people using remote nodes, or people using SPV nodes that don't check the rules but only check the work. They will have similar experiences where suddenly the longest chain changes, so their SPV wallet, which we explained in an earlier episode, sees its history disappear. At least for light nodes you could do some victim shaming and say “You should be running a full node. If bad things happen you should have been running a full node.” But I still think that is not good safety engineering, telling people “If you don't wear the seatbelt in exactly the right position the car may explode.” But at least the un-upgraded full node is an explicit case that Bitcoiners want to support. They want to support people not upgrading and not suddenly losing their coins in a situation like this. That is why I am not a LOT=true person.

Avoiding a chain split scenario

AvW: That is what I want to get to. Everyone agrees, or at least we both agree, and I think most people would agree, that this scenario we just painted is horrible, we don't want that. So the next question is how do we avoid this scenario? That is also one of the things where the LOT=true and LOT=false people differ in their opinions. LOT=false proponents, like you, argue against LOT=true because the chain split would be caused by LOT=true, and therefore if we don't want chain splits we don't want LOT=true, and what we just described won't happen. The worst case scenario is that we don't get Taproot, it just expires. That is not as bad as this poor Korean losing his honest day's work.

SP: Exactly, and we may still get Taproot later.

AvW: LOT=true proponents will argue that Bitcoin is an open network and any peer can run whatever software they want. For better or worse, LOT=true is a thing that exists. If we want to avoid a chain split, the best way to avoid it is to make sure everyone uses LOT=true, or at least that the majority of miners upgrade in time, and LOT=true is the best way to ensure that. Getting critical mass for LOT=true is actually the safest option, even though LOT=true is also what introduced the risk. If I want to make an analogy, it is as if the world would be a safer place without nuclear weapons, but there are nuclear weapons. It seems safer to have one in that case.

SP: I think that analogy breaks down quickly, but yes, I get the idea.

AvW: It is not a perfect analogy, I am sure. The point is that LOT=true exists and now we have to deal with it. It might be a better world, a safer place, if LOT=true didn't exist, if UASFs didn't exist. But it exists and now we have to face that fact. You can argue that making sure the soft fork succeeds is actually the best way to save that poor Korean.

SP: I am always very skeptical of this kind of game theory, because it sounds rhetorically nice but I am not sure it is actually true. One of the obvious problems is: how do you know you have reached the entire Bitcoin community? We talked about this hypothetical person in some other country who isn't reading Twitter and Reddit and has no idea this is going on, let alone most light wallet users. The number of people using Bitcoin is much, much larger than the number of people who are even remotely interested in these discussions. On top of that, explaining the risk to those people, even if we could reach them, explaining why they should upgrade, is quite a challenge. In this episode we are trying to explain roughly what would go wrong if you don't upgrade. We can't just tell them they have to upgrade. That violates the idea that you should persuade people with arguments and let them decide what they want to do, rather than telling them based on authority.

AvW: Keep in mind that in the end all of this is avoided if the majority of hash power upgrades. With LOT=true, actually any majority would be fine in the end. If the miners themselves use LOT=true then they are sure to have the longest chain by the end of the year.

SP: The game theory comes down to saying that you want to convince miners to do this. The problem, however, is that if it fails, we just explained the disaster that happens. Then the question is: what is that risk? Can you put a percentage on it? Can you somehow simulate the world and figure out what happens?

AvW: I can't make up my mind on it. I see compelling arguments on both sides. At first I leaned towards LOT=false, but the more I think about it… The argument is that if you include LOT=true in Bitcoin Core, then that practically guarantees everything will go fine, because the economic majority will almost certainly run it. The exchanges and most users.

SP: I am not even sure that is true. That assumes this economic majority rushes to upgrade and doesn't just ignore things.

AvW: At least within a year.

SP: There may be companies running 3 or 4 year old nodes because they support 16 different s***coins. Even that I wouldn't assume. We know from the network in general that a lot of people don't upgrade their nodes, and a year is very short. You can't tell from the nodes whether they are the economic majority or not. It may be a handful of critical players who take care of it.

AvW: Yes, I can't be sure. I am not sure. I am speculating, I am laying out the argument. But the opposite is also true: now that LOT=true exists, it is almost certain that some group of users will run it, and that perhaps introduces bigger risks than if it were included in Core. Including it would increase the chances of LOT=true succeeding, of the majority upgrading.

SP: It really depends on who that group is. Because if that group is random people who are not economically important, then they experience the problems and nobody else notices anything.

AvW: That is true. If it is a very small group that may be true, but the question is how small or how big that group has to be before it becomes a problem. They have an asymmetry, this advantage, because their chain can never be re-orged, while the LOT=false chain can be re-orged.

SP: But their chain may never grow, so that is also a risk. It is not a strict advantage.

AvW: I think it is definitely a strict advantage.

SP: The advantage is that it cannot be re-orged. The disadvantage is that your chain might never grow. I don't know which of the two…

AvW: It would probably grow. It depends on how big that group is, again. That is not something we can objectively measure. I guess that is what it all comes down to.

SP: We can't even do that retroactively. We still don't know what really caused SegWit to activate, even four years later. That gives you an idea of how difficult it is to know what these forces really are.

AvW: Yes, we agree on that. It is very hard to know which way it goes. I am undecided about it.

SP: The safest thing is to do nothing.

AvW: Not even that. There could still be a minority, or even a majority, that goes ahead anyway.

SP: Another interesting thought experiment is to say: “There is always going to be a LOT=true group for any soft fork. What about a soft fork that doesn't have community support? What if an arbitrary group of people decides to carry out its own soft fork just because they want to? Maybe someone wants to reduce the coin supply, set coin issuance to zero tomorrow, or reduce the block size to 300 kilobytes.” They could say “Because it is a soft fork, and because I run a LOT=true node, there might be others running a LOT=true node too. Therefore it must activate and everyone should run this node.” That would be absurd. This game theory has a limit. You can always think of some soft fork and some small community that will say this and it will fail completely. You have to estimate how big and how powerful this thing is. I don't even know what the metric is.

AvW: But also how harmful the upgrade is, because I would say that is the answer to your point. If the improvement itself is considered valuable, it costs people very little to switch to the other chain, the one that cannot be re-orged and that has the valuable improvement. That is a very good reason to switch. Whereas switching to a chain, even one that cannot be re-orged, that screws up the coin limit or that kind of thing, is a much bigger disincentive, and also a disincentive for miners to switch.

SP: Some people might say a smaller block size is safer.

AvW: They are free to fork off, that is also possible. We haven't even talked about that, but it is possible the chain split is lasting, that there is forever a LOT=true minority chain and a LOT=false majority chain. Then we have a split like Bitcoin versus Bitcoin Cash or something like that. We just have two coins.

SP: With the big, scary sword of Damocles hanging over it.

AvW: Then maybe a checkpoint would have to be included in the majority chain, which would be very ugly.

SP: You could come up with some sort of incompatible soft fork to prevent a reorganization in the future.

AvW: Let's work towards the end of this episode.

SP: I think we have covered a lot of different arguments and explained that this is quite complicated.

AvW: What do you think is going to happen? How do you think this will all play out?

SP: I started looking a bit at the nitty gritty, one of the pull requests Luke Dashjr opened to implement BIP 8 in general, not specifically for Taproot I think. There is already complexity with this committed LOT=true thing, because you have to think about how the peer-to-peer network should behave. From a principle of least action, whatever is the least work for developers, setting LOT to false probably results in easier code that will be merged sooner. And even if Luke says “I will only do this if it is set to true,” then someone else will open a pull request that sets it to false, and that gets merged sooner. I think from the point of view of what happens when people are lazy, and I mean lazy in the most respectful way, what is the path of least resistance? It is probably LOT=false, just from an engineering point of view.

AvW: So LOT=false in Bitcoin Core is what you would expect in that case.

SP: Yes. And someone else would implement LOT=true.

AvW: In some alternative client, surely.

SP: Yes. And that might not get code review.

AvW: It is just a parameter setting, right?

SP: No, it is more complicated, because how is it going to interact with its peers, and what is it going to do when there is a chain split, et cetera.

AvW: What do you think of the scenario where neither is implemented in Bitcoin Core? Do you see that happening?

SP: Neither one? LOT=null?

AvW: Simply no activation mechanism at all, because there is no consensus for one.

SP: No, I think it will be fine. I can't predict the future, but I think a LOT=false release won't be as contested as some might think.

AvW: Then we will see, I guess.

SP: Yes, we will see. This may be the dumbest thing I have ever said.

\ No newline at end of file diff --git a/es/bitcoin-explained/taproot-activation-speedy-trial/index.html b/es/bitcoin-explained/taproot-activation-speedy-trial/index.html index 58bb530e3a..e460070843 100644 --- a/es/bitcoin-explained/taproot-activation-speedy-trial/index.html +++ b/es/bitcoin-explained/taproot-activation-speedy-trial/index.html @@ -1,5 +1,5 @@ Taproot Activación con Speedy Trial | Transcripciones de ₿itcoin - \ No newline at end of file +https://www.youtube.com/watch?v=oCPrjaw3YVI

Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Introduction

Aaron van Wirdum (AvW): Live from Utrecht, this is the van Wirdum Sjorsnado. Sjors, what is your pun of the week?

Sjors Provoost (SP): I actually asked you for a pun and you said “Cut, re-edit. We are going to do it again.” I don't have a pun this week.

AvW: The puns are your thing.

SP: Last time we tried this LOT thing.

AvW: Sjors, we are going to talk a “lot” this week.

SP: They are going to block us for this.

AvW: We talked a lot two weeks ago. LOT was the parameter we discussed two weeks ago, LOT=true, LOT=false, about Taproot activation. Two weeks have passed and now it looks like the community is reaching some kind of consensus on an activation solution called “Speedy Trial.” That is what we are going to discuss today.

SP: That's right.

Propuesta de Speedy Trial

Propuesta de Speedy Trial: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Propuesta de calendario: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018594.html

AvW: ¿Empezamos con Speedy Trial, qué es Speedy Trial Sjors?

SP: Creo que es una buena idea. Con las propuestas de las que hablamos la última vez para activar Taproot, básicamente Bitcoin Core liberaría algún software, tal vez en abril o algo así, y luego los mineros empezarían a señalar usando ese software en, creo, agosto o algo así. Entonces pueden señalar durante un año y al final del año todo termina.

AvW: Eso era LOT=true o LOT=false. El debate era sobre si debía terminar con la señalización forzosa o no. Eso es lo de LOT=true, LOT=false.

SP: Lo que hay que tener en cuenta es que la primera señalización, pasaría un tiempo antes de que empiece a suceder. Hasta ese momento realmente no sabemos esencialmente. Lo que propone el Speedy Trial es decir “En lugar de discutir si va a haber o no señalización y tener muchas discusiones al respecto, vamos a probarlo muy rápidamente”. En lugar de eso, habría un lanzamiento tal vez alrededor de abril, por supuesto no hay nadie a cargo de los plazos reales. En ese caso la señalización comenzaría mucho antes, no estoy del todo seguro de cuándo, tal vez en mayo o bastante temprano. La señalización sólo duraría 3 meses. Al final de los 3 meses se abandonaría.

AvW: Terminaría en LOT=false básicamente.

SP: Sí. Es el equivalente a LOT=false o como solía ser con las horquillas blandas. Señala pero sólo durante un par de meses.

AvW: ¿Si no se activa en esos meses por la potencia de hash, que probablemente será del 90 por ciento de la potencia de hash? Se necesitará un 90% de potencia de hachís para activar Taproot. Si no lo hace, la propuesta caduca y cuando caduque podremos seguir discutiendo sobre cómo activar Taproot. ¿O si se activa entonces qué pasa?

SP: La cuestión es que como todavía se quiere dar a los mineros el tiempo suficiente para actualizar realmente su software, las reglas reales de Taproot no entrarán en vigor hasta septiembre o agosto.

AvW: Mineros y usuarios reales de Bitcoin.

SP: Sí. Se quiere dar a todo el mundo tiempo suficiente para actualizarse. La idea es que comencemos la señalización muy rápidamente. Además, los mineros pueden señalar sin instalar el software. Una vez que se haya alcanzado el umbral de señalización, la bifurcación suave será inamovible. Va a suceder, al menos si la gente ejecuta los nodos completos. Entonces todavía hay tiempo para que la gente se actualice y para que los mineros realmente se actualicen y ejecuten ese nuevo software en lugar de simplemente señalarlo. Podrían ejecutar ese software, pero podrían no hacerlo. Por eso está bien lanzarlo un poco antes.

AvW: ¿Deberían ejecutar realmente el software si están señalando?

AvW: Por ahora, para recapitular muy brevemente, Speedy Trial significa liberar el software bastante rápido y rápidamente después de ser liberado iniciar el período de señalización durante 3 meses, que es relativamente corto para un período de señalización. Ver si el 90 por ciento de los mineros están de acuerdo, si lo hacen Taproot se activa 6 meses después de la liberación inicial del software. Si el 90 por ciento de los mineros no se activa en 3 meses la propuesta expira y podemos continuar la discusión sobre cómo activar Taproot.

SP: We are then back to where we were a few weeks ago, but with more data.

The evolution of the Speedy Trial proposal

AvW: Exactly. I want to briefly touch on how we got here. We discussed the whole LOT=true and LOT=false thing and there seemed to be a deadlock. Some didn't want LOT=true, others didn't want LOT=false, and then a third proposal came into play. It wasn't new, but it hadn't been a major part of the discussion: a simple flag day. A simple flag day would have meant that the Bitcoin Core code included a date in the future, or a block height in the future, at which point the Taproot upgrade would activate regardless of hash power up to that point.

SP: Which I think is an even worse idea. When there is a lot of debate people start proposing things.

AvW: I think the reason we got into this deadlock, where people have very different ideas, has a lot to do with what happened during the SegWit upgrade. We have discussed this before, but people have very different ideas of what actually happened there. Some people strongly believe that users flexed their muscles. Users reclaimed their sovereignty, users reclaimed the protocol and basically forced miners to activate the SegWit upgrade. It was a great victory for Bitcoin users. On the other hand, other people strongly believe that Bitcoin came close to a complete disaster with a fractured network and people losing money, a big mess. The first group of people really like the idea of doing a UASF again, or starting with LOT=false and switching to LOT=true, or maybe just starting with LOT=true. The people who think it was a big mess prefer to use a flag day this time. Nice and safe in a way: use a flag day, none of this miner signaling, miners can't be forced to signal and all that. These different views on what actually happened a couple of years ago now mean that people can't really agree on a new activation proposal. After a lot of discussion all factions were willing to settle on Speedy Trial, even though nobody really likes it, for a couple of reasons we will get into. The UASF people are fine with Speedy Trial because it doesn't get in the way of a UASF. If Speedy Trial fails they will still do the UASF next year. The flag day people are fine with it because the 3 months probably isn't a big enough window to do a UASF. The UASF people have said that is too fast, so let's do Speedy Trial.

SP: There is also the LOT=false camp: let's do soft forks the way we have done them before, where they can simply expire. A group of people kept quietly working on the actual code that could do that. Just from the mailing lists and Twitter it is hard to gauge what is really going on. It is a very short timescale.

AvW: For the LOT=false people this is basically LOT=false, just on a shorter timescale. Everyone is willing to accept it, even though nobody really likes it.

SP: From the vantage point I am looking at, I am really watching the code that is being written, what I have noticed is that once Speedy Trial came out more people came out of the woodwork and started writing the code that could actually do this. Whereas before it was mostly Luke, I think, writing that pull request.

AvW: BIP 8?

SP: Yes, BIP 8. I guess we can get into the technical details. What I am trying to say is that one thing that suggests Speedy Trial is a good idea is that there are more developers from different angles cooperating on it and getting things done a bit faster. When there is some disagreement, people start procrastinating, not reviewing things or not writing them. That is a vague indicator that this seems fine. People are working on it quickly and making progress, so that is good.

AvW: Any technical details you want to get into?

Different approaches to implementing Speedy Trial

Block height versus a mix of block height and MTP: https://bitcoin.stackexchange.com/questions/103854/should-block-height-or-mtp-or-a-mixture-of-both-be-used-in-a-soft-fork-activatio/

PR 21377 implementing a mix of block height and MTP: https://github.com/bitcoin/bitcoin/pull/21377

PR 21392 implementing block height: https://github.com/bitcoin/bitcoin/pull/21392

SP: The Speedy Trial idea can be implemented in two different ways. You can use the BIP 9 system that we already have. The argument for that would be that it is much less code because it already works. It is only for 3 months, so why not use the old BIP 9 code?

AvW: BIP 9 uses dates in the future?

SP: Yes. You can tell when the signaling can start and when the signaling times out. There are some annoying edge cases where, if it finishes right at the deadline but then there is a reorg and it finishes just before the deadline, people's money could get lost if they try to get into the first Taproot block. This is difficult to explain to people.

AvW: The point is that signaling happens per difficulty period of 2016 blocks. At least until now, 95 percent of blocks needed to signal support. But these 2016-block periods don't align with exact dates or anything like that. They just happen. Whereas the signaling period does start and end on specific dates, which is why you can get weird edge cases.

SP: Let's do an example, it is fun to illustrate. Let's say the deadline of this soft fork is September 1st, pick a date, for the signaling. September 1st at midnight UTC. A miner mines block number 2016, or some multiple of 2016, which is when the vote ends. They mine this block one second before midnight UTC. They signal "yes". Everybody who sees that block says "OK, we have the 95 percent or whatever, and it was just before midnight, so Taproot is active." They have this automatic script that says "Now I am going to put all my savings into a Taproot address because I want to be in the first block and I am feeling reckless, I love being reckless." Then there is another miner who mines 2 seconds later because they didn't see that recent block. There can be stale blocks. Their block arrives one second after midnight. It also signals yes, but it is too late, so the soft fork does not activate because the signaling wasn't completed before midnight, the deadline. That is the subtlety you get with BIP 9. Usually it is not a problem, but it is hard to explain these edge cases to people.

AvW: Is it a bigger problem with shorter signaling periods too?

SP: Yes, of course. If there is a longer signaling period it is less likely that the signal lands right at the end of a period.

AvW: The threshold, I thought it was going to be 90 percent this time?

SP: That is a separate thing. First let's talk, regardless of the threshold, about these two mechanisms. One is time based, that is BIP 9, easy because we already have the code for it; the downside is all these weird things you have to explain to people. These days soft forks in Bitcoin are such a big deal, maybe CNN wants to write about it, so it is nice if you can actually explain it without sounding like a complete nerd. But the alternative is to say "Let's use this new BIP 8 which was proposed anyway and uses height." We ignore all the LOT=true stuff, but the height thing is really useful. Then it is much simpler. From this block height onwards is when the signaling ends. That height is always on the boundary of these retargeting periods. That is easier to reason about. You are saying "If the signaling threshold is reached at block 700,321 then it happens, or it doesn't happen." If there is a reorg, which by the way could still be a problem, there could be a reorg at the same height. But then the difference would be that it activated because we just barely made the 95 percent, then there is a reorg and that miner votes no, and then it doesn't activate. It is an edge case.

AvW: That is also true with BIP 9. If you remove one edge case you have one less edge case, which is better.

SP: Right, with BIP 9 you could have the same scenario, exactly one vote, if it is right on the edge, one miner vote. But the much bigger problem with BIP 9 is that whether the block time is 1 second after midnight or before matters, even if you are way above the threshold. You can have 99.999 percent but that last block arrives too late and therefore the whole period is disqualified. In an election you look at all the votes. You say "It has 97 percent support, it is going to pass" and then that last block arrives too late and it doesn't pass. That is hard to explain, but we don't have this problem with height based activation.
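To make the contrast concrete, here is a minimal Python sketch of the two kinds of deadline check being described. It is purely illustrative and simplified, not the actual BIP 9 / BIP 8 or Bitcoin Core logic; the function names and heights are made up, except that 1630454400 is midnight UTC on September 1st 2021 from the example above.

    # Illustrative sketch only, not Bitcoin Core code: a time-based (BIP 9 style)
    # deadline can disqualify a period by one second, while a height-based
    # (BIP 8 style) deadline always falls on a retargeting boundary.

    PERIOD = 2016  # blocks per retargeting period

    def period_counts_time_based(last_block_time: int, timeout_unix: int) -> bool:
        # Simplified BIP 9 style rule: the vote only counts if the period ends
        # before the timeout timestamp, even at 99.999% signaling.
        return last_block_time < timeout_unix

    def period_counts_height_based(first_block_height: int, timeout_height: int) -> bool:
        # Simplified BIP 8 style rule: the deadline is a block height aligned to
        # period boundaries, so block timestamps never matter.
        return first_block_height + PERIOD <= timeout_height

    # The example above: the last block of the period lands 1 second past midnight.
    print(period_counts_time_based(1630454401, 1630454400))   # False, period disqualified
    print(period_counts_height_based(679392, 681408))         # True, timestamps irrelevant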

AvW: I guess the biggest downside of using BIP 8 is that it is a bigger change code-wise.

SP: Yes, but yesterday I reviewed that code and wrote some tests for it. Andrew Chow and Luke Dashjr have already implemented a large part of it. It has already been reviewed by people. It is actually not that bad. It looks like 50 lines of code. However, if there is a bug in it, that is very, very bad. Just because it is only a few lines of code, it might still be safer to use something that already exists. But I am not terribly worried about it.

The hash power threshold

AvW: Then there is the hash power threshold. Is it 90 or 95?

SP: What is being implemented in Bitcoin Core right now is the general mechanism. It says "For any soft fork that calls itself Speedy Trial you could, for example, use 90 percent." But for Taproot, the code for Taproot in Bitcoin Core just says "never activates". That is the way to indicate that this soft fork is in the code but it isn't going to happen yet. These numbers are arbitrary. The code will support 70 percent or 95 percent, as long as it isn't an imaginary number or more than 100 percent.

AvW: It should be noted that in the end it is effectively always 51 percent, because 51 percent of miners can always decide to orphan non-signaling blocks.

SP: And create a mess. But they could do that.

AvW: Keep in mind that miners can always do that if they decide to.

SP: But the general principle being built now is that we could at least do a slightly lower threshold. Whether that is safe or not may still be up for debate.

AvW: That is not decided yet? As far as you know, 90 or 95?

SP: I don't think so. We could get into some arguments for it, but we will see those in the risks section.

AvW: Or we can mention very briefly that the benefit of having the higher threshold is a lower risk of orphaned blocks after activation. That is the main reason.

SP: But because we are doing a delayed activation, a lot of time passes between signaling and activation, whereas normally you signal and it activates immediately, or at least within two weeks. Now it can take much, much longer. That means miners have more time to upgrade. There is a bit less orphaning risk even if you have a lower signaling threshold.

Delayed activation

AvW: Right. I think that was the third point you wanted to get to anyway. Delayed activation.

SP: What normally happens is that the votes are counted in the last difficulty period. If it passes the threshold, the state of the soft fork goes from STARTED, meaning we know about it and we are counting, to LOCKED_IN. The LOCKED_IN state normally lasts 2 weeks, or one retargeting period, and then the rules actually take effect. What happens with Speedy Trial, the delayed activation part, is that this LOCKED_IN state lasts much longer. It can last months. It is LOCKED_IN for months and then the rules actually take effect. That change is only two lines of code, which is quite nice.
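As a rough sketch of what is being described, assuming a simplified state machine rather than the real Bitcoin Core implementation: the thresholds, example heights and period boundaries below are illustrative, except for 709632, the minimum activation height that is mentioned later for Taproot.

    # Simplified sketch of versionbits with Speedy Trial's delayed activation.
    # Not the actual Bitcoin Core state machine; numbers are illustrative,
    # except 709632, the minimum activation height used for Taproot.

    PERIOD = 2016
    THRESHOLD = 1815                  # ~90 percent of a 2016-block period
    MIN_ACTIVATION_HEIGHT = 709632    # rules only take effect from this height

    def next_state(state, signaling_blocks, height, timeout_height):
        # Evaluated once per retargeting period boundary.
        if state == "STARTED":
            if signaling_blocks >= THRESHOLD:
                return "LOCKED_IN"
            return "FAILED" if height >= timeout_height else "STARTED"
        if state == "LOCKED_IN":
            # Normally LOCKED_IN lasts one period; under Speedy Trial it persists
            # for months, until the hard coded minimum activation height.
            return "ACTIVE" if height >= MIN_ACTIVATION_HEIGHT else "LOCKED_IN"
        return state

    state = "STARTED"
    for boundary in (687456, 689472, 709632):   # example period boundaries
        state = next_state(state, 1850, boundary, timeout_height=695520)
        print(boundary, state)                  # LOCKED_IN, LOCKED_IN, ACTIVE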

Downsides and risks of this proposal

AvW: OK, shall we move on to some of the downsides of this proposal?

SP: Some of the risks. The first one we briefly mentioned. Because this thing is deployed fairly quickly and because it is very clear that rule activation is delayed, there is an incentive for miners to just signal rather than actually install the code. They could then procrastinate on actually installing the software. That is fine, unless they postpone it so long that they forget to actually enforce the rules.

AvW: Which seems pretty bad to me, Sjors.

SP: Yes. That is bad, I agree. It is always possible for miners to just signal and not actually enforce the rules. This risk exists with any soft fork deployment.

AvW: Yes, miners can always signal, false signal. That has happened in the past. We have seen false signaling. With the BIP 66 soft fork we found out later that miners were false signaling, because we saw big reorgs on the network. That is something we definitely want to avoid.

SP: I think we briefly explained this before, but we can explain it again. With Bitcoin Core, if you use it to create your blocks as a miner, there are some safety mechanisms to make sure you don't create a block that is invalid. However, if another miner creates a block that is invalid, you will mine on top of it. Then you have a problem, because the full nodes that are enforcing Taproot will reject your block. Presumably the majority of the ecosystem, if this signaling works out, will upgrade. Then you get into this very scary situation where you really hope that a massive part of the economy isn't too lazy to upgrade, otherwise you get a complete disaster.

AvW: Yes, right.

SP: I think the term we used when we talked about this is the idea of a troll. You could have a troll user. Let's say I am an evil user and I am going to create a transaction that looks like a Taproot transaction but is actually not valid according to the Taproot rules. The way it works, the mechanism in Bitcoin for doing soft forks, is that you have this version number on your SegWit transaction. You say "This is a SegWit version 1 transaction." Nodes know that when you see a SegWit version higher than what you know about…

AvW: The Taproot version?

SP: The SegWit version. The current SegWit version is version 0, because we are nerds. If you see a SegWit transaction with version 1 or higher you assume anyone can spend that money. That means that if someone is spending from that address you don't care. You don't consider the block invalid as an old node. But a node that knows about that version will check the rules. What you could do as a troll is create a broken Schnorr signature, for example. You take a Schnorr signature and you change one byte. If that is seen by an old node it says "This is SegWit version 1. I don't know what that is. It is fine. Anyone can spend this so I am not going to check the signature." But Taproot nodes will say "Hey, wait a minute. That is an invalid signature, therefore this is an invalid block." And we have a problem. There is a protection mechanism so that normal miners don't mine SegWit transactions they don't know about. They won't mine SegWit version 1 if they haven't upgraded.

AvW: Isn't it also the case that normal nodes simply won't relay the transaction to other nodes?

SP: That's right, that is another safety mechanism.

AvW: There are two safety mechanisms.

SP: They basically say "Hey, fellow node, I don't think you want to give away your money." Or "You are trying to do something super fancy that I don't understand", something called standardness. If you are doing something that is non-standard I am not going to relay it. That is not a consensus rule. That is important. It means you can compile your node to relay these things and you can compile your miner to mine these things, but it is a footgun if you don't know what you are doing. It is not against consensus though. However, when a transaction is in a block, then you are dealing with consensus rules. That means again that old nodes will look at it and say "I don't care. I am not going to check the signature because it is a higher version than I know about." But upgraded nodes will say "Hey, wait a minute. This block contains a transaction that is not valid. This block is invalid." And so a troll user doesn't really get a chance to do much damage.
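A small sketch of the two layers just described, assuming toy functions rather than real Bitcoin Core code: relay and mining policy refuses spends of witness versions the node doesn't know about, while consensus on a non-upgraded node simply treats them as anyone-can-spend and never checks the signature.

    # Illustrative only: standardness (relay/mining policy) versus consensus
    # for an unknown SegWit witness version, from a non-upgraded node's view.

    KNOWN_WITNESS_VERSIONS = {0}   # a pre-Taproot node only knows version 0

    def is_standard(witness_version: int) -> bool:
        # Policy: don't relay or mine spends of unknown versions. A footgun
        # protection, not a consensus rule.
        return witness_version in KNOWN_WITNESS_VERSIONS

    def consensus_valid_old_node(witness_version: int, signature_ok: bool) -> bool:
        # Consensus on the old node: unknown versions are "anyone can spend",
        # so the (possibly broken) signature is never even checked.
        if witness_version not in KNOWN_WITNESS_VERSIONS:
            return True
        return signature_ok

    # The troll transaction: witness version 1 with a corrupted Schnorr signature.
    print(is_standard(1))                        # False: old nodes won't relay or mine it
    print(consensus_valid_old_node(1, False))    # True for old nodes; upgraded nodes reject it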

AvW: Because the transaction won't make it across the peer-to-peer network, and even if it does it would only reach miners who will still reject it. A troll user probably can't do much damage.

SP: In our troll example of a user who swaps one byte in a Schnorr signature and tries to send this transaction, they send it to a node that is upgraded. That node will say "That is not valid, go away. I am going to ban you now." Maybe it won't ban you, but it will definitely get angry. But if you send it to a node that is not upgraded, that node will say "I don't know anything about this new SegWit version of yours. Go away. Don't send me this modern stuff. I am old school. Send me old stuff." So the transaction goes nowhere, but maybe it somehow ends up with a miner. Then the miner says "I am not going to mine this thing I don't know about. It is dangerous because I could lose all my money." However, you could have a troll miner. That would be very, very expensive trolling, but we have multimillionaires in this ecosystem. If they mine a block that is invalid it is going to cost them a few hundred thousand euros, I think at current prices, maybe even more.

AvW: Yes, 300,000 something.

SP: If you have 300,000 euros to burn, you could make such a block and challenge the ecosystem, saying "Hey, here is a block. Let's see if you verify it." Then, if that block goes to nodes that are upgraded, they will reject it. If that block goes to nodes that are not upgraded, it is fine, it is accepted. But if someone mines on top of it, if that miner hasn't upgraded they won't check it, they will build on top of it. In the end the ecosystem will probably reject that whole chain and it becomes a mess. So you really, really, really want a large majority of miners to actually check the blocks, not just blindly mine. In general there are already problems with miners blindly mining on top of other miners, even for a few seconds, for economic reasons.

AvW: That was a long tangent on the problems of false signaling. Would all of this only happen if miners false signal?

SP: To be clear, false signaling isn't a malicious act, it is just a lazy, convenient thing. You say "Don't worry, I will do my homework. I will send you that memo on time, don't worry."

AvW: I haven't upgraded yet, but I will. That is the risk of false signaling.

SP: It could also be deliberate, but that would have to be a pretty big conspiracy.

AvW: Another concern, a risk that has been mentioned, is that using LOT=false in general could help users launch a UASF, because they could run a UASF client with LOT=true and incentivize miners to signal, as we just mentioned. That wouldn't just mean that they fork themselves onto their own soft fork, it would basically activate the soft fork for the whole economy. That isn't a problem in itself, but some people consider it a problem if users are incentivized to attempt a UASF. Do you understand that concern?

SP: If we go with this BIP 8 approach, if we switch to using block heights instead of timestamps…

AvW: Or the flag day.

SP: Speedy Trial doesn't use a flag day.

AvW: I know. What I am saying is that if you do a flag day you can't do a UASF that triggers something else.

SP: Maybe you can, why not?

AvW: What would it trigger?

SP: There is a flag day out there, but you deploy software that requires signaling.

AvW: That is what the UASF people would run.

SP: They can run that anyway. Even if there is a flag day, they can decide to run software that requires signaling, although probably nobody would. But they could.

AvW: Absolutely, but they can't "co-opt", to call it that, the LOT=false nodes if there is only a flag day out there.

SP: That is true. They would require signaling, but the flag day nodes that are out there would be like "I don't know why you are not accepting these blocks. There is no signal, there is nothing to activate. There is only my flag day and I am going to wait for my flag day."

AvW: I don't want to get too far into the weeds, but if there are no LOT=false nodes to "co-opt", then miners could just false signal. The UASF nodes are activating Taproot but the rest of the network still doesn't have Taproot activated. If the UASF nodes send coins to a Taproot address they are going to lose their coins, at least on the rest of the network.

SP: And they wouldn't get the reorg advantage they think they have. This sounds even more complicated than what we talked about two weeks ago.

AvW: Yes, that is why I mentioned I am getting a bit into the weeds now. But do you understand the concern?

SP: Is that an argument for or against a flag day?

AvW: That depends on your perspective, Sjors.

SP: The perspective of someone who doesn't want Bitcoin to implode in a giant fire and who would like to see Taproot activated.

AvW: If you don't like UASFs, if you don't want people to do UASFs, then you also don't want there to be LOT=false nodes.

SP: Yes, OK, you are saying "if you really don't want UASFs to exist". I am not terribly worried about these things existing. As I talked about 2 weeks ago, I am probably not going to contribute to them.

AvW: I just wanted to mention that as an argument against LOT=false that I have seen out there. It is not an argument I agree with either, but I have seen the argument.

SP: What you are saying exactly is that it is an argument for not using signaling, but using a flag day instead.

AvW: Yes. Even Speedy Trial uses signaling. Although it is shorter, it may be long enough to launch a UASF against it, for example.

SP: And it is compatible with that. Because it uses signaling, it is perfectly compatible with somebody deploying a LOT=true variant and making a lot of noise about it. But I guess in this case even the most fervent LOT=true proponents, one of them at least, argued that it would be completely reckless to do that.

AvW: Right now there is no UASF proponent who thinks that is a good idea. As far as I know, at least.

SP: So far there isn't. But we talked, in September I think, about this cowboy theory. I am sure there is somebody out there who will attempt a UASF even against Speedy Trial.

Speedy Trial as a template for future soft fork activations?

AvW: You can't exclude the possibility, at least. There is another argument against Speedy Trial, and I actually find this argument fairly compelling, which is that we came out of 2017 with a lot of uncertainty. I mentioned that uncertainty at the beginning of this episode, part of it at least. Some thought the UASF was a great success, others thought it was recklessness. Both are partly true, there is truth in both. Now we have a soft fork that everyone seems to like, Taproot: users, developers, miners, everyone. The only thing we have to do is upgrade. This could be a very good opportunity to clean up the mess from 2017 in a way. Agree on what soft forks actually are, what the best way to deploy a soft fork is, and then use that. That way it becomes a template we can use in more contentious times in the future, when maybe there is another civil war going on or there is more FUD being thrown at Bitcoin. We seem to be in calm waters right now. Maybe this is a good moment to get it right, which will help us going forward. Whereas with Speedy Trial, nobody thinks this is really the right way to do it. It is just fine, we need something, so let's do it. Arguably it is kicking the can on the really big discussion that we need to have down the road.

SP: Yes, maybe. One scenario I could see is that Speedy Trial passes, successfully activates, the Taproot deployment happens and everything is fine. Then I think that would remove that trauma. The next soft fork would then be done in the nice traditional LOT=false BIP 8 way. We release something, several months later miners start signaling and it activates. So maybe it is a way to get over the trauma.

AvW: You think it is a way to get over the PTSD? Everyone gets to see that miners can actually activate it.

SP: It might be good to get rid of that tension, because the downside of releasing the regular mechanism, say BIP 8 with LOT=false, is that it is going to be 6 months of hoping that miners signal and then hopefully just 2 weeks and it is done. Those 6 months where everyone is anticipating it, people are going to go even crazier than they are now, maybe. I guess it is a nice way of saying "Let's get this trauma over with", but I do think there are downsides. For one, what if we find a bug in Taproot in the next 6 months? We have 6 months to think about something that is already activated.

AvW: We can do a soft fork.

SP: If it is a bug that can be fixed in a soft fork, yes.

AvW: I think with any Taproot output, you could just burn that type.

SP: I guess you could add a soft fork that says "You cannot mine version 1 addresses."

AvW: Yes, exactly. I think that should be possible, right?

SP: Yes. I guess it is possible to undo Taproot, but it is still scary because old nodes will think it is active.

AvW: This is a minor concern for me.

SP: It is and it isn't. The old nodes, the nodes that are released now basically and that know about this Speedy Trial, will think Taproot is active. They could create receive addresses and send coins to them. But their transactions won't confirm, or they will confirm and then get unconfirmed. They won't get robbed, because the soft fork will say "You cannot spend this money." It isn't "anyone can spend", it is "you cannot spend this". It is protected in that sense. I guess there are ways to get out of the mess with a soft fork, but they are not as nice as saying "Abort, abort, abort. Don't signal." If we use the normal BIP 8 mechanism, up until the point miners start signaling you can just say "Don't signal."

AvW: Sure. Any final thoughts? What are your expectations? What is going to happen?

SP: I don't know. I am happy to see progress on the code. At least we have the actual code and then we will decide what to do with it. Thank you for listening to the van Wirdum Sjorsnado.

AvW: There you have it.

\ No newline at end of file diff --git a/es/bitcoin-explained/taproot-activation-update/index.html b/es/bitcoin-explained/taproot-activation-update/index.html index 24c091f129..53531d7954 100644 --- a/es/bitcoin-explained/taproot-activation-update/index.html +++ b/es/bitcoin-explained/taproot-activation-update/index.html @@ -27,4 +27,4 @@ Sjors Provoost, Aaron van Wirdum

Date: April 23, 2021

Transcript By: Michael Folkson

Translation By: Blue Moon

Tags: Taproot

Category: Podcast

Media: -https://www.youtube.com/watch?v=SHmEXPvN6t4

Topic: Taproot activation update: Speedy Trial and the LOT=true client

Location: Bitcoin Magazine (online)

Date: April 23, 2021

Previous episode on lockinontimeout (LOT): https://btctranscripts.com/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout/

Previous episode on Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/

Aaron van Wirdum on "There are now two Taproot activation clients, here's why": https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why

Transcript by: Michael Folkson

Introduction

Aaron van Wirdum (AvW): Live from Utrecht this is the van Wirdum Sjorsnado.

Sjors Provoost (SP): Hello

AvW: Sjors, today we have a lot more to discuss.

SP: We have already made this pun.

AvW: I think we have already made it twice. It doesn't matter. Today we are going to discuss the final details of the Speedy Trial implementation. We already discussed Speedy Trial in a previous episode. This time we are also going to contrast it with the LOT=true client, which is an alternative that has been released by a couple of community members. We are going to discuss how they compare.

SP: That sounds like a good idea. We also talked about Taproot activation options in general in an earlier episode.

AvW: One of the very first ones?

SP: We also talked about this idea of the cowboy mentality, where someone would release a LOT=true client no matter what you do.

AvW: That is where we are.

SP: We also correctly predicted that there would be a lot of bikeshedding.

AvW: Yes, that is also something we are going to get into. First of all, as a very brief recap, we are talking about Taproot activation. Taproot is a proposed protocol upgrade for compact and potentially privacy preserving smart contracts on the Bitcoin protocol. Is that a good summary?

SP: Yes, I think so.

AvW: The discussion on how to upgrade has been going on for a while now. The challenge is that on an open, decentralized network like Bitcoin, without a central dictator telling everyone what to run and when, you are not going to get everyone to upgrade at the same time. But we do want to keep the network in consensus one way or another.

SP: Yes. The other thing that can work when dealing with a distributed system is some sort of convention, ways you are used to doing things. But unfortunately the convention we had ran into problems with the SegWit deployment. Then the question is "Should we try something else or was that a freak accident and should we try the same thing again?"

AvW: I think the last bit of preamble before we start talking about Speedy Trial, I would like to point out that the general idea with a soft fork, a backwards compatible upgrade which Taproot is, is that if a majority of hash power is enforcing the new rules, that means the network will stay in consensus.

SP: Yes. We can repeat that if you keep making transactions that are pre-Taproot, those transactions are still valid. In that sense, as a user you can ignore soft forks. Unfortunately, if there is a problem you cannot ignore it as a user, even if your own transactions don't use Taproot.

AvW: I think everyone agrees that it is very nice if a majority of hash power enforces the rules. There are coordination mechanisms to measure how many miners are on board with an upgrade. That is how you can coordinate a fairly safe soft fork. That is something everyone agrees on. Where people start to disagree is what happens if miners don't actually cooperate with this coordination. We are not going to rehash all of that. There are previous episodes about that. What we are going to explain is that in the end the Bitcoin Core development community settled on a solution called "Speedy Trial". We already mentioned it in a previous episode as well. Now it has been finalized and we are going to explain what the finalized parameters for it are.

SP: There was one small change.

AvW: Let's hear it Sjors. What are the final Speedy Trial parameters? How is Bitcoin Core going to upgrade to Taproot?

Bitcoin Core's finalized activation parameters

Bitcoin Core 0.21.1 release notes: https://github.com/bitcoin/bitcoin/blob/0.21/doc/release-notes.md

Speedy Trial activation parameters merged into Core: https://github.com/bitcoin/bitcoin/pull/21686

SP: Starting from, I believe it is this Sunday (April 25th, midnight), the first time the difficulty readjusts after that, which happens every two weeks, probably about a week after that Sunday…

AvW: It is Saturday.

SP: …the signaling starts. In about two weeks the signaling starts, not sooner than one week from now.

AvW: To be clear, that is the earliest it can start.

SP: The earliest it can start is April 24th, but because it only starts on a new difficulty adjustment period, a new retargeting period, it probably won't start for another two weeks.

AvW: It will start on the first new difficulty period after April 24th, which is estimated to happen in early May. May 4th, may the fourth be with you Sjors.

SP: That would be a cool date. That is when the signaling starts, and the signaling happens in voting rounds, so to speak. One voting round is two weeks, or one difficulty adjustment period, one retargeting period. If 90 percent of the blocks in that voting period signal on bit number 2, if that happens, Taproot is locked in. Locked in means it is going to happen, picture the little gif with Ron Paul, "It's happening." But the actual Taproot rules won't take effect immediately, they will take effect at block number 709632.

AvW: Which is estimated to be mined when?

SP: That will be November 12th this year.

AvW: That will vary a bit of course, depending on how quickly blocks are mined over the coming months. But it will be in November, almost certainly.

SP: Which would be 4 years after the implosion of the SegWit2x effort.

AvW: Yes, good timing in that sense.

SP: All the dates are nice. So that is what Speedy Trial does. Every two weeks there is a vote; if 90 percent of the votes are reached, that is the activation date. It doesn't happen immediately, and because it is a "Speedy Trial" it can also fail quickly, and that is in August, around August 11th. If the difficulty period after that, or before it, I always forget, doesn't make it, I think it is after…

AvW: The difficulty period must have ended by August 11th, right?

SP: When August 11th passes it could still activate, but then the next difficulty period, it can't. I think the rule is that at the end of a difficulty period you start counting, and if the result is a failure, then if it is past August 11th you give up, but if it isn't August 11th yet you go into the next round.

AvW: If the first block of a new difficulty period is mined on August 10th, does that difficulty period still count?

SP: That's right. I think that is one of the subtle changes that was made to BIP 9 to make it easier to reason about. I think it used to be the other way around, where the date was checked first, and if it was past the date you gave up, but if it was before the date you kept counting. Now I believe it is the other way around, it is a bit simpler.

AvW: I see. So there is going to be a signaling window of about 3 months.

SP: That's right.

AvW: If in any difficulty period within that 3 month signaling window 90 percent of hash power signals readiness, Taproot will activate in November of this year.

SP: Yes.

AvW: I think that covers Speedy Trial.

SP: The threshold is 90 percent, as we said. Normally with BIP 9 it is 95 percent, but that has been lowered to 90 percent.
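Pulling the numbers from this episode together, the finalized Speedy Trial deployment described here looks roughly like this (a paraphrase with approximate dates, not a copy of the actual chainparams):

    # Speedy Trial parameters as described in this episode (paraphrased).
    speedy_trial = {
        "signal_bit": 2,                      # version bit miners signal on
        "threshold": "90% of a 2016-block retargeting period",
        "earliest_start": "2021-04-24 (first retargeting period after it, ~May 4)",
        "timeout": "~2021-08-11 (gives up if no period reaches the threshold)",
        "min_activation_height": 709632,      # rules take effect here, ~2021-11-12
    }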

AvW: What happens if the threshold isn't met?

SP: Nothing. Which means anything could happen. People could deploy new versions of software, try another bit, etc.

AvW: I just wanted to clarify that.

SP: It doesn't mean Taproot is cancelled.

AvW: If the threshold isn't met, this specific software client won't do anything, but Bitcoin Core developers and the rest of the Bitcoin community can still come up with new ways to activate Taproot.

SP: Exactly. It is a low cost experiment. If it succeeds we will have Taproot. If it doesn't, then we have more information on why not…

AvW: I also want to clarify. We don't know yet what that would look like. That would have to be figured out then. We could start figuring it out now, but it hasn't been decided what the next deployment would look like.

Alternative to Bitcoin Core (Taproot client based on Bitcoin Core 0.21.0)

Update on Taproot activation releases: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018790.html

AvW: There was also another client released. There is a lot of debate on the name. We are going to call it the LOT=true client.

SP: That sounds fine to me.

AvW: It stems from this technical and philosophical difference about how soft forks should be activated in the first place. This client uses, you guessed it, LOT=true. LOT=true means that if the signaling window runs out, at the end of it nodes will start to reject any blocks that don't signal. They only accept blocks that signal. That is the main difference. Let's get into the details of the LOT=true client.

SP: It is basically the same, in principle it is the same. There is a theoretical possibility that it isn't, if miners do something really crazy. It starts at a certain block height…

AvW: We just mentioned that Bitcoin Core 0.21.1 starts its signaling window on the first difficulty period after April 24th. This LOT=true client will in practice also start its signaling window on the first difficulty period after April 24th, except that April 24th isn't specified as such. They just picked the specific block height that is expected to be the first one after April 24th.

SP: Exactly, they picked block 681,408.

AvW: It is specified as a block height rather than indirectly through the use of a date.

SP: But most likely it will be exactly the same moment. Both Speedy Trial (Core) and the LOT=true client will start the signaling, the voting periods, at the same time. The voting periods themselves vote on the same bit. They both vote on bit 2. They both have a 90 percent threshold. Also, if the vote succeeds, it also has delayed activation. The delayed activation is a block height in both scenarios, both in Speedy Trial (Core) and in the LOT=true variant.

AvW: Both are November 12th, a November activation anyway. If miners signal readiness for Taproot within the Speedy Trial period, both clients will activate Taproot in November on that same date, at exactly the same block.

SP: So in that sense they are identical. But they are also different.

AvW: Let's get into the first big difference. We already mentioned one difference, which is the very subtle difference of starting at a height, which we just discussed. Let's get into a bigger difference.

SP: There is also a height for the timeout in this LOT=true client, and that is also a block. But it isn't just that it is a block, which would be a small difference, especially over a long period of time. At the start, whether you use block height or time, the date can be guessed pretty accurately, but a year ahead you can't. And this is almost two years ahead, this block height (762048, roughly November 10th 2022) that they have in there. It goes way beyond.

AvW: Two years from now you mean, well one and a half.

SP: Exactly. In that sense it doesn't really matter that they use height, because the difference is so big anyway. But this is important. They will keep the signaling going for much longer than Speedy Trial. We can get into the implications later, but basically they will keep signaling much later.

AvW: Let's stick to the facts first and the implications later. Speedy Trial (Bitcoin Core) will last 3 months. And this one, the LOT=true client, will allow signaling for 18 months.

SP: The other big difference is that at the end of those 18 months, where Speedy Trial will simply give up and move on, the LOT=true client will wait for miners that do signal. That could be nobody or it could be everybody.

AvW: They will only accept signaling blocks after those 18 months. For those who were around for the whole block size war, it is a bit like the BIP 148 client.

SP: It is pretty much the same, with a slightly bigger tolerance. The UASF client required every single block to signal, while this one requires 90 percent to signal. In practice, if miners are in the last 10 percent of that window, they have to pay a bit more attention. Other than that it is the same.
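Put side by side, the two deployments described so far agree on everything except how long signaling lasts and what happens when it runs out (again paraphrasing this episode, with the block heights as mentioned):

    # Comparison of the two clients as described in this episode (paraphrased).
    core_speedy_trial = {
        "start": "first retargeting period after 2021-04-24",
        "signal_bit": 2,
        "threshold": "90%",
        "signaling_window": "~3 months, until ~2021-08-11",
        "after_timeout": "give up; Taproot simply does not activate",
        "activation_height": 709632,
    }
    lot_true_client = {
        "start_height": 681408,               # expected to be the same moment
        "signal_bit": 2,
        "threshold": "90%",
        "signaling_window": "~18 months, until height 762048 (~2022-11-10)",
        "after_timeout": "only accept blocks that signal (mandatory signaling)",
        "activation_height": 709632,
    }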

AvW: That is why some people call it the UASF client. The BIP 148 client was the UASF client for SegWit, this is the UASF client for Taproot. I know that, for example, Luke Dashjr, who contributed to this client, doesn't like the term UASF in this context because there are 18 months of regular miner signaling.

SP: So did the UASF. It is a bit more patient than the UASF.

AvW: There is a lot of discussion on the name of the client and on what I should or shouldn't call it. In general some people have called it the UASF client, and this is why.

SP: You could call it the "slow UASF" or something like that.

Implications of having two alternative clients

AvW: I have also seen the name User Enforced Miner Activated Soft Fork (UEMASF). People come up with names. The basic facts are clear now, I hope. Let's get into the implications. There are some potential incompatibilities between these two activation clients. Everyone agrees that Taproot is great. Everyone wants Taproot. Everyone agrees that it would be preferable if miners activate it. The only thing there is some disagreement on is what the backup plan should be. That is where the incompatibilities come in. Do you agree?

SP: I think so.

AvW: What are the incompatibilities? First of all, and I already mentioned this, to emphasize it: if Speedy Trial activates Taproot there are no incompatibilities. Both clients will happily use Taproot starting in November. That seems quite likely, because 90 percent of mining pools have already indicated that they support Taproot. Most likely there won't be a big problem here, everything will turn out fine. If Speedy Trial fails to activate Taproot, that is when we enter a phase where we are going to start looking at potential incompatibilities.

SP: Sure. Imagine a scenario where Speedy Trial fails. Probably the Bitcoin Core folks will think about that for a while and consider other possibilities. For some reason miners get enthusiastic right after Speedy Trial fails and start signaling at 90 percent. As far as Bitcoin Core is concerned, Taproot never activated. As far as the UASF or LOT=true client is concerned, Taproot just activated.

AvW: Let's say in month 4. We have 3 months of Speedy Trial and then in month 4 miners suddenly signal readiness for Taproot. Bitcoin Core doesn't care anymore, Bitcoin Core 0.21.1 is no longer looking at the signaling. But the LOT=true client is. On the LOT=true client Taproot will activate in November, while on this Bitcoin Core client it won't.

SP: Then of course if you are using that LOT=true client and you immediately start using Taproot at that point because you are really excited, you see all these blocks coming in, you may or may not lose your money. Anyone running the regular Bitcoin Core client will accept thefts from those Taproot addresses, essentially.

AvW: In this case it also matters what miners are doing. If miners signal readiness because they are actually ready and are going to enforce Taproot, then it is fine. There is no problem, because they will enforce the soft fork and even the Bitcoin Core 0.21.1 nodes will follow this chain. The LOT=true client will enforce it and everyone will be happily on the same chain. The only scenario where this is a problem, what you just described, is if miners signal readiness but aren't actually going to enforce the Taproot rules.

SP: The problem is of course, in general with soft forks, but especially if not everyone is exactly on the same page about what the rules are, you only know that enforcement happens when it actually happens. You don't know whether it is going to be enforced in the future. This creates a dilemma for everybody else, because then the question is what to do? One thing you could do at that point is say "Obviously Taproot is activated, so let's release a new version of Bitcoin Core that just retroactively says it is activated." That could just be a BIP 9 soft fork repeating the same bit but a bit later, or it could just say "We know it activated. We will just hard code the flag day."

AvW: It could just be a second Speedy Trial. Would everything work in that case?

SP: There is a problem with reusing the same bit number within a short period of time. (Note: AJ Towns stated on IRC that this would only be a problem if multiple soft forks were deployed in parallel.) Because it would be exactly the week after in the scenario we are talking about, it may not be possible to use the same bit. Then you have a problem, because you can't check that specific bit, but there is no signal on any of the other bits either. That would create a bit of a headache. The other solution would be very simple: say it has apparently activated, so we will just hard code the block date and activate it then. The problem is, what happens if, between the moment the community decides "We are going to do this" and the moment the software is released and somewhat widely deployed, one or more miners say "Actually we are going to start stealing these Taproot coins." You get a clusterf*** in terms of chain agreement. Now miners wouldn't be incentivized to do this, because why would you deliberately create total chaos if you just signaled for a soft fork? But it is a very scary situation, and it might make it scary to do the release. If you do the release but miners start pulling these shenanigans, what do you do then? Do you accept a huge reorg at some point? Or do you give up and consider it not deployed? But then people lose their money and you have released a client that you now have to technically hard fork from. That is not a good scenario.

AvW: It gets complicated in scenarios like this, also with the game theory and the economics. Even if miners decided to steal, they risk stealing coins on a chain that could be reorged away. They have just mined a chain that could be reorged away if other miners do enforce these Taproot rules. It is weird, it is a discussion about economic incentives and game theory in that scenario. Personally I think it is pretty unlikely that something like this would happen, but it is at least technically possible and it is something to keep in mind.

SP: It makes you wonder whether, as a miner, it is smart to signal immediately after this Speedy Trial. This LOT=true client allows two years anyway. If the only reason you are signaling is because this client exists, then I would strongly suggest not doing it immediately after Speedy Trial. Maybe wait a bit until there is some consensus on what to do next.

AvW: One thing you mentioned that I want to quickly address: this risk always exists for any soft fork. Miners can always false signal; they could have done it with SegWit for example, false signal and then steal coins from SegWit outputs. Old nodes wouldn't notice the difference. That is always a risk. I think the difference here is that Bitcoin Core 0.21.1 users in this scenario might think they are running a new node; from their perspective they are running an upgraded node. Yet they are running the same risks that previously only outdated nodes were running.

SP: I would be more worried about the potential users of 0.21.2, who are installing the successor to Speedy Trial that perhaps retroactively activates Taproot. That group is very unsure about what the rules are.

AvW: Which group is that?

SP: If Speedy Trial fails and it is signaled afterwards, there could be a new release and people would install that new release, but then it isn't clear whether that new release is safe or not. That new release would be the only one that actually thinks Taproot is active, along with the LOT=true client. But now we don't know what miners are running and we don't know what exchanges are running, because this is all very new. This would be done over a period of weeks. Right now we have a 6 month timeline… I guess the activation date would still be November.

AvW: It would still be November, so there is still room to prepare in that case.

SP: OK, then I guess what I said earlier doesn't make sense. The easiest solution would be to do a flag day where the new release says "It is going to activate on November 12th, or whatever the block height is, without any signaling." The signaling exists, but people have different interpretations of it. That could be one way.

Recap

AvW: It is still very clear to me what we are talking about here, but I am not sure our listeners are still keeping up at this point. Shall we recap? If miners activate during Speedy Trial then everything is fine, everyone is in consensus.

SP: And the new rules take effect in November.

AvW: If miners activate after the Speedy Trial period, then there is the possibility that the LOT=true client and the Bitcoin Core 0.21.1 client are not in consensus if an invalid Taproot block is ever mined.

SP: They don't have mandatory signaling, you are right. If an invalid Taproot transaction shows up after November 12th….

AvW: …and if it is mined and accepted by a majority of miners, which means a majority of miners must have been false signaling, then the two clients can fall out of consensus. Technically this is true; personally I think it is pretty unlikely. I am not too worried about it, but it is at least technically true and it is something people should keep in mind.

SP: That scenario could be prevented by saying "If we see this 'false' signaling, if we see this massive signaling a week after Speedy Trial, then you could decide to release a flag day client that just says we are going to activate on November 12th, because apparently the miners want this. Otherwise we have no idea what to do with this signal."

AvW: I find it very hard to predict what Bitcoin Core developers are going to decide in this case.

SP: I agree, but this is one possibility.

A more likely way the two clients could be incompatible

AvW: That is one of the ways the two clients could potentially be incompatible. There is another way that is perhaps more likely, or at least not as convoluted.

SP: The other one is "Let's imagine Speedy Trial fails and the community has no consensus on how to proceed next." Bitcoin Core developers may see that, there is ongoing discussion and nobody agrees. Maybe the Bitcoin Core developers decide to wait and see.

AvW: Miners aren't signaling…

SP: Or only erratically, etc. Miners aren't signaling. The discussion goes on. Nothing happens. Then this LOT=true mechanism kicks in…

AvW: After 18 months. We are talking about November 2022, so there is plenty of time, but at some point the LOT=true mechanism will kick in.

SP: Exactly. Then those nodes, assuming miners still aren't signaling, will stop…

AvW: That is if there are literally zero blocks signaling for it, LOT=true.

SP: In the other scenario, where miners do start signaling en masse, we are back to that earlier scenario where there is suddenly a lot of miner signaling on bit 2. The soft fork may be active, but now there is no delay. If the signaling happens anywhere after November 12th, the LOT=true client will activate Taproot after one adjustment period.

AvW: I am not sure I am following you.

SP: Let's say in this case that in December of this year miners suddenly start signaling. That is after the minimum activation height. In December they all start signaling. The Bitcoin Core client will ignore it, but the LOT=true client will say "OK, Taproot is active."

AvW: Is that the same scenario we just discussed? There is only a problem if there is false signaling. Otherwise there is no problem.

SP: There is a problem if there is false signaling, but it is more complicated to resolve, because the option of just releasing a new client with a flag day in it that is far enough in the future no longer exists. It is potentially active immediately. If you do a release but suddenly a miner starts not enforcing the rules, you get this confusion we talked about before. Then we can fix it by just doing a flag day. It would be even more confusing. Maybe it is also even less likely.

AvW: It is pretty similar to the previous scenario, only a bit harder, less obvious how to resolve it.

SP: I think it is more complicated because it is less obvious how to do a flag day release of Bitcoin Core in that scenario, because it activates immediately.

AvW: That is not where I wanted to go with this.

SP: Did you want to get to the scenario where miners wait until the very end before they start signaling?

AvW: Yes, that is what I wanted.

SP: This is where mandatory signaling comes in. If there is no signaling, the LOT=true nodes will stall until someone mines a block they would like to see, a block that signals. If they see this signaling block, we are back in the earlier example where suddenly the regular Bitcoin Core nodes see this signaling but ignore it. Now there is a group of nodes that believe Taproot is active and there is a group of nodes that don't. Then someone has to decide what to do with it.

AvW: ¿Sigue hablando de falsa señalización aquí?

SP: Even if the signaling is genuine you still want there to be a Bitcoin Core release, probably, that actually says “We have Taproot now.” But the question is when do we have Taproot according to that release? What is a safe date to put in there? You could do it retroactively.

AvW: Cuando quieran. La cuestión es que si los mineros hacen cumplir las nuevas reglas, la cadena se mantendrá unida. Depende de Bitcoin Core implementarlas cuando les apetezca.

SP: El problema con esta señalización es que no sabes si está activa hasta que alguien decide intentar romper las reglas.

AvW: Mi suposición era que no había señalización falsa. De todos modos, crearán la cadena más larga con las reglas válidas.

SP: El problema con eso es que no se puede saber.

AvW: El escenario al que realmente quería llegar Sjors es el escenario muy simple en el que la mayoría de los mineros no señalan cuando se cumplen los 18 meses. En ese caso van a crear la cadena más larga que los nodos de Bitcoin Core 0.21.1 van a seguir mientras que los nodos LOT=true sólo van a aceptar los bloques que sí señalen, que pueden ser cero o al menos menos menos. Si es una mayoría entonces no hay división. Pero si no es una mayoría entonces tenemos una división.

SP: Y esa cadena se retrasaría cada vez más. El incentivo para hacer un lanzamiento que tenga en cuenta eso sería bastante pequeño, creo. Depende, aquí es donde entra la teoría del juego. Desde el punto de vista de la seguridad, si ahora haces un lanzamiento que diga “Por cierto, consideramos activo Taproot con carácter retroactivo”, eso provocaría una reorganización gigantesca. Si sólo lo activas, eso no causaría una reorganización gigante. Pero si se dice “Por cierto, vamos a ordenar retroactivamente esa señalización que a ustedes les interesa”, eso causaría una reorganización masiva. Esto sería inseguro, eso no sería algo que se liberaría probablemente. Es una situación muy complicada.

AvW: Hay escenarios potenciales desordenados. Quiero recalcar a nuestros queridos oyentes que nada de esto va a ocurrir en los próximos dos meses.

SP: Y espero que nunca. Vamos a lanzar algunos otros malos escenarios y luego supongo que podemos pasar a otros temas.

AvW: Quiero mencionar rápidamente que la razón por la que no me preocupan demasiado estos malos escenarios es porque creo que si parece mínimamente probable que haya una división de monedas o algo así, probablemente habrá mercados de futuros. Estos mercados de futuros probablemente dejarán muy claro a todo el mundo la cadena alternativa que tiene una oportunidad, lo que informará a los mineros sobre lo que deben minar y evitará una división de esa manera. Tengo bastante confianza en la sabiduría colectiva del mercado para advertir a todo el mundo sobre los posibles escenarios, así que probablemente funcionará bien. Esa es mi percepción general.

SP: El problema con este tipo de cosas es que si no sale bien es muy, muy malo. Entonces podemos decir con carácter retroactivo “Supongo que no ha salido bien”.

Proceso de desarrollo de LOT=true cliente

AvW: Quiero plantear algo antes de que plantees lo que querías plantear. He visto algunas preocupaciones por parte de los desarrolladores de Bitcoin Core sobre el proceso de desarrollo del cliente LOT=true. Creo que esto se reduce a la construcción de Gitian, la firma de Gitian que también discutimos en otro episodio.

SP: Hemos hablado de la necesidad de que el software sea de código abierto, de que sea fácil de auditar.

AvW: ¿Puede dar su opinión al respecto en este contexto?

SP: El cambio que hicieron en relación con el cliente principal de Bitcoin Core no es enorme. Se puede ver en GitHub. En ese sentido, esa parte del código abierto es razonablemente factible de verificar. Creo que ese código ha tenido menos revisión pero no cero revisión.

AvW: ¿Menos que el de Bitcoin Core?

SP: Exactamente, pero mucho más que el de UASF, mucho más que el de 2017.

AvW: ¿Más que eso?

SP: I would say. The idea has been studied a bit longer. But the second problem is how do you know that what you are downloading isn’t malware. There are two measures there. There is release signatures, the website explains pretty well how to check those. I think they were signed by Luke Dashjr and by the other developer. You can check that.

AvW: Bitcoin Mechanic es el otro desarrollador. En realidad, lo publican Bitcoin Mechanic y Shinobi y Luke Dashjr es el asesor, el colaborador.

SP: Normalmente hay un archivo binario que se descarga y luego hay un archivo con sumas de comprobación y ese archivo con sumas de comprobación también está firmado por una persona conocida. Si tienes la clave de Luke o de quien sea, su clave y la conoces, puedes comprobar que al menos el binario que has descargado no procede de un sitio web pirateado. Lo segundo, es que tienes un binario y sabes que lo han firmado, pero ¿quiénes son? La segunda cosa es que quieres comprobar que este código coincide con el binario y ahí es donde entra la construcción Gitian de la que hablamos en un [episodio] anterior (https://www.youtube.com/watch?v=_qdhc5WLd2A). Básicamente, las construcciones deterministas. Toma el código fuente y produce el binario. Múltiples personas pueden entonces firmar que, de hecho, según ellos este código fuente produce este binario. Cuantas más personas lo confirmen, más probable es que no haya colusión. Creo que sólo hay dos firmas de Gitian para esta otra versión.

AvW: Así que el software de Bitcoin Core está siendo firmado por Gitian…

SP: Creo que 10 o 20 personas.

AvW: Muchos de los desarrolladores experimentados de Bitcoin Core que han estado desarrollando el software de Bitcoin Core durante un tiempo. ¿Incluido usted? ¿Lo firmó usted?

SP: La versión más reciente, sí.

AvW: Usted confía en que no están todos confabulados y difundiendo malware. Para la mayoría de la gente, todo se reduce a la confianza en ese sentido.

SP: Si realmente estás pensando en utilizar este software alternativo, deberías saber lo que estás haciendo en términos de todos estos escenarios de reorganización. Si ya sabes lo que estás haciendo en esos términos, entonces simplemente compila la cosa desde el código fuente. ¿Por qué no? Si no eres capaz de compilar las cosas desde el código fuente, probablemente no deberías ejecutar esto. Pero eso depende de ti. No me preocupa que estén enviando malware, pero en general es sólo cuestión de tiempo antes de que alguien diga “Tengo una versión diferente con LOT=happy y por favor descárguela aquí” y le robe todo su Bitcoin. Es más el precedente que está sentando esto lo que me preocupa que esta cosa pueda realmente tener malware.

AvW: Eso es justo. ¿Tal vez firmarlo Sjors?

SP: No, porque no creo que sea una cosa sana para publicar.

AvW: Me parece justo.

SP: Es sólo mi opinión. Cada uno es libre de publicar lo que quiera.

AvW: ¿Había algo más que quisieras comentar?

¿Qué lanzaría Bitcoin Core si Speedy Trial no se activara?

SP: Sí, hablamos sobre la señalización verdadera o falsa en el bit 1, pero una posibilidad muy real creo que si esta activación falla y queremos probar algo más, entonces probablemente no queramos usar el mismo bit si es antes de la ventana de tiempo de espera. Eso podría crear un escenario en el que podrías empezar a decir “Vamos a usar otro bit para hacer la señalización”. Entonces podrías tener alguna confusión donde hay una nueva versión de Bitcoin Core que se activa usando el bit 3 por ejemplo pero la gente de LOT=true no lo ve porque están mirando el bit 1. Eso puede o no ser un problema real. La otra cosa es que podría haber todo tipo de otras formas de activar esta cosa. Una podría ser un día de bandera. Si Bitcoin Core lanzara un día de bandera entonces no habrá ninguna señalización. El cliente LOT=true no sabrá que Taproot está activo y exigirá la señalización en algún momento aunque Taproot ya esté activo.

AvW: Tu punto es que no sabemos lo que Bitcoin Core lanzará después de Speedy Trial y lo que podrían lanzar no necesariamente sería compatible con el cliente LOT=true. Eso funciona en ambos sentidos, por supuesto.

SP: Claro, sólo estoy razonando desde un punto. También diría que en el caso de que Bitcoin Core lance algo más que tenga un apoyo bastante amplio por parte de la comunidad, me imagino que la gente que está ejecutando los clientes BIP 8 no está sentada en una cueva en algún lugar. Probablemente son usuarios relativamente activos que pueden decidir “voy a ejecutar esta versión de Bitcoin Core de nuevo porque hay un día de bandera en él que es anterior a la señalización forzada”. Podría imaginar que decidirían ejecutarla o no.

AvW: Eso también funciona en ambos sentidos.

SP: No, en realidad no. Me preocupan mucho más las personas que no siguen este debate y que simplemente se limitan a utilizar la versión más reciente de Core. O no se actualizan en absoluto, todavía están ejecutando, digamos, Bitcoin Core v0.15. Estoy mucho más preocupado por ese grupo que por el grupo que toma activamente una posición en este asunto. Si usted toma activamente una posición mediante la ejecución de otra cosa, entonces usted sabe lo que está haciendo. Depende de ti estar al día. Pero tenemos un compromiso con todos los usuarios de que si todavía estáis ejecutando en vuestro búnker la versión 0.15 de Bitcoin Core que no os pase nada malo si seguís la mayor parte de las pruebas de trabajo dentro de las reglas que conocéis.

AvW: Eso también podría significar hacerlo compatible con el cliente LOT=true.

SP: No, en lo que respecta al nodo v0.15 no hay ningún cliente LOT=true.

AvW: ¿Queremos entrar en todo tipo de escenarios? El escenario que más me preocupa es el de la cadena LOT=true por llamarlo de alguna manera, que si alguna vez se produce una escisión ganará pero sólo después de un tiempo porque se obtienen reorgs largos. Esto vuelve a la discusión LOT=true versus LOT=false en primer lugar.

SP: Sólo veo que eso ocurra con un colapso masivo del precio del propio Bitcoin. Si se da el caso de que LOT=true empiece a ganar después de un retraso que requiera una gran reorganización… si es más probable que gane, su precio relativo subirá porque es más probable que gane. Pero como un re-org más grande es más desastroso para Bitcoin cuanto más largo sea el re-org más bajo será el precio de Bitcoin. Ese sería el escenario malo. Si hay una reorganización de 1000 bloques o más, entonces creo que el precio de Bitcoin se derrumbará a algo muy bajo. Realmente no nos importa si el cliente LOT=true gana o no. Eso ya no importa.

AvW: Estoy de acuerdo con eso. La razón por la que no me preocupa es lo que he mencionado antes, creo que estas cosas las resolverán los mercados de futuros mucho antes de que ocurra realmente.

SP: Supongo que el mercado de futuros predeciría exactamente eso. Eso no sería bueno. Dependiendo de tu confianza en los mercados de futuros, que para mí no es tan sorprendente.

Altura del bloque frente a MTP

https://github.com/bitcoin/bitcoin/pull/21377#issuecomment-818758277

SP: Podríamos seguir hablando de esta diferencia de fondo entre la altura de los bloques y el tiempo de los bloques. Hubo un fiasco pero no creo que sea una diferencia interesante.

AvW: También podríamos mencionarlo.

SP: Cuando describimos por primera vez el Speedy Trial asumimos que todo se basaría en la altura de los bloques. Habría una transformación de la forma en que funcionan ahora las horquillas suaves, que se basa en estos tiempos medios, a la altura de los bloques, que es conceptualmente más sencilla. Más tarde hubo alguna discusión entre la gente que estaba trabajando en eso, considerando que tal vez la única diferencia del Speedy Trial debería ser la altura de activación y ninguno de los otros cambios. Desde el punto de vista de la base de código existente es más fácil hacer que el Speedy Trial ajuste un parámetro que es una altura de activación mínima frente al cambio donde se cambia todo en alturas de bloque que es un cambio mayor del código existente. Aunque el resultado final sea más fácil. Un enfoque basado puramente en la altura de los bloques es más fácil de entender, más fácil de explicar lo que va a hacer, cuando lo va a hacer. Algunos casos extremos también son más fáciles. Pero permanecer más cerca de la base de código existente es algo más fácil para los revisores. La diferencia es bastante pequeña, así que creo que algunas personas decidieron lanzar una moneda al aire y otras creo que estuvieron de acuerdo sin lanzarla.

AvW: Hay argumentos en ambos lados pero parecen ser bastante sutiles, bastante matizados. Como hemos dicho, el Speedy Trial va a empezar en la misma fecha, así que no parece importar mucho. En algún momento algunos desarrolladores estaban considerando seriamente decidir mediante un lanzamiento de moneda usando la cadena de bloques de Bitcoin para ello, eligiendo un bloque en el futuro cercano y viendo si termina con un número par o impar. No sé si eso fue literalmente lo que hicieron pero sería una forma de hacerlo. Creo que sí hicieron el lanzamiento de la moneda, pero luego los defensores de ambas soluciones terminaron poniéndose de acuerdo de todos modos.

https://www.youtube.com/watch?v=SHmEXPvN6t4

Topic: Taproot activation update: Speedy Trial and the LOT=true client

Location: Bitcoin Magazine (online)

Date: April 23, 2021

Previous episode on lockinontimeout (LOT): https://btctranscripts.com/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout/

Previous episode on Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/

Aaron van Wirdum on "There Are Now Two Taproot Activation Clients, Here's Why": https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why

Transcript by: Michael Folkson

Introduction

Aaron van Wirdum (AvW): Live from Utrecht, this is the van Wirdum Sjorsnado.

Sjors Provoost (SP): Hello

AvW: Sjors, today we have a lot more to discuss.

SP: We already made this pun.

AvW: I think we have already made it twice. It doesn't matter. Today we are going to discuss the final details of the Speedy Trial implementation. We already covered Speedy Trial in a previous episode. This time we are also going to contrast it with the LOT=true client, an alternative that has been released by a couple of community members. We are going to discuss how they compare.

SP: That sounds like a good idea. We also talked about Taproot activation options in general in an earlier episode.

AvW: One of the first ones?

SP: We also talked about this cowboy mentality idea, where somebody would end up releasing a LOT=true client no matter what you do.

AvW: That is where we are now.

SP: We also correctly predicted that there would be a lot of bikeshedding.

AvW: Yes, this is also something we are going to get into. First, as a brief recap, we are talking about Taproot activation. Taproot is a proposed protocol upgrade for compact and potentially privacy-preserving smart contracts on the Bitcoin protocol. Is that a good summary?

SP: Yes, I think so.

AvW: The debate over how to upgrade has been going on for a while. The challenge is that on an open, decentralized network like Bitcoin, without a central dictator telling everyone what to run and when, you are not going to get everyone to upgrade at the same time. But we want to keep the network in consensus one way or another.

SP: Yes. The other thing that can work with a distributed system is some sort of convention, ways you are used to doing things. But unfortunately the convention we had ran into problems with the SegWit deployment. So the question then is: "Should we try something else, or was that a freak accident and should we try the same thing again?"

AvW: As a final preamble before we start talking about Speedy Trial, I would like to point out that the general idea with a soft fork, a backwards-compatible upgrade which Taproot is, is that if a majority of hash power is enforcing the new rules, the network will stay in consensus.

SP: Yes. We can repeat that if you keep making transactions that are pre-Taproot, those transactions are still valid. In that sense, as a user, you can ignore soft forks. Unfortunately, if there is a problem, you cannot ignore it as a user even if your transactions don't use Taproot.

AvW: I think everyone agrees it is very nice if a majority of hash power enforces the rules. There are coordination mechanisms to measure how many miners are on board with an upgrade. That is how you can coordinate a fairly safe soft fork. That is something everyone agrees on. Where people start to disagree is on what happens if miners don't actually cooperate with this coordination. We are not going to rehash all of that. There are previous episodes on that. What we are going to explain is that in the end the Bitcoin Core development community settled on a solution called "Speedy Trial". We also mentioned it in a previous episode. It is now finalized, and we are going to explain what the finalized parameters are.

SP: There was one small change.

AvW: Let's hear it Sjors. What are the final parameters for Speedy Trial? How is Bitcoin Core going to upgrade to Taproot?

Finalized Bitcoin Core activation parameters

Bitcoin Core 0.21.1 release notes: https://github.com/bitcoin/bitcoin/blob/0.21/doc/release-notes.md

Speedy Trial activation parameters merged into Core: https://github.com/bitcoin/bitcoin/pull/21686

SP: Starting from, I believe it is this Sunday (April 25th, midnight), the first time the difficulty adjusts, which happens every two weeks, probably about a week after that Sunday…

AvW: It is Saturday.

SP: …the signaling starts. In about two weeks the signaling starts, no earlier than one week from now.

AvW: To be clear, that is the earliest it can start.

SP: The earliest it can start is April 24th, but because it only starts at a new difficulty adjustment period, a new retargeting period, it probably won't start until about two weeks from now.

AvW: It will start on the first new difficulty period after April 24th, which is estimated to be in early May. May 4th, may the fourth be with you Sjors.

SP: That would be a cool date. That is when the signaling starts, and it happens in voting rounds, so to speak. One voting round is two weeks, one difficulty adjustment period, one retargeting period. If 90 percent of the blocks in that voting period signal on bit number 2, Taproot is locked in. Locked in means it is going to happen, picture the little gif with Ron Paul, "It's happening". But the actual Taproot rules won't take effect immediately; they will take effect at block number 709632.

AvW: Which is estimated to be mined when?

SP: That will be November 12th of this year.

AvW: That is going to vary a bit of course, depending on how quickly blocks are mined over the coming months. But it will be in November, almost certainly.

SP: Which would be 4 years after the SegWit2x effort imploded.

AvW: Yes, nice timing in that sense.

SP: All the dates are nice. That is what Speedy Trial does. Every two weeks there is a vote; if 90 percent of the vote is reached, that is the activation date. It doesn't happen immediately, and because it is a "Speedy Trial" it can also fail quickly, and that is in August, around August 11th. If the difficulty period after that, or before it, I always forget, doesn't reach the threshold, I think it is after…

AvW: The difficulty period must have ended by August 11th, right?

SP: When August 11th passes it could still activate, but then in the next difficulty period it cannot. I think the rule is that at the end of the difficulty period you start counting, and if the result is a failure, then if it is past August 11th you give up, but if it is not yet August 11th you go into the next round.

AvW: If the first block of a new difficulty period is mined on August 10th, does that difficulty period still count?

SP: That's right. I think that is one of the subtle changes that was made to BIP 9 to make it easier to reason about. I think it used to be the other way around, where the date was checked first: if the date had passed you gave up, and if it was before the date it kept counting. Now it works as I just described, which is a bit simpler.

AvW: I see. There is going to be a signaling window of about 3 months.

SP: That's right.

AvW: If in any difficulty period within that 3 month signaling window 90 percent of hash power is reached, Taproot will activate in November of this year.

SP: Yes.

AvW: I think that covers Speedy Trial.

SP: The threshold is 90 percent, as we said. Normally with BIP 9 it is 95 percent, but it has been lowered to 90 percent.

AvW: What happens if the threshold isn't reached?

SP: Nothing. Which means anything could happen. People could deploy new versions of software, try another bit, etc.

AvW: I just wanted to clarify that.

SP: It doesn't mean Taproot is cancelled.

AvW: If the threshold isn't reached, this specific software client won't do anything, but the Bitcoin Core developers and the rest of the Bitcoin community can still come up with new ways to activate Taproot.

SP: Exactly. It is a low cost experiment. If it wins we will have Taproot. If not, then we have more information on why not…

AvW: I also want to clarify. We don't yet know what that would look like. That will have to be figured out at that point. We could start working it out now, but how the next deployment would look has not been decided yet.
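To make the arithmetic of these voting rounds concrete, here is a minimal Python sketch (my own illustration, not Bitcoin Core code) of the per-period check described above: within one 2016-block retargeting period, lock-in happens once at least 90 percent of blocks (1815 of 2016) signal on bit 2 using BIP 9 style version bits.

```python
# A minimal sketch of the per-period "vote counting" described above: within one
# 2016-block retargeting period, count blocks whose version field signals on
# bit 2 and compare against the 90 percent threshold. Block versions are assumed
# to follow BIP 9 semantics (top three bits set to 001, deployment bits below).

PERIOD = 2016        # blocks per difficulty/retargeting period
THRESHOLD = 1815     # 90% of 2016, rounded up
TAPROOT_BIT = 2      # signaling bit used by Speedy Trial

def signals_taproot(version: int) -> bool:
    """True if a block version uses BIP 9 versionbits and sets bit 2."""
    return (version >> 29) == 0b001 and bool(version & (1 << TAPROOT_BIT))

def period_locks_in(versions: list[int]) -> bool:
    """True if a full retargeting period reaches the 90% signaling threshold."""
    assert len(versions) == PERIOD
    return sum(signals_taproot(v) for v in versions) >= THRESHOLD

# Example: a period where exactly 1815 blocks signal locks Taproot in,
# while 1814 signaling blocks would not be enough.
example = [0x20000004] * 1815 + [0x20000000] * (2016 - 1815)
print(period_locks_in(example))   # True
```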

The alternative to Bitcoin Core (Taproot client based on Bitcoin Core 0.21.0)

Update on Taproot activation releases: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018790.html

AvW: Another client was also released. There is a lot of debate about the name. We are going to call it the LOT=true client.

SP: That sounds good to me.

AvW: It derives from this technical and philosophical difference about how soft forks should be activated in the first place. This client uses, you guessed it, LOT=true. LOT=true means that if the signaling window runs out, at the end of it the nodes will start to reject any blocks that don't signal. They only accept blocks that signal. That is the main difference. Let's get into the details of the LOT=true client.

SP: In principle it is the same. There is a theoretical possibility that it isn't if miners do something really crazy. It starts at a certain block height…

AvW: We just mentioned that Bitcoin Core 0.21.1 starts its signaling window on the first difficulty period after April 24th. This LOT=true client will in practice also start its signaling window on the first difficulty period after April 24th, except that April 24th isn't specified explicitly. They simply picked the specific block height that is expected to be the first one after April 24th.

SP: Exactly, they picked block 681,408.

AvW: It is specified as a block height rather than indirectly by using a date.

SP: But it will most likely be exactly the same moment. Both Speedy Trial (Core) and the LOT=true client will start the signaling, the voting periods, at the same time. The voting periods themselves vote on the same bit. They both vote on bit 2. They both have a 90 percent threshold. And if the vote passes, there is also a delayed activation. The delayed activation is a block height in both scenarios, both in Speedy Trial (Core) and in the LOT=true variant.

AvW: Both are November 12th, a November activation anyway. If miners signal readiness for Taproot within the Speedy Trial period, both clients will activate Taproot in November on that same date, on exactly the same block.

SP: So in that sense they are identical. But they are also different.

AvW: Let's get into the first big difference. We have already mentioned one difference, the very subtle difference of starting on a height, which we just covered. Let's get into a bigger difference.

SP: There is also a height for a timeout in this LOT=true client, and that is also a block. But it is not just that it is a block; that by itself would be a small difference, especially over a long time period. At the start, whether you use block height or time, the date can be guessed very precisely, but a year ahead you can't. And this is almost two years ahead, this block height (762048, roughly November 10th 2022) that they have in there. It goes much further.

AvW: Two years from now you mean, well one and a half.

SP: Exactly. In that sense it doesn't really matter that they use a height, because the difference is huge anyway. But this is important. They will allow signaling for much longer than Speedy Trial does. We can get into the implications later, but basically they allow signaling much later.

AvW: Let's stick to the facts first and the implications later. Speedy Trial (Bitcoin Core) will last 3 months. And this one, the LOT=true client, will allow signaling for 18 months.

SP: The other big difference is that at the end of those 18 months, where Speedy Trial will simply give up and move on, the LOT=true client will wait for the miners that do signal. That could be nobody or it could be everybody.

AvW: They will only accept signaling blocks after those 18 months. For those who are aware of the whole block size war, it is a bit like the BIP 148 client.

SP: It is roughly the same with a slightly bigger tolerance. The UASF client required every single block to signal, whereas this one requires 90 percent to signal. In practice, if miners are in the last 10 percent of that window, they have to pay a bit more attention. Other than that, it is the same.

AvW: That is why some people call it the UASF client. The BIP 148 client was the UASF client for SegWit; this is the UASF client for Taproot. I know that, for example, Luke Dashjr, who has contributed to this client, doesn't like the term UASF in this context because there are 18 months of regular miner signaling.

SP: So did the UASF. This one is a bit more patient than the UASF.

AvW: There is a lot of discussion about the client's name and what you should or shouldn't call it. In general, some people have called it the UASF client, and this is why.

SP: You could call it "slow UASF" or something like that.
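For reference, the deployment parameters discussed above can be summarized side by side. The following sketch simply restates them as data; the heights and bit come from this conversation, while the field names and structure are my own illustration.

```python
# Side-by-side summary (my own illustration) of the two deployments discussed
# above. All numbers come from the conversation; the field names are arbitrary.

speedy_trial = {
    "signal_bit": 2,
    "threshold": 0.90,                 # 90% of blocks in one retarget period
    "start": "first retarget period after 2021-04-24",
    "timeout": "around 2021-08-11",    # ~3 months of signaling
    "min_activation_height": 709_632,  # ~November 12th 2021
    "on_timeout": "give up; try something else later",
}

lot_true_client = {
    "signal_bit": 2,
    "threshold": 0.90,
    "start_height": 681_408,           # expected first block after 2021-04-24
    "timeout_height": 762_048,         # ~November 10th 2022, ~18 months later
    "min_activation_height": 709_632,  # same delayed activation
    "on_timeout": "reject blocks that do not signal (mandatory signaling)",
}
```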

Implications of there being two alternative clients

AvW: I have also seen the name User Enforced Miner Activated Soft Fork (UEMASF). People come up with names. The basic facts are clear now, I hope. Let's get into the implications. There are some potential incompatibilities between these two activation clients. Everyone agrees Taproot is great. Everyone wants Taproot. Everyone agrees it would be preferable if miners activate it. The only thing there is some disagreement on is what the backup plan should be. That is where the incompatibilities come in. Do you agree?

SP: I think so.

AvW: What are the incompatibilities? First of all, and I have already mentioned this, to emphasize it: if Speedy Trial activates Taproot, there are no incompatibilities. Both clients will happily use Taproot from November onwards. This seems fairly likely, because 90 percent of mining pools have already indicated they support Taproot. Chances are there is no big issue here and it will all work out fine. If Speedy Trial fails to activate Taproot, that is when we enter a phase where we start looking at potential incompatibilities.

SP: Sure. Imagine a scenario where Speedy Trial fails. The Bitcoin Core folks will probably think about that for a while and consider other possibilities. For whatever reason, miners get excited right after Speedy Trial fails and start signaling at 90 percent. As far as Bitcoin Core is concerned, Taproot never activated. As far as the UASF or LOT=true client is concerned, Taproot just activated.

AvW: Let's say in month 4, we have 3 months of Speedy Trial and then in month 4 miners suddenly signal readiness for Taproot. Bitcoin Core doesn't care anymore, Bitcoin Core 0.21.1 is no longer looking at the signaling. But the LOT=true client is. On the LOT=true client Taproot will activate in November, while on this Bitcoin Core client it won't.

SP: Then of course, if you are using that LOT=true client and you immediately start using Taproot at that point because you are all excited, you see all these blocks coming in, you may or may not lose your money. Anybody running the regular Bitcoin Core client will essentially accept thefts from those Taproot addresses.

AvW: In this case it also matters what miners are doing. If miners signal readiness because they are actually ready and are going to enforce Taproot, then it is fine. There is no problem, because they will enforce the soft fork and even the Bitcoin Core 0.21.1 nodes will follow this chain. The LOT=true client will enforce it and everybody is happy on the same chain. The only scenario where this is a problem, the one you just described, is if miners signal readiness but aren't actually going to enforce the Taproot rules.

SP: The problem is, of course, with soft forks in general, but especially if not everybody is on exactly the same page about what the rules are: you only know enforcement is happening when it actually gets enforced. You don't know whether it will be enforced in the future. This creates a dilemma for everybody else, because then the question is what to do. One thing you could do at that point is say "Obviously Taproot is activated, so let's release a new version of Bitcoin Core that just retroactively says it is activated." That could just be a BIP 9 soft fork repeating the same bit a bit later, or it could just say "We know it activated. We will just hardcode the flag day."

AvW: It could just be a second Speedy Trial. Would everything work out in that case?

SP: There is a problem with reusing the same bit number within a short period of time. (Note: AJ Towns stated on IRC that this would only be a problem if multiple soft forks were deployed in parallel.) Because it would be exactly the week after in the scenario we are talking about, it may not be possible to use the same bit. Then you have a problem, because you can't check that specific bit, but there is no signal on any of the other bits either. That would create a bit of a headache. The other solution would be very simple: say it apparently activated, so we just hardcode the date and activate it then. The problem is, what if between the moment the community decides "We are going to do this" and the moment the software is released and somewhat widely deployed, one or more miners say "Actually, we are going to start stealing these Taproot coins." You get a clusterf*** in terms of agreement on the chain. Now, miners won't be incentivized to do this, because why would you deliberately create total chaos if you just signaled for a soft fork? But it is a very scary situation and it might make it scary to do the release. If you do the release but miners start playing these games, what do you do then? Do you accept a huge reorg at some point? Or do you give up and consider it not deployed? But then people lose their money and you have released a client you now have to technically hard fork away from. It is not a good scenario.

AvW: It gets complicated in scenarios like this, also with game theory and economics. Even if miners decided to steal, they would risk stealing coins on a chain that could be reorged away. They would be mining a chain that could be reorged if other miners enforce these Taproot rules. It is weird, it becomes a discussion about economic incentives and game theory in that scenario. Personally I think it is pretty unlikely something like that would happen, but at least it is technically possible and it is something to keep in mind.

SP: It makes you wonder whether, as a miner, it is smart to signal immediately after this Speedy Trial. This LOT=true client allows two years anyway. If the only reason you are signaling is because this client exists, then I would strongly suggest not doing it immediately after Speedy Trial. Maybe wait a bit until there is some consensus on what to do next.

AvW: One thing you mentioned that I want to quickly address: this risk always exists for any soft fork. Miners can always false signal; they could have done it with SegWit for example, false signal and then steal coins from SegWit outputs. Old nodes wouldn't notice the difference. That is always a risk. I think the difference here is that Bitcoin Core 0.21.1 users in this scenario might think they are running a new node; from their perspective they are running an upgraded node. They are taking the same risks that previously only outdated nodes were taking.

SP: I would be more worried about potential users of 0.21.2, who are installing the successor to Speedy Trial that perhaps retroactively activates Taproot. That group is very unsure about what the rules are.

AvW: Which group is that?

SP: If Speedy Trial fails and then there is signaling, there could be a new release and people would install that new release, but then it is not clear whether that new release would be safe or not. That new release would be the only one that actually thinks Taproot is active, along with the LOT=true client. But now we don't know what miners are running and we don't know what exchanges are running, because this is all very new. This would be done over a period of weeks. Right now we have a 6 month lead time… I guess the activation date would still be November.

AvW: It would still be November, so there is still room to prepare in that case.

SP: OK, then I guess what I said earlier doesn't make sense. The easiest solution would be to do a flag day where the new release says "It is going to activate on November 12th, or whatever the block height is, without any signaling." The signaling exists, but people have different interpretations of it. That could be one way.

Recap

AvW: It is still very clear to me what we are talking about here, but I am not sure our listeners are keeping up at this point. Shall we recap? If miners activate during Speedy Trial, then everything is fine, everyone is in consensus.

SP: And the new rules take effect in November.

AvW: If miners activate after the Speedy Trial period, then there is a chance the LOT=true client and the Bitcoin Core 0.21.1 client won't be in consensus, if an invalid Taproot block is ever mined.

SP: They don't have mandatory signaling, you are right. If an invalid Taproot transaction shows up after November 12th…

AvW: …and if it is mined and enforced by a majority of miners, which would mean a majority of miners false signaled, then the two clients can fall out of consensus. Technically this is true; personally I think it is pretty unlikely. I am not too worried about it, but at least it is technically true and it is something people should keep in mind.

SP: That scenario could be prevented by saying "If we see this 'false' signaling, if we see this massive signaling one week after Speedy Trial, then you could decide to release a flag day client that just says we are going to activate this November 12th because apparently the miners want this. Otherwise we have no idea what to do with this signal."

AvW: I find it very hard to predict what the Bitcoin Core developers are going to decide in this case.

SP: I agree, but this is one possibility.

A more likely way the two clients could be incompatible

AvW: That is one of the ways the two clients could potentially be incompatible. There is another way that is maybe more likely, or at least not as convoluted.

SP: The other one is: "Let's imagine Speedy Trial fails and the community has no consensus on how to proceed next." The Bitcoin Core developers can see that, there is an ongoing discussion and nobody agrees. Maybe the Bitcoin Core developers decide to wait and see.

AvW: Miners aren't signaling…

SP: Or only erratically, etc. Miners aren't signaling. The discussion goes on. Nothing happens. Then this LOT=true mechanism kicks in…

AvW: After 18 months. We are talking about November 2022, there is a lot of time left, but at some point the LOT=true mechanism will kick in.

SP: Exactly. Then those nodes, assuming miners still aren't signaling, will stop…

AvW: That is if literally no blocks are signaling, for the LOT=true client.

SP: In the other scenario, where miners do start signaling massively, we are back to that earlier scenario where suddenly there is a lot of miner signaling on bit 2. The soft fork may be active, but now there is no delay. If the signaling happens anywhere after November 12th, the LOT=true client will activate Taproot after one adjustment period.

AvW: I am not sure I am following you.

SP: Let's say in this case, in December of this year, miners suddenly start signaling. After the minimum activation height. In December everybody starts signaling. The Bitcoin Core client will ignore it, but the LOT=true client will say "OK, Taproot is active."

AvW: Isn't that the same scenario we just discussed? There is only a problem if there is false signaling. Otherwise there is no problem.

SP: There is a problem if there is false signaling, but it is harder to resolve, because the option of just releasing a new client with a flag day in it that is far enough in the future no longer exists. It is potentially active immediately. If you do a release but suddenly a miner stops enforcing the rules, you get this confusion we talked about before. Then we can fix it by just doing a flag day. This would be even more confusing. Maybe it is also even less likely.

AvW: It is pretty similar to the previous scenario, but a bit harder, less obvious how to resolve.

SP: I think it is more complicated because it is less obvious how to do a flag day release in Bitcoin Core in that scenario, since it activates immediately.

AvW: That is not where I wanted to go with this.

SP: Did you want to go to a scenario where miners wait until the very end before they start signaling?

AvW: Yes, that is what I wanted.

SP: This is where the mandatory signaling comes in. If there is no mandatory signaling, the LOT=true nodes will stall until somebody mines a block they would like to see, a block that signals. If they see this signaling block, we are back in the earlier example where suddenly the regular Bitcoin Core nodes see this signaling but ignore it. Now there is a group of nodes that believe Taproot is active and a group of nodes that don't. Then somebody has to decide what to do about it.

AvW: Are you still talking about false signaling here?

SP: Even if the signaling is genuine you still want there to be a Bitcoin Core release, probably, that actually says “We have Taproot now.” But the question is when do we have Taproot according to that release? What is a safe date to put in there? You could do it retroactively.

AvW: Whenever they want. The point is that if miners enforce the new rules, the chain will stay together. It is up to Bitcoin Core to implement them whenever they feel like it.

SP: The problem with this signaling is that you don't know whether it is being enforced until somebody decides to try to break the rules.

AvW: My assumption was that there was no false signaling. They will create the longest chain with the valid rules anyway.

SP: The problem with that is that you can't know.

AvW: The scenario I actually wanted to get to, Sjors, is the very simple one where a majority of miners just don't signal when the 18 months run out. In that case they are going to create the longest chain, which the Bitcoin Core 0.21.1 nodes are going to follow, while the LOT=true nodes are only going to accept blocks that do signal, which could be none, or at least fewer. If it is a majority, then there is no split. But if it is not a majority, then we have a split.

SP: And that chain would fall further and further behind. The incentive to do a release that takes that into account would be pretty small, I think. It depends, this is where the game theory comes in. From a safety point of view, if you now do a release that says "By the way, we retroactively consider Taproot active", that would cause a gigantic reorg. If you only activate it going forward, that wouldn't cause a giant reorg. But if you say "By the way, we are retroactively going to mandate that signaling you care about", that would cause a massive reorg. That would be unsafe, so it would probably not be something that gets released. It is a very complicated situation.

AvW: There are messy potential scenarios. I want to emphasize to our dear listeners that none of this is going to happen in the next couple of months.

SP: And hopefully never. Let's throw out a few other bad scenarios and then I guess we can move on to other topics.

AvW: I want to quickly mention that the reason I am not too worried about these bad scenarios is that I think if it looks even remotely likely that there will be a coin split or something like that, there will probably be futures markets. Those futures markets will probably make it very clear to everyone which alternative chain stands a chance, which will inform miners about what to mine and prevent a split that way. I have quite a lot of confidence in the collective wisdom of the market to warn everyone about potential scenarios, so it will probably work out fine. That is my general perception.

SP: The problem with this sort of thing is that if it doesn't work out, it is very, very bad. Then we can say retroactively "I guess it didn't work out."

The LOT=true client development process

AvW: I want to bring something up before you bring up what you wanted to bring up. I have seen some concerns from Bitcoin Core developers about the development process of the LOT=true client. I think it comes down to the Gitian building, the Gitian signing that we also discussed in another episode.

SP: We have talked about the need for software to be open source, for it to be easy to audit.

AvW: Can you give your opinion on that in this context?

SP: The change they made relative to the main Bitcoin Core client is not huge. You can see it on GitHub. In that sense, the open source part of it is reasonably feasible to verify. I think that code has had less review, but not zero review.

AvW: Less than Bitcoin Core's?

SP: Exactly, but much more than the UASF client, much more than the one from 2017.

AvW: More than that one?

SP: I would say. The idea has been studied a bit longer. But the second problem is how do you know that what you are downloading isn’t malware. There are two measures there. There is release signatures, the website explains pretty well how to check those. I think they were signed by Luke Dashjr and by the other developer. You can check that.

AvW: Bitcoin Mechanic is the other developer. It is actually released by Bitcoin Mechanic and Shinobi, and Luke Dashjr is the advisor, the contributor.

SP: Usually there is a binary file you download, and then there is a file with checksums, and that checksums file is also signed by a known person. If you have Luke's key, or whoever's key, and you know it, you can at least check that the binary you downloaded didn't come from a hacked website. The second thing is, you have a binary and you know they signed it, but who are they? You also want to check that this code matches the binary, and that is where the Gitian build we talked about in a previous [episode](https://www.youtube.com/watch?v=_qdhc5WLd2A) comes in. Basically, deterministic builds. It takes the source code and produces the binary. Multiple people can then sign that, indeed, according to them this source code produces this binary. The more people confirm that, the more likely it is there is no collusion. I believe there are only two Gitian signatures for this other release.
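As a rough illustration of the first step Sjors describes (checking that a downloaded binary matches the published checksums file), here is a small Python sketch. The file names are placeholders, not the actual release artifacts, and verifying the PGP signature on the checksums file itself is a separate step done with the signer's public key.

```python
# Minimal sketch of verifying a downloaded binary against a checksums file
# (e.g. a SHA256SUMS-style file). File names here are placeholders.
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def expected_digest(checksums_path: str, filename: str) -> str:
    """Look up a file's digest in a '<hex digest>  <filename>' style list."""
    with open(checksums_path) as f:
        for line in f:
            digest, _, name = line.strip().partition("  ")
            if name == filename:
                return digest
    raise ValueError(f"{filename} not listed in {checksums_path}")

binary = "taproot-client.tar.gz"   # placeholder name
assert sha256_of(binary) == expected_digest("SHA256SUMS", binary), "checksum mismatch"
print("checksum matches")
```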

AvW: So the Bitcoin Core software is Gitian signed by…

SP: I think 10 or 20 people.

AvW: A lot of the experienced Bitcoin Core developers who have been developing the Bitcoin Core software for a while. Including you? Did you sign it?

SP: The most recent release, yes.

AvW: You trust that they aren't all colluding and spreading malware. For most people it comes down to trust in that sense.

SP: If you are really thinking about using this alternative software, you should know what you are doing in terms of all these reorg scenarios. If you already know what you are doing in those terms, then just compile the thing from source. Why not? If you are not able to compile things from source, you probably shouldn't be running this. But that is up to you. I am not worried that they are shipping malware, but in general it is only a matter of time before somebody says "I have a different version with LOT=happy, please download it here" and steals all your Bitcoin. It is more the precedent this sets that worries me than that this particular thing might actually contain malware.

AvW: That is fair. Maybe sign it then, Sjors?

SP: No, because I don't think it is a healthy thing to release.

AvW: Fair enough.

SP: That is just my opinion. Everyone is free to release whatever they want.

AvW: Was there anything else you wanted to discuss?

What would Bitcoin Core release if Speedy Trial failed to activate?

SP: Yes, we talked about true or false signaling on bit 1, but I think one very real possibility is that if this activation fails and we want to try something else, then we probably don't want to use the same bit if it is before the timeout window. That could create a scenario where you start saying "Let's use another bit to do the signaling." Then you could have some confusion where there is a new Bitcoin Core release that activates using bit 3 for example, but the LOT=true people don't see it because they are looking at bit 1. That may or may not be a real problem. The other thing is there could be all sorts of other ways to activate this thing. One could be a flag day. If Bitcoin Core released a flag day, then there won't be any signaling. The LOT=true client won't know Taproot is active and will demand signaling at some point even though Taproot is already active.

AvW: Your point is that we don't know what Bitcoin Core will release after Speedy Trial, and whatever they might release wouldn't necessarily be compatible with the LOT=true client. That works both ways, of course.

SP: Sure, I am just reasoning from one side. I would also say that in the case where Bitcoin Core releases something else that has fairly broad community support, I imagine the people running the BIP 8 clients aren't sitting in a cave somewhere. They are probably relatively active users who may decide "I am going to run this Bitcoin Core release again because there is a flag day in it that comes before the forced signaling." I could imagine they would decide to run it, or not.

AvW: That also works both ways.

SP: No, not really. I am much more worried about the people who don't follow this debate and just use the most recent version of Core. Or they don't upgrade at all, they are still running, say, Bitcoin Core v0.15. I am much more worried about that group than about the group that actively takes a position on this matter. If you actively take a position by running something else, then you know what you are doing. It is up to you to stay up to date. But we have a commitment to all users that if you are still running Bitcoin Core version 0.15 in your bunker, nothing bad should happen to you as long as you follow the most proof of work within the rules you know about.

AvW: That could also mean making it compatible with the LOT=true client.

SP: No, as far as the v0.15 node is concerned there is no LOT=true client.

AvW: Do we want to go into all sorts of scenarios? The scenario that worries me most is that the LOT=true chain, to call it that, will win if there is ever a split, but only after a while, because you get long reorgs. This goes back to the LOT=true versus LOT=false discussion in the first place.

SP: I only see that happening together with a massive collapse in the price of Bitcoin itself. If it gets to the point where LOT=true starts winning after a delay that requires a big reorg… if it is more likely to win, its relative price will go up because it is more likely to win. But because a bigger reorg is more disastrous for Bitcoin, the longer the reorg, the lower the Bitcoin price will be. That would be the bad scenario. If there is a reorg of 1000 blocks or more, then I think the Bitcoin price will collapse to something very low. At that point we don't really care anymore whether the LOT=true client wins or not. It doesn't matter.

AvW: I agree with that. The reason I am not worried about it is what I mentioned before: I think these things will be sorted out by futures markets long before it actually happens.

SP: I guess the futures market would predict exactly that. That wouldn't be good. It depends on your confidence in futures markets, which for me isn't that great.

Altura del bloque frente a MTP

https://github.com/bitcoin/bitcoin/pull/21377#issuecomment-818758277

SP: Podríamos seguir hablando de esta diferencia de fondo entre la altura de los bloques y el tiempo de los bloques. Hubo un fiasco pero no creo que sea una diferencia interesante.

AvW: También podríamos mencionarlo.

SP: Cuando describimos por primera vez el Speedy Trial asumimos que todo se basaría en la altura de los bloques. Habría una transformación de la forma en que funcionan ahora las horquillas suaves, que se basa en estos tiempos medios, a la altura de los bloques, que es conceptualmente más sencilla. Más tarde hubo alguna discusión entre la gente que estaba trabajando en eso, considerando que tal vez la única diferencia del Speedy Trial debería ser la altura de activación y ninguno de los otros cambios. Desde el punto de vista de la base de código existente es más fácil hacer que el Speedy Trial ajuste un parámetro que es una altura de activación mínima frente al cambio donde se cambia todo en alturas de bloque que es un cambio mayor del código existente. Aunque el resultado final sea más fácil. Un enfoque basado puramente en la altura de los bloques es más fácil de entender, más fácil de explicar lo que va a hacer, cuando lo va a hacer. Algunos casos extremos también son más fáciles. Pero permanecer más cerca de la base de código existente es algo más fácil para los revisores. La diferencia es bastante pequeña, así que creo que algunas personas decidieron lanzar una moneda al aire y otras creo que estuvieron de acuerdo sin lanzarla.

AvW: Hay argumentos en ambos lados pero parecen ser bastante sutiles, bastante matizados. Como hemos dicho, el Speedy Trial va a empezar en la misma fecha, así que no parece importar mucho. En algún momento algunos desarrolladores estaban considerando seriamente decidir mediante un lanzamiento de moneda usando la cadena de bloques de Bitcoin para ello, eligiendo un bloque en el futuro cercano y viendo si termina con un número par o impar. No sé si eso fue literalmente lo que hicieron pero sería una forma de hacerlo. Creo que sí hicieron el lanzamiento de la moneda, pero luego los defensores de ambas soluciones terminaron poniéndose de acuerdo de todos modos.

SP: Estuvieron de acuerdo en lo mismo que dijo la moneda.

AvW: El principal discrepante fue Luke Dashjr, que se siente fuertemente comprometido con el uso de las alturas de los bloques de forma consistente. También es de la opinión de que la comunidad ha llegado a un consenso al respecto y que el hecho de que los desarrolladores de Bitcoin Core no lo utilicen es un retroceso o una ruptura del consenso de la comunidad.

SP: Esa es su perspectiva. Si miras a la persona que escribió el pull request original que se basaba puramente en la altura, creo que fue Andrew Chow, cerró su propio pull request a favor de la solución mixta que tenemos ahora. Si la persona que escribe el código lo elimina él mismo, creo que está bastante claro. Desde mi punto de vista la gente que está poniendo más esfuerzo debería decidir cuando es algo tan trivial. No creo que importe tanto.

AvW: A mí me parece un punto menor, pero está claro que no todo el mundo está de acuerdo en que sea un punto menor.

SP: De eso se trata el bikeshedding, ¿no? No sería bikeshedding si todo el mundo pensara que es irrelevante el color del cobertizo para bicicletas.

AvW: Dejemos el tema de la moneda y la hora, la altura de la cuadra detrás de nosotros Sjors porque creo que cubrimos todo y tal vez no deberíamos insistir en este último punto. ¿Ya está? Espero que haya quedado claro.

SP: Creo que aún podemos intercalar muy brevemente una cosa que se ha planteado, que es el ataque en el tiempo.

AvW: No lo hemos mencionado, pero es algo relevante en este contexto. Un argumento contra el uso de la hora de los bloques es que abre la puerta a los ataques de timewarp, en los que los mineros falsifican las marcas de tiempo de los bloques que minan para fingir que se trata de una hora y una fecha diferentes. De este modo, pueden, por ejemplo, saltarse el periodo de señalización, si se confabulan para hacerlo.

SP: Eso parece una cantidad enorme de esfuerzo sin una buena razón, pero es un escenario interesante. Hicimos un episodio sobre el ataque de timewarp hace mucho tiempo, cuando yo lo entendía. Hay una propuesta de bifurcación suave para deshacerse de él que no creo que nadie haya objetado, pero tampoco nadie se ha molestado en poner en práctica. Una manera de lidiar con este hipotético escenario es que si ocurriera entonces desplegamos el soft fork contra el ataque timewarp primero y luego intentamos la activación de Taproot de nuevo.

AvW: El argumento en contra de alguien como Luke es que, por supuesto, se puede arreglar cualquier fallo, pero también se puede simplemente no incluir el fallo en primer lugar.

SP: Es bueno saber que los mineros estarían dispuestos a utilizarlo. Si sabemos que los mineros están realmente dispuestos a explotar el ataque de timewarp es una información increíblemente valiosa. Si tienen una manera de confabularse y una motivación para usar ese ataque… El coste de ese ataque sería bastante bajo, sería retrasar Taproot unos meses pero tendríamos esta conspiración masiva desvelada. Creo que eso es una victoria.

AvW: La forma en que Luke lo ve es que ya había consenso en todo tipo de cosas, usando el BIP 8 y esta cosa de LOT=true, vio esto como una especie de esfuerzo de consenso. En su opinión, el uso de los tiempos de bloqueo está frustrando eso. No quiero hablar por él, pero si trato de canalizar un poco a Luke o explicar su perspectiva sería eso. En su opinión, el consenso ya se estaba formando y ahora es un camino diferente.

SP: No creo que este nuevo enfoque bloquee tanto lo de LOT=verdad. Hemos pasado por todos los escenarios y la confusión no estaba en torno a la altura de los bloques frente al tiempo, sino en todo tipo de cosas que podrían salir mal dependiendo de cómo evolucionaran las cosas. Pero no esa cuestión en particular. En cuanto al consenso, el consenso está en el ojo del que mira. Yo diría que si varias personas no están de acuerdo, entonces no hay consenso.

AvW: Eso también funcionaría al revés. Si Luke no está de acuerdo, utiliza el tiempo en bloque.

SP: Pero no puede decir que haya habido consenso en algo. Si la gente no está de acuerdo, por definición no hubo consenso.

AvW: Mi impresión es que no hubo consenso porque la gente no está de acuerdo. Terminemos. Para nuestros oyentes que están confundidos y preocupados voy a enfatizar que los próximos 3 meses la prueba rápida va a funcionar en ambos clientes. Si los mineros se activan a través de Speedy Trial vamos a tener Taproot en noviembre y todo el mundo va a ser feliz. Continuaremos la discusión del soft fork con el próximo soft fork.

SP: Volveremos a tener las mismas discusiones porque no hemos aprendido absolutamente nada.

\ No newline at end of file diff --git a/es/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/index.html b/es/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/index.html index e5d691c523..58d42957cc 100644 --- a/es/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/index.html +++ b/es/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/index.html @@ -25,4 +25,4 @@ Eric Lombrozo, Luke Dashjr

Fecha: August 3, 2020

Transcripción De: Michael Folkson

Traducción Por: Blue Moon

Tags: Taproot, Soft fork activation

Categoría: Podcast

Media: -https://www.youtube.com/watch?v=yQZb0RDyFCQ

Location: Bitcoin Magazine (en línea)

Aaron van Wirdum Aaron van Wirdum en Bitcoin Magazine sobre el BIP 8, el BIP 9 o la activación del Soft Fork moderno: https://bitcoinmagazine.com/articles/bip-8-bip-9-or-modern-soft-fork-activation-how-bitcoin-could-upgrade-next

David Harding sobre las propuestas de activación de Taproot: https://gist.github.com/harding/dda66f5fd00611c0890bdfa70e28152d

Introducción

Aaron van Wirdum (AvW): Eric, Luke bienvenido. Feliz Día de la Independencia de Bitcoin. ¿Cómo están?

Eric Lombrozo (EL): Estamos muy bien. ¿Cómo están ustedes?

AvW: Estoy bien, gracias. Luke, ¿cómo estás?

Luke Dashjr (LD): BIEN. ¿Cómo estás tú?

AvW: Bien, gracias. Es genial teneros en el Día de la Independencia del Bitcoin. Obviamente, ambos habéis jugado un gran papel en el movimiento UASF. Fuisteis tal vez dos de los partidarios más prominentes e influyentes. Este Día de la Independencia de Bitcoin, esta cosa del 1 de agosto de hace un par de años, se trataba de la activación de SegWit, la activación del soft fork. Esto está siendo relevante de nuevo porque estamos viendo una nueva bifurcación suave que podría estar llegando, Taproot. En general, la conversación sobre cómo activar las bifurcaciones suaves está empezando de nuevo. Así que lo que sucedió hace un par de años se está volviendo relevante de nuevo.

EL: No queremos repetir lo que ocurrió hace unos años. Queremos hacerlo mejor esta vez.

#Activaciones anteriores de la bifurcación suave

AvW: Eso es lo que quiero discutir con ustedes y por qué es genial tenerlos aquí. Empecemos con eso Eric. Hace un par de años, aparentemente algo salió mal. En primer lugar, mencionaremos brevemente que hubo un proceso llamado BIP 9. ¿Quieres explicar muy brevemente lo que era y luego se puede entrar en por qué era un problema o lo que salió mal?

EL: Las primeras bifurcaciones suaves que se activaron fueron con fecha de bandera, simplemente escritas en el código. Más tarde, para hacer la transición más suave, se incorporó la señalización de los mineros. Entonces el BIP 9 fue esta propuesta que permitía que varias bifurcaciones suaves estuvieran bajo activación al mismo tiempo con la idea de que habría extensibilidad al protocolo y habría todo este proceso que podríamos usar para añadir nuevas características. Pero resultó ser muy desordenado y no estoy seguro de que ese proceso sea sostenible.

AvW: Para que quede claro, la idea de BIP 9 era que hubiera una bifurcación suave, que hubiera un cambio en el protocolo, y que el reto fuera conseguir que la red se actualizara a ese cambio sin dividir la red entre los nodos actualizados y los no actualizados. La idea era dejar que la coordinación de la activación dependiera del hashpower. Una vez que suficientes mineros hayan señalado que se han actualizado la red lo reconoce, todos los nodos actualizados lo reconocen y empiezan a aplicar las nuevas reglas. La razón por la que esto es una buena idea es porque si la mayoría de los mineros hacen esto no hay riesgo de que la cadena se divida. Incluso los nodos no actualizados seguirán la versión “actualizada” de la cadena de bloques.

EL: Sería la cadena más larga. Por defecto, todos los demás clientes seguirían esa cadena automáticamente.

AvW: Esa es la ventaja. Pero parece que algo salió mal. O, al menos, eso cree usted.

EL: Las primeras bifurcaciones suaves no fueron realmente políticas en absoluto. No hubo ninguna disputa. SegWit fue la primera bifurcación suave que realmente generó cierta controversia debido a todo el asunto del tamaño de los bloques que ocurrió antes. Fue un momento realmente polémico en el espacio de Bitcoin. La activación se politizó, lo cual fue un problema muy serio porque el BIP 9 no fue diseñado para tratar con la política. Fue diseñado para lidiar con la activación por señalización de los mineros sólo por lo que estabas hablando, para que la red no se dividiera. Es realmente un proceso técnico para asegurarse de que todo el mundo está en la misma página. La idea era que todo el mundo estuviera ya a bordo de esto antes del lanzamiento sin ninguna política. No era ningún tipo de sistema de votación ni nada parecido. No se diseñó ni se pretendió que fuera un sistema de votación. Algunas personas lo malinterpretaron así y algunas personas se aprovecharon de la confusión para hacer que pareciera un proceso de votación. Se abusó mucho de ello y los mineros empezaron a manipular el proceso de señalización para alterar el precio del Bitcoin y otras cosas. Se convirtió en algo muy, muy complicado. No creo que queramos hacerlo así esta vez.

AvW: Este mecanismo de coordinación, básicamente se concedió a los mineros en cierto modo para que las actualizaciones se hicieran sin problemas. Su perspectiva es que empezaron a abusar de este derecho que se les concedió. ¿Es un buen resumen?

EL: Sí. Tenía un umbral del 95 por ciento, que era algo bastante razonable antes, cuando se trataba sobre todo de mejoras técnicas que no estaban politizadas en absoluto. Era un umbral de seguridad en el que si el 95 por ciento de la potencia de la red está señalizando, entonces es casi seguro que la red no se va a dividir. Pero en realidad sólo se necesita más de la mayoría del hashpower para que la cadena sea la más larga. Se podría bajar ese umbral y aumentar ligeramente el riesgo de que la cadena se divida. El umbral del 95 por ciento, si más del 5 por ciento de los hashpower no lo señalan, obtendrían un veto. Podían vetar toda la activación. Le dio a una minoría muy pequeña la capacidad de vetar una bifurcación suave. Por defecto, si no se activaba, simplemente no se bloqueaba y fallaba. Esto era un problema porque estaba en medio del asunto del tamaño de los bloques y todo el mundo quería una resolución para esto. El fracaso no era realmente una opción.

AvW: Luke, ¿estás de acuerdo con este análisis? ¿Es así como lo ves?

LD: Más o menos sí.

SegWit y BIP 148

AvW: En algún momento hubo una solución para salir del estancamiento que era un UASF. Esto finalmente tomó forma en el BIP 148. Luke, creo que usted participó en el primer proceso de decisión. ¿Cuál era la idea que había detrás del PIF 148?

LD: En realidad no participé tan pronto. De hecho, en un principio me opuse a ella.

AvW: ¿Por qué?

LD: Realmente no analicé las implicaciones en su totalidad.

AvW: ¿Puede explicar qué hizo el PIF 148 y por qué se diseñó de la forma en que se diseñó?

LD: Esencialmente, devolvió la decisión a los mineros. “El 1 de agosto vamos a empezar el proceso de activación de esto y eso es todo”.

AvW: Creo que la forma específica en que lo hizo fue que los nodos del BIP 148 empezaran a rechazar los bloques que no señalaran realmente la compatibilidad con SegWit. ¿Es eso cierto?

LD: Sí. Así es como se han desplegado anteriormente la mayoría de las bifurcaciones suaves activadas por el usuario. Si los mineros no señalaban la nueva versión, sus bloques serían inválidos.

AvW: Hay una diferencia de matiz entre la activación de SegWit y la aplicación de SegWit o la señalización de SegWit, ¿verdad?

LD: Las anteriores bifurcaciones suaves habían hecho prácticamente ambas cosas todo el tiempo. Antes de BIP 9 era un número de versión incremental. Los mineros tenían que tener el número de versión correcto o sus bloques eran inválidos. Cuando se lanzó la versión 3, todos los bloques de la versión 2 y anteriores dejaron de ser válidos.

AvW: ¿En algún momento empezaron a apoyar el BIP 148? ¿Por qué fue así?

LD: En un momento dado había un número suficiente de miembros de la comunidad que decían: “Vamos a hacer el BIP 148 sin importar cuántos otros estén a bordo”. A partir de ese momento, una vez que fue una minoría considerable, la única manera de evitar una ruptura en cadena era ir a por todas.

EL: En ese momento era todo o nada.

AvW: ¿Qué tamaño debe tener una minoría así? ¿Existe alguna forma de medirlo, pensarlo o razonarlo? ¿Qué les hizo darse cuenta de que había suficientes usuarios que apoyaban esto?

LD: It really comes down to not so much the number of people but their relevance to the economy. All these people that are going to do BIP 148, can you just ignore them and economically pressure them or cut them off from everyone else? Or is that not viable anymore?

AvW: ¿Cómo se toma esa decisión? ¿Cómo se distingue entre lo inviable y lo viable?

EL: En ese momento lo que hicimos fue intentar hablar con muchas bolsas y con muchas otras personas del ecosistema para evaluar su nivel de apoyo. Muchos usuarios estaban muy a favor, pero cuando se hizo más evidente que muchos de los grandes nodos económicos del sistema iban a utilizar el BIP 148, quedó claro que era hacerlo o morir, era todo o nada. En ese momento había que conseguir que todo el mundo se subiera a bordo o esto no iba a funcionar.

LD: Irónicamente, uno de los primeros partidarios del BIP 148 fue BitPay, que por aquel entonces era importante.

AvW: ¿Lo eran?

LD: Creo que perdieron relevancia poco después. Pero por aquel entonces eran más importantes.

AvW: Has mencionado esta situación de “hazlo o muérete”, ¿cuáles son los riesgos de morir? ¿Qué podría salir mal con algo como el BIP 148?

EL: En ese momento estábamos bastante seguros de que si la gente quería bifurcarse, lo iba a hacer. Hubo un gran empuje al asunto del BCH. Era “Vale, si la gente quiere bifurcarse, que se bifurque”. Dondequiera que la economía vaya es hacia donde la gente va a gravitar. Nosotros creíamos que gravitarían hacia el uso del BIP 148, insistiendo en que todos los demás lo hicieran porque eso era lo mejor para los intereses económicos de todos. Eso es lo que ocurrió. Era un riesgo, pero creo que era un riesgo calculado. Creo que habría habido una ruptura en cadena de todas formas. La cuestión era cómo asegurarnos de que la ruptura de la cadena tuviera la menor cantidad de repercusiones económicas para las personas que querían permanecer en la cadena SegWit.

AvW: Para que quede claro, el BIP 148 también podría haber provocado una división de la cadena y, posiblemente, una reordenación. Era un riesgo en ese sentido.

EL: Claro que estábamos en un territorio desconocido. Creo que la teoría era bastante sólida, pero siempre hay un riesgo. Creo que, en este punto, el hecho de que hubiera un riesgo formaba parte de la motivación para que la gente quisiera ejecutar el BIP 148, porque cuantas más personas lo hicieran, menor sería el riesgo.

AvW: Esa era una interesante ventaja teórica del juego que tenía este BIP. Hasta el día de hoy existe un desacuerdo al respecto, ¿qué creen ustedes que hizo realmente el PBI 148? Algunos dicen que al final no hizo nada. Los mineros sólo fueron los que actualizaron el protocolo. ¿Cómo lo ve usted?

LD: Fue claramente el BIP 148 el que consiguió que se activara SegWit. Si no hubiera sido así, todo el mundo lo habría tenido muy claro porque los bloques habrían sido rechazados por los nodos del BIP 148. No hay forma de tener un 100% de señalización de mineros sin el BIP 148.

AvW: Eric, ¿estás de acuerdo con eso?

EL: Creo que el BIP 148 desempeñó un gran papel, pero creo que hubo muchos factores que fueron muy importantes. Por ejemplo, el hecho de que SegWit2x estuviera en marcha y todo el acuerdo de Nueva York. Creo que eso impulsó a la gente a querer apoyar una bifurcación suave activada por el usuario aún más. Es un poco irónico. Si todo el asunto de SegWit2x no hubiera estado ocurriendo la gente podría haber sido más complaciente y haber dicho “Vamos a aguantar un poco y esperar a ver qué pasa”. Creo que esto presionó a todo el mundo para pasar a la acción. Parecía que había una amenaza inminente, así que la gente tenía que hacer algo. Ese fue el momento en el que creo que la teoría del juego empezó a funcionar de verdad, porque entonces sería posible cruzar ese umbral en el que es hacer o morir, el punto de no retorno.

AvW: Permítanme reformular un poco la pregunta. ¿Creen ustedes que SegWit se habría activado si el PIF 148 no hubiera ocurrido? ¿Tendríamos SegWit hoy?

EL: Es difícil de decir. Creo que al final podría haberse activado. En ese momento la gente lo quería y era un momento decisivo en el que la gente quería una resolución. Cuanto más se alargaba, más incertidumbre había. La incertidumbre en el protocolo no es buena para la red. Podrían haber surgido otros problemas. Creo que el momento fue el adecuado, el lugar adecuado y el momento adecuado para que esto ocurriera. ¿Podría haber ocurrido más tarde? Posiblemente. Pero creo que habría sido mucho más arriesgado,

LD: Esta conferencia de hoy probablemente estaría celebrando la activación final de SegWit en lugar de Taproot.

AvW: Una última pregunta. Para resumir esta parte de la historia de Bitcoin, ¿cuáles fueron las lecciones de este episodio? ¿Qué nos llevamos de este periodo de la historia de Bitcoin hacia adelante?

EL: Creo que es muy importante que intentemos evitar politizar este tipo de cosas. Lo ideal es que no queramos que la capa base del protocolo cambie mucho. Cada vez que se produce un cambio, se introduce un vector de ataque. Enseguida la gente podría intentar insertar vulnerabilidades o exploits o intentar dividir a la comunidad o crear ataques de ingeniería social o cosas así. Cada vez que se abre la puerta a cambiar las reglas, se está abriendo a los ataques. Ahora estamos en un punto en el que es imposible actualizar el protocolo sin tener algún tipo de proceso. Pero cuanto más intentamos ampliarlo, más tiende a convertirse en algo político. Espero que Taproot no cree el mismo tipo de contención y no se politice. No creo que sea un buen precedente que estas cosas sean controvertidas. Al mismo tiempo, no quiero que esto sea un proceso habitual. No quiero convertir en un hábito la activación de las horquillas blandas todo el tiempo. Creo que Taproot es una adición muy importante a Bitcoin y parece tener mucho apoyo de la comunidad. Sería muy bueno incluirlo ahora. Creo que si esperamos va a ser más difícil activarlo más adelante.

BIP 8 como una posible mejora de BIP 9

AvW: Has mencionado que en 2017 se produjo este periodo de controversia, la guerra civil del escalado, como quieras llamarlo. ¿Hubo realmente un problema con el PIF 9 o solo fue un periodo de controversia y a estas alturas el PIF 9 volvería a estar bien?

EL: Creo que el problema del PIF 9 fue que era muy optimista. La gente jugaría bien y que la gente cooperaría. El PIF 9 no funciona en absoluto bien en un escenario no cooperativo. Por defecto, fracasa. No sé si es algo que queremos hacer en el futuro porque si falla significa que todavía hay controversia, que no se ha resuelto. El PIF 8 tiene una fecha límite en la que tiene que activarse a una hora determinada. Por defecto se activa, mientras que el BIP 9 no se activa por defecto. Elimina el poder de veto de los mineros. Creo que el BIP 9 es muy problemático. En el mejor de los casos podría funcionar, pero es demasiado fácil de atacar.

LD: Y crea un incentivo para ese ataque.

AvW: Lo que está diciendo es que incluso en un periodo en el que no haya una guerra civil, ni una gran controversia, utilizar algo como el BIP 9 invitaría a la controversia. ¿Es eso lo que le preocupa?

EL: Posiblemente sí.

LD: Porque ahora los mineros pueden tener como rehén el soft fork que presumiblemente ya ha sido acordado por la comunidad. Si no ha sido acordado por la comunidad no deberíamos desplegarlo con ninguna activación, y punto.

AvW: Luke has estado trabajando en el BIP 8 que es una forma alternativa de activar las bifurcaciones suaves. ¿Puedes explicar en qué consiste el BIP 8?

LD: Más que una alternativa, lo veo como una forma de tomar el BIP 9 y arreglar los errores que tiene.

AvW: ¿Cómo se solucionan los errores? ¿Qué hace la BIP 8?

LD: El error más obvio fue el hecho de que uno de los problemas del BIP 9 cuando fuimos a hacer el BIP 148 era que originalmente se había fijado para la activación en noviembre, no en agosto. Entonces la gente se dio cuenta de que si el hashpower es demasiado bajo podríamos forzar la señalización durante noviembre y seguiría sin activarse porque el tiempo transcurriría demasiado rápido. Esto se debe a que el BIP 9 utilizaba marcas de tiempo, tiempo del mundo real, para determinar cuándo expiraría el BIP.

AvW: Porque los bloques se pueden minar más rápido o más lento, así que el tiempo del mundo real puede darte problemas.

LD: Si los bloques se minan demasiado despacio, el tiempo de espera se produce antes de que termine el periodo de dificultad, que era uno de los requisitos para la activación. El error más obvio que arregló el BIP 8 fue utilizar alturas en lugar de tiempo. De esta forma, si los bloques se ralentizaban, el tiempo de espera también se retrasaba. El BIP 148 solucionó esto adelantando el periodo de señalización obligatoria a agosto, que era muy rápido. Creo que todo el mundo estaba de acuerdo con eso. Fue una necesidad desafortunada debido a ese fallo. La otra es, como estábamos hablando, que crea un incentivo para que los mineros bloqueen la bifurcación suave. Si se activa después del tiempo de espera, ese incentivo desaparece. Hay riesgos si se encuentra un error. No podemos echarnos atrás una vez que se ha establecido la activación. Por supuesto, deberíamos encontrar errores antes de establecer la activación, así que esperamos que eso no importe.

AvW: ¿Es posible configurar el BIP 8? Se puede forzar la señalización al final o no. ¿Cómo funciona esto exactamente?

LD: Ese fue un cambio más reciente, ahora que hemos vuelto a poner en marcha el tema de la activación. Está diseñado para que puedas desplegarlo con una activación que haga timeout y aborte y luego cambiar esa bandera a un UASF. Si el UASF se establece más tarde, siempre y cuando tenga suficiente apoyo de la comunidad para hacer el UASF, todos los nodos que fueron configurados sin el UASF seguirán con él.

AvW: ¿Quién establece esta bandera? ¿Está integrada en una versión del software o es algo que los usuarios pueden hacer manualmente? ¿O es algo que se compila?

LD: Eso es un detalle de implementación. Al fin y al cabo, los usuarios pueden modificar el código o alguien puede proporcionar una versión con esa modificación.

AvW: ¿Cómo lo harías? ¿Cómo le gustaría que se utilizara el BIP 8?

LD: Como no deberíamos desplegar ningún parámetro de activación sin el suficiente apoyo de la comunidad, creo que deberíamos establecerlo por adelantado. Había un UASF de Bitcoin Core con el código del BIP 148 en él. Creo que sería mejor hacer todas las bifurcaciones suaves de esa manera a partir de ahora y dejar las versiones vanilla de Bitcoin Core sin ninguna activación de bifurcación suave.

AvW: Esa es una perspectiva interesante. ¿Podría Bitcoin Core no incluir ningún tipo de activación de BIP 8? ¿Está completamente integrado en clientes alternativos como el cliente BIP 148 y no en Bitcoin Core en absoluto? ¿Con el UASF esta es la forma de hacerlo a partir de ahora?

LD: Creo que sería lo mejor. Antes de 2017 habría dudado de que la comunidad lo aceptara, pero ha funcionado bien con el BIP 148, así que no veo ninguna razón para no continuarlo.

AvW: ¿No debería Bitcoin Core en ese caso incluir al menos el BIP 8 sin señalización forzada para que siga aplicando la bifurcación suave y siga adelante?

LD: Posiblemente. También hay un punto intermedio en el que podría detectar que se ha activado.

AvW: ¿Y entonces qué? ¿Los usuarios saben que deben actualizarse?

LD: Puede activarse en ese momento.

AvW: ¿Cuál es la ventaja de detectar que se ha actualizado?

LD: Para que pueda activar las reglas y seguir siendo un nodo completo. Si no está aplicando las horquillas suaves más recientes, ya no es un nodo completo. Eso puede ser un problema de seguridad.

AvW: That is what I meant with BIP 8 without forced signaling. I thought that was the same thing.

LD: There is a pull request to BIP 8 that may make the same thing. I would have to look at that pull request, I am not sure it is identical quite yet.

#Activación a través de clientes alternativos

AvW: Eric, ¿qué opinas de esta idea de activar las bifurcaciones suaves a través de clientes alternativos como hizo el BIP 148?

EL: Probablemente es una buena idea. Creo que es mejor que Bitcoin Core se mantenga al margen de todo el proceso. Cuanto menos político sea Bitcoin Core, mejor. La gente que ha trabajado en estos BIPs en su mayoría no quiere involucrarse demasiado en estas cosas públicas. Creo que es lo mejor. Creo que sentaría un precedente realmente horrible para Bitcoin Core el estar desplegando cambios de protocolo por sí mismo. Es muy importante que el apoyo de la comunidad esté ahí y se demuestre fuera del proyecto Bitcoin Core y que sea decisivo. Que no se politice una vez desplegado. Creo que es importante que haya suficiente apoyo antes del despliegue y es bastante seguro que va a suceder. En ese momento no hay que tomar más decisiones ni nada por el estilo, porque cualquier otra decisión que se añada al proceso no hace más que añadir más puntos potenciales en los que la gente podría intentar añadir polémica.

AvW: ¿Cuál es el problema si Bitcoin Core activa las bifurcaciones suaves o las bifurcaciones suaves se implementan en el cliente de Bitcoin Core?

EL: Sienta un precedente realmente malo porque Bitcoin Core se ha convertido en la implementación de referencia. Es el software de nodo más utilizado. Es muy peligroso, es una especie de separación de poderes. Sería realmente peligroso para Bitcoin Core tener la capacidad de implementar este tipo de cosas y desplegarlas especialmente bajo el radar, creo que sería realmente peligroso. Es muy importante que estas cosas sean revisadas y que todo el mundo tenga la oportunidad de verlas. Creo que ahora mismo la gente que está trabajando en Bitcoin Core puede ser buena, gente honesta y fiable, pero eventualmente podría ser infiltrada por gente o por otras personas que podrían no tener las mejores intenciones. Podría llegar a ser peligroso en algún momento si se convierte en un hábito hacerlo así.

AvW: O los desarrolladores de Bitcoin Core podrían ser coaccionados o recibir llamadas de agencias de tres letras.

LD: Especialmente si hay un precedente o incluso una apariencia de este supuesto poder. Va a provocar que la gente malintencionada piense que puede presionar a los desarrolladores e intente hacerlo aunque no funcione.

AvW: El inconveniente obvio es que si no hay suficiente gente que se actualice a este cliente alternativo en este caso podría dividir la red. Podría dividir la red, podría provocar el caos del que hablábamos antes con las reorganizaciones en cadena. ¿No es este un riesgo que le preocupa?

EL: Si no tiene suficiente apoyo previo, probablemente no debería hacerse. El apoyo debe estar ahí y debe quedar muy claro que hay un acuerdo casi unánime. Una gran parte de la comunidad lo quiere antes de que se despliegue el código. En el momento en que se despliegue el código, creo que sería bastante razonable esperar que la gente quiera ejecutarlo. De lo contrario, no creo que deba hacerse en absoluto.

AvW: Pero pone una fecha límite para que la gente actualice su software. La gente no puede ser demasiado perezosa. La gente tiene que hacer algo, de lo contrario surgirán riesgos.

LD: Eso es cierto, independientemente de la versión en la que se encuentre.

AvW: Entonces quiero plantear una hipótesis. Digamos que se elige esta solución. Existe este cliente alternativo que incluye la bifurcación suave con activación forzada de una u otra manera. No obtiene el apoyo que ustedes predicen que obtendrá o esperan que obtenga y provoca una ruptura de la cadena. ¿Qué versión es Bitcoin en ese caso?

LD: La comunidad tendría que decidirlo. No es algo que una persona pueda decidir o predecir. O el resto de la comunidad se actualiza o la gente que sí se actualizó no tendrá más remedio que revertir.

EL: En ese punto nos encontramos en un territorio desconocido en muchos sentidos. Tenemos que ver si los incentivos se alinean para que un número suficiente de personas quiera realmente apoyar una versión concreta de Bitcoin o no. Si hay un riesgo significativo de que pueda dividir la red permanentemente de una manera que no conduzca a un resultado decisivo, entonces no lo apoyaría. Creo que es importante que haya una alta probabilidad teórica de que la red tienda a converger. Los nodos económicos fuertes tenderán a converger en una blockchain concreta.

LD: También hay que tener en cuenta que no basta con no actualizar con algo así. Habría que bloquear explícitamente un bloque que rechace Taproot. De lo contrario, se correría el riesgo de que la cadena de Taproot superara su cadena y la sustituyera.

AvW: ¿Qué tipo de cronograma cree usted en este sentido? Luke, creo que te he visto mencionar un año. ¿Es eso cierto?

LD: Es importante que todos los nodos se actualicen, no sólo los mineros. Desde la fecha en que se hace el primer lanzamiento creo que tiene que haber al menos 3 meses antes de que pueda empezar la señalización. Una vez que comience, creo que un año estaría bien. Ni muy largo ni muy corto.

EL: Creo que un año podría ser un poco largo. Evidentemente, hay una contrapartida. Cuanto más corto sea el plazo, mayor será el riesgo de que no haya tiempo suficiente para que la gente se actualice. Pero, al mismo tiempo, cuanto más tiempo pase, mayor será la incertidumbre y eso también causa problemas. Es bueno encontrar el equilibrio adecuado. Creo que un año podría ser demasiado tiempo, no creo que vaya a tardar tanto. Preferiría que fuera más rápido. En realidad, me gustaría que esto ocurriera lo más rápido posible y que no causara una ruptura de la cadena o que redujera el riesgo de una ruptura de la cadena. Esa sería mi preferencia.

LD: No hay que olvidar que los mineros pueden señalar todavía y activarlo antes de un año.

EL: Claro, pero también está el tema del veto y la posibilidad de que la gente utilice la incertidumbre para jugar con los mercados u otras cosas por el estilo.

LD: No sé si hay un incentivo para intentar vetar cuando sólo se va a retrasar un año.

EL: Sí.

Activación de la horquilla suave moderna

AvW: La otra perspectiva en este debate sería, por ejemplo, la de Matt Corallo Modern Soft Fork Activation. Supongo que lo conoce. Yo mismo lo explicaré rápidamente. Modern Soft Fork Activation, la idea es que básicamente se utiliza el proceso de actualización del BIP 9 a la vieja usanza durante un año. Deja que los mineros lo activen durante un año. Si no funciona entonces los desarrolladores lo reconsiderarán durante 6 meses, verán si después de todo había un problema con Taproot en este caso, algo que se les había pasado por alto, alguna preocupación que los mineros tenían con él. Revisarlo durante 6 meses, si después de los 6 meses se encuentra que no había ningún problema real y los mineros estaban retrasando por cualquier razón, entonces la activación se vuelve a desplegar con un plazo duro de activación a los 2 años. ¿Qué opina de esto?

LD: Si es posible que haya un problema, ni siquiera deberíamos llegar a ese primer paso.

EL: No soy muy partidario de esto porque creo que se plantean demasiadas preguntas. Si no parece que sea decisivo y no hay una decisión tomada, la gente va a estar totalmente confundida. Es causar mucha incertidumbre. Creo que es algo en lo que tenemos que ser muy agresivos y conseguir que todo el mundo se suba al carro y sea decisivo y diga “Sí, esto va a pasar” o no vale la pena hacerlo. No me gusta la idea de ver qué pasa en 6 meses o lo que sea. O decidimos de inmediato que sí lo vamos a hacer o no.

LD: Es una especie de invitación a la controversia.

AvW: Sin embargo, la idea es que, al establecer el plan de esta manera, sigue existiendo la garantía de que al final se activará. Puede que tarde algo más de un año, pero al final se activará.

LD: Eso es lo que se puede hacer con el BIP 8, sin fijar el tiempo de espera. Luego, si se decide, se puede fijar el tiempo de espera.

AvW: Hay variaciones. Hay diferentes maneras de pensar en esto. La verdadera pregunta a la que quiero llegar es ¿cuál es la prisa? ¿No estamos en esto a largo plazo? ¿No estamos planeando construir algo que estará aquí durante 200 años? ¿Qué importan uno o dos años más?

EL: Creo que para las pruebas y la revisión de la propuesta real de Taproot debería haber tiempo suficiente. No deberíamos precipitarnos. No deberíamos intentar desplegar nada hasta que los desarrolladores que están trabajando en ello y lo están revisando y probando estén seguros de que está a un nivel en el que es seguro desplegarlo, independientemente de todo el tema de la activación. Sin embargo, creo que la activación debería ser rápida. No creo que se tarde tanto en incorporar a la gente. Estamos hablando de un par de meses como máximo para que todo el mundo se suba a bordo, si es que va a suceder. Si se tarda más tiempo, lo más probable es que no tenga suficiente apoyo. Probablemente no deberíamos hacerlo en primer lugar. Si la gente no puede hacerlo tan rápido, creo que todo el proceso es cuestionable. Cuantas más variables añadamos al proceso, más se invitará a la controversia y más se confundirá la gente y pensará que es más incierto. Con este tipo de cosas, creo que la gente busca la decisión y la resolución. Mantener la incertidumbre y la espera durante un largo periodo de tiempo no es saludable para la red. Creo que la parte de la activación debería ser rápida. El despliegue debería llevar el tiempo necesario para asegurarse de que el código es realmente bueno. No deberíamos desplegar un código que no haya sido completamente probado y revisado. Pero una vez que decidamos que “sí, esto es lo que hay que hacer”, creo que en ese momento debería ser rápido. De lo contrario, no deberíamos hacerlo en absoluto.

LD: La decisión, por supuesto, la toma la comunidad. Estoy de acuerdo con Eric.

AvW: Otro argumento a favor de la perspectiva de Matt es que el código debería ser bueno antes de desplegarlo, pero quizás en la realidad, en el mundo real, mucha gente sólo empezará a mirar el código, sólo empezará a mirar la actualización una vez que esté ahí fuera. Una vez que haya una versión de software que la incluya. Al no imponer inmediatamente la activación, se dispone de más tiempo para la revisión. ¿Qué opina usted?

LD: Creo que si la gente tiene algo que revisar debería hacerlo antes de que llegue a ese punto, antes de que se establezca la activación. Idealmente, antes de que se fusione.

AvW: Eric, ¿ves algún mérito en este argumento?

EL: Sin duda, cuantos más ojos lo miren, mejor, pero creo que es de suponer que nada se fusionará y se desplegará o se liberará hasta que al menos haya suficientes ojos competentes en esto. Ahora mismo creo que la gente más motivada es la que está trabajando en ello directamente. Puede que haya más ojos que lo miren después. Seguro que una vez que esté ahí fuera y una vez que se haya establecido la activación, recibirá más atención. Pero no creo que ese sea el momento en que la gente deba empezar a revisar. Creo que la revisión debería tener lugar antes.

LD: En ese momento, si se encuentra un problema, habrá que preguntarse: “¿Merece la pena volver a hacer todo esto sólo para solucionar ese problema o deberíamos seguir adelante de todas formas?”. Si el problema es lo suficientemente grave, los mineros podrían activarse simplemente por el problema porque quieren aprovecharse de él. No queremos que se active si hay un gran problema. Tenemos que estar completamente seguros de que no hay problemas antes de llegar a la fase de activación.

BIP 8 con señalización forzada

AvW: Déjame lanzar otra idea que ha estado circulando. ¿Y si hacemos un BIP 8 con señalización forzada hacia el final pero le damos un tiempo largo? Después de un tiempo siempre se puede acelerar con un nuevo cliente que incluya algo que obligue a los mineros a señalar antes.

LD: Con el BIP 8 actual no se puede hacer eso, pero Anthony Towns tiene un pull request que, con suerte, lo arreglará.

AvW: ¿Considera que esto tiene sentido?

EL: No creo que sea una buena idea. Cuanto más podamos reducir las variables una vez que se haya desplegado, mejor. Creo que deberíamos intentar solucionar todo esto antes. Si no se ha hecho antes, es que alguien no ha hecho bien su trabajo, es mi forma de verlo. Si las cosas se hacen bien, en el momento de la activación no debería haber ninguna controversia. Debería ser: “Saquemos esto y hagámoslo”.

LD: Esa podría ser una razón para empezar con un plazo de 1 año. Luego podemos pasarlo a 6 meses si resulta ser demasiado tiempo.

EL: Then we are inviting more controversy potentially. Last time it was kind of a mess with having to deploy BIP 148 then BIP 91 and then all this other stuff to patch it. The less patches necessary there the better. I think it sets a bad precedent if it is not decisive. If the community has not decided to do this it should not be done at all. If it has decided to do it then it should be decisive and quick. I think that is the best precedent we can set. The more we delay stuff and the more there is controversy, it just invites a lot more potential for things to happen in the future that could be problematic.

AvW: I could throw another idea out there but I expect your answer will be the same. I will throw it out there anyway. I proposed this idea where you have a long BIP 8 period with forced signaling towards the end and you can speed it up later if you decide to. The opposite of that would be to have a long BIP 8 signaling period without forced signaling towards the end. Then at some point we are going to do forced signaling anyway. I guess your answer would be the same. You don’t like solutions that need to be patched along the way?

EL: Yeah.

Protocolo de osificación

AvW: La última pregunta, creo. A mucha gente le gusta la osificación del protocolo. Les gusta que Bitcoin, al menos en algún momento del futuro, no pueda actualizarse.

EL: Eso me gustaría.

AvW: ¿Por qué le gustaría eso?

EL: Porque eso elimina la política del protocolo. Mientras se puedan cambiar las cosas, se abre la puerta a la política. Todas las formas constitucionales de gobierno, incluso muchas de las religiones abrahámicas, se basan en esta idea de que tienes esta cosa que es un protocolo que no cambia. Cada vez que hay un cambio hay algún tipo de cisma o algún tipo de ruptura en el sistema. Cuando se trata de algo en lo que el efecto de red es un componente tan grande de todo esto y las divisiones pueden ser problemáticas en términos de que la gente no pueda comerciar entre sí, creo que cuanto menos política haya involucrada, mejor. La gente va a hacer política al respecto. Cuanto más se juegue, más se querrá atacar. Cuantos menos vectores de ataque haya, más seguro será. Estaría bien que pudiéramos conseguir que no hubiera más cambios en el protocolo en la capa base. Siempre va a haber mejoras que se pueden proponer porque siempre aprendemos a posteriori. Siempre podemos decir “podemos mejorar esto. Podemos hacer este tipo de cosas mejor”. La cuestión es dónde ponemos el límite. ¿Dónde decimos “ya está bien y no hay más cambios a partir de ahora”? ¿Se puede hacer eso realmente? ¿Es posible que la gente decida realmente que el protocolo va a ser así? Además, hay otra cuestión: ¿qué pasa si más adelante se descubre algún problema que requiera algún cambio en el protocolo para solucionarlo? Algún tipo de error catastrófico o alguna vulnerabilidad o algo así. En ese momento podría haber una medida de emergencia para hacerlo, lo que podría no sentar precedente para que este tipo de cosas se conviertan en algo habitual. Cada vez que se invoca cualquier tipo de medidas de emergencia se abre la puerta a los ataques. No sé dónde está ese límite. Creo que es una cuestión realmente difícil.

AvW: Creo que Nick Szabo lo ha descrito como escalabilidad social. A eso se refiere. Cree que la osificación beneficiaría a la escalabilidad social.

LD: Creo que en un futuro lejano se puede considerar, pero en la etapa actual de Bitcoin, en la que se encuentra, si Bitcoin deja de mejorar eso abre la puerta a que las altcoins lo sustituyan y Bitcoin acabe siendo irrelevante.

AvW: ¿Dices que quizás en algún momento en el futuro?

LD: En un futuro lejano.

EL: Es demasiado pronto para decirlo.

AvW: ¿Tiene alguna idea de cuándo se hará?

EL: Creo que cualquiera que esté seguro de algo así está lleno de s**. No creo que nadie entienda aún bien este problema.

LD: Hay demasiadas cosas que no entendemos sobre cómo debería funcionar Bitcoin. Hay demasiadas mejoras que se podrían hacer. En este momento parece que debería estar fuera de toda duda.

AvW: ¿Por qué sería un problema? Digamos que esta bifurcación suave falla por alguna razón. No se activa. Nos enteramos en los próximos 2 años que Bitcoin no puede actualizarse más y esto es todo. Esto es con lo que vamos a vivir. ¿Por qué es esto un problema? ¿O es un problema?

EL: Hay ciertos problemas que existen con el protocolo ahora mismo. La privacidad es uno de los más importantes. Podría haber otras mejoras potenciales en la criptografía que permitan una compresión mucho más sofisticada o más privacidad u otro tipo de cosas que serían significativamente beneficiosas. Si alguna otra moneda es capaz de lanzarse con estas características, tiene el problema de no tener el mismo tipo de historia fundacional que Bitcoin y ese tipo de mito creo que es necesario para crear un movimiento como este. Crea incentivos para los primeros jugadores… Lo que ha sucedido con las altcoins y todas estas ICOs y cosas así, los fundadores básicamente terminan teniendo demasiado control sobre el protocolo. Ese es un tema en el que creo que Bitcoin realmente mantiene su efecto de red sólo por eso. Pero podría haber alguna mejora técnica en algún momento que sea tan significativa que realmente Bitcoin estaría como una seria ventaja si no incorporara algo así.

LD: En este momento creo que es un hecho que lo habrá.

AvW: Has empezado mencionando la privacidad. ¿No es algo que crees que podría resolverse bastante bien en segundas capas tal y como está el protocolo ahora mismo?

EL: Posiblemente. No estoy seguro. No creo que nadie tenga una respuesta completa a esto, por desgracia.

LD: La definición de “suficientemente bueno” puede cambiar a medida que mejore la tecnología que invade la privacidad. Si los gobiernos, ni siquiera los gobiernos, cualquiera mejora en la invasión de la privacidad de todos lo que tenemos que proteger contra eso podría muy bien subir el listón.

Taproot

AvW: Podemos hablar un poco de Taproot. ¿Es una buena idea? ¿Qué tiene de bueno Taproot? ¿Por qué lo apoya si lo apoya?

LD: Simplifica significativamente lo que tiene que ocurrir en la cadena. Ahora mismo tenemos todas estas capacidades de contratos inteligentes, que tenemos desde 2009, pero en la mayoría de los casos no es necesario tener el contrato inteligente si ambas partes ven que va a terminar de una determinada manera. Ambas partes pueden firmar una sola transacción y evitar todo lo relacionado con los contratos inteligentes. Este es el principal beneficio de Taproot. Puedes evitar todo el contrato inteligente en la mayoría de los casos. Entonces todos los nodos completos, tienen una verificación más barata, muy poca sobrecarga.

EL: Y todas las transacciones tendrían el mismo aspecto, por lo que nadie podría ver cuáles son los contratos. Todos los contratos se ejecutarían fuera de la cadena, lo que supondría una mejora significativa para la escalabilidad y la privacidad. Creo que es una victoria para todos.

LD: En lugar de que todo el mundo ejecute el contrato inteligente, son sólo los participantes.

EL: Que es la forma en que debería haber sido al principio, pero creo que tomó un tiempo hasta que la gente se dio cuenta de que eso es lo que el script debería estar haciendo. Al principio se pensó que podíamos tener estos scripts que se ejecutan en la cadena. Esta es la forma en que se hizo porque no creo que Satoshi pensó en esto completamente. Él sólo quería lanzar algo. Ahora tenemos mucha retrospectiva que no teníamos entonces. Ahora es obvio que realmente el blockchain se trata de autorizar transacciones, no se trata de procesar las condiciones de los contratos en sí. Todo eso se puede hacer offchain y se puede hacer muy bien offchain. Al final lo único que hay que hacer es que los participantes firmen que ha ocurrido y ya está. Eso es todo lo que le importa a todo el mundo. Todo el mundo está de acuerdo, así que ¿cuál es el problema? Si todo el mundo está de acuerdo, no hay ningún problema.

AvW: ¿No parece haber ningún inconveniente en Taproot? ¿Hay algún inconveniente, ha oído hablar de alguna preocupación? Creo que hubo un email en la lista de correo hace un tiempo con algunas preocupaciones.

LD: No se me ocurre ninguna. No hay ninguna razón para no desplegarlo, al menos no se me ocurre ninguna razón para no usarlo. Sí que arrastra el sesgo de SegWit hacia los bloques más grandes, pero eso es algo que hay que considerar independientemente. No hay ninguna razón para vincular las características al tamaño del bloque.

AvW: Los bloques deberían seguir siendo más pequeños, Luke, ¿sigue siendo esa su postura?

LD: Sí, pero eso es independiente de Taproot.

AvW: ¿Hay alguna otra bifurcación suave que os entusiasme o que podáis ver desplegada en Bitcoin en los próximos dos años?

LD: Está ANYPREVOUT, que antes se llamaba NOINPUT, y creo que está progresando mucho. También está CHECKTEMPLATEVERIFY, al que todavía no he prestado demasiada atención, pero que parece contar con un importante apoyo de la comunidad. Una vez que Taproot se despliegue, probablemente habrá un montón de otras mejoras además de eso.

EL: Después de la última bifurcación suave, la activación de SegWit, estaba muy quemado por todo el proceso. Este asunto fue realmente un proceso de más de dos años. En 2015 fue cuando todo esto empezó y no se activó hasta el 1 de agosto de 2017. Eso es más de dos años de este tipo de cosas pasando. No sé si tengo ganas de una batalla muy prolongada con esto en absoluto. Quiero ver lo que sucede con Taproot primero antes de sopesar otros tenedores suaves. Taproot es el que parece tener el mayor apoyo en este momento en cuanto a algo que es una obviedad, esto sería bueno tener. Me gustaría ver eso primero. Una vez que se active, tal vez tenga otras ideas sobre qué hacer a continuación. Ahora mismo no lo sé, es demasiado pronto.

AvW: ¿Cuál es su opinión final sobre la activación de la horquilla suave? ¿Qué quiere decir a nuestros espectadores?

EL: Creo que es realmente importante que establezcamos un buen precedente. He hablado de tres categorías diferentes de riesgos. La primera categoría de riesgo es sólo la técnica, asegurándose de que el código no tiene ningún error y cosas así. La segunda es la metodología de activación y asegurarse de que la red no se divide. La tercera es con los precedentes. Invitando a posibles ataques en el futuro por parte de gente que explote el propio proceso. La tercera parte es la que creo que se entiende menos. La primera parte es la que más se entiende incluso en 2015. La segunda categoría es la que toda la activación de SegWit nos enseñó muchas cosas aunque estuviéramos hablando de BIP 8 y BIP 9. Los riesgos de la categoría 3 ahora mismo creo que son muy desconocidos. Esa es mi mayor preocupación. Me gustaría ver que no hay ningún tipo de precedente establecido donde este tipo de cosas podrían ser explotadas en el futuro. Es muy difícil trazar la línea exacta de lo agresivos que debemos ser con esto y si eso prepara algo para que alguien más en el futuro haga algo malo. Creo que sólo podemos aprender haciéndolo y tenemos que correr riesgos. Con el Bitcoin ya estamos en un territorio desconocido. Bitcoin ya es una propuesta arriesgada para empezar. Tenemos que asumir algunos riesgos. Deben ser riesgos calculados y debemos aprender sobre la marcha y corregirnos lo antes posible. Aparte de eso, no sé exactamente cómo acabará siendo el proceso. Estamos aprendiendo mucho. En este momento, creo que la categoría 3 es la clave que vamos a ver si esto ocurre realmente. Es muy importante tenerlo en cuenta.

AvW: ¿Luke, alguna reflexión final?

LD: La verdad es que no.

\ No newline at end of file +https://www.youtube.com/watch?v=yQZb0RDyFCQ

Location: Bitcoin Magazine (en línea)

Aaron van Wirdum Aaron van Wirdum en Bitcoin Magazine sobre el BIP 8, el BIP 9 o la activación del Soft Fork moderno: https://bitcoinmagazine.com/articles/bip-8-bip-9-or-modern-soft-fork-activation-how-bitcoin-could-upgrade-next

David Harding sobre las propuestas de activación de Taproot: https://gist.github.com/harding/dda66f5fd00611c0890bdfa70e28152d

Introducción

Aaron van Wirdum (AvW): Eric, Luke bienvenido. Feliz Día de la Independencia de Bitcoin. ¿Cómo están?

Eric Lombrozo (EL): Estamos muy bien. ¿Cómo están ustedes?

AvW: Estoy bien, gracias. Luke, ¿cómo estás?

Luke Dashjr (LD): BIEN. ¿Cómo estás tú?

AvW: Bien, gracias. Es genial teneros en el Día de la Independencia del Bitcoin. Obviamente, ambos habéis jugado un gran papel en el movimiento UASF. Fuisteis tal vez dos de los partidarios más prominentes e influyentes. Este Día de la Independencia de Bitcoin, esta cosa del 1 de agosto de hace un par de años, se trataba de la activación de SegWit, la activación del soft fork. Esto está siendo relevante de nuevo porque estamos viendo una nueva bifurcación suave que podría estar llegando, Taproot. En general, la conversación sobre cómo activar las bifurcaciones suaves está empezando de nuevo. Así que lo que sucedió hace un par de años se está volviendo relevante de nuevo.

EL: No queremos repetir lo que ocurrió hace unos años. Queremos hacerlo mejor esta vez.

#Activaciones anteriores de la bifurcación suave

AvW: Eso es lo que quiero discutir con ustedes y por qué es genial tenerlos aquí. Empecemos con eso Eric. Hace un par de años, aparentemente algo salió mal. En primer lugar, mencionaremos brevemente que hubo un proceso llamado BIP 9. ¿Quieres explicar muy brevemente lo que era y luego se puede entrar en por qué era un problema o lo que salió mal?

EL: Las primeras bifurcaciones suaves que se activaron fueron con fecha de bandera, simplemente escritas en el código. Más tarde, para hacer la transición más suave, se incorporó la señalización de los mineros. Entonces el BIP 9 fue esta propuesta que permitía que varias bifurcaciones suaves estuvieran bajo activación al mismo tiempo con la idea de que habría extensibilidad al protocolo y habría todo este proceso que podríamos usar para añadir nuevas características. Pero resultó ser muy desordenado y no estoy seguro de que ese proceso sea sostenible.

AvW: Para que quede claro, la idea de BIP 9 era que hubiera una bifurcación suave, que hubiera un cambio en el protocolo, y que el reto fuera conseguir que la red se actualizara a ese cambio sin dividir la red entre los nodos actualizados y los no actualizados. La idea era dejar que la coordinación de la activación dependiera del hashpower. Una vez que suficientes mineros hayan señalado que se han actualizado la red lo reconoce, todos los nodos actualizados lo reconocen y empiezan a aplicar las nuevas reglas. La razón por la que esto es una buena idea es porque si la mayoría de los mineros hacen esto no hay riesgo de que la cadena se divida. Incluso los nodos no actualizados seguirán la versión “actualizada” de la cadena de bloques.

EL: Sería la cadena más larga. Por defecto, todos los demás clientes seguirían esa cadena automáticamente.

AvW: Esa es la ventaja. Pero parece que algo salió mal. O, al menos, eso cree usted.

EL: Las primeras bifurcaciones suaves no fueron realmente políticas en absoluto. No hubo ninguna disputa. SegWit fue la primera bifurcación suave que realmente generó cierta controversia debido a todo el asunto del tamaño de los bloques que ocurrió antes. Fue un momento realmente polémico en el espacio de Bitcoin. La activación se politizó, lo cual fue un problema muy serio porque el BIP 9 no fue diseñado para tratar con la política. Fue diseñado para lidiar con la activación por señalización de los mineros sólo por lo que estabas hablando, para que la red no se dividiera. Es realmente un proceso técnico para asegurarse de que todo el mundo está en la misma página. La idea era que todo el mundo estuviera ya a bordo de esto antes del lanzamiento sin ninguna política. No era ningún tipo de sistema de votación ni nada parecido. No se diseñó ni se pretendió que fuera un sistema de votación. Algunas personas lo malinterpretaron así y algunas personas se aprovecharon de la confusión para hacer que pareciera un proceso de votación. Se abusó mucho de ello y los mineros empezaron a manipular el proceso de señalización para alterar el precio del Bitcoin y otras cosas. Se convirtió en algo muy, muy complicado. No creo que queramos hacerlo así esta vez.

AvW: Este mecanismo de coordinación, básicamente se concedió a los mineros en cierto modo para que las actualizaciones se hicieran sin problemas. Su perspectiva es que empezaron a abusar de este derecho que se les concedió. ¿Es un buen resumen?

EL: Sí. Tenía un umbral del 95 por ciento, que era algo bastante razonable antes, cuando se trataba sobre todo de mejoras técnicas que no estaban politizadas en absoluto. Era un umbral de seguridad: si el 95 por ciento de la potencia de la red está señalizando, entonces es casi seguro que la red no se va a dividir. Pero en realidad sólo se necesita algo más de la mitad del hashpower para que la cadena sea la más larga. Se podría bajar ese umbral y aumentar ligeramente el riesgo de que la cadena se divida. Con el umbral del 95 por ciento, si más del 5 por ciento del hashpower no señalaba, obtenía un veto. Podía vetar toda la activación. Le dio a una minoría muy pequeña la capacidad de vetar una bifurcación suave. Por defecto, si no se activaba, simplemente no quedaba fijada y fracasaba. Esto era un problema porque estábamos en medio del asunto del tamaño de los bloques y todo el mundo quería una resolución. El fracaso no era realmente una opción.
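
Como ilustración del umbral, un esbozo mínimo en Python (nombres en español hipotéticos; no es el código de versionbits de Bitcoin Core): con 2016 bloques por periodo y un umbral de 1916 (el 95 por ciento), bastan 101 bloques sin señalizar para vetar la activación en ese periodo.

```python
# Esbozo ilustrativo de la regla de umbral de BIP 9 (no es código de Bitcoin Core).
PERIODO = 2016          # bloques por periodo de reajuste de dificultad
UMBRAL = 1916           # 95% de 2016, el umbral de mainnet en BIP 9
BIT = 1                 # bit de versión usado por SegWit (BIP 141)

def senaliza(version: int, bit: int) -> bool:
    """True si el bloque usa la plantilla de versionbits (BIP 9) y tiene el bit activado."""
    return (version & 0xE0000000) == 0x20000000 and (version >> bit) & 1 == 1

def periodo_alcanza_umbral(versiones_del_periodo: list[int]) -> bool:
    """Cuenta los bloques que señalizan dentro de un periodo completo de 2016 bloques."""
    cuenta = sum(1 for v in versiones_del_periodo if senaliza(v, BIT))
    return cuenta >= UMBRAL

# Con que un 5% más un bloque (101 de 2016) no señalice, el periodo falla:
ejemplo = [0x20000002] * (PERIODO - 101) + [0x20000000] * 101
assert not periodo_alcanza_umbral(ejemplo)
```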

AvW: Luke, ¿estás de acuerdo con este análisis? ¿Es así como lo ves?

LD: Más o menos sí.

SegWit y BIP 148

AvW: En algún momento hubo una solución para salir del estancamiento, que fue un UASF. Esto finalmente tomó forma en el BIP 148. Luke, creo que usted participó desde el principio en ese proceso de decisión. ¿Cuál era la idea que había detrás del BIP 148?

LD: En realidad no participé tan pronto. De hecho, en un principio me opuse a ella.

AvW: ¿Por qué?

LD: Realmente no analicé las implicaciones en su totalidad.

AvW: ¿Puede explicar qué hizo el BIP 148 y por qué se diseñó de la forma en que se diseñó?

LD: Esencialmente, devolvió la decisión a los mineros. “El 1 de agosto vamos a empezar el proceso de activación de esto y eso es todo”.

AvW: Creo que la forma específica en que lo hizo fue que los nodos del BIP 148 empezaran a rechazar los bloques que no señalaran realmente la compatibilidad con SegWit. ¿Es eso cierto?

LD: Sí. Así es como se han desplegado anteriormente la mayoría de las bifurcaciones suaves activadas por el usuario. Si los mineros no señalaban la nueva versión, sus bloques serían inválidos.
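
A modo de referencia, un esbozo simplificado en Python de la regla que añadía el BIP 148 (fechas y comprobación aproximadas; no es el código real del cliente UASF): durante la ventana obligatoria, un bloque que no señalice el bit 1 de SegWit se considera inválido.

```python
# Regla simplificada del BIP 148 (esbozo ilustrativo, no el código del cliente UASF).
INICIO = 1501545600   # 1 de agosto de 2017 (UTC), inicio de la señalización obligatoria
FIN    = 1510704000   # mediados de noviembre de 2017, expiración del despliegue de SegWit
BIT_SEGWIT = 1

def bloque_valido_bip148(mediana_tiempo: int, version: int) -> bool:
    """Rechaza bloques que no señalizan SegWit dentro de la ventana obligatoria."""
    dentro_de_ventana = INICIO <= mediana_tiempo < FIN
    senala_segwit = (version & 0xE0000000) == 0x20000000 and (version >> BIT_SEGWIT) & 1 == 1
    return senala_segwit or not dentro_de_ventana

# Un bloque del 2 de agosto de 2017 sin el bit 1 activado sería inválido para un nodo BIP 148:
assert not bloque_valido_bip148(1501632000, 0x20000000)
```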

AvW: Hay una diferencia de matiz entre la activación de SegWit y la aplicación de SegWit o la señalización de SegWit, ¿verdad?

LD: Las anteriores bifurcaciones suaves habían hecho prácticamente ambas cosas todo el tiempo. Antes de BIP 9 era un número de versión incremental. Los mineros tenían que tener el número de versión correcto o sus bloques eran inválidos. Cuando se lanzó la versión 3, todos los bloques de la versión 2 y anteriores dejaron de ser válidos.
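
Esas bifurcaciones suaves anteriores al BIP 9 (BIP 34, 65 y 66) usaban una regla de supermayoría sobre los últimos 1000 bloques; un esbozo aproximado en Python con los valores de mainnet:

```python
# Esbozo de la regla "IsSuperMajority" usada antes de BIP 9 (BIP 34/65/66, valores de mainnet).
def supermayoria(versiones_ultimos_1000: list[int], version_minima: int, umbral: int) -> bool:
    return sum(1 for v in versiones_ultimos_1000 if v >= version_minima) >= umbral

# Con 750 de 1000 bloques en la versión nueva se empiezan a aplicar las reglas nuevas;
# con 950 de 1000, los bloques con versión antigua pasan a ser inválidos.
historial = [3] * 960 + [2] * 40
assert supermayoria(historial, 3, 750)   # activar las reglas nuevas
assert supermayoria(historial, 3, 950)   # rechazar bloques de versión 2
```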

AvW: ¿En algún momento empezaron a apoyar el BIP 148? ¿Por qué fue así?

LD: En un momento dado había un número suficiente de miembros de la comunidad que decían: “Vamos a hacer el BIP 148 sin importar cuántos otros estén a bordo”. A partir de ese momento, una vez que fue una minoría considerable, la única manera de evitar una ruptura en cadena era ir a por todas.

EL: En ese momento era todo o nada.

AvW: ¿Qué tamaño debe tener una minoría así? ¿Existe alguna forma de medirlo, pensarlo o razonarlo? ¿Qué les hizo darse cuenta de que había suficientes usuarios que apoyaban esto?

LD: It really comes down to not so much the number of people but their relevance to the economy. All these people that are going to do BIP 148, can you just ignore them and economically pressure them or cut them off from everyone else? Or is that not viable anymore?

AvW: ¿Cómo se toma esa decisión? ¿Cómo se distingue entre lo inviable y lo viable?

EL: En ese momento lo que hicimos fue intentar hablar con muchas bolsas y con muchas otras personas del ecosistema para evaluar su nivel de apoyo. Muchos usuarios estaban muy a favor, pero cuando se hizo más evidente que muchos de los grandes nodos económicos del sistema iban a utilizar el BIP 148, quedó claro que era hacerlo o morir, era todo o nada. En ese momento había que conseguir que todo el mundo se subiera a bordo o esto no iba a funcionar.

LD: Irónicamente, uno de los primeros partidarios del BIP 148 fue BitPay, que por aquel entonces era importante.

AvW: ¿Lo eran?

LD: Creo que perdieron relevancia poco después. Pero por aquel entonces eran más importantes.

AvW: Has mencionado esta situación de “hazlo o muérete”, ¿cuáles son los riesgos de morir? ¿Qué podría salir mal con algo como el BIP 148?

EL: En ese momento estábamos bastante seguros de que si la gente quería bifurcarse, lo iba a hacer. Hubo un gran empuje al asunto del BCH. Era “Vale, si la gente quiere bifurcarse, que se bifurque”. Dondequiera que la economía vaya es hacia donde la gente va a gravitar. Nosotros creíamos que gravitarían hacia el uso del BIP 148, insistiendo en que todos los demás lo hicieran porque eso era lo mejor para los intereses económicos de todos. Eso es lo que ocurrió. Era un riesgo, pero creo que era un riesgo calculado. Creo que habría habido una ruptura en cadena de todas formas. La cuestión era cómo asegurarnos de que la ruptura de la cadena tuviera la menor cantidad de repercusiones económicas para las personas que querían permanecer en la cadena SegWit.

AvW: Para que quede claro, el BIP 148 también podría haber provocado una división de la cadena y, posiblemente, una reordenación. Era un riesgo en ese sentido.

EL: Claro que estábamos en un territorio desconocido. Creo que la teoría era bastante sólida, pero siempre hay un riesgo. Creo que, en este punto, el hecho de que hubiera un riesgo formaba parte de la motivación para que la gente quisiera ejecutar el BIP 148, porque cuantas más personas lo hicieran, menor sería el riesgo.

AvW: Esa era una interesante característica de teoría de juegos que tenía este BIP. Hasta el día de hoy existe un desacuerdo al respecto: ¿qué creen ustedes que hizo realmente el BIP 148? Algunos dicen que al final no hizo nada, que fueron simplemente los mineros quienes activaron la actualización. ¿Cómo lo ve usted?

LD: Fue claramente el BIP 148 el que consiguió que se activara SegWit. Si no hubiera sido así, todo el mundo lo habría tenido muy claro porque los bloques habrían sido rechazados por los nodos del BIP 148. No hay forma de tener un 100% de señalización de mineros sin el BIP 148.

AvW: Eric, ¿estás de acuerdo con eso?

EL: Creo que el BIP 148 desempeñó un gran papel, pero creo que hubo muchos factores que fueron muy importantes. Por ejemplo, el hecho de que SegWit2x estuviera en marcha y todo el acuerdo de Nueva York. Creo que eso impulsó a la gente a querer apoyar una bifurcación suave activada por el usuario aún más. Es un poco irónico. Si todo el asunto de SegWit2x no hubiera estado ocurriendo la gente podría haber sido más complaciente y haber dicho “Vamos a aguantar un poco y esperar a ver qué pasa”. Creo que esto presionó a todo el mundo para pasar a la acción. Parecía que había una amenaza inminente, así que la gente tenía que hacer algo. Ese fue el momento en el que creo que la teoría del juego empezó a funcionar de verdad, porque entonces sería posible cruzar ese umbral en el que es hacer o morir, el punto de no retorno.

AvW: Permítanme reformular un poco la pregunta. ¿Creen ustedes que SegWit se habría activado si el BIP 148 no hubiera ocurrido? ¿Tendríamos SegWit hoy?

EL: Es difícil de decir. Creo que al final podría haberse activado. En ese momento la gente lo quería y era un momento decisivo en el que la gente quería una resolución. Cuanto más se alargaba, más incertidumbre había. La incertidumbre en el protocolo no es buena para la red. Podrían haber surgido otros problemas. Creo que fue el lugar adecuado y el momento adecuado para que esto ocurriera. ¿Podría haber ocurrido más tarde? Posiblemente. Pero creo que habría sido mucho más arriesgado.

LD: Esta conferencia de hoy probablemente estaría celebrando la activación final de SegWit en lugar de Taproot.

AvW: Una última pregunta. Para resumir esta parte de la historia de Bitcoin, ¿cuáles fueron las lecciones de este episodio? ¿Qué nos llevamos de este periodo de la historia de Bitcoin hacia adelante?

EL: Creo que es muy importante que intentemos evitar politizar este tipo de cosas. Lo ideal es que la capa base del protocolo no cambie mucho. Cada vez que se produce un cambio, se introduce un vector de ataque. Enseguida la gente podría intentar insertar vulnerabilidades o exploits, o intentar dividir a la comunidad, o montar ataques de ingeniería social o cosas así. Cada vez que se abre la puerta a cambiar las reglas, uno se abre a los ataques. Ahora estamos en un punto en el que es imposible actualizar el protocolo sin tener algún tipo de proceso. Pero cuanto más intentamos ampliarlo, más tiende a convertirse en algo político. Espero que Taproot no genere el mismo tipo de controversia y no se politice. No creo que sea un buen precedente que estas cosas sean controvertidas. Al mismo tiempo, no quiero que esto sea un proceso habitual. No quiero convertir en un hábito activar bifurcaciones suaves todo el tiempo. Creo que Taproot es una adición muy importante a Bitcoin y parece tener mucho apoyo de la comunidad. Sería muy bueno incluirlo ahora. Creo que si esperamos va a ser más difícil activarlo más adelante.

BIP 8 como una posible mejora de BIP 9

AvW: Has mencionado que en 2017 se produjo este periodo de controversia, la guerra civil del escalado, como quieras llamarlo. ¿Hubo realmente un problema con el BIP 9 o solo fue un periodo de controversia y a estas alturas el BIP 9 volvería a estar bien?

EL: Creo que el problema del BIP 9 fue que era muy optimista: suponía que la gente jugaría limpio y cooperaría. El BIP 9 no funciona nada bien en un escenario no cooperativo. Por defecto, fracasa. No sé si es algo que queramos hacer en el futuro, porque si falla significa que todavía hay controversia, que no se ha resuelto. El BIP 8 tiene una fecha límite en la que tiene que activarse. Por defecto se activa, mientras que el BIP 9 no se activa por defecto. Elimina el poder de veto de los mineros. Creo que el BIP 9 es muy problemático. En el mejor de los casos podría funcionar, pero es demasiado fácil de atacar.

LD: Y crea un incentivo para ese ataque.

AvW: Lo que está diciendo es que incluso en un periodo en el que no haya una guerra civil, ni una gran controversia, utilizar algo como el BIP 9 invitaría a la controversia. ¿Es eso lo que le preocupa?

EL: Posiblemente sí.

LD: Porque ahora los mineros pueden tener como rehén el soft fork que presumiblemente ya ha sido acordado por la comunidad. Si no ha sido acordado por la comunidad no deberíamos desplegarlo con ninguna activación, y punto.

AvW: Luke has estado trabajando en el BIP 8 que es una forma alternativa de activar las bifurcaciones suaves. ¿Puedes explicar en qué consiste el BIP 8?

LD: Más que una alternativa, lo veo como una forma de tomar el BIP 9 y arreglar los errores que tiene.

AvW: ¿Cómo se solucionan los errores? ¿Qué hace la BIP 8?

LD: El error más obvio fue el hecho de que uno de los problemas del BIP 9 cuando fuimos a hacer el BIP 148 era que originalmente se había fijado para la activación en noviembre, no en agosto. Entonces la gente se dio cuenta de que si el hashpower es demasiado bajo podríamos forzar la señalización durante noviembre y seguiría sin activarse porque el tiempo transcurriría demasiado rápido. Esto se debe a que el BIP 9 utilizaba marcas de tiempo, tiempo del mundo real, para determinar cuándo expiraría el BIP.

AvW: Porque los bloques se pueden minar más rápido o más lento, así que el tiempo del mundo real puede darte problemas.

LD: Si los bloques se minan demasiado despacio, el tiempo de espera se produce antes de que termine el periodo de dificultad, que era uno de los requisitos para la activación. El error más obvio que arregló el BIP 8 fue utilizar alturas en lugar de tiempo. De esta forma, si los bloques se ralentizaban, el tiempo de espera también se retrasaba. El BIP 148 solucionó esto adelantando el periodo de señalización obligatoria a agosto, que era muy rápido. Creo que todo el mundo estaba de acuerdo con eso. Fue una necesidad desafortunada debido a ese fallo. La otra es, como estábamos hablando, que crea un incentivo para que los mineros bloqueen la bifurcación suave. Si se activa después del tiempo de espera, ese incentivo desaparece. Hay riesgos si se encuentra un error. No podemos echarnos atrás una vez que se ha establecido la activación. Por supuesto, deberíamos encontrar errores antes de establecer la activación, así que esperamos que eso no importe.
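
Un esbozo ilustrativo en Python del fallo que describe Luke (funciones y valores hipotéticos): con el tiempo de espera por marcas de tiempo del BIP 9, si los bloques llegan más despacio de lo previsto puede no caber un periodo completo de señalización antes del plazo, mientras que con alturas, como hace el BIP 8, el número de periodos es fijo.

```python
# Ilustración del timeout por tiempo (BIP 9) frente al timeout por altura (BIP 8).
PERIODO = 2016
SEGUNDOS_OBJETIVO = 600          # 10 minutos por bloque

def quedan_periodos_bip9(ahora: int, timeout: int, segundos_por_bloque: int) -> int:
    """Periodos completos que caben antes del timeout si los bloques llegan a este ritmo."""
    return max(0, (timeout - ahora) // (PERIODO * segundos_por_bloque))

def quedan_periodos_bip8(altura_actual: int, altura_timeout: int) -> int:
    """Con BIP 8 el número de periodos restantes sólo depende de alturas, no del ritmo real."""
    return max(0, (altura_timeout - altura_actual) // PERIODO)

# Con bloques un 25% más lentos, el último periodo previsto ya no cabe antes del timeout:
assert quedan_periodos_bip9(0, PERIODO * SEGUNDOS_OBJETIVO, 600) == 1
assert quedan_periodos_bip9(0, PERIODO * SEGUNDOS_OBJETIVO, 750) == 0
```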

AvW: ¿Es posible configurar el BIP 8? Se puede forzar la señalización al final o no. ¿Cómo funciona esto exactamente?

LD: Ese fue un cambio más reciente, ahora que hemos vuelto a poner en marcha el tema de la activación. Está diseñado para que puedas desplegarlo con una activación que haga timeout y aborte y luego cambiar esa bandera a un UASF. Si el UASF se establece más tarde, siempre y cuando tenga suficiente apoyo de la comunidad para hacer el UASF, todos los nodos que fueron configurados sin el UASF seguirán con él.
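
Un esbozo muy simplificado en Python de cómo el parámetro lockinontimeout del BIP 8 cambia la transición de estados (la máquina de estados completa está en el propio BIP): con lockinontimeout=true el último periodo pasa a señalización obligatoria (MUST_SIGNAL) y de ahí a LOCKED_IN; con false, pasa a FAILED.

```python
# Transición de estados simplificada del BIP 8 (esbozo; ver el BIP para la máquina completa).
def siguiente_estado(estado: str, umbral_alcanzado: bool, ultimo_periodo: bool, lot: bool) -> str:
    if estado == "STARTED":
        if umbral_alcanzado:
            return "LOCKED_IN"
        if ultimo_periodo:
            # Aquí está la diferencia: lockinontimeout fuerza la señalización en vez de abortar.
            return "MUST_SIGNAL" if lot else "FAILED"
        return "STARTED"
    if estado == "MUST_SIGNAL":
        return "LOCKED_IN"          # en MUST_SIGNAL los bloques que no señalizan son inválidos
    if estado == "LOCKED_IN":
        return "ACTIVE"
    return estado

assert siguiente_estado("STARTED", False, True, lot=True) == "MUST_SIGNAL"
assert siguiente_estado("STARTED", False, True, lot=False) == "FAILED"
```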

AvW: ¿Quién establece esta bandera? ¿Está integrada en una versión del software o es algo que los usuarios pueden hacer manualmente? ¿O es algo que se compila?

LD: Eso es un detalle de implementación. Al fin y al cabo, los usuarios pueden modificar el código o alguien puede proporcionar una versión con esa modificación.

AvW: ¿Cómo lo harías? ¿Cómo le gustaría que se utilizara el BIP 8?

LD: Como no deberíamos desplegar ningún parámetro de activación sin el suficiente apoyo de la comunidad, creo que deberíamos establecerlo por adelantado. Había un UASF de Bitcoin Core con el código del BIP 148 en él. Creo que sería mejor hacer todas las bifurcaciones suaves de esa manera a partir de ahora y dejar las versiones vanilla de Bitcoin Core sin ninguna activación de bifurcación suave.

AvW: Esa es una perspectiva interesante. ¿Podría Bitcoin Core no incluir ningún tipo de activación de BIP 8? ¿Está completamente integrado en clientes alternativos como el cliente BIP 148 y no en Bitcoin Core en absoluto? ¿Con el UASF esta es la forma de hacerlo a partir de ahora?

LD: Creo que sería lo mejor. Antes de 2017 habría dudado de que la comunidad lo aceptara, pero ha funcionado bien con el BIP 148, así que no veo ninguna razón para no continuarlo.

AvW: ¿No debería Bitcoin Core en ese caso incluir al menos el BIP 8 sin señalización forzada para que siga aplicando la bifurcación suave y siga adelante?

LD: Posiblemente. También hay un punto intermedio en el que podría detectar que se ha activado.

AvW: ¿Y entonces qué? ¿Los usuarios saben que deben actualizarse?

LD: Puede activarse en ese momento.

AvW: ¿Cuál es la ventaja de detectar que se ha actualizado?

LD: Para que pueda aplicar las reglas y seguir siendo un nodo completo. Si no está aplicando las bifurcaciones suaves más recientes, ya no es un nodo completo. Eso puede ser un problema de seguridad.

AvW: That is what I meant with BIP 8 without forced signaling. I thought that was the same thing.

LD: There is a pull request to BIP 8 that may do the same thing. I would have to look at that pull request, I am not sure it is quite identical yet.

Activación a través de clientes alternativos

AvW: Eric, ¿qué opinas de esta idea de activar las bifurcaciones suaves a través de clientes alternativos como hizo el BIP 148?

EL: Probablemente es una buena idea. Creo que es mejor que Bitcoin Core se mantenga al margen de todo el proceso. Cuanto menos político sea Bitcoin Core, mejor. La gente que ha trabajado en estos BIPs en su mayoría no quiere involucrarse demasiado en estas cosas públicas. Creo que es lo mejor. Creo que sentaría un precedente realmente horrible para Bitcoin Core el estar desplegando cambios de protocolo por sí mismo. Es muy importante que el apoyo de la comunidad esté ahí y se demuestre fuera del proyecto Bitcoin Core y que sea decisivo. Que no se politice una vez desplegado. Creo que es importante que haya suficiente apoyo antes del despliegue y es bastante seguro que va a suceder. En ese momento no hay que tomar más decisiones ni nada por el estilo, porque cualquier otra decisión que se añada al proceso no hace más que añadir más puntos potenciales en los que la gente podría intentar añadir polémica.

AvW: ¿Cuál es el problema si Bitcoin Core activa las bifurcaciones suaves o las bifurcaciones suaves se implementan en el cliente de Bitcoin Core?

EL: Sienta un precedente realmente malo porque Bitcoin Core se ha convertido en la implementación de referencia. Es el software de nodo más utilizado. Es muy peligroso, es una especie de separación de poderes. Sería realmente peligroso para Bitcoin Core tener la capacidad de implementar este tipo de cosas y desplegarlas especialmente bajo el radar, creo que sería realmente peligroso. Es muy importante que estas cosas sean revisadas y que todo el mundo tenga la oportunidad de verlas. Creo que ahora mismo la gente que está trabajando en Bitcoin Core puede ser buena, gente honesta y fiable, pero eventualmente podría ser infiltrada por gente o por otras personas que podrían no tener las mejores intenciones. Podría llegar a ser peligroso en algún momento si se convierte en un hábito hacerlo así.

AvW: O los desarrolladores de Bitcoin Core podrían ser coaccionados o recibir llamadas de agencias de tres letras.

LD: Especialmente si hay un precedente o incluso una apariencia de este supuesto poder. Va a provocar que la gente malintencionada piense que puede presionar a los desarrolladores e intente hacerlo aunque no funcione.

AvW: El inconveniente obvio es que si no hay suficiente gente que se actualice a este cliente alternativo en este caso podría dividir la red. Podría dividir la red, podría provocar el caos del que hablábamos antes con las reorganizaciones en cadena. ¿No es este un riesgo que le preocupa?

EL: Si no tiene suficiente apoyo previo, probablemente no debería hacerse. El apoyo debe estar ahí y debe quedar muy claro que hay un acuerdo casi unánime. Una gran parte de la comunidad lo quiere antes de que se despliegue el código. En el momento en que se despliegue el código, creo que sería bastante razonable esperar que la gente quiera ejecutarlo. De lo contrario, no creo que deba hacerse en absoluto.

AvW: Pero pone una fecha límite para que la gente actualice su software. La gente no puede ser demasiado perezosa. La gente tiene que hacer algo, de lo contrario surgirán riesgos.

LD: Eso es cierto, independientemente de la versión en la que se encuentre.

AvW: Entonces quiero plantear una hipótesis. Digamos que se elige esta solución. Existe este cliente alternativo que incluye la bifurcación suave con activación forzada de una u otra manera. No obtiene el apoyo que ustedes predicen que obtendrá o esperan que obtenga y provoca una ruptura de la cadena. ¿Qué versión es Bitcoin en ese caso?

LD: La comunidad tendría que decidirlo. No es algo que una persona pueda decidir o predecir. O el resto de la comunidad se actualiza o la gente que sí se actualizó no tendrá más remedio que revertir.

EL: En ese punto nos encontramos en un territorio desconocido en muchos sentidos. Tenemos que ver si los incentivos se alinean para que un número suficiente de personas quiera realmente apoyar una versión concreta de Bitcoin o no. Si hay un riesgo significativo de que pueda dividir la red permanentemente de una manera que no conduzca a un resultado decisivo, entonces no lo apoyaría. Creo que es importante que haya una alta probabilidad teórica de que la red tienda a converger. Los nodos económicos fuertes tenderán a converger en una blockchain concreta.

LD: También hay que tener en cuenta que no basta con no actualizar con algo así. Habría que bloquear explícitamente un bloque que rechace Taproot. De lo contrario, se correría el riesgo de que la cadena de Taproot superara su cadena y la sustituyera.

AvW: ¿Qué tipo de cronograma cree usted en este sentido? Luke, creo que te he visto mencionar un año. ¿Es eso cierto?

LD: Es importante que todos los nodos se actualicen, no sólo los mineros. Desde la fecha en que se hace el primer lanzamiento creo que tiene que haber al menos 3 meses antes de que pueda empezar la señalización. Una vez que comience, creo que un año estaría bien. Ni muy largo ni muy corto.

EL: Creo que un año podría ser un poco largo. Evidentemente, hay una contrapartida. Cuanto más corto sea el plazo, mayor será el riesgo de que no haya tiempo suficiente para que la gente se actualice. Pero, al mismo tiempo, cuanto más tiempo pase, mayor será la incertidumbre y eso también causa problemas. Es bueno encontrar el equilibrio adecuado. Creo que un año podría ser demasiado tiempo, no creo que vaya a tardar tanto. Preferiría que fuera más rápido. En realidad, me gustaría que esto ocurriera lo más rápido posible y que no causara una ruptura de la cadena o que redujera el riesgo de una ruptura de la cadena. Esa sería mi preferencia.

LD: No hay que olvidar que los mineros pueden señalar todavía y activarlo antes de un año.

EL: Claro, pero también está el tema del veto y la posibilidad de que la gente utilice la incertidumbre para jugar con los mercados u otras cosas por el estilo.

LD: No sé si hay un incentivo para intentar vetar cuando sólo se va a retrasar un año.

EL: Sí.

Activación moderna de bifurcaciones suaves

AvW: La otra perspectiva en este debate sería, por ejemplo, la de Matt Corallo Modern Soft Fork Activation. Supongo que lo conoce. Yo mismo lo explicaré rápidamente. Modern Soft Fork Activation, la idea es que básicamente se utiliza el proceso de actualización del BIP 9 a la vieja usanza durante un año. Deja que los mineros lo activen durante un año. Si no funciona entonces los desarrolladores lo reconsiderarán durante 6 meses, verán si después de todo había un problema con Taproot en este caso, algo que se les había pasado por alto, alguna preocupación que los mineros tenían con él. Revisarlo durante 6 meses, si después de los 6 meses se encuentra que no había ningún problema real y los mineros estaban retrasando por cualquier razón, entonces la activación se vuelve a desplegar con un plazo duro de activación a los 2 años. ¿Qué opina de esto?
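
Resumido como datos, un esbozo meramente ilustrativo de las fases tal y como se acaban de describir:

```python
# Fases de "Modern Soft Fork Activation" tal como se describen arriba (esbozo ilustrativo).
fases = [
    {"fase": "señalización de mineros al estilo BIP 9", "duracion_meses": 12},
    {"fase": "pausa para revisar objeciones y posibles problemas", "duracion_meses": 6},
    {"fase": "redespliegue con plazo duro de activación al final", "duracion_meses": 24},
]
total_meses = sum(f["duracion_meses"] for f in fases)   # unos 42 meses en el peor de los casos
print(total_meses)
```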

LD: Si es posible que haya un problema, ni siquiera deberíamos llegar a ese primer paso.

EL: No soy muy partidario de esto porque creo que se plantean demasiadas preguntas. Si no parece que sea decisivo y no hay una decisión tomada, la gente va a estar totalmente confundida. Es causar mucha incertidumbre. Creo que es algo en lo que tenemos que ser muy agresivos y conseguir que todo el mundo se suba al carro y sea decisivo y diga “Sí, esto va a pasar” o no vale la pena hacerlo. No me gusta la idea de ver qué pasa en 6 meses o lo que sea. O decidimos de inmediato que sí lo vamos a hacer o no.

LD: Es una especie de invitación a la controversia.

AvW: Sin embargo, la idea es que, al establecer el plan de esta manera, sigue existiendo la garantía de que al final se activará. Puede que tarde algo más de un año, pero al final se activará.

LD: Eso es lo que se puede hacer con el BIP 8, sin fijar el tiempo de espera. Luego, si se decide, se puede fijar el tiempo de espera.

AvW: Hay variaciones. Hay diferentes maneras de pensar en esto. La verdadera pregunta a la que quiero llegar es ¿cuál es la prisa? ¿No estamos en esto a largo plazo? ¿No estamos planeando construir algo que estará aquí durante 200 años? ¿Qué importan uno o dos años más?

EL: Creo que para las pruebas y la revisión de la propuesta real de Taproot debería haber tiempo suficiente. No deberíamos precipitarnos. No deberíamos intentar desplegar nada hasta que los desarrolladores que están trabajando en ello y lo están revisando y probando estén seguros de que está a un nivel en el que es seguro desplegarlo, independientemente de todo el tema de la activación. Sin embargo, creo que la activación debería ser rápida. No creo que se tarde tanto en incorporar a la gente. Estamos hablando de un par de meses como máximo para que todo el mundo se suba a bordo, si es que va a suceder. Si se tarda más tiempo, lo más probable es que no tenga suficiente apoyo. Probablemente no deberíamos hacerlo en primer lugar. Si la gente no puede hacerlo tan rápido, creo que todo el proceso es cuestionable. Cuantas más variables añadamos al proceso, más se invitará a la controversia y más se confundirá la gente y pensará que es más incierto. Con este tipo de cosas, creo que la gente busca la decisión y la resolución. Mantener la incertidumbre y la espera durante un largo periodo de tiempo no es saludable para la red. Creo que la parte de la activación debería ser rápida. El despliegue debería llevar el tiempo necesario para asegurarse de que el código es realmente bueno. No deberíamos desplegar un código que no haya sido completamente probado y revisado. Pero una vez que decidamos que “sí, esto es lo que hay que hacer”, creo que en ese momento debería ser rápido. De lo contrario, no deberíamos hacerlo en absoluto.

LD: La decisión, por supuesto, la toma la comunidad. Estoy de acuerdo con Eric.

AvW: Otro argumento a favor de la perspectiva de Matt es que el código debería ser bueno antes de desplegarlo, pero quizás en la realidad, en el mundo real, mucha gente sólo empezará a mirar el código, sólo empezará a mirar la actualización una vez que esté ahí fuera. Una vez que haya una versión de software que la incluya. Al no imponer inmediatamente la activación, se dispone de más tiempo para la revisión. ¿Qué opina usted?

LD: Creo que si la gente tiene algo que revisar debería hacerlo antes de que llegue a ese punto, antes de que se establezca la activación. Idealmente, antes de que se fusione.

AvW: Eric, ¿ves algún mérito en este argumento?

EL: Sin duda, cuantos más ojos lo miren, mejor, pero creo que es de suponer que nada se fusionará y se desplegará o se liberará hasta que al menos haya suficientes ojos competentes en esto. Ahora mismo creo que la gente más motivada es la que está trabajando en ello directamente. Puede que haya más ojos que lo miren después. Seguro que una vez que esté ahí fuera y una vez que se haya establecido la activación, recibirá más atención. Pero no creo que ese sea el momento en que la gente deba empezar a revisar. Creo que la revisión debería tener lugar antes.

LD: En ese momento, si se encuentra un problema, habrá que preguntarse: “¿Merece la pena volver a hacer todo esto sólo para solucionar ese problema o deberíamos seguir adelante de todas formas?”. Si el problema es lo suficientemente grave, los mineros podrían activarse simplemente por el problema porque quieren aprovecharse de él. No queremos que se active si hay un gran problema. Tenemos que estar completamente seguros de que no hay problemas antes de llegar a la fase de activación.

BIP 8 con señalización forzada

AvW: Déjame lanzar otra idea que ha estado circulando. ¿Y si hacemos un BIP 8 con señalización forzada hacia el final pero le damos un tiempo largo? Después de un tiempo siempre se puede acelerar con un nuevo cliente que incluya algo que obligue a los mineros a señalar antes.

LD: Con el BIP 8 actual no se puede hacer eso, pero Anthony Towns tiene un pull request que, con suerte, lo arreglará.

AvW: ¿Considera que esto tiene sentido?

EL: No creo que sea una buena idea. Cuanto más podamos reducir las variables una vez que se haya desplegado, mejor. Creo que deberíamos intentar solucionar todo esto antes. Si no se ha hecho antes, es que alguien no ha hecho bien su trabajo, es mi forma de verlo. Si las cosas se hacen bien, en el momento de la activación no debería haber ninguna controversia. Debería ser: “Saquemos esto y hagámoslo”.

LD: Esa podría ser una razón para empezar con un plazo de 1 año. Luego podemos pasarlo a 6 meses si resulta ser demasiado tiempo.

EL: Then we are inviting more controversy potentially. Last time it was kind of a mess with having to deploy BIP 148 then BIP 91 and then all this other stuff to patch it. The less patches necessary there the better. I think it sets a bad precedent if it is not decisive. If the community has not decided to do this it should not be done at all. If it has decided to do it then it should be decisive and quick. I think that is the best precedent we can set. The more we delay stuff and the more there is controversy, it just invites a lot more potential for things to happen in the future that could be problematic.

AvW: I could throw another idea out there but I expect your answer will be the same. I will throw it out there anyway. I proposed this idea where you have a long BIP 8 period with forced signaling towards the end and you can speed it up later if you decide to. The opposite of that would be to have a long BIP 8 signaling period without forced signaling towards the end. Then at some point we are going to do forced signaling anyway. I guess your answer would be the same. You don’t like solutions that need to be patched along the way?

EL: Yeah.

Osificación del protocolo

AvW: La última pregunta, creo. A mucha gente le gusta la osificación del protocolo. Les gusta que Bitcoin, al menos en algún momento del futuro, no pueda actualizarse.

EL: Eso me gustaría.

AvW: ¿Por qué le gustaría eso?

EL: Porque eso elimina la política del protocolo. Mientras se puedan cambiar las cosas, se abre la puerta a la política. Todas las formas constitucionales de gobierno, incluso muchas de las religiones abrahámicas, se basan en esta idea de que tienes esta cosa que es un protocolo que no cambia. Cada vez que hay un cambio hay algún tipo de cisma o algún tipo de ruptura en el sistema. Cuando se trata de algo en lo que el efecto de red es un componente tan grande de todo esto y las divisiones pueden ser problemáticas en términos de que la gente no pueda comerciar entre sí, creo que cuanto menos política haya involucrada, mejor. La gente va a hacer política al respecto. Cuanto más se juegue, más se querrá atacar. Cuantos menos vectores de ataque haya, más seguro será. Estaría bien que pudiéramos conseguir que no hubiera más cambios en el protocolo en la capa base. Siempre va a haber mejoras que se pueden proponer porque siempre aprendemos a posteriori. Siempre podemos decir “podemos mejorar esto. Podemos hacer este tipo de cosas mejor”. La cuestión es dónde ponemos el límite. ¿Dónde decimos “ya está bien y no hay más cambios a partir de ahora”? ¿Se puede hacer eso realmente? ¿Es posible que la gente decida realmente que el protocolo va a ser así? Además, hay otra cuestión: ¿qué pasa si más adelante se descubre algún problema que requiera algún cambio en el protocolo para solucionarlo? Algún tipo de error catastrófico o alguna vulnerabilidad o algo así. En ese momento podría haber una medida de emergencia para hacerlo, lo que podría no sentar precedente para que este tipo de cosas se conviertan en algo habitual. Cada vez que se invoca cualquier tipo de medidas de emergencia se abre la puerta a los ataques. No sé dónde está ese límite. Creo que es una cuestión realmente difícil.

AvW: Creo que Nick Szabo lo ha descrito como escalabilidad social. A eso se refiere. Cree que la osificación beneficiaría a la escalabilidad social.

LD: Creo que en un futuro lejano se puede considerar, pero en la etapa actual de Bitcoin, en la que se encuentra, si Bitcoin deja de mejorar eso abre la puerta a que las altcoins lo sustituyan y Bitcoin acabe siendo irrelevante.

AvW: ¿Dices que quizás en algún momento en el futuro?

LD: En un futuro lejano.

EL: Es demasiado pronto para decirlo.

AvW: ¿Tiene alguna idea de cuándo se hará?

EL: Creo que cualquiera que esté seguro de algo así está lleno de s**. No creo que nadie entienda aún bien este problema.

LD: Hay demasiadas cosas que no entendemos sobre cómo debería funcionar Bitcoin. Hay demasiadas mejoras que se podrían hacer. En este momento parece que debería estar descartado.

AvW: ¿Por qué sería un problema? Digamos que esta bifurcación suave falla por alguna razón. No se activa. Nos enteramos en los próximos 2 años que Bitcoin no puede actualizarse más y esto es todo. Esto es con lo que vamos a vivir. ¿Por qué es esto un problema? ¿O es un problema?

EL: Hay ciertos problemas que existen con el protocolo ahora mismo. La privacidad es uno de los más importantes. Podría haber otras mejoras potenciales en la criptografía que permitan una compresión mucho más sofisticada, o más privacidad, u otro tipo de cosas que serían significativamente beneficiosas. Si alguna otra moneda es capaz de lanzarse con estas características, tiene el problema de no tener el mismo tipo de historia fundacional que Bitcoin, y ese tipo de mito creo que es necesario para crear un movimiento como este. Crea incentivos para los primeros jugadores… Lo que ha sucedido con las altcoins y todas estas ICOs y cosas así es que los fundadores básicamente terminan teniendo demasiado control sobre el protocolo. Ese es un aspecto por el que creo que Bitcoin mantiene su efecto de red. Pero podría haber alguna mejora técnica en algún momento que sea tan significativa que Bitcoin quedaría en una seria desventaja si no incorporara algo así.

LD: En este momento creo que es un hecho que lo habrá.

AvW: Has empezado mencionando la privacidad. ¿No es algo que crees que podría resolverse bastante bien en segundas capas tal y como está el protocolo ahora mismo?

EL: Posiblemente. No estoy seguro. No creo que nadie tenga una respuesta completa a esto, por desgracia.

LD: La definición de “suficientemente bueno” puede cambiar a medida que mejore la tecnología que invade la privacidad. Si los gobiernos, o ni siquiera los gobiernos, cualquiera, mejoran a la hora de invadir la privacidad de todos, aquello contra lo que tenemos que protegernos bien podría subir el listón.

Taproot

AvW: Podemos hablar un poco de Taproot. ¿Es una buena idea? ¿Qué tiene de bueno Taproot? ¿Por qué lo apoya si lo apoya?

LD: Simplifica significativamente lo que tiene que ocurrir en la cadena. Ahora mismo tenemos todas estas capacidades de contratos inteligentes, que tenemos desde 2009, pero en la mayoría de los casos no es necesario tener el contrato inteligente si ambas partes ven que va a terminar de una determinada manera. Ambas partes pueden firmar una sola transacción y evitar todo lo relacionado con los contratos inteligentes. Este es el principal beneficio de Taproot. Puedes evitar todo el contrato inteligente en la mayoría de los casos. Entonces todos los nodos completos, tienen una verificación más barata, muy poca sobrecarga.

EL: Y todas las transacciones tendrían el mismo aspecto, por lo que nadie podría ver cuáles son los contratos. Todos los contratos se ejecutarían fuera de la cadena, lo que supondría una mejora significativa para la escalabilidad y la privacidad. Creo que es una victoria para todos.

LD: En lugar de que todo el mundo ejecute el contrato inteligente, son sólo los participantes.

EL: Que es la forma en que debería haber sido al principio, pero creo que tomó un tiempo hasta que la gente se dio cuenta de que eso es lo que el script debería estar haciendo. Al principio se pensó que podíamos tener estos scripts que se ejecutan en la cadena. Esta es la forma en que se hizo porque no creo que Satoshi pensó en esto completamente. Él sólo quería lanzar algo. Ahora tenemos mucha retrospectiva que no teníamos entonces. Ahora es obvio que realmente el blockchain se trata de autorizar transacciones, no se trata de procesar las condiciones de los contratos en sí. Todo eso se puede hacer offchain y se puede hacer muy bien offchain. Al final lo único que hay que hacer es que los participantes firmen que ha ocurrido y ya está. Eso es todo lo que le importa a todo el mundo. Todo el mundo está de acuerdo, así que ¿cuál es el problema? Si todo el mundo está de acuerdo, no hay ningún problema.
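
Para concretar el mecanismo que describen: en Taproot (BIP 341) la clave de salida es la clave interna ajustada con un hash del árbol de scripts, Q = P + t·G con t = H_TapTweak(P || raíz_merkle), de modo que un gasto cooperativo por la ruta de clave no revela ningún script. Un esbozo en Python que calcula sólo ese hash etiquetado, con valores de ejemplo (la suma de puntos de curva elíptica se omite):

```python
import hashlib

def tagged_hash(etiqueta: str, datos: bytes) -> bytes:
    """Hash etiquetado de BIP 340/341: SHA256(SHA256(tag) || SHA256(tag) || datos)."""
    tag = hashlib.sha256(etiqueta.encode()).digest()
    return hashlib.sha256(tag + tag + datos).digest()

# Clave interna (x-only, 32 bytes) y raíz merkle del árbol de scripts: valores de ejemplo.
clave_interna = bytes.fromhex("00" * 32)
raiz_merkle = bytes.fromhex("11" * 32)

t = tagged_hash("TapTweak", clave_interna + raiz_merkle)
# La clave de salida sería Q = P + t*G (suma de puntos EC, omitida aquí);
# quien gasta por la ruta de clave sólo publica una firma Schnorr sobre Q,
# y el árbol de scripts nunca aparece en la cadena.
print(t.hex())
```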

AvW: ¿No parece haber ningún inconveniente en Taproot? ¿Hay algún inconveniente, ha oído hablar de alguna preocupación? Creo que hubo un email en la lista de correo hace un tiempo con algunas preocupaciones.

LD: No se me ocurre ninguna. No hay ninguna razón para no desplegarlo, al menos no se me ocurre ninguna razón para no usarlo. Sí que arrastra el sesgo de SegWit hacia los bloques más grandes, pero eso es algo que hay que considerar independientemente. No hay ninguna razón para vincular las características al tamaño del bloque.

AvW: Los bloques deberían seguir siendo más pequeños, Luke, ¿sigue siendo esa su postura?

LD: Sí, pero eso es independiente de Taproot.

AvW: ¿Hay alguna otra bifurcación suave que os entusiasme o que podáis ver desplegada en Bitcoin en los próximos dos años?

LD: Está ANYPREVOUT, que antes se llamaba NOINPUT, y creo que está progresando mucho. También está CHECKTEMPLATEVERIFY, al que todavía no he prestado demasiada atención, pero que parece contar con un importante apoyo de la comunidad. Una vez que Taproot se despliegue, probablemente habrá un montón de otras mejoras además de eso.

EL: Después de la última bifurcación suave, la activación de SegWit, quedé muy quemado por todo el proceso. Fue realmente un proceso de más de dos años. Todo esto empezó en 2015 y no se activó hasta el 1 de agosto de 2017. Eso es más de dos años de este tipo de cosas. No sé si tengo ganas de otra batalla tan prolongada. Quiero ver qué sucede con Taproot antes de sopesar otras bifurcaciones suaves. Taproot es la que parece tener el mayor apoyo en este momento, en el sentido de ser una obviedad, algo que estaría bien tener. Me gustaría ver eso primero. Una vez que se active, tal vez tenga otras ideas sobre qué hacer a continuación. Ahora mismo no lo sé, es demasiado pronto.

AvW: ¿Cuál es su opinión final sobre la activación de bifurcaciones suaves? ¿Qué quiere decir a nuestros espectadores?

EL: Creo que es realmente importante que establezcamos un buen precedente. He hablado de tres categorías diferentes de riesgos. La primera categoría de riesgo es sólo la técnica, asegurándose de que el código no tiene ningún error y cosas así. La segunda es la metodología de activación y asegurarse de que la red no se divide. La tercera es con los precedentes. Invitando a posibles ataques en el futuro por parte de gente que explote el propio proceso. La tercera parte es la que creo que se entiende menos. La primera parte es la que más se entiende incluso en 2015. La segunda categoría es la que toda la activación de SegWit nos enseñó muchas cosas aunque estuviéramos hablando de BIP 8 y BIP 9. Los riesgos de la categoría 3 ahora mismo creo que son muy desconocidos. Esa es mi mayor preocupación. Me gustaría ver que no hay ningún tipo de precedente establecido donde este tipo de cosas podrían ser explotadas en el futuro. Es muy difícil trazar la línea exacta de lo agresivos que debemos ser con esto y si eso prepara algo para que alguien más en el futuro haga algo malo. Creo que sólo podemos aprender haciéndolo y tenemos que correr riesgos. Con el Bitcoin ya estamos en un territorio desconocido. Bitcoin ya es una propuesta arriesgada para empezar. Tenemos que asumir algunos riesgos. Deben ser riesgos calculados y debemos aprender sobre la marcha y corregirnos lo antes posible. Aparte de eso, no sé exactamente cómo acabará siendo el proceso. Estamos aprendiendo mucho. En este momento, creo que la categoría 3 es la clave que vamos a ver si esto ocurre realmente. Es muy importante tenerlo en cuenta.

AvW: ¿Luke, alguna reflexión final?

LD: La verdad es que no.

\ No newline at end of file diff --git a/es/categories/core-dev-tech/index.xml b/es/categories/core-dev-tech/index.xml index c0c2ab50fa..cca6cf3f2c 100644 --- a/es/categories/core-dev-tech/index.xml +++ b/es/categories/core-dev-tech/index.xml @@ -12,7 +12,7 @@ Vamos a hablar un poco sobre bip324. Este es un BIP que ha tenido una larga hist bitcoin-otc sigue siendo la web de confianza PGP que opera desde hace más tiempo utilizando infraestructura de clave pública. Rumplepay podría ser capaz de arrancar una web-of-trust con el tiempo. Direcciones invisibles y pagos silenciosos He aquí algo controvertido.
AssumeUTXOhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/https://twitter.com/kanzure/status/1137008648620838912 Por qué assumeutxo assumeutxo es la continuación espiritual de assumevalid. ¿Por qué queremos hacer esto en primer lugar? En la actualidad, la descarga inicial de bloques tarda horas y días. Varios proyectos en la comunidad han estado implementando medidas para acelerar esto. Casa creo que incluye datadir con sus nodos. Otros proyectos como btcpay tienen varias formas de agrupar esto y firmar las cosas con claves gpg y estas soluciones no son del todo a medias, pero probablemente tampoco son deseables.Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -Formalización de Blind Statechains como servidor de firma ciega minimalista https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +Formalización de Blind Statechains como servidor de firma ciega minimalista https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html Visión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 Documento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf Transcripción anterior: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ @@ -21,16 +21,16 @@ https://twitter.com/kanzure/status/1136939003666685952 https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-Design-Philosophy &ldquo;Elligator al cuadrado: Puntos uniformes en curvas elípticas de orden primo como cadenas aleatorias uniformes&rdquo; https://eprint.iacr.org/2014/043 Introducción Esta propuesta lleva años en marcha. Muchas ideas de sipa y gmaxwell fueron a parar al bip151. Hace años decidí intentar sacar esto adelante. Hay bip151 que de nuevo la mayoría de las ideas no son mías sino que vienen de sipa y gmaxwell. La propuesta original fue retirada porque descubrimos formas de hacerlo mejor.Hardware Walletshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-hardware-wallets/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-hardware-wallets/https://twitter.com/kanzure/status/1136924010955104257 -¿Cuánto debería hacer Bitcoin Core y cuánto otras bibliotecas? Andrew Chow escribió la maravillosa herramienta HWI. Ahora mismo tenemos un pull request para soportar firmantes externos. El script HWI puede hablar con la mayoría de los monederos hardware porque tiene todos los controladores incorporados, y puede obtener claves de ellos, y firmar transacciones arbitrarias. Eso es más o menos lo que hace. Es un poco manual, sin embargo. 
Tienes que introducir algunos comandos python; llamar a algún RPC de Bitcoin Core para obtener el resultado; así que escribí algunos métodos RPC de conveniencia para Bitcoin Core que te permiten hacer las mismas cosas con menos comandos.Signethttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +¿Cuánto debería hacer Bitcoin Core y cuánto otras bibliotecas? Andrew Chow escribió la maravillosa herramienta HWI. Ahora mismo tenemos un pull request para soportar firmantes externos. El script HWI puede hablar con la mayoría de los monederos hardware porque tiene todos los controladores incorporados, y puede obtener claves de ellos, y firmar transacciones arbitrarias. Eso es más o menos lo que hace. Es un poco manual, sin embargo. Tienes que introducir algunos comandos python; llamar a algún RPC de Bitcoin Core para obtener el resultado; así que escribí algunos métodos RPC de conveniencia para Bitcoin Core que te permiten hacer las mismas cosas con menos comandos.Signethttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html https://twitter.com/kanzure/status/1136980462524608512 Introducción Voy a hablar un poco de signet. ¿Alguien no sabe lo que es signet? La idea es tener una firma del bloque o del bloque anterior. La idea es que testnet está horriblemente roto para probar cosas, especialmente para probar cosas a largo plazo. Hay grandes reorgs en testnet. ¿Qué pasa con testnet con un ajuste de dificultad menos roto? Testnet es realmente para probar mineros. Uno de los objetivos es que quieras una falta de fiabilidad predecible y no una falta de fiabilidad que destroce el mundo.Discusión general sobre SIGHASH_NOINPUT, OP_CHECKSIGFROMSTACK, and OP_SECURETHEBAGhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/SIGHASH_NOINPUT, ANYPREVOUT, OP_CHECKSIGFROMSTACK, OP_CHECKOUTPUTSHASHVERIFY, and OP_SECURETHEBAG https://twitter.com/kanzure/status/1136636856093876225 Al parecer, hay algunos mensajes políticos en torno a OP_SECURETHEBA y &ldquo;asegurar la bolsa&rdquo; podría ser una cosa de Andrew Yang -SIGHASH_NOINPUT Muchos de nosotros estamos familiarizados con NOINPUT. ¿Alguien necesita una explicación? ¿Cuál es la diferencia entre el NOINPUT original y el nuevo? NOINPUT asusta al menos a algunas personas. Si sólo hacemos NOINPUT, ¿empezará a causar problemas en bitcoin? ¿Significa que los intercambios tienen que empezar a mirar el historial de transacciones y poner en la lista negra NOINPUT en el historial reciente hasta que esté profundamente enterrado?Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +SIGHASH_NOINPUT Muchos de nosotros estamos familiarizados con NOINPUT. ¿Alguien necesita una explicación? 
¿Cuál es la diferencia entre el NOINPUT original y el nuevo? NOINPUT asusta al menos a algunas personas. Si sólo hacemos NOINPUT, ¿empezará a causar problemas en bitcoin? ¿Significa que los intercambios tienen que empezar a mirar el historial de transacciones y poner en la lista negra NOINPUT en el historial reciente hasta que esté profundamente enterrado?Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introducción No hay mucho nuevo de que hablar. No está claro lo de CODESEPARATOR. Se quiere convertir en regla de consenso que las transacciones no pueden ser mayores de 100 kb. ¿No hay reacciones a eso? De acuerdo. Bien, lo haremos. Hagámoslo. ¿Todos saben cuál es esta propuesta? El tiempo de validación para cualquier bloque, éramos perezosos a la hora de arreglar esto. Segwit fue un primer paso para arreglar esto, dando a la gente una manera de hacer esto de una manera más eficiente.Taproothttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ Previamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 @@ -61,9 +61,9 @@ Con graftroot y taproot, nunca para hacer cualquier scripts (que eran un hack pa Tomas todos los caminos que tiene; así que en lugar de eso, se convierte en &hellip; cierta condición, o ciertas no condiciones&hellip; Tomas todos los posibles ifs, usas esto, dices que es uno de estos, luego especificas cuál, y lo muestras y todos los demás pueden validar esto.Bellare-Nevenhttps://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/Ver también http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/ Se ha publicado, ha existido durante una década, y es ampliamente citado. En Bellare-Neven, es en sí mismo, es un esquema multi-firma que significa múltiples pubkeys y un mensaje. Debe tratar las autorizaciones individuales para gastar entradas, como mensajes individuales. Lo que necesitamos es un esquema de firma agregada interactiva. El artículo de Bellare-Neven sugiere una forma trivial de construir un esquema de firma agregada a partir de un esquema multisig donde interactivamente todos firman el mensaje de todos.Intercambios atómicos de curvas transversaleshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/https://twitter.com/kanzure/status/971827042223345664 Borrador de un próximo documento de guiones sin guión. 
Esto fue a principios de 2017. Pero ya ha pasado todo un año. -transacciones lightning posteriores a schnorr https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html +transacciones lightning posteriores a schnorr https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html Una firma adaptadora.. si tienes diferentes generadores, entonces los dos secretos a revelar, simplemente le das a alguien los dos, más una prueba de un log discreto, y entonces dices aprende el secreto a uno que consiga que la revelación sea la misma.Árboles de sintaxis abstracta merkleizadoshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/Thu, 07 Sep 2017 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/https://twitter.com/kanzure/status/907075529534328832 -Árboles de sintaxis abstracta merkleizados (MAST) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html +Árboles de sintaxis abstracta merkleizados (MAST) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html Voy a hablar del esquema que publiqué ayer en la lista de correo, que consiste en implementar MAST (árboles de sintaxis abstracta merkleizados) en bitcoin de la forma menos invasiva posible. Está dividido en dos grandes rasgos de consenso que juntos nos dan MAST. Empezaré con el último BIP. Esto es la evaluación de la llamada de cola. Podemos generalizar P2SH para darnos capacidades más generales, que un solo script de redimensionamiento.Agregación de firmashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/Wed, 06 Sep 2017 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/https://twitter.com/kanzure/status/907065194463072258 Agregación de firmas Sipa, ¿puedes firmar y verificar firmas ECDSA a mano? No. Sobre GF(43), tal vez. Los inversos podrían tomar un poco de tiempo para computar. Sobre GF(2). 
diff --git a/es/categories/podcast/index.xml b/es/categories/podcast/index.xml index d3a2c0fbef..f7125314b8 100644 --- a/es/categories/podcast/index.xml +++ b/es/categories/podcast/index.xml @@ -5,13 +5,13 @@ Episodio anterior en lockinontimeout (LOT): https://btctranscripts.com/bitcoin-m Episodio anterior sobre Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/ Aaron van Wirdum en &ldquo;Ahora hay dos clientes de activación de Taproot, aquí está el porqué&rdquo;: https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why Transcripción por: Michael Folkson -Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html +Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado. Sjors, ¿cuál es tu juego de palabras de la semana? Sjors Provoost (SP): En realidad te pedí un juego de palabras y me dijiste &ldquo;Corta, reedita. Vamos a hacerlo de nuevo&rdquo;. No tengo un juego de palabras esta semana. AvW: Los juegos de palabras son lo tuyo. SP: La última vez intentamos esto de LOT.Activación de Taproot y LOT=true vs LOT=falsehttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/Fri, 26 Feb 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki -Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -Argumento adicional para LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +Argumento adicional para LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Artículo de Aaron van Wirdum en LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation Introducción Aaron van Wirdum (AvW): En directo desde Utrecht, este es el van Wirdum Sjorsnado. Sjors, haz el juego de palabras. Sjors Provoost (SP): Tenemos &ldquo;mucho&rdquo; que hablar. 
diff --git a/es/index.json b/es/index.json index 47ca8ffd0a..dd3c8d93a9 100644 --- a/es/index.json +++ b/es/index.json @@ -1 +1 @@ -[{"uri":"/es/tags/assumeutxo/","title":"assumeUTXO","content":""},{"uri":"/es/bitcoin-core-dev-tech/2023-04/2023-04-27-assumeutxo/","title":"AssumeUTXO update","content":"Objetivos permitir a los nodos obtener un conjunto utxo rápidamente (1h) al mismo tiempo, sin grandes concesiones en materia de seguridad Enfoque Proporcionar instantánea utxo serializada obtener primero la cadena de cabeceras, cargar la instantánea y deserializar, sincronizar con la punta a partir de ahí a continuación, iniciar la verificación de fondo con una segunda instantánea por último, comparar los hashes cuando el IBD en segundo plano llega a la base de la instantánea Actualización del …"},{"uri":"/es/bitcoin-core-dev-tech/","title":"Bitcoin Core Dev Tech","content":" Bitcoin Core Dev Tech 2015 Bitcoin Core Dev Tech 2017 Bitcoin Core Dev Tech 2018 (Mar) Bitcoin Core Dev Tech 2018 (Oct) Bitcoin Core Dev Tech 2019 Bitcoin Core Dev Tech 2022 Bitcoin Core Dev Tech 2023 (Apr) "},{"uri":"/es/bitcoin-core-dev-tech/2023-04/","title":"Bitcoin Core Dev Tech 2023 (Apr)","content":" Agrupación de Mempool Apr 25, 2023 Suhas Daftuar, Pieter Wuille Cluster mempool ASMap Apr 25, 2023 Fabian Jahr Bitcoin core Security enhancements AssumeUTXO update Apr 27, 2023 James O\u0026amp;#39;Beirne Bitcoin core Assumeutxo Fuzzing Apr 27, 2023 Niklas Gögge Bitcoin core Developer tools Libbitcoin kernel Apr 26, 2023 thecharlatan Bitcoin core Build system "},{"uri":"/es/tags/bitcoin-core/","title":"bitcoin-core","content":""},{"uri":"/es/categories/","title":"Categories","content":""},{"uri":"/es/categories/core-dev-tech/","title":"core-dev-tech","content":""},{"uri":"/es/tags/developer-tools/","title":"developer-tools","content":""},{"uri":"/es/bitcoin-core-dev-tech/2023-04/2023-04-27-fuzzing/","title":"Fuzzing","content":"Slides: https://docs.google.com/presentation/d/1NlTw_n60z9bvqziZqU3H3Jw7Xs5slnQoehYXhEKrzOE\nFuzzing El fuzzing se realiza de forma continua. Los objetivos del fuzzing pueden dar sus frutos incluso años más tarde encontrando nuevos bugs. Ejemplo en la diapositiva sobre libFuzzer fuzzing una función parse_json que podría bloquearse en alguna entrada extraña, pero no informará de entradas json no válidas que pasan el análisis. libFuzzer hace la cobertura de bucle de retroalimentación guiada + ayuda …"},{"uri":"/es/speakers/james-obeirne/","title":"James O'Beirne","content":""},{"uri":"/es/speakers/niklas-g%C3%B6gge/","title":"Niklas Gögge","content":""},{"uri":"/es/speakers/","title":"Speakers","content":""},{"uri":"/es/tags/","title":"Tags","content":""},{"uri":"/es/","title":"Transcripciones de ₿itcoin","content":""},{"uri":"/es/tags/build-system/","title":"build-system","content":""},{"uri":"/es/bitcoin-core-dev-tech/2023-04/2023-04-26-libbitcoin-kernel/","title":"Libbitcoin kernel","content":"Preguntas y respuestas Q: bitcoind y bitcoin-qt vinculado contra el núcleo de la biblioteca en el futuro?\npresentador: sí, ese es un / el objetivo P: ¿has mirado una implementación de electrum usando libbitcoinkernel?\naudiencia: sí, ¡estaría bien tener algo así! audiencia: ¿también podría hacer el largo índice de direcciones propuesto con eso? audiencia: no solo indice de direcciones, otros indices tambien. 
P: Otros casos de uso:\npúblico: poder ejecutar cosas en iOS P: ¿Debería estar el mempool …"},{"uri":"/es/speakers/thecharlatan/","title":"thecharlatan","content":""},{"uri":"/es/bitcoin-core-dev-tech/2023-04/2023-04-25-mempool-clustering/","title":"Agrupación de Mempool","content":"Problemas Actuales Muchos problemas en el mempool\nEl desalojo está roto. El algoritmo de minería es parte del problema, no es perfecto. RBF es como totalmente roto nos quejamos todo el tiempo, a veces hacemos/no RBF cuando deberíamos/no deberíamos. Desalojo El desalojo es cuando mempool está lleno, y queremos tirar el peor tx. Por ejemplo, pensamos que un tx es el peor en mempool pero es descendiente de un tx \u0026amp;ldquo;bueno\u0026amp;rdquo;. El desalojo de mempool es más o menos lo contrario del algoritmo …"},{"uri":"/es/bitcoin-core-dev-tech/2023-04/2023-04-27-asmap/","title":"ASMap","content":"¿Deberíamos enviarlo en cada versión de Core? La idea inicial es enviar un archivo de mapa en cada versión de Core. Fabian escribió un artículo sobre cómo se integraría en el despliegue (https://gist.github.com/fjahr/f879769228f4f1c49b49d348f80d7635). Algunos desarrolladores señalaron que una opción sería tenerlo separado del proceso de lanzamiento, cualquier colaborador habitual podría actualizarlo cuando quisiera (¿quién lo haría? ¿con qué frecuencia?). Entonces, cuando llegue el momento de la …"},{"uri":"/es/tags/cluster-mempool/","title":"cluster-mempool","content":""},{"uri":"/es/speakers/fabian-jahr/","title":"Fabian Jahr","content":""},{"uri":"/es/speakers/pieter-wuille/","title":"Pieter Wuille","content":""},{"uri":"/es/tags/security-enhancements/","title":"security-enhancements","content":""},{"uri":"/es/speakers/suhas-daftuar/","title":"Suhas Daftuar","content":""},{"uri":"/es/bitcoin-core-dev-tech/2022-10/","title":"Bitcoin Core Dev Tech 2022","content":" BIP324 - Versión 2 del protocolo de transporte cifrado p2p Oct 10, 2022 P2p Bitcoin core Bitcoin Core y GitHub (2022-10-11) Bitcoin core Libsecp256k1 Reunión de mantenedores (2022-10-12) Libsecp256k1 Mercado de tarifas Oct 11, 2022 Fee management Bitcoin core Varios Oct 10, 2022 P2p Bitcoin core "},{"uri":"/es/tags/fee-management/","title":"fee-management","content":""},{"uri":"/es/bitcoin-core-dev-tech/2022-10/2022-10-11-fee-market/","title":"Mercado de tarifas","content":"Hay dos momentos en los que hemos tenido tasas sostenidas: a finales de 2017 y a principios de 2021. A finales de 2017 vimos que muchas cosas se rompían porque la gente no había escrito un software para hacer frente a las tasas variables o algo así. No sé si eso fue un problema tan grande en 2021. Lo que sí me preocupa es que esto empiece a ser una cosa. Si no hay un mercado de tarifas variables y se puede poner 1 sat/vbyte durante varios años, entonces funcionará hasta que no funcione. Así que …"},{"uri":"/es/bitcoin-core-dev-tech/2022-10/2022-10-10-p2p-encryption/","title":"BIP324 - Versión 2 del protocolo de transporte cifrado p2p","content":"Intervenciones anteriores https://btctranscripts.com/scalingbitcoin/milan-2016/bip151-peer-encryption/\nhttps://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/\nhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06-07-p2p-encryption/\nhttps://btctranscripts.com/breaking-bitcoin/2019/p2p-encryption/\nIntroducción y motivación ¿Podemos apagar las luces? \u0026amp;ldquo;A oscuras\u0026amp;rdquo; es un bonito tema para la charla. También tengo café oscuro. 
De acuerdo.\nVamos a hablar un …"},{"uri":"/es/tags/p2p/","title":"P2P","content":""},{"uri":"/es/bitcoin-core-dev-tech/2022-10/2022-10-10-misc/","title":"Varios","content":"Red de confianza Algunos de los operadores de servidores de clave pública interpretaron el GDPR como que ya no pueden operar infraestructuras de clave pública. Tiene que haber otra solución para la distribución p2p de claves y Web-of-Trust.\nbitcoin-otc sigue siendo la web de confianza PGP que opera desde hace más tiempo utilizando infraestructura de clave pública. Rumplepay podría ser capaz de arrancar una web-of-trust con el tiempo.\nDirecciones invisibles y pagos silenciosos He aquí algo …"},{"uri":"/es/speakers/aaron-van-wirdum/","title":"Aaron van Wirdum","content":""},{"uri":"/es/bitcoin-explained/taproot-activation-update/","title":"Actualización de la activación de Taproot","content":"Tema: Actualización de la activación de Taproot: Speedy Trial y el cliente LOT=true\nUbicación: Bitcoin Magazine (en línea)\nFecha: 23 de abril de 2021\nEpisodio anterior en lockinontimeout (LOT): https://btctranscripts.com/bitcoin-magazine/2021-02-26-taproot-activation-lockinontimeout/\nEpisodio anterior sobre Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/\nAaron van Wirdum en \u0026amp;ldquo;Ahora hay dos clientes de activación de Taproot, aquí está el …"},{"uri":"/es/bitcoin-explained/","title":"Bitcoin Explained","content":" Actualización de la activación de Taproot Apr 23, 2021 Sjors Provoost, Aaron van Wirdum Taproot Episode 36 Taproot Activación con Speedy Trial Mar 12, 2021 Sjors Provoost, Aaron van Wirdum Taproot Soft fork activation Episode 31 Activación de Taproot y LOT=true vs LOT=false Feb 26, 2021 Sjors Provoost, Aaron van Wirdum Taproot Soft fork activation Episode 29 "},{"uri":"/es/categories/podcast/","title":"podcast","content":""},{"uri":"/es/speakers/sjors-provoost/","title":"Sjors Provoost","content":""},{"uri":"/es/tags/taproot/","title":"taproot","content":""},{"uri":"/es/tags/soft-fork-activation/","title":"soft-fork-activation","content":""},{"uri":"/es/bitcoin-explained/taproot-activation-speedy-trial/","title":"Taproot Activación con Speedy Trial","content":"Propuesta Speedy Trial: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html\nIntroducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado. Sjors, ¿cuál es tu juego de palabras de la semana?\nSjors Provoost (SP): En realidad te pedí un juego de palabras y me dijiste \u0026amp;ldquo;Corta, reedita. Vamos a hacerlo de nuevo\u0026amp;rdquo;. 
No tengo un juego de palabras esta semana.\nAvW: Los juegos de palabras son lo tuyo.\nSP: La última vez intentamos esto de …"},{"uri":"/es/bitcoin-explained/taproot-activation-lockinontimeout/","title":"Activación de Taproot y LOT=true vs LOT=false","content":"BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki\nArgumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html\nArgumento adicional para LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html\nArtículo de Aaron van Wirdum en LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation …"},{"uri":"/es/bitcoin-design/","title":"Bitcoin-designs","content":""},{"uri":"/es/tags/bitcoin-dise%C3%B1o/","title":"bitcoin-diseño","content":""},{"uri":"/es/bitcoin-design/misc/","title":"Miscellaneous","content":" Reunión introductoria de la GUI de Bitcoin Core Aug 20, 2020 Bitcoin diseño "},{"uri":"/es/bitcoin-design/misc/2020-08-20-bitcoin-core-gui/","title":"Reunión introductoria de la GUI de Bitcoin Core","content":"Tema: Enlace del orden del día publicado a continuación\nUbicación: Diseño de Bitcoin (en línea)\nVídeo: No se ha publicado ningún vídeo en línea\nAgenda: https://github.com/BitcoinDesign/Meta/issues/8\nLa conversación se ha anonimizado por defecto para proteger la identidad de los participantes. Aquellos que han expresado su preferencia por que se atribuyan sus comentarios son atribuidos. Si usted ha participado y quiere que sus comentarios sean atribuidos, póngase en contacto con nosotros. …"},{"uri":"/es/bitcoin-magazine/","title":"Bitcoin Magazine","content":" Cómo activar un nuevo Soft Fork Aug 03, 2020 Eric Lombrozo, Luke Dashjr Taproot Soft fork activation "},{"uri":"/es/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/","title":"Cómo activar un nuevo Soft Fork","content":"Location: Bitcoin Magazine (en línea)\nAaron van Wirdum Aaron van Wirdum en Bitcoin Magazine sobre el BIP 8, el BIP 9 o la activación del Soft Fork moderno: https://bitcoinmagazine.com/articles/bip-8-bip-9-or-modern-soft-fork-activation-how-bitcoin-could-upgrade-next\nDavid Harding sobre las propuestas de activación de Taproot: https://gist.github.com/harding/dda66f5fd00611c0890bdfa70e28152d\nIntroducción Aaron van Wirdum (AvW): Eric, Luke bienvenido. Feliz Día de la Independencia de Bitcoin. 
¿Cómo …"},{"uri":"/es/speakers/eric-lombrozo/","title":"Eric Lombrozo","content":""},{"uri":"/es/speakers/luke-dashjr/","title":"Luke Dashjr","content":""},{"uri":"/es/speakers/andreas-antonopoulos/","title":"Andreas Antonopoulos","content":""},{"uri":"/es/andreas-antonopoulos/","title":"Andreas Antonopoulos","content":" Bitcoin Q\u0026amp;amp;A: Descarga inicial de la cadena de bloques Oct 23, 2018 Andreas Antonopoulos Consensus enforcement División de la semilla Apr 08, 2020 Andreas Antonopoulos Cartera Seguridad Firmas Schnorr Oct 07, 2018 Andreas Antonopoulos Schnorr signatures Seguridad de la cartera de hardware Feb 01, 2019 Andreas Antonopoulos Cartera hardware Validación Cartera Senado de Canadá Bitcoin Oct 08, 2014 Andreas Antonopoulos Seguridad "},{"uri":"/es/tags/cartera/","title":"cartera","content":""},{"uri":"/es/andreas-antonopoulos/2020-04-08-andreas-antonopoulos-seed-splitting/","title":"División de la semilla","content":"Tema: ¿Por qué es una mala idea dividir las semillas?\nLocalisación: Canal de YouTube de Andreas Antonopoulos\n¿Por qué es una mala idea dividir las semillas? Esto es algo que se discute todo el tiempo, especialmente en los foros de novatos. Realmente hay que tener mucho cuidado. Un amigo tuvo la idea de lograr una protección de 2 de 3 de la semilla de mi cartera almacenando la semilla de la siguiente manera:\nUbicación 1: palabras clave 1-8 y 9-16 Ubicación 2: palabras clave 1-8 y 17-24 Ubicación …"},{"uri":"/es/tags/seguridad/","title":"seguridad","content":""},{"uri":"/es/austin-bitcoin-developers/","title":"Austin Bitcoin Developers","content":"https://austinbitdevs.com/\nBitcoin CLI y Regtest Aug 17, 2018 Richard Bondi Bitcoin core Developer tools Drivechain May 27, 2019 Paul Sztorc Investigación Lightning Capa 2 Sidechains Hardware Wallets Jun 29, 2019 Stepan Snigirev Hardware wallet Security problems Seminario Socrático 2 Aug 22, 2019 Research Hardware wallet Seminario Socrático 3 Oct 14, 2019 Miniscript Taproot Seminario Socrático 4 Nov 19, 2019 Seminario Socrático 5 Jan 21, 2020 Lightning Seminario Socrático 6 Feb 24, 2020 Taproot "},{"uri":"/es/categories/reuni%C3%B3n/","title":"reunión","content":""},{"uri":"/es/austin-bitcoin-developers/2020-02-24-socratic-seminar-6/","title":"Seminario Socrático 6","content":"https://www.meetup.com/Austin-Bitcoin-Developers/events/268812642/\nhttps://bitdevs.org/2020-02-12-socratic-seminar-101\nhttps://twitter.com/kanzure/status/1232132693179207682\nIntroducción Muy bien, vamos a empezar. Reúnanse. A Phil le vendría bien algo de compañía. A nadie le gusta la primera fila. Tal vez los bancos. Así que tengo un formato un poco diferente de cómo quiero hacerlo esta semana. Normalmente cubro una amplia serie de temas que robo de la lista de reuniones de Nueva York. 
Revisando …"},{"uri":"/es/advancing-bitcoin/","title":"Advancing Bitcoin","content":" Advancing Bitcoin 2019 Advancing Bitcoin 2020 "},{"uri":"/es/advancing-bitcoin/2020/","title":"Advancing Bitcoin 2020","content":" Carteras de descriptores Feb 06, 2020 Andrew Chow Cartera Miniscript Feb 07, 2020 Andrew Poelstra Miniscript Miniscript Introducción Feb 06, 2020 Andrew Poelstra Miniscript Cartera Taller de depuración de Bitcoin Core Feb 07, 2020 Fabian Jahr Bitcoin core Developer tools Taller Signet Feb 07, 2020 Kalle Alm Taproot Signet Taproot Lightning Feb 06, 2020 Antoine Riard Taproot Lightning Ptlc "},{"uri":"/es/speakers/andrew-poelstra/","title":"Andrew Poelstra","content":""},{"uri":"/es/speakers/kalle-alm/","title":"Kalle Alm","content":""},{"uri":"/es/tags/miniscript/","title":"miniscript","content":""},{"uri":"/es/advancing-bitcoin/2020/2020-02-07-andrew-poelstra-miniscript/","title":"Miniscript","content":"Taller sobre el avance de Bitcoin\nMiniscript\nPágina Web: https://bitcoin.sipa.be/miniscript/\nTaller de reposición: https://github.com/apoelstra/miniscript-workshop\nTranscripción de la presentación de Londres Bitcoin Devs Miniscript: https://btctranscripts.com/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript/\nParte 1 Así que lo que vamos a hacer, es un par de cosas. Vamos a ir a través del sitio web de Miniscript y aprender sobre cómo se construye Miniscript, cómo funciona el sistema de …"},{"uri":"/es/tags/signet/","title":"signet","content":""},{"uri":"/es/categories/taller/","title":"taller","content":""},{"uri":"/es/advancing-bitcoin/2020/2020-02-07-fabian-jahr-debugging-workshop/","title":"Taller de depuración de Bitcoin Core","content":"Vídeo: No se ha publicado ningún vídeo en Internet\nPresentación de Fabian en el Bitcoin Edge Dev++ 2019: https://btctranscripts.com/scalingbitcoin/tel-aviv-2019/edgedevplusplus/debugging-bitcoin/\nDepuración de Bitcoin Core doc: https://github.com/fjahr/debugging_bitcoin\nTaller de depuración de Bitcoin Core: https://gist.github.com/fjahr/5bf65daaf9ff189a0993196195005386\nIntroducción En primer lugar bienvenido al taller de depuración de Bitcoin Core. 
Todo lo que sé más o menos sobre el uso de un …"},{"uri":"/es/advancing-bitcoin/2020/2020-02-07-kalle-alm-signet-workshop/","title":"Taller Signet","content":"Tema: Taller Signet\nLocalización: El avance de Bitcoin\nVídeo: No se ha publicado ningún vídeo en línea\nPreparémonos mkdir workspace cd workspace git clone https://github.com/bitcoin/bitcoin.git cd bitcoin git remote add kallewoof https://github.com/kallewoof/bitcoin.git git fetch kallewoof git checkout signet ./autogen.sh ./configure -C --disable-bench --disable-test --without-gui make -j5 Cuando intentes ejecutar la parte de configuración vas a tener algunos problemas si no tienes las …"},{"uri":"/es/speakers/andrew-chow/","title":"Andrew Chow","content":""},{"uri":"/es/speakers/antoine-riard/","title":"Antoine Riard","content":""},{"uri":"/es/advancing-bitcoin/2020/2020-02-06-andrew-chow-descriptor-wallets/","title":"Carteras de descriptores","content":"Tema: Repensar la arquitectura de las carteras: Carteras de descriptores nativos\nUbicación: El avance de Bitcoin\nDiapositivas: https://www.dropbox.com/s/142b4o4lrbkvqnh/Rethinking%20Wallet%20Architecture_%20Native%20Descriptor%20Wallets.pptx\nSoporte para descriptores de salida en Bitcoin Core: https://github.com/bitcoin/bitcoin/blob/master/doc/descriptors.md\nBitcoin Optech en los descriptores de la secuencia de comandos de salida: https://bitcoinops.org/en/topics/output-script-descriptors/ …"},{"uri":"/es/categories/conferencia/","title":"conferencia","content":""},{"uri":"/es/tags/lightning/","title":"lightning","content":""},{"uri":"/es/advancing-bitcoin/2020/2020-02-06-andrew-poelstra-miniscript-intro/","title":"Miniscript Introducción","content":"Tema: Introducción a Miniscript\nLocación: El avance de Bitcoin\nDiapositivas: https://www.dropbox.com/s/vgh5vaooqqbgg1v/andrew-poelstra.pdf\nIntroducción Hola a todos. Algunos de vosotros estuvisteis en mi presentación de Bitcoin sobre Miniscript hace un par de días donde me las arreglé para pasar más de 2 horas dando una presentación sobre esto. Creo que esta vez la he reducido a 20 minutos, pero no prometo nada. Voy a mantener un ojo en el reloj.\nDescriptores Es muy agradable tener esto …"},{"uri":"/es/tags/ptlc/","title":"ptlc","content":""},{"uri":"/es/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning/","title":"Taproot Lightning","content":"Nombre: Antoine Riard\nTema: A Schnorr-Taproot’ed Lightning\nUbicación: El avance de Bitcoin\nFecha: 6 de febrero de 2020\nDiapositivas: https://www.dropbox.com/s/9vs54e9bqf317u0/Schnorr-Taproot%27ed-LN.pdf\nIntroducción Hoy Schnorr y Taproot para Lightning, es un tema realmente apasionante.\nArquitectura Lightning La arquitectura Lightning para aquellos que no están familiarizados con ella. Usted tiene el blockchain como la capa subyacente. Encima de ella vas a construir un canal, tienes un HTLC y la …"},{"uri":"/es/austin-bitcoin-developers/2020-01-21-socratic-seminar-5/","title":"Seminario Socrático 5","content":"https://www.meetup.com/Austin-Bitcoin-Developers/events/267941700/\nhttps://bitdevs.org/2019-12-03-socratic-seminar-99\nhttps://bitdevs.org/2020-01-09-socratic-seminar-100\nhttps://twitter.com/kanzure/status/1219817063948148737\nLSATs Así que solemos empezar con una demostración de un proyecto basado en un rayo en el que ha estado trabajando durante unos meses.\nEsta no es una idea original para mí. 
Fue presentada por roasbeef, cofundador de Lightning Labs en la conferencia Lightning del pasado …"},{"uri":"/es/austin-bitcoin-developers/2019-11-19-socratic-seminar-4/","title":"Seminario Socrático 4","content":"https://twitter.com/kanzure/status/1196947713658626048\nOrganización de normas para carteras (WSO) Presentar un argumento económico para que los vendedores se unan a una organización y pongan algo de dinero o esfuerzo para construir la organización. El argumento económico es que los miembros comprometidos tienen un cierto nivel de fondos comprometidos, que representan un número X de usuarios, y X mil millones de dólares de fondos custodiados. Esto se puede utilizar como argumento convincente para …"},{"uri":"/es/austin-bitcoin-developers/2019-10-14-socratic-seminar-3/","title":"Seminario Socrático 3","content":"Seminario socrático de desarrolladores de Bitcoin de Austin 3\nhttps://www.meetup.com/Austin-Bitcoin-Developers/events/265295570/\nhttps://bitdevs.org/2019-09-16-socratic-seminar-96\nHemos hecho dos reuniones en este formato. La idea es que en Nueva York hay BitDevs que es una de las reuniones más antiguas. Esto ha estado sucediendo durante cinco años. Tienen un formato llamado socrático donde tienen un tipo talentoso llamado J que los conduce a través de algunos temas y tratan de conseguir un poco …"},{"uri":"/es/baltic-honeybadger/","title":"Baltic Honeybadger","content":" Baltic Honeybadger 2018 Baltic Honeybadger 2019 "},{"uri":"/es/baltic-honeybadger/2019/","title":"Baltic Honeybadger 2019","content":" Coldcard Mk3 - Security in Depth Sep 14, 2019 Rodolfo Novak Seguridad Cartera hardware "},{"uri":"/es/tags/cartera-hardware/","title":"cartera hardware","content":""},{"uri":"/es/baltic-honeybadger/2019/2019-09-14-rodolfo-novak-coldcard-mk3/","title":"Coldcard Mk3 - Security in Depth","content":"Intriduccióno Mi nombre es Rodolfo, he estado alrededor de Bitcoin por un tiempo. Hacemos hardware. Hoy quería entrar un poco en cómo hacer una billetera de hardware segura en términos un poco más legos. Ir a través del proceso de conseguir que se haga.\n¿Cuáles fueron las opciones? Cuando cerré mi última empresa y decidí buscar un lugar para almacenar mis monedas, no pude encontrar un monedero que satisficiera dos cosas que necesitaba. Que era la seguridad física y el código abierto. Hay dos …"},{"uri":"/es/speakers/rodolfo-novak/","title":"Rodolfo Novak","content":""},{"uri":"/es/tags/hardware-wallet/","title":"hardware-wallet","content":""},{"uri":"/es/tags/research/","title":"research","content":""},{"uri":"/es/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/","title":"Seminario Socrático 2","content":"https://twitter.com/kanzure/status/1164710800910692353\nIntroducción Hola. La idea era hacer una reunión de estilo más socrático. Esto fue popularizado por Bitdevs NYC y se extendió a SF. Lo intentamos hace unos meses con Jay. La idea es que recorremos las noticias de investigación, los boletines, los podcasters, hablamos de lo que ha pasado en la comunidad técnica de bitcoin. Vamos a tener diferentes presentadores.\nMike Schmidt hablará de algunos boletines de optech a los que ha contribuido. 
…"},{"uri":"/es/speakers/0xb10c/","title":"0xB10C","content":""},{"uri":"/es/tags/history/","title":"history","content":""},{"uri":"/es/misc/","title":"Misc","content":" Bitcoin\u0026amp;#39;s Academic Pedigree Aug 29, 2017 Arvind Narayanan, Jeremy Clark History Bitcoin\u0026amp;#39;s Security Model - A Deep Dive Nov 13, 2016 Jameson Lopp Security If I\u0026amp;#39;d Known What We Were Starting Sep 20, 2017 Ray Dillinger History The Incomplete History of Bitcoin Development Aug 04, 2019 0xB10C History "},{"uri":"/es/misc/2019-08-04-incomplete_history/","title":"The Incomplete History of Bitcoin Development","content":"Autor: 0xB10C Texto original: https://b10c.me/blog/004-the-incomplete-history-of-bitcoin-development/\nLa historia incompleta del desarrollo de Bitcoin Para comprender plenamente la justificación del estado actual del desarrollo de Bitcoin, el conocimiento de los eventos históricos es esencial. Esta publicación de blog destaca eventos históricos seleccionados, versiones de software y correcciones de errores antes y después de que Satoshi abandonara el proyecto. Además, contiene una sección sobre …"},{"uri":"/es/austin-bitcoin-developers/2019-06-29-hardware-wallets/","title":"Hardware Wallets","content":"https://twitter.com/kanzure/status/1145019634547978240\nVer también:\nExtracción de semillas de hardware wallets El futuro de las hardware wallets coredev.tech 2019 debate de hardware wallets Antecedentes Hace algo más de un año, pasé por la clase de Jimmy Song de Programación de Blockchain. Ahí fue donde conocí a M, donde él era el asistente de enseñanza. Básicamente, se escribe una biblioteca de bitcoin en python desde cero. La API de esta librería y las clases y funcciones que usa Jimmy son muy …"},{"uri":"/es/tags/security-problems/","title":"security-problems","content":""},{"uri":"/es/speakers/stepan-snigirev/","title":"Stepan Snigirev","content":""},{"uri":"/es/chaincode-labs/","title":"Chaincode Labs","content":" Residencia de Chaincode "},{"uri":"/es/speakers/christian-decker/","title":"Christian Decker","content":""},{"uri":"/es/tags/eltoo/","title":"eltoo","content":""},{"uri":"/es/chaincode-labs/chaincode-residency/2019-06-25-christian-decker-eltoo/","title":"Eltoo - El (lejano) futuro de lightning","content":"Tema: Eltoo: El (lejano) futuro de lightning\nLugar: Chaincode Labs\nDiapositivas: https://residency.chaincode.com/presentations/lightning/Eltoo.pdf\nEltoo white paper: https://blockstream.com/eltoo.pdf\nArtículo de Bitcoin Magazine: https://bitcoinmagazine.com/articles/noinput-class-bitcoin-soft-fork-simplify-lightning\nIntro ¿Quién nunca ha oído hablar de eltoo? Es mi proyecto favorito y estoy bastante orgulloso de él. Intentaré que esto sea breve. 
Me dijeron que todos vieron mi presentación sobre …"},{"uri":"/es/chaincode-labs/chaincode-residency/","title":"Residencia de Chaincode","content":" Eltoo - El (lejano) futuro de lightning Jun 25, 2019 Christian Decker Eltoo Lightning Security Models Jun 17, 2019 John Newbery Security Taproot Cryptography "},{"uri":"/es/categories/residency/","title":"residency","content":""},{"uri":"/es/tags/cryptography/","title":"cryptography","content":""},{"uri":"/es/speakers/john-newbery/","title":"John Newbery","content":""},{"uri":"/es/tags/security/","title":"security","content":""},{"uri":"/es/chaincode-labs/chaincode-residency/2019-06-17-newbery-security-models/","title":"Security Models","content":"Texto original: https://btctranscripts.com/chaincode-labs/chaincode-residency/2019-06-17-john-newbery-security-models/\nTranscripción de: Caralie Chrisco\nUbicación: Residencia de Chaincode Labs 2019\nDiapositivas: https://residency.chaincode.com/presentations/bitcoin/security_models.pdf\nJohn Newbery: Muy bien, modelos de seguridad. Esto va a ser como un recorrido rápido de varias cosas, vista de muy alto nivel. Empezaré dándote algún tipo de marco para pensar en las cosas. Así que en criptografía, …"},{"uri":"/es/speakers/jonas-nick/","title":"Jonas Nick","content":""},{"uri":"/es/lets-talk-bitcoin-podcast/","title":"Lets Talk Bitcoin Podcast","content":" The Tools and The Work Jun 09, 2019 Pieter Wuille, Jonas Nick Taproot Schnorr signatures "},{"uri":"/es/tags/schnorr-signatures/","title":"schnorr-signatures","content":""},{"uri":"/es/lets-talk-bitcoin-podcast/2019-06-09-ltb-pieter-wuille-jonas-nick/","title":"The Tools and The Work","content":"Hablemos de Bitcoin con Pieter Wuille y Jonas Nick – 9 de Junio de 2019 (\u0026amp;ldquo;Let\u0026amp;rsquo;s Talk Bitcoin\u0026amp;rdquo;)\nhttps://twitter.com/kanzure/status/1155851797568917504\nParte 1: https://letstalkbitcoin.com/blog/post/lets-talk-bitcoin-400-the-tools-and-the-work\nParte 2: https://letstalkbitcoin.com/blog/post/lets-talk-bitcoin-401-the-tools-and-the-work-part-2\nBorrador de BIP-Schnorr: https://github.com/sipa/bips/blob/bip-schnorr/bip-schnorr.mediawiki\nBorrador de BIP-Taproot: …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-07-assumeutxo/","title":"AssumeUTXO","content":"https://twitter.com/kanzure/status/1137008648620838912\nPor qué assumeutxo assumeutxo es la continuación espiritual de assumevalid. ¿Por qué queremos hacer esto en primer lugar? En la actualidad, la descarga inicial de bloques tarda horas y días. Varios proyectos en la comunidad han estado implementando medidas para acelerar esto. Casa creo que incluye datadir con sus nodos. 
Otros proyectos como btcpay tienen varias formas de agrupar esto y firmar las cosas con claves gpg y estas soluciones no …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/","title":"Bitcoin Core Dev Tech 2019","content":" Arquitecturas de las wallets Jun 05, 2019 Andrew Chow Wallet Bitcoin core AssumeUTXO Jun 07, 2019 James O\u0026amp;#39;Beirne Assume utxo Bitcoin core Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegas Jun 07, 2019 Ruben Somsen Statechains Eltoo Channel factories Cifrado P2P Jun 07, 2019 P2 p Code Review Jun 05, 2019 Discusión general sobre SIGHASH_NOINPUT, OP_CHECKSIGFROMSTACK, and OP_SECURETHEBAG Jun 06, 2019 Olaoluwa Osuntokun, Jeremy Rubin Sighash anyprevout Op …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/","title":"Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegas","content":"https://twitter.com/kanzure/status/1136992734953299970\nFormalización de Blind Statechains como servidor de firma ciega minimalista https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html\nVisión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39\nDocumento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf\nTranscripción anterior: …"},{"uri":"/es/tags/channel-factories/","title":"channel-factories","content":""},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-07-p2p-encryption/","title":"Cifrado P2P","content":"Cifrado p2p\nhttps://twitter.com/kanzure/status/1136939003666685952\nhttps://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-Design-Philosophy\n\u0026amp;ldquo;Elligator al cuadrado: Puntos uniformes en curvas elípticas de orden primo como cadenas aleatorias uniformes\u0026amp;rdquo; https://eprint.iacr.org/2014/043\nIntroducción Esta propuesta lleva años en marcha. Muchas ideas de sipa y gmaxwell fueron a parar al bip151. Hace años decidí intentar sacar esto adelante. Hay bip151 que de nuevo la mayoría de las ideas …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-07-hardware-wallets/","title":"Hardware Wallets","content":"https://twitter.com/kanzure/status/1136924010955104257\n¿Cuánto debería hacer Bitcoin Core y cuánto otras bibliotecas? Andrew Chow escribió la maravillosa herramienta HWI. Ahora mismo tenemos un pull request para soportar firmantes externos. El script HWI puede hablar con la mayoría de los monederos hardware porque tiene todos los controladores incorporados, y puede obtener claves de ellos, y firmar transacciones arbitrarias. Eso es más o menos lo que hace. Es un poco manual, sin embargo. Tienes …"},{"uri":"/es/tags/hwi/","title":"hwi","content":""},{"uri":"/es/speakers/jonas-schnelli/","title":"Jonas Schnelli","content":""},{"uri":"/es/speakers/ruben-somsen/","title":"Ruben Somsen","content":""},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/","title":"Signet","content":"https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html\nhttps://twitter.com/kanzure/status/1136980462524608512\nIntroducción Voy a hablar un poco de signet. ¿Alguien no sabe lo que es signet? La idea es tener una firma del bloque o del bloque anterior. La idea es que testnet está horriblemente roto para probar cosas, especialmente para probar cosas a largo plazo. Hay grandes reorgs en testnet. ¿Qué pasa con testnet con un ajuste de dificultad menos roto? 
Testnet es …"},{"uri":"/es/tags/statechains/","title":"statechains","content":""},{"uri":"/es/tags/consensus-cleanup/","title":"consensus-cleanup","content":""},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-06-noinput-etc/","title":"Discusión general sobre SIGHASH_NOINPUT, OP_CHECKSIGFROMSTACK, and OP_SECURETHEBAG","content":"SIGHASH_NOINPUT, ANYPREVOUT, OP_CHECKSIGFROMSTACK, OP_CHECKOUTPUTSHASHVERIFY, and OP_SECURETHEBAG\nhttps://twitter.com/kanzure/status/1136636856093876225\nAl parecer, hay algunos mensajes políticos en torno a OP_SECURETHEBA y \u0026amp;ldquo;asegurar la bolsa\u0026amp;rdquo; podría ser una cosa de Andrew Yang\nSIGHASH_NOINPUT Muchos de nosotros estamos familiarizados con NOINPUT. ¿Alguien necesita una explicación? ¿Cuál es la diferencia entre el NOINPUT original y el nuevo? NOINPUT asusta al menos a algunas …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/","title":"Gran limpieza de consenso","content":"https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html\nhttps://twitter.com/kanzure/status/1136591286012698626\nIntroducción No hay mucho nuevo de que hablar. No está claro lo de CODESEPARATOR. Se quiere convertir en regla de consenso que las transacciones no pueden ser mayores de 100 kb. ¿No hay reacciones a eso? De acuerdo. Bien, lo haremos. Hagámoslo. ¿Todos saben cuál es esta propuesta?\nEl tiempo de validación para cualquier bloque, éramos perezosos a la hora de …"},{"uri":"/es/speakers/jeremy-rubin/","title":"Jeremy Rubin","content":""},{"uri":"/es/speakers/matt-corallo/","title":"Matt Corallo","content":""},{"uri":"/es/speakers/olaoluwa-osuntokun/","title":"Olaoluwa Osuntokun","content":""},{"uri":"/es/tags/op_checksigfromstack/","title":"op_checksigfromstack","content":""},{"uri":"/es/tags/op_checktemplateverify/","title":"op_checktemplateverify","content":""},{"uri":"/es/tags/p2c/","title":"p2c","content":""},{"uri":"/es/tags/quantum-resistance/","title":"quantum-resistance","content":""},{"uri":"/es/tags/sighash_anyprevout/","title":"sighash_anyprevout","content":""},{"uri":"/es/speakers/tadge-dryja/","title":"Tadge Dryja","content":""},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/","title":"Taproot","content":"https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki\nhttps://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html\nhttps://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/\nPreviamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/\nhttps://twitter.com/kanzure/status/1136616356827283456\nIntroducción Bien, primera pregunta: ¿quién puso mi nombre en esa lista y qué quieren? No fui yo. Voy a hacer …"},{"uri":"/es/tags/utreexo/","title":"utreexo","content":""},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-06-utreexo/","title":"Utreexo","content":"Utreexo: acumulador basado en hash para UTXOs de bitcoin\nhttp://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-10-08-utxo-accumulators-and-utreexo/\nhttp://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2019/utreexo/\nDocumento Utreexo https://eprint.iacr.org/2019/611.pdf\nhttps://github.com/mit-dci/utreexo\nhttps://twitter.com/kanzure/status/1136560700187447297\nIntroducción Sigues descargando todo; en lugar de escribir en tu base de datos UTXO, modificas tu acumulador. 
Aceptas una prueba de que …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-05-wallet-architecture/","title":"Arquitecturas de las wallets","content":"Arquitectura de la wallet de Bitcoin Core + descriptores\nhttps://twitter.com/kanzure/status/1136282460675878915\nwriteup: https://github.com/bitcoin/bitcoin/issues/16165\nDebate sobre la arquitectura de las wallets Aquí hay tres áreas principales. Una es IsMine: ¿cómo determino que una salida concreta está afectando a mi wallet? ¿Qué hay de pedir una nueva dirección, de dónde viene? Eso no es sólo obtener una nueva dirección, es obtener una dirección de cambio en bruto, es también el cambio que se …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-05-code-review/","title":"Code Review","content":"Encuesta de revisión de códigos y reclamaciones https://twitter.com/kanzure/status/1136261311359324162\nIntroducción Quería hablar sobre el proceso de revisión de código para Bitcoin Core. No he hecho revisiones de código, pero siguiendo el proyecto durante el último año he oído que este es un punto de dolor para el proyecto y creo que a la mayoría de los desarrolladores les encantaría verlo mejorado. Me gustaría ayudar de alguna manera para ayudar a infundir un poco de energía para ayudar con …"},{"uri":"/es/tags/wallet/","title":"wallet","content":""},{"uri":"/es/tags/capa-2/","title":"capa 2","content":""},{"uri":"/es/austin-bitcoin-developers/2019-05-27-drivechain-paul-sztorc/","title":"Drivechain","content":"Drivechain: Una capa de interoperabilidad-2, descrita en términos de la red lightning - algo que ya entiendes\nhttps://twitter.com/kanzure/status/1133202672570519552\nSobre mí Bien, aquí hay algunas cosas sobre mí. He sido un bitcoiner desde 2012. He publicado investigaciones sobre bitcoin en el blog truthcoin.info. He presentado en Scaling Bitcoin 1, 2, 3, 4, tabconf, y Building on Bitcoin. Mi formación es en economía y estadística. Trabajé en el Departamento de Economía de Yale como estadístico …"},{"uri":"/es/tags/investigaci%C3%B3n/","title":"investigación","content":""},{"uri":"/es/speakers/paul-sztorc/","title":"Paul Sztorc","content":""},{"uri":"/es/tags/sidechains/","title":"sidechains","content":""},{"uri":"/es/advancing-bitcoin/2019/","title":"Advancing Bitcoin 2019","content":" Lightning Rust Feb 07, 2019 Matt Corallo Lightning "},{"uri":"/es/advancing-bitcoin/2019/2019-02-07-matt-corallo-rust-lightning/","title":"Lightning Rust","content":"Lightning flexible en Rust\nDiapositivas: https://docs.google.com/presentation/d/154bMWdcMCFUco4ZXQ3lWfF51U5dad8pQ23rKVkncnns/edit#slide=id.p\nhttps://twitter.com/kanzure/status/1144256392490029057\nIntroducción Gracias por recibirme. Quiero hablar un poco sobre un proyecto en el que he estado trabajando durante un año llamado \u0026amp;ldquo;Rust-lightning\u0026amp;rdquo;. Lo empecé en diciembre, así que hace un año y unos meses. Esta es mi primera presentación en él, así que estoy emocionado de finalmente llegar a …"},{"uri":"/es/andreas-antonopoulos/2019-02-01-andreas-antonopoulos-hardware-wallet-security/","title":"Seguridad de la cartera de hardware","content":"Tema: ¿Son los monederos electrónicos lo suficientemente seguros?\nLocalización: Canal de YouTube de Andreas Antonopoulos\n¿Son los monederos electrónicos lo suficientemente seguros? P - Hola Andreas. Almaceno mi criptografía en un Ledger. Escuchando a Trace Mayer esta semana me preocupa que esto no sea lo suficientemente seguro. 
Trace dice que necesitas Bitcoin Core para la validación de la red, Armory para gestionar las claves privadas y un protocolo Glacier para los procedimientos operativos …"},{"uri":"/es/tags/validaci%C3%B3n/","title":"validación","content":""},{"uri":"/es/andreas-antonopoulos/2018-10-23-andreas-antonopoulos-initial-blockchain-download/","title":"Bitcoin Q\u0026A: Descarga inicial de la cadena de bloques","content":"Becca pregunta por qué se tarda tanto en descargar el blockchain. Yo tengo una conexión rápida a Internet y he podido descargar 200GB en menos de una hora. A lo que Becca se refiere es a lo que se llama la descarga inicial del blockchain o IBD, que es la primera sincronización del nodo de Bitcoin o de cualquier tipo de nodo de blockchain con su blockchain. La respuesta es que, aunque la cantidad de datos que hay que descargar para obtener la cadena de bloques completa es de unos 200 GB, no se …"},{"uri":"/es/tags/consensus-enforcement/","title":"consensus-enforcement","content":""},{"uri":"/es/bitcoin-core-dev-tech/2018-10/","title":"Bitcoin Core Dev Tech 2018 (Oct)","content":" Acumuladores UTXO, compromisos UTXO y Utreexo Oct 08, 2018 Tadge Dryja Proof systems Utreexo Bitcoin Optech Oct 09, 2018 Lightning Segwit Cosas de Wallets Oct 09, 2018 Wallet Descriptores de guiones (2018-10-08) Wallet Mensaje de señalización Oct 10, 2018 Kalle Alm Wallet Retransmisión eficiente de transacciones P2P Oct 08, 2018 P2 p "},{"uri":"/es/bitcoin-core-dev-tech/2018-10/2018-10-10-signmessage/","title":"Mensaje de señalización","content":"kallewoof and others\nhttps://twitter.com/kanzure/status/1049834659306061829\nEstoy tratando de hacer un nuevo signmessage para hacer otras cosas. Sólo tiene que utilizar el sistema de firma dentro de bitcoin para firmar un mensaje. Firmar un mensaje que alguien quiere. Puedes usar proof-of-funds o lo que sea.\nUsted podría simplemente tener una firma y es una firma dentro de un paquete y es pequeño y fácil. Otra opción es tener un .. que no es válido de alguna manera. Haces una transacción con …"},{"uri":"/es/bitcoin-core-dev-tech/2018-10/2018-10-09-bitcoin-optech/","title":"Bitcoin Optech","content":"https://twitter.com/kanzure/status/1049527415767101440\nhttps://bitcoinops.org/\nBitcoin Optech trata de animar a las empresas bitcoin a adoptar mejores técnicas y tecnologías de escalado, cosas como batching, segwit, más adelante quizá firmas Schnorr, firmas agregables, quizá lightning. Ahora mismo nos estamos centrando en cosas que las empresas podrían estar haciendo ahora mismo. Los intercambios podrían ser por lotes, y algunos no lo son. Estamos hablando con esas empresas y escuchando sus …"},{"uri":"/es/bitcoin-core-dev-tech/2018-10/2018-10-09-wallet-stuff/","title":"Cosas de Wallets","content":"https://twitter.com/kanzure/status/1049526667079643136\nTal vez podamos hacer que los PRs del monedero tengan un proceso de revisión diferente para que pueda haber cierta especialización, incluso si el monedero no está listo para ser separado. En el futuro, si el monedero fuera un proyecto o repositorio separado, entonces sería mejor. Tenemos que ser capaces de subdividir el trabajo mejor de lo que ya lo hacemos, y la cartera es un buen lugar para empezar a hacerlo. 
Es diferente del código …"},{"uri":"/es/tags/segwit/","title":"segwit","content":""},{"uri":"/es/bitcoin-core-dev-tech/2018-10/2018-10-08-utxo-accumulators-and-utreexo/","title":"Acumuladores UTXO, compromisos UTXO y Utreexo","content":"https://twitter.com/kanzure/status/1049112390413897728\nSi la gente vio la charla de Benedikt, hace dos días, está relacionado. Es una construcción diferente, pero el mismo objetivo. La idea básica es, y creo que Cory comenzó a hablar de esto hace unos meses en la lista de correo \u0026amp;hellip; en lugar de almacenar todos los UTXOs en leveldb, almacenar el hash de cada UTXO, y entonces es la mitad del tamaño, y entonces casi se podría crear a partir del hash de la entrada, es como 10 bytes más. En vez …"},{"uri":"/es/tags/proof-systems/","title":"proof-systems","content":""},{"uri":"/es/bitcoin-core-dev-tech/2018-10/2018-10-08-efficient-p2p-transaction-relay/","title":"Retransmisión eficiente de transacciones P2P","content":"Mejoras del protocolo de retransmisión de transacciones p2p con reconciliación de conjuntos gleb\nNo sé si necesito motivar este problema. Presenté una sesión de trabajo en curso en Scaling. El coste de retransmitir transacciones o anunciar una transacción en una red: ¿cuántos anuncios tienes? Cada enlace tiene un anuncio en cualquier dirección en este momento, y luego está el número de nodos multiplicado por el número de conexiones por nodo. Esto es como 8 conexiones de media. Si hay más …"},{"uri":"/es/andreas-antonopoulos/2018-10-07-andreas-antonopoulos-schnorr-signatures/","title":"Firmas Schnorr","content":"LTB Episodio 378 - El Petro dentro de Venezuela y las firmas Schnorr\nActualmente utilizamos el algoritmo de firma digital de curva elíptica (ECDSA), que es una forma específica de realizar firmas digitales con curvas elípticas, pero no la única. Las firmas Schnorr tienen algunas características únicas que las hacen mejores en muchos aspectos que el algoritmo ECDSA que usamos actualmente en Bitcoin.\nLas firmas Schnorr son mejores porque tienen ciertas peculiaridades en la forma en que se …"},{"uri":"/es/austin-bitcoin-developers/2018-08-17-richard-bondi-bitcoin-cli-regtest/","title":"Bitcoin CLI y Regtest","content":"Clone este repositorio para seguir el proceso: https://github.com/austin-bitcoin-developers/regtest-dev-environment\nhttps://twitter.com/kanzure/status/1161266116293009408\nIntroducción Así que el objetivo aquí como Justin dijo es conseguir el entorno regtest configurado. Las ventajas que mencionó, también existe la ventaja de que usted puede minar sus propias monedas a voluntad por lo que no tiene que perder el tiempo con grifos testnet. También puedes generar bloques, así que no tienes que …"},{"uri":"/es/speakers/richard-bondi/","title":"Richard Bondi","content":""},{"uri":"/es/bitcoin-core-dev-tech/2018-03/","title":"Bitcoin Core Dev Tech 2018 (Mar)","content":" Árboles de sintaxis abstracta merkleizados - MAST Mar 06, 2018 Taproot Covenios Validación Bellare-Neven Mar 05, 2018 Signature aggregation Intercambios atómicos de curvas transversales Mar 05, 2018 Adaptor signatures Prioridades Mar 07, 2018 Taproot, Graftroot, Etc (2018-03-06) Contract protocols Taproot "},{"uri":"/es/bitcoin-core-dev-tech/2018-03/2018-03-07-priorities/","title":"Prioridades","content":"https://twitter.com/kanzure/status/972863994489901056\nPrioridades Vamos a esperar hasta que BlueMatt esté aquí. Nadie sabe cuáles son sus prioridades. 
Dice que podría llegar alrededor del mediodía.\nHay un ex-director de producto de Google interesado en ayudar con Bitcoin Core. Me preguntó cómo participar. Le dije que se involucrara simplemente sumergiéndose. Pasará algún tiempo en Chaincode a finales de marzo. Nos haremos una idea de cuáles son sus habilidades. Creo que podría ser útil para …"},{"uri":"/es/bitcoin-core-dev-tech/2018-03/2018-03-06-merkleized-abstract-syntax-trees-mast/","title":"Árboles de sintaxis abstracta merkleizados - MAST","content":"https://twitter.com/kanzure/status/972120890279432192\nVer también http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees/\nCosas de MAST Podrías merkleizar directamente los scripts si cambias de IF, IFNOT, ELSE con IFJUMP que tiene el número de bytes.\nCon graftroot y taproot, nunca para hacer cualquier scripts (que eran un hack para empezar las cosas). Pero estamos haciendo la validación y el cálculo.\nTomas todos los caminos que tiene; así que en lugar …"},{"uri":"/es/tags/covenios/","title":"covenios","content":""},{"uri":"/es/tags/adaptor-signatures/","title":"adaptor-signatures","content":""},{"uri":"/es/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/","title":"Bellare-Neven","content":"Ver también http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/\nSe ha publicado, ha existido durante una década, y es ampliamente citado. En Bellare-Neven, es en sí mismo, es un esquema multi-firma que significa múltiples pubkeys y un mensaje. Debe tratar las autorizaciones individuales para gastar entradas, como mensajes individuales. Lo que necesitamos es un esquema de firma agregada interactiva. El artículo de Bellare-Neven sugiere una forma trivial de …"},{"uri":"/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/","title":"Intercambios atómicos de curvas transversales","content":"https://twitter.com/kanzure/status/971827042223345664\nBorrador de un próximo documento de guiones sin guión. Esto fue a principios de 2017. Pero ya ha pasado todo un año.\ntransacciones lightning posteriores a schnorr https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html\nUna firma adaptadora.. si tienes diferentes generadores, entonces los dos secretos a revelar, simplemente le das a alguien los dos, más una prueba de un log discreto, y entonces dices aprende el …"},{"uri":"/es/tags/signature-aggregation/","title":"signature-aggregation","content":""},{"uri":"/es/misc/2017-09-20-ray-dillinger-if-id-known/","title":"If I'd Known What We Were Starting","content":"Autor: Ray Dillinger Texto original: https://www.linkedin.com/pulse/id-known-what-we-were-starting-ray-dillinger/\nSi hubiera sabido lo que estábamos empezando En noviembre de 2008, realicé una revisión de código y una auditoría de seguridad para la parte de la blockchain del código fuente de Bitcoin. El fallecido Hal Finney revisó el código y auditó el lenguaje de scripting, y ambos analizamos el código de contabilidad. 
Satoshi Nakamoto, arquitecto seudónimo y autor del código, alternó entre …"},{"uri":"/es/speakers/ray-dillinger/","title":"Ray Dillinger","content":""},{"uri":"/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/","title":"Árboles de sintaxis abstracta merkleizados","content":"https://twitter.com/kanzure/status/907075529534328832\nÁrboles de sintaxis abstracta merkleizados (MAST) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html\nVoy a hablar del esquema que publiqué ayer en la lista de correo, que consiste en implementar MAST (árboles de sintaxis abstracta merkleizados) en bitcoin de la forma menos invasiva posible. Está dividido en dos grandes rasgos de consenso que juntos nos dan MAST. Empezaré con el último BIP.\nEsto es la …"},{"uri":"/es/bitcoin-core-dev-tech/2017-09/","title":"Bitcoin Core Dev Tech 2017","content":" Agregación de firmas Sep 06, 2017 Signature aggregation Árboles de sintaxis abstracta merkleizados Sep 07, 2017 Mast Notas de la reunión Sep 05, 2017 Privacidad "},{"uri":"/es/tags/mast/","title":"mast","content":""},{"uri":"/es/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/","title":"Agregación de firmas","content":"https://twitter.com/kanzure/status/907065194463072258\nAgregación de firmas Sipa, ¿puedes firmar y verificar firmas ECDSA a mano? No. Sobre GF(43), tal vez. Los inversos podrían tomar un poco de tiempo para computar. Sobre GF(2).\nCreo que lo primero de lo que deberíamos hablar es de algunas definiciones. Me gustaría empezar distinguiendo entre tres cosas: Agregación de claves, agregación de firmas y validación de lotes. Más adelante, la multifirma.\nHay tres problemas diferentes. La agregación de …"},{"uri":"/es/bitcoin-core-dev-tech/2017-09/2017-09-05-meeting-notes/","title":"Notas de la reunión","content":"coredev.tech septiembre 2017\nhttps://twitter.com/kanzure/status/907233490919464960\n((Como siempre, cualquier error es probablemente mío, etc.))\nIntroducción Existe una gran preocupación sobre si BlueMatt se ha convertido en un nombre erróneo.\nPresentación del lunes por la noche: https://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/\nCreo que deberíamos seguir usando #bitcoin-core-dev para cualquier cosa relacionada con el cambio de Bitcoin Core y tratar de mantener …"},{"uri":"/es/tags/privacidad/","title":"privacidad","content":""},{"uri":"/es/speakers/arvind-narayanan/","title":"Arvind Narayanan","content":""},{"uri":"/es/misc/2017-08-29-bitcoin-academic-pedigree/","title":"Bitcoin's Academic Pedigree","content":"Texto original: https://queue.acm.org/detail.cfm?id=3136559\nPedigrí académico de Bitcoin El concepto de criptomonedas se basa en ideas olvidadas en la literatura de investigación. 
Arvind Narayanan y Jeremy Clark\nSi has leído sobre bitcoin en la prensa y tienes cierta familiaridad con la investigación académica en el campo de la criptografía, podrías tener razonablemente la siguiente impresión: La investigación de varias décadas sobre el efectivo digital, comenzando por David Chaum,10,12 no …"},{"uri":"/es/speakers/jeremy-clark/","title":"Jeremy Clark","content":""},{"uri":"/es/misc/2016-11-13-lopp-bitcoin-security-model/","title":"Bitcoin's Security Model - A Deep Dive","content":"Texto original: https://www.coindesk.com/markets/2016/11/13/bitcoins-security-model-a-deep-dive/\nEl modelo de seguridad de Bitcoin: una profunda inmersión CoinDesk echa un vistazo bajo el capó para entender qué funciones de seguridad ofrecen y qué no ofrecen bitcoin. Cuando se discuten los mecanismos de consenso para diferentes criptomonedas, un tema que a menudo causa argumentos es la falta de comprensión (y definición) del modelo de seguridad que proporcionan para los datos históricos del …"},{"uri":"/es/speakers/jameson-lopp/","title":"Jameson Lopp","content":""},{"uri":"/es/greg-maxwell/2016-01-29-a-trip-to-the-moon/","title":"A trip to the moon requires a rocket with multiple stages","content":"Texto original: https://www.reddit.com/r/Bitcoin/comments/438hx0/a_trip_to_the_moon_requires_a_rocket_with/\nUn viaje a la luna requiere un cohete con múltiples etapas o, de lo contrario, la ecuación del cohete te comerá el almuerzo\u0026amp;hellip; empacar a todos como un coche de payasos en un trebuchet y esperar el éxito esta excluido.\nMucha gente en Reddit piensa que Bitcoin es principalmente un competidor de las redes de pago con tarjeta. Creo que esto es más que un poco extraño: Bitcoin es una …"},{"uri":"/es/speakers/greg-maxwell/","title":"Greg Maxwell","content":""},{"uri":"/es/greg-maxwell/","title":"Greg Maxwell","content":" A trip to the moon requires a rocket with multiple stages Jan 29, 2016 Greg Maxwell Scaling "},{"uri":"/es/tags/scaling/","title":"scaling","content":""},{"uri":"/es/andreas-antonopoulos/2014-10-08-andreas-antonopolous-canada-senate-bitcoin/","title":"Senado de Canadá Bitcoin","content":"Esta es una transcripción de las pruebas presentadas ante el Comité del Senado de Canadá sobre Banca, Comercio y Economía. Aquí hay un video. El discurso de apertura se puede encontrar aquí.\nPuede aparecer otra transcripción aquí pero quién sabe.\nSe puede encontrar una transcripción adicional aquí.\nInforme final: http://www.parl.gc.ca/Content/SEN/Committee/412/banc/rep/rep12jun15-e.pdf\nObservaciones preparadas. Mi experiencia se centra principalmente en la seguridad de la información y la …"},{"uri":"/es/bit-block-boom/2019/accumulating-bitcoin/","title":"Acumular Bitcoin","content":"Introducción Hoy voy a predicar al coro, creo. Espero reiterar y reforzar algunos puntos que ya conoces pero que tal vez olvidaste mencionar a tus amigos nocoiner, o aprender algo nuevo. Espero que haya algo para todos aquí.\nP: ¿Debo comprar bitcoin?\nR: Sí.\nP: ¿Vas a trolear a un periodista?\nR: ¿Hoy? Ya lo he hecho.\nConceptos erróneos Una de las mayores ideas erróneas que escuché cuando conocí el bitcoin fue que éste era sólo un sistema de pagos. 
La agregación de …"},{"uri":"/es/bitcoin-core-dev-tech/2017-09/2017-09-05-meeting-notes/","title":"Notas de la reunión","content":"coredev.tech septiembre 2017\nhttps://twitter.com/kanzure/status/907233490919464960\n((Como siempre, cualquier error es probablemente mío, etc.))\nIntroducción Existe una gran preocupación sobre si BlueMatt se ha convertido en un nombre erróneo.\nPresentación del lunes por la noche: https://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/\nCreo que deberíamos seguir usando #bitcoin-core-dev para cualquier cosa relacionada con el cambio de Bitcoin Core y tratar de mantener …"},{"uri":"/es/tags/privacidad/","title":"privacidad","content":""},{"uri":"/es/speakers/arvind-narayanan/","title":"Arvind Narayanan","content":""},{"uri":"/es/misc/2017-08-29-bitcoin-academic-pedigree/","title":"Bitcoin's Academic Pedigree","content":"Texto original: https://queue.acm.org/detail.cfm?id=3136559\nPedigrí académico de Bitcoin El concepto de criptomonedas se basa en ideas olvidadas en la literatura de investigación. Arvind Narayanan y Jeremy Clark\nSi has leído sobre bitcoin en la prensa y tienes cierta familiaridad con la investigación académica en el campo de la criptografía, podrías tener razonablemente la siguiente impresión: La investigación de varias décadas sobre el efectivo digital, comenzando por David Chaum,10,12 no …"},{"uri":"/es/speakers/jeremy-clark/","title":"Jeremy Clark","content":""},{"uri":"/es/misc/2016-11-13-lopp-bitcoin-security-model/","title":"Bitcoin's Security Model - A Deep Dive","content":"Texto original: https://www.coindesk.com/markets/2016/11/13/bitcoins-security-model-a-deep-dive/\nEl modelo de seguridad de Bitcoin: una profunda inmersión CoinDesk echa un vistazo bajo el capó para entender qué funciones de seguridad ofrecen y qué no ofrecen bitcoin. Cuando se discuten los mecanismos de consenso para diferentes criptomonedas, un tema que a menudo causa argumentos es la falta de comprensión (y definición) del modelo de seguridad que proporcionan para los datos históricos del …"},{"uri":"/es/speakers/jameson-lopp/","title":"Jameson Lopp","content":""},{"uri":"/es/greg-maxwell/2016-01-29-a-trip-to-the-moon/","title":"A trip to the moon requires a rocket with multiple stages","content":"Texto original: https://www.reddit.com/r/Bitcoin/comments/438hx0/a_trip_to_the_moon_requires_a_rocket_with/\nUn viaje a la luna requiere un cohete con múltiples etapas o, de lo contrario, la ecuación del cohete te comerá el almuerzo\u0026amp;hellip; empacar a todos como un coche de payasos en un trebuchet y esperar el éxito esta excluido.\nMucha gente en Reddit piensa que Bitcoin es principalmente un competidor de las redes de pago con tarjeta. Creo que esto es más que un poco extraño: Bitcoin es una …"},{"uri":"/es/speakers/greg-maxwell/","title":"Greg Maxwell","content":""},{"uri":"/es/greg-maxwell/","title":"Greg Maxwell","content":" A trip to the moon requires a rocket with multiple stages Jan 29, 2016 Greg Maxwell Scaling "},{"uri":"/es/tags/scaling/","title":"scaling","content":""},{"uri":"/es/andreas-antonopoulos/2014-10-08-andreas-antonopolous-canada-senate-bitcoin/","title":"Senado de Canadá Bitcoin","content":"Esta es una transcripción de las pruebas presentadas ante el Comité del Senado de Canadá sobre Banca, Comercio y Economía. Aquí hay un video. 
El discurso de apertura se puede encontrar aquí.\nPuede aparecer otra transcripción aquí pero quién sabe.\nSe puede encontrar una transcripción adicional aquí.\nInforme final: http://www.parl.gc.ca/Content/SEN/Committee/412/banc/rep/rep12jun15-e.pdf\nObservaciones preparadas. Mi experiencia se centra principalmente en la seguridad de la información y la …"},{"uri":"/es/bit-block-boom/2019/accumulating-bitcoin/","title":"Acumular Bitcoin","content":"Introducción Hoy voy a predicar al coro, creo. Espero reiterar y reforzar algunos puntos que ya conoces pero que tal vez olvidaste mencionar a tus amigos nocoiner, o aprender algo nuevo. Espero que haya algo para todos aquí.\nP: ¿Debo comprar bitcoin?\nR: Sí.\nP: ¿Vas a trolear a un periodista?\nR: ¿Hoy? Ya lo he hecho.\nConceptos erróneos Una de las mayores ideas erróneas que escuché cuando conocí el bitcoin fue que éste era sólo un sistema de pagos. Si lees el libro blanco, entonces lees esto y …"},{"uri":"/es/speakers/alena-vranova/","title":"Alena Vranova","content":""},{"uri":"/es/speakers/alex-petrov/","title":"Alex Petrov","content":""},{"uri":"/es/baltic-honeybadger/2018/opening/","title":"Apertura","content":"Discurso de apertura del Baltic Honeybadger 2018\ntwitter: https://twitter.com/search?f=tweets\u0026amp;amp;vertical=default\u0026amp;amp;q=bh2018\nhttps://twitter.com/kanzure/status/1043384689321566208\nBien chicos, vamos a empezar en cinco minutos. Así que quédense aquí. Ella va a presentar a los oradores y ayudar a todos. Gracias a todos por venir. Es una gran multitud este año. Quería hacer algunos anuncios técnicos. En primer lugar, recuerden que debemos ser excelentes entre nosotros. Tenemos gente de todo el …"},{"uri":"/es/baltic-honeybadger/2018/","title":"Baltic Honeybadger 2018","content":"http://web.archive.org/web/20180825023519/https://bh2018.hodlhodl.com/\nApertura Bitcoin como nueva institución de mercado Nic Carter Custodia de Bitcoin Bryan Bishop Custodia Reglamento El estándar Bitcoin Saifedean Ammous El futuro de Lightning Elizabeth Stark Lightning El futuro de los contratos inteligentes de Bitcoin Max Keidun Contratos inteligentes El futuro de los monederos de Bitcoin Pavol Rusnak, Lawrence Nahum, Giacomo Zucco Cartera hardware Cartera El maximalismo de Bitcoin …"},{"uri":"/es/bit-block-boom/","title":"Bit Block Boom","content":" Bit Block Boom 2019 "},{"uri":"/es/bit-block-boom/2019/","title":"Bit Block Boom 2019","content":" Acumular Bitcoin Pierre Rochard "},{"uri":"/es/baltic-honeybadger/2018/bitcoin-as-a-novel-market-institution/","title":"Bitcoin como nueva institución de mercado","content":"Bitcoin como nueva institución de mercado\nhttps://twitter.com/kanzure/status/1043763647498063872\nVoy a hablar de bitcoin como un sistema económico, no como un sistema de software o criptográfico. Esta charla tiene dos partes. En la primera, voy a hacer una retrospectiva de los últimos 10 años de bitcoin como sistema económico en funcionamiento. En la segunda parte, voy a analizar a bitcoin tal y como es hoy y cómo será en el futuro. 
Voy a mirar la cantidad de riqueza almacenada en bitcoin y …"},{"uri":"/es/bitcoin-core-dev-tech/2015-02/","title":"Bitcoin Core Dev Tech 2015","content":" Bitcoin Law For Developers James Gatto, Marco Santori Charla de los fundadores de Circle Jeremy Allaire, Sean Neville Gavin Andresen Research And Development Goals Patrick Murck, Gavin Andresen, Cory Fields Consensus "},{"uri":"/es/bitcoin-core-dev-tech/2022-10/2022-10-11-github/","title":"Bitcoin Core y GitHub (2022-10-11)","content":"Bitcoin Core y GitHub Creo que en este punto está bastante claro que no es necesariamente un \u0026amp;ldquo;si\u0026amp;rdquo; salimos de github, sino un cuándo y cómo. La pregunta sería, ¿cómo lo haríamos? Esto no es realmente una presentación. Es más bien una discusión. Hay algunas cosas a tener en cuenta, como el repo de bitcoin-gh-meta, que captura todas las cuestiones, comentarios y pull requests. Es bastante bueno. La capacidad de reconstruir lo que hay aquí en otra plataforma no parece posible en su …"},{"uri":"/es/bitcoin-core-dev-tech/2015-02/james-gatto-marco-santori-bitcoin-law-for-developers/","title":"Bitcoin Law For Developers","content":"Vamos a hacer una pausa de 15 minutos para tomar café después de nuestros dos próximos oradores. Quiero presentarles a James Gatto y Marco Santori de Pilsbury. Pasarán algún tiempo hablando de la ley Bitcoin. Tienen una sala esta tarde y se ofrecen a hablar con ustedes uno a uno. Así que Marco y James.\nTe perdiste la introducción. ¿Estuvo bien? (risas)\nEstamos aquí para hablar de cuestiones jurídicas. Vamos a intentar que sea ligero e interesante. Yo voy a hablar de patentes. Y Marc va a hablar …"},{"uri":"/es/speakers/bruce-fenton/","title":"Bruce Fenton","content":""},{"uri":"/es/speakers/bryan-bishop/","title":"Bryan Bishop","content":""},{"uri":"/es/bitcoin-core-dev-tech/2015-02/jeremy-allaire-circle/","title":"Charla de los fundadores de Circle","content":"Estamos encantados de estar aquí y de patrocinar este evento. Nuestra experiencia en herramientas para desarrolladores se remonta a los primeros días de algo.\n¿Cómo maduramos el desarrollo del propio Bitcoin Core? Una de las cosas útiles es identificar los componentes clave. En un estándar tienes una especificación, podría ser un libro blanco, y luego tienes una implementación de referencia, y luego un conjunto de pruebas que hace cumplir la interoperabilidad. El conjunto de pruebas es lo que …"},{"uri":"/es/categories/conference/","title":"conference","content":""},{"uri":"/es/categories/conferenciae/","title":"conferenciae","content":""},{"uri":"/es/tags/consensus/","title":"consensus","content":""},{"uri":"/es/tags/contract-protocols/","title":"contract-protocols","content":""},{"uri":"/es/tags/contratos-inteligentes/","title":"contratos inteligentes","content":""},{"uri":"/es/speakers/cory-fields/","title":"Cory Fields","content":""},{"uri":"/es/tags/custodia/","title":"custodia","content":""},{"uri":"/es/baltic-honeybadger/2018/bitcoin-custody/","title":"Custodia de Bitcoin","content":"https://twitter.com/kanzure/status/1048014038179823617\nCustodia de Bitcoin\nConferencia de Bitcoin del Báltico Honey Badger 2018, Riga, Letonia, Día 2\nSchedule: https://bh2018.hodlhodl.com/\nTranscripción\nHora de inicio: 6:09:50\nMi nombre es Bryan Bishop, voy a hablar de la custodia de bitcoin. Aquí está mi huella digital PGP, deberíamos hacer eso. Así que quién soy, tengo un fondo de desarrollo de software, no solo escribo transcripciones. 
En realidad ya no estoy en LedgerX desde el viernes (21 …"},{"uri":"/es/bitcoin-core-dev-tech/2018-10/2018-10-08-script-descriptors/","title":"Descriptores de guiones (2018-10-08)","content":"2018-10-08\nDescriptores de guiones Pieter Wuille (sipa)\nhttps://github.com/bitcoin/bitcoin/blob/master/doc/descriptors.md\nMe gustaría hablar de los descriptores de guiones. Hay varios proyectos en los que estamos trabajando y todos están relacionados. Me gustaría aclarar cómo encajan las cosas.\nNota: hay una transcripción anterior que no ha sido publicada (necesita ser revisada) sobre los descriptores de guiones.\nAgenda Historia de los descriptores de guiones y cómo llegamos a esto. Lo que hay …"},{"uri":"/es/baltic-honeybadger/2018/the-bitcoin-standard/","title":"El estándar Bitcoin","content":"El estándar bitcoin como solución de escalado por capas\nhttps://twitter.com/kanzure/status/1043425514801844224\nHola a todos. ¿Pueden escucharme todos? Bien, maravilloso. No puedes ver mis diapositivas, ¿verdad? Tienes que compartir la pantalla si quieres que vean tus diapositivas. ¿Cómo hago eso? ¿Dónde está eso? Este es el nuevo Skype, lo siento. Gracias a todos por invitarme a hablar hoy. Sería estupendo acompañaros, pero desgraciadamente no podré hacerlo.\nQuiero describir cómo veo el …"},{"uri":"/es/baltic-honeybadger/2018/the-future-of-lightning/","title":"El futuro de Lightning","content":"El futuro de lightning\nEl año del #craeful y el futuro de lightning\nhttps://twitter.com/kanzure/status/1043501348606693379\nEs estupendo estar de vuelta aquí en Riga. Demos un aplauso a los organizadores del evento y a todos los que nos han traído aquí. Este es el momento más cálido que he vivido en Riga. Ha sido genial. Quiero volver en los veranos.\nIntroducción Estoy aquí para hablar de lo que ha ocurrido en el último año y del futuro de lo que vamos a ver con los rayos. Un segundo mientras …"},{"uri":"/es/baltic-honeybadger/2018/the-future-of-bitcoin-smart-contracts/","title":"El futuro de los contratos inteligentes de Bitcoin","content":"El futuro de los contratos inteligentes de Bitcoin\nhttps://twitter.com/kanzure/status/1043419056492228608\nHola chicos, la próxima charla en 5 minutos. En cinco minutos.\nIntroducción Hola a todos. Si están caminando, por favor háganlo en silencio. Estoy un poco nervioso. Estuve muy ocupado organizando esta conferencia y no tuve tiempo para esta charla. Si tenían grandes expectativas para esta presentación, entonces por favor bájenlas durante los próximos 20 minutos. Iba a hablar de los modelos …"},{"uri":"/es/baltic-honeybadger/2018/the-future-of-bitcoin-wallets/","title":"El futuro de los monederos de Bitcoin","content":"1 a 1: El futuro de los monederos de bitcoin\nhttps://twitter.com/kanzure/status/1043445104827084800\nGZ: Muchas gracias. Vamos a hablar del futuro de los monederos de bitcoin. Como sabes, es un tema muy central. Siempre comparamos el bitcoin con Internet. Los monederos son básicamente como los navegadores, como al principio de la web. Son la primera puerta de entrada a una experiencia de usuario en bitcoin. Es importante ver cómo van a evolucionar. Tenemos dos representantes excepcionales del …"},{"uri":"/es/baltic-honeybadger/2018/bitcoin-maximalism-dissected/","title":"El maximalismo de Bitcoin diseccionado","content":"El maximalismo de Bitcoin diseccionado\nBuenos días a todos. Estoy muy contento de estar aquí en el Baltic Honeybadger. El año pasado hice una presentación de scaling. Inauguré la conferencia con una presentación de escalamiento. 
Este año, para compensar, seré súper serio. Esta será la presentación más aburrida de la conferencia. Voy a tratar de diseccionar y formalizar el maximalismo de bitcoin.\nEsta es la fuente más terrorífica que encontré en prezi. Quería algo con sangre saliendo de ella pero …"},{"uri":"/es/speakers/elizabeth-stark/","title":"Elizabeth Stark","content":""},{"uri":"/es/speakers/eric-voskuil/","title":"Eric Voskuil","content":""},{"uri":"/es/baltic-honeybadger/2018/trustlessness-scalability-and-directions-in-security-models/","title":"Escalabilidad y orientación de los modelos de seguridad","content":"Escalabilidad y orientación de los modelos de seguridad\nhttps://twitter.com/kanzure/status/1043397023846883329\n\u0026amp;hellip; ¿Están todos despiertos? Claro. Salto de obstáculos. Vaya, eso es brillante. Soy más idealista que Eric. Voy a hablar de la utilidad y de por qué la gente usa bitcoin y vamos a ver cómo va eso.\nLa falta de confianza es mucho mejor que la descentralización. Quiero hablar de la falta de confianza. La gente usa mucho la palabra descentralización y me parece que es algo inútil …"},{"uri":"/es/speakers/florian-maier/","title":"Florian Maier","content":""},{"uri":"/es/speakers/gavin-andresen/","title":"Gavin Andresen","content":""},{"uri":"/es/bitcoin-core-dev-tech/2015-02/gavinandresen/","title":"Gavin Andresen","content":"http://blog.circle.com/2015/02/10/devcore-livestream/\nEl tiempo de transacción instantánea\u0026amp;hellip; ya sabes, me acerco a una caja registradora, pongo mi teléfono ahí, y en un segundo o dos la transacción está confirmada y me voy con mi café. Cualquier cosa más allá de eso, 10 minutos frente a 1 minuto no importa. Hay un montón de ideas sobre esto, como un tercero de confianza que promete no duplicar el gasto, tener algunas monedas encerradas en un monedero multisig como Green Address. Hay ideas …"},{"uri":"/es/speakers/giacomo-zucco/","title":"Giacomo Zucco","content":""},{"uri":"/es/baltic-honeybadger/2018/investing-in-bitcoin-businesses/","title":"Invertir en negocios de Bitcoin","content":"1 a 1: Invertir en negocios de bitcoin\nMatthew Mezinskis (crypto_voices) (moderador)\nMM: Dirijo un podcast aquí en Letonia sobre la economía y el dinero del bitcoin. Es americano-latino. Lo hago con mi socio que está basado en Brasil. Hoy vamos a centrarnos en la inversión en negocios centrados en el bitcoin y las criptomonedas.\nNC: Me llamo Nic Carter. Trabajo para un fondo de riesgo. Somos uno de los pocos fondos de riesgo que se centran en el bitcoin y en las startups relacionadas con el …"},{"uri":"/es/speakers/james-gatto/","title":"James Gatto","content":""},{"uri":"/es/speakers/jeremy-allaire/","title":"Jeremy Allaire","content":""},{"uri":"/es/baltic-honeybadger/2018/the-reserve-currency-fallacy/","title":"La falacia de la moneda de reserva","content":"La falacia de la moneda de reserva\nhttps://twitter.com/kanzure/status/1043385469134925824\nGracias. Desarrolladores, desarrolladores, desarrolladores, desarrolladores. Muy bien, no fue tan malo. Hay mucho contenido para explicar este concepto de la falacia de la moneda de reserva. Es difícil de terminar en la cantidad de tiempo disponible. Estaré disponible en la fiesta de esta noche. 
Quiero ir a través de cuatro diapositivas y hablar de la historia de esta cuestión de la escala, y luego la …"},{"uri":"/es/baltic-honeybadger/2018/the-b-foundation/","title":"La Fundación B","content":"La Fundación B\nhttps://twitter.com/kanzure/status/1043802179004493825\nCreo que la mayoría de la gente aquí tiene una idea de quién soy. Soy un bitcoiner a largo plazo. He estado en bitcoin desde 2010. Me encanta el bitcoin. Me apasiona. Quiero verlo crecer y prosperar. Lo positivo de bitcoin es que tiene un ecosistema resistente. No necesita ningún director general. No necesita ninguna organización centralizada y no necesita ningún punto central para dirigir hacia dónde va. Básicamente funciona. …"},{"uri":"/es/baltic-honeybadger/2018/current-state-of-the-market-and-institutional-investors/","title":"La situación actual del mercado y los inversores institucionales","content":"1 a 1: La situación actual del mercado y los inversores institucionales\nhttps://twitter.com/kanzure/status/1043404928935444480\nAsociación Bitcoin Guy (BAG)\nBAG: Estoy aquí con Bruce Fenton y Tone Vays. Bruce es también anfitrión de la Mesa Redonda Satoshi y un inversor a largo plazo. Tone Vays es un operador de derivados y creador de contenidos. Vamos a hablar de Wall Street. ¿Creo que ambos están basados en Nueva York? ¿Cómo fueron los últimos 12 meses?\nTV: Tengo un apartamento allí, pero creo …"},{"uri":"/es/speakers/lawrence-nahum/","title":"Lawrence Nahum","content":""},{"uri":"/es/tags/libsecp256k1/","title":"libsecp256k1","content":""},{"uri":"/es/bitcoin-core-dev-tech/2022-10/2022-10-12-libsecp256k1/","title":"Libsecp256k1 Reunión de mantenedores (2022-10-12)","content":"P: ¿Por qué C89? Cuando te hice esta pregunta hace unos años, creo que dijiste gmaxwell.\nR: Hay una serie de dispositivos embebidos que sólo soportan C89 y sería bueno soportar esos dispositivos. Esa fue la respuesta de entonces, al menos.\nP: ¿Es un gran coste seguir haciendo C89?\nR: El único coste es para las cosas de contexto que queremos hacer threadlocal. El CPUid o las cosas específicas de x86. Estos podrían ser opcionales. Si realmente quieres entrar en este tema, entonces tal vez más …"},{"uri":"/es/bitcoin-core-dev-tech/2019-06/2019-06-06-maintainers/","title":"Mantenedores (2019-06-06)","content":"Visión de los mantenedores del proyecto Bitcoin Core\nhttps://twitter.com/kanzure/status/1136568307992158208\n¿Cómo piensan o sienten los mantenedores que va todo? ¿Hay frustraciones? ¿Podrían los colaboradores ayudar a eliminar estas frustraciones? Eso es todo lo que tengo.\nSería bueno tener una mejor supervisión o visión general sobre quién está trabajando en qué dirección, para ser más eficientes. A veces he visto gente trabajando en lo mismo, y ambos hacen un pull request similar con mucho …"},{"uri":"/es/speakers/marco-santori/","title":"Marco Santori","content":""},{"uri":"/es/baltic-honeybadger/2018/beyond-bitcoin-decentralized-collaboration/","title":"Más allá de Bitcoin - Colaboración descentralizada","content":"Más allá de Bitcoin: colaboración descentralizada.\nhttps://twitter.com/kanzure/status/1043432684591230976\nhttp://sit.fyi/\nHola a todos. Hoy voy a hablar de algo diferente a lo habitual. Voy a hablar más sobre cómo podemos computar y cómo colaboramos. Empezaré con una introducción a la historia de cómo sucedieron las cosas y por qué son como son hoy. Muchos de ustedes probablemente utilizan aplicaciones SaaS en la nube. 
A menudo se promociona como algo nuevo, algo que ha sucedido en los últimos …"},{"uri":"/es/speakers/matthew-mezinskis/","title":"Matthew Mezinskis","content":""},{"uri":"/es/speakers/max-keidun/","title":"Max Keidun","content":""},{"uri":"/es/needs/","title":"Needs","content":""},{"uri":"/es/speakers/nic-carter/","title":"Nic Carter","content":""},{"uri":"/es/baltic-honeybadger/2018/extreme-opsec-for-the-modern-cypherpunk/","title":"Opsec extremo para el cypherpunk moderno","content":"Opsec extremo para el cypherpunk moderno\nJameson es el ingeniero de infraestructuras de Casa. Demos la bienvenida a Jameson Lopp.\nIntroducción Hola compañeros cypherpunks. Estamos bajo ataque. Las corporaciones y los estados nación en su búsqueda de la omnisciencia han despojado lentamente nuestra privacidad. Somos la rana que está siendo hervida en la olla llamada progreso. No podemos confiar en que las corporaciones nos concedan privacidad por su beneficio. Nuestros fracasos aquí son nuestros. …"},{"uri":"/es/baltic-honeybadger/2018/day-1-closing-panel/","title":"Panel de clausura del primer día","content":"Panel de cierre\nhttps://twitter.com/kanzure/status/1043517333640241152\nRS: Gracias chicos por unirse al panel. Sólo necesitamos una silla más. Voy a presentar a Alex Petrov aquí porque todos los demás estaban en el escenario. El panel de cierre va a ser una visión general de lo que está sucediendo en bitcoin. Quiero empezar con la pregunta que empecé el año pasado. ¿Cuál es el estado actual de bitcoin en comparación con el año pasado? ¿Qué ha sucedido?\nES: El año pasado, estaba sentado al lado …"},{"uri":"/es/baltic-honeybadger/2018/trading-panel/","title":"Panel de negociación","content":"MM: Como dice Trace Mayer, es perseguir al conejo. ¿Por qué no nos presentamos?\nTV: Mi nombre es Tone Vays. Vengo del entorno comercial tradicional de Wall Street. Me uní al espacio de las criptomonedas en 2013. Hablé en mi primera conferencia y escribí mi primer artículo en el primer trimestre de 2014. Estaba haciendo algo de trading. Con la popularidad del canal de youtube y de ir a conferencias, volví a operar con opciones en los mercados tradicionales. Volveré a operar con cripto pronto, …"},{"uri":"/es/speakers/patrick-murck/","title":"Patrick Murck","content":""},{"uri":"/es/speakers/pavol-rusnak/","title":"Pavol Rusnak","content":""},{"uri":"/es/speakers/peter-todd/","title":"Peter Todd","content":""},{"uri":"/es/speakers/pierre-rochard/","title":"Pierre Rochard","content":""},{"uri":"/es/baltic-honeybadger/2018/bitcoin-payment-processing-and-merchants/","title":"Procesamiento de pagos con Bitcoin y comerciantes","content":"1 a 1: Procesamiento de pagos con Bitcoin y comerciantes\nhttps://twitter.com/kanzure/status/1043476967918592003\nV: Hola y bienvenidos a esta increíble conferencia. Es una buena conferencia, vamos. Es genial porque puedes preguntarles sobre la UASF y ellos saben de qué estás hablando. Tengo algunos invitados conmigo hoy. Vamos a hablar sobre el procesamiento de los comerciantes y hablar de la adopción regular de bitcoin también. 
La primera pregunta que tengo para Alena es, \u0026amp;hellip; hubo un gran …"},{"uri":"/es/tags/reglamento/","title":"reglamento","content":""},{"uri":"/es/bitcoin-core-dev-tech/2015-02/research-and-development-goals/","title":"Research And Development Goals","content":"Objetivos y retos de R\u0026amp;amp;D\nA menudo vemos gente diciendo que están probando las aguas, que han corregido un error tipográfico, que han hecho una pequeña corrección que no tiene mucho impacto, que se están acostumbrando al proceso. Se están dando cuenta de que es muy fácil contribuir a Bitcoin Core. Codificas tus cambios, envías tus cambios, no hay mucho que hacer.\nHay una diferencia, y las líneas son difusas e indefinidas, y puedes hacer un cambio en Core que cambie un error ortográfico o un …"},{"uri":"/es/speakers/roman-snitko/","title":"Roman Snitko","content":""},{"uri":"/es/speakers/saifedean-ammous/","title":"Saifedean Ammous","content":""},{"uri":"/es/tags/scalabilidad/","title":"scalabilidad","content":""},{"uri":"/es/speakers/sean-neville/","title":"Sean Neville","content":""},{"uri":"/es/speakers/sergej-kotliar/","title":"Sergej Kotliar","content":""},{"uri":"/es/bitcoin-core-dev-tech/2018-03/2018-03-06-taproot-graftroot-etc/","title":"Taproot, Graftroot, Etc (2018-03-06)","content":"https://twitter.com/kanzure/status/972468121046061056\nGraftroot La idea del graftroot es que en cada contrato hay un superconjunto de personas que pueden gastar el dinero. Esta suposición no siempre es cierta, pero casi siempre lo es. Digamos que quieres bloquear estas monedas durante un año, sin ninguna condición para ello, entonces no funciona. Pero suponga que tiene\u0026amp;hellip; ¿recuperación pubkey? No\u0026amp;hellip; la recuperación pubkey es inherentemente incompatible con cualquier forma de …"},{"uri":"/es/speakers/tone-vays/","title":"Tone Vays","content":""},{"uri":"/es/speakers/vortex/","title":"Vortex","content":""},{"uri":"/es/speakers/whalepanda/","title":"Whalepanda","content":""},{"uri":"/es/speakers/yurii-rashkovskii/","title":"Yurii Rashkovskii","content":""}] \ No newline at end of file diff --git a/es/speakers/aaron-van-wirdum/index.xml b/es/speakers/aaron-van-wirdum/index.xml index 7f125981df..af6ac18ce3 100644 --- a/es/speakers/aaron-van-wirdum/index.xml +++ b/es/speakers/aaron-van-wirdum/index.xml @@ -5,13 +5,13 @@ Episodio anterior en lockinontimeout (LOT): https://btctranscripts.com/bitcoin-m Episodio anterior sobre Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/ Aaron van Wirdum en &ldquo;Ahora hay dos clientes de activación de Taproot, aquí está el porqué&rdquo;: https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why Transcripción por: Michael Folkson -Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html +Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy 
Trial: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado. Sjors, ¿cuál es tu juego de palabras de la semana? Sjors Provoost (SP): En realidad te pedí un juego de palabras y me dijiste &ldquo;Corta, reedita. Vamos a hacerlo de nuevo&rdquo;. No tengo un juego de palabras esta semana. AvW: Los juegos de palabras son lo tuyo. SP: La última vez intentamos esto de LOT.Activación de Taproot y LOT=true vs LOT=falsehttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/Fri, 26 Feb 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki -Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -Argumento adicional para LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +Argumento adicional para LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Artículo de Aaron van Wirdum en LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation Introducción Aaron van Wirdum (AvW): En directo desde Utrecht, este es el van Wirdum Sjorsnado. Sjors, haz el juego de palabras. Sjors Provoost (SP): Tenemos &ldquo;mucho&rdquo; que hablar. diff --git a/es/speakers/matt-corallo/index.xml b/es/speakers/matt-corallo/index.xml index 54ac63e258..f34ff06854 100644 --- a/es/speakers/matt-corallo/index.xml +++ b/es/speakers/matt-corallo/index.xml @@ -1,4 +1,4 @@ -Matt Corallo on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/speakers/matt-corallo/Recent content in Matt Corallo on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 06 Jun 2019 00:00:00 +0000Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +Matt Corallo on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/speakers/matt-corallo/Recent content in Matt Corallo on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 06 Jun 2019 00:00:00 +0000Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introducción No hay mucho nuevo de que hablar. No está claro lo de CODESEPARATOR. Se quiere convertir en regla de consenso que las transacciones no pueden ser mayores de 100 kb. ¿No hay reacciones a eso? De acuerdo. Bien, lo haremos. Hagámoslo. ¿Todos saben cuál es esta propuesta? El tiempo de validación para cualquier bloque, éramos perezosos a la hora de arreglar esto. 
Segwit fue un primer paso para arreglar esto, dando a la gente una manera de hacer esto de una manera más eficiente.Lightning Rusthttps://btctranscripts.com/es/advancing-bitcoin/2019/2019-02-07-matt-corallo-rust-lightning/Thu, 07 Feb 2019 00:00:00 +0000https://btctranscripts.com/es/advancing-bitcoin/2019/2019-02-07-matt-corallo-rust-lightning/Lightning flexible en Rust diff --git a/es/speakers/pieter-wuille/index.xml b/es/speakers/pieter-wuille/index.xml index ba706c7fba..c6eaf0cacc 100644 --- a/es/speakers/pieter-wuille/index.xml +++ b/es/speakers/pieter-wuille/index.xml @@ -7,7 +7,7 @@ Borrador de BIP-Schnorr: https://github.com/sipa/bips/blob/bip-schnorr/bip-schno Borrador de BIP-Taproot: https://github.com/sipa/bips/blob/bip-schnorr/bip-taproot.mediawiki Borrador de BIP-Tapscript: https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki Parte 1 Adam: En este episodio vamos a adentrarnos en uno de los cambios más importantes que llegarán pronto al protocolo de Bitcoin como BIPs o Propuestas de Mejora de Bitcoin (“Bitcoin Improvement Proposals”), con foco en Taproot, Tapscript y firmas Schnorr.Taproothttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ Previamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/es/speakers/ruben-somsen/index.xml b/es/speakers/ruben-somsen/index.xml index 3b907b60a7..118fc49bf2 100644 --- a/es/speakers/ruben-somsen/index.xml +++ b/es/speakers/ruben-somsen/index.xml @@ -1,5 +1,5 @@ Ruben Somsen on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/speakers/ruben-somsen/Recent content in Ruben Somsen on Transcripciones de ₿itcoinHugo -- gohugo.ioesFri, 07 Jun 2019 00:00:00 +0000Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -Formalización de Blind Statechains como servidor de firma ciega minimalista https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +Formalización de Blind Statechains como servidor de firma ciega minimalista https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html Visión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 Documento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf Transcripción anterior: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/es/speakers/sjors-provoost/index.xml b/es/speakers/sjors-provoost/index.xml index 855201b1de..acd1a4e44d 100644 --- a/es/speakers/sjors-provoost/index.xml +++ b/es/speakers/sjors-provoost/index.xml @@ -5,13 +5,13 @@ Episodio anterior en lockinontimeout (LOT): https://btctranscripts.com/bitcoin-m Episodio anterior 
sobre Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/ Aaron van Wirdum en &ldquo;Ahora hay dos clientes de activación de Taproot, aquí está el porqué&rdquo;: https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why Transcripción por: Michael Folkson -Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html +Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado. Sjors, ¿cuál es tu juego de palabras de la semana? Sjors Provoost (SP): En realidad te pedí un juego de palabras y me dijiste &ldquo;Corta, reedita. Vamos a hacerlo de nuevo&rdquo;. No tengo un juego de palabras esta semana. AvW: Los juegos de palabras son lo tuyo. SP: La última vez intentamos esto de LOT.Activación de Taproot y LOT=true vs LOT=falsehttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/Fri, 26 Feb 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki -Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -Argumento adicional para LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +Argumento adicional para LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Artículo de Aaron van Wirdum en LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation Introducción Aaron van Wirdum (AvW): En directo desde Utrecht, este es el van Wirdum Sjorsnado. Sjors, haz el juego de palabras. Sjors Provoost (SP): Tenemos &ldquo;mucho&rdquo; que hablar. 
diff --git a/es/tags/adaptor-signatures/index.xml b/es/tags/adaptor-signatures/index.xml index a09326b522..f63f8f9a98 100644 --- a/es/tags/adaptor-signatures/index.xml +++ b/es/tags/adaptor-signatures/index.xml @@ -1,4 +1,4 @@ adaptor-signatures on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/adaptor-signatures/Recent content in adaptor-signatures on Transcripciones de ₿itcoinHugo -- gohugo.ioesMon, 05 Mar 2018 00:00:00 +0000Intercambios atómicos de curvas transversaleshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/https://twitter.com/kanzure/status/971827042223345664 Borrador de un próximo documento de guiones sin guión. Esto fue a principios de 2017. Pero ya ha pasado todo un año. -transacciones lightning posteriores a schnorr https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html +transacciones lightning posteriores a schnorr https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html Una firma adaptadora.. si tienes diferentes generadores, entonces los dos secretos a revelar, simplemente le das a alguien los dos, más una prueba de un log discreto, y entonces dices aprende el secreto a uno que consiga que la revelación sea la misma. \ No newline at end of file diff --git a/es/tags/channel-factories/index.xml b/es/tags/channel-factories/index.xml index dd71307051..6041ff44f0 100644 --- a/es/tags/channel-factories/index.xml +++ b/es/tags/channel-factories/index.xml @@ -1,5 +1,5 @@ channel-factories on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/channel-factories/Recent content in channel-factories on Transcripciones de ₿itcoinHugo -- gohugo.ioesFri, 07 Jun 2019 00:00:00 +0000Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -Formalización de Blind Statechains como servidor de firma ciega minimalista https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +Formalización de Blind Statechains como servidor de firma ciega minimalista https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html Visión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 Documento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf Transcripción anterior: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/es/tags/consensus-cleanup/index.xml b/es/tags/consensus-cleanup/index.xml index b6f8a1042e..3a87d381e0 100644 --- a/es/tags/consensus-cleanup/index.xml +++ b/es/tags/consensus-cleanup/index.xml @@ -1,4 +1,4 @@ -consensus-cleanup on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/consensus-cleanup/Recent content in consensus-cleanup on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 06 Jun 2019 00:00:00 +0000Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 
+0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +consensus-cleanup on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/consensus-cleanup/Recent content in consensus-cleanup on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 06 Jun 2019 00:00:00 +0000Gran limpieza de consensohttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introducción No hay mucho nuevo de que hablar. No está claro lo de CODESEPARATOR. Se quiere convertir en regla de consenso que las transacciones no pueden ser mayores de 100 kb. ¿No hay reacciones a eso? De acuerdo. Bien, lo haremos. Hagámoslo. ¿Todos saben cuál es esta propuesta? El tiempo de validación para cualquier bloque, éramos perezosos a la hora de arreglar esto. Segwit fue un primer paso para arreglar esto, dando a la gente una manera de hacer esto de una manera más eficiente. \ No newline at end of file diff --git a/es/tags/eltoo/index.xml b/es/tags/eltoo/index.xml index 993cbd1ff9..1877a1fb7e 100644 --- a/es/tags/eltoo/index.xml +++ b/es/tags/eltoo/index.xml @@ -4,7 +4,7 @@ Diapositivas: https://residency.chaincode.com/presentations/lightning/Eltoo.pdf Eltoo white paper: https://blockstream.com/eltoo.pdf Artículo de Bitcoin Magazine: https://bitcoinmagazine.com/articles/noinput-class-bitcoin-soft-fork-simplify-lightning Intro ¿Quién nunca ha oído hablar de eltoo? Es mi proyecto favorito y estoy bastante orgulloso de él. Intentaré que esto sea breve. Me dijeron que todos vieron mi presentación sobre la evolución de los protocolos de actualización. Probablemente iré bastante rápido. 
Quiero darles a todos la oportunidad de hacer preguntas si tienen alguna.Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -Formalización de Blind Statechains como servidor de firma ciega minimalista https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +Formalización de Blind Statechains como servidor de firma ciega minimalista https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html Visión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 Documento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf Transcripción anterior: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/es/tags/mast/index.xml b/es/tags/mast/index.xml index bd698ee6b9..52056bb39c 100644 --- a/es/tags/mast/index.xml +++ b/es/tags/mast/index.xml @@ -1,4 +1,4 @@ mast on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/mast/Recent content in mast on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 07 Sep 2017 00:00:00 +0000Árboles de sintaxis abstracta merkleizadoshttps://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/Thu, 07 Sep 2017 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/https://twitter.com/kanzure/status/907075529534328832 -Árboles de sintaxis abstracta merkleizados (MAST) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html +Árboles de sintaxis abstracta merkleizados (MAST) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html Voy a hablar del esquema que publiqué ayer en la lista de correo, que consiste en implementar MAST (árboles de sintaxis abstracta merkleizados) en bitcoin de la forma menos invasiva posible. Está dividido en dos grandes rasgos de consenso que juntos nos dan MAST. Empezaré con el último BIP. Esto es la evaluación de la llamada de cola. Podemos generalizar P2SH para darnos capacidades más generales, que un solo script de redimensionamiento. 
\ No newline at end of file diff --git a/es/tags/p2c/index.xml b/es/tags/p2c/index.xml index 4f7ff1ff37..a52691cb21 100644 --- a/es/tags/p2c/index.xml +++ b/es/tags/p2c/index.xml @@ -1,5 +1,5 @@ p2c on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/p2c/Recent content in p2c on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 06 Jun 2019 00:00:00 +0000Taproothttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ Previamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/es/tags/quantum-resistance/index.xml b/es/tags/quantum-resistance/index.xml index 855700f6c4..2204c1ce7d 100644 --- a/es/tags/quantum-resistance/index.xml +++ b/es/tags/quantum-resistance/index.xml @@ -1,5 +1,5 @@ quantum-resistance on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/quantum-resistance/Recent content in quantum-resistance on Transcripciones de ₿itcoinHugo -- gohugo.ioesThu, 06 Jun 2019 00:00:00 +0000Taproothttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ Previamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/es/tags/signet/index.xml b/es/tags/signet/index.xml index c04f491c12..3adf581b9e 100644 --- a/es/tags/signet/index.xml +++ b/es/tags/signet/index.xml @@ -1,6 +1,6 @@ signet on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/signet/Recent content in signet on Transcripciones de ₿itcoinHugo -- gohugo.ioesFri, 07 Feb 2020 00:00:00 +0000Taller Signethttps://btctranscripts.com/es/advancing-bitcoin/2020/2020-02-07-kalle-alm-signet-workshop/Fri, 07 Feb 2020 00:00:00 +0000https://btctranscripts.com/es/advancing-bitcoin/2020/2020-02-07-kalle-alm-signet-workshop/Tema: Taller Signet Localización: El avance de Bitcoin Vídeo: No se ha publicado ningún vídeo en línea -Preparémonos mkdir workspace cd workspace git clone https://github.com/bitcoin/bitcoin.git cd bitcoin git remote add kallewoof https://github.com/kallewoof/bitcoin.git git fetch kallewoof git checkout signet ./autogen.sh ./configure -C --disable-bench --disable-test --without-gui make -j5 Cuando intentes ejecutar la parte de configuración vas a tener algunos problemas si no tienes las dependencias. 
Si no tienes las dependencias busca en Google tu sistema operativo y &ldquo;Bitcoin build&rdquo;.Signethttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Preparémonos mkdir workspace cd workspace git clone https://github.com/bitcoin/bitcoin.git cd bitcoin git remote add kallewoof https://github.com/kallewoof/bitcoin.git git fetch kallewoof git checkout signet ./autogen.sh ./configure -C --disable-bench --disable-test --without-gui make -j5 Cuando intentes ejecutar la parte de configuración vas a tener algunos problemas si no tienes las dependencias. Si no tienes las dependencias busca en Google tu sistema operativo y &ldquo;Bitcoin build&rdquo;.Signethttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html https://twitter.com/kanzure/status/1136980462524608512 Introducción Voy a hablar un poco de signet. ¿Alguien no sabe lo que es signet? La idea es tener una firma del bloque o del bloque anterior. La idea es que testnet está horriblemente roto para probar cosas, especialmente para probar cosas a largo plazo. Hay grandes reorgs en testnet. ¿Qué pasa con testnet con un ajuste de dificultad menos roto? Testnet es realmente para probar mineros. Uno de los objetivos es que quieras una falta de fiabilidad predecible y no una falta de fiabilidad que destroce el mundo. \ No newline at end of file diff --git a/es/tags/soft-fork-activation/index.xml b/es/tags/soft-fork-activation/index.xml index e6c1e6a8dc..29ad5aa9a3 100644 --- a/es/tags/soft-fork-activation/index.xml +++ b/es/tags/soft-fork-activation/index.xml @@ -1,10 +1,10 @@ -soft-fork-activation on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/soft-fork-activation/Recent content in soft-fork-activation on Transcripciones de ₿itcoinHugo -- gohugo.ioesFri, 12 Mar 2021 00:00:00 +0000Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html +soft-fork-activation on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/soft-fork-activation/Recent content in soft-fork-activation on Transcripciones de ₿itcoinHugo -- gohugo.ioesFri, 12 Mar 2021 00:00:00 +0000Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado. Sjors, ¿cuál es tu juego de palabras de la semana? Sjors Provoost (SP): En realidad te pedí un juego de palabras y me dijiste &ldquo;Corta, reedita. Vamos a hacerlo de nuevo&rdquo;. No tengo un juego de palabras esta semana. AvW: Los juegos de palabras son lo tuyo. 
SP: La última vez intentamos esto de LOT.Activación de Taproot y LOT=true vs LOT=falsehttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/Fri, 26 Feb 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki -Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -Argumento adicional para LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +Argumento adicional para LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Artículo de Aaron van Wirdum en LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation Introducción Aaron van Wirdum (AvW): En directo desde Utrecht, este es el van Wirdum Sjorsnado. Sjors, haz el juego de palabras. Sjors Provoost (SP): Tenemos &ldquo;mucho&rdquo; que hablar. diff --git a/es/tags/statechains/index.xml b/es/tags/statechains/index.xml index 0a96df79f6..d2578594c6 100644 --- a/es/tags/statechains/index.xml +++ b/es/tags/statechains/index.xml @@ -1,5 +1,5 @@ statechains on Transcripciones de ₿itcoinhttps://btctranscripts.com/es/tags/statechains/Recent content in statechains on Transcripciones de ₿itcoinHugo -- gohugo.ioesFri, 07 Jun 2019 00:00:00 +0000Cadenas de estado ciegas: Transferencia de UTXO con un servidor de firmas ciegashttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -Formalización de Blind Statechains como servidor de firma ciega minimalista https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +Formalización de Blind Statechains como servidor de firma ciega minimalista https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html Visión General: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 Documento statechains: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf Transcripción anterior: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/es/tags/taproot/index.xml b/es/tags/taproot/index.xml index b7e5add6b0..922706e398 100644 --- a/es/tags/taproot/index.xml +++ b/es/tags/taproot/index.xml @@ -5,13 +5,13 @@ Episodio anterior en lockinontimeout (LOT): https://btctranscripts.com/bitcoin-m Episodio anterior sobre Speedy Trial: https://btctranscripts.com/bitcoin-magazine/2021-03-12-taproot-activation-speedy-trial/ Aaron van Wirdum en &ldquo;Ahora hay dos clientes de activación de Taproot, aquí está el porqué&rdquo;: https://bitcoinmagazine.com/technical/there-are-now-two-taproot-activation-clients-heres-why Transcripción por: Michael Folkson -Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 
+0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html +Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado.Taproot Activación con Speedy Trialhttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Fri, 12 Mar 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-speedy-trial/Propuesta Speedy Trial: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html Introducción Aaron van Wirdum (AvW): En directo desde Utrecht este es el van Wirdum Sjorsnado. Sjors, ¿cuál es tu juego de palabras de la semana? Sjors Provoost (SP): En realidad te pedí un juego de palabras y me dijiste &ldquo;Corta, reedita. Vamos a hacerlo de nuevo&rdquo;. No tengo un juego de palabras esta semana. AvW: Los juegos de palabras son lo tuyo. SP: La última vez intentamos esto de LOT.Activación de Taproot y LOT=true vs LOT=falsehttps://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/Fri, 26 Feb 2021 00:00:00 +0000https://btctranscripts.com/es/bitcoin-explained/taproot-activation-lockinontimeout/BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki -Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -Argumento adicional para LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +Argumentos para LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +Argumento adicional para LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Artículo de Aaron van Wirdum en LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation Introducción Aaron van Wirdum (AvW): En directo desde Utrecht, este es el van Wirdum Sjorsnado. Sjors, haz el juego de palabras. Sjors Provoost (SP): Tenemos &ldquo;mucho&rdquo; que hablar. 
@@ -48,7 +48,7 @@ Borrador de BIP-Schnorr: https://github.com/sipa/bips/blob/bip-schnorr/bip-schno Borrador de BIP-Taproot: https://github.com/sipa/bips/blob/bip-schnorr/bip-taproot.mediawiki Borrador de BIP-Tapscript: https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki Parte 1 Adam: En este episodio vamos a adentrarnos en uno de los cambios más importantes que llegarán pronto al protocolo de Bitcoin como BIPs o Propuestas de Mejora de Bitcoin (“Bitcoin Improvement Proposals”), con foco en Taproot, Tapscript y firmas Schnorr.Taproothttps://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/es/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ Previamente: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/greg-maxwell/2017-04-28-gmaxwell-confidential-transactions/index.html b/greg-maxwell/2017-04-28-gmaxwell-confidential-transactions/index.html index b351b833e8..8138a1868b 100644 --- a/greg-maxwell/2017-04-28-gmaxwell-confidential-transactions/index.html +++ b/greg-maxwell/2017-04-28-gmaxwell-confidential-transactions/index.html @@ -10,4 +10,4 @@ < Confidential Transactions

Confidential Transactions

Speakers: Greg Maxwell

Date: April 28, 2017

Transcript By: Bryan Bishop

Tags: Privacy enhancements

Media: -http://web.archive.org/web/20171115183423/https://www.youtube.com/watch?v=LHPYNZ8i1cU&feature=youtu.be&t=1m

https://twitter.com/kanzure/status/859604355917414400

Introduction

Thank you.

So as mentioned, I am going to be talking about confidential transactions today. And I look at confidential transactions as a fundamental building-block piece of technology; it’s not like an altcoin or something like that. It’s not a turnkey system, it’s some fundamental tech, and there are many different ways to use it and deploy it.

My interest in this technology is primarily driven by bitcoin, but the technology here could be applied to many other things, including things outside of the cryptocurrency space.

So I want to talk a little bit, in some detail, about what motivates me to work on this and why I think it’s interesting. What does CT do? What are the costs and benefits of a system like CT in bitcoin? I want to talk about extensions and related technology, and maybe also the prospect of using CT with bitcoin in the future. I also have some slides that go into how CT works in more technical detail, but I’m not sure how much interest there is in getting into the really deep details. I am seeing some thumbs up, so lots of details. We’ll see how the time works out on it too, because I want to leave some good time for questions and such. And if I start blabbering off in space and someone is confused, then please ask me to clarify.

Why confidential transactions?

CT is a tool for improving privacy and fungibility in a system like bitcoin. Why is this important? If you look at traditional systems, like banking systems, they provide a level of privacy in their systems. And when you usually use your bank account, your neighbors don’t know what you’re doing with your bank account. Your bank might know, but your neighbors don’t. Your counterparties know, other people generally don’t know. This is important to people, both for personal reasons as well as for commercial reasons. In a business, you don’t want your competition to know who is supplying your parts. You don’t want to necessarily share who is getting paid what. You don’t want your competition to have insight into your big sales or who your customers are and all that. There is a long list of commercial reasons as to why this kind of privacy is important.

Personally as well– if you think about it, in order for a person to speak up in the political sphere, to have an effect on the world, you need to spend some money to do it… so basically your spending of money is intricately tied to your ability to have free speech in the world. If someone is able to surveil and monitor and control your use of money all the time, they are also controlling your ability to speak. This is also the opinion of the U.S. Supreme Court in Citizens United, and a lot of people on the more liberal end of the spectrum don’t like this conflation of money and free speech, but it cuts both ways. People should have a high degree of financial privacy, and they can’t have that if they are using a system that doesn’t provide that.

In bitcoin, we have this fundamental conflict where we have built this decentralized distributed network, and its security is based on this concept of public verification: you know bitcoin is correct because your own computer, running software that you or someone you trust has audited, has verified all of this history of it, and has verified that it all adds up. That all the signatures match, that all the values match and it’s all good to go. It uses conspicuous transparency in order to obtain security. Many people look at this and say ah, there’s a conflict, we can’t both have strong privacy and have this public verification needed for a decentralized system. But this apparent conflict is not real. Think about the idea of a digital signature: that is a scheme that lets you sign a message and show that someone who knows the private key signed it. We use this every day in bitcoin. Anyone can verify a digital signature without ever learning your private key, and yet they can verify that you knew it. So digital signatures show that you can build a system that can be publicly verified without giving up your secrets. This shows that it is possible.

This whole idea of a system of money needed some kind of privacy– that’s not a new one. It’s discussed in the bitcoin whitepaper, where it says that you can basically use pseudonyms, one use addresses, in order to have privacy in the system. The challenge that we found is that this is very fragile. In practice, the pseudonymity in bitcoin doesn’t actually provide a whole lot of privacy. People reuse addresses and then it’s really easy to trace the transaction graph through the history. The way that the earliest bitcoin software was written, you didn’t usually pay to an address directly, you would provide an IP address, then your node would go to the other node and get an address from it, and then every payment would use a new fresh address on the blockchain. That was a much more private way of using things, but it required that to receive payments you had to run a node exposed to the internet not running behind NAT. So not very usable. And that whole protocol in fact was completely unauthenticated, so it was very vulnerable to man-in-the-middle attacks as well. That wasn’t a good way to transact, and as a result people started using bitcoin addresses for everything, and the address model ends up causing lots of reuse and things that really degrade privacy.

When I have gone out and talked to institutions outside of bitcoin in the wider world and talked about using bitcoin itself as well as using blockchain sort-of technology adopted from bitcoin, one of the first concerns that many boring institutions like banks have brought up is this lack of privacy, from all these commercial angles, not necessarily protecting personal rights, but keeping your business deals secret is sort of the bread and butter of commerce. So the transparency that bitcoin provides is often touted as a very powerful feature, so that everyone can see the blockchain and it’s transparent– but transparency is a feature that cuts both ways. Having an open blockchain doesn’t necessarily mean your activities are transparent. You have an open blockchain, and everyone can see the transactions, but it doesn’t mean they have any wisdom. Right. It is not knowledge. Knowledge is not wisdom. And so this very open nature of bitcoin, unless people take action to turn this openness into transparency, isn’t functionally transparency, and the lack of privacy in bitcoin exacerbates existing power imbalances.. If you’re just some nobody and someone wants to block your transactions and mess around with you, you’re made more visible by the fact that your transactions are visible. And, some people are concerned about more harmful users, like users that want to use cryptocurrencies for immoral activities – it might be argued that their lack of privacy is a feature. But it turns out that people doing criminal activities have money to blow, and they can afford to purchase privacy via expensive means regardless. They don’t look and say oh bitcoin isn’t private I guess I’ll just have to make all of my criminal activities visible, no that’s not how it works. Certainly less privacy in bitcoin can make investigations in bitcoin easier at times, but it doesn’t fundamentally block criminal activity. So in my view, when we don’t have good privacy in a system like bitcoin, the criminal activity still occurs it’s just that everyone else who can’t afford to get privacy that suffers from the lack.

And a key point that I don’t have in the slide here that I should mention is that privacy is integrally tied to fungibility. The idea behind fungibility is that one coin is equivalent to every other coin, and this is an essential property for any money. The reason why it is essential is so that when you get paid, you know you’ve been paid. If instead you were being paid in a highly non-fungible asset like a piece of art or whatever, you might receive the art and think you were paid and then later find out the art was a forgery or you find out it’s stolen and you have to give it back. And so, that art doesn’t work as well as money, one piece of art isn’t the same as another piece of art. The more we can make our money-like asset behave in a fungible way, the better it is. And this is a well-recognized legal and social concept elsewhere. The US dollar is not physically fungible… one dollar is very different from another dollar, in fact they are serialized and have serial numbers, but for the most part as a society we are willfully blind to the non-fungibility of the dollar bill, because if we say hey this dollar is counterfeit but it’s unidentifiably counterfeit so it’s no good, then the dollar breaks down as being a useful money. And so we ignore those things, and in cryptocurrency we need to do a degree of that as well, if you want cryptocurrency to function as money.

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=10m24s

There have been many past proposals to improve privacy in the bitcoin space. Many people have recognized there’s a problem here. People have done things like, proposals to combine coinjoin and coinswap, which are completely compatible with the existing bitcoin system and use some smart contracting protocol trickery to increase user privacy. People have also used things like centralized servers where you trust a third-party, you give them the money, they make the payments for you, and that can at times improve your privacy except at the central server which usually destroys your privacy along the way. There have been cryptographic proposals, there’s a proposal called zerocoin which is distinct from the zerocash stuff that exists today, which had poor scalability but interesting privacy properties. There’s an interesting not-well-known proposal called one-way aggregatable signatures (OWAS) which showed up on bitcointalk from an anonymous author who vanished (follow-up). It’s a pretty interesting approach. There’s the traceable ring signatures that were used in bytecoin, monero and more recently in zerocash system which is now showing up as an altcoin. The compatible things– like coinjoin– have mostly suffered from not having as much privacy as you would expect, due to transaction amount tracing. The idea behind coinjoin is that multiple parties come together and they jointly author a single transaction that spends all of their money and pays out to a set of addresses, and by doing that, an observer can’t determine which outputs match with which inputs so that the users gain privacy from doing this, but if all the users are putting in and taking out different amounts then you can easily unravel the coinjoin and the requirement to make the amounts match to prevent the unraveling would make coinjoin hard to be usable. And then, prior to confidential transactions as a proposal, the cryptographic solutions that people proposed have broken the pruning process in bitcoin. They really have harmed the scalability of the system causing the state to balloon up forever. Today you can run a bitcoin node with something like 2 GB of space on a system, and that’s all it needs in order to validate new incoming blocks and that’s with all the validation and all the rules no trusting third parties. And if not for pruning, then that number would be 120 gigabytes and rapidly growing. Many of these cryptographic privacy tools have basically broken this pruning and have hurt scaling… the exception being the more recent tumblebit proposal, and tumblebit is a proposal like coinswap that allows two users to swap ownership of coins in a secure way without network observer being able to tell that they did this. Tumblebit improves the privacy of that by making the users themselves not able to tell who the other users were, and tumblebit doesn’t break scalability, but because it only does these swaps and a few other things, it’s not as flexible as many of the other options.

Confidential transactions

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=13m37s

What is confidential transactions, what does it do? So the prior work on improving privacy in cryptocurrency systems has been entirely focused on or almost entirely focused on making the transaction graph private. It’s addressing the linkage between transactions. And this is kind of odd, because if you think about internet protocols, the transaction graph is like metadata: who’s talking to who. And metadata is absolutely information that needs to be private. But normally when we communicate on the internet, we don’t worry so much about making our metadata private, unless we use something like tor. We normally use SSL which makes the content of our messages private. Well the content of a transaction is the destination and the amounts involved. That’s really the transaction. And the prior privacy schemes, most of them didn’t make the content private, which is a little surprising. If you use one-use addresses, the destination is inherently pretty private, but the amounts are not private. In fact, in many commercial contexts at least, the amount is even more important to keep private. Knowing that Alice is paying Bob is somewhat interesting, but knowing that Alice is paying Bob $10,000 is much more interesting. The amounts are pretty important. And one of the things on my slides a couple back I didn’t mention is that one of the things we worry about in bitcoin in particular, as we watch the market price rocket up, is the security implications of owning bitcoin. If you own a bunch of coins, you generally don’t want the world to know about it. And if you go into a Starbucks and try to pay for something with bitcoin, you make a transaction and your transaction moves 1k bitcoin at a time, you have to worry that maybe the barista is out back calling his buddies to mug you and take your phone or something. So making the amounts private is important.

That’s what confidential transactions does. It makes the amounts private. There are many benefits to this. The way that this works is that normally in a transaction there’s an 8-byte value that indicates the amount of the output, and that 8-byte value is replaced with this 33 byte commitment which is functionally like a cryptographic hash. Instead of putting the amount in the transaction, you put this hash of the amount in the transaction. And these commitments have a special property: the hashing process preserves addition, which means that if you take the hash of one amount and the hash of another amount and add them together, you get the hash of their sum. This addition preserving property is what allows the network to still verify that the inputs and the outputs match up. This is why CT can be verified by a public network even though the information is private.
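
To make the addition-preserving idea concrete, here is a minimal sketch in Python. It uses a toy Pedersen-style commitment over ordinary modular arithmetic, which is not secure and is not what CT actually ships (real CT commitments are 33-byte secp256k1 curve points), but the algebra that lets a verifier check that inputs equal outputs without seeing any amount is the same:

```python
#!/usr/bin/env python3
# Minimal sketch of an additively homomorphic ("Pedersen-style") commitment.
# Illustration only: this toy modular-arithmetic group is NOT secure; real CT
# uses 33-byte secp256k1 curve points, but the algebra the verifier relies on
# is the same.
import secrets

P = 2**127 - 1          # toy prime modulus
G, H = 5, 7             # toy "generators"; in real CT nobody may know log_G(H)

def commit(value: int, blind: int) -> int:
    """commit(v, r) = G^v * H^r (mod P): hides v, yet sums remain checkable."""
    return (pow(G, value, P) * pow(H, blind, P)) % P

# Spend a 10-coin input into outputs of 7 and 3 coins.
r1, r2 = secrets.randbelow(2**64), secrets.randbelow(2**64)
out1, out2 = commit(7, r1), commit(3, r2)

# "Adding" committed amounts is multiplying commitments here, so the sender
# just picks the input blinding factor to make everything cancel.  A verifier
# multiplies and compares without ever learning the amounts 10, 7 or 3.
inp = commit(10, r1 + r2)
assert (out1 * out2) % P == inp
print("amounts hidden, but inputs == outputs still verifies")
```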

Using this kind of scheme for bitcoin was proposed by Adam Back in 2013 on bitcointalk. I assume these slides will be made available online, there’s links on all of these things too. And Adam’s post is really hard to understand, even for me. Just knowing that he said you could do it, I went and reinvented the scheme and also came up with a much more efficient construction than he had proposed, and in fact the constructions I used for CT were by a large factor more efficient than any system in the literature previously, and that’s required in order to possibly begin to think about using these things in a large scale public system such as bitcoin. I’ll talk a little bit more about that.

Costs

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=17m19s

  • No new cryptographic assumptions: CT is secure if secp256k1 ECC signatures are secure (or even better).

  • ~66% increase in UTXO size (addition of 33 byte ‘CT amounts’)

  • 15x-20x increase in bandwidth for transactions

  • 30x-60x increase in validation costs. Not an issue for high end hardware especially due to validation caching, but a problem for low end hardware.

  • Addresses which are ~twice the length.

  • Moderate code complexity (+1.4kloc crypto code)

So on that subject of costs, doing this stuff with the hidden values isn’t cheap in this system. The end result is that on the plus side we can do it with no new cryptographic assumptions, by which I mean that CT is completely secure as long as our ECC signatures are completely secure. And in fact we can do even better, so that CT is completely secure against inflation even if ECC is completely broken, and I’ll talk a bit more about that. So that’s the good part.

The downside… so it’s roughly on order of a 66% increase in the UTXO set size. That’s manageable, that’s about 2 GB right now and maybe doubling it, that’s reasonable. But it also results in something of the order of a 15-20x in the amount of bandwidth required for transactions. Now, CT is completely prunable so you don’t have to keep that data around, but you do have to keep it temporarily.

It also results in something like a 30-60x increase in signature validation cost. So if you can assume that all your nodes are high-end hardware and they aren’t going to validate old history, taking a 30x increase in validation cost is okay because there is extensive caching in bitcoin node software.

Also it results in addresses that are 2x the length because CT needs to include an additional pubkey in the address which is used to encrypt a message to the receiver to tell them how much you’re paying them, or else they wouldn’t know.

There’s a complexity hit, my implementation adds about 1400 lines of code, this is pretty small compared to the rest of the crypto code in bitcoin, but it is a cost. It’s as small as it is because I was able to leverage existing infrastructure in bitcoin code.

Benefits

What benefits do we get out of this? Well, the amounts are private, and that has some direct effects. In many cases you need fewer transactions or outputs in order to preserve privacy. Today if you were to do something like decide to pay all your staff in coins and didn’t want to disclose how much your staff was paid, you would probably get your staff to provide you with lots of addresses per staff member and then you would pay the same amount to different addresses, and the people who get multiple transactions would get different total payouts. That would be reasonably private, until people go and spend those outputs in groups and reveal some of it, but it would be very inefficient because it would require larger transactions to execute that scheme. So I think making outputs private really does solve many of the commercial problems. That’s the big win directly.

On the privacy that CT provides, it’s controllable by the user. The sender obviously knows what he sent, the receiver knows what he received, but additionally the sender or receiver can show to any third party all the details of the transaction to show and prove that a payment happened, or publish it to the world. My ability to prove a payment is not reduced by CT. Moreover, it’s straightforward in the design of CT to share a per wallet master key with an auditor and the auditor could see all the information. You control that.

One of the side effects of the design is this: I mentioned the 15-20x increase in transaction bandwidth because of larger transactions, but 80% of that increase could be used to communicate a private memo from the sender to the receiver. This private memo can contain structured or unstructured arbitrary data from one side to the other. In the typical transaction there would be several kilobytes of memo field. Unfortunately we can’t eliminate this data overhead, but we can get dual use out of this– it’s part of the cryptographic overhead for CT, but it also lets you send data.

One of the other benefits of using CT in a system is that it’s quite straightforward to do private solvency proofs over CT (Provisions 2015 paper). If you were to imagine that if bitcoin was using CT today, you would be able to construct a proof that says I own 1000 coins, at least 1000 coins, out of the set of all coins that exist, if everyone was using CT, the privacy set of that would be all the coins, and you could prove this to someone without revealing which coins are yours, and even without the amount, you could reveal just a threshold. You could build a system that in a bank-like situation, or an exchange, you could show that the users account balances all add up to the sum of the assets without showing what that sum is. So you could say something like Coinbase for example that the users account balances add up to the sum of the coins that Coinbase holds, without revealing the total number of users or total amounts. It’s possible to do this with bitcoin without CT by layering CT on top of it, but you don’t get a full privacy set because if you’re the only person using that privacy proof system then you would be easily discovered so it doesn’t add much privacy on the full set.

Benefits: Metadata privacy

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=23m10s

So, I said that CT provides value privacy, but in fact it also indirectly provides metadata graph privacy. The reason for this is that the existing compatible non-cryptographic proposals like coinjoin for graph privacy are primarily broken by their lack of amount privacy; their usability is also broken by that lack. Coinjoin and CT can be combined so that you can have multiple people collaborate to make a transaction, nobody learns any amounts, and the observer doesn’t learn the matchup between inputs and outputs. You lose the coinjoin problem where input and output values have to match. So if you do this scheme, you can get metadata privacy and amount privacy, and it preserves the properties of CT alone. This is true for coinswap and tumblebit and many other techniques.
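
A rough sketch of that combination, reusing the same toy modular-arithmetic commitment as the earlier example rather than real secp256k1 CT: two parties contribute unequal, hidden amounts to one joined transaction, and the verifier only checks that the aggregate commitments balance, so matching denominations are no longer needed.

```python
# Toy CT + coinjoin sketch (illustration only, not real CT).
P, G, H = 2**127 - 1, 5, 7

def commit(v, r):
    return (pow(G, v, P) * pow(H, r, P)) % P

def product(commitments):
    acc = 1
    for c in commitments:
        acc = (acc * c) % P
    return acc

# Alice spends 5 into 2 + 3; Bob spends 8 into 7 + 1.  Each party picks their
# own blinding factors so that their personal inputs and outputs cancel.
alice_ins, alice_outs = [commit(5, 100)], [commit(2, 40), commit(3, 60)]
bob_ins,   bob_outs   = [commit(8, 500)], [commit(7, 450), commit(1, 50)]

# A verifier of the joined transaction checks only the aggregate balance,
# learning neither the amounts nor which outputs belong to which party.
assert product(alice_ins + bob_ins) == product(alice_outs + bob_outs)
```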

Benefits: Scalability

Although CT has significant costs, I would actually say that one of CT’s advantages is scalability. The costs involved in CT are constant factors. Pretty big constant factors, but they are constant, they don’t change the asymptotic scaling of the system. All of the overhead that CT creates, or almost all of the overhead, is prunable. So it will blowup the history of the chain but the state required to run a node and continue verifying things doesn’t increase that much. Other schemes for privacy result in perpetual growing states where you need to keep perpetual history on every node. CT + coinjoin and technologies based on it, like mimblewimble (see also andytoshi’s talk), are the only strong privacy approaches that don’t break scalability.

Benefits: Decentralization offset

Another benefit you can see from looking at something like CT is something I would call decentralization offset. We talked before about how fungibility is important, but in systems like bitcoin there is a risk that miners or other ecosystem players might discriminate against coins or be forced by third parties to participate in discrimination. If they did that, and did it enough, it would degrade the value of the system, although it might be profitable for the participating parties– perhaps they get paid by regulators or something to discriminate against coins. There was some pre-publication work in this MIT chainanchor paper, where they have since removed this from their paper, but they suggested paying miners to mine only approved transactions. My response is to not be angry about this; it’s a vulnerability in the system, let’s fix it. If the system is more fungible, then issues related to decentralization limits in other parts of the system are less of an issue, because all the coins look the same and therefore you can’t discriminate against coins. So that’s an advantage I think.

CT soundness

I mentioned before that CT is at least as strong as ECC security but actually it’s potentially stronger. The scheme in our original CT publication provides what’s called unconditional privacy but only computational soundness. This means that if ECC is broken the privacy is not compromised, but someone could inflate the currency. What unconditional privacy means is that even if you were to get an infinitely powerful computer that could solve any problem in one operation, you still could not remove the amount privacy in CT. The privacy part of it cannot be broken. But it only provides computational soundness, which means that if ECC is broken or if someone has an infinitely powerful computer, they can basically make CT proofs that are false and create inflation. And it’s fundamentally impossible for any scheme to achieve both unconditional privacy and unconditional soundness. It could miss both, it could have one but not the other, but it cannot have both.
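
For intuition on why that tradeoff matters, here is a toy demonstration, again with the illustrative modular-arithmetic commitment rather than real CT: if the discrete log relating the two generators were ever learned, the same commitment could be opened to two different amounts, which in a currency means silent inflation, even though the hiding of the amounts is untouched.

```python
# Toy demonstration of computational soundness (illustration only, not real CT).
P, G = 2**127 - 1, 5
k = 123456789                      # pretend the trapdoor k = log_G(H) has leaked
H = pow(G, k, P)

def commit(v, r):
    return (pow(G, v, P) * pow(H, r, P)) % P

c = commit(10, 999)                # honestly committed to 10 coins...
assert c == commit(10 + k, 998)    # ...yet it also opens as 10 + k coins

# Hiding is untouched, but a discrete-log break allows silent inflation.
# Built the other way around, CT gives computational privacy with soundness
# that holds even if ECC falls, so the supply can never be inflated unseen.
```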

For use in bitcoin, it probably makes sense to have a system that has computational privacy where if the crypto is broken you lose your privacy, but unconditional soundness where you know for absolute certainty that the coins cannot be inflated by the system. I am not too concerned about ECC in bitcoin, and even if it were to become broken, we have bigger problems to deal with. But, you know, for a system you want people to adopt, you have to answer FUD. Even if you could possibly have unending inflation in the system, hidden away, that would be a pretty good FUD point and I think it would be important for bitcoin to fix it.

When we first worked on CT, we knew that we could solve the soundness and make it unconditionally sound, but we thought it would make it significantly less efficient. It turns out that we found a way to make it equally efficient while preserving the unconditional soundness. And this is something that we have been working on for a while, as has Oleg Andreev, for unconditional soundness for CT. We have some implementations that haven’t been published yet that do this.

CT vs Zcash

Now let me talk some comparisons of CT with some other technologies. And I’m talking about more the technology than the altcoin in cases where this stuff is implemented. Zcash uses some very modern cryptography to do some really cool stuff, and zcash directly hides the amount and the transaction metadata and hides it completely. Among all the private transactions in zcash, you can’t tell what the linkages are between any of them. That’s even stronger than coinjoin can provide. It’s basically perfect from that perspective, but you always have to be careful when only thinking about a little perspective at a time. Zcash isn’t unconditionally sound and cannot be made unconditionally sound with current technology, not even close. In fact, zcash requires trusted setup which means that a number of trusted parties have to get together, and if they cheat then they can break the crypto and make unbounded undetectable inflation. If there’s a crypto break, or a trusted setup flaw, then it’s really bad news. They had a ritual to increase trust in the trusted setup, but they have to redo this procedure to upgrade the crypto over time. So it’s a vulnerability.

Zcash involves a growing accumulator so there’s basically a spent coins list that all nodes must have. That spent coins list will grow forever. Zcash is not just using one new piece of crypto, it’s kind of a stack of new crypto on top of new crypto. I spoke with Dan Boneh, the author of the pairing cryptography that zcash is based on, and I said well what do you think about the security of pairing and he said oh yeah it’s great, then I asked well what do you think about the security of SNARKS which is the next layer in the zcash stack and he sort of shrugged and went “ehh I dunno, it’s new”. So, not to say it isn’t secure, but it’s really new, and that’s a big hurdle.

There have also been recent advancements in breaking the kind of crypto that is used in zcash that will eventually require that they upgrade to a larger curve at a minimum. Right now they have something on the order of 80 or 90 bits of security which isn’t really considered all that strong. Particularly with those upgrades, their verification speed is on par with CT, it would be maybe 25% in the future. Verification is similar but a bit slower.

The real killer for zcash in practice is that the signing speed is horribly slow, we’re talking about minutes to sign a private transaction. Zcash couldn’t plausibly make private transactions mandatory because of that slowness and UX, and as a result it’s optional, and as a result very few of the transactions in the zcash blockchain are using the private transaction feature. If you look at the raw numbers, it’s 24%, but that number is misleading because miners are required to pay to a private address, but most miners are pools and the pools immediately unblind those coins, and if you separate out the mining load then maybe on the order of 4% of zcash transactions are private. As a result, the anonymity set that this “perfect” anonymity system is achieving isn’t that good. I think the idea is cool but it’s not the kind of proposal I would want to take to bitcoin today.

CT vs Monero

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=32m22s

Monero is another kind of altcoin that uses a privacy technique that first appeared in bytecoin, which is an altcoin with probably the scammiest launch of anything I have ever seen. But monero is the reboot of bytecoin that removed the ridiculous premine and 5 years of faked history and obfuscated mining code and who knows what else. But what was really cool about bytecoin was that it had cool crypto in it that hadn’t been discussed. The idea behind bytecoin was that it used a ring signature, and a ring signature lets you say I am spending one of these 4 coins and I won’t tell you which one, but I will prove to you that it hadn’t been spent before, whichever one it was. This hides the graph. The privacy of bytecoin and monero originally was really fragile, and a vast majority of transactions were deanonymizable because of amount non-privacy, which made things visible in the system.

More modern monero as of a few months ago is using CT, they have adapted ring signatures with CT for something called ringCT and it’s sort of like how CT can be combined with coinjoin.

So this system has the benefits of CT but it also has the disadvantages of the ring signature and there’s a forever-growing spent coins list. Monero today isn’t unconditionally sound, but it could be; just like CT it could be upgraded to unconditional soundness. The crypto assumptions are the same as CT; they use the ed25519 curve, but otherwise it’s the same crypto assumptions.

The other cryptographically-private altcoin that people talk about is Dash… but it’s not cryptographically private at all. I had a slide about this that was just “Dash LOL”. It’s snakeoil. I’m beside myself about it, personally. What they have is a system like coinjoin: they nominate nodes based on proof of stake to be coinjoin masters, and they have done this insecurely many times in the past; I have no idea if the current version is secure. It’s not on the same level as zcash or monero; maybe it’s better than doing nothing, I don’t know. LOL, right?

Other risks

There are other risks with this technology. I guess I was talking about one of them right now. It’s difficult to distinguish snakeoil from real stuff and vet claims. Maybe some of the stuff I said today was crap, like maybe someone will claim that I said CT is unconditionally private (which is something I said) but maybe because of the way it gets used it’s not actually unconditionally private…. right, like it’s hard to work out this stuff and figure out which complaints are legitimate, not too many experts, new area, many people that try to sound like experts, etc.

A big risk is that if you look at any of these other cryptosystems, the consensus part of it is itself a cryptosystem just like RSA or AES or whatever. The whole thing has to work exactly right, and if one part is wrong then it doesn’t give the properties you think, and if you get it wrong, everything breaks. Complexity is the enemy of integrity, it’s hard to verify it. It makes the system more complex. I’m currently aware of three and a half devastating failures in privacy improvement attempts… There’s an altcoin based on the original zcoin proposal, where they had an inflation incident in which more or less a typo made it possible for people to mint coins out of nothing. They let their system keep running, the coins inflated, then they said well OK, now there’s more coins. The loss was effectively socialized. Fortunately, because of some of the privacy limitations in the approach to exploiting it, this was caught, the loss was found to be sort of small and it was possible to socialize it, but if they (the attacker) had printed billions and billions of coins, then that might not have been salvageable.

There’s a system called Shadowcash which is a clone of the bytecoin-like monero-like ring signatures with no CT based on the bitcoin code base. And, it had no privacy at all. I don’t know if the person building it was a complete fool or utter genius, but they managed to make a really subtle mistake that made it have no privacy.

The upgrade to monero called ringCT, which is now deployed: the original version was unsound and could cause unbounded inflation due to basically a protocol design error. When they created ringCT, they didn’t copy the design of CT, they reinvented it again (I guess I’m guilty of that too), but anyway they made an error while doing so, and it made it through the Ledger journal’s peer review without being detected. Fortunately it was detected by the Monero team, as well as someone at my company, Blockstream, before it was deployed in the system, and they fixed it. As far as anyone knows, the system is currently sound, and that’s my “half” one.

There’s another system that is non-public but it will be public in a couple months that was a total break in one of these kinds of things. It isn’t a concern now. But we have to learn from these mistakes as time goes on. (maybe this one)

I don’t really think that these privacy systems are really much harder than normal consensus work in cryptocurrencies, but the failures are severe and really hard or impossible to recover from, and it’s a place where people are doing something interesting, it’s not just the hot air stuff in altcoins, this cryptographic privacy stuff is real.

Further enhancements: Mimblewimble

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=38m25s

http://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2016-11-21-mimblewimble/

I would like to talk about where we could go from CT today. Maybe I should have added a slide here to say where we currently are. I have published a high-performance implementation of CT and it’s part of this demo called Elements alpha sidechain. It’s used in our Liquid product, too. It’s out there, it’s available, it’s described, it’s been included in academic publications. It’s real, it’s still being improved. Well where can we go moving forward?

CT is pretty powerful when combined with coinjoin. But coinjoin is interactive and the parties have to be online and communicate with each other. Coinjoin can be made non-interactive, to allow miners to aggregate transactions to improve scalability and privacy. Some of the properties of this aggregation make it possible to sync a node without looking at history, only at the UTXO set, but with the same security as if you had inspected the entire history, without having to transfer the entire history. The downside is that the state size for the verifying node is massively increased. The end result is that if– if bitcoin was mimblewimble from day one, the equivalent of the UTXO set size would be the size of the blockchain, but you wouldn’t need the blockchain to sync a node. The asymptotics are good, the constant factors stink, and people are experimenting with this and enhancing this.

Further enhancements: Multi-asset

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=40m20s

paper: https://blockstream.com/bitcoin17-final41.pdf

CT is single asset, like bitcoin. But it’s possible to extend CT to support many assets that could be kept in the same system– and in doing so, you provide privacy for the assets being traded. For a low volume asset, just knowing that it’s being traded at all is the really important thing that you need to keep private. Several groups have been working on this, Chain recently had an announcement about this, and Blockstream has been working on this, put out a paper at FC17, and put out a working implementation as part of that Elements demo system.

I think this is a cool area and we’ll see where it goes. I think it’s very neat that you can just combine these approaches. Perhaps other things can be combined with it.

Further enhancements: ValueShuffle

So I have mentioned many times that CT + coinjoin is pretty interesting. The normal way to do coinjoin in practice today is that you have the participants either pick a server and the server mediates it and learns things about the correspondence between users, and that’s not good. It’s possible to have a protocol where all the participants in the join work with each other and nobody learns the input-output mapping. I knew it was possible when CT was created, but I didn’t work out the details. Fortunately, others have been working on this, and in particular the valueshuffle paper that works out how you can have a multi-party coinjoin with all the parties private from each other using CT.

Further enhancements: Efficiency

The area where I have been focusing the most on CT is efficiency. The costs are constant factors but they are big and it would be good to get them down. We have schemes right now that can make them 20% smaller or the same size but unconditionally sound. There’s more optimizations out there to be worked on. This is an area that Oleg Andreev at Chain has also been working on.

Patent status

So I should mention patents. I have patented many of the techniques and optimizations in confidential transactions with the explicit intention and very loudly stated goal of using these patents to prevent other people from patenting this stuff, and to commit to a patent nonaggression licensing scheme where anyone can use my patents as long as they commit not to sue each other into oblivion.

Blockstream has a very open patent policy that is purely defensive. It has been applauded by many groups, including the EFF. If it’s not good enough for somebody, let me know and we can figure out how to make it better. We can’t make the patent system go away, and it’s a risk to any deployment of any complex technology. I previously worked on royalty-free multimedia codecs. I am one of the authors of Vorbis, the Opus audio codec, and WebRTC, and anyone using Signal is using Opus. We used patents strategically in Opus to get other patents opened. We are trying to do the same thing with CT. I don’t think anyone needs to worry about patents in CT. However, if someone is worried, then I would be happy to work with them to make the situation better for everyone.

https://blockstream.com/about/patent_pledge/

https://blockstream.com/about/patent_faq/

https://defensivepatentlicense.org/license/

CT for bitcoin?

What about using this CT thing for bitcoin?

Well, you can use CT for bitcoin already with sidechains. And that’s what we’re doing with Liquid for example. You can build a system that trades bitcoin value and use CT to make it private inside that system; it’s just not private going in and out of that system. If you want to talk about using CT with bitcoin itself, the efficiencies are a big problem. There are some really cool tricks we can use to move around the costs in the system… in particular there is a notion that every spend creates a coin: it could be possible for miners to basically perform all the validations, and if you make an invalid CT proof with your coin then the miner could publish a proof that your CT proof was invalid and confiscate your coins for doing that. There could be some tradeoffs made so that not everyone has to bear the CT costs, and therefore make it cheaper.

Politics are a big hurdle– some people don’t want bitcoin to improve on the privacy aspects, and some people don’t want bitcoin to improve at all. But I think that privacy and the existence of CT and sidechains and so on will remove these arguments. If bitcoin should use CT, and competing systems use it, I don’t think it will take forever.

There are designs for soft-fork CT and deploying it in a backwards-compatible manner, but right now they have some severe limitations. In particular, if the chain has a reorg once coins have been moved from CT back to non-CT transactions, the transactions around that reorg won’t survive. Once you break the coins out of CT, you have to have a protocol rule to not spend the coins for something like 100 blocks. This is the same issue that extension block proposals have, and is a reason why I have not been too supportive of extension block proposals in the past. If it’s the only way to do it then maybe it’s the only viable way, but I’d really like to find something better– I haven’t yet, but there are many things in bitcoin that I have looked at and didn’t immediately know how to do better until later.

So the tech is still maturing for CT. If the reason for not deploying it right now is the 15x overhead, well, is 10x enough? With improvements we can get this down to lower amounts. As the technology improves, we could make this story better. I am happy to help other people experimenting with this. I think it’s premature to do CT on litecoin today, Charlie, but I would like to see this get more use of course.

Q&A

Why don’t I go to questions? I have slides on more details and I am happy to answer questions even in email.

Q: Can you talk about range proofs about why they are needed and what they do?

A: Right, so let me go to my details slides. I’m going to do the fast description of what CT is doing. First my slides talk about how you need a commitment scheme. This is a commitment where you say you have some secret text and a nonce, and I want to prove to you that I have committed to the secret text without showing you the secret text. We can do this with a hash function. I take a hash of the secret text and a nonce, give you the hash, and later I can show you the secret text and my nonce and you can verify it. That’s a commitment scheme. For CT, we need a commitment scheme with the following property: the commitment of A with its nonce, plus the commitment of B with its nonce, is equal to the commitment of A + B with a nonce of nonce A + nonce B. We need that property. We can’t get that with a normal hash function. So we introduce a thing called pedersen commitments. We use some properties of cyclic groups; ECC is one of these things. Whenever someone talks about ECC, if they start talking about actual numbers and doing arithmetic with the details behind the points, they’re really sort of wasting your time, because one of the key points of ECDSA and the discrete log problem is that it’s all based on abstract algebra, which is abstract, so the properties of algebra just generally work and you don’t need to worry about the details under the hood. A pedersen commitment is a commitment that has this additively homomorphic property, where you can basically take these commitments and add them up. Great, hooray, they can be added. The problem that arises is that the addition is cyclic: like adding an unsigned integer on a computer, it overflows. So if I make a transaction that overflows I could potentially create coins from nothing, like a negative overflowing value in one output and a 13 value in another, and it all adds up and nets out, so I would have created 10 coins out of nothing or something. We have to prove that all of our committed values are in a range, so you know that if you have a bunch of small values they will still fit within 256 bits or something. So the range proofs are a way to prevent this over/underflow problem. And so, almost all of the performance cost and size of CT is in proving that the value is small. So we have a scheme to prove that a value is either zero or another number. We do this with a ring signature, saying that this pubkey is a pubkey that corresponds to a zero commitment or a 1 commitment, and we just do this over and over again: we prove this value is 0 or 1, this value is 0 or 4, and we add them all up together, and we show that they are equal to the value we’re talking about, and that it must be in range if it’s built out of this OR. So a typical size for a range proof covering 32 bits is about 2.5 kilobytes. But we can recover 80% of that for a memo field. There’s an optimization: there’s an exponent field, and most bitcoin values are round numbers in base 10, so you can shift up your values by powers of 10 in order to have a smaller proof that covers a larger range. If you add some more zeroes, it covers 42,000 coins instead, for example. So that’s where the range proof stuff comes from.
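
Continuing the toy modular-arithmetic commitment from the earlier sketches (an illustration only, not real CT), this is the overflow problem the range proof exists to prevent: a wrapped-around negative output lets a 10-coin input apparently fund a 13-coin output while the balance check still passes.

```python
# Why CT needs range proofs (same toy commitment, illustration only).
P, G, H = 2**127 - 1, 5, 7

def commit(v, r):
    return (pow(G, v, P) * pow(H, r, P)) % P

NEG_3 = P - 1 - 3       # behaves like a commitment to -3, since G^(P-1) = 1 mod P

r1, r2 = 111, 222
out_big = commit(13, r1)       # 13 coins out ...
out_neg = commit(NEG_3, r2)    # ... plus an output that secretly wraps to "-3"

# A 10-coin input still "balances", yet 13 spendable coins now exist:
assert (out_big * out_neg) % P == commit(10, r1 + r2)

# The range proof closes this hole: the committed value is decomposed into
# committed digits, each digit is proven (via a ring signature) to be one of a
# few small values, and the digits are shown to sum to the committed amount,
# so nothing can wrap around.
```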

more links: https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=51m9s

Q: This might be a bit complicated, but can you talk some more about how you make the range proofs include arbitrary user-selected data, because that seems magical.

A: Yeah. I feel really clever for coming up with that. It’s probably one of the things I am most proud of. It’s actually really simple. I wish I had a whiteboard, but I don’t have time anyway. In a ring signature most of the values used in it are dummy values: they are random, the signer picks random values, puts them in, uses their private key to solve one value and then sends it on. My message is signed by key A, key B, key C or key D. The signature has 3 of the 4 as dummies, and the 4th is computed based on the others. A third-party observer must not be able to distinguish them from random, so we can replace all the randomness in the range proof with an encrypted message that the receiver can decode. The receiver doesn’t need this to be private because they know the value anyway. So we also use this so that we don’t have to communicate extra data explicitly to the receiver for that unraveling in this scheme.
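
Here is a rough sketch of just that indistinguishability idea, not the actual range-proof encoding: bytes produced by a keyed stream cipher look uniformly random to anyone without the key, so they can stand in for the dummy nonces while carrying a memo that only the holder of the shared secret can recover. The helper names below are hypothetical and use only the Python standard library.

```python
# Encrypted memo bytes are indistinguishable from random nonces without the key.
import hashlib, os

def keystream(secret: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(secret + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad))

shared_secret = os.urandom(32)      # e.g. derived via ECDH by sender and receiver
memo = b"invoice 42: thanks!"
masked = xor(memo, keystream(shared_secret, len(memo)))  # goes where nonces go

# Only someone who can re-derive the shared secret gets the memo back.
assert xor(masked, keystream(shared_secret, len(masked))) == memo
```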

A: Also, one of the cool things about this message size thing is that, one thing I found in bitcoin is that for proposals to be accepted by lots of people they kind of have to have different advantages for different factions. So some people might say “I don’t care if we can gain X” and then they get surprised about Y and excited… so the memo field might actually help this get adopted. For people that want a big memo field, preferential pricing on fees and limits might be another reason that people would want to use it too.

Q: What are your latest thoughts on mining centralization and how are you thinking about this?

A: Mining centralization has been a long-term challenge for bitcoin in general. There’s all kinds of market effects that result in mining centralization, including the fact that many of the mining projects have been scams, which scares away principal investors. We have known this would be a problem since 2012. Better privacy tech could reduce the effects of mining centralization. In the long run, mining is going to be boring, and it’s not even as good as electricity production. In the long run, this will drive mining to decentralization just to gain access to cheap power in the world. Everyone in the bitcoin industry should be looking at opportunities to make mining more decentralized, like making small investments to make mining more decentralized. The system really does depend on this. It has to be widely spread, even if not competently operated. It isn’t even all that competently operated today anyway.

Q: Some people make the argument that for real fungibility, privacy has to be mandatory. What do you think about that argument? If CT was introduced in bitcoin, would it be optional or mandatory?

A: I think that argument has a lot of merit. Just look at the zcash case. But I don’t think it’s really viable to make anything in bitcoin mandatory… maybe it is, maybe it isn’t. It depends also on the cost. The way that bitcoin transactions are costed out is a byproduct of the system. One way that CT could be deployed is that every transaction could be charged in terms of its weight, its impact on block limits, whether it has CT or not. I don’t know, I think the argument has a lot of merit, but I don’t see a way to foist it on people that don’t want it. This is an advantage that a system like Monero has right from the start that is hard to match.

Q: How excited are you about segwit activating on litecoin?

A: It’ll be interesting. You know, I think it’s cool that it gets used more, but I already knew it would get used somewhere– maybe I’m deluded. In litecoin there’s some interesting challenges with miners and there’s some large vested interest in making segwit have problems. The size and scale in litecoin, it will get over the problems, it might be bumpy, I’ll be happy to help out. I pointed out on litecoin’s subreddit the other day about potential turbulence and there’s some good things to know about and to mitigate. It would be cool to get segwit activated on litecoin. I am happy about it. You are going to have bitcoin developers working on litecoin eventually, because nobody really doing protocol development in bitcoin wants to do script enhancement without segwit because it makes it so much easier. The idea of trying to do script enhancement without it is just not interesting…

Q: What do you think is preventing or how would you accelerate mainstream adoption of digital currency like bitcoin?

A: I think we have a long way to go for all the on ramps and off ramps. Having the right message and narrative for what bitcoin does for people. If you look at a public that is living entirely off of credit, what does bitcoin do for them? None of the cryptocurrencies provide credit, so how are they going to do it? In the long run, we will have credit, but we don’t have it today. If you look at the pace of other really transformative technologies like the phone network or automobiles… they had long incubation periods. The internet went from kinda visible to super visible in 10 years, but it took decades for the internet to get to that point in the first place. Bitcoin is asking people to rethink how they handle money and to reboot many concepts. Sort of like how linux has impacted people as well– linux has just permeated the infrastructure of the internet, but Joe Schmoe is not aware of it even though linux is on his phone. It’s going to take time.

Thank you.

\ No newline at end of file +http://web.archive.org/web/20171115183423/https://www.youtube.com/watch?v=LHPYNZ8i1cU&feature=youtu.be&t=1m

https://twitter.com/kanzure/status/859604355917414400

Introduction

Thank you.

So as mentioned, I am going to be talking about confidential transactions today. I look at confidential transactions as a building block, a fundamental piece of technology; it’s not like an altcoin or something like that. It’s not a turnkey system, it’s some fundamental tech, and there are many different ways to use it and deploy it.

My interest in this technology is primarily driven by bitcoin, but the technology here could be applied to many other things, including things outside of the cryptocurrency space.

So I want to talk in a little bit of detail about what motivates me to work on this and why I think it’s interesting. What does CT do? What are the costs and benefits of a system like CT in bitcoin? I want to talk about extensions and related technology, and maybe also the prospect of using CT with bitcoin in the future. I also have some slides that go into how CT works in more technical detail, but I’m not sure how much interest there is in getting into the really deep details. I am seeing some thumbs up, so lots of details. We’ll see how the time works out on it too, because I want to leave some good time for questions and such. And if I start blabbering off in space and someone is confused, then please ask me to clarify.

Why confidential transactions?

CT is a tool for improving privacy and fungibility in a system like bitcoin. Why is this important? If you look at traditional systems, like banking systems, they provide a level of privacy in their systems. And when you usually use your bank account, your neighbors don’t know what you’re doing with your bank account. Your bank might know, but your neighbors don’t. Your counterparties know, other people generally don’t know. This is important to people, both for personal reasons as well as for commercial reasons. In a business, you don’t want your competition to know who is supplying your parts. You don’t want to necessarily share who is getting paid what. You don’t want your competition to have insight into your big sales or who your customers are and all that. There is a long list of commercial reasons as to why this kind of privacy is important.

Personally as well– if you think about it, in order for a person to speak up in the political sphere, to have an effect on the world, you need to spend some money to do it… so you basically have the case that your spending of money is intricately tied to your ability to have free speech in the world. If someone is able to surveil and monitor and control your use of money all the time, they are also controlling your ability to speak. This is also the opinion of the U.S. Supreme Court in Citizens United, and a lot of people on the more liberal end of the spectrum don’t like this conflation of money and free speech, but it cuts both ways. People should have a high degree of financial privacy, and they can’t have that if they are using a system that doesn’t provide that.

In bitcoin, we have this fundamental conflict where we have built this decentralized distributed network, and its security is based on this concept of public verification: you know bitcoin is correct because your own computer, running software that you or someone you trust has audited, has verified all of this history of it, and has verified that it all adds up. That all the signatures match, that all the values match and it’s all good to go. It uses conspicuous transparency in order to obtain security. Many people look at this and say ah, there’s a conflict, we can’t both have strong privacy and have this public verification needed for a decentralized system. But this apparent conflict is not real. Think about the idea of a digital signature: a scheme that lets you sign a message and show that a person who knows the private key signed the message. We use this every day in bitcoin; anyone can verify a digital signature without ever knowing your private key, but they can verify that you knew it. So a digital signature shows that you can build a system that can be publicly verified without giving up your secrets. This shows that it is possible.
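
To make the “verify without learning the secret” idea concrete, here is a minimal Python sketch of a Schnorr-style signature over a deliberately tiny, insecure group. The parameters and helper names are illustrative only; bitcoin itself uses ECDSA over secp256k1, not this toy construction.

```python
import hashlib
import random

# Toy Schnorr-style signature: the verifier checks a signature using only
# public information and never learns the private key. Tiny insecure
# parameters, purely for illustration.
p, q, g = 23, 11, 2            # p = 2q + 1; g generates the order-q subgroup

def challenge(R, msg):
    data = f"{R}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q)     # private key: never leaves the signer
P = pow(g, x, p)               # public key

def sign(msg):
    k = random.randrange(1, q)
    R = pow(g, k, p)
    s = (k + challenge(R, msg) * x) % q
    return R, s

def verify(msg, R, s):
    # Uses only public data (g, P, R, s, msg): check g^s == R * P^e (mod p)
    return pow(g, s, p) == (R * pow(P, challenge(R, msg), p)) % p

R, s = sign("hello")
assert verify("hello", R, s)
```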

This whole idea that a system of money needs some kind of privacy– that’s not a new one. It’s discussed in the bitcoin whitepaper, where it says that you can basically use pseudonyms, one-use addresses, in order to have privacy in the system. The challenge that we found is that this is very fragile. In practice, the pseudonymity in bitcoin doesn’t actually provide a whole lot of privacy. People reuse addresses and then it’s really easy to trace the transaction graph through the history. The way that the earliest bitcoin software was written, you didn’t usually pay to an address directly, you would provide an IP address, then your node would go to the other node and get an address from it, and then every payment would use a new fresh address on the blockchain. That was a much more private way of using things, but it required that to receive payments you had to run a node exposed to the internet, not running behind NAT. So not very usable. And that whole protocol in fact was completely unauthenticated, so it was very vulnerable to man-in-the-middle attacks as well. That wasn’t a good way to transact, and as a result people started using bitcoin addresses for everything, and the address model ends up causing lots of reuse and things that really degrade privacy.

When I have gone out and talked to institutions outside of bitcoin in the wider world and talked about using bitcoin itself as well as using blockchain sort-of technology adopted from bitcoin, one of the first concerns that many boring institutions like banks have brought up is this lack of privacy, from all these commercial angles: not necessarily protecting personal rights, but keeping your business deals secret is sort of the bread and butter of commerce. So the transparency that bitcoin provides is often touted as a very powerful feature, so that everyone can see the blockchain and it’s transparent– but transparency is a feature that cuts both ways. Having an open blockchain doesn’t necessarily mean your activities are transparent. You have an open blockchain, and everyone can see the transactions, but it doesn’t mean they have any wisdom. Right: data is not knowledge, and knowledge is not wisdom. And so this very open nature of bitcoin, unless people take action to turn this openness into transparency, isn’t functionally transparency, and the lack of privacy in bitcoin exacerbates existing power imbalances. If you’re just some nobody and someone wants to block your transactions and mess around with you, you’re made more vulnerable by the fact that your transactions are visible. And, some people are concerned about more harmful users, like users that want to use cryptocurrencies for immoral activities– it might be argued that their lack of privacy is a feature. But it turns out that people doing criminal activities have money to blow, and they can afford to purchase privacy via expensive means regardless. They don’t look and say oh, bitcoin isn’t private, I guess I’ll just have to make all of my criminal activities visible; no, that’s not how it works. Certainly less privacy in bitcoin can make investigations easier at times, but it doesn’t fundamentally block criminal activity. So in my view, when we don’t have good privacy in a system like bitcoin, the criminal activity still occurs; it’s everyone else, who can’t afford to get privacy, that suffers from the lack.

And a key point that I don’t have in the slide here that I should mention is that privacy is integrally tied to fungibility. The idea behind fungibility is that one coin is equivalent to every other coin, and this is an essential property for any money. The reason why it is essential is so that when you get paid, you know you’ve been paid. If instead you were being paid in a highly non-fungible asset like a piece of art or whatever, you might receive the art and think you were paid and then later find out the art was a forgery or you find out it’s stolen and you have to give it back. And so, that art doesn’t work as well as money, one piece of art isn’t the same as another piece of art. The more we can make our money-like asset behave in a fungible way, the better it is. And this is a well-recognized legal and social concept elsewhere. The US dollar is not physically fungible… one dollar is very different from another dollar, in fact they are serialized and have serial numbers, but for the most part as a society we are willfully blind to the non-fungibility of the dollar bill, because if we say hey this dollar is counterfeit but it’s unidentifiably counterfeit so it’s no good, then the dollar breaks down as being a useful money. And so we ignore those things, and in cryptocurrency we need to do a degree of that as well, if you want cryptocurrency to function as money.

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=10m24s

There have been many past proposals to improve privacy in the bitcoin space. Many people have recognized there’s a problem here. People have done things like coinjoin and coinswap, proposals which are completely compatible with the existing bitcoin system and use some smart contracting protocol trickery to increase user privacy. People have also used things like centralized servers where you trust a third party: you give them the money, they make the payments for you, and that can at times improve your privacy, except at the central server, which usually destroys your privacy along the way. There have been cryptographic proposals; there’s a proposal called zerocoin, which is distinct from the zerocash stuff that exists today, which had poor scalability but interesting privacy properties. There’s an interesting not-well-known proposal called one-way aggregatable signatures (OWAS) which showed up on bitcointalk from an anonymous author who vanished (follow-up). It’s a pretty interesting approach. There are the traceable ring signatures that were used in bytecoin and monero. And more recently there’s the zerocash system, which is now showing up as an altcoin. The compatible things– like coinjoin– have mostly suffered from not having as much privacy as you would expect, due to transaction amount tracing. The idea behind coinjoin is that multiple parties come together and they jointly author a single transaction that spends all of their money and pays out to a set of addresses, and by doing that, an observer can’t determine which outputs match with which inputs, so the users gain privacy from doing this. But if all the users are putting in and taking out different amounts then you can easily unravel the coinjoin, and the requirement to make the amounts match to prevent that unraveling would make coinjoin hard to use. And then, prior to confidential transactions as a proposal, the cryptographic solutions that people proposed have broken the pruning process in bitcoin. They really have harmed the scalability of the system, causing the state to balloon up forever. Today you can run a bitcoin node with something like 2 GB of space on a system, and that’s all it needs in order to validate new incoming blocks, and that’s with all the validation and all the rules, no trusting third parties. And if not for pruning, then that number would be 120 gigabytes and rapidly growing. Many of these cryptographic privacy tools have basically broken this pruning and have hurt scaling… the exception being the more recent tumblebit proposal. Tumblebit is a proposal like coinswap that allows two users to swap ownership of coins in a secure way without a network observer being able to tell that they did this. Tumblebit improves the privacy of that by making the users themselves not able to tell who the other users were, and tumblebit doesn’t break scalability, but because it only does these swaps and a few other things, it’s not as flexible as many of the other options.

Confidential transactions

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=13m37s

So what is confidential transactions, what does it do? The prior work on improving privacy in cryptocurrency systems has been entirely focused, or almost entirely focused, on making the transaction graph private. It’s addressing the linkage between transactions. And this is kind of odd, because if you think about internet protocols, the transaction graph is like metadata: who’s talking to whom. And metadata is absolutely information that needs to be private. But normally when we communicate on the internet, we don’t worry so much about making our metadata private, unless we use something like tor. We normally use SSL, which makes the content of our messages private. Well, the content of a transaction is the destination and the amounts involved. That’s really the transaction. And the prior privacy schemes, most of them didn’t make the content private, which is a little surprising. If you use one-use addresses, the destination is inherently pretty private, but the amounts are not private. In fact, in many commercial contexts at least, the amount is even more important to keep private. Knowing that Alice is paying Bob is somewhat interesting, but knowing that Alice is paying Bob $10,000 is much more interesting. The amounts are pretty important. And one thing on my slides a couple back that I didn’t mention is that one of the things we worry about in bitcoin in particular, as we watch the market price rocket up, is the security implications of owning bitcoin. If you own a bunch of coins, you generally don’t want the world to know about it. And if you go into a Starbucks and try to pay for something with bitcoin, you make a transaction and your transaction moves 1k bitcoin at a time, you have to worry that maybe the barista is out back calling his buddies to mug you and take your phone or something. So making the amounts private is important.

That’s what confidential transactions does. It makes the amounts private. There are many benefits to this. The way that this works is that there’s normally in a transaction an 8-byte value that indicates the amount of the output, and that 8-byte value is replaced with this 33 byte commitment which is functionally like a cryptographic hash. Instead of putting the amount in the transaction, you put this hash of the amount in the transaction. And these commitments have a special property that, the hashing process preserves addition which means that if you take the hash of one amount and the hash of another amount and add them together, you get the hash of their sum. This addition preserving property is what allows the network to still verify that the inputs and the outputs still match up. This is why CT can be verified by a public network even though the information is private.
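
To make that addition-preserving property concrete, here is a minimal sketch in Python using a toy multiplicative group rather than the secp256k1 curve points the real implementation uses; the modulus, generators and function names are illustrative only, not Bitcoin Core or libsecp256k1 code.

```python
# A minimal, purely illustrative Pedersen-style commitment: commit to an
# amount with a blinding factor, and check that "adding" two commitments
# (multiplying them in this toy group) commits to the sum of the amounts.
# Toy parameters, not the real CT construction over secp256k1.

p = 2**255 - 19          # a large prime modulus (illustrative choice)
g, h = 2, 3              # toy generators for the value and the blinding factor

def commit(value, blind):
    """C = g^value * h^blind (mod p)."""
    return (pow(g, value, p) * pow(h, blind, p)) % p

c1 = commit(5, 1111)     # hide the amount 5
c2 = commit(7, 2222)     # hide the amount 7

# Homomorphism: the combination of the two commitments is a commitment to
# 5 + 7 under blinding factor 1111 + 2222, so a verifier can check that
# inputs and outputs balance without learning any individual amount.
assert (c1 * c2) % p == commit(5 + 7, 1111 + 2222)
```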

Using this kind of scheme for bitcoin was proposed by Adam Back in 2013 on bitcointalk. I assume these slides will be made available online; there are links on all of these things too. And Adam’s post is really hard to understand, even for me. Just knowing that he said you could do it, I went and reinvented the scheme and also came up with a much more efficient construction than he had proposed, and in fact the constructions I used for CT were more efficient by a large factor than any system in the literature previously, and that’s required in order to possibly begin to think about using these things in a large scale public system such as bitcoin. I’ll talk a little bit more about that.

Costs

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=17m19s

  • No new cryptographic assumptions: CT is secure if secp256k1 ECC signatures are secure (or even better).

  • ~66% increase in UTXO size (addition of 33 byte ‘CT amounts’)

  • 15x-20x increase in bandwidth for transactions

  • 30x-60x increase in validation costs. Not an issue for high end hardware especially due to validation caching, but a problem for low end hardware.

  • Addresses which are ~twice the length.

  • Moderate code complexity (+1.4kloc crypto code)

So on that subject of costs, doing this stuff with the hidden values isn’t cheap in this system. The end result is that on the plus side we can do it with no new cryptographic assumptions, by which I mean that CT is completely secure as long as our ECC signatures are completely secure. And in fact we can do even better, so that CT is completely secure against inflation even if ECC is completely broken, and I’ll talk a bit more about that. So that’s the good part.

The downside… so it’s roughly on the order of a 66% increase in the UTXO set size. That’s manageable; that’s about 2 GB right now and maybe doubling it, that’s reasonable. But it also results in something on the order of a 15-20x increase in the amount of bandwidth required for transactions. Now, CT is completely prunable so you don’t have to keep that data around, but you do have to keep it temporarily.

It also results in roughly a 30-60x increase in validation cost compared to ordinary signature validation. So if you can assume that all your nodes are high-end hardware and they aren’t going to validate old history, taking a 30x increase in validation cost is okay because there is extensive caching in bitcoin node software.

Also it results in addresses that are 2x the length, because CT needs to include an additional pubkey in the address which is used to encrypt a message to the receiver to tell them how much you’re paying them, or else they wouldn’t know.

There’s a complexity hit, my implementation adds about 1400 lines of code, this is pretty small compared to the rest of the crypto code in bitcoin, but it is a cost. It’s as small as it is because I was able to leverage existing infrastructure in bitcoin code.

Benefits

What benefits do we get out of this? Well, the amounts are private, and that has some direct effects. In many cases you need fewer transactions or outputs in order to preserve privacy. Today if you were to do something like decide to pay all your staff in coins and didn’t want to disclose how much your staff was paid, you would probably get your staff to provide you with lots of addresses per staff member and then you would pay the same amount to different addresses, and the people who get multiple transactions would get different total payouts. That would be reasonably private, until people go and spend those outputs in groups and reveal some of it, but it would be very inefficient because it would require larger transactions to execute that scheme. So I think making outputs private really does solve many of the commercial problems. That’s the big win directly.

On the privacy that CT provides, it’s controllable by the user. The sender obviously knows what he sent, the receiver knows what he received, but additionally the sender or receiver can show to any third party all the details of the transaction to show and prove that a payment happened, or publish it to the world. My ability to prove a payment is not reduced by CT. Moreover, it’s straightforward in the design of CT to share a per wallet master key with an auditor and the auditor could see all the information. You control that.

One of the side effects of the design is that there’s this, I mentioned this 15-20x increase in transaction bandwidth because of larger transactions, but 80% of that increase could be used to communicate a private memo from the sender to the receiver. This private memo can contain structured or unstructured arbitrary data from one side to the other. In the typical transaction there would be several kilobytes of memo field. Unfortunately we can’t eliminate this data overhead, but we can get dual use out of it– it’s part of the cryptographic overhead for CT, but it also lets you send data.

One of the other benefits of using CT in a system is that it’s quite straightforward to do private solvency proofs over CT (Provisions 2015 paper). If you imagine that bitcoin was using CT today, you would be able to construct a proof that says I own at least 1000 coins out of the set of all coins that exist. If everyone was using CT, the privacy set of that would be all the coins, and you could prove this to someone without revealing which coins are yours, and even without the amount; you could reveal just a threshold. You could build a system where, in a bank-like situation or an exchange, you could show that the users’ account balances all add up to the sum of the assets without showing what that sum is. So an exchange like Coinbase, for example, could show that the users’ account balances add up to the sum of the coins that Coinbase holds, without revealing the total number of users or total amounts. It’s possible to do this with bitcoin without CT by layering CT on top of it, but you don’t get a full privacy set, because if you’re the only person using that privacy proof system then you would be easily discovered, so it doesn’t add much privacy over the full set.

Benefits: Metadata privacy

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=23m10s

So, I said that CT provides value privacy, but in fact it also indirectly provides metadata graph privacy. The reason for this is that the existing compatible non-cryptographic proposals like coinjoin for graph privacy are primarily broken by their lack of amount privacy; their usability is also broken by their lack of amount privacy. Coinjoin and CT can be combined so that you can have multiple people that collaborate together to make a transaction, nobody learns any amounts, and the observer doesn’t learn the matchup between inputs and outputs. You lose the coinjoin problem where input and output values have to match. So if you do this scheme, you can get metadata privacy and amount privacy, and it preserves the properties of CT alone. This is true for coinswap and tumblebit and many other techniques.

Benefits: Scalability

Although CT has significant costs, I would actually say that one of CT’s advantages is scalability. The costs involved in CT are constant factors. Pretty big constant factors, but they are constant; they don’t change the asymptotic scaling of the system. All of the overhead that CT creates, or almost all of the overhead, is prunable. So it will blow up the history of the chain, but the state required to run a node and continue verifying things doesn’t increase that much. Other schemes for privacy result in perpetually growing state where you need to keep perpetual history on every node. CT + coinjoin and technologies based on it, like mimblewimble (see also andytoshi’s talk), are the only strong privacy approaches that don’t break scalability.

Benefits: Decentralization offset

Another benefit you can see from looking at something like CT is something I would call decentralization offset. We talked before about how fungibility is important, but in systems like bitcoin there is a risk that miners or other ecosystem players might discriminate against coins or be forced by third parties to participate in discrimination. If they did that, and did it enough, it would degrade the value of the system, although it might be profitable for the participating parties– perhaps they get paid by regulators or something to discriminate against coins. There was some pre-publication work in this MIT chainanchor paper, since removed from their paper, where they suggested paying miners to mine only approved transactions. My response is to not be angry about this; it’s a vulnerability in the system, let’s fix it. If the system is more fungible, then issues related to decentralization limits in other parts of the system are less of an issue, because all the coins look the same and therefore you can’t discriminate against coins. So that’s an advantage I think.

CT soundness

I mentioned before that CT is at least as strong as ECC security but actually it’s potentially stronger. The scheme in our original CT publication provides what’s called unconditional privacy but only computational soundness. This means that if ECC is broken the privacy is not compromised, but someone could inflate the currency. What unconditional privacy means is that even if you were to get an infinitely powerful computer that could solve any problem in one operation, you still could not remove the amount privacy in CT. The privacy part of it cannot be broken. But it only provides computational soundness, which means that if ECC is broken or if someone has an infinitely powerful computer, they can basically make CT proofs that are false and create inflation. And it’s fundamentally impossible for any scheme to achieve both unconditional privacy and unconditional soundness. It could miss both, it could have one but not the other, but it cannot have both.
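
As a rough illustration of that trade-off (a sketch only, using tiny insecure numbers rather than the actual CT construction): in a Pedersen-style commitment a valid opening exists for every possible value, which is why no amount of computing power can extract the amount; but anyone who knew the discrete log relating the two generators could open one commitment to two different values, which is why soundness is only computational.

```python
# Toy sketch of unconditional hiding vs. computational binding in a
# Pedersen-style commitment. Tiny insecure parameters, illustration only.

p, q = 23, 11                # p = 2q + 1, a tiny safe-prime group
g = 2                        # generator of the order-q subgroup
x = 4                        # discrete log relating the generators (secret!)
h = pow(g, x, p)             # in real CT nobody is supposed to know x

def commit(v, r):
    return (pow(g, v, p) * pow(h, r, p)) % p

v, r = 3, 5
C = commit(v, r)

# Binding is only computational: with knowledge of x, the same C can be
# opened to a completely different value v2.
v2 = 10
r2 = (r + (v - v2) * pow(x, -1, q)) % q      # pow(x, -1, q) needs Python 3.8+
assert commit(v2, r2) == C

# Hiding is unconditional: because such an r2 exists for every candidate
# value, even an unbounded attacker looking only at C learns nothing about v.
```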

For use in bitcoin, it probably makes sense to have a system that has computational privacy where if the crypto is broken you lose your privacy, but unconditional soundness where you know for absolute certainty that the coins cannot be inflated by the system. I am not too concerned about ECC in bitcoin, and even if it were to become broken, we have bigger problems to deal with. But, you know, for a system you want people to adopt, you have to answer FUD. Even if you could possibly have unending inflation in the system, hidden away, that would be a pretty good FUD point and I think it would be important for bitcoin to fix it.

When we first worked on CT, we knew that we could solve the soundness and make it unconditionally sound, but we thought it would make it significantly less efficient. It turns out that we found a way to make it equally efficient while preserving the unconditional soundness. This is something that we have been working on for a while, as has Oleg Andreev, for unconditional soundness for CT. We have some implementations that haven’t been published yet that do this.

CT vs Zcash

Now let me talk some comparisons of CT with some other technologies. And I’m talking about more the technology than the altcoin in cases where this stuff is implemented. Zcash uses some very modern cryptography to do some really cool stuff, and zcash directly hides the amount and the transaction metadata and hides it completely. Among all the private transactions in zcash, you can’t tell what the linkages are between any of them. That’s even stronger than coinjoin can provide. It’s basically perfect from that perspective, but you always have to be careful when only thinking about a little perspective at a time. Zcash isn’t unconditionally sound and cannot be made unconditionally sound with current technology, not even close. In fact, zcash requires trusted setup which means that a number of trusted parties have to get together, and if they cheat then they can break the crypto and make unbounded undetectable inflation. If there’s a crypto break, or a trusted setup flaw, then it’s really bad news. They had a ritual to increase trust in the trusted setup, but they have to redo this procedure to upgrade the crypto over time. So it’s a vulnerability.

Zcash involves a growing accumulator so there’s basically a spent coins list that all nodes must have. That spent coins list will grow forever. Zcash is not just using one new piece of crypto, it’s kind of a stack of new crypto on top of new crypto. I spoke with Dan Boneh, the author of the pairing cryptography that zcash is based on, and I said well what do you think about the security of pairing and he said oh yeah it’s great, then I asked well what do you think about the security of SNARKS which is the next layer in the zcash stack and he sort of shrugged and went “ehh I dunno, it’s new”. So, not to say it isn’t secure, but it’s really new, and that’s a big hurdle.

There have also been recent advancements in breaking the kind of crypto that is used in zcash that will eventually require that they upgrade to a larger curve at a minimum. Right now they have something on the order of 80 or 90 bits of security which isn’t really considered all that strong. Particularly with those upgrades, their verification speed is on par with CT, it would be maybe 25% in the future. Verification is similar but a bit slower.

The real killer for zcash in practice is that the signing speed is horribly slow; we’re talking about minutes to sign a private transaction. Zcash couldn’t plausibly make private transactions mandatory because of that slowness and UX, and as a result it’s optional, and as a result very few of the transactions in the zcash blockchain are using the private transaction feature. If you look at the raw numbers, it’s 24%, but that number is misleading because miners are required to pay to a private address, but most miners are pools and the pools immediately unblind those coins, and if you separate out the mining load then maybe it’s on the order of 4% of zcash transactions that are private. As a result, the anonymity set that this “perfect” anonymity system is achieving isn’t that good. I think the idea is cool but it’s not the kind of proposal I would want to take to bitcoin today.

CT vs Monero

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=32m22s

Monero is another kind of altcoin that uses a privacy technique that was first in bytecoin, which is an altcoin with probably the scammiest launch of anything I have ever seen. But monero is the reboot of bytecoin that removed the ridiculous premine and 5 years of faked history and obfuscated mining code and who knows what else. What was really cool about bytecoin was that it had cool crypto in it that hadn’t been discussed. The idea behind bytecoin was that it used a ring signature, and a ring signature lets you say I am spending one of these 4 coins, I won’t tell you which one, but I will prove to you that it hadn’t been spent before, whichever one it was. This hides the graph. The privacy of bytecoin and monero originally was really fragile and a vast majority of transactions were deanonymizable because of amount non-privacy, which made things visible in the system.

More modern monero as of a few months ago is using CT, they have adapted ring signatures with CT for something called ringCT and it’s sort of like how CT can be combined with coinjoin.

So this system has the benefits of CT but it also has the disadvantages of the ring signature and there’s a forever-growing spent coins list. Monero today isn’t unconditionally sound, but it could be, just like CT it could be upgraded to unconditional soundness. The crypto assumptions are the same as CT but they use the ed25519 curve but otherwise it’s the same crypto assumptions.

The other cryptographically-private altcoin that people talk about is Dash… but it’s not cryptographically private at all. I had a slide about this that was just “Dash LOL”. It’s snakeoil. I’m beside myself about it, personally. What they have is a system like coinjoin, they nominate nodes based on proof of stake to be coinjoin masters, and then they have done this insecurely many times in the past I have no idea if the current version is secure. It’s not on the same level as zcash or monero maybe it’s better than doing nothing I don’t know. LOL, right?

Other risks

There are other risks with this technology. I guess I was talking about one of them right now. It’s difficult to distinguish snakeoil from real stuff and vet claims. Maybe some of the stuff I said today was crap, like maybe someone will claim that I said CT is unconditionally private (which is something I said) but maybe because of the way it gets used it’s not actually unconditionally private…. right, like it’s hard to work out this stuff and figure out which complaints are legitimate, not too many experts, new area, many people that try to sound like experts, etc.

A big risk is that if you look at any of these cryptosystems, the consensus part of it is itself a cryptosystem, just like RSA or AES or whatever. The whole thing has to work exactly right; if one part is wrong then it doesn’t give the properties you think, and if you get it wrong, everything breaks. Complexity is the enemy of integrity; it’s hard to verify, and this makes the system more complex. I’m currently aware of three and a half devastating failures in privacy improvement attempts… There’s an altcoin based on the original zerocoin proposal, where they had an inflation incident: more or less a typo made it possible for people to mint coins out of nothing. They let their system keep running, the coins inflated, then they said well OK, now there are more coins. The loss was effectively socialized. Fortunately, because this was caught, due to some of the privacy limitations in the approach used to exploit it, the loss was found to be sort of small and it was possible to socialize it; but if the attacker had printed billions and billions of coins, then that might not have been salvageable.

There’s a system called Shadowcash which is a clone of the bytecoin-like monero-like ring signatures with no CT based on the bitcoin code base. And, it had no privacy at all. I don’t know if the person building it was a complete fool or utter genius, but they managed to make a really subtle mistake that made it have no privacy.

The upgrade to monero called ringCT, which is now deployed: the original version was unsound and could cause unbounded inflation due to basically a protocol design error. When they created ringCT, they didn’t copy the design of CT, they reinvented it again (I guess I’m guilty of that too), but anyway they made the error while doing so, and it made it through Ledger journal peer review without being detected. Fortunately it was detected by the Monero team as well as someone at my company, Blockstream, before it was implemented in the system, and they fixed it. As far as anyone knows, the system is currently sound, and that’s my “half” one.

There’s another system that is non-public but it will be public in a couple months that was a total break in one of these kinds of things. It isn’t a concern now. But we have to learn from these mistakes as time goes on. (maybe this one)

I don’t really think that these privacy systems are really much harder than normal consensus work in cryptocurrencies, but the failures are severe and really hard or impossible to recover from, and it’s a place where people are doing something interesting, it’s not just the hot air stuff in altcoins, this cryptographic privacy stuff is real.

Further enhancements: Mimblewimble

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=38m25s

http://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2016-11-21-mimblewimble/

I would like to talk about where we could go from CT today. Maybe I should have added a slide here to say where we currently are. I have published a high-performance implementation of CT and it’s part of this demo called Elements alpha sidechain. It’s used in our Liquid product, too. It’s out there, it’s available, it’s described, it’s been included in academic publications. It’s real, it’s still being improved. Well where can we go moving forward?

CT is pretty powerful when combined with coinjoin. But coinjoin is interactive: the parties have to be online and communicate with each other. Coinjoin can be made non-interactive, to allow miners to aggregate transactions to improve scalability and privacy. Some of the properties of this aggregation make it possible to sync a node without looking at history, only looking at the UTXO set, but with the same security as if you had inspected the entire history, without having to transfer the entire history. The downside is that the state size for the verifying node is massively increased. The end result is that if bitcoin had been mimblewimble from day one, the equivalent of the UTXO set size would be the size of the blockchain, but you wouldn’t need the blockchain to sync a node. The asymptotics are good, the constant factors stink, and people are experimenting with this and enhancing this.

Further enhancements: Multi-asset

https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=40m20s

paper: https://blockstream.com/bitcoin17-final41.pdf

CT is single-asset, like bitcoin. But it’s possible to extend CT so that many assets can be kept in one system– and in doing so, you provide privacy for the assets being traded. For a low volume asset, just knowing that it’s being traded at all is the really important thing that you need to keep private. Several groups have been working on this: Chain recently had an announcement about this, and Blockstream has been working on this and put out a paper at FC17 and a working implementation, also part of that Elements demo system.

I think this is a cool area and we’ll see where it goes. I think it’s very neat that you can just combine these approaches. Perhaps other things can be combined with it.

Further enhancements: ValueShuffle

So I have mentioned many times that CT + coinjoin is pretty interesting. The normal way to do coinjoin in practice today is that the participants pick a server, and the server mediates it and learns things about the correspondence between users, and that’s not good. It’s possible to have a protocol where all the participants in the join work with each other and nobody learns the input-output mapping. I knew it was possible when CT was created, but I didn’t work out the details. Fortunately, others have been working on this, and in particular the valueshuffle paper works out how you can have a multi-party coinjoin with all the parties private from each other using CT.

Further enhancements: Efficiency

The area where I have been focusing the most on CT is efficiency. The costs are constant factors but they are big and it would be good to get them down. We have schemes right now that can make them 20% smaller or the same size but unconditionally sound. There’s more optimizations out there to be worked on. This is an area that Oleg Andreev at Chain has also been working on.

Patent status

So I should mention patents. I have patented many of the techniques and optimizations in confidential transactions with the explicit intention and very loudly stated goal of using these patents to prevent other people from patenting this stuff, and to commit to a patent nonaggression licensing scheme where anyone can use my patents as long as they commit not to sue each other into oblivion.

Blockstream has a very open patent policy that is purely defensive. It has been applauded by many groups, including the EFF. If it’s not good enough for somebody, let me know and we can figure out how to make it better. We can’t make the patent system go away, and it’s a risk to any deployment of any complex technology. I previously worked on royalty-free multimedia codecs. I am one of the authors of Vorbis, the Opus audio codec, and WebRTC; anyone using Signal, for example, is using Opus. We used patents strategically in Opus to get other patents opened, and we’re trying to do the same thing with CT. I don’t think anyone needs to worry about patents in CT. However, if someone is worried, then I would be happy to work with them to make the situation better for everyone.

https://blockstream.com/about/patent_pledge/

https://blockstream.com/about/patent_faq/

https://defensivepatentlicense.org/license/

CT for bitcoin?

What about using this CT thing for bitcoin?

Well, you can use CT for bitcoin already with sidechains. And that’s what we’re doing with Liquid, for example. You can build a system that trades bitcoin value and use CT to make it private inside that system; it’s just not private going in and out of that system. If you want to talk about using CT with bitcoin itself, the efficiency costs are a big problem. There are some really cool tricks we can use to move around the costs in the system… in particular, there is a notion that, since every spend creates a coin, it could be possible for miners to basically perform all the validations, and if you make an invalid CT proof with your coin then a miner could publish a proof that your CT proof was invalid and then confiscate your coins for doing that. There could be some tradeoffs made so that not everyone has to pay the CT costs, and therefore make it cheaper.

Politics are a big hurdle– some people don’t want bitcoin to improve on the privacy aspects, and some people don’t want bitcoin to improve at all. But I think that privacy and the existence of CT and sidechains and so on will remove these arguments. If bitcoin should use CT, and competing systems use it, I don’t think it will take forever.

There are designs for soft-fork CT and deploying it in a backwards-compatible manner, but right now they have some severe limitations; in particular, if the chain has a reorg once coins have been moved from CT back to non-CT transactions, the transactions around that reorg won’t survive. Once you break the coins out of CT, you have to have a protocol rule to not spend the coins for something like 100 blocks. This is the same issue that extension block proposals have, and is a reason why I have not been too supportive of extension block proposals in the past. If it’s the only way to do it then maybe it’s the only viable way, but I’d really like to find something better– haven’t yet, but there are many things in bitcoin that I have looked at and didn’t immediately know how to do better until later.

So the tech is still maturing for CT. If the reason for it not being deployed right now is the 15x overhead, well, is 10x enough? With improvements we can get this down to lower amounts. As the technology improves, we could make this story better. I am happy to help other people experimenting with this. I think it’s premature to do CT on litecoin today, Charlie, but I would like to see this get more use of course.

Q&A

Why don’t I go to questions? I have slides with more details and I am happy to answer questions, even by email.

Q: Can you talk about range proofs about why they are needed and what they do?

A: Right, so let me go to my details slides. I’m going to do the fast description of what CT is doing. First, my slides talk about how you need a commitment scheme. This is a commitment where you say: I have some secret text and a nonce, and I want to prove to you that I have committed to the secret text without showing you the secret text. We can do this with a hash function. Take a hash of the secret text, give you the hash, and later I can show you the secret text and my nonce and you can verify it. That’s a commitment scheme. For CT, we need a commitment scheme with the following property: the commitment of A with its nonce, plus the commitment of B with its nonce, is equal to the commitment of A + B with a nonce of nonce A + nonce B. We need that property. We can’t get that with a normal hash function. So we introduce a thing called pedersen commitments. We reuse some properties of cyclic groups; ECC is one of these things. Whenever someone talks about ECC, if they start talking about actual numbers and doing arithmetic with the details behind the points, they’re really sort of wasting your time, because one of the key points of ECDSA and the discrete log problem is that it’s all based on abstract algebra, which is abstract, so the properties of algebra just generally work and you don’t need to worry about the details under the hood. A pedersen commitment is a commitment that has this additively homomorphic property, where you can basically take these commitments and add them up. Great, hooray, they can be added. The problem that arises is that the addition is cyclic, like adding an unsigned integer on a computer: it overflows. So if I make a transaction that overflows I could potentially create coins from nothing, like a negative overflowing value in one output and a 13 value in another, and it all adds up to 0 and nets out, so I would have created 10 coins out of nothing or something. We have to prove that all of our committed values are in a range, so you know that if you have a bunch of small values they will still fit within 256 bits or something. So the range proofs are a way to prevent this over/underflow problem. And almost all of the performance cost and size of CT is in proving that the value is small. So we have a scheme to prove that a value is either zero or another number; we do this with a ring signature and say that this pubkey is a pubkey that corresponds to a zero commitment or a 1 commitment, and we just do this over and over again: we prove this value is 0 or 1, this value is 0 or 4, and we add them all up together, and we show that they are equal to the value we’re talking about, and that it must be in range if it’s built out of this OR. So a typical size for a range proof covering 32 bits is about 2. kilobytes. But we can recover 80% of that for a memo field. There’s an optimization where– there’s an exponent field; most bitcoin values are round numbers in base 10, so you can shift up your values by multiples of 10 in order to have a smaller proof that covers a larger range. If you add some more zeroes, it covers 42,000 coins instead, for example. So that’s where the range proof stuff comes from.
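
As a rough sketch of the digit-decomposition idea described in the answer above (a toy illustration only; the real range proof proves each digit with a ring signature over secp256k1, not with these toy numbers):

```python
import random

# Toy illustration of the range-proof decomposition: commit to each bit of
# the amount scaled by its place value, and check that the commitments add
# up to a commitment to the whole amount. Because each committed digit is
# either 0 or 2**i, the total cannot overflow and mint coins from nothing.
# Toy parameters, not the actual CT construction.

p = 2**255 - 19
g, h = 2, 3

def commit(v, r):
    return (pow(g, v, p) * pow(h, r, p)) % p

value = 0b101101                       # 45, the amount being hidden (6 bits)
blinds = [random.randrange(1, 10**6) for _ in range(6)]

digit_commitments = [
    commit(((value >> i) & 1) << i, blinds[i])    # digit i is 0 or 2**i
    for i in range(6)
]

# Combining the digit commitments yields a commitment to the full value
# under the sum of the blinding factors, so the digits provably reassemble
# into the committed amount, and the amount is provably below 2**6.
combined = 1
for c in digit_commitments:
    combined = (combined * c) % p
assert combined == commit(value, sum(blinds))
```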

more links: https://www.youtube.com/watch?v=LHPYNZ8i1cU&t=51m9s

Q: This might be a bit complicated, but can you talk some more about how you make the range proofs include arbitrary user-selected data, because that seems magical.

A: Yeah. I feel really clever for coming up with that. It’s probably one of the things I am most proud of. It’s actually really simple. I wish I had a whiteboard, but I don’t have time anyway. In a ring signature, most of the values used in it are dummy values: they are random, the signer picks random values, puts them in, and uses their private key to solve one value and then sends it on. My message is signed by key A, key B, key C or key D. The signature contains the random values for 3 of the 4 dummies, and the 4th is computed based on the others. A third-party observer must not be able to distinguish them from random, so we can replace all the randomness in the range proof with an encrypted message that the receiver can decode. This doesn’t need to be private from the receiver because they know the value anyway. We also use this so that we don’t have to communicate extra data explicitly to the receiver for the unraveling in this scheme.

A: Also, one of the cool things about this message size thing is that, one thing I found in bitcoin is that for proposals to be accepted by lots of people they kind of have to have different advantages for different factions. So some people might say “I don’t care if we can gain X” and then they get surprised about Y and excited… so the memo field might actually help this get adopted. For people that want a big memo field, the fact that it effectively gets preferential pricing in terms of fees and limits might be another reason that people would want to use it too.

Q: What are your latest thoughts on mining centralization and how are you thinking about this?

A: Mining centralization has been a long-term challenge for bitcoin in general. There are all kinds of market effects that result in mining centralization, including the fact that many of the mining projects have been scams, which scares away principal investors. We have known this would be a problem since 2012. Better privacy tech could reduce the effects of mining centralization. In the long run, mining is going to be boring, and it’s not even as good as electricity production. In the long run, this will drive mining to decentralization just to gain access to cheap power in the world. Everyone in the bitcoin industry should be looking at opportunities to make mining more decentralized, like making small investments to make mining more decentralized. The system really does depend on this. It has to be widely spread, even if not competently operated. It isn’t even all that competently operated today anyway.

Q: Some people make the argument that for real fungibility, privacy has to be mandatory. What do you think about that argument? If CT was introduced in bitcoin, would it be optional or mandatory?

A: I think that argument has a lot of merit. Just look at the zcash case. But I don’t think it’s really viable to make anything in bitcoin mandatory… maybe it is, maybe it isn’t. It depends also on the cost. The way that bitcoin transactions are costed out is a byproduct of the system. One way that CT could be deployed is that every transaction could be charged based on its weight impact on the block limits, whether it has CT or not. I don’t know, I think the argument has a lot of merit, but I don’t see a way to foist it on people that don’t want it. This is an advantage that a system like Monero has right from the start that is hard to match.

Q: How excited are you about segwit activating on litecoin?

A: It’ll be interesting. You know, I think it’s cool that it gets used more, but I already knew it would get used somewhere– maybe I’m deluded. In litecoin there are some interesting challenges with miners and there are some large vested interests in making segwit have problems. Given the size and scale of litecoin, it will get over the problems, it might be bumpy, and I’ll be happy to help out. I posted on litecoin’s subreddit the other day about potential turbulence and some good things to know about and to mitigate. It would be cool to get segwit activated on litecoin. I am happy about it. You are going to have bitcoin developers working on litecoin eventually, because nobody really doing protocol development in bitcoin wants to do script enhancement without segwit because it makes it so much easier. The idea of trying to do script enhancement without it is just not interesting…

Q: What do you think is preventing or how would you accelerate mainstream adoption of digital currency like bitcoin?

A: I think we have a long way to go for all the on ramps and off ramps. Having the right message and narrative for what bitcoin does for people. If you look at a public that is living entirely off of credit, what does bitcoin do for them? None of the cryptocurrencies provide credit, so how are they going to do it? In the long run, we will have credit, but we don’t have it today. If you look at the pace of other really transformative technologies like the phone network or automobiles… they had long incubation periods. The internet went from kinda visible to super visible in 10 years, but it took decades for the internet to get to that point in the first place. Bitcoin is asking people to rethink how they handle money and to reboot many concepts. Sort of like how linux has impacted people as well– linux has just permeated the infrastructure of the internet, but Joe Schmoe is not aware of it even though linux is on his phone. It’s going to take time.

Thank you.

\ No newline at end of file diff --git a/greg-maxwell/2017-08-28-gmaxwell-deep-dive-bitcoin-core-v0.15/index.html b/greg-maxwell/2017-08-28-gmaxwell-deep-dive-bitcoin-core-v0.15/index.html index e8ad9c5a0a..f09da6c1c9 100644 --- a/greg-maxwell/2017-08-28-gmaxwell-deep-dive-bitcoin-core-v0.15/index.html +++ b/greg-maxwell/2017-08-28-gmaxwell-deep-dive-bitcoin-core-v0.15/index.html @@ -10,4 +10,4 @@ < Greg Maxwell < Deep Dive Bitcoin Core V0.15

Deep Dive Bitcoin Core V0.15

Speakers: Greg Maxwell

Date: August 28, 2017

Transcript By: Bryan Bishop

Media: -https://www.youtube.com/watch?v=nSRoEeqYtJA

slides: https://people.xiph.org/~greg/gmaxwell-sf-015-2017.pdf

https://twitter.com/kanzure/status/903872678683082752

git repo: https://github.com/bitcoin/bitcoin

preliminary v0.15 release notes (not finished yet): http://dg0.dtrt.org/en/2017/09/01/release-0.15.0/

Introduction

Alright let’s get started. There are a lot of new faces in the room. I feel like there’s an increase in excitement around bitcoin right now. That’s great. For those of you who are new to this space, I am going to be talking about some technical details here, so you’ll get to drink from the firehose. And if you don’t understand something, feel free to ask me questions, or other people in the room after the event, because there are a lot of people here who know a lot about this stuff.

So tonight I’d like to start off by talking about the new cool stuff in Bitcoin Core v0.15 which is just about to be released. And then I am going to open up for open-format question and answer and maybe we’ll have some interesting discussion.

First thing I want to talk about for v0.15 is a numbers-wise breakdown: what kind of activities go into bitcoin development these days? I am going to talk about the major themes and major improvements, things about performance, wallet features, and talk a little bit about this service bit disconnection thing, which is a really minor and obscure thing but it created some press and I think there’s a lot of misunderstanding about it. And then I am going to talk about some of the future work that is not in v0.15 that will be in future versions that I think is interesting.

Bitcoin Core releases

Quick refresher on how Bitcoin Core releases work. There’s this master trunk branch of bitcoin development. Releases are branched off of it, and new fixes that go into those releases are written into the master branch and then copied into the release branches. This is a pretty common process for software development. What this means is that a lot of the features that are in v0.15 are also in v0.14.2 and v0.14, because v0.15 development started with the release of v0.14.0, not with the release of v0.14.2. So back in February this year, v0.14 branched off, and in March v0.14.0 was released. In August, v0.15 branched off, and we had two release candidates, and there’s a third that should be up tonight or tomorrow morning, and we’re expecting the full release of v0.15 to be around the 14th or 15th; we expect it may be delayed a little due to developers flying around and not having access to their cryptographic keys. Our original target date was September 1st, so a two week slip there, unfortunate but not a big deal.

Some numbers about v0.15 contributions

Just some raw numbers. What kind of levels of activities are going into bitcoin development these days? Well, in the last 185 days, which is the development of v0.15, there were 627 pull requests merged on github. So these are individual high-level changes. Those requests contained 1,081 non-merge commits by 95 authors, which comes out to 6 commits per day which is pretty interesting compared to other cryptocurrency projects. One of the interesting things about v0.15 is that 20% of the commits are test-related, so they were adding or updating tests. And a lot of those actually came from jnewbery, a relatively new contributor who works at Chaincode Labs. His first commit was back in November 2016 and he was responsible for half of all the test-related commits and he was also the top committer. And then it falls down from there, lots of other people, a lot of contributions, and then a broad swath of everyone else. The total number of lines changed was rather large, 52k lines inserted. This was actually mostly introduction of new tests. That’s pretty good, we have a large backlog of things that are under-tested, for our standards… there’s some humor there because Bitcoin Core is already relatively well-tested compared to most software. All of this activity results in about 3k lines changed per week to review. For someone like me, who does most of his contributions in the form of review, there’s a lot of activity to keep up with.

Main areas of focus in v0.15: performance

So in v0.15, we had a couple of big areas of focus. As always, but especially in v0.15, we had a big push on performance. One of the reasons for this is that the bitcoin blockchain is growing, and just keeping up with that growth requires the software to get faster. But also, with segwit coming online, we knew that the blockchain would be growing at an even faster rate, so there was a desire to try to squeeze out all the performance we could, to make up for that increase in capacity.

There were some areas that were polished, and problem areas that were made a little more reliable. And there were some areas that we didn't work on – such as new consensus rules; we've been waiting to see how segwit shakes out, so there hasn't been a big focus on new consensus rules. There were a bunch of wallet features that we realized would be easier with segwit, so we held off on implementing those. There are some things that were held back waiting for segwit, and now they can move forward at a pretty good speed.

Chainstate (UTXO) database performance

One of the important performance improvements is that we have completely reworked the chainstate database in Bitcoin Core. The chainstate database is what stores the information that is required to validate new blocks as they come in. This is also called the UTXO set. The current structure we’ve been using has been around since v0.8.0, and when it was introduced the disk structure, which has a separate database of just the information required to validate blocks, was something like a 40x performance improvement. That was an enormous performance improvement from what we had. So this new update is improving this further.

The previous structure we often talk about as a per-output database. So this is a UTXO database, and people think of it as tracking every output from every transaction as a separate coin. But that's not actually how it was implemented on the backend. The backend system previously batched up all the outputs for a single transaction into a single record in the database and stored them together. Back at the time of v0.8.0, that was a much better way of doing it; the usage patterns and the load on the bitcoin network have changed since then, though. The batching saved space because it shared some of the information among all the outputs, like the height of the transaction, whether or not it's a coinbase transaction, and the txid. One of several problems with this is that when you spend outputs, you have to read them all and write them all back, and this creates a quasi-quadratic blow-up where a lot more work is required to modify transactions with many outputs. It also made the software a lot more complicated: a pure utxo database would only have to create outputs and delete them when they are spent, but the merged form in the past also had to support modifications, such as spending some of the outputs but saving the rest back. This whole process resulted in considerable memory overhead.

We have changed to a new structure where we store one record per txout. This results in a 40% faster sync, so it's a big performance improvement. It also uses 10% less memory for the same number of cache entries, so you can have a larger cache with the same amount of memory on your system, or you can run Bitcoin Core on a machine with less memory than you could before. This 40% was measured on a host with very fast disks; if you run the software on a computer with slow media such as spinning disks or USB or something like that, then I would expect improvements even beyond 40%, but I haven't benchmarked that.
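
To make the difference concrete, here is a minimal sketch contrasting the old per-transaction records with the new per-output records. The record layouts and field names here are illustrative only, not Bitcoin Core's actual serialization.

```python
# Conceptual sketch: old (per-transaction) vs new (per-output) chainstate layout.

# Old scheme: one record per txid, holding all unspent outputs of that tx.
old_db = {
    "txid1": {"height": 100, "coinbase": False,
              "outputs": {0: 50_000, 1: 20_000, 2: 10_000}},
}

def old_spend(db, txid, index):
    """Spending one output means reading the whole record and writing it back."""
    record = dict(db[txid])                 # read everything
    outputs = dict(record["outputs"])
    del outputs[index]                      # spend one output
    if outputs:
        record["outputs"] = outputs
        db[txid] = record                   # write everything back
    else:
        del db[txid]                        # last output spent: delete the record

# New scheme: one record per (txid, output index). Height/coinbase are duplicated
# per output (hence the larger on-disk size), but spends are plain deletes.
new_db = {
    ("txid1", 0): {"height": 100, "coinbase": False, "value": 50_000},
    ("txid1", 1): {"height": 100, "coinbase": False, "value": 20_000},
    ("txid1", 2): {"height": 100, "coinbase": False, "value": 10_000},
}

def new_spend(db, txid, index):
    """Spending one output touches exactly one record."""
    del db[(txid, index)]

old_spend(old_db, "txid1", 1)
new_spend(new_db, "txid1", 1)
print(old_db)
print(new_db)
```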

There's a downside, though, which is that it makes the database larger on disk: a 15% chainstate size increase, so about 2.8 GB now. But this isn't really a big deal.

Here's a visualization of the trade. In the old scheme you had this add, modify, modify, delete operation sequence: you had three outputs, one gets spent, another gets spent, and then you delete the rest. In the new scheme it is just adds and deletes, which is what people thought it was doing all along. The old structure has been copied into other alternative implementations – btcd, bcoin, these other implementations copied the same structure from Bitcoin Core, so they all work like the original v0.8.0 method.

This is a database format change, so there's a migration process: when you start v0.15 on a host with an older chainstate, it will migrate to the new format. On a fast system this takes 2-3 minutes, but on a raspberry pi it is going to take a few hours. Once you run v0.15 on a node, in order to go back to an older version of bitcoind you will have to do a chainstate reindex, and that will take hours. There's a corner case where, if you ran Bitcoin Core master in the couple-week period from when we introduced this until we improved it a bit, your chainstate directory may have become 5.8 GB in size, so double size, because leveldb, the underlying backend database, was not particularly aggressive about removing old records. There's a hidden forcecompactdb option that you can run as a one-time operation that will shrink the database back to its proper size. So if you see that you have a 5.8 GB chainstate directory, run that and get it back to what it should be.

The graph at the bottom here is showing the improvements from this visually, and it's a little strange to read. The x-axis is progress in terms of what percent of the blockchain was synced. On the first graph, the y-axis is showing how much time has passed. The purple line at the top is the older code without the improvement; if you look early on in the history, the difference isn't as large. The lower graph is showing the amount of data stored in the cache at a point in time. You see the data increasing, then you see it flush the cache to disk and drop to nothing, and so on. If you look at the purple line there, it's flushing very often as it gets further along in synchronization, whereas the new code keeps more in the cache and flushes less frequently, so it doesn't have to do as much I/O.

Testing the upgraded chainstate database

Since this change is really a change at the heart of the consensus algorithm in bitcoin, we had some very difficult challenges related to testing it. We wouldn't want to deploy a change that would cause it to corrupt or lose records, because that would cause a network split or network partition. So it's a major consensus-critical part of the system. This redo actually had a couple of months of history before it was made public, where Pieter Wuille was working on it in private, trying a couple of different designs. Some of his work was driven by his work on the next feature that I am going to talk about in a minute. But once it was made public, it had 2 months of public review; there were at least 145 comments on it, people going through and reviewing the software line by line, asking questions, getting comments improved and things like that. This is an area of the software where we already had pretty good automated testing, and we also added some new automated tests. We also used a technique called mutation testing, where basically we took the updated copy of bitcoin and its new tests, and we went line by line through the software, and everywhere we saw a location where a bug could have occurred – every if-branch that could have been inverted, every zero that could have been a one, every add that could be a subtract – we made each bug in turn, ran the tests, and verified that every such change to the software would make the tests fail. This process didn't turn up any bugs in the new database, hooray, but it did turn up a pre-existing non-exploitable crash bug and some shortcomings in the tests, where some tests thought they were testing one aspect but it turns out they were testing nothing. And that's a thing that happens, because often people don't test the tests. You can think of mutation testing this way: normally the test is the test of the software, but what's the test of the test? Well, the software is the test of the test – you just have to break the software to see the results.

Non-atomic flushing

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=13m40s

A feature related to the new chainstate database which we also did shortly after is non-atomic flushing. This database cache in bitcoin software is really more of a buffer than a cache. The thing it does that improves performance isn’t that it saves you from having to read things off disk– reading things off-disk is relatively fast. But what it does is that it prevents things from being written to disk such as when someone makes a transaction and then two blocks later they spend the output of that transaction, with a buffer you can avoid ever taking that data and writing it to the database. So that’s where almost all of the performance improvement from ‘caching’ really comes from in bitcoin. It’s this buffer operation. So that’s great. And it works pretty well.

One of the problems with it is that in order to make the system robust to crashes, the state on the disk always has to be consistent with a particular block, so that if you were to yank the power out of your machine and your cache is gone, since it was just in memory, the node can come up and continue validation from a particular block instead of having partial data on disk. It doesn't have to be the latest block, it just has to be some exact block. The way that this was built in Bitcoin Core was that whenever the cache filled up, we would force-flush all the dirty entries to disk at once, and then we could be sure that the state on disk was consistent with that point in time. In order to do that, we would use a database transaction, and the database transaction required creating another full copy of the data that was going to be flushed. This extra operation meant that we basically lost half of the memory we could tolerate in our cache to this really short, 100 ms long memory peak: in older versions like v0.14, if you configure a database cache of, say, a gigabyte, then we really use 500 MB for cache and leave 500 MB unused to handle this little peak that occurs during flushing. We're also doing some work with smarter cache management strategies, where we incrementally flush things, but they interact poorly with the whole "flush at once" concept.

But we realized that the blockchain itself is actually a write-ahead log. It's exactly the thing you need for a database to not have to worry about what order your writes were given to the disk. With this in mind, our UTXO database doesn't actually need to be consistent. What we can do instead is store in the database the range of blocks whose writes could have been in flight – the earliest blockheight for which there were writes in flight, and the latest – and then on startup we simply go through the blockchain on disk and re-apply all of those changes. Now, this would have been much harder when the database could have entries that were modified, but after changing to the per-TXO model, every entry is either inserted or deleted. Inserting a record twice does nothing, and deleting a record twice also does nothing. So the software to do this is quite simple: it starts up, sees where it could have had writes in flight, and then replays them all.
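
Here is a minimal sketch of that recovery idea, assuming a per-output store where every change is an insert or a delete. The data structures and function names are made up for illustration; the point is only that replaying a change that already hit disk is harmless, so startup can simply re-apply every block in the recorded "writes possibly in flight" range.

```python
# Crash recovery with non-atomic flushing: re-apply a range of blocks on startup.

utxo_on_disk = {}          # stand-in for the chainstate database

def apply_block(db, block):
    for key in block["spends"]:
        db.pop(key, None)          # deleting twice does nothing
    for key, coin in block["creates"]:
        db[key] = coin             # inserting twice just rewrites the same record

def recover(db, blocks, flush_range):
    """On startup, re-apply every block whose writes may have been in flight."""
    if flush_range is None:
        return
    first, last = flush_range
    for height in range(first, last + 1):
        apply_block(db, blocks[height])

# Example: a crash after block 101 was only partially flushed.
blocks = {
    100: {"creates": [(("a", 0), 50)], "spends": []},
    101: {"creates": [(("b", 0), 30)], "spends": [("a", 0)]},
}
apply_block(utxo_on_disk, blocks[100])
utxo_on_disk[("b", 0)] = 30        # pretend only part of block 101 made it to disk
recover(utxo_on_disk, blocks, (101, 101))
print(utxo_on_disk)                # consistent with having applied both blocks
```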

So this really simplifies a lot of things. It also gives us more flexibility in how we manage that database in the future, so it would be much easier to switch to a custom data structure or do other fancy things with it. It also allows us to do fancier cache management and lower latency flushing in the future; for example, we could incrementally flush at every block to avoid the latency spikes from doing a full flush all at once. That is something we've experimented with, we just haven't proposed it yet, but it's now easy to do with this change. It also means that – we've benchmarked all the trendy database things that you might use in place of leveldb in this system, and so far we've not found anything that performs significantly better, but if in the future we found something that did, we wouldn't have to worry about it supporting transactions anymore.

Platform acceleration

Also in v0.15, we did a bit of platform-specific acceleration work. This is stuff that we knew we could do for a long time, but it didn't rise to the top of the performance priority list. We now use an SSE4 assembly implementation of sha256, and that results in a 5% speedup of initial block download, and a similar speedup for connecting a new block at the tip – maybe more like 10% there.

In v0.15, it's not enabled in the build by default, because we introduced it just a couple of days before the feature freeze and then spent 3 days trying to figure out why it was crashing on macosx. It turns out that the macosx linker was optimizing out the entire chunk of assembly code because of an error we made in the capitalization of a label. So… that took like 3 days to fix, and we didn't feel like merging it in and then having the v0.15 release delayed by finding out later that it doesn't work on openbsd or something obscure or some old osx. But it will be enabled in the next releases.

We also implemented support for the SHA native instructions. This is a new instruction set for sha hardware acceleration that Intel has announced. They announced it but they really haven't put it into many of their products; it's supported in like one Atom CPU. The new AMD Ryzen parts contain this instruction, though, and using it gives another 10% speedup in initial block download on those AMD systems.

The backend database uses CRC32 to detect errors in the database files; we made that use the SSE4 instructions too.

Script validation caching

Another one of the big performance improvements in v0.15 is script validation caching. Since v0.7, bitcoin has had a cache that basically memoizes every (public key, message, signature) tuple and allows you to validate those much faster if they are in the cache. This change was actually the last change to the bitcoin software by Satoshi: he had written the patch and sent it to Gavin, and it just sat languishing in his mailbox for about a year. There are some DoS attacks that can be solved by having that cache. The DoS attack is basically where you have a transaction with 10,000 valid signatures in it, and then the 10,001st signature in the transaction is invalid, and you spend all this time validating each of the signatures, get down to the last one, find out that it's invalid, and then the peer disconnects and sends you the same transaction but with one different invalid signature at the bottom, and you do it all again… So the signature cache was introduced to fix that, but it also makes validation at the tip much faster. But even though that's in place, all it's caching is the elliptic curve operations; it's not caching the validation of the scripts in total. And this is a big deal because of signature hashing: to check a signature you have to hash the transaction to determine what it's signing, and for large non-segwit transactions that can take a long time.

One question that has come up since 2012 is: why not use the fact that the transaction is in the mempool as a proxy for whether it's valid? If it's in your mempool, then it's already valid, you already validated it, go ahead and accept it. Well, the problem with this is that the rules for a transaction going into the mempool are not the same as the rules for a transaction going into a block. They are supposed to be a subset, but it's easy for software errors to turn them into a superset. There have been bugs in the mempool handling in the past that have resulted in invalid transactions appearing in the mempool. Because of how the rest of the software is structured, that kind of error is completely harmless except for wasting a little memory. But if you were using the mempool for validation, then having an invalid transaction in the mempool would immediately become a consensus-splitting (network partition) bug. Using the mempool would massively increase the portion of the codebase that is consensus-critical, and nobody working on this project is really interested in using the mempool for doing that.

So what we've introduced in v0.15 is a separate cache for script validation. It maintains a cache where the key is the transaction hash plus the validation flags for which rules are applicable to that transaction. Because all the validity rules other than sequence number and blocktime are a pure function of the hash of the transaction, that's all it has to cache. For segwit that's the wtxid, not just the txid. The presence of this cache creates a 50% speedup of accepting blocks at the tip, when a new block comes into a node.
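
A rough sketch of that keying scheme is below. The function and variable names are hypothetical, not Bitcoin Core's actual interface; the point is that the cache entry commits to both the transaction's hash and the validation flags in force when it was checked.

```python
# Conceptual sketch of script validation caching, keyed on (wtxid, flags).
import hashlib

script_cache = set()

def cache_key(wtxid: bytes, flags: int) -> bytes:
    return hashlib.sha256(wtxid + flags.to_bytes(4, "little")).digest()

def run_script_checks(tx, flags: int) -> bool:
    # Placeholder for the expensive signature/script validation.
    return True

def check_inputs(tx, flags: int) -> bool:
    key = cache_key(tx["wtxid"], flags)
    if key in script_cache:
        return True                     # already fully validated with these flags
    if not run_script_checks(tx, flags):
        return False
    script_cache.add(key)               # remember success for later block validation
    return True

tx = {"wtxid": hashlib.sha256(b"example tx").digest()}
FLAGS = 0b1011                          # hypothetical flag bitmask
check_inputs(tx, FLAGS)                 # validated and cached at mempool acceptance
print(check_inputs(tx, FLAGS))          # cache hit when the block arrives
```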

Other improvements

There's a ton of other speedups in v0.15. A couple that I wanted to highlight are on this slide. DisconnectTip, which is the operation central to reorgs – unplugging a block from the chain and undoing it – was made much faster, something on the order of tens of times faster, especially if you are doing a reorg of many blocks, mostly by deferring mempool processing. Previously, you would disconnect a block, take all the transactions out, put them in your mempool, disconnect another block, put those transactions into the mempool, and so on. We changed this so that it does all of this in a batch instead: we disconnect the blocks, throw the transactions into a buffer, and leave the mempool alone until we're done.
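
Here is a toy sketch of that batching idea. The data structures are simplified stand-ins (blocks as sets of transaction ids), not Core's actual reorg code; it just shows the mempool being touched once at the end rather than after every disconnected block.

```python
# Sketch: defer mempool reinsertion until the whole reorg is done.

def reorg(chain, mempool, fork_height, new_blocks):
    disconnected = []                       # buffer of txs from disconnected blocks
    while len(chain) > fork_height:
        block = chain.pop()                 # "DisconnectTip" analogue
        disconnected.extend(block["txs"])   # defer mempool processing
    for block in new_blocks:
        chain.append(block)                 # connect the replacement blocks
    confirmed = {tx for b in new_blocks for tx in b["txs"]}
    # Single pass at the end: resurrect only what the new chain didn't confirm.
    mempool.update(tx for tx in disconnected if tx not in confirmed)

chain = [{"txs": {"a"}}, {"txs": {"b", "c"}}]
mempool = set()
reorg(chain, mempool, 1, [{"txs": {"b"}}, {"txs": {"d"}}])
print(chain, mempool)                       # 'c' returns to the mempool
```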

We added a cache for compact block messages. When relaying compact block messages to different peers, we cache the constructed message rather than reconstructing it for each peer.

The key generation in the wallet was made on the order of 20x faster by not flushing the database between every key it inserts.

And there were some minor little secp256k1 speedups, on the order of 1%. But 1% is worth mentioning here because 1% speedups in the underlying crypto library are very hard to find; a 1% speedup is at least weeks of work.

Performance comparison

As a result, here is an initial block download (IBD) chart. The top bar is v0.14.2, and the bottom bar is v0.15 rc3. This is showing the number of seconds for an initial sync across the network. In this configuration, I've set the dbcache size to effectively infinite, like 60 gigs, so the dbcache never fills on either of the hosts I was testing on. I tested with an effectively infinite dbcache for two reasons: one is that if you have decent hardware, that's the configuration you want to run while syncing anyway, and the second is that with a normal-size dbcache, which is sized to fit on a host with 1 GB of RAM, the benchmarks were taking so long that they weren't ready in time for my presentation tonight; they will finish sometime around tonight. The two segments in the bars show how long it took to sync to a point about 2 weeks ago (the outer point) and how long it takes to sync to the point where v0.14 was released (the inner part). What you can see in the lower bar is that it's considerably faster, almost a 50% speedup. v0.15 actually takes about the same amount of time to sync now as v0.14 took when it was released, so all of these speedups basically got us back to where we were at the beginning of the year, given the underlying blockchain growth. These numbers are in seconds, which is probably not clear, but the shorter bar runs out to about 3 hours, and that's on a machine with 64 GB RAM and 24 cores.

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=25m50s

Multiwallet

Out of the realm of performance for a while… In v0.15, a long-requested feature that people have been asking about since 2011 is multiwallet support. Right now it's a CLI- and RPC-only feature. It allows the Bitcoin Core wallet to have multiple wallets loaded at once. It's not exposed in the GUI yet; it will be in the next release. It's easier to test in the CLI first, and most of the developers are using the CLI. You configure which wallets to load with wallet configuration arguments, and you tell the CLI which wallet you want to talk to, or if you're using RPC directly you change the URL to have /wallet/ followed by the name of the wallet you want to use. This is very useful if you have a pruned node and you want to keep many wallets in sync without having to rescan them, because you can't rescan on a pruned node.

For the moment, this should be considered somewhat experimental. The main delay in finally getting this into v0.15 was debate over the interface; we weren't sure what interface we should be providing for it. In the release notes, we explicitly mention that the interface is not stable, because we didn't want to delay the release over it. We've been working on this since v0.12, doing the backend restructuring, and in v0.15 it was just the user-facing component.

Fee estimation

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=27m30s

v0.15 features greatly improved fee estimation software. It starts with the original model we had in v0.14, but it tracks estimates on multiple time horizons so that it can be more responsive to fast changes. It supports two estimation modes, conservative and economical. The economical mode responds much faster. The conservative mode just says what fee basically guarantees, based on history, that the transaction will be confirmed; the economical mode is "eh, whatever's most likely". For bip125 replaceable transactions, if you underpay it's fixable, but if your transaction is not replaceable then we default to using the conservative estimator.
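
A minimal sketch of that default choice is below. The estimator values and function names are made up; the only point being illustrated is the mode selection described above: replaceable transactions can be bumped later, so the faster-reacting estimate is acceptable, while non-replaceable ones fall back to the conservative one.

```python
# Sketch of choosing the fee estimation mode based on replaceability.

def estimate_fee(target_blocks: int, mode: str) -> float:
    # Stand-in for the real estimator; returns a made-up feerate in sat/vB.
    table = {"economical": {2: 40.0, 6: 20.0}, "conservative": {2: 60.0, 6: 35.0}}
    return table[mode].get(target_blocks, 10.0)

def choose_feerate(target_blocks: int, replaceable: bool) -> float:
    mode = "economical" if replaceable else "conservative"
    return estimate_fee(target_blocks, mode)

print(choose_feerate(2, replaceable=True))    # cheaper, fixable if underpaid
print(choose_feerate(2, replaceable=False))   # safer, cannot be bumped later
```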

We also made the fee estimator machinery output more data, which will help us improve it in the future, and it's also being used by external estimators – other people have built fee estimation frameworks that use Bitcoin Core's estimator as an input. This new fee estimator can also produce estimates for much longer time ranges: it can estimate for targets up to 1008 blocks, so a week out, so if you don't care whether it takes a long time to confirm, it can still produce useful estimates. It has also fixed some corner cases where estimates were just nuts before.

The fundamental behavior hasn't changed. The fee estimator in bitcoin has the difficult challenge that it needs to operate safely unsupervised: someone could be attacking your node, and the fee estimator should not give a result where you pay high fees just because someone is attacking you. And we need to assume that the user doesn't know what high fees are… so this takes away some of the obvious things we might do to make the fee estimator better. For example, the current fee estimator does not look at the current queue of transactions in the mempool that are ready to go into a block. It doesn't bid against the mempool, and the reason it doesn't do this is because someone could send transactions that they know miners are censoring, pay very high fees on those transactions, and cause you to outbid transactions that will never get confirmed.

There are many ways that this could be improved in the future, such as using the mempool information but only to lower the fees that you pay. But that isn’t done yet. Onwards and upwards.

Fee handling

Separate from fee estimation, there are a number of fixes in the wallet related to fee handling. One of the big ones that I think a lot of people care about is that in the GUI there is support for turning on replaceability for transactions. You can make a transaction, and if you didn’t add enough fee and you regret it then you can now hit a button to increase the fee on the bip125 replaceable transaction. This was in the prior release as an RPC-only feature, it’s now available in the GUI. With segwit coming into play, this feature will get much more powerful. There are really cool bumpfee things that you can’t do without segwit.

There were some corner cases where the automatic coin selection could result in fee overpayment. Whenever the bitcoin wallet makes a transaction, it has to solve a pretty complicated problem: it has to figure out which coins to spend and how much fee to pay. It doesn't know how much fee it needs to pay until it knows which coins it wants to spend, and it doesn't know which coins it wants to spend until it knows the total amount of the transaction including the fee. So there's an iterative algorithm that tries to solve this problem. In some wallets with lots of tiny inputs, there were cases where the wallet would, in a first pass, select a lot of tiny coins for spending, then realize it didn't have enough to cover the fees for all those inputs and go select some more value – but the selection is not monotone, so it might end up spending some 50 BTC input while the fees were still sized for the earlier iteration, and then give up and potentially overpay fees. If your transaction had a change output, the extra would go to change, but in the case where there wasn't a change output, it would overpay, and v0.15 fixes this case.
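
To illustrate the circular problem (not the wallet's actual algorithm), here is a toy select-then-fee loop: the fee depends on how many inputs are selected, and which inputs are needed depends on the fee. The selection strategy and fee model are deliberately simplistic.

```python
# Toy illustration of the iterative coin-selection / fee feedback loop.

FEERATE = 0.0001
IN_SIZE, OUT_SIZE, OVERHEAD = 68, 31, 11

def fee_for(n_in, n_out):
    return FEERATE * (OVERHEAD + n_in * IN_SIZE + n_out * OUT_SIZE)

def select(coins, amount):
    picked, total = [], 0.0
    for c in sorted(coins):               # smallest-first, purely for illustration
        picked.append(c)
        total += c
        if total >= amount:
            return picked, total
    raise ValueError("insufficient funds")

def fund(coins, target):
    needed = target
    while True:
        picked, total = select(coins, needed)
        fee = fee_for(len(picked), 2)     # assume a payment output plus change
        if total >= target + fee:
            return picked, fee, total - target - fee
        needed = target + fee             # selection changed, so the fee changed too

coins = [0.001, 0.002, 0.003, 0.01, 1.0]
inputs, fee, change = fund(coins, 0.005)
print(inputs, round(fee, 6), round(change, 6))
```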

v0.15 also makes the wallet smarter about not producing change outputs when they wouldn't really be useful. It doesn't make sense to make a change output worth only 1 satoshi, because the cost of spending that change later is going to be greater than its value. The wallet has long avoided creating very low value change, but it now has a better rational framework for this, and there's a configuration option called discardfee to make it more or less aggressive. Basically it looks at the long-term fee estimates to figure out what kind of output value is going to be effectively worthless in the future, and it avoids creating change smaller than that.
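
A minimal sketch of that rule, with made-up numbers and names rather than the wallet's exact formula: a change output that would cost more to spend later (at the long-term feerate) than it is worth gets dropped, and its value goes to fees instead.

```python
# Sketch of the "don't create worthless change" decision.

CHANGE_SPEND_SIZE = 68            # approximate size units to spend one output later
LONG_TERM_FEERATE = 0.00005       # long-horizon fee estimate, per size unit
DISCARD_THRESHOLD = CHANGE_SPEND_SIZE * LONG_TERM_FEERATE

def finalize_outputs(payment: float, surplus: float):
    """Return (outputs, extra_fee) given the surplus left over after fees."""
    if surplus <= DISCARD_THRESHOLD:
        return [payment], surplus          # tiny surplus: let it go to fees
    return [payment, surplus], 0.0         # otherwise create a change output

print(finalize_outputs(1.0, 0.000001))     # change dropped, added to the fee
print(finalize_outputs(1.0, 0.05))         # change output kept
```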

hdwallet rescans and auto top-up

In v0.13, Bitcoin Core introduced hdwallet support. This means being able to create a wallet once and deterministically generate all of its future keys, so that you have less need to back up the wallet, and so that if you fail to back up your wallet regularly you are less likely to lose your funds. The rescan support in prior versions was effectively broken… you could take a backed-up hdwallet, put it on a new node, and it wouldn't notice when all of the pre-generated keys had been used, and it wouldn't top up the keypool… so recovery was not reliable. This is fixed in v0.15, where it now does a correct rescan including auto top-up. We also increased the size of the default keypool to 1000 keys so that it looks far ahead: whenever it sees a spend in its wallet for a key, it will advance to a thousand keys past that point.
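
Here is a toy sketch of that look-ahead behavior. The derivation function is a stand-in for real BIP32 derivation, and the class and method names are hypothetical; the idea shown is just that whenever a key at index i is seen used on chain, the look-ahead window is extended to i + 1000.

```python
# Sketch of keypool look-ahead during an hdwallet rescan.
import hashlib

KEYPOOL_LOOKAHEAD = 1000

def derive_key(seed: bytes, index: int) -> bytes:
    # Stand-in for deterministic (BIP32-style) key derivation.
    return hashlib.sha256(seed + index.to_bytes(4, "little")).digest()

class Wallet:
    def __init__(self, seed: bytes):
        self.seed = seed
        self.keys = {}                       # index -> key
        self.top_up(0)

    def top_up(self, used_index: int):
        for i in range(len(self.keys), used_index + KEYPOOL_LOOKAHEAD + 1):
            self.keys[i] = derive_key(self.seed, i)

    def rescan(self, chain_scripts):
        """Mark keys seen on chain and keep extending the look-ahead window."""
        for index, key in sorted(self.keys.items()):
            if key in chain_scripts:
                self.top_up(index)           # advance 1000 keys past this use

w = Wallet(b"backup seed")
used = {derive_key(b"backup seed", 950)}     # a key near the edge of the initial pool
w.rescan(used)
print(len(w.keys))                           # window extended to index 950 + 1000
```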

This isn’t quite completely finished in v0.15 because we don’t handle the case where the wallet is encrypted. If the wallet is encrypted, then you need to manually unlock the wallet and trigger a rescan, if you’re restoring from a backup– it doesn’t do that, yet, but in future versions we’ll make this work or do something else.

And finally, and this was an oft-requested feature, there is now an abortrescan RPC command, because rescan takes hours, and their node is useless for a while, so there’s a command to stop rescan.

RNG improvements

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=34m18s

More of the backend minutia of the system… we’ve been working on improving the random number generator, this is part of a long-term effort to completely eliminate any use of openssl in the bitcoin software. Right now all that remains of openssl use in the software is the random number generator and bitcoin-qt has bitcoin payment protocol support which uses https which uses QT which uses openssl.

Most of the random number use in the software package has been replaced with using chacha20 stream ciphers, including all the cases where we weren’t using a cryptographic random number generator, like unix rand-style random number generators that needed to be somewhat random but not secure, but those all use cryptographic RNG now because it was hardly any slower to do so.

One of the things that we're concerned about in this design is that there have been, in recent years, issues with operating system RNGs – both freebsd and netbsd have shipped versions where the kernel gave numbers that were not random. And also, on systems running inside of virtual machines, there are issues where you take a snapshot of the VM and then restore it, and the system regenerates the same "random" numbers that it generated before, which can be problematic.

A version of that was behind a famous bug in 2013 that caused bitcoinj to… so…

We prefer a really conservative design. Right now what we’re doing in v0.15 is that any of the long-term keys are generated by using the OS random number generator, the openssl random number generator, and a state that we … through… And in the future we will replace the openssl component with a separate initializer.

Q: What are you using for entropy source?

A: The entropy on that is whatever openssl does, /dev/urandom, rdrand, and then state that is queried through repeated runs of this algorithm.

Disconnecting service bits feature

I want to talk about this disconnecting service bits feature. It's the sort of minutia that I wouldn't normally mention, but it's interesting to talk about now because I think there's a lot of misunderstanding. A little bit of background first. Bitcoin's partition resistance is really driven by many different heuristics that try to make sure a node is not isolated from the network. If a node is isolated from the network, then because of the blockchain proof-of-work you're relatively safe, but you're still denied service. Consequently, the way bitcoin handles this is that we're very greedy about keeping connections that we previously found to be useful, such as ones that relayed us transactions and blocks and have high uptime. The idea is that if an attacker tries to saturate all the connections on the network, he'll have a hard time doing it, because his connections came later and everyone else is preferring good working connections. The downside of this is that the network can't handle sudden topology changes. You can't go from everyone connected to everyone to "now we're all going to disconnect from everyone and connect to different people" – you can't do that, because you blow away all of that connection-management state, and you can end up in situations where nodes are stuck and unable to connect to other nodes for long periods of time.

When we created segwit, which itself required a network topology change, the way we handled this was by front-loading the topology change to be safer about it. When you installed v0.13.1, your node made its new connections differently than prior nodes: it preferred to connect to other segwit-capable peers. Our rationale was that if something went wrong – if there weren't enough segwit-capable peers and this caused you to take a long time to get connected – then that's fine, because you just installed an upgrade and perhaps you weren't expecting things to go perfectly. It also means the change doesn't happen to the whole network all at once, because people applied that upgrade over the course of months. So you didn't go from being connected one second to some totally different topology the next; it was a very gradual change, and it avoided thundering herds of nodes attempting to connect to the few segwit-capable (bip144) peers at first. So that's what we did for segwit, and it worked really well.

Recently… there was a spin-off of Bitcoin, the Bitcoin Cash altcoin, which for complex political insanity reasons used the bitcoin peer-to-peer port and the bitcoin p2p magic and was basically indistinguishable on the bitcoin network from a bitcoin node. And so, what that meant was that the bcash nodes would end up connected to bitcoin nodes, and bitcoin nodes likewise, sort of half-talking to each other and wasting each other’s connection slots. Don’t cross the streams. This didn’t really cause much damage for bitcoin because there weren’t that many bcash connections at the time, but it did cause significant damage to bcash and in fact it still does to this day because if you try to bring up a bcash node it will sync up to the point where they diverged, and it will often sit for 6-12 hours to connect to another bcash node and learn about its blocks. In bcash’s case, it was pretty likely to get disconnected because the bcash transactions are consensus-invalid under the bitcoin rules. So it would trigger banning, just not super fast.

There is another proposed spin-off called segwit2x, which is not related to the segwit project. S2X has the same problem but much worse. It's even harder for it to get automatically banned, because it doesn't have strong replay protection, so it will take longer for nodes to realize that they are isolated. There's a risk here that their new consensus rules could activate and you could have a node that was connected only to s2x peers: your node isn't going to accept their blockchain because it's consensus-incompatible, but they're not going to accept anything from you and likewise, and you could be isolated for potentially hours. If you're a miner, being in that situation for hours would create a big mess. So v0.15 will disconnect any peer setting service bits 6 and 8, which are the bits s2x and bcash are setting, and it will continue to do this until 2018-08-01. Whenever peers are detected setting these bits, it just immediately disconnects them. This reduces the extent to which these otherwise honest users inadvertently DoS attack each other.
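
A rough sketch of that rule is below. The exact bit positions and sunset timestamp here follow the talk's wording ("bits 6 and 8", until 2018-08-01) and are assumptions on my part, not copied from the released code.

```python
# Sketch: temporarily disconnect peers advertising fork-signalling service bits.
import time

UNWANTED_SERVICE_BITS = (1 << 6) | (1 << 8)   # "bits 6 and 8" per the talk (assumed)
SUNSET_TIMESTAMP = 1533081600                 # approx. 2018-08-01 00:00 UTC (assumed)

def should_disconnect(peer_services: int, now: float = None) -> bool:
    now = time.time() if now is None else now
    if now >= SUNSET_TIMESTAMP:
        return False                          # rule expires; stop discriminating
    return bool(peer_services & UNWANTED_SERVICE_BITS)

print(should_disconnect(1 << 6, now=1.5e9))   # True: fork-signalling peer
print(should_disconnect(1 << 3, now=1.5e9))   # False: e.g. a segwit-capable peer
```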

The developer of S2X showed up and complained about premature partitioning, pointing out that this only really needs to happen at the moment their new rules activate. But as I have just explained, the network can't really handle a sudden topology change all at once. And we really think this concern about partitioning is basically unjustified, because S2X nodes will still happily connect to each other and to older versions. We know from prior upgrades that people are still happily running v0.12 and v0.10 nodes out on the network. It would take years before S2X nodes could find nothing to connect to… but their consensus rules will change about 2 months after the v0.15 release. So the only risk to them is if the entire bitcoin network upgrades to v0.15 within 2 months, which is just not going to happen. And if it did happen, then people would set up nodes for compatibility. Since there was some noise created about that, I wanted to talk about it.

OK, time for a drink. Just water. ((laughter))

Future work in-flight

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=42m25s

There’s lots of really cool things going on. And often in the bitcoin space people are looking for the bitcoin developers to set a roadmap for what’s going to happen. But the bitcoin project is an open collaboration, and it’s hard to do anything that’s like a roadmap.

I have a quote from Andrew Morton about the Linux kernel developers, which I’d like to read, where he says: “Instead of a roadmap, there are technical guidelines. Instead of a central resource allocation, there are persons and companies who all have a stake in the further development of the Linux kernel, quite independently from one another: People like Linus Torvalds and I don’t plan the kernel evolution. We don’t sit there and think up the roadmap for the next two years, then assign resources to the various new features. That’s because we don’t have any resources. The resources are all owned by the various corporations who use and contribute to Linux, as well as by the various independent contributors out there. It’s those people who own the resources who decide.”

It’s the same kind of thing that also applies to bitcoin. What’s going to happen in bitcoin development? The real answer to that is another question: what are you going to make happen in bitcoin development? Every person involved has a stake in making it better and contributing. Nobody can really tell you what’s going to happen for sure. But I can certainly talk about what I know people are working on and what I know other people are working on, which might seem like a great magic trick of prediction but it’s really not I promise.

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=43m45s

Upcoming: Full segwit support in wallet

So the obvious one is getting segwit fully supported in the wallet. In prior versions of bitcoin, we had segwit support, but it was really intended for testing and it's not exposed in the GUI. All the tests use it, it works fine, but it's not a user-friendly feature. There's a good reason why we didn't go and put it in the GUI in advance, which is that people might have used segwit before its rules were enforced and potentially had their funds stolen. Also, just in terms of allocating our own resources, we thought it was more important to make the system more reliable and faster, and we didn't know when segwit would activate.

In any case, we're planning on doing a short release right after v0.15.0 with full support for segwit, including things like sending to bip173 (bech32) addresses, which Pieter Wuille presented here a few months back. Amusingly, this might indirectly help with the bcash spin-off, which uses the same address format as bitcoin; people have already lost considerable amounts of money by sending things to the wrong addresses. So introducing a new address type into bitcoin will indirectly help this situation; unfortunately there's really nothing reasonable we can do to prevent other people from copying bitcoin's address format.

Advanced fee bumping capability

There are other wallet improvements that people are working on. I had mentioned previously this evening about this fee bumping capability. There’s an idea for fee bumping which is that the wallet should keep track of all the things you’re currently trying to pay, and whenever you make a new payment, it should recompute the transactions for the things you’re trying to pay, to update them and potentially increase the fees at the time. It could also concurrently create timelocked alternative versions of the transactions that pay higher fees, so that you can pre-sign them and have them ready to go… So we went to go design this out a few months back, and we found that there were cases caused by transaction malleability that were hard to solve, but now with segwit activated, we should be able to support this kind of advanced fee bumping in the wallet, and that should be pretty cool.

More improvements

There is support being worked on for hardware wallets and easy offline signing. You can do offline signing with Bitcoin Core today, but it's the sort of thing that even I don't love; it's complicated. Andrew Chow has a bip proposal recently posted to the mailing list for partially signed bitcoin transactions (see also hardware wallet standardization things); it's a format that can carry all the data that an offline wallet needs for signing, including hardware wallets. Deploying this in Bitcoin Core is much easier for us to do safely and efficiently with segwit in place.

Another thing that people are working on is branch-and-bound coin selection, to produce changeless transactions much of the time. This is Murch's design, which Andrew Chow has been working on implementing. There's a pull request that barely missed going into v0.15, but the end result of this will be transactions that are less expensive for users and a more efficient network, because it creates change outputs much less often.

Obviously I mentioned the multiwallet feature before… it’s going to get supported in the GUI.

There are several pull requests in flight for a full-block lite mode, where you run a bitcoin node that downloads full blocks, so that you have full privacy, but doesn't do full validation of past history, so that you have an instant start.

There’s some work on hashed timelock contracts (HTLCs) so that you can do more interesting things with smart contracts. Checksequenceverify (bip112) transactions which can be used for various security measures like cosigners that expire after a certain amount of time. And there’s some interesting work going on for separating the Bitcoin Core GUI from the backend process, and there are patches that do this, but I don’t know if those patches will be the ones that end up getting merged but that work is making progress right now.

Rolling UTXO set hashes

There are interesting things going on with network and consensus. One is a proposal for rolling UTXO set hashes (PR #10434). This is a design to compute a hash of the UTXO set that is very efficient to incrementally update every block, so that you don't need to go through the entire UTXO set to compute a new hash. This can make it easier to validate that a node isn't corrupted. But it also opens up new possibilities for syncing a new node – where you say you don't want to validate history from before a year ago, you sync up to a state as of that point and then continue on from there. That's a security trade-off, but we think we have ways of making it more realistic. We have some interesting design questions open: there are two competing approaches, and they have different performance tradeoffs in different cases.
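
To show only the incremental-update idea, here is a toy rolling set hash where each coin hashes to a big integer and the set hash is the XOR of all of them, so adding or removing a coin is O(1) regardless of set size. The actual proposals use much stronger multiset hash constructions (for example elliptic-curve or modular ones); plain XOR is not secure and is used here only for illustration.

```python
# Toy rolling (incrementally updatable) UTXO set hash.
import hashlib

def coin_hash(txid: str, index: int, value: int) -> int:
    data = f"{txid}:{index}:{value}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

class RollingUtxoHash:
    def __init__(self):
        self.acc = 0
    def add(self, txid, index, value):
        self.acc ^= coin_hash(txid, index, value)
    def remove(self, txid, index, value):
        self.acc ^= coin_hash(txid, index, value)   # XOR is its own inverse

h = RollingUtxoHash()
h.add("aa", 0, 50)
h.add("bb", 1, 20)
h.remove("aa", 0, 50)
only_bb = RollingUtxoHash()
only_bb.add("bb", 1, 20)
print(h.acc == only_bb.acc)                         # True: order-independent
```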

Signature aggregation

We've been doing work on signature aggregation. You can google for my bitcointalk post on signature aggregation. The idea is that if you have a transaction with many inputs, maybe multisig or with many participants, it's possible with some elliptic curve math to have just a single signature covering all of those inputs, even if they are keys held by related but separate parties. This can be a significant bandwidth improvement for the system. Pieter Wuille, Andrew Poelstra and myself presented a paper at Financial Crypto 2017 about what we're proposing to implement here and the risks and tricky math problems that have to be solved. One of the reviewers of our paper found an even earlier paper that we had completely missed, which implemented something almost identical to what we were doing, and it had even better security proofs than what we had, so we switched to that approach. We have an implementation of the backend and we're playing around now with performance optimizations, because they have some implications for how the interface looks for bitcoin. One of the cool things that we also get out of this at the same time is what's called batch aggregation, where you can take multiple independent transactions and validate their signatures more efficiently in batch than you could by validating them one at a time.

The byte savings from using signature aggregation – we ran a simulation over the prior bitcoin history and asked: if the prior bitcoin history had used signature aggregation, how much smaller would the blockchain be? The amount of savings changes over time, because the aggregation gains depend on how much multisig is being used and how many inputs transactions have. Early on in bitcoin, it would have saved hardly anything. But later on, it stabilizes at around 20%, and it would go up if more multisig were in use or if more coinjoins were in use, because this aggregation process incentivizes coinjoin by making the fees lower for the aggregate.

When this is combined with segwit, if everyone is using it, the savings are about 20%; that's a little less than the raw byte savings because of the segwit weight calculations. Now, this scheme reduces the signature data to just one signature per transaction, but the amount of computation done to verify it is still proportional to the number of public keys going into it; however, we're able to use the batching operation to combine them together and make it go much faster. This chart shows the different schemes we've been experimenting with. It gets faster as more keys are involved. At the 1,000 key level – so if you're making a transaction with a lot of inputs or a lot of multisig, or doing batch validation – we're at the point where we can get a 5x speedup over validating one signature at a time. So it's a pretty good speedup. This has been mostly work that Pieter Wuille, Andrew Poelstra and Jonas Nick have been focusing on, trying to get all the performance out of this that we can.

Network and consensus

Other things going on with the network and consensus… There’s bip150 and bip151 (see the bip151 talk) for encrypted and optionally authenticated p2p. I think the bips are half-way done. Jonas Schnelli will be talking about these in more detail next week. We’ve been waiting on network refactors before implementing this into Bitcoin Core. So this should be work that comes through pretty quickly.

There's been work ongoing regarding private transaction announcement (the Dandelion paper and proposed dandelion BIP). Right now in the bitcoin network, there are people who connect to nodes all over the network and try to monitor where transactions originate, in an attempt to deanonymize people. There are countermeasures against this in the bitcoin protocol, but they are not especially strong. There is a recent paper proposing a technique called Dandelion which makes it much stronger. The authors have been working on an implementation and I've sort of been guiding them on that – either they will finish their implementation or I'll reimplement it – and we'll get this in relatively soon, probably in the v0.16 timeframe. It requires a slight extension to the p2p protocol where you tell a peer that you would like it to relay a transaction, but only to one peer. The idea is that transactions are relayed in a line through the network, just one node to one node to one node, and then after basically a random timeout they hit a spot where they switch to flooding and propagate everywhere. Their paper makes a very good argument for the privacy improvements of this technique. Obviously, if you want privacy then you should be running Bitcoin Core over tor; still, I think this is a good technique to implement as well.
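
Here is a simplified toy model of that stem-then-fluff relay pattern, not the BIP's exact mechanics: a transaction travels along a random "stem" one peer at a time, and at each hop it switches to normal flooding with some probability (a robust version would also use an embargo timer, as discussed in the Q&A below).

```python
# Toy model of Dandelion-style stem ("line") relay followed by flooding ("fluff").
import random

FLUFF_PROBABILITY = 0.1

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.seen = set()

    def receive_stem(self, tx):
        self.seen.add(tx)
        if random.random() < FLUFF_PROBABILITY or not self.peers:
            self.fluff(tx)                               # enter the spreading phase
        else:
            random.choice(self.peers).receive_stem(tx)   # keep relaying in a line
        # A robust implementation also starts an embargo timer here and fluffs on
        # its own if the transaction never comes back via normal flooding.

    def fluff(self, tx):
        for peer in self.peers:
            if tx not in peer.seen:
                peer.seen.add(tx)
                peer.fluff(tx)

nodes = [Node(i) for i in range(10)]
for n in nodes:
    n.peers = [m for m in nodes if m is not n]
nodes[0].receive_stem("tx1")
print(sum("tx1" in n.seen for n in nodes))               # eventually everyone has it
```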

Q: Is the dandelion approach going to effect how fast your transactions might get into a block? If I’m not rushed…?

A: The dandelion approach can be tuned. Even with relatively modest delays, when used in an anonymous manner, it should only increase the time it takes for your transaction to get into a block by the time it takes for the transaction to reach the whole network. This is only on the order of 100s of milliseconds. It’s not a big delay. You can choose to not use it, if you want.

Q: ….

A: Right. So, the original dandelion paper is actually not robust. If a peer drops the transaction being forwarded and doesn’t forward it on further, then your transaction will disappear. That was my response to the author and they were working on a more robust version with a timeout. Every node along the stem that has seen the transaction using dandelion, starts a timer and the timer is timed to be longer than the propagation time of the network. If they don’t see the transaction go into the burst mode, then they go into burst mode on their own.

Q: ….

A: Yeah. I know, it’s not that surprising.

Q: What if you send it to a v0.15 node, wouldn’t that invalidate that approach completely?

A: No, with dandelion there will be a service bit assigned to it. You’ll know which peers are compatible. When you send a transaction with dandelion, it will traverse only through a graph of nodes that are capable of it. When this is ready, I am sure there will be another talk on this, maybe by the authors of the paper.

Q: It’s also really, it’s a great question, for applications.. trust.

A: Yes, it could perhaps be used by botnets, but so could bitcoin transactions anyway.

Work has started on something called "peer interrogation", to more rapidly kick off peers that are on different consensus rules. If I have received an invalid block from one peer, I can go to all my other peers and ask: do you have this block? Anyone who says yes to a block you consider invalid can be disconnected, because obviously they are on different consensus rules than you. There are a number of techniques we've come up with that should make the network a little more robust against crazy altcoins running on the same freaking p2p port. It's mostly robust against this already, because we also have to tolerate attackers who aren't going to be nice. But it would be nice if these altcoins could distinguish themselves from outright malicious behavior. It's unfortunate when otherwise honest users run software that makes them inadvertently attack other nodes.
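
A minimal sketch of that idea, with an entirely hypothetical peer interface, is below: after seeing an invalid block, ask the other peers whether they have it and drop anyone who does.

```python
# Sketch of "peer interrogation": disconnect peers that have a block we consider invalid.

def interrogate_peers(peers, invalid_block_hash):
    for peer in list(peers):
        if peer.has_block(invalid_block_hash):
            peer.disconnect()
            peers.remove(peer)

class Peer:
    def __init__(self, name, known_blocks):
        self.name, self.known_blocks = name, set(known_blocks)
    def has_block(self, h):
        return h in self.known_blocks
    def disconnect(self):
        print(f"disconnecting {self.name}")

peers = [Peer("honest", {"A"}), Peer("other-chain", {"A", "BAD"})]
interrogate_peers(peers, "BAD")
print([p.name for p in peers])        # only consensus-compatible peers remain
```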

There's been ongoing work on improved block fetch robustness. Right now, assuming you're not using compact blocks high-bandwidth mode, where peers send blocks without you even asking for them, you will only ask for a block from a single peer at a time. So if I say "give me a block" and that peer falls asleep and doesn't send it, I will wait out a long multi-minute timeout before I try to get the block from someone else. So there's some work ongoing to have the software try to fetch a block from multiple peers at the same time, at the cost of occasionally wasting a bit of work.

Another thing that I expect to come relatively soon is the … proposal for the gcs-lite-client BIP, a bloom-filter-like "map" for blocks. We'll implement that in Bitcoin Core as well.

Further further….

All of the things that I have just talked about are things where I’ve seen code for it and I think they are going to happen. Going a little further out to where I haven’t seen code for it yet, well, segwit made it much easier to make enhancements to the script system to replace and update it. There are several people working on script system upgrades including full replacement script alternative systems that are radically more powerful. It’s really cool, but it’s also really hard to say when something like that could become adopted. There are also other improvements being proposed, like being able to do merkle tree lookups inside of bitcoin script, allowing you to build massively more scalable multisig like Pieter Wuille presented on a year ago for key tree signatures.

There’s some work done to use Proof-of-Work as an additional priority mechanism for connections (bip154). Right now in bitcoin there’s many different ways that a peer can be preferred, such as doing good work, being connected for a long time, having low latency, etc. Our security model for peer connections is that we have lots of different ways and we hope that an attacker can’t defeat all of them. We’d like to add proof-of-work as an additional way that we can compute connection slot priority.

There’s been some work done on using private information retrieval for privately fetching transactions on a pruned node. So say you have a wallet started on a pruned node, you don’t have the blockchain stored locally and you need to find out about your transactions. Well if you just connect to a normal server, you’re going to reveal your addresses to them and that’s not very private. There are cryptographic tools that allow you to query databases and look things up in them without revealing your private information. They are super inefficient. But bitcoin transactions are super small, so I think there’s a good match here and you might see in the future some wallets using PIR methods to look things up.

I've been doing some work on mempool reconciliation. Right now, transactions in the mempool just get there via the flooding mechanism. If you start up a node clean, it will only learn about new transactions and won't have anything in its mempool. This is not ideal, because it means that when a new block shows up on the network, you won't be able to exploit the speed of compact blocks (bip152) until the node has been running for a day or two and built up its mempool. We do save the mempool across restarts, but if you've been down for a while then that data is no longer useful. It's also useful to have the mempool pre-loaded quickly because you can use it for better… There are techniques to efficiently reconcile databases between different hosts, and there's a bitcointalk post I made about using some of these techniques for mempool reconciliation; I think we might do this in the not-too-distant future.

There has been some work on alternative serialization for transactions, which could be used optionally on disk or on a peer-by-peer basis. The compact serialization that Pieter Wuille has proposed gives a 25% reduction in the size of transactions. This particular feature is interesting to me outside of the context of Bitcoin Core, because Blockstream recently announced its satellite system and it could really use a 25% reduction in the amount of bandwidth required to send transactions.

Summary

Many important improvements in v0.15. I think the most important is the across-the-board 50% speedup, which gets sync time back to where it was in February 2017. Hopefully it means that the additional load created as people start using segwit will be less damaging to the system. There are many exciting things being developed, and I didn't even talk about the more speculative things, which I am sure people will have questions about. There are many new contributors joining the project and ramping up. There are several organizations now paying people to contribute; I don't know if they want attention, but I think it's great that people can contribute on a more consistent basis. Anyone is welcome to contribute, and we're particularly eager to meet people who can test the GUI and wallet, because those get the least love in the system today. So, thank you, and let's move on to some open questions.

Q&A

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=1h3m13s

First priority is for questions for the presentation, but after that, let’s open it up to whatever. Also wait for the mic.

Q: For the new UTXO storage format, how many bytes is it allocating to the index for the output for it?

A: It uses a compact int for it. In fact, this format is a bit more efficient than you might think – it's 15% larger on disk, but leveldb will compress entries itself.

Q: What if some of the participants of the bitcoin network decided to not activate another soft-fork like schnorr signatures?

A: I mentioned the signature aggregation stuff, which would require schnorr signatures; yes, that's a soft-fork and it would have to be activated on the network. We did have kind of a long slog to get segwit activated. ((Someone tosses Greg a bip148 UASF hat))… Obviously people are going to do … but at the end of the day, it's all the users in the system that decide what the rules are. Especially for a backwards-compatible rule like signature aggregation: miners are mining to make money, and certainly there are many factors involved, but mining operations are very expensive, and a miner that chooses not to follow the rules of the bitcoin network is a miner that will soon be bankrupt. Users have the power in bitcoin. I wasn't a big fan of bip148, although I think it has been successful; I think it was way too hastily carried out. I don't anticipate problems with activating signature aggregation in the future, but that's because I trust the users to get this activated and make use of it. Thanks.

Q: A few rapid fire questions I guess. For the new chainstate database, is the key just the transaction id and then the output index?

A: Yes that’s the key. It’s both.

Q: Is it desirable to ditch openssl because it’s heavy weight?

A: No, openssl was not well designed. It was previously used everywhere in bitcoin, and it was not designed or maintained in a way that is suitable for consensus code. They changed their signature validation behavior in a backwards-incompatible way. For all the basic crypto operations it's better for us to have a consensus-critical implementation that is consistent. As far as the RNG goes, I think nobody has read that code and still has a question about why we're removing it – it's time tested, yes, but it's obviously bad code and it needs to go.

Q: …

A: Yes. The comment was made that the only thing openssl has going for it is that it is time tested. Sure, that can't be discounted.

Q: You mentioned in fee estimation that miners might censor transactions. Have we seen that before?

A: Well, we have seen it with soft-forks. A newly introduced soft-fork for someone who hasn’t upgraded, will look like censorship of a transaction. There are some scary proposals out there, people have written entire papers on bribing miners to not insert transactions. There are plenty of good arguments out there for why we’re not going to see that in the future, if mining is decentralized. But we still need to have a system that is robust against it potentially happening.

Q: So the core of this point was that it created a buffer, like a moving register of a certain number of transactions over a certain period of time, so you kept track of the outputs…

A: The outputs are the public keys.

Q: So you keep track of those and this was the bulk of the…?

A: Of the performance improvement.

Q: 50%?

A: Yes. The blockchain is pretty big. There have been over a billion transaction outputs created. Small changes in the performance of finding and deleting outputs have big impacts on overall performance.

Q: Is it possible for a non-bitcoin blockchain application, …

A: Yes I think the same techniques for optimization here would be applicable to any system with general system properties. Our focus in Bitcoin Core is on keeping bitcoin running.

Q: You mentioned at the end that GUI and wallet are two areas where you need more help. Can you expand on it?

A: There’s some selection bias in the bitcoin project. Bitcoin developers are overwhelmingly systems programmers and we don’t even use GUIs generally. As a result of that, we also attract other systems programmers to the project. There are some people that work on the GUI, but not so many. We don’t have any big design people working on it, for example. There’s nobody doing user testing on the wallet. We don’t expect Bitcoin Core to be for grandma, it’s expected to be industrial machinery, but there are still a lot of user factors that there aren’t a lot of resources going into, and we would like that to be different.

Q: Yeah I can speak about … I heard … blockchain… lightning right now. How is that going?

A: So the question is Blockstream has some people working on lightning. I am not one of them but my understanding is that the big news is that the specifications are coming to completion and there’s been great interop testing going on. There are other people in this room that are far more expert on lightning than I am, including laolu who can answer all questions about lightning. Yes, he’s waving his hand. I’m pleased to see it moving along.

Q: Hi, you mentioned taking out openssl. But someone who is working on secure communication between peers, is that using openssl?

A: Right, no chance of that. Openssl has a lot going for it in that it’s widely deployed, but it has lots of attack surface. We don’t need openssl for secure communication.

sipa: If I can comment briefly on this, the most important reason to use openssl is if you need ssl. For something like this, we have the expertise to build a much more narrow protocol without those dependencies.

A: There are also some interesting things that are different for bitcoin: normally our connections are unauthenticated because we have anonymous peers… there are some interesting tricks for that. I can have multiple keys to authenticate to peers, so we can have trace-resistance there, and we can’t do that with openssl. Your use of authentication doesn’t make you super-identifiable to everyone you connect with.

Q: Question about.. wondering when, what do you think would need to be happen for 1.0 version? More like longer term? Is there some kind of special requirement for a 1.0 release? I know that’s far out.

A: It’s just numbers, in my opinion. The list of things that I’d like to see in Bitcoin Core and the protocol would be improved fungibility. It’s a miles-long list. I expect it to be miles-long for the rest of my life. But maybe this should have nothing to do with v1.0 or not. There are many projects that have changed version numbers purely for marketing reasons, like Firefox and Chrome. There are versions of bitcoin altcoins that use version 1.14 to try to say they are 1 version greater… I roll my eyes at that, it sometimes contributes to user confusion, so I wouldn’t be surprised if the number gets ludicrously increased to 1000 or something. In terms of the system being complete, I think we have a long way to go. There are many credible things to do.

Q: There was recently a fork of the bitcoin network that was polite enough to include replay protection. But people are working on another fork that might not include replay protection. So is there a way to defend against this kind of malicious behavior?

A: Well I think that the most important thing is to encourage people to be sensible. But if they won’t be sensible and add replay protection, then there are many features that give some amount of automatic replay protection. Ethereum is replay-vulnerable because it uses an accounts model. In bitcoin, every time you spend you reference your coins, so there is automatic replay protection. There’s no problem with replay between testnet and bitcoin or litecoin and bitcoin; it’s the inherent replay protection built in. Unfortunately, this doesn’t work if you literally copy the history of the system. The best way to deal with replay is probably to take advantage of that inherent prevention by making your transaction spend outputs that don’t exist on the other chain. People have circulated patches in the past that allow miners to create 0-value outputs that anyone can spend, and wallets could then pick up those outputs and spend them. If you spend an output created by a miner that only exists on the chain that you want to spend on, then you’re naturally replay-protected. If that split happens without replay protection, there will be tools to help, but it will still be a mess. You see continued snafus in ethereum and ethereum classic.

Q: There’s an altcoin that copied the bitcoin state that you can use identical equipment on… and it will be more profitable to mine, and mining will bounce back and forth between the chains.

A: Well, the system works as designed in my opinion. Bcash uses the same PoW function. They changed their difficulty adjustment algorithm. They go into hyperinflation when they lose hashrate. And then those blocks coming in once every few minutes makes it more profitable to mine. Miners are trying to make money. They have to factor in the risk that the bcash coins will become worthless before they can sell them. Miners move them over, bitcoin remains stable, transaction fees go up, which has happened now twice because bitcoin becomes more profitable to mine just from the transaction fees. At the moment, bcash is in a low difficulty period but bitcoin is more profitable to mine because of transaction fees. Bitcoin’s update algorithm is not trying to chase out all possible altcoins. At the moment it works fine, it’s just a little slow.

Q: … further… lower.. increase.

A: Well, … one large miner after segwit activated seemed to lower their blocksize limit, probably related to other unrelated screwups. The system seems to be self-calibrating and it is working as expected; fees are high at the moment, but that’s all the more reason for people to adopt segwit and take advantage of lower fees. I’m of the opinion that the oscillation is damaging to the bcash side because it’s pushing them into hyperinflation. Because their block size is way larger, and they are producing 10 kb blocks, I don’t know what will pay people to mine it as they burn through their inflation schedule. And it should have an accelerating inflation rate because of how that algorithm works. Some of the bcash miners are trying to stop the difficulty adjustments because they realize it’s bad. I’m actually disappointed that bcash is having these problems; it’s attractive for people to have some alternative. If your idea of the world is that bitcoin should have no block size limit, that’s not a world that I can support based on the science we have today; if people want it, they can have it, and I’m happy to say here, go use that thing. But that thing is going off the rails maybe a little faster than I thought, and we’ll see.

Q: If I give you a chip that is a million times faster than the nvidia chip, how much saving would you have the mining?

A: A million times faster than an nvidia chip… bitcoin mining today is done with highly specialized ASICs. How long until… So, bitcoin will soon have done 2^88 cumulative sha256 operations. Oh yes, so we’re over 2^88 sha256 operations done for mining. This is a mind-bogglingly large number. We crossed 2^80 the other year. If you had started a Core 2 Duo at the beginning of time, it would have just finished 2^80 work at the moment we crossed 2^80. We have now done 256x more work than that. So that’s why something like a million-times-faster nvidia chip still wouldn’t hold a candle to the computing power from all these sha256 ASICs. I would recommend against 160-bit hashes for any cases where collisions are a concern. We use them for addresses, but addresses aren’t a collision concern, although segwit does change it for that reason.
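
As a rough sanity check on that comparison, here is a back-of-the-envelope sketch (not something from the talk): the GPU hash rate is an invented placeholder, and only the 2^88 cumulative-work figure comes from the answer above. Even so, the hypothetical chip would need on the order of ten thousand years just to redo the work already done.

```python
# Back-of-the-envelope check. The GPU rate is an invented placeholder; only the
# 2^88 cumulative-work figure comes from the talk.
gpu_rate = 1e9                       # assumed SHA256 ops/second for a GPU (placeholder)
magic_chip_rate = gpu_rate * 1e6     # "a million times faster"
cumulative_work = 2 ** 88            # total SHA256 operations done by the network so far

seconds = cumulative_work / magic_chip_rate
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years just to redo the existing work")   # roughly 10,000 years
```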

Q: There are a few different proposals trying to solve the same thing, like weak blocks, thin blocks, invertible bloom filters. Is there anything in that realm on the horizon? What do you think is the most probable development there?

A: There’s a class of proposals called pre-consensus, like weak blocks (or here), where participants come to agreement in advance on what they will put into blocks before it is found. I think those techniques are neat, I’ve done some work on them, I think other people will work on them. There are many design choices, we could run multiples of these in parallel. We have made great progress on efficient block transmission with FIBRE and compact blocks. We went 5000 blocks ish without an orphan a couple weeks ago, so between FIBRE and v0.14 speedups, we’ve seen the orphan rate drop, it’s not as much of a concern. We might see it pop back up as segwit gets utilized, we’ll have to see.

Q: Is rolling utxo hash a segwit precondition?

A: I think most of these schemes wouldn’t make use of a rolling utxo set; there’s some potential interaction with fast-syncing nodes on it. With a lot of these preconsensus techniques you can have no consensus enforcement of the rules and the miners just go along with it. They are interesting techniques. There’s a paper from 2 years ago called bitcoin-ng, which itself can be seen as a pre-consensus technique, that talks about some interesting things in this space.

Q: Going 5000 blocks…

A: Yeah, so… Bram is saying that if we go 5000 blocks without an orphan, then this implies that the blocks are going out on the order of 200 ms, and that’s right. Well, actually it’s more like 100 ms. In particular BlueMatt’s technique transmits pretty reliably near the speed of light. Lots of ISPs… yeah so, part of the reason why we are able to do this, and this is where BlueMatt’s stuff is really helpful, is that he set up a network of nodes that have been carefully placed to connect with each other and be low latency. This is very difficult and it has taken some work, but it helps out the network.

Q: Hi. What do you think about proof-of-stake as a consensus mechanism?

A: There are a couple of things. There have been many proof-of-stake proposals that are outright broken. And people haven’t done a good job of distinguishing the proposals. As an example, proof-of-stake in NXT is completely broken and you can just stake-grind it. The reason why people haven’t attacked it is because it hasn’t been worth it to do so… There are other proposals that achieve very different security properties than what bitcoin achieves. Bitcoin achieves this anonymous, anyone-can-join, very distributed mechanism of security, and the various proof-of-stake proposals out there have different tradeoffs. The existing users can exclude new users by not letting them stake their coins. I think the people working on proof-of-stake have done a poor job of stating their security assumptions and showing exactly what goals they achieve. There’s been also some work in this space where they make unrealistic assumptions. There was a recent PoS paper that claimed to solve problems, but the starting assumption was that the users have a synchronous network, which is where all users will receive all the messages in order all the time. If you had a synchronous network then you don’t need a blockchain at all. This was one of the starting assumptions in a recent paper on PoS. So I think it’s interesting that people are exploring this stuff, but I think a lot of this work is not likely to be productive. I hope that the state of the art will improve… it’s tiresome to read papers that claim to solve these problems but then you get to page 14 and it says oh, it requires a synchronous network; into the trash. This is a new area, and science takes time to evolve.

Q: Beyond .. bad netcode… what are the highlights for your current threat model for protecting the future of bitcoin?

A: I think that the most important highlight is education. Ultimately bitcoin is a social and economic system. We need to understand it and have a common set of beliefs. Even as constructed today. There are some long term problems related to mining centralization. I am of the belief that they are driven by short-term single shot events where people entered and overbuilt hardware. Perhaps it’s a short-term effect. But if it’s not, and a few years later we look at this and we see nothing improving, then maybe it’s time to get another answer. The bitcoin space is so surprising. Last time I spoke here, the bitcoin market price was $580. We can’t allow our excitement about adoption to make bad moves related to technology. I want bitcoin to expand to the world as fast as it can, but no faster.

Q: Hey uh… so in November whenever s2x gets implemented, if it gets more work than bitcoin… I mean it sounds like you consider it an altcoin, it’s like UASF territory. At what point is bitcoin bitcoin, and what would y’all do with the sha256 algorithm if s2x gets more work on it?

A: I think that the major contributors on Bitcoin Core are pretty consistent and clear on their views on s2x (again): we’re not interested and not going along with it. I think it will be unlikely to get more work on it. Miners are going to follow the money. Hypothetically? Well, I’ve never been of the opinion that more work matters. It’s always secondary to following the rules. Ethereum might have had more joules pumped into its mining than bitcoin; I haven’t done the math, but that’s at least possible. I wouldn’t say ethereum is now bitcoin though… just because of the joules. Every version of bitcoin all the way back has had nodes enforcing the rules. It’s essential to bitcoin. Do I think bitcoin can hard-fork? Yeah, but all the users have to agree, and maybe that’s hard to achieve because we can do things without hard-forks. And I think that’s fine. If we can change bitcoin with a controversial change, then I think that’s bad because you could make other controversial changes. Bitcoin is a digital asset that is not going to change out from under you. As for future proof-of-work functions, that’s unclear. If s2x gets more hashrate, then I think that would be because users as a whole were adopting it, and I think if that was the case then perhaps the Bitcoin developers would go do something else instead of Bitcoin development. It might make sense to use a different proof-of-work function. Changing a PoW function is a nuclear option and you don’t do it unless you have no other choice. But if you have no other choice, yeah, do it.

Q: So sha256 is not a defining characteristic of bitcoin?

A: Right. We might even have to change sha256 in the future for security reasons anyway.

Q: Rusty Russell wrote a series of articles about if a hard-fork is necessary, what does it mean, how much time is required, I’m not say I’m proposing a hard-fork, but what kinds of changes might be useful to make it easier for us to make good decisions at that point?

A: I don’t think the system can decide on what it is. It’s inherently external. I am in favor of a system where we make changes with soft-forks, and we use hard-forks only to clean up technical debt so that it’s easier to get social consensus on those. I think Rusty’s posts are interesting, but there’s this property that I’ve observed in bitcoin where there’s an inverse relationship between how close you are to the code and how… no, a proportional relationship: your distance from the code and how viable you think these proposals are. Rusty is very technical, but he’s not a regular contributor to Bitcoin Core. He’s a lightning developer. This is a pattern we see in other places as well.

I think that was the last question. Alright, thanks.


https://www.reddit.com/r/Bitcoin/comments/6xj7io/greg_maxwell_a_deep_dive_into_bitcoin_core_015/

https://news.ycombinator.com/item?id=15155812

https://www.youtube.com/watch?v=nSRoEeqYtJA

slides: https://people.xiph.org/~greg/gmaxwell-sf-015-2017.pdf

https://twitter.com/kanzure/status/903872678683082752

git repo: https://github.com/bitcoin/bitcoin

preliminary v0.15 release notes (not finished yet): http://dg0.dtrt.org/en/2017/09/01/release-0.15.0/

Introduction

Alright let’s get started. There are a lot of new faces in the room. I feel like there’s an increase in excitement around bitcoin right now. That’s great. For those of you who are new to this space, I am going to be talking about some technical details here, so you’ll get to drink from the firehose. And if you don’t understand something, feel free to ask me questions, or other people in the room after the event, because there are a lot of people here who know a lot about this stuff.

So tonight I’d like to start off by talking about the new cool stuff in Bitcoin Core v0.15, which is just about to be released. And then I am going to open it up for open-format question and answer and maybe we’ll have some interesting discussion.

First thing I want to talk about for v0.15 is a by-the-numbers breakdown: what kind of activities go into bitcoin development these days? I am going to talk about the major themes and major improvements, things about performance, wallet features, and talk a little bit about this service bit disconnection thing, which is a really minor and obscure thing but it created some press and I think there’s a lot of misunderstanding about it. And then I am going to talk about some of the future work that is not in v0.15 that will be in future versions that I think is interesting.

Bitcoin Core releases

Quick refresher on how Bitcoin Core releases work. There’s this master trunk branch of bitcoin development. Releases are branched off of it, and new fixes that go into those releases are written into the master branch and then copied into the release branches. This is a pretty common process for software development. What this means is that a lot of the features that are in v0.15 are also in v0.14.2 and the v0.14 point releases, because the v0.15 cycle started with the release of v0.14.0, not with the release of v0.14.2. So back in February this year, v0.14 branched off, and in March v0.14.0 was released. In August, v0.15 branched off, and we had two release candidates, and there’s a third that should be up tonight or tomorrow morning. We’re expecting the full release of v0.15 to be around the 14th or 15th, and we expect it to be delayed a bit due to developers flying around and not having access to their cryptographic keys. Our original target date was September 1st, so a two-week slip there, unfortunate but not a big deal.

Some numbers about v0.15 contributions

Just some raw numbers. What kind of levels of activities are going into bitcoin development these days? Well, in the last 185 days, which is the development of v0.15, there were 627 pull requests merged on github. So these are individual high-level changes. Those requests contained 1,081 non-merge commits by 95 authors, which comes out to 6 commits per day which is pretty interesting compared to other cryptocurrency projects. One of the interesting things about v0.15 is that 20% of the commits are test-related, so they were adding or updating tests. And a lot of those actually came from jnewbery, a relatively new contributor who works at Chaincode Labs. His first commit was back in November 2016 and he was responsible for half of all the test-related commits and he was also the top committer. And then it falls down from there, lots of other people, a lot of contributions, and then a broad swath of everyone else. The total number of lines changed was rather large, 52k lines inserted. This was actually mostly introduction of new tests. That’s pretty good, we have a large backlog of things that are under-tested, for our standards… there’s some humor there because Bitcoin Core is already relatively well-tested compared to most software. All of this activity results in about 3k lines changed per week to review. For someone like me, who does most of his contributions in the form of review, there’s a lot of activity to keep up with.

Main areas of focus in v0.15: performance

So in v0.15, we had a couple of big areas of focus. As always, but especially in v0.15, we had a big push on performance. One of the reasons for this is that the bitcoin blockchain is growing, and just keeping up with that growth requires the software to get faster. But also, with segwit coming online, we knew that the blockchain would be growing at an even faster rate, so there was a desire to try to squeeze out all the performance we could, to make up for that increase in capacity.

There were some areas that were polished and problem areas that were made a little more reliable. And there were some areas that we didn’t work on– such as new consensus rules, we’ve been waiting to see how segwit shakes out, so there hasn’t been big focus on new consensus rules. There were a bunch of wallet features that we realized that were easier with segwit, so we held off on implementing those. There are some things that were held back, waiting for segwit, and now they can move forward at a pretty good speed.

Chainstate (UTXO) database performance

One of the important performance improvements is that we have completely reworked the chainstate database in Bitcoin Core. The chainstate database is what stores the information that is required to validate new blocks as they come in. This is also called the UTXO set. The current structure we’ve been using has been around since v0.8.0, and when it was introduced the disk structure, which has a separate database of just the information required to validate blocks, was something like a 40x performance improvement. That was an enormous performance improvement from what we had. So this new update is improving this further.

The previous structure we often talk about as a per-output database. It’s a UTXO database, and people think of it as tracking every output from every transaction as a separate coin. But that’s not actually how it was implemented on the backend. The backend system previously batched up all outputs for a single transaction into a single record in the database and stored those together. Back at the time of v0.8.0, that was a much better way of doing it. The usage patterns and the load on the bitcoin network have changed since then, though. The batching saved space because it shared some of the information, like the height of a transaction, whether or not it’s a coinbase transaction, and the txid, among all the outputs. One of several problems with this is that when you spend outputs, you have to read them all and write the rest back, and this creates a quasi-quadratic blow-up where a lot more work is required to modify transactions with many outputs. It also made the software a lot more complicated. A pure utxo database would only have to create outputs and delete them when you spend, but the merged form we had in the past supported modifications, such as spending some of the outputs but saving the rest back. This whole process resulted in considerable memory overhead.

We have changed to a new structure where we store one record per txout. This results in a 40% faster sync, so it’s a big performance improvement. This is 10% less memory for the same number of cache entries, so you can have larger caches but same amount of memory on your system, or you can run Bitcoin Core on a machine with less memory than you could have before. This 40% was given on a host with very fast disks, but if you’re going to run the software on a computer with slow media such as spinning disks or USB or something like that, then I would expect to see faster results even beyond 40% but I haven’t benchmarked.

There’s a downside, though, which is that it makes the database larger on disk. 15% chainstate size increase (2.8 GB now). So there’s a 15% increase in the chainstate directory. But this isn’t really a big deal.

Here’s a visualization of the trade-off. In the old scheme you had this add, modify, modify, delete operation sequence. So you had three outputs: one gets spent, another gets spent, and then you delete the rest. In the new way, each output is simply added once and deleted once, which is what people thought it was doing all along. This same structure has been copied into other alternative implementations, so btcd, bcoin, these other implementations copied the structure from Bitcoin Core, so they all work like the original v0.8.0 method.
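
Here is a toy sketch of the two bookkeeping styles just described; it is an illustration of the idea, not actual Bitcoin Core code.

```python
# Old style: one record per transaction holding all of its unspent outputs.
# Spending any single output means read-modify-write of the whole record.
per_tx_db = {"txid_a": {0: "coin0", 1: "coin1", 2: "coin2"}}

def spend_old(txid, vout):
    record = dict(per_tx_db[txid])    # read the whole record
    del record[vout]                  # drop just one output
    if record:
        per_tx_db[txid] = record      # write the remainder back (the "modify" step)
    else:
        del per_tx_db[txid]           # only a pure delete once everything is spent

# New style: one record per output, so creating an output is a plain insert and
# spending it is a plain delete. No read-modify-write, no partial records.
per_txout_db = {("txid_a", 0): "coin0", ("txid_a", 1): "coin1", ("txid_a", 2): "coin2"}

def spend_new(txid, vout):
    del per_txout_db[(txid, vout)]

spend_old("txid_a", 1)
spend_new("txid_a", 1)
print(per_tx_db, per_txout_db)
```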

This is a database format change, so there’s a migration process: when you start v0.15 against an older chainstate it will migrate it to the new format. On a fast system this takes 2-3 minutes, but on a raspberry pi this is going to take a few hours. Once you run v0.15 on a node, in order to go back to an older version of bitcoind you will have to do a chainstate reindex, and that will take you hours. There’s a corner case where if you ran Bitcoin Core master in the couple-week period from when we introduced this until we improved it a bit, your chainstate directory may have become 5.8 GB in size, so double the size, and that’s because leveldb, the underlying backend database, was not particularly aggressive about removing old records. There’s a hidden forcecompactdb option that you can run as a one-time operation that will shrink the database back down to its proper size. So if you see that you have a 5.8 GB chainstate directory, run that command and get it back to what it should be.

The graph at the bottom here is showing the improvements from this visually, and it’s a little strange to read. The x-axis is progress in terms of what percent of the blockchain was synced. On the first graph, the y-axis is showing how much time has passed. This purple line at the top is the older code without the improvement, and if you look early on in the history, the difference isn’t as large. The lower graph here is showing the amount of data stored in the cache at a point in time. So you see the data increasing, then you see it flush the cache to disk and it goes to nothing, and so on. If you see the purple line there, it’s flushing very often as it gets further along in synchronization, while the new code flushes less frequently over here so that it doesn’t have to do as much I/O.

Testing the upgraded chainstate database

Since this change is really a change at the heart of the consensus algorithm in bitcoin, we had some very difficult challenges related to testing it. We wouldn’t want to deploy a change that would cause it to corrupt or lose records, because that could cause a network split or network partition. So this is a major consensus-critical part of the system. This redo actually had a couple of months of history before it was made public, where Pieter Wuille was working on it in private, trying a couple of different designs. Some of his work was driven by his work on the next feature that I am going to talk about in a minute. But once it was made public, it had 2 months of public review; there were at least 145 comments on it, people going through and reviewing the software line by line, asking questions, getting comments improved and things like that. This is an area of the software where we already had pretty good automated testing, and we also added some new automated tests. We also used a technique called mutation testing, where basically we took the updated copy of bitcoin and its new tests, and we went line by line through the software, and everywhere we saw a location where a bug could be introduced, such as an if-branch that could be inverted, a zero that could be a one, or an add that could be a subtract, we made each bug in turn, ran the tests, and verified that every change to the software would make the tests fail. This process didn’t turn up any bugs in the new database, hooray, but it did turn up a pre-existing non-exploitable crash bug and some shortcomings in the tests, where some tests thought they were testing one aspect but it turns out they were testing nothing. And that’s a thing that happens, because often people don’t test the tests. You can think of mutation testing this way: normally the tests are the test of the software, but what’s the test of the tests? Well, the software is the test of the tests; you just have to break the software to see the results.
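
As a rough illustration of the mutation-testing idea described above (not the harness that was actually used), one can mechanically introduce single-token bugs into a source file and check that the test suite catches every one of them; the file name and test command below are hypothetical placeholders.

```python
import pathlib
import subprocess

# Hypothetical targets; the real exercise was done against Bitcoin Core's C++ sources.
SOURCE = pathlib.Path("coins.cpp")
TEST_COMMAND = ["make", "check"]
MUTATIONS = [("==", "!="), ("+", "-"), ("< ", ">= ")]

original = SOURCE.read_text()
survivors = []
for old, new in MUTATIONS:
    pos = original.find(old)
    if pos == -1:
        continue                                   # this operator doesn't occur in the file
    mutant = original[:pos] + new + original[pos + len(old):]   # introduce one deliberate bug
    SOURCE.write_text(mutant)
    tests_passed = subprocess.run(TEST_COMMAND).returncode == 0
    if tests_passed:
        survivors.append((old, new, pos))          # the tests failed to notice this mutation
    SOURCE.write_text(original)                    # restore before trying the next mutation

print("undetected mutations:", survivors)          # ideally this list is empty
```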

Non-atomic flushing

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=13m40s

A feature related to the new chainstate database which we also did shortly after is non-atomic flushing. This database cache in bitcoin software is really more of a buffer than a cache. The thing it does that improves performance isn’t that it saves you from having to read things off disk– reading things off-disk is relatively fast. But what it does is that it prevents things from being written to disk such as when someone makes a transaction and then two blocks later they spend the output of that transaction, with a buffer you can avoid ever taking that data and writing it to the database. So that’s where almost all of the performance improvement from ‘caching’ really comes from in bitcoin. It’s this buffer operation. So that’s great. And it works pretty well.

One of the problems with it is that in order to make the system robust to crashes, the state on the disk always has to be consistent with a particular block, so that if you were to yank the power out of your machine and your cache is gone, since it was just in memory, you want the node to be able to come up and continue validation from a particular block instead of having partial data on disk. It doesn’t have to be the latest block, it just has to be some exact block. The way this was built in Bitcoin Core was that whenever the cache would fill up, we would force-flush all the dirty entries to disk all at once, and then we could be sure that it was consistent with that point in time. In order to do that, we would use a database transaction, and the database transaction required creating another full copy of the data that was going to be flushed. So this extra operation meant that basically we lost half of the memory that we could tolerate in our cache to this really short, 100 ms long memory peak. In older versions like v0.14, if you configure a database cache of say a gigabyte, we really use 500 MB for cache and the other 500 MB is left unused to handle this little peak that occurs during flushing. We’re also doing some work with smarter cache management strategies, where we incrementally flush things, but they interact poorly with the whole “flush at once” concept.

But we realized that the blockchain itself is actually a write-ahead log. It’s the exact thing you need for a database to not have to worry about what order your writes were given to the disk. With this in mind, our UTXO database doesn’t actually need to be consistent. What we can do instead is store in the database the range over which writes could have been in flight, such as the earliest blockheight for which there were writes in flight and the latest, and then on startup we simply go through the blockchain on disk and re-apply all of the changes. Now, this was much harder when the database could have entries that were modified, but after changing to the per-txout model, every entry is either inserted or deleted. Inserting a record twice just does nothing, and deleting a record twice also does nothing. So the software to do this is quite simple: it starts up, sees where it could have had writes in flight, and then replays them all.
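
A minimal sketch of why that replay is safe once every entry is only ever inserted or deleted; the block contents and the "writes possibly in flight" range below are invented for illustration.

```python
# Toy model: on startup, re-apply every block in the recorded "writes possibly in
# flight" range. Because per-txout inserts and deletes are idempotent, replaying
# changes that already reached disk before the crash is harmless.
utxo = {("coinbase", 0): 50}          # whatever happened to make it to disk

def apply_block(db, created, spent):
    for outpoint, coin in created:
        db[outpoint] = coin            # inserting twice just rewrites the same value
    for outpoint in spent:
        db.pop(outpoint, None)         # deleting twice is a no-op

# Pretend blocks 100..101 may have been partially flushed; replay them all in order.
blocks_in_flight = {
    100: ([(("tx_a", 0), 10)], [("coinbase", 0)]),
    101: ([(("tx_b", 0), 9)], [("tx_a", 0)]),
}
for height in sorted(blocks_in_flight):
    created, spent = blocks_in_flight[height]
    apply_block(utxo, created, spent)

print(utxo)   # same consistent state no matter which writes had already landed
```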

So this really simplifies a lot of things. It also gives us more flexibility in how we manage that database in the future. It would be much easier to switch to a custom data structure or do other fancy things with it. It also allows us to do fancier cache management and lower-latency flushing in the future; for example, we could incrementally flush at every block to avoid the latency spikes from doing a full flush all at once. That’s something we’ve experimented with, we just haven’t proposed it yet, but it’s now easy to do with this change. It also means that we’ve benchmarked all the trendy database things that you might use in place of leveldb in this system, and so far we’ve not found anything that performs significantly better, but if in the future we found something that does, we wouldn’t have to worry about it supporting transactions anymore.

Platform acceleration

Also in v0.15, we did a bit of platform-specific acceleration work. And this is stuff that we knew we could do for a long time, but it didn’t rise to the top of the performance priority list. So, we now use SSE4 assembly implementation of sha256. And that results in a 5% speedup of initial block download, and a similar speedup for connecting a new block at the tip, maybe more like 10% there.

In v0.15, it’s not enabled in the build by default because we introduced this just a couple of days before the feature freeze and then spent 3 days trying to figure out why it was crashing on macosx, and it turns out that the macosx linker was optimizing out the entire chunk of assembly code, because of an error we made in the capitalization of the label. So… this took like 3 days to fix, and we didn’t feel like merging it in and then having v0.15 release delayed by potentially us finding out later that it doesn’t work on openbsd or something obscure or some old osx. But it will be enabled in the next releases.

We also implemented support for the sha-native instruction support. This is a new instruction set to do sha hardware acceleration that Intel has announced. They announced it but they really haven’t put it into many of their products, it’s supported in like one Atom CPU. The new AMD Ryzen stuff contains this instruction, though, and the use of it gives another 10% speedup in initial block download for those AMD systems.

We also made the backend database, which uses CRC32 to detect errors in the database files, use the SSE4 instruction for that too.

Script validation caching

Another one of the big performance improvements in v0.15 is script validation caching. Since v0.7, bitcoin has had a cache that basically memorizes every public key, message, signature tuple and will allow you to validate those much faster if they are in the cache. This change was actually the last change to the bitcoin software by Satoshi: he had written the patch and sent it to Gavin and it just sat languishing in his mailbox for about a year. There are some DoS attacks that can be solved by having a cache. The DoS attack is basically where you have a transaction with 10,000 valid signatures in it, and then the 10,001st signature in the transaction is invalid, and you will spend all this time validating each of the signatures, to get down to the last one and find out that it’s invalid, and then the peer disconnects and sends you the same transaction but with one different invalid signature at the bottom, and you do it again… So the signature cache was introduced to fix that, but it also makes validation at the tip much faster. But even with that in place, all it’s caching is the elliptic curve operations; it’s not caching the validation of the scripts as a whole. And this is a big deal because of signature hashing. For transactions to be signed, you have to hash the transaction to determine what it’s signing, and for large non-segwit transactions, that can take a long time.

One question that has arisen since 2012 is: why not use the fact that a transaction is in the mempool as a proxy for whether it’s valid? If it’s in your mempool, then it’s already valid, you already validated it, go ahead and accept it. Well, the problem with this is that the rules for a transaction going into the mempool are not the same as the rules for a transaction going into a block. They are supposed to be a subset, but it’s easy for software errors to turn them into a superset. There have been bugs in the mempool handling in the past that have resulted in invalid transactions appearing in the mempool. Because of how the rest of the software is structured, that error is completely harmless except for wasting a little memory. But if you were using the mempool for validation, then having an invalid transaction in the mempool would immediately become a consensus-splitting (network partition) bug. Using the mempool would massively increase the portion of the codebase that is consensus-critical, and nobody working on this project is really interested in using the mempool for that.

So what we’ve introduced in v0.15 is a separate cache for script validation. It maintains a cache where the key is the txid plus the validation flags for which rules are applicable to that transaction. Since all the validity rules other than sequence number and blocktime are a pure function of the hash of the transaction, that’s all it has to cache. For segwit that’s the wtxid, not just the txid. The presence of this cache creates a 50% speedup of accepting blocks at the tip, so when a new block comes into a node.
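
Here is a hedged sketch of that caching idea, keying on the transaction hash plus the active validation flags so a later hit can skip the full script checks; details such as salting the cache, bounding its size, and eviction are omitted, and the flag values are placeholders.

```python
# Toy script-validation cache: the key is (wtxid, validation_flags), because script
# validity (apart from locktime/sequence handling) is a pure function of those two.
script_cache = set()

def check_inputs(wtxid: bytes, flags: int, run_script_checks) -> bool:
    key = (wtxid, flags)
    if key in script_cache:
        return True                        # already fully validated under these rules
    if not run_script_checks():            # expensive: sighashing plus signature checks
        return False
    script_cache.add(key)                  # remember success for when the tx appears in a block
    return True

def must_not_run():
    raise AssertionError("script checks should have been served from the cache")

wtxid = b"\x11" * 32
assert check_inputs(wtxid, 0x1, lambda: True)   # mempool acceptance: scripts actually run
assert check_inputs(wtxid, 0x1, must_not_run)   # block connection: answered from the cache
```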

Other improvements

There’s a ton of other speedups in v0.15. A couple that I wanted to highlight are on this slide. DisconnectTip, which is the operation central to reorgs, unplugging a block from the chain and undoing it, was made much faster, something on the order of tens of times faster, especially if you are doing a reorg of many blocks, mostly by deferring mempool processing. Previously, you would disconnect the block, take all the transactions out, put them in your mempool, disconnect another block and put those transactions into the mempool, and so on; we changed this so that it does all of this in a batch instead. We disconnect the block, throw the transactions into a buffer, and leave the mempool alone until you’re done.
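
A small sketch of that batching change follows; the data shapes are invented, and the point is only that the mempool is updated once at the end rather than after every disconnected block.

```python
# Toy reorg: collect transactions from every disconnected block first, then hand
# them to the mempool in one batch, instead of reprocessing the mempool per block.
def disconnect_blocks(blocks_to_disconnect, mempool):
    returned_txs = []
    for block in reversed(blocks_to_disconnect):   # unplug from the tip downwards
        returned_txs.extend(block["txs"])          # just buffer the transactions
    mempool.update(returned_txs)                   # a single batched mempool update

mempool = {"tx_pending"}
disconnect_blocks([{"txs": ["tx_a", "tx_b"]}, {"txs": ["tx_c"]}], mempool)
print(mempool)   # {'tx_pending', 'tx_a', 'tx_b', 'tx_c'}
```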

We added a cache for compact block messages. So when you’re relaying compact block messages to different peers, we hash the constructed message rather than reconstructing it for each peer.

The key generation in the wallet was made on the order of 20x faster by not flushing the database between every key that it inserted.

And there were some minor little secp256k1 speedups on the order of 1%. But 1% is worth mentioning here because 1% speedups in the underlying crypto library are very hard to find. A 1% speedup is at least weeks of work.

Performance comparison

As a result, here is an initial block download (IBD) chart. The top bar is v0.14.2, and the bottom bar is v0.15 rc3. This is showing the number of seconds for an initial sync from the network. In this configuration, I’ve set the dbcache size to effectively infinite, like 60 gigs, so the dbcache never fills on either of the hosts I was testing on. The reason why I tested with a dbcache size of infinite is twofold: one is that if you have decent hardware, then that’s the configuration you want to run while syncing anyway, and my second reason was that with a normal-size dbcache, which is sized to fit on a host with 1 GB of RAM, the benchmarks were taking so long that they weren’t ready in time for my presentation tonight; they will finish sometime around tonight. The two segments in the bars are showing how long it took to sync to a point about 2 weeks ago, the outer point, and the inner part is how long it takes to sync to the point where v0.14 was released. What you can see in the lower bar is that it’s considerably faster, almost a 50% speedup there. It’s actually taking about the same amount of time now as v0.14 took to sync at the start of the year, so all of these speedups basically got us back to where we were at the beginning of the year in terms of total sync time, despite the underlying blockchain growth. And these numbers are in seconds, that’s probably not clear, but the shorter bar runs out to 3 hours, and that’s on a machine with 64 GB RAM and 24 cores.

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=25m50s

Multiwallet

Out of the realm of performance for a while… In v0.15, there’s a long-requested feature that people have been asking about since 2011: multiwallet support. Right now it’s a CLI and RPC-only feature. This allows the Bitcoin Core wallet to have multiple wallets loaded at once. It’s not exposed in the GUI yet; it will be in the next release. It’s easier to test in the CLI first and most of the developers are using the CLI. You configure which wallets to load with wallet configuration arguments, and you tell the CLI which wallet you want to talk to, or if you’re using RPC then you change the URL to have /wallet/ followed by the name of the wallet you want to use. This is very useful if you have a pruned node and you want to keep many wallets in sync and not have to rescan them, because you can’t rescan them on a pruned node.
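
As a hedged sketch of what that per-wallet RPC URL looks like in practice: the credentials, the port and the wallet name "savings" below are placeholders for whatever your node is configured with, and bitcoin-cli’s wallet-selection argument addresses the same endpoint.

```python
import base64
import json
import urllib.request

# Placeholders: adjust to your own rpcuser/rpcpassword, port and wallet name.
RPC_URL = "http://127.0.0.1:8332/wallet/savings"      # per-wallet RPC endpoint
AUTH = base64.b64encode(b"rpcuser:rpcpassword").decode()

def wallet_rpc(method, params=None):
    payload = json.dumps({"jsonrpc": "1.0", "id": "demo",
                          "method": method, "params": params or []}).encode()
    request = urllib.request.Request(RPC_URL, data=payload,
                                     headers={"Authorization": "Basic " + AUTH})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["result"]

# Ask the "savings" wallet, and only that wallet, for its balance.
print(wallet_rpc("getbalance"))
```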

For the moment, this should be considered somewhat experimental. The main delay with finally getting this into v0.15 was debate over the interface; we weren’t sure what interface we should be providing for it. In the release notes, we explicitly mention that the interface is not stable, and we didn’t want to delay the release over this. We’ve been working on this since v0.12, doing the backend restructuring, and v0.15 adds just the user-facing component.

Fee estimation

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=27m30s

v0.15 features greatly improved fee estimation software. It tracks the… it starts with the original model we had in v0.14, but it tracks estimates on multiple time horizons, so that it can be more responsive to fast changes. It supports two estimation modes, conservative and economical. The economical mode responds much faster. The conservative just says what’s the fee that basically guarantees the transaction will be confirmed based on history. And the economical is “eh whatever’s most likely”. For bip125 replaceable transactions, if you underpay on it, it’s fixable, but if your transaction is not replaceable then we default to using the conservative estimator.
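
For intuition, here is a toy estimator in the spirit of what is described above: bucket transactions by feerate, track how often each bucket confirms within a target, and decay old data so estimates respond to changing conditions. The bucket boundaries, decay factor and success threshold are invented, and the real estimator’s conservative versus economical modes roughly correspond to how demanding that threshold and the tracked horizons are.

```python
# Toy feerate estimator: exponentially decayed per-bucket confirmation statistics.
DECAY = 0.99
buckets = {feerate: {"seen": 0.0, "confirmed": 0.0}
           for feerate in (1, 5, 10, 25, 50, 100)}   # bucket lower bounds, sat/vbyte

def record(feerate, confirmed_within_target):
    for stats in buckets.values():                   # age all previous observations
        stats["seen"] *= DECAY
        stats["confirmed"] *= DECAY
    key = max(k for k in buckets if k <= feerate)
    buckets[key]["seen"] += 1
    if confirmed_within_target:
        buckets[key]["confirmed"] += 1

def estimate(success_threshold=0.95):
    """Lowest-feerate bucket whose observed confirmation rate clears the threshold."""
    for feerate in sorted(buckets):
        stats = buckets[feerate]
        if stats["seen"] > 0 and stats["confirmed"] / stats["seen"] >= success_threshold:
            return feerate
    return max(buckets)                              # nothing qualifies: fall back to the top

record(12, True); record(3, False); record(30, True)
print(estimate())
```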

We also made the fee estimator machinery output more data, which will help us improve it in the future, but it’s also being used by external estimators where other people have built fee estimation frameworks that use Bitcoin Core’s estimator as its input. And this new fee estimator can produce fee estimates for much longer ranges of time, it can estimate for up to 1008 blocks back, so a week back in time, so you can say that you don’t care if it takes a long time to confirm then it can still produce useful estimates for that. Also it has fixed some corner cases where estimates were just nuts before.

The fundamental behavior hasn’t changed. The fee estimator in bitcoin has this difficult challenge that it needs to safely operate unsupervised, meaning that someone could be attacking your node and the fee estimator should not give a result where you pay high fees just because someone is attacking you. And we need to assume that the user doesn’t know what high fees are… so this rules out some of the obvious things we might do to make the fee estimator better. For example, the current fee estimator does not look at the current queue of transactions in the mempool that are ready to go into a block. It doesn’t bid against the mempool, and the reason it doesn’t do this is that someone could send transactions that they know the miners are censoring, and they could pay very high fees on those transactions and cause you to outbid transactions that will never get confirmed.

There are many ways that this could be improved in the future, such as using the mempool information but only to lower the fees that you pay. But that isn’t done yet. Onwards and upwards.

Fee handling

Separate from fee estimation, there are a number of fixes in the wallet related to fee handling. One of the big ones that I think a lot of people care about is that in the GUI there is support for turning on replaceability for transactions. You can make a transaction, and if you didn’t add enough fee and you regret it then you can now hit a button to increase the fee on the bip125 replaceable transaction. This was in the prior release as an RPC-only feature, it’s now available in the GUI. With segwit coming into play, this feature will get much more powerful. There are really cool bumpfee things that you can’t do without segwit.

There were some corner cases where the automatic coin selection could result in fee overpayment. Basically, whenever you make a transaction, the bitcoin wallet has to solve a pretty complicated problem: it has to figure out which of the coins to spend and then how much fee to pay. It doesn’t know how much fee it needs to pay until it knows which coins it wants to spend, and it doesn’t know which coins it wants to spend until it knows the total amount of the transaction. So there’s an iterative algorithm that tries to solve this problem. In some wallets with lots of tiny inputs, there were cases where the wallet would, in the first pass, select a lot of tiny coins for spending, realize it didn’t have enough to cover the fees for all those inputs, and go select some more value. But the process isn’t monotone: it might then pick some 50 BTC input while keeping the high fee it arrived at in the earlier iteration, give up, and potentially overpay fees. If your transaction had a change output, then it would pay those extra fees to change, but in the case where there wasn’t a change output, it would overpay, and v0.15 fixes this case.

v0.15 also makes the wallet smarter about not producing change outputs when they wouldn’t really be useful. It doesn’t make sense to make a change output worth only 1 satoshi, because the cost of spending that change later is going to be greater than what it’s worth. The wallet has avoided creating very low-value change for a long time. It now has a better rational framework for this, and there’s a configuration option called discardfee where you can make it more or less aggressive. Basically it looks at the long-term fee estimates to figure out what kind of output value is going to be effectively worthless in the future, and it avoids creating smaller outputs than that.
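
A rough sketch of that kind of "is this change worth keeping?" test follows; the vbyte sizes, feerates and the exact rule are illustrative rather than Bitcoin Core’s precise formula.

```python
# Illustrative change-output decision: only create change if its value exceeds what
# it costs to create now plus what it would cost to spend at a long-term feerate.
CHANGE_OUTPUT_VBYTES = 31        # approximate size added to this transaction
CHANGE_INPUT_VBYTES = 68         # approximate size of spending it later

def keep_change(change_value_sat, current_feerate, long_term_feerate):
    cost_to_create = CHANGE_OUTPUT_VBYTES * current_feerate
    cost_to_spend_later = CHANGE_INPUT_VBYTES * long_term_feerate
    return change_value_sat > cost_to_create + cost_to_spend_later

print(keep_change(500, current_feerate=50, long_term_feerate=20))     # False: add it to the fee
print(keep_change(10_000, current_feerate=50, long_term_feerate=20))  # True: create the change output
```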

hdwallet rescans and auto top-up

In v0.13, Bitcoin Core introduced hdwallet support. This is being able to create a wallet once and deterministically generate all of its future keys, so that you have less need to back up the wallet, and if you fail to regularly back up your wallet you are less likely to lose your funds. The rescan support in prior versions was effectively broken… you could take a backed-up hdwallet, put it on a new node, and it wouldn’t notice when the keys that had been pre-generated had been used, and it wouldn’t top up the keypool… So this made recovery unreliable. This is fixed in v0.15, where it now does a correct rescan including auto top-up. We also increased the size of the default keypool to 1000 keys so that it looks ahead. Whenever it sees a spend in its wallet for a key, it will advance to a thousand keys past that point.
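
Here is a toy model of that look-ahead behavior; the derivation is faked with placeholder strings, and only the "keep a window of keys past the last one seen used" logic is the point.

```python
# Toy keypool top-up: keep a window of pre-derived keys, and whenever a key inside
# the window is seen used on-chain during rescan, extend the window past it so a
# restored backup keeps discovering later payments.
LOOKAHEAD = 1000

derived = []                                      # index -> key
def derive_up_to(count):
    while len(derived) < count:
        derived.append(f"key_{len(derived)}")     # placeholder for real HD derivation

derive_up_to(LOOKAHEAD)                           # initial keypool

def on_key_seen_used(index):
    """Called when rescan finds key number `index` in a transaction."""
    derive_up_to(index + 1 + LOOKAHEAD)           # always keep 1000 keys past the last used one

on_key_seen_used(700)
print(len(derived))   # 1701: the window advanced past the used key
```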

This isn’t quite completely finished in v0.15 because we don’t handle the case where the wallet is encrypted. If the wallet is encrypted, then you need to manually unlock the wallet and trigger a rescan, if you’re restoring from a backup– it doesn’t do that, yet, but in future versions we’ll make this work or do something else.

And finally, and this was an oft-requested feature, there is now an abortrescan RPC command, because a rescan takes hours and your node is useless for a while, so there’s now a command to stop the rescan.

RNG improvements

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=34m18s

More of the backend minutia of the system… we’ve been working on improving the random number generator, this is part of a long-term effort to completely eliminate any use of openssl in the bitcoin software. Right now all that remains of openssl use in the software is the random number generator and bitcoin-qt has bitcoin payment protocol support which uses https which uses QT which uses openssl.

Most of the random number use in the software package has been replaced with using chacha20 stream ciphers, including all the cases where we weren’t using a cryptographic random number generator, like unix rand-style random number generators that needed to be somewhat random but not secure, but those all use cryptographic RNG now because it was hardly any slower to do so.

One of the things that we’re concerned about in this design is that there have been, in recent years, issues with operating systems RNGs– both freebsd and netbsd have shipped versions where the kernel gave numbers that were not random. And also, on systems running inside of virtual machines, there are issues where you store a snapshot of the VM and then restore it and then the system regenerates the same “random” numbers that it generated before, which can be problematic.

That was the cause of a famous bug in 2013 that caused bitcoinj to… so…

We prefer a really conservative design. Right now what we’re doing in v0.15 is that any of the long-term keys are generated by using the OS random number generator, the openssl random number generator, and a state that we … through… And in the future we will replace the openssl component with a separate initializer.

Q: What are you using for entropy source?

A: The entropy on that is whatever openssl does, /dev/urandom, rdrand, and then state that is queried through repeated runs of this algorithm.

Disconnecting service bits feature

I want to talk about this disconnecting service bits feature. It’s the sort of minutia that I wouldn’t normally mention, but it’s interesting to talk about now because I think there’s a lot of misunderstanding. A little bit of background first. Bitcoin’s partition resistance is really driven by many different heuristics to try to make sure that a node is not isolated from the network. If a node is isolated from the network, then because of the blockchain proof-of-work you’re relatively safe, but you’re still denied service. Consequently, the way that bitcoin does this is that we’re very greedy about keeping connections that we previously found to be useful, such as ones that relayed us transactions and blocks and have high uptime. The idea is that if there’s an attacker that tries to saturate all the connections on the network, he’ll have a hard time doing it because his connections came later, and everyone else is preferring good working connections. The downside of this is that the network can’t handle sudden topology changes. You can’t go from everyone connected to everyone, to everyone disconnecting and connecting to different people all at once; you can’t do that, because you blow away all of that management state, and you can end up in situations where nodes are stuck and unable to connect to other nodes for long periods of time because they have lost all of their connections.

When we created segwit, which itself required a network topology change, the way we handled this is by front-loading the topology change to be more safe about it. So when you installed v0.13.1, your node made its new connections differently than prior nodes. It preferred to connect to other segwit-capable peers. And our rationale for doing that was that if something went wrong, if there weren’t enough segwit-capable peers, if you know this caused you to take a long time to get connected, then that’s fine because you just installed an upgrade and perhaps you weren’t expecting things to go perfectly. But it also means that it doesn’t happen to the whole network all at once, because people applied that upgrade over the course of months. So you didn’t go from being connected one second to some totally different topology immediately, it was a very gradual change and avoided thundering hoards of nodes attempting to connect to the few segwit-capable peers (bip144) at first. So that’s what we did for segwit, and it worked really well.

Recently… there was a spin-off of Bitcoin, the Bitcoin Cash altcoin, which for complex political insanity reasons used the bitcoin peer-to-peer port and the bitcoin p2p magic and was basically indistinguishable on the bitcoin network from a bitcoin node. And so, what that meant was that the bcash nodes would end up connected to bitcoin nodes, and bitcoin nodes likewise, sort of half-talking to each other and wasting each other’s connection slots. Don’t cross the streams. This didn’t really cause much damage for bitcoin because there weren’t that many bcash connections at the time, but it did cause significant damage to bcash and in fact it still does to this day because if you try to bring up a bcash node it will sync up to the point where they diverged, and it will often sit for 6-12 hours to connect to another bcash node and learn about its blocks. In bcash’s case, it was pretty likely to get disconnected because the bcash transactions are consensus-invalid under the bitcoin rules. So it would trigger banning, just not super fast.

There is another proposed spin-off called segwit2x, which is not related to the segwit project. S2X has the same problem but much worse. It’s even harder for it to get automatically banned, because it doesn’t have strong replay protection, and so it will take longer for nodes to realize that they are isolated. There’s a risk here that their new consensus rules could activate and you could have a node that was connected to only s2x peers. Your node isn’t going to accept their blockchain because it’s consensus-incompatible, but they’re not going to accept anything from you and likewise, so you could be isolated for potentially hours, and if you’re a miner then being in this situation for hours would create a big mess. So v0.15 will disconnect any peer which is setting service bits 6 and 8, which are the bits that s2x and bcash set. It will continue to do this until 2018-08-01. So whenever nodes are detected that are setting these bits, it just immediately disconnects them. This reduces these otherwise honest users inadvertently DoS-attacking each other.

The developer of S2X showed up and complained about premature partitioning, pointing out that it only really needs to do this at the moment their new rules activate. But as I have just explained, the network can’t really handle a sudden topology change all at once. And we really think that this concern about partitioning is basically unjustified, because S2X nodes will still happily connect to each other and to older versions. We know from prior upgrades that people are still happily running v0.12 and v0.10 nodes out on the network. It would take years for S2X nodes to find nothing to connect to… but their consensus rules will change 2 months after the v0.15 release. So the only risk to them is if the entire bitcoin network upgrades to v0.15 within 2 months, which is just not going to happen. And if it somehow does happen, then people will set up nodes for compatibility. Since there was some noise created about that, I wanted to talk about it.

OK, time for a drink. Just water. ((laughter))

Future work in-flight

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=42m25s

There’s lots of really cool things going on. And often in the bitcoin space people are looking for the bitcoin developers to set a roadmap for what’s going to happen. But the bitcoin project is an open collaboration, and it’s hard to do anything that’s like a roadmap.

I have a quote from Andrew Morton about the Linux kernel developers, which I’d like to read, where he says: “Instead of a roadmap, there are technical guidelines. Instead of a central resource allocation, there are persons and companies who all have a stake in the further development of the Linux kernel, quite independently from one another: People like Linus Torvalds and I don’t plan the kernel evolution. We don’t sit there and think up the roadmap for the next two years, then assign resources to the various new features. That’s because we don’t have any resources. The resources are all owned by the various corporations who use and contribute to Linux, as well as by the various independent contributors out there. It’s those people who own the resources who decide.”

It’s the same kind of thing that also applies to bitcoin. What’s going to happen in bitcoin development? The real answer to that is another question: what are you going to make happen in bitcoin development? Every person involved has a stake in making it better and contributing. Nobody can really tell you what’s going to happen for sure. But I can certainly talk about what I know people are working on and what I know other people are working on, which might seem like a great magic trick of prediction but it’s really not I promise.

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=43m45s

Upcoming: Full segwit support in wallet

So the obvious one is getting segwit fully supported in the wallet. In prior versions of bitcoin we had segwit support, but it was really intended for testing and it’s not exposed in the GUI. All the tests use it, it works fine, but it’s not a user-friendly feature. There’s a good reason why we didn’t go and put it in the GUI in advance, and that’s that people might have used segwit before its rules were enforced and could potentially have had their funds stolen. Also, just in terms of allocating our own resources, we thought it was more important to make the system more reliable and faster, and we didn’t know when segwit would activate.

In any case, we’re planning on doing a short release right after v0.15.0 with full support for segwit, including things like sending to bip173 (bech32) addresses, which Pieter Wuille presented here a few months back. Amusingly, this might indirectly help with the bcash spin-off, for example, where it uses the same address format as bitcoin and people have already lost considerable amounts of money by sending things to addresses on the wrong chain. So introducing a new address type into bitcoin will indirectly help this situation; unfortunately there’s really nothing that can reasonably prevent people from copying bitcoin’s address format.

Advanced fee bumping capability

There are other wallet improvements that people are working on. I mentioned this fee bumping capability earlier this evening. There’s an idea for fee bumping where the wallet keeps track of all the things you’re currently trying to pay, and whenever you make a new payment, it recomputes the transactions for the things you’re trying to pay, updating them and potentially increasing the fees at that time. It could also concurrently create timelocked alternative versions of the transactions that pay higher fees, so that you can pre-sign them and have them ready to go… We went to design this out a few months back, and we found that there were cases caused by transaction malleability that were hard to solve, but now with segwit activated, we should be able to support this kind of advanced fee bumping in the wallet, and that should be pretty cool.
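To make the idea a bit more concrete, here is a minimal Python sketch, not Bitcoin Core’s design: a wallet tracks its pending payments, rebuilds one combined transaction at the current feerate whenever a new payment is queued, and pre-builds a ladder of locktimed, higher-fee variants. The `build_tx` helper and all data shapes here are hypothetical placeholders.

```python
# Toy illustration of "re-aggregate pending payments and bump fees".
# Not Bitcoin Core code; transaction building is stubbed out.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PendingPayment:
    address: str
    amount_sat: int

@dataclass
class WalletState:
    pending: List[PendingPayment] = field(default_factory=list)

def build_tx(payments, feerate_sat_vb, locktime=0):
    """Hypothetical helper: describe an unsigned tx paying every pending
    payment at the given feerate, optionally locktimed."""
    return {
        "outputs": [(p.address, p.amount_sat) for p in payments],
        "feerate": feerate_sat_vb,
        "locktime": locktime,
    }

def refresh_payment_tx(wallet, current_feerate):
    """On each new payment, rebuild a single transaction covering everything
    still unconfirmed at today's feerate (i.e. an RBF-style bump)."""
    return build_tx(wallet.pending, current_feerate)

def presign_fee_ladder(wallet, base_feerate, current_height, steps=3):
    """Pre-build locktimed variants paying progressively higher fees; each
    only becomes valid some blocks in the future, so it can be broadcast
    later if the cheaper version is still stuck."""
    return [build_tx(wallet.pending,
                     base_feerate * (2 ** i),
                     locktime=current_height + 6 * i)
            for i in range(1, steps + 1)]

wallet = WalletState([PendingPayment("addr1", 50_000), PendingPayment("addr2", 120_000)])
print(refresh_payment_tx(wallet, current_feerate=10))
for tx in presign_fee_ladder(wallet, base_feerate=10, current_height=500_000):
    print(tx)
```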

More improvements

There is support being worked on for hardware wallets and easy off-line signing. You can do offline signing with Bitcoin Core today, but it’s the sort of thing that even I don’t love. It’s complicated. Andrew Chow recently posted a bip proposal to the mailing list for partially signed bitcoin transactions (see also hardware wallet standardization things); it’s a format that can carry all the data that an offline wallet, including a hardware wallet, needs for signing. Deploying this in Bitcoin Core is much easier for us to do safely and efficiently with segwit in place.

Another thing that people are working on is branch-and-bound coin selection to produce changeless transactions much of the time. This is Murch’s design, which Andrew Chow has been working on implementing. There’s a pull request that barely missed going into v0.15, but the end result of this will be less expensive transactions for users and a more efficient network, because change outputs get created much less often.
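To give a rough feel for the search, here is a minimal Python sketch of the branch-and-bound idea, not the actual Bitcoin Core implementation: a depth-first search over the UTXO pool for a subset that hits the payment target within a small tolerance window, so no change output is needed. Feerates and the cost-of-change and waste calculations are simplified away.

```python
def bnb_select(utxos, target, tolerance):
    """Depth-first branch and bound: look for a subset of utxo values summing
    to within [target, target + tolerance], so the transaction can be built
    without a change output. Returns the selected values or None."""
    utxos = sorted(utxos, reverse=True)
    best = None

    def search(i, selected, total, remaining):
        nonlocal best
        if total > target + tolerance:            # overshot: prune this branch
            return
        if total >= target:                       # close-enough match found
            if best is None or total < sum(best):
                best = list(selected)
            return
        if i == len(utxos) or total + remaining < target:
            return                                # target no longer reachable
        selected.append(utxos[i])                 # branch 1: include utxos[i]
        search(i + 1, selected, total + utxos[i], remaining - utxos[i])
        selected.pop()                            # branch 2: skip utxos[i]
        search(i + 1, selected, total, remaining - utxos[i])

    search(0, [], 0, sum(utxos))
    return best

# Example: find inputs for a 70,000 sat payment with 500 sat of slack.
print(bnb_select([100_000, 50_000, 20_000, 1_500, 600], 70_000, 500))
```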

Obviously I mentioned the multiwallet feature before… it’s going to get supported in the GUI.

There are several pull requests in flight for a full-block lite mode, where you run a bitcoin node that downloads full blocks, so you have full privacy, but doesn’t do full validation of past history, so you get an instant start.

There’s some work on hashed timelock contracts (HTLCs) so that you can do more interesting things with smart contracts, and on checksequenceverify (bip112) transactions, which can be used for various security measures like cosigners that expire after a certain amount of time. And there’s some interesting work going on for separating the Bitcoin Core GUI from the backend process. There are patches that do this; I don’t know if those patches will be the ones that end up getting merged, but that work is making progress right now.

Rolling UTXO set hashes

There are interesting things going on with network and consensus. One is a proposal for rolling UTXO set hashes (PR #10434). This is a design to compute a hash of the UTXO set that is very efficient to incrementally update every block, so that you don’t need to go through the entire UTXO set to compute a new hash. This can make it easier to validate that a node isn’t corrupted. But it also opens up new potential for syncing a new node– where you say you don’t want to validate history from before a year ago, you sync up to a state at that point, and then continue on from there. That’s a security trade-off, but we think we have ways of making that more realistic. We have some interesting design questions open– there are two competing approaches and they have different performance tradeoffs, like different performance in different cases.
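As a toy illustration of the incremental-update idea (loosely in the spirit of a multiplicative set hash, not the actual design in that PR): hash each unspent output to a number and keep the running product modulo a large prime, so adding an output is one multiplication and spending one is a multiplication by a modular inverse, with no rescan of the UTXO set. The prime and encoding below are arbitrary choices for the sketch.

```python
import hashlib

P = 2**127 - 1  # a convenient large prime for the toy (not the real scheme)

def element_hash(txid: str, vout: int, value_sat: int) -> int:
    data = f"{txid}:{vout}:{value_sat}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P or 1

class RollingUtxoHash:
    def __init__(self):
        self.acc = 1

    def add(self, txid, vout, value_sat):
        self.acc = (self.acc * element_hash(txid, vout, value_sat)) % P

    def remove(self, txid, vout, value_sat):
        # Dividing out an element = multiplying by its modular inverse.
        self.acc = (self.acc * pow(element_hash(txid, vout, value_sat), -1, P)) % P

h = RollingUtxoHash()
h.add("aa" * 32, 0, 5_000_000_000)
h.add("bb" * 32, 1, 1_200_000)
h.remove("aa" * 32, 0, 5_000_000_000)   # spend the first output
h2 = RollingUtxoHash()
h2.add("bb" * 32, 1, 1_200_000)
assert h.acc == h2.acc                   # order of operations doesn't matter
print(hex(h.acc))
```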

Signature aggregation

We’ve been doing work on signature aggregation. You can google for my bitcointalk post on signature aggregation. The idea is that basically if you have a transaction with many inputs, maybe it’s multisig or has many participants, it’s possible with some elliptic curve math to have just a single signature covering all of those inputs, even if they are keys held by related but separate parties. This can be a significant bandwidth improvement for the system. Pieter Wuille and Andrew Poelstra and myself presented a paper at Financial Crypto 2017 about what we’re proposing to implement here and the risks and tricky math problems that have to be solved. One of the reviewers of our paper found an even earlier paper than ours that we completely missed, which implemented something that was almost identical to what we were doing, and it had even better security proofs than what we had, so we switched to that approach. We have an implementation of the backend and we’re playing around now with performance optimizations, because they have some implications with regards to how the interface looks for bitcoin. One of the cool things that we also get out of this at the same time is what’s called batch validation, where you can take multiple independent transactions and validate their signatures more efficiently in batch than you could by validating them one at a time.

The byte savings from using signature aggregation– we ran a simulation over the prior bitcoin history and asked, if the prior bitcoin history had used signature aggregation, how much smaller would the blockchain be? Well, the amount of savings changes over time, because the aggregation gains change depending on how much multisig is in use and how many inputs transactions have. Early on in bitcoin, it would have saved hardly anything. But later on it stabilized at around 20%, and it would go up if more multisig or more coinjoins were in use, because this aggregation process incentivizes coinjoin by making the fees lower for the aggregate.

When this is combined with segwit, if everyone is using it, the savings are about 20%– a little less than the raw byte savings because of the segwit weight calculations. Now, what this scheme does is reduce the signatures to just one signature per transaction, but the amount of computation done to verify it is still proportional to the number of public keys going into it; however, we’re able to use the batching operation to combine them together and make it go much faster. This chart shows the different schemes we’ve been experimenting with. It gets faster as more keys are involved. At the 1,000 keys level– so if you’re making a transaction with a lot of inputs or a lot of multisig, or doing batch validation– we’re at the point where we can get a 5x speedup over validating single signatures one at a time. So it’s a pretty good speedup. This has been mostly work that Pieter Wuille, Andrew Poelstra and Jonas Nick have been focusing on, trying to get all the performance out of this that we can.
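To sketch why this works (a simplified illustration only; the scheme in the paper also has to defend against rogue-key attacks, which this naive form does not): a Schnorr-style signature is verified by one linear equation, and linear equations can be summed, which is what makes both aggregation and batch validation possible. Here $G$ is the curve generator, $P_i$ the public keys, $R_i$ the nonces, $m_i$ the messages, and $a_i$ random coefficients chosen by the verifier.

```latex
% Individual verification of signature (R_i, s_i) under key P_i on message m_i:
s_i G = R_i + H(R_i \,\|\, P_i \,\|\, m_i)\, P_i

% Batch check: pick random a_i and verify one combined equation instead of n:
\Big(\textstyle\sum_i a_i s_i\Big) G
  \;=\; \textstyle\sum_i a_i R_i \;+\; \textstyle\sum_i a_i\, H(R_i \,\|\, P_i \,\|\, m_i)\, P_i
```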

Network and consensus

Other things going on with the network and consensus… There’s bip150 and bip151 (see the bip151 talk) for encrypted and optionally authenticated p2p. I think the bips are half-way done. Jonas Schnelli will be talking about these in more detail next week. We’ve been waiting on network refactors before implementing this into Bitcoin Core. So this should be work that comes through pretty quickly.

There’s been work ongoing regarding private transaction announcement (the Dandelion paper and proposed dandelion BIP). Right now in the bitcoin network, there are people who connect to nodes all over the network and try to monitor where transactions are originating in an attempt to deanonymize people. There are countermeasures against this in the bitcoin protocol, but they are not especially strong. There is a recent paper proposing a technique called Dandelion which makes it much stronger. The authors have been working on an implementation and I’ve sort of been guiding them on that– either they will finish their implementation or I’ll reimplement it– and we’ll get this in relatively soon probably in the v0.16 timeframe. It requires a slight extension to the p2p protocol where you tell a peer that you would like it to relay a transaction but only to one peer. The idea is that transactions are relayed in a line through the network, just one node to one node to one node and then after basically after a random timeout they hit a spot where they expand to everything and they curve through the network and then explodes everywhere. Their paper makes a very good argument for the improvements in privacy of this technique. Obviously, if you want privacy then you should be running Bitcoin Core over tor. Still, I think this is a good technique to implement as well.
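A rough Python sketch of the stem/fluff idea from the paper (an illustration only, not the proposed BIP’s exact rules; the node structure, probability, and timer handling are all assumptions): each hop forwards the transaction to a single stem peer, and with some probability per hop it switches to normal flood relay. A real implementation also keeps an embargo timer so a dropped transaction eventually gets fluffed anyway.

```python
import random
from dataclasses import dataclass
from typing import List

FLUFF_PROBABILITY = 0.1   # chance at each hop of switching from stem to fluff

@dataclass
class Node:
    name: str
    peers: List[str]
    stem_peer: str            # the one peer chosen for stem-phase relay

def flood(tx, peers):
    for p in peers:
        print(f"announce {tx} to {p}")            # normal flood relay

def relay_dandelion(tx, node, rng=random.random):
    """Toy stem/fluff relay: forward the tx to exactly one peer until some
    hop decides to 'fluff', then flood as usual."""
    if rng() < FLUFF_PROBABILITY:
        flood(tx, node.peers)                     # fluff phase
    else:
        print(f"stem-relay {tx} to {node.stem_peer}")   # stem phase
        print(f"start embargo timer for {tx}")    # fallback if never fluffed

node = Node("me", peers=["peer_a", "peer_b", "peer_c"], stem_peer="peer_b")
relay_dandelion("txid_deadbeef", node)
```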

Q: Is the dandelion approach going to effect how fast your transactions might get into a block? If I’m not rushed…?

A: The dandelion approach can be tuned. Even with relatively modest delays, when used in an anonymous manner, it should only increase the time it takes for your transaction to get into a block by the time it takes for the transaction to reach the whole network. This is only on the order of 100s of milliseconds. It’s not a big delay. You can choose to not use it, if you want.

Q: ….

A: Right. So, the original dandelion paper is actually not robust. If a peer drops the transaction being forwarded and doesn’t forward it on further, then your transaction will disappear. That was my response to the author and they were working on a more robust version with a timeout. Every node along the stem that has seen the transaction using dandelion, starts a timer and the timer is timed to be longer than the propagation time of the network. If they don’t see the transaction go into the burst mode, then they go into burst mode on their own.

Q: ….

A: Yeah. I know, it’s not that surprising.

Q: What if you send it to a v0.15 node, wouldn’t that invalidate that approach completely?

A: No, with dandelion there will be a service bit assigned to it. You’ll know which peers are compatible. When you send a transaction with dandelion, it will traverse only through a graph of nodes that are capable of it. When this is ready, I am sure there will be another talk on this, maybe by the authors of the paper.

Q: It’s also really, it’s a great question, for applications.. trust.

A: Yes, it could perhaps be used by botnets, but so could bitcoin transactions anyway.

Work has started on something called “peer interrogation” to more rapidly kick off peers that are on different consensus rules. If I have received an invalid block from one peer, then I can go to all my other peers and ask, hey, do you have this block? Anyone who says yes to a block you consider invalid can be disconnected, because obviously they are on different consensus rules than you. There are a number of techniques we’ve come up with that should be able to make the network a little more robust against crazy altcoins running on the same freaking p2p port. It’s mostly robust against this already, because we also have to tolerate attackers who aren’t, you know, going to be nice. But it would be nice if these altcoins could distinguish themselves from outright malicious behavior. It’s unfortunate when otherwise honest users use software that makes them inadvertently attack other nodes.

There’s been ongoing work on improved block fetch robustness. Right now the way fetching works– assuming you’re not using the compact blocks high-bandwidth opportunistic send, where peers send blocks even without you asking– is that you will only ask for a block from a single peer at a time. So if I say give me a block, and that peer falls asleep and doesn’t send it, I will wait for a long multi-minute timeout before I try to get the block from someone else. So there’s some work ongoing to have the software try to fetch a block from multiple peers at the same time, even if that occasionally wastes a bit of effort.

Another thing that I expect to come in relatively soon is … the proposal for a gcs-lite-client BIP, a bloom-filter-style “map” for blocks. We’ll implement that in Bitcoin Core as well.

Further further….

All of the things that I have just talked about are things where I’ve seen code and I think they are going to happen. Going a little further out, to where I haven’t seen code yet: segwit made it much easier to make enhancements to the script system, to replace and update it. There are several people working on script system upgrades, including full replacement script systems that are radically more powerful. It’s really cool, but it’s also really hard to say when something like that could become adopted. There are also other improvements being proposed, like being able to do merkle tree lookups inside of bitcoin script, allowing you to build massively more scalable multisig, like Pieter Wuille presented a year ago for key tree signatures.

There’s some work done to use Proof-of-Work as an additional priority mechanism for connections (bip154). Right now in bitcoin there’s many different ways that a peer can be preferred, such as doing good work, being connected for a long time, having low latency, etc. Our security model for peer connections is that we have lots of different ways and we hope that an attacker can’t defeat all of them. We’d like to add proof-of-work as an additional way that we can compute connection slot priority.

There’s been some work done on using private information retrieval for privately fetching transactions on a pruned node. So say you have a wallet started on a pruned node, you don’t have the blockchain stored locally and you need to find out about your transactions. Well if you just connect to a normal server, you’re going to reveal your addresses to them and that’s not very private. There are cryptographic tools that allow you to query databases and look things up in them without revealing your private information. They are super inefficient. But bitcoin transactions are super small, so I think there’s a good match here and you might see in the future some wallets using PIR methods to look things up.

I’ve been doing some work on mempool reconciliation. Right now, transactions in the mempool just get there via the flooding mechanism. If you start up a node clean, it will only learn about new transactions and won’t have anything in its mempool. This is not ideal, because it means that when a new block shows up in the network, you won’t be able to exploit the speed of compact blocks (bip152) until you have been running for a day or two and built up your mempool. We do save the mempool across restarts, but if you’ve been down for a while then that data is no longer useful. It’s also useful to have the mempool pre-loaded fast because you can use it for better…. There are techniques to efficiently reconcile databases between different hosts, and there’s a bitcointalk post that I made about using some of these techniques for mempool reconciliation, and I think we might do this in the not-too-distant future.

There has been some work on alternative serialization for transactions, which could optionally be used on disk or on a peer-by-peer basis; the one that Pieter Wuille has proposed gives a 25% reduction in the size of transactions. This particular feature is interesting to me outside of the context of Bitcoin Core, because Blockstream has recently announced its satellite system and it could really use a 25% reduction in the amount of bandwidth required to send transactions.

Summary

Many important improvements in v0.15. I think the most important is the across-the-board 50% speedup, which gets sync time back to where it was in February 2017. Hopefully it means that the additional load created as people start using segwit will be less damaging to the system. There are many exciting things being developed, and I didn’t even talk about the more speculative things, which I am sure people will have questions about. There are many new contributors joining the project and ramping up. There are several organizations now paying people. I don’t know if they want attention, but I think it’s great that people can contribute on a more consistent basis. Anyone is welcome to contribute, and we’re particularly eager to meet people who can test the GUI and wallet, because those get the least love in the system today. So, thank you, and let’s move to some open questions.

Q&A

https://www.youtube.com/watch?v=nSRoEeqYtJA&t=1h3m13s

First priority is for questions for the presentation, but after that, let’s open it up to whatever. Also wait for the mic.

Q: For the new UTXO storage format, how many bytes is it allocating to the index for the output for it?

A: It uses a compact int for it. In fact, this format on its own is a little less compact– maybe 15% larger– but it’s not something to think about a lot, because leveldb will compress entries itself.

Q: What if some of the participants of the bitcoin network decided to not activate another soft-fork like schnorr signatures?

A: I mentioned the signature aggregation stuff, which would require schnorr signatures; yeah, that’s a soft-fork and it would have to be activated on the network. We did have kind of a long slog to get segwit activated. ((Someone tosses Greg a bip148 UASF hat))… Obviously people are going to do … but at the end of the day, it’s all the users in the system that decide what the rules are. Especially for a backwards compatible rule like signature aggregation that we could use– well, miners are mining to make money, and certainly there are many factors involved in this, but mining operations are very expensive, and a miner that chooses not to follow the rules of the bitcoin network is a miner that will soon be bankrupt. Users have the power in bitcoin. I wasn’t a big fan of bip148, although I think it has been successful. I think it was way too hastily carried out. I don’t anticipate problems with activating signature aggregation in the future, but that’s because I trust the users to get this activated and make use of it. Thanks.

Q: A few rapid fire questions I guess. For the new chainstate database, is the key just the transaction id and then the output index?

A: Yes that’s the key. It’s both.

Q: Is it desirable to ditch openssl because it’s heavy weight?

A: No, openssl was not well designed. It was previously used everywhere in bitcoin and it was not designed or maintained in a way that is suitable for consensus code. They changed their signature validation method in a backwards incompatible way. For all the basic crypto operations it’s better for us to have a consensus critical implementation that is consistent. As far as the RNG goes, I think nobody who has read that code still has a question about why we’re removing it– it’s time tested, yes, but it’s obviously bad code and it needs to go.

Q: …

A: Yes. The comment was made that all of openssl, the only thing that it has going for it is that it is time tested. Sure. That can’t be discounted.

Q: You mentioned in fee estimation that miners might censor transactions. Have we seen that before?

A: Well, we have seen it with soft-forks. A newly introduced soft-fork for someone who hasn’t upgraded, will look like censorship of a transaction. There are some scary proposals out there, people have written entire papers on bribing miners to not insert transactions. There are plenty of good arguments out there for why we’re not going to see that in the future, if mining is decentralized. But we still need to have a system that is robust against it potentially happening.

Q: So the core of this point was that it created a buffer for a register that was like a moving thing for a certain number of transactions for a certain period of time, so you kept track of the outputs..

A: The outputs are the public keys.

Q: So you keep track of those and this was the bulk of the…?

A: Of the performance improvement.

Q: 50%?

A: Yes. The blockchain is pretty big. There have been over a billion transaction outputs created. Small changes in the performance of finding and deleting outputs have big impacts on overall performance.

Q: Is it possible for a non-bitcoin blockchain application, …

A: Yes, I think the same optimization techniques would be applicable to any system with similar properties. Our focus in Bitcoin Core is on keeping bitcoin running.

Q: You mentioned at the end that GUI and wallet are two areas where you need more help. Can you expand on it?

A: There’s some selection bias in the bitcoin project. Bitcoin developers are overwhelmingly systems programmers and we don’t even use GUIs generally. As a result of that, we also attract other systems programmers to the project. There are some people that work on the GUI, but not so many. We don’t have any big design people working on it, for example. There’s nobody doing user testing on the wallet. We don’t expect Bitcoin Core to be for grandma– it’s expected to be industrial machinery– but there are still a lot of user factors that there aren’t a lot of resources going into, and we would like that to be different.

Q: Yeah I can speak about … I heard … blockchain… lightning right now. How is that going?

A: So the question is Blockstream has some people working on lightning. I am not one of them but my understanding is that the big news is that the specifications are coming to completion and there’s been great interop testing going on. There are other people in this room that are far more expert on lightning than I am, including laolu who can answer all questions about lightning. Yes, he’s waving his hand. I’m pleased to see it moving along.

Q: Hi, you mentioned taking out openssl. But someone who is working on secure communication between peers, is that using openssl?

A: Right, no chance of that. Openssl has a lot of stuff going for it– it’s widely deployed– but it has lots of attack surface. We don’t need openssl for secure communication.

sipa: If I can comment briefly on this, the most important reason to use openssl is if you need ssl. For something like this, we have the expertise to build a much more narrow protocol without those dependencies.

A: We also have some interesting things different for bitcoin, normally our connections are unauthenticated because we have anonymous peers… there are some interesting tricks for that. I can have multiple keys to authenticate to peers, so we can have trace-resistance there, and we can’t do that in openssl. Your use of authentication doesn’t make you super-identifiable to everyone you connect with.

Q: Question about.. wondering when, what do you think would need to be happen for 1.0 version? More like longer term? Is there some kind of special requirement for a 1.0 release? I know that’s far out.

A: It’s just numbers, in my opinion. The list of things that I’d like to see in Bitcoin Core and the protocol includes improved fungibility– it’s a miles-long list. I expect it to be miles-long for the rest of my life. But maybe that should have nothing to do with v1.0 or not. There are many projects that have changed version numbers purely for marketing reasons, like Firefox and Chrome. There are bitcoin altcoins that use version 1.14 to try to say they are 1 version greater… I roll my eyes at that; it sometimes contributes to user confusion, so I wouldn’t be surprised if the number gets ludicrously increased to 1000 or something. In terms of the system being complete, I think we have a long way to go. There are many credible things to do.

Q: There was recently a fork of the bitcoin network that was polite enough to include replay protection. But people are working on another fork that might not include replay protection. So is there a way to defend against this kind of malicious behavior?

A: Well, I think that the most important thing is to encourage people to be sensible. But if they won’t be sensible and add replay protection, then there are many features that give some amount of automatic replay protection. Ethereum is replay vulnerable because it uses an accounts model. In bitcoin, every time you spend you reference your coins, so there is automatic replay protection. There’s no problem with replay between testnet and bitcoin, or litecoin and bitcoin– that’s the inherent replay protection built in. Unfortunately, this doesn’t work if you literally copy the history of the system. The best way to deal with replay is probably to take advantage of that inherent protection by making your transaction spend outputs that don’t exist on the other chain. People have circulated patches in the past that allow miners to create 0-value outputs that anyone can spend, and wallets could then pick up those outputs and spend them. If you spend an output created by a miner that only exists on the chain you want to spend on, then you’re naturally replay-protected. If that split happens without replay protection, there will be tools to help, but it will still be a mess. You see continued snafus with ethereum and ethereum classic.

Q: There’s an altcoin that copied the bitcoin state, that you can use identical equipment on.. and it will be more profitable to mine, and it will bounce back and forth between miners.

A: Well, the system works as designed, in my opinion. Bcash uses the same PoW function. They changed their difficulty adjustment algorithm. They go into hyperinflation when they lose hashrate, and then those blocks coming in once every few minutes make it more profitable to mine. Miners are trying to make money. They have to factor in the risk that the bcash coins will become worthless before they can sell them. Miners move over, bitcoin remains stable, transaction fees go up– which has happened now twice– because bitcoin becomes more profitable to mine just from the transaction fees. At the moment, bcash is in a low difficulty period but bitcoin is more profitable to mine because of transaction fees. Bitcoin’s difficulty adjustment algorithm is not trying to chase out all possible altcoins. At the moment it works fine, it’s just a little slow.

Q: … further… lower.. increase.

A: Well, … one large miner after segwit activated seemed to lower their blocksize limit, probably related to other unrelated screwups. The system seems to be self-calibrating and it is working as expected. Fees are high at the moment, but that’s all the more reason for people to adopt segwit and take advantage of lower fees. I’m of the opinion that the oscillation is damaging to the bcash side because it’s pushing them into hyperinflation. Because their block size limit is way larger and they are producing 10 kb blocks, I don’t know what will pay people to mine it as they wear out their inflation schedule. And it should have an accelerating inflation rate because of how that algorithm works. Some of the bcash miners are trying to stop the difficulty adjustments because they realize it’s bad. I’m actually disappointed that bcash is having these problems; it’s attractive for people to have some alternative. If your idea of the world is that bitcoin should have no block size limit, that’s not a world that I can support based on the science we have today; if people want it, they can have it, and I’m happy to say here, go use that thing. But that thing is going off the rails maybe a little faster than I thought– we’ll see.

Q: If I give you a chip that is a million times faster than the nvidia chip, how much saving would you have the mining?

A: A million times faster than an nvidia chip… bitcoin mining today is done with highly specialized ASICs. How long until… So, bitcoin will soon have done 2^88 cumulative sha256 operations. Oh yes, so we’re over 2^88 sha256 operations done for mining. This is a mind-bogglingly large number. We crossed 2^80 the other year. If you had started a core 2 duo at the beginning of time, it would just have finished 2^80 work around when the network crossed 2^80. We have now crossed 256x more work than that. So that’s why something like a million times faster nvidia chip still wouldn’t hold a candle to the computing power from all these sha256 asics. I would recommend against 160-bit hashes for any cases where collisions are a concern. We use them for addresses, but addresses aren’t a collision concern, although segwit does change it for that reason.
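To give a feel for the scale (a back-of-the-envelope Python calculation; the ten-million-hashes-per-second CPU figure is an assumed, illustrative number, not from the talk):

```python
# Rough scale check for the "2^88 cumulative SHA256 operations" claim.
# The 1e7 hashes/sec laptop figure is an assumed, illustrative number.
laptop_rate = 10_000_000                              # hashes per second (assumed)
seconds_since_2009 = (2017 - 2009) * 365 * 24 * 3600  # since bitcoin launched
laptop_total = laptop_rate * seconds_since_2009
network_total = 2 ** 88

print(f"laptop since 2009: ~2^{laptop_total.bit_length() - 1} hashes ({laptop_total:.2e})")
print(f"network so far:     2^88 hashes ({network_total:.2e})")
print(f"ratio: ~{network_total / laptop_total:.1e}x")
```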

Q: There are a few different proposals trying to solve the same thing, like weak blocks, thin blocks, invertible bloom filters– is there anything in that realm on the horizon? What do you think is the most probable development there?

A: There’s a class of proposals called pre-consensus, like weak blocks (or here), where participants come to agreement in advance on what they will put into blocks before it is found. I think those techniques are neat, I’ve done some work on them, I think other people will work on them. There are many design choices, we could run multiples of these in parallel. We have made great progress on efficient block transmission with FIBRE and compact blocks. We went 5000 blocks ish without an orphan a couple weeks ago, so between FIBRE and v0.14 speedups, we’ve seen the orphan rate drop, it’s not as much of a concern. We might see it pop back up as segwit gets utilized, we’ll have to see.

Q: Is rolling utxo hash a segwit precondition?

A: I think most of these schemes wouldn’t make use of a rolling utxo set hash; there’s some potential interaction for fast syncing nodes. With a lot of these preconsensus techniques you can have no consensus enforcement of the rules, and the miners just go along with them voluntarily. They are interesting techniques. There’s a paper from 2 years ago called bitcoin-ng which itself can be seen as a pre-consensus technique and talks about some interesting things in this space.

Q: Going 5000 blocks…

A: Yeah, so… Bram is saying that if we go 5000 blocks without an orphan, then this implies that the blocks are going out on the order of 200 ms– that’s right. Well, actually it’s more like 100 ms. Particularly BlueMatt’s technique transmits pretty reliably near the speed of light. Lots of ISPs… yeah, so part of the reason why we are able to do this– this is where BlueMatt’s stuff is really helpful– is that he set up a network of nodes that have been carefully placed to connect with each other and be low latency. This is very difficult and it has taken some work, but it helps out the network.

Q: Hi. What do you think about proof-of-stake as a consensus mechanism?

A: There’s a couple of things. There have been many proof-of-stake proposals that are outright broken, and people haven’t done a good job of distinguishing the proposals. As an example, proof-of-stake in NXT is completely broken and you can just stakegrind it. The reason why people haven’t attacked it is that it hasn’t been worth it to do so… There are other proposals that achieve very different security properties than what bitcoin achieves. Bitcoin achieves this anonymous, anyone-can-join, very distributed mechanism of security, and the various proof-of-stake proposals out there have different tradeoffs. The existing users can exclude new users by not letting them stake their coins. I think the people working on proof-of-stake have done a poor job of stating their security assumptions and showing exactly what goals they achieve. There’s also been some work in this space where they make unrealistic assumptions. There was a recent PoS paper that claimed to solve these problems, but the starting assumption was that the users have a synchronous network, which is where all users receive all the messages in order all the time. If you had a synchronous network then you wouldn’t need a blockchain at all. This was one of the starting assumptions in a recent paper on PoS. So I think it’s interesting that people are exploring this stuff, but I think a lot of this work is not likely to be productive. I hope that the state of the art will improve… it’s tiresome to read papers that claim to solve these problems but then you get to page 14 and it says oh, it requires a synchronous network– into the trash. This is a new area, and science takes time to evolve.

Q: Beyond .. bad netcode… what are the highlights for your current threat model for protecting the future of bitcoin?

A: I think that the most important highlight is education. Ultimately bitcoin is a social and economic system. We need to understand it and have a common set of beliefs. Even as constructed today. There are some long term problems related to mining centralization. I am of the belief that they are driven by short-term single shot events where people entered and overbuilt hardware. Perhaps it’s a short-term effect. But if it’s not, and a few years later we look at this and we see nothing improving, then maybe it’s time to get another answer. The bitcoin space is so surprising. Last time I spoke here, the bitcoin market price was $580. We can’t allow our excitement about adoption to make bad moves related to technology. I want bitcoin to expand to the world as fast as it can, but no faster.

Q: Hey uh… so it sounds like you, in November whenever s2x gets implemented, and it gets more work than bitcoin, I mean it sounds like you consider it an altcoin it’s like UASF territory at what point is bitcoin is bitcoin and what would y’all do with sha256 algorithm if s2x gets more work on it?

A: I think that the major contributors on Bitcoin Core are pretty consistent and clear on their views on s2x (again): we’re not interested and we’re not going along with it. I think it is unlikely to get more work on it. Miners are going to follow the money. Hypothetically? Well, I’ve never been of the opinion that more work matters; it’s always secondary to following the rules. Ethereum might have had more joules pumped into its mining than bitcoin– although I haven’t done the math, that’s at least possible– but I wouldn’t say ethereum is now bitcoin just because of the joules. Every version of bitcoin all the way back has had nodes enforcing the rules. It’s essential to bitcoin. Do I think bitcoin can hard-fork? Yeah, but all the users have to agree, and maybe that’s hard to achieve, because we can do things without hard-forks. And I think that’s fine. If we can change bitcoin with a controversial change, then I think that’s bad, because you could make other controversial changes. Bitcoin is a digital asset that is not going to change out from under you. As for future proof-of-work functions, that’s unclear. If s2x gets more hashrate, then I think that would be because users as a whole were adopting it, and if that were the case then perhaps the Bitcoin developers would go do something else instead of Bitcoin development. It might make sense to use a different proof of work function. Changing a PoW function is a nuclear option and you don’t do it unless you have no other choice. But if you have no other choice, yeah, do it.

Q: So sha256 is not a defining characteristic of bitcoin?

A: Right. We might even have to change sha256 in the future for security reasons anyway.

Q: Rusty Russell wrote a series of articles about if a hard-fork is necessary, what does it mean, how much time is required, I’m not say I’m proposing a hard-fork, but what kinds of changes might be useful to make it easier for us to make good decisions at that point?

A: I don’t think the system can decide on what it is. It’s inherently external. I am in favor of a system where we make changes with soft-forks, and we use hard-forks only to clean up technical debt so that it’s more easy to get social consensus on those. I think Rusty’s posts are interesting, but there’s this property that I’ve observed in bitcoin where there’s an inverse relationship between the distance you are to the code and how … no, proportional relationship: distance between the code and how viable you think the proposals are. Rusty is very technical, but he’s not a regular contributor to Bitcoin Core. He’s a lightning developer. This is a pattern we see in other places as well.

I think that was the last question. Alright, thanks.


https://www.reddit.com/r/Bitcoin/comments/6xj7io/greg_maxwell_a_deep_dive_into_bitcoin_core_015/

https://news.ycombinator.com/item?id=15155812


Advances In Block Propagation

Speakers: Greg Maxwell

Date: November 27, 2017

Transcript By: Bryan Bishop

Tags: P2p, Mining

Media: https://www.youtube.com/watch?v=EHIuuKCm53o

Slides: https://web.archive.org/web/20190416113003/https://people.xiph.org/~greg/gmaxwell-sf-prop-2017.pdf

https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

efficient block transfer: https://web.archive.org/web/20170912171304/https://people.xiph.org/~greg/efficient.block.xfer.txt

low latency block xfer: https://web.archive.org/web/20160607023347/https://people.xiph.org/~greg/lowlatency.block.xfer.txt

https://web.archive.org/web/20160607023408/https://people.xiph.org/~greg/mempool_sync_relay.txt

https://web.archive.org/web/20160607023359/http://people.xiph.org/~greg/weakblocks.txt

compact blocks FAQ https://bitcoincore.org/en/2016/06/07/compact-blocks-faq/

some history https://www.reddit.com/r/btc/comments/557ihg/what_are_the_arguments_against_xthin_blocks/d88q5l4/

more history https://www.reddit.com/r/btc/comments/6p076l/segwit_only_allows_170_of_current_transactions/dkmugw5/

https://twitter.com/kanzure/status/944656354307919873

Introduction

I’ll be talking about some recent advances in block propagation and why this stuff is important. I am going to talk about how the original bitcoin p2p protocol works, and it doesn’t work that way for the most part anymore. I will talk about the history of this, including fast block relay protocol, block network coding, and then building up to where the state of the art is today.

There are two things that people talk about when they are talking about the resource usage of block propagation. One is block propagation cost to the node operators. They use bandwidth and CPU power to propagate blocks. This is obviously a factor. The original bitcoin protocol was particularly inefficient with transmissions of blocks. You would operate as a node on the network and receive transactions as they came in, and then when a block was found on the network you would receive the block, which included the transactions that you had already received. This was a doubling of the amount of data that needed to be sent. But not a doubling of the node’s overall bandwidth usage, because it turns out that nodes do things other than just relay blocks, and these other things are even less efficient than block propagation. In particular, the process that bitcoin uses to relay transactions is really inefficient. The standard bitcoin protocol method of relaying a transaction is to announce to the peers “hey, I know bitcoin txid ‘deadbeef’”, then the peers respond with “please send me the transaction with txid ‘deadbeef’”– that’s 38 bytes of INV data on the wire, and then a transaction is maybe 200 bytes. So you’re using more than 10% of the bandwidth just to announce the transaction. Even if you got rid of block relay entirely, the reduction in bandwidth for a bitcoin node is only 12%. Although 12% is not nothing– bitcoin is a highly optimized system and we grab for every improvement that we can.
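A quick hedged back-of-the-envelope in Python, using the rough figures above (a 38-byte announcement and a ~200-byte transaction) and a deliberately simplified model in which the transaction body crosses only one link while every peer link carries the announcement:

```python
# Back-of-the-envelope overhead of INV-based transaction announcement,
# using the approximate figures from the talk (38-byte INV, ~200-byte tx).
INV_BYTES = 38
TX_BYTES = 200

# On the one link that actually transfers the transaction body:
single_link = INV_BYTES / (INV_BYTES + TX_BYTES)
print(f"announcement share on the transferring link: {single_link:.0%}")

# Simplified model: the announcement goes out on every peer link, but the
# transaction body is only fetched over one of them.
for peers in (4, 8):
    total = peers * INV_BYTES + TX_BYTES
    print(f"{peers} peers: {peers * INV_BYTES / total:.0%} of bytes are announcements")
```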

The other problem with resource usage is that the old bitcoin block propagation mode is really bursty. You would use a steady amount of bandwidth as transactions came in, but when a block was found you would use a megabyte of bandwidth all at once. Back at Mozilla, I could tell which of my colleagues were bitcoin users because during video chat their video would stall out in time with blocks appearing on the bitcoin network. So, they would have to turn off their bitcoin node.

That behavior was a real usability factor in how comfortable it was to run a node on residential broadband. A few years ago there was noise on the internet about buffer bloat on routers having excessive latency, which has still not been fixed. Big blocks all sent at once is basically the perfect storm for buffer bloat. So it makes your VOIP mess up.

The bigger concern for block propagation is: what is the latency for distributing a block all around the network, in particular to all of the hashpower? This is a concern for two major reasons. One is that if it takes a long time relative to the interval between blocks, then there will be more forks in the network, and confirmations are less useful– it becomes more likely that confirmations get reorged out. So this has an impact on bitcoin’s interblock interval, currently 10 minutes, which is a nice and safe number that is far away from convergence failures. In order to lower this number, block propagation needs to be up to the challenge, so that’s one reason it’s important. The other reason is that block propagation time creates “progress”. Ideally, mining is supposed to work like a perfectly fair lottery, where if you have 10% of the network hashrate then you should mine 10% of the blocks. When you introduce delays into block propagation, mining works more like a race instead of a fair lottery. In a race, the fastest runner will always win unless the next fastest is very close in speed; a runner going at 10% or 30% of the leader’s speed will never win. Mining is supposed to be like a lottery, propagation delay makes it more like a race, and the reason for this is that if I don’t know about the latest blocks then my blocks won’t extend them and I’m starting behind. Also, if other miners don’t know about my blocks then they won’t mine on them, either.

Why is progress in mining bad?

So why is “progress” in mining bad? As I was saying, the incentives are such that every participant should find a proportion of blocks equal to their proportion of the hashrate. There is a centralization pressure: the longer blocks take to propagate, the bigger the advantage for consolidated hashrate, and there is no limit in the system to this. The more centralized you are, the more hashpower you can buy and the bigger you get. Miners also have the opportunity to choose to collaborate into a single pool. I’ll talk more about this.

This is often misunderstood. When propagation comes up on reddit, there are some vaguely nationalist sentiments, like the “screw the Chinese with their slow connections” things that people say. I think that’s wrong, because the block propagation issue isn’t better or worse for people with faster or slower connections– it’s better or worse for people with more hashrate. So if it were the case that people in China had very limited bandwidth, they would actually gain from this problem, so long as they had a lot of hashpower, which in fact they do.

In general, this is a problem that we need to overkill. When we talk about how we spend resources in the community, there’s many cases where we can half-ass a solution and that works just fine. It’s true for a lot of things. But other things we need to solve well. Then there’s a final set of things that we need to nuke from orbit. The reason why I think that block propagation is something that we should overkill is because we can’t directly observe its effects. If block propagation is too slow, then it’s causing mining centralization, and we won’t necessarily see that happening.

Why won’t miners solve block propagation?

https://www.youtube.com/watch?v=EHIuuKCm53o&t=7m30s

The problem adversely affects miners, so why won’t they solve it? Well, one reason is because bad propagation is actually good for larger miners, who also happen to be the ones in an economic position to work on the problem in the first place. I don’t think that any large miner today or in the recent past has been intentionally exploiting this. But there’s an effect where, if you’re benefiting from this and it’s profitable for you, you might not notice that you’re doing anything wrong– you’re not going to go “hmm, why am I making too much money?” and then go investigate. I mean, it takes a kind of weird person to do that. I am one of those weird people. But for the most part, sensible people don’t work like that, and that’s an effect worth keeping in mind.

Also, miner speciality isn’t protocol development. If you ask them to solve this problem, then they are going to solve it by doing it the miner way– the miner’s tool is to get lots of electricity, lots of hardware, contracts, etc. There are tools in the miner toolbelt, but they are not protocol design. In the past, we saw miners centralize to pools. For example, if their pool had too high of an orphan rate, they would move to another (larger) pool with a lower orphan rate. This is absolutely something that we have seen happen in the past, and I’ll talk about what we did to help stop those problems.

Another thing that miners have been doing is sometimes extending chains while completely blind: instead of waiting for the block to be propagated, they learn just enough from another pool to get the header and they extend it without validating, so they can mine sooner. The bip66 soft-fork activation showed us that likely a majority of hashrate on the network was mining without verifying. Mining without verifying is benign so long as nothing goes wrong. Unfortunately, SPV clients make a very strong security assumption that miners are validating and are economically incentivized to validate. Unfortunately the miners’ incentives don’t work out like that, because they figure that blocks aren’t invalid very often. If you are using an SPV client and making millions of dollars of transactions, that’s possibly bad for you, right?

Bitcoin is designed to have a system of incentives and we expect participants to follow their incentives. However, this isn’t a moral judgement. We need to make it so that breaking the system isn’t the most profitable thing to do, and improving block propagation is one of the ways we can do that.

Sources of latency

https://www.youtube.com/watch?v=EHIuuKCm53o&t=10m45s

Why does it take a while for a block to propagate? There are many sources of latency in relaying a block through the network. Creating a block template to hand out is something that we can improve in Bitcoin Core through better software engineering, not protocol changes. In earlier versions it would take a few seconds; today it’s a few milliseconds, and that’s through clever software optimization. The miner dispatches a block template to their mining devices, and that might take entire seconds in some setups, and it’s not in my remit to control– that’s something for mining hardware manufacturers to fix. When you are purchasing mining equipment, you should ask the manufacturer for these numbers. These can be fixed with better firmware.

You need to distribute your solution to peers outside of your network. You have to send the data over the wire. There are protocol roundtrips, like INV and getdata. There are also TCP roundtrips, which are invisible to people, and people don’t realize how slow this is. Before we had the block propagation improvements, we measured block propagation in the bitcoin network based on block size and orphan rate, and we found that the aggregate transmission rate through the network was only around 750 kilobits/second. This was because of node latency and also just because of packet loss and TCP behavior; all these things together made it so that even though nodes had fast connections, propagation between nodes was only around 750 kbit/sec.

One of the things that is very important to keep in mind is that if you’re going to send data all around the world, there will be high latency hops– from here to China is more than a few dozen milliseconds. And there are also block cascades from peer to peer to peer.

Obviously, in the original bitcoin protocol, every time you relayed a block you would first validate it. And in v0.14 and v0.15 there has been speedups for validation at the tip through better caching. These more advanced propagation techniques can work without fully validating the blocks, though.

Original bitcoin block relay protocol

This is a visualization of the original bitcoin block relay protocol. Time goes down. Block comes in, there’s a chunk of validation, the node sends an INV message saying “hey, I have this block”, the other node responds with “please give me this block”, and the other node gives a megabyte of data. This is really simple, and it has some efficiency in that you don’t send a 1 MB block to a node multiple times. But it’s inefficient. A node knows 99.953% of all the transactions in a block, before the block arrives. As I mentioned before, in addition to the roundtrip in this INV getdata block sequence, the underlying TCP protocol adds additional roundtrips in many cases. And you have to wait for validation.

Bloom filtered block relay

One of the earliest things that people worked on came from Pieter and Matt observing that bip37 filtered block messages could be used to transmit a block while eliminating transactions that were already sent. So we would skip sending the transactions that had already been sent. In 2013, testing suggested that this actually slowed down transmission due to various overheads– blocks were much smaller and it was a different network than today. The filtering also assumed you would remember all transactions you had been sent, even those that you had discarded, which is obviously impossible. But this was the start of an idea: we got it implemented, tried it out, and it inspired more work and refined ideas.

Protocols vs networks and the fast block relay protocol

Around this time, we started to see high orphan rates in the network that contributed to pool consolidation. There was a bit of a panic here among us, we were worried that the bitcoin network would catch fire and die. And when we saw ghash.io creeping up to majority hashrate on a single pool, that was a big issue.

https://www.youtube.com/watch?v=EHIuuKCm53o&t=16m50s

Matt’s idea was to work on a fast block relay network where he would spin up a bunch of really fast, well-maintained servers.

He would encourage miners to connect to those nodes, with the theory that relay in the network was slow because miners, for commercial reasons, are hesitant to connect with each other– if you know your competitor’s IP address, you can DoS attack them or be attacked. So he set it up and it seemed to work. But having nodes alone would not be enough to solve the problem, so he developed some custom protocols. This has resulted in a bit of confusion: sometimes when we talk about “the relay network” we mean relay protocols like compact blocks and FIBRE, and sometimes we mean Matt Corallo’s privately operated network, because it became a testbed for testing protocol changes. So I just want to emphasize that I’m not really talking about Matt’s relay network; I want to talk about relay network protocols, such as can be deployed on the bitcoin p2p network or already have been, and not necessarily Matt’s private networks, which sometimes use the same protocols.

So the first widespread production use of faster block relay protocols is what I call the fast block relay protocol (2013-2015). The idea behind the fast block relay protocol is basically that you have some hub node, Alice, and she streams every transaction she sees to you, Bob, and Bob promises by protocol contract to remember the last 10,000 transactions that Alice has streamed. When a block shows up, Alice can send short 2-byte ids (indexes) to say which transactions Bob should use in the block. You don’t have to send a megabyte of data; you send only two bytes per transaction. If there’s a transaction that Alice has never sent, then she can send a zero and then the explicit transaction. One of the great things about this is that Alice doesn’t have to ask permission to send this data– it doesn’t hurt you to get it if she just streams it out to you anyway. This doesn’t require a round trip; the block is transferred in half a roundtrip time. But the downside is that Alice has to send you every transaction on the network redundantly, and potentially send you blocks redundantly as well. And this protocol had quite a bit of overhead if you had many Alices– there were some miners that ran their own copies of this as well. It has a high overhead, so it’s better to use a hub-and-spoke topology with that scheme.
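A toy Python sketch of the index-based encoding (a hypothetical message format for illustration, not Matt’s actual wire protocol, and it escapes with just the txid rather than the full transaction): Bob remembers the last 10,000 transactions Alice streamed, and Alice describes a block as a sequence of 2-byte indexes into that window, with index 0 escaping to a literal entry.

```python
import struct

WINDOW = 10_000   # Bob keeps the last 10,000 txids Alice streamed to him

def encode_block(block_txids, streamed):
    """Alice's side: refer to recently streamed transactions by a 2-byte
    1-based index, and escape with index 0 followed by the literal 32-byte
    txid (a real protocol would send the full transaction here)."""
    recent = streamed[-WINDOW:]
    position = {txid: i + 1 for i, txid in enumerate(recent)}
    out = bytearray()
    for txid in block_txids:
        idx = position.get(txid, 0)
        out += struct.pack(">H", idx)
        if idx == 0:
            out += bytes.fromhex(txid)
    return bytes(out)

def decode_block(payload, streamed):
    """Bob's side: turn indexes back into txids."""
    recent = streamed[-WINDOW:]
    txids, offset = [], 0
    while offset < len(payload):
        (idx,) = struct.unpack_from(">H", payload, offset)
        offset += 2
        if idx == 0:
            txids.append(payload[offset:offset + 32].hex())
            offset += 32
        else:
            txids.append(recent[idx - 1])
    return txids

streamed = ["aa" * 32, "bb" * 32, "cc" * 32]
block = ["bb" * 32, "dd" * 32]               # one known, one never streamed
payload = encode_block(block, streamed)
assert decode_block(payload, streamed) == block
print(f"{len(block)} transactions encoded in {len(payload)} bytes")
```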

Block network coding (2014)

I created an omnibus protocol design where I took every good idea I could find, and created a page I called block network coding (2014) which I dare you to read. I came up with some ideas that I could argue got communication optimal block propagation and it can’t be made more efficient than this. When we tried to implement some of these ideas, it turned out to be very CPU inefficient. It doesn’t matter how small you made the block, if you’re going to spend 4 CPU seconds to decode the thing once it shows up. So there are many ideas all sort of mixed together and combined in this page. And almost all of them have come out and been included in production systems, but it required a lot of engineering to try to make it fast, and find the ones that paid for their cost in both implementation and CPU.

bip152 compact blocks

The next big advancement in block propagation started from ideas in 2013 and would eventually get written up into what became bip152 compact blocks. The primary goal of bip152 was to eliminate the overhead of block relay and get rid of the double transaction transmission. For a lot of the developers, we basically prioritized doing bip152 to try to negate some of the potential negative effects of the segwit capacity increase, by making it more efficient to transmit blocks in advance of the activation of the segwit upgrade. So as I said, compact blocks is really designed to lower bandwidth usage; it’s not really designed to lower latency, but it does that too as a side effect. There’s a design draft document that is linked to, slides, etc. In December 2015, I wrote a long high-level design document; Matt made it better, and it turned into bip152 in April. Today, compact blocks are used on greater than 97% of the nodes on the network, and it’s what carries blocks on the bitcoin network today.

Compact blocks has two modes. Non-high bandwidth, and high bandwidth. So what’s a compact block? The idea of a compact block is that we want to exploit the far-end, the remote party, but we don’t know what they know. So what compact block does is sends a block header, transaction count, an 8 byte nonce which I will talk about in a minute, and then it sends along sequence of short IDs. They are 6 bytes each, and it sends one for every transaction in the block. A short id is a hash of the txid and a salt, effectively. So, it also can send explicit transaction, like the coinbase transaction, and the far-end never knows that so you know it doesn’t have that so you can send it explicitly. When you receive one of these compact blocks, you can scan your memory and other sources of transactions you might have laying around and see if you get any matches. This uses siphash as the main hash function in it, because it’s very fast and it needs to be fast because you need to scan through your mempool with that salt and look for the matches. This scheme only needs to send 6 bytes per transaction in a block, which is less efficient than the previous protocol, but it doesn’t have any requirement for a hub-and-spoke topology. And so it’s a bit less efficient (3x less efficient) but it’s still way more efficient than sending full transactions. But as a result there is some complexity, because the short 48-bit ids at least tcould theoretically have collisions and the protocol has to figure out that when something’s missing, you have to fill it in. I mentioned that the short ids are salted, and the reason why they are salted is pretty straightforward. Even if we were to imagine that our short ids were 8 bytes long, so even less efficient, it would be very easy for a troublemaker to produce two transactions that have two short ids that are the same if it wasn’t for the salt, and you could efficiently find two transactions with the same id and then you can spam the network with these and consequently jam up block propagation and a miner could do this in fact to create a profit potentially. So in the compact block scheme, this short id is salted using the block header that the block is found in. And when an attacker announces their transactions to the network, they aren’t going to have known the salt because the block hasn’t been found yet by definition. Also it has an additional salt that is provided by each of the generated parties that generate a compact block. The idea of this additional salt is that if just by dumb luck some transactions pair in the block or in the mempool happen to have the same short id as the one hashed with the block header– by having different salts on different links, means that those collisions would happen sort of randomly on the network instead of happening everywhere and as a result blocks will propagate around slowdown. So if Alice and Bob connection gets slowdown because of a collision, the block would instead route through the fastest path. It starts off with the receiving peer requesting high bandwidth mode. Then in some arbitrary time in the future, a block shows up and a compact block is immediately send. You don’t validate the block first, the validation box is moved down (referring to this graph on the slides), it occurs at the same time that you’re sending out the block. 
The reason this is okay is that the protocol is defined to work this way, and high bandwidth mode is only used between full validating nodes, so you'll notice if the block is bad and you won't be handing it out to SPV clients. Most of the time, some 85% of the time, that's where the protocol stops: the block shows up, the high bandwidth compact block peer sends it to you, and you're done. But if there was a hash collision, or you didn't have a transaction, you have to disambiguate: you ask with a getblocktxn message, and the remote side responds with the missing transactions you asked for. The low bandwidth mode works similarly, but you validate before relaying. Just like the original block transmission, the peer announces the block, you respond with "yes, please send me the compact block", and so on. It adds an extra roundtrip, but it's more bandwidth efficient. In high bandwidth mode, if you ask multiple peers for high bandwidth service, you might get multiple copies of the compact block, which on average is maybe 13 kilobytes. It wastes a little bandwidth, but it's much faster.
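To make the salted short id scheme described above concrete, here is a minimal sketch in Python. It follows the structure from the talk (and bip152): the salt is derived from the block header plus the sender's 8-byte nonce, and the short id is a truncated keyed hash of the transaction id. bip152 actually uses SipHash-2-4 for speed; since the Python standard library has no SipHash, this sketch substitutes keyed BLAKE2b purely so it runs self-contained. The byte layout here is illustrative, not normative.

```python
import hashlib
import struct

def short_txid(block_header: bytes, nonce: int, txid: bytes) -> bytes:
    """Illustrative salted 6-byte short id in the spirit of bip152.

    bip152 keys SipHash-2-4 with a hash of (header || nonce); this sketch
    swaps in keyed BLAKE2b from the standard library for self-containment.
    The structure, not the particular hash, is the point."""
    # Per-block, per-link salt: an attacker cannot know the header (or this
    # sender's nonce) before the block exists, so collisions can't be precomputed.
    salt = hashlib.sha256(block_header + struct.pack("<Q", nonce)).digest()
    return hashlib.blake2b(txid, key=salt[:16], digest_size=6).digest()

# Receiver side: index your mempool by short id under the same salt and look
# for matches; anything unmatched has to be requested via getblocktxn.
```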

Bitcoin Core asks the 3 peers that most recently gave it a block first to relay high bandwidth compact blocks. We found that, the vast majority of the time, the first peer to relay you the next block is one of the last three peers that were first to relay a block to you; the network topology is fairly stable. This redundancy also has other positive effects. In the original bitcoin protocol you can DoS a node by saying "I have a block", waiting for it to request the block, and then stalling instead of delivering it, which is why the timeouts are long. The redundancy in compact block transmission means you bypass these delays and timeouts.

Summarizing compact blocks and its two modes: the high bandwidth mode takes half a roundtrip time, since it just sends in one direction, plus up to one more roundtrip if transactions are missing. The non-high bandwidth mode takes 1.5 to 2.5 roundtrip times. Whichever mode you're in, compact blocks saves around 99% of the bandwidth used for sending blocks, which was on the order of 12% of a node's total bandwidth. Further protocols can't really meaningfully improve a node's overall bandwidth: if you halve what's left, you're only halving something like 1% of that 12%. But latency could be improved. Compact blocks in the worst case can take 2.5 roundtrip times, and for a long haul link around the world that's really long. The fast block relay protocol showed that it could be better.

What about xthin?

There's another protocol called xthin which I'll comment on briefly, because otherwise people are going to ask me about it. It was a parallel development: Matt and Pieter did the bloom filter block stuff in 2013, Mike Hearn unearthed it and made a patch for Bitcoin-XT which didn't work particularly well, and the "Bitcoin Unlimited" (BU) people picked up Mike Hearn's work, apparently unaware of all the other development that had been in progress on this; developers don't always communicate well. It's a very similar protocol to compact blocks. Some BU advocates have irritated me pretty extensively by arguing that compact blocks was copied from xthin. Who cares, and it wasn't; they were just developed in parallel. It has some major differences. One is that it uses 8-byte, so 64-bit, short ids, and it doesn't salt them, which means it has a vulnerability where you can easily construct a collision. It turns out that a bunch of the BU developers and their advocates didn't know about the birthday paradox and how easy it makes creating collisions. They argued it wasn't feasible to construct a 64-bit collision, and I had a bunch of fun on reddit for 2 days responding to every post with collisions generated bespoke from their messages, because they said it would take hours to generate them; I was generating them in about 3 seconds. The other thing xthin does differently from compact blocks is that it can't be used in the high-bandwidth mode where you send an unsolicited block. With xthin there's always an INV message, and the response to the INV sends a bloom filter of the receiving node's mempool, basically saying "here's an approximate list of the transactions I know about". As a result of that bloom filter, which is typically on the order of 20kb, most of the time there's no need for an extra round trip to fetch transactions, because the sender knows which ones are missing and includes them. So the way I look at this optimization is that it costs a constant 1 roundtrip time (because you can't use high-bandwidth mode), plus bandwidth, plus CPU, plus attack surface, in order to save a roundtrip less than 15% of the time, because high bandwidth mode doesn't need the extra roundtrip about 85% of the time. I don't think that is useless; it could be useful for the non-high-bandwidth use case, but it's a lot more code and attack surface for a relatively small improvement. What's particularly interesting about xthin is that, because of political drama, it was rushed into production so they could claim they had it first, and it resulted in at least three exploited crash bugs that knocked out every Bitcoin Unlimited node on the network, and every Bitcoin Classic fork node on the network (not Bitcoin Core nodes). And when they fixed some of those bugs later, the fixes could cause nodes to get split from the network; interestingly, they introduced a bug where a short id collision would get the node stuck, a combination of two vulnerabilities. So there's something to learn from that.
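The birthday-paradox point above is easy to demonstrate. The sketch below is my own illustration (not xthin's actual hashing code): it searches for two distinct payloads whose truncated hash collides. With b-bit ids, a collision is expected after roughly 2^(b/2) attempts, so unsalted 64-bit ids cost only about 2^32 hash operations to collide, while 32-bit ids collide almost instantly.

```python
import hashlib
import os

def find_collision(bits: int = 32):
    """Birthday search: two different payloads whose hash, truncated to
    `bits` bits, is identical. Expected work is about 2**(bits/2) hashes."""
    seen = {}
    while True:
        payload = os.urandom(16)
        sid = int.from_bytes(hashlib.sha256(payload).digest()[:8], "little")
        sid &= (1 << bits) - 1
        if sid in seen and seen[sid] != payload:
            return seen[sid], payload
        seen[sid] = payload

# 32-bit ids: roughly 2**16 attempts, finishes in well under a second.
# 64-bit ids: roughly 2**32 attempts; still very feasible, just not instant,
# which is why compact blocks salts its short ids with data an attacker
# cannot know in advance.
a, b = find_collision(32)
```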

What about “Xpedited”?

The BU folks also have this thing called "expedited", which was basically a response to bip152, because their forwarding without a high bandwidth mode is much slower. It's a manually configured mode that works almost identically to bip152 high bandwidth mode: it doesn't send a bloom filter and it sends without asking. But since it requires manual configuration I don't think anyone really ever used it, and I'm confident it's not used today, because their software doesn't support segwit, only miners would use this, and miners need software that supports segwit. It doesn't have a spec either. I think it uses the same vulnerable short id scheme, but I'm not actually sure without having read the code. But it's a thing, so now you know.

Better latency than bip152?

I mentioned above that I don't think you can do much better on bandwidth, but you can certainly do better on latency, and we care a lot about the worst case on latency. If miners can gain from not cooperating, by filling a block full of novel transactions that you have never seen before, then we should expect that sooner or later someone is going to do it. In the worst case, bip152 behavior is just like the original behavior: it will ultimately send a megabyte of data, but with one more roundtrip, so in the worst case it's potentially worse. On fast low-latency links bip152's extra roundtrip is irrelevant, so you could say bip152 is all you need there. But the 2.5 roundtrip times in the worst case are really awful internationally, especially because long haul international links typically have on the order of 1% packet loss, which makes TCP stall.

Erasure coding

https://www.youtube.com/watch?v=EHIuuKCm53o&t=35m20s

I want to take a quick detour into some fundamentals. I think most people are at least vaguely familiar with the concept of an erasure code; this is what RAID does with your disks. Basically you can take n blocks of data and encode them into n+k blocks of data such that any n out of the n+k are sufficient to recover the whole thing. This is what RAID6 does, for example: you've got two extra drives, and as long as no more than two have failed, you can recover all your data. It might seem a little magical, but if you think about the k=1 case, you can just use XOR and work out on paper how it works: you compute your one redundant block as the XOR of all the other blocks. This can be done for any n and k. So you could split a bitcoin block into a thousand pieces such that any 500 fragments are enough to recover the whole block. But doing that efficiently is hard: a Reed-Solomon code, which is the canonical optimal space-efficient tool for this, is very slow if n and k are large. There are other techniques we could use which aren't perfectly efficient (they sometimes require an extra block or two), but they are very fast. So this is an erasure code; keep this construct in mind.
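As a tiny worked example of the k=1 case described above (my own sketch, nothing protocol-specific): XOR all the data blocks together to get one parity block, and then the XOR of whatever does arrive reproduces any single missing block.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"aaaa", b"bbbb", b"cccc", b"dddd"]      # n data blocks
parity = xor_blocks(data)                         # k = 1 redundant block

# Any single loss is recoverable: XOR everything that did arrive.
received = [data[0], data[1], data[3], parity]    # block 2 was lost
assert xor_blocks(received) == data[2]
```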

Network coding

Another concept, one that various highly academic groups in the IETF like to talk about, is network coding, which is an application of linear codes and erasure codes to multicast and broadcast. The idea is that if you want to send a piece of data to a bunch of different nodes, you can split it into different parts, apply erasure coding, and spread the pieces out over the network so that you never send the same data twice. You make optimal use of your links, all the peers in the network collaborate to recombine the data, and as soon as they have enough of it they are done. You can think of it like bittorrent swarming, but it can be more communication efficient because you don't need to locate a correct source: everyone is a correct source. For actual usage in bittorrent, computational efficiency is more important. For block relay, though, we don't really want a lot of back-and-forth two-way communication, so it helps that we don't need to know where our sources are.

Bitcoin FIBRE network

So this brings us to bitcoin FIBRE (fast internet block relay engine), a protocol heavily derived from the block network coding writeup that I did early on. The idea of FIBRE is basically this: you start with bip152 compact blocks high bandwidth mode, but you modify the short ids to also carry the transaction sizes. Then you transmit over UDP instead of TCP and you apply erasure coding to the block. You take the 1 MB block and expand it into 8 MB of redundant data, and you don't send the original block data at all: you only send the redundant data, streaming it out to different peers, and all of those peers stream it out to each other. So you send each peer a chunk, and the peers all send each other their chunks. All the chunks are useful because no chunk is ever sent twice. Then, when the receiver gets this data, they use the size-augmented short ids in the compact block to lay out the block in memory, with holes for the transactions they didn't know about, and they use the erasure code to fill in the holes and complete the block. Effectively this lets you communicate a block to a remote end when you have no idea what they know; if they do know something they can make good use of it, and you can send it out without needing a response. They are guaranteed to reconstruct the block as long as they receive at least as many packets as they were missing, and you never need to hear back from them, so you never pay a roundtrip delay, just a one-way delay.

So FIBRE lets you build a single globally-distributed node with very, very fast relay of blocks internally. What this actually achieves in practice, on Matt's relay network which uses it, is a 95th percentile latency under 16 ms over the speed-of-light delay. The speed-of-light delay is how long it takes a single byte to get from one side of the world to the other. So 95% of the time the system is able to transmit the entire block in under 16 ms beyond that; pretty good, though it could be better.

Q: …. pacific ocean …

A: Yes, oceans are big.

So this scheme has a lot of advantages, right? It's unconditional, uni-directional transport; you never need a roundtrip, unless a node is just offline and didn't get any of the packets. But it needs to be used only within a single administrative domain, because part of the way it works, and is so fast, is that nodes relay packets between each other without being able to validate them; they can't reconstruct the block yet, they are just relaying packets as soon as they come in. That would be a DoS vector if it were open to the public. It also has absurd bandwidth overheads: the production FIBRE network sends 8x the total amount of block data. But unlike the overheads in TCP-based protocols and the original bitcoin p2p relay, these overheads don't come at the expense of additional delays; they lower delays, because you don't have to receive all 8 copies, you just need enough packets to make up one copy, and most of that copy comes out of your local mempool anyway.

The way that Matt uses this protocol in his own network is that he has a bunch of nodes on well-maintained systems which speak FIBRE amongst each other, and then they speak regular bip152 compact blocks high-bandwidth mode to anyone who connects to them. Because those bip152 links are short (everyone connects to the closest FIBRE node), the fact that bip152 can take multiple roundtrips doesn't matter much in the end.

What can we do better?

Of course, there's the drive to do better. Like I said, we want to overkill this, and there are lots of tools in the toolbelt that could be used to improve things. One of those tools is set reconciliation, an idea based on the same technology as the erasure coding stuff. The setting is that each of us has a set, say a list of transactions; we don't know what each other's sets are, but we assume they are really similar, with maybe some minor differences. It turns out there are protocols where one party can send a single message to another party with a similar set, without knowing how similar, and the size of that message only has to be about the size of the symmetric set difference: the number of things I have that you don't, plus the number of things you have that I don't. With no more data than that, the receiver can recover the other set: I send you one message, and now you know my set. There are algorithms for this that are communication optimal, meaning there is no way to do it more efficiently in terms of communication. They are based on interpolating the roots of polynomial ratios; they are similar to Reed-Solomon codes, but more expensive to decode, taking CPU time cubic in the set difference, and all of the operations are expensive. There are newer techniques such as IBLT (invertible bloom lookup table) which are fast, linear in the size of the set difference, but they have bad constant factors in communication, so they send more data. If your set differences are small and the entries are small, an IBLT can need to be something like 5x the set difference to be a very reliable code.
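Since the talk recommends implementing an IBLT decoder as a fun exercise, here is a minimal sketch of the idea (my own toy code, not any production protocol): each item is XORed into a few cells along with a counter and a checksum; to reconcile, one side builds the table from its set, the other side subtracts its own set, and then "pure" cells (count of plus or minus one with a matching checksum) are peeled off one by one to recover the symmetric difference.

```python
import hashlib
import os

NUM_HASHES = 3  # each item is mapped into 3 cells, one per sub-table

def _indices(item: bytes, num_cells: int):
    sub = num_cells // NUM_HASHES
    return [i * sub + int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:8], "big") % sub
            for i in range(NUM_HASHES)]

def _checksum(item: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"chk" + item).digest()[:8], "big")

class IBLT:
    def __init__(self, num_cells: int, item_len: int):
        self.n, self.item_len = num_cells, item_len
        self.count = [0] * num_cells
        self.key_xor = [bytes(item_len)] * num_cells
        self.chk_xor = [0] * num_cells

    def _apply(self, item: bytes, delta: int):
        for idx in _indices(item, self.n):
            self.count[idx] += delta
            self.key_xor[idx] = bytes(a ^ b for a, b in zip(self.key_xor[idx], item))
            self.chk_xor[idx] ^= _checksum(item)

    def insert(self, item): self._apply(item, +1)
    def delete(self, item): self._apply(item, -1)

    def decode(self):
        """Peel pure cells; returns (items only inserted, items only deleted)."""
        added, removed = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.n):
                if self.count[i] in (1, -1) and self.chk_xor[i] == _checksum(self.key_xor[i]):
                    item = self.key_xor[i]
                    if self.count[i] == 1:
                        added.add(item); self.delete(item)
                    else:
                        removed.add(item); self.insert(item)
                    progress = True
        return added, removed

# Toy reconciliation: Alice sends her table; Bob subtracts his set and peels.
alice = {os.urandom(32) for _ in range(50)}
only_alice = set(list(alice)[:3])
bob = (alice - only_alice) | {os.urandom(32) for _ in range(3)}

table = IBLT(num_cells=60, item_len=32)   # a few times the expected difference
for t in alice:
    table.insert(t)
for t in bob:
    table.delete(t)
a_only, b_only = table.decode()            # succeeds with high probability
assert a_only == alice - bob and b_only == bob - alice
```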

Applying set reconciliation?

So how could you use set reconciliation technology? Back in the early efficient block transfer design doc that I wrote, I described that you could use set reconciliation to send the short ids in a compact block and use much less bandwidth. The big challenge is that you still have to communicate the transaction order, because that's normative in a block and miners can use whatever order they want, subject to the transaction dependency graph. And also, … they were unaware that I had previously suggested this, and they went and implemented it using IBLT. Their protocol assumes you have changed the bitcoin consensus rules so that transactions must be listed in a certain order. But in bitcoin today, miners order transactions by how profitable they are to the miner, which is a function unknown to the receiver. This ordering, from most income to least income, means that mining pool software can fit transactions into blocks by simply dropping the last couple of transactions, and that would be broken if you had a required order. Using set reconciliation this way also ignores latency; in particular the Graphene paper only talks about bandwidth and never about latency. Since it's ignoring latency, it probably would have been better to use polynomial set reconciliation instead of IBLT. In any case, on one hand I'm really excited about set reconciliation because it's cool rocket science; the IBLT algorithm in particular is a great programming project, basically any programmer could implement it and it's magic, and I would actually recommend implementing an IBLT decoder if you want to do something fun. On the other hand, this technology doesn't really seem like it would lower latency in any normal application of block propagation. Any further bandwidth improvements are shaving off fractions of a percent; the graphene stuff, for example, is going to save only a couple hundred kilobytes per day on a node. It's not that exciting. If the set reconciliation fails you pay a roundtrip, and there's no way to incrementally increase the size of an IBLT, so there are negatives there. Bandwidth for relaying blocks is 99% solved, and using set reconciliation there is trying to improve something that's already mostly solved. But I did say only 99% solved, so there are areas for improvement.

So can we do better?

There are cases where bandwidth improvements would still be useful. 20 kb doesn't take long to send over normal links, but there are cases where getting that number down will improve latency. And there are some things we could do which are not necessarily useful for block propagation but would improve bandwidth overall.

When is 20 kb going to impact latency? Well, one important use case is satellites. The blockstream satellites are broadcasting blocks right now; this is to improve the partitioning resistance of bitcoin. You can receive blocks over the radio, and then use your local bandwidth only for sending out transactions. The link speed on the satellite is only 80 kb/sec, which is enough to stay in sync with the bitcoin network. It uses software-defined radio, so we can increase or decrease bandwidth with some tradeoffs. A 20 kb compact block means it takes a minimum of 2 seconds to send a block across that link, which isn't attractive for mining; you could mine over it, but it's not particularly good. Another area that could be improved is transaction compression. The early fast block relay protocol used LZMA compression, and it turned out to be worthless, because compressors don't do much when you use them on small amounts of data, and the faster compressors do even less. You can get good compression if you compress one or more blocks at a time, but that's not as useful for transaction relay, where transactions are sent individually. It's possible to construct alternative serializations for bitcoin transactions: you can convert from a normal transaction to the alternative serialization losslessly, and you could apply it to the whole of bitcoin's history. Pieter and I figured one out that can reduce the size of the entire history of bitcoin to 28%, while working on a single transaction at a time. It's CPU intensive, and because it only works per transaction there are even more optimizations that could be done. If this were used, it could make block storage smaller, transactions smaller, and block propagation smaller, but the biggest effect for users would probably be in transaction relay.

Set reconciliation for transaction relay

Another idea that would be really powerful is set reconciliation for transaction relay. It's not that useful for block relay, but transactions are naturally a set. In 2013, I described a scheme on bitcointalk where, instead of a node offering transactions to all of its peers, it would offer each transaction to one peer, and sometimes to an additional peer based on a coinflip. If you did only this, a transaction wouldn't get very far in the network: transactions would travel around until they made a cycle and then stop. But if nodes also periodically ran set reconciliation to compare their mempools with each other, then every time a transaction got stuck in one part of the network it would be able to break out. This would make transaction relay really efficient, because it's low bandwidth.

Template delta compression

Another thing I have been working on is template-delta compression. The big cost in set reconciliation for blocks is sending the order of transactions. What if you sent a block template in advance of getting a block? You send the template over to someone, and then instead of sending the block when it arrives, you send the difference, including the permutation difference. If I have an efficient way of encoding the difference between the block I was expecting and the block I actually got, then maybe you can propagate blocks very quickly. With bip152 we made a stateless protocol that works with many peers; but what if we introduce state again so we can send differences? I designed a fast compressor for typical permutations, using the av1 range coder which I worked on in a previous life. What I found with the scheme I've come up with so far is that if you look at the difference between a 30-second-old template and the block that actually shows up, the median size is 2916 bytes and the mean is 3041 bytes, and 22% of blocks fit in less than one IP packet over the internet. Saving bandwidth isn't necessarily important in many cases, but for the blockstream satellites this can get block transmission down from whole seconds to a few hundred milliseconds.

Template deltas for pre-consensus and weak block schemes

The other reason I think this delta stuff is interesting, other than the satellite, is that there's a set of proposals out there that I call pre-consensus. This whole block propagation problem exists because mining is a race to announce blocks. What if the miners could get together in advance and decide what to include in the next block? Then when a block is found, all you have to send is a reference to what the pre-consensus said it would contain, plus the payout amount. There's bitcoin-ng, but it's not very bitcoiny: it introduces miner identities, and you can't slot it into bitcoin without very deep economic and technical changes. But you could do pre-consensus in a bitcoin compatible way, for example with weak blocks, where miners use a p2pool-like sharechain that pre-commits to the transactions they intend to include in the future. It comes more frequently than blocks, and when a block shows up it just references one of the weak blocks. This is a perfect fit for template deltas; the difference would be negligible.

Known improvements possible….

There are other known improvements that are possible and that people have been talking about. The FIBRE swarming-block idea, which today is used within a single administrative domain, could be swarmed over the whole bitcoin network by making blocks commit to the chunks that get sent out. But the error correction code would have to be part of the bitcoin consensus rules, and the validation would require CPU time. I don't know how reasonable it would be to do that, and there are other downsides too: if you want to prove that a particular packet was one the miner authorized, the SPV proof or hash tree for that would be quite large.

Consensus could require an order that obeys the dependency graph, which would make set reconciliation more efficient, but it would break truncation for miners. When a miner prioritizes transactions, those transactions end up at the top of the block. Bitcoin nodes could re-sort transactions into a more predictable order, but it's unclear whether that's worth implementing given its complexity. I think one of the big surprises is that there are a lot of ideas that sound awesome but, once you implement them, make things worse. We benefited a lot from Matt's relay network because we were able to take these ideas and check whether they actually work. It was only his nodes, so if something was bad we could change it, and we weren't rushing tech out to the bitcoin network that could, say, be used to crash every node in the network. Some of the ideas turned out to be slow or otherwise flawed, but they could be fixed or scaled back.

Q&A

https://www.youtube.com/watch?v=EHIuuKCm53o&t=57m24s

Please wait for the mic if you have a question.

Q: On your favorite topic of how to make compact blocks even smaller or something like that; so, the thing where you’re getting these like 2 byte indexes of transactions, would it be more efficient to send a compactified bitfield?

A: Yes.

Q: And then the permutation. And you want to get things down to fitting into a single packet. Isn’t that just 2kb for a permutation?

A: If the permutation were uniform, yes. But we have a strong prior for what it should look like, which is why I am able to get to that size with my template differencer. Miners produce blocks that are ranked by ancestor fee rate, plus some exceptions where they prioritize transactions and move those to the top of the block. My encoding scheme is efficient for "here's some stuff out of order and the rest is in order". The 2-byte thing could be made more efficient, but the tradeoff is CPU time; reading 2 bytes and indexing is really fast. You could use an arithmetic coder and do the things my template differencer does, but it's quite a bit slower.

Q: As far as reducing bandwidth used by relaying transactions, a pretty good network topology for doing that is full nodes become part of some club and each transaction becomes part of a club. So you stream it out to the club. And then you have one peer in that club…

A: There are many papers on efficient gossip networks. One thing the academic work doesn't really address is byzantine fault tolerant gossip. The efficient gossip networks tend to be ones where a single malicious peer can black-hole all your transactions. In bitcoin you can have nodes that are completely malicious and it won't really slow the propagation of transactions at all. So far academia hasn't been super useful for concrete schemes. I think the low fan-out plus set reconciliation scheme is byzantine robust, but I don't have a strong proof of this yet.

Q: What’s the current propagation delay across the network and what’s a good target for that?

A: We have lost our ability to measure it, because the orphan rate is really low at this point. It's hard to say. There are some stats that people have from polling random peers; Christian Decker has a chart. It's gotten much better, which is nice to see, but that only looks at peers, not miners. The best view we have into block propagation times from miners is how often blocks get orphaned. We overshot on some of this work: we improved some of these things but the orphan rate wasn't going down. And then miners upgraded for segwit, and the orphan rate went down. We went for something like 5000 blocks without seeing a single orphan, which was unlikely. I think the numbers are pretty good on the network. But the challenge is: is propagation time still adequate once we start producing blocks with more data in them?

sipa: The fact that we are seeing low orphan rates on the network means either our technology is awesome and we have solved the propagation delay….. or all miners are working together and it’s entirely centralized and we’ve failed. So it’s hard to tell.

A: Before all of this optimization, there were like 2 or 3 orphans per day, so it wasn't a very clear indicator in the first place, but it was one of the best we had, because it directly measures what we're trying to solve.

Q: Is block propagation time bad on its own, or is just because of the effect on orphan rate?

A: It's a little bad on its own, independent of orphan rate, because it means you take longer to see what the status of the network is. But suppose you have a scheme where there are no orphans: there are radical rethinkings of the blockchain with no orphans, like schemes with DAGs, where blocks aren't a straight linked list but a forest of blocks, transactions can occur in multiple places, and there's some arbitration technique for dealing with conflicts. But the problem there is low throughput, because you have to send transaction data redundantly multiple times.

Q: It seems that this arbitrary permutation encoded in a block kind of sucks.

A: It's something I would like to fix. I know in genetics it's commonly the case that there are strange, seemingly useless effects that turn out to be vital to life because other things have developed a dependency on them. In bitcoin, miners really do truncate blocks. If we take that away, would they care? It's hard to get miners to talk about protocol details. You could require some fixed order, subject to the dependency graph, which is easy to do inside node software but hard otherwise. If I were doing bitcoin design today, I would probably require such an order.

Q: When you said block propagation got better with segwit, is that because segwit itself helped it?

A: No. It turns out that there were some large miners running really old software, like Bitcoin Unlimited forked from a really old bitcoin version, which didn't have the caching improvements or any of the other improvements. There just weren't enough miners running improved software. Then miners switched to modern Bitcoin Core and the orphan rate fell through the floor. It was difficult to tell beforehand, because slow propagation doesn't just cause the slow miner to have orphans, it causes everyone to have orphans. You couldn't really say "ah, Antpool has lots of orphans and Antpool is running BU" or whatever…

Q: So we're in a stable time for network conditions. We're seeing major network partitioning like the GFW or Syria or Turkey… is there a way to measure what happens if the internet becomes a lot more fractured?

A: This is one of those things that is important but also hard to measure. It's not like the internet in the developed world fractures every day, so we don't have a lot of data about how fractures or partitions impact the bitcoin network. That's why we need to overkill partition resistance. Satellites are one way. We could do more network monitoring in the community, but it's not anybody's job to run a bitcoin network operations center, so unless someone like BlueMatt steps up and monitors it like he does, we don't get broad network data and nobody looks at it. We can see twitches in cdecker's block propagation data, and if there were a widespread internet outage we might learn something about bitcoin's network partition resistance.

Q: You mentioned BlueMatt's network and how it has been useful for implementing and iterating on these ideas… so what's the difference between what Matt did on his network and regtest or testnet? Why has it provided such huge benefits, and why not replicate that in the normal…?

A: On all of these tools and techniques, we did lots of testing, sandboxing, and simulated environments; there are Linux dummynets to simulate packet loss and latency. But the problem with those things is that they only let you deal with known unknowns: you already know you need to measure the situation with packet loss. But what about the actual network out there? And what about human factors? We learned a lot about interacting with miners: what motivates them, what interfaces they prefer, how sustainable manual configuration is. Production usage is no replacement for a test, but a test is no replacement for production usage. And there was no real risk that Matt's network would break bitcoin; the worst that would happen is that Matt's network would stop working, and that's the worst it could do.

Q: Can there be multiple relay networks?

A: There are in fact multiple relay networks. Some miners run their own relay networks between their data centers, and they peer with themselves and friends. There's a social dynamics question of how well multiples of these can scale, since any of them require manual configuration. Matt would love it if someone else would run a well-maintained relay network. He was previously getting donations to fund it, but the effort to collect the donations each month was more than it was worth to him. And it turns out there are bitcoin miners that don't want to pay bitcoin for things; there were people that only wanted to pay with wechat or something. So he pays out of pocket, and the network was faster before, when it had better funding.

Q: …

A: We have tested other heuristics. Compact blocks can send transactions opportunistically; it always sends the coinbase transaction because it knows you don't have it. I have tested a heuristic along the lines of "if a transaction was surprising to me, then I should assume it's surprising to my peers too". This is an obvious area that could be investigated. Why do misses occur in compact blocks? One reason is that a transaction only arrives milliseconds after the block; another reason is a double spend, where you have a different conflicting transaction than the one the miner confirmed. We addressed this somewhat in Bitcoin Core: we don't just use the mempool to reconstruct a compact block, we have an extra pool that includes double spends and replacements that we keep around for that reason.

Q: There is some research about gigabyte-sized blocks. The issue here– have you tried this yourself?

A: I haven't tried that on Matt's network, but in private test labs, sure. Being able to run large blocks over this stuff is not a surprise; it's designed to handle it. The problem is the overall scaling impact on the network, not just whether my one node on super fast hardware can keep up. The bloom filter that xthin sends adds a superlinear scaling component that is worse than compact blocks. You can run a gigabyte block, but good luck keeping a node affordable to run on that for any period of time…

Q: I was wondering about transaction throughput on the bitcoin network. I have heard it’s like 10 or 18 with segwit. In practice it’s like 3?

A: The throughput depends on what transactions people are producing. Prior to segwit, people started to do batched sends, which means transaction count is a poor metric for the capacity of the network: the number of transactions can go down while the number of UTXOs or the number of people using it goes up. Segwit is only being used by about 10% of transactions so far. If you're not using segwit and you care about transaction fees, you should use segwit; it's a massive decrease in fees. I haven't looked today, but in the past couple of days fees were around 5 sat/byte. Part of the design of segwit was to avoid creating a system shock by adding lots of capacity at once and causing a fee market collapse or something. The incentive to use segwit goes up when fees go higher, so we get more capacity when there's more demand.

forward erasure correction codes https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

https://www.youtube.com/watch?v=EHIuuKCm53o

slides: https://web.archive.org/web/20190416113003/https://people.xiph.org/~greg/gmaxwell-sf-prop-2017.pdf

https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

efficient block transfer: https://web.archive.org/web/20170912171304/https://people.xiph.org/~greg/efficient.block.xfer.txt

low latency block xfer: https://web.archive.org/web/20160607023347/https://people.xiph.org/~greg/lowlatency.block.xfer.txt

https://web.archive.org/web/20160607023408/https://people.xiph.org/~greg/mempool_sync_relay.txt

https://web.archive.org/web/20160607023359/http://people.xiph.org/~greg/weakblocks.txt

compact blocks FAQ https://bitcoincore.org/en/2016/06/07/compact-blocks-faq/

some history https://www.reddit.com/r/btc/comments/557ihg/what_are_the_arguments_against_xthin_blocks/d88q5l4/

more history https://www.reddit.com/r/btc/comments/6p076l/segwit_only_allows_170_of_current_transactions/dkmugw5/

https://twitter.com/kanzure/status/944656354307919873

Introduction

I’ll be talking about some recent advances in block propagation and why this stuff is important. I am going to talk about how the original bitcoin p2p protocol works, and it doesn’t work that way for the most part anymore. I will talk about the history of this, including fast block relay protocol, block network coding, and then building up to where the state of the art is today.

There are two things that people talk about when they talk about the resource usage of block propagation. One is the block propagation cost to node operators: they use bandwidth and CPU power to propagate blocks. This is obviously a factor. The original bitcoin protocol was particularly inefficient with the transmission of blocks. You would operate a node on the network, receive transactions as they came in, and then when a block was found you would receive the block, which included the transactions you had already received. That roughly doubled the amount of data that needed to be sent. But it was not a doubling of the node's overall bandwidth usage, because it turns out that nodes do things other than relay blocks, and those other things are even less efficient than block propagation. In particular, the process bitcoin uses to relay transactions is really inefficient. The standard bitcoin protocol method of relaying a transaction is to announce to your peers "hey, I know bitcoin txid 'deadbeef'", then a peer responds with "please send me the transaction with txid 'deadbeef'", and that's around 38 bytes of INV traffic on the wire for a transaction that is maybe 200 bytes. So you're using more than 10% of the bandwidth just to announce the transaction. Even if you got rid of block relay entirely, the reduction in bandwidth for a bitcoin node is only about 12%. Although 12% is not nothing; bitcoin is a highly optimized system and we grab every improvement that we can.
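As a back-of-the-envelope restatement of the announcement overhead described above (using the talk's rough numbers, not exact protocol message sizes):

```python
inv_overhead_bytes = 38    # per-transaction announce/request chatter, per the talk
typical_tx_bytes   = 200   # a smallish transaction

share_of_traffic = inv_overhead_bytes / (inv_overhead_bytes + typical_tx_bytes)
print(f"announcement overhead: {share_of_traffic:.0%} of relay traffic")  # ~16%, i.e. "more than 10%"
```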

The other problem with resource usage is that the old bitcoin block propagation mode is really bursty. You would use a steady amount of bandwidth as transactions came in, but when a block arrived you would suddenly use a megabyte of bandwidth all at once. Back at Mozilla, I could tell which of my colleagues were bitcoin users because during video chat their video would stall out in time with blocks appearing on the bitcoin network. So they would have to turn off their bitcoin nodes.

That behavior was a real usability factor in how comfortable it was to run a node on residential broadband. A few years ago there was a lot of noise on the internet about bufferbloat, routers adding excessive latency, and it still hasn't been fixed. A big block sent all at once is basically the perfect storm for bufferbloat, so it makes your VOIP mess up.

The bigger concern for block propagation is the latency of distributing a block all around the network, in particular to all of the hashpower. This is a concern for two major reasons. One is that if it takes a long time relative to the interval between blocks, then there will be more forks in the network and confirmations are less useful, because it becomes more likely that confirmations get reorged out. So this has an impact on bitcoin's interblock interval, currently 10 minutes, which is a nice safe number that is far away from convergence failures; in order to lower that number, block propagation needs to be up to the challenge. That's one reason it's important. The other reason is that block propagation time creates "progress". Ideally, mining is supposed to work like a perfectly fair lottery, where if you have 10% of the network hashrate you should mine 10% of the blocks. When you introduce delays into block propagation, mining works more like a race than a fair lottery. In a race, the fastest runner will always win unless the next fastest is very close in speed; a much slower runner will essentially never win. Mining is supposed to be a lottery, propagation delay makes it more like a race, and the reason is that if I don't know about the latest blocks then my blocks won't extend them and I'm starting behind; and if other miners don't know about my blocks, they won't mine on them either.

Why is progress in mining bad?

So why is "progress" in mining bad? As I was saying, the incentives are supposed to be such that every participant finds a proportion of blocks equal to their proportion of the hashrate. Slow propagation creates a centralization pressure: the longer blocks take to propagate, the bigger the advantage of consolidating hashrate, and there is no natural limit to this. The more centralized you are, the more hashpower you can buy, and the bigger you get. Miners also have the option of collaborating into a single pool. I'll talk more about this.

This is often misunderstood. When propagation comes up on reddit, there are vaguely nationalist sentiments, people saying things like "screw the Chinese with their slow connections". I think that's wrong, because the block propagation issue isn't better or worse for people with faster or slower connections; it's better or worse for people with more hashrate. So if it were the case that people in China had very limited bandwidth, they would actually gain from this problem, as long as they had a lot of hashpower, which in fact they do.

In general, this is a problem that we need to overkill. When we talk about how we spend resources in the community, there are many cases where we can half-ass a solution and that works just fine. That's true for a lot of things. Other things we need to solve well, and then there's a final set of things that we need to nuke from orbit. The reason I think block propagation is something we should overkill is that we can't directly observe its effects: if block propagation is too slow, it causes mining centralization, and we won't necessarily see that happening.

Why won’t miners solve block propagation?

https://www.youtube.com/watch?v=EHIuuKCm53o&t=7m30s

The problem adversely affects miners, so why won't they solve it? Well, one reason is that bad propagation is actually good for larger miners, who also happen to be the ones in the economic position to work on the problem in the first place. I don't think any large miner today, or in the recent past, has been intentionally exploiting this. But if you happen to be benefiting from an effect and it's profitable for you, you might simply not notice it; you're not going to go "hmm, why am I making too much money?" and then investigate. It takes a kind of weird person to do that. I am one of those weird people. But for the most part sensible people don't work like that, and that's an effect worth keeping in mind.

Also, miners' speciality isn't protocol development. If you ask them to solve this problem, they are going to solve it the miner way: the miner's tools are getting lots of electricity, lots of hardware, contracts, and so on. There are tools in the miner toolbelt, but protocol design isn't one of them. In the past, we saw miners centralize into pools: if their pool had too high an orphan rate, they would move to another (larger) pool with a lower orphan rate. This is absolutely something we have seen happen, and I'll talk about what we did to help stop those problems.

Another thing miners have been doing is sometimes extending chains completely blind: instead of waiting for a block to propagate, they learn just enough from another pool to get the header, and they extend it without validating, so they can start mining sooner. The bip66 soft-fork activation showed us that likely a majority of the hashrate on the network was mining without verifying. Mining without verifying is benign as long as nothing goes wrong. Unfortunately, SPV clients make a very strong security assumption that miners are validating and are economically incentivized to validate, and the miners' incentives don't work out like that, because they figure blocks aren't invalid very often. If you are using an SPV client and accepting millions of dollars in transactions, that's possibly bad for you, right?

Bitcoin is designed to have a system of incentives and we expect participants to follow their incentives. However, this isn’t a moral judgement. We need to make it so that breaking the system isn’t the most profitable thing to do, and improving block propagation is one of the ways we can do that.

Sources of latency

https://www.youtube.com/watch?v=EHIuuKCm53o&t=10m45s

Why does it take a while for a block to propagate? There are many sources of latency in relaying a block through the network. Creating a block template to hand out is something we can improve in Bitcoin Core through better software engineering, not protocol changes: in earlier versions it took a few seconds, today it's a few milliseconds, thanks to clever software optimization. The pool then dispatches the block template to its miners, and that can take entire seconds in some setups; that's not in my remit to control, it's something for the mining hardware manufacturers to fix. When you are purchasing mining hardware, you should ask the manufacturer for these numbers. These things can be fixed with better firmware.

You need to distribute your solution to peers outside of your network, which means sending the data over the wire. There are protocol roundtrips, like INV and getdata, and there are also TCP roundtrips, which are invisible to people, and people don't realize how slow they are. Before we had the block propagation improvements, we measured block propagation on the bitcoin network based on block size and orphan rate, and we found that the aggregate transmission rate through the network was only around 750 kilobits/second. This was because of node latency, and also just packet loss and TCP behavior; all of these things together meant that even though nodes had fast connections, propagation between nodes was only about 750 kilobits/second.

One thing that is very important to keep in mind is that if you're going to send data all around the world, there will be high latency hops: a hop from here to China is more than a few dozen milliseconds. And blocks cascade from peer to peer to peer, so those hops add up.

Obviously, in the original bitcoin protocol, every time you relayed a block you would first validate it. In v0.14 and v0.15 there have been speedups for validation at the tip through better caching. The more advanced propagation techniques can work without fully validating the blocks, though.

Original bitcoin block relay protocol

This is a visualization of the original bitcoin block relay protocol; time goes down. A block comes in, there's a chunk of validation, the node sends an INV message saying "hey, I have this block", the other node responds with "please give me this block", and then the node sends a megabyte of data. This is really simple, and it has some efficiency in that you don't send a 1 MB block to a node multiple times. But it's inefficient: a node already knows 99.953% of all the transactions in a block before the block arrives. And as I mentioned before, in addition to the roundtrip in the INV, getdata, block sequence, the underlying TCP protocol adds additional roundtrips in many cases. And you have to wait for validation.

Bloom filtered block relay

One of the earliest things people worked on came from Pieter and Matt observing that bip37 filtered block messages could be used to transmit a block while eliminating the transactions that had already been sent to that peer. In 2013, testing suggested that this actually slowed down transmission due to various overheads; blocks were much smaller then, and it was a different network than today. The filtering also assumed you would remember every transaction you had ever been sent, even ones you had discarded, which is obviously not workable. But it was the start of an idea: we got it implemented, tried it out, and it inspired more work and more refined ideas.

Protocols vs networks and the fast block relay protocol

Around this time, we started to see high orphan rates on the network, which contributed to pool consolidation. There was a bit of a panic among us; we were worried that the bitcoin network would catch fire and die. And when we saw ghash.io creeping up toward a majority of the hashrate on a single pool, that was a big issue.

https://www.youtube.com/watch?v=EHIuuKCm53o&t=16m50s

Matt’s idea was to work on a fast block relay network where he would spin up a bunch of really fast, well-maintained servers.

He would encourage miners to connect to these nodes, on the theory that relay in the network was slow partly because miners are hesitant to connect directly to each other for commercial reasons: if you know your competitor's IP address, you can DoS attack them or be attacked. So he set it up and it seemed to work. But nodes alone were not enough to solve the problem, so he also developed some custom protocols. This has resulted in a bit of confusion: sometimes "the relay network" refers to protocols like compact blocks and FIBRE, and sometimes to Matt Corallo's privately operated network, which became a testbed for protocol changes. So I want to emphasize that I'm not really talking about Matt's relay network here; I'm talking about relay network protocols that can be deployed on the bitcoin p2p network or already have been, and not necessarily Matt's private network, which sometimes uses the same protocols.

So the first widespread production use of a faster block relay protocol is what I call the fast block relay protocol (2013-2015). The idea is basically that you have some hub node, Alice, and she streams every transaction she sees to Bob, and Bob promises, by protocol contract, to remember the last 10k transactions Alice has streamed to him. When a block shows up, Alice can send short 2-byte ids (indexes) saying which of those transactions Bob should use to assemble the block. You don't have to send a megabyte of data; you send only two bytes per transaction. If there's a transaction Alice has never sent, she sends a zero and then the explicit transaction. One of the great things about this is that Alice doesn't have to ask permission to send this data: it doesn't hurt Bob to receive it, so she just streams it out. This doesn't require a round trip; the block is transferred in half a roundtrip time. The downside is that Alice has to send you every transaction on the network redundantly, and potentially blocks redundantly too, and the protocol has quite a bit of overhead if you have many Alices. Some miners ran their own copies of this as well. It has high overhead, so it's better used in a hub-and-spoke arrangement.
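Here is a toy sketch of the index/escape encoding described above (my own illustration; the actual protocol's framing differed, and a real block message would also carry the header and other fields). Alice and Bob both keep the same window of recently streamed transactions; each block transaction becomes a 2-byte index into that window, with index zero reserved as an escape that is followed by the full transaction.

```python
import struct

def encode_block(window, block_txs):
    """window: the last ~10k transactions Alice already streamed to Bob,
    in an order both sides agree on. Indexes are 1-based so 0 can be the escape."""
    position = {tx: i for i, tx in enumerate(window)}
    out = bytearray()
    for tx in block_txs:
        if tx in position:
            out += struct.pack(">H", position[tx] + 1)          # 2-byte reference
        else:
            out += struct.pack(">H", 0)                          # escape marker
            out += struct.pack(">I", len(tx)) + tx               # full transaction
    return bytes(out)

def decode_block(window, data):
    txs, pos = [], 0
    while pos < len(data):
        (idx,) = struct.unpack_from(">H", data, pos); pos += 2
        if idx:
            txs.append(window[idx - 1])
        else:
            (n,) = struct.unpack_from(">I", data, pos); pos += 4
            txs.append(data[pos:pos + n]); pos += n
    return txs
```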

Block network coding (2014)

I created an omnibus protocol design where I took every good idea I could find and wrote it up on a page I called block network coding (2014), which I dare you to read. I came up with ideas that I could argue achieved communication-optimal block propagation, in the sense that it can't be made more efficient. When we tried to implement some of these ideas, they turned out to be very CPU inefficient, and it doesn't matter how small you make the block if you're going to spend 4 CPU seconds decoding the thing once it shows up. So there are many ideas all mixed together on that page, and almost all of them have since ended up in production systems, but it required a lot of engineering to make them fast and to find the ones that paid for their cost in both implementation and CPU.

bip152 compact blocks

The next big advancement in block propagation started from work in 2013 and would eventually get written up as bip152 compact blocks. The primary goal of bip152 was to eliminate the overhead from block relay and get rid of the double transmission of transactions. Many of the developers prioritized doing bip152 to try to negate some of the potential negative effects of the segwit capacity increase, by making it more efficient to transmit blocks in advance of the segwit activation. So as I said, compact blocks is really designed to lower bandwidth usage; it's not really designed to lower latency, but it does that too as a side effect. There's a design draft document that is linked from the slides. In December 2015, I wrote a long high-level design document, Matt made it better, and it turned into bip152 in April. Today, compact blocks are used by greater than 97% of the nodes on the network, and it's what carries blocks on the bitcoin network today.

Compact blocks has two modes: non-high bandwidth and high bandwidth. So what's a compact block? The idea is that we want to exploit what the far-end, the remote party, already knows, even though we don't know exactly what they know. What a compact block sends is the block header, the transaction count, an 8-byte nonce which I will talk about in a minute, and then a sequence of short IDs, 6 bytes each, one for every transaction in the block. A short id is, effectively, a hash of the txid and a salt. It can also carry some transactions explicitly, like the coinbase transaction, which the far-end cannot have seen before, so you just send it outright. When you receive one of these compact blocks, you scan your mempool and whatever other sources of transactions you have lying around and see if you get any matches. The scheme uses siphash as its main hash function because it's very fast, and it needs to be fast because you have to scan through your whole mempool with that salt looking for matches. This scheme only needs to send 6 bytes per transaction in a block, which is less efficient than the earlier fast block relay protocol's 2-byte indexes, but it has no requirement for a hub-and-spoke topology. So it's about 3x less efficient, but still way more efficient than sending full transactions. There is some complexity as a result: the short 48-bit ids could theoretically have collisions, and the protocol has to detect when something is missing and fill it in.

I mentioned that the short ids are salted, and the reason is pretty straightforward. Even if our short ids were 8 bytes long, so even less efficient, without a salt it would be easy for a troublemaker to produce two transactions with the same short id, spam the network with them, and consequently jam up block propagation; a miner could even do this to turn a profit. So in the compact block scheme the short id is salted using the header of the block the transactions appear in, and an attacker announcing transactions to the network cannot have known that salt, because the block hasn't been found yet by definition. There is also an additional salt provided by each party that generates a compact block. The idea of this additional salt is that if, just by dumb luck, some pair of transactions in the block or in the mempool happens to collide under the block-header salt, then having different salts on different links means those collisions happen randomly in different places on the network instead of everywhere at once, and blocks can propagate around the slowdown. If the Alice-Bob connection gets slowed down by a collision, the block just routes through the next fastest path.

High bandwidth mode starts with the receiving peer requesting it. Then, at some arbitrary time in the future, a block shows up and a compact block is immediately sent. You don't validate the block first; the validation box is moved down (referring to this graph on the slides), so validation occurs at the same time that you're sending out the block.
The reason this is okay is that the protocol is defined to work this way, and high bandwidth mode is only used between full validating nodes, so you'll notice if the block is bad and you won't be handing it out to SPV clients. Most of the time, some 85% of the time, that's where the protocol stops: the block shows up, the high bandwidth compact block peer sends it to you, and you're done. But if there was a hash collision, or you didn't have a transaction, you have to disambiguate: you ask with a getblocktxn message, and the remote side responds with the missing transactions you asked for. The low bandwidth mode works similarly, but you validate before relaying. Just like the original block transmission, the peer announces the block, you respond with "yes, please send me the compact block", and so on. It adds an extra roundtrip, but it's more bandwidth efficient. In high bandwidth mode, if you ask multiple peers for high bandwidth service, you might get multiple copies of the compact block, which on average is maybe 13 kilobytes. It wastes a little bandwidth, but it's much faster.

Bitcoin Core asks the 3 peers that most recently relayed blocks to it to use high bandwidth mode. We found that, the vast, vast majority of the time, the first peer to relay a new block to you is one of the last three peers to have done so before, in terms of the network topology. This redundancy also has other positive effects. In the original bitcoin protocol you can DoS attack a node by saying "I have a block", waiting for it to ask for the block, and then never replying, so the timeouts are long. This redundancy in transmission means that you bypass those delays and timeouts.

Summarizing compact blocks and its two modes: the high bandwidth mode takes half a roundtrip time, just a send in one direction, up to 1.5 roundtrip times if transactions are missing. The non-high bandwidth mode takes 1.5 to 2.5 roundtrip times. Whether you're in high bandwidth mode or not, and whether you take 0.5 or 2.5 roundtrip times, in all cases compact blocks saves 99% of the bandwidth used for relaying blocks, which is on the order of 12% of a node's total bandwidth. Further protocols can't really meaningfully improve the node's bandwidth overall; if you halve what's left, you're only halving 1% of 12%. But latency could be improved. Compact blocks in the worst case can take 2.5 roundtrip times. For a long haul link around the world, that's really long. The fast block relay protocol showed that it could be better.

What about xthin?

There's another protocol called xthin which I'll comment on briefly, because otherwise people are going to ask me about it. It was a parallel development: Matt and Pieter did this bloom filter block stuff in 2013, Mike Hearn unearthed it and made a patch for Bitcoin-XT which didn't work particularly well, and the "Bitcoin Unlimited" (BU) people picked up Mike Hearn's work, apparently unaware of all the other development that had been in progress on this. Developers don't always communicate well. It's a very similar protocol to compact blocks. Some BU advocates have irritated me pretty extensively by arguing that compact blocks was copied from xthin. Who cares, and it wasn't; they were just developed in parallel. It has some major differences. One is that it uses 8 byte, so 64-bit, short IDs, and it doesn't salt them, which means it has this vulnerability where you can easily construct a collision. It turns out that a bunch of the BU developers and their advocates didn't know about the birthday paradox effect on how easy it is to create collisions. They argued it wasn't feasible to construct a 64-bit collision, and I had a bunch of fun on reddit for 2 days responding to every post with collisions generated bespoke from the messages, because they said it would take hours to generate them; I was generating them in about 3 seconds or something. The other thing it does differently from compact blocks is that it can't be used in this high-bandwidth mode where you send an unsolicited block. With xthin there's always an INV message, and the response to the INV message sends a bloom filter of the receiving node's mempool, which basically says "here's an approximate list of the transactions I know about". As a result of that bloom filter, which is on the order of 20 kb typically, most of the time there's no need for an extra round to fetch missing transactions, because the sender knows which transactions are missing and sends them along. So the way I look at this optimization is that it costs a constant 1 roundtrip time, because you can't use high-bandwidth mode, plus bandwidth plus CPU plus attack surface, all to save a roundtrip less than 15% of the time, because high bandwidth mode doesn't need that roundtrip 85% of the time. I don't think that is useless; I think it would be useful for the non high bandwidth use case, but it's a lot more code and attack surface for a relatively low improvement. This is particularly interesting for xthin because, due to political drama, it was rushed into production because they wanted to claim they had it first, and it resulted in at least three exploited crash bugs that knocked out every Bitcoin Unlimited node on the network, and every Bitcoin Classic fork node on the network (not Bitcoin Core nodes). And when they fixed some of those bugs later, they introduced a bug that could cause nodes to get split from the network; particularly, interestingly, they introduced a bug where a short ID collision would make the node get stuck, a combination of two vulnerabilities. So there is something to learn from that.

What about "Xpedited"?

The BU folks also have this thing called "Xpedited", which was basically a response to bip152, because their forwarding without high bandwidth mode is much slower. It's a manually configured mode that works almost identically to bip152 high bandwidth mode: it doesn't send a bloom filter and it sends without asking. But since it's manual configuration I don't think anyone really ever used it, and I'm confident that it's not used today, because their software doesn't support segwit, only miners would use this, and miners need software that supports segwit. It doesn't have a spec either. I think it uses the same vulnerable short ID scheme, but I'm not actually sure without having read the code. But it's a thing, so now you know.

Better latency than bip152?

I mentioned above that I don't think you can do much better on bandwidth, but you could certainly do better on latency. And we care a lot about the worst case on latency. If miners can gain from not cooperating, from filling a block full of novel transactions that you have never seen before, then we should expect that sooner or later someone is going to do it. In the worst case, bip152 behavior is just like the original behavior: it will ultimately send a megabyte of data, but with one more roundtrip time, so in the worst case it's potentially worse. On fast low latency links, bip152's extra roundtrip is irrelevant, so you could say that bip152 is all you need on a fast low latency link. But this 2.5 roundtrip time worst case is really awful internationally, especially because long haul international links typically have on the order of 1% packet loss, which makes TCP stall.

Erasure coding

https://www.youtube.com/watch?v=EHIuuKCm53o&t=35m20s

I want to take a quick detour into some fundamentals. I think most people are at least vaguely familiar with the concept of an erasure code. This is what RAID does with your disks. Basically you can take n blocks of data and encode them into n+k blocks of data such that any n out of the n+k are sufficient to recover the whole thing. This is what RAID6 does, for example: you've got two extra drives, and as long as you have no more than two failures, you can recover all your data. It might seem a little magical, but if you think about the k=1 case, you can just use XOR and work out on paper how that works: you compute your redundant block as the XOR of all the other blocks. This can be done for any n+k. So you could split a bitcoin block into a thousand pieces such that you only need any 500 fragments to recover the whole block. But doing that efficiently is not easy computationally. If you use a Reed-Solomon code, which is the canonical, optimally space-efficient tool for this, it's very slow when n and k are large. There are other techniques we could use; they aren't perfectly efficient, they sometimes require an extra block or two, but they are very fast. So this is an erasure code; keep this construct in mind.
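
A minimal sketch of the k=1 XOR case described above: one parity block lets you recover any single missing block. This is purely illustrative; real schemes like Reed-Solomon handle k greater than one.

```python
def xor_blocks(a, b):
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks):
    # The redundant block is the XOR of all data blocks (the k = 1 case).
    parity = bytes(len(blocks[0]))
    for blk in blocks:
        parity = xor_blocks(parity, blk)
    return parity

def recover_missing(blocks_with_one_missing, parity):
    # XOR of the parity with all surviving blocks reconstructs the missing one.
    acc = parity
    for blk in blocks_with_one_missing:
        if blk is not None:
            acc = xor_blocks(acc, blk)
    return acc

data = [b"chunk-1!", b"chunk-2!", b"chunk-3!", b"chunk-4!"]
parity = make_parity(data)
damaged = [data[0], None, data[2], data[3]]   # chunk-2 was lost
assert recover_missing(damaged, parity) == b"chunk-2!"
```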

Network coding

Another concept, which various highly academic groups in the IETF like to talk about, is network coding, which is an application of linear codes and erasure codes to multicast and broadcast. The idea is that if you want to send a piece of data to a bunch of different nodes, you can split it into parts, apply erasure coding, and spread the pieces out over the network so that you never send the same data twice. You make optimal use of your data links, all the peers in the network collaborate to recombine the data, and as soon as they have enough of it they are done. You can think of it like bittorrent swarming, but it can be more communication efficient because you don't need to locate a correct source: everyone is a correct source. For actual usage in bittorrent, computational efficiency is more important. For block relay, though, we don't really want a lot of back-and-forth two-way communication, so we want to not need to know where our sources are.

Bitcoin FIBRE network

So this brings us to bitcoin FIBRE (fast internet block relay engine), a protocol heavily derived from the block network coding writeup that I did early on. The idea of FIBRE is basically this: you start with bip152 compact blocks high bandwidth mode, but you modify the short IDs to also carry the transaction sizes. Then you transmit over UDP instead of over TCP and you apply erasure coding to the blocks. You take the 1 MB block and expand it into 8 MB of redundant data, and you don't send the original block data. You only send the redundant data, and you stream it out to different peers, and all of those peers stream it out to each other. So you send each peer a chunk, and then the peers all send each other the chunks. All the chunks are useful because no duplicate data is sent. Then, when the receiver gets this data, they use the size-augmented short IDs in the compact block to lay out the block in memory; there will be holes in the block for the transactions they didn't know about, and they use the erasure code to fill in the holes and make a complete block. So effectively this lets you communicate a block to a remote end where you have no idea what they know, but if they know something they can make good use of it. You can send it out without needing a response, and they are guaranteed to get the block so long as they receive at least as many packets as they were missing. You never need to hear back from them; you never take the roundtrip delay, just a one-way delay.

So FIBRE lets you build a single globally-distributed node with very, very fast relay of blocks internally. What this actually achieves in practice, on Matt's relay network which uses this, is a 95th percentile latency under 16 ms over the speed of light delay. The speed of light delay is how long it takes one byte to get from one end of the world to the other. So 95% of the time, the system is able to transmit the entire block within 16 ms of that. Pretty good; it could be better.

Q: …. pacific ocean …

A: Yes, oceans are big.

So this scheme has a lot of advantages, right? It's unconditional, uni-directional transport; you never need a roundtrip, unless a node is just offline and didn't get any of the packets. But it needs to be used only within a single administrative domain, because part of the way it works and is so fast is that nodes relay packets between each other without being able to validate them, since they can't reconstruct the block yet; they just relay packets as soon as they come in. This would be a DoS vector if it were open to the public. Also, this has absurd bandwidth overheads (sending 8x the total block data, for example), but unlike TCP based protocols these overheads don't come at the expense of additional delays; they lower delays. In the production FIBRE network, it sends 8x the total amount of block data. But unlike the overheads in the original bitcoin p2p relay protocol, these overheads don't make the block slower to reconstruct. You don't have to receive all 8 copies; you just need to receive one copy's worth of data, and most of that comes out of your local mempool anyway.

The way that Matt uses this protocol in his own network is that he has a bunch of nodes on well-maintained systems; they speak FIBRE amongst each other, and then they speak regular bip152 compact blocks high-bandwidth mode to peers, anyone who connects to his nodes. Because these bip152 links are short, since everyone connects to the closest FIBRE node, the fact that bip152 has roundtrips doesn't matter much in the end.

What can we do better?

Of course, there's the drive to do better. Like I said, we want to overkill this. There are lots of tools in the toolbelt that could be used to make some of this stuff better. One of those tools is set reconciliation. This is based on the same technology as the erasure coding stuff. The idea is that I have a set, a list of transactions; we don't know what each other's sets are, but we assume they are really similar, with maybe some minor differences. It turns out that there are protocols where someone can send a single message to another party with a similar set, without knowing how similar, and the message has size equal to the symmetric set difference: the number of entries I send is the number of things I have that you don't, plus the number of things you have that I don't. There are protocols that allow you, with no more data than that, to recover the other set: I can send you a message, and now you know my set. There are algorithms to do this which are communication optimal, meaning there is no way to do it more efficiently in terms of communication. They are based on interpolating roots of polynomial ratios. They are similar to RS codes, but more expensive to decode: they take CPU time that is cubic in the set difference, and all of the operations are expensive. There are some newer techniques such as IBLT (invertible bloom lookup table) which are fast, linear in the size of the set difference, but they have bad constant factors in communication, so they send more data. If your set differences are small and the entries are small, then with IBLT you may have to send something like 5x the set difference to get a very reliable decode.

Applying set reconciliation?

So how could you use set reconciliation technology? Back in this early efficient block transfer design doc that I wrote, I described that you could use set reconciliation to send the short IDs in a compact block and use much less bandwidth. But the big challenge in using that is that you still have to communicate the transaction order, because that's normative in a block, and miners can use whatever order they want, subject to the transaction dependency graph. And also, … they were unaware that I had previously suggested it, and they went and implemented it using IBLT. Their protocol assumes you have changed the bitcoin consensus rules so that transactions must be listed in a certain order. But in bitcoin today, miners order transactions by how profitable they are to the miner, which is an unknown function to the receiver. This ordering from most income to least income means that mining pool software can fit transactions into blocks by dropping the last couple of transactions, and that would be broken if you had a required order. Using set reconciliation this way also ignores latency in general; in particular, the Graphene paper only talks about bandwidth, never about latency. Since it's ignoring latency, it probably would have been better to use polynomial set reconciliation instead of IBLT. But in any case, on one hand I'm really excited about set reconciliation because it's like cool rocket science; on the other hand, the IBLT algorithm in particular is a great programming project, basically any programmer could implement it and it's magic. I would actually recommend implementing IBLT decoders if you want to do something fun. But this technology doesn't really seem like it would lower latency in any normal application of block propagation. Any further bandwidth improvements are shaving off fractions of a percent; the graphene stuff for example ((talk)) is going to save only a couple hundred kilobytes per day on a node. It's not that exciting. If the set reconciliation fails, you pay a roundtrip time, and there's no way to incrementally increase the size of an IBLT, so there are negatives there. Bandwidth for relaying blocks is 99% solved; using set reconciliation there is trying to improve something that is already mostly solved. But I did say only 99% solved, so there are areas for improvement.
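
Since implementing an IBLT decoder is suggested above as a fun exercise, here is a rough, simplified Python sketch of one: each cell keeps a count, an XOR of keys, and an XOR of key checksums, and decoding "peels" cells that contain exactly one key. This is an illustrative toy, not the Graphene construction or any production code; the table size, hash counts and checksums are arbitrary choices.

```python
import hashlib

NUM_CELLS = 64      # table size; must be generously larger than the expected set difference
NUM_HASHES = 3      # number of cells each key is mapped to

def _cell_index(key, i):
    # Map a key (bytes) to one of the cells, using hash function index i.
    return int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4], 'big') % NUM_CELLS

def _checksum(key):
    # Short checksum used to detect "pure" cells during peeling.
    return int.from_bytes(hashlib.sha256(b'chk' + key).digest()[:4], 'big')

def new_table():
    # Each cell: [count, XOR of keys (as int), XOR of checksums].
    return [[0, 0, 0] for _ in range(NUM_CELLS)]

def insert(table, key, sign=1):
    k = int.from_bytes(key, 'big')
    for i in range(NUM_HASHES):
        cell = table[_cell_index(key, i)]
        cell[0] += sign
        cell[1] ^= k
        cell[2] ^= _checksum(key)

def subtract(a, b):
    # Combine two tables so the result encodes only the symmetric difference.
    return [[ca[0] - cb[0], ca[1] ^ cb[1], ca[2] ^ cb[2]] for ca, cb in zip(a, b)]

def decode(table, key_len):
    mine, theirs = set(), set()
    progress = True
    while progress:
        progress = False
        for cell in table:
            if cell[0] in (1, -1):
                key = cell[1].to_bytes(key_len, 'big')
                if _checksum(key) == cell[2]:          # pure cell: exactly one key in it
                    (mine if cell[0] == 1 else theirs).add(key)
                    insert(table, key, sign=-cell[0])  # peel it out of the table
                    progress = True
    return mine, theirs

# Toy usage: two peers with mostly-overlapping "mempools" of 8-byte ids.
alice = {bytes([i]) * 8 for i in range(20)}
bob = (alice - {bytes([3]) * 8}) | {bytes([99]) * 8}
ta, tb = new_table(), new_table()
for k in alice:
    insert(ta, k)
for k in bob:
    insert(tb, k)
only_alice, only_bob = decode(subtract(ta, tb), key_len=8)
assert only_alice == {bytes([3]) * 8} and only_bob == {bytes([99]) * 8}
```

Note that common entries cancel out exactly in the subtracted table, so the amount of data that actually has to be exchanged scales with the difference, not with the size of the sets.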

So can we do better?

There are cases where bandwidth improvements would still be useful. 20 kb doesn't take long to send over normal links, but there are cases where getting this down will improve latency. There are also some things we could do which are not necessarily useful for block propagation but can improve bandwidth overall.

When is 20 kb going to impact latency? Well, one important use case is satellites. The blockstream satellites are broadcasting blocks right now. This is to improve the partitioning resistance of bitcoin: you can receive blocks over the radio, and then use your local bandwidth only for sending out transactions. The link speed on the satellite is only 80 kb/sec, which is enough to stay in sync with the bitcoin network. It uses software-defined radio, so bandwidth can be increased or decreased with some tradeoffs. This 20 kb compact block means it takes a minimum of 2 seconds to send a block across it, which isn't attractive for mining. You could mine over it, but it's not particularly good. Another area that could be improved is transaction compression. The early fast block relay protocol used LZMA compression and it turned out to be worthless, because compressors don't do much when you use them on small amounts of data, and the faster compressors do even less. You can get good compression if you compress one or more blocks at a time, but that's not useful for transaction relay, where transactions are sent individually as they arrive. It's possible to construct alternative serializations for bitcoin transactions: you can convert from a normal transaction to the alternative serialization losslessly, and you could use it on the whole of bitcoin's history. Pieter and I figured one out that can reduce the size of the entire history of bitcoin to 28%, while working on a single transaction at a time. It's CPU intensive, and it only works on a per transaction basis, so there are even more optimizations that could be done. If this were used, it could make block storage smaller, transactions smaller, and block propagation smaller, but the real effect for users would probably be on transaction relay.
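
A quick, generic illustration of the point about compressors and small inputs (this is just a zlib demo on synthetic data, not the alternative serialization work described above): compressing many transaction-like records together lets the compressor exploit structure shared across them, while compressing each record alone saves little or nothing.

```python
import os
import zlib

# Synthetic "transactions": random unique fields around a shared script-like template.
template = bytes.fromhex("76a914") + b"\x00" * 20 + bytes.fromhex("88ac")
txs = [os.urandom(36) + template + os.urandom(8) for _ in range(500)]

original = sum(len(tx) for tx in txs)
individually = sum(len(zlib.compress(tx)) for tx in txs)   # little or no saving per record
together = len(zlib.compress(b"".join(txs)))               # shared structure compresses well

print(f"original:               {original} bytes")
print(f"compressed one by one:  {individually} bytes")
print(f"compressed all at once: {together} bytes")
```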

Set reconciliation for transaction relay

Another idea that would be really powerful is set reconciliation for transaction relay. It's not very useful for block relay, but transactions are naturally a set. In 2013 I posted on bitcointalk about a scheme where, instead of a node offering transactions to all of its peers, you would offer each transaction to one peer and sometimes send it to another peer based on a coinflip. If you did this alone, a transaction wouldn't get very far in the network, because transactions go around until they make a cycle and then they stop. But if nodes periodically ran set reconciliation to compare their mempools with each other, then every time a transaction got stuck in a part of the network it would be able to break out. This would make transaction relay really efficient, because it's low bandwidth.

Template delta compression

Another thing I have been working on is template-delta compression. The big cost in set reconciliation is sending the order of transactions. What if you sent a block template in advance of getting a block? You send the template over to someone; then instead of sending the block in the future, you send the difference from the template. If I have an efficient way of encoding the block I was expecting versus the block I actually got, or rather encoding the difference, then maybe you could propagate this pretty quickly. With bip152 we made a stateless protocol to suit many peers; but what if we introduce state again so we can send differences? I designed a fast compressor for typical permutations, using the av1 range coder which I worked on in a previous life. What I found with the scheme I have come up with so far is that if you look at the difference between a 30 second old template and the block that actually shows up, the median size is 2916 bytes, the mean 3041 bytes, and 22% of blocks fit in less than one IP packet over the internet. Saving bandwidth isn't necessarily important in many cases, but for the blockstream satellites this can get block transmission down to a few hundred milliseconds from whole seconds.
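
A very rough sketch of the template-delta idea (illustrative only; the real encoder described above uses a range coder and a far more compact encoding): the receiver holds a template, and the sender encodes the block as references into that template plus the few transactions the template missed.

```python
def encode_delta(template_txids, block_txids):
    # Encode the block relative to a previously shared template:
    # known transactions become indexes into the template, unknown ones are sent explicitly.
    index = {txid: i for i, txid in enumerate(template_txids)}
    delta = []
    for txid in block_txids:
        if txid in index:
            delta.append(("ref", index[txid]))   # a couple of bytes in a real encoding
        else:
            delta.append(("tx", txid))           # the full transaction must be sent
    return delta

def decode_delta(template_txids, delta):
    return [template_txids[v] if kind == "ref" else v for kind, v in delta]

# Toy usage: the template mostly predicted the block, with one surprise transaction.
template = ["a", "b", "c", "d", "e"]
block = ["a", "b", "z", "c", "e"]            # "z" arrived after the template was sent
delta = encode_delta(template, block)
assert decode_delta(template, delta) == block
```

The interesting part, and the part this sketch glosses over, is encoding the ordering cheaply when the block mostly follows the template's order, which is what the range-coder based differencer is for.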

Template deltas for pre-consensus and weak block schemes

The other reason why I think this delta stuff is interesting, other than for the satellite, is that there's a set of proposals out there that I call pre-consensus. This whole block propagation problem exists because mining is a race about announcing blocks. What if miners could get together and decide in advance what to include in the next block? Then when a block is found, all you have to send is the things the pre-consensus said it would have, plus the payout amount. There's bitcoin-ng, but it's not very bitcoiny: it introduces miner identities, and you can't slot it into bitcoin without very deep economic and technical changes. But you could do pre-consensus in a bitcoin compatible way, like with weak blocks, where miners use a p2pool-like sharechain that pre-commits to the transactions they intend to include in the future; it comes more frequently than blocks, and when a block shows up it just references one of the weak blocks. This is a perfect fit for template deltas; the difference would be negligible.

Known improvements possible….

There are other known improvements that are possible that people have been talking about, like the FIBRE swarming block thing, which today is limited to a single administrative domain; it could be swarmed over the whole bitcoin network by making blocks commit to the chunks that get sent out. But the error correction code would have to be part of the bitcoin consensus rules, and the validation would require CPU time. I don't know how reasonable it would be to do that, and there are some downsides as well: if you want to prove that a particular packet was one of the ones a miner authorized, the SPV proof or hash tree for that would be quite large.

Consensus could require an order that obeys dependencies, which would make set reconciliation more efficient, but it would break truncation for miners. When a miner prioritizes transactions, those transactions end up at the top of the block. Bitcoin nodes could re-sort transactions so that they are in a more predictable order, but it's unclear if this is worth implementing given its complexity. I think one of the big surprises is that there are a lot of ideas that sound awesome but, once you implement them, make things worse. We benefited a lot from Matt's relay network because we were able to take these ideas and check if they work. It was only his nodes, so if something was bad we could change it, and we weren't rushing out tech to the bitcoin network that could be used, say, to crash every node on the network. Some of the ideas turned out to be slow or whatever, but they could be fixed or scaled back.

Q&A

https://www.youtube.com/watch?v=EHIuuKCm53o&t=57m24s

Please wait for the mic if you have a question.

Q: On your favorite topic of how to make compact blocks even smaller or something like that; so, the thing where you’re getting these like 2 byte indexes of transactions, would it be more efficient to send a compactified bitfield?

A: Yes.

Q: And then the permutation. And you want to get things down to fitting into a single packet. Isn’t that just 2kb for a permutation?

A: If the permutation is uniform, yes. But we have a strong prior for what it should look like, which is why I am able to get to that size with my template differencer. Miners produce blocks that are ranked by ancestor fee rate, plus some exceptions where they prioritize transactions and move those to the top of the block. My encoding scheme is efficient for "here's some stuff out of order and the rest is in order". The 2 byte thing could be made more efficient, but the tradeoff is CPU time. Reading 2 bytes and indexing is really fast. You could use an arithmetic coder and do the things my template differencer does, but it's quite a bit slower.

Q: As far as reducing bandwidth used by relaying transactions, a pretty good network topology for doing that is one where full nodes become part of some club and each transaction is assigned to a club. So you stream it out to the club. And then you have one peer in that club…

A: There are many papers on efficient gossip networks. One of the things that academic work doesn't really address is byzantine fault tolerant gossip networks. The efficient gossip networks tend to be ones where a single malicious peer can black out all your transactions. But in bitcoin you can have nodes that are completely malicious and it won't really slow the propagation of transactions at all. So far academia hasn't been super useful for concrete schemes. I think the low fanout plus set reconciliation scheme is byzantine robust, but I don't have a strong proof of this yet.

Q: What’s the current propagation delay across the network and what’s a good target for that?

A: We have lost our ability to measure it because the orphan rate is really low at this point. It's hard to say. There are some stats people have from polling random peers; Christian Decker has a chart. It's gotten much better, which is nice to see, but this doesn't look at miners, just peers. The best way we have to look at block propagation times from miners is how often blocks get orphaned. We overshot on some of this work: we improved some of these things but the orphan rate wasn't going down. And then miners upgraded to segwit, and the orphan rate went down, and we went for like 5000 blocks without seeing a single orphan, which was unlikely. I think the numbers are pretty good on the network. But the challenge is: is propagation time still adequate once we start producing blocks with more data in them?

sipa: The fact that we are seeing low orphan rates on the network means either our technology is awesome and we have solved the propagation delay….. or all miners are working together and it’s entirely centralized and we’ve failed. So it’s hard to tell.

A: Before all of this optimization, there were like 2 or 3 orphans per day, so it wasn't a very clear indicator in the first place, but it was one of the best we had, because it directly measures what we're trying to solve.

Q: Is block propagation time bad on its own, or is just because of the effect on orphan rate?

A: It's a little bad on its own, independent of orphan rate, because it means you take longer to see what the status of the network is. But suppose you have a scheme where there are no orphans: there are radical rethinkings of the blockchain with no orphans, like schemes with DAGs, where blocks aren't a straight linked list, there's a forest of blocks, transactions can occur in multiple places, and there's some arbitration technique for dealing with conflicts. But the problem is low throughput, because you have to send transaction data redundantly multiple times.

Q: It seems like having this arbitrary permutation encoded in a block kind of sucks.

A: It's something I would like to fix, but, I know in genetics it's commonly the case that there are strange, apparently useless effects that turn out to be vital to life because other things have developed a dependency on them. In bitcoin, miners really do truncate blocks. If we take that away, would they care? It's hard to get miners to talk about protocol details. You could require an order subject to the dependency graph, which is easy to do inside of node software, but hard otherwise. If I was doing bitcoin design today, I would probably require such an order.

Q: When you said block propagation got better with segwit, is that because segwit helped it?

A: No. It turns out that there were some large miners running really old software like Bitcoin Unlimited, which was forked from a really old bitcoin version that didn't have the cache improvements or any of the other improvements. There just weren't enough miners running improved software. Then miners switched to modern Bitcoin Core and the orphan rate fell through the floor. It was difficult to tell beforehand, because slow propagation doesn't just cause the miner that's slow to have orphans, it causes everyone to have orphans. You couldn't really say "ah, Antpool has lots of orphans and Antpool is running BU" or whatever…

Q: So we're in a stable time for network conditions. We do see major network partitioning like the GFW or Syria or Turkey… is there a way to measure what happens if the internet becomes a lot more fractured?

A: This is one of those things that is important but also hard to measure. It's not like the internet in the developed world fractures every day, so we don't have lots of data about how fractures or partitions impact the bitcoin network. So we need to overkill partition resistance. Satellites are one way. We could do more network monitoring in the community, but it's not anybody's job to run the bitcoin network operations center… so unless someone like BlueMatt steps up and starts monitoring it like he does, we don't get broad network data and nobody looks at it. There are some things, like twitches we can see in cdecker's block propagation data, and if there were a widespread internet outage we might learn something about bitcoin's network partition resistance.

Q: My question here… you mentioned BlueMatt's network and how it has been useful for implementing and iterating on these things… so what's the difference between what Matt did on his network and regtest or testnet? Why has it provided such huge benefits, and why not replicate that in the normal…?

A: On all of these tools and techniques, we did lots of testing, sandboxing, and simulated environments; there are Linux dummynets to simulate packet loss and latency. But the problem with these things is that they only let you deal with known unknowns. You already know you need to measure the situation with packet loss, but what about the actual network out there? And what about user factors? We learned a lot about interacting with miners: what motivates them, what interfaces they prefer, how sustainable manual configuration is. Production usage for things like this is no replacement for a test, but a test is no replacement for production usage. And there was no real risk that Matt's network would break bitcoin; the worst that could happen is that Matt's network would stop working, and that's the worst it could do.

Q: Can there be multiple relay networks?

A: There are in fact multiple relay networks. Some miners run their own relay networks between their data centers and they peer with themselves and friends. There's a social dynamics question of how well multiples of these can scale; any of them require manual configuration. Matt would love it if someone else would run a well-maintained relay network. He was previously getting donations to fund it, but the effort to collect the donations each month was more than it was worth to him. And it turns out there are bitcoin miners that don't want to pay bitcoin for things; there were people that only wanted to pay with wechat or something. So he pays out of pocket, and it was faster before, when we had better funding for it.

Q: …

A: We have tested other heuristics. Compact blocks can send transactions opportunistically; it always sends the coinbase transaction because it knows you don't have it. I have tested a heuristic that basically works like "if a transaction was surprising to me, then I should assume it's surprising to my peers too". This is an obvious area that could be investigated. Why do misses occur in compact blocks? One reason is that a transaction arrived just milliseconds after the block; another reason is that there was a double spend, and you have a different version than the one the miner confirmed. We addressed this somewhat in Bitcoin Core: we don't just use the mempool to reconstruct from a compact block, we have an extra pool that includes double spends and replacements that we keep around for that reason.

Q: There is some research about gigabyte-sized blocks. The issue here– have you tried this yourself?

A: I haven't tried that on Matt's network, but in private test labs, sure. Being able to run large blocks over this stuff is not a surprise; it's designed to be able to handle it. The problem is the overall scaling impact on the network, not just whether my one node on super fast hardware can keep up. The bloom filter that xthin sends adds a superlinear scaling component that is worse than compact blocks. You can run a gigabyte block, but good luck keeping a node affordable to run on that for any period of time…

Q: I was wondering about transaction throughput on the bitcoin network. I have heard it's like 10 or 18 transactions per second with segwit. In practice it's like 3?

A: The throughput depends on what transactions people are producing. Prior to segwit, people started to do batched sends, which means transaction count is a poor metric of the capacity of the network: the number of transactions can go down while the number of UTXOs, or the number of people using it, goes up. Segwit is only being used by about 10% of transactions so far. If you're not using segwit and you care about transaction fees, you should use segwit; it's a massive decrease in fees. I haven't looked today, but in the past couple of days fees were around 5 sat/byte… Part of the design of segwit was to avoid creating a system shock by adding lots of capacity and causing a fee market collapse or something. There's an incentive to use segwit that goes up when fees go higher, so we get more capacity when there's more demand.

forward erasure correction codes https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding


Is Taproot development moving too fast or too slow?

Speakers: Greg Maxwell

Date: July 20, 2020

Transcript By: Michael Folkson

Tags: Taproot, Soft fork activation

Media: https://www.reddit.com/r/Bitcoin/comments/hrlpnc/technical_taproot_why_activate/fyqbn8s?utm_source=share&utm_medium=web2x&context=3

Taproot has been discussed for 2.5 years already, and by the time it activates it will certainly, at this point, have been over three years.

The bulk of the Taproot proposal, other than Taproot itself and specific encoding details, is significantly older too. (Enough that earlier versions of our proposals have been copied and activated in other cryptocurrencies already)

Taproot’s implementation is also extremely simple, and will make common operations in simple wallets simpler.

Taproot's changes to bitcoin's consensus code are under 520 lines of difference, about 1/4th that of Segwit's. Unlike Segwit, Taproot requires no P2P changes or changes to mining software, nor do we have to have a whole new address type for it. It is also significantly de-risked by the script version extension mechanisms added by Segwit. It has also undergone significantly more review than P2SH did, which is the most analogous prior change and which didn't enjoy the benefits of Segwit.

Segwit went from early public discussions to merged in six months. So in spite of being more complex and subject to more debate due to splash back from the block size drama, Segwit was still done in significantly less time already.

Taproot has also been exceptionally widely discussed by the wider bitcoin community for a couple of years now. Its application is narrow, users who don't care to use it are ultimately unaffected by it (it should decrease resource consumption by nodes, rather than increase it), and no one is forced to use it for their own coins. It also introduces new tools to make other future improvements simpler, safer (particularly Taproot leaf versions), and more private… so there is a good reason that other future improvements are waiting on Taproot.

To the extent that we might delay Taproot because we could instead deploy a more advanced version: Taproot is already cut down from a more advanced version which included signature aggregation, generalized Taproot (G’root), and more advanced signature masking (NOINPUT). A decision was made to narrow the taproot proposal because the additional ideas, while clearly very valuable, had research risk and the technical community also felt that we would learn a lot from in-field use of Taproot by users. So Taproot has already been narrowed to a small useful logical unit and additional improvements aren’t being worked on and would violate the intentional design of keeping it minimal.

Moreover, I believe the discussion about Taproot is essentially complete. It’s been extensively reviewed and analyzed. People want it. No major arguments against it have been raised. At this juncture, additional waiting isn’t adding more review and certainty. Instead, additional delay is sapping inertia and potentially increasing risk somewhat as people start forgetting details, delaying work on downstream usage (like wallet support), and not investing as much additional review effort as they would be investing if they felt confident about the activation timeframe. It’s also delaying work on subsequent technology like signature aggregation, G’root, and other things.

I’m all for taking things slowly and deliberately. But I think we have, especially given the narrowness of this change. No matter how slowly something goes someone could always say “but it could be done slower”– but slower is only better up to a point and beyond that slower is worse even if you totally disregard the fact that applications that would benefit from Taproot aren’t getting the benefit.

Kevin Loaec, Antoine Poinsot

Date: April 24, 2020

Transcript By: Michael Folkson

Tags: Vaults

Category: Podcast

Media: https://www.youtube.com/watch?v=xDTCT75VwvU

Aaron: So you guys built something. First tell me are you a company? Is this a startup? What is the story here?

Kevin: My personal interest in vaults started last year when Bryan Bishop published his email on the mailing list. I was like “That is an interesting concept.” After that I started engaging with him on Twitter proposing a few different ideas. There were some limitations and some attacks around it. I didn’t go much further than that. At the end of the year a hedge fund reached out to my company Chainsmiths to architect a solution for them to be their own custodian while in a multi-stakeholder situation. They are four people in the company and they have two active traders that move funds from exchanges back to their company and stuff like that. They wanted a way to be able to have decent security within their own fund without having to rely on a third party like most funds do. I started working on that in December and quickly after that I started to reach out to other people who could help me. I reached out to Antoine and his company Leonod to help me build out the prototype and figure out the deep technical ideas about the architecture. Then Antoine helped me with the architecture as well to tweak a few things. Right now it is still a project that is open source and there is no owner of it as such. It was delivered as an open source project to our clients. Right now we are considering making it a product because it is just an architecture, it is just a prototype. Nobody can really use it today, it is just Python code, it is not secure or designed to be secure right now. We are trying to look for other people, companies that could support us either as sponsors or whatever for building the implementation. Or setting up a separate entity like a spin off of our company just to focus on that.

Aaron: I’m not sure what the best order is to tackle this. Why didn’t they just use Bryan’s vault design?

Antoine: It is a single party architecture and ours is multiparty architecture.

Aaron: Is this the main difference? The main benefit is that it can be used by multiple people at the same time?

Kevin: It is a lot of things in the way it is designed. One of the differences is that in Bryan’s implementation you need to know how much funds you are receiving before setting up the vault. You are already pre-signing the spending transaction and then you delete your private key. On our architecture you don’t need that. The vaults are just addresses that are generated in advance. Any money you receive is already behind a vault. You don’t need to know in advance how much money you are going to receive. You can already give an address to an exchange and whatever money you receive is in the vault. That is the major difference. It is very important in a business situation because you don’t necessarily know how much money you are safe keeping.

Aaron: Let’s get into how it actually works then. To understand this would it make sense to start with how Bryan’s design works and then get to yours or do you think it would be better to start with yours and forget about Bryan’s?

Antoine: It is quite different so I think it is better to get straight to ours. There are a lot of differences.

Aaron: Explain to me how it works. It is a vault. There are three of us. We want to hold our money safe somehow. We want to stop people from stealing it. What do we do?

Antoine: From a high level viewpoint it is an architecture that uses pre-signed transactions and revocable transactions. We called it Revault. Let's take the company who hired Kevin to make this architecture and who the implementation is for. There are four stakeholders in the company. Two of them are traders that do the day-to-day management of funds. They might send Bitcoin to some exchanges. How it works is any of the stakeholders gives an address to receive some Bitcoin to anyone. When they receive the coins they pre-sign 4 transactions. First there is the emergency transaction: it spends from the vault, which is a 4-of-4 multisig, and pays to a timelocked 4-of-4 multisig with other, static keys that are always the same (for the vault transactions we can generate other keys). This one is pre-signed and shared between the stakeholders in order to revoke. If there is something nasty going on they can always broadcast the emergency transactions, which lock funds to a deep vault.

Aaron: This transaction is timelocked? Who holds the keys to the address that it sends to? The same four?

Antoine: Yes, but since they are not intended to be used they can be stored in physical safes in different locations and be hard to access. The idea behind these addresses is that they are extra secure and maybe also hard to access.

Aaron: The idea behind these addresses is that they are extra secure. Maybe also harder to reach. You can’t spend from it easily. They are somewhere in the world and behind thick doors with locks.

Antoine: There is both a timelock and it takes time to access those keys.

Aaron: That is the emergency transaction.

Antoine: Then there is a transaction chain to be able to spend from the vault. We are going to use Segwit transactions of course and start by signing the last one of the chain. There is another emergency transaction for what we call the unvaulting transaction, the one that initiates a spend. The unvault output pays either to the 4-of-4, the same keys, so another emergency transaction can be used, or to the two traders after a small timelock, like one hour. So there are two pre-signed transactions which spend the unvault transaction. If you are in the middle of unvaulting you can still broadcast the emergency transaction, spending from the unvault output. Then there is the cancel transaction, for the case where the traders broadcast an unvault transaction and one of the stakeholders doesn't know about the spend. To prevent it, that stakeholder broadcasts the cancel transaction, which cancels the unvault and sends the funds to another wallet.

Aaron: We’ve had the emergency and we’ve now had this spend from the vault. The idea here is that if not all four stakeholders agree then any of them already have a pre-signed transaction to send the money back to the vault?

Antoine: There is only one way to spend from the vault: broadcasting the unvaulting transaction. There is no other transaction signed by the four parties. The only way is to broadcast the unvaulting transaction, which will reach the maturity of its timelock only if all the stakeholders agree about the spend. The spend is presented as what we call the spend transaction, which spends from the unvault. It has to be shared among the stakeholders; the traders should advertise beforehand that they are willing to spend the unvault. There are a few tweaks but that is the general idea.

Kevin: The difference with a normal 4-of-4 is around the business operations, or at least operations in general. All of the four pre-signed transactions are signed after some funds have been received. When you receive funds nobody can spend them: first you need the four stakeholders to pre-sign the emergency revaulting transaction and the unvaulting spending transaction. These transactions are already pre-signed and shared between the stakeholders. But then, when the traders actively need to spend, you don't need to wait for the four people to agree with you. That is the whole point: not to annoy every stakeholder every time you need to spend some money. If two people agree on a destination address you can broadcast the transaction. But, as Antoine was saying about the four people needing to agree, when only two people sign to spend a transaction you have a timelock. During this time any of the four people can trigger the emergency or the revaulting transaction. It is approve-by-default, but if something goes wrong you can trigger a revocation. That can be enforced by watchtowers and things like that, because they are all pre-signed transactions. You don't need to be a human being checking every transaction all the time.
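
To make the chain of pre-signed transactions easier to follow, here is a rough conceptual sketch in Python of the spending paths described above. It is purely illustrative (the names, structure and timelock values are mine, not the actual Revault code) and it ignores fees, scripts and signatures.

```python
from dataclasses import dataclass, field

@dataclass
class Output:
    description: str          # who can spend this output and how
    csv_blocks: int = 0       # relative timelock in blocks, if any (illustrative values)

@dataclass
class Tx:
    name: str
    spends: str               # which output this transaction consumes
    outputs: list = field(default_factory=list)
    presigned_by: str = ""    # who must sign before the deposit is considered usable

# A deposit lands on a vault output: a 4-of-4 multisig of the stakeholders.
vault_output = Output("4-of-4 stakeholders")

# Pre-signed by everyone as soon as the deposit is seen:
emergency_tx = Tx("emergency", spends="vault output",
                  outputs=[Output("deep vault: 4-of-4 offline keys", csv_blocks=4464)],  # ~1 month
                  presigned_by="all 4 stakeholders")

unvault_tx = Tx("unvault", spends="vault output",
                outputs=[Output("2 traders + co-signing server after the timelock, "
                                "OR 4-of-4 emergency path", csv_blocks=6)],              # ~1 hour
                presigned_by="all 4 stakeholders")

cancel_tx = Tx("cancel", spends="unvault output",
               outputs=[Output("back to a fresh vault: 4-of-4 stakeholders")],
               presigned_by="all 4 stakeholders")

unvault_emergency_tx = Tx("unvault-emergency", spends="unvault output",
                          outputs=[Output("deep vault: 4-of-4 offline keys", csv_blocks=4464)],
                          presigned_by="all 4 stakeholders")

# Only created at spend time by the traders, and broadcast together with the unvault,
# so every stakeholder (or their watchtower) sees it during the timelock window.
spend_tx = Tx("spend", spends="unvault output",
              outputs=[Output("destination address chosen by the traders")],
              presigned_by="2 traders + co-signing server")
```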

Aaron: That makes sense. It inverts the permissions.

Kevin: You still need the four stakeholders to pre-sign the transactions at the beginning. You cannot have funds that are received and spent without somebody knowing about it. As soon as they are received they are unspendable until the four people have pre-signed the transactions. That is also important: it cannot bypass the four signatures. We do this check at the beginning instead of at the end like a 4-of-4 would.

Antoine: A big point that can easily be forgotten is that it reduces the incentive to steal or to attack the stakeholders. If it was only a 4-of-4 multisig or another multisig scheme, I get the four keys, I get the Bitcoin, and I can spend it. With this architecture, even if you attack one stakeholder or many of them, the defense can be externalized: emergency transactions can be broadcast from anywhere. If an emergency transaction is broadcast, all of the wallets of the stakeholders and all of the watchtowers will broadcast the emergency transactions of all the vaults, blocking the funds for one month without the possibility of getting at them. It reduces the incentive for an attack.

Aaron: You have coded this up? Is it ready to be used? What is the status of it?

Antoine: The architecture is almost final I think. I am going to write a post to the mailing list to get some feedback from other people, other Bitcoin developers. Maybe we overlooked something. We can’t be sure but the state of implementation is a toy implementation, a demo which doesn’t even have a user interface. I did run some functional tests so it works. Maybe we overlooked something.

Aaron: You have mentioned this trading desk. What are other good examples of companies that could use it?

Kevin: For now we haven't pushed the idea down to individual use for a single person. But for any multiparty situation this is better than anything else that is used today. That applies to exchanges, like replacing their current cold storage for example. That works for ATM operators who need to move funds between ATMs: you can have checks on whether the addresses are really owned by the ATMs and have the watchtower do that, and have some timelocks enforced, which is doable when you are refilling ATMs. It works for pretty much anything, any company who holds Bitcoin as an asset. For example, my company Chainsmiths doesn't do any trading or anything like that, but we could have some Bitcoin as an asset somewhere. It is better to have this kind of architecture where the CEO can spend the money without having to ask everybody but there are still some sanity checks doable by the CFO. Pretty much any institution would benefit from using an architecture like that.

Antoine: We still assume that it will be used by an institution or a company because we don’t protect against bad intentions of one of the stakeholders like the refusal to sign which would not put the funds at risk but would block the operations. Or key deletion. We assume the user will be a company which already has agreements between the stakeholders. This can be another agreement which can be enforced by the legal system outside of the Bitcoin network.

Aaron: Are there any exchanges or any companies that are using vaults right now? The idea is not that new.

Kevin: You are right. The idea of vaults is not new. The problem is that there is no implementation so far. At least no production ready implementation. All we have is prototypes. Also they are not very practical for different reasons. I think the state of the art so far was Bryan’s implementation which as I said you need to know the amount before receiving the funds. You pre-sign all your derivation tree depending on the amount. That is not very practical. I guess that is the main reason. Another one is that it is really hard to deal with advanced scripting on Bitcoin. It doesn’t have to be really advanced. We are using OP_CHECKSEQUENCEVERIFY and even that is not supported by PSBT or Bitcoin Core if you want to do the transaction manually. It is a nightmare to deal with basic opcodes because nobody uses them.

Aaron: What is your timeline? When will we see stuff like this in use? What do you need?

Kevin: I think we need time so we can build it. We need people. To get people you need money. That could take a lot of different forms. We do believe that this would benefit anybody in the Bitcoin ecosystem. We will probably see more strategic investments, maybe exchanges helping to fund this research or this implementation. If not, getting some VC funding to see how far it can go. There are a lot of companies like BitGo and Unchained Capital that have a business out of 2-of-3 multisig. They are making money. I guess it makes sense to push the game forward and increase the level of security you can offer. Of course it really depends on how we can push that idea forward. So far we have been talking of 12 months to build a proper, production ready implementation. It is not something you can deliver in a month.

Aaron: It is one of the big challenges for Bitcoin still, securing the exchanges. We need something like this. When are you going to post this to the mailing list?

Antoine: Tomorrow I think. I am beginning to think about what I am going to write.

Aaron: Let's get back to the design for one minute. What would an attacker need to steal in order to steal the coins? What is the stuff he would need?

Antoine: The four private keys of the stakeholders. With those you can bypass the vault architecture because you don't need the pre-signed transactions anymore. Or the four private keys from the emergency deep vault; in this case you could try to make all the vaults broadcast all of the emergency transactions, and after one month there would be a race between the stakeholders and you. There is also a co-signing server in the process.

Aaron: Would it be a race or would it be a RBF battle to infinity?

Antoine: If I stole the private keys from the emergency deep vault and I spend from it after one month I would not signal for RBF. It is just a race.

Aaron: It is still up to miners in the end of course.

Kevin: Exactly that is a race.

Antoine: You would have to make all the stakeholders accept a spend to an unauthorized address. That is why we are posting to the mailing list, to get some feedback and another point of view.

Aaron: With Bryan’s first design one of the things the attacker could do is steal the key and wait until the unvault transaction was broadcast by the original owner. You kind of hide in the bushes. That is not possible here?

Kevin: We have the same issue but we solve it with a co-signing server. The co-signing server doesn’t add any risk but the co-signing server is making sure that a spending transaction can only be signed once. You hope the server is enforcing that. If it doesn’t again that is a race condition. That is the case in Bryan’s implementation as well.

Aaron: What is the co-signing server?

Kevin: Let’s start with Bryan’s implementation. In Bryan’s vault the funds first go to the hot wallet and from the hot wallet you can spend it to wherever you want. The thing is before the hot wallet can spend it there is a timelock. In the meantime if you see funds going to your hot wallet and it is not you who triggered it you can put it back in the vault. If this timelock expires then anybody with the private key of the hot wallet can spend the funds whatever. That is the race condition between whoever owns the vault and the attacker after the expiry of the timelock. In our architecture how we solve this is that the spending transaction from the hot wallet equivalent we have needs to be broadcast at the same time as the unvaulting transaction it is spending from. We ensure that way that the timelock is enforced and the transaction is visible for the duration of the timelock for all the stakeholders. We enforce that, there is a revocation window. That doesn’t mean that the Bitcoin network will mine this spending transaction. Somebody could broadcast at the exact block where it expires a competing transaction. To avoid that we have a server, a very dumb server that needs to co-sign every spending transaction but will only do it once per UTXO. This means that the transaction that the stakeholders will see when they broadcast at the same time the parent will be the only one signed to spend from it. Sadly it is not perfect.

Aaron: What if this server goes offline?

Kevin: Then the funds cannot move, they are locked. This is better than being stolen. That is not perfect. If the server signs whatever you send it without enforcing this rule of only once per UTXO then we were back to the same problem of the race condition. Today we don’t have a way of enforcing that on the blockchain yet. We don’t have CHECKTEMPLATEVERIFY. Without something like that we can’t enforce that a transaction will signed only once. This is our solution to it.

Aaron: Who owns the signing server? You guys or someone else?

Antoine: It is self hosted and non-custodial. The company deploying the architecture will be deploy a co-signing server and another server will hold the signatures for all the stakeholders. For them to share them and to make sure they use the same fee rate for all the transactions, otherwise the signatures are not valid. There are some servers to set up for the companies to use the architecture.

Antoine: The fee rate was the fun part. As always with the pre-signed transaction scheme if you sign the vault transactions and it can be changed afterwards you have to make a bet on what the fee rate will be if you need to broadcast the transaction. To work around this we used for all the revaulting transactions, the two emergency ones and the cancel one, we leverage the fact that there is one input and one output. We use SIGHASH_SINGLE SIGHASH_ANYONECANPAY. Since there is only one input and one output we are not subject to the bug. This allows us to be able to add another input and another output in order to bump the fee rate at the moment of broadcast. All the stakeholders will share signatures. The stakeholders don’t have the same transactions anymore but they share signatures will SIGHASH_SINGLE SIGHASH_ANYONECANPAY. Once they get the signatures of all the other stakeholders they append one with SIGHASH_ALL to avoid malleability and to avoid a simple attack which would allow anyone to decrease the fee rate by appending another input and output which isn’t balanced. If they need they can add another input.

Aaron: This is not the new SIGHASH output?

Antoine: No this is available today in Bitcoin. We made the choice of not waiting for everything. This is SIGHASH_SINGLE and ANYONECANPAY.

Aaron: I would have to see it written down to really analyze it. Maybe people listening are better at figuring it out on the go. Something like this is definitely necessary. We need better security especially for exchanges and big fund holders. I am looking forward to your dev mailing list post.

Antoine: In the meantime I have written about it in the README of the demo implementation. It is well documented. There is a doc directory with the structure of all the transactions and a README with an overview of the architecture.

\ No newline at end of file +https://www.youtube.com/watch?v=xDTCT75VwvU

Aaron: So you guys built something. First tell me are you a company? Is this a startup? What is the story here?

Kevin: My personal interest in vaults started last year when Bryan Bishop published his email on the mailing list. I was like “That is an interesting concept.” After that I started engaging with him on Twitter proposing a few different ideas. There were some limitations and some attacks around it. I didn’t go much further than that. At the end of the year a hedge fund reached out to my company Chainsmiths to architect a solution for them to be their own custodian while in a multi-stakeholder situation. They are four people in the company and they have two active traders that move funds from exchanges back to their company and stuff like that. They wanted a way to be able to have decent security within their own fund without having to rely on a third party like most funds do. I started working on that in December and quickly after that I started to reach out to other people who could help me. I reached out to Antoine and his company Leonod to help me build out the prototype and figure out the deep technical ideas about the architecture. Then Antoine helped me with the architecture as well to tweak a few things. Right now it is still a project that is open source and there is no owner of it as such. It was delivered as an open source project to our clients. Right now we are considering making it a product because it is just an architecture, it is just a prototype. Nobody can really use it today, it is just Python code, it is not secure or designed to be secure right now. We are trying to look for other people, companies that could support us either as sponsors or whatever for building the implementation. Or setting up a separate entity like a spin off of our company just to focus on that.

Aaron: I’m not sure what the best order is to tackle this. Why didn’t they just use Bryan’s vault design?

Antoine: It is a single party architecture and ours is a multiparty architecture.

Aaron: Is this the main difference? The main benefit is that it can be used by multiple people at the same time?

Kevin: There are a lot of differences in the way it is designed. One of them is that in Bryan’s implementation you need to know how much funds you are receiving before setting up the vault. You are already pre-signing the spending transaction and then you delete your private key. In our architecture you don’t need that. The vaults are just addresses that are generated in advance. Any money you receive is already behind a vault. You don’t need to know in advance how much money you are going to receive. You can already give an address to an exchange and whatever money you receive is in the vault. That is the major difference. It is very important in a business situation because you don’t necessarily know how much money you are safekeeping.
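
To make the idea of pre-generated vault addresses concrete, here is a minimal sketch, not taken from the Revault code, of deriving deposit addresses from a 4-of-4 multisig descriptor over the stakeholders’ extended public keys. The xpub placeholders and derivation paths are assumptions for illustration only.

```python
# Hypothetical sketch: generating Revault-style vault addresses in advance.
# The xpubs are placeholders; each stakeholder would contribute their own.
STAKEHOLDER_XPUBS = ["xpubA...", "xpubB...", "xpubC...", "xpubD..."]

def vault_descriptor(xpubs, index):
    # A 4-of-4 P2WSH multisig: every coin sent to such an address is
    # immediately "in a vault" because moving it requires all four
    # stakeholders to have pre-signed the revaulting transactions.
    keys = ",".join(f"{xpub}/0/{index}" for xpub in xpubs)
    return f"wsh(multi(4,{keys}))"

# A fresh vault address for each deposit, generated before any funds arrive.
print(vault_descriptor(STAKEHOLDER_XPUBS, 0))
```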

Aaron: Let’s get into how it actually works then. To understand this would it make sense to start with how Bryan’s design works and then get to yours or do you think it would be better to start with yours and forget about Bryan’s?

Antoine: It is quite different so I think it is better to get straight to ours. There are a lot of differences.

Aaron: Explain to me how it works. It is a vault. There are three of us. We want to hold our money safe somehow. We want to stop people from stealing it. What do we do?

Antoine: From a high level viewpoint it is an architecture that uses pre-signed transactions and revocable transactions. We called it Revault. Let’s take the company that hired Kevin to design this architecture and that the implementation is for. There are four stakeholders in the company. Two of them are traders that do the day-to-day management of funds. They might send Bitcoin to some exchanges. How it works is any of the stakeholders gives an address to anyone to receive some Bitcoin. When they receive the coins they pre-sign four transactions. First there are the emergency transactions; these spend from the vault, which is a 4-of-4 multisig, and pay to a timelocked 4-of-4 multisig with other static keys that are always the same. For the vault transactions we can generate other keys. This one is pre-signed and shared between the stakeholders in order to be able to revoke. If there is something nasty going on they can always broadcast the emergency transactions which lock the funds in a deep vault.
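
A rough illustration of what that timelocked “deep vault” output could look like, written as a miniscript-style spending policy. The key names and the one-month delay are assumptions on my part, not Revault’s actual parameters.

```python
# Hypothetical sketch of the emergency "deep vault" output: the same four
# stakeholders (with separate, static emergency keys), spendable only after
# a long relative timelock enforced with OP_CHECKSEQUENCEVERIFY.
EMERGENCY_KEYS = ["emerA", "emerB", "emerC", "emerD"]
CSV_DELAY_BLOCKS = 4320  # roughly one month of blocks (assumed value)

def deep_vault_policy(keys, delay):
    pks = ",".join(f"pk({k})" for k in keys)
    return f"and(older({delay}),thresh(4,{pks}))"

print(deep_vault_policy(EMERGENCY_KEYS, CSV_DELAY_BLOCKS))
```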

Aaron: This transaction is timelocked? Who holds the keys to the address that it sends to? The same four?

Antoine: Yes, but since they are not intended to be used they can be stored in physical safes in different locations and be hard to access. The idea behind these addresses is that they are extra secure and deliberately hard to reach.

Aaron: The idea behind these addresses is that they are extra secure. Maybe also harder to reach. You can’t spend from them easily. They are somewhere in the world and behind thick doors with locks.

Antoine: There is both a timelock and it takes time to access those keys.

Aaron: That is the emergency transaction.

Antoine: Then there is a transaction chain to be able to spend from the vault. We are going to use Segwit transactions of course and start by signing the last one of the chain. There is another emergency transaction for what we call the unvaulting transaction, the one that initiates the spend. This one pays either to the 4-of-4 with the same keys, so another emergency transaction can be used, or to the two traders after a small timelock, like one hour. There are two transactions here which spend the unvault transaction. If you are in the middle of unvaulting a vault you can still broadcast the emergency transaction; you spend from the unvault one. Then there is the cancel transaction, in case the traders broadcast an unvault transaction to spend the vault and one of the stakeholders doesn’t know about the spend. To prevent this they will broadcast the cancel transaction which cancels the unvault and sends the funds to another wallet.
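
To make the unvault output easier to picture, here is a minimal sketch of its two spending branches as a miniscript-style policy: the 4-of-4 can take the funds back at any time (cancel or emergency), and the two traders can spend only after the short delay. Key names and the delay are my assumptions, not Revault’s actual values.

```python
# Hypothetical sketch of the unvault transaction's output policy.
STAKEHOLDERS = ["A", "B", "C", "D"]
TRADERS = ["T1", "T2"]
UNVAULT_CSV_DELAY = 6  # roughly one hour of blocks (assumed value)

def unvault_policy(stakeholders, traders, delay):
    # Branch 1: all four stakeholders, usable immediately (cancel/emergency).
    revocation = "thresh(4," + ",".join(f"pk({k})" for k in stakeholders) + ")"
    # Branch 2: the two traders, but only once the relative timelock passes.
    spend = "and(thresh(2," + ",".join(f"pk({k})" for k in traders) + f"),older({delay}))"
    # The delay is what gives stakeholders and watchtowers a window to revoke.
    return f"or({revocation},{spend})"

print(unvault_policy(STAKEHOLDERS, TRADERS, UNVAULT_CSV_DELAY))
```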

Aaron: We’ve had the emergency and we’ve now had this spend from the vault. The idea here is that if not all four stakeholders agree then any of them already have a pre-signed transaction to send the money back to the vault?

Antoine: There is only one way to spend from the vault. It is by broadcasting the unvaulting transaction. There is no other transaction which is signed by the four parties. The only way is to broadcast the unvaulting transaction, which will reach the maturity of its timelock only if all the stakeholders agree about this spend. The spend itself is what we call the spend transaction and it spends from the unvault. It has to be shared among the stakeholders. The traders should advertise beforehand that they are willing to spend from the unvault. There are a few tweaks but that is the general idea.

Kevin: The difference with a normal 4-of-4 is around the business operations, or at least operations in general. All four pre-signed transactions are signed after some funds have been received. When you receive funds nobody can spend them. First you need the four stakeholders to pre-sign the emergency revaulting transactions and the unvaulting and spending transactions. These transactions are already pre-signed and shared between the stakeholders. But then when the traders actually need to spend you don’t need to wait for the four people to agree with you. That is the whole point, not to annoy every stakeholder every time you need to spend some money. If two people agree on a destination address you can broadcast the transaction. But as for what Antoine was saying about the four people needing to agree, when only two people sign to spend a transaction there is a timelock. During this time any of the four people can trigger the emergency or the revaulting transaction. It is approve by default, but if something goes wrong you can trigger a revocation. That can be enforced by watchtowers and things like that because they are all pre-signed transactions. You don’t need to be a human being checking every transaction all the time.

Aaron: That makes sense. It inverts the permissions.

Kevin: You still need the four stakeholders to pre-sign the transactions at the beginning. You cannot have funds that are received and spent without somebody knowing about it. As soon as they are received they are unspendable until the four people have pre-signed the transactions. That is also important. You cannot bypass the four signatures. We do this check at the beginning instead of at the end like a 4-of-4 would do.

Antoine: A big point that is easy to forget is that it reduces the incentive to try to steal or to attack the stakeholders. If it was only a 4-of-4 multisig or another multisig scheme, I get the four keys, I get the Bitcoin and I can spend it. With this architecture, even if you attack one stakeholder or many of them, the response can be externalized to the network. Emergency transactions can be broadcast from anywhere. In the case of a broadcast of an emergency transaction all of the wallets of the stakeholders and all of the watchtowers will broadcast all of the emergency transactions of all the vaults, thus blocking the funds for one month without the possibility of getting them. It reduces the incentive of an attack.
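
As a rough sketch of how that externalized reaction could be automated, here is a minimal watchtower loop. The node interface, transaction store, and method names are hypothetical placeholders for illustration; this is not Revault’s watchtower implementation.

```python
import time

class Watchtower:
    """Hypothetical sketch: watch known vault outpoints and revoke
    any unvault that the stakeholders did not authorize."""

    def __init__(self, node, presigned):
        self.node = node              # placeholder node/RPC interface
        self.presigned = presigned    # outpoint -> {"cancel": tx, "emergency": tx}
        self.authorized = set()       # unvault txids announced by the traders

    def authorize(self, unvault_txid):
        self.authorized.add(unvault_txid)

    def watch_once(self):
        for outpoint, txs in self.presigned.items():
            spend = self.node.get_spending_tx(outpoint)  # None if unspent (placeholder call)
            if spend is None or spend.txid in self.authorized:
                continue
            # An unvault was broadcast without authorization: use the
            # timelock window to push the pre-signed cancel transaction
            # (or the emergency transaction in a full-compromise scenario).
            self.node.broadcast(txs["cancel"])

    def run(self, poll_seconds=30):
        while True:
            self.watch_once()
            time.sleep(poll_seconds)
```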

Aaron: You have coded this up? Is it ready to be used? What is the status of it?

Antoine: The architecture is almost final I think. I am going to write a post to the mailing list to get some feedback from other people, other Bitcoin developers. Maybe we overlooked something, we can’t be sure. The state of the implementation is a toy implementation, a demo which doesn’t even have a user interface. I did run some functional tests so it works.

Aaron: You have mentioned this trading desk. What are other good examples of companies that could use it?

Kevin: For now we haven’t pushed the idea down to individual use for a single person. But for any multiparty situation this is better than anything else that is used today. That applies to exchanges, for example replacing their current cold storage. It works for ATM operators who need to move funds between ATMs. You can have checks on whether the addresses are really owned by the ATMs and have the watchtower do that. You can have some timelocks enforced, which is doable when you are refilling ATMs. It works for pretty much anything, any company that holds Bitcoin as an asset. For example, my company Chainsmiths doesn’t do any trading or anything like that but we could have some Bitcoin as an asset somewhere. It is better to have this kind of architecture where the CEO can spend the money without having to ask everybody but still have some sanity checks doable by the CFO. Pretty much any institution would benefit from using an architecture like that.

Antoine: We still assume that it will be used by an institution or a company because we don’t protect against bad intentions of one of the stakeholders like the refusal to sign which would not put the funds at risk but would block the operations. Or key deletion. We assume the user will be a company which already has agreements between the stakeholders. This can be another agreement which can be enforced by the legal system outside of the Bitcoin network.

Aaron: Are there any exchanges or any companies that are using vaults right now? The idea is not that new.

Kevin: You are right, the idea of vaults is not new. The problem is that there is no implementation so far, at least no production ready implementation. All we have is prototypes. Also they are not very practical, for different reasons. I think the state of the art so far was Bryan’s implementation where, as I said, you need to know the amount before receiving the funds. You pre-sign all your derivation tree depending on the amount. That is not very practical. I guess that is the main reason. Another one is that it is really hard to deal with advanced scripting on Bitcoin. It doesn’t even have to be really advanced. We are using OP_CHECKSEQUENCEVERIFY and even that is not supported by PSBT or Bitcoin Core if you want to do the transaction manually. It is a nightmare to deal with basic opcodes because nobody uses them.

Aaron: What is your timeline? When will we see stuff like this in use? What do you need?

Kevin: I think we need time so we can build it. We need people. To get people you need money. That could take a lot of different forms. We do believe that this would benefit anybody in the Bitcoin ecosystem. We will probably see more strategic investments, maybe exchanges helping to fund this research or this implementation, if not getting some VC funding to see how far it can go. There are a lot of companies like BitGo and Unchained Capital that have built a business out of 2-of-3 multisig. They are making money. I guess it makes sense to push the game forward and increase the level of security you can offer. Of course it really depends on how we can push that idea forward. So far we have been talking about 12 months to build a proper, production ready implementation. It is not something you can deliver in a month.

Aaron: It is one of the big challenges for Bitcoin still, securing the exchanges. We need something like this. When are you going to post this to the mailing list?

Antoine: Tomorrow I think. I am beginning to think about what I am going to write.

Aaron: Let’s get back to the design for one minute. What would an attacker need to steal in order to steal the coins? What is the stuff he would need?

Antoine: The four private keys of the stakeholders. With those you can bypass the vault architecture because you don’t need the pre-signed transactions anymore. Or the four private keys from the emergency deep vault; in that case you can try to make all the vaults broadcast all of the emergency transactions, and after one month there would be a race between the stakeholders and you. There is also a co-signing server in the process.

Aaron: Would it be a race or would it be an RBF battle to infinity?

Antoine: If I stole the private keys from the emergency deep vault and I spend from it after one month I would not signal for RBF. It is just a race.

Aaron: It is still up to miners in the end of course.

Kevin: Exactly, that is a race.

Antoine: You would have to make all the stakeholders accept a spend to an unauthorized address. That is why we are posting to the mailing list, to get some feedback and another point of view.

Aaron: With Bryan’s first design one of the things the attacker could do is steal the key and wait until the unvault transaction was broadcast by the original owner. You kind of hide in the bushes. That is not possible here?

Kevin: We have the same issue but we solve it with a co-signing server. The co-signing server doesn’t add any risk, but it makes sure that a spending transaction can only be signed once. You hope the server is enforcing that; if it doesn’t, again that is a race condition. That is the case in Bryan’s implementation as well.

Aaron: What is the co-signing server?

Kevin: Let’s start with Bryan’s implementation. In Bryan’s vault the funds first go to the hot wallet and from the hot wallet you can spend to wherever you want. The thing is, before the hot wallet can spend there is a timelock. In the meantime, if you see funds going to your hot wallet and it is not you who triggered it, you can put them back in the vault. If this timelock expires then anybody with the private key of the hot wallet can spend the funds wherever they want. That is the race condition between whoever owns the vault and the attacker after the expiry of the timelock. In our architecture how we solve this is that the spending transaction from our equivalent of the hot wallet needs to be broadcast at the same time as the unvaulting transaction it is spending from. We ensure that way that the timelock is enforced and the transaction is visible for the duration of the timelock to all the stakeholders. We enforce that; there is a revocation window. That doesn’t mean that the Bitcoin network will mine this spending transaction. Somebody could broadcast a competing transaction at the exact block where the timelock expires. To avoid that we have a server, a very dumb server, that needs to co-sign every spending transaction but will only do it once per UTXO. This means that the transaction the stakeholders see when the parent is broadcast will be the only one signed to spend from it. Sadly it is not perfect.
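
A minimal sketch of that once-per-UTXO rule, with a placeholder signing helper; this is my own illustration, not the Revault server’s code.

```python
def sign_psbt(psbt, key):
    # Placeholder for the server's actual signing routine.
    return psbt

class CosigningServer:
    """Hypothetical co-signing server: sign at most one spend per UTXO."""

    def __init__(self, server_key):
        self.server_key = server_key
        self.signed = {}  # outpoint (txid, vout) -> txid of the spend we signed

    def cosign(self, psbt, outpoint, spend_txid):
        previously = self.signed.get(outpoint)
        if previously is not None and previously != spend_txid:
            # We already co-signed a different spend of this UTXO: refuse,
            # so no competing transaction can be fully signed during the
            # revocation window.
            raise RuntimeError("refusing to sign a second spend of this UTXO")
        self.signed[outpoint] = spend_txid
        return sign_psbt(psbt, self.server_key)
```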

Aaron: What if this server goes offline?

Kevin: Then the funds cannot move, they are locked. This is better than being stolen. That is not perfect. If the server signs whatever you send it without enforcing this rule of only once per UTXO then we are back to the same problem of the race condition. Today we don’t have a way of enforcing that on the blockchain yet. We don’t have CHECKTEMPLATEVERIFY. Without something like that we can’t enforce that a transaction will be signed only once. This is our solution to it.
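
For context on why CHECKTEMPLATEVERIFY would remove the need to trust the server: with OP_CHECKTEMPLATEVERIFY (BIP 119) the output itself can commit to the hash of the one spending transaction that is allowed. The sketch below is only an illustration of that idea; the template hash computation is defined by BIP 119 and is not shown here.

```python
def ctv_locking_script(template_hash: bytes) -> str:
    # Hypothetical sketch: a bare CTV output. Only a transaction matching
    # the committed template (as defined by BIP 119) could spend it, so a
    # competing spend could not be created, co-signing server or not.
    assert len(template_hash) == 32
    return f"{template_hash.hex()} OP_CHECKTEMPLATEVERIFY"
```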

Aaron: Who owns the signing server? You guys or someone else?

Antoine: It is self hosted and non-custodial. The company deploying the architecture will deploy a co-signing server, and another server will hold the signatures for all the stakeholders, for them to share and to make sure they use the same fee rate for all the transactions, otherwise the signatures are not valid. There are some servers to set up for the companies using the architecture.

Antoine: The fee rate was the fun part. As always with a pre-signed transaction scheme, if you sign the vault transactions and they cannot be changed afterwards, you have to make a bet on what the fee rate will be when you need to broadcast them. To work around this, for all the revaulting transactions, the two emergency ones and the cancel one, we leverage the fact that there is one input and one output. We use SIGHASH_SINGLE | SIGHASH_ANYONECANPAY. Since there is only one input and one output we are not subject to the SIGHASH_SINGLE bug. This allows us to add another input and another output in order to bump the fee rate at the moment of broadcast. All the stakeholders will share signatures. The stakeholders don’t have the same transactions anymore but they share signatures with SIGHASH_SINGLE | SIGHASH_ANYONECANPAY. Once they get the signatures of all the other stakeholders they append one with SIGHASH_ALL to avoid malleability and to avoid a simple attack which would allow anyone to decrease the fee rate by appending another input and output which isn’t balanced. If they need to, they can add another input and output.
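
To make that fee-bumping flow easier to follow, here is a minimal Python sketch under stated assumptions: `sign_input`, `desired_fee` and `broadcast` are hypothetical helpers, and the transaction object’s methods are placeholders, not Revault’s actual code.

```python
# Standard sighash flag values.
SIGHASH_ALL = 0x01
SIGHASH_SINGLE = 0x03
SIGHASH_ANYONECANPAY = 0x80

def presign_revaulting_tx(tx, stakeholder_keys):
    # The revaulting transactions have one input (the vault or unvault UTXO)
    # and one output. Each stakeholder signs input 0 with SINGLE|ANYONECANPAY,
    # so that input/output pair stays fixed while extra inputs and outputs
    # can still be attached at broadcast time.
    return [sign_input(tx, 0, key, SIGHASH_SINGLE | SIGHASH_ANYONECANPAY)  # hypothetical helper
            for key in stakeholder_keys]

def broadcast_with_feebump(tx, presigned_sigs, feebump_utxo, feebump_key, change_addr):
    # At broadcast time, add a wallet UTXO and a change output to reach the
    # target fee rate...
    tx.add_input(feebump_utxo)
    tx.add_output(change_addr, feebump_utxo.value - desired_fee(tx))  # hypothetical helper
    tx.set_signatures(0, presigned_sigs)
    # ...then one signature made with SIGHASH_ALL covers the whole transaction,
    # so nobody can strip or rebalance inputs and outputs to lower the fee.
    sign_input(tx, 1, feebump_key, SIGHASH_ALL)
    broadcast(tx)  # hypothetical helper
```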

Aaron: This is not the new SIGHASH output?

Antoine: No this is available today in Bitcoin. We made the choice of not waiting for everything. This is SIGHASH_SINGLE and ANYONECANPAY.

Aaron: I would have to see it written down to really analyze it. Maybe people listening are better at figuring it out on the go. Something like this is definitely necessary. We need better security especially for exchanges and big fund holders. I am looking forward to your dev mailing list post.

Antoine: In the meantime I have written about it in the README of the demo implementation. It is well documented. There is a doc directory with the structure of all the transactions and a README with an overview of the architecture.

\ No newline at end of file diff --git a/index.json b/index.json index 69a7c78845..6d611d8230 100644 --- a/index.json +++ b/index.json @@ -1 +1 @@ -[{"uri":"/scalingbitcoin/montreal-2015/","title":"Montreal (2015)","content":" Alternatives To Block Size As Aggregate Resource Limits Mark Friedenbach Research Amiko Pay Come Plooy Lightning Bitcoin Block Propagation and IBLT Rusty Russell P2p Research Bitcoin Failure Modes And The Role Of The Lightning Network Tadge Dryja, Joseph Poon Lightning Bitcoin Load Spike Simulation Conner Fromknecht, Nathan Wilcox Research Bitcoin Relay Network Matt Corallo Mining Blockchain Testbed Ittay Eyal, Emin Gun Sirer Research Developer tools Coinscope Andrew Miller, Dave Levin Mining …"},{"uri":"/scalingbitcoin/hong-kong-2015/","title":"Hong Kong (2015)","content":" A Bevy Of Block Size Proposals Bip100 Bip102 And More Jeff Garzik A Flexible Limit Trading Subsidy For Larger Blocks Mark Friedenbach Soft fork activation Bip101 Block Propagation Data From Testnet Jonathan Tomim Soft fork activation Bip99 And Uncontroversial Hard Forks Jorge Timón Soft fork activation Braiding The Blockchain Bob McElrath Mining Day 2 Opening Pindar Wong Extensibility Eric Lombrozo Security Fungibility And Scalability Dec 06, 2015 Adam Back Privacy enhancements In Adversarial …"},{"uri":"/scalingbitcoin/milan-2016/","title":"Milan (2016)","content":" BIP151: Peer-to-Peer Encryption and Authentication from the Perspective of End-User Oct 09, 2016 Jonas Schnelli V2 p2p transport Bitcoin Covenants: Opportunities and Challenges Oct 09, 2016 Emin Gun Sirer Vaults Covenants Breaking The Chain Bob McElrath, Peter Todd Mining Proof systems Build Scale Operate Meltem Demirors, Eric Lombrozo Chainbreak Coin Selection Mark Erhardt Day 1 Group Summaries Lightning Privacy enhancements Mining Day 2 Group Summaries Enhancing Bitcoin Security and …"},{"uri":"/scalingbitcoin/stanford-2017/","title":"Stanford (2017)","content":"https://stanford2017.scalingbitcoin.org/\nAtomically Trading With Roger Gambling On The Success Of A Hard Fork Ethan Heilman Bitcoin Script 2.0 And Strengthened Payment Channels Nov 04, 2017 Olaoluwa Osuntokun, Johnson Lau Lightning Scripts addresses BlockSci: a Platform for Blockchain Science and Exploration Nov 04, 2017 Harry Kalodner Research Developer tools BOLT Anonymous Payment Channels for Decentralized Currencies Nov 04, 2017 Ian Miers Research Privacy enhancements Lightning Changes …"},{"uri":"/scalingbitcoin/tokyo-2018/","title":"Tokyo (2018)","content":"https://tokyo2018.scalingbitcoin.org/\nA Scalable Drop in Replacement for Merkle Trees Oct 06, 2018 Benedikt Bünz Proof systems An analysis of dust in UTXO-based cryptocurrencies Oct 06, 2018 Sergi Delgado Segura Compact Multi-Signatures For Smaller Blockchains Oct 06, 2018 Dan Boneh Research Threshold signature Bls signatures Deploying Blockchain At Scale Lessons From The Internet Deployment In Japan Jun Muai Forward Blocks Mark Friedenbach How Much Privacy is Enough? Threats, Scaling, and …"},{"uri":"/scalingbitcoin/tel-aviv-2019/","title":"Tel Aviv (2019)","content":" A Survey of Progress in Succinct Zero Knowledge Proofs: Towards Trustless SNARKs Sep 11, 2019 Ben Fisch Proof systems A Tale of Two Trees: One Writes, and Other Reads, Scaling Oblivious Accesses to Large-Scale Blockchains Duc V. 
Le Research Lightweight client Privacy enhancements A2L: Anonymous Atomic Locks for Scalability and Interoperability in Payment Channel Hubs Sep 12, 2019 Pedro Moreno-Sanchez Research Lightning Adaptor signatures Applying Private Information Retrieval to Lightweight …"},{"uri":"/speakers/0xb10c/","title":"0xB10C","content":""},{"uri":"/tags/bitcoin-core/","title":"bitcoin-core","content":""},{"uri":"/brink/","title":"Brink","content":" The Bitcoin Development Podcast "},{"uri":"/tags/compact-block-relay/","title":"Compact block relay","content":""},{"uri":"/brink/the-bitcoin-development-podcast/discussing-pre-25-0-bitcoin-core-vulnerability-disclosures/","title":"Discussing Pre-25.0 Bitcoin Core Vulnerability Disclosures","content":"Introduction Speaker 0: 00:00:00\nHello there, welcome or welcome back to the Brink podcast. Today we\u0026amp;rsquo;re talking about some security advisories again that were released a few days ago. I have with me B10C and Niklas. Feel free to say hi.\nSpeaker 1: 00:00:15\nHey Gloria, hey Niklas. Hello.\nSpeaker 0: 00:00:18\nGreat. So on the agenda today, we\u0026amp;rsquo;ve got three vulnerabilities. A good mix, but all peer-to-peer. So first we\u0026amp;rsquo;re going to talk about the headers precinct bug, which was …"},{"uri":"/speakers/gloria-zhao/","title":"Gloria Zhao","content":""},{"uri":"/speakers/niklas-g%C3%B6gge/","title":"Niklas Gögge","content":""},{"uri":"/tags/p2p/","title":"P2P Network Protocol","content":""},{"uri":"/speakers/","title":"Speakers","content":""},{"uri":"/tags/","title":"Tags","content":""},{"uri":"/brink/the-bitcoin-development-podcast/","title":"The Bitcoin Development Podcast","content":" Discussing Pre-25.0 Bitcoin Core Vulnerability Disclosures Oct 10, 2024 0xB10C, Niklas Gögge, Gloria Zhao Bitcoin core P2p Compact block relay Discussing 0.21.0 Bitcoin Core Vulnerability Disclosures Jul 31, 2024 Gloria Zhao, Niklas Gögge Bitcoin core Discussing Pre-0.21.0 Bitcoin Core Vulnerability Disclosures Jul 11, 2024 Niklas Gögge, Gloria Zhao Bitcoin core Code Review and BIP324 Sep 06, 2023 Sebastian Falbesoner, Mike Schmidt Libsecp256k1 Generic signmessage V2 p2p transport Bitcoin core …"},{"uri":"/","title":"₿itcoin Transcripts","content":""},{"uri":"/speakers/aaron-van-wirdum/","title":"Aaron van Wirdum","content":""},{"uri":"/speakers/ava-chow/","title":"Ava Chow","content":""},{"uri":"/bitcoin-magazine/bitcoin-2024/","title":"Bitcoin 2024 Nashville","content":" Discreet Log Contracts, Oracles, Loans, Stablecoins, and More Aki Balogh, Shehzan Maredia, Daniel Hinton, Tadge Dryja Dlc Making Bitcoin More Private with CISA Aug 29, 2024 Fabian Jahr, Jameson Lopp, Craig Raw Cisa Open Source Mining Kulpreet Singh, Matt Corallo, Skot 9000, Mark Erhart Mining The State of Bitcoin Core Development Aug 31, 2024 Aaron van Wirdum, Ava Chow, Ishaana Misra, Mark Erhardt Bitcoin core Career Unlocking Expressivity with OP_CAT Andrew Poelstra, Brandon Black, Rijndael, …"},{"uri":"/bitcoin-magazine/","title":"Bitcoin Magazine","content":" Bitcoin 2024 Nashville How to Activate a New Soft Fork Aug 03, 2020 Eric Lombrozo, Luke Dashjr Taproot Soft fork activation The Politics of Bitcoin Development Jun 11, 2024 Christian Decker "},{"uri":"/tags/career/","title":"career","content":""},{"uri":"/speakers/ishaana-misra/","title":"Ishaana Misra","content":""},{"uri":"/speakers/mark-erhardt/","title":"Mark Erhardt","content":""},{"uri":"/bitcoin-magazine/bitcoin-2024/the-state-of-bitcoin-core-development/","title":"The State of Bitcoin Core 
Development","content":"Bitcoin Core Development Panel and Recent Policy Changes Speaker 0: 00:00:02\nGood morning. My name is Aron Favirim. I work for Bitcoin Magazine and this is the panel on Bitcoin Core development. I\u0026amp;rsquo;ll let the panelists introduce themselves. Let\u0026amp;rsquo;s start here with Eva.\nSpeaker 1: 00:00:16\nHi, I\u0026amp;rsquo;m Eva. I am one of the Bitcoin Core maintainers.\nSpeaker 2: 00:00:20\nHi I\u0026amp;rsquo;m Merch, I work at Chaincode Labs on Bitcoin projects.\nSpeaker 3: 00:00:25\nHi I\u0026amp;rsquo;m Ishana, I\u0026amp;rsquo;m a …"},{"uri":"/speakers/craig-raw/","title":"Craig Raw","content":""},{"uri":"/tags/cisa/","title":"Cross-input signature aggregation (CISA)","content":""},{"uri":"/speakers/fabian-jahr/","title":"Fabian Jahr","content":""},{"uri":"/speakers/jameson-lopp/","title":"Jameson Lopp","content":""},{"uri":"/bitcoin-magazine/bitcoin-2024/making-bitcoin-more-private-with-cisa/","title":"Making Bitcoin More Private with CISA","content":"Speaker 0: 00:00:01\nSo there\u0026amp;rsquo;s one person on the stage here that is not up there. Why?\nSpeaker 1: 00:00:07\nHi, everyone. My name is Nifi. I\u0026amp;rsquo;m going to be moderating this panel today. We\u0026amp;rsquo;re here to talk about CISA, I think, cross input signature aggregation. And joining me today on the stage, I have Jameson Lopp from Casa, Craig Rau of Sparrow Wallet and Fabien Yarr of Brink. So welcome them to the stage. Really appreciate it. Yeah. Great. So we\u0026amp;rsquo;re excited to be talking to …"},{"uri":"/brink/the-bitcoin-development-podcast/discussing-0-21-0-bitcoin-core-vulnerability-disclosures/","title":"Discussing 0.21.0 Bitcoin Core Vulnerability Disclosures","content":"Introduction Speaker 0: 00:00:00\nHello, Niklas.\nSpeaker 1: 00:00:02\nHi, Gloria. We\u0026amp;rsquo;re here to talk about the next batch of disclosures for Bitcoin Core. And this time there\u0026amp;rsquo;s only two bugs and they were fixed in version 22. So that means 21, version 21 was still vulnerable to these two bugs. If you\u0026amp;rsquo;re running version 21, you should upgrade. I mean, you can listen to this podcast and decide for yourself if you want to upgrade, but my recommendation, our recommendation would be …"},{"uri":"/tags/lightning/","title":"Lightning Network","content":""},{"uri":"/lightning-specification/","title":"Lightning Specification","content":" Lightning Specification Meeting - 1152 Apr 08, 2024 Lightning Lightning Specification Meeting - 1155 Apr 22, 2024 Lightning Lightning Specification Meeting - Agenda 0936 Nov 22, 2021 Lightning Lightning Specification Meeting - Agenda 0943 Dec 06, 2021 Lightning Lightning Specification Meeting - Agenda 0949 Jan 03, 2022 Lightning Lightning Specification Meeting - Agenda 0955 Jan 31, 2022 Lightning Lightning Specification Meeting - Agenda 0957 Feb 14, 2022 Lightning Lightning Specification …"},{"uri":"/lightning-specification/2024-07-29-specification-call/","title":"Lightning Specification Meeting - Agenda 1185","content":"Agenda: https://github.com/lightning/bolts/issues/1185\nSpeaker 0: Are we on the pathfinding one or did we pass that?\nSpeaker 1: I think [redacted] should be attending because they added their new gossip status thing to the agenda. I think they should join soon, but we can start with the verification, where I think it just needs more review to clarify some needs. But apart from that, it looks good to me. 1182. 
That\u0026amp;rsquo;s the kind of thing we want to get in quickly because otherwise, it\u0026amp;rsquo;s …"},{"uri":"/lightning-specification/2024-07-15-specification-call/","title":"Lightning Specification Meeting - Agenda 1183","content":"Agenda: https://github.com/lightning/bolts/issues/1183\nSpeaker 0: How do you do MPP by the way? For example, you have two blinded paths. You don\u0026amp;rsquo;t even split inside the path.\nSpeaker 1: We won\u0026amp;rsquo;t MPP. Our new routing rewrite — which is on the back burner while I sort all this out — should be able to MPP across because they give you the capacity. In theory, we could do that. In practice, if you need MPP, we\u0026amp;rsquo;re out. While I was rewriting this, I was thinking maybe we should have …"},{"uri":"/brink/the-bitcoin-development-podcast/discussing-pre-0-21-0-bitcoin-core-vulnerability-disclosures/","title":"Discussing Pre-0.21.0 Bitcoin Core Vulnerability Disclosures","content":"Introductions and motivation for disclosures Speaker 0: 00:00:00\nHello, Nicholas.\nSpeaker 1: 00:00:01\nHi, Gloria.\nSpeaker 0: 00:00:02\nCool. We are here to talk about some old security vulnerabilities that were disclosed last week for Bitcoin Core versions 0.21 and before, or I guess 21.0 and before, which has been end of life for a while. This is going to be somewhat technical, quite technical, but we\u0026amp;rsquo;ll try to make it accessible. So if you\u0026amp;rsquo;re kind of a Bitcoiner who\u0026amp;rsquo;s not a …"},{"uri":"/bitcoin-explained/","title":"Bitcoin Explained","content":" Silent Payments part 2 Jul 07, 2024 Sjors Provoost, Aaron van Wirdum, Ruben Somsen Silent payments Episode 94 The Great Consensus Cleanup Revival (And an Update on the Tornado Cash and Samourai Wallet Arrests) May 29, 2024 Sjors Provoost, Aaron van Wirdum Wallet Consensus cleanup Episode 93 Bitcoin Core 27.0 Apr 10, 2024 Bitcoin core Episode 92 Hashcash and Bit Gold Jan 03, 2024 Sjors Provoost, Aaron van Wirdum Episode 88 The Block 1, 983, 702 Problem Dec 21, 2023 Sjors Provoost, Aaron van …"},{"uri":"/speakers/ruben-somsen/","title":"Ruben Somsen","content":""},{"uri":"/tags/silent-payments/","title":"Silent payments","content":""},{"uri":"/bitcoin-explained/silent-payments-part-2/","title":"Silent Payments part 2","content":"Speaker 0: 00:00:16\nLive from Utrecht, this is Bitcoin Explained. Hey Sjoers. Hello. Hey Josie. Hey. Hey Ruben. Hey. We\u0026amp;rsquo;ve got two guests today Sjoers. Is that the first time we\u0026amp;rsquo;ve had two guests?\nSpeaker 1: 00:00:27\nI believe that is a record number. We have doubled the number of guests.\nSpeaker 0: 00:00:31\nThat is amazing. You know what else is amazing, Sjoerd? The 9V battery thing that you have in your hands. 
You just pushed a\u0026amp;hellip; This is a CoinKite product as well?\nSpeaker 1: …"},{"uri":"/speakers/sjors-provoost/","title":"Sjors Provoost","content":""},{"uri":"/speakers/christian-decker/","title":"Christian Decker","content":""},{"uri":"/bitcoin-magazine/the-politics-of-bitcoin-development/","title":"The Politics of Bitcoin Development","content":"Introduction and Rusty\u0026amp;rsquo;s Proposal Shinobi: 00:00:01\nHi everybody, I\u0026amp;rsquo;m Shinobi from Bitcoin Magazine and I\u0026amp;rsquo;m sitting down here with Christian Decker from Blockstream.\nChristian Decker: 00:00:07\nI am.\nShinobi: 00:00:08\nSo Rusty dropped an atom bomb yesterday with his proposal to turn all the things back on just with the kind of VerOps budget analogous to the SigOps budget to kind of rein in the denial of service risks as just a path forward given that everybody\u0026amp;rsquo;s spent the …"},{"uri":"/tags/consensus-cleanup/","title":"Consensus cleanup soft fork","content":""},{"uri":"/bitcoin-explained/the-great-consensus-cleanup-revival/","title":"The Great Consensus Cleanup Revival (And an Update on the Tornado Cash and Samourai Wallet Arrests)","content":"Speaker 0: 00:00:19\nLife from Utrecht, this is Bitcoin Explained. Sjoerds, it\u0026amp;rsquo;s been a month. That means you had a whole month to think about this pun that you told me you\u0026amp;rsquo;re going to tell our dear listeners for this episode. Let\u0026amp;rsquo;s hear it. Let\u0026amp;rsquo;s hear it.\nSpeaker 1: 00:00:35\nThat\u0026amp;rsquo;s right. So it\u0026amp;rsquo;s cold outside. The government is cold.\nSpeaker 0: 00:00:39\nYou know what else is cold? Sure. Our sponsor. That\u0026amp;rsquo;s right. The cold cart. The cold cart. If you have …"},{"uri":"/tags/wallet/","title":"wallet","content":""},{"uri":"/lightning-specification/2024-05-06-specification-call/","title":"Lightning Specification Meeting - Agenda 1161","content":"Agenda: https://github.com/lightning/bolts/issues/1161\nSpeaker 0: So the next one is go see 12-blocks delay channel closed follow-up. So, this is something we already merged something to a spec, saying that whenever you see a channel being spent on-chain, you should wait for 12 blocks to allow splicing to happen. It\u0026amp;rsquo;s just one line that was missing in the requirement, and then they found it in the older splicing PR. So, there\u0026amp;rsquo;s already three ACKs. So, is anyone opposed to just …"},{"uri":"/lightning-specification/2024-04-22-specification-call/","title":"Lightning Specification Meeting - 1155","content":"Agenda: https://github.com/lightning/bolts/issues/1155\nSpeaker 0: Cool. Okay. They have four or five things that I think are all in line with what we have already as well. First one is the spec-clean up. Last thing we did, I think we reverted one of the gossip changes, and that is going to be in LND 18. Basically, making the feature bits to be required over time. After that, is anything blocking this? I need to catch up with PR a little bit. Yeah, because I think we do all this. Yeah. Okay. …"},{"uri":"/speakers/adam-jonas/","title":"Adam Jonas","content":""},{"uri":"/categories/","title":"Categories","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2024/choosing-a-career-in-bitcoin-open-source-development/","title":"Choosing a Career in Bitcoin Open Source Development","content":"Introduction Why choose a career in bitcoin open source? My name is Adam Jonas. I work at Chaincode Labs. 
I\u0026amp;rsquo;m here to hunt unicorns.\nThe Current State of Bitcoin Development We\u0026amp;rsquo;re gonna start off with a couple of numbers that might surprise you. There are roughly 150 people in the world that work on bitcoin open source infrastructure. Around 30 of those work full-time on Bitcoin Core. I\u0026amp;rsquo;m looking for 31.\nWhat Drives Bitcoin Core Developers To find out if you\u0026amp;rsquo;re this …"},{"uri":"/mit-bitcoin-expo/","title":"MIT Bitcoin Expo","content":" Mit Bitcoin Expo 2015 Mit Bitcoin Expo 2016 MIT Bitcoin Expo 2017 Mit Bitcoin Expo 2018 Mit Bitcoin Expo 2019 Mit Bitcoin Expo 2020 Mit Bitcoin Expo 2021 MIT Bitcoin Expo 2022 MIT Bitcoin Expo 2024 "},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2024/","title":"MIT Bitcoin Expo 2024","content":" Choosing a Career in Bitcoin Open Source Development Apr 21, 2024 Adam Jonas Career Live Stream: https://web.mit.edu/webcast/bitcoin-expo-s24/\n"},{"uri":"/categories/video/","title":"video","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/asmap/","title":"ASMap","content":"From virtu\u0026amp;rsquo;s presentation Distribution of nodes in ASes is low 8k reachable clearnet nodes / 30k unreachable A contributor has different statistics that show a lot more nodes, not sure which numbers are (more) correct. These numbers are would mean that some of the simulations are already a reality. Most nodes from Hetzner and AWS Shift compute and bandwidth to nodes in small ASes Unreachable nodes cannot sustain ten outbound connections Discussions Ignore AS for blocks-only connections? …"},{"uri":"/bitcoin-core-dev-tech/","title":"Bitcoin Core Dev Tech","content":" Bitcoin Core Dev Tech 2015 Bitcoin Core Dev Tech 2017 Bitcoin Core Dev Tech 2018 (Mar) Bitcoin Core Dev Tech 2018 (Oct) Bitcoin Core Dev Tech 2019 Bitcoin Core Dev Tech 2022 Bitcoin Core Dev Tech 2023 (Apr) Bitcoin Core Dev Tech 2023 (Sept) Bitcoin Core Dev Tech 2024 (Apr) "},{"uri":"/bitcoin-core-dev-tech/2024-04/","title":"Bitcoin Core Dev Tech 2024 (Apr)","content":" ASMap Apr 11, 2024 Bitcoin core Security enhancements P2p assumeUTXO Mainnet Readiness Apr 10, 2024 Bitcoin core Assume utxo Coin Selection Apr 08, 2024 Bitcoin core Coin selection Cross Input Signature Aggregation Apr 08, 2024 Bitcoin core Cisa Great Consensus Cleanup Apr 08, 2024 Bitcoin core Consensus cleanup GUI Discussions Apr 10, 2024 Bitcoin core Ux Kernel Apr 10, 2024 Bitcoin core Build system P2P Monitoring Apr 09, 2024 Bitcoin core P2p Developer tools Private tx broadcast Apr 08, 2024 …"},{"uri":"/categories/core-dev-tech/","title":"core-dev-tech","content":""},{"uri":"/tags/security-enhancements/","title":"Security Enhancements","content":""},{"uri":"/tags/assumeutxo/","title":"AssumeUTXO","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/assumeutxo-mainnet-readiness/","title":"assumeUTXO Mainnet Readiness","content":" Conceptual discussion about the point raised by Sjors in the Tracking issue: https://github.com/bitcoin/bitcoin/issues/29616#issuecomment-1988390944 The outcome is pretty much the same as in the issue: Some people think it’s better to keep the params, and the rest agree that at least it’s better to keep them for now A perspective on the options: With the params, it puts more responsibility (and potentially pressure) on the maintainers, if they are removed the users have to do much more due …"},{"uri":"/bitcoin-explained/bitcoin-core-27-0/","title":"Bitcoin Core 27.0","content":"Speaker 0: 00:00:16\nLive from Utrecht, this is Bitcoin Explained. 
Hey Sjoerd.\nSpeaker 1: 00:00:20\nHello.\nSpeaker 0: 00:00:21\nFirst of all, I\u0026amp;rsquo;d like to thank our sponsor CoinKite, the creator of the OpenDime as well. That\u0026amp;rsquo;s right. Are we going to get sued if we only mention the Opendime, Sjoerds? No. Are we contractually obligated to mention the Gold Card or can we just riff off the Opendime this episode?\nSpeaker 1: 00:00:41\nIf you want to riff on the Opendime for the great memories …"},{"uri":"/tags/build-system/","title":"build-system","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/gui-discussions/","title":"GUI Discussions","content":"QML GUI Slides\nQ\u0026amp;amp;A\nCurrent GUI and QML progress seems slow? Code review / build system involvement? Will there be a test suite? Test suite yes, No fuzzing planned Why not RPC based? RPC not currently capable of building this UI on top of Is there a QML dependency graph? More dependencies required for sure May have to abandon depends approach Blocking calls historically an issue A consideration, but more to talk about here Integrated GUI Cost/Benefit Slides\nDiscussion\nIf other wallets and …"},{"uri":"/bitcoin-core-dev-tech/2024-04/kernel/","title":"Kernel","content":"The kernel project is just about done with its first stage (separating the validation logic into a separate library), so a discussion about the second stage of the project, giving the library a usable external API was held. Arguments around two questions were collected and briefly debated.\nShould a C API for the kernel library be developed with the goal of eventually shipping with releases? There are a bunch of tools that can translate C++ headers, but they have downsides due to the name …"},{"uri":"/tags/signet/","title":"Signet","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/signet-testnet4/","title":"Signet/Testnet4","content":" Signet Reset is less of a priority right now because the faucet is running again, still seeing huge number of requests Should still reset because of money making from signet coins Participants agree that getting coins doesn’t seem to be that hard, just need to ask on IRC or so Some people get repetitive messages about coins Signet can be reorged easily with a more work chain, that is actually shorter. Such a chain already exists and can be used any time. This effectively kills the current …"},{"uri":"/tags/ux/","title":"ux","content":""},{"uri":"/tags/developer-tools/","title":"Developer Tools","content":""},{"uri":"/tags/descriptors/","title":"Output script descriptors","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/p2p-monitoring/","title":"P2P Monitoring","content":"Slides\nStarted working on this about 2 years ago; in 2021. After we accidentally observed the address flooding anomaly/attack Primarily uses https://github.com/0xB10C/peer-observer to extract data from Bitcoin Core nodes with tracepoints. The infrastructure also includes a fork-observer connected to each node as well as an addrman-observer for each node. Additionally, detailed Bitcoin Core debug logs are avaliable. The main part are the Grafana dashboards. 
There’s a public version at …"},{"uri":"/bitcoin-core-dev-tech/2024-04/silent-payment-descriptors/","title":"Silent Payment Descriptors","content":"Silent payments properties:\ntwo ECDH with each scan key cost of scanning increases with number of scan keys multiple address = tweak spend key with label We wouldn’t wanna flip that because then the spend key would be common, reducing anonymity and adding extra scanning work\nBIP352 recommends NOT using with xpubs, it’s really difficult to have same public key with different chain codes.\nUse case question: with silent payments, let\u0026amp;rsquo;s say I make a legacy wallet and want to use one of my …"},{"uri":"/tags/coin-selection/","title":"Coin selection","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/coin-selection/","title":"Coin Selection","content":" Todo: Overview PR that states goal of replacing Knapsack Introduce Sand Compactor Demonstrate via Simulations that situation is improved vs Knapsack Potential privacy leak: all algorithms would be deterministic, but feels insignificant or at least would not make it worse Should we clear out negative effective value UTXOs? Users seem to indicate that they would prefer to empty wallets completely even if they pay more General agreement that we should continue to spend negative effective value …"},{"uri":"/bitcoin-core-dev-tech/2024-04/cross-input-signature-aggregation/","title":"Cross Input Signature Aggregation","content":" cisaresearch.org, put together by fjahr Documents progress of half and full agg (theory, implementation and deployment) Provides collection of CISA-related resources (ML posts, papers, videos/podcasts, etc.) Should provide guidance for further development/open todos for contributors to grab HRF announces CISA Research Fellowship Seeks to answer questions how CISA will affect privacy, cost-savings, and much more during a four-month period for a total of .5BTC More: …"},{"uri":"/bitcoin-core-dev-tech/2024-04/great-consensus-cleanup/","title":"Great Consensus Cleanup","content":"How bad are the bugs?\nHow good are the mitigations?\nImprovements to mitigations in the last 5 years?\nAnything else to fix?\nThe talk is a summary of https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710 .\nTime warp What is it? Off by one in the retargeting period 2015 blocks instead of 2016 Impact Spam (since difficulty is 1 and block times are what restricts tx) UXTO set growth for the same reason 40 days to kill the chain Empowers 51% attacker Political games (users individually …"},{"uri":"/tags/libsecp256k1/","title":"libsecp256k1","content":""},{"uri":"/lightning-specification/2024-04-08-specification-call/","title":"Lightning Specification Meeting - 1152","content":"Agenda: https://github.com/lightning/bolts/issues/1152\nSpeaker 1: Yeah, we did a point release to undo it as well. What I\u0026amp;rsquo;d like to do is to write up the spec changes to basically say: Yeah, don\u0026amp;rsquo;t set the bit if you don\u0026amp;rsquo;t have anything interesting to say — rather than if you don\u0026amp;rsquo;t support it. Yes, my sweep through the network to go: Hey, there\u0026amp;rsquo;s only a few nodes that don\u0026amp;rsquo;t set this. Missed LDK nodes obviously. There are only a few of them that are public. I …"},{"uri":"/tags/mining/","title":"Mining","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/private-tx-broadcast/","title":"Private tx broadcast","content":"Updates:\nTX is validated before broadcast (using mempool test). 
…"},{"uri":"/speakers/anant-tapadia/","title":"Anant Tapadia","content":""},{"uri":"/tags/scripts-addresses/","title":"Scripts and Addresses","content":""},{"uri":"/stephan-livera-podcast/slp455-anant-tapadia-single-sig-or-multi-sig/","title":"SLP455 Anant Tapadia - Single Sig or Multi Sig?","content":"Stephan 00:00:00:\nAnant, welcome back to the show.\nAnant 00:02:38:\nHey Stephan, thanks for having me again.\nStephan 00:02:40:\nSo there\u0026amp;rsquo;s been a lot going on. I think the conversation around learning to self custody is always an important one. It\u0026amp;rsquo;s always one that\u0026amp;rsquo;s very fresh on my mind as well. And so we\u0026amp;rsquo;re seeing a lot of discussion. And I think recently, of course, there was the news about Luke Dash-jr losing his coins, I think, I don\u0026amp;rsquo;t know exactly how many, but …"},{"uri":"/lightning-specification/2023-01-30-specification-call/","title":"Lightning Specification Meeting - Agenda 1053","content":"Name: Lightning specification call\nTopic: Agenda below\nLocation: Jitsi\nVideo: No video posted online\nAgenda: https://github.com/lightning/bolts/issues/1053\nThe conversation has been anonymized by default to protect the identities of the participants. Participants that wish to be attributed are welcome to propose changes to the transcript.\nChannel Pruning Speaker 2: To be honest, I don\u0026amp;rsquo;t think anything has made a lot of progress since the last spec meeting, so I don\u0026amp;rsquo;t think we should …"},{"uri":"/bitcointranscripts/","title":"Bitcointranscripts","content":""},{"uri":"/bitcointranscripts/stephan-livera-podcast/opvault-a-new-way-to-hodl/","title":"OP_Vault - A New Way to HODL?","content":"Stephan:\nJames, welcome back to the show.\nJames 00:02:46:\nHey, Stephan, it\u0026amp;rsquo;s great to be back. I think it\u0026amp;rsquo;s been almost four years.\nStephan 00:02:50:\nDamn. Yeah, well, in terms of on the show, yeah, actually in person, you know, a couple of times, some of the conferences and things. But I know you\u0026amp;rsquo;ve got this awesome proposal out, and it looks pretty interesting to me, and I\u0026amp;rsquo;m sure SLP listeners will be very interested to hear more about it. So do you want to just set the …"},{"uri":"/stephan-livera-podcast/2023-01-23-antoine-poinsot-and-salvatore-ingala/","title":"Bitcoin Miniscript and what it enables","content":"podcast: https://stephanlivera.com/episode/452/\nAntoine and Salvatori, welcome to the show.\nAntoine :\nThanks for having us.\nSalvatore :\nThank you, Stephan.\nStephan :\nSo I know you guys are both building out some stuff in Miniscript. Obviously, software and hardware. And I know this is an interesting area, there’s been a bit of discussion about it, but there’s also been some critique about it as well. So I’d be interested to get into this with you guys, but maybe first if we could start. What was …"},{"uri":"/stephan-livera-podcast/2023-01-18-josibake/","title":"Bitcoin Developer Education Overview","content":"podcast: https://stephanlivera.com/episode/450/\nJosi, welcome to the show.\nJosibake – 00:02:32:\nThanks for having me.\nStephan- 00:02:33:\nSo, Josi, I know you were keen to chat about I know you’ve been doing a bit of work in the bitcoin development world, doing some mentoring as well, and keen to chat a bit about the process of Bitcoin developer education. Why is it important, all of that. But let’s first start with a little bit about you. 
Can you tell us a little bit of your journey getting into …"},{"uri":"/stephan-livera-podcast/2023-01-14-james-obeirne-a-new-way-to-hodl/","title":"A New Way to HODL?","content":"podcast: https://stephanlivera.com/episode/449/\nJames, welcome back to the show.\nJames – 00:02:46:\nHey, Stephan, it’s great to be back. I think it’s been almost four years.\nStephan – 00:02:50:\nDamn. Yeah, well, in terms of on the show, yeah, actually in person, you know, a couple of times, some of the conferences and things. But I know you’ve got this awesome proposal out, and it looks pretty interesting to me, and I’m sure SLP listeners will be very interested to hear more about it. So do you …"},{"uri":"/bitcoin-explained/bitcoin-core-v24-bug/","title":"Bitcoin Core 24.0 Bug","content":"Introduction Aaron van Wirdum: 00:00:20\nLive from Utrecht, this is Bitcoin Explained. Sjors, welcome back. I saw you were promoting your book everywhere in the world over the past couple of weeks. Where did you go?\nSjors Provoost: 00:00:31\nAbsolutely. I went to a place called New York, a place called Nashville and a place called Austin and those are all in the United States.\nAaron van Wirdum: 00:00:39\nThat sounds very exotic. And you were promoting your book. Which book is this yours? Do you …"},{"uri":"/tags/adaptor-signatures/","title":"Adaptor signatures","content":""},{"uri":"/chaincode-labs/chaincode-podcast/nesting-roast-half-aggregation-adaptor-signatures/","title":"Nesting, ROAST, Half-Aggregation, Adaptor Signatures","content":"Speaker 1: 00:00:00\nJust as a warning, so don\u0026amp;rsquo;t do this at home.\nSpeaker 0: 00:00:01\nYes, of course. Still have your friendly neighborhood cryptographer have a look at it.\nSpeaker 2: 00:00:16\nThis is the second half of the conversation with Tim and Peter. If you have not listened to the first half, I\u0026amp;rsquo;d suggest going back and listening to that episode. We cover all sorts of fun things, including when to roll your own cryptography, why we prefer Schnorr signatures over ECDSA, Schnorr …"},{"uri":"/tags/cryptography/","title":"cryptography","content":""},{"uri":"/chaincode-labs/chaincode-podcast/schnorr-musig-frost-and-more/","title":"Schnorr, MuSig, FROST and more","content":"Introduction Mark Erhart: 00:00:00\nThis goes to like this idea of like really only using the blockchain to settle disagreements. Like as long as everyone agrees with all we have to say to the blockchain, it\u0026amp;rsquo;s like, yeah, you don\u0026amp;rsquo;t really need to know what the rules were. Hey Jonas, good to be back.\nAdam Jonas: 00:00:19\nSo while you were out being the Bitcoin ambassador that we depended on you to be, I was here stuck in the cold talking to Tim and Pieter and I thought it was a good …"},{"uri":"/speakers/adam-gibson/","title":"Adam Gibson","content":""},{"uri":"/bitcoinplusplus/onchain-privacy/coinjoin-done-right-the-onchain-offchain-mix-and-anti-sybil-with-riddles/","title":"Coinjoin Done Right: the onchain offchain mix (and anti-Sybil with RIDDLE)","content":"NOTE: Slides of the talk can be found here\nIntroduction My name\u0026amp;rsquo;s Adam Gibson, also known as waxwing on the internet. Though the title might suggest that I have the solution to the problem of implementing CoinJoin, it\u0026amp;rsquo;s not exactly true. It\u0026amp;rsquo;s a clickbait. 
I haven\u0026amp;rsquo;t, but I suppose what I can say in a more serious way is that I\u0026amp;rsquo;m proposing what I consider to be a meaningfully different alternative way of looking at the concept of CoinJoin and therefore at the concept …"},{"uri":"/chaincode-labs/chaincode-podcast/channel-jamming-on-the-lightning-network/","title":"Channel Jamming on the Lightning Network","content":"Clara Shikhelman: 00:00:00\nWe identify the problem, we identify what would be a good solution, and then we go over the tools that are available for us over the Lightning Network.\nAdam Jonas: 00:00:15\nHello, Murch.\nMark Erhardt: 00:00:16\nHi, Jonas.\nAdam Jonas: 00:00:16\nBack in the saddle again. We are talking to Sergey and Clara today about their recent work on Jamming.\nMark Erhardt: 00:00:23\nThey have a paper forthcoming.\nAdam Jonas: 00:00:25\nWe\u0026amp;rsquo;ve seen some recent attacks on Lightning. …"},{"uri":"/speakers/clara-shikhelman/","title":"Clara Shikhelman","content":""},{"uri":"/tags/research/","title":"research","content":""},{"uri":"/speakers/sergei-tikhomirov/","title":"Sergei Tikhomirov","content":""},{"uri":"/tags/watchtowers/","title":"Watchtowers","content":""},{"uri":"/speakers/dhruv/","title":"Dhruv","content":""},{"uri":"/stephan-livera-podcast/2022-11-13-dhruv-pieter-wuille-and-tim-ruffing/","title":"v2 P2P Transport Protocol for Bitcoin (BIP324)","content":"podcast: https://stephanlivera.com/episode/433/\nStephan Livera – 00:03:20:\nGentlemen, welcome to the show.\nDhruv – 00:03:22:\nHello.\nTim Ruffing – 00:03:23:\nHi.\nStephan Livera – 00:03:24:\nYeah, so thanks, guys, for joining me and interested to chat about what you’re working on and especially what’s going on with P2P transport, a v2 P2P transport protocol for bitcoin core.\nDhruv – 00:03:36:\nBitcoin?\nStephan Livera – 00:03:37:\nYeah, for a course for bitcoin. So I think, Pieter and Tim, I think …"},{"uri":"/adopting-bitcoin/","title":"Adopting Bitcoin","content":" Adopting Bitcoin 2021 Adopting Bitcoin 2022 "},{"uri":"/adopting-bitcoin/2022/","title":"Adopting Bitcoin 2022","content":" How to get started contributing sustainably to Bitcoin Core Nov 11, 2022 Jon Atack, Stephan Livera Bitcoin core "},{"uri":"/adopting-bitcoin/2022/how-to-get-started-contributing-sustainably-to-bitcoin-core/","title":"How to get started contributing sustainably to Bitcoin Core","content":"Introduction Host: 00:00:00\nNow we are going to have a panel conversation; a kind of fireside chat between Stephan Livera and Jon Atack and the name is how to get started contributing sustainably to Bitcoin Core so if you are a dev and are looking to start to contribute to Bitcoin Core, this is your talk.\nStephan: 00:00:32\nOkay, so thank you for having us and thank you for joining us. Joining me today is Jon Atack. He is a Bitcoin Core developer, contributor. He started about two or three years …"},{"uri":"/speakers/jon-atack/","title":"Jon Atack","content":""},{"uri":"/chaincode-labs/chaincode-podcast/the-bitcoin-core-wallet-and-wrangling-bitcoin-data/","title":"The Bitcoin Core wallet and wrangling bitcoin data","content":"Introduction Adam Jonas: 00:00:00\nHey Murch!\nMark Erhardt: 00:00:01\nYo, what\u0026amp;rsquo;s up?\nAdam Jonas: 00:00:02\nWe are back, as promised. We\u0026amp;rsquo;re really following through this time. 
We have Josie Baker in the office this week, and we\u0026amp;rsquo;re going to hear about what he\u0026amp;rsquo;s up to.\nMark Erhardt: 00:00:11\nYeah, we\u0026amp;rsquo;ve been working with him on some mempool data stuff, and I\u0026amp;rsquo;ve also worked with him closely on some Bitcoin Core wallet privacy improvements and that\u0026amp;rsquo;s fun stuff. …"},{"uri":"/speakers/bob-mcelrath/","title":"Bob McElrath","content":""},{"uri":"/tabconf/2022/2022-10-15-braidpool/","title":"Braidpool","content":"Introduction I am going to talk today about Braidpool which is a decentralized mining pool. I hope some of you were in the mining panel earlier today in particular p2pool which this is a successor to. I gave a talk a long time ago about a directed acyclic graph blockchain.\nBraidpool Braidpool is a proposal for a decentralized mining pool which uses a merge-mined DAG with Nakamoto-like consensus to track shares. This was the most straightforward way I could find to apply the ideas of Nakamoto …"},{"uri":"/tabconf/2022/2022-10-15-segwit-vbytes-misconceptions/","title":"Misconceptions about segwit vbytes","content":"Weighing transactions: The witness discount You\u0026amp;rsquo;ve already heard from someone that this presentation will be more interactive. I probably won\u0026amp;rsquo;t get through all of my material. If you have questions during the talk, please feel free to raise your hand and we can cover it right away. I\u0026amp;rsquo;m going to try to walk you through a transaction serialization for both a non-segwit transaction and a segwit transaction. By the end of the talk, I hope you understand how the transaction weight …"},{"uri":"/tabconf/2022/2022-10-15-silent-payments/","title":"Silent Payments and Alternatives","content":"Introduction I will talk about silent payments but also in general the design space around what kind of constructs you can have to pay people in a non-interactive way. In the bitcoin world, there are a couple common ways of paying someone. Making a payment is such a basic thing. \u0026amp;hellip; The alternative we have is that you can generate a single address and put it in your twitter profile and everyone can pay you, but that\u0026amp;rsquo;s not private. So either you have this interactivity or some loss of …"},{"uri":"/tabconf/","title":"TABConf","content":" TABConf 2021 TABConf 2022 "},{"uri":"/tabconf/2022/","title":"TABConf 2022","content":" Braidpool Oct 15, 2022 Bob McElrath Mining How to become Signet Rich: Learn about Bitcoin Signet and Zebnet Oct 13, 2022 Richard Safier Signet Lightning Lnd Lightning is Broken AF (But We Can Fix It) Oct 14, 2022 Matt Corallo Lightning Misconceptions about segwit vbytes Oct 15, 2022 Mark Erhardt Segwit Fee management Provably bug-free BIPs and implementations using hac-spec Oct 14, 2022 Jonas Nick Cryptography ROAST: Robust asynchronous Schnorr threshold signatures Oct 14, 2022 Tim Ruffing …"},{"uri":"/tabconf/2022/lightning-is-broken-af-but-we-can-fix-it/","title":"Lightning is Broken AF (But We Can Fix It)","content":"Introduction Thank you. Yeah, so, as the title says, for those of you who work in lightning or have been around lightning for a long time, hopefully you will recognize basically all the issues I\u0026amp;rsquo;ll go through today. I\u0026amp;rsquo;m going to go through a very long list of things. But I\u0026amp;rsquo;m going to try to go through most of the kind of, at least my understanding of the current thinking of solutions for a lot of these issues. 
So hopefully that most people will at least have a chance to learn …"},{"uri":"/tabconf/2022/2022-10-14-hac-spec/","title":"Provably bug-free BIPs and implementations using hac-spec","content":"https://nickler.ninja/\nAlright. Strong crowd here, I can see. That\u0026amp;rsquo;s very nice.\nIntroduction I am Jonas and I will talk about provably bug-free BIPs and implementations. I don\u0026amp;rsquo;t have such a BIP nor such an implementation but if you lower your time-preference enough this could eventually be true at some point. This presentation is only 15 minutes so raise your hand if you have questions.\nJust to set the stage. Specifications should be free of bugs, we want them easy to implement and …"},{"uri":"/tabconf/2022/2022-10-14-roast/","title":"ROAST: Robust asynchronous Schnorr threshold signatures","content":"paper: https://ia.cr/2022/550\nslides: https://slides.com/real-or-random/roast-tabconf22/\nHey. Hello. My name is Tim and I work at Blockstream. This is some academic work in joint with some of my coworkers.\nSchnorr signatures in Bitcoin We recently got support for Schnorr signatures in bitcoin, introduced as part of bip340 which was activated as part of the taproot soft-fork. There are three main reasons why we want Schnorr signatures and prefer them over ECDSA bitcoin signatures which can still …"},{"uri":"/tabconf/2022/weighing-transactions-the-witness-discount/","title":"Weighing Transactions, The Witness Discount","content":"In this talk Mark Erhardt: 0:00:25\nI\u0026amp;rsquo;m going to try to walk with you through a transaction serialization, the Bitcoin transaction serialization. And we\u0026amp;rsquo;ll try to understand a non-SegWit transaction and then a SegWit transaction to compare it to. And I hope by the end of this talk, you understand how the transaction weight is calculated and how the witness discount works. And if we get to it, we may take a look at a few different output types in the end as well.\nTransaction components …"},{"uri":"/tabconf/2022/how-to-become-signet-rich-learn-about-bitcoin-signet-and-zebnet/","title":"How to become Signet Rich: Learn about Bitcoin Signet and Zebnet","content":"Richard: So today we are going to talk about Signet Lightning, PlebNet Playground, which was the first evolution of me playing with custom Signets, and eventually ZebNet Adventures. This is how you can become Signet-rich.\nFirst, we should probably do an overview of the various network types. Everybody knows about Mainnet. This is what we use day to day, but there\u0026amp;rsquo;s a few other testing networks. RegTest, or SimNet, if you are running BTCD, is a local unit testing framework. And this is …"},{"uri":"/tags/lnd/","title":"lnd","content":""},{"uri":"/speakers/richard-safier/","title":"Richard Safier","content":""},{"uri":"/bitcoin-core-dev-tech/2022-10/","title":"Bitcoin Core Dev Tech 2022","content":" BIP324 - Version 2 of p2p encrypted transport protocol Oct 10, 2022 P2p Bitcoin core Bitcoin Core and GitHub Oct 11, 2022 Bitcoin core Fee Market Oct 11, 2022 Fee management Bitcoin core FROST Oct 11, 2022 Threshold signature High-assurance cryptography specifications (hac-spec) Oct 11, 2022 Cryptography Libsecp256k1 Maintainers Meeting Oct 12, 2022 Libsecp256k1 Misc Oct 10, 2022 P2p Bitcoin core Package Relay BIP, implementation, V3, and package RBF proposals Oct 11, 2022 Package relay Bitcoin …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-12-libsecp256k1/","title":"Libsecp256k1 Maintainers Meeting","content":"Q: Why C89? 
When I asked you this question a few years ago, I think you said gmaxwell.\nA: There are a number of embedded devices that only support C89 and it\u0026amp;rsquo;d be good to support those devices. That was the answer back then at least.\nQ: Is it a large cost to keep doing C89?\nA: The only cost is for the context stuff we want to make threadlocal. The CPUid or the x86-specific things. These could be optional. If you really want to get into this topic, then perhaps later. It makes sense to …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-12-research-wishlist/","title":"Research wishlist","content":"https://docs.google.com/document/d/1oRCeDzY3zH2ZY-BUYIVfJ1GMkvLlqKHWCFdtS62QWAo/edit\nIntroduction In spirit of the conversation happening today earlier, I\u0026amp;rsquo;ll give some motivation. In general there is a disconnect between academic researchers and people who work in open-source software. It\u0026amp;rsquo;s a pity because these two groups are interested in bitcoin, they love difficult questions and working on them, and somehow it seems like the choice of questions and spirit of work is sometimes not …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-12-merging/","title":"Strategies for getting stuff merged","content":"Introduction I wanted to talk about things that have been leaking out over other conversations because sometimes people get frustrated that their stuff doesn\u0026amp;rsquo;t get merged. This is not a new problem. It\u0026amp;rsquo;s an issue that has been going on for a long time. It can be frustrating. I don\u0026amp;rsquo;t have the answer. This is going to be more discussion based and I\u0026amp;rsquo;ve asked a few folks to talk about strategies that have worked for them. Hopefully this will lead to a discussion about copying …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-github/","title":"Bitcoin Core and GitHub","content":"Bitcoin Core and GitHub\nI think at this point it\u0026amp;rsquo;s quite clear that it\u0026amp;rsquo;s not necessarily a \u0026amp;ldquo;if\u0026amp;rdquo; we get off github, but a when and how. The question would be, how would we do that? This isn\u0026amp;rsquo;t really a presentation. It\u0026amp;rsquo;s more of a discussion. There\u0026amp;rsquo;s a few things to keep in mind, like the bitcoin-gh-meta repo, which captures all the issues, comments and pull requests. It\u0026amp;rsquo;s quite good. The ability to reconstruct what\u0026amp;rsquo;s inside of here on another …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-fee-market/","title":"Fee Market","content":"Fee market\nThere are two times we have had sustained fees: late 2017 and early 2021. In late 2017 we saw lots of things break because people hadn\u0026amp;rsquo;t written software to deal with variable fees or anything. I don\u0026amp;rsquo;t know if that was as big of a problem in 2021. I do worry that this will start to become a thing. If you have no variable fee market, and you can just throw in 1 sat/vbyte for several years then it will just work until it doesn\u0026amp;rsquo;t. So right now developers don\u0026amp;rsquo;t …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-frost/","title":"FROST","content":"Introduction I am going to be going over the FROST implementation. I also have an early draft of the BIP. I am going to be focusing on the differences between the paper and the RFC and the overall scheme. 
This is meant to be an open discussion so feel free to jump in.\nDistributed key generation Maybe one good place to start is to look at the example file in the PR. It shows how the flow works with the API. The protocol we start off with generating a keypair. We\u0026amp;rsquo;re going to use the secret …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-hac-spec/","title":"High-assurance cryptography specifications (hac-spec)","content":"See https://btctranscripts.com/tabconf/2022/2022-10-14-hac-spec/ instead for a full transcript of a similar talk.\nFar far future In the far far future, we could get rid of this weird paper notation scheme and do a security proof directly for the specification. Presumably that is much harder than anything else in my slides. But this would rule out a lot of bugs.\nQ: But the security proof itself is written in a paper?\nA: The security proof itself would be written in hac-spec. And your simulators. …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-package-relay/","title":"Package Relay BIP, implementation, V3, and package RBF proposals","content":"Notes on Package Relay BIP, implementation, V3, and package RBF proposals from Core Dev in Atlanta.\nAlso at https://gist.github.com/glozow/8469dc9c3a003c7046033a92dd504329.\nAncestor Package Relay BIP BIP updated to be receiver-initiated ancestor packages only. Sender-initiated vs receiver-initiated package relay. Receiver-intiated package relay enables a node to ask for more information when they suspect they are missing something (i.e. to resolve orphans). Sender-initiated package relay should, …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-stratum-v2/","title":"Stratum V2","content":"Introduction There was an announcement earlier this morning that announced the open-source project implementing Stratum v2 is ready for testing now. Spiral has been contributing to this project for a few years. There\u0026amp;rsquo;s a few other companies funding it as well. It\u0026amp;rsquo;s finally ready for testing.\nHistory About 4 years ago or so, Matt proposed BetterHash which focused on enabling miners to be able to do their own transaction selection. There was an independent effort from Braaains that …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-10-p2p-encryption/","title":"BIP324 - Version 2 of p2p encrypted transport protocol","content":"Previous talks https://btctranscripts.com/scalingbitcoin/milan-2016/bip151-peer-encryption/\nhttps://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/\nhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06-07-p2p-encryption/\nhttps://btctranscripts.com/breaking-bitcoin/2019/p2p-encryption/\nIntroduction and motivation Can we turn down the lights? \u0026amp;ldquo;Going dark\u0026amp;rdquo; is a nice theme for the talk. I also have dark coffee. Okay.\nWe\u0026amp;rsquo;re going to talk a little bit …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-10-misc/","title":"Misc","content":"Web of Trust Some of the public key server operators interpreted GDPR to mean that they can\u0026amp;rsquo;t operate public key infrastructure anymore. There needs to be another solution for p2p distribution of keys and Web-of-Trust.\n\u0026amp;lt;bitcoin-otc.com\u0026amp;gt; continues to be the longest operating PGP web-of-trust using public key infrastructure. Rumplepay might be able to bootstrap a web-of-trust over time.\nStealth addresses and silent payments Here\u0026amp;rsquo;s something controversial. 
Say you keep an …"},{"uri":"/bitcoin-explained/bitcoin-core-v24/","title":"Bitcoin Core 24.0","content":"Intro Aaron van Wirdum: 00:00:20\nLive from Utrecht, this is Bitcoin\nSjors Provoost: 00:00:23\nExplained.\nAaron van Wirdum: 00:00:24\nHey Sjors.\nSjors Provoost: 00:00:25\nWhat\u0026amp;rsquo;s up?\nAaron van Wirdum: 00:00:26\nI\u0026amp;rsquo;m good.\nSjors Provoost: 00:00:27\nHow do you like the weather?\nAaron van Wirdum: 00:00:29\nIt was too hot all summer and then it was nice for about a week and now we\u0026amp;rsquo;re back to too cold.\nSjors Provoost: 00:00:36\nThis is gonna be a great winter.\nAaron van Wirdum: 00:00:39 …"},{"uri":"/chaincode-labs/chaincode-podcast/lightning-updates-stratum-v2/","title":"Lightning updates / Stratum V2","content":"Murch: 00:00:00\nHey Jonas.\nJonas: 00:00:00\nHey Murch.\nMurch: 00:00:02\nWhat are we up to?\nJonas: 00:00:03\nWe are back.\nMurch: 00:00:04\nWho are we recording today?\nJonas: 00:00:05\nWe have Steve Lee in the office this week.\nMurch: 00:00:04\nHe is the head at Spiral. He\u0026amp;rsquo;s done a lot of open source development. He talks to a bunch of people. He\u0026amp;rsquo;s also the PM for the LDK team. So we\u0026amp;rsquo;re going to talk Lightning, the challenges that are open with Lightning, and maybe a little bit about …"},{"uri":"/speakers/steve-lee/","title":"Steve Lee","content":""},{"uri":"/tags/zero-conf-channels/","title":"Zero-conf channels","content":""},{"uri":"/wasabi/research-club/checking-bitcoin-balances-privately/","title":"Checking Bitcoin balances privately","content":"Speaker 0: 00:00:00\nSo hello and welcome to the Wasabi Wallet Research Club. Today we are speaking from with Samir from Spiral, which is the title of a fancy cryptography paper of homomorphic value encryption or homomorphic encryption and private information retrieval. The gist of this cryptomagic is that a client can request data from a server, but the server does not know which data was requested. And there are different variants of the CryptoMagick for different use cases. And there are …"},{"uri":"/categories/club/","title":"club","content":""},{"uri":"/speakers/samir-menon/","title":"Samir Menon","content":""},{"uri":"/wasabi/research-club/","title":"Wasabi Research Club","content":"https://github.com/zkSNACKs/WasabiResearchClub\nAnonymous Credentials Apr 14, 2020 Jonas Nick Research Privacy enhancements Cryptography Bls signatures Checking Bitcoin balances privately Sep 27, 2022 Samir Menon Research Privacy enhancements Cryptography CJDNS Mar 16, 2021 Caleb DeLisle, Adam Ficsor, Lucas Ontivero Research Anonymity networks CoinShuffle Jan 20, 2020 Aviv Milner, Adam Fiscor, Lucas Ontivero, Max Hillebrand Research Coinjoin CoinShuffle\u0026amp;#43;\u0026amp;#43; (Part 1) Feb 04, 2020 Tim …"},{"uri":"/wasabi/","title":"Wasabi Wallet","content":"https://github.com/zkSNACKs/WasabiResearchClub\nWasabi Research Club Wasabikas Bitcoin Privacy Podcast "},{"uri":"/tags/bip32/","title":"HD key generation","content":""},{"uri":"/bitcoin-explained/hd-wallets-mnemonic-codes-and-seedqr/","title":"HD Wallets, Mnemonic codes and SeedQR","content":"Aaron van Wirdum: 00:00:19\nLive from Utrecht, this is Bitcoin\u0026amp;hellip;\nSjors Provoost: 00:00:21\nExplained!\n(Ad removed)\nAaron van Wirdum: 00:01:41\nAll right, let\u0026amp;rsquo;s move on. 
Sjors, today, you\u0026amp;rsquo;ve got a new hobby.\nSjors Provoost: 00:01:46\nI have a new hobby.\nAaron van Wirdum: 00:01:47\nWhat am I looking at here?\nSjors Provoost: 00:01:48\nYou\u0026amp;rsquo;re looking at an anvil.\nAaron van Wirdum: 00:01:50\nYou\u0026amp;rsquo;ve got an actual miniature anvil on your desk right here.\nSjors Provoost: 00:01:53 …"},{"uri":"/london-bitcoin-devs/","title":"London Bitcoin Devs","content":" Better Hashing through BetterHash Feb 05, 2019 Matt Corallo Mining Bitcoin Core and hardware wallets Sep 19, 2018 Sjors Provoost Hardware wallet Bitcoin core Bitcoin Core V0.17 John Newbery Research Bitcoin Full Nodes Jul 23, 2018 John Light Soft fork activation Current state of P2P research in Bitcoin /Erlay Nov 13, 2019 Gleb Naumenko Research P2 p Grokking Bitcoin Apr 29, 2020 Kalle Rosenbaum Hardware Wallets (History of Attacks) May 01, 2019 Stepan Snigirev Hardware wallet Security problems …"},{"uri":"/categories/meetup/","title":"meetup","content":""},{"uri":"/london-bitcoin-devs/2022-08-11-tim-ruffing-musig2/","title":"MuSig2","content":"Reading list: https://gist.github.com/michaelfolkson/5bfffa71a93426b57d518b09ebd0998c\nIntroduction Michael Folkson (MF): This is a Socratic Seminar, we are going to be discussing MuSig2 and we’ll move onto adjacent topics such as FROST and libsecp256k1 later. We have a few people on the call including Tim (Ruffing). If you want to do short intros, you don’t have to, for the people on the call.\nTim Ruffing (TR): Hi. Thanks for having me. I am Tim Ruffing, my work is at Blockstream. I am an author …"},{"uri":"/stephan-livera-podcast/2022-08-11-gloria-zhao/","title":"What Do Bitcoin Core Maintainers Do?","content":"podcast: https://stephanlivera.com/episode/404/\nStephan Livera: Gloria, welcome back to the show.\nGloria Zhao: Thank you for having me again.\nStephan Livera: Yeah, always great to chat with you, Gloria. I know you’ve got a lot of things you’re working on in the space and you recently took on a role as a maintainer as well, so we’ll obviously get into that as well as all your work around the mempool. So do you want to just start with — and we’re going to keep this accessible for beginners — so …"},{"uri":"/speakers/chelsea-komlo/","title":"Chelsea Komlo","content":""},{"uri":"/speakers/elizabeth-crites/","title":"Elizabeth Crites","content":""},{"uri":"/misc/2022-08-07-komlo-crites-frost/","title":"FROST","content":"Location: Zcon 3\nFROST paper: https://eprint.iacr.org/2020/852.pdf\nSydney Socratic on FROST: https://btctranscripts.com/sydney-bitcoin-meetup/2022-03-29-socratic-seminar/\nIntroduction Elizabeth Crites (EC): My name is Elizabeth Crites, I’m a postdoc at the University of Edinburgh.\nChelsea Komlo (CK): I’m Chelsea Komlo, I’m at the University of Waterloo and I’m also a Principal Researcher at the Zcash Foundation. Today we will be giving you research updates on FROST.\nWhat is FROST? (Chelsea …"},{"uri":"/misc/","title":"Misc","content":" Bitcoin Scaling Tradeoffs Apr 05, 2016 Adam Back Scalability Bitcoin Script: Past and Future Apr 08, 2020 John Newbery Scripts addresses Bitcoin Sidechains - Unchained Epicenter Feb 03, 2015 Adam Back, Greg Maxwell Sidechains Bulletproofs Feb 02, 2018 Andrew Poelstra Proof systems CFTC Bitcoin Brian O’Keefe, David Bailey, Rodrigo Buenaventura CTV BIP Review Workshop Jeremy Rubin Discreet Log Contracts Tadge Dryja Failures Of Secret Key Cryptography (2013) Daniel J. 
Bernstein Security problems …"},{"uri":"/stephan-livera-podcast/2022-08-02-jonas-nick-tim-ruffing/","title":"Half Signature Aggregation-What is it and how does it help scale Bitcoin?","content":"podcast: https://stephanlivera.com/episode/400/\nStephan Livera: Jonas and Tim, welcome back to the show. Great to chat with you guys again. And so we’re gonna chat about half signature aggregation and hear a little bit from you guys about what you’re working on and just get into what that means for Bitcoin. So Jonas, I know you probably will be best to give some background on some of this — I know you did a talk at Adopting Bitcoin in November of last year, so call it 7–8 months ago, talking …"},{"uri":"/bitcoin-design/learning-bitcoin-and-design/fedimint/","title":"FediMint","content":"Introduction Stephen DeLorme: 00:00:04\nAll right. Welcome, everybody. This is the Learning Bitcoin and Design Call. It\u0026amp;rsquo;s July 26, 2022. And our topic this week is FediMint. And we have Justin Moon here with us and several designers and product folks in the room here that are interested in learning about this. So I\u0026amp;rsquo;ve got a FigJam open here where we can toss images, ideas, text, links, whatever we want to consolidate during the call. And anyone on the call right now can find that in …"},{"uri":"/speakers/justin-moon/","title":"Justin Moon","content":""},{"uri":"/speakers/stephen-delorme/","title":"Stephen DeLorme","content":""},{"uri":"/bitcoin-explained/op_return/","title":"OP_RETURN","content":"Intro Aaron van Wirdum: 00:00:19\nLive from Utrecht, this is Bitcoin Explained. Hey Sjors.\nSjors Provoost: 00:00:24\nWhat\u0026amp;rsquo;s up?\nAaron van Wirdum: 00:00:25\nHow exciting. Two weeks ago you told me that maybe you would go to Bitcoin Amsterdam.\nSjors Provoost: 00:00:31\nYes.\nAaron van Wirdum: 00:00:31\nAnd now you\u0026amp;rsquo;re a speaker.\nSjors Provoost: 00:00:33\nI\u0026amp;rsquo;m a panelist, probably not a real speaker.\nAaron van Wirdum: 00:00:37\nThat counts that that\u0026amp;rsquo;s a speaker in my book Sjors.\nSjors …"},{"uri":"/misc/2022-07-14-tim-ruffing-roast/","title":"ROAST","content":"Location: Monash Cybersecurity Seminars\nROAST paper: https://eprint.iacr.org/2022/550.pdf\nROAST blog post: https://medium.com/blockstream/roast-robust-asynchronous-schnorr-threshold-signatures-ddda55a07d1b\nROAST in Python: https://github.com/robot-dreams/roast\nIntroduction (Amin Sakzad) Welcome everyone to Monash Cybersecurity Seminars. Today we will be having Tim Ruffing. Tim is an applied cryptographer in the research team of Blockstream, a Bitcoin and blockchain technology company. He …"},{"uri":"/tags/htlc/","title":"Hash Time Locked Contract (HTLC)","content":""},{"uri":"/tags/incentives/","title":"incentives","content":""},{"uri":"/speakers/jeremy-rubin/","title":"Jeremy Rubin","content":""},{"uri":"/speakers/jonathan-harvey-buschel/","title":"Jonathan Harvey Buschel","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/lightning-panel/","title":"Lightning Panel","content":"What is Lightning Jeremy Rubin: 00:00:07\nI was hoping that we can have our panelists, and maybe we\u0026amp;rsquo;ll start with Rene. Just introduce themselves quickly, and also answer the question, what is Lightning to you? And I mean that sort of in the long run. Lightning is a thing today, but what is the philosophical Lightning to you?\nRene Pickhardt: 00:00:26\nSo I\u0026amp;rsquo;m Rene Pickardt, a researcher and developer for the Lightning Network. 
And for me personally, the Lightning Network is the means to …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/","title":"MIT Bitcoin Expo 2022","content":" Bitcoin R\u0026amp;amp;D Panel May 07, 2022 Jeremy Rubin, Gloria Zhao, Andrew Poelstra Research Covenants Scripts addresses Lightning Panel Jul 05, 2022 Jeremy Rubin, Rene Pickhardt, Lisa Neigut, Jonathan Harvey Buschel Lightning Taproot Ptlc Htlc Long-Term Trust and Analog Computers May 07, 2022 Andrew Poelstra Codex32 Tradeoffs in Permissionless Systems Jul 05, 2022 Gloria Zhao Bitcoin core Transaction relay policy Incentives "},{"uri":"/speakers/rene-pickhardt/","title":"Rene Pickhardt","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/tradeoffs-in-permissionless-systems/","title":"Tradeoffs in Permissionless Systems","content":"Introduction Hello. I wanted to make a talk about what I work on because I consider it the, well, the area of code that I work in, because I considered it to be one of the most fascinating and definitely the most underrated part of Bitcoin that no one ever talks about. And the reason is I kind of see it as where one of the most important ideological goals of Bitcoin translates into technical challenges, which also happens to be very, very interesting. So I\u0026amp;rsquo;m going to talk about the …"},{"uri":"/bitcoin-explained/silent-payments/","title":"Silent Payments","content":"Aaron: 00:00:20\nLive from Utrecht, this is Bitcoin Explained. Hey, Sjors.\nSjors: 00:00:23\nYo.\nAaron: 00:00:25\nHey, Ruben.\nRuben: 00:00:26\nHey.\nAaron: 00:00:27\nRuben is back. I think last time In Prague, we were all in Prague last week and there I introduced you as our resident funky second layer expert.\nRuben: 00:00:38\nThat is right.\nAaron: 00:00:39\nThis week we\u0026amp;rsquo;re going to talk about one of your new proposals, which is actually not a second layer proposal.\nRuben: 00:00:45\nTrue.\nAaron: …"},{"uri":"/speakers/alex-myers/","title":"Alex Myers","content":""},{"uri":"/bitcoinplusplus/2022/","title":"Bitcoin++ 2022","content":" Minisketch and Lightning gossip Jun 07, 2022 Alex Myers Lightning P2 p "},{"uri":"/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/","title":"Minisketch and Lightning gossip","content":"Location: Bitcoin++\nSlides: https://endothermic.dev/presentations/magical-minisketch\nRusty Russell on using Minisketch for Lightning gossip: https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html\nMinisketch library: https://github.com/sipa/minisketch\nBitcoin Core PR review club on Minisketch (3 sessions):\nhttps://bitcoincore.reviews/minisketch-26\nhttps://bitcoincore.reviews/minisketch-26-2\nhttps://bitcoincore.reviews/minisketch\nIntroduction Pretty new to Lightning …"},{"uri":"/chaincode-labs/chaincode-podcast/tracepoints-and-monitoring-the-bitcoin-network/","title":"Tracepoints and monitoring the Bitcoin network","content":"Speaker 0: 00:00:00\nHey, Merch. What up? We are back in the studio.\nSpeaker 1: 00:00:02\nWho are we talking to today? We\u0026amp;rsquo;re talking to OXB10C.\nSpeaker 0: 00:00:06\nI know him as Timo. So we\u0026amp;rsquo;re going to call him Timo.\nSpeaker 1: 00:00:09\nOK, fine.\nSpeaker 0: 00:00:12\nIt doesn\u0026amp;rsquo;t quite roll off the tongue. Is there anything in particular that you\u0026amp;rsquo;re interested in learning from Timo today?\nSpeaker 1: 00:00:17\nYeah, I think we need to talk about Tether. 
What he\u0026amp;rsquo;s doing …"},{"uri":"/chaincode-labs/chaincode-podcast/package-relay/","title":"Package Relay","content":"Introduction Mark Erhardt: 00:00:00\nHey Jonas.\nAdam Jonas: 00:00:00\nHey Murch.\nMark Erhardt: 00:00:01\nWe\u0026amp;rsquo;re going to try again to record with Gloria. This time we want to get it really focused, short.\nAdam Jonas: 00:00:07\nWe got a lot of tape in that last one, but I don\u0026amp;rsquo;t know if all of it was usable. Yeah, we\u0026amp;rsquo;re going to talk to Gloria today about her newly released proposal for package relay. Looking forward to getting something that we can release. Enjoy.\nAdam Jonas: …"},{"uri":"/tags/security/","title":"security","content":""},{"uri":"/chaincode-labs/chaincode-podcast/address-relay/","title":"Address Relay","content":"Martin: 00:00:00\nIn AddrMan you also have this distinction between the new and the tried tables. So the new tables are for unverified addresses that someone just sent you and you don\u0026amp;rsquo;t know whether they are good, whether there\u0026amp;rsquo;s really a Bitcoin node behind them. And the tried table is addresses that you\u0026amp;rsquo;ve been connected to in the past. So at some point, at least in the past, you know, they were your peers at some point.\nMark Erhardt: 00:00:19\nRight. And you also need to make …"},{"uri":"/speakers/martin-zumsande/","title":"Martin Zumsande","content":""},{"uri":"/speakers/andrew-poelstra/","title":"Andrew Poelstra","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/bitcoin-randd-panel/","title":"Bitcoin R\u0026D Panel","content":"Ayush: 00:00:07\nMy name is Ayush Khandelwal. I am a software engineer at Google. I run a tech crypto-focused podcast on the side. And I have some amazing guests with me. So if you want to introduce yourself, what do you work on? And yeah, just get us started.\nJeremy: 00:00:31\nI am Jeremy Rubin. I am a class of 2016 MIT alumni, so it\u0026amp;rsquo;s good to be back on campus. And currently I work on trying to make Bitcoin the best possible platform for capitalism, sort of my North Star.\nAyush: 00:00:51 …"},{"uri":"/tags/codex32/","title":"Codex32","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/long-term-trust-and-analog-computers/","title":"Long-Term Trust and Analog Computers","content":"What I am here to talk to you about are analog computers and long-term trust. And what I mean by long-term trust is how can you have assurance that your Bitcoin secrets, your seed words or your seed phrase or whatever have you, how can you be assured that that is stored correctly, that the data has integrity and so on, and how can you be assured of that for a long time? And so where trust comes into this is that traditionally, in the last 100 years, in order to verify the integrity of data, you …"},{"uri":"/bitcoin-explained/are-users-rejecting-bitcoin-forks/","title":"User Rejected Soft Forks (URSFs)","content":"Intro Aaron: Live from Utrecht, this is Bitcoin, Explained. Hey Sjors.\nSjors: What\u0026amp;rsquo;s up?\nAaron: Sjors Provoost, Bitcoin Core contributor and author.\nSjors: Oh yes.\nAaron: Congratulations!\nSjors: Thank you.\nAaron: Your new book, Bitcoin: A Work in Progress is now available on Amazon?\nSjors: On everywhere. You go to btcwip.com, which is Work in Progress. 
There\u0026amp;rsquo;s links to various Amazons and other sites where you can hopefully buy the book.\nAaron: What is the book?\nSjors: It is …"},{"uri":"/greg-maxwell/2022-05-05-covenants-bip119/","title":"Covenants and BIP119","content":"Location: Reddit\nhttps://www.reddit.com/r/Bitcoin/comments/uim560/bip_119/i7dhfpb/\nCovenants and BIP119 I was asked by an old colleague to respond to your post because I came up with the term covenant as applied Bitcoin many years ago back when I was still a Bitcoin developer.\ndoes bip 119 completely mess the fungibility of bitcoin. If the idea of covenants is that you can create bitcoin that can only be sent to certain addresses, doesnt that make two classes of bitcoin?\nNo. That\u0026amp;rsquo;s …"},{"uri":"/speakers/greg-maxwell/","title":"Greg Maxwell","content":""},{"uri":"/greg-maxwell/","title":"Greg Maxwell","content":" Advances In Block Propagation Nov 27, 2017 Greg Maxwell P2p Mining bech32 design Dec 22, 2017 Greg Maxwell Bech32 Bitcoin Core Github Oct 26, 2020 Greg Maxwell Bitcoin Core Testing Sep 23, 2018 Greg Maxwell Bitcoin core Developer tools Bitcoin Selection Cryptography Apr 29, 2015 Greg Maxwell Cryptography Checkmultisig Bug Aug 27, 2020 Greg Maxwell Confidential Transactions Apr 28, 2017 Greg Maxwell Privacy enhancements Covenants and BIP119 May 05, 2022 Greg Maxwell Covenants Deep Dive Bitcoin …"},{"uri":"/bitcoin-explained/bitcoin-core-v23/","title":"Bitcoin Core 23.0","content":"Intro Aaron van Wirdum: 00:00:20\nLive from Utrecht, this is Bitcoin Explained. Sjors, are we still making a bi-weekly podcast?\nSjors Provoost: 00:00:28\nI mean, it depends on what you mean by bi. It\u0026amp;rsquo;s more like a feeling, right?\nAaron van Wirdum: 00:00:32\nWe make a podcast when we feel about it these days. Is that the rule?\nSjors Provoost: 00:00:36\nNo, it\u0026amp;rsquo;s also when we\u0026amp;rsquo;re not traveling or sick or whatever.\nAaron van Wirdum: 00:00:40\nOr we don\u0026amp;rsquo;t have a good idea for a …"},{"uri":"/speakers/carl-dong/","title":"Carl Dong","content":""},{"uri":"/misc/2022-04-12-carl-dong-libbitcoinkernel/","title":"libbitcoinkernel","content":"Tracking issue in Bitcoin Core: https://github.com/bitcoin/bitcoin/issues/24303\nPieter Wuille on Chaincode podcast discussing consensus rules: https://btctranscripts.com/chaincode-labs/chaincode-podcast/2020-01-28-pieter-wuille/#part-2\nIntro Hi everyone. I’m Carl Dong from Chaincode Labs and I’m here to talk about libbitcoinkernel, a project I’ve been working on that aims to extract Bitcoin Core’s consensus engine. When we download and run Bitcoin Core it is nicely packaged into a single bundle, …"},{"uri":"/speakers/max-hillebrand/","title":"Max Hillebrand","content":""},{"uri":"/stephan-livera-podcast/2022-04-01-max-hillebrand/","title":"ZKSnacks Blacklisting Coins","content":"podcast: https://stephanlivera.com/episode/364/\nStephan Livera:\nMax, great to chat again.\nMax Hillebrand:\nOh, welcome, Stephan. I’m really looking forward to this conversation. It’s gonna be a fun one.\nStephan Livera:\nRight. So look, as I’ve mentioned, I think we’re gonna disagree a bit on this one, but let’s chat it out. Let’s discuss what’s going on in the world of Bitcoin and privacy. 
We\u0026amp;rsquo;re working towards a Taproot soft fork now.\nSjors: 00:02:06\nIt\u0026amp;rsquo;s the last soft fork we know of.\nAaron: …"},{"uri":"/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/","title":"How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blocks","content":"Luke Dashjr arguments against LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html\nT1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html\nF7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html\nTranscript by: Stephan Livera Edited by: Michael Folkson\nIntro Stephan Livera (SL): Luke, welcome to the show.\nLuke Dashjr (LD): …"},{"uri":"/stephan-livera-podcast/2021-03-17-luke-dashjr/","title":"How Bitcoin UASF Went Down, Taproot LOT=True, Speedy Trial, Small Blocks","content":"podcast: https://stephanlivera.com/episode/260/\nStephan Livera:\nLuke welcome to the show.\nLuke Dashjr:\nThanks.\nStephan Livera:\nSo, Luke for listeners who are unfamiliar, maybe you could just take a minute and just tell us a little bit about your background and how long you’ve been developing and contributing with Bitcoin core.\nLuke Dashjr:\nI first learned about Bitcoin back at the end of 2010, it was a new year’s party and I’ve been contributing since about a week later. So I recently got past …"},{"uri":"/speakers/luke-dashjr/","title":"Luke Dashjr","content":""},{"uri":"/speakers/adam-ficsor/","title":"Adam Ficsor","content":""},{"uri":"/speakers/caleb-delisle/","title":"Caleb DeLisle","content":""},{"uri":"/wasabi/research-club/cjdns/","title":"CJDNS","content":"Introduction. / BIP155. / Diverse, robust, resilient p2p networking. Lucas Ontivero: 00:00:00\nWelcome to a new Wasabi Research Experience meeting. This time we have a special guest. His name is Caleb. He is the creator of CJDNS project. Basically it\u0026amp;rsquo;s a network that can replace the internet, basically, of course, that is very ambitious. He will tell us better. Part of this project is now already supported by the BIP155 in Bitcoin and other libraries too. So this is an effort that part of …"},{"uri":"/speakers/lucas-ontivero/","title":"Lucas Ontivero","content":""},{"uri":"/bitcoin-explained/taproot-activation-speedy-trial/","title":"Taproot Activation with Speedy Trial","content":"Speedy Trial proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html\nIntro Aaron van Wirdum (AvW): Live from Utrecht this is the van Wirdum Sjorsnado. Sjors, what is your pun of the week?\nSjors Provoost (SP): I actually asked you for a pun and then you said “Cut, re-edit. We are going to do it again.” I don’t have a pun this week.\nAvW: Puns are your thing.\nSP: We tried this LOT thing last time.\nAvW: Sjors, we are going to talk a lot this week.\nSP: We are going to …"},{"uri":"/bitcoin-explained/hardware-wallet-integration-in-bitcoin-core-and-hwi/","title":"Hardware Wallet Integration in Bitcoin Core and HWI","content":"Aaron van Wirdum: 00:01:46\nLive from Utrecht this is The Van Wirdum Sjorsnado. Hello! Are you running the BIP8 True independent client yet?\nSjors Provoost: 00:01:56\nNegative. I did not even know there was one.\nAaron van Wirdum: 00:01:59\nOne has been launched, started. 
I don\u0026amp;rsquo;t think it\u0026amp;rsquo;s actually a client yet, a project has started.\nSjors Provoost: 00:02:05\nOkay, a project has started, it\u0026amp;rsquo;s not a binary or a code that you can compile.\nAaron van Wirdum: 00:02:09\nBut I did see you …"},{"uri":"/stephan-livera-podcast/2021-03-04-matt-corallo/","title":"Bitcoin Soft Fork Activation, Taproot, and Playing Chicken","content":"podcast: https://stephanlivera.com/episode/257/\nStephan Livera:\nMatt, welcome to the show.\nMatt Corallo:\nHey yeah, thanks for having me.\nStephan Livera:\nSo guys, obviously, listeners, you know, we’re just having to re-record this. We had basically a screw up the first time around, but we wanted to chat about Taproot and soft fork activation. So perhaps Matt, if we could just start from your side, let’s try to keep this accessible for listeners who maybe they are new. They’re trying to learn …"},{"uri":"/speakers/gleb-naumenko/","title":"Gleb Naumenko","content":""},{"uri":"/bitcoin-explained/taproot-activation-lockinontimeout/","title":"Taproot activation and LOT=true vs LOT=false","content":"BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki\nArguments for LOT=true and LOT=false (T1-T6 and F1-F6): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html\nAdditional argument for LOT=false (F7): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html\nAaron van Wirdum article on LOT=true or LOT=false: https://bitcoinmagazine.com/articles/lottrue-or-lotfalse-this-is-the-last-hurdle-before-taproot-activation\nIntro Aaron …"},{"uri":"/stephan-livera-podcast/2021-02-26-gleb-naumenko/","title":"The Label, Bitcoin Dev \u0026 Consulting","content":"podcast: https://stephanlivera.com/episode/255/\nStephan Livera:\nGleb welcome back to the show.\nGleb Naumenko:\nHi, it’s good to be back\nStephan Livera:\nGlad you’ve been pretty busy with how you’ve started up a new Bitcoin development venture and you’re up to a few different things. Tell us what you’ve been working on lately.\nGleb Naumenko:\nYeah, last year it was like, I think it’s been maybe half a year or more since I came last time, I’ve been mostly working on actually lightning and related …"},{"uri":"/speakers/anthony-towns/","title":"Anthony Towns","content":""},{"uri":"/sydney-bitcoin-meetup/2021-02-23-socratic-seminar/","title":"Sydney Socratic Seminar","content":"Topic: Agenda in Google Doc below\nVideo: No video posted online\nGoogle Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. 
If you were a participant and would like your comments to be attributed please get in touch.\nPoDLEs revisited (Lloyd Fournier) …"},{"uri":"/munich-meetup/","title":"Munich Meetup","content":" Stratum v2 Feb 21, 2021 Daniela Brozzoni Mining "},{"uri":"/munich-meetup/2021-02-21-daniela-brozzoni-stratumv2/","title":"Stratum v2","content":"Topic: Mining Basics and Stratum v2\nLocation: Bitcoin Munich\nMatt Corallo presentation on BetterHash: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/\nStratum v1 (BIP 310): https://github.com/bitcoin/bips/blob/master/bip-0310.mediawiki\nStratum v2: https://braiins.com/stratum-v2\nTranscript by: Michael Folkson\nIntro (Michael Ep) Hello and welcome to tonight’s Satoshi’s 21 seminar session hosted by the Bitcoin Munich meetup. We are always looking for good …"},{"uri":"/bitcoin-explained/explaining-bitcoin-addresses/","title":"Explaining Bitcoin Addresses","content":"Aaron van Wirdum: 00:01:45\nLive from Utrecht this is the Van Wirdum Sjorsnado. So the other day I wanted to send Bitcoin to someone, but I didn\u0026amp;rsquo;t.\nSjors Provoost: 00:01:52\nWhy? Shouldn\u0026amp;rsquo;t you hodl?\nAaron van Wirdum: 00:01:55\nI hodl all I can, but sometimes I need to eat, or I need to pay my rent, or I need to buy a new plant for my living room.\nSjors Provoost: 00:02:05\nYeah, let\u0026amp;rsquo;s do.\nAaron van Wirdum: 00:02:06\nSo the problem was, the person I wanted to send Bitcoin to, I …"},{"uri":"/tftc-podcast/","title":"TFTC Podcast","content":" Tales from the Crypt with Andrew Poelstra Jun 18, 2019 Andrew Poelstra Taproot Schnorr signatures Musig Miniscript UASFs, BIP 148, BIP 91 and Taproot Activation Feb 11, 2021 Matt Corallo Taproot "},{"uri":"/tftc-podcast/2021-02-11-matt-corallo-taproot-activation/","title":"UASFs, BIP 148, BIP 91 and Taproot Activation","content":"Topic: UASFs, BIP 148, BIP 91 and Taproot Activation\nLocation: Tales from the Crypt podcast\nIntro Marty Bent (MB): Sitting down with a man who needs no introduction on this podcast. I think you have been on four times already. I think this is number five Matt. You are worried about the future of Bitcoin. What the hell is going on? You reached out to me last week, you scaring the s*** out of me. Why are you worried?\nMatt Corallo (MC): First of all thanks for having me. I think the Bitcoin …"},{"uri":"/bitcoin-explained/replace-by-fee-rbf/","title":"Replace By Fee (RBF)","content":"Introduction Aaron van Wirdum: 00:01:33\nLive from Utrecht, this is the The Van Wirdum Sjorsnado. Sjors, I heard Bitcoin is broken.\nSjors Provoost: 00:01:40\nIt is. Yeah, it was absolutely terrible.\nAaron van Wirdum: 00:01:43\nA double spend happened.\nSjors Provoost: 00:01:44\nYep, ruined.\nAaron van Wirdum: 00:01:45\nAnd this is because - \u0026amp;ldquo;a fatal flaw in the Bitcoin protocol.\u0026amp;rdquo; That\u0026amp;rsquo;s how it was reported, I think, in Bloomberg?\nSjors Provoost: 00:01:54\nYeah, I couldn\u0026amp;rsquo;t find …"},{"uri":"/bitcoin-explained/compact-client-side-filtering/","title":"Compact Client Side Filtering (Neutrino)","content":"Introduction Aaron van Wirdum: 00:01:34\nLive from Utrecht, this is The Van Wirdum Sjorsnado.\nSjors Provoost: 00:01:37\nHello.\nAaron van Wirdum: 00:01:38\nVan Wirdum Sjorsnado. Did I say it right this time?\nSjors Provoost: 00:01:42\nI don\u0026amp;rsquo;t know. We\u0026amp;rsquo;re not going to check again. This is take number three. 
Now you\u0026amp;rsquo;re going to ask me whether I rioted and I\u0026amp;rsquo;m gonna say no.\nAaron van Wirdum: 00:01:51\nYou\u0026amp;rsquo;re like psychic.\nSjors Provoost: 00:01:53\nYes.\nAaron van Wirdum: …"},{"uri":"/bitcoin-explained/bitcoin-core-v0-21/","title":"Bitcoin Core 0.21.0","content":"Intro Aaron van Wirdum: 00:01:35\nLive from Utrecht, this is The Van Wirdum Sjorsnado. Episode 24, we\u0026amp;rsquo;re going to discuss Bitcoin Core 21.\nSjors Provoost: 00:01:46\nHooray. Well, 0.21. We\u0026amp;rsquo;re still in the age of the zero point releases.\nAaron van Wirdum: 00:01:51\nYes, which is ending now.\nSjors Provoost: 00:01:53\nProbably yes.\nAaron van Wirdum: 00:01:54\nThat\u0026amp;rsquo;s what I understand. The next one, the 22 will actually be Bitcoin Core 22.\nSjors Provoost: 00:01:59\nThat\u0026amp;rsquo;s what it says …"},{"uri":"/stephan-livera-podcast/2021-01-21-wiz-and-simon-mempool/","title":"Mempool-space – helping Bitcoin migrate to a multi-layer ecosystem","content":"podcast: https://stephanlivera.com/episode/245/\nStephan Livera:\nWiz and Simon. Welcome to the show.\nSimon:\nThank you.\nWiz:\nYeah, thanks for having us.\nStephan Livera:\nSo, Wiz I think my listeners are already familiar with you. But let’s hear from you Simon. Tell us about yourself.\nSimon:\nI grew up in Sweden. I worked there as a software developer and about four years ago I decided to quit my day job and pursue a more nomadic lifestyle. So for the past four years, I’ve been mostly like traveling …"},{"uri":"/speakers/simon/","title":"Simon","content":""},{"uri":"/speakers/wiz/","title":"Wiz","content":""},{"uri":"/speakers/jonas-schnelli/","title":"Jonas Schnelli","content":""},{"uri":"/stephan-livera-podcast/2021-01-14-jonas-schnelli-maintaining-bitcoin-core/","title":"Maintaining Bitcoin Core-Contributions, Consensus, Conflict","content":"podcast: https://stephanlivera.com/episode/242/\nStephan Livera:\nJonas. Welcome back to the show.\nJonas Schnelli:\nHey! Hi, Stephan. Thanks for having me.\nStephan Livera:\nJonas, it’s been a while since we spoke on the show and I was excited to get another opportunity to chat with you and learn a little bit more and obviously talk about this on behalf of some of the newer listeners who are coming in and they may not be familiar with how does Bitcoin Core work. So just for listeners who are a bit …"},{"uri":"/realworldcrypto/2021/2021-01-12-tim-ruffing-musig2/","title":"MuSig2: Simple Two-Round Schnorr Multi-Signatures","content":"MuSig2 paper: https://eprint.iacr.org/2020/1261.pdf\nIntroduction This is about MuSig2, simple two round Schnorr multisignatures. This is joint work with Jonas Nick and Yannick Seurin. Jonas will be available to answer questions as I will be of course.\nMulti-Signatures The idea that with multisignatures is that n signers can get together and produce a single signature on a single message. 
To be clear for this talk when I talk about multisignatures what I mean is n-of-n signatures and not the more …"},{"uri":"/realworldcrypto/","title":"Realworldcrypto","content":" Realworldcrypto 2018 Realworldcrypto 2021 "},{"uri":"/realworldcrypto/2021/","title":"Realworldcrypto 2021","content":" MuSig2: Simple Two-Round Schnorr Multi-Signatures Jan 12, 2021 Tim Ruffing Musig "},{"uri":"/tags/reproducible-builds/","title":"Reproducible builds","content":""},{"uri":"/bitcoin-explained/why-open-source-matters-for-bitcoin/","title":"Why Open Source Matters For Bitcoin","content":"Intro Aaron van Wirdum:\nLive from Utrecht, this is the Van Wirdum Sjorsnado.\nSjors Provoost:\nHello.\nAaron van Wirdum:\nThis episode, we\u0026amp;rsquo;re going to discuss open source?\nSjors Provoost:\nYes.\nAaron van Wirdum:\nI\u0026amp;rsquo;m just going to skip over the whole price thing. We\u0026amp;rsquo;re going to discuss open source and why it\u0026amp;rsquo;s useful, or free software and why it\u0026amp;rsquo;s useful. Are you on the free software train or on the open source train?\nSjors Provoost:\nI\u0026amp;rsquo;m on every train. I like …"},{"uri":"/chaincode-labs/chaincode-podcast/modularizing-the-bitcoin-consensus-engine/","title":"Modularizing the Bitcoin Consensus Engine","content":"AJ: Do you want to talk about isolating the consensus engine?\nCD: Sure. More recently I have dove into the codebase a little bit more. That started with looking at Matt’s async ProcessNewBlock work and playing around with that. Learning from that how do you make a change to the core engine of Bitcoin Core.\nMatt Corallo’s PR on async ProcessNewBlock https://github.com/bitcoin/bitcoin/pull/16175\nAJ: Can you talk about that PR a little bit and what it would do?\nCD: Basically right now when we …"},{"uri":"/bitcoin-explained/rsk-federated-sidechains-and-powpeg/","title":"RSK, Federated Sidechains And Powpeg","content":"Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is the Van Wirdum Sjorsnado. Hello! Sjors, I have another announcement to make.\nSjors Provoost : 00:00:13\nExciting, tell me.\nAaron van Wirdum: 00:00:14\nDid you know you can find the Van Wirdum Sjorsnado on its own RSS feed?\nSjors Provoost : 00:00:19\nI did, yes. And this is actually a new recording of that announcement this is not spliced in by the editor.\nAaron van Wirdum: 00:00:25\nThe thing is the existing Bitcoin magazine RSS feed also has the …"},{"uri":"/chaincode-labs/chaincode-podcast/2020-11-30-carl-dong-reproducible-builds/","title":"Reproducible Builds","content":"Carl Dong’s presentation at Breaking Bitcoin 2019 on “Bitcoin Build System Security”: https://diyhpl.us/wiki/transcripts/breaking-bitcoin/2019/bitcoin-build-system/\nIntro Adam Jonas (AJ): Welcome to the Chaincode podcast Carl.\nCarl Dong (CD): Hello.\nAJ: You know we’ve been doing this podcast for a while now. How come you haven’t been on the show yet?\nCD: We’ve been at home.\nAJ: We did a few episodes before that though.\nMurch (M): It is fine. 
We’ve got you now.\nAJ: We’ll try not to take it too …"},{"uri":"/sf-bitcoin-meetup/","title":"SF Bitcoin Meetup","content":" Advanced Bitcoin Scripting Apr 03, 2017 Andreas Antonopoulos Scripts addresses Bcoin Sep 28, 2016 Consensus enforcement BIP Taproot and BIP Tapscript Dec 16, 2019 Pieter Wuille Taproot Tapscript Bip150 Bip151 Sep 04, 2017 Jonas Schnelli P2 p Bitcoin core Bitcoin Core Apr 23, 2018 Jeremy Rubin Bitcoin core Exploring Lnd0.4 Apr 20, 2018 Olaoluwa Osuntokun, Conner Fromknecht Lnd Lightning How Lightning.network can offer a solution for Bitcoin\u0026amp;#39;s scalability problems May 26, 2015 Tadge Dryja, …"},{"uri":"/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20/","title":"Socratic Seminar 20","content":"SF Bitcoin Devs socratic seminar #20\n((Names anonymized.))\nXX01: For everyone new who has joined- this is an experiment for us. We\u0026amp;rsquo;ll see how this first online event goes. So far no major issues. Nice background, Uyluvolokutat. Should we let Harkenpost in?\nXX06: People around America are showing up since they don\u0026amp;rsquo;t have to show up in person.\nXX01: How do we block NY IPs?\nXX06: We already have people from Western Africa joining.\nXX05: Oh he\u0026amp;rsquo;s here? You can increase the tile …"},{"uri":"/greg-maxwell/2020-11-25-greg-maxwell-replacing-pgp/","title":"Replacing PGP with Bitcoin public key infrastructure","content":"Location: Reddit\nhttps://www.reddit.com/r/Bitcoin/comments/k0rnq8/pgp_is_replaceable_with_the_bitcoin_public_key/gdjv1dn?utm_source=share\u0026amp;amp;utm_medium=web2x\u0026amp;amp;context=3\nIs PGP replaceable with Bitcoin public key infrastructure? This is true in the same sense that PGP can also be replaced with some fancy functions on a school kids graphing calculator.\nYes you can construct some half-assed imitation of pgp using stuff from Bitcoin, but you probably shouldn\u0026amp;rsquo;t.\nIf all you really want is …"},{"uri":"/bitcoin-explained/erebus-attacks-and-how-to-stop-them-with-asmap/","title":"Erebus Attacks And How To Stop Them With ASMAP","content":"Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is The Van Wirdum Sjorsnado.\nSjors Provoost: 00:00:10\nHello!\nAaron van Wirdum: 00:00:10\nHey Sjors. What\u0026amp;rsquo;s up? We got a market cap all time high, did you celebrate?\nSjors Provoost: 00:00:14\nNo, I did not because it\u0026amp;rsquo;s just dilution.\nAaron van Wirdum: 00:00:16\nWhat do you mean?\nSjors Provoost: 00:00:17\nI mean somebody\u0026amp;rsquo;s making more Bitcoin. So it\u0026amp;rsquo;s fun that the market cap goes up, but it doesn\u0026amp;rsquo;t matter unless …"},{"uri":"/bitcoin-explained/bitcoin-eclipse-attacks-and-how-to-stop-them/","title":"Bitcoin Eclipse Attacks And How To Stop Them","content":"Introduction Sjors Provoost: 00:01:01\nWe\u0026amp;rsquo;re going to discuss a paper about eclipse attacks. Couldn\u0026amp;rsquo;t come up with a better pun. So apologies.\nAaron van Wirdum: 00:01:06\nThat\u0026amp;rsquo;s all you got from us. 
It\u0026amp;rsquo;s the paper Eclipse Attacks on Bitcoin\u0026amp;rsquo;s Peer-to-Peer Network by Ethan Heilman, Alison Kendler, Aviv Zohar, and Sharon Goldberg from Boston University and Hebrew University MSR Israel.\nSjors Provoost: 00:01:20\nThat\u0026amp;rsquo;s right and it was published in 2015.\nAaron van …"},{"uri":"/chaincode-labs/chaincode-podcast/2020-11-09-enterprise-walletsutxo-management/","title":"Enterprise Wallets/UTXO Management","content":"Mark Erhardt: 00:00:00\nJust to throw out a few numbers there, non-SegWit inputs cost almost 300 bytes, and native SegWit inputs cost slightly more than 100 bytes. There\u0026amp;rsquo;s almost a reduction by two-thirds in fees if you switch from non-SegWit to native SegWit.\nIntroduction Caralie Chrisco: 00:00:29\nHi, everyone, welcome to the Chaincode podcast. My name is Caralie.\nAdam Jonas: 00:00:32\nAnd it\u0026amp;rsquo;s Jonas.\nCaralie Chrisco: 00:00:33\nAnd we\u0026amp;rsquo;re back!\nAdam Jonas: 00:00:34\nWe\u0026amp;rsquo;re …"},{"uri":"/tags/payment-batching/","title":"Payment batching","content":""},{"uri":"/bitcoin-explained/open-timestamps-leveraging-bitcoin-s-security-for-all-data/","title":"Open Timestamps: Leveraging Bitcoin's Security For All Data","content":"Introduction Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is Van Wirdum Sjorsnado.\nSjors Provoost: 00:00:10\nHello!\nAaron van Wirdum: 00:00:11\nHey Sjors.\nSjors Provoost: 00:00:12\nWhat\u0026amp;rsquo;s up?\nAaron van Wirdum: 00:00:13\nSjors, today we are going to discuss at length in depth the American political situation.\nSjors Provoost: 00:00:19\nThat\u0026amp;rsquo;s right. We\u0026amp;rsquo;re going to explain everything and we\u0026amp;rsquo;re going to tell you who to vote for, even though this will be released after the …"},{"uri":"/tags/proof-systems/","title":"proof-systems","content":""},{"uri":"/greg-maxwell/2020-11-05-greg-maxwell-yubikey-security/","title":"Yubikey Security","content":" By this logic, a yubikey would also be a great targeting vector.\nThey would be, and if US intelligence services have not compromised yubis or at least have a perfect targeted substitution solutions for them then they should all be fired for gross incompetence and mismanagement of their funding.\nLikewise, if parties which things of significant value to secure who might be targeted by state level attackers are securing those things with just yubs instead of using yubis as a second factor in an …"},{"uri":"/greg-maxwell/2020-11-01-greg-maxwell-hardware-wallets-altcoins/","title":"Why do hardware wallets not support altcoins?","content":"They\u0026amp;rsquo;re an enormous distraction and hazard to software development. It\u0026amp;rsquo;s hard enough to correctly and safely write software to support one system. Every minute spent creating and testing the software for some alternative is a minute taken away from supporting Bitcoin.\nI can say first hand that my efforts to review hardware wallet code against possible vulnerabilities have been actively thwarted by hardware wallet codebases being crapped up with support for altcoins. It\u0026amp;rsquo;s easy …"},{"uri":"/bitcoin-explained/what-is-utreexo/","title":"What Is Utreexo?","content":"Introduction Aaron van Wirdum:\nAnd the proposal we\u0026amp;rsquo;re discussing this week is Utreexo.\nRuben Somsen:\nThat is correct.\nSjors Provoost:\nUtreexo, and the tree is for tree. The thing that grows in the forest.\nAaron van Wirdum:\nDid you know that was the pun, Ruben? 
I didn\u0026amp;rsquo;t realize\u0026amp;hellip;\nRuben Somsen:\nWell, I heard Tadge say that so I was aware of that, but there is a very specific reason why I was enthusiastic to talk about it. Well, one, I\u0026amp;rsquo;ve used it in one of the things …"},{"uri":"/stephan-livera-podcast/2020-10-27-jonas-nick-tim-ruffing-musig2/","title":"MuSig, MuSig-DN and MuSig2","content":"Tim Ruffing on Schnorr multisig at London Bitcoin Devs: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-17-tim-ruffing-schnorr-multisig/\nMuSig paper: https://eprint.iacr.org/2018/068.pdf\nMuSig blog post: https://blockstream.com/2019/02/18/en-musig-a-new-multisignature-standard/\nInsecure shortcuts in MuSig: https://medium.com/blockstream/insecure-shortcuts-in-musig-2ad0d38a97da\nMuSig-DN paper: https://eprint.iacr.org/2020/1057.pdf\nMuSig-DN blog post: …"},{"uri":"/greg-maxwell/2020-10-26-greg-maxwell-bitcoin-core-github/","title":"Bitcoin Core Github","content":"Location: Reddit\nhttps://www.reddit.com/r/Bitcoin/comments/jiat6s/can_github_censor_the_code_of_bitcoin_the/ga5k9ap?utm_source=share\u0026amp;amp;utm_medium=web2x\u0026amp;amp;context=3\nCan GitHub censor Bitcoin Core? The event isn\u0026amp;rsquo;t news to Bitcoin developers either, github has done this a number of times before\u0026amp;ndash; even taking Mozilla offline as a result of an obviously spurious DMCA complaint.\nEvery developer that has the repository cloned has the full history.\nNot just the developers, thousands of …"},{"uri":"/bitcoin-explained/sync-bitcoin-faster-assume-utxo/","title":"Sync Bitcoin Faster! Assume UTXO","content":"Introduction Aaron van Wirdum:\nSjors, this week we are going to create a carbon copy of the Chaincode Podcast. They had an episode with James O’Beirne on AssumeUTXO, and we are going to make an episode on AssumeUTXO.\nSjors Provoost:\nAnd we are going to follow roughly the same structure.\nAaron van Wirdum:\nWe\u0026amp;rsquo;re just going to create the same podcast, just with our voices this time. We\u0026amp;rsquo;re gonna do it step by step. First step is, Headers First. That\u0026amp;rsquo;s how they did it, so …"},{"uri":"/bitcoin-explained/bitcoin-core-v21-supports-tor-v3/","title":"Bitcoin Core 0.21 Supports Tor V3","content":"Introduction Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is the Van Wirdum Sjorsnado. Sjors, you pointed out to me that Bitcoin Core has an amazing new feature merged into its repository.\nSjors Provoost: 00:00:19\nAbsolutely, we have bigger onions now.\nAaron van Wirdum: 00:00:24\nRight, so I had basically no idea what it meant. You figured it out.\nSjors Provoost: 00:00:29\nI did.\nAaron van Wirdum: 00:00:34\nYeah so let\u0026amp;rsquo;s start at the beginning. It\u0026amp;rsquo;s about Tor.\nSjors Provoost: …"},{"uri":"/stephan-livera-podcast/2020-10-15-nadav-kohen-bitcoin-dlcs/","title":"What You Should Know About Bitcoin DLCs","content":"podcast: https://stephanlivera.com/episode/219/\nStephan Livera:\nNadav welcome to the show.\nNadav Kohen:\nThanks for having me.\nStephan Livera:\nNadav I’ve been following your work for a little while. Obviously I really I like reading your blog posts over at Suredbits, and I had the chance to meet you earlier this year in London. Can you tell us a little bit about yourself and what’s your role with Suredbits?\nNadav Kohen:\nYeah. so I am a software engineer at Suredbits. 
I’ve been working there since …"},{"uri":"/speakers/barnab%C3%A1s-b%C3%A1gyi/","title":"Barnabás Bágyi","content":""},{"uri":"/cppcon/","title":"CPPcon","content":" CPPcon 2017 CPPcon 2020 "},{"uri":"/cppcon/2020/","title":"CPPcon 2020","content":" Fuzzing Class Interfaces Oct 09, 2020 Barnabás Bágyi "},{"uri":"/cppcon/2020/2020-10-09-barnabas-bagyi-fuzzing-class-interfaces/","title":"Fuzzing Class Interfaces","content":"Fuzzing Class Interfaces for Generating and Running Tests with libFuzzer Location: CppCon 2020\nSlides: https://github.com/CppCon/CppCon2020/blob/main/Presentations/fuzzing_class_interfaces_for_generating_and_running_tests_with_libfuzzer/fuzzing_class_interfaces_for_generating_and_running_tests_with_libfuzzer__barnab%C3%A1s_b%C3%A1gyi__cppcon_2020.pdf\nLibFuzzer: https://llvm.org/docs/LibFuzzer.html\nLibFuzzer tutorial: https://github.com/google/fuzzing/blob/master/tutorial/libFuzzerTutorial.md …"},{"uri":"/bitcoin-explained/accounts-for-bitcoin-easypaysy/","title":"Accounts for Bitcoin, Easypaysy!","content":"Intro Aaron: 00:01:10\nSjors how do you like reusing addresses?\nSjors: 00:01:13\nI love reusing addresses, it\u0026amp;rsquo;s amazing.\nAaron: 00:01:16\nIt\u0026amp;rsquo;s so convenient isn\u0026amp;rsquo;t it\nSjors: 00:01:18\nIt just makes you feel like you know your exchange if you\u0026amp;rsquo;re sending them money they know everything about you and they like that right?\nAaron: 00:01:26\nIt\u0026amp;rsquo;s so convenient, it\u0026amp;rsquo;s so easy, nothing to complain about here.\nSjors: 00:01:30\nAbsolutely.\nAaron: 00:01:31\nSo that was our …"},{"uri":"/stephan-livera-podcast/2020-10-02-gloria-zhao-bitcoin-core/","title":"Learning Bitcoin Core Contribution \u0026 Hosting PR Review Club","content":"podcast: https://stephanlivera.com/episode/216/\nStephan Livera:\nGloria. Welcome to the show.\nGloria Zhao:\nThank you so much for having me.\nStephan Livera:\nSo Gloria I’ve heard a few things about you and I was looking up what you’ve been doing. You’ve been doing some really interesting things. Can we hear a little bit about you and how you got into Bitcoin?\nGloria Zhao:\nYeah, well, I didn’t get into Bitcoin by choice. Actually it was by accident. I’m a college student at Berkeley right now, and I …"},{"uri":"/stephan-livera-podcast/2020-09-28-michael-flaxman-security-guide/","title":"10x your Bitcoin Security with Multisig","content":"Previous SLP episode with Michael Flaxman: https://diyhpl.us/wiki/transcripts/stephan-livera-podcast/2019-08-08-michael-flaxman/\n10x Security Bitcoin Guide: https://btcguide.github.io/\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\n10x Security Bitcoin Guide Stephan Livera (SL): Michael, welcome back to the show.\nMichael Flaxman (MF): It’s great to be here. I’m a big fan of the podcast so I love to be on it.\nSL: Michael, your first appearance on the show was a very, very …"},{"uri":"/speakers/michael-flaxman/","title":"Michael Flaxman","content":""},{"uri":"/bitcoin-explained/explaining-signet/","title":"Explaining Signet","content":"Intro Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is the Van Wirdum Sjorsnedo. Hello! Sjors, welcome.\nSjors Provoost: 00:00:12\nThank you. It\u0026amp;rsquo;s good to be back. Well, I never left, but\u0026amp;hellip;\nAaron van Wirdum: 00:00:15\nYeah, well, we\u0026amp;rsquo;re at your home now, so you never left, I think. 
You probably literally never left because of corona\nSjors Provoost: 00:00:21\nExactly we\u0026amp;rsquo;re at my secret location\nAaron van Wirdum: 00:00:25\nHow are you enjoying the second wave?\nSjors …"},{"uri":"/stephan-livera-podcast/2020-09-15-steve-lee-of-square-crypto/","title":"Bitcoin Grants, Design \u0026 Crypto Patents (COPA)","content":"podcast: https://stephanlivera.com/episode/211/\nStephan Livera:\nSteve. Welcome back to the show.\nSteve Lee:\nThank you so much. Glad the glad to be here,\nStephan Livera:\nSteve. I see you guys have been very busy over at Square Crypto. Since we last spoke, you’ve been doing a lot of work in different, in different arenas as well. You’ve got the grants going, design and this crypto patent stuff. So tell us a little bit about you know, what you’ve been doing over the last few months.\nSteve Lee: …"},{"uri":"/speakers/ben-kaufman/","title":"Ben Kaufman","content":""},{"uri":"/stephan-livera-podcast/2020-08-28-stepan-snigirev-and-ben-kaufman/","title":"Specter Desktop Bitcoin Multi Sig","content":"podcast: https://stephanlivera.com/episode/205/\nStephan Livera:\nStepan and Ben, welcome to the show.\nStepan:\nThank you. Thank you, Stephan. It’s very nice to be here again.\nBen:\nThank you for inviting me.\nStephan Livera:\nStepan I know you’ve been on the show twice before, but perhaps just for any listeners who are a little bit newer, can you tell us a little bit about yourself?\nStepan:\nWe are doing well originally I came from quantum physics into Bitcoin and started working on hardware stuff …"},{"uri":"/greg-maxwell/2020-08-27-greg-maxwell-checkmultisig-bug/","title":"Checkmultisig Bug","content":"What is stopping the OP_CHECKMULTISIG extra pop bug from being fixed?\nLocation: Bitcointalk\nhttps://bitcointalk.org/index.php?topic=5271566.msg55079521#msg55079521\nWhat is stopping the OP_CHECKMULTISIG extra pop bug from being fixed? I think it is probably wrong to describe it as a bug. I think it was intended to indicate which signatures were present to fix the otherwise terrible performance of CHECKMULTISIG.\nRegardless, there is no real point to fixing it: Any \u0026amp;lsquo;fix\u0026amp;rsquo; would require …"},{"uri":"/sydney-bitcoin-meetup/2020-08-25-socratic-seminar/","title":"Socratic Seminar","content":"Name: Socratic Seminar\nTopic: Agenda in Google Doc below\nLocation: Bitcoin Sydney (online)\nVideo: No video posted online\nLast month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar/\nGoogle Doc of the resources discussed: https://docs.google.com/document/d/1rJxVznWaFHKe88s5GyrxOW-RFGTeD_GKdFzHNvhrq-c/\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their …"},{"uri":"/bitcoin-design/misc/2020-08-20-bitcoin-core-gui/","title":"Bitcoin Core GUI introductory meeting","content":"Topic: Agenda link posted below\nLocation: Bitcoin Design (online)\nVideo: No video posted online\nAgenda: https://github.com/BitcoinDesign/Meta/issues/8\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. 
If you were a participant and would like your comments to be attributed please get in touch.\nBitcoin Core PR review There seems to be a lot to learn about the …"},{"uri":"/bitcoin-design/misc/","title":"Miscellaneous","content":" Bitcoin Core GUI introductory meeting Aug 20, 2020 Bitcoin core "},{"uri":"/london-bitcoin-devs/2020-08-19-socratic-seminar-signet/","title":"Socratic Seminar - Signet","content":"Pastebin of the resources discussed: https://pastebin.com/rAcXX9Tn\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.\nIntro Michael Folkson (MF): This is a Socratic Seminar organized by London BitDevs. We have a few in the past. We had a couple on BIP-Schnorr and BIP-Taproot …"},{"uri":"/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics/","title":"ANYPREVOUT, MPP, Mitigating Lightning Attacks","content":"Transcript completed by: Stephan Livera Edited by: Michael Folkson\nLatest ANYPREVOUT update ANYPREVOUT BIP (BIP 118): https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki\nStephan Livera (SL): Christian welcome back to the show.\nChristian Decker (CD): Hey Stephan, thanks for having me.\nSL: I wanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about: ANYPREVOUT, MPP, Lightning attacks, …"},{"uri":"/stephan-livera-podcast/2020-08-13-christian-decker/","title":"ANYPREVOUT, MPP, Mitigating LN Attacks","content":"podcast: https://stephanlivera.com/episode/200/\nStephan Livera:\nChristian welcome back to the show.\nChristian Decker:\nHey, Stephan, thanks for having me\nStephan Livera:\nWanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about ANYPREVOUT, MPP, lightning attacks. What’s the latest with lightning network. But yeah, let’s start with a little bit around ANYPREVOUT. So I see that yourself and AJ towns just …"},{"uri":"/tags/multipath-payments/","title":"Multipath payments","content":""},{"uri":"/tags/trampoline-payments/","title":"Trampoline payments","content":""},{"uri":"/chicago-bitdevs/","title":"Chicago Bitdevs","content":" Socratic Seminar Aug 12, 2020 P2p Research Threshold signature Sighash anyprevout Altcoins Socratic Seminar 10 Jul 08, 2020 "},{"uri":"/chicago-bitdevs/2020-08-12-socratic-seminar/","title":"Socratic Seminar","content":"Topic: Agenda below\nVideo: No video posted online\nBitDevs Solo Socratic 4 agenda: https://bitdevs.org/2020-07-31-solo-socratic-4\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.\nBitcoin Core P2P IRC Meetings https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-IRC-meetings …"},{"uri":"/stephan-livera-podcast/2020-08-09-thomas-voegtlin-ghost43-electrum/","title":"Electrum","content":"Topic: Electrum Wallet 4\nLocation: Stephan Livera Podcast\nElectrum GitHub: https://github.com/spesmilo/electrum\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\nIntro Stephan Livera (SL): Thomas and Ghost43. 
Welcome back to the show.\nThomas Voegtlin (TV): Hi, thank you.\nGhost43 (G43): Hey Stephan, Thanks for having me.\nSL: Thomas, my listeners have already heard you on a prior episode. Ghost, did you want to tell us about yourself, how you got into Bitcoin and how you got into …"},{"uri":"/stephan-livera-podcast/2020-08-09-thomas-voegtlin-and-ghost43/","title":"Electrum Wallet","content":"podcast: https://stephanlivera.com/episode/199/\nStephan Livera:\nThomas and Ghost43. Welcome back to the show.\nThomas V:\nHi, thank you.\nGhost43:\nHey Stephan, Thanks for having me.\nStephan Livera:\nSo welcome to the show guys. Now, Thomas, I know my listeners have already heard you on the prior episode, but Ghost, did you want to tell us a little bit about yourself, how you got into Bitcoin and how you got into developing with Electrum Wallet?\nGhost43:\nYeah, sure. I mean, I got into Bitcoin a few …"},{"uri":"/speakers/ghost43/","title":"Ghost43","content":""},{"uri":"/speakers/thomas-voegtlin/","title":"Thomas Voegtlin","content":""},{"uri":"/speakers/eric-lombrozo/","title":"Eric Lombrozo","content":""},{"uri":"/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/","title":"How to Activate a New Soft Fork","content":"Location: Bitcoin Magazine (online)\nAaron van Wirdum in Bitcoin Magazine on BIP 8, BIP 9 or Modern Soft Fork Activation: https://bitcoinmagazine.com/articles/bip-8-bip-9-or-modern-soft-fork-activation-how-bitcoin-could-upgrade-next\nDavid Harding on Taproot activation proposals: https://gist.github.com/harding/dda66f5fd00611c0890bdfa70e28152d\nIntro Aaron van Wirdum (AvW): Eric, Luke welcome. Happy Bitcoin Independence Day. How are you doing?\nEric Lombrozo (EL): We are doing great. How are you …"},{"uri":"/bitcoin-explained/what-is-miniscript/","title":"What is Miniscript","content":"Aaron van Wirdum:\nMiniscript. It\u0026amp;rsquo;s a project, I guess that\u0026amp;rsquo;s how I would describe it. It\u0026amp;rsquo;s a project by a couple of Blockstream engineers, even though it\u0026amp;rsquo;s not an official Blockstream project, but it\u0026amp;rsquo;s Pieter Wuille, Andrew Poelstra. And then, there was a third name that was as well-known, Sanket Kanjalkar.\nSjors Provoost:\nYeah, I believe he was an intern at Blockstream at the time.\nAaron van Wirdum:\nHe may have been an intern, yes. So they developed this idea …"},{"uri":"/stephan-livera-podcast/2020-07-26-nix-bitcoin/","title":"nix-bitcoin: A Security Focused Bitcoin Node","content":"nix-bitcoin on GitHub: https://github.com/fort-nix/nix-bitcoin\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\nIntro Stephan Livera (SL): nixbitcoindev welcome to the show.\nnixbitcoindev (NBD): Hello. Thank you for having me.\nSL: Obviously as you’re under a pseudonym don’t dox anything about yourself but can you just tell us what you’re interested in about Bitcoin and is it the computer science aspects or what aspects of it interest you?\nNBD: I came into Bitcoin pretty much …"},{"uri":"/speakers/nixbitcoindev/","title":"nixbitcoindev","content":""},{"uri":"/bitcoin-explained/breaking-down-taproot-activation-options/","title":"Breaking Down Taproot Activation Options","content":"Aaron van Wirdum:\nWe\u0026amp;rsquo;re going to discuss Taproot activation, or more generally soft fork activation.\nSjors Provoost:\nYeah.\nAaron van Wirdum:\nThis has become a topic again in the sort of Bitcoin debate community public discourse. How do we activate soft forks? 
Because Taproot is getting to the point where it sort of ready to be deployed almost I think, so now the next question is, \u0026amp;ldquo;Okay, how are we actually going to activate this?\u0026amp;rdquo; This has been an issue in the past with several …"},{"uri":"/speakers/elichai-turkel/","title":"Elichai Turkel","content":""},{"uri":"/tags/mast/","title":"MAST","content":""},{"uri":"/speakers/openoms/","title":"Openoms","content":""},{"uri":"/stephan-livera-podcast/2020-07-21-rootzoll-and-openoms-raspiblitz/","title":"RaspiBlitz","content":"podcast: https://stephanlivera.com/episode/194/\nStephan Livera:\nHi, everyone. Welcome to the Stephan Livera podcast, a show about Bitcoin and Austrian economics today. My guests are Rootzoll and Openoms of the RaspiBlitz project. And if you’re interested in setting up your own Bitcoin node and running lightning and using Coinjoins with JoinMarket and so on, this is a great project to check out. So I’m really looking forward to my discussion with the guys today. Alright. So I’m just bringing in …"},{"uri":"/speakers/rootzoll/","title":"Rootzoll","content":""},{"uri":"/speakers/russell-oconnor/","title":"Russell O’Connor","content":""},{"uri":"/sydney-bitcoin-meetup/2020-07-21-socratic-seminar/","title":"Socratic Seminar","content":"Name: Socratic Seminar\nTopic: Agenda in Google Doc below\nLocation: Bitcoin Sydney (online)\nVideo: No video posted online\nLast month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar/\nGoogle Doc of the resources discussed: https://docs.google.com/document/d/1Aw_llsP8xSipp7l6JqjSpaqw5qN1vXRqhOyeulqmXcg/\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their …"},{"uri":"/london-bitcoin-devs/2020-07-21-socratic-seminar-bip-taproot/","title":"Socratic Seminar - BIP Taproot (BIP 341)","content":"Pastebin of the resources discussed: https://pastebin.com/vsT3DNqW\nTranscript of Socratic Seminar on BIP-Schnorr: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr/\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.\nIntroductions …"},{"uri":"/greg-maxwell/2020-07-20-greg-maxwell-taproot-pace/","title":"Is Taproot development moving too fast or too slow?","content":"Taproot has been discussed for 2.5 years already and by the time it would activate it will certainly at this point be over three years.\nThe bulk of the Taproot proposal, other than Taproot itself and specific encoding details, is significantly older too. (Enough that earlier versions of our proposals have been copied and activated in other cryptocurrencies already)\nTaproot\u0026amp;rsquo;s implementation is also extremely simple, and will make common operations in simple wallets simpler.\nTaproot\u0026amp;rsquo;s …"},{"uri":"/vr-bitcoin/2020-07-11-jeremy-rubin-sapio-101/","title":"Sapio: Stateful Smart Contracts for Bitcoin with OP_CTV","content":"Slides: https://docs.google.com/presentation/d/1X4AGNXJ5yCeHRrf5sa9DarWfDyEkm6fFUlrcIRQtUw4/\nutxos.org website: https://utxos.org/\nIntro (Udi Wertheimer) Today we are joined by Jeremy Rubin who is a Bitcoin Core contributor and the co-founder of the MIT Bitcoin Project. He is involved in a tonne of other cool stuff. 
He is also championing BIP 119 which is OP_CHECKTEMPLATEVERIFY. He is going to tell us about that and how it allows for new types of smart contracts.\nIntro Hey. Thanks for coming on …"},{"uri":"/vr-bitcoin/","title":"VR Bitcoin","content":" Laolu, Joost, Oliver - Lnd0.10 Apr 18, 2020 Olaoluwa Osuntokun, Joost Jager, Oliver Gugger Lnd Oliver Gugger, LSAT May 16, 2020 Oliver Gugger Sapio: Stateful Smart Contracts for Bitcoin with OP_CTV Jul 11, 2020 Jeremy Rubin Op checktemplateverify "},{"uri":"/misc/2020-07-10-what-am-i-working-on/","title":"What am I working on?","content":"I\u0026amp;rsquo;ve been working on essentially rewriting the Bitcoin Core wallet, one piece at a time. Now when I say \u0026amp;ldquo;the wallet\u0026amp;rdquo;, a lot of people think of the GUI. That\u0026amp;rsquo;s not what I\u0026amp;rsquo;ve been working on. Rather I\u0026amp;rsquo;ve been changing the internals; how keys are managed, how transactions are tracked, how inputs are selected for spending. All of those under the hood things. At some point, I will get around to changing the GUI. And in general, my focus has been on improving the …"},{"uri":"/chicago-bitdevs/2020-07-08-socratic-seminar/","title":"Socratic Seminar 10","content":"Location: Chicago BitDevs (online)\nVideo: No video posted online\nReddit link of the resources discussed: https://old.reddit.com/r/chibitdevs/\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.\nTainting, CoinJoin, PayJoin, CoinSwap Bitcoin dev mailing list post (Nopara) …"},{"uri":"/greg-maxwell/2020-07-05-greg-maxwell-useful-proof-of-work/","title":"Useful Proof Of Work","content":"Why can’t hash power be used for something useful?\nLocation: Reddit\nhttps://www.reddit.com/r/Bitcoin/comments/hlu2ah/why_cant_hash_power_be_used_for_something_useful/fx1axlt?utm_source=share\u0026amp;amp;utm_medium=web2x\u0026amp;amp;context=3\nWhy can’t hash power be used for something useful? There is a general game theory reason why this doesn\u0026amp;rsquo;t work out:\nImagine that you are considering attempting to reorder the chain to undo a transaction. You could decide to not attempt the attack in which case your …"},{"uri":"/stephan-livera-podcast/2020-06-30-john-cantrell-bruteforcing-bitcoin-seeds/","title":"Bruteforcing Bitcoin Seeds","content":"podcast: https://stephanlivera.com/episode/187/\nStephan Livera:\nI’m going to bring in my guest, John is a developer and he’s also known for working on the juggernaut project and he’s got this fantastic article that he wrote recently that I wanted to get him on and discuss. So John, welcome to the show.\nJohn Cantrell:\nIt’s great to be here.\nStephan Livera:\nSo John, tell us a little bit about yourself and your background as a developer.\nJohn Cantrell:\nSure. 
What was some of your work with LedgerX, and then what’s some of …"},{"uri":"/baltic-honeybadger/","title":"Baltic Honeybadger","content":" Baltic Honeybadger 2018 Baltic Honeybadger 2019 "},{"uri":"/baltic-honeybadger/2019/","title":"Baltic Honeybadger 2019","content":" Coldcard Mk3 - Security in Depth Sep 14, 2019 Rodolfo Novak Security Hardware wallet "},{"uri":"/baltic-honeybadger/2019/2019-09-14-rodolfo-novak-coldcard-mk3/","title":"Coldcard Mk3 - Security in Depth","content":"Intro My name is Rodolfo, I have been around Bitcoin for a little while. We make hardware. Today I wanted to get a little bit into how do you make a hardware wallet secure in a little bit more layman’s terms. Go through the process of getting that done.\nWhat were the options When I closed my last company and decided to find a place to store my coins I couldn’t really find a wallet that satisfied two things that I needed. Which was physical security and open source. There are two wallets on the …"},{"uri":"/scalingbitcoin/tel-aviv-2019/anonymous-atomic-locks/","title":"A2L: Anonymous Atomic Locks for Scalability and Interoperability in Payment Channel Hubs","content":"paper: https://eprint.iacr.org/2019/589.pdf\nhttps://github.com/etairi/A2L\nhttps://twitter.com/kanzure/status/1172116189742546945\nIntroduction I am going ot talk about anonymous atomic locks for scalability and interoperability in payment-channel hubs. This is joint work with my colleagues.\nScalability I will try to keep this section as short as possible. This talk is also about scalability in bitcoin. You\u0026amp;rsquo;re probably aware of the scalability issues. There is decentralized data structure …"},{"uri":"/scalingbitcoin/tel-aviv-2019/atomic-multi-channel-updates/","title":"Atomic Multi-Channel Updates with Constant Collateral in Bitcoin-Compatible Payment-Channel Networks","content":"paper: https://eprint.iacr.org/2019/583\nhttps://twitter.com/kanzure/status/1172102203995283456\nIntroduction Thank you for coming to my talk after lunch. This talk today is about atomic multi-channel updates with constant collateral in bitcoin-compatible payment-channel networks. It\u0026amp;rsquo;s a long title, but I hope you will understand. 
This is collaborative work with my colleagues.\nScalability We\u0026amp;rsquo;re at Scaling Bitcoin so you\u0026amp;rsquo;re probably not suprised that I think bitcoin has scaling …"},{"uri":"/speakers/georgios-konstantopoulos/","title":"Georgios Konstantopoulos","content":""},{"uri":"/speakers/kanta-matsuura/","title":"Kanta Matsuura","content":""},{"uri":"/speakers/pedro-moreno-sanchez/","title":"Pedro Moreno-Sanchez","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/plasma-cash/","title":"Plasma Cash: Towards more efficient Plasma Constructions","content":"Non-custodial sidechains for bitcoin utilizing plasma cash and covenants\nhttps://twitter.com/kanzure/status/1172108023705284609\npaper: https://github.com/loomnetwork/plasma-paper/blob/master/plasma_cash.pdf\nslides: https://gakonst.com/scalingbitcoin2019.pdf\nIntroduction We have known how to do these things for at least a year, but the question is how can we find the minimum changes that we can figure out to do in bitcoin if any on how to explore the layer 2 space in a better sense rather than …"},{"uri":"/scalingbitcoin/tel-aviv-2019/proof-of-verification-for-proof-of-work/","title":"Proof-of-Verification for Proof-of-Work: Miners Must Verify the Signatures on Bitcoin Transactions","content":"https://twitter.com/kanzure/status/1172142603007143936\nextended abstract: http://kmlab.iis.u-tokyo.ac.jp/papers/scaling19-matsuura-final.pdf\nHistory lesson In the 90s, we had some timestamping schemes where things are aggregated into one hash and then it would be publicized later in the newspaper. This scheme was commercialized, in fact. However, the resolution of the trust anchor is very limited. It\u0026amp;rsquo;s one day or half-day at best. When we want to verify, we need to visit the online-offline …"},{"uri":"/scalingbitcoin/tel-aviv-2019/payment-channel-recovery-with-seeds/","title":"Recovering Payment Channel Midstates Using only The User's Seed","content":"https://twitter.com/kanzure/status/1172129077765050368\nIntroduction Cool. This addresses the lightning network which is a scaling technique for the bitcoin network. A big challenge with the lightning network is that you have these midstates that you need to keep. If you lose them, you\u0026amp;rsquo;re out of luck with your counterparty. We had a lot of talks this morning about how to recover midstates but a lot of them used watchtowers. However, this proposal does not require a watchtower and you can …"},{"uri":"/scalingbitcoin/","title":"Scaling Bitcoin Conference","content":" Hong Kong (2015) Milan (2016) Montreal (2015) Stanford (2017) Tel Aviv (2019) Tokyo (2018) "},{"uri":"/scalingbitcoin/tel-aviv-2019/survey-of-progress-in-zero-knowledge-proofs-towards-trustless-snarks/","title":"A Survey of Progress in Succinct Zero Knowledge Proofs: Towards Trustless SNARKs","content":"https://twitter.com/kanzure/status/1171683484382957568\nIntroduction I am going to be giving a survey of recent progress into succinct zero-knowledge proofs. I\u0026amp;rsquo;ll survey recent developments, but I\u0026amp;rsquo;m going to start with what zero-knowledge proofs are. This is towards SNARKs without trusted setup. I want to help you get up to date on what\u0026amp;rsquo;s happening on with SNARKs. 
There\u0026amp;rsquo;s an explosion of manuscripts and it\u0026amp;rsquo;s hard to keep track.\nOne theme is the emergence of …"},{"uri":"/scalingbitcoin/tel-aviv-2019/private-information-retrieval/","title":"Applying Private Information Retrieval to Lightweight Bitcoin Clients","content":"SPV overview I have to thank the prior speaker. This is basically the same, except this time we\u0026amp;rsquo;re not using SGX. We\u0026amp;rsquo;re looking at bitcoin lightweight clients. You have a lite client with not much space not much computing power, can\u0026amp;rsquo;t store the bitcoin blockchain. All it knows about is the header information. We assume it has the blockheader history. In the header, we can find the merkle tree, and given the merkle tree we can make a merkle proof that a given transaction is …"},{"uri":"/speakers/ben-fisch/","title":"Ben Fisch","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/bip-securethebag/","title":"BIP: OP_SECURETHEBAG","content":"https://twitter.com/kanzure/status/1171750965478854656\nIntroduction Thank you for the introduction. Today I am going to talk to you about OP_SECURETHEBAG. It\u0026amp;rsquo;s research that I have been working on for the last couple years. I think it\u0026amp;rsquo;s very exciting and hopefully it will have a big compact. I have a lot of slides.\nWhy are we here? We are here for scaling bitcoin. The goal we have is to scale bitcoin. What is scaling, though? We have talks on networking, privacy, talks on all sorts …"},{"uri":"/tags/contract-protocols/","title":"Contract Protocols","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/bitml/","title":"Developing secure Bitcoin contracts using BitML","content":"paper: https://arxiv.org/abs/1905.07639\nhttps://twitter.com/kanzure/status/1171695588116746240\nThis is shared work with my collaborators and coauthors.\nSmart contracts I am sure everyone here knows what a smart contract is. It\u0026amp;rsquo;s a program that can move cryptoassets. It\u0026amp;rsquo;s executed in a decentralized environment. Generally speaking, we can say there\u0026amp;rsquo;s two classes of smart contracts. A smart contract is a program. While in blockchain like bitcoin, smart contracts are cryptographic …"},{"uri":"/tags/lightweight-client/","title":"Lightweight Client Support","content":""},{"uri":"/speakers/oleg-andreev/","title":"Oleg Andreev","content":""},{"uri":"/speakers/stefano-lande/","title":"Stefano Lande","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/zkvm/","title":"ZkVM: zero-knowledge virtual machine for fast confidential smart contracts","content":"https://medium.com/stellar-developers-blog/zkvm-a-new-design-for-fast-confidential-smart-contracts-d1122890d9ae and https://twitter.com/oleganza/status/1126612382728372224\nhttps://twitter.com/kanzure/status/1171711553512583169\nIntroduction Okay, welcome. What is zkvm? It\u0026amp;rsquo;s a multi-asset blockchain architecture that combines smart contracts and confidentiality features. It\u0026amp;rsquo;s written in pure rust from top to bottom.\nhttps://github.com/stellar/slingshot\nAgenda I\u0026amp;rsquo;ll explain the …"},{"uri":"/edgedevplusplus/2019/lightning-network-layer-by-layer/","title":"A walk through the layers of Lightning","content":"Introduction Good afternoon everyone. My name is Carla. I\u0026amp;rsquo;m one of the Chaincode residents from the summer residency that they ran in New York this year and this afternoon I\u0026amp;rsquo;m going to be talking about Lightning. 
In this talk, I\u0026amp;rsquo;m gonna be walking you through the protocol layer by layer and having a look at the different components that make up the Lightning Network.\nI\u0026amp;rsquo;m sure a lot of you are pretty familiar with what the Lightning Network is. It is an off-chain scaling …"},{"uri":"/edgedevplusplus/2019/bitcoin-core-functional-test-framework/","title":"Bitcoin Core Functional Test Framework","content":"Slides: https://telaviv2019.bitcoinedge.org/files/test-framework-in-bitcoin-core.pdf\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nhttps://twitter.com/kanzure/status/1171357556519952385\nIntroduction I am pretty sure you can tell but I am not James (Chiang). I am taking over the functional testing framework talk from James. He has already given several great talks. I took over this talk at very short notice from James. I’d like to give a hands on talk.\nContent This is a brief …"},{"uri":"/edgedevplusplus/","title":"Bitcoin Edge Dev++","content":" Bitcoin Edge Dev\u0026amp;#43;\u0026amp;#43; 2017 Bitcoin Edge Dev\u0026amp;#43;\u0026amp;#43; 2018 Bitcoin Edge Dev\u0026amp;#43;\u0026amp;#43; 2019 "},{"uri":"/edgedevplusplus/2019/","title":"Bitcoin Edge Dev++ 2019","content":"https://telaviv2019.bitcoinedge.org/\nA walk through the layers of Lightning Sep 10, 2019 Carla Kirk-Cohen Lightning Acumulator Based Cryptography \u0026amp;amp; UTreexo Sep 09, 2019 Tadge Dryja Proof systems Utreexo Bitcoin Core Functional Test Framework Sep 10, 2019 Fabian Jahr Bitcoin core Developer tools Bitcoin Data Structures Sep 09, 2019 Jimmy Song Blockchain Design Patterns: Layers and scaling approaches Sep 10, 2019 Andrew Poelstra, David Vorick Taproot Scalability Build a Taproot Sep 09, 2019 …"},{"uri":"/edgedevplusplus/2019/blockchain-design-patterns/","title":"Blockchain Design Patterns: Layers and scaling approaches","content":"https://twitter.com/kanzure/status/1171400374336536576\nIntroduction Alright. Are we ready to get going? Thumbs up? Alright. Cool. I am Andrew and this is David. We\u0026amp;rsquo;re here to talk about blockchain design patterns: layers and scaling approaches. This will be a tour of a bunch of different scaling approaches in bitcoin in particular but it\u0026amp;rsquo;s probably applicable to other blockchains out there. Our talk is in 3 parts. We\u0026amp;rsquo;ll talk about some existing scaling tech and related things …"},{"uri":"/speakers/carla-kirk-cohen/","title":"Carla Kirk-Cohen","content":""},{"uri":"/edgedevplusplus/2019/bosminer/","title":"Challenges of developing bOSminer from scratch in Rust","content":"https://braiins-os.org/\nhttps://twitter.com/kanzure/status/1171331418716278785\nnotes from slides: https://docs.google.com/document/d/1ETKx8qfml2GOn_CBXhe9IZzjSv9VnXLGYfQb3nD3N4w/edit?usp=sharing\nIntroduction Good morning everyone. My task is to talk about the challenges we faced while we were implementing a replacement for the cgminer software. We\u0026amp;rsquo;re doing it in rust. Essentially, I would like to cover a little bit of the history and to give some credit to ck for his hard work.\ncgminer …"},{"uri":"/edgedevplusplus/2019/debugging-bitcoin/","title":"Debugging Bitcoin","content":"Slides: https://telaviv2019.bitcoinedge.org/files/debugging-tools-for-bitcoin-core.pdf\nDebugging Bitcoin Core: https://github.com/fjahr/debugging_bitcoin\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nhttps://twitter.com/kanzure/status/1171024515490562048\nIntroduction I am going to talk about debugging Bitcoin. 
Of course if you want to contribute to Bitcoin there are a lot of conceptual things that you have to understand in order to do that. Most of the talks here today and …"},{"uri":"/edgedevplusplus/2019/hardware-wallet-design-best-practices/","title":"Hardware Wallet Design Best Practices","content":"https://twitter.com/kanzure/status/1171322716303036417\nIntroduction We have 30 minutes. This was a talk by me and Jimmy but Jimmy talked for like 5 hours yesterday so I\u0026amp;rsquo;ll be talking today.\nImagine you want to start working in bitcoin development. There\u0026amp;rsquo;s a chance you will end up using a hardware wallet. In principle, it\u0026amp;rsquo;s not any different from other software development. There\u0026amp;rsquo;s still programming involved, like for firmware. Wallets have certain features and nuances …"},{"uri":"/speakers/james-hilliard/","title":"James Hilliard","content":""},{"uri":"/edgedevplusplus/2019/libbitcoin/","title":"Libbitcoin: A practical introduction","content":"or: libbitcoin\u0026amp;rsquo;s bx tool: Constructing a raw transaction\nhttps://twitter.com/kanzure/status/1171352496247365633\nhttps://github.com/libbitcoin/libbitcoin-explorer/wiki/Download-BX\nIntroduction I am going to talk about libbitcoin. It\u0026amp;rsquo;s an alternative implementation of Bitcoin Core. There\u0026amp;rsquo;s a few alternatives out there, like btcd and then bcoin. Libbitcoin is written in C++ and it\u0026amp;rsquo;s the oldest alternative implementation that is out there. I\u0026amp;rsquo;d like to show a few …"},{"uri":"/edgedevplusplus/2019/lightning-network-sphinx-and-onion-routing/","title":"Lightning Network Sphinx And Onion Routing","content":"Introduction Hi everyone. I am a developer of rust-lightning, it started in 2018 by BlueMatt. I started contributing about a year ago now. It\u0026amp;rsquo;s a full featured, flexible, spec-compliant lightning library. It targets exchanges, wallet vendors, hardware, meshnet devices, and you can join us on Freenode IRC in the #rust-bitcoin channel which is a really nice one.\nPrivacy matters Privacy matters on the blockchain. This is the reason why lightning is taking so much time. We want the privacy to …"},{"uri":"/edgedevplusplus/2019/lightning-network-topology/","title":"Lightning network topology, its creation and maintenance","content":"Introduction Alright. Give me a second to test this. Alright. Antoine has taken you through the routing layer. I\u0026amp;rsquo;m going to take you through what the lightning network looks like today. This is the current topology of the network and how this came about, and some approaches for maintaining the network and making the graph look like we want it to look.\nLightning brief overview There\u0026amp;rsquo;s ideally only two transactions involved in any lightning channel, the commitment transaction and the …"},{"uri":"/edgedevplusplus/2019/mining-firmware-security/","title":"Mining Firmware Security","content":"slides: https://docs.google.com/presentation/d/1apJRD1BwskElWP0Yb1C_tXmGYA_vkx9rjS8VTDW_Z3A/edit?usp=sharing\n"},{"uri":"/edgedevplusplus/2019/signet/","title":"Signet annd its uses for development","content":"https://twitter.com/kanzure/status/1171310731100381184\nhttps://explorer.bc-2.jp/\nIntroduction I was going to talk about signet yesterday but people had some delay downloading docker images. How many of you have signet right now? How many think you have signet right now? How many downloaded something yesterday? How many docker users? 
And how many people have compiled it themselves? Okay. I think we have like 10 people. The people that compiled it yourself, I think you\u0026amp;rsquo;re going to be able to …"},{"uri":"/edgedevplusplus/2019/statechains/","title":"Statechains","content":"Schnorr signatures, adaptor signatures and statechains\nhttps://twitter.com/kanzure/status/1171345418237685760\nIntroduction If you want to know the details of statechains, I recommend checking out my talk from Breaking Bitcoin 2019 Amsterdam. I\u0026amp;rsquo;ll give a quick recap of Schnorr signatures and adaptor signatures and then statechains. I think it\u0026amp;rsquo;s important to understand Schnorr signatures to the point where you\u0026amp;rsquo;re really comfortable with it. A lot of the cool stuff in bitcoin …"},{"uri":"/edgedevplusplus/2019/accumulators/","title":"Acumulator Based Cryptography \u0026 UTreexo","content":"https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-10-08-utxo-accumulators-and-utreexo/\nhttps://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2019/utreexo/\nhttps://diyhpl.us/wiki/transcripts/stanford-blockchain-conference/2019/accumulators/\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/accumulators/\nhttps://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-06-utreexo/\nNote: the present presentation had more in depth information about RSA which might be interesting …"},{"uri":"/edgedevplusplus/2019/bitcoin-data-structures/","title":"Bitcoin Data Structures","content":"https://twitter.com/kanzure/status/1170974373089632257\nIntroduction Alright guys. Come on in. There\u0026amp;rsquo;s plenty of seats, guys. If you\u0026amp;rsquo;re sitting out on the edge, be nice to the other folks and get in the middle. Sitting at the edge is kind of being a dick. You\u0026amp;rsquo;re not letting anyone through. Come on guys, you don\u0026amp;rsquo;t have USB-C yet? You all have powerbanks. Get with the times. Okay, we\u0026amp;rsquo;re getting started.\nI have 90 minutes to teach you the rest of my book. I think I did …"},{"uri":"/edgedevplusplus/2019/taproot/","title":"Build a Taproot","content":"https://docs.google.com/presentation/d/1YVOYJGmQ_mY-5_Rs0Cu84AYSI6wCvpfIgoapfgsYP2U/edit\nhttps://github.com/bitcoinops/taproot-workshop/tree/BitcoinEdge\nhttps://github.com/bitcoinops/bitcoin/releases/tag/v0.1\n"},{"uri":"/needs/","title":"Needs","content":""},{"uri":"/edgedevplusplus/2019/privacy-concepts/","title":"Privacy Concepts for Bitcoin application developers","content":"https://twitter.com/kanzure/status/1171036497044267008\nIntroduction You\u0026amp;rsquo;re not going to walk out of here as a privacy protocol developer. I am going to mention and talk about some ideas in some protocols that exist. What I find myself is that of really smart people working on a lot of pretty cool stuff that can make privacy easier and better to use. A lot of times, application developers or exchanges aren\u0026amp;rsquo;t even aware of those things and as a result the end result is that people just …"},{"uri":"/edgedevplusplus/2019/rebroadcasting/","title":"Rebroadcast logic in Core","content":"https://twitter.com/kanzure/status/1171042478088232960\nIntroduction Hi, my name is Amiti. Thank you for having me here today. I wanted to talk with you about rebroadcasting logic in Bitcoin Core. For some context, I\u0026amp;rsquo;ve been working on improving it this summer. I wanted to tell you all about it.\nWhat is rebroadcasting? We all know what a broadcast is. 
It\u0026amp;rsquo;s hwen we send an INV message out to our peers and we let them know about a new transaction. Sometimes we rebroadcast and send an …"},{"uri":"/edgedevplusplus/2019/2019-09-09-carla-kirk-cohen-routing-problems-and-solutions/","title":"Routing Problems and Solutions","content":"Topic: Routing Problems and Solutions\nLocation: Tel-Aviv 2019\nIntroduction I\u0026amp;rsquo;m going to be talking about routing in the Lightning Network now. So I\u0026amp;rsquo;m going to touch on how it\u0026amp;rsquo;s currently operating, some issues that you run into when you are routing, and then some potential expansions to the spec which are going to address some of these issues.\nWe ran through the Lightning Network graph: the nodes of vertices, the channels in this graph of edges. And at the moment, nodes need to …"},{"uri":"/chaincode-labs/chaincode-residency/2019-09-09-amiti-uttarwar-transaction-rebroadcast/","title":"Transaction Rebroadcast","content":"https://twitter.com/kanzure/status/1199710296199385088\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/rebroadcasting/\nIntroduction Hello. Thank you for joining me today. My name is Amiti and I’d like to tell you a bit about my Bitcoin journey.\nProfessional Background I graduated from Carnegie Mellon five years ago and ever since I’ve worked at a few different startups in San Francisco Bay Area. My adventures with blockchains began when I worked at Simbi. Simbi is …"},{"uri":"/needs/transcript/","title":"transcript","content":""},{"uri":"/speakers/udi-wertheimer/","title":"Udi Wertheimer","content":""},{"uri":"/edgedevplusplus/2019/wallet-architecture/","title":"Wallet Architecture in Bitcoin Core","content":"https://twitter.com/kanzure/status/1171018684816605185\nIntroduction Thanks Bryan for talking about HD wallets. I am going to be talking about wallet architecture in Bitcoin Core. Alright. Who am I? I work on Bitcoin Core. I\u0026amp;rsquo;ve been writing code on Bitcoin Core for about 3 years. Lately I have been working on the wallet. I work for Chaincode Labs which is why I get to work on Bitcoin Core. We are a small research lab in New York. There\u0026amp;rsquo;s also the residency program that we run. We just …"},{"uri":"/speakers/aviv-zohar/","title":"Aviv Zohar","content":""},{"uri":"/decentralized-financial-architecture-workshop/custody-group/","title":"Custody Working Group","content":"One of the problems in the ecosystem was nomenclature. We started working on a report about what features a nomenclature would have. Airgaps, constant-time software, sidechannel resistance, Faraday cage, deterministic software, entropy, blind signatures, proof-of-reserve, multi-vendor hardware to minimize compromise. Multi-location and multi-sig. Insurance guarantee scheme. Canaries, dead man switches, tripwires, heartbeat mechanisms. Shamir\u0026amp;rsquo;s secret sharing vs multisig. 
The second part of …"},{"uri":"/decentralized-financial-architecture-workshop/","title":"Decentralized Financial Architecture Workshop","content":"https://dfa2019.bitcoinedge.org/\nCompliance And Confidentiality Alexander Zaidelson Regulation Custody Working Group Sep 08, 2019 Regulation G20 Discussion Shin\u0026amp;#39;ichiro Matsuo Research Regulation Implications Yuta Takanashi Regulation Introduction to DFA Workshop Sep 08, 2019 Aviv Zohar Metadata Perspective Regulation "},{"uri":"/decentralized-financial-architecture-workshop/introduction/","title":"Introduction to DFA Workshop","content":"We are an international conference that focuses on the latest development in software and its scalability in the bitcoin ecosystem. We have come here to Israel. We are running 5 days of events, including this current workshop. Over the next 4 days following today, we\u0026amp;rsquo;re running Bitcoin Edge Dev++ workshop at Tel Aviv University for 2 days, and then Scaling Bitcoin for 2 days which is compromised of over 44 high tech sessions on complex scalability and security related topics in the bitcoin …"},{"uri":"/blockstream-webinars/2019-09-04-christian-decker-c-lightning-questions/","title":"C-Lightning Questions","content":"c-lightning Q and A session\nhttps://twitter.com/kanzure/status/1230527903969841152\nIntroduction Hello everyone. I’m Christian. I’m a software developer at Blockstream and I work on Lightning, both the specification and the implementation. Today we are going to have a shortish webinar on how to get started with Lightning.\nQ and A Q - Is it possible to run c-lightning and lnd on the same host in parallel?\nA - It absolutely is. We actually do that on a regular basis to test compatibility between …"},{"uri":"/chaincode-labs/chaincode-residency/2019-08-22-fabian-jahr-debugging/","title":"Debugging Bitcoin Core","content":"Slides: https://residency.chaincode.com/presentations/Debugging_Bitcoin.pdf\nRepo: https://gist.github.com/fjahr/2cd23ad743a2ddfd4eed957274beca0f\nhttps://twitter.com/kanzure/status/1165266077615632390\nIntroduction I’m talking about debugging Bitcoin and that means to me using loggers and debugging tools to work with Bitcoin and this is especially useful for somebody who is a beginner with Bitcoin development. Or even a beginner for C++ which I considered myself a couple of weeks ago actually. I’m …"},{"uri":"/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/","title":"Socratic Seminar 2","content":"https://twitter.com/kanzure/status/1164710800910692353\nIntroduction Hello. The idea was to do a more socratic style meetup. This was popularized by Bitdevs NYC and spread to SF. We tried this a few months ago with Jay. The idea is we run through research news, newsletters, podcasters, talk about what happened in the technical bitcoin community. We\u0026amp;rsquo;re going to have different presenters.\nMike Schmidt is going to talk about some optech newsletters that he has been contributing to. Dhruv will …"},{"uri":"/chaincode-labs/chaincode-residency/2019-08-22-james-chiang-taproot-policy/","title":"Taproot Policy","content":"Slides: https://residency.chaincode.com/presentations/Taproot_Policy.pdf\nIntroduction Hi my name is James and I have been a resident here at Chaincode. It has been a privilege. I hear they might be doing it again in the future, I highly recommend the experience, it has been really fantastic. I’ve been working on a demo library in Python for Taproot and that’s why I chose to talk about Taproot and policy today. 
Seemingly two separate topics but actually there are some very interesting …"},{"uri":"/dallas-bitcoin-symposium/","title":"Dallas Bitcoin Symposium","content":" Bitcoin Developers Justin Moon Bitcoin Security Dhruv Bansal Security History And Extrapolation Intro Aug 16, 2019 Parker Lewis Q A Aug 16, 2019 Marty Bent, Michael Goldstein, Justin Moon, Dhruv Bansal, Gideon Powell, Tuur Demeester Sound Money Michael Goldstein Altcoins Texas Energy Market Gideon Powell Mining "},{"uri":"/speakers/dhruv-bansal/","title":"Dhruv Bansal","content":""},{"uri":"/speakers/gideon-powell/","title":"Gideon Powell","content":""},{"uri":"/dallas-bitcoin-symposium/intro/","title":"Intro","content":"https://twitter.com/kanzure/status/1162436437687623684\nThank you all for coming. We have some bitcoin contributors here, and some people from the investment community. The impetus for this event was a bitcoin conference this weekend, but also Gideon is speaking today. We went to a blockchain event a few months ago and it was our impression that they didn\u0026amp;rsquo;t do a good job of explaining bitcoin to investors. Since we\u0026amp;rsquo;re all up here in Dallas, we thought we would share some perspectives …"},{"uri":"/speakers/marty-bent/","title":"Marty Bent","content":""},{"uri":"/speakers/michael-goldstein/","title":"Michael Goldstein","content":""},{"uri":"/speakers/parker-lewis/","title":"Parker Lewis","content":""},{"uri":"/dallas-bitcoin-symposium/q-a/","title":"Q A","content":"Q\u0026amp;amp;A session\nhttps://twitter.com/kanzure/status/1162436437687623684\nMB: I have a podcast called Tales from the Crypt where I interview people working on and around bitcoin. I have spoken to many of these gentlemen on the podcast as well. I want to focus on, many of these presentations were focused on history and prehistory of bitcoin. For this Q\u0026amp;amp;A session, let\u0026amp;rsquo;s talk about the future of the bitcoin standard. Michael, I want you to talk about the future of the bitcoin standard on a …"},{"uri":"/chaincode-labs/chaincode-residency/2019-08-16-elichai-turkel-schnorr-signatures/","title":"Schnorr Signatures","content":"Slides: https://residency.chaincode.com/presentations/Schnorr_Signatures.pdf\nhttps://twitter.com/kanzure/status/1165618718677917698\nIntroduction I’m Elichai, I’m a resident here at Chaincode. Thanks to Chaincode for having us and having me. Most of you have probably heard about Taproot and Schnorr which are new technologies that we want to integrate into Bitcoin. Today I am going to explain what is Schnorr and why do we even want this?\nDigital Signatures Before that we need to define what are …"},{"uri":"/speakers/tuur-demeester/","title":"Tuur Demeester","content":""},{"uri":"/speakers/joe-netti/","title":"Joe Netti","content":""},{"uri":"/stephan-livera-podcast/2019-08-12-rusty-russell-joe-netti/","title":"Rusty Russell, Joe Netti","content":"Stephan Livera podcast with Rusty Russell and Joe Netti - August 12th 2019\nPodcast: https://stephanlivera.com/episode/98/\nStephan Livera:\tRusty and Joe, welcome to the show.\nJoe Netti:\tHi. How’s it going?\nRusty Russell:\tHey, Stephan. Good to be back.\nStephan Livera:\tThanks for rejoining me, Rusty, and thanks for joining me, Joe. So, just a quick intro just for the listeners. Joe, do you want to start?\nJoe Netti:\tYeah, sure. 
So, I got into Bitcoin in around 2013, and took me a while to learn what …"},{"uri":"/stephan-livera-podcast/2019-08-08-michael-flaxman/","title":"Every Bitcoin Hardware Wallet Sucks","content":"Topic: Every Bitcoin Hardware Wallet Sucks\nLocation: Stephan Livera Podcast\nDate: August 8th 2019\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\nIntro Stephan Livera (SL): Michael, welcome to the show.\nMichael Flaxman (MF): Hi. It’s good to be here. Thank you for having me.\nSL: Michael, I think you are one of these really underrated or under followed guys. I think a lot of the hardcore Bitcoiners know you but there’s a lot of people who don’t know you. Do you want to give an …"},{"uri":"/misc/2019-08-07-jonathan-metzman-structured-fuzzing/","title":"Going Beyond Coverage-Guided Fuzzing with Structured Fuzzing","content":"Location: Black Hat USA 2019\nBlackhat: https://www.blackhat.com/us-19/briefings/schedule/#going-beyond-coverage-guided-fuzzing-with-structured-fuzzing-16110\nSlides: https://i.blackhat.com/USA-19/Wednesday/us-19-Metzman-Going-Beyond-Coverage-Guided-Fuzzing-With-Structured-Fuzzing.pdf\nTranscript completed by: Michael Folkson\nIntro Hi everyone. Thanks for coming to my talk. As I was introduced I’m Jonathan Metzman. I’m here to talk about how you can get more bugs with coverage guided fuzzing by …"},{"uri":"/speakers/jonathan-metzman/","title":"Jonathan Metzman","content":""},{"uri":"/blockstream-webinars/2019-07-31-rusty-russell-getting-started-with-c-lightning/","title":"Getting Started With C-Lightning","content":"Getting started with c-lightning\nhttps://twitter.com/kanzure/status/1231946205380403200\nIntroduction Hi everyone. We’re actually a couple of minutes early and I think we are going to give a couple of minutes past before we actually start because I think some people will probably be running late particularly with the change in times we had to make after this was announced. While we are waiting it would be interesting to find out a little bit about the attendees’ backgrounds and what they hope to …"},{"uri":"/stephan-livera-podcast/2019-07-31-roy-sheinfeld-lightning-network-services/","title":"Lightning Network Services for the Masses","content":"podcast: https://stephanlivera.com/episode/94/\nStephan Livera: Roy, welcome to the show.\nRoy Sheinfeld: Hey Stephan, great to be here.\nStephan Livera: So Roy, I’ve seen you’ve been writing some interesting posts on Medium, and obviously you’re the CEO and founder of Breez Technology, so can you just give us a little bit of a background on yourself, on you and also a little bit of your story on Bitcoin.\nRoy Sheinfeld: Sure, sure, I’d be happy to. 
So I am a software engineer by training, I’ve been …"},{"uri":"/speakers/roy-sheinfeld/","title":"Roy Sheinfeld","content":""},{"uri":"/stephan-livera-podcast/2019-07-31-roy-sheinfeld-stephan-livera/","title":"Roy Sheinfeld - Stephan Livera","content":"Stephan Livera podcast with Roy Sheinfeld - July 31st 2019\nPodcast: https://stephanlivera.com/episode/94/\nStephan Livera:\tRoy, welcome to the show.\nRoy Sheinfeld:\tHey Stephan, great to be here.\nStephan Livera:\tSo Roy, I’ve seen you’ve been writing some interesting posts on Medium, and obviously you’re the CEO and founder of Breez Technology, so can you just give us a little bit of a background on yourself, on you and also a little bit of your story on Bitcoin.\nRoy Sheinfeld:\tSure, sure, I’d be …"},{"uri":"/speakers/britt-kelly/","title":"Britt Kelly","content":""},{"uri":"/stephan-livera-podcast/2019-07-25-britt-kelly-btcpayserver-documentation/","title":"BTCPayServer documentation, translation \u0026 Newbie tips","content":"podcast: https://stephanlivera.com/episode/92/\nStephan Livera: Hi and welcome to the Stephan Livera Podcast focused on bitcoin and Austrian economics. Today we are closing out the BTCPayServer series with Britt Kelly. But first let me introduce the sponsors of the podcast. So firstly, checkout Kraken. Over my years in bitcoin I’ve been really impressed with the way they operate. They have a really, really strong focus on security and they have consistently acted ethically in the space. They’re …"},{"uri":"/tags/submarine-swaps/","title":"Submarine swaps","content":""},{"uri":"/london-bitcoin-devs/2019-07-03-alex-bosworth-submarine-swaps/","title":"Submarine Swaps","content":"London Bitcoin Devs\nSubmarine Swaps and Loop\nSlides: https://www.dropbox.com/s/cyh97jv81hrz8tf/alex-bosworth-submarine-swaps-loop.pdf?dl=0\nhttps://twitter.com/kanzure/status/1151158631527849985\nIntro Thanks for inviting me. I’m going to talk about submarine swaps and Lightning Loop which is something that I work on at Lightning Labs.\nBitcoin - One Currency, Multiple Settlement Networks Something that is pretty important to understand about Lightning is that it is a flow network so once you set …"},{"uri":"/austin-bitcoin-developers/2019-06-29-hardware-wallets/","title":"Hardware Wallets","content":"https://twitter.com/kanzure/status/1145019634547978240\nsee also:\nExtracting seeds from hardware wallets The future of hardware wallets coredev.tech 2019 hardware wallets discussion Background A bit more than a year ago, I went through Jimmy Song\u0026amp;rsquo;s Programming Blockchain class. That\u0026amp;rsquo;s where I met M where he was the teaching assistant. Basically, you write a python bitcoin library from scratch. The API for this library and the classes and fnuctions that Jimmy uses is very easy to read …"},{"uri":"/tags/channel-factory/","title":"channel factory","content":""},{"uri":"/tags/multiparty-channel/","title":"multiparty channel","content":""},{"uri":"/chaincode-labs/chaincode-residency/2019-06-28-christian-decker-multiparty-channels/","title":"Multiparty Channels (Lightning Network)","content":"Location: Chaincode Labs Lightning Residency 2019\nSlides: https://residency.chaincode.com/presentations/lightning/Multiparty_Channels.pdf\nSymmetric Update Protocols eltoo Update Mechanism Two days ago, we’ve seen the symmetric update mechanism called eltoo, and having a symmetric update mechanism has some really nice properties that enable some really cool stuff. Just to remind everybody, this is what eltoo looks like. 
That’s the one you\u0026amp;rsquo;ve already seen. It’s not the only symmetric update …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-28-christian-decker-rendezvous-routing/","title":"Rendezvous Routing","content":"Rendezvous Routing (Lightning Network)\nLocation: Chaincode Residency 2019\nSlides: https://residency.chaincode.com/presentations/lightning/Rendezvous_Routing.pdf\nTranscript by: Caralie Chrisco and Davius Parvin\nIntroduction Okay, the second part is rendezvous routing. I basically already gave away the trick but I hope you all forgot. So what is rendezvous routing? Anybody have a good explanation of what we are trying to do here? So we have someplace where we meet in the middle, and I don\u0026amp;rsquo;t …"},{"uri":"/speakers/fabrice-drouin/","title":"Fabrice Drouin","content":""},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-fabrice-drouin-limitations-of-lightweight-clients/","title":"Limitations of Lightweight Clients","content":"Location: Chaincode Labs Lightning Residency 2019\nIntroduction The limitations it\u0026amp;rsquo;s uh you have to fight when you want to build lightweight clients mobile clients so this again, this is how mobile clients work, you scan payment requests, you have a view of the network that is supposed to be good enough, you compute the routes, you get an HTLC, you get a preimage back you\u0026amp;rsquo;ve paid.\nSo a Lightning node is a Bitcoin node, as in, you have to be able to create, sign, and send Bitcoin …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-rene-pickhardt-path-finding-lightning-network/","title":"Path Finding in the Lightning Network","content":"Location: Chaincode Residency 2019\nIntroduction Today I\u0026amp;rsquo;m going to talk a little bit about path finding on the Lightning Network and some part of it will be about the gossip protocol which is like a little bit of a review of what the gossip messages are going to look like. So it\u0026amp;rsquo;s a little bit basic again. But other than that, I\u0026amp;rsquo;ve been told that you seek more for like opinionated talks and recent ideas instead of, like, how the bolts work because you already know all of that. …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-rene-pickhardt-splicing/","title":"Splicing","content":"Location: Chaincode Labs Lightning Residency 2019\nTranscript by: Caralie Chrisco\nIntroduction So splicing basically means you have a payment channel, you have a certain capacity of a payment channel and you want to change the capacity on the payment channel. But you don\u0026amp;rsquo;t want to change your balance right? For balance change,(inaudible) we\u0026amp;rsquo;ll get some money on it. But splicing really means you want to change the capacity.\nI mean the obvious way of doing this is close the channel, …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-rene-pickhardt-update-layer/","title":"The Update Layer","content":"Location: Chaincode Labs Lightning Residency 2019\nIntroduction Maybe we had a wrong impression, you can ask, like in Christian’s talk, as many questions as you want while the presentation is there. So, it\u0026amp;rsquo;s a small disclaimer just saying: these slides are part of a much larger slide set which is open-source and you find notes and, I think, we share slides in the Google Docs anyway.\nThe outline of the talk is, I’m going to talk about the construction of payment channels in Bitcoin. 
I start …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-25-fabrice-drouin-attack-vectors-of-lightning-network/","title":"Attack Vectors of Lightning Network","content":"Location: Chaincode Residency – Summer 2019\nIntroduction All right so I\u0026amp;rsquo;m going to introduce really quickly attack vectors on Lightning. I focus on first what you can do with the Lightning protocol, but I will mostly speak about how attacks will probably happen in real life. It\u0026amp;rsquo;s probably not going to be direct attacks on the protocol itself.\nDenial of Service So the basic attacks you can have when you\u0026amp;rsquo;re running lightning nodes are denial of service attacks basically. …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-25-christian-decker-eltoo/","title":"Eltoo","content":"Eltoo: The (Far) Future of Lightning\nLocation: Chaincode Labs\nSlides: https://residency.chaincode.com/presentations/lightning/Eltoo.pdf\nEltoo white paper: https://blockstream.com/eltoo.pdf\nBitcoin Magazine article: https://bitcoinmagazine.com/articles/noinput-class-bitcoin-soft-fork-simplify-lightning\nIntro Who has never heard about eltoo? It is my pet project and I am pretty proud of it. I will try to keep this short. I was told that you all have seen my presentation about the evolution of …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-25-alex-bosworth-mpp/","title":"Multiple-path Payments (MPP)","content":"Location: Chaincode Residency 2019\nBigger, Faster, Cheaper - Multiplexing Payments Alex: How to do multiple path payments. So there\u0026amp;rsquo;s a question: why do we even need multiple path payments? So you have a channel \u0026amp;ndash; like \u0026amp;ndash; we don\u0026amp;rsquo;t have them now, so are we really hurting for them? So I kind of like describing different situations where you would need it. So in the first quadrant you can see I have two channels. One of my channels has three coins on my side. I want to buy …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-25-christian-decker-onion-routing-deep-dive/","title":"Onion Routing Deep Dive","content":"Location: Chaincode Residency – Summer 2019\nIntro We\u0026amp;rsquo;ve not seen exactly why we are using an onion, or why we chose this construction of an onion and how this construction of an onion actually looks like. So my goal right now is to basically walk you through the iterations that onion routing packets have done so far, and why we chose this construction we have here.\nSo, just to be funny, I have once again the difference between distance vector routing, which is IP-based routing, basically, …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-25-fabrice-drouin-routing-failures/","title":"Routing Failures","content":"Introduction So I\u0026amp;rsquo;m going to talk about routing failures in lightning. Basically lightning is a network of payment channels and you can pay anyone, you can find a route. In routing in lightning you have two completely different concepts: how to find a route (path finding) and once you have a route how to actually send a payment through the routes to the final destination. This is what Christian showed that\u0026amp;rsquo;s the source routing.\nSo it\u0026amp;rsquo;s two different parts. 
When you have the …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-24-fabrice-drouin-base-and-transport-layers-of-lightning-network/","title":"Base and Transport Layers of the Lightning Network","content":"Location: Chaincode Labs Lightning Residency 2019\nSlides: https://residency.chaincode.com/presentations/lightning/Base_Transport.pdf\nIntroduction Fabrice: The base and transport layer, almost everything was already introduced this morning so it\u0026amp;rsquo;s going to be very quick. So have you visited the lightning repo on GitHub? Who hasn\u0026amp;rsquo;t?\nOkay, good. So you\u0026amp;rsquo;ve seen all this? That was a test.\nWe\u0026amp;rsquo;ve covered almost everything this morning. I\u0026amp;rsquo;m just gonna get back to the base …"},{"uri":"/speakers/conner-fromknecht/","title":"Conner Fromknecht","content":""},{"uri":"/stephan-livera-podcast/2019-06-24-conner-fromknecht-stephan-livera/","title":"Conner Fromknecht - Stephan Livera","content":"Stephan Livera podcast with Conner Fromknecht - June 24th 2019\nhttps://twitter.com/kanzure/status/1143553150152060928\nPodcast: https://stephanlivera.com/episode/83\nStephan: Conner I am a big fan of what you guys are doing at Lightning Labs and I’ve been trying to get you on for a while. I’m glad that we were finally able to make it happen. Welcome to the show.\nConner: Awesome, thank you Stephan. It has been a while since we first met in Australia. We’ve been busy putting stuff together for 0.7 …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-24-rene-pickhardt-multihop-in-lightning/","title":"Multihop of the Lightning Network","content":"Slides: https://residency.chaincode.com/presentations/lightning/Multihop_Layer.pdf\nIntroduction I\u0026amp;rsquo;m going to talk about Lightning Network payment channels and the update layer which is basically multihop payments and the outline of this talk is one slide myths about routing that are floating around and I just want to discuss them in the beginning very briefly. Then I will talk about the payment process on the Lightning Network and compare it a little bit with the payment process on …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-19-john-newbery-wallet-development/","title":"Wallet Development in Bitcoin Core","content":"Location: Chaincode Labs 2019 Residency\nSlides: https://residency.chaincode.com/presentations/bitcoin/Wallet_Development.pdf\nIntro I am going to talk to you about wallets. As for the previous presentation I have all of the links in this document which I will share with you. First of all why should we care about wallets? Kind of boring right? You’ve had all this fun stuff about peer-to-peer and consensus. Wallets are just on the edge. Maybe you’re not even interested in Bitcoin Core. Maybe you …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-18-james-obeirne-advanced-segwit/","title":"Advanced Segwit","content":"Location: Chaincode Labs 2019 Residency\nSlides: https://residency.chaincode.com/presentations/bitcoin/Advanced_segwit.pdf\nJames O\u0026amp;rsquo;Beirne: To sort of vamp off of Jonas\u0026amp;rsquo;s preamble, I\u0026amp;rsquo;m not that smart. I\u0026amp;rsquo;m a decent programmer, but compared to a lot of people who work on Bitcoin, I barely know what I\u0026amp;rsquo;m doing. I kind of consider myself like a carpenter, a digital carpenter equivalent. I\u0026amp;rsquo;m a steady hand. I can get things done. 
At a very high level, the idea is that it enables Alice and Bob to trade some cryptocurrency. But they shouldn\u0026amp;rsquo;t be able to cheat each other. The good …"},{"uri":"/edgedevplusplus/2018/sidechains-and-federation-models/","title":"Federated Chains, Sidechains","content":"https://twitter.com/kanzure/status/1048008467871666178\nIntroduction The name \u0026amp;ldquo;sidechain\u0026amp;rdquo; is overloaded. My favorite private sidechain is my coach. That\u0026amp;rsquo;s my private sidechain. Hopefully I can explain the high-level concepts of a sidechain, at a 10,000 foot view. What are these? What do they give you?\nSidechain components First you need a blockchain. It has to be something that builds a consensus history. The state of the ledger has to be recorded, or the account model, or …"},{"uri":"/edgedevplusplus/2018/reorgs/","title":"Handling Reorgs \u0026 Forks","content":"https://twitter.com/kanzure/status/1052344700554960896\nIntroduction Good morning, my name is Bryan. I\u0026amp;rsquo;m going to be talking about reorganizations and forks in the bitcoin blockchain. First, I want to define what a reorganization is and what a fork is.\nDefinitions You might have heard that the bitcoin blockchain is a permanent data structure that records everything from all time and that is actually somewhat false. Instead, actually the bitcoin blockchain can be modified and changed, and …"},{"uri":"/edgedevplusplus/2018/python-bitcoinlib/","title":"Interfacing with Python via python-bitcoinlib","content":"python-bitcoinlib repo: https://github.com/petertodd/python-bitcoinlib\nBitcoin Edge schedule: https://keio-devplusplus-2018.bitcoinedge.org/#schedule\nTwitter announcement: https://twitter.com/kanzure/status/1052927707888189442\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nIntro I will be talking about python-bitcoinlib.\nwhoami First I am going to start with an introduction slide. This is because I keep forgetting who I am so I have to write it down in all of my presentations. …"},{"uri":"/edgedevplusplus/2018/lightning-network/","title":"Lightning Overview, Channel Factories, Discreet Log Contracts","content":"https://twitter.com/kanzure/status/1048039589837852672\nIntroduction I have quite a while to speak. I hope it\u0026amp;rsquo;s not too boring. I\u0026amp;rsquo;ll have an intermission half way through. It\u0026amp;rsquo;s not going to be two hours. It will be more like 80 minutes or something. We are going to talk about payment channels, unidirectional payment channels, lightning channels, homomorphic keys, hash trees, watchtowers, discreet log contracts, oracles, anticipated signatures, DLCS within channels, etc.\nI am …"},{"uri":"/speakers/marco-falke/","title":"Marco Falke","content":""},{"uri":"/edgedevplusplus/2018/overview-bitcoin-core-architecture/","title":"Overview Bitcoin Core Architecture","content":"https://twitter.com/kanzure/status/1048098034234445824\nslides: http://jameso.be/dev++2018/\nIntroduction Alright guys, how are you guys doing? You guys look tired. Given your brains are probably fried, I am delighted to tell you that this talk will be pretty high level. I am James O\u0026amp;rsquo;Beirne, I work at Chaincode Labs. I\u0026amp;rsquo;ve been working on Bitcoin Core since 2015. I am in New York most of the time.\nAgenda Today we are going to talk about Bitcoin Core. This will be pretty high-level. 
…"},{"uri":"/edgedevplusplus/2018/protecting-yourself-and-your-business/","title":"Protecting Yourself And Your Business","content":"https://twitter.com/kanzure/status/1048087226742071296\nIntroduction Hello. My name is Warren Togami and I will be talking about exchange security. Not only protecting the people but also the business. This has been a topic of interest recently.\nWarren\u0026amp;rsquo;s security background I have some credibility when it comes to security due to my previous career in open-source software. I have been working on the Linux operating system first with Fedora then with Red Hat for many years. Related to …"},{"uri":"/speakers/warren-togami/","title":"Warren Togami","content":""},{"uri":"/speakers/akio-nakamura/","title":"Akio Nakamura","content":""},{"uri":"/speakers/anton-yemelyanov/","title":"Anton Yemelyanov","content":""},{"uri":"/edgedevplusplus/2018/blind-signatures/","title":"Blind Signatures","content":"https://twitter.com/kanzure/status/1047648050234060800\nSee also http://diyhpl.us/wiki/transcripts/building-on-bitcoin/2018/blind-signatures-and-scriptless-scripts/\nIntroduction Hi everyone. I work at Commonwealth Crypto. Today I am going to be talking to you about blind signatures. I\u0026amp;rsquo;d like to encourage people to ask questions. Please think of questions. I\u0026amp;rsquo;ll give time and pause for people to ask questions.\nWhat are blind signatures? A very informal definition of blind signatures is …"},{"uri":"/edgedevplusplus/2018/block-structure-and-headers-utxos-merkle-trees-segwit-bip141/","title":"Block Structure And Headers Utxos Merkle Trees Segwit Bip141","content":"Block structure \u0026amp;amp; headers, UTXO, Merkle Trees, Address, Proof-of-Work \u0026amp;amp; Difficulty, SegWit (BIP141)\nThis presentation was spoken in Japanese. I don\u0026amp;rsquo;t speak Japanese, but I made a live translation as I listened anyway: https://docs.google.com/document/d/1ngqP9_Ep4x7iwTqF6Vvqm1Ob97A_a1irXMUc2ZHsEt4/edit\n"},{"uri":"/edgedevplusplus/2018/bulletproofs/","title":"Bulletproofs","content":"Bulletproofs https://twitter.com/kanzure/status/1047740138824945664\nIntroduction Is there anyone here who doesn\u0026amp;rsquo;t know what bulletproofs are? I will not get to all of my slides today. I\u0026amp;rsquo;ll take a top-down approach about bulletproofs. I\u0026amp;rsquo;ll talk about what they are, how they work, and then go into the rabbit hole and go from there. It will start dark and then get brighter as we go. As you leave here, you may not know what bulletproofs are. But you\u0026amp;rsquo;re going to have the tools …"},{"uri":"/edgedevplusplus/2018/coin-selection/","title":"Coin Selection","content":"https://twitter.com/kanzure/status/1047708247333859328\nIntroduction I am going to do a short talk on coin selection. Everyone calls me kalle. In Japanese, my name is kalle. Coin selection, what it is, how it works, and interestingly enough- in the latest version of Bitcoin Core released yesterday, they have changed the coin selection algorithm for the first time since a long time. They are now using something with more scientific rigor behind it (branch-and-bound coin selection), which is good …"},{"uri":"/edgedevplusplus/2018/digital-signatures/","title":"Finite fields, Elliptic curves, ECDSA, Schnorr","content":"https://twitter.com/kanzure/status/1047634593619181568\nIntroduction Thank you to Anton and to everyone at Keio University and Digital Garage. 
I have an hour to talk about digital signatures, finite fields, ECDSA, Schnorr signatures, there\u0026amp;rsquo;s a lot of ground to cover. I\u0026amp;rsquo;m going to move quickly and start from a high level.\nMy name is John. I live in New York. I work at a company called Chaincode Labs. Most of the time, I contribute to Bitcoin Core.\nI am going to talk about the concept …"},{"uri":"/edgedevplusplus/2018/hierarchical-deterministic-wallets/","title":"Hierarchical Deterministic Wallets","content":"https://twitter.com/kanzure/status/1047714436889235456\nhttps://teachbitcoin.io/presentations/wallets.html\nIntroduction My name is James Chiang. Quick introduction about myself. My contributions to bitcoin have been on the project libbitcoin where I do documentation. I\u0026amp;rsquo;ve been working through the APIs and libraries. I think it\u0026amp;rsquo;s a great toolkit. Eric Voskuil is speaking later today and he also works on libbitcoin. I also volunteered to talk about hierarchical deterministic wallets. …"},{"uri":"/edgedevplusplus/2018/introduction/","title":"Introduction to Bitcoin Edge Dev++ and Bc2","content":"We will be using two screens. The english will be on the front screen and the japanese will be on the other screen to the side here. We will also be sharing a google docs file for each one in slack. We should help each other and teach better and learn these things. Without further adue, here\u0026amp;rsquo;s Anton from Bitcoin Edge.\nHi everybody. Welcome to Dev++. It started last year at Stanford 2017 where we trained 100 people. Our goal is that we run Scaling Bitcoin events and we get the development …"},{"uri":"/edgedevplusplus/2018/p2pkh-p2wpkh-p2h-p2wsh/","title":"P2PKH, P2WPKH, P2SH, P2WSH","content":"https://twitter.com/kanzure/status/1047697572444270592\nIntroduction This is going to be a bit of a re-tread I think. I am going to be talking about common bitcoin script templates used in bitcoin today.\nAddresses do not exist on the blockchain. But scripts do. You\u0026amp;rsquo;ve heard about p2pk, p2pkh, p2sh, and others. I\u0026amp;rsquo;m going to go over these anyway.\nPay-to-pubkey (p2pk) This was the first type. It had no address format actually. Nodes connected to each other over IP address, with no …"},{"uri":"/edgedevplusplus/2018/partially-signed-bitcoin-transactions-bip174/","title":"Partially Signed Bitcoin Transactions (BIP174)","content":"https://twitter.com/kanzure/status/1047730297242935296\nIntroduction Bryan briefly mentioned partially-signed bitcoin transactions (PSBT). I can give more depth on this. There\u0026amp;rsquo;s history and motivation, a background of why this is important, what these things are doing. What is the software doing?\nHistory and motivation Historically, there was no standard for this. Armory had its own format. Bitcoin Core had network-serialized transactions. This ensures fragmentation and it\u0026amp;rsquo;s …"},{"uri":"/edgedevplusplus/2018/scripts-general-and-simple/","title":"Scripts (general \u0026 simple)","content":"https://twitter.com/kanzure/status/1047679223115083777\nIntroduction I am going to talk about why we have scripts in bitcoin. I\u0026amp;rsquo;ll give some examples and the design philosophy. I am not going to talk about the semantics of bitcoin script, though. Why do we have bitcoin script? I\u0026amp;rsquo;ll show how to lock and unlock coins. I\u0026amp;rsquo;ll talk about pay-to-pubkey, multisig, and computing vs verification in the blockchain.\nWhy have script at all? 
In my first talk, I talked about digital signatures …"},{"uri":"/edgedevplusplus/2018/sighash-noinput/","title":"SIGHASH_NOINPUT (BIP118)","content":"SIGHASH_NOINPUT (BIP118)\nhttps://twitter.com/kanzure/status/1049510702384173057\nHi, my name is Bryan, I\u0026amp;rsquo;m going to be talking about SIGHASH NOINPUT. It was something that I was asked to speak about. It is currently not deployed, but it is an active proposal.\nSo, just really brief about who I am. I have a software development background. I just left LedgerX last week, so that is an exciting change in my life. Also, I contribute to bitcoin core, usually as code review.\nIn order to talk about …"},{"uri":"/edgedevplusplus/2018/taproot-and-graftroot/","title":"Taproot and Graftroot","content":"https://twitter.com/kanzure/status/1047764770265284608\nIntroduction Taproot and graftroot are recent proposals in the bitcoin world. Every script type looks different and they are distinct. From a privacy perspective, that\u0026amp;rsquo;s pretty bad. You watermark or fingerprint yourself all the time by showing unique scripts on the blockchain or in your transactions. This causes censorship risks and other problems for fungibility. Often you are paying for contigencies that are never used, like the …"},{"uri":"/edgedevplusplus/2018/wallet-security/","title":"Wallet Security, Key Management \u0026 Hardware Security Modules (HSMs)","content":"https://twitter.com/kanzure/status/1049813559750746112\nIntro Alright, I’m now going to talk about bitcoin wallet security. And I was asked to talk about key management and hardware security modules and a bunch of other topics all in one talk. This will be a bit broader than some of the other talks because this is an important subject about how do you actually store bitcoin and then some of the developments around the actual storage of bitcoin in a secure way. Some have been integrated into …"},{"uri":"/noded-podcast/jnewbery-cve-2018-17144-bug/","title":"CVE-2018-17144 Bug","content":"Noded podcast September 26th 2018\nIntros\nPierre: This is an issue that affects our audience directly, right. This is the Noded podcast. 
It is for people who run Bitcoin full nodes and we had what is called a critical vulnerability exposure - CVE drop on Monday and there was an issue with our node software - specific versions and we can get into that.\nbitstein: And in a bombshell Tweet John Newbery said that he was responsible for the CVE.\nPierre: So normally when you do something really awful to …"},{"uri":"/baltic-honeybadger/2018/","title":"Baltic Honeybadger 2018","content":"http://web.archive.org/web/20180825023519/https://bh2018.hodlhodl.com/\nBeyond Bitcoin Decentralized Collaboration Yurii Rashkovskii Bitcoin As A Novel Market Institution Nic Carter Bitcoin Custody Sep 23, 2018 Bryan Bishop Regulation Bitcoin Maximalism Dissected Giacomo Zucco Bitcoin Payment Processing And Merchants Sergej Kotliar, Alena Vranova, Vortex Current State Of The Market And Institutional Investors Tone Vays, Bruce Fenton Day 1 Closing Panel Elizabeth Stark, Peter Todd, Jameson Lopp, …"},{"uri":"/greg-maxwell/2018-09-23-greg-maxwell-bitcoin-core-testing/","title":"Bitcoin Core Testing","content":"I believe slower would potentially result in less testing and not likely result in more at this point.\nIf we had an issue that newly introduced features were turning out to frequently have serious bugs that are discovered shortly after shipping there might be a case that it would improve the situation to delay improvements more before putting them into critical operation\u0026amp;hellip; but I think we\u0026amp;rsquo;ve been relatively free of such issues. The kind of issues that just will be found with a bit …"},{"uri":"/baltic-honeybadger/2018/bitcoin-custody/","title":"Bitcoin Custody","content":"https://twitter.com/kanzure/status/1048014038179823617\nStart time: 6:09:50\nMy name is Bryan Bishop, I’m going to be talking about bitcoin custody. Here is my PGP fingerprint, we should be doing that. So who am I, I have a software development background, I don’t just type transcripts. I’m actually no longer at LedgerX as of Friday (Sept 21 2018) when I came here. That is the end of 4 years, so you are looking at a free man. [Applause] Thank you.\nSo what is custody? A few of these slides will be …"},{"uri":"/greg-maxwell/2018-09-23-greg-maxwell-multiple-implementations/","title":"Multiple Implementations","content":"Location: Bitcointalk\nhttps://bitcointalk.org/index.php?topic=5035144.msg46077622#msg46077622\nRisks of multiple implementations They would create more risk. I don\u0026amp;rsquo;t think there is any reason to doubt that this is an objective fact which has been borne out by the history.\nFirst, failures in software are not independent. For example, when BU nodes were crashing due to xthin bugs, classic were also vulnerable to effectively the same bug even though their code was different and some triggers …"},{"uri":"/london-bitcoin-devs/2018-09-19-sjors-provoost-core-hardware-wallet/","title":"Bitcoin Core and hardware wallets","content":"Topic: Using Bitcoin Core with hardware wallets\nLocation: London Bitcoin Devs\nSlides: https://github.com/Sjors/presentations/blob/master/2018-09-19%20London%20Bitcoin%20Devs/2018-09%20London%20Bitcoin%20Devs%200.5.pdf\nCore, HWI docs: https://hwi.readthedocs.io/en/latest/examples/bitcoin-core-usage.html\nIntroduction I am Sjors, I am going to show you how to use, you probably shouldn’t try this at home, the Bitcoin Core wallet directly with a hardware wallet. 
Most of that work is done by Andrew …"},{"uri":"/chaincode-labs/chaincode-residency/2018-09-18-alex-bosworth-incentive-problems-in-the-lightning-network/","title":"Incentive Problems in the Lightning Network","content":"Incentive Problems in the Lightning Network I\u0026amp;rsquo;m going to talk about high-level incentive problems in the Lighting Network.\nMechanisms The first thing to think about is that we have two different mechanisms for making Bitcoin and lightning work. One mechanism is math — If you do a SHA 256 of something you don\u0026amp;rsquo;t have to worry about incentives, whether that\u0026amp;rsquo;s going to match to the expected result. There\u0026amp;rsquo;s no incentive there.\nSo the problem is we can\u0026amp;rsquo;t build a system …"},{"uri":"/andreas-antonopoulos/2018-08-30-andreas-antonopoulos-home-network-security/","title":"Full Node and Home Network Security","content":"Does running a Bitcoin full node attract hackers to my home network? Q - Does running a Bitcoin full node and/or a Lightning node at home attract hackers to my IP address and home network? Also could it reveal ownership of Bitcoin and attract physical attacks? Are preconfigured full node starter kits safe to use for non-technical people or is this completely against the point of running a full node and thus non-technical people should abandon the idea?\nA - Running a Bitcoin full node on your …"},{"uri":"/austin-bitcoin-developers/2018-08-17-richard-bondi-bitcoin-cli-regtest/","title":"Bitcoin CLI and Regtest","content":"Clone this repo to follow along: https://github.com/austin-bitcoin-developers/regtest-dev-environment\nhttps://twitter.com/kanzure/status/1161266116293009408\nIntro So the goal here as Justin said is to get the regtest environment set up. The advantages he mentioned, there is also the advantage that you can mine your own coins at will so you don’t have to mess around with testnet faucets. You can generate blocks as well so you don’t have to wait for six confirmations or whatever or even the ten …"},{"uri":"/speakers/richard-bondi/","title":"Richard Bondi","content":""},{"uri":"/speakers/jim-posen/","title":"Jim Posen","content":""},{"uri":"/misc/2018-07-24-la-blockchain-jim-posen-lightning-bolt-by-bolt/","title":"Lightning Network BOLT by BOLT","content":"Topic: Lightning Network BOLT by BOLT\nLocation: LA Blockchain Meetup\nTranscript by: glozow\nIntroduction I\u0026amp;rsquo;m Jim from the protocol team at Coinbase. We work on open source stuff in the crypto space. We just contribute to projects like Bitcoin core. The reason I\u0026amp;rsquo;m talking about this is I\u0026amp;rsquo;ve been contributing on and off to one of the major lightning implementations lnd for the past eight months or so.\nThis is gonna be a fairly technical talk. We\u0026amp;rsquo;re going to break down the …"},{"uri":"/london-bitcoin-devs/2018-07-23-john-light-bitcoin-full-nodes/","title":"Bitcoin Full Nodes","content":"John Light blog post on soft forks and hard forks: https://medium.com/@lightcoin/the-differences-between-a-hard-fork-a-soft-fork-and-a-chain-split-and-what-they-mean-for-the-769273f358c9\nIntro My presentation today is going to be about Bitcoin full nodes. Everything you wanted to know and more. 
There should perhaps be an asterisk because I’m not going to say literally everything but if anyone in the audience has any questions that want to go really deep down the rabbit hole then we’ll see just …"},{"uri":"/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/","title":"Taproot, Schnorr signatures, and SIGHASH_NOINPUT","content":"https://twitter.com/kanzure/status/1021880538020368385\nslides: https://prezi.com/view/YkJwE7LYJzAzJw9g1bWV/\nIntroduction I am Pieter Wuille. I will just dive in. Today I want to talk about improvements to the bitcoin scripting language. There is feedback in the microphone. Okay.\nI will mostly be talking about improvements to the bitcoin scripting language. This is by no means an exhaustive list of all the things that are going on. It\u0026amp;rsquo;s more my personal focus in the things that I am …"},{"uri":"/building-on-bitcoin/2018/lightning-routing-ants-pheromones/","title":"Ant routing for the Lightning Network","content":"Decentralized anonymous routing algorithm for LN\nhttps://twitter.com/kanzure/status/1014530761398145025\nhttps://twitter.com/rperezmarco/status/1013938984718884866\nIntroduction I wanted to present some ideas for alternative ideas for routing on LN. This was published on arxiv: 1807.00151 (July 2018).\nhttp://webusers.imj-prg.fr/~ricardo.perez-marco\nOverview I am going to talk about what I understand to be decentralized network. Fabrice has already explained most of what is LN about and some …"},{"uri":"/building-on-bitcoin/2018/bitcoin-assets/","title":"Assets on Bitcoin","content":"https://twitter.com/kanzure/status/1014483345026289664\nGood morning everybody. In the mean time, while my presentation loads, I am Giacomo Zucco. I am the founder of BHB network and a consulting company. We do bitcoin consulting for institutional customers especially in Switzerland. We also like development on top of bitcoin and things like that.\nMy talk today is about assets on bitcoin. The subtitlte is \u0026amp;ldquo;Yes, ok, financial sovereignity and cryptoanarchy are cool, but what about those …"},{"uri":"/building-on-bitcoin/2018/bootstrapping-lightning-network/","title":"Bootstrapping LN: What have we learned?","content":"acinq\nhttps://twitter.com/kanzure/status/1014531050503065601\nMy name is Fabrice Drouin. I work with ACINQ. We are a comapny working on printing LN. I am going to talk about what happens, what was the process of bootstrapping and developing lightning and what happened on mainnet.\nTimeline Who doesn\u0026amp;rsquo;t know what lightning is? And who has used lightning? Okay, almost everyone. It\u0026amp;rsquo;s an evolution on the idea of payment channels. I think it\u0026amp;rsquo;s from 2011 actually. The first big step for …"},{"uri":"/building-on-bitcoin/","title":"Building On Bitcoin","content":" Building On Bitcoin 2018 "},{"uri":"/building-on-bitcoin/2018/","title":"Building On Bitcoin 2018","content":" Anonymous Bitcoin Jul 03, 2018 Adam Ficsor Ant routing for the Lightning Network Jul 04, 2018 Ricardo Perez-Marco Lightning Routing Assets on Bitcoin Jul 04, 2018 Giacomo Zucco Blind Signatures in Scriptless Scripts Jul 03, 2018 Jonas Nick Adaptor signatures Bootstrapping LN: What have we learned? Jul 04, 2018 Fabrice Drouin Lightning Building your own bank or... Constructing Crypto Castles Jul 04, 2018 Jameson Lopp Security CoinjoinXT and other techniques for deniable transfers Jul 03, 2018 …"},{"uri":"/building-on-bitcoin/2018/crypto-castles/","title":"Building your own bank or... 
Constructing Crypto Castles","content":"https://twitter.com/kanzure/status/1014483571946508288\nI am going to do a brain dump of basically anything that people who are building systems protecting private keys are thinking about. Hopefully you can ingest this for when you decide to build one of these systems.\nIf you have been in this space for long at all, you should be well aware that there are risks for trusting third-parties. Various entities are trying to centralize aspects of this system. The next problem once we have successfully …"},{"uri":"/speakers/eric-voskuil/","title":"Eric Voskuil","content":""},{"uri":"/speakers/giacomo-zucco/","title":"Giacomo Zucco","content":""},{"uri":"/building-on-bitcoin/2018/libbitcoin/","title":"Libbitcoin","content":"https://twitter.com/kanzure/status/1014483126817521665\nThis is me. When I first suggested giving a talk here on libbitcoin, \u0026amp;hellip;. one of the pieces of feedback I got was that bitcoin is libbitcoin, what are you building on bitcoin? libbitcoin is actually more. I will also talk about what it is going on underneath. It\u0026amp;rsquo;s an implementation of bitcoin as well.\nMost of the talks I give are around cryptoeconomics. Up until recently, libbitcoin hasn\u0026amp;rsquo;t been mature enough to go and …"},{"uri":"/speakers/ricardo-perez-marco/","title":"Ricardo Perez-Marco","content":""},{"uri":"/building-on-bitcoin/2018/anonymous-bitcoin/","title":"Anonymous Bitcoin","content":"https://twitter.com/kanzure/status/1014128850765303808\nhttps://github.com/zkSNACKs/WalletWasabi\nOne year ago I was standing here at this conference and Igave a talk. Today I am standing here and I titled my talk \u0026amp;ldquo;Anonymous bitcoin\u0026amp;rdquo;. One year ago at this conference, the conference was named \u0026amp;ldquo;breaking bitcoin\u0026amp;rdquo;. I did not break bitcoin. Someone else did, you might remember.\nToday I want to present how I build on bitcoin. I have two goals with this presentation. I want to …"},{"uri":"/building-on-bitcoin/2018/blind-signatures-and-scriptless-scripts/","title":"Blind Signatures in Scriptless Scripts","content":"https://twitter.com/kanzure/status/1014197255593603072\nMy name is Jonas. I work at Blockstream as an engineer. I am going to talk about some primitives about blind signatures, scriptless scripts, and blind coin swap, and I will explain how to trustlessly exchange ecash tokens using bitcoin.\nI want to introduce the blind Schnorr signature in a few moments.\nSchnorr signature My assumption is not that you will completely understand Schnorr signatures, but maybe you will at least agree that if you …"},{"uri":"/building-on-bitcoin/2018/coinjoinxt/","title":"CoinjoinXT and other techniques for deniable transfers","content":"https://twitter.com/kanzure/status/1014197088979226625\nhttps://joinmarket.me/blog/blog/coinjoinxt/\nIntroduction My talk today is about what I\u0026amp;rsquo;m calling CoinjoinXT which is kind of a new proposal. I wouldn\u0026amp;rsquo;t say it\u0026amp;rsquo;s a new technology, but perhaps a new combination of technologies. My name is Adam Gibson. Yes, another Adam, sorry about that. I have been working on privacy tech for the last number of years, mostly on joinmarket. Today\u0026amp;rsquo;s talk is not about joinmarket, but it …"},{"uri":"/building-on-bitcoin/2018/binary-transparency/","title":"Contours for Binary Transparency","content":"https://twitter.com/kanzure/status/1014167797205815297\nI am going to talk about binary transparency. What is it? 
Let\u0026amp;rsquo;s suppose that you have an android phone or iphone and you download some software from the app store or Google Play. How do you know that the apk or that software that you\u0026amp;rsquo;re being given is the same piece of software that is being given to everyone else and google or apple hasn\u0026amp;rsquo;t specifically given you a bad version of that software because they were threatened …"},{"uri":"/building-on-bitcoin/2018/current-and-future-state-of-wallets/","title":"Current And Future State Of Wallets","content":"https://twitter.com/kanzure/status/1014127893495021568\nIntroduction I started to contribute to Bitcoin Core about 5 years ago. Since then, I have managed to get 450 commits merged. I am also the co-founder of a wallet hardware company based in Switzerland called Shift+ Cryptocurrency.\nWallets is not rocket science. It\u0026amp;rsquo;s mostly about pointing the figure to things that we can do better. I prepared a lot of content.\nPrivacy, security and trust When I look at existing wallets, I see the …"},{"uri":"/building-on-bitcoin/2018/dandelion/","title":"Dandelion: Privacy-preserving transaction propagation in Bitcoin's p2p network","content":"https://twitter.com/kanzure/status/1014196927062249472\nToday I will be talking to you about privacy in bitcoin\u0026amp;rsquo;s p2p layer. This was joint work with some talented colleagues.\nBitcoin p2p layer I want to give a brief overview of the p2p layer. Users in bitcoin are connected over a p2p network with tcp links. Users are identified by ip address and port number. Users have a second identity, which is their address or public key or however you want to think about it.\nWhen Alice wants to send a …"},{"uri":"/building-on-bitcoin/2018/btcpay/","title":"How to make everyone run their own full node","content":"https://twitter.com/kanzure/status/1014166857958461440\nI am the maintainer of Nbitcoin and btcpay. I am a dot net fanboy. I am happy that the number of people working on bitcoin in C# has spread from one about four years ago to about ten in this room toay. That\u0026amp;rsquo;s great. I work at DG Lab in Japan.\nThe goal of my talk is to continue what Jonas Schnelli was talking about. He was speaking about how to get the best wallet and get the three components of security, privacy and trust. I have a …"},{"uri":"/speakers/lawrence-nahum/","title":"Lawrence Nahum","content":""},{"uri":"/building-on-bitcoin/2018/single-use-seals/","title":"Single Use Seals","content":"https://twitter.com/kanzure/status/1014168068447195136\nI am going to talk about single use seals and ways to use them. They are a concept of building consensus applications and things that need consensus. I want to mention why that matters. What problem are we trying to solve? Imagine my cell phone and I open up my banking app or the banking app or lightning app\u0026amp;hellip; I want to make sure that what I see on my screen is the same as what you see on your screen. We\u0026amp;rsquo;re trying to achieve …"},{"uri":"/speakers/thomas-kerin/","title":"Thomas Kerin","content":""},{"uri":"/building-on-bitcoin/2018/tooling/","title":"Tooling Panel","content":"https://twitter.com/kanzure/status/1014167542422822913\nWhat kind of tools do we need to create, in order to facilitate building on bitcoin? We are going to open questions to the floor. The guys will be there to get your questions. The panel will be about 20 minutes long, and then we will go for food.\nLN: I work at Blockstream. 
I work on low-level wallets and tooling for wallets.\nND: I am working on a library called Nbitcoin for programming bitcoin in C#. I am also the developer of btcpay.\nEV: …"},{"uri":"/building-on-bitcoin/2018/working-on-scripts/","title":"Working on Scripts with logical opcodes","content":"https://twitter.com/kanzure/status/1014167120417091584\nBitcoin has logical opcodes in bitcoin script. Depending on whose is trying to spend coins or what information they have, they interact with logical opcodes. We could see a simple example here taken from one of the lightning network commitment transcraction scripts. It pushes a key on to the stack so that a checksig can run later. It adds a relative timelock as well. Two people can interact with that script and we can set different …"},{"uri":"/london-bitcoin-devs/2018-06-12-adam-gibson-unfairly-linear-signatures/","title":"Unfairly Linear Signatures","content":"Slides: https://joinmarket.me/static/schnorrplus.pdf\nIntro This is supposed to be a developer meetup but this talk is going to be a little more on the theoretical side. The title is “Unfairly Linear Signatures” that is just a joke. It is talking about Schnorr signatures. They are something that could in the near future have a big practical significance in Bitcoin. I am not going to explain all the practical significance.\nOutline To give you an outline, I will explain what all these terms mean …"},{"uri":"/breaking-bitcoin/2019/bitcoin-build-system/","title":"Bitcoin Build System Security","content":"Alternative video without the Q\u0026amp;amp;A session: https://www.youtube.com/watch?v=I2iShmUTEl8\nhttps://twitter.com/kanzure/status/1137347937426661376\nI couldn\u0026amp;rsquo;t make it to Amsterdam this year, but I hope the graphics I have prepared for this talk can make up for my absence. Let\u0026amp;rsquo;s say you want to be a good bitcoin citizen and start to run your own bitcoin node. Say you go to bitcoincore.org, you click the download button, and you open the disk image and you double click on Bitcoin Core. …"},{"uri":"/layer2-summit/","title":"Layer2 Summit","content":" Layer2 Summit 2018 "},{"uri":"/layer2-summit/2018/","title":"Layer2 Summit 2018","content":" Lightning Overview Apr 25, 2018 Conner Fromknecht Amp Splicing Watchtowers Scriptless Scripts May 25, 2018 Andrew Poelstra "},{"uri":"/layer2-summit/2018/scriptless-scripts/","title":"Scriptless Scripts","content":"https://twitter.com/kanzure/status/1017881177355640833\nIntroduction I am here to talk about scriptless scripts today. Scriptless scripts are related to mimblewimble, which is the other thing I was going to talk about. For time constraints, I will only talk about scriptles scripts. I\u0026amp;rsquo;ll give a brief historical background, and then I will say what scriptless scripts are, what we can do with them, and give some examples.\nHistory In 2016, there was a mysterious paper dead-dropped on an IRC …"},{"uri":"/layer2-summit/2018/lightning-overview/","title":"Lightning Overview","content":"https://twitter.com/kanzure/status/1005913055333675009\nhttps://lightning.network/\nIntroduction First I am going to give a brief overview of how lightning works for those of who you may not be so familiar. 
Then I am going to discuss three technologies we\u0026amp;rsquo;ll be working on in the next year at Lightning Labs and pushing forward.\nPhilosophical perspective and scalability Before I do that though, following up on what Neha was saying, I want to give a philosophical perspective on how I think …"},{"uri":"/sf-bitcoin-meetup/2018-04-23-jeremy-rubin-bitcoin-core/","title":"Bitcoin Core","content":"A hardCORE workout\nSlides: https://drive.google.com/file/d/149Ta1WRXL5WEvnBdlL-HxmsFDXUbvFDy/view\nhttps://twitter.com/kanzure/status/1152926849879760896\nIntro Thank you very much for the warm welcome. So welcome to the hard core workout. It is not going to be that difficult but you can stretch if you need to. It is going to be a lot of material so I’m going to go a little bit fast. If you have any questions feel free to stop me in the middle. There’s a lot of material to get through so I might …"},{"uri":"/sf-bitcoin-meetup/2018-04-20-laolu-osuntokun-exploring-lnd0.4/","title":"Exploring Lnd0.4","content":"Slides: https://docs.google.com/presentation/d/1pyQgGDUcB29r8DnTaS0GQS6usjw4CpgciQx_-XBVOwU/\nThe Genesis of lnd First, the genesis of lnd. Here is the first commit of lnd. It was October 27 2015. This is when I was in school still. This was done when I went on break and it was the first major…. we had in terms of code. We talked about the language we’re using, the licensing, the architecture of the daemon. A fun fact, the original name of lnd was actually called Plasma. Before we made it open …"},{"uri":"/bitcoin-core-dev-tech/2018-03/","title":"Bitcoin Core Dev Tech 2018 (Mar)","content":" Bellare-Neven Mar 05, 2018 Signature aggregation Cross Curve Atomic Swaps Mar 05, 2018 Adaptor signatures Merkleized Abstract Syntax Trees - MAST Mar 06, 2018 Taproot Covenants Mast Priorities Mar 07, 2018 Taproot, Graftroot, Etc Mar 06, 2018 Contract protocols Taproot "},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-07-priorities/","title":"Priorities","content":"https://twitter.com/kanzure/status/972863994489901056\nPriorities We\u0026amp;rsquo;re going to wait until BlueMatt is here. Nobody knows what his priorities are. He says he might be in around noon.\nThere\u0026amp;rsquo;s an ex-Google product director interested in helping with Bitcoin Core. He was asking about how to get involved. I told him to get involved by just diving in. He will be spending some time at Chaincode at the end of March. We\u0026amp;rsquo;ll get a sense for what his skills are. I think this could be …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-06-merkleized-abstract-syntax-trees-mast/","title":"Merkleized Abstract Syntax Trees - MAST","content":"https://twitter.com/kanzure/status/972120890279432192\nSee also http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees/\nMAST stuff You could directly merkleize scripts if you switch from IF, IFNOT, ELSE with IFJUMP that has the number of bytes.\nWith graftroot and taproot, you never to do any scripts (which were a hack to get things started). But we\u0026amp;rsquo;re doing validation and computation.\nYou take every single path it has; so instead, it becomes …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-06-taproot-graftroot-etc/","title":"Taproot, Graftroot, Etc","content":"https://twitter.com/kanzure/status/972468121046061056\nGraftroot The idea of graftroot is that in every contract there is a superset of people that can spend the money. This assumption is not always true but it\u0026amp;rsquo;s almost always true. 
Say you want to lock up these coins for a year, without any conditionals to it, then it doesn\u0026amp;rsquo;t work. But assume you have\u0026amp;ndash; pubkey recovery? No\u0026amp;hellip; pubkey recovery is inherently incompatible with any form of aggregation, and aggregation is far …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/","title":"Bellare-Neven","content":"See also http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/\nIt\u0026amp;rsquo;s been published, it\u0026amp;rsquo;s been around for a decade, and it\u0026amp;rsquo;s widely cited. In Bellare-Neven, it\u0026amp;rsquo;s itself, it\u0026amp;rsquo;s a multi-signature scheme which means multiple pubkeys and one message. You should treat the individual authorizations to spend inputs, as individual messages. What we need is an interactive aggregate signature scheme. Bellare-Neven\u0026amp;rsquo;s paper suggests a …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/","title":"Cross Curve Atomic Swaps","content":"https://twitter.com/kanzure/status/971827042223345664\nDraft of an upcoming scriptless scripts paper. This was at the beginning of 2017. But now an entire year has gone by.\npost-schnorr lightning transactions https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html\nAn adaptor signature.. if you have different generators, then the two secrets to reveal, you just give someone both of them, plus a proof of a discrete log, and then you say learn the secret to one that gets …"},{"uri":"/speakers/ron-paul/","title":"Ron Paul","content":""},{"uri":"/satoshi-roundtable/sr-004/ron-paul/","title":"Ron Paul","content":"Satoshi Roundtable IV\nIntroduction 1 Please welcome Alex from Coin Center\u0026amp;hellip; Bruce\u0026amp;rsquo;s son. My name is Alexander.. I\u0026amp;rsquo;m the CEO of Bit\u0026amp;hellip;.. something\u0026amp;hellip; I would like to introduce the \u0026amp;hellip; Nick Spanos.\nIntroduction 2 Nick Spanos\nA while ago, I think last year, \u0026amp;hellip; we said it would be cool to have Ron Paul here. Bruce calls me a few months ago and says we have to get Ron down here. I tried. I work for Ron. I worked for both campaigns. Rallied the troops and the …"},{"uri":"/satoshi-roundtable/","title":"Satoshi Roundtable","content":" Sr 004 "},{"uri":"/satoshi-roundtable/sr-004/","title":"Sr 004","content":" Ron Paul Feb 06, 2018 Ron Paul "},{"uri":"/misc/2018-02-02-andrew-poelstra-bulletproofs/","title":"Bulletproofs","content":"49th Milan Bitcoin meetup\nhttps://twitter.com/kanzure/status/962326126969442304\nhttps://www.reddit.com/r/Bitcoin/comments/7w72pq/bulletproofs_presentation_at_feb_2_milan_meetup/\nIntroduction Alright. Thank you for that introduction, Giacamo. I am here to talk about bulletproofs. This is different from proof-of-bullets, which is what fiat currencies use. In bulletproofs, we use a zero-knowledge proof which has nothing to do with consensus at all, but it has a lot of exciting applications. 
The …"},{"uri":"/blockchain-protocol-analysis-security-engineering/","title":"Blockchain Protocol Analysis Security Eng","content":" Blockchain Protocol Analysis Security Engineering 2017 Blockchain Protocol Analysis Security Engineering 2018 "},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/","title":"Blockchain Protocol Analysis Security Engineering 2018","content":" Bulletproofs Benedikt Bünz Proof systems Hardening Lightning Jan 30, 2018 Olaoluwa Osuntokun Security Lightning Proofs Of Space Jan 31, 2018 Bram Cohen Schnorr Signatures For Bitcoin: Challenges and Opportunities Jan 31, 2018 Pieter Wuille Research Schnorr signatures Adaptor signatures Simplicity: A New Language for Blockchains Jan 25, 2018 Russell O’Connor Simplicity Smart Signatures: Experiments in Authorization Jan 24, 2018 Christopher Allen Cryptography Simplicity "},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/proofs-of-space/","title":"Proofs Of Space","content":"Beyond Hellman\u0026amp;rsquo;s time-memory trade-offs with applications to proofs of space\nhttps://twitter.com/kanzure/status/962378701278208000\nIntroductions Hi everyone. This presentation is based off of a paper called \u0026amp;ldquo;Beyond Hellman\u0026amp;rsquo;s time-memory trade-offs\u0026amp;rdquo;. I\u0026amp;rsquo;ll get to why it\u0026amp;rsquo;s called that. These slides and the proofs are by Hamza Abusalah.\nOutline I am going to describe what proofs of space are. I\u0026amp;rsquo;ll give some previous constructions and information about them. …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/","title":"Schnorr Signatures For Bitcoin: Challenges and Opportunities","content":"slides: https://prezi.com/bihvorormznd/schnorr-signatures-for-bitcoin/\nhttps://twitter.com/kanzure/status/958776403977220098\nhttps://blockstream.com/2018/02/15/schnorr-signatures-bpase.html\nIntroduction My name is Pieter Wuille. I work at Blockstream. I contribute to Bitcoin Core and bitcoin research in general. I work on various proposals for the bitcoin system for some time now. Today I will be talking about Schnorr signatures for bitcoin, which is a project we\u0026amp;rsquo;ve been working on, …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/","title":"Hardening Lightning","content":"https://twitter.com/kanzure/status/959155562134102016\nslides: https://docs.google.com/presentation/d/14NuX5LTDSmmfYbAn0NupuXnXpfoZE0nXsG7CjzPXdlA/edit\nprevious talk (2017): http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2017/lightning-network-security-analysis/\nprevious slides (2017): https://cyber.stanford.edu/sites/default/files/olaoluwaosuntokun.pdf\nIntroduction I am basically going to go over some ways that I\u0026amp;rsquo;ve been thinking about basically …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/2018-01-25-russell-oconnor-simplicity/","title":"Simplicity: A New Language for Blockchains","content":"Slides: https://cyber.stanford.edu/sites/g/files/sbiybj9936/f/slides-bpase-2018.pdf\nSimplicity white paper: https://blockstream.com/simplicity.pdf\nIntro Good morning everyone. I have been working on a new language for the consensus layer of blockchains and I am here to present that to you today.\nBlockchains and Smart Contracts I know everyone is familiar with this in the audience but I want to make sure that I’m on the same page with you guys. 
How do blockchains and smart contracts and …"},{"uri":"/speakers/christopher-allen/","title":"Christopher Allen","content":""},{"uri":"/misc/2018-01-24-rusty-russell-future-bitcoin-tech-directions/","title":"Future Bitcoin Tech Directions","content":"Future technological directions in bitcoin: An unreliable guide\nnotes: https://docs.google.com/document/d/1iNzJtJRq9-0vdILQAe9Bcj7r1a7xYVf8CSY0_oFrH9c/edit or http://diyhpl.us/~bryan/papers2/bitcoin/Future%20technological%20directions%20in%20bitcoin%20-%202018-01-24.pdf\nhttps://twitter.com/kanzure/status/957318453181939712\nIntroduction Thank you very much for that introduction. Alright, well polished machine right there. Let\u0026amp;rsquo;s try that again. Thank you for that introduction. My talk today …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/2018-01-24-christopher-allen-smart-signatures/","title":"Smart Signatures: Experiments in Authorization","content":"Slides: https://www.slideshare.net/ChristopherA/smart-signaturesexperiments-in-authentication-stanford-bpase-2018-final\nIntro Good afternoon. My name is Christopher Allen, I am with Blockstream and today I wanted to give you an introduction to something that isn’t about consensus, isn’t about proof of stake and other things. I want to pop up a level. I want to talk about smart signatures. These are a number of experiments in authorization.\nDigital Signatures First we need to recap, what is a …"},{"uri":"/realworldcrypto/2018/mimblewimble-and-scriptless-scripts/","title":"Mimblewimble And Scriptless Scripts","content":"https://twitter.com/kanzure/status/954531580848082944\nWe\u0026amp;rsquo;re about ready for the second session of RWC. We are going to move on to cryptocurrencies. So we have 3 talks lined up in this session. The first one is on mimblewimble and scriptless scripts, delivered by Andrew Poelstra from Blockstream.\nIntroduction Thank you. Cool. Hello. My name is Andrew Poelstra. I am the research director at Blockstream and I\u0026amp;rsquo;m here to talk about a couple of things that I\u0026amp;rsquo;ve been thinking about …"},{"uri":"/realworldcrypto/2018/","title":"Realworldcrypto 2018","content":" Mimblewimble And Scriptless Scripts Jan 11, 2018 Andrew Poelstra Adaptor signatures Sidechains "},{"uri":"/greg-maxwell/2017-12-22-bech32-design/","title":"bech32 design","content":"Location: Bitcointalk\nhttps://bitcointalk.org/index.php?topic=2624630.msg26762627#msg26762627\nbech32 design Bech32 is designed for human use and basically nothing else, the only consideration for QR codes is that all caps is also permitted.\nFor all your talk of human considerations you give a strong signal of having seldom actually used bitcoin addresses as a human. 1/3 type addresses are full of visually confusing characters and the case modulations are a pain to read and enter correctly. 
In …"},{"uri":"/greg-maxwell/2017-11-27-gmaxwell-advances-in-block-propagation/","title":"Advances In Block Propagation","content":"slides: https://web.archive.org/web/20190416113003/https://people.xiph.org/~greg/gmaxwell-sf-prop-2017.pdf\nhttps://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding\nefficient block transfer: https://web.archive.org/web/20170912171304/https://people.xiph.org/~greg/efficient.block.xfer.txt\nlow latency block xfer: https://web.archive.org/web/20160607023347/https://people.xiph.org/~greg/lowlatency.block.xfer.txt …"},{"uri":"/scalingbitcoin/stanford-2017/how-to-charge-lightning/","title":"How To Charge Lightning","content":"Our next talk is getting into layer 2. Leaving what needs to done at th\u0026amp;hellip;\nI am going to start with maybe a simple beginning talking about a single channel between Alice and Bob. We heard about channels yesterday so I am not going to go into details. Alice is not going to cheat Bob in my scenario. Everything is going to run as if everyone honest and doing their best effort, for my example. They open a channel and insert 10 coins. Every time Alice moves her coins to Bob or the other way …"},{"uri":"/scalingbitcoin/stanford-2017/incentives-tradeoffs-transaction-selection-in-dag-based-protocols/","title":"Incentives and Trade-offs in Transaction Selection in DAG-based Protocols","content":"Commercial effort to implement DAG-based protocols. No ICO, it\u0026amp;rsquo;s a regular company.\nA little bit background of DAG-based protocols. The background to this work is something we have been working at Hebrew University and Yoad. This idea of blockDAG is about layer 1 of bitcoin. And so it\u0026amp;rsquo;s a generalization of the chain structure of blockchain. As such, there are two solutions- lightning network and payment channels, and were scaling up layer 1, and you\u0026amp;rsquo;re scaling up in layer 2, …"},{"uri":"/speakers/joi-ito/","title":"Joi Ito","content":""},{"uri":"/scalingbitcoin/stanford-2017/joi-ito/","title":"Keynote","content":"Welcome back to Scaling Bitcoin day 2. I am sure everyone went home and tried Ethan\u0026amp;rsquo;s wagering setup yesterday. Today we are going to be covering topics like layer 2, fees, and consensus. To get us started today, we have a very special guest, Joi Ito, director of the MIT Media Lab, and he will be talking about parallels to the internet and ICOs and he will cover a lot of ground. Joi?\nI think everyone knows what the MIT Media Lab is. I am its director. I was involved in getting the digital …"},{"uri":"/scalingbitcoin/stanford-2017/microchains/","title":"Microchains","content":"https://twitter.com/kanzure/status/927257690685939712\nI would like to see a blockchain ecosystem with 1000s of transactions/second at layer 1 with full security. And so, we have to look at blockchain differently if we are going to achieve this.\nA brief introduction\u0026amp;hellip; my name is David Vorick. I have been studying bitcoin since 2011. My primary gig is Sia. I have been spending a lot of time on consensus algorithms and proof-of-work. If we could start over what sorts of things could we do …"},{"uri":"/speakers/nick-szabo/","title":"Nick Szabo","content":""},{"uri":"/scalingbitcoin/stanford-2017/weak-signal-radio-communications-for-bitcoin-network-resilience/","title":"Weak-Signal Radio Communications For Bitcoin Network Resilience","content":"What is weak-signal HF radio? These are called high frequency. That\u0026amp;rsquo;s the official name for these frequencies. 
Short wave is for historical reasons is longer waves and much higher frequency compared to our wireless internet frequencies. The terminology is backwards. I\u0026amp;rsquo;ll talk about the ionosphere and how we\u0026amp;rsquo;re bouncing signals off of it. This is really long-range stuff. This is an old Voice of America broadcast station that broadcasted radio programs across the pacific. The …"},{"uri":"/speakers/yonatan-sompolinsky/","title":"Yonatan Sompolinsky","content":""},{"uri":"/speakers/andrew-stone/","title":"Andrew Stone","content":""},{"uri":"/scalingbitcoin/stanford-2017/bitcoin-script-v2.0-and-strengthened-payment-channels/","title":"Bitcoin Script 2.0 And Strengthened Payment Channels","content":"This is a brief history of bitcoin script evolution. Since bitcoin was active in 2009, there was a lot of emergency fixes for the first 2 years done by Satoshi. He found that people could skpi the signature check using OP_RETURN and malformed scriptSigs. So those functions were removed. OP_VER and OP_VERIF were intended for script upgrades but it was found that after every release of bitcoin, it would become a hard-fork because of the design. So those were also removed. Also, many opcodes were …"},{"uri":"/scalingbitcoin/stanford-2017/blocksci-platform/","title":"BlockSci: a Platform for Blockchain Science and Exploration","content":"Alright well, my name is Harry Kalodner, I am a student at Princeton University and I am here to tell you about a tool I built alon with some colleagues at Princeton which could be used for analyzing the blockchain in different ways. So far most of the talks today have been constructive about new ways to use the blockchain and protocols to slightly modify bitcoin in roder to improve privacy or scaling. I am going ot take a step back and look at scaling demands beyond \u0026amp;hellip; to look at why are …"},{"uri":"/scalingbitcoin/stanford-2017/bolt-anonymous-payment-channels-for-decentralized-currencies/","title":"BOLT Anonymous Payment Channels for Decentralized Currencies","content":"paper: https://eprint.iacr.org/2016/701.pdf\nTo make questions easier, we are going to have a mic on the isle. Please line up so that we can have questions quickly. Okay, now we have Ian Miers.\nMy name is Ian Miers. Just got my PhD at Hpkins\u0026amp;hellip; authors of zcash, zerocash, \u0026amp;hellip; My interest in bitcoin was, first getting involved, was dealing with the privacy aspect. There is also a scaling problem. I assume you are aware of that. The bottom line is\u0026amp;hellip; converting this to PDF …"},{"uri":"/speakers/harry-kalodner/","title":"Harry Kalodner","content":""},{"uri":"/speakers/johnson-lau/","title":"Johnson Lau","content":""},{"uri":"/scalingbitcoin/stanford-2017/measuring-network-maximum-sustained-transaction-throughput/","title":"Measuring maximum sustained transaction throughput on a global network of Bitcoin nodes","content":"Andrew Stone, Bitcoin Unlimited\nWe are 25 minutes behind schedule.\nWe have the rest of the team wathcing remotely. I think the motivation for this project is clear. Transaction volume on the bitcoin network could be growing exponentially except that there is a limit on the number of confirmed transactions. Transaction fees have increased and confirmation times are not reliable, and bitcoin is unusable for some applications.\nSome people want to see the max blokc size limit lifted. 
Some people …"},{"uri":"/scalingbitcoin/stanford-2017/valueshuffle-mixing-confidential-transactions/","title":"ValueShuffle: Mixing Confidential Transactions","content":"paper: https://eprint.iacr.org/2017/238.pdf\nThank you.\nWe have seen two talks now about putting privacy in layer two. But let\u0026amp;rsquo;s talk about layer one. We still don\u0026amp;rsquo;t have great privacy there. So the title of this talk is valueshuffle - mixing confidential transactions. I don\u0026amp;rsquo;t have to convince you that bitcoin at the moment is not really private. There are a lot of possibilities to link\u0026amp;hellip; to deal them as, to link addresses together. We can even link addresses …"},{"uri":"/cppcon/2017/","title":"CPPcon 2017","content":" Fuzzing Oct 11, 2017 Kostya Serebryany "},{"uri":"/cppcon/2017/2017-10-11-kostya-serebryany-fuzzing/","title":"Fuzzing","content":"Fuzz or lose! Why and how to make fuzzing a standard practice for C++\nLocation: CppCon 2017\nSlides: https://github.com/CppCon/CppCon2017/blob/master/Demos/Fuzz%20Or%20Lose/Fuzz%20Or%20Lose%20-%20Kostya%20Serebryany%20-%20CppCon%202017.pdf\nPaper on “The Art, Science and Engineering of Fuzzing”: https://arxiv.org/pdf/1812.00140.pdf\nIntro Good afternoon and thank you for coming for the last afternoon session. I appreciate that you are tired. I hope to wake you up a little bit. My name is Kostya, I …"},{"uri":"/speakers/kostya-serebryany/","title":"Kostya Serebryany","content":""},{"uri":"/breaking-bitcoin/2017/","title":"Breaking Bitcoin 2017","content":" Breaking Hardware Wallets Sep 09, 2017 Nicolas Bacca Security problems Hardware wallet Changing Consensus Rules Without Breaking Bitcoin Sep 10, 2017 Eric Lombrozo Soft fork activation Consensus Pitfalls Sep 10, 2017 Christopher Jeffrey Hunting Moby Dick, an Analysis of 2015-2016 Spam Attacks Sep 09, 2017 Antoine Le Calvez Research Security problems Interview Adam Back Elizabeth Stark Sep 10, 2017 Adam Back, Elizabeth Stark Lightning Light Clients During 2017 Interfork Period Sep 09, 2017 …"},{"uri":"/breaking-bitcoin/2017/changing-consensus-rules-without-breaking-bitcoin/","title":"Changing Consensus Rules Without Breaking Bitcoin","content":"https://twitter.com/kanzure/status/1005822360321216512\nIntroduction I\u0026amp;rsquo;d like to talk about \u0026amp;hellip; we actually discovered we can replace the script completely with soft-forks.\nIt\u0026amp;rsquo;s important to note this quote from satoshi, from summer 2010: \u0026amp;ldquo;I don\u0026amp;rsquo;t believe a second compatible implementation will ever\u0026amp;rdquo; \u0026amp;hellip;\u0026amp;hellip;\nComparing open-source development to bitcoin\u0026amp;rsquo;s blockchain \u0026amp;hellip; a lot of the inspiration came from the development of open-source. All …"},{"uri":"/speakers/christopher-jeffrey/","title":"Christopher Jeffrey","content":""},{"uri":"/breaking-bitcoin/2017/2017-09-10-christopher-jeffrey-consensus-pitfalls/","title":"Consensus Pitfalls","content":"Pitfalls of Consensus Implementation\nLocation: Breaking Bitcoin\nIntro I’d like to talk about breaking Bitcoin today. Before I do that I’d like to give a short demonstration.\nDemo Right now I am SSH’d into a server with a couple of nodes on it. I am going to start up a bitcoind node and I am going to connect to one of these nodes and have it serve me a chain. The chain is kind of big. I predownloaded a lot of it. It is a regtest chain that I mine myself. As you can see here is the debug log. 
I’ve …"},{"uri":"/breaking-bitcoin/2017/interview-adam-back-elizabeth-stark/","title":"Interview Adam Back Elizabeth Stark","content":"https://twitter.com/kanzure/status/1005842855271784449\nstark: Thank you Giacomo. Who here had too much fun at the party last night? My voice is going. Hopefully it will stay for this interview. We are lucky to have Adam Back here today, co-founder of Blockstream and inventor of Hashcash (history). He just happened to appear and there was an available spot. Thank you for making this work.\nadam3us: Thank you for inviting me.\nstark: I\u0026amp;rsquo;m a co-organizer of this event. I am also founder of …"},{"uri":"/breaking-bitcoin/2017/solar-powered-space-pirates/","title":"Solar Powered Space Pirates","content":"Solar powered space pirates: A threat to bitcoin?\nhttps://twitter.com/kanzure/status/1005633917356007425\nLet\u0026amp;rsquo;s see. How do I make this fullscreen? Alright, anyone who can actually read French know how to make this fullscreen? My french teachers in high school would be very disappointed in me.\nMy name is Peter Todd. I do cryptography consulting. I\u0026amp;rsquo;ve been involved in Bitcoin Core as well. I want to tell you about whether solar powered space pirates are a threat to bitcoin.\nIf …"},{"uri":"/speakers/antoine-le-calvez/","title":"Antoine Le Calvez","content":""},{"uri":"/breaking-bitcoin/2017/breaking-hardware-wallets/","title":"Breaking Hardware Wallets","content":"Breaking hardware wallets: Tales from the trenches and glimpses into the future\nhttps://twitter.com/kanzure/status/1005627960160849920\nHello. Nobody cares? Ah. So next up is Nicolas. And he will talk about hardware attacks. Thank you.\nIntroduction Hi everybody. Thanks for making a great conference so far. I am Nicolas from Ledger hardware wallet. Today I am going to be talking about breaking hardware wallets. I don\u0026amp;rsquo;t mean physically breaking them. If you expected to come steal a hardware …"},{"uri":"/breaking-bitcoin/2017/spam-attacks-analysis/","title":"Hunting Moby Dick, an Analysis of 2015-2016 Spam Attacks","content":"https://twitter.com/kanzure/status/1005525351517360129\nIntroduction Hi. Thanks for Breaking Bitcoin to invite me to speak and to present this research on Moby Dick.\nMoby Dick and spam attacks\u0026amp;ndash; let me explain, Moby Dick is an entity that spammed or sent a lot of transactions in bitcoin in 2015. We did some analysis on who this person is or what they were doing. This is bitcoin archaeology, where we analyze past activities and try to analyze what happened and get insight into what could …"},{"uri":"/breaking-bitcoin/2017/light-clients-during-2017-interfork-period/","title":"Light Clients During 2017 Interfork Period","content":"https://twitter.com/kanzure/status/1005617123287289857\nNext up is Thomas.\nIntroduction Alright. Hello everyone. Thank you for having me. I\u0026amp;rsquo;m the developer of the Electrum wallet. I started this project in 2011. I\u0026amp;rsquo;m going to talk about lite clients and the implication of the 2017 hard-fork. We might be in the middle of two forks at the moment.\nSo we have segwit and we are very happy about that. ((applause)) This graph was taken from Pieter Wuille\u0026amp;rsquo;s website. 
It has lasted almost …"},{"uri":"/speakers/nicolas-bacca/","title":"Nicolas Bacca","content":""},{"uri":"/breaking-bitcoin/2017/socialized-costs-of-hard-forks/","title":"Socialized Costs Of Hard Forks","content":"https://twitter.com/kanzure/status/1005518746260262912\nKL: Can you hear me? Is it good? Okay. I hope you have practiced your French because.. no, the presentations are in English. Welcome and thank you very much for being here. This conference will very probably be the best we\u0026amp;rsquo;ve had in the bitcoin sphere. The crowd is pretty crazy. I\u0026amp;rsquo;ve had the ticket list and I can say the people in this room, not just the speakers, the attendees are all super amazing. I\u0026amp;rsquo;m glad to be …"},{"uri":"/breaking-bitcoin/2017/banks-as-bitcoin-custodians/","title":"Traditional Banks as Bitcoin Custodians: Security Challenges and Implications","content":"https://twitter.com/kanzure/status/1005545022559866880\nOur next speaker is Giacomo Zucco. Yeah, have fun. ((Applause))\nAs you can hear, there are a lot of italians here. Okay. So, good afternoon everyone. I am Giacomo Zucco.\nI am a theoretical physicist. I worked at Accenture as a technology consultant and now I work at BHB Network which is a bridge between free open-source development and research on the bitcoin protocol, and the world of the incumbents. These guys have the money, the other …"},{"uri":"/bitcoin-core-dev-tech/2017-09/","title":"Bitcoin Core Dev Tech 2017","content":" Meeting Notes Sep 05, 2017 Merkleized Abstract Syntax Trees Sep 07, 2017 Mast Signature Aggregation Sep 06, 2017 Signature aggregation "},{"uri":"/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/","title":"Merkleized Abstract Syntax Trees","content":"https://twitter.com/kanzure/status/907075529534328832\nhttps://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html\nI am going to talk about the scheme I posted to the mailing list yesterday which is to implement MAST (merkleized abstract syntax trees) in bitcoin in a minimally invasive way as possible. It\u0026amp;rsquo;s broken into two major consensus features that together gives us MAST. I\u0026amp;rsquo;ll start with the last BIP.\nThis is tail-call evaluation. Can we generalize P2SH to …"},{"uri":"/bitcoin-core-dev-tech/2017-09/2017-09-06-signature-aggregation/","title":"Signature Aggregation","content":"https://twitter.com/kanzure/status/907065194463072258\nSipa, can you sign and verify ECDSA signatures by hand? No. Over GF(43), maybe. Inverses could take a little bit to compute. Over GF(2).\nI think the first thing we should talk about is some definitions. I\u0026amp;rsquo;d like to start by distinguishing between three things: Key aggregation, signature aggregation, and batch validation. Multi-signature later.\nThere are three different problems. Key aggregation is where there are a number of people with …"},{"uri":"/bitcoin-core-dev-tech/2017-09/2017-09-05-meeting-notes/","title":"Meeting Notes","content":"coredev.tech september 2017\nhttps://twitter.com/kanzure/status/907233490919464960\n((As always, any errors are most likely my own. 
etc.))\nIntroduction There is significant concern regarding whether BlueMatt has become a misnomer.\nMonday night presentation: https://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/\nI think we should continue to use #bitcoin-core-dev for anything about changing Bitcoin Core and try to keep things open even though we\u0026amp;rsquo;re together here …"},{"uri":"/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/","title":"Bip150 Bip151","content":"bip150 and bip151: Bitcoin p2p network encryption and authentication\nhttps://github.com/bitcoin/bips/blob/master/bip-0150.mediawiki\nhttps://github.com/bitcoin/bips/blob/master/bip-0151.mediawiki\nhttps://btctranscripts.com/scalingbitcoin/milan-2016/bip151-peer-encryption/\nIntroduction Alright guys.. take a seat. ((various mumblings)) Want to thank our sponsors, Digital Garage (applause). Second order of business, if you guys could\u0026amp;hellip; trash away.. that would be awesome. There are trash cans …"},{"uri":"/greg-maxwell/2017-08-28-gmaxwell-deep-dive-bitcoin-core-v0.15/","title":"Deep Dive Bitcoin Core V0.15","content":"slides: https://people.xiph.org/~greg/gmaxwell-sf-015-2017.pdf\nhttps://twitter.com/kanzure/status/903872678683082752\ngit repo: https://github.com/bitcoin/bitcoin\npreliminary v0.15 release notes (not finished yet): http://dg0.dtrt.org/en/2017/09/01/release-0.15.0/\nIntroduction Alright let\u0026amp;rsquo;s get started. There are a lot of new faces in the room. I feel like there\u0026amp;rsquo;s an increase in excitement around bitcoin right now. That\u0026amp;rsquo;s great. For those of you who are new to this space, I am …"},{"uri":"/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/","title":"Merkle Set Data Structures for Scaling Bitcoin","content":"code: https://github.com/bramcohen/MerkleSet\nhttps://twitter.com/kanzure/status/888836850529558531\nIntroduction There\u0026amp;rsquo;s been a fair amount of talk about putting UTXO commitments in bitcoin blocks. Whether this is a good idea or not, but on the off-chance that it might be a good idea, I spent a fair amount of time thinking and actually good merkle set implementation whic his what you you need to put utxo set commitments in bitcoin blocks. There are other benefits to this, in that there are …"},{"uri":"/sf-bitcoin-meetup/2017-06-06-laolu-osuntokun-neutrino/","title":"Neutrino: The Privacy Preserving Bitcoin Light Client","content":"Slides: https://docs.google.com/presentation/d/1vjJUWOZIaPshjx1svfE6rFZ2lC07n47ifOdzpKWqpb4/\nhttps://twitter.com/kanzure/status/1054031024085258240\nIntroduction My name is Laolu or Roasbeef on the internet. I’m going to talk about light clients in general and then also what we’re working on in terms of light clients and then also how that factors into the lightning network itself. The talk is called Neutrino because that is the name of our new light client that has been released. There has been …"},{"uri":"/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks/","title":"On Consensus and All Kinds of Forks","content":"User activated soft forks vs hard forks Adam Levine (AL): I’m trying to look at the user activated soft fork thing and not see hypocrisy just steaming out of its ears. I’m really in need of you to help me understand why this is not the case Andreas. The argument to this point about scaling, one of the arguments, one of the larger arguments, has been that hard forks are the worst thing that can happen. 
In the event of a hard fork that isn’t meticulously planned and even if meticulously planned it …"},{"uri":"/greg-maxwell/2017-04-28-gmaxwell-confidential-transactions/","title":"Confidential Transactions","content":"https://twitter.com/kanzure/status/859604355917414400\nIntroduction Thank you.\nSo as mentioned, I am going to be talking about confidential transactions today. And I look at confidential transactions as a building block fundamental piece of technology, it\u0026amp;rsquo;s not like altcoin or something like that. It\u0026amp;rsquo;s not a turnkey system, it\u0026amp;rsquo;s some fundamental tech, and there are many different ways to use it and deploy it.\nMy interest in this technology is primarily driven by bitcoin, but the …"},{"uri":"/sf-bitcoin-meetup/2017-04-03-andreas-antonopoulos-bitcoin-scripting/","title":"Advanced Bitcoin Scripting","content":"Mastering Bitcoin on GitHub: https://github.com/bitcoinbook/bitcoinbook\nIntro Today’s talk is different from some of the other talks I’ve done so bear with me. We are going to do this slowly in a very casual manner. This is not one of my usual talks where I am going to tell you about big picture vision and stuff like that. What I want to do is talk about some interesting features of Bitcoin Script and explain them. First question is how many of the people here are developers? Or have at least …"},{"uri":"/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses/","title":"New Address Type For Segwit Addresses","content":"Topic: Bech32 addresses for Bitcoin\nSlides: https://prezi.com/gwnjkqjqjjbz/bech32-a-base32-address-format/\nProposal: https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki\nDemo website: http://bitcoin.sipa.be/bech32/demo/demo.html\nTwitter announcement: https://twitter.com/kanzure/status/847569047902273536\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nIntro Can everyone hear me fine through this microphone? Anyone who can\u0026amp;rsquo;t hear me please raise your hand. Oh wait. …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2017/bitcoin-mining-and-trustlessness/","title":"Bitcoin's Miner Security Model","content":"https://twitter.com/kanzure/status/838500313212518404\nYes, something like that. As mentioned, I\u0026amp;rsquo;ve been contributing to Bitcoin Core since 2011 and I\u0026amp;rsquo;ve worked on various parts of bitcoin, such as the early payment channel work, some wallet stuff, and now I work for Chaincode Lab one of the sponsors\u0026amp;ndash; we\u0026amp;rsquo;re a bitcoin research lab. We all kind of hang out and work on various bitcoin projects we find interesting. I want to talk about reflections on trusting trust.\nFor those …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2017/exchange-security/","title":"Exchange Security","content":"https://twitter.com/kanzure/status/838517545783148545\nHow often do you check your wallets? Have you ever looked in your wallet and found all your bitcoin gone or missing? Last summer, somebody, a friend of mine checked his wallet and found that 100,000 bitcoin was gone. What the hell, right? But that\u0026amp;rsquo;s what happened at Bitfinex last summer. Some hacker got into their hot wallet and stole 100,000 bitcoin. This was $70 million bucks. 
Of which 1.5 million was mine.\nI spent the last 6 months …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2017/ideal-number-of-full-bitcoin-nodes/","title":"Ideal Number Of Full Bitcoin Nodes","content":"https://twitter.com/kanzure/status/838481130944876544\nToday I am going to be talking about bitcoin full nodes. Can everyone hear me? Okay, great. I don\u0026amp;rsquo;t actually think there\u0026amp;rsquo;s a specific number. I think a lot of people do. They look for a number. 5000 is about how many we have today, 6000. Some people ask whether it\u0026amp;rsquo;s a good number. I think that\u0026amp;rsquo;s the wrong way to look at the role of full nodes in the ecosystem.\nI\u0026amp;rsquo;ve been with bitcoin since 2011. I do a bunch of …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2017/mimblewimble-and-scriptless-scripts/","title":"Mimblewimble And Scriptless Scripts","content":"https://twitter.com/kanzure/status/838480912396533760\nUp next we have Andrew Poelstra and he is going to be talking about a really interesting new concept for a cryptocurrency. It\u0026amp;rsquo;s called mimblewimble, after a spell from Harry Potter, because it prevents characters from speaking. Try speaking? Hello, test. We also have a\u0026amp;hellip; hello? hello? Can you guys hear me?\nI am going to talk about the concept of scriptless scripts. It\u0026amp;rsquo;s a way of describing what mimblewimble does, in a way …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2017/","title":"MIT Bitcoin Expo 2017","content":"http://web.archive.org/web/20170610144420/http://mitbitcoinexpo.org/\nBitcoin\u0026amp;#39;s Miner Security Model Mar 04, 2017 Matt Corallo Security Exchange Security Mar 04, 2017 Mitchell Dong Security problems Ideal Number Of Full Bitcoin Nodes Mar 04, 2017 David Vorick Mimblewimble And Scriptless Scripts Mar 04, 2017 Andrew Poelstra Adaptor signatures Sidechains Scaling and UTXO commitments Mar 04, 2017 Peter Todd Live Stream:\nDay 1 Day 2 "},{"uri":"/speakers/mitchell-dong/","title":"Mitchell Dong","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2017/scaling-and-utxos/","title":"Scaling and UTXO commitments","content":"https://twitter.com/kanzure/status/838481311992012800\nThank you. I\u0026amp;rsquo;ll admit the last one was probably the most fun other than. On the schedule, it said scaling and UTXO and I threw something together. TXO commitments were a bold idea but recently there were some observations that make it more interesting. I thought I would start by going through what problem they are actually solving.\nLike David was saying in the last presentation, running a full node is kind of a pain. 
How big is the …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2017/","title":"Blockchain Protocol Analysis Security Engineering 2017","content":" Covenants - Structuring Multi Transaction Contracts in Bitcoin Jan 26, 2017 Jeremy Rubin Covenants Ivy: A Declarative Predicate Language for Smart Contracts Jan 26, 2017 Dan Robinson Research Contract protocols Post’s Theorem and Blockchain Languages: A Short Course in the Theory of Computation Jan 26, 2017 Russell O’Connor Cryptography Scalable Smart Contracts Via Proofs And Single Use Seals Feb 03, 2017 Peter Todd Research Contract protocols Security Analysis of the Lightning Network Feb 03, …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2017/scalable-smart-contracts-via-proofs-and-single-use-seals/","title":"Scalable Smart Contracts Via Proofs And Single Use Seals","content":"https://twitter.com/kanzure/status/957660108137418752\nslides: https://cyber.stanford.edu/sites/default/files/petertodd.pdf\nhttps://petertodd.org/2016/commitments-and-single-use-seals\nhttps://petertodd.org/2016/closed-seal-sets-and-truth-lists-for-privacy\nIntroduction I am petertodd and I am here to break your blockchain. It\u0026amp;rsquo;s kind of interesting following a talk like that, in some ways I\u0026amp;rsquo;m going in a more extreme direcion for consensus. Do we actually need consensus at all? Can we …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2017/lightning-network-security-analysis/","title":"Security Analysis of the Lightning Network","content":"https://twitter.com/kanzure/status/957978915515092992\nslides: https://cyber.stanford.edu/sites/default/files/olaoluwaosuntokun.pdf\nIntroduction My name is Laolu Osuntokun and I work on lightning stuff. I go by roasbeef on the internet. I work at Lightning Labs. I am going to be giving an overview of some security properties and privacy properties of lightning.\nState of the hash-lock Before I start, I want to go over the state of the hash-lock which is the current progress to completion and …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2017/2017-01-26-jeremy-rubin-covenants/","title":"Covenants - Structuring Multi Transaction Contracts in Bitcoin","content":"Structuring Multi Transaction Contracts in Bitcoin\nLocation: BPASE 2017, Stanford University\nSlides: https://rubin.io/public/pdfs/multi-txn-contracts.pdf\nIntro Hey everyone. How is it going? Happy to be going last right now given that I had all these wonderful speakers before me to explain so much of what I am about to tell you. I’m going to talk to you today about structuring multi-transaction contracts in Bitcoin. I think there is a lot of great work that happens in Bitcoin at the script …"},{"uri":"/speakers/dan-robinson/","title":"Dan Robinson","content":""},{"uri":"/blockchain-protocol-analysis-security-engineering/2017/2017-01-26-dan-robinson-ivy/","title":"Ivy: A Declarative Predicate Language for Smart Contracts","content":"Intro Hi I’m Dan Robinson, I’m one of the Product Architects at Chain which is an enterprise blockchain infrastructure company for financial institutions. We have a full stack blockchain protocol and what I am going to talk about is part of that that also can be used for other applications including as I am going to demo here, make it easier to write scripts for Bitcoin Script.\nIntroduction: Two Blockchain Models To start out I want to talk about two blockchain models. 
This isn’t the purpose of …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2017/2017-01-26-russell-oconnor-posts-theorem/","title":"Post’s Theorem and Blockchain Languages: A Short Course in the Theory of Computation","content":"Intro Hi. My name is Russell O’Connor. I am here to talk to you about Post’s theorem, the theory of computation and its applications to blockchain languages. There is a bit of a debate about whether Turing complete languages are appropriate or not for doing computations, whether the DAO fiasco is related to the Turing completeness problem or is it an unrelated issue? If we do go for non-Turing complete languages for our blockchains how much expressiveness are we losing? If we go back into the …"},{"uri":"/lets-talk-bitcoin-podcast/2016-12-25-christopher-jeffrey-consensus-barnacles/","title":"Consensus Barnacles","content":"Location: Let’s Talk Bitcoin podcast (Episode 319)\nSF Bitcoin Devs presentation on Bcoin: https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2016-09-28-christopher-jeffrey-bcoin/\nBreaking Bitcoin presentation on consensus pitfalls: https://diyhpl.us/wiki/transcripts/breaking-bitcoin/2017/2017-09-10-christopher-jeffrey-consensus-pitfalls/\nHistory of Bcoin Adam Levine (AL): JJ thanks for being here today.\nChristopher Jeffrey (JJ): Thanks man. Glad to be here.\nAL: We are going to jump right into …"},{"uri":"/speakers/roger-ver/","title":"Roger Ver","content":""},{"uri":"/misc/2016-12-14-whalepool/","title":"The Great Debate Whalepool","content":"https://twitter.com/kanzure/status/809174512105295872\nHost: Mr. Ver, can you hear this?\nRV: Yes, I can.\nHost: Welcome to Whalepool.\nGavin: Hey Roger, it\u0026amp;rsquo;s Gavin. I heard about this.\nRV: Hey Eric. I see lots of familiar names in here.\nRV: Who was that? I don\u0026amp;rsquo;t know whose voice that was.\nHodl: Mr. Hodl\nRV: Thanks for organizing this. Hello Mr. Hodl.\nHost: Okay, ready to go. Here we have Eric Lombrozo who is a developer who has committed code to Bitcoin Core. We also have Roger Ver. …"},{"uri":"/sf-bitcoin-meetup/2016-11-21-mimblewimble/","title":"Mimblewimble","content":"https://www.youtube.com/watch?v=aHTRlbCaUyM\nhttps://twitter.com/kanzure/status/801990263543648256\nHistory As Denise said, I gave a talk in Milan about mimblewimble about a month ago (slides). This is more or less the same talk, but rebalanced a bit to try to emphasize what I think is important and add some history that has happened in the intervening time. I\u0026amp;rsquo;ll get started.\nMany of you have heard of mimblewimble. It\u0026amp;rsquo;s been in the news. It has a catchy name. It sticks around. I should …"},{"uri":"/misc/2016-adam-back/","title":"Implications of Blockchain","content":"New Context Conference 2016 http://dg717.com/ncc/en/\nWe are using a hashtag to organize the information today. The hashtag we\u0026amp;rsquo;re using is nccblockchain. Ncc stands for new conference context. That\u0026amp;rsquo;s where we are. Blockchain is what we\u0026amp;rsquo;re talkin gabout. Can I ask the people in the back to take a seat? We\u0026amp;rsquo;re about to get ready. Meet the neighbor near your seat. Switch your cellphone off. I am going to demonstrate. The toilets squirt you. It\u0026amp;rsquo;s by design. 
We have …"},{"uri":"/scalingbitcoin/milan-2016/schnorr-signatures/","title":"Schnorr Signatures for Bitcoin","content":"Slides: URL expired\nPieter Wuille presentation on Schnorr at BPASE 2018: https://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nIntro Today I will be talking about Schnorr signatures for Bitcoin. I want to say a few things in advance. One is even though this is a talk about cryptography and in particular new cryptography I don\u0026amp;rsquo;t …"},{"uri":"/speakers/arthur-gervais/","title":"Arthur Gervais","content":""},{"uri":"/scalingbitcoin/milan-2016/bip151-peer-encryption/","title":"BIP151: Peer-to-Peer Encryption and Authentication from the Perspective of End-User","content":"https://twitter.com/kanzure/status/785046960705339392\nGood morning. Thanks for having me here. I would like to talk about the end-user perspective of p2p authentication. I would like to start with a little example of how difficult it is to be aware of end-user issues that they are facing at the front. Who is using a thin client? An \u0026amp;ldquo;SPV\u0026amp;rdquo; wallet on your smartphone? Yeah. Who of you are on iOS? Yeah iOS has a pretty large market share, maybe not in this room. But in general, yes. And …"},{"uri":"/scalingbitcoin/milan-2016/covenants/","title":"Bitcoin Covenants: Opportunities and Challenges","content":"https://twitter.com/kanzure/status/785071789705728000\nWe published this last February at an academic workshop. The work itself has interesting ramifications. My real goal here is to start a conversation and then do a follow-up blog post where we collate feedback from the community. We would like to add this to Bitcoin Core. Covenants.\nThis all started from a very basic and simple observation about the current status of our computing infrastructure. The observation is that the state of our …"},{"uri":"/speakers/emin-gun-sirer/","title":"Emin Gun Sirer","content":""},{"uri":"/scalingbitcoin/milan-2016/segwit-lessons-learned/","title":"Lessons Learned with Segwit in Bitcoin","content":"https://twitter.com/kanzure/status/785034301272420353\nHi. My name is Greg Sanders. I will be giving a talk about segwit in bitcoin lessons learned. I will also give some takeaways that I think are important. A little bit about myself. I work on Elements Project at Blockstream. I was a reviewer on segwit for Bitcoin Core. How do we scale protocol development? How do we scale review and keep things safe?\nSegwit started as an element in elements alpha. It allows for safe chaining of pre-signed …"},{"uri":"/scalingbitcoin/milan-2016/on-the-security-and-performance-of-proof-of-work-blockchains/","title":"On the Security and Performance of Proof of Work Blockchains","content":"https://twitter.com/kanzure/status/785066988532068352\nAs most of you might know, in bitcoin and blockchains, the information is very important. Every node should receive the blocks in the network. Increase latency and risk network partition. If individual peers do not get the information, they could be victims of selfish mining and so on. Multiple blockchains has been proposed, multiple reparameterizations like litecoin, dogecoin, ethereum and others. 
One of the key changes they did was the …"},{"uri":"/scalingbitcoin/milan-2016/client-side-validation/","title":"Progress on Scaling via Client-Side Validation","content":"https://twitter.com/kanzure/status/785121442602029056\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/breaking-the-chain/\nLet\u0026amp;rsquo;s start with what isn\u0026amp;rsquo;t client-side validation. I am going to call this the miner-side approach. Here\u0026amp;rsquo;s some smart contract code. In this example, it\u0026amp;rsquo;s Chronos which is a timestamping contract. What you have here is a contract that stores in ethereum state consensus some hashes as they are received, they are timestamped. And ultimately at …"},{"uri":"/speakers/adlai-chandrasekhar/","title":"Adlai Chandrasekhar","content":""},{"uri":"/scalingbitcoin/milan-2016/flare-routing-in-lightning/","title":"Flare: An Approach to Routing in Lightning Network","content":"https://twitter.com/kanzure/status/784745976170999810\nI will not be as technical as Olaoluwa was. You have nodes that open up payment channels. This scheme is actually developed at the moment in great detail. So how do you find the channels through which to send payments? So we wanted to propose some initial solution to this.\nRouting requirements It should be source routing because we need the privacy. When a user sends a payment, so the decision of what route to choose should be on the sender …"},{"uri":"/scalingbitcoin/milan-2016/fungibility-overview/","title":"Fungibility Overview","content":"https://twitter.com/kanzure/status/784676022318952448\nIntroduction Alright. So let\u0026amp;rsquo;s get started. Okay. Why fungibility? Let\u0026amp;rsquo;s start. What does fungibility mean? Bitcoin is like cash. The hope is that it\u0026amp;rsquo;s for immediate and final payments. To add some nuiance there, you might have to wait for some confirmations. Once you receive a bitcoin, you have a bitcoin and that\u0026amp;rsquo;s final. So even with banks and PayPal, who sometimes shutdown accounts for trivial reasons or sometimes …"},{"uri":"/scalingbitcoin/milan-2016/joinmarket/","title":"Joinmarket","content":"https://twitter.com/kanzure/status/784681036504522752\nslides: https://scalingbitcoin.org/milan2016/presentations/D1%20-%202%20-%20Adlai%20Chandrasekhar.pdf\nFinding a risk-free rate for Bitcoin\nJoinmarket, just to clear up a misconception\u0026amp;hellip; Joinmarket, like bitcoin, it is a protocol. Not a company. The protocol is defined by an open-source reference implementation. There are other versions of the code out there. It\u0026amp;rsquo;s entirely, I guess you could say it\u0026amp;rsquo;s voluntary like bitcoin. …"},{"uri":"/speakers/leen-alshenibr/","title":"Leen AlShenibr","content":""},{"uri":"/scalingbitcoin/milan-2016/lightning/","title":"Lightning work group session","content":"https://twitter.com/kanzure/status/784783992478466049\nYou have a network. Anyone can join the network. You have two points and you want to do route finding between those two points. To be clear, this is a network of channels. It is similar to the internet, but it is not the internet. How do you find such a route? It\u0026amp;rsquo;s easy when you find all the implements of the network. Some of the information changes frequently. 
Payments change the balance of the channels so that other payments become …"},{"uri":"/scalingbitcoin/milan-2016/mimblewimble/","title":"Mimblewimble: Private, Massively-Prunable Blockchains","content":"https://twitter.com/kanzure/status/784696648597405696\nslides: http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble-2016-scaling-bitcoin-slides.pdf\nHi, I am from Blockstream. I am here to talk about Mimblewimble. Mimblewimble is not mine. It\u0026amp;rsquo;s from the dark lord. I am going to start with the history of the paper. It has been in the news a little bit. I will talk about the transaction structure, the block structure, try to explain the trust model and argue that it\u0026amp;rsquo;s almost as strong as …"},{"uri":"/scalingbitcoin/milan-2016/onion-routing-in-lightning/","title":"Onion Routing In Lightning","content":"http://lightning.network/\nhttps://twitter.com/kanzure/status/784742298089299969\nPrivacy-preserving decentralized micropayments\nWe\u0026amp;rsquo;re excited about lightning because the second layer could be an important opportunity to improve privacy and fungibility. Also, there might be timing information in the payments themselves.\nDistributed set of onion routers (OR, cite OG onionrouting paper). Users create circuits with sub-set of nodes. Difficult for oion routers to gain more info than …"},{"uri":"/speakers/pavel-prihodko/","title":"Pavel Prihodko","content":""},{"uri":"/scalingbitcoin/milan-2016/tumblebit/","title":"TumbleBit: An Untrusted Bitcoin-Compatible Anonymous Payment Hub","content":"https://twitter.com/kanzure/status/784686167052709888\nHi. So I\u0026amp;rsquo;m going to be talking about tumblebit. It\u0026amp;rsquo;s our untrusted private bitcoin payment system. Tumblebit provides payments which are private. They provide set anonymity when used as a classic tumbler. They are untrusted. The tumbler can\u0026amp;rsquo;t violate your privacy, and can\u0026amp;rsquo;t steal your coins. It\u0026amp;rsquo;s also scalable. It\u0026amp;rsquo;s compatible and works with today\u0026amp;rsquo;s bitcoin. There are no malleability concerns. …"},{"uri":"/sf-bitcoin-meetup/2016-09-28-christopher-jeffrey-bcoin/","title":"Bcoin","content":"Bcoin repo: https://github.com/bcoin-org/bcoin\nIntro Hey everyone. I’m JJ with Purse. Tonight I want to talk to you about full nodes and full node implementations in general. In particular my project and the project we’ve been working on at Purse which is bcoin. A little history about me. I’ve actually given a presentation here before two years ago at the dev meetup. What I gave my talk on two years ago was actually the process of turning bitcoind into a shared object, into a library so you …"},{"uri":"/speakers/brian-deery/","title":"Brian Deery","content":""},{"uri":"/speakers/chris-odom/","title":"Chris Odom","content":""},{"uri":"/misc/mimblewimble-podcast/","title":"Mimblewimble Podcast","content":" Andrew Poelstra (andytoshi) Pieter Wuille (sipa) Brian Deery Chris Odom Starts at 26min.\nI am going to introduce our guests here in just a minute.\nLaunch of zcash might be delayed in order to allow the code to be analyzed by multiple third-party auditors. Zooko stated, \u0026amp;ldquo;I feel bad that we didn\u0026amp;rsquo;t do a better job of making TheDAO disaster-like problems harder for people\u0026amp;rdquo;.\nOur guests on the line are Andrew Poelstra and Pieter Wuille. Andrew are you there?\nAP: Hey. 
We\u0026amp;rsquo;re …"},{"uri":"/bitcoin-developers-miners-meeting-2016/dan-boneh/","title":"Dan Boneh","content":"A conversation with Dan Boneh\nPart of the challenge was that the Google Maps directions in the email were going down some roads that had construction\u0026amp;hellip;. been here 10 minutes, and some arriving in another 10 minutes. They are building a new biology building. Large campus here. In whatever direction you walk, there are buildings and more buildings. In two years there will be a new biology building, and two years later there will be a new computer science building.\nHow many users at Stanford? …"},{"uri":"/bitcoin-developers-miners-meeting-2016/","title":"Developers-Miners Meeting","content":" Cali 2016 Jul 30, 2016 Mining Ethereum Soft fork activation Dan Boneh Aug 01, 2016 Dan Boneh Research Ethereum Google Tech Talk (2016) Jihan Wu Mining "},{"uri":"/bitcoin-developers-miners-meeting-2016/cali2016/","title":"Cali 2016","content":"California Gathering 2016\nTranscription introduction Some Bitcoin developers and miners gathered together during the end of the July to socialize and become better acquainted with each other. The following discussions were live transcribed by an untrained volunteer with attribution removed as per Chatham House rules. In bitcoin, discussions can move very quickly, which can cause an increase in errors, including semantic errors, when typing in real time. This text was not produced from an audio …"},{"uri":"/sf-bitcoin-meetup/2016-07-18-laolu-osuntokun-lightning-network/","title":"Lightning Network","content":"Lightning Network Deep Dive\nSlides: https://docs.google.com/presentation/d/1OPijcjzadKkbxvGU6VcVnAeSD_DKb7gjnevBcqhPCW0/\nhttps://twitter.com/kanzure/status/1052238088913788928\nIntroduction My name is Laolu Osuntokun and I work with Lightning guys and you know me as maybe Roasbeef from the internet, on GitHub, Twitter, everything else. I’ve been Roasbeef for the past 8 years and I’m still Roasbeef that’s how it works. I’m going to give you a deep dive into the lightning network. I remember like 2 …"},{"uri":"/w3-blockchain-workshop-2016/lightning-network/","title":"Lightning Network","content":"I am going to talk about blockchain scalability using payment channel networks. I am working on Lightning Network. It\u0026amp;rsquo;s currently being built in bitcoin but it can support other things. One of the fundamental problems with blockchain is scalability. How can you have millions of users and rapid transactions?\nFundamentally, the blockchain is shared. You can call it a DLT but it\u0026amp;rsquo;s really a copied ledger. Every user has a copy. 
When you\u0026amp;rsquo;re at a conference and everyone is using the …"},{"uri":"/w3-blockchain-workshop-2016/","title":"W3 Blockchain Workshop","content":" Archival Science Arvind Narayanan Arvind Narayanan Blockchain Hub Kenji Saito Christopher Allen Christopher Allen Day 2 Groups Deterministic Signatures Group Threshold signature Dex: Deterministic Predicate Expressions for Smarter Signatures Peter Todd Research Ethcore Ethereum Groups Identity Intro to W3 Blockchain Workshop Jun 29, 2016 Doug Schepers Intro to W3C standards Wendy Seltzer Ipfs Lightning Network Jun 30, 2016 Tadge Dryja Lightning Matsuo Shin\u0026amp;#39;ichiro Matsuo Security Research …"},{"uri":"/speakers/doug-schepers/","title":"Doug Schepers","content":""},{"uri":"/w3-blockchain-workshop-2016/intro/","title":"Intro to W3 Blockchain Workshop","content":"Blockchains and standards\nW3C Workshop\n29-30 June 2016\nCambridge, Massachusetts\nDoug Schepers schepers@w3.org\n@shepazu\nhttps://www.w3.org/2016/04/blockchain-workshop/\nirc.w3.org #blockchain\nhttps://etherpad.w3ctag.org/p/blockchain\nMy name is Doug. I am ostensibly the organizer. Along with my chairs, Neha, where are you? Please stand up so that everyone can embarras you. She is with DCI here at MIT, the Digital Currency Initiative. Daza, also with MIT, he is the guy who grbabed the space for us, …"},{"uri":"/sf-bitcoin-meetup/2016-04-11-lightning-network-as-a-directed-graph-single-funded-channel-network-topology/","title":"Lightning Network As A Directed Graph Single Funded Channel Network Topology","content":"http://lightning.network/\nslides: http://diyhpl.us/~bryan/papers2/bitcoin/Lightning%20Network%20as%20Directed%20Graph%20Single-Funded%20Channel%20Topology%20-%20Tadge%20Dryja%20-%202016-04-11.pdf\nOkay. Hello everyone. Yes, okay, great. Yes. Yeah, I\u0026amp;rsquo;m working on Lightning Network. Bitcoin is great and fun to work on.\nBasic super quick summary of Lightning I am going to re-cap lightning network real quick. The basic idea is to have channels between two nodes. A channel, from the perspective …"},{"uri":"/misc/adam3us-bitcoin-scaling-tradeoffs/","title":"Bitcoin Scaling Tradeoffs","content":"Bitcoin scaling tradeoffs with Adam Back (adam3us) at Paralelni Polis\nInstitute of Cryptoanarchy http://www.paralelnipolis.cz/\nslides: http://www.slideshare.net/paralelnipolis/bitcoin-scaling-tradeoffs-with-adam-back\ndescription: \u0026amp;ldquo;In this talk, Adam Back overviews the Bitcoin scaling and discuses his personal view about its tradeoffs and future development.\u0026amp;rdquo;\nIntro fluff And now I would like to introduce you to a great guest from the U.K. or Malta, Adam Back. Adam is long-time …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/pindar-wong/","title":"Forked to Death: What Internet governance can learn from Bitcoin","content":"So, the last talk today is going to be from Pindar Wong. He\u0026amp;rsquo;s been involved with Internet in Hong Kong. He also helped organize the Scaling Bitcoin workshops. It\u0026amp;rsquo;s my pleasure to have him speaking today.\nOkay. Thank you very much for staying until the bitter end. My topic today is forked to death, what Internet governance can learn from Bitcoin. I do mean this, and I would like to explain why. This is a fork. 
I am holding a fork.\nIn Internet governance, we don\u0026amp;rsquo;t talk about …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/","title":"Mit Bitcoin Expo 2016","content":"http://web.archive.org/web/20160403090248/http://mitbitcoinexpo.org:80/\nCory Fields Cory Fields Forked to Death: What Internet governance can learn from Bitcoin Mar 05, 2016 Pindar Wong Fraud Proofs and Modularized Validation Peter Todd Lightweight client Improvements to Bitcoin Mark Friedenbach, Jonas Schnelli, Andrew Poelstra, Joseph Poon Research Lightning Wallet Lessons For Bitcoin From 150 Years Of Decentralization Arvind Narayanan Research Linq Alex Zinder R3 Kathleen Breitman Rootstock …"},{"uri":"/speakers/pindar-wong/","title":"Pindar Wong","content":""},{"uri":"/speakers/joseph-poon/","title":"Joseph Poon","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/network-topologies-and-their-scalability-implications-on-decentralized-off-chain-networks/","title":"Network Topologies and their scalability implications on decentralized off-chain networks","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/1_layer2_3_poon.pdf\nLN transactions are real Bitcoin transactions. These are real zero-confirmation transactions. What\u0026amp;rsquo;s interesting is that the original idea behind Bitcoin was solving the double-spending problem. When you move off-chain and off-block, you sort of have this opposite situation where you use double-spends to your advantage, spending from the same output, except you have this global consensus of block but you …"},{"uri":"/tags/bandwidth-reduction/","title":"Bandwidth Reduction","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/fungibility-and-scalability/","title":"Fungibility And Scalability","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY1/1_overviews_2_back.pdf\nI think everyone knows what fungibility is, it\u0026amp;rsquo;s when coins are equal and interchangeable. It\u0026amp;rsquo;s an important property for an electronic cash system. All bitcoin users would have a problem if fungability goes away.\nBitcoin has fungibility without privacy. Previous electronic cash systems had fungibility and cryptographic blinding. In Bitcoin, there\u0026amp;rsquo;s decentralized mining that somebody will …"},{"uri":"/scalingbitcoin/hong-kong-2015/invertible-bloom-lookup-tables-and-weak-block-propagation-performance/","title":"Invertible Bloom Lookup Tables (IBLT) and weak block propagation performance","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY1/3_block_propagation_1_rosenbaum.pdf\nBefore we get started, for privacy reasons, and for reasons of zero attribution, please do not take pictures of attendees. Do not have pictures of attendees in the photographs. Taking pictures of the stage and slides are OK, since they are being broadcasted. No photographs of any attendees, please let them remain anonymous and private. For Chinese speakers, please ask in Chinese and I will use …"},{"uri":"/greg-maxwell/2015-11-09-gmaxwell-mining-and-block-size-etc/","title":"Mining And Block Size Etc","content":"Mining, block size, etc. So uh I am really excited to announce or introduce our next speaker, Greg Maxwell. Greg, well, there are people who talk and there\u0026amp;rsquo;s people who do, and Greg is definitely a doer. 
He is definitely one of the most accomplished if not most helpful, one of the most active people, not just in terms of commits that we see, but also things we don\u0026amp;rsquo;t see, like input regarding bugs and to other developers and being a great voice in this industry. Also, he is with …"},{"uri":"/speakers/andrew-miller/","title":"Andrew Miller","content":""},{"uri":"/categories/developer-tools/","title":"developer-tools","content":""},{"uri":"/scalingbitcoin/montreal-2015/block-synchronization-time/","title":"Initial Block Synchronization is Quadratic Time Complexity","content":"slides: http://strateman.ninja/initial_block_synchronization.pdf\nBlock synchronization is how nodes join the market. New nodes download the blockchain and validate everything. This is how the security of the network works. Who is responsible for doing this? Everyone must be doing a full block synchronization. If you are not doing this, you\u0026amp;rsquo;re not on the bitcoin network. As a simple hypothesis, the block size growth is related to initial block synchronization as a simple function of the …"},{"uri":"/speakers/patrick-strateman/","title":"Patrick Strateman","content":""},{"uri":"/scalingbitcoin/montreal-2015/sharding-the-blockchain/","title":"Sharding the Blockchain","content":"I am going to talk about blockchain sharding. This is one of the scaling solutions that I like the most. I have done the most work on it. In case you don\u0026amp;rsquo;t know who I am, I am a researcher mostly with the ethereum project. Basically the basics of sharding is that in non-sharding, every node does every validation. In a sharding solution, nodes hold a subset of the state, and a subset of the blockchain. By state we mean UTXOs. Instead of everyone redundantly doing the same work, we\u0026amp;rsquo;re …"},{"uri":"/scalingbitcoin/montreal-2015/systematizing-knowledge/","title":"Systematizing Knowledge","content":"There should be a SNARK wiki. Any website that is better than someone saying \u0026amp;ldquo;you know, check the June July 2014 bitcoin-wizard logs\u0026amp;rdquo;. We need to research gmaxwell scalability. We should probably fix that.\nLet\u0026amp;rsquo;s talk about what some of the goals are that we might get out of this. We have some different ideas for approaches to systematizing knowledge. There are at least three camps here. I think Bryan\u0026amp;rsquo;s paper repositories.. whenever you search for papers on Google Scholar, …"},{"uri":"/speakers/vlad-zamfir/","title":"Vlad Zamfir","content":""},{"uri":"/scalingbitcoin/montreal-2015/transaction-fee-estimation/","title":"How Wallets Can Handle Real Transaction Fees","content":"You should have a market at this point, where some number of transactions never happen because they aren\u0026amp;rsquo;t willing to pay a fee. So you get some equilibrium of price going on. I proactively want to see this happen because we don\u0026amp;rsquo;t know what fees should be until we run the experiment and see what prices end up being. For this to happen, wallets have to do price setting in some way. For the most part they presently don\u0026amp;rsquo;t. I am going to talk about how they can be made to do that. …"},{"uri":"/speakers/miles-carlsten/","title":"Miles Carlsten","content":""},{"uri":"/scalingbitcoin/montreal-2015/overview-of-security-concerns/","title":"Overview of Security Concerns","content":"I am going to talk about scale-related security issues. Scale-related security issues. I said scale, not scalability. We are not talking about scalability. 
We\u0026amp;rsquo;re talking about changing numbers, and what are the implications, what attacks are easier or harder by changing the block size number. What are we trying to accomplish? What is the system meant to be? Nobody really agrees of course. What\u0026amp;rsquo;s interesting is that as much as we disagree about what was meant to be, there are a lot of …"},{"uri":"/scalingbitcoin/montreal-2015/privacy-preserving-smart-contracts/","title":"Privacy-Preserving Smart Contracts","content":"ranjit@csail.mit.edu\nhttp://people.csail.mit.edu/ranjit/\nWe have been looking at how much we can support or what limitations we might encounter. There are also efficiency issues and privacy guarantees. And then there are some potential relaxations that will help with removing the fundamental limitations and express the power and make smart contracts more useful.\nThe point of this talk is off-chain transactions and secure computation. I will also highlight the relationship of secure computation …"},{"uri":"/speakers/ranjit-kumaresan/","title":"Ranjit Kumaresan","content":""},{"uri":"/scalingbitcoin/montreal-2015/security-of-diminishing-block-subsidy/","title":"Security Of Diminishing Block Subsidy","content":"Miners no longer incentivized to mine all the time. The future of mining hardware. Impact on the bitcoin ecosystem. Implications and assumptions of the model.\nAre minting rewards and transaction fees the same? Transaction fees are time-dependent. A miner can only claim transaction fees as transactions are added to their block. As transaction fees start to dominate the block reward, they become time-dependent. We are modeling this block reward as a linear function of time, we have this factor B …"},{"uri":"/misc/2015-09-07-epicenter-bitcoin-adam3us-scalability/","title":"Why Bitcoin Needs A Measured Approach To Scaling","content":"BC: Adam Back is here for the second time on this show. Some of you will be familiar that Adam Back is the inventor of proof-of-work or hashcash as it was then. He was one of the few people cited in the original Bitcoin whitepaper. More recently he has been involved in sidechains and Blockstream where he is a founder. He has been vocal in the bitcoin block size debate and how it should be done or how it should be done. We\u0026amp;rsquo;re excited to have him on. We\u0026amp;rsquo;ve had perhaps too much of the …"},{"uri":"/sf-bitcoin-meetup/2015-08-24-pieter-wuille-key-tree-signatures/","title":"Key Tree Signatures","content":"Slides: https://prezi.com/vsj95ns4ucu3/key-tree-signatures/\nBlog post: https://blockstream.com/2015/08/24/en-treesignatures/\nIntro For those who don’t know me I’m Pieter Wuille. I have been working on Bitcoin Core for a while but this is work I have been doing at Blockstream. Today I will be talking about Key Tree Signatures which is what I have been working on these past few weeks. It is an improvement to the multisig schemes that we want to support. This is an outline of my talk. 
I’ll first …"},{"uri":"/speakers/daniele-micciancio/","title":"Daniele Micciancio","content":""},{"uri":"/simons-institute/history-of-lattice-based-cryptography/","title":"History Of Lattice Based Cryptography","content":"Historical talk on lattice-based cryptography\nDaniele Micciancio, UC San Diego\nhttps://www.youtube.com/watch?v=4KVJbEOqB_Q\u0026amp;amp;list=PLgO7JBj821uGZTXEXBLckChu70kl7Celh\u0026amp;amp;index=22\nhttps://twitter.com/kanzure/status/772649346454040577\nLattice cryptography - from computational complexity to fully homomorphic encryption\nIntroduction 1 So this first of all thanks for coming back from lunch, it\u0026amp;rsquo;s always an achievement. I want to say that this is a special talk within the workshop here at Simons …"},{"uri":"/simons-institute/","title":"Simons Institute","content":" A Wishlist For Verifiable Computation Michael Walfish Research History Of Lattice Based Cryptography Jul 15, 2015 Daniele Micciancio Cryptography Pairing Cryptography Jul 14, 2015 Dan Boneh Snarks And Their Practical Applications Eran Tromer Research Todo Cryptography Ux Zero Knowledge Probabilistic Proof Systems Shafi Goldwasser "},{"uri":"/simons-institute/pairing-cryptography/","title":"Pairing Cryptography","content":"Dan Boneh (see also 1)\nslides: http://crypto.biu.ac.il/sites/default/files/3rd_BIU_Winter_School/Boneh-basics-of-pairings.pdf\nhttps://twitter.com/kanzure/status/772287326999293952\noriginal video: https://video.simons.berkeley.edu/2015/other-events/historical-papers/05-Dan-Boneh.mp4\noriginal video sha256 hash: 1351217725741cd3161de95905da7477b9966e55a6c61686f7c88ba5a1ca0414\nOkay, so um I\u0026amp;rsquo;m very happy to introduce another speaker in this historical papers series seminar seminar series which …"},{"uri":"/greg-maxwell/2015-06-08-gmaxwell-sidechains-elements/","title":"Sidechains Elements","content":"Bringing New Elements to Bitcoin with Sidechains\nSF Bitcoin Devs Meetup\nGregory Maxwell DE47 BC9E 6D2D A6B0 2DC6 10B1 AC85 9362 B041 3BFA\nslides: https://people.xiph.org/~greg/blockstream.gmaxwell.elements.talk.060815.pdf\nhttps://github.com/ElementsProject/elements/\nHello, I am Greg Maxwell, one of the developers of the Bitcoin system and its reference software. Since 2011, I have worked on the system facing many interesting challenges and developing many solutions to many problems, adapting and …"},{"uri":"/sf-bitcoin-meetup/2015-05-26-lightning-network/","title":"How Lightning.network can offer a solution for Bitcoin's scalability problems","content":"http://lightning.network/\nslides: https://lightning.network/lightning-network-presentation-sfbitcoinsocial-2015-05-26.pdf\nGreat to have you guys. They are going to talk about bitcoin\u0026amp;rsquo;s scalability problems. There they are.\nOkay, hi. So yeah, I titled it solutions because that sounds more positive than talking about problems. Joseph will go after that and talk about what needs to change and the economics and things like that. Okay, so I\u0026amp;rsquo;ll start.\nBitcoin scalability. The thing that is …"},{"uri":"/greg-maxwell/2015-04-29-gmaxwell-bitcoin-selection-cryptography/","title":"Bitcoin Selection Cryptography","content":" slides: https://people.xiph.org/~greg/gmaxwell_sfbitcoin_2015_04_20.pdf A deep dive with Bitcoin Core developer Greg Maxwell\nThe blueberry muffins are a lie. But instead I got some other things to present about. My name is Greg Maxwell. I am one of the committers to Bitcoin Core. I am one of the five people with commit access. 
I have been working on Bitcoin since 2011. Came into Bitcoin through assorted paths as has everyone else.\nBack in around maybe 2004 I was involved very early in the …"},{"uri":"/speakers/madars-virza/","title":"Madars Virza","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/","title":"Mit Bitcoin Expo 2015","content":"http://web.archive.org/web/20150604032122/http://mitbitcoinexpo.org/\nArmory Proof Of Payment Andy Ofiesh Proof of payment Bitcoin Financing And Trading Joshua Lim, Juthica Chou, Bobby Cho Bitcoin Regulation Landscape Elizabeth Stark, Jerry Brito, Constance Choi Regulation Decentralization Through Game Theory Andreas Antonopoulos Mining Human Side Trust Workshop Matt Weiss, Joe Gerber Internet Of Value Jeremy Allaire Keynote Charlie Lee Keynote Gavin Andresen Gavin Andresen Open source - Beyond …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/zerocash-and-zero-knowledge-succint-arguments-of-knowledge-libsnark/","title":"Zerocash: Zero knowledge proofs and SNARKs and libsnark","content":"I am going to tell you about zerocash and our approach of addressing Bitcoin\u0026amp;rsquo;s privacy problem. All of this is joint work with Techion, ETCH Zurich, and MIT, and Matt Green, .. and .. and..\nSo first to get us going, I want to talk a little bit about two of my ideas around Bitcoin. There are two questions that every decentralized cryptocurrency must answer. First of them is where does the money come from. The second question is, if money is digital, if I have my digital $1 bill, can I copy …"},{"uri":"/sf-bitcoin-meetup/2015-02-23-scaling-bitcoin-to-billions-of-transactions-per-day/","title":"Scaling Bitcoin To Billions Of Transactions Per Day","content":"http://lightning.network/\nJoseph Poon joseph@lightning.network Thaddeus Dryja rx@awsomnet.org We\u0026amp;rsquo;re doing a presentation on the lightning network and micropayments. Right now bitcoin faces some interesting problems, like transactions aren\u0026amp;rsquo;t instant, and micropayments don\u0026amp;rsquo;t work. The transaction fees are 1/10th of a cent or higher depending on the exchange rate. And, \u0026amp;ldquo;bitcoin doesn\u0026amp;rsquo;t scale\u0026amp;rdquo; especially if you have a lot of micropayments. One megabyte blocks? …"},{"uri":"/misc/bitcoin-sidechains-unchained-epicenter-adam3us-gmaxwell/","title":"Bitcoin Sidechains - Unchained Epicenter","content":"EB65 – Adam Back \u0026amp;amp; Greg Maxwell: Sidechains Unchained\npeople:\nSebastien Couture Brian Fabian Crain Adam Back (adam3us) Greg Maxwell (gmaxwell) Brian: We are here today we have Adam Back and Greg Maxwell, to anyone who is following bitcoin and bitcoin\u0026amp;rsquo;s development, you will have probably heard of these two gentlemen. I was fortunate enough that the first time that I went to a bitcoin conference at the beginning of my involvement in this space, it was in Amsterdam 2013 and I somehow …"},{"uri":"/greg-maxwell/2015-01-08-libsecp256k1-testing/","title":"libsecp256k1 testing","content":"Today OpenSSL de-embargoed CVE-2014-3570 \u0026amp;ldquo;Bignum squaring may produce incorrect results\u0026amp;rdquo;. That particular security advisory is not a concern for Bitcoin users, but it allows me to explain some of the context behind a slightly cryptic statement I made in the release notes for the upcoming Bitcoin Core 0.10: “we have reason to believe that libsecp256k1 is better tested and more thoroughly reviewed than the implementation in OpenSSL”. 
Part of that “reason to believe” was our discovery …"},{"uri":"/greg-maxwell/2015-01-08-openssl-bug/","title":"OpenSSL bug discovery","content":"I contributed to the discovery and analysis of CVE-2014-3570 \u0026amp;ldquo;Bignum squaring may produce incorrect results\u0026amp;rdquo;. In this case, the issue was that one of the carry propagation conditions was missed. The bug was discovered as part of the development of libsecp256k1, a high performance (and hopefully high quality: correct, complete, side-channel resistant) implementation of the cryptographic operators used by Bitcoin, developed primarily by Bitcoin Core developer Pieter Wuille along with a …"},{"uri":"/misc/nydfs-bitlicense-lawsky-update/","title":"NY DFS Bitlicense Lawsky Update","content":"NY DFS BitLicense update 2014-12-18\nhttp://web.archive.org/web/20150511110453/http://bipartisanpolicy.org/events/payment-policy-in-the-21st-century-the-promise-of-innovation-and-the-challenge-of-regulation/\nAnd that\u0026amp;rsquo;s a serious problem that we all need to address with a hightened sense of urgency and focus. Let me start with the BitLicense and virtual currencies. These came on our radar screen last year at DFS because like all of the other states we regulate money transmitters like …"},{"uri":"/andreas-antonopoulos/2014-10-08-andreas-antonopolous-canada-senate-bitcoin/","title":"Canada Senate Bitcoin","content":"This is a transcript of submitted evidence to the Parliament of Canada\u0026amp;rsquo;s Senate Committee on Banking, Trade and Commerce. Here is a video. The opening remarks can be found here.\nAnother transcript may appear here but who knows.\nAn additional transcript can be found here.\nfinal report: http://www.parl.gc.ca/Content/SEN/Committee/412/banc/rep/rep12jun15-e.pdf\nPrepared remarks. My experience is primarily in information security and network architecture. I have a Master’s degree in networks …"},{"uri":"/misc/bitcoin-adam3us-fungibility-privacy/","title":"Fungibility and Privacy","content":"First of all I was going to explain what we mean by fungibility before bitcoin and ecash. It\u0026amp;rsquo;s an old legal concept in fact, about paper currency. It\u0026amp;rsquo;s the idea that a one ten dollar note is the same as any other ten dollar note. If you receive a note that was involved in a theft, 10 transactions ago, and the police investigate the theft, they have no right to remove the ten dollar note from your pocket. It\u0026amp;rsquo;s not your fault that it was involved in a previous crime. And so bank …"},{"uri":"/style/","title":"","content":"Bitcoin Transcripts Style Guide This is based on the Optech Style Guide.\nVocabulary Proper nouns The first letter of a proper noun is always capitalized.\nBitcoin (the network) MuSig (the pubkey and signature aggregation scheme. Don\u0026amp;rsquo;t refer to Musig, muSig or musig) Script (when referring to the Bitcoin scripting language) use script (no capitalization) when referring to an individual script True/False when referring to boolean arguments in plain text (eg \u0026amp;ldquo;set foo to True\u0026amp;rdquo;). 
use …"},{"uri":"/rebooting-web-of-trust/2019-prague/","title":"2019 Prague","content":" Abstract Groups Group Updates.Md Research Groups Security Intro Self Sovereign Identity Ideology And Architecture Christopher Allen Shamir Secret Sharing Swashbuckling Safety Training Kim Hamilton Duffy Topics Weak Signals "},{"uri":"/scalingbitcoin/hong-kong-2015/a-bevy-of-block-size-proposals-bip100-bip102-and-more/","title":"A Bevy Of Block Size Proposals Bip100 Bip102 And More","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_1_garzik.pdf\nslides: http://www.slideshare.net/jgarzik/a-bevy-of-block-size-proposals-scaling-bitcoin-hk-2015\nAlternative video: https://www.youtube.com/watch?v=37LiYOOevqs\u0026amp;amp;t=1h16m6s\nWe\u0026amp;rsquo;re going to be talking about every single block size proposal. This is going to be the least technical presentation at the conference. I am not going to go into the proposals htemselves. Changing the block size …"},{"uri":"/scalingbitcoin/hong-kong-2015/a-flexible-limit-trading-subsidy-for-larger-blocks/","title":"A Flexible Limit Trading Subsidy For Larger Blocks","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_2_friedenbach.pdf\nI really want to thank jgarzik for that interesting talk. Who picks what block size? My talk is going to be about a category of dynamic block size proposals that attach the block size limit to the fee in such a way that as there is more demand for block size, through users that want to pay fees, the block size will increase or decrease as necessary. The need for a dynamic block size is so …"},{"uri":"/scalingbitcoin/tel-aviv-2019/scaling-oblivious-read-write/","title":"A Tale of Two Trees: One Writes, and Other Reads, Scaling Oblivious Accesses to Large-Scale Blockchains","content":"Introduction We are trying to get optimized oblivious accesses to large-scale blockchains. This is collaborative work with our colleagues.\nMotivation As we all know, bitfcoin data has become too large to store in resource-constrained devices. It\u0026amp;rsquo;s like 240 GB. The current solution today is bip37 + Nakamoto\u0026amp;rsquo;s idea for simplified payment verification (SPV) clients which don\u0026amp;rsquo;t run the bitcoin network rules. Resource-constrained clients (thin clients) have to rely on other …"},{"uri":"/simons-institute/a-wishlist-for-verifiable-computation/","title":"A Wishlist For Verifiable Computation","content":"A wishlist for verifiable computation: An applied computer science perspective\nhttp://www.pepper-project.org/ https://www.youtube.com/watch?v=Z4jzA6ts2j4 http://simons.berkeley.edu/talks/mike-walfish-2015-06-10 slides: http://www.pepper-project.org/surveywishlist.pptx slides (again): http://simons.berkeley.edu/sites/default/files/docs/3316/surveywishlist2.pptx more: http://www.pepper-project.org/talks.htm Okay, thanks. I want to begin by thanking the organizers, especially the tall guy for …"},{"uri":"/rebooting-web-of-trust/2019-prague/abstract-groups/","title":"Abstract Groups","content":"Blockcert As the standards around verifiable credentials are starting to take form, different flavours of ‘verifiable credentials’ like data structures need to make necessary changes in order to leverage on the rulesets outlined and constantly reviewed by a knowledgeable community like RWOT and W3C. 
The purpose of this paper is to identify all of the changes needed to comply with the Verifiable Credentials \u0026amp;amp; Decentralized Identifiers standards.\nCooperation beats aggregation One important …"},{"uri":"/tags/accidental-confiscation/","title":"Accidental confiscation","content":""},{"uri":"/tags/acc/","title":"Accountable Computing Contracts","content":""},{"uri":"/bit-block-boom/2019/accumulating-bitcoin/","title":"Accumulating Bitcoin","content":"Introduction Today I am going to be preaching to the choir I think. Hopefully I will be reiterating and reinforcing some points that you already know but maybe forgot to bring up with your nocoiner friends, or learning something new. I hope there\u0026amp;rsquo;s something for everyone in here.\nQ: Should I buy bitcoin?\nA: Yes.\nQ: Are you going to troll a journalist?\nA: Today? Already done.\nMisconceptions One of the biggest misconceptions I heard when I first learned about bitcoin was that bitcoin was …"},{"uri":"/stanford-blockchain-conference/2019/accumulators/","title":"Accumulators for blockchains","content":"https://twitter.com/kanzure/status/1090748293234094082\nhttps://twitter.com/kanzure/status/1090741715059695617\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/accumulators/\npaper: https://eprint.iacr.org/2018/1188\nIntroduction I am going to be talking about batching techniques for accumulators with applications to interactive oracle proofs and blockchains. I\u0026amp;rsquo;ll maybe also talk about how to make these proofs even smaller. The main thing that we are going to focus on today is the …"},{"uri":"/speakers/adam-ludwin/","title":"Adam Ludwin","content":""},{"uri":"/tags/addr-v2/","title":"Addr v2","content":""},{"uri":"/speakers/aki-balogh/","title":"Aki Balogh","content":""},{"uri":"/speakers/alan-reiner/","title":"Alan Reiner","content":""},{"uri":"/speakers/alena-vranova/","title":"Alena Vranova","content":""},{"uri":"/speakers/alessandro-chiesa/","title":"Alessandro Chiesa","content":""},{"uri":"/speakers/alex-petrov/","title":"Alex Petrov","content":""},{"uri":"/speakers/alex-zinder/","title":"Alex Zinder","content":""},{"uri":"/speakers/alexander-chepurnoy/","title":"Alexander Chepurnoy","content":""},{"uri":"/speakers/alexander-zaidelson/","title":"Alexander Zaidelson","content":""},{"uri":"/speakers/alexei-zamyatin/","title":"Alexei Zamyatin","content":""},{"uri":"/speakers/alicia-bendhan/","title":"Alicia Bendhan","content":""},{"uri":"/cryptoeconomic-systems/2019/all-about-decentralized-trust/","title":"All About Decentralized Trust","content":"\u0026amp;ndash; Disclaimer \u0026amp;ndash;\nThese are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.\nI sometimes add annotations to the transcription text. These will always be denoted by a standard …"},{"uri":"/tags/altcoin/","title":"altcoin","content":""},{"uri":"/scalingbitcoin/montreal-2015/alternatives-to-block-size-as-aggregate-resource-limits/","title":"Alternatives To Block Size As Aggregate Resource Limits","content":"I am talking about block size, but not any particular proposal. Why do we have the block size limit and is it a good thing in the first place? The block size exists as a denial-of-service limiter. 
It limits the amount of resources that a node validating the blockchain can exhaust when validating a block. We want to do this because in the early days of bitcoin there are often certain kinds of transactions there are ways to slow down a validator by using a non-standard transaction or filling up a …"},{"uri":"/scalingbitcoin/montreal-2015/amiko-pay/","title":"Amiko Pay","content":"Amiko Pay aims to become an implementation of the lightning network. So that previous presentation was a great setup. There\u0026amp;rsquo;s a little bit more to Amiko Pay. If you look through the basic design of the lightning network, there\u0026amp;rsquo;s a network of payment channels. There have been several variations of this idea. Lightning Network happens to be the best so far. What Amiko Pay aims to do is to focus on the nodes, and do the routing between nodes. The other big problem of Amiko Pay making …"},{"uri":"/speakers/andy-ofiesh/","title":"Andy Ofiesh","content":""},{"uri":"/tags/annex/","title":"Annex","content":""},{"uri":"/stanford-blockchain-conference/2020/arbitrum-v2/","title":"Arbitrum 2.0: Faster Off-Chain Contracts with On-Chain Security","content":"https://twitter.com/kanzure/status/1230191734375628802\nIntroduction Thank you, good morning everybody. I\u0026amp;rsquo;m going to talk about our latest version of our Abitrum system. The first version was discussed in a paper in 2018. Since then, there\u0026amp;rsquo;s a lot of advances in my technology and a real running system has been made. This is the first working roll-up system for general smart contracts.\nFirst I will set the scene to provide some context on what we\u0026amp;rsquo;re talking about and how what …"},{"uri":"/w3-blockchain-workshop-2016/archival-science/","title":"Archival Science","content":"Blockchain is a memory transfer sytsem. These memories are moving through space and time. This is something that I recognized. As a type of record, a ledger is a type of record traditionally. There are a number of international standards. One of them is ISO 15489, international records amangemetn. You will see this idea of memory of transactions as evidence. You will see something like \u0026amp;ldquo;proofs on the blockchain\u0026amp;rdquo;. This is information received and created and maintained. In pursuance …"},{"uri":"/speakers/ari-juels/","title":"Ari Juels","content":""},{"uri":"/speakers/ariel-gabizon/","title":"Ariel Gabizon","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/armory-proof-of-payment/","title":"Armory Proof Of Payment","content":"I am going to talk about proof of payment. This is simple. You can do this. That you couldn\u0026amp;rsquo;t do before 2008. We\u0026amp;rsquo;ll see if this works. I hear it might work. That\u0026amp;rsquo;s okay, I\u0026amp;rsquo;ll pull this up here. I have ruined everything. Just jam that in there. Here we go.\nSo uh, I said who I am. Bitcoin Armory. I am here to talk about proof of payment. Basically, let\u0026amp;rsquo;s talk about Armory first real quick. First released in 2011. Open-source Bitcoin security software. It\u0026amp;rsquo;s been …"},{"uri":"/speakers/arvind-narayanan/","title":"Arvind Narayanan","content":""},{"uri":"/w3-blockchain-workshop-2016/arvind-narayanan/","title":"Arvind Narayanan","content":"I would like to introduce Arvind. He is a professor at Princeton. He has talked about this in a number of different forums.\nHi everyone. My name is Arvind. This is turning out to be one of the more unusual and interesting events that I have been to. 
Someone at my table called the first session a quasi-religious experience. Not sure whether that was a good thing or not. Joking aside, my favorite thing about this is that the position statements were available on the website. I found them …"},{"uri":"/tags/asicboost/","title":"ASICBoost","content":""},{"uri":"/stanford-blockchain-conference/2019/asics/","title":"Asics","content":"ASIC design for mining\nhttps://twitter.com/kanzure/status/1091056727464697856\nIntroduction My name is David Vorick. I am lead developer of Sia, a decentralized cloud storage solution. I come from the software side of the world, but I got into hardware. I think from the perspective of cryptocurrency developers. I am also CEO of Obelisk, which is a cryptocurrency ASIC miner manufacturing company. I was asked today to talk about cryptocurrency ASICs. This is a really broad topic and the format that …"},{"uri":"/speakers/assimakis-kattis/","title":"Assimakis Kattis","content":""},{"uri":"/tags/async-payments/","title":"Async payments","content":""},{"uri":"/scalingbitcoin/stanford-2017/atomically-trading-with-roger-gambling-on-the-success-of-a-hard-fork/","title":"Atomically Trading With Roger Gambling On The Success Of A Hard Fork","content":"Atomically Trading with Roger: Gambling on the success of a hard-fork\npaper: http://homepages.cs.ncl.ac.uk/patrick.mc-corry/atomically-trading-roger.pdf\nThis is joint work with Patrick McCorry, Andrew Miller and myslef. Patrick was originally going to give this talk but he was unable to make it. I\u0026amp;rsquo;ll give the talk with his slides. I want to acknowledge Roger Ver for permission to use his name in the title of the talk. We asked permission.\nI will only be talking about some of the …"},{"uri":"/stanford-blockchain-conference/2019/aurora-transparent-succinct-arguments-r1cs/","title":"Aurora: Transparent Succinct Arguments for R1CS","content":"https://twitter.com/kanzure/status/1090741190100545536\nIntroduction Hi. I am Alessandro Chiesa, a faculty member at UC Berkeley. I work on cryptography. I\u0026amp;rsquo;m also a chief scientist at Starkware. I was a founder of zcash. Today I will like to tell you about recent work in cryptographic proof systems. This is joint work with people at Starkware, UC Berkeley, MIT Media Lab, and others. In the prior talk, we talked about scalability. But this current talk is more about privacy by using …"},{"uri":"/speakers/austin-hill/","title":"Austin Hill","content":""},{"uri":"/stanford-blockchain-conference/2019/vulnerability-detection/","title":"Automatic Detection of Vulnerabilities in Smart Contracts","content":"Introduction If you find a bug in a smart contract, it\u0026amp;rsquo;s really hard to mitigate that bug. In formal verification, we really look for bugs. We can identify subtle bugs, like reentrancy attacks, ownership violation, incorrect transactions, bugs due to updated code. We also want to assure the absence of bugs.\nDevice drivers I come from the domain of Windows device drivers. These are drivers produced by third parties, and it integrates with operating system code. Bugs in the device drivers …"},{"uri":"/scalingbitcoin/tel-aviv-2019/backpackers/","title":"Backpackers","content":"Backpackers: A new paradigm for secure and high-performance blockchain\nThang N. 
Dinh\n"},{"uri":"/speakers/baker-marquart/","title":"Baker Marquart","content":""},{"uri":"/speakers/balaji-srinivasan/","title":"Balaji Srinivasan","content":""},{"uri":"/speakers/barry-silbert/","title":"Barry Silbert","content":""},{"uri":"/speakers/bart-suichies/","title":"Bart Suichies","content":""},{"uri":"/speakers/ben-maurer/","title":"Ben Maurer","content":""},{"uri":"/speakers/benjamin-chan/","title":"Benjamin Chan","content":""},{"uri":"/speakers/benjamin-lawsky/","title":"Benjamin Lawsky","content":""},{"uri":"/baltic-honeybadger/2018/beyond-bitcoin-decentralized-collaboration/","title":"Beyond Bitcoin Decentralized Collaboration","content":"Beyond bitcoin: Decentralized collaboration\nhttps://twitter.com/kanzure/status/1043432684591230976\nhttp://sit.fyi/\nHi everybody. Today I am going to talk about something different than my usual. I am going to talk more about how we can compute and how we collaborate. I\u0026amp;rsquo;ll start with an introduction to the history of how things happened and why are things the way they are today. Many of you probably use cloud SaaS applications. It\u0026amp;rsquo;s often touted as a new thing, something happening in …"},{"uri":"/scalingbitcoin/hong-kong-2015/bip101-block-propagation-data-from-testnet/","title":"Bip101 Block Propagation Data From Testnet","content":"I am a bitcoin miner. I am a C++ programmer and a scientist. I will be going pretty fast. I have a lot of stuff to cover. Bare with me.\nMy perspective on this is that scaling bitcoin is an engineer problem. My favorite proposal for how to scale bitcoin is bip101. It\u0026amp;rsquo;s over a 20-year time span. This will give us time to implement fixes to get Bitcoin to a large level. A hard-fork to increase the block size limit is hard, and soft-forks make it easier to decrease, that is to increase it is, …"},{"uri":"/tags/bip70-payment-protocol/","title":"BIP70 payment protocol","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/bip99-and-uncontroversial-hard-forks/","title":"Bip99 And Uncontroversial Hard Forks","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY1/1_overview_1_timon.pdf\nbip99 https://github.com/bitcoin/bips/blob/master/bip-0099.mediawiki\nWe are going to focus on consensus rules. I am going to give an introduction and then focus on controversial versus non-controversial hard-forks.\nWhy do we want to classify consensus changes in the first place? Well, for one, you often have long discussions many times in Bitcoin, we don\u0026amp;rsquo;t always share the same terminology and the …"},{"uri":"/bit-block-boom/","title":"Bit Block Boom","content":" Bit Block Boom 2019 "},{"uri":"/bit-block-boom/2019/","title":"Bit Block Boom 2019","content":" Accumulating Bitcoin Pierre Rochard Bitcoin: There Can Only Be One Tone Vays Altcoins Building Vibrant Bitcoin Communities Kris Merkel Fiat Money \u0026amp;amp; Fiat Food Saifedean Ammous How To Meme Bitcoin To The Moon Michael Goldstein State Of Multisig Justin Moon Psbt Taproot, Schnorr, and the next soft-fork Mike Schmidt Schnorr signatures Taproot Soft fork activation Musig Adaptor signatures "},{"uri":"/baltic-honeybadger/2018/bitcoin-as-a-novel-market-institution/","title":"Bitcoin As A Novel Market Institution","content":"Bitcoin as a novel market institution\nhttps://twitter.com/kanzure/status/1043763647498063872\nI am going to be talking about bitcoin as an economic system, not as a software or cryptographic system. This talk has two parts. 
In the first, I\u0026amp;rsquo;m going to take a retrospective of the past 10 years of bitcoin as a functioning economic system. In the second part, I will be looking at bitcoin as it is today and how it will be in the future. I\u0026amp;rsquo;m going to look at the amount of wealth stored in …"},{"uri":"/scalingbitcoin/montreal-2015/bitcoin-block-propagation-iblt-rusty-russell/","title":"Bitcoin Block Propagation and IBLT","content":"This is not what I do. But I was doing it anyway.\nThe problem is that blocks are transmitted in their entirety. If you have a 1 MB uplink and you\u0026amp;rsquo;re connecting to 8 peers, your first peer will see a 1MB block in about 66.8 seconds. And the last one will get it in 76.4 seconds, because we basically blast out blocks in parallel to our peers. Miners can solve this problem of slow block transmission by centralizing and all using the same pool.\nThat\u0026amp;rsquo;s not desirable, so it would be nice if …"},{"uri":"/bitcoin-core-dev-tech/2015-02/","title":"Bitcoin Core Dev Tech 2015","content":" Bitcoin Law For Developers James Gatto, Marco Santori Gavinandresen R\u0026amp;amp;D Goals \u0026amp;amp; Challenges Patrick Murck, Gavin Andresen, Cory Fields Research Bitcoin core Talk by the founders of Circle Jeremy Allaire, Sean Neville "},{"uri":"/london-bitcoin-devs/jnewbery-bitcoin-core-v0.17/","title":"Bitcoin Core V0.17","content":"Bitcoin Core v0.17\nslides: https://www.dropbox.com/s/9kt32069hoxmgnt/john-newbery-bitcoincore0.17.pptx\nhttps://twitter.com/kanzure/status/1031960170027536384\nIntroduction I am John Newbery. I work on Bitcoin Core. This talk is going to be mostly about Bitcoin Core 0.17 which was branched on Monday. Hopefully the final release will be in the next couple of weeks.\nwhoami I live in New York and work at Chaincode Labs. I\u0026amp;rsquo;m not actually a native born New Yorker. It\u0026amp;rsquo;s nice to be back in …"},{"uri":"/dallas-bitcoin-symposium/bitcoin-developers/","title":"Bitcoin Developers","content":"Introduction I am going to talk about a programmer\u0026amp;rsquo;s perspective. As investors, you might find this interesting, but at the same time it\u0026amp;rsquo;s not entirely actionable. As a programmer, the bitcoin ecosystem is very attractive. This is true of cryptocurrencies in general. I\u0026amp;rsquo;ll make a distinction for bitcoin at the end. One of the interesting things is that you see some of the most talented hackers in the world are extremely attracted to bitcoin. I think that\u0026amp;rsquo;s an interesting …"},{"uri":"/edgedevplusplus/2017/","title":"Bitcoin Edge Dev++ 2017","content":"https://stanford-devplusplus-2017.bitcoinedge.org/\nBitcoin Peer-to-Peer Network and Mempool John Newbery P2p Transaction relay policy "},{"uri":"/scalingbitcoin/montreal-2015/bitcoin-failure-modes-and-the-role-of-the-lightning-network/","title":"Bitcoin Failure Modes And The Role Of The Lightning Network","content":"I am going to be talking about bitcoin failure modes, and Joseph will talk about how the lightning network can help. I\u0026amp;rsquo;ll start. We\u0026amp;rsquo;ll start off by saying that bitcoin is working, which is really cool. Blocks starts with lots of zeroes, coins stay put, they move when you tell them to. That\u0026amp;rsquo;s really cool and it works. That\u0026amp;rsquo;s great. This is a good starting place. We should acknowledge that bitcoin can fail. But it\u0026amp;rsquo;s anti-fragile, right? 
What\u0026amp;rsquo;s the blockheight of …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/bitcoin-financing-and-trading/","title":"Bitcoin Financing And Trading","content":" Joshua Lim (Circle, ex Goldman Sachs) Juthica Chou (LedgerX) (Goldman Sachs) Bobby Cho (ItBit) (VP of trading at SecondMarket) Harry Yeh (moderator) (Binary Financial) Today we will hear from some people who are traders and also infrastructure providers. Good morning everybody. It took me 18 hours to get here. The Uber dropped me off at the wrong building. They need better signs.\nSo usually I am the one that sits on the panel. This is actually the first time I get to moderate. I get to ask the …"},{"uri":"/bitcoin-core-dev-tech/2015-02/james-gatto-marco-santori-bitcoin-law-for-developers/","title":"Bitcoin Law For Developers","content":"We are going to be taking a 15 minute coffee break after our next two speakers. I want to introduce you to James Gatto and Marco Santori with Pilsbury. They will be spending some time talking about Bitcoin law. They have a room this afternoon and they are offering to talk with you one on one. So Marco and James.\nYou missed the introduction. Was it any good? (laughter)\nWe are here to talk about legal issues. We are going to try to keep it light and interesting. I am going to talk about patents. …"},{"uri":"/scalingbitcoin/montreal-2015/bitcoin-load-spike-simulation/","title":"Bitcoin Load Spike Simulation","content":"Our goal for this project, our rationale of what we\u0026amp;rsquo;re interesting is when many transactions arrive in a short period of time. This could be because of denial of service attacks where few entities are creating a large number of transactions, or many people wanting to create transactions, like a shopping spree. We wanted to answer two questions, how does the temporary spike in transaction rate affect confirmation delay distribution? For a given spike shape, can we change the block size and …"},{"uri":"/baltic-honeybadger/2018/bitcoin-maximalism-dissected/","title":"Bitcoin Maximalism Dissected","content":"Bitcoin maximalism dissected\nGood morning everyone. I am very happy to be here at Baltic Honeybadger. Last year I gave a scaling presentation. I opened the conference with a scaling presentation. This year in order to compensate, I will be super serious. This will be the most boring presentation at the conference. I am going to try to dissect and formalize bitcoin maximalism.\nThis is the scarriest font I found on prezi. I wanted something with blood coming out of it but they didn\u0026amp;rsquo;t have …"},{"uri":"/stanford-blockchain-conference/2019/bitcoin-payment-economic-analysis/","title":"Bitcoin Payment Economic Analysis","content":"Economic analysis of the bitcoin payment system\nhttps://twitter.com/kanzure/status/1091133843518582787\nIntroduction I am an economist. I am going to stop apologizing about this now that I\u0026amp;rsquo;ve said it once. I usually study market design. Bitcoin gives us new opportunities for how to design marketplaces. Regarding the talk title, we don\u0026amp;rsquo;t want to claim that bitcoin or any other cryptocurrency will be a monopoly. But to be a monopoly, it would behave very differently from traditional …"},{"uri":"/baltic-honeybadger/2018/bitcoin-payment-processing-and-merchants/","title":"Bitcoin Payment Processing And Merchants","content":"1 on 1: Bitcoin payment processing and merchants\nhttps://twitter.com/kanzure/status/1043476967918592003\nV: Hello and welcome to this amazing conference. 
It\u0026amp;rsquo;s a good conference, come on. It\u0026amp;rsquo;s great because you can ask them about UASF and they know what you\u0026amp;rsquo;re talking about. I have some guests with me today. We\u0026amp;rsquo;re going to be talkin gabout merchant processing and talk about regular bitcoin adoption too. The first question I have for Alena is, \u0026amp;hellip; there was a big effort …"},{"uri":"/edgedevplusplus/2017/p2p-john-newbery/","title":"Bitcoin Peer-to-Peer Network and Mempool","content":"Slides: https://johnnewbery.com/presentation/2017/11/02/dev-plus-plus-stanford/p2p.pdf \u0026amp;amp; https://johnnewbery.com/presentation/2017/11/02/dev-plus-plus-stanford/mempool.pdf\nIntro John: So far today, we\u0026amp;rsquo;ve talked about transactions and blocks and the blockchain. Those are the fundamental building blocks of Bitcoin. Those are the data structures that we use. On a more practical level, how does a Bitcoin network actually function? How do we transmit these transactions and blocks around? …"},{"uri":"/magicalcryptoconference/2019/bitcoin-protocol-development-panel/","title":"Bitcoin Protocol Development Panel","content":"Bitcoin protocol development panel\nKW: We have some wonderful panelists today. Let\u0026amp;rsquo;s kick it off. Let\u0026amp;rsquo;s start with Eric. Who are you and what role do you play in bitcoin development?\nEL: I got into bitcoin in 2011. I had my own network stack. I almost had a full node implementation but I stopped short because it wasn\u0026amp;rsquo;t as well tested or reviewed as Bitcoin Core. So I started to look into the community a little bit. I became really interested in the development process itself. …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/bitcoin-regulation-landscape/","title":"Bitcoin Regulation Landscape","content":" Elizabeth Stark Jerry Brito Constance Choi Christian Catalini (moderator) MIT Compton Labs - Building 26, room 100\nhttp://mitbitcoinexpo.org/\nOkay guys, we are getting close to the time for the next panel. Head back to your seats. You may bring food in with you. We also started 15 minutes late. People have 15 minutes .. we ended lunch on time. One of our speakers literally only came for 30 minutes today, 2 to 230.\nIt\u0026amp;rsquo;s going to be escape. I don\u0026amp;rsquo;t know how to use Macs. It will be …"},{"uri":"/scalingbitcoin/montreal-2015/relay-network/","title":"Bitcoin Relay Network","content":"It was indicative of two issues. A lot of people were reducing their security model by not validating their bitcoin. It was potentially less secure. That\u0026amp;rsquo;s not something that relay networks try to address. The big issue with the decreasing number of nodes, which the relay network doesn\u0026amp;rsquo;t address, is that it reduces the resilience against attack. If there\u0026amp;rsquo;s only 100 nodes, there\u0026amp;rsquo;s really only some 100 VPses that you have to knock offline before the bitcoin network stops …"},{"uri":"/dallas-bitcoin-symposium/bitcoin-security/","title":"Bitcoin Security","content":"Bitcoin security\nIntroduction Hi everyone, my name is Dhruv Bansal. I am the CTO of Unchained Capital. Thank you for the history lesson, Tuur. That\u0026amp;rsquo;s a hard act to follow. I keep thinking about how it must have been like to transact large amounts of money back then. It was a fascinating time period. There was a lot of foundations laid for how the global financial system will work later in the future, like fractional reserve lending, loans, and so on. 
I want to talk about what security …"},{"uri":"/breaking-bitcoin/2019/selfish-mining/","title":"Bitcoin Selfish Mining and Dyck Words","content":"Introduction I am going to talk about our recent work on selfish mining. I am going to explain the profitability model, the different mathematical models that are used to analyze selfish mining with their properties.\nBibliography On profitability of selfish mining On profitability of stubborn mining On profitability of trailing mining Bitcoin selfish mining and Dyck words Selfish mining and Dyck words in bitcoin and ethereum networks Selfish mining in ethereum The paper on selfish mining in …"},{"uri":"/magicalcryptoconference/2019/bitcoin-without-internet/","title":"Bitcoin Without Internet","content":"SM: We have a special announcement to make. Let me kick it over to Richard right now.\nRM: We completed a project that will integrate the GoTenna mesh radio system with Blockstream\u0026amp;rsquo;s blocksat satellite system. It\u0026amp;rsquo;s pretty exciting. It\u0026amp;rsquo;s called txtenna. It will allow anybody to send a signed bitcoin transaction from an off-grid full node that is receiving blockchain data from the blocksat network, and then relay it over the GoTenna mesh network. It\u0026amp;rsquo;s not just signed …"},{"uri":"/bit-block-boom/2019/there-can-only-be-one/","title":"Bitcoin: There Can Only Be One","content":"https://twitter.com/kanzure/status/1162742007460192256\nIntroduction I am basically going to talk about why bitcoin will basically eat all the shitcoins and why all of them are going away and I am going to point to some of the details. The big graphic right there is my attempt to categorize all of the altcoins, the ICO space, and so on. Everyone is slowly starting to merge mine with bitcoin, since they can\u0026amp;rsquo;t keep up with the charadde.\nProof-of-work The word \u0026amp;ldquo;blockchain\u0026amp;rdquo; was …"},{"uri":"/tags/block-explorers/","title":"Block explorers","content":""},{"uri":"/stanford-blockchain-conference/2020/block-rewards/","title":"Block Rewards","content":"An Axiomatic Approach to Block Rewards\nhttps://twitter.com/kanzure/status/1230574963813257216\nhttps://arxiv.org/pdf/1909.10645.pdf\nIntroduction Thank you, Bram. Can you hear me in the back? I want to talk about some work I have been doing with my colleagues. This is a paper about incentive issues in blockchain protocols. I am interested in thinking about whether protocols have been designed in a way that motivates users to behave in the way that the designer had hoped.\nGame theory and mechanism …"},{"uri":"/tags/block-withholding/","title":"Block withholding","content":""},{"uri":"/coindesk-consensus-2016/blockchain-database-technology-in-a-global-context/","title":"Blockchain Database Technology In A Global Context","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nBlockchain database technology in a global context\nAlan Murray - Moderator\nLawrence H. Summers\nAM: I would like to show a video.\nIt\u0026amp;rsquo;s not going to happen. You\u0026amp;rsquo;re wasting your time. When the DOJ calls you up and says it\u0026amp;rsquo;s an illegal currency, it\u0026amp;rsquo;s over. You will be put in jail. There will be no non-controlled currency in the world. There is no government that is going to put up with it in the world. 
Lots of …"},{"uri":"/w3-blockchain-workshop-2016/blockchain-hub/","title":"Blockchain Hub","content":"My understanding is that there is application logic, consensus mechanism, distributed timestamp and ledger structure. With these layers, the blockchain can give total control of assets to the end. This materializes the attitude and philosophy of the internet. The intention is I think great. There is a missing link, though. It would be nice if these blockchain could manage the digital representational of physical assets.\nI have been working with real estate escrow company in Japan to work on the …"},{"uri":"/scalingbitcoin/montreal-2015/blockchain-testbed/","title":"Blockchain Testbed","content":"A testbed for bitcoin scaling\nI want to talk today about how to scale bitcoin. We want lower latency, we want higher throughput, more bandwidth and more transactions per second, and we want security. We can try to tune the parameters. In all of the plots, I have time going from left to right and these are blocks in rectangles. We have larger blocks which might give us more throughput. We could have shorter block length, and transactions would come faster on the chain and we have better …"},{"uri":"/stanford-blockchain-conference/2020/blockchains-for-multiplayer-games/","title":"Blockchains For Multiplayer Games","content":"https://twitter.com/kanzure/status/1230256436527034368\nIntroduction Alright, thanks everyone for being here. Let\u0026amp;rsquo;s get started. I am part of a project called Forte which wants to equip the game ecosystem with blockchain finance for their in-game economies. This is a unique set of constraints compared to what a lot of people are trying to build on blockchain today. We\u0026amp;rsquo;re game experts. There\u0026amp;rsquo;s definitely people more experts than us on the topics we\u0026amp;rsquo;ll cover in this …"},{"uri":"/stanford-blockchain-conference/2019/bloxroute/","title":"Bloxroute","content":"bloXroute: A network for tomorrow\u0026amp;rsquo;s blockchain\nIntroduction Hi. I am Souyma and I am here to talk with you about Bloxroute. The elephant in the room is that blockchains have been claimed to solve everything like credit cards, social media, and global micropayments. To handle credit card payment volumes, blockchains need like 5000 transactions per second, for microtransactions you need 70000 transactions per second, and for social media even more. Blockchains today do about 10 …"},{"uri":"/speakers/bobby-cho/","title":"Bobby Cho","content":""},{"uri":"/speakers/bobby-lee/","title":"Bobby Lee","content":""},{"uri":"/stanford-blockchain-conference/2020/boomerang/","title":"Boomerang - Redundancy Improves Latency and Throughput in Payment-Channel Network","content":"https://twitter.com/kanzure/status/1230936389300080640\nIntroduction Redundancy can be a useful tool for speeding up payment channel networks and improving throughput. We presented last week at Financial Crypto. You just received a nice introduction to payment channels and payment channel networks.\nPayment channels Alice and Bob are connected through a payment channel. This is a channel and in that channel there are some coins that are escrowed. 
Some coins belong to Alice some belong to Bob and …"},{"uri":"/speakers/boyma-fahnbulleh/","title":"Boyma Fahnbulleh","content":""},{"uri":"/speakers/brad-peterson/","title":"Brad Peterson","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/braiding-the-blockchain/","title":"Braiding The Blockchain","content":"Bob McElrath (bsm117532)\nslides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/2_breaking_the_chain_1_mcelrath.pdf\nI work for SolidX in New York. I am here to tell you about some modifications to the blockchain. All the things we heard yesterday about the block size, come down to the existence of orphans. The reason why we have these problems are orphans. These are consequences of physics and resources. This is not a fundamental property in Bitcoin. ((Transcripter\u0026amp;rsquo;s note: …"},{"uri":"/speakers/brandon-black/","title":"Brandon Black","content":""},{"uri":"/breaking-bitcoin/2019/breaking-bitcoin-privacy/","title":"Breaking Bitcoin Privacy","content":"0A8B 038F 5E10 CC27 89BF CFFF EF73 4EA6 77F3 1129\nhttps://twitter.com/kanzure/status/1137304437024862208\nIntroduction Hello everybody. I invented and created joinmarket, the first really popular coinjoin implementation. A couple months ago I wrote a big literature review on privacy. It has everything about anything in privacy in bitcoin, published on the bitcoin wiki. This talk is about privacy and what we can do to improve it.\nWhy privacy? Privacy is essential for fungibility, a necessary …"},{"uri":"/scalingbitcoin/milan-2016/breaking-the-chain/","title":"Breaking The Chain","content":"https://twitter.com/kanzure/status/785144266683195392\nhttp://dpaste.com/1ZYW028\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/client-side-validation/\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain/\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/jute-braiding/\nInclusivitiy So it seems like we have two big topics here, one is braiding and one is proof-of-publication. I had a talk in Hong Kong about braiding. The major difference is the inclusive …"},{"uri":"/breaking-bitcoin/2019/breaking-wasabi/","title":"Breaking Wasabi (and Automated Wallets)","content":"1127 96A8 DFCA 1A96 C8B8 0094 9211 687A D298 9B12\nhttps://twitter.com/kanzure/status/1137652130611978240\nTrigger warnings Before I begin, some have told me that sometimes I tend to be toxic or provocative. So let\u0026amp;rsquo;s start with some trigger warnings. This presentation might include bad jokes about bitcoin personalitites. It might have some multiplication. I might say \u0026amp;ldquo;ethereum\u0026amp;rdquo;. There\u0026amp;rsquo;s also a coinjoin warning. If this is triggering to you, then it\u0026amp;rsquo;s okay to leave. …"},{"uri":"/speakers/brett-seyler/","title":"Brett Seyler","content":""},{"uri":"/speakers/brian-kelly/","title":"Brian Kelly","content":""},{"uri":"/speakers/brian-klein/","title":"Brian Klein","content":""},{"uri":"/speakers/brian-n.-levine/","title":"Brian N. Levine","content":""},{"uri":"/speakers/brian-okeefe/","title":"Brian O’Keefe","content":""},{"uri":"/stanford-blockchain-conference/2020/brick-async-state-channels/","title":"Brick Async State Channels","content":"Brick: Asynchronous State Channels\nhttps://twitter.com/kanzure/status/1230943445398614016\nhttps://arxiv.org/abs/1905.11360\nIntroduction I am going to be presenting on Brick for asynchronous state channels. This is joint work with my colleagues and my advisor. 
So far we have heard many things about payment channels and payment channel networks. They were focused on existing channel solutions. In this work, we focus on a different dimensions. We ask why do payment channels work the way they do, …"},{"uri":"/speakers/bruce-fenton/","title":"Bruce Fenton","content":""},{"uri":"/scalingbitcoin/milan-2016/build-scale-operate/","title":"Build Scale Operate","content":"Build - Scale - Operate: The Three Pillars of the Bitcoin Community\nhttps://twitter.com/kanzure/status/784713580713246720\nGood morning. This is such a beautiful conference. Eric and I are excited to speak with you. We are going to outline what our goals are. We want to be clear that we\u0026amp;rsquo;re not here to define vision. Rather, we\u0026amp;rsquo;re here about how to grow vision around how to grow bitcoin from a technical community perspectives. We\u0026amp;rsquo;re going to share some of our challenges with …"},{"uri":"/tags/build-systems/","title":"build systems","content":""},{"uri":"/stanford-blockchain-conference/2019/building-bulletproofs/","title":"Building Bulletproofs","content":"https://twitter.com/kanzure/status/1090668746073563136\nIntroduction There are two parts to this talk. The first part of the talk will be building, like all the pieces we put into making the implementation of bulletproofs. We\u0026amp;rsquo;ll talk about the group we used, the Ristretto group and ristretto255 group. I\u0026amp;rsquo;ll talk about parallel lelliptic curve arithmetic, and Merlin transcripts for zero-knowledge proofs, and how all these pieces fit together.\nCathie will talk in part two about …"},{"uri":"/bit-block-boom/2019/building-vibrant-bitcoin-communities/","title":"Building Vibrant Bitcoin Communities","content":"Building vibrant bitcoin communities\nI am the content manager at Exodus Wallet. I have been helping to build up a userbase over the last few years. We are going to talk about building vibrant bitcoin communities. This is the antithesis of Michael Goldstein\u0026amp;rsquo;s talk.\nI want to bring positive dynamics to the communities that we participate in. Bitcoin has a few problems we need to overcome. The problem is Craig Wright. Well, the problem is Roger Ver. If you take the personalities out of it, …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/bulletproofs/","title":"Bulletproofs","content":"https://twitter.com/kanzure/status/958881877896593410\nhttp://web.stanford.edu/~buenz/pubs/bulletproofs.pdf\nhttps://joinmarket.me/blog/blog/bulletpoints-on-bulletproofs/\nhttps://crypto.stanford.edu/bulletproofs\nIntroduction Good morning everyone. I am Benedikt Bünz and I am going to talk about bulletproofs. This is joint work with myself, Jonathan Bootle, Dan Boneh, Andrew Poelstra, Pieter Wuille, and Greg Maxwell. Bulletproofs are short proofs that we designed originally with the goal of …"},{"uri":"/stanford-blockchain-conference/2019/casper/","title":"Casper the Friendly Ghost: A 'Correct-by-Construction' Blockchain Consensus Protocol","content":"https://twitter.com/kanzure/status/1091460316288868352\nI have one announcement to make before we start the session. If you feel like after all of these talks that the thing you really need is a drink, then there\u0026amp;rsquo;s help. Chainlink is hosting the Stanford Blockchain Conference happy hour at The Patio which is the most famous bar in Palo Alto. It doesn\u0026amp;rsquo;t mean that much, but it\u0026amp;rsquo;s good fun. 
It\u0026amp;rsquo;s at 412 Evans St in Palo Alto, from 5-8pm and it\u0026amp;rsquo;s an open bar. Thank you …"},{"uri":"/speakers/cathie-yun/","title":"Cathie Yun","content":""},{"uri":"/stanford-blockchain-conference/2020/celo-ultralight-client/","title":"Celo Ultralight Client","content":"https://twitter.com/kanzure/status/1230261714039398402\nIntroduction Hello everyone. My name is Marek Olszewski. We are going to be talking about Plumo which is Celo\u0026amp;rsquo;s ultralightweight client protocol. This is joint work with a number of collaborators.\nPlumo Plumo sands for \u0026amp;ldquo;feather\u0026amp;rdquo; in esperanto. I hope most people here are familiar with this graph. Chain sizes are growing. This is a graph of the bitcoin chain size over time. It has continued to grow. We\u0026amp;rsquo;re now at 256 …"},{"uri":"/misc/cftc-bitcoin/","title":"CFTC Bitcoin","content":"Commodity Futures Trading Commission\nhttp://www.onlinevideoservice.com/clients/cftc/video.htm?eventid=cftclive\n# NDFs description Panel I: CFTC Clearing for Non-Deliverable Forwards (NDF)\nThe first panel will discuss whether mandatory clearing should be required of NDF swaps contracts. Each panelist will present and then there will be opportunity for broader discussion and questions. Representatives from the CFTC, the Bank of England, and the European Securities and Markets Authority will also …"},{"uri":"/scalingbitcoin/milan-2016/chainbreak/","title":"Chainbreak","content":" Table of Contents 1. braids 1.1. Properties of a braid system 1.1.1. Inclusivity 1.1.2. Delayed tx fee allocation 1.1.3. Network size measured by graph structure 1.1.4. cohort algorithm / sub-cohort ordering 1.1.5. outstanding problem - merging blocks of different difficulty 1.2. fee sniping 1.2.1. what even is fee sniping? 1.3. definition of a cohort 1.4. tx processing system must process all txs 1.5. can you have both high blocktime and a braid? 1.5.1. problem is double spends 1.5.2. two ways …"},{"uri":"/scalingbitcoin/stanford-2017/changes-without-unanimous-consent/","title":"Changes Without Unanimous Consent","content":"I want to talk about dealing with consensus changes without so-called consensus. I am using consensus in terms of the social aspect, not in terms of the software algorithm consensus for databases. \u0026amp;ldquo;Consensus changes without consensus\u0026amp;rdquo;. If you don\u0026amp;rsquo;t consensus on consensus, then some people are going to follow one chain and another another chain.\nIf you have unanimous consensus, then new upgrades work just fine. Developers write software, miners run the same stuff, and then there …"},{"uri":"/tags/channel-announcements/","title":"Channel announcements","content":""},{"uri":"/tags/channel-commitment-upgrades/","title":"Channel commitment upgrades","content":""},{"uri":"/tags/channel-jamming-attacks/","title":"Channel jamming attacks","content":""},{"uri":"/speakers/charles-cascarilla/","title":"Charles Cascarilla","content":""},{"uri":"/speakers/charles-guillemet/","title":"Charles Guillemet","content":""},{"uri":"/speakers/charlie-lee/","title":"Charlie Lee","content":""},{"uri":"/speakers/chris-church/","title":"Chris Church","content":""},{"uri":"/speakers/chris-tse/","title":"Chris Tse","content":""},{"uri":"/w3-blockchain-workshop-2016/christopher-allen/","title":"Christopher Allen","content":"Whta makes for a great wonderful magical calliberation? Collaboration? Grab one card randomly. Keep it with you. What is this particular pattern of greatness? 
See if you can\u0026amp;rsquo;t embody it or can\u0026amp;rsquo;t recommend it ofr find it in the next two days. I would like each of you to tell a story about a collaboration that was really meaningful to you. It could be a business effort, it could be another team in your company, or it could be a personal collaboration that you found really powerful. …"},{"uri":"/coindesk-consensus-2016/clearing-and-settlement-for-global-financial-institutions/","title":"Clearing And Settlement For Global Financial Institutions","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nClearing and settlement for global financial institutions\nMatthew Bishop, The Economist - Moderator\nCharles Cascarilla, itBit\nChris Church, Digital Asset Holdings\nBrad Peterson, Nasdaq\nSandra Ro, CME Group\nMB: Good evening. Nearly time for a drink, but before that we will have a stimulating conversation about payments and settlement. It\u0026amp;rsquo;s one of the startups coming in and got a banking license, mainstream next to him is Chris …"},{"uri":"/stanford-blockchain-conference/2020/clockwork-nonfrontrunning/","title":"Clockwork Nonfrontrunning","content":"ClockWork: An exchange protocol for proofs of non-front-running\nhttps://twitter.com/kanzure/status/1231012112517844993\nIntroduction Clockwork is an exchange protocol for proofs of non-frontrunning.\nExchange systems Exchange systems let you trade one asset for another asset at a certain price. It\u0026amp;rsquo;s simple and it\u0026amp;rsquo;s supposed to be fair. In this work, we\u0026amp;rsquo;re focused on centralized exchange systems. We\u0026amp;rsquo;re not working with decentralized exchanges just yet. These don\u0026amp;rsquo;t …"},{"uri":"/tags/cltv-expiry-delta/","title":"CLTV expiry delta","content":""},{"uri":"/scalingbitcoin/milan-2016/coin-selection/","title":"Coin Selection","content":"Simulation-based evaluation of coin selection strategies\nhttps://twitter.com/kanzure/status/785061222316113920\nIntroduction Thank you. ((applause))\nHi. Okay. I\u0026amp;rsquo;m going to talk about coin selection. I have been doing this for my master thesis. To take you through what I will be talking about, I will be shortly outlining my work, then talking about coin selection in general. And then the design space of coin selection. I will also be introducing the framework I have been using for simulating …"},{"uri":"/coindesk-consensus-2016/","title":"Coindesk Consensus","content":" Blockchain Database Technology In A Global Context Lawrence H. Summers Clearing And Settlement For Global Financial Institutions Charles Cascarilla, Chris Church, Brad Peterson, Sandra Ro Delaware Initiative Marco Santori Regulation Future Of Blockchains Paul Vigna, Balaji Srinivasan, David Rutter Future Of Regulation Jerry Brito, J. Christopher Giancarlo, Benjamin Lawsky, Mark Wetjen Regulation Hackathon Intro Robert Schwinker How Tech Companies Are Embracing Blockchain Database Technology …"},{"uri":"/scalingbitcoin/montreal-2015/coinscope-andrew-miller/","title":"Coinscope","content":"http://cs.umd.edu/projects/coinscope/\nI am going to talk about a pair of tools, one is a simulator and another one is a measurement station for the bitcoin network. So let me start with the bitcoin simulator framework. The easiest approach is to create a customized model that is a simplified abstraction of what you care about- which is like simbit, to show off how selfish-mining works. 
These will always be denoted by a standard …"},{"uri":"/speakers/megan-chen/","title":"Megan Chen","content":""},{"uri":"/speakers/melanie-shapiro/","title":"Melanie Shapiro","content":""},{"uri":"/speakers/meltem-demirors/","title":"Meltem Demirors","content":""},{"uri":"/speakers/meni-rosenfeld/","title":"Meni Rosenfeld","content":""},{"uri":"/tags/merkle-tree-vulnerabilities/","title":"Merkle tree vulnerabilities","content":""},{"uri":"/decentralized-financial-architecture-workshop/metadata/","title":"Metadata","content":"website: https://dfa2019.bitcoinedge.org/\ntwitter: https://twitter.com/thebitcoinedge\nhttps://gnso.icann.org/sites/default/files/file/field-file-attach/presentation-pre-icann65-policy-report-13jun19-en.pdf\n"},{"uri":"/edgedevplusplus/2019/metadata/","title":"Metadata","content":"https://bitcoinedge.org/slack\n"},{"uri":"/speakers/michael-more/","title":"Michael More","content":""},{"uri":"/speakers/michael-straka/","title":"Michael Straka","content":""},{"uri":"/speakers/michael-walfish/","title":"Michael Walfish","content":""},{"uri":"/speakers/mikael-dubrovsky/","title":"Mikael Dubrovsky","content":""},{"uri":"/stanford-blockchain-conference/2019/miniscript/","title":"Miniscript","content":"https://twitter.com/kanzure/status/1091116834219151360\nJeremy: Our next speaker really needs no introduction. However, I am obligated to deliver an introduction anyway. I\u0026amp;rsquo;m not just trying to avoid pronouncing his name which is known to be unpronounceable. I am pleased to welcome Pieter Wuille to stage to present on miniscript.\nIntroduction Thanks, Jeremy. I am Pieter Wuille. I work at Blockstream. I do various things. Today I will be talking about miniscript, which is a joint effort …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2018/","title":"Mit Bitcoin Expo 2018","content":" Improving Bitcoin Smart Contract Efficiency Tadge Dryja Taproot Contract protocols Dlc "},{"uri":"/stanford-blockchain-conference/2020/mixicles/","title":"Mixicles: Simple Private Decentralized Finance","content":"https://twitter.com/kanzure/status/1230666545803563008\nhttps://chain.link/mixicles.pdf\nIntroduction This work was done in my capacity as technical advisor to Chainlink, and not my Cornell or IC3 affiliation. Mixicle is a combination of the words mixers and oracles. You will see why in a moment.\nDeFi and privacy As of a couple of weeks ago, DeFi passed the $1 billion mark which is to say there\u0026amp;rsquo;s over $1 billion of cryptocurrency in DeFi smart contracts today. This is great, but wait just a …"},{"uri":"/speakers/mooly-sagiv/","title":"Mooly Sagiv","content":""},{"uri":"/scalingbitcoin/montreal-2015/more-core-devs/","title":"More Core Devs","content":"10x the number of core devs\nObjectives:\n10x would only be like \u0026amp;lt;1000 bitcoin developers 7000 fedora developers how do we grow developers? how many tor developers? how many linux developers?\npoaching from each other is not cool\nThere\u0026amp;rsquo;s not enough people with the necessary skills. There\u0026amp;rsquo;s no clear way to train new people. Apprenticeship works, but it scales very very poorly. It\u0026amp;rsquo;s better than a static fixed number of developers. In the linux industry, started in linux at a time …"},{"uri":"/stanford-blockchain-conference/2020/motoko-language/","title":"Motoko, the language for the Internet Computer","content":"Introduction Thanks, Byron. Glad to be here. My name is Dominic Williams. I am founder of the internet computer project at Dfinity Foundation. 
Last year we talked about consensus protocols. Today we are talking about tools for building on top of the internet computer which is great news and shows the progress we have made. I\u0026amp;rsquo;m going to give some introductory context, and then our senior researcher from our languages division will talk about Motoko. Is it working? Alright.\nThe internet is …"},{"uri":"/scalingbitcoin/tokyo-2018/multi-party-channels-in-the-utxo-model-challenges-and-opportunities/","title":"Multi Party Channels In The Utxo Model Challenges And Opportunities","content":"Multi-party channels in the UTXO model: Challenges and opportunities\nOlaoluwa Osuntokun (Lightning Labs) (roasbeef)\nhttps://twitter.com/kanzure/status/1048468663089618944\nIntroductions Hi. So my name is Laolu. I am also known as roasbeef and I am co-founder of Lightning Labs. I am going to go over some cool research and science fiction. In the actual implementation for these things and cool to discuss and get some discussions around this. first I am going to talk about single-party channels and …"},{"uri":"/speakers/nathan-wilcox/","title":"Nathan Wilcox","content":""},{"uri":"/cryptoeconomic-systems/2019/near-misses/","title":"Near Misses","content":"Near misses: What could have gone wrong\nIntroduction Thank you. I am Ethan Heilman. I am a research whose has done a bunch of work in security of cryptocurrencies. I am also CTO of Arwen which does secure atomic swaps. I am a little sick today so please excuse my coughing, wheezing and drinking lots of water.\nBitcoin scary stories The general outline of this talk is going to be \u0026amp;ldquo;scary stories in bitcoin\u0026amp;rdquo;. Bitcoin has a long history and many of these lessons are applicable to other …"},{"uri":"/speakers/neha-narula/","title":"Neha Narula","content":""},{"uri":"/breaking-bitcoin/2019/neutrino/","title":"Neutrino, Good and Bad","content":"\u0026amp;hellip; 228C D70C FAA6 17E3 2679 E455\nIntroduction I want to start this talk with a question for the audience. How many of you have a mobile wallet on your phone right now? How many know the security model that the wallets are using? Okay, how many of you trust those security assumptions? Okay, less of you. I am going to talk about Neutrino. In order to talk about it, let\u0026amp;rsquo;s talk about before Neutrino.\nSimplified payment verification Let\u0026amp;rsquo;s start with simplified payment verification. …"},{"uri":"/stanford-blockchain-conference/2019/thundercore/","title":"New and Simple Consensus Algorithms for ThunderCore’s Main-Net","content":"https://twitter.com/kanzure/status/1090773938479624192\nIntroduction Synchronous with a chance of partition tolerance. Thank you for inviting me. I am going to be talking about some new updates. It\u0026amp;rsquo;s joint work with my collaborators. The problem is state-machine replication, sometimes called blockchain or consensus. These terms are going to be the same thing in this talk. The nodes are trying to agree on a linearly updated transaction log.\nState-machine replication We care about consistency …"},{"uri":"/speakers/nic-carter/","title":"Nic Carter","content":""},{"uri":"/speakers/nick-spooner/","title":"Nick Spooner","content":""},{"uri":"/stanford-blockchain-conference/2020/no-incentive/","title":"No Incentive","content":"The best incentive is no incentive\nhttps://twitter.com/kanzure/status/1230997662339436544\nIntroduction This is the last session of the conference. Time flies when you\u0026amp;rsquo;re having fun. 
We\u0026amp;rsquo;re going to be starting with David Schwartz who is one of the co-creators of Ripple and he is going to be talking about a controversial topic that \u0026amp;ldquo;the best incentive is no incentive\u0026amp;rdquo;.\nI am David Schwartz as you just heard I am one of the co-creators of Ripple. Thanks to Dan Boneh for …"},{"uri":"/scalingbitcoin/montreal-2015/non-currency-applications/","title":"Non Currency Applications","content":"Scalability of non-currency applications\nI am from Princeton University. I am here to talk about the scalability issues that non-currency applications of bitcoin have that might be a little different than regular payments that go through the bitcoin network. I put this together with my advisor, Arvind. The reason why I want to talk about this is because when people talk about how bitcoin will have to scale is that people throw out a thing about Visa processing 20,000 transactions per second or …"},{"uri":"/speakers/omer-shlomovits/","title":"Omer Shlomovits","content":""},{"uri":"/scalingbitcoin/tokyo-2018/omniledger/","title":"Omniledger","content":"Omniledger: A secure, scale-out, decentralized ledger via sharding\nEleftherios Kokoris-Kogias, Philipp Jovanovic, Linus Gasser, Nicolas Gailly, Ewa Syta and Bryan Ford (Ecole Polytechnique Fédérale de Lausanne)\n(LefKok)\npaper: https://eprint.iacr.org/2017/406.pdf\nhttps://twitter.com/kanzure/status/1048733316839432192\nIntroduction I am Lefteris. I am a 4th year PhD student. I am here to talk about omniledger. It\u0026amp;rsquo;s our architecture for a blockchain. It\u0026amp;rsquo;s not directly related to bitcoin …"},{"uri":"/tags/onion-messages/","title":"Onion messages","content":""},{"uri":"/tags/op-cat/","title":"OP_CAT","content":""},{"uri":"/tags/op-codeseparator/","title":"OP_CODESEPARATOR","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/eric-martindale/","title":"Open source - Beyond Bitcoin Core","content":"Open source: Beyond Bitcoin Core\nHe is at Bitpay, working on copay, bitcore, foxtrot, and their blockchain explorer.\nFirst of all, a little bit about Bitpay. We were actually founded in May 2011. We have been around for some time. At the time, MtGox was actually still a viable exchange. The Bitcoin price was down at about $1 or a little more. And we had a grand total of two merchants.\nSo what Bitpay does is that it accepts Bitcoin on behalf of a merchant and allows them to transform that into a …"},{"uri":"/bitcoin-magazine/bitcoin-2024/open-source-mining/","title":"Open Source Mining","content":""},{"uri":"/baltic-honeybadger/2018/opening/","title":"Opening","content":"Opening remarks for Baltic Honeybadger 2018\ntwitter: https://twitter.com/search?f=tweets\u0026amp;amp;vertical=default\u0026amp;amp;q=bh2018\nhttps://twitter.com/kanzure/status/1043384689321566208\nOkay guys, we\u0026amp;rsquo;re going to start in five minutes. So stay here. She is going to introduce speakers and help everyone out. Thanks everyone for coming. It\u0026amp;rsquo;s a huge crowd this year. I wanted to make a few technical announcements. First of all, just remember that we should be excellent to each other. We have …"},{"uri":"/breaking-bitcoin/2019/opening-remarks/","title":"Opening Remarks","content":"Opening remarks\nhttps://twitter.com/kanzure/status/1137273436332576768\nWe have a nice six second intro video. There we go. Woo. Alright. I think I can display my notes here. Yay. Good morning. First thing I need to tell you is the wifi password if you want to use the venue wifi. 
So, outside of that, thank you all for coming. This year we have a little fewer people than usual, but that\u0026amp;rsquo;s bear market and all. So we have to change our size of conference depending on the price of bitcoin. …"},{"uri":"/stanford-blockchain-conference/2019/opening-remarks/","title":"Opening Remarks","content":"Opening remarks\nWe will be starting in 10 minutes. I would like to ask the first speaker in the first session for grin to walk to the back of the room to get your microphone ready for the talk.\nIt\u0026amp;rsquo;s 9 o\u0026amp;rsquo;clock and time to get started. Could everyone please get to their seats? Alright, let\u0026amp;rsquo;s do this. Welcome everybody. This is the Stanford Blockchain Conference. This is the third time we\u0026amp;rsquo;re running this conference. I\u0026amp;rsquo;m looking forward to the program. Lots of technical …"},{"uri":"/coindesk-consensus-2016/opening-remarks-state-of-blockchain-ryan-selkis/","title":"Opening Remarks State Of Blockchain","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nOpening remarks \u0026amp;amp; state of blockchain\nRyan Selkis, Coindesk\nGarrick Hileman, The Cambridge Centre for Alternative Finance\nLadies and gentlemen, the session will begin in 5 minutes. Please take your seats. The session is about to begin.\nPlease welcome Ryan Selkis from CoinDesk. (obnoxious music plays, overly dramatic)\nWow is all I can say about what\u0026amp;rsquo;s happening over the last couple of months. This is absolutely incredible. Nice …"},{"uri":"/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/","title":"Optimizing Fee Estimation Via Mempool State","content":"I am a Bitcoin Core developer and I work at DG Lab. Today I would like to talk about fees. There\u0026amp;rsquo;s this weird gap between\u0026amp;ndash; there are two things going on. People complain about high fees. But people are confused about why Bitcoin Core is giving high fees but if you set fees manually you can get a much lower fee and get a transaction mined pretty fast. I started to look into this in detail and did simulations and a bunch of stuff.\nOne of the ideas I had was to use the mempool state to …"},{"uri":"/speakers/or-sattath/","title":"Or Sattath","content":""},{"uri":"/tags/out-of-band-fees/","title":"Out-of-band fees","content":""},{"uri":"/tags/output-linking/","title":"Output linking","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/overview-of-bips-necessary-for-lightning/","title":"Overview Of BIPs Necessary For Lightning","content":"Scalability of Lightning with different BIPs and some back-of-the-envelope calculations.\nslides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/1_layer2_2_dryja.pdf\nI don\u0026amp;rsquo;t have time to introduce the idea of zero-confirmation transactions and lightning. We have given talks about this. There\u0026amp;rsquo;s some documentation available. There\u0026amp;rsquo;s an implementation being worked on. I think it helps a lot with scalability by keeping some transactions off the blockchain. 
Ideally, many …"},{"uri":"/speakers/patricia-estevao/","title":"Patricia Estevao","content":""},{"uri":"/speakers/patrick-mccorry/","title":"Patrick McCorry","content":""},{"uri":"/speakers/patrick-murck/","title":"Patrick Murck","content":""},{"uri":"/speakers/paul-vigna/","title":"Paul Vigna","content":""},{"uri":"/tags/payment-probes/","title":"Payment probes","content":""},{"uri":"/tags/payment-secrets/","title":"Payment secrets","content":""},{"uri":"/tags/peer-storage/","title":"Peer storage","content":""},{"uri":"/decentralized-financial-architecture-workshop/perspective/","title":"Perspective","content":" Regulators should focus on consumer protection\nRegulators should focus on operating (or overseeing) rating agencies, and formal review of proposed standards\nFungibility is important, and many regulatory goals are contrary to fungibility, indicating that their forms of money will be inferior and their economy will be left behind.\nAnonymity should be a core value\u0026amp;ndash; the society with the most privacy will have the most advantages.\nThe importance of regulatory sandboxes (which should be …"},{"uri":"/speakers/peter-rizun/","title":"Peter Rizun","content":""},{"uri":"/w3-blockchain-workshop-2016/physical-assets/","title":"Physical Assets","content":"Physical assets, archival science\nWe are going to spend 20 minutes talkign about the topics. Then we will spend 10 minutes getting summaries. Then we will spend 20 minutes summarizing everything.\nWhy is archival science important? I have a background in finance too. Blockchain securitizes things as well. It combines value and memory in one. Also, people have studied archives and how to get records and how to store records.\nHashkloud. Identity verification and KYC and AML management platform. We …"},{"uri":"/speakers/pierre-roberge/","title":"Pierre Roberge","content":""},{"uri":"/scalingbitcoin/tokyo-2018/playing-with-fire-adjusting-bitcoin-block-subsidy/","title":"Playing With Fire: Adjusting Bitcoin Block Subsidy","content":"slides: https://github.com/ajtowns/sc-btc-2018\nhttps://twitter.com/kanzure/status/1048401029148991488\nIntroduction First an apology about the title. I saw some comments on twitter after the talk was announced and they were speculating that I was going to break the 21 million coin limit. But that\u0026amp;rsquo;s not the case. Rusty has a civil war thesis: the third era will start with the civil war, the mathematics of this situation seem einevitable. As the miners and businesses with large transaction …"},{"uri":"/stanford-blockchain-conference/2020/plonk/","title":"Plonk","content":"PLONK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge\nAriel Gabizon\nhttps://twitter.com/kanzure/status/1230287617972768769\npaper: https://eprint.iacr.org/2019/pdf\nIntroduction One of the things you need when you design a zk proof system is that you need to know about polynomials. Two polynomials if they are the same then they are the same everywhere and if they are different then they are different almost everywhere. 
The other thing you need is how to get …"},{"uri":"/speakers/prakash-santhana/","title":"Prakash Santhana","content":""},{"uri":"/speakers/pramod-viswanath/","title":"Pramod Viswanath","content":""},{"uri":"/baltic-honeybadger/2018/present-and-future-tech-challenges-in-bitcoin/","title":"Present And Future Tech Challenges In Bitcoin","content":"1 on 1: Present and future tech challenges in bitcoin\nhttps://twitter.com/kanzure/status/1043484879210668032\nPR: Hi everybody. I think I have the great fortune to moderate this panel in the sense that we have really great questions thought by the organizers. Essentially we want to ask Adam and Peter what are their thoughts on the current and the future tech challenges for bitcoin. I\u0026amp;rsquo;m just going to start with that question, starting with Peter.\nPT: Oh, we better have a good answer for this …"},{"uri":"/stanford-blockchain-conference/2020/prism/","title":"Prism - Scaling bitcoin by 10,000x","content":"https://twitter.com/kanzure/status/1230634530110730241\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/prism/\nIntroduction Hello everyone. I am excited to be here to talk about our implementation and evaluation of the Prism consensus protocol which achieves 10,000x better performance than bitcoin. This is a multi-university collaboration between MIT, Stanford and a few others. I\u0026amp;rsquo;d like to thank my collaborators, including my advisor, and other people.\nBitcoin performance We …"},{"uri":"/scalingbitcoin/tel-aviv-2019/prism/","title":"Prism: Scaling Bitcoin to Physical Limits","content":"Introduction Prism is a consensus protocol which is a one-stop solution to all of bitcoin\u0026amp;rsquo;s scaling problems. The title is 7 to 70000 transactions per second. We have a full stack implementation of prism running and we were able to obtain 70,000 transactions/second. Before we get started, here are my collaborators.\nThis workshop is on scaling bitcoin so I won\u0026amp;rsquo;t spend any time justifying why we would want to scale bitcoin. So let\u0026amp;rsquo;s talk about performance.\nBitcoin performance …"},{"uri":"/scalingbitcoin/montreal-2015/privacy-and-fungibility/","title":"Privacy and Scalability","content":"Even though Bitcoin is the worst privacy system ever, everyone in the community very strongly values privacy.\nThere are at least three things: privacy censorship resistance fungibility\nThe easiest way to censor things is to punish communication, not prevent communication. Privacy is the weakest link in censorship resistance. Fungibility is an absolute necessity for any medium of exchange. The properties of money include fungibility. Without privacy you may not be able to have fungibility. …"},{"uri":"/w3-blockchain-workshop-2016/privacy-anonymity-and-identity-group/","title":"Privacy Anonymity And Identity Group","content":"Anonymous identity\nAnother part of it is that all of those IoT things have firmware that can update. The moment that someone gets a key to update this, you can hack the grid by making rapid changes on power usage which actually destroys the \u0026amp;hellip;. do the standards make things less secure? It should be illegal to design broken cryptosystems. Engineers should go to jail. It\u0026amp;rsquo;s too dangerous. 
Do you think there should be smart grids at all?\nWell if the designs are based on digital security, …"},{"uri":"/stanford-blockchain-conference/2019/privacy-preserving-multi-hop-locks/","title":"Privacy Preserving Multi Hop Locks","content":"Privacy preserving multi-hop locks for blockchain scalability and interoperability\nhttps://twitter.com/kanzure/status/1091026195032924160\npaper: https://eprint.iacr.org/2018/472.pdf\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/multi-hop-locks/\nIntroduction This is joint work with my collaborators. Permissionless blockchains have some issues. The permissionless nature leads to the transaction rate that they have. This limits widespread adoption of blockchain technology like bitcoin …"},{"uri":"/scalingbitcoin/tel-aviv-2019/proof-of-necessary-work/","title":"Proof of necessary work: Succinct state verification with fairness guarantees","content":"Introduction I am going to show you proof of necessary work today. We use proof of work in a prototype. This is joint work with my collaborators.\nProblems We wanted to tackle the problem of bitcoin blockchain size increases. Every 10 minutes, a block is added to the blockchain and the blockchain grows linearly in size over time. Initial block download increases over time.\nProof of necessary work We use proof-of-work to verify transactions. We allow lite clients to verify state with minimal …"},{"uri":"/stanford-blockchain-conference/2020/proof-of-necessary-work/","title":"Proof of Necessary Work: Succinct State Verification with Fairness Guarantees","content":"https://twitter.com/kanzure/status/1230199429849743360\nhttps://twitter.com/kanzure/status/1230192993786720256\nSee also https://eprint.iacr.org/2020/190\nPrevious talk: https://btctranscripts.com/scalingbitcoin/tel-aviv-2019/proof-of-necessary-work/\nIntroduction Let\u0026amp;rsquo;s start with something familiar. When you have a new lite client, or any client that wants to trustlessly connect to the network and joins the network for the first time\u0026amp;hellip; they see this: they need to download a lot of data, …"},{"uri":"/tags/proof-of-payment/","title":"Proof of payment","content":""},{"uri":"/tags/proof-of-reserves/","title":"Proof of reserves","content":""},{"uri":"/w3-blockchain-workshop-2016/provenance-groups/","title":"Provenance Groups","content":"Give the name of your tech topic table. You have three minutes to summarize. When you have one minute to summarize, I will give you the one minute warning. When you have zero minutes left, you will be thrown out the window.\nMedia rights We talked about media rights and payments. The ability to figure out the incentive structure for whether we can compensate people who create page views with people who have view rights and how that all works. One of the ways this works out is \u0026amp;hellip; the …"},{"uri":"/bitcoin-core-dev-tech/2015-02/research-and-development-goals/","title":"R\u0026D Goals \u0026 Challenges","content":"We often see people saying they are testing the waters, they fixed a typo, they made a tiny little fix that doesn\u0026amp;rsquo;t impact much, they are getting used to the process. They are finding that it\u0026amp;rsquo;s really easy to contribut to Bitcoin Core. 
You code your changes, you submit your changes, there\u0026amp;rsquo;s not much to it.\nThere\u0026amp;rsquo;s a difference, and the lines are fuzzy and undefined, and you can make a change to Core that changes a spelling error or a change to policy or consensus rules, …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/r3/","title":"R3","content":"Distributed Ledger Group\nR3\nI work at R3. It manages the distributed ledger group, of 42 banks interested in using blockchain or distributed ledger tech. People wonder when I tell them that I work for a consortium that wants to use electronic ledgers. What does that have to do with capital markets?\nLedgers can track transactions. Typically in a capital market context, this responsibility falls to the backoffice. After a trade, you finalize it and actually transfer value. In te case of trading …"},{"uri":"/coindesk-consensus-2016/reaching-consensus-open-blockchains/","title":"Reaching Consensus on Open Blockchains","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nReaching consensus on open blockchains\nModerator- Pindar Wong\nGavin Andresen, MIT Digital Currency Initiative\nVitalik Buterin, Ethereum Foundation\nEric Lombrozo, Ciphrex and Bitcoin Core\nNeha Narula, MIT Media Lab Digital Currency Initiative\nPlease silence your cell phone during this session. Thank you. Please silence your cell phones during this session. Thank you. Ladies and gentlemen, please take your seats. The session is about to …"},{"uri":"/rebooting-web-of-trust/","title":"Rebooting Web Of Trust","content":" 2019 Prague "},{"uri":"/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/","title":"Redesigning Bitcoin Fee Market","content":"https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html\nhttps://www.reddit.com/r/Bitcoin/comments/72qi2r/redesigning_bitcoins_fee_market_a_new_paper_by/\npaper: https://arxiv.org/abs/1709.08881\nHe will be exploring alternative auction markets.\nHello. This is joint work with Aviv Zohar and I have just moved. And Ron Lavi. And I am going to be talking about tehcniques from\u0026amp;hellip; auction theory.. to rethink how fees in bitcoin are done.\nJust a fast recap of how the …"},{"uri":"/tags/redundant-overpayments/","title":"Redundant overpayments","content":""},{"uri":"/tags/replacement-cycling/","title":"Replacement cycling","content":""},{"uri":"/cryptoeconomic-systems/2019/reproducible-builds/","title":"Reproducible Builds","content":"Reproducible builds, binaries and trust\nCarl Dong (Chaincode Labs)\nIntroduction I am Carl and I work at Chaincode Labs and I will be talking about software, binaries and trust. I am happy that Ethan talked about his subject because it ties into my talk. For the purposes of this talk, assume that everything is perfect and there\u0026amp;rsquo;s no bugs and no CVEs and all the things that Ethan talked about don\u0026amp;rsquo;t exist. Basically, imagination. I am here to talk about that even if this is the case, …"},{"uri":"/scalingbitcoin/tokyo-2018/reproducible-lightning-benchmark/","title":"Reproducible Lightning Benchmark","content":"Reproducible lightning benchmark\nhttps://github.com/dgarage/LightningBenchmarks\nhttps://twitter.com/kanzure/status/1048760545699016705\nIntroduction Thanks for the introduction, Jameson. I feel like a rock star now. So yeah, I won\u0026amp;rsquo;t introduce myself. I call myself a code monkey the reason is that a lot of the talks in scaling bitcoin can surprise people but I understood very few of them. 
The reason is that I am more of an application developer. Basically, I try to understand a lot of the …"},{"uri":"/tags/responsible-disclosures/","title":"Responsible disclosures","content":""},{"uri":"/scalingbitcoin/montreal-2015/reworking-bitcoin-core-p2p-code-for-robustness-and-event-driven/","title":"Reworking Bitcoin Core P2P Code For Robustness And Event Driven","content":"That is me. I want to apologize right off. I am here to talk about reworking Bitcoin Core p2p code. This is a Bitcoin Core specific topic. I have been working on some software for the past few weeks to talk about working things rather than theory. The presentation is going to suffer because of this.\nBitcoin Core has a networking stack that dates back all the way to the initial implementation from Satoshi or at least it\u0026amp;rsquo;s been patched and hacked on ever since. It has some pretty serious …"},{"uri":"/speakers/riccardo-casatta/","title":"Riccardo Casatta","content":""},{"uri":"/speakers/richard-myers/","title":"Richard Myers","content":""},{"uri":"/speakers/robert-schwinker/","title":"Robert Schwinker","content":""},{"uri":"/speakers/rodrigo-buenaventura/","title":"Rodrigo Buenaventura","content":""},{"uri":"/speakers/roman-snitko/","title":"Roman Snitko","content":""},{"uri":"/speakers/ron-rivest/","title":"Ron Rivest","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/rootstock/","title":"Rootstock","content":"Sergio is co-founder and chief scientist at RSK Labs. The whitepaper was pretty good. I am happy that he is here. Thank you. Just give me a minute.\nAlicia Bendhan Lous Parker BitDevsNYC\nYes, I use powerpoint. Please don\u0026amp;rsquo;t read my emails.\nThanks everyone for staying here and not going to the next room. Thanks for the organizers for inviting me. Today I am going to talk about Rootstock. It\u0026amp;rsquo;s a codename for a smart contracting platform that we are developing for SKY labs, which is a …"},{"uri":"/scalingbitcoin/montreal-2015/roundgroup-roundup-1/","title":"Roundgroup Roundup 1","content":"Roundtable roundup review 1\nFuture of SPV Technology So hopefully I summarized this reasonably. There were a number of things that came up in our discussion. It would help SPV a lot if there were UTXO commitments in blocks. This can be done with a soft-fork. It has implementation cost difficulties. There are some ugly things you could do about how the trees are formed, so that you don\u0026amp;rsquo;t have to repopulate the whole thing every single block. There might be some happy place where the …"},{"uri":"/scalingbitcoin/montreal-2015/roundgroup-roundup-2/","title":"Roundgroup Roundup 2","content":"Communicating without official structures Roundgroup roundup day 2\nHi guys, okay I am going to try to find my voice here. I am losing it. I ran the session on communication without official structures. We went through the various channels through which people are communicating in this community like forums, blogs, irc, reddit, twitter, email, and we talked a little about them each. There\u0026amp;rsquo;s also the conferences and meetings. There are different forms of communication in text than in person. …"},{"uri":"/w3-blockchain-workshop-2016/royalties/","title":"Royalties","content":"Chris Tse\nMonegraph person\nI am going to talk about rights systems. A system that lets us trade rights, like to make movie off of a character, off of the things that Disney makes whenever I buy toys for my two year-old son. This is used in intellectual property. 
Monegraph is known as the media rights blockchain-ish company. What we discovered about how to model rights, but why is it important, kind of extending and maybe contradicting the point further that the rights are not just the identity …"},{"uri":"/speakers/ryan-selkis/","title":"Ryan Selkis","content":""},{"uri":"/misc/safecurves-choosing-safe-curves-for-elliptic-curve-cryptography-2014/","title":"Safecurves - Choosing Safe Curves For Elliptic Curve Cryptography (2014)","content":"Daniel J. Bernstein (djb) and Tanja Lange\nSchmooCon 2014\nvideo 2 (same?): https://www.youtube.com/watch?v=maBOorEfOOk\u0026amp;amp;list=PLgO7JBj821uGZTXEXBLckChu70kl7Celh\u0026amp;amp;index=91\nVideo intro text There are several different standards covering selection of curves for use in elliptic-curve cryptography (ECC). Each of these standards tries to ensure that the elliptic-curve discrete-logarithm problem (ECDLP) is difficult. ECDLP is the problem of finding an ECC user\u0026amp;rsquo;s secret key, given the …"},{"uri":"/speakers/saifedean-ammous/","title":"Saifedean Ammous","content":""},{"uri":"/speakers/sandra-ro/","title":"Sandra Ro","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/peter-todd-scalability/","title":"Scalability","content":"There we go. Yeah. Looks good. Thank you.\nSo I wanted to talk about scalability and I wanted to give two opposing views of it. The first is really, scaling Bitcoin up is really really easy. The underlying architecture of Bitcoin is designed in a way that makes it easy to scale. You have your blockchain, and you have your blocks at the top which are 80 bytes data structures. They form a massive long chain all the way back to the genesis block when Bitcoin was created. For each block, those blocks …"},{"uri":"/scalingbitcoin/milan-2016/timestamping/","title":"Scalable and Accountable Timestamping","content":"slides https://scalingbitcoin.org/milan2016/presentations/D1%20-%20A%20-%20Riccardo.pdf\nhttps://twitter.com/kanzure/status/784773221610586112\nHi everybody. I attended Montreal and Hong Kong. It\u0026amp;rsquo;s a pleasurable to be here in Milan giving a talk about scaling bitcoin. I am going to be talking about scalable and accountable timestamping. Why timestamping at this conference? Then I will talk about aggregating timestamps. Also there needs to be a timestamping proof format. I will describe two …"},{"uri":"/stanford-blockchain-conference/2020/scalable-rsa-modulus-generation/","title":"Scalable RSA Modulus Generation with Dishonest Majority","content":"https://twitter.com/kanzure/status/1230545603605585920\nIntroduction Groups of unknown order are interesting cryptographic primitives. You can commit to integers. If you commit to a very large integer, you have to do a lot of work to do the commitment so you have a sequentiality assumption useful for VDFs. But then you have this compression property where if you take a large integer with a lot of information in it, you can compress it down to something small. It also has a nice additive …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/scaling-debate-is-a-proxy-battle-over-centralization/","title":"Scaling Debate Is A Proxy Battle Over Centralization","content":"I really appreciated the fact that\u0026amp;hellip;. I really appreciated Cory\u0026amp;rsquo;s talk. I quoted someone in my talk. It was a fairly recent quote. His whole talk was speaking about governance. There\u0026amp;rsquo;s a lot of governing issues coming into Bitcoin. 
We like the solving the whole world with technical.\nThe word that I am fascinated with is the one highlighted here, the word \u0026amp;ldquo;consensus\u0026amp;rdquo;. I\u0026amp;rsquo;m just going to reach out to you guys. Does anyone want to make a stand at telling me what …"},{"uri":"/speakers/scott-manuel/","title":"Scott Manuel","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/scriptless-lotteries/","title":"Scriptless Lotteries","content":"Scriptless lotteries on bitcoin from oblivious transfer\nLloyd Fournier (lloyd.fourn@gmail.com)\nhttps://twitter.com/kanzure/status/1171717583629934593\nIntroduction I have no affiliations. I\u0026amp;rsquo;m just some guy. You\u0026amp;rsquo;re trusting a guy to talk about cryptography who got the date wrong. We\u0026amp;rsquo;re doing scriptless bitcoin lotteries from oblivious transfer.\nLotteries Imagine a trusted party who conducts lotteries between Alice and Bob. The goal of the lottery protocol is to use a …"},{"uri":"/grincon/2019/scriptless-scripts-with-mimblewimble/","title":"Scriptless Scripts With Mimblewimble","content":"Introduction Hi everyone. I am Andrew Poelstra. I am the research director at Blockstream. I want to talk about deploying scriptless scripts, wihch is something I haven\u0026amp;rsquo;t talked much about over the past year or two.\nHistory Let me give a bit of history about mimblewimble. As many of you know, this was dead-dropped anonymously in the middle of 2016 by someone named Tom Elvis Jedusor which is the French name for Voldemort. It had no scripts in it. The closest thing to bitcoin script were …"},{"uri":"/speakers/sean-neville/","title":"Sean Neville","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/secure-fountain-architecture/","title":"Secure Fountain Architecture","content":"A secure fountain architecture for slashing storage costs in blockchains\nSwanand Kadhe (UC Berkeley)\nhttps://twitter.com/kanzure/status/1172158545577594880\npaper: https://arxiv.org/abs/1906.12140\nIntroduction I know this is the last formal talk of the day so thank you so much for being here. This is joint work with my collaborators.\nIs storage really an issue for bitcoin? Bitcoin\u0026amp;rsquo;s blockchain size is only 238 GB. So why is storage important? But blockchain size is a growing problem. …"},{"uri":"/breaking-bitcoin/2019/security-attacks-decentralized-mining-pools/","title":"Security and Attacks on Decentralized Mining Pools","content":"https://twitter.com/kanzure/status/1137373752038240262\nIntroduction Hi, I am Alexei. I am a research student at Imperial College London and I will be talking about the security of decentralized mining pools.\nMotivation The motivation behind this talk is straightforward. I think we can all agree that to some extent the hashrate in bitcoin has seen some centralization around large mining pools, which undermines the base properties of censorship resistance in bitcoin which I think bitcoin solves in …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/security-and-usability/","title":"Security And Usability","content":" Melanie Shapiro Alan Reiner Kristov Atlas Elizabeth Stark (moderator) and maybe Melanie Shapiro? malani? Elizabeth: I think this is a highly relevant discussion. I talked a bit about regulation. Regulation might be the number threat, but security might be the number two threat. I am excited to have a panel of leaders in this space who are actively building in this ecosystem. 
I am going to ask each to do a quick intro.\nAlan: I am the original creator of Armory and the Armory Bitcoin wallet. Last …"},{"uri":"/breaking-bitcoin/2019/lightning-network-routing-security/","title":"Security Aspects of Lightning Network Routing","content":"D146 D0F6 8939 4362 68FA 9A13 0E26 BB61 B76C 4D3A\nhttps://twitter.com/kanzure/status/1137740282139676672\nIntroduction Together with my colleagues I am building lnd. It\u0026amp;rsquo;s one of the implementations of lightning. In routing, there\u0026amp;rsquo;s the sender of the payment, the intermediate routing nodes that forward payments, and a receiver. I would like to scope this down to looking at security just from the perspective of the sender.\nGoals of LN Why are we doing all of this? The thing we\u0026amp;rsquo;re …"},{"uri":"/scalingbitcoin/hong-kong-2015/security-assumptions/","title":"Security Assumptions","content":"Hi, welcome back.\nI am a developer for libsecp256k1. It\u0026amp;rsquo;s a library that does the underlying traditional cryptography used in Bitcoin. I am going to talk about security assumptions, security models and trust models. I am going to give a high-level overview of how we should be thinking about these issues for scaling and efficiency and decentralization. Bitcoin is a crypto system. Everything about it is a crypto system. It needs to be designed with an adversarial mindset, even in an …"},{"uri":"/scalingbitcoin/hong-kong-2015/segregated-witness-and-its-impact-on-scalability/","title":"Segregated Witness And Its Impact On Scalability","content":"slides: https://prezi.com/lyghixkrguao/segregated-witness-and-deploying-it-for-bitcoin/\ncode: https://github.com/sipa/bitcoin/commits/segwit\nSPV = simplified payment verification\nCSV = checksequenceverify\nCLTV = checklocktimeverify\nSegregated witness (segwit) and deploying it for Bitcoin\nOkay. So I am Pieter Wuille. I\u0026amp;rsquo;ll be talking about segregated witness for Bitcoin. Before I can explain this, I want to give some context. We all know how bitcoin transactions work. Every bitcoin …"},{"uri":"/scalingbitcoin/tokyo-2018/self-reproducing-coins-as-universal-turing-machine/","title":"Self Reproducing Coins As Universal Turing Machine","content":"paper: https://arxiv.org/abs/1806.10116\nIntroduction This is joint work with my colleagues. I am Alexander Chepurnoy. We work for Ergo Platform. Vasily is an external member. This talk is about turing completeness in the blockchain. This talk will be pretty high-level. This talk is based on already-published paper, which was presented at the CBT conference. Because of copyright agreements with Springer, you will not find the latest version on arxiv.\nThe general question of scalability is how can …"},{"uri":"/rebooting-web-of-trust/2019-prague/self-sovereign-identity-ideology-and-architecture/","title":"Self Sovereign Identity Ideology And Architecture","content":"https://twitter.com/kanzure/status/1168566218040762369\nParalelni Polis First I\u0026amp;rsquo;d like to introduce the space we\u0026amp;rsquo;re in right now. Welcome to Paralelni Polis. It is an education organization. It\u0026amp;rsquo;s a first organization in the world that is running fully on crypto. It is running solely on cryptoeconomics and donations. I\u0026amp;rsquo;d like to welcome you to the event called Rebooting the Web of Trust. This is the 9th event. This is its first time in Prague. 
We want to spread awareness …"},{"uri":"/speakers/sergej-kotliar/","title":"Sergej Kotliar","content":""},{"uri":"/speakers/sergio-lerner/","title":"Sergio Lerner","content":""},{"uri":"/speakers/shafi-goldwasser/","title":"Shafi Goldwasser","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/shamir-secret-sharing/","title":"Shamir Secret Sharing","content":"Discussion https://twitter.com/kanzure/status/1168891841745494016\nChris Howe make a damn good start of a C version of Shamir secret sharing. We need to take it another level. This could be another project that we could engage on. We also didn\u0026amp;rsquo;t finish the actual paper, as such. I\u0026amp;rsquo;d like to get both of those things resolved.\nSatoshiLabs is saying that as far as they are concerned SLIP 39 is done and they are not interested in feedback or changing it or adapting it, at the words level. …"},{"uri":"/speakers/shayan-eskandari/","title":"Shayan Eskandari","content":""},{"uri":"/speakers/shehzan-maredia/","title":"Shehzan Maredia","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/siacoin/","title":"Siacoin","content":"I have been working on a decentralized cloud storage platform. Before I go into that, I wanted to mention that I have been working on Bitcoin since 2011. Since 2013 I have been in the wizards channel. In 2014 a friend and myself founded a company to build Siacoin. The team has grown to 3 people. Sia is the name of our decentralized storage protocol. We are trying to emulate Amazon S3. You give your data to Amazon, and then they hold it for you. We want low latency high throughput.\nWe had a bunch …"},{"uri":"/tags/side-channels/","title":"Side channels","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/matt-corallo-sidechains/","title":"Sidechains","content":"One second. Technical difficulties. Okay, sorry about that. Okay. Hi. I am Matt or BlueMatt for those of you know me. I swear it\u0026amp;rsquo;s not related to my hair. Nobody believes me.\nI got into Bitcoin three or four years ago because I was interested in the programming and computer science aspects and as well as my interest in large-scale economic incentive structures and macroeconomics and how we create incentive structures that create actions on a low level by rational actors. So this talk is a …"},{"uri":"/scalingbitcoin/milan-2016/sidechains/","title":"Sidechains","content":"https://twitter.com/kanzure/status/784767020290150400\nslides https://scalingbitcoin.org/milan2016/presentations/D1%20-%209%20-%20Paul.pdf\nBefore we begin, we will explain how the workshops will work upstairs. There will be some topics and room numbers. Now we will start the next presentation on sidechain scaling with Paul Sztorc. Thank you everyone.\nThanks a lot. So this talk will be a little different. Scaling via strategy, not physics. This will not change the kilobytes sent over\u0026amp;ndash; it …"},{"uri":"/tags/signer-delegation/","title":"Signer delegation","content":""},{"uri":"/speakers/simon-peffers/","title":"Simon Peffers","content":""},{"uri":"/speakers/skot-9000/","title":"Skot 9000","content":""},{"uri":"/w3-blockchain-workshop-2016/smart-signatures/","title":"Smart Signatures","content":"Christopher Allen\nSmart signatures and smarter signatures\nSignatures are a 50+ year old technology. I am talking about the digital signature. It\u0026amp;rsquo;s a hash of an object that has been encrypted by private key and verified by a public key against the hash of the object. 
All of our systems are using this in a varity of complex ways at high level to either do identity, sign a block, or do a variety of different things. The idea behind smarter signatures is can we allow for more functionality. …"},{"uri":"/scalingbitcoin/montreal-2015/snarks/","title":"Snarks","content":"some setup motivating topics:\npracticality pcp theorem trustless setup magic away the trusted setup schemes without trusted setup pcp theorem is the existent proof that the spherical cow exists. people:\nAndrew Miller (AM) Madars Virza (MV) Andrew Poelstra (AP) Bryan Bishop (BB) Nathan Wilcox ZZZZZ: zooko gmaxwell\u0026amp;rsquo;s ghost (only in spirit) SNARKs always require some sort of setup. PCP-based SNARKs can use random oracle assumption instantiated by hash function, sha256 acts as the random …"},{"uri":"/simons-institute/snarks-and-their-practical-applications/","title":"Snarks And Their Practical Applications","content":"Joint works with:\nEli Ben-Sasson Matthew Green Alessandro Chiesa Ian Miers Christina Garman Madars Virza Daniel Genkin Thank you very much. I would like to thank the organizers for giving this opportunity to learn so much from my peers and colleagues some of which are here. This is a wonderful survey of many of the works in this area. As I was sitting here, I was in real-time adjusting my slides to minimize the overlap. I will cut down on the amount of introduction because what we just saw (Mike …"},{"uri":"/stanford-blockchain-conference/2020/solving-data-availability-attacks-using-coded-merkle-trees/","title":"Solving Data Availability Attacks Using Coded Merkle Trees","content":"Coded Merkle Tree: Solving Data Availability Attacks in Blockchains\nhttps://eprint.iacr.org/2019/1139.pdf\nhttps://twitter.com/kanzure/status/1230984827651772416\nIntroduction I am going to talk about blockchain sharding via data availability. Good evening everybody. I am Sreeram Kannan. This talk is going to be on how to scale blockchains using data availability proofs. The work in this talk is done in collaboration with my colleagues including Fisher Yu, Songza Li, David Tse, Vivek Bagaria, …"},{"uri":"/cryptoeconomic-systems/2019/solving-the-blockchain-trilemma/","title":"Solving The Blockchain Trilemma","content":"\u0026amp;ndash; Disclaimer \u0026amp;ndash;\nThese are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.\nI sometimes add annotations to the transcription text. These will always be denoted by a standard …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/some-questions-for-bitcoiners/","title":"Some Questions For Bitcoiners","content":"Joi Ito\nHe is the director of the MIT Media Lab. He served as the chairman of the Creative Commons and is on the board of the Mozilla Foundation.\nJust so that I can get a sense of the room, how many people here would say that you are technical? Most of you. How many of you were on the cypherpunks mailing list? How many of you have been threatened by Timothy May? How many of you know Timothy May? Okay.\nIt\u0026amp;rsquo;s interesting. If you go back, this isn\u0026amp;rsquo;t really a new thing. 
I have been paying …"},{"uri":"/speakers/soumya-basu/","title":"Soumya Basu","content":""},{"uri":"/dallas-bitcoin-symposium/sound-money/","title":"Sound Money","content":"Introduction In 2014, Michael appeared on MSNBC and gave this quote: \u0026amp;ldquo;I\u0026amp;rsquo;d only recommend using altcoins for speculation purposes if you really love risk, you\u0026amp;rsquo;re absolutely in love with risk, and you\u0026amp;rsquo;re interested in watching money disappear.\u0026amp;rdquo; Today my talk is on sound money for the digital age. The concept of monetary economics is the reason why I got interested in bitcoin back in 2012. I still tihnk that\u0026amp;rsquo;s the important thing that makes bitcoin important in …"},{"uri":"/tags/spontaneous-payments/","title":"Spontaneous payments","content":""},{"uri":"/stanford-blockchain-conference/2019/spork-probabilistic-bitcoin-soft-forks/","title":"Spork: Probabilistic Bitcoin Soft Forks","content":"Introduction Thank you for the introduction. I have a secret. I am not an economist, so you\u0026amp;rsquo;ll have to forgive me if I don\u0026amp;rsquo;t have your favorite annotations today. I have my own annotations. I originally gave a predecessor to this talk in Japan. You might notice my title here is a haiku: these are protocols used for changing the bitcoin netwokr protocols. People do highly value staying together for a chain, we\u0026amp;rsquo;re doing coordinated activation of a new upgrade.\nDining …"},{"uri":"/speakers/sreeram-kannan/","title":"Sreeram Kannan","content":""},{"uri":"/stanford-blockchain-conference/2020/stark-for-developers/","title":"STARK For Developers","content":"https://twitter.com/kanzure/status/1230279740570783744\nIntroduction It always seems like there\u0026amp;rsquo;s a lot of different proof systems, but it turns out they really work well together and there\u0026amp;rsquo;s great abstractions that are happening where we can take different tools and plug them together. We will see some of those in this session. I think STARKs were first announced 3 years ago back when this conference was BPASE, and now they are ready to use for developers and they are deployed in the …"},{"uri":"/stanford-blockchain-conference/2019/state-channels/","title":"State Channels","content":"State channels as a scaling solution for cryptocurrencies\nhttps://twitter.com/kanzure/status/1091042382072532992\nIntroduction I am an assistant professor at King\u0026amp;rsquo;s College London. There are many collaborators on this project. Also IC3. Who here has heard of state channels before? Okay, about half of the audience. Let me remove some misconceptions around them.\nScalability problems in bitcoin Why are state channels necessary for cryptocurrencies? Cryptocurrencies do not scale. Bitcoin …"},{"uri":"/coindesk-consensus-2016/state-of-blockchain/","title":"State Of Blockchain","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nI am at the Cambridge center of alternative finance. I am also the founde rof\u0026amp;hellip; macroeconomics if you will\u0026amp;hellip; very proud to have created and \u0026amp;hellip; this is our .. we started this in 2014, this is a snapshot of what will be coming out over the next few days. If you are worried about the speed that I will be going through these, it will be online soon.\nSo\u0026amp;hellip; I am going to try to cover 4 things. 
I wnat to provide a general …"},{"uri":"/bit-block-boom/2019/state-of-multisig/","title":"State Of Multisig","content":"State of multisig for bitcoin hodlers: Better wallets with PSBT\nhttps://twitter.com/kanzure/status/1162769521993834496\nIntroduction I am going to be talking about the state of multisig. It\u0026amp;rsquo;s a way of creating a transaction output that requires multiple keys to sign. This is a focus for people holding bitcoin. I\u0026amp;rsquo;ll talk about some of the benefits and some of the issues. I am not a security expert, though.\nPerspective Multisig is an emphasis on saving and not spending a lot. This is …"},{"uri":"/scalingbitcoin/tokyo-2018/statechains/","title":"Statechains: Off-chain transfer of UTXOs","content":"I am the education director of readingbitcoin.org, and I am going to be talking about statechains for off-chain transfer of UTXOs.\nhttps://twitter.com/kanzure/status/1048799338703376385\nStatechains This is another layer 2 scaling solution, by avoiding on-chain transactions. It\u0026amp;rsquo;s similar to lightning network. The difference is that coin movement is not restricted. You deon\u0026amp;rsquo;t have these channels where you have to have a path and send an exact amount. There\u0026amp;rsquo;s some synergy with …"},{"uri":"/tags/stateless-invoices/","title":"Stateless invoices","content":""},{"uri":"/tags/static-channel-backups/","title":"Static channel backups","content":""},{"uri":"/speakers/stefan-dziembowski/","title":"Stefan Dziembowski","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/andrew-miller/","title":"Step by Step Towards Writing a Safe Contract: Insights from an Undergraduate Ethereum Lab","content":"Next up I want to introduce Elaine Shi and Andrew Miller from University of Maryland. Andrew is also on the zerocash team.\nOkay. I am an assistant professor at the University of Maryland. We all believe that cryptocurrency is the way of the future and smart contracts. So students in the future have to program smart contracts.\nSo that\u0026amp;rsquo;s what I did. I asked some students to program some smart contracts. I am going to first give some quick background and then I will talk about the insights …"},{"uri":"/speakers/stephanie-hurder/","title":"Stephanie Hurder","content":""},{"uri":"/stanford-blockchain-conference/2020/streamlet/","title":"Streamlet: Textbook Streamlined Blockchain Protocols","content":"((\u0026amp;hellip;. stream went offline in the other room, had to catch up.))\nShow you new consensus protocol called streamlet. We think it is remarkably simple intuitive. This is quite a bold claim, hopefully the protocol will speak for itself\nIncredibly subtle, hard to implement in practice. We want to take this reputation and sweep it out the door.\nJump right in. First model the consensus problem. Motivating simplicity as a goal. Spend the bulk of talk, line by line through the protocol By the end, …"},{"uri":"/scalingbitcoin/montreal-2015/stroem-payment-channels/","title":"Stroem Payment Channels","content":"Stroem\nJarl Fransson (Strawpay)\n7 transactions per second, or is it 3? That\u0026amp;rsquo;s about what we can do with bitcoin. What if you need 100,000 transactions per second? I come from Strawpay, we\u0026amp;rsquo;ve been looking into bitcoin for a long time. When we learned about scripting and the power of contracts, like payment channels, we thought maybe it\u0026amp;rsquo;s time to do micropayments and microtransactions. 
To give you some perspective on scale, let\u0026amp;rsquo;s say that 1 billion people make 25 …"},{"uri":"/speakers/swanand-kadhe/","title":"Swanand Kadhe","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/swashbuckling-safety-training/","title":"Swashbuckling Safety Training","content":"Swashbuckling safety training with decentralized identifiers and verifiable credentials\nKim Hamilton Duffy, kimdhamilton\nhttps://twitter.com/kanzure/status/1168573225065951235\nIntroduction I was very excited to hear that I was speaking right after someone from the Pirate Party. We\u0026amp;rsquo;re part of the Digital Credentials Effort led by MIT and 12 other major universities. We will have a whitepaper out at the end of September. I am a W3c credentials community group co-chair. Formerly CTO of …"},{"uri":"/bitcoin-core-dev-tech/2015-02/jeremy-allaire-circle/","title":"Talk by the founders of Circle","content":"We are excited to be here and sponsoring this event. We have backgrounds in working on developer tools that goes back to the early days of something.\nHow do we mature the development of Bitcoin Core itself? One of the things that is useful is suss out the key components of it. In a standard you have a spec, it could be a whitepaper, and then you have a reference implementation, and then a test suite that enforces interoperability. The test suite is what enforces the standard. It\u0026amp;rsquo;s not the …"},{"uri":"/bit-block-boom/2019/taproot-schnorr-soft-fork/","title":"Taproot, Schnorr, and the next soft-fork","content":"https://twitter.com/kanzure/status/1162839000811548672\nIntroduction I am going to speak about this potential next version of bitcoin. Who here has heard of these words? Taproot? Schnorr? Who has heard these buzzwords these days? Yeah, so, the proposed next soft-fork is going to involve both of these technologies sort of interplaying. We want to talk about what are these technologies and how do we use thse things and why is it important that a bitcoin user or holder understands these …"},{"uri":"/tags/testnet/","title":"Testnet","content":""},{"uri":"/texas-bitcoin-conference-2014/","title":"Texas Bitcoin Conference","content":" Gox (2014) "},{"uri":"/dallas-bitcoin-symposium/texas-energy-market/","title":"Texas Energy Market","content":"Bitcoin mining\nBitcoin has been one of the most interesting rides of my life. I come from an oil/gas family. I lived in Midland after college. I became very interested in monetary policy, inflation and its effects on the world. Through that, really discovered bitcoin and what it is capable of doing. I come from the energy perspective and Austrian school of economics.\nMining This is our first large scale project\u0026amp;ndash; that\u0026amp;rsquo;s a 100 MW substation. That\u0026amp;rsquo;s a lot of power. We consume a …"},{"uri":"/speakers/thang-n.-dinh/","title":"Thang N. Dinh","content":""},{"uri":"/baltic-honeybadger/2018/the-b-foundation/","title":"The B Foundation","content":"The B Foundation\nhttps://twitter.com/kanzure/status/1043802179004493825\nI think most people here have some idea of who I am. I am a long-term bitcoiner. I\u0026amp;rsquo;ve been in bitcoin since 2010. I love bitcoin. I am passionate about it. I want to see it grow and prosper. The positive thing about bitcoin is that it has a resilient ecosystem. It doesn\u0026amp;rsquo;t need any CEO. It doesn\u0026amp;rsquo;t need any centralized organization and it doesn\u0026amp;rsquo;t need any central point to direct where it is going. 
It …"},{"uri":"/baltic-honeybadger/2018/the-bitcoin-standard/","title":"The Bitcoin Standard","content":"The bitcoin standard as a layered scaling solution\nhttps://twitter.com/kanzure/status/1043425514801844224\nHello, everyone. Can everyone hear me? Okay, wonderful. You can\u0026amp;rsquo;t see my slides, can you. You have to share the screen if you want them to see your slides. How do I do that? Where is that? This is new skype, I\u0026amp;rsquo;m sorry. Thank you everyone for inviting me to speak today. It would be great to join you, but unforutnately I couldn\u0026amp;rsquo;t make it.\nI want to describe how I see bitcoin …"},{"uri":"/scalingbitcoin/tokyo-2018/bitcoin-script/","title":"The evolution of bitcoin scripting","content":"Agenda opcodes OP_CHECKSIGFROMSTACK sighash flags keytree sigs MAST graftroot, taproot covenants, reverse covenants (input restrictions: this input has to be spent with tihs other input, or can only be spent if this other one doesn\u0026amp;rsquo;t exist) stack manipulation script languages (simplicity) tx formats, serialization? what would we change with hard-fork changes to the transaction format? Segwit transaction encoding format sucks; the witnesses are at the end and inline with all the scriptsigs …"},{"uri":"/baltic-honeybadger/2018/the-future-of-bitcoin-smart-contracts/","title":"The Future Of Bitcoin Smart Contracts","content":"https://twitter.com/kanzure/status/1043419056492228608\nHey guys, next talk in 5 minutes. In five minutes.\nIntroduction Hello everyone. If you\u0026amp;rsquo;re walking, please do it in silence. I am a bit nervous. I was too busy organizing this conference and didn\u0026amp;rsquo;t get time for this talk. If you had high expectatoins for this presentation, then please lower them for the next 20 minutes. I was going to talk about the traditional VC models and why it doesn\u0026amp;rsquo;t work in the bitcoin industry. They …"},{"uri":"/baltic-honeybadger/2018/the-future-of-bitcoin-wallets/","title":"The Future Of Bitcoin Wallets","content":"1 on 1: The future of bitcoin wallets\nhttps://twitter.com/kanzure/status/1043445104827084800\nGZ: Thank you very much. We\u0026amp;rsquo;re going to talk about the future of bitcoin wallets. As you know, it\u0026amp;rsquo;s a very central topic. We always compare bitcoin to the internet. Wallets are basically like browsers like back at the beginning of the web. They are the first gateway to a user experience for users on bitcoin. It\u0026amp;rsquo;s important to see how they will evolve. We have two exceptional …"},{"uri":"/breaking-bitcoin/2019/future-of-hardware-wallets/","title":"The Future of Hardware Wallets","content":"D419 C410 1E24 5B09 0D2C 46BF 8C3D 2C48 560E 81AC\nhttps://twitter.com/kanzure/status/1137663515957837826\nIntroduction We are making a secure hardware platform for developers so that they can build their own hardware wallets. Today I want to talk about certain challenges for hardware wallets, what we\u0026amp;rsquo;re missing, and how we can get better.\nCurrent capabilities of hardware wallets Normally, hardware wallets keep keys reasonably secret and are able to spend coins or sign transactions. All the …"},{"uri":"/baltic-honeybadger/2018/the-future-of-lightning/","title":"The Future Of Lightning","content":"The future of lightning\nThe year of #craeful and the future of lightning\nhttps://twitter.com/kanzure/status/1043501348606693379\nIt\u0026amp;rsquo;s great to be back here in Riga. 
Let\u0026amp;rsquo;s give a round of applause to the event organizers and everyone brought us back here. This is the warmest time I\u0026amp;rsquo;ve ever been in Riga. It\u0026amp;rsquo;s been great. I want to come back in the summers.\nIntroduction I am here to talk about what has happened in the past year and the future of what we\u0026amp;rsquo;re going to see …"},{"uri":"/scalingbitcoin/tokyo-2018/ghostdag/","title":"The GHOSTDAG protocol","content":"paper: https://eprint.iacr.org/2018/104.pdf\nIntroduction This is joint work with Yonatan. I am going to talk about a DAG-based protocol to scale on-chain. The reason why I like this protocol is because it\u0026amp;rsquo;s a very simple generalization of the longest-chain protocol that we all know and love. It gives some insight into what the role of proof-of-work is in these protocols. I\u0026amp;rsquo;ll start with an overview of bitcoin protocol because I want to contrast against it.\nBitcoin\u0026amp;rsquo;s consensus …"},{"uri":"/stanford-blockchain-conference/2020/libra-blockchain-intro/","title":"The Libra Blockchain \u0026 Move: A technical introduction","content":"https://twitter.com/kanzure/status/1230248685319024641\nIntroduction We\u0026amp;rsquo;re building a new wallet for the system, Calibre, which we are launching. A bit about myself before we get started. I have been at Facebook for about 10 years now. Before Facebook, I was one of the co-founders of reCaptcha- the squiggly letters you type in before you login and register. I have been working at Facebook on performance, reliability, and working with web standards to make the web faster.\nAgenda To start, I …"},{"uri":"/stanford-blockchain-conference/2020/optimistic-vm/","title":"The optimistic VM","content":"https://twitter.com/kanzure/status/1230974707249233921\nIntroduction Okay, let\u0026amp;rsquo;s get started our afternoon session. Our first talk I am very happy to introduce our speaker on optimistic virtual machines. Earlier in this conference we heard about optimistic roll-ups and I\u0026amp;rsquo;m looking forward to this talk on optimistic virtual machines which is an up-and-coming approach on doing this.\nWhy hello everyone. I am building on ethereum and in particular we\u0026amp;rsquo;re going to make Optimism: the …"},{"uri":"/baltic-honeybadger/2018/the-reserve-currency-fallacy/","title":"The Reserve Currency Fallacy","content":"The reserve currency fallacy\nhttps://twitter.com/kanzure/status/1043385469134925824\nThank you. Developers, developers, developers, developers. Alright, it wasn\u0026amp;rsquo;t that bad. There\u0026amp;rsquo;s a lot of content to explain this concept of the reserve currency fallacy. It\u0026amp;rsquo;s hard to get through it in the amount of available time. I\u0026amp;rsquo;ll be available at the party tonight. I want to go through four slides and talk about the history of this question of scaling, and then the first scaling …"},{"uri":"/stanford-blockchain-conference/2019/stark-dex/","title":"The STARK truth about DEXes","content":"https://twitter.com/kanzure/status/1090731793395798016\nWelcome to the first session after lunch. We call this the post-lunch session. The first part is on STARKs by Eli Ben-Sasson.\nIntroduction Thank you very much. Today I will be talking about STARK proofs of DEXes. I am chief scientist at Starkware. We\u0026amp;rsquo;re a one-year startup in Israel that has raised $40m and we have a grant from the Ethereum Foundation. We have 20 team members. Most of them are engineers. 
Those who are not engineers are …"},{"uri":"/scalingbitcoin/tel-aviv-2019/threshold-scriptless-scripts/","title":"Threshold Scriptless Scripts","content":"Omer Shlomovits (KZen Research) (omershlomovits, zengo)\nhttps://twitter.com/kanzure/status/1171690582445580288\nScriptless scripts We can write smart contracts in bitcoin today, but there\u0026amp;rsquo;s some limitations. We can do a lot of great things. We can do atomic swaps, multisig, payment channels, etc. There\u0026amp;rsquo;s a cost, though. Transactions become bigger with bigger scripts. Also, it\u0026amp;rsquo;s kind of heavy on the verifiers. The verifiers now need to verify more things. There\u0026amp;rsquo;s also …"},{"uri":"/stanford-blockchain-conference/2019/proofs-of-space-and-replication/","title":"Tight Proofs of Space and Replication","content":"https://twitter.com/kanzure/status/1091064672831234048\npaper: https://eprint.iacr.org/2018/702.pdf\nIntroduction This talk is about proofs of space and tight proofs of space.\nProof of Space A proof-of-space is an alternative to proof-of-work. Applications have been proposed like spam prevention, DoS attack prevention, sybil resistance in consensus networks which is most relevant to this conference. In proof-of-space, it\u0026amp;rsquo;s an interactive protocol between a miner and a prover who has some …"},{"uri":"/speakers/tim-roughgarden/","title":"Tim Roughgarden","content":""},{"uri":"/tags/time-warp/","title":"Time warp","content":""},{"uri":"/simons-institute/todo/","title":"Todo","content":" cryptography bootcamp https://www.youtube.com/playlist?list=PLgKuh-lKre139cwM0pjuxMa_YVzMeCiTf mathematics of modern cryptography https://www.youtube.com/playlist?list=PLgKuh-lKre10kb0so1PQ3eNz4Q6EBfecT historical papers in cryptography https://www.youtube.com/playlist?list=PLgKuh-lKre13lX4C_5ZCQKMDOniAn24NP securing computation https://www.youtube.com/playlist?list=PLgKuh-lKre12ddBkIB8wNC8D1kag5dEr- "},{"uri":"/cryptoeconomic-systems/2019/token-journal/","title":"Token Journal","content":"\u0026amp;ndash; Disclaimer \u0026amp;ndash;\nThese are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.\nI sometimes add annotations to the transcription text. These will always be denoted by a standard …"},{"uri":"/speakers/tone-vays/","title":"Tone Vays","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/topics/","title":"Topics","content":"Rebooting Web of Trust topic selection\nDictionary terms: we have a glossary that I wrote 6 months ago which by definition glosses everything. A dictionary would give an opportunity to go into more depth, to look at disagreements without getting lost in the weeds, and also talk about some foundational assumptions.\nVerifiable secret sharing: this is a modification for Shamir secret sharing that also incorporates multisignatures as well. We\u0026amp;rsquo;re interested in solidifying that into an actual …"},{"uri":"/baltic-honeybadger/2018/trading-panel/","title":"Trading Panel","content":"MM: As Trace Mayer says, it\u0026amp;rsquo;s chasing the rabbit. Why not introduce ourselves?\nTV: My name is Tone Vays. I come from the traditional Wall Street trading environment. I joined the crypto space in 2013. 
I spoke at my first conference and wrote my first article in Q1 2014. I was doing some trading. With the popularity of the youtube channel and going to conferences, I went back to trading options in traditional markets. I\u0026amp;rsquo;ll get back to trading crypto soon, but I have plenty of …"},{"uri":"/tags/transaction-bloom-filtering/","title":"Transaction bloom filtering","content":""},{"uri":"/tags/transaction-origin-privacy/","title":"Transaction origin privacy","content":""},{"uri":"/stanford-blockchain-conference/2020/transparent-dishonesty/","title":"Transparent Dishonesty: front-running attacks on Blockchain ","content":"https://twitter.com/kanzure/status/1231005309310627841\nIntroduction The last two talks are going to be about frontrunning. One of the features of blockchain is that they are slow but this also opens them up to vulnerabilities such that I can insert my transactions before other people and this could have bad consequences and Shayan is going to explore this a little bit.\nMy talk will not be as intense as the previous talk, but I\u0026amp;rsquo;m going to have more pictures. I am a PhD candidate at …"},{"uri":"/stanford-blockchain-conference/2020/transparent-snarks-from-dark-compilers/","title":"Transparent SNARKs from DARK compilers","content":"https://twitter.com/kanzure/status/1230561492254089224\npaper: https://eprint.iacr.org/2019/1229.pdf\nIntroduction Our next speaker is Ben Fisch, part of the Stanford Applied Crypto team and he has done a lot of work that is relevant to the blockchain space. Proofs of replication, proofs of space, he is also one of the coauthors of a paper that defined the notion of verifiable delay functions. He has also worked on accumulators, batching techniques, vector commitments, and one of his latest works …"},{"uri":"/cryptoeconomic-systems/2019/trust-and-blockchain-marketplaces/","title":"Trust And Blockchain Marketplaces","content":"Introduction I founded Sia, a decentralized storage platform started in 2014. Today, Sia is the only decentralized storage platform out there. I also founded Obelisk which is a mining equipment manufacturing company.\nMining supply chains are centralized Basically all the mining chips are coming out of TSMC or Samsung. For the longest time it was TSMC but for some reason Samsung is at the forefront for this cycle but I would expect them to fade out in the next cycle. There\u0026amp;rsquo;s just two, and …"},{"uri":"/baltic-honeybadger/2018/trustlessness-scalability-and-directions-in-security-models/","title":"Trustlessness Scalability And Directions In Security Models","content":"Trustlessness, scalability and directions in security models\nhttps://twitter.com/kanzure/status/1043397023846883329\n\u0026amp;hellip; Is everyone awake? Sure. Jumping jacks. Wow, that\u0026amp;rsquo;s bright. I am more idealistic than Eric. I am going to talk about utility and why people use bitcoin and let\u0026amp;rsquo;s see how that goes.\nTrustlessness is much better than decentralization. I want to talk about trustlessness. People use the word decentralization a lot and I find that to be kind of useless because …"},{"uri":"/scalingbitcoin/tel-aviv-2019/txprobe/","title":"TxProbe: Discovering bitcoin's network topology using orphan transactions","content":"https://twitter.com/kanzure/status/1171723329453142016\npaper: https://arxiv.org/abs/1812.00942\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/coinscope-andrew-miller/\nIntroduction I am Sergi. I will be presenting txprobe. 
This is baout discovering the topology of the bitcoin network using orphan transactions. There are a bunch of coauthors and collaborators.\nWhat we know about the topology I want to start by talking a little bit about what we know about the network topology without using …"},{"uri":"/tags/unannounced-channels/","title":"Unannounced channels","content":""},{"uri":"/tags/uneconomical-outputs/","title":"Uneconomical outputs","content":""},{"uri":"/scalingbitcoin/milan-2016/unlinkable-outsourced-channel-monitoring/","title":"Unlinkable Outsourced Channel Monitoring","content":"http://lightning.network/\nhttps://twitter.com/kanzure/status/784752625074012160\nOkay. Hi everyone. Check. Wow. Check one two. Okay it\u0026amp;rsquo;s working. And I have 25 minutes. Great. So I am going to talk about unlinkable outsourced channel modeling. Everyone likes channels. You can update states. You can link them together to make a network. This is really great. There are risks. These are additional risks. The price of scalability is eternal vigilance. You have to keep watching your channels. …"},{"uri":"/bitcoin-magazine/bitcoin-2024/unlocking-expressivity-with-op-cat/","title":"Unlocking Expressivity with OP_CAT","content":""},{"uri":"/coindesk-consensus-2016/upgrading-capital-markets-for-digital-asset-trading/","title":"Upgrading Capital Markets For Digital Asset Trading","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nUpgrading capital markets for digital asset trading\nBrian Kelly - Moderator\nJuthica Chou, LedgerX\nBobby Lee, BTCC\nMichael More, Genesis\nBarry Silbert, Digital Currency Group\nPlease welcome Brian Kelly, Juthica Chou, Bobby Lee, Michael More, and Barry Silbert.\nBK: Alright. Welcome everyone to the upgrading capital markets panel. You have the introductions of who these people are. Just one thing before we start, does anyone want to …"},{"uri":"/stanford-blockchain-conference/2019/urkel-trees/","title":"Urkel Trees","content":"Urkel trees: An optimized and cryptographically provable key-value store for decentralized naming\nBoyma Fahnbulleh (Handshake) (boymanjor)\nhttps://twitter.com/kanzure/status/1090765616590381057\nIntroduction Hello my name is Boy Fahnbulleh. I have been contributing to Handshake. I\u0026amp;rsquo;ve worked on everything from building websites to writing firmware for Ledger Nano S. If you have ever written a decent amount of C, or refactored someone\u0026amp;rsquo;s CSS, you would know how much I appreciate being …"},{"uri":"/scalingbitcoin/stanford-2017/using-the-chain-for-what-chains-are-good-for/","title":"Using Chains for what They're Good For","content":"I am Andrew Poelstra, a cryptographer at Blockstream. I am going to talk about scriptless scripts. But this is a specific example of a more general theme, which is using the blockchain as a trust anchor or commitment layer for smart contracts whose real contents don\u0026amp;rsquo;t really hit the blockchain. 
I\u0026amp;rsquo;ll elaborate on what I mean by that, and I\u0026amp;rsquo;ll show what the benefits are for that.\nTo give some context, in Bitcoin script, the scripting language which is used to encode smart …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2019/utreexo/","title":"Utreexo: Reducing bitcoin nodes to 1 kilobyte","content":"https://twitter.com/kanzure/status/1104410958716387328\nSee also: https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-10-08-utxo-accumulators-and-utreexo/\nIntroduction I am going to talk about another scaling solution and another strategy I\u0026amp;rsquo;ve been working on for about 6-9 months, called Utreexo.\nBitcoin scalability Scalability has been a concern in bitcoin for the whole time. In fact, it\u0026amp;rsquo;s the first thing that anyone said about bitcoin. In 2008, Satoshi Nakamoto said hey I …"},{"uri":"/bitcoin-design/ux-research/","title":"UX Research","content":" "},{"uri":"/scalingbitcoin/hong-kong-2015/validation-cost-metric/","title":"Validation Cost Metric","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_3_nick.pdf\nMotivation As we\u0026amp;rsquo;ve seen over the last two days scalability is a multidimensional problem. One of the main topics of research is increasing the blocksize to increase transaction throughput. The assumption is that as technological progress is continuing and transaction throughput is increased accordingly, the cost for runnning a fully validating node stays constant.\nHowever, blocksize …"},{"uri":"/scalingbitcoin/montreal-2015/validation-costs/","title":"Validation Costs","content":"Validation costs and incentives\nHow many of you have read the Satoshi whitepaper? It\u0026amp;rsquo;s a good paper. As we have discovered, it has some serious limitations that we think can be fixed but they are serious challenges for scaling. Amongst these are, it was really built as a single code base which was meant to be run in its entirety. All machines were going to participate equally in this p2p network. In a homogenous network, pretty much every node runs the same software, offers the same …"},{"uri":"/tags/v3-transaction-relay/","title":"Version 3 transaction relay","content":""},{"uri":"/coindesk-consensus-2016/visa-chain/","title":"Visa Chain","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nVisa \u0026amp;amp; Chain\nMy name is Adam Ludwin. We are here to talk about blockchain database technology networks. Chain.com is a blockchain database technology company. We do one thing. We partner with financia leaders like Visa to launch blockchain database technology networks. I want to talk about why. There\u0026amp;rsquo;s a lot of hype in blockchain database technology. There\u0026amp;rsquo;s a lot of attention in blockchain database technology. But I …"},{"uri":"/speakers/vivek-bagaria/","title":"Vivek Bagaria","content":""},{"uri":"/speakers/vortex/","title":"Vortex","content":""},{"uri":"/tags/wallet-labels/","title":"Wallet labels","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/weak-signals/","title":"Weak Signals","content":"Weak signals exercise\nGroup 1: Decentralized Decentralization is a means of preventing a single point of control existing.\nGroup 3: Cloud We picked a term preventing decentralized identity. The one that resonated for us was \u0026amp;ldquo;cloud\u0026amp;rdquo;. 
We equated the \u0026amp;ldquo;cloud\u0026amp;rdquo; with the idea of centralized convenience and value creation that businesses and individuals look at the cloud as a great place to be because it\u0026amp;rsquo;s cheap and convenient and all of your dreams are answered in the …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/welcome/","title":"Welcome","content":"Welcome session\nJonathan Harvey Buschel Jinglan Wang We are going to get started in about two minutes, so please take your seats. We are also testing the mic for the live stream. If you have friends who are not here, feel free to tweet them, email them or post their facebook page or instagram whatever.\n(buzzing/feedback continues)\nHey. All y\u0026amp;rsquo;all ready? Alright, awesome. My name is Jing, I am president of the Wossley Bitcoin Club.\nLast year we had over 500 attendees and we had the launch of …"},{"uri":"/speakers/wendy-seltzer/","title":"Wendy Seltzer","content":""},{"uri":"/speakers/whalepanda/","title":"Whalepanda","content":""},{"uri":"/coindesk-consensus-2016/why-bitcoin-still-matters/","title":"Why Bitcoin Still Matters","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nWhy bitcoin still matters\nHutchins, Founder of Silver Lake\nI hope in the coming days that the experts will critique me to help me learn more. I hope people from the general community, rather the experts pardon me, what I have an initial hypothesis regarding what will help bitcoin reach its full potential for its extraordinary opportunity that I think it is.\nThe title of my presentation was \u0026amp;ldquo;Why bitcoin still matters\u0026amp;rdquo;. I was …"},{"uri":"/magicalcryptoconference/2019/why-block-sizes-should-not-be-too-big/","title":"Why Block Sizes Should Not Be Too Big","content":"Briefly, why block sizes shouldn\u0026amp;rsquo;t be too big\nslides: https://luke.dashjr.org/tmp/code/block-sizes-mcc.pdf\nI am luke-jr. I am going to go over why block size shouldn\u0026amp;rsquo;t be too big.\nHow does bitcoin work? Miners put transactrions into blocks. Users verify that the blocks are valid. If the users don\u0026amp;rsquo;t verify this, then miners can do basically anything. Because the users do this, 51% attacks are limited to reorgs which is undoing transactions. If there\u0026amp;rsquo;s no users verifying …"},{"uri":"/scalingbitcoin/hong-kong-2015/why-miners-will-not-voluntarily-individually-produce-smaller-blocks/","title":"Why Miners Will Not Voluntarily Individually Produce Smaller Blocks","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY1/2_security_and_incentives_3_bier.pdf\nMarginal cost is very low.\nIn about 2011, people started discussing the idea of needing an artificial cap for a block size limit to kind of kick up the transaction fees. The orphan risk is a marginal cost the larger the block, the higher the chance of an orphan.\nNow I\u0026amp;rsquo;ve got some issues why orphan risk may not be so great. As technology improves, over time the technology cost of …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/arvind-narayanan/","title":"Why you need threshold signatures to protect your wallet","content":"I want to tell you what threshold signatures are and I want to convince you that threshold signatures are a technology you need. This is collaborative work.\nThere are three things I want to tell you today. The first is that the banking security model has been very very refined and has sophisticated techniques and process and controls for ensuring security. 
It does not translate to Bitcoin. This may seem surprising. Hopefully it will be obvious in retrospect.\nYou need Bitcoin-specific …"},{"uri":"/speakers/william-mougayar/","title":"William Mougayar","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/work-in-progress/","title":"Work In Progress Sessions","content":"((this needs to break into individual transcripts))\nhttps://twitter.com/kanzure/status/1172173183329476609\nOptical proof of work Mikael Dubrovsky\nI am working on optical proof-of-work. I\u0026amp;rsquo;ll try to pack a lot into the 10 minutes. I spent a year and a half at Techion.\nPhysical scaling limits Probably the three big ones are, there\u0026amp;rsquo;s first the demand for blockchain or wanting to use bitcoin more. I think more people want to use bitcoin. Most of the world does not have access to property …"},{"uri":"/tags/x-only-public-keys/","title":"X-only public keys","content":""},{"uri":"/speakers/yorke-rhodes/","title":"Yorke Rhodes","content":""},{"uri":"/speakers/yoshinori-hashimoto/","title":"Yoshinori Hashimoto","content":""},{"uri":"/speakers/yurii-rashkovskii/","title":"Yurii Rashkovskii","content":""},{"uri":"/speakers/yuta-takanashi/","title":"Yuta Takanashi","content":""},{"uri":"/speakers/yuval-kogman/","title":"Yuval Kogman","content":""},{"uri":"/simons-institute/zero-knowledge-probabilistic-proof-systems/","title":"Zero Knowledge Probabilistic Proof Systems","content":"Historical Papers in Cryptography Seminar Series, Simons Institute\nShe has received several awards, she is a source of incredibly creative ideas, very effective advisor and metor, and I am always inspired when she speaks. So please welcome her.\nThere is no better place to give this historical talk on zero knowledge than at Berkeley where there are pre-historical origins of zero knowledge. At the same time it\u0026amp;rsquo;s a difficult talk to give, because this is a very distinguished group here, and …"},{"uri":"/scalingbitcoin/hong-kong-2015/zero-knowledge-proofs-for-bitcoin-scalability-and-beyond/","title":"Zero Knowledge Proofs For Bitcoin Scalability And Beyond","content":"I am going to tell you about zero-knowledge proofs for Bitcoin scalability. Even though this is an overview talk, many of the results I will be telling you about are based on work done by people in the slides and also here at the conference.\nFirst I will give a brief introduction to zero knowledge proofs. Then I will try to convince you that if you care about Bitcoin scalability, then you should care about zero-knowledge proofs and you should keep them on your radar.\n\u0026amp;hellip; stream went …"},{"uri":"/scalingbitcoin/tel-aviv-2019/zerolink-sudoku/","title":"Zerolink Sudoku - real vs perceived anonymity","content":"https://twitter.com/kanzure/status/1171788514326908928\nIntroduction I used to work in software. I transitioned to something else, but bitcoin changed my mind. This talk is a little bit of a work in progress. This was started by a guy named Aviv Milner who is involved with Wasabi. He roped me into this and then subsequently became unavailable. So this is basically me presenting his work. It\u0026amp;rsquo;s been severely \u0026amp;hellip;\u0026amp;hellip;. as I said, Aviv started this project, he defined the research …"},{"uri":"/speakers/zeta-avarikioti/","title":"Zeta Avarikioti","content":""},{"uri":"/cryptoeconomic-systems/2019/zksharks/","title":"ZkSHARKS","content":"Introduction Indeed I am Madars Virza. I am going to talk about SHARKs. 
I am going to be talking about zero-knowledge SHARKs. It\u0026amp;rsquo;s actually something serious and it\u0026amp;rsquo;s related to non-interactive zero-knowledge proofs.\nNon-interactive zero knowledge proofs zkproofs are protocols between two parties, a prover and a verifier. Both prover and verifier know a public input, but only the prover knows a secret input as a witness. The prover wants to convince the verifier that some …"}] \ No newline at end of file +[{"uri":"/scalingbitcoin/montreal-2015/","title":"Montreal (2015)","content":" Alternatives To Block Size As Aggregate Resource Limits Mark Friedenbach Research Amiko Pay Come Plooy Lightning Bitcoin Block Propagation and IBLT Rusty Russell P2p Research Bitcoin Failure Modes And The Role Of The Lightning Network Tadge Dryja, Joseph Poon Lightning Bitcoin Load Spike Simulation Conner Fromknecht, Nathan Wilcox Research Bitcoin Relay Network Matt Corallo Mining Blockchain Testbed Ittay Eyal, Emin Gun Sirer Research Developer tools Coinscope Andrew Miller, Dave Levin Mining …"},{"uri":"/scalingbitcoin/hong-kong-2015/","title":"Hong Kong (2015)","content":" A Bevy Of Block Size Proposals Bip100 Bip102 And More Jeff Garzik A Flexible Limit Trading Subsidy For Larger Blocks Mark Friedenbach Soft fork activation Bip101 Block Propagation Data From Testnet Jonathan Tomim Soft fork activation Bip99 And Uncontroversial Hard Forks Jorge Timón Soft fork activation Braiding The Blockchain Bob McElrath Mining Day 2 Opening Pindar Wong Extensibility Eric Lombrozo Security Fungibility And Scalability Dec 06, 2015 Adam Back Privacy enhancements In Adversarial …"},{"uri":"/scalingbitcoin/milan-2016/","title":"Milan (2016)","content":" BIP151: Peer-to-Peer Encryption and Authentication from the Perspective of End-User Oct 09, 2016 Jonas Schnelli V2 p2p transport Bitcoin Covenants: Opportunities and Challenges Oct 09, 2016 Emin Gun Sirer Vaults Covenants Breaking The Chain Bob McElrath, Peter Todd Mining Proof systems Build Scale Operate Meltem Demirors, Eric Lombrozo Chainbreak Coin Selection Mark Erhardt Day 1 Group Summaries Lightning Privacy enhancements Mining Day 2 Group Summaries Enhancing Bitcoin Security and …"},{"uri":"/scalingbitcoin/stanford-2017/","title":"Stanford (2017)","content":"https://stanford2017.scalingbitcoin.org/\nAtomically Trading With Roger Gambling On The Success Of A Hard Fork Ethan Heilman Bitcoin Script 2.0 And Strengthened Payment Channels Nov 04, 2017 Olaoluwa Osuntokun, Johnson Lau Lightning Scripts addresses BlockSci: a Platform for Blockchain Science and Exploration Nov 04, 2017 Harry Kalodner Research Developer tools BOLT Anonymous Payment Channels for Decentralized Currencies Nov 04, 2017 Ian Miers Research Privacy enhancements Lightning Changes …"},{"uri":"/scalingbitcoin/tokyo-2018/","title":"Tokyo (2018)","content":"https://tokyo2018.scalingbitcoin.org/\nA Scalable Drop in Replacement for Merkle Trees Oct 06, 2018 Benedikt Bünz Proof systems An analysis of dust in UTXO-based cryptocurrencies Oct 06, 2018 Sergi Delgado Segura Compact Multi-Signatures For Smaller Blockchains Oct 06, 2018 Dan Boneh Research Threshold signature Bls signatures Deploying Blockchain At Scale Lessons From The Internet Deployment In Japan Jun Muai Forward Blocks Mark Friedenbach How Much Privacy is Enough? 
Threats, Scaling, and …"},{"uri":"/scalingbitcoin/tel-aviv-2019/","title":"Tel Aviv (2019)","content":" A Survey of Progress in Succinct Zero Knowledge Proofs: Towards Trustless SNARKs Sep 11, 2019 Ben Fisch Proof systems A Tale of Two Trees: One Writes, and Other Reads, Scaling Oblivious Accesses to Large-Scale Blockchains Duc V. Le Research Lightweight client Privacy enhancements A2L: Anonymous Atomic Locks for Scalability and Interoperability in Payment Channel Hubs Sep 12, 2019 Pedro Moreno-Sanchez Research Lightning Adaptor signatures Applying Private Information Retrieval to Lightweight …"},{"uri":"/speakers/0xb10c/","title":"0xB10C","content":""},{"uri":"/tags/bitcoin-core/","title":"bitcoin-core","content":""},{"uri":"/brink/","title":"Brink","content":" The Bitcoin Development Podcast "},{"uri":"/tags/compact-block-relay/","title":"Compact block relay","content":""},{"uri":"/brink/the-bitcoin-development-podcast/discussing-pre-25-0-bitcoin-core-vulnerability-disclosures/","title":"Discussing Pre-25.0 Bitcoin Core Vulnerability Disclosures","content":"Introduction Speaker 0: 00:00:00\nHello there, welcome or welcome back to the Brink podcast. Today we\u0026amp;rsquo;re talking about some security advisories again that were released a few days ago. I have with me B10C and Niklas. Feel free to say hi.\nSpeaker 1: 00:00:15\nHey Gloria, hey Niklas. Hello.\nSpeaker 0: 00:00:18\nGreat. So on the agenda today, we\u0026amp;rsquo;ve got three vulnerabilities. A good mix, but all peer-to-peer. So first we\u0026amp;rsquo;re going to talk about the headers precinct bug, which was …"},{"uri":"/speakers/gloria-zhao/","title":"Gloria Zhao","content":""},{"uri":"/speakers/niklas-g%C3%B6gge/","title":"Niklas Gögge","content":""},{"uri":"/tags/p2p/","title":"P2P Network Protocol","content":""},{"uri":"/speakers/","title":"Speakers","content":""},{"uri":"/tags/","title":"Tags","content":""},{"uri":"/brink/the-bitcoin-development-podcast/","title":"The Bitcoin Development Podcast","content":" Discussing Pre-25.0 Bitcoin Core Vulnerability Disclosures Oct 10, 2024 0xB10C, Niklas Gögge, Gloria Zhao Bitcoin core P2p Compact block relay Discussing 0.21.0 Bitcoin Core Vulnerability Disclosures Jul 31, 2024 Gloria Zhao, Niklas Gögge Bitcoin core Discussing Pre-0.21.0 Bitcoin Core Vulnerability Disclosures Jul 11, 2024 Niklas Gögge, Gloria Zhao Bitcoin core Code Review and BIP324 Sep 06, 2023 Sebastian Falbesoner, Mike Schmidt Libsecp256k1 Generic signmessage V2 p2p transport Bitcoin core …"},{"uri":"/","title":"₿itcoin Transcripts","content":""},{"uri":"/speakers/aaron-van-wirdum/","title":"Aaron van Wirdum","content":""},{"uri":"/speakers/ava-chow/","title":"Ava Chow","content":""},{"uri":"/bitcoin-magazine/bitcoin-2024/","title":"Bitcoin 2024 Nashville","content":" Discreet Log Contracts, Oracles, Loans, Stablecoins, and More Aki Balogh, Shehzan Maredia, Daniel Hinton, Tadge Dryja Dlc Making Bitcoin More Private with CISA Aug 29, 2024 Fabian Jahr, Jameson Lopp, Craig Raw Cisa Open Source Mining Kulpreet Singh, Matt Corallo, Skot 9000, Mark Erhart Mining The State of Bitcoin Core Development Aug 31, 2024 Aaron van Wirdum, Ava Chow, Ishaana Misra, Mark Erhardt Bitcoin core Career Unlocking Expressivity with OP_CAT Andrew Poelstra, Brandon Black, Rijndael, …"},{"uri":"/bitcoin-magazine/","title":"Bitcoin Magazine","content":" Bitcoin 2024 Nashville How to Activate a New Soft Fork Aug 03, 2020 Eric Lombrozo, Luke Dashjr Taproot Soft fork activation The Politics of Bitcoin Development Jun 11, 2024 Christian 
Decker "},{"uri":"/tags/career/","title":"career","content":""},{"uri":"/speakers/ishaana-misra/","title":"Ishaana Misra","content":""},{"uri":"/speakers/mark-erhardt/","title":"Mark Erhardt","content":""},{"uri":"/bitcoin-magazine/bitcoin-2024/the-state-of-bitcoin-core-development/","title":"The State of Bitcoin Core Development","content":"Bitcoin Core Development Panel and Recent Policy Changes Speaker 0: 00:00:02\nGood morning. My name is Aron Favirim. I work for Bitcoin Magazine and this is the panel on Bitcoin Core development. I\u0026amp;rsquo;ll let the panelists introduce themselves. Let\u0026amp;rsquo;s start here with Eva.\nSpeaker 1: 00:00:16\nHi, I\u0026amp;rsquo;m Eva. I am one of the Bitcoin Core maintainers.\nSpeaker 2: 00:00:20\nHi I\u0026amp;rsquo;m Merch, I work at Chaincode Labs on Bitcoin projects.\nSpeaker 3: 00:00:25\nHi I\u0026amp;rsquo;m Ishana, I\u0026amp;rsquo;m a …"},{"uri":"/speakers/craig-raw/","title":"Craig Raw","content":""},{"uri":"/tags/cisa/","title":"Cross-input signature aggregation (CISA)","content":""},{"uri":"/speakers/fabian-jahr/","title":"Fabian Jahr","content":""},{"uri":"/speakers/jameson-lopp/","title":"Jameson Lopp","content":""},{"uri":"/bitcoin-magazine/bitcoin-2024/making-bitcoin-more-private-with-cisa/","title":"Making Bitcoin More Private with CISA","content":"Speaker 0: 00:00:01\nSo there\u0026amp;rsquo;s one person on the stage here that is not up there. Why?\nSpeaker 1: 00:00:07\nHi, everyone. My name is Nifi. I\u0026amp;rsquo;m going to be moderating this panel today. We\u0026amp;rsquo;re here to talk about CISA, I think, cross input signature aggregation. And joining me today on the stage, I have Jameson Lopp from Casa, Craig Rau of Sparrow Wallet and Fabien Yarr of Brink. So welcome them to the stage. Really appreciate it. Yeah. Great. So we\u0026amp;rsquo;re excited to be talking to …"},{"uri":"/brink/the-bitcoin-development-podcast/discussing-0-21-0-bitcoin-core-vulnerability-disclosures/","title":"Discussing 0.21.0 Bitcoin Core Vulnerability Disclosures","content":"Introduction Speaker 0: 00:00:00\nHello, Niklas.\nSpeaker 1: 00:00:02\nHi, Gloria. We\u0026amp;rsquo;re here to talk about the next batch of disclosures for Bitcoin Core. And this time there\u0026amp;rsquo;s only two bugs and they were fixed in version 22. So that means 21, version 21 was still vulnerable to these two bugs. If you\u0026amp;rsquo;re running version 21, you should upgrade. 
I mean, you can listen to this podcast and decide for yourself if you want to upgrade, but my recommendation, our recommendation would be …"},{"uri":"/tags/lightning/","title":"Lightning Network","content":""},{"uri":"/lightning-specification/","title":"Lightning Specification","content":" Lightning Specification Meeting - 1152 Apr 08, 2024 Lightning Lightning Specification Meeting - 1155 Apr 22, 2024 Lightning Lightning Specification Meeting - Agenda 0936 Nov 22, 2021 Lightning Lightning Specification Meeting - Agenda 0943 Dec 06, 2021 Lightning Lightning Specification Meeting - Agenda 0949 Jan 03, 2022 Lightning Lightning Specification Meeting - Agenda 0955 Jan 31, 2022 Lightning Lightning Specification Meeting - Agenda 0957 Feb 14, 2022 Lightning Lightning Specification …"},{"uri":"/lightning-specification/2024-07-29-specification-call/","title":"Lightning Specification Meeting - Agenda 1185","content":"Agenda: https://github.com/lightning/bolts/issues/1185\nSpeaker 0: Are we on the pathfinding one or did we pass that?\nSpeaker 1: I think [redacted] should be attending because they added their new gossip status thing to the agenda. I think they should join soon, but we can start with the verification, where I think it just needs more review to clarify some needs. But apart from that, it looks good to me. 1182. That\u0026amp;rsquo;s the kind of thing we want to get in quickly because otherwise, it\u0026amp;rsquo;s …"},{"uri":"/lightning-specification/2024-07-15-specification-call/","title":"Lightning Specification Meeting - Agenda 1183","content":"Agenda: https://github.com/lightning/bolts/issues/1183\nSpeaker 0: How do you do MPP by the way? For example, you have two blinded paths. You don\u0026amp;rsquo;t even split inside the path.\nSpeaker 1: We won\u0026amp;rsquo;t MPP. Our new routing rewrite — which is on the back burner while I sort all this out — should be able to MPP across because they give you the capacity. In theory, we could do that. In practice, if you need MPP, we\u0026amp;rsquo;re out. While I was rewriting this, I was thinking maybe we should have …"},{"uri":"/brink/the-bitcoin-development-podcast/discussing-pre-0-21-0-bitcoin-core-vulnerability-disclosures/","title":"Discussing Pre-0.21.0 Bitcoin Core Vulnerability Disclosures","content":"Introductions and motivation for disclosures Speaker 0: 00:00:00\nHello, Nicholas.\nSpeaker 1: 00:00:01\nHi, Gloria.\nSpeaker 0: 00:00:02\nCool. We are here to talk about some old security vulnerabilities that were disclosed last week for Bitcoin Core versions 0.21 and before, or I guess 21.0 and before, which has been end of life for a while. This is going to be somewhat technical, quite technical, but we\u0026amp;rsquo;ll try to make it accessible. 
So if you\u0026amp;rsquo;re kind of a Bitcoiner who\u0026amp;rsquo;s not a …"},{"uri":"/bitcoin-explained/","title":"Bitcoin Explained","content":" Silent Payments part 2 Jul 07, 2024 Sjors Provoost, Aaron van Wirdum, Ruben Somsen Silent payments Episode 94 The Great Consensus Cleanup Revival (And an Update on the Tornado Cash and Samourai Wallet Arrests) May 29, 2024 Sjors Provoost, Aaron van Wirdum Wallet Consensus cleanup Episode 93 Bitcoin Core 27.0 Apr 10, 2024 Bitcoin core Episode 92 Hashcash and Bit Gold Jan 03, 2024 Sjors Provoost, Aaron van Wirdum Episode 88 The Block 1, 983, 702 Problem Dec 21, 2023 Sjors Provoost, Aaron van …"},{"uri":"/speakers/ruben-somsen/","title":"Ruben Somsen","content":""},{"uri":"/tags/silent-payments/","title":"Silent payments","content":""},{"uri":"/bitcoin-explained/silent-payments-part-2/","title":"Silent Payments part 2","content":"Speaker 0: 00:00:16\nLive from Utrecht, this is Bitcoin Explained. Hey Sjoers. Hello. Hey Josie. Hey. Hey Ruben. Hey. We\u0026amp;rsquo;ve got two guests today Sjoers. Is that the first time we\u0026amp;rsquo;ve had two guests?\nSpeaker 1: 00:00:27\nI believe that is a record number. We have doubled the number of guests.\nSpeaker 0: 00:00:31\nThat is amazing. You know what else is amazing, Sjoerd? The 9V battery thing that you have in your hands. You just pushed a\u0026amp;hellip; This is a CoinKite product as well?\nSpeaker 1: …"},{"uri":"/speakers/sjors-provoost/","title":"Sjors Provoost","content":""},{"uri":"/speakers/christian-decker/","title":"Christian Decker","content":""},{"uri":"/bitcoin-magazine/the-politics-of-bitcoin-development/","title":"The Politics of Bitcoin Development","content":"Introduction and Rusty\u0026amp;rsquo;s Proposal Shinobi: 00:00:01\nHi everybody, I\u0026amp;rsquo;m Shinobi from Bitcoin Magazine and I\u0026amp;rsquo;m sitting down here with Christian Decker from Blockstream.\nChristian Decker: 00:00:07\nI am.\nShinobi: 00:00:08\nSo Rusty dropped an atom bomb yesterday with his proposal to turn all the things back on just with the kind of VerOps budget analogous to the SigOps budget to kind of rein in the denial of service risks as just a path forward given that everybody\u0026amp;rsquo;s spent the …"},{"uri":"/tags/consensus-cleanup/","title":"Consensus cleanup soft fork","content":""},{"uri":"/bitcoin-explained/the-great-consensus-cleanup-revival/","title":"The Great Consensus Cleanup Revival (And an Update on the Tornado Cash and Samourai Wallet Arrests)","content":"Speaker 0: 00:00:19\nLife from Utrecht, this is Bitcoin Explained. Sjoerds, it\u0026amp;rsquo;s been a month. That means you had a whole month to think about this pun that you told me you\u0026amp;rsquo;re going to tell our dear listeners for this episode. Let\u0026amp;rsquo;s hear it. Let\u0026amp;rsquo;s hear it.\nSpeaker 1: 00:00:35\nThat\u0026amp;rsquo;s right. So it\u0026amp;rsquo;s cold outside. The government is cold.\nSpeaker 0: 00:00:39\nYou know what else is cold? Sure. Our sponsor. That\u0026amp;rsquo;s right. The cold cart. The cold cart. If you have …"},{"uri":"/tags/wallet/","title":"wallet","content":""},{"uri":"/lightning-specification/2024-05-06-specification-call/","title":"Lightning Specification Meeting - Agenda 1161","content":"Agenda: https://github.com/lightning/bolts/issues/1161\nSpeaker 0: So the next one is go see 12-blocks delay channel closed follow-up. 
So, this is something we already merged something to a spec, saying that whenever you see a channel being spent on-chain, you should wait for 12 blocks to allow splicing to happen. It\u0026amp;rsquo;s just one line that was missing in the requirement, and then they found it in the older splicing PR. So, there\u0026amp;rsquo;s already three ACKs. So, is anyone opposed to just …"},{"uri":"/lightning-specification/2024-04-22-specification-call/","title":"Lightning Specification Meeting - 1155","content":"Agenda: https://github.com/lightning/bolts/issues/1155\nSpeaker 0: Cool. Okay. They have four or five things that I think are all in line with what we have already as well. First one is the spec-clean up. Last thing we did, I think we reverted one of the gossip changes, and that is going to be in LND 18. Basically, making the feature bits to be required over time. After that, is anything blocking this? I need to catch up with PR a little bit. Yeah, because I think we do all this. Yeah. Okay. …"},{"uri":"/speakers/adam-jonas/","title":"Adam Jonas","content":""},{"uri":"/categories/","title":"Categories","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2024/choosing-a-career-in-bitcoin-open-source-development/","title":"Choosing a Career in Bitcoin Open Source Development","content":"Introduction Why choose a career in bitcoin open source? My name is Adam Jonas. I work at Chaincode Labs. I\u0026amp;rsquo;m here to hunt unicorns.\nThe Current State of Bitcoin Development We\u0026amp;rsquo;re gonna start off with a couple of numbers that might surprise you. There are roughly 150 people in the world that work on bitcoin open source infrastructure. Around 30 of those work full-time on Bitcoin Core. I\u0026amp;rsquo;m looking for 31.\nWhat Drives Bitcoin Core Developers To find out if you\u0026amp;rsquo;re this …"},{"uri":"/mit-bitcoin-expo/","title":"MIT Bitcoin Expo","content":" Mit Bitcoin Expo 2015 Mit Bitcoin Expo 2016 MIT Bitcoin Expo 2017 Mit Bitcoin Expo 2018 Mit Bitcoin Expo 2019 Mit Bitcoin Expo 2020 Mit Bitcoin Expo 2021 MIT Bitcoin Expo 2022 MIT Bitcoin Expo 2024 "},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2024/","title":"MIT Bitcoin Expo 2024","content":" Choosing a Career in Bitcoin Open Source Development Apr 21, 2024 Adam Jonas Career Live Stream: https://web.mit.edu/webcast/bitcoin-expo-s24/\n"},{"uri":"/categories/video/","title":"video","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/asmap/","title":"ASMap","content":"From virtu\u0026amp;rsquo;s presentation Distribution of nodes in ASes is low 8k reachable clearnet nodes / 30k unreachable A contributor has different statistics that show a lot more nodes, not sure which numbers are (more) correct. These numbers are would mean that some of the simulations are already a reality. Most nodes from Hetzner and AWS Shift compute and bandwidth to nodes in small ASes Unreachable nodes cannot sustain ten outbound connections Discussions Ignore AS for blocks-only connections? 
…"},{"uri":"/bitcoin-core-dev-tech/","title":"Bitcoin Core Dev Tech","content":" Bitcoin Core Dev Tech 2015 Bitcoin Core Dev Tech 2017 Bitcoin Core Dev Tech 2018 (Mar) Bitcoin Core Dev Tech 2018 (Oct) Bitcoin Core Dev Tech 2019 Bitcoin Core Dev Tech 2022 Bitcoin Core Dev Tech 2023 (Apr) Bitcoin Core Dev Tech 2023 (Sept) Bitcoin Core Dev Tech 2024 (Apr) "},{"uri":"/bitcoin-core-dev-tech/2024-04/","title":"Bitcoin Core Dev Tech 2024 (Apr)","content":" ASMap Apr 11, 2024 Bitcoin core Security enhancements P2p assumeUTXO Mainnet Readiness Apr 10, 2024 Bitcoin core Assume utxo Coin Selection Apr 08, 2024 Bitcoin core Coin selection Cross Input Signature Aggregation Apr 08, 2024 Bitcoin core Cisa Great Consensus Cleanup Apr 08, 2024 Bitcoin core Consensus cleanup GUI Discussions Apr 10, 2024 Bitcoin core Ux Kernel Apr 10, 2024 Bitcoin core Build system P2P Monitoring Apr 09, 2024 Bitcoin core P2p Developer tools Private tx broadcast Apr 08, 2024 …"},{"uri":"/categories/core-dev-tech/","title":"core-dev-tech","content":""},{"uri":"/tags/security-enhancements/","title":"Security Enhancements","content":""},{"uri":"/tags/assumeutxo/","title":"AssumeUTXO","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/assumeutxo-mainnet-readiness/","title":"assumeUTXO Mainnet Readiness","content":" Conceptual discussion about the point raised by Sjors in the Tracking issue: https://github.com/bitcoin/bitcoin/issues/29616#issuecomment-1988390944 The outcome is pretty much the same as in the issue: Some people think it’s better to keep the params, and the rest agree that at least it’s better to keep them for now A perspective on the options: With the params, it puts more responsibility (and potentially pressure) on the maintainers, if they are removed the users have to do much more due …"},{"uri":"/bitcoin-explained/bitcoin-core-27-0/","title":"Bitcoin Core 27.0","content":"Speaker 0: 00:00:16\nLive from Utrecht, this is Bitcoin Explained. Hey Sjoerd.\nSpeaker 1: 00:00:20\nHello.\nSpeaker 0: 00:00:21\nFirst of all, I\u0026amp;rsquo;d like to thank our sponsor CoinKite, the creator of the OpenDime as well. That\u0026amp;rsquo;s right. Are we going to get sued if we only mention the Opendime, Sjoerds? No. Are we contractually obligated to mention the Gold Card or can we just riff off the Opendime this episode?\nSpeaker 1: 00:00:41\nIf you want to riff on the Opendime for the great memories …"},{"uri":"/tags/build-system/","title":"build-system","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/gui-discussions/","title":"GUI Discussions","content":"QML GUI Slides\nQ\u0026amp;amp;A\nCurrent GUI and QML progress seems slow? Code review / build system involvement? Will there be a test suite? Test suite yes, No fuzzing planned Why not RPC based? RPC not currently capable of building this UI on top of Is there a QML dependency graph? More dependencies required for sure May have to abandon depends approach Blocking calls historically an issue A consideration, but more to talk about here Integrated GUI Cost/Benefit Slides\nDiscussion\nIf other wallets and …"},{"uri":"/bitcoin-core-dev-tech/2024-04/kernel/","title":"Kernel","content":"The kernel project is just about done with its first stage (separating the validation logic into a separate library), so a discussion about the second stage of the project, giving the library a usable external API was held. 
Arguments around two questions were collected and briefly debated.\nShould a C API for the kernel library be developed with the goal of eventually shipping with releases? There are a bunch of tools that can translate C++ headers, but they have downsides due to the name …"},{"uri":"/tags/signet/","title":"Signet","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/signet-testnet4/","title":"Signet/Testnet4","content":" Signet Reset is less of a priority right now because the faucet is running again, still seeing huge number of requests Should still reset because of money making from signet coins Participants agree that getting coins doesn’t seem to be that hard, just need to ask on IRC or so Some people get repetitive messages about coins Signet can be reorged easily with a more work chain, that is actually shorter. Such a chain already exists and can be used any time. This effectively kills the current …"},{"uri":"/tags/ux/","title":"ux","content":""},{"uri":"/tags/developer-tools/","title":"Developer Tools","content":""},{"uri":"/tags/descriptors/","title":"Output script descriptors","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/p2p-monitoring/","title":"P2P Monitoring","content":"Slides\nStarted working on this about 2 years ago; in 2021. After we accidentally observed the address flooding anomaly/attack Primarily uses https://github.com/0xB10C/peer-observer to extract data from Bitcoin Core nodes with tracepoints. The infrastructure also includes a fork-observer connected to each node as well as an addrman-observer for each node. Additionally, detailed Bitcoin Core debug logs are avaliable. The main part are the Grafana dashboards. There’s a public version at …"},{"uri":"/bitcoin-core-dev-tech/2024-04/silent-payment-descriptors/","title":"Silent Payment Descriptors","content":"Silent payments properties:\ntwo ECDH with each scan key cost of scanning increases with number of scan keys multiple address = tweak spend key with label We wouldn’t wanna flip that because then the spend key would be common, reducing anonymity and adding extra scanning work\nBIP352 recommends NOT using with xpubs, it’s really difficult to have same public key with different chain codes.\nUse case question: with silent payments, let\u0026amp;rsquo;s say I make a legacy wallet and want to use one of my …"},{"uri":"/tags/coin-selection/","title":"Coin selection","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/coin-selection/","title":"Coin Selection","content":" Todo: Overview PR that states goal of replacing Knapsack Introduce Sand Compactor Demonstrate via Simulations that situation is improved vs Knapsack Potential privacy leak: all algorithms would be deterministic, but feels insignificant or at least would not make it worse Should we clear out negative effective value UTXOs? Users seem to indicate that they would prefer to empty wallets completely even if they pay more General agreement that we should continue to spend negative effective value …"},{"uri":"/bitcoin-core-dev-tech/2024-04/cross-input-signature-aggregation/","title":"Cross Input Signature Aggregation","content":" cisaresearch.org, put together by fjahr Documents progress of half and full agg (theory, implementation and deployment) Provides collection of CISA-related resources (ML posts, papers, videos/podcasts, etc.) 
Should provide guidance for further development/open todos for contributors to grab HRF announces CISA Research Fellowship Seeks to answer questions how CISA will affect privacy, cost-savings, and much more during a four-month period for a total of .5BTC More: …"},{"uri":"/bitcoin-core-dev-tech/2024-04/great-consensus-cleanup/","title":"Great Consensus Cleanup","content":"How bad are the bugs?\nHow good are the mitigations?\nImprovements to mitigations in the last 5 years?\nAnything else to fix?\nThe talk is a summary of https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710 .\nTime warp What is it? Off by one in the retargeting period 2015 blocks instead of 2016 Impact Spam (since difficulty is 1 and block times are what restricts tx) UXTO set growth for the same reason 40 days to kill the chain Empowers 51% attacker Political games (users individually …"},{"uri":"/tags/libsecp256k1/","title":"libsecp256k1","content":""},{"uri":"/lightning-specification/2024-04-08-specification-call/","title":"Lightning Specification Meeting - 1152","content":"Agenda: https://github.com/lightning/bolts/issues/1152\nSpeaker 1: Yeah, we did a point release to undo it as well. What I\u0026amp;rsquo;d like to do is to write up the spec changes to basically say: Yeah, don\u0026amp;rsquo;t set the bit if you don\u0026amp;rsquo;t have anything interesting to say — rather than if you don\u0026amp;rsquo;t support it. Yes, my sweep through the network to go: Hey, there\u0026amp;rsquo;s only a few nodes that don\u0026amp;rsquo;t set this. Missed LDK nodes obviously. There are only a few of them that are public. I …"},{"uri":"/tags/mining/","title":"Mining","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/private-tx-broadcast/","title":"Private tx broadcast","content":"Updates:\nTX is validated before broadcast (using mempool test). The sender ignores incoming messages from the receiver (except the handshake and PONG), so the sender cannot send back the tx before disconnection. When it receives the tx back, it becomes \u0026amp;ldquo;just a tx in mempool\u0026amp;rdquo;. TODO/NICE TO HAVE\nCheck if the wallet is going to rebroadcast a tx it has created but has been broadcast via private broadcast and if yes, prevent that. Consider disabling walletbroadcast=1 if …"},{"uri":"/bitcoin-core-dev-tech/2024-04/silent-payments-libsecp/","title":"Silent Payments Libsecp Module","content":" High level vs low level API:\nLow level API could be more useful for multi-party SP implementation High level API is safer as it avoid managing SP state and staging secret data Rough consensus that high level API is preferable Responsibility of grouping and sorting recipients by scan key. Client vs library?\nWe need to assert grouping in the lib anyway to avoid catastrophic failure So it just makes sense for the lib to take care of the grouping Why we need grouping in the first place? This is for …"},{"uri":"/bitcoin-core-dev-tech/2024-04/stratumv2/","title":"Stratum v2","content":"I explained the various stratum v2 roles described in the images here: https://stratumprotocol.org\nDescribed the three layers of my main PR: https://github.com/bitcoin/bitcoin/pull/29432\nNoise protocol Transport based on the TransportV1 / TransportV2 class Application layer (listens on new port, sv2 apps connect to it) Discussion point: the Job Declarator client role typically runs on the same machine as the template provider, so technically we don’t need noise encryption. 
However, we may in the …"},{"uri":"/tags/stratum-v2/","title":"stratum-v2","content":""},{"uri":"/bitcoin-core-dev-tech/2024-04/weak-blocks/","title":"Weak Blocks","content":"Weak blocks: propagate stuff with low PoW as you are building it\nuse cases / why you wouldn’t hear of stuff nonstandard to you somehow didn’t propagate to you miner’s prioritisetransaction stuff with no fees why is this coming up now? more mempool heterogeneity “accelerate nonstandard transactions” services poc code: submits to mempool, rejected ones are stored in separate cache Questions\nwhy would a miner do this? (similar to compact blocks?) add to mempool? what if invalid/nonstandard? good …"},{"uri":"/lightning-specification/2024-03-25-specification-call/","title":"Lightning Specification Meeting - Agenda 1150","content":"NOTE: There were some issues with the recording of this call, resulting in a partial transcript.\nSpeaker 0: If you\u0026amp;rsquo;re only connected to a single peer, it makes sense not to tell them that you speak gossip, right? Because you won\u0026amp;rsquo;t have anything interesting to say, and you don\u0026amp;rsquo;t want to — with our current state of gossip queries, they might think they\u0026amp;rsquo;re probing randomly and happen to hit nodes that don\u0026amp;rsquo;t know anything, and fall into a bad state. So, I\u0026amp;rsquo;m not …"},{"uri":"/lightning-specification/2024-03-11-specification-call/","title":"Lightning Specification Meeting - Agenda 1146","content":"Agenda: https://github.com/lightning/bolts/issues/1146\nSpeaker 0: So first item, I just wanted to highlight that I opened the PR for zero reserve, if you want to look at it. I don\u0026amp;rsquo;t know if people are actually trying to implement that. It matches the implementation that we have currently in Eclair and Phoenix, but with a different feature bit. So if someone else is implementing that, they can just have a look at the PR, and we can work on the interop whenever someone is ready. I …"},{"uri":"/bitcoinops/","title":"Bitcoin Optech","content":" Newsletter #292 Recap Mar 07, 2024 Mark Erhardt, Dave Harding, Josibake, Salvatore Ingala, Fabian Jahr Developer tools Psbt Musig Schnorr and Taproot workshop Sep 27, 2019 Taproot Schnorr signatures Tapscript Musig "},{"uri":"/speakers/dave-harding/","title":"Dave Harding","content":""},{"uri":"/speakers/josibake/","title":"Josibake","content":""},{"uri":"/tags/musig/","title":"MuSig","content":""},{"uri":"/bitcoinops/newsletter-292-recap/","title":"Newsletter #292 Recap","content":"Mark Erhardt: 00:03:57\nWe are complete. Maybe we can start. So good morning. This is Optech Newsletter #292 Recap. And as you can hear, Mike is not here today and I\u0026amp;rsquo;m filling in as the main host. Today we have four news items, two releases and release candidates, and four PRs to talk about in our notable code and documentation changes. I\u0026amp;rsquo;m Merch and I work at Chaincode Labs and bring you weekly this OpTec newsletter recap. 
Today I\u0026amp;rsquo;m joined by Dave.\nDave Harding: 00:04:41\nHi, …"},{"uri":"/tags/psbt/","title":"Partially signed bitcoin transactions","content":""},{"uri":"/categories/podcast/","title":"podcast","content":""},{"uri":"/speakers/salvatore-ingala/","title":"Salvatore Ingala","content":""},{"uri":"/lightning-specification/2024-02-26-specification-call/","title":"Lightning Specification Meeting - Agenda 1142","content":"Agenda: https://github.com/lightning/bolts/issues/1142\nSpeaker 0: [redacted] said they won\u0026amp;rsquo;t be able to attend, but they’ve nicely created an agenda for us as usual. So I can take a look and run through the list. First up, so I think this has sort of went back and forth a few times. This is the very long-lived request to just have a zero value in reserve basically, like a first class type. [redacted] made this PR, so it\u0026amp;rsquo;s spec PR 1140. I think there\u0026amp;rsquo;s a back and forth thing. We …"},{"uri":"/bitcoin-design/","title":"Bitcoin Design","content":" Learning Bitcoin and Design Miscellaneous UX Research "},{"uri":"/speakers/christoph-ono/","title":"Christoph Ono","content":""},{"uri":"/tags/covenants/","title":"Covenants","content":""},{"uri":"/bitcoin-design/learning-bitcoin-and-design/covenants-part-2/","title":"Covenants Part 2","content":"Introduction Christoph Ono: 00:00:01\nAll right, welcome to our second learning Bitcoin and Design call about covenants. Today, the idea was to go deeper into covenants use cases, right? Last time, the first call was more about just generally what it is and what it does. Now we wanted to go deeper. I think on the web, on the GitHub issue, there was one request about congestion control. But the other big one was scaling, I think. So how\u0026amp;rsquo;s everyone feeling about covenants today? Maybe we …"},{"uri":"/bitcoin-design/learning-bitcoin-and-design/","title":"Learning Bitcoin and Design","content":" Covenants Part 2 Feb 02, 2024 Christoph Ono, Michael Haase, Mogashni Covenants Ux Silent Payments Jan 31, 2024 Christoph Ono, Michael Haase, Yashraj, Josibake Silent payments Covenants Jan 19, 2024 Christoph Ono, Michael Haase, Owen Kemeys, Yashraj, Mogashni Covenants Ux Op checktemplateverify FediMint Jul 29, 2022 Stephen DeLorme, Justin Moon, Christoph Ono Ecash Ux "},{"uri":"/speakers/michael-haase/","title":"Michael Haase","content":""},{"uri":"/speakers/mogashni/","title":"Mogashni","content":""},{"uri":"/bitcoin-design/learning-bitcoin-and-design/silent-payments/","title":"Silent Payments","content":"Introduction Christoph Ono: 00:00:02\nSo welcome everyone. Super excited to have this call here today. And we are in Learning Bitcoin and Design call number 16. Hilariously, we had number 17 this morning, because we move things around time-wise. But this is number 16 about silent payments. We have the super expert Josie here, who is willing to talk us through what silent payments are and how they work. So I\u0026amp;rsquo;ll just hand it over to Josie to kick us off and we can organically feel out where …"},{"uri":"/speakers/yashraj/","title":"Yashraj","content":""},{"uri":"/bitcoin-design/learning-bitcoin-and-design/covenants/","title":"Covenants","content":"Introductions Christoph Ono: 00:00:01\nWelcome to our first Learning Bitcoin and Design call of 2024. We haven\u0026amp;rsquo;t had one of these in a while, but we\u0026amp;rsquo;re picking them back up again. The first topic is a really tough one, I think at least, it\u0026amp;rsquo;s covenants. 
The reason, one of the reasons why this came up is because on Bitcoin Design, we have this new design challenge. We have a page with design challenges that are meant to be kind of take home challenges. If you want to challenge …"},{"uri":"/tags/op-checktemplateverify/","title":"OP_CHECKTEMPLATEVERIFY","content":""},{"uri":"/speakers/owen-kemeys/","title":"Owen Kemeys","content":""},{"uri":"/lightning-specification/lightning-2024-01-15-specification-call/","title":"Lightning Specification Meeting - Agenda 1127","content":"Agenda: https://github.com/lightning/bolts/issues/1127\nSpeaker 0: First of all, it looks like we still have a mailing list. I don\u0026amp;rsquo;t know how much we can rely on that. But, in the meanwhile, nobody has sent any email on the mailing list. I guess we should be migrating to Delving Bitcoin for now. Has someone experimented with running a discourse instance somewhere else? I think it was [redacted] who was supposed to do that. Yeah, so I guess nobody. So, let\u0026amp;rsquo;s switch to the V3 …"},{"uri":"/bitcoin-explained/hashcash-and-bit-gold/","title":"Hashcash and Bit Gold","content":"Aaron van Wirdum: 00:00:18\nLive from Utrecht, this is Bitcoin Explained. Hey Sjors.\nSjors Provoost: 00:00:21\nYo, yo.\n[removed sponsor segment]\nAaron\u0026amp;rsquo;s book - \u0026amp;ldquo;The Genesis Book\u0026amp;rdquo; Sjors Provoost: 00:01:07\nSo, Aaron, you wrote a book.\nAaron van Wirdum: 00:01:08\nYeah. Oh, yes, I did.\nSjors Provoost: 00:01:09\nCool.\nAaron van Wirdum: 00:01:09\nAt the time of recording, it\u0026amp;rsquo;s almost published. We\u0026amp;rsquo;re publishing it tomorrow, January 3rd, but that\u0026amp;rsquo;s like midnight actually, …"},{"uri":"/tags/security-problems/","title":"Security Problems","content":""},{"uri":"/bitcoin-explained/the-block-1-983-702-problem/","title":"The Block 1, 983, 702 Problem","content":"Introduction Aaron van Wirdum: 00:00:19\nLive from Utrecht, this is Bitcoin Explained. Hey Sjors.\n[omitted sponsors segment]\nAaron van Wirdum: 00:01:20\nSjors, today we\u0026amp;rsquo;re discussing a very niche bug that I had never heard of, but apparently there\u0026amp;rsquo;s a bug in the Bitcoin protocol.\nSjors Provoost: 00:01:31\nWell, there was a bug in the Bitcoin protocol.\nAaron van Wirdum: 00:01:35\nBut there\u0026amp;rsquo;s still a problem.\nSjors Provoost: 00:01:38\nThere was a bug, then there was no bug, then there …"},{"uri":"/tags/altcoins/","title":"altcoins","content":""},{"uri":"/speakers/ben-carman/","title":"Ben Carman","content":""},{"uri":"/bitcoinplusplus/","title":"Bitcoin++","content":" Bitcoin\u0026amp;#43;\u0026amp;#43; 2022 Layer 2 Let\u0026amp;#39;s Talk About Dual Funding: Protocol Overview Mar 16, 2023 Lisa Neigut Lightning Dual funding Onchain Privacy "},{"uri":"/tags/coinjoin/","title":"Coinjoin","content":""},{"uri":"/categories/conference/","title":"conference","content":""},{"uri":"/speakers/david-vorick/","title":"David Vorick","content":""},{"uri":"/bitcoinplusplus/onchain-privacy/monero-and-the-privacy-doom-principle/","title":"Monero and the Privacy Doom Principle","content":"I\u0026amp;rsquo;m a blockchain researcher and developer. I\u0026amp;rsquo;ve built the Sia blockchain ecosystem and I\u0026amp;rsquo;ve been in the space for a while and today I\u0026amp;rsquo;m talking about Monero and privacy models that ended up being explained in analytical techniques that we can use to beat error in privacy.\nThe Concept of Privacy in Cryptocurrencies So my name is David Vork. 
I\u0026amp;rsquo;ve been in the blockchain space for about a decade. I\u0026amp;rsquo;ve been between researching and launching blockchains, doing a lot …"},{"uri":"/bitcoinplusplus/onchain-privacy/","title":"Onchain Privacy","content":" Coinjoin Done Right: the onchain offchain mix (and anti-Sybil with RIDDLE) Dec 10, 2022 Adam Gibson Coinjoin Adaptor signatures Timelocks Ptlc Monero and the Privacy Doom Principle Dec 09, 2023 David Vorick Privacy problems Privacy enhancements Altcoins Splicing, Lightning\u0026amp;#39;s Multiparty Future Mar 14, 2023 Dusty Dettmer Splicing Teach the Controversy: mempoolfullrbf Dec 09, 2023 Peter Todd Rbf Transaction pinning Vortex: ZeroLink for Lightning Opens Dec 09, 2023 Ben Carman Lightning Coinjoin …"},{"uri":"/speakers/peter-todd/","title":"Peter Todd","content":""},{"uri":"/tags/privacy-enhancements/","title":"Privacy Enhancements","content":""},{"uri":"/tags/privacy-problems/","title":"Privacy Problems","content":""},{"uri":"/tags/rbf/","title":"Replace-by-fee (RBF)","content":""},{"uri":"/bitcoinplusplus/onchain-privacy/teach-the-controversy-mempoolfullrbf/","title":"Teach the Controversy: mempoolfullrbf","content":"What is transaction replacement? (00:00)\nSo, let\u0026amp;rsquo;s go start off. What is transaction replacement? And maybe not the best screen, but this is a screenshot of the Bitcoin wallet, Blue Wallet, and it shows off transaction replacement very nicely. I have a transaction, I got rid of $25 worth of Bitcoin, and I can do two things, I can go bump the fee or cancel the transaction, and if any of you guys played around with it, it does what it expects, it increases the fee or it cancels it. From a …"},{"uri":"/tags/transaction-pinning/","title":"Transaction pinning","content":""},{"uri":"/bitcoinplusplus/onchain-privacy/vortex-zerolink-for-lightning-opens/","title":"Vortex: ZeroLink for Lightning Opens","content":"(The start of the talk is missing from the source media)\nWhat is a blind signature All these people could come, and they all can then present their signatures. And Bob will know, \u0026amp;ldquo;okay, all of these signatures are correct, but I don\u0026amp;rsquo;t know who gave me which one. You\u0026amp;rsquo;ll see how we use this in the protocol to get some privacy. This is kind of the structure (referring to slide#2). There\u0026amp;rsquo;s a single coordinator, many clients connected to it. They can be Core Lightning, they …"},{"uri":"/bitcoin-explained/ocean-tides/","title":"Ocean Tides","content":"Introduction Aaron van Wirdum: 00:00:21\nLive from Utrecht, this is Bitcoin Explained. Hey Sjors.\nSjors Provoost: 00:00:26\nHello.\nAaron van Wirdum: 00:00:57\nSjors, today we\u0026amp;rsquo;re going to discuss a new mining pool launched. Well, kind of. Ocean Pool. I was invited to the launch event. It was sort of a mini conference.\nSjors Provoost: 00:01:15\nI felt a very strong FOMO for that one.\nI saw the video, you guys were in the middle of nowhere. And some like some horror movie video shoot game …"},{"uri":"/tags/pooled-mining/","title":"Pooled mining","content":""},{"uri":"/bitcoin-explained/bitcoin-core-26-0-and-f2pool-s-ofac-compliant-mining-policy/","title":"Bitcoin Core 26.0 (And F2Pool's OFAC Compliant Mining Policy)","content":"Speaker 0: 00:00:20\nLive from Utrecht, this is Bitcoin Explained. Hello Sjoerd.\nSpeaker 1: 00:00:25\nWhat\u0026amp;rsquo;s up? Welcome back. 
Thank you.\nSpeaker 0: 00:00:28\nYou\u0026amp;rsquo;ve been back for, do you even live in this country still?\nSpeaker 1: 00:00:31\nI certainly do. Yes.\nSpeaker 0: 00:00:32\nYou were gone for like two months?\nSpeaker 1: 00:00:33\nNo, one month.\nSpeaker 0: 00:00:36\nOne month, a couple of conferences. Where did you go?\nSpeaker 1: 00:00:40\nToo many conferences. In fact, Bitcoin …"},{"uri":"/lightning-specification/2023-11-20-specification-call/","title":"Lightning Specification Meeting - Agenda 1118","content":"Agenda: https://github.com/lightning/bolts/issues/1118\nSpeaker 0: We won\u0026amp;rsquo;t have posting for the mailing list anymore in one month and a half, so we should probably do something. My plan was to just wait to see what bitcoin-dev would do and do the same thing. Does someone have opinions on that or an idea on what we should do?\nSpeaker 1: Who\u0026amp;rsquo;s managing the email account for the lightning-dev mailing list? You, [Redacted]?\nSpeaker 0:I think it\u0026amp;rsquo;s mostly [Redacted] that has the main …"},{"uri":"/lightning-specification/2023-11-06-specification-call/","title":"Lightning Specification Meeting - Agenda 1116","content":"Agenda: https://github.com/lightning/bolts/issues/1116\nSpeaker 0: I moved the first item on the to-do list today, dual funding, because we finally have interop between C-Lightning and Eclair. We were only missing the reestablished part, and everything was mostly okay from the beginning. It looks like the spec seems to be clear enough because we both understood it the same way. So now, we really have full interop on master, and I guess this should go in a release with experimental flags for CLN. …"},{"uri":"/lightning-specification/2023-10-23-specification-call/","title":"Lightning Specification Meeting - Agenda 1115","content":"Agenda: https://github.com/lightning/bolts/issues/1115\nSpeaker 0: Alright. So, I guess the first item is something that has already been ACKed and is only one clarification. But I had one question on the PR. I was wondering, [redacted], for which feature you actually use that code because neither LND or Eclair handles — we just disconnect on anything mandatory that we haven\u0026amp;rsquo;t set. Where are you actually using that or how are you planning on using it?\nSpeaker 1: It\u0026amp;rsquo;s possible for pure …"},{"uri":"/bitcoin-review-podcast/","title":"Bitcoin Review Podcast","content":" Covenants, Introspection, CTV and Activation Impasse Sep 29, 2023 James O\u0026amp;#39;Beirne, Rijndael, NVK Covenants Vaults Soft fork activation Libsec Panel Sep 08, 2023 Jonas Nick, Tim Ruffing, Lloyd Fournier, Jesse Posner, Rijndael, NVK Libsecp256k1 Schnorr signatures Threshold signature #MPCbros vs #TeamScript Debate Jul 30, 2023 Rijndael, NVK, Rob Hamilton Musig Covenants Taproot Miniscript Lightning Privacy \u0026amp;amp; Splice Panel May 12, 2023 NVK, Bastien Teinturier, Jeff Czyz, Dusty Dettmer, Tony …"},{"uri":"/bitcoin-review-podcast/covenants-introspection-ctv-and-activation-impasse/","title":"Covenants, Introspection, CTV and Activation Impasse","content":"Introduction NVK: 00:01:10\nAs usual, nothing is happening in Bitcoin and we figured, what a great day to put people to sleep in a one to three hour podcast. We don\u0026amp;rsquo;t know yet how it\u0026amp;rsquo;s going to be. We actually don\u0026amp;rsquo;t know how this is going to play out at all. I have here with me Mr. Rijndael. Hello, sir. Welcome back.\nRijndael: 00:01:30\nHello. 
Good morning.\nNVK: 00:01:32\nGood morning.\nJames: 00:01:33\nFor his like 600th appearance on the show.\nRijndael: 00:01:37\nI haven\u0026amp;rsquo;t …"},{"uri":"/speakers/james-obeirne/","title":"James O'Beirne","content":""},{"uri":"/speakers/nvk/","title":"NVK","content":""},{"uri":"/speakers/rijndael/","title":"Rijndael","content":""},{"uri":"/tags/soft-fork-activation/","title":"Soft fork activation","content":""},{"uri":"/tags/vaults/","title":"Vaults","content":""},{"uri":"/lightning-specification/2023-09-25-specification-call/","title":"Lightning Specification Meeting - Agenda 1114","content":"Agenda: https://github.com/lightning/bolts/issues/1114\nSpeaker 0: So, [redacted] pointed out that we did not do this, and we have to do this. I\u0026amp;rsquo;ve been working on the code so that you have to push their commitment transaction. Now, there are several problems with this. One is that in many states, there are two possible commitment transactions they could have. In theory, you know what they are. In practice, it turned out to be a bit of a nightmare to unwind the state. So, I actually just …"},{"uri":"/speakers/andrew-chow/","title":"Andrew Chow","content":""},{"uri":"/bitcoin-core-dev-tech/2023-09/","title":"Bitcoin Core Dev Tech 2023 (Sept)","content":" AssumeUTXO Update Sep 20, 2023 Bitcoin core Assume utxo CMake Update Sep 21, 2023 Cory Fields Bitcoin core Build system Discussion on open Coin Selection matters Sep 19, 2023 Bitcoin core Coin selection Kernel Planning Sep 20, 2023 thecharlatan Bitcoin core Build system Kernel Update Sep 18, 2023 thecharlatan Bitcoin core Build system Libsecp256k1 Meeting Sep 20, 2023 Bitcoin core Libsecp256k1 P2P Design Goals Sep 20, 2023 Bitcoin core P2p P2P working session Sep 19, 2023 Bitcoin core P2p …"},{"uri":"/bitcoin-core-dev-tech/2023-09/cmake/","title":"CMake Update","content":"Update Hebasto has a branch he has been PRing into his own repo. Opened a huge CMake PR for Bitcoin core.\nIntroducing it chunk by chunk on his own repo\nQT and GUIX is after that\nNext steps How to get this into Core?\nWe don’t have something clean. Still have something wonky and how and what to do with autotools.\nIdeally introduce CMake for a full cycle. It might still be a little too rough to ship on day 1 of the v27 cycle.\nWe could deviate from the beginning of the cycle plan. Half way through a …"},{"uri":"/speakers/cory-fields/","title":"Cory Fields","content":""},{"uri":"/bitcoin-core-dev-tech/2023-09/wallet-legacy-upgrade/","title":"Remove the legacy wallet and updating descriptors","content":"Wallet migration + legacy wallet removal The long-term goal targeted for v29 is to delete BDB and drop the legacy wallet. The migration PR for the GUI was just merged recently, so that will be possible for the next release v26. The \u0026amp;ldquo;Drop migratewallet experimental warning\u0026amp;rdquo; PR (#28037) should also go in before v26. Migrating without BDB should be possible for v27 (PRs \u0026amp;ldquo;Independent BDB\u0026amp;rdquo; #26606 and \u0026amp;ldquo;Migrate without BDB\u0026amp;rdquo; #26596). Priority PRs for now are:\n#20892 …"},{"uri":"/bitcoin-core-dev-tech/2023-09/signature-aggregation/","title":"Signature Aggregation Update","content":"The status of the Half-Agg BIP? 
TODOs but also no use cases upcoming so adding it to the BIP repo doesn\u0026amp;rsquo;t seem useful\nBIP Half-agg TODOs for BIP\nConsider setting z_0 = 1\nReconsider maximum number of signatures\nAdd failing verification test vectors that exercise edge cases.\nAdd signing test vectors (passing and failing, including edge cases)\nTest latest version of hacspec (run through checker)\nHalf-agg BIP has a max number of signatures (2^16), making testing easy\nNeeds more test vectors …"},{"uri":"/tags/signature-aggregation/","title":"signature-aggregation","content":""},{"uri":"/bitcoin-core-dev-tech/2023-09/assumeutxo-update/","title":"AssumeUTXO Update","content":" One remaining PR\n#27596 Adds loadtxoutset and getchainstate RPC, documentation, scripts, tests Adds critical functionality needed for assumeutxo validation to work: net processing updates, validation interface updates, verifydb bugfix, cache rebalancing Makes other improvements so pruning, indexing, -reindex features are compatible with assumeutxo and work nicely Adds hardcoded assumeutxo hash at height 788,000 Probably this should be moved to separate PR? Questions about initial next steps …"},{"uri":"/bitcoin-core-dev-tech/2023-09/kernel-planning/","title":"Kernel Planning","content":"Undecided on where to take this next\nCarl purposely didn\u0026amp;rsquo;t plan beyond what we have\nOptions: Look for who the users currently are of kernel code and polish those interfaces. We\u0026amp;rsquo;ll end up with a bunch of trade-offs. And I don\u0026amp;rsquo;t see us piecemeal extracting something that is useable to core and someone on the outside.\nThe GUI much high level to be on this list. The GUI uses a node interface, it doesn\u0026amp;rsquo;t call an validation right now. It does use some of the data structures. …"},{"uri":"/bitcoin-core-dev-tech/2023-09/libsecp256k1-meeting/","title":"Libsecp256k1 Meeting","content":" Topics: Scope, Priorities Next release Dec 16th Scope: Informal agreeement currently What new modules to add? Needs a specification (whatever that means, Pseudocode etc.0 Should we formalize the agreement more? Should also not be too specific What are examples where this came up in the past? Exfill, Ecdh, Elswift, SIlent payments, musig, schnorr, adaptor sigs, half-agg How specific do we need to be? Tie it to examples to be more clear ECIES (Interesting in the future?) Other problem: What is …"},{"uri":"/bitcoin-core-dev-tech/2023-09/p2p-design-goals/","title":"P2P Design Goals","content":"Guiding Questions What are we trying to achieve?\nWhat are we trying to prevent?\nHow so we weight performance over privacy?\nWhat is our tolerance level for net attacks?\nAre we trying to add stuff to the network or are we trying to prevent people getting information?\nNetwork topology: By design we are trying to prevent the topology being known Information creation, addresses, txs or blocks\nWe want blocks at tips fast - consensus critical information needs to be as fast as possible - ability to get …"},{"uri":"/tags/package-relay/","title":"Package relay","content":""},{"uri":"/bitcoin-core-dev-tech/2023-09/package-relay-planning/","title":"Package Relay Planning","content":"Package Relay Planning What can we do better, keep doing?\nThis is all the work that needs to be done for package relay -\u0026amp;gt; big chart\nLeft part is mempool validation stuff. 
It’s how we decide if we put transactions in the mempool after receiving them “somehow”.\nRight is peer to peer stuff\nCurrent master is accepting parents-and-child packages(every tx but last must be a parent of child), one by one, then all at the same time.\nIt\u0026amp;rsquo;s a simplification, but also economically wrong. Missing …"},{"uri":"/bitcoin-core-dev-tech/2023-09/privacy-metrics/","title":"Privacy Metrics for Coin Selection","content":" Goal: Get privacy consciousness into coin selection Configurability Privacy vs cost (waste) Privacy: weighted on a 0-5 scale Cost: weighted on a 0-5 scale Convert privacy preference (0-5) into satoshis to make it compatible with the waste score Combined score = PrivacyScoreWeight x PrivacyScore + CostWeight x WasteMetric 20-30 sats per privacy point as a gut feeling Privacy score example: sending to different script type than inputs of transaction We already match the change type to the …"},{"uri":"/speakers/thecharlatan/","title":"thecharlatan","content":""},{"uri":"/bitcoin-core-dev-tech/2023-09/wallet-coin-selection/","title":"Discussion on open Coin Selection matters","content":" Topic: review of https://github.com/bitcoin/bitcoin/pull/27601 Problem statement: when doing manual RBF (without using bumpfee RPC) we treat previous change output as a receiver and thus create two outputs to the same address Proposal: combine amount on outputs to the same address What are valid use-cases for having the same address for change and output? Consolidation with payment Alternative: Use sendall with two outputs one with an amount and yours without an amount Payment and send at least …"},{"uri":"/bitcoin-core-dev-tech/2023-09/p2p-working-session/","title":"P2P working session","content":"Erlay Gleb is active and ready to move forward - #21515 Are there people generally interested in review? I wanted first to convince myself that this is useful. I couldn\u0026amp;rsquo;t reproduce the numbers from the paper - 5% was what I got with ~100 connections. My node is listening on a non-standard port. It may be that I don\u0026amp;rsquo;t have a normal sample. There is a pull request that could add RPC stats to bitcoind - that might get better numbers. Current stats are per peer and we lose those stats …"},{"uri":"/bitcoin-core-dev-tech/2023-09/kernel-update/","title":"Kernel Update","content":"Original roadmap decided by carl was:\nStage 1\nStep 1 Introduce bitcoin-chainstate \u0026amp;ldquo;kitchen sink\u0026amp;rdquo; Step 2 (wrapped up ~2mon ago) remove non-valiation code\nStep 3 (where we are rn) remove non-validation headers from bitcoin-chainstate\nWe have mostly implemented Step 4 integrate libbitcoinkernel as a static library\nHave the implementation on personal repo Need to look into breaking up files or live with code organization not being super logical Stage 2 (we should talk about this now) …"},{"uri":"/speakers/jesse-posner/","title":"Jesse Posner","content":""},{"uri":"/speakers/jonas-nick/","title":"Jonas Nick","content":""},{"uri":"/bitcoin-review-podcast/libsec-panel/","title":"Libsec Panel","content":"Intros NVK: I have an absolute rock star team here. So why don\u0026amp;rsquo;t I start introducing the panel? Tim, hello! Do you want to tell a very brief what do you do?\nTim: Yes, so my name is Tim Ruffing. I am a maintainer of the libsecp256k1 library. I do work for Blockstream, who pay me to do this and also who pay me to do research on cryptography and all kinds of aspects related to Bitcoin.\nNVK: Very cool. 
So full time cryptographer, implementer.\nTim: Yep.\nNVK: Living the dream, sir.\nTim: Full …"},{"uri":"/speakers/lloyd-fournier/","title":"Lloyd Fournier","content":""},{"uri":"/tags/schnorr-signatures/","title":"Schnorr signatures","content":""},{"uri":"/tags/threshold-signature/","title":"Threshold signature","content":""},{"uri":"/speakers/tim-ruffing/","title":"Tim Ruffing","content":""},{"uri":"/brink/the-bitcoin-development-podcast/code-review-and-bip324/","title":"Code Review and BIP324","content":"Sebastian\u0026amp;rsquo;s journey to Bitcoin development Mike Schmidt: 00:00:00\nI\u0026amp;rsquo;m sitting down with the stack Sebastian Falbesoner. Did I pronounce that right?\nSebastian Falbesoner: 00:00:06\nYes, perfect.\nMike Schmidt: 00:00:08\nAll right. We\u0026amp;rsquo;re going talk a little bit about his grant renewal what he\u0026amp;rsquo;s been working on in the past and what he plans to work on in the future. So we\u0026amp;rsquo;ve awarded Sebastian a year-long grant renewal through Brink. Brink is a not-for-profit whose purpose …"},{"uri":"/tags/generic-signmessage/","title":"Generic signmessage","content":""},{"uri":"/speakers/mike-schmidt/","title":"Mike Schmidt","content":""},{"uri":"/speakers/sebastian-falbesoner/","title":"Sebastian Falbesoner","content":""},{"uri":"/tags/v2-p2p-transport/","title":"Version 2 P2P transport","content":""},{"uri":"/lightning-specification/2023-08-28-specification-call/","title":"Lightning Specification Meeting - Agenda 1103","content":"Agenda: https://github.com/lightning/bolts/issues/1103\nSpeaker 0: I don\u0026amp;rsquo;t think I\u0026amp;rsquo;ve seen any discussion happening on any of the PRs. So, I don\u0026amp;rsquo;t think we should go and have them in order, except maybe the first one that we may want to just finalize and get something into the spec to say that it should be 2016 everywhere. But, as [redacted] was saying, can probably make it a one-liner and it would be easier.\nSpeaker 1: Please.\nSpeaker 0: Okay, so let\u0026amp;rsquo;s just make that …"},{"uri":"/lightning-specification/2023-08-14-specification-call/","title":"Lightning Specification Meeting - Agenda 1101","content":"Agenda: https://github.com/lightning/bolts/issues/1101\nSpeaker 0: Alright, should we start? I want to talk a quick update on dual funding because I\u0026amp;rsquo;ve been working with [redacted] on cross-compatibility tests between Core Lightning and Eclair, and so far, everything looks good. The only part that has not yet been fully implemented in CLN is the reconnection part — when you disconnect in the middle of the signature exchange. 
This kind of reconnection is supposed to complete that signature …"},{"uri":"/chaincode-labs/","title":"Chaincode Labs","content":" Chaincode Podcast Chaincode Residency "},{"uri":"/chaincode-labs/chaincode-podcast/","title":"Chaincode Podcast","content":" Privacy and robustness in Lightning Aug 09, 2023 Rusty Russell Lightning Eltoo Privacy enhancements Episode 34 Simple Taproot Channels Jul 15, 2023 Elle Mouton, Oliver Gugger Anchor outputs Cpfp carve out Ptlc Simple taproot channels Taproot Episode 33 The Bitcoin Development Kit (BDK) May 23, 2023 Alekos Filini, Daniela Brozzoni Descriptors Hwi Wallet Episode 32 Lightning History and everything else Mar 22, 2023 Tadge Dryja Eltoo Lightning Segwit Sighash anyprevout Trimmed htlc Episode 31 The …"},{"uri":"/tags/eltoo/","title":"Eltoo","content":""},{"uri":"/chaincode-labs/chaincode-podcast/privacy-and-robustness-in-lightning/","title":"Privacy and robustness in Lightning","content":"Introduction Rusty Russell: 00:00:00\nSimplicity is a virtue in itself, right? So you can try to increase fairness. If you reduce simplicity, you end up actually often stepping backwards. The thing that really appeals about LN symmetry is it\u0026amp;rsquo;s simple. And every scheme to fix things by introducing more fairness and try to reintroduce penalties and everything else ends up destroying the simplicity.\nMark Erhardt: 00:00:27\nSo soon again.\nAdam Jonas: 00:00:28\nYes. For those of you that …"},{"uri":"/speakers/rusty-russell/","title":"Rusty Russell","content":""},{"uri":"/lightning-specification/2023-07-31-specification-call/","title":"Lightning Specification Meeting - Agenda 1098","content":"Agenda: https://github.com/lightning/bolts/issues/1098\nSpeaker 0: Great. I guess, does anyone have any urgent stuff that we want to make sure we get to in the meeting today? If not, I\u0026amp;rsquo;m just going to go ahead and start working down the recently updated proposal seeking review list. Great. Okay. First up on the list is option simple close. This is from the summit we had a few weeks ago in July. Looks like [Redacted] and [Redacted] have commented on this and [Redacted] stuff. Does anyone …"},{"uri":"/bitcoin-review-podcast/mpcbros-vs-teamscript-debate/","title":"#MPCbros vs #TeamScript Debate","content":"NVK: 00:01:08\nHello and welcome back to the Bitcoin.review. Today we have a very interesting show. It\u0026amp;rsquo;s going to be a knife fight. Two men go into a room with one knife. One comes out. On my left side, Rijndael for Team MPC, weighing 64 kilobytes. On my right side, Rob for Team Script with variable sizing, so we don\u0026amp;rsquo;t know. Is this already a loss? We don\u0026amp;rsquo;t know, let\u0026amp;rsquo;s find out. 
Hello, gentlemen, welcome to the show.\nRijndael: 00:01:40\nHey, how\u0026amp;rsquo;s it going?\nRob …"},{"uri":"/tags/miniscript/","title":"Miniscript","content":""},{"uri":"/speakers/rob-hamilton/","title":"Rob Hamilton","content":""},{"uri":"/tags/taproot/","title":"Taproot","content":""},{"uri":"/speakers/arik-sosman/","title":"Arik Sosman","content":""},{"uri":"/bitcoinplusplus/layer-2/","title":"Layer 2","content":" Anchors \u0026amp;amp; Shackles (Ark) Jun 24, 2023 Burak Keceli Ark Building Trustless Bridges Apr 29, 2023 John Light Scalability Sidechains Statechains Exploring P2P Hashrate Markets V2 Apr 29, 2023 Nico Preti Mining Lightning on Taproot Jul 18, 2023 Arik Sosman Lightning Taproot Translational Bitcoin Development Apr 29, 2023 Tadge Dryja Ux "},{"uri":"/bitcoinplusplus/layer-2/lightning-on-taproot/","title":"Lightning on Taproot","content":"My name is Arik. I work at Spiral. And most recently I\u0026amp;rsquo;ve been working on adding support for taproot channels. At least I try to work on that, but I\u0026amp;rsquo;m getting pulled back into Swift bindings. And this presentation is supposed to be about why the taproot spec is laid out the way it is. What are some of the motivations, the constraints, limitations that are driving the design. And I wanna really dig deep into the math that some of the vulnerabilities that we\u0026amp;rsquo;re trying to avoid as …"},{"uri":"/lightning-specification/2023-07-17-specification-call/","title":"Lightning Specification Meeting - Agenda 1094","content":"Agenda: https://github.com/lightning/bolts/issues/1094\nSpeaker 0: Alright. Does anyone want to volunteer to take us through the list? Nope? Okay. I will read the issue that [Redacted] very kindly left for us then. The first item on the list is Bolt 8’s chaining keys clarification. Is there anything we need to talk about here?\nSpeaker 1: Yeah. Yes, it\u0026amp;rsquo;s almost under the spelling rule, but I think it\u0026amp;rsquo;s been ACKed by everyone. So, unless people have objections to the specific wording. …"},{"uri":"/tags/anchor-outputs/","title":"Anchor outputs","content":""},{"uri":"/tags/cpfp-carve-out/","title":"CPFP carve out","content":""},{"uri":"/speakers/elle-mouton/","title":"Elle Mouton","content":""},{"uri":"/speakers/oliver-gugger/","title":"Oliver Gugger","content":""},{"uri":"/tags/ptlc/","title":"Point Time Locked Contracts (PTLCs)","content":""},{"uri":"/tags/simple-taproot-channels/","title":"Simple taproot channels","content":""},{"uri":"/chaincode-labs/chaincode-podcast/simple-taproot-channels/","title":"Simple Taproot Channels","content":"Introduction Elle Mouton: 00:00:00\nThen we get to Gossip 2.0, which is the bigger jump here, which would be instead of tying a UTXO proof to every channel, you instead tie a proof to every node announcement.\nAdam Jonas: 00:00:15\nOkay.\nAdam Jonas: 00:00:19\nIt\u0026amp;rsquo;s a good week for podcasting.\nMark Erhardt: 00:00:21\nOh yes. We have a lot of people to talk to.\nAdam Jonas: 00:00:25\nWe\u0026amp;rsquo;ll see how many we can wrangle. But first two are going to be Elle and Oli. 
And what are we going to talk …"},{"uri":"/speakers/dusty-daemon/","title":"Dusty Daemon","content":""},{"uri":"/tags/scalability/","title":"scalability","content":""},{"uri":"/bitcoin-explained/scaling-to-billions-of-users/","title":"Scaling to Billions of Users","content":"Introduction Aaron: 00:01:54\nIt\u0026amp;rsquo;s a little bit of a different episode as usual because we\u0026amp;rsquo;re now really going to discuss one particular technical topic. We\u0026amp;rsquo;re more describing an outline of a vision for Bitcoin. And To be precise, it\u0026amp;rsquo;s going to be Anthony Towns\u0026amp;rsquo; sort of scaling vision. Right?\nSjors: 00:02:23\nThat\u0026amp;rsquo;s right. Anthony Towns had a vision.\nAaron: 00:02:26\nWho\u0026amp;rsquo;s Anthony Towns?\nSjors: 00:02:27\nHe\u0026amp;rsquo;s a Bitcoin developer.\nAaron: 00:02:28 …"},{"uri":"/tags/sidechains/","title":"Sidechains","content":""},{"uri":"/tags/splicing/","title":"Splicing","content":""},{"uri":"/speakers/stephan-livera/","title":"Stephan Livera","content":""},{"uri":"/stephan-livera-podcast/","title":"Stephan Livera Podcast","content":" 10x your Bitcoin Security with Multisig Sep 28, 2020 Michael Flaxman Security A New Way to HODL? Jan 14, 2023 James O\u0026amp;#39;Beirne ANYPREVOUT, MPP, Mitigating Lightning Attacks Aug 13, 2020 Christian Decker Sighash anyprevout Eltoo Transaction pinning Multipath payments Trampoline payments Privacy problems ANYPREVOUT, MPP, Mitigating LN Attacks Aug 13, 2020 Christian Decker Becoming A Lightning Routing Node Operator Sep 29, 2021 Thomas Jestopher, Anthony Potdevin Bitcoin Coin Selection - Managing …"},{"uri":"/stephan-livera-podcast/what-is-splicing/","title":"What is splicing?","content":"Stephan Livera 00:02:14\nDusty welcome to the show hey thanks for having me yeah I\u0026amp;rsquo;m a fan of what you got what you\u0026amp;rsquo;re doing with splicing and obviously it\u0026amp;rsquo;s been great to see you over the years kind of or in and around the Bitcoin and Lightning circles and yeah I love what you\u0026amp;rsquo;re doing with flashing so let\u0026amp;rsquo;s hear a little bit about you just for people who don\u0026amp;rsquo;t know you just a little bit on you\nDusty Daemon 00:02:36\nThanks yeah I\u0026amp;rsquo;m Dusty Damon …"},{"uri":"/bitcoin-explained/bitcoin-core-v25/","title":"Bitcoin Core 25.0","content":"Aaron van Wirdum: 00:00:21\nLive from Utrecht, this is Bitcoin\u0026amp;hellip;\nSjors Provoost: 00:00:24\nExplained.\nIntroduction Aaron van Wirdum: 00:00:24\nHey, Sjors, we\u0026amp;rsquo;re back. We\u0026amp;rsquo;re back in beautiful, sunny Utrecht after I spent some time in Miami and Prague. You spent some time in Prague, and more Prague, is that right?\nSjors Provoost: 00:00:38\nMostly Prague.\nAaron van Wirdum: 00:00:38\nYou were in Prague for a while. That\u0026amp;rsquo;s where we recorded the previous episode about Stratum V2. …"},{"uri":"/tags/compact-block-filters/","title":"Compact block filters","content":""},{"uri":"/tags/transaction-relay-policy/","title":"Transaction Relay Policy","content":""},{"uri":"/bitcoinplusplus/layer-2/anchors-shackles-ark/","title":"Anchors \u0026 Shackles (Ark)","content":"So hey guys, my name is Borat. Today I\u0026amp;rsquo;ll be talking about something that I\u0026amp;rsquo;ve been working on for the past, let\u0026amp;rsquo;s say, six months or so. To give a bit of a background, I started first in a big block sort of camp and then got introduced later to Liquid. 
And I did some covenant R\u0026amp;amp;D on Liquid for about two years, and now I\u0026amp;rsquo;m recently exploring the Lightning space for about a year. And as someone who initially comes sort of from that sort of big block camp, I\u0026amp;rsquo;ve …"},{"uri":"/tags/ark/","title":"Ark","content":""},{"uri":"/speakers/burak-keceli/","title":"Burak Keceli","content":""},{"uri":"/lightning-specification/2023-06-19-specification-call/","title":"Lightning Specification Meeting - Agenda 1088","content":"Agenda: https://github.com/lightning/bolts/issues/1088\nSpeaker 0: Thanks [Redacted]. So, there isn\u0026amp;rsquo;t much that has changed on any of the PRs that are on the pending to-do list, except for attributable errors. So, maybe since [Redacted] is here, we can start with attributable errors, and [Redacted], you can tell us what has changed about the HMAC truncation, for example.\nSpeaker 1: Yeah, so not that much changed. Just the realization that we could truncate the HMAC basically, because the …"},{"uri":"/lightning-specification/2023-06-05-specification-call/","title":"Lightning Specification Meeting - Agenda 1085","content":"Agenda: https://github.com/lightning/bolts/issues/1085\nSpeaker 0: Alright, so let\u0026amp;rsquo;s start. So, the first PR on the list is one we\u0026amp;rsquo;ve already discussed last week. It just needs someone from another team, either Lightning Labs or LDK to hack. It\u0026amp;rsquo;s just a clarification on Bolt 8. So, I don\u0026amp;rsquo;t think it should be reviewed right now because just takes time to work through it and verify that it matches your implementation. But if someone on either of those teams can add it to …"},{"uri":"/speakers/alekos-filini/","title":"Alekos Filini","content":""},{"uri":"/speakers/daniela-brozzoni/","title":"Daniela Brozzoni","content":""},{"uri":"/tags/hwi/","title":"Hardware wallet interface (HWI)","content":""},{"uri":"/chaincode-labs/chaincode-podcast/the-bitcoin-development-kit-bdk/","title":"The Bitcoin Development Kit (BDK)","content":"Intro Adam Jonas: 00:00:06\nWe are recording again.\nMark Erhardt: 00:00:08\nWe have as our guests today Alekos and Daniela from the BDK team. Cool.\nAdam Jonas: 00:00:13\nWe\u0026amp;rsquo;re going to get into it. BDK, how it came to be, some features that are going to be released soon. So hope you enjoy. All right, Alekos, Daniela, welcome to the Chaincode podcast. We\u0026amp;rsquo;re excited to have you here.\nDaniela Brozzoni: 00:00:30\nHi, thank you for having us.\nAlekos Filini: 00:00:32\nYeah, thank you for having …"},{"uri":"/lightning-specification/2023-05-22-specification-call/","title":"Lightning Specification Meeting - Agenda 1082","content":"Agenda: https://github.com/lightning/bolts/issues/1082\nSpeaker 1: So, the first PR we have on the list is a clarification on Bolt 8 by [Redacted]. It looks like, if I understand correctly, someone tried to reimplement Bolt 8 and got it wrong, and it\u0026amp;rsquo;s only clarifications. Is that correct?\nSpeaker 2: Yes. So LN message - a JavaScript library that speaks to nodes - had everything right except for the fact they shared a chaining key, which is correct to start with. 
They start at the same …"},{"uri":"/speakers/bastien-teinturier/","title":"Bastien Teinturier","content":""},{"uri":"/speakers/dusty-dettmer/","title":"Dusty Dettmer","content":""},{"uri":"/speakers/jeff-czyz/","title":"Jeff Czyz","content":""},{"uri":"/bitcoin-review-podcast/lightning-privacy-splice-panel/","title":"Lightning Privacy \u0026 Splice Panel","content":"Housekeeping Speaker 0: 00:01:17\nToday we\u0026amp;rsquo;re going to be talking about Lightning privacy, splicing and other very cool things with an absolute amazing panel here. Just before that, I have a couple of things that I want to set out some updates. So OpenSats is now taking grants for a Bitcoin application, sorry, taking grant applications for Bitcoin projects and Nostra projects and Lightning projects. And there is about five mil for an OSTER, five mil for Bitcoin. So do apply. I joined as a …"},{"uri":"/tags/offers/","title":"Offers","content":""},{"uri":"/tags/rv-routing/","title":"Rendez-vous routing","content":""},{"uri":"/speakers/tony-giorgio/","title":"Tony Giorgio","content":""},{"uri":"/speakers/vivek/","title":"Vivek","content":""},{"uri":"/speakers/greg-sanders/","title":"Greg Sanders","content":""},{"uri":"/bitcoin-review-podcast/op_vault-round-2-op_ctv/","title":"OP_VAULT Round 2 \u0026 OP_CTV","content":"Introductions NVK: 00:01:25\nToday, we\u0026amp;rsquo;re going to get back to OP_VAULT, just kidding, OP_VAULT, this new awesome proposal on how we can make people\u0026amp;rsquo;s money safe in the future. And in my opinion, one of very good ways of scaling Bitcoin self-custody in a sane way. And with that, let me introduce today\u0026amp;rsquo;s guests, Mr. James. And welcome back.\nJames O\u0026amp;rsquo;Beirne: 00:01:32\nHey, thanks. It\u0026amp;rsquo;s always good to be here. This is quickly becoming my favorite Bitcoin podcast. …"},{"uri":"/lightning-specification/2023-05-08-specification-call/","title":"Lightning Specification Meeting - Agenda 1076","content":"Agenda: https://github.com/lightning/bolts/issues/1076\nSpeaker 0: First off of the list is dual funding. [Redacted] has merged the two small patches that we had discussed adding. There\u0026amp;rsquo;s only one small patch that is remaining, which is about making the TLV signed to allow for future splicing to make sure that splicing can also use the RBF messages for signed amounts. But apart from that, it looks like dual funding is almost complete. We plan on activating it on our node soon to be able to …"},{"uri":"/bitcoinplusplus/layer-2/building-trustless-bridges/","title":"Building Trustless Bridges","content":"Thank you all for coming to my talk. As is titled, I\u0026amp;rsquo;ll be talking about how we can build trustless bridges for Bitcoin. So my name is John Light. I\u0026amp;rsquo;m working on a project called Sovryn. We actually utilize a bridge, the Rootstock Powpeg Bridge, because our project was built on a Rootstock side chain. And we\u0026amp;rsquo;re interested in how we can improve the quality of that bridge. Currently it\u0026amp;rsquo;s a federated bridge, we would like to upgrade to a trustless bridge. And this talk will …"},{"uri":"/bitcoinplusplus/layer-2/exploring-p2p-hashrate-markets-v2/","title":"Exploring P2P Hashrate Markets V2","content":"Introduction Hi everybody, I\u0026amp;rsquo;m Nico. I\u0026amp;rsquo;m one of the co-founders at Rigly, and we are a peer-to-peer hashrate market. 
The way this talk is kind of broken up is the first part we\u0026amp;rsquo;re going to go through what is hash rate, how the protocol for mining works across the network, and then a little bit on how hash rate markets have come around, and then finally how we can potentially do hash rate with root layer two, which is obviously given the kind of subject matter of the conference. …"},{"uri":"/speakers/john-light/","title":"John Light","content":""},{"uri":"/speakers/nico-preti/","title":"Nico Preti","content":""},{"uri":"/tags/statechains/","title":"Statechains","content":""},{"uri":"/speakers/tadge-dryja/","title":"Tadge Dryja","content":""},{"uri":"/bitcoinplusplus/layer-2/translational-bitcoin-development/","title":"Translational Bitcoin Development","content":"Introduction This is called Translational Bitcoin Development or Why Johnny Can\u0026amp;rsquo;t Verify. So quick intro, I\u0026amp;rsquo;m Tadge Dryja. Stuff I\u0026amp;rsquo;ve worked on in Bitcoin over the last decade is the Bitcoin Lightning Network paper, the Discreet Log Contracts paper, Utreexo, and building out a lot of that software. In terms of places I\u0026amp;rsquo;ve worked, Lightning Labs, founded that, and then MIT DCI is where I used to work for a couple years and now I\u0026amp;rsquo;m at a company called Lightspark. …"},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-27-assumeutxo/","title":"AssumeUTXO update","content":"Goals allow nodes to get a utxo set quickly (1h) at the same time, no major security concessions Approach Provide serialized utxo snapshot get headers chain first, load snapshot and deserialize, sync to tip from that then start background verification with a 2nd snapshot finally, compare hashes when background IBD hits snapshot base Progress update lots of refactoring has been done; ChainStateManager was introduced, globals removed, mempool / blockstorage refactored init / shutdown logic changes …"},{"uri":"/bitcoin-core-dev-tech/2023-04/","title":"Bitcoin Core Dev Tech 2023 (Apr)","content":" ASMap Apr 25, 2023 Fabian Jahr Bitcoin core Security enhancements AssumeUTXO update Apr 27, 2023 James O\u0026amp;#39;Beirne Bitcoin core Assumeutxo Fuzzing Apr 27, 2023 Niklas Gögge Bitcoin core Developer tools Libbitcoin kernel Apr 26, 2023 thecharlatan Bitcoin core Build system Mempool Clustering Apr 25, 2023 Suhas Daftuar, Pieter Wuille Cluster mempool Package Relay Primer Apr 25, 2023 Gloria Zhao Bitcoin core Package relay P2p Project Meta Discussion Apr 26, 2023 Bitcoin core Refactors Apr 25, 2023 …"},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-27-fuzzing/","title":"Fuzzing","content":"Slides: https://docs.google.com/presentation/d/1NlTw_n60z9bvqziZqU3H3Jw7Xs5slnQoehYXhEKrzOE\nFuzzing Fuzzing is done continuously. Fuzz targets can pay off even years later by finding newly introduced bugs. Example in slide about libFuzzer fuzzing a parse_json function which might crash on some weird input but won’t report invalid json inputs that pass parsing. libFuzzer does coverage guided feedback loop + helps with exploring control flow. Bug Oracles Assertions - Adding assertions is tricky …"},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-26-libbitcoin-kernel/","title":"Libbitcoin kernel","content":"Questions and Answers Q: bitcoind and bitcoin-qt linked against kernel the libary in the future?\npresenter: yes, that is a / the goal Q: Have you looked at an electrum implementation using libbitcoinkernel?\naudience: yes, would be good to have something like this! 
audience: Also could do the long proposed address index with that? audience: not only address index, other indexes too. Q: Other use-cases:\naudience: be able to run stuff on iOS Q: Should the mempool be in the kernel?\npresenter: there …"},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-26-meta-discussion/","title":"Project Meta Discussion","content":"Part 1 What makes bitcoin core fun Intellectual challenge/problems Interesting, diverse, open source project collaborators Meaningful project goals Culturally the project is a meritocracy Scientific domain intersecting with real world problems Real world usage What makes bitcoin core not fun Long delivery cycles -\u0026amp;gt; lack of shippers high Soft fork activation Antagonism (internal and external) Ambiguity of feature/code contribution usage Relationships Financial stability Unclear goals Part 2 …"},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-27-asmap/","title":"ASMap","content":"Should we ship it every Core release? The initial idea is shipping a map file every Core release. Fabian wrote an article about how would be integrated into the deployment (https://gist.github.com/fjahr/f879769228f4f1c49b49d348f80d7635). Some devs pointed out an option would be to have it separated to the release process, any regular contributor could update it whenever they like (who would do it? frequency?). Then when the release comes around one of the recent ones will be chosen. People …"},{"uri":"/tags/cluster-mempool/","title":"Cluster mempool","content":""},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-25-mempool-clustering/","title":"Mempool Clustering","content":"Current Problems lot of problems in the mempool\neviction is broken mining algorithm is part of the problem, it’s not perfect RBF is like totally broken we complain all the time, sometimes we do/don\u0026amp;rsquo;t RBF when we should/shouldn\u0026amp;rsquo;t Eviction Eviction is when mempool is full, and we want to throw away the worst tx. Example, we think a tx is worst in mempool but it’s a descendant of a \u0026amp;ldquo;good\u0026amp;rdquo; tx. Mempool eviction is kinda the opposite of the mining algorithm. For example the …"},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-25-package-relay-primer/","title":"Package Relay Primer","content":"Slides: https://docs.google.com/presentation/d/12YPlmmaCiNNL83b3FDmYwa7FKHP7yD0krzkEzM-tkTM\nProblems CPFP Doesn’t Work When Mempool Min Feerate Rises Bad for users who want to use CPFP and L2s, but also a glaring limitation in our ability to assess transaction incentive compatibility\nPinning being able to feebump transaction is a pinning concern counterpart can intentionally censor your transactions, and in L2 that can mean stealing your money because you didn’t meet the timelock Pinning …"},{"uri":"/speakers/pieter-wuille/","title":"Pieter Wuille","content":""},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-25-refactors/","title":"Refactors","content":"One take-away from the Chaincode residency in 2019 was: Don’t do refactors (unless you really need it)\nA marked increase from 2019 to today (Chart on the increase of refactors)\nThe comments and PRs are steady but the refactors are increasing\nQuibble about how regular reviewers are counted (should be higher than 5 comments)\nProject reasons:\nOssification? 
Natural way mature projects progress/Boy Scout Rule Personal reasons:\nTime commitment of large review may not be possible (extended period of …"},{"uri":"/bitcoin-core-dev-tech/2023-04/2023-04-26-silent-payments/","title":"Silent Payments","content":"BIP Overview Scanning key and spending key are different: better security. Silent payment transactions are indistinguishable from transactions with taproot outputs on-chain.\nQ: Address labeling, why not create two silent payment addresses?\nA: It doubles scanning costs.\nLimited to taproot UTXOs (currently about 3% of transactions) but when it increases we should find ways to optimize scanning, even though it currently does not seem to be an issue.\nQ: Why no P2PK\nA: Limit to most used payment …"},{"uri":"/speakers/suhas-daftuar/","title":"Suhas Daftuar","content":""},{"uri":"/lightning-specification/2023-04-24-specification-call/","title":"Lightning Specification Meeting - Agenda 1067","content":"Agenda: https://github.com/lightning/bolts/issues/1067\nSpeaker 0: Alright, so first thing I\u0026amp;rsquo;ve got on deck is 1066, which says: Correct final CLTV handling and blinded paths.\nSpeaker 1: Yeah, I haven\u0026amp;rsquo;t dug as deep into the blinded path stuff, so this may be an incorrect reading that I was confused to. Basically, for a blinded path, we don\u0026amp;rsquo;t have a final CLTV for the recipient because we just have a full CLTV delta for the full blinded path. So, the spec is very clear that you …"},{"uri":"/bitcoin-explained/peer-to-peer-encryption/","title":"Peer-to-peer Encryption","content":"Aaron van Wirdum:\nSjros, you were just telling me that one of the reasons you came up with the topic for this episode, which is BIP324, a peer-to-peer encryption, is because you had a call with other Bitcoin developers about this. Since when do Bitcoin Core developers call? I know Bitcoin developers as IRC wizards, and they\u0026amp;rsquo;re on a call now? That sounds like Ethereum, Bitcoin Cash type of communication. What\u0026amp;rsquo;s going on?\nSjors Provoost:\nThe idea would be to put these calls on some …"},{"uri":"/speakers/calle/","title":"Calle","content":""},{"uri":"/bitcoin-review-podcast/cashu-and-fedimint/","title":"Cashu \u0026 Fedimint: ecash, Bitcoin's layer 3 \u0026 scalability","content":"Introduction If you haven\u0026amp;rsquo;t had your brain completely destroyed by the segregated witness episode, I hope that the e-cash episode will finalize. It\u0026amp;rsquo;s going to be fatality of your brain on this one. So I have with me a really good group of folks, two of them who are working on the biggest projects, who are deploying E-cash out there in the world.\nGuest introductions NVK: 00:01:30\nWelcome Calle.\nCalle: 00:01:33\nHey there, NVK. Hey everyone, Thanks for having me.\nNVK: 00:01:36\nHey, so …"},{"uri":"/tags/ecash/","title":"Ecash","content":""},{"uri":"/speakers/eric-sirion/","title":"Eric Sirion","content":""},{"uri":"/speakers/matt-odell/","title":"Matt Odell","content":""},{"uri":"/chaincode-labs/chaincode-podcast/lightning-history-and-everything-else/","title":"Lightning History and everything else","content":"Speaker 0: 00:00:00\nYou know, it works, but it also worries me because it\u0026amp;rsquo;s like, oh, people are like, oh, I\u0026amp;rsquo;m sending one sat. 
And you\u0026amp;rsquo;re like, yeah, but it works very differently whether you\u0026amp;rsquo;re sending 100,000 Satoshis or one.\nSpeaker 1: 00:00:18\nJonas.\nSpeaker 2: 00:00:19\nWe are back and we\u0026amp;rsquo;re going to have Taj joining us in the studio today. What should we chat about with him?\nSpeaker 1: 00:00:26\nI think maybe lightning stuff.\nSpeaker 2: 00:00:28\nYeah, lightning …"},{"uri":"/tags/segwit/","title":"Segregated witness","content":""},{"uri":"/tags/sighash-anyprevout/","title":"SIGHASH_ANYPREVOUT","content":""},{"uri":"/tags/trimmed-htlc/","title":"Trimmed HTLC","content":""},{"uri":"/tags/dual-funding/","title":"Dual funding","content":""},{"uri":"/bitcoinplusplus/lets-talk-about-dual-funding-protocol-overview/","title":"Let's Talk About Dual Funding: Protocol Overview","content":"Openning a Lightning Channel Unspent transaction output is locked up. Bitcoin is unspent, but it requires two parties to sign. In order to spend it, you need two people to sign off on it. How do we make that happen? A multisig that two people sign off, so it\u0026amp;rsquo;s 2 of 2 multisig.\nWe\u0026amp;rsquo;re talking about lightning channels. A lightning channel is a UTXO that is locked up with a 2 of 2 (multisig). The process of opening a lightning channel, what does that involve? We want to open a channel. …"},{"uri":"/speakers/lisa-neigut/","title":"Lisa Neigut","content":""},{"uri":"/bitcoinplusplus/onchain-privacy/splicing-lightnings-multiparty-future/","title":"Splicing, Lightning's Multiparty Future","content":"Introduction Thanks for coming. It\u0026amp;rsquo;s exciting to be in Mexico City, talking about Lightning, splicing, and privacy. I\u0026amp;rsquo;m Dusty. This is a photo of me when I was much younger—a little bit younger, but that number is growing every year, it turns out. So, a little bit about me: I\u0026amp;rsquo;ve been doing Bitcoin development stuff for probably five years, and about a year or change ago, I decided to focus on doing lightning stuff, and my first big project has been splicing. Okay, so who here …"},{"uri":"/chaincode-labs/chaincode-podcast/the-bitcoin-core-wallet/","title":"The Bitcoin Core Wallet","content":"Intro Andrew Chow: 00:00:00\nI will not say this is an unreasonable approach for a first implementation.\nAdam Jonas: 00:00:06\nSure. But Satoshi\u0026amp;rsquo;s been gone for a little bit.\nMurch: 00:00:18\nHey Jonas.\nAdam Jonas: 00:00:19\nHey Murch. We are here with one of your wallet collaborators, AChow101.\nMurch: 00:00:24\nWow, the wallet guy.\nAdam Jonas: 00:00:26\nYeah, you\u0026amp;rsquo;re still a collaborator.\nMurch: 00:00:28\nSure, sure. So Andy has been working on the Bitcoin Core wallet for many years, and …"},{"uri":"/stephan-livera-podcast/2023-03-06-greg-sanders/","title":"Bitcoin Transaction Pinning, ANYPREVOUT \u0026 eltoo","content":"podcast: https://stephanlivera.com/episode/463/\nStephan Livera – 00:02:02: So on to the discussion with Greg. Greg, also known as instagibbs. Welcome to the show.\nGreg Sanders :\nHi, glad to be here.\nStephan Livera :\nSo, Greg, I know you’re doing a lot of work on some interesting Lightning stuff and transaction fee pinning things. 
I know you’ve been around for a while in terms of Bitcoin development and Lightning stuff, so yeah, interested to chat and hear a little bit more from your perspective, …"},{"uri":"/advancing-bitcoin/","title":"Advancing Bitcoin","content":" Advancing Bitcoin 2019 Advancing Bitcoin 2020 Advancing Bitcoin 2022 Advancing Bitcoin 2023 "},{"uri":"/advancing-bitcoin/2022/","title":"Advancing Bitcoin 2022","content":" Challenges in implementing coinjoin in hardware wallets Mar 02, 2023 Pavol Rusnak Coinjoin Hardware wallet Taproot Deploying Lightning at Scale (Greenlight) Mar 03, 2022 Christian Decker Lightning Miniscript: Composable, Analyzable and Smarter Bitcoin Script Mar 03, 2022 Sanket Kanjalkar Miniscript Static Invoices in Lightning: A Deep Dive and Comparison of Bolt11 Invoices, LNURL, Offers and AMP Mar 03, 2022 Elle Mouton Lnurl Offers Amp Taproot multisig Mar 03, 2022 Jimmy Song Taproot Tapscript …"},{"uri":"/advancing-bitcoin/2022/challenges-in-implementing-coinjoin-in-hardware-wallets/","title":"Challenges in implementing coinjoin in hardware wallets","content":"Introduction Hi everyone. I\u0026amp;rsquo;m Pavel Rusnak, known as Stik in the Bitcoin community. I\u0026amp;rsquo;m the co-founder of Satoshi Labs, the company that brought Trezor. Today I\u0026amp;rsquo;m going to be talking about the challenges of implementing CoinJoin in hardware wallets.\nWhy Privacy Let\u0026amp;rsquo;s summarize why privacy matters. Among all reasons I consider these reasons as most important.\nAutonomy: Privacy allows individuals to have control over their personal information, enabling them to make …"},{"uri":"/tags/hardware-wallet/","title":"hardware-wallet","content":""},{"uri":"/speakers/pavol-rusnak/","title":"Pavol Rusnak","content":""},{"uri":"/advancing-bitcoin/2023/","title":"Advancing Bitcoin 2023","content":" Simplicity: Going Beyond Miniscript Mar 01, 2023 Christian Lewe Simplicity Timelock-enabled safety and recovery use cases for Bitcoin users Mar 01, 2023 Kevin Loaec Timelocks Miniscript "},{"uri":"/speakers/christian-lewe/","title":"Christian Lewe","content":""},{"uri":"/speakers/kevin-loaec/","title":"Kevin Loaec","content":""},{"uri":"/tags/simplicity/","title":"Simplicity","content":""},{"uri":"/advancing-bitcoin/2023/simplicity-going-beyond-miniscript/","title":"Simplicity: Going Beyond Miniscript","content":"Can we welcome Christian to the stage? I think I don\u0026amp;rsquo;t have any computer yet. Oh.. It\u0026amp;rsquo;s just okay. All right, I just, I just just read. There\u0026amp;rsquo;s a pointer, okay, great great. I was a bit confused for a second. Can I start? great, so welcome, I\u0026amp;rsquo;m Christian Lewe. I work for Blockstream to research where I mainly work on Simplicity and Miniscript and it\u0026amp;rsquo;s great that we just had a talk on Miniscript so you all know why Miniscript is great, why you should use Miniscript, …"},{"uri":"/advancing-bitcoin/2023/timelock-enabled-safety-and-recovery-use-cases-for-bitcoin-users/","title":"Timelock-enabled safety and recovery use cases for Bitcoin users","content":"Introduction Can we give a round of applause for Kevin, and welcome him to the stage. Thank you. Alright, yeah, so I\u0026amp;rsquo;m going to do a talk around Timelocks in Bitcoin. I\u0026amp;rsquo;m going to start with a pretty quick introduction on the different types of Timelocks over time in Bitcoin, and where we\u0026amp;rsquo;re at today. 
Also, a little bit, very, very quickly talking about what we\u0026amp;rsquo;re using Timelocks for in Bitcoin today. And also all the very exciting stuff that\u0026amp;rsquo;s happening on Bitcoin …"},{"uri":"/tags/timelocks/","title":"Timelocks","content":""},{"uri":"/stephan-livera-podcast/2023-02-27-craig-raw-bitcoin-multi-signature/","title":"Bitcoin Multi-signature","content":"podcast: https://stephanlivera.com/episode/462/\nStephan – 00:02:53 Craig, welcome back to the show.\nCraig :\nGreat, Stephan, it’s really good to be back.\nStephan :\nYeah, there’s been so many updates going on with Sparrow Wallet and I thought it’d be great to have you back to chat about the space. Whether it’s multisignature or privacy or import and export of transactions, I think there’s lots of things to add. So, yeah, I’m just curious, as you look at the space now, what are some of the big …"},{"uri":"/speakers/matt-corallo/","title":"Matt Corallo","content":""},{"uri":"/stephan-livera-podcast/2023-02-23-matt-corallo/","title":"What Bitcoin Specialises in","content":"podcast: https://stephanlivera.com/episode/461/\nStephan:\nWelcome back to the show, Matt.\nMatt :\nYeah, thanks for having me.\nStephan :\nSo lots of things going on. I know there’s always some topic of the day or whatever that’s happening, whether it’s Ordinals and inscriptions, which is the latest kind of craze. Maybe the Mempool is clearing out a little bit now, but yeah, I thought it would be good to chat with you again. I know nowadays you’re kind of more focused on to Lightning. What would you …"},{"uri":"/bitcoin-review-podcast/demystifying-and-understanding-bitcoin-core-development/","title":"Demystifying and Understanding Bitcoin Core Development","content":"Intro NVK: 00:00:40\nToday, we\u0026amp;rsquo;re going to be talking about Bitcoin Core development and trying to demystify it, maybe sort of like shed some light. So, some of the FUD goes away and new FUD comes in maybe, who knows?\nGuest introductions NVK: 00:00:51\nI have this awesome panel with me. I have a James O\u0026amp;rsquo;Beirne. Hi James.\nJames O\u0026amp;rsquo;Beirne: 00:00:56\nHi, good to be here.\nNVK: 00:00:58\nThanks for coming. Sjors.\nSjors Provoost: 00:01:00\nHello.\nNVK: 00:01:02\nThanks for coming back. And …"},{"uri":"/speakers/rodolfo-novak/","title":"Rodolfo Novak","content":""},{"uri":"/tags/ephemeral-anchors/","title":"Ephemeral anchors","content":""},{"uri":"/chaincode-labs/chaincode-podcast/sighash-anyprevout-ephemeral-anchors-and-ln-symmetry-eltoo/","title":"SIGHASH_ANYPREVOUT, ephemeral anchors and LN symmetry (ELTOO)","content":"Greg: 00:00:00\nWith ANYPREVOUT, I don’t want to champion it right now from an activation perspective. I think the community is pretty split right now on what to do, but we can build the knowledge and build up the tooling.\nMurch: 00:00:18\nHey Jonas.\nJonas: 00:00:19\nHey Murch. We are back in the studio for a productive podcasting week. We have @instagibbs (Greg Sanders). So what should we talk to him about?\nMurch: 00:00:28\nHe’s been looking a lot at mempool policy improvements. I think we’re going …"},{"uri":"/lightning-specification/2023-02-13-specification-call/","title":"Lightning Specification Meeting - Agenda 1055","content":"Name: Lightning specification call\nTopic: Agenda below\nLocation: Jitsi\nVideo: No video posted online\nAgenda: https://github.com/lightning/bolts/issues/1055\nSpeaker 0: A few people won\u0026amp;rsquo;t be able to attend. I guess we can proceed. Okay. 
I\u0026amp;rsquo;m going to go down the list that you prepared for this. The first thing being dual funding. I don\u0026amp;rsquo;t know if we have any one involved here at this point.\nSpeaker 1: All right.\nSpeaker 0: Last time, both of them were moving closer towards interop. …"},{"uri":"/categories/meeting/","title":"meeting","content":""},{"uri":"/speakers/antoine-poinsot/","title":"Antoine Poinsot","content":""},{"uri":"/tags/fee-management/","title":"Fee Management","content":""},{"uri":"/bitcoin-review-podcast/op_vault-for-bitcoin-covenants-panel/","title":"OP_VAULT for Bitcoin Covenants Panel","content":"Speaker 0: 00:00:45\nToday we have an absolute all-star panel here. We\u0026amp;rsquo;re going to be talking about the Bitcoin Op Vault. It\u0026amp;rsquo;s a new proposal by James. And you know, like any new proposals to Bitcoin, there is a lot to go over. And It\u0026amp;rsquo;s a very big, interesting topic. So today we have Rindell.\nSpeaker 2: 00:01:05\nHey, good morning. Yeah, I\u0026amp;rsquo;m here talking about vaults, Bitcoin developer, and I work a lot on multi-sig and vaults that don\u0026amp;rsquo;t use covenants. So really …"},{"uri":"/bitcoin-explained/inscriptions/","title":"Inscriptions","content":"Introduction Aaron Van Wirdum: 00:00:20\nLive from Utrecht, this is Bitcoin Explained. Sjors, we\u0026amp;rsquo;re gonna lean into some Twitter controversy today.\nSjors Provoost: 00:00:27\nExactly, we have to jump on the bandwagon and make sure that we talk about what other people talk about.\nAaron Van Wirdum: 00:00:34\nLove it! Yeah, actually I got this request from a colleague of mine, Brandon Green, and since I am his humble servant, I had no other choice but to make an episode about this topic, which is …"},{"uri":"/speakers/casey-rodarmor/","title":"Casey Rodarmor","content":""},{"uri":"/tags/privacy/","title":"privacy","content":""},{"uri":"/speakers/sergi-delgado/","title":"Sergi Delgado","content":""},{"uri":"/chaincode-labs/chaincode-podcast/watchtowers-lightning-privacy/","title":"Watchtowers, Lightning Privacy","content":"Speaker 0: 00:00:00\nI came into Bitcoin because I like peer-to-peer. I discover more things that I love by doing so. And again, these are not feelings, but they are feelings related to research. Sorry, failing time.\nSpeaker 1: 00:00:12\nWe\u0026amp;rsquo;re going to make sure our editor takes out The love word. We are back in the studio. We are.\nSpeaker 2: 00:00:24\nAnd today we\u0026amp;rsquo;re going to be joined by Sergi Delgado.\nSpeaker 1: 00:00:27\nYeah, we\u0026amp;rsquo;re going to talk about Watchtowers. We\u0026amp;rsquo;re …"},{"uri":"/stephan-livera-podcast/2023-02-02-casey-rodarmor/","title":"What are Ordinals?","content":"podcast: https://stephanlivera.com/episode/456/\nCasey, welcome to the show.\nCasey Rodarmor – 00:03:18:\nThank you very much. It’s a real pleasure. I am a long-time listener of the pod. It’s on my short list of Bitcoin podcasts. So it’s really surreal. You’ve been whispering into my ear as I go to the gym and I do whatever for a long time now. So it’s very surreal to actually be talking to you in person or virtually.\nStephan Livera – 00:03:40:\nYeah, of course. Well, that’s great. Well, thank you. 
…"},{"uri":"/speakers/anant-tapadia/","title":"Anant Tapadia","content":""},{"uri":"/tags/scripts-addresses/","title":"Scripts and Addresses","content":""},{"uri":"/stephan-livera-podcast/slp455-anant-tapadia-single-sig-or-multi-sig/","title":"SLP455 Anant Tapadia - Single Sig or Multi Sig?","content":"Stephan 00:00:00:\nAnant, welcome back to the show.\nAnant 00:02:38:\nHey Stephan, thanks for having me again.\nStephan 00:02:40:\nSo there\u0026amp;rsquo;s been a lot going on. I think the conversation around learning to self custody is always an important one. It\u0026amp;rsquo;s always one that\u0026amp;rsquo;s very fresh on my mind as well. And so we\u0026amp;rsquo;re seeing a lot of discussion. And I think recently, of course, there was the news about Luke Dash-jr losing his coins, I think, I don\u0026amp;rsquo;t know exactly how many, but …"},{"uri":"/lightning-specification/2023-01-30-specification-call/","title":"Lightning Specification Meeting - Agenda 1053","content":"Name: Lightning specification call\nTopic: Agenda below\nLocation: Jitsi\nVideo: No video posted online\nAgenda: https://github.com/lightning/bolts/issues/1053\nThe conversation has been anonymized by default to protect the identities of the participants. Participants that wish to be attributed are welcome to propose changes to the transcript.\nChannel Pruning Speaker 2: To be honest, I don\u0026amp;rsquo;t think anything has made a lot of progress since the last spec meeting, so I don\u0026amp;rsquo;t think we should …"},{"uri":"/bitcointranscripts/","title":"Bitcointranscripts","content":""},{"uri":"/bitcointranscripts/stephan-livera-podcast/opvault-a-new-way-to-hodl/","title":"OP_Vault - A New Way to HODL?","content":"Stephan:\nJames, welcome back to the show.\nJames 00:02:46:\nHey, Stephan, it\u0026amp;rsquo;s great to be back. I think it\u0026amp;rsquo;s been almost four years.\nStephan 00:02:50:\nDamn. Yeah, well, in terms of on the show, yeah, actually in person, you know, a couple of times, some of the conferences and things. But I know you\u0026amp;rsquo;ve got this awesome proposal out, and it looks pretty interesting to me, and I\u0026amp;rsquo;m sure SLP listeners will be very interested to hear more about it. So do you want to just set the …"},{"uri":"/stephan-livera-podcast/2023-01-23-antoine-poinsot-and-salvatore-ingala/","title":"Bitcoin Miniscript and what it enables","content":"podcast: https://stephanlivera.com/episode/452/\nAntoine and Salvatori, welcome to the show.\nAntoine :\nThanks for having us.\nSalvatore :\nThank you, Stephan.\nStephan :\nSo I know you guys are both building out some stuff in Miniscript. Obviously, software and hardware. And I know this is an interesting area, there’s been a bit of discussion about it, but there’s also been some critique about it as well. So I’d be interested to get into this with you guys, but maybe first if we could start. What was …"},{"uri":"/stephan-livera-podcast/2023-01-18-josibake/","title":"Bitcoin Developer Education Overview","content":"podcast: https://stephanlivera.com/episode/450/\nJosi, welcome to the show.\nJosibake – 00:02:32:\nThanks for having me.\nStephan- 00:02:33:\nSo, Josi, I know you were keen to chat about I know you’ve been doing a bit of work in the bitcoin development world, doing some mentoring as well, and keen to chat a bit about the process of Bitcoin developer education. Why is it important, all of that. But let’s first start with a little bit about you. 
Can you tell us a little bit of your journey getting into …"},{"uri":"/stephan-livera-podcast/2023-01-14-james-obeirne-a-new-way-to-hodl/","title":"A New Way to HODL?","content":"podcast: https://stephanlivera.com/episode/449/\nJames, welcome back to the show.\nJames – 00:02:46:\nHey, Stephan, it’s great to be back. I think it’s been almost four years.\nStephan – 00:02:50:\nDamn. Yeah, well, in terms of on the show, yeah, actually in person, you know, a couple of times, some of the conferences and things. But I know you’ve got this awesome proposal out, and it looks pretty interesting to me, and I’m sure SLP listeners will be very interested to hear more about it. So do you …"},{"uri":"/bitcoin-explained/bitcoin-core-v24-bug/","title":"Bitcoin Core 24.0 Bug","content":"Introduction Aaron van Wirdum: 00:00:20\nLive from Utrecht, this is Bitcoin Explained. Sjors, welcome back. I saw you were promoting your book everywhere in the world over the past couple of weeks. Where did you go?\nSjors Provoost: 00:00:31\nAbsolutely. I went to a place called New York, a place called Nashville and a place called Austin and those are all in the United States.\nAaron van Wirdum: 00:00:39\nThat sounds very exotic. And you were promoting your book. Which book is this yours? Do you …"},{"uri":"/tags/adaptor-signatures/","title":"Adaptor signatures","content":""},{"uri":"/chaincode-labs/chaincode-podcast/nesting-roast-half-aggregation-adaptor-signatures/","title":"Nesting, ROAST, Half-Aggregation, Adaptor Signatures","content":"Speaker 1: 00:00:00\nJust as a warning, so don\u0026amp;rsquo;t do this at home.\nSpeaker 0: 00:00:01\nYes, of course. Still have your friendly neighborhood cryptographer have a look at it.\nSpeaker 2: 00:00:16\nThis is the second half of the conversation with Tim and Peter. If you have not listened to the first half, I\u0026amp;rsquo;d suggest going back and listening to that episode. We cover all sorts of fun things, including when to roll your own cryptography, why we prefer Schnorr signatures over ECDSA, Schnorr …"},{"uri":"/tags/cryptography/","title":"cryptography","content":""},{"uri":"/chaincode-labs/chaincode-podcast/schnorr-musig-frost-and-more/","title":"Schnorr, MuSig, FROST and more","content":"Introduction Mark Erhart: 00:00:00\nThis goes to like this idea of like really only using the blockchain to settle disagreements. Like as long as everyone agrees with all we have to say to the blockchain, it\u0026amp;rsquo;s like, yeah, you don\u0026amp;rsquo;t really need to know what the rules were. Hey Jonas, good to be back.\nAdam Jonas: 00:00:19\nSo while you were out being the Bitcoin ambassador that we depended on you to be, I was here stuck in the cold talking to Tim and Pieter and I thought it was a good …"},{"uri":"/speakers/adam-gibson/","title":"Adam Gibson","content":""},{"uri":"/bitcoinplusplus/onchain-privacy/coinjoin-done-right-the-onchain-offchain-mix-and-anti-sybil-with-riddles/","title":"Coinjoin Done Right: the onchain offchain mix (and anti-Sybil with RIDDLE)","content":"NOTE: Slides of the talk can be found here\nIntroduction My name\u0026amp;rsquo;s Adam Gibson, also known as waxwing on the internet. Though the title might suggest that I have the solution to the problem of implementing CoinJoin, it\u0026amp;rsquo;s not exactly true. It\u0026amp;rsquo;s a clickbait. 
I haven\u0026amp;rsquo;t, but I suppose what I can say in a more serious way is that I\u0026amp;rsquo;m proposing what I consider to be a meaningfully different alternative way of looking at the concept of CoinJoin and therefore at the concept …"},{"uri":"/chaincode-labs/chaincode-podcast/channel-jamming-on-the-lightning-network/","title":"Channel Jamming on the Lightning Network","content":"Clara Shikhelman: 00:00:00\nWe identify the problem, we identify what would be a good solution, and then we go over the tools that are available for us over the Lightning Network.\nAdam Jonas: 00:00:15\nHello, Murch.\nMark Erhardt: 00:00:16\nHi, Jonas.\nAdam Jonas: 00:00:16\nBack in the saddle again. We are talking to Sergey and Clara today about their recent work on Jamming.\nMark Erhardt: 00:00:23\nThey have a paper forthcoming.\nAdam Jonas: 00:00:25\nWe\u0026amp;rsquo;ve seen some recent attacks on Lightning. …"},{"uri":"/speakers/clara-shikhelman/","title":"Clara Shikhelman","content":""},{"uri":"/tags/research/","title":"research","content":""},{"uri":"/speakers/sergei-tikhomirov/","title":"Sergei Tikhomirov","content":""},{"uri":"/tags/watchtowers/","title":"Watchtowers","content":""},{"uri":"/speakers/dhruv/","title":"Dhruv","content":""},{"uri":"/stephan-livera-podcast/2022-11-13-dhruv-pieter-wuille-and-tim-ruffing/","title":"v2 P2P Transport Protocol for Bitcoin (BIP324)","content":"podcast: https://stephanlivera.com/episode/433/\nStephan Livera – 00:03:20:\nGentlemen, welcome to the show.\nDhruv – 00:03:22:\nHello.\nTim Ruffing – 00:03:23:\nHi.\nStephan Livera – 00:03:24:\nYeah, so thanks, guys, for joining me and interested to chat about what you’re working on and especially what’s going on with P2P transport, a v2 P2P transport protocol for bitcoin core.\nDhruv – 00:03:36:\nBitcoin?\nStephan Livera – 00:03:37:\nYeah, for a course for bitcoin. So I think, Pieter and Tim, I think …"},{"uri":"/adopting-bitcoin/","title":"Adopting Bitcoin","content":" Adopting Bitcoin 2021 Adopting Bitcoin 2022 "},{"uri":"/adopting-bitcoin/2022/","title":"Adopting Bitcoin 2022","content":" How to get started contributing sustainably to Bitcoin Core Nov 11, 2022 Jon Atack, Stephan Livera Bitcoin core "},{"uri":"/adopting-bitcoin/2022/how-to-get-started-contributing-sustainably-to-bitcoin-core/","title":"How to get started contributing sustainably to Bitcoin Core","content":"Introduction Host: 00:00:00\nNow we are going to have a panel conversation; a kind of fireside chat between Stephan Livera and Jon Atack and the name is how to get started contributing sustainably to Bitcoin Core so if you are a dev and are looking to start to contribute to Bitcoin Core, this is your talk.\nStephan: 00:00:32\nOkay, so thank you for having us and thank you for joining us. Joining me today is Jon Atack. He is a Bitcoin Core developer, contributor. He started about two or three years …"},{"uri":"/speakers/jon-atack/","title":"Jon Atack","content":""},{"uri":"/chaincode-labs/chaincode-podcast/the-bitcoin-core-wallet-and-wrangling-bitcoin-data/","title":"The Bitcoin Core wallet and wrangling bitcoin data","content":"Introduction Adam Jonas: 00:00:00\nHey Murch!\nMark Erhardt: 00:00:01\nYo, what\u0026amp;rsquo;s up?\nAdam Jonas: 00:00:02\nWe are back, as promised. We\u0026amp;rsquo;re really following through this time. 
We have Josie Baker in the office this week, and we\u0026amp;rsquo;re going to hear about what he\u0026amp;rsquo;s up to.\nMark Erhardt: 00:00:11\nYeah, we\u0026amp;rsquo;ve been working with him on some mempool data stuff, and I\u0026amp;rsquo;ve also worked with him closely on some Bitcoin Core wallet privacy improvements and that\u0026amp;rsquo;s fun stuff. …"},{"uri":"/speakers/bob-mcelrath/","title":"Bob McElrath","content":""},{"uri":"/tabconf/2022/2022-10-15-braidpool/","title":"Braidpool","content":"Introduction I am going to talk today about Braidpool which is a decentralized mining pool. I hope some of you were in the mining panel earlier today in particular p2pool which this is a successor to. I gave a talk a long time ago about a directed acyclic graph blockchain.\nBraidpool Braidpool is a proposal for a decentralized mining pool which uses a merge-mined DAG with Nakamoto-like consensus to track shares. This was the most straightforward way I could find to apply the ideas of Nakamoto …"},{"uri":"/tabconf/2022/2022-10-15-segwit-vbytes-misconceptions/","title":"Misconceptions about segwit vbytes","content":"Weighing transactions: The witness discount You\u0026amp;rsquo;ve already heard from someone that this presentation will be more interactive. I probably won\u0026amp;rsquo;t get through all of my material. If you have questions during the talk, please feel free to raise your hand and we can cover it right away. I\u0026amp;rsquo;m going to try to walk you through a transaction serialization for both a non-segwit transaction and a segwit transaction. By the end of the talk, I hope you understand how the transaction weight …"},{"uri":"/tabconf/2022/2022-10-15-silent-payments/","title":"Silent Payments and Alternatives","content":"Introduction I will talk about silent payments but also in general the design space around what kind of constructs you can have to pay people in a non-interactive way. In the bitcoin world, there are a couple common ways of paying someone. Making a payment is such a basic thing. \u0026amp;hellip; The alternative we have is that you can generate a single address and put it in your twitter profile and everyone can pay you, but that\u0026amp;rsquo;s not private. So either you have this interactivity or some loss of …"},{"uri":"/tabconf/","title":"TABConf","content":" TABConf 2021 TABConf 2022 "},{"uri":"/tabconf/2022/","title":"TABConf 2022","content":" Braidpool Oct 15, 2022 Bob McElrath Mining How to become Signet Rich: Learn about Bitcoin Signet and Zebnet Oct 13, 2022 Richard Safier Signet Lightning Lnd Lightning is Broken AF (But We Can Fix It) Oct 14, 2022 Matt Corallo Lightning Misconceptions about segwit vbytes Oct 15, 2022 Mark Erhardt Segwit Fee management Provably bug-free BIPs and implementations using hac-spec Oct 14, 2022 Jonas Nick Cryptography ROAST: Robust asynchronous Schnorr threshold signatures Oct 14, 2022 Tim Ruffing …"},{"uri":"/tabconf/2022/lightning-is-broken-af-but-we-can-fix-it/","title":"Lightning is Broken AF (But We Can Fix It)","content":"Introduction Thank you. Yeah, so, as the title says, for those of you who work in lightning or have been around lightning for a long time, hopefully you will recognize basically all the issues I\u0026amp;rsquo;ll go through today. I\u0026amp;rsquo;m going to go through a very long list of things. But I\u0026amp;rsquo;m going to try to go through most of the kind of, at least my understanding of the current thinking of solutions for a lot of these issues. 
So hopefully that most people will at least have a chance to learn …"},{"uri":"/tabconf/2022/2022-10-14-hac-spec/","title":"Provably bug-free BIPs and implementations using hac-spec","content":"https://nickler.ninja/\nAlright. Strong crowd here, I can see. That\u0026amp;rsquo;s very nice.\nIntroduction I am Jonas and I will talk about provably bug-free BIPs and implementations. I don\u0026amp;rsquo;t have such a BIP nor such an implementation but if you lower your time-preference enough this could eventually be true at some point. This presentation is only 15 minutes so raise your hand if you have questions.\nJust to set the stage. Specifications should be free of bugs, we want them easy to implement and …"},{"uri":"/tabconf/2022/2022-10-14-roast/","title":"ROAST: Robust asynchronous Schnorr threshold signatures","content":"paper: https://ia.cr/2022/550\nslides: https://slides.com/real-or-random/roast-tabconf22/\nHey. Hello. My name is Tim and I work at Blockstream. This is some academic work in joint with some of my coworkers.\nSchnorr signatures in Bitcoin We recently got support for Schnorr signatures in bitcoin, introduced as part of bip340 which was activated as part of the taproot soft-fork. There are three main reasons why we want Schnorr signatures and prefer them over ECDSA bitcoin signatures which can still …"},{"uri":"/tabconf/2022/weighing-transactions-the-witness-discount/","title":"Weighing Transactions, The Witness Discount","content":"In this talk Mark Erhardt: 0:00:25\nI\u0026amp;rsquo;m going to try to walk with you through a transaction serialization, the Bitcoin transaction serialization. And we\u0026amp;rsquo;ll try to understand a non-SegWit transaction and then a SegWit transaction to compare it to. And I hope by the end of this talk, you understand how the transaction weight is calculated and how the witness discount works. And if we get to it, we may take a look at a few different output types in the end as well.\nTransaction components …"},{"uri":"/tabconf/2022/how-to-become-signet-rich-learn-about-bitcoin-signet-and-zebnet/","title":"How to become Signet Rich: Learn about Bitcoin Signet and Zebnet","content":"Richard: So today we are going to talk about Signet Lightning, PlebNet Playground, which was the first evolution of me playing with custom Signets, and eventually ZebNet Adventures. This is how you can become Signet-rich.\nFirst, we should probably do an overview of the various network types. Everybody knows about Mainnet. This is what we use day to day, but there\u0026amp;rsquo;s a few other testing networks. RegTest, or SimNet, if you are running BTCD, is a local unit testing framework. And this is …"},{"uri":"/tags/lnd/","title":"lnd","content":""},{"uri":"/speakers/richard-safier/","title":"Richard Safier","content":""},{"uri":"/bitcoin-core-dev-tech/2022-10/","title":"Bitcoin Core Dev Tech 2022","content":" BIP324 - Version 2 of p2p encrypted transport protocol Oct 10, 2022 P2p Bitcoin core Bitcoin Core and GitHub Oct 11, 2022 Bitcoin core Fee Market Oct 11, 2022 Fee management Bitcoin core FROST Oct 11, 2022 Threshold signature High-assurance cryptography specifications (hac-spec) Oct 11, 2022 Cryptography Libsecp256k1 Maintainers Meeting Oct 12, 2022 Libsecp256k1 Misc Oct 10, 2022 P2p Bitcoin core Package Relay BIP, implementation, V3, and package RBF proposals Oct 11, 2022 Package relay Bitcoin …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-12-libsecp256k1/","title":"Libsecp256k1 Maintainers Meeting","content":"Q: Why C89? 
When I asked you this question a few years ago, I think you said gmaxwell.\nA: There are a number of embedded devices that only support C89 and it\u0026amp;rsquo;d be good to support those devices. That was the answer back then at least.\nQ: Is it a large cost to keep doing C89?\nA: The only cost is for the context stuff we want to make threadlocal. The CPUid or the x86-specific things. These could be optional. If you really want to get into this topic, then perhaps later. It makes sense to …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-12-research-wishlist/","title":"Research wishlist","content":"https://docs.google.com/document/d/1oRCeDzY3zH2ZY-BUYIVfJ1GMkvLlqKHWCFdtS62QWAo/edit\nIntroduction In spirit of the conversation happening today earlier, I\u0026amp;rsquo;ll give some motivation. In general there is a disconnect between academic researchers and people who work in open-source software. It\u0026amp;rsquo;s a pity because these two groups are interested in bitcoin, they love difficult questions and working on them, and somehow it seems like the choice of questions and spirit of work is sometimes not …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-12-merging/","title":"Strategies for getting stuff merged","content":"Introduction I wanted to talk about things that have been leaking out over other conversations because sometimes people get frustrated that their stuff doesn\u0026amp;rsquo;t get merged. This is not a new problem. It\u0026amp;rsquo;s an issue that has been going on for a long time. It can be frustrating. I don\u0026amp;rsquo;t have the answer. This is going to be more discussion based and I\u0026amp;rsquo;ve asked a few folks to talk about strategies that have worked for them. Hopefully this will lead to a discussion about copying …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-github/","title":"Bitcoin Core and GitHub","content":"Bitcoin Core and GitHub\nI think at this point it\u0026amp;rsquo;s quite clear that it\u0026amp;rsquo;s not necessarily a \u0026amp;ldquo;if\u0026amp;rdquo; we get off github, but a when and how. The question would be, how would we do that? This isn\u0026amp;rsquo;t really a presentation. It\u0026amp;rsquo;s more of a discussion. There\u0026amp;rsquo;s a few things to keep in mind, like the bitcoin-gh-meta repo, which captures all the issues, comments and pull requests. It\u0026amp;rsquo;s quite good. The ability to reconstruct what\u0026amp;rsquo;s inside of here on another …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-fee-market/","title":"Fee Market","content":"Fee market\nThere are two times we have had sustained fees: late 2017 and early 2021. In late 2017 we saw lots of things break because people hadn\u0026amp;rsquo;t written software to deal with variable fees or anything. I don\u0026amp;rsquo;t know if that was as big of a problem in 2021. I do worry that this will start to become a thing. If you have no variable fee market, and you can just throw in 1 sat/vbyte for several years then it will just work until it doesn\u0026amp;rsquo;t. So right now developers don\u0026amp;rsquo;t …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-frost/","title":"FROST","content":"Introduction I am going to be going over the FROST implementation. I also have an early draft of the BIP. I am going to be focusing on the differences between the paper and the RFC and the overall scheme. 
This is meant to be an open discussion so feel free to jump in.\nDistributed key generation Maybe one good place to start is to look at the example file in the PR. It shows how the flow works with the API. The protocol we start off with generating a keypair. We\u0026amp;rsquo;re going to use the secret …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-hac-spec/","title":"High-assurance cryptography specifications (hac-spec)","content":"See https://btctranscripts.com/tabconf/2022/2022-10-14-hac-spec/ instead for a full transcript of a similar talk.\nFar far future In the far far future, we could get rid of this weird paper notation scheme and do a security proof directly for the specification. Presumably that is much harder than anything else in my slides. But this would rule out a lot of bugs.\nQ: But the security proof itself is written in a paper?\nA: The security proof itself would be written in hac-spec. And your simulators. …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-package-relay/","title":"Package Relay BIP, implementation, V3, and package RBF proposals","content":"Notes on Package Relay BIP, implementation, V3, and package RBF proposals from Core Dev in Atlanta.\nAlso at https://gist.github.com/glozow/8469dc9c3a003c7046033a92dd504329.\nAncestor Package Relay BIP BIP updated to be receiver-initiated ancestor packages only. Sender-initiated vs receiver-initiated package relay. Receiver-intiated package relay enables a node to ask for more information when they suspect they are missing something (i.e. to resolve orphans). Sender-initiated package relay should, …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-11-stratum-v2/","title":"Stratum V2","content":"Introduction There was an announcement earlier this morning that announced the open-source project implementing Stratum v2 is ready for testing now. Spiral has been contributing to this project for a few years. There\u0026amp;rsquo;s a few other companies funding it as well. It\u0026amp;rsquo;s finally ready for testing.\nHistory About 4 years ago or so, Matt proposed BetterHash which focused on enabling miners to be able to do their own transaction selection. There was an independent effort from Braaains that …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-10-p2p-encryption/","title":"BIP324 - Version 2 of p2p encrypted transport protocol","content":"Previous talks https://btctranscripts.com/scalingbitcoin/milan-2016/bip151-peer-encryption/\nhttps://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/\nhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06-07-p2p-encryption/\nhttps://btctranscripts.com/breaking-bitcoin/2019/p2p-encryption/\nIntroduction and motivation Can we turn down the lights? \u0026amp;ldquo;Going dark\u0026amp;rdquo; is a nice theme for the talk. I also have dark coffee. Okay.\nWe\u0026amp;rsquo;re going to talk a little bit …"},{"uri":"/bitcoin-core-dev-tech/2022-10/2022-10-10-misc/","title":"Misc","content":"Web of Trust Some of the public key server operators interpreted GDPR to mean that they can\u0026amp;rsquo;t operate public key infrastructure anymore. There needs to be another solution for p2p distribution of keys and Web-of-Trust.\n\u0026amp;lt;bitcoin-otc.com\u0026amp;gt; continues to be the longest operating PGP web-of-trust using public key infrastructure. Rumplepay might be able to bootstrap a web-of-trust over time.\nStealth addresses and silent payments Here\u0026amp;rsquo;s something controversial. 
Say you keep an …"},{"uri":"/bitcoin-explained/bitcoin-core-v24/","title":"Bitcoin Core 24.0","content":"Intro Aaron van Wirdum: 00:00:20\nLive from Utrecht, this is Bitcoin\nSjors Provoost: 00:00:23\nExplained.\nAaron van Wirdum: 00:00:24\nHey Sjors.\nSjors Provoost: 00:00:25\nWhat\u0026amp;rsquo;s up?\nAaron van Wirdum: 00:00:26\nI\u0026amp;rsquo;m good.\nSjors Provoost: 00:00:27\nHow do you like the weather?\nAaron van Wirdum: 00:00:29\nIt was too hot all summer and then it was nice for about a week and now we\u0026amp;rsquo;re back to too cold.\nSjors Provoost: 00:00:36\nThis is gonna be a great winter.\nAaron van Wirdum: 00:00:39 …"},{"uri":"/chaincode-labs/chaincode-podcast/lightning-updates-stratum-v2/","title":"Lightning updates / Stratum V2","content":"Murch: 00:00:00\nHey Jonas.\nJonas: 00:00:00\nHey Murch.\nMurch: 00:00:02\nWhat are we up to?\nJonas: 00:00:03\nWe are back.\nMurch: 00:00:04\nWho are we recording today?\nJonas: 00:00:05\nWe have Steve Lee in the office this week.\nMurch: 00:00:04\nHe is the head at Spiral. He\u0026amp;rsquo;s done a lot of open source development. He talks to a bunch of people. He\u0026amp;rsquo;s also the PM for the LDK team. So we\u0026amp;rsquo;re going to talk Lightning, the challenges that are open with Lightning, and maybe a little bit about …"},{"uri":"/speakers/steve-lee/","title":"Steve Lee","content":""},{"uri":"/tags/zero-conf-channels/","title":"Zero-conf channels","content":""},{"uri":"/wasabi/research-club/checking-bitcoin-balances-privately/","title":"Checking Bitcoin balances privately","content":"Speaker 0: 00:00:00\nSo hello and welcome to the Wasabi Wallet Research Club. Today we are speaking from with Samir from Spiral, which is the title of a fancy cryptography paper of homomorphic value encryption or homomorphic encryption and private information retrieval. The gist of this cryptomagic is that a client can request data from a server, but the server does not know which data was requested. And there are different variants of the CryptoMagick for different use cases. And there are …"},{"uri":"/categories/club/","title":"club","content":""},{"uri":"/speakers/samir-menon/","title":"Samir Menon","content":""},{"uri":"/wasabi/research-club/","title":"Wasabi Research Club","content":"https://github.com/zkSNACKs/WasabiResearchClub\nAnonymous Credentials Apr 14, 2020 Jonas Nick Research Privacy enhancements Cryptography Bls signatures Checking Bitcoin balances privately Sep 27, 2022 Samir Menon Research Privacy enhancements Cryptography CJDNS Mar 16, 2021 Caleb DeLisle, Adam Ficsor, Lucas Ontivero Research Anonymity networks CoinShuffle Jan 20, 2020 Aviv Milner, Adam Fiscor, Lucas Ontivero, Max Hillebrand Research Coinjoin CoinShuffle\u0026amp;#43;\u0026amp;#43; (Part 1) Feb 04, 2020 Tim …"},{"uri":"/wasabi/","title":"Wasabi Wallet","content":"https://github.com/zkSNACKs/WasabiResearchClub\nWasabi Research Club Wasabikas Bitcoin Privacy Podcast "},{"uri":"/tags/bip32/","title":"HD key generation","content":""},{"uri":"/bitcoin-explained/hd-wallets-mnemonic-codes-and-seedqr/","title":"HD Wallets, Mnemonic codes and SeedQR","content":"Aaron van Wirdum: 00:00:19\nLive from Utrecht, this is Bitcoin\u0026amp;hellip;\nSjors Provoost: 00:00:21\nExplained!\n(Ad removed)\nAaron van Wirdum: 00:01:41\nAll right, let\u0026amp;rsquo;s move on. 
Sjors, today, you\u0026amp;rsquo;ve got a new hobby.\nSjors Provoost: 00:01:46\nI have a new hobby.\nAaron van Wirdum: 00:01:47\nWhat am I looking at here?\nSjors Provoost: 00:01:48\nYou\u0026amp;rsquo;re looking at an anvil.\nAaron van Wirdum: 00:01:50\nYou\u0026amp;rsquo;ve got an actual miniature anvil on your desk right here.\nSjors Provoost: 00:01:53 …"},{"uri":"/london-bitcoin-devs/","title":"London Bitcoin Devs","content":" Better Hashing through BetterHash Feb 05, 2019 Matt Corallo Mining Bitcoin Core and hardware wallets Sep 19, 2018 Sjors Provoost Hardware wallet Bitcoin core Bitcoin Core V0.17 John Newbery Research Bitcoin Full Nodes Jul 23, 2018 John Light Soft fork activation Current state of P2P research in Bitcoin /Erlay Nov 13, 2019 Gleb Naumenko Research P2 p Grokking Bitcoin Apr 29, 2020 Kalle Rosenbaum Hardware Wallets (History of Attacks) May 01, 2019 Stepan Snigirev Hardware wallet Security problems …"},{"uri":"/categories/meetup/","title":"meetup","content":""},{"uri":"/london-bitcoin-devs/2022-08-11-tim-ruffing-musig2/","title":"MuSig2","content":"Reading list: https://gist.github.com/michaelfolkson/5bfffa71a93426b57d518b09ebd0998c\nIntroduction Michael Folkson (MF): This is a Socratic Seminar, we are going to be discussing MuSig2 and we’ll move onto adjacent topics such as FROST and libsecp256k1 later. We have a few people on the call including Tim (Ruffing). If you want to do short intros, you don’t have to, for the people on the call.\nTim Ruffing (TR): Hi. Thanks for having me. I am Tim Ruffing, my work is at Blockstream. I am an author …"},{"uri":"/stephan-livera-podcast/2022-08-11-gloria-zhao/","title":"What Do Bitcoin Core Maintainers Do?","content":"podcast: https://stephanlivera.com/episode/404/\nStephan Livera: Gloria, welcome back to the show.\nGloria Zhao: Thank you for having me again.\nStephan Livera: Yeah, always great to chat with you, Gloria. I know you’ve got a lot of things you’re working on in the space and you recently took on a role as a maintainer as well, so we’ll obviously get into that as well as all your work around the mempool. So do you want to just start with — and we’re going to keep this accessible for beginners — so …"},{"uri":"/speakers/chelsea-komlo/","title":"Chelsea Komlo","content":""},{"uri":"/speakers/elizabeth-crites/","title":"Elizabeth Crites","content":""},{"uri":"/misc/2022-08-07-komlo-crites-frost/","title":"FROST","content":"Location: Zcon 3\nFROST paper: https://eprint.iacr.org/2020/852.pdf\nSydney Socratic on FROST: https://btctranscripts.com/sydney-bitcoin-meetup/2022-03-29-socratic-seminar/\nIntroduction Elizabeth Crites (EC): My name is Elizabeth Crites, I’m a postdoc at the University of Edinburgh.\nChelsea Komlo (CK): I’m Chelsea Komlo, I’m at the University of Waterloo and I’m also a Principal Researcher at the Zcash Foundation. Today we will be giving you research updates on FROST.\nWhat is FROST? (Chelsea …"},{"uri":"/misc/","title":"Misc","content":" Bitcoin Scaling Tradeoffs Apr 05, 2016 Adam Back Scalability Bitcoin Script: Past and Future Apr 08, 2020 John Newbery Scripts addresses Bitcoin Sidechains - Unchained Epicenter Feb 03, 2015 Adam Back, Greg Maxwell Sidechains Bulletproofs Feb 02, 2018 Andrew Poelstra Proof systems CFTC Bitcoin Brian O’Keefe, David Bailey, Rodrigo Buenaventura CTV BIP Review Workshop Jeremy Rubin Discreet Log Contracts Tadge Dryja Failures Of Secret Key Cryptography (2013) Daniel J. 
Bernstein Security problems …"},{"uri":"/stephan-livera-podcast/2022-08-02-jonas-nick-tim-ruffing/","title":"Half Signature Aggregation-What is it and how does it help scale Bitcoin?","content":"podcast: https://stephanlivera.com/episode/400/\nStephan Livera: Jonas and Tim, welcome back to the show. Great to chat with you guys again. And so we’re gonna chat about half signature aggregation and hear a little bit from you guys about what you’re working on and just get into what that means for Bitcoin. So Jonas, I know you probably will be best to give some background on some of this — I know you did a talk at Adopting Bitcoin in November of last year, so call it 7–8 months ago, talking …"},{"uri":"/bitcoin-design/learning-bitcoin-and-design/fedimint/","title":"FediMint","content":"Introduction Stephen DeLorme: 00:00:04\nAll right. Welcome, everybody. This is the Learning Bitcoin and Design Call. It\u0026amp;rsquo;s July 26, 2022. And our topic this week is FediMint. And we have Justin Moon here with us and several designers and product folks in the room here that are interested in learning about this. So I\u0026amp;rsquo;ve got a FigJam open here where we can toss images, ideas, text, links, whatever we want to consolidate during the call. And anyone on the call right now can find that in …"},{"uri":"/speakers/justin-moon/","title":"Justin Moon","content":""},{"uri":"/speakers/stephen-delorme/","title":"Stephen DeLorme","content":""},{"uri":"/bitcoin-explained/op_return/","title":"OP_RETURN","content":"Intro Aaron van Wirdum: 00:00:19\nLive from Utrecht, this is Bitcoin Explained. Hey Sjors.\nSjors Provoost: 00:00:24\nWhat\u0026amp;rsquo;s up?\nAaron van Wirdum: 00:00:25\nHow exciting. Two weeks ago you told me that maybe you would go to Bitcoin Amsterdam.\nSjors Provoost: 00:00:31\nYes.\nAaron van Wirdum: 00:00:31\nAnd now you\u0026amp;rsquo;re a speaker.\nSjors Provoost: 00:00:33\nI\u0026amp;rsquo;m a panelist, probably not a real speaker.\nAaron van Wirdum: 00:00:37\nThat counts that that\u0026amp;rsquo;s a speaker in my book Sjors.\nSjors …"},{"uri":"/misc/2022-07-14-tim-ruffing-roast/","title":"ROAST","content":"Location: Monash Cybersecurity Seminars\nROAST paper: https://eprint.iacr.org/2022/550.pdf\nROAST blog post: https://medium.com/blockstream/roast-robust-asynchronous-schnorr-threshold-signatures-ddda55a07d1b\nROAST in Python: https://github.com/robot-dreams/roast\nIntroduction (Amin Sakzad) Welcome everyone to Monash Cybersecurity Seminars. Today we will be having Tim Ruffing. Tim is an applied cryptographer in the research team of Blockstream, a Bitcoin and blockchain technology company. He …"},{"uri":"/tags/htlc/","title":"Hash Time Locked Contract (HTLC)","content":""},{"uri":"/tags/incentives/","title":"incentives","content":""},{"uri":"/speakers/jeremy-rubin/","title":"Jeremy Rubin","content":""},{"uri":"/speakers/jonathan-harvey-buschel/","title":"Jonathan Harvey Buschel","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/lightning-panel/","title":"Lightning Panel","content":"What is Lightning Jeremy Rubin: 00:00:07\nI was hoping that we can have our panelists, and maybe we\u0026amp;rsquo;ll start with Rene. Just introduce themselves quickly, and also answer the question, what is Lightning to you? And I mean that sort of in the long run. Lightning is a thing today, but what is the philosophical Lightning to you?\nRene Pickhardt: 00:00:26\nSo I\u0026amp;rsquo;m Rene Pickardt, a researcher and developer for the Lightning Network. 
And for me personally, the Lightning Network is the means to …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/","title":"MIT Bitcoin Expo 2022","content":" Bitcoin R\u0026amp;amp;D Panel May 07, 2022 Jeremy Rubin, Gloria Zhao, Andrew Poelstra Research Covenants Scripts addresses Lightning Panel Jul 05, 2022 Jeremy Rubin, Rene Pickhardt, Lisa Neigut, Jonathan Harvey Buschel Lightning Taproot Ptlc Htlc Long-Term Trust and Analog Computers May 07, 2022 Andrew Poelstra Codex32 Tradeoffs in Permissionless Systems Jul 05, 2022 Gloria Zhao Bitcoin core Transaction relay policy Incentives "},{"uri":"/speakers/rene-pickhardt/","title":"Rene Pickhardt","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2022/tradeoffs-in-permissionless-systems/","title":"Tradeoffs in Permissionless Systems","content":"Introduction Hello. I wanted to make a talk about what I work on because I consider it the, well, the area of code that I work in, because I considered it to be one of the most fascinating and definitely the most underrated part of Bitcoin that no one ever talks about. And the reason is I kind of see it as where one of the most important ideological goals of Bitcoin translates into technical challenges, which also happens to be very, very interesting. So I\u0026amp;rsquo;m going to talk about the …"},{"uri":"/bitcoin-explained/silent-payments/","title":"Silent Payments","content":"Aaron: 00:00:20\nLive from Utrecht, this is Bitcoin Explained. Hey, Sjors.\nSjors: 00:00:23\nYo.\nAaron: 00:00:25\nHey, Ruben.\nRuben: 00:00:26\nHey.\nAaron: 00:00:27\nRuben is back. I think last time In Prague, we were all in Prague last week and there I introduced you as our resident funky second layer expert.\nRuben: 00:00:38\nThat is right.\nAaron: 00:00:39\nThis week we\u0026amp;rsquo;re going to talk about one of your new proposals, which is actually not a second layer proposal.\nRuben: 00:00:45\nTrue.\nAaron: …"},{"uri":"/speakers/alex-myers/","title":"Alex Myers","content":""},{"uri":"/bitcoinplusplus/2022/","title":"Bitcoin++ 2022","content":" Minisketch and Lightning gossip Jun 07, 2022 Alex Myers Lightning P2 p "},{"uri":"/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/","title":"Minisketch and Lightning gossip","content":"Location: Bitcoin++\nSlides: https://endothermic.dev/presentations/magical-minisketch\nRusty Russell on using Minisketch for Lightning gossip: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html\nMinisketch library: https://github.com/sipa/minisketch\nBitcoin Core PR review club on Minisketch (3 sessions):\nhttps://bitcoincore.reviews/minisketch-26\nhttps://bitcoincore.reviews/minisketch-26-2\nhttps://bitcoincore.reviews/minisketch\nIntroduction …"},{"uri":"/chaincode-labs/chaincode-podcast/tracepoints-and-monitoring-the-bitcoin-network/","title":"Tracepoints and monitoring the Bitcoin network","content":"Speaker 0: 00:00:00\nHey, Merch. What up? We are back in the studio.\nSpeaker 1: 00:00:02\nWho are we talking to today? We\u0026amp;rsquo;re talking to OXB10C.\nSpeaker 0: 00:00:06\nI know him as Timo. So we\u0026amp;rsquo;re going to call him Timo.\nSpeaker 1: 00:00:09\nOK, fine.\nSpeaker 0: 00:00:12\nIt doesn\u0026amp;rsquo;t quite roll off the tongue. Is there anything in particular that you\u0026amp;rsquo;re interested in learning from Timo today?\nSpeaker 1: 00:00:17\nYeah, I think we need to talk about Tether. 
Hope you enjoy, happy to be back.\nBech32 …"},{"uri":"/tags/minisketch/","title":"Minisketch","content":""},{"uri":"/bitcoin-explained/scaling-bitcoin-with-the-erlay-protocol/","title":"Scaling Bitcoin With The Erlay Protocol","content":"Preamble Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is the Van Wirdum Sjorsnado.\nSjors Provoost: 00:00:10\nHello again.\nAaron van Wirdum: 00:00:12\nWe\u0026amp;rsquo;ve done it again, Sjors.\nSjors Provoost: 00:00:14\nWe\u0026amp;rsquo;ve done it again.\nAaron van Wirdum: 00:00:15\nWe recorded the whole episode without actually recording it.\nSjors Provoost: 00:00:18\nYeah.\nAaron van Wirdum: 00:00:19\nSo we\u0026amp;rsquo;re going to do the whole thing over.\nSjors Provoost: 00:00:20\nIt\u0026amp;rsquo;s this thing where you press …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2021/2021-04-03-defending-against-99-attack/","title":"How much Security is too much Security? Defending against a 99.999% Attack","content":"Introduction Hi everyone, it\u0026amp;rsquo;s too bad that we can\u0026amp;rsquo;t be in person but I think by next year we\u0026amp;rsquo;ll be able to do this in person. It\u0026amp;rsquo;s been a long year without conferences. And I guess for maybe many people watching the MIT Bitcoin Expo last March, was the last time I saw a bunch of people in person before this whole thing. Thanks to Neha, that was a really good start off for the day. I\u0026amp;rsquo;m going to talk in some ways about the opposite side of things, saying, well, do we …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2021/","title":"Mit Bitcoin Expo 2021","content":" How much Security is too much Security? Defending against a 99.999% Attack Apr 03, 2021 Tadge Dryja Security Utreexo "},{"uri":"/categories/hackathon/","title":"hackathon","content":""},{"uri":"/lightning-hack-day/","title":"Lightning Hack Day","content":" A Quantitative Analysis of Security, Anonymity and Scalability for the Lightning Network May 09, 2020 Sergei Tikhomirov Research Lightning Security problems Privacy problems Excel In Lightning Oct 27, 2018 Pierre Rochard Lnd Lightning Backups May 03, 2020 Christian Decker Lightning Lightning Routing Mar 27, 2021 Alex Bosworth Lightning Routing RaspiBlitz full node Jun 21, 2020 Rootzoll, Openoms Watchtowers and BOLT 13 May 24, 2020 Sergi Delgado Segura Lightning C lightning Zap Desktop: A look …"},{"uri":"/lightning-hack-day/2021-03-27-alex-bosworth-lightning-routing/","title":"Lightning Routing","content":"Lightning Routing - Building a new economy\nLocation: Lightning Hack Day\nmedia: No video posted online\nSlides: https://www.dropbox.com/s/oo9o8ij62utvlfo/alex-bosworth-lightning-routing.pdf\nTranscript of presentation on Channel Management (2018): https://btctranscripts.com/chaincode-labs/chaincode-residency/2018-10-22-alex-bosworth-channel-management/\nTranscript by: Michael Folkson\nIntro With Lightning routing I think it is a very exciting topic because we’re doing something that no one has ever …"},{"uri":"/tags/routing/","title":"routing","content":""},{"uri":"/bitcoin-explained/explaining-rgb-tokens/","title":"Explaining RGB Tokens","content":"Intro Aaron van Wirdum: 00:01:45\nLive, from Utrecht, this is The Van Wirdum Sjorsnado. Hello.\nRuben Somsen: 00:01:48\nHello.\nSjors Provoost: 00:01:49\nHey, who\u0026amp;rsquo;s there?\nRuben Somsen: 00:01:51\nIt\u0026amp;rsquo;s another Dutch guy.\nSjors Provoost: 00:01:53\nWelcome to the show, Ruben.\nRuben Somsen: 00:01:54\nThank you.\nAaron van Wirdum: 00:01:56\nWelcome back Ruben. 
Ruben, did you start selling your tweets yet?\nRuben Somsen: 00:01:59\nI tried to sell one of my tweets to my sister but she said, \u0026amp;ldquo;What …"},{"uri":"/blockstream-webinars/","title":"Blockstream Webinars","content":" C-Lightning Questions Sep 04, 2019 Christian Decker Lightning Lnd C lightning Dual Funded Channels Mar 25, 2021 Lisa Neigut Dual funding C lightning Getting Started With C-Lightning Jul 31, 2019 Rusty Russell Lightning C lightning Simplicity: Next-Gen Smart Contracting Apr 08, 2020 Adam Back Simplicity "},{"uri":"/blockstream-webinars/dual-funded-channels/","title":"Dual Funded Channels","content":"Dual Funding channels explainer + demo Cool. Okay, thanks everyone for coming. I think we\u0026amp;rsquo;re going to go ahead and get started. It\u0026amp;rsquo;s right at 11.02 on my time, so that hopefully more people will show up, but for everyone who\u0026amp;rsquo;s here, I want to make the best use of all of your time. Okay, great. So for those of you who aren\u0026amp;rsquo;t really sure what dual funding is, maybe you\u0026amp;rsquo;ve heard people say that word before, but let me give you a kind of a quick overview.\nI guess, of …"},{"uri":"/categories/workshop/","title":"workshop","content":""},{"uri":"/stephan-livera-podcast/2021-03-24-craig-raw-bitcoin-multi-sig/","title":"Bitcoin Multi Sig With Sparrow Wallet","content":"podcast: https://stephanlivera.com/episode/262/\nStephan Livera:\nCraig welcome to the show.\nCraig Raw:\nHi there, Stephan! It’s great to be here.\nStephan Livera:\nSo Craig I’ve been seeing what you’re doing with Sparrow wallet and I thought it’s time to get this guy on the show. So can you, I mean, obviously I know you’re under a pseudonym, right? So don’t dox anything about yourself that you don’t want that you’re not comfortable to, but can you tell us a little bit about how you got into Bitcoin …"},{"uri":"/wasabi/research-club/discrete-log-contract-specification/","title":"Discrete Log Contract Specification","content":"General overview of Discrete Log Contracts (DLC). Speaker 0: 00:00:07\nSo, everybody, welcome to this episode of the Wasabi Wallet Research Experience. This is our weekly research call where we talk about all the crazy things happening in the Bitcoin rabbit hole. We can use them to improve the Bitcoin privacy for many users. And today we got the usual reckless crew on board, but also two special guests from short bits. We have Nadav Cohen and Ben DeCarmen. How are you both?\nSpeaker 1: 00:00:36 …"},{"uri":"/speakers/nadav-kohen/","title":"Nadav Kohen","content":""},{"uri":"/bitcoin-explained/explaining-segregated-witness/","title":"Explaining Segregated Witness","content":"Aaron: 00:01:46\nLive from Utrecht. This is The Van Wirdum Sjorsnado. Hello. Sjors, we\u0026amp;rsquo;re going to talk about a classic today.\nSjors: 00:01:52\nYeah. We\u0026amp;rsquo;re going to party like it\u0026amp;rsquo;s 2015.\nAaron: 00:01:55\nSegWit.\nSegregated Witness Sjors: 00:01:56\nThat\u0026amp;rsquo;s right.\nAaron: 00:01:57\nSegregated Witness, which was the previous soft fork, well, was the last soft fork. 
We\u0026amp;rsquo;re working towards a Taproot soft fork now.\nSjors: 00:02:06\nIt\u0026amp;rsquo;s the last soft fork we know of.\nAaron: …"},{"uri":"/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/","title":"How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blocks","content":"Luke Dashjr arguments against LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html\nT1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html\nF7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html\nTranscript by: Stephan Livera Edited by: Michael Folkson\nIntro …"},{"uri":"/stephan-livera-podcast/2021-03-17-luke-dashjr/","title":"How Bitcoin UASF Went Down, Taproot LOT=True, Speedy Trial, Small Blocks","content":"podcast: https://stephanlivera.com/episode/260/\nStephan Livera:\nLuke welcome to the show.\nLuke Dashjr:\nThanks.\nStephan Livera:\nSo, Luke for listeners who are unfamiliar, maybe you could just take a minute and just tell us a little bit about your background and how long you’ve been developing and contributing with Bitcoin core.\nLuke Dashjr:\nI first learned about Bitcoin back at the end of 2010, it was a new year’s party and I’ve been contributing since about a week later. So I recently got past …"},{"uri":"/speakers/luke-dashjr/","title":"Luke Dashjr","content":""},{"uri":"/speakers/adam-ficsor/","title":"Adam Ficsor","content":""},{"uri":"/speakers/caleb-delisle/","title":"Caleb DeLisle","content":""},{"uri":"/wasabi/research-club/cjdns/","title":"CJDNS","content":"Introduction. / BIP155. / Diverse, robust, resilient p2p networking. Lucas Ontivero: 00:00:00\nWelcome to a new Wasabi Research Experience meeting. This time we have a special guest. His name is Caleb. He is the creator of CJDNS project. Basically it\u0026amp;rsquo;s a network that can replace the internet, basically, of course, that is very ambitious. He will tell us better. Part of this project is now already supported by the BIP155 in Bitcoin and other libraries too. So this is an effort that part of …"},{"uri":"/speakers/lucas-ontivero/","title":"Lucas Ontivero","content":""},{"uri":"/bitcoin-explained/taproot-activation-speedy-trial/","title":"Taproot Activation with Speedy Trial","content":"Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html\nIntro Aaron van Wirdum (AvW): Live from Utrecht this is the van Wirdum Sjorsnado. Sjors, what is your pun of the week?\nSjors Provoost (SP): I actually asked you for a pun and then you said “Cut, re-edit. We are going to do it again.” I don’t have a pun this week.\nAvW: Puns are your thing.\nSP: We tried this LOT thing last time.\nAvW: Sjors, we are going to talk a lot this …"},{"uri":"/bitcoin-explained/hardware-wallet-integration-in-bitcoin-core-and-hwi/","title":"Hardware Wallet Integration in Bitcoin Core and HWI","content":"Aaron van Wirdum: 00:01:46\nLive from Utrecht this is The Van Wirdum Sjorsnado. Hello! Are you running the BIP8 True independent client yet?\nSjors Provoost: 00:01:56\nNegative. I did not even know there was one.\nAaron van Wirdum: 00:01:59\nOne has been launched, started. 
I don\u0026amp;rsquo;t think it\u0026amp;rsquo;s actually a client yet, a project has started.\nSjors Provoost: 00:02:05\nOkay, a project has started, it\u0026amp;rsquo;s not a binary or a code that you can compile.\nAaron van Wirdum: 00:02:09\nBut I did see you …"},{"uri":"/stephan-livera-podcast/2021-03-04-matt-corallo/","title":"Bitcoin Soft Fork Activation, Taproot, and Playing Chicken","content":"podcast: https://stephanlivera.com/episode/257/\nStephan Livera:\nMatt, welcome to the show.\nMatt Corallo:\nHey yeah, thanks for having me.\nStephan Livera:\nSo guys, obviously, listeners, you know, we’re just having to re-record this. We had basically a screw up the first time around, but we wanted to chat about Taproot and soft fork activation. So perhaps Matt, if we could just start from your side, let’s try to keep this accessible for listeners who maybe they are new. They’re trying to learn …"},{"uri":"/speakers/gleb-naumenko/","title":"Gleb Naumenko","content":""},{"uri":"/bitcoin-explained/taproot-activation-lockinontimeout/","title":"Taproot activation and LOT=true vs LOT=false","content":"BIP 8: https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki\nArguments for LOT=true and LOT=false (T1-T6 and F1-F6): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html\nAdditional argument for LOT=false (F7): https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html\nAaron van Wirdum article on LOT=true or LOT=false: …"},{"uri":"/stephan-livera-podcast/2021-02-26-gleb-naumenko/","title":"The Label, Bitcoin Dev \u0026 Consulting","content":"podcast: https://stephanlivera.com/episode/255/\nStephan Livera:\nGleb welcome back to the show.\nGleb Naumenko:\nHi, it’s good to be back\nStephan Livera:\nGlad you’ve been pretty busy with how you’ve started up a new Bitcoin development venture and you’re up to a few different things. Tell us what you’ve been working on lately.\nGleb Naumenko:\nYeah, last year it was like, I think it’s been maybe half a year or more since I came last time, I’ve been mostly working on actually lightning and related …"},{"uri":"/speakers/anthony-towns/","title":"Anthony Towns","content":""},{"uri":"/sydney-bitcoin-meetup/2021-02-23-socratic-seminar/","title":"Sydney Socratic Seminar","content":"Topic: Agenda in Google Doc below\nVideo: No video posted online\nGoogle Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.\nPoDLEs revisited (Lloyd Fournier) …"},{"uri":"/munich-meetup/","title":"Munich Meetup","content":" Stratum v2 Feb 21, 2021 Daniela Brozzoni Mining "},{"uri":"/munich-meetup/2021-02-21-daniela-brozzoni-stratumv2/","title":"Stratum v2","content":"Topic: Mining Basics and Stratum v2\nLocation: Bitcoin Munich\nMatt Corallo presentation on BetterHash: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/\nStratum v1 (BIP 310): https://github.com/bitcoin/bips/blob/master/bip-0310.mediawiki\nStratum v2: https://braiins.com/stratum-v2\nTranscript by: Michael Folkson\nIntro (Michael Ep) Hello and welcome to tonight’s Satoshi’s 21 seminar session hosted by the Bitcoin Munich meetup. 
We are always looking for good …"},{"uri":"/bitcoin-explained/explaining-bitcoin-addresses/","title":"Explaining Bitcoin Addresses","content":"Aaron van Wirdum: 00:01:45\nLive from Utrecht this is the Van Wirdum Sjorsnado. So the other day I wanted to send Bitcoin to someone, but I didn\u0026amp;rsquo;t.\nSjors Provoost: 00:01:52\nWhy? Shouldn\u0026amp;rsquo;t you hodl?\nAaron van Wirdum: 00:01:55\nI hodl all I can, but sometimes I need to eat, or I need to pay my rent, or I need to buy a new plant for my living room.\nSjors Provoost: 00:02:05\nYeah, let\u0026amp;rsquo;s do.\nAaron van Wirdum: 00:02:06\nSo the problem was, the person I wanted to send Bitcoin to, I …"},{"uri":"/tftc-podcast/","title":"TFTC Podcast","content":" Tales from the Crypt with Andrew Poelstra Jun 18, 2019 Andrew Poelstra Taproot Schnorr signatures Musig Miniscript UASFs, BIP 148, BIP 91 and Taproot Activation Feb 11, 2021 Matt Corallo Taproot "},{"uri":"/tftc-podcast/2021-02-11-matt-corallo-taproot-activation/","title":"UASFs, BIP 148, BIP 91 and Taproot Activation","content":"Topic: UASFs, BIP 148, BIP 91 and Taproot Activation\nLocation: Tales from the Crypt podcast\nIntro Marty Bent (MB): Sitting down with a man who needs no introduction on this podcast. I think you have been on four times already. I think this is number five Matt. You are worried about the future of Bitcoin. What the hell is going on? You reached out to me last week, you scaring the s*** out of me. Why are you worried?\nMatt Corallo (MC): First of all thanks for having me. I think the Bitcoin …"},{"uri":"/bitcoin-explained/replace-by-fee-rbf/","title":"Replace By Fee (RBF)","content":"Introduction Aaron van Wirdum: 00:01:33\nLive from Utrecht, this is the The Van Wirdum Sjorsnado. Sjors, I heard Bitcoin is broken.\nSjors Provoost: 00:01:40\nIt is. Yeah, it was absolutely terrible.\nAaron van Wirdum: 00:01:43\nA double spend happened.\nSjors Provoost: 00:01:44\nYep, ruined.\nAaron van Wirdum: 00:01:45\nAnd this is because - \u0026amp;ldquo;a fatal flaw in the Bitcoin protocol.\u0026amp;rdquo; That\u0026amp;rsquo;s how it was reported, I think, in Bloomberg?\nSjors Provoost: 00:01:54\nYeah, I couldn\u0026amp;rsquo;t find …"},{"uri":"/bitcoin-explained/compact-client-side-filtering/","title":"Compact Client Side Filtering (Neutrino)","content":"Introduction Aaron van Wirdum: 00:01:34\nLive from Utrecht, this is The Van Wirdum Sjorsnado.\nSjors Provoost: 00:01:37\nHello.\nAaron van Wirdum: 00:01:38\nVan Wirdum Sjorsnado. Did I say it right this time?\nSjors Provoost: 00:01:42\nI don\u0026amp;rsquo;t know. We\u0026amp;rsquo;re not going to check again. This is take number three. Now you\u0026amp;rsquo;re going to ask me whether I rioted and I\u0026amp;rsquo;m gonna say no.\nAaron van Wirdum: 00:01:51\nYou\u0026amp;rsquo;re like psychic.\nSjors Provoost: 00:01:53\nYes.\nAaron van Wirdum: …"},{"uri":"/bitcoin-explained/bitcoin-core-v0-21/","title":"Bitcoin Core 0.21.0","content":"Intro Aaron van Wirdum: 00:01:35\nLive from Utrecht, this is The Van Wirdum Sjorsnado. Episode 24, we\u0026amp;rsquo;re going to discuss Bitcoin Core 21.\nSjors Provost: 00:01:46\nHooray. Well, 0.21. We\u0026amp;rsquo;re still in the age of the zero point releases.\nAaron van Wirdum: 00:01:51\nYes, which is ending now.\nSjors Provost: 00:01:53\nProbably yes.\nAaron van Wirdum: 00:01:54\nThat\u0026amp;rsquo;s what I understand. 
The next one, the 22 will actually be Bitcoin Core 22.\nSjors Provost: 00:01:59\nThat\u0026amp;rsquo;s what it says …"},{"uri":"/stephan-livera-podcast/2021-01-21-wiz-and-simon-mempool/","title":"Mempool-space – helping Bitcoin migrate to a multi-layer ecosystem","content":"podcast: https://stephanlivera.com/episode/245/\nStephan Livera:\nWiz and Simon. Welcome to the show.\nSimon:\nThank you.\nWiz:\nYeah, thanks for having us.\nStephan Livera:\nSo, Wiz I think my listeners are already familiar with you. But let’s hear from you Simon. Tell us about yourself.\nSimon:\nI grew up in Sweden. I worked there as a software developer and about four years ago I decided to quit my day job and pursue a more nomadic lifestyle. So for the past four years, I’ve been mostly like traveling …"},{"uri":"/speakers/simon/","title":"Simon","content":""},{"uri":"/speakers/wiz/","title":"Wiz","content":""},{"uri":"/speakers/jonas-schnelli/","title":"Jonas Schnelli","content":""},{"uri":"/stephan-livera-podcast/2021-01-14-jonas-schnelli-maintaining-bitcoin-core/","title":"Maintaining Bitcoin Core-Contributions, Consensus, Conflict","content":"podcast: https://stephanlivera.com/episode/242/\nStephan Livera:\nJonas. Welcome back to the show.\nJonas Schnelli:\nHey! Hi, Stephan. Thanks for having me.\nStephan Livera:\nJonas, Its been a while since we spoke on the show and I was excited to get another opportunity to chat with you and learn a little bit more and obviously talk about this on behalf of some of the newer listeners who are coming in and they may not be familiar with how does Bitcoin Core work. So just for listeners who are a bit …"},{"uri":"/realworldcrypto/2021/2021-01-12-tim-ruffing-musig2/","title":"MuSig2: Simple Two-Round Schnorr Multi-Signatures","content":"MuSig2 paper: https://eprint.iacr.org/2020/1261.pdf\nIntroduction This is about MuSig2, simple two round Schnorr multisignatures. This is joint work with Jonas Nick and Yannick Seurin. Jonas will be available to answer questions as I will be of course.\nMulti-Signatures The idea that with multisignatures is that n signers can get together and produce a single signature on a single message. To be clear for this talk when I talk about multisignatures what I mean is n-of-n signatures and not the more …"},{"uri":"/realworldcrypto/","title":"Realworldcrypto","content":" Realworldcrypto 2018 Realworldcrypto 2021 "},{"uri":"/realworldcrypto/2021/","title":"Realworldcrypto 2021","content":" MuSig2: Simple Two-Round Schnorr Multi-Signatures Jan 12, 2021 Tim Ruffing Musig "},{"uri":"/tags/reproducible-builds/","title":"Reproducible builds","content":""},{"uri":"/bitcoin-explained/why-open-source-matters-for-bitcoin/","title":"Why Open Source Matters For Bitcoin","content":"Intro Aaron van Wirdum:\nLive from Utrecht, this is the Van Wirdum Sjorsnado.\nSjors Provoost:\nHello.\nAaron van Wirdum:\nThis episode, we\u0026amp;rsquo;re going to discuss open source?\nSjors Provoost:\nYes.\nAaron van Wirdum:\nI\u0026amp;rsquo;m just going to skip over the whole price thing. We\u0026amp;rsquo;re going to discuss open source and why it\u0026amp;rsquo;s useful, or free software and why it\u0026amp;rsquo;s useful. Are you on the free software train or on the open source train?\nSjors Provoost:\nI\u0026amp;rsquo;m on every train. I like …"},{"uri":"/chaincode-labs/chaincode-podcast/modularizing-the-bitcoin-consensus-engine/","title":"Modularizing the Bitcoin Consensus Engine","content":"AJ: Do you want to talk about isolating the consensus engine?\nCD: Sure. 
More recently I have dove into the codebase a little bit more. That started with looking at Matt’s async ProcessNewBlock work and playing around with that. Learning from that how do you make a change to the core engine of Bitcoin Core.\nMatt Corallo’s PR on async ProcessNewBlock https://github.com/bitcoin/bitcoin/pull/16175\nAJ: Can you talk about that PR a little bit and what it would do?\nCD: Basically right now when we …"},{"uri":"/bitcoin-explained/rsk-federated-sidechains-and-powpeg/","title":"RSK, Federated Sidechains And Powpeg","content":"Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is the Van Wirdum Sjorsnado. Hello! Sjors, I have another announcement to make.\nSjors Provoost : 00:00:13\nExciting, tell me.\nAaron van Wirdum: 00:00:14\nDid you know you can find the Van Wirdum Sjorsnado on its own RSS feed?\nSjors Provoost : 00:00:19\nI did, yes. And this is actually a new recording of that announcement this is not spliced in by the editor.\nAaron van Wirdum: 00:00:25\nThe thing is the existing Bitcoin magazine RSS feed also has the …"},{"uri":"/chaincode-labs/chaincode-podcast/2020-11-30-carl-dong-reproducible-builds/","title":"Reproducible Builds","content":"Carl Dong’s presentation at Breaking Bitcoin 2019 on “Bitcoin Build System Security”: https://diyhpl.us/wiki/transcripts/breaking-bitcoin/2019/bitcoin-build-system/\nIntro Adam Jonas (AJ): Welcome to the Chaincode podcast Carl.\nCarl Dong (CD): Hello.\nAJ: You know we’ve been doing this podcast for a while now. How come you haven’t been on the show yet?\nCD: We’ve been at home.\nAJ: We did a few episodes before that though.\nMurch (M): It is fine. We’ve got you now.\nAJ: We’ll try not to take it too …"},{"uri":"/sf-bitcoin-meetup/","title":"SF Bitcoin Meetup","content":" Advanced Bitcoin Scripting Apr 03, 2017 Andreas Antonopoulos Scripts addresses Bcoin Sep 28, 2016 Consensus enforcement BIP Taproot and BIP Tapscript Dec 16, 2019 Pieter Wuille Taproot Tapscript Bip150 Bip151 Sep 04, 2017 Jonas Schnelli P2 p Bitcoin core Bitcoin Core Apr 23, 2018 Jeremy Rubin Bitcoin core Exploring Lnd0.4 Apr 20, 2018 Olaoluwa Osuntokun, Conner Fromknecht Lnd Lightning How Lightning.network can offer a solution for Bitcoin\u0026amp;#39;s scalability problems May 26, 2015 Tadge Dryja, …"},{"uri":"/sf-bitcoin-meetup/2020-11-30-socratic-seminar-20/","title":"Socratic Seminar 20","content":"SF Bitcoin Devs socratic seminar #20\n((Names anonymized.))\nXX01: For everyone new who has joined- this is an experiment for us. We\u0026amp;rsquo;ll see how this first online event goes. So far no major issues. Nice background, Uyluvolokutat. Should we let Harkenpost in?\nXX06: People around America are showing up since they don\u0026amp;rsquo;t have to show up in person.\nXX01: How do we block NY IPs?\nXX06: We already have people from Western Africa joining.\nXX05: Oh he\u0026amp;rsquo;s here? You can increase the tile …"},{"uri":"/greg-maxwell/2020-11-25-greg-maxwell-replacing-pgp/","title":"Replacing PGP with Bitcoin public key infrastructure","content":"Location: Reddit\nhttps://www.reddit.com/r/Bitcoin/comments/k0rnq8/pgp_is_replaceable_with_the_bitcoin_public_key/gdjv1dn?utm_source=share\u0026amp;amp;utm_medium=web2x\u0026amp;amp;context=3\nIs PGP replaceable with Bitcoin public key infrastructure? 
This is true in the same sense that PGP can also be replaced with some fancy functions on a school kids graphing calculator.\nYes you can construct some half-assed imitation of pgp using stuff from Bitcoin, but you probably shouldn\u0026amp;rsquo;t.\nIf all you really want is …"},{"uri":"/bitcoin-explained/erebus-attacks-and-how-to-stop-them-with-asmap/","title":"Erebus Attacks And How To Stop Them With ASMAP","content":"Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is The Van Wirdum Sjorsnado.\nSjors Provoost: 00:00:10\nHello!\nAaron van Wirdum: 00:00:10\nHey Sjors. What\u0026amp;rsquo;s up? We got a market cap all time high, did you celebrate?\nSjors Provoost: 00:00:14\nNo, I did not because it\u0026amp;rsquo;s just dilution.\nAaron van Wirdum: 00:00:16\nWhat do you mean?\nSjors Provoost: 00:00:17\nI mean somebody\u0026amp;rsquo;s making more Bitcoin. So it\u0026amp;rsquo;s fun that the market cap goes up, but it doesn\u0026amp;rsquo;t matter unless …"},{"uri":"/bitcoin-explained/bitcoin-eclipse-attacks-and-how-to-stop-them/","title":"Bitcoin Eclipse Attacks And How To Stop Them","content":"Introduction Sjors Provoost: 00:01:01\nWe\u0026amp;rsquo;re going to discuss a paper about eclipse attacks. Couldn\u0026amp;rsquo;t come up with a better pun. So apologies.\nAaron van Wirdum: 00:01:06\nThat\u0026amp;rsquo;s all you got from us. It\u0026amp;rsquo;s the paper Eclipse Attacks on Bitcoin\u0026amp;rsquo;s Peer-to-Peer Network by Ethan Heilman, Alison Kendler, Aviv Zohar, and Sharon Goldberg from Boston University and Hebrew University MSR Israel.\nSjors Provoost: 00:01:20\nThat\u0026amp;rsquo;s right and it was published in 2015.\nAaron van …"},{"uri":"/chaincode-labs/chaincode-podcast/2020-11-09-enterprise-walletsutxo-management/","title":"Enterprise Wallets/UTXO Management","content":"Mark Erhardt: 00:00:00\nJust to throw out a few numbers there, non-SegWit inputs cost almost 300 bytes, and native SegWit inputs cost slightly more than 100 bytes. There\u0026amp;rsquo;s almost a reduction by two-thirds in fees if you switch from non-SegWit to native SegWit.\nIntroduction Caralie Chrisco: 00:00:29\nHi, everyone, welcome to the Chaincode podcast. My name is Caralie.\nAdam Jonas: 00:00:32\nAnd it\u0026amp;rsquo;s Jonas.\nCaralie Chrisco: 00:00:33\nAnd we\u0026amp;rsquo;re back!\nAdam Jonas: 00:00:34\nWe\u0026amp;rsquo;re …"},{"uri":"/tags/payment-batching/","title":"Payment batching","content":""},{"uri":"/bitcoin-explained/open-timestamps-leveraging-bitcoin-s-security-for-all-data/","title":"Open Timestamps: Leveraging Bitcoin's Security For All Data","content":"Introduction Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is Van Wirdum Sjorsnado.\nSjors Provoost: 00:00:10\nHello!\nAaron van Wirdum: 00:00:11\nHey Sjors.\nSjors Provoost: 00:00:12\nWhat\u0026amp;rsquo;s up?\nAaron van Wirdum: 00:00:13\nSjors, today we are going to discuss at length in depth the American political situation.\nSjors Provoost: 00:00:19\nThat\u0026amp;rsquo;s right. 
We\u0026amp;rsquo;re going to explain everything and we\u0026amp;rsquo;re going to tell you who to vote for, even though this will be released after the …"},{"uri":"/tags/proof-systems/","title":"proof-systems","content":""},{"uri":"/greg-maxwell/2020-11-05-greg-maxwell-yubikey-security/","title":"Yubikey Security","content":" By this logic, a yubikey would also be a great targeting vector.\nThey would be, and if US intelligence services have not compromised yubis or at least have a perfect targeted substitution solutions for them then they should all be fired for gross incompetence and mismanagement of their funding.\nLikewise, if parties which things of significant value to secure who might be targeted by state level attackers are securing those things with just yubs instead of using yubis as a second factor in an …"},{"uri":"/greg-maxwell/2020-11-01-greg-maxwell-hardware-wallets-altcoins/","title":"Why do hardware wallets not support altcoins?","content":"They\u0026amp;rsquo;re an enormous distraction and hazard to software development. It\u0026amp;rsquo;s hard enough to correctly and safely write software to support one system. Every minute spent creating and testing the software for some alternative is a minute taken away from supporting Bitcoin.\nI can say first hand that my efforts to review hardware wallet code against possible vulnerabilities have been actively thwarted by hardware wallet codebases being crapped up with support for altcoins. It\u0026amp;rsquo;s easy …"},{"uri":"/bitcoin-explained/what-is-utreexo/","title":"What Is Utreexo?","content":"Introduction Aaron van Wirdum:\nAnd the proposal we\u0026amp;rsquo;re discussing this week is Utreexo.\nRuben Somsen:\nThat is correct.\nSjors Provoost:\nUtreexo, and the tree is for tree. The thing that grows in the forest.\nAaron van Wirdum:\nDid you know that was the pun, Ruben? I didn\u0026amp;rsquo;t realize\u0026amp;hellip;\nRuben Somsen:\nWell, I heard Tadge say that so I was aware of that, but there is a very specific reason why I was enthusiastic to talk about it. Well, one, I\u0026amp;rsquo;ve used it in one of the things …"},{"uri":"/stephan-livera-podcast/2020-10-27-jonas-nick-tim-ruffing-musig2/","title":"MuSig, MuSig-DN and MuSig2","content":"Tim Ruffing on Schnorr multisig at London Bitcoin Devs: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-17-tim-ruffing-schnorr-multisig/\nMuSig paper: https://eprint.iacr.org/2018/068.pdf\nMuSig blog post: https://blockstream.com/2019/02/18/en-musig-a-new-multisignature-standard/\nInsecure shortcuts in MuSig: https://medium.com/blockstream/insecure-shortcuts-in-musig-2ad0d38a97da\nMuSig-DN paper: https://eprint.iacr.org/2020/1057.pdf\nMuSig-DN blog post: …"},{"uri":"/greg-maxwell/2020-10-26-greg-maxwell-bitcoin-core-github/","title":"Bitcoin Core Github","content":"Location: Reddit\nhttps://www.reddit.com/r/Bitcoin/comments/jiat6s/can_github_censor_the_code_of_bitcoin_the/ga5k9ap?utm_source=share\u0026amp;amp;utm_medium=web2x\u0026amp;amp;context=3\nCan GitHub censor Bitcoin Core? The event isn\u0026amp;rsquo;t news to Bitcoin developers either, github has done this a number of times before\u0026amp;ndash; even taking Mozilla offline as a result of an obviously spurious DMCA complaint.\nEvery developer that has the repository cloned has the full history.\nNot just the developers, thousands of …"},{"uri":"/bitcoin-explained/sync-bitcoin-faster-assume-utxo/","title":"Sync Bitcoin Faster! 
Assume UTXO","content":"Introduction Aaron van Wirdum:\nSjors, this week we are going to create a carbon copy of the Chaincode Podcast. They had an episode with James O’Beirne on AssumeUTXO, and we are going to make an episode on AssumeUTXO.\nSjors Provoost:\nAnd we are going to follow roughly the same structure.\nAaron van Wirdum:\nWe\u0026amp;rsquo;re just going to create the same podcast, just with our voices this time. We\u0026amp;rsquo;re gonna do it step by step. First step is, Headers First. That\u0026amp;rsquo;s how they did it, so …"},{"uri":"/bitcoin-explained/bitcoin-core-v21-supports-tor-v3/","title":"Bitcoin Core 0.21 Supports Tor V3","content":"Introduction Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is the Van Wirdum Sjorsnado. Sjors, you pointed out to me that Bitcoin Core has an amazing new feature merged into its repository.\nSjors Provoost: 00:00:19\nAbsolutely, we have bigger onions now.\nAaron van Wirdum: 00:00:24\nRight, so I had basically no idea what it meant. You figured it out.\nSjors Provoost: 00:00:29\nI did.\nAaron van Wirdum: 00:00:34\nYeah so let\u0026amp;rsquo;s start at the beginning. It\u0026amp;rsquo;s about Tor.\nSjors Provoost: …"},{"uri":"/stephan-livera-podcast/2020-10-15-nadav-kohen-bitcoin-dlcs/","title":"What You Should Know About Bitcoin DLCs","content":"podcast: https://stephanlivera.com/episode/219/\nStephan Livera:\nNadav welcome to the show.\nNadav Kohen:\nThanks for having me.\nStephan Livera:\nNadav I’ve been following your work for a little while. Obviously I really I like reading your blog posts over at Suredbits, and I had the chance to meet you earlier this year in London. Can you tell us a little bit about yourself and what’s your role with Suredbits?\nNadav Kohen:\nYeah. so I am a software engineer at Suredbits. 
I’ve been working there since …"},{"uri":"/speakers/barnab%C3%A1s-b%C3%A1gyi/","title":"Barnabás Bágyi","content":""},{"uri":"/cppcon/","title":"CPPcon","content":" CPPcon 2017 CPPcon 2020 "},{"uri":"/cppcon/2020/","title":"CPPcon 2020","content":" Fuzzing Class Interfaces Oct 09, 2020 Barnabás Bágyi "},{"uri":"/cppcon/2020/2020-10-09-barnabas-bagyi-fuzzing-class-interfaces/","title":"Fuzzing Class Interfaces","content":"Fuzzing Class Interfaces for Generating and Running Tests with libFuzzer Location: CppCon 2020\nSlides: https://github.com/CppCon/CppCon2020/blob/main/Presentations/fuzzing_class_interfaces_for_generating_and_running_tests_with_libfuzzer/fuzzing_class_interfaces_for_generating_and_running_tests_with_libfuzzer__barnab%C3%A1s_b%C3%A1gyi__cppcon_2020.pdf\nLibFuzzer: https://llvm.org/docs/LibFuzzer.html\nLibFuzzer tutorial: https://github.com/google/fuzzing/blob/master/tutorial/libFuzzerTutorial.md …"},{"uri":"/bitcoin-explained/accounts-for-bitcoin-easypaysy/","title":"Accounts for Bitcoin, Easypaysy!","content":"Intro Aaron: 00:01:10\nSjors how do you like reusing addresses?\nSjors: 00:01:13\nI love reusing addresses, it\u0026amp;rsquo;s amazing.\nAaron: 00:01:16\nIt\u0026amp;rsquo;s so convenient isn\u0026amp;rsquo;t it\nSjors: 00:01:18\nIt just makes you feel like you know your exchange if you\u0026amp;rsquo;re sending them money they know everything about you and they like that right?\nAaron: 00:01:26\nIt\u0026amp;rsquo;s so convenient, it\u0026amp;rsquo;s so easy, nothing to complain about here.\nSjors: 00:01:30\nAbsolutely.\nAaron: 00:01:31\nSo that was our …"},{"uri":"/stephan-livera-podcast/2020-10-02-gloria-zhao-bitcoin-core/","title":"Learning Bitcoin Core Contribution \u0026 Hosting PR Review Club","content":"podcast: https://stephanlivera.com/episode/216/\nStephan Livera:\nGloria. Welcome to the show.\nGloria Zhao:\nThank you so much for having me.\nStephan Livera:\nSo Gloria I’ve heard a few things about you and I was looking up what you’ve been doing. You’ve been doing some really interesting things. Can we hear a little bit about you and how you got into Bitcoin?\nGloria Zhao:\nYeah, well, I didn’t get into Bitcoin by choice. Actually it was by accident. I’m a college student at Berkeley right now, and I …"},{"uri":"/stephan-livera-podcast/2020-09-28-michael-flaxman-security-guide/","title":"10x your Bitcoin Security with Multisig","content":"Previous SLP episode with Michael Flaxman: https://diyhpl.us/wiki/transcripts/stephan-livera-podcast/2019-08-08-michael-flaxman/\n10x Security Bitcoin Guide: https://btcguide.github.io/\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\n10x Security Bitcoin Guide Stephan Livera (SL): Michael, welcome back to the show.\nMichael Flaxman (MF): It’s great to be here. I’m a big fan of the podcast so I love to be on it.\nSL: Michael, your first appearance on the show was a very, very …"},{"uri":"/speakers/michael-flaxman/","title":"Michael Flaxman","content":""},{"uri":"/bitcoin-explained/explaining-signet/","title":"Explaining Signet","content":"Intro Aaron van Wirdum: 00:00:07\nLive from Utrecht, this is the Van Wirdum Sjorsnedo. Hello! Sjors, welcome.\nSjors Provoost: 00:00:12\nThank you. It\u0026amp;rsquo;s good to be back. Well, I never left, but\u0026amp;hellip;\nAaron van Wirdum: 00:00:15\nYeah, well, we\u0026amp;rsquo;re at your home now, so you never left, I think. 
You probably literally never left because of corona\nSjors Provoost: 00:00:21\nExactly we\u0026amp;rsquo;re at my secret location\nAaron van Wirdum: 00:00:25\nHow are you enjoying the second wave?\nSjors …"},{"uri":"/stephan-livera-podcast/2020-09-15-steve-lee-of-square-crypto/","title":"Bitcoin Grants, Design \u0026 Crypto Patents (COPA)","content":"podcast: https://stephanlivera.com/episode/211/\nStephan Livera:\nSteve. Welcome back to the show.\nSteve Lee:\nThank you so much. Glad the glad to be here,\nStephan Livera:\nSteve. I see you guys have been very busy over at Cquare Crypto. Since we last spoke, you’ve been doing a lot of work in different, in different arenas as well. You’ve got the grants going, design and this crypto patent stuff. So tell us a little bit about you know, what you’ve been doing over the last few months.\nSteve Lee: …"},{"uri":"/speakers/ben-kaufman/","title":"Ben Kaufman","content":""},{"uri":"/stephan-livera-podcast/2020-08-28-stepan-snigirev-and-ben-kaufman/","title":"Specter Desktop Bitcoin Multi Sig","content":"podcast: https://stephanlivera.com/episode/205/\nStephan Livera:\nStepan and Ben, welcome to the show.\nStepan:\nThank you. Thank you, Stephan. It’s very nice to be here again.\nBen:\nThank you for inviting me.\nStephan Livera:\nStepan I know you’ve been on the show twice before, but perhaps just for any listeners who are a little bit newer, can you tell us a little bit about yourself?\nStepan:\nWe are doing well originally I came from quantum physics into Bitcoin and started working on hardware stuff …"},{"uri":"/greg-maxwell/2020-08-27-greg-maxwell-checkmultisig-bug/","title":"Checkmultisig Bug","content":"What is stopping the OP_CHECKMULTISIG extra pop bug from being fixed?\nLocation: Bitcointalk\nhttps://bitcointalk.org/index.php?topic=5271566.msg55079521#msg55079521\nWhat is stopping the OP_CHECKMULTISIG extra pop bug from being fixed? I think it is probably wrong to describe it as a bug. I think it was intended to indicate which signatures were present to fix the otherwise terrible performance of CHECKMULTISIG.\nRegardless, there is no real point to fixing it: Any \u0026amp;lsquo;fix\u0026amp;rsquo; would require …"},{"uri":"/sydney-bitcoin-meetup/2020-08-25-socratic-seminar/","title":"Socratic Seminar","content":"Name: Socratic Seminar\nTopic: Agenda in Google Doc below\nLocation: Bitcoin Sydney (online)\nVideo: No video posted online\nLast month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar/\nGoogle Doc of the resources discussed: https://docs.google.com/document/d/1rJxVznWaFHKe88s5GyrxOW-RFGTeD_GKdFzHNvhrq-c/\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their …"},{"uri":"/bitcoin-design/misc/2020-08-20-bitcoin-core-gui/","title":"Bitcoin Core GUI introductory meeting","content":"Topic: Agenda link posted below\nLocation: Bitcoin Design (online)\nVideo: No video posted online\nAgenda: https://github.com/BitcoinDesign/Meta/issues/8\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. 
If you were a participant and would like your comments to be attributed please get in touch.\nBitcoin Core PR review There seems to be a lot to learn about the …"},{"uri":"/bitcoin-design/misc/","title":"Miscellaneous","content":" Bitcoin Core GUI introductory meeting Aug 20, 2020 Bitcoin core "},{"uri":"/london-bitcoin-devs/2020-08-19-socratic-seminar-signet/","title":"Socratic Seminar - Signet","content":"Pastebin of the resources discussed: https://pastebin.com/rAcXX9Tn\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.\nIntro Michael Folkson (MF): This is a Socratic Seminar organized by London BitDevs. We have a few in the past. We had a couple on BIP-Schnorr and BIP-Taproot …"},{"uri":"/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics/","title":"ANYPREVOUT, MPP, Mitigating Lightning Attacks","content":"Transcript completed by: Stephan Livera Edited by: Michael Folkson\nLatest ANYPREVOUT update ANYPREVOUT BIP (BIP 118): https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki\nStephan Livera (SL): Christian welcome back to the show.\nChristian Decker (CD): Hey Stephan, thanks for having me.\nSL: I wanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about: ANYPREVOUT, MPP, Lightning attacks, …"},{"uri":"/stephan-livera-podcast/2020-08-13-christian-decker/","title":"ANYPREVOUT, MPP, Mitigating LN Attacks","content":"podcast: https://stephanlivera.com/episode/200/\nStephan Livera:\nChristian welcome back to the show.\nChristian Decker:\nHey, Stephan, thanks for having me\nStephan Livera:\nWanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about ANYPREVOUT, MPP, lightning attacks. What’s the latest with lightning network. But yeah, let’s start with a little bit around ANYPREVOUT. So I see that yourself and AJ towns just …"},{"uri":"/tags/multipath-payments/","title":"Multipath payments","content":""},{"uri":"/tags/trampoline-payments/","title":"Trampoline payments","content":""},{"uri":"/chicago-bitdevs/","title":"Chicago Bitdevs","content":" Socratic Seminar Aug 12, 2020 P2p Research Threshold signature Sighash anyprevout Altcoins Socratic Seminar 10 Jul 08, 2020 "},{"uri":"/chicago-bitdevs/2020-08-12-socratic-seminar/","title":"Socratic Seminar","content":"Topic: Agenda below\nVideo: No video posted online\nBitDevs Solo Socratic 4 agenda: https://bitdevs.org/2020-07-31-solo-socratic-4\nThe conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.\nBitcoin Core P2P IRC Meetings https://github.com/bitcoin-core/bitcoin-devwiki/wiki/P2P-IRC-meetings …"},{"uri":"/stephan-livera-podcast/2020-08-09-thomas-voegtlin-ghost43-electrum/","title":"Electrum","content":"Topic: Electrum Wallet 4\nLocation: Stephan Livera Podcast\nElectrum GitHub: https://github.com/spesmilo/electrum\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\nIntro Stephan Livera (SL): Thomas and Ghost43. 
Welcome back to the show.\nThomas Voegtlin (TV): Hi, thank you.\nGhost43 (G43): Hey Stephan, Thanks for having me.\nSL: Thomas, my listeners have already heard you on a prior episode. Ghost, did you want to tell us about yourself, how you got into Bitcoin and how you got into …"},{"uri":"/stephan-livera-podcast/2020-08-09-thomas-voegtlin-and-ghost43/","title":"Electrum Wallet","content":"podcast: https://stephanlivera.com/episode/199/\nStephan Livera:\nThomas and Ghost43. Welcome back to the show.\nThomas V:\nHi, thank you.\nGhost43:\nHey Stephan, Thanks for having me.\nStephan Livera:\nSo welcome to the show guys. Now, Thomas, I know my listeners have already heard you on the prior episode, but Ghost, did you want to tell us a little bit about yourself, how you got into Bitcoin and how you got into developing with Electrum Wallet?\nGhost43:\nYeah, sure. I mean, I got into Bitcoin a few …"},{"uri":"/speakers/ghost43/","title":"Ghost43","content":""},{"uri":"/speakers/thomas-voegtlin/","title":"Thomas Voegtlin","content":""},{"uri":"/speakers/eric-lombrozo/","title":"Eric Lombrozo","content":""},{"uri":"/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/","title":"How to Activate a New Soft Fork","content":"Location: Bitcoin Magazine (online)\nAaron van Wirdum in Bitcoin Magazine on BIP 8, BIP 9 or Modern Soft Fork Activation: https://bitcoinmagazine.com/articles/bip-8-bip-9-or-modern-soft-fork-activation-how-bitcoin-could-upgrade-next\nDavid Harding on Taproot activation proposals: https://gist.github.com/harding/dda66f5fd00611c0890bdfa70e28152d\nIntro Aaron van Wirdum (AvW): Eric, Luke welcome. Happy Bitcoin Independence Day. How are you doing?\nEric Lombrozo (EL): We are doing great. How are you …"},{"uri":"/bitcoin-explained/what-is-miniscript/","title":"What is Miniscript","content":"Aaron van Wirdum:\nMiniscript. It\u0026amp;rsquo;s a project, I guess that\u0026amp;rsquo;s how I would describe it. It\u0026amp;rsquo;s a project by a couple of Blockstream engineers, even though it\u0026amp;rsquo;s not an official Blockstream project, but it\u0026amp;rsquo;s Pieter Wuille, Andrew Poelstra. And then, there was a third name that was as well-known, Sanket Kanjalkar.\nSjors Provoost:\nYeah, I believe he was an intern at Blockstream at the time.\nAaron van Wirdum:\nHe may have been an intern, yes. So they developed this idea …"},{"uri":"/stephan-livera-podcast/2020-07-26-nix-bitcoin/","title":"nix-bitcoin: A Security Focused Bitcoin Node","content":"nix-bitcoin on GitHub: https://github.com/fort-nix/nix-bitcoin\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\nIntro Stephan Livera (SL): nixbitcoindev welcome to the show.\nnixbitcoindev (NBD): Hello. Thank you for having me.\nSL: Obviously as you’re under a pseudonym don’t dox anything about yourself but can you just tell us what you’re interested in about Bitcoin and is it the computer science aspects or what aspects of it interest you?\nNBD: I came into Bitcoin pretty much …"},{"uri":"/speakers/nixbitcoindev/","title":"nixbitcoindev","content":""},{"uri":"/bitcoin-explained/breaking-down-taproot-activation-options/","title":"Breaking Down Taproot Activation Options","content":"Aaron van Wirdum:\nWe\u0026amp;rsquo;re going to discuss Taproot activation, or more generally soft fork activation.\nSjors Provoost:\nYeah.\nAaron van Wirdum:\nThis has become a topic again in the sort of Bitcoin debate community public discourse. How do we activate soft forks? 
In all …"},{"uri":"/tags/pathfinding/","title":"pathfinding","content":""},{"uri":"/cryptoeconomic-systems/","title":"Cryptoeconomic Systems","content":" Cryptoeconomic Systems 2019 "},{"uri":"/cryptoeconomic-systems/2019/","title":"Cryptoeconomic Systems 2019","content":"https://cryptoeconomicsystems.pubpub.org/ces19\nAll About Decentralized Trust Ittai Abraham Cross Chain Deals And Adversarial Commerce Maurice Herlihy Everything Is Broken Cory Fields Flash Boys V2 Ari Juels Research Funding bitcoin development Tadge Dryja Introduction Andrew Miller, Neha Narula Journal Review Journals As Clubs Jason Potts Knowledge Aggregation And Propagation Oct 06, 2019 Bryan Bishop Research Bitcoin core Libra Dahlia Malkhi Mechanism Design Matt Weinberg Mining Incentives Near …"},{"uri":"/cryptoeconomic-systems/2019/2019-10-15-russell-oconnor-simplicity/","title":"Simplicity","content":"Location: CES Summit 2019\nSlides: https://drive.google.com/file/d/1FivYGQzOYfM0JGl4SS3VR1UGKsyMn_L5/\nSimplicity white paper: https://blockstream.com/simplicity.pdf\nIntro So Simplicity is a programming language that I have been working on at Blockstream. I call it an alternative or reimagination of Bitcoin Script as it would have been if I had invented Bitcoin. An alternative or new instruction set that is compatible with Bitcoin. It could be soft forked in. We’ve experimented with it on …"},{"uri":"/austin-bitcoin-developers/2019-10-14-socratic-seminar-3/","title":"Socratic Seminar 3","content":"https://www.meetup.com/Austin-Bitcoin-Developers/events/265295570/\nhttps://bitdevs.org/2019-09-16-socratic-seminar-96\nWe have sort of done two meetups in this format. The idea here is that in NY there\u0026amp;rsquo;s BitDevs which is one of the oldest meetups. This has been going on for five years. They have a format called socratic where they have a talented guy named J who leads them through some topics and they try to get some discussion going on and pull from expertise in the audience. We are …"},{"uri":"/chaincode-labs/chaincode-residency/","title":"Chaincode Residency","content":" Advanced Segwit Jun 18, 2019 James O\u0026amp;#39;Beirne Segwit Attack Vectors of Lightning Network Jun 25, 2019 Fabrice Drouin Security problems Lightning Base and Transport Layers of the Lightning Network Jun 24, 2019 Fabrice Drouin Lightning Bitcoin network partitioning \u0026amp;amp; network-level privacy attacks Jun 12, 2019 Ethan Heilman Privacy problems P2p Eclipse attacks Bootstrapping Lightning Node Oct 22, 2018 Elaine Ou Lightning Routing Building Lightning Applications Oct 22, 2018 Alex Bosworth …"},{"uri":"/tags/dandelion/","title":"Dandelion","content":""},{"uri":"/speakers/giulia-fanti/","title":"Giulia Fanti","content":""},{"uri":"/chaincode-labs/chaincode-residency/2019-10-09-giulia-fanti-p2p-privacy-attacks/","title":"P2P Privacy Attacks","content":"Location: Chaincode Labs 2019 Residency\nSlides: https://residency.chaincode.com/presentations/bitcoin/giulia_fanti_bitcoin_p2p.pdf\nGiulia: So today I\u0026amp;rsquo;m going to talk to you about some work that my collaborators and I\u0026amp;rsquo;ve been working on for the last three years now. 
So this was joint work with a bunch of people who are now spread across a few different universities but it started at the University of Illinois: Shaileshh Bojja Venkatakrishnan, Surya Bakshi, Brad Denby, Shruti Bhargava, …"},{"uri":"/categories/residency/","title":"residency","content":""},{"uri":"/speakers/bryan-bishop/","title":"Bryan Bishop","content":""},{"uri":"/cryptoeconomic-systems/2019/knowledge-aggregation-and-propagation/","title":"Knowledge Aggregation And Propagation","content":"Intro https://docs.google.com/document/d/1a1uRy10dBBcxrnzjdUaO2y03f5H34yFlxktOcanvVYE/edit\nhttps://cess.pubpub.org/pub/knowledge-aggregation/branch/2/\nI\u0026amp;rsquo;ll be the contrarian - I think academia is awful and should be destroyed.\nReputation is completely independent of content and should not be the mechanism by which you judge the quality of research; your reputation is worthless to me. (laughter)\nMy perspective: [first, slidemaker\u0026amp;rsquo;s regret! might have reorganized this] I\u0026amp;rsquo;m a …"},{"uri":"/cryptoeconomic-systems/2019/threshold-schnorr-signatures/","title":"The Quest for Practical Threshold Schnorr Signatures","content":"Slides: https://slides.com/real-or-random/schnorr-threshold-sigs-ces-summit-2019\nIntroduction It\u0026amp;rsquo;s great to be here. My name is Tim Ruffing and I work for the research team at Blockstream. This is a talk about practical threshold Schnorr signatures and our quest for constructing them.\nDisclaimer First of all, this is work in progress. It\u0026amp;rsquo;s pretty early work in progress, mostly based on discussions with people at Blockstream. I describe the problem we want to solve, and we have some …"},{"uri":"/greg-maxwell/2019-10-04-majority-miner-attack/","title":"Should we be more concerned by the prospect of a 51 percent mining attack?","content":"I think questions like this are ultimately the result of a fundamental lack of understanding about what Bitcoin is doing.\nThe problem Bitcoin is attempting to solve is getting everyone everywhere to agree on the same stable history of transactions. This is necessary because in order to prevent users from printing money from nothing the system must have a rule that you can\u0026amp;rsquo;t spend a given coin more than once\u0026amp;ndash; like I have a dollar then pay both alice and bob that dollar, creating a …"},{"uri":"/bitcoinops/2019-09-27-schnorr-taproot-workshop/","title":"Schnorr and Taproot workshop","content":"Location: Chaincode Labs, NYC\nhttps://github.com/bitcoinops/taproot-workshop\nhttps://bitcoinops.slack.com\nhttps://twitter.com/kanzure/status/1177697093462482949\nRun: jupyter notebook\nSee also Schnorr signatures and taproot: https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/\nhttps://github.com/sipa/bips/blob/bip-schnorr/bip-schnorr.mediawiki\nhttps://github.com/sipa/bips/blob/bip-schnorr/bip-taproot.mediawiki …"},{"uri":"/stephan-livera-podcast/2019-09-22-bryan-bishop/","title":"Bitcoin Vaults and Custody","content":"Stephan Livera Podcast with Bryan Bishop - September 22nd 2019\nPodcast: https://stephanlivera.com/episode/108/\nStephan Livera: Bryan, welcome to the show, man. I’m a big fan of your work.\nBryan Bishop: Thank you. I’m a big fan of yours as well.\nStephan Livera: Yeah, so look, we’re doing Bitcoin Custody. I know you’re one of the guys to talk to on this exact topic. And so, maybe we’ll just start with a little bit of a background. 
What was some of your work with LedgerX, and then what’s some of …"},{"uri":"/baltic-honeybadger/","title":"Baltic Honeybadger","content":" Baltic Honeybadger 2018 Baltic Honeybadger 2019 "},{"uri":"/baltic-honeybadger/2019/","title":"Baltic Honeybadger 2019","content":" Coldcard Mk3 - Security in Depth Sep 14, 2019 Rodolfo Novak Security Hardware wallet "},{"uri":"/baltic-honeybadger/2019/2019-09-14-rodolfo-novak-coldcard-mk3/","title":"Coldcard Mk3 - Security in Depth","content":"Intro My name is Rodolfo, I have been around Bitcoin for a little while. We make hardware. Today I wanted to get a little bit into how do you make a hardware wallet secure in a little bit more layman’s terms. Go through the process of getting that done.\nWhat were the options When I closed my last company and decided to find a place to store my coins I couldn’t really find a wallet that satisfied two things that I needed. Which was physical security and open source. There are two wallets on the …"},{"uri":"/scalingbitcoin/tel-aviv-2019/anonymous-atomic-locks/","title":"A2L: Anonymous Atomic Locks for Scalability and Interoperability in Payment Channel Hubs","content":"paper: https://eprint.iacr.org/2019/589.pdf\nhttps://github.com/etairi/A2L\nhttps://twitter.com/kanzure/status/1172116189742546945\nIntroduction I am going ot talk about anonymous atomic locks for scalability and interoperability in payment-channel hubs. This is joint work with my colleagues.\nScalability I will try to keep this section as short as possible. This talk is also about scalability in bitcoin. You\u0026amp;rsquo;re probably aware of the scalability issues. There is decentralized data structure …"},{"uri":"/scalingbitcoin/tel-aviv-2019/atomic-multi-channel-updates/","title":"Atomic Multi-Channel Updates with Constant Collateral in Bitcoin-Compatible Payment-Channel Networks","content":"paper: https://eprint.iacr.org/2019/583\nhttps://twitter.com/kanzure/status/1172102203995283456\nIntroduction Thank you for coming to my talk after lunch. This talk today is about atomic multi-channel updates with constant collateral in bitcoin-compatible payment-channel networks. It\u0026amp;rsquo;s a long title, but I hope you will understand. 
This is collaborative work with my colleagues.\nScalability We\u0026amp;rsquo;re at Scaling Bitcoin so you\u0026amp;rsquo;re probably not suprised that I think bitcoin has scaling …"},{"uri":"/speakers/georgios-konstantopoulos/","title":"Georgios Konstantopoulos","content":""},{"uri":"/speakers/kanta-matsuura/","title":"Kanta Matsuura","content":""},{"uri":"/speakers/pedro-moreno-sanchez/","title":"Pedro Moreno-Sanchez","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/plasma-cash/","title":"Plasma Cash: Towards more efficient Plasma Constructions","content":"Non-custodial sidechains for bitcoin utilizing plasma cash and covenants\nhttps://twitter.com/kanzure/status/1172108023705284609\npaper: https://github.com/loomnetwork/plasma-paper/blob/master/plasma_cash.pdf\nslides: https://gakonst.com/scalingbitcoin2019.pdf\nIntroduction We have known how to do these things for at least a year, but the question is how can we find the minimum changes that we can figure out to do in bitcoin if any on how to explore the layer 2 space in a better sense rather than …"},{"uri":"/scalingbitcoin/tel-aviv-2019/proof-of-verification-for-proof-of-work/","title":"Proof-of-Verification for Proof-of-Work: Miners Must Verify the Signatures on Bitcoin Transactions","content":"https://twitter.com/kanzure/status/1172142603007143936\nextended abstract: http://kmlab.iis.u-tokyo.ac.jp/papers/scaling19-matsuura-final.pdf\nHistory lesson In the 90s, we had some timestamping schemes where things are aggregated into one hash and then it would be publicized later in the newspaper. This scheme was commercialized, in fact. However, the resolution of the trust anchor is very limited. It\u0026amp;rsquo;s one day or half-day at best. When we want to verify, we need to visit the online-offline …"},{"uri":"/scalingbitcoin/tel-aviv-2019/payment-channel-recovery-with-seeds/","title":"Recovering Payment Channel Midstates Using only The User's Seed","content":"https://twitter.com/kanzure/status/1172129077765050368\nIntroduction Cool. This addresses the lightning network which is a scaling technique for the bitcoin network. A big challenge with the lightning network is that you have these midstates that you need to keep. If you lose them, you\u0026amp;rsquo;re out of luck with your counterparty. We had a lot of talks this morning about how to recover midstates but a lot of them used watchtowers. However, this proposal does not require a watchtower and you can …"},{"uri":"/scalingbitcoin/","title":"Scaling Bitcoin Conference","content":" Hong Kong (2015) Milan (2016) Montreal (2015) Stanford (2017) Tel Aviv (2019) Tokyo (2018) "},{"uri":"/scalingbitcoin/tel-aviv-2019/survey-of-progress-in-zero-knowledge-proofs-towards-trustless-snarks/","title":"A Survey of Progress in Succinct Zero Knowledge Proofs: Towards Trustless SNARKs","content":"https://twitter.com/kanzure/status/1171683484382957568\nIntroduction I am going to be giving a survey of recent progress into succinct zero-knowledge proofs. I\u0026amp;rsquo;ll survey recent developments, but I\u0026amp;rsquo;m going to start with what zero-knowledge proofs are. This is towards SNARKs without trusted setup. I want to help you get up to date on what\u0026amp;rsquo;s happening on with SNARKs. 
There\u0026amp;rsquo;s an explosion of manuscripts and it\u0026amp;rsquo;s hard to keep track.\nOne theme is the emergence of …"},{"uri":"/scalingbitcoin/tel-aviv-2019/private-information-retrieval/","title":"Applying Private Information Retrieval to Lightweight Bitcoin Clients","content":"SPV overview I have to thank the prior speaker. This is basically the same, except this time we\u0026amp;rsquo;re not using SGX. We\u0026amp;rsquo;re looking at bitcoin lightweight clients. You have a lite client with not much space not much computing power, can\u0026amp;rsquo;t store the bitcoin blockchain. All it knows about is the header information. We assume it has the blockheader history. In the header, we can find the merkle tree, and given the merkle tree we can make a merkle proof that a given transaction is …"},{"uri":"/speakers/ben-fisch/","title":"Ben Fisch","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/bip-securethebag/","title":"BIP: OP_SECURETHEBAG","content":"https://twitter.com/kanzure/status/1171750965478854656\nIntroduction Thank you for the introduction. Today I am going to talk to you about OP_SECURETHEBAG. It\u0026amp;rsquo;s research that I have been working on for the last couple years. I think it\u0026amp;rsquo;s very exciting and hopefully it will have a big compact. I have a lot of slides.\nWhy are we here? We are here for scaling bitcoin. The goal we have is to scale bitcoin. What is scaling, though? We have talks on networking, privacy, talks on all sorts …"},{"uri":"/tags/contract-protocols/","title":"Contract Protocols","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/bitml/","title":"Developing secure Bitcoin contracts using BitML","content":"paper: https://arxiv.org/abs/1905.07639\nhttps://twitter.com/kanzure/status/1171695588116746240\nThis is shared work with my collaborators and coauthors.\nSmart contracts I am sure everyone here knows what a smart contract is. It\u0026amp;rsquo;s a program that can move cryptoassets. It\u0026amp;rsquo;s executed in a decentralized environment. Generally speaking, we can say there\u0026amp;rsquo;s two classes of smart contracts. A smart contract is a program. While in blockchain like bitcoin, smart contracts are cryptographic …"},{"uri":"/tags/lightweight-client/","title":"Lightweight Client Support","content":""},{"uri":"/speakers/oleg-andreev/","title":"Oleg Andreev","content":""},{"uri":"/speakers/stefano-lande/","title":"Stefano Lande","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/zkvm/","title":"ZkVM: zero-knowledge virtual machine for fast confidential smart contracts","content":"https://medium.com/stellar-developers-blog/zkvm-a-new-design-for-fast-confidential-smart-contracts-d1122890d9ae and https://twitter.com/oleganza/status/1126612382728372224\nhttps://twitter.com/kanzure/status/1171711553512583169\nIntroduction Okay, welcome. What is zkvm? It\u0026amp;rsquo;s a multi-asset blockchain architecture that combines smart contracts and confidentiality features. It\u0026amp;rsquo;s written in pure rust from top to bottom.\nhttps://github.com/stellar/slingshot\nAgenda I\u0026amp;rsquo;ll explain the …"},{"uri":"/edgedevplusplus/2019/lightning-network-layer-by-layer/","title":"A walk through the layers of Lightning","content":"Introduction Good afternoon everyone. My name is Carla. I\u0026amp;rsquo;m one of the Chaincode residents from the summer residency that they ran in New York this year and this afternoon I\u0026amp;rsquo;m going to be talking about Lightning. 
In this talk, I\u0026amp;rsquo;m gonna be walking you through the protocol layer by layer and having a look at the different components that make up the Lightning Network.\nI\u0026amp;rsquo;m sure a lot of you are pretty familiar with what the Lightning Network is. It is an off-chain scaling …"},{"uri":"/edgedevplusplus/2019/bitcoin-core-functional-test-framework/","title":"Bitcoin Core Functional Test Framework","content":"Slides: https://telaviv2019.bitcoinedge.org/files/test-framework-in-bitcoin-core.pdf\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nhttps://twitter.com/kanzure/status/1171357556519952385\nIntroduction I am pretty sure you can tell but I am not James (Chiang). I am taking over the functional testing framework talk from James. He has already given several great talks. I took over this talk at very short notice from James. I’d like to give a hands on talk.\nContent This is a brief …"},{"uri":"/edgedevplusplus/","title":"Bitcoin Edge Dev++","content":" Bitcoin Edge Dev\u0026amp;#43;\u0026amp;#43; 2017 Bitcoin Edge Dev\u0026amp;#43;\u0026amp;#43; 2018 Bitcoin Edge Dev\u0026amp;#43;\u0026amp;#43; 2019 "},{"uri":"/edgedevplusplus/2019/","title":"Bitcoin Edge Dev++ 2019","content":"https://telaviv2019.bitcoinedge.org/\nA walk through the layers of Lightning Sep 10, 2019 Carla Kirk-Cohen Lightning Acumulator Based Cryptography \u0026amp;amp; UTreexo Sep 09, 2019 Tadge Dryja Proof systems Utreexo Bitcoin Core Functional Test Framework Sep 10, 2019 Fabian Jahr Bitcoin core Developer tools Bitcoin Data Structures Sep 09, 2019 Jimmy Song Blockchain Design Patterns: Layers and scaling approaches Sep 10, 2019 Andrew Poelstra, David Vorick Taproot Scalability Build a Taproot Sep 09, 2019 …"},{"uri":"/edgedevplusplus/2019/blockchain-design-patterns/","title":"Blockchain Design Patterns: Layers and scaling approaches","content":"https://twitter.com/kanzure/status/1171400374336536576\nIntroduction Alright. Are we ready to get going? Thumbs up? Alright. Cool. I am Andrew and this is David. We\u0026amp;rsquo;re here to talk about blockchain design patterns: layers and scaling approaches. This will be a tour of a bunch of different scaling approaches in bitcoin in particular but it\u0026amp;rsquo;s probably applicable to other blockchains out there. Our talk is in 3 parts. We\u0026amp;rsquo;ll talk about some existing scaling tech and related things …"},{"uri":"/speakers/carla-kirk-cohen/","title":"Carla Kirk-Cohen","content":""},{"uri":"/edgedevplusplus/2019/bosminer/","title":"Challenges of developing bOSminer from scratch in Rust","content":"https://braiins-os.org/\nhttps://twitter.com/kanzure/status/1171331418716278785\nnotes from slides: https://docs.google.com/document/d/1ETKx8qfml2GOn_CBXhe9IZzjSv9VnXLGYfQb3nD3N4w/edit?usp=sharing\nIntroduction Good morning everyone. My task is to talk about the challenges we faced while we were implementing a replacement for the cgminer software. We\u0026amp;rsquo;re doing it in rust. Essentially, I would like to cover a little bit of the history and to give some credit to ck for his hard work.\ncgminer …"},{"uri":"/edgedevplusplus/2019/debugging-bitcoin/","title":"Debugging Bitcoin","content":"Slides: https://telaviv2019.bitcoinedge.org/files/debugging-tools-for-bitcoin-core.pdf\nDebugging Bitcoin Core: https://github.com/fjahr/debugging_bitcoin\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nhttps://twitter.com/kanzure/status/1171024515490562048\nIntroduction I am going to talk about debugging Bitcoin. 
Of course if you want to contribute to Bitcoin there are a lot of conceptual things that you have to understand in order to do that. Most of the talks here today and …"},{"uri":"/edgedevplusplus/2019/hardware-wallet-design-best-practices/","title":"Hardware Wallet Design Best Practices","content":"https://twitter.com/kanzure/status/1171322716303036417\nIntroduction We have 30 minutes. This was a talk by me and Jimmy but Jimmy talked for like 5 hours yesterday so I\u0026amp;rsquo;ll be talking today.\nImagine you want to start working in bitcoin development. There\u0026amp;rsquo;s a chance you will end up using a hardware wallet. In principle, it\u0026amp;rsquo;s not any different from other software development. There\u0026amp;rsquo;s still programming involved, like for firmware. Wallets have certain features and nuances …"},{"uri":"/speakers/james-hilliard/","title":"James Hilliard","content":""},{"uri":"/edgedevplusplus/2019/libbitcoin/","title":"Libbitcoin: A practical introduction","content":"or: libbitcoin\u0026amp;rsquo;s bx tool: Constructing a raw transaction\nhttps://twitter.com/kanzure/status/1171352496247365633\nhttps://github.com/libbitcoin/libbitcoin-explorer/wiki/Download-BX\nIntroduction I am going to talk about libbitcoin. It\u0026amp;rsquo;s an alternative implementation of Bitcoin Core. There\u0026amp;rsquo;s a few alternatives out there, like btcd and then bcoin. Libbitcoin is written in C++ and it\u0026amp;rsquo;s the oldest alternative implementation that is out there. I\u0026amp;rsquo;d like to show a few …"},{"uri":"/edgedevplusplus/2019/lightning-network-sphinx-and-onion-routing/","title":"Lightning Network Sphinx And Onion Routing","content":"Introduction Hi everyone. I am a developer of rust-lightning, it started in 2018 by BlueMatt. I started contributing about a year ago now. It\u0026amp;rsquo;s a full featured, flexible, spec-compliant lightning library. It targets exchanges, wallet vendors, hardware, meshnet devices, and you can join us on Freenode IRC in the #rust-bitcoin channel which is a really nice one.\nPrivacy matters Privacy matters on the blockchain. This is the reason why lightning is taking so much time. We want the privacy to …"},{"uri":"/edgedevplusplus/2019/lightning-network-topology/","title":"Lightning network topology, its creation and maintenance","content":"Introduction Alright. Give me a second to test this. Alright. Antoine has taken you through the routing layer. I\u0026amp;rsquo;m going to take you through what the lightning network looks like today. This is the current topology of the network and how this came about, and some approaches for maintaining the network and making the graph look like we want it to look.\nLightning brief overview There\u0026amp;rsquo;s ideally only two transactions involved in any lightning channel, the commitment transaction and the …"},{"uri":"/edgedevplusplus/2019/mining-firmware-security/","title":"Mining Firmware Security","content":"slides: https://docs.google.com/presentation/d/1apJRD1BwskElWP0Yb1C_tXmGYA_vkx9rjS8VTDW_Z3A/edit?usp=sharing\n"},{"uri":"/edgedevplusplus/2019/signet/","title":"Signet annd its uses for development","content":"https://twitter.com/kanzure/status/1171310731100381184\nhttps://explorer.bc-2.jp/\nIntroduction I was going to talk about signet yesterday but people had some delay downloading docker images. How many of you have signet right now? How many think you have signet right now? How many downloaded something yesterday? How many docker users? 
And how many people have compiled it themselves? Okay. I think we have like 10 people. The people that compiled it yourself, I think you\u0026amp;rsquo;re going to be able to …"},{"uri":"/edgedevplusplus/2019/statechains/","title":"Statechains","content":"Schnorr signatures, adaptor signatures and statechains\nhttps://twitter.com/kanzure/status/1171345418237685760\nIntroduction If you want to know the details of statechains, I recommend checking out my talk from Breaking Bitcoin 2019 Amsterdam. I\u0026amp;rsquo;ll give a quick recap of Schnorr signatures and adaptor signatures and then statechains. I think it\u0026amp;rsquo;s important to understand Schnorr signatures to the point where you\u0026amp;rsquo;re really comfortable with it. A lot of the cool stuff in bitcoin …"},{"uri":"/edgedevplusplus/2019/accumulators/","title":"Acumulator Based Cryptography \u0026 UTreexo","content":"https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-10-08-utxo-accumulators-and-utreexo/\nhttps://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2019/utreexo/\nhttps://diyhpl.us/wiki/transcripts/stanford-blockchain-conference/2019/accumulators/\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/accumulators/\nhttps://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-06-utreexo/\nNote: the present presentation had more in depth information about RSA which might be interesting …"},{"uri":"/edgedevplusplus/2019/bitcoin-data-structures/","title":"Bitcoin Data Structures","content":"https://twitter.com/kanzure/status/1170974373089632257\nIntroduction Alright guys. Come on in. There\u0026amp;rsquo;s plenty of seats, guys. If you\u0026amp;rsquo;re sitting out on the edge, be nice to the other folks and get in the middle. Sitting at the edge is kind of being a dick. You\u0026amp;rsquo;re not letting anyone through. Come on guys, you don\u0026amp;rsquo;t have USB-C yet? You all have powerbanks. Get with the times. Okay, we\u0026amp;rsquo;re getting started.\nI have 90 minutes to teach you the rest of my book. I think I did …"},{"uri":"/edgedevplusplus/2019/taproot/","title":"Build a Taproot","content":"https://docs.google.com/presentation/d/1YVOYJGmQ_mY-5_Rs0Cu84AYSI6wCvpfIgoapfgsYP2U/edit\nhttps://github.com/bitcoinops/taproot-workshop/tree/BitcoinEdge\nhttps://github.com/bitcoinops/bitcoin/releases/tag/v0.1\n"},{"uri":"/needs/","title":"Needs","content":""},{"uri":"/edgedevplusplus/2019/privacy-concepts/","title":"Privacy Concepts for Bitcoin application developers","content":"https://twitter.com/kanzure/status/1171036497044267008\nIntroduction You\u0026amp;rsquo;re not going to walk out of here as a privacy protocol developer. I am going to mention and talk about some ideas in some protocols that exist. What I find myself is that of really smart people working on a lot of pretty cool stuff that can make privacy easier and better to use. A lot of times, application developers or exchanges aren\u0026amp;rsquo;t even aware of those things and as a result the end result is that people just …"},{"uri":"/edgedevplusplus/2019/rebroadcasting/","title":"Rebroadcast logic in Core","content":"https://twitter.com/kanzure/status/1171042478088232960\nIntroduction Hi, my name is Amiti. Thank you for having me here today. I wanted to talk with you about rebroadcasting logic in Bitcoin Core. For some context, I\u0026amp;rsquo;ve been working on improving it this summer. I wanted to tell you all about it.\nWhat is rebroadcasting? We all know what a broadcast is. 
It\u0026amp;rsquo;s hwen we send an INV message out to our peers and we let them know about a new transaction. Sometimes we rebroadcast and send an …"},{"uri":"/edgedevplusplus/2019/2019-09-09-carla-kirk-cohen-routing-problems-and-solutions/","title":"Routing Problems and Solutions","content":"Topic: Routing Problems and Solutions\nLocation: Tel-Aviv 2019\nIntroduction I\u0026amp;rsquo;m going to be talking about routing in the Lightning Network now. So I\u0026amp;rsquo;m going to touch on how it\u0026amp;rsquo;s currently operating, some issues that you run into when you are routing, and then some potential expansions to the spec which are going to address some of these issues.\nWe ran through the Lightning Network graph: the nodes of vertices, the channels in this graph of edges. And at the moment, nodes need to …"},{"uri":"/chaincode-labs/chaincode-residency/2019-09-09-amiti-uttarwar-transaction-rebroadcast/","title":"Transaction Rebroadcast","content":"https://twitter.com/kanzure/status/1199710296199385088\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/rebroadcasting/\nIntroduction Hello. Thank you for joining me today. My name is Amiti and I’d like to tell you a bit about my Bitcoin journey.\nProfessional Background I graduated from Carnegie Mellon five years ago and ever since I’ve worked at a few different startups in San Francisco Bay Area. My adventures with blockchains began when I worked at Simbi. Simbi is …"},{"uri":"/needs/transcript/","title":"transcript","content":""},{"uri":"/speakers/udi-wertheimer/","title":"Udi Wertheimer","content":""},{"uri":"/edgedevplusplus/2019/wallet-architecture/","title":"Wallet Architecture in Bitcoin Core","content":"https://twitter.com/kanzure/status/1171018684816605185\nIntroduction Thanks Bryan for talking about HD wallets. I am going to be talking about wallet architecture in Bitcoin Core. Alright. Who am I? I work on Bitcoin Core. I\u0026amp;rsquo;ve been writing code on Bitcoin Core for about 3 years. Lately I have been working on the wallet. I work for Chaincode Labs which is why I get to work on Bitcoin Core. We are a small research lab in New York. There\u0026amp;rsquo;s also the residency program that we run. We just …"},{"uri":"/speakers/aviv-zohar/","title":"Aviv Zohar","content":""},{"uri":"/decentralized-financial-architecture-workshop/custody-group/","title":"Custody Working Group","content":"One of the problems in the ecosystem was nomenclature. We started working on a report about what features a nomenclature would have. Airgaps, constant-time software, sidechannel resistance, Faraday cage, deterministic software, entropy, blind signatures, proof-of-reserve, multi-vendor hardware to minimize compromise. Multi-location and multi-sig. Insurance guarantee scheme. Canaries, dead man switches, tripwires, heartbeat mechanisms. Shamir\u0026amp;rsquo;s secret sharing vs multisig. 
The second part of …"},{"uri":"/decentralized-financial-architecture-workshop/","title":"Decentralized Financial Architecture Workshop","content":"https://dfa2019.bitcoinedge.org/\nCompliance And Confidentiality Alexander Zaidelson Regulation Custody Working Group Sep 08, 2019 Regulation G20 Discussion Shin\u0026amp;#39;ichiro Matsuo Research Regulation Implications Yuta Takanashi Regulation Introduction to DFA Workshop Sep 08, 2019 Aviv Zohar Metadata Perspective Regulation "},{"uri":"/decentralized-financial-architecture-workshop/introduction/","title":"Introduction to DFA Workshop","content":"We are an international conference that focuses on the latest development in software and its scalability in the bitcoin ecosystem. We have come here to Israel. We are running 5 days of events, including this current workshop. Over the next 4 days following today, we\u0026amp;rsquo;re running Bitcoin Edge Dev++ workshop at Tel Aviv University for 2 days, and then Scaling Bitcoin for 2 days which is compromised of over 44 high tech sessions on complex scalability and security related topics in the bitcoin …"},{"uri":"/blockstream-webinars/2019-09-04-christian-decker-c-lightning-questions/","title":"C-Lightning Questions","content":"c-lightning Q and A session\nhttps://twitter.com/kanzure/status/1230527903969841152\nIntroduction Hello everyone. I’m Christian. I’m a software developer at Blockstream and I work on Lightning, both the specification and the implementation. Today we are going to have a shortish webinar on how to get started with Lightning.\nQ and A Q - Is it possible to run c-lightning and lnd on the same host in parallel?\nA - It absolutely is. We actually do that on a regular basis to test compatibility between …"},{"uri":"/chaincode-labs/chaincode-residency/2019-08-22-fabian-jahr-debugging/","title":"Debugging Bitcoin Core","content":"Slides: https://residency.chaincode.com/presentations/Debugging_Bitcoin.pdf\nRepo: https://gist.github.com/fjahr/2cd23ad743a2ddfd4eed957274beca0f\nhttps://twitter.com/kanzure/status/1165266077615632390\nIntroduction I’m talking about debugging Bitcoin and that means to me using loggers and debugging tools to work with Bitcoin and this is especially useful for somebody who is a beginner with Bitcoin development. Or even a beginner for C++ which I considered myself a couple of weeks ago actually. I’m …"},{"uri":"/austin-bitcoin-developers/2019-08-22-socratic-seminar-2/","title":"Socratic Seminar 2","content":"https://twitter.com/kanzure/status/1164710800910692353\nIntroduction Hello. The idea was to do a more socratic style meetup. This was popularized by Bitdevs NYC and spread to SF. We tried this a few months ago with Jay. The idea is we run through research news, newsletters, podcasters, talk about what happened in the technical bitcoin community. We\u0026amp;rsquo;re going to have different presenters.\nMike Schmidt is going to talk about some optech newsletters that he has been contributing to. Dhruv will …"},{"uri":"/chaincode-labs/chaincode-residency/2019-08-22-james-chiang-taproot-policy/","title":"Taproot Policy","content":"Slides: https://residency.chaincode.com/presentations/Taproot_Policy.pdf\nIntroduction Hi my name is James and I have been a resident here at Chaincode. It has been a privilege. I hear they might be doing it again in the future, I highly recommend the experience, it has been really fantastic. I’ve been working on a demo library in Python for Taproot and that’s why I chose to talk about Taproot and policy today. 
Seemingly two separate topics but actually there are some very interesting …"},{"uri":"/dallas-bitcoin-symposium/","title":"Dallas Bitcoin Symposium","content":" Bitcoin Developers Justin Moon Bitcoin Security Dhruv Bansal Security History And Extrapolation Intro Aug 16, 2019 Parker Lewis Q A Aug 16, 2019 Marty Bent, Michael Goldstein, Justin Moon, Dhruv Bansal, Gideon Powell, Tuur Demeester Sound Money Michael Goldstein Altcoins Texas Energy Market Gideon Powell Mining "},{"uri":"/speakers/dhruv-bansal/","title":"Dhruv Bansal","content":""},{"uri":"/speakers/gideon-powell/","title":"Gideon Powell","content":""},{"uri":"/dallas-bitcoin-symposium/intro/","title":"Intro","content":"https://twitter.com/kanzure/status/1162436437687623684\nThank you all for coming. We have some bitcoin contributors here, and some people from the investment community. The impetus for this event was a bitcoin conference this weekend, but also Gideon is speaking today. We went to a blockchain event a few months ago and it was our impression that they didn\u0026amp;rsquo;t do a good job of explaining bitcoin to investors. Since we\u0026amp;rsquo;re all up here in Dallas, we thought we would share some perspectives …"},{"uri":"/speakers/marty-bent/","title":"Marty Bent","content":""},{"uri":"/speakers/michael-goldstein/","title":"Michael Goldstein","content":""},{"uri":"/speakers/parker-lewis/","title":"Parker Lewis","content":""},{"uri":"/dallas-bitcoin-symposium/q-a/","title":"Q A","content":"Q\u0026amp;amp;A session\nhttps://twitter.com/kanzure/status/1162436437687623684\nMB: I have a podcast called Tales from the Crypt where I interview people working on and around bitcoin. I have spoken to many of these gentlemen on the podcast as well. I want to focus on, many of these presentations were focused on history and prehistory of bitcoin. For this Q\u0026amp;amp;A session, let\u0026amp;rsquo;s talk about the future of the bitcoin standard. Michael, I want you to talk about the future of the bitcoin standard on a …"},{"uri":"/chaincode-labs/chaincode-residency/2019-08-16-elichai-turkel-schnorr-signatures/","title":"Schnorr Signatures","content":"Slides: https://residency.chaincode.com/presentations/Schnorr_Signatures.pdf\nhttps://twitter.com/kanzure/status/1165618718677917698\nIntroduction I’m Elichai, I’m a resident here at Chaincode. Thanks to Chaincode for having us and having me. Most of you have probably heard about Taproot and Schnorr which are new technologies that we want to integrate into Bitcoin. Today I am going to explain what is Schnorr and why do we even want this?\nDigital Signatures Before that we need to define what are …"},{"uri":"/speakers/tuur-demeester/","title":"Tuur Demeester","content":""},{"uri":"/speakers/joe-netti/","title":"Joe Netti","content":""},{"uri":"/stephan-livera-podcast/2019-08-12-rusty-russell-joe-netti/","title":"Rusty Russell, Joe Netti","content":"Stephan Livera podcast with Rusty Russell and Joe Netti - August 12th 2019\nPodcast: https://stephanlivera.com/episode/98/\nStephan Livera:\tRusty and Joe, welcome to the show.\nJoe Netti:\tHi. How’s it going?\nRusty Russell:\tHey, Stephan. Good to be back.\nStephan Livera:\tThanks for rejoining me, Rusty, and thanks for joining me, Joe. So, just a quick intro just for the listeners. Joe, do you want to start?\nJoe Netti:\tYeah, sure. 
So, I got into Bitcoin in around 2013, and took me a while to learn what …"},{"uri":"/stephan-livera-podcast/2019-08-08-michael-flaxman/","title":"Every Bitcoin Hardware Wallet Sucks","content":"Topic: Every Bitcoin Hardware Wallet Sucks\nLocation: Stephan Livera Podcast\nDate: August 8th 2019\nTranscript completed by: Stephan Livera Edited by: Michael Folkson\nIntro Stephan Livera (SL): Michael, welcome to the show.\nMichael Flaxman (MF): Hi. It’s good to be here. Thank you for having me.\nSL: Michael, I think you are one of these really underrated or under followed guys. I think a lot of the hardcore Bitcoiners know you but there’s a lot of people who don’t know you. Do you want to give an …"},{"uri":"/misc/2019-08-07-jonathan-metzman-structured-fuzzing/","title":"Going Beyond Coverage-Guided Fuzzing with Structured Fuzzing","content":"Location: Black Hat USA 2019\nBlackhat: https://www.blackhat.com/us-19/briefings/schedule/#going-beyond-coverage-guided-fuzzing-with-structured-fuzzing-16110\nSlides: https://i.blackhat.com/USA-19/Wednesday/us-19-Metzman-Going-Beyond-Coverage-Guided-Fuzzing-With-Structured-Fuzzing.pdf\nTranscript completed by: Michael Folkson\nIntro Hi everyone. Thanks for coming to my talk. As I was introduced I’m Jonathan Metzman. I’m here to talk about how you can get more bugs with coverage guided fuzzing by …"},{"uri":"/speakers/jonathan-metzman/","title":"Jonathan Metzman","content":""},{"uri":"/blockstream-webinars/2019-07-31-rusty-russell-getting-started-with-c-lightning/","title":"Getting Started With C-Lightning","content":"Getting started with c-lightning\nhttps://twitter.com/kanzure/status/1231946205380403200\nIntroduction Hi everyone. We’re actually a couple of minutes early and I think we are going to give a couple of minutes past before we actually start because I think some people will probably be running late particularly with the change in times we had to make after this was announced. While we are waiting it would be interesting to find out a little bit about the attendees’ backgrounds and what they hope to …"},{"uri":"/stephan-livera-podcast/2019-07-31-roy-sheinfeld-lightning-network-services/","title":"Lightning Network Services for the Masses","content":"podcast: https://stephanlivera.com/episode/94/\nStephan Livera: Roy, welcome to the show.\nRoy Sheinfeld: Hey Stephan, great to be here.\nStephan Livera: So Roy, I’ve seen you’ve been writing some interesting posts on Medium, and obviously you’re the CEO and founder of Breez Technology, so can you just give us a little bit of a background on yourself, on you and also a little bit of your story on Bitcoin.\nRoy Sheinfeld: Sure, sure, I’d be happy to. 
So I am a software engineer by training, I’ve been …"},{"uri":"/speakers/roy-sheinfeld/","title":"Roy Sheinfeld","content":""},{"uri":"/stephan-livera-podcast/2019-07-31-roy-sheinfeld-stephan-livera/","title":"Roy Sheinfeld - Stephan Livera","content":"Stephan Livera podcast with Roy Sheinfeld - July 31st 2019\nPodcast: https://stephanlivera.com/episode/94/\nStephan Livera:\tRoy, welcome to the show.\nRoy Sheinfeld:\tHey Stephan, great to be here.\nStephan Livera:\tSo Roy, I’ve seen you’ve been writing some interesting posts on Medium, and obviously you’re the CEO and founder of Breez Technology, so can you just give us a little bit of a background on yourself, on you and also a little bit of your story on Bitcoin.\nRoy Sheinfeld:\tSure, sure, I’d be …"},{"uri":"/speakers/britt-kelly/","title":"Britt Kelly","content":""},{"uri":"/stephan-livera-podcast/2019-07-25-britt-kelly-btcpayserver-documentation/","title":"BTCPayServer documentation, translation \u0026 Newbie tips","content":"podcast: https://stephanlivera.com/episode/92/\nStephan Livera: Hi and welcome to the Stephan Livera Podcast focused on bitcoin and Austrian economics. Today we are closing out the BTCPayServer series with Britt Kelly. But first let me introduce the sponsors of the podcast. So firstly, checkout Kraken. Over my years in bitcoin I’ve been really impressed with the way they operate. They have a really, really strong focus on security and they have consistently acted ethically in the space. They’re …"},{"uri":"/tags/submarine-swaps/","title":"Submarine swaps","content":""},{"uri":"/london-bitcoin-devs/2019-07-03-alex-bosworth-submarine-swaps/","title":"Submarine Swaps","content":"London Bitcoin Devs\nSubmarine Swaps and Loop\nSlides: https://www.dropbox.com/s/cyh97jv81hrz8tf/alex-bosworth-submarine-swaps-loop.pdf?dl=0\nhttps://twitter.com/kanzure/status/1151158631527849985\nIntro Thanks for inviting me. I’m going to talk about submarine swaps and Lightning Loop which is something that I work on at Lightning Labs.\nBitcoin - One Currency, Multiple Settlement Networks Something that is pretty important to understand about Lightning is that it is a flow network so once you set …"},{"uri":"/austin-bitcoin-developers/2019-06-29-hardware-wallets/","title":"Hardware Wallets","content":"https://twitter.com/kanzure/status/1145019634547978240\nsee also:\nExtracting seeds from hardware wallets The future of hardware wallets coredev.tech 2019 hardware wallets discussion Background A bit more than a year ago, I went through Jimmy Song\u0026amp;rsquo;s Programming Blockchain class. That\u0026amp;rsquo;s where I met M where he was the teaching assistant. Basically, you write a python bitcoin library from scratch. The API for this library and the classes and fnuctions that Jimmy uses is very easy to read …"},{"uri":"/tags/channel-factory/","title":"channel factory","content":""},{"uri":"/tags/multiparty-channel/","title":"multiparty channel","content":""},{"uri":"/chaincode-labs/chaincode-residency/2019-06-28-christian-decker-multiparty-channels/","title":"Multiparty Channels (Lightning Network)","content":"Location: Chaincode Labs Lightning Residency 2019\nSlides: https://residency.chaincode.com/presentations/lightning/Multiparty_Channels.pdf\nSymmetric Update Protocols eltoo Update Mechanism Two days ago, we’ve seen the symmetric update mechanism called eltoo, and having a symmetric update mechanism has some really nice properties that enable some really cool stuff. Just to remind everybody, this is what eltoo looks like. 
That’s the one you\u0026amp;rsquo;ve already seen. It’s not the only symmetric update …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-28-christian-decker-rendezvous-routing/","title":"Rendezvous Routing","content":"Rendezvous Routing (Lightning Network)\nLocation: Chaincode Residency 2019\nSlides: https://residency.chaincode.com/presentations/lightning/Rendezvous_Routing.pdf\nTranscript by: Caralie Chrisco and Davius Parvin\nIntroduction Okay, the second part is rendezvous routing. I basically already gave away the trick but I hope you all forgot. So what is rendezvous routing? Anybody have a good explanation of what we are trying to do here? So we have someplace where we meet in the middle, and I don\u0026amp;rsquo;t …"},{"uri":"/speakers/fabrice-drouin/","title":"Fabrice Drouin","content":""},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-fabrice-drouin-limitations-of-lightweight-clients/","title":"Limitations of Lightweight Clients","content":"Location: Chaincode Labs Lightning Residency 2019\nIntroduction The limitations it\u0026amp;rsquo;s uh you have to fight when you want to build lightweight clients mobile clients so this again, this is how mobile clients work, you scan payment requests, you have a view of the network that is supposed to be good enough, you compute the routes, you get an HTLC, you get a preimage back you\u0026amp;rsquo;ve paid.\nSo a Lightning node is a Bitcoin node, as in, you have to be able to create, sign, and send Bitcoin …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-rene-pickhardt-path-finding-lightning-network/","title":"Path Finding in the Lightning Network","content":"Location: Chaincode Residency 2019\nIntroduction Today I\u0026amp;rsquo;m going to talk a little bit about path finding on the Lightning Network and some part of it will be about the gossip protocol which is like a little bit of a review of what the gossip messages are going to look like. So it\u0026amp;rsquo;s a little bit basic again. But other than that, I\u0026amp;rsquo;ve been told that you seek more for like opinionated talks and recent ideas instead of, like, how the bolts work because you already know all of that. …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-rene-pickhardt-splicing/","title":"Splicing","content":"Location: Chaincode Labs Lightning Residency 2019\nTranscript by: Caralie Chrisco\nIntroduction So splicing basically means you have a payment channel, you have a certain capacity of a payment channel and you want to change the capacity on the payment channel. But you don\u0026amp;rsquo;t want to change your balance right? For balance change,(inaudible) we\u0026amp;rsquo;ll get some money on it. But splicing really means you want to change the capacity.\nI mean the obvious way of doing this is close the channel, …"},{"uri":"/chaincode-labs/chaincode-residency/2019-06-26-rene-pickhardt-update-layer/","title":"The Update Layer","content":"Location: Chaincode Labs Lightning Residency 2019\nIntroduction Maybe we had a wrong impression, you can ask, like in Christian’s talk, as many questions as you want while the presentation is there. So, it\u0026amp;rsquo;s a small disclaimer just saying: these slides are part of a much larger slide set which is open-source and you find notes and, I think, we share slides in the Google Docs anyway.\nThe outline of the talk is, I’m going to talk about the construction of payment channels in Bitcoin. 
Thank you very much. Thank you Pedro, that was a great segue into what I\u0026amp;rsquo;m talking about. He has been doing work on formalizing multi-hop locks. I want to also talk about what changes might be necessary to deploy this on the lightning network.\nHistory For what …"},{"uri":"/scalingbitcoin/tokyo-2018/multi-hop-locks/","title":"Multi-Hop Locks for Secure, Privacy-Preserving and Interoperable Payment-Channel Networks","content":"Giulio Malavolta (Friedrich-Alexander-University Erlangen-Nuernberg), Pedro Moreno-Sanchez (Purdue University), Clara Schneidewind (Vienna University of Technology), Aniket Kate (Purdue University) and Matteo Maffei (Vienna University of Technology)\nhttps://eprint.iacr.org/2018/472.pdf\nhttps://twitter.com/kanzure/status/1048476273259900928\nThis is joint work with my colleagues. I promise I will talk slower than the last talk. In a multi-hop network, the security tool is a hash-time lock contract …"},{"uri":"/speakers/mustafa-al-bassam/","title":"Mustafa Al-Bassam","content":""},{"uri":"/tags/multisignature/","title":"Scriptless multisignatures","content":""},{"uri":"/edgedevplusplus/2018/abstract-thinking-about-consensus-systems/","title":"Abstract Thinking About Consensus Systems","content":"https://twitter.com/kanzure/status/1048039485550690304\nslides: https://drive.google.com/file/d/1LiGzgFXMI2zq8o9skErLcO3ET2YeQbyx/view?usp=sharing\nSerialization Blocks are usually thought of as a serialized data stream. But really these are only true on the network serialization or over the network. A different implementation of bitcoin could in theory use a different format. The format is only used on the network and the disk, not the consensus protocol. The format could actually be completely …"},{"uri":"/edgedevplusplus/2018/","title":"Bitcoin Edge Dev++ 2018","content":"https://keio-devplusplus-2018.bitcoinedge.org/\nAbstract Thinking About Consensus Systems Oct 05, 2018 Luke Dashjr Consensus enforcement Bitcoin Toolchain, Unit Testing And Deterministic Builds Oct 05, 2018 Marco Falke Bitcoin core Build system Reproducible builds Blind Signatures Oct 04, 2018 Ethan Heilman Cryptography Block Structure And Headers Utxos Merkle Trees Segwit Bip141 Oct 04, 2018 Akio Nakamura Bulletproofs Oct 04, 2018 Kalle Alm Cryptography Coin Selection Oct 04, 2018 Kalle Alm Coin …"},{"uri":"/edgedevplusplus/2018/bitcoin-toolchain-unit-testing-and-deterministic-builds/","title":"Bitcoin Toolchain, Unit Testing And Deterministic Builds","content":"https://twitter.com/kanzure/status/1048103693885759489\nIntroduction Just to continue on what James said about the build system on Bitcoin Core\u0026amp;hellip; I am going to talk about deterministic builds. I am MarcoFalke and I also work at Chaincode Labs in NYC.\nBitcoin Core Build System The build system is based on autotools, so it should just work anywhere where autotools runs. Just run ./autogen.sh ./configure and then make, that\u0026amp;rsquo;s it.\nWe recently added support for MSVC builds mostly for …"},{"uri":"/edgedevplusplus/2018/cross-chain-swaps/","title":"Cross-Chain Swaps: Atomically Swapping Coins for Privacy or Cross-Blockchain Trades","content":"https://twitter.com/kanzure/status/1048017311431413760\nIntroduction We are Ethan Heilman and Nicolas Dorier. Ethan is from Boston University. Nicolas is from DG Lab and working on NBitcoin. Today we\u0026amp;rsquo;re going to be talking with you about atomic swaps for privacy and for cross-blockchain swaps. When we say atomic swaps, what do we mean? 
At a very high level, the idea is that it enables Alice and Bob to trade some cryptocurrency. But they shouldn\u0026amp;rsquo;t be able to cheat each other. The good …"},{"uri":"/edgedevplusplus/2018/sidechains-and-federation-models/","title":"Federated Chains, Sidechains","content":"https://twitter.com/kanzure/status/1048008467871666178\nIntroduction The name \u0026amp;ldquo;sidechain\u0026amp;rdquo; is overloaded. My favorite private sidechain is my coach. That\u0026amp;rsquo;s my private sidechain. Hopefully I can explain the high-level concepts of a sidechain, at a 10,000 foot view. What are these? What do they give you?\nSidechain components First you need a blockchain. It has to be something that builds a consensus history. The state of the ledger has to be recorded, or the account model, or …"},{"uri":"/edgedevplusplus/2018/reorgs/","title":"Handling Reorgs \u0026 Forks","content":"https://twitter.com/kanzure/status/1052344700554960896\nIntroduction Good morning, my name is Bryan. I\u0026amp;rsquo;m going to be talking about reorganizations and forks in the bitcoin blockchain. First, I want to define what a reorganization is and what a fork is.\nDefinitions You might have heard that the bitcoin blockchain is a permanent data structure that records everything from all time and that is actually somewhat false. Instead, actually the bitcoin blockchain can be modified and changed, and …"},{"uri":"/edgedevplusplus/2018/python-bitcoinlib/","title":"Interfacing with Python via python-bitcoinlib","content":"python-bitcoinlib repo: https://github.com/petertodd/python-bitcoinlib\nBitcoin Edge schedule: https://keio-devplusplus-2018.bitcoinedge.org/#schedule\nTwitter announcement: https://twitter.com/kanzure/status/1052927707888189442\nTranscript completed by: Bryan Bishop Edited by: Michael Folkson\nIntro I will be talking about python-bitcoinlib.\nwhoami First I am going to start with an introduction slide. This is because I keep forgetting who I am so I have to write it down in all of my presentations. …"},{"uri":"/edgedevplusplus/2018/lightning-network/","title":"Lightning Overview, Channel Factories, Discreet Log Contracts","content":"https://twitter.com/kanzure/status/1048039589837852672\nIntroduction I have quite a while to speak. I hope it\u0026amp;rsquo;s not too boring. I\u0026amp;rsquo;ll have an intermission half way through. It\u0026amp;rsquo;s not going to be two hours. It will be more like 80 minutes or something. We are going to talk about payment channels, unidirectional payment channels, lightning channels, homomorphic keys, hash trees, watchtowers, discreet log contracts, oracles, anticipated signatures, DLCS within channels, etc.\nI am …"},{"uri":"/speakers/marco-falke/","title":"Marco Falke","content":""},{"uri":"/edgedevplusplus/2018/overview-bitcoin-core-architecture/","title":"Overview Bitcoin Core Architecture","content":"https://twitter.com/kanzure/status/1048098034234445824\nslides: http://jameso.be/dev++2018/\nIntroduction Alright guys, how are you guys doing? You guys look tired. Given your brains are probably fried, I am delighted to tell you that this talk will be pretty high level. I am James O\u0026amp;rsquo;Beirne, I work at Chaincode Labs. I\u0026amp;rsquo;ve been working on Bitcoin Core since 2015. I am in New York most of the time.\nAgenda Today we are going to talk about Bitcoin Core. This will be pretty high-level. 
…"},{"uri":"/edgedevplusplus/2018/protecting-yourself-and-your-business/","title":"Protecting Yourself And Your Business","content":"https://twitter.com/kanzure/status/1048087226742071296\nIntroduction Hello. My name is Warren Togami and I will be talking about exchange security. Not only protecting the people but also the business. This has been a topic of interest recently.\nWarren\u0026amp;rsquo;s security background I have some credibility when it comes to security due to my previous career in open-source software. I have been working on the Linux operating system first with Fedora then with Red Hat for many years. Related to …"},{"uri":"/speakers/warren-togami/","title":"Warren Togami","content":""},{"uri":"/speakers/akio-nakamura/","title":"Akio Nakamura","content":""},{"uri":"/speakers/anton-yemelyanov/","title":"Anton Yemelyanov","content":""},{"uri":"/edgedevplusplus/2018/blind-signatures/","title":"Blind Signatures","content":"https://twitter.com/kanzure/status/1047648050234060800\nSee also http://diyhpl.us/wiki/transcripts/building-on-bitcoin/2018/blind-signatures-and-scriptless-scripts/\nIntroduction Hi everyone. I work at Commonwealth Crypto. Today I am going to be talking to you about blind signatures. I\u0026amp;rsquo;d like to encourage people to ask questions. Please think of questions. I\u0026amp;rsquo;ll give time and pause for people to ask questions.\nWhat are blind signatures? A very informal definition of blind signatures is …"},{"uri":"/edgedevplusplus/2018/block-structure-and-headers-utxos-merkle-trees-segwit-bip141/","title":"Block Structure And Headers Utxos Merkle Trees Segwit Bip141","content":"Block structure \u0026amp;amp; headers, UTXO, Merkle Trees, Address, Proof-of-Work \u0026amp;amp; Difficulty, SegWit (BIP141)\nThis presentation was spoken in Japanese. I don\u0026amp;rsquo;t speak Japanese, but I made a live translation as I listened anyway: https://docs.google.com/document/d/1ngqP9_Ep4x7iwTqF6Vvqm1Ob97A_a1irXMUc2ZHsEt4/edit\n"},{"uri":"/edgedevplusplus/2018/bulletproofs/","title":"Bulletproofs","content":"Bulletproofs https://twitter.com/kanzure/status/1047740138824945664\nIntroduction Is there anyone here who doesn\u0026amp;rsquo;t know what bulletproofs are? I will not get to all of my slides today. I\u0026amp;rsquo;ll take a top-down approach about bulletproofs. I\u0026amp;rsquo;ll talk about what they are, how they work, and then go into the rabbit hole and go from there. It will start dark and then get brighter as we go. As you leave here, you may not know what bulletproofs are. But you\u0026amp;rsquo;re going to have the tools …"},{"uri":"/edgedevplusplus/2018/coin-selection/","title":"Coin Selection","content":"https://twitter.com/kanzure/status/1047708247333859328\nIntroduction I am going to do a short talk on coin selection. Everyone calls me kalle. In Japanese, my name is kalle. Coin selection, what it is, how it works, and interestingly enough- in the latest version of Bitcoin Core released yesterday, they have changed the coin selection algorithm for the first time since a long time. They are now using something with more scientific rigor behind it (branch-and-bound coin selection), which is good …"},{"uri":"/edgedevplusplus/2018/digital-signatures/","title":"Finite fields, Elliptic curves, ECDSA, Schnorr","content":"https://twitter.com/kanzure/status/1047634593619181568\nIntroduction Thank you to Anton and to everyone at Keio University and Digital Garage. 
I have an hour to talk about digital signatures, finite fields, ECDSA, Schnorr signatures, there\u0026amp;rsquo;s a lot of ground to cover. I\u0026amp;rsquo;m going to move quickly and start from a high level.\nMy name is John. I live in New York. I work at a company called Chaincode Labs. Most of the time, I contribute to Bitcoin Core.\nI am going to talk about the concept …"},{"uri":"/edgedevplusplus/2018/hierarchical-deterministic-wallets/","title":"Hierarchical Deterministic Wallets","content":"https://twitter.com/kanzure/status/1047714436889235456\nhttps://teachbitcoin.io/presentations/wallets.html\nIntroduction My name is James Chiang. Quick introduction about myself. My contributions to bitcoin have been on the project libbitcoin where I do documentation. I\u0026amp;rsquo;ve been working through the APIs and libraries. I think it\u0026amp;rsquo;s a great toolkit. Eric Voskuil is speaking later today and he also works on libbitcoin. I also volunteered to talk about hierarchical deterministic wallets. …"},{"uri":"/edgedevplusplus/2018/introduction/","title":"Introduction to Bitcoin Edge Dev++ and Bc2","content":"We will be using two screens. The english will be on the front screen and the japanese will be on the other screen to the side here. We will also be sharing a google docs file for each one in slack. We should help each other and teach better and learn these things. Without further adue, here\u0026amp;rsquo;s Anton from Bitcoin Edge.\nHi everybody. Welcome to Dev++. It started last year at Stanford 2017 where we trained 100 people. Our goal is that we run Scaling Bitcoin events and we get the development …"},{"uri":"/edgedevplusplus/2018/p2pkh-p2wpkh-p2h-p2wsh/","title":"P2PKH, P2WPKH, P2SH, P2WSH","content":"https://twitter.com/kanzure/status/1047697572444270592\nIntroduction This is going to be a bit of a re-tread I think. I am going to be talking about common bitcoin script templates used in bitcoin today.\nAddresses do not exist on the blockchain. But scripts do. You\u0026amp;rsquo;ve heard about p2pk, p2pkh, p2sh, and others. I\u0026amp;rsquo;m going to go over these anyway.\nPay-to-pubkey (p2pk) This was the first type. It had no address format actually. Nodes connected to each other over IP address, with no …"},{"uri":"/edgedevplusplus/2018/partially-signed-bitcoin-transactions-bip174/","title":"Partially Signed Bitcoin Transactions (BIP174)","content":"https://twitter.com/kanzure/status/1047730297242935296\nIntroduction Bryan briefly mentioned partially-signed bitcoin transactions (PSBT). I can give more depth on this. There\u0026amp;rsquo;s history and motivation, a background of why this is important, what these things are doing. What is the software doing?\nHistory and motivation Historically, there was no standard for this. Armory had its own format. Bitcoin Core had network-serialized transactions. This ensures fragmentation and it\u0026amp;rsquo;s …"},{"uri":"/edgedevplusplus/2018/scripts-general-and-simple/","title":"Scripts (general \u0026 simple)","content":"https://twitter.com/kanzure/status/1047679223115083777\nIntroduction I am going to talk about why we have scripts in bitcoin. I\u0026amp;rsquo;ll give some examples and the design philosophy. I am not going to talk about the semantics of bitcoin script, though. Why do we have bitcoin script? I\u0026amp;rsquo;ll show how to lock and unlock coins. I\u0026amp;rsquo;ll talk about pay-to-pubkey, multisig, and computing vs verification in the blockchain.\nWhy have script at all? 
In my first talk, I talked about digital signatures …"},{"uri":"/edgedevplusplus/2018/sighash-noinput/","title":"SIGHASH_NOINPUT (BIP118)","content":"SIGHASH_NOINPUT (BIP118)\nhttps://twitter.com/kanzure/status/1049510702384173057\nHi, my name is Bryan, I\u0026amp;rsquo;m going to be talking about SIGHASH NOINPUT. It was something that I was asked to speak about. It is currently not deployed, but it is an active proposal.\nSo, just really brief about who I am. I have a software development background. I just left LedgerX last week, so that is an exciting change in my life. Also, I contribute to bitcoin core, usually as code review.\nIn order to talk about …"},{"uri":"/edgedevplusplus/2018/taproot-and-graftroot/","title":"Taproot and Graftroot","content":"https://twitter.com/kanzure/status/1047764770265284608\nIntroduction Taproot and graftroot are recent proposals in the bitcoin world. Every script type looks different and they are distinct. From a privacy perspective, that\u0026amp;rsquo;s pretty bad. You watermark or fingerprint yourself all the time by showing unique scripts on the blockchain or in your transactions. This causes censorship risks and other problems for fungibility. Often you are paying for contigencies that are never used, like the …"},{"uri":"/edgedevplusplus/2018/wallet-security/","title":"Wallet Security, Key Management \u0026 Hardware Security Modules (HSMs)","content":"https://twitter.com/kanzure/status/1049813559750746112\nIntro Alright, I’m now going to talk about bitcoin wallet security. And I was asked to talk about key management and hardware security modules and a bunch of other topics all in one talk. This will be a bit broader than some of the other talks because this is an important subject about how do you actually store bitcoin and then some of the developments around the actual storage of bitcoin in a secure way. Some have been integrated into …"},{"uri":"/noded-podcast/jnewbery-cve-2018-17144-bug/","title":"CVE-2018-17144 Bug","content":"Noded podcast September 26th 2018\nIntros\nPierre: This is an issue that affects our audience directly, right. This is the Noded podcast. 
It is for people who run Bitcoin full nodes and we had what is called a critical vulnerability exposure - CVE drop on Monday and there was an issue with our node software - specific versions and we can get into that.\nbitstein: And in a bombshell Tweet John Newbery said that he was responsible for the CVE.\nPierre: So normally when you do something really awful to …"},{"uri":"/baltic-honeybadger/2018/","title":"Baltic Honeybadger 2018","content":"http://web.archive.org/web/20180825023519/https://bh2018.hodlhodl.com/\nBeyond Bitcoin Decentralized Collaboration Yurii Rashkovskii Bitcoin As A Novel Market Institution Nic Carter Bitcoin Custody Sep 23, 2018 Bryan Bishop Regulation Bitcoin Maximalism Dissected Giacomo Zucco Bitcoin Payment Processing And Merchants Sergej Kotliar, Alena Vranova, Vortex Current State Of The Market And Institutional Investors Tone Vays, Bruce Fenton Day 1 Closing Panel Elizabeth Stark, Peter Todd, Jameson Lopp, …"},{"uri":"/greg-maxwell/2018-09-23-greg-maxwell-bitcoin-core-testing/","title":"Bitcoin Core Testing","content":"I believe slower would potentially result in less testing and not likely result in more at this point.\nIf we had an issue that newly introduced features were turning out to frequently have serious bugs that are discovered shortly after shipping there might be a case that it would improve the situation to delay improvements more before putting them into critical operation\u0026amp;hellip; but I think we\u0026amp;rsquo;ve been relatively free of such issues. The kind of issues that just will be found with a bit …"},{"uri":"/baltic-honeybadger/2018/bitcoin-custody/","title":"Bitcoin Custody","content":"https://twitter.com/kanzure/status/1048014038179823617\nStart time: 6:09:50\nMy name is Bryan Bishop, I’m going to be talking about bitcoin custody. Here is my PGP fingerprint, we should be doing that. So who am I, I have a software development background, I don’t just type transcripts. I’m actually no longer at LedgerX as of Friday (Sept 21 2018) when I came here. That is the end of 4 years, so you are looking at a free man. [Applause] Thank you.\nSo what is custody? A few of these slides will be …"},{"uri":"/greg-maxwell/2018-09-23-greg-maxwell-multiple-implementations/","title":"Multiple Implementations","content":"Location: Bitcointalk\nhttps://bitcointalk.org/index.php?topic=5035144.msg46077622#msg46077622\nRisks of multiple implementations They would create more risk. I don\u0026amp;rsquo;t think there is any reason to doubt that this is an objective fact which has been borne out by the history.\nFirst, failures in software are not independent. For example, when BU nodes were crashing due to xthin bugs, classic were also vulnerable to effectively the same bug even though their code was different and some triggers …"},{"uri":"/london-bitcoin-devs/2018-09-19-sjors-provoost-core-hardware-wallet/","title":"Bitcoin Core and hardware wallets","content":"Topic: Using Bitcoin Core with hardware wallets\nLocation: London Bitcoin Devs\nSlides: https://github.com/Sjors/presentations/blob/master/2018-09-19%20London%20Bitcoin%20Devs/2018-09%20London%20Bitcoin%20Devs%200.5.pdf\nCore, HWI docs: https://hwi.readthedocs.io/en/latest/examples/bitcoin-core-usage.html\nIntroduction I am Sjors, I am going to show you how to use, you probably shouldn’t try this at home, the Bitcoin Core wallet directly with a hardware wallet. 
Most of that work is done by Andrew …"},{"uri":"/chaincode-labs/chaincode-residency/2018-09-18-alex-bosworth-incentive-problems-in-the-lightning-network/","title":"Incentive Problems in the Lightning Network","content":"Incentive Problems in the Lightning Network I\u0026amp;rsquo;m going to talk about high-level incentive problems in the Lighting Network.\nMechanisms The first thing to think about is that we have two different mechanisms for making Bitcoin and lightning work. One mechanism is math — If you do a SHA 256 of something you don\u0026amp;rsquo;t have to worry about incentives, whether that\u0026amp;rsquo;s going to match to the expected result. There\u0026amp;rsquo;s no incentive there.\nSo the problem is we can\u0026amp;rsquo;t build a system …"},{"uri":"/andreas-antonopoulos/2018-08-30-andreas-antonopoulos-home-network-security/","title":"Full Node and Home Network Security","content":"Does running a Bitcoin full node attract hackers to my home network? Q - Does running a Bitcoin full node and/or a Lightning node at home attract hackers to my IP address and home network? Also could it reveal ownership of Bitcoin and attract physical attacks? Are preconfigured full node starter kits safe to use for non-technical people or is this completely against the point of running a full node and thus non-technical people should abandon the idea?\nA - Running a Bitcoin full node on your …"},{"uri":"/austin-bitcoin-developers/2018-08-17-richard-bondi-bitcoin-cli-regtest/","title":"Bitcoin CLI and Regtest","content":"Clone this repo to follow along: https://github.com/austin-bitcoin-developers/regtest-dev-environment\nhttps://twitter.com/kanzure/status/1161266116293009408\nIntro So the goal here as Justin said is to get the regtest environment set up. The advantages he mentioned, there is also the advantage that you can mine your own coins at will so you don’t have to mess around with testnet faucets. You can generate blocks as well so you don’t have to wait for six confirmations or whatever or even the ten …"},{"uri":"/speakers/richard-bondi/","title":"Richard Bondi","content":""},{"uri":"/speakers/jim-posen/","title":"Jim Posen","content":""},{"uri":"/misc/2018-07-24-la-blockchain-jim-posen-lightning-bolt-by-bolt/","title":"Lightning Network BOLT by BOLT","content":"Topic: Lightning Network BOLT by BOLT\nLocation: LA Blockchain Meetup\nTranscript by: glozow\nIntroduction I\u0026amp;rsquo;m Jim from the protocol team at Coinbase. We work on open source stuff in the crypto space. We just contribute to projects like Bitcoin core. The reason I\u0026amp;rsquo;m talking about this is I\u0026amp;rsquo;ve been contributing on and off to one of the major lightning implementations lnd for the past eight months or so.\nThis is gonna be a fairly technical talk. We\u0026amp;rsquo;re going to break down the …"},{"uri":"/london-bitcoin-devs/2018-07-23-john-light-bitcoin-full-nodes/","title":"Bitcoin Full Nodes","content":"John Light blog post on soft forks and hard forks: https://medium.com/@lightcoin/the-differences-between-a-hard-fork-a-soft-fork-and-a-chain-split-and-what-they-mean-for-the-769273f358c9\nIntro My presentation today is going to be about Bitcoin full nodes. Everything you wanted to know and more. 
There should perhaps be an asterisk because I’m not going to say literally everything but if anyone in the audience has any questions that want to go really deep down the rabbit hole then we’ll see just …"},{"uri":"/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/","title":"Taproot, Schnorr signatures, and SIGHASH_NOINPUT","content":"https://twitter.com/kanzure/status/1021880538020368385\nslides: https://prezi.com/view/YkJwE7LYJzAzJw9g1bWV/\nIntroduction I am Pieter Wuille. I will just dive in. Today I want to talk about improvements to the bitcoin scripting language. There is feedback in the microphone. Okay.\nI will mostly be talking about improvements to the bitcoin scripting language. This is by no means an exhaustive list of all the things that are going on. It\u0026amp;rsquo;s more my personal focus in the things that I am …"},{"uri":"/building-on-bitcoin/2018/lightning-routing-ants-pheromones/","title":"Ant routing for the Lightning Network","content":"Decentralized anonymous routing algorithm for LN\nhttps://twitter.com/kanzure/status/1014530761398145025\nhttps://twitter.com/rperezmarco/status/1013938984718884866\nIntroduction I wanted to present some ideas for alternative ideas for routing on LN. This was published on arxiv: 1807.00151 (July 2018).\nhttp://webusers.imj-prg.fr/~ricardo.perez-marco\nOverview I am going to talk about what I understand to be decentralized network. Fabrice has already explained most of what is LN about and some …"},{"uri":"/building-on-bitcoin/2018/bitcoin-assets/","title":"Assets on Bitcoin","content":"https://twitter.com/kanzure/status/1014483345026289664\nGood morning everybody. In the mean time, while my presentation loads, I am Giacomo Zucco. I am the founder of BHB network and a consulting company. We do bitcoin consulting for institutional customers especially in Switzerland. We also like development on top of bitcoin and things like that.\nMy talk today is about assets on bitcoin. The subtitlte is \u0026amp;ldquo;Yes, ok, financial sovereignity and cryptoanarchy are cool, but what about those …"},{"uri":"/building-on-bitcoin/2018/bootstrapping-lightning-network/","title":"Bootstrapping LN: What have we learned?","content":"acinq\nhttps://twitter.com/kanzure/status/1014531050503065601\nMy name is Fabrice Drouin. I work with ACINQ. We are a comapny working on printing LN. I am going to talk about what happens, what was the process of bootstrapping and developing lightning and what happened on mainnet.\nTimeline Who doesn\u0026amp;rsquo;t know what lightning is? And who has used lightning? Okay, almost everyone. It\u0026amp;rsquo;s an evolution on the idea of payment channels. I think it\u0026amp;rsquo;s from 2011 actually. The first big step for …"},{"uri":"/building-on-bitcoin/","title":"Building On Bitcoin","content":" Building On Bitcoin 2018 "},{"uri":"/building-on-bitcoin/2018/","title":"Building On Bitcoin 2018","content":" Anonymous Bitcoin Jul 03, 2018 Adam Ficsor Ant routing for the Lightning Network Jul 04, 2018 Ricardo Perez-Marco Lightning Routing Assets on Bitcoin Jul 04, 2018 Giacomo Zucco Blind Signatures in Scriptless Scripts Jul 03, 2018 Jonas Nick Adaptor signatures Bootstrapping LN: What have we learned? Jul 04, 2018 Fabrice Drouin Lightning Building your own bank or... Constructing Crypto Castles Jul 04, 2018 Jameson Lopp Security CoinjoinXT and other techniques for deniable transfers Jul 03, 2018 …"},{"uri":"/building-on-bitcoin/2018/crypto-castles/","title":"Building your own bank or... 
Constructing Crypto Castles","content":"https://twitter.com/kanzure/status/1014483571946508288\nI am going to do a brain dump of basically anything that people who are building systems protecting private keys are thinking about. Hopefully you can ingest this for when you decide to build one of these systems.\nIf you have been in this space for long at all, you should be well aware that there are risks for trusting third-parties. Various entities are trying to centralize aspects of this system. The next problem once we have successfully …"},{"uri":"/speakers/eric-voskuil/","title":"Eric Voskuil","content":""},{"uri":"/speakers/giacomo-zucco/","title":"Giacomo Zucco","content":""},{"uri":"/building-on-bitcoin/2018/libbitcoin/","title":"Libbitcoin","content":"https://twitter.com/kanzure/status/1014483126817521665\nThis is me. When I first suggested giving a talk here on libbitcoin, \u0026amp;hellip;. one of the pieces of feedback I got was that bitcoin is libbitcoin, what are you building on bitcoin? libbitcoin is actually more. I will also talk about what it is going on underneath. It\u0026amp;rsquo;s an implementation of bitcoin as well.\nMost of the talks I give are around cryptoeconomics. Up until recently, libbitcoin hasn\u0026amp;rsquo;t been mature enough to go and …"},{"uri":"/speakers/ricardo-perez-marco/","title":"Ricardo Perez-Marco","content":""},{"uri":"/building-on-bitcoin/2018/anonymous-bitcoin/","title":"Anonymous Bitcoin","content":"https://twitter.com/kanzure/status/1014128850765303808\nhttps://github.com/zkSNACKs/WalletWasabi\nOne year ago I was standing here at this conference and Igave a talk. Today I am standing here and I titled my talk \u0026amp;ldquo;Anonymous bitcoin\u0026amp;rdquo;. One year ago at this conference, the conference was named \u0026amp;ldquo;breaking bitcoin\u0026amp;rdquo;. I did not break bitcoin. Someone else did, you might remember.\nToday I want to present how I build on bitcoin. I have two goals with this presentation. I want to …"},{"uri":"/building-on-bitcoin/2018/blind-signatures-and-scriptless-scripts/","title":"Blind Signatures in Scriptless Scripts","content":"https://twitter.com/kanzure/status/1014197255593603072\nMy name is Jonas. I work at Blockstream as an engineer. I am going to talk about some primitives about blind signatures, scriptless scripts, and blind coin swap, and I will explain how to trustlessly exchange ecash tokens using bitcoin.\nI want to introduce the blind Schnorr signature in a few moments.\nSchnorr signature My assumption is not that you will completely understand Schnorr signatures, but maybe you will at least agree that if you …"},{"uri":"/building-on-bitcoin/2018/coinjoinxt/","title":"CoinjoinXT and other techniques for deniable transfers","content":"https://twitter.com/kanzure/status/1014197088979226625\nhttps://joinmarket.me/blog/blog/coinjoinxt/\nIntroduction My talk today is about what I\u0026amp;rsquo;m calling CoinjoinXT which is kind of a new proposal. I wouldn\u0026amp;rsquo;t say it\u0026amp;rsquo;s a new technology, but perhaps a new combination of technologies. My name is Adam Gibson. Yes, another Adam, sorry about that. I have been working on privacy tech for the last number of years, mostly on joinmarket. Today\u0026amp;rsquo;s talk is not about joinmarket, but it …"},{"uri":"/building-on-bitcoin/2018/binary-transparency/","title":"Contours for Binary Transparency","content":"https://twitter.com/kanzure/status/1014167797205815297\nI am going to talk about binary transparency. What is it? 
Let\u0026amp;rsquo;s suppose that you have an android phone or iphone and you download some software from the app store or Google Play. How do you know that the apk or that software that you\u0026amp;rsquo;re being given is the same piece of software that is being given to everyone else and google or apple hasn\u0026amp;rsquo;t specifically given you a bad version of that software because they were threatened …"},{"uri":"/building-on-bitcoin/2018/current-and-future-state-of-wallets/","title":"Current And Future State Of Wallets","content":"https://twitter.com/kanzure/status/1014127893495021568\nIntroduction I started to contribute to Bitcoin Core about 5 years ago. Since then, I have managed to get 450 commits merged. I am also the co-founder of a wallet hardware company based in Switzerland called Shift+ Cryptocurrency.\nWallets is not rocket science. It\u0026amp;rsquo;s mostly about pointing the figure to things that we can do better. I prepared a lot of content.\nPrivacy, security and trust When I look at existing wallets, I see the …"},{"uri":"/building-on-bitcoin/2018/dandelion/","title":"Dandelion: Privacy-preserving transaction propagation in Bitcoin's p2p network","content":"https://twitter.com/kanzure/status/1014196927062249472\nToday I will be talking to you about privacy in bitcoin\u0026amp;rsquo;s p2p layer. This was joint work with some talented colleagues.\nBitcoin p2p layer I want to give a brief overview of the p2p layer. Users in bitcoin are connected over a p2p network with tcp links. Users are identified by ip address and port number. Users have a second identity, which is their address or public key or however you want to think about it.\nWhen Alice wants to send a …"},{"uri":"/building-on-bitcoin/2018/btcpay/","title":"How to make everyone run their own full node","content":"https://twitter.com/kanzure/status/1014166857958461440\nI am the maintainer of Nbitcoin and btcpay. I am a dot net fanboy. I am happy that the number of people working on bitcoin in C# has spread from one about four years ago to about ten in this room toay. That\u0026amp;rsquo;s great. I work at DG Lab in Japan.\nThe goal of my talk is to continue what Jonas Schnelli was talking about. He was speaking about how to get the best wallet and get the three components of security, privacy and trust. I have a …"},{"uri":"/speakers/lawrence-nahum/","title":"Lawrence Nahum","content":""},{"uri":"/building-on-bitcoin/2018/single-use-seals/","title":"Single Use Seals","content":"https://twitter.com/kanzure/status/1014168068447195136\nI am going to talk about single use seals and ways to use them. They are a concept of building consensus applications and things that need consensus. I want to mention why that matters. What problem are we trying to solve? Imagine my cell phone and I open up my banking app or the banking app or lightning app\u0026amp;hellip; I want to make sure that what I see on my screen is the same as what you see on your screen. We\u0026amp;rsquo;re trying to achieve …"},{"uri":"/speakers/thomas-kerin/","title":"Thomas Kerin","content":""},{"uri":"/building-on-bitcoin/2018/tooling/","title":"Tooling Panel","content":"https://twitter.com/kanzure/status/1014167542422822913\nWhat kind of tools do we need to create, in order to facilitate building on bitcoin? We are going to open questions to the floor. The guys will be there to get your questions. The panel will be about 20 minutes long, and then we will go for food.\nLN: I work at Blockstream. 
I work on low-level wallets and tooling for wallets.\nND: I am working on a library called Nbitcoin for programming bitcoin in C#. I am also the developer of btcpay.\nEV: …"},{"uri":"/building-on-bitcoin/2018/working-on-scripts/","title":"Working on Scripts with logical opcodes","content":"https://twitter.com/kanzure/status/1014167120417091584\nBitcoin has logical opcodes in bitcoin script. Depending on whose is trying to spend coins or what information they have, they interact with logical opcodes. We could see a simple example here taken from one of the lightning network commitment transcraction scripts. It pushes a key on to the stack so that a checksig can run later. It adds a relative timelock as well. Two people can interact with that script and we can set different …"},{"uri":"/london-bitcoin-devs/2018-06-12-adam-gibson-unfairly-linear-signatures/","title":"Unfairly Linear Signatures","content":"Slides: https://joinmarket.me/static/schnorrplus.pdf\nIntro This is supposed to be a developer meetup but this talk is going to be a little more on the theoretical side. The title is “Unfairly Linear Signatures” that is just a joke. It is talking about Schnorr signatures. They are something that could in the near future have a big practical significance in Bitcoin. I am not going to explain all the practical significance.\nOutline To give you an outline, I will explain what all these terms mean …"},{"uri":"/breaking-bitcoin/2019/bitcoin-build-system/","title":"Bitcoin Build System Security","content":"Alternative video without the Q\u0026amp;amp;A session: https://www.youtube.com/watch?v=I2iShmUTEl8\nhttps://twitter.com/kanzure/status/1137347937426661376\nI couldn\u0026amp;rsquo;t make it to Amsterdam this year, but I hope the graphics I have prepared for this talk can make up for my absence. Let\u0026amp;rsquo;s say you want to be a good bitcoin citizen and start to run your own bitcoin node. Say you go to bitcoincore.org, you click the download button, and you open the disk image and you double click on Bitcoin Core. …"},{"uri":"/layer2-summit/","title":"Layer2 Summit","content":" Layer2 Summit 2018 "},{"uri":"/layer2-summit/2018/","title":"Layer2 Summit 2018","content":" Lightning Overview Apr 25, 2018 Conner Fromknecht Amp Splicing Watchtowers Scriptless Scripts May 25, 2018 Andrew Poelstra "},{"uri":"/layer2-summit/2018/scriptless-scripts/","title":"Scriptless Scripts","content":"https://twitter.com/kanzure/status/1017881177355640833\nIntroduction I am here to talk about scriptless scripts today. Scriptless scripts are related to mimblewimble, which is the other thing I was going to talk about. For time constraints, I will only talk about scriptles scripts. I\u0026amp;rsquo;ll give a brief historical background, and then I will say what scriptless scripts are, what we can do with them, and give some examples.\nHistory In 2016, there was a mysterious paper dead-dropped on an IRC …"},{"uri":"/layer2-summit/2018/lightning-overview/","title":"Lightning Overview","content":"https://twitter.com/kanzure/status/1005913055333675009\nhttps://lightning.network/\nIntroduction First I am going to give a brief overview of how lightning works for those of who you may not be so familiar. 
Then I am going to discuss three technologies we\u0026amp;rsquo;ll be working on in the next year at Lightning Labs and pushing forward.\nPhilosophical perspective and scalability Before I do that though, following up on what Neha was saying, I want to give a philosophical perspective on how I think …"},{"uri":"/sf-bitcoin-meetup/2018-04-23-jeremy-rubin-bitcoin-core/","title":"Bitcoin Core","content":"A hardCORE workout\nSlides: https://drive.google.com/file/d/149Ta1WRXL5WEvnBdlL-HxmsFDXUbvFDy/view\nhttps://twitter.com/kanzure/status/1152926849879760896\nIntro Thank you very much for the warm welcome. So welcome to the hard core workout. It is not going to be that difficult but you can stretch if you need to. It is going to be a lot of material so I’m going to go a little bit fast. If you have any questions feel free to stop me in the middle. There’s a lot of material to get through so I might …"},{"uri":"/sf-bitcoin-meetup/2018-04-20-laolu-osuntokun-exploring-lnd0.4/","title":"Exploring Lnd0.4","content":"Slides: https://docs.google.com/presentation/d/1pyQgGDUcB29r8DnTaS0GQS6usjw4CpgciQx_-XBVOwU/\nThe Genesis of lnd First, the genesis of lnd. Here is the first commit of lnd. It was October 27 2015. This is when I was in school still. This was done when I went on break and it was the first major…. we had in terms of code. We talked about the language we’re using, the licensing, the architecture of the daemon. A fun fact, the original name of lnd was actually called Plasma. Before we made it open …"},{"uri":"/bitcoin-core-dev-tech/2018-03/","title":"Bitcoin Core Dev Tech 2018 (Mar)","content":" Bellare-Neven Mar 05, 2018 Signature aggregation Cross Curve Atomic Swaps Mar 05, 2018 Adaptor signatures Merkleized Abstract Syntax Trees - MAST Mar 06, 2018 Taproot Covenants Mast Priorities Mar 07, 2018 Taproot, Graftroot, Etc Mar 06, 2018 Contract protocols Taproot "},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-07-priorities/","title":"Priorities","content":"https://twitter.com/kanzure/status/972863994489901056\nPriorities We\u0026amp;rsquo;re going to wait until BlueMatt is here. Nobody knows what his priorities are. He says he might be in around noon.\nThere\u0026amp;rsquo;s an ex-Google product director interested in helping with Bitcoin Core. He was asking about how to get involved. I told him to get involved by just diving in. He will be spending some time at Chaincode at the end of March. We\u0026amp;rsquo;ll get a sense for what his skills are. I think this could be …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-06-merkleized-abstract-syntax-trees-mast/","title":"Merkleized Abstract Syntax Trees - MAST","content":"https://twitter.com/kanzure/status/972120890279432192\nSee also http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees/\nMAST stuff You could directly merkleize scripts if you switch from IF, IFNOT, ELSE with IFJUMP that has the number of bytes.\nWith graftroot and taproot, you never to do any scripts (which were a hack to get things started). But we\u0026amp;rsquo;re doing validation and computation.\nYou take every single path it has; so instead, it becomes …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-06-taproot-graftroot-etc/","title":"Taproot, Graftroot, Etc","content":"https://twitter.com/kanzure/status/972468121046061056\nGraftroot The idea of graftroot is that in every contract there is a superset of people that can spend the money. This assumption is not always true but it\u0026amp;rsquo;s almost always true. 
Say you want to lock up these coins for a year, without any conditionals to it, then it doesn\u0026amp;rsquo;t work. But assume you have\u0026amp;ndash; pubkey recovery? No\u0026amp;hellip; pubkey recovery is inherently incompatible with any form of aggregation, and aggregation is far …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-05-bellare-neven/","title":"Bellare-Neven","content":"See also http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/\nIt\u0026amp;rsquo;s been published, it\u0026amp;rsquo;s been around for a decade, and it\u0026amp;rsquo;s widely cited. In Bellare-Neven, it\u0026amp;rsquo;s itself, it\u0026amp;rsquo;s a multi-signature scheme which means multiple pubkeys and one message. You should treat the individual authorizations to spend inputs, as individual messages. What we need is an interactive aggregate signature scheme. Bellare-Neven\u0026amp;rsquo;s paper suggests a …"},{"uri":"/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/","title":"Cross Curve Atomic Swaps","content":"https://twitter.com/kanzure/status/971827042223345664\nDraft of an upcoming scriptless scripts paper. This was at the beginning of 2017. But now an entire year has gone by.\npost-schnorr lightning transactions https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html\nAn adaptor signature.. if you have different generators, then the two secrets to reveal, you just give someone both of them, plus a proof of a discrete log, and then you say learn the …"},{"uri":"/speakers/ron-paul/","title":"Ron Paul","content":""},{"uri":"/satoshi-roundtable/sr-004/ron-paul/","title":"Ron Paul","content":"Satoshi Roundtable IV\nIntroduction 1 Please welcome Alex from Coin Center\u0026amp;hellip; Bruce\u0026amp;rsquo;s son. My name is Alexander.. I\u0026amp;rsquo;m the CEO of Bit\u0026amp;hellip;.. something\u0026amp;hellip; I would like to introduce the \u0026amp;hellip; Nick Spanos.\nIntroduction 2 Nick Spanos\nA while ago, I think last year, \u0026amp;hellip; we said it would be cool to have Ron Paul here. Bruce calls me a few months ago and says we have to get Ron down here. I tried. I work for Ron. I worked for both campaigns. Rallied the troops and the …"},{"uri":"/satoshi-roundtable/","title":"Satoshi Roundtable","content":" Sr 004 "},{"uri":"/satoshi-roundtable/sr-004/","title":"Sr 004","content":" Ron Paul Feb 06, 2018 Ron Paul "},{"uri":"/misc/2018-02-02-andrew-poelstra-bulletproofs/","title":"Bulletproofs","content":"49th Milan Bitcoin meetup\nhttps://twitter.com/kanzure/status/962326126969442304\nhttps://www.reddit.com/r/Bitcoin/comments/7w72pq/bulletproofs_presentation_at_feb_2_milan_meetup/\nIntroduction Alright. Thank you for that introduction, Giacamo. I am here to talk about bulletproofs. This is different from proof-of-bullets, which is what fiat currencies use. In bulletproofs, we use a zero-knowledge proof which has nothing to do with consensus at all, but it has a lot of exciting applications. 
The …"},{"uri":"/blockchain-protocol-analysis-security-engineering/","title":"Blockchain Protocol Analysis Security Eng","content":" Blockchain Protocol Analysis Security Engineering 2017 Blockchain Protocol Analysis Security Engineering 2018 "},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/","title":"Blockchain Protocol Analysis Security Engineering 2018","content":" Bulletproofs Benedikt Bünz Proof systems Hardening Lightning Jan 30, 2018 Olaoluwa Osuntokun Security Lightning Proofs Of Space Jan 31, 2018 Bram Cohen Schnorr Signatures For Bitcoin: Challenges and Opportunities Jan 31, 2018 Pieter Wuille Research Schnorr signatures Adaptor signatures Simplicity: A New Language for Blockchains Jan 25, 2018 Russell O’Connor Simplicity Smart Signatures: Experiments in Authorization Jan 24, 2018 Christopher Allen Cryptography Simplicity "},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/proofs-of-space/","title":"Proofs Of Space","content":"Beyond Hellman\u0026amp;rsquo;s time-memory trade-offs with applications to proofs of space\nhttps://twitter.com/kanzure/status/962378701278208000\nIntroductions Hi everyone. This presentation is based off of a paper called \u0026amp;ldquo;Beyond Hellman\u0026amp;rsquo;s time-memory trade-offs\u0026amp;rdquo;. I\u0026amp;rsquo;ll get to why it\u0026amp;rsquo;s called that. These slides and the proofs are by Hamza Abusalah.\nOutline I am going to describe what proofs of space are. I\u0026amp;rsquo;ll give some previous constructions and information about them. …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/","title":"Schnorr Signatures For Bitcoin: Challenges and Opportunities","content":"slides: https://prezi.com/bihvorormznd/schnorr-signatures-for-bitcoin/\nhttps://twitter.com/kanzure/status/958776403977220098\nhttps://blockstream.com/2018/02/15/schnorr-signatures-bpase.html\nIntroduction My name is Pieter Wuille. I work at Blockstream. I contribute to Bitcoin Core and bitcoin research in general. I work on various proposals for the bitcoin system for some time now. Today I will be talking about Schnorr signatures for bitcoin, which is a project we\u0026amp;rsquo;ve been working on, …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/","title":"Hardening Lightning","content":"https://twitter.com/kanzure/status/959155562134102016\nslides: https://docs.google.com/presentation/d/14NuX5LTDSmmfYbAn0NupuXnXpfoZE0nXsG7CjzPXdlA/edit\nprevious talk (2017): http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2017/lightning-network-security-analysis/\nprevious slides (2017): https://cyber.stanford.edu/sites/default/files/olaoluwaosuntokun.pdf\nIntroduction I am basically going to go over some ways that I\u0026amp;rsquo;ve been thinking about basically …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/2018-01-25-russell-oconnor-simplicity/","title":"Simplicity: A New Language for Blockchains","content":"Slides: https://cyber.stanford.edu/sites/g/files/sbiybj9936/f/slides-bpase-2018.pdf\nSimplicity white paper: https://blockstream.com/simplicity.pdf\nIntro Good morning everyone. I have been working on a new language for the consensus layer of blockchains and I am here to present that to you today.\nBlockchains and Smart Contracts I know everyone is familiar with this in the audience but I want to make sure that I’m on the same page with you guys. 
use …"},{"uri":"/rebooting-web-of-trust/2019-prague/","title":"2019 Prague","content":" Abstract Groups Group Updates.Md Research Groups Security Intro Self Sovereign Identity Ideology And Architecture Christopher Allen Shamir Secret Sharing Swashbuckling Safety Training Kim Hamilton Duffy Topics Weak Signals "},{"uri":"/scalingbitcoin/hong-kong-2015/a-bevy-of-block-size-proposals-bip100-bip102-and-more/","title":"A Bevy Of Block Size Proposals Bip100 Bip102 And More","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_1_garzik.pdf\nslides: http://www.slideshare.net/jgarzik/a-bevy-of-block-size-proposals-scaling-bitcoin-hk-2015\nAlternative video: https://www.youtube.com/watch?v=37LiYOOevqs\u0026amp;amp;t=1h16m6s\nWe\u0026amp;rsquo;re going to be talking about every single block size proposal. This is going to be the least technical presentation at the conference. I am not going to go into the proposals htemselves. Changing the block size …"},{"uri":"/scalingbitcoin/hong-kong-2015/a-flexible-limit-trading-subsidy-for-larger-blocks/","title":"A Flexible Limit Trading Subsidy For Larger Blocks","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_2_friedenbach.pdf\nI really want to thank jgarzik for that interesting talk. Who picks what block size? My talk is going to be about a category of dynamic block size proposals that attach the block size limit to the fee in such a way that as there is more demand for block size, through users that want to pay fees, the block size will increase or decrease as necessary. The need for a dynamic block size is so …"},{"uri":"/scalingbitcoin/tel-aviv-2019/scaling-oblivious-read-write/","title":"A Tale of Two Trees: One Writes, and Other Reads, Scaling Oblivious Accesses to Large-Scale Blockchains","content":"Introduction We are trying to get optimized oblivious accesses to large-scale blockchains. This is collaborative work with our colleagues.\nMotivation As we all know, bitfcoin data has become too large to store in resource-constrained devices. It\u0026amp;rsquo;s like 240 GB. The current solution today is bip37 + Nakamoto\u0026amp;rsquo;s idea for simplified payment verification (SPV) clients which don\u0026amp;rsquo;t run the bitcoin network rules. Resource-constrained clients (thin clients) have to rely on other …"},{"uri":"/simons-institute/a-wishlist-for-verifiable-computation/","title":"A Wishlist For Verifiable Computation","content":"A wishlist for verifiable computation: An applied computer science perspective\nhttp://www.pepper-project.org/ https://www.youtube.com/watch?v=Z4jzA6ts2j4 http://simons.berkeley.edu/talks/mike-walfish-2015-06-10 slides: http://www.pepper-project.org/surveywishlist.pptx slides (again): http://simons.berkeley.edu/sites/default/files/docs/3316/surveywishlist2.pptx more: http://www.pepper-project.org/talks.htm Okay, thanks. I want to begin by thanking the organizers, especially the tall guy for …"},{"uri":"/rebooting-web-of-trust/2019-prague/abstract-groups/","title":"Abstract Groups","content":"Blockcert As the standards around verifiable credentials are starting to take form, different flavours of ‘verifiable credentials’ like data structures need to make necessary changes in order to leverage on the rulesets outlined and constantly reviewed by a knowledgeable community like RWOT and W3C. 
The purpose of this paper is to identify all of the changes needed to comply with the Verifiable Credentials \u0026amp;amp; Decentralized Identifiers standards.\nCooperation beats aggregation One important …"},{"uri":"/tags/accidental-confiscation/","title":"Accidental confiscation","content":""},{"uri":"/tags/acc/","title":"Accountable Computing Contracts","content":""},{"uri":"/bit-block-boom/2019/accumulating-bitcoin/","title":"Accumulating Bitcoin","content":"Introduction Today I am going to be preaching to the choir I think. Hopefully I will be reiterating and reinforcing some points that you already know but maybe forgot to bring up with your nocoiner friends, or learning something new. I hope there\u0026amp;rsquo;s something for everyone in here.\nQ: Should I buy bitcoin?\nA: Yes.\nQ: Are you going to troll a journalist?\nA: Today? Already done.\nMisconceptions One of the biggest misconceptions I heard when I first learned about bitcoin was that bitcoin was …"},{"uri":"/stanford-blockchain-conference/2019/accumulators/","title":"Accumulators for blockchains","content":"https://twitter.com/kanzure/status/1090748293234094082\nhttps://twitter.com/kanzure/status/1090741715059695617\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/accumulators/\npaper: https://eprint.iacr.org/2018/1188\nIntroduction I am going to be talking about batching techniques for accumulators with applications to interactive oracle proofs and blockchains. I\u0026amp;rsquo;ll maybe also talk about how to make these proofs even smaller. The main thing that we are going to focus on today is the …"},{"uri":"/speakers/adam-ludwin/","title":"Adam Ludwin","content":""},{"uri":"/tags/addr-v2/","title":"Addr v2","content":""},{"uri":"/speakers/aki-balogh/","title":"Aki Balogh","content":""},{"uri":"/speakers/alan-reiner/","title":"Alan Reiner","content":""},{"uri":"/speakers/alena-vranova/","title":"Alena Vranova","content":""},{"uri":"/speakers/alessandro-chiesa/","title":"Alessandro Chiesa","content":""},{"uri":"/speakers/alex-petrov/","title":"Alex Petrov","content":""},{"uri":"/speakers/alex-zinder/","title":"Alex Zinder","content":""},{"uri":"/speakers/alexander-chepurnoy/","title":"Alexander Chepurnoy","content":""},{"uri":"/speakers/alexander-zaidelson/","title":"Alexander Zaidelson","content":""},{"uri":"/speakers/alexei-zamyatin/","title":"Alexei Zamyatin","content":""},{"uri":"/speakers/alicia-bendhan/","title":"Alicia Bendhan","content":""},{"uri":"/cryptoeconomic-systems/2019/all-about-decentralized-trust/","title":"All About Decentralized Trust","content":"\u0026amp;ndash; Disclaimer \u0026amp;ndash;\nThese are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.\nI sometimes add annotations to the transcription text. These will always be denoted by a standard …"},{"uri":"/tags/altcoin/","title":"altcoin","content":""},{"uri":"/scalingbitcoin/montreal-2015/alternatives-to-block-size-as-aggregate-resource-limits/","title":"Alternatives To Block Size As Aggregate Resource Limits","content":"I am talking about block size, but not any particular proposal. Why do we have the block size limit and is it a good thing in the first place? The block size exists as a denial-of-service limiter. 
It limits the amount of resources that a node validating the blockchain can exhaust when validating a block. We want to do this because in the early days of bitcoin there are often certain kinds of transactions there are ways to slow down a validator by using a non-standard transaction or filling up a …"},{"uri":"/scalingbitcoin/montreal-2015/amiko-pay/","title":"Amiko Pay","content":"Amiko Pay aims to become an implementation of the lightning network. So that previous presentation was a great setup. There\u0026amp;rsquo;s a little bit more to Amiko Pay. If you look through the basic design of the lightning network, there\u0026amp;rsquo;s a network of payment channels. There have been several variations of this idea. Lightning Network happens to be the best so far. What Amiko Pay aims to do is to focus on the nodes, and do the routing between nodes. The other big problem of Amiko Pay making …"},{"uri":"/speakers/andy-ofiesh/","title":"Andy Ofiesh","content":""},{"uri":"/tags/annex/","title":"Annex","content":""},{"uri":"/stanford-blockchain-conference/2020/arbitrum-v2/","title":"Arbitrum 2.0: Faster Off-Chain Contracts with On-Chain Security","content":"https://twitter.com/kanzure/status/1230191734375628802\nIntroduction Thank you, good morning everybody. I\u0026amp;rsquo;m going to talk about our latest version of our Abitrum system. The first version was discussed in a paper in 2018. Since then, there\u0026amp;rsquo;s a lot of advances in my technology and a real running system has been made. This is the first working roll-up system for general smart contracts.\nFirst I will set the scene to provide some context on what we\u0026amp;rsquo;re talking about and how what …"},{"uri":"/w3-blockchain-workshop-2016/archival-science/","title":"Archival Science","content":"Blockchain is a memory transfer sytsem. These memories are moving through space and time. This is something that I recognized. As a type of record, a ledger is a type of record traditionally. There are a number of international standards. One of them is ISO 15489, international records amangemetn. You will see this idea of memory of transactions as evidence. You will see something like \u0026amp;ldquo;proofs on the blockchain\u0026amp;rdquo;. This is information received and created and maintained. In pursuance …"},{"uri":"/speakers/ari-juels/","title":"Ari Juels","content":""},{"uri":"/speakers/ariel-gabizon/","title":"Ariel Gabizon","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/armory-proof-of-payment/","title":"Armory Proof Of Payment","content":"I am going to talk about proof of payment. This is simple. You can do this. That you couldn\u0026amp;rsquo;t do before 2008. We\u0026amp;rsquo;ll see if this works. I hear it might work. That\u0026amp;rsquo;s okay, I\u0026amp;rsquo;ll pull this up here. I have ruined everything. Just jam that in there. Here we go.\nSo uh, I said who I am. Bitcoin Armory. I am here to talk about proof of payment. Basically, let\u0026amp;rsquo;s talk about Armory first real quick. First released in 2011. Open-source Bitcoin security software. It\u0026amp;rsquo;s been …"},{"uri":"/speakers/arvind-narayanan/","title":"Arvind Narayanan","content":""},{"uri":"/w3-blockchain-workshop-2016/arvind-narayanan/","title":"Arvind Narayanan","content":"I would like to introduce Arvind. He is a professor at Princeton. He has talked about this in a number of different forums.\nHi everyone. My name is Arvind. This is turning out to be one of the more unusual and interesting events that I have been to. 
Someone at my table called the first session a quasi-religious experience. Not sure whether that was a good thing or not. Joking aside, my favorite thing about this is that the position statements were available on the website. I found them …"},{"uri":"/tags/asicboost/","title":"ASICBoost","content":""},{"uri":"/stanford-blockchain-conference/2019/asics/","title":"Asics","content":"ASIC design for mining\nhttps://twitter.com/kanzure/status/1091056727464697856\nIntroduction My name is David Vorick. I am lead developer of Sia, a decentralized cloud storage solution. I come from the software side of the world, but I got into hardware. I think from the perspective of cryptocurrency developers. I am also CEO of Obelisk, which is a cryptocurrency ASIC miner manufacturing company. I was asked today to talk about cryptocurrency ASICs. This is a really broad topic and the format that …"},{"uri":"/speakers/assimakis-kattis/","title":"Assimakis Kattis","content":""},{"uri":"/tags/async-payments/","title":"Async payments","content":""},{"uri":"/scalingbitcoin/stanford-2017/atomically-trading-with-roger-gambling-on-the-success-of-a-hard-fork/","title":"Atomically Trading With Roger Gambling On The Success Of A Hard Fork","content":"Atomically Trading with Roger: Gambling on the success of a hard-fork\npaper: http://homepages.cs.ncl.ac.uk/patrick.mc-corry/atomically-trading-roger.pdf\nThis is joint work with Patrick McCorry, Andrew Miller and myslef. Patrick was originally going to give this talk but he was unable to make it. I\u0026amp;rsquo;ll give the talk with his slides. I want to acknowledge Roger Ver for permission to use his name in the title of the talk. We asked permission.\nI will only be talking about some of the …"},{"uri":"/stanford-blockchain-conference/2019/aurora-transparent-succinct-arguments-r1cs/","title":"Aurora: Transparent Succinct Arguments for R1CS","content":"https://twitter.com/kanzure/status/1090741190100545536\nIntroduction Hi. I am Alessandro Chiesa, a faculty member at UC Berkeley. I work on cryptography. I\u0026amp;rsquo;m also a chief scientist at Starkware. I was a founder of zcash. Today I will like to tell you about recent work in cryptographic proof systems. This is joint work with people at Starkware, UC Berkeley, MIT Media Lab, and others. In the prior talk, we talked about scalability. But this current talk is more about privacy by using …"},{"uri":"/speakers/austin-hill/","title":"Austin Hill","content":""},{"uri":"/stanford-blockchain-conference/2019/vulnerability-detection/","title":"Automatic Detection of Vulnerabilities in Smart Contracts","content":"Introduction If you find a bug in a smart contract, it\u0026amp;rsquo;s really hard to mitigate that bug. In formal verification, we really look for bugs. We can identify subtle bugs, like reentrancy attacks, ownership violation, incorrect transactions, bugs due to updated code. We also want to assure the absence of bugs.\nDevice drivers I come from the domain of Windows device drivers. These are drivers produced by third parties, and it integrates with operating system code. Bugs in the device drivers …"},{"uri":"/scalingbitcoin/tel-aviv-2019/backpackers/","title":"Backpackers","content":"Backpackers: A new paradigm for secure and high-performance blockchain\nThang N. 
Dinh\n"},{"uri":"/speakers/baker-marquart/","title":"Baker Marquart","content":""},{"uri":"/speakers/balaji-srinivasan/","title":"Balaji Srinivasan","content":""},{"uri":"/speakers/barry-silbert/","title":"Barry Silbert","content":""},{"uri":"/speakers/bart-suichies/","title":"Bart Suichies","content":""},{"uri":"/speakers/ben-maurer/","title":"Ben Maurer","content":""},{"uri":"/speakers/benjamin-chan/","title":"Benjamin Chan","content":""},{"uri":"/speakers/benjamin-lawsky/","title":"Benjamin Lawsky","content":""},{"uri":"/baltic-honeybadger/2018/beyond-bitcoin-decentralized-collaboration/","title":"Beyond Bitcoin Decentralized Collaboration","content":"Beyond bitcoin: Decentralized collaboration\nhttps://twitter.com/kanzure/status/1043432684591230976\nhttp://sit.fyi/\nHi everybody. Today I am going to talk about something different than my usual. I am going to talk more about how we can compute and how we collaborate. I\u0026amp;rsquo;ll start with an introduction to the history of how things happened and why are things the way they are today. Many of you probably use cloud SaaS applications. It\u0026amp;rsquo;s often touted as a new thing, something happening in …"},{"uri":"/scalingbitcoin/hong-kong-2015/bip101-block-propagation-data-from-testnet/","title":"Bip101 Block Propagation Data From Testnet","content":"I am a bitcoin miner. I am a C++ programmer and a scientist. I will be going pretty fast. I have a lot of stuff to cover. Bare with me.\nMy perspective on this is that scaling bitcoin is an engineer problem. My favorite proposal for how to scale bitcoin is bip101. It\u0026amp;rsquo;s over a 20-year time span. This will give us time to implement fixes to get Bitcoin to a large level. A hard-fork to increase the block size limit is hard, and soft-forks make it easier to decrease, that is to increase it is, …"},{"uri":"/tags/bip70-payment-protocol/","title":"BIP70 payment protocol","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/bip99-and-uncontroversial-hard-forks/","title":"Bip99 And Uncontroversial Hard Forks","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY1/1_overview_1_timon.pdf\nbip99 https://github.com/bitcoin/bips/blob/master/bip-0099.mediawiki\nWe are going to focus on consensus rules. I am going to give an introduction and then focus on controversial versus non-controversial hard-forks.\nWhy do we want to classify consensus changes in the first place? Well, for one, you often have long discussions many times in Bitcoin, we don\u0026amp;rsquo;t always share the same terminology and the …"},{"uri":"/bit-block-boom/","title":"Bit Block Boom","content":" Bit Block Boom 2019 "},{"uri":"/bit-block-boom/2019/","title":"Bit Block Boom 2019","content":" Accumulating Bitcoin Pierre Rochard Bitcoin: There Can Only Be One Tone Vays Altcoins Building Vibrant Bitcoin Communities Kris Merkel Fiat Money \u0026amp;amp; Fiat Food Saifedean Ammous How To Meme Bitcoin To The Moon Michael Goldstein State Of Multisig Justin Moon Psbt Taproot, Schnorr, and the next soft-fork Mike Schmidt Schnorr signatures Taproot Soft fork activation Musig Adaptor signatures "},{"uri":"/baltic-honeybadger/2018/bitcoin-as-a-novel-market-institution/","title":"Bitcoin As A Novel Market Institution","content":"Bitcoin as a novel market institution\nhttps://twitter.com/kanzure/status/1043763647498063872\nI am going to be talking about bitcoin as an economic system, not as a software or cryptographic system. This talk has two parts. 
In the first, I\u0026amp;rsquo;m going to take a retrospective of the past 10 years of bitcoin as a functioning economic system. In the second part, I will be looking at bitcoin as it is today and how it will be in the future. I\u0026amp;rsquo;m going to look at the amount of wealth stored in …"},{"uri":"/scalingbitcoin/montreal-2015/bitcoin-block-propagation-iblt-rusty-russell/","title":"Bitcoin Block Propagation and IBLT","content":"This is not what I do. But I was doing it anyway.\nThe problem is that blocks are transmitted in their entirety. If you have a 1 MB uplink and you\u0026amp;rsquo;re connecting to 8 peers, your first peer will see a 1MB block in about 66.8 seconds. And the last one will get it in 76.4 seconds, because we basically blast out blocks in parallel to our peers. Miners can solve this problem of slow block transmission by centralizing and all using the same pool.\nThat\u0026amp;rsquo;s not desirable, so it would be nice if …"},{"uri":"/bitcoin-core-dev-tech/2015-02/","title":"Bitcoin Core Dev Tech 2015","content":" Bitcoin Law For Developers James Gatto, Marco Santori Gavinandresen R\u0026amp;amp;D Goals \u0026amp;amp; Challenges Patrick Murck, Gavin Andresen, Cory Fields Research Bitcoin core Talk by the founders of Circle Jeremy Allaire, Sean Neville "},{"uri":"/london-bitcoin-devs/jnewbery-bitcoin-core-v0.17/","title":"Bitcoin Core V0.17","content":"Bitcoin Core v0.17\nslides: https://www.dropbox.com/s/9kt32069hoxmgnt/john-newbery-bitcoincore0.17.pptx\nhttps://twitter.com/kanzure/status/1031960170027536384\nIntroduction I am John Newbery. I work on Bitcoin Core. This talk is going to be mostly about Bitcoin Core 0.17 which was branched on Monday. Hopefully the final release will be in the next couple of weeks.\nwhoami I live in New York and work at Chaincode Labs. I\u0026amp;rsquo;m not actually a native born New Yorker. It\u0026amp;rsquo;s nice to be back in …"},{"uri":"/dallas-bitcoin-symposium/bitcoin-developers/","title":"Bitcoin Developers","content":"Introduction I am going to talk about a programmer\u0026amp;rsquo;s perspective. As investors, you might find this interesting, but at the same time it\u0026amp;rsquo;s not entirely actionable. As a programmer, the bitcoin ecosystem is very attractive. This is true of cryptocurrencies in general. I\u0026amp;rsquo;ll make a distinction for bitcoin at the end. One of the interesting things is that you see some of the most talented hackers in the world are extremely attracted to bitcoin. I think that\u0026amp;rsquo;s an interesting …"},{"uri":"/edgedevplusplus/2017/","title":"Bitcoin Edge Dev++ 2017","content":"https://stanford-devplusplus-2017.bitcoinedge.org/\nBitcoin Peer-to-Peer Network and Mempool John Newbery P2p Transaction relay policy "},{"uri":"/scalingbitcoin/montreal-2015/bitcoin-failure-modes-and-the-role-of-the-lightning-network/","title":"Bitcoin Failure Modes And The Role Of The Lightning Network","content":"I am going to be talking about bitcoin failure modes, and Joseph will talk about how the lightning network can help. I\u0026amp;rsquo;ll start. We\u0026amp;rsquo;ll start off by saying that bitcoin is working, which is really cool. Blocks starts with lots of zeroes, coins stay put, they move when you tell them to. That\u0026amp;rsquo;s really cool and it works. That\u0026amp;rsquo;s great. This is a good starting place. We should acknowledge that bitcoin can fail. But it\u0026amp;rsquo;s anti-fragile, right? 
What\u0026amp;rsquo;s the blockheight of …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/bitcoin-financing-and-trading/","title":"Bitcoin Financing And Trading","content":" Joshua Lim (Circle, ex Goldman Sachs) Juthica Chou (LedgerX) (Goldman Sachs) Bobby Cho (ItBit) (VP of trading at SecondMarket) Harry Yeh (moderator) (Binary Financial) Today we will hear from some people who are traders and also infrastructure providers. Good morning everybody. It took me 18 hours to get here. The Uber dropped me off at the wrong building. They need better signs.\nSo usually I am the one that sits on the panel. This is actually the first time I get to moderate. I get to ask the …"},{"uri":"/bitcoin-core-dev-tech/2015-02/james-gatto-marco-santori-bitcoin-law-for-developers/","title":"Bitcoin Law For Developers","content":"We are going to be taking a 15 minute coffee break after our next two speakers. I want to introduce you to James Gatto and Marco Santori with Pilsbury. They will be spending some time talking about Bitcoin law. They have a room this afternoon and they are offering to talk with you one on one. So Marco and James.\nYou missed the introduction. Was it any good? (laughter)\nWe are here to talk about legal issues. We are going to try to keep it light and interesting. I am going to talk about patents. …"},{"uri":"/scalingbitcoin/montreal-2015/bitcoin-load-spike-simulation/","title":"Bitcoin Load Spike Simulation","content":"Our goal for this project, our rationale of what we\u0026amp;rsquo;re interesting is when many transactions arrive in a short period of time. This could be because of denial of service attacks where few entities are creating a large number of transactions, or many people wanting to create transactions, like a shopping spree. We wanted to answer two questions, how does the temporary spike in transaction rate affect confirmation delay distribution? For a given spike shape, can we change the block size and …"},{"uri":"/baltic-honeybadger/2018/bitcoin-maximalism-dissected/","title":"Bitcoin Maximalism Dissected","content":"Bitcoin maximalism dissected\nGood morning everyone. I am very happy to be here at Baltic Honeybadger. Last year I gave a scaling presentation. I opened the conference with a scaling presentation. This year in order to compensate, I will be super serious. This will be the most boring presentation at the conference. I am going to try to dissect and formalize bitcoin maximalism.\nThis is the scarriest font I found on prezi. I wanted something with blood coming out of it but they didn\u0026amp;rsquo;t have …"},{"uri":"/stanford-blockchain-conference/2019/bitcoin-payment-economic-analysis/","title":"Bitcoin Payment Economic Analysis","content":"Economic analysis of the bitcoin payment system\nhttps://twitter.com/kanzure/status/1091133843518582787\nIntroduction I am an economist. I am going to stop apologizing about this now that I\u0026amp;rsquo;ve said it once. I usually study market design. Bitcoin gives us new opportunities for how to design marketplaces. Regarding the talk title, we don\u0026amp;rsquo;t want to claim that bitcoin or any other cryptocurrency will be a monopoly. But to be a monopoly, it would behave very differently from traditional …"},{"uri":"/baltic-honeybadger/2018/bitcoin-payment-processing-and-merchants/","title":"Bitcoin Payment Processing And Merchants","content":"1 on 1: Bitcoin payment processing and merchants\nhttps://twitter.com/kanzure/status/1043476967918592003\nV: Hello and welcome to this amazing conference. 
It\u0026amp;rsquo;s a good conference, come on. It\u0026amp;rsquo;s great because you can ask them about UASF and they know what you\u0026amp;rsquo;re talking about. I have some guests with me today. We\u0026amp;rsquo;re going to be talkin gabout merchant processing and talk about regular bitcoin adoption too. The first question I have for Alena is, \u0026amp;hellip; there was a big effort …"},{"uri":"/edgedevplusplus/2017/p2p-john-newbery/","title":"Bitcoin Peer-to-Peer Network and Mempool","content":"Slides: https://johnnewbery.com/presentation/2017/11/02/dev-plus-plus-stanford/p2p.pdf \u0026amp;amp; https://johnnewbery.com/presentation/2017/11/02/dev-plus-plus-stanford/mempool.pdf\nIntro John: So far today, we\u0026amp;rsquo;ve talked about transactions and blocks and the blockchain. Those are the fundamental building blocks of Bitcoin. Those are the data structures that we use. On a more practical level, how does a Bitcoin network actually function? How do we transmit these transactions and blocks around? …"},{"uri":"/magicalcryptoconference/2019/bitcoin-protocol-development-panel/","title":"Bitcoin Protocol Development Panel","content":"Bitcoin protocol development panel\nKW: We have some wonderful panelists today. Let\u0026amp;rsquo;s kick it off. Let\u0026amp;rsquo;s start with Eric. Who are you and what role do you play in bitcoin development?\nEL: I got into bitcoin in 2011. I had my own network stack. I almost had a full node implementation but I stopped short because it wasn\u0026amp;rsquo;t as well tested or reviewed as Bitcoin Core. So I started to look into the community a little bit. I became really interested in the development process itself. …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/bitcoin-regulation-landscape/","title":"Bitcoin Regulation Landscape","content":" Elizabeth Stark Jerry Brito Constance Choi Christian Catalini (moderator) MIT Compton Labs - Building 26, room 100\nhttp://mitbitcoinexpo.org/\nOkay guys, we are getting close to the time for the next panel. Head back to your seats. You may bring food in with you. We also started 15 minutes late. People have 15 minutes .. we ended lunch on time. One of our speakers literally only came for 30 minutes today, 2 to 230.\nIt\u0026amp;rsquo;s going to be escape. I don\u0026amp;rsquo;t know how to use Macs. It will be …"},{"uri":"/scalingbitcoin/montreal-2015/relay-network/","title":"Bitcoin Relay Network","content":"It was indicative of two issues. A lot of people were reducing their security model by not validating their bitcoin. It was potentially less secure. That\u0026amp;rsquo;s not something that relay networks try to address. The big issue with the decreasing number of nodes, which the relay network doesn\u0026amp;rsquo;t address, is that it reduces the resilience against attack. If there\u0026amp;rsquo;s only 100 nodes, there\u0026amp;rsquo;s really only some 100 VPses that you have to knock offline before the bitcoin network stops …"},{"uri":"/dallas-bitcoin-symposium/bitcoin-security/","title":"Bitcoin Security","content":"Bitcoin security\nIntroduction Hi everyone, my name is Dhruv Bansal. I am the CTO of Unchained Capital. Thank you for the history lesson, Tuur. That\u0026amp;rsquo;s a hard act to follow. I keep thinking about how it must have been like to transact large amounts of money back then. It was a fascinating time period. There was a lot of foundations laid for how the global financial system will work later in the future, like fractional reserve lending, loans, and so on. 
I want to talk about what security …"},{"uri":"/breaking-bitcoin/2019/selfish-mining/","title":"Bitcoin Selfish Mining and Dyck Words","content":"Introduction I am going to talk about our recent work on selfish mining. I am going to explain the profitability model, the different mathematical models that are used to analyze selfish mining with their properties.\nBibliography On profitability of selfish mining On profitability of stubborn mining On profitability of trailing mining Bitcoin selfish mining and Dyck words Selfish mining and Dyck words in bitcoin and ethereum networks Selfish mining in ethereum The paper on selfish mining in …"},{"uri":"/magicalcryptoconference/2019/bitcoin-without-internet/","title":"Bitcoin Without Internet","content":"SM: We have a special announcement to make. Let me kick it over to Richard right now.\nRM: We completed a project that will integrate the GoTenna mesh radio system with Blockstream\u0026amp;rsquo;s blocksat satellite system. It\u0026amp;rsquo;s pretty exciting. It\u0026amp;rsquo;s called txtenna. It will allow anybody to send a signed bitcoin transaction from an off-grid full node that is receiving blockchain data from the blocksat network, and then relay it over the GoTenna mesh network. It\u0026amp;rsquo;s not just signed …"},{"uri":"/bit-block-boom/2019/there-can-only-be-one/","title":"Bitcoin: There Can Only Be One","content":"https://twitter.com/kanzure/status/1162742007460192256\nIntroduction I am basically going to talk about why bitcoin will basically eat all the shitcoins and why all of them are going away and I am going to point to some of the details. The big graphic right there is my attempt to categorize all of the altcoins, the ICO space, and so on. Everyone is slowly starting to merge mine with bitcoin, since they can\u0026amp;rsquo;t keep up with the charadde.\nProof-of-work The word \u0026amp;ldquo;blockchain\u0026amp;rdquo; was …"},{"uri":"/tags/block-explorers/","title":"Block explorers","content":""},{"uri":"/stanford-blockchain-conference/2020/block-rewards/","title":"Block Rewards","content":"An Axiomatic Approach to Block Rewards\nhttps://twitter.com/kanzure/status/1230574963813257216\nhttps://arxiv.org/pdf/1909.10645.pdf\nIntroduction Thank you, Bram. Can you hear me in the back? I want to talk about some work I have been doing with my colleagues. This is a paper about incentive issues in blockchain protocols. I am interested in thinking about whether protocols have been designed in a way that motivates users to behave in the way that the designer had hoped.\nGame theory and mechanism …"},{"uri":"/tags/block-withholding/","title":"Block withholding","content":""},{"uri":"/coindesk-consensus-2016/blockchain-database-technology-in-a-global-context/","title":"Blockchain Database Technology In A Global Context","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nBlockchain database technology in a global context\nAlan Murray - Moderator\nLawrence H. Summers\nAM: I would like to show a video.\nIt\u0026amp;rsquo;s not going to happen. You\u0026amp;rsquo;re wasting your time. When the DOJ calls you up and says it\u0026amp;rsquo;s an illegal currency, it\u0026amp;rsquo;s over. You will be put in jail. There will be no non-controlled currency in the world. There is no government that is going to put up with it in the world. 
Lots of …"},{"uri":"/w3-blockchain-workshop-2016/blockchain-hub/","title":"Blockchain Hub","content":"My understanding is that there is application logic, consensus mechanism, distributed timestamp and ledger structure. With these layers, the blockchain can give total control of assets to the end. This materializes the attitude and philosophy of the internet. The intention is I think great. There is a missing link, though. It would be nice if these blockchain could manage the digital representational of physical assets.\nI have been working with real estate escrow company in Japan to work on the …"},{"uri":"/scalingbitcoin/montreal-2015/blockchain-testbed/","title":"Blockchain Testbed","content":"A testbed for bitcoin scaling\nI want to talk today about how to scale bitcoin. We want lower latency, we want higher throughput, more bandwidth and more transactions per second, and we want security. We can try to tune the parameters. In all of the plots, I have time going from left to right and these are blocks in rectangles. We have larger blocks which might give us more throughput. We could have shorter block length, and transactions would come faster on the chain and we have better …"},{"uri":"/stanford-blockchain-conference/2020/blockchains-for-multiplayer-games/","title":"Blockchains For Multiplayer Games","content":"https://twitter.com/kanzure/status/1230256436527034368\nIntroduction Alright, thanks everyone for being here. Let\u0026amp;rsquo;s get started. I am part of a project called Forte which wants to equip the game ecosystem with blockchain finance for their in-game economies. This is a unique set of constraints compared to what a lot of people are trying to build on blockchain today. We\u0026amp;rsquo;re game experts. There\u0026amp;rsquo;s definitely people more experts than us on the topics we\u0026amp;rsquo;ll cover in this …"},{"uri":"/stanford-blockchain-conference/2019/bloxroute/","title":"Bloxroute","content":"bloXroute: A network for tomorrow\u0026amp;rsquo;s blockchain\nIntroduction Hi. I am Souyma and I am here to talk with you about Bloxroute. The elephant in the room is that blockchains have been claimed to solve everything like credit cards, social media, and global micropayments. To handle credit card payment volumes, blockchains need like 5000 transactions per second, for microtransactions you need 70000 transactions per second, and for social media even more. Blockchains today do about 10 …"},{"uri":"/speakers/bobby-cho/","title":"Bobby Cho","content":""},{"uri":"/speakers/bobby-lee/","title":"Bobby Lee","content":""},{"uri":"/stanford-blockchain-conference/2020/boomerang/","title":"Boomerang - Redundancy Improves Latency and Throughput in Payment-Channel Network","content":"https://twitter.com/kanzure/status/1230936389300080640\nIntroduction Redundancy can be a useful tool for speeding up payment channel networks and improving throughput. We presented last week at Financial Crypto. You just received a nice introduction to payment channels and payment channel networks.\nPayment channels Alice and Bob are connected through a payment channel. This is a channel and in that channel there are some coins that are escrowed. 
Some coins belong to Alice some belong to Bob and …"},{"uri":"/speakers/boyma-fahnbulleh/","title":"Boyma Fahnbulleh","content":""},{"uri":"/speakers/brad-peterson/","title":"Brad Peterson","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/braiding-the-blockchain/","title":"Braiding The Blockchain","content":"Bob McElrath (bsm117532)\nslides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/2_breaking_the_chain_1_mcelrath.pdf\nI work for SolidX in New York. I am here to tell you about some modifications to the blockchain. All the things we heard yesterday about the block size, come down to the existence of orphans. The reason why we have these problems are orphans. These are consequences of physics and resources. This is not a fundamental property in Bitcoin. ((Transcripter\u0026amp;rsquo;s note: …"},{"uri":"/speakers/brandon-black/","title":"Brandon Black","content":""},{"uri":"/breaking-bitcoin/2019/breaking-bitcoin-privacy/","title":"Breaking Bitcoin Privacy","content":"0A8B 038F 5E10 CC27 89BF CFFF EF73 4EA6 77F3 1129\nhttps://twitter.com/kanzure/status/1137304437024862208\nIntroduction Hello everybody. I invented and created joinmarket, the first really popular coinjoin implementation. A couple months ago I wrote a big literature review on privacy. It has everything about anything in privacy in bitcoin, published on the bitcoin wiki. This talk is about privacy and what we can do to improve it.\nWhy privacy? Privacy is essential for fungibility, a necessary …"},{"uri":"/scalingbitcoin/milan-2016/breaking-the-chain/","title":"Breaking The Chain","content":"https://twitter.com/kanzure/status/785144266683195392\nhttp://dpaste.com/1ZYW028\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/client-side-validation/\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/braiding-the-blockchain/\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/jute-braiding/\nInclusivitiy So it seems like we have two big topics here, one is braiding and one is proof-of-publication. I had a talk in Hong Kong about braiding. The major difference is the inclusive …"},{"uri":"/breaking-bitcoin/2019/breaking-wasabi/","title":"Breaking Wasabi (and Automated Wallets)","content":"1127 96A8 DFCA 1A96 C8B8 0094 9211 687A D298 9B12\nhttps://twitter.com/kanzure/status/1137652130611978240\nTrigger warnings Before I begin, some have told me that sometimes I tend to be toxic or provocative. So let\u0026amp;rsquo;s start with some trigger warnings. This presentation might include bad jokes about bitcoin personalitites. It might have some multiplication. I might say \u0026amp;ldquo;ethereum\u0026amp;rdquo;. There\u0026amp;rsquo;s also a coinjoin warning. If this is triggering to you, then it\u0026amp;rsquo;s okay to leave. …"},{"uri":"/speakers/brett-seyler/","title":"Brett Seyler","content":""},{"uri":"/speakers/brian-kelly/","title":"Brian Kelly","content":""},{"uri":"/speakers/brian-klein/","title":"Brian Klein","content":""},{"uri":"/speakers/brian-n.-levine/","title":"Brian N. Levine","content":""},{"uri":"/speakers/brian-okeefe/","title":"Brian O’Keefe","content":""},{"uri":"/stanford-blockchain-conference/2020/brick-async-state-channels/","title":"Brick Async State Channels","content":"Brick: Asynchronous State Channels\nhttps://twitter.com/kanzure/status/1230943445398614016\nhttps://arxiv.org/abs/1905.11360\nIntroduction I am going to be presenting on Brick for asynchronous state channels. This is joint work with my colleagues and my advisor. 
So far we have heard many things about payment channels and payment channel networks. They were focused on existing channel solutions. In this work, we focus on a different dimensions. We ask why do payment channels work the way they do, …"},{"uri":"/speakers/bruce-fenton/","title":"Bruce Fenton","content":""},{"uri":"/scalingbitcoin/milan-2016/build-scale-operate/","title":"Build Scale Operate","content":"Build - Scale - Operate: The Three Pillars of the Bitcoin Community\nhttps://twitter.com/kanzure/status/784713580713246720\nGood morning. This is such a beautiful conference. Eric and I are excited to speak with you. We are going to outline what our goals are. We want to be clear that we\u0026amp;rsquo;re not here to define vision. Rather, we\u0026amp;rsquo;re here about how to grow vision around how to grow bitcoin from a technical community perspectives. We\u0026amp;rsquo;re going to share some of our challenges with …"},{"uri":"/tags/build-systems/","title":"build systems","content":""},{"uri":"/stanford-blockchain-conference/2019/building-bulletproofs/","title":"Building Bulletproofs","content":"https://twitter.com/kanzure/status/1090668746073563136\nIntroduction There are two parts to this talk. The first part of the talk will be building, like all the pieces we put into making the implementation of bulletproofs. We\u0026amp;rsquo;ll talk about the group we used, the Ristretto group and ristretto255 group. I\u0026amp;rsquo;ll talk about parallel lelliptic curve arithmetic, and Merlin transcripts for zero-knowledge proofs, and how all these pieces fit together.\nCathie will talk in part two about …"},{"uri":"/bit-block-boom/2019/building-vibrant-bitcoin-communities/","title":"Building Vibrant Bitcoin Communities","content":"Building vibrant bitcoin communities\nI am the content manager at Exodus Wallet. I have been helping to build up a userbase over the last few years. We are going to talk about building vibrant bitcoin communities. This is the antithesis of Michael Goldstein\u0026amp;rsquo;s talk.\nI want to bring positive dynamics to the communities that we participate in. Bitcoin has a few problems we need to overcome. The problem is Craig Wright. Well, the problem is Roger Ver. If you take the personalities out of it, …"},{"uri":"/blockchain-protocol-analysis-security-engineering/2018/bulletproofs/","title":"Bulletproofs","content":"https://twitter.com/kanzure/status/958881877896593410\nhttp://web.stanford.edu/~buenz/pubs/bulletproofs.pdf\nhttps://joinmarket.me/blog/blog/bulletpoints-on-bulletproofs/\nhttps://crypto.stanford.edu/bulletproofs\nIntroduction Good morning everyone. I am Benedikt Bünz and I am going to talk about bulletproofs. This is joint work with myself, Jonathan Bootle, Dan Boneh, Andrew Poelstra, Pieter Wuille, and Greg Maxwell. Bulletproofs are short proofs that we designed originally with the goal of …"},{"uri":"/stanford-blockchain-conference/2019/casper/","title":"Casper the Friendly Ghost: A 'Correct-by-Construction' Blockchain Consensus Protocol","content":"https://twitter.com/kanzure/status/1091460316288868352\nI have one announcement to make before we start the session. If you feel like after all of these talks that the thing you really need is a drink, then there\u0026amp;rsquo;s help. Chainlink is hosting the Stanford Blockchain Conference happy hour at The Patio which is the most famous bar in Palo Alto. It doesn\u0026amp;rsquo;t mean that much, but it\u0026amp;rsquo;s good fun. 
It\u0026amp;rsquo;s at 412 Evans St in Palo Alto, from 5-8pm and it\u0026amp;rsquo;s an open bar. Thank you …"},{"uri":"/speakers/cathie-yun/","title":"Cathie Yun","content":""},{"uri":"/stanford-blockchain-conference/2020/celo-ultralight-client/","title":"Celo Ultralight Client","content":"https://twitter.com/kanzure/status/1230261714039398402\nIntroduction Hello everyone. My name is Marek Olszewski. We are going to be talking about Plumo which is Celo\u0026amp;rsquo;s ultralightweight client protocol. This is joint work with a number of collaborators.\nPlumo Plumo sands for \u0026amp;ldquo;feather\u0026amp;rdquo; in esperanto. I hope most people here are familiar with this graph. Chain sizes are growing. This is a graph of the bitcoin chain size over time. It has continued to grow. We\u0026amp;rsquo;re now at 256 …"},{"uri":"/misc/cftc-bitcoin/","title":"CFTC Bitcoin","content":"Commodity Futures Trading Commission\nhttp://www.onlinevideoservice.com/clients/cftc/video.htm?eventid=cftclive\n# NDFs description Panel I: CFTC Clearing for Non-Deliverable Forwards (NDF)\nThe first panel will discuss whether mandatory clearing should be required of NDF swaps contracts. Each panelist will present and then there will be opportunity for broader discussion and questions. Representatives from the CFTC, the Bank of England, and the European Securities and Markets Authority will also …"},{"uri":"/scalingbitcoin/milan-2016/chainbreak/","title":"Chainbreak","content":" Table of Contents 1. braids 1.1. Properties of a braid system 1.1.1. Inclusivity 1.1.2. Delayed tx fee allocation 1.1.3. Network size measured by graph structure 1.1.4. cohort algorithm / sub-cohort ordering 1.1.5. outstanding problem - merging blocks of different difficulty 1.2. fee sniping 1.2.1. what even is fee sniping? 1.3. definition of a cohort 1.4. tx processing system must process all txs 1.5. can you have both high blocktime and a braid? 1.5.1. problem is double spends 1.5.2. two ways …"},{"uri":"/scalingbitcoin/stanford-2017/changes-without-unanimous-consent/","title":"Changes Without Unanimous Consent","content":"I want to talk about dealing with consensus changes without so-called consensus. I am using consensus in terms of the social aspect, not in terms of the software algorithm consensus for databases. \u0026amp;ldquo;Consensus changes without consensus\u0026amp;rdquo;. If you don\u0026amp;rsquo;t consensus on consensus, then some people are going to follow one chain and another another chain.\nIf you have unanimous consensus, then new upgrades work just fine. Developers write software, miners run the same stuff, and then there …"},{"uri":"/tags/channel-announcements/","title":"Channel announcements","content":""},{"uri":"/tags/channel-commitment-upgrades/","title":"Channel commitment upgrades","content":""},{"uri":"/tags/channel-jamming-attacks/","title":"Channel jamming attacks","content":""},{"uri":"/speakers/charles-cascarilla/","title":"Charles Cascarilla","content":""},{"uri":"/speakers/charles-guillemet/","title":"Charles Guillemet","content":""},{"uri":"/speakers/charlie-lee/","title":"Charlie Lee","content":""},{"uri":"/speakers/chris-church/","title":"Chris Church","content":""},{"uri":"/speakers/chris-tse/","title":"Chris Tse","content":""},{"uri":"/w3-blockchain-workshop-2016/christopher-allen/","title":"Christopher Allen","content":"Whta makes for a great wonderful magical calliberation? Collaboration? Grab one card randomly. Keep it with you. What is this particular pattern of greatness? 
Alex?\nThis is possibly Adam Ludwin\u0026amp;rsquo;s / chain.com\u0026amp;rsquo;s doing?\nWe have been looking at this for a while at NASDAQ. It has a lot of implications for capital markets. I want to talk about the Linq …"},{"uri":"/tags/liquidity-advertisements/","title":"Liquidity advertisements","content":""},{"uri":"/speakers/lous-parker/","title":"Lous Parker","content":""},{"uri":"/tags/low-r-grinding/","title":"Low-r grinding","content":""},{"uri":"/stanford-blockchain-conference/2020/lower-bounds-limits-plasma/","title":"Lower Bounds for Off-Chain Protocols: Exploring the Limits of Plasma","content":"https://twitter.com/kanzure/status/1230183338746335233\nIntroduction Okay, thank you. The title of my talk is exploring the limits of plasma. This is joint work with others.\nPlasma Let me start by telling you about Plasma. It\u0026amp;rsquo;s a family of layer 2 solutions for scaling blockchain. It was proposed in 2017 by Poon and Buterin. It\u0026amp;rsquo;s a family of protocols. There\u0026amp;rsquo;s plenty of different plasmas. This is a diagram from more than a year ago. There are many variants. It\u0026amp;rsquo;s an active …"},{"uri":"/speakers/mahnush-movahedi/","title":"Mahnush Movahedi","content":""},{"uri":"/speakers/marco-santori/","title":"Marco Santori","content":""},{"uri":"/speakers/marek-olszewski/","title":"Marek Olszewski","content":""},{"uri":"/speakers/mark-erhart/","title":"Mark Erhart","content":""},{"uri":"/speakers/mark-friedenbach/","title":"Mark Friedenbach","content":""},{"uri":"/speakers/mark-wetjen/","title":"Mark Wetjen","content":""},{"uri":"/speakers/marshall-long/","title":"Marshall Long","content":""},{"uri":"/speakers/martin-lundfall/","title":"Martin Lundfall","content":""},{"uri":"/w3-blockchain-workshop-2016/matsuo/","title":"Matsuo","content":"bsafe.network\nGiving trust by security evaluation and bsafe.network\nThis is a severe problem. How can we trust the output of blockchain technology? We have several security issues. There was the huge DAO attack from two weeks ago. We also have other problems, such as protocol specification, key management, implementation, operation, and vulnerability handling. Also key renewal and key revocation issues.\nISO/IEC 27000 ISO/IEC 15408 ISO/IEC 29128 ISO/IEC 29128 ISO/IEC, NIST IETF We already have …"},{"uri":"/tags/matt/","title":"MATT","content":""},{"uri":"/speakers/matt-weinberg/","title":"Matt Weinberg","content":""},{"uri":"/speakers/matt-weiss/","title":"Matt Weiss","content":""},{"uri":"/speakers/matthew-mezinskis/","title":"Matthew Mezinskis","content":""},{"uri":"/speakers/maurice-herlihy/","title":"Maurice Herlihy","content":""},{"uri":"/speakers/max-keidun/","title":"Max Keidun","content":""},{"uri":"/cryptoeconomic-systems/2019/mechanism-design/","title":"Mechanism Design","content":"\u0026amp;ndash; Disclaimer \u0026amp;ndash;\nThese are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.\nI sometimes add annotations to the transcription text. 
These will always be denoted by a standard …"},{"uri":"/speakers/megan-chen/","title":"Megan Chen","content":""},{"uri":"/speakers/melanie-shapiro/","title":"Melanie Shapiro","content":""},{"uri":"/speakers/meltem-demirors/","title":"Meltem Demirors","content":""},{"uri":"/speakers/meni-rosenfeld/","title":"Meni Rosenfeld","content":""},{"uri":"/tags/merkle-tree-vulnerabilities/","title":"Merkle tree vulnerabilities","content":""},{"uri":"/decentralized-financial-architecture-workshop/metadata/","title":"Metadata","content":"website: https://dfa2019.bitcoinedge.org/\ntwitter: https://twitter.com/thebitcoinedge\nhttps://gnso.icann.org/sites/default/files/file/field-file-attach/presentation-pre-icann65-policy-report-13jun19-en.pdf\n"},{"uri":"/edgedevplusplus/2019/metadata/","title":"Metadata","content":"https://bitcoinedge.org/slack\n"},{"uri":"/speakers/michael-more/","title":"Michael More","content":""},{"uri":"/speakers/michael-straka/","title":"Michael Straka","content":""},{"uri":"/speakers/michael-walfish/","title":"Michael Walfish","content":""},{"uri":"/speakers/mikael-dubrovsky/","title":"Mikael Dubrovsky","content":""},{"uri":"/stanford-blockchain-conference/2019/miniscript/","title":"Miniscript","content":"https://twitter.com/kanzure/status/1091116834219151360\nJeremy: Our next speaker really needs no introduction. However, I am obligated to deliver an introduction anyway. I\u0026amp;rsquo;m not just trying to avoid pronouncing his name which is known to be unpronounceable. I am pleased to welcome Pieter Wuille to stage to present on miniscript.\nIntroduction Thanks, Jeremy. I am Pieter Wuille. I work at Blockstream. I do various things. Today I will be talking about miniscript, which is a joint effort …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2018/","title":"Mit Bitcoin Expo 2018","content":" Improving Bitcoin Smart Contract Efficiency Tadge Dryja Taproot Contract protocols Dlc "},{"uri":"/stanford-blockchain-conference/2020/mixicles/","title":"Mixicles: Simple Private Decentralized Finance","content":"https://twitter.com/kanzure/status/1230666545803563008\nhttps://chain.link/mixicles.pdf\nIntroduction This work was done in my capacity as technical advisor to Chainlink, and not my Cornell or IC3 affiliation. Mixicle is a combination of the words mixers and oracles. You will see why in a moment.\nDeFi and privacy As of a couple of weeks ago, DeFi passed the $1 billion mark which is to say there\u0026amp;rsquo;s over $1 billion of cryptocurrency in DeFi smart contracts today. This is great, but wait just a …"},{"uri":"/speakers/mooly-sagiv/","title":"Mooly Sagiv","content":""},{"uri":"/scalingbitcoin/montreal-2015/more-core-devs/","title":"More Core Devs","content":"10x the number of core devs\nObjectives:\n10x would only be like \u0026amp;lt;1000 bitcoin developers 7000 fedora developers how do we grow developers? how many tor developers? how many linux developers?\npoaching from each other is not cool\nThere\u0026amp;rsquo;s not enough people with the necessary skills. There\u0026amp;rsquo;s no clear way to train new people. Apprenticeship works, but it scales very very poorly. It\u0026amp;rsquo;s better than a static fixed number of developers. In the linux industry, started in linux at a time …"},{"uri":"/stanford-blockchain-conference/2020/motoko-language/","title":"Motoko, the language for the Internet Computer","content":"Introduction Thanks, Byron. Glad to be here. My name is Dominic Williams. I am founder of the internet computer project at Dfinity Foundation. 
Last year we talked about consensus protocols. Today we are talking about tools for building on top of the internet computer which is great news and shows the progress we have made. I\u0026amp;rsquo;m going to give some introductory context, and then our senior researcher from our languages division will talk about Motoko. Is it working? Alright.\nThe internet is …"},{"uri":"/scalingbitcoin/tokyo-2018/multi-party-channels-in-the-utxo-model-challenges-and-opportunities/","title":"Multi Party Channels In The Utxo Model Challenges And Opportunities","content":"Multi-party channels in the UTXO model: Challenges and opportunities\nOlaoluwa Osuntokun (Lightning Labs) (roasbeef)\nhttps://twitter.com/kanzure/status/1048468663089618944\nIntroductions Hi. So my name is Laolu. I am also known as roasbeef and I am co-founder of Lightning Labs. I am going to go over some cool research and science fiction. In the actual implementation for these things and cool to discuss and get some discussions around this. first I am going to talk about single-party channels and …"},{"uri":"/speakers/nathan-wilcox/","title":"Nathan Wilcox","content":""},{"uri":"/cryptoeconomic-systems/2019/near-misses/","title":"Near Misses","content":"Near misses: What could have gone wrong\nIntroduction Thank you. I am Ethan Heilman. I am a research whose has done a bunch of work in security of cryptocurrencies. I am also CTO of Arwen which does secure atomic swaps. I am a little sick today so please excuse my coughing, wheezing and drinking lots of water.\nBitcoin scary stories The general outline of this talk is going to be \u0026amp;ldquo;scary stories in bitcoin\u0026amp;rdquo;. Bitcoin has a long history and many of these lessons are applicable to other …"},{"uri":"/speakers/neha-narula/","title":"Neha Narula","content":""},{"uri":"/breaking-bitcoin/2019/neutrino/","title":"Neutrino, Good and Bad","content":"\u0026amp;hellip; 228C D70C FAA6 17E3 2679 E455\nIntroduction I want to start this talk with a question for the audience. How many of you have a mobile wallet on your phone right now? How many know the security model that the wallets are using? Okay, how many of you trust those security assumptions? Okay, less of you. I am going to talk about Neutrino. In order to talk about it, let\u0026amp;rsquo;s talk about before Neutrino.\nSimplified payment verification Let\u0026amp;rsquo;s start with simplified payment verification. …"},{"uri":"/stanford-blockchain-conference/2019/thundercore/","title":"New and Simple Consensus Algorithms for ThunderCore’s Main-Net","content":"https://twitter.com/kanzure/status/1090773938479624192\nIntroduction Synchronous with a chance of partition tolerance. Thank you for inviting me. I am going to be talking about some new updates. It\u0026amp;rsquo;s joint work with my collaborators. The problem is state-machine replication, sometimes called blockchain or consensus. These terms are going to be the same thing in this talk. The nodes are trying to agree on a linearly updated transaction log.\nState-machine replication We care about consistency …"},{"uri":"/speakers/nic-carter/","title":"Nic Carter","content":""},{"uri":"/speakers/nick-spooner/","title":"Nick Spooner","content":""},{"uri":"/stanford-blockchain-conference/2020/no-incentive/","title":"No Incentive","content":"The best incentive is no incentive\nhttps://twitter.com/kanzure/status/1230997662339436544\nIntroduction This is the last session of the conference. Time flies when you\u0026amp;rsquo;re having fun. 
We\u0026amp;rsquo;re going to be starting with David Schwartz who is one of the co-creators of Ripple and he is going to be talking about a controversial topic that \u0026amp;ldquo;the best incentive is no incentive\u0026amp;rdquo;.\nI am David Schwartz as you just heard I am one of the co-creators of Ripple. Thanks to Dan Boneh for …"},{"uri":"/scalingbitcoin/montreal-2015/non-currency-applications/","title":"Non Currency Applications","content":"Scalability of non-currency applications\nI am from Princeton University. I am here to talk about the scalability issues that non-currency applications of bitcoin have that might be a little different than regular payments that go through the bitcoin network. I put this together with my advisor, Arvind. The reason why I want to talk about this is because when people talk about how bitcoin will have to scale is that people throw out a thing about Visa processing 20,000 transactions per second or …"},{"uri":"/speakers/omer-shlomovits/","title":"Omer Shlomovits","content":""},{"uri":"/scalingbitcoin/tokyo-2018/omniledger/","title":"Omniledger","content":"Omniledger: A secure, scale-out, decentralized ledger via sharding\nEleftherios Kokoris-Kogias, Philipp Jovanovic, Linus Gasser, Nicolas Gailly, Ewa Syta and Bryan Ford (Ecole Polytechnique Fédérale de Lausanne)\n(LefKok)\npaper: https://eprint.iacr.org/2017/406.pdf\nhttps://twitter.com/kanzure/status/1048733316839432192\nIntroduction I am Lefteris. I am a 4th year PhD student. I am here to talk about omniledger. It\u0026amp;rsquo;s our architecture for a blockchain. It\u0026amp;rsquo;s not directly related to bitcoin …"},{"uri":"/tags/onion-messages/","title":"Onion messages","content":""},{"uri":"/tags/op-cat/","title":"OP_CAT","content":""},{"uri":"/tags/op-codeseparator/","title":"OP_CODESEPARATOR","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/eric-martindale/","title":"Open source - Beyond Bitcoin Core","content":"Open source: Beyond Bitcoin Core\nHe is at Bitpay, working on copay, bitcore, foxtrot, and their blockchain explorer.\nFirst of all, a little bit about Bitpay. We were actually founded in May 2011. We have been around for some time. At the time, MtGox was actually still a viable exchange. The Bitcoin price was down at about $1 or a little more. And we had a grand total of two merchants.\nSo what Bitpay does is that it accepts Bitcoin on behalf of a merchant and allows them to transform that into a …"},{"uri":"/bitcoin-magazine/bitcoin-2024/open-source-mining/","title":"Open Source Mining","content":""},{"uri":"/baltic-honeybadger/2018/opening/","title":"Opening","content":"Opening remarks for Baltic Honeybadger 2018\ntwitter: https://twitter.com/search?f=tweets\u0026amp;amp;vertical=default\u0026amp;amp;q=bh2018\nhttps://twitter.com/kanzure/status/1043384689321566208\nOkay guys, we\u0026amp;rsquo;re going to start in five minutes. So stay here. She is going to introduce speakers and help everyone out. Thanks everyone for coming. It\u0026amp;rsquo;s a huge crowd this year. I wanted to make a few technical announcements. First of all, just remember that we should be excellent to each other. We have …"},{"uri":"/breaking-bitcoin/2019/opening-remarks/","title":"Opening Remarks","content":"Opening remarks\nhttps://twitter.com/kanzure/status/1137273436332576768\nWe have a nice six second intro video. There we go. Woo. Alright. I think I can display my notes here. Yay. Good morning. First thing I need to tell you is the wifi password if you want to use the venue wifi. 
So, outside of that, thank you all for coming. This year we have a little fewer people than usual, but that\u0026amp;rsquo;s bear market and all. So we have to change our size of conference depending on the price of bitcoin. …"},{"uri":"/stanford-blockchain-conference/2019/opening-remarks/","title":"Opening Remarks","content":"Opening remarks\nWe will be starting in 10 minutes. I would like to ask the first speaker in the first session for grin to walk to the back of the room to get your microphone ready for the talk.\nIt\u0026amp;rsquo;s 9 o\u0026amp;rsquo;clock and time to get started. Could everyone please get to their seats? Alright, let\u0026amp;rsquo;s do this. Welcome everybody. This is the Stanford Blockchain Conference. This is the third time we\u0026amp;rsquo;re running this conference. I\u0026amp;rsquo;m looking forward to the program. Lots of technical …"},{"uri":"/coindesk-consensus-2016/opening-remarks-state-of-blockchain-ryan-selkis/","title":"Opening Remarks State Of Blockchain","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nOpening remarks \u0026amp;amp; state of blockchain\nRyan Selkis, Coindesk\nGarrick Hileman, The Cambridge Centre for Alternative Finance\nLadies and gentlemen, the session will begin in 5 minutes. Please take your seats. The session is about to begin.\nPlease welcome Ryan Selkis from CoinDesk. (obnoxious music plays, overly dramatic)\nWow is all I can say about what\u0026amp;rsquo;s happening over the last couple of months. This is absolutely incredible. Nice …"},{"uri":"/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/","title":"Optimizing Fee Estimation Via Mempool State","content":"I am a Bitcoin Core developer and I work at DG Lab. Today I would like to talk about fees. There\u0026amp;rsquo;s this weird gap between\u0026amp;ndash; there are two things going on. People complain about high fees. But people are confused about why Bitcoin Core is giving high fees but if you set fees manually you can get a much lower fee and get a transaction mined pretty fast. I started to look into this in detail and did simulations and a bunch of stuff.\nOne of the ideas I had was to use the mempool state to …"},{"uri":"/speakers/or-sattath/","title":"Or Sattath","content":""},{"uri":"/tags/out-of-band-fees/","title":"Out-of-band fees","content":""},{"uri":"/tags/output-linking/","title":"Output linking","content":""},{"uri":"/scalingbitcoin/hong-kong-2015/overview-of-bips-necessary-for-lightning/","title":"Overview Of BIPs Necessary For Lightning","content":"Scalability of Lightning with different BIPs and some back-of-the-envelope calculations.\nslides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/1_layer2_2_dryja.pdf\nI don\u0026amp;rsquo;t have time to introduce the idea of zero-confirmation transactions and lightning. We have given talks about this. There\u0026amp;rsquo;s some documentation available. There\u0026amp;rsquo;s an implementation being worked on. I think it helps a lot with scalability by keeping some transactions off the blockchain. 
Ideally, many …"},{"uri":"/speakers/patricia-estevao/","title":"Patricia Estevao","content":""},{"uri":"/speakers/patrick-mccorry/","title":"Patrick McCorry","content":""},{"uri":"/speakers/patrick-murck/","title":"Patrick Murck","content":""},{"uri":"/speakers/paul-vigna/","title":"Paul Vigna","content":""},{"uri":"/tags/payment-probes/","title":"Payment probes","content":""},{"uri":"/tags/payment-secrets/","title":"Payment secrets","content":""},{"uri":"/tags/peer-storage/","title":"Peer storage","content":""},{"uri":"/decentralized-financial-architecture-workshop/perspective/","title":"Perspective","content":" Regulators should focus on consumer protection\nRegulators should focus on operating (or overseeing) rating agencies, and formal review of proposed standards\nFungibility is important, and many regulatory goals are contrary to fungibility, indicating that their forms of money will be inferior and their economy will be left behind.\nAnonymity should be a core value\u0026amp;ndash; the society with the most privacy will have the most advantages.\nThe importance of regulatory sandboxes (which should be …"},{"uri":"/speakers/peter-rizun/","title":"Peter Rizun","content":""},{"uri":"/w3-blockchain-workshop-2016/physical-assets/","title":"Physical Assets","content":"Physical assets, archival science\nWe are going to spend 20 minutes talkign about the topics. Then we will spend 10 minutes getting summaries. Then we will spend 20 minutes summarizing everything.\nWhy is archival science important? I have a background in finance too. Blockchain securitizes things as well. It combines value and memory in one. Also, people have studied archives and how to get records and how to store records.\nHashkloud. Identity verification and KYC and AML management platform. We …"},{"uri":"/speakers/pierre-roberge/","title":"Pierre Roberge","content":""},{"uri":"/scalingbitcoin/tokyo-2018/playing-with-fire-adjusting-bitcoin-block-subsidy/","title":"Playing With Fire: Adjusting Bitcoin Block Subsidy","content":"slides: https://github.com/ajtowns/sc-btc-2018\nhttps://twitter.com/kanzure/status/1048401029148991488\nIntroduction First an apology about the title. I saw some comments on twitter after the talk was announced and they were speculating that I was going to break the 21 million coin limit. But that\u0026amp;rsquo;s not the case. Rusty has a civil war thesis: the third era will start with the civil war, the mathematics of this situation seem einevitable. As the miners and businesses with large transaction …"},{"uri":"/stanford-blockchain-conference/2020/plonk/","title":"Plonk","content":"PLONK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge\nAriel Gabizon\nhttps://twitter.com/kanzure/status/1230287617972768769\npaper: https://eprint.iacr.org/2019/pdf\nIntroduction One of the things you need when you design a zk proof system is that you need to know about polynomials. Two polynomials if they are the same then they are the same everywhere and if they are different then they are different almost everywhere. 
The other thing you need is how to get …"},{"uri":"/speakers/prakash-santhana/","title":"Prakash Santhana","content":""},{"uri":"/speakers/pramod-viswanath/","title":"Pramod Viswanath","content":""},{"uri":"/baltic-honeybadger/2018/present-and-future-tech-challenges-in-bitcoin/","title":"Present And Future Tech Challenges In Bitcoin","content":"1 on 1: Present and future tech challenges in bitcoin\nhttps://twitter.com/kanzure/status/1043484879210668032\nPR: Hi everybody. I think I have the great fortune to moderate this panel in the sense that we have really great questions thought by the organizers. Essentially we want to ask Adam and Peter what are their thoughts on the current and the future tech challenges for bitcoin. I\u0026amp;rsquo;m just going to start with that question, starting with Peter.\nPT: Oh, we better have a good answer for this …"},{"uri":"/stanford-blockchain-conference/2020/prism/","title":"Prism - Scaling bitcoin by 10,000x","content":"https://twitter.com/kanzure/status/1230634530110730241\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/prism/\nIntroduction Hello everyone. I am excited to be here to talk about our implementation and evaluation of the Prism consensus protocol which achieves 10,000x better performance than bitcoin. This is a multi-university collaboration between MIT, Stanford and a few others. I\u0026amp;rsquo;d like to thank my collaborators, including my advisor, and other people.\nBitcoin performance We …"},{"uri":"/scalingbitcoin/tel-aviv-2019/prism/","title":"Prism: Scaling Bitcoin to Physical Limits","content":"Introduction Prism is a consensus protocol which is a one-stop solution to all of bitcoin\u0026amp;rsquo;s scaling problems. The title is 7 to 70000 transactions per second. We have a full stack implementation of prism running and we were able to obtain 70,000 transactions/second. Before we get started, here are my collaborators.\nThis workshop is on scaling bitcoin so I won\u0026amp;rsquo;t spend any time justifying why we would want to scale bitcoin. So let\u0026amp;rsquo;s talk about performance.\nBitcoin performance …"},{"uri":"/scalingbitcoin/montreal-2015/privacy-and-fungibility/","title":"Privacy and Scalability","content":"Even though Bitcoin is the worst privacy system ever, everyone in the community very strongly values privacy.\nThere are at least three things: privacy censorship resistance fungibility\nThe easiest way to censor things is to punish communication, not prevent communication. Privacy is the weakest link in censorship resistance. Fungibility is an absolute necessity for any medium of exchange. The properties of money include fungibility. Without privacy you may not be able to have fungibility. …"},{"uri":"/w3-blockchain-workshop-2016/privacy-anonymity-and-identity-group/","title":"Privacy Anonymity And Identity Group","content":"Anonymous identity\nAnother part of it is that all of those IoT things have firmware that can update. The moment that someone gets a key to update this, you can hack the grid by making rapid changes on power usage which actually destroys the \u0026amp;hellip;. do the standards make things less secure? It should be illegal to design broken cryptosystems. Engineers should go to jail. It\u0026amp;rsquo;s too dangerous. 
Do you think there should be smart grids at all?\nWell if the designs are based on digital security, …"},{"uri":"/stanford-blockchain-conference/2019/privacy-preserving-multi-hop-locks/","title":"Privacy Preserving Multi Hop Locks","content":"Privacy preserving multi-hop locks for blockchain scalability and interoperability\nhttps://twitter.com/kanzure/status/1091026195032924160\npaper: https://eprint.iacr.org/2018/472.pdf\nhttp://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/multi-hop-locks/\nIntroduction This is joint work with my collaborators. Permissionless blockchains have some issues. The permissionless nature leads to the transaction rate that they have. This limits widespread adoption of blockchain technology like bitcoin …"},{"uri":"/scalingbitcoin/tel-aviv-2019/proof-of-necessary-work/","title":"Proof of necessary work: Succinct state verification with fairness guarantees","content":"Introduction I am going to show you proof of necessary work today. We use proof of work in a prototype. This is joint work with my collaborators.\nProblems We wanted to tackle the problem of bitcoin blockchain size increases. Every 10 minutes, a block is added to the blockchain and the blockchain grows linearly in size over time. Initial block download increases over time.\nProof of necessary work We use proof-of-work to verify transactions. We allow lite clients to verify state with minimal …"},{"uri":"/stanford-blockchain-conference/2020/proof-of-necessary-work/","title":"Proof of Necessary Work: Succinct State Verification with Fairness Guarantees","content":"https://twitter.com/kanzure/status/1230199429849743360\nhttps://twitter.com/kanzure/status/1230192993786720256\nSee also https://eprint.iacr.org/2020/190\nPrevious talk: https://btctranscripts.com/scalingbitcoin/tel-aviv-2019/proof-of-necessary-work/\nIntroduction Let\u0026amp;rsquo;s start with something familiar. When you have a new lite client, or any client that wants to trustlessly connect to the network and joins the network for the first time\u0026amp;hellip; they see this: they need to download a lot of data, …"},{"uri":"/tags/proof-of-payment/","title":"Proof of payment","content":""},{"uri":"/tags/proof-of-reserves/","title":"Proof of reserves","content":""},{"uri":"/w3-blockchain-workshop-2016/provenance-groups/","title":"Provenance Groups","content":"Give the name of your tech topic table. You have three minutes to summarize. When you have one minute to summarize, I will give you the one minute warning. When you have zero minutes left, you will be thrown out the window.\nMedia rights We talked about media rights and payments. The ability to figure out the incentive structure for whether we can compensate people who create page views with people who have view rights and how that all works. One of the ways this works out is \u0026amp;hellip; the …"},{"uri":"/bitcoin-core-dev-tech/2015-02/research-and-development-goals/","title":"R\u0026D Goals \u0026 Challenges","content":"We often see people saying they are testing the waters, they fixed a typo, they made a tiny little fix that doesn\u0026amp;rsquo;t impact much, they are getting used to the process. They are finding that it\u0026amp;rsquo;s really easy to contribut to Bitcoin Core. 
You code your changes, you submit your changes, there\u0026amp;rsquo;s not much to it.\nThere\u0026amp;rsquo;s a difference, and the lines are fuzzy and undefined, and you can make a change to Core that changes a spelling error or a change to policy or consensus rules, …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/r3/","title":"R3","content":"Distributed Ledger Group\nR3\nI work at R3. It manages the distributed ledger group, of 42 banks interested in using blockchain or distributed ledger tech. People wonder when I tell them that I work for a consortium that wants to use electronic ledgers. What does that have to do with capital markets?\nLedgers can track transactions. Typically in a capital market context, this responsibility falls to the backoffice. After a trade, you finalize it and actually transfer value. In te case of trading …"},{"uri":"/coindesk-consensus-2016/reaching-consensus-open-blockchains/","title":"Reaching Consensus on Open Blockchains","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nReaching consensus on open blockchains\nModerator- Pindar Wong\nGavin Andresen, MIT Digital Currency Initiative\nVitalik Buterin, Ethereum Foundation\nEric Lombrozo, Ciphrex and Bitcoin Core\nNeha Narula, MIT Media Lab Digital Currency Initiative\nPlease silence your cell phone during this session. Thank you. Please silence your cell phones during this session. Thank you. Ladies and gentlemen, please take your seats. The session is about to …"},{"uri":"/rebooting-web-of-trust/","title":"Rebooting Web Of Trust","content":" 2019 Prague "},{"uri":"/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/","title":"Redesigning Bitcoin Fee Market","content":"https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html\nhttps://www.reddit.com/r/Bitcoin/comments/72qi2r/redesigning_bitcoins_fee_market_a_new_paper_by/\npaper: https://arxiv.org/abs/1709.08881\nHe will be exploring alternative auction markets.\nHello. This is joint work with Aviv Zohar and I have just moved. And Ron Lavi. And I am going to be talking about tehcniques from\u0026amp;hellip; auction theory.. to rethink how fees in bitcoin are done.\nJust a fast …"},{"uri":"/tags/redundant-overpayments/","title":"Redundant overpayments","content":""},{"uri":"/tags/replacement-cycling/","title":"Replacement cycling","content":""},{"uri":"/cryptoeconomic-systems/2019/reproducible-builds/","title":"Reproducible Builds","content":"Reproducible builds, binaries and trust\nCarl Dong (Chaincode Labs)\nIntroduction I am Carl and I work at Chaincode Labs and I will be talking about software, binaries and trust. I am happy that Ethan talked about his subject because it ties into my talk. For the purposes of this talk, assume that everything is perfect and there\u0026amp;rsquo;s no bugs and no CVEs and all the things that Ethan talked about don\u0026amp;rsquo;t exist. Basically, imagination. I am here to talk about that even if this is the case, …"},{"uri":"/scalingbitcoin/tokyo-2018/reproducible-lightning-benchmark/","title":"Reproducible Lightning Benchmark","content":"Reproducible lightning benchmark\nhttps://github.com/dgarage/LightningBenchmarks\nhttps://twitter.com/kanzure/status/1048760545699016705\nIntroduction Thanks for the introduction, Jameson. I feel like a rock star now. So yeah, I won\u0026amp;rsquo;t introduce myself. I call myself a code monkey the reason is that a lot of the talks in scaling bitcoin can surprise people but I understood very few of them. 
The reason is that I am more of an application developer. Basically, I try to understand a lot of the …"},{"uri":"/tags/responsible-disclosures/","title":"Responsible disclosures","content":""},{"uri":"/scalingbitcoin/montreal-2015/reworking-bitcoin-core-p2p-code-for-robustness-and-event-driven/","title":"Reworking Bitcoin Core P2P Code For Robustness And Event Driven","content":"That is me. I want to apologize right off. I am here to talk about reworking Bitcoin Core p2p code. This is a Bitcoin Core specific topic. I have been working on some software for the past few weeks to talk about working things rather than theory. The presentation is going to suffer because of this.\nBitcoin Core has a networking stack that dates back all the way to the initial implementation from Satoshi or at least it\u0026amp;rsquo;s been patched and hacked on ever since. It has some pretty serious …"},{"uri":"/speakers/riccardo-casatta/","title":"Riccardo Casatta","content":""},{"uri":"/speakers/richard-myers/","title":"Richard Myers","content":""},{"uri":"/speakers/robert-schwinker/","title":"Robert Schwinker","content":""},{"uri":"/speakers/rodrigo-buenaventura/","title":"Rodrigo Buenaventura","content":""},{"uri":"/speakers/roman-snitko/","title":"Roman Snitko","content":""},{"uri":"/speakers/ron-rivest/","title":"Ron Rivest","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/rootstock/","title":"Rootstock","content":"Sergio is co-founder and chief scientist at RSK Labs. The whitepaper was pretty good. I am happy that he is here. Thank you. Just give me a minute.\nAlicia Bendhan Lous Parker BitDevsNYC\nYes, I use powerpoint. Please don\u0026amp;rsquo;t read my emails.\nThanks everyone for staying here and not going to the next room. Thanks for the organizers for inviting me. Today I am going to talk about Rootstock. It\u0026amp;rsquo;s a codename for a smart contracting platform that we are developing for SKY labs, which is a …"},{"uri":"/scalingbitcoin/montreal-2015/roundgroup-roundup-1/","title":"Roundgroup Roundup 1","content":"Roundtable roundup review 1\nFuture of SPV Technology So hopefully I summarized this reasonably. There were a number of things that came up in our discussion. It would help SPV a lot if there were UTXO commitments in blocks. This can be done with a soft-fork. It has implementation cost difficulties. There are some ugly things you could do about how the trees are formed, so that you don\u0026amp;rsquo;t have to repopulate the whole thing every single block. There might be some happy place where the …"},{"uri":"/scalingbitcoin/montreal-2015/roundgroup-roundup-2/","title":"Roundgroup Roundup 2","content":"Communicating without official structures Roundgroup roundup day 2\nHi guys, okay I am going to try to find my voice here. I am losing it. I ran the session on communication without official structures. We went through the various channels through which people are communicating in this community like forums, blogs, irc, reddit, twitter, email, and we talked a little about them each. There\u0026amp;rsquo;s also the conferences and meetings. There are different forms of communication in text than in person. …"},{"uri":"/w3-blockchain-workshop-2016/royalties/","title":"Royalties","content":"Chris Tse\nMonegraph person\nI am going to talk about rights systems. A system that lets us trade rights, like to make movie off of a character, off of the things that Disney makes whenever I buy toys for my two year-old son. This is used in intellectual property. 
Monegraph is known as the media rights blockchain-ish company. What we discovered about how to model rights, but why is it important, kind of extending and maybe contradicting the point further that the rights are not just the identity …"},{"uri":"/speakers/ryan-selkis/","title":"Ryan Selkis","content":""},{"uri":"/misc/safecurves-choosing-safe-curves-for-elliptic-curve-cryptography-2014/","title":"Safecurves - Choosing Safe Curves For Elliptic Curve Cryptography (2014)","content":"Daniel J. Bernstein (djb) and Tanja Lange\nSchmooCon 2014\nvideo 2 (same?): https://www.youtube.com/watch?v=maBOorEfOOk\u0026amp;amp;list=PLgO7JBj821uGZTXEXBLckChu70kl7Celh\u0026amp;amp;index=91\nVideo intro text There are several different standards covering selection of curves for use in elliptic-curve cryptography (ECC). Each of these standards tries to ensure that the elliptic-curve discrete-logarithm problem (ECDLP) is difficult. ECDLP is the problem of finding an ECC user\u0026amp;rsquo;s secret key, given the …"},{"uri":"/speakers/saifedean-ammous/","title":"Saifedean Ammous","content":""},{"uri":"/speakers/sandra-ro/","title":"Sandra Ro","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/peter-todd-scalability/","title":"Scalability","content":"There we go. Yeah. Looks good. Thank you.\nSo I wanted to talk about scalability and I wanted to give two opposing views of it. The first is really, scaling Bitcoin up is really really easy. The underlying architecture of Bitcoin is designed in a way that makes it easy to scale. You have your blockchain, and you have your blocks at the top which are 80 bytes data structures. They form a massive long chain all the way back to the genesis block when Bitcoin was created. For each block, those blocks …"},{"uri":"/scalingbitcoin/milan-2016/timestamping/","title":"Scalable and Accountable Timestamping","content":"slides https://scalingbitcoin.org/milan2016/presentations/D1%20-%20A%20-%20Riccardo.pdf\nhttps://twitter.com/kanzure/status/784773221610586112\nHi everybody. I attended Montreal and Hong Kong. It\u0026amp;rsquo;s a pleasurable to be here in Milan giving a talk about scaling bitcoin. I am going to be talking about scalable and accountable timestamping. Why timestamping at this conference? Then I will talk about aggregating timestamps. Also there needs to be a timestamping proof format. I will describe two …"},{"uri":"/stanford-blockchain-conference/2020/scalable-rsa-modulus-generation/","title":"Scalable RSA Modulus Generation with Dishonest Majority","content":"https://twitter.com/kanzure/status/1230545603605585920\nIntroduction Groups of unknown order are interesting cryptographic primitives. You can commit to integers. If you commit to a very large integer, you have to do a lot of work to do the commitment so you have a sequentiality assumption useful for VDFs. But then you have this compression property where if you take a large integer with a lot of information in it, you can compress it down to something small. It also has a nice additive …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/scaling-debate-is-a-proxy-battle-over-centralization/","title":"Scaling Debate Is A Proxy Battle Over Centralization","content":"I really appreciated the fact that\u0026amp;hellip;. I really appreciated Cory\u0026amp;rsquo;s talk. I quoted someone in my talk. It was a fairly recent quote. His whole talk was speaking about governance. There\u0026amp;rsquo;s a lot of governing issues coming into Bitcoin. 
We like the solving the whole world with technical.\nThe word that I am fascinated with is the one highlighted here, the word \u0026amp;ldquo;consensus\u0026amp;rdquo;. I\u0026amp;rsquo;m just going to reach out to you guys. Does anyone want to make a stand at telling me what …"},{"uri":"/speakers/scott-manuel/","title":"Scott Manuel","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/scriptless-lotteries/","title":"Scriptless Lotteries","content":"Scriptless lotteries on bitcoin from oblivious transfer\nLloyd Fournier (lloyd.fourn@gmail.com)\nhttps://twitter.com/kanzure/status/1171717583629934593\nIntroduction I have no affiliations. I\u0026amp;rsquo;m just some guy. You\u0026amp;rsquo;re trusting a guy to talk about cryptography who got the date wrong. We\u0026amp;rsquo;re doing scriptless bitcoin lotteries from oblivious transfer.\nLotteries Imagine a trusted party who conducts lotteries between Alice and Bob. The goal of the lottery protocol is to use a …"},{"uri":"/grincon/2019/scriptless-scripts-with-mimblewimble/","title":"Scriptless Scripts With Mimblewimble","content":"Introduction Hi everyone. I am Andrew Poelstra. I am the research director at Blockstream. I want to talk about deploying scriptless scripts, wihch is something I haven\u0026amp;rsquo;t talked much about over the past year or two.\nHistory Let me give a bit of history about mimblewimble. As many of you know, this was dead-dropped anonymously in the middle of 2016 by someone named Tom Elvis Jedusor which is the French name for Voldemort. It had no scripts in it. The closest thing to bitcoin script were …"},{"uri":"/speakers/sean-neville/","title":"Sean Neville","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/secure-fountain-architecture/","title":"Secure Fountain Architecture","content":"A secure fountain architecture for slashing storage costs in blockchains\nSwanand Kadhe (UC Berkeley)\nhttps://twitter.com/kanzure/status/1172158545577594880\npaper: https://arxiv.org/abs/1906.12140\nIntroduction I know this is the last formal talk of the day so thank you so much for being here. This is joint work with my collaborators.\nIs storage really an issue for bitcoin? Bitcoin\u0026amp;rsquo;s blockchain size is only 238 GB. So why is storage important? But blockchain size is a growing problem. …"},{"uri":"/breaking-bitcoin/2019/security-attacks-decentralized-mining-pools/","title":"Security and Attacks on Decentralized Mining Pools","content":"https://twitter.com/kanzure/status/1137373752038240262\nIntroduction Hi, I am Alexei. I am a research student at Imperial College London and I will be talking about the security of decentralized mining pools.\nMotivation The motivation behind this talk is straightforward. I think we can all agree that to some extent the hashrate in bitcoin has seen some centralization around large mining pools, which undermines the base properties of censorship resistance in bitcoin which I think bitcoin solves in …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/security-and-usability/","title":"Security And Usability","content":" Melanie Shapiro Alan Reiner Kristov Atlas Elizabeth Stark (moderator) and maybe Melanie Shapiro? malani? Elizabeth: I think this is a highly relevant discussion. I talked a bit about regulation. Regulation might be the number threat, but security might be the number two threat. I am excited to have a panel of leaders in this space who are actively building in this ecosystem. 
I am going to ask each to do a quick intro.\nAlan: I am the original creator of Armory and the Armory Bitcoin wallet. Last …"},{"uri":"/breaking-bitcoin/2019/lightning-network-routing-security/","title":"Security Aspects of Lightning Network Routing","content":"D146 D0F6 8939 4362 68FA 9A13 0E26 BB61 B76C 4D3A\nhttps://twitter.com/kanzure/status/1137740282139676672\nIntroduction Together with my colleagues I am building lnd. It\u0026amp;rsquo;s one of the implementations of lightning. In routing, there\u0026amp;rsquo;s the sender of the payment, the intermediate routing nodes that forward payments, and a receiver. I would like to scope this down to looking at security just from the perspective of the sender.\nGoals of LN Why are we doing all of this? The thing we\u0026amp;rsquo;re …"},{"uri":"/scalingbitcoin/hong-kong-2015/security-assumptions/","title":"Security Assumptions","content":"Hi, welcome back.\nI am a developer for libsecp256k1. It\u0026amp;rsquo;s a library that does the underlying traditional cryptography used in Bitcoin. I am going to talk about security assumptions, security models and trust models. I am going to give a high-level overview of how we should be thinking about these issues for scaling and efficiency and decentralization. Bitcoin is a crypto system. Everything about it is a crypto system. It needs to be designed with an adversarial mindset, even in an …"},{"uri":"/scalingbitcoin/hong-kong-2015/segregated-witness-and-its-impact-on-scalability/","title":"Segregated Witness And Its Impact On Scalability","content":"slides: https://prezi.com/lyghixkrguao/segregated-witness-and-deploying-it-for-bitcoin/\ncode: https://github.com/sipa/bitcoin/commits/segwit\nSPV = simplified payment verification\nCSV = checksequenceverify\nCLTV = checklocktimeverify\nSegregated witness (segwit) and deploying it for Bitcoin\nOkay. So I am Pieter Wuille. I\u0026amp;rsquo;ll be talking about segregated witness for Bitcoin. Before I can explain this, I want to give some context. We all know how bitcoin transactions work. Every bitcoin …"},{"uri":"/scalingbitcoin/tokyo-2018/self-reproducing-coins-as-universal-turing-machine/","title":"Self Reproducing Coins As Universal Turing Machine","content":"paper: https://arxiv.org/abs/1806.10116\nIntroduction This is joint work with my colleagues. I am Alexander Chepurnoy. We work for Ergo Platform. Vasily is an external member. This talk is about turing completeness in the blockchain. This talk will be pretty high-level. This talk is based on already-published paper, which was presented at the CBT conference. Because of copyright agreements with Springer, you will not find the latest version on arxiv.\nThe general question of scalability is how can …"},{"uri":"/rebooting-web-of-trust/2019-prague/self-sovereign-identity-ideology-and-architecture/","title":"Self Sovereign Identity Ideology And Architecture","content":"https://twitter.com/kanzure/status/1168566218040762369\nParalelni Polis First I\u0026amp;rsquo;d like to introduce the space we\u0026amp;rsquo;re in right now. Welcome to Paralelni Polis. It is an education organization. It\u0026amp;rsquo;s a first organization in the world that is running fully on crypto. It is running solely on cryptoeconomics and donations. I\u0026amp;rsquo;d like to welcome you to the event called Rebooting the Web of Trust. This is the 9th event. This is its first time in Prague. 
We want to spread awareness …"},{"uri":"/speakers/sergej-kotliar/","title":"Sergej Kotliar","content":""},{"uri":"/speakers/sergio-lerner/","title":"Sergio Lerner","content":""},{"uri":"/speakers/shafi-goldwasser/","title":"Shafi Goldwasser","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/shamir-secret-sharing/","title":"Shamir Secret Sharing","content":"Discussion https://twitter.com/kanzure/status/1168891841745494016\nChris Howe make a damn good start of a C version of Shamir secret sharing. We need to take it another level. This could be another project that we could engage on. We also didn\u0026amp;rsquo;t finish the actual paper, as such. I\u0026amp;rsquo;d like to get both of those things resolved.\nSatoshiLabs is saying that as far as they are concerned SLIP 39 is done and they are not interested in feedback or changing it or adapting it, at the words level. …"},{"uri":"/speakers/shayan-eskandari/","title":"Shayan Eskandari","content":""},{"uri":"/speakers/shehzan-maredia/","title":"Shehzan Maredia","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2016/siacoin/","title":"Siacoin","content":"I have been working on a decentralized cloud storage platform. Before I go into that, I wanted to mention that I have been working on Bitcoin since 2011. Since 2013 I have been in the wizards channel. In 2014 a friend and myself founded a company to build Siacoin. The team has grown to 3 people. Sia is the name of our decentralized storage protocol. We are trying to emulate Amazon S3. You give your data to Amazon, and then they hold it for you. We want low latency high throughput.\nWe had a bunch …"},{"uri":"/tags/side-channels/","title":"Side channels","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/matt-corallo-sidechains/","title":"Sidechains","content":"One second. Technical difficulties. Okay, sorry about that. Okay. Hi. I am Matt or BlueMatt for those of you know me. I swear it\u0026amp;rsquo;s not related to my hair. Nobody believes me.\nI got into Bitcoin three or four years ago because I was interested in the programming and computer science aspects and as well as my interest in large-scale economic incentive structures and macroeconomics and how we create incentive structures that create actions on a low level by rational actors. So this talk is a …"},{"uri":"/scalingbitcoin/milan-2016/sidechains/","title":"Sidechains","content":"https://twitter.com/kanzure/status/784767020290150400\nslides https://scalingbitcoin.org/milan2016/presentations/D1%20-%209%20-%20Paul.pdf\nBefore we begin, we will explain how the workshops will work upstairs. There will be some topics and room numbers. Now we will start the next presentation on sidechain scaling with Paul Sztorc. Thank you everyone.\nThanks a lot. So this talk will be a little different. Scaling via strategy, not physics. This will not change the kilobytes sent over\u0026amp;ndash; it …"},{"uri":"/tags/signer-delegation/","title":"Signer delegation","content":""},{"uri":"/speakers/simon-peffers/","title":"Simon Peffers","content":""},{"uri":"/speakers/skot-9000/","title":"Skot 9000","content":""},{"uri":"/w3-blockchain-workshop-2016/smart-signatures/","title":"Smart Signatures","content":"Christopher Allen\nSmart signatures and smarter signatures\nSignatures are a 50+ year old technology. I am talking about the digital signature. It\u0026amp;rsquo;s a hash of an object that has been encrypted by private key and verified by a public key against the hash of the object. 
All of our systems are using this in a varity of complex ways at high level to either do identity, sign a block, or do a variety of different things. The idea behind smarter signatures is can we allow for more functionality. …"},{"uri":"/scalingbitcoin/montreal-2015/snarks/","title":"Snarks","content":"some setup motivating topics:\npracticality pcp theorem trustless setup magic away the trusted setup schemes without trusted setup pcp theorem is the existent proof that the spherical cow exists. people:\nAndrew Miller (AM) Madars Virza (MV) Andrew Poelstra (AP) Bryan Bishop (BB) Nathan Wilcox ZZZZZ: zooko gmaxwell\u0026amp;rsquo;s ghost (only in spirit) SNARKs always require some sort of setup. PCP-based SNARKs can use random oracle assumption instantiated by hash function, sha256 acts as the random …"},{"uri":"/simons-institute/snarks-and-their-practical-applications/","title":"Snarks And Their Practical Applications","content":"Joint works with:\nEli Ben-Sasson Matthew Green Alessandro Chiesa Ian Miers Christina Garman Madars Virza Daniel Genkin Thank you very much. I would like to thank the organizers for giving this opportunity to learn so much from my peers and colleagues some of which are here. This is a wonderful survey of many of the works in this area. As I was sitting here, I was in real-time adjusting my slides to minimize the overlap. I will cut down on the amount of introduction because what we just saw (Mike …"},{"uri":"/stanford-blockchain-conference/2020/solving-data-availability-attacks-using-coded-merkle-trees/","title":"Solving Data Availability Attacks Using Coded Merkle Trees","content":"Coded Merkle Tree: Solving Data Availability Attacks in Blockchains\nhttps://eprint.iacr.org/2019/1139.pdf\nhttps://twitter.com/kanzure/status/1230984827651772416\nIntroduction I am going to talk about blockchain sharding via data availability. Good evening everybody. I am Sreeram Kannan. This talk is going to be on how to scale blockchains using data availability proofs. The work in this talk is done in collaboration with my colleagues including Fisher Yu, Songza Li, David Tse, Vivek Bagaria, …"},{"uri":"/cryptoeconomic-systems/2019/solving-the-blockchain-trilemma/","title":"Solving The Blockchain Trilemma","content":"\u0026amp;ndash; Disclaimer \u0026amp;ndash;\nThese are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.\nI sometimes add annotations to the transcription text. These will always be denoted by a standard …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/some-questions-for-bitcoiners/","title":"Some Questions For Bitcoiners","content":"Joi Ito\nHe is the director of the MIT Media Lab. He served as the chairman of the Creative Commons and is on the board of the Mozilla Foundation.\nJust so that I can get a sense of the room, how many people here would say that you are technical? Most of you. How many of you were on the cypherpunks mailing list? How many of you have been threatened by Timothy May? How many of you know Timothy May? Okay.\nIt\u0026amp;rsquo;s interesting. If you go back, this isn\u0026amp;rsquo;t really a new thing. 
I have been paying …"},{"uri":"/speakers/soumya-basu/","title":"Soumya Basu","content":""},{"uri":"/dallas-bitcoin-symposium/sound-money/","title":"Sound Money","content":"Introduction In 2014, Michael appeared on MSNBC and gave this quote: \u0026amp;ldquo;I\u0026amp;rsquo;d only recommend using altcoins for speculation purposes if you really love risk, you\u0026amp;rsquo;re absolutely in love with risk, and you\u0026amp;rsquo;re interested in watching money disappear.\u0026amp;rdquo; Today my talk is on sound money for the digital age. The concept of monetary economics is the reason why I got interested in bitcoin back in 2012. I still tihnk that\u0026amp;rsquo;s the important thing that makes bitcoin important in …"},{"uri":"/tags/spontaneous-payments/","title":"Spontaneous payments","content":""},{"uri":"/stanford-blockchain-conference/2019/spork-probabilistic-bitcoin-soft-forks/","title":"Spork: Probabilistic Bitcoin Soft Forks","content":"Introduction Thank you for the introduction. I have a secret. I am not an economist, so you\u0026amp;rsquo;ll have to forgive me if I don\u0026amp;rsquo;t have your favorite annotations today. I have my own annotations. I originally gave a predecessor to this talk in Japan. You might notice my title here is a haiku: these are protocols used for changing the bitcoin netwokr protocols. People do highly value staying together for a chain, we\u0026amp;rsquo;re doing coordinated activation of a new upgrade.\nDining …"},{"uri":"/speakers/sreeram-kannan/","title":"Sreeram Kannan","content":""},{"uri":"/stanford-blockchain-conference/2020/stark-for-developers/","title":"STARK For Developers","content":"https://twitter.com/kanzure/status/1230279740570783744\nIntroduction It always seems like there\u0026amp;rsquo;s a lot of different proof systems, but it turns out they really work well together and there\u0026amp;rsquo;s great abstractions that are happening where we can take different tools and plug them together. We will see some of those in this session. I think STARKs were first announced 3 years ago back when this conference was BPASE, and now they are ready to use for developers and they are deployed in the …"},{"uri":"/stanford-blockchain-conference/2019/state-channels/","title":"State Channels","content":"State channels as a scaling solution for cryptocurrencies\nhttps://twitter.com/kanzure/status/1091042382072532992\nIntroduction I am an assistant professor at King\u0026amp;rsquo;s College London. There are many collaborators on this project. Also IC3. Who here has heard of state channels before? Okay, about half of the audience. Let me remove some misconceptions around them.\nScalability problems in bitcoin Why are state channels necessary for cryptocurrencies? Cryptocurrencies do not scale. Bitcoin …"},{"uri":"/coindesk-consensus-2016/state-of-blockchain/","title":"State Of Blockchain","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nI am at the Cambridge center of alternative finance. I am also the founde rof\u0026amp;hellip; macroeconomics if you will\u0026amp;hellip; very proud to have created and \u0026amp;hellip; this is our .. we started this in 2014, this is a snapshot of what will be coming out over the next few days. If you are worried about the speed that I will be going through these, it will be online soon.\nSo\u0026amp;hellip; I am going to try to cover 4 things. 
I wnat to provide a general …"},{"uri":"/bit-block-boom/2019/state-of-multisig/","title":"State Of Multisig","content":"State of multisig for bitcoin hodlers: Better wallets with PSBT\nhttps://twitter.com/kanzure/status/1162769521993834496\nIntroduction I am going to be talking about the state of multisig. It\u0026amp;rsquo;s a way of creating a transaction output that requires multiple keys to sign. This is a focus for people holding bitcoin. I\u0026amp;rsquo;ll talk about some of the benefits and some of the issues. I am not a security expert, though.\nPerspective Multisig is an emphasis on saving and not spending a lot. This is …"},{"uri":"/scalingbitcoin/tokyo-2018/statechains/","title":"Statechains: Off-chain transfer of UTXOs","content":"I am the education director of readingbitcoin.org, and I am going to be talking about statechains for off-chain transfer of UTXOs.\nhttps://twitter.com/kanzure/status/1048799338703376385\nStatechains This is another layer 2 scaling solution, by avoiding on-chain transactions. It\u0026amp;rsquo;s similar to lightning network. The difference is that coin movement is not restricted. You deon\u0026amp;rsquo;t have these channels where you have to have a path and send an exact amount. There\u0026amp;rsquo;s some synergy with …"},{"uri":"/tags/stateless-invoices/","title":"Stateless invoices","content":""},{"uri":"/tags/static-channel-backups/","title":"Static channel backups","content":""},{"uri":"/speakers/stefan-dziembowski/","title":"Stefan Dziembowski","content":""},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/andrew-miller/","title":"Step by Step Towards Writing a Safe Contract: Insights from an Undergraduate Ethereum Lab","content":"Next up I want to introduce Elaine Shi and Andrew Miller from University of Maryland. Andrew is also on the zerocash team.\nOkay. I am an assistant professor at the University of Maryland. We all believe that cryptocurrency is the way of the future and smart contracts. So students in the future have to program smart contracts.\nSo that\u0026amp;rsquo;s what I did. I asked some students to program some smart contracts. I am going to first give some quick background and then I will talk about the insights …"},{"uri":"/speakers/stephanie-hurder/","title":"Stephanie Hurder","content":""},{"uri":"/stanford-blockchain-conference/2020/streamlet/","title":"Streamlet: Textbook Streamlined Blockchain Protocols","content":"((\u0026amp;hellip;. stream went offline in the other room, had to catch up.))\nShow you new consensus protocol called streamlet. We think it is remarkably simple intuitive. This is quite a bold claim, hopefully the protocol will speak for itself\nIncredibly subtle, hard to implement in practice. We want to take this reputation and sweep it out the door.\nJump right in. First model the consensus problem. Motivating simplicity as a goal. Spend the bulk of talk, line by line through the protocol By the end, …"},{"uri":"/scalingbitcoin/montreal-2015/stroem-payment-channels/","title":"Stroem Payment Channels","content":"Stroem\nJarl Fransson (Strawpay)\n7 transactions per second, or is it 3? That\u0026amp;rsquo;s about what we can do with bitcoin. What if you need 100,000 transactions per second? I come from Strawpay, we\u0026amp;rsquo;ve been looking into bitcoin for a long time. When we learned about scripting and the power of contracts, like payment channels, we thought maybe it\u0026amp;rsquo;s time to do micropayments and microtransactions. 
To give you some perspective on scale, let\u0026amp;rsquo;s say that 1 billion people make 25 …"},{"uri":"/speakers/swanand-kadhe/","title":"Swanand Kadhe","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/swashbuckling-safety-training/","title":"Swashbuckling Safety Training","content":"Swashbuckling safety training with decentralized identifiers and verifiable credentials\nKim Hamilton Duffy, kimdhamilton\nhttps://twitter.com/kanzure/status/1168573225065951235\nIntroduction I was very excited to hear that I was speaking right after someone from the Pirate Party. We\u0026amp;rsquo;re part of the Digital Credentials Effort led by MIT and 12 other major universities. We will have a whitepaper out at the end of September. I am a W3c credentials community group co-chair. Formerly CTO of …"},{"uri":"/bitcoin-core-dev-tech/2015-02/jeremy-allaire-circle/","title":"Talk by the founders of Circle","content":"We are excited to be here and sponsoring this event. We have backgrounds in working on developer tools that goes back to the early days of something.\nHow do we mature the development of Bitcoin Core itself? One of the things that is useful is suss out the key components of it. In a standard you have a spec, it could be a whitepaper, and then you have a reference implementation, and then a test suite that enforces interoperability. The test suite is what enforces the standard. It\u0026amp;rsquo;s not the …"},{"uri":"/bit-block-boom/2019/taproot-schnorr-soft-fork/","title":"Taproot, Schnorr, and the next soft-fork","content":"https://twitter.com/kanzure/status/1162839000811548672\nIntroduction I am going to speak about this potential next version of bitcoin. Who here has heard of these words? Taproot? Schnorr? Who has heard these buzzwords these days? Yeah, so, the proposed next soft-fork is going to involve both of these technologies sort of interplaying. We want to talk about what are these technologies and how do we use thse things and why is it important that a bitcoin user or holder understands these …"},{"uri":"/tags/testnet/","title":"Testnet","content":""},{"uri":"/texas-bitcoin-conference-2014/","title":"Texas Bitcoin Conference","content":" Gox (2014) "},{"uri":"/dallas-bitcoin-symposium/texas-energy-market/","title":"Texas Energy Market","content":"Bitcoin mining\nBitcoin has been one of the most interesting rides of my life. I come from an oil/gas family. I lived in Midland after college. I became very interested in monetary policy, inflation and its effects on the world. Through that, really discovered bitcoin and what it is capable of doing. I come from the energy perspective and Austrian school of economics.\nMining This is our first large scale project\u0026amp;ndash; that\u0026amp;rsquo;s a 100 MW substation. That\u0026amp;rsquo;s a lot of power. We consume a …"},{"uri":"/speakers/thang-n.-dinh/","title":"Thang N. Dinh","content":""},{"uri":"/baltic-honeybadger/2018/the-b-foundation/","title":"The B Foundation","content":"The B Foundation\nhttps://twitter.com/kanzure/status/1043802179004493825\nI think most people here have some idea of who I am. I am a long-term bitcoiner. I\u0026amp;rsquo;ve been in bitcoin since 2010. I love bitcoin. I am passionate about it. I want to see it grow and prosper. The positive thing about bitcoin is that it has a resilient ecosystem. It doesn\u0026amp;rsquo;t need any CEO. It doesn\u0026amp;rsquo;t need any centralized organization and it doesn\u0026amp;rsquo;t need any central point to direct where it is going. 
It …"},{"uri":"/baltic-honeybadger/2018/the-bitcoin-standard/","title":"The Bitcoin Standard","content":"The bitcoin standard as a layered scaling solution\nhttps://twitter.com/kanzure/status/1043425514801844224\nHello, everyone. Can everyone hear me? Okay, wonderful. You can\u0026amp;rsquo;t see my slides, can you. You have to share the screen if you want them to see your slides. How do I do that? Where is that? This is new skype, I\u0026amp;rsquo;m sorry. Thank you everyone for inviting me to speak today. It would be great to join you, but unforutnately I couldn\u0026amp;rsquo;t make it.\nI want to describe how I see bitcoin …"},{"uri":"/scalingbitcoin/tokyo-2018/bitcoin-script/","title":"The evolution of bitcoin scripting","content":"Agenda opcodes OP_CHECKSIGFROMSTACK sighash flags keytree sigs MAST graftroot, taproot covenants, reverse covenants (input restrictions: this input has to be spent with tihs other input, or can only be spent if this other one doesn\u0026amp;rsquo;t exist) stack manipulation script languages (simplicity) tx formats, serialization? what would we change with hard-fork changes to the transaction format? Segwit transaction encoding format sucks; the witnesses are at the end and inline with all the scriptsigs …"},{"uri":"/baltic-honeybadger/2018/the-future-of-bitcoin-smart-contracts/","title":"The Future Of Bitcoin Smart Contracts","content":"https://twitter.com/kanzure/status/1043419056492228608\nHey guys, next talk in 5 minutes. In five minutes.\nIntroduction Hello everyone. If you\u0026amp;rsquo;re walking, please do it in silence. I am a bit nervous. I was too busy organizing this conference and didn\u0026amp;rsquo;t get time for this talk. If you had high expectatoins for this presentation, then please lower them for the next 20 minutes. I was going to talk about the traditional VC models and why it doesn\u0026amp;rsquo;t work in the bitcoin industry. They …"},{"uri":"/baltic-honeybadger/2018/the-future-of-bitcoin-wallets/","title":"The Future Of Bitcoin Wallets","content":"1 on 1: The future of bitcoin wallets\nhttps://twitter.com/kanzure/status/1043445104827084800\nGZ: Thank you very much. We\u0026amp;rsquo;re going to talk about the future of bitcoin wallets. As you know, it\u0026amp;rsquo;s a very central topic. We always compare bitcoin to the internet. Wallets are basically like browsers like back at the beginning of the web. They are the first gateway to a user experience for users on bitcoin. It\u0026amp;rsquo;s important to see how they will evolve. We have two exceptional …"},{"uri":"/breaking-bitcoin/2019/future-of-hardware-wallets/","title":"The Future of Hardware Wallets","content":"D419 C410 1E24 5B09 0D2C 46BF 8C3D 2C48 560E 81AC\nhttps://twitter.com/kanzure/status/1137663515957837826\nIntroduction We are making a secure hardware platform for developers so that they can build their own hardware wallets. Today I want to talk about certain challenges for hardware wallets, what we\u0026amp;rsquo;re missing, and how we can get better.\nCurrent capabilities of hardware wallets Normally, hardware wallets keep keys reasonably secret and are able to spend coins or sign transactions. All the …"},{"uri":"/baltic-honeybadger/2018/the-future-of-lightning/","title":"The Future Of Lightning","content":"The future of lightning\nThe year of #craeful and the future of lightning\nhttps://twitter.com/kanzure/status/1043501348606693379\nIt\u0026amp;rsquo;s great to be back here in Riga. 
Let\u0026amp;rsquo;s give a round of applause to the event organizers and everyone brought us back here. This is the warmest time I\u0026amp;rsquo;ve ever been in Riga. It\u0026amp;rsquo;s been great. I want to come back in the summers.\nIntroduction I am here to talk about what has happened in the past year and the future of what we\u0026amp;rsquo;re going to see …"},{"uri":"/scalingbitcoin/tokyo-2018/ghostdag/","title":"The GHOSTDAG protocol","content":"paper: https://eprint.iacr.org/2018/104.pdf\nIntroduction This is joint work with Yonatan. I am going to talk about a DAG-based protocol to scale on-chain. The reason why I like this protocol is because it\u0026amp;rsquo;s a very simple generalization of the longest-chain protocol that we all know and love. It gives some insight into what the role of proof-of-work is in these protocols. I\u0026amp;rsquo;ll start with an overview of bitcoin protocol because I want to contrast against it.\nBitcoin\u0026amp;rsquo;s consensus …"},{"uri":"/stanford-blockchain-conference/2020/libra-blockchain-intro/","title":"The Libra Blockchain \u0026 Move: A technical introduction","content":"https://twitter.com/kanzure/status/1230248685319024641\nIntroduction We\u0026amp;rsquo;re building a new wallet for the system, Calibre, which we are launching. A bit about myself before we get started. I have been at Facebook for about 10 years now. Before Facebook, I was one of the co-founders of reCaptcha- the squiggly letters you type in before you login and register. I have been working at Facebook on performance, reliability, and working with web standards to make the web faster.\nAgenda To start, I …"},{"uri":"/stanford-blockchain-conference/2020/optimistic-vm/","title":"The optimistic VM","content":"https://twitter.com/kanzure/status/1230974707249233921\nIntroduction Okay, let\u0026amp;rsquo;s get started our afternoon session. Our first talk I am very happy to introduce our speaker on optimistic virtual machines. Earlier in this conference we heard about optimistic roll-ups and I\u0026amp;rsquo;m looking forward to this talk on optimistic virtual machines which is an up-and-coming approach on doing this.\nWhy hello everyone. I am building on ethereum and in particular we\u0026amp;rsquo;re going to make Optimism: the …"},{"uri":"/baltic-honeybadger/2018/the-reserve-currency-fallacy/","title":"The Reserve Currency Fallacy","content":"The reserve currency fallacy\nhttps://twitter.com/kanzure/status/1043385469134925824\nThank you. Developers, developers, developers, developers. Alright, it wasn\u0026amp;rsquo;t that bad. There\u0026amp;rsquo;s a lot of content to explain this concept of the reserve currency fallacy. It\u0026amp;rsquo;s hard to get through it in the amount of available time. I\u0026amp;rsquo;ll be available at the party tonight. I want to go through four slides and talk about the history of this question of scaling, and then the first scaling …"},{"uri":"/stanford-blockchain-conference/2019/stark-dex/","title":"The STARK truth about DEXes","content":"https://twitter.com/kanzure/status/1090731793395798016\nWelcome to the first session after lunch. We call this the post-lunch session. The first part is on STARKs by Eli Ben-Sasson.\nIntroduction Thank you very much. Today I will be talking about STARK proofs of DEXes. I am chief scientist at Starkware. We\u0026amp;rsquo;re a one-year startup in Israel that has raised $40m and we have a grant from the Ethereum Foundation. We have 20 team members. Most of them are engineers. 
Those who are not engineers are …"},{"uri":"/scalingbitcoin/tel-aviv-2019/threshold-scriptless-scripts/","title":"Threshold Scriptless Scripts","content":"Omer Shlomovits (KZen Research) (omershlomovits, zengo)\nhttps://twitter.com/kanzure/status/1171690582445580288\nScriptless scripts We can write smart contracts in bitcoin today, but there\u0026amp;rsquo;s some limitations. We can do a lot of great things. We can do atomic swaps, multisig, payment channels, etc. There\u0026amp;rsquo;s a cost, though. Transactions become bigger with bigger scripts. Also, it\u0026amp;rsquo;s kind of heavy on the verifiers. The verifiers now need to verify more things. There\u0026amp;rsquo;s also …"},{"uri":"/stanford-blockchain-conference/2019/proofs-of-space-and-replication/","title":"Tight Proofs of Space and Replication","content":"https://twitter.com/kanzure/status/1091064672831234048\npaper: https://eprint.iacr.org/2018/702.pdf\nIntroduction This talk is about proofs of space and tight proofs of space.\nProof of Space A proof-of-space is an alternative to proof-of-work. Applications have been proposed like spam prevention, DoS attack prevention, sybil resistance in consensus networks which is most relevant to this conference. In proof-of-space, it\u0026amp;rsquo;s an interactive protocol between a miner and a prover who has some …"},{"uri":"/speakers/tim-roughgarden/","title":"Tim Roughgarden","content":""},{"uri":"/tags/time-warp/","title":"Time warp","content":""},{"uri":"/simons-institute/todo/","title":"Todo","content":" cryptography bootcamp https://www.youtube.com/playlist?list=PLgKuh-lKre139cwM0pjuxMa_YVzMeCiTf mathematics of modern cryptography https://www.youtube.com/playlist?list=PLgKuh-lKre10kb0so1PQ3eNz4Q6EBfecT historical papers in cryptography https://www.youtube.com/playlist?list=PLgKuh-lKre13lX4C_5ZCQKMDOniAn24NP securing computation https://www.youtube.com/playlist?list=PLgKuh-lKre12ddBkIB8wNC8D1kag5dEr- "},{"uri":"/cryptoeconomic-systems/2019/token-journal/","title":"Token Journal","content":"\u0026amp;ndash; Disclaimer \u0026amp;ndash;\nThese are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.\nI sometimes add annotations to the transcription text. These will always be denoted by a standard …"},{"uri":"/speakers/tone-vays/","title":"Tone Vays","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/topics/","title":"Topics","content":"Rebooting Web of Trust topic selection\nDictionary terms: we have a glossary that I wrote 6 months ago which by definition glosses everything. A dictionary would give an opportunity to go into more depth, to look at disagreements without getting lost in the weeds, and also talk about some foundational assumptions.\nVerifiable secret sharing: this is a modification for Shamir secret sharing that also incorporates multisignatures as well. We\u0026amp;rsquo;re interested in solidifying that into an actual …"},{"uri":"/baltic-honeybadger/2018/trading-panel/","title":"Trading Panel","content":"MM: As Trace Mayer says, it\u0026amp;rsquo;s chasing the rabbit. Why not introduce ourselves?\nTV: My name is Tone Vays. I come from the traditional Wall Street trading environment. I joined the crypto space in 2013. 
I spoke at my first conference and wrote my first article in Q1 2014. I was doing some trading. With the popularity of the youtube channel and going to conferences, I went back to trading options in traditional markets. I\u0026amp;rsquo;ll get back to trading crypto soon, but I have plenty of …"},{"uri":"/tags/transaction-bloom-filtering/","title":"Transaction bloom filtering","content":""},{"uri":"/tags/transaction-origin-privacy/","title":"Transaction origin privacy","content":""},{"uri":"/stanford-blockchain-conference/2020/transparent-dishonesty/","title":"Transparent Dishonesty: front-running attacks on Blockchain ","content":"https://twitter.com/kanzure/status/1231005309310627841\nIntroduction The last two talks are going to be about frontrunning. One of the features of blockchain is that they are slow but this also opens them up to vulnerabilities such that I can insert my transactions before other people and this could have bad consequences and Shayan is going to explore this a little bit.\nMy talk will not be as intense as the previous talk, but I\u0026amp;rsquo;m going to have more pictures. I am a PhD candidate at …"},{"uri":"/stanford-blockchain-conference/2020/transparent-snarks-from-dark-compilers/","title":"Transparent SNARKs from DARK compilers","content":"https://twitter.com/kanzure/status/1230561492254089224\npaper: https://eprint.iacr.org/2019/1229.pdf\nIntroduction Our next speaker is Ben Fisch, part of the Stanford Applied Crypto team and he has done a lot of work that is relevant to the blockchain space. Proofs of replication, proofs of space, he is also one of the coauthors of a paper that defined the notion of verifiable delay functions. He has also worked on accumulators, batching techniques, vector commitments, and one of his latest works …"},{"uri":"/cryptoeconomic-systems/2019/trust-and-blockchain-marketplaces/","title":"Trust And Blockchain Marketplaces","content":"Introduction I founded Sia, a decentralized storage platform started in 2014. Today, Sia is the only decentralized storage platform out there. I also founded Obelisk which is a mining equipment manufacturing company.\nMining supply chains are centralized Basically all the mining chips are coming out of TSMC or Samsung. For the longest time it was TSMC but for some reason Samsung is at the forefront for this cycle but I would expect them to fade out in the next cycle. There\u0026amp;rsquo;s just two, and …"},{"uri":"/baltic-honeybadger/2018/trustlessness-scalability-and-directions-in-security-models/","title":"Trustlessness Scalability And Directions In Security Models","content":"Trustlessness, scalability and directions in security models\nhttps://twitter.com/kanzure/status/1043397023846883329\n\u0026amp;hellip; Is everyone awake? Sure. Jumping jacks. Wow, that\u0026amp;rsquo;s bright. I am more idealistic than Eric. I am going to talk about utility and why people use bitcoin and let\u0026amp;rsquo;s see how that goes.\nTrustlessness is much better than decentralization. I want to talk about trustlessness. People use the word decentralization a lot and I find that to be kind of useless because …"},{"uri":"/scalingbitcoin/tel-aviv-2019/txprobe/","title":"TxProbe: Discovering bitcoin's network topology using orphan transactions","content":"https://twitter.com/kanzure/status/1171723329453142016\npaper: https://arxiv.org/abs/1812.00942\nhttps://diyhpl.us/wiki/transcripts/scalingbitcoin/coinscope-andrew-miller/\nIntroduction I am Sergi. I will be presenting txprobe. 
This is baout discovering the topology of the bitcoin network using orphan transactions. There are a bunch of coauthors and collaborators.\nWhat we know about the topology I want to start by talking a little bit about what we know about the network topology without using …"},{"uri":"/tags/unannounced-channels/","title":"Unannounced channels","content":""},{"uri":"/tags/uneconomical-outputs/","title":"Uneconomical outputs","content":""},{"uri":"/scalingbitcoin/milan-2016/unlinkable-outsourced-channel-monitoring/","title":"Unlinkable Outsourced Channel Monitoring","content":"http://lightning.network/\nhttps://twitter.com/kanzure/status/784752625074012160\nOkay. Hi everyone. Check. Wow. Check one two. Okay it\u0026amp;rsquo;s working. And I have 25 minutes. Great. So I am going to talk about unlinkable outsourced channel modeling. Everyone likes channels. You can update states. You can link them together to make a network. This is really great. There are risks. These are additional risks. The price of scalability is eternal vigilance. You have to keep watching your channels. …"},{"uri":"/bitcoin-magazine/bitcoin-2024/unlocking-expressivity-with-op-cat/","title":"Unlocking Expressivity with OP_CAT","content":""},{"uri":"/coindesk-consensus-2016/upgrading-capital-markets-for-digital-asset-trading/","title":"Upgrading Capital Markets For Digital Asset Trading","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nUpgrading capital markets for digital asset trading\nBrian Kelly - Moderator\nJuthica Chou, LedgerX\nBobby Lee, BTCC\nMichael More, Genesis\nBarry Silbert, Digital Currency Group\nPlease welcome Brian Kelly, Juthica Chou, Bobby Lee, Michael More, and Barry Silbert.\nBK: Alright. Welcome everyone to the upgrading capital markets panel. You have the introductions of who these people are. Just one thing before we start, does anyone want to …"},{"uri":"/stanford-blockchain-conference/2019/urkel-trees/","title":"Urkel Trees","content":"Urkel trees: An optimized and cryptographically provable key-value store for decentralized naming\nBoyma Fahnbulleh (Handshake) (boymanjor)\nhttps://twitter.com/kanzure/status/1090765616590381057\nIntroduction Hello my name is Boy Fahnbulleh. I have been contributing to Handshake. I\u0026amp;rsquo;ve worked on everything from building websites to writing firmware for Ledger Nano S. If you have ever written a decent amount of C, or refactored someone\u0026amp;rsquo;s CSS, you would know how much I appreciate being …"},{"uri":"/scalingbitcoin/stanford-2017/using-the-chain-for-what-chains-are-good-for/","title":"Using Chains for what They're Good For","content":"I am Andrew Poelstra, a cryptographer at Blockstream. I am going to talk about scriptless scripts. But this is a specific example of a more general theme, which is using the blockchain as a trust anchor or commitment layer for smart contracts whose real contents don\u0026amp;rsquo;t really hit the blockchain. 
I\u0026amp;rsquo;ll elaborate on what I mean by that, and I\u0026amp;rsquo;ll show what the benefits are for that.\nTo give some context, in Bitcoin script, the scripting language which is used to encode smart …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2019/utreexo/","title":"Utreexo: Reducing bitcoin nodes to 1 kilobyte","content":"https://twitter.com/kanzure/status/1104410958716387328\nSee also: https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-10-08-utxo-accumulators-and-utreexo/\nIntroduction I am going to talk about another scaling solution and another strategy I\u0026amp;rsquo;ve been working on for about 6-9 months, called Utreexo.\nBitcoin scalability Scalability has been a concern in bitcoin for the whole time. In fact, it\u0026amp;rsquo;s the first thing that anyone said about bitcoin. In 2008, Satoshi Nakamoto said hey I …"},{"uri":"/bitcoin-design/ux-research/","title":"UX Research","content":" "},{"uri":"/scalingbitcoin/hong-kong-2015/validation-cost-metric/","title":"Validation Cost Metric","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_3_nick.pdf\nMotivation As we\u0026amp;rsquo;ve seen over the last two days scalability is a multidimensional problem. One of the main topics of research is increasing the blocksize to increase transaction throughput. The assumption is that as technological progress is continuing and transaction throughput is increased accordingly, the cost for runnning a fully validating node stays constant.\nHowever, blocksize …"},{"uri":"/scalingbitcoin/montreal-2015/validation-costs/","title":"Validation Costs","content":"Validation costs and incentives\nHow many of you have read the Satoshi whitepaper? It\u0026amp;rsquo;s a good paper. As we have discovered, it has some serious limitations that we think can be fixed but they are serious challenges for scaling. Amongst these are, it was really built as a single code base which was meant to be run in its entirety. All machines were going to participate equally in this p2p network. In a homogenous network, pretty much every node runs the same software, offers the same …"},{"uri":"/tags/v3-transaction-relay/","title":"Version 3 transaction relay","content":""},{"uri":"/coindesk-consensus-2016/visa-chain/","title":"Visa Chain","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nVisa \u0026amp;amp; Chain\nMy name is Adam Ludwin. We are here to talk about blockchain database technology networks. Chain.com is a blockchain database technology company. We do one thing. We partner with financia leaders like Visa to launch blockchain database technology networks. I want to talk about why. There\u0026amp;rsquo;s a lot of hype in blockchain database technology. There\u0026amp;rsquo;s a lot of attention in blockchain database technology. But I …"},{"uri":"/speakers/vivek-bagaria/","title":"Vivek Bagaria","content":""},{"uri":"/speakers/vortex/","title":"Vortex","content":""},{"uri":"/tags/wallet-labels/","title":"Wallet labels","content":""},{"uri":"/rebooting-web-of-trust/2019-prague/weak-signals/","title":"Weak Signals","content":"Weak signals exercise\nGroup 1: Decentralized Decentralization is a means of preventing a single point of control existing.\nGroup 3: Cloud We picked a term preventing decentralized identity. The one that resonated for us was \u0026amp;ldquo;cloud\u0026amp;rdquo;. 
We equated the \u0026amp;ldquo;cloud\u0026amp;rdquo; with the idea of centralized convenience and value creation that businesses and individuals look at the cloud as a great place to be because it\u0026amp;rsquo;s cheap and convenient and all of your dreams are answered in the …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/welcome/","title":"Welcome","content":"Welcome session\nJonathan Harvey Buschel Jinglan Wang We are going to get started in about two minutes, so please take your seats. We are also testing the mic for the live stream. If you have friends who are not here, feel free to tweet them, email them or post their facebook page or instagram whatever.\n(buzzing/feedback continues)\nHey. All y\u0026amp;rsquo;all ready? Alright, awesome. My name is Jing, I am president of the Wossley Bitcoin Club.\nLast year we had over 500 attendees and we had the launch of …"},{"uri":"/speakers/wendy-seltzer/","title":"Wendy Seltzer","content":""},{"uri":"/speakers/whalepanda/","title":"Whalepanda","content":""},{"uri":"/coindesk-consensus-2016/why-bitcoin-still-matters/","title":"Why Bitcoin Still Matters","content":"Preliminary notes:\nContact me- https://twitter.com/kanzure\nWhy bitcoin still matters\nHutchins, Founder of Silver Lake\nI hope in the coming days that the experts will critique me to help me learn more. I hope people from the general community, rather the experts pardon me, what I have an initial hypothesis regarding what will help bitcoin reach its full potential for its extraordinary opportunity that I think it is.\nThe title of my presentation was \u0026amp;ldquo;Why bitcoin still matters\u0026amp;rdquo;. I was …"},{"uri":"/magicalcryptoconference/2019/why-block-sizes-should-not-be-too-big/","title":"Why Block Sizes Should Not Be Too Big","content":"Briefly, why block sizes shouldn\u0026amp;rsquo;t be too big\nslides: https://luke.dashjr.org/tmp/code/block-sizes-mcc.pdf\nI am luke-jr. I am going to go over why block size shouldn\u0026amp;rsquo;t be too big.\nHow does bitcoin work? Miners put transactrions into blocks. Users verify that the blocks are valid. If the users don\u0026amp;rsquo;t verify this, then miners can do basically anything. Because the users do this, 51% attacks are limited to reorgs which is undoing transactions. If there\u0026amp;rsquo;s no users verifying …"},{"uri":"/scalingbitcoin/hong-kong-2015/why-miners-will-not-voluntarily-individually-produce-smaller-blocks/","title":"Why Miners Will Not Voluntarily Individually Produce Smaller Blocks","content":"slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY1/2_security_and_incentives_3_bier.pdf\nMarginal cost is very low.\nIn about 2011, people started discussing the idea of needing an artificial cap for a block size limit to kind of kick up the transaction fees. The orphan risk is a marginal cost the larger the block, the higher the chance of an orphan.\nNow I\u0026amp;rsquo;ve got some issues why orphan risk may not be so great. As technology improves, over time the technology cost of …"},{"uri":"/mit-bitcoin-expo/mit-bitcoin-expo-2015/arvind-narayanan/","title":"Why you need threshold signatures to protect your wallet","content":"I want to tell you what threshold signatures are and I want to convince you that threshold signatures are a technology you need. This is collaborative work.\nThere are three things I want to tell you today. The first is that the banking security model has been very very refined and has sophisticated techniques and process and controls for ensuring security. 
It does not translate to Bitcoin. This may seem surprising. Hopefully it will be obvious in retrospect.\nYou need Bitcoin-specific …"},{"uri":"/speakers/william-mougayar/","title":"William Mougayar","content":""},{"uri":"/scalingbitcoin/tel-aviv-2019/work-in-progress/","title":"Work In Progress Sessions","content":"((this needs to break into individual transcripts))\nhttps://twitter.com/kanzure/status/1172173183329476609\nOptical proof of work Mikael Dubrovsky\nI am working on optical proof-of-work. I\u0026amp;rsquo;ll try to pack a lot into the 10 minutes. I spent a year and a half at Techion.\nPhysical scaling limits Probably the three big ones are, there\u0026amp;rsquo;s first the demand for blockchain or wanting to use bitcoin more. I think more people want to use bitcoin. Most of the world does not have access to property …"},{"uri":"/tags/x-only-public-keys/","title":"X-only public keys","content":""},{"uri":"/speakers/yorke-rhodes/","title":"Yorke Rhodes","content":""},{"uri":"/speakers/yoshinori-hashimoto/","title":"Yoshinori Hashimoto","content":""},{"uri":"/speakers/yurii-rashkovskii/","title":"Yurii Rashkovskii","content":""},{"uri":"/speakers/yuta-takanashi/","title":"Yuta Takanashi","content":""},{"uri":"/speakers/yuval-kogman/","title":"Yuval Kogman","content":""},{"uri":"/simons-institute/zero-knowledge-probabilistic-proof-systems/","title":"Zero Knowledge Probabilistic Proof Systems","content":"Historical Papers in Cryptography Seminar Series, Simons Institute\nShe has received several awards, she is a source of incredibly creative ideas, very effective advisor and metor, and I am always inspired when she speaks. So please welcome her.\nThere is no better place to give this historical talk on zero knowledge than at Berkeley where there are pre-historical origins of zero knowledge. At the same time it\u0026amp;rsquo;s a difficult talk to give, because this is a very distinguished group here, and …"},{"uri":"/scalingbitcoin/hong-kong-2015/zero-knowledge-proofs-for-bitcoin-scalability-and-beyond/","title":"Zero Knowledge Proofs For Bitcoin Scalability And Beyond","content":"I am going to tell you about zero-knowledge proofs for Bitcoin scalability. Even though this is an overview talk, many of the results I will be telling you about are based on work done by people in the slides and also here at the conference.\nFirst I will give a brief introduction to zero knowledge proofs. Then I will try to convince you that if you care about Bitcoin scalability, then you should care about zero-knowledge proofs and you should keep them on your radar.\n\u0026amp;hellip; stream went …"},{"uri":"/scalingbitcoin/tel-aviv-2019/zerolink-sudoku/","title":"Zerolink Sudoku - real vs perceived anonymity","content":"https://twitter.com/kanzure/status/1171788514326908928\nIntroduction I used to work in software. I transitioned to something else, but bitcoin changed my mind. This talk is a little bit of a work in progress. This was started by a guy named Aviv Milner who is involved with Wasabi. He roped me into this and then subsequently became unavailable. So this is basically me presenting his work. It\u0026amp;rsquo;s been severely \u0026amp;hellip;\u0026amp;hellip;. as I said, Aviv started this project, he defined the research …"},{"uri":"/speakers/zeta-avarikioti/","title":"Zeta Avarikioti","content":""},{"uri":"/cryptoeconomic-systems/2019/zksharks/","title":"ZkSHARKS","content":"Introduction Indeed I am Madars Virza. I am going to talk about SHARKs. 
I am going to be talking about zero-knowledge SHARKs. It\u0026amp;rsquo;s actually something serious and it\u0026amp;rsquo;s related to non-interactive zero-knowledge proofs.\nNon-interactive zero knowledge proofs zkproofs are protocols between two parties, a prover and a verifier. Both prover and verifier know a public input, but only the prover knows a secret input as a witness. The prover wants to convince the verifier that some …"}] \ No newline at end of file diff --git a/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability/index.html b/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability/index.html index 95615bcf73..7915584281 100644 --- a/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability/index.html +++ b/la-bitdevs/2020-06-18-luke-dashjr-segwit-psbt-vulnerability/index.html @@ -3,7 +3,7 @@ Location: LA BitDevs (online) CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199 Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860 -Greg Sanders Bitcoin dev mailing list post in April 2017: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html +Greg Sanders Bitcoin dev mailing list post in April 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html The vulnerability The way Bitcoin transactions are encoded in the software is there is a list of coins essentially and then there is a list of destinations and the amount being sent to each destination. The destinations do not include the fee. Nothing in the transaction tells you what the fee of the transaction is.">
\ No newline at end of file +https://www.youtube.com/watch?v=CojixIMgg3c

SegWit/PSBT vulnerability (CVE-2020-14199)

Location: LA BitDevs (online)

CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199

Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860

Greg Sanders Bitcoin dev mailing list post in April 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html

The vulnerability

The way Bitcoin transactions are encoded in the software is that there is essentially a list of coins, and then a list of destinations and the amount being sent to each destination. The destinations do not include the fee. Nothing in the transaction tells you what the fee of the transaction is. The inputs do not tell you the amount of each input either, they are just identifiers that are resolved by a full node. There is no way to tell from the transaction itself how many Bitcoin are going in, just in total how many are going out. Without knowing how many are going in you cannot know the fee of the transaction. Normally you need a full node to know the fee of a transaction.

The way hardware wallets have dealt with this, since they need to provide the user with information to confirm, is that for each of the inputs your computer, over USB or whatever, would send every transaction that created those inputs. That previous transaction has the amount in its output, because it created what is now being used as the input. That was a lot of data. If you were sending a transaction you would not only have to send the hardware wallet the transaction you want to send but also a transaction behind every coin you were spending in that transaction.

A few years ago, for SegWit, the idea was that the signature includes the amount of the input in it. If you sign saying the input is 1 Bitcoin but the input is really 2 Bitcoin, the signature is invalid. The idea being that the hardware wallet can trust what the computer is telling it for the amount, and if the computer is lying the signature won’t matter anyway. The transaction won’t go through.

The problem with this turned out to be that if you can lie to the hardware wallet in different ways you can trick it into giving you signatures. Take the example Trezor used: you have got a 30 Bitcoin coin and a 20 Bitcoin coin (20 and 30 may not be the best examples, for reasons I will get into later). You tell the hardware wallet that one of those coins is much smaller than it is and the other coin is bigger than it is. The hardware wallet will then sign for both inputs, each input has its own signature. That transaction is invalid because the computer lied about the amount of the input. But then you trick the user. The computer says “This signature failed. Try again.” Your hardware wallet will ask you again and you confirm it a second time, and the computer is also lying the second time, this time saying the other input is the wrong size. Now you have two different invalid transactions, but the signature on each input whose amount was reported truthfully is valid and the opposite one is invalid. You throw away the invalid signatures and you take the valid signatures and combine them into a new transaction. Now you have a valid transaction with signatures on both inputs, the outputs are what it told you, but the fee is completely different because you have the true value of both inputs in there.

The one reason this isn’t as concerning necessarily is that the user has to be socially engineered to sign the transaction twice. They have to approve the transaction twice. If the computer is malicious anyway it can make them both be valid transactions and send the transaction twice. The reason that doesn’t negate it entirely is that the hardware wallet could be telling you that the fee is 0.01 Bitcoin for each of the transactions. You could be thinking “That’s no big deal. If I send it twice, I send it twice.” But that 0.01 may not be the fee you are being tricked into sending. The fee you are being tricked into sending could very well be your whole wallet balance. It is not entirely harmless either.
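
To make the arithmetic concrete, here is a rough sketch in Python of one way the two lies can be arranged. It is a toy model, not real wallet or firmware code: the two 10 BTC inputs, the 9.99 BTC of outputs and the helper function are made up for illustration. Two equal-sized coins also show why 30 and 20 are not the cleanest numbers, since each round's claimed total still has to cover the outputs for the displayed fee to look small.

    # Toy model of the fee-blinding attack (illustration only, not real wallet code).
    # Assumes two SegWit inputs really worth 10 BTC each, and a device that trusts
    # the per-input amounts supplied by the host (the pre-fix behaviour).

    REAL_AMOUNTS = [10.0, 10.0]          # what the inputs are actually worth
    OUTPUTS_TOTAL = 9.99                 # what the user thinks they are sending

    def device_sign(claimed_amounts, outputs_total):
        """Device displays fee = claimed inputs - outputs and signs each input
        against its *claimed* amount. A signature is only valid on-chain if the
        claimed amount equals the real amount for that input."""
        displayed_fee = sum(claimed_amounts) - outputs_total
        sigs = [("sig over input", i, amt) for i, amt in enumerate(claimed_amounts)]
        return displayed_fee, sigs

    # Round 1: tell the truth about input 0, lie that input 1 is dust.
    fee1, sigs1 = device_sign([10.0, 0.00000001], OUTPUTS_TOTAL)   # shows ~0.01 fee
    # Round 2: lie about input 0, tell the truth about input 1.
    fee2, sigs2 = device_sign([0.00000001, 10.0], OUTPUTS_TOTAL)   # shows ~0.01 fee

    # The attacker keeps only the signatures made over true amounts and splices them.
    valid_sigs = [sigs1[0], sigs2[1]]
    real_fee = sum(REAL_AMOUNTS) - OUTPUTS_TOTAL
    print(f"user saw fees of about {fee1:.2f} and {fee2:.2f}, real fee is {real_fee:.2f}")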

Q&A

Q - The worst case scenario is that your funds could be drained from your hardware wallet. Is that correct?

A - Pretty much. That is a pretty bad scenario.

Q - The fee goes to whichever miner picks it up and mines it?

A - Potentially. If the attacker is a miner they can just not broadcast it until it is mined in their own block. I don’t know how probable that is with mining being as centralized as it is. It would be pretty obvious, maybe not. The miner could claim that it was broadcast. There are only so many miners that could pull it off.

Q - Different hardware wallets chose to solve this problem in different ways. Trezor, they did something that broke backward compatibility. Coldcard made you opt in to it.

A - The solution Trezor went with which is pretty straightforward was they just treated a SegWit transaction as a non-SegWit transaction. They sent over the transactions behind every input just like they did before. Not using that feature of SegWit.
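
In other words, the device goes back to deriving the input amount from data it can verify itself rather than trusting the host. A minimal sketch of that check, assuming a hypothetical parse_output_amount helper (real firmware and the PSBT field layout are more involved, and txid byte-order details are glossed over):

    import hashlib

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verified_input_amount(prev_tx_bytes: bytes, input_txid: bytes, vout: int) -> int:
        # An input only names (txid, vout). If the host supplies the whole previous
        # transaction, the device can recompute the txid itself, so the host cannot
        # substitute a different transaction or lie about the amount.
        if double_sha256(prev_tx_bytes) != input_txid:
            raise ValueError("previous transaction does not match this input")
        return parse_output_amount(prev_tx_bytes, vout)  # hypothetical parser, not shown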

Q - People were saying that now you can’t use Trezor with BTCPay Server anymore.

A - The software on the computer has to know that you are treating it as non-SegWit even though it is SegWit. Previously the software assumed that if an input is SegWit it doesn’t need to send the previous transaction for it, because the amount is committed to in the signature anyway. Now that they have changed this, the software has to send the input transactions for everything, even if it is SegWit.

Q - Could you talk through from your perspective if you developed a hardware wallet how would you roll out a fix for this bug? What kind of things would you keep in mind while still maintaining security as your number one feature in your hardware wallet?

A - If security is the number one feature then you would compromise on compatibility with old software. You’d break compatibility with all the software out there but that’s just temporary until that software gets updated. In the meantime you at least have the security of knowing that you are not vulnerable to this attack. You could also tell people “If you don’t want to upgrade until your software is ready simply be aware of this problem and don’t sign multiple times no matter how small the amount may appear.”

Q - For whatever reason you might be signing a transaction you didn’t intend?

A - The transaction will be more or less the same, the same amount being sent but the fee would be a lot larger than what you intended to send. It could be a tiny amount but the fee may be wrong and it may be your entire wallet being sent to fee.

Q - An attacker, they wouldn’t get the funds, all of that would go to the miner?

A - Unless the attacker was the miner. Or had some agreement with the miner.

Q - You said this was a feature of SegWit. Was this an intended feature? Obviously the bug is unintended, to try to make the hardware wallet interface send less data was that an intention?

A - That was a planned feature. If you go to bitcoincore.org and see the SegWit FAQ it has a whole section on how it works.

Q - This was an unforeseen consequence?

A - Yes. Before this exploit was announced, Taproot was already going to sign the amounts of all the inputs, which would prevent this.
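
Roughly speaking, the difference in what gets committed to looks like this. This is only a schematic contrast, not the real BIP 143 or BIP 341 digest construction:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def segwit_v0_digest(tx_data: bytes, this_input_amount: int) -> bytes:
        # BIP143-style: each signature commits only to the amount of the input it
        # covers, so the host can lie about the other inputs' amounts.
        return h(tx_data + this_input_amount.to_bytes(8, "little"))

    def taproot_digest(tx_data: bytes, all_input_amounts: list) -> bytes:
        # BIP341-style: every signature commits to the amounts of all inputs, so a
        # lie about any amount invalidates every signature in the transaction.
        amounts = b"".join(a.to_bytes(8, "little") for a in all_input_amounts)
        return h(tx_data + h(amounts))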

Q - Taproot would have addressed this also in that case?

A - Right. Taproot was going to address this when deployed.

Q - For another reason or due to recognizing this sort of problem?

A - I think it was recognizing it but not necessarily recognizing the security implication.

Q - Were people aware of this problem for a long time and it has just recently come to light or was this is a recent discovery?

A - I didn’t hear about it before Trezor announced it. But I think Trezor said that it was revealed to them a few months previously. Other hardware wallets I guess had asked for more time to fix it.

Q - We have SegWit and that has been deployed for over a year now.

A - Three years now almost.

Q - Just now we are finding out a vulnerability like this. It is crazy to me that on other blockchains people are developing new technologies, new formats and new languages and yet they slap a security audit on it and say you can trust it. Bitcoin Core tech is so rigorously scrutinized and yet three years later we find a bug like this.

A - The review process is definitely a good idea. I don’t know if it provides as much security as people assume it does. Things that slip past one person may very well slip past ten people or whatever.

Q - Will HWI get changed here?

A - Bitcoin Core is going to probably need updates to handle this as well. Right now it will also not provide the transactions if there is a SegWit input. That is being changed. It also actually used to reject PSBT (partially signed Bitcoin transactions) as invalid or another phrase that was used. It would reject those if they had the input transaction for a SegWit input. That is probably a compatibility issue that will have to be addressed. That may get backported into 0.21. I put that fix into Bitcoin Knots 0.20 so if it does become an issue there are options.

Q - There is a fix in 0.20?

A - Bitcoin Knots, my derivative of Bitcoin Core that has a bunch of enhancements and sometimes fixes. It is still in the review process for Core (PR 19215). It may take a few months.

Q - Do you have a general sense of how long it will take for something like Taproot to get reviewed and then implemented in Core?

A - Not really. It really depends on how much time people put into it. There are some bug fixes that have sat for years before finally being merged. Hopefully Taproot won’t be like those. It has a lot more enthusiasm behind it than some other pull requests so hopefully it will get reviewed quicker.

Q - I feel like there are not a lot of people against it if any.

A - I haven’t heard of anyone against it. What I think might be an issue is the activation method. There is some disagreement about what the best approach at this point is. With SegWit we tried to do miner activation but obviously that didn’t work out and we had to switch to user activated. It has never quite been settled in everyone’s mind what the best way to do a user activated soft fork is going forward.

Q - Can you lay out the different options people are currently considering?

A - The main two ideas are to do a miner activated soft fork but if the miners don’t do it within a certain amount of time then it becomes active on its own as if it had never been inactive. It is similar to what BIP 149 was that did not get used for SegWit where suddenly the rule starts applying one day and we hope that there are enough people enforcing it. There is no real way to tell. The other approach which we used for SegWit and I think we should probably continue to use is similar to BIP 148 where you have a month or whatever of blocks that explicitly say “This is activating.” You have the signaling for several months before the final activation but that last block before it is locked in should require the miners to explicitly say “Yes I have upgraded. We understand these rules that are coming into effect.” This way everyone can be sure that when it is activated it is actually activated. The rule is expected to be enforced. There is no question of whether Taproot is active or maybe not active and working by chance.
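
A very simplified sketch of where the two flavours differ, with made-up heights, names and parameters (real deployments use BIP 9 / BIP 8-style state machines with signalling periods; this only shows the shape of the disagreement):

    TIMEOUT_HEIGHT = 700_000      # hypothetical deadline
    FINAL_PERIOD = 2016           # hypothetical mandatory signalling window

    def block_acceptable(height: int, block_signals: bool, mandatory_lock_in: bool) -> bool:
        # Before the deadline both flavours behave the same: miners may signal and
        # activate the rules early, and non-signalling blocks are still valid.
        if height < TIMEOUT_HEIGHT:
            return True
        if mandatory_lock_in and height < TIMEOUT_HEIGHT + FINAL_PERIOD:
            # BIP148-like: the final signalling period is required, so the chain
            # itself ends up committing to the upgrade.
            return block_signals
        # BIP149-like: nothing is forced; the new rules just start applying and we
        # assume enough nodes are enforcing them.
        return True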

Q - The first one kind of sounds like you are almost forcing the miners. Do this by this time or else you are off the network.

A - Miners never have a choice in the matter. It is all about what the users want. Miners either go along with it or they are mining in an altcoin. The difference lies in whether the blockchain itself is committing to the new rules. Whether the chain says “These are the rules that are now in effect” or whether it is just an assumption that the rules are in effect but nobody really knows until they try to attack it. Usually nobody does try to attack it so it leaves an open question. We wouldn’t know today if SegWit was active or not because there would be no clear indication if we hadn’t done this with SegWit. We would be able to say “No one has attacked my SegWit coins” but there would be no way to know for sure if someone did attack those SegWIt coins whether they would succeed or not.

Q - Is there a fail option? The second option sounds like it is based on percentage. A certain amount of miners have to start signaling that they are going to implement this new fork. Is there a fail option to that?

A - Both options allow miners to indicate that they are ready to protect old nodes early. It could be a month or two months and miners can trigger to activate it early. The question is what happens if the miners don’t do that and it goes on for a year or two or maybe a little shorter? That is another question that needs to be addressed, the timeframe. What happens if it has been a year and the miners haven’t bothered to upgrade yet? If the community is going to activate it anyway do we explicitly have the chain signal that it is going to be upgraded or do we just assume that it has upgraded and hope for the best?

Q - Taproot when it goes into production will it be limited to opcodes or will it look more like Ethereum’s Solidity? More open to different development?

A - One of the main ideas behind Taproot is that in most cases you can have this script that says “These are the rules. Who can spend it. Who can’t spend it.” In most of the cases if both parties in the transaction agree then you don’t need to bother evaluating the rules. You can just have both people sign. In most cases it will short circuit to a multisig. The only time it would have a program run in the script would be if one of the parties is refusing to cooperate. Hopefully the fact that you can fallback to running that script will be an incentive to always cooperate because it is not going to stop anything from happening.
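
As a loose sketch of that shape (schematic only, not the actual BIP 341 key tweak or control-block format; the hash-based "tweak" below is purely illustrative):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def taproot_output_key(internal_key: bytes, script_tree_root: bytes) -> bytes:
        # The single output key commits to both the cooperative key and the
        # fallback scripts, so nothing about the scripts is visible up front.
        return h(internal_key + script_tree_root)

    def spend(everyone_agrees: bool) -> str:
        if everyone_agrees:
            return "key path: both parties just sign; the script is never revealed or run"
        return "script path: reveal the relevant script plus a proof it was committed to, then satisfy it"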

Q - How about creating a complex multisig wallet for example? A 2-of-2 wallet where one of the signers is actually a multisig. Would that become possible?

A - I believe the Schnorr signatures that are included in Taproot make that possible but that script is over my head.

\ No newline at end of file diff --git a/la-bitdevs/index.xml b/la-bitdevs/index.xml index 1c1cfda39c..295a2ab091 100644 --- a/la-bitdevs/index.xml +++ b/la-bitdevs/index.xml @@ -2,7 +2,7 @@ Location: LA BitDevs (online) CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199 Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860 -Greg Sanders Bitcoin dev mailing list post in April 2017: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html +Greg Sanders Bitcoin dev mailing list post in April 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html The vulnerability The way Bitcoin transactions are encoded in the software is there is a list of coins essentially and then there is a list of destinations and the amount being sent to each destination. The destinations do not include the fee. Nothing in the transaction tells you what the fee of the transaction is.
Magical Bitcoinhttps://btctranscripts.com/la-bitdevs/2020-05-21-alekos-filini-magical-bitcoin/Thu, 21 May 2020 00:00:00 +0000https://btctranscripts.com/la-bitdevs/2020-05-21-alekos-filini-magical-bitcoin/Magical Bitcoin site: https://magicalbitcoin.org/ Magical Bitcoin wallet GitHub: https://github.com/MagicalBitcoin/magical-bitcoin-wallet sipa’s Miniscript site: http://bitcoin.sipa.be/miniscript/ diff --git a/layer2-summit/2018/lightning-overview/index.html b/layer2-summit/2018/lightning-overview/index.html index 00c499639f..1891e0053a 100644 --- a/layer2-summit/2018/lightning-overview/index.html +++ b/layer2-summit/2018/lightning-overview/index.html @@ -12,4 +12,4 @@ Conner Fromknecht

Date: April 25, 2018

Transcript By: Bryan Bishop

Tags: Amp, Splicing, Watchtowers

Category: Conference

Media: -https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=15m10s

https://twitter.com/kanzure/status/1005913055333675009

https://lightning.network/

Introduction

First I am going to give a brief overview of how lightning works for those of you who may not be so familiar with it. Then I am going to discuss three technologies we’ll be working on in the next year at Lightning Labs and pushing forward.

Philosophical perspective and scalability

Before I do that though, following up on what Neha was saying, I want to give a philosophical perspective on how I think about layer 2 as a scalability solution.

How does layer 2 improve scalability? The metric I like to think about is bytes per meat space transaction. By meat space transaction I mean when I go down to 7-11 and buy something, that’s a logical transaction but it need not correlate 1-to-1 to a transaction on the blockchain.

A normal bitcoin transaction is about 500 bytes. When you use a payment channel that can be reused many times, a thousand or even hundreds of thousands of times, the actual bytes per transaction that end up in the blockchain are much lower, we’re talking sub 1 byte per transaction. I think that’s the way to think about this. We’re reusing on-chain bytes by using batching and layer 2 solutions.
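
As a back-of-the-envelope check (the sizes and payment count here are assumptions for illustration, not measurements):

    open_tx_bytes = 250       # assumed size of the funding transaction
    close_tx_bytes = 250      # assumed size of the settlement transaction
    payments = 100_000        # payments made over the channel's lifetime

    bytes_per_payment = (open_tx_bytes + close_tx_bytes) / payments
    print(bytes_per_payment)  # 0.005 bytes of chain space per logical payment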

There’s also this efficiency-of-cooperation is what I would call it. In the general case, most people don’t have disputes during their transactions. A judge doesn’t need to preside over my transaction when I go to 7-11 and buy a bag of chips. Using the blockchain is like going to a judge. You really don’t need that. In the genreal case we can assume that people will cooperate, and have provisions in the case that there is a dispute or diversion from the typical case. Awesome.

That’s my high-level introduction or the aims of how layer 2 solutions are trying to scale.

Lightning channels

Moving on, what is a lightning channel? The lightning network paper was originally written by Tadge Dryja and Joseph Poon. Tadge are you here somewhere? There he is. And Joseph Poon, back in 2015 I believe. It was an epic paper. Two years later, we have working code, we have networks, Tadge continues to work on it at MIT as well.

What is a lightning channel? In reality, it’s a single output on the bitcoin blockchain. It’s locked by 2-of-2 multisig. Because it’s 2-of-2 multisig, once the funds are in there, they can only be spent in which two participants of the channel wish to actually spend that.

The lightning network is a protocol for negotiating other transactions that can possibly spend from that UTXO. If you think the channel lifetime over the lifetime of the channel you’re going to be updating with your channel partner a number of successive states where only the final one or the most recent agreement should be valid. Because of that, this is how we get the scaling. Not all of the transactions need to be broadcasted because there are economic and cryptographic assurances that only the most recent state will be executed. If people deviate from this, then they get punished. We can assume cooperation to get some more scalability benefits. As a cryptocurrency project, we still have to build trust-minimizing protocols.

Lightning network

Channels on their own are great, but they aren’t enough. A channel when you think about it looks like this. I have a channel forest diagram here. At each end of those lines there’s one party. Maybe Alice and Bob. This is what it looks like on-chain. This is not useful on its own because if I want to pay anyone then I would have to make a channel to each person and that’s O(n^2) number of channels and you would be better off just not using channels at all. The magic happens when you have a network and you have routing. The way this works is that people at the end of the channels announce they own a channel and if you notice an intersection of 3 edges in that graph then they have ownership of that and you can facilitate movement of money through that path. The density of that graph is important to making sure there’s sufficient path diversity and this is how the backbone of the lightning network infrastructure is formed.

Atomic multi-path payments

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/000993.html

Now we’re going to move on to one new technology called atomic multi-path payments (amps). The problem is that you need a path on the network between multiple nodes on the graph. The problem is that if Alice wants to send 8 BTC she has to– these numbers are the capacities in the direction towards Felix. There’s a capacity in both directions. If Alice wants to pay Felix she wants to send 8 BTC but she can’t because each path on its own doesn’t have enough capacity. And Felix at a time can only receive up to 10 BTC because he has that inbound liquidity, but he’s unable to because of the single path constraint. This is solved by atomic multi-path payments.

Atomic multi-path payments allow a single logical payment to be sharded across multiple paths across the network. This is done at the sender side. Multiple subtransactions can be sent over the network and take their own paths and then the necessary constraint is that they all settle together. If one of the payments fails then we want them to all fail. This is where the atomicity comes in.

This enables better usage of in-bound and out-bound liquidity. You can send payments over multiple routes, and this allows you to split up and use the network better. This is a more intuitive user experience because like a bitcoin wallet you expect to be able to send most of the money in your wallet. Without AMPs, this is difficult because you can only send a max amount based on current channel liquidity or something, which doesn’t really make sense to a user.

Only the sender and receiver need to be aware of this technology. For this reason, it can be adopted into the current lightning network. It can be done with a feature bit and anyone in the middle doesn’t have to know, and to them they look like single normal payments.

The way this works is that the sender creates a base pre-image. Transactions in lightning are locked via a hash, which is a cryptographic hash function applied to some preimage and this determines who is able to claim the prize on-chain. We are going to construct these in a specific way such that they enforce the atomicity constraint we talked about before.

From the base preimage, there’s partial preimages that we can construct. I can do this as many times as I need. Then these are hashed against and these are the payment hashes that are used in the channels. Just from knowing the base preimage and the number of partial payments, I can derive the preimages and the hashes I need for the transactions.

So the sender generates the base preimages, and then compute the partial preimages as well, and they are locked with P1 and P2 for the differently routed partial payments.

To explain how this works on the return trip, there’s actually a way we can do this– in the original proposal that roasbeef and I worked and send to the mailing list- there’s a way to do this with extra onion blobs where you attach extra data to the lightning payment and the receiver unwraps another layer of encryption and there’s more information for them. We’re going to use this to send over shares of the base preimage. For those of you familiar with secret sharing, it’s a way of dividing up a piece of information such that when you reconstruct and put something back together, it’s that no piece alone is able to divulge the whole secret. We want all payments to go through in order for the payment to be considered valid. We can do something simple like XOR. And we generate all these random values and continue to concatenate them into a big value and at the end we mix in our base preimage. When the receiver receives this data— each of those shares are sent out in those extra onion blobs. When the receiver gets this, they are able to reconstruct the base preimage. Because all of the payment preimages and payment hashes are derived from this base preimage, as soon as Felix (the receiver) knows the base preimage, he knows all the information to settle these payments in one go. The atomicity is governed by his ability to reconstruct the base preimage, which he can only reconstruct if all the payments reach him. He settles back with the partial preimages and Alice is happy. So this transaction was able to route 8 BTC to Felix even though no single path had that liquidity.

You can imagine a single payment carving a knife through the network but now you can do this in a more diffuse manner. With smaller updates, you’re more likely to never need to close out an entire channel.

We’re going to be working on AMPs in the next few months.

Splicing overview

https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-May/000692.html

Splicing is another cool technology. I think roasbeef came up with the scheme I’m going to present today. There’s a lot of different ways to do this. We’re still in the research phase. I am just going to present an example that is sort of familiar and easy to graph.

Can I fund a channel with a regular bitcoin wallet and can I top off an existing channel with an on-chain transaction? The answer to both is actually yes. We use something called splicing which allows us to modify the live balance of a channel.

With a splice-in, funds can be added to the channel, and they subsume an existing UTXO. And splice-out can remove funds from a participant balance and this creates a UTXO. You can take what’s in the channel, splice it off to an address that a person I’m trying to pay controls.

This removes the concept of my “funds are locked in the lightning network” because now you can do both on-chain payments into a channel and you can also do a payment out of a channel without interrupting the normal operation of the channel.

These basic operations can be composed into a number of different things. Someone might want to splice in, someone might want to splic eout, so you can get really creative about this.

We’d like to minimize the number of on-chain transactions. We want to be non-blocking and let the channel be able to continue usage while this is happening. Doing this, putting your channel on hold and you weren’t able to route, that wouldn’t be great for usability. I’m going to describe a way that we can do this with one transaction, is non-blocking, and allows the channel to continue operation.

It starts with the funding output (UTXO) that we talked about earlier. And, the next thing we’re going to do– this starts at an initial state, the commitment transactions are spending from the agreement, and they update to state i which is the most recent and valid one in this scenario. At some point later, Alice has a UTXO and wants to splice into the channel. So we create a new funding output and this has its own commitment states. This is an entirely separate channel but we’re going to copy over the state from the old funding commitments to the new ones. This should be congruent except that Alice has a little bit more money now. So now the trick to continuing to use this channel without closing it down, is that we’re going to continue updating both channels in parallel. This means that the funding balance– the balance added to the new funding transaction is not usable until it fully confirms, but it doesn’t stop us from using the channel. Let’s say the funding transaction doesn’t go through- I still have my old funding transaction and that channel is still working. But if the new funding is working, and confirmed, then we can continue operations throughout that, because they were both compatible.

What happens if the new funding transaction doesn’t confirm? Well we might need to use replace-by-fee. You might actually need to maintain a superposition of n-different channels. You might start with one, then you add more fees and bump it up. Instead of doing it to both channels, it’s the current channel and all n-pending channels. All transactions that we do are valid across all of these and at the end of the day one of them is going to confirm and take over.

The modification to the current channel design is we need to update and validate these commitments. We need a validation phase where you check all the channels, and finally you just commit them. I think roasbeef has been working on this last week. I’ll have to ask him about how hard was it really.

The only real change is the routing.. when an output is spent from and some other transaction is broadcast on the chain, most nodes are going to see that as a channel being closed (especially if they are not upgraded), and it needs to remain open. So we need a new message that says hey this is being spliced it will closed but don’t worry you can keep routing through me. That’s the only change for the routing layer that has to happen. So they can close nad reopeen on another channel. The upgraded nodes can continue using that one, of course.

Watchtowers

see outsourcers in https://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/

The last thing that we’re going to go into is watchtowers. Watchtowers are more important for moving to a mobile-friendly environment. People want to be able to do contactless payments when they walk through stores and this is going to be a part of the modern economy.

The problem is that you might go offline, you might go on a hike, someone might try to cheat you by broadcasting a state that’s not the most recent. So what we’re proposing is that we give some other person the ability to sweep this channel, close it and give me the money back. You think this might require you to give them the keys, but in fact, you only need them to give the ability to construct a transaction that you have authorized. This would require knowing what the transactions look like, being able to generate the scripts in the outputs, and generating the witness which authenticates the transaction.

One of the hcallenges is privacy. If you’re backing up these updates to different nodes then this might be a timing channel. We think we can mitigate this to a reasonable level. You also need to negotiate this with a finder, how much do you give them so that they have economic incentive? And finally, you need ways to clean up old channel states so that they are cleaned up properly and nodes don’t have to unnecessarily use space.

Encrypted blobs

The method that I am going to describe uses encrypted blobs. Whenever a prior state is revoked, I take the commitment transaction and take the txid. I take the first 16 bytes as a hint. The second half is an encryption key. As a hash this is hard for someone to guess. The watchtower is only going to really realize that they have an encrypted blob when that transaction is published. They look at all the transactions in a block, they look at their database, then they unwrap them using their encryption keys. The channel backing up also requires signatures so that it can be swept without my presence. And finally I need to put all of this into a blob including the script template parameters to fill out all the script templates we use. And finally they encrypt and send this package of hints and encrypted blobs. You send it up to a watchtower, they ACK.

In terms of sweeping outputs, here’s an example script of one of the scripts in the lightning script. There’s also a receive HTLC and offer HTLC script. Anything in brackets is what I’m calling a script template parameter. You can inject these values and reconstruct the actual scripts used in the protocol. There’s also the local delayed pubkey parameter in this to-local script. The other hting I need is if that– we only need the watchtowers into play when someone broadacsts an old state, I need a signature also with the revocation pubkey signed under that pubkey. That’s what I’m doing to assemble these template parameters and signatures. We actually sweep– there are two types of outputs, commitment outputs and HTLC outputs. The majority of your balance is kept in the commitment outputs. That’s where the majority stays. Anything in the HTLC outputs are transient. There’s typically two commitment outputs and there can be up to 966 HTLC outputs on a single transaction.

HTLC output scripts are a little more involved but they have script templates too and follow the same general format. There’s SIGHASH_ALL which is one of the sighash flags used in bitcoin required for this… the state space can manifest on chain because of thse 2-state layers of HTLC. Using SIGHASH_SINGLE it’s more liberal than SIGHASH_ALL and allows us to get this down to a linear amount of space required for the signatures.

And finally, in eltoo, there’s a recent proposal for SIGHASH_NOINPUT which is evne more liberal and it requires just one signature for all HTLCs. This will make this watchtower stuff pretty optimal in my opinion.

Watchtower upgrade proposals

And finally just some closing thoughts on privacy for watchtowers. We want people’s anonymity to be protected so that we can’t correlate across channels and watch for watchtower updates. We want to use unique keys for each brontide handhsake. Brontide is the transport protocol we use to connect on the watchtower network. We can batch these encrypted blobs. There’s no requirement that I need to upload them immediately, I just need to do it before I go offline for a long time. I could save it up and broadcast on a random timer. And finally, there’s the concept of blinded tokens where I can pre-negotiated and get signatures from the watchtower on redeemable tokens and present one of those when I want to actually do a watchtower update and they would authenticate that they have already seen this token or whatever. There’s no requirement to use the same watchtower, you can update across many of them or you can switch intermediately. I think I’m out of time. Thank you.

\ No newline at end of file +https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=15m10s

https://twitter.com/kanzure/status/1005913055333675009

https://lightning.network/

Introduction

First I am going to give a brief overview of how lightning works for those of you who may not be so familiar. Then I am going to discuss three technologies we’ll be working on and pushing forward in the next year at Lightning Labs.

Philosophical perspective and scalability

Before I do that though, following up on what Neha was saying, I want to give a philosophical perspective on how I think about layer 2 as a scalability solution.

How does layer 2 improve scalability? The metric I like to think about is bytes per meat space transaction. By meat space transaction I mean when I go down to 7-11 and buy something, that’s a logical transaction but it need not correlate 1-to-1 to a transaction on the blockchain.

A normal bitcoin transaction is about 500 bytes. When you use a payment channel that can be reused for many transactions, a thousand or hundreds of thousands of times, the actual bytes per transaction that end up in the blockchain are much lower; we’re talking sub 1 byte per transaction. I think that’s the way to think about this. We’re getting more use out of each on-chain byte by using batching and layer 2 solutions.
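
A rough back-of-the-envelope sketch of that amortization (the 500-byte figure and the payment counts are just the illustrative numbers from above, not measurements):

```python
# Rough bytes-per-meatspace-transaction estimate for a reused payment channel.
# Assumes roughly one ~500-byte transaction to open the channel and one to close it,
# amortized over however many off-chain payments the channel carries in between.
def bytes_per_payment(open_bytes=500, close_bytes=500, num_payments=1):
    return (open_bytes + close_bytes) / num_payments

print(bytes_per_payment(num_payments=1))        # ~1000.0: worse than a plain on-chain payment
print(bytes_per_payment(num_payments=1_000))    # ~1.0 byte per logical payment
print(bytes_per_payment(num_payments=100_000))  # ~0.01 bytes per logical payment
```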

There’s also what I would call an efficiency of cooperation. In the general case, most people don’t have disputes during their transactions. A judge doesn’t need to preside over my transaction when I go to 7-11 and buy a bag of chips. Using the blockchain is like going to a judge. You really don’t need that. In the general case we can assume that people will cooperate, and have provisions in case there is a dispute or a deviation from the typical case. Awesome.

That’s my high-level introduction to the aims of how layer 2 solutions are trying to scale.

Lightning channels

Moving on, what is a lightning channel? The lightning network paper was originally written by Tadge Dryja and Joseph Poon. Tadge are you here somewhere? There he is. And Joseph Poon, back in 2015 I believe. It was an epic paper. Two years later, we have working code, we have networks, Tadge continues to work on it at MIT as well.

What is a lightning channel? In reality, it’s a single output on the bitcoin blockchain. It’s locked by 2-of-2 multisig. Because it’s 2-of-2 multisig, once the funds are in there, they can only be spent when both participants of the channel wish to actually spend them.

The lightning network is a protocol for negotiating other transactions that can possibly spend from that UTXO. Over the lifetime of the channel you’re going to be updating through a number of successive states with your channel partner, where only the final one, the most recent agreement, should be valid. That is how we get the scaling. Not all of the transactions need to be broadcast because there are economic and cryptographic assurances that only the most recent state will be executed. If people deviate from this, then they get punished. We can assume cooperation to get some more scalability benefits. As a cryptocurrency project, we still have to build trust-minimizing protocols.

Lightning network

Channels on their own are great, but they aren’t enough. A channel, when you think about it, looks like this. I have a channel forest diagram here. At each end of those lines there’s one party. Maybe Alice and Bob. This is what it looks like on-chain. This is not useful on its own because if I want to pay anyone then I would have to make a channel to each person, that’s O(n^2) channels, and you would be better off just not using channels at all. The magic happens when you have a network and you have routing. The way this works is that the people at the ends of the channels announce that they own a channel, and where edges intersect at a node in that graph, that node can facilitate movement of money through that path. The density of that graph is important to making sure there’s sufficient path diversity, and this is how the backbone of the lightning network infrastructure is formed.

Atomic multi-path payments

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/000993.html

Now we’re going to move on to one new technology called atomic multi-path payments (AMPs). The problem is that you need a path on the network between nodes on the graph with enough capacity. Say Alice wants to send 8 BTC; these numbers are the capacities in the direction towards Felix, and there’s a capacity in both directions. Alice wants to pay Felix 8 BTC but she can’t, because no single path on its own has enough capacity. Felix can in total receive up to 10 BTC because he has that inbound liquidity, but he’s unable to because of the single-path constraint. This is solved by atomic multi-path payments.

Atomic multi-path payments allow a single logical payment to be sharded across multiple paths across the network. This is done at the sender side. Multiple subtransactions can be sent over the network and take their own paths and then the necessary constraint is that they all settle together. If one of the payments fails then we want them to all fail. This is where the atomicity comes in.

This enables better usage of inbound and outbound liquidity. You can send payments over multiple routes, and this allows you to split things up and use the network better. This is a more intuitive user experience because, like with a bitcoin wallet, you expect to be able to send most of the money in your wallet. Without AMPs this is difficult, because you can only send a max amount based on current channel liquidity or something, which doesn’t really make sense to a user.

Only the sender and receiver need to be aware of this technology. For this reason, it can be adopted into the current lightning network. It can be done with a feature bit and anyone in the middle doesn’t have to know, and to them they look like single normal payments.

The way this works is that the sender creates a base pre-image. Transactions in lightning are locked via a hash, which is a cryptographic hash function applied to some preimage and this determines who is able to claim the prize on-chain. We are going to construct these in a specific way such that they enforce the atomicity constraint we talked about before.

From the base preimage, there are partial preimages that we can construct. I can do this as many times as I need. These are then hashed, and those hashes are the payment hashes that are used in the channels. Just from knowing the base preimage and the number of partial payments, I can derive the preimages and the hashes I need for the transactions.

So the sender generates the base preimage and then computes the partial preimages as well, and the differently routed partial payments are locked with P1 and P2.

To explain how this works on the return trip, there’s actually a way we can do this– in the original proposal that roasbeef and I worked on and sent to the mailing list– with extra onion blobs, where you attach extra data to the lightning payment and the receiver unwraps another layer of encryption and there’s more information for them. We’re going to use this to send over shares of the base preimage. For those of you familiar with secret sharing, it’s a way of dividing up a piece of information such that no piece alone is able to divulge the whole secret, but putting the pieces back together reconstructs it. We want all payments to go through in order for the payment to be considered valid. We can do something simple like XOR. We generate all these random values, continue to concatenate them into a big value, and at the end we mix in our base preimage. Each of those shares is sent out in those extra onion blobs. When the receiver gets this data, they are able to reconstruct the base preimage. Because all of the payment preimages and payment hashes are derived from this base preimage, as soon as Felix (the receiver) knows the base preimage, he knows all the information to settle these payments in one go. The atomicity is governed by his ability to reconstruct the base preimage, which he can only reconstruct if all the payments reach him. He settles back with the partial preimages and Alice is happy. So this transaction was able to route 8 BTC to Felix even though no single path had that liquidity.
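
Here is a minimal sketch of that derivation and the XOR share scheme, with SHA-256 standing in for the hash function; names like partial_preimages and the exact concatenation are illustrative assumptions, not the wire format from the proposal:

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Sender: derive per-path preimages and payment hashes from one base preimage.
base_preimage = os.urandom(32)
n_paths = 2
partial_preimages = [h(base_preimage + bytes([i])) for i in range(n_paths)]
payment_hashes = [h(p) for p in partial_preimages]   # these lock each partial payment

# Sender: split the base preimage into XOR shares, one per extra onion blob.
shares = [os.urandom(32) for _ in range(n_paths - 1)]
last = base_preimage
for s in shares:
    last = bytes(a ^ b for a, b in zip(last, s))
shares.append(last)

# Receiver: only after *all* shares arrive can the base preimage be rebuilt...
recovered = bytes(32)
for s in shares:
    recovered = bytes(a ^ b for a, b in zip(recovered, s))
assert recovered == base_preimage

# ...and from it every partial preimage needed to settle all the payments at once.
assert [h(recovered + bytes([i])) for i in range(n_paths)] == partial_preimages
```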

You can imagine a single payment carving a knife through the network but now you can do this in a more diffuse manner. With smaller updates, you’re more likely to never need to close out an entire channel.

We’re going to be working on AMPs in the next few months.

Splicing overview

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-May/000692.html

Splicing is another cool technology. I think roasbeef came up with the scheme I’m going to present today. There’s a lot of different ways to do this. We’re still in the research phase. I am just going to present an example that is sort of familiar and easy to graph.

Can I fund a channel with a regular bitcoin wallet and can I top off an existing channel with an on-chain transaction? The answer to both is actually yes. We use something called splicing which allows us to modify the live balance of a channel.

With a splice-in, funds can be added to the channel, and they subsume an existing UTXO. A splice-out can remove funds from a participant’s balance, and this creates a UTXO. You can take what’s in the channel and splice it off to an address that a person I’m trying to pay controls.

This removes the concept of my “funds are locked in the lightning network” because now you can do both on-chain payments into a channel and you can also do a payment out of a channel without interrupting the normal operation of the channel.

These basic operations can be composed into a number of different things. Someone might want to splice in, someone might want to splice out, so you can get really creative about this.

We’d like to minimize the number of on-chain transactions. We want to be non-blocking and let the channel continue to be used while this is happening. If doing this put your channel on hold and you weren’t able to route, that wouldn’t be great for usability. I’m going to describe a way to do this that uses one transaction, is non-blocking, and allows the channel to continue operation.

It starts with the funding output (UTXO) that we talked about earlier. This starts at an initial state, the commitment transactions are spending from the agreement, and they update to state i, which is the most recent and valid one in this scenario. At some point later, Alice has a UTXO and wants to splice it into the channel. So we create a new funding output and this has its own commitment states. This is an entirely separate channel, but we’re going to copy over the state from the old funding commitments to the new ones. These should be congruent, except that Alice has a little bit more money now. Now the trick to continuing to use this channel without closing it down is that we’re going to continue updating both channels in parallel. This means that the balance added to the new funding transaction is not usable until it fully confirms, but it doesn’t stop us from using the channel. Let’s say the new funding transaction doesn’t go through: I still have my old funding transaction and that channel is still working. But if the new funding works and is confirmed, then we can continue operations through that one, because the two were kept compatible.

What happens if the new funding transaction doesn’t confirm? Well we might need to use replace-by-fee. You might actually need to maintain a superposition of n-different channels. You might start with one, then you add more fees and bump it up. Instead of doing it to both channels, it’s the current channel and all n-pending channels. All transactions that we do are valid across all of these and at the end of the day one of them is going to confirm and take over.
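
A small sketch of that superposition idea, using hypothetical names like PendingChannel and apply_update rather than anything from the actual splicing design:

```python
from dataclasses import dataclass, field

@dataclass
class PendingChannel:
    funding_txid: str
    states: list = field(default_factory=list)

# The live channel plus any fee-bumped replacement fundings awaiting confirmation.
channels = [PendingChannel("funding_v1")]

def splice_with_rbf(new_txid: str) -> None:
    # A fee-bumped replacement funding transaction joins the superposition.
    channels.append(PendingChannel(new_txid))

def apply_update(state: dict) -> None:
    # Commit the same state update against the old channel and every pending funding,
    # so whichever funding transaction eventually confirms already carries it.
    for ch in channels:
        ch.states.append(state)

splice_with_rbf("funding_v2_rbf")
apply_update({"balance_alice": 5, "balance_bob": 3})
print([(ch.funding_txid, ch.states[-1]) for ch in channels])
```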

The modification to the current channel design is that we need to update and validate these commitments. We need a validation phase where you check all the channels, and finally you just commit them. I think roasbeef has been working on this last week. I’ll have to ask him how hard it really was.

The only real change is the routing. When an output is spent and some other transaction is broadcast on the chain, most nodes are going to see that as a channel being closed (especially if they are not upgraded), yet it needs to remain open. So we need a new message that says: hey, this is being spliced, it will be closed, but don’t worry, you can keep routing through me. That’s the only change that has to happen at the routing layer. So they can close and reopen on another channel. The upgraded nodes can continue using that one, of course.

Watchtowers

see outsourcers in https://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/hardening-lightning/

The last thing that we’re going to go into is watchtowers. Watchtowers are more important for moving to a mobile-friendly environment. People want to be able to do contactless payments when they walk through stores and this is going to be a part of the modern economy.

The problem is that you might go offline, you might go on a hike, and someone might try to cheat you by broadcasting a state that’s not the most recent. So what we’re proposing is that we give some other party the ability to sweep this channel, close it and give me the money back. You might think this requires you to give them your keys, but in fact, you only need to give them the ability to construct a transaction that you have authorized. This would require knowing what the transactions look like, being able to generate the scripts in the outputs, and generating the witness which authenticates the transaction.

One of the challenges is privacy. If you’re backing up these updates to different nodes then this might be a timing channel. We think we can mitigate this to a reasonable level. You also need to negotiate this with a finder: how much do you give them so that they have an economic incentive? And finally, you need ways to clean up old channel states properly so that nodes don’t have to use space unnecessarily.

Encrypted blobs

The method that I am going to describe uses encrypted blobs. Whenever a prior state is revoked, I take the commitment transaction and take the txid. I take the first 16 bytes as a hint. The second half is an encryption key. As a hash this is hard for someone to guess. The watchtower only really realizes that it has a matching encrypted blob when that transaction is published. They look at all the transactions in a block, they look at their database, then they unwrap the blobs using their encryption keys. Backing up the channel also requires signatures so that it can be swept without my presence. Finally I need to put all of this into a blob, including the script template parameters to fill out all the script templates we use. The client then encrypts and sends this package of hints and encrypted blobs up to a watchtower, and they ACK.
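
A toy sketch of that hint-and-blob flow; the XOR keystream here is purely illustrative (a real implementation would use a proper authenticated cipher), and the blob contents are stand-ins:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher for illustration only: XOR with a SHA-256-derived keystream.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Client side: split the revoked commitment txid into a hint and an encryption key.
revoked_txid = hashlib.sha256(b"revoked commitment tx").digest()   # stand-in txid
hint, enc_key = revoked_txid[:16], revoked_txid[16:]
justice_blob = b"signatures + script template parameters for the sweep"
update = (hint, keystream_xor(enc_key, justice_blob))              # sent to the watchtower

# Watchtower side: index blobs by hint; on every block, check txids against the index.
db = {update[0]: update[1]}
for txid in [hashlib.sha256(b"unrelated tx").digest(), revoked_txid]:
    blob = db.get(txid[:16])
    if blob is not None:
        recovered = keystream_xor(txid[16:], blob)   # the second half decrypts the blob
        assert recovered == justice_blob             # enough to assemble the sweep tx
```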

In terms of sweeping outputs, here’s an example of one of the scripts in the lightning protocol. There’s also a receive HTLC and an offer HTLC script. Anything in brackets is what I’m calling a script template parameter. You can inject these values and reconstruct the actual scripts used in the protocol. There’s also the local delayed pubkey parameter in this to-local script. The other thing I need, since we only need the watchtower to come into play when someone broadcasts an old state, is a signature under the revocation pubkey. That’s what I’m doing to assemble these template parameters and signatures. As for what we actually sweep, there are two types of outputs: commitment outputs and HTLC outputs. The majority of your balance is kept in the commitment outputs. That’s where the majority stays. Anything in the HTLC outputs is transient. There’s typically two commitment outputs and there can be up to 966 HTLC outputs on a single transaction.
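
For reference, the to-local commitment output he mentions has roughly this shape (as specified in BOLT #3); below is a sketch of filling in the bracketed template parameters, with placeholder values:

```python
# The to-local script template (shape per BOLT #3); the bracketed names are the
# "script template parameters" the watchtower blob has to supply.
TO_LOCAL_TEMPLATE = (
    "OP_IF <revocationpubkey> "
    "OP_ELSE <to_self_delay> OP_CHECKSEQUENCEVERIFY OP_DROP <local_delayedpubkey> "
    "OP_ENDIF OP_CHECKSIG"
)

def fill_template(template: str, params: dict) -> str:
    script = template
    for name, value in params.items():
        script = script.replace(f"<{name}>", value)
    return script

print(fill_template(TO_LOCAL_TEMPLATE, {
    "revocationpubkey": "02aa...",       # placeholder keys, not real ones
    "to_self_delay": "144",
    "local_delayedpubkey": "03bb...",
}))
```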

HTLC output scripts are a little more involved, but they have script templates too and follow the same general format. There’s SIGHASH_ALL, one of the sighash flags used in bitcoin, required for this… the state space can manifest on chain because of these two-stage layers of HTLCs. SIGHASH_SINGLE is more liberal than SIGHASH_ALL and allows us to get this down to a linear amount of space required for the signatures.

And finally, with eltoo there’s a recent proposal for SIGHASH_NOINPUT, which is even more liberal and requires just one signature for all HTLCs. This will make this watchtower stuff pretty optimal in my opinion.

Watchtower upgrade proposals

And finally just some closing thoughts on privacy for watchtowers. We want people’s anonymity to be protected so that watchtower updates can’t be correlated across channels. We want to use unique keys for each brontide handshake. Brontide is the transport protocol we use to connect on the watchtower network. We can batch these encrypted blobs. There’s no requirement that I upload them immediately, I just need to do it before I go offline for a long time. I could save them up and broadcast on a random timer. And finally, there’s the concept of blinded tokens, where I can pre-negotiate and get signatures from the watchtower on redeemable tokens and present one of those when I want to actually do a watchtower update, and they would authenticate that they have already seen this token or whatever. There’s no requirement to use the same watchtower, you can update across many of them or you can switch intermittently. I think I’m out of time. Thank you.

Scriptless Scripts

Speakers: Andrew Poelstra

Date: May 25, 2018

Transcript By: Bryan Bishop

Category: Conference

Media: https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=3h36m

https://twitter.com/kanzure/status/1017881177355640833

Introduction

I am here to talk about scriptless scripts today. Scriptless scripts are related to mimblewimble, which is the other thing I was going to talk about. Due to time constraints, I will only talk about scriptless scripts. I’ll give a brief historical background, and then I will say what scriptless scripts are, what we can do with them, and give some examples.

History

In 2016, there was a mysterious paper dead-dropped on an IRC channel that I frequent called #bitcoin-wizards. The paper was written by somebody called Tom Elvis Jedusor, which is the name of Voldemort in the French Harry Potter books. This was a text document written in broken English describing a blockchain design called mimblewimble in which all the scripts are removed in favor of transactions which only have inputs and output amounts. The way that things are authenticated is that amounts, rather than being exposed, are hidden behind homomorphic commitments.

Homomorphic commitments allow people to add things up and verify that input and output amounts balance, while also hiding the amounts. The hiding involves a random secret blinding factor. In mimblewimble, this factor would be used instead of secret keys.
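
A toy sketch of that homomorphic balance check, done in a multiplicative group of integers instead of the elliptic curve points mimblewimble actually uses; the parameters are demo values only:

```python
import random

p = 2**127 - 1          # a Mersenne prime, fine for a demo group
g, h = 3, 5             # two generators; for the demo we simply assume their
                        # discrete log relationship is unknown

def commit(value: int, blind: int) -> int:
    # Pedersen-style commitment: g^value * h^blind (mod p).
    return (pow(g, value, p) * pow(h, blind, p)) % p

# Two inputs and one output that conserve value: 3 + 4 == 7.
r1, r2 = random.randrange(p - 1), random.randrange(p - 1)
c_in1, c_in2 = commit(3, r1), commit(4, r2)
c_out = commit(7, (r1 + r2) % (p - 1))

# Homomorphic property: the product of the input commitments equals the output
# commitment, so a verifier can check that amounts balance without seeing them.
assert (c_in1 * c_in2) % p == c_out
```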

The result is a blockchain with only signatures.

The question presented in the paper was what kinds of scripts could be supported in this blockchain without breaking the homomorphic property that the transaction security depends on. Scriptless scripts are the answer to this question.

From most of 2017 until now, we have been working on scriptless scripts as an answer to that question.

Scriptless scripts

Scriptless scripts are a way to encode smart contracts into digital signatures. This has applications way beyond mimblewimble. The entire purpose is to do things without any scripts. Scriptless scripts are completely agnostic to which blockchain they are on, but they do require some sort of digital signature support. This was kind of hinted at during the panel earlier today. What we’re doing here is removing these hash preimages, removing these various different script tricks that people use to get atomicity between chains and transactions, and moving those into signatures so that you get something smaller, more efficient, more scalable, more private, and also it’s inherently interoperable.

Not just mimblewimble

Let me review just what Script is. We know what Script is. Let me review why Script sucks.

The reason why Script sucks is that scripts are big. They need to be downloaded, parsed, and validated by every full node on the network. They can’t be compressed. They contain hashes, pubkeys, and preimages of hashes, and all of these objects are things that look like random data– they are highly structured but there’s no redundancy to compress out.

The details of this data are visible to everyone forever in the blockchain. The details of the scripts are published to the blockchain, anyone can analyze these details, and they won’t ever go away.

With scriptless scripts, nearly the only thing visible is the public keys and signatures. More than that, in multi-party settings, there will be a single public key and a single signature for all the actors. Everything looks the same– lightning payments would look the same as plain payments, escrows, atomic swaps, or sidechain federation pegs, and it will look the same as tumblebit-like things. Pretty much anything you can think of that people are doing on bitcoin today can be made to look essentially the same.

They all look like public keys, and they might look like locktimes in some cases, and there’s a way to factor that out.

So it gets us pretty close to perfect fungibility in terms of how outputs are labeled.

Adaptor signatures

The basic building block is an adaptor signature. Tadge has gone into this with his discreet log contracts. One way to think about the relationship between these two is that discreet log contracts are a way to do alchemy with public keys, and scriptless scripts is a way to do alchemy with signatures.

As a consequence of this, you need to use something called a Schnorr signature. Right now bitcoin, zcash, litecoin and all others use ECDSA signatures, which are harder to work with than Schnorr signatures. There was a recent paper published showing that there’s a way to do scriptless scripts with ECDSA.

I am going to describe everything in terms of Schnorr signatures. We need Schnorr signatures to do this efficiently and with a minimum of cryptographic assumptions. In principle you could do this with ECDSA.

What is a multi-signature? It has multiple participants that produce the signature. Every participant might produce a separate signature and concatenate them and this is a multi-sig. That’s what bitcoin does. It’s cool and nice and it’s easy to reason about because you don’t have to worry about signers canceling each other’s signatures. But it’s inefficient- it’s a lot of data and you have to handle each signature separately.

With Schnorr signatures, you could have a single pubkey which is the sum of many different people’s public keys. The resulting public key is one that signatures will be verifiable against, as long as all the participants cooperate to produce a signature. So it’s still a multisig. They work together.

The way this works with Schnorr is that they exchange public nonces, like in discreet log contracts these commit to the randomness of the signature and allow people to produce a public key version of the signature. And then everyone produces a private version, this is called a partial signature. They all add these together, and this can only be produced by them cooperating.
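
Here is a minimal sketch of that flow in Python, using a toy multiplicative group instead of an elliptic curve. The group parameters and the hash-to-challenge step are hypothetical simplifications; production schemes such as MuSig add defenses (key aggregation coefficients, nonce commitments) that are deliberately omitted here.

```python
# A minimal sketch of naive Schnorr key and signature aggregation.
import hashlib, random

p = 2**127 - 1          # toy prime modulus
q = p - 1               # scalar arithmetic is done mod p-1
g = 3

def H(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

msg = "pay 1 BTC from the 2-of-2"

# Each signer has a secret key and a secret nonce.
x1, x2 = random.randrange(1, q), random.randrange(1, q)
r1, r2 = random.randrange(1, q), random.randrange(1, q)

# Aggregate public key and aggregate public nonce (the nonces are exchanged first).
X = pow(g, x1, p) * pow(g, x2, p) % p
R = pow(g, r1, p) * pow(g, r2, p) % p

# Each signer produces a partial signature against the same challenge.
e = H(R, X, msg)
s1 = (r1 + e * x1) % q
s2 = (r2 + e * x2) % q
s = (s1 + s2) % q       # the single combined signature (R, s)

# Verification looks exactly like a single-signer Schnorr check.
assert pow(g, s, p) == R * pow(X, e, p) % p
print("combined signature verifies against the single aggregate key")
```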

If you use this partial signature and you have that algebraic property– well, you can also have another object where you could learn the secret when they all publish it.

Given an adaptor signature, if someone later publishes the real signature, you will learn the secret by subtracting the two. And if someone later reveals the secret, you can complete the adaptor into a real signature. So we can use this in two directions.
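
A minimal sketch of those two directions, in the same toy group as above. The tweaked-nonce convention, parameters and message are illustrative assumptions rather than a specification.

```python
# Toy adaptor signature: the completed signature and the adaptor differ
# by a secret value t, so publishing one reveals the other.
import hashlib, random

p = 2**127 - 1
q = p - 1
g = 3

def H(*parts):
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q)          # signing key
r = random.randrange(1, q)          # secret nonce
t = random.randrange(1, q)          # the secret being committed to
X, R, T = pow(g, x, p), pow(g, r, p), pow(g, t, p)

e = H(R * T % p, X, "swap tx")      # challenge commits to the tweaked nonce R*T

s_adaptor = (r + e * x) % q         # handed over first, together with T
# The receiver can check the adaptor is well formed against R, T and X:
assert pow(g, s_adaptor, p) == R * pow(X, e, p) % p

s_full = (s_adaptor + t) % q        # the real signature, published on chain
assert pow(g, s_full, p) == R * T % p * pow(X, e, p) % p

# Direction 1: seeing the full signature reveals the secret.
assert (s_full - s_adaptor) % q == t
# Direction 2: anyone holding the secret can complete the adaptor.
assert (s_adaptor + t) % q == s_full
print("signature and secret are interchangeable given the adaptor")
```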

Atomic cross-chain swaps

https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=3h44m53s

https://github.com/apoelstra/scriptless-scripts/blob/master/md/atomic-swap.md

Let’s think about cross-chain swaps. Patrick hinted at this during the panel.

The way that you do a cross-chain swap today typically is you have 2 parties on 2 different blockchains. They put their coins up in a 2-of-2 output where they both have to sign off to move them. You require that in order to move the coins on both sides, the preimage of a specific hash has to be revealed, and the trick is that it’s the same hash. When one party signs to take their coins, they have to reveal the hash preimage, and the other party can take that secret value and then take their coins on the other chain. That’s how the atomicity is enforced.

This sucks, though. You have to publish the hash on both blockchains, everyone has to download the hash, has to download the hash preimage, they can see forever that these two transactions were linked, and you can’t do atomic multi-path payments with this– there’s some neat algebraic trickery to get those various hashes to combine into one revealed secret.

In discrete logarithms, the relationship between secret and public keys is nice in that it inherently supports this kind of addition.

As before, two parties, two blockchains. They each put their coins up on a 2-of-2 output on each side. At this point, let’s suppose that I’m trying to sell some litecoins to Tadge. So I put up a bitcoin, he puts up a litecoin, and before I do anything else– first, we’re going to start the multisig protocol and we exchange public nonces on both sides. Before we start exchanging partial signatures, I will give him two adaptor signatures to the same committed value. What these adaptor signatures are is that they are something such that if he has a partial signature then he can learn a secret.

I will give him an adaptor signature giving him his bitcoin, and I will produce an adaptor signature taking my litecoin. If he learns the secret value then he learns both signatures. And then he will give me a partial signature taking the litecoins and giving them to me. He will see my signature on the blockchain, subtract his contribution, and then he learns the secret, he takes that secret and subtracts it from the other adaptor signature, and now he can take the coins. The atomicity is in the signature. Just by publishing the signature, Tadge has learned something that he can use to produce a signature.
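
Here is a minimal sketch of that atomicity in Python. For brevity it collapses each 2-of-2 into a single adaptor signature per chain; the key, messages and group parameters are hypothetical, and a real swap layers this on top of the multisig protocol described above.

```python
# Toy cross-chain swap: one secret links signatures on two chains.
import hashlib, random

p, g = 2**127 - 1, 3
q = p - 1

def H(*parts):
    return int.from_bytes(hashlib.sha256(
        b"".join(str(x).encode() for x in parts)).digest(), "big") % q

def adaptor_sign(x, X, msg, t, T):
    """Adaptor signature for msg under key x, withholding secret t."""
    r = random.randrange(1, q)
    R = pow(g, r, p)
    e = H(R * T % p, X, msg)
    return R, (r + e * x) % q, e

x = random.randrange(1, q)                  # Alice's key (same party on both chains)
X = pow(g, x, p)
t = random.randrange(1, q)                  # Alice's swap secret
T = pow(g, t, p)

# Alice hands Bob adaptor signatures for BOTH chains, tied to the same T.
R_btc, s_btc_adaptor, e_btc = adaptor_sign(x, X, "btc: pay Bob", t, T)
R_ltc, s_ltc_adaptor, e_ltc = adaptor_sign(x, X, "ltc: pay Alice", t, T)

# Alice later completes and broadcasts the litecoin-side signature.
s_ltc_full = (s_ltc_adaptor + t) % q        # visible on the litecoin chain

# Bob subtracts the adaptor he already holds, learns t, and can now
# complete the bitcoin-side signature to take his coins.
t_learned = (s_ltc_full - s_ltc_adaptor) % q
s_btc_full = (s_btc_adaptor + t_learned) % q
assert pow(g, s_btc_full, p) == R_btc * T % p * pow(X, e_btc, p) % p
print("publishing one signature atomically unlocks the other")
```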

There’s a more straightforward way to do this, but I wanted to do this with adaptor signatures to demonstrate this exchange of data using this commit-reveal trick using nothing but signatures.

There’s a public key on both sides, both belonging to me and Tadge, and then there’s some signatures that are produced by us. It looks like one public key and one signature and no relationship between them. Someone could download those two signatures and imagine some value and show they are related… anyone could do that for any pair of signatures. Everything done here in public is independently and uniformly random. Before I produced any signatures, I gave Tadge some partial adaptor signatures. The crux is just the ordering of the data that we exchanged. After the fact, neither of us can prove that we did any such protocol.

Blind atomic swaps (Jonas Nick)

https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=3h49m35s

http://diyhpl.us/wiki/transcripts/building-on-bitcoin/2018/blind-signatures-and-scriptless-scripts/

This is similar in principle to tumblebit. Suppose that one party is a mixing service that is receiving bitcoin and it is sending bitcoin. Suppose it is only receiving bitcoin in like one… it has a lot of UTXOs and they all have 1 BTC, the same amount. And they are all controlled by the same secret key. The server is producing blind signatures.

A blind signature is produced interactively with the server and the server doesn’t know what it is signing. It just knows how many signatures it has produced. The server can’t distinguish between the coins, it can only count how many it has signed for.
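
Here is a minimal sketch of a blind Schnorr signature in the same toy group, just to make the "server never sees what it signs" step concrete. The parameters and blinding convention are illustrative assumptions, and naive blind Schnorr like this has known weaknesses under parallel signing sessions, so treat it purely as an illustration of the idea.

```python
# Toy blind Schnorr: the server signs a blinded challenge and never
# sees the message or the final signature.
import hashlib, random

p, g = 2**127 - 1, 3
q = p - 1

def H(*parts):
    return int.from_bytes(hashlib.sha256(
        b"".join(str(x).encode() for x in parts)).digest(), "big") % q

# Server side: long-term key and a fresh nonce for this session.
x = random.randrange(1, q)
X = pow(g, x, p)
k = random.randrange(1, q)
R = pow(g, k, p)                         # sent to the user

# User side: blind the nonce and the challenge before sending it back.
msg = "withdraw one mixed coin"
alpha, beta = random.randrange(1, q), random.randrange(1, q)
R_blind = R * pow(g, alpha, p) % p * pow(X, beta, p) % p
c_blind = H(R_blind, X, msg)
c = (c_blind + beta) % q                 # what the server actually sees

# Server signs the blinded challenge without learning msg or R_blind.
s = (k + c * x) % q

# User unblinds; (R_blind, s_final) is an ordinary Schnorr signature on msg.
s_final = (s + alpha) % q
assert pow(g, s_final, p) == R_blind * pow(X, c_blind, p) % p
```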

Rather than using an adaptor signature to reveal some secret arbitrary value to commit and exchange between parties… the adaptor signature here will be the public part of a blind signature. There’s a similarity to the discreet log contract here, but I’m doing it with signatures rather than public keys.

The result is that– suppose I’m a tumbler, and I’m executing these blind swaps… Tadge sends me some coins, not directly to me, but to a 2-of-2 output such that we both have to sign off to get them. I tell him well I’m going to give you an adaptor signature such that if I sign to take these coins then I’m going to complete some blind signature protocol we went into together. At that point he says okay, and he provides a signature to send coins to me, I take them, he sees the signature of me taking them, he uses it to complete the blind signature– and all I know is that someone connected to me and was doing a blind signature protocol, and somehow a signature comes out of that, and bitcoins move. I know how many signatures I made, and maybe I can do some timing analysis, but that’s about it.

So that’s blind swaps.

Zero knowledge contingent payments (Maxwell 2011)

https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/

I have lots of other examples but they start to sound the same. Zero-knowledge contingent payments are similar.

In this zero-knowledge contingent payment (zkcp) example, I will be selling some secret knowledge. It’s an idea that has been around forever. It came from Greg Maxwell in 2011 when there were no efficient zero-knowledge proofs out there. In 2013, this zero-knowledge scheme called SNARKs showed up. Then in 2015, Sean Bowe and gmaxwell did such a thing.

In zkcp, Sean solved a sudoku puzzle and sold the solution to gmaxwell trustlessly. They did some interaction such that Sean Bowe could only take those coins if he gave gmaxwell the solution to the sudoku puzzle.

They use hash preimage reveals to do this. Sean thought of some random number, hashed it, gave gmaxwell the hash, Sean solved the sudoku puzzle, gave the encrypted solution to gmaxwell, and then gave a zero-knowledge proof that the preimage of that hash would decrypt the ciphertext to a valid solution to the sudoku problem. This was very general- if you have a solution to a problem, this is a statement that zero-knowledge provers can prove to you as long as it’s doable in NP.

They could have used adaptor signatures. The way this would have worked is that rather than giving a hash that gmaxwell would have had to put on a blockchain, Sean would have given him a commitment to some value. And then Sean would have provided a – would have done a multisig protocol and provided an adaptor signature to the secret half of that value. If Sean completed the multisig protocol and took the coins, then gmaxwell would learn the secret value, and Sean would provide a zero-knowledge proof that the signature is offset by some secret value which is some pile of data which is the solution to some NP problem. Maybe it’s some other signature, like some other blockchain validation maybe. In principle, you could do all sorts of things with zero-knowledge proofs, even though zk proofs are difficult to verify. In principle, you can do pretty much anything this way. You could make anything atomic with anything else as long as the anything else is somehow verifiable in NP.

Features of adaptor signatures

As I hinted, you can do zero-knowledge proofs with these, rather than directly selling a solution to an arbitrary statement. You could prove the commitment is some part of some other protocol. There are all kinds of cool interactive protocols involving discrete logarithms and you can attach monetary value to it with adaptor signatures or zero-knowledge proofs. This is cool. There are many protocols out there that are “semi-honest” where as long as people follow the protocol then things are okay. If people abort or lie about which values they commit to, then maybe it doesn’t work, and then you need to add zero-knowledge proofs and checksums and stuff. But using this, you can attach a monetary value to doing the protocol properly, which is cool and might be a general construction.

You can make arbitrary sets of signatures atomic. I’m just going to say that. It’s a bit tricky to say precisely. What I mean is that you can have a set of signatures such that if any of those signatures or some subset of a certain form appear, then you can cause other signatures to appear, like constructing multi-hop lightning payments. Every time you make a hop, it could be atomic with another signature and another and so on. These can all be multisigs where there are multiple partial signature contributions which are atomic in different directions. As in the atomic multi-path payments presentation– you can re-blind these each time, adding another random value that you reveal at each step. You can have a many-hop many-channel payment path in which individual participants in the path wouldn’t be able to tell they are in the same path unless they were directly connected, because there’s this re-blinding happening. Anyone watching transactions on the chain wouldn’t be able to see what’s going on, because everything appears uniformly random and independent.

There’s a deniability property– because I’m taking signatures and differences of signatures and encoding the semantics into the order in which you reveal them to different people…. anyone can take any values from the blockchain and make their own protocol transcripts. It would be difficult to leave any evidence. You might be able to hash every step and use the hash as your randomizer but that’s dangerous.

Unlike hash preimages where the preimage is published on the blockchain and revealed… we’re distributing a discrete log, the final party gets the secret key and doesn’t have to reveal it. This allows a transferable proof-of-payment. Rather than saying this payment went through and then publishing the hash preimage and not having linkability, you can sort of prove that you know something without revealing it to anyone, and this makes protocols related to invoicing much simpler.

Because I am encoding all of these semantics into signatures themselves at the time that signatures are produced, I don’t have to put anything into the blockchain except possibly some locktimes. If I have any multisig output laying around with someone, maybe Ethan and I have a 2-of-2 output because maybe one or both of us wanted a multisig wallet with each other… we could reuse that and say whoops we left these coins there several years ago, let’s make a payment channel with this, we could do the adaptor signature trick to do this.

New developments

Lightning with scriptless scripts– getting that into lightning protocol is quite difficult. ajtowns has decided that he is doing it. He posted a message to lightning-dev, and he’s doing it. That’s awesome.

Doing scriptless scripts with ECDSA would be interesting today. Monero maybe– doesn’t have refund support. None of this works unless you have ECDSA. It turns out there was a paper dropped on lightning-dev about this. There are some groups working on implementing this multi-party ECDSA protocol which is exciting. This could be happening today. People could be doing it today, there’s no evidence on the blockchain. You have no idea how many people were involved or what kind of smart contract they were involved in. And if you are working for a Chainalysis company then you are lying to yourself and others.

That’s all I got. Thank you.

https://www.youtube.com/watch?v=jzoS0tPUAiQ&t=4h1m

\ No newline at end of file diff --git a/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks/index.html b/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks/index.html index 8b0845e066..43f8acb871 100644 --- a/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks/index.html +++ b/lets-talk-bitcoin-podcast/2017-06-04-consensus-uasf-and-forks/index.html @@ -8,4 +8,4 @@ Andreas Antonopoulos

Date: June 4, 2017

Transcript By: Michael Folkson

Tags: Soft fork activation

Category: Podcast

Media: -https://letstalkbitcoin.com/blog/post/lets-talk-bitcoin-333-on-consensus-and-all-kinds-of-forks

User activated soft forks vs hard forks

Adam Levine (AL): I’m trying to look at the user activated soft fork thing and not see hypocrisy just steaming out of its ears. I’m really in need of you to help me understand why this is not the case Andreas. The argument to this point about scaling, one of the arguments, one of the larger arguments, has been that hard forks are the worst thing that can happen. In the event of a hard fork that isn’t meticulously planned, and even if meticulously planned, it is still probably a terrible idea. We’ve been now in this argument for two years where you can’t hard fork, you can’t do a variety of other things because they are all too dangerous. The user activated soft fork movement appears to be basically a quick way to create a situation that looks a whole lot like a hard fork. I’ll just lay it out real quick so you understand what I think and you can tell me where I’m wrong. The user activated soft fork as defined in BIP 148 and there are a couple of others floating around too so of course this is not definitive, basically what it does is it protects people who adopt the new version, who adopt the user activated soft fork in the event that there is ever a Bitcoin flip flop in consensus. The user activated soft fork software begins below 50 percent and people who remain on the non user activated soft fork have the majority. Under normal circumstances that would be Bitcoin but because of the way that this is built it can exist with a very low relative strength compared to the rest of the network and just kind of potter along as its own network. But if it ever gets above 50 percent and achieves consensus dominance within the Bitcoin network then the chain that had been the minority will become the majority. There is a large potential for people who have just been making normal transactions, merchants who have just been accepting normal transactions, to find that the transactions they made essentially never happened because they weren’t included in blocks that were found to be valid by the chain that at the time was not what Bitcoin is but later became what Bitcoin is. So that sounds convoluted, it is a little convoluted. Tell me where I’m wrong, why is this not such a hypocritical thing to be talking about at this point.

Andreas Antonopoulos (AA): I don’t know that you are wrong. I think there is a whiff of hypocrisy, I think that’s par for the course in the scaling debate right now. It appears that this debate has fully morphed into pure power play across the board. It is no longer about scaling, it is no longer about any technical issues, it is about who gets to make the rules and who has the power to make decisions about Bitcoin’s future. Much of it is cover for that, the real debate that is happening which is a power struggle. It is a power struggle against a set of consensus rules that absent overwhelming consensus to move in a certain direction will maintain the status quo indefinitely and thwart the attempts of most actors to seize power and try to move the consensus rules in a particular direction. So far Bitcoin has proven to be remarkably resilient to this. Given the amount of propaganda and marketing and money that has been thrown at this power struggle, the fact that nothing has happened is really quite remarkable. This burst of innovation is about all things fork which we really didn’t have two years ago. This innovation is actually giving us much better options for paths forward that we can take in the scaling debate which are unfortunately clouded by the power struggle. If you look back two years ago, first of all the very term hard fork was barely used…. changes to the network, but those changes were done in a variety of ways. People were still experimenting on the best way to signal, activate and make a change to the consensus rules, an upgrade if you like, without causing disruption. Today we have not only distinctions between hard forks and soft forks and several different signaling mechanisms for activating, we also now have a further distinction between miner activated and user activated hard forks and soft forks. Which means we have at least four categories. It is a bit difficult to figure out which is which and there is quite heated debate still on the definition of a hard fork, a soft fork and when one can effectively become the other. You with me so far?

AL: I am yeah. I think it would be prudent if you could just go through and explain a little bit about how the user activated soft fork actually differs from how we normally make these changes.

Soft forks and hard forks

AA: Why don’t I take a step back and we go to a more basic question which is what is the difference between a hard fork and a soft fork? I think fundamentally the primary difference between a hard fork and a soft fork is that a hard fork expands the options, the range of the consensus rules. It makes things that were previously invalid subsequently become valid. So far so good? For example if we wanted to change the difficulty algorithm straight up and say it is going to be calculated a different way. If we make something that gets calculated with a new difficulty algorithm, any nodes that haven’t upgraded to understand that new difficulty algorithm will consider that new block invalid. By the old rules it is invalid. But by the new rules it is valid. Meaning that if all the nodes have upgraded they will consider it valid and the chain will continue. If there is a mix of old and new nodes where the old nodes consider it invalid, the unupgraded nodes consider it invalid, they will effectively stop following that chain. If that chain continues to grow and it grows faster than the old chain they will be effectively kicked off the consensus chain. They will no longer be following the longest difficulty or the greatest cumulative difficulty valid chain. Not because the difficulty changed, not because the length changed but because the very definition of valid changed. It changed in a way that broke the rules as they were before. It is a discontinuous change, you might call it backwards incompatible but it is actually forwards incompatible. Does that make sense?

AL: Yeah so let me restate that. Basically what you are saying is that a hard fork is as you said adding a new capability or something like that or changing the meaning of something in a way that creates something entirely new that wasn’t there before. I will just go ahead and infer soft fork, a soft fork is the opposite of that in that it takes a capability that is already there and the software allows the voluntary stepping down. So if you have 100 percent you could soft fork to having only 80 percent of that capacity. But if you went to 101 percent that would be a hard fork. Is that right?

AA: Correct. Hard fork, the previously invalid is now valid according to new rules. Soft fork, the previously valid is now invalid according to the new rules. For old clients that do not upgrade in a soft fork the previously valid is still valid and therefore they will just ignore the fact that under the new rules it is invalid, they will just consider it valid and continue. The new clients may see that change and say “Oh no that is no longer valid by these new rules. It is now invalid.” You don’t have to upgrade, you find your node doing less strict validation than everybody else. Here is a specific example where this happened. With BIP 66, it is now about a year and a half old. It was a change introduced in order to tighten the rules by which signatures were validated, specifically the DER encoding of ECDSA signatures in Bitcoin. Specifically to prevent certain classes of transaction malleability that were introduced in signatures. A signature that was valid under the previous rules may have been invalid under the new rules. Essentially that was a tightening of the rules, not a loosening of the rules. It was implemented as a soft fork. It actually did cause miners to fork themselves out of consensus. When they said they would enforce that new rule and then didn’t, they created and mined blocks that contained signatures that the rest of the network considered invalid because the rules had tightened and they weren’t enforcing them fully. Six blocks got mined in violation of the soft fork rules and those miners found themselves losing those six blocks because the rest of the network rejected them. Miners can find themselves forked off of consensus if they do not upgrade to more strictly validate the rules of a soft fork because they are staking their energy behind validating to the most current rules.

Chain splits are not hard forks

AL: I feel like we have a Bitcoin and Bitcoin problem here so let’s just add that third definition that I think is missing here. We have a soft fork which is a voluntary restriction of the current rules which makes it so that there is a new sort of consensus but it doesn’t add anything that wasn’t already there. There is a hard fork which adds something that wasn’t already there and so can be used to make larger more systemic changes or at least in a more dramatic way. But then there is this other thing that is called a chain fork or a chain split. I think that is what you are talking about now Andreas. Even in a soft fork situation if the consensus diverges, if part of the network believes this and part of the network believes this, either because of non-upgraded software or whatever, then you can still have a chain fork which causes what many people think of as a hard fork. I think when a lot of people think hard fork they are thinking chain split.

AA: Right and that is not what a hard fork means. Forks in general occur continuously on an almost daily basis. On average once a day the Bitcoin chain forks and it forks because consensus is an eventual settlement system that converges on a common chain after a certain number of blocks. There is a reason why miners can’t collect their reward for 100 blocks, the coinbase is not redeemable for 100 blocks under the consensus rules. The reason for that is because there may be divergence in the consensus on what is the valid main chain which is temporary and can be caused by totally natural causes. Let me give you an example. Say during the production of blocks there are two miners who are distant from each other in terms of network topology; I’ll simplify things and assume the network topology matches geography. You’ve got a miner in Northern Canada and a miner in Australia both mining for blocks. They both find a valid block, totally valid by everybody’s consensus rules, everybody agrees. They find proof of work for that valid block at the same time, perhaps slightly different blocks, the transactions might be in a different order, they almost certainly will be. They mine these two blocks simultaneously and then they start telling everyone they know. Imagine this block from Australia is now propagating out, radiating out. It goes from Australia to Indonesia, from Indonesia to India, and it is gradually spreading. Meanwhile there is another valid block spreading from Northern Canada going in the opposite direction. It is going to North America, Central America, South America spreading towards Europe. Somewhere in Europe the two blocks meet each other. At that point half the network will see the Canadian block first, half the network will see the Australian block first and they will have a different picture on what the current longest cumulative work valid chain is. That is a fork. It happens about once a day, slightly less than once a day on average. Four or five times a week. It gets resolved within one block. The way it gets resolved within one block is half the network sees the Canadian block, half the network sees the Australian block, the half with the Canadian block will assume that is the longest chain valid parent and will build on top of that and start building a new block. The other half of the network will start building on top of the Australian block. At this point the chances of them simultaneously resolving the proof of work again are now infinitesimal, they are much lower than the chance of what just happened happening again. It is more likely that one of the two sides, one of the two perspectives, will produce a new block with proof of work first with a significant time difference. That block will be able to propagate across the entire global network before another block is found. Then the side that thought the Canadian block was the winner builds a child on top of that and propagates it to the whole network. At this point the whole network now sees the long chain as having the Canadian block as the parent and the Australian block has now been deprecated to a minority chain. All of the transactions that are in the Australian block that weren’t in the Canadian block get re-queued in the mempool and get mined in the next block. Everything gets redone. The Australian block, valid as it was, lost the race. In retrospect that miner will not collect that reward because their block did not eventually become part of the longest difficulty valid chain. This happens every single week in Bitcoin. 
Nobody even notices it. It is a normal consequence of the synchronization across latency. The problem is when these divergences do not re-converge. The reason they do not re-converge is because it is not about a difference in perspective about which is the longest difficulty chain, but it is a difference in perspective about which is the longest difficulty valid chain. That valid part is the consensus rules. If there is a divergence on the interpretation of the consensus rules or an inability to validate with the current consensus rules, as we saw with the bug in the database software back in March of 2013, then you have a persistent divergence. That persistent divergence doesn’t get resolved easily. One chain may be able to validate both sides and then you have the possibility of what is called a wipe out where eventually what was a minority chain becomes a majority chain and then wipes out a very long sequence of blocks. It may be that the consensus rules can only go in one way, so only one chain can wipe out the other. That is the case with the user activated soft fork.
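
A minimal sketch of the re-convergence rule being described: every node follows the valid chain with the most cumulative work, so a temporary fork resolves as soon as one side is extended, while a disagreement about validity persists. The block and rule representations here are hypothetical simplifications.

```python
# Toy chain selection: most cumulative work among chains a node considers valid.

def cumulative_work(chain):
    """Sum of per-block work; real nodes derive work from the difficulty target."""
    return sum(block["work"] for block in chain)

def best_chain(candidates, is_valid):
    """Pick the most-work chain among those that pass the node's validity rules."""
    valid = [c for c in candidates if all(is_valid(b) for b in c)]
    return max(valid, key=cumulative_work)

rules = lambda b: b["size_kb"] <= 1000          # every node agrees on validity here

common = [{"height": 100, "work": 10, "size_kb": 400}]
canadian = common + [{"height": 101, "work": 10, "size_kb": 500}]
australian = common + [{"height": 101, "work": 10, "size_kb": 600}]

# Both tips are valid and have equal work: nodes stay split until the next block.
assert cumulative_work(canadian) == cumulative_work(australian)

# A child arrives on the Canadian block; everyone converges on that chain and
# the Australian block is orphaned, its transactions returned to the mempool.
canadian_extended = canadian + [{"height": 102, "work": 10, "size_kb": 300}]
assert best_chain([canadian_extended, australian], rules) is canadian_extended
```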

The analogy of a bingo hall for block discovery

AL: So let me mangle one of our old analogies here. We have often talked about Bitcoin and the process of mining like a game of bingo where essentially everybody is playing against themselves but there ultimately can only be one winner per contest. It sounds like the analogous scenario to two miners finding a block is two people playing bingo both find bingo at the same time, the person who gets it up to the front and gives it to the bingo caller or what have you, is the person who actually wins that round. There can only be one. It is a bit of a race there, that’s the propagation where both blocks are going all over the network trying to get majority before the other guy because that makes it more likely.

AA: But with the difference that there is no one up front, there is no leader of the game. In fact what happens is everybody who is playing bingo at all of the different bingo tables is listening around them. Some people who are closer to one of the bingo winners will hear them shout “Bingo”. They will assume they are the winner for some period of time. Whereas the people who are sitting closer to the other person will hear them say “Bingo” first. If you stopped everything and did a poll saying “Who do you think won the last game?” you’d get a different result on the two sides of the room because of the way the sound of their shouting “Bingo” propagated across the room.

AL: We can also imagine that the rules are also propagating by the same kind of whisper network. People are saying “This has just become invalid.” That is what a soft fork is. People who don’t hear about that or don’t follow those rules still go up when they get a “Bingo” that complies with the old set of rules but it doesn’t comply with the new set of rules. This causes the confusion we are talking about here.

A soft fork to reduce block size scenario

AA: Right. Let’s use a different example for a soft fork. Right now there is a block size limit that requires all blocks be less than or equal to 1 megabyte in size. A consensus rule change that said “Actually we want them to be less than 750K, three quarters of a megabyte from now on. We are going to squeeze them smaller.” That is a soft fork change. The reason it is a soft fork change is because things that were previously valid are now invalid. An 800K block that was previously valid under the 1MB limit is no longer valid under the 750K limit. If you are validating and you are validating by the old rules the new 750K blocks still look fine to you because they are less than a MB, you are cool with them. The chain that is producing 750K blocks happily chugs along and everyone accepts their blocks because they are fine. They are fine by both the old and the new rules. But if somebody produces an 800K block a big chunk of the network, perhaps the majority, will reject it because under the new rules it is no longer valid. When they reject it if the part of the network that is still following the old rules continues building on that you are going to have a divergence again. That divergence is that some people will assume the 800K block is perfectly fine because they haven’t upgraded and they will continue building on top of that. They’ll continue, they’ll continue and they’ll continue. Now it becomes a race. Which side has the majority of the hash power? The problem is that the side that has the 1MB rule sees both chains as valid but the side that has the 750K rule sees only one of them as valid and therefore will never accept the other side. At some point if that goes above 50 percent it is going to re-converge as the longest chain and wipe out the other side, the side that had the 800K block. The opposite can’t happen. The side that has the 800K block can never pull over the new small blockers because they refuse to see that as valid. One side can wipe out the other but not vice versa. Is this making sense?
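
To make the subset relationship explicit, here is a minimal sketch of the hypothetical 750K example: every block the new rule accepts is also accepted by the old rule, which is exactly what makes the change a soft fork rather than a hard fork.

```python
# Toy validity predicates for the 1MB / 750K example.

def old_rule(block):
    return block["size_kb"] <= 1000          # the existing 1 MB limit

def new_rule(block):
    return block["size_kb"] <= 750           # the tightened soft fork limit

small_block = {"size_kb": 600}
big_block = {"size_kb": 800}

assert old_rule(small_block) and new_rule(small_block)   # fine for everyone
assert old_rule(big_block) and not new_rule(big_block)   # only old nodes accept it

# Every block valid under the new rule is valid under the old rule, so
# upgraded and non-upgraded nodes only diverge when someone mines a block
# that the new rule rejects.
for size in range(0, 1001, 50):
    b = {"size_kb": size}
    assert (not new_rule(b)) or old_rule(b)
```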

BIP 148 user activated soft fork

AL: It does. That’s actually the opposite of the situation we are talking about here with the user activated soft fork right?

AA: Actually it is not the opposite, it is the exact same thing. Let’s talk about UASF BIP 148 very specifically. I will summarize as best I can and please read the BIP, a summary is not an accurate representation of this. The bottom line is that on August 1st 2017 nodes that follow UASF BIP 148 and validate based on it, not just signal, but actually validate based on the rules introduced by UASF BIP 148, have a new rule. That new rule is that after August 1st it is not acceptable to have blocks that are not signaling SegWit. They say “We will only accept blocks created by miners that signal acceptance of SegWit.” They don’t have to be SegWit blocks, they don’t have to activate SegWit, they simply have to signal it. So far so good? What that means is that if you had a node and you were running UASF BIP 148 and on August 1st a miner produces a block that is not signaling SegWit you take that block when it comes to you and you go “No it is rejected” and you refuse to propagate it, you refuse to accept it as valid. Everybody who is not working by these rules will propagate it. How far does it go? That depends. It depends on how many miners are participating in this, it depends on how many exchanges are participating in this, it depends on how many nodes are running this new set of rules. If a majority of the nodes are running this, especially a majority of the economic nodes, exchanges, merchants, retailers, wallets etc then miners will face a rather difficult problem which is they are essentially facing an economic blockade. They can mine blocks without signaling SegWit but if those blocks are rejected by the exchanges then 100 blocks later they can’t essentially cash out. They cannot exchange their earned Bitcoin rewards for fiat to pay their electricity bills because those blocks don’t exist on the exchanges’ blockchains. The exchanges are not recognizing them as valid blocks and therefore they are on a different chain. At that point it is presumably easier for a miner to simply signal SegWit and that way they know their mining rewards are safe. On the other hand if only a minority do this or if it is mostly nodes that are not engaged in economic activity and there are no miners supporting this then the people who are doing this and invalidating these blocks on their own blockchain will find themselves on a minority chain with a minority of the hash power which only they and a minority of the nodes are following, which doesn’t have the majority of the transactions and economic activity on it, it doesn’t have the majority of the hash power, it is slower to confirm, it has fewer transactions and it potentially has no blocks or only empty blocks. Those are the two possible scenarios there. Then it gets complicated because of course a third scenario is that miners may decide to fight. If they decide to fight what they could do is they can dedicate some of their hashing power to purposefully mining empty blocks on the UASF chain in order to deny service. It is a denial of service attack. If they mine empty blocks no transactions are being mined on that chain. In order to do that they then have to essentially split their hash power between maintaining the supremacy of the “We won’t do SegWit” chain while also putting some hash power to beating down the UASF chain by denial of service with mining empty blocks. 
That is a dangerous game because if the UASF chain takes over it will wipe out the “We’re not doing SegWit” chain whereas in order to really stop the UASF chain you have to do it through attrition. People have to give up because they’re not getting transaction throughput and they are not getting economic activity on it but it will never get wiped out. The nodes that are following that have effectively created an altcoin at that point if it doesn’t have….
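
A rough sketch of the rule being described, with the flag date and the BIP 9 SegWit deployment bit written out. The exact enforcement windows and exceptions are in the BIP text itself, so treat this as an illustration rather than the real validation logic.

```python
# Toy BIP 148 style check: after the flag date, an enforcing node only
# accepts blocks whose version signals the SegWit deployment bit.
from datetime import datetime, timezone

FLAG_DAY = datetime(2017, 8, 1, tzinfo=timezone.utc)
SEGWIT_BIT = 1                          # BIP 9 deployment bit used by SegWit

def signals_segwit(version):
    top_bits_ok = (version >> 29) == 0b001      # BIP 9 "versionbits" block
    return top_bits_ok and bool(version & (1 << SEGWIT_BIT))

def uasf_accepts(block_time, version):
    if block_time < FLAG_DAY:
        return True                     # before the flag day nothing changes
    return signals_segwit(version)      # afterwards, non-signaling blocks are rejected

assert uasf_accepts(datetime(2017, 7, 1, tzinfo=timezone.utc), 0x20000000)
assert not uasf_accepts(datetime(2017, 8, 2, tzinfo=timezone.utc), 0x20000000)
assert uasf_accepts(datetime(2017, 8, 2, tzinfo=timezone.utc), 0x20000002)
```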

AL: Right. That brings us back round to the beginning of this conversation which was the perceived hypocrisy that I saw in this. We have now talked about hard fork versus soft fork but one of the arguments that I’ve been making and getting some pushback on is that necessarily a user activated soft fork can only succeed by scaring people into action. Because if it doesn’t scare people into action then it happens exactly like you said. You wind up with a situation where you can have your own coin but it is effectively an altcoin because you weren’t able to get the majority of Bitcoin economic activity and these other things to go along with you. But it is set up in such a way that it is kind of like a booby trap. If that’s the situation that happens, it starts with less than 50 percent but then goes to over 50 percent, that’s just bad right? It seems to me that there is a fundamental difference in this approach to doing things compared to anything else I’ve seen. This is more like an attack or an actual threat on non-participation where everything else has been “We want you to support this because this is the right choice. Please opt in.” This is like “If you don’t opt in you have a major chance of being injured by this.”

AA: I think it would be accurate to call it a boycott, possibly an economic blockade. An economic blockade, if you park your ship in the other party’s harbor, you don’t let any other ships go in and out, that’s an economic blockade.

AL: And the starvation is collateral damage.

AA: If you park your ship in the middle of the Pacific Ocean and you declare an economic blockade nobody gives a damn. For an economic blockade to matter you have to be able to impose it on the hubs of economic activity or they have to be working with you. If exchanges, merchants, wallets don’t in a major way support this on August 1st then you’ve parked your ship in the middle of the Pacific Ocean and you’re going “I’m going to blockade y’all” and no one around you for 10,000 miles hears your cry of defiance because nobody gives a damn. At that point of course you are not doing any damage to anyone other than yourself. That is a fundamental difference. A hard fork, 51 percent consensus or more attack forces a change on the entire network. You might call it an attack, you might simply call it a 51 percent consensus rule change. Most people consider it reckless to do that with anything other than overwhelming consensus. It stands a pretty good chance of causing a rift in the network. UASF also has a chance to cause a rift in the network, especially if miners decide to attack the minority chain which would be another escalation. It is effectively a game of chicken. I saw this analogy on Reddit so attribution there to whoever said it. The only way to win a game of chicken is to very visibly take the steering wheel off and throw it out the window. At that point your opponent knows that you have no ability to swerve. There is a head on collision possibility there always. This is what it is. The bottom line to all of this is the issue of power struggles and the fact that this is where we are now. It is not about scaling really, it is about power struggle.

Constituencies of consensus

AA: I’ve said this many times before. There are five constituencies of consensus. Miners, developers, users with wallets, users with exchanges or economic actors with exchanges and economic actors with merchant activity. So merchants, exchanges, wallets, miners and devs. That’s the five constituencies of consensus. They are not hard lines, there is overlap between any of those. Think of it as a Venn diagram. Obviously miners who spend their money at exchanges are also economic actors. Of course users can be miners, miners can be users, they can have wallets, miners might be merchants etc etc. Developers may come from any and all of these. But for the most part think of these as the five constituencies. You cannot change the rules without a majority. Think of it as a 3-of-5. You need 3 of 5 at least. In order to do it in a way that is unlikely to cause disruption you need a 4 of 5 or 5 of 5 agreement. That’s how the consensus rules change. The miners have in the past suggested changing the rules by adopting software by 51 percent or more of the hash power under Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited and various other attempts to change the consensus rules to increase the block size by a hard fork with a 51 percent or more hash rate. That is absolutely going to work effectively and cleanly as long as they have the other four constituencies with them. If they do it is completely seamless and harmless but you can never get full consensus on that. You haven’t seen full consensus emerge on that solution. At the moment full consensus has not emerged on any change to the consensus rules. There is always a minority, somewhere between 20 percent at least and up to 50 percent that doesn’t want one of these solutions. Until that is overcome a lot of this is really not going to work. How much disruption it causes, we’ll see. Maybe a lot, maybe none at all. It very much depends on who plays along with UASF. We might see a big change by August 1st, there are a bunch of other proposals at the same time that are being developed including SegWit2x which is the proposal that reached agreement among certain participants in the ecosystem in New York during Consensus (conference). UASF is just one gambit. What is interesting about UASF is that it is the first time you’ve seen the user base, wallets and in some cases some merchants really claim their power and say “You know what? We are part of this too. We have power.” As is the case with all of the other constituencies of consensus, the claim is not “We also have power.” It is “We have ultimate power.” This is what miners say, this is what some developers say, this is what some exchanges have said even. Of course all of those are false. No one has the ultimate power. That is the tenuous balance that is at the heart of a decentralized system like Bitcoin, that does not allow anyone to take over. That of course is the most important feature of this system. It makes for a very nasty debate and some very difficult scaling decisions but it also makes for a very resilient system that deters attackers and cannot be moulded to the whims of someone with a lot of money and propaganda or a popular movement among users that doesn’t have overwhelming consensus or miners who are pursuing their own self interest or developer groups that are doing their own thing. None of those groups have power ultimately unless they agree with a bigger constituency.

AL: It sounds like a user activated soft fork, or anything else that is a fork of the network, is necessarily going to have to do a hard fork in order to lower the mining difficulty if it doesn’t have 70 percent of the network.

AA: If a soft fork turns into a hard fork then it is a completely different thing.

AL: Assume for a second that we wind up in a situation where there is 40 percent support for UASF. In that situation if there is 40 percent hashing power on that side then that means they either have blocks that are substantially slower than where they are or they hard fork in order to fix it. But by hard forking to fix it again you negate the argument against a hard fork.

AA: That is the case for both sides of the chain at that point. The real consideration for a user activated soft fork driven by the economic actors of the network is: is it actually driven by the economic actors of the network? If it is driven by the economic actors of the network and there is overwhelming support among exchanges and merchants then the miners really don’t have a choice. They can’t mine a chain that they can’t spend.

AL: It is just the redundancy we are talking about, nobody can force a sea change here. To even attempt it is very risky as we’ve seen reputationally certainly. Most of the people who have attempted to do anything like that have much lower profiles and much less prestige in the space than they did before they tried to do that. And also technically it is a risk to your users and in the case of UASF to other users as well.

Innovation on upgrade mechanisms

AA: I think we are going to see resolution of the scaling debate, let me add that as a positive and optimistic note. And the reason we are going to see resolution of the scaling debate is rather simple. Over the last two years during this scaling debate we have seen the emergence of more than a dozen proposals. We have seen an enormous amount of research and development and resources poured into finding technical means by which to clarify, secure, make these solutions better, less disruptive, understand the implications of the different mechanisms for upgrading. The state of the art on upgrading a decentralized consensus algorithm has advanced tremendously. Two years ago we didn’t even have the term hard fork, now we are talking about four different categories of deliberate upgrade forks, miner activated, user activated, soft fork, hard fork and all the combinations. And we are in fact discovering there might be more nuances within that. SegWit didn’t exist two years ago, a bit more than two years ago. SegWit as a soft fork was an invention specifically designed to cause a less disruptive approach towards the scaling debate. That is currently being signaled under BIP 9. BIP 9 itself as a signaling mechanism for miner activated soft forks which were not signaled in that way before, is a new one. Spoonnet, Spoonnet2 and potentially Spoonnet3 which are a series of proposals by Johnson Lau that give us a reformatting of the block header in order to solve several different problems, not just the block size but also to make hard forks much cleaner and less disruptive by ensuring that there is replay protection, so transactions cannot be replayed from one side of the fork to the other, as well as wipeout protection so that you can’t have a very, very long chain develop that then gets wiped out in a reorganization. Those developments did not exist two years ago. We are now much, much more advanced in our understanding, in our research and in our development to do these things in a live network and upgrade them. The proposals that come out are more and more complicated, in some cases that is counterproductive, but they are also more and more sophisticated. You are seeing that people are actively trying to find ways to create solutions both political which I don’t think is the right approach, but also technical solutions to this debate that try to resolve the underlying technical issues in a way that is the least disruptive possible. And I am confident that eventually we are going to see convergence on a solution that is broadly acceptable, that offers a road forward, that is least disruptive, that the community and every one of the five constituencies get behind. And we will see pretty much all of the above. We’ll see an activation of something that is SegWit or very similar to Segregated Witness for transaction malleability and witness scaling and all the other things that SegWit does. We are going to see a base block size increase in addition to SegWit’s block weight increase eventually. We are going to see a reformatting of the block header in order to introduce new features such as extra nonces for miners, a more flexible header format that improves a lot of other issues. We might see a change in the transaction format. We are going to see Schnorr signatures, we are going to see signature aggregation techniques, we are going to see UTXO sets and potentially things like MMR, Merkle Mountain Ranges and other proposals for creating fraud proofs and verifiable UTXO sets and optimizations like that. 
All of the above is the scaling solution. The question that remains is not what do we do, the question that remains is in what sequence and how do we do it in the safest, least disruptive way to the broader ecosystem. That question has not been resolved and it is a technical issue. It is overshadowed by the political struggle and the power struggle but at the bottom line this is a matter of science. I am confident that we will see a road forward.

Prospect of a community split and an altcoin

AL: Since we first started seeing contention in the cryptocurrency space, specifically in the Bitcoin community, I don’t even remember what the first issues were but there have been issues for years at this point. I have long thought that the necessary conclusion of this process is that you have two Bitcoins for the two different core constituencies of Bitcoin. You are talking about the different classes of users, in terms of what role they play in the ecosystem, I’m thinking more about this from a what do these people actually want from Bitcoin perspective. The problem with where we are going is as you’ve said there are the technical concerns but a lot of this is just about who gets to decide and who gets to dictate the way Bitcoin should develop. If we were looking at something that was monolithic, if we are looking at something that wasn’t open source, that wasn’t Bitcoin, then I would agree with you. I would say eventually because these problems need to be solved things will come into alignment. But the reality is that our technical solutions as you said are better than they ever have been. Our processes for figuring out the science of this thing and the amount of people participating and the amount of money being poured in have never been better. And yet the contentious issues have never been more contentious, even when everybody agrees that Segregated Witness should go in. The reason why it hasn’t gone in is because of that power dynamic you were talking about. The one side that is holding it up basically believes this is their leverage and if they give up that leverage that’s pretty much the end of it. The other side doesn’t want to give because of technical arguments and also because they are winning. If nothing changes then the status quo remains and since that is largely more supportive of that position than it is of the people who are blocking it you have this loggerheads. That’s where I get back to. I don’t disagree with anything you said but I think the human element here is a much larger piece of the puzzle and it is very difficult to see because on the one side of the argument which is in favor of small blocks generally many of those channels are frankly pretty repressive and don’t really have conversations about this stuff in a way that lets you see both sides of the argument. I’m not saying the other side is better and I know they’ll yell at me for that too because they yell at everybody and everybody yells at everybody but that’s the point. The stakes are really, really high here and on the one hand there are technical clear paths forward but on the other hand a lot of this is “We don’t really know and so we have to pick and we have to experiment.” You can look around at experiments and they are not very applicable but you can see that people are doing larger blocks and you can see that people are attempting to scale in different ways. That is what I get back to. Contention or not I expect to see two blockchains that both bear the name Bitcoin, that take part of that community and basically don’t look back at the other side. If it is this issue that becomes it then what we’ll see is a Bitcoin that has a small block size, 1MB maybe even gets smaller, that’s Bitcoin Settlement. On the other side you’ll have Bitcoin P2P or Bitcoin Cash or whatever you want to call it and that’ll be something that will have much larger blocks, it will mostly use SPV mining, it will be less secured but it’ll be possible for individuals to transact on it. I don’t see that as a bad thing. 
Media: https://letstalkbitcoin.com/blog/post/lets-talk-bitcoin-333-on-consensus-and-all-kinds-of-forks

User activated soft forks vs hard forks

Adam Levine (AL): I’m trying to look at the user activated soft fork thing and not see hypocrisy just steaming out of its ears. I’m really in need of you to help me understand why this is not the case Andreas. The argument to this point about scaling, one of the arguments, one of the larger arguments, has been that hard forks are the worst thing that can happen. In the event of a hard fork that isn’t meticulously planned, and even if meticulously planned, it is still probably a terrible idea. We’ve been now in this argument for two years where you can’t hard fork, you can’t do a variety of other things because they are all too dangerous. The user activated soft fork movement appears to be basically a quick way to create a situation that looks a whole lot like a hard fork. I’ll just lay it out real quick so you understand what I think and you can tell me where I’m wrong. The user activated soft fork as defined in BIP 148, and there are a couple of others floating around too so of course this is not definitive, basically what it does is it protects people who adopt the new version, who adopt the user activated soft fork, in the event that there is ever a Bitcoin flip flop in consensus. The user activated soft fork software begins below 50 percent and people who remain on the non user activated soft fork have the majority. Under normal circumstances that would be Bitcoin but because of the way that this is built it can exist with a very low relative strength compared to the rest of the network and just kind of potter along as its own network. But if it ever gets above 50 percent and achieves consensus dominance within the Bitcoin network then the chain that had been the minority will become the majority. There is a large potential for people who have just been making normal transactions, merchants who have just been accepting normal transactions, to find that the transactions they made essentially never happened because they weren’t included in blocks that were found to be valid by the chain that at the time was not what Bitcoin is but later became what Bitcoin is. So that sounds convoluted, it is a little convoluted. Tell me where I’m wrong, why is this not such a hypocritical thing to be talking about at this point?

Andreas Antonopoulos (AA): I don’t know that you are wrong. I think there is a whiff of hypocrisy, I think that’s par for the course in the scaling debate right now. It appears that this debate has fully morphed into pure power play across the board. It is no longer about scaling, it is no longer about any technical issues, it is about who gets to make the rules and who has the power to make decisions about Bitcoin’s future. Much of it is cover for that, the real debate that is happening which is a power struggle. It is a power struggle against a set of consensus rules that absent overwhelming consensus to move in a certain direction will maintain the status quo indefinitely and thwart the attempts of most actors to seize power and try to move the consensus rules in a particular direction. So far Bitcoin has proven to be remarkably resilient to this. Given the amount of propaganda and marketing and money that has been thrown at this power struggle, the fact that nothing has happened is really quite remarkable. This burst of innovation is about all things fork which we really didn’t have two years ago. This innovation is actually giving us much better options for paths forward that we can take in the scaling debate which are unfortunately clouded by the power struggle. If you look back two years ago, first of all the very term hard fork was barely used…. changes to the network, but those changes were done in a variety of ways. People were still experimenting on the best way to signal, activate and make a change to the consensus rules, an upgrade if you like, without causing disruption. Today we have not only distinctions between hard forks and soft forks and several different signaling mechanisms for activating, we also now have a further distinction between miner activated and user activated hard forks and soft forks. Which means we have at least four categories. It is a bit difficult to figure out which is which and there is quite heated debate still on the definition of a hard fork, a soft fork and when one can effectively become the other. You with me so far?

AL: I am yeah. I think it would be prudent if you could just go through and explain a little bit about how the user activated soft fork actually differs from how we normally make these changes.

Soft forks and hard forks

AA: Why don’t I take a step back and we go to a more basic question which is what is the difference between a hard fork and a soft fork? I think fundamentally the primary difference between a hard fork and a soft fork is that a hard fork expands the options, the range of the consensus rules. It makes things that were previously invalid subsequently become valid. So far so good? For example, say we wanted to change the difficulty algorithm straight up so that it is calculated a different way. If we make a block that gets calculated with the new difficulty algorithm, any nodes that haven’t upgraded to understand that new difficulty algorithm will consider that new block invalid. By the old rules it is invalid. But by the new rules it is valid. Meaning that if all the nodes have upgraded they will consider it valid and the chain will continue. If there is a mix of old and new nodes where the old nodes, the unupgraded nodes, consider it invalid, they will effectively stop following that chain. If that chain continues to grow and it grows faster than the old chain they will be effectively kicked off the consensus chain. They will no longer be following the longest, or the greatest cumulative difficulty, valid chain. Not because the difficulty changed, not because the length changed but because the very definition of valid changed. It changed in a way that broke the rules as they were before. It is a discontinuous change, you might call it backwards incompatible but it is actually forwards incompatible. Does that make sense?

AL: Yeah so let me restate that. Basically what you are saying is that a hard fork is as you said adding a new capability or something like that or changing the meaning of something in a way that creates something entirely new that wasn’t there before. I will just go ahead and infer soft fork, a soft fork is the opposite of that in that it takes a capability that is already there and the software allows the voluntary stepping down. So if you have 100 percent you could soft fork to having only 80 percent of that capacity. But if you went to 101 percent that would be a hard fork. Is that right?

AA: Correct. Hard fork, the previously invalid is now valid according to new rules. Soft fork, the previously valid is now invalid according to the new rules. For old clients that do not upgrade in a soft fork the previously valid is still valid and therefore they will just ignore the fact that under the new rules it is invalid, they will just consider it valid and continue. The new clients may see that change and say “Oh no that is no longer valid by these new rules. It is now invalid.” You don’t have to upgrade, you find your node doing less strict validation than everybody else. Here is a specific example where this happened. With BIP 66, it is now about a year and a half old. It was a change introduced in order to tighten the rules by which signatures were validated, specifically the DER encoding of ECDSA signatures in Bitcoin. Specifically to prevent certain classes of transaction malleability that were introduced in signatures. A signature that was valid under the previous rules may have been invalid under the new rules. Essentially that was a tightening of the rules, not a loosening of the rules. It was implemented as a soft fork. It actually did cause miners to fork themselves out of consensus. When they said they would enforce that new rule and then didn’t, they created and mined blocks that contained signatures that the rest of the network considered invalid because the rules had tightened and they weren’t enforcing them fully. Six blocks got mined in violation of the soft fork rules and those miners found themselves losing those six blocks because the rest of the network rejected them. Miners can find themselves forked off of consensus if they do not upgrade to more strictly validate the rules of a soft fork because they are staking their energy behind validating to the most current rules.
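
To make the hard fork versus soft fork distinction concrete, here is a minimal toy sketch in Rust. The types and rules are hypothetical, not Bitcoin Core’s actual validation code; it only illustrates that a soft fork such as BIP 66 shrinks the set of things that pass validation, while a hard fork enlarges it.

```rust
// Toy illustration: validity as a predicate. A soft fork shrinks the set of
// objects that pass; a hard fork enlarges it. Nothing here is real consensus code.
struct Tx { strictly_der_encoded: bool, uses_new_opcode: bool }

fn valid_old(tx: &Tx) -> bool { !tx.uses_new_opcode }                 // old rules
fn valid_soft_fork(tx: &Tx) -> bool { valid_old(tx) && tx.strictly_der_encoded } // tightened, BIP 66 style
fn valid_hard_fork(_tx: &Tx) -> bool { true }                          // loosened: new opcode now allowed

fn main() {
    let loose_sig = Tx { strictly_der_encoded: false, uses_new_opcode: false };
    // Old nodes still accept it; soft-forked nodes reject it. Miners who said
    // they would enforce the new rule but did not can mine blocks the rest of
    // the network rejects, which is how blocks get lost after a soft fork.
    assert!(valid_old(&loose_sig) && !valid_soft_fork(&loose_sig));

    let new_feature = Tx { strictly_der_encoded: true, uses_new_opcode: true };
    // Hard fork: valid only under the new, wider rules; un-upgraded nodes fork off.
    assert!(!valid_old(&new_feature) && valid_hard_fork(&new_feature));
}
```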

Chain splits are not hard forks

AL: I feel like we have a Bitcoin and Bitcoin problem here so let’s just add that third definition that I think is missing here. We have a soft fork which is a voluntary restriction of the current rules which makes it so that there is a new sort of consensus but it doesn’t add anything that wasn’t already there. There is a hard fork which adds something that wasn’t already there and so can be used to make larger more systemic changes or at least in a more dramatic way. But then there is this other thing that is called a chain fork or a chain split. I think that is what you are talking about now Andreas. Even in a soft fork situation if the consensus diverges, if part of the network believes one thing and part of the network believes another, either because of non-upgraded software or whatever, then you can still have a chain fork which causes what many people think of as a hard fork. I think when a lot of people think hard fork they are thinking chain split.

AA: Right and that is not what a hard fork means. Forks in general occur continuously on an almost daily basis. On average once a day the Bitcoin chain forks and it forks because consensus is an eventual settlement system that converges on a common chain after a certain number of blocks. There is a reason why miners can’t collect their reward for 100 blocks, the coinbase is not redeemable for 100 blocks under the consensus rules. The reason for that is because there may be divergence in the consensus on what is the valid main chain which is temporary and can be caused by totally natural causes. Let me give you an example. Say during the production of blocks there are two miners who are distant from each other in terms of network topology, I’ll simplify things and assume the network topology matches geography. You’ve got a miner in Northern Canada and a miner in Australia both mining for blocks. They both find a valid block, totally valid by everybody’s consensus rules, everybody agrees. They find proof of work for that valid block at the same time, perhaps slightly different blocks, the transactions might be in a different order, they almost certainly will be. They mine these two blocks simultaneously and then they start telling everyone they know. Imagine this block from Australia is now propagating out, radiating out. It goes from Australia to Indonesia, from Indonesia to India, and it is gradually spreading. Meanwhile there is another valid block spreading from Northern Canada going in the opposite direction. It is going to North America, Central America, South America, spreading towards Europe. Somewhere in Europe the two blocks meet each other. At that point half the network will see the Canadian block first, half the network will see the Australian block first and they will have a different picture on what the current longest cumulative work valid chain is. That is a fork. It happens about once a day, slightly less than once a day on average. Four or five times a week. It gets resolved within one block. The way it gets resolved within one block is half the network sees the Canadian block, half the network sees the Australian block, the half with the Canadian block will assume that is the tip of the longest valid chain and will build on top of that and start building a new block. The other half of the network will start building on top of the Australian block. At this point the chances of them simultaneously resolving the proof of work again are now infinitesimal, much lower than the chance of what just happened happening again. It is more likely that one of the two sides, one of the two perspectives, will produce a new block with proof of work first with a significant time difference. That block will be able to propagate across the entire global network before another block is found. Then the side that thought the Canadian block was the winner builds a child on top of that and propagates it to the whole network. At this point the whole network now sees the long chain as having the Canadian block as the parent and the Australian block has now been demoted to a minority chain. All of the transactions that are in the Australian block that weren’t in the Canadian block get re-queued in the mempool and get mined in the next block. Everything gets redone. The Australian block, valid as it was, lost the race. In retrospect that miner will not collect that reward because their block did not eventually become part of the longest difficulty valid chain. This happens every single week in Bitcoin.
Nobody even notices it. It is a normal consequence of the synchronization across latency. The problem is when these divergences do not re-converge. The reason they do not re-converge is because it is not about a difference in perspective about which is the longest difficulty chain, but it is a difference in perspective about which is the longest difficulty valid chain. That valid part is the consensus rules. If there is a divergence on the interpretation of the consensus rules or an inability to validate with the current consensus rules, as we saw with the bug in the database software back in April of 2013, then you have a persistent divergence. That persistent divergence doesn’t get resolved easily. One side may be able to validate both chains and then you have the possibility of what is called a wipe out where eventually what was a minority chain becomes a majority chain and then wipes out a very long sequence of blocks. It may be that the consensus rules can only go in one way, so only one chain can wipe out the other. That is the case with the user activated soft fork.
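
As a rough illustration of how these natural forks resolve, here is a toy Rust sketch with made-up types and numbers rather than anything from a real node: each node follows the valid chain with the most cumulative work, and transactions that were only in the losing block go back into the mempool.

```rust
use std::collections::HashSet;

#[derive(Clone)]
struct Block { work: u64, txs: Vec<&'static str> }

#[derive(Clone)]
struct Chain { blocks: Vec<Block> }

impl Chain {
    fn cumulative_work(&self) -> u64 { self.blocks.iter().map(|b| b.work).sum() }
}

// `is_valid` encodes each node's consensus rules; two nodes with different rules
// can pick different winners, which is exactly a persistent divergence.
fn best_chain<'a>(tips: &'a [Chain], is_valid: impl Fn(&Chain) -> bool) -> Option<&'a Chain> {
    tips.iter().filter(|c| is_valid(*c)).max_by_key(|c| c.cumulative_work())
}

fn main() {
    let canadian = Chain { blocks: vec![Block { work: 10, txs: vec!["a", "b"] },
                                        Block { work: 10, txs: vec!["c"] }] };
    let australian = Chain { blocks: vec![Block { work: 10, txs: vec!["a", "d"] }] };

    let tips = [canadian.clone(), australian.clone()];
    let winner = best_chain(&tips, |_| true).unwrap();
    assert_eq!(winner.cumulative_work(), 20); // the two-block side wins

    // Transactions only in the losing block go back to the mempool.
    let winning_txs: HashSet<&str> = winner.blocks.iter().flat_map(|b| b.txs.iter().copied()).collect();
    let requeued: Vec<&str> = australian.blocks.iter().flat_map(|b| b.txs.iter().copied())
        .filter(|tx| !winning_txs.contains(tx)).collect();
    assert_eq!(requeued, vec!["d"]);
}
```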

The analogy of a bingo hall for block discovery

AL: So let me mangle one of our old analogies here. We have often talked about Bitcoin and the process of mining like a game of bingo where essentially everybody is playing against themselves but there ultimately can only be one winner per contest. It sounds like the analogous scenario to two miners finding a block is two people playing bingo both find bingo at the same time, the person who gets it up to the front and gives it to the bingo caller or what have you, is the person who actually wins that round. There can only be one. It is a bit of a race there, that’s the propagation where both blocks are going all over the network trying to get majority before the other guy because that makes it more likely.

AA: But with the difference that there is no caller up front, there is no leader of the game. In fact what happens is everybody who is playing bingo at all of the different bingo tables is listening around them. Some people who are closer to one of the bingo winners will hear them shout “Bingo”. They will assume they are the winner for some period of time. Whereas the people who are sitting closer to the other person will hear them say “Bingo” first. If you stopped everything and did a poll saying “Who do you think won the last game?” you’d get a different result on the two sides of the room because of the way the sound of their shouting “Bingo” propagated across the room.

AL: We can also imagine that the rules are also propagating by the same kind of whisper network. People are saying “This has just become invalid.” That is what a soft fork is. People who don’t hear about that or don’t follow those rules still go up when they get a “Bingo” that complies with the old set of rules but it doesn’t comply with the new set of rules. This causes the confusion we are talking about here.

A soft fork to reduce block size scenario

AA: Right. Let’s use a different example for a soft fork. Right now there is a block size limit that requires all blocks be less than or equal to 1 megabyte in size. Consider a consensus rule change that said “Actually we want them to be less than 750K, three quarters of a megabyte from now on. We are going to squeeze them smaller.” That is a soft fork change. The reason it is a soft fork change is because things that were previously valid are now invalid. An 800K block that was previously valid under the 1MB limit is no longer valid under the 750K limit. If you are validating and you are validating by the old rules the new 750K blocks still look fine to you because they are less than a MB, you are cool with them. The chain that is producing 750K blocks happily chugs along and everyone accepts their blocks because they are fine. They are fine by both the old and the new rules. But if somebody produces an 800K block a big chunk of the network, perhaps the majority, will reject it because under the new rules it is no longer valid. When they reject it, if the part of the network that is still following the old rules continues building on that you are going to have a divergence again. That divergence is that some people will assume the 800K block is perfectly fine because they haven’t upgraded and they will continue building on top of that. They’ll continue, they’ll continue and they’ll continue. Now it becomes a race. Which side has the majority of the hash power? The problem is that the side that has the 1MB rule sees both chains as valid but the side that has the 750K rule sees only one of them as valid and therefore will never accept the other side. At some point if that goes above 50 percent it is going to re-converge as the longest chain and wipe out the other side, the side that had the 800K block. The opposite can’t happen. The side that has the 800K block can never pull over the new small blockers because they refuse to see that as valid. One side can wipe out the other but not vice versa. Is this making sense?
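
The one-way nature of that wipeout can be sketched in a few lines of hypothetical Rust (simplified toy rules, not real consensus code): old-rule nodes consider both chains valid and simply follow whichever has more work, while new-rule nodes never consider the 800K chain at all.

```rust
// Toy sketch of why a tightening soft fork can only be wiped out in one direction.
struct ChainTip { cumulative_work: u64, largest_block: usize }

fn valid_old(tip: &ChainTip) -> bool { tip.largest_block <= 1_000_000 } // 1MB rule
fn valid_new(tip: &ChainTip) -> bool { tip.largest_block <= 750_000 }   // tightened rule

// Each node follows the most-work tip among the tips it considers valid.
fn follow<'a>(tips: &'a [ChainTip], valid: fn(&ChainTip) -> bool) -> Option<&'a ChainTip> {
    tips.iter().filter(|t| valid(*t)).max_by_key(|t| t.cumulative_work)
}

fn main() {
    // At first the 800K chain has more work...
    let tips = [ChainTip { cumulative_work: 60, largest_block: 800_000 },
                ChainTip { cumulative_work: 55, largest_block: 740_000 }];
    assert_eq!(follow(&tips, valid_old).unwrap().largest_block, 800_000); // old nodes follow it
    assert_eq!(follow(&tips, valid_new).unwrap().largest_block, 740_000); // new nodes never do

    // ...but once the tightened chain pulls ahead, old nodes reorg onto it and
    // the 800K chain is wiped out. The reverse can never happen.
    let tips = [ChainTip { cumulative_work: 60, largest_block: 800_000 },
                ChainTip { cumulative_work: 70, largest_block: 740_000 }];
    assert_eq!(follow(&tips, valid_old).unwrap().largest_block, 740_000);
}
```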

BIP 148 user activated soft fork

AL: It does. That’s actually the opposite of the situation we are talking about here with the user activated soft fork right?

AA: Actually it is not the opposite, it is the exact same thing. Let’s talk about UASF BIP 148 very specifically. I will summarize as best I can and please read the BIP, a summary is not an accurate representation of this. The bottom line is that on August 1st 2017 nodes that follow UASF BIP 148 and validate based on it, not just signal, but actually validate based on the rules introduced by UASF BIP 148, have a new rule. That new rule is that after August 1st it is not acceptable to have blocks that are not signaling SegWit. They say “We will only accept blocks created by miners that signal acceptance of SegWit.” They don’t have to be SegWit blocks, they don’t have to activate SegWit, they simply have to signal it. So far so good? What that means is that if you had a node and you were running UASF BIP 148 and on August 1st a miner produces a block that is not signaling SegWit you take that block when it comes to you and you go “No it is rejected” and you refuse to propagate it, you refuse to accept it as valid. Everybody who is not working by these rules will propagate it. How far does it go? That depends. It depends on how many miners are participating in this, it depends on how many exchanges are participating in this, it depends on how many nodes are running this new set of rules. If a majority of the nodes are running this, especially a majority of the economic nodes, exchanges, merchants, retailers, wallets etc then miners will face a rather difficult problem which is they are essentially facing an economic blockade. They can mine blocks without signaling SegWit but if those blocks are rejected by the exchanges then 100 blocks later they can’t essentially cash out. They cannot exchange their earned Bitcoin rewards for fiat to pay their electricity bills because those blocks don’t exist on the exchanges’ blockchains. The exchanges are not recognizing them as valid blocks and therefore they are on a different chain. At that point it is presumably easier for a miner to simply signal SegWit and that way they know their mining rewards are safe. On the other hand if only a minority do this, or if it is mostly nodes that are not engaged in economic activity and there are no miners supporting this, then the people who are doing this and invalidating these blocks on their own blockchain will find themselves on a minority chain with a minority of the hash power which only they and a minority of the nodes are following, which doesn’t have the majority of the transactions and economic activity on it, it doesn’t have the majority of the hash power, it is slower to confirm, it has fewer transactions and it potentially has no blocks or only empty blocks. Those are the two possible scenarios there. Then it gets complicated because of course a third scenario is that miners may decide to fight. If they decide to fight what they could do is they can dedicate some of their hashing power to purposefully mining empty blocks on the UASF chain in order to deny service. It is a denial of service attack. If they mine empty blocks no transactions are being mined on that chain. In order to do that they then have to essentially split their hash power between maintaining the supremacy of the “We won’t do SegWit” chain and putting some hash power into beating down the UASF chain by denial of service, mining empty blocks.
That is a dangerous game because if the UASF chain takes over it will wipe out the “We’re not doing SegWit” chain whereas in order to really stop the UASF chain you have to do it through attrition. People have to give up because they’re not getting transaction throughput and they are not getting economic activity on it but it will never get wiped out. The nodes that are following that have effectively created an altcoin at that point if it doesn’t have….
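
For readers who want to see the shape of the BIP 148 rule itself, here is a hedged sketch, not the actual BIP 148 patch: during the enforcement window a UASF node adds one extra check on top of all the existing rules, rejecting blocks whose version field does not signal the SegWit deployment bit (bit 1 under BIP 9). The real BIP also has an end time, omitted here for brevity.

```rust
// Hedged sketch of the extra BIP 148 rule (not the actual implementation).
const SEGWIT_BIT: u32 = 1;              // BIP 141's deployment bit
const UASF_START: u64 = 1_501_545_600;  // 2017-08-01 00:00 UTC, Unix seconds

struct BlockHeader { version: u32, time: u64 }

fn signals_segwit(h: &BlockHeader) -> bool {
    // BIP 9 style signaling: top three version bits are 001, deployment bit set.
    (h.version >> 29) == 0b001 && (h.version & (1 << SEGWIT_BIT)) != 0
}

// The UASF rule is purely additive on top of every existing validity check.
fn acceptable_to_uasf_node(h: &BlockHeader, passes_existing_rules: bool) -> bool {
    passes_existing_rules && (h.time < UASF_START || signals_segwit(h))
}

fn main() {
    let non_signaling = BlockHeader { version: 0x2000_0000, time: 1_501_600_000 };
    let signaling = BlockHeader { version: 0x2000_0000 | (1 << SEGWIT_BIT), time: 1_501_600_000 };
    // Everyone else still accepts the first block; only UASF nodes reject it,
    // which is what creates the economic blockade described above.
    assert!(!acceptable_to_uasf_node(&non_signaling, true));
    assert!(acceptable_to_uasf_node(&signaling, true));
}
```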

AL: Right. That brings us back round to the beginning of this conversation which was the perceived hypocrisy that I saw in this. We have now talked about hard fork versus soft fork but one of the arguments that I’ve been making and getting some pushback on is that necessarily a user activated soft fork can only succeed by scaring people into action. Because if it doesn’t scare people into action then it happens exactly like you said. You wind up with a situation where you can have your own coin but it is effectively an altcoin because you weren’t able to get the majority of Bitcoin economic activity and these other things to go along with you. But it is set up in such a way that it is kind of like a booby trap. If that’s the situation that happens, it starts with less than 50 percent but then goes to over 50 percent, that’s just bad right? It seems to me that there is a fundamental difference in this approach to doing things compared to anything else I’ve seen. This is more like an attack or an actual threat on non-participation where everything else has been “We want you to support this because this is the right choice. Please opt in.” This is like “If you don’t opt in you have a major chance of being injured by this.”

AA: I think it would be accurate to call it a boycott, possibly an economic blockade. An economic blockade, if you park your ship in the other party’s harbor, you don’t let any other ships go in and out, that’s an economic blockade.

AL: And the starvation is collateral damage.

AA: If you park your ship in the middle of the Pacific Ocean and you declare an economic blockade nobody gives a damn. For an economic blockade to matter you have to be able to impose it on the hubs of economic activity or they have to be working with you. If exchanges, merchants, wallets don’t in a major way support this on August 1st then you’ve parked your ship in the middle of the Pacific Ocean and you’re going “I’m going to blockade y’all” and no one around you for 10,000 miles hears your cry of defiance because nobody gives a damn. At that point of course you are not doing any damage to anyone other than yourself. That is a fundamental difference. A hard fork with 51 percent consensus or more forces a change on the entire network. You might call it an attack, you might simply call it a 51 percent consensus rule change. Most people consider it reckless to do that with anything other than overwhelming consensus. It stands a pretty good chance of causing a rift in the network. UASF also has a chance to cause a rift in the network, especially if miners decide to attack the minority chain which would be another escalation. It is effectively a game of chicken. I saw this analogy on Reddit so attribution there to whoever said it. The only way to win a game of chicken is to very visibly take the steering wheel off and throw it out the window. At that point your opponent knows that you have no ability to swerve. There is a head on collision possibility there always. This is what it is. The bottom line to all of this is the issue of power struggles and the fact that this is where we are now. It is not about scaling really, it is about a power struggle.

Constituencies of consensus

AA: I’ve said this many times before. There are five constituencies of consensus. Miners, developers, users with wallets, users with exchanges or economic actors with exchanges and economic actors with merchant activity. So merchants, exchanges, wallets, miners and devs. That’s the five constituencies of consensus. They are not hard lines, there is overlap between any of those. Think of it as a Venn diagram. Obviously miners who spend their money at exchanges are also economic actors. Of course users can be miners, miners can be users, they can have wallets, miners might be merchants etc etc. Developers may come from any and all of these. But for the most part think of these as the five constituencies. You cannot change the rules without a majority. Think of it as a 3-of-5. You need 3 of 5 at least. In order to do it in a way that is unlikely to cause disruption you need a 4 of 5 or 5 of 5 agreement. That’s how the consensus rules change. The miners have in the past suggested changing the rules by adopting software by 51 percent or more of the hash power under Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited and various other attempts to change the consensus rules to increase the block size by a hard fork with a 51 percent or more hash rate. That is absolutely going to work effectively and cleanly as long as they have the other four constituencies with them. If they do it is completely seamless and harmless but you can never get full consensus on that. You haven’t seen full consensus emerge on that solution. At the moment full consensus has not emerged on any change to the consensus rules. There is always a minority, somewhere between 20 percent at least and up to 50 percent that doesn’t want one of these solutions. Until that is overcome a lot of this is really not going to work. How much disruption it causes, we’ll see. Maybe a lot, maybe none at all. It very much depends on who plays along with UASF. We might see a big change by August 1st, there are a bunch of other proposals at the same time that are being developed including SegWit2x which is the proposal that reached agreement among certain participants in the ecosystem in New York during Consensus (conference). UASF is just one gambit. What is interesting about UASF is that it is the first time you’ve seen the user base, wallets and in some cases some merchants really claim their power and say “You know what? We are part of this too. We have power.” As is the case with all of the other constituencies of consensus, the claim is not “We also have power.” It is “We have ultimate power.” This is what miners say, this is what some developers say, this is what some exchanges have said even. Of course all of those are false. No one has the ultimate power. That is the tenuous balance that is at the heart of a decentralized system like Bitcoin, that does not allow anyone to take over. That of course is the most important feature of this system. It makes for a very nasty debate and some very difficult scaling decisions but it also makes for a very resilient system that deters attackers and cannot be moulded to the whims of someone with a lot of money and propaganda or a popular movement among users that doesn’t have overwhelming consensus or miners who are pursuing their own self interest or developer groups that are doing their own thing. None of those groups have power ultimately unless they agree with a bigger constituency.

AL: It sounds like a user activated soft fork, or anything else that is a fork of the network, is necessarily going to have to do a hard fork in order to lower the mining difficulty if it doesn’t have 70 percent of the network.

AA: If a soft fork turns into a hard fork then it is a completely different thing.

AL: Assume for a second that we wind up in a situation where there is 40 percent support for UASF. In that situation if there is 40 percent hashing power on that side then that means they either have blocks that come substantially slower than they do now or they hard fork in order to fix it. But by hard forking to fix it again you negate the argument against a hard fork.

AA: That is the case for both sides of the chain at that point. The real consideration for a user activated soft fork driven by the economic actors of the network is whether it really is driven by the economic actors of the network. If it is driven by the economic actors of the network and there is overwhelming support among exchanges and merchants then the miners really don’t have a choice. They can’t mine a chain that they can’t spend.

AL: It is just the redundancy we are talking about, nobody can force a sea change here. To even attempt it is very risky as we’ve seen reputationally certainly. Most of the people who have attempted to do anything like that have much lower profiles and much less prestige in the space than they did before they tried to do that. And also technically it is a risk to your users and in the case of UASF to other users as well.

Innovation on upgrade mechanisms

AA: I think we are going to see resolution of the scaling debate, let me add that as a positive and optimistic note. And the reason we are going to see resolution of the scaling debate is rather simple. Over the last two years during this scaling debate we have seen the emergence of more than a dozen proposals. We have seen an enormous amount of research and development and resources poured into finding technical means by which to clarify, secure, make these solutions better, less disruptive, understand the implications of the different mechanisms for upgrading. The state of the art on upgrading a decentralized consensus algorithm has advanced tremendously. Two years ago we didn’t even have the term hard fork, now we are talking about four different categories of deliberate upgrade forks, miner activated, user activated, soft fork, hard fork and all the combinations. And we are in fact discovering there might be more nuances within that. SegWit didn’t exist two years ago, a bit more than two years ago. SegWit as a soft fork was an invention specifically designed to cause a less disruptive approach towards the scaling debate. That is currently being signaled under BIP 9. BIP 9 itself as a signaling mechanism for miner activated soft forks which were not signaled in that way before, is a new one. Spoonnet, Spoonnet2 and potentially Spoonnet3 which are a series of proposals by Johnson Lau that give us a reformatting of the block header in order to solve several different problems, not just the block size but also to make hard forks much cleaner and less disruptive by ensuring that there is replay protection, so transactions cannot be replayed from one side of the fork to the other, as well as wipeout protection so that you can’t have a very, very long chain develop that then gets wiped out in a reorganization. Those developments did not exist two years ago. We are now much, much more advanced in our understanding, in our research and in our development to do these things in a live network and upgrade them. The proposals that come out are more and more complicated, in some cases that is counterproductive, but they are also more and more sophisticated. You are seeing that people are actively trying to find ways to create solutions both political which I don’t think is the right approach, but also technical solutions to this debate that try to resolve the underlying technical issues in a way that is the least disruptive possible. And I am confident that eventually we are going to see convergence on a solution that is broadly acceptable, that offers a road forward, that is least disruptive, that the community and every one of the five constituencies get behind. And we will see pretty much all of the above. We’ll see an activation of something that is SegWit or very similar to Segregated Witness for transaction malleability and witness scaling and all the other things that SegWit does. We are going to see a base block size increase in addition to SegWit’s block weight increase eventually. We are going to see a reformatting of the block header in order to introduce new features such as extra nonces for miners, a more flexible header format that improves a lot of other issues. We might see a change in the transaction format. We are going to see Schnorr signatures, we are going to see signature aggregation techniques, we are going to see UTXO sets and potentially things like MMR, Merkle Mountain Ranges and other proposals for creating fraud proofs and verifiable UTXO sets and optimizations like that. 
All of the above is the scaling solution. The question that remains is not what do we do, the question that remains is in what sequence and how do we do it in the safest, least disruptive way to the broader ecosystem. That question has not been resolved and it is a technical issue. It is overshadowed by the political struggle and the power struggle but at the bottom line this is a matter of science. I am confident that we will see a road forward.

Prospect of a community split and an altcoin

AL: Since we first started seeing contention in the cryptocurrency space, specifically in the Bitcoin community, I don’t even remember what the first issues were but there have been issues for years at this point. I have long thought that the necessary conclusion of this process is that you have two Bitcoins for the two different core constituencies of Bitcoin. You are talking about the different classes of users, in terms of what role they play in the ecosystem, I’m thinking more about this from a what do these people actually want from Bitcoin perspective. The problem with where we are going is as you’ve said there are the technical concerns but a lot of this is just about who gets to decide and who gets to dictate the way Bitcoin should develop. If we were looking at something that was monolithic, if we are looking at something that wasn’t open source, that wasn’t Bitcoin, then I would agree with you. I would say eventually because these problems need to be solved things will come into alignment. But the reality is that our technical solutions as you said are better than they ever have been. Our processes for figuring out the science of this thing and the amount of people participating and the amount of money being poured in have never been better. And yet the contentious issues have never been more contentious, even when everybody agrees that Segregated Witness should go in. The reason why it hasn’t gone in is because of that power dynamic you were talking about. The one side that is holding it up basically believes this is their leverage and if they give up that leverage that’s pretty much the end of it. The other side doesn’t want to give because of technical arguments and also because they are winning. If nothing changes then the status quo remains and since that is largely more supportive of that position than it is of the people who are blocking it you have this loggerheads. That’s where I get back to. I don’t disagree with anything you said but I think the human element here is a much larger piece of the puzzle and it is very difficult to see because on the one side of the argument which is in favor of small blocks generally many of those channels are frankly pretty repressive and don’t really have conversations about this stuff in a way that lets you see both sides of the argument. I’m not saying the other side is better and I know they’ll yell at me for that too because they yell at everybody and everybody yells at everybody but that’s the point. The stakes are really, really high here and on the one hand there are technical clear paths forward but on the other hand a lot of this is “We don’t really know and so we have to pick and we have to experiment.” You can look around at experiments and they are not very applicable but you can see that people are doing larger blocks and you can see that people are attempting to scale in different ways. That is what I get back to. Contention or not I expect to see two blockchains that both bear the name Bitcoin, that take part of that community and basically don’t look back at the other side. If it is this issue that becomes it then what we’ll see is a Bitcoin that has a small block size, 1MB maybe even gets smaller, that’s Bitcoin Settlement. On the other side you’ll have Bitcoin P2P or Bitcoin Cash or whatever you want to call it and that’ll be something that will have much larger blocks, it will mostly use SPV mining, it will be less secured but it’ll be possible for individuals to transact on it. I don’t see that as a bad thing. 
It might mean that the value of each of them goes down but I think if anything Ethereum is showing with Ethereum and Ethereum Classic that it doesn’t really matter. It is actually not a net negative for the prices and sure maybe the price of Ethereum might be 500 dollars now versus 250 or wherever it is at the moment. But it is still a net positive regardless of how many of these things you have if there is utility and communities behind each one. With the emotions tied up in this and the different use cases, on the one hand you’ve got the settlement, on the other hand you’ve got all the players like Shapeshift and BitPay who want larger blocks because they need to fit more transactions into the blockchain. Is that a bad outcome and do you expect to see that happen, do you not expect to see that happen? That’s my thesis, been for a while, still feeling good about it, how do you feel about it?

AA: I would hope that doesn’t happen. Part of the reason I would hope that doesn’t happen is because I’m not confident that if that happens it will happen in any way that is not disruptive. The first question I would have for any proposal to create such a split is really simple. It has to do with the fundamental technology of how that split is engineered and it has to do with whether there is replay and wipeout protection between the two chains. You can get a split that protects both chains from each other and allows it to be somewhat clean. One of the problems we saw in Ethereum is that there was no replay protection and there was no wipeout protection. That made it a damaging split for the ecosystem and caused people to lose money. Several exchanges lost money because of that, because of the lack of replay protection. For an economy like Bitcoin and with the dynamics of the difficulty retargeting in Bitcoin that would be much messier. I would hope to see if something like that did happen that it was organized and engineered in a responsible way with at minimum replay and wipeout protection built in. I think it is absolutely irresponsible not to address that since we know the damage that something like that does.

AL: I totally agree with you about the unplanned thing. The unplanned thing is definitely the part that makes anything like that dangerous, anything that is contentious, that is why UASF is also dangerous. It is not because it is inherently dangerous, if it has support there is no problem. It is only a problem if you go into it and neither side is really preparing for a situation where it winds up being contentious. Just to quote John F Kennedy, “Those who make peaceful revolution impossible will make violent revolution inevitable.” That continues to be the trajectory that I see. The attempts to have this peaceful conversation in a way where both sides are able to compromise, let’s not even say willing, able to compromise in order to give the other side what they actually want while not giving up what they themselves need, I don’t see that as being possible given the differences in what they want from the thing. That’s my concern for something that is unplanned, that this just continues to get worse and worse and worse. We continue to have problems, we don’t have solutions for them because we can’t get consensus because of this disconnect. That’s where you wind up with a violent break and both sides struggle to fix their stuff on either of their sides. If we could get past that and say “Maybe it is not that bad if there were two Bitcoins, one that is for scaling onchain and one that’s for settlement onchain at the highest security possible per transaction.” It seems like that would be the path forward if we get to pick but it seems like the most likely path forward is that we just keep banging heads against each other and eventually it happens. Both sides say “Screw you, I’m just going to do my thing and I will protect myself against you however I need to.”

AA: I hope we don’t have that outcome. There are a couple of reasons why I hope we don’t have that outcome. Effectively splitting Bitcoin in two is creating one or depending on your perspective possibly two altcoins. There are 1500+ alternative and varied cryptocurrencies and blockchains in this ecosystem. As you say some of them have different perspectives on scaling and governance. I would prefer to see some of that pressure go there than see the Bitcoin chain split in two. There is another problem with splitting the Bitcoin chain in two which has to do with the monetary policy. There will only ever be 21 million Bitcoin. In a two chain situation there will only ever be 42 million Bitcoin. That is a problem and I hope we don’t see that happen. But if it happens I think Bitcoin will survive it. I think inevitably there will be some very heated debates about which one is the real Bitcoin which really doesn’t matter at that point. There will always be hold outs who will stubbornly refuse to follow upgrades. The question is how big is that hold out group? Is it going to be 5 percent, is it going to be 20 percent or is it going to be 50 percent? We’ll see. I’m optimistic. I hope we can find a solution that doesn’t involve a divorce. But if that’s what it takes that can happen too, I just hope it doesn’t.

AL: I feel like a lot of the last couple of years, for people who are looking at this technology as a solution rather than an elegant technical project, has been a process of repeatedly killing all the sacred cows that we know about. That happens on the code side too because on the code side as you said everything is being upgraded. What upgraded means is we’re making it better. We’re taking out the old thing and we are improving it so that it becomes more efficient and better to use for all these different reasons. The one bubble that I feel like we’ve been in for 4, 5 years, I think I started talking about this pretty early on is that Bitcoin is “the one”. Bitcoin is what cryptocurrency is, the pinnacle of cryptocurrency. We’ve talked about this lots of time before, how Bitcoin may fail but it might not even necessarily fail. You’re talking about the 42 million versus 21 million, that’s what I feel like there. How is that any different than Bitcoin not working any longer for cross border settlement because the fees have gone up? That was a sacred cow too that we took a lot of care of and a lot of pride in. It didn’t really matter at the end of the day because it turns out that wasn’t what Bitcoin was. I just don’t understand how we get to pick.

AA: To me it is very easy to determine which ones we pick and which ones we don’t. The current fee situation restricts the ability to use Bitcoin for several purposes that were previously possible at one scale and are no longer possible at another scale. But in my mind the important difference is that I consider this fee environment that we have today temporary. I don’t think this is the fee environment that we will be in the future. I look back at other technologies and how they developed over time. I remember a time when email was unusable because of spam. When attachments couldn’t be sent because DSL and modems didn’t have enough capacity, when Usenet was flooded by image attachments and couldn’t scale, when entire… of Usenet stopped getting used. This is the old days of course. In most cases these were temporary problems when the capacity of the network was outstripped by the demand and didn’t scale fast enough and then it did. And when it did it opened up new applications again that were previously unavailable and things reverted back. I think the fee situation is temporary, I think a split in Bitcoin is not. It is permanent and it creates a precedent and an irreversible collapse in the resilience and governance of the system that will damage Bitcoin. Not fatally, absolutely not. And I don’t believe that Bitcoin is “the one”. It is not a maximalist position that leads me to do this. It is just that I think that resilience is a feature of the system. If the system is able to fall to the whims of a small group, defining small whichever way you want then it is not doing what it is supposed to do. There are plenty of other cryptocurrencies that are more centralized that have easier governance by being more open to that. I don’t think that is what Bitcoin should be, I think it should be the resilient and robust system but that is my personal opinion and of course the market decides what it is going to be. We are going to find out, that’s part of being on this journey. We’ll see how it works but I think that is the big difference between the fee situation now versus a split in the governance over one scaling decision and really over a power struggle over who gets to make the rules.

Antoine Riard

Date: October 20, 2019

Transcript By: Michael Folkson

Tags: Lightning

Category: Conference

Media: https://www.youtube.com/watch?v=q6a1On5pirk

Deploying rust-lightning in the wild

The Lightning Conference Day 2 (Stage 2)

Slides: https://github.com/ariard/talk-slides/blob/master/deploying-rust-lightning-in-the-wild.pdf

https://twitter.com/kanzure/status/1220722130897252352

Intro

Hi everyone, super happy to be here at the Lightning Conference. I’ve had an awesome weekend. Today I will talk about rust-lightning, a project I’ve been contributing to for a year and a half. To get started, please take photos of the slides, not of me. So the story of rust-lightning. The idea was to build something really flexible, something really modular. That was TheBlueMatt’s idea. It got started in the beginning of 2018. I started to contribute in September 2018. It is still ongoing work. It is built on top of the awesome rust-bitcoin ecosystem. We use the awesome andytoshi rust-bitcoin library.

Why Lightning?

So why Lightning? Why are we here? What do we want to build with Lightning? Do we want to reach Bitcoin promises of instant transactions, scaling to the masses, these types of hopes? Do we want to enable fancy financial contracts? Do we want to build streams of microtransactions? It is not really clear. When you are reading the Lightning white paper people have different views on how you can use Lightning and what you can use Lightning for. Why should you work on Lightning if you are a young developer? It is one of the widest, most uncharted territories. There are so many things to do, so many things to build, it is really exciting. There are still a lot of unknowns. We are building this network of pipes but we don’t know yet the how of the pipes. We don’t know what they will be used for, we don’t know where they will be used and by who. There is a lot of uncertainty. Right now it is single funded channels, really simple to understand. Tomorrow there are things like channel factories, multiparty channels… maybe splicing and a coinjoin transaction will open a set of channels. Maybe something like OP_SECURETHEBAG to do Lightning… There are a lot of efforts. So what are we going to send through these pipes? Are we going to send only HTLCs or more complex stuff like DLCs or a combination of DLCs, conditional payments? If you follow Lightning-dev there is an awesome ongoing conversation on payment points and what you can build thanks to that. Where? Are we going to deploy Lightning on the internet? There are a lot of ideas on how to use Lightning to fund mesh nets and this kind of stuff. Or it could be a device and you are going to pay for what you consume from a stream. Maybe hardware security modules if you are an exchange, you are going to deploy Lightning on some architecture without a broadband connection. Who is going to use our stuff? I think that is the biggest question to ask. You don’t have the same bandwidth if you live in New York or you live in South Africa or you live in Germany. People have different viewpoints on this, they have different resources. A basic consumer is not going to use Lightning the way a merchant is going to use Lightning, Lightning liquidity providers are going to set up infrastructure. There are a lot of open questions. Who? What? How? When? We can look at the history of software engineering and how it solves these issues. I believe in the UNIX philosophy of doing something similar, doing something modular and combining the building blocks.

rust-lightning

That is what I really like about rust-lightning. We are trying to build a modular library. It should be simple and you should be able to integrate it into your own stuff the way you want it. We have a real focus on testing and playing with fuzzers and burning a lot of CPU hours on fuzzing. It should be easy to plug in your own stuff. The c-lightning folks are building a lot of great plugins, it should be easy to integrate this kind of stuff, it is just an interface to write. And no dependencies. Since this summer we have no dependencies apart from rust-bitcoin and the rust-bitcoin ecosystem libraries. That’s really cool because tracking down what is going on inside your library and your third party libraries and your dependencies is a nightmare.

Anatomy of rust-lightning

A quick look at the anatomy of rust-lightning. The main component is ChannelManager. It is going to receive keys from the KeysManager and then be able to generate a channel. Every update you get in your channel you should pass it to the ManyChannelMonitor, that is what we call watchtowers in the mainstream Lightning language. Your ChannelMonitor should be connected to the chain and at the same time you should have a PeerHandler which sends blobs of encrypted data to the Lightning network. All of these components, if you don’t like what we have done with the peer handler you can write your own peer-to-peer stack. If you don’t need the Router because you are going to maybe only do channels with a pre-set of people you can take out the Router. With the ManyChannelMonitor we have a default implementation for the backend of watchtowers, we don’t have the rest of the integration, you can rewrite yours. Really modular. Let’s go through every interface.
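
As a rough picture of the modularity being described, here is a heavily simplified, hypothetical Rust sketch. The trait names and signatures are invented for illustration and are not the real rust-lightning API; the point is only that each piece sits behind an interface you can replace with your own implementation.

```rust
use std::collections::HashMap;

// Invented traits standing in for the interfaces described above.
trait KeysInterface { fn channel_secret(&self, channel_id: u64) -> u64; }
trait ChainWatcher { fn watch_outpoint(&mut self, txid: u64, vout: u32); }
trait ChannelMonitor { fn record_update(&mut self, channel_id: u64, state: u64); }

struct SeedKeys { seed: u64 }
impl KeysInterface for SeedKeys {
    // Toy derivation only; a real KeysManager derives proper channel keys.
    fn channel_secret(&self, channel_id: u64) -> u64 { self.seed ^ channel_id }
}

struct SimpleChain { watched: Vec<(u64, u32)> }
impl ChainWatcher for SimpleChain {
    fn watch_outpoint(&mut self, txid: u64, vout: u32) { self.watched.push((txid, vout)); }
}

struct SimpleMonitor { latest: HashMap<u64, u64> }
impl ChannelMonitor for SimpleMonitor {
    fn record_update(&mut self, channel_id: u64, state: u64) { self.latest.insert(channel_id, state); }
}

// The "channel manager" only talks to the traits, so any backend can be plugged in.
fn open_channel(keys: &impl KeysInterface, chain: &mut impl ChainWatcher,
                monitor: &mut impl ChannelMonitor, channel_id: u64, funding_txid: u64) {
    let _secret = keys.channel_secret(channel_id); // used to build commitment keys
    chain.watch_outpoint(funding_txid, 0);          // spot unilateral closes onchain
    monitor.record_update(channel_id, 0);           // state 0: the opening commitment
}

fn main() {
    let keys = SeedKeys { seed: 42 };
    let mut chain = SimpleChain { watched: vec![] };
    let mut monitor = SimpleMonitor { latest: HashMap::new() };
    open_channel(&keys, &mut chain, &mut monitor, 7, 0xdead_beef);
    assert_eq!(monitor.latest[&7], 0);
}
```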

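To make the shape of this architecture concrete, here is a minimal, hypothetical Rust sketch of how the pieces described above could fit together. The names mirror the talk (keys, watchtower, peer transport, router, ChannelManager) but the traits and signatures are simplified illustrations, not the actual rust-lightning API.

```rust
#![allow(dead_code)]
// Hypothetical, simplified sketch of the component layout described above.
// These traits do NOT match the real rust-lightning trait definitions.

/// Source of channel keys, backed by the user's existing wallet seed.
trait KeysProvider {
    fn channel_secret(&self, channel_index: u64) -> [u8; 32];
}

/// Persists every channel state update so a watchtower can enforce it
/// on chain later (the role the talk calls ManyChannelMonitor).
trait WatchtowerSink {
    fn persist_update(&mut self, channel_id: [u8; 32], update: Vec<u8>);
}

/// Sends encrypted Lightning messages to peers (the PeerHandler role).
trait PeerTransport {
    fn send(&mut self, peer_id: [u8; 33], encrypted_blob: Vec<u8>);
}

/// Answers "how do I reach this node?" from gossip (the Router role).
/// Optional: drop it if you only open channels with a fixed set of peers.
trait PathFinder {
    fn find_route(&self, destination: [u8; 33]) -> Option<Vec<[u8; 33]>>;
}

/// The central state machine that owns channels and drives the others.
struct ChannelManager<K: KeysProvider, W: WatchtowerSink, P: PeerTransport, R: PathFinder> {
    keys: K,
    watchtower: W,
    peers: P,
    router: Option<R>,
}

fn main() {
    println!("plug in your own KeysProvider, WatchtowerSink, PeerTransport and PathFinder");
}
```

The point of the shape is that each role sits behind its own interface, so any one of them can be replaced by your own implementation without touching the rest.
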
Anatomy of a LN node: ChannelManager

The ChannelManager is our own implementation of the abstract interface, the ChannelMessageHandler. The main component of every Lightning implementation is how you are going to dispatch the messages: opening a channel, updating a channel, closing a channel.

Anatomy of a LN node: KeysManager

We have the KeysManager. That should just be a stub of code to plug into your wallet. We don’t want you to be forced to use a Lightning wallet; you should come with your own wallet, you are already using it for onchain. You should just plug Lightning as a stack into your wallet. You are going to ask the user “Give me a secret and I’m going to do the Lightning derivation stuff for you.” If we’ve got outputs on the chain we’re just going to send events to your wallet: “Hey, you have outpoints waiting for you on the chain to be used in your later transactions.”

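As a rough illustration of that wallet boundary, the sketch below shows a wallet handing over a single secret, a toy derivation of per-channel keys from it, and an event type for telling the wallet about spendable onchain outputs. The struct names, the derivation and the event fields are all made up for illustration; they are not the real rust-lightning KeysManager interface.

```rust
// Hypothetical sketch of the wallet <-> KeysManager boundary: the wallet hands
// over one secret, the Lightning stack does its own derivation, and spendable
// on-chain outputs come back to the wallet as events.

struct KeysManager {
    node_seed: [u8; 32], // the single secret provided by the user's wallet
}

impl KeysManager {
    fn new(node_seed: [u8; 32]) -> Self {
        KeysManager { node_seed }
    }

    /// Derive per-channel key material from the wallet-provided seed.
    /// (Toy derivation for illustration only.)
    fn derive_channel_key(&self, channel_index: u8) -> [u8; 32] {
        let mut key = self.node_seed;
        key[31] ^= channel_index;
        key
    }
}

/// Event the Lightning stack emits back to the wallet when a closing or
/// justice transaction leaves an output the wallet can now spend.
struct SpendableOutput {
    txid: [u8; 32],
    vout: u32,
    value_sats: u64,
}

fn main() {
    let keys = KeysManager::new([7u8; 32]);
    let _channel_key = keys.derive_channel_key(0);
    let event = SpendableOutput { txid: [0u8; 32], vout: 1, value_sats: 50_000 };
    println!("wallet notified: {} sats waiting at output {} of txid starting {:02x}",
             event.value_sats, event.vout, event.txid[0]);
}
```
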
Anatomy of a LN node: PeerHandler

The PeerHandler, like I said, is really important; it is going to drive your whole implementation if you want to implement this stuff. You should react to network events and update the state of your Lightning node based on the messages.

Anatomy of a LN node: ManyChannelMonitor

Then you have the ManyChannelMonitor, something I’ve worked a lot on. Getting the punishment stuff right on Lightning is maybe one of the hardest parts. That is super easy to integrate with a ChannelManager or… You should just pass updates from the offchain stuff to the onchain stuff. If anything happens onchain, you read the preimage from someone claiming the HTLC and you pass the preimage to your offchain stuff so your offchain stuff is able to claim backward on the incoming channel what has been broadcast onchain on the outgoing channel.

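Here is a hypothetical sketch of that preimage round trip: the monitor watches the outgoing channel onchain, and when a claim reveals a preimage it hands it back so the incoming HTLC can be pulled. The types and method names are illustrative only, not the actual ManyChannelMonitor API.

```rust
// Hypothetical sketch of the on-chain -> off-chain preimage flow: when a
// counterparty claims an HTLC on chain on the outgoing channel, the monitor
// extracts the preimage from the witness and hands it back to the channel
// state machine so the incoming HTLC can be claimed backward.

use std::collections::HashMap;

type PaymentHash = [u8; 32];
type Preimage = [u8; 32];

struct ChannelMonitor {
    // HTLCs we forwarded and are still waiting to learn the preimage for.
    pending_outgoing: HashMap<PaymentHash, u64>, // hash -> amount in sats
}

impl ChannelMonitor {
    /// Called for every confirmed transaction we watch. If a claim spends one
    /// of our outgoing HTLC outputs, the witness reveals the preimage.
    fn on_htlc_claimed(&mut self, payment_hash: PaymentHash, preimage: Preimage) -> Option<Preimage> {
        if self.pending_outgoing.remove(&payment_hash).is_some() {
            // Hand the preimage back to the off-chain layer so it can pull
            // the matching incoming HTLC before it times out.
            Some(preimage)
        } else {
            None
        }
    }
}

fn main() {
    let mut monitor = ChannelMonitor { pending_outgoing: HashMap::new() };
    monitor.pending_outgoing.insert([1u8; 32], 10_000);
    if let Some(preimage) = monitor.on_htlc_claimed([1u8; 32], [9u8; 32]) {
        println!("claim incoming HTLC backward with preimage {:?}", &preimage[..4]);
    }
}
```
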
Anatomy of a LN node: Router

The last piece of software, the last abstract interface, the RoutingMessageHandler. We have implemented the component Router. All the gossip stuff. If you want to build a huge database to be able to have your own view of the network graph you should do it. If you have just a mobile device and you want a smaller one you can do it.

Anatomy of a LN node: ChainWatchInterface

The last piece. It is more about integrating with onchain stuff. What is really hard, if you have written any piece of Bitcoin software, is getting things right in case of a reorg. If you already have a chain management tracker for your enterprise wallet you should be able to reuse this chain management stuff for your Lightning stuff. You should not have multiple servers doing chain tracking or chain management. You should be able to integrate this interface on top of an Electrum server or something like that.

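The sketch below shows the general shape of such an interface, assuming a simple connect/disconnect callback pair: one chain source (bitcoind, an Electrum server, …) drives both your onchain wallet and your Lightning channels, so reorg handling lives in one place. The trait and signatures here are illustrative, not the exact rust-lightning ChainWatchInterface.

```rust
// Hypothetical sketch, in the spirit of the interface described above: one
// chain-tracking backend drives every consumer through connect/disconnect
// callbacks, which is what makes reorg handling tractable.

trait ChainListener {
    /// A new block was connected to the best chain.
    fn block_connected(&mut self, block_hash: [u8; 32], height: u32);
    /// A previously connected block was reorged out; undo anything that
    /// depended on it (confirmations, resolved HTLCs, ...).
    fn block_disconnected(&mut self, block_hash: [u8; 32], height: u32);
}

struct LightningChannels;
struct OnchainWallet;

impl ChainListener for LightningChannels {
    fn block_connected(&mut self, _hash: [u8; 32], height: u32) {
        println!("channels: re-check HTLC timeouts at height {}", height);
    }
    fn block_disconnected(&mut self, _hash: [u8; 32], height: u32) {
        println!("channels: roll back state confirmed at height {}", height);
    }
}

impl ChainListener for OnchainWallet {
    fn block_connected(&mut self, _hash: [u8; 32], height: u32) {
        println!("wallet: update confirmations at height {}", height);
    }
    fn block_disconnected(&mut self, _hash: [u8; 32], height: u32) {
        println!("wallet: unconfirm outputs from height {}", height);
    }
}

fn main() {
    // One chain source fans out to both consumers, so there is a single
    // place that has to get reorg handling right.
    let mut listeners: Vec<Box<dyn ChainListener>> =
        vec![Box::new(LightningChannels), Box::new(OnchainWallet)];
    for l in listeners.iter_mut() {
        l.block_connected([0u8; 32], 600_000);
    }
}
```
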
Scenario 1: An exchange

I will go through some scenarios or use cases for how you can deploy rust-lightning. If you are an exchange you may want to have multiple nodes, but if you have multiple Lightning nodes you should share the Router because with gossip there is no reason to have multiple routers for multiple Lightning nodes you own. You should have multiple watchtowers for your Lightning nodes. Every time you get an update in your channel you should do a copy of the update and have multiple watchtowers on different servers. If one goes down you still have all those watchtowers to be sure the state is going to be enforced onchain. Some things you should do if you are an enterprise or exchange or if you handle a lot of money: you should have multiple Bitcoin nodes because there are a lot of eclipse attacks, ways to trick your view of the blockchain on the first layer and steal your money on the offchain one. You should have multiple Bitcoin nodes and do reconciliation between what your different Bitcoin nodes see to be sure you see the real blockchain and not a fake one. If you are a merchant, you already have a big wallet with a lot of UTXOs; you should reuse this exchange wallet with your Lightning daemon, you should be able to do this in a simple way. If you are an exchange, if you are a merchant, those are maybe the kind of things you want to do.

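A minimal sketch of the watchtower redundancy pattern described above: every channel update is copied to several watchtower backends on different servers before it is considered persisted. The trait, the replicate_update helper and the endpoint names are hypothetical, just to show the fan-out.

```rust
// Hypothetical sketch of the redundancy pattern: each channel update is
// replicated to every configured watchtower, and only counts as persisted
// once all of them have acknowledged it.

trait WatchtowerBackend {
    fn store_update(&mut self, channel_id: [u8; 32], update: &[u8]) -> Result<(), String>;
}

struct RemoteWatchtower {
    endpoint: String, // hypothetical endpoint, for illustration only
}

impl WatchtowerBackend for RemoteWatchtower {
    fn store_update(&mut self, _channel_id: [u8; 32], _update: &[u8]) -> Result<(), String> {
        // In a real deployment this would ship the update over the network.
        println!("stored update on {}", self.endpoint);
        Ok(())
    }
}

/// Replicates each update to every configured watchtower before reporting success.
fn replicate_update(towers: &mut [Box<dyn WatchtowerBackend>], channel_id: [u8; 32], update: &[u8]) -> Result<(), String> {
    for tower in towers.iter_mut() {
        tower.store_update(channel_id, update)?;
    }
    Ok(())
}

fn main() {
    let mut towers: Vec<Box<dyn WatchtowerBackend>> = vec![
        Box::new(RemoteWatchtower { endpoint: "tower-a.internal:9911".into() }),
        Box::new(RemoteWatchtower { endpoint: "tower-b.internal:9911".into() }),
    ];
    replicate_update(&mut towers, [3u8; 32], b"serialized-channel-update").unwrap();
}
```
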
Scenario 2: Mesh net

Let’s say you want to build a Lightning node for a mesh net, which is an interesting exercise. If you are a Lightning node you don’t have broadband communication all the time, so you are going to do an update with another guy. Your watchtower is going to be connected to the broadband internet. Every time you can get a connection to the broadband connection through the mesh net you are going to send updates. In this kind of case you may want to have longer CSV delays on your commitment transactions and HTLCs, to give yourself more time being offline. Your Bitcoin full node is going to run on the other side of the separation between mesh net and internet. If you are writing a router for the mesh net it is going to maybe be super simple so you may want to adopt that.

Scenario 3: Hardware wallet

Let’s say you are a hardware wallet vendor and you want to split the different components. It is still ongoing work. Are we able to have a trusted device just owning the keys? We should be able to distrust the Lightning node, that is still an ongoing conversation. We don’t want a hardware wallet to have to run the full Lightning protocol because it doesn’t have enough RAM to do it. You may want to dispatch the components in a way that just has the KeysManager on the device and all the other components on a normal computer.

Scenario 4: WASM

The last thing we have done is being able to run rust-lightning on WASM so you can run rust-lightning in your browser. Browsers are multiprocess: there is a browser process, there is a renderer, there is a plugin process. You can imagine having a full Lightning node running in every one of your tabs and sending through the browser to your Bitcoin full node or your Lightning back end. Having a watchtower watching outside of your browser for your Lightning node running in your browser. Lightning over HTTPS should be a thing. I am not sure people should use this because running money in your browser may not be the best idea. For small sums, for video games, this kind of stuff, it is less of a risk.

State of the project

So, the state of the project. We are almost done with the 1.0. In fact we are done with the 1.0, we are just implementing fee bumping, timely bumping of the fee rate of the transactions. It works on testnet with different implementations: lnd, c-lightning, eclair. We are just waiting for one item out of the 1.1 spec, like an option for simplified commitments, because we don’t like the update_fee mechanism right now, it is a hard mechanism to get right. New contributors are welcome. If you want to get started in Bitcoin protocol development and if you like Rust it is an awesome project. It is not hard, there are not years of software to understand. We just have the core of the protocol. It is super friendly, we have an awesome rust-bitcoin IRC channel. Just ping us there, people are super welcoming. Thanks to Chaincode for sponsoring this work, an amazing team. I am glad to be part of them. Thanks to The Lightning Conference, thanks to the volunteers who have given time. Questions? Thanks.

diff --git a/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/index.html index 25bc6162ae..f79e512b1b 100644 --- a/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/index.html +++ b/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/index.html @@ -12,4 +12,4 @@ Bastien Teinturier

Date: October 20, 2019

Transcript By: Michael Folkson

Tags: Trampoline payments, Amp

Category: Conference

Media: https://www.youtube.com/watch?v=1E-KhLA6Gck

The Lightning Conference Day 2 (Stage 2)

Slides: https://docs.google.com/presentation/d/1bFu33_YFsRVUSEN-Vff5-wrxMabOFl2ck7ZTiVsp5ds/

https://twitter.com/kanzure/status/1221100865328685057

Intro

We will start with gory technical details about routing and pathfinding. I hope I will be able to keep you awake for the next twenty minutes. I’m working for ACINQ on the Lightning Network. If you don’t know ACINQ we are contributors to the specification. We run the biggest node on the network and we are also one of the leading mobile wallets, eclair-mobile. We also just teased a prototype of our next wallet Phoenix on Twitter. If you haven’t seen that yet check out our Twitter account. What I’m going to be talking about for the next twenty minutes is how we plan on improving the mobile app experience with trampoline routing. I can’t talk as fast as Laolu and we only have twenty minutes so I will stay at a high level overview. If you have questions or want more details there are links at the end to the full specification. You can talk to me anytime.

The Lightning Network, a Payment Channels Network

So first of all what is the Lightning Network? I’m hoping you already know that so that is going to be very quick. The Lightning Network is just a Layer 2 solution, a payment channel network meant to help scale Bitcoin payments. You just create a 2-of-2 multisig on the Bitcoin blockchain and then you are able to update balances of your channel completely offchain by just sending peer to peer messages between people in the channel. The only important point in that slide is that there is network in the title. That’s really important because if you had to open a channel to everyone you wanted to pay that would be quite limiting. What we can do is just route through other people’s channels in exchange for fees. This way you can get anyone you can find a route to in the network graph.

Payment Routing Features

So payment routing today in Lightning offers quite a lot of interesting features. I want to highlight them briefly. First of all right now we are using source routing. This is nice. This means that the sender decides on the whole route that the payment will take. This gives a lot of flexibility to the sender because by having full control on the route that we take, the sender is able to optimize for fees if he wants to pay the least fees, by just running a pathfinding algorithm to minimize for fees. Or he can choose to optimize for privacy by using a longer route and paying more fees. Overall it gives lots of flexibility to the sender. Another interesting and important property of payments in the Lightning Network is that they are trustless. They use HTLCs for that. Udi did a great job yesterday at not explaining how HTLCs work. I’ll do exactly the same. I don’t have a fancy meme but if you want to know how HTLCs work it is going to be for another talk. But it allows us to route in a trustless way with multihop payments. A very important property of payments in the Lightning Network is that they are private. We are using onion encryption for that, based on a crypto scheme named Sphinx that I think was released in 2009. This allows us to make sure that people that are intermediaries in our payment route don’t know where they are in the payment route. They don’t know if they are the first one, the second one, the third one and they don’t know who the sender or the recipient is. It gives you a bit of anonymity when you’re routing your payments. You are not telling the whole network who you are paying and how much you are paying. That is something we definitely want to keep. All of those come at a cost for the payer. You need to sync a big network graph which means it costs bandwidth. You then need to store that which costs storage on your small mobile device. Then you need to run a pathfinding algorithm to find routes to pay whoever you want to pay. That costs a lot of CPU which drains your battery. It is also horrible for the UX because you launch the app, it has to sync all the network updates that it missed to be able to find that payment route so it is going to be really bad. We want to improve that. That brings me to the cool slide. The one with the potential impossibility result. The Lightning payment triangle of success.

Lightning Payment Triangle of Success

Basically I think that in peer to peer decentralized routing networks we can only achieve two of the three properties of this triangle: privacy, route cost optimum (which means you optimize your fee and pay the least fee that you can pay for that payment) and performance. Right now we are at the left edge here where we have good privacy and good route cost optimum. This means that we can route payments privately by paying just a small amount of fees but at the cost of performance. Note that we are not exactly at the center of this edge and we can decide to be anywhere on this edge right now, because you can choose to optimize for fees by using smaller routes or you can choose to use longer routes and pay more fees for more anonymity. Of course we don’t want to be on the bottom edge because we don’t want to sacrifice privacy, for reasons. What I’d like to focus on is the edge on the right side. The one where we give away some of the route cost optimum properties for better performance. We keep privacy but we’re ready to pay a bit more fees to have a better experience, especially for mobile devices. First of all, let me explain what led us to this need. As I said before we currently operate both the biggest node on the network on the server side and one of the leading mobile wallets. That allows us to get a good real-world view of what works and what doesn’t work for Lightning today. Our server node has been running for more than a year without any issues so on the server side everything works fine. But we have started to see the mobile app experience degrade a bit because of the network growth and because of more updates to sync and a bigger graph to run pathfinding on. We will need to address that and do something about it.

Let a billion payment channels bloom

When you are running a mobile node on your phone of course you are offline most of the time which means you are missing a lot of information. When you are taking your phone out of your pocket and you want to pay for something, you have to do a lot of things before that. You have to first connect to a peer, sync all the updates that you have missed about what happened in the network to get a good enough view of the graph to be able to find a route. This consumes a lot of bandwidth. You could be on a mobile plan and you don’t want to consume that much bandwidth. And it takes time. That means you have a spinner running and you can’t just pay, you have to wait for that to finish syncing. Then once you are done you also need to run a pathfinding algorithm to find the route to your destination. If it is some guy at the other side of the graph, this is going to take time as well, consume a lot of CPU, drain your battery and you’re still not paying. You’re still seeing a sliding thing saying you have to wait. That’s bad. We want to do better than that. When we think about mobile payment apps we think of things like Apple Pay, Venmo or WeChat. What we want to do is take our phone out of our pocket, scan a QR code and voila the payment is sent instantly. If we want to get mainstream adoption of Lightning that’s who our competitors are. Our UX should be no different, we should have the same kind of UX and it should not be that painful for users to just pay something with the Lightning Network. The easy way to just improve performance is to stop syncing the whole graph. If you don’t sync the whole graph then that means you don’t have to consume bandwidth to sync things that are far away from you. This means that you have a much lighter pathfinding algorithm to run because the network graph is going to be a lot smaller. Overall it is going to improve performance a lot. Of course if you don’t sync the whole network how do you find a route to your destination? If you don’t know how to reach the guy then you can’t pay either. You have something that is faster but you can’t pay. The idea is to use what we call trampoline routing.

Trampoline Routing

You are going to simply defer building parts of the route to some node in the network. Not the whole route because that would be the same thing as a custodial wallet, but only parts of the route. You’re going to keep the same privacy properties that you care about. The idea is that you are only going to sync the part of the network that is close to you. For example your two radius neighborhood. If we have a node in the center here in light blue we’re going to sync nodes that are a distance of two from us. Or we could choose three or four depending on what bandwidth we have available on our phone and how often we are online. Let’s choose two. That means we have a much smaller graph to sync and analyze when we’re trying to pay. We’re also going to be syncing some trampoline nodes from the network. They could be remote, they could be in your local neighborhood. We are going to sample them probably randomly. Those are the nodes in dark blue here. When you want to make a payment you are going to first choose a trampoline route. That means you are going to choose a few trampoline nodes that you are going to use to reach your destination. For example, here, node 1 and node 2. Let’s imagine that our destination is that node in grey. We are choosing node 1 and node 2 and committing to using that route. We make it so that the first trampoline node is in our neighborhood. That way we can find a normal payment route to the first trampoline node and send him what looks like a normal payment. When it reaches that trampoline node he is going to discover instructions to route to node 2. He is going to run the pathfinding algorithm for us to find the route to node 2 and to forward our payment to node 2. When node 2 receives the payment he will peel the onion and discover instructions to route to that node in grey. In fact he will run the pathfinding algorithm to find the route to that node. It repeats. You can use as many trampoline nodes as you want, there is a limit of course. You repeat until you reach the destination.

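As a rough sketch of the sender side described above, assuming a wallet that only knows its two-radius neighborhood plus a random sample of trampoline nodes: it picks a trampoline route whose first hop is inside that neighborhood, so it only ever pathfinds locally. The types and selection logic are illustrative pseudo-logic, not the eclair implementation.

```rust
// Hypothetical sketch of trampoline route selection on a light client.

use std::collections::HashSet;

type NodeId = u32;

struct LocalView {
    neighborhood: HashSet<NodeId>, // nodes within radius 2 of us
    trampolines: Vec<NodeId>,      // randomly sampled trampoline candidates
}

impl LocalView {
    /// Pick (t1, t2): t1 must be inside our synced neighborhood so we can
    /// still source-route to it; t2 can be anywhere, it will pathfind for us.
    fn pick_trampoline_route(&self, destination: NodeId) -> Option<(NodeId, NodeId)> {
        let t1 = *self.trampolines.iter().find(|t| self.neighborhood.contains(*t))?;
        let t2 = *self.trampolines.iter().find(|&&t| t != t1 && t != destination)?;
        Some((t1, t2))
    }
}

fn main() {
    let view = LocalView {
        neighborhood: vec![11, 12, 13].into_iter().collect(),
        trampolines: vec![13, 42, 77],
    };
    if let Some((t1, t2)) = view.pick_trampoline_route(99) {
        println!("pay node 99 via trampolines {} then {}", t1, t2);
    }
}
```
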
A tale of onions

The thing that makes that possible is a change we made to the protocol recently which is the variable length onion, and I will detail that in this slide. Before that, a few months ago, what we had in the onion construction was a 1300 byte onion divided into 20 frames of 65 bytes. That meant we had 65 bytes per intermediate node in the network and we couldn’t put much information in there but we could make 20 hops through the network. Now we’ve changed that so we can put as many bytes as we want for each intermediate node. What we do with that is we are putting in the payload, the big onion payload, a smaller onion payload which is in orange here. Here Alice wants to pay Bob. Alice is choosing trampoline nodes 1 and 2 to pay Bob. Alice is first creating a small onion in orange. You need to start reading it from the bottom. It says “Forward to node_id bob a payment with that expiry and that amount (1000 sats).” She encrypts that for node 2. Then she adds on top of that instructions for node 1 to route to node 2 saying “You should route to node_id t2 with that expiry and that amount.” That makes an onion that can only be decrypted by node 1. Then Alice is going to run a pathfinding algorithm, but on a much smaller graph, to find a path to node 1. She is going to put that orange onion in a normal onion that says to the first grey node “You should forward 1045 sats through that channel_id 42.” What is interesting here is that the first node in grey is just receiving a normal onion. He doesn’t know that trampoline is used, he doesn’t have to be updated to understand trampoline. It is just a legacy payment and this can be a legacy node. But when it reaches node 1 he is going to be able to decrypt the top level onion and verify that he was supposed to receive 1045 sats. Then he finds that there is a smaller onion in there that he can also decrypt. Decrypting that small onion gives him information about what he needs to send to trampoline node 2. Now he runs the pathfinding algorithm and does the same. He puts that small onion in a bigger onion and sends that to node 2. It is going to repeat at node 2 until it reaches the final destination. This is effectively trading fees for performance. Alice here didn’t have to run anything expensive. She had to run a pathfinding algorithm on a graph that has a depth of only 3. This is really simple, she can completely brute force that and it is still going to run in less than 100 milliseconds. It is node 1 that is computing a route and node 2 that is computing a route. For that they take a bit more fees than what they would have otherwise taken with normal relay. There are a lot of features that kind of come for free once we have trampoline that I think are interesting. For example, AMP.

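To pin down the arithmetic in that example, here is a toy sketch of the nested payload with the numbers from the talk: Bob receives 1000 sats and t1 receives 1045 sats. The intermediate 1020 sat amount (what t1 is asked to deliver to t2) is made up for illustration, since the talk only gives the two endpoints, and the plain structs stand in for the real Sphinx-encrypted layers.

```rust
// Toy sketch of the nested trampoline payload and the fee arithmetic.

struct TrampolineHop {
    next_node: &'static str,
    amount_to_forward_sats: u64,
    expiry_delta: u32,
}

fn main() {
    // Built from the bottom up, like the slide: the innermost layer is read by
    // t2, the layer above it by t1. In reality each layer is Sphinx-encrypted
    // so only the trampoline node it is meant for can read it.
    let trampoline_onion = vec![
        // NOTE: the 1020 sat intermediate amount is invented for illustration.
        TrampolineHop { next_node: "t2", amount_to_forward_sats: 1_020, expiry_delta: 288 },  // read by t1
        TrampolineHop { next_node: "bob", amount_to_forward_sats: 1_000, expiry_delta: 144 }, // read by t2
    ];

    for hop in &trampoline_onion {
        println!("-> forward {} sats towards {} (expiry delta {})",
                 hop.amount_to_forward_sats, hop.next_node, hop.expiry_delta);
    }

    // Per the talk, Alice asks her first (legacy) hop to forward 1045 sats to
    // t1. Each trampoline keeps the margin over what it must forward, and that
    // margin pays both its own routing fees and its trampoline fee.
    let amount_received_by_t1: u64 = 1_045;
    let t1_margin = amount_received_by_t1 - trampoline_onion[0].amount_to_forward_sats; // 25 sats
    let t2_margin = trampoline_onion[0].amount_to_forward_sats - trampoline_onion[1].amount_to_forward_sats; // 20 sats
    println!("t1 keeps {} sats, t2 keeps {} sats, Bob receives {} sats",
             t1_margin, t2_margin, trampoline_onion[1].amount_to_forward_sats);
}
```
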
AMP and Trampoline

Right now when we are doing AMP in the network with normal payments the sender decides on how to split the payments and every intermediate node just blindly forwards those shares. That means the sender has to do an even harder job of finding routes but also finding a correct way to split a payment into the network. If you are a mobile node that means you will have to make something that is even more expensive than what we have to do today to be able to send an AMP payment. But with trampoline we can do something that is much smarter than that because we can aggregate multipath payments at each of the trampoline nodes and then decide how to resplit to reach the next destination. For example, if Alice on the left wants to send 0.5 Bitcoin to a destination and she only has smaller channels, I ignored the fees on this slide because it is easier. She is going to just decide that she is going to use Bob and another guy as a trampoline node and she is only going to care about how to send that 0.5 Bitcoin to the first trampoline node. Since she has a much smaller graph of only depth 3 she can very easily find an optimal split to reach that guy. She is splitting it into 3 shares here for example. That trampoline node is going to wait to receive all the shares and once he has everything and has the destination he knows that he has a better way to reach that guy because he has quite big channels. He can split it into 2 paths instead of 3. Splitting into more paths also costs more fees. If you are able to send a payment via only one path it is always better. He is able to relay to the second node with only 2 splits and the second node has a big channel to the destination so he doesn’t have to split at all. This is a lot more efficient because each node has more information about others, about his local surroundings, because he knows the balances of his own channels. Whereas Alice wouldn’t know the balances of Bob’s channels so would not know that it is possible to use this channel or this channel because the balance allows it. So trampoline allows for much better AMP than the normal AMP that we have today. It is really easy to build on top of trampoline in the existing code. There are also other features that we can do more easily with trampoline like rendez-vous routing. I haven’t detailed that yet but I will send a mail to the mailing list soon about that. What is interesting about that change is that at first it looks like a very big and scary change. We are basically moving from source routing to a kind of packet switching so it looks really complex and hard to add to the codebase but in fact it is not. Because we are reusing the onion construction but just with smaller onions it is an incremental change that we add on top of the existing payment infrastructure. It is quite easy to develop and add to our codebase. It doesn’t change anything in the way we route HTLCs. HTLCs just work independently of how we do the routing underneath. That is important. If you have seen the Phoenix demo, that is what allows Phoenix to send payments instantly and not have a waiting bar when you launch the app. We currently have a prototype of that deployed and we’re working on finishing it and making it production ready. We are going to deploy that to our node. You are going to be able to use that soonish. An important point is that at some point maybe the network will grow to be huge and we cannot ask any routing node to be able to compute a route to anyone else on the network because the network is going to be too big.
If that happens, and I’m not saying it will, we are going to have to somehow divide the network into zones. When we do that, having a kind of packet switching like trampoline makes it really easy to move to something like that. I’m not saying we are going to have to do it some day, even in the far future, but at least we are taking an incremental step in that direction so that we are able to do it if we ever need to.

To Infinity and Beyond

That is all I had. This is a high level view. Of course there are a lot of gory details that are not completely fleshed out yet and we would love to have more feedback on the proposal if you have ideas. There are parts of it that we are not completely happy with, especially when dealing with legacy recipients that do not support trampoline. If you want to look at the proposal and add your ideas or contribute we would welcome that. There is currently a spec PR that we will update soon because some small things have changed. I sent a mail to the mailing list a few months ago to detail the high level view of how we think it is going to work. Have a look at that and don’t hesitate to put comments in there or just come and reach me to ask any questions. I think we have two minutes for questions.

Q & A

Q - My question is about privacy in trampoline routing. How do you prevent let’s say the NSA running a large fraction of all trampoline nodes?

A - Running a trampoline node is not that expensive. Right now when you are running a routing node your CPU is mostly idle because the only thing you are doing is taking an incoming packet, decrypting an onion, and it already says what channel you need to route to. If you are not a really nice routing node you don’t even have to keep the network graph. You can ignore that and just decrypt and forward. This doesn’t use the CPU at all. What is good with trampoline is that it incentivizes routing nodes to keep the network graph and also make sure that it is possible to reach any node in the network. Since right now we are not using your CPU, that is wasted capex; you should use it. If you can collect more fees by using your CPU to run pathfinding for people it is even better. I think that every serious routing node that is running on a server machine, not a Raspberry Pi or some crappy hardware, has an incentive to run trampoline because it is a good way to earn more fees while providing useful services for the whole network. I am expecting that there are enough people to counterbalance what the NSA would run. Even if the NSA runs most of the trampoline nodes it is the same as today. They would need to have all the trampoline nodes in the route because, since we are using onion encryption for the trampoline onion as well, a trampoline node doesn’t know where he is on the trampoline route. It is a smaller route than the 20 hop route that we can have in the whole payment, I think it is 5 hops max, but you still don’t know where you are in that.

Q - Does the last trampoline node discover who the recipient is?

A - If the recipient supports trampoline and knows how to decrypt this packet, no. The last trampoline node just sees a normal onion packet and sees that he is supposed to forward it to that next node. If the recipient does not support trampoline then yes, you are losing privacy. But it is kind of the same as today. If the next to last node in the route is forwarding to an unannounced channel for example, or to someone he knows is not routing, he knows that this is the destination. We have the same kind of trade-offs, a bit worse when the recipient is a legacy node. If the recipient supports trampoline we have ways to make it as good as today. That is an area where we would like to see more ideas and more ways of analyzing that we’re not losing any privacy.

Q - You are giving up custody of your route to a third party. Let’s say every Sunday I go to church and I make a donation to my church. If I give that pathfinding to a third party then they can route through somebody that will use that information?

A - It depends because you still choose your trampoline node. It is just that you are choosing less of the route. You are not choosing the full route but you are choosing trampoline nodes and you can rotate them randomly. Also it is not replacing the existing routing algorithm. You still have the choice to use either of those. If you want to optimize for fees but have a slower boot up time before payment you can still do that. It is when you want to optimize for performance that you would use trampoline routing. If you are doing something that you think may be subject to analysis then you should use eclair-mobile instead of Phoenix and compute the whole path for yourself. It is just about giving more choice. On that triangle we currently only have the edge on the left and we are just giving you the opportunity to use the edge on the right. We are not forcing people to do that. You still have the opportunity of doing both of those depending on what your need is.

\ No newline at end of file diff --git a/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points/index.html b/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points/index.html index 0a4c01c1c0..18b5c60f9f 100644 --- a/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points/index.html +++ b/lightning-conference/2019/2019-10-20-nadav-kohen-payment-points/index.html @@ -12,4 +12,4 @@ Nadav Kohen

Date: October 20, 2019

Transcript By: Michael Folkson

Tags: Ptlc

Category: Conference

Media: -https://www.youtube.com/watch?v=TE65I9E8Q5k

Slides: https://docs.google.com/presentation/d/15l4h2_zEY4zXC6n1NqsImcjgA0fovl_lkgkKu1O3QT0/

https://twitter.com/kanzure/status/1220453531683115020

Intro

My name is Nadav Kohen. I’m a software engineer at Suredbits. I’m going to talk about payment points which have been alluded to throughout the conference by various talks.

Hash Timelocked Contract (HTLC) Routing

First of all, some preliminaries, HTLCs. I don’t think I’m going to spend much time on this slide. We all know what they are. For every hop forward you add a hash timelocked contract. Then on every hop backward you complete it. The hash preimage is often referred to as the proof of payment. It is this important thing that is used by applications. In Trey’s talk about atomic swaps with things that aren’t just other currencies he discussed use cases in which you use this preimage. That’s going to be a theme in this talk that this preimage is important.
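
As a reminder of the hash lock itself, here is a minimal sketch of the claim condition every hop enforces. This is illustration only (plain Python, not actual Lightning script or any implementation’s code):

```python
import hashlib
import os

# The payee picks a random preimage and puts its hash in the invoice.
preimage = os.urandom(32)
payment_hash = hashlib.sha256(preimage).digest()

def can_claim(candidate: bytes) -> bool:
    # Every hop locks funds to the same payment_hash: whoever presents the
    # matching preimage before the timeout can claim the HTLC.
    return hashlib.sha256(candidate).digest() == payment_hash

assert can_claim(preimage)            # the payee, and then each hop on the way back
assert not can_claim(os.urandom(32))  # anyone without the preimage
```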

Problems with HTLC Routing

Some problems with HTLC routing and why I want to get rid of it. Every hop uses the same hash. This has some major problems. Essentially any time you are using a payment preimage in an application you have to expect that the payment preimage will be on the Bitcoin blockchain for everyone to see. These proofs of payment aren’t actually real proofs of payment because everyone sees them. In practice you have to mix in the user’s pubkey or some additional information which in a perfect world we wouldn’t actually want to have. Additionally, since every hop uses the same hash there is a huge privacy leak. If two nodes find out… say we have a payer paying the payee over all of these hops. You have two malicious nodes, Mallory and Mike. They realize that they have the same hash on this payment. They know they are on the same route and they are communicating with each other. What they can do is something called a wormhole attack. During set up they are cooperative but during settlement the payee reveals the preimage and it goes all the way back. Mike gets it, but rather than Mike revealing it to the hop in front of him, Mike reveals this preimage directly to Mallory. Then it keeps going all the way back to the payer. What happens here is that it doesn’t impact the payee or the payer but Mallory and Mike were able to steal the fees from every hop in between while holding their funds hostage. The reason you put up your funds is because you are getting a fee in return. This is not great. Aside from privacy concerns we also have these fee stealing concerns. We have no proof of payment that is really a proof of payment that is useful. Also importantly, hashes are boring. They destroy too much information, you can’t do any fun magic. You can’t use the fun payment point things that I am going to be talking about in a second. The best examples of this are that AMPs and stuckless payments, as mentioned, do not have proof of payment. Anytime you have a spontaneous payment where the sender needs to know the preimage when you set up, you cannot have proof of payment. As mentioned you can have multihash lock stuff but I think that that is actually a bigger change than this, which will definitely be happening anyway to prevent wormhole attacks and for various other reasons. Just because it is the nice way to go, more on that later. Let’s start talking about elliptic curve points.

Elliptic Curve Points

Real quick, some review. Bitcoin uses secp256k1 which is the curve y^2 = x^3 + 7. That is a little white lie, you take the finite field version of this, but whatever. It kind of looks like that. It is important to note that adding points together is fast and scalar multiplication is fast. When I say scalar multiplication I mean if I have some number, just a normal number x, and we have all agreed on this point G on the curve, then taking x*G, which is just G+G+…+G x times, is really fast. But undoing scalar multiplication is really hard. This x is a private key, x*G is a public key. This is exactly what private and public keys are in Bitcoin. We’re going to be reusing all of that fun public key cryptography math and we have this super nice property, that has been mentioned earlier when talking about Schnorr and various other things, that you can do key aggregation. If I have x*G and y*G and I add these points together it is actually the same thing as if I take the number x+y and multiply it by G: (x+y)*G is just G added to itself x+y times, which is the same as x*G + y*G. This is a nice addition preserving property which is what we’ll be using pretty heavily for things like replacing any solution that would require multiple hashes to lock, as well as other cool things we can do with this.
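
Here is a small sketch that checks that additive property over secp256k1. The point arithmetic is a deliberately naive, toy implementation written only to verify the algebra; real software should use libsecp256k1.

```python
# Minimal, naive secp256k1 arithmetic, just to check that x*G + y*G == (x+y)*G.
P = 2**256 - 2**32 - 977              # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    """Add two affine points (None is the point at infinity)."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    lam = ((3 * a[0] * a[0]) * pow(2 * a[1], -1, P) if a == b
           else (b[1] - a[1]) * pow(b[0] - a[0], -1, P)) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def mul(k, pt=G):
    """Scalar multiplication k*pt by double-and-add."""
    result = None
    while k:
        if k & 1: result = add(result, pt)
        pt = add(pt, pt)
        k >>= 1
    return result

x, y = 12345, 67890                   # two toy private keys
assert add(mul(x), mul(y)) == mul((x + y) % N)   # x*G + y*G == (x+y)*G
print("key aggregation property holds")
```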

Payment Points to the Rescue!

As I mentioned, payment points to the rescue. We are going to replace HTLCs with PTLCs. Essentially rather than saying “Give me the preimage to this hash and you can have this money” you say “Give me the scalar to this point and you can have this money.” We are already assuming that the discrete log problem is hard in Bitcoin and in general. This is not adding to our cryptographic assumptions or anything like that. Essentially we use points for hiding rather than hashes because we don’t need to hide as much as hashes do. Hashes destroy all the information, some of which we could still use, while hiding the preimage is really all you need in order to make payments atomic and things like this. As I’ll discuss in the next slide, now that we have these nice additive properties we can have every hop along a route use a different payment point. There’s no payment correlation problem, we get rid of wormhole attacks, better privacy, AMPs can be decorrelated between their partial payments, things like this. We can recover proof of payment in basically any scheme that is missing it. Payment points destroy just enough information that they are usable for the purposes we use hashes for over the Lightning Network whilst still letting us do lots of other cool things, some of which I will discuss here. The first thing I want to talk about is the motivation for why payment points came around and that is wormhole attack mitigation and payment decorrelation.

Wormhole Attack Mitigation (Payment Decorrelation)

Essentially, like I mentioned, every hop is going to have a different point. How is this achieved? Alice is the person setting up this payment. She is paying Carol through Bob. For every single hop Alice will add a nonce. The point that we are using is z*G. The preimage is z here. Alice is going to come up with two nonces for her two hops. Alice is going to add nonce x to her hop with Bob and then inform Bob “here is y”. Bob is going to add y to his hop with Carol. You can imagine that if there were more hops here Alice would generate x_1, x_2, x_3 and give each person, in the onion, the nonce that they’re supposed to add on the way forward and subtract on the way back. What she gives Carol at the very end is the sum of all the nonces. This is the sum of all the things Carol doesn’t know. It is just a number. All of these are random so they all look entirely different. Carol who knows the preimage can now add the preimage to the sum of the nonces and reveal that. As we see down here during the reveal phase, Carol knows x+y because it was told to her by Alice in the onion. She knows z, so she can reveal x+y+z. Bob, who knows y because he added it forward, can subtract it on the way back which gives Alice x+z. Alice, who knows x and added it on the hop forward, subtracts it and learns z, the payment preimage or the scalar or whatever you want to call it. This is an actual proof of payment because it will never end up on the chain even if every single hop along the route ends up onchain. No one but Alice learns what z is. Carol didn’t have to know anything about Alice in order to achieve this. That’s what payments are going to look like once we have payment points. How this is going to be achieved is not directly by having a script contract that says if you reveal the preimage to this point then you get the money. It is going to use adaptor signatures. Abstractly we can think about things essentially in terms of replacing hashes with points. I’ll discuss implementation in a second.
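
The nonce bookkeeping is just scalar arithmetic modulo the curve order; each lock on the route is the corresponding scalar times G. A sketch of the Alice → Bob → Carol example with scalars only (variable names are mine):

```python
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

z = secrets.randbelow(N)   # Carol's invoice secret; the invoice carries z*G
x = secrets.randbelow(N)   # nonce Alice adds on her hop to Bob
y = secrets.randbelow(N)   # nonce Alice (via the onion) has Bob add on his hop to Carol

hop_alice_bob = (z + x) % N        # Alice->Bob PTLC locks to (z + x) * G
hop_bob_carol = (z + x + y) % N    # Bob->Carol PTLC locks to (z + x + y) * G
# The two locking points look completely unrelated to an outside observer.

carol_hint = (x + y) % N           # Alice tells Carol the sum of all nonces in the onion

# Settlement, walking back along the route:
carol_reveals = (z + carol_hint) % N          # Carol knows z, claims from Bob
assert carol_reveals == hop_bob_carol
bob_reveals = (carol_reveals - y) % N         # Bob strips his nonce, claims from Alice
assert bob_reveals == hop_alice_bob
alice_learns = (bob_reveals - x) % N          # Alice strips her nonce...
assert alice_learns == z                      # ...and recovers z, her proof of payment
```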

“Stuckless” Payments

To touch on stuckless payments briefly. I know that two talks ago we went over this in quite a bit of detail so I won’t spend too much time here. Essentially the idea is that we don’t want payments to get stuck on set up; it is terrible UX when a payment stalls and you have to wait a bunch of time. What is being proposed is that we add this update phase between set up and settlement in which, through some means, Carol responds with an ACK once a payment is successfully set up. Alice then reveals the sum of the nonces, because until Carol knows the sum of the nonces, which is itself just a random number, Carol cannot complete the payment. Carol can then respond with the payment preimage at this point, it is safe to do so. Even if she doesn’t though, eventually Alice will get that preimage just over the Lightning Network through completed PTLCs. The nice thing with this is that we have this proof of payment. If you wanted to implement this with hashes, as was mentioned earlier, you would need something like a multihash lock or something quite complicated. Here everything is completely indistinguishable from a normal payment. You still get this proof of payment property from stuckless payments as a proposal.
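
In the same scalar notation as the previous sketch, the extra update phase simply means Alice withholds the nonce sum until she sees Carol’s ACK. A rough sketch, with the phases as comments and all names my own:

```python
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

z = secrets.randbelow(N)           # Carol's invoice secret (invoice carries z*G)
nonce_sum = secrets.randbelow(N)   # sum of Alice's per-hop nonces, kept to herself for now
final_lock = (z + nonce_sum) % N   # Carol's incoming PTLC locks to final_lock * G

# Phase 1, set up: PTLCs are added along the route. Carol cannot claim yet:
# without nonce_sum she does not know the scalar behind her lock.
carol_knowledge = {"z": z}

# Phase 2, update: Carol ACKs that the full amount arrived. Only now does Alice
# release nonce_sum (if the payment got stuck she retries another route
# instead, having paid nothing).
carol_knowledge["nonce_sum"] = nonce_sum

# Phase 3, settle: Carol completes the lock, and the reveal propagating back
# gives Alice her proof of payment z, exactly as in the decorrelation example.
assert (carol_knowledge["z"] + carol_knowledge["nonce_sum"]) % N == final_lock
```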

High AMPs

Another fun thing is what’s usually called High AMP. I’m trying to remember what Conner called it in his talk. DAMP I think. DAMP or High AMP is essentially OG AMP, which was just called AMP in Conner’s talk, where you recover proof of payment. OG AMP is the AMP proposal in which you have nice mathematical assumptions rather than economic assumptions for what makes things atomic. If you want n different partial payments you create n secrets, s_1 through s_n. You XOR them together to get this base secret. Then you HMAC that for each index to get your preimages. Then you hash those to get your payment hash for each part. Then on each payment you send, with this hash, one of the s_i’s. Only once they recover all of those s_i’s can they recreate the base secret and then HMAC and deterministically recreate all of the preimages. The drawback to this is there is no proof of payment because the person who is setting it up has to know all of the preimages in advance. You can’t do fancy math on hash preimages using input from someone else, which would be hashes. You would need to use multihash locks. As I mentioned we don’t want those because we want payment points anyway, so let’s do it this way. High AMP is essentially this exact same procedure except rather than hashing r_1 to get H_1, you turn r_1 from a scalar into a point and add some other point that the payee knows the preimage to. So essentially you invoice as always, you give a point instead of a hash that the person getting paid knows the preimage to, and then you can simply add that point to all of these P_1, P_2 through P_n rather than hashing. Then only once you receive everything can you reconstruct the r’s which are necessary in order to claim each of these payments. Then with the first payment that reaches back to the payer, they will be able to subtract that r_i and learn the preimage to this point that they didn’t know earlier. In a somewhat similar way to how we recover proof of payment with stuckless payments we also recover proof of payment here. Essentially with any spontaneous payment set up you can always just add a point where you don’t know the preimage to it. If you know the preimage to the rest of it, as for usual spontaneous payments in hash-land, then you can just subtract the stuff you know off and get something new that you don’t know which you can treat as proof of payment. That is all High AMP is. Essentially it is just OG AMP, which in my opinion is the nicer of the AMPs because it doesn’t require the assumption that the proof of payment is valuable, which Base AMP does. Essentially we have this nice atomic structure that also has a proof of payment.
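
A sketch of the OG AMP secret-sharing step and the extra point that High AMP adds, at the scalar level. The exact hashing and encoding choices here are placeholders of mine; the real proposals pin these down precisely:

```python
import hashlib, hmac, secrets
from functools import reduce

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

n_parts = 3
s = [secrets.token_bytes(32) for _ in range(n_parts)]              # per-part shares s_i
base_secret = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), s)

def part_scalar(i):
    # r_i = HMAC(base_secret, i), interpreted as a scalar
    return int.from_bytes(hmac.new(base_secret, i.to_bytes(4, "big"),
                                   hashlib.sha256).digest(), "big") % N

# OG AMP: part i is locked to hash(r_i); the sender has to know every r_i up
# front, so there is no proof of payment.
#
# High AMP: the payee's invoice scalar p (invoice point p*G) is added on top,
# so part i is locked to (r_i + p) * G instead.
p = secrets.randbelow(N)                 # payee's invoice secret; payer only sees p*G

locks = [(part_scalar(i) + p) % N for i in range(n_parts)]

# The payee can only rebuild base_secret (and hence the r_i) once every share
# s_i has arrived, which is what makes the whole thing atomic. When the first
# claim propagates back, the payer subtracts the r_i they generated and learns
# p, a proof of payment they did not know beforehand.
i = 0
proof_of_payment = (locks[i] - part_scalar(i)) % N
assert proof_of_payment == p
```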

Escrow over Lightning

This next scheme, proposed by Z-man, is called escrow over Lightning. Essentially you have some contract that a buyer and a seller agree to ahead of time in some language that the escrow understands, be that English or C++. Then the buyer and the seller each have these points and the escrow also has a point. You tweak this point. I’m not going to get into the details here, go read the mailing list if you’re interested. Essentially you use as a payment point the Diffie-Hellman secret between the buyer and the escrow, which is tweaked by the contract so that it is committed to, plus the seller’s point. To break this down, to make it much easier to look at, essentially what this is saying is that this preimage can become known if the seller knows their preimage, which they should, and either the buyer reveals their preimage or the escrow reveals their preimage. In the cooperative case what happens here is that the buyer got their service from the seller. After this payment has been set up, they agree that the seller, say, mowed their lawn, did a good job, great, here’s the preimage to B. From that you can compute the Diffie-Hellman secret, add it to the s you already know and there you go, you’re done. s acts as a proof of payment because the buyer knows b. In the case of a dispute the seller can go to the escrow with this tweak, essentially telling them what the contract is. Then the escrow runs the contract or reads the contract depending on whether it is C++ or English or whatever other language. If they agree with the seller that indeed this has been done and they deserve to be paid, the escrow can reveal their tweaked preimage and that is all you need to compute the ECDH secret between the buyer and the escrow. That also allows the seller to reveal the preimage and claim their payment. The buyer still gets s as proof of payment because they also know the ECDH preimage. What this does is give you this AND and OR structure where it is (the buyer or the escrow) and the seller. You can actually do this in a much more general way. I’m not going to talk about it too much now because I had this idea more generally after I made these slides and turned them in. But you can add as many points together as you want to have this AND structure. In certain circumstances you can have a bunch of ORs. So you can have these more general access structures in which, for example, you want to have a weird multisig condition payment where you have a bunch of parties who are actively involved. If m-of-n of them agree that this payment should go through they can compute the preimage to this point and let it go through. You can do much more general things. This is the nice, relatively easy to understand case that showcases both the AND and the OR. Once again we have proof of payment here so this is quite cool. You can do really cool, much more general stuff than just this escrow example.
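
Very roughly, and glossing over the details of Z-man’s actual construction, the AND/OR structure can be sketched like this: the payment point is the seller’s point plus a point whose scalar is derived from an ECDH secret between the buyer and the contract-tweaked escrow key, so the seller can complete the preimage once either the buyer or the escrow hands over their half. A toy check under those assumptions, reusing the naive point arithmetic from the earlier sketch:

```python
import hashlib

# Same naive secp256k1 helpers as in the earlier sketch, only for checking the algebra.
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    lam = ((3 * a[0] * a[0]) * pow(2 * a[1], -1, P) if a == b
           else (b[1] - a[1]) * pow(b[0] - a[0], -1, P)) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def mul(k, pt=G):
    result = None
    while k:
        if k & 1: result = add(result, pt)
        pt = add(pt, pt); k >>= 1
    return result

def to_scalar(point):
    return int.from_bytes(hashlib.sha256(repr(point).encode()).digest(), "big") % N

contract = hashlib.sha256(b"seller mows the lawn by Sunday").digest()
tweak = int.from_bytes(contract, "big") % N

s, b, e = 11111, 22222, 33333        # seller, buyer, escrow secret keys (toy values)
S, B = mul(s), mul(b)
E_c = mul((e + tweak) % N)           # escrow key tweaked by the contract text

# The ECDH-derived scalar: the buyer computes it from b and E_c...
d = to_scalar(mul(b, E_c))
# ...and the escrow can compute the exact same value from its tweaked key and B.
assert to_scalar(mul((e + tweak) % N, B)) == d

# Payment point = seller's point AND the ECDH point: S + d*G.
payment_point = add(S, mul(d))

# The seller can claim once either the buyer (cooperative case) or the escrow
# (dispute case) hands over d; revealing s + d also gives the buyer s as proof
# of payment, since the buyer already knows d.
assert mul((s + d) % N) == payment_point
```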

Selling (Schnorr) Signatures

Another thing we can do with points is sell Schnorr signatures trustlessly. You can sell Schnorr signatures over Lightning today with HTLCs but it is not trustless, and the signature will also get revealed to everyone if any of the hops go onchain. What I mean by selling Schnorr signatures over the Lightning Network is that you use your Schnorr signature, the last 32 bytes of it, as the payment preimage. If you wanted to do that today, the person who sets up the payment has no way of knowing that the hash that they’re using is actually the hash of the signature that they want. Here’s the math. Basically the thing that you need to know is that you can compute s*G, where s is the signature on some message, just from public information: R which is a public key, X which is a public key, and the message. From public information and the message that you want a signature on, with these specific public keys, you can compute the public point and set up a payment using that point as the payment point. Then you know that you will get a valid signature on a specific message with specific keys if and only if your money gets claimed. In order to claim your money they must reveal the signature to you. Essentially we can trustlessly sell Schnorr signatures over the Lightning Network in nice, private ways. This can be used with blind Schnorr signatures as well. That is a link to a Jonas Nick talk from a while ago about how you can use blind signing servers to implement e-cash systems and various things like this. Here’s the great way you can sell signatures in a trustless fashion. Another thing that you can do by selling signatures is you can have discreet log contract style options contracts. Essentially rather than having both parties have the ability to execute some contract, one party sells its signatures on some contract to the other party in return for a premium. You essentially then have an option on some future event. That is on the mailing list if you’re interested in hearing more. Just in general, selling Schnorr signatures is a new, nice atomic thing that you can use in various schemes.
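
The identity being used is that for a Schnorr signature (R, s) on message m under key X, verification requires s*G = R + H(R, X, m)*X, so the buyer can compute s*G from public data alone. A toy check (naive point arithmetic again, and a stand-in challenge hash rather than BIP 340’s exact encoding):

```python
import hashlib

# Same naive secp256k1 helpers as in the earlier sketch.
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    lam = ((3 * a[0] * a[0]) * pow(2 * a[1], -1, P) if a == b
           else (b[1] - a[1]) * pow(b[0] - a[0], -1, P)) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def mul(k, pt=G):
    result = None
    while k:
        if k & 1: result = add(result, pt)
        pt = add(pt, pt); k >>= 1
    return result

# Seller's signing key and nonce; only X = x*G and R = r*G are public.
x, r = 0xC0FFEE, 0xDECAF
X, R = mul(x), mul(r)
m = b"oracle statement: it rained on Friday"

# Challenge e = H(R, X, m). BIP 340 hashes fixed-size point encodings; the
# string repr is good enough for this toy check.
e = int.from_bytes(hashlib.sha256((repr(R) + repr(X)).encode() + m).digest(), "big") % N

# Buyer side: from public data alone, compute s*G = R + e*X and lock a payment
# to that point.
payment_point = add(R, mul(e, X))

# Seller side: the only scalar that opens that lock is the signature value s,
# so claiming the payment hands the buyer a valid signature on m.
s = (r + e * x) % N
assert mul(s) == payment_point
```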

Pay for Decommitment (Pay for Nonce)

Very close to this is paying for decommitment, or pay for nonce. Essentially if someone has a Pedersen commitment to some number x, this means they’ve hidden this thing by adding some point to it. You use this point as your payment point and in return you get the nonce. Essentially what that is is a decommitment to some Pedersen commitment. When you see someone decommit, that means you can decommit as well. So you can also think of this as paying for a commitment. For example, if someone wanted to sell timestamps, they put a bunch of Pedersen commitments onchain for specific messages as people ask them to. Then they’re selling the decommitment to those things. If you put things in Merkle trees it is basically no cost to just aggregate all of these things up. More on that on the mailing list as well.
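
The mailing list post has the real construction; as one plausible arrangement (an assumption on my part), the commitment can be laid out as C = m*H + r*G with a second generator H, so that the blinding term r*G is itself a usable payment point and paying for its scalar r is exactly paying for the decommitment:

```python
import hashlib

# Same naive secp256k1 helpers as in the earlier sketches.
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    lam = ((3 * a[0] * a[0]) * pow(2 * a[1], -1, P) if a == b
           else (b[1] - a[1]) * pow(b[0] - a[0], -1, P)) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def mul(k, pt=G):
    result = None
    while k:
        if k & 1: result = add(result, pt)
        pt = add(pt, pt); k >>= 1
    return result

def neg(pt):
    return (pt[0], (-pt[1]) % P)

H = mul(0xDEADBEEF)   # stand-in second generator; in practice a nothing-up-my-sleeve point

msg = b"document to be timestamped"
m = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
r = 0x1234567890ABCDEF               # committer's blinding nonce

# Pedersen-style commitment published by the timestamp seller: C = m*H + r*G.
C = add(mul(m, H), mul(r))

# The buyer knows msg, so they can isolate the blinding term r*G and lock a
# payment to it: payment_point = C - m*H.
payment_point = add(C, neg(mul(m, H)))
assert payment_point == mul(r)

# Claiming the payment forces the committer to reveal r, which is exactly the
# decommitment the buyer paid for.
```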

Implementation

So a quick note on why we don’t have these already. These are great, hashes suck. Why are we doing HTLCs, everything is terrible? Option 1 is we could introduce OP_COMPUTEPUBKEY. I think actually this has been proposed, not by this name, I forget exactly what the name was. Essentially the idea is to replace OP_HASH160 in all of your HTLCs with this OP_COMPUTEPUBKEY. This isn’t going to happen. We have better things to do than add a bunch of opcodes that aren’t known to have a bunch of use cases. Another thing we can do is use 2p-ECDSA adaptor signatures. This is really complicated, it has fewer bits of security than what I’m going to discuss next, and it requires another crypto system to be implemented in an efficient way on top of libsecp256k1. In theory it could work today if we did this. I do know a couple of people who are interested in working on this. Option 3 is bip-schnorr: once we have bip-schnorr it is relatively simple. It only requires libsecp256k1. Sadly it is a couple of years away but once we have bip-schnorr we can do all of this pretty easily.

Recap - Payment Points Give Us

To recap, we get payment decorrelation, stuckless payments with proof of payment, AMP with proof of payment, Lightning escrow contracts, trustlessly selling signatures, trustlessly selling decommitments, and a bunch more things that I came up with in the past couple of days along with other people, which are up on the mailing list but didn’t make it onto these slides. That’s my talk.

\ No newline at end of file diff --git a/lightning-hack-day/2020-05-03-christian-decker-lightning-backups/index.html b/lightning-hack-day/2020-05-03-christian-decker-lightning-backups/index.html index 9e8aa34add..557f382d05 100644 --- a/lightning-hack-day/2020-05-03-christian-decker-lightning-backups/index.html +++ b/lightning-hack-day/2020-05-03-christian-decker-lightning-backups/index.html @@ -10,4 +10,4 @@ Christian Decker

Date: May 3, 2020

Transcript By: Michael Folkson

Tags: Lightning

Category: Hackathon

Media: -https://www.youtube.com/watch?v=kGQF3wtzr04

Topic: Back the f*** up

Location: Potzblitz (online)

Intro (Jeff Gallas)

We’re live. Welcome to the fourth episode of Potzblitz, the weekly Lightning talk. Today we have a very special guest, Christian Decker of Blockstream. They also call him Dr Bitcoin. Our co-host today is Michael Folkson from the London BitDevs meetup. He does a bunch of Socratic Seminars and has deep technical knowledge. He will be the perfect person to speak to Christian and has already prepared a bunch of questions. We announced this on Twitter a couple of days ago and he has been collecting questions. If you do have questions during the stream put them in the YouTube comments or ask them on Twitter using the hashtag #potzblitz. We also have a Mattermost channel at mm.fulmo.org, the channel is called Potzblitz. You can find Christian there, he will be checking it out later. As always, once we are done streaming you can join the Mattermost channel and the Jitsi room and hang out and ask some more questions. I expect that we will not be able to answer all of your questions today. Join us in the Jitsi room afterwards. We are using Jitsi to stream this. It is an open source video conferencing tool. It usually works quite well. Sometimes it doesn’t but most of the time it does. If there are any glitches please forgive us. It is permissionless and doesn’t collect your data. This is a pretty good option. If you like this give a thumbs up and subscribe to the channel so we can do this another time.

Back the f*** up (Christian Decker)

Thank you so much for having me. I hope people will excuse the swear word in the title; it fits in quite nicely with my feelings about backups, or the lack thereof, in Lightning. The title today is “back the f*** up”: how to back up and secure your Lightning node in a way that you cannot end up losing money by operating your node. Like Jeff said I am Christian Decker, I work for Blockstream, hence the small logo up there. I work mainly on the c-lightning implementation of the Lightning protocol, and I also help out a bit on the protocol itself. You might notice that later on because I tend to emphasize what c-lightning does correctly. More on that later.

Why Backup?

First of all why do we want to backup in the first place? I think everybody will be shouting right now that we are actually handling money and so the rationale for having a backup of what is handling our money is pretty fundamental. The way to do backups in onchain systems like your Ledger wallets or your Bitcoin Core nodes or systems where you perform onchain payments is basically just write down 24 words. If you ever destroy or lose your device you can recover from those 24 words which seed a random number generator which is used to generate addresses. This is a common thing in the onchain world where you have your 24 words, you write them down on paper, you send half of them to your parents or you engrave them on metal. All in the name of making this as disaster resilient as possible and ensuring that you can recover. That is quite an easy option. We can do that in Lightning as well. However it turns out backups in Lightning are quite a bit harder. They can even be dangerous. To see why they are dangerous let’s quickly go through how not to backup a Lightning node.

How not to backup

If you are anything like me you probably end up with a couple of backups like this where we have the original .lightning folder. Then at some point we decided we might want to take a backup of that. We rename that directory or copy it over to something with an extension. Let’s call it .lightning.orig. Then we do some more changes and we perform another backup. We call it .lightning.v1. But we are computer scientists so we start by counting at zero. Even our naming scheme becomes very inconsistent over time. That is not the whole issue. The whole issue is that even if we were able to take backups of our Lightning node the act of restoring it can put some of our funds in danger. Simply because of the mechanism that we use in the Lightning Network to update state. To see that we need to look at what the update mechanism in Lightning does.

How the update mechanism in LN works

In Lightning we use a mechanism that is called a penalty based mechanism in which each of these square boxes here represents a state. Initially we have the initial state where person A has 10 Bitcoin and the other participant in the channel has zero. We transfer 1 Bitcoin over. Suddenly we have two states that are active. Once we have agreed on this new state we poison the old state. If any participant ends up publishing the transaction representing an old state this poison would be activated and the node that misbehaved by publishing the old state would lose all of its funds. We do that a couple of times. We transfer 1 from A to B, then we transfer 4 more from A to B. We then poison the old state. This state is currently active. Then we perform another update. This one is a bit special. We don’t actually transfer money, but these state transitions can also occur because we change the fee that we need to pay onchain. Therefore even non-user triggered actions can result in a state change that ends up agreeing on a new state and poisoning the old state. Finally we take this last state and transfer 1 more Bitcoin from B to A. This final state, which does not have the biohazard symbol, is the only state that can be enacted without incurring a penalty. All of the prior states are poisoned. By poisoned I mean the counterparty always has a matching penalty transaction that could be used to punish me if I were to publish that old transaction. If we were to take a backup anywhere here we might end up losing some information.
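
A toy model of that bookkeeping, just to show why a node restored from an old backup can look like a cheater (nothing to do with the real commitment transaction format):

```python
# Toy model of penalty-based channel state: every superseded state has a
# matching penalty ("poison") held by the counterparty.

class ToyChannel:
    def __init__(self, balance_a, balance_b):
        self.states = [(balance_a, balance_b)]   # index 0 = initial state
        self.revoked = set()                     # indices the counterparty can punish

    def update(self, new_a, new_b):
        # Agreeing on a new state poisons the previous one: the counterparty
        # now holds a penalty transaction for it.
        self.revoked.add(len(self.states) - 1)
        self.states.append((new_a, new_b))

    def publish(self, index):
        if index in self.revoked:
            return "PENALTY: counterparty takes all channel funds"
        return f"settles with balances {self.states[index]}"

ch = ToyChannel(10, 0)
ch.update(9, 1)     # pay 1
ch.update(5, 5)     # pay 4 more
ch.update(6, 4)     # 1 comes back (or a fee update)

print(ch.publish(3))   # latest state: settles normally
print(ch.publish(2))   # a state restored from an old backup: looks like cheating
```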

So what’s the problem?

The problem is exactly that: this is a copy of the previous image. Both A and B agreed on this state being the last state. However, because I took a backup of this state and then had to restore it, to me this appears to be the last valid state and I should be able to send that state onchain and get my 5 Bitcoin. However, that would be cheating from the point of view of my counterparty because they have a matching penalty transaction for this state. If I were to try to enact this state then I would lose my whole stake in this channel. Not only is backing up and restoring a Lightning node rather hard but it is also quite dangerous, because if you end up restoring a state that isn’t the last one you might accidentally end up cheating. Over time we the Lightning developers came up with a range of possible solutions to mitigate this situation or even prevent it altogether. Sadly none of these are complete or give you 100 percent protection but we are getting closer. What I’d like to do is step through the various mechanisms that have been used previously and what we are working towards right now when it comes to securing the data that you have in the channel.

Backup to Google Drive (eclair)

The first one is something that eclair did right at the start when we began rolling out these Lightning nodes. The developers of eclair were getting feedback that people were losing their devices with the Lightning node on them and were unable to recover the funds on those phones. It is quite natural if you have a phone. Phones have a tendency to drop into water or break or just get lost. If you have money on the phone that is doubly sad. What the eclair team did was add a trivial backup mechanism where you take the state of the Lightning node after each change and push that to a Google Drive folder from where they could restore the entire state. I’m not too familiar with how this was done, whether it was done incrementally or in a batch. However this mechanism seems not to be working anymore, simply because people soon figured out that if I start my eclair wallet on phone A and I restore on phone B I can run this node from both phones at the same time, sharing data with each other. Just as you may end up cheating if you restore a node that is not at the latest state, the same thing happens if you have two nodes running against the same dataset, pretending to have the same channels and therefore contradicting each other as they proceed in the Lightning protocol. This may end up looking like a cheat attempt and the node being punished. This first attempt was mainly used for recovery but people abused it a bit to share a single node across multiple devices. So don’t do this.

Static channel backups (lnd)

A better solution is the static channel backups that are implemented in lnd. As you can see on the left side this is basically the structure that a single channel static backup looks like. It has a bunch of information that relates to how the channel was created, who the counterparty is and all of the information that is static in the channel and doesn’t change over time. What this allows you to do is start the lnd node back up again with this backup file and restore what they call a channel shell. Not a true channel but enough information about the channel so that you can reconnect to your peer and ask for the information you need to get your funds back. This is rather neat because it allows you to take a static channel backup and store it somewhere offline. It is probably not small enough that you can write it on a piece of paper and type it back in when you want to restore, but you can store it on a USB stick or a file server somewhere. That alone should allow you to at least get your funds back. It is minimal in that sense. It doesn’t change while you are renegotiating the state of the channel itself. And so for each channel you take one backup and you are safe until you open the next channel. The downside is that it requires you to be able to contact the counterparty node once you attempt to recover, simply because the information that we track in the Lightning protocol cannot be recomputed solely from this structure. You need to reach out to your peer and ask it “What is the current state? Please give me all the information to recognize and to retrieve my funds after closure. Please close the channel.” This reconnects and tells it to close the channel. The downside is of course that relying on the peer to do something for you might get you into a weird situation because you are telling the peer “I lost my data. Can you please help me recover my funds?” The counterparty could withhold that information or it can refuse to close the channel on your behalf, which results in them holding your funds hostage. All of the current implementations implement this correctly and will cooperate with lnd nodes trying to recover their funds. But a malicious actor could of course change the code so that lnd nodes cannot close or recover successfully. While this is not a perfect backup it is an excellent emergency recovery mechanism to claw back as much of your funds as possible.

Synchronous Database Log (c-lightning)

The third mechanism that we have, and this goes more into what c-lightning views as the correct way of doing this, is the ability for plugins that you attach to c-lightning to keep a synchronous database log in the background on whatever medium you want. The reason why we made it into a plugin is because we don’t want to tell you how to manage these backups. We want you to have the ability to incorporate them into your own environment. This tells the plugins about every database transaction that would result in a modification of the state, before we actually commit that transaction. It is a write-ahead log of the database transactions that we track before committing to changes. Depending on your infrastructure you might want to have an append-only log, you can have compressible logs, or you can even have a replica of the original database being managed concurrently with the main database. If you do that then you can fail over immediately if your node dies. What you do is have a replica that tracks changes alongside the main database on a network mount, and if your node dies you can spin up another node, connect it to your replica and continue where you left off. We don’t rely on external help, we don’t need to interact with our peer, which may not even be available anymore. The downside is of course that these are synchronous backups that require us to write to the backup every time we have a state change. This is one mechanism.
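
For illustration, a minimal sketch of such a plugin using the pyln-client library and c-lightning’s db_write hook. Treat the exact payload fields (writes, data_version) and return value as assumptions from memory rather than authoritative API documentation:

```python
#!/usr/bin/env python3
# Minimal write-ahead backup plugin sketch for c-lightning: append every
# pending database transaction to a log file before lightningd commits it.
from pyln.client import Plugin

plugin = Plugin()
BACKUP_LOG = "/mnt/backup/lightningd-db-log.sql"   # ideally on a separate machine or mount

@plugin.hook("db_write")
def on_db_write(plugin, writes, data_version=None, **kwargs):
    # `writes` is the list of SQL statements making up the pending transaction.
    # Persist (and flush) them before telling lightningd to go ahead.
    with open(BACKUP_LOG, "a") as f:
        f.write(f"-- data_version {data_version}\n")
        for statement in writes:
            f.write(statement + ";\n")
        f.flush()
    return {"result": "continue"}

plugin.run()
```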

Database Replication and Failover (c-lightning)

The other mechanism that c-lightning has is database replication and failover. We have abstracted the database interface in such a way that we can take c-lightning and have it talk to a Postgres database for example, whereas the default database is a sqlite3 database. Postgres does require a bit more setup. It also allows you to have synchronous replication of your node and even transparent failover from one database instance to the next one. This is something that enterprise users are very much interested in because they can rely on what they already know about replicating and mirroring and failover on Postgres databases, which is very well known in the wider software engineering world, whereas replicating and scaling a single Lightning node is very niche knowledge. We can tap into the institutional knowledge that is already there, and people feel pretty comfortable with the way that databases can get replicated and secured. I mentioned this is more of an enterprise setup. It also comes with a bit of upfront investment cost to set up all of these nodes, set up the database and have replication set up. But gabridome has written an excellent guide on how to set up Postgres in combination with c-lightning. I can only recommend reading that and seeing if you can get it working as well. The upside is obviously that restore is immediate because we have replicas that have the up to date information. If the node dies we just spin up a new node and connect it to the database. If the database dies we switch over to a new master database and the node doesn’t even learn anything about the database having failed. This is another form of synchronous replication.

The future (eltoo)

Of course this wouldn’t be a talk by me if I weren’t to somehow get eltoo into the picture. Eltoo is a paper that we wrote two years ago now about an alternative update mechanism in which we don’t penalize people anymore for publishing an old state. Instead what we do is override the effects that your misbehavior would have and enact the state that we actually agreed on. What we have here is the setup transaction that creates a shared address between the blue user and the green user. We create a refund transaction that gives the original owner back the funds after a certain timeout expires. The way we perform updates is by having an Update 1 that takes these ten Bitcoin here, ratchets them forward and attaches this Settle 1 in which both parties have 5 Bitcoin each after this timeout expires. Then we update again by ratcheting these funds forward and attaching this settlement with the newer state. This time the green user sends 1 Bitcoin over to the blue user and this update overrides whatever would have happened in Settle 1. Instead of punishing the counterparty for misbehaving, what we do is override the effects that their misbehavior would have had. Let’s say in this case the green user would like to enact this settlement. It sends out Update 1 and now has to wait for this timeout to expire. In the meantime the blue user publishes Update 2, overriding Settle 1 and starting the timer on Settle 2. We no longer punish our counterparty for misbehaving, we override the effect and get to the state that was agreed upon in the end. If I were to back up and then restore at a time when Update 1 and Settle 1 were the last state, all that would happen is that my counterparty would send out Update 2 and correct my mistake instead of hitting me in the face by stealing all my funds. This is something that would allow us to have a much more robust mechanism of backup and restore. I think ultimately this might become the way that we end up running stable nodes sometime in the future. Many people have argued that this loses the penalties. However as far as I can see the penalty mechanism is intrinsically linked with our inability to create backups and restore, because whenever we backup and restore we might end up in a situation where we can be penalized. I think we might want to distance ourselves from this satisfaction of punishing people and go to a more collaborative solution which in this case would be eltoo.
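
To make the ordering rule concrete, here is a toy sketch in Python of the “override instead of punish” idea. These are not real transactions, just the bookkeeping: every update carries a state number, any later update can replace an earlier one on-chain, and only the settlement attached to the highest-numbered published update ever takes effect.

```python
from dataclasses import dataclass

@dataclass
class Update:
    state_num: int      # strictly increasing with every channel update
    settlement: dict    # balances enacted once the timeout expires

def resolve_on_chain(published_updates, timeout_expired):
    # Whoever publishes an old update is simply overridden by a newer one;
    # only the latest published state can ever settle.
    latest = max(published_updates, key=lambda u: u.state_num)
    return latest.settlement if timeout_expired else None

# Restoring from a stale backup just means publishing, say, state 1.
# The counterparty answers with state 2 and the channel settles correctly.
stale = Update(1, {"blue": 5, "green": 5})
current = Update(2, {"blue": 6, "green": 4})
assert resolve_on_chain([stale, current], timeout_expired=True) == {"blue": 6, "green": 4}
```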

How about a unified backup standard?

I have just presented five different backup mechanisms. How about creating a unified standard? Everybody seems to be pulling in a different direction and everybody is experimenting with different ways of backing up and restoring. Wouldn’t it be nice to have a backup that can be created by c-lightning and then later be imported into lnd and restored from there? The answer is: of course. It would be really nice if in the end we had this convergence of different ideas and different implementations where we agree on a unified standard for these kinds of operations. However, currently at least, the internal structure of the data that we track is not in a form that lets us share these backups between implementations. Maybe we will eventually get there but currently that is not the case. I would argue that this experimental phase which we are currently in is good because it allows us to experiment with different trade-offs and come up with better ideas in the long run that are then capable of getting a majority of the ecosystem behind them, and actually make this a better standard overall. This has always been the way that Lightning development has worked. We have always gone off in different directions and then come back together and shared experiences about which of our approaches worked and which didn’t. My hope is that we will end up with a unified standard.

Resources

This is my last slide, which is the resources, starting with the cloud backup by ACINQ and the excellent article on recovering funds from lnd using static channel backups. We also have a couple of backup plugins and documentation on the db_write hook. And of course how you can wire c-lightning to run with a PostgreSQL database. Gabriele’s excellent tutorial on how to set up replication with PostgreSQL so you can have the real enterprise feeling for your node. Read the eltoo paper. I think it is neat but of course I am biased. That is it from me.

Q&A

Questions were collected here on Twitter.

Q - On backups, I think you alluded to some of the trade-offs. There is ease of use, there is trust in terms of requesting certain information from your remote party, there is security, there is cost. Are there any other trade-offs and which of those align towards a technical user, a non-technical user and an enterprise?

A - One very important one that I failed to mention before is that these backups come with some latency. Depending on which backup you choose, you might end up sacrificing some of the throughput of your node simply by having to reach out to a backup plugin or to remote storage, or perform some other operations that you need to do in order to make this safe. The only option that doesn’t incur this cost is the static channel backups because they are done once at the beginning of node setup and every time you open a channel. There is no additional latency cost that you incur at each update. Whereas the full backup solutions, like pushing onto a cloud server or having some plugin, will obviously incur a throughput cost. I think the static channel backups are probably the easiest ones to set up. We are trying to make it as easy as possible for the plugins as well. We do have a plugin repository that is community driven. We do have people working on Dropbox backends where each single update gets written to Dropbox and then we have… working in the background. It should end up being a plugin that you drop into a folder and everything is taken care of from there on. Currently there is some work that you need to invest to get these working. Of course the extreme example is the replicated database backend which really takes some digging to get right. Not for the faint hearted I guess but we are getting there.

Q - I imagine, if either the lnd setup or another implementation comes up with a backup strategy that takes those trade-offs in a different way and is popular, there could be a c-lightning plugin that replicates it?

A - That is how our development works as well. I wanted to point that out in the second to last slide, which is that we do learn from what other teams do. We do bring some of these innovations back to the specification to be discussed. If there is a clear winner when it comes to usability and other trade-offs then there is absolutely no hurdle to having that become part of the specification itself. At which point it becomes available to all implementations.

Q - You alluded to this. What exactly does need to be backed up? Chris Stewart did a very good presentation at the Lightning Conference last year on the six different private keys that you need to think about with Lightning. Some need to be hot, some can be cold. What exactly needs to be backed up in these different scenarios?

A - Obviously the first one is the seed key that you use to generate all of your addresses. That’s pretty much identical to onchain. It is what creates all of these addresses that are used throughout the lifetime of your funds. When they enter or leave a channel they will end up on addresses that are generated from that. That is the central piece of private information that you need to back up. Then we have a variety of places where keys are used that are not directly derived from this seed key, namely whenever you and the peer with which you have a channel open need to generate a shared address. That will not be part of your derivation tree that is rooted in this seed key. Those keys need to be backed up along the way. This includes HTLC secrets. Whenever a payment goes through your channel you had better remember what the key was that this payment was contingent on. The indices of the channels in the database, because that is the index of the derivation path that is going to be used. And revocation secrets, those are important because that is the poison we use to keep each other honest. My absolute favorite is the per-commitment points, which is an ECDSA point that is added to your direct output if I close the channel. Without that point you wouldn’t even recognize that these funds are supposed to go to your node. That is a really weird use of these point tweaks because they are in there to make sure that each state commitment is unique. It just makes it so hard to create an onchain wallet for Lightning. When I stumbled over those and had to implement the wallet I just swore.
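
For a feel of why a plain wallet cannot simply recognize its own outputs, here is a sketch of the per-commitment tweak as I understand it from BOLT 3, written with the Python coincurve library: pubkey = basepoint + SHA256(per_commitment_point || basepoint) * G. Treat it as an illustration of the formula rather than wallet-grade code.

```python
import hashlib
from coincurve import PrivateKey, PublicKey

def derive_output_pubkey(basepoint: PublicKey, per_commitment_point: PublicKey) -> PublicKey:
    # BOLT #3 style derivation: the key that actually appears in the
    # commitment output is the wallet's basepoint tweaked by a value that
    # changes with every state, so a naive address scan will never find it.
    tweak = hashlib.sha256(per_commitment_point.format() + basepoint.format()).digest()
    tweak_point = PrivateKey(tweak).public_key            # SHA256(...) * G
    return PublicKey.combine_keys([basepoint, tweak_point])
```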

Q - What about watchtowers? Is there a role for watchtowers with backups? They are predominantly there so that you don’t have to be online the whole time so they can stop someone from cheating you. Is there a role for them to store some state on your behalf? Do you see the role of a watchtower expanding and different services being provided around a watchtower?

A - Absolutely. The role of a watchtower is definitely not clear cut. I usually tend to think of a watchtower as a third party that I put in charge of reacting when something nefarious happens onchain. But of course this definition can be expanded to also include arbitrary data storage on behalf of a node, maybe in exchange for a small fee. I wouldn’t say there is a clear cut boundary between a watchtower and a backup service. I guess the name implies what the primary use for watchtowers is: watching your channels to react if your counterparty dies or does something nefarious. Whereas backups are there to store your database changes so that you can recover the state later on. If we add a way for nodes to retrieve data from a watchtower it basically becomes a backup server as well. A little known secret is that the punishment transactions that we send over to a watchtower are actually encrypted. The encrypted data could be anything. It could be your backup. If you ever end up needing arbitrary data storage just pay a watchtower to store it for you and hope they give it back to you if you need it.
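
The encrypted-blob idea can be sketched roughly as follows in Python, using hashlib and the cryptography library’s ChaCha20Poly1305. The hint and key derivation here are illustrative assumptions rather than any particular tower protocol: the tower indexes on a short hint, and only the full commitment txid, once it appears on-chain, yields the decryption key.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

NONCE = b"\x00" * 12   # tolerable in this sketch because each key is used once

def make_blob(commitment_txid: bytes, penalty_tx: bytes):
    hint = commitment_txid[:16]                        # what the tower files the blob under
    key = hashlib.sha256(commitment_txid).digest()     # only the full txid unlocks it
    return hint, ChaCha20Poly1305(key).encrypt(NONCE, penalty_tx, None)

def tower_reacts(seen_txid: bytes, blobs: dict):
    hint = seen_txid[:16]
    if hint in blobs:
        key = hashlib.sha256(seen_txid).digest()
        penalty_tx = ChaCha20Poly1305(key).decrypt(NONCE, blobs[hint], None)
        return penalty_tx                              # broadcast this to punish the breach
    return None
```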

Q - Are you planning on implementing watchtowers in c-lightning?

A - Absolutely. For a long time we’ve seen c-lightning as more of a hosted solution where you run your node at home or in some data center and then you remotely connect to it. Our nodes would be online 24/7. But there has been quite a bit of demand for watchtower functionality as well. What I did, and it is not yet released, is add a hook for plugins that pushes the penalty transaction for each of the prior states to a plugin. A hook is a mechanism for us to tell the plugin some piece of information in a synchronous manner so that the plugin can take this information and move it off somewhere else, your watchtower backend for example, before we continue with the processing of the Lightning channel itself. That way we make sure that the watchtower has the information that needs to be stored before we can make progress on the channel itself. By pushing that into a plugin we can enable watchtower providers to create their own connections or protocols to talk to watchtowers and we don’t have to meld that into the Lightning node itself. It is very much on our roadmap now and I think I have three variants of the watchtower hook. It should hopefully be there in the next release.
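
A plugin on the receiving end of that hook could look roughly like this, again with pyln-client. The hook name and payload fields match what was later released as commitment_revocation, but since the hook was unreleased at the time of this talk, treat the name, the fields and the send_to_watchtower helper as assumptions.

```python
#!/usr/bin/env python3
from pyln.client import Plugin

plugin = Plugin()

@plugin.hook("commitment_revocation")
def on_revocation(plugin, commitment_txid, penalty_tx, **kwargs):
    # Ship the penalty transaction to the tower *before* returning, so the
    # channel cannot make progress until the tower has stored it.
    send_to_watchtower(commitment_txid, penalty_tx)
    return {}  # assumption: this hook does not require a specific result

def send_to_watchtower(commitment_txid, penalty_tx):
    # Placeholder for your tower protocol, e.g. the hint/blob scheme
    # sketched earlier.
    ...

plugin.run()
```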

Q - With end users or people with private channels on the edge of the network versus routing nodes in the middle, do you think the backup strategy or backup setup will be different for those two participants on the network?

A - I do think that as the routing nodes inside of the network start to professionalize they will probably start professionalizing their data backend as well. I foresee mostly businesses and routing nodes starting to use the replicated database backends more and more. For end users I do think that the backup plugins provide a very nice trade-off between having the functionality of a backup and not having the big setup or upfront cost of setting up a replicated database and automated failover and all of that stuff. Those are the two rails that we have chosen and they cover these two use cases: professional node operators, and your geeky son that sets up a node for the rest of the family.

Q - If you are the end user on the edge of the network you don’t have to worry so much about going down at the time that you are routing the payment. That is the big challenge here. You need to have that latest state. If there is some bizarre timing that is where it becomes technically difficult.

A - I have recently done a restore of one of my nodes that actually does quite a bit of probing in the network. I think it had a week’s backlog of changes and it recovered in 7 or 8 minutes. That might be too much for a professional operator but for me sitting at home not doing my grocery shopping or not buying anything on Amazon, 7 minutes of downtime while I restore my node is perfectly fine. Even then we do have trade-offs. We can have backup plugins that replicate the database in the background. Then it takes a couple of seconds to failover.

Q - You mentioned eltoo. One of the questions was when SIGHASH_NOINPUT? Obviously you don’t know when but is there any chance of getting it into the next soft fork with Taproot? Or has that ship sailed? If that ship has sailed what is viable for getting it into Bitcoin if we ever get it in?

A - It would be awesome if we could get SIGHASH_NOINPUT or ANYPREVOUT, which is AJ (Town)’s proposal, into the soft fork that is rolling out Taproot and Schnorr. The problem is that all of the reviewers currently are focused mainly on Taproot. It is really hard to grab people and get them to discuss yet another proposal that we might want to have in the same soft fork. While I think it is still possible, I’m pretty sure that it will be a secondary, maybe lighter soft fork at a later point. Maybe bundled with some more lightweight proposals that require fewer changes to the structure itself so it is easier to review as well, and easier to see what parts of the codebase are being touched and what the side effects are. Because Taproot and Schnorr are a big change all by themselves. That being said, AJ has taken SIGHASH_NOINPUT and with his ANYPREVOUT he has formulated all of the details that need to change in SIGHASH_NOINPUT for it to nicely mesh with Schnorr and Taproot. With his proposal we could end up with a very nice bundle of changes that can be deployed at a later point in time independently of Taproot and Schnorr. I remain optimistic that we can get it in but I can’t promise anything.

Q - Do you have any thoughts on whether planning for the next soft fork can really happen seriously until we have what is likely to be, no one knows, the Schnorr, Taproot soft fork first? Can there be a parallel process or do we really need to get all of those Core reviewers and brains on the one soft fork before we even get serious about any other soft fork?

A - I think it is probably best if we keep the distractions as low as possible while we do an important update like Taproot and Schnorr. I personally don’t feel comfortable creating more noise by trying to push hard on SIGHASH_NOINPUT. I think Schnorr and Taproot will get us many, many nice features and I think we shouldn’t hold up the process by trying to push in more features while we are at it. Everybody wants their favorite feature in and I don’t see this particular feature being so life changing that it should jump the queue. I would like to have it in because I think eltoo is a really nice proposal. Self congratulating again. Hopefully we will get it at some point but I don’t think we need to stop the entire machinery just for this one proposal. Who knows? We might come up with better solutions. I am under no illusion that this proposal is perfect. The more time we can spend on improving and analyzing it, the better the outcome in the end.

Q - Have you been following the conversation on activation? Do you have any thoughts? Is there any way to speed up the process or is this going to be a long drawn out discussion?

A - Activation is a topic that many people like to talk about and it is a very crowded space. I have a tendency to stay out of these discussions where you already have too many cooks in a kitchen. I think whatever the activation mechanism is it is going to be perfectly fine if it works. I don’t really have a preference.

Q - For the researchers and developers in the space where should one put their efforts for the maximum positive impact on the Lightning Network? Obviously you do a lot of stuff. You are a researcher, you contribute to papers, you contribute to c-lightning, you do a lot of work on the protocol and the BOLT specifications. How do you prioritize your time? Do you follow the Pieter Wuille approach of working on whatever is most fun or do you try to prioritize the things that you think are the most important?

A - Definitely choose topics that interest you the most. I did my PhD on this and it was amazing because you could jump around the whole space. It was a big green field where you could start building stuff. It was all novel and new. I get the feeling that Lightning is the same way. All of a sudden you have this playground which you can explore. Don’t limit yourself to something that might be profitable or the cool thing. Do whatever interests you. For me personally I enjoy breaking stuff. I will always try to find out things that can be broken and see if I can get away with it within limits. Lightning is cool to explore and see what you can do. If you want to increase security through watchtowers that is a good thing. If you want to increase privacy by figuring out ways to break privacy that is another good thing. Whatever you like the most I guess.

Q - Where is the most impact you can make? We only have limited time and resources so how do you think one person can have the biggest impact?

A - I do think that there are quite a few statements that need to be proven or disproven. In particular one thing that I like doing is attacking the privacy of the network. Of course we now have completely different trade-offs when it comes to privacy from onchain to offchain. We don’t leave eternal traces of our actions like we do on a blockchain. We don’t leave those traces on Lightning, but we do talk to peers. What might these peers infer from our actions? What information could they extract from that? There is definitely the privacy aspect that is worth looking at. Then there are network formation games. If you have ever done game theory, finding a good way to create networks that are resilient but also efficient is a huge question. How can we create a network where each node individually takes some decision and we don’t end up with one big node in the middle becoming the lynchpin, the single point of failure where if that one node goes down everybody starts crying. That is definitely an open research question. Also more fundamental stuff like how can we improve the protocol itself to be more efficient? How can we gossip better? How can we create an update mechanism that is more efficient, that is quicker, that needs fewer round trips? We do have one engineer, Rusty, who is in Australia and he is always the guy who will try to get that half round trip shaved off of your protocol. That is his speciality. He has a focus on that. I wouldn’t say that there is an official list of priorities for the Lightning Network and associated research. It is you making your own priorities. People tend to congregate on certain tracks.

Q - What are your thoughts on the privacy papers? I believe you were the co-author of at least one. Could you give a high level summary of those conclusions and thoughts?

A - I started collecting data about the Lightning Network the moment we started. I do have a backlog of data on the evolution of the Lightning Network. I have been approached by researchers over time who want to analyze this data. How did the network grow? What is its structure? What are the success probabilities? That is how I usually end up in these research papers. So far all of the analyses are very much on point. Purely the analysis of centrality in the network and the upsides and downsides. The efficiency that we get through more centralized networks. The resilience we get from distributing the network more. All of these are pretty nicely laid out in these papers. In some papers there is some attempt to extrapolate from that information. That is usually not something that I encourage because these extrapolations are based on the bootstrapping phase of the Lightning Network. It is not clear that these patterns and behaviors will continue to exist going forward. It is mostly these extrapolations that people jump on when they say “The Lightning Network is getting increasingly more centralized and will continue to become so.” That is something that I don’t like too much. The other one is that people usually fail to see that the decentralization in the Lightning Network is fundamentally different from the decentralization in the Bitcoin network. In the Bitcoin network we have a broadcast medium where everybody exchanges transactions, and by glancing at when I learn about which transaction, I can infer who the original sender is and so on. In Lightning the decentralization of the network is not so important because we do have quite robust mechanisms of preserving privacy even though we are now involving others in our transactions. We do have onion routing, we do have timing countermeasures, we do have mechanisms to add shadow routes, which are appendices to the actual route pretending we send further than the destination. We fuzz the amounts so that the amounts are never round. There is never an exact 5 dollar amount in Bitcoin being transferred over the Lightning Network. We have recently added multipart payments where payments of a certain size are split. Unless you can correlate all of the parts you will not even learn the exact amount being transferred. All of these things are there to make the network more privacy preserving. Is it perfect? Of course not. But in order for us to improve the situation we have to learn about what is working and what is not working. That is my motivation behind all of these research papers. To see whether our mitigations have an effect. If yes, how much more can we improve them? Or should we just drop a mitigation altogether because it might have a cost associated with it. While I think we are doing good I think we could do better. That is why we do this research. We do need to talk publicly about the trade-offs of these systems because if we promise a perfect system to everybody that is a promise that we have already broken. Being upfront with the upsides but also with the downsides of a system is important I think.

Q - Going back to the backup discussion where we were talking about hobbyists and enterprises. Do you think it is important that there is a route for a hobbyist to set up a routing node? And that there are a lot of hobbyists running routing nodes just so that people don’t have to go through the enterprises. Or maybe we just need a lot more enterprises. Maybe both?

A - Does a hobbyist become an enterprise once they professionalize? There must always be the option for users to professionalize and become more proficient and more professional in their operations because that is something that we’ve gotten from Bitcoin. It is not that everybody must run a node, it is not that we suddenly have a new currency. But now suddenly everybody has the option of taking on their own responsibility and becoming their own custodian and not having to rely on other parties. We shouldn’t shame people into doing that but the option must be there for those interested. I think there is a wide spectrum of options that we can offer, from custodial wallets up to fully self sovereign nodes where you run the full software stack at home and connect to your own realm in Bitcoin. Of course this is a spectrum. Depending on your interest you will land somewhere in there. We would like to have more people on the educated and knowledgeable side of things where they run as much as possible, but if you are somebody who just wants to accept a payment without having to read for months on how Bitcoin and Lightning work and have to understand everything, I think there should be an option for that as well. The important part is that it must be an option. We shouldn’t be forcing people into one direction or another.

Q - Is real time database replication already in? You just merged your plugin some weeks ago allowing back end customization. Can we get comments on real time replicas made with Postgres?

A - The replication of a Postgres database is something that we as c-lightning do not include in our code. It is just a backend that you write to. It is up to the operator to set up a replicated set of Postgres servers and then just point c-lightning towards it. That is definitely in there. I think it has been there since 0.7.2. That is almost a year now. All you actually need to do is add --wallet=postgres and then the username, password and URL of where your database node lives. It will then start talking to Postgres instead of a sqlite database on your local machine. Again Gabriele Domenichini has an excellent tutorial on that which I have linked to in the presentation.
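
For illustration, launching the node against a Postgres backend looks roughly like the sketch below. The DSN syntax follows my understanding of the --wallet option, and the credentials and host are of course placeholders; check the c-lightning documentation and Gabriele’s tutorial for the authoritative details.

```python
import subprocess

# Placeholder credentials and host; the replicated Postgres cluster itself
# is set up outside of c-lightning.
dsn = "postgres://cln:s3cret@db.internal:5432/lightning"

subprocess.run([
    "lightningd",
    "--network=bitcoin",
    f"--wallet={dsn}",   # everything else about the node stays unchanged
])
```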

Q - What do you think about custom channels or custom HTLC types? Is that possible and feasible? Would integration of Miniscript allow for such customization? Do you see Miniscript playing a role?

A - I don’t see where Miniscript comes in. It might be useful when it comes to having a fully flexible implementation where we can have external applications deciding on what kind of outputs we add. One of the proposals for example is that once we have eltoo, we can have multiparty channels with any number of participants. The only thing that these multiparty channels decide on is whether to add or remove or adjust the amounts on individual outputs. The way we describe those outputs could be in the form of Miniscript. That way each participant in this ensemble of eltoo participants is deciding on whether or not to add an output to the state of the channel. They wouldn’t need to know the exact application that is sitting behind it in order to decide on whether an output makes sense or not because they would get a Miniscript descriptor. Besides that, if we go back to the Lightning Network it absolutely makes sense to have custom protocols between nodes. These can include custom channels or custom assets being transferred on those channels. Or even different constructions of HTLCs. The only thing that is important is that the two nodes that decide to use a certain custom protocol agree on what this custom protocol is and implement it correctly. The second thing is that if we want to maintain the cohesion of the Lightning Network and the ability to transfer funds from point A to point B we need to have an HTLC construction that is compatible with the preimage trick that we are currently using, or the point contingency that PTLCs would bring. That means if I receive an incoming HTLC from the left I can use whatever protocol I want to forward it to the person on my right if that person knows the protocol. But we need to switch back to normal HTLCs once we’ve left this channel. We can have mixes of different forwarding mechanisms and different update mechanisms and custom channel constructions as long as from the outside it all looks compatible. That’s also one thing that I pointed out in the eltoo paper: eltoo is a drop-in replacement for the update mechanism in Lightning. Since we don’t change the HTLCs I could receive an HTLC over Lightning penalty from Jeff and I could forward it as an HTLC over eltoo to you. We can exchange individual parts as long as on the multihop part we agree on certain standards that are interoperable.

Q - If we have a payment channel open between me and you we can do anything. It is completely down to what we want.

A - We could even be pretending to transfer. If we trust each other and we have the same operator, we can settle outside of the network. Jeff could send me an HTLC, I could tell you using an HTTP request that it is ok and to forward this payment to wherever it needs to go. We will settle with a beer later. Even these constructions are possible, where we don’t even have to have a channel between us to forward a payment if we trust each other. Or we could change the transport mechanism and transfer the Lightning packets over SMTP or over ham radio like some people have done. We can take parts of this stack of protocols and replace them with other parts that resemble or have the same functionality but have different trade-offs.

Q - If me and you have a channel in the middle of the route from A to B there are some restrictions. We have to be BOLT compatible if we are going to be routing onion routed payments?

A - There is one restriction that we will only accept channel announcements if they correspond to an outpoint on the blockchain. This is done as an anti-spam measure. We would have to create something that looks like a Lightning channel if we were to pretend there is a channel in the end. Other than that you can do whatever.

Q - You don’t think there is a role for Miniscript in terms of custom scripts or making changes to those standardized Lightning scripts that most of us will be using?

A - There might be a point to it. Miniscript is a mechanism where we can express some output that we haven’t agreed upon ahead of time. I could tell you “Please add this output here” and I wouldn’t have to tell you before we start a channel how it needs to look. I could just send you a Miniscript. Miniscript is a tool that allows us to talk about scripts. Currently in the Lightning protocol all of the scripts are defined in the specification itself. For us there is currently no need to talk about different structures of outputs. There might be a use where we can add this flexibility and decide on the fly what a certain output should look like. There has never been this need to talk about outputs at a meta level because we already know what they look like.

Q - So instead of Miniscript and Script perhaps we are using scriptless scripts and PTLCs. What are the biggest obstacles and pain points to implementing protocol changes in a Lightning node? If we wanted stuckless payments or PTLCs?

A - It varies a lot. Depending on where this change is being made it might require changes to just a single line of code, which is always a nice thing because you get a changelog entry for ten seconds of work. Others require reworking the entire state machine of a Lightning channel. Those often require months of work if not longer. For PTLCs it would not be too hard. It just changes the way we grab the preimage on the downstream HTLC and hand it over to the upstream HTLC. There would be an additional computational step in there: instead of taking a preimage here and applying it there, you take a preimage or signature here, modify it slightly, and that gives you whatever needs to be plugged in on the other side. When it comes to deeper changes to the state machine we have the anchor outputs proposal currently, which would require us to rework quite a lot of our onchain transaction handling. We try to be very slow and deliberate about which changes of this caliber we add to the specification because they usually bring a lot of work with them. Of course the extreme case of all of this is if we were to add eltoo, that would be reworking the entire state machine of the entire protocol. That is some major work. Eltoo has its downsides too.
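
The “modify it slightly” step can be sketched with plain modular arithmetic standing in for the elliptic curve points. This is a toy illustration of the per-hop tweak idea behind PTLCs, not the actual construction, and in reality the blinding values would come from the onion.

```python
# secp256k1 group order (well known constant); all secrets live modulo N.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def settle_upstream(downstream_secret: int, per_hop_blinding: int) -> int:
    # An HTLC hop passes the same preimage back unchanged; a PTLC hop
    # tweaks the revealed secret by its own blinding value before handing
    # it upstream, so every hop sees a different, uncorrelatable secret.
    return (downstream_secret + per_hop_blinding) % N
```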

Q - Nadav (Kohen) has been working on PTLCs. I think he implemented with Jonas Nick and a few others PTLCs with ECDSA. Do you think there should be a lot more work on things like channel factories before eltoo because eltoo is probably going to be a while?

A - I am a bit hesitant when it comes to channel factories because depending on what day it is they are either brilliant or they are just stupid. Being one of the authors of that paper I don’t know. The main problem with channel factories is that we require multiparty channels first. What channel factories do is take some shared funds that are managed collaboratively between a number of participants and move part of them into a separate subchannel in that construction. That has the advantage that the entire group no longer needs to sign off on changes; just the two of us need to agree on what happens to the funds in the subchannel, and it is way quicker to collect two signatures than fifteen. That is the main upside. The downside of course is that first we need to have this group of fifteen. The way we implement this group of fifteen, the multiparty channel, needs to either be a duplex micropayment channel, which is a very old paper of mine that never really took off because its blockchain footprint is rather large, or we use eltoo which allows us to set up these very lightweight 15-of-15 or 60-of-60 channels. Then we can add channel factories on top for efficiency. The reason why I am saying that depending on the day channel factories sound weird is that we already have an offchain construction where we can immediately sign off on changes without having to have yet another level of indirection. Then there is the efficiency gain, still undecided. I don’t know.

Q - What can we do to encourage better modularity? Is this important in an approaching Taproot world?

A - I think the modularity of the protocol and the modularity of the implementations pretty much go hand in hand. If the specification has very nice modular boundaries where you have separation of concerns, where one thing manages updates of state, one thing manages how we communicate with the blockchain and one thing manages how we do multihop security, that automatically leads to a structure which is very modular. The issue that we currently have is that the Lightning penalty mechanism, namely the fact that whatever output we create in our state must be penalizable, makes it so that this update mechanism leaks into the rest of the protocol stack. I showed before how we punish the commitment transaction if I were ever to publish an old commitment. But if we had an HTLC attached to that, this HTLC too would have to have the facility for me to punish you if you published this old state with this HTLC that had been resolved correctly or incorrectly or timed out. It is really hard in the penalty mechanism to have a clear cut separation between the update mechanism and the multihop mechanism and whatever else we build on top of the update mechanism. It leaks into each other. That is something that I really like about eltoo. We have this clear separation of this is the update mechanism and this is the multihop mechanism and there is no interference between the two of them. I think by clearing up the protocol stack we will end up with more modular implementations. Of course at c-lightning we try to expose as much as possible from the internals to plugins so that plugins are first class citizens in the Lightning nodes themselves. They have the same power as most of our pre-shipped tools have. One little known fact is that the pay command which is used to pay a BOLT11 invoice is also implemented as a plugin. The plugin takes care of decoding the invoice, initiating a payment, retrying if a payment fails, splitting a payment if it is too large, adding a shadow route, adding fuzzing and all of this. It is all implemented in a plugin and the bare bones implementation of c-lightning is very light. It doesn’t come with a lot of bells and whistles but we make it so that you have the power of customizing it and so on. There we do try to keep a modular aspect to c-lightning despite the protocol not being a perfectly modular system itself.

Q - There was a mailing list post from Joost (Jager). What are the issues with upfront payments? There was some discussion about it then it seems to have stopped. Why haven’t we seen more development in that direction yet?

A - Upfront payments is a proposal that came up when we first started probing the network. Probing involves sending a payment that can never terminate correctly. By looking at the error code we receive back we learn about the network. It is free because the payments never actually terminate. That brought up the question: aren’t we using somebody else’s resources by creating HTLCs with their funds as well but not paying them for it? The idea came up of having upfront payments, which means that if I try to route a payment I will definitely leave a fee even if that payment fails. That is neat but the balance between working and not working is hard to get right. The main issue is that if we pay upfront for them receiving a payment rather than forwarding a payment then they may be happy to take the upfront fee and just fail without any stake in the system. If I were to receive an incoming HTLC from Jeff and I need to forward it to you Michael, and Jeff is paying me 10 millisatoshis for the privilege of talking to me, I might not actually take my half a Bitcoin and lock it up in an HTLC to you. I might be happy taking those 10 millisatoshis and say “I’m ok with this. You try another route.” It is an issue of incentivizing good behavior versus incentivizing abusing the system to maximize your outcome. A mix of upfront payments and fees contingent on the success of the actual payment is probably the right way. We need to discuss it a bit more and people’s time is tight when it comes to these proposals. There hasn’t been too much movement I guess.

Q - I’ve been reading a bit about Simplicity which your colleagues Adam Back and Russell O’Connor have been working on at Blockstream. They talk about being able to do new sighash flags, SIGHASH_NOINPUT, ANYPREVOUT without a soft fork. In a world where we had Simplicity perhaps the arguments against NOINPUT or ANYPREVOUT, namely that they are dangerous for users, no longer apply if Simplicity was in Bitcoin and people could use it anyway. Any thoughts on that?

A - That is an interesting question. How can I discuss this in public without getting fired? I do think Simplicity is an awesome proposal. It is something that I would love to have because so far, during my PhD and during my work at Blockstream, the number one issue that we had was that stuff we wanted to do was blocked by the necessary features not being available in Bitcoin itself. It can be frustrating at times to come up with a really neat solution and not be able to enact it. As far as the criticism of SIGHASH_NOINPUT and ANYPREVOUT goes, we shouldn’t take it lightly, and I do see where people bring up good points about there being some uncertainty and some insecurity when it comes to double spending and securing funds. How do we clamp down this proposal as much as possible so people can’t inadvertently abuse it? But I do think that with all of the existing systems that we already have in Bitcoin we are currently trying to save a sandcastle while the dyke is breaking behind us. It is a disproportionate amount of caution when we already have some really dangerous tools in the Bitcoin protocol itself. First and foremost, who invented SIGHASH_NONE? You can have a signature that does not cover anything but you can still spend funds with it. While I do take the criticism seriously I don’t think we need to spend too much time on that. Indeed if we get Simplicity a lot more flexibility could be added, but of course with great power comes great responsibility. We need to make sure that people who want to use those features know the trade-offs really well and don’t put user funds at risk. That has always been something that we’ve pointed towards. We need to have tech savvy people doing these custom protocols. Otherwise you should stick with what is tested and proven.

Q - And obviously Simplicity is still far off and won’t be in Bitcoin anytime soon.

A - We recently had a Simplicity based transaction on Elements. We do have a network where we test these experimental features to showcase that these are possible and what possible implementations could look like. That is our testing ground. Russell has used his Simplicity implementation on some of these transactions. We have some cool stuff in there like quines.

Media: https://www.youtube.com/watch?v=kGQF3wtzr04

Topic: Back the f*** up

Location: Potzblitz (online)

Intro (Jeff Gallas)

We’re live. Welcome to the fourth episode of Potzblitz, the weekly Lightning talk. Today we have a very special guest, Christian Decker of Blockstream. They also call him Dr Bitcoin. Our co-host today is Michael Folkson from the London BitDevs meetup. He is doing a bunch of Socratic Seminars and has deep technical knowledge. He will be the perfect person to speak to Christian and has already prepared a bunch of questions. We announced this on Twitter a couple of days ago. He has been collecting questions. If you do have questions during the stream put them in the YouTube comments or ask them on Twitter using the hashtag #potzblitz. We also have a Mattermost channel at mm.fulmo.org, the channel is called Potzblitz. You can find Christian there, he will be checking it out later. As always, once we are done streaming you can join the Mattermost channel and the Jitsi room and hang out and ask some more questions. I expect that we will not be able to answer all of your questions today. Join us in the Jitsi room afterwards. We are using Jitsi to stream this. It is an open source video conferencing tool. It usually works quite well. Sometimes it doesn’t but most of the time it does. If there are any glitches please forgive us. It is permissionless and doesn’t collect your data. This is a pretty good option. If you like this give a thumbs up and subscribe to the channel so we can do this another time.

Back the f*** up (Christian Decker)

Thank you so much for having me. I hope people will excuse the swear word in the title. It fits in quite nicely with my feelings about backups, or the lack thereof, in Lightning. The title today is back the f*** up: how to back up and secure your Lightning node in a way that you cannot end up losing money by operating your node. Like Jeff said I am Christian Decker, I work for Blockstream, hence the small logo up there. I work mainly on the c-lightning implementation of the Lightning protocol, and I also help out a bit on the protocol itself. You might notice that later on because I tend to emphasize what c-lightning does correctly. More on that later.

Why Backup?

First of all, why do we want to back up in the first place? I think everybody will be shouting right now that we are actually handling money, so the rationale for having a backup of what is handling our money is pretty fundamental. The way to do backups in onchain systems, like your Ledger wallets or your Bitcoin Core nodes or systems where you perform onchain payments, is basically to just write down 24 words. If you ever destroy or lose your device you can recover from those 24 words, which seed a random number generator that is used to generate addresses. This is a common thing in the onchain world where you have your 24 words, you write them down on paper, you send half of them to your parents or you engrave them on metal. All in the name of making this as disaster resilient as possible and ensuring that you can recover. That is quite an easy option. We can do that in Lightning as well. However it turns out backups in Lightning are quite a bit harder. They can even be dangerous. To see why they are dangerous let’s quickly go through how not to back up a Lightning node.

How not to backup

If you are anything like me you probably end up with a couple of backups like this, where we have the original .lightning folder. Then at some point we decided we might want to take a backup of that. We rename that directory or copy it over to something with an extension. Let’s call it .lightning.orig. Then we do some more changes and we perform another backup. We call it .lightning.v1. But we are computer scientists so we start counting at zero. Even our naming scheme becomes very inconsistent over time. That is not the real issue though. The real issue is that even if we were able to take backups of our Lightning node, the act of restoring one can put some of our funds in danger, simply because of the mechanism that we use in the Lightning Network to update state.

How the update mechanism in LN works

In Lightning we use what is called a penalty based mechanism, in which each of these square boxes here represents a state. Initially we have the state where person A has 10 Bitcoin and the other participant in the channel has zero. We transfer 1 Bitcoin over. Suddenly we have two states that are active. Once we have agreed on this new state we poison the old state. If any participant ends up publishing the transaction representing that old state this poison would be activated and the node that misbehaved by publishing the old state would lose all of its funds. We do that a couple of times. Transferring 1 from A to B, then we transfer 4 more from A to B. We then poison the old state. This state is currently active. Then we perform another update. This one is a bit special. We don’t actually transfer money, but these state transitions can also occur because we change the fee that we need to pay onchain. Therefore even non-user triggered actions can result in a state change that ends up agreeing on a new state and poisoning the old state. Finally we take this last state and transfer one more Bitcoin from B to A. This final state, which does not have the biohazard symbol, is the only state that can be enacted without incurring a penalty. All of the prior states are poisoned. By poisoned I mean the counterparty always has a matching penalty transaction that could be used to punish me if I were to publish that old transaction. If we were to take a backup anywhere here we might end up losing some information.
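
As a toy illustration of the bookkeeping that makes stale backups dangerous, here is a small Python sketch of the penalty rule just described. These are not real transactions, only the rule that every superseded state can be punished.

```python
revoked = set()   # states the counterparty can punish ("poisoned")
latest = None     # the only state that settles cleanly

def agree_on_new_state(state_id):
    global latest
    if latest is not None:
        revoked.add(latest)     # the old state is poisoned once we move on
    latest = state_id

def publish(state_id):
    if state_id in revoked:
        return "penalty: counterparty claims all channel funds"
    return "settles normally"

for state in ["state-0", "state-1", "state-2"]:
    agree_on_new_state(state)

# A node restored from a backup taken at state-1 believes state-1 is still
# current, publishes it, and looks exactly like a cheater:
assert publish("state-1") == "penalty: counterparty claims all channel funds"
assert publish("state-2") == "settles normally"
```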

So what’s the problem?

The problem is exactly that. This is a copy of the previous image. Both A and B agreed on this state being the last state. However, because I took a backup at this state and then had to restore it, to me this appears to be the last valid state and I should be able to send that state onchain and get my 5 Bitcoin. However, that would be cheating from the point of view of my counterparty because they have a matching penalty transaction for this state. If I were to try to enact this state then I would lose all of my funds in this channel. Not only is backing up and restoring a Lightning node rather hard, it is also quite dangerous, because if you end up restoring a state that isn’t the last one you might accidentally end up cheating. Over time we, the Lightning developers, came up with a range of possible solutions to mitigate this situation or even prevent it altogether. Sadly none of these is complete or gives you 100 percent protection but we are getting closer. What I’d like to do is step through the various mechanisms that have been used previously, and what we are working towards right now when it comes to securing the data that you have in the channel.

Backup to Google Drive (eclair)

The first one is something that eclair did right at the start when we started rolling out these Lightning nodes. The developers of eclair were getting some feedback that people were losing their devices with the Lightning node on them. They were unable to recover the funds on those phones. It is quite natural if you have a phone. Phones have a tendency to drop into water or break or just get lost. If you have money on the phone that is doubly sad. What the eclair team did was add a trivial backup mechanism where you take the state of the Lightning node after each change and push that to a Google Drive folder from where they could restore the entire state. I’m not too familiar with how this was done, whether it was done incrementally or in a batch. However this mechanism seems not to be working anymore, simply because people soon figured out that if I start my eclair wallet on Phone A and I restore on Phone B, I can run this node from both phones at the same time. They share data with each other. Just like when you restore a node to a state that is not the latest one and end up cheating, the same thing happens if you have two nodes running against the same dataset, pretending to have the same channels and therefore contradicting each other while they proceed in the Lightning protocol. This may end up looking like a cheat attempt and the node being punished. This first attempt was mainly used for recovery but people abused it a bit to share a single node across multiple devices. So don’t do this.

Static channel backups (lnd)

A better solution are the static channel backups that are implemented in lnd. As you can see on the left side this is basically the structure that a single channel static backup looks like. It has a bunch of information that relates to how the channel was created, who the counterparty is and all of the information that is static in the channel and doesn’t change over time. What this allows you to do is start the lnd node back up again with this backup file and restore what they call a channel shell. Not a true channel but enough information about the channel so that you can reconnect to your peer and ask for information for you to get your funds back. This is rather neat because it allows you to take a static channel backup, store it somewhere offline, it is probably not small enough so that you can write it on a piece of paper and write it back in when you want to restore it. But you can store it on a USB stick or a file server somewhere. That alone should allow you to at least get your funds back. It is minimal in that sense. It doesn’t change while you are renegotiating the state of the channel itself. And so for each channel you take one backup and you are safe until you open the next channel. The downside is that it requires you to be able to contact the counterparty node once you attempt to recover simply because this information that we track in the Lightning protocol cannot be recomputed solely from this structure. But you need to reach out to you peer and ask it “What is the current state? Please give me all the information to recognize and to retrieve my funds after closure. Please close the channel.” This reconnects and tells it to close the channel. The downside is of course that on relying on the peer to do something for you might be get you into a weird situation because you are telling the peer “I lost my data. Can you please help me recover my funds?” The counterparty could not give you that information or it can refuse to close the channel on your behalf that results in them holding your funds hostage. All of the current implementations implement this correctly and will cooperate with lnd to recover their funds. But a malicious actor could of course change the code so that lnd nodes cannot close successfully or recover successfully. While this is not a perfect backup it is an excellent emergency recovery mechanism to claw back as much of your initial funds as possible.

Synchronous Database Log (c-lightning)

The third mechanism that we have, this goes more into what c-lightning is viewing as the correct version of doing this, is the ability for plugins that you can attach to c-lightning to keep a synchronous database log in the background on whatever medium you want to do. The reason why we made it into a plugin is because we don’t want to tell you how to manage these backups. We want you to have the ability of incorporating them into your own environment. This just tells the plugins every database transaction that would result in a modification of the state before we actually commit to this transaction. It is a write ahead log of the database transactions that we track before committing too changes. Depending on your infrastructure you might want to have an append-only log. You can have compressible logs or you can even a replica of the original database being managed concurrently to the main database. If you do that then you can failover immediately if your node dies. What you do is have a replica that tracks changes along the main database on a network mount and if your node dies you can spin up another node and connect to your replica. You can continue where you left off. We don’t rely on external help, we don’t need to interact with our peer which may not even be available even more. But the downside is of course that these are synchronous backups that require us to write to the backup every time that we have a state change. This is one mechanism.

Database Replication and Failover (c-lightning)

The other mechanism that c-lightning is the database replication and failover. We have abstracted the database interface in such a way that we can take c-lightning and have it talk to a Postgres database for example. Whereas the default database is a sqlite3 database. Postgres does require a bit more set up. It allows you to also have a synchronous replication of your node and even transparent failover from one database instance to the next one. This is something that enterprise users are very much interested in because they can rely on what they already know about replicating and mirroring and failover on Postgres databases which is very well known in the wider software engineering world. Whereas replicating and scaling a single Lightning node that is very niche knowledge. We can reconnect to what institutional knowledge there already is and people feel pretty comfortable with the way that databases can get replicated and secured. I mentioned this is more of an enterprise set up. It also comes with a bit of upfront investment cost to set up all of these nodes, set up the database and have replication set up. But gabridome has written an excellent guide on how to set up Postgres in combination with c-lightning. I can only recommend reading that and seeing if you can get it working as well. The upside is obviously that restore is immediate because we have replicas that have the up to date information. If the node dies we just spin up a new node and if it connect to the database. If the database dies we switch over to a new master database and the node doesn’t even learn anything about the database having failed. This is another form of synchronous replication.

The future (eltoo)

Of course this wouldn’t be a talk by me if I weren’t to somehow get eltoo into the picture. Eltoo is a paper that we wrote two years ago now about an alternative update mechanism in which we don’t penalize people anymore for publishing an old state. Instead what we do is override the effects that your misbehavior would have and enact the state that we actually had. What we have here is the set up transaction that creates a shared address between the blue user and the green user. We create a refund transaction that gives the original owner back the funds after a certain timeout expires. The way we perform updates is by having an Update 1 that takes these ten Bitcoin here, ratchets them forward and attaches this Settle 1 in which both parties have 5 Bitcoin each after this timeout expires. Then we update again by ratcheting these funds forward and attach this settlement with the newer state. This time the green user sends 1 Bitcoin over to the blue user and this update overrides whatever would have happened in Settle 1. Instead of punishing the counterparty for misbehaving what we do is override the effects that they would have had. Let’s say in this case the green user would like to enact this settlement. It sends out Update 1 and now has to wait for this timeout to expire. In the meantime the blue user publishes Update 2 overriding this Settle 1 and starting the timer on Settle 2. We no longer punish our counterparty for misbehaving but we override the effect and get to a state that was agreed upon in the end. If I were to backup and restore, restore at the time that Update 1 and Settle 1 were the last state then all that would happen is that my counterparty would send out Update 2 and correct my mistake instead of hitting me in my face by stealing all my funds. This is something that would allow us to have a much more robust mechanism of backup and restore. I think ultimately this might become the way that we end up doing stable nodes sometime in the future. Many people have argued that this loses the penalties. However as far as I can see the penalty mechanism is intrinsically linked with our inability to create backups and restore because whenever we backup and restore we might end up in a situation where we can be penalized. I think we might want to distance ourselves from this satisfaction of punishing people and go to more a collaborative solution which in this case would be eltoo.

How about a unified backup standard?

I have just given five different backup standards. How about creating a unified one? Everybody seems to be pulling in a different direction and everybody is experimenting with different ways of backing up and restoring. Wouldn’t it be nice to have a backup that can be created by c-lightning and then later be imported into lnd and restored from? The answer is of course. That would be really nice if in the end we have this convergence of different ideas and different implementations where we agree on a unified standard for these kinds of operations. However, currently at least, the internal structure of the data that we track is not in a form that lets us share these backups between implementations. Maybe we will eventually get there but currently that is not the case. I would argue that this experimental phase which we are currently in is good because it allows us to experiment with different trade-offs and come up with better ideas in the long run that are then capable of getting a majority of the ecosystem behind them, and actually make this a better standard overall, because this has always been the way that Lightning development has worked. We have always gone off into different directions and then emerged back and shared experiences from the way that some of our approaches worked and others didn’t. My hope is that we will end up with a unified standard.

Resources

This is my last slide which is the resources, starting with the cloud backup by ACINQ, the excellent article on recovering funds from lnd using static channel backups. We also have a couple of backup plugins and documentation on the db_write hook. And of course how you can wire c-lightning to run with a PostgreSQL database. Gabriele’s excellent tutorial on how to set up replication with PostgreSQL so you can have the real enterprise feeling for your node. Read the eltoo paper. I think it is neat but of course I am biased. That is it from me.

Q&A

Questions were collected here on Twitter.

Q - On backups, I think you alluded to some of the trade-offs. There is ease of use, there is trust in terms of requesting certain information from your remote party, there is security, there is cost. Are there any other trade-offs and which of those align towards a technical user, a non-technical user and an enterprise?

A - One very important one that I failed to mention before is that these backups come with some latency. Depending on which backup you choose you might end up sacrificing some of the throughput of your node simply by having to reach out to a backup plugin or to a remote storage or some other operations that you need to do in order to make this safe. The only option that doesn’t incur this cost is the static channel backups because they are done once at the beginning of node set up and every time you open a channel. There is no additional latency cost that you incur at each update. Whereas the full backup solutions like pushing onto a cloud server or having some plugin, those will obviously incur a throughput cost. I think the static channel backups are probably the easiest ones to set up. We are trying to make it as easy as possible for the plugins as well. We do have a plugin repository that is community driven. We do have people working on Dropbox backends where each single update gets written to Dropbox and then we have… working in the background. It should end up being a plugin that you drop into a folder and everything is taken care of from there on. Currently there is some work that you need to invest to get these working. Of course the extreme example is the replicated database backend which really takes some digging to get right. Not for the faint hearted I guess but we are getting there.
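
For context, a backup plugin of the kind being described is roughly this shape when built on c-lightning’s db_write hook with the pyln-client library (a minimal sketch; the exact hook payload fields and expected return value should be checked against the plugin documentation for your c-lightning version, and push_to_remote_storage is a placeholder):

```python
#!/usr/bin/env python3
# Sketch of a synchronous backup plugin: every database change is shipped to
# remote storage before lightningd is allowed to continue, so the backup can
# never lag behind the channel state. This is also where the latency cost
# mentioned above comes from.

from pyln.client import Plugin

plugin = Plugin()

def push_to_remote_storage(data_version, statements):
    # Placeholder: persist the incremental change (Dropbox, S3, a second disk, ...).
    # Must not return until the data is durable.
    pass

@plugin.hook("db_write")
def on_db_write(plugin, data_version, writes, **kwargs):
    # 'writes' is the batch of database statements for this state change.
    push_to_remote_storage(data_version, writes)
    return {"result": "continue"}  # let lightningd proceed with the change

plugin.run()
```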

Q - I imagine if either the lnd set up or another implementation comes up with a backup strategy that takes those trade-offs in a different way and is popular there could be a c-lightning plugin that replicates it?

A - That is how our development works as well. I wanted to point that out in the second to last slide which is we do learn from what other teams do. We do bring some of these innovations back to the specification to be discussed. If there is a clear winner when it comes to usability and other trade-offs then there is absolutely no hurdle from having that become part of the specification itself. At which point it becomes available to all implementations.

Q - You alluded to this. What exactly does need to be backed up? Chris Stewart did a very good presentation at the Lightning Conference last year on the six different private keys that you need to think about with Lightning. Some need to be hot, some can be cold. What exactly needs to be backed up in these different scenarios?

A - Obviously the first one is the seed key that you use to generate all of your addresses. That’s pretty much identical to onchain. It is what creates all of these addresses that are used throughout the lifetime of your funds. When they enter or leave a channel they will end up on addresses that are generated from that. That is the central piece of private information that you need to backup. We have a variety of parts where keys are used that are not directly derived from this seed key. Namely whenever you and your peer with which you have a channel open need to generate a shared address. That will not be part of your derivation tree that is rooted in this seed key. Those keys need to be backed up along the way. This includes HTLC secrets. Whenever a payment goes through your channel you had better remember what the key was that this payment was contingent on. The indices of the channel in the database because that is the index of the derivation path that is going to be used. And revocation secrets, those are important because that is the poison we use to keep each other honest. My absolute favorite is the per commitment points which is an ECDSA point that is added to your direct output if I close the channel. Without that point you wouldn’t even recognize that these funds are supposed to go to your node. That is a really weird use of these point tweaks because they are in there to make sure that each state commitment is unique. It just makes it so hard to create an onchain wallet for Lightning. When I stumbled over those and I had to implement the wallet I just swore.
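
Summarized as a data structure, the per-channel state that has to survive alongside the seed looks roughly like this (field names are illustrative only, not the actual c-lightning schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChannelBackupState:
    """Illustrative list of what must be remembered besides the seed."""
    channel_db_index: int               # derivation-path index used for this channel
    shared_address_keys: List[bytes]    # keys for addresses negotiated with the peer,
                                        # not recoverable from your own seed alone
    htlc_secrets: List[bytes]           # preimages that routed payments were contingent on
    revocation_secrets: List[bytes]     # the "poison" that keeps the counterparty honest
    per_commitment_points: List[bytes]  # point tweaks that make each commitment unique;
                                        # needed just to recognize your own outputs on chain

@dataclass
class NodeBackup:
    seed: bytes                                        # root of the derivation tree
    channels: List[ChannelBackupState] = field(default_factory=list)
```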

Q - What about watchtowers? Is there a role for watchtowers with backups? They are predominantly there so that you don’t have to be online the whole time so they can stop someone from cheating you. Is there a role for them to store some state on your behalf? Do you see the role of a watchtower expanding and different services being provided around a watchtower?

A - Absolutely. The role of a watchtower is definitely not clear cut. I usually tend to think of a watchtower as a third party that I put in charge of reacting when something nefarious happens onchain. But of course this definition can be expanded to also include arbitrary data storage on behalf of a node in exchange maybe for a small fee. I wouldn’t say there is a clearcut boundary between a watchtower and a backup service. I guess the name implies what is the primary use for watchtowers, watching your channels to react if your counterparty dies or does something nefarious. Whereas backups are there to store your database changes so that you can recover the state later on. If we add a way for nodes to retrieve data from a watchtower it basically becomes a backup server as well. A little known secret is that the punishment transactions that we send over to a watchtower are actually encrypted. The encrypted data could be anything. It could be your backup. If you ever end up needing arbitrary data storage just pay a watchtower to store it for you and hope they give it back to you if you need it.

Q - Are you planning on implementing watchtowers in c-lightning?

A - Absolutely. For a long time we’ve seen c-lightning as more of a hosted solution where you run your node at home or you run it in some data center and then you remotely connect to it. Our nodes would be online 24/7. But there has been quite a bit of demand for watchtower functionality as well. What I did, it is not yet released, is add a hook for plugins that pushes the penalty transaction for each of the prior states to a plugin. A hook is a mechanism for us to tell the plugin some piece of information in a synchronous manner so that the plugin can take this information and move it off to somewhere else, your watchtower backend for example, before we continue with the process of the Lightning channel itself. That way we make sure that the watchtower has the information that needs to be stored before we can make progress on the channel itself. By pushing that into a plugin we can enable our watchtower producers to create their own connections or protocols to talk to watchtowers and we don’t have to meld that into the Lightning node itself. It is very much on our roadmap now and I think I have three variants of the watchtower hook. It should hopefully be there in the next release.

Q - With end users or people with private channels on the edge of the network versus routing nodes in the middle, do you think the backup strategy or backup setup will be different for those two participants on the network?

A - I do think that as the routing nodes inside of the network start to professionalize they will probably start professionalizing their data backend as well. I foresee mostly businesses and routing nodes starting to use more and more the replicated database backends. For end users I do think that the backup plugins provide a very nice trade-off between having the functionality of a backup but not having to have the big setup or upfront cost of setting up a replicated database and automated failover and all of that stuff. Those are the two rails that we have chosen and cover these two use cases. Professional node operators and your geeky son that sets up a node for the rest of the family.

Q - If you are the end user on the edge of the network you don’t have to worry so much about going down at the time that you are routing a payment. That is the big challenge here. You need to have that latest state. If there is some bizarre timing that is where it becomes technically difficult.

A - I have recently done a restore of one of my nodes that actually does quite a bit of probing in the network. I think it had a week’s backlog of changes and it recovered in 7 or 8 minutes. That might be too much for a professional operator but for me sitting at home not doing my grocery shopping or not buying anything on Amazon, 7 minutes of downtime while I restore my node is perfectly fine. Even then we do have trade-offs. We can have backup plugins that replicate the database in the background. Then it takes a couple of seconds to failover.

Q - You mentioned eltoo. One of the questions was when SIGHASH_NOINPUT? Obviously you don’t know when but is there any chance of getting it into the next soft fork with Taproot? Or has that ship sailed? If that ship has sailed what is viable for getting it into Bitcoin if we ever get it in?

A - It would be awesome if we could get SIGHASH_NOINPUT or ANYPREVOUT which is AJ (Town)’s proposal into the soft fork that is rolling out Taproot and Schnorr. The problem being that all of the reviewers currently are focused mainly on Taproot. It is really hard to grab people and get them to discuss yet another proposal that we might want to have in the same soft fork. While I think it is still possible I’m pretty sure that it will be a secondary, maybe lighter soft fork at a later point. Maybe bundled with some more lightweight proposals that require fewer changes to the structure itself so it is easier to review as well. See what kind of parts of the codebase are being touched and what the side effects are. Because Taproot and Schnorr are a big change all by themselves as well. That being said AJ has taken SIGHASH_NOINPUT and with his ANYPREVOUT he has formulated all of the details that need to change in SIGHASH_NOINPUT for it to nicely mesh with Schnorr and Taproot. With his proposal we could end up with a very nice bundle of changes that can be deployed at a later point in time independently of Taproot and Schnorr. I remain optimistic that we can get it in but I can’t promise anything.

Q - Do you have any thoughts on whether planning for the next soft fork can really happen seriously until we have what is likely to be, no one knows, the Schnorr, Taproot soft fork first? Can there be a parallel process or do we really need to get all of those Core reviewers and brains on the one soft fork before we even get serious about any other soft fork?

A - I think it is probably best if we keep the distractions as low as possible while we do an important update like Taproot and Schnorr. I personally don’t feel comfortable creating more noise by trying to push hard on SIGHASH_NOINPUT. I think Schnorr and Taproot will get us many, many nice features and I think we shouldn’t hold up the process by trying to push in more features while we are at it. Everybody wants their favorite feature in and I don’t see this particular feature being so life changing that it should jump the queue. I would like to have it in because I think eltoo is a really nice proposal. Self congratulating again. Hopefully we will get it at some point but I don’t think we need to stop the entire machinery just for this one proposal. Who knows? We might come up with better solutions. I am under no impression that this proposal is perfect. The more time we can spend on improving and analyzing the better the outcome in the end.

Q - Have you been following the conversation on activation? Do you have any thoughts? Is there any way to speed up the process or is this going to be a long drawn out discussion?

A - Activation is a topic that many people like to talk about and it is a very crowded space. I have a tendency to stay out of these discussions where you already have too many cooks in a kitchen. I think whatever the activation mechanism is it is going to be perfectly fine if it works. I don’t really have a preference.

Q - For the researchers and developers in the space where should one put their efforts for the maximum positive impact on the Lightning Network? Obviously you do a lot of stuff. You are a researcher, you contribute to papers, you contribute to c-lightning, you do a lot of work on the protocol and the BOLT specifications. How do you prioritize your time? Do you follow the Pieter Wuille work on whatever is most fun or do you try to prioritize the things that you think are the most important?

A - Definitely choose topics that interest you the most. I did my PhD on this and it was amazing because you could jump around the whole space. It was a big green field where you could start building stuff. It was all novel and new. I get the feeling that Lightning is the same way. All of a sudden you have this playground which you can explore. Don’t limit yourself to something that might be profitable or the cool thing. Do whatever interests you. For me personally I enjoy breaking stuff. I will always try to find out things that can be broken and see if I can get away with it within limits. Lightning is cool to explore and see what you can do. If you want to increase security through watchtowers that is a good thing. If you want to increase privacy by figuring out ways to break privacy that is another good thing. Whatever you like the most I guess.

Q - Where is the most impact you can make? We only have limited time and resources so how do you think one person can have the biggest impact?

A - I do think that there are quite a few statements that need to be proven or disproven. In particular one thing that I like doing is attacking the privacy of the network. Of course we now have completely different trade-offs when it comes to privacy from onchain to offchain. We don’t leave eternal traces of our actions like we do on a blockchain. We do leave those traces on Lightning but we do talk to peers. What might these peers infer from our actions? What information could they extract from that? There is definitely the privacy aspect that is worth looking at. Then there is network formation games. If you have ever done game theory, finding a good way to create resilient networks but also networks that are efficient is a huge question. How can we create a network where each node individually takes some decision and we don’t end up with one big node in the middle becoming the lynchpin, the single point of failure where if that one node goes down everybody starts crying. That is definitely an open research question. Also more fundamental stuff like how can we improve the protocol itself to be more efficient? How can we gossip better? How can we create an update mechanism that is more efficient, that is quicker, that needs less round trips? We do have one engineer, Rusty who is in Australia and he is always the guy who will try to get that half round trip shaved off of your protocol. That is his speciality. He has a focus on that. I wouldn’t say that there is an official list of priorities for the Lightning Network and associated research. It is you making your own priorities. People tend to congregate on certain tracks.

Q - What are your thoughts on the privacy papers? I believe you were the co-author of at least one. Could you give a high level summary of those conclusions and thoughts?

A - I started collecting data about the Lightning Network the moment we started. I do have a backlog on the evolution of the Lightning Network. I have been approached by researchers over time who want to analyze this data. How did the network grow? What is the structure, what are the success probabilities? That is how I usually end up in these research papers. So far all of the analyses are very much on point. Purely the analysis of centrality in the network and the upsides and downsides. The efficiency that we get through more centralized networks. The resilience we get from distributing the network more. All of these are pretty nicely laid out in these papers. In some papers there is some attempt to extrapolate from that information. That is usually not something that I encourage because these extrapolations are based on the bootstrapping phase of the Lightning Network. It is not clear that these patterns and behaviors will continue to exist going forward. It is mostly these extrapolations that people jump on when they say “The Lightning Network is getting increasingly more centralized and will continue to become so.” That is something that I don’t like too much. The other one is that people usually fail to see that the decentralization in the Lightning Network is fundamentally different from the decentralization in the Bitcoin network. In the Bitcoin network we have a broadcast medium where everybody exchanges transactions and by looking at when I learn about which transaction I can infer who the original sender is and so on. In Lightning the decentralization of the network is not so important because we do have quite robust mechanisms of preserving privacy even though we are now involving others in our transactions. We do have onion routing, we do have timing countermeasures, we do have mechanisms to add shadow routes which are appendices to the actual route pretending we are sending further than the actual destination. We fuzz the amounts so that the amounts are never round. There is never a 5 dollar amount in Bitcoin being transferred exactly over the Lightning Network. We have recently added multipart payments where payments of a certain size are split. Unless you can correlate all of the parts you will not even learn the exact amount being transferred. All of these things are there to make the network more privacy preserving. Is it perfect? Of course not. But in order for us to improve the situation we have to learn about what is working and what is not working. That is my motivation behind all of these research papers. To see whether our mitigations have an effect. If yes, how much more can we improve them? Or should we just drop a mitigation altogether because it might have a cost associated with it. While I think we are doing good I think we could do better. That is why we do this research. We do need to talk publicly about the trade-offs of these systems because if we promise a perfect system to everybody that is a promise that we have already broken. Being upfront with the upsides but also with the downsides of a system is important I think.

Q - Going back to the backup discussion where we were talking about hobbyists and enterprises. Do you think it is important that there is a route for a hobbyist to set up a routing node? And that there are a lot of hobbyists running routing nodes just so that people don’t have to go through the enterprises. Or maybe we just need a lot more enterprises. Maybe both?

A - Does a hobbyist become an enterprise once they professionalize? There must always be the option for users to professionalize and become more proficient and more professional in their operations because that is something that we’ve gotten from Bitcoin. It is not that everybody must run a node, it is not that we suddenly have a new currency. But now suddenly everybody has the option of taking on their own responsibility and becoming their own custodian and not having to rely on other parties. We shouldn’t shame people into doing that but the option must be there for those interested. I think there is a wide spectrum of options that we can offer from custodial wallets up to having fully self sovereign nodes that run the full software stack at home where you connect to your own realm in Bitcoin. Of course this is a spectrum. Depending on your interest you will land somewhere in there. We would like to have more people on the educated and the knowledgeable side of things where they run as much as possible, but if you are somebody who just wants to accept a payment without having to read for months about how Bitcoin and Lightning work and understand everything, I think there should be an option for that as well. The important part is that it must be an option. We shouldn’t be forcing people into one direction or another.

Q - Is real time database replication already in? You just merged your plugin some weeks ago allowing back end customization. Can we get comments on real time replicas made with Postgres?

A - The replication of a Postgres database is something that we as c-lightning do not include in our code. It is just a backend that you write to. It is up to the operator to set up a replicated set of Postgres servers and then just point c-lightning towards it. That is definitely in there. I think it has been there since 0.7.2. That is almost a year now. All you actually need to do is add --wallet=postgres and then the username, password and URL of where your database node lives. It will then start talking to Postgres instead of a sqlite database on your local machine. Again Gabriele Domenichini has an excellent tutorial on that which I have linked to in the presentation.
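
Concretely, that amounts to something like the following (a sketch; the DSN below is a placeholder, and the exact option syntax should be checked against the c-lightning documentation and Gabriele’s tutorial for your version):

```python
import subprocess

# Placeholder credentials and host: point this at your (replicated) Postgres setup.
dsn = "postgres://cln_user:s3cret@db.example.internal:5432/lightningd"

# Instead of the default local sqlite3 file, lightningd writes all of its state
# to the database named in --wallet; replication and failover are then handled
# entirely on the Postgres side.
subprocess.run(["lightningd", f"--wallet={dsn}"], check=True)
```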

Q - What do you think about custom channels or custom HTLC types? Is that possible and feasible? Would integration of Miniscript allow for such customization? Do you see Miniscript playing a role?

A - I don’t see where Miniscript comes in. It might be useful when it comes to having a fully flexible implementation where we can have external applications deciding on what kind of outputs we add. One of the proposals for example once we have eltoo, we can have multiparty channels where we can have any number of participants. The only thing that these multiparty channels decide on is whether to add or remove outputs or adjust the amounts on individual outputs. The way we can describe those outputs could be in the form of Miniscript. That way each participant in this ensemble of eltoo participants is deciding on whether or not to add an output to the state of the channel. They wouldn’t need to know the exact application that is sitting behind it in order for them to decide on whether an output makes sense or not because they would get a Miniscript descriptor. Besides that if we go back to the Lightning Network it absolutely makes sense to have custom protocols between nodes. These can include custom channels or custom assets being transferred on those channels. Or even different constructions of HTLCs. The only thing that is important is that the two nodes that decide to use a certain custom protocol agree on what this custom protocol is and they implement it correctly. The second thing is that if we want to maintain the cohesion of the Lightning Network and the ability to transfer funds from point A to point B we need to have a HTLC construction that is compatible with the preimage trick that we are currently using. Or the point contingency that PTLCs would bring. That means if I receive an incoming HTLC from the left I can use whatever protocol I want to forward it to the person on my right if that person knows that protocol. But we need to switch back to normal HTLCs once we’ve left this channel. We can have mixes of different forwarding mechanisms and different update mechanisms and custom channel constructions as long as from the outside it all looks compatible. That’s also one thing that I pointed out in the eltoo paper, that eltoo is a drop in replacement for the update mechanism in Lightning. Since we don’t change the HTLCs I could receive an HTLC over Lightning penalty from Jeff and I could forward it as a HTLC over eltoo to you. We can exchange individual parts as long as on the multihop part we do agree on certain standards that are interoperable.

Q - If we have a payment channel open between me and you we can do anything. It is completely down to what we want.

A - We could be pretending to transfer. If we trust each other and we have the same operator, we can settle outside of the network. Jeff could send me a HTLC, I could tell you using a HTTP request that it is ok and to forward this payment to wherever it needs to go. We will settle with a beer later. Even these constructions are possible where we don’t even have to have a channel between us to forward a payment if we trust each other. Or we could change the transport mechanism and transfer the Lightning packets over SMTP or transfer them over ham radio like some people have done. We can take parts of this stack of protocols and replace them with other parts that resemble or have the same functionality but have different trade-offs.

Q - If me and you have a channel in the middle of the route from A to B there are some restrictions. We have to be BOLT compatible if we are going to be routing onion routed payments?

A - There is one restriction that we will only accept channel announcements if they correspond to an outpoint on the blockchain. This is done as an anti-spam measure. We would have to create something that looks like a Lightning channel if we were to pretend there is a channel in the end. Other than that you can do whatever.

Q - You don’t think there is a role for Miniscript in terms of custom scripts or making changes to those standardized Lightning scripts that most of us will be using?

A - There might be a point to it. Miniscript is a mechanism where we can express some output that we haven’t agreed upon ahead of time. I could tell you “Please add this output here” and I wouldn’t have to tell you before we start a channel how it needs to look. I could just send you a Miniscript. Miniscript is a tool that allows us to talk about scripts. Currently in the Lightning protocol all of the scripts are defined in the specification itself. For us there is currently no need to talk about different structures of outputs. There might be a use where we can add this flexibility and decide on the fly what a certain output should look like. There has never been this need to talk about outputs at a meta level because we already know what they look like.

Q - So instead of Miniscript and Script perhaps we are using scriptless scripts and PTLCs. What are the biggest obstacles and pain points to implementing protocol changes in a Lightning node? If we wanted stuckless payments or PTLCs?

A - It varies a lot. Depending on where this change is being made it might require changes to just a single line of code which is always a nice thing because you get a change log entry for ten seconds of work. Others require reworking the entire state machine of a Lightning channel. Those often require months of work if not longer. For PTLCs it would not be too hard. It just changes the way we grab the preimage on the downstream HTLC and hand it over to the upstream HTLC. It would be an additional computational step in there instead of taking a preimage here and applying it here, it would be take a preimage or signature here, modify it slightly and that gives you whatever is needed to be plugged in on the other side. When it comes to deeper changes to the state machine we have the anchor outputs proposal currently which would require us to rework quite a lot of our onchain transaction handling. We try to be very slow and deliberate about which changes of this caliber we add to the specification because they usually bring a lot of work with them. Of course the extreme case of all of this is if we were to add eltoo, that would be reworking the entire state machine of the entire protocol. That is some major work. Eltoo has its downsides too.
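
To give a feel for the “modify it slightly” step for PTLCs, here is a toy sketch using plain modular arithmetic in place of real elliptic-curve operations (purely illustrative; whether the per-hop tweak is added or subtracted, and how it is delivered, are details of the particular proposal):

```python
# Toy model of PTLC forwarding: each hop's incoming payment is locked to a
# value that differs from its outgoing one by a per-hop tweak chosen by the
# sender. When the downstream secret is revealed at settlement, the hop applies
# its tweak to obtain the secret for its incoming payment, the analogue of
# grabbing the preimage downstream and handing it over upstream.

N = 2**256 - 189               # stand-in for a group order

receiver_secret = 123456789
tweaks = [111, 222, 333]       # one tweak per hop, receiver side first

# Secrets from the receiver back towards the sender: each hop's incoming
# secret is the downstream secret plus that hop's tweak.
secrets = [receiver_secret]
for t in tweaks:
    secrets.append((secrets[-1] + t) % N)

# A hop that learns the downstream secret recovers its own incoming secret.
for i, t in enumerate(tweaks):
    assert (secrets[i] + t) % N == secrets[i + 1]
print("all hops can claim their incoming payments")
```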

Q - Nadav (Kohen) has been working on PTLCs. I think he implemented with Jonas Nick and a few others PTLCs with ECDSA. Do you think there should be a lot more work on things like channel factories before eltoo because eltoo is probably going to be a while?

A - I am a bit hesitant when it comes to channel factories because depending on what day it is they are either brilliant or they are just stupid. Being one of the authors of that paper I don’t know. The main problem with channel factories is we require multiparty channels first. What channel factories do is take some shared funds that are managed collaboratively between a number of participants, take part of that and move it into a separate subchannel in that construction. That has the advantage that since the entire group no longer needs to sign off on changes, just the two of us need to agree on what happens to these funds in the subchannel, it is way quicker to collect two signatures rather than fifteen. That is the main upside. The downside of course is that first we need to have this group of fifteen. The way we implement this group of fifteen, the multiparty channel, needs to either be a duplex micropayment channel, which is a very old paper of mine that never really took off because its blockchain footprint is rather large, or we use eltoo which allows us to set up these very lightweight 15-of-15 or 60-of-60 channels. Then we can add channel factories on top for efficiency. The reason why I am saying that depending on the day channel factories sound weird is that we already have an offchain construction where we can immediately sign off on changes without having to have yet another level of indirection. Then there is the efficiency gain, still undecided. I don’t know.

Q - What can we do to encourage better modularity? Is this important in an approaching Taproot world?

A - I think the modularity of the protocol and the modularity of the implementations pretty much go hand in hand. If the specification has very nice modular boundaries where you have separation of concerns, one thing manages updates of state and one thing manages how we communicate with the blockchain and one thing manages how we do multihop security, that automatically leads to a structure which is very modular. The issue that we currently have is that the Lightning penalty mechanism, namely the fact that whatever output we create in our state must be penalizable, makes it so that this update mechanism leaks into the rest of the protocol stack. I showed before how we punish the commitment transaction if I were ever to publish an old commitment. But if we had a HTLC attached to that, this HTLC too would have to have the facility for me to punish you if you published this old state with this HTLC that had been resolved correctly or incorrectly or timed out. It is really hard in the penalty mechanism to have a clear cut separation between the update mechanism and the multihop mechanism and whatever else we build on top of the update mechanism. It leaks into each other. That is something that I really like about eltoo. We have this clear separation of this is the update mechanism and this is the multihop mechanism and there is no interference between the two of them. I think by clearing up the protocol stack we will end up with more modular implementations. Of course at c-lightning we try to expose as much as possible from the internals to plugins so that plugins are first class citizens in the Lightning nodes themselves. They have the same power as most of our pre-shipped tools have. One little known fact is that the pay command which is used to pay a BOLT11 invoice is also implemented as a plugin. The plugin takes care of decoding the invoice, initiating a payment, retrying if a payment fails, splitting a payment if it is too large or adding a shadow route or adding fuzzing and all of this. It is all implemented in a plugin and the bare bones implementation of c-lightning is very light. It doesn’t come with a lot of bells and whistles but we make it so that you have the power of customizing it and so on. There we do try to keep a modular aspect to c-lightning despite the protocol not being a perfectly modular system itself.

Q - There was a mailing list post from Joost (Jager). What are the issues with upfront payments? There was some discussion about it then it seems to have stopped. Why haven’t we seen more development in that direction yet?

A - Upfront payments is a proposal that came up when we first started probing the network. Probing involves sending a payment that can never terminate correctly. By looking at the error code we receive back we learn about the network. It is for free because the payments never actually terminate. That brought up the question of aren’t we using somebody else’s resources by creating HTLCs with their funds as well but not paying them for it? The idea came up of having upfront payments which means that if I try to route a payment I will definitely leave a fee even if that payment fails. That is neat but the balance between working and not working is hard to get right. The main issue is that if we pay upfront for them receiving a payment and not forwarding a payment then they may be happy to take the upfront fee and just fail without any stake in the system. If I were to receive an incoming HTLC from Jeff and I need to forward it to you Michael and Jeff is paying me 10 millisatoshis for the privilege of talking to me I might not actually take my half a Bitcoin and lock it up in a HTLC to you. I might be happy taking those 10 millisatoshis and say “I’m ok with this. You try another route.” It is an issue of incentivizing good behavior versus incentivizing abusing the system to maximize your outcome. A mix of upfront payments and fees contingent on the success of the actual payment is probably the right way. We need to discuss a bit more and people’s time is tight when it comes to these proposals. There hasn’t been too much movement I guess.

Q - I’ve been reading a bit about Simplicity which your colleagues Adam Back and Russell O’Connor have been working on at Blockstream. They talk about being able to do new sighash flags, SIGHASH_NOINPUT, ANYPREVOUT without a soft fork. In a world where we had Simplicity perhaps the arguments against NOINPUT or ANYPREVOUT namely they being dangerous for users no longer apply if Simplicity was in Bitcoin and people could use it anyway. Any thoughts on that?

A - That is an interesting question. How could I not get fired for discussing this in public? I do think Simplicity is an awesome proposal. It is something that I would love to have because so far during my PhD and during my work at Blockstream the number one issue that we had was that stuff we wanted to do was blocked by the features not being available in Bitcoin itself. It can be frustrating at times to come up with a really neat solution and not being able to enact it. As for the criticism of SIGHASH_NOINPUT and ANYPREVOUT, we shouldn’t take it lightly and I do see where people bring up good points about there being some uncertainty and insecurity when it comes to double spending and securing funds. How do we clamp down this proposal as much as possible so people can’t inadvertently abuse it? But I do think that with all of the existing systems that we already have in Bitcoin we are currently trying to save a sandcastle while the dyke is breaking behind us. It is a disproportionate amount of caution when we already have some really dangerous tools in the Bitcoin protocol itself. First and foremost who invented SIGHASH_NONE? You can have a signature that does not cover anything but you can still spend funds with it. While I do take criticism seriously I don’t think we need to spend too much time on that. Indeed if we get Simplicity a lot more flexibility could be added but of course with great power comes great responsibility. We need to make sure that people who want to use those features do know the trade-offs really well and don’t put user funds at risk. That has always been something that we’ve pointed towards. We need to have tech savvy people doing these custom protocols. Otherwise you should stick with what is tested and proven.

Q - And obviously Simplicity is still far off and won’t be in Bitcoin anytime soon.

A - We recently had a Simplicity based transaction on Elements. We do have a network where we test these experimental features to showcase that these are possible and what possible implementations could look like. That is our testing ground. Russell has used his Simplicity implementation on some of these transactions. We have some cool stuff in there like quines.

Lightning specification call

Category: Meetup

Name: Lightning specification call

Topic: Agenda below

Location: Google Meet

Video: No video posted online

Agenda: https://github.com/lightning/bolts/issues/943

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Add payment metadata to payment request

https://github.com/lightning/bolts/pull/912

In theory you could have an invoice that is so large that it forces you to use a direct payment. But that is a pretty bad theoretical case. We could make some handwavey comment that users should be aware that it has to go in the onion so it should be of limited length. But that is like saying “Don’t do anything stupid”.

I asked on the issue before the meeting, is there any concern with someone who is making a repeated payment who tries to detect the length of the route in use by varying the payment data length over time across various retries?

How would they be able to do that, because of the payment secret thing?

As long as you know when a payment failed. If you have the ability to send someone ten invoices and have them try them in order and detect which ones fail then you would have the ability to figure out the length of the path they are using, at least currently.

You mean the recipient is de-anonymizing the sender? Not an intermediate node?

Basically, yes.

Maybe an intermediate option is to limit it to a large number just to prevent that.

That was basically what I was suggesting, limit it to something. I don’t know if you need 700, limit it to 256 or something.

640 is traditional.

I would make it longer than 28 because that seems really small. There might be some applications that are excluded by that. I think it is already quite exotic what you described. You are able to supply multiple invoices that are all attempted and you are going to derive something out of it about the distance. I would pick a large value.

Note that any amount we limit it to is basically plus 32 because you can always use the payment secret as well.

There are other fields that are always there. What’s the minimum for a direct payment?

That attack already exists because I can hand you a fake route hint of arbitrary length that you have to use to make the payment, therefore leaving less room in the onion.

This is true.

Shall I add a comment to the PR that you have to be aware that it needs to fit in the onion including the route?

Yeah.

I did also comment on the PR. I think if we do that then we should at least also say that a payer must not limit the length except when they create the onion, i.e. if someone knows they are a well connected node they can just use a larger value. The payer must not apply any concrete limits.

To prevent the opposite thing, people limiting more than necessary.

Yes, exactly.

Let’s add that, let’s implement it and let’s do compatibility testing again.
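
For intuition, the constraint discussed above is just the fixed onion size: every byte of payment metadata (or of a route hint) is a byte that is no longer available for hop payloads. A back-of-the-envelope sketch, assuming the usual 1300-byte onion payload and rough per-hop payload sizes (the real per-hop overhead varies with the TLV fields used):

```python
ONION_PAYLOAD_BYTES = 1300  # total space for all hop payloads in the onion

def max_intermediate_hops(metadata_len, per_hop_payload=65, final_hop_base=100):
    # per_hop_payload and final_hop_base are rough stand-ins for the size of an
    # intermediate hop's payload and the recipient's payload (payment secret,
    # amount, ...); the payment metadata rides in the final hop's payload.
    final_hop = final_hop_base + metadata_len
    return max(0, (ONION_PAYLOAD_BYTES - final_hop) // per_hop_payload)

# Varying the metadata length across retried invoices and observing which ones
# still succeed bounds the number of hops the payer can be using.
for meta in (0, 256, 640, 1000):
    print(meta, max_intermediate_hops(meta))
```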

Advertize compression algorithms in init

https://github.com/lightning/bolts/pull/825

Let’s move onto the next one, the compression algorithm support thing. It is an old one, it was removing the dependency on zlib for implementations that did not want to support it. At the same time make it flexible enough to add new encodings or new compression algorithms if we ever need to. Make sure that this is also something that is more general than gossip queries if we ever need it. We can do it for free even though I’m not sure we’ll ever use it. Basically you send a bitfield in a TLV in your init saying “Here are the compression algorithms I support”. For gossip queries people only use the things that both you and I support. We have tested with Matt on testnet and it worked great with LDK and eclair. It is not very hard to implement.

It is so much easier to implement zlib than it is to actually implement this.

It adds a bit of complexity. It is still one feature. It is not very hard.

I think what happens in the long term, we will end up using Minisketch for gossip I’m pretty sure. This becomes less important. I am happy to take your advice. If having implemented it if you are happy with the complexity of it sure.

I find it simple but it provides extension points that I think we don’t really need. But maybe we never use them so it is fine. I agree that the next changes we will make to gossip queries or to gossiping overall will be mostly a complete rewrite using something like Minisketch. We will change everything so we will throw this away. But I can understand that if LDK needs something to start with…

There are a lot of nodes on the network that already use non-zlib. In general it isn’t really an issue if you connect to enough nodes to get one that will send you uncompressed data. Given that I am not sure which nodes it is. Does lnd send uncompressed data by default or something?

It did for a while. We do a heuristic where we look at the compressed length and we go “It is not worth compressing” which is probably a waste of time. We’ve compressed it already, we should probably just use it. We do go “That didn’t win so we won’t bother”. If you ask for enough you’ll get compressed data.

It seems like less complexity than that. My point there was that it seems there are a number of nodes on the network that need this basically.

I think everyone has implemented decompression, I think some people haven’t implemented compression so that is what you are seeing. Implementing decompression is the easier part. The compression involves heuristics, you’ve got to fit it in a message and you’re like “I don’t know how much it is going to compress until I compress it” and stuff like that.
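
The heuristic being described is essentially the following (a sketch using Python’s zlib, not the actual lnd or c-lightning code; the 0/1 encoding bytes follow the gossip queries convention of uncompressed versus zlib):

```python
import zlib

def encode_short_channel_ids(scids: bytes):
    """Pick between uncompressed (0) and zlib (1) encodings for a reply."""
    compressed = zlib.compress(scids)
    if len(compressed) < len(scids):
        return 1, compressed   # compression won: ship the zlib blob
    return 0, scids            # didn't win: send it raw

# The awkward part: you only know whether compression was worth it after
# you have already paid the cost of compressing.
example = bytes(8) * 100       # 100 dummy 8-byte short_channel_ids
print(encode_short_channel_ids(example)[0])   # 1 (highly compressible)
```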

Interestingly we do already have these hooks. We already have the compression flag stuff. At least on the LDK end I just reuse those enums for compression because they happen to line up, at least currently.

Basically I think the state of this PR is we know we can do it, it is not too hard but do we really want to do it? Do we need it?

It is going to be quite a while before we rewrite stuff and not everyone wants the zlib dependency because zlib has not had the best history of ensuring its decompression library is safe.

It is just you, everyone else is happy with it, everyone else is stuck with it is the truth. zlib has a bad name but those are in the past. It has been pretty thoroughly vetted now. zlib is a lot smaller. The main issue with zlib was the whole explosive compression thing where you could really cram a lot of data in if people made assumptions about how much they could get out. Which is why the spec went through and figured out the maximum realistic size. If you get more than that you just throw it away. I don’t know. Shying away from zlib because it has bugs, if you can’t trust zlib who can you trust?

There is just not enough data in the network graph right now to care and so it is very easy to say “We can avoid bugs by simply not compressing because we don’t care and eventually we will move to something that is substantially more efficient than any of this anyway”.

I do buy that argument.

But then if we do that it is maybe not worth integrating this PR and instead phasing out zlib.

You mean drop zlib entirely? We could do that. I have a feeling that is not going to fly mostly with y’all on the mobile side. I know mobile using this stuff is important assuming you are doing graph sync on mobile.

We don’t care for Phoenix because there’s no graph sync. I don’t think eclair mobile even has the zlib part. Maybe but I’m not sure. I will ask Breez to see if they care and if they extensively use zlib.

Going back to first principles I’d be quite happy to just drop zlib. I would want to check some numbers but that is simple. I could do that commit. And it is perfectly backwards compatible. Stop compressing, keep decompressing and then at some point turn off decompression.

I’ll try to get in touch with a few wallets. See if they are using it, if they even know if they are using it or not. And if they can measure the difference between the two and see if they really need it. Maybe they’ll say that the fact that it uses more battery is more of an issue for them than the fact that it uses more bandwidth. I don’t even know if it is something that is important for them.

You are just not going to download the graph fast enough on mobile to want to do that anyway. At least not from a peer-to-peer node.

Eugene is saying that lnd doesn’t even have a config option for zlib by default. Unless Breez forked it they couldn’t be using zlib at all. I’ll check with the Breez team and if they are not using it I guess we can go and start deprecating it. At least open a PR and tell the mailing list and see if anyone says that they absolutely want to keep it.

Dynamic DNS support in gossip messages

https://github.com/lightning/bolts/pull/911 https://github.com/lightning/bolts/pull/917

I guess the second one we don’t even need to talk about it because nothing has changed since last time. There is an eclair implementation and there is an ongoing c-lightning one but it is not ready yet. But the DNS one I haven’t done it on eclair at all. I don’t know if anyone else has worked on it?

911 we have merged at least as an experimental option. DNS support we have. We can put DNS fields in advertizements, it seems to work pretty much as you’d expect. It is a fairly straightforward change. 911 we have implemented, we should probably get someone else to implement it as well. There is a draft PR for 917, the init message change. Without Michael (Schmoock) here I don’t know what the status is but I can look at it.

I think it is not filtering local addresses yet. But apart from that he has the TLV and I tested that the TLV decodes fine on the eclair side. But he doesn’t yet filter local addresses.

That’s weird because we have that code already. That’s how we decide whether or not by default we advertize our own address. He probably just hasn’t hooked it up.

I think we can discuss these two when there is momentum in the implementations.

They are completely independent. I guess it would be nice if somebody implemented DNS, at least the lookup side.

One question on DNS. Punycode or UTF-8? It should be specified.

Interestingly in the offers spec we add a UTF-8 type to the fundamental types which is basically a byte string. It gives you a nice hint that something should be in an array of UTF-8. It might be useful to pull that in. It should be specified.

I vaguely prefer that this be ASCII and Punycode, DNS anyway. Things that do DNS tend to do the reverse resolution fairly well. I’m ok being overruled. It is just easier to walk a string and check that it is ASCII than it is to walk a string and check that it is UTF-8 with no control characters.

What’s the worst that can happen with UTF-8? It is ourself announcing and as far as I can see the only thing that we could get tricked into is trying to resolve this kind of stuff. It is not like the human is going to read it and get tricked by characters that look alike.

Ultimately when you do the resolution you convert it to ASCII and Punycode anyway. If you are looking to do that you might as well just do that upfront. Most of the DNS applications that I’ve seen will do that reverse resolution for you if you want it.

Is that true?

I know at least in some web browsers if you type in the Punycode it will show it to you as the UTF-8 as long as the TLD is one of the internationalized ones. At least this used to be true.

I think there are some domains you won’t be able to reach if you can’t do Unicode but I could be wrong on that.

No everything gets converted to ASCII. You take the UTF-8, you convert it to ASCII with this weird prefix, accent dash dash or something like that, I forget what the actual prefix is. You encode the UTF-8 as ASCII and then you do the resolution.
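
For reference, the conversion being described looks like this with Python’s built-in IDNA codec (illustrative only; the open question in the spec is simply which of the two forms gets stored in the gossip message):

```python
hostname = "bücher.example"

# IDNA encoding turns the Unicode label into an ASCII label with the special
# ACE prefix before any DNS resolution happens.
ascii_form = hostname.encode("idna")
print(ascii_form)                  # b'xn--bcher-kva.example'

# And back again, which is what a UI would do to display the pretty name.
print(ascii_form.decode("idna"))   # bücher.example
```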

I am not a Punycode expert but that means that Punycode by definition is at least as long as UTF-8 right? Would that be an argument? It is gossip data that has to be replicated on each individual node so any byte we can save there might be a massive win for us. If we have to have some sort of tiebreaker.

It is only an extra byte or two. Obviously I prefer ASCII for everything because it is easier than trying to think about attack scenarios but if someone feels very strongly about byte length I’m ok with that too.

I guess I would prefer ASCII as well because it is just simpler. People can just find an ASCII hostname, there is no good reason to have something that is not ASCII for Lightning nodes.

If people insist on having weird hostnames they will shoot themselves in the foot by them paying the cost to advertize that. Probably going to stick with ASCII myself.

Everyone pays the price because it is gossiped. To be honest I just looked up Punycode, I had completely forgotten that existed. I am a little bit horrified. I remember before UTF-8 ruled the world. A certain amount of PTSD. I would say yes, it is ASCII due to DNS limitations, that is fine.

Alright, I will comment on the PR.

And make sure you put a link to the Wikipedia Punycode article so people can share the horror if they haven’t been exposed before.

And encode the link in Punycode itself.

Yeah, do that.

To one of internationalized versions.

BLIPs

https://github.com/lightning/blips https://github.com/lightning/blips/pull/4

I am ambivalent about whether they should go in together or go in separate. Whichever allows us to get this stuff merged before the year ends is what I’m in favor of.

Maybe separate is a good idea. The one I opened will be updated very often. Every time someone adds a BLIP they will have to open this table of things that they want to reserve. Whereas they should not open things about the meta process of adding a BLIP.

Agreed, I think that makes sense.

Keeping the two separate I think makes sense. I haven’t read the latest version of BLIP 1 since it has moved to the new repo. I need to do that right now.

I listed in the initial commit the changes that I made. One was, per Matt’s feedback, calling out this specific mandatory universality section requesting that proposals discuss why the given features are not intended to be universal. That’s a must now. Another was adding a little bit more detail around what ranges of feature bits and TLVs belong here: between 100 and 1000 for experimentation on feature bits and then TLVs above 65,536. Then a couple of links and stuff. Otherwise I think it is largely the same that we were close to having consensus on in the previous repo.

I guess it sounds like a good start to me. I think we will probably change some of those as we go, as we learn how people write BLIPs and what are the pain points. I think we should start with something as small and simple as we can. And then make it evolve as we learn more about the process. That looks good enough.

This looks good to me. I’m sure we are going to have some level of friction over the status field. The BIP process uses them for a little bit of pseudo inside baseball sometimes. I have a feeling that we are going to want to revisit these as we go but that is ok. I don’t think we need to figure that out now.

I think in general having this as a starting point and continually iterating on it as people start using the process makes a lot of sense.

I agree. I will do a real review this week. It is a conceptual ACK from me at least.

I will do the same for yours. For PR 4.

Route blinding

https://github.com/lightning/bolts/pull/765

There has been a new review on route blinding. There has been a complete compatibility test between c-lightning and eclair on the latest version of onion messages. Did anyone have time to look at the route blinding one? My main feedback from the last meeting, I was asking what parts I should remove to get a first version in. Mostly probably the parts about the payments and I should keep only the parts that are library cryptographic utilities that are being used by onion messages. Should I keep it in the proposal document where it is not specified yet and we can iterate on it? I updated the test vectors as well and I think they should be easier for you to work with and build upon for onion messages.

Great. We have had route blinding for payments for well over a year as an experimental option. Basically unused except for the test code because there was no good way to specify it. When we tried to revise it to the latest thing we hit those issues that I commented on the PR. But it works really well for the onion message as a base and we have interop testing so I am tempted to leave it there in the proposal document but just not in the spec.

That sounds good to me. I didn’t put anything in the BOLTs related to messages, I left it in the proposal. I think the only important parts to discuss are the potential unblinding attacks that you would do by probing fees, probing CLTV expiry delta. It is basically things that we recommend implementations do like use the same CLTV expiry delta across the whole route. Use something that is different from the public values but maybe higher, same for the fees. I am not sure how to best convey those in the spec.

While I can specify that you should use the same value across you can always ignore those values and choose to probe the differences. The ultimate answer is you put it in the encrypted TLV, you put “By the way please enforce this value” so they can’t play with it.

That makes it bigger.

Perfect is the enemy of the good, yeah. That is another thing that can potentially be added later. I think for now a graph is relatively homogenous anyway so you wouldn’t get all that much data. Although you would get some doing those games. We have a way of fixing it later if we want.

What would you tell an intermediate node that is inside the blinded route when they receive a payment that they should forward and the fee is not enough or the CLTV is not enough? What error should they answer?

They have to fail. We have an error that we return from anywhere in the blinded tunnel, in that TLV we return the same error. In fact our implementation, if it was blinded we always reply with the same error. You’ll actually see it from the entrance to that TLV scheme and you’ll never see an error from the middle. That’s partially to protect against this attack. Although it is not perfect because you can still do success or failure tests. You get one bit of information out but you can’t tell exactly where it comes from. I’ll have to look up the code to see what we do but we have a specific error, I think we added an error for it. It has been a while, I’ll have to look.

From an honest sender’s point of view if you get such an error that tells you something wrong happened inside the blinded route, maybe it is fees, maybe it is CLTV expiry delta, should you just retry by raising everything for every blinded hop so that you have a chance of success? I think you should because there is a chance that your payment will go through. This is why it makes it better compared to rendezvous, you can retry, still use that blinded route and adapt to dynamic fees changing.

That is really hard to know. Your chances of success at that stage have surely dropped significantly. It is really up to the person handing you the route to have done that padding if any for you. If they haven’t maybe you could retry but you don’t actually know what is going on. It could be that there’s a temporary disconnect in the route, it is not working anymore. That’s possibly the more likely case. It just won’t work. In which case you are kind of out of luck at that point. Unless they hand you multiple encrypted routes which is allowed in the spec but we haven’t implemented it.

I think that makes sense from the recipient’s point of view. Especially if you have multiple entry points. Just give most of them in the invoice. The idea is the same way you do route hints, you’d now use these. You can hand out multiple. The case where you don’t allow it… For payments it is a little bit different. For onion messages you only give a single reply path but the idea is you will resend if it doesn’t work and maybe try a different reply path.

Even from a payment’s point of view as a recipient if you are not well connected at all, you just have one entry point, since it is blinded you can still make it look like you have many entry points and provide many fake route hints. I think it is a good idea to have this option.

I agree. That’s for payments. We’d need to test that. But for onion messages this route blinding works really well.

I saw that Tim made a comment about a typo. Do you know if he has some time available to review the crypto? I discussed it with Jonas Nick in El Salvador. He said it could be on his list because he didn’t realize it was a requirement for offers. They have so much to do already, I’m not sure if they will find the time to take a look at it.

I will beg and see what happens.

Sounds good.

Onion messages

https://github.com/lightning/bolts/pull/759

On onion messages what is the status? We have two implementations that support it that are compatible. I know that since Matt has been reviewing it actively we should wait for more feedback. We can just not touch it anymore because we know that we are compatible but still wait for more feedback?

It is up to you what you want to do. I am not going to have time in the next two weeks to do it. I am hoping I will be able to make it my holiday project and find some time over the end of December. But we have been pretty swamped. I would say don’t spend too much time worrying about waiting for me. I hope to have time to do it at the end of this month but there’s never any promises there.

Whether it is merged or not it doesn’t stop us from continuing our work on offers. We are actively working on offers. It doesn’t change much for us if it is merged or if we just wait a month or two before we merge it.

Your plan is to do the route blinding PR first right because it depends on some of those commits at least? Is that correct?

Yes, exactly.

I am tempted to merge route blinding and onions because we have our interoperability test and we can say they are not going to change. Having changed this multiple times it is not actually that bad to change in practice. What you do is deprecate the old ones, you assign a new message number and you can do anything you want in the new onions. You can support both, it is a bit of a dance but if we were to find some issue… It doesn’t hurt anyone who doesn’t use it for a start. If we find there is some crypto issue, we should really do it this way instead then we can bump the message number by 2 and do our variation of the scheme there. It is not that bad. I do like the idea of merging it in because that fires the starting gun for people to go “We should really implement this now or at least look at it”. I vote that we merge those two having passed interop test.

There is just one thing that your comment made me think about. It is true that if we are only using route blinding and onion messages it is really easy to move to a different version where we for example change the internals of route blinding. But if you start using it for payments in invoices then you must specify some kind of versioning for this route blinding scheme if we want to be able to move to a new one?

Not if we do it in BOLT 12. You’d use a modern message to request the invoice. Then you go “You’re speaking the modern message so I’ll give you a modern TLV”. You can switch the whole basis across because I’ve done this once already. If they ask to use the old onion message we’ll give them an old TLV, an old invoice. It is not pretty, it is a layering violation but it does work. This then beds that down, the next thing to do is the route blinding for payments. That is something we can look at as well. I’m not going to commit to two weeks.

A new guy, lightning-developer started reviewing route blinding and onion messages so we can give him a few days or weeks before we merge. Depending on his feedback we merge these two.

ACK.

Warning messages

https://github.com/lightning/bolts/pull/834

Let’s finalize warning messages I guess. There’s not much to say.

We had some argument last week but Rusty has not updated the PR it looks like. We are waiting on that.

You are ok to re-add that Rusty? The all-zero errors.

Yes. I will re-add zeros.

I’ll re-read it again and ACK it. We can finally get this merged since it is already live in 3 implementations.

lnd doesn’t.

LDK, c-lightning and eclair.

We never merged it. I have tested it with Rusty but we haven’t merged the PR.

Clarify channel_reestablish requirements

https://github.com/lightning/bolts/pull/932

I created an issue to make it more clear. And I opened issues on lnd and c-lightning related to that.

https://github.com/lightning/bolts/issues/934

It makes sense but I don’t know what we actually do in practice.

I haven’t tested again. He was pretty sure lnd does automatically close before receiving.. but c-lightning he was not entirely sure. We started to implement a new mode for node operators of big nodes who are actively monitoring their node and don’t want to take any risks. They can configure this new strategy when we detect that we are late and the other guy says we are late, we print a big log in a specific log file and send a notification on Telegram or something to the node operator. Give them an opportunity to fix it before they lose thousands of channels. If they messed up something with TLV or.. We then realized it didn’t make sense to implement it right now because our peers would close anyway regardless of what we did.

It makes sense. I see that Eugene confirms that lnd does close. Is there a plan to fix that? I opened an issue on the lnd repo about that.

I saw the issue, no plan currently. But that doesn’t mean it won’t happen.

Simplified update and PTLCs

https://github.com/lightning/bolts/pull/867

One thing I wanted to discuss is the simplified update because it is related to something I posted on the mailing list today. The reason I’m bringing this one up again is I started looking into PTLCs recently and what was the way we could get a minimal version of PTLCs with a minimal set of changes to the existing protocol and transaction format. I discussed it with AJ who agrees that his proposal is a more long term thing. We should start with getting PTLCs on top of the existing structure. But actually there is one roadblock, there is one thing that PTLCs completely change compared to payment secrets. I posted that on the mailing list today and I have a detailed article on it. With a payment secret and preimage, when someone spends a HTLC success or claims a HTLC success you discover the secret by just watching the witness of the script. But with PTLCs it is different. When someone claims a PTLC success you don’t discover anything, the only way to discover it is if you had received an adaptor signature before. That means you have to receive adaptor signatures before you can sign your commitment. Worse than that it also means that when the remote commitment is published you cannot directly claim from it. You have to go through a pre-signed transaction that gave the other guy the opportunity to have an adaptor signature so that they are able to discover the secret when you claim it. We don’t have the easy structure that we had where I only sign things that go to your transactions, you only sign mine and I don’t need to give you signatures for anything from my local commitment. But now we do. I think that the current protocol of commit sig, revoke and ack, commit sig, revoke and ack doesn’t fit that model well. I think it is time to change it. If we change it we should try to find something that is somewhat similar to what we have so it is not too much of a complex change. Something that is compatible probably with an option simplified commitment and that works for both HTLCs and this first version of PTLCs. My mail to the mailing list was a call to action to protocol designers to propose some things without proposing something too complex or too different from what we have.
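
To make that difference concrete, here is a toy sketch using plain modular arithmetic over the secp256k1 group order (no real keys or signing): the broadcast signature is the adaptor signature plus the secret, so only someone who already holds the adaptor signature can subtract and recover the secret when the PTLC is claimed.

```python
# Toy arithmetic over the secp256k1 group order; no real keys or signing here.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def complete_adaptor(s_adaptor: int, t: int) -> int:
    """The claimer turns the adaptor signature into a valid signature by
    adding the secret t (the scalar behind the payment point T = t*G)."""
    return (s_adaptor + t) % N

def extract_secret(s_adaptor: int, s_final: int) -> int:
    """Only someone who was given the adaptor signature beforehand can recover
    t from the signature seen on-chain; to everyone else s_final looks like
    any other signature, so the claim reveals nothing by itself."""
    return (s_final - s_adaptor) % N

t = 123456789            # payment secret
s_adaptor = 987654321    # adaptor signature scalar shared before signing
s_final = complete_adaptor(s_adaptor, t)
assert extract_secret(s_adaptor, s_final) == t
```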

I haven’t worked on option simplified commitment in a while. An option simplified commitment is literally a subset of what we have now which is nice. It is significantly simpler. It doesn’t win you much in code until you remove the old version of course. You still have to support the whole state machine. One thing about option simplified commitment that is worth noting, at the moment there are some things that you have to tell your peer to never send you because you don’t ever want to see them in your commitment message because to fail a HTLC you have to go through a whole cycle. If you switch to option simplified commitment it is actually easy to NACK a commitment without changing the state machine. They send you the commitment signed and you go “No I told you I don’t want that”. They go round again. That means you never have to give any explicit limits to your peer which is really good because that has been a source of all kinds of channel breaking bugs. Your peer thinks they can do something, you think they shouldn’t do it and they do it. The other reason I want option simplified commitment is there is a simple extension to it that we can do later that allows you to NACK a commitment signed. Then you don’t have any restrictions on your peer. You send me whatever you want, I’ll just NACK the ones I don’t like. I’ll instant fail HTLCs for you. I really want option simplified commitment for that because it simplifies the thing further as well. The more I think about it the more I really like this idea. I hadn’t realized the PTLC thing. There’s that and an extra round trip. I guess we will discuss it on the mailing list.
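
A minimal sketch of the NACK idea floated above, assuming a turn-based flow. There is no commitment NACK in today's protocol; the message names and policy fields below are hypothetical, only to show why pre-announced limits become unnecessary.

```python
from dataclasses import dataclass

@dataclass
class ProposedHtlc:
    amount_msat: int
    cltv_expiry: int

@dataclass
class LocalPolicy:
    max_htlc_msat: int
    max_cltv_expiry: int

def on_commitment_signed(htlcs: list, policy: LocalPolicy) -> str:
    """The receiver can refuse the whole proposal and make the proposer go
    round again without the offending updates, instead of telling the peer
    up front what it must never send."""
    for h in htlcs:
        if h.amount_msat > policy.max_htlc_msat or h.cltv_expiry > policy.max_cltv_expiry:
            return "nack"            # hypothetical reply
    return "revoke_and_ack"          # normal acceptance path

policy = LocalPolicy(max_htlc_msat=10_000_000, max_cltv_expiry=800_000)
assert on_commitment_signed([ProposedHtlc(20_000_000, 700_000)], policy) == "nack"
```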

At least one I guess. That’s the hard part. If we didn’t have to have a new pre-signed transaction for the case where the remote commitment is published it would be fine. We’d just send adaptor signatures before our commitment signed but in the other direction. If I want to send you my commitment signed you would just have to send me all your adaptor signatures before and I can do the same. Now there’s also a pre-signed transaction in the remote commit and we have to both share this new signature for that but also a new adaptor signature for that before. That makes it a bit messy. That is why it is a bit hard to find the right way to translate what we have today into something that works for this case and is not a complete mess where people will get confused between what is in my transaction, what is in yours, what’s an adaptor signature. That is some whiteboard design.

I guess we will have to discuss that on the mailing list. I do like option simplified commitment. I think in practice it doesn’t actually slow things down very much because you just end up batching more as things go back.

Especially since what we realized with PTLCs, in a way you have to do some kind of simplified commitment. You cannot really stream in both directions because before sending your commit sig you have to wait for the other guy to send something. That is why I thought about option simplified commitment and making sure that they would work together. I think option simplified commitment, if we have drafted the rest, can be a good first step towards the protocol change for PTLCs.

Yes. The other thing about option simplified commitment is it makes update a lot easier. At the moment the channel update proposal uses this quiescent state where you have to make sure that nobody has got anything in flight. That is always true in option simplified commitment. At the beginning of your turn by definition it is static. That is why the spec is written in the twisted way it is. You have to be quiescent, you are always quiescent at the beginning. Any significant changes are much easier in this model. It is my fault because a certain person encouraged me to write the optimal algorithm in the first place and I should have pushed back and said “No let’s start with something simple like this”. This was my original scheme by the way. A simplex rather than a duplex protocol but lessons learned, I’ll take that one. Playing with implementations of this is probably useful too.

I think it is important to start thinking right now about how we could do PTLCs and not paint ourselves in a corner with a protocol change that would make it harder to do PTLCs than what we could ideally do. I would really like to have PTLCs in 2022, at least a first version of it.

https://github.com/lightning/blips/pull/4

I am ambivalent about whether they should go in together or go in separate. Whichever allows us to get this stuff merged before the year ends is what I’m in favor of.

Maybe separate is a good idea. The one I opened will be updated very often. Every time someone adds a BLIP they will have to open a change to this table of things that they want to reserve, whereas they should not need to open changes to the meta process of adding a BLIP.

Agreed, I think that makes sense.

Keeping the two separate I think makes sense. I haven’t read the latest version of BLIP 1 since it has moved to the new repo. I need to do that right now.

I listed in the initial commit the changes that I made. One was, per Matt’s feedback, calling out this specific mandatory universality section requesting that proposals discuss why the given features are not intended to be universal. That’s a must now. Add a little bit more detail around what ranges of feature bits and TLVs belong here: between 100 and 1000 for experimentation on feature bits and then TLVs above 65,536. Then a couple of links and stuff. Otherwise I think it is largely the same that we were close to having consensus on in the previous repo.

I guess it sounds like a good start to me. I think we will probably change some of those as we go, as we learn how people write BLIPs and what are the pain points. I think we should start with something as small and simple as we can. And then make it evolve as we learn more about the process. That looks good enough.
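
A rough sanity check against the reservation ranges mentioned above; the exact boundaries and any helper tooling are illustrative, not part of the BLIP repository.

```python
def blip_feature_bit_ok(bit: int) -> bool:
    return 100 <= bit <= 1000        # experimental feature-bit range discussed above

def blip_tlv_type_ok(tlv_type: int) -> bool:
    return tlv_type > 65_536         # BLIP TLV records live above the 65,536 boundary

assert blip_feature_bit_ok(256)
assert not blip_tlv_type_ok(42)
```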


Lightning Specification Meeting - Agenda 0957

Date: February 14, 2022

Transcript By: Michael Folkson

Tags: Lightning

Category: Meetup

Name: Lightning specification call

Topic: Agenda below

Location: Jitsi

Video: No video posted online

Agenda: https://github.com/lightning/bolts/issues/957

Organizing a Lightning Core Dev meetup

I was talking about organizing a face to face Lightning Core Dev meetup. If I understand correctly there has only been one formal one and that was in 2019 in Australia. There have been two?

Milan, the kickoff. There has only ever been two.

That was probably before my time in Bitcoin.

We get to Bora Bora? Did that come up on the list?

I think it is high time that we meet in person. I know there was one last fall but Rusty couldn’t travel and a bunch of folks couldn’t. Let’s try to get one organized that has as good a participation as possible from the Lightning spec developers. If anyone has questions we can discuss right now. I just wanted to give a heads up. I plan to send a survey out just to get some sense for locations and dates that work for folks. It will probably be impossible to accommodate everyone but I’ll do my best to take that information and find a way to get something scheduled in the next few months. I suspect it will be April, May timeframe. I am going to try to make it work, at least by May. Something that works for everyone. Are there any quick questions or things I should be thinking about when trying to organize this? I’ve organized a couple of Core Dev meetups in the past, we’ll take lessons and learnings from that. As they develop we’ll reveal plans and get feedback from everyone to make sure it is as valuable as possible.

There are a few things popping up randomly that maybe people will be at. I know some people are going to be in London, some people may be in Miami. If we can piggy back maybe that can work but London is in like 3 weeks.

I’m probably the only Lightning one but I’m not going to the UK. I know a number of Core devs aren’t going to the UK either.

Some of our people are going but they are already in Europe, it is a skip. Not a long distance.

I’m happy to go to Europe. Because of the lawsuit currently entering into the UK… At least until we finish the jurisdiction challenge.

I forgot that was happening in the background.

As these Bitcoin conferences occur, with some subset of us there, let’s meet up and make progress. Work towards this one where the majority of us can hopefully attend.

Use warning message in quick close

https://github.com/lightning/bolts/pull/904

This is a one liner to use a warning. This should be easy to integrate. Another thing that we could discuss about that PR is the point Laolu raised. We could add a feature bit for that. I don’t think we need to and I think Matt doesn’t think we need to either.

If this thing can save you a bunch of chain fees maybe you want to find people that promise to always do it. That was the rationale there. Otherwise you have the fallback thing. Maybe you end up paying more because they are doing weird stuff. I started implementing this, I need to go back to my PR.

We can separately discuss having a feature bit for the quick close thing itself. But this in general is just adding an extra warning message on the wire. I don’t know why we would add a feature bit for just that.

People send warnings today. I get some issues in lnd, an unknown thing. Maybe we should add more logic in there basically. I think c-lightning sends one if you already have a channel that is closing and you try to do a new one. I think Carla has an older PR for that, we just need to revive it so we can see when things are going wrong.

I think we send them too now. It is just a new message type that you are supposed to log.

We had an older PR that was waiting for stuff to settle down but it is merged now so we could move forward with that. I’m trying to find it now.

I agree it is completely orthogonal to a feature bit so maybe have a quick look at that PR.

Offers

https://github.com/lightning/bolts/pull/798

We’ve been working a lot on offers on eclair recently. It is making steady progress. I put one comment on the PR about blinded paths in the payinfo but apart from that I don’t think there’s much to discuss on our side.

I did the classic pre-meeting “I should do something”. I went through and applied a few changes, a few typos and things. There was a compat break that I have implemented with compat mode. We seem to be pretty close. I look forward to seeing how interop goes once you’ve got that working. There was a request from Matt. We’ve got blinded paths but we don’t have simple route hints. The problem is you can’t pay via a blinded path until we’ve got the pay via blinded path stuff. He was wanting this shim that I’m reluctant to do but it is practical. I can give you a single unblinded hop. We hold our nose and implement for now. Then eventually it will be in the wonderful unicorn, flying car days and we’ll have full blinded payments and we can get rid of it.

To be clear I am not going to agitate strongly for this. I think it would let us deploy a little quicker. Obviously we would deprecate it within a year. Then we would remove it another year. But if the answer is no I am definitely not going to advocate very strongly for this and push back. I leave it up to you.

I need to dive a bit more into that. I do not know yet how much work I will have to do. I would be able to know more in a few weeks and then make a recommendation. Right now I suggest to keep it in until we realize that it is too much and we want to ship without it. Then we may want to remove it. I added a comment today which is potentially another breaking thing so you may want to take a look at it. It is about using a single blinded hint for the whole blinded path instead of one per hop. That is something that would change the wire requirements. We need to decide whether we want to do it or not. While I was reviewing Thomas’ PR on eclair to add offers I realized that there is this thing which is a routing hint saying how much fee and CLTV delta to use for the parts of the blinded path. Thomas understood it as there must be one for every hop in every blinded path that is in the offer or the invoice. The way I understood it was we only need one per path and we should apply the same fee and CLTV for all hops in the path to hide them more. You don’t want to show there are different fees here. That is an unblinding vector and it takes up less space.

You still have to indicate the number of hops though.

Yeah. You still have to have an unencrypted blob for each of the hops. But for the fees and CLTV you just provide one value that should work for all the hops in that route. It is more lightweight in the offer and in the invoice. Especially if you add dummy hops at the end of the blinded route you don’t have to repeat new fees and CLTV expiry that takes up more space for no reason. It also forces you to have something that is uniform and works for the whole path which makes it hopefully harder to unblind.

Does that mean you have to go through all the nodes across all the different payment paths, find the one which is charging the highest fees and apply that ubiquitously to every single node?

In a way that is already what you do. When you include the path you have all the data for all these hops so you just select the highest fee instead of selecting the right fee for each of the hops.

If you are doing something smart you obfuscate those numbers. It doesn’t help a huge amount because they can still probe. We have a plan for that, that’s later. You put something inside the onion to say “Don’t accept below this fee because they are trying to use it to probe you.” It is not a break for me because I don’t write this field at the moment. We can certainly change it. It is a simplification, it makes sense. You could imagine a case where I am feeding you a blinded path where one is higher. You could argue if it is such an obvious vector then don’t put that in the blinded path, start with the blinded path after that.

Or just use the higher value for everyone. One other thing I was arguing in the route blinding PR is that it may be frightening for the sender to see that there is a high fee to pay for the blinded part of the route. But actually you could reverse that and make it be paid by the merchant. The merchant would discount the value of the real item and would actually pay for the fee of the blinded path himself because it makes sense. The merchant is trying to hide themselves so they should pay the fee for the blinded part of the route.

I buy the argument. If you are paying for liquidity you will end up with this one hop that is potentially significantly higher. But at the moment the Lightning Network is still low. I ACK that, I will write some verbiage around it. Change it to a single value that applies across the route, I like it.
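
A small sketch of the simplification agreed above: collapse per-hop relay parameters into one payinfo advertised for the whole blinded path by taking the most expensive hop's values. Names are illustrative, not wire-format fields; strictly speaking proportional fees compound hop by hop, so a recipient may want to round up rather than use the plain maximum.

```python
from dataclasses import dataclass

@dataclass
class HopPayInfo:
    fee_base_msat: int
    fee_proportional_millionths: int
    cltv_expiry_delta: int

def aggregate_payinfo(hops: list) -> HopPayInfo:
    """Take the most expensive hop so the single value works for every hop
    and dummy hops at the end of the route cost nothing extra."""
    return HopPayInfo(
        fee_base_msat=max(h.fee_base_msat for h in hops),
        fee_proportional_millionths=max(h.fee_proportional_millionths for h in hops),
        cltv_expiry_delta=max(h.cltv_expiry_delta for h in hops),
    )

path = [HopPayInfo(1_000, 100, 40), HopPayInfo(500, 300, 144)]
assert aggregate_payinfo(path) == HopPayInfo(1_000, 300, 144)
```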

Zero conf channels

Previous discussion: https://btctranscripts.com/lightning-specification/2021-11-22-specification-call/#simple-turbo-channels-enablement

We have a pretty comprehensive implementation of it but there was that one thing that we left, the channel type or not. Maybe here I can explain our use case or flow versus the reject on accept. For me it is a fail early versus fail after they send you a non-zero value on min depth. In our case, people like Breez are already doing zero conf channels today. If Breez is starting to use Pool to acquire a zero conf channel for onboarding, in our case we have a channel acceptor that looks at the order in Pool and says “This is not the channel type, we reject.” With this we need a channel acceptor acceptor. Whenever someone sends you an accept message you need to have that hook there. We already have a pre-accept hook versus a post-accept one. Adding the channel type here would let us know if they are actually going to do the thing. Some people commented that they could do whatever anyway but at least we have that first protection. You can say “They can do that in any case but the protocol is what we expect. The extraneous stuff can be handled on the side.” We have a full implementation. We should probably test some stuff on the side. That’s the one thing. We want a channel type so we can at least say “I want to open a zero conf channel to you”. Whereas right now this is saying “I will allow you to do zero conf after it is extended”. That is a slightly different flow.

Don’t you already have some kind of hooks on the accept channel message? There’s tonnes of fields in the accept channel message that users will presumably want to filter on?

The way we handle it, depending on how you do it, you either have a function closure that tells you what to post to the other party… Up until now there has never been a need to do anything on accept channel. Whenever someone sends you an open_channel message that’s when you’d say “I only want private channels. I don’t support this feature bit. They have a green alias, I don’t like that.” That is what people use pretty widely today.

You said on receiving open_channel? These are two different directions. You mean before you send open_channel?

Us and other implementations have something like a channel acceptor when you receive open_channel. You are saying “Do I want to accept this channel since the default things are currently one way?”
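
A sketch of the kind of hook described here, deciding on an incoming open_channel before accept_channel is sent. Field names are illustrative and the zero-conf bit number is a placeholder, not a spec assignment or any implementation's actual API.

```python
ZERO_CONF_CHANNEL_TYPE_BIT = 50   # placeholder bit number, not a spec assignment

def accept_inbound_channel(open_channel: dict,
                           expect_zero_conf: bool,
                           private_only: bool) -> bool:
    """Runs when open_channel is received, before accept_channel is sent."""
    channel_type = open_channel.get("channel_type", set())
    if expect_zero_conf and ZERO_CONF_CHANNEL_TYPE_BIT not in channel_type:
        # Fail early: the opener is not offering the zero-conf type arranged
        # out of band (e.g. through a Pool order).
        return False
    if private_only and open_channel.get("announce_channel", False):
        return False   # local policy example: only take unannounced channels
    return True

assert not accept_inbound_channel({"channel_type": {12}},
                                  expect_zero_conf=True, private_only=False)
```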

Let’s separate the conversation between outbound and inbound channels.

I’m concerned with accepting. Let’s say Breez acquired a channel for their user, a node on the network is now opening a channel to your mobile.

So it is an outbound channel?

Yes it is an outbound channel. The way it is setup, the maker is always the one that is going to be opening the channel, in this case the person who is opening the zero conf channel. Right now in our flow the user would see the open_channel, assuming there is a channel type and whatever else, see it is not zero conf and then reject it. Otherwise it would need to accept it and then later on have an exception down the line that they send a min_depth of a different value. That’s the flow.

You flipped it on us again. You are talking about the side that is accepting the channel, not the channel opener. And you want to filter on the open_channel message itself.

Yes. We do a similar thing. If someone wants anchor only because we have a feature bit or a channel type there, they can say “That’s not an anchor channel. I’m rejecting it” and everything moves forward like that. I don’t see a reason not to add a channel type here if it can make peering and general protocols built on top of it more explicit. We can fail quicker rather than failing later. The failing later, we would receive the min_depth

You said this is for the case where a user has received an open_channel and then is going to make some decision based on that open_channel and then send a response or an accept_channel. But once you’ve received that open_channel you now have all the information. The min_depth is only in the accept_channel. Presumably the node that is opening the channel, if you tell it it is zero conf it is just going to accept that because why wouldn’t it? In my understanding of the way we’ve done it and c-lightning has spoken about implementing it just seeing the open_channel and knowing what you are going to write in the accept_channel is sufficient to know whether the channel will be zero conf.

That’s the difference. Y’all are saying zero conf all day everyday. We are saying zero conf under these very precise scenarios. It wouldn’t be a default thing for the world. I don’t see any downside and I feel like it makes certain protocols more precise because you can fail earlier. We have a lot of feature bits, we already have a channel type here too. Maybe certain channels can’t support zero conf in the future.

And multi funder is a whole other discussion.

We have the ability at the protocol level to allow that filtering to exist in the future by having the zero conf bit here.

In the case of you’ve received an open_channel message, you say “I’m going to do zero conf with this channel”. Presumably at that point you’ve done further out of band negotiation. Obviously you are not going to accept zero conf from anyone, you are going to say “This node, we’ve already negotiated this and that”. Why can that negotiation not be the thing that decides this instead of having it be a negotiation? First you negotiate, you know you are going to do zero conf with this node, you get a channel from that node and then you do an additional negotiation step and say “It must be zero conf”.

This is when things are extended. At that point maybe they are eligible. But in this case whenever you send it I know it is there at runtime. We always try to verify the lowest layer. Let’s say we are doing this thing and it is not in the feature bit. Then the user sends min_depth zero, for whatever reason other party says “No”. At that point you have a weird silent failure. Now the receiver is saying “Zero conf” rather than the proposer. If the proposer initially gets the accept_channel and then does nothing, UX wise it is hard to have a consistent flow there.

I don’t understand why. You’ve already negotiated it. The initiator and the acceptor has negotiated it. The acceptor says “Yes zero conf” and now the initiator is like “Actually no, not zero conf”. Then you’re saying “The UX is going to be confused.” Of course the UX is confused, the initiator is buggy.

If we have a flow here where we know the initiator is doing a weird thing from the beginning we are able to make things a lot more consistent. The way it works, we ignore them, we do matching again and we can proceed. While with this one it is a weird indeterminate thing.

The channel now has to wait for a few confirmations. So what?

But now the user’s expectation is trashed. “I thought I was getting a zero conf channel. I can’t use it at all. Is the wallet broken?”

It is the user’s fault.

It is not the user’s fault. It is the protocol not being able to surface and allow things to be explicit like this. Can you explain the cost of adding a channel type feature bit here? In the future maybe certain channel types aren’t zero conf friendly at all. We are able to add a provision for that from the get go versus in the future realizing that the min_depth dance isn’t sufficient for whatever new super Taproot covenant channel type people come up with.

I am just trying to understand exactly the differences here in terms of the flow.

The UX is inconsistent because after everything is under way things can break down. Versus just saying “We’ve checked at the beginning. You have the open_channel rejected there.” Then we can at least say “We weren’t able to find a match for you” versus “We’ve found a match but then it turned out to be a counterfeit good basically”. It is like buying a car, they told you it was manual and it is automatic. You are like “What is this? I can’t drive this thing.” That’s a framing. It is making sure the user is buying or selling the good as much as we can validate it upfront.

The problem with this PR is it conflates two things. One is if you do zero conf you need some alias mechanism, you need a name for it before it is confirmed. That’s almost useful. We’ve decided we like that. Whether you are doing zero conf or not it is nice to have this alias facility, private channels and stuff like that. That’s almost a PR by itself. The problem with zero conf is if you say “Yes I want a zero conf channel” are you committing to trusting them with the channel? I can open a zero conf channel and let you open it and pretend and then never forward anything until it is confirmed. But presumably when you’ve said “I want a zero conf channel” you are all in and ready to trust them with this. Or you are on the side that doesn’t require trust. That is what you are trying to signal.

One other slight thing here with the way Pool works, we like this because it increases the set of signers required to double spend. For example if I have a batch of 5 people opening a channel it requires all 5 of them to double spend rather than just the person that was opening. It also requires us to double spend as well too. It increases the total set of collusion that is necessary in order to double spend the output. The reason they can’t double spend is they are in a UTXO that is a 2-of-2 with us. They would need us and every other person as well to double spend the entire batch. That’s the one difference security model wise with how this works in this setting. It is like a coinjoin where everyone has a timelocked input basically. The input will only be signed if things look good. The trust stuff is explicit. That’s another reason to add a channel type there. “Do I want to accept this zero conf thing?” You are right that there is a double opt-in. We are just trying to make it more explicit. It is more sensible if we know zero conf stuff can’t work for every channel type.

Originally the channel types were just to get around this hack. There were some features we had to remember. If you negotiated that at the beginning that made sense for the whole channel lifetime independent of what’s in the future. But generalizing it to “This is not persistent state but this is stuff about this channel”. It is not objectionable.

If we want explicit signaling I would strongly suggest we switch to a TLV in open_channel rather than making it a channel type.

That’s exactly what we have. We have a TLV that is a channel type in open_channel.

Internally from the architecture, when we switched across I just went through and changed everything to channel types internally. It was so much nicer. Instead of all these ad hoc things going “Is this an anchor channel? Is this a static remote key channel?” it suddenly became this bit test of the channel type, this field that went all the way through.

For us it allows us to move our implementation closer to the protocol. We already had the channel type before but now it is one-to-one. It is a different enum or whatever but same thing.

In retrospect we should have done this originally. There are two things. One is do you get an alias? I think the answer is everyone wants an alias. You need an alias if you are doing the zero conf thing obviously. But the way the spec was written is that you’ll get one. I think this is nice. I am actually not all that happy with a channel type the more I think about it. But I do want to go and implement it and see what that does.

It does feel weird because channel type is all stuff that is only persistent.

It is today but maybe that is inflexible thinking. My only caveat on this, it is not necessarily a huge issue, you open a channel and you go “I expected that to be zero conf”. You can specify that afterwards. We were going to have an API where you could go “Trust this node ID”. Obviously if you open a new channel it would be zero conf but you could also zero conf an existing channel by going “I’m going to start ACKing it”. Assuming that it had support for the alias so you were ready to do that. You would end up with a zero conf but you would never have signaled a zero conf. I guess you are free to do that.

Presumably the way y’all would implement that is that even if your counterparty says “6 confs” you will always send the funding_locked immediately after you broadcast the funding transaction if you are the initiator. Is that what you are thinking?

Yeah. If you are the initiator and there is no push_msat. And in our case with dual open, if you’ve got no funds on the line we will just go “Sure whatever”, we will zero conf stuff.

Why does push_msat matter?

If I have all the funds in the channel then I can use the channel immediately. If you screw me you’ve just lost money. But if I’ve push_msat to you you can push stuff through the channel.

You are still presumably not going to double spend yourself. It just prevents you from double spending the funding transaction? The idea is that you’d like to be able to continue double spending the funding transaction? The initiator pushes msat to the other counterparty, it is all your funds but you’ve given it to the initiate key?

Specifying push_msat puts you at some risk of them getting the money. If it is single conf even if your double spend fails you still have everything.

Presumably you were ok with them getting the money because you’ve pushed msat to them?

If you wanted to scam them maybe you wouldn’t do that.

The guy who accepts the push_msat, if it is a payment for something that has been semi trusted and done before, “I will push you msat because I opened this channel because you opened a channel to me. I opened a channel back in response. I will push you money through that.” But if you accept it as zero conf and they double spend it you lost that msat, maybe you opened a channel in response. It is more the guy who accepts the push_msat that has a risk of accepting zero conf.

You can generalize this for the dual funding case.
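
A small sketch of the rule of thumb described above, not a spec rule: the funder that pushed nothing only has its own funds in the channel, so it can treat it as usable immediately; the side that was pushed msat (or, with dual funding, contributed its own funds) loses that value if the unconfirmed funding transaction is double spent, so it needs actual trust in the peer. Names are illustrative.

```python
def use_channel_at_zero_conf(we_are_funder: bool, push_msat: int, trust_peer: bool) -> bool:
    """Heuristic only: funder with no push_msat risks nothing it does not
    already control; anyone who stands to lose value to a double spend of the
    unconfirmed funding tx should only proceed if they trust the peer."""
    if we_are_funder and push_msat == 0:
        return True
    return trust_peer

assert use_channel_at_zero_conf(we_are_funder=True, push_msat=0, trust_peer=False)
assert not use_channel_at_zero_conf(we_are_funder=False, push_msat=50_000, trust_peer=False)
```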

This is an interesting question then. Basically the channel type or TLV or whatever would say “Either send me an accept with zero min_depth or I’m going to immediately close the channel.”

Or send a warning message or error and whatever else.

The initiator will still always send a funding _locked immediately and the receiver can still send a funding_locked immediately if they want to. The feature bit is only an indicator of either you do this or I am going to close the channel.

It should also indicate that you have some degree of trust, that you will route. I could send whatever I want across the wire and not consider the channel open until… I think it should imply that you are going to let them do zero conf.

They can just deny any channel that has this set. Maybe that helps, maybe that doesn’t. They at least have that ability.

The thing is it should flag that they are trusting the zero conf, not just that they are walking through the protocol.

It should say that they must, not just that they can. If you see this bit and you are going to send an accept_channel that does not have a zero conf min_depth you must fail the channel.

Negotiation has failed at that point.

It is not optional.

On the alias being decoupled, do we like that in the same combo still? The alias thing has a feature bit already right?

Yes.

You must only route through this bit, not there is an alias offered. The feature bit is not just for the alias itself.

The feature bit is weird. It is like “Only use this one. I really understand what I’m doing and I only want you to use this. Discard the real short channel ID.” This is kind of what you want. But whether we should use a different feature bit for that, I am going to have to look back. We do want a way to say “I am all in on this whole alias idea.”

It should be all or nothing.

But for backwards compat or “I don’t care. It is going to be a public channel but it is zero conf for now” I can use an alias and then throw it away. This is where the alias becomes a bit schizophrenic. We’ve got these two roles. The feature bit would say “We’re all alias and we are only alias”. It is kind of overloaded. Maybe we should go back and change the bit number. If I switch it to a channel type I’ll see what happens.

I thought you did switch it to a channel type.

That is a channel type. Because that is something you have got to remember forever. The same bit would be used for the other channel type so now I have to find another one.

I think it is an example of the advantage of the feature bit. You can have these as individual things. Zero conf and the alias only or you can have all of them. That’s nice in terms of the bit vector comparison thing.

This is where I’m coming around to they are separate things.

A different feature bit.

Yes. Part of the roadblock that we got is because we put them both in together. It became this logjam.

Zero conf requires aliasing though yes?

In order to receive before confirmed yes but maybe we don’t care about that eventually.

Yes. If you don’t have an alias then all that can happen is they can push sats through you but you can’t use the channel.

It is kind of useless without that. Does that mean that you intend to split this PR into two? Or are we going to continue? It sounds like LND already has an implementation, LDK has one.

We have one of everything. The only divergence is the upfront feature bit check. I am cool with keeping it as is and we maybe throw out the bits. Once we have that squared up we can look at cross compat testing.

Add a TLV rather than defining a new bit.

We have a TLV.

Add a TLV that says the required bit.

I’ll edit this PR for now. If I was smarter I’d have split it into two. I don’t think it is worth splitting now.

It is pretty small, not a multi file mega thing.

Action Rusty to do another pass, make a channel type and see what happens, how bad it gets.

Zero reserve

I’m going to jot that down on the PR. One other thing related to this is zero reserve. Eugene is implementing this and asking questions about zero reserve. Right now I let you cheat me for free but maybe it is not useful unless we have it both ways. I think he was wondering do you always do zero reserve? I think right now technically if you send zero it is in violation of the spec. I think we have must be greater than zero thing.

We accept it, we do not allow you to send it currently. We may at some point allow you to send it. We accept it, maybe in violation of the spec.

Must set greater than or equal to dust_limit_satoshis.

If you set that to zero there is that weird interaction. I looked at a really old Breez PR, I found that it allowed zero reserve but it didn’t because it would reject it if it was less than the dust limit. We also had some dust limit revelations a few months ago as far as interactions with other fields.

At least in our codebase I don’t think there’s a weird interaction. If the output value is less than the dust limit you still have to remove it.

Otherwise you’d have a weird situation where I make the reserve on your commitment transaction below dust which means it can’t propagate. Maybe I can do that by rebalancing or something like that.

There is still a dust_limit_satoshis.

The issue is you can end up with a zero output transaction.

As long as your dust limit is non-zero. You still remove the output.

No you remove all the outputs, that’s the problem. That’s not a valid transaction.

A zero output transaction, I see your point.

By forcing a reserve you are saying that someone has an output at all times. I think that was the corner case that we ended up slamming into. Maybe it doesn’t matter. What are you doing with your life if you’ve managed to turn everything into dust? I don’t if that is real but I remember that corner case.

I know Breez is running a LND fork and they already doing this in the wild.

If the other guy lets you have zero reserve on your side it is all a win for you. It is only for the other guy that it is a risk.

Exactly. If you say I can have zero I’m fine with that. That doesn’t necessarily give you what you want. You want to be able to do “send all”, close the app and walk away. But right now people have that weird change value lingering. I’m not sure how the mobile apps handle it in practice.

That’s why we did zero reserve on Phoenix, to be able to do “send all” and to have users send all of their balance out, there is nothing remaining.

Because otherwise it is weird. You have this value you can’t move and people will get frustrated about that.

We had users who were like “No I need this or I can’t ship”. I think we have separate dust enforcement around our outputs. That’s a good point, there may be a corner case where you could hit a zero output transaction.

That was the killer. It is unspendable. In one way you don’t care, on the other hand it is UTXO damage.

It is application level brain damage at that point.

So this is one of those more investigation required things?

Write in the spec what you do if you end up in this case. Or figure out a way to avoid it. Say that “The minimum channel size must be such that you can’t be all dust”. Though that isn’t actually possible because your HTLC dust limit depends on your fee rate and stuff like that.

If it is only when anchor outputs, zero fee is used then you have no risk. There is no trim to dust, it is only the dust limit on HTLCs. If your channel is not really small you will always have outputs in there.

What do you mean there is no dust? What if we just move the reserve to the anchor output? Maybe that would solve it.

He’s two steps ahead. I was suggesting that you make your channel size such that you can never have it all dust. But that is not possible in a classic channel because your HTLC size that gets trimmed depends on your fee rate. But as he’s saying, that is not true with zero fee HTLC anchors. Modern anchors, that is not true anymore. You could just figure out what the number is and don’t have a channel smaller than this and you can have zero reserve. Maybe that’s the answer.

Can you explain the making sure you never have dust? You mean you have a minimum channel size that is just above dust?

To avoid this problem where you end up with zero outputs because everything is dust, if you blow away the reserve requirement, you could fill it with enough HTLCs that are all dust. Suddenly you have got zero outputs. You want to avoid that. Figure out what the numbers are. It depends on the max number of HTLCs and your dust limit. But it no longer depends on the fee rate which was the impossible one. If you put that as a requirement, you’ve got to have modern anchor and you’ve got to have larger than this number, formula whatever, then you can have zero reserve. I think that covers the corner case.

You could probably also get away with that Antoine PR that is overly specific. Presumably at this point every implementation has their own separate dust limiting functionality and you could also lean on that depending on how you implemented that. Maybe that is too weird.

It seems like anchors makes it possible for you to compute what this minimum channel size should be to make sure nothing is ever fully dust. You always have enough funds left over after paying for the fees of HTLCs, the first level output that is.

Independent of fee rate which is nice.

There is still one edge case. If the fee rate goes really, really high and you are not capping it because of anchor. If you don’t have any HTLCs and the fee rate goes to the roof… there is only one guy paying the fee…

As long as that guy paying the fee is the one with all the balance and the other one has rounds to dust.

It is possible in theory, yeah.

You may have to put a clause in the fee rate saying you don’t do that. “Don’t set a fee rate such that you would end up with zero outputs”. Figure out exactly what to test rather than just saying that. Assuming we can work out why are we suggesting this is a new channel type? A zero reserve channel type?

I don’t see why it would be.

The argument is “This is the kind of channel I want”.

It would be the exact same reasoning of the previous discussion. They can always send it and presumably you negotiated it in advance.

Similar thing. This would give them the level of guarantee they have today. But more broadly in the network. By them I mean people like Muun and Breez that already do it.

It is one of those things that are pre-negotiated.

That does touch on what Lisa is doing on dual funding. On dual funding she says that the reserve is not negotiated, it is only a percentage. There is a boolean saying “Include it or not include it”. You decide it at what step exactly? In which message? I don’t remember.

I don’t think I have added the boolean thing yet. We’ve talked about adding it.

At the moment it is 1 percent. The 1 percent is known at negotiation time, you choose the protocol you are using. I haven’t added the boolean thing yet.

Even if we had the boolean it would be after discussing the channel types. So maybe it would be the same thing as for zero conf. If we want to know it upfront then we do need a channel type.

I think it makes sense as a channel type. It also is a feature bit. You know what you are getting. The reason I like 1 percent reserve is the same kind of reason. I could tell you exactly what channel size you will have after putting in this many sats. If we make it a channel type it falls automatically into dual funding anyway.

Does anybody know if you can get zero outputs with just one side sending zero channel reserve? Because there’s an asymmetry here?

Both sides have to have zero reserve right?

What he’s saying is ignoring this I can send it and you don’t send it. Presumably we have asymmetric dust limits, is there some weird edge case there?

Only pre any HTLCs right? You could start with a zero balance on one side before any HTLCs have flown?

https://github.com/lightning/bolts/issues/959

There was another PR on ours that we were looking at but I guess this clarifies things. If it is a bit we would modify the bit vector to make sure it is only the new anchors or whatever anchors and go from there. Now we can test some stuff out in the wild again, I can take a look at Eugene’s monster PR.

RBF

One topic that has been discussed a lot recently is RBF. I don’t know if you’ve followed all the discussions on the mailing list about RBF. I am really eager to see people in meatspace and discuss it more.

Why isn’t it as simple as just deleting that other code? If it was me I would just have a pure delete PR.

It is a denial of service attack against Bitcoin Core nodes. The problem is all of this stuff very quickly turns into either a) it is not actually optimal for miners or b) it is a denial of service attack against Bitcoin Core nodes such that you can’t possibly implement it without breaking everything.

Wouldn’t you keep the whole min step thing? Each replacement still needs to replace a certain amount?

One issue is that if the transaction package is going to confirm in the next block, it is actually not optimal for miners to accept a replacement that is smaller but has a higher fee rate. You decrease the value of the next block which is exactly the thing you don’t want to do.

Why would a miner have a bigger mempool and keep the conflicts? You could check only the ancestor package and accept things that have a bigger ancestor package than the things they are replacing, not care about descendants. The miners would keep more and would keep conflicts, for them it would make sense.

You are saying that you look at just the part that is in the next block and then you look at whether or not that has a higher total fee?

No. The code that we would use for RBF on the relaying nodes would not be the exact same code as what the miners would do.

Russell O’Connor’s proposal from 2018 is still the best in my opinion.

If a Bitcoin Core node is making decisions that are different from what is being mined you are denial of service attacking yourself. Fundamentally by relaying and by validating and doing all the work you are spending time doing something. If that transaction is something that the creator of that transaction knows will never get mined then they know they can do this all day long.

You are not always doing the optimal thing because there may be descendants. I want to ignore descendants when evaluating whether a package is better than another package. Ignoring descendants, it is much easier because it doesn’t vary from one mempool to another if you only look at ancestors. You still force the ancestor package to increase.

The ancestor package but what about the descendants? You’ve kicked out the descendants and the descendants are a free relay issue. If I can add a bunch of descendants to the mempool and then do something that kicks them out without paying for the descendants then I can do this over and over again and blow up your CPU.

I believe the conflict is fundamental here. The miners don’t care how much spam you’ve got to get through to get to them, that is what their priority is. The network priority is minimize network traffic. These two are in conflict. They are absolutely in conflict. Currently Bitcoin Core biases towards protecting the network rather than optimizing for miners. Not long term incentive compatible.

What you are saying, you don’t care if you are throwing out children. The point is you could also have a world where you just simply don’t accept these children. If you are wasting traffic adding all these children, I guess this gets into the distinction between top block versus not. These descendants being in the top block versus not. Fundamentally if you really wanted to rearchitect the whole mempool from top to bottom what you would really do is say “I am going to aggressively relay things that fit in the next block. Beyond that I am going to aggressively rate limit it. I am going to have a fixed bandwidth buffer for stuff that is not in the next block. If it is in the next block I’ll relay it.

You have less spam problems at that point because anyone trying to spam with tiny replacements is at risk of getting mined at any point. It also gives them emergency override to go “I need to override this so I am going to slam it with a fee that is going to get in the next block and everyone will relay it.” It is also much more miner compatible. Fundamentally the concept of a mempool as being a linear array of transactions that is ready to go into blocks at any point is not optimal for miners. They should be keeping a whole pile of options open and doing this NP complete thing where they are rejuggling things, that is not going to happen.

What is practically optimal for miners is also what they can run in their CPU in a reasonable amount of time and can actually implement. There is an argument to be made that what is optimal for miners is always do whatever Bitcoin Core does because it is probably good enough and anything else takes a lot of effort which may screw you if you make an invalid block.

On your point about evicting descendants being costly. Is it really because it is bounded? You don’t have chains of descendants that can be longer than 25.

It is not bounded because you can do it over and over again.

Every time you are still increasing the package of the ancestors. That on its own will eventually confirm. You will have paid for something.

The question is how much of a blowup compared to current relay cost are you doing? Current relay is very strict. There should be no way to relay anything such that you pay less than 1 satoshi per vbyte of the thing you’re relaying. Full stop, you should always pay at least that. What you’re saying is “Yes you can relay more but you’ll pay something”. It is true, you’ll pay for something because you are increasing the fee rate. If you don’t require that you pay an absolute fee for the things you evicted you are potentially paying substantially lower than 1 satoshi per vbyte. The question is how much of a blowup is acceptable, how much of a blowup is it? To make this argument you’d need to go quantify exactly how much blowup can you do, what have you reduced the relay cost to from 1 satoshi per vbyte? I don’t think any of these proposals have done that.

That is something that should be easy to compute. I can try to have a look at it before London. I will discuss this with folks who will be in London.

I will start to review Eugene’s zero conf thing. People can ping on the issue once they have that ready for interop. Then maybe by a meeting or two from now I will have some Taproot PTLC stuff ready and make t-bast’s gist a little more concrete.

I don’t know if anyone replied to the gossip thing I threw out there. I did promise last meeting I’d put some meat on that proposal. It is still way off.

\ No newline at end of file +Meetup

Name: Lightning specification call

Topic: Agenda below

Location: Jitsi

Video: No video posted online

Agenda: https://github.com/lightning/bolts/issues/957

Organizing a Lightning Core Dev meetup

I was talking about organizing a face-to-face Lightning Core Dev meetup. If I understand correctly there has only been one formal one and that was in 2019 in Australia. Or have there been two?

Milan, the kickoff. There have only ever been two.

That was probably before my time in Bitcoin.

Do we get to Bora Bora? Did that come up on the list?

I think it is high time that we meet in person. I know there was one last fall but Rusty couldn’t travel and a bunch of folks couldn’t. Let’s try to get one organized that has as good a participation as possible from the Lightning spec developers. If anyone has questions we can discuss them right now. I just wanted to give a heads up. I plan to send a survey out just to get some sense for locations and dates that work for folks. It will probably be impossible to accommodate everyone but I’ll do my best to take that information and find a way to get something scheduled in the next few months. I suspect it will be the April, May timeframe. I am going to try to make it work, at least by May, with something that works for everyone. Are there any quick questions or things I should be thinking about when trying to organize this? I’ve organized a couple of Core Dev meetups in the past, we’ll take lessons and learnings from that. As they develop we’ll reveal plans and get feedback from everyone to make sure it is as valuable as possible.

There are a few things popping up randomly that maybe people will be at. I know some people are going to be in London, some people may be in Miami. If we can piggy back maybe that can work but London is in like 3 weeks.

I’m probably the only Lightning one but I’m not going to the UK. I know a number of Core devs aren’t going to the UK either.

Some of our people are going but they are already in Europe, it is a skip. Not a long distance.

I’m happy to go to Europe. Because of the lawsuit, entering the UK is currently off the table… at least until we finish the jurisdiction challenge.

I forgot that was happening in the background.

As these Bitcoin conferences occur, with some subset of us there, let’s meet up and make progress, and work towards this one where the majority of us can hopefully attend.

Use warning message in quick close

https://github.com/lightning/bolts/pull/904

This is a one liner to use a warning. This should be easy to integrate. Another thing that we could discuss about that PR is the point Laolu raised. We could add a feature bit for that. I don’t think we need to and I think Matt doesn’t think we need to either.

If this thing can save you a bunch of chain fees maybe you want to find people that promise to always do it. That was the rationale there. Otherwise you have the fallback thing. Maybe you end up paying more because they are doing weird stuff. I started implementing this, I need to go back to my PR.

We can separately discuss having a feature bit for the quick close thing itself. But this in general is just adding an extra warning message on the wire. I don’t know why we would add a feature bit for just that.

People send warnings today. I get some issues in lnd, an unknown thing. Maybe we should add more logic in there basically. I think c-lightning sends one if you already have a channel that is closing and you try to do a new one. I think Carla has an older PR for that, we just need to revive it so we can see when things are going wrong.

I think we send them too now. It is just a new message type that you are supposed to log.

We had an older PR that was waiting for stuff to settle down but it is merged now so we could move forward with that. I’m trying to find it now.

I agree it is completely orthogonal to a feature bit so maybe have a quick look at that PR.

Offers

https://github.com/lightning/bolts/pull/798

We’ve been working a lot on offers on eclair recently. It is making steady progress. I put one comment on the PR about blinded paths in the payinfo but apart from that I don’t think there’s much to discuss on our side.

I did the classic pre-meeting “I should do something”. I went through and applied a few changes, a few typos and things. There was a compat break that I have implemented with compat mode. We seem to be pretty close. I look forward to seeing how interop goes once you’ve got that working. There was a request from Matt. We’ve got blinded paths but we don’t have simple route hints. The problem is you can’t pay via a blinded path until we’ve got the pay via blinded path stuff. He was wanting this shim that I’m reluctant to do but it is practical. I can give you a single unblinded hop. We hold our nose and implement it for now. Then eventually, in the wonderful unicorn, flying car days, we’ll have full blinded payments and we can get rid of it.

To be clear I am not going to agitate strongly for this. I think it would let us deploy a little quicker. Obviously we would deprecate it within a year. Then we would remove it after another year. But if the answer is no I am definitely not going to advocate very strongly for this and push back. I leave it up to you.

I need to dive a bit more into that. I don’t yet know how much work I will have to do. I will know more in a few weeks and can then make a recommendation. Right now I suggest we keep it in until we realize that it is too much and we want to ship without it. Then we may want to remove it. I added a comment today which is potentially another breaking thing so you may want to take a look at it. It is about using a single blinded hint for the whole blinded path instead of one per hop. That is something that would change the wire requirements. We need to decide whether we want to do it or not. While I was reviewing Thomas’ PR on eclair to add offers I realized that there is this thing which is a routing hint saying how much fees and CLTV delta to use for the parts of the blinded path. Thomas understood it as there must be one for every hop in every blinded path that is in the offer or the invoice. The way I understood it was we only need one per path and we should apply the same fee and CLTV for all hops in the path to hide them more. You don’t want to show there are different fees here. Different fees are an unblinding vector, and a single value also takes up less space.

You still have to indicate the number of hops though.

Yeah. You still have to have an unencrypted blob for each of the hops. But for the fees and CLTV you just provide one value that should work for all the hops in that route. It is more lightweight in the offer and in the invoice. Especially if you add dummy hops at the end of the blinded route you don’t have to repeat new fees and CLTV expiry that takes up more space for no reason. It also forces you to have something that is uniform and works for the whole path which makes it hopefully harder to unblind.

Does that mean you have to go through all the nodes across all the different payment paths, find the one which is charging the highest fees and apply that ubiquitously to every single node?

In a way that is already what you do. When you include the path you have all the data for all these hops so you just select the highest fee instead of selecting the right fee for each of the hops.
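
A rough sketch of that aggregation, assuming hypothetical per-hop hint fields (fee_base_msat, fee_proportional_millionths, cltv_expiry_delta; names and numbers are illustrative, not spec text): the writer of the offer or invoice collapses the per-hop values into a single payinfo for the whole blinded path by quoting every hop the same worst-case value.

```python
# Hypothetical per-hop hint values for a 3-hop blinded path (illustrative only).
hops = [
    {"fee_base_msat": 1000, "fee_proportional_millionths": 100, "cltv_expiry_delta": 40},
    {"fee_base_msat": 500,  "fee_proportional_millionths": 300, "cltv_expiry_delta": 144},
    {"fee_base_msat": 200,  "fee_proportional_millionths": 50,  "cltv_expiry_delta": 40},
]

def aggregate_payinfo(hops):
    """Collapse per-hop hints into a single payinfo for the whole blinded path.

    Every hop gets quoted the same worst-case values, so the hints reveal
    nothing about individual hops and the encoding is smaller.
    """
    return {
        "fee_base_msat": max(h["fee_base_msat"] for h in hops),
        "fee_proportional_millionths": max(h["fee_proportional_millionths"] for h in hops),
        "cltv_expiry_delta": max(h["cltv_expiry_delta"] for h in hops),
    }

print(aggregate_payinfo(hops))
# {'fee_base_msat': 1000, 'fee_proportional_millionths': 300, 'cltv_expiry_delta': 144}
```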

If you are doing something smart you obfuscate those numbers. It doesn’t help a huge amount because they can still probe. We have a plan for that, that’s later. You put something inside the onion to say “Don’t accept below this fee because they are trying to use it to probe you.” It is not a break for me because I don’t write this field at the moment. We can certainly change it. It is a simplification, it makes sense. You could imagine a case where I am feeding you a blinded path where one is higher. You could argue if it is such an obvious vector then don’t put that in the blinded path, start with the blinded path after that.

Or just use the higher value for everyone. One other thing I was arguing in the route blinding PR is that it may be frightening for the sender to see that there is a high fee to pay for the blinded part of the route. But actually you could reverse that and make it be paid by the merchant. The merchant would discount the value of the real item and would actually pay for the fee of the blinded path himself because it makes sense. The merchant is trying to hide themselves so they should pay the fee for the blinded part of the route.

I buy the argument. If you are paying for liquidity you will end up with this one hop that is potentially significantly higher. But at the moment fees on the Lightning Network are still low. I ACK that, I will write some verbiage around it. Change it to a single value that applies across the route, I like it.

Zero conf channels

Previous discussion: https://btctranscripts.com/lightning-specification/2021-11-22-specification-call/#simple-turbo-channels-enablement

We have a pretty comprehensive implementation of it but there was that one thing that we left, the channel type or not. Maybe here I can explain our use case or flow versus the reject on accept. For me it is a fail early versus fail after they send you a non-zero value on min depth. In our case, people like Breez are already doing zero conf channels today. If Breez is starting to use Pool to acquire a zero conf channel for onboarding, in our case we have a channel acceptor that looks at the order in Pool and says “This is not the channel type, we reject.” With this we need a channel acceptor acceptor. Whenever someone sends you an accept message you need to have that hook there. We already have a pre-accept hook versus a post-accept one. Adding the channel type here would let us know if they are actually going to do the thing. Some people commented that they could do whatever anyway but at least we have that first protection. You can say “They can do that in any case but the protocol is what we expect. The extraneous stuff can be handled on the side.” We have a full implementation. We should probably test some stuff on the side. That’s the one thing. We want a channel type so we can at least say “I want to open a zero conf channel to you”. Whereas right now this is saying “I will allow you to do zero conf after it is extended”. That is a slightly different flow.

Don’t you already have some kind of hooks on the accept channel message? There’s tonnes of fields in the accept channel message that users will presumably want to filter on?

The way we handle it, depending on how you do it, you either have a function closure that tells you what to post to the other party… Up until now there has never been a need to do anything on accept channel. Whenever someone sends you an open_channel message that’s when you’d say “I only want private channels. I don’t support this feature bit. They have a green alias, I don’t like that.” That is what people use pretty widely today.

You said on receiving open_channel? These are two different directions. You mean before you send open_channel?

Us and other implementations have something like a channel acceptor when you receive open_channel. You are saying “Do I want to accept this channel since the default things are currently one way?”

Let’s separate the conversation between outbound and inbound channels.

I’m concerned with accepting. Let’s say Breez acquired a channel for their user, a node on the network is now opening a channel to your mobile.

So it is an outbound channel?

Yes it is an outbound channel. The way it is setup, the maker is always the one that is going to be opening the channel, in this case the person who is opening the zero conf channel. Right now in our flow the user would see the open_channel, assuming there is a channel type and whatever else, see it is not zero conf and then reject it. Otherwise it would need to accept it and then later on have an exception down the line that they send a min_depth of a different value. That’s the flow.

You flipped it on us again. You are talking about the side that is accepting the channel, not the channel opener. And you want to filter on the open_channel message itself.

Yes. We do a similar thing. If someone wants anchor only, because we have a feature bit or a channel type there they can say “That’s not an anchor channel. I’m rejecting it” and everything moves forward like that. I don’t see a reason not to add a channel type here if it can make peering and general protocols built on top of it more explicit. We can fail quicker rather than failing later. With failing later, we would only see the min_depth once the accept_channel comes back.
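
As a minimal sketch of that fail-early filter (hypothetical hook and names, not any implementation’s actual API): the acceptor inspects the channel_type carried in the incoming open_channel and rejects immediately if the expected zero conf type is missing, instead of discovering a mismatch later in the flow.

```python
ZERO_CONF_BIT = 50  # placeholder feature-bit position, purely illustrative

def accept_open_channel(open_channel, expect_zero_conf):
    """Hypothetical channel-acceptor hook run when open_channel is received.

    Rejecting here fails the negotiation up front instead of after the
    channel flow is already under way.
    """
    channel_type = open_channel.get("channel_type", set())
    if expect_zero_conf and ZERO_CONF_BIT not in channel_type:
        return False, "expected a zero conf channel type"
    return True, ""

# Example: a Pool-style order where the user expects a zero conf channel.
ok, reason = accept_open_channel({"channel_type": {12, 22}}, expect_zero_conf=True)
print(ok, reason)  # False expected a zero conf channel type
```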

You said this is for the case where a user has received an open_channel and then is going to make some decision based on that open_channel and then send a response or an accept_channel. But once you’ve received that open_channel you now have all the information. The min_depth is only in the accept_channel. Presumably the node that is opening the channel, if you tell it it is zero conf, is just going to accept that because why wouldn’t it? In my understanding of the way we’ve done it, and the way c-lightning has spoken about implementing it, just seeing the open_channel and knowing what you are going to write in the accept_channel is sufficient to know whether the channel will be zero conf.

That’s the difference. Y’all are saying zero conf all day every day. We are saying zero conf under these very precise scenarios. It wouldn’t be a default thing for the world. I don’t see any downside and I feel like it makes certain protocols more precise because you can fail earlier. We have a lot of feature bits, we already have a channel type here too. Maybe certain channels can’t support zero conf in the future.

And multi funder is a whole other discussion.

We have the ability at the protocol level to allow that filtering to exist in the future by having the zero conf bit here.

In the case where you’ve received an open_channel message, you say “I’m going to do zero conf with this channel”. Presumably at that point you’ve done further out of band negotiation. Obviously you are not going to accept zero conf from anyone, you are going to say “This node, we’ve already negotiated this and that”. Why can that out of band negotiation not be the thing that decides this instead of having it be an in-protocol negotiation? First you negotiate, you know you are going to do zero conf with this node, you get a channel from that node and then you do an additional negotiation step and say “It must be zero conf”.

This is when things are extended. At that point maybe they are eligible. But in this case whenever you send it I know it is there at runtime. We always try to verify at the lowest layer. Let’s say we are doing this thing and it is not in the feature bit. Then the user sends min_depth zero and, for whatever reason, the other party says “No”. At that point you have a weird silent failure. Now the receiver is saying “Zero conf” rather than the proposer. If the proposer initially gets the accept_channel and then does nothing, UX wise it is hard to have a consistent flow there.

I don’t understand why. You’ve already negotiated it. The initiator and the acceptor has negotiated it. The acceptor says “Yes zero conf” and now the initiator is like “Actually no, not zero conf”. Then you’re saying “The UX is going to be confused.” Of course the UX is confused, the initiator is buggy.

If we have a flow here where we know the initiator is doing a weird thing from the beginning we are able to make things a lot more consistent. The way it works, we ignore them, we do matching again and we can proceed. While with this one it is a weird indeterminate thing.

The channel now has to wait for a few confirmations. So what?

But now the user’s expectation is trashed. “I thought I was getting a zero conf channel. I can’t use it at all. Is the wallet broken?”

It is the user’s fault.

It is not the user’s fault. It is the protocol not being able to surface and allow things to be explicit like this. Can you explain the cost of adding a channel type feature bit here? In the future maybe certain channel types aren’t zero conf friendly at all. We are able to add a provision for that from the get go versus in the future realizing that the min_depth dance isn’t sufficient for whatever new super Taproot covenant channel type people come up with.

I am just trying to understand exactly the differences here in terms of the flow.

The UX is inconsistent because after everything is under way things can break down. Versus just saying “We’ve checked at the beginning. You have the open_channel rejected there.” Then we can at least say “We weren’t able to find a match for you” versus “We’ve found a match but then it turned out to be a counterfeit good basically”. It is like buying a car, they told you it was manual and it is automatic. You are like “What is this? I can’t drive this thing.” That’s the framing. It is about making sure the user is buying or selling the good they expect, validating as much as we can upfront.

The problem with this PR is it conflates two things. One is if you do zero conf you need some alias mechanism, you need a name for it before it is confirmed. That’s almost useful. We’ve decided we like that. Whether you are doing zero conf or not it is nice to have this alias facility, private channels and stuff like that. That’s almost a PR by itself. The problem with zero conf is if you say “Yes I want a zero conf channel” are you committing to trusting them with the channel? I can open a zero conf channel, let you open it, pretend to go along and then never forward anything until it is confirmed. But presumably when you’ve said “I want a zero conf channel” you are all in and ready to trust them with this. Or you are on the side that doesn’t require trust. That is what you are trying to signal.

One other slight thing here with the way Pool works, we like this because it increases the set of signers required to double spend. For example if I have a batch of 5 people opening a channel it requires all 5 of them to double spend rather than just the person that was opening. It also requires us to double spend as well too. It increases the total set of collusion that is necessary in order to double spend the output. The reason they can’t double spend is they are in a UTXO that is a 2-of-2 with us. They would need us and every other person as well to double spend the entire batch. That’s the one difference security model wise with how this works in this setting. It is like a coinjoin where everyone has a timelocked input basically. The input will only be signed if things look good. The trust stuff is explicit. That’s another reason to add a channel type there. “Do I want to accept this zero conf thing?” You are right that there is a double opt-in. We are just trying to make it more explicit. It is more sensible if we know zero conf stuff can’t work for every channel type.

Originally the channel types were just to get around this hack. There were some features we had to remember. If you negotiated that at the beginning that made sense for the whole channel lifetime independent of what’s in the future. But generalizing it to “This is not persistent state but this is stuff about this channel”. It is not objectionable.

If we want explicit signaling I would strongly suggest we switch to a TLV in open_channel rather than making it a channel type.

That’s exactly what we have. We have a TLV that is a channel type in open_channel.

Internally from the architecture, when we switched across I just went through and changed everything to channel types internally. It was so much nicer. Instead of all these ad hoc things going “Is this an anchor channel? Is this a static remote key channel?” it suddenly became this bit test of the channel type, this field that went all the way through.

For us it allows us to move our implementation closer to the protocol. We already had the channel type before but now it is one-to-one. It is a different enum or whatever but same thing.

In retrospect we should have done this originally. There are two things. One is do you get an alias? I think the answer is everyone wants an alias. You need an alias if you are doing the zero conf thing obviously. But the way the spec was written is that you’ll get one. I think this is nice. I am actually not all that happy with a channel type the more I think about it. But I do want to go and implement it and see what that does.

It does feel weird because channel type is all stuff that is only persistent.

It is today but maybe that is inflexible thinking. My only caveat on this, it is not necessarily a huge issue, you open a channel and you go “I expected that to be zero conf”. You can specify that afterwards. We were going to have an API where you could go “Trust this node ID”. Obviously if you open a new channel it would be zero conf but you could also zero conf an existing channel by going “I’m going to start ACKing it”. Assuming that it had support for the alias so you were ready to do that. You would end up with a zero conf but you would never have signaled a zero conf. I guess you are free to do that.

Presumably the way y’all would implement that is that even if your counterparty says “6 confs” you will always send the funding_locked immediately after you broadcast the funding transaction if you are the initiator. Is that what you are thinking?

Yeah. If you are the initiator and there is no push_msat. And in our case with dual open, if you’ve got no funds on the line we will just go “Sure whatever”, we will zero conf stuff.

Why does push_msat matter?

If I have all the funds in the channel then I can use the channel immediately. If you screw me you’ve just lost money. But if I’ve push_msat to you you can push stuff through the channel.

You are still presumably not going to double spend yourself. It just prevents you from double spending the funding transaction? The idea is that you’d like to be able to continue double spending the funding transaction? The initiator pushes msat to the other counterparty, it is all your funds but you’ve given it to their key?

Specifying push_msat puts you at some risk of them getting the money. If it is single conf even if your double spend fails you still have everything.

Presumably you were ok with them getting the money because you’ve pushed msat to them?

If you wanted to scam them maybe you wouldn’t do that.

The guy who accepts the push_msat, if it is a payment for something that has been semi-trusted and done before: “I will push you msat because I opened this channel because you opened a channel to me. I opened a channel back in response. I will push you money through that.” But if you accept it as zero conf and they double spend it you lost that msat, and maybe you opened a channel in response. It is more the guy who accepts the push_msat that has a risk in accepting zero conf.

You can generalize this for the dual funding case.

This is an interesting question then. Basically the channel type or TLV or whatever would say “Either send me an accept with zero min_depth or I’m going to immediately close the channel.”

Or send a warning message or error and whatever else.

The initiator will still always send a funding_locked immediately and the receiver can still send a funding_locked immediately if they want to. The feature bit is only an indicator of either you do this or I am going to close the channel.

It should also indicate that you have some degree of trust, that you will route. I could send whatever I want across the wire and not consider the channel open until… I think it should imply that you are going to let them do zero conf.

They can just deny any channel that has this set. Maybe that helps, maybe that doesn’t. They at least have that ability.

The thing is it should flag that they are trusting the zero conf, not just that they are walking through the protocol.

It should say that they must, not just that they can. If you see this bit and you are going to send an accept_channel that does not have a zero conf min_depth you must fail the channel.

Negotiation has failed at that point.

It is not optional.
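
Stated as a rule, the negotiation being described looks roughly like this (a sketch only, not spec text; field names follow the discussion): once the zero conf type has been signalled in open_channel, the opener treats any accept_channel whose min_depth is non-zero as a failed negotiation.

```python
def check_accept_channel(zero_conf_negotiated, accept_channel):
    """Opener-side check of the responder's accept_channel.

    If the zero conf channel type was negotiated, a non-zero min_depth
    means the negotiation failed and the channel must be closed.
    """
    if zero_conf_negotiated and accept_channel["min_depth"] != 0:
        raise ValueError(
            "negotiation failed: expected min_depth 0, got %d"
            % accept_channel["min_depth"]
        )

check_accept_channel(True, {"min_depth": 0})    # fine, proceed to funding_locked
# check_accept_channel(True, {"min_depth": 6})  # would fail/close the channel
```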

On the alias being decoupled, do we like that in the same combo still? The alias thing has a feature bit already right?

Yes.

The bit means you must only route through the alias, not just that there is an alias offered. The feature bit is not just for the alias itself.

The feature bit is weird. It is like “Only use this one. I really understand what I’m doing and I only want you to use this. Discard the real short channel ID.” This is kind of what you want. But whether we should use a different feature bit for that, I am going to have to look back. We do want a way to say “I am all in on this whole alias idea.”

It should be all or nothing.

But for backwards compat or “I don’t care. It is going to be a public channel but it is zero conf for now” I can use an alias and then throw it away. This is where the alias becomes a bit schizophrenic. We’ve got these two roles. The feature bit would say “We’re all alias and we are only alias”. It is kind of overloaded. Maybe we should go back and change the bit number. If I switch it to a channel type I’ll see what happens.

I thought you did switch it to a channel type.

That is a channel type. Because that is something you have got to remember forever. The same bit would be used for the other channel type so now I have to find another one.

I think it is an example of the advantage of the feature bit. You can have these as individual things. Zero conf and the alias only or you can have all of them. That’s nice in terms of the bit vector comparison thing.

This is where I’m coming around to they are separate things.

A different feature bit.

Yes. Part of the roadblock we hit is that we put them both in together. It became this logjam.

Zero conf requires aliasing though yes?

In order to receive before confirmed yes but maybe we don’t care about that eventually.

Yes. If you don’t have an alias then all that can happen is they can push sats through you but you can’t use the channel.

It is kind of useless without that. Does that mean that you intend to split this PR into two? Or are we going to continue? It sounds like LND already has an implementation, LDK has one.

We have one of everything. The only divergence is the upfront feature bit check. I am cool with keeping it as is and we maybe throw out the bits. Once we have that squared up we can look at cross compat testing.

Add a TLV rather than defining a new bit.

We have a TLV.

Add a TLV that says the required bit.

I’ll edit this PR for now. If I was smarter I’d have split it into two. I don’t think it is worth splitting now.

It is pretty small, not a multi file mega thing.

Action Rusty to do another pass, make a channel type and see what happens, how bad it gets.

Zero reserve

I’m going to jot that down on the PR. One other thing related to this is zero reserve. Eugene is implementing this and asking questions about zero reserve. Right now I let you cheat me for free but maybe it is not useful unless we have it both ways. I think he was wondering: do you always do zero reserve? I think right now technically if you send zero it is in violation of the spec. I think we have a “must be greater than zero” requirement.

We accept it, we do not allow you to send it currently. We may at some point allow you to send it. We accept it, maybe in violation of the spec.

Must set greater than or equal to dust_limit_satoshis.

If you set that to zero there is that weird interaction. I looked at a really old Breez PR, I found that it allowed zero reserve but it didn’t because it would reject it if it was less than the dust limit. We also had some dust limit revelations a few months ago as far as interactions with other fields.

At least in our codebase I don’t think there’s a weird interaction. If the output value is less than the dust limit you still have to remove it.

Otherwise you’d have a weird situation where I make the reserve on your commitment transaction below dust which means it can’t propagate. Maybe I can do that by rebalancing or something like that.

There is still a dust_limit_satoshis.

The issue is you can end up with a zero output transaction.

As long as your dust limit is non-zero. You still remove the output.

No you remove all the outputs, that’s the problem. That’s not a valid transaction.

A zero output transaction, I see your point.

By forcing a reserve you are saying that someone has an output at all times. I think that was the corner case that we ended up slamming into. Maybe it doesn’t matter. What are you doing with your life if you’ve managed to turn everything into dust? I don’t know if that is real but I remember that corner case.
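
To make the corner case concrete (a simplified sketch; numbers and the trimming model are illustrative, with no anchors or commitment fee modelled): with no reserve, every output on a commitment can individually fall under the dust limit, they all get trimmed, and what remains is a transaction with no outputs at all, which is not valid.

```python
DUST_LIMIT = 546  # sats, illustrative

def commitment_outputs(to_local, to_remote, htlcs, dust_limit=DUST_LIMIT):
    """Simplified trimming: any output below the dust limit is dropped
    (its value just goes to fees)."""
    return [v for v in [to_local, to_remote, *htlcs] if v >= dust_limit]

# With a reserve, at least one side always keeps a non-dust output:
print(commitment_outputs(to_local=10_000, to_remote=400, htlcs=[500, 300]))
# [10000]

# With zero reserve and everything pushed into sub-dust pieces, nothing survives:
print(commitment_outputs(to_local=200, to_remote=0, htlcs=[500, 500, 400]))
# [] -> a zero-output transaction, which cannot be broadcast
```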

I know Breez is running an LND fork and they are already doing this in the wild.

If the other guy lets you have zero reserve on your side it is all a win for you. It is only for the other guy that it is a risk.

Exactly. If you say I can have zero I’m fine with that. That doesn’t necessarily give you what you want. You want to be able to do “send all”, close the app and walk away. But right now people have that weird change value lingering. I’m not sure how the mobile apps handle it in practice.

That’s why we did zero reserve on Phoenix, to be able to do “send all” and to have users send all of their balance out, there is nothing remaining.

Because otherwise it is weird. You have this value you can’t move and people will get frustrated about that.

We had users who were like “No I need this or I can’t ship”. I think we have separate dust enforcement around our outputs. That’s a good point, there may be a corner case where you could hit a zero output transaction.

That was the killer. It is unspendable. In one way you don’t care, on the other hand it is UTXO damage.

It is application level brain damage at that point.

So this is one of those more investigation required things?

Write in the spec what you do if you end up in this case. Or figure out a way to avoid it. Say that “The minimum channel size must be such that you can’t be all dust”. Though that isn’t actually possible because the threshold for trimming HTLCs depends on your fee rate and stuff like that.

If it is only allowed when anchor outputs with zero fee HTLCs is used then you have no risk. There is no fee-based trimming to dust, it is only the dust limit on HTLCs. If your channel is not really small you will always have outputs in there.

What do you mean there is no dust? What if we just move the reserve to the anchor output? Maybe that would solve it.

He’s two steps ahead. I was suggesting that you make your channel size such that you can never have it all dust. But that is not possible in a classic channel because your HTLC size that gets trimmed depends on your fee rate. But as he’s saying, that is not true with zero fee HTLC anchors. Modern anchors, that is not true anymore. You could just figure out what the number is and don’t have a channel smaller than this and you can have zero reserve. Maybe that’s the answer.

Can you explain the making sure you never have dust? You mean you have a minimum channel size that is just above dust?

To avoid this problem where you end up with zero outputs because everything is dust, if you blow away the reserve requirement, you could fill it with enough HTLCs that are all dust. Suddenly you have got zero outputs. You want to avoid that. Figure out what the numbers are. It depends on the max number of HTLCs and your dust limit. But it no longer depends on the fee rate which was the impossible one. If you put that as a requirement, you’ve got to have modern anchor and you’ve got to have larger than this number, formula whatever, then you can have zero reserve. I think that covers the corner case.
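
A back-of-the-envelope version of that bound (assumptions: zero-fee HTLC anchor channels, so the only trimming is against the HTLC dust limit, and anchors plus the commitment fee are ignored for simplicity): the worst case is both balance outputs and every allowed HTLC each sitting just under the dust limit, so any capacity strictly above that total always leaves at least one surviving output.

```python
def min_zero_reserve_capacity(max_accepted_htlcs, dust_limit_sats):
    """Smallest capacity (sats) that cannot be turned entirely into dust,
    under the simplifying assumptions above: two balance outputs plus
    max_accepted_htlcs HTLC outputs, each just below the dust limit.
    Note the result does not depend on the fee rate."""
    worst_case_all_dust = (max_accepted_htlcs + 2) * (dust_limit_sats - 1)
    return worst_case_all_dust + 1

# e.g. 483 accepted HTLCs and a 546 sat dust limit:
print(min_zero_reserve_capacity(483, 546))  # 264326 sats
```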

You could probably also get away with that Antoine PR that is overly specific. Presumably at this point every implementation has their own separate dust limiting functionality and you could also lean on that depending on how you implemented that. Maybe that is too weird.

It seems like anchors makes it possible for you to compute what this minimum channel size should be to make sure nothing is ever fully dust. You always have enough funds left over after paying for the fees of HTLCs, the first level output that is.

Independent of fee rate which is nice.

There is still one edge case. If the fee rate goes really, really high and you are not capping it because of anchor. If you don’t have any HTLCs and the fee rate goes to the roof… there is only one guy paying the fee…

As long as the guy paying the fee is the one with all the balance and the other one’s balance rounds to dust.

It is possible in theory, yeah.

You may have to put a clause in the fee rate requirements saying you don’t do that. “Don’t set a fee rate such that you would end up with zero outputs”. Figure out exactly what to test rather than just saying that. Assuming we can work that out, why are we suggesting this is a new channel type? A zero reserve channel type?

I don’t see why it would be.

The argument is “This is the kind of channel I want”.

It would be the exact same reasoning of the previous discussion. They can always send it and presumably you negotiated it in advance.

Similar thing. This would give them the level of guarantee they have today. But more broadly in the network. By them I mean people like Muun and Breez that already do it.

It is one of those things that are pre-negotiated.

That does touch on what Lisa is doing on dual funding. On dual funding she says that the reserve is not negotiated, it is only a percentage. There is a boolean saying “Include it or not include it”. You decide it at what step exactly? In which message? I don’t remember.

I don’t think I have added the boolean thing yet. We’ve talked about adding it.

At the moment it is 1 percent. The 1 percent is known at negotiation time, you choose the protocol you are using. I haven’t added the boolean thing yet.

Even if we had the boolean it would be after discussing the channel types. So maybe it would be the same thing as for zero conf. If we want to know it upfront then we do need a channel type.

I think it makes sense as a channel type. It also is a feature bit. You know what you are getting. The reason I like 1 percent reserve is the same kind of reason. I could tell you exactly what channel size you will have after putting in this many sats. If we make it a channel type it falls automatically into dual funding anyway.
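
To illustrate the predictability point (trivial arithmetic, purely a sketch): with a fixed percentage reserve you can quote the post-funding spendable balance up front, with nothing left to negotiate.

```python
def spendable_after_funding(funding_sats, reserve_ratio=0.01):
    """Spendable balance once a fixed-percentage reserve is set aside."""
    reserve = int(funding_sats * reserve_ratio)
    return funding_sats - reserve, reserve

print(spendable_after_funding(1_000_000))  # (990000, 10000): 990k spendable, 10k in reserve
```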

Does anybody know if you can get zero outputs with just one side sending zero channel reserve? Because there’s an asymmetry here?

Both sides have to have zero reserve right?

What he’s saying is: ignoring this, I can send it and you don’t send it. Presumably we have asymmetric dust limits, is there some weird edge case there?

Only pre any HTLCs right? You could start with a zero balance on one side before any HTLCs have flown?

https://github.com/lightning/bolts/issues/959

There was another PR on ours that we were looking at but I guess this clarifies things. If it is a bit we would modify the bit vector to make sure it is only the new anchors or whatever anchors and go from there. Now we can test some stuff out in the wild again, I can take a look at Eugene’s monster PR.

RBF

One topic that has been discussed a lot recently is RBF. I don’t know if you’ve followed all the discussions on the mailing list about RBF. I am really eager to see people in meatspace and discuss it more.

Why isn’t it as simple as just deleting that other code? If it was me I would just have a pure delete PR.

It is a denial of service attack against Bitcoin Core nodes. The problem is all of this stuff very quickly turns into either a) it is not actually optimal for miners or b) it is a denial of service attack against Bitcoin Core nodes such that you can’t possibly implement it without breaking everything.

Wouldn’t you keep the whole min step thing? Each replacement still needs to replace a certain amount?

One issue is that if the transaction package is going to confirm in the next block, it is actually not optimal for miners to accept a replacement that is smaller but has a higher fee rate. You decrease the value of the next block which is exactly the thing you don’t want to do.

Why wouldn’t a miner have a bigger mempool and keep the conflicts? You could check only the ancestor package and accept things that have a bigger ancestor package than the things they are replacing, not care about descendants. The miners would keep more and would keep conflicts, for them it would make sense.

You are saying that you look at just the part that is in the next block and then you look at whether or not that has a higher total fee?

No. The code that we would use for RBF on the relaying nodes would not be the exact same code as what the miners would do.

Russell O’Connor’s proposal from 2018 is still the best in my opinion.

If a Bitcoin Core node is making decisions that are different from what is being mined you are denial of service attacking yourself. Fundamentally by relaying and by validating and doing all the work you are spending time doing something. If that transaction is something that the creator of that transaction knows will never get mined then they know they can do this all day long.

You are not always doing the optimal thing because there may be descendants. I want to ignore descendants when evaluating whether a package is better than another package. Ignoring descendants, it is much easier because it doesn’t vary from one mempool to another if you only look at ancestors. You still force the ancestor package to increase.

The ancestor package but what about the descendants? You’ve kicked out the descendants and the descendants are a free relay issue. If I can add a bunch of descendants to the mempool and then do something that kicks them out without paying for the descendants then I can do this over and over again and blow up your CPU.

I believe the conflict is fundamental here. The miners don’t care how much spam you’ve got to get through to get to them; maximizing the fees in the next block is their priority. The network priority is to minimize network traffic. These two are in conflict. They are absolutely in conflict. Currently Bitcoin Core biases towards protecting the network rather than optimizing for miners. That is not long term incentive compatible.

What you are saying, you don’t care if you are throwing out children. The point is you could also have a world where you just simply don’t accept these children. If you are wasting traffic adding all these children, I guess this gets into the distinction between top block versus not. These descendants being in the top block versus not. Fundamentally if you really wanted to rearchitect the whole mempool from top to bottom what you would really do is say “I am going to aggressively relay things that fit in the next block. Beyond that I am going to aggressively rate limit it. I am going to have a fixed bandwidth buffer for stuff that is not in the next block. If it is in the next block I’ll relay it.”

You have less spam problems at that point because anyone trying to spam with tiny replacements is at risk of getting mined at any point. It also gives them emergency override to go “I need to override this so I am going to slam it with a fee that is going to get in the next block and everyone will relay it.” It is also much more miner compatible. Fundamentally the concept of a mempool as being a linear array of transactions that is ready to go into blocks at any point is not optimal for miners. They should be keeping a whole pile of options open and doing this NP complete thing where they are rejuggling things, that is not going to happen.

What is practically optimal for miners is also what they can run in their CPU in a reasonable amount of time and can actually implement. There is an argument to be made that what is optimal for miners is always do whatever Bitcoin Core does because it is probably good enough and anything else takes a lot of effort which may screw you if you make an invalid block.

On your point about evicting descendants being costly. Is it really because it is bounded? You don’t have chains of descendants that can be longer than 25.

It is not bounded because you can do it over and over again.

Every time you are still increasing the package of the ancestors. That on its own will eventually confirm. You will have paid for something.

The question is how much of a blowup compared to current relay cost are you doing? Current relay is very strict. There should be no way to relay anything such that you pay less than 1 satoshi per vbyte of the thing you’re relaying. Full stop, you should always pay at least that. What you’re saying is “Yes you can relay more but you’ll pay something”. It is true, you’ll pay for something because you are increasing the fee rate. If you don’t require that you pay an absolute fee for the things you evicted you are potentially paying substantially lower than 1 satoshi per vbyte. The question is how much of a blowup is acceptable, how much of a blowup is it? To make this argument you’d need to go quantify exactly how much blowup can you do, what have you reduced the relay cost to from 1 satoshi per vbyte? I don’t think any of these proposals have done that.
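
As a rough sketch of the rule being described (an illustration of the idea only, not Bitcoin Core’s actual code, function names or constants), the replacement has to pay for everything it evicts in absolute terms, plus at least the minimum relay feerate for its own size, so evicted bandwidth is never relayed for free:

MIN_RELAY_FEERATE = 1  # sat/vbyte, the floor referred to above
def replacement_pays_for_relay(new_fee, new_vsize, evicted_fees):
    # Must at least match the absolute fees of everything being evicted...
    if new_fee < evicted_fees:
        return False
    # ...and additionally pay for its own bandwidth at the relay floor.
    return new_fee - evicted_fees >= MIN_RELAY_FEERATE * new_vsize
print(replacement_pays_for_relay(new_fee=5200, new_vsize=300, evicted_fees=4800))  # True
print(replacement_pays_for_relay(new_fee=2000, new_vsize=300, evicted_fees=4800))  # False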

That is something that should be easy to compute. I can try to have a look at it before London. I will discuss this with folks who will be in London.

I will start to review Eugene’s zero conf thing. People can ping on the issue once they have that ready for interop. Then maybe by a meeting or two from now I will have some Taproot PTLC stuff ready and make t-bast’s gist a little more concrete.

I don’t know if anyone replied to the gossip thing I threw out there. I did promise last meeting I’d put some meat on that proposal. It is still way off.

Unfairly Linear Signatures

Adam Gibson

Date: June 12, 2018

Tags: Schnorr signatures

Category: Meetup

Media: https://www.youtube.com/watch?v=mLZ7qVwKalE

Slides: https://joinmarket.me/static/schnorrplus.pdf

Intro

This is supposed to be a developer meetup but this talk is going to be a little more on the theoretical side. The title is “Unfairly Linear Signatures” that is just a joke. It is talking about Schnorr signatures. They are something that could in the near future have a big practical significance in Bitcoin. I am not going to explain all the practical significance.

Outline

To give you an outline, I will explain what all these terms mean and how they fit in shortly. The theoretical basis, what is a Schnorr signature? Where does it come from? Why is it done the way it is done? Is it secure? That’s what these early theoretical parts are about. After that we will look at uses of Schnorr signatures, specifically in Bitcoin or related to things in Bitcoin. I’m also expanding this onto things called CoinSwaps and multisig with ECDSA. These later parts of the talk are about ways we can use Schnorr signatures and to a lesser extent ECDSA to do interesting things. This first half which is more theoretical, I will get you to let me know how you feel about it. Do you find it interesting? Is it all gobbledegook? Or do you find this really cool and you want to learn more about it? In which case you may need to ask someone who knows more about it than me. My level of knowledge of this is ok, high level but I am not going to give you a full university lecture about all the theorems and all that stuff. It is more high level. The reason I want to look at it is that I think the things we use when we are developing on Bitcoin sometimes all seem a bit magical. Why is the formula for ECDSA the way it is? What is an elliptic curve? This stuff is a bit magical. To the extent I can I want to give you an intuition and a feeling for what this object is. I think the Schnorr signature is particularly suitable for that because it is much simpler than a lot of other things. If we started to have a talk about the elliptic curve addition formula and write out all the algebra, we would have to go into some incredibly long spiel about mathematics. It is very complicated. We don’t need to do that. ECDSA is a little bit more complicated than Schnorr. Even though Schnorr is coming after ECDSA in Bitcoin, Schnorr is actually simpler to understand. That is what we are aiming to do here. The second half will be a bit more practical about the applications.

Commitments - 1

To go anywhere with this we first have to understand the idea of a commitment scheme and introduce the topic of a commitment. I’ve called it the telephone game, like Chinese whispers. Everyone knows about tossing coins. Heads or tails and you choose. That is very well known. If you wanted to do that over a communication channel, Chris and I were going to bet on a toss of coin, Chris is going to call it heads or tails and I’m going to toss the coin. The problem is that if we are over a telephone then we can’t see what each other is doing. It is not safe. If he calls “Heads” I can just say “I’ve tossed it and it was Tails”. He doesn’t know if I am lying, I probably am lying if there is any incentive. And vice versa, it is not safe for me if I toss the coin first and say it is “Heads” and Chris says “I chose Heads.” It is not safe. How do you avoid that problem over a communication channel? The marvelous thing is that it turns out that you can avoid that problem specifically if you have a one way function. If you can get something where given the output you can’t retroactively figure out the input you can achieve this goal. Everyone here is familiar with hash functions right? Using a hash I hope you can see that this is the right way to solve that problem. Maybe there are some tweaks that you’d need to make it a completely perfect protocol. The general idea is if Alice hashes the choice that she makes and then gives Bob the hash then Bob can safely reveal the result of tossing the coin. There are some details to that. If Alice just hashes the string “tails”, without a long random string, and sends that 32 byte hash over to Bob over the telephone wire or the communication channel, that would not really work. Do you know why that wouldn’t work? Bob can pre-compute the hash of “heads” and the hash of “tails” and compare the result. The protocol without that part is a failed protocol because it doesn’t have a property called hiding. A coin toss first of all needs to hide the result. The problem of doing just a hash of “tails” is that it doesn’t hide the result. Whereas if we include the long random string we are actually getting the hiding effect of whatever the result was. The other effect we have is we need it to be the case that having done this I can’t change the result of the coin toss. That’s what we call binding. That is where the hash function’s property comes in. Having computed the hash of “tails” + long random string Alice can’t then lie after Bob has revealed his result. If it is “heads” she can’t then make up a random string that gives the same hash as “heads”. If she puts “heads” here she can’t find a random string that gives the same hash output. That is second preimage resistance, I believe is the technical term. We want hash functions to have specific properties. That is the idea of a commitment. That solves that problem. It gives a way so that Alice and Bob both can’t cheat. Bob can’t know what Alice chose.
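
A minimal sketch of that coin-toss commitment, assuming SHA256 as the hash (the random salt provides hiding, the hash provides binding):

import hashlib, secrets
def commit(choice):
    salt = secrets.token_bytes(32)            # the long random string: hiding
    return hashlib.sha256(salt + choice).digest(), salt
def open_commitment(commitment, salt, claimed_choice):
    # Binding: only the originally committed choice reproduces the hash
    return hashlib.sha256(salt + claimed_choice).digest() == commitment
c, salt = commit(b"tails")                    # Alice sends only c before the toss
print(open_commitment(c, salt, b"tails"))     # True when she opens honestly
print(open_commitment(c, salt, b"heads"))     # False: she cannot change her call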

Commitments - 2

The toy example that we just gave illustrated two key properties, binding and hiding. A commitment scheme has three aspects. A set up, a commitment and an opening. A commitment is when Alice produces her hash result. An opening is when Alice says to Bob “This is the data I used to generate the hash.” That’s commitment schemes. It is very important for what we will discuss next. The zero knowledge proof of knowledge and Schnorr signatures. We can do the same thing using elliptic curve points. This is going to crop up increasingly frequently as I talk. Is everyone familiar with the idea of adding two elliptic curve points together. I don’t mean drawing out curves and all that stuff. Let’s talk about private and public keys because it is something we are all familiar with.

P=xG

G is a generator point, a point on the curve, a specific publicly known generator point multiplied by a scalar number giving another elliptic curve point. An elliptic curve scalar multiplication. It means G+G+G…. x times. It is not a fundamental operation, the fundamental operation is adding two points. It is a group with an addition operation defined for it. Of course you can add the same point to itself. We call that 2A = A+A. We can do commitments using elliptic curve points instead of using cryptographic hash functions.

Pedersen commitment

We take lower case letters to mean scalar numbers, actually numbers modulo the order of the group. We scalar multiply one elliptic curve point H by one scalar and another elliptic curve point G, the generator point, by another scalar. x is the message we are committing to and r the random string we had in the previous slide.

C = rH + xG

That is the basic idea. There is a lot of stuff. It is really interesting and very relevant to all kinds of things. We can’t do everything in one day so I’m not going to talk about that right now.
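
As a sketch of the algebra of C = rH + xG, not from the talk itself: integers modulo a prime stand in for curve points here, so this has none of the hiding security of a real curve and only shows the linear structure.

import secrets
q = 2**31 - 1              # a prime standing in for the group order
G, H = 5, 7                # stand-in "points"; xG is modelled as x*G % q
def pedersen_commit(x, r):
    return (r * H + x * G) % q
x = 42                                  # the value being committed to
r = secrets.randbelow(q)                # the blinding randomness
C = pedersen_commit(x, r)
print(C == pedersen_commit(42, r))      # opening with the right (x, r) checks out
print(C == pedersen_commit(43, r))      # a different value does not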

Zero Knowledge Proof of Knowledge

The next concept that I think is essential to understand the Schnorr signature is the zero knowledge proof of knowledge. Most people will at least have heard of a zero knowledge proof. A couple of months ago I started getting very interested in bulletproofs, this new system for confidential transactions. What is going on here? We can use a commitment scheme as a tool to help us prove that we know a secret. It is not exactly the same as the telephone game. We are going to prove that we know a secret value using a commitment but without revealing that secret value. It is already a strange thing to claim that you can do. If you have never thought about it before you should realize that every time you make a Bitcoin transaction you are doing that more or less. Because you are proving that you know the private key to validate the spending of the coin. It is connected but not exactly the same thing as a signature. How do we use a commitment scheme to prove that we know a secret without revealing it? Alice revealed her “heads” or “tails” so this is a step forward. This is another level of abstraction we are going to have to go up. We have got our secret here but we can introduce another secret just to be confusing. That new secret which is just going to be a random number I make up, I am going to commit to another random number. Then you are going to have to give me a challenge value and I am going to take that challenge value and combine it in a certain way with my commitment and my secret to give you a new value which you know I can only have produced if I knew the secret. The general structure I have just described here, where I commit to something, you give me a challenge and then I give you something back that proves it, is called a Sigma protocol. The nature of the interaction being this way, back, this way and at the end there is a verify. It very vaguely looks like a sigma when you draw it out as an interaction diagram. They often draw diagrams like this. The prover P does something and sends something to the verifier V. The verifier does something and sends it to the prover. And so on and so on.

Q - It sounds like a Diffie-Hellman exchange?

A - A Diffie-Hellman exchange is not a Sigma protocol but it has some similarities.

Sigma protocol

Here is our Sigma protocol. This is an example, it is not all Sigma protocols but it is the canonical example, the most important one. It is basically what we just said. Alice has a secret x. Bob has P which is the public key, the curve point corresponding to x, P = xG. <- means chooses randomly from the set of all scalars between 1 and n-1 where n is the curve order. Alice chooses a random k, sends R=kG, notice the pattern is the same as the pattern there. She is doing scalar multiplication so that R, although it is derived from k, does not expose k. In the same way as your public key does not expose your private key. She sends R over to Bob. Bob chooses a completely random e from scalar values. The last step is that Alice calculates s = k + ex. She takes her original random value k and takes the challenge value Bob gave her, multiplies it by her secret value, adds the two together and returns that to Bob. Can you see why Alice sending s to Bob proves to Bob that she knows the secret x? What could Bob do with this value s? If we crudely put a G here and we put a G on everything else do you see what happens? Taking s and multiplying it, scalar multiplication, by the generator point G is the same as kG + exG. R=kG. exG = eP.

Let’s go back a step. We know the general idea that a private key times the generator point gives a public key A. aG = A. Note I am using capital letters for curve points and lower case letters for scalar numbers. bG = B. If we add these together we get aG+bG = A+B or (a+b)G = A+B. All the normal rules of multiplication and addition still apply in this context. The only thing you must avoid getting confused about is you can’t do things like this: AB. There is no such thing. We have a group and the group has an operation defined on it which is addition. It doesn’t have any multiplication defined on it. When we were talking earlier about multiplication it was scalar multiplication. If you have a vector you can multiply it by 3 or 7 or 8. That is scalar multiplication. Same thing here because you just add the vector to itself n times. Or in this case add the point to itself n times. When we see equations like this we take this scalar equation, an equation with only numbers, and we kind of lift it onto the curve. We multiply G by each one of those scalars. Then we look at what we’ve got. We’ve got sG, that is a new thing. Then we’ve got kG which is R and we’ve got exG which is eP. Why is that important? Because Bob the verifier has all these values. Bob has P, that was defined at the start. He has e, he chose it himself. He has R, notice he does not have k, that is very important. But he has R so he can add eP to R. He can verify whether or not it is equal to sG. If it is, the claim is that Alice must have known the private key x. That’s what we are going to discuss next.
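
Here is the whole three-step exchange as a minimal sketch, not from the talk itself: integers modulo a prime stand in for curve points, the discrete log is trivial in this toy group, so it illustrates only the algebra and none of the security.

import secrets
q = 2**31 - 1                 # stand-in group order
G = 5                         # stand-in generator; "xG" is modelled as x*G % q
x = secrets.randbelow(q)      # Alice's secret
P = x * G % q                 # her public key, P = xG
k = secrets.randbelow(q)      # step 1: Alice commits to a random nonce
R = k * G % q
e = secrets.randbelow(q)      # step 2: Bob's random challenge
s = (k + e * x) % q           # step 3: Alice's response
print(s * G % q == (R + e * P) % q)   # Bob checks sG == R + eP -> True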

Sigma protocol - reasoning

This is the center of the presentation. We want to reflect on it. What if we removed the first step? Remember there were three steps. Suppose Alice didn’t bother to send a R. The protocol involved only Bob sending an e. Alice can’t return that anymore, it doesn’t exist so she would be returning s = ex. Would that be a good idea? Would that be secure? There are two dangers here from both sides. The danger to Alice is that Bob learns her private key. We must have it so the protocol does not allow that. The danger to Bob is Alice can generate this value without knowing the private key. We could call that forgery. We want to make sure the protocol doesn’t allow either of those two dangers. If we changed it and wrote s = ex where e is sent from Bob to Alice do you see any problem with that?

Q - You could divide s by e and get x?

A - We multiply s by the modulo inverse of the value e because this is modular arithmetic. It is basically the same as dividing, you can do that because it is a scalar not a point. This version is not secure because Alice has not succeeded in hiding her value.

The first part cannot be removed because without the k in k + ex, x is easily extracted. k is needed to blind the value of x. What about if we removed the second step? If instead Alice generated k and then sent R = kG to Bob and Bob didn’t do anything at all. Then Alice said s = k+x. She sends R and then she sends s. This is a non-interactive protocol so it would be really cool if it worked. What is the problem there? How does Bob verify this? What would Bob’s verification step be? Bob never gets k in the normal protocol, he gets R. Bob would do sG = R+P. He would take the s and multiply by G. He would take the R he has been given and add it to P which is known at the start of the protocol and check if it is equal. This would be valid mathematically but the problem is that because Alice wasn’t forced to get s after an interaction she is able to do something like this. Firstly she makes up an R’ for which she knows the k’. Then she subtracts P from the R’. This is what we call a key subtraction attack. It means that when Bob checks that sG = R+P he doesn’t realize that Alice did R = R’-P. She knows that R’ = k’G. When she puts that in there it just becomes (R’-P) + P so P cancels out and is irrelevant now. She has forged the proof that she knew x by subtracting P, and with it x, out of the equation. She can’t do that if we include this second step of a challenge. That is why it is an interactive challenge. k protects Alice and e protects Bob. You can do a lot more thought experiments about this. It is not the only way of achieving this goal but maybe it is the simplest.
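
The key subtraction attack on the challenge-free variant can be written out in the same toy group (again a sketch, not from the talk; the algebra carries over to real curve points). Alice passes Bob’s check sG = R + P for a key whose secret she does not know:

import secrets
q = 2**31 - 1
G = 5
P = secrets.randbelow(q) * G % q   # someone else's public key; we never learn its x
k_prime = secrets.randbelow(q)     # Alice picks an R' = k'G that she does control...
R_prime = k_prime * G % q
R = (R_prime - P) % q              # ...and announces R = R' - P
s = k_prime                        # her "response" is just k'
print(s * G % q == (R + P) % q)    # Bob's check sG == R + P passes: a forgery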

Schnorr protocol and signature

What we just talked about is not the Sigma protocol, it is the Schnorr protocol which is one example of a Sigma protocol. The Sigma protocol concept is more general and it means anytime you have this pattern: commit, challenge, response. When I was looking at bulletproofs, the academic background of bulletproofs has several examples of the same structure but more complicated. Maybe you have several commitments or several challenges but it is the same pattern. What we looked at though is the specific example called the Schnorr identity protocol. An identity protocol because it is Alice is proving her identity in a way. She has a private key representing her identity, it is a bit like a password. She is proving that she knows the secret.

ZKPOK - Definitions

What about security? To talk about the security of this construction we need to talk about the definition of a zero knowledge proof of knowledge. I think this is quite self explanatory. It is an abstract term but keep in mind that example where someone is trying to prove they know a secret. It could be more complicated than that but let’s keep it simple. Obviously it is necessary if I do know the secret I should be able to prove it to you. Equally obviously it is necessary that if I don’t know the secret I should not be able to prove it. That is these two definitions here, completeness and soundness. Completeness is always really trivial to prove because it is just like saying “Does the equation actually balance?” If we wrote the equation wrong s = k+ (alpha)ex and no one knew alpha then we can’t produce an answer so it is not complete. Soundness, even though it is the flip concept it is much more difficult to prove that I can’t forge the secret. Zero knowledgeness is the really crazy one. I am going to prove it to you but I am not going to reveal any information doing so. This is critical for a digital signature scheme if we are going to use it for cryptocurrency. Hopefully we don’t reuse address but if we did or we do and we start making lots of signatures on one key, if somebody can get any information from that at all it could be a complete disaster. That is a concrete example of why this concept is so important. It has got much wider applicability than that. We want the proof to reveal absolutely nothing apart from one bit of information. One bit, either I know the secret or I don’t. I was thinking about that today, what if I know half the secret? Have I revealed half a bit of information? I remember reading about if you have non-randomness in your nonce generation then you have these lattice attacks, even from a tiny one bit of non-randomness you can get the secret.

Soundness

This is a wacky part and if it doesn’t make sense to you join the club. To try to prove that the proof that we have is sound, that nobody could produce the proof other than the genuine owner of the secret, we need to imagine the prover as being constrained or isolated. You can think of the prover as a program or algorithm. Obviously in real life the prover is an algorithm controlled by a human who happens to know the secret. But you can think of it as a function or a piece of code that contains within it the secret and follows the steps: commit, challenge, response; and produces the correct answer and passes the test so to speak. We imagine the verifier trying to extract the secret. The idea of the soundness proof is that he can’t do that. Can he extract the secret if he cheats? Here cheats can only mean cheats with a Prover that executes as normal but we create different Provers in different universes. This sounds ridiculous. I have heard it described in different ways but the way I really like to think of it is the Prover is sitting there, maybe it is on my computer. It is a very small function. Inside that function somewhere must be the secret. It can’t produce the correct verification if the secret is not in there. So what if I’m a programmer and I start running the Prover and then I use break points, I debug it and go into debug mode. I make a second copy of the program, nothing is stopping me doing that. In the literature they usually describe it in terms of rewinding and going back to an earlier step. I prefer to think of it as making two copies and going forward one step in one copy and keeping the first one earlier on. Think of the Prover as being isolated as a program that we can control. In that scenario can we construct the secret from that Prover? The program will do something like Step 1 generate a random k, produce R and output R. Step 2 on input of a random number e calculate k + ex and output s. There is a set of steps there. Parts of it don’t have to be deterministic because you generate a random number k. The point is if you can run two copies of it separately…

The three step protocol is Prover chooses k and sends R to the Verifier. The Verifier generates e and sends e to the Prover. s = k + ex is calculated and s is sent. The verification was sG = R + eP where P = xG. The Prover commits, generates k and sends R, the Prover has done Step 1. Verifier branches the Universe, he makes two copies of the program at that point. He sets the break point. He stops the program after the sending of R and makes two copies. Verifier challenges in both. Instead of this flow being one flow there are now two flows, a second flow where he sends a second e and the Prover following its algorithm will calculate a second s. s_2 = k + (e_2)x. It won’t be the same because although the k is the same, the commitment step is the same, and the x is the same, the challenge is different.

Soundness - 2

You imagine the Prover as a dumb program, it doesn’t know what it is doing. It has got a secret inside it which I represent as a dollar sign, $. The cheating Verifier, we call it the extractor, has total control of its environment. Stops it, copies it and runs it twice. If you do this protocol twice with the same k value and the same private key x you can trivially extract x and k. Take one equation s_2 = k + e_2(x) away from the other equation s = k + ex. This is a very well known weakness both in Schnorr and in ECDSA which is that you can extract the private key if you use the same k value, which we call a nonce, twice on the same private key. What does that all mean? It means that if we could control it like that we could extract the secret. We can’t control it like that because if we could this whole thing wouldn’t work. Every time you sign the thing I could take the private key and I control you. But I don’t control you. Why am I saying this? In a scenario that is not real. There is no God, there is no controlling the Prover. What does it prove? It proves that x is known by this machine. Imagine you have 100 dollars, God can steal 100 dollars from you because he is omnipotent. But can God steal 100 dollars from you if you don’t have 100 dollars? Even though he is omnipotent he can’t. He could imagine 100 dollars, put it in your pocket and then take it out but then he wouldn’t be stealing it from you. This is the basis of this argument. The basis is that you can only extract something that is already there. No matter how ludicrous or ridiculous the set up of the proof is, the fact that the secret comes out means that the secret was in there. What this does is it proves that that protocol that we described does actually have knowledge soundness. It is impossible to forge the signature or the identity of Alice. k is essentially a private key, it is just an ephemeral private key that we use once and throw it away. Are you familiar with the term nonce? n once, a number used once.
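
The extraction can be written out directly as a sketch (toy group again, not from the talk): two responses that share the same k but have different challenges give two linear equations in two unknowns, and x falls out. The same arithmetic is what breaks real nonce reuse.

import secrets
q = 2**31 - 1
G = 5
x = secrets.randbelow(q)            # the secret sitting inside the Prover
k = secrets.randbelow(q)            # the nonce that gets reused in both runs
e1, e2 = secrets.randbelow(q), secrets.randbelow(q)
s1 = (k + e1 * x) % q               # response in the first copy
s2 = (k + e2 * x) % q               # response in the second copy
# Subtract the equations: s1 - s2 = (e1 - e2) * x, then divide mod q
recovered_x = (s1 - s2) * pow((e1 - e2) % q, -1, q) % q
print(recovered_x == x)             # True: the secret was extractable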

Zero-Knowledgeness (HVZK)

This is also weird but the opposite problem. To remind you we showed how to prove soundness, vaguely but we gave the general idea. Zero knowledgeness, we want to prove now that doing this protocol Alice doesn’t reveal any information to Bob. This is very important. I am going to sketch it. What we want to do is produce transcripts of the protocol that look as if they are genuine. What am I talking about transcripts? Think of this as a conversation between Prover and Verifier, Alice and Bob. Prover sends R to Verifier. Verifier sends e to Prover. Prover sends s to Verifier. You can summarize this as (R, e, s). The conversation that the Prover and the Verifier have with each other. This is a transcript of the conversation. The great insight of the original zero knowledge paper written by Shafi Goldwasser and others in the mid 1980s was imagine you have a lot of different protocol executions, lots of conversations with the same private key. It doesn’t follow by the way that if you do lots you get this problem as long as you don’t use the same k. If we have k_2 then we don’t know k, k_2 or x.

Q - What is the probability of generating the same k?

A - It depends on the number of bits. It has to be a scalar so if the group is size of 256 bits then it is 2^256. But if the group is tiny… Bitcoin is slightly less than 256 bits. It is the same level of security you have with private keys. You generate private keys randomly, you can generate as many addresses as you like in your wallet. Nobody is going to tell you “You might accidentally collide” it is not going to happen.

Notice even if you make two signatures or two conversations with the same private key as long as you use different k you are fine. Because you have three unknowns and two equations there. It will always be underdetermined. You can’t solve it. In normal operation we make lots of these conversations. It is interesting that the values in the conversation (R, e, s) are all random numbers. Even though R is a point not a scalar it is still a random value. What if you forged lots of conversations that looked similar? Could I make another conversation (R’, e’, s’) in such a way that it would look like a proper conversation to an outsider? Is it possible to do that?

Q - Do you mean convince them that the proof is valid?

A - The reason there is confusion here is because we are talking about an interactive protocol. If you and I engage in the protocol we just discussed I can prove to you that I own the private key. But if you take the conversation we had and gave it to someone else it doesn’t prove it does it? Because you had to be the one who chose the challenge.

What they said in the paper was if we can set it up so that we can create a whole set of conversations that look valid, specifically in the sense of sG = R + eP being validated, then it must be the case that no knowledge is exposed. If an adversary, an attacker, can make lots of fake conversation transcripts that look exactly the same, at least in terms of distribution (obviously they are not literally the same), as real ones then he can’t be receiving any information when he receives real transcripts because he doesn’t know the difference. It is a very weird argument. How it works in practice is that we have the notion of a Simulator, someone who simulates these conversations. All the Simulator needs to do is work backwards. That’s the trick. What makes this work is causality. The fact that I can only do that after you do that. But if we can violate causality, create s randomly, then create e randomly what would we do next? Solve for R. We can solve for R because we have s and e. You can do sG - eP = R. Notice interestingly you don’t get k but you do get R and that is all you need to make a conversation transcript that looks like a real one. If I can make an infinite number of fake conversation transcripts that look exactly the same to an attacker as real ones… By look the same I mean statistically look the same, obviously not the same numbers. You can make lots of R, s and e. If the distribution of fake transcripts is indistinguishable from the distribution of genuine ones it proves that zero information is conveyed. What statistical distribution are we talking about here? The uniform random distribution, I’m not going to try to prove that.

Q - Are you saying that in the proof of this statistics comes into it?

A - I think it could. I glaze over it when it gets to that level of detail. It wouldn’t be distributions like exponential or Poisson.

Q - You could have all these transcripts, real and fake transcripts, but there is no way to convince someone that a transcript is fake. Therefore there is no knowledge inside it. The fact that you can’t convince someone that it is real. It could be real or fake. It is indistinguishable.

A - Absolutely.
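
To make the simulator argument concrete, here is the work-backwards trick in the same toy group (a sketch, not from the talk): choose s and e first, solve for R, and the fake transcript satisfies exactly the same check as a real one even though no secret key was ever involved.

import secrets
q = 2**31 - 1
G = 5
P = secrets.randbelow(q) * G % q       # a public key; the simulator knows no x for it
s = secrets.randbelow(q)               # pick the "response" first
e = secrets.randbelow(q)               # then the "challenge"
R = (s * G - e * P) % q                # and solve backwards for the "commitment"
print(s * G % q == (R + e * P) % q)    # verifies exactly like a genuine transcript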

Fiat-Shamir transform

This is an annoying add-on but it is really vitally important in practice, what is called a Fiat-Shamir transform. We have been talking about Sigma protocols and specifically the Schnorr identity protocol as an example. If we want to make it non-interactive, it is no use just having protocols where I prove to Chris, I prove to you etc. We need something where I can prove and make the proof transferable. Prove it to the whole world so everyone can see that I know the secret. To do that we need to make the protocol non-interactive, to remove this step. We just spent an hour explaining why we need that step so how the hell does that work? The way we do it is very ingenious. We use again a hash function. We need a random oracle, that is the technical term. But let’s think of it as a hash function. What we do is we hash the conversation up to the point of the challenge. In this case that is very simple. That is just the value R. If I hash R I create a random value output which we could not have predicted. Because we could not have predicted it we can’t cheat. Remember we had the problem that if you removed this step we found that it was not secure because we had k + ex and you could subtract the public key easily. But if you hash this you still do get a number here but it is not a number you can predict at the start. That is the key. We don’t know e in advance and so we can’t do an attack like that to get s without knowing x. I am trying to give you a really concrete explanation of the concept. The concept is to hash the conversation up to the challenge and make that the challenge. That is something that Alice can do on her own. Bob doesn’t need to be there. She just hashes R. It is like simulating the challenge. So now the Schnorr signature. What is it? It is exactly that, what we have just discussed. Using that Fiat-Shamir trick because otherwise we can’t make a signature non-interactively. Of course we have to put the message somewhere. We include the message in the hash. We may or may not, it is better if we do, also include the pubkey. It doesn’t actually hurt if you think about it. You can put anything in there. We put the message that we are signing. Now it is s = k + H(m | P | R) x. e has changed from being a random number to being a hash of the message and crucially the point R and possibly the point P. We have changed our identity protocol into a signature scheme. Hash one-wayness enforces ordering of steps in absence of Verifier enforcement. A | B represents concatenation of A and B. m and P and R are all hashed together.
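
Putting the Fiat-Shamir step together with the earlier protocol gives a complete sign/verify sketch (same toy group as before, SHA256 standing in for the random oracle; real Schnorr signatures do this over secp256k1, not this toy group):

import hashlib, secrets
q = 2**31 - 1
G = 5
def h(*parts):                          # hash-to-scalar: e = H(m | P | R)
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q
def sign(x, m):
    P = x * G % q
    k = secrets.randbelow(q)            # fresh nonce, never reused
    R = k * G % q
    e = h(m, P, R)                      # the "challenge" now comes from the hash
    return R, (k + e * x) % q
def verify(P, m, R, s):
    return s * G % q == (R + h(m, P, R) * P) % q
x = secrets.randbelow(q)
P = x * G % q
R, s = sign(x, "pay Bob 1 coin")
print(verify(P, "pay Bob 1 coin", R, s))    # True
print(verify(P, "pay Bob 2 coins", R, s))   # False: the message is bound into e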

ZK RO

We’ve got most of the theory out of the way. These next few points are a bit vague. We talked about zero knowledge but we didn’t really talk about the fact that that proof is really limited. It is what is called Honest Verifier zero knowledge because we are assuming that the e value, the challenge given by the Verifier, is random and not deliberately malformed in some way. The way we get round the fact that all these proofs were based on interactivity is we program the random oracle. The hash function is what is called a random oracle. Does that phrase mean anything to anyone? Forget about hash functions for a minute. Our task is to create something that produces a random output given any input but does it deterministically. Imagine I’m the random oracle, you give me an input string “Hello”. I’ve got this massive table and I write in my table “Hello”, output “03647….”. The point is that if anyone says “Hello” I am going to respond with the same thing. But if someone says “Goodbye” I’m going to make up a new random value. A hash function is an attempt to create that effect using some mathematical tricks.

Q - Is the hash function secret? Are you announcing what the hash function is?

A - When they go through all these security proofs they use this concept of the random oracle model. The thing about hash functions is they are actual concrete mathematical functions. We don’t know for sure that they behave exactly the way we intend them to. From a theorist point of view they define a random oracle as an ideal of what the hash function should do which is to output deterministic random values for any input. There are no secrets involved. It is just that you can’t predict what the output will be given the input.

Q - You are able to calculate the output because you know the hash function. But does the person who is giving you the input know what the hash function is?

A - It is the same as any hash function in that regard. Anyone can do it. The point is that they can’t predict the output so you get this forced causality.

The Simulator programs the random oracle. Program the random oracle to output the e that you want it to output in order to make this proof still work even though we are not interacting anymore. Matthew Green has got two really good blog posts on zero knowledge proofs. In the second one he talks about this and how it is a really dodgy proof. I’ll let the experts argue about that.

Reductions to ECDLP

So what security do we have? We have some concept that all of this is sort of secure. We half proved that you can’t forge it and you can’t extract the private key. But it was all based on a very basic assumption, which is the very first thing I wrote on the whiteboard: P = xG, the public key and the private key. The idea is that you can’t get x from P. In the elliptic curve context that is called the elliptic curve discrete logarithm problem. The problem of extracting the elliptic curve discrete log of P, which is x, given only P. It is not impossible because you could simply look through every possible value of x and you could find it. We call it computationally hard because we don’t have algorithms that calculate it quickly. We treat it as a one way function but it is not perfectly true. Given enough computational power you could find the input simply by exhaustive search. There are algorithms that allow you to do it. ECDLP is lost to a sufficiently powerful quantum computer. It can be shown that if an attacker can extract the private key from a Schnorr signature, they can also solve the ECDLP. We can say under a certain set of assumptions if the elliptic curve discrete logarithm problem is not cracked, solved in sub exponential time, then the Schnorr signature is secure.

Reductions to ECDLP - 2

The claim is based on an argument along the lines of what we’ve just discussed. If you have an adversary that is able to crack the signature scheme, an adversary that, given one of this type of protocol, can extract the x just by verifying or looking at it. Let’s say you are able to do that with probability epsilon, you are not going to crack it every time, just one time in ten let’s say. If we have such an adversary that adversary can crack the elliptic curve discrete logarithm problem. There are two slightly different things. Here’s a Schnorr signature. Let’s say I claim that I am the holder of the private key and I can make a signature. And let’s say Chris actually holds the private key and I don’t. If I can impersonate Chris by making a Schnorr signature it doesn’t necessarily mean that I have the key. It might be that there is something broken. What this argument says is that if I can impersonate Chris it means I can solve the elliptic curve discrete logarithm problem because I can extract the discrete log by impersonating twice. This is the same argument we used before, that if you use two different challenges on the same commitment you end up being able to extract x. Roughly your chance of success in extracting the discrete log is the square of epsilon. That is less. Instead of one tenth we are talking about one hundredth. They talk about these arguments as security reductions. The signature scheme is secure because we can’t solve the elliptic curve discrete log. If the signature scheme was not secure then I could do this and I could solve ECDLP. I don’t think anyone can solve ECDLP therefore the signature scheme is secure. This is what they call a reduction.

Q - You are proving they are equivalent problems. If you can solve one you can solve another.

A - But the annoying thing is that they are not literally equivalent.

Q - You have the complexity classes like P, NP. Non deterministic polynomial classes which are NP problems and we are simulating them in exponential time nowadays. You have the concept of reduction. What this means is if you solve one problem in a NP class and all the problems have the property of reduction with one problem you can solve all of them.

Digital signature security

I’m not sure if I’ve got much to say about this. I was reading about what is digital signature security. There seems to be a lot of different concepts. It makes sense but this is getting complicated now. You can talk about obvious things like is the signature scheme secure against someone extracting the private key? But also is this secure against forgery? Also is this secure against forgery if I’ve already got several examples of the signature? Also is it secure against forgery if I manage to do several signatures in parallel? I don’t know much about that stuff.

Q - It would need to be secure against all of those?

A - I think this in bold is the strongest definition of security. “Security against existential forgery under chosen message attack.” If I can get you to output signatures for messages that I choose you still can’t create a new signature.

ECDSA’s weaknesses

I want to make a few notes about ECDSA. I’ll stick to the same notation. P = xG. k = nonce. For ECDSA we have a different equation for the signature. Anyone want to tell me? s = k^(-1) (H(m) + rx). We publish the signature as two values, r and s. With the Schnorr signature we publish as either e and s or R and s. (r , s) is an ECDSA signature. What is r? r is the x coordinate of R (R_x). R = kG. This is the same as Schnorr but we then do something very different. We take the x coordinate. If I take the s value I have been given and take the modular inverse of it, which is annoying, and multiply it by the hash of the message times G plus the x coordinate of the R value (which is published as part of the signature) multiplied by the public key, and then take the x coordinate, it should be equal to r. That is how we verify an ECDSA signature.

s^(-1) (H(m)G + rP)|_x = r

As you can see the formula is more complicated. It uses modular inverse when you are creating it and when you are verifying it. It also does this weird thing of taking the x coordinate of a point. There are two or three different things that are a bit strange about it. This was created I think in 1992. It was created because shortly before that Claus Schnorr had patented the Schnorr signature and so the simpler version that we just studied they were not able to use. They had to use a different formula. I don’t know how they came up with this exact form. I am sure it has some good properties and some bad properties. I will note a couple of bad properties. Firstly because r is an x coordinate, there are going to be two points on the curve with the same x coordinate. This can be explained easily without pictures. The formula for our elliptic curve is y^2 = x^3 + 7. Given a particular x we have two different y values. Two different points (x,y) and (x,-y) on the curve for which this verification is possible. All we are checking is an x coordinate. That creates what you might call an intrinsic malleability in the signature scheme because it means that if (r,s) is a valid signature so is (r,-s): negating s negates the point computed in the verification, and a point and its negation share the same x coordinate. BIP 66 made sure that only one of these is standard in Bitcoin now. If you have a really old wallet it might generate signatures which don’t broadcast properly on the network. ECDSA fundamentally has that property, we can’t get around it because it is using the x coordinate. You could argue it is just a technical problem. Cryptographers have a serious problem with that because it is a bad property for a signature scheme to have. You shouldn’t have a situation where given a signature I can make another one that is different even though I don’t have the key. That is not desirable. There is a security reduction which we mentioned on the previous slide. It is difficult but as far as I understand it the cryptographers are not able to find anywhere near as clear a security reduction for ECDSA. It makes sense because of this weird x coordinate thing. I have a link to a paper (Vaudenay) if anyone is interested.

Q - The safety of Bitcoin shows that it actually works?

A - I think that is a fair point. Maybe it depends on the way you use it and if you change the way you use it… If we are in a position of not knowing it is not good. I am not trying to claim that the Schnorr signature is perfect. Given a certain set of assumptions it seems that you can construct a very elegant proof.

I do want to mention this. No linearity (especially over nonces due to funky use of x-coordinate). If you look at the Schnorr equation, it is very nice and linear. If you add the two equations together because there are no non-linear terms you get something that is consistent.

Leveraging linearity

What if we add signatures (s = k + ex) together? Alice and Bob can do a signature, add them together, we won’t know each other’s nonce values or private keys but we’ll be adding them. For that to be linear the e’s can’t be different. e must commit to both nonces like e = H(R_A + R_B | P_A + P_B | m). But this is insecure. It is insecure for the reason that we touched on right at the start. Why is that insecure?

Q - Alice can choose her public key to cancel out Bob’s public key.

A - That’s right.

Aggregation schemes

If keys P produced ephemerally, open to direct key subtraction attack, last player can delete everyone else’s key, disaster for multisig. This attack only applies if we don’t have the public keys fixed in advance. Or if the Verifier doesn’t know the separate public keys. Generally speaking it is not a secure approach because we can subtract other people’s pubkeys. We will go round the room and add our public keys together. The last person will make a public key that subtracts all the other ones. This is rather annoying because then you can spend the multisig on its own. It is supposed to be five people agree and he does it all on his own. That’s terrible, absolute disaster. This is called key aggregation. What you want is to make a signature where you publish on the blockchain only one public key. That public key I’m calling the aggregated public key here. In order to do that we are going to have to interact with each other unfortunately. Interact, share each other’s R values and share each other’s public keys and then build this rather complicated equation for the signature. What I am writing here is specifically the construction that they are calling MuSig. We are aggregating keys and the Verifier is going to be able to look at it as just one key. That is very good in terms of saving space and privacy. But it does require that all of us interact in order to create the signature. There is more than one way to do it. It turned out about a month ago that there was a flaw in the security proof of one way of doing it. But it was ok because there is another way of doing it, changing it from two rounds of interaction to three rounds of interaction. That doesn’t make that much difference. You should check the latest version of the paper because they changed something. The basic gist of it is that you interact and combine your keys in a way that is safe from this attack, so it is not the case that someone can do it on their own. And you have this property that it is private because you only have one signature and public key on the chain. Other people can’t tell even that it is a multisig. Not only do they not know who signed, they don’t even know it is a multisig. They only see a Schnorr signature. That is why this is so exciting to a lot of Bitcoin developers. They have been really focused on it for a long time.
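
The attack being described can be written in a few lines in the same toy group (a sketch, not from the talk): the last participant publishes a key chosen to cancel everyone else’s, so the naive sum of keys is a key only the attacker controls. MuSig avoids this by multiplying each key by a hash-derived coefficient before combining them.

import secrets
q = 2**31 - 1
G = 5
x_a, x_b = secrets.randbelow(q), secrets.randbelow(q)   # honest participants
P_a, P_b = x_a * G % q, x_b * G % q                      # they publish their keys first
x_m = secrets.randbelow(q)                               # the attacker goes last...
P_m = (x_m * G - P_a - P_b) % q                          # ...and cancels the other keys
P_agg = (P_a + P_b + P_m) % q                            # naive "aggregate" key
print(P_agg == x_m * G % q)        # True: the attacker alone can sign for the group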

Q - You could put a MAST script in that as well?

A - Taproot also uses that linearity. I have deliberately avoided talking about it here but it is a very important additional point.

Aggregation schemes - 2

As well as MuSig there is something called Bellare-Neven which existed before MuSig which is another way of doing the same thing. It doesn’t aggregate the keys in the same way because it would require you to publish all the public keys. Have people used a multisig before? If you ever do it in Bitcoin you will see that all the public keys are published and however many signatures you need are also published. It is a lot of data. Bellare-Neven has a cleaner security proof but since it requires your keys for verification you would have to publish them. Remember with Bitcoin verification is not one person it is everyone. You would have to publish all the keys. The latest version of the MuSig paper talks about three rounds of interaction because you commit to these R values first. Everyone doesn’t just hand over their R values, they hash them and they send the hash of the R values. That forces you to fix your R value before knowing anyone else’s R value. There is a link here. There are different ways we can use one or both of these constructions (MuSig and Bellare-Neven) which are slightly different but have this goal of allowing us to aggregate either the keys or the signatures. We could have multiple signatures aggregated together using these clever constructions. It is all based on the Schnorr signature and combining multiple Schnorr signatures together.

Q - The per transaction aggregation would mean there is just one transaction in a block?

A - A weird detail that you might not immediately think of is each input, it is not the same message that you are signing. Even though you are signing the same transaction you are signing a slightly different version of it. I think it is with Bellare-Neven that you can make one signature for all the inputs. Per block aggregation sounds intriguing.

Q - That sounds like it is batch validation. You add up all the signatures in a block and validate them at once. It says whether the block is valid or not.

A - That is something I forgot to include in all this. Batch validation of Schnorr signatures is possible but it needs a trick that is similar to what we talked about where you have the problem of things being subtracted. If you use the verification equation for lots of different signatures and add them altogether it has a similar security problem. It turns out it is easily solved by adding random terms to each part of the equation. That is another topic that I don’t know much about.

CoinSwap

I don’t think we have time to do CoinSwap. We’ve got how you can use a Schnorr signature to make CoinSwap better. Something called an adaptor signature. So what is a CoinSwap?

Q - You can think of it as Alice and Bob want to trade coins and they do it by way of putting their coins in escrow and releasing the escrow securely. In practice it is used for privacy. Alice’s coins can end up being possessed by Bob and the other way round. A bit like a mixer that can’t run off with your coins.

Q - A private atomic swap?

Q - An atomic swap with the intention of privacy instead of trading.

CoinSwap in 2017

Here is an approach that I and several other people worked on last year mostly although it has been talked about for many years. Greg Maxwell came up with the idea in 2013. The blue and orange is input and output. It is Alice and Carol, not Alice and Bob. The interesting part is at the bottom here where you have a HTLC (hash timelocked contract). The idea is that this output can be spent in one of two ways. Either somebody knows a secret value or after a timeout someone can spend it.

Q - Is that inside a script?

A - Yes inside a Bitcoin script. You have CLTV (CheckLockTimeVerify) or CSV. With CLTV you put a block number in there and it checks that against the nLockTime field. The nLockTime field only allows you to broadcast… The details don’t matter. The point is you put a block height in the script and that branch of the transaction is not valid until after that block.

What we are doing is linking two transaction outputs together and trying to create a situation where if one gets spent the other one gets spent. If it doesn’t get spent at all then it times out and the money goes back to the start, whoever paid it. We set up these weird complicated scripts but they are not private because they have a special secret value which links them. What we do is create an alternative output which looks like a normal output, and if both parties agree then the money goes to the normal outputs. It is only if one of the parties doesn’t cooperate that we have to use the non-secret branch. It is non-secret because it has a preimage of a hash which is a very special value that anyone can link. The only reason I included CoinSwap in this presentation is that it is one example of how the linearity in the Schnorr signature allows us to do clever things.
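
The two spend paths of that hash timelocked output can be modelled in a few lines (this is a logical model of the conditions only, not actual Bitcoin Script, and it leaves out the signature checks each branch would also require):

import hashlib
HASH_LOCK = hashlib.sha256(b"the linking secret").digest()   # published in the script
TIMEOUT_HEIGHT = 700_000                                     # illustrative block height
def can_spend(preimage=None, current_height=0):
    if preimage is not None and hashlib.sha256(preimage).digest() == HASH_LOCK:
        return True                           # secret branch: spending reveals the preimage
    return current_height >= TIMEOUT_HEIGHT   # timeout branch: refund after the locktime
print(can_spend(preimage=b"the linking secret"))   # True, but the preimage is now public
print(can_spend(current_height=699_000))           # False: too early for the refund path
print(can_spend(current_height=700_500))           # True: the refund path has opened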

Adaptor signatures - 1

We are by now familiar with the idea that s = k + ex where e is the hash of the message, the R value and possibly the P value. What you do here is you embed a secret into the nonce. You have some secret value which we are calling t and you add it to k. Effectively the nonce is now k + t and not k. Apart from that it is the same as before; notice we also put T here. This is the same pattern, we have just added an extra variable.

s = k + H(m|R|P)x -> s = k + t + H(m|R+T|P)x

We are working with a counterparty and we want to swap coins let’s say. That’s the obvious application. I am going to give Chris not this secret t value, but its corresponding public key T. I am also going to give him something else. This is the clever idea and it comes from Andrew Poelstra. What you do is not just give him that public value but give him something which is like a signature but is not a valid signature. We call that an adaptor signature.

s’ = k + H(m|R+T|P)x

Why is that not a valid signature? Because there is a T here. The rule of the signature is that this hash is supposed to contain the message, the nonce point and the public key. Here R + T is not the nonce point that k corresponds to. If you went through those equations it would not work out because k is not the discrete log of R + T. I send him the adaptor signature. Even though it is not a valid signature he can verify that it is an adaptor signature because if he multiplies by G he gets what he expects to get. He takes s’ which I have given him and multiplies by G and he asks himself whether or not it is equal to:

s’G ?= R + H(m|R+T|P)P

I gave him T before. He is doing the verification equation but he is missing out that there. Having verified that what does it tell him? It tells him that if the Prover gives him t then he can build a real signature. The starting step was I as Prover create t, set T=tG, send to Verifier T and s’ the adaptor signature. The Verifier checks the adaptor signature fits this equation which is not a proper signature verification equation because there is no +T which there should be to match. But he can still check it and he knows that if that check passes and later the Prover gives him t, the Verifier can construct the signature himself. All he does is take s’ and add t: s = s’ + t. He doesn’t know the private key. s’ blinds the private key in the same way as an ordinary signature does. By adding this t he will be able to create a valid signature corresponding to the Prover’s private key. That’s the basic idea. I won’t go through the details, it is complicated. The idea is we cooperate to make a multisig Schnorr key. The idea is that if we take two different adaptor signatures with the same t then when I spend my one I am going to reveal t. You are going to take t and apply it to your signature. There are two transactions. One is paying out to me and one is paying out to you. If you are the one who has t, if you spend your one and publish the signature it exposes t and allows me to add t to my signature and broadcast my transaction. Functionally it is exactly the same as this thing here. This says “Pay to Carol if she knows the preimage of a hash value.” It is the same thing. Here we are using this like a hash function. t is like the preimage and T is like the hash value. This is all coming from the linearity. The idea that signatures are added together or subtracted to do interesting things. There is another idea called discreet log contracts which is different but it is a similar idea where you use the fact that Schnorr signatures are linear. When one gets published on the chain you can add or subtract it and get some new secret. Or maybe claim some coins. There are many variants of that idea. There is a blog post about it here. It is worth mentioning deniability. The cool thing is when you publish a signature like this to the chain to claim the coins my counterparty is able to subtract s’ to get t. Nobody else in the world has any idea, it is just a random number. A very powerful idea.
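
Here is the adaptor flow sketched end to end in the same toy group (algebra only, not from the talk; real adaptor signatures do this over secp256k1): the verifier can check s’ against R + T, and revealing t later is exactly what upgrades s’ into a valid signature.

import hashlib, secrets
q = 2**31 - 1
G = 5
def h(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q
x = secrets.randbelow(q); P = x * G % q       # prover's key pair
k = secrets.randbelow(q); R = k * G % q       # prover's nonce
t = secrets.randbelow(q); T = t * G % q       # the hidden value and its point
m = "swap transaction"
e = h(m, (R + T) % q, P)                      # the hash commits to R + T, not R
s_adaptor = (k + e * x) % q                   # s' = k + H(m|R+T|P)x: not yet valid
print(s_adaptor * G % q == (R + e * P) % q)   # verifier's adaptor check passes
s = (s_adaptor + t) % q                       # once t is revealed, complete it
print(s * G % q == ((R + T) + e * P) % q)     # a valid signature with nonce point R + T
print((s - s_adaptor) % q == t)               # and the counterparty recovers t from s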

Adaptor sig in ECDSA

You can try to do something similar to what I just described using ECDSA but it requires a different cryptographic structure called a Paillier crypto system. It is completely separate to Bitcoin. It is more similar to RSA than to elliptic curves. It has some special property that might allow us to do something similar to that using ECDSA. Then maybe we can do completely private CoinSwaps today without Schnorr signatures. I have no idea when Schnorr signatures are coming to Bitcoin.

Q - MuSig was invented specifically for Bitcoin. It is being built from the ground up.

A - I wouldn’t say from the ground up. That Bellare-Neven scheme is similar. If you look at the MuSig paper it has a tonne of references to people who have tried to do this kind of thing before. All of these schemes were broken due to variants to that thing where you subtract other people’s keys away. It is not just a question of being broken, it is also a question of is the security proof actually valid? If we don’t have a proper proof of security it is scary. They have a detailed security proof for MuSig, it is really complicated. About one month ago Gregory Neven plus some other people wrote a paper explaining why the security proof of MuSig with only two rounds is insecure. Two rounds means we will make our R values and send them. Then we use the formula to make our keys and signature. We hand out our signatures and we add them together to get the result. This paper from Gregory Neven et al showed that the security proof is not valid. It was an argument against MuSig and a threshold signature scheme. It is not that we know the two round version of the protocol is insecure, it is that we don’t have a security proof of it that is valid. The three round has a solid security proof.

Q - Is Schnorr sufficiently battle tested?

A - The basic Schnorr signature is a much stronger security proof than ECDSA. Doing this clever multisig stuff. That is another matter. It is a heavily studied problem.

Q - It needs exposure to the real world?

Q - The simpler the better?

A - You would think so but with key subtraction the simplest possible way to combine signatures is completely insecure.

Shafi Goldwasser and others in the mid 1980s was: imagine you have a lot of different protocol executions, lots of conversations with the same private key. It doesn’t follow, by the way, that if you do lots of them you get this problem, as long as you don’t use the same k. If we have a second nonce k_2 then we still don’t know k, k_2 or x.

Q - What is the probability of generating the same k?

A - It depends on the number of bits. It has to be a scalar, so if the group is of size 2^256 then the chance is about 1 in 2^256. But if the group is tiny… Bitcoin is slightly less than 256 bits. It is the same level of security you have with private keys. You generate private keys randomly, you can generate as many addresses as you like in your wallet. Nobody is going to tell you “You might accidentally collide”, it is not going to happen.
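
To see why reusing a nonce would be fatal, here is a minimal sketch in Python, using a toy prime in place of the real group order (all numbers here are assumptions, not real parameters): two Schnorr-style signing equations with the same k but different challenges pin down the private key.

# Toy illustration of nonce reuse. Two signatures with the same k but
# different challenges e1, e2:
#   s1 = k + e1*x (mod n)
#   s2 = k + e2*x (mod n)
# Subtracting eliminates k and reveals x.
n = 2**61 - 1            # a prime standing in for the group order
x = 123456789            # "private key"
k = 987654321            # the reused nonce
e1, e2 = 1111, 2222      # two different challenges
s1 = (k + e1 * x) % n
s2 = (k + e2 * x) % n
recovered = ((s1 - s2) * pow((e1 - e2) % n, -1, n)) % n
assert recovered == x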

Notice even if you make two signatures or two conversations with the same private key, as long as you use different k you are fine. Because you have three unknowns and two equations there, it will always be underdetermined. You can’t solve it. In normal operation we make lots of these conversations. It is interesting that the values in the conversation (R, e, s) are all random numbers. Even though R is a point not a scalar it is still a random value. What if you forged lots of conversations that looked similar? Could I make another conversation (R’, e’, s’) in such a way that it would look like a proper conversation to an outsider? Is it possible to do that?

Q - Do you mean convince them that the proof is valid?

A - The reason there is confusion here is because we are talking about an interactive protocol. If you and I engage in the protocol we just discussed I can prove to you that I own the private key. But if you take the conversation we had and give it to someone else it doesn’t prove anything, does it? Because you had to be the one who chose the challenge.

What they said in the paper was if we can set it up so that we can create a whole set of conversations that look valid, specifically in the sense of sG = R + eP being validated, then it must be the case that no knowledge is exposed. If an attacker can make lots of fake conversation transcripts that look exactly the same as real ones, at least in terms of distribution, obviously they are not literally the same, then he can’t be receiving any information when he receives real transcripts because he doesn’t know the difference. It is a very weird argument. How it works in practice is that we have the notion of a Simulator, someone who simulates these conversations. All the Simulator needs to do is work backwards. That’s the trick. What makes this work is causality. The fact that I can only do that after you do that. But if we can violate causality, create s randomly, then create e randomly, what would we do next? Solve for R. We can solve for R because we have s and e. You can do sG - eP = R. Notice interestingly you don’t get k but you do get R and that is all you need to make a conversation transcript that looks like a real one. If I can make an infinite number of fake conversation transcripts that look exactly the same to an attacker as real ones… By look the same I mean statistically look the same, obviously not the same numbers. You can make lots of R, s and e. If the distribution of fake transcripts is indistinguishable from the distribution of genuine ones it proves that zero information is conveyed. What statistical distribution are we talking about here? The uniform random distribution, I’m not going to try to prove that.
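
A minimal sketch of that Simulator trick in Python, in a toy additive group (Z_n with generator G, so “xG” is just x*G mod n; all parameters assumed). The discrete log is trivial in such a group, so this only checks the algebra of working backwards.

import random

n, G = 101, 7          # toy group order and generator
x = 42                 # private key -- the Simulator never uses it
P = (x * G) % n        # public key

# Work backwards: choose s and e first, then solve for R = sG - eP.
s = random.randrange(1, n)
e = random.randrange(1, n)
R = (s * G - e * P) % n

# The faked transcript (R, e, s) passes the verification equation sG = R + eP.
assert (s * G) % n == (R + e * P) % n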

Q - Are you saying that in the proof of this statistics comes into it?

A - I think it could. I gloss over it when it gets to that level of detail. It wouldn’t be distributions like exponential or Poisson.

Q - You could have all these transcripts, real and fake, but there is no way to convince someone that a given transcript is real rather than fake. Therefore there is no knowledge inside it. The fact that you can’t convince someone is what makes it safe. It could be real or fake. It is indistinguishable.

A - Absolutely.

Fiat-Shamir transform

This is an annoying add-on but it is really vitally important in practice, what is called a Fiat-Shamir transform. We have been talking about Sigma protocols and specifically the Schnorr identity protocol as an example. If we want to make it non-interactive, it is no use just having protocols where I prove to Chris, I prove to you etc. We need something where I can prove and make the proof transferable. Prove it to the whole world so everyone can see that I know the secret. To do that we need to make the protocol non-interactive, to remove this step. We just spent an hour explaining why we need that step so how the hell does that work? The way we do it is very ingenious. Again we use a hash function. We need a random oracle, that is the technical term. But let’s think of it as a hash function. What we do is we hash the conversation up to the point of the challenge. In this case that is very simple. That is just the value R. If I hash R I create a random value output which we could not have predicted. Because we could not have predicted it we can’t cheat. Remember we had the problem that if you removed this step we found that it was not secure because we had k + ex and you could subtract the public key easily. But if you hash this you still do get a number here but it is not a number you can predict at the start. That is the key. We don’t know e in advance and so we can’t do an attack like that to get s without knowing x. I am trying to give you a really concrete explanation of the concept. The concept is: hash the conversation up to the challenge and make that the challenge. That is something that Alice can do on her own. Bob doesn’t need to be there. She just hashes R. It is like simulating the challenge. So now the Schnorr signature. What is it? It is exactly that, what we have just discussed. Using that Fiat-Shamir trick because otherwise we can’t make a signature non-interactively. Of course we have to put the message somewhere. We include the message in the hash. We may or may not, it is better if we do, also include the pubkey. It doesn’t actually hurt if you think about it. You can put anything in there. We put the message that we are signing. Now it is s = k + H(m | P | R) x. e has changed from being a random number to being a hash of the message and crucially the point R and possibly the point P. We have changed our identity protocol into a signature scheme. Hash one-wayness enforces ordering of steps in absence of Verifier enforcement. A | B represents concatenation of A and B. m and P and R are all hashed together.
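
As a rough sketch (the same toy additive group as above, SHA256 standing in for the random oracle, parameters assumed), this is what the Fiat-Shamir step looks like in code: the challenge e is no longer supplied by a Verifier, it is computed as the hash of the message and the points.

import hashlib, random

n, G = 101, 7                      # toy parameters
x = 42                             # private key
P = (x * G) % n                    # public key

def challenge(m, P, R):
    # e = H(m | P | R), the "simulated" challenge
    data = f"{m}|{P}|{R}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(m):
    k = random.randrange(1, n)     # fresh nonce per signature
    R = (k * G) % n
    e = challenge(m, P, R)
    return R, (k + e * x) % n      # s = k + H(m|P|R)*x

def verify(m, P, R, s):
    e = challenge(m, P, R)
    return (s * G) % n == (R + e * P) % n

R, s = sign("hello")
assert verify("hello", P, R, s)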

ZK RO

We’ve got most of the theory out of the way. These next few points are a bit vague. We talked about zero knowledge but we didn’t really talk about the fact that that proof is really limited. It is what is called Honest Verifier zero knowledge because we are assuming that the e value, the challenge given by the Verifier, is random and not deliberately malformed in some way. The way we get round the fact that all these proofs were based on interactivity is we program the random oracle. The hash function is what is called a random oracle. Does that phrase mean anything to anyone? Forget about hash functions for a minute. Our task is to create something that produces a random output given any input but does it deterministically. Imagine I’m the random oracle, you give me an input string “Hello”. I’ve got this massive table and I write in my table “Hello”, output “03647….”. The point is that if anyone says “Hello” I am going to respond with the same thing. But if someone says “Goodbye” I’m going to make up a new random value. A hash function is an attempt to create that effect using some mathematical tricks.
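
That “massive table” picture is easy to write down directly. A small sketch (not a real hash function, just the ideal a hash function tries to imitate):

import os

class RandomOracle:
    def __init__(self):
        self.table = {}

    def query(self, x):
        # Fresh random answer the first time an input is seen, the same answer forever after.
        if x not in self.table:
            self.table[x] = os.urandom(32)
        return self.table[x]

oracle = RandomOracle()
assert oracle.query("Hello") == oracle.query("Hello")
assert oracle.query("Hello") != oracle.query("Goodbye")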

Q - Is the hash function secret? Are you announcing what the hash function is?

A - When they go through all these security proofs they use this concept of the random oracle model. The thing about hash functions is they are actual concrete mathematical functions. We don’t know for sure that they behave exactly the way we intend them to. From a theorist point of view they define a random oracle as an ideal of what the hash function should do which is to output deterministic random values for any input. There are no secrets involved. It is just that you can’t predict what the output will be given the input.

Q - You are able to calculate the output because you know the hash function. But does the person who is giving you the input know what the hash function is?

A - It is the same as any hash function in that regard. Anyone can do it. The point is that they can’t predict the output so you get this forced causality.

The Simulator programs the random oracle. Program the random oracle to output the e that you want it to output in order to make this proof still work even though we are not interacting anymore. Matthew Green has got two really good blog posts on zero knowledge proofs. In the second one he talks about this and how it is a really dodgy proof. I’ll let the experts argue about that.

Reductions to ECDLP

So what security do we have? We have some concept that all of this is sort of secure. We half proved that you can’t forge it and you can’t extract the private key. But it was all based on a very basic assumption, which is the very first thing I wrote on the whiteboard: P = xG, the public key and the private key. The idea is that you can’t get x from P. In the elliptic curve context that is called the elliptic curve discrete logarithm problem. The problem of extracting the elliptic curve discrete log of P, which is x, given only P. It is not impossible because you could simply look through every possible value of x and you could find it. We call it computationally hard because we don’t have algorithms that calculate it quickly. We treat it as a one way function but it is not perfectly true. Given enough computational power you could find the input simply by exhaustive search. There are algorithms that allow you to do it. ECDLP is lost to a sufficiently powerful quantum computer. It can be shown that if an attacker can extract the private key from a Schnorr signature, they can also solve the ECDLP. We can say under a certain set of assumptions if the elliptic curve discrete logarithm problem is not cracked, solved in sub exponential time, then the Schnorr signature is secure.

Reductions to ECDLP - 2

The claim is based on an argument along the lines of what we’ve just discussed. Suppose you have an adversary that is able to crack the signature scheme, an adversary that, given one of this type of protocol, can extract the x just by verifying or looking at it. Let’s say you are able to do that with probability epsilon, you are not going to crack it every time, just one time in ten let’s say. If we have such an adversary, that adversary can crack the elliptic curve discrete logarithm problem. There are two slightly different things. Here’s a Schnorr signature. Let’s say I claim that I am the holder of the private key and I can make a signature. And let’s say Chris actually holds the private key and I don’t. If I can impersonate Chris by making a Schnorr signature it doesn’t necessarily mean that I have the key. It might be that there is something broken. What this argument says is that if I can impersonate Chris it means I can solve the elliptic curve discrete logarithm problem, because I can extract the discrete log by impersonating twice. This is the same argument we used before: if you use two different challenges on the same commitment you end up being able to extract x. Roughly your chance of success in extracting the discrete log is the square of epsilon. That is smaller. Instead of one tenth we are talking about one hundredth. They talk about these arguments as security reductions. The signature scheme is secure because we can’t solve the elliptic curve discrete log. If the signature scheme was not secure then I could do this and I could solve ECDLP. I don’t think anyone can solve ECDLP, therefore the signature scheme is secure. This is what they call a reduction.

Q - You are proving they are equivalent problems. If you can solve one you can solve another.

A - But the annoying thing is that they are not literally equivalent.

Q - You have the complexity classes like P and NP, the nondeterministic polynomial class; today we can only simulate NP problems in exponential time. You have the concept of reduction. What this means is that if you can solve one problem in the class, and all the other problems reduce to that one problem, then you can solve all of them.

Digital signature security

I’m not sure if I’ve got much to say about this. I was reading about what digital signature security is. There seem to be a lot of different concepts. It makes sense but this is getting complicated now. You can talk about obvious things like: is the signature scheme secure against someone extracting the private key? But also, is this secure against forgery? Is this secure against forgery if I’ve already got several examples of the signature? Is it secure against forgery if I manage to do several signatures in parallel? I don’t know much about that stuff.

Q - It would need to be secure against all of those?

A - I think this in bold is the strongest definition of security. “Security against existential forgery under chosen message attack.” Even if I can get you to output signatures for messages that I choose, I still can’t create a new signature.

ECDSA’s weaknesses

I want to make a few notes about ECDSA. I’ll stick to the same notation. P = xG. k = nonce. For ECDSA we have a different equation for the signature. Anyone want to tell me? s = k^(-1) (H(m) + rx). We publish the signature as two values, r and s. With the Schnorr signature we publish either e and s or R and s. (r, s) is an ECDSA signature. What is r? r is the x coordinate of R (R_x). R = kG. This is the same as Schnorr but we then do something very different. We take the x coordinate. If I take the signature I have been given and take the modular inverse of it, which is annoying, and multiply it by the hash of the message times G plus the x coordinate of the R value (which is published as part of the signature) multiplied by the public key, and then take the x coordinate, it should be equal to r. That is how we verify an ECDSA signature.

s^(-1) (H(m)G + rP)|_x = r

As you can see the formula is more complicated. It uses a modular inverse when you are creating it and when you are verifying it. It also does this weird thing of taking the x coordinate of a point. There are two or three different things that are a bit strange about it. This was created I think in 1992. It was created because shortly before that Claus Schnorr had patented the Schnorr signature and so the simpler version that we just studied they were not able to use. They had to use a different formula. I don’t know how they came up with this exact form. I am sure it has some good properties and some bad properties. I will note a couple of bad properties. Firstly, because r is an x coordinate, there are going to be two points on the curve with the same x coordinate. This can be explained easily without pictures. The formula for our elliptic curve is y^2 = x^3 + 7. Given a particular x we have two different y values. There are two different points (x,y) and (x,-y) on the curve for which this verification is possible. All we are checking is an x coordinate. That creates what you might call an intrinsic malleability in the signature scheme because it means that if (r,s) is a valid signature so is (r,-s). BIP 66 made sure that only one of these is standard in Bitcoin now. If you have a really old wallet it might generate signatures which don’t broadcast properly on the network. ECDSA fundamentally has that property, we can’t get around it because it is using the x coordinate. You could argue it is just a technical problem. Cryptographers have a serious problem with that because it is a bad property for a signature scheme to have. You shouldn’t have a situation where given a signature I can make another one that is different even though I don’t have the key. That is not desirable. There is a security reduction which we mentioned on the previous slide. It is difficult but as far as I understand it the cryptographers are not able to find anywhere near as clear a security reduction for ECDSA. It makes sense because of this weird x coordinate thing. I have a link to a paper (Vaudenay) if anyone is interested.
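
A toy stand-in for that malleability (assumed parameters, not a real curve): the “group” here is just addition mod n, and a fake x coordinate is defined as min(R, n-R) so that a “point” and its negation share it, mirroring (x,y) and (x,-y) on the real curve. It only illustrates the algebra of why (r, s) and (r, -s) both verify.

import hashlib, random

n, G = 101, 7                              # toy group order and generator
x = 42                                     # private key
P = (x * G) % n                            # public key

def xcoord(point):                         # a point and its negation share this value
    return min(point, (n - point) % n)

def Hm(m):
    return int.from_bytes(hashlib.sha256(m.encode()).digest(), "big") % n

def ecdsa_sign(m):
    while True:
        k = random.randrange(1, n)
        r = xcoord((k * G) % n)
        s = (pow(k, -1, n) * (Hm(m) + r * x)) % n
        if r != 0 and s != 0:
            return r, s

def ecdsa_verify(m, P, r, s):
    sinv = pow(s, -1, n)
    Rprime = (Hm(m) * sinv * G + r * sinv * P) % n   # s^-1 (H(m)G + rP)
    return xcoord(Rprime) == r

r, s = ecdsa_sign("hello")
assert ecdsa_verify("hello", P, r, s)
assert ecdsa_verify("hello", P, r, (n - s) % n)      # (r, -s) also verifies: malleability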

Q - The safety of Bitcoin shows that it actually works?

A - I think that is a fair point. Maybe it depends on the way you use it and if you change the way you use it… If we are in a position of not knowing it is not good. I am not trying to claim that the Schnorr signature is perfect. Given a certain set of assumptions it seems that you can construct a very elegant proof.

I do want to mention this. No linearity (especially over nonces due to funky use of x-coordinate). If you look at the Schnorr equation, it is very nice and linear. If you add the two equations together because there are no non-linear terms you get something that is consistent.

Leveraging linearity

What if we add signatures (s = k + ex) together? Alice and Bob can each do a signature and add them together; we won’t know each other’s nonce values or private keys but we’ll be adding them. For that to be linear the e’s can’t be different. e must commit to both nonces, like e = H(R_A + R_B | P_A + P_B | m). But this is insecure. It is insecure for the reason that we touched on right at the start. Why is that insecure?

Q - Alice can choose her public key to cancel out Bob’s public key.

A - That’s right.
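
A minimal sketch of that key-subtraction (rogue-key) trick, in the same toy additive group as before: if the aggregate key is just a sum of pubkeys that were not fixed in advance, the last participant can cancel everyone else’s key out and sign alone.

n, G = 101, 7
x_alice = 42
P_alice = (x_alice * G) % n

x_mallory = 13
P_mallory_real = (x_mallory * G) % n
# Mallory announces this instead of her real key; she does not even know its private key.
P_mallory = (P_mallory_real - P_alice) % n

# The "2-of-2" aggregate collapses to a key Mallory controls on her own.
P_agg = (P_alice + P_mallory) % n
assert P_agg == P_mallory_real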

Aggregation schemes

If keys P are produced ephemerally, this is open to a direct key subtraction attack: the last player can delete everyone else’s key, a disaster for multisig. This attack only applies if we don’t have the public keys fixed in advance. Or if the Verifier doesn’t know the separate public keys. Generally speaking it is not a secure approach because we can subtract other people’s pubkeys. We will go round the room and add our public keys together. The last person will make a public key that subtracts all the other ones. This is rather annoying because then you can spend the multisig on your own. It is supposed to be five people agreeing and he does it all on his own. That’s terrible, absolute disaster. This is called key aggregation. What you want is to make a signature where you publish on the blockchain only one public key. That public key I’m calling the aggregated public key here. In order to do that we are going to have to interact with each other unfortunately. Interact, share each other’s R values and share each other’s public keys and then build this rather complicated equation for the signature. What I am writing here is specifically the construction that they are calling MuSig. We are aggregating keys and the Verifier is going to be able to look at it as just one key. That is very good in terms of saving space and privacy. But it does require that all of us interact in order to create the signature. There is more than one way to do it. It turned out about a month ago that there was a flaw in the security proof of one way of doing it. But it was ok because there is another way of doing it, changing it from two rounds of interaction to three rounds of interaction. That doesn’t make that much difference. You should check the latest version of the paper because they changed something. The basic gist of it is that you combine and interact your keys and make it safe from this attack so it is not the case that someone can do it on their own. And you have this property that it is private because you only have one signature and public key on the chain. Other people can’t even tell that it is a multisig. Not only do they not know who the signers are, they don’t even know it is a multisig. They only see a Schnorr signature. That is why this is so exciting to a lot of Bitcoin developers. They have been really focused on it for a long time.
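
Roughly how MuSig-style key aggregation blunts that subtraction trick, sketched in the same toy group (details simplified from the MuSig paper): every key is weighted by a hash coefficient that depends on the entire set of keys, so no participant can pick a key that cleanly cancels the others.

import hashlib

n, G = 101, 7

def h(*parts):
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def aggregate(pubkeys):
    L = h(*pubkeys)                                  # commitment to the whole key set
    return sum(h(L, P) * P for P in pubkeys) % n

P_alice = (42 * G) % n
P_bob = (13 * G) % n
P_agg = aggregate([P_alice, P_bob])
# Changing any one key changes every coefficient, so a "cancelling" key is no longer easy to choose.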

Q - You could put a MAST script in that as well?

A - Taproot also uses that linearity. I have deliberately avoided talking about it here but it is a very important additional point.

Aggregation schemes - 2

As well as MuSig there is something called Bellare-Neven which existed before MuSig and is another way of doing the same thing. It doesn’t aggregate the keys in the same way because it would require you to publish all the public keys. Have people used a multisig before? If you have ever done it in Bitcoin you will know that all the public keys are published and however many signatures you need are also published. It is a lot of data. Bellare-Neven has a cleaner security proof but since it requires your keys for verification you would have to publish them. Remember with Bitcoin verification is not one person, it is everyone. You would have to publish all the keys. The latest version of the MuSig paper talks about three rounds of interaction because you commit to these R values first. Everyone doesn’t just hand over their R values, they hash them and they send the hash of the R values. That forces you to fix your R value before knowing anyone else’s R value. There is a link here. There are different ways we can use one or both of these constructions (MuSig and Bellare-Neven) which are slightly different but have this goal of allowing us to aggregate either the keys or the signatures. We could have multiple signatures aggregated together using these clever constructions. It is all based on the Schnorr signature and combining multiple Schnorr signatures together.
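
A small sketch of that first commitment round (hypothetical helper names, not the actual protocol messages): each signer commits to their nonce point with a hash, and only reveals it once everyone’s commitment is in.

import hashlib

def commit_to_nonce(R_i):
    # Round 1: broadcast only the hash of your nonce point
    return hashlib.sha256(str(R_i).encode()).digest()

def check_reveals(commitments, revealed_points):
    # Round 2: everyone reveals; abort if any point does not match its earlier commitment
    return all(commit_to_nonce(R) == c for c, R in zip(commitments, revealed_points))

# Round 3 would then be exchanging the partial signatures themselves.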

Q - The per transaction aggregation would mean there is just one transaction in a block?

A - A weird detail that you might not immediately think of is that for each input it is not the same message that you are signing. Even though you are signing the same transaction you are signing a slightly different version of it. I think it is with Bellare-Neven that you can make one signature for all the inputs. Per block aggregation sounds intriguing.

Q - That sounds like it is batch validation. You add up all the signatures in a block and validate them at once. It says whether the block is valid or not.

A - That is something I forgot to include in all this. Batch validation of Schnorr signatures is possible but it needs a trick that is similar to what we talked about where you have the problem of things being subtracted. If you use the verification equation for lots of different signatures and add them altogether it has a similar security problem. It turns out it is easily solved by adding random terms to each part of the equation. That is another topic that I don’t know much about.
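
A sketch of that random-weighting trick in the same toy group: instead of checking each sG = R + eP separately, check one combined equation, with an independent random weight per signature so that an error in one signature cannot be cancelled by another.

import random

n, G = 101, 7

def batch_verify(sigs):
    # sigs: list of (R, e, P, s); each should individually satisfy s*G = R + e*P
    a = [random.randrange(1, n) for _ in sigs]          # fresh random weights
    lhs = sum(ai * s for ai, (R, e, P, s) in zip(a, sigs)) * G % n
    rhs = sum(ai * (R + e * P) for ai, (R, e, P, s) in zip(a, sigs)) % n
    return lhs == rhs

x = 42
P = (x * G) % n
sigs = []
for _ in range(3):
    k, e = random.randrange(1, n), random.randrange(1, n)
    sigs.append(((k * G) % n, e, P, (k + e * x) % n))
assert batch_verify(sigs)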

CoinSwap

I don’t think we have time to do CoinSwap. We’ve got how you can use a Schnorr signature to make CoinSwap better. Something called an adaptor signature. So what is a CoinSwap?

Q - You can think of it as Alice and Bob wanting to trade coins; they do it by way of putting their coins in escrow and releasing the escrow securely. In practice it is used for privacy. Alice’s coins can end up being possessed by Bob and the other way round. A bit like a mixer that can’t run off with your coins.

Q - A private atomic swap?

Q - An atomic swap with the intention of privacy instead of trading.

CoinSwap in 2017

Here is an approach that I and several other people worked on last year mostly although it has been talked about for many years. Greg Maxwell came up with the idea in 2013. The blue and orange is input and output. It is Alice and Carol, not Alice and Bob. The interesting part is at the bottom here where you have a HTLC (hash timelocked contract). The idea is that this output can be spent in one of two ways. Either somebody knows a secret value or after a timeout someone can spend it.

Q - Is that inside a script?

A - Yes inside a Bitcoin script. You have CLTV (CheckLockTimeVerify) or CSV. With CLTV you put a block number in there and it checks that against the nLockTime field. The nLockTime field only allows you to broadcast… The details don’t matter. The point is you put a block height in the script and that branch of the transaction is not valid until after that block.

What we are doing is linking two transaction outputs together and trying to create a situation where if one gets spent the other one gets spent. If it doesn’t get spent at all then it times out and the money goes back to the start, whoever paid it. We set up these weird complicated scripts but they are not private because they have a special secret value which links them. What we do is create an alternative output which looks like a normal output, and if both parties agree then the money goes to the normal outputs. It is only if one of the parties doesn’t cooperate that we have to use the non-secret branch. It is non-secret because it has a preimage of a hash, which is a very special value that anyone can link. The only reason I included CoinSwap in this presentation is that it is one example of how the linearity in the Schnorr signature allows us to do clever things.

Adaptor signatures - 1

We are by now familiar with the idea that s = k + ex where e is the hash of the message, the R value and possibly the P value. What you do here is embed a secret into the nonce. You have some secret value which we are calling t and you add it to k. Effectively the nonce is now k + t and not k. Apart from that it is the same as before; notice we also put T here. This is the same pattern, we have just added an extra variable.

s = k + H(m|R|P)x -> s = k + t + H(m|R+T|P)x

We are working with a counterparty and we want to swap coins let’s say. That’s the obvious application. I am going to give Chris not this secret t value, but its corresponding public key T. I am also going to give him something else. This is the clever idea and it comes from Andrew Poelstra. What you do is not just give him that public value but give him something which is like a signature but is not a valid signature. We call that an adaptor signature.

s’ = k + H(m|R+T|P)x

Why is that not a valid signature? Because there is a T here. The rule of the signature is that this hash is supposed to commit to the message, the nonce point R and the public key. Here it is not the nonce point, it doesn’t correspond. If you went through those equations it would not work out because this k is not the discrete log of R+T. I send him the adaptor signature. Even though it is not a valid signature he can verify that it is an adaptor signature, because if he multiplies by G he gets what he expects to get. He takes s’ which I have given him, multiplies by G and asks himself whether it is or is not equal to:

s’G ?= R + H(m|R+T|P)P

I gave him T before. He is doing the verification equation but he is missing out that T there. Having verified that, what does it tell him? It tells him that if the Prover gives t then you can build a real signature. The starting step was: I as Prover create t, set T=tG, and send to the Verifier T and s’, the adaptor signature. The Verifier checks the adaptor signature fits this equation, which is not a proper signature verification equation because there is no +T, which there should be for it to match. But he can still check it and he knows that if that check passes and later the Prover gives him t, the Verifier can construct the signature himself. All he does is take s’ and add t: s = s’ + t. He doesn’t know the private key. s’ blinds the private key in the same way as an ordinary signature does. By adding this t he will be able to create a valid signature using the Prover’s private key. That’s the basic idea. I won’t go through the details, it is complicated. The idea is we cooperate to make a multisig Schnorr key. The idea is that if we take two different adaptor signatures with the same t then when I spend my one I am going to reveal t. You are going to take t and apply it to your signature. There are two transactions. One is paying out to me and one is paying out to you. If you are the one who has t, if you spend your one and publish the signature it exposes t and allows me to add t to my signature and broadcast my transaction. Functionally it is exactly the same as this thing here. This says “Pay to Carol if she knows the preimage of a hash value.” It is the same thing. Here we are using this like a hash function. t is like the preimage and T is like the hash value. This is all coming from the linearity. The idea that signatures are added together or subtracted to do interesting things. There is another idea called discreet log contracts which is different but it is a similar idea where you use the fact that Schnorr signatures are linear. When one gets published on the chain you can add or subtract it and get some new secret. Or maybe claim some coins. There are many variants of that idea. There is a blog post about it here. It is worth mentioning deniability. The cool thing is when you publish a signature like this to the chain to claim the coins, my counterparty is able to subtract s’ to get t. Nobody else in the world has any idea, it is just a random number. A very powerful idea.
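
Here is a minimal sketch of that flow in the same toy group: the adaptor signature s’ verifies against R and T but is not a valid signature on its own, and revealing t completes it into one that verifies against the nonce point R+T.

import hashlib, random

n, G = 101, 7
x = 42
P = (x * G) % n

def h(*parts):
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

m = "swap transaction"
k = random.randrange(1, n)            # signing nonce
t = random.randrange(1, n)            # the secret to be revealed later
R, T = (k * G) % n, (t * G) % n

e = h(m, (R + T) % n, P)              # challenge commits to R+T
s_adaptor = (k + e * x) % n           # s' = k + H(m|R+T|P)x

# Verifier's check: s'G ?= R + H(m|R+T|P)P
assert (s_adaptor * G) % n == (R + e * P) % n

# Once t is revealed, anyone holding s' completes the real signature s = s' + t,
# which verifies with R+T as the nonce point.
s = (s_adaptor + t) % n
assert (s * G) % n == (R + T + e * P) % n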

Adaptor sig in ECDSA

You can try to do something similar to what I just described using ECDSA but it requires a different cryptographic structure called a Paillier crypto system. It is completely separate to Bitcoin. It is more similar to RSA than to elliptic curves. It has some special property that might allow us to do something similar to that using ECDSA. Then maybe we can do completely private CoinSwaps today without Schnorr signatures. I have no idea when Schnorr signatures are coming to Bitcoin.

Q - MuSig was invented specifically for Bitcoin. It is being built from the ground up.

A - I wouldn’t say from the ground up. That Bellare-Neven scheme is similar. If you look at the MuSig paper it has a tonne of references to people who have tried to do this kind of thing before. All of these schemes were broken due to variants of that thing where you subtract other people’s keys away. It is not just a question of being broken, it is also a question of whether the security proof is actually valid. If we don’t have a proper proof of security it is scary. They have a detailed security proof for MuSig, it is really complicated. About one month ago Gregory Neven plus some other people wrote a paper explaining why the security proof of MuSig with only two rounds is not valid. Two rounds means we make our R values and send them. Then we use the formula to make our keys and signature. We hand out our signatures and we add them together to get the result. This paper from Gregory Neven et al showed that the security proof is not valid. It was an argument against MuSig and a threshold signature scheme. It is not that we know the two round version of the protocol is insecure, it is that we don’t have a valid security proof of it. The three round version has a solid security proof.

Q - Is Schnorr sufficiently battle tested?

A - The basic Schnorr signature has a much stronger security proof than ECDSA. Doing this clever multisig stuff, that is another matter. It is a heavily studied problem.

Q - It needs exposure to the real world?

Q - The simpler the better?

A - You would think so but with key subtraction the simplest possible way to combine signatures is completely insecure.

Better Hashing through BetterHash

Matt Corallo

Date: February 5, 2019

Media: https://www.youtube.com/watch?v=0lGO5I74qJM

Announcement of BetterHash on the mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html

Draft BIP: https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki

Intro

I am going to talk about BetterHash this evening. If you are coming to Advancing Bitcoin don’t worry, I am talking about something completely different, you are not going to get duplicated content. That talk should be interesting as well though admittedly I haven’t written it yet. We’ll find out. BetterHash is a project that unfortunately has some naming collisions so it might get renamed at some point. I’ve been working on it for about a year to redo mining and the way it works in Bitcoin. Who feels like they are familiar with how Stratum and pools and all that good stuff works in Bitcoin at a high level? A little bit more than half. I am going to go through a little bit of background. I can rush through it and skip it. It is kind of stolen from a talk I gave that was much less technical. If you fall asleep for the first ten minutes or so that is going to be perfectly ok.

Why Are We Here

I wanted to give a little bit more philosophical background as well. Bitcoin did not invent ecash. Ecash as a concept is not new, it has been around since the 1980s and 1990s and a tonne of people were trying to do this at that time. They all kind of failed. Who read the Ray Dillinger piece on LinkedIn? If you haven’t read it you should go back and read it because it was really good. A random guy claimed he was around back then in the 1980s and 1990s, claimed that Satoshi had mailed him the paper before the announcement, no material reason to disbelieve that claim although it doesn’t really matter. He wrote a really good piece about what it was like back then working on ecash for all these startups in the Caribbean and essentially trying to build Bitcoin but failing. Out of this came a lot of simple centralized US dollar banks that failed because their business models failed. Out of this came PayPal which I guess we would have considered to have failed in the censorship resistant money case because they had to succumb to regulators. Out of this came some attempts at ecash like Chaumian tokens which were shut down because they were very usable as money laundering and got shut down by various regulators especially in the US. But ultimately they all failed for the central reason that they were a centralized third party. They had some business that could go under or fail or be targeted by regulators. They all failed to live up to this goal. Ray, I think accurately, phrased Bitcoin as a novel attempt to solve this centralization problem, or this ecash problem, by removing the centralized third party. It is the first real attempt someone came up with that is realistic, it might actually work. We don’t know that it is going to work yet and that’s why we are talking about mining and centralization of mining, but it is experimental and maybe we can actually achieve these goals that we’ve had for 30 years and haven’t accomplished yet.

Bitcoin Pool Background

I am sure a lot of you will recognize this. This is a number of months out of date but that was, as of a number of months ago, the pool distribution and hash rate distribution for pools in Bitcoin. The top two are owned explicitly by the same company so that should be one, that’s AntPool and BTC.com. The fact that they also run ViaBTC, which is one of these, it is kind of bad right? They clearly have a lot of control of hash power. When I gave this original talk I phrased it as the consensus group. I think this is actually a really useful phrase when we are talking about cryptocurrencies broadly because it is not just hash power. Ultimately it may be stakers in a proof of stake system, maybe it is the 5 or 6 nodes who validate things in Ripple, whatever. Ultimately, your consensus group is the people who are able to select transactions and put them in blocks or make them confirmed in some way. In Bitcoin that is obviously pools. Pools work by having this block template: they create the block template, they probably use Bitcoin Core, they get a list of transactions, they create this candidate block, they send it to all their clients who then hash a bunch. If it has enough proof of work to become a full Bitcoin block it is sent back to the pool and the pool will distribute it. If it has a lot of work but not enough it still gets sent to the pool. In fact that is how they do payouts. If you have some probability that it will be a full Bitcoin block and a 10x higher probability that it will not be a Bitcoin block but is sent to the pool anyway, you can still see how much work all your clients are doing and you can pay out based on that.
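
A rough sketch of that share mechanism (the numbers here are made up, only the shape matters): a share is a header whose hash beats a target that is much easier than the network’s, so valid shares arrive often enough for the pool to estimate each miner’s hash rate and split the reward.

network_target = 2**224 // 50_000         # stand-in for the real network target
share_target = network_target * 1_000     # pool accepts work ~1000x easier than a block

def classify(header_hash_as_int):
    if header_hash_as_int <= network_target:
        return "full block: the pool broadcasts it and pays everyone out"
    if header_hash_as_int <= share_target:
        return "share: counts towards this miner's payout"
    return "not enough work: ignored"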

BetterHash

SlushPool actually has the public statistics for the breakdown of each of the individual users on their pool and how much hash rate they have. I blindly took that and mapped that onto the graph you saw earlier and this is what I came up with. This is what our goal is, to make Bitcoin look like this. Obviously it is an oversimplification to the nth degree but it was easy and it was the public stat that I had. There is reason to believe that you have some larger miners on some other pools but even still this is actually probably not entirely inaccurate. It is in fact the case that ownership of hardware and management of farms is way more decentralized than pools. Your whole goal is to reduce this variance problem. If you are a 1 percent miner or a 0.5 percent miner or a 0.25 percent miner some months you are not going to make your power bill, some months you are going to make a bunch more money so you have to have a pool. Naturally there is a lot of centralization in that market whereas in the actual hash rate market, purchasing cheap power is often more available in small quantities than large quantities, not to mention larger investments than a pool etc. You actually see much better decentralization. The goal is to get there. We do this by taking the key observation that the consensus group in Bitcoin is the people selecting the transactions. That doesn’t have to be the pool, the pool can still manage the payout. The idea is we take all of the miners, we give them all full nodes, they select all the transactions, they have a mempool, they build the candidate blocks that they work on and then they pay the pool. You still have centralized pools but the only thing they are doing is receiving the payment for blocks, full valid blocks, and then they can distribute that as they do normally, as they do today. They don’t necessarily need to be involved in selecting the transactions and doing the key consensus group things that we care about in Bitcoin.

Protocol Overview

There are obviously a number of problems with that that I will get into. But first I’m going to describe what the state is today and the high level protocol design. BetterHash is actually a replacement for two protocols. Currently there are two main things that are used in the mining process today. There is Stratum that more people are familiar with. This is the protocol that the pool server uses to communicate with the actual clients. It is a mess. getblocktemplate is also a mess. Stratum is JSON and it is mostly hex sent over the wire in JSON, it has things like 32 byte numbers that are sent as hex in JSON, byte swapped on 4 byte boundaries because apparently no one knows how endianness works. It is confusing.

Q - Can you elaborate why you dislike JSON encoding?

A - JSON is fine but hex in JSON has all of the downsides of having a binary protocol in that you can’t read it on the wire anyway because it is hex. And all of the downsides of being JSON in text because it is bigger and has more complex parsing. Remember ultimately the ASIC controllers are embedded devices, they are running OpenWrt and some pretty crappy software. You don’t really want a super complicated parser. They never get updated so if you have a bug in your JSON parser all these things are vulnerable. It is not ideal in that environment especially.

The other current protocol is called getblocktemplate. It is in fact just a RPC call on Bitcoin Core. It has a bunch of bells and whistles that literally no one knows how to use and probably don’t work. So I’m pretty sure they don’t get tested. But it is the protocol with which the pool server can request data from the Bitcoin Core node about which transactions to send. It is also a mess. It sends you all of the transaction data, again hex in JSON. Don’t know why. It sends you all of the transaction data, the full hex of each transaction that you want to include in the block but you have no reason for that. Stratum actually only sends you a Merkle path. It sends you information you need to build the coinbase ie the first transaction and then hashes of the other pieces that you need to build the Merkle root. It doesn’t actually have to send you anything on one side of the tree. Whereas getblocktemplate, you in fact only want the same information but it gives you not only the full tree but all the transaction data anyway just to confuse you and make you do more work than you need to. This also of course makes it slower. The JSON serializer in Bitcoin Core is fine but it is not that fast. If you are encoding a full 4 megabytes of hex blobs into JSON it is not that fast and has no reason to be that slow. It is also very pool oriented. It leaves a lot of complexity on the pool server instead of the Bitcoin Core node being able to say “Hi I have some new transactions that you should include in your new block” or “Hi I just got a new block. You should start working on this now.” It forces the pool server to think about these things and request that information from Bitcoin Core. It has some bells and whistles to try to not do that but no one uses them. We don’t like this world. Bitcoin Core, obviously in the node has more information than the pool server does because it has all the transactions, it knows when it just got another transaction with a really big fee on it that it wants to make sure you are mining on. It knows when you got a new block because it is the first thing to see it. We would like to have more control over that in Bitcoin Core just because we can make better decisions and we can optimize that instead of making the pool guys optimize that, which is a centralization pressure. A well optimized pool shouldn’t be a lot faster than a poorly optimized pool because that means they have to have technical competence. We would rather them just be able to spin it up and run it. That’s what is used now.

Q - I don’t know the history but can you talk about the step from getwork to getblocktemplate?

A - getwork was a much older thing in Bitcoin Core that was used to get work. We only gave you the header, the 80 byte block header which was fine for CPU mining and GPU mining but that is actually too slow for an ASIC. You’d have to get a stream of those, like millions of times a second. It is vastly too slow for that. A new thing had to be created. getblocktemplate was added as the thing to replace it. It is terrible but someone had to make something and so the first thing that got written got merged.

Q - getblocktemplate was never designed for pools?

A - There was some intention for it to be used as such but it never made any sense. It is too much data too. Even in Stratum with the weird hex encoded nonsense if you get a work update for a new block it still fits in a single TCP packet. Not by a tonne, it is close, but it does. That is nice for efficiency. Whereas getblocktemplate is 2 megabytes of data minimum plus SegWit just to give you a new block template. That is just nonsense.

Q - Why do you think it was designed that way?

A - The intention was for getblocktemplate, the pool would give you too much information and then the clients would do some policy thing. Luke designed it. The intention was that clients could do policy around which transactions to include themselves. It never really made any sense to be honest.

So BetterHash, I talk about it as two protocols, it is really three protocols. Obviously it splits the lines a bit differently because the intention is that clients run full nodes and not just the pool. It splits the lines a little bit differently so we have the Work protocol which is that Merkle path, the current header, some basic information about how to build the coinbase transaction, the previous block hash, those kinds of things. And the Pool protocol which is really just pay out information. Here’s how you submit shares and if you find a block it should pay out to this address. Then you use that to submit shares to prove that you are working and prove that you are paying out to the right address etc. I say it is kind of three protocols. The Work protocol is used in a few different contexts. It can either be final or it can be non-final. Non-final means it doesn’t have full pay out information. If you start mining to a non-final Work without adding a full payout you are going to burn all the reward to nothing which would obviously be kind of a shame. Final is like “Here is the fully built thing. You can go mine on this.” You can imagine in non-final mode you need to connect to a pool or be solo mining. You need to combine that payout information you got from the pool or you can be solo mining. Whereas in final mode that is the thing you pass to your ASIC and you go mine with that. There is a high level goal of having a mining proxy in the farm which you can think of as a Raspberry Pi or CasaHODL node kind of thing. All of your devices can connect to that device, it can run your full node, it can connect to the pool etc. You could in theory also skip that and have each of your ASICs actually connect to the pool and have them all connect to a full node. But that is up to you. Of course for practicality reasons because a number of miners aren’t going to run their own full node you could also have a pool expose your final Work protocol version to clients who don’t want to run their own full node. That’s just going to be how it works in practice a lot. An interesting goal, ideally we’d like to be able to just mine against Bitcoin Core. You shouldn’t necessarily need a pool if you want to solo mine or mostly for testnet, but also for fallback. The Work protocol, being the thing that connects to your ASIC controller or to your ASIC, if that’s also the thing that Bitcoin Core speaks it is nice because if the pool goes offline or if you are testing or if you just want to solo mine you don’t anything else. Currently you need a pool server because no ASIC speaks getblocktemplate. So you have to have some other software in between. Now you don’t necessarily need that. If the pool goes offline you can fall back to solo mining, at least until the pool comes back. This is a nice little property. Of course I lied, there are four versions of this protocol. There are two variants of the Work protocol. Who is vaguely familiar with the Bitcoin block header format? It is 80 bytes but the key observation is there is a 4 byte nonce, you increment the nonce, you test that hash, and then you increment the nonce and so on. There is also a 4 byte timestamp and then there is a 4 byte version field. If you just increment the nonce you would run out very quickly on an ASIC, that was in fact the problem with getwork that we mentioned earlier, it is just too slow. You run out of 32 bits of nonces very quickly on a modern ASIC. So you really only need a few more bits. 
There is a proposal to use some of the bits in the version field, 16 of the 32 bits (we don’t really need that many bits in the version field), as a secondary nonce location. Then you can cheat: you can spin 2 bytes of the version and 4 bytes of the nonce and you won’t run out within 1 second. Then every second you can increment the nTime field. You actually never need to give the ASIC device the coinbase transaction, which is currently really annoying because all of the end hardware in Bitcoin, or at least the ASIC controllers, which generally run crappy software that is not maintained or updated, is involved in constructing the coinbase transactions and building the whole Merkle path. It has a lot of complexity. If we can get rid of a lot of that complexity, so we literally give it 80 bytes and it knows it can spin these 4 bytes and these 2 bytes and it needs to increment the timestamp every second, it is so much simpler and we can have a lot more flexibility in terms of protocol upgrades and that kind of thing.
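
For reference, a sketch of the 80-byte header the device would be handed and the fields it is allowed to roll (this follows the standard Bitcoin header serialization; the helper names are made up):

import struct, hashlib

def serialize_header(version, prev_block_hash, merkle_root, ntime, nbits, nonce):
    # 4 + 32 + 32 + 4 + 4 + 4 = 80 bytes, integers little-endian
    return struct.pack("<L32s32sLLL", version, prev_block_hash, merkle_root, ntime, nbits, nonce)

def block_hash(header):
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# The device spins the 32-bit nonce plus 16 rollable version bits (~2^48 hashes),
# bumping ntime once per second, without ever touching the coinbase or Merkle path.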

Q - How does this interact with AsicBoost?

A - The current AsicBoost stuff actually uses the nVersion field, specifically those 2 bytes. The BetterHash spec references that and says “Yes these 2 bytes that people have already been talking about using and are already using, just go ahead and use them.”

Q - It is like those warnings we see on our node?

A - Yes. The current AsicBoost stuff only uses like 2 bits but there was a proposal that says “These 2 bytes should be the ones used for AsicBoost.”

Q - Currently the ASICs tweak the coinbase transaction?

A - Yes. The way Stratum gives you a coinbase transaction is it gives you a prefix, it tells you how many bytes you get and it gives you a postfix. You take the prefix, you add however many bytes, you can use that as a nonce and then you add the postfix and then you build the Merkle path.

Q - What is the postfix in reality? Some OP_RETURN?

A - The prefix and postfix are just parts of the transaction. The part you get to spin in the middle is in the scriptSig input. It will be like the version, the number of inputs…

Q - Not covered by signatures?

A - Not covered by signatures, the coinbase transaction. The prefix will be the transaction version which is 4 bytes, the number of inputs which is 1 byte, the length of the script and then the pool’s name and stuff like that. Then you’ll have 4 bytes that you get to play with and then it will be the rest of the scriptSig, the number of outputs, all the outputs. That is a blob.

We’d like to move to headers only. Of course with this whole separation of the Work protocol versus the Pool protocol, the Work protocol has to be able to support not having the payout information. We still have to support the non-header version, but ideally from an ASIC perspective you can just do this header version that is 80 bytes and you don’t have to have all this complexity.

Existing Problems

Some existing problems in the current world. I already went through getblocktemplate. It is a complete mess. It also was really gnarly, SegWit required updates to getblocktemplate and so we had to update all the pool servers for getblocktemplate, for SegWit support. There was no reason for that. That shouldn’t be how we had to do things. getblocktemplate needs to die, I will be very happy to see it go. Stratum is not authenticated at all. There is no crypto, there are no signatures, there’s nothing. This is terrible.

Q - How is that possible if you sending shares? Surely it must be authenticated?

A - No there is no authentication whatsoever. If I ran an ISP in China I’d be skimming a percent off the top and no one would notice. I’m dead serious, no one would notice if all the ISPs in China were skimming a percent of the hash rate off the top. A mining farm’s hash rate will vary a few percent a day based on temperature and other nonsense.

Q - …

A - Some pools support wrapping it in SSL but it depends on the pool. It is not the majority and mostly ASICs don’t support it.

Q - I’m going to Dogecoin.

A - All the coins use Stratum. Dogecoin uses Stratum, everything uses Stratum. This is a mess. We saw a BGP hijack against altcoin pools in like 2012. I am not aware of one against Bitcoin pools but we saw some against altcoin pools in like 2012 or 2013 so we know this is a very practical attack and could steal a tonne of hash rate.

Q - …

A - They clearly should. It is a material risk too because Stratum has a wonderful message where you can say “Hey I the pool server am moving to a new host. Please connect over here.” Then they have to power cycle all the hardware. If you can get a temporary man in the middle you can get all the hardware to point to a new pool server for you and then they have to power cycle their entire mining farm before it reconnects. It is terrible. Don’t be surprised if we see massive 51 percent attacks for a day while people have to react and power cycle their hardware. It is not an inconceivable attack. That said, you can’t just trivially encrypt the whole thing because pool servers actually have relatively high server overhead. They are checking shares. The whole point of a pool is you get more shares. The lower the difficulty on the shares is, the more consistent the payouts. That is what your users want so they always tune to have very low share difficulty which means more server CPU time hashing all the shares and making sure they’re valid. You can’t just blindly encrypt everything.

BetterHash is relatively carefully laid out with all the messages so that they are signed. There is no encryption but they are signed. You can still see that someone is running BetterHash but at least you can’t modify it. The messages are signed in such a way that the only messages which are signed are one-off messages or messages that get sent to all the clients. When a new user connects the pool will sign a response like “Hi. I understand you are User A. Here is your payout information for you User A.” Obviously the user can check that and make sure that lines up with the username. But that only happens once on connection. The share submissions aren’t signed because the pool told the user the payout information for that user. It is self authenticated already. There is no reason to add a signature to that, it will just slow down the pool. Also on the Work protocol side, if you have a new block update that data itself, you just have to sign once and you can send that to all your users. You don’t have to sign a different copy for every user. There is a separate message for per user data just to avoid any complaints about server overhead for pools who might push to deploy such a thing.

This isn’t documented anywhere. I couldn’t find anywhere that said this. I had to figure it out on my own. There is no standard for Stratum. Good luck trying to reimplement this stuff. It is nonsense. BetterHash actually has a spec, I bothered to write it out. It is also incredibly dense. If you have ever tried to read any of my BIPs, they are dense. It is very clear exactly what you should do. You just have to be good at parsing my sentences.

Q - Is there a BIP number?

A - XXX. No I haven’t bothered to get it a BIP number yet. There are a few more tweaks I wanted to do.

Q - Is it public?

A - There is a post on the mailing list and there’s a link actually in the Meetup description to the current version on GitHub.

Vendor messages, there’s a general trend right now. If you run a mining farm, monitoring and management is actually a pain in the ass. There is one off the shelf solution that I am aware of. They charge you a certain number of dollars per device. It is Windows only, it is not a remote management thing, you have to be there and it is really bad. Most farms end up rolling their own management and monitoring software which is terrible because most of them are people who have cheap power, they are not necessarily technical people who know what they are doing. We want some extensibility there but I am also not going to bake in all the “Please tell me your current temperature” kind of messages into the spec. Instead there is an explicit extensibility thing where you can say “Hi. This is a vendor message. Here is the type of message. Ignore it if you don’t know what that means. Do something with it if you do.” That is all there. That is actually really nice for pools hopefully and for clients. I don’t know why you’d want this but someone asked me to add this so I did. I wrote up a little blurb and how you can use the vendor messages to make the header only, final Work protocol go over UDP because someone told me that they wanted to set up their farm to take the data from broadcast UDP, you don’t have to have individual connections per device and shove it blindly into the ASIC itself without an ASIC controller whatsoever. I don’t know why you’d want to do that but if you do and you are crazy this is explicitly supported. There is a write up of how you might imagine doing such a completely insane thing.

New Problems

Those are the existing problems. Luckily BetterHash being more fancy, it comes with its own problems that have to be solved if we want to get any kind of adoption. First of all, operational complexity is the obvious one. As I mentioned a lot of mining farm operators aren’t necessarily the most technical people. Telling them “Hi. You need to run a full node now” is a steep barrier. With that in mind it is an explicitly supported thing that the clients don’t necessarily have to run a full node. It is an option, strongly encouraged. That’s the whole point but if you don’t at least we get authentication and standards and the other things that are nice.

The way current mining farms are often set up is you have like a thousand devices and they all connect to the pool. You configure all of them so you have like a thousand TCP connections from your farm that are all getting the exact same data from the same server at the same time. Some of the pools have designed these little proxies that are simple little Raspberry Pis. You plug it in at the farm and you point all your devices to that and that connects upstream to the pool. They had clients who were running very large farms who couldn’t figure out how to do that, couldn’t figure out how to plug in a Raspberry Pi and reconfigure their miners. Some of these guys really don’t know what they are doing. With that in mind if we are going to imagine a world where some of these guys who don’t know what they are doing are going to run a full node, you can imagine how they might creatively misconfigure this and end up generating garbage or not uploading the blocks or ending up with no mempool so they have no transaction fees they’re claiming. You can imagine how a user might deliberately misconfigure this. So we have to have this option for the pools to spot check the work that is coming in.

This in fact interacts with block propagation as well. Existing pools obviously have a lot of block propagation optimization that they’ve done. When they are the ones creating the work they have all the transactions. If a share comes in that is a full block they have the full block. Whereas if you imagine sending just shares to the pool where the client is the one who created the full block, the pool doesn’t have the full block to broadcast. Of course some of these farms are behind random crappy DSL modems in rural China so you can imagine block propagation might suffer. With that in mind the Pool protocol actually has a weak blocks protocol built in. We can finally actually do weak blocks because no one has ever bothered to do it. It actually works really well in this scenario.

Who is familiar when I say weak block, what that means at a high level? I’ll go through it real quick. Weak blocks is a super old idea for how to do better block propagation in the peer to peer network using a similar observation as pools use for shares. You can say the peer to peer network won’t only send around full difficulty blocks, it will actually send around what look like shares that are slightly reduced difficulty blocks. These blocks won’t be validated, won’t be connected to the chain but if we’ve already seen most of the transactions that come through, even one of these lower difficulty blocks, then we can use a differential compression for the real full blocks that have many similar transactions. Of course compact blocks and other schemes are more efficient than this so this doesn’t actually exist in an implemented form as far as I’m aware.
But it makes a lot of sense here because we already have these shares, we already have this natural place to shim in weak blocks. We are already sending from the client to the pool the shares and we can have a second share difficulty that is slightly higher where we send the full block data. All the transaction data as well instead of just the Merkle path. We do that and that uses differential compression to say “Hey, that last weak block I sent you. Take these transactions from it and here’s some new transactions.” That works pretty well. The pool can select different difficulty tuning between the shares themselves and the weak blocks so they can choose how much data they get from the client in that regard. That allows them to spot check. They can take those weak blocks they receive, look at them, say “Yes these transactions are valid. You are not mining Dogecoin on a Bitcoin pool.” These are well formed things and you are including transactions and everything looks good so we can pay you out. There is a little bit of complexity here. We’d like the pool to accept chain fork blocks. Not Bitcoin Cash, Bitcoin kind of forks, but if you have two blocks at the same height that are both valid we don’t want the pool to be involved in selecting which block you should be mining on because that enables selfish mining attacks. If we make that up to the clients that means the client would have to do the selfish mining attack and not the pool. Ideally that is on the client end but that obviously has complexity because you can’t just take a random block you see and validate it steady state, you have to build UTXO for the previous block. If you have a Bitcoin node you have to have some way to not validate in that case. Eventually we will have a solution to that. I just know that that’s rare enough that hopefully you are fine. In practice, those are super rare. Again this is spot checking where you aren’t checking every share that comes in fully. That is probably ok. Your biggest goal here is to identify a misconfigured client not necessarily a malicious client. Malicious clients can already exist today in Stratum. BetterHash doesn’t change anything in that regard. A malicious client in Stratum today can do block withholding attacks where they send you shares but not if that share is going to actually be a full Bitcoin block because they know that. That means the pool loses money but still pays you out. That doesn’t change. Malicious miners, it is the same threat model that you have today. But misconfigured miners are a hell of a lot worse in BetterHash than in Stratum today. That is the reason for the weak blocks. We can fudge it a little bit and say “It is ok that we don’t necessarily have the ability to spot check everything. We can at least see that it is built on a block of the right height at a potential tip that might be valid. We’ll go ahead and accept that.”
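A rough sketch of that differential compression idea, with made-up types rather than the real protocol encoding: a new weak block says which transactions to reuse from the previous one and ships only the new ones in full.

```rust
use std::collections::HashMap;

// Sketch only: stand-in types, simplified ordering, not the real encoding.
type Txid = u64;   // stand-in for a real 32-byte txid
type Tx = Vec<u8>; // stand-in for serialized transaction data

struct WeakBlockDelta {
    reused: Vec<Txid>,         // transactions the pool already has from the last weak block
    new_txs: Vec<(Txid, Tx)>,  // transactions it has not seen yet, sent in full
}

fn reconstruct(prev: &HashMap<Txid, Tx>, delta: &WeakBlockDelta) -> Option<Vec<Tx>> {
    let mut block = Vec::new();
    for txid in &delta.reused {
        block.push(prev.get(txid)?.clone()); // missing reference => can't reconstruct
    }
    for (_, tx) in &delta.new_txs {
        block.push(tx.clone());
    }
    Some(block)
}

fn main() {
    let mut prev = HashMap::new();
    prev.insert(1, vec![0xaa]);
    prev.insert(2, vec![0xbb]);
    let delta = WeakBlockDelta { reused: vec![1, 2], new_txs: vec![(3, vec![0xcc])] };
    println!("reconstructed {} transactions", reconstruct(&prev, &delta).unwrap().len());
}
```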

Q - Why is the block withholding attack useful against your own pool?

A - Against your own pool? I mean against a competitor. You do it against a competitor. That’s a good point though actually. Who is familiar with pay out schemes for pools? This is a quick detour in algebra. We can get into that in the Q&A if you are interested. There are some fun attacks in early payout schemes. Because a lot of mining farms are operated by people who don’t necessarily know much about Bitcoin they naively assume or maybe rightfully that if the pool doesn’t pay them out the exact same amount every day or every hour, the pool is scamming them. If you have variance, the pool has some variance, sometimes gets more blocks, sometimes gets less blocks and pays out based on how many blocks they get, you’ll have a slightly different payout every day. For practical business reasons most pools, especially in the Chinese market, use what is called pay per share. This means that they pay out the expected value of a share every time they receive a share, irrespective of whether or not they mine a block. If the pool hasn’t found a block in a long time it might go bankrupt because they are still paying out based on this. This makes block withholding attacks very nasty because if a pool is charging a 5 percent fee and you have 6 percent of the hash rate of that pool you can make them go bankrupt eventually with high probability. Because you do block withholding, they lose 6 percent of their income, they still pay you out 99.9 percent of what they would pay you if you weren’t doing this attack and they don’t get the blocks so you put them out of business. Why this attack doesn’t happen in practice I don’t understand. I guess everyone is just kind of fine with it. In practice most pools, especially in the Chinese market, know their customers. What you would see if this attack started happening, I imagine they would KYC their customers. You’d think it is a cutthroat industry and you want to put your competitors out of business but I guess not. I don’t know.
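To make the arithmetic behind that bankruptcy argument concrete, here is a back-of-envelope sketch using the numbers from the example above; revenue and payouts are expressed as fractions of the pool’s total expected block reward, not a real pool simulation.

```rust
// Back-of-envelope sketch of the pay-per-share bankruptcy argument.
fn main() {
    let pool_fee = 0.05;       // pool pays out 95% of each share's expected value
    let attacker_share = 0.06; // attacker contributes 6% of the pool's hash rate

    // Withheld blocks: the pool only collects rewards from honest hash rate.
    let expected_income = 1.0 - attacker_share; // 0.94
    // PPS payouts keep flowing to every share, including the attacker's.
    let expected_payout = 1.0 - pool_fee;       // 0.95

    let margin = expected_income - expected_payout; // -0.01
    println!("expected margin per unit of block reward: {:.2}", margin);
    println!("pool goes bankrupt in expectation: {}", margin < 0.0);
}
```

The margin goes negative whenever the withholder’s share of the pool’s hash rate exceeds the pool’s fee, which is the point being made above.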

Q - They know where you live.

A - Yeah. It is China, who knows.

Q - If you own all your competitors, that works as well.

A - Yeah if you are your own competitor it doesn’t help

Q - There is no way of detecting when…

A - Not materially no. Because blocks are rare enough, you don’t have enough information to do any kind of statistical analysis to see when your clients are doing a block withholding attack. Unless that client is really large. But that is not that common.

Q - You could make a probabilistic judgement that they are withholding blocks?

A - But only if they have been doing it for six months and they very clearly should have found a block. There is not really enough information. I have actually heard a case of an accidental block withholding attack. It was detected because they weren’t doing the block withholding attack at the real Bitcoin difficulty, but at a much lower difficulty. There was this weird distribution of the difficulty of the shares coming from that miner. They were able to identify just because of a software bug in the miners. If there is a way to f*** it up your users will find a way to do it.

Q - Could you send the user a prepared… use this second nonce you will run into the main nonce and you should find a block. If you have a username in there it doesn’t work?

A - If you have per user data that doesn’t work. Of course you want the user to be doing useful work because otherwise you are wasting money.

Q - For testing?

A - That only works if it is a full Bitcoin block. You’d only be able to do it right when you find a block, you could send that same work to all other clients and make sure they get back to you with something. The user could detect you doing this. If you want to go down the rabbit hole of a user being truly malicious and competent they could.

So chickens and eggs. Getting adoption for something like this, a pool doesn’t want to run parallel infrastructure, they don’t want to have their existing Stratum setup and also a whole second pool server without a lot of customer demand. Of course the customers aren’t going to demand something that a) doesn’t make them more money and b) the pool doesn’t even offer. How do you solve chicken and egg problems in terms of protocol adoption? I am all ears if you have ideas. Working on that one, it is slow going.

Q - …

A - If you are writing a new pool, this has an implementation of the Work and Pool protocols. It has a sample pool and it will speak Stratum on the backend to connect to existing clients. An existing miner obviously only speaks Stratum and this will work for that. If you are running a new pool the anticipated layout is that you would use this software on the pool end and then also run your own mining proxies that speak Stratum for clients who want to use Stratum. Clients who want to use BetterHash can run their own mining proxy as well. You provide that mining proxy as an option for clients.

Q - What are the constraints for running a mining pool? Obviously when you run physical miners yourself you are constrained by electricity?

A - The only real constraint is that you have enough clients. You have to have enough clients to have a steady stream of blocks found, or enough hash rate in total. You can’t necessarily run a pool if you have a tenth of a percent of the network hash rate because you won’t really be solving the variance problem for your clients. They might as well just solo mine.
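As a rough illustration of that variance point, treating block finds as a Poisson process (the numbers below are back-of-envelope, not from the talk):

```rust
// Rough arithmetic behind "a tenth of a percent of the network hash rate
// doesn't really solve the variance problem".
fn main() {
    let blocks_per_day = 144.0;      // ~6 blocks per hour network-wide
    let pool_hashrate_share = 0.001; // 0.1% of the network

    let expected_per_day = blocks_per_day * pool_hashrate_share;
    let expected_days_per_block = 1.0 / expected_per_day;
    // Probability of finding zero blocks in a whole week: e^(-lambda * 7).
    let p_zero_in_week = (-expected_per_day * 7.0f64).exp();

    println!("expected blocks per day: {:.3}", expected_per_day);
    println!("expected days between blocks: {:.1}", expected_days_per_block);
    println!("chance of an empty week: {:.0}%", p_zero_in_week * 100.0);
}
```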

Q - Does geographical proximity to the clients matter much?

A - For block propagation, in current pool design yes. BetterHash kind of helps with that because of this weak block thing. Pools don’t want to throw away all of their existing block propagation tuning that they’ve done. Especially in the pay per share model because in the pay per share model any orphan rate comes directly out of the pool’s fee and not out of the client’s. It comes out of the client’s in pay-per-last-N-shares or any of the sane payout schemes. They don’t want to throw away all of their block propagation work so that is another reason for the weak block stuff. The pool gets the full block relatively quickly when a block is found which means they can do block propagation with that. But also the client can do it. Now if the pool server is further away you might have a higher stale rate on your work submissions but the client can do block propagation redundantly with the pool. Obviously that is purely additive, that is only a gain. Now the latency between you and the pool for block propagation isn’t as big a concern. You can propagate it locally and then the pool can do it on the other side of the world and that is fine.

This started as demoware. It is not that far off. There are some issues filed on it that are tagged. If you want to help and you want to write stuff in Rust and you are interested in Rust contributors are welcome to that. There is some stuff that is not done yet, not implemented yet, some features that need to be added before it is usable. There is information there. There are three Bitcoin Core patches that I need to land upstream. You need to be able to submit headers and validate headers, loose headers. I talked about how ideally if one of your clients as a pool, is mining on some fork that is at the same block height as you, you want to be able to say “Yes, they are mining on a fork. It is the same block height, this is rare enough, this is ok, I am going to accept this.” But in order for you to be able to do that you have to have the header for the previous block that they are mining on. They actually send you the header in the protocol, it is just an extra 80 bytes, it is not very expensive. That needs upstream support in Bitcoin Core. I think that may have already landed. I told Marco to do it and I think he did it, I think that might have landed. I don’t know, I haven’t been following that.

Q - You live in a bubble when it comes to Bitcoin Core because you are always following a subset of pull requests. You don’t see half the rest of the repo.

A - Yeah exactly.

I think that one might exist but it needs to be plumbed into the mining proxy code. It also needs support for validating the blocks themselves, testing the block validity of a weak block that you received from the client, that also needs to be plumbed through in the mining proxy and landed upstream. But there needs to be some tuning there to say “If you have spare CPU cycles you should be validating someone’s weak blocks. But if you don’t have spare CPU cycles you should turn up their difficulty so that they submit fewer of them.” That needs some clever tuning there. Practical fun project if someone feels like writing that. And then the Work protocol itself, again it has to exist in Bitcoin Core, I have an experimental patch set for that. It is linked from the README of this and it has successfully mined testnet blocks before so it probably works. It needs to land upstream.

Questions

I was told I have to mention that we are running another residency this summer, Chaincode is. It is free and I think we might help with housing if you need financial support. I think, don’t quote me on that. This time it is going to be a full summer, it is going to be 2-3 weeks of course stuff and all kinds of fun philosophy of Bitcoin Core and details and implementation, all kinds of fun talks like this. Except for 2-3 weeks solid, a stupid amount of learning. Then 2 months-ish depending on your schedule, of project time, hands on with great mentors. You can apply at residency.chaincode.com. Don’t wait, we are doing early submissions so you screw yourself if you wait too long. Of course if you so choose and you get accepted your project time can be on BetterHash so you could work on this with me in New York. With that, any questions.

Q - Are there major pools interested in this?

A - I have gotten some level of interest from folks. I have gotten the BIP reviewed by some of the pool operators. They provided good feedback. There are a few more changes I need to make in response to feedback that I got recently. There is the chicken and egg problem that I mentioned and also the running the second set of infrastructure. It is a little bit harder to justify that to an existing pool. There are some folks who are talking about building new pools based on this software. It is kind of nice because a lot of the pool software I have written for you and you can just use it. You just have to build the website, the fancy graphs and sparkly things. Obviously the SlushPool people are excited about the concept. Jaan, the CEO of SlushPool was saying this on stage in Vegas a week ago. This is cool and they support it in theory but it is hard to justify at least without customer demand. If there were a bunch of customers banging on their door saying “Hi I want to use your pool but I want to run my own full node” maybe they would be more likely to do it. You should totally go form a Twitter mob and tag @slush_pool I think it is. But it is hard to justify. It is one of those slow burner things.

Q - Can the previous protocol be deprecated?

A - Ideally, eventually. But you have to get the hardware to upgrade because all the hardware supports Stratum and only supports Stratum. That is a lot of roll time.

Q - The other option is for a miner to get a pool onboard?

A - Yeah or you could run your own pool or you could solo mine using it if you are sufficiently large that you could solo mine. There are folks talking about starting a pool. This is one of those things that is build it and then wait a few years and hopefully they will come. It is intended to be a slow burn. I don’t anticipate seeing material adoption of it in any short timeline but hopefully with enough continued pressure and if someone wants to finish the bugs that need to be implemented then we can make that happen.

Q - There are people interested in making a pool?

A - Yeah. There are always people, it is not a terrible business to be in so there are always people interested in getting in I guess.

Q - There is an incentive there for the clients to run their own full nodes so that if the pool goes down they can get their money?

A - Yeah. Most of them are already configured with multiple pools so if one pool goes down they can carry on mining on a different pool. It is one of those things that is hard to justify this to anyone involved because no one makes more money necessarily. You can make some argument about how this looks better to investors of Bitcoin than the other chart. And so it is going to cause the Bitcoin price to pump so we should get this adopted now. But the market seems to be pretty rational (joke) so I don’t know how strong an argument that is.

Q - You prove to the pool that you paid the pool in the coinbase but you can lie about the transactions paying large transaction fees.

A - You can totally claim in a share “Hi I have a million Bitcoin in transaction fees. You should pay me out more because of that.” You have to handle that on the payout side. The BIP, I think, if not I should go write it because I intended to, tells you “Your payout should not exceed the median fee claimed by your users.” That makes it safe. Obviously you have to spot check the weak blocks. If you spot check the weak blocks you can detect a user who is cheating and ban them. Then as long as you don’t pay out in excess of the median fee claimed by your users you are never going to pay out too much. You just might be paying out to what is effectively a block withholding attack.
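A small sketch of that median rule, with made-up numbers; the share representation and units are assumptions for illustration, not the BIP’s wording.

```rust
// Sketch of "never pay out more than the median fee claimed by your users".
fn median_claimed_fee(mut claimed_fees: Vec<u64>) -> u64 {
    claimed_fees.sort_unstable();
    claimed_fees[claimed_fees.len() / 2] // middle element (upper median for even counts)
}

fn main() {
    // One dishonest client claims an absurd fee total in its shares.
    let claimed = vec![1_200_000, 1_150_000, 1_300_000, 100_000_000_000];
    let cap = median_claimed_fee(claimed);

    let claimed_by_this_share: u64 = 100_000_000_000;
    let credited = claimed_by_this_share.min(cap); // payout basis, in satoshis
    println!("credited fee for payout purposes: {} sat", credited);
}
```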

Q - Are there any scenarios where a security vulnerability in the old protocol could cause mass adoption of BetterHash?

A - Yeah these pools could lose all their hash rate overnight. If someone has an unfiltered BGP connection, you can get those in Eastern Europe all over the place I hear. If you want to make some easy money go BGP hijack some Bitcoin pools and you will also drive adoption of BetterHash (joke). I didn’t just say that.

Q - Wait until the BetterHash pool is ready.

A - Yeah please wait until there exists a BetterHash pool.

Q - That goes along with Eric Voskuil’s general thesis that decentralization is a result of attacks. If governments attack mining pools, stuff like that.

Q - Have you given any thought to paying out shares with Lightning? Does this make it easier?

A - No it doesn’t really change it. Payouts are orthogonal and in fact not implemented in the sample client I have.

Q - Could you not do it in a trustless way? That would be cool. Somehow the shares already open a Lightning channel or somehow push things inside a channel?

A - Intuitively my answer is no. I haven’t spent much time thinking about it. Bob McElrath has some designs around fancier P2Pool that he claims does it. I am intuitively skeptical but I haven’t thought about it. Maybe it works, talk to Bob or go read his post.

Q - It could be some referential thing, maybe it is Lightning-esque where whatever you are doing in the Lightning channel points back to the hash of the block that you are doing it in?

Q - What you do is you make it so that the coinbase includes a preimage. You wait for the pool to get its money to reveal its preimage and that also unlocks…

A - But that only works when the pool finds a full block and broadcasts it. They don’t have to reveal 99.99 percent of shares because they are not full blocks.

Q - It is intended for P2Pool.

A - Ok. His scheme is like a P2Pool variant with GHOST, SPECTRE kind of design.

Q - I guess all the payouts could be a Merkle tree of preimages of individual Lightning payments?

A - The problem is you are trying to reveal them and publish them. That would be a lot of data to publish.

Q - I don’t know how it would actually work. But it’d be nice because it solves a lot of these problems if you know that every individual thing that you send to the pool is self contained. You don’t have to think about who to pay how much.

A - Bob’s design no longer has a centralized pool involved. Back to the P2Pool design which is a little bit more self contained.

Q - Bob didn’t come up with this, it was me.

A - Chris, do you want to explain P2Pool?

Chris Belcher: The way P2Pool works is these shares form a share chain. Every node in P2Pool verifies this share chain and makes sure it pays out to the right people. When a real block is found the hashers get paid in proportion to how much work they have contributed to the share chain. You could make it trustless so that they can’t cheat each other. That’s a summary of how it works but it is dead, there are loads of problems with it unfortunately.
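A toy sketch of that proportional split; the share counts and miner names are made up, and the reward ignores fees and the share-chain window details.

```rust
use std::collections::HashMap;

// Toy sketch: split the coinbase according to each miner's share-chain work.
fn main() {
    let block_reward_sats: u64 = 1_250_000_000; // 12.5 BTC, ignoring fees
    let mut shares: HashMap<&str, u64> = HashMap::new();
    shares.insert("alice", 700);
    shares.insert("bob", 250);
    shares.insert("carol", 50);

    let total: u64 = shares.values().sum();
    for (miner, count) in &shares {
        let payout = block_reward_sats * count / total;
        println!("{miner}: {payout} sat");
    }
}
```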

Its biggest problem was it was bad UX. It reported a higher stale rate because it had this low inter block time so you had naturally high stale rates. But what mattered for the payouts, for a centralized pool if you have a stale rate you miss that many payouts, in P2Pool if you have a stale rate you can still get payouts for stale shares, the only thing that matters is your stale rate in comparison to other clients. And so you have a lot of miners who were running P2Pool and it said “Hey you have a 2 percent stale rate” and they were like “F*** this. My centralized pool says I have 0.1 percent stale rate. I am not going to use P2Pool.” And so P2Pool died for no reason because it had bad UX.

Q - Does P2Pool solve most of these problems?

A - P2Pool has a lot of complexity. The intention of BetterHash is that we can do effectively as well as P2Pool. You still trust the pools for payouts but hopefully they are only able to rip you off for a day and then you go somewhere else. But we can do almost as good as P2Pool without all the complexity of a share chain. This is two orders of magnitude easier to implement and maintain than P2Pool. That goes a long way in terms of adoption I think. But I’d be happy to be wrong, I’d be happy if P2Pool took off again and everyone just used that instead of this.

Q - Was it a factor with P2Pool that you had to have a full node? For a lot of people mining having a full node is an issue.

A - Totally, yeah. P2Pool made you have a full node whereas this is at least optional. I anticipate that most people using BetterHash kind of pools won’t run their own full node. My hope is that you can move to a world where it is really easy. You just buy a CasaHODL style node, you plug it in and now you are running a full node. You can have at least some of the larger clients of pools do that which will get you most of the advantage here. Even if all the small clients are just on pools, it is ok.

Q - One of the big issues with a pool is that you have tiny payouts and then you have lots of them before you form a meaningful output. While in a centralized pool you can set a payout threshold where here you cannot?

A - BetterHash doesn’t really affect payouts in any material way. In fact the software I have, it just has a hook for “Hi I received a share from this client.” It is up to someone building the pool to implement the database tracking and all of that stuff and actually do the payout stuff. That is no different from today versus this. You still have that “I got a share from client A for value B”

Q - I was talking about P2Pool.

A - P2Pool had very high payout thresholds because they were all onchain. They ended up with these huge coinbase transactions. I guess another problem with P2Pool is they had big coinbase transactions and some of the early ASICs actually barfed if you gave them too large a coinbase transaction and couldn’t mine with it. You have to pass the ASIC the coinbase transaction itself in Stratum. They had some limit in their JSON parser because parsing JSON in embedded devices is hard, who would have guessed? Also Forrest walked away from it and wasn’t really maintaining it. He went and did a rocket science PhD and got bored with P2Pool. I think it is kind of unmaintained. If someone wants to maintain it, go for it.

Q - It is really hard to read, massive Python files.

A - Someone could rewrite P2Pool and then maintain it.

Q - Some questions from Twitter. I think you’ve answered a couple of them. Does BetterHash allow overt AsicBoost?

A - Yeah. It is explicit in the BetterHash spec that overt AsicBoost is allowed specifically because the 2 bytes in the version field that I mentioned are explicitly allowed to be tweaked any way you’d like. That is compatible with existing AsicBoost stuff which just uses 2 bits and also gives you spare room to not need the extra nonce in the coinbase.

Q - Why weren’t the 2 bytes of the version field used before?

A - The question is why didn’t early Stratum specify that the 2 bytes in the version field are free for extra nonce space. It is bad form to be like “This thing that has consensus rules around it, we are going to embed it in the ASICs that they can change this.” Then you might have a soft fork or something that breaks your ASICs just because someone wants to increase the version number. Again if we all agree that we can take 2 bytes of the version field and apply no consensus meaning to them forever more it is completely fine to do this. At this point because of AsicBoost we are already locked in to that. We can’t do anything about it, at least without breaking all the AsicBoost miners which we don’t necessarily want to do. Just because it is bad form I guess.
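For illustration, here is a sketch of what “rolling” a masked region of the version field looks like. The talk says 2 bytes are free to tweak; the particular mask value below is arbitrary, not a spec constant.

```rust
// Sketch of version rolling: tweak only the bits the template allows.
fn roll_version(base_version: u32, counter: u32, mask: u32) -> u32 {
    // Keep the non-rollable bits from the template, put the counter in the
    // rollable bits (truncated to fit the mask).
    (base_version & !mask) | ((counter << mask.trailing_zeros()) & mask)
}

fn main() {
    let base = 0x2000_0000u32; // a typical BIP9-style version
    let mask = 0x00ff_ff00u32; // 16 rollable bits, position chosen for illustration
    for counter in 0..3 {
        println!("{:08x}", roll_version(base, counter, mask));
    }
}
```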

Q - It is 4 bytes, the version is a 32 bit integer?

A - Yes they are all 32 bit integers.

Q - I guess that’s just the way it is.

A - Because Satoshi. They are also little endian 32 bit, it is weird.

Q - I was quite surprised when I learnt that a lot of the Bitcoin exchanges don’t rely much on Bitcoin Core software. I was just as surprised also to learn that miners do heavily rely on Bitcoin Core software. Is that a true impression?

A - I actually don’t think that most exchanges don’t rely on Bitcoin Core. I think by far the most common setup for an exchange is you have Bitcoin Core nodes that you run and then you have your own internal SPV node using BitcoinJS or BitcoinJ or whatever and then that connects only to your own Bitcoin Core nodes. So in a sense you are still relying on Bitcoin Core, you won’t get data from anything else except your own full node that you are running, but you don’t have any of your business logic in Bitcoin Core. In a sense the pool setup is actually kind of similar in that most of your business logic is on the pool server. All of the logic around payout and share validation and everything is on the pool server. You just use Bitcoin Core to get data about new blocks. You use it a little bit more obviously than just making a peer to peer connection to it. To my knowledge there is not a real alternative, if you want good performance for getblocktemplate I don’t know if anything has reimplemented getblocktemplate in its insanity.

Q - I have never even looked at it.

A - I don’t think anyone else has either. It has a bunch of bells and whistles that I’m pretty sure are documented in the getblocktemplate spec and aren’t even implemented. I know that none of the things that it has aside from just the really basic “give me a block template” version are implemented in any of the pools. It has bells and whistles but no one cares because it is gratuitously inefficient and over designed.

Q - …much better latency in this regard and more profitable?

A - Yeah. That’s the other reason to pull getblocktemplate out and into a binary protocol that’s just push specifically. In the BetterHash Work protocol, if Bitcoin Core gets a new block it pushes that information to the pool server and to the clients, whereas getblocktemplate is polling.

Q - …

A - No it is a raw TCP socket. You just connect and it will give you new work. So in practice it will be a little lower latency, not enough to make a difference for profitability or at least not materially. But only because most of the pool servers are heavily optimized for making sure that right when Bitcoin Core gets a new block it does a getblocktemplate request. They do stupid crap like connect to the peer-to-peer port to get a push notification of a new block so that they can then go request getblocktemplate. Without the JSON roundtrip it will be a little bit faster. You could make some argument about better profitability but not a very strong one.

Q - It said in your BIP there is a reliance on Bitcoin Core APIs.

A - That is a reference to getblocktemplate? Because most of these pool servers are optimized for latency and they care a lot about the latency they do these stupid hacks where if we change something about the way we send notifications of new blocks to our peer-to-peer connections then we might break pool servers because they are using that as a notification to some other subpart of the daemon. It is this weird dependence that we might accidentally break because we go optimize something else. Oops now all of the pool servers are broken through complete nonsense reasons. We don’t want to be in that state.

Q - We want in the architecture to have a proper boundary between what they see as their Bitcoin Core as a service. Like you said with the exchanges, it is the same thing. They have Core and in theory they could swap it out.

A - At least BetterHash is better documented, you still get that boundary. In this case it is “Here’s one protocol. You use this.” Not “You connect over ZMQ and the peer-to-peer….” It is like have your little potato computer and shove electrodes on both sides so you can get something out…

Q - When your disk space decreases by more than….

A - Call getblocktemplate.

Q - Why would you subscribe to the blocks and connect to the P2P interface of the same server?

A - Because the getblocktemplate doesn’t give you a push notification when you get a new block. So you have to connect via the peer-to-peer interface to get a push notification so that you can go poll getblocktemplate. I think they also support ZMQ to get the push notification.

Q - You connect twice. One from the P2P and one from ZMQ.

A - Yeah. Either one might come first. It is random which one because Bitcoin Core is not designed to support one millisecond push notifications over arbitrary interfaces. But we can do that if we are using the protocol that explicitly supports it. We can go optimize that in Bitcoin Core and it is like a first class supported feature versus these weird hacks that people do that we might break without realizing it, very easily.

Q - Is the pie chart generated distributed randomly?

A - This is actual data. I took the breakdown of SlushPool… they have it public that user A has 1 percent of their hash rate and user B has 0.5 percent and whatever. I plugged it all into Excel. You can see where the big chunks are. This is AntPool and that is BTC.com. Then the actual breakdown of the individual bars are based on SlushPool distribution of the hash rate within their users. It is at least valid for SlushPool. The distribution will be different on AntPool and BTC.com.

Q - You are assuming that each pool is the same as SlushPool?

A - Yeah. Assuming each pool has the same distribution of hash rate then that would be valid. It is a strong assumption but it is not completely far off.

Q - There’s an old theory in Bitcoin, we’re going to have commodity hardware, miners in toasters. What do you feel?

A - It seems like 21 had to pivot so it seems like it didn’t work. That was a few years ago.

Q - Were they crazy or were they just ahead of their time?

A - Most of that is framed in the “we’re going to use them as heaters.” Remember that an electric heater is a hell of a lot less efficient than a heat pump. If you care about the efficiency of your heating you are not going to use an ASIC. If you have an existing electric heater you might replace it with an ASIC but it seems weird that that would be competitive commercially, especially since cheap power is in a few specific locations that have hydro, that have cheap green energy.

Q - It is more about energy distribution than it is about the fact that hardware reaches a certain physical limit in terms of efficiency…

A - We are at the point for hardware. That’s why you see more competition in that market which is really exciting. My sense has always been that distribution of hash power is more about the power markets than distribution of the hardware itself. Luckily power markets are pretty distributed it turns out. You can go buy a small amount of cheap hydro power a hell of a lot easier than you can show up and buy a hundred megawatts or two hundred megawatts of cheap hydro power.

Q - Consumer electricity is pretty expensive. Unless you have a number of giant solar panels it just does not add up enough. So unless you have a windmill in your garden…

A - Some people have windmills.

Q - You joked about hijacking BGP. Has anyone seriously considered mechanisms to disincentivize pooled mining?

A - There have been some attempts to disincentivize pooled mining. The term you are looking for in a Google search is non-outsourcable proof of work, a non-outsourcable puzzle. The idea of a non-outsourcable puzzle is that you design it such that if the client of the pool is able to detect whether or not this meets the full block difficulty then they can steal the full reward. You allow the client to steal the reward. You design it such that the pool will never be able to trust the clients because if the block is actually a full valid block the client will just steal all the reward for the block and the pool won’t get the reward, pools would be broken. I am vaguely dubious of this because as we discussed you could already put all your competition out of business with block withholding attacks. This doesn’t happen so it seems weird that a technical solution will solve this. Again you’ll just have pools that KYC.

Q - Any one of the miners in the pool can steal…?

A - No it would have to be the miner who found the block can steal that block reward. You just end up in a world where you KYC them and you are fine. The problem is also the reward variance. If you make the variance really low you have less incentive for pools to exist but you might still have pools. You still see pools in Ethereum but the reason you see pools in Ethereum is because they take over running a full node for you which is a lot of effort. Especially in Ethereum it is really hard. Whereas there is lower inter block time so there is much lower variance but even still you see pools.

Q - Up to a certain size I would say pools are a positive because they allow much smaller organizations to mine.

A - Yeah there is nothing inherently wrong with pools. They very neatly solve the payout distribution which is really nice. If we could get them to use BetterHash they could rip off their miners and try to hold them hostage but the miners could go elsewhere pretty easily.

Q - What are you talking about at the conference?

A - On Thursday I’m going to be talking about rust-lightning which is a project I’ve been working on to implement a batteries not included Lightning node. lnd, c-lightning, these things are a full standalone Lightning node where they have a wallet, they download the blockchain, they scan the blockchain, they do all of these things. If you are for example an existing wallet already, you are Electrum or Mycelium or whatever, taking a full Lightning implementation in a node, how do you use that, how you integrate that? You don’t just run a second wallet in your wallet, that’s kind of nonsense. rust-lightning is all of the pieces of Lightning implemented for you except for the last step of downloading the chain, generating keys, storing things on disk, these kinds of things. There is some cool stuff that I’ve been doing with it in terms of fuzzing the actual protocol logic and finding bugs in much cleverer ways than almost any other existing fuzz work that I’ve seen. That I’ll be talking about in addition to the high level API and what you can do with it, how you might integrate it into a mobile wallet or how you might integrate it into a hardware wallet. All kinds of fun stuff.

Q - It is able to be really flexible in comparison to some of those alternatives.

A - Yeah if you run a Lightning daemon I would not suggest you take rust-lightning, I would suggest you go take lnd and I don’t intend to compete with that. It is more a very flexible thing you can integrate a lot of different ways. You might imagine having it synced across different devices, having hardware modules that are mutually distrusting that are partially offline, integrating it that way. It is designed to support these kinds of different use cases that current Lightning stuff doesn’t have a concept for.

Q - Are you following those other implementations closely?

A - Not that closely. I don’t have strong views of them.

Q - I found an interesting Twitter conversation between you and Alex Bosworth on some of the design decisions for c-lightning and lnd versus rust-lightning.

A - I don’t remember.

Q - Apart from the pooling problem are you not concerned about mining centralization?

A - I am not really. Eric Voskuil likes to complain about BetterHash. He has a valid argument. His point is essentially that if you are beholden to your pool they could tell you “Hi, you have to now run Bitcoin Core patched to censor these transactions or we are not going to pay you.” There is still some effective centralization there. I am personally not at all concerned about that because currently today the pool can just do this and you don’t even notice. You would have to spend a lot of effort to detect this. Whereas in a BetterHash world you have to take active action to apply this patch and run this version of Bitcoin Core instead. It is much easier for your response to that to be switch to a different pool than actually do what they tell you to do. So you don’t really care. I’m not too worried about that. If you have only one pool you might still have that problem but of course if you only have one pool and they do this you can go create a pool.

Q - From a manufacturing point of view?

A - That is completely orthogonal, the manufacturing. Luckily that is improving a lot because we are getting to the current gen process for semiconductors so that’s become much more distributed which is nice.

Q - You are removing the need to parse JSON…

A - Yeah and we can simplify the controllers so that they are not running weird CG miner hacks on hacks on hacks and parsing JSON and spaghetti C code.

Q - Can you talk about Rust and how you are finding it developing on?

A - I really like Rust. Remember Rust was designed in large part because Mozilla got tired of C++ being terrible in Firefox and having security bugs up the wazoo. They wanted something that was efficient and much safer to replace it with. I have found it to be really nice to use but I was also coming from C++ so I was the target audience. Generally I am a huge fan. I haven’t messed around with Go or any of the things that are seen as alternatives but they are rather orthogonal. If you are going to do especially embedded programming, some of the stuff it is intended for, especially what rust-lightning is intended for, Go doesn’t really make sense. You have this big runtime and garbage collector and whatever. Rust doesn’t which is nice for that use case. I have found it to be great. It is still a little bit immature. The BetterHash implementation I have is in Rust but uses some of the newfangled Rust async stuff which isn’t complete yet. Hopefully they will finish up in the next year or two. It is still a newer language especially for server development kind of stuff. But rust-lightning is just a C callable library so I have most of the C wrapper written for it so you can embed it anywhere C is. Rust is relatively mature for that kind of application where you are not doing anything fancy, you are just providing an API in a C callable wrapper, I have found it to be pretty mature for that.

Q - Can you open a Lightning channel with a coinbase transaction?

A - Yes

Q - Does BetterHash enable things like I as a miner have my full node, I can include transactions that I actually want to send for no fees?

A - Yes. The “I want to mine my own transactions to avoid paying fee on it”. You do pay the fee in opportunity cost.

Q - And you hurt your anonymity.

A - Yeah and you hurt your anonymity because it is clearly the miner who included this and you are paying it in electricity fees.

Q - At least you know for sure you can close a channel.

A - That’s true if you have enough hash rate to mine a block in the correct amount of time.

Q - There is interest in decentralized mining with Lightning. As Lightning matures, Lightning is really dependent on miners not being able to censor transactions.

A - Yeah. Almost all proposals for doing more scalable chains, sidechain kind of designs, Lightning, there is other stuff in that category. The key assumption that they’ve all chosen to make is they elevate the censorship resistance assumption in Bitcoin to a security assumption instead of a utility assumption. We assume if you couldn’t get your transaction through in Bitcoin and transactions were being censored this is probably going to destroy the value of Bitcoin in the long run. But maybe this is ok in short bursts. Maybe this won’t hurt Bitcoin materially if we can redecentralize mining. Whereas in a system like Lightning or a system like a proof of work secure sidechain or something like that, if this is violated temporarily you not only have a broken user experience but you actually lose money. That is generally the key design. As you point out decentralizing mining is an important part of that at least in my opinion. That was kind of some of the impetus to work on this but we are currently in a very f***ed state so we should fix that.

Q - It goes along Schnorr I would say because Schnorr lets you blend in these opening and closing transactions a bit more so it is more difficult to see that’s a Lightning channel close. A force close wouldn’t have that because you are still expressing the script.

A - With a force close you wouldn’t have that and the open is just a bech32 output anyway. The comment for those on video is because Schnorr or MAST might allow you to make Lightning transactions look the same as any other transaction, decentralizing mining is useful and goes hand in hand with that.

Q - If you also used Taproot then it would look like a normal transaction.

A - Yes if you used Taproot it would look like a normal transaction. eltoo uses SIGHASH_NOINPUT, that would still be obvious. Hopefully the only thing that uses SIGHASH_NOINPUT. If anything else uses SIGHASH_NOINPUT we might be f***ed.

Q - You didn’t give it its full title.

A - SIGHASH_NOINPUTDONTUSETHISITISDANGEROUS

Q - Could you give a bit more detail on the fuzz tests that you wrote for rust-lightning? Perhaps explain what fuzz testing is first.

A - For those who are not familiar fuzz testing is a general testing approach of black boxing a test case that takes as input some random set of bytes, running a program on it and trying to make it crash. This is primarily used for decoders, things like that. They have proven to be very effective at taking an image decompression library or something that processes untrusted data off a network and making it crash, finding vulnerabilities in it, finding stack overflow, buffer overflow kind of vulnerabilities. They are not just completely dumb shove in random bytes, they usually instrument the binary, you can do this in hardware or in software, and detect when a given input has found new paths in the program. You map out all the IF statements and all the branches in the program and if it finds an input that hits a new branch then it considers that input interesting and it will mutate that input a little bit more than other inputs. They’ve actually had great results finding all kinds of fun security bugs in mostly image decompression, that kind of library. With rust-lightning, because it is a library and it is this C embeddable thing and it has no runtime associated with it, it is super easy to shove it in fuzz tests. One thing that I’ve done recently that has turned out to be really cool and really useful, to my knowledge the first use of this kind of approach to fuzzing, there is a fuzz test where it stands up multiple nodes in the same process, connects them to each other and then interprets the fuzz input as a list of commands to do to these nodes. Then tries to make the nodes disagree about the current state of the channel. These commands take the form of things like “Please initiate sending a payment from this node to this node. Or from different sets of nodes.” It can also deliver messages out of order. The fuzz tester can actually hit speed of light issues in terms of messages sent and getting delivered at different times. Its goal is to say “If somehow it can make the nodes disagree about the state of the channel then this is considered a crash and this is bad.” This has found a number of really weird corner case bugs of “If you do these four things in exactly this order then it forgets to set this flag. That will result in a disagreement if these other four things happen in this order.” The Lightning channel state machine is actually not as simple as it sounds. It is rather complicated. It has been a very effective test at fuzzing the rust-lightning channel state machine.
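A heavily simplified sketch of that harness shape: interpret the fuzz input as a list of commands driven against two in-process views of a channel, then assert they still agree. The real rust-lightning fuzz target drives full channel state machines; this toy version just moves a balance around, and the types and command encoding are invented for illustration.

```rust
// Toy sketch of "fuzz bytes as a command list, assert both sides agree".
#[derive(PartialEq, Debug)]
struct ToyChannel { balance_a_msat: u64, balance_b_msat: u64 }

fn run_case(data: &[u8]) {
    let mut view_a = ToyChannel { balance_a_msat: 1_000, balance_b_msat: 1_000 };
    let mut view_b = ToyChannel { balance_a_msat: 1_000, balance_b_msat: 1_000 };
    for byte in data {
        match byte % 3 {
            0 => { // A pays B 1 msat, applied to both views
                for v in [&mut view_a, &mut view_b] {
                    if v.balance_a_msat > 0 { v.balance_a_msat -= 1; v.balance_b_msat += 1; }
                }
            }
            1 => { // B pays A 1 msat, applied to both views
                for v in [&mut view_a, &mut view_b] {
                    if v.balance_b_msat > 0 { v.balance_b_msat -= 1; v.balance_a_msat += 1; }
                }
            }
            _ => { /* a real harness would reorder or delay message delivery here */ }
        }
    }
    // The property the fuzzer tries to violate: both sides agree on the state.
    assert_eq!(view_a, view_b);
}

fn main() {
    run_case(&[0, 1, 2, 0, 0, 1]); // a fuzzing engine would supply the byte stream
}
```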

Hardware Wallets (History of Attacks)

Stepan Snigirev

Date: May 1, 2019

Media: https://www.youtube.com/watch?v=P5PI5MZ_2yo

Slides: https://www.dropbox.com/s/64s3mtmt3efijxo/Stepan%20Snigirev%20on%20hardware%20wallet%20attacks.pdf

Pieter Wuille on anti covert channel signing techniques: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html

Introduction

This talk is the second in the series after my previous talk in London a few months ago at the Advancing Bitcoin conference. There I was talking mostly about general attacks on hardware, more from the theoretical perspective. I didn’t say anything bad about hardware wallets that exist and I didn’t specify anything on the bad side. Here I feel a bit more free as it is a meetup not a conference so I can say bad things about everyone.

Our project

The reason is that we are building a Bitcoin hardware platform and we are trying to make another hardware wallet plus a developer board and a toolbox for developers so they can build whatever hardware devices or Bitcoin powered devices they want. When you start doing something like that you obviously want to learn from others what mistakes they made. During this research I found a lot of interesting attacks and many interesting things about what hardware wallets do or do not do. That is what I want to share with you today.

Why hardware wallets?

This I will skip because we have already discussed it.

Cast

These are the wallets I will be talking about: Trezor, Ledger are pretty well known, Coldcard is a nice hardware wallet made in the US and allows you to use a complete air gap so you don’t need to connect it to the computer and you use a SD card to transfer transactions back and forth, and Digital Bitbox is a product of a company from Switzerland, Shift Crypto.

Table of contents

I split all the attacks into four categories. The first one is f*** ups. There are not many in this section. These are very serious vulnerabilities and if you have them then your product failed to deliver what it promised. The first reason for hardware wallets to exist is to protect against remote attacks. If your computer is compromised your keys should be safe. F*** ups cover these kinds of attacks where it doesn’t work. Then there are attacks on the software side. When you write something in software that is vulnerable then you have an attack and you can fix it with software. Hardware attacks are due to certain features of the hardware; some of them can be fixed by software and some cannot. And finally the architecture things. You have certain limitations when you choose how your hardware wallet works and what protocol you use, what data you use to verify stuff and also what microcontrollers you use.

Change address verification

https://sergeylappo.github.io/ledger-hack/

First the f*** ups. The first one is a pretty recent one. It was described in December 2018, a few months ago and it was on Ledger. What happens if you are trying to sign a transaction with Ledger? You connect Ledger to your computer and your computer prepares a transaction and says to the wallet “Here are the inputs. It has 1 Bitcoin and this is the derivation path you need to use to derive the private key to sign it. And these are the outputs. The first output is what we are actually sending and the second output is our change.” You don’t need to show the change on the wallet itself. You can provide the information for the hardware wallet to verify that this amount will actually end up in the address that the wallet controls. You use the derivation path to get the private key and to figure out that you actually control this address. Then on the hardware wallet side it displays to the user that you are sending 0.1 Bitcoin to this address and 0.01 Bitcoin for a fee. You don’t need to show the change address because you can verify that it will go back to you. The problem with Ledger was they trusted the computer when the computer was saying “This is the change address”. They didn’t verify it would be controlled by a private key from the wallet. You could replace this address with any other address, just send some derivation path; the Ledger didn’t check it and just removed this output from being displayed to the user. You will send 0.1 Bitcoin to the address you wanted and you will lose all your funds. This attack was fixed. I don’t know the reason why they didn’t check that but now they do check it. I would say it is a pretty critical vulnerability.
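A sketch of the missing check: only hide an output from the display if the script derived from the claimed change path actually matches the output. The derivation function below is a placeholder so the example is self-contained, not a real BIP32 routine, and the paths and scripts are made up.

```rust
// Sketch only: `derive_script_for_path` is a stand-in for real BIP32 derivation.
fn derive_script_for_path(_seed: &[u8], path: &str) -> Vec<u8> {
    // Placeholder derivation so the sketch is self-contained.
    path.as_bytes().to_vec()
}

struct TxOutput { script: Vec<u8>, claimed_change_path: Option<String> }

fn is_verified_change(seed: &[u8], out: &TxOutput) -> bool {
    match &out.claimed_change_path {
        // Only hide the output if the script really belongs to this wallet.
        Some(path) => derive_script_for_path(seed, path) == out.script,
        None => false,
    }
}

fn main() {
    let seed = [7u8; 32];
    let honest = TxOutput {
        script: b"m/84'/0'/0'/1/5".to_vec(),
        claimed_change_path: Some("m/84'/0'/0'/1/5".into()),
    };
    let attacker = TxOutput {
        script: b"attacker-controlled-script".to_vec(),
        claimed_change_path: Some("m/84'/0'/0'/1/5".into()),
    };
    println!("honest change hidden from display: {}", is_verified_change(&seed, &honest));
    println!("attacker output hidden from display: {}", is_verified_change(&seed, &attacker));
}
```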

Hidden wallet feature

https://saleemrashid.com/2018/11/26/breaking-into-bitbox/

The second one is a very nice hidden wallet feature of the Digital Bitbox. We don’t like to reuse addresses. To derive new addresses and private keys we use BIP32 and master private and public keys (xprv and xpub). The master private key is a pair of 32 byte numbers, one is the chaincode, the other is the private key. Then the corresponding master public key will be the same chaincode and the public key corresponding to this private key. If our computer knows the master public key it can derive all the addresses and all the public keys underneath that. If you know the master private key you can derive corresponding private keys. Here is the flaw of this hidden wallet feature. Why do you need a hidden wallet? Imagine you were trapped and the robbers hit you with a wrench until you sent them all your Bitcoin. You tell them the password and send the Bitcoin. If you have a hidden wallet that they don’t know about then you can easily send the funds and you still have most of your funds securely stored on the hidden wallet. How did these guys implement hidden wallets? They used the same values as in the master private key but they flipped them. In the hidden wallet they used the private key of the normal wallet as a chaincode and the chaincode as a private key. What this means is if your computer knows the master public key of the normal wallet and the hidden wallet, for example because you want to check your balances, then you get the chaincode from here (xpub) and the private key from here (xpub’). The whole private key for all your funds is transferred to your computer. This is kind of sad. You try to protect yourself and use a nice feature and then it screws you up. Now they have fixed it, it was in November (2018). They fixed it by normal use of the passphrase. To derive the hidden wallet master private key you use the same mnemonic with another password. Similar to what Trezor does. Right now it is not an issue anymore but it was pretty sad to see. It was discovered by Saleem Rashid together with a few other vulnerabilities that are not that interesting.
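A toy illustration of why the swap leaks the key: anyone who sees both master public keys learns the normal wallet’s private key, because the hidden xpub’s chaincode is that private key. The byte values here are arbitrary stand-ins, not real key material.

```rust
// Toy illustration of the swapped (chaincode, key) flaw described above.
fn main() {
    // Normal wallet master private key: (chaincode, private key).
    let normal_chaincode: [u8; 32] = [0x11; 32];
    let normal_privkey: [u8; 32] = [0x22; 32];

    // Flawed hidden wallet: reuse the same values with the halves swapped.
    let hidden_chaincode = normal_privkey;
    let hidden_privkey = normal_chaincode;

    // An xpub reveals its chaincode (plus the public key). A watch-only
    // computer given both xpubs therefore sees both chaincodes...
    let leaked_via_hidden_xpub = hidden_chaincode;

    // ...and the hidden xpub's chaincode IS the normal wallet's private key.
    println!(
        "normal private key leaked to watch-only computer: {}",
        leaked_via_hidden_xpub == normal_privkey
    );
    let _ = hidden_privkey; // unused in this sketch
}
```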

Software

So we had two major f*** ups. Two is already enough. Now let’s move to the software side.

Q - What went wrong there? They didn’t do the right checks? How does that happen?

A - If you don’t know how exactly master private keys work then you can make this kind of mistake. It is not about the validation. The private key should always stay private, it shouldn’t go anywhere else. What they did, they screwed up the standard implementation.

Sjors Provoost: They made two mistakes. One they ignored the standard, there was a standard for hidden wallets. And two they rolled their own crypto. “We’ll just swap these two things around”.

Normally if you try to do something you need to look at the standard that already exists. There are smart people developing cryptographic standards and they are thinking about all kinds of things that you would never think of. If you don’t have any kind of standard then you need to think more than twice, many times, before you release something like that. I think this can be solved by security audits at least.

Q - This will be an issue only if you are signing with a hidden wallet?

A - Not even signing. Normally master public keys are used not to sign but just to monitor the blockchain. If you give your computer both master public keys then it will know your private key for this wallet. It would be probably ok-ish to use one computer to monitor this master public key (xpub) and to use another one to monitor this (xpub’). But then the usability of your hardware wallet isn’t great. You have two different wallets on your phone.

Q - The hidden wallet is just one button click away. The guy with the wrench would just say “Can you also click the button for the hidden wallet?”

A - I don’t know how exactly it was implemented but probably yeah. The password scheme works much better because you have an infinite amount of hidden wallets. If you want to go really crazy. Another problem is the attacker will hit you with a wrench until you give out all of them.

Sjors Provoost: If they know which addresses you control they can keep using the wrench.

A - This is the privacy issue that you should solve with coinjoins and stuff.

Q - Just give you enough plausible deniability in case that happens.

A - In general disclosing information about your funds is pretty bad.

So the software part.

USB descriptor

https://blog.trezor.io/details-of-security-updates-for-trezor-one-firmware-1-8-0-and-trezor-model-t-firmware-2-1-0-408e59dc012

I talk a lot about Trezor in the other parts not because they are bad but because they are the oldest wallet and also they are the most collaborative I would say. I learned a lot by reading their security updates and their blog and talking to them directly. More eyes are monitoring the code of Trezor so more attacks are discovered, and they fix them almost instantly. In that sense I think it is a great product and I would recommend Trezor. There are some attacks that we can learn from. One that was fixed in March 2019, a few months ago, is pretty relevant to the talk I gave a month earlier. It is a glitch during the USB communication. When you have your hardware wallet and you plug it into your computer over USB, what happens under the hood? The computer asks the hardware wallet “Who are you? What is your vendor ID? What is your description?” The wallet should send back something like “I am Trezor Model T. This is my vendor ID, blah blah blah”. This was happening before you unlocked the wallet with a PIN. Normally it is fine. Even if the computer asks for more bytes than the descriptor contains there was a check: “If the computer is asking for more than 14 bytes then give back only 14”, those 14 bytes being “Trezor Model T”. But the problem is if you are using an application microcontroller that doesn’t have any protection against glitches. If you send this request and you time everything correctly, such that when this comparison of 500 to 14 happens you either skip the instruction or make it evaluate to the wrong value, then the wallet will just send you 500 bytes. It happened that right after the descriptor in memory there was the mnemonic. Asking for 500 bytes would give you “Trezor Model T” followed by the mnemonic. It was fixed very quickly and easily. First, for the Trezor Model One I think they added read protected bytes. When the microcontroller tries to read such a byte it goes into a fault and never gets past that. This is the first protection for the Model One, and for the Model T they made it even easier. They don’t respond to any request until you enter the PIN. If you know the PIN then you control the wallet so it is fine. They have these read_protected_bytes just in case, but it is a pretty elegant solution I would say.
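
As an illustrative sketch (the real firmware is C, and the "glitch" is a physical voltage or clock fault; here it is simulated by skipping the bounds check), the vulnerable length check looks roughly like this:

```python
DESCRIPTOR = b"Trezor Model T"                 # the 14 bytes the wallet should return
MNEMONIC   = b"legal winner thank year ..."    # secret that happened to sit right after it
RAM        = DESCRIPTOR + MNEMONIC

def handle_descriptor_request(requested_len, glitched=False):
    # The bounds check the firmware performs. A well-timed hardware glitch
    # can skip this comparison or make it evaluate the wrong way.
    if not glitched and requested_len > len(DESCRIPTOR):
        requested_len = len(DESCRIPTOR)
    return RAM[:requested_len]

print(handle_descriptor_request(500))                  # normal: only the descriptor
print(handle_descriptor_request(500, glitched=True))   # glitched: descriptor + secrets
```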

PIN side channel

https://blog.trezor.io/details-of-security-updates-for-trezor-one-firmware-1-8-0-and-trezor-model-t-firmware-2-1-0-408e59dc012

Another one is on the PIN side channel. I don’t know if you were at Advancing Bitcoin or not but basically there is such a thing as a side channel attack. This is a slide from that presentation. If you are trying to verify the PIN, this is the PIN we entered and this is the correct PIN. If you do it naively, with the strncmp function, the microcontroller first checks the first digits. If they are different then it returns false. If they are the same it keeps going. What you can do is try the first digit and measure the time between the start of the PIN verification and the failure. If the first digit is correct then the time the microcontroller takes is a little bit longer. You know that you guessed the first number. Then you continue the same with the second, the third, the fourth. This was not an issue in Trezor, they had a slightly different issue. They were comparing the PIN digit by digit and kept going until the end. If you entered the wrong PIN and the right one is 1234 then it compared all 4 digits and at the end returned false. So the timing attack was not an issue. The issue was the power consumption and electromagnetic radiation. When you are comparing 5 to 1 the pattern of the radiation and the power consumption is slightly different compared to other comparisons like 6 to 2 and so on. What Ledger did, it was disclosed by Ledger, they trained a neural network, artificial intelligence, to distinguish between the different comparisons, and on a random hardware wallet they were able to guess the right PIN after I think 5 tries or so. You still have a pretty small delay in this range. If you try 5 times it will be maybe a day but still doable. It was also fixed. And it was fixed in a very smart way. Trezor even released an open source library for encrypted storage. Basically your mnemonic and your private keys and all the secret information are stored encrypted, and this encryption key is derived from the PIN and entropy from other sources. For example the unique ID, the serial number of the device. Now they are also working on having some key material from the SD card. Given all these keys you can decrypt the mnemonic but otherwise you can’t. It is very resistant against all kinds of side channel attacks because what you always do is take the entered PIN, without knowing the correct one, use it together with a fancy algorithm to derive the decryption key, then try to decrypt your secret information, and if it decrypts correctly and looks like a mnemonic then it was the correct PIN.
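
Here is a hedged sketch of the two ideas: a naive digit-by-digit comparison that leaks through timing or power, versus never comparing the PIN at all and instead stretching it into a storage decryption key. The KDF parameters are purely illustrative, not Trezor's actual scheme.

```python
import hashlib

def naive_pin_check(entered: str, correct: str) -> bool:
    # Early exit: how long this runs (and what each comparison looks like on a
    # power trace) depends on how many leading digits were guessed correctly.
    for a, b in zip(entered, correct):
        if a != b:
            return False
    return len(entered) == len(correct)

def storage_key(entered_pin: str, device_salt: bytes) -> bytes:
    # Side-channel-resistant idea: derive a decryption key from whatever PIN
    # was entered and simply try to decrypt the secrets. A wrong PIN yields
    # garbage instead of a mnemonic; nothing secret is ever compared directly.
    return hashlib.pbkdf2_hmac("sha256", entered_pin.encode(), device_salt, 100_000)
```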

Q - I’m assuming the PINs on all the hardware devices are implemented the same? You can only have 3 tries…

A - In Trezor you can have as many tries as you want if you are willing to wait long enough. They use an incremental timer. Between the first try and the second try it is 1 second, after that you have to wait for 15 seconds, then for 1 minute, for 5 minutes. After 10 tries you will need to wait for a few days and if you fail once again it will already be weeks.

Q - How does it track time?

A - They start the timer at boot. They increase the counter then even if you unplug the device and plug it back you just need to wait for this time.

Q - This is the Trezor Model T, it is also on the Trezor Model One?

A - Yes, they have very similar firmware.

Q - And Ledger…

A - Ledger has a limited number of tries, they verify the PIN on the secure element. The secure element was designed to do everything correctly because it is used in banking applications and so on. For them it is not an issue. You normally don’t have such side channel attacks. On the Coldcard they use secure key storage. There is another attack on Coldcard I will talk about later but in principle it is also not vulnerable against side channels. I don’t know how the Digital Bitbox does it, but they have pretty bad overall security.

Q - They are still a very new company in comparison.

A - Yes, they are working on a new product where they will fix some of these things, hopefully. What else do we have? All the Trezor clones are probably very vulnerable to these kinds of attacks. In principle if the hardware wallet uses some kind of secure element then you are probably fine with side channels. If it is just an application microcontroller then there is a pretty big attack surface for side channels, glitching and fault injection.

Q - One feature that I’d love to see in hardware wallets is that it always gives you a delay. If I enter my PIN code, or even before I can enter my PIN code, it just waits 24 hours. If somebody wants to rob you physically they have to wait 24 hours.

Q - That also means you can’t spend your money for 24 hours.

Q - Maybe it is like different PIN codes for different periods on how long you have to wait.

A - There is another option. You can have a PIN code, like in banking applications, that erases, wipes the device. Instead of 1234 you enter 4321 and then everything is erased. I don’t know what will happen to you afterwards. To restore you probably have the seed or maybe Shamir’s Secret Sharing.

Q - The timeout gives you plausible deniability.

Frozen memory attack

https://medium.com/@Zero404Cool/frozen-trezor-data-remanence-attacks-de4d70c9ee8c

This attack is pretty old actually. It is from August 2017, it was fixed a long time ago. The problem was that when you load something into the memory it stays there for a while. When you unplug the device from the power it starts to decay. At room temperature it happens pretty quickly. This is a chart of how fast it happens for different devices. At room temperature it is a few hundred milliseconds. But if you take the device and freeze it, then it can last for 10 seconds for example. What you could do: you plug in your Trezor, it loads your PIN code and your mnemonic phrase into the memory. Then you can trigger a firmware upgrade. You unplug the device, plug it back in and start the update process, you get your new firmware and then you can read out the contents of the memory. The whole procedure doesn’t require a PIN. Maybe you forgot your PIN and you just want to wipe your device or something. The solution for this was pretty easy. You just don’t load the mnemonic until the PIN is entered. With the encrypted storage it became even nicer, the device doesn’t even keep the decrypted secrets around. But still it is an attack surface. You need to be careful about what you store in the memory.

Point multiplication: lookup tables

https://blog.trezor.io/details-of-security-updates-for-trezor-one-firmware-1-8-0-and-trezor-model-t-firmware-2-1-0-408e59dc012

These are recent attacks, and they are not directly fixed yet because to exploit them you need to know the PIN, and then you control everything anyway. The Trezor guys are working on that. Again a side channel: when you are computing your public key you take your private key and multiply it by the generator point of the curve. To make it efficient you normally use lookup tables. It is a huge table where you have G, 2G, 3G, 4G and many other Gs, G multiplied by many different numbers. Then when you have a private key, written in binary: “Here I have a one, I need to add G. Here I have another one, I need to add 2G. Here I have a zero so I don’t need to add 4G, I go further and I add 8G here”. All these values you don’t compute by yourself, you take them from the lookup table. It improves the performance of the signing and public key derivation by roughly a factor of 3. But the problem is that when you access different regions of the memory of this lookup table you leak some information to the outside. I think again it was the electromagnetic radiation. Maybe power consumption. Here the easy fix would be to not use lookup tables and do the multiplication directly. But it is not really an issue yet, and I don’t know if they will downgrade the performance or not. To ask for the public key you need to know the PIN. I see some problems with that in the future. When for example you have a hardware wallet that supports the Lightning Network and you want to keep it locked at home. It routes the payments for you, it does all the automatic stuff and verifies that you are only earning money. There you can have an attack surface for these kinds of side channels. But right now it is not an issue because no hardware wallet supports the Lightning Network yet, or even coinjoin.
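
A toy sketch of the table walk, using plain integers in place of elliptic curve points (so "point addition" is just +). The part that leaks is which table entries get fetched, which follows the bits of the private key.

```python
def precompute_table(G, bits=8):
    # table[i] = (2**i) * G, computed once in advance
    return [(1 << i) * G for i in range(bits)]

def scalar_mul(d, table):
    acc = 0
    for i, tG in enumerate(table):
        if (d >> i) & 1:     # for every set bit of the private key...
            acc += tG        # ...fetch 2^i * G from the table and add it
            # this memory access is what radiates / consumes power differently
    return acc

table = precompute_table(G=7)                 # pretend 7 is the generator point
assert scalar_mul(0b1011, table) == 11 * 7    # bits 0, 1, 3 set -> G + 2G + 8G
```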

Hardware

We are done with the software attacks, now to the hardware side.

Scalar multiplication: imul instruction

https://blog.trezor.io/details-of-security-updates-for-trezor-one-firmware-1-8-0-and-trezor-model-t-firmware-2-1-0-408e59dc012

Trezor again, a major player in this talk. Scalar multiplication, basically the same stuff, even if you don’t use the lookup tables and you try to compute the public key directly. What you normally do during this computation: you take huge numbers, 256 bit numbers, and you need to multiply them. The microcontroller cannot multiply such huge numbers, we have 32 bit, 64 bit, so you need to split them into smaller pieces. In the current implementation they use 30 bit numbers, and the leftover is 16 bits, and then you multiply them. The problem with the multiplication instruction is that on the hardware level, on the semiconductor level, it uses slightly different implementations depending on the values of the operands. When you do a 30 by 30 bit multiplication it behaves one way, 30 by 16 behaves differently, and with two small numbers it is even more different. Another case where optimizing for performance gave you a side channel attack surface. Here there is a way to slightly change the implementation of the elliptic curve calculation and use roughly the same length for the words in the multiplication, so that none of them is much shorter than the others. Then you should be safe. It is a hardware problem that needs to be fixed by changing the software implementation.
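
For illustration, splitting a 256-bit scalar into 30-bit limbs (eight full limbs plus a 16-bit leftover) looks roughly like this sketch; the proposed fix is to rebalance the split so that no limb is much shorter than the others.

```python
def to_limbs(x: int, limb_bits: int = 30, n_limbs: int = 9):
    # 8 limbs of 30 bits cover 240 bits; the 9th limb holds the leftover 16 bits.
    mask = (1 << limb_bits) - 1
    return [(x >> (limb_bits * i)) & mask for i in range(n_limbs)]

x = int.from_bytes(bytes(range(32)), "big")           # some 256-bit number
limbs = to_limbs(x)
assert sum(l << (30 * i) for i, l in enumerate(limbs)) == x
```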

SRAM dump

https://blog.trezor.io/details-of-security-updates-for-trezor-one-firmware-1-8-0-and-trezor-model-t-firmware-2-1-0-408e59dc012

Another hack was demonstrated at the Chaos Communication Congress in Leipzig in December by the wallet.fail guys. This was a great talk. What they showed is how you can extract the firmware, including the PIN code and the mnemonic phrase, from a Trezor using a pretty nice tool. It looked like this. You have the rig, you put in your Trezor microcontroller and you get everything. The problem there was that they were able to glitch the chip in such a way that the protection level implemented by the manufacturer of the chip can be downgraded. The chip normally has three different RDP levels of access to the memory. Level 0 is full access. When you connect your debug interface you can read and write anything, both from the memory and from the Flash. Then there is Level 1 that is used by most consumer electronics. When you connect the debugger you can read only the memory but you cannot read what is in the Flash. And then finally Level 2 that doesn’t give you any access to any information. Trezor normally uses RDP Level 2 so you should be safe. This is implemented in software actually. Your microcontroller boots in Level 0, then it quickly changes to Level 1, Level 2 and then you are fine. If you glitch at the right moment then you can downgrade from Level 2 to Level 1. What glitching does is allow you to skip certain instructions or make them complete in a wrong way. The problem is still that the mnemonic phrase is in the Flash, and at Level 1 you don’t have access to the Flash. What you need to do is first somehow move the mnemonic into the memory and then use this glitch. What they did, they started an update process. What happens during the update process? Normally you don’t want the user to have to reenter their mnemonic. You just copy all the secret information into the memory and start writing the new firmware into the Flash. At this moment you glitch and downgrade the protection level to Level 1, and now you have access to the memory and the mnemonic is sitting in it. You have everything.

Cortex M3 bug

https://blog.trezor.io/trezor-one-firmware-update-1-6-1-eecd0534ab95

Another problem that was already fixed by the semiconductor manufacturer, STMicroelectronics. It was discovered on Trezor. The problem is that on the Cortex M3, the one that was used in the Trezor Model One, there was an annoying bug. You can set this protection level by putting a certain value into a register and committing this value, writing it to the chip. Then you should be safe. The problem was that you could write Level 2 to the chip and then write a different value, for instance AA, to the register. The microcontroller for some reason was taking not the value that is stored in the chip itself but the value in this register. You can put any value there without committing it and you completely remove the read protection this way. It is also pretty annoying but it was fixed by implementing a software workaround. It is a little complicated.

F00DBABE

Another nice one is from Ledger, also from the CCC conference. The Ledger architecture: you have a secure element, then you have the microcontroller that controls your display and the buttons, then you have the USB for communication. What happens when you update the firmware on Ledger? When you start updating, there is a register, a part of the memory, where Ledger stores a certain magic value. When the update process starts, before the verification, the microcontroller sets this value to all zeros, and after writing the full firmware it verifies that the signatures are ok, that the firmware comes from Ledger, and if that is true then it puts F00DBABE back into this register. You don’t have write access to this region during the firmware update. If you don’t have a valid signature you still have zeros there so the hardware wallet will not start at all. You need to put back the original firmware. The problem was that if you carefully read the manual for the microcontroller you will see that the same parts of the Flash can be accessed through the memory mapping at different addresses. This meant that if you try to put something into 000_3000 it would effectively write this value to this magic memory region at 800_3000. A simple bug that was caused by not carefully reading the documentation of the microcontroller. It is easy to miss this thing because the documentation for the microcontroller is 150 pages long. It is painful to read it but if you are developing a security device you should probably read it. So you can write whatever firmware you want, put the magic value back yourself through the aliased address, and then it looks like valid firmware. This is the video they demonstrated. Your device is genuine but you can play Snake on it. You still don’t have access to the secure element, to your private keys, but you do control the display and the buttons. This means that in principle by slightly changing the firmware you can encode something like “Please always consider the second output as the change address and never show it to the user”. Together with a compromised computer you can steal all the funds, so I still think it is an issue. It was fixed by the way.

Architecture

Then the architecture.

Man In The Middle

https://saleemrashid.com/2018/03/20/breaking-ledger-security-model/

Still about Ledger. The recent Ledger Nano X, even though they advertise it for the Bluetooth, the really nice thing about it is that they improved the architecture. Before, on the Ledger Nano S, they had this microcontroller that is like a man in the middle. It controls all the important parts like the screen and the buttons. Whatever you see, you see it not from the secure element but from this microcontroller. If this guy (the MCU) gets hacked then you are probably screwed, because you don’t need to have access to the mnemonic itself, you just need to fool the user into giving you a signature. Displaying something weird is enough. Together with the previous attack, in a supply chain attack where you slightly change the firmware here, you can set the entropy to zero or to a defined value that is known by the attacker. Then all the users will always derive a mnemonic that you know. This is another video from Saleem Rashid where he is setting the entropy to zero. He is initializing the new device and the mnemonic looks like “abandon abandon abandon abandon…”. The last word is different because of the checksum. This is when everything is zero, but obviously you can set it to arbitrary values known to the attacker.
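
You can reproduce that phrase yourself: with all-zero entropy, BIP39 produces exactly the "abandon abandon…" mnemonic, with only the final checksum word differing. This sketch assumes the python-mnemonic reference package is installed.

```python
from mnemonic import Mnemonic   # pip install mnemonic (Trezor's reference library)

phrase = Mnemonic("english").to_mnemonic(bytes(16))   # 16 bytes of all-zero "entropy"
print(phrase)
# abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about
```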

Bruteforce attack

https://twitter.com/FreedomIsntSafe/status/1089976828184342528

Now about the Coldcard. It is a pretty new wallet and I would say it is pretty safe, mostly because you can use it in a completely air gapped mode. You don’t have a bidirectional dataflow and you pass everything with an SD card. But still there is a question about the architecture. What they use is a secure key storage chip that can store your keys securely. But it cannot do any PIN verification or elliptic curve calculations. This means that every time you sign a transaction or whatever, you grab your private key from the key storage, put it into the microcontroller, do all the calculations and when you are done you put it back. The same happens with the PIN. With the PIN they use a certain counter inside the key storage. Whenever you enter the wrong PIN the microcontroller increases this counter, asks the key storage for this counter on the next run and waits for a corresponding amount of time. What happens if you put a chip on this bus? This chip basically blocks the message that increases the PIN counter. Then the PIN counter is never increased and, as this demo shows, you can try as many PINs as you want one by one without any delays. Initially it uses a wait similar to Trezor, first you wait 15 seconds, then more and more. Here you just need to enter them one by one. You still need to wait for half a second before every try, that is hardcoded in the microcontroller. This way you can crack all the PINs that are shorter than 6 digits. Coldcard recommends using 8 or 10 digit PINs. This is exactly the reason.

Q - The reason is specifically for this attack?

A - Yes, this brute force attack. To communicate with this key storage you need to know a pairing secret. As soon as you get this pairing secret, using any kind of attack like the ones demonstrated on Trezor for example, you can just drop all this stuff, directly connect your own fast microcontroller to this key storage and brute force all the PINs without any delays. There will be a limitation due to the communication speed, but I think even 8 digit PINs are breakable if you know the pairing secret. With this man in the middle on the bus you can easily try 4 or 6 digit PINs in a couple of days.
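
Some back-of-the-envelope numbers for the man-in-the-middle case, assuming only the roughly half-second delay hardcoded in the microcontroller remains:

```python
tries_per_second = 2                                       # one attempt every ~0.5 s
for digits in (4, 6, 8):
    avg_seconds = (10 ** digits) / 2 / tries_per_second    # expect success halfway through
    print(f"{digits}-digit PIN: ~{avg_seconds / 86400:.2f} days on average")
# 4-digit: ~0.03 days (under an hour); 6-digit: ~2.9 days; 8-digit: ~290 days
```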

There is a solution that they decided not to take. There is another counter in this key storage that cannot be reset at all. You can design the protocol in such a way that your mnemonic is stored encrypted and is recoverable only until this counter reaches a certain value, let’s say 10. But this would mean that in total you have only 10 tries to enter the wrong PIN. After 10 tries you need to throw away your device and get a new one. But on the other hand you don’t have a brute force problem. There is a trade-off. I think they decided to go more user friendly than towards security.

Multisignature flaws

I wanted to also talk a bit about multisignature in general. This is an issue on most of the hardware wallets. Coldcard doesn’t support it currently, they are working on it but it is not released yet. I will tell you why afterwards. Trezor supports multisignature very nicely. They verify everything, they show you the fee, they hide the change addresses from you, they can verify that they are indeed the change addresses and so on. I saw on the Ledger website “Buy our bundle”: you use the Bluetooth one for your everyday expenses where you store a small amount, and then you use 2-of-2 multisignature between these two devices to keep your life savings safe. This guy is stored somewhere in my safe at home. Even if some guys get that device, I am still safe because I’m using 2-of-2 multisignature. The problem is that when you try to display the address on the screen of the device, when you are using Ledger and multisignature, you can’t really do it. But it is important. Imagine I want to be sure that I am sending my Bitcoin to my multisignature address. How would I do it if I can’t verify it on the screen of the device? What should I rely on? This is the first problem. The second problem, I feel bad that I bought this device. This is the Ledger Blue. It never gets firmware updates, they dropped support for it and they don’t want to develop it anymore. I feel like I spent 300 euros on a brick. This is what a multisignature transaction looks like when you try to do it on the Ledger Blue. Even on the Ledger Nano S it will display the fee but it will not display that one of the addresses is the change address. I cannot verify that the change is going back to my address. We really need to work more on the multisignature side of hardware wallets. Right now if you are trying to use multisignature you decrease your security and that is weird.

https://github.com/stepansnigirev/random_notes/blob/master/psbt_multisig.md

Another problem is on the protocol level, and this is probably the reason why Coldcard still doesn’t have multisignature support. I sent an email to the mailing list. This is the missing piece in the partially signed Bitcoin transaction (PSBT) format. Imagine that we have a 2-of-4 multisignature scheme. We use 2 hardware wallets, one air gapped computer and one paper backup. We have 4 keys, and we have our addresses that use these 4 keys. If there are 2 signatures then we are fine. When we are preparing the transaction what do we want? We have the input, we have the keys there, we need to know the derivation path for these keys, and then for our change address our software wallet, the watch-only wallet, should be able to tell us that these are keys derived from the same master keys as the inputs. Then we can consider that this thing is the change output and we are good. The problem right now is that in the PSBT you don’t have a master public key field. You only have the derivation path, where you have a fingerprint and derivation indexes. This means that if I am sending this information to one of these two hardware wallets it will be able to verify that its own key is there, but it has no idea about the other ones. It will just have to trust that they have not been replaced by something else. What the attacker can do is replace the other keys with his own, and then, using a pure PSBT workflow like the one Coldcard is trying to do, you can lose all the money because the attacker also controls this output. Hopefully we will add xpub fields to PSBT and then we are fine.
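
A hedged sketch of the check a signer would like to run on a 2-of-4 change output: every cosigner key in the output must derive, at the claimed path, from one of the wallet's known master public keys. Without xpub fields in the PSBT there is nothing to run this check against. The derive_pubkey argument is a stand-in for real BIP32 derivation.

```python
def is_our_change(output_keys, output_paths, wallet_xpubs, derive_pubkey):
    """Return True only if every key in the multisig output belongs to our cosigner set."""
    for key, path in zip(output_keys, output_paths):
        if not any(derive_pubkey(xpub, path) == key for xpub in wallet_xpubs):
            return False   # an unknown key -> possibly the attacker's
    return True

# Toy usage: "derivation" is just string concatenation for illustration.
derive = lambda xpub, path: f"{xpub}/{path}"
xpubs = ["xpubA", "xpubB", "xpubC", "xpubD"]
assert is_our_change(["xpubA/0/1", "xpubC/0/1"], ["0/1", "0/1"], xpubs, derive)
assert not is_our_change(["xpubA/0/1", "evil/0/1"], ["0/1", "0/1"], xpubs, derive)
```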

Should we trust chip manufacturers?

https://www.aisec.fraunhofer.de/content/dam/aisec/ResearchExcellence/woot17-paper-obermaier.pdf

Another thing that I want to say about architecture and hardware in general. The problem is we don’t really know what is happening inside the chips, what capabilities they have. Maybe they have backdoors. It was shown actually. These are three research papers, Skorobogatov from Cambridge, which demonstrate there is a backdoor or some hidden functionality in the debug interface of the security focused… And other things like how to make the microcontrollers work less securely than they should. I would say that what I would like to see in hardware wallets is chips from different vendors, ideally open source cores. Right now the only option for that is RISC-V architecture that has an open source core. Unfortunately there are no security focused devices based on that. Plus all the vendors take this open source core and put proprietary stuff around it. Pretty sad. But at least we have a core. Ideally we should be able to take this open source core, put some open source anti-tamper and security features around it and have a completely open source chip. That would be perfect. But for now what can we do?

Wishlist

We can at least stop relying on a single manufacturer and single chip. When we have Schnorr multisignature, then we can store the keys on different parts of our device, one on the secure element, one on a microcontroller, another on another microcontroller. Maybe the third one on the computer. Then we merge them together in such a way that the full key is never assembled in a single part of the chip. Or at least to make the hardware untrusted. For that I have one proposal that I wrote sometime ago.

Bonus: if hardware wallet is hacked

Imagine you have a hardware wallet and this hardware wallet is hacked. Let’s say you are super paranoid and you are using it in a completely air gapped way. You go to a remote place, inside a Faraday cage, and you do everything in a super paranoid way. How can the attacker, this malicious hardware wallet, disclose your private key? There is a way. In every signature that you are generating you have a wonderful thing called a random nonce, the blinding factor. For every signature you need to generate a random number, or a pseudorandom number according to certain standards, and then use it to generate the signature itself. If the hardware wallet is compromised it can pick whatever number it wants, including a non-random number, including a number that is known to the attacker. With such signatures it can leak information about your private keys. The attacker doesn’t need to compromise your computer, he just needs to scan the blockchain for certain patterns, for certain keys or transactions with certain flags. I had a proof of concept demo where I was able to extract the master private key in roughly 24 transactions. What you could do instead is not let the hardware wallet choose this random nonce on its own. Say you have a software wallet that is not compromised and a hardware wallet that is compromised. It is the other way around compared to the normal assumptions. Then on the software wallet you generate a random number and send the hash of this random number to the hardware wallet. The hardware wallet then needs to commit to the nonce that it will use. After this commitment it cannot change it, because otherwise you will reject the transaction. Then you tell it your random number and verify that for the signature it was using its committed nonce (k) plus your number (n). Then if one of these two parties is honest you are safe. In principle even if this (k) is not a random number, by adding a random number to it you randomize everything. Something like this is already implemented in the Schnorr MuSig protocol. We can extend it to ECDSA as well.
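
Here is a minimal sketch of that commit-and-blind exchange, using the python-ecdsa library for the secp256k1 arithmetic. The names and message flow are invented for illustration; a real protocol would recover R from the produced signature rather than take the wallet's word for it.

```python
import os, hashlib
from ecdsa import SECP256k1

G, N = SECP256k1.generator, SECP256k1.order

# 1. Host (software wallet) picks a blinding value n and commits to it first.
n = int.from_bytes(os.urandom(32), "big") % N
host_commit = hashlib.sha256(n.to_bytes(32, "big")).digest()   # sent to the device

# 2. Hardware wallet picks its nonce k and commits to R = k*G.
k = int.from_bytes(os.urandom(32), "big") % N
R = G * k                                                      # sent to the host

# 3. Host reveals n; the wallet must sign with nonce k + n.
k_total = (k + n) % N
R_total = G * k_total

# 4. Host checks the nonce point actually used equals R + n*G, so the wallet
#    never had a free choice of nonce and cannot leak key bits through it.
expected = R + G * n
assert (R_total.x(), R_total.y()) == (expected.x(), expected.y())
```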

Q&A

Q - …

A - There are different types of supply chain attacks. The first is when you swap the device for another one. Here ideally every device should come with a unique key, and you should be able to verify that this key belongs to the vendor. What you can do is ask the device to sign a random message, get the signature and then verify that the signature corresponds to the public key that you are expecting. For example we are sending you a hardware wallet via post and we are also sending you an email with the public key of the device that is coming to you. Then the only way to forge the device is to extract this key somehow. If that is not possible then you can authenticate that this is the device that was sent to you. This is the first option. The second supply chain attack is when the chip is still there but there are additional components or hardware implants. How do you solve that? Coldcard is doing pretty well by using a transparent casing. As for the countermeasures of Trezor and Ledger: Ledger don’t have any, and Trezor has this holographic sticker thing that is not really secure. For us, we are actually planning to do transparent casing as well, but there are ways, if you are thinking about really large amounts, to make sure that the whole device is genuine. I can tell you how. There is such a thing as a physically unclonable function. Let’s say you have the casing itself and it obviously has a bunch of imperfections. You can use these imperfections that appeared due to the manufacturing process as a unique fingerprint of the device. This fingerprint you can use to derive some kind of secret key. Physically unclonable means it is extremely hard to manufacture something even close to that. Also, and there are a few groups working in this direction, even if you drill a tiny hole then you screw up this key. There are a few ways. One approach is where you cover the whole device with a conductive mesh and on the boot of the device you measure the frequency response of this mesh. From these measurements you derive the key. The second one, there was a talk at CCC, you can emit electromagnetic radiation from the inside of the chip. It will be reflected by the boundaries of the casing and the reflections will obviously interfere. Then you measure this interference pattern and from this pattern you derive the key. There is a third one that is hard to implement but might be possible, where you use the speckle pattern of laser light propagating in the casing. The casing is made of glass, you put some laser light in there, it bounces back and forth, reflects and interferes with the imperfections of the material. This is the ultimate level. The problem with these things is that they are still in development and they also raise the price of the device quite a bit. If you drop it it may break. Also in some of these solutions if you increase the temperature there will be a different key. To turn on your device you may need to go to a room where you have exactly 22 degrees.
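
Going back to the first option, vendor-key device authentication, the flow is a simple challenge-response. In this sketch python-ecdsa stands in for whatever signature scheme a vendor actually uses; all names are illustrative.

```python
import os
from ecdsa import SigningKey, SECP256k1

# At the factory: a unique keypair is generated per device, the private key
# stays inside the device, and the vendor tells the buyer the matching public key.
device_key = SigningKey.generate(curve=SECP256k1)
published_pubkey = device_key.get_verifying_key()

# At home: the buyer asks the device to sign a fresh random challenge...
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# ...and checks it against the public key the vendor sent through a separate channel.
assert published_pubkey.verify(signature, challenge)
```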

Q - The electromagnetic field changes?

A - It is not very sensitive to magnetic fields. This electromagnetic reflection and interference is sensitive to temperature because the air pressure and the density and the index of refraction change, so the optical path is changing. With a mesh it is easier, but the problem is that in our case we need to put a display in there somehow. That is the problem. Right now what we decided to do is have a transparent casing and only have the wires that are coming from the chip to the display. Every time you turn it on you verify that it looks good.

Q - It would be cool to have a compass, you have to hold it in a certain direction.

A - In principle there are plenty of ways how you can move away from just a simple PIN to something crazier. For example you have the compass and the orientation or some movements. You make your magic dance and then it unlocks. Also what we discussed in London, this challenge and response. You don’t enter the same PIN all the time. Before you enter the digits your hardware wallet vibrates a few times and then you add the number of vibrations to your digit. Every time your PIN will be different. Different crazy things. Hopefully with the developer board you could do something like that. But for the consumer device you don’t want to go that crazy.

Q - My first question is about hardware but in mobile phones, coming out with secure storage for private keys for example. Samsung is coming out with a device similar to that. In the future perhaps we could see mobile phones with integrated hardware wallets. Based on what we know of these devices right now have you explored any attack vectors to find out if they are secure or not. My second question, right now we have a lot of apps that are mobile wallets. They use secure storage for various data, the private keys are stored securely on the mobile phone, Abra and Bluewallet for example. What are the attack vectors for such apps? Have there been any known breaches?

A - I didn’t look closely into that. I’ve heard a few talks about it. You split your phone, your processor, into two parts: the insecure world and the secure world. If the apps that are running on the secure side are implemented correctly then you are pretty much safe. There are still issues with the shared memory and some side channels. By design the secure storage is not resistant to side channel attacks. By side channel attacks I mean not even the physical side channels like power consumption and stuff. You can check how much memory and how much processing time this process is using. It is not a completely physically isolated storage. It is running on the same microcontroller so you still have some issues. Also if the application that is running on the secure side is screwed then you are probably screwed. If that part has a vulnerability you can exploit it and get into the secure side of the process. I would say that it is much better than just using an app, better to use secure storage than not to use it, but it is not an ultimate solution. I would still say that for everyday expenses you can use your phone but for larger amounts you need a dedicated device, ideally cold storage.

Q - With mobile phones using secure storage for critical data like private keys and mnemonics, have there been any dedicated attempts to deconstruct the app and figure out what the logic is. Maybe not in iOS but in Android devices, have there been any known issues like that?

A - You mean for Bitcoin applications? Regarding that I don’t know. It is still a pretty new field for attackers and for the security investigators. There are no nicely developed toolboxes to do all these kinds of things. I think we will see some attacks in this direction later on as the Bitcoin ecosystem evolves but right now I don’t know about any of them. We just got some interest from the researchers into the hardware wallets and then we had a very productive winter. Maybe the next frontier will be apps.

Q - Can you have an attacker at the assembly level when they assemble the component?

A - Yes. For example, when the microcontroller is talking to the display and the buttons, this is the most obvious one. If you are using Trezor T they have a touchscreen. If you are using Ledger X there is a display controller that is controlling the device. These are separate chips, not the microcontroller itself. They obviously have firmware. With the standard components you cannot verify that they are genuine or not. You talk to it and hope it is ok. On the assembler level I would say that you can flip the display driver to another one that analyzes what is sent to it and then changes it. This is an issue. On the touchscreen controller the same. You can add some functionality to the firmware there, to touch the thing whenever some trigger happens. It is a pretty sophisticated attack I would say. I doubt that we will see it in the wild, at least now. But in principle they are possible. On the other hand vendors can ask the manufacturers of the touch screen drivers or display drivers to use custom firmware that also has a unique private, public keypair. Then you can authenticate the device. The problem is it becomes more expensive and you need to order on large scales to get them interested.

Q - Someone on Twitter said that the next attack on hardware wallets is going to be an insider job from the manufacturer of the hardware wallet. How do you protect that? If you are a manufacturer and one of your staff is an insider doing an inside job how are you going to protect against that?

A - What I said on the previous slide, this is what I was trying to solve. I don’t want to trust the manufacturer at all. I want to be able to use the hardware wallet even if I consider it can be hacked. Some kind of untrusted setup would help.

Q - For the entropy?

A - Entropy provided by the user. If you don’t trust the hardware random generator you can also ask the user “Please keep pressing these buttons. I will tell you what they are and then you can actually verify.” This is the number that I generated by the hardware random generator, you have a 32 byte number let’s say. Then you start hitting the buttons and you have a string that you hash together with this random number and you get the entropy that is produced by both the hardware and by you. You can verify it, you can XOR in your head right?!

Q - No. I could have the computer do a similar game as that right?

A - Yeah. Here we ended up with a random number that is not known by the computer and it is forced by the computer to be random. You can use the same scheme for the key derivation. I just don’t like plugging anything into the computer, that is why I thought about the mind method. On the other hand the communication doesn’t have to be over USB. You can use QR codes or whatever, audio channel, QR is probably better because it is more space constrained. That is the perfect world where you don’t need to rely on the manufacturers or on the insiders and still operate securely.
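
The mixing step itself is just a hash over both contributions, something like the sketch below. The real scheme also has to let the user verify, on the device's own screen, exactly what went into the hash.

```python
import hashlib, os

hw_random  = os.urandom(32)            # from the hardware RNG (possibly untrusted)
user_input = b"7 2 9 1 4 4 8 3 0 5"    # button presses the device echoes back to the user

# Neither side alone controls the result: the final entropy depends on both inputs.
seed_entropy = hashlib.sha256(hw_random + user_input).digest()
```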

Q - You can take quite a few countermeasures. I think Ledger for example, when they release a firmware binary, a bunch of people have to sign off on it and it gets deterministically built as well. It is not like someone can sneak a binary into a specific user’s device.

A - Another thing, you ship everything blank. The only thing that is on the microcontroller is the key from the factory that is verifying the bootloader is fine. Then you can still mess up the bootloader probably or you can make a setup procedure that uploads the firmware and also updates the bootloader. Then if you build deterministically you don’t need to trust anyone. You just build, you compare this against firmware from the vendor, you still need to use firmware from the vendor because it has signatures.

Q - The other thing that scares me is some sort of government order to ship a firmware update to a specific customer plus a gag order. They can’t tell anybody else that they are doing it. They could steal money from one specific user. I think one solution to that could be all firmware updates have to be committed to the blockchain, a hash, in a particular way. Then at least your own computer can see that there is a firmware update and that everybody in the world is seeing that firmware update. You are not getting a special firmware update. That would be done on your computer so it doesn’t work if your computer is also compromised. It will at least prevent someone specifically messing with your hardware wallet.

A - I just don’t like putting data on the blockchain.

Q - You could put it on a website but that website can look different depending on where your IP address is.

Q - If you use pay-to-contract then it doesn’t cost any block space.

Q - This is one UTXO per quarter so you can do it efficiently.

A - Probably the optimal way would be to run a sidechain for that.

Q - I guess the government would work around this by putting a camera in your home and waiting for you to enter your PIN code.

A - Or just taking you to their facilities and asking you to do whatever they want. Different levels of paranoia.

Q - I would want there to be a ban on your neighbors having an Alexa because if they put it next to the walls. Those things are scary but then so is your phone. Anything with a camera can also be used as a microphone indirectly, you can film a Coca Cola bottle through the window and get the audio.

A - There are other interesting side channel attacks on these kinds of general things. You can use the fan on your computer to send the data by speeding it up and slowing it down. It is a pretty slow thing but still your fan can be a communication channel. Then if you are using a standard computer that is plugged to your power plug and you are using an old keyboard then by listening to the signal on this power line in the same building, maybe a few floors away, you can guess what keys were pressed, you have a key logger in your power plug. Crazy stuff. I would definitely recommend watching some Defcon, CCC and Black Hat talks, it is amazing what you can hack and how.

Q - Regarding multisig you said ideally multi device setup would be best. Trezor is working quite well but what if you use Ledger, Trezor and Coldcard for example?

A - Coldcard will hopefully get it soon. Ledger are not fixing this issue so I would not recommend using Ledger in a multisignature setup. The workaround I found is to write a software wallet that asks the Ledger to display at least its public key, its address corresponding to this change or receiving address, and then to have my own do-it-yourself device that can show me the public keys and addresses of all three. It is ok but it is an ugly workaround I would say. The Ledger Bitcoin app is actually open source, so if someone would spend a few weeks putting proper support for multisignature into their app it would be great. But right now, as soon as Coldcard implements multisig I would probably use Trezor and Coldcard. When we release, I think we will use ours together with Trezor and Coldcard. I don’t know any other nice hardware wallets unfortunately.

Q - The PSBT format, it is a BIP, they can implement the xpub field?

A - Hopefully as soon as we have more hardware wallets that support PSBT natively and if we continue working on PSBT then we will have many interesting applications. For example another one that we are working on right now with Trezor is how to implement coinjoins securely on the hardware wallet. It is pretty challenging. Here you have a bunch of inputs from random people and you can fool the hardware wallet by saying “This input is yours and this one is not yours”. Even though both of them are yours. Then you flip it. There are ways with a recent PSBT implementation to steal the money using coinjoin. But hopefully also we will develop a scheme that makes it secure and then ideally all the hardware wallets that support PSBT will be nice enough to work with multisignature and coinjoin and other stuff.

Q - So without PSBT how are these wallets transferring the partially signed transaction?

A - Every vendor has their own format. Ledger are using APDU commands that are normally used in communication with smartcards. They have a very specific format in principle. Trezor is using Google’s protobuf as a communication protocol. They implement something very similar to PSBT, by changing a few bytes you will get PSBT, so it is easy to convert them back and forth. Coldcard is using PSBT. Others I don’t know. Before PSBT we had a bunch of different formats, now we have one more format, but hopefully this one will become the standard. The nice thing is that we have a tool, HWI, that lives in the Bitcoin Core organization on GitHub, a Python tool that converts from PSBT to any of those devices. Using that you can forget about all the other standards, you only use PSBT. Then you will be able to communicate through this universal translator to all of the hardware wallets.

Q - What’s that tool?

A - It is called HWI, hardware wallet interface. It is in the same organization as Bitcoin Core. The nice thing is that it is pretty easy to implement. If you decide to do your own hardware wallet for instance you take this tool, you make a few Python files, you write in Python, everything is nice and convenient and then you have your own wallet integration into this format. No matter what type of communication you use. Pretty nice project I would say.

Q - You went through some of the hardware issues, most of them if not all were solved by firmware updates. What would have to happen for them not to be able to be resolved by firmware updates?

A - With the Cortex M3 bug it was a problem from the manufacturer. Basically Trezor worked around it for their particular use case. It is still an issue. Maybe it was fixed by the manufacturer afterwards, but you can’t update the hardware itself. It was worked around in such a way that at least you can’t extract secrets from the Trezor. They had to place the secret in a certain place such that when you log the device the bootloader overrides this part during the firmware updates and things like that. It was really a workaround. As far as I know there is an attack that Ledger discovered on all Cortex M cores that is not really fixable. I think the Trezor guys will still fix it somehow in software, but unfortunately Ledger did not disclose the exact details of the attack. So no one really knows what it is about. It is hard to fix something that people don’t give you the information for. In principle I would say most of Trezor’s problems come from the fact that they are using an application microcontroller. These are not designed to protect against all kinds of hardware attacks. They try really hard and do really well at protecting against all of them, but I feel like there will be more and more in the upcoming years just because of the architecture limitations. This is why I said my dream is to make a secure thing that has an internal power reference, anti tamper mechanisms and is still open source and used by all kinds of hardware wallets. You can verify it by putting it into the X-ray machine and checking the semiconductors that are there, but it is not going to happen in the next couple of years. It is more like a goal for a decade. But Bitcoin is changing the world so we can also change the semiconductor industry.

Hardware wallets and Lightning

Q - You did a presentation on the role of hardware wallets in Lightning. I’m trying to understand how hardware wallets will be used with regards to end users and also routing nodes. I’m struggling to understand how routing nodes are going to use hardware wallets.

A - The problem is if you are a routing node you have open channels and you are routing transactions. This means that someone offered you an HTLC that gives you some money if you give back the preimage of a hash. Then you offer a slightly smaller amount of money to another node if he gives you the preimage. If he gives you the preimage you can pass it along and then you earn some fees. For the hardware wallet it is important that you get these pairs together such that you can verify them on the hardware itself; you don’t want to click the button all the time, “Ok yes I want to earn 1 satoshi”, “Yes I want to earn another satoshi”. Instead it should happen automatically as soon as you have two transactions that in total increase the amount of funds of the user. In principle it doesn’t look complicated, but it becomes harder if you think about what can happen to the channel. We have me as a hardware wallet and as a node, we have two guys, and I have open channels with them. The goal of the hardware wallet is that if my node, the computer, is compromised you still have the funds secure. Say this node closes the channel unilaterally. Then we wait for one day and we don’t tell this to the hardware wallet. The hardware wallet still thinks there is a channel open and you can get channel updates. You update the channel increasing the funds of the user on this non-existing channel and then you steal the money from the other channel. The hardware wallet will sign. The problem here is that hardware wallets for Lightning need to monitor the blockchain. They need to get new blocks every 10 minutes or whatever, check the block, parse the block and see that there are no channel closing transactions for the channels that they have. They need to store a database, either on the node encrypted and authenticated, or on the SD card. Plus you need a real time clock, because how do you know that the timelock didn’t pass? Otherwise you can just tell the hardware wallet “No, the 1 day didn’t pass yet”. You need a realtime clock. You need a lot of other stuff to make it happen.
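
The automatic-signing rule for routed payments can be stated very simply; here is a hedged sketch of just the policy. The hard parts, real channel-state tracking, timelocks and on-chain monitoring, are exactly what is left out.

```python
def approve_forward(incoming_htlc_msat: int, outgoing_htlc_msat: int) -> bool:
    # Sign the pair of channel updates automatically only if, taken together,
    # they increase the user's funds (i.e. the node earns a routing fee).
    return incoming_htlc_msat > outgoing_htlc_msat

assert approve_forward(1_000_500, 1_000_000)        # earns 500 msat -> auto-sign
assert not approve_forward(1_000_000, 1_000_500)    # would lose money -> refuse or ask the user
```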

Q - The block header would be enough right?

A - Imagine you start delaying the blocks. Instead of sending them every 10 minutes you send them every 20 minutes. Just delay the blocks. The hardware wallet needs to know the time, it needs a real time clock, because in the block you have the timestamp. If you don’t have a real time clock you can’t verify that the block coming to you is the current block and isn’t from 1 day ago. Then another problem: what happens if you have the HTLC update and when you are trying to push another one you get disconnected, then this channel is closed and you can steal one routing payment. The node is still hacked. There are plenty of problems in here. It looks like in general the hardware wallets that we currently have will still work and make your private keys for Lightning much more secure than when you store them on the node, but it is not perfect security. To have perfect security of your keys, assuming the hardware wallet is not hacked, you will also need to have some kind of backup channel or watchtowers and other stuff. The problem with Lightning is it is very interactive and has time constraints. But in principle having the secret keys on the hardware wallet and verifying the transactions there is already much better.

Q - It seems to me to have Lightning on a hardware wallet, the hardware wallet needs to be a full node. It has to monitor all these blocks and check that they are valid.

Q - It has to be one with full access to the internet because otherwise you are just performing an eclipse attack. You can do all sorts of things with eclipse attacks including making it look like the Bitcoin difficulty is going down. You have to trust the computer you are connected to. Let’s say you have a little node at home connected to a hardware wallet, you go on vacation and somebody breaks into your house, it would be nice if there’s no way for them to take your money without unplugging it and turning it off. Just that I think would be nice.

A - It would also be nice to have your Lightning node in the cloud without any secrets. You have public IP, you can do it however you want and then you have your hardware wallet at home that is connected to this node in the cloud. You don’t put your private keys into the cloud of Digital Ocean or Amazon or Google.

Q - You are still trusting them a little bit.

A - In Lightning you can’t completely remove the trust in your computer but you can try to diversify the risks a bit.

Q - Is it fast enough?

A - Depending on how many transactions you are routing. If you are trying to route 1000 transactions per second, that probably never happens in Lightning through one node, then you might have problems. But in principle I think it is 30 milliseconds per signature so 100 easily.

Q - Just plug in multiple hardware wallets.

A - Alternatively you can have HSMs, this rack of hardware wallets that are processing that. You can also have a load balancer, they all need to agree on the state of the channels. Or you can run multiple nodes, this is also ok. I don’t know how many transactions you are routing per second? 70,000 in 3 months? So easily doable with a hardware wallet if they don’t come in within 1 second.

Q - How durable are hardware wallets? With a normal hardware wallet you might sign a couple of transactions. But now you are talking about signing a lot of transactions. Is it going to burn out?

A - I don’t think so. I didn’t test them intensively, but I did run our chip through a bunch of transactions. As long as we are not doing heavy multiparty computation it is ok. One signature per second is safe. The hottest part in the hardware wallet right now is the display actually. Signing is not a big deal there.

Q - It is a great problem to have. The chip gets so hot that you are routing so many Lightning payments.

A - If you earn 1 satoshi per payment then it is a pretty effective miner I would say.

Q - That would mean the Lightning wallet in Trezor would be a custodial node?

A - Not necessarily. You can still run your own node and Trezor plugs into your computer. Why do you need to go custodial? It has to be hot, permanently connected to the computer. The problem there is if you leave your hardware wallet unattended in the current setup you keep it unlocked because you entered the PIN, so people can steal your funds. For that kind of thing you need to think about auto-locking and keeping the transactions going.

Q - It is like the coinjoin right? With the coinjoin you could leave it on at home but it will only sign transactions that make it richer, otherwise it will turn itself off.

A - Yeah. Then we don’t need to rely on the centralized service and pay the fees, we can actually earn something. Alternatively a coinjoin transaction doesn’t happen instantly. Right now if you try to do it with a hardware wallet you will probably need to confirm a few times. First you need to commit, second you need to sign, then if the transaction fails you need to sign again a slightly different transaction. Another approach, you commit once to a type of transaction, not a transaction itself but a transaction that has to have these inputs and these outputs. I don’t care about the other stuff. You confirm once, the rest happens automatically. I would say coinjoin support will happen pretty soon in Trezor and Wasabi, Wasabi is already integrating hardware wallets but you can’t use it with coinjoins. The next step would be to make it happen. Not in two weeks but maybe in a few months.

Q - LND uses macaroons. Do you know much about macaroons?

A - Not really, I normally work with c-lightning.

Q - Would you set LND up such that if you are signing transactions you’d require a hardware wallet but if you are doing read only functions you wouldn’t.

A - What kind of read only functions? In Lightning there are three types of private keys. The first one that is controlling onchain transactions, another one that is controlling channel updates and a third one that is controlling your node ID. I would say keeping the node ID on the Lightning node is fine. Then you can broadcast your channels or connect to other nodes but you cannot open channels, you cannot route payments, you can’t do anything valuable. You can only watch the network and observe what is happening there.

Q - It is still useful for things like watchtowers though. You don’t necessarily have to have the ability to sign transactions.

A - For a watchtower do you even need to be a Lightning node? You can just sit and watch the blockchain. A service built on top of bitcoind or something.

Q - I can’t remember which presentation but you said C has a bunch of built in functions, many of them are not secure. Using C, isn’t that just a challenge of training developers to only use the secure functions and not using the insecure ones or is there a problem with using C generally and people should use other languages such as Rust?

A - C has a bunch of functions that are not secure but you can use secure versions of them. If you know what you are doing you are perfectly fine working in C. You can easily make mistakes there. You can use some tools like static code analyzers or cover everything with tests, also hire smart developers that are careful enough and have experience in security. It is doable. Still most of the embedded devices are written in C. Even the Java virtual machine that is inside the smart card was written in C. You can make secure firmware in C. It is just much easier if you use a memory safe language. You decrease the number of potential bugs, you make the overall flow easier, you spend less time on that. In that sense Trezor decided to go with MicroPython. It is a pretty good choice. I still think that it is not very well developed so you may have some issues. Probably better than C. Coldcard is doing the same, I think that they even use Trezor’s MicroPython bindings to Trezor’s crypto libraries. By the way be careful with that. This means that security updates on the Trezor crypto libraries will appear on the Coldcard a little bit later, they are a fork. Then what is interesting right now, embedded Rust is super interesting. Folks were tired of the memory consumption and leaks in Firefox so they invented a new language. But they also designed it like a system programming language for security critical applications. Now it is going to embedded development so you can start playing on a few boards that they support. Around 10 but you can port to any board. You can have both the C parts of the code that you normally get from the manufacturer, the libraries and APIs. And you can write Rust parts, they all work together. There is even an OS that allows you to write everything in C but you can be sure that each process is separated from the others. Even if you have bugs in the SD card driver or system driver you don’t expose any secrets because you have a supervisor of the whole flow.

Q - McAfee’s super device, is it secure? McAfee said it is unhackable.

A - I didn’t look at it. It was hacked the next day or so.

Q - You can run Doom on it but is it really hacked? Can you extract private keys out of it?

A - I don’t know. We have a bunch of new game consoles. McAfee’s Doom, Ledger’s Snake. You can also upload custom firmware on Trezor and play whatever game you want there.

Q - Wasn’t the McAfee problem also that it was just a phone number password. You could brute force and start taking money from random people.

A - Fortunately I didn’t look into this product and I feel I don’t need to spend time on that.

Q - Security consultants usually don’t consider this a major player in the market of hardware wallets?

A - There are also other hardware wallets that are either based on Android phones or something like Raspberry Pi and I don’t think they are really secure. The only way for instance to make a reasonable hardware wallet on Raspberry Pi is if you get rid of the whole operating system and program on the bare metal, the powerful microcontroller that it has. Then you have a pretty nice thing that is super fast, it is hard to glitch with precise timing because it is fast and has multithreading. Still a reasonable choice but don’t run a full OS on there, it is too much.

Q - There is Ledger and there is Trezor and there is other stuff. The other stuff, there isn’t really anything coming up, Coldcard is coming up. KeepKey is nowhere in terms of traffic and sales.

A - Who has hardware wallets that are not Trezor or Ledger? Coldcard? The market is mostly covered by Trezors and Ledgers. I personally have two Ledgers and two Trezors and I have Digital Bitbox but I don’t use any of them for Bitcoin storage. I just play with them and like to see how they look inside.

Q - Would you like to discuss CryptoAdvance?

A - Quickly to say, we are doing not just a hardware wallet but hopefully a whole platform. One quote I really liked from one of the talks is “Your security model is not my security model” but yours is probably fine. If we can make a tool that is flexible enough to cover all craziness of Bitcoin hodlers so they can make something custom for them then it would be great. That is what we are focusing on as well as a normal hardware wallet because normal people also need to store Bitcoin somewhere. And we are currently in the fundraising phase, we are here today and tomorrow. If you are interested in developing something on top of our platform or any other, talking about hardware security or about our project, talk to us. I didn’t mean to say that everyone is bad and we are good because we will also fail at some points and have certain attacks. I hope in two years I will come here again and give a 2 hour presentation about our hacks and how we fix them.

\ No newline at end of file diff --git a/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript/index.html b/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript/index.html index 174d2af08e..486045eb4f 100644 --- a/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript/index.html +++ b/london-bitcoin-devs/2020-02-04-andrew-poelstra-miniscript/index.html @@ -12,4 +12,4 @@ Andrew Poelstra

Date: February 4, 2020

Transcript By: Michael Folkson

Tags: Miniscript

Category: Meetup

Media: -https://www.youtube.com/watch?v=_v1lECxNDiM

Bitcoin Script to Miniscript

London Bitcoin Devs

https://twitter.com/kanzure/status/1237071807699718146

Slides: https://www.dropbox.com/s/au6u6rczljdc86y/Andrew%20Poelstra%20-%20Miniscript.pdf?dl=0

Introduction

As Michael mentioned I’ve got like two hours. Hopefully I won’t get close to that but you guys are welcome to interrupt and ask questions. I’ve given a couple of talks about Miniscript usually in like 20 or 30 minute time slots where I try to give a high level overview of what this project is, why should you use it and why it makes Bitcoin script accessible in ways that it hasn’t been. In particular you can do all sorts of cool things that in principle Bitcoin script can do but in practice you cannot for various algorithmic, practical reasons. Bitcoin script is just not easy to reason about, it is not easy to develop with, it is not fragile but it has got a lot of sharp edges. The opposite of fragile, you will feel very fragile trying to use Bitcoin script. Since I’ve got a whole pile of time and a reasonably technical audience I am going to try to do something different here. I’m going to talk in a fair bit of detail about some of the issues with Bitcoin script and some of the motivations for Miniscript. Then I’m going to try to build up some of the detailed design of Miniscript which is essentially a reinterpretation of a subset of Bitcoin script that is structured in a way that lets you do all sorts of analysis. We’re going to try to build up the design and I’m going to talk about some of the open problems with Miniscript. There’s maybe an irony here that there are so many difficult things with Miniscript given how much simpler it is than script. It highlights how impossible it is to work with script to begin with. Even after all of this work we’ve put in to structuring things and getting all sorts of nice easy algorithms, there are still a lot of natural things to do that are quite difficult. In particular around dealing with fee estimation and with malleability. So let’s get started.

Script

First I’m going to spend a few slides giving an overview of what Bitcoin script is and how it is structured. As many of you probably know Bitcoin has a scripting system that allows you to express complicated predicates. You can have coins controlled not only by single keys but by multiple keys. You can have multisignatures, threshold signatures, hash preimage type things, you can have timelocks. This is how Lightning HTLCs are created. You can do weird things like have bounties for finding hash collisions. Peter Todd put a few of these up I think in 2013. There are a whole bunch of Bitcoins you can take if you find a hash collision. The one for SHA-1 has been taken but there are ones up for the other Bitcoin hashes, for SHA-2 and the combo hashes that you can still take if you can find a collision. The way you can do this is with a thing called Bitcoin script. Bitcoin script is a stack based assembly language that is very similar to Forth if any of you are 70 years old you might have used Forth before. It is a very primitive, very frustrating language. Bitcoin took a bunch of the specific frustrating parts of it and not the rest. A quick overview. Every Bitcoin script opcode is a single byte. There are 256 of them, they are allocated as follows. You have 78 for pushing data of arbitrary size. All your stack elements are these byte vectors. The first 76 opcodes are just “push this many bytes onto the stack.” There are a couple of more general ones. We also have a bunch of special case opcodes for pushing small numbers. In particular, -1 through 16 have their own opcodes. Then for future extensibility we have a bunch of no-ops. For historical reasons we have 75 opcodes that are synonyms for what we call OP_RETURN, just fail the script immediately. For even sillier historical reasons we have 17 opcodes that will fail your script even if you don’t execute them. You can have IF branches that aren’t accessed but if there are some certain opcodes in there it will fail your script. That’s probably a historical mistake in the Bitcoin Core script interpreter that we can trace back to 2010-2012 which is when most of this came to be. Finally we have 57 “real” opcodes. These are the core of Bitcoin script.

Of these real opcodes we have a lot of things you might expect. We have 4 control-flow ops. Those are IF, ELSE, ENDIF and we also have a NOTIF which will save you a byte in some cases. Basically it is the same as IF except it will execute if you pass in a zero rather than executing if you pass in a non-zero. We have a few oddball opcodes, let me quickly cover those. We have OP_VERIFY which will interpret the data as a boolean. If it is true then it passes, that’s great. If it is false it fails the script. IFDUP, if you pass it a true it will duplicate it, if you pass it a false it will just leave it. DEPTH and SIZE, those are interesting. DEPTH will tell you the current number of elements on the stack, SIZE will tell you the number of bytes in the most recent element. These are interesting, I’ll explain in the next slide, mainly because they make analysis very difficult. They put constraints on aspects of your stack that otherwise would not be constrained. When you are trying to do general purpose script analysis these opcodes will get you in trouble. We have OP_CODESEPARATOR, it is just weird. It was there originally to enable a form of delegation where after the fact you could decide on a different public key that you wanted to allow spending with. That never worked. CODESEPARATOR does a very technical thing of unclear utility. We have 15 stack manipulation opcodes. These basically rearrange things on your stack. You’ve got a bunch of items, there are a whole bunch of things that will reorder, duplicate certain ones or duplicate certain combinations of them or swap them, all sorts of crazy stuff like that. We also have a couple of altstack opcodes. We have one that will move the top stack element to the alternate stack, one that will bring stuff back. Those of you who are familiar with the charlatan Craig Wright may be aware that you can design a Turing machine using a two stack finite state automaton or something like that. It is plausible that the alternative stack in Bitcoin was inspired by this but actually this provides you no extra power. The only things you can do with the altstack in Bitcoin are move things on to it and move things off of it. Sometimes this lets you manipulate the stack in a slightly more efficient way. But all that is is a staging ground. You can’t do computations there, you can’t manipulate the altstack directly, you can’t do anything fun. We have 6 different opcodes that compare equality. I’ll talk a bit about those in the next slide. The reason that there are 6 of them is to make analysis hard. We have 16 numerical opcodes. These do things like addition, subtraction, boolean comparisons which is kind of weird. There are special purpose opcodes for incrementing and for decrementing. There used to be opcodes for multiplication, division and concatenation and a bunch of other stuff. Those have turned into the fail even if not executed opcodes. They were disabled many years ago. The way that they are disabled causes them to have this weird failing behavior. There are 5 hashes in Bitcoin which are RIPEMD160, SHA1, SHA2 and there are various combinations of these. There is HASH256 which means do SHA2 twice, there is HASH160 which means you do SHA2 and then RIPEMD160. Finally we have the CHECKSIG opcodes which are probably the most well known and most important ones and also the most complicated ones. We have CHECKSIG which checks a single signature on the transaction, we have CHECKMULTISIG which checks potentially many signatures on the current transaction.

Some miscellaneous information about script, some of which I’ve hinted at. We have a whole pile of limits which I’ll talk about in a second. Stack elements are just raw byte strings, there’s no typing here, there’s nothing interesting. They are just byte strings that you can create in various ways. The maximum size is 520 bytes. This was a limit set by Satoshi we think because you can put a 4000 bit RSA key into 520 bytes. Bitcoin does not and never has supported RSA keys but Satoshi did take a lot of things from the OpenSSL API and we think this might have been one of them. As far as I’m aware there’s no use for stack elements that are larger… I guess DER signatures are 75 bytes but you can go up to 520. There are many interpretations of these raw byte strings though. I said we have no types but actually every opcode has a notion of type-ness in it. A lot of the hash ops and the stack manipulation things just treat them as raw byte strings, that’s great. But all the numeric opcodes treat their things as numbers. In Bitcoin this means specifically up to 32 bit signed magnitude numbers. Signed magnitude means the top bit. If it is 1 you have a negative number, if it is 0 you have a positive number. This means you have zero and negative zero for example, two distinct things. You also have a whole bunch of non-canonical encodings. You can put a bunch of zero padding in to all your numbers and those will continue to be valid. But if you exceed 4 bytes that is not a number. For example you can add sufficiently large numbers and you will get something that is too big and so it is no longer a number. If you keep using OP_ADD and you’re not careful about it then you might overflow and then the next OP_ADD is going to fail your script. If you’re trying to reason about script you need to know these exact rules. Some things, the IF statements, OP_VERIFY, a few others but oddly not the boolean AND and OR will interpret these things as booleans. Booleans are similar to numbers. If you encode zero or negative zero that counts as false. If you encode anything else that is considered true. The difference between booleans and numbers is that booleans can be arbitrarily sized. You can make a negative zero that is like 1000 bytes long. This will not be interpreted as zero by any of the numeric opcodes, it will fail your script, but it will be interpreted as false by the boolean checks. Something to be aware of. If you are trying to reason about script you need to know these rules. Then finally we have the CHECKSIG and CHECKMULTISIG and those are the closest to doing something sane. They interpret their arguments as public keys or signatures. The public keys are supposed to be well formed, the signatures are supposed to be either well formed or the empty string which is how you indicate no signature. These are not consensus rules, you can actually put whatever garbage you want in there and the opcode will simply fail. There is a comparison to C if any of you guys have done a lot of C programming and tried to reason about it or tried to convince the C compiler to reason about your code instead of doing insane and wrong things. You may notice that the literal zero if you type this into C source code, this is a valid value for literally every single built in type. Zero is an integer of all sizes, zero is a floating point, zero is a pointer to any arbitrary object, zero is an enum, zero is everything. There is no way to tell the compiler not to interpret zero that way.
Of course the C standard library and POSIX and everything else uses a constant zero as an error return code about 40% of the time so you need to deal with this fact. This infuriating aspect of C was carried over to Bitcoin script in the form of the empty string being a valid instance of every single different way of interpreting stack elements. That’s just the script semantics. Those are a bunch of weird edge cases and difficulties in reasoning about the script semantics and trying to convince yourself that a given arbitrary script is doing what you expect.
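
To make those numeric rules concrete, here is a minimal Python sketch (an illustration only, not Bitcoin Core’s code) of the signed-magnitude encoding, showing the canonical zero, the "negative zero" and the 4-byte cap:

def encode_num(n: int) -> bytes:
    # Script numbers are little-endian signed-magnitude; zero is the empty string.
    if n == 0:
        return b""
    neg, v, out = n < 0, abs(n), bytearray()
    while v:
        out.append(v & 0xFF)
        v >>= 8
    if out[-1] & 0x80:
        # The top bit would be read as the sign bit, so add a padding byte.
        out.append(0x80 if neg else 0x00)
    elif neg:
        out[-1] |= 0x80
    return bytes(out)

def decode_num(b: bytes) -> int:
    if len(b) > 4:
        raise ValueError("more than 4 bytes: the numeric opcodes would fail here")
    if not b:
        return 0
    neg = bool(b[-1] & 0x80)
    magnitude = int.from_bytes(b[:-1] + bytes([b[-1] & 0x7F]), "little")
    return -magnitude if neg else magnitude

assert decode_num(b"") == 0          # canonical zero
assert decode_num(b"\x80") == 0      # "negative zero" also decodes to zero
assert encode_num(-1) == b"\x81"
assert decode_num(b"\x01\x00\x00\x00") == 1   # zero-padded, non-canonical but still a number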

Additionally there are a whole pile of independent and arbitrary limits in Bitcoin script. As I mentioned stack elements can only be 520 bytes. Your total script size can only be 10,000 bytes. You can only have 201 non-push opcodes. A non-push opcode is something a little bit weird. A push opcode includes all the things you expect to be pushes like pushing arbitrary data onto the stack, pushing a number onto the stack. It also includes a couple of opcodes that are not pushes but they sort of got caught in the range check, just comparing the byte values. You need to know which ones those are. Also in CHECKMULTISIG… This 201 opcode limit actually only applies to the whole script regardless of what gets executed. You can have all sorts of IF branches and you count all the opcodes. Except if you execute a CHECKMULTISIG then all of the pubkeys in that CHECKMULTISIG count as non-push opcodes even though they are pushes, but only if they are executed. There’s a very technical rule that is maximally difficult to verify that as far as I know nobody was aware of until we did this Miniscript development and we found this lurking in the Satoshi script interpreter. In our Miniscript compiler we have to consider that, that is a resource limit we sometimes hit in trying to work with Miniscript. There is a sigops limit. This is something a lot of you are familiar with. This is what restricts your ability to do enormous CHECKMULTISIGs and other interesting things. There are a couple of different sigops limits. I think the only consensus one is the one that limits a given block to have 80,000 sigops in it. But there are policy limits for how many sigops you can have in a given transaction. These are different for raw script and for P2SH and for SegWit. Also there is a thousand element limit, you are only allowed to have a thousand elements on the stack. It is basically impossible to get a thousand elements on the stack so it doesn’t really matter. But just to make sure your optimization is constrained in as many dimensions as possible. That’s an additional thing. For what it is worth, in BIP 342 which is Tapscript we fix a whole pile of these. In particular because having so many independent limits is making a lot of our Miniscript work more difficult. We clean this whole mess up. We kept the 1000 stack element limit because it provides some useful denial of service protection and it is so far away from anything we know how to hit without doing really contrived stuff. I think we got rid of the script size limit because there are already limits on transactions and blocks which is great. We kept the stack element size limit just because that’s not doing anything, all of our stack elements are limited in practice anyway. We combined the opcode limit and the sigop limit. We got rid of CHECKMULTISIG. The most difficult thing to keep in mind when you’re constraining your scripts is now a single one dimensional thing. Now absent extreme things that you might be doing that are all outside the scope of Miniscript, you actually just have a one dimensional limit. If you are trying to work within these limits it is much easier in Tapscript which will be SegWit v1 I hope, knock knock on wood.

A couple of particular opcodes that I hate. Everyone else that I work with has a different set of opcodes that they hate. Everyone who works with script has their own set of opcodes, these are the ones that I hate. PICK and ROLL, first of all. What these opcodes do is they take a number from the top of the stack, one of them will copy the element that many positions back to the top, the other one will move the element that many positions back to the top. Now you are taking one of these arbitrarily typed objects, who knows where you got it. Now you’re interpreting it as an index into your stack. If you are trying to reason about things now you have stack indices as a new kind of thing that is happening in your script. So when you’re trying to constrain what possible things might happen that is a whole other dimension rolled up in there. Numerics overflowing I mentioned. The CHECKMULTISIG business I mentioned. DEPTH and SIZE I mentioned. These are simple opcodes but they are just extra things to think about. If you are trying to reason about what the script interpreter is doing it would be nice if you could take as many of the internals of that interpreter for granted as possible and they weren’t available for introspection to your users. But they are. These opcodes make those available for introspection so now they are mixed in with your reasoning about what values might be possible. IFDUP does a similar annoying thing. You can use OP_DUP and OP_IF together to get an IFDUP but this opcode just happens to be the only one that will conditionally change the depth of your stack depending on whether what you give it is considered true or false. When you’re trying to keep track of your stack depth when you’re reasoning about scripts it gets in your way.
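
As a rough sketch of why these two bother me, this is approximately what PICK and ROLL do (Python, top of stack at the end of the list, glossing over the signed-magnitude decoding above): the index itself comes off the stack, so stack positions become user-controlled data.

def op_pick(stack: list) -> None:
    n = int.from_bytes(stack.pop(), "little")   # index is read from the stack itself
    stack.append(stack[-1 - n])                 # copy the element n positions back to the top

def op_roll(stack: list) -> None:
    n = int.from_bytes(stack.pop(), "little")
    stack.append(stack.pop(-1 - n))             # move the element n positions back to the top

s = [b"key", b"sig", b"junk", b"\x02"]
op_pick(s)                                      # copies b"key", two positions down, to the top
assert s == [b"key", b"sig", b"junk", b"key"]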

Q - What are these used for?

A - We use IFDUP in Miniscript quite a bit. Do I have an internet connection on this computer? Later in the talk I might go to the Miniscript website which has some detailed justification for some of these things. IFDUP gets used in Miniscript. None of these other things are useful for Miniscript. Let me give a couple of justifications. PICK and ROLL I have used before, only in demonstration of weird script things I guess. I can imagine cases where you want to use PICK and ROLL to manipulate your stack. If you have a whole bunch of stack elements in a row and say you are trying to verify a signature that’s really deep in your stack. You can use OP_PICK to bring that public key and signature to the top of your stack, verify them and then move on with your life without trying to swap them out a whole bunch of times. OP_DEPTH and OP_SIZE do actually have a bunch of uses. OP_DEPTH is kind of cool. We use this in Liquid as a way of distinguishing between two multisig branches. We have the standard 11-of-15 functionary quorum check and we also have a 2-of-3 emergency withdrawal check. You’re allowed to use either one of these. The emergency withdrawal has a timelock on it. This is how Liquid coins are protected. We’re using OP_DEPTH to count the number of signatures provided. If you give the script 11 signatures it knows this is a functionary signature, let’s use that branch. If you only give it 2 signatures it knows to use the other branch. That is an example of what OP_DEPTH is for. OP_SIZE has a bunch of cool applications. One is if you use OP_SIZE on a ECDSA signature to constrain the size to be very small, this forces the user to use a specific nonce that is known to have a bunch of zero bytes in it that has a known discrete log, which is a multiplicative inverse of 2 in your secret key field. That means that by signing in such a way, by producing a small enough signature you are revealing your secret key. So you can use OP_SIZE to force revelation of a secret key. You can use OP_SIZE to very efficiently force a value to be either zero or one through a cool trick we discovered in developing Miniscript. If you use OP_SIZE and then OP_EQUALVERIFY in a row that will pass the zero or one but it will fail your script otherwise. Because the size of zero is zero, that’s the empty string, the size of one is one, but no other value is equal to its own size and so the EQUALVERIFY will abort. In an early iteration of Miniscript before we realized that we had to depend on a lot of policy rules for non-malleability, we were using SIZE, EQUALVERIFY before every one of our IF statements. Because otherwise a third party could change one of our TRUEs from one to some arbitrary large TRUE value, change the size of our witnesses and wreck our fee rate. We don’t do that now because we’ve had to change strategies because there are other places where we had similar malleability. Ultimately we weren’t able to efficiently deal with it in script. But if you really want consensus guaranteed non-malleability, not policy minimalist guarantees on malleability OP_SIZE is your friend. You are going to use that a lot. One more thing OP_SIZE is for.

Q - They are used in Lightning scripts.

A - Yes exactly. In Lightning and in other atomic swap protocols you want your hash preimages to be constrained to 32 bytes. The reason being, this is kind of a footgun reason, basically if you are interoperating with other blockchains, you want to interoperate with future versions of Bitcoin or whatever, a concern is that Bitcoin allows hash preimages or anything up to 520 bytes on your stack. But if you are trying to interoperate with another system that doesn’t have such a limit, say whatever your offline Lightning implementation protocol is, maybe some bad guy creates a hash preimage that is larger than 520 bytes, it passes all the offline checks but then you try to use it on the blockchain and you get rejected because it is too large. Similarly if you are trying to do a cross-chain atomic swap and one of the chains has a different limit than the other. Then you can do the same thing. You can create a hash preimage that works on one chain but doesn’t work on the other. Now you can break the atomic swap protocol. You want to constrain your hash preimages to being 32 bytes. Everybody supports 32 byte preimages, that’s enough entropy that you’re safe for whatever entropy requirements you have. And OP_SIZE is a way to enforce this. We’ll see this later if I can get to the Miniscript site and look at some specific fragments. We have a 32 OP_SIZE, 32 EQUALVERIFY confirming that we have 32 byte things.
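
Here is a small Python sketch of the two OP_SIZE tricks just described (a hedged approximation of the semantics, not a script interpreter): the SIZE EQUALVERIFY zero-or-one check and the 32-byte preimage check.

import hashlib

def size_equalverify_passes(elem: bytes) -> bool:
    # OP_SIZE pushes the element's length as a script number (empty string for 0,
    # a single byte for small lengths); OP_EQUALVERIFY compares it to the element.
    size = b"" if not elem else bytes([len(elem)])   # fine for elements under 128 bytes
    return size == elem

assert size_equalverify_passes(b"")                  # canonical false
assert size_equalverify_passes(b"\x01")              # canonical true
assert not size_equalverify_passes(b"\x07")          # any other "true" aborts the script
assert not size_equalverify_passes(b"\x00" * 3)      # a padded "false" aborts too

def hashlock_passes(preimage: bytes, h: bytes) -> bool:
    # SIZE <32> EQUALVERIFY SHA256 <h> EQUAL: the preimage is pinned to exactly 32 bytes.
    return len(preimage) == 32 and hashlib.sha256(preimage).digest() == h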

Q - I’m assuming if it had no use you could disable it?

A - Right. The only thing here which we disabled in Tapscript is CHECKMULTISIG ironically and that sounds like the most useful but in some sense has the worst complexity to value ratio. There are other ways to get what CHECKMULTISIG gives you that doesn’t have the limitations and doesn’t have the inefficiency of validation that CHECKMULTISIG does. I will get into that a bit later on.

Any more questions about this slide? I have one more slide whining about script and then I’ll get into the real talk. I’ve already spent 20 minutes complaining about script. I’m sorry for boring you guys but this helps me.

Another random list of complaints about script and surprising things. As I mentioned BOOLAND and BOOLOR despite having bool in the name do not take booleans, they take numbers. If you put something larger than 4 bytes into these opcodes they will fail your script. Pieter Wuille claimed he didn’t know this until I pointed it out to him. No matter how long you’ve been doing this you get burned by weird, surprising stuff like this. I’ve been saying numbers are maximum 4 bytes. Actually CHECKSEQUENCEVERIFY and CHECKLOCKTIMEVERIFY have their own distinct numeric type that can be up to 5 bytes because 32 bit signed magnitude numbers can only go up to 2^31 which doesn’t cover very many dates. I think they run out in 2038 as many of us know for other reasons. In order to get some more range we had to allow an extra bit which meant allowing an extra byte. These values can be larger than any other number. So there are actually two distinct numeric types as well as the boolean. I mentioned the CHECKMULTISIG opcode counting weirdness. One thing that burns you if you don’t unit test your stuff is the numeric values of the push number opcodes are really weird. It is hex 50 plus whatever value you’re doing, negative 1 through 16. Except zero, zero is not 0x50. If you use 0x50 that will fail your script. Zero is 0x00. You need to special case zero. You might naively think you can just take your number and add decimal 80, 0x50, to it. Say you were trying to support SegWit, where the only version number being used in the wild is zero but every other version number has this 0x50 added to it. This is a very easy mistake to make that you wouldn’t notice with real world data. But there we go, that’s life. Zero is zero, everything else is what you expect plus hex 50.
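
A tiny Python sketch of that mapping, with the zero special case:

def small_num_opcode(n: int) -> int:
    # -1 and 1..16 live at 0x50 + n; zero has its own opcode, OP_0 = 0x00.
    # 0x50 itself is OP_RESERVED and fails the script if executed.
    if n == 0:
        return 0x00
    if n == -1 or 1 <= n <= 16:
        return 0x50 + n        # OP_1NEGATE = 0x4f, OP_1..OP_16 = 0x51..0x60
    raise ValueError("anything else needs a data push")

assert small_num_opcode(0) == 0x00
assert small_num_opcode(-1) == 0x4f
assert small_num_opcode(16) == 0x60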

Q - What opcode is 50?

A - 50 is OP_RESERVED. I think it was always called OP_RESERVED. Unlike a lot of the other reserved opcodes that didn’t use to have a name and then had to be disabled because they triggered undefined behavior, I think OP_RESERVED has always been reserved. It has always been reserved because that is where you would expect a zero to go. If you weren’t going to put zero where it should be then the safest thing to put there would be to fail the script.

Q - What is the purpose of the 0x50 element in SegWit version 1 if you are putting it into the witness?

A - In SegWit your witness programs have a numeric opcode, a numeric push indicating your version number followed by your 32 byte program.

Q - In Taproot you can have in the witness the byte 50 that is some annex or something? There is a description, you shouldn’t use it, it is for future use. What future use are you planning for that?

A - The question is about the annex field in SegWit v1, Tapscript signatures. Let me see if I can remember the justification… Let me just give a general answer because I’m forgetting exactly how it works. The idea is that the annex is covered by the signature. The idea is that you can add arbitrary extra data that will be covered by the signature. In some future version, in something locktime related, this could be useful for adding new opcodes that would conditionally constrain your signatures in some way. As of now there is no use for it. It is basically this extra field that gets covered by the signature and that may in future do something interesting for other future opcodes that don’t exist yet. We are putting it in there now so that we won’t require a hard fork to do it later. Right now it is constrained by policy to always be empty because there is no value. I wonder if the number 0x50 is used there. I think that might just be a coincidence, maybe not. It doesn’t sound like a coincidence.

Q - It is to avoid some collision with a SegWit version number or something like this?

A - Yeah it is not a coincidence but the exact means of where the collision was imagined to be I don’t remember. I’d have to read the BIP to remember. As far as the general complaint about complexity it is getting worse but very slowly. Overall it is getting better I think. There is a lot of crap that we cleared out with Tapscript and Taproot but the annex is one place where you can still see some historic baggage carried forward.

Script - High level problems

So on a high level, let me just summarize my complaints about script and then we’re going to go into what Miniscript is and how Miniscript tries to solve these things. It is difficult to argue correctness because every single opcode has insane semantics. I talked about some of them, I didn’t even talk about the bad ones. I just talked about the simple ones that are obviously crazy. It is difficult to argue security, meaning is there some weird secret way you can satisfy a script? It is hard to reason about all the different script paths that might be possible. It is hard to argue malleability freeness. There are weird surprising things like you can often replace booleans with larger booleans or you can replace numbers with non canonical numbers, stuff like this which is forbidden by policy. Most of this class of attacks will be prevented on the network but in theory miners could try to execute something like this. There are so many surprising corners, so many surprising ways in which things can be malleated, that this is something difficult to reason about. Assuming you have a script that you are comfortable with from a correctness and security perspective it is difficult to estimate your satisfaction cost. If you’ve got a script that can only be satisfied one way or a couple ways you can say it is this many signatures and this many bytes. You manually count them up and you put that in a vault somewhere. In general if you are given an arbitrary script and you are trying to figure out what’s the worst case satisfaction cost for this, this is very difficult to do. The irony of this is that over in Ethereum land they have the same problem and this results in coins getting stuck all the time. As Bitcoin developers this should be something that we can laugh at them for because Bitcoin script is designed so that you can reason about it and you won’t have this problem. But in fact you do. We don’t have Turing completeness, we don’t have arbitrary loops, we don’t have all these complicated things that make it intractable on Ethereum but it is intractable anyway for no good reason. So Miniscript as we’ll see solves this. It is difficult to determine how to work with an arbitrary script. You don’t know which signatures, hashes and timelocks or whatever you might need to satisfy an arbitrary script. You basically can only support scripts that your software explicitly knows how to recognize by template matching or whatever. Then even if you do know what you need to satisfy the script, actually assembling it in the right order, putting all the signatures in the right place and putting the 1s and 0s for booleans and all that kind of stuff also requires a bunch of adhoc code.

Fundamentally the issue here is that there is a disconnect between the way that script works. You’ve got this stack machine with a bunch of weird opcodes that are mostly copied from Forth, some of them mangled in various ways and some of them mangled in other ways. But the way that we reason about what script does is that you are putting spending conditions on coins. You’ve got a coin, you want to say whoever has this key can spend it or maybe two of these three people can spend it or maybe it can be spent by revealing a hash preimage or after a certain amount of time has gone by there is an emergency key or something. Something like this. Our job as developers for Bitcoin wallet software or Lightning software or whatever is to somehow translate this high level user model into the low level stack machine model. Along the way as soon as you stick your hand into the stack machine it is going to eat your hand. Hopefully you get more than half your work done because you’ve only got one more hand. Then you’ve got to convince the rest of the world that what you did is actually correct. It is just very difficult and as a result we have very few instances of creative script usage. The biggest one is Lightning that has a tremendous amount of developer power pointed at it. Much more than should be necessary just for the purpose of verifying that the scripts do what you expect. Nonetheless what Lightning needs fortunately is what it has. But if you are some loner trying to create your own system of similar script complexity to Lightning you’re going to be in a lot of trouble. Actually one thing I did while working with Miniscript was looking for instances of people doing creative script things and I found security issues in real life uses of script out there. I don’t even know how to track down the original people, I’m not going to say any more detail about that. The fact is this is difficult, there’s not a lot of review and the review is very hard for all of these uses.

Script: Brainteasers

I have three more slides of script whining. These are just a couple of fun questions, I’ll be quick. Here’s a fun one. We know that zero and negative zero can both be interpreted as false as booleans. Is it possible to create a signature that will pass that is zero or negative zero? This might be surprising because we use the canonical value zero to signal no signature. In the case of CHECKMULTISIG where you’ve got multiple signatures and you want to say this key doesn’t have a signature. Is it possible that you could check the signature, get false but secretly it is a valid signature? The answer is no for ECDSA because the DER encoding requires that it starts with the digit 2 which is not zero or negative zero so we’re safe by accident. With Schnorr signatures you also can’t do zero or negative zero. The reason being that in order to do so you would have to set your nonce to be some R coordinate that is basically nothing but zeros and maybe a 1 bit. Mathematically there is no way to do this, the Schnorr signature algorithm that we chose doesn’t allow you to do this without breaking your hash functions in ways we assume are intractable. Another brainteaser. Can you get a transaction hash onto the stack the way you might want to for covenants or the way you might want to with something like OP_CTV behavior, CHECKTEMPLATEVERIFY? That’s Jeremy Rubin’s special purpose covenant thing. So you can actually do this with ECDSA. It turns out not in a useful way. You can write a script that requires a signature of a specific form, so maybe one where your s value is all zeros and your R value is some specific point and the public key is free. The person satisfying the script is actually able to produce a public key that will validate given a specific transaction and that specific signature that is fixed in the script. That public key you compute by running the ECDSA verification equation in reverse is actually a hash run through a bunch of extra EC ops. It is a hash of your transaction, it is a hash of what gets signed. In theory you could do a really limited form of OP_CTV just with this mechanism. It turns out you can’t by accident because we don’t have SIGHASH_NOINPUT so every single sighash mode covers your previous outpoint which in turn is a hash of the signature you’re trying to match which means you cannot precompute this. I apologize, I would need a white board to really go through this. Basically there is an accidental design feature of Bitcoin that is several layers deep that prevents you from using this to get covenants in Bitcoin. That’s the story of Bitcoin script. There’s always really far reaching remote interactions. If you are trying to reason about Bitcoin script you have to know this.

Here’s another fun one which I hinted at earlier. What is the largest CHECKMULTISIG, what’s the largest multisignature you can do? I think CHECKMULTISIG with P2SH limits you to only putting 15 keys on because there is a sigops limit there. CHECKMULTISIG with SegWit I think limits you to 20 for other reasons that I don’t quite remember. It turns out that you don’t need to use CHECKMULTISIG at all to do threshold signatures. You can just use CHECKSIG, you do CHECKSIG on a specific signature, if it is a valid signature it will output one, if it is not it will output zero. You can take that zero and you can move it out of the way onto the altstack or something, do another signature check and then bring your zero or one back and you keep adding all these together. You can get a sum of all your individual signature checks and if that sum is greater than your threshold which is easy to check with script then that’s the exact equivalent semantics of CHECKMULTISIG except it is a little bit more efficient for the verifier. If you don’t use that then you’re only limited by the amount of crap you can put on the stack based on the other limits. You can wind up having multisignatures that are 67? I think 67 keys is right. You can get 67 keys and 67 signatures onto your stack using this which is a cool way to bypass all these extra limits. This is also the justification for removing CHECKMULTISIG in Tapscript by the way. We have a new opcode whose current name I forget that does this all in one. CHECKSIGADD? Good, we had a bunch of weird extra letters there before. It does a signature check. Rather than outputting a zero or one it takes a zero or one and adds it to an accumulator that’s already in the right place. So you can do this directly rather than needing 3 or 4 extra opcodes, I think you need two opcodes per signature check. Now you can eliminate all the weird complexity of reasoning about CHECKMULTISIG and using it, it is now much more straightforward.
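
As a hedged sketch of the semantics being described (not the exact opcode sequence), the "sum of CHECKSIGs" threshold and its Tapscript form look roughly like this:

def threshold_passes(checksig_results: list, k: int) -> bool:
    # Each "<sig_i> <pk_i> OP_CHECKSIG" leaves a 0 or 1 on the stack; OP_ADD
    # accumulates them and the total is compared against the threshold k,
    # which gives the same semantics as a k-of-n CHECKMULTISIG without its limits.
    return sum(1 if ok else 0 for ok in checksig_results) >= k

assert threshold_passes([True, False, True], 2)        # 2-of-3 satisfied
assert not threshold_passes([True, False, False], 2)   # only one valid signature

# In Tapscript, OP_CHECKSIGADD folds the addition into the signature check, so a
# k-of-n is roughly: <pk_1> CHECKSIG <pk_2> CHECKSIGADD ... <pk_n> CHECKSIGADD <k> NUMEQUAL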

Miniscript

Onto Miniscript. There we go. Forty minutes on why I hate script. I will actually try not to use the full two hours even though I have two hours of slides.

Q - Last question on script. It is very hard to judge Satoshi ten years on. There’s a lot of subtle complexity here. Do you honestly think back in Satoshi’s time, 2009 that you’d have had the perspective to design a better language?

A - The question is in 2009 could I have done better? Or could we have expected Satoshi to have done better? Essentially no. There are a couple of minor things that I think could have been done better with a bit of forethought. For some historical context the original Bitcoin script was never used. It was this idea for this smart contracting system and smart contracting was something in blog posts by Wei Dai, Nick Szabo and Hal and a few other people. It was this vague idea that you could have programmable money. Script was created with the intention that you be able to do that. Basically it was a copy of Forth, it actually had way more of the Forth language than we have today. We had all the mathematical opcodes and we had a bunch of other weird stuff. There was also this half baked idea for delegation where the idea was that your script is a script you execute but also the witness is a script that you execute. The way that you verify is you run both scripts in a row and if the final result is true you’re good, if the final result is false… The idea is that your scriptPubKey which is committed to the coins should check that the earlier script didn’t do anything bad. There were a couple of serious issues with these, two that I will highlight. One was there was an opcode called OP_VER, OP version. I can see some grimaces. It would push the client version onto the stack. This meant that when you upgraded Bitcoin say from 0.1 to 0.2, that’s a hard fork. Now script will execute OP_VER and push 0.1 onto the stack for some people and 0.2 onto the stack for other people. You’ve forked your chain. Fortunately nobody ever used this opcode which is good. Another issue was the original OP_RETURN. Rather than failing the script like it does now it would pass the script. Because we had this two script execution model you could stick an OP_RETURN in what’s called your script signature. It would run and you wouldn’t even get to the scriptPubKey. It would just immediately pass. You could take any coins whatsoever just by sticking an OP_RETURN into your scriptSig. Bitcoin was launched in the beginning of 2009. This was reported privately to Satoshi and Gavin in the summer of 2010. This was 18 months of you being able to take every single coin from Bitcoin. In a sketchy commit that was labelled as being like a Makefile cleanup or something, Satoshi completely overhauled the script system. He did a couple of interesting things here. One was he fixed the OP_RETURN bug. I think he got rid of OP_VER a little bit earlier than that. Another was he added all the NO_OP opcodes. Around this time if you guys are BitcoinTalk archivists you’ll notice that talk of soft forks and hard forks first appeared around this time. It would appear forensically that around this script update, the one that fixed OP_RETURN was the first time that people really thought in terms of what changes would cause the network to fork. Before that nobody was really thinking about this as an attack vector, the idea that different client versions might fork off each other. Either explicitly or because there is different script behavior. And so the NOP opcodes or the NO_OPs were added as a way to add updates later in the form of a soft fork. The fact that this happened at the same time as the OP_RETURN fix is I think historically very interesting. I think it reflects a big leap forward in our understanding of how to develop consensus systems. Knowing that historic context the original question was in 2009 could we have done better?
The answer is no basically. Our ideas about what script systems should look like and what blockchains should look like and the difficulty of consensus systems, nobody had any comprehension of this whatsoever. The fact that there were bugs that let you steal all the coins for over a year tells you that no one was even using this, no one was even experimenting. It was just a weird, obscure thing that Satoshi put in there based on some Nick Szabo blog posts and nobody had really tried to see if it would fulfill that vision or not.

Q - Did he literally copy something from Forth or did he reproduce it selectively or precisely from a manual?

A - The question is did he literally copy from Forth or did he use a manual? I don’t believe there is any actual code copying from any Forth compiler or something like that. The reason I say that everything is copied from Forth is a couple of specific things. The specific things are the stack manipulation opcodes like the SWAP and OP_ROT which rotates in a specific direction that isn’t useful for Bitcoin but is probably useful in Forth. All of the stack manipulation semantics seem to have come from Forth. These are just Forth opcodes just reinterpreted in a Bitcoin context. Beyond that I don’t know Forth well enough to say. There are a couple of specific things that are really coincidental that suggest that he was using the Forth language as a blueprint.

Q - He either copied something or he made something up. In the latter case he must have thought about it?

A - The statement is either he copied something or made something up. If he made something up he must have thought about it. I don’t think that’s true. I think he made up a lot of stuff without thinking about it.

Q - It accidentally worked?

A - That’s a very good point. Someone said it accidentally worked. There are a lot of things in Bitcoin that accidentally work. There’s pretty strong evidence that some amount of thought went into all of this weird stuff. There are a lot of accidentally works here. There are a lot of subtle bugs that turned out not to be serious but by all rights they should’ve been. I don’t know what evidence to draw from that. One idea is that Bitcoin was designed by a number of people bouncing ideas off each other but the code has a pretty consistent style or lack of style. It is all lost to time now.

Let me move onto Miniscript. Are there any more questions about script or its historical oddities? Cool. In practice what script is actually used for are signature checks, hashlock checks and timelocks. This is what Lightning uses, this is what atomic swaps use, this is what escrow things use like Bitrated. This is what split custody wallets use, this is what split custody exchanges like Arwen use in Boston. This is what Liquid uses. Anybody doing anything interesting with Bitcoin script, the chances are you’ve got something like these, maybe some timelock or something which is just a couple of lawyers with Yubikeys. All of these things fit into this paradigm of signature checks, hashlocks, timelocks and then arbitrary monotone functions of these. A monotone function just means that you’ve got ANDs and ORs and thresholds. You’ve got this and this or that and 3 of 5 of these different things. That’s all you have. An idea for a version of script that didn’t have all of these sharp edges and that allowed analysis is what if we just created templates of all these things? What if you as a user would say “I want to check signatures with these keys and I want these hashlocks and I want these timelocks and I want them in this particular order. I want these keys or this key and a timelock or this key and a hash preimage.” That’s what I want. That would be my script and I will just literally have a program that is a DAG or a tree of all these checks and various ways of combining them. Now you can reason very easily about the semantics of that. You just go through the tree, you traverse through the tree and check that every branch has the properties that you expect. If there was a way that we could create script fragments or script templates for these three primitives, these particular checks, and also create script fragments that represent ANDs and ORs and thresholds. If we could do this in a way that you could just plug them into each other then we would have a composable script subset that was actually usable on the blockchain and that we could reason about in a very generic way. That’s the idea behind Miniscript. As we’ll see there are a number of design constraints. The biggest one though is that we wanted something that was reasonably efficient. There is a naive way to do this where you take a bunch of CHECKSIGs and hash preimage checks and stuff. Then you write different ways of composing these and the result is that for every single combination you’re wasting 5 or 6 bytes. Just doing a bunch of opcodes to really make this composable to make sure no matter how your different subsets are shaped you can still do this reasonably. We didn’t want that. If you are wasting 5 or 6 bytes in every single one of your UTXOs that is going to add up in terms of fees. Even if it doesn’t add up someone is going to say that it adds up. You couldn’t do something in Lightning and gratuitously waste several bytes on every single UTXO just because it gives us this general tooling. Similarly in any other specific application you are not going to gratuitously waste these extra bytes because what do you get for this? If anyone was using this you’d get interoperability and you’d get standard tooling and all that good stuff. But no one is going to go first because it is less efficient than what they have. We really have to match or beat the efficiency of stuff that is deployed. As I said in another of my more public facing talks we did actually accomplish that really well.
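
As a hypothetical example of the kind of policy this is aimed at (names invented, notation approximating the policy language on the slides): a backup key can always spend, or the main key together with either a cosigner or a roughly one-week relative timelock.

# A monotone function (ORs and ANDs) over key checks and timelocks, which is
# exactly the territory described above. The key names are made up.
policy = "or(pk(key_backup), and(pk(key_main), or(pk(key_cosigner), older(1008))))"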

Let’s go through all the problems that we encounter with this approach. Here is an example of our original vision for Miniscript. You see we’ve got an AND and some pubkey checks and we’ve got some ORs and there is a timelock thing going on. If you wrote this out as a tree you could clearly see where everything is. When there are a bunch of parentheses it looks like Lisp and it is hard to tell what matches what. You would take this thing and it would just map directly to script. One problem with this approach is that there are many different ways to write each fragment. There are a number of ways to do a key check. You can use a CHECKSIG opcode, you can use a CHECKSIGVERIFY opcode, you could use CHECKMULTISIG with one key or something weird. There are a couple of different ways to do your timelock checks and then there are many different ways to do ANDs and ORs and thresholds in Bitcoin script. There are a whole bunch of ways and I’ll talk about this in a little bit. Maybe that is not so bad though. Maybe we have 5 different ANDs and so instead of writing AND at the top line of that slide I have an AND and a tag like AND_1, AND_2, AND_3. Each one represents a different one. That’s not so elegant but it is still human readable, it is still very easy to reason about. It still gets you all these nice benefits of Miniscript. A related problem is that these different fragments might not be composable in a generic way. If I’ve got a CHECKSIG, that is going to output zero or one depending on whether the signature is correct. If I’ve got a CHECKSIGVERIFY, that’s going to output nothing or it is going to abort my script. There’s no AND construction that you can just plug either of those into. Your AND construction needs to know what is going to happen if your thing passes. You might have an AND that expects both of your things to either pass or abort the script. You can do an AND that way just by literally running each check in a row. If the first one fails your script aborts. If the second one fails your script aborts. Otherwise it just runs through. If your opcodes are leaving extra ones and zeros on the stack you can’t just compose them. You’d run the first one, deal with the one that is out there, maybe OP_SWAP, swap it out of the way or maybe move it to the altstack. Then run the other one and do a bool AND or something to check that both are ok. That’s also fine, again we can just label things. Another problem though is when deciding between all these different ways to write ANDs and ORs there are trade-offs to make. Some of these have a larger scriptPubKey. They are larger to express. Some of them have larger satisfaction sizes. Maybe you have to push zeros or ones because they have IF statements in them. Some of them have larger dissatisfaction sizes. For some of them maybe you can just push a zero which is an empty stack element and it will just skip over the entire fragment. You don’t have to think about it. For other ones you’ve got to recurse into it and zero out every single public key. Your choice of which different OR to use… In an OR your choice of which fragments to use for the children of the OR depends very much on the probability that certain branches will be taken in real life. If you have a branch that is very unlikely, you want to make its script size very small, you want to make its dissatisfaction size if that’s relevant very small and your satisfaction size you can make it as large as you want because the idea is that you will never use it.
If you are using it you are in emergency mode and maybe you’re willing to waste a bit of money. But now this means your branches need to somehow label their probability. Now the mapping between this nice elegant thing at the top, maybe with tags, and Bitcoin script is no longer two way. I remember in the very early days of Miniscript, I think in the first 36 hours we ran into this problem and Pieter was telling me that we need to have a separate language that we can compile to Miniscript. We’d end up with two languages that kind of look the same and I put my head in my hands and I said “No this is ruined. This is not what I want it to be.” And I had a bit of a breakdown over this because what an elegant idea that you could just have this language, you see the policy and that directly maps to script. You could pull these policies out of the script and now all of a sudden there is this weird indirect thing and Pieter for the entire time was trying to write this optimal compiler. I did not care about the compiler at all. I just wanted a nice cleaner way to write script and he was turning the compiler into this first class thing. Anyway… This is in fact where we landed. You’ll see on the next slide that the results are reasonably elegant. One final problem, there are a few more on the next slide, but the last problem on this slide is that in actually figuring out the optimal compilation from this abstract thing at the top to what we represent in script, there are optimizations that Miniscript can’t handle. By design necessity, Miniscript can’t handle them. In particular if you reuse keys, the same key multiple times in some weird script. Maybe it appears as a check in one branch but then you have the same check in another branch. Maybe there is a way to rearrange your tree, to rearrange your branches so you get logically the same function but what you express in script is slightly different. Maybe the key appears once instead of twice or twice instead of once or something like that. There are two problems with this. One is that verifying that the result of these transformations is equivalent is very difficult. Given two programs or two logical predicates, prove that they are equal on all inputs. I believe that that is NP complete. The only reason that it might not be is because of restrictions of monotone functions but I don’t actually think that changes anything. I think that’s actually program equivalence. Maybe it is halting complete. It is definitely not halting complete for Bitcoin script because you don’t have loops. It is hard.
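
To illustrate the probability point, the policy notation (as I understand it, hedged) lets you weight the branches of an OR so the compiler can make the likely branch cheap to satisfy and push cost onto the rare emergency branch, roughly like this:

# The hot key is expected to sign ~99% of the time; the emergency branch can
# afford a bulkier satisfaction because it should almost never be used.
weighted_policy = "or(99@pk(key_hot), 1@and(pk(key_emergency), older(4032)))"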

Q - Isn’t the motivation for Miniscript to make that possible?

A - Yeah, the original Miniscript motivation was to make this kind of analysis possible. So what we said was basically don’t reuse keys. We cannot do what we call common subexpression elimination in Miniscript. Basically if we wanted to do these kinds of advanced optimizations we would lose the simplicity and analyzability of Miniscript. As far as we’re aware nobody is really champing at the bit to do this. All of our Blockstream projects work with this current paradigm. So does Lightning, so do the atomic swap schemes that people are doing. So do escrow schemes, so do various split custody schemes. Everything that we’re aware of people doing will work in this paradigm. If our goal is to get enough uptake so that there will be standard tooling… Ultimately the goal is to have standard tooling, because that is the benefit of Miniscript: you write a script in Miniscript, you grab something off the shelf and it will deal with your script. It will give you fee estimations, it will create your witnesses for you, you can use it in PSBT with other things, you can coinjoin with other things, you don’t need to write a whole bunch of script munging code for it. These optimizations would in theory get you stuff but it would be very difficult and ad hoc to implement and it would break these properties that we think are important for adoption. I threw it in there but we’re actually not going to fix it. Maybe there will be a Miniscript 2 that would not really resemble Miniscript that could do this.

Q - I saw a conversation on Reddit between you, Pieter and I think Greg as well that said the current Lightning script had no analogue in Miniscript. Something about a HTLC switching on a pubkey hash or something like this? Is that still relevant?

A - This is really cool. The question is about Lightning HTLCs not having any analogue in Miniscript. So in Lightning HTLCs you have a pubkey hash construction and if you reveal the preimage to a certain hash, this needs to be a public key and the signature check is done with that public key. If you don’t reveal a preimage to that hash then you switch and do something different instead. In the original version of Miniscript we had no notion of pubkey hashes so you couldn’t do this. Then in May of last year Sanket Kanjalkar joined us at Blockstream as an intern and he said “What if we do a pubkey hash construction?”. There were a lot of head-in-my-hands reasons not to do this because of complexity. But Sanket showed that if we add this we would need to change up our type system a little bit but then we would get a Miniscript expressible version of Lightning HTLCs. There are two, one you keep on the local side and one that’s remote. On one of them we saved 7 bytes and on the other we saved 17 bytes versus the deployed Lightning HTLC. We actually went from being unable to express what Lightning does without unrolling it into a very inefficient thing, to having a more efficient way to express what Lightning does in Miniscript. It is still incompatible with what Lightning actually did. One goal of ours was that you could take the HTLCs that are defined in BOLT 3 and just interpret those as Miniscripts. We can’t do that because of this switch behavior. Because Miniscript doesn’t really have a way to say if a preimage is present use it as a public key, otherwise use it as a reason to do something else. I think adding that to Miniscript would be technically possible but it would add complexity to Miniscript and all we would gain is the ability to interpret HTLCs as Miniscripts. It is unclear where the value is to either Miniscript or Lightning because Lightning already has a bunch of special purpose tooling. The idea behind Miniscript is that you are trying to express a condition in which coins can be spent. Whereas the idea behind a Lightning HTLC is you are trying to use the blockchain to constrain an online interactive protocol. Those are conceptually different things. It probably doesn’t make sense to use Miniscript inside of Lightning unless it was really easy to do. You would get some code deduplication but you need so much extra code to handle Lightning properly anyway that the script manipulation stuff is kind of a minor thing.

Q - It might be useful if one side of the channel is a multisig setup and doesn’t want to bother the other side with changes to the multisig setup.

A - The core developer in the back points out that it would be useful if you could have a Lightning HTLC where the individual public keys were instead complicated policies. That might be multiple public keys or a CHECKMULTISIG or something and that’s true. If you used Miniscript with Lightning HTLCs then you could have two parties open a channel where one person proposes a HTLC that isn’t actually the standard HTLC template. It is a HTLC template where the keys are replaced by more interesting policies and then your counterparty would be able to verify that. That’s true. That would be a benefit of using Miniscript with Lightning. There is a trade-off between having that ability and having to add special purpose Lightning things into Miniscript that would complicate the system. Maybe we made the wrong choice on that trade-off and maybe we want to extend Miniscript.

Q - It depends if the scripts are going to be regularly redesigned or if there are going to be different alternative paths or if there are contracts on top of Lightning. I know Z-man has talked about arbitrary contracts where there potentially could be scripts or Miniscripts on top of Lightning.

Q - I think you can use scriptless scripts to express everything you want on top of Lightning.

A - There is another thing: Pedro, Guido, Aniket and I have this scriptless script construction, which I think is public now or will be soon, where you can do all sorts of really complicated policies with scriptless scripts. If we are going to overhaul Lightning anyway, what if we move to scriptless scripts instead of Miniscript? That’s kind of moving in the opposite direction, towards more complicated, interactive things rather than off the shelf static analysis things. That’s another observation. The impression I get from the Lightning community is that there is more excitement about the scriptless script stuff, which has a whole pile of other privacy and scalability benefits, than there is excitement about Miniscript. I think this maybe comes down to Miniscript being targeted at coins that mostly sit still. Maybe. Or maybe they don’t mostly sit still, but it is about spending policies rather than constraining a complicated protocol. That is my view of the way different people think about this.

Q - I did see that Conner and the lnd team had used Miniscript to optimize some new Lightning scripts. Did you see that?

A - Cool, I did not. The claim is that Conner used Pieter’s website or something to find a more optimal Miniscript.

Q - They saved some bytes on a new Lightning script.

A - I’ve heard this from a couple of people. Apparently Arwen, which does non-custodial exchange, had a similar thing where they encoded their contract as a policy and then ran it through Pieter’s website and they saved a byte. It has happened to us at Blockstream for Liquid. We had a hand optimized script in Liquid that is deployed on the network now and Miniscript beat it, which is kind of embarrassing for all of us because we’re supposed to be the script experts aren’t we? Where we landed is we don’t have explicit Lightning support. I could be convinced to add it, it is just a pain in the ass. It will lengthen the website by like 20 percent. It has a trade-off in uptake and complexity and so forth but we could be convinced to add it. We’re also not going to solve this common subexpression thing. That’s beyond adding a few things and that’s really changing how we think about this.

Another category of problems here is what I’m going to call malleability. As a quick bit of background, malleability is the ability of a third party to change the witness that you put on a transaction. Prior to SegWit, if somebody did this it would change your transaction ID and completely break your chain of transactions and it was just a huge mess. This was very bad. After SegWit the consequences were less severe. If somebody is able to replace your witness with another valid witness they may be able to mess up transaction propagation or block propagation because it will interfere with compact blocks. It may also change the weight of your transaction in a way that your transaction is larger than you expected, so the fee rate that you put on it is going to be higher than the fee rate that the network sees. You’re going to wind up underpaying because some third party added crap to your witness. We really want things that Miniscript produces to be non-malleable.

An interesting observation here is that malleability is not really a property of your script as much as it is a property of your script plus your witness. The reason being that when you create a full transaction with witnesses and stuff, the question you ask is: can somebody replace this witness with a different one? It turns out that for many scripts there are some witnesses you can choose for which the answer is yes, but there are other equivalent witnesses that you can choose for which the answer is no. That’s something where there’s a lot of complexity involved that we didn’t expect going into this. That’s kind of a scary thing because when there is complexity in satisfying scripts that we didn’t even think about before Miniscript, this means that this has been hiding in Bitcoin script all along and every usage of script has had to accidentally deal with this without being able to think clearly. There’s some complexity about what malleability means. One thing is that the ability of third parties to change things depends a bit on how much they know. Do they know the preimage to some hash that is out there, for example, and can they stick that on the blockchain? If you have a timelock somewhere in your script and this is covered by a signature then the adversary can’t change it because timelocks are encoded in your sequence or locktime field. Those are signed. If it is not covered by signatures then maybe somebody can change that and then change branches. It is actually a little bit difficult…

Q - You don’t have a sighash to not sign timelocks?

A - That’s correct. We don’t have a sighash that won’t sign timelocks but you can create Miniscripts that have no signatures and then there is a form of malleability that appears.

This is a little bit difficult to reason about although eventually Pieter and I settled on something that is fairly coherent. A long argument I had with Pieter was about whether it counted as malleability if other participants in your transaction are able to sign things. I wanted them to mark certain keys as being untrusted when we consider signatures appearing out of nowhere. I think Pieter won that, I’m not going to rehash that entire argument. I think there’s maybe some simplicity of reasoning where you can say “This key is untrusted” and then use all your standard malleability analysis to decide what’s the worst that can happen to your transaction that might be useful in some cases. But I think that’s a technical thing. Pieter argued that that’s an abuse of the word malleability. Maybe I can share algorithms but I shouldn’t be arguing this in public so I won’t.

Miniscript and Spending Policies

This covers some simple things. To make our task tractable here we are going to assume standard policy rules. I talked about SIZE EQUALVERIFY earlier. There’s a policy rule in Bitcoin, meaning a standardness rule that affects transactions on the network, called MINIMALIF. That means that what you put as input to an IF or a NOTIF has to be zero or one. So you do not need SIZE EQUALVERIFY; the nodes will enforce what SIZE EQUALVERIFY would have enforced. We’re going to assume that standardness. We’re going to say that if something is malleable, but you have to violate standardness to malleate it, then that doesn’t count for our concerns about transaction propagation. If your only potential attackers are miners, miners can already mess up transaction propagation by creating blocks full of secret transactions and stuff like that. We’re not going to deal with that, and that will save us a whole bunch of bytes in almost every fragment. In particular, enforcing directly in script that signatures either have to be valid or empty is very complicated and wasteful. But there’s a standardness rule that requires signatures to be either empty or valid. As I mentioned, we don’t have common subexpression elimination. It would conflict with this model where you have a tree that you can iterate over, where clearly your script corresponds to some tree and vice versa. To retain that we can’t be doing complicated transformations. We also assume that no key reuse occurs, for malleability. Here’s something: you have two branches, the same key is in both; if you provide a signature for one branch then, absent using OP_CODESEPARATOR say, which is also expensive, somebody could take that signature from one branch and reuse it in another branch. Now there’s potentially a malleability vector where somebody could switch branches where an OR statement is being used. I think we have to assume no key reuse because it seems that in general, if you can have arbitrary sets of keys that intersect in arbitrary ways in conjunction with thresholds, it seems intractable. It doesn’t feel NP hard but we don’t know how to do it in sub-exponential time so it is possible that it is. Those are our three ground rules for Miniscript.
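
As a rough illustration of why key reuse is excluded (placeholder key A and made-up timelock values, not anything from the slides):

    or_i(and_v(v:pk(A),older(144)),and_v(v:pk(A),after(500000)))

Both branches check the same key. A signature for A commits to the transaction, including its nSequence and nLockTime fields, but not to the witness element that selects the IF branch, so if the spending transaction happens to satisfy both time conditions a third party can move that signature to the other branch. Ruling out key reuse keeps this whole class of cases out of the analysis.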

We are also going to separate out Miniscript and the policy language. Miniscript directly corresponds to script, it is just a mapping between script and something human readable; then we also have this policy language. The policy language has, basically, probabilities that different branches are taken. Miniscript has weird tags like AND_1, AND_2, AND_3 and so on. They have better names than that. The policy language has probabilities. Given a policy you can run it through a compiler that will produce a Miniscript. It will look at your probabilities, it will decide what the optimal choice of a specific Miniscript fragment is, and it will output a Miniscript which is clearly semantically identical to the original. This is very easy to check by the way. You take your policy and delete your probabilities. You take your Miniscript and delete all the tags and those will be equal. That’s how you check that your compiler output is correct. That’s a really powerful feature. You can take stuff off the blockchain, decode it into a Miniscript, lift it into a policy by deleting all the noise and then you can see immediately what it is doing. But Miniscript is where all the technical complexity is. We have all these different tags that need to compose in certain ways. This is what I’m going to get into that I haven’t really gotten into in any of my public talks here. Here’s a policy; rather than writing it in the Lisp format I drew this pretty picture like a year ago that I’ve kept carrying around. Here’s the same thing compiled to Miniscript. So at the bottom of the slide I’ve written the actual script out in opcodes but you can see the difference between these two. There are two differences. One is that I’ve attached all these tags to the ANDs and ORs. There’s an and_v and an or_c, I think those are the current names for them. The other interesting thing is the pk on the left, checking pubkey 1: it added a c there. That’s saying there’s actually a public key but also I stuck a CHECKSIG operator in there. Then there’s jv: the v means there is an OP_VERIFY at the end of this hash check and the j means there is some complicated check that basically skips over the whole check if you give it an empty string, meaning no preimage, and otherwise will do the check. That j is needed for anti-malleability protection. Then we’ve got these b’s and v’s and stuff which I will explain in the next slide. These are types. Miniscript, unlike script, has a type system. To have a valid Miniscript you need these types and you need to do checks.
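
As a small hypothetical example with placeholder keys A and B (the exact fragments the compiler picks can differ), here is a policy, a Miniscript it could compile to, and the script that Miniscript encodes:

    policy:     or(9@pk(A),and(pk(B),older(144)))
    miniscript: or_d(pk(A),and_v(v:pk(B),older(144)))
    script:     <A> CHECKSIG IFDUP NOTIF <B> CHECKSIGVERIFY <144> CHECKSEQUENCEVERIFY ENDIF

Strip the 9@ probability from the policy and the _d, _v and v: decorations from the Miniscript and you get the same tree back, which is exactly the equality check described above.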

Miniscript Types (Correctness)

Let me justify that real quick. As I mentioned, you’ve got some opcodes like CHECKSIG that push zero on failure but one on success. You have other ones like CHECKSIGVERIFY which push nothing on success and abort your script on failure. You have all these expressions of wrapped things. There are the composition ones, the ANDs, ORs and thresholds, but there are also these j’s and c’s and v’s, all these little wrapper things that manipulate your types, and they might behave differently depending on whether you’ve given them a CHECKSIG or a CHECKSIGVERIFY, or whether you’ve given something that can accept a zero as its input or can’t ever accept zero as an input, or something like that.

Basically we have these four base types, B, V, K and W. So B is just base, that is what we call it. This is something that has a canonical satisfaction that will result in pushing a nonzero value on the stack. If it can be dissatisfied it will push a zero on the stack. Then we have this V which has a canonical satisfaction that will push nothing on the stack. There are no rules beyond that. If you do something wrong it will abort. Basically all these things have this caveat: if you do something wrong it will abort. There is this weird one, K. With this one, whether you satisfy or dissatisfy it, it is going to push a key onto the stack. Whether you satisfy or dissatisfy will propagate in an abstract way up to your CHECKSIG operator that will turn that key into either a zero or a one. Then you’ve got this weird one W that is a so called wrapped base. This is used for when you are doing an AND or an OR and you’ve got a whole bunch of expressions in a row. Basically what a wrapped expression does is it takes the top element of your stack, it expects that to be some sort of accumulator, 1, 2, 3, 4, 5, some counter or whatever. It will move the accumulator out of the way, it will execute the fragment, it will bring the accumulator back and it will somehow combine the result of your subexpression and the accumulator. Typically it does this by moving the accumulator to the altstack, moving it back when you’re done, and combining them using bool AND, bool OR, or OP_ADD in some cases.

Then there are these minor type modifiers. We’ve got five of them. I was going to go to the website and read through all the detailed justification for these but I think I’m not going to do that. They are minor things: for example the o means one, it means that this is a fragment that takes exactly one thing from the stack. This affects the behavior of certain wrappers. The idea behind all of this is that if you have a Miniscript program you can run it through a type checker that assigns all of these tags and type modifiers and stuff to your program. If your top level program, if this top AND here, is a B you have a valid Miniscript. If it is not a B it is an invalid Miniscript, all bets are off, probably it can’t be satisfied. This type check has very simple rules. There are linear time, very straightforward rules for propagating these types up. There is a giant table on Pieter’s website which describes exactly what these mean and what the rules for propagation are. You as a user don’t really need to care about that. All you need to know is that when you run the type checker, if you get a B at the top you’re golden; if you don’t, you’re not, throw it out. If you get your Miniscript by running our compiler then it will always type check because our compiler enforces that type checking happens.
All of these complicated questions of correctness are now wrapped up entirely in this type system. You have four base types and five modifiers.
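
A rough illustration with placeholder keys A and B (the full property table is on the website):

    pk(A)                 ->  <A> CHECKSIG                                  B: leaves 1 or 0 on the stack
    v:pk(A)               ->  <A> CHECKSIGVERIFY                            V: leaves nothing, aborts on failure
    s:pk(B)               ->  SWAP <B> CHECKSIG                             W: wrapped, usable as a later argument of and_b, or_b, thresh
    and_b(pk(A),s:pk(B))  ->  <A> CHECKSIG SWAP <B> CHECKSIG BOOLAND        B: fine as a top level expression

The key fragment underneath pk() (type K) just pushes the key; the c: wrapper is what adds the CHECKSIG and turns it into a B.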

Miniscript Types (Malleability)

We’ve got another type system in parallel for malleability. This also was something where we got pretty far into the project before we realized that we needed to separate these things out conceptually. Malleability, as I mentioned, is this ability for a third party to replace a valid witness with another valid witness. One complicated thing that we didn’t realize until later is that we have a notion of canonical witnesses that we expect honest signers to use. If you have an OR, like two branches, either satisfy this subscript or this other subscript: there’s some form of OR where you try to run both of them and then you check that one of them succeeded. Any honest user is just going to pick one or the other, satisfy it and dissatisfy the other one. Somebody being dishonest could satisfy both and this would still pass the rules of Bitcoin script. When we are thinking about malleability we have to consider dissatisfactions like that. That was annoying. There’s suddenly this whole wide set of possible behaviors in Miniscript that we had to consider when doing malleability reasoning. The bad things about malleability are, in particular, that it affects transaction propagation and it affects fee estimation.

What questions can we ask? Can we ensure that all witnesses to a script are non-malleable? This is the wrong question to ask, it turns out. I mentioned that malleability is a property of witnesses not scripts, but even so, it is not necessary that every possible witness be non-malleable. What we want is this second thing in red here. Basically, no matter how you want to satisfy a script, no matter which set of public keys or hash preimages or whatever, you want there to be some witness you can create, even if it is a bit expensive, that will be non-malleable. Our compiler will ensure that this is true. If you give it a given policy it will make sure every single possible branch of that policy has a non-malleable witness that can be created. This was quite difficult and non-trivial. This was the source of a lot of security bugs that I found when auditing random crap that I found on the blockchain. As it turns out, curiously, the Tier Nolan atomic swap protocol from 2012, 2013 is actually ok and Lightning is actually ok. That was surprising. I think the Lightning folks got lucky in some ways. There was a lot of thought that went into Lightning. The way that we are thinking about this here is a very structured way where you can even reason about this in an automated fashion. It is great that we can do that now. It provides a lot more safety and assurance.

What I found very frequently when trying to use Miniscript internally at Blockstream to optimize Liquid and optimize various things I was doing: I was like, I have Miniscript, I can do all these powerful things, now I can optimize the hell out of my script and fearlessly do things. Every time I would find an optimization my analysis tools and the compiler would reject it. I’d be like, this is broken, another bug in Miniscript. Pretty much every time it was not a bug in Miniscript. I would chase through the logic; this is a difficult problem of how to get good error messages. When I’d trace through the logic of why it was failing it would come down to me failing one of these malleability properties. I would find some attack that was actually being prevented by the Miniscript analyzer warning me that “Hey, you’ve got a bare hash preimage sitting over here. If that hash becomes public someone can throw away the rest of the witness and use that”, for example.

We have these three extra type modifiers. There’s four base types, five modifiers that represent correctness. We have three more for malleability. s means there is a signature involved somewhere in the satisfaction. It used to stand for strong, then it used to stand for safe, all these weird notions. Now it is just: there is a signature. That captures the properties that we need. There’s this weird thing called f for forced. What that means is there is no way to dissatisfy it, at least not without throwing a signature on it. No third party can dissatisfy something that is an f. Then there is an e which is in some abstract sense the opposite of f. With e you have a unique dissatisfaction that has no signature on it. A third party can use that unique dissatisfaction, but if you dissatisfy it then that’s the only dissatisfaction. If you are not satisfying this then a third party can’t do anything about it. They see the dissatisfaction, they have no other choices.
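
To get a rough feel for these three, with a placeholder key A, a hash H and a made-up timelock:

    pk(A)        satisfying it requires a signature (s); its one signature-free dissatisfaction is the empty signature (e)
    older(1000)  cannot be dissatisfied at all (f) and involves no signature (no s)
    sha256(H)    anyone who learns the preimage can satisfy it, no signature involved (no s)

An OR where neither child has the s property, two bare hash locks for example, is exactly the sort of thing this analysis flags.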

I’m going to bump over to the website in a second. It has got a table of what these rules mean, how they propagate, what the justifications for it are and it also has a summary of this slide here which is how do we produce a signature. I’ve been hinting at various malleability vectors. There is a question. Suppose you have a Miniscript that is something like “take either this timelock or this timelock and combine it with a public key.” Is this malleable? It is actually not because the signature is going to cover the specific… this is complicated. I should have chosen a simpler example. The signature will cover the locktime but if the locktime is high enough that it actually satisfies both branches then if you’ve got like an OP_IF…

Q - If it is an already spent timelock or a timelock in the past?

A - Yes. If you try to use the later timelock then you put a timelock that will be greater than both restrictions. Now if you’re switching between those using the OP_IF or something a third party can change your zero to a one and swap which branch of the OP_IF you’re using and that’s a malleability vector.

Q - The transaction is going to fail on the locktime check in the script evaluation?

A - In this example I’m assuming that the locktime is satisfied. There exists a branch in your script such that, if somebody honestly tries to take it, the result will be a malleable witness. You’re trying to take the later locktime and some third party can change it so you’re taking the earlier locktime. We spent a lot of time iterating to try to prevent this from happening. It is actually quite difficult to reason through why exactly this is happening because, as the Lightning developer is saying, you do have a specific locktime in the transaction that is covered by the signature, so where does the malleability come from? The answer in Miniscript parlance is that you have an IF statement where neither of the two branches has this s property, a signature, on them. If you ever have an IF statement or a threshold where there are multiple possible paths that don’t have the s on them, that’s illegal and you have to dissatisfy that. You aren’t allowed to take either timelock because if you do the result is going to be malleable. Then if you have to dissatisfy it and you don’t have the e property then probably your whole script is going to be malleable, because even after dissatisfying, if you’re missing e that means that somebody else can dissatisfy. So actually all of these rules have simple propagation rules that are very difficult to convince yourself are correct but that catch all this complicated stuff. So at signing time how do we avoid this? How do we say “I’ve got this weird branch here where maybe it is a timelock or a hash preimage, and now I need to think: I’m only allowed to take the hash preimage if the timelock has not yet expired”? If I try to use a hash preimage after the timelock has expired, a third party can delete the hash preimage, take the timelock branch, and there’s a malleability vector. So how does my signing algorithm reason about this? How can I actually produce a valid signature and not get any malleability under these constraints? The answer is basically you recursively go through your Miniscript trying to satisfy every subexpression. Then you try to satisfy the AND or the OR, however these are combined, and you propagate all the way to the top. Whenever you have a choice of satisfactions you look at what your choices are. If they all require a signature that is great, we don’t have to think about malleability because nobody can change those, they would need a signature to change things out. We assume no signature reuse, we assume that third parties don’t know your secret keys and so on.

Q - No sighash?

A - In Miniscript we only use SIGHASH_ALL. That’s a good point from the audience. Different sighash modes might complicate this but in Miniscript we only use SIGHASH_ALL.

If they all require signatures, that’s great, you just take the cheapest one. If there is one possibility that doesn’t require a signature, now you need to be careful because a third party can go ahead and take that. Now you can’t take any of the signature ones, you have to take the one that the third party can take. The reason being that you have to assume a third party is going to do that to you, so you have to get in front of them. You just don’t publish any of the signature requiring ones. This may actually result in you creating a more expensive satisfaction than you would if you didn’t care about malleability. There is no context I can imagine where this is safe, but you may want to use a malleable signer and maybe you could save a few bytes; you really need to be careful doing that. Finally, if there are multiple choices that don’t have a signature then you’re stuck, because no matter what you choose a third party is going to have some option. That’s all there is to this. You have these rules for e, f and s that propagate through. Using these rules at signing time you apply these three points and you can recursively generate a signature. That’s pretty great. This was a really complicated thing to think about and to reason through and it is really cool that we got such a straightforward algorithm that will handle it for you. When I talk about our analysis tools catching me doing dumb stuff, this is what I mean. I mean I would try to sign and it would refuse to sign. I would trace through and I’d be like “What’s wrong? Did I give it the wrong keys? What is going on?” Eventually I would find that somewhere I would have a choice between two things that didn’t have signatures and it was barfing on that. That’s really cool, something that has actually saved me in real life. Only saving me from wielding the power Miniscript gives me in unsafe ways. That is a good thing.
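
Writing out the two-timelock case from before (placeholder key A, made-up heights):

    and_v(v:pk(A),or_i(older(144),older(1000)))
    ->  <A> CHECKSIGVERIFY IF <144> CHECKSEQUENCEVERIFY ELSE <1000> CHECKSEQUENCEVERIFY ENDIF

Once the input is old enough that both branches pass, the 1 or 0 selecting the IF branch is a witness element that the signature does not cover, so there are two possible choices and neither involves a signature; this is exactly the case where the non-malleable satisfaction algorithm refuses rather than hand you a witness a third party could rewrite.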

Q - Does the tooling have a malleable mode? If I wanted to make it malleable because I’m a miner…

A - The question is does the tooling have a malleable mode. I think Pieter’s code does. We might have an open issue on rust-miniscript to add this. It is a little bit more complicated than just disabling the checks because you might also want to take some of these non-canonical satisfactions in that case, which you can imagine being a little bit cheaper. I think Pieter implemented it but Sanket and I didn’t. There will be; it is a design requirement for our finished Miniscript library that you can enable malleability if you want.

I will link to the website. I was going to go through this in more detail but I can see that I’m burning everyone out already.

Q - Shall I get the website up?

A - Let me run through, I’ve only got a couple more slides. Let me run through my last subject here which is fee estimation which I should’ve moved to sooner. Malleability was the most taxing. Let’s think about fee estimation. I will be a bit more high level and a bit more friendly for these last few slides.

Miniscript: Fee Estimation

Fee estimation requires similar reasoning to malleability. You’ve got to go through your transaction and figure out what is the largest possible witness size that might exist on these coins. That witness size, the largest possible witness size, is what I have to pay for. If I’m in control of all the keys I can do better. I can say “I’m not going to screw myself”, so I should actually find the smallest satisfaction that uses the keys that I have. But if I have other third parties who might be contributing I have to consider the worst case of what they do. They might be able to sign, they might not. Maybe that affects how efficiently I can sign something. Regardless, there is a simple linear time algorithm I can go through where I just tell it which keys are available and which aren’t. I think I have to assume keys are either available or not; it is once they are only potentially available that it gets complicated. Simple linear time algorithm, it will output a maximum. I know the maximum size of my witness and that is how much transaction weight I should pay for when I’m doing my fee estimation. This is great by the way. This was the worst thing about Liquid. We had half of our development effort going into figuring out fee estimation. In Liquid we have the ability to swap out quorums. We have coins in Liquid, some of them are controlled by different scripts than others. We have this joint multisig wallet trying to do fee estimation and cross-checking each other’s fee estimation on coins with complicated scripts that potentially have different sizes of witnesses. With Miniscript we just call a function, we just get the maximum witness size and it does it for us. Miniscript was really just Pieter and me going off on a bender but we really wouldn’t have been able to do Liquid without it. We got lucky there.
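
Using the earlier placeholder example and rough signature sizes:

    or_d(pk(A),and_v(v:pk(B),older(144)))
    spend with key A:                one signature, roughly 73 bytes
    spend with key B after the wait: one signature plus an empty push, roughly 74 bytes

If B belongs to someone else you budget for the larger of the two, plus the script itself; if both keys are yours you can plan on whichever branch you actually intend to use.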

Q - They don’t need to try to maximize it [witness size], they just need to make it bigger than the honest signers’?

A - That’s correct. You need an overestimate, that’s what you need. You can overshoot that is perfectly fine but if you undershoot you might be in trouble because the transaction might not propagate.

If you run through your script and do your type inference and your top level thing has an s on it… If you run through the type checker and it says it is non-malleable, you don’t have to think about malleability, that’s awesome. If you’re being weird and you want to have potentially malleable scripts then that will complicate your fee estimation, but again it is doable, you can run through this in linear time. Let me talk a bit about how malleability and fee estimation interact. This is one security vulnerability I found on the blockchain. You could argue that this is irresponsible disclosure, but all I am saying is that it is on the blockchain, and the script I am giving is not even the exact script. If you guys can find this, you could have found it without me. Suppose you have this atomic swap thing. You’ve got these coins that can either be spent by two parties cooperating, or spent by one party by revealing a hash preimage, or after a timeout the second party can use a timelock. The issue here is that party B is not required for this at all. Suppose the timelock expires. Party B creates this transaction using the timelock case and publishes it to the network. Party A sees this and eclipse attacks Party B. Party A replaces the witness that was using the timelock with one that uses the hash preimage. That is actually following the atomic swap protocol, but at this point Party B has given up: they are no longer waiting, they’re trying to back out. Party A uses the hash preimage to increase the weight on the transaction, causing it to no longer propagate and no longer get into the next block. Then Party A can wait for the timelock on the other side and steal the coins. There are two solutions to this using Miniscript. One is that Party B, when doing fee estimation, should consider that A’s key is not under his control and expect worst case behavior from A. That will do the right thing because Party B will wind up paying a higher fee, but they’re paying a higher fee to prevent this attack.

Q - The real attack is tracking the transaction in the mempool. I can always drop the signature for the witness?

A - The real attack here is that I’ve gotten the transaction in the mempool with a specific transaction ID but with a lower fee rate.

Q - You don’t track by witness transaction ID right now?

A - Yes, that’s correct, and I don’t think we want to. If the network was looking at wtxids that would also prevent this kind of thing.

Another solution is that when creating the transaction Party B could go over this and ask the question “Is my signature required for every branch of this?” In the standard Tier Nolan atomic swap the answer to that question should be yes. The party who has a timelock should have to sign off on all branches. That is very easy to check with Miniscript. You just scan through the script, look at every branch and say “Is my key in every branch?” Party B does this and if Party A had proposed this broken thing B would reject it out of hand. There are a number of ways that we can address this concern and Miniscript gives you the tools to do them. That’s good. This was something that I hadn’t really considered. It is a little bit contrived but not a lot contrived. You can only decrease the fee rate by a little bit, and your goal is to decrease the fee rate by enough that you stall a transaction for so long that a different timelock expires, and the other party doesn’t notice that you put a hash preimage that they can use to take coins in the mempool. It is a little bit contrived, but you can imagine this reflecting a wider issue where you make incorrect assumptions about how your witness size might grow because other parties are not cooperating with you. Miniscript gives you the tools to reason about that.
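
As a concrete shape for that check (placeholder keys A and B, hash H, made-up timeout; an illustration, not the exact script from the chain):

    or(and(pk(A),pk(B)),or(and(pk(A),sha256(H)),and(pk(B),older(1000))))

Party B walks every branch asking “does pk(B) appear here?” The cooperative branch and the timelock branch contain it; the preimage branch does not, so B either rejects the proposal or does fee estimation assuming the worst case witness on that branch.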

So I’ve been talking about keys being trusted or untrusted up to this point. If a key is yours you can assume that you are always going to do the best possible thing. If the witness is smaller by signing with it you’ll sign with it. If the witness is smaller by not signing with it you won’t sign with it. Then you have these untrusted keys where you’ve got to assume they’ll do the worst thing when doing fee estimation. There is another dimension here actually, about availability. If keys are definitely available, this is very similar to being trusted. If you trust a key and it is available you can just do your fee estimation assuming it will be used in the most optimal way. If the key might not be available then you can’t actually do that. If it is on a hardware wallet that your end user might not have access to or something like that, then you’ve got to assume that the signature won’t be there, you’ve got to treat the worst case. If a key is definitely unavailable that is kind of cool. If a key is untrusted but you know it is definitely unavailable, because you know it is locked in a safe somewhere on another continent, then you can treat it as trusted. Unfortunately these notions of trust and availability don’t actually overlap entirely.

It turns out that if you try to do fee estimation the way it is described it looks like it might be intractable. We’ve got an open problem. This is frustrating. This is why rust-miniscript is not done, by the way. Every time I go to the issue tracker to close all of my remaining things I jump onto this, and then I don’t make any progress for many hours or days and then I have to go back to my real job. It is very surprising that this is open. We thought that we had a solution at the Core Dev meeting in Amsterdam last year, Pieter and I. I think Sanket and Pieter broke it the next day. It seems like we have this ability to estimate transaction weight in a really optimal way that takes into account all of the surrounding context. But then it seems like it is intractable here. I tried to write at a high level what you have to do and even in my high level description I was able to find bugs. Miniscript gives you the tools to get an upper bound on the worst case transaction size and that’s what you need. What I’m saying here is that we could do even better than that by giving the Miniscript tooling some more information. I don’t know how to write the algorithm to do that. This is a plea for anybody who is interested. I think you could get a Master’s thesis out of that. You won’t be able to get a PhD thesis. If anyone here is a Master’s student I think this is definitely a thesis worthy thing to do. Unless it turns out to be trivial, but I’m pretty sure it is not. Pieter and I tried for a while and Pieter has two PhDs.

Miniscript and PSBT

Last slide. I nailed the two hour time limit. Let me cover the interaction between PSBT and Miniscript. This ends the technical part of the talk. This is really just engineering, something we need to grind through; there are a couple of minor things that need to be done. PSBT is of course Andrew Chow’s brainchild. This is a transfer protocol for unsigned transactions that lets you tack things onto your inputs and outputs and stuff. If you are a wallet trying to sign a transaction in some complicated multisig thing you don’t need to think about that. You take the transaction, you sign it, you attach the signature to it and pass it on to the next guy. You don’t need to worry about putting the signature in the right place and giving room for other people to fit theirs in or whatever other weird stuff you might otherwise have to do. You just stick it onto the PSBT. So PSBT has a whole bunch of different roles. The most interesting role is the so called finalizer. That’s the person who takes this PSBT, takes all the signatures that are attached to it, all the hash preimages that are attached to it and stuff, and actually assembles a witness. Up until now the only way to write a finalizer has been to do template matching. “This is a normal CHECKMULTISIG, I know I need to look for these keys and put them in this order.” With Miniscript you can write a finalizer that can handle any kind of policy that you throw at it and any script that you throw at it. It will figure out whether it has enough data to actually satisfy the input and actually finalize the transaction. Given enough data it can even optimize this. It can say “If I use these signatures instead of that signature I can come up with a smaller witness.” Your finalizer now has the ability to support pretty much anything that you throw at it; for any protocol that anyone is using today you can just write an off the shelf finalizer, maybe with some special purpose HTLC code, that will work, and the output will be more optimal than what would otherwise be done.

Maybe a dumb example of this: I have a simple finalizer that doesn’t use PSBT but is the finalizer logic in Liquid, where typically we need 11 signatures on a given transaction and typically we get 15 because all of the signers are actually online. My code will go through and find the 11 shortest ones by byte. We save a few bytes in Liquid on every transaction. We waste a tremendous amount using CHECKMULTISIG and explicit keys and all this. But we save a few bytes that way. The reason we can do that is that I’m not writing this insane micro-optimization code for Liquid. I’m writing it in a Miniscript context where it can be vetted and reused and stuff. It is actually worthwhile to do these sorts of things. That’s really cool. Miniscript was designed to work with PSBT in this way. There are a couple of minor extensions to PSBT that we need. There has been some mailing list activity in the last couple of months but it is not there yet.

Q - All your suggestions have gone nowhere.

A - I did a bad thing and I wrote a bunch of suggestions in an English language email to the mailing list and wrote no code and never followed up so of course nothing has happened. That is on me, I should have written code, I should follow up and do that. And I will when I find the time.

Q - On the last slide, around roles. It means that you don’t have to allocate the finalizer or the finalizer can be transferred at a later date? What’s that final point?

A - The roles are actually different participants. If you are writing code for a hardware wallet, say, the chances are you only care about the signer role, which means validating the transaction and just producing a signature. You tack that onto the PSBT and then it is somebody else’s problem to put that into the transaction. There is another role, the updater, where you add extra information. You say “This public key is mine” or “I know a BIP32 path for that.” If you’re writing host software maybe you care about being an updater. Right now to use PSBT at all you have to write finalizer code because every individual participant needs to know how to assemble a final transaction. But the goal of Miniscript is that we can write one finalizer to rule them all. There will be a standard Miniscript finalizer and then nobody else in the ecosystem is going to have to deal with that again. As long as they can fit their project into Miniscript they can just use a standard finalizer. That is great because that removes by far the most complicated part of producing a Bitcoin transaction. It also gets you better flexibility and interoperability. The different roles are actually different participants.

As I mentioned finalizing optimally is similar to fee estimation. If you actually take all the information that is potentially available then it becomes intractable even though it feels like it should be tractable. That’s a bit frustrating. We can definitely finalize in an optimal way given information about what data is actually available in the final state. You give it a PSBT with enough signatures and hash preimages and whatever and we can write very straightforward linear time code that will find the optimal non-malleable witness using that data which is a really cool thing. Although you need to be a tiny bit careful. The finalizer has access to extra signatures and stuff that in principle the finalizer could use to malleate. If you have an untrusted finalizer then that is a different trust model and you need to think about that.

Q - Not necessarily an untrusted finalizer. Anyone who has the full PSBT?

A - Yes. This is a point of minor contention between Pieter and me, the question of how you think about distrust from your peers when you’re doing a multisignature thing. Where we landed for the purpose of Miniscript is that it does not count as malleability. That’s a separate analysis thing that I haven’t fully fleshed out, to be honest. I don’t know the best way to think about that if you actually distrust your peers and you think they might malleate your transactions. I’m not sure in general what, if anything, you can do.

Q - You don’t give them all of the PSBT?

A - It is not just not giving all the PSBT. They can replace their own signatures.

Q - They can ransom change?

A - Yeah. There is no straightforward thing. I can find other attacks for most of the straightforward things.

Open Problems

This is actually the end. Thank you all for listening for almost two hours. I had a lot of fun doing this. I hope you guys got something out of it. A real quick summary. There is an issue tracker on rust-miniscript that has all of the open problems that I mentioned. It is bugs and also some janitorial stuff, in particular writing the PSBT finalizer. There are a couple of open bugs that are actually bugs that I need to fix. There is also work to do here. There is a reasonable number of straightforward things that need to be done and there’s a small number, like two, of hard things that might be thesis level work. It would be awesome if someone did the thesis stuff because then I would definitely finish the other stuff. Thank you all for listening. It has been an hour and 45 minutes. We have 15 minutes for questions if people are still awake.

Q - You want the website up?

A - Yeah can I put the website up?

Q - What’s Pieter’s repo?

A - I don’t know that Pieter has a public repo yet because his code is all patched into Bitcoin Core.

Q - There’s a PR for that?

A - So then Pieter’s repo is the PR. I think he has a separate repo where he compiled the web JS or something for the website but I think that’s a private repo. Is it public now? Ok, I should add the link. There you go, you can see the code that I’m hovering over. That one is Pieter’s website, there’s the code that produced it, there’s Pieter’s PR. There’s Sanket’s and my code. We have a Policy to Miniscript compiler. Up top here’s a policy in the high level language. I click compile there and you can see it output the Miniscript. You can see the Miniscript and the policy look the same except that the Miniscript has tags on the ANDs and ORs and it has these extra wrappers stuck in there. It also gives you a bit of information about script spending cost. You can see that I put this 9 here saying that this branch of the OR is 9 times as likely as the other one to be taken. It gave me an average expected weight for spending it. It also wrote out the script here which is cool. This is Bitcoin script and Miniscript. Those are exactly equivalent, there is a straightforward linear time translation between these. It is literally a parser that you can write with one token of lookahead. It is linear time. If you have a Miniscript you can run it through this analysis tool and it gives you some cool stuff. Then here is the Miniscript reference that I mentioned. Here are all the different Miniscript fragments and the Bitcoin script that they correspond to. Here is the type system that I mentioned, you have these four base types: B, V, K and W. This is what they mean. The five modifiers, this is what they mean. Here is how they propagate forward. You can see all of the leaf fragments here, like pubkey checks and timelocks and stuff, have certain properties. The o, n, d, u here. If you have combinators like ANDs and ORs and stuff then the properties of those propagate from the properties of the children. You can convince yourself that all of these rules are correct. There are a lot of them. You can go through them if you want to not just trust but verify. You can check that these are what you expect. They are very straightforward to implement. James wrote all of this in Python, and Sanket looked over it, for the Bitcoin Core Python test framework.

Q - I just use this website?

A - Yeah. You can just copy this stuff right off the website. It is really nice how this turned out. You guys should’ve seen some of the earlier versions of this website.

There is some banter about the resource limitations which matter if you are writing a compiler but otherwise don’t matter. Some discussion about security. This is about how to satisfy and dissatisfy things, the different ORs and ANDs, the different ways of being satisfied or dissatisfied. This is where the different trade-offs in terms of weights come from. There is a discussion about malleability. Here is an overview of the algorithm that I described. This is more talk about malleability. This is the non-malleable satisfaction algorithm that I mentioned. You can see that it is a little bit more technical than I said but it is actually not a lot. I think Pieter overcomplicated this. He didn’t. Every time I complain about stuff he had some reason that he had written stuff the way that he did.

Q - He is very precise.

A - He is very precise, yes.

Then here are the three properties for non-malleability and here is how they propagate. It is basically the same as the last table. There you go. This is Miniscript. All of Miniscript is on this website, with the entire contents of this talk, the justification for everything and the detailed rules for doing type checking and malleability checking and for producing non-malleable optimal signatures. It is all here on this page. It is a lot of data but you can see that I can scroll through it in a couple of seconds. Compare the reference book for any programming language out there to this web page that is like three pages long. It is actually very small and straightforward. Ordinary people can get through this and convince themselves that all the design decisions make sense. This gets us to the point where we really can laugh at Ethereum for having a broken script system. Until now we couldn’t. Bitcoin script pointlessly had all of the problems of Ethereum and more. But here we are, we’re living the dream.

Q - When will this get a BIP number?

A - That’s a good question. I guess it should have a BIP number. A lot of things that are not really consensus do: BIP 32 has a number, 49 does, PSBT has a number. Our thought was that we would do it when we solve these open problems that we keep thinking are easier than they turn out to be. I don’t have a plan for asking for a number. My feeling is that I want to finish the Rust implementation and I want to have something better to say than “this is open” regarding optimal fee estimation. I would like to have a finalizer that somebody out there is using. I would think maybe Miniscript should have its own number and the finalizer should have its own number, I don’t know. I don’t have specific plans for asking for numbers. I agree that it should have a number. It feels like the kind of thing that should.

Q - It feels that there should at least be some other documentation than Pieter’s website that may or may not randomly disappear.

A - That’s fair. Right now the other documentation is the implementations. You’re right, there should be something in the canonical repo and the BIP repo is a good candidate for that.

Q - Now that you’ve gone through all this pain, are you going to come up with a new Tapscript version? I guess Tapscript is versioned so you could come up with a new one that only has the opcodes that make life good, and maybe completely different opcodes?

A - The question is whether we are going to optimize Tapscript for Miniscript. The short answer is no. When we start to do this we wind up getting ratholed pretty quickly. There are a lot of aspects of Miniscript that are kind of specific to the design of Bitcoin script. You can see how many different fragments we have here. If we were to create a new script system that directly efficiently implemented the Miniscript semantics we would actually want to make a lot of changes and it would result in a simpler and different Miniscript that we haven’t designed. What we landed on was just getting rid of CHECKMULTISIG because that was a particularly egregious thing and we made no other changes because there wasn’t any clear Schelling point or point where we could stop.

Q - Could you get rid of negative zero?

A - We didn’t even get rid of negative zero, no. That’s an interesting one.

Q - And the infinite length zero?

A - We might have made MINIMALIF a consensus rule, I think we did. That covers the issue with giant booleans.

Q - What should be in Core and what shouldn’t be in Core? There are two PRs currently open, Pieter’s and James’ for the TestFramework. I’m assuming there is going to be more functionality in your Rust library than will get into Core. How much?

A - The question is what should be in Core and what shouldn’t be in Core. Core should have the ability to sign for Miniscripts. There is really not a lot to signing. You don’t need to understand the script to sign it. Core should have the ability I think to recognize Miniscripts it has control over.

Q - Core should be able to finalize?

A - You could go either way but I also think Core should be able to finalize as Andrew says. That means that they have a RPC or something where you give it a PSBT with all the relevant data and Core will be able to output a complete transaction. You could argue that that is a separate thing from Core’s primary job but it is a really fundamental thing to create transactions. If we are going to have the createrawtransaction, signrawtransaction API a very natural successor to that would be a general PSBT finalizer.

Q - Part of it is that there is a plan to have the Core wallet store Miniscripts which directly implies Core has to be able to finalize those and sign them.

A - Andrew is saying that there are plans for Core to support Miniscript in the descriptor wallet, to have arbitrary Miniscripts. That would require it to be able to finalize in order to sign for its own outputs. So what should not be in Core? The compiler here, that should not be in Core. That is a separate beast of a thing that doesn’t even necessarily have a canonical design. There is some interesting fee estimation and analysis tooling that I have been hinting at. That probably doesn’t belong in Core. It probably belongs in its own library or its own RPC tool or something like that because it is only relevant…

Q - We should get all of the fee estimation out of Core. Fee estimation should be its own process or its own interface.

A - Core should be able to estimate fees for its own stuff. But in Core should you be able to say “These keys are available, these keys are not. These are trusted and these are not” and so on and so forth. You can get into really complicated hypothetical scenarios that Miniscript can answer for but I don’t think it is Core’s job to answer that. I think Pieter would agree. Pieter continues to disagree that my scenarios are even sensible.

Q - When you were giving up this original plan of making Miniscript a preprocessor of script was there a certain detail that was convincing you or was it the sum of all the complications?

A - The question is when I had to give up my original dream of Miniscript and the Policy language being the same. I think it was Pieter explaining the way that he had separated the Policy language from Miniscript in his head. I didn’t have that separation in my model of things. He explained conceptually that first of all the Policy language had this information about branch probabilities that could not be expressed in script. There was just no way I could preserve that and go into Bitcoin script. I think that was what did it. Then I realized there was no way I was going to be able to unify these things. He had another point which was that once you were in Miniscript everything is deterministic. Miniscript has a deterministic transformation to script and back but the compilation step from the Policy language to Miniscript involves doing all sorts of optimizations. There is a lot of potential there to discover new optimizations or give more useful advice or something. That’s a part that shouldn’t be a component of any protocols. At least if you had a compiler that was part of some protocol it would need to be constrained in some way and we didn’t want to constrain that early on. We had this clear separation both between the Policy language having extra data that we couldn’t preserve and also the Policy language being a target for a lot of design iteration on optimization kind of stuff. Miniscript alone was a pure thing, a re-encoding of Bitcoin script for which all the manipulations we wanted to do clearly had deterministic results. I could have my beautiful little Miniscript world where everything was deterministic and there was only one thing. Basically I could treat the Policy language as not existing, just some ugly thing that I need to use to get Miniscript sometimes. That difference in opinion between Pieter and me persists to this day. When we’re developing stuff he puts all his time into the compiler and I put no time into my compiler. Eventually Sanket rewrote the entire compiler for the Rust implementation because mine didn’t even work by the time we were done changing the language. Once Pieter described this separation I realized that this is actually a sensible separation and that is something we should move forwards with and my dream of a simple Miniscript could still be preserved.
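
To make that separation concrete, here is a minimal sketch in Python (toy node types and a made-up fragment name, nothing from the real rust-miniscript compiler) of why the compilation step is one-way: the policy carries branch-probability weights, the compiler only uses them to pick an ordering, and the resulting Miniscript no longer contains them.

```python
# Hypothetical illustration only: toy "policy" nodes carry relative branch
# weights (the 9@ / 1@ idea), while toy "miniscript" nodes do not. This is
# not the real compiler, just a sketch of why policy -> Miniscript is lossy
# while Miniscript <-> script is a one-to-one re-encoding.

def compile_or(weight_a, sub_a, weight_b, sub_b):
    """Pick which branch gets the cheap 'likely' position based on weights,
    then throw the weights away -- the compiled node only remembers order."""
    if weight_a >= weight_b:
        return ("or_d", sub_a, sub_b)   # likely branch first
    return ("or_d", sub_b, sub_a)

# Two policies that differ only in probability annotations...
policy_1 = ("or", 9, ("pk", "A"), 1, ("pk", "B"))
policy_2 = ("or", 1, ("pk", "A"), 9, ("pk", "B"))

ms_1 = compile_or(policy_1[1], policy_1[2], policy_1[3], policy_1[4])
ms_2 = compile_or(policy_2[1], policy_2[2], policy_2[3], policy_2[4])

print(ms_1)  # ('or_d', ('pk', 'A'), ('pk', 'B'))
print(ms_2)  # ('or_d', ('pk', 'B'), ('pk', 'A'))
# Neither result carries the 9/1 weights, so there is no way to recover the
# original policy from the compiled Miniscript.
```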

Q - Would the Policy language be supported in Core?

A - The question is will the Policy language be supported in Core. Andrew says probably not. I think that is probably true. That would be up to Pieter. If Pieter were here he could give you some answer. I think it comes down to Pieter’s opinion. I guess Pieter needs at least one other opinion to get stuff merged but my guess would be no. The compiler is a pretty complicated piece of software that’s doing this one shot task of figuring out an optimal script, and it doesn’t really integrate with descriptors in an obvious and natural way. It also has some more design iteration that we want to do. A question like: if you have a threshold of 3-of-5 things, maybe you can get a more optimal script by reordering the components of the threshold. That is something that is not implemented and is probably intractable in general, so we would have to devise some heuristics for it. There is design iteration on stuff like that that still needs to be done.

Q - It may also get stuck in review from everyone not understanding.

A - Andrew says it might get stuck in review which I certainly believe. I haven’t looked at Pieter’s compiler but on the Rust side the compiler is literally half the entire library. It is much more complicated. Everything else is really straightforward, all the same algorithm, iterate over everything, recurse in these specific ways and you can review it in a modular way. The compiler is this incredibly complicated thing, propagating information upward and downward, going through different things, caching different values and ordering them in certain weird ways and reasoning about malleability at the same time. It is quite a beast to review. I can’t see it getting through the Core review process just because it is such a complicated thing.

Q - A really high level question. Craig Wright wants to apparently secure about 6000 patents on blockchain technology which will obviously affect Bitcoin and the Ethereum communities and every other blockchain. When do you think he’s going to jail?

A - Craig Wright as I understand it is a UK citizen. UK law is different to US law and different to Canadian law in ways that benefit him. For example I understand in the UK that the truth is not a defense to a defamation suit. Ignoring all the perjury and so forth, he came to some US court and he got in a whole bunch of trouble for contempt of court, for throwing papers around and crying and so forth. But independent of that and independent of the patent trolling which unfortunately I don’t think violates any laws and he can just do, the other legal thing he is in the middle of are these defamation suits and these counter defamation suits. I’m not a lawyer of course but my understanding is that he is doing most of this crap in the UK because UK law is a bit more favorable to this kind of trolling than it is in other jurisdictions. In the US he probably would have had no ability to do a lot of the stuff he is doing.

Q - Some cases have already been dismissed here in the UK. They don’t have jurisdiction over those kind of things.

A - I’m hearing that some cases have been dismissed. I know your question was a joke but it is a good question. I don’t know. I wish he would go to jail.

Q - If we have OP_CAT to force nonce reuse to do this kind of protocol could you catch this with Miniscript or a Miniscript extension?

A - The question is what if we added OP_CAT to script. There are a bunch of benefits like forcing nonce reuse and doing other things. The short answer is yes, we would extend Miniscript to support the new capabilities of OP_CAT but what specifically I don’t know. We haven’t really thought through how we would want to do that. The way it would look is that we would add more fragments to this website basically. It would have to be things that were composable. I don’t have specific plans but definitely I’m sure there would be some things you can do with OP_CAT that we would just add to Miniscript.

Q - …. preimages and keys. These are all things that you can do with scriptless scripts. How long do you see Miniscript being used up until we only use scriptless scripts?

A - With scriptless scripts you cannot do timelocks. Scriptless scripts are interactive and require online keys. I think Miniscript will forever be used for cold storage. My feeling is that we’re going to tend towards a world where cold storage keys use Miniscript as a means of having complicated redemption and recovery policies and the rest of the world doing cool fast paced interactive things would move to scriptless scripts. But I also think there will be a long period of research before people are comfortable deploying scriptless scripts in practice because they are complicated interactive crypto protocols.

Q - We may have hardware wallet support with the same assumptions as a cold wallet, not a deep cold wallet, just a cold wallet, we may have this kind of stuff for Lightning.

A - Support for Lightning?

Q - Support for Lightning and scriptless scripts.

A - Yeah when we get to that point I imagine people will just transition to scriptless scripts for hot stuff. I don’t know how quickly. It seems to me that we’ve got a little while, probably another year or two, maybe five. In all seriousness these are complicated interactive protocols. But in the far future I think Miniscript is for cold storage and scriptless scripts is for hot stuff.

Q - What would Simplicity be for, this new language that has been developed?

A - That’s not a question I can answer in 30 seconds. Miniscript can only do timelocks, hashlocks, signature checks. Simplicity can do anything with the same kind of assurances. Because Simplicity does significantly more stuff, the kind of analysis you can do is much more powerful, because it is defined in the Coq theorem proving system, but also much more difficult. If you have a policy that you can encode in Miniscript you probably want to use Miniscript. In Simplicity you can implement all of the Miniscript combinators so you can just compile Miniscript policies directly to Simplicity which is kind of a nice thing. The short answer is Simplicity lets you do all sorts of stuff that you cannot do with Miniscript. It will let you do covenants, it will let you interact with confidential transactions by opening up Pedersen commitments and stuff like this. It will let you create algorithmic limit orders and stuff and put those on a blockchain and have coins that can only be spent if they are being transferred into a different asset under certain terms. You can do vaults where you have coins that can only be moved to a staging area where they can only be moved back or have to sit for a day. A lot of what I just said Miniscript will never be able to do because those things don’t fit into this model of being a tree of spending conditions. Simplicity is just infinitely powerful. You can verify the execution of any Turing complete program with Simplicity.

Media: https://www.youtube.com/watch?v=_v1lECxNDiM

Bitcoin Script to Miniscript

London Bitcoin Devs

https://twitter.com/kanzure/status/1237071807699718146

Slides: https://www.dropbox.com/s/au6u6rczljdc86y/Andrew%20Poelstra%20-%20Miniscript.pdf?dl=0

Introduction

As Michael mentioned I’ve got like two hours. Hopefully I won’t get close to that but you guys are welcome to interrupt and ask questions. I’ve given a couple of talks about Miniscript usually in like 20 or 30 minute time slots where I try to give a high level overview of what this project is, why should you use it and why it makes Bitcoin script accessible in ways that it hasn’t been. In particular you can do all sorts of cool things that in principle Bitcoin script can do but in practice you cannot through various algorithmic, practical reasons. Bitcoin script is just not easy to reason about, it is not easy to develop with, it is not fragile but it has got a lot of sharp edges. The opposite of fragile, you will feel very fragile trying to use Bitcoin script. Since I’ve got a whole pile of time and a reasonably technical audience I am going to try to do something different here. I’m going to talk in a fair bit of detail about some of the issues with Bitcoin script and some of the motivations for Miniscript. Then I’m going to try to build up some of the detail design of Miniscript which is essentially a reinterpretation of a subset of Bitcoin script that is structured in a way that lets you do all sorts of analysis. We’re going to try to build up the design and I’m going to talk about some of the open problems with Miniscript. There’s maybe an irony here that there are so many difficult things with Miniscript given how much simpler it is than script. It highlights how impossible it is to work with script to begin with. Even after all of this work we’ve put in to structuring things and getting all sorts of nice easy algorithms, there are still a lot of natural things to do that are quite difficult. In particular around dealing with fee estimation and with malleability. So let’s get started.

Script

First I’m going to spend a few slides giving an overview of what Bitcoin script is and how it is structured. As many of you probably know Bitcoin has a scripting system that allows you to express complicated predicates. You can have coins controlled not only by single keys but by multiple keys. You can have multisignatures, threshold signatures, hash preimage type things, you can have timelocks. This is how Lightning HTLCs are created. You can do weird things like have bounties for finding hash collisions. Peter Todd put a few of these up I think in 2013. There are a whole bunch of Bitcoins you can take if you find a hash collision. The one for SHA-1 has been taken but there are ones up for the other Bitcoin hashes, for SHA-2 and the combo hashes that you can still take if you can find a collision. The way you can do this is with a thing called Bitcoin script. Bitcoin script is a stack based assembly language that is very similar to Forth if any of you are 70 years old you might have used Forth before. It is a very primitive, very frustrating language. Bitcoin took a bunch of the specific frustrating parts of it and not the rest. A quick overview. Every Bitcoin script opcode is a single byte. There are 256 of them, they are allocated as follows. You have 78 for pushing data of arbitrary size. All your stack elements are these byte vectors. The first 76 opcodes are just “push this many bytes onto the stack.” There are a couple of more general ones. We also have a bunch of special case opcodes for pushing small numbers. In particular, -1 through 16 have their own opcodes. Then for future extensibility we have a bunch of no-ops. For historical reasons we have 75 opcodes that are synonyms for what we call OP_RETURN, just fail the script immediately. For even sillier historical reasons we have 17 opcodes that will fail your script even if you don’t execute them. You can have IF branches that aren’t accessed but if there are some certain opcodes in there it will fail your script. That’s probably a historical mistake in the Bitcoin Core script interpreter that we can trace back to 2010-2012 which is when most of this came to be. Finally we have 57 “real” opcodes. These are the core of Bitcoin script.

Of these real opcodes we have a lot of things you might expect. We have 4 control-flow ops. Those are IF, ELSE, ENDIF and we also have a NOTIF which will save you a byte in some cases. Basically it is the same as IF except it will execute if you pass in a zero rather than executing if you pass in a non-zero. We have a few oddball opcodes, let me quickly cover those. We have OP_VERIFY which will interpret the data as a boolean. If it is true then it passes, that’s great. If it is false it fails the script. IFDUP, if you pass it a true it will duplicate it, if you pass it a false it will just eat it. DEPTH and SIZE, those are interesting. DEPTH will tell you the current number of elements on the stack, SIZE will tell you the number of bytes in the most recent element. These are interesting, I’ll explain in the next slide, mainly because they make analysis very difficult. They put constraints on aspects of your stack that otherwise would not be constrained. When you are trying to do general purpose script analysis these opcodes will get you in trouble. We have OP_CODESEPARATOR, it is just weird. It was there originally to enable a form of delegation where after the fact you could decide on a different public key that you wanted to allow spending with. That never worked. CODESEPARATOR does a very technical thing of unclear utility. We have 15 stack manipulation opcodes. These basically rearrange things on your stack. You’ve got a bunch of items, there are a whole bunch of things that will reorder, duplicate certain ones or duplicate certain combinations of them or swap them, all sorts of crazy stuff like that. We also have a couple of altstack opcodes. We have one that will move the top stack element to the alternate stack, one that will bring stuff back. Those of you who are familiar with the charlatan Craig Wright may be aware that you can design a Turing machine using a two stack finite state automaton or something like that. It is plausible that the alternative stack in Bitcoin was inspired by this but actually this provides you no explicit power. The only things you can do with the altstack in Bitcoin are move things on to it and move things off of it. Sometimes this lets you manipulate the stack in a slightly more efficient way. But all that is is a staging ground. You can’t do computations there, you can’t manipulate the altstack directly, you can’t do anything fun. We have 6 different opcodes that compare equality. I’ll talk a bit about those in the next slide. The reason that there are 6 of them is to make analysis hard. We have 16 numerical opcodes. These do things like addition, subtraction, boolean comparisons which is kind of weird. There are special purpose opcodes for incrementing and for decrementing. There used to be opcodes for multiplication, division and concatenation and a bunch of other stuff. Those have turned into the fail-even-if-not-executed opcodes. They were disabled many years ago. The way that they are disabled causes them to have this weird failing behavior. There are 5 hashes in Bitcoin which are RIPEMD160, SHA1, SHA2 and there are various combinations of these. There is HASH256 which means do SHA2 twice, there is HASH160 which means you do SHA2 and then RIPEMD160. Finally we have the CHECKSIG opcodes which are probably the most well known and most important ones and also the most complicated ones. We have CHECKSIG which checks a single signature on the transaction, we have CHECKMULTISIG which checks potentially many signatures on the current transaction.

Some miscellaneous information about script, some of which I’ve hinted at. We have a whole pile of limits which I’ll talk about in a second. Stack elements are just raw byte strings, there’s no typing here, there’s nothing interesting. They are just byte strings that you can create in various ways. The maximum size is 520 bytes. This was a limit set by Satoshi we think because you can put a 4000 bit RSA key into 520 bytes. Bitcoin does not and never has supported RSA keys but Satoshi did take a lot of things from the OpenSSL API and we think this might have been one of them. As far as I’m aware there’s no use for stack elements that are larger… I guess DER signatures are 75 bytes but you can go up to 520. There are many interpretations of these raw byte strings though. I said we have no types but actually every opcode has a notion of type-ness in it. A lot of the hash ops and the stack manipulation things just treat them as raw byte strings, that’s great. But all the numeric opcodes treat their things as numbers. In Bitcoin this means specifically up to 32 bit signed magnitude numbers. Signed magnitude means the top bit is the sign: if it is 1 you have a negative number, if it is 0 you have a positive number. This means you have zero and negative zero for example, two distinct things. You also have a whole bunch of non-canonical encodings. You can put a bunch of zero padding in to all your numbers and those will continue to be valid. But if you exceed 4 bytes that is not a number. For example you can add sufficiently large numbers and you will get something that is too big and so it is no longer a number. If you keep using OP_ADD and you’re not careful about it then you might overflow and then the next OP_ADD is going to fail your script. If you’re trying to reason about script you need to know these exact rules. Some things, the IF statements, OP_VERIFY, a few others but oddly not the boolean AND and OR, will interpret these things as booleans. Booleans are similar to numbers. If you encode zero or negative zero that counts as false. If you encode anything else that is considered true. The difference between booleans and numbers is that booleans can be arbitrarily sized. You can make a negative zero that is like 1000 bytes long. This will not be interpreted as zero by any of the numeric opcodes, it will fail your script, but it will be interpreted as false by the boolean checks. Something to be aware of. If you are trying to reason about script you need to know these rules. Then finally we have the CHECKSIG and CHECKMULTISIG and those are the closest to doing something sane. They interpret their arguments as public keys or signatures. The public keys are supposed to be well formed, the signatures are supposed to be either well formed or the empty string, which is how you indicate no signature. These are not consensus rules, you can actually put whatever garbage you want in there and the opcode will simply fail. There is a comparison to C if any of you guys have done a lot of C programming and tried to reason about it or tried to convince the C compiler to reason about your code instead of doing insane and wrong things. You may notice that the literal zero, if you type this into C source code, is a valid value for literally every single built in type. Zero is an integer of all sizes, zero is a floating point, zero is a pointer to any arbitrary object, zero is an enum, zero is everything. There is no way to tell the compiler not to interpret zero that way.
Of course the C standard library and POSIX and everything else uses a constant zero as an error return code about 40% of the time so you need to deal with this fact. This infuriating aspect of C was carried over to Bitcoin script in the form of the empty string being a valid instance of every single different way of interpreting script opcodes. That’s just the script semantics. Those are a bunch of weird edge cases and difficulties in reasoning about the script semantics and trying to convince yourself that a given arbitrary script is doing what you expect.
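
As a rough illustration of the rules just described, here is a sketch in Python of how a stack element might be read as a number versus as a boolean. It is an approximation for exposition only, not Bitcoin Core’s actual CScriptNum or boolean-cast code.

```python
# Sketch of the interpretation rules described above: stack elements are raw
# byte strings, numbers are little-endian signed-magnitude and capped at 4
# bytes, booleans accept any length and treat zero or negative zero of any
# size as false.

def as_number(element: bytes) -> int:
    if len(element) > 4:
        raise ValueError("not a number: more than 4 bytes")
    if not element:
        return 0
    negative = bool(element[-1] & 0x80)
    magnitude = element[:-1] + bytes([element[-1] & 0x7F])
    value = int.from_bytes(magnitude, "little")
    return -value if negative else value

def as_boolean(element: bytes) -> bool:
    # Any-length zero or negative zero counts as false, everything else true.
    for i, byte in enumerate(element):
        if byte != 0:
            if i == len(element) - 1 and byte == 0x80:
                return False          # sign bit alone: negative zero
            return True
    return False

print(as_number(b""))                        # 0
print(as_number(b"\x81"))                    # -1
print(as_boolean(b"\x00" * 1000 + b"\x80"))  # False: a giant negative zero
# as_number(b"\x00" * 5) raises: too big to be a number, fine as a boolean.
```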

Additionally there are a whole pile of independent and arbitrary limits in Bitcoin script. As I mentioned stack elements can only be 520 bytes. Your total script size can only be 10,000 bytes. You can only have 201 non-push opcodes. A non-push opcode is something a little bit weird. A push opcode includes all the things you expect to be pushes like pushing arbitrary data onto the stack, pushing a number onto the stack. It also includes a couple of opcodes that are not pushes but they sort of got caught in the range check, just comparing the byte values. You need to know which ones those are. Also in CHECKMULTISIG… This 201 opcode limit actually only applies to the whole script regardless of what gets executed. You can have all sorts of IF branches and you count all the opcodes. Except if you execute a CHECKMULTISIG then all of the pubkeys in that CHECKMULTISIG count as non-push opcodes even though they are pushes, but only if they are executed. There’s a very technical rule that is maximally difficult to verify that as far as I know nobody was aware of until we did this Miniscript development and we found this lurking in the Satoshi script interpreter. In our Miniscript compiler we have to consider that; that is a resource limit we hit sometimes in trying to work with Miniscript. There is a sigops limit. This is something a lot of you are familiar with. This is what restricts your ability to do enormous CHECKMULTISIGs and other interesting things. There are a couple of different sigops limits. I think the only consensus one is the one that limits a given block to have 80,000 sigops in it. But there are policy limits for how many sigops you can have in a given transaction. These are different for raw script and for P2SH and for SegWit. Also there is a thousand element limit: you are only allowed to have a thousand elements on the stack. It is basically impossible to get a thousand elements on the stack so it doesn’t really matter. But just to make sure your optimization is constrained in as many dimensions as possible. That’s an additional thing. For what it is worth, in BIP 342 which is Tapscript we fix a whole pile of these. In particular because having so many independent limits is making a lot of our Miniscript work more difficult. We clean this whole mess up. We kept the 1000 stack element limit because it provides some useful denial of service protection and it is so far away from anything we know how to hit without doing really contrived stuff. I think we got rid of the script size limit because there are already limits on transactions and blocks which is great. We kept the stack element size limit just because that’s not doing anything, all of our stack elements are limited in practice anyway. We combined the opcode limit and the sigop limit. We got rid of CHECKMULTISIG. The most difficult thing to keep in mind when you’re constraining your scripts is now a single one dimensional thing. Now absent extreme things that you might be doing that are all outside the scope of Miniscript, you actually just have a one dimensional limit. If you are trying to work within these limits it is much easier in Tapscript which will be SegWit v1 I hope, knock on wood.

A couple of particular opcodes that I hate. Everyone else that I work with has a different set of opcodes. Everyone who works with script has their own set of opcodes, these are the ones that I hate. PICK and ROLL, first of all. What these opcodes do is they take a number from the top of the stack, one of them will copy that many elements back to the top, the other one will move that many elements back to the top. Now you are taking one of these arbitrarily typed objects, who knows where you got it. Now you’re interpreting it as an index into your stack. If you are trying to reason about things now you have stack indices as a new kind of thing that is happening in your script. So when you’re trying to constrain what possible things might happen that is a whole other dimension rolled up in there. Numerics overflowing I mentioned. The CHECKMULTISIG business I mentioned. DEPTH and SIZE I mentioned. These are simple opcodes but they are just extra things to think about. If you are trying to reason about what the script interpreter is doing it would be nice if as many of the internals of that interpreter you could just take for granted and weren’t available for introspection to your users. But they are. These opcodes make those available for introspection so now they are mixed in with your reasoning about what values might be possible. IFDUP does a similar annoying thing. You can use OP_DUP and OP_IF together to get an IFDUP but this opcode just happens to be the only one that will conditionally change the depth of your stack depending on whether what you give it is considered true or false. When you’re trying to keep track of your stack depth when you’re reasoning about scripts it gets in your way.
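
To see why these opcodes complicate analysis, here is a toy model in Python of their behavior on a plain list used as the stack (top of stack at the end). This is only a sketch of what was just described, not the real interpreter, and the truthiness check is deliberately rough.

```python
# Toy model of the opcodes discussed above. The point is that PICK and ROLL
# reinterpret data as stack indices, DEPTH and SIZE expose interpreter
# internals as data, and IFDUP changes stack depth depending on its input.

def op_pick(stack):
    n = int.from_bytes(stack.pop(), "little")   # data becomes a stack index
    stack.append(stack[-1 - n])                  # copy the n-th-from-top item

def op_roll(stack):
    n = int.from_bytes(stack.pop(), "little")
    stack.append(stack.pop(-1 - n))              # move it instead of copying

def op_depth(stack):
    stack.append(len(stack).to_bytes(1, "little"))      # depth becomes data

def op_size(stack):
    stack.append(len(stack[-1]).to_bytes(1, "little"))  # element size becomes data

def op_ifdup(stack):
    if any(stack[-1]):             # rough "truthy" check, good enough here
        stack.append(stack[-1])    # depth grows only if the top is true

stack = [b"sig", b"key", b"\x01"]
op_ifdup(stack)
print(stack)    # [b'sig', b'key', b'\x01', b'\x01'] -- depth depends on data

stack = [b"a", b"b", b"c", b"\x02"]
op_pick(stack)
print(stack)    # [b'a', b'b', b'c', b'a'] -- the index 2 came off the stack
```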

Q - What are these used for?

A - We use IFDUP in Miniscript quite a bit. Do I have an internet connection on this computer? Later in the talk I might go to the Miniscript website which has some detailed justification for some of these things. IFDUP gets used in Miniscript. None of these other things are useful for Miniscript. Let me give a couple of justifications. PICK and ROLL I have used before, only in demonstration of weird script things I guess. I can imagine cases where you want to use PICK and ROLL to manipulate your stack. If you have a whole bunch of stack elements in a row and say you are trying to verify a signature that’s really deep in your stack. You can use OP_PICK to bring that public key and signature to the top of your stack, verify them and then move on with your life without trying to swap them out a whole bunch of times. OP_DEPTH and OP_SIZE do actually have a bunch of uses. OP_DEPTH is kind of cool. We use this in Liquid as a way of distinguishing between two multisig branches. We have the standard 11-of-15 functionary quorum check and we also have a 2-of-3 emergency withdrawal check. You’re allowed to use either one of these. The emergency withdrawal has a timelock on it. This is how Liquid coins are protected. We’re using OP_DEPTH to count the number of signatures provided. If you give the script 11 signatures it knows this is a functionary signature, let’s use that branch. If you only give it 2 signatures it knows to use the other branch. That is an example of what OP_DEPTH is for. OP_SIZE has a bunch of cool applications. One is if you use OP_SIZE on a ECDSA signature to constrain the size to be very small, this forces the user to use a specific nonce that is known to have a bunch of zero bytes in it that has a known discrete log, which is a multiplicative inverse of 2 in your secret key field. That means that by signing in such a way, by producing a small enough signature you are revealing your secret key. So you can use OP_SIZE to force revelation of a secret key. You can use OP_SIZE to very efficiently force a value to be either zero or one through a cool trick we discovered in developing Miniscript. If you use OP_SIZE and then OP_EQUALVERIFY in a row that will pass the zero or one but it will fail your script otherwise. Because the size of zero is zero, that’s the empty string, the size of one is one, but no other value is equal to its own size and so the EQUALVERIFY will abort. In an early iteration of Miniscript before we realized that we had to depend on a lot of policy rules for non-malleability, we were using SIZE, EQUALVERIFY before every one of our IF statements. Because otherwise a third party could change one of our TRUEs from one to some arbitrary large TRUE value, change the size of our witnesses and wreck our fee rate. We don’t do that now because we’ve had to change strategies because there are other places where we had similar malleability. Ultimately we weren’t able to efficiently deal with it in script. But if you really want consensus guaranteed non-malleability, not policy minimalist guarantees on malleability OP_SIZE is your friend. You are going to use that a lot. One more thing OP_SIZE is for.

Q - They are used in Lightning scripts.

A - Yes exactly. In Lightning and in other atomic swap protocols you want your hash preimages to be constrained to 32 bytes. The reason being, this is kind of a footgun reason, basically if you are interoperating with other blockchains, you want to interoperate with future versions of Bitcoin or whatever, a concern is that Bitcoin allows hash preimages or anything up to 520 bytes on your stack. But if you are trying to interoperate with another system that doesn’t have such a limit, say whatever your offline Lightning implementation protocol is, maybe some bad guy creates a hash preimage that is larger than 520 bytes, it passes all the offline checks but then you try to use it on the blockchain and you get rejected because it is too large. Similarly if you are trying to do a cross-chain atomic swap and one of the chains has a different limit than the other. Then you can do the same thing. You can create a hash preimage that works on one chain but doesn’t work on the other. Now you can break the atomic swap protocol. You want to constrain your hash preimages to being 32 bytes. Everybody supports 32 byte preimages, that’s enough entropy that you’re safe for whatever entropy requirements you have. And OP_SIZE is a way to enforce this. We’ll see this later if I can get to the Miniscript site and look at some specific fragments. We have a 32 OP_SIZE, 32 EQUALVERIFY confirming that we have 32 byte things.
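
Here is a small Python sketch of the two OP_SIZE tricks just described, checking the conditions directly rather than running script. The functions only approximate what the fragments SIZE EQUALVERIFY and SIZE <32> EQUALVERIFY enforce.

```python
# Sketch of the two OP_SIZE tricks described above, evaluated in Python.

def passes_size_equalverify(element: bytes) -> bool:
    """SIZE EQUALVERIFY: only the empty string (0) and 0x01 (1) byte-equal
    the encoding of their own size, so this pins a 'boolean' to exactly
    0 or 1 by consensus rather than by policy."""
    value = int.from_bytes(element, "little")
    return len(element) <= 1 and len(element) == value

def passes_size_32_equalverify(preimage: bytes) -> bool:
    """SIZE <32> EQUALVERIFY: forces hash preimages to be exactly 32 bytes,
    so they cannot pass off-chain checks and then fail on-chain limits."""
    return len(preimage) == 32

print(passes_size_equalverify(b""))               # True  (canonical 0)
print(passes_size_equalverify(b"\x01"))           # True  (canonical 1)
print(passes_size_equalverify(b"\x05"))           # False (value 5, size 1)
print(passes_size_32_equalverify(b"\x00" * 520))  # False: oversized preimage
```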

Q - I’m assuming if it had no use you could disable it?

A - Right. The only thing here which we disabled in Tapscript is CHECKMULTISIG ironically and that sounds like the most useful but in some sense has the worst complexity to value ratio. There are other ways to get what CHECKMULTISIG gives you that doesn’t have the limitations and doesn’t have the inefficiency of validation that CHECKMULTISIG does. I will get into that a bit later on.

Any more questions about this slide? I have one more slide whining about script and then I’ll get into the real talk. I’ve already spent 20 minutes complaining about script. I’m sorry for boring you guys but this helps me.

Another random list of complaints about script and surprising things. As I mentioned BOOLAND and BOOLOR despite having bool in the name do not take booleans, they take numbers. If you put something larger than 4 bytes into these opcodes they will fail your script. Pieter Wuille claimed he didn’t know this until I pointed it out to him. No matter how long you’ve been doing this you get burned by weird, surprising stuff like this. I’ve been saying numbers are maximum 4 bytes. Actually CHECKSEQUENCEVERIFY and CHECKLOCKTIMEVERIFY have their own distinct numeric type that can be up to 5 bytes because 32 bit signed magnitude numbers can only go up to 2^31 which doesn’t cover very many dates. I think they burn out in 2038 as many of us know for other reasons. In order to get some more range we had to allow an extra bit which meant allowing an extra byte. These values can be larger than any other number. So there are actually two distinct numeric types as well as the boolean. I mentioned the CHECKMULTISIG opcode counting weirdness. One thing that burns you if you don’t unit test your stuff is the numeric values of the push number opcodes are really weird. It is hex 50 plus whatever value you’re doing, negative 1 through 16. Except zero, zero is not 0x50. If you use 0x50 that will fail your script. Zero is 0x00. You need to special case zero. You might naively think you can just take your number and add decimal 80, 0x50, to it. Say you were trying to support SegWit, where the only version number being used in the wild is zero but every other version number has this 0x50 added to it. This is a very easy mistake to make that you wouldn’t notice with real world data. But there we go, that’s life. Zero is zero, everything else is what you expect plus 0x50.

Q - What opcode is 50?

A - 50 is OP_RESERVED. I think it was always called OP_RESERVED. Unlike a lot of the other reserved opcodes that didn’t used to have a name and then we had to disable it because it triggered undefined behavior. I think OP_RESERVED has always been reserved. It has always been because you would expect a zero to go there. If you weren’t going to put zero where it should be then the safest thing to put there would be fail the script.
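
A tiny Python sketch of the encoding rule just described, using the standard opcode values (OP_0 = 0x00, OP_1NEGATE = 0x4f, OP_1 through OP_16 = 0x51 through 0x60, OP_RESERVED = 0x50):

```python
# Small-number push encoding: 0x50 + n, except that zero is NOT 0x50
# (that byte is OP_RESERVED and fails the script) -- zero is the empty
# push, byte 0x00.

def push_num_opcode(n: int) -> int:
    if n == 0:
        return 0x00          # OP_0: push an empty byte string
    if n == -1:
        return 0x4F          # OP_1NEGATE
    if 1 <= n <= 16:
        return 0x50 + n      # OP_1 .. OP_16
    raise ValueError("use a data push for anything outside -1..16")

assert push_num_opcode(0) == 0x00     # the special case that bites people
assert push_num_opcode(1) == 0x51
assert push_num_opcode(16) == 0x60
# Naively computing 0x50 + 0 == 0x50 would give OP_RESERVED instead, which
# is exactly the SegWit-version-number mistake mentioned above.
```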

Q - What is the purpose of 0x50 element in SegWit version 1 if you are putting into the witness?

A - In SegWit your witness programs have a numeric opcode, a numeric push indicating your version number followed by your 32 byte program.

Q - In Taproot you can have in the witness the byte 50 that is some annex or something? There is a description, you shouldn’t use it, it is for future use. What future use are you planning for that?

A - The question is about the annex field in SegWit v1, Tapscript signatures. Let me see if I can remember the justification… Let me just give a general answer because I’m forgetting exactly how it works. The idea is that the annex is covered by the signature. The idea is that you can add arbitrary extra data that will be covered by the signature. In some future version, in something locktime related, this could be useful for adding new opcodes that would conditionally constrain your signatures in some way. As of now there is no use for it. It is basically this extra field that gets covered by the signature and that may in future do something interesting for other future opcodes that don’t exist yet. We are putting it in there now so that we won’t require a hard fork to do it later. Right now it is constrained by policy to always be empty because there is no value. I wonder if the number 0x50 is used there. I think that might just be a coincidence, maybe not. It doesn’t sound like a coincidence.

Q - It is to avoid some collision with a SegWit version number or something like this?

A - Yeah it is not a coincidence but the exact means of where the collision was imagined to be I don’t remember. I’d have to read the BIP to remember. As far as the general complaint about complexity it is getting worse but very slowly. Overall it is getting better I think. There is a lot of crap that we cleared out with Tapscript and Taproot but the annex is one place where you can still see some historic baggage carried forward.

Script - High level problems

So on a high level, let me just summarize my complaints about script and then we’re going to go into what Miniscript is and how Miniscript tries to solve these things. It is difficult to argue correctness because every single opcode has insane semantics. I talked about some of them, I didn’t even talk about the bad ones. I just talked about the simple ones that are obviously crazy. It is difficult to argue security, meaning: is there some weird secret way you can satisfy a script? It is hard to reason about all the different script paths that might be possible. It is hard to argue malleability freeness. There are weird surprising things like you can often replace booleans with larger booleans or you can replace numbers with non canonical numbers, stuff like this which is forbidden by policy. Most of this class of attacks will be prevented on the network but in theory miners could try to execute something like this. There are so many surprising corners, so many surprising ways in which things can be malleated, that this is something difficult to reason about. Assuming you have a script that you are comfortable with from a correctness and security perspective it is difficult to estimate your satisfaction cost. If you’ve got a script that can only be satisfied one way or a couple ways you can say it is this many signatures and this many bytes. You manually count them up and you put that in a vault somewhere. In general if you are given an arbitrary script and you are trying to figure out what’s the worst case satisfaction cost for this, this is very difficult to do. The irony of this is that over in Ethereum land they have the same problem and this results in coins getting stuck all the time. As Bitcoin developers this should be something that we can laugh at them for because Bitcoin script is designed so that you can reason about it and you won’t have this problem. But in fact you do. We don’t have Turing completeness, we don’t have arbitrary loops, we don’t have all these complicated things that make it intractable on Ethereum but it is intractable anyway for no good reason. So Miniscript as we’ll see solves this. It is difficult to determine how to work with an arbitrary script. You don’t know which signatures, hashes and timelocks or whatever you might need to satisfy an arbitrary script. You basically can only support scripts that your software explicitly knows how to recognize by template matching or whatever. Then even if you do know what you need to satisfy the script, actually assembling it in the right order, putting all the signatures in the right place and putting the 1s and 0s for booleans and all that kind of stuff also requires a bunch of adhoc code.

Fundamentally the issue here is that there is a disconnect between the way that script works and the way we think about it. You’ve got this stack machine with a bunch of weird opcodes that are mostly copied from Forth, some of them mangled in various ways and some of them mangled in other ways. But the way that we reason about what script does is that you are putting spending conditions on coins. You’ve got a coin, you want to say whoever has this key can spend it or maybe two of these three people can spend it or maybe it can be spent by revealing a hash preimage or after a certain amount of time has gone by there is an emergency key or something. Something like this. Our job as developers for Bitcoin wallet software or Lightning software or whatever is to somehow translate this high level user model into the low level stack machine model. Along the way as soon as you stick your hand into the stack machine it is going to eat your hand. Hopefully you get more than half your work done because you’ve only got one more hand. Then you’ve got to convince the rest of the world that what you did is actually correct. It is just very difficult and as a result we have very few instances of creative script usage. The biggest one is Lightning that has a tremendous amount of developer power pointed at it. Much more than should be necessary just for the purpose of verifying that the scripts do what you expect. Nonetheless what Lightning needs fortunately is what it has. But if you are some loner trying to create your own system of similar script complexity to Lightning you’re going to be in a lot of trouble. Actually one thing I did while working with Miniscript was looking for instances of people doing creative script things and I found security issues in real life uses of script out there. I don’t even know how to track down the original people, I’m not going to say any more detail about that. The fact is this is difficult, there’s not a lot of review and the review is very hard for all of these uses.

Script: Brainteasers

I have three more slides of script whining. These are just a couple of fun questions, I’ll be quick. Here’s a fun one. We know that zero and negative zero can both be interpreted as false as booleans. Is it possible to create a signature that will pass that is zero or negative zero. This might be surprising because we use the canonical value zero to signal no signature. In the case of CHECKMULTISIG where you’ve got multiple signature and you want to say this key doesn’t have a signature. Is it possible that you could check the signature, get false but secretly it is a valid signature. The answer is no for ECDSA because the DER encoding requires that it starts with the digit 2 which is not zero or negative zero so we’re safe by accident. With Schnorr signatures you also can’t do zero or negative zero. The reason being that in order to do so you would have to set your nonce to be some R coordinate that is basically nothing but zeros and maybe a 1 bit. Mathematically there is no way to do this, the Schnorr signature algorithm that we chose doesn’t allow you to do this without breaking your hash functions in ways we assume are intractable. Another brainteaser. Can you get a transaction hash onto the stack the way you might want to for covenants or the way you might want to with something like OP_CTV behavior, CHECKTEMPLATEVERIFY. That’s Jeremy Rubin’s special purpose covenant thing. So you can actually do this with ECDSA. It turns out not in a useful way. You can write a script that requires a signature of a specific form, so maybe one where your s value is all zeros and your R value is some specific point and the public key is free. The person satisfying the script is actually able to produce a public key that will validate given a specific transaction and that specific signature that is fixed in the script. That public key you compute by running the ECDSA verification equation in reverse is actually a hash run through a bunch of extra EC ops. It is a hash of your transaction, it is a hash of what gets signed. In theory you could do a really limited form of OP_CTV just with this mechanism. It turns out you can’t by accident because we don’t have SIGHASH_NOINPUT so every single sighash mode covers your previous outpoint which in turn is a hash of the signature you’re trying to match which means you cannot precompute this. I apologize, I would need a white board to really go through this. Basically there is an accidental design feature of Bitcoin that is several layers deep that prevents you from using this to get covenants in Bitcoin. That’s the story of Bitcoin script. There’s always really far reaching remote interactions. If you are trying to reason about Bitcoin script you have to know this.

Here’s another fun one which I hinted at earlier. What is the largest CHECKMULTISIG, what’s the largest multisignature you can do? I think CHECKMULTISIG with P2SH limits you to only putting 15 keys on because there is a sigops limit there. CHECKMULTISIG with SegWit I think limits you to 20 for other reasons that I don’t quite remember. It turns out that you don’t need to use CHECKMULTISIG at all to do threshold signatures. You can just use CHECKSIG, you do CHECKSIG on a specific signature, if it is a valid signature it will output one, if it is not it will output zero. You can take that zero and you can move it out of the way onto the altstack or something, do another signature check and then bring your zero or one back and you keep adding all these together. You can get a sum of all your individual signature checks and if that sum is greater than your threshold, which is easy to check with script, then that’s the exact equivalent semantics of CHECKMULTISIG except it is a little bit more efficient for the verifier. If you don’t use that then you’re only limited by the amount of crap you can put on the stack based on the other limits. You can wind up having multisignatures that are 67? I think 67 keys is right. You can get 67 keys and 67 signatures onto your stack using this which is a cool way to bypass all these extra limits. This is also the justification for removing CHECKMULTISIG in Tapscript by the way. We have a new opcode whose current name I forget that does this all in one. CHECKSIGADD? Good, we had a bunch of weird extra letters there before. It does a signature check. Rather than outputting a zero or one it takes a zero or one and adds it to an accumulator that’s already in the right place. So you can do this directly rather than needing 3 or 4 extra opcodes, I think you need two opcodes per signature check. Now you can eliminate all the weird complexity of reasoning about CHECKMULTISIG and using it, it is now much more straightforward.
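
Here is a sketch, as plain mnemonic strings rather than a real script builder, of the two threshold patterns just described: the pre-Tapscript one built out of CHECKSIG, SWAP and ADD, and the Tapscript one built out of CHECKSIGADD. The key names and the final comparison are illustrative; the talk describes a greater-or-equal check, while Miniscript’s own thresh fragment ends in an exact EQUAL.

```python
# Sketch of the two threshold patterns described above (key names A, B, C
# are placeholders, output is just a list of opcode mnemonics).

def legacy_threshold(keys, k):
    """Pre-Tapscript: CHECKSIG each key, SWAP the running sum out of the
    way, ADD up the 0/1 results, then compare against the threshold."""
    script = [f"<{keys[0]}>", "CHECKSIG"]
    for key in keys[1:]:
        script += ["SWAP", f"<{key}>", "CHECKSIG", "ADD"]
    return script + [f"<{k}>", "GREATERTHANOREQUAL"]

def tapscript_threshold(keys, k):
    """Tapscript: CHECKSIGADD folds the signature check and the addition
    into one opcode, ending with a NUMEQUAL against the threshold."""
    script = [f"<{keys[0]}>", "CHECKSIG"]
    for key in keys[1:]:
        script += [f"<{key}>", "CHECKSIGADD"]
    return script + [f"<{k}>", "NUMEQUAL"]

print(" ".join(legacy_threshold(["A", "B", "C"], 2)))
print(" ".join(tapscript_threshold(["A", "B", "C"], 2)))
```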

Miniscript

Onto Miniscript. There we go. Forty minutes on why I hate script. I will actually try not to use the full two hours even though I have two hours of slides.

Q - Last question on script. It is very hard to judge Satoshi ten years on. There’s a lot of subtle complexity here. Do you honestly think back in Satoshi’s time, 2009 that you’d have had the perspective to design a better language?

A - The question is in 2009 could I have done better? Or could we have expected Satoshi to have done better? Essentially no. There are a couple of minor things that I think could’ve have been done better with a bit of forethought. For some historical context the original Bitcoin script was never used. It was this idea for this smart contracting system and smart contracting was something in blog posts by Wei Dai, Nick Szabo and Hal and a few other people. It was this vague idea that you could have programmable money. Script was created with the intention that you be able to do that. Basically it was a copy of Forth, it actually had way more of the Forth language than we have today. We had all the mathematical opcodes and we had a bunch of other weird stuff. There was also this half baked idea for delegation where the idea was that your script is a script you execute but also the witness is a script that you execute. The way that you verify is you run both scripts in a row and if the final result is true you’re good, if the final result is false…. The idea is that your scriptPubKey which is committed to the coins should check that the earlier script didn’t do anything bad. There were a couple of serious issues with these, two that I will highlight. One was there was an opcode called OP_VER, OP version. I can see some grimaces. It would push the client version onto the stack. This meant that when you upgraded Bitcoin say from 0.1 to 0.2, that’s a hard fork. Now script will execute OP_VER and push 0.1 onto the stack for some people and 0.2 onto the stack for other people. You’ve forked your chain. Fortunately nobody ever used this opcode which is good. Another issue was the original OP_RETURN. Rather than failing the script like it does now it would pass the script. Because we had this two script execution model you could stick an OP_RETURN in what’s called your script signature. It would run and you wouldn’t even get to the scriptPubKey. It would just immediately pass. You could take any coins whatsoever just by sticking an OP_RETURN into your scriptSig. Bitcoin was launched in the beginning of 2009. This was reported privately to Satoshi and Gavin in the summer of 2010. This was 18 months of you being able to take every single coin from Bitcoin. In a sketchy commit that was labelled as being like a Makefile cleanup or something, Satoshi completely overhauled the script system. He did a couple of interesting things here. One was he fixed the OP_RETURN bug. I think he got rid of OP_VER a little bit earlier than that. Another was he added all the NO_OP opcodes. Around this time if you guys are BitcoinTalk archivists you’ll notice that talk of soft forks and hard forks first appeared around this time. It would appear forensically that around this script update, the one that fixed OP_RETURN was the first time that people really thought in terms of what changes would cause the network to fork. Before that nobody was really thinking about this as an attack vector, the idea that different client versions might fork off each other. Either explicitly or because there is different script behavior. And so the NOP opcodes or the NO_OPs were added as a way to add updates later in the form of a soft fork. The fact that this happened at the same time as the OP_RETURN fix is I think historically very interesting. I think it reflects a big leap forward in our understanding of how to develop consensus systems. Knowing that historic context the original question was in 2009 could we have done better? 
The answer is no basically. Our ideas about what script systems should look like and what blockchains should look like and the difficulty of consensus systems, nobody had any comprehension of this whatsoever. The fact that there were bugs that let you steal all the coins for over a year tells you that no one was even using this, no one was even experimenting. It was just weird, obscure thing that Satoshi put in there based on some Nick Szabo blog posts and nobody had really tried to see if it would fulfill that vision or not.

Q - Did he literally copy something from Forth or did he reproduce it selectively or precisely from a manual?

A - The question is did he literally copy from Forth. Or did he use a manual? I don’t believe there is any actual code copying from any Forth compiler or something like that. The reason I say that everything is copied from Forth is a couple of specific things. The specific things are the stack manipulation opcodes like the SWAP and OP_ROTATE which rotates in a specific direction that isn’t useful for Bitcoin but is probably useful in Forth. All of the stack manipulation semantics seem to have come from Forth. These are just Forth opcodes just reinterpreted in a Bitcoin context. Beyond that I don’t know Forth well enough to say. There are a couple of specific things that are really coincidental that suggest that he was using the Forth language as a blueprint.

Q - He either copied something or he made something up. In the latter case he must have thought about it?

A - The statement is either he copied something or made something up. If he made something up he must have thought about it. I don’t think that’s true. I think he made up a lot of stuff without thinking about it.

Q - It accidentally worked?

A - That’s a very good point. Someone said it accidentally worked. There are a lot of things in Bitcoin that accidentally work. There’s pretty strong evidence that some amount of thought went into all of this weird stuff. There are a lot of accidentally works here. There are a lot of subtle bugs that turned out not to be serious but by all rights they should’ve been. I don’t know what evidence to draw from that. One idea is that Bitcoin was designed by a number of people bouncing ideas off each other but the code has a pretty consistent style or lack of style. It is all lost to time now.

Let me move onto Miniscript. Are there any more questions about script or its historical oddities? Cool. In practice what script is actually used for are signature checks, hashlock checks and timelocks. This is what Lightning uses, this is what atomic swaps use, this is what escrow things use like Bitrated. This is what split custody wallets use, this is what split custody exchanges like Arwen use in Boston. This is what Liquid uses. Anybody doing anything interesting with Bitcoin script, the chances are you’ve got something interesting else. Some timelock which is just a couple of lawyers with Yubikeys. All of these things fit into this paradigm of signature checks, hashlocks, timelocks and then arbitrary monotone functions of these. A monotone function just means that you’ve got ANDs and ORs and thresholds. You’ve got this and this or that and 3 of 5 of these different things. That’s all you have. An idea for a version of script that didn’t have all of these sharp edges and that allowed analysis is what if we just created templates of all these things? What if you as a user would say “I want to check signatures with these keys and I want these hashlocks and I want these timelocks and I want them in this particular order. I want these keys or this key and a timelock or this key and a hash preimage.” That’s what I want. That would be my script and I will just literally have a program that is a DAG or a tree of all these checks and various ways of combining them. Now you can reason very easily about the semantics of that. You just go through the tree, you traverse through the tree and check that every branch has the properties that you expect. If there was a way that we could create script fragments or script templates for these three primitives, these particular checks, and also create script fragments that represents ANDs and ORs and thresholds. If we could do this in a way that you could just plug them into each other then we would have a composable script subset that was actually usable on the blockchain and we could reason about in a very generic way. That’s the idea behind Miniscript. As we’ll see there are a number of design constraints. The biggest one though is that we wanted something that was reasonably efficient. There is a naive way to do this where you take a bunch of CHECKSIGs and hash preimage checks and stuff. Then you write different ways of composing these and the result is that for every single combination you’re wasting 5 or 6 bytes. Just doing a bunch of opcodes to really make this composable to make sure no matter how your different subsets are shaped you can still do this reasonably. We didn’t want that. If you are wasting 5 or 6 bytes in every single one of your UTXOs that is going to add up in terms of fees. Even if it doesn’t add up someone is going to say that it adds up. You couldn’t do something in Lightning and gratuitously waste several bytes on every single UTXO just because it gives us this general tooling. Similarly in any other specific application you are not going to gratuitously waste these extra bytes because what do you get for this? If anyone was using this you’d get interoperability and you’d get standard tooling and all that good stuff. But no one is going to go first because it is less efficient than what they have. We really have to match or beat the efficiency of stuff that is deployed. As I said in another of my more public facing talks we did actually accomplish that really well.
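
As a toy illustration of this tree-of-spending-conditions idea (nothing to do with the real Miniscript type system; the node names here are made up), here is a Python sketch where key checks, hashlocks and timelocks are combined with AND, OR and threshold nodes, and two of the analyses mentioned earlier, which keys are involved and how many signatures the worst case needs, fall out as simple recursions.

```python
# Toy spending-condition tree: leaves are key checks, hashlocks or timelocks,
# interior nodes are and / or / thresh. Because the structure is explicit,
# questions like "which keys might I need?" or "how many signatures in the
# worst case?" are straightforward tree traversals.

def keys_involved(node):
    kind = node[0]
    if kind == "pk":
        return {node[1]}
    if kind in ("sha256", "older", "after"):
        return set()
    if kind in ("and", "or"):
        return keys_involved(node[1]) | keys_involved(node[2])
    if kind == "thresh":                        # ("thresh", k, [subs...])
        return set().union(*(keys_involved(s) for s in node[2]))
    raise ValueError(kind)

def worst_case_sigs(node):
    kind = node[0]
    if kind == "pk":
        return 1
    if kind in ("sha256", "older", "after"):
        return 0
    if kind == "and":
        return worst_case_sigs(node[1]) + worst_case_sigs(node[2])
    if kind == "or":
        return max(worst_case_sigs(node[1]), worst_case_sigs(node[2]))
    if kind == "thresh":
        costs = sorted((worst_case_sigs(s) for s in node[2]), reverse=True)
        return sum(costs[: node[1]])            # k most expensive branches
    raise ValueError(kind)

# "A and (B or (C after a delay))" as a tree:
tree = ("and", ("pk", "A"),
               ("or", ("pk", "B"), ("and", ("pk", "C"), ("older", 144))))
print(keys_involved(tree))    # {'A', 'B', 'C'}
print(worst_case_sigs(tree))  # 2
```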

Let’s go through all the problems that we encounter with this approach. Here is an example of our original vision for Miniscript. You see we’ve got an AND and some pubkey checks and we’ve got some ORs and there is a timelock thing going on. If you wrote this out as a tree you could clearly see where everything is. When there are a bunch of parentheses it looks like Lisp and it is hard to tell what matches what. You would take this thing and it would just map directly to script. One problem with this approach is that there are many different ways to write each fragment. There are a number of ways to do a key check. You can use a CHECKSIG opcode, you can use a CHECKSIGVERIFY opcode, you could use CHECKMULTISIG with one key or something weird. There are a couple of different ways to do your timelock checks and then there are many different ways to do ANDs and ORs and thresholds in Bitcoin script. There are a whole bunch of ways and I’ll talk about this in a little bit. Maybe that is not so bad though. Maybe we have 5 different ANDs and so instead of writing AND at the top line of that slide I have an AND and a tag like AND_1, AND_2, AND_3. Each one represents a different one. That’s not so elegant but it is still human readable, it is still very easy to reason about. It still gets you all these nice benefits of Miniscript. A related problem is that these different fragments might not be composable in a generic way. If I’ve got a CHECKSIG that is going to output zero or one depending on whether the signature is correct. If I’ve got a CHECKSIGVERIFY that’s going to output nothing or it is going to abort my script. There’s no AND construction that you can just plug either of those into. Your AND construction needs to know what is going to happen if your thing passes. You might have an AND that expects both of your things to either pass or abort the script. You can do an AND that way just by literally running each check in a row. If the first one fails your script aborts. If the second one fails your script aborts. Otherwise it just runs through. If your opcodes are leaving extra ones and zeros on the stack you can’t just compose them. You’d run the first one, deal with the one that is out there, maybe OP_SWAP, swap it out the way or maybe move it to the altstack. Then run the other one and do a bool AND or something to check that both are ok. That’s also fine, again we can just label things. Another problem though is when deciding between all these different ways to write ANDs and ORs there are trade-offs to make. Some of these have a larger scriptPubKey. They are larger to express. Some of them have larger satisfaction sizes. Maybe you have to push zeros or ones because they have IF statements in them. Some of them have larger dissatisfaction sizes. For some of them maybe you can just push a zero which is an empty stack element and it will just skip over the entire fragment. You don’t have to think about it. For other ones you’ve got to recurse into it and zero out every single public key. Your choice of which different OR to use… In an OR your choice of which fragments to use for the children of the OR depend very much on the probability that certain branches will be taken in real life. You have a branch that is very unlikely, you want to make its script size very small, you want to make its dissatisfaction size if that’s relevant very small and your satisfaction size you can make it as large as you want because the idea is that you will never use it. 
If you are using it you are in emergency mode and maybe you’re willing to waste a bit of money. But now this means your branches need to somehow label their probability. Now the mapping between this nice elegant thing at the top, maybe with tags, and Bitcoin script is no longer two way. I remember in the very early days of Miniscript, I think in the first 36 hours, we ran into this problem and Pieter was telling me that we need to have a separate language that we can compile to Miniscript. We’d end up with two languages that kind of look the same and I put my head in my hands and I said “No, this is ruined. This is not what I want it to be.” And I had a bit of a breakdown over this because it was such an elegant idea that you could just have this one language, where you see the policy and that directly maps to script. You could pull these policies out of the script, and now all of a sudden there is this weird indirect thing, and Pieter the entire time was trying to write this optimal compiler. I did not care about the compiler at all. I just wanted a nicer, cleaner way to write script and he was turning the compiler into this first class thing. Anyway… This is in fact where we landed. You’ll see on the next slide that the results are reasonably elegant. One final problem (there are a few more on the next slide, but this is the last one on this slide) is that when actually figuring out the optimal compilation from this abstract thing at the top to what we represent in script, there are optimizations that Miniscript, by design necessity, can’t handle. In particular if you reuse keys, the same key multiple times in some weird script: maybe it appears as a check in one branch but then you have the same check in another branch. Maybe there is a way to rearrange your tree, to rearrange your branches, so you get logically the same function but what you express in script is slightly different. Maybe the key appears once instead of twice or twice instead of once or something like that. There are two problems with this. One is that verifying that the result of these transformations is equivalent is very difficult. Given two programs or two logical predicates, prove that they are equal on all inputs. I believe that that is NP complete. The only reason that it might not be is because of restrictions of monotone functions but I don’t actually think that changes anything. I think that’s actually program equivalence. Maybe it is halting complete. It is definitely not halting complete for Bitcoin script because you don’t have loops. It is hard.
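
Since the compiler’s choice of OR fragment is driven by those branch probabilities, here is a small back-of-the-envelope sketch of the expected-cost comparison it is effectively making. All byte counts below are invented for illustration, not real fragment costs.

```rust
// Toy cost model for choosing between two hypothetical OR encodings.
struct OrEncoding {
    script_bytes: u32, // size of the script part of this encoding
    sat_left: u32,     // witness bytes to take the left branch
    sat_right: u32,    // witness bytes to take the right branch
}

/// Expected total cost given the probability that the left branch is taken.
fn expected_cost(e: &OrEncoding, p_left: f64) -> f64 {
    e.script_bytes as f64 + p_left * e.sat_left as f64 + (1.0 - p_left) * e.sat_right as f64
}

fn main() {
    // Encoding 1: small script, but the rare right branch is expensive to use.
    let a = OrEncoding { script_bytes: 35, sat_left: 73, sat_right: 140 };
    // Encoding 2: slightly larger script, more balanced satisfactions.
    let b = OrEncoding { script_bytes: 40, sat_left: 80, sat_right: 90 };

    // If the left branch is taken 95% of the time, encoding 1 wins;
    // at 50/50 the trade-off flips. This is the kind of decision the
    // policy-to-Miniscript compiler makes from the probability annotations.
    for p in [0.95, 0.5] {
        println!("p={} a={:.1} b={:.1}", p, expected_cost(&a, p), expected_cost(&b, p));
    }
}
```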

Q - Isn’t the motivation for Miniscript to make that possible?

A - Yeah the original Miniscript motivation was to make this kind of analysis possible. So what we said was basically don’t reuse keys. We cannot do what we call common subexpression elimination in Miniscript. Basically if we wanted to do these kinds of advanced optimizations we would lose the simplicity and analyzability of Miniscript. As far as we’re aware nobody is really champing at the bit to do this. All of our Blockstream projects work with this current paradigm. So does Lightning, so do the atomic swap schemes that people are doing. So do escrow schemes, so do various split custody schemes. Everything that we’re aware of people doing will work in this paradigm. If our goal is to get enough uptake so that there will be standard tooling… Ultimately the goal is to have standard tooling because that is the benefit of Miniscript: you write a script in Miniscript, you grab something off the shelf and it will deal with your script. It will give you fee estimations, it will create your witnesses for you, you can use it in PSBT with other things, you can coinjoin with other things, you don’t need to write a whole bunch of script munging code for it. These optimizations would in theory get you stuff but they would be very difficult and ad hoc to implement and they would break these properties that we think are important for adoption. I threw it in there but we’re actually not going to fix it. Maybe there will be a Miniscript 2, which would not really resemble Miniscript, that could do this.

Q - I saw a conversation on Reddit between you, Pieter and I think Greg as well that said the current Lightning script had no analogue in Miniscript. Something about a HTLC switching on a pubkey hash or something like this? Is that still relevant?

A - This is really cool. The question is about Lightning HTLCs not having any analogue in Miniscript. So in Lightning HTLCs you have a pubkey hash construction: if you reveal the preimage to a certain hash, that preimage needs to be a public key and the signature check is done with that public key. If you don’t reveal a preimage to that hash then you switch and do something different instead. In the original version of Miniscript we had no notion of pubkey hashes so you couldn’t do this. Then in May of last year Sanket Kanjalkar joined us at Blockstream as an intern and he said “What if we do a pubkey hash construction?”. There were a lot of head-in-my-hands reasons not to do this because of the complexity. But Sanket showed that if we add this we would need to change up our type system a little bit but then we would get a Miniscript expressible version of Lightning HTLCs. There are two, one you keep on the local side and one that’s remote. On one of them we saved 7 bytes and on the other we saved 17 bytes versus the deployed Lightning HTLC. We actually went from being unable to express what Lightning does without unrolling it into a very inefficient thing, to having a more efficient way to express what Lightning does in Miniscript. It is still incompatible with what Lightning actually did. One goal of ours was that you could take the HTLCs that are defined in BOLT 3 and just interpret those as Miniscripts. We can’t do that because of this switch behavior. Because Miniscript doesn’t really have a way to say if a preimage is present use it as a public key otherwise use it as a reason to do something else. I think adding that to Miniscript would be technically possible but it would add complexity to Miniscript and all we would gain is the ability to interpret HTLCs as Miniscripts. It is unclear where the value is to either Miniscript or Lightning because Lightning already has a bunch of special purpose tooling. The idea behind Miniscript is that you are trying to express a condition in which coins can be spent. Whereas the idea behind a Lightning HTLC is you are trying to use the blockchain to constrain an online interactive protocol. Those are conceptually different things. It probably doesn’t make sense to use Miniscript inside of Lightning unless it was really easy to do. You would save some code duplication but you need so much extra code to handle Lightning properly anyway that the script manipulation stuff is kind of a minor thing.

Q - It might be useful if one side of the channel is a multisig set up and doesn’t want to bother the other side with changes to the multisig set up.

A - The core developer in the back points out that it would be useful if you could have a Lightning HTLC where the individual public keys were instead complicated policies. That might be multiple public keys or a CHECKMULTISIG or something and that’s true. If you used Miniscript with Lightning HTLCs then you could have two parties open a channel where one person proposes a HTLC that isn’t actually the standard HTLC template. It is a HTLC template where the keys are replaced by more interesting policies and then your counterparty would be able to verify that. That’s true. That would be a benefit of using Miniscript with Lightning. There is a trade-off between having that ability and having to add special purpose Lightning things into Miniscript that would complicate the system. Maybe we made the wrong choice on that trade-off and maybe we want to extend Miniscript.

Q - It depends if the scripts are going to be regularly redesigned or if there are going to be different alternative paths or if there are contracts on top of Lightning. I know Z-man has talked about arbitrary contracts where there potentially could be scripts or Miniscripts on top of Lightning.

Q - I think you can use scriptless scripts to express everything you want on top of Lightning.

A - There is another thing: Pedro, Guido, Aniket and I have this scriptless script construction, which I think is public now or will be soon, where you can do all sorts of really complicated policies with scriptless scripts. If we are going to overhaul Lightning anyway, what if we move to scriptless scripts instead of Miniscript? That’s kind of moving in the opposite direction, towards more complicated, interactive things rather than off the shelf static analysis things. That’s another observation. The impression I get from the Lightning community is that there is more excitement about the scriptless script stuff, which has a whole pile of other privacy and scalability benefits, than there is excitement about Miniscript. I think this maybe comes down to Miniscript being targeted at coins that mostly sit still. Maybe. Or maybe they don’t mostly sit still but it is about spending policies rather than constraining a complicated protocol. That is my view of the way different people think about this.

Q - I did see that Conner and the lnd team had used Miniscript to optimize some new Lightning scripts. Did you see that?

A - Cool, I did not. The claim is that Conner used Pieter’s website or something to find a more optimal Miniscript.

Q - They saved some bytes on a new Lightning script.

A - I’ve heard this from a couple of people. Apparently Arwen, which runs a non-custodial exchange, had a similar thing where they encoded their contract as a policy, ran it through Pieter’s website and saved a byte. It has happened to us at Blockstream for Liquid. We had a hand optimized script in Liquid that is deployed on the network now and Miniscript beat it, which is kind of embarrassing for all of us because we’re supposed to be the script experts aren’t we? Where we landed is we don’t have explicit Lightning support. I could be convinced to add it, it is just a pain in the ass. It would lengthen the website by like 20 percent. It has a trade-off in uptake and complexity and so forth but we could be convinced to add it. We’re also not going to solve this common subexpression thing. That’s beyond adding a few things; that’s really changing how we think about this.

Another category of problems here that I’m going to call malleability. As a quick bit of background, malleability is the ability for a third party to change the witness that you put on a transaction. Prior to SegWit if somebody did this it would change your transaction ID and completely break your chain of transactions and it was just a huge mess. This was very bad. After SegWit the consequences were less severe. If somebody is able to replace your witness with another valid witness they may be able to mess up transaction propagation or block propagation because it will interfere with compact blocks. It may also change the weight of your transaction in a way that your transaction is larger than you expected so the fee rate that you put on it is going to be higher than the fee rate that the network sees. You’re going to wind up underpaying because some third party added crap to your witness. We really want things that Miniscript produces to be non-malleable. An interesting observation here is that malleability is not really a property of your script as much as it is a property of your script plus your witness. The reason being when you create a full transaction with witnesses and stuff the question you ask is can somebody replace this witness with a different one? It turns out that for many scripts there are some witnesses you can choose for which the answer is yes but there are other equivalent witnesses that you can choose for which the answer is no. That’s something where there’s a lot of complexity involved and we didn’t expect going into this. That’s kind of a scary thing because when there is complexity in satisfying scripts that we didn’t even think about before Miniscript this means that this has been hiding in Bitcoin script all along and every usage of script has had to accidentally deal with this without being able to think clearly. There’s some complexity about what malleability means. One thing is that the ability of third parties to change things depends a bit on how much they know. Do they know the preimage to some hash that is out there for example and can they stick that on the blockchain? If you have a timelock somewhere in your script and this is covered by a signature then the adversary can’t change it because timelocks are encoded in your sequence or locktime field. Those are signed. If it is not covered by signatures then maybe somebody can change that and then change branches. It is actually a little bit difficult…

Q - You don’t have a sighash to not sign timelocks?

A - That’s correct. We don’t have a sighash that won’t sign timelocks but you can create Miniscripts that have no signatures and then there is a form of malleability that appears.

This is a little bit difficult to reason about although eventually Pieter and I settled on something that is fairly coherent. A long argument I had with Pieter was about whether it counted as malleability if other participants in your transaction are able to sign things. I wanted them to mark certain keys as being untrusted when we consider signatures appearing out of nowhere. I think Pieter won that, I’m not going to rehash that entire argument. I think there’s maybe some simplicity of reasoning where you can say “This key is untrusted” and then use all your standard malleability analysis to decide what’s the worst that can happen to your transaction that might be useful in some cases. But I think that’s a technical thing. Pieter argued that that’s an abuse of the word malleability. Maybe I can share algorithms but I shouldn’t be arguing this in public so I won’t.

Miniscript and Spending Policies

This covers some simple things. To make our task tractable here we are going to assume standard policy rules. I talked about SIZE EQUALVERIFY earlier. There’s a policy rule in Bitcoin, meaning a standardness rule that affects transactions on the network, called MINIMALIF. That means that what you put as input to an IF or a NOTIF has to be zero or one. So you do not need SIZE EQUALVERIFY, the nodes will effectively enforce the SIZE EQUALVERIFY check for you. We’re going to assume that standardness. We’re going to say that if something is malleable but you have to violate standardness to malleate it then that doesn’t count for our concerns about transaction propagation. If your only potential attackers are miners, miners can already mess up transaction propagation by creating blocks full of secret transactions and stuff like that. We’re not going to deal with that, and that assumption will save us a whole bunch of bytes in almost every fragment. In particular enforcing that signatures either have to be valid or have to be empty is very complicated and wasteful to do directly in script. But there’s a standardness rule that requires signatures to be either empty or valid. As I mentioned we don’t have common subexpression elimination. It would conflict with this model where you have a tree that you can iterate over, where clearly your script corresponds to some tree and vice versa. To retain that we can’t be doing complicated transformations. We also assume that no key reuse occurs, for malleability. Here’s an example: you have two branches and the same key is in both. If you provide a signature for one branch then, absent using OP_CODESEPARATOR say, which is also expensive, somebody could take that signature from one branch and reuse it in the other branch. Now there’s potentially a malleability vector where somebody could switch branches where an OR statement is being used. I think we have to assume no key reuse because it seems that in general, if you can have arbitrary sets of keys that intersect in arbitrary ways in conjunction with thresholds, it is intractable. It doesn’t feel NP hard but we don’t know how to do it in sub-exponential time so it is possible that it is. Those are our three ground rules for Miniscript.

We are also going to separate out Miniscript from the policy language. Miniscript directly corresponds to script, it is just a mapping between script and something human readable. Then we are also going to have this policy language. The policy language basically has probabilities that different branches are taken. Miniscript has weird tags like AND_1, AND_2, AND_3 and so on. They have better names than that. The policy language has probabilities. Given a policy you can run it through a compiler that will produce a Miniscript. It will look at your probabilities, it will decide what the optimal choice of a specific Miniscript fragment is, and it will output a Miniscript which is clearly semantically identical to the original. This is very easy to check by the way. You take your policy and delete your probabilities. You take your Miniscript and delete all the tags and those will be equal. That’s how you check that your compiler output is correct. That’s a really powerful feature. You can take stuff off the blockchain, decode it into a Miniscript, lift it into a policy by deleting all the noise and then you can see immediately what it is doing. But Miniscript is where all the technical complexity is. We have all these different tags that need to compose in certain ways. This is what I’m going to get into here, which I haven’t really gotten into in any of my public talks. Here’s a policy; rather than writing it in the Lisp format I drew this pretty picture like a year ago that I’ve kept carrying around. Here’s the same thing compiled to Miniscript. At the bottom of the slide I’ve written the actual script out in opcodes but you can see the difference between these two. There are two differences. One is that I’ve attached all these tags to the ANDs and ORs. There’s an AND_v and an OR_c, which I think are the current names for them. The other interesting thing is the pk on the left, checking pubkey 1: it added a c there. That’s saying there’s actually a public key, but also that I stuck a CHECKSIG operator in there. Then there’s a jv: the v means there is an OP_VERIFY at the end of this hash check and the j means there is some construction that basically skips over the whole check if you give it an empty string, meaning no preimage, and otherwise will do the check. That j is needed for anti-malleability protection. Then we’ve got these b’s and v’s and stuff which I will explain in the next slide. These are types. Miniscript, unlike script, has a type system. To have a valid Miniscript you need these types and you need to do checks.
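
That “delete the probabilities, delete the tags, compare” check can be sketched in a few lines. The types below are toy stand-ins of my own invention (the real concrete-policy, semantic-policy and Miniscript types live in the implementations), but the shape of the check is the same: lift both sides to the same semantic tree and compare.

```rust
// Toy "lift" check: stripping probabilities from a policy and stripping
// tags/wrappers from a Miniscript should give the same semantic tree.
#[derive(Debug, PartialEq)]
enum Semantic {
    Key(String),
    Older(u32),
    And(Box<Semantic>, Box<Semantic>),
    Or(Box<Semantic>, Box<Semantic>),
}

enum ConcretePolicy {
    Key(String),
    Older(u32),
    And(Box<ConcretePolicy>, Box<ConcretePolicy>),
    // Each OR branch carries a relative probability used only by the compiler.
    Or((u32, Box<ConcretePolicy>), (u32, Box<ConcretePolicy>)),
}

enum Ms {
    PkC(String),                  // a key check with a CHECKSIG wrapper
    OlderV(u32),                  // a timelock with a VERIFY wrapper
    AndV(Box<Ms>, Box<Ms>),       // one specific AND fragment
    OrC(Box<Ms>, Box<Ms>),        // one specific OR fragment
}

fn lift_policy(p: &ConcretePolicy) -> Semantic {
    match p {
        ConcretePolicy::Key(k) => Semantic::Key(k.clone()),
        ConcretePolicy::Older(n) => Semantic::Older(*n),
        ConcretePolicy::And(a, b) => Semantic::And(lift_policy(a).into(), lift_policy(b).into()),
        ConcretePolicy::Or((_, a), (_, b)) => Semantic::Or(lift_policy(a).into(), lift_policy(b).into()),
    }
}

fn lift_ms(m: &Ms) -> Semantic {
    match m {
        Ms::PkC(k) => Semantic::Key(k.clone()),
        Ms::OlderV(n) => Semantic::Older(*n),
        Ms::AndV(a, b) => Semantic::And(lift_ms(a).into(), lift_ms(b).into()),
        Ms::OrC(a, b) => Semantic::Or(lift_ms(a).into(), lift_ms(b).into()),
    }
}

fn main() {
    // policy: or(9@pk(A), 1@and(pk(B), older(144)))
    let pol = ConcretePolicy::Or(
        (9, Box::new(ConcretePolicy::Key("A".into()))),
        (1, Box::new(ConcretePolicy::And(
            Box::new(ConcretePolicy::Key("B".into())),
            Box::new(ConcretePolicy::Older(144)),
        ))),
    );
    // hypothetical compiler output, with wrappers and specific AND/OR tags chosen
    let ms = Ms::OrC(
        Box::new(Ms::PkC("A".into())),
        Box::new(Ms::AndV(Box::new(Ms::PkC("B".into())), Box::new(Ms::OlderV(144)))),
    );
    // The check: lifting both sides gives the same semantic tree.
    assert_eq!(lift_policy(&pol), lift_ms(&ms));
}
```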

Miniscript Types (Correctness)

Let me justify that real quick. As I mentioned you’ve got some opcodes like CHECKSIG that push zero on the stack on failure but one on success. You have other ones like CHECKSIGVERIFY which push nothing on success and abort your script on failure. You have all these expressions that wrap things. There are the composition ones, the ANDs, ORs and thresholds, but there are also these j’s and c’s and v’s, all these little wrapper things that manipulate your types, and they might behave differently depending on whether you’ve given them a CHECKSIG or a CHECKSIGVERIFY, or whether you’ve given them something that can accept a zero as its input or can’t ever accept zero as an input, or something like that. Basically we have these four base types, B, V, K and W. So B is just base, that is what we call it. This is something that has a canonical satisfaction that will result in pushing a nonzero value on the stack. If it can be dissatisfied it will push a zero on the stack. Then we have this V which has a canonical satisfaction that will push nothing on the stack. There are no rules beyond that. If you do something wrong it will abort. Basically all these things have this caveat that if you do something wrong it will abort. There is this weird one, K. Whether you satisfy or dissatisfy it, it is going to push a key onto the stack. Whether you satisfy or dissatisfy will propagate in an abstract way up to your CHECKSIG operator that will turn that key into either a zero or a one. Then you’ve got this weird one W that is a so called wrapped base. This is used for when you are doing an AND or an OR and you’ve got a whole bunch of expressions in a row. Basically what a wrapped expression does is it takes the top element of your stack, which it expects to be some sort of accumulator, 1, 2, 3, 4, 5, some counter or whatever. It will move the accumulator out of the way, it will execute the fragment, it will bring the accumulator back and it will somehow combine the result of your subexpression and the accumulator. Typically it does this by moving the accumulator to the altstack, moving it back when you’re done and combining them using bool AND, bool OR, OP_ADD in some cases. Then there are these minor type modifiers. We’ve got five of them. I was going to go to the website and read through all the detailed justification for these but I think I’m not going to do that. They are minor things, like the o, which means “one”: it says that this is a fragment that takes exactly one element from the stack. This affects the behavior of certain wrappers. The idea behind all of this is that if you have a Miniscript program you can run it through a type checker that assigns all of these tags and type modifiers and stuff to your program. If your top level program, if this top AND here, is a B you have a valid Miniscript. If it is not a B it is an invalid Miniscript, all bets are off, probably it can’t be satisfied. This type check has very simple rules. There are linear time, very straightforward rules for propagating these types up. There is a giant table on Pieter’s website which describes exactly what these mean and what the rules for propagation are. You as a user don’t really need to care about that. All you need to know is that when you run the type checker, if you get a B at the top you’re golden, if you don’t you’re not, throw it out. If you get your Miniscript by running our compiler then it will always type check because our compiler enforces that type checking happens.
All of these complicated questions of correctness are now wrapped up entirely in this type system. You have four base types and five modifiers.
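
Here is a tiny sketch of what that looks like for a handful of fragments (a key push, a timelock, the c: and v: wrappers and and_v). The propagation rules shown are my paraphrase of the table on the website, so treat the details as approximate rather than as the reference.

```rust
// Toy correctness type checker over a small Miniscript-like subset.
// Base types, paraphrasing the reference: B pushes a value, V pushes nothing
// and aborts on failure, K pushes a key, W is the "wrapped" form.
#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq, Debug)]
enum Ty { B, V, K, W }

enum Frag {
    PkK(String),                // pushes a key: type K
    Older(u32),                 // relative timelock check: type B
    CheckWrap(Box<Frag>),       // "c:" wrapper, adds CHECKSIG: K -> B
    VerifyWrap(Box<Frag>),      // "v:" wrapper, adds VERIFY: B -> V
    AndV(Box<Frag>, Box<Frag>), // "and_v": left must be V, result is right's type
}

fn type_check(f: &Frag) -> Option<Ty> {
    match f {
        Frag::PkK(_) => Some(Ty::K),
        Frag::Older(_) => Some(Ty::B),
        Frag::CheckWrap(x) => (type_check(x)? == Ty::K).then_some(Ty::B),
        Frag::VerifyWrap(x) => (type_check(x)? == Ty::B).then_some(Ty::V),
        Frag::AndV(l, r) => {
            if type_check(l)? != Ty::V { return None; }
            type_check(r)
        }
    }
}

fn main() {
    // and_v(v:older(144), c:pk_k(A)): a timelock AND a key, top-level type B.
    let ok = Frag::AndV(
        Box::new(Frag::VerifyWrap(Box::new(Frag::Older(144)))),
        Box::new(Frag::CheckWrap(Box::new(Frag::PkK("A".into())))),
    );
    assert_eq!(type_check(&ok), Some(Ty::B)); // B at the top: valid

    // Forgetting the c: wrapper leaves a bare key where a B is needed.
    let bad = Frag::AndV(
        Box::new(Frag::VerifyWrap(Box::new(Frag::Older(144)))),
        Box::new(Frag::PkK("A".into())),
    );
    assert_eq!(type_check(&bad), Some(Ty::K)); // not B at the top: reject it
}
```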

Miniscript Types (Malleability)

We’ve got another type system in parallel for malleability. This also was something that we got pretty far into the project before we realized that we needed to separate these things out conceptually. Malleability as I mentioned is this ability for a third party to replace a valid witness with another valid witness. One complicated thing that we didn’t realize until later is that we have a notion of canonical witnesses that we expect honest signers to use. Say you have an OR, like two branches, either satisfy this subscript or this other subscript. There’s some form of OR where you try to run both of them and then you check that one of them succeeded. Any honest user is just going to pick one or the other and satisfy it and dissatisfy the other one. Somebody being dishonest could satisfy both and this would still pass the rules of Bitcoin script. When we are thinking about malleability we have to consider witnesses like that. That was annoying. There’s suddenly this whole wide set of possible behaviors in Miniscript that we had to consider when doing malleability reasoning. The bad things about malleability are, in particular, that it affects transaction propagation and it affects fee estimation. Those are the bad things about malleability.

What questions can we ask? Can we ensure that all witnesses to a script are non-malleable? This turns out to be the wrong question to ask. I mentioned that malleability is a property of witnesses not scripts, but even so it is not necessary that every possible witness be non-malleable. What we want is this second thing in red here. Basically no matter how you want to satisfy a script, no matter which set of public keys or hash preimages or whatever, you want there to be some witness you can create, even if it is a bit expensive, that will be non-malleable. Our compiler will ensure that this is true. If you give it a given policy it will make sure every single possible branch of that policy has a non-malleable witness that can be created. This was quite difficult and non-trivial. This was the source of a lot of security bugs that I found when auditing random crap that I found on the blockchain. As it turns out, curiously, the Tier Nolan atomic swap protocol from 2012, 2013 is actually ok and Lightning is actually ok. That was surprising. I think the Lightning folks got lucky in some ways. There was a lot of thought that went into Lightning. The way that we are thinking about this here is a very structured way where you can even reason about this in an automated fashion. It is great that we can do that now. It provides a lot more safety and assurance. What I found very frequently when trying to use Miniscript internally at Blockstream to optimize Liquid and optimize various things I was doing was: I have Miniscript, I can do all these powerful things, now I can optimize the hell out of my script and fearlessly do things. Every time I would find an optimization my analysis tools or the compiler would reject it. I’d be like this is broken, another bug in Miniscript. Pretty much every time it was not a bug in Miniscript. I would chase through the logic, and this is a difficult problem of how to get good error messages. When I’d trace through the logic of why it was failing it would come down to me failing one of these malleability properties. I would find some attack that was actually being prevented by the Miniscript analyzer warning me: “Hey you’ve got a bare hash preimage sitting over here. If that hash becomes public someone can throw away the rest of the witness and use that, for example.” We have these three extra type modifiers. There are four base types and five modifiers that represent correctness. We have three more for malleability. s means there is a signature involved somewhere in the satisfaction. It used to stand for strong, then it used to stand for safe, all these weird notions. Now it just means there is a signature, which captures the properties that we need. There’s this weird thing called f for forced. What that means is there is no way to dissatisfy it, at least not without throwing a signature on it. No third party can dissatisfy something that is an f. Then there is an e which is in some abstract sense the opposite of f. With e you have a unique dissatisfaction that has no signature on it. A third party can use that unique dissatisfaction but if you dissatisfy it then that’s the only dissatisfaction. If you are not satisfying this then a third party can’t do anything about it. They see the dissatisfaction, they have no other choices.

I’m going to bump over to the website in a second. It has got a table of what these rules mean, how they propagate, what the justifications for it are and it also has a summary of this slide here which is how do we produce a signature. I’ve been hinting at various malleability vectors. There is a question. Suppose you have a Miniscript that is something like “take either this timelock or this timelock and combine it with a public key.” Is this malleable? It is actually not because the signature is going to cover the specific… this is complicated. I should have chosen a simpler example. The signature will cover the locktime but if the locktime is high enough that it actually satisfies both branches then if you’ve got like an OP_IF…

Q - If it is an already spent timelock or a timelock in the past?

A - Yes. If you try to use the later timelock then you put a timelock that will be greater than both restrictions. Now if you’re switching between those using the OP_IF or something a third party can change your zero to a one and swap which branch of the OP_IF you’re using and that’s a malleability vector.

Q - The transaction is going to fail on the locktime check in the script evaluation?

A - In this example I’m assuming that the locktime is satisfied. There exists a branch in your script that if somebody honestly tries to take the result will be a malleable witness. You’re trying to take the later locktime and some third party can change it so you’re taking the earlier locktime. We spent a lot of time iterating to try to prevent this from happening. It is actually quite difficult to reason through why exactly this is happening because as Lightning developer is saying you do have a specific locktime in the transaction that is covered by the signature so where does the malleability come from? The answer in Miniscript parlance is that you have an IF statement where neither of the two branches have this s property, signature on them. If you ever have an IF statement or a threshold where there are multiple possible paths that don’t have the s on them that’s illegal and you have to dissatisfy that. You aren’t allowed to take either timelock because if you do the result is going to be malleable. Then if you have to dissatisfy it and then you don’t have the e property then probably your whole script is going to be malleable because even after dissatisfying if you’re missing e that means that somebody else can dissatisfy. So actually all of these rules have simple propagation rules that are very difficult to convince yourself are correct but that catch all this complicated stuff. So at signing time how do we avoid this? How do we say “I’ve got this weird branch here where maybe it is a timelock or a hash preimage and now I need to think I’m only allowed to take the hash preimage if the timelock has not yet expired.” If I try to use a hash preimage after the timelock has expired a third party can delete the hash preimage, take the timelock branch and there’s a malleability vector. So how does my signing algorithm reason about this? How can I actually produce a valid signature and not get any malleability under these constraints? The answer is basically you recursively go through your Miniscript trying to satisfy every subexpression. Then you try to satisfy the AND or the OR, however these are combined and you propagate all the way to the top. Whenever you have a choice of satisfactions you look at what your choices are. If they all require a signature that is great, we don’t have to think about malleability because nobody can change those, they would need a signature to change things out. We assume no signature reuse, we assume that third parties don’t know your secret keys and so on.

Q - No sighash?

A - In Miniscript we only use SIGHASH_ALL. That’s a good point from the audience. Different sighash modes might complicate this but in Miniscript we only use SIGHASH_ALL.

If they all require signatures, that’s great, you just take the cheapest one. If there is one possibility that doesn’t require a signature now you need to be careful because a third party can go ahead and take that. Now you can’t take any of the signature ones, you have to take the one that the third party can take. The reason being that you have to assume a third party is going to do that to you so you have to get in front of them. You just don’t publish any of the signature requiring ones. This may actually result in you creating a more expensive satisfaction than you would if you didn’t care about malleability. There are contexts I can imagine where this is safe and you may want to use a malleable signer, and maybe you could save a few bytes, but you really need to be careful doing that. Finally if there are multiple choices that don’t have a signature then you’re stuck, because no matter what you choose a third party is going to have some option. That’s all there is to this. You have these rules for e, f and s that propagate through. Using these rules at signing time you apply these three points and you can recursively generate a satisfaction. That’s pretty great. This was a really complicated thing to think about and to reason through and it is really cool that we got such a straightforward algorithm that will handle it for you. When I talk about our analysis tools catching me doing dumb stuff this is what I mean. I mean I would try to sign and it would refuse to sign. I would trace through and I’d be like “What’s wrong? Did I give it the wrong keys? What is going on?” Eventually I would find that somewhere I would have a choice between two things that didn’t have signatures and it was barfing on that. That’s really cool, something that has actually saved me in real life. Only saving me from wielding the power Miniscript gives me in unsafe ways. That is a good thing.
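
Paraphrasing that choice rule as code (my own toy types, not any real signer API): among the candidate satisfactions for a node, if all of them need a signature take the cheapest; if exactly one needs no signature you are forced onto it; if several need no signature there is no non-malleable choice at all.

```rust
// Toy version of the choice rule for non-malleable satisfaction.
#[derive(Clone)]
struct Satisfaction {
    weight: u32,         // witness bytes
    has_signature: bool, // does producing it require a signature?
}

/// Pick a satisfaction for one node, following the rule from the talk.
/// Returns None when every choice would be third-party malleable.
fn choose(options: &[Satisfaction]) -> Option<Satisfaction> {
    let sig_free: Vec<&Satisfaction> =
        options.iter().filter(|s| !s.has_signature).collect();
    match sig_free.len() {
        // Everything needs a signature: a third party cannot swap anything,
        // so just take the cheapest.
        0 => options.iter().min_by_key(|s| s.weight).cloned(),
        // Exactly one option needs no signature: a third party could always
        // substitute it, so you must use it yourself ("get in front of them").
        1 => Some(sig_free[0].clone()),
        // Several signature-free options: whatever you pick, a third party
        // has another valid witness available. No non-malleable choice.
        _ => None,
    }
}

fn main() {
    let timelock = Satisfaction { weight: 2, has_signature: false };
    let key_a = Satisfaction { weight: 73, has_signature: true };

    // Key branch vs timelock branch: forced onto the timelock branch even
    // though the key branch would also be valid.
    assert!(!choose(&[key_a, timelock.clone()]).unwrap().has_signature);

    // Two signature-free branches: refuse to sign, the result would be malleable.
    assert!(choose(&[timelock.clone(), timelock]).is_none());
}
```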

Q - Does the tooling have a malleable mode? If I wanted to make it malleable because I’m a miner…

A - The question is does the tooling have a malleable mode. I think Pieter’s code does. We might have an open issue on rust-miniscript to add this. It is a little bit more complicated than just disabling the checks because you might also want to take some of these non-canonical satisfactions in that case, which you can imagine are a little bit cheaper. I think Pieter implemented it but Sanket and I didn’t. There will be one though; it is a design requirement for our finished Miniscript library that you can enable malleability if you want.

I will link to the website. I was going to go through this in more detail but I can see that I’m burning everyone out already.

Q - Shall I get the website up?

A - Let me run through, I’ve only got a couple more slides. Let me run through my last subject here which is fee estimation which I should’ve moved to sooner. Malleability was the most taxing. Let’s think about fee estimation. I will be a bit more high level and a bit more friendly for these last few slides.

Miniscript: Fee Estimation

Fee estimation requires similar reasoning to malleability. You’ve got to go through your transaction and figure out what is the largest possible witness size that might exist on these coins. That largest possible witness size is what I have to pay for. If I’m in control of all the keys I can do better. I can say “I’m not going to screw myself” so I should actually find the smallest satisfaction that uses the keys that I have. But if I have other third parties who might be contributing I have to consider the worst case of what they do. They might be able to sign, they might not. Maybe that affects how efficiently I can sign something. Regardless there is a simple linear time algorithm I can go through where I just tell it which keys are available and which aren’t. I think I have to treat keys as either available or not available, rather than potentially available, before it gets complicated. Simple linear time algorithm, it will output a maximum. I know the maximum size of my witness and that is how much transaction weight I should pay for when I’m doing my fee estimation. This is great by the way. This was the worst thing about Liquid. We had half of our development effort going into figuring out fee estimation. In Liquid we have the ability to swap out quorums. We have coins in Liquid, some of them are controlled by different scripts than others. We have this joint multisig wallet trying to do fee estimation and cross-checking each other’s fee estimation on coins with complicated scripts that potentially have different sizes of witnesses. With Miniscript we just call a function, we just get the maximum witness size and it does it for us. Miniscript was really just Pieter and me going off on a bender but we really wouldn’t have been able to do Liquid without it. We got lucky there.
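
A minimal sketch of that kind of estimate over a toy policy tree (byte counts and structure invented for illustration; the real accounting lives in the Miniscript implementations): the best case assumes you control everything and get to pick branches, the worst case budgets for the most expensive branch of every OR.

```rust
// Toy witness-size estimate over a policy tree. Sizes are placeholders.
enum Policy {
    Key,      // a signature: ~73 bytes
    Hashlock, // a 32-byte preimage
    Older,    // timelock: satisfied via the nSequence field, ~1 byte here
    And(Vec<Policy>),
    Or(Vec<Policy>),
}

/// Smallest witness if we control every key and get to pick the branches.
fn best_case(p: &Policy) -> u64 {
    match p {
        Policy::Key => 73,
        Policy::Hashlock => 32,
        Policy::Older => 1,
        Policy::And(subs) => subs.iter().map(best_case).sum(),
        Policy::Or(subs) => subs.iter().map(best_case).min().unwrap_or(0),
    }
}

/// Worst case when other, possibly uncooperative, parties are involved:
/// budget for the most expensive branch of every OR.
fn worst_case(p: &Policy) -> u64 {
    match p {
        Policy::Key => 73,
        Policy::Hashlock => 32,
        Policy::Older => 1,
        Policy::And(subs) => subs.iter().map(worst_case).sum(),
        Policy::Or(subs) => subs.iter().map(worst_case).max().unwrap_or(0),
    }
}

fn main() {
    // pk(A) OR (pk(B) AND hashlock AND timelock)
    let pol = Policy::Or(vec![
        Policy::Key,
        Policy::And(vec![Policy::Key, Policy::Hashlock, Policy::Older]),
    ]);
    // Pay fees for the worst case unless every key involved is yours.
    println!("best {} worst {}", best_case(&pol), worst_case(&pol)); // 73 vs 106
}
```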

Q - They don’t need to try to maximize it [witness size], they just need to make it bigger than the honest signers’?

A - That’s correct. You need an overestimate, that’s what you need. You can overshoot that is perfectly fine but if you undershoot you might be in trouble because the transaction might not propagate.

If you run through your script and do your type inference and your top level thing has an s on it… If you run through the type checker and it says it is non-malleable you don’t have to think about malleability, that’s awesome. If you’re being weird and you want to have potentially malleable scripts then that will complicate your fee estimation, but again it is doable, you can run through this in linear time. Let me talk a bit about how malleability and fee estimation interact. This is one security vulnerability I found on the blockchain. You could argue that this is irresponsible disclosure but I am only saying it is on the blockchain and giving a script that is not even the exact script. If you guys can find this, you could have found it without me. Suppose you have this atomic swap thing. You’ve got these coins that can either be spent by two parties cooperating, or spent by one party by revealing a hash preimage, or after a timeout the second party can use this timelock. The issue here is that Party B is not required on the hash preimage branch at all. Suppose the timelock expires. Party B creates this transaction using the timelock case and publishes this to the network. Party A sees this and eclipse attacks Party B. Party A replaces the witness that was using the timelock with one that uses the hash preimage. Party A is actually following the atomic swap protocol, they are using the hash preimage, but at this point Party B has given up. Party B is no longer waiting, they’re trying to back out. Party A uses the hash preimage to increase the weight on the transaction, causing it to no longer propagate and no longer get into the next block. Then Party A can wait for the timelock on the other side and steal the coins. There are two solutions to this using Miniscript. One is that Party B, when doing fee estimation, should consider that A’s key is not under his control and expect worst case behavior from A. That will do the right thing because Party B will wind up paying a higher fee, but they’re paying a higher fee to prevent this attack.
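
The reason the extra preimage stalls the transaction is just arithmetic: the fee was fixed when B signed, so any witness bytes A can add push the effective feerate down. A rough illustration with made-up numbers:

```rust
// Why witness bloat matters: the fee is fixed when the transaction is signed,
// so extra witness bytes a third party can add lower the effective feerate.
// Figures are illustrative, not exact.
fn feerate(fee_sats: f64, vsize_vb: f64) -> f64 {
    fee_sats / vsize_vb
}

fn main() {
    let fee = 1_500.0;        // sats, chosen by B for ~10 sat/vB
    let honest_vsize = 150.0; // vbytes with the timelock-branch witness

    // A swaps in the hash-preimage witness plus whatever extra data the
    // script allows, say 80 extra witness bytes = 20 vbytes after the discount.
    let bloated_vsize = honest_vsize + 20.0;

    println!("honest  {:.1} sat/vB", feerate(fee, honest_vsize));  // 10.0
    println!("bloated {:.1} sat/vB", feerate(fee, bloated_vsize)); // ~8.8
}
```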

Q - The real attack is tracking the transaction in the mempool. I can always drop the signature for the witness?

A - The real attack here is that I’ve gotten the transaction in the mempool with a specific transaction ID but with a lower fee rate.

Q - You don’t track by witness transaction ID right now?

A - Yes that’s correct and I don’t think we want to. If the network was looking at wtxid’s that would also prevent this kind of thing.

Another solution is when creating the transaction Party B could go over this and ask the question “Is my signature required for every branch of this?” In the standard Tier Nolan atomic swap the answer to that question should be yes. The party who has a timelock should have to sign off on all branches. That is very easy to check with Miniscript. You just scan through the script, look at every branch and ask “Is my key in every branch?” Party B does this and if Party A had proposed this broken thing B would reject it out of hand. There are a number of ways that we can address this concern and Miniscript gives you the tools to do them. That’s good. This was something that I hadn’t really considered. It is a little bit contrived but not a lot contrived. You can only decrease the fee rate by a little bit and your goal is to decrease the fee rate by enough that you stall a transaction for so long that a different timelock expires, and the other party doesn’t notice that you put a hash preimage that they can use to take coins in the mempool. It is a little bit contrived but you can imagine this reflecting a wider issue where you make incorrect assumptions about how your witness size might grow because other parties are not cooperating with you. Miniscript gives you the tools to reason about that.

So I’ve been talking about keys being trusted or untrusted up to this point. If a key is yours you can assume that you are always going to do the best possible thing. If the witness is smaller by signing with it you’ll sign with it. If the witness is smaller by not signing with it you won’t sign with it. Then you have these untrusted keys where you’ve got to assume they’ll do the worst thing when doing fee estimation. There is another dimension here actually, about availability. If keys are definitely available, this is very similar to being trusted. If you trust a key and it is available you can just do your fee estimation assuming it will be used in the most optimal way. If the key might not be available then you can’t actually do that. If it is on a hardware wallet that your end user might not have access to or something like that then you’ve got to assume that the signature won’t be there, you’ve got to treat it as the worst case. If a key is definitely unavailable that is kind of cool. If a key is untrusted but you know it is definitely unavailable, because you know it is locked in a safe somewhere on another continent, then you can treat it as trusted. Unfortunately these notions of trust and availability don’t actually overlap entirely.

It turns out that if you try to do fee estimation the way it is described it looks like it might be intractable. We’ve got an open problem. This is frustrating. This is why rust-miniscript is not done by the way. Every time I go to the issue tracker to close all of my remaining things I jump onto this and then I don’t make any progress for many hours or days and then I have to go back to my real job. It is very surprising that this is open. We thought that we had a solution at the Core Dev meeting in Amsterdam last year, Pieter and I. I think Sanket and Pieter broke it the next day. It seems like we should have this ability to estimate transaction weight in a really optimal way that takes into account all of the surrounding context. But then it seems like it is intractable. I tried to write at a high level what you have to do and even in my high level description I was able to find bugs. Miniscript gives you the tools to get a bound on the worst case transaction size and that’s what you need. What I’m saying here is that we could do even better than that by giving the Miniscript tooling some more information. I don’t know how to write the algorithm to do that. This is a plea for anybody who is interested. I think you could get a Master’s thesis out of that. You won’t be able to get a PhD thesis. If anyone here is a Master’s student I think this is definitely a thesis worthy thing to do. Unless it turns out to be trivial, but I’m pretty sure it is not. Pieter and I tried for a while and Pieter has two PhDs.

Miniscript and PSBT

Last slide. I nailed the two hour time limit. Let me cover the interaction between PSBT and Miniscript. This ends the technical part of the talk. This is really just engineering, something we need to grind through; there are a couple of minor things that need to be done. PSBT is of course Andrew Chow’s brainchild. This is a transfer protocol for unsigned transactions that lets you tack things onto your inputs and outputs and stuff. If you are a wallet trying to sign a transaction in some complicated multisig thing you don’t need to think about that. You take the transaction, you sign it, you attach the signature to it and pass it onto the next guy. You don’t need to worry about putting the signature in the right place and giving room for other people to fit theirs in or whatever other weird stuff you might otherwise have to do. You just stick it onto the PSBT. So PSBT has a whole bunch of different roles. The most interesting role is the so called finalizer. That’s the person who takes this PSBT, takes all the signatures that are attached to it, all the hash preimages that are attached to it and stuff, and actually assembles a witness. Up until now the only way to write a finalizer is to do template matching. “This is a normal CHECKMULTISIG, I know I need to look for these keys and put them in this order.” With Miniscript you can write a finalizer that can handle any kind of policy that you throw at it and any script that you throw at it. It will figure out whether it has enough data to actually satisfy the input and actually finalize the transaction. Given enough data it can even optimize this. It can say “If I use these signatures instead of that signature I can come up with a smaller witness.” Your finalizer now has the ability to support pretty much anything that you throw at it. For any protocol that anyone is using today you can just write an off the shelf finalizer, maybe with some special purpose HTLC code, that will work, and the output will be more optimal than what would otherwise be done. Maybe a dumb example of this: I have a simple finalizer, not one that uses PSBT but the finalizer logic in Liquid, where typically we need 11 signatures on a given transaction and typically we get 15 because all of the signers are actually online. My code will go through and find the 11 shortest ones by byte size. We save a few bytes in Liquid on every transaction. We waste a tremendous amount using CHECKMULTISIG and explicit keys and all this. But we save a few bytes that way. The reason we can do that is that I’m not writing this insane micro-optimization code for Liquid. I’m writing it in a Miniscript context where it can be vetted and reused and stuff. It is actually worthwhile to do these sorts of things. That’s really cool. Miniscript was designed to work with PSBT in this way. There are a couple of minor extensions to PSBT that we need. There has been some mailing list activity in the last couple of months but it is not there yet.
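
The “pick the 11 shortest of 15 signatures” trick is easy to sketch (illustrative only; the real Liquid finalizer works on actual signatures and also has to keep them in the order the script’s keys expect for CHECKMULTISIG):

```rust
// Sketch of "keep the cheapest k signatures": given more signatures than the
// threshold needs, keep the k smallest by byte length.
fn cheapest_k(mut sigs: Vec<Vec<u8>>, k: usize) -> Vec<Vec<u8>> {
    sigs.sort_by_key(|s| s.len()); // low-S ECDSA signatures vary by a byte or two
    sigs.truncate(k);
    sigs
}

fn main() {
    // 15 signers responded but the script only needs 11.
    let sigs: Vec<Vec<u8>> = (0..15)
        .map(|i| vec![0u8; 71 + (i % 3)]) // fake signatures of 71-73 bytes
        .collect();
    let chosen = cheapest_k(sigs, 11);
    assert_eq!(chosen.len(), 11);
    let total: usize = chosen.iter().map(|s| s.len()).sum();
    println!("witness signature bytes: {}", total);
}
```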

Q - All your suggestions have gone nowhere.

A - I did a bad thing and I wrote a bunch of suggestions in an English language email to the mailing list and wrote no code and never followed up so of course nothing has happened. That is on me, I should have written code, I should follow up and do that. And I will when I find the time.

Q - On the last slide, around roles. It means that you don’t have to allocate the finalizer or the finalizer can be transferred at a later date? What’s that final point?

A - The roles are actually different participants. If you are writing code for a hardware wallet say, the chances are you only care about the signer role which means looking at validating the transaction and just producing a signature. You tack that onto the PSBT and then it is somebody else’s problem to put that into the transaction. There is another role, the updater, where you add extra information. You say “This public key is mine” or “I know a BIP32 path for that.” If you’re writing host software maybe you care about being an updater. Right now to use PSBT at all you have to write finalizer code because every individual person needs to know how to make a transaction. But the goal of Miniscript is that we can write one finalizer to rule them all. There will be a standard Miniscript finalizer and then nobody else in the ecosystem is going to have to deal with that again. As long as they can fit their project into Miniscript they can just use a standard finalizer. That is great because that removes by far the most complicated part of producing a Bitcoin transaction. It also gets you better flexibility and interoperability. The different roles are actually different participants.

As I mentioned finalizing optimally is similar to fee estimation. If you actually take all the information that is potentially available then it becomes intractable even though it feels like it should be tractable. That’s a bit frustrating. We can definitely finalize in an optimal way given information about what data is actually available in the final state. You give it a PSBT with enough signatures and hash preimages and whatever and we can write very straightforward linear time code that will find the optimal non-malleable witness using that data which is a really cool thing. Although you need to be a tiny bit careful. The finalizer has access to extra signatures and stuff that in principle the finalizer could use to malleate. If you have an untrusted finalizer then that is a different trust model and you need to think about that.

Q - Not necessarily an untrusted finalizer. Anyone who has the full PSBT?

A - Yes. This is a point of minor contention between Pieter and I, the question of how do you think about distrust from your peers when you’re doing a multisignature thing. Where we landed on for the purpose of Miniscript is that it does not count as malleability. That’s a separate analysis thing that I haven’t fully fleshed out to be honest. I don’t know the best way to think about that if you actually distrust your peers and you think they might malleate your transactions. I’m not sure in general what if anything you can do.

Q - You don’t give them all of the PSBT?

A - It is not just not giving all the PSBT. They can replace their own signatures.

Q - They can ransom change?

A - Yeah. There is no straightforward thing. I can find other attacks for most of the straightforward things.

Open Problems

This is actually the end. Thank you all for listening for almost two hours. I had a lot of fun doing this. I hope you guys got something out of it. A real quick summary. There is an issue tracker on rust-miniscript that has all of the open problems that I mentioned. It has bugs and also some janitorial stuff, in particular writing the PSBT finalizer. There are a couple of open bugs that are actually bugs that I need to fix. There is also work to do here. There is a reasonable number of straightforward things that need to be done and there’s a small number, like two, of hard things that might be thesis level work. It would be awesome if someone did the thesis stuff because then I would definitely finish the other stuff. Thank you all for listening. It has been an hour and 45 minutes. We have 15 minutes for questions if people are still awake.

Q - You want the website up?

A - Yeah can I put the website up?

Q - What’s Pieter’s repo?

A - I don’t know that Pieter has a public repo yet because his code is all patched into Bitcoin Core.

Q - There’s a PR for that?

A - So then Pieter’s repo is the PR. I think he has a separate repo where he compiled the web JS or something for the website but I think that’s a private repo. Is it public now? Ok, I should add the link. There you go, you can see the code that I’m hovering over. That one is Pieter’s website, there’s the code that produced it, there’s Pieter’s PR. There’s mine and Sanket’s code. We have a Policy to Miniscript compiler. Up top here’s a policy in the high level language. I click compile there and you can see it output the Miniscript. You can see the Miniscript and the policy look the same except that the Miniscript has tags on the ANDs and ORs and it has these extra wrappers stuck in there. It also gives you a bit of information about script spending cost. You can see that I put this 9 here saying that this branch of the OR is 9 times as likely as the other one to be taken. It gave me an average expected weight for spending it. It also wrote up the script here which is cool. This is Bitcoin script and Miniscript. Those are exactly equivalent; there is a straightforward linear time translation between them. It is literally a parser that you can write with one token of lookahead. It is linear time. If you have a Miniscript you can run it through this analysis tool and it gives you some cool stuff. Then here is the Miniscript reference that I mentioned. Here are all the different Miniscript fragments and the Bitcoin script that they correspond to. Here is the type system that I mentioned, you have these four base types: B, V, K and W. This is what they mean. The five modifiers, this is what they mean. Here is how they propagate forward. You can see all of the leaf fragments here, like pubkey checks and timelocks and stuff, have certain properties. The o, n, d, u here. If you have combinators like ANDs and ORs and stuff then the properties of those propagate from the properties of the children. You can convince yourself that all of these rules are correct. There are a lot of them. You can go through them if you want to not just trust but verify. You can check that these are what you expect. They are very straightforward to implement. James wrote all of this in Python and Sanket looked over it for the Bitcoin Core Python test framework.

Q - I just use this website?

A - Yeah. You can just copy this stuff right off the website. It is really nice how this turned out. You guys should’ve seen some of the earlier versions of this website.

There is some banter about the resource limitations which matter if you are writing a compiler but otherwise don’t matter. Some discussion about security. This is about how to satisfy and dissatisfy things, the different ORs and ANDs, the different ways of being satisfied or dissatisfied. This is where the different trade-offs in terms of weights come from. There is a discussion about malleability. Here is an overview of the algorithm that I described. This is more talk about malleability. This is the non-malleable satisfaction algorithm that I mentioned. You can see that it is a little bit more technical than I said but it is actually not a lot. I think Pieter overcomplicated this. He didn’t. Every time I complain about stuff he had some reason that he had written stuff the way that he did.

Q - He is very precise.

A - He is very precise, yes.

Then here are the three properties for non-malleability and here is how they propagate. It is basically the same as the last table. There you go. This is Miniscript. All of Miniscript is on this website with the entire contents of this talk, the justification for everything and the detailed rules for doing type checking and malleability checking and for producing non-malleable optimal signatures. It is all here on this page. It is a lot of data but you can see that I can scroll through it in a couple of seconds. Compare the reference book for any programming language out there to this web page that is like three pages long. It was actually very small and straightforward. Ordinary people can get through this and convince themselves that all the design decisions make sense. This gets us to the point where we really can laugh at the Ethereum for having a broken script system. Until now we couldn’t. Bitcoin script pointlessly had all of the problems of Ethereum and more. But here we are, we’re living the dream.

Q - When will this get a BIP number?

A - That’s a good question. I guess it should have a BIP number. A lot of things do that are not really consensus. BIP 32 has a number, 49 does, PSBT has a number. Our thought was that we would do that when we solve these open problems, which we keep thinking are easier than they turn out to be. I don’t have a plan for asking for a number. My feeling is that I want to finish the Rust implementation and I want to have something better to say than “this is open” regarding the optimal fee estimation. I would like to have a finalizer that somebody out there is using. I think maybe Miniscript should have its own number and the finalizer should have its own number, I don’t know. I don’t have specific plans for asking for numbers. I agree that it should have a number. It feels like the kind of thing that should.

Q - It feels that there should at least be some other documentation than Pieter’s website that may or may not randomly disappear.

A - That’s fair. Right now the other documentation is the implementations. You’re right, there should be something in the canonical repo and the BIP repo is a good candidate for that.

Q - Now that you’ve gone through all this pain, are you going to come up with the new Tapscript version? I guess it is versioned Tapscript so you could come up with a new one that only has the opcodes that makes life good and maybe completely different opcodes?

A - The question is whether we are going to optimize Tapscript for Miniscript. The short answer is no. When we start to do this we wind up getting ratholed pretty quickly. There are a lot of aspects of Miniscript that are kind of specific to the design of Bitcoin script. You can see how many different fragments we have here. If we were to create a new script system that directly efficiently implemented the Miniscript semantics we would actually want to make a lot of changes and it would result in a simpler and different Miniscript that we haven’t designed. What we landed on was just getting rid of CHECKMULTISIG because that was a particularly egregious thing and we made no other changes because there wasn’t any clear Schelling point or point where we could stop.

Q - Could you get rid of negative zero?

A - We didn’t even get rid of negative zero, no. That’s an interesting one.

Q - And the infinite length zero?

A - We might have made MINIMALIF a consensus rule, I think we did. That covers the issue with giant booleans.

Q - What should be in Core and what shouldn’t be in Core? There are two PRs currently open, Pieter’s and James’ for the TestFramework. I’m assuming there is going to be more functionality in your Rust library than will get into Core. How much?

A - The question is what should be in Core and what shouldn’t be in Core. Core should have the ability to sign for Miniscripts. There is really not a lot to signing. You don’t need to understand the script to sign it. Core should have the ability I think to recognize Miniscripts it has control over.

Q - Core should be able to finalize?

A - You could go either way but I also think Core should be able to finalize as Andrew says. That means that they have a RPC or something where you give it a PSBT with all the relevant data and Core will be able to output a complete transaction. You could argue that that is a separate thing from Core’s primary job but it is a really fundamental thing to create transactions. If we are going to have the createrawtransaction, signrawtransaction API a very natural successor to that would be a general PSBT finalizer.
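
As a rough illustration, Bitcoin Core already exposes a finalizepsbt RPC for ordinary PSBTs; a minimal sketch of calling it over JSON-RPC follows. The URL, credentials and PSBT string are placeholder assumptions, and a Miniscript-aware finalizer would be an extension of the same idea, not what this snippet does.

    import requests

    RPC_URL = "http://127.0.0.1:8332/"              # assumption: local node, default mainnet port
    RPC_AUTH = ("rpcuser", "rpcpassword")           # assumption: credentials from bitcoin.conf

    def finalize_psbt(psbt_base64):
        # finalizepsbt returns the fully signed transaction hex if every input could be finalized
        payload = {"jsonrpc": "1.0", "id": "demo", "method": "finalizepsbt",
                   "params": [psbt_base64]}
        r = requests.post(RPC_URL, json=payload, auth=RPC_AUTH)
        r.raise_for_status()
        return r.json()["result"]                   # {"psbt": ..., "hex": ..., "complete": bool}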

Q - Part of it is that there is a plan to have the Core wallet store Miniscripts which directly implies Core has to be able to finalize those and sign them.

A - Andrew is saying that there are plans for Core to support Miniscript in the descriptor wallet, to have arbitrary Miniscripts. That would require it to be able to finalize in order to sign for its own outputs. So what should not be in Core? The compiler here, that should not be in Core. That is a separate beast of a thing that doesn’t even necessarily have a canonical design. There are some interesting fee estimation and analysis tools that I have been hinting at. Those probably don’t belong in Core. Those probably belong in their own library or their own RPC tool or something like that because they are only relevant…

Q - We should get all of the fee estimation out of Core. Fee estimation should be its own process or its own interface.

A - Core should be able to estimate fees for its own stuff. But in Core should you be able to say “These keys are available, these keys are not. These are trusted and these are not” and so on and so forth. You can get into really complicated hypothetical scenarios that Miniscript can answer for but I don’t think it is Core’s job to answer that. I think Pieter would agree. Pieter continues to disagree that my scenarios are even sensible.

Q - When you were giving up this original plan of making Miniscript a preprocessor of script was there a certain detail that was convincing you or was it the sum of all the complications?

A - The question is when I had to give up my original dream of Miniscript and the Policy language being the same. I think it was Pieter explaining the way that he had separated the Policy language from Miniscript in his head. I didn’t have that separation in my model of things. He explained conceptually that first of all the Policy language had this information about branch probabilities that could not be expressed in script. There was just no way I could preserve that and go into Bitcoin script. I think that was what did it. Then I realized there was no way I was going to be able to unify these things. He had another point which was that once you were in Miniscript everything is deterministic. Miniscript has a deterministic transformation to script and back, but the compilation step from the Policy language to Miniscript involves doing all sorts of optimizations. There is a lot of potential there to discover new optimizations or give more useful advice or something. That’s a part that shouldn’t be a component of any protocols. At least if you had a compiler that was part of some protocol it would need to be constrained in some way and we didn’t want to constrain that early on. We had this clear separation: the Policy language having extra data that we couldn’t preserve, and the Policy language being a target for a lot of design iteration on optimization kind of stuff. Miniscript alone was a pure thing, a re-encoding of Bitcoin script for which all the manipulations we wanted to do clearly had deterministic results. I could have my beautiful little Miniscript world where everything was deterministic and there was only one thing. Basically I could treat the Policy language as not existing, just some ugly thing that I need to use to get Miniscript sometimes. That difference in opinion between Pieter and me persists to this day. When we’re developing stuff he puts all his time into the compiler and I put no time into my compiler. Eventually Sanket rewrote the entire compiler for the Rust implementation because mine didn’t even work by the time we were done changing the language. Once Pieter described this separation I realized that this is actually a sensible separation and that is something we should move forwards with and my dream of a simple Miniscript could still be preserved.

Q - Would the Policy language be supported in Core?

A - The question is will the Policy language be supported in Core. Andrew says probably not. I think that is probably true. That would be up to Pieter. If Pieter were here he could give you some answer. I think it comes down to Pieter’s opinion. I guess Pieter needs at least one other opinion to get stuff merged but my guess would be no. The compiler is a pretty complicated piece of software that’s doing this one shot task of figuring out an optimal script that doesn’t really integrate with descriptors obviously and in a natural way. It also has some more design iteration that we want to do. A question like if you have a threshold of 3-of-5 things, maybe you can get a more optimal script by reordering the components of the threshold. That is something that is not implemented and is probably intractable in general so we have to devise some heuristics for. There is design iteration on stuff like that that still needs to be done.

Q - It may also get stuck in review from everyone not understanding.

A - Andrew says it might get stuck in review which I certainly believe. I haven’t looked at Pieter’s compiler but on the Rust side the compiler is literally half the entire library. It is much more complicated. Everything else is really straightforward, all the same algorithm, iterate over everything, recurse in these specific ways and you can review it in a modular way. The compiler is this incredibly complicated thing, propagating information upward and downward, going through different things, caching different values and ordering them in certain weird ways and reasoning about malleability at the same time. It is quite a beast to review. I can’t see it getting through the Core review process just because it is such a complicated thing.

Q - A really high level question. Craig Wright wants to apparently secure about 6000 patents on blockchain technology which will obviously affect Bitcoin and the Ethereum communities and every other blockchain. When do you think he’s going to jail?

A - Craig Wright as I understand it is a UK citizen. UK law is different to US law and different to Canadian law in ways that benefit him. For example I understand in the UK that the truth is not a defense to a defamation suit. Ignoring all the perjury and so forth, he came to some US court and he got in a whole bunch of trouble for contempt of court, for throwing papers around and crying and so forth. But independent of that and independent of the patent trolling which unfortunately I don’t think violates any laws and he can just do, the other legal things he is in the middle of are these defamation suits and these counter defamation suits. I’m not a lawyer of course but my understanding is that because he is doing most of this crap in the UK, and UK law is a bit more favorable to this kind of trolling than it is in other jurisdictions, he is able to keep doing it. In the US he probably would have had no ability to do a lot of the stuff he is doing.

Q - Some cases have already been dismissed here in the UK. They don’t have jurisdiction over those kind of things.

A - I’m hearing that some cases have been dismissed. I know your question was a joke but it is a good question. I don’t know. I wish he would go to jail.

Q - If we have OP_CAT to force nonce reuse to do this kind of protocol could you catch this with Miniscript or a Miniscript extension?

A - The question is what if we added OP_CAT to script. There are a bunch of benefits like forcing nonce reuse and doing other things. The short answer is yes, we would extend Miniscript to support the new capabilities of OP_CAT but what specifically I don’t know. We haven’t really thought through how we would want to do that. The way it would look is that we would add more fragments to this website basically. It would have to be things that were composable. I don’t have specific plans but definitely I’m sure there would be some things you can do with OP_CAT that we would just add to Miniscript.

Q - …. preimages and keys. These are all things that you can do with scriptless scripts. How long do you see Miniscript being used up until we only use scriptless scripts?

A - With scriptless scripts you cannot do timelocks. Scriptless scripts are interactive and require online keys. I think Miniscript will forever be used for cold storage. My feeling is that we’re going to tend towards a world where cold storage keys use Miniscript as a means of having complicated redemption and recovery policies and the rest of the world doing cool fast paced interactive things will move to scriptless scripts. But I also think there will be a long period of research before people are comfortable deploying scriptless scripts in practice because they are complicated interactive crypto protocols.

Q - We may have hardware wallet support with the same assumptions as a cold wallet, not a deep cold wallet, just a cold wallet. We may have this kind of stuff for Lightning.

A - Support for Lightning?

Q - Support for Lightning and scriptless scripts.

A - Yeah when we get to that point I imagine people will just transition to scriptless scripts for hot stuff. I don’t know how quickly. It seems to me that we’ve got a little while, probably another year or two, maybe five. In all seriousness these are complicated interactive protocols. But in the far future I think Miniscript is for cold storage and scriptless scripts is for hot stuff.

Q - What would Simplicity be for, this new language that has been developed?

A - That’s not a question I can answer in 30 seconds. Miniscript can only do timelocks, hashlocks, signature checks. Simplicity can do anything with the same kind of assurances. Because Simplicity does significantly more stuff the kind of analysis you can do is much more powerful because it is defined in the Coq theorem proving system but also much more difficult. If you have a policy that you can encode in Miniscript you probably want to use Miniscript. In Simplicity you can implement all of the Miniscript combinators so you can just compile Miniscript policies directly to Simplicity which is kind of a nice thing. The short answer is Simplicity lets you do all sorts of stuff that you cannot do with Miniscript. It will let you do covenants, it will let you interact with confidential transactions by opening up Pedersen commitments and stuff like this. It will let you create algorithmic limit orders and stuff and put those on a blockchain and have coins that can only be spent if they are being transferred into a different asset under certain terms. You can do vaults where you have coins that can only be moved to a staging area from which they can only be moved back or have to sit for a day. A lot of what I just said Miniscript will never be able to do because they don’t fit into this model of being a tree of spending conditions. Simplicity is just infinitely powerful. You can verify the execution of any Turing complete program with Simplicity.

diff --git a/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins/index.html b/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins/index.html index 99b3bfd3fa..569f677fe8 100644 --- a/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins/index.html +++ b/london-bitcoin-devs/2020-05-05-socratic-seminar-payjoins/index.html @@ -13,4 +13,4 @@ Adam Gibson

Date: May 5, 2020

Transcript By: Michael Folkson

Tags: Payjoin

Category: Meetup

Media: https://www.youtube.com/watch?v=hX86rKyNB8I

Adam Gibson Pastebin on Payjoin resources: https://pastebin.com/rgWVuNrC

6102bitcoin resources on Payjoin/P2EP: https://github.com/6102bitcoin/CoinJoin-Research/blob/master/CoinJoin_Research/CoinJoin_Theory/P2EP/summary.md

Transaction fingerprinting discussion: https://github.com/zkSNACKs/WalletWasabi/issues/3625

BitDevs resources on Coinjoin: https://bitdevs.org/2020-02-19-whitepaper-series-11-coinjoin

The conversation has been anonymized to protect the identities of the participants.

Intro (Michael Folkson)

We are live, London BitDevs Socratic Seminar on Bitcoin privacy with Adam Gibson. This is a live call on Jitsi. We are streaming on YouTube so bear that in mind. Thanks to Jeff from Fulmo for helping us set this up. For those who don’t know Jitsi is open source, it doesn’t collect your data and it was surprisingly easy to get this set up. Let’s hope there are no problems today and we can show how wonderful Jitsi is. So far so good. This is London BitDevs. You can follow us on Twitter @LDNBitcoinDevs. We have a YouTube channel with some really good presentations from the last couple of years. Adam Gibson has talked about Schnorr, Chris Belcher has a couple of presentations on Joinmarket, and there are talks from Sjors Provoost, John Newbery and all sorts. We are also looking for future speakers. If you are a Bitcoin developer or a Lightning developer and you are interested in speaking about work you have been doing or a particular topic get in touch. We can do a presentation or we can do a Socratic Seminar like today. I’ll talk a bit about Socratic Seminars. It originated at BitDevs in New York. It is a discussion normally with an agenda that the organizers put together and you discuss all the various different resources on that agenda. The emphasis is on participation and interaction. This isn’t supposed to be a presentation from Adam today, this is supposed to be a discussion. We will try to get as many people who are on the call involved in that discussion as possible. For resources on doing Socratic Seminars online or in person check out bitdevs.org. Introductions: Adam Gibson I think everyone will know. He works on Joinmarket and a bunch of Bitcoin privacy projects. We have got a number of people on the call including Max Hillebrand and openoms. We normally start the Socratic by doing introductions. Obviously you don’t have to give your real name if you don’t want to. Also if you don’t want to do an introduction that is also fine.

(The participants in the call then introduced themselves)

Bitcoin privacy basics

For the benefit of the beginners who are perhaps watching on YouTube or maybe some of the beginners/intermediates on the call we will start discussing the basic concepts first. It will get advanced technically later on. Obviously we have got a bunch of developers who are working on privacy tech on the call. What are the problems with Bitcoin privacy? What are we worried about when we make a Bitcoin transaction in terms of privacy leakage?

Satoshi wrote in the white paper that privacy is based on pseudonymous identities, Bitcoin addresses in the very general sense. What we want to achieve is to delink these pseudonymous identities so that the several identities belonging to one user are not tied together or revealed as a cluster. The easiest way to get deanonymized is by reusing addresses. Then it is clear that this is the same pseudonymous identity. Even if you don’t reuse addresses there are certain ways that you can transact Bitcoin that still reveal ownership of coins and addresses for a specific user. These are heuristics. There is the common input ownership heuristic. Basically all these deanonymization techniques rely on the fact that they can link different pseudonymous identities to the same cluster.

I would expand a bit further. This is not just at the level of the transaction structure. All transactions are completely transparent because that is how everybody can validate. Also all of the other underlying infrastructure like the network level privacy is full of concerns. There is no encryption in the peer-to-peer protocol. There are many facets to this, not just the transaction structure.

There are a couple different aspects. There are things recorded on the blockchain and preserved forever and can be analyzed. There is also metadata that can be related to the network, IP addresses recorded together with the transactions in some databases that are outside of the blockchain. There could be timing analysis, trying to locate the source of the transactions. Also there are databases which are different from the blockchain but listing the transactions. The big exchanges or anything running with KYC which are connecting the transactions and withdrawals with the passport and personal data of the users. There are other things that we need to be aware of.
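
To make the common input ownership heuristic mentioned above concrete, here is a minimal sketch of the clustering it implies; the addresses and transactions are made up and the union-find is deliberately simplistic.

    transactions = [
        {"inputs": ["addr_A", "addr_B"], "outputs": ["addr_X", "addr_C"]},
        {"inputs": ["addr_B", "addr_D"], "outputs": ["addr_Y"]},
    ]

    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    # The heuristic: all inputs of one transaction are assumed to belong to one entity.
    for tx in transactions:
        first = tx["inputs"][0]
        for other in tx["inputs"][1:]:
            union(first, other)

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    print(clusters)   # addr_A, addr_B and addr_D end up in the same cluster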

What is a coinjoin? What is the problem that a coinjoin is attempting to solve?

A coinjoin is joining multiple transactions into one: multiple inputs creating a certain number of outputs.

I think the main problem is the connection between outputs. I mostly make the analogy to a twenty dollar cash bill. If you want to buy something for five dollars and you give the person a twenty dollar cash bill you get fifteen back. It is mostly known that the fifteen dollars are from the sender of the first payment. And so you can correlate the amount spent and the amount received back. This is the most common transaction type where you pay with one or several inputs to one address and create two new outputs. One for the receiver and one as a change output.

The idea with coinjoin is to literally join your coins. This is a way for multiple users to consolidate several of their coins in one transaction. The important idea is to break the common input ownership heuristic. That is one part of it. We can no longer assume that all inputs of a transaction belong to one entity. As would be the case for transaction batching where one entity does several payments in one transaction. It is a scalability improvement. A further privacy result that we want to get from this tool is that the outputs being created are no longer tied to the inputs. The pseudonymous identities of the input user are no longer linked to the output users.

One last thing is that coinjoin is completely trustless. It relies on the fact that every input requires a signature that authorizes the entire transaction. So unless all the participants agree about where the funds end up there is no possibility of theft.

Another thing that we are exposing in transactions is not only that the actual UTXO can be tied to an entity but also the payment history and the future payments from that point when the inputs are tied to someone. By employing a coinjoin what we can do is break the UTXO from its history. If we pay through a coinjoin or pay after a coinjoin we can decouple from the future history of the coins.
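
A tiny sketch of the kind of equal-output coinjoin described above; the amounts are illustrative and fees are ignored. The two 0.1 outputs are indistinguishable to an outside observer, and every input owner must sign the whole transaction, which is where the trustlessness comes from.

    coinjoin = {
        "inputs": [
            {"owner": "alice", "value": 0.15},
            {"owner": "bob",   "value": 0.12},
        ],
        "outputs": [
            {"value": 0.10},   # equal-valued mixed outputs: one is Alice's, one is Bob's,
            {"value": 0.10},   # but nothing on-chain says which is which
            {"value": 0.05},   # Alice's change
            {"value": 0.02},   # Bob's change
        ],
    }

    # The transaction balances (ignoring fees), and neither party signs unless the
    # outputs they expect are present, so nobody can steal.
    in_total = sum(i["value"] for i in coinjoin["inputs"])
    out_total = sum(o["value"] for o in coinjoin["outputs"])
    assert abs(in_total - out_total) < 1e-9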

What are the challenges when setting up a coinjoin?

There is fundamentally a coordination problem between the different participants. If you want to participate in a coinjoin you have to find other Bitcoin users who are willing to collaborate with you to construct this transaction. There are privacy considerations in whatever protocol you use. When you communicate with those parties to this transaction you generally will disclose more information if you do it naively. It is both a coordination problem, finding counterparts, and it requires additional layers of privacy to actually go through that whole process in a way that maintains the benefits of a coinjoin from the blockchain layer perspective.

We already spoke about the non-custodial aspect that nobody can steal and the privacy aspect that none of the participants can spy on each other. I think one important nuance that is very difficult to do in the implementation is denial of service protection especially because we work with pseudonymous identities here. It might be very easy for an attacker to do a sybil attack or to break the protocol at a specific time to simply delay it or make the service unusable. Denial of service is a very nuanced thing that we might not even have seen working properly in production yet.

We can speak about how to construct this transaction. These participants come together but then there must be a coordinator who puts together all the inputs and all the signatures and all the outputs. That can be a central entity like in the case of Wasabi or Whirlpool or can be decentralized in terms of one entity who is acting as the taker in the Joinmarket model.

So let’s move on to the subject of today, payjoin. What is a payjoin?

If I try to stick with the cash example. The same as before, I want to buy something for five dollars. I give the seller twenty dollars, the seller will give me five dollars and I give him twenty and five dollars. He gives me some cash back. From the perspective of the blockchain, outsiders can’t tell which input or which cash comes from me and which comes from the seller. The receiver of the payment gives some money to the buyer and receives mostly more back. It is to decorrelate.

I think that is mostly correct. It is better to think of it like this. Before the buyer pays, the seller offers him some extra money to be included in that payment. Let’s say they put it on the table: the seller puts the product they want to sell on the table and they also put another twenty dollars on the table. The buyer will pay the price of the product plus twenty dollars.

What do we want to try to break on the heuristic side? Firstly of course the common input ownership heuristic again, because now not only the sender but also the receiver provides an input. That is very nice. The cool thing is that this heuristic is now weakened not just for the payjoin transactions but for every coin consolidation transaction. Every transaction with more than one input might be a payjoin and thus the common input ownership heuristic does not hold up as often as it used to. That is a huge win already. Another point is the actual economic value transacted. If we take the previous example, Alice wants to pay for the pizza with 1 Bitcoin and Bob has 0.5 Bitcoin that he adds to this transaction. All of a sudden Alice gets her change back, for example 2 Bitcoin. But the output that belongs to Bob not only has the 1 Bitcoin payment for the pizza but also the 0.5 Bitcoin that Bob added, so this output is 1.5 Bitcoin although the economic value transacted was actually 1 Bitcoin. This is obfuscated as well.

It is interesting when you try to make the analogies with cash work. It is a good example where it doesn’t work too well I think because of the nature of cash. It is more fungible in its nature than Bitcoin with these txo objects that have a name on them. Let’s not get bogged down in analogies that don’t work too well. We want a few things. We want coinjoins that have the effect of breaking the common input ownership heuristic and we want them to be steganographic. Steganographic means that you are hiding in the crowd. You are doing it in a way where your transaction doesn’t look different at all from ordinary payment transactions. You are getting the coinjoin effect but you are not making it obvious that you are creating the coinjoin effect that creates a very strong additional privacy boost to the whole network because it makes blockchain analysis less effective. There has been some discussion recently about the purpose of payjoin. I think when I wrote a blog post about it the last sentence I wrote was something like this is the final nail in the coffin of blockchain analysis. That is the dream of it I think, that it makes blockchain analysis horribly ineffective. Currently we have a situation with coinjoins with equal sized outputs that have a tremendous benefit in delinking those outputs from the inputs. They have a significant benefit but the problem is currently as we understand it, this is a hot topic that nobody is 100 percent sure about it, blockchain analysis firms are marking those transactions as mixes and putting them to one side in their analysis or they are doing some crude percent based taint analysis where they say these coins are 20 percent connected. In either case they can see on the blockchain that these coinjoins are happening. What we are trying to do with payjoin is to trade-off having smaller anonymity sets per transaction because we are talking about two party coinjoins, that is intrinsic to the fact that a transaction is two party. A transaction doesn’t have to be two party, I could pay three people. We could get more complicated with this. For now we are just sticking with the two party case. It is intrinsically a small coinjoin with a couple of people in it. It is clearly not as powerful as using exactly the same output size which is intrinsically indistinguishable. This is not an intrinsically indistinguishable situation. If somebody looks at a payjoin they might be able to figure something out. We will get into that later. That is giving you the thousand foot view of where the motivation is. What I want to do for the next twenty minutes I want to keep this fairly brief, is to go through the history of payjoin as an idea.

History of Payjoin (Adam Gibson)

This pastebin was me writing in sequence all the things I remember about payjoin. I’ve got all those links open in my browser. Earlier descriptions, before anyone used the word payjoin: obviously coinjoin as a concept was first expounded in 2013 by Greg Maxwell. As should always be noted coinjoin is not an innovation in Bitcoin from 2013, it always existed from the start as something in principle people could do. He mentioned it before 2013, I think he called it “I taint rich”, this is way back in the annals of Bitcoin history. Coinjoin itself, it is there from the beginning, payjoin is a variant of that central idea that when you create a transaction it doesn’t have to be just your inputs. The strong power behind that idea is this crucial issue that at least with standard sighash flags, if you sign a Bitcoin transaction with your key on one of the inputs you are signing the whole transaction therefore fixing that signing to the specific set of outputs. You don’t take any risk in signing one of the inputs to a transaction and having someone else sign another input to the transaction. You can still make sure that you get the money you expect to get. Consequently there are various different models for coinjoin. This particular one was not taken very seriously by many people including myself because we thought it was a pain for a payer to coordinate with a receiver instead of the usual model of Bitcoin which is a payer prepares a transaction on their own and doesn’t even have to talk to the receiver. One interesting detail is that since Lightning has become quite a big thing in the last couple of years, it also has an interactive model of payment. There is some new technology that gets around that but let’s just say Lightning generally involves an exchange of messages between sender and receiver. It may be that that subtly influenced the thinking of the people including myself at a meeting in London in 2018 where we discussed possible ways to make coinjoin more widely used. The first link here is one of the first things that came out of that meeting in London. At the time Blockstream put out an article, it was Matthew Haywood who was at the meeting talking about pay-to-endpoint. At the time I thought this was a poor name. I thought payjoin would be a better name. That debate still exists. It is not a very interesting debate about names, who cares? If you see pay-to-endpoint (P2EP) and you see payjoin think of it as basically the same thing. Let’s say it is the same thing because it is not interesting to argue about. He wrote a long blog post. He gives all the basic principles. Towards the end he talks about some interesting questions around protecting against potential privacy attacks. This is still an interactive protocol. Even if it is only two party there could be scenarios where the sender could be trying to steal privacy from the receiver by for example getting lots of information about their UTXOs. There is some discussion of that there. There are some high tech ideas about how to solve it. You will see the nuances as we go on. That was one of the first explanations of the concept. I think technically nopara, who was also at the meeting, put out a blog post here on Medium. He was very interested in this idea of how we can prevent senders stealing privacy from receivers. We were looking at zero knowledge proofs and so on. He talks about that and as usual he has got cartoon characters. He always has some interesting pictures in his blog posts. I wrote this very dense wordy blog post.
I think it is one of my better blog posts even though it is not very pretty. It does go through some nice, pretty easy to understand examples that lead you into payjoin. For example, asking the question of hiding in a much bigger crowd: how could we do a coinjoin but make it look like an ordinary transaction? That is the story I am trying to tell here. I draw examples like here where Alice and Bob do a coinjoin where they both put in 0.05 and they both get out 0.05. That is stupid. There is no real transaction. There are no payment transactions that look like that so that is a failure. Then I talked about how you could make a coinjoin that looks a bit more like a normal payment because it has two outputs. Normal payments have a payment output and a change output and some inputs. It could be 1, 2, 3, 4. Who knows? The problem with this kind of thing is less obvious but it is very revealing when you understand the problem with this. 0.01 + 0.04 adds up to 0.05. 0.03 + 0.03 adds up to 0.06. That is slightly unobvious but on reflection it is obvious that a subset of the inputs, two of them, adds up to one of the outputs and the remaining subset of the inputs adds up to the other output. We call this subset sum analysis. Crudely separating the inputs and outputs into different subsets and seeing that they match up is a dead giveaway, not 100 percent certain, that what is happening there is a mixing event where two parties are almost certainly not paying each other; they are paying themselves and keeping the same amount of money they had to start with. This is a central problem if your goal is to try to create coinjoins breaking the common input ownership heuristic but at the same time you don’t want to give away that you are doing that. That is where payjoin comes in. Payjoin is accepting the inevitable. If I want a mixing transaction to look like a payment transaction it unfortunately does have to be a payment transaction. Is that 100 percent true? No. But payjoin says “Let’s accept that. Let’s do mixing transactions in payment transactions”, in other words together at the same time. That’s why we end up with stuff like this where, as you can see here, Alice actually paid Bob 0.1. If you imagine looking at this on the blockchain you certainly wouldn’t expect that there was a payment of 0.1 Bitcoin. But actually according to that logic there was. It is because Alice and Bob coordinate so that Bob, the receiver, actually contributes this input and he receives back this output. The net difference between the input he provides and the output he receives is 0.1. So he was actually paid 0.1 even though it is not obvious.
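
The subset sum analysis just described can be sketched in a few lines; the numbers are the ones from the example above and fees are ignored.

    from itertools import combinations

    inputs = [0.01, 0.04, 0.03, 0.03]
    outputs = [0.05, 0.06]

    def looks_like_a_mix(inputs, outputs, tol=1e-9):
        # Can the inputs be split into two groups whose sums match the two outputs?
        total = sum(inputs)
        for k in range(1, len(inputs)):
            for subset in combinations(inputs, k):
                s = sum(subset)
                if abs(s - outputs[0]) < tol and abs(total - s - outputs[1]) < tol:
                    return True
        return False

    print(looks_like_a_mix(inputs, outputs))   # True: 0.01 + 0.04 = 0.05 and 0.03 + 0.03 = 0.06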

Unnecessary input heuristic

There were a few blog posts there. I am not sure any of them are superb. What is good is a nice simple summary on the Bitcoin wiki. It goes over the basic ideas here in a couple of paragraphs. We had the idea, it was out there in 2018. It had become concrete as an idea that we would really like to do this. You will find discussions on the mailing list from late 2018. Before I started writing any code for it though I did want to figure out a couple of details. This link is interesting not so much because of the content of the description of a protocol for payjoin but because of the discussion that happened afterwards particularly when LaurentMT raised several points. This is a discussion between myself, Chris Belcher and LaurentMT. We are discussing different heuristics that might give away… Even though in the example I showed you, you don’t know it is a payment of 0.1 because you can’t see the amount, there is more to it than that. How do wallets select inputs to provide the necessary amount of Bitcoin to make the payment? Every wallet is a bit different in this regard but there are various rules you can think about. What is discussed here is something we ended up calling the unnecessary input heuristic (UIH). I am not going into it now because it is going to take a bit too long but I strongly recommend reading it. We even broke down the unnecessary input heuristic into two types. To give a high level idea, if you are going to be paying 1 Bitcoin you wouldn’t use a 2 Bitcoin UTXO and a 10 Bitcoin UTXO to pay 1 Bitcoin. Because the 2 Bitcoin UTXO on its own would have been enough. Is that an absolute statement? It is nowhere near absolutely true because wallets have really complicated ways of choosing UTXOs. That’s the general idea and there is a lot to investigate there. It is still an open question what one should do about that in the context of payjoin so that it doesn’t look suspicious. That was the discussion on that gist that I linked there.
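
A rough sketch of the idea, not the precise UIH1/UIH2 definitions from that gist: treat one output as the suspected payment and ask whether a strict subset of the inputs would already have covered it, in which case some input was unnecessary.

    from itertools import combinations

    def has_unnecessary_input(input_values, suspected_payment, fee=0.0):
        # If some strict subset of the inputs already covers the payment plus fee,
        # an ordinary wallet would probably not have selected the extra input(s).
        for k in range(1, len(input_values)):
            for subset in combinations(input_values, k):
                if sum(subset) >= suspected_payment + fee:
                    return True
        return False

    # The example from the discussion: a 2 BTC and a 10 BTC UTXO spent to pay 1 BTC.
    print(has_unnecessary_input([2.0, 10.0], suspected_payment=1.0))   # True: 2 BTC alone was enough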

Existing implementations

Existing implementations, here I am talking about things that were implemented in 2018, 2019. At that time I worked on a payjoin implementation within Joinmarket. That is that link there. It is just code. That was an implementation where the idea was let’s do payjoin but only between two Joinmarket wallets because it is much easier that way and where we use the existing messaging infrastructure where Joinmarket bots talk to each other end-to-end encrypted over IRC servers. Let’s use that messaging channel and coordinate that protocol so that one Joinmarket wallet owner can pay another Joinmarket wallet owner. Around the same time Samourai did what they called Stowaway. There is some high level description and some code. They were basically doing the same thing as what I did in Joinmarket which was do payjoin but they called it Stowaway, between two Samourai wallets. Two Samourai wallets can pay each other with Stowaway and use this exact technique where the receiver contributes UTXOs to the transaction thus breaking the common input ownership heuristic and making the payment amount unclear. Those were early implementations. The real issue of course is that payments are often between parties not using the same wallet. In fact the majority of payments occur between parties not using the same wallet. We really needed a standard and that started getting discussed. The first person to take the initiative on that was Ryan Havar who called his proposal Bustapay. He created BIP 79 and it lays it out in fairly simple terms. The focus here is not so much on two peers using a phone wallet or using a software wallet on their desktop but a customer paying a merchant. It doesn’t insist on that but that’s its focus because it is a client server architecture where the server is a web server. He describes in detail a protocol where using a HTTP POST request you could put a proposed transaction in the body of a request. Here is where the smartness of a good payjoin protocol comes in. We discussed this at that London meeting in quite some detail. Take the original proposer who wants to pay with a payjoin: you’re a merchant who is selling widgets for 1 BTC. I’m a customer, I want to pay you with payjoin because it is better for privacy. I don’t just say “Please can I do a payjoin?” I send you a payment which is not a payjoin which I’ve signed because I am intending to pay you. Instead of broadcasting my payment on the network which is the normal workflow and you as the merchant notice the payment on the blockchain, I sign the transaction as normal but do not broadcast it and I send it to you in the body of a POST request. You receive that and you look at it. You say to yourself “That looks like a decent correct payment of this invoice.” Now because he has used the payjoin flag let’s say I am going to send him back a partially signed transaction and he is going to co-sign that and complete the payjoin. Remember that is the whole thing about payjoin. Like any coinjoin both parties or more than one party has to sign the inputs. He signs his input(s), sends it back to me as the response to my request. I look at what he has sent me and say “That is still basically the same transaction. I am still spending the same amount of money, I’m not getting cheated.” He has signed his inputs, his inputs look the right type, like they are the same kind of input as my kind of input. There are a set of checks.
Then I say “I agree.” I cosign that payjoin transaction and I do broadcast that onto the network unlike the original proposal transaction which I did not broadcast on the network. Then you as a merchant, you keep an eye on the blockchain. If you see the payjoin transaction was broadcast, great, that means your invoice was paid. If you don’t, after 1 minute you have in your back pocket the original payment transaction that I sent you. The nice thing about this protocol is that it ensures that the payment goes through whatever happens. Can anybody explain why it is important that in the payjoin protocol the customer who is paying sends this non-payjoin transaction first?
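
To summarize the flow just described in a sketch: the wallet object and its methods are hypothetical stand-ins, and real implementations exchange the transactions over an HTTPS POST to the receiver's payjoin endpoint.

    def pay_with_payjoin(wallet, post_to_endpoint, pj_url):
        original_tx = wallet.build_and_sign_payment()       # a normal valid payment, NOT broadcast yet
        proposal = post_to_endpoint(pj_url, original_tx)     # receiver adds and signs its own input(s)
        if proposal is not None and wallet.checks_pass(original_tx, proposal):
            # checks: I still spend the same amount, the receiver's inputs are signed
            # and of a matching type, my outputs are intact, and so on
            return wallet.cosign_and_broadcast(proposal)     # the payjoin transaction goes out
        return wallet.broadcast(original_tx)                 # fallback: the plain payment still happens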

It is very important to limit the queries the payer can do because otherwise if he didn’t send over a valid transaction first then he could keep asking until he knows all the UTXOs in the wallet. That would be a big privacy leak.

It is denial of service protection. To have an economic cost for users misbehaving. This is a payment coinjoin so the user actually wants to buy something from the merchant and he wants to make that transaction anyhow. He can sign it and if the receiver is no longer online or something goes wrong it is just broadcasted anyhow without the added privacy of the payjoin but it is a final transaction.

I could expand on that question because I think this is a really important point. Does anybody think that that anti snooping measure is not sufficient to stop the attack where people find out all the contents of the merchant’s wallet as they iterate over their UTXOs?

I think it is not sufficient because if the original payment proposed is higher than the amount in the seller’s wallet it could expose the whole wallet in the first step.

I’m not sure how avoidable that is. If you are a merchant and if you have widgets for 100 dollars.. it is a really tricky point.

A complication is that the merchant has to select a specific input that they would like to join. If that process is not deterministic that means multiple requests would reveal their funds. I think this is orthogonal to having the payment amount. If a user sends a payment, whatever the merchant responds with should be determined completely by what they intended to receive so that this request for Bitcoin does not leak more than the minimum information required to participate in this protocol.

I don’t know if everyone here has read BIP 79 but he was quite specific about this point that the merchant must be careful to always respond with the same proposed joining UTXO if he is receiving requests…

“To prevent an attack where a receiver is continually sent variations of the same transaction to enumerate the receivers utxo set, it is essential that the receiver always returns the same contributed inputs when it’s seen the same inputs.” (BIP 79)

I don’t think he explained it the best way there but I understood him to mean that if someone keeps sending a proposed transaction spending the same UTXOs from the sender you should return the same contributed UTXOs as the receiver. There is a slight detail that is maybe missed there. What if the sender sends a new version where the input set is not identical but is overlapping? I didn’t really want to go into it in extreme detail here because I think this is a tricky point. I did want to raise it because this discussion is maybe not entirely finished. When people think carefully about how you might attack payjoin there are various different angles to attack it. Certainly this thing of snooping is something that people are going to try hard to get right. I think that principle he expresses there is a very useful one. But I’m not sure that even that is quite enough to say “This is a safe thing. There is no possibility of snooping.” Another high level consideration to bear in mind is that by the nature of being merchants, businesses are exposing quite a bit of information. At least in the current version of Bitcoin, where the protocol is completely transparent in terms of amounts and the transaction graph, buying stuff from a merchant is a way to expose information about them. That was seen even in the very earliest academic analysis of Bitcoin privacy in 2013. If you look on the Bitcoin privacy wiki page you will see a section called Mystery shopper payment; Chris Belcher, who wrote that page, explains it in some detail there. Anyway it is a big topic but I wanted to highlight it because it is one of the more important conceptual and technical issues in the idea of payjoin.
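
A minimal sketch of the rule quoted from BIP 79 above, with hypothetical names: the receiver remembers which UTXO it contributed for a given set of sender inputs and keeps returning the same one. The overlapping-but-not-identical input sets raised above are exactly what this simple keying does not handle.

    contributed_cache = {}

    def choose_contributed_utxo(sender_inputs, candidate_utxos):
        # Key the decision on the exact set of sender inputs seen, so repeated or
        # replayed proposals cannot be used to enumerate the receiver's wallet.
        key = frozenset(sender_inputs)
        if key not in contributed_cache:
            contributed_cache[key] = candidate_utxos[0]   # some deterministic pick
        return contributed_cache[key]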

One way to limit that interaction is by making the endpoint disposable. You create an endpoint for receiving only one proposal. One use only. That is possible to make it easier. If your endpoint is a Tor onion address you can create as many disposable addresses as you want or you can manipulate the URL in your website to make those endpoints disposable. The buyer doesn’t have a valid endpoint to continue making proposals to you.

This is where the subtlety of whether it is a peer-to-peer payment or a merchant payment comes in because if you did that as a merchant you would just be spawning out new onions or endpoints for every payment request. It doesn’t make any difference in that case because that is how it works. I was just experimenting with a BTCPay Server instance that Mr Kukks set up and I just kept pinging it with different requests. It doesn’t matter if it is an onion there does it?

It doesn’t matter really, you are right. For example, for a scriptPubKey you can associate an onion address to a scriptPubKey and make it disposable. You can only receive one proposal for that scriptPubKey.

One thing that I also just realized is that the sender can do a double spending attack because he makes the first signed transaction and then he receives the pre-signed or partially signed transaction from the receiver. So he knows the fee rate. He can simply not sign the second transaction, the payjoin transaction, and double spend the original transaction paying back to himself with a slightly higher fee rate. If he is the one to broadcast his transaction first the original transaction may not even be broadcast because full nodes will think it is a double spend if RBF is not activated. That might be an issue too.

I can’t remember the thinking around that. I seem to remember reading about it somewhere, that is an interesting point for sure. Let’s keep going because we want to get to the meat of it, what people are doing now. I wanted to give the history. That was the BIP (BIP 79). It is a very interesting read, it is not difficult to read. I did have some issues with it though and I raised them in January 2019. Ryan Havar had posted the proposal and I wrote some thoughts on it here. Fairly minor things, I didn’t like the name. Protocol versioning, other people didn’t seem to think it mattered that much though apparently some of them are changing their mind now. Then there was the whole thing about the unnecessary input heuristic which we talked about earlier. It is interesting to discuss whether something like that should be put in a BIP, a standardization document, or whether it should be left up to an implementation. We will come back to that at the end. There are some other technical questions, you can read that if you are interested in such stuff. The conversation in different venues has been ongoing.

Payjoin in BTCPay Server

The big change recently was this one from BTCPay Server. Here is the spec document that they have written, which is still undergoing changes as we speak. Payjoin-spec.md in the btcpayserver-doc repo, it describes a delta to that BIP 79 that we just looked at. It is the same basic idea in terms of making a HTTP request to some endpoint. I forgot to mention, importantly, the BIP 21 URI. What is a BIP 21 URI?

BIP21 URI is a URI with a Bitcoin scheme. A scheme is just a standard thing for all URIs that describe the protocol. Following that is a Bitcoin address and optionally some metadata parameters. It is just a standard format that makes it so that you can click on a payment and if you have a wallet configured locally on your computer it could go from an email or a website directly to that wallet with the correct payment amount and the correct destination address. I think there is an optional payment description as well, I don’t remember the details.

I think the idea is that you can have several optional fields. You could also have required fields such that if it is not recognized then that is not allowed. An example of this is Mr Kukks’ donation BTCPay Server instance. You can see that you’ve got this scheme, bitcoin; we have been using this since 2011 or 2012, it has been around a long time. It is an early BIP. You have the address, amount and critically in this case pj meaning payjoin, the flag that tells the merchant or whoever is receiving the payment that this is intended to follow that payjoin protocol that BTCPay Server are defining in that document which is itself a delta to BIP 79 which had Bustapay instead. This just has payjoin. It is a delta that doesn’t change that much. It changes a few things about the communication between the parties. The most important change apart from trivially the name is that it uses PSBT. This is one of the things I was complaining about in the original proposal at the time in Bustapay, I thought we’ve got to use PSBT because everyone is obviously going to migrate to that. It is a sufficiently flexible standard for cross-wallet sending transactions. Does anybody not know what PSBT is? It means partially signed Bitcoin transactions. It is a standard, BIP 174. It does actually allow you to use a raw hex format for a transaction instead of PSBT. As far as I’m aware everybody who has implemented this is using PSBT. This adds a little bit more detail. It also adds stuff that is very useful. If you ever find yourself in a position where you want to implement payjoin you’ve got the advantage that it tells you all the checks that you should be doing on both the receiver and sender side. It has some interesting considerations about the relationship between the choices we make in building these transactions and the position of the blockchain analysts. That is what we are going to move on to in the next few steps. They brought out that document and they implemented it within BTCPay Server. Does anyone know when they pushed that out? Three or four weeks ago? It was recently.
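
For illustration, a BIP 21 URI carrying the pj parameter can be picked apart with a few lines of Python; the address and endpoint below are placeholders.

    from urllib.parse import parse_qs

    uri = "bitcoin:bc1qexampleaddress0000000000000000000000?amount=0.01&pj=https://example.com/pj"

    body = uri.split(":", 1)[1]                  # strip the "bitcoin:" scheme
    address, _, query = body.partition("?")
    params = parse_qs(query)

    amount = float(params["amount"][0])          # 0.01 BTC
    pj_endpoint = params.get("pj", [None])[0]    # payjoin endpoint; absent for a plain BIP 21 URI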

In this merchant scenario could the merchant help coordinate combining purchases from different shoppers in the same transaction to mix more UTXOs?

I think that is difficult. Something simpler than that but somewhat similar, one thing that was discussed in the BTCPay Server doc was receiver’s payment batching. It is a nice thing, it is not hard. The way the protocol is defined the sender of the payment doesn’t bother looking at the outputs which do not belong to him. That allows the receiver to be flexible and batch his own payments to his own suppliers, he may want to pay three different people at the same time, he can batch that into the payjoin. This makes it more economical for him at least and also confuses the blockchain analysts a little bit more, that is debatable. The other idea of having multiple payers paying at the same time, it brings us back to that coordination problem. When you try to coinjoin with 4, 5, 10 people they have all got to be there at the same time. The good thing of course is that they don’t have the problem of coordinating the same output amount because they are all making their own payments, that is fine. The bad thing is that they all have to make their payment at the same time. Today I was testing this protocol and it was cool doing coinjoins that take less than one second. I was pressing on the keyboard and the payment just went through. I am so used to in my head dealing with Joinmarket where it takes a minute to figure everything out and send the data back and forth. The answer is yes in theory but there are some practical difficulties there.

There is a privacy concern for the payers. They might learn about each other’s inputs and then refuse to sign which I think is a serious issue.

To reiterate, there is a coordination problem, a privacy problem and a protocol jamming problem. The protocol might not finish. Whenever you coordinate between multiple parties you have to deal with this stuff, it is pretty hard. The upshot is that it would be very hard to do that.

Can I have two wallets and make a payjoin to myself just to fake it and get more anonymity?

The thing with payjoin is that it is steganographic. It is really not clear that the common input ownership heuristic is broken, it might look like it holds. If you fake a payjoin you are just doing transaction batching or input consolidation. In your case the common input ownership heuristic actually holds. You might be deanonymized or you might not be, it may fool someone. Because payjoins are steganographic, a fake payjoin like this would not weaken the common input ownership heuristic.

I think one can answer a little simpler than that even though that is correct. By steganographic we mean that we are trying to make coinjoins that look like ordinary payment transactions. If you make a payjoin to yourself it is not achieving any goal that I can understand because you are just making a payment to yourself. Notice how it is different from doing a fake equal output coinjoin. I’m not arguing that that is a particularly good or bad thing to do but if you do that you are creating a coinjoin which blockchain analysts may choose to flag and say that is a mixing transaction. Here you are creating something that looks like an ordinary payment so they wouldn’t flag it. You may well raise the question is it interesting to do payments to yourself, multiple hops kind of thing. Maybe, maybe not. It is a complicated question. Intrinsically it is steganographic, that’s the main answer so it doesn’t really make sense.

If I would like to top up my second wallet can I make a payjoin to my second wallet to fake it and get more anonymity from it?

My answer is what is the difference between doing that and just paying your second wallet? I don’t see the difference. In terms of their onchain footprint they look the same. An ordinary payment and a payjoin, that is the idea.

Does payjoin usually fragment or aggregate UTXOs? I assume for privacy it would fragment. In that case do you think it could cause a long term effect on the size of the UTXO set on nodes?

I am going to give my approximate answer, I’m sure others have opinions on this. As far as I understand, unlike equal sized output coinjoins which do create extra UTXOs… Equal sized coinjoins over time are not going to make your wallet worse than some kind of equilibrium state. But they are obviously using up a lot of blockchain space. Payjoins I’d argue are maybe even better because you are making a payment anyway so the only thing that is happening is on the receiver side. On the sender side it doesn’t change anything. You use up the same inputs you would have used anyway, you get back the same change you would have got back anyway, nothing has changed there. For the receiver though it does change the dynamics of their wallet because if you were receiving let’s say 10 payments a day, a merchant receiving Bitcoin, if you don’t use any payjoin at all you are going to get 10 new UTXOs that day. At some point you are going to have to consolidate them somehow. If on the other hand you use payjoin for all of them, in the most extreme case you just have 1 UTXO at the beginning of the day and 1 UTXO at the end of the day. With each new payment you would consume the previous payment’s UTXO as input and get a new one coming out. We call that the snowball effect because that tends to lead to a larger and larger UTXO. The reality is that no one is going to be 100 percent payjoins so it is going to be more complicated. It is also not 100 percent clear what the effect is in terms of the economics of how much money you spend in transaction fees as a merchant using this technique versus not. A complicated question.
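
A toy illustration of the snowball effect just described, with made-up numbers:

    payments = [1.0] * 10        # ten incoming 1 BTC payments in a day (illustrative)
    merchant_utxo = 1.0          # the merchant's single starting UTXO

    history = []
    for amount in payments:
        # Each payjoin spends the merchant's only UTXO as the contributed input
        # and returns a single, larger UTXO.
        merchant_utxo += amount
        history.append(merchant_utxo)

    print(history)               # [2.0, 3.0, ..., 11.0]: an easily traceable chain of growing outputs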

It is an interesting question with the snowballing effect because on the one hand if you use the same UTXOs for payjoin transactions over and over again that might lead to fingerprinting that in fact a payjoin transaction is going on. We would know that the common input ownership is actually broken. Then it might be worse. That is worth considering. Then it is a question of coin selection of how to do payjoin. I think we have a quite good effect with using equal outputs of a coinjoin or the change of an equal value input coinjoin like Wasabi or Joinmarket. Here if both users have change from a coinjoin and then they do a payjoin the heuristic would say that both of these changes belong to the same user. That might help quite a lot with the fingerprinting of change because it would be screwed up quite a lot.

That gets even more complicated. In the example you just gave, if we did an equal sized output coinjoin together and there were 15 people in the set, 2 of us in that set took the change outputs we got and made a payjoin with them then I think we are going to give away that we have done a payjoin because the way coinjoin software works now is it only produces one change output. It wouldn’t produce two. Unless yours produces two in which case that is fair.

I’m not sure how it holds up for the participants of the same coinjoin.

I think we are getting into the weeds here with these details but maybe it is ok because these are the kind of things that people are thinking about as they go through these protocols.

Why is it worse to do this snowballing? I didn’t get it. Maybe you can elaborate on that. The sender has to provide a transaction to the receiver. The receiver does a partially signed Bitcoin transaction and sends it back to the sender? Is this right? A three way handshake?

It is two way not three way.

The sender doesn’t have to provide another output. It just has to send the transaction. The transaction is not completed from the receiver?

Why might snowballing usually be bad? In this case where a merchant receives a hundred payments in one day, if he were to take the first payment and use it as input to the second payment then he would double the amount, go from 1 Bitcoin to 2 Bitcoin. Then he would use it again as an input to the next transaction where he would receive another Bitcoin. It would be 3 Bitcoin. By the end of the day he would have a single UTXO with 100 Bitcoin. The bad thing about that is that it is a very obvious behavior to observe on the blockchain because usually the way the merchant works now, they are going to get a lot of transactions where the inputs are just normal sized but here you could easily trace a single UTXO being consumed and then produced. Consumed and then produced. In a series of steps. I’m not saying 100 percent because the whole point of all this is that nothing is certain in blockchain analysis. It is not like anyone can be 100 percent sure of anything from the blockchain data on its own. You don’t want to make really obvious flags like that. The second question, I understand you are a little confused about the sequence of events in a payjoin as currently implemented. I wish I had a nice diagram of steps. I think it is pretty simple. If you look at this example transaction here with Alice and Bob. How would it actually end up happening? Alice would come along and she would have these first two 0.05 and 0.09 inputs and she is planning to pay Bob 0.1. 0.05 plus 0.09 being 0.14 and this is 0.1 so this adds up to enough to pay that 0.1. She could make a normal transaction where the inputs were 0.09 and 0.05 and one of the outputs was 0.1 paying Bob and the other output would be change paying her 0.04. That would be a normal payment transaction where she pays 0.1 and she gets 0.04 back as change. Here is what she does. She makes that transaction, the normal transaction, she signs it as normal. The normal next step is to broadcast it onto the Bitcoin network. She does not do that. Instead she sends the payjoin request. She has already got a BIP 21 URI, a request from the server saying “Please pay me 100 dollars and here is the payjoin URL.” She builds that standard normal payment transaction and sends it in her HTTP request to that payjoin URL. The server only has to do the following. It has to check that that payment transaction is valid, does not broadcast it, and instead creates a partially signed transaction of exactly the type that you are seeing here. He is the one who creates this transaction. He takes the same inputs that Alice gave him, the 0.05 and 0.09 inputs, he adds one of his own inputs. Then he creates two outputs but the difference being of course that while Alice still gets her change, his output is no longer 0.1 it is now 0.18 because he added in the 0.08 that he provided to the inputs to balance it out. He creates this transaction but of course he cannot completely sign and broadcast the transaction because he is not Alice and he can’t sign these first two inputs because she owns these inputs. He creates the transaction in a partially signed state where he has signed his input here and her inputs remain unsigned. In that partially signed form, as a PSBT, he sends it back as part of his HTTP response. When Alice receives it she looks at it, does all the checks, if she is happy she signs these first two inputs, broadcasts the whole thing onto the network. If anything goes wrong at any stage in that process one of the other parties can broadcast the original ordinary transaction.
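
The numbers from that walkthrough, checked in a couple of lines (fees ignored):

    alice_inputs = [0.05, 0.09]                       # Alice's inputs, total 0.14
    payment = 0.10                                    # the invoice amount
    alice_change = sum(alice_inputs) - payment        # 0.04

    bob_input = 0.08                                  # the input Bob contributes
    bob_output = payment + bob_input                  # 0.18, what Bob receives back

    # The payjoin transaction balances: total inputs equal total outputs.
    assert round(sum(alice_inputs) + bob_input, 8) == round(alice_change + bob_output, 8)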

There are two transactions involved. One is from Alice. She sends a complete signed transaction to the receiver. The receiver just looks at it, verifies that the signatures are valid and then creates a completely new partially signed Bitcoin transaction with the amounts of the previous transaction…

It is not completely new. Maybe this is the point that is confusing you. The one that he creates still uses the same Alice inputs as before.

The same inputs but the transactions are different because the first transaction is already signed and the receiver needs a partially signed Bitcoin transaction.

Don’t forget that whatever signatures Alice had on the original transaction are no longer valid in the new transaction. I think you understand it. When you look at this kind of transaction you have in general four possible interpretations. You could interpret it as being a payment of 0.18 with a change of 0.04 and not being a payjoin, being an ordinary transaction. You could interpret it as being a payment of 0.04 and a change of 0.18 and not being a payjoin. Or you could interpret it as a payjoin, there are at least two more interpretations. There are multiple interpretations. It could be the payment is 0.1. I’ll let you figure out the details yourself. The point is that you can take the position that it is a payjoin or you can take the position that it isn’t a payjoin and that will give you different possible interpretations. It would be silly to think that payjoin somehow hides the amount totally. Obviously it doesn’t hide the amount totally. There are a fixed number of combinations of numbers here. Unless this is opening a Lightning channel and you don’t know what is going on because who is paying who? You could be paying someone on the other side of the network or it could be a coin swap and so on. If we just restrict ourselves to ordinary transactions or payjoins there are a finite fixed number of interpretations.
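
Before moving on, here is a rough sketch (my own illustration, not BTCPay Server’s implementation) of the receiver-side construction just described: keep the sender’s inputs and her change output, add one receiver input, and bump the receiver’s output by the value of that input. Transactions are modelled as plain dicts with amounts in satoshis; a real wallet would build and sign a PSBT.

```python
def build_payjoin_template(fallback_tx, my_input, my_address):
    # Alice's inputs plus the receiver's contributed input
    inputs = list(fallback_tx["inputs"]) + [my_input]
    outputs = []
    for addr, value in fallback_tx["outputs"]:
        if addr == my_address:
            # the receiver's output grows by the value of the contributed input
            outputs.append((addr, value + my_input["value"]))
        else:
            outputs.append((addr, value))  # Alice's change stays unchanged
    return {"inputs": inputs, "outputs": outputs}  # returned to Alice as a PSBT


# The Alice/Bob numbers from the example above (satoshis: 0.05, 0.09, 0.1, 0.04, 0.08)
fallback = {"inputs": [{"value": 5_000_000}, {"value": 9_000_000}],
            "outputs": [("bob_address", 10_000_000), ("alice_change", 4_000_000)]}
payjoin = build_payjoin_template(fallback, {"value": 8_000_000}, "bob_address")
print(payjoin["outputs"])  # [('bob_address', 18000000), ('alice_change', 4000000)]
```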

What about implementations? We mentioned the BTCPay Server delta to BIP 79 which I am sometimes calling BIP 79++ in some documents. Wasabi has merged a PR into master, thanks to Max for confirming these details for me. This is the pull request that was merged five days ago. They have got sender side BIP 79++. What that means is that you can use a Wasabi wallet to pay a BTCPay Server merchant using this technique as we just described it, where either it works or you fallback and make an ordinary payment anyway. I have also done one recently; it is not merged into master because there is this big refactoring behind it. It is finished in the sense that it is working but not finished in the sense of being merged. That is the implementation in PR 536 in Joinmarket. BTCPay Server itself I heard has some kind of plan for a payjoin implementation for a client wallet. Mr Kukks who worked on the BTCPay Server code for this pointed me to this Javascript library. I don’t know the exact relevance but it is interesting because you can look at the logic in the code. I heard that they had some kind of client wallet, not just the server wallet. I also heard that BlueWallet are doing a BIP 79++ implementation? Does anybody know about that?

As far as I know this Javascript payjoin client library is used by BlueWallet. They are working on at least the sender side and are as yet unsure about the receiving side.

That leaves one remaining open question. Does anyone know or have opinions about which other wallets should or could or will implement this? In a minute I am going to show you the result of my implementation and a discussion of fingerprinting.

Electrum should implement it.

I agree. Has anybody heard from them about it? I’ll ping ghost and see if anyone can tell me anything.

Green wallet. Blockstream had a blog post to kick off all this. They have been sponsoring BTCPay to use the standard they were proposing. It would be quite logical to get into Green wallet as soon as possible.

Fingerprinting

This brings us nicely into the topic of fingerprinting because unless I missed something, Green wallet’s inputs are distinct.

They would be a 2-of-2 multisig if they used that feature. But you don’t necessarily use that feature in Green wallet, it is optional.

I actually have Green wallet and I use it fairly regularly but I never bothered to look at the actual transactions. I thought it was either 2-of-2 or 2-of-3 or is there an option to have single key?

It is either 2FA or Blockstream has the other key.

Are you saying that you think it is perfectly reasonable that they could implement payjoin between Green wallets? I don’t think it would work with a merchant because the merchant would not have the same kind of input scripts. Am I missing something? What I raised on this post is I made my second experiment… I had a BTCPay Server remote instance that I chose. I paid him. I made this payjoin transaction using my Joinmarket wallet. If you are confused, Joinmarket does have native SegWit wallets, it is just that we don’t really use them because you can’t use them in the equal amount coinjoins. This is from a Joinmarket wallet to a BTCPay Server instance. It produced this transaction. First of all let’s look at the amounts. Is there anything that stands out in the amounts? I was asking a trick question. Unless you are talking about what is already highlighted here in blue (Unnecessary input heuristic) there isn’t really anything remarkable about those amounts. blockstream.info, this particular block explorer, does mark rounded amounts. There are no rounded amounts here. I chose a random payment amount. That was my choice because it was a donation server, I could choose a payment amount. The amounts don’t give away very much. Indeed you would be hard pressed to figure out how much money I donated just by looking at these numbers. You could come up with a list of possibilities. The interesting thing is if you actually expand the transaction and you look at the whole serialization here there is something that might stand out about this transaction, maybe two things. The first one is the unnecessary input heuristic that we discussed earlier. Let’s say this was an ordinary payment of 0.017. Why would the payer produce 0.0179 and 0.00198? What would be the necessity of adding this extra input? You might say for fees, but that is already quite a bit bigger than needed for fees there. That is one version of the unnecessary input heuristic. This does flag that. The interesting thing is that quite a lot of ordinary transactions flag that heuristic anyway. I’m quite interested to talk to Larry or whoever it is that runs this block explorer nowadays and ask them how often they attach this flag. I think they are going to attach that flag to transactions quite often. I’m not sure if it is going to be 20 percent or 50 percent but I would like to know. There is one other thing about this transaction that flags. Can anyone see it?

I have a guess. The nSequence number.

Very good. Did you hear about it recently or did you see it just from looking at it?

I see it.

This is different. Do you remember what the significance of these particular values is?

0xB10C wrote an article a couple of days ago about the nSequence number for timelocks. It is about the fee sniping attack if I am right.

It is related to that for sure.

I think it means that you can only relay transactions at the current block height.

Something like that. Obviously we are not going to have an hour lecture on the details of Bitcoin logic here. Let’s give a few key points. First of all, this is the maximum integer in 4 bytes and this is the default nSequence number. It indicates that the input is final. In some vague sense that meant something to Satoshi back in the day, but it has evolved in ways that a lot of people can barely understand; it is a bit weird. But if a transaction is final, in the sense that all of these values are the largest integer 0xffffffff, then the nLockTime field in the transaction (is it shown somewhere here or not?) is disabled. If you want to use a nLockTime value the problem is that if you have all the sequence numbers at that maximum value it won’t be enabled. So what Bitcoin Core does is use this value 0xfffffffe, which is one less than the maximum integer. It has e instead of f at the end so it is one less in hex. That value enables the nLockTime field. If it is enabled you can do an anti fee sniping measure, which is that you set the locktime to a recent block, and that helps with avoiding a miner attack. Can anyone explain it?

I believe this is not exactly correct. The problem with a fee sniping attack is that if a block gets confirmed that has spent a lot on fees, a miner has an incentive to build on that block’s parent, re-mine it and reorg it out. The difference in the fees relative to the current state of the mempool is what they would gain by doing that. Maybe that is enough to offset the block subsidy and the difficulty of actually mining two blocks and succeeding in a reorg. That is what would be mitigated by setting the nLockTime field.

There is a connection between the two. You want to set the nLockTime to fix this problem but you can’t set the nLockTime and also set all the nSequence values to 0xffffffff, because that disables it. We went over this yesterday because we were discussing this point in the BTCPay Server Mattermost. I can recommend this particular StackExchange discussion that helps to explain the rather complicated relationship between nLockTime and nSequence. There is the question of what the actual rules are. Then there is the question of what Bitcoin Core usually does. That is relevant because a wallet like Joinmarket might want to copy what Bitcoin Core usually does. There is the additional complicating factor of whether you are setting opt-in RBF or not. To set that you need to set this nSequence field to max int minus two, where this one is only max int minus one. In this case RBF is disabled. There are a whole bunch of complicated rules and considerations here. The upshot of a long conversation I had with Nicolas Dorier yesterday was that he agreed that we should have the server wallet agree with the sender or client wallet’s setting of nLockTime. In Joinmarket’s case I always have this nSequence value set to enable the nLockTime. His server, his NBitcoin code, was by default always sending out the maximum integer. The reasons are complicated. He has updated this now apparently. What is the point of going into all these details? I just wanted to explain to people how tricky these questions are. If we are going to try to have a steganographic transaction we are going to have to make sure that the servers absolutely mimic the sending wallet. The sending wallet could be a mobile wallet like Green wallet or Electrum or whatever. My point is that I think the servers have to mimic the clients. Even that statement is not quite clear enough. The protocol should be really clear on these points I think.
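
To make those values concrete, here is a minimal sketch (my own illustration, not Joinmarket’s or Bitcoin Core’s actual code) of how a wallet might pick an nSequence value given the rules discussed above: 0xffffffff on every input disables nLockTime, 0xfffffffe honours nLockTime without signalling replaceability, and anything at or below 0xfffffffd also signals BIP 125 opt-in RBF.

```python
SEQUENCE_FINAL = 0xFFFFFFFF            # on every input: nLockTime is disabled
SEQUENCE_ENABLE_LOCKTIME = 0xFFFFFFFE  # nLockTime honoured, RBF not signalled
SEQUENCE_SIGNAL_RBF = 0xFFFFFFFD       # nLockTime honoured, opt-in RBF signalled (BIP 125)

def choose_nsequence(use_locktime: bool, signal_rbf: bool) -> int:
    if signal_rbf:
        # any value at or below 0xfffffffd signals replaceability (on at least one input)
        return SEQUENCE_SIGNAL_RBF
    if use_locktime:
        # what Bitcoin Core's wallet uses together with its anti fee sniping locktime
        return SEQUENCE_ENABLE_LOCKTIME
    return SEQUENCE_FINAL
```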

It is good that you bring this up. BTCPay Server already does this with the address types, which is another way to fingerprint this. If the receiver initially proposes a native SegWit bech32 address but the sender spends a coin and generates a change address that is legacy, then in the payjoin transaction the server will respond and switch his receiving address to at least a legacy address. I’m not sure if they also select a coin that is legacy. That would be optimal too.

I want to expand a bit on those points. First of all we are mostly concerned with the inputs, not the outputs, because it seems to me that having more than one scriptPubKey type in the outputs is not really a problem; it is very common for a sending wallet to have a different scriptPubKey type than a receiving wallet. That is one of the ways you identify change outputs, so maybe you are right. But it is the inputs that really count. The thing is that, as I understand it from speaking to Mr Kukks, the BTCPay Server merchant wallet does not yet have the ability to support multiple scriptPubKey types at once. You could have a P2SH-P2WPKH wallet in BTCPay Server merchant or you can have a native bech32 BTCPay Server merchant wallet. I don’t think you can currently have one that mixes both types together. That would be ideal for our scenario because we want to allow different sending wallets to get payjoins successfully through and not have to fallback to the default. Absolutely, the scriptPubKey is a huge issue as well. The nSequence is minutiae, but you have to get it right.

This tells you where Blockstream are flagging it. I am curious whether they are using what we called one or two but I’ll read that later.

Both heuristics are used from what I saw. This is a shameless plug to please review my tiny documentation Bitcoin Core PR where I have commented about nSequence. If you look at the diff it is quite simple, it is just a comment change. nSequence is so confusing.

I think it is confusing because the original concept behind it was not something that ended up being… A complicated question. I think the most important point there is that there is a connection between nLockTime and nSequence. Nicolas Dorier didn’t get it at first either and I certainly wouldn’t have got it unless I had read up and researched it. For me I wouldn’t usually think about stuff like this. It all came out of the fact that I was really keen on copying the anonymity set of Bitcoin Core and Electrum in Joinmarket’s non-coinjoin payments. Obviously it is a wallet so you have to have the ability to make ordinary payments. In that case I want to be steganographic. In the equal sized coinjoin outputs in Joinmarket there is no possibility so we don’t even try. Because I did that, that is why I used this particular combination of nSequence and nLockTime in Joinmarket, and that is why, when I did this payjoin with BTCPay Server, because their logic didn’t agree precisely with mine we got this fingerprint.

To mention something about the nSequence: many people prefer to signal replace-by-fee (RBF) because they want to be sure they can make the transaction confirm in the timeframe that they want. That is something that can fingerprint the wallet that the user is using.

What is even more confusing is that I believe you only have to set the RBF flag on one input? You don’t have to set it on all of them, do you? Is that right?

If you flag one input then the transaction and all of the child transactions are replaceable. Of course you can double spend the input that you flag.

The reason I mention it is because when I was going over this with Nicolas Dorier yesterday we were trying to figure out exactly what we should do, we had to worry about what if the sender is not using the same nSequence in every input. This is really in the weeds. Perhaps we should stop with all that extremely detailed stuff and stick with the more high level time for what remains.

I think this is an interesting document. Let’s go back to the highest level again. This is a gist from LaurentMT, who a lot of you will know; he is employed by Samourai. He is an analyst who has been working on blockchain analysis and privacy tools for ages in Bitcoin. He is giving some opinions here about payjoin transactions and deception tools and the concept of steganographic transactions that we have discussed today. He is addressing this issue of fingerprinting that we just talked about. He is saying “If you want to avoid fingerprinting you can go in two directions. You can choose to have everyone have exactly the same fingerprint or you can go in the direction of having fingerprints be random all the time.” I don’t think I agree with everything he said but I agree with a lot of it. He is arguing that randomness is better and more realistic because it is very hard for every wallet to be the same and have the same fingerprint in every detail. What happens when somebody wants to update their software and so on? I think that is largely true but there are a couple of additional points I’d make. First of all we don’t have infinite scope to be random of course. The trivial example is that if you want RBF you can’t just use the maximum sequence number. The other point I’d make is that while it is true that, if we are just talking about a big network of wallets talking to each other, it is hard for everyone to agree on the same fingerprint, I would argue it is a bit easier for servers just to mimic and agree with the fingerprints of their clients. It is not necessarily that easy because different scriptPubKey types are a bit of a hurdle. We can’t really have payjoins where the server produces a native SegWit input and the client produces a legacy input. I do recommend reading it though. It has got some thoughtful points about different ways we might be building wallets with steganographic transactions in mind.

Q&A

Should the RBF flag be truly random in the sense that it is a 50/50 chance between RBF signaling and not RBF signaling? The downside that I see here, we talked about this at Wasabi, is that currently 0.6 percent of all native SegWit transactions signal for RBF. If Wasabi went for 50/50 then there is a pretty high likelihood that an RBF transaction is from Wasabi. To work around this what we have done is that on average 2 percent of transactions will signal RBF. It is still randomly chosen but with a skew towards the actual usage on the network. Is that useful?

If you are asking my opinion, as I already said, while in principle complete randomness is a more practical option than complete homogeneity, I was musing about this on Mastodon earlier today. I was saying on the one hand you could have a completely blank sheet of paper and on the other hand you could have a sheet of paper with masses of random pen scribbles all over it. In both cases no information is conveyed. If we go to the massive amount of scribbles, having everything activated randomly is highly problematic because, as an example, your wallet might really think that RBF is a bad thing or a good thing and might really want to have that feature. You can’t really randomize it unless you are going to give up what you are trying to do. Your question is another point that often gets raised, which is that we want this thing to be random, but if we are trying to do randomness for the purposes of steganographic behavior then we have to try to match our random distribution to the existing distribution on the network. Then you get these feedback loops. It is possible but it sounds damn complicated. I really feel like the problem is a deep one, so it may be that the practical reality is that a certain degree of randomness is the best we can hope for and we can’t have perfect randomness. I also think that in this specific case of this two party protocol the practical reality is that we have to focus on the idea that the receiver mimics the sender’s behavior. It won’t always be possible to get it right. That is ok. Let’s not forget that 10, 20 percent of payjoins on the network, if it is known to be 10, 20 percent, realistically is more than enough because it is going to screw up any realistic set of assumptions.
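
For what it is worth, the skewed approach described in the question is easy to picture in code. A toy sketch (my own illustration, not Wasabi’s actual implementation), assuming a 2 percent target rate:

```python
import random

# Signal RBF at random, but with a probability skewed towards the observed rate
# on the network rather than a 50/50 coin flip, so that RBF-signalling
# transactions are not disproportionately likely to come from this wallet.
ASSUMED_NETWORK_RBF_RATE = 0.02  # illustrative target, not a measured value

def should_signal_rbf() -> bool:
    return random.random() < ASSUMED_NETWORK_RBF_RATE
```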

There is another aspect of this which is something I find both deeply fascinating and troubling: the concept of a Schelling point from game theory. There is always a dilemma between your behavior as an individual trying to protect your own privacy and as an individual acting as part of an ecosystem where privacy is an issue. If we assume that fungibility is emergent from privacy and that it is a desirable property of money, it is very unclear what the optimal behavior is. If everybody could just agree to stop producing fingerprintable transactions and actually do it that would be great. There is a huge technical barrier there. There is always a dilemma for an individual interested in Bitcoin’s success and their own personal privacy: do you make sacrifices like participating in coinjoins, which is an overt political act? It sends a message to the world at large. Or do you try to blend in to the crowd and mask yourself?

I don’t think that bifurcation is quite right. Doing equal sized coinjoins I agree is an overt political act and could be in some circumstances a sacrifice. But you can’t argue that it has no value at all.

I’m arguing it has tremendous value.

I mean specifically in creating privacy for that user.

That could also mean that you are banned from various exchanges.

It is a sacrifice but you can’t argue that there isn’t a positive effect specifically for that user.

Sorry if I implied that, that wasn’t my intention at all. I didn’t want to get into coinjoins, that was just the easy example of an obvious trade-off. In the context of this stuff that further complicates all of these discussions. I have nothing really to say practically speaking it is just that the nuances here are not just technical it is also about how people perceive these technical details.

I totally agree with your point. When I was on Stephan Livera I was really hammering that Schelling point home. I’m trying to look at it in a positive way. There is a Schelling point today that Bitcoin transactions are traceable and indeed that doing anything apart from ordinary transactions looks suspicious. I think payjoin could be part of the way we flip that Schelling point. If we have techniques that make it not only normal but easy to slightly improve your privacy. Ideally they would be of economic benefit. We haven’t discussed that today. It is a very interesting question about what is the economic impact of doing payjoins as opposed to ordinary transactions. Ideally we could have techniques to improve that like signature aggregation. But even if we don’t have economic benefit, even if we have some very minor privacy benefit and it is steganographic or it is opportunistic so it is very easy to do. Maybe we could flip the Schelling point in the sense that we make blockchain analysis very ineffective to the point that it looks stupid to try to claim you are able to trace all this other stuff. Then maybe it is not so weird to be someone who does coinjoins or whatever.

I think it is more than that. It is absolutely critical because we already know objectively today that this analysis is not very scientifically rigorous. There are all these examples, I think the most prominent one is from the Dutch court system where chain analysis companies were used as expert witnesses. From the point of view of law enforcement what they would like as a service from these companies is a clear cut coherent narrative for non-technical people and they don’t really care if it is pseudo-scientific. So the ambiguity that somebody with intellectual integrity doing a proper scientific analysis would acknowledge is not part of what is being sold as a service by these companies. This is why it is on people who actually care about Bitcoin succeeding to really be conscious of not just looking at this at a technical level but also in terms of the perceptions. I think payjoin is a fantastic development in this regard because it confers this plausible deniability to your activity. Another interesting one in this regard is the multiparty ECDSA stuff.

Can somebody help me understand why a random RBF flag makes sense? I didn’t get the point.

Just the idea of blending into a crowd. The general idea is if your wallet has one specific behavior like a specific flag like RBF… I think a lot of people have probably heard of the concept of browser fingerprinting. The idea is your browser gives away a bunch of data about the version of Mozilla and of the screen dimensions, I don’t even know the details. Cumulatively a lot of little pieces of information together might make it very obvious who you are. This is kind of similar, maybe it is not the same. If your wallet uses four or five different flags consistently in a certain way it might be obvious that that is the same wallet. Whereas if you randomize some of those flags, from one spend to the next spend, one uses RBF and the next one doesn’t, it then may not be obvious that it is the same wallet.

Regarding the scriptPubKey compatibility, what do you think about the sender creating a disposable one-hop transaction to the “correct” type of the receiver and then using that to have the correct fingerprint?

Is he saying that somebody who is a sender in the payjoin protocol might just for the payment create a different scriptPubKey so that it matches the server? Is that what he means?

I think what he means is if the sender proposes a legacy address or whatever the fingerprint is and the server only has native SegWit bech32 address coins then he spends a SegWit native coin, a one input, one output transaction, where the output is a legacy address and then he uses this unconfirmed transaction output in the payjoin proposal. Then at least for this individual payjoin transaction both inputs are with legacy addresses. Of course if you look at the grander scheme of things, especially timing analysis, you would see one of these coins was done in this spontaneous self spend.

Who is creating the one time address in order to match up? Is the server doing that? The server. Interesting thought, I’m not sure.

How far is the payjoin integration in Joinmarket? What was the previous work and how has this changed with the new proposal?

The previous work is something that happened in early 2019. That allowed two Joinmarket wallets to pay each other with a payjoin. Same basic protocol but not over HTTP and no server, just two Joinmarket bots talking to each other over an encrypted message channel. The new version, I think I explained before the call started, is a situation where in order to implement it I had to build it on a rearchitected Bitcoin backend, using a fork of python-bitcoinlib in fact. I built that in a branch in order to get PSBT functionality, which we now have in that branch. Also I put SNICKER functionality in there but it is not really user exposed anyway. That is shelved for the moment. I was able to build this new BTCPay Server payjoin code in that branch. It is pull request 536 in Joinmarket. You will see there that I have got to the point where it is all finished, on the command line at least. I was able to do experiments sending. If there is anyone there who is somewhat developer inclined and wants to help out they could use that pull request branch 536. Make sure to install libsecp256k1 locally because there is a technical installation issue there that is going to get resolved soon. That can get merged in a few weeks. The code is already functional, I have been testing it with mainnet. For receiving we would probably have a similar situation to Wasabi, we would probably like to set it up so that a user can set up their own ephemeral hidden service temporarily to receive a payment, using the same protocol as BTCPay Server uses today instead of this old method that we had which is proprietary to Joinmarket. I think it would be nice if all the wallets that are currently implementing the sender side at least try to implement the receiver side using hidden services. It is not necessarily that convenient for users to do that but at least it is very generic and it should function for most people if they want to use it.

It is awesome to know that Joinmarket is going to implement the receiver part. Payjoin is when two people collude to fool a third one, a blockchain spy. In order to be able to do this they need to be able to communicate. That is where the endpoint part comes from. This works, but in order to make it work even more we need wallets like Joinmarket, for example, to implement the receiver part. Wallets that are not just run on small devices, or wallets that for some reason need to be on for a long time, are good candidates; I will implement this in Wasabi too. This should be a peer-to-peer way to introduce an enormous amount of confusion into the blockchain, so we need this. If the wallet runs for a long time it is a good candidate, and if it is not, because it is just a mobile wallet, it just implements the sender part. That is ok, you are still contributing to the privacy.

I have primitive server code, I think we can do the onion receiver part but we need to get the sender part in first, get it into master and then into a release. It is going to be a few weeks. Regarding BIP 69, a discussion with Nicolas Dorier about this issue of how we make sure that payjoins don’t fingerprint made me realize that the fact that some wallets use BIP 69 is actually problematic for this. The thing about BIP 69 is that it orders inputs and outputs lexicographically, like alphabetical ordering but over the raw bytes. That is all very well and good because it is deterministically random, but the thing is it is identifiable. Because it is identifiable it is a fingerprint. Unless everyone was going to use BIP 69, the presence of BIP 69 is a net negative for us trying to make transactions not stand out. Nicolas Dorier originally had the intention to replicate BIP 69 in the payjoin transaction if he sees it in the original fallback transaction. I was saying that is a problem because there are plenty of false positives. A randomly shuffled transaction, which is how a lot of wallets do it including Joinmarket, can sometimes look like BIP 69 by accident. If you replicate that you are going to change the distribution. These are the nasty details you get when you choose to add semantics into the transaction construction which do not intrinsically need to be there. I have been convinced by this discussion that BIP 69 is a scourge on Bitcoin and should be eradicated at all costs.
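
To illustrate the false positive point, here is a small sketch of mine (it glosses over BIP 69’s exact sort keys and byte-order details) showing how often a randomly ordered two-input, two-output transaction happens to match a sorted ordering anyway:

```python
import random

def is_sorted(items):
    return items == sorted(items)

def random_tx_matches_ordering(n_inputs, n_outputs):
    # stand-ins for the sort keys; the point is the ordering, not the values
    inputs = [random.random() for _ in range(n_inputs)]
    outputs = [random.random() for _ in range(n_outputs)]
    return is_sorted(inputs) and is_sorted(outputs)

# With 2 inputs and 2 outputs a randomly ordered transaction matches the sorted
# ordering about 25 percent of the time, so "looks BIP 69 sorted" is far from
# proof that the wallet actually uses BIP 69.
trials = 10_000
hits = sum(random_tx_matches_ordering(2, 2) for _ in range(trials))
print(hits / trials)  # roughly 0.25
```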

So SNICKER. What is the latest on that?

I am thinking about it from Joinmarket’s point of view because other people don’t seem to be interested in implementing it right now. I have put the backend infrastructure in place for it, all of the cryptographic primitives, and I even have a receiver script for it. If I was to actually make it work, even in the restricted case of using it with Joinmarket participants, it would work really well because it would be easy to find the candidate transactions. But either I or someone else would have to set up a hidden service and start scanning, writing scanning code and writing proposal code, a bunch of work. I haven’t got any time to do it now. If I had nothing else to do I would do it because I think it is really interesting. But other people don’t seem to have decided that it is sufficiently interesting to put a lot of time and effort into building infrastructure for it. We’ll see. I still think it is way better than people think but I have been saying that for two years.

You are more confident in it now? It has been a few months since the mailing list post.

I didn’t really change my mind about it. I think it is one interesting feather to put in your cap. It is another technique to add to something like Joinmarket which is trying to be a multifaceted Bitcoin wallet as much as it can, focused on coinjoin. It would be nice to add it there. Maybe some other wallets too. In the long term it could conceivably be something added to mobile wallets but there doesn’t seem to be an appetite to do that right now.

https://www.youtube.com/watch?v=hX86rKyNB8I

Adam Gibson Pastebin on Payjoin resources: https://pastebin.com/rgWVuNrC

6102bitcoin resources on Payjoin/P2EP: https://github.com/6102bitcoin/CoinJoin-Research/blob/master/CoinJoin_Research/CoinJoin_Theory/P2EP/summary.md

Transaction fingerprinting discussion: https://github.com/zkSNACKs/WalletWasabi/issues/3625

BitDevs resources on Coinjoin: https://bitdevs.org/2020-02-19-whitepaper-series-11-coinjoin

The conversation has been anonymized to protect the identities of the participants.

Intro (Michael Folkson)

We are live, London BitDevs Socratic Seminar on Bitcoin privacy with Adam Gibson. This is a live call on Jitsi. We are streaming on YouTube so bear that in mind. Thanks to Jeff from Fulmo for helping us set this up. For those who don’t know, Jitsi is open source, it doesn’t collect your data and it was surprisingly easy to get this set up. Let’s hope there are no problems today and we can show how wonderful Jitsi is. So far so good. This is London BitDevs. You can follow us on Twitter @LDNBitcoinDevs. We have a YouTube channel with some really good presentations from the last couple of years. Adam Gibson has talked about Schnorr, Chris Belcher has a couple of presentations on Joinmarket, and there are Sjors Provoost and John Newbery and all sorts. We are also looking for future speakers. If you are a Bitcoin developer or a Lightning developer and you are interested in speaking about work you have been doing or a particular topic get in touch. We can do a presentation or we can do a Socratic Seminar like today. I’ll talk a bit about Socratic Seminars. They originated at BitDevs in New York. It is a discussion, normally with an agenda that the organizers put together, and you discuss all the various different resources on that agenda. The emphasis is on participation and interaction. This isn’t supposed to be a presentation from Adam today, this is supposed to be a discussion. We will try to get as many people who are on the call involved in that discussion as possible. For resources on doing Socratic Seminars online or in person check out bitdevs.org. Introductions: Adam Gibson I think everyone will know. He works on Joinmarket and a bunch of Bitcoin privacy projects. We have got a number of people on the call including Max Hillebrand and openoms. We normally start the Socratic by doing introductions. Obviously you don’t have to give your real name if you don’t want to. Also if you don’t want to do an introduction that is also fine.

(The participants in the call then introduced themselves)

Bitcoin privacy basics

For the benefit of the beginners who are perhaps watching on YouTube or maybe some of the beginners/intermediates on the call we will start discussing the basic concepts first. It will get advanced technically later on. Obviously we have got a bunch of developers who are working on privacy tech on the call. What are the problems with Bitcoin privacy? What are we worried about when we make a Bitcoin transaction in terms of privacy leakage?

Satoshi wrote in the white paper that the privacy is based on pseudonymous identities, Bitcoin addresses in the very general sense. What we want to achieve is to delink these pseudonymous identities, so that one user having several of them is not revealed by those identities being tied together into a cluster. The easiest way to get deanonymized is by reusing addresses. Then it is clear that this is the same pseudonymous identity. Even if you don’t reuse addresses there are certain ways that you can transact Bitcoin that still reveal ownership of coins and addresses for a specific user. These are heuristics. There is the common input ownership heuristic. Basically all these deanonymization techniques rely on the fact that they can link different pseudonymous identities to the same cluster.

I would expand a bit further. This is not just at the level of the transaction structure. All transactions are completely transparent because that is how everybody can validate. Also all of the other underlying infrastructure like the network level privacy is full of concerns. There is no encryption in the peer-to-peer protocol. There are many facets to this, not just the transaction structure.

There are a couple different aspects. There are things recorded on the blockchain and preserved forever and can be analyzed. There is also metadata that can be related to the network, IP addresses recorded together with the transactions in some databases that are outside of the blockchain. There could be timing analysis, trying to locate the source of the transactions. Also there are databases which are different from the blockchain but listing the transactions. The big exchanges or anything running with KYC which are connecting the transactions and withdrawals with the passport and personal data of the users. There are other things that we need to be aware of.

What is a coinjoin? What is the problem that a coinjoin is attempting to solve?

Coinjoin is joining multiple transactions into one: multiple inputs creating a certain number of outputs.

I think the main problem is the connection between outputs. I mostly make the analogy to a twenty dollar cash bill. If you want to buy something for five dollars and you give the person a twenty dollar cash bill you get fifteen back. It is mostly known that the fifteen dollars are from the sender of the first payment. And so you can correlate the amount spent and the amount received back. This is the most common transaction type, where you pay with one or several coins and create two new outputs: one for the receiver and one as a change output.

The idea with coinjoin is to literally join your coins. This is a way for multiple users to consolidate several of their coins in one transaction. The important idea is to break the common input ownership heuristic. That is one part of it. We can no longer assume that all inputs of a transaction belong to one entity. As would be the case for transaction batching where one entity does several payments in one transaction. It is a scalability improvement. A further privacy result that we want to get from this tool is that the outputs being created are no longer tied to the inputs. The pseudonymous identities of the input user are no longer linked to the output users.

One last thing is that coinjoin is completely trustless. It relies on the fact that every input requires a signature that authorizes the entire transaction. So unless all the participants agree about where the funds end up there is no possibility of theft.

Another thing that we are exposing in transactions is not only that the actual UTXO can be tied to an entity but also the payment history and the future payments from that point when the inputs are tied to someone. By employing a coinjoin what we can do is break the UTXO from its history. If we pay through a coinjoin or pay after a coinjoin we can decouple from the future history of the coins.

What are the challenges when setting up a coinjoin?

There is fundamentally a coordination problem between the different participants. If you want to participate in a coinjoin you have to find other Bitcoin users who are willing to collaborate with you to construct this transaction. There are privacy considerations in whatever protocol you use. When you communicate with those parties to this transaction you generally will disclose more information if you do it naively. It is both a coordination problem, finding counterparts, and it requires additional layers of privacy to actually go through that whole process in a way that maintains the benefits of a coinjoin from the blockchain layer perspective.

We already spoke about the non-custodial aspect that nobody can steal and the privacy aspect that none of the participants can spy on each other. I think one important nuance that is very difficult to do in the implementation is denial of service protection especially because we work with pseudonymous identities here. It might be very easy for an attacker to do a sybil attack or to break the protocol at a specific time to simply delay it or make the service unusable. Denial of service is a very nuanced thing that we might not even have seen working properly in production yet.

We can speak about how to construct this transaction. These participants come together but then there must be a coordinator who puts together all the inputs and all the signatures and all the outputs. That can be a central entity like in the case of Wasabi or Whirlpool or can be decentralized in terms of one entity who is acting as the taker in the Joinmarket model.

So let’s move on to the subject of today, payjoin. What is a payjoin?

If I try to stick with the cash example, the same as before, I want to buy something for five dollars. I give the seller twenty dollars, the seller gives me five dollars, and I give him the twenty and the five dollars. He gives me some cash back. From the perspective of the blockchain, outsiders can’t tell which input, which cash, comes from me and which comes from the seller. The receiver of the payment gives some money to the buyer and mostly receives more back. It is to decorrelate.

I think that is mostly correct. It is better to think of it in terms of the last steps. Before the buyer pays, the seller offers him some extra money to be included in that payment. Let’s say they put it on the table: the seller puts the product they want to sell on the table and they also put another twenty dollars on the table. The buyer will pay the price of the product plus twenty dollars.

What do we want to try to break on the heuristic side? Firstly of course the common input ownership heuristic again, because now not only the sender provides an input but also the receiver provides an input. That is very nice. The cool thing is that this heuristic is now weakened not just for the payjoin transactions but for every coin consolidation transaction. Every transaction with more than one input might be a payjoin, and thus the common input ownership heuristic does not hold up as often as it used to. That is a huge win already. Another point is the actual economic value transacted. If we take the previous example: Alice wants to pay for the pizza with 1 Bitcoin and Bob has 0.5 Bitcoin that he adds to this transaction. All of a sudden Alice gets her change back, for example 2 Bitcoin. But the output that belongs to Bob not only has the 1 Bitcoin payment for the pizza but also the 0.5 Bitcoin additionally from Bob. This output is 1.5, although the economic value transacted was actually 1 Bitcoin. This is obfuscated as well.
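
A tiny worked sketch of the arithmetic above (the total of Alice’s inputs is my own assumption, everything else follows the example):

```python
alice_inputs = 3.0       # assumed total of Alice's inputs (not stated in the example)
payment = 1.0            # what Alice actually pays Bob for the pizza
bob_input = 0.5          # the coin Bob contributes to the payjoin

bob_output = payment + bob_input        # 1.5 is what appears on chain for Bob
alice_change = alice_inputs - payment   # 2.0 appears on chain as change (fees ignored)

# Neither on-chain output equals the real payment of 1.0, which is the point.
print(bob_output, alice_change)  # 1.5 2.0
```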

It is interesting when you try to make the analogies with cash work. It is a good example where it doesn’t work too well I think, because of the nature of cash. It is more fungible in its nature than Bitcoin with these txo objects that have a name on them. Let’s not get bogged down in analogies that don’t work too well. We want a few things. We want coinjoins that have the effect of breaking the common input ownership heuristic and we want them to be steganographic. Steganographic means that you are hiding in the crowd. You are doing it in a way where your transaction doesn’t look different at all from ordinary payment transactions. You are getting the coinjoin effect but you are not making it obvious that you are creating the coinjoin effect. That creates a very strong additional privacy boost for the whole network because it makes blockchain analysis less effective. There has been some discussion recently about the purpose of payjoin. I think when I wrote a blog post about it the last sentence I wrote was something like: this is the final nail in the coffin of blockchain analysis. That is the dream of it I think, that it makes blockchain analysis horribly ineffective. Currently we have a situation with coinjoins with equal sized outputs that have a tremendous benefit in delinking those outputs from the inputs. They have a significant benefit but the problem, as we currently understand it (this is a hot topic that nobody is 100 percent sure about), is that blockchain analysis firms are marking those transactions as mixes and putting them to one side in their analysis, or they are doing some crude percent based taint analysis where they say these coins are 20 percent connected. In either case they can see on the blockchain that these coinjoins are happening. What we are trying to do with payjoin is to trade off having smaller anonymity sets per transaction, because we are talking about two party coinjoins; that is intrinsic to the fact that a transaction is two party. A transaction doesn’t have to be two party, I could pay three people. We could get more complicated with this. For now we are just sticking with the two party case. It is intrinsically a small coinjoin with a couple of people in it. It is clearly not as powerful as using exactly the same output size, which is intrinsically indistinguishable. This is not an intrinsically indistinguishable situation. If somebody looks at a payjoin they might be able to figure something out. We will get into that later. That is giving you the thousand foot view of where the motivation is. What I want to do for the next twenty minutes, and I want to keep this fairly brief, is to go through the history of payjoin as an idea.

History of Payjoin (Adam Gibson)

This pastebin was me writing down in sequence all the things I remember about payjoin. I’ve got all those links open in my browser. Earlier descriptions, before anyone used the word payjoin: obviously coinjoin as a concept was first expounded in 2013 by Greg Maxwell. As should always be noted, coinjoin is not an innovation added to Bitcoin in 2013, it always existed from the start as something people could in principle do. He mentioned it before 2013, I think he called it “I taint rich”, this is way back in the annals of Bitcoin history. Coinjoin itself is there from the beginning; payjoin is a variant of that central idea that when you create a transaction it doesn’t have to be just your inputs. The strong power behind that idea is this crucial issue that, at least with standard sighash flags, if you sign a Bitcoin transaction with your key on one of the inputs you are signing the whole transaction, therefore fixing that signature to the specific set of outputs. You don’t take any risk in signing one of the inputs to a transaction and having someone else sign another input to the transaction. You can still make sure that you get the money you expect to get. Consequently there are various different models for coinjoin. This particular one was not taken very seriously by many people including myself, because we thought it was a pain for a payer to coordinate with a receiver instead of the usual model of Bitcoin, which is that a payer prepares a transaction on their own and doesn’t even have to talk to the receiver. One interesting detail is that since Lightning has become quite a big thing in the last couple of years, it also has an interactive model of payment. There is some new technology that gets around that but let’s just say Lightning generally involves an exchange of messages between sender and receiver. It may be that that subtly influenced the thinking of the people, including myself, at a meeting in London in 2018 where we discussed possible ways to make coinjoin more widely used. The first link here is one of the first things that came out of that meeting in London. At the time Blockstream put out an article, it was Matthew Haywood, who was at the meeting, talking about pay-to-endpoint. At the time I thought this was a poor name. I thought payjoin would be a better name. That debate still exists. It is not a very interesting debate about names, who cares? If you see pay-to-endpoint (P2EP) and you see payjoin think of it as basically the same thing. Let’s say it is the same thing because it is not interesting to argue about. He wrote a long blog post. He gives all the basic principles. Towards the end he talks about some interesting questions around protecting against potential privacy attacks. This is still an interactive protocol. Even if it is only two party there could be scenarios where the sender could be trying to steal privacy from the receiver by, for example, getting lots of information about their UTXOs. There is some discussion of that there. There are some high tech ideas about how to solve it. You will see the nuances as we go on. That was one of the first explanations of the concept. I think nopara, who was also at the meeting with me, put out a blog post here on Medium. He was very interested in this idea of how we can prevent senders stealing privacy from receivers. We were looking at zero knowledge proofs and so on. He talks about that and as usual he has got cartoon characters. He always has some interesting pictures in his blog posts. I wrote this very dense wordy blog post. 
I think it is one of my better blog posts even though it is not very pretty. It does go through some nice, pretty easy to understand examples that lead you into payjoin. For example, asking the question of hiding in a much bigger crowd: how could we do a coinjoin but make it look like an ordinary transaction? That is the story I am trying to tell here. I draw examples like here where Alice and Bob do a coinjoin where they both put in 0.05 and they both get out 0.05. That is stupid. There is no real transaction. There are no payment transactions that look like that so that is a failure. Then I talked about how you could make a coinjoin where this looks a bit more like a normal payment because it has two outputs. Normal payments have a payment output and a change output and some inputs. It could be 1, 2, 3, 4. Who knows? The problem with this kind of thing is less obvious but it is very revealing when you understand it. 0.01 + 0.04 adds up to 0.05. 0.03 + 0.03 adds up to 0.06. It is slightly unobvious, but on reflection obvious, that a subset of the inputs, two of them, add up to one of the outputs and the remaining subset of the inputs adds up to the other output. We call this subset sum analysis. Crudely separating the inputs and outputs into different subsets and seeing that they match up is a dead giveaway, not 100 percent certain, that what is happening there is a mixing event: two parties are almost certainly not paying each other, they are paying themselves and keeping the same amount of money they had to start with. This is a central problem if your goal is to try to create coinjoins breaking the common input ownership heuristic but at the same time you don’t want to give away that you are doing that. That is where payjoin comes in. Payjoin is accepting the inevitable. If I want a mixing transaction to look like a payment transaction it unfortunately does have to be a payment transaction. Is that 100 percent true? No. But payjoin says “Let’s accept that. Let’s do mixing transactions in payment transactions”, in other words together at the same time. That’s why we end up with stuff like this where, as you can see here, Alice actually paid Bob 0.1. If you imagine looking at this on the blockchain you certainly wouldn’t expect that there was a payment of 0.1 Bitcoin. But actually according to that logic there was. It is because Alice and Bob coordinate so that Bob, the receiver, actually contributes this input and he receives back this output. The net difference between the input he provides and the output he receives is 0.1. So he was actually paid 0.1 even though it is not obvious.
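
For the curious, here is a toy sketch of that subset sum analysis (my own illustration, not from the blog post); amounts are in satoshis to avoid floating point issues, and a real analyst would also allow some slack for fees:

```python
from itertools import combinations

def looks_like_mix(inputs, outputs):
    """True if some subset of inputs sums to one output and the rest sum to the other."""
    if len(outputs) != 2:
        return False
    for r in range(1, len(inputs)):
        for subset in combinations(inputs, r):
            rest_sum = sum(inputs) - sum(subset)
            if {sum(subset), rest_sum} == set(outputs):
                return True
    return False

# The failed example above: 0.01 + 0.04 = 0.05 and 0.03 + 0.03 = 0.06,
# so the inputs split cleanly across the two outputs and this flags as a mix.
print(looks_like_mix([1_000_000, 4_000_000, 3_000_000, 3_000_000],
                     [5_000_000, 6_000_000]))  # True
```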

Unnecessary input heuristic

There were a few blog posts there. I am not sure any of them are superb. What is good is a nice simple summary on the Bitcoin wiki. It goes over the basic ideas here in a couple of paragraphs. So we had the idea, it was out there in 2018. It had become concrete as an idea that we would really like to do this. You will find discussions on the mailing list from late 2018. Before I started writing any code for it though I did want to figure out a couple of details. This link is interesting not so much because of the content of the description of a protocol for payjoin but because of the discussion that happened afterwards, particularly when LaurentMT raised several points. This is a discussion between myself, Chris Belcher and LaurentMT. We are discussing different heuristics that might give things away… Even though in the example I showed you you don’t know it is a payment of 0.1, because you can’t see the amount, there is more to it than that. How do wallets select inputs to provide the necessary amount of Bitcoin to make the payment? Every wallet is a bit different in this regard but there are various rules you can think about. What is discussed here is something we ended up calling the unnecessary input heuristic (UIH). I am not going into it now because it is going to take a bit too long but I strongly recommend reading it. We even broke down the unnecessary input heuristic into two types. To give a high level idea: if you are going to be paying 1 Bitcoin you wouldn’t use a 2 Bitcoin UTXO and a 10 Bitcoin UTXO to pay 1 Bitcoin, because the 2 Bitcoin UTXO on its own would have been enough. Is that an absolute statement? It is nowhere near absolutely true, because wallets have really complicated ways of choosing UTXOs. That’s the general idea and there is a lot to investigate there. It is still an open question what one should do about that in the context of payjoin so that it doesn’t look suspicious. That was the discussion on that gist that I linked there.
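
A rough sketch of the basic idea, heavily simplified compared to the UIH1/UIH2 definitions in that gist, and assuming we have already guessed which output is the payment:

```python
def has_unnecessary_input(input_amounts, assumed_payment, fee=0):
    """Under simple coin selection an input is unnecessary if the remaining inputs
    still cover the payment plus fee, i.e. if that input is no larger than the change."""
    total_in = sum(input_amounts)
    change = total_in - assumed_payment - fee
    return any(amt <= change for amt in input_amounts)

# The example from the talk: paying 1 BTC with a 2 BTC and a 10 BTC coin.
# The 2 BTC coin is smaller than the ~11 BTC change, so it was unnecessary.
print(has_unnecessary_input([2.0, 10.0], assumed_payment=1.0, fee=0.01))  # True
```

Which output is the payment is itself a guess, which is part of why the gist distinguishes two variants of the heuristic.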

Existing implementations

Existing implementations, here I am talking about things that were implemented in 2018, 2019. At that time I worked on a payjoin implementation within Joinmarket. That is that link there. It is just code. That was an implementation where the idea was: let’s do payjoin but only between two Joinmarket wallets, because it is much easier that way, and let’s use the existing messaging infrastructure where Joinmarket bots talk to each other end-to-end encrypted over IRC servers. Let’s use that messaging channel and coordinate that protocol so that one Joinmarket wallet owner can pay another Joinmarket wallet owner. Around the same time Samourai did what they called Stowaway. There is some high level description and some code. They were basically doing the same thing as what I did in Joinmarket, which was payjoin between two Samourai wallets, but they called it Stowaway. Two Samourai wallets can pay each other with Stowaway and use this exact technique where the receiver contributes UTXOs to the transaction, thus breaking the common input ownership heuristic and making the payment amount unclear. Those were early implementations. The real issue of course is that payments are often between parties not using the same wallet. In fact the majority of payments occur between parties not using the same wallet. We really needed a standard and that started getting discussed. The first person to take the initiative on that was Ryan Havar who called his proposal Bustapay. He created BIP 79 and it lays it out in fairly simple terms. The focus here is not so much on two peers using a phone wallet or a software wallet on their desktop but a customer paying a merchant. It doesn’t insist on that but that is its focus, because it is a client server architecture where the server is a web server. He describes in detail a protocol where, using a HTTP POST request, you could put a proposed transaction in the body of the request. Here is where the smartness of a good payjoin protocol comes in. We discussed this at that London meeting in quite some detail. The original proposer is the one who wants to pay with a payjoin: you’re a merchant who is selling widgets for 1 BTC, I’m a customer, I want to pay you with payjoin because it is better for privacy. I don’t just say “Please can I do a payjoin?” I send you a payment which is not a payjoin, which I’ve signed, because I am intending to pay you. Instead of broadcasting my payment on the network, which is the normal workflow where you as the merchant notice the payment on the blockchain, I sign the transaction as normal but do not broadcast it and I send it to you in the body of a POST request. You receive that and you look at it. You say to yourself “That looks like a decent correct payment of this invoice. Now because he has used the payjoin flag, let’s say, I am going to send him back a partially signed transaction and he is going to co-sign that and complete the payjoin.” Remember that is the whole thing about payjoin. Like any coinjoin, more than one party has to sign the inputs. He signs his input(s), sends it back to me as the response to my request. I look at what he has sent me and say “That is still basically the same transaction. I am still spending the same amount of money, I’m not getting cheated. He has signed his inputs, and his inputs look the right type, like they are the same kind of input as my kind of input.” There are a set of checks. 
Then I say “I agree.” I cosign that payjoin transaction and I do broadcast that onto the network, unlike the original proposal transaction which I did not broadcast. Then you as a merchant keep an eye on the blockchain. If you see the payjoin transaction was broadcast, great, that means your invoice was paid. If you don’t after 1 minute, you have in your back pocket the original payment transaction that I sent you. The nice thing about this protocol is that it ensures that the payment goes through whatever happens. Can anybody explain why it is important that in the payjoin protocol the customer who is paying sends this non-payjoin transaction first?

It is very important to limit the queries the payer can do because otherwise if he didn’t send over a valid transaction first then he could keep asking until he knows all the UTXOs in the wallet. That would be a big privacy leak.

It is denial of service protection. To have an economic cost for users misbehaving. This is a payment coinjoin so the user actually wants to buy something from the merchant and he wants to make that transaction anyhow. He can sign it and if the receiver is no longer online or something goes wrong it is just broadcasted anyhow without the added privacy of the payjoin but it is a final transaction.

I could expand on that question because I think this is a really important point. Consider that anti-snooping measure against people finding out all the contents of the merchant’s wallet as they iterate over their UTXOs. Does anyone think that this measure is not sufficient to stop that attack?

I think it is not sufficient because if the original payment proposed is higher than the amount in the seller’s wallet it could expose the whole wallet in the first step.

I’m not sure how avoidable that is. If you are a merchant and if you have widgets for 100 dollars.. it is a really tricky point.

A complication is that the merchant has to select a specific input that they would like to join. If that process is not deterministic that means multiple requests would reveal their funds. I think this is orthogonal to having the payment amount. If a user sends a payment, whatever the merchant responds with should be determined completely by what they intended to receive so that this request for Bitcoin does not leak more than the minimum information required to participate in this protocol.

I don’t know if everyone here has read BIP 79 but he was quite specific about this point that the merchant must be careful to always respond with the same proposed joining UTXO if he is receiving requests…

“To prevent an attack where a receiver is continually sent variations of the same transaction to enumerate the receivers utxo set, it is essential that the receiver always returns the same contributed inputs when it’s seen the same inputs.” (BIP 79)

I don’t think he explained it the best way there but I understood him to mean that if someone keeps sending a proposed transaction spending the same UTXOs from the sender you should return the same contributed UTXOs as the receiver. There is a slight detail that is maybe missed there. What if the sender sends a new version where the input set is not identical but is overlapping? I didn’t really want to go into extreme detail here because I think this is a tricky point. I did want to raise it because this discussion is maybe not entirely finished. When people think carefully about how you might attack payjoin there are various different angles to attack it. Certainly this thing of snooping is something that people are going to have to try hard to get right. I think that principle he expresses there is a very useful one. But I’m not sure that even that is quite enough to say “This is a safe thing. There is no possibility of snooping.” Another high level consideration to bear in mind is that merchants, by the nature of being merchants, are exposing quite a bit of information. For businesses generally, at least in the current version of Bitcoin where the protocol is completely transparent in terms of amounts and the transaction graph, buying stuff from a merchant is a way to expose information about them. That was seen even in the very earliest academic analysis of Bitcoin privacy in 2013. If you look on the Bitcoin privacy wiki page you will see a section called the mystery shopper payment; Chris Belcher, who wrote that page, explains it in some detail there. Anyway it is a big topic but I wanted to highlight it because it is one of the more important conceptual and technical issues in the idea of payjoin.
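
As a rough sketch of the deterministic behaviour BIP 79 asks for (the function, the in-memory cache and the select_utxo callback here are purely illustrative, not any wallet’s actual code, and a real receiver must also think about overlapping rather than identical sender input sets, as discussed above):

```python
# Return the same contributed receiver UTXO whenever the same set of sender
# inputs is seen again, so repeated requests cannot enumerate the receiver's
# wallet one UTXO at a time.
seen_proposals = {}  # frozenset of sender outpoints -> receiver UTXO id

def contributed_input_for(sender_inputs, select_utxo):
    key = frozenset(sender_inputs)
    if key not in seen_proposals:
        # Only pick a fresh UTXO the first time this exact input set appears.
        seen_proposals[key] = select_utxo()
    return seen_proposals[key]
```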

One way to limit that interaction is by making the endpoint disposable. You create an endpoint for receiving only one proposal, one use only. That is possible and makes this easier. If your endpoint is a Tor onion address you can create as many disposable addresses as you want, or you can manipulate the URL on your website to make those endpoints disposable. Then the buyer doesn’t have a valid endpoint to continue making proposals to you.

This is where the subtlety of whether it is a peer-to-peer payment or a merchant payment comes in because if you did that as a merchant you would just be spawning out new onions or endpoints for every payment request. It doesn’t make any difference in that case because that is how it works. I was just experimenting with a BTCPay Server instance that Mr Kukks set up and I just kept pinging it with different requests. It doesn’t matter if it is an onion there does it?

It doesn’t matter really, you are right. For example, for a scriptPubKey you can associate an onion address to a scriptPubKey and make it disposable. You can only receive one proposal for that scriptPubKey.

One thing that I also just realized is that the sender can do a double spending attack because he makes the first signed transaction and then he receives the pre-signed or partially signed transaction from the receiver. So he knows the fee rate. He can simply not sign the second transaction, the payjoin transaction, and double spend the original transaction paying back to himself with a slightly higher fee rate. If he is the one to broadcast his transaction first, the original transaction may not even be broadcast because full nodes will think it is a double spend if RBF is not activated. That might be an issue too.

I can’t remember the thinking around that. I seem to remember reading about it somewhere, that is an interesting point for sure. Let’s keep going because we want to get to the meat of it, what people are doing now. I wanted to give the history. That was the BIP (BIP 79). It is a very interesting read, it is not difficult to read. I did have some issues with it though and I raised them in January 2019. Ryan Havar had posted the proposal and I wrote some thoughts on it here. Fairly minor things, I didn’t like the name. Protocol versioning, other people didn’t seem to think it mattered that much though apparently some of them are changing their mind now. Then there was the whole thing about the unnecessary input heuristic which we talked about earlier. It is interesting to discuss whether something like that should be put in a BIP, a standardization document, or whether it should be left up to an implementation. We will come back to that at the end. There are some other technical questions, you can read that if you are interested in such stuff. The conversation in different venues has been ongoing.

Payjoin in BTCPay Server

The big change recently was this one from BTCPay Server. Here is now the spec document that they have written and is still undergoing changes as we speak. Payjoin-spec.md in the btcpayserver-doc repo, it describes a delta to that BIP 79 that we just looked at. It is the same basic idea in terms of making a HTTP request to some endpoint. I forgot to mention importantly BIP 21 URI. What is a BIP 21 URI?

BIP21 URI is a URI with a Bitcoin scheme. A scheme is just a standard thing for all URIs that describe the protocol. Following that is a Bitcoin address and optionally some metadata parameters. It is just a standard format that makes it so that you can click on a payment and if you have a wallet configured locally on your computer it could go from an email or a website directly to that wallet with the correct payment amount and the correct destination address. I think there is an optional payment description as well, I don’t remember the details.

I think the idea is that you can have several optional fields. You could also have required fields such that if one is not recognized then the payment is not allowed. An example of this is Mr Kukks’ donation BTCPay Server instance. You can see that the scheme is bitcoin; we have been using this since 2011 or 2012, it has been around a long time. It is an early BIP. You have the address, the amount, and critically in this case pj, meaning payjoin: the parameter that tells the sender that the merchant, or whoever is receiving the payment, intends this to follow the payjoin protocol that BTCPay Server are defining in that document, which is itself a delta to BIP 79, which had Bustapay instead. This just has payjoin. It is a delta that doesn’t change that much. It changes a few things about the communication between the parties. The most important change, apart from trivially the name, is that it uses PSBT. This is one of the things I was complaining about in the original Bustapay proposal at the time, I thought we’ve got to use PSBT because everyone is obviously going to migrate to that. It is a sufficiently flexible standard for sending transactions across wallets. Does anybody not know what PSBT is? It means partially signed Bitcoin transactions. It is a standard, BIP 174. The spec does actually allow you to use a raw hex format for a transaction instead of PSBT. As far as I’m aware everybody who has implemented this is using PSBT. This adds a little bit more detail. It also adds stuff that is very useful. If you ever find yourself in a position where you want to implement payjoin you’ve got the advantage that it tells you all the checks that you should be doing on both the receiver and sender side. It has some interesting considerations about the relationship between the choices we make in building these transactions and the position of the blockchain analysts. That is what we are going to move on to in the next few steps. They brought out that document and they implemented it within BTCPay Server. Does anyone know when they pushed that out? Three or four weeks ago? It was recently.
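
To make the mechanics concrete, here is a minimal sender-side sketch. All values (the address, amount and pj endpoint) are placeholders, and the exact headers and extra parameters are defined by the BTCPay Server spec rather than this snippet: the sender parses the pj endpoint out of the BIP 21 URI and POSTs the base64 PSBT of the signed-but-unbroadcast fallback transaction, getting the receiver’s payjoin proposal PSBT back in the response.

```python
from urllib.parse import urlparse, parse_qs
import requests

# A BIP 21 URI as a payjoin-capable receiver might present it (made-up values).
uri = "bitcoin:bc1qexampleaddressxxxxxxxxxxxxxxxxxxxxxx?amount=0.017&pj=https://pay.example.com/pj"

params = parse_qs(urlparse(uri).query)
pj_endpoint = params["pj"][0]

# The signed, deliberately unbroadcast fallback transaction, as a base64 PSBT.
original_psbt_b64 = "<base64 PSBT of the signed fallback transaction>"

# Submit the fallback in the request body; the response body is the receiver's
# partially signed payjoin proposal, which the sender must check and co-sign.
response = requests.post(pj_endpoint, data=original_psbt_b64,
                         headers={"Content-Type": "text/plain"}, timeout=30)
response.raise_for_status()
payjoin_proposal_psbt_b64 = response.text
```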

In this merchant scenario could the merchant help coordinate combining purchases from different shoppers in the same transaction to mix more UTXOs?

I think that is difficult. Something simpler than that but somewhat similar, one thing that was discussed in the BTCPay Server doc was receiver’s payment batching. It is a nice thing, it is not hard. The way the protocol is defined the sender of the payment doesn’t bother looking at the outputs which do not belong to him. That allows the receiver to be flexible and batch his own payments to his own suppliers, he may want to pay three different people at the same time, he can batch that into the payjoin. This makes it more economical for him at least and also confuses the blockchain analysts a little bit more, that is debatable. The other idea of having multiple payers paying at the same time, it brings us back to that coordination problem. When you try to coinjoin with 4, 5, 10 people they have all got to be there at the same time. The good thing of course is that they don’t have the problem of coordinating the same output amount because they are all making their own payments, that is fine. The bad thing is that they all have to make their payment at the same time. Today I was testing this protocol and it was cool doing coinjoins that take less than one second. I was pressing on the keyboard and the payment just went through. I am so used to in my head dealing with Joinmarket where it takes a minute to figure everything out and send the data back and forth. The answer is yes in theory but there are some practical difficulties there.

There is a privacy concern for the payers. They might learn about each other’s inputs and then refuse to sign which I think is a serious issue.

To reiterate, there is a coordination problem, a privacy problem and a protocol jamming problem. The protocol might not finish. Whenever you coordinate between multiple parties you have to deal with this stuff, it is pretty hard. The upshot is that it would be very hard to do that.

Can I have two wallets and make a payjoin to myself just to fake it and get more anonymity?

The thing with payjoin is that it is steganographic. It is really not clear that the common input ownership heuristic is broken; it might look like it holds. If you fake a payjoin you are just doing transaction batching or input consolidation, and in your case the common input ownership heuristic would actually be correct. You might be deanonymized or you might not be, it may fool someone. But because we try to make payjoins steganographic, a faked one just looks like any other payment where the common input ownership heuristic applies.

I think one can answer a little simpler than that even though that is correct. By steganographic we mean that we are trying to make coinjoins that look like ordinary payment transactions. If you make a payjoin to yourself it is not achieving any goal that I can understand because you are just making a payment to yourself. Notice how it is different from doing a fake equal output coinjoin. I’m not arguing that that is a particularly good or bad thing to do but if you do that you are creating a coinjoin which blockchain analysts may choose to flag and say that is a mixing transaction. Here you are creating something that looks like an ordinary payment so they wouldn’t flag it. You may well raise the question is it interesting to do payments to yourself, multiple hops kind of thing. Maybe, maybe not. It is a complicated question. Intrinsically it is steganographic, that’s the main answer so it doesn’t really make sense.

If I would like to top up my second wallet can I make a payjoin to my second wallet to fake it and get more anonymity from it?

My answer is what is the difference between doing that and just paying your second wallet? I don’t see the difference. In terms of their onchain footprint they look the same. An ordinary payment and a payjoin, that is the idea.

Does payjoin usually fragment or aggregate UTXOs? I assume for privacy it would fragment. In that case do you think it could cause a long term effect on the size of the UTXO set on nodes?

I am going to give my approximate answer, I’m sure others have opinions on this. As far as I understand it is unlike equal sized output coinjoins, which do create extra UTXOs… Equal sized coinjoins over time are not going to make your wallet worse than some kind of equilibrium state. But they are obviously using up a lot of blockchain space. Payjoins, I’d argue, are maybe even better because you are making a payment anyway so the only thing that is happening is on the receiver side. On the sender side it doesn’t change anything. You use up the same inputs you would have used anyway, you get back the same change you would have got back anyway, nothing has changed there. For the receiver though it does change the dynamics of their wallet because if you were receiving let’s say 10 payments a day, a merchant receiving Bitcoin, if you don’t use any payjoin at all you are going to get 10 new UTXOs that day. At some point you are going to have to consolidate them somehow. If on the other hand you use payjoin for all of them, in the most extreme case you just have 1 UTXO at the beginning of the day and 1 UTXO at the end of the day. With each new payment you would consume the previous payment’s UTXO as input and you get a new one coming out. We call that the snowball effect because it tends to lead to a larger and larger UTXO. The reality is that no one is going to be 100 percent payjoins so it is going to be more complicated. It is also not 100 percent clear what the effect is in terms of the economics of how much money you spend in transaction fees as a merchant using this technique versus not. A complicated question.

It is an interesting question with the snowballing effect because on the one hand if you use the same UTXOs for payjoin transactions over and over again that might lead to fingerprinting that in fact a payjoin transaction is going on. We would know that the common input ownership heuristic is actually broken. Then it might be worse. That is worth considering. Then it is a question of coin selection, of how to do payjoin. I think we get a quite good effect using equal outputs of a coinjoin, or the change of an equal value output coinjoin like Wasabi or Joinmarket. Here if both users have change from a coinjoin and then they do a payjoin, the heuristic would say that both of these changes belong to the same user. That might help quite a lot with the fingerprinting of change because it would be screwed up quite a lot.

That gets even more complicated. In the example you just gave, if we did an equal sized output coinjoin together and there were 15 people in the set, 2 of us in that set took the change outputs we got and made a payjoin with them then I think we are going to give away that we have done a payjoin because the way coinjoin software works now is it only produces one change output. It wouldn’t produce two. Unless yours produces two in which case that is fair.

I’m not sure how it holds up for the participants of the same coinjoin.

I think we are getting into the weeds here with these details but maybe it is ok because these are the kind of things that people are thinking about as they go through these protocols.

Why is it worse to do this snowballing? I didn’t get it. Maybe you can elaborate on that. The sender has to provide a transaction to the receiver. The receiver does a partially signed Bitcoin transaction and sends it back to the sender? Is this right? A three way handshake?

It is two way not three way.

The sender doesn’t have to provide another output. It just has to send the transaction. The transaction is not completed from the receiver?

Why might snowball be usually bad? In this case where a merchant receives a hundred payments in one day, if he were to take the first payment and use it as input to the second payment then he would double the amount, go from 1 Bitcoin to 2 Bitcoin. Then he would use it again as an input to the next transaction where he would receive another Bitcoin. It would be 3 Bitcoin. By the end of the day he would have a single UTXO with 100 Bitcoin. The bad thing about that is that is a very obvious behavior to observe on the blockchain because usually the way the merchant works now, they are going to get a lot of transactions where the inputs are just normal sized, but here you could easily trace a single UTXO being consumed and then produced. Consumed and then produced. In a series of steps. I’m not saying 100 percent because the whole point of all this is that nothing is certain in blockchain analysis. It is not like anyone can be 100 percent sure of anything from the blockchain data on its own. You don’t want to make really obvious flags like that. The second question, I understand you are a little confused about the sequence of events in a payjoin as currently implemented. I wish I had a nice diagram of steps. I think it is pretty simple. If you look at this example transaction here with Alice and Bob. How would it actually end up happening? Alice would come along and she would have these first two 0.05 and 0.09 inputs and she is planning to pay Bob 0.1. 0.05 plus 0.09 being 0.14 and this is 0.1 so this adds up to enough to pay that 0.1. She could make a normal transaction where the inputs were 0.09 and 0.05 and one of the outputs was 0.1 paying Bob and the other output would be change paying her 0.04. That would be a normal payment transaction where she pays 0.1 and she gets 0.04 back as change. Here is what she does. She makes that transaction, the normal transaction, she signs it as normal. The normal next step is to broadcast it onto the Bitcoin network. She does not do that. Instead she sends the payjoin request. She has already got a BIP 21 URI, a request from the server saying “Please pay me 100 dollars and here is the payjoin URL.” She builds that standard normal payment transaction and sends it in her HTTP request to that payjoin URL. The server only has to do the following. It has to check that that payment transaction is valid, it does not broadcast it, and instead it creates a partially signed transaction of exactly the type that you are seeing here. He is the one who creates this transaction. He takes the same inputs that Alice gave him, the 0.05 and 0.09 inputs, and he adds one of his own inputs. Then he creates two outputs, but the difference being of course that while Alice still gets her change, his output is no longer 0.1, it is now 0.18, because he added in the 0.08 that he provided to the inputs to balance it out. He creates this transaction but of course he can’t completely sign and broadcast the transaction because he is not Alice and he can’t sign these first two inputs because she owns those inputs. He creates the transaction in a partially signed state where he has signed his input here and her inputs remain unsigned. In that partially signed form, as a PSBT, he sends it back as part of his HTTP response. When Alice receives it she looks at it, does all the checks, and if she is happy she signs these first two inputs and broadcasts the whole thing onto the network. If anything goes wrong at any stage in that process either party can broadcast the original ordinary transaction.
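
To summarise the arithmetic of that Alice and Bob example (fees are ignored here to keep the numbers simple, which is an assumption of this sketch):

```python
# All amounts in BTC, taken from the example above.
alice_inputs = [0.05, 0.09]   # Alice's inputs, reused unchanged in both versions
payment = 0.1                 # the invoice amount Alice owes Bob

# Fallback transaction: Alice signs this but does not broadcast it.
fallback = {
    "inputs": alice_inputs,                                     # 0.05 + 0.09
    "outputs": {"bob": payment,                                 # 0.1
                "alice_change": sum(alice_inputs) - payment},   # 0.04
}

# Payjoin proposal: Bob adds his own 0.08 input and bumps his output by it,
# while Alice's inputs and her change stay exactly the same.
bob_input = 0.08
payjoin = {
    "inputs": alice_inputs + [bob_input],                       # 0.05 + 0.09 + 0.08
    "outputs": {"bob": payment + bob_input,                     # 0.18
                "alice_change": sum(alice_inputs) - payment},   # 0.04
}
```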

There are two transactions involved. One is from Alice. She sends a complete signed transaction to the receiver. The receiver just looks at it, checks the signatures are valid and then creates a completely new partially signed Bitcoin transaction with the amounts of the previous transaction…

It is not completely new. Maybe this is the point that is confusing you. The one that he creates still uses the same Alice inputs as before.

The same inputs but the transactions are different because the first transaction is already signed and the receiver needs a partially signed Bitcoin transaction.

Don’t forget that whatever signatures Alice had on the original transaction are no longer valid in the new transaction. I think you understand it. When you look at this kind of transaction you have in general four possible interpretations. You could interpret it as being a payment of 0.18 with a change of 0.04 and not being a payjoin, being an ordinary transaction. You could interpret it as being a payment of 0.04 and a change of 0.18 and not being a payjoin. Or you could interpret it as a payjoin, there are at least two more interpretations. There are multiple interpretations. It could be the payment is 0.1. I’ll let you figure out the details yourself. The point is that you can take the position that it is a payjoin or you can take the position that it isn’t a payjoin and that will give you different possible interpretations. It would be silly to think that payjoin somehow hides the amount totally. Obviously it doesn’t hide the amount totally. There are a fixed number of combinations of numbers here. Unless this is opening a Lightning channel and you don’t know what is going on because who is paying who? You could be paying someone on the other side of the network or it could be a coin swap and so on. If we just restrict ourselves to ordinary transactions or payjoins there are a finite fixed number of interpretations.

What about implementations? We mentioned the BTCPay Server delta to BIP 79 which I am sometimes calling BIP 79++ in some documents. Wasabi has merged a PR into master, thanks to Max for confirming these details for me. This is the pull request that was merged five days ago. They have got sender side BIP 79++. What that means is that you can use a Wasabi wallet to pay a BTCPay Server merchant using this technique as we just described it where either it works or you fallback and make an ordinary payment anyway. Also myself recently, this is not merged into master because there is this big refactoring behind it. It is finished in the sense that it is working but not finished in the sense of merging it. The implementation in PR 536 in Joinmarket. BTCPay Server itself I heard has some kind of plan for an implementation for a client wallet of payjoin. Mr Kukks who worked on the BTCPay Server code for this pointed me to this Javascript library. I don’t know the exact relevance but it is interesting because you can look at the logic in the code. I heard that they had some kind of client wallet, not just the server wallet. I also heard that BlueWallet are doing a BIP 79++ implementation? Anybody know that?

As far as I know this Javascript payjoin client library is used by BlueWallet. They are working on at least the sender side and are yet unsure about the receiving side.

That leaves one remaining open question. Does anyone know or have opinions about which other wallets should or could or will implement this? In a minute I am going to show you the result of my implementation and a discussion of fingerprinting.

Electrum should implement it.

I agree. Has anybody heard from them about it? I’ll ping ghost and see if anyone can tell me anything.

Green wallet. Blockstream had a blog post to kick off all this. They have been sponsoring BTCPay to use the standard they were proposing. It would be quite logical to get into Green wallet as soon as possible.

Fingerprinting

This brings us nicely into the topic of fingerprinting because unless I missed something, Green wallet’s inputs are distinct.

They would be a 2-of-2 multisig if they used that feature. But you don’t necessarily use that feature in Green wallet, it is optional.

I actually have Green wallet and I use it fairly regularly but I never bothered to look at the actual transactions. I thought it was either 2-of-2 or 2-of-3 or is there an option to have single key?

It is either 2FA or Blockstream has the other key.

Are you saying that you think it is perfectly reasonable that they could implement payjoin between Green wallets? I don’t think it would work with a merchant because the merchant would not have the same kind of input scripts. Am I missing something? What I raised on this post is I made my second experiment… I had a BTCPay Server remote instance that I chose. I paid him. I made this payjoin transaction using my Joinmarket wallet. If you are confused Joinmarket does have native SegWit wallets, it is just that we don’t really use them because you can’t use them in the equal amount coinjoins. This is from a Joinmarket wallet to a BTCPay Server instance. It produced this transaction. First of all let’s look at the amounts. Is there anything that stands out in the amounts? I was asking a trick question. Unless you are talking about what is already highlighted here in blue (Unnecessary input heuristic) there isn’t really anything remarkable about those amounts. blockstream.info, this particular block explorer does mark rounded amounts. There are no rounded amounts here. I chose a random payment amount. That was my choice because it was a donation server, I could choose a payment amount. The amounts don’t give away very much. Indeed you would be hard pressed to figure out how much money I donated just by looking at these numbers. You could come up with a list of possibilities. The interesting thing is if you actually expand the transaction and you look at the whole serialization here there is something that might stand out about this transaction, maybe two things. The first one is the unnecessary input heuristic that we discussed earlier. Let’s say this was an ordinary payment of 0.017. Why would the payer produce 0.0179 and 0.00198? What would be the necessity of adding this extra input? You might say for fees but that is already quite a bit bigger than needed for fees there. That is one version of the unnecessary input heuristic. This does flag that. The interesting thing is that quite a lot of ordinary transactions flag that heuristic anyway. I’m quite interested to talk to Larry or whoever it is that runs this block explorer nowadays and ask them how often they make this flag. I think they are going to attach that flag to transactions quite often. I’m not sure if it is going to be 20 percent or 50 percent but I would like to know. There is one other thing about this transaction that flags. Can anyone see it?

I have a guess. The nSequence number.

Very good. Did you hear about it recently or did you see it just from looking at it?

I see it.

This is different. Do you remember what the significance of these particular values is?

0xB10C wrote an article a couple of days ago about the nSequence number for timelocks. It is about the fee sniping attack if I am right.

It is related to that for sure.

It means I think that you can only relay transactions at the current block height.

Something like that. Obviously we are not going to have an hour lecture on the details of Bitcoin logic here. Let’s give a few key points. First of all, this is the maximum integer in 4 bytes and this is the default nSequence number. It indicates that the input is final. In some vague sense that meant something to Satoshi back in the day but it has evolved in ways that a lot of people can barely understand, it is a bit weird. But if a transaction is final in the sense that all of these values are the largest integer 0xffffffff then the nLockTime field in the transaction which is shown somewhere or not? The nLockTime field is disabled. If you want to use a nLockTime value the problem is that if you have all the sequence numbers at that maximum value it won’t be enabled. So what Bitcoin Core does is use this value 0xfffffffe and this value is one less than the maximum integer. It has e instead of f at the end so it is one less in hex. That value enables the nLockTime field. If it is enabled you can do an anti fee sniping measure which is you set the locktime to a recent block and that helps with avoiding a miner attack. Can anyone explain it?

I believe this is not exactly correct. The problem with a fee sniping attack is that if there is a confirmed block that has spent a lot on fees, a miner has an incentive to mine on top of that block’s parent and reorg it out. The difference in the fees relative to the current state of the mempool is what they would gain by doing that. Maybe that is enough to offset the block subsidy and the difficulty of actually mining two blocks and succeeding in a reorg. But that would be fixed by setting the nLockTime field.

There is a connection between the two. You want to set the nLockTime to fix this problem but you can’t set the nLockTime and also set all the nSequence values to 0xffffffff. It disables it. We went over this yesterday because we were discussing this point in the BTCPay Server Mattermost. I can recommend this particular StackExchange discussion that helps to explain the rather complicated relationship between nLockTime and nSequence. There is the question of what the actual rules are. Then there is the question of what does Bitcoin Core usually do? That is relevant because a wallet like Joinmarket might want to copy what Bitcoin Core usually does. There is the additional complicating factor of whether you are setting the opt-in RBF or not. To set that you need to set this nSequence field to max int minus two where this is only max int minus one. In this case RBF is disabled. There are a whole bunch of complicated rules and considerations here. The upshot of a long conversation I had with Nicolas Dorier yesterday was that he agreed that we should have the server wallet agree with the sender or client wallet’s setting of nLockTime. In Joinmarket’s case I always have this nSequence value set to enable the nLockTime. His server, his NBitcoin code was by default sending out the maximum integer always. The reasons are complicated. He has updated this now apparently. What is the point of going into all these details? I just wanted to explain to people how tricky these questions are. If we are going to try to have a steganographic transaction we are going to have to make sure that the servers absolutely mimic the sending wallet. The sending wallet could be a mobile wallet like Green wallet or Electrum or whatever. My point is that I think the servers have to mimic the clients. Even that statement is not quite clear enough. The protocol should be really clear on these points I think.
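
For reference, a small sketch of the nSequence conventions being discussed (the constant names are made up; the values are the ones covered above and in BIP 125, and the height is a placeholder):

```python
SEQUENCE_FINAL = 0xFFFFFFFF            # if all inputs use this, nLockTime is disabled
SEQUENCE_LOCKTIME_NO_RBF = 0xFFFFFFFE  # max - 1: enables nLockTime without signaling RBF
SEQUENCE_OPT_IN_RBF = 0xFFFFFFFD       # max - 2: signals BIP 125 replaceability, also enables nLockTime

# Anti fee sniping in the style of Bitcoin Core's wallet: set nLockTime to a
# recent block height so the transaction cannot be mined into a reorged
# earlier block.
current_height = 700_000               # placeholder value
nlocktime = current_height
```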

It is good that you bring this up. BTCPay Server already does this with the address types which is another way to fingerprint this. If the receiver initially proposes a native SegWit bech32 address but the sender spends a coin and generates a change address that is legacy then in the payjoin transaction the server will respond and switch his receiving address at least to a legacy address. I’m not sure if they also select a coin that is legacy. That would be optimal too.

I want to expand a bit on those points. First of all we are mostly concerned with the inputs not the outputs because it seems to me that having more than one scriptPubKey type in the outputs is not really a problem; it is very common for a sending wallet to have a different scriptPubKey than a receiving wallet. That is one of the ways you identify change outputs, so maybe you are right. But it is the inputs that really count. The thing is that the BTCPay Server merchant wallet, as I understand it from speaking to Mr Kukks, does not yet have the ability to support multiple scriptPubKey types at once. You could have a P2SH-P2WPKH wallet in BTCPay Server merchant or you can have a native bech32 BTCPay Server merchant wallet. I don’t think you can currently have one that mixes both types together. That would be ideal for our scenario because we want to allow different sending wallets to get payjoins successfully through and not have to fall back to the default. Absolutely the scriptPubKey is a huge issue as well. The nSequence is minutiae, but you have to get it right.

This tells you where Blockstream are flagging it. I am curious whether they are using what we called one or two but I’ll read that later.

Both heuristics are used from what I saw. This is a shameless plug to please review my tiny documentation Bitcoin Core PR where I have commented about nSequence. If you look at the diff it is quite simple, it is just a comment change. nSequence is so confusing.

I think it is confusing because the original concept behind it was not something that ended up being… A complicated question. I think the most important point there is that there is a connection between nLockTime and nSequence. Nicolas Dorier didn’t get it at first either and I certainly wouldn’t have got it unless I read up and researched it. For me I wouldn’t usually think about stuff like this. It all came out of the fact that I was really keen on copying the anonymity set of Bitcoin Core and Electrum in Joinmarket’s non-coinjoin payments. Obviously it is a wallet so you have to have the ability to make ordinary payments. In that case I want to be steganographic. With the equal sized coinjoin outputs in Joinmarket there is no possibility so we don’t even try. Because I did that, I used this particular combination of nSequence and nLockTime in Joinmarket, and that is why, when I did this payjoin with BTCPay Server, we got this fingerprint: their logic didn’t agree precisely with mine.

To mention something about the nSequence: many people prefer to signal replace-by-fee (RBF) because they want to be sure they can make the transaction confirm in the timeframe that they want. That is something that can fingerprint the wallet that the user is using.

What is even more confusing is that I believe that you only have to set the RBF flag in one input? You don’t have to set it in all of them do you? Is that right?

If you flag one input then the transaction and all of the child transactions are replaceable. Of course you can double spend the input that you flag.

The reason I mention it is because when I was going over this with Nicolas Dorier yesterday we were trying to figure out exactly what we should do; we had to worry about what happens if the sender is not using the same nSequence in every input. This is really in the weeds. Perhaps we should stop with all that extremely detailed stuff and stick with the more high level points for the time that remains.

I think this is an interesting document. Let’s go back to the highest level again. This is a gist from LaurentMT who, as a lot of you will know, is employed by Samourai. He is an analyst who has been working on blockchain analysis and privacy tools for ages in Bitcoin. He is giving some opinions here about payjoin transactions and deception tools and the concept of steganographic transactions that we have discussed today. He is addressing this issue of fingerprinting that we just talked about. He is saying “If you want to avoid fingerprinting you can go in two directions. You can choose to have everyone have exactly the same fingerprint or you can go in the direction of having fingerprints be random all the time.” I don’t think I agree with everything he said but I agree with a lot of what he said. He is arguing that randomness is better and more realistic because it is very hard for every wallet to be the same and have the same fingerprint in every detail. What happens when somebody wants to update their software and so on. I think that is largely true but there are a couple of additional points I’d make. First of all we don’t have infinite scope to be random of course. The trivial example is that if you want RBF you can’t just use the maximum sequence number. The other point I’d make is that while it is true that, if we are just talking about a big network of wallets talking to each other, it is hard for everyone to agree on the same fingerprints, I would argue it is a bit easier for servers just to mimic and agree with the fingerprints of their clients. It is not necessarily that easy because different scriptPubKey types are a bit of a hurdle. We can’t really have payjoins where the server produces a native SegWit input and the client produces a legacy input. I do recommend reading it though. It has got some thoughtful points about different ways we might be building wallets with steganographic transactions in mind.

Q&A

Should the RBF flag be truly random, in the sense that it is a 50/50 chance between signaling RBF or not signaling RBF? The downside that I see here, we talked about this at Wasabi, is that currently 0.6 percent of all native SegWit transactions signal for RBF. If Wasabi went for 50/50 then there is a pretty high likelihood that an RBF transaction is from Wasabi. To work around this what we have done is on average 2 percent of transactions will signal RBF. It is still randomly chosen but with a skew towards the actual usage on the network. Is that useful?

If you are asking me my opinion, as I already said, in principle complete randomness is a more practical option than complete homogeneity. I was musing about this on Mastodon earlier today. I was saying that on the one hand you could have a completely blank sheet of paper and on the other hand you could have a sheet of paper with masses of random pen scribbles all over it. In both cases no information is conveyed. If we go for the massive amount of scribbles, having everything activated randomly, it is highly problematic because, as an example, your wallet might really think that RBF is a bad thing or that RBF is a good thing and might really want to have that feature. You can’t really randomize it unless you are going to give up what you are trying to do. Your question is another point that often gets raised, which is that we want this thing to be random, but if we are trying to do randomness for the purposes of steganographic behavior then we have to try to match our random distribution to the existing distribution on the network. Then you get these feedback loops. It is possible but it sounds damn complicated. I really feel like the problem is a deep one, so it may be that the practical reality is a certain degree of randomness is the best we can hope for and we can’t have perfect randomness. I also think that in this specific case of this two party protocol the practical reality is that we have to focus on the idea that the receiver mimics the sender’s behavior. It won’t always be possible to get it right. That is ok. Let’s not forget that 10 or 20 percent of payjoins on the network, if it is known to be 10 or 20 percent, is realistically more than enough because it is going to screw up any realistic assumption mechanism.
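
A minimal sketch of the skewed randomisation raised in the question above (the 2 percent figure is just the example quoted there, not a recommendation):

```python
import random

def should_signal_rbf(probability: float = 0.02) -> bool:
    # Signal BIP 125 replaceability on only a small, randomly chosen fraction of
    # transactions, roughly matching observed network usage rather than 50/50,
    # so the flag itself is less of a wallet fingerprint.
    return random.random() < probability
```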

There is another aspect of this which is something I find both deeply fascinating and troubling. The concept of a Schelling point from game theory. There is always a dilemma between your behavior as an individual trying to protect your own privacy and as an individual acting as part of an ecosystem where privacy is an issue. If we assume that fungibility is emergent from privacy and that it is a desirable property of money, it is very unclear what the optimal behavior is. If everybody could just agree to stop producing fingerprintable transactions and actually do it that would be great. There is a huge technical barrier there. There is always a dilemma as an individual interested in Bitcoin’s success and their own personal privacy, do you make sacrifices like participating in coinjoins which is an overt political act? It sends a message to the world at large. Or do you try to blend in to the crowd and mask yourself?

I don’t think that bifurcation is quite right. Doing equal sized coinjoins I agree is an overt political act and could be in some circumstances a sacrifice. But you can’t argue that it has no value at all.

I’m arguing it has tremendous value.

I mean specifically in creating privacy for that user.

That could also mean that you are banned from various exchanges.

It is a sacrifice but you can’t argue that there isn’t a positive effect specifically for that user.

Sorry if I implied that, that wasn’t my intention at all. I didn’t want to get into coinjoins, that was just the easy example of an obvious trade-off. In the context of this stuff that further complicates all of these discussions. I have nothing really to say practically speaking it is just that the nuances here are not just technical it is also about how people perceive these technical details.

I totally agree with your point. When I was on Stephan Livera I was really hammering that Schelling point home. I’m trying to look at it in a positive way. There is a Schelling point today that Bitcoin transactions are traceable and indeed that doing anything apart from ordinary transactions looks suspicious. I think payjoin could be part of the way we flip that Schelling point. If we have techniques that make it not only normal but easy to slightly improve your privacy. Ideally they would be of economic benefit. We haven’t discussed that today. It is a very interesting question about what is the economic impact of doing payjoins as opposed to ordinary transactions. Ideally we could have techniques to improve that like signature aggregation. But even if we don’t have economic benefit, even if we have some very minor privacy benefit and it is steganographic or it is opportunistic so it is very easy to do. Maybe we could flip the Schelling point in the sense that we make blockchain analysis very ineffective to the point that it looks stupid to try to claim you are able to trace all this other stuff. Then maybe it is not so weird to be someone who does coinjoins or whatever.

I think it is more than that. It is absolutely critical because we already know objectively today that this analysis is not very scientifically rigorous. There are all these examples, I think the most prominent one is from the Dutch court system where chain analysis companies were used as expert witnesses. From the point of view of law enforcement what they would like as a service from these companies is a clear cut coherent narrative for non-technical people and they don’t really care if it is pseudo-scientific. So the ambiguity which somebody with intellectual integrity, doing a proper scientific analysis, would acknowledge is not part of what is being sold as a service by these companies. This is why it is on people who actually care about Bitcoin succeeding to really be conscious of not just looking at this at a technical level but also in terms of the perceptions. I think payjoin is a fantastic development in this regard because it confers this plausible deniability to your activity. Another interesting one in this regard is the multiparty ECDSA stuff.

Can somebody help me understand why a random RBF flag makes sense? I didn’t get the point.

Just the idea of blending into a crowd. The general idea is if your wallet has one specific behavior like a specific flag like RBF… I think a lot of people have probably heard of the concept of browser fingerprinting. The idea is your browser gives away a bunch of data about the version of Mozilla and of the screen dimensions, I don’t even know the details. Cumulatively a lot of little pieces of information together might make it very obvious who you are. This is kind of similar, maybe it is not the same. If your wallet uses four or five different flags consistently in a certain way it might be obvious that that is the same wallet. Whereas if you randomize some of those flags, from one spend to the next spend, one uses RBF and the next one doesn’t, it then may not be obvious that it is the same wallet.

Regarding the scriptPubKey compatibility what do you think if the sender creates a disposable 1 hop to a “correct” type of the receiver and then using that to have the correct fingerprint?

Is he saying that somebody who is a sender in the payjoin protocol might just for the payment create a different scriptPubKey so that it matches the server? Is that what he means?

I think what he means is if the sender proposes a legacy address or whatever the fingerprint is and the server only has native SegWit bech32 address coins then he spends a SegWit native coin, a one input, one output transaction, where the output is a legacy address and then he uses this unconfirmed transaction output in the payjoin proposal. Then at least for this individual payjoin transaction both inputs are with legacy addresses. Of course if you look at the grander scheme of things, especially timing analysis, you would see one of these coins was done in this spontaneous self spend.

Who is creating the one time address in order to match up? Is the server doing that? The server. Interesting thought, I’m not sure.

How far is the payjoin integration in Joinmarket? What was the previous work and how has this changed with the new proposal?

The previous work is something that happened in early 2019. That allowed two Joinmarket wallets to pay each other with a payjoin. Same basic protocol but not over HTTP and no server, just two Joinmarket bots talking to each other over an encrypted message channel. The new version, I think I explained before the call started, is a situation where in order to implement it I had to build it on some rearchitected Bitcoin backend using a fork of python-bitcoinlib in fact. I built that in a branch in order to get PSBT functionality which we now have in that branch. Also I put SNICKER functionality in there but it is not really user exposed anyway. That is shelved for the moment. I was able to build this new BTCPay Server payjoin code in that branch. It is pull request 536 in Joinmarket. You will see there that I have got to the point where it is all finished on the command line at least. I was able to do experiments sending. If there is anyone there who is somewhat developer inclined and wants to help out they could use that pull request branch 536. Make sure to install libsecp256k1 locally because there is a technical installation issue there that is going to get resolved soon. That can get merged in a few weeks. The code is already functional, I have been testing it with mainnet. For receiving we would probably have a similar situation to Wasabi, we would probably like to set it up so that a user can set up their own ephemeral hidden service temporarily to receive a payment using the same protocol as BTCPay Server uses today instead of this old method that we had which is proprietary to Joinmarket. I think it would be nice if all the wallets that are currently implementing the sender side also try to implement the receiver side using hidden services. It is not necessarily that convenient for users to do that but at least that is very generic and it should function for most people if they want to use it.

It is awesome to know that Joinmarket is going to implement the receiver part. Payjoin is when two people collude to fool a third one, a blockchain spy. In order to be able to do this they need to be able to communicate. That is where the endpoint part comes from. This works, but in order to make it work even more we need wallets like Joinmarket for example to implement the receiver part: wallets that run on small devices or wallets that for some reason need to be on for a long time. I will implement this in Wasabi too. This should be a peer-to-peer way to introduce an enormous amount of confusion into the blockchain so we need this. If the wallet runs for a long time it is a good candidate, and if it is not, because it is just a mobile wallet that only implements the sender part, that is ok, you are still contributing to the privacy.

I have primitive server code, I think we can do the onion receiver part but we need to get the sender part in first, get it into master and then into a release. It is going to be a few weeks. Regarding BIP 69, a discussion with Nicolas Dorier about this issue of how we make sure that payjoins don’t fingerprint made me realize that the fact that some wallets use BIP 69 is actually problematic for this. The thing about BIP 69 is that it orders inputs and outputs lexicographically, like alphabetical ordering but over the raw bytes. That is all very well and good because it is deterministic randomness, but the thing is it is identifiable. Because it is identifiable it is a fingerprint. Unless everyone was going to use BIP 69, its presence is a net negative for us trying to make transactions not stand out. Nicolas Dorier originally had the intention to replicate BIP 69 in the payjoin transaction if he sees it in the original fallback transaction. I was saying that is a problem because there are plenty of false positives. A randomly shuffled transaction, which is how a lot of wallets do it including Joinmarket, can sometimes look like BIP 69 by accident. If you replicate that you are going to change the distribution. These are the nasty details you get when you choose to add semantics into the transaction construction which do not intrinsically need to be there. I have been convinced by this discussion that BIP 69 is a scourge on Bitcoin and should be eradicated at all costs.
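
As an illustration of why randomly shuffled transactions can collide with BIP 69 (the data here is made up; BIP 69 sorts inputs by txid then output index, and outputs by amount then scriptPubKey):

```python
import random

# Two made-up outputs: (amount in satoshis, scriptPubKey hex).
outputs = [(18_000_000, "0014" + "aa" * 20), (4_000_000, "0014" + "bb" * 20)]

bip69_outputs = sorted(outputs)                             # amount first, then scriptPubKey
shuffled_outputs = random.sample(outputs, k=len(outputs))   # what many wallets do instead

# With only two outputs, a random shuffle matches the BIP 69 ordering half the
# time, so "looks BIP 69 sorted" produces plenty of false positives.
```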

So SNICKER. What is the latest on that?

I am thinking about it from Joinmarket’s point of view because other people don’t seem to be interested in implementing it right now. I have put the backend infrastructure in place for it, all the cryptographic primitives, and I even have a receiver script for it. If I were to actually make it work, even in the restricted case of using it with Joinmarket participants, it would work really well because it would be easy to find the candidate transactions, but either I or someone else would have to set up a hidden service, start scanning, write scanning code and write proposal code; a bunch of work. I haven’t got any time to do it now. If I had nothing else to do I would do it because I think it is really interesting. But I think other people don’t seem to have decided that it is sufficiently interesting to put a lot of time and effort into building infrastructure for it. We’ll see. I still think it is way better than people think but I have been saying that for two years.

You are more confident in it now? It has been a few months since the mailing list post.

I didn’t really change my mind about it. I think it is one interesting feather to put in your cap. It is another technique to add to something like Joinmarket which is trying to be a multifaceted Bitcoin wallet as much as it can, focused on coinjoin. It would be nice to add it there. Maybe some other wallets too. In the long term it could conceivably be something added to mobile wallets but there doesn’t seem to be an appetite to do that right now.

\ No newline at end of file diff --git a/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults/index.html b/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults/index.html index 90ea2c4016..1ba8442276 100644 --- a/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults/index.html +++ b/london-bitcoin-devs/2020-05-19-socratic-seminar-vaults/index.html @@ -11,4 +11,4 @@

Socratic Seminar - Vaults and OP_CHECKTEMPLATEVERIFY

Date: May 19, 2020

Transcript By: Michael Folkson

Category: Meetup

Media: -https://www.youtube.com/watch?v=34jMGiCAmQM

Name: Socratic Seminar

Location: London BitDevs (online)

Pastebin of the resources discussed: https://pastebin.com/3Q8MSwky

Twitter announcement: https://twitter.com/kanzure/status/1262821838654255104?s=20

The conversation has been anonymized by default to protect the identities of the participants. Those who would prefer their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is a London BitDevs Socratic Seminar. We are live-streaming on YouTube so please be aware of that for the guys and girls on the call. We are using Jitsi which is open source, free and doesn’t collect your data so check out Jitsi if you are interested in doing similar conversations and Socratics in future. Today we are doing a Socratic Seminar. For those who haven’t previously attended a Socratic Seminar they originated at the BitDevs in New York. There are a number of people on the call who have previously attended them. The emphasis is on discussion, interaction and feel free to ask questions and move the discussion onto whatever topics you are interested in. This isn’t a formal presentation, certainly not by me and not even by the experts on the call. This was set up because we have Kevin Loaec and Antoine Poinsot presenting next week on Revault which is their vault design so that will be live-streamed as well. That will be more of a formal presentation structure and Q&A rather than today which is more of a discussion. The topic is covenants, CHECKTEMPLATEVERIFY. Jeremy Rubin has just joined the call which is fantastic. And also in the second half we will focus on vaults which is one of the use cases of CHECKTEMPLATEVERIFY. What we normally do is we start off by doing intros. A very short intro. You can raise your hand if you want to do an intro. Introduce yourself, how much you know about covenants and vaults, and we’ll go through the people on the call who do want to introduce themselves. You don’t have to if you don’t want to. If you are speaking and you are happy to have the video turned on make sure you turn the audio and the video on. It will be better for the video if people can see who is speaking. If you don’t want the video on obviously don’t turn the video on. If you don’t care switch the video on when you are speaking.

Bryan Bishop (BB): I am Bryan Bishop, Avanti CTO, a Bitcoin developer, I’ve worked on Bitcoin vaults. I had a prototype release recently. I am also working with a few others on the call here on two manuscripts related to both covenants and vaults. I also did an implementation of my vault prototype with Jeremy Rubin’s BIP 119, OP_CHECKTEMPLATEVERIFY proposal.

Spencer Hommel (SH): My name is Spencer. I am currently a Bitcoin developer at the Fidelity Center for Applied Technology’s blockchain incubator. I have been working on vaults since the summer of 2018. More specifically, working on a hardware solution using pre-signed transactions with deleted keys. I am also working with Bryan Bishop on those manuscripts as well.

Kevin Loaec (KL): I am Kevin Loaec. I’m probably the least technical on this call. I’ve been interested in covenants and vaults. Not too long ago, my interest was raised with the first proposal that Bryan sent to the bitcoin-dev mailing list about 7 months ago. I have been working on it since December of last year when a client of ours had a specific need where they wanted a multi-party vault architecture for their hedge fund. That’s when I started digging into this and exploring different types of architecture. I am working on this project which we call Revault which is a multiparty vault architecture that is a little bit different from the one the other guys here are working on. But it also has a lot of similarities so it is going to be a very interesting talk today.

Max Hillebrand (MH): I am Max. I’m mainly a user of Bitcoin technologies and I contribute some to open source projects too, mainly Wasabi wallet. I have always been interested in the different property rights definitions that Bitcoin Script can enable. Specifically multisignatures which was part of my Bachelor thesis that I wrote. I have been following vaults specifically on the mailing list, some transcripts by Bryan and the awesome utxos.org site that Jeremy put up. Just interested in the topic in general. Looking forward to the discussion, thanks for organizing it.

Jeremy Rubin (JR): Hey everyone, I’m Jeremy. Thanks for the intro before though. You know a little bit about me. I’ve been working on BIP 119 CHECKTEMPLATEVERIFY which is a new opcode for Bitcoin that is going to enable many new types of covenants and smart contracts. One of those use cases is vaults. I released some code that is linked on utxos.org. You can see there is a vault implementation that you can check out based on CHECKTEMPLATEVERIFY. I am currently working on implementing better tools for being able to use CHECKTEMPLATEVERIFY that will hopefully make it a lot easier to implement all kinds of vaults in the future.

Sam Abbassi (SA): My name is Sam. I am also working on vaults with Bryan and Spencer over at Fidelity. I probably have the least amount of experience with respect to vaults. This is part of me gaining more exposure but happy to be here.

openoms (O): I am openoms. I am working on mainly some new services in a full node script collection called the Raspiblitz. I am generally a Bitcoin and Lightning enthusiast. I am enthusiastic about privacy as well. I don’t know a lot about vaults but I am looking forward to hearing more and learning more.

Jacob Swambo (JS): Hello everyone. My name is Jacob. I am working with Bryan, Spencer and Bob on the vaults manuscript that was mentioned on the bitcoin-dev mailing list not that long ago. I am a PhD student at King’s College London and I have been working on this for about a year and a half. I am happy to be here talking about all this.

Adam Gibson (AG): I just wanted to mention I’m here. It is Adam Gibson here, waxwing on the internet. I don’t have any specific knowledge on this topic but I am very interested.

What are covenants?

MF: Basic questions. These are for the beginners and intermediates and then we move onto the discussion for the experts on the call. To begin with what is a covenant and what problem is a covenant trying to solve?

Bob McElrath (BM): I didn’t introduce myself. I’m Bob McElrath also working with Bryan, Spencer and Sam on a draft that will appear very soon, hopefully in a week or so you will be able to read it. I gave a talk on this topic last summer in Munich. It is a whole talk about covenants, going through the various mechanisms. A covenant is by definition a restriction on where a UTXO goes next. When I sign a UTXO I provide a signature which proves that I control it and it is mine. But I don’t have control over where it goes after that. A covenant by definition is some kind of restriction on the following transaction. With that you can create what is commonly called a vault. A vault basically says “The following transaction has to go to me one way or another.” I am going to separate my wallet into a couple of pieces, one of which is going to be more secure than the other. When I send to myself, between hot and cold or between an active wallet or something like that, this has to go to me. I am making a restriction on the transfer of the UTXO that says “If you get into my wallet you can’t directly steal this UTXO just by signing it” because the covenant enforces that it has to go to me next. From there I can send it on. I am introducing a couple of layers of complexity into my wallet to do that.

MF: It certainly does, a sophisticated answer. Any of the beginners, intermediates did you understand that answer? What was your initial understanding of covenants before this? Any questions for Bob?

AG: I think this has been an open question since the early days of these ideas. It is such an obscure name. It doesn’t immediately jump out at you what it is. I appreciate the explanation Bob, that is excellent.

BM: We can blame Emin Gun Sirer and his collaborators for that. They wrote the paper in 2016 and they named it covenants. It is all their fault.

JR: I know it is a fun game to blame Emin but the term covenants existed before that in a Bitcoin context. The historical reason is that covenants are something that you use when you transfer property. It restricts how it can be used. In the Bay Area where I live there is a dark history with covenants where they were used to prevent black people from owning property. “You can sell this house but not to a black person.” That was relatively common. When people talk about covenants they oftentimes have weird things in their deeds like “You can only ever use this property to house 25 artists.” You sell the property with the covenants and the person can’t ever remove these covenants from the property. There was some mention in the notes that people didn’t really like covenants early on. Covenants is inherently a loaded term. It was a term that was coined to cast some of this stuff in a negative light because some people don’t like covenants. Not in a Bitcoin or cryptocurrency context. In general people have a negative association with someone else controlling your own property. In a Bitcoin context, as Bob pointed out, I liked his description, it is about you controlling your own property. One of the metaphors that I like to use for this is right now each UTXO is a little bit like a treasure chest. You open it up and it is full of gold coins. Then you get to do whatever you want with the gold coins. Imagine one day you opened up your treasure chest and you found Jimi Hendrix’s guitar in it. Where do you store that? Can you take that and throw it in the back of your Subaru? No, this is a sacred thing, it needs to go in your guitar case. There is a restriction where you open up your treasure chest and it says “This is a guitar. You can only put this into another suitable guitar case.” That is what a covenant is doing. It is telling you which containers are safe for you to move your item into. It turns out for the most part we are talking about using coins. It would be “These are coins but they are made out of uranium so you have to put them in a lead box.” They need to go in a lead box, that is the next safe step. That is one of the metaphors that works for covenants. It is about you being able to control the safety and movement of your own coins.

MH: I know the term covenants from the incumbent banking system where if you have a loan contract, for example a bank to a company, the bank can make the requirement that if the company’s cashflow to equity ratio drops to a certain level then the debt has to be paid back. Or it has to be renegotiated. It is a limitation on the contract where the contract itself is terminated or changed when one of these things comes into play. Seeing it as a restriction also makes sense in a Bitcoin context. We have a Bitcoin contract that is “If you have the private keys to this address then you can spend it, but with the restriction that it has to go to this other address.”

MF: You dropped off there Max so we didn’t hear the second half of your answer. I think some of you have already seen this. I did create a Pastebin for some of the resources that we can talk through. That is on the Twitter and on the Meetup page. I got a bunch of these links from Jeremy’s interview on Chaincode Labs’ podcast which I thought was excellent. Jeremy talked about some of the history in terms of implementing covenants on Bitcoin. The first one is an early bitcointalk.org post on how covenants are a bad idea. Perhaps we can talk about why people thought covenants were a bad idea back in 2013 and what progress has been made since then that perhaps has changed their mind or perhaps they still think covenants are a bad idea.

Historical concern about Bitcoin covenants

BB: I’ll start off. As I recall that was a Greg Maxwell post. I have talked with him and without corrupting what his opinion actually is too much I think the major concern was mainly about recursive covenants. People using them and not really understanding how restrictive a recursive covenant really is. That was the main concern, not that covenants are actually awful. It unintentionally reads as “Do not use covenants under any circumstance” which is unfortunate but that was not the intention of the post.

JR: Greg has explicitly said in IRC somewhere something to the tune of “I don’t know why everybody thinks covenants are bad. Covenants are fine. I have never said anything about them being bad.” I said to him “Greg, everybody I talked to says that they think it is because you said that they are bad in this thread. If you don’t think they are bad you should make that clear.” You can’t make Greg write something. He has written that in IRC. He doesn’t have any hang ups or holdups about them. It is not even the recursion that is the problem, it is virality. He doesn’t want a covenant system where somebody else is doing something and then all of a sudden your coins get wrapped up into their covenant. It may be that recursion is a part of that but that is the major concern with the ones that Greg was looking at.

MF: There are two elements here. One is timing. Perhaps it was way too early to start thinking about implementing covenants back in 2013. Secondly, perhaps the view was that the covenant ideas back then were stupid or too complex, and it was just a case of battening down the hatches and making sure some crazy covenant ideas didn’t get into Bitcoin. Any thoughts on that or any of the previous conversation?

BM: Any time you do a covenant… Any agreement you make when you are sending funds with the receiver is a private agreement. You can make whatever terms you want. Greg’s post enumerates a particularly bad idea where one party can impose a restriction on another party against their will. I think most people would think that is a terrible idea. It is not that covenants themselves are a bad idea. If you agree to it and I agree to it, fine. I think for the most part covenants are most useful where it is not a two party arrangement, it is a one party arrangement. Once you get two parties involved everybody has to understand what is going on. By default it can’t be a regular wallet. The structure of the scripts has to change somehow. I have to know about those rules. As long as I agree to them I think that is completely fine.

JR: A hard disagree on that note. One of the major benefits of a covenant system comes into play with Lightning Network related stuff. It actually dramatically simplifies the protocol and increases the routability by improving the number of HTLCs that you can have and the smart contracts that can live underneath a Lightning channel feasibly with reasonable latency.

BM: I think we agree there. If you are using a Lightning wallet then you have agreed to those rules.

JR: I do agree that there is a huge amount of value when it is a single party system but multiparty things are actually really useful in covenants because the auditability of those protocols is just simpler. For a lot of the setup you end up writing half the amount of code.

MF: The next couple of links I put up were a good intro for beginners to this topic which is Aaron van Wirdum’s Bitcoin Magazine article on SECURETHEBAG. This was after Jeremy’s presentation at Scaling Bitcoin 2019. Then there’s this paper by Emin Gun Sirer, Moser and Eyal. Any thoughts on this paper? Anyone want to summarize the key findings from this paper?

The first paper on Bitcoin covenants

BM: I can give it a stab. This was the first paper in the space. They defined a new opcode that acts rather like a regular expression that says “I’m going to examine your redeem script and I am going to impose some restrictions which are essentially an arbitrary sort of regular expression on your script.” The second thing that they defined is recursive covenants, which is the thing Greg Maxwell doesn’t like, as well as a kind of protocol where if somebody manages to steal your funds you can yourself get into a game where you keep replacing each other’s transactions until the entire value of the UTXO goes to fees. They claim this is somehow beneficial because the thief can’t actually steal anything. That aspect of the paper I don’t like very much. I don’t think anybody wants to get into a game where they lose their funds anyway even if they prevent the attacker from gaining them and sending them to fees instead. Those are broadly the three things in that paper.

BB: I’ll disagree. I think it is valuable to have the lose everything to fees option because it comes down to the following. Would you rather fund an adversary or lose your money? Unfortunately in that contrived scenario those are the only options you have.

BM: That is not true. You can definitely have neither. You don’t have to get yourself into this game where you are paying fees.

AG: What exactly was the name of the new opcode for it? This was probably why it didn’t go anywhere.

JR: They called it OP_COV. There were a few problems with it. It wasn’t just the technical capability that it introduced. I don’t think the proposal was that secure. There were a few gotchas in it that would make it hard to deploy. With BIP 119 I tried to answer the integration questions of if you are really signing transactions what can go wrong? It turns out with a design that is not restrictive enough there are a lot of weird edge cases you can run into. That is why that proposal didn’t go anywhere.

BB: The other thing I remember is that the 2016 manuscript was published around the time that BIP 68 and BIP 112 happened, the relative timelocks. I think the paper itself said this is going to require a hard fork which strikes me as odd.

JR: I think they probably just didn’t know.

BM: It was published right before those BIPs. I had a post after that that used the deleted key thing and those opcodes because it was obvious to me that they had missed that. That paper does not talk about timelocked opcodes correctly.

MF: This is Jeremy’s presentation at the Stanford Blockchain Conference in 2017. This was the first presentation of yours that I saw on covenants, Jeremy. It had a bunch of different use cases like “The Naughty Banker” and your current thinking back then. So of all these use cases which ones are still of interest now and how has your thinking changed since that presentation? I enjoyed that presentation, I thought it was very informative.

Introducing the use cases of Bitcoin covenants

JR: Here are the slides. A lot of them are still useful. Congestion control is particularly of note. The example I gave was how to use congestion control for Lightning resolution where you want to lock in a resolution and you do the details later. There’s things like optical isolated contracts, there is some vaults stuff in here too. That stuff is obviously still interesting. In this presentation I define a bunch of different types of opcodes. Those could still be interesting. One of the things that I define here is a way of doing a Tapscript style thing at the transaction level. If you had transactions that you can mark as being required to be spent within the same block then you could have scripts that expand out over the series of transactions. In my opinion that is a slightly more interesting primitive to work with because then you can have scripts that expand out over a number of blocks but then they split how the funds are being distributed to different UTXOs. You can build out some different flows and primitives based on that expansion. I think those things could be interesting in the future. I don’t think there is anything that is irrelevant in this presentation at this point. It is like carving out the small bits that we know how to do safely in Bitcoin and making it work. There are a few that aren’t here that I would be excited to add. One thing that I have been thinking about as a next step for Bitcoin after CHECKTEMPLATEVERIFY or an equivalent gets merged is an opcode that allows you to check how much value is in an output as you are executing. A really simple use case you can imagine for this is you paste an address to somebody and if it is under 1 Bitcoin you have a single key because it is not that much. But if it is over a Bitcoin then you have multisig. You can use that as a safety mechanism in a number of different applications. I think that is an important thing going forward that wasn’t in this presentation. It is worth looking at if you are thinking about how to contract in the UTXO model, what sorts of things could be possible.

The congestion control use case

MH: I am still somewhat lacking intuition on why this is an improvement for congestion. How it can save fees in times where there is a high fee level. If someone could explain that a bit more succinctly that would be nice.

JR: The idea of congestion control is mostly that there’s a fundamental amount of traffic that has to happen but there is a peak demand. It is kind of like coronavirus, you want to flatten the curve. I was looking at all these diagrams of flattening the curve and I was like “Hey, this is what I’ve been working on for the last year.” Let’s say it is lunch hour and we have 10 megabytes of transaction data coming in every ten minutes but it is only for an hour. Over the rest of the day those transactions are going to be clearing. With the congestion control solution you can commit to all of them, confirm all of them and then only when they need to be redeemed do they have to pay fees. You have localized the confirmation window for all of them, confirmed all of them at one time and then you spread out the redemption window where somebody goes and gets an individual UTXO out. The reason why this ends up decreasing fees is that if you think about fees as a bidding market you are bidding for two different goods. You are bidding simultaneously for confirmation and you are bidding for redemption. That is an inefficient market because those are two separate quantities. By splitting out the quantities you bid one price for confirmation and that confirmation price can be shared among a number of actors. Then you bid a separate price for redemption. That has the effect of allowing you to have fewer people bidding in the confirmation market with CHECKTEMPLATEVERIFY and more people bidding in the redemption market. I think that is the shape of why it is going to be an improvement.
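
To make the confirmation/redemption split concrete, here is a rough back-of-the-envelope sketch in Python. All of the feerates and transaction sizes are assumed, illustrative numbers rather than anything quoted in the discussion; the point is only that the single commitment is the one thing paying peak-hour rates while redemptions can wait for cheaper blocks.

```python
# Rough, illustrative comparison of peak-hour fee cost: paying 100 users
# individually at peak versus committing to them with a single CTV output
# and letting redemptions happen later at a lower feerate.
# All sizes and feerates are assumptions for illustration, not measured values.

USERS = 100
PEAK_FEERATE = 80        # sat/vbyte during congestion (assumed)
OFFPEAK_FEERATE = 5      # sat/vbyte later in the day (assumed)

SIMPLE_TX_VSIZE = 140    # rough vsize of a 1-in/2-out payment (assumed)
COMMIT_TX_VSIZE = 110    # rough vsize of a 1-in/1-out commitment to a CTV tree (assumed)
REDEEM_TX_VSIZE = 150    # rough vsize of one redemption step per user (assumed)

# Everyone pays at peak, one transaction per user.
naive_cost = USERS * SIMPLE_TX_VSIZE * PEAK_FEERATE

# One small commitment pays peak rates; redemptions are deferred off-peak.
ctv_cost = COMMIT_TX_VSIZE * PEAK_FEERATE + USERS * REDEEM_TX_VSIZE * OFFPEAK_FEERATE

print(f"naive peak-hour cost  : {naive_cost} sats")
print(f"ctv commit + deferred : {ctv_cost} sats")
```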

MH: One follow up question. Let’s say I make a transaction that pays 100 users. I get that confirmed at a low fee. How does that work with the users redeeming their coins? Does it have to happen for all the 100 users at the same time or can 10 users do it fast and the other 90 do it slower?

JR: CHECKTEMPLATEVERIFY is a general purpose opcode. It doesn’t do one of these things specifically. The answer is: what do your users want? We could also talk about mining revenue because I think it is important, when we are talking about something that looks like it is reducing fees, to ask whether it is improving revenue. I think it does improve revenue but that is a separate conversation. What you would do is you would bundle up all your 100 users into a transaction. You would have a single output for all of them. Then you would create that. You would probably end up paying a very high fee on that transaction because it is representing confirmation for 100 users. A high fee on a transaction with one output is a lot lower than a low fee on a hundred transactions or a hundred outputs. You are still saving money as the user but you are maybe paying a higher fee rate. What you give to your users is, if they have an old wallet, it looks like an unconfirmed spend. It would be “Here are a couple of unconfirmed spends” and you can structure the spends as any data structure that you want that is a tree of some sort. A linked list is a tree. You could have something where it is one person, the next person, the next person; that is a little bit inefficient. It turns out that it is optimal for the users to do a tree of radix 4. You would have a tree that says “Pay out to these 4 groups” and each group of 4 pays out to 4 groups and each group of 4 pays out to 4 groups. Then the total amount of work that you have to do is log(n) to get a single redemption in transaction space and amortized over all the users it is only a constant amount of transaction overhead.
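
A toy sketch of the radix-4 tree being described, only modelling the shape of the tree rather than real transactions: each node would correspond to one CTV-committed transaction, and a single redemption only needs to publish the transactions along one root-to-leaf path, roughly log base 4 of the number of users.

```python
import math

# Toy sketch of the radix-4 payout tree described above: recursively split the
# payouts into groups of 4 until each leaf holds at most 4 payments. Each node
# corresponds to one transaction that would be committed to via a CTV hash.
# This only models the shape of the tree, not real transactions.

def build_tree(payouts, radix=4):
    if len(payouts) <= radix:
        return {"leaf": payouts}
    chunk = math.ceil(len(payouts) / radix)
    return {"children": [build_tree(payouts[i:i + chunk], radix)
                         for i in range(0, len(payouts), chunk)]}

def depth(node):
    if "leaf" in node:
        return 1
    return 1 + max(depth(c) for c in node["children"])

payouts = [f"user_{i}" for i in range(100)]
tree = build_tree(payouts)

# A single user redeeming only needs to publish the transactions along their
# path from the root, i.e. roughly log_4(n) of them.
print("tree depth (transactions on one redemption path):", depth(tree))
print("log_4(100) ~", math.log(100, 4))
```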

BB: One interesting point here is in the tree structure the way this is set up is that in certain scenarios some users are paying a little bit more than others.

JR: Yes. One of the issues is it a balanced tree or not? It turns out that we’re already talking logarithmic so it is already going to be pretty small. We are talking maybe plus or minus one on the depth in the tree. You can pay a little bit more but the other side of it is that users who want to redeem earlier subsidize users who redeem later because it is amortized over all the users. Let’s say I have a branch that ultimately yields a group of 4 and one of those people decides that they really want their coins. Them getting their coins subsidizes the creation of everybody else who is one of their neighbors along the path. There is a thing where naturally there is a priority queue which is what you want to have I think in Bitcoin. You want the fee market to be a priority queue where people who have higher priority, higher requirement of getting their transaction through end up paying more. What this changes is the redemption to a lazy process where you do it whenever demand is low enough to justify your use case. You are not worried about confirmation. The alternative is that these transactions sit unconfirmed in the mempool. I think unconfirmed funds are far worse. The real benefit of that, this goes back to why I think this is really good for multiparty situations, these payouts can be Lightning channels. You are wondering how am I going to get liquidity, it turns out that you can immediately start routing it in the Lightning Network. That’s one of the benefits of this design is that it allows you to bootstrap Lightning channels much more easily because you are not time sensitive on the creation of the channel as long as it is confirmed.

AG: Can we dial back a little bit? I think we have jumped a few steps ahead. I want to make sure I understand the most basic concept. I think Max was asking something similar. Do I understand that the most basic concept of congestion control here is with this covenant mechanism you are able to effectively treat unconfirmed transactions or chains of unconfirmed transactions as if they are settled so to speak? This distinction about confirmation and redemption you were saying is that the receiver can treat the money as having been received even though it is not in a Bitcoin block because there is a covenant. Is that right?

JR: That is exactly correct. So we have a diagram here. I compare normal transactions where you have some inputs and blue is the payments and pink is your change UTXOs. This is normal transactions on the left. If you go to normal batching then you have a number of outputs and a single change and it is more efficient. With congestion control payments what you do is have a single output and then you have a tree of possible redemption paths underneath. Here I show a little bit more advanced demo. Ignore the part on the right. Just imagine you go down with this radix 4 tree, you go down to Option B. You expand out and then you have all these different transactions. What this diagram is showing you is that the different leaves or nodes of this transaction graph can be expanded at different times and in different blocks. If you look at the gray boxes, look at the size of them. Normal transactions are the worst. It is the biggest gray box. Then batched transactions is the next smallest. Congestion controlled transactions are even smaller. Your real time block demand is really low. Then sometime later these other transactions can be played but they are guaranteed to go down that route. The optionality that I’m showing answers the question that Max had earlier which is how do they actually redeem. You could redeem on different types of tree. The one on the right is Option A, let’s redeem as a single step and pay out to everyone. That is an immediately useful one and is maybe a little bit easier to understand and integrate into existing wallet infrastructure. It is a single unconfirmed parent for this transaction. If you want optimal efficiency on a per user basis then you would do a tree expansion. In Option A it is less fair if you are the one person that wants to redeem their funds, you’ve got to pay for everyone. On Option B you only have to pay for log(n) of everyone else which you can kind of ignore.

MF: There is a question in the YouTube chat. How is this different to child-pays-for-parent?

JR: The difference between this and child-pays-for-parent (CPFP) is that CPFP is a non-consensus rule around deciding what transactions you should mine. This is a consensus rule around being able to prove a transaction creates another transaction. In this world you do end up wanting to use CPFP where you can attach your spending transaction with a higher fee to pay for the stuff up the chain. In this example you would spend from one of these outputs and then you would attach a high fee to it. Then that would subsidize the chain of unconfirmeds. It is related to CPFP in that way but it is a distinct concept in that these pending transactions are completely confirmed. There is no requirement to do CPFP in order to get confirmation of the parent. That is the difference.
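
As a rough illustration of that distinction, the sketch below models how a high-fee child lifts the effective feerate of an unconfirmed CTV tree node it spends from. Real package selection in Bitcoin Core is considerably more involved than this; the sizes and fees here are assumptions for illustration only.

```python
# Toy illustration of the CPFP point above: a miner evaluating a zero-fee
# (or low-fee) CTV tree node together with a high-fee child that spends it.
# The point is just that the child's fee lifts the effective feerate of its
# unconfirmed ancestors; actual mempool package logic is more complicated.

def package_feerate(txs):
    """txs: list of (fee_in_sats, vsize) for a child plus its unconfirmed ancestors."""
    total_fee = sum(fee for fee, _ in txs)
    total_vsize = sum(vsize for _, vsize in txs)
    return total_fee / total_vsize

tree_node = (0, 120)            # an intermediate CTV tree transaction paying no fee (assumed size)
redemption_child = (6000, 150)  # the user's spend attaching fees (assumed)

print("node alone       :", package_feerate([tree_node]), "sat/vB")
print("node + CPFP child:", round(package_feerate([tree_node, redemption_child]), 1), "sat/vB")
```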

BB: I’ll point out another difference that CPFP doesn’t require a soft fork. It doesn’t accomplish the same thing either.

JR: The other thing I would add if we are going to go tongue in cheek is I am going to probably end up removing CPFP or having to completely rearchitect it. The mempool is a big project right now. There is a lot of stuff that doesn’t work how people think it works. Transaction pinning is one of these issues that comes up. It is a result of our CPFP policy. There is a very complicated relationship between a lot of these fixes, features and problems we end up having.

MH: Can we still do CPFP for that commitment CTV transaction?

JR: Yes. You just spend from the child and then it is CPFP. That’s where the mempool issues come in. CPFP doesn’t actually work. This is the problem that people are running into with Lightning. People have a model of what CPFP means and the model that they have is perfect economic rationality. It turns out that perfect economic rationality is an NP hard problem. We are never going to have a perfectly rational mempool. We are always going to be rejecting things that look good. It just turns out that the current CPFP policy we have is really deficient for most use cases. CPFP already only works in a handful of cases. It doesn’t work for the Lightning Network. Even with the recent carve-out it still doesn’t really work properly.

SH: Is there any consideration to how exactly you structure the tree with radix 4? Is there a certain algorithm or protocol to place certain outputs in certain positions of the tree or is it left to random or open to whatever implementation?

JR: I think it is open to implementation. The opcode is generic and you can do whatever you want. That said there are some really compelling ones that I have thought of that I think would be good. One would be if you had priority information on how likely people are going to be in the same priority group. You can either choose to have a neutral priority arrangement where you try to pair high priority with low priority or you can do something which is a fair arrangement where high priority is with other high priority so people are more likely to share fees. There are also fun layouts you can do where you estimate the probability of each one being redeemed quickly and then you can Huffman encode the tree based on that. The other one I really like, and this goes into the Lightning side of things which is a bit more advanced, is you can order things by the probability of people being able to cooperate. If you had some notion of who knows other people then you can do a recursive multiparty Lightning channel tree and then if you group people by the probability that they are able to work together in a group then you make an optimal updatable tree state. That one I am really excited about as a payment pool option. The last one would be if you are making payments and they might be to the same service. You can make a payment tree where keys that you suspect are owned by the same wallet exist in the same sub-branches. There is an opportunity for cutting out some of the redemption transactions by redeeming at that higher order node. There are a lot of options.
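
The Huffman-style layout can be sketched roughly as follows, assuming made-up redemption probabilities and a simple binary merge rather than radix 4; it is only meant to show the shape of the idea (payouts likely to redeem soon end up closer to the root, so they publish fewer transactions), not Jeremy's actual compiler.

```python
import heapq

# Toy sketch of the Huffman-style layout mentioned above: merge the least
# likely-to-redeem-soon payouts first so that high-probability payouts end up
# nearer the root of the tree. Probabilities and the binary merge are
# illustrative simplifications.

def huffman_layout(payouts):
    """payouts: list of (probability_of_early_redemption, label)."""
    heap = [(p, i, label) for i, (p, label) in enumerate(payouts)]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, counter, (left, right)))
        counter += 1
    return heap[0][2]

def depths(tree, d=0, out=None):
    out = {} if out is None else out
    if isinstance(tree, tuple):
        depths(tree[0], d + 1, out)
        depths(tree[1], d + 1, out)
    else:
        out[tree] = d
    return out

payouts = [(0.6, "alice"), (0.25, "bob"), (0.1, "carol"), (0.05, "dave")]
print(depths(huffman_layout(payouts)))  # alice should sit at the shallowest depth
```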

SH: My next question was about the probabilistic payouts in Lightning so thank you for answering that.

BM: Could you talk a bit more about CPFP because I think one way to describe this is instead of the sender paying fees the receiver can pull and therefore the receiver has to pay fees which means the receiver is going to have to use CPFP to do that. Could you talk a bit more about the interplay between those two?

JR: CPFP is not the only way to pay fees. There’s a litany of ways to pay fees in one of these systems. CPFP is I would say the best way because it is a pure API where you want to declare which transactions you want to do and then the paying of fees for those should be abstracted away from the actual execution. CPFP is good to express these arbitrary relationships. It actually turns out that there are better APIs. That is one of the other soft forks I am looking at; maybe we could do something that gives us a much more robust fee subsidizing methodology.

BM: If the receiver wants to pull a payment out of the tree and get it confirmed for whatever reason he may have to pay fees. The transaction at the end of the tree could pay fees. It may not be enough. The receiver may have to add fees to that and they may desire to which means they have to use some kind of replacement. Due to the structure of CTV replace-by-fee is not going to be viable.

BB: I don’t think you would replace the fees at the end of it, I guess you could. I was expecting that you would make a child transaction that pays the fees in addition to whatever you pull out of the tree.

JR: Replace-by-fee (RBF) works fine with CHECKTEMPLATEVERIFY (CTV). The only issue that comes up is if you want CHECKTEMPLATEVERIFY to be inherently Lightning compatible then RBF is not a Lightning compatible idea. In general you have to worry about the state of HTLCs in subcontracts so you can’t arbitrarily RBF because you may be bound to a specific UTXO. If you have things like ANYPREVOUT then that wouldn’t necessarily be true. You would be able to get around some of those constraints. The reason why I prefer CPFP is that it doesn’t disturb the txids in your parents and your own branch. I think txid stability at least for the current Lightning Network designs that we have is an important property. But you can use RBF, it just changes your txid. With CTV there are two forms of it. There is an unbounded form where you allow any output to be added that adds more money. There is also a bounded form that is possible through a quirk. I like that you can do it. Using a P2SH Segwit address you can specify which key is allowed to add a dynamic amount of fee. If you pick a key that is known to be of the parties in that subtree then it would only be through the coordination of those entities that the txid could be changed. If you are trying to do a Lightning thing then the RBF requires coordination of all the subowners; it can work as well in a protected form that protects your state of HTLCs. I think that that is a complicated thing to build out and CPFP is conceptually a lot simpler. RBF does not work well for a lot of services. This was one of the debates about RBF in the first place. People didn’t like it because people wanted to issue one txid and they wanted to be an exchange and say “Here is your txid” and then not worry about having to reissue the txid because it looks like a double spend and wallets get upset. It is not awful that the code supports it but it is an awful thing to use in practice because it has bad externalities. It is just more robust, that is the reason why I’ve been advocating CPFP.

The design of CHECKTEMPLATEVERIFY (CTV)

MF: We’ve jumped straight into use cases. I’m wary of that. Jeremy, could you take a step back and explain what CTV is in comparison to some of the other covenant designs?

JR: With the presentation I gave in 2017, at that time I was like “Covenants are really cool. Let me think about the whole covenant space.” The Emin Gun Sirer paper only covers one type of covenant which is how an output has to be spent but it doesn’t cover covenants around which inputs it has to be spent with, there are a lot of things. I thought about it and I tried to get people excited, people got excited. At the implementation point people were like “This stuff is scary to do. We are not really sure what is possible to do safely in Bitcoin. We have all these properties we want to preserve around how transactions behave in re-orgs.” I was like “Let’s do a long study of how this stuff should work.” I was doing that and working on other things, figuring out what made sense. A lot of the proposals for covenants have flaws in either how much computation they are expecting a validator to do or what abstractions and boundaries they violate in terms of transaction validation context. Observing things that you are not supposed to observe. As time went by I started building vaults in 2016. I was talking to some people about building them. I had a design that ended up being somewhat similar to what Revault looks like. I was using lots of niche features like special sighash flags for making some of this stuff work. At the end of the day it really was not working that well. I went back to the drawing board, looking at how you can do big ECDSA multisignatures to emulate having big pre-signed chains. I tried to get people excited about this at one of the Core Dev meetings. People said “This stuff is not what we are interested in.” No one would review it. I stepped back and said “I am trying to accomplish this specific goal. What is the most conservative minimal opcode I could introduce to do that without having any major security impact change to Bitcoin?” I came up with CTV, it had a couple of precursors. The design is basically the same. It was actually more conservative originally, I have made it more flexible in this iteration. I presented that to the San Francisco BitDevs. The usual suspects were there. The response was very positive. People were like “This seems like a covenant proposal that does not have that much complexity we were expecting from validation. And it does not have that much potential for a negative recursive or viral use case that would add a large problem.” It used to be called SECURETHEBAG, it also used to be called CHECKOUTPUTSHASHVERIFY. It was a back and forth. I originally called it CHECKOUTPUTSHASHVERIFY because I was like “Let me call it the most boring thing that is exactly what it does” and then everybody at the meetup was like “That name sucks. You have got to name it something more fun.” I renamed it SECURETHEBAG and the other half of people were like “Bitcoin is serious business, no funny names.” I renamed it to CHECKTEMPLATEVERIFY which is conceptually there but is not as boring as CHECKOUTPUTSHASHVERIFY. It really gets to the heart of what the idea is, what type of covenant you are writing. Essentially all you are doing is saying “Here is a specific transaction template.” A template is everything except for the specific outpoints or coins that you are spending. That covers sequences, version, locktime, scriptSigs and outputs of course and whatever other fields that I may have missed that are in the txid commitment. People are writing in the chat that they want to bring back SECURETHEBAG.
If you want to bring it back I have no business with that, I can’t be responsible. It just checks that the hash of the transaction matches those details. That’s basically it. That’s why it’s a template. Here is the specific transaction that I want to do. If you want to do more than one transaction, what if I want Option A or Option B, simple. Wrap it in an IF ELSE. If you pass in one then do Transaction 1, if you pass in zero do Transaction 2.
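
A rough sketch of what such a template hash could look like, loosely following the field list in BIP 119 (version, locktime, input count, a hash of the sequences, output count, a hash of the outputs, and the input index, but not the outpoints being spent). It deliberately omits the scriptSigs hash that BIP 119 adds when any scriptSig is non-empty and it simplifies the output serialization, so treat it as an illustration rather than a reference implementation.

```python
import hashlib
import struct

# Illustrative sketch of a CTV-style "template hash": it commits to version,
# locktime, how many inputs there are and their sequences, and the exact
# outputs, but NOT to the outpoints being spent. See BIP 119 for the exact
# serialization; the output encoding and omitted scriptSigs hash here are
# simplifications.

def sha256(b):
    return hashlib.sha256(b).digest()

def u32(n):
    return struct.pack("<I", n)

def template_hash(version, locktime, sequences, outputs, input_index):
    """outputs: list of (amount_in_sats, scriptPubKey_bytes)."""
    sequences_hash = sha256(b"".join(u32(s) for s in sequences))
    outputs_hash = sha256(b"".join(
        struct.pack("<q", amount) + bytes([len(spk)]) + spk
        for amount, spk in outputs))
    preimage = (u32(version) + u32(locktime)
                + u32(len(sequences)) + sequences_hash
                + u32(len(outputs)) + outputs_hash
                + u32(input_index))
    return sha256(preimage)

# Two alternative payout templates ("Option A" / "Option B") behind an IF/ELSE.
# bytes(22) is a placeholder scriptPubKey, not a real address.
opt_a = template_hash(2, 0, [0xFFFFFFFF], [(50_000, bytes(22))], 0)
opt_b = template_hash(2, 0, [0xFFFFFFFF], [(25_000, bytes(22)), (25_000, bytes(22))], 0)
print(opt_a.hex())
print(opt_b.hex())
```

The Option A or Option B script Jeremy describes would then be roughly of the shape `OP_IF <opt_a_hash> OP_CHECKTEMPLATEVERIFY OP_ELSE <opt_b_hash> OP_CHECKTEMPLATEVERIFY OP_ENDIF`, with the spender passing in one or zero to pick the branch.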

BB: When I was working on my Bitcoin vaults prototype and I was doing the CHECKTEMPLATEVERIFY implementation version I was originally doing secure key deletion and then I was like “I should try BIP 119.” I asked Jeremy, this IF ELSE thing sucks if you have a lot of branching. Jeremy suggested a very simple script that was significantly more concise. That was interesting.

JR: I have become a bit of a Script virtuoso. There are a lot of funny script paradigms that you can do with this stuff to make it really easy to implement. The IF ELSE thing always bothered me. Do you do a big chain of IF ELSEs or do you do the balanced tree branch conditionals and pass that in? It turns out there is a script that Bryan is referencing where you just have to pass in the number of the branch that you want to take. It is that simple. Bryan, maybe I will send it to you for review. I posted on StackExchange somewhere a script which emulates a switch statement where you pass in a number and it takes whatever branch of code you want to execute underneath. It is a little bit more verbose but it is very easy for a compiler writer to target.

AG: You said CHECKTEMPLATEVERIFY is essentially looking at what the txid encompasses, in other words the template transaction. But then you said it includes the scriptSig and it doesn’t include the outpoints. Surely it is the other way round?

JR: One critique that comes up, that sometimes people say, is that I have designed CTV for a very specific use case. There is a more general thing out there that maybe could be better. That is a little bit true. The very specific use case that I have in mind is where you have a single input. There is a reason for that. That is why I was talking about the malleability before. If you have a single input there is no malleability you can have with the transaction outpoint if you know one parent’s outpoint. You know that one parent’s outpoint and then you can compile down the tree. You can fill in all the details as they go. It is all deterministic. It is not specifically designed only for that use case but it is designed so that use case works really well. When you look at the scriptSigs that is a little bit weird. It basically means that you mostly cannot use bare script for CTV because you are committing to signatures there if you have signatures. If you have a bare CTV where it is just a CTV you can use a bare script because you don’t put anything in your scriptSig. As soon as you have signatures and other things you end up having a hash cycle. The way you end up getting around that is you use a SegWit address. In a SegWit address the witness data is not committed to in the txid so your signatures and stuff are all safe. Unless it is P2SH and then you commit to the program. You can use the SegWit P2SH as a cool hack where you can commit to which other key has to be spending. That’s the reason why you are committing to the scriptSigs but not the outpoints. The scriptSigs affect the txid but given a known chain of CHECKTEMPLATEVERIFYs the outpoint does not affect the txids given a single known parent outpoint.

I’ll give you a concrete example. One of the big benefits of CTV is you have all these non-interactive protocols where I define “here’s an address” and then if enough coins move into this address then I have started the Lightning channel without having to do any back and forth with my counterparty. I still need to know, in order to update that channel state, the txid of the channel that eventually gets created. If I spend to that address and it has a single input then I know who spent to it and I know the outpoint. I can fill in all of the txids below. Those txids won’t change. Any terminal state that I am updating with a HTLC is guaranteed to be stable. If I had malleability of the txid, either by having RBF or by having multiple inputs or not committing to the set of data I commit to, then you would run into the issue that I am mentioning where things can get disrupted. It is a little bit abstract but if you read the BIP there is a lot of language explaining why it is set up that way.
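
A toy model of that "fill in all of the txids below" step, using a stand-in txid function over a JSON encoding rather than Bitcoin's real transaction serialization: once the single funding outpoint is known, every transaction in the committed tree is fully determined, so every downstream txid can be computed in advance.

```python
import hashlib
import json

# Toy model of "compiling down the tree": once the single funding outpoint is
# known, every transaction in the CTV tree is fully determined, so every txid
# (and therefore every child's prevout) can be filled in ahead of time.
# The txid here is a stand-in hash over a JSON form, not Bitcoin's real
# transaction serialization.

def toy_txid(tx):
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def instantiate(template, parent_outpoint):
    """template: {"outputs": [...], "children": [child_template, ...]}"""
    tx = {"input": parent_outpoint, "outputs": template["outputs"]}
    txid = toy_txid(tx)
    children = [instantiate(child, f"{txid}:{i}")
                for i, child in enumerate(template.get("children", []))]
    return {"txid": txid, "tx": tx, "children": children}

tree_template = {
    "outputs": ["branch_0", "branch_1"],
    "children": [
        {"outputs": ["alice", "bob"]},
        {"outputs": ["carol", "dave"]},
    ],
}

# The moment the funding outpoint is known, all downstream txids are known too.
plan = instantiate(tree_template, "fundingtxid:0")
print(plan["txid"], [c["txid"] for c in plan["children"]])
```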

SH: I think you touched on this during your CTV workshop back in February. Can you elaborate how if at all Tapscript affects some of the scripts that you and Bryan mentioned just a few minutes ago or CTV scripts in general?

JR: Tapscript makes a lot of this stuff much easier. In Tapscript you would never use an OP_IF. There are some use cases because you have a combinatorial blowup in script complexity. You would maybe use it for those purposes. You wouldn’t need to use it in most use cases. Tapscript makes a lot of these things easier to do. You could have an opcode which is “This is an intermediate output and it has to be spent by the end of this block or this transaction can’t be included.” This would give you the same functionality as CTV. It is about being able to have some branch that has to execute and you don’t need to pass in all these bytes to signify which branch you want to execute. It is painful to do that.

BM: Can you elaborate on some of the arguments and counterarguments for or against the implementation of CTV? In particular there is a balance between making a super restrictive opcode; you started with something more restrictive and then you moved to something less restrictive. One of the things that I have been fooling around with is the new Simplicity language which, if we got that soft forked into Bitcoin, has bare access to essentially all of the transaction data. You could compose anything you wanted as far as a covenant goes. It is perhaps the polar opposite in terms of flexibility. I have been thinking about implementing CTV just for the fun of it in Simplicity to understand how it works. Can you elaborate on the spectrum here, what is too restrictive, what is not restrictive enough and why?

JR: Simplicity is really cool first off. I don’t think it does what you think it does. In the sense that you can write a valid contract in Simplicity for whatever covenant you want but it is not necessarily executable onchain. As you write more complicated scripts in Simplicity the runtime goes up and you have some certain runtime limits or fee limits on how much work a transaction can require. Unless you get a soft fork for the specific jet that you want to add you can’t do it. The way I think about Simplicity is: what if we had the optimal language for our sighash flags? What would that look like? Simplicity lets you define whatever you want and then you can easily soft fork in compatibility: if you need to add something, old clients should be able to understand the new specification, and Simplicity lets you do that. Simplicity lets you express these things, it doesn’t necessarily let you make transactions based on them. One point that I would also make about the compactness, this is something I have spoken to Bram Cohen about and you can ask him for his actual opinion if I misstate it, even if you have a really sophisticated covenant system, general covenants are runtime compiled, where you are interpreting live in the script. CTV is ahead of time compiled. You only have to put onchain the data for the branches that you are actually executing. You could write that in Simplicity as well. I think what you would end up doing is implementing CTV in Simplicity. I don’t think that right now, given the complexity of Simplicity as a long term upgrade, we should ignore doing something that works today for that type of use case. It is basically just saying, if you want to map it, “We are doing a jet today for this CTV type script” and that will be available in Simplicity one day. Having this is both good for privacy in that you don’t reveal your whole contract but it is also good in terms of compactness in that you only reveal the parts of your contract that need to execute. There are a lot of benefits rather than having the complete program expressed in Simplicity, at least as far as I can tell.

On the question of why have something restrictive versus something general. It is really easy to audit what happens with CTV. There are a few things that you can do, a few different code paths. It is a hundred lines of code to add it to Core. It is pretty easy. Within an individual transaction context there is no major validation overhead. It is just simple to get going. It makes it easy to write tools around it. Writing tools around a Simplicity script is probably going to be relatively complicated because you are dealing with arbitrary binaries. You are probably going to be using a few well tested primitives in that use case. With CTV it is a basic primitive. The tooling ends up being pretty easy to implement as well. I think Bryan can speak to that. With respect to it originally starting more restrictive, the restrictions I had originally were basically around whether, if you added other features to Bitcoin, CTV would allow you to do more complicated scripts. I removed those features. People said “We want these things to be enabled.” I didn’t want CTV to occupy the space such that we added CTV and now we can’t add this other thing that we want without enabling these very complicated contracts. I said “Let me make this as restrictive as possible.” People said “No, if we add those things the chances are that we do really want these more complicated contracts.” This is like OP_CAT for example. I said “Ok sure. I will remove these restrictions, make it a little bit more flexible.” Now if you were to get OP_CAT or OP_SHA256STREAM in Core then you would actually be able to start doing much more sophisticated CTV scripts. This gets to a separate question that I will pose in a second. One thing you can do for example is write a contract that says “This template must pay out to all of these outputs and any output of your choosing.” This can be useful if you want to add an additional output. You can’t remove any outputs that are already specified but you could add another output. It gives you some more flexibility if you had OP_CAT. But because we don’t have it you can’t really do that today. That gets to the point of why not just do ANYPREVOUT which also gives you an analog for CTV. There would be no upgrade path for ANYPREVOUT short of Simplicity that would allow ANYPREVOUT to ever gain higher order templating facilities. CTV has a nice upgrade path for more flexibility in the future if we want it.

nothingmuch: What about recursion?

JR: So basically all the recursion happens at compile time. You can recurse as much as you want. This is sort of under wraps right now but I am happy to describe it. I have been building a compiler for CTV. I hope to release it sometime soon. The compiler ends up being Turing complete in the sense that you can compile any contract you want that expresses itself in Bitcoin transactions. But the compiler produces a finite list of Bitcoin transactions at the end of the day. There is no recursion within those. Those are just a fixed set of transactions that can be produced. If you want any recursion, in principle you can do that at compile time but not at the actual runtime. I don’t know what “bounded input size” means but I think that is a sufficient answer. We can follow up offline about “bounded input size.”

MF: There are a couple of things I would like to cover before we transition to vaults. One is in that 2017 presentation you talked about some of the grave concerns. Were you able to address all these concerns? Fungibility, privacy, combinatorial explosion etc

JR: In terms of computational explosion I think we are completely fine. Like I mentioned compile time can be Turing complete but that is equivalent to saying “You on your own computer can run any software you want and emit whatever list of transactions you want.” At runtime it has to be a finite set of transactions. There is no infiniteness about it. Then in terms of fungibility and privacy, I think it is relatively ok. If you want privacy there are ways of getting it in a different trust model. For example, if you want privacy and you are willing to have a multisig signing server then you can use Taproot. You get a trust model where the signing server could steal your funds if you had all the parties working together but they can’t go offline and steal your funds because you have an alternative redemption path. In terms of fungibility, the issue is less around whether or not people can tag your coins because that is the privacy issue. The fungibility issue is whether your coins can be spent with other coins. Because this is a program that is guaranteed to terminate and it has to terminate in a coin that is unencumbered by any contract those coins can be spent with any other coin. There is no ongoing recursive segregation of coins. The fungibility I think is addressed. For privacy what I would say is that I think that having onchain contracts, these are really good onchain contracts in terms of you only show the part you are executing, not the whole program. You don’t learn other branches of the program that might have been there. But you are seeing that you are executing a CTV program so there is maybe a little bit of privacy harm there. The way that I like to think of this and why this is a huge win for privacy, is that this is going to enable a lot better Layer 2 protocols and things like payjoin and mixers. It is going to make a lot of those things more efficient. Our ability to add better privacy tools to Bitcoin is going to improve because we’re able to bootstrap these protocols more efficiently. It is going to be a big win for privacy overall. There is some new information revealed, I wouldn’t say there is nothing new revealed.

Other use cases of CTV

MF: Let’s go on to use cases. We’ve already discussed congestion control. Perhaps Jeremy you could put up the utxos.org site and go to the use cases tab. So one of them is congestion control, one of them is vaults. You have a bunch of other use cases there as well. Before we move on specifically to vaults perhaps you could talk about some of those different use cases and which ones are promising and which ones you are focusing on?

JR: Like I mentioned I’ve been working on a compiler. The use cases that I now have are probably triple what is here. There is a lot of stuff you can do. Every protocol I have looked at, things like discreet log contracts, becomes simpler to implement in this framework. The use cases are pretty dramatic. I am really excited about non-interactive channels. I think that is going to be huge. It gets rid of 25-50 percent of the codebase for implementing a Lightning channel, because a lot of it is the initial handshaking, and it makes it possible to do certain things that are hard to do right now. The other stuff is all related to things like scaling and trustless coordination free mining pools where you can pay people out. I sent Bob at some point some graphs around this. You can set up a mining pool where every block pays out to every single miner that participated in the mining pool over the last thousand blocks. Then you can do this on a running basis. You can have something where there is no central operator. You only get to participate if you provably participated in paying out to the people as specified over the last 1000 block run. Then you can use the non-interactive channels to balance out so that the actual number of redemptions per miner ends up being 1 for every given window that they exist in. You could minimize the amount of onchain load while being completely trustless for the miners in receiving those redemptions. There is a lot of stuff that is really exciting for making Bitcoin work as a really good base layer for Layer 2. That I think is going to be the major other use case. Another thing I am excited about with vaults is that vaults exist not just as something for an institution but they are really important for people who are thinking about their last will and testament, inheritance schemes. This is where the non-interactivity becomes really important. You can set up an auditable vault system that pays out a trust fund to all your inheritors without interaction and without having to a priori inform them of what the layout is. It can be proved to an auditor which is important for tax considerations. Anytime you are like “I gave 10 million dollars of Bitcoin to my heirs”, you have to prove when they got access to those funds. That is difficult to do in the current regime. Using CTV you can actually prove that there is only one time path to redeem those funds. You can set up things where there are opportunities to reclaim your money if you were ever to come back from the dead. If you were lost on a desert island you could come back and there would still be funds remaining in the timed payouts. I am really excited with all the new types of things people are going to be able to do. Vaults I think are a really important use case. Vaults are important not just for individual businesses where you are like “How are we securing our hot wallet stuff?” I think vaults are most impactful for end users where you don’t have the resources to employ people to be managing this for you. You want to set something up where, let’s say, you’ve got an offline wallet that you can send money to and then funds automatically come back online to your phone. But if you ever lose your phone you can stop the flow of funds. I think that is really exciting for CTV in particular, the ability to send funds to a vault address and for that vault address to automatically move funds to your hot wallet without requiring any signatures or anything. The management overhead for a user is very low.
Your cold wallets can just be keys that are only sent to in the event of a disaster. Let’s say they are in 7 different bank vaults around the world. You have your vault that you send to and then you don’t have any requirement to actually have those recovery keys unless you have to recover. That is the big difference with CTV and vaults: you remove keys from the hot path completely. There is no need for signing, there is just a need to send the funds to the correct place. This vault diagram is not accurate by the way. This is a type of vault. The ones that I implemented that are in the repo are more similar I think to the form that Bryan put out.

MF: I am assuming you are going to have to focus on one or two use cases. To get consensus on this you need to convince people that there is at least one real use case that they are going to get value out of. The flip side is making sure that it is not adding anything to the protocol that we don’t want to add. The upsides and downsides.

JR: It has been really difficult to be a singular advocate for this because you have to make a lot of conflicting arguments that ultimately work together. If you just told one side people would say “How do they work together?” An example of this is Bob gives me a bit of grief: “If you had to design an opcode that was specifically the best thing for vaults, would it be CTV?” My opinion is yes. The next question is “It does all this other stuff too. Is that really accurate?” The people on the other side of the fence say “CTV, you really only have a single use case that you care about. Can you show that you have hundreds of use cases because we want to have things that are really flexible and general?” I am like “Yes it is very general.” What I am hoping to show with the language I am building is that it is really flexible and it is really good for these use cases. Hopefully that will be available relatively soon. The other argument that is difficult with this is talking about fees with scaling. I am telling everybody this is going to dramatically reduce fees for users but it is also going to increase mining revenue. How can they both be true? You are making better settlement layer use of Bitcoin so the transactions happening are going to be higher fee and you are going to have more users. It is called Jevons paradox if anyone is curious. As the system becomes more efficient usage goes up. You don’t end up saving.

MH: To build on what you just said Jeremy, to combine these different use cases. Could someone speak a bit more about having these batched withdrawals where you have the CTV to commit to the withdrawal transaction for users that then directly opens non-interactive channels? Would that work?

JR: That works out of the box. This is why I am very adamant that replace-by-fee is bad: what I really want to see is a world where I go to an exchange with an address and they have no idea what that address is for, they just pay to it. They pay to it in one of these trees that has one txid or a known set of possible txids for the eventual payout. That lets me immediately start using a channel. What is nice about this is the integration between those two components is zero. I don’t need to tell the exchange that I am opening a Lightning channel. I just tell them “This is my address, pay this much Bitcoin to it.” There is no co-operation. You can imagine a world where you go to Coinbase and you give them a non-interactive channel address. It creates a channel for you. You give them a vault address, it creates a vault for you. You give them an annuity, it gives you an annuity. You can set it up so that there is zero…. If you paste in the address and you send the amount of funds you get the right outcome. I think there is definitely some tooling to support. I mentioned earlier having an opcode that lets you check how much money was sent to an address would be really nice. That’s an example that would make this integration a little bit easier in case the exchange sends the wrong amount of money. Most exchanges I know send exact amounts. Some don’t but I think that is a relatively easy upgrade. It also could be a new address type that specifies how much money is supposed to go there so the smart contracting side integrates really easily. Other than that they don’t need to know what your underlying contract is. I think it opens up a world in Bitcoin that works a lot more seamlessly. This is another big scaling benefit. Right now if you want to open a Lightning channel and you have funds on Coinbase you are doing at least one or two intermediate transactions to get the funds into your Lightning wallet and opening a channel with somebody. In this case you get rid of all those intermediate transactions. If you are talking about how Bitcoin is going to scale with Lightning channels, that is without having to convince exchanges to adopt a lot of new infrastructure for opening channels for users.

BM: This is basically one of the major benefits of CTV over deleted keys. A year or so ago I started making a prototype by essentially making a pre-signed transaction and deleting a key which is a mechanism to do a covenant but one of the major problems with it is I have to send from my wallet. I can’t give somebody an address which sends directly to a covenant. As Jeremy has described with CTV you can because you can put the script right there. It reduces the number of total transactions because as he mentioned if you want to open a Lightning channel first you have to send it to your Lightning wallet, then you have to open the Lightning channel. It is at least two transactions. The CTV route is more efficient and perhaps more interesting in that someone can send directly to your vault. You cannot do that generally with a deleted key type of covenant.

JR: What I would add is that for the vault use case it is less a scaling benefit than a big security benefit for the user. If you did have an exchange that you set up to understand vault protocols you could say “Only allow me to withdraw to vault contracts.” It would have to receive the vault description. You don’t have to have this intermediate wallet that you move funds onto that maybe gets hacked with all the money. I think it adds a lot of user security having that story.

KL: I didn’t really catch up on the San Francisco workshop so sorry if it is a question that has been asked a lot of times. How do you compare or differentiate against SIGHASH_NOINPUT in terms of vaults specifically?

JR: With SIGHASH_NOINPUT you can perfectly emulate CTV. It is not the obvious way of using SIGHASH_NOINPUT but it is one way you can use it. It emulates something that is very similar. There are a few drawbacks to that methodology. The first drawback is that it is less efficient to validate and your fees are going to be more expensive. The other drawback is that it is a little bit harder to compile which is annoying if you are talking about making contracts where you have log(n) possible resolutions but you have a very wide number of different possible cases. Your compiler is going to be way slower. This imposes a limitation if you are using these contracts inside of Lightning channels on how many resolutions you can have. It makes them less useful in a Layer 2 context, negligibly so I guess. Signatures are 100,000 times slower than hashes. It is a lot slower if you are doing a signature based one. You are adding more functionality. SIGHASH_NOINPUT / ANYPREVOUT is less likely to get into Bitcoin. This is what I talked about when I said you have to preserve these really critical invariants in Bitcoin. It is pretty easy to show CTV doesn’t break these but with the broader set of functionalities that you have around SIGHASH_NOINPUT you do have issues of burning keys permanently because you signed with them. We have all these design constraints around SIGHASH_NOINPUT that have come out around tagging keys and having different specifiers in order to prevent these weird use cases. CTV doesn’t really have the same issues because it is using a hash that is only used for CTV; it is not using keys that are used for general purposes, making keys into some sort of toxic waste. I think that is one of the other benefits in terms of security. There are a few other reasons why you would prefer CTV. Future flexibility, if you add OP_CAT later you don’t get new features with SIGHASH_NOINPUT I think. With CTV you get a bunch of new types of custom template contracts that you can write. It has a better upgrading path in the future as well. With CTV the hashes are versioned so if you add a new version to the hashes you can add a new sighash flag field basically. So there is more flexibility down the road than with SIGHASH_NOINPUT functionality. Strictly speaking I would be very happy if SIGHASH_NOINPUT, ANYPREVOUT or ANYSCRIPT would get merged because that would let me do it today but I think it is less likely.

Vault designs

MF: We’ll move onto vaults. Obviously for those who don’t know, some vaults need CTV, other vault designs don’t. I think Kevin (Loaec) who will speak about his design next week has got around using CTV. He did say in the interview with Aaron van Wirdum that ideally he would have used CTV if it was available. In terms of resources that we have on this Pastebin, one of the early ones is a post from Bob McElrath on reimagining cold storage with timelocks. Bob, do you want to talk through this post?

BM: I can give a brief description. This was published shortly after Gün Sirer et al published their paper on covenants. It was around the time that the timelocks came out. At the time, before Jeremy had the idea of doing CTV, there was no covenant mechanism. There have been about five different covenant mechanisms that have been proposed, none of which are active today on Bitcoin. These are all covered in the talk that I gave. There are probably more. The only thing that was actually available when I started our project over a year ago was deleting keys. That is what that post was about. There is historic precedent for this. Back in the olden days, like in the old West, people would create a bank vault with a physical timelock on it. In other words the bank operator goes home at 6pm or whatever and locks the vault such that a robber can’t get into the vault at night whilst he is away. This is a physical example of a timelock. At the time timelocks had just come out and enabled some of these use cases. The picture for the vault use case is that there are two spending branches. One of which is timelocked, one of which is not. The timelocked branch is your normal operation. This is exactly opposite to the way Lightning works. You want to enforce a timelock on your funds such that you yourself can’t spend it until let’s say 24 hours passes. There is an unvault operation. You have to take your funds and you have to unvault them. In the case of CTV or something like that you are broadcasting the redemption transaction. Or if you have deleted keys you are broadcasting a pre-signed transaction. In the case of Revault they don’t use deleted keys but they do make a big multisig and they pre-sign this transaction. The whole point of that blog post was that once you’ve done that, the signed transaction or the CTV script is a vaulted object. I can then figure out what do I do with that vaulted object. How do I move it around? Where do I store it securely? When do I broadcast it? This signed transaction is of somewhat lower risk than bare private keys. As I mentioned, there are two spending paths. One is timelocked and the second is not timelocked but has a different set of keys in it. That is your emergency back out condition. The point of the whole vault construction is that if somebody gets into your wallet and they get the bare keys they will presumably get the ones that are timelocked. If you see them unvault one of your transactions you know a thief has gotten in. You can go get the second branch of keys out of emergency cold storage and use those to reclaim the funds before the thief can get the funds.
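
Bob’s two-branch structure might be written, very roughly, as a miniscript-style policy. This is only a sketch: the key names, the 2-of-2 thresholds and the 144-block (roughly 24 hour) delay are assumptions for illustration, not a recommended configuration.

```python
# The script on the unvaulted output: a timelocked "normal" branch using the
# hot keys, and a non-timelocked emergency branch using a separate key set.
hot_keys = ["hot_key_1", "hot_key_2"]         # normal, timelocked spending path
recovery_keys = ["cold_key_1", "cold_key_2"]  # emergency path, kept offline

normal_branch = f"and(thresh(2,pk({hot_keys[0]}),pk({hot_keys[1]})),older(144))"
emergency_branch = f"thresh(2,pk({recovery_keys[0]}),pk({recovery_keys[1]}))"

vault_policy = f"or({normal_branch},{emergency_branch})"
print(vault_policy)
# If a thief unvaults with the hot keys, the owner has roughly 144 blocks to
# notice and sweep the funds using the emergency (non-timelocked) branch.
```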

BB: We should not be recommending that.

BM: I am describing what the blog post says. We will discuss reasons why we shouldn’t do that.

BB: You should pre-sign a push transaction instead of having to go to your cold storage to get keys out. That is the obvious answer. You are saying that you go to cold storage when there is a problem and you use the keys to fix the problem. But really you should have a pre-signed push transaction pushing to the cold storage keys.

BM: Yes. This starts to get into a lot of design as to how do you organize these transactions. What Bryan is discussing is what we call a push-to-recovery-wallet transaction. The thief has gotten in. I have to do something and I am going to push this to another wallet. Now I have three sets of keys. I have the spending keys that I want to use, I have my emergency back out keys and then if I have to use those emergency back out keys I have to have somewhere to send those funds that the thief wouldn’t have access to. These vault designs end up getting rather complicated rather fast. I am now talking about three different wallets, each of which in principle should be multisig. If I do 2-of-3 I am now talking about 3 devices. In addition, when this happens, when a thief gets in and tries to steal funds I want to push this transaction. Who does that and how? This implies a set of watchtowers similar to Lightning watchtowers that look for this event and are tasked with broadcasting a transaction which will send it to my super, super backup wallet.

BB: One idea that I will throw out is that in my email to the bitcoin-dev mailing list last year I pointed out that what you want to do is split up your coins into a bunch of UTXOs and slowly transfer it over to your destination wallet one at a time. If you see at the destination that something gets stolen then you stop broadcasting to that wallet and you send to cold storage instead. The other important rule is that you only allow, for example by enforcing a watchtower rule, one UTXO to be available in that hot wallet at a time. If the thief steals one UTXO and you’ve split it into 100, by definition they have only stolen one percent. Then you know and you stop sending to the thief. Bob calls it a policy recommendation.
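
A toy simulation of that sharding rule, with made-up amounts and a made-up theft check, just to show how the loss is bounded to one shard:

```python
# Split the balance into many UTXOs, release them to the hot destination one
# at a time, and divert the remainder to cold storage once theft is observed.
def drain_vault(total_sats: int, shards: int, theft_detected) -> dict:
    per_shard = total_sats // shards
    delivered = 0
    for i in range(shards):
        if theft_detected(i):
            # Stop feeding the compromised wallet; the shard currently in the
            # hot wallet is lost, everything else goes to cold storage.
            recovered = total_sats - delivered - per_shard
            return {"delivered": delivered, "stolen": per_shard,
                    "recovered": recovered}
        delivered += per_shard
    return {"delivered": delivered, "stolen": 0, "recovered": 0}


# Example: thief compromises the destination after the 40th shard.
print(drain_vault(total_sats=100_000_000, shards=100,
                  theft_detected=lambda i: i == 40))
```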

MF: There are different designs here. I am trying to logically get it in my mind. Are there certain frameworks that we can hang the different designs on? We will get onto your mailing list post in a minute Bryan. Kevin has got a different design, Bob seemed to be talking about an earlier design. How do I structure this inside of my head in terms of the different options? Are they all going to be personalized for specific situations?

JR: The way I have been thinking about it is in terms of what the base layer components are. I think that in a vault essentially what you are looking at is an annuity. You are setting up a fixed contract that has some timing condition on every next payment. At the base layer most good vault designs, some you would do something a little bit different, this is what you are working with. The value flows along that path. At any point you can cancel the annuity. If you cancel it the amount remaining goes back somewhere else. Or you can spend the amount that has been redeemed so far. Everything else exists as either key policies on who can take those pathways or as policies on if you observe certain conditions which actions you should take. Those exist at a higher order of logic. Does that help a little bit? That backbone exists in all of these proposals. It is just whether or not you are using multisigs or you are using push-to-recover or you are using signing paths for the cold storage cancel path.

Bitcoin upgrade path to enable better vault designs

KL: Another important thing is what is doable today and what is not. Of course the kind of vault that for example Revault describes is practical today. You don’t need to change Bitcoin to make it work. Of course we are very far from having a properly blockchain enforced covenant. We have to use some tricks around it with either deleting private keys or for us we use co-signing servers which are very far from being perfect. At least we can somewhat emulate the fact that the output is pre-defined to follow certain rules. The early papers on covenants usually required a new opcode and that is a big problem. Who is going to work on creating an implementation of that if you don’t know if the opcode is going to be added to Bitcoin? That is what Jeremy is facing right now. He has been working hard for a few years on his opcode but you still have this uncertainty. When is it going to be added? Is it going to be 6 months? Is it going to be 2 years? Is it going to be 5? It is really hard when you are trying to push an idea like vaults especially for businesses when you are required to work a lot on the implementation itself because it is a security product, if you rely on assumptions like is this specific opcode going to be added or are there going to be some major changes to my BIP that are going to break my implementation? So for me there is also this separation between what can be done today practically even if it is not perfect versus what would be the perfect way of doing it where everything is enforced by Bitcoin itself.

JR: I think that is a super useful distinction to be drawing because it is looking at the trust models. I do think at the same time, and Bryan has publicly confirmed this idea, that the way CTV and multisig and pre-signed and key deletion interact is that they are all basically perfectly interoperable. You could have a system with minimal changes between A and B where you are using one or the other security models. Feel free to disagree, but if CTV were available this conversation wouldn’t be happening. We would all agree that CTV was the easiest thing to work with for this goal. There may be some questions around fees but I think this is the design space. The question between Revault and “The Team” is really around do you prefer pre-signed deleted keys or do you prefer a multisig server. That is ultimately a user preference. That should be a check box. If you have to choose between pre-signed and a multisig server which one do you prefer?

BB: Another interesting way to distinguish that even further is that for secure key deletion, that works really well when a user is the primary beneficiary. It works less well when it is a group of mutually distrusting parties.

BM: You run into serious problems for instance with audits. How do I prove that the vault exists and you basically can’t? I think everyone on the call would agree that CTV is the best solution here. I know Jeremy has been very frustrated and has been working on this for a long time. Everyone is basically hedging their bets. As Kevin just said “Maybe we will get this BIP, maybe we won’t. Maybe we will go in a different direction because we are not sure.” It would be terribly fruitful if everybody on this call could get behind these ideas. There has been very little response to Jeremy’s work. There are no responses to Bryan’s post on the mailing list. We have all got to get together, do we want this or not?

BB: So about that, I know this has been an issue for a lot of us including Jeremy. Getting public feedback and review and interest. I would say from my email there has been a lot of private feedback and conversations like this. They just don’t show up on the mailing list because none of us can be bothered to write the same things twice which is a bit of an issue. This is a pet peeve of mine. It would be wonderful if there was only a single place where you had to check to see the latest update on things.

BM: It creates a perception that no one cares because there are no responses. I think this is definitely not the case. We need to rally round something, one way or another.

MF: Maybe it is just a site like Jeremy’s, utxos.org but for vaults. Then there is at least a centralized place where you keep going for updates. The big thing hanging over all of this conversation is that I don’t think there are too many people who want to discuss future soft forks until Taproot is done. People are so busy on getting Schnorr and Taproot in and nothing else is going to be getting into that soft fork. That is still a long way off. That is the big question mark, whether people have the time.

JR: I think this is a pretty big mistake. Maybe this is something as a community we can work on. With this group of people we could really make an impact on this. Taproot is having a lot of changes still. It is still not a very stable proposal. Taproot is great and I really want to see it. But that is the reality. There are changes that happened a week or two ago for the signature algorithm that are being proposed. There are changes to the point selection, if it is even or square. The horizon is perpetually looking a year out on it being a locked down document. I know that CTV has not had the level of review that Taproot has had but it is substantially simpler. I don’t think it requires any changes at this point. I do think if we had a concerted push towards getting it reviewed and slotted for roll out there is no formal schedule for when changes have to deliver in Bitcoin. They can be soft forked out when they are ready. I think the question is if we have a room of five people that are all saying that our lives would be made easier if we had CTV then let’s get it done.

MF: The argument against that though is it would be rushed to try to get it in before Taproot. All efforts towards Taproot and then future soft forks once Taproot is in.

JR: Why though?

BB: One conceptualization of Bitcoin Core review capacity is that it is a single mind’s eye. We can only very slowly move together and carefully examine things all at once.

JR: I think there is value to that. We only have so much focused capacity. I would make the suggestion that if that is the case then we did Taproot in the completely wrong way. We really should have done Schnorr and MAST as two separate things that we can checkmark progress along the way rather than all at once that is going to take years to roll out. There is other stuff that is important work to get done. This is my question for vault implementers generally. I think vaults is one of the most compelling new use cases for Bitcoin to dramatically improve user security. My question is does Taproot or CTV do more for the practicality of you being able to deliver vaults to your users?

MF: In the small case of vaults maybe. Obviously there are lots of other use cases.

JR: With CTV there are many other use cases too. That’s a general question more rooted in the CTV side. That is the question just focused on vaults because we are in the vault section. What is the wishlist of things to make vaults better in Bitcoin you need to have and how do we deliver as the Bitcoin community? How can we deliver on these features to make this viable? If it is not CTV I don’t care. If we need something else what are the actual things that we need to be focused on in order to make vaults really work for people.

KL: For us for example Schnorr and Taproot would be a really good improvement already. Maybe if you have a proper covenant you can prevent a theft. Although at some point you need to be able to move your funds to wherever you want. Having a vault is more of a deterrent against an attack than a way to completely be in your own bubble where nobody can move funds outside. At some point you will need to be able to move funds outside. For us Schnorr and Taproot is a really important thing because it would completely hide the fact that we might be using vaults. It also hides some of the defense mechanism especially around the emergency transactions that Bob also uses. One of the things that I wanted to cover is that multisig today is cool but what we can do with Schnorr and Taproot is much more powerful. That would be extremely useful for vaults in my opinion.

BM: When you use Taproot your timelock script has to be revealed. A timelock is an opcode and you have to reveal it. One of the major benefits of Schnorr is that you can just sign a multisig. This is great for protocols like Lightning. Because the timelock in vaults works in the opposite way all of your spends have to reveal the Tapscript that contains the timelock. You lose a lot of privacy doing that.

JR: I don’t think that that is completely accurate. If you are signing and you are using pre-signed or multisig you sign the nLockTime and nSequence field without necessarily requiring a CheckLockTimeVerify enforcement. Does that make sense?

BM: Yes. There are a couple of ways to do timelocks there.

JR: That’s the thing with CTV that is interesting. You have to commit to all the timelock fields so you don’t need CheckLockTimeVerify or CheckSequenceVerify. It is just automatically committed to. It is the same thing if you are doing pre-signed. You don’t need those opcodes unless you want to restrict that a certain key can’t be used until a certain time.
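
A minimal illustration of that point, with purely illustrative field values: a pre-signed or CTV-committed template already fixes its timelock fields, so no script opcode is needed to enforce the delay.

```python
# Committing to the exact nLockTime and nSequence of the spending transaction
# (by pre-signing it, or via a CTV hash) enforces the delay without a
# CheckLockTimeVerify / CheckSequenceVerify opcode in the script.
from dataclasses import dataclass


@dataclass
class TxTemplate:
    version: int = 2
    lock_time: int = 0          # absolute locktime (block height), 0 = unused
    input_sequence: int = 144   # BIP 68 relative locktime: 144 blocks after the parent confirms


unvault_spend = TxTemplate(input_sequence=144)
# Any other sequence value would change the signature / template hash, so the
# 144-block delay cannot be bypassed without invalidating the commitment.
print(unvault_spend)
```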

MH: Taproot is awesome, CTV is awesome. Why not both? Could we get CTV as part of Tapscript so the new opcodes that are being introduced with Taproot, could CTV be one of them?

JR: I’m not aware of any new opcodes currently being proposed with Tapscript. There might be some with slightly different semantics like the signature stuff that I’m not sure of. It is not like we are adding new things there. With my original proposal for CHECKOUTPUTSHASHVERIFY last year I was like “It looks like Taproot is happening soon so let me propose it as using these new extensions.” Then months went by and Taproot didn’t seem to be making strong headway so I said “Let me do this as a normal OP_NOP upgrade because it is really not a dependent feature.” I think that is better for Bitcoin because saying that you will only get CTV if you accept Taproot is worse for the network being able to independently consider changes. That is one reason to not layer them. The larger question you are asking is why not both? Yes, let’s get both in. I think it is a question of what is feasible to get done on our engineering timelines. One of the reasons why I think we want to get CTV very soon is it does help with congestion. It will take a year or two for people to employ those congestion control mechanisms and we are already seeing a major increase in fees right now. We have to be really focused on improving our fee situation. Taproot helps a little bit with fees but the reality is that most users are using very simple keys. Hopefully we can change that, hopefully we will add more user security. Right now the majority of transactions are not materially going to be made more efficient with Schnorr and Taproot. The majority are simple single signatures. Maybe validation will be faster but we are not increasing capacity or decreasing fees. I think we need to be doing things in that category. That’s why I think there is some urgency to do this. I think this is doable in a month. I’m not trying to advocate that timeline but I think that is the amount of review that this idea would take for people to seriously get comfortable with it. Taproot, I have reviewed it many times and I am still not completely comfortable with it. It is inherently going to take a long time. To Bryan’s point on whether we should have the eye of review on a single topic, we need to as a community only put our eye of review on things that we can more quickly process. If it is things that are very slow to process we are not going to be nimble enough as a project to actually deal with issues that come up, if we are stuck on things that are like a three year roadmap.

MF: I asked Christian Decker about SIGHASH_NOINPUT. He was very much of the opinion that we don’t want to change the proposal now. Any thinking of adding new stuff in is potentially going to open up Pandora’s box or a can of worms where it starts all this discussion up and disagreement that we want to avoid.

AG: It was interesting to hear that discussion of motivations. I am hearing both security improvements and you are really focused on this congestion control as to practical implications about trying to get this out fairly quickly. I’m curious though. Let’s say we got Taproot quickly, it would take a long time for it to have any impact on wallets and it probably wouldn’t address fees immediately in any realistic sense. I can certainly see those arguments. I am a bit worried about what this looks like. Suppose CTV was deployed today what are wallets going to have to do to make best use of this congestion control feature? You might argue nothing but I have a feeling in practice it is going to be a lot of infrastructure work.

BM: It is a lot. They have to understand the tree structure.

JR: That is only marginally accurate. It depends on what wallet you are asking about. There are two classes of wallet that we’ll look at. Let’s look at infrastructural wallets like Coinbase or Kraken and then user wallets. They both have different requirements. If you look at infrastructural wallets they have a really hard time changing what their internal keys look like. BitMEX for example still uses uncompressed public keys. Why? They made a really great custody engine and is it worth it for them to change the code there? It has worked, they haven’t had a hack of it that I recall. If they were to change that then there is risk and risk costs a lot of money. For them maybe one day they will get SegWit. But they are probably not going to adopt even these better multisig things, for a decade. For them changing their own internal key type is really hard. Changing their output type is actually a little bit less challenging because they are just changing what address they are paying into. They have been able to do things like batching sooner. Batching is much more complicated than SegWit, keep in mind. Batching has a lot of very weird edge cases that you have to deal with in order for batching to not result in loss of funds. But they have been able to add things like batching. I think that for CTV all it is doing, at the layer where they decide which transaction they are going to spend to, is a single new address they are going to be paying to cover their liabilities. On the receiving end for CTV users who have existing wallets, those wallets just need to be able to understand, in order for this to work today, an unconfirmed transaction. I think that most wallets already understand an unconfirmed transaction. So it should work reasonably ok. At the exchange, infrastructural wallet layer they can also guarantee some of the execution of those expansions. I think Optech has a good write up of this. If you are willing to wait up to a day of blocks to get final confirmation you can save 99 percent on fees. They can take advantage of that to get the confirmation at the time of request. They as the infrastructural wallet make the full redemption whenever fees are low enough later that day. I think you will see a pretty easy-to-migrate-to benefit without having to make too many changes to user wallets, which can understand unconfirmeds, while the processing to get fully confirmed can be handled by the exchange. I think it is easy to deploy in the relatively near term. But the more sophisticated use cases are absolutely going to take a longer amount of time.
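
As a rough sketch of the expansion-tree pattern being described (toy hashing only, not the BIP 119 digest, and the amounts are made up):

```python
# The exchange confirms one small transaction whose single output commits to
# a tree of eventual payouts, then expands the tree later when fees are low.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def build_tree(payouts):
    """Recursively split payouts in half; each node commits to its children."""
    if len(payouts) == 1:
        addr, amount = payouts[0]
        return {"leaf": (addr, amount), "hash": h(f"{addr}:{amount}".encode())}
    mid = len(payouts) // 2
    left, right = build_tree(payouts[:mid]), build_tree(payouts[mid:])
    return {"left": left, "right": right, "hash": h(left["hash"] + right["hash"])}


payouts = [(f"user_{i}", 50_000) for i in range(8)]   # 8 withdrawals, in sats
root = build_tree(payouts)
# Only one output (committing to root["hash"]) needs to confirm at peak fee
# time; each user can treat their leaf as incoming, and the expansion
# transactions can be broadcast later when fees drop.
print("root commitment:", root["hash"].hex())
```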

AG: For myself I find it a little bit unclear. My main feeling about it is that it is going to be a struggle to convince the hoi polloi of exactly what is going on here. As you say wallets already understand unconfirmed. If that is all we are talking about then people will just say “Why aren’t you sending me my transactions quickly enough?” Most ordinary users just think unconfirmed is nothing and that is why they are generally willing to spend more in fees than they should be willing to spend. They don’t really get it. I don’t think they are going to get this either.

JR: I think it depends. Ultimately with this the only change that the wallets would need to have is to tag things that are observable as a CTV tree as being confirmed and treat them as confirmed. That is very minimal. I have made that change for Bitcoin Core’s wallet. It is like a 30 minute change. It is not that hard. It is just a question of whether they have updated software or not and whether it shows up being fully confirmed or unconfirmed. It is hard to get wallets to upgrade but it is not the largest change around. There is this weird curve where the wallets that are worse just always spend from unconfirmed so it is not a problem for them. The wallets that are better separate them out, but people who are using those wallets are also more likely to receive an upgrade. I don’t think the roll out would be awful for this type of stuff. It would be that we go to the exchange and ask them “Why isn’t this confirmed?” and they say “No it is. Upgrade your wallet and you will see it.” For users who aren’t sophisticated that is a sufficient story.
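
A sketch of what such a wallet heuristic could look like. The object model here is hypothetical, not Bitcoin Core’s wallet code and not the actual patch being described.

```python
# Display an unconfirmed payment as final if every ancestor back to a
# confirmed coin is a covenant-committed (CTV) expansion, since those
# ancestors cannot be double spent out from under the user.
def is_effectively_confirmed(tx, wallet):
    """`tx` and `wallet` are illustrative stand-ins, not real Core types."""
    if wallet.confirmations(tx) > 0:
        return True
    return all(
        wallet.is_ctv_expansion(parent) and is_effectively_confirmed(parent, wallet)
        for parent in wallet.parents(tx)
    )
```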

BM: It does require an additional communication channel between the receiver’s wallet and the sender’s wallet? The sender has to send the tree?

JR: Just the mempool.

BM: You have the whole tree in the mempool?

JR: Yes. Congestion is really for block space. Unless you have a privacy reason for not showing what the total flow is you can always just broadcast a transaction and then it will live at the bottom of the mempool. That is how people learn of transactions right now.

BM: Another interesting thing to think about here is whether wallets in the past have upgraded to new features in general. As mentioned the vast majority of transactions out there are pay-to-pubkey-hash (P2PKH). Why is that? Most wallets don’t even use pay-to-script-hash (P2SH) or multisig. Why? The answer is because everybody who is making wallets is also making s***coin wallets. In order to have a uniform experience and uniform key management for let’s say Ethereum and Bitcoin, what they’ve done is go toward using a single key for everything. And adding things on the back end like multiparty ECDSA so that it is actually multisig on the back end. Unfortunately I don’t think this dynamic is going to go away anytime soon. In my experience very few vendors have implemented some of the more advanced features on Bitcoin unfortunately.

JR: I think that is a great point. One of the things I am worried about for Taproot for example is the actual roll out in wallets is going to be ridiculously slow. It is going to be a new signing thing and wallets are already existing with a single seed format. They are not going to want to rederive a new thing. I think it is going to take a very long time for that adoption to pick up for user wallets. That is one thing that is nice with the CTV rollout. All they have to do is what they are already doing. Most of these wallets already show an unconfirmed balance, especially the s***coin ones. They show unconfirmed balance because they are zero confirmation wallets.

BM: The benefit of using the tree is that you don’t have to put it all in the mempool. If I am going to put everything into the mempool anyway I might as well have not done the tree?

JR: That is not quite true. You can always broadcast anything that will show up in the mempool somewhere but what is important to keep decongested is the top of the mempool. The actual mempool itself, it is fine to put these things in and then they get propagated around. If the mempool backlog grows they get evicted. That is fine, that is an ok outcome. You don’t want to be in the situation where you have so much stuff in the mempool that high value transactions that are completely unconfirmed, which could be double spent while their inputs remain unspent, get kicked out of the mempool. That is much more problematic for users. When you are a user and something goes in the mempool, you see it, you observe it. If it applies to you you store it in your wallet even if it goes in and out of the mempool. It just has to go into some mempool. Most of these wallets are not using a mempool of their own. They are using a mempool on the server of whoever is providing the wallet. Those can be configured to be watching for the users’ keys or whatever. Or you are filtering and they can be configured to be storing…. The mempool can be terabytes big. It doesn’t need to be a small thing. It only needs to be small if you are a miner. If it is too big and you are trying to mine, a big mempool is more problematic.

Other vault related ideas

MF: Before we go onto Bryan’s mailing list posts there are a couple of people’s work that I added to that Pastebin. One was Peter Todd’s work on single use seals. Another was Christopher Allen’s work on smart custody. Are any of these of interest to people? Any thoughts on these guys’ work?

AG: I just want to mention how incredibly easy to understand single use seals are. It was an extremely sarcastic comment.

JR: I have always found them and their description completely inscrutable. If somebody feels like they can take a shot at that.

BB: Apparently RGB is using this in their code.

AG: We just read some C++ code and it would be easier to understand than the mailing list post.

BB: Code is the universal language.

BM: I think I can describe it pretty simply but I don’t know how it is related to vaults. A single use seal is something that you can use once. If you have ever seen a tag on your shipping crate.

BB: We understand what it is. It is just Peter Todd’s description of it and how it applies to Bitcoin.

BM: Peter Todd’s description is inscrutable, I agree.

JR: Is it just spending a UTXO? That is all we are talking about?

BM: Spending a UTXO is an example of a single use seal. You can only spend a UTXO once.

AG: My sarcasm is one thing. I do think there is something interesting there. His idea of client side filtering but it is pretty abstract. It is perhaps not a topic for today, I’m not sure.

BM: I don’t know how it relates to the vault topic.

MF: Apologies for that if I am introducing noise. The smart custody stuff that Christopher Allen worked on is relevant.

BB: I was co-author with Christopher Allen on that project for some of the Smart Custody book along with Shannon Appelcline and a few others. It was the idea of let’s put together a worksheet on how to do custody for individuals, how to safely store your Bitcoin using hardware wallets. The sort of planning you might go through to be very thorough and make sure you have checklists and understand the sorts of problems you are trying to defend against. The plan was and I think it is still the plan to do a second version of this smartcustody.com work for multisig which was not covered in the original booklet.

JR: I would love that. I have some code that I can donate for that. Generating on your computer a codex which is a shuffled and then zipped BIP 39 wordlist. Then you take the wordlist and you use it as a cipher to write your seed in a different set of words. You give one party your cipher and you give the other party the encrypted word list. What is the point of that? You have a seed that is on paper somewhere and now you want to encrypt it and give a copy to two of your friends so that you have a recovery option. I feel like having little tools to allow people to do that kind of stuff would be pretty nice. Being able to generate completely offline backup keys and shard out keys to people.
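
A toy version of that codex idea, assuming the standard 2048-word BIP39-style wordlist (a stand-in list is used below). This is not the code being donated and should not be used to protect real funds.

```python
# Shuffle the wordlist to create a cipher: one friend stores the shuffled
# list (the codex), the other stores the enciphered seed words, and neither
# alone learns the seed.
import secrets

# Stand-in for the 2048-word BIP39 English wordlist.
WORDLIST = [f"word{i:04d}" for i in range(2048)]


def make_codex(wordlist):
    shuffled = list(wordlist)
    # Fisher-Yates shuffle driven by a cryptographic RNG.
    for i in range(len(shuffled) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return dict(zip(wordlist, shuffled))


codex = make_codex(WORDLIST)
seed_phrase = ["word0001", "word0042", "word1999"]   # illustrative seed words
enciphered = [codex[w] for w in seed_phrase]
# Recombining the codex and the enciphered words recovers the seed phrase.
print(enciphered)
```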

BB: Definitely send that to Christopher Allen. He also has an air gapped signing Bitcoin wallet based off of a stripped down iPod touch with a camera and a screen for QR codes. That is on Blockchain Commons GitHub.

MF: Let’s go onto to your mailing list posts Bryan. There are two. Bitcoin vaults with anti-theft recovery/clawback mechanisms and On-chain vaults prototype.

BB: There is good news actually. There were actually three. There were two on the first day. While the first one was somewhat interesting it is actually wrong and you should focus on the second one that occurred on that same day, the one that quickly said “Aaron van Wirdum pointed out that this is insecure and the adversary can just wait for you to broadcast an unlocking transaction and then steal your funds.” I was like “Yes that’s true.” The solution is the sharding which I talked about earlier today. Basically the idea is that if someone is going to steal your money you want them to steal less than 100 percent of your money. You can achieve something like that with vaults.

KL: Something else really interesting in it is Bryan also takes the path of deterring an attack. I think Bryan in the last step you always burn the funds although maybe it is not always?

BB: It is only if you are in an adversarial situation but yes.

KL: I think it is really cool because the whole point of this type of approach is really not to be bad on the user but really to be hard to steal. To deter the attack in the first place.

BB: My vault implementation is not the same as the version being implemented at Fidelity. It is a totally different implementation. There is some similarity. I admit this is very confusing. There are like five different vault implementations flying around at the moment. Jacob on the call here, he has his own little implementation which is number 5. Jeremy has his which is 6. There is mine and the one at Fidelity based off of secure key deletion and also some hardware wallet prototypes. Then there’s Kevin’s Revault. I’m sure I’m forgetting one at this point.

MF: There is a lot going on. I’m assuming some of them are going to be very specific to a certain use case. I don’t know what is going on at Fidelity. Perhaps they have special requirements that other custodians wouldn’t have.

SH: I can speak to what we’re doing at Fidelity. This is what I am on working on. As you may know in January 2019 we released FDAS which is Fidelity Digital Asset Services. It is custodianship for institutional clients. The deleted key vault that we are working on is open sourced. We do have a work in progress implementation on our public facing GitHub page. The interesting part is Vault-mbed repo which is currently under refactoring. We are not looking at extra functionality at the moment. I’d be happy to answer any questions that someone may have.

MF: The next resource is Bryan’s Python vaults repo which is one of those particular implementations. It is in Python so I am assuming this is a toy, proof of concept type thing.

BB: Yes. This is definitely proof of concept. Don’t use it in production. The purpose was to demonstrate that this could all work. To get some sample transactions and use them against Bitcoin regtest mode. It works and I am definitely open to feedback and review about it. One of the interesting things in there is that there is both a default version that uses secure key deletion, a pre-signed transaction where you delete the key, and also an implementation using BIP 119 (OP_CHECKTEMPLATEVERIFY). An interesting note, Jeremy has been polite enough to not bring it up, Jeremy’s version in his branch of Core is substantially more concise and I am a little confused about that. I’m not sure why mine isn’t as concise as yours.

JR: I think I benefit a lot from having implemented it in Core. You have access to all the different bits and bobs of Core functionality, wallet signing and stuff like that. It is an interesting point. Let me find a link so I can send out this implementation to people. I was trying to think about how I write this as a template meta program so that I have all of these recursion things handled in terms of “Here is a class that attaches to another class and there are subclasses.” I think that is a nice model. I also spent some time trying to make a template meta programming language for C++ that allows you to write all different types of smart contracts. I really hit a wall with that. What I have built now, setting up for the big punchline, is this smart contracting language that I have been trying to hype a little bit. It is called Sapio. It isn’t released yet but hopefully soon I will get it out there. If you think the implementation I have in Core is concise wait until you see this one. This one is like 20 lines of code, the whole thing. It is 20 lines of code and then thousands of lines of compiler. There is a trade-off there. I am hoping the compiler that I am working on will be general purpose and I think this is something that I’d love to follow up later with everyone who is working on vaults because I think everybody’s vaults can probably be expressed as programs in this language. You will probably save a lot of lines of code. Maybe we can put communal effort on the things that are same across all implementations. Things like how do you add signatures? How do you template those out? How do you write the PSBT finalizers? All that kind of stuff is general logic.

Q - Can you describe this language briefly? How does it compare to say Ivy or Miniscript?

JR: It is a CTV based language. You can swap out the CTV with emulated single party pre-signed or you can have a multisignature thing. That depends on which security model you are willing to accept and whether you have the feature available or not. Ivy and Miniscript are key description languages. They operate at the level, in a metaphor, of “What is my house key? What is my car key? What is my bus pass? What is my office key?” This language operates at the level of commutes. You say “I leave my house, I lock my door, I go to my car, I unlock my car, I start my car, I drive to my office. Then I unlock my office.” Or I walk to the train station, I take the train, I walk to my office and then I unlock the office. It is describing the value flow not just a single instance of an unlocking. Ivy and Miniscript describe single instances of unlocking. This language is a Turing complete meta programming language that emits lists of Bitcoin transactions rather than emitting a script. It emits lists of transactions rather than a script. That’s the succinct version.
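
Sapio is not released, so the following is only a guess at the shape of the idea: a contract is a function that emits the whole value flow as linked transaction templates rather than a single locking script. All names here are invented for illustration.

```python
# A toy "contract as a function that returns linked transaction templates".
from dataclasses import dataclass, field


@dataclass
class TxTemplate:
    name: str
    outputs: list                       # (amount_sats, description) pairs
    children: list = field(default_factory=list)


def simple_vault(amount: int, steps: int, per_step: int) -> TxTemplate:
    """Emit an annuity-like chain: each step releases a little to the hot
    wallet and re-vaults the remainder."""
    root = TxTemplate("deposit", [(amount, "vault output")])
    node, remaining = root, amount
    for i in range(steps):
        remaining -= per_step
        step = TxTemplate(f"unvault_{i}",
                          [(per_step, "to hot wallet"), (remaining, "re-vaulted")])
        node.children.append(step)
        node = step
    return root


contract = simple_vault(amount=1_000_000, steps=4, per_step=200_000)
# Walking the tree yields the full list of pre-committed transactions.
```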

Revault design

MF: Just to bridge to next week’s presentation with Kevin and Antoine, I thought we could discuss Revault. Aaron tried to do this in the interview with Kevin and Antoine which was tease out the differences between Revault and some of the other vault designs such as Bryan’s and Fidelity’s.

KL: With Revault the main thing is that when we started it we started with a threat model and a situation which is different. It was a multiparty situation where we had a set of participants being the stakeholders and a subset of participants for this specific client at least that were doing the day-to-day movement of funds. To explain more clearly, they are a hedge fund; there are different participants in the hedge fund but only a subset of them are traders. The threat model includes internal threats like the two traders trying to steal the money of the fund. This is something quite new or not well covered in other proposals until now. Hopefully we will see more soon. There is the external threat model that a multisig covers. Then you have the internal threat model. The two signatures of the two traders are not enough because you want to include the others as some kind of reviewers of transactions or whatever you want to call them. Of course there is the main threat for most companies and people today in Bitcoin which is the physical threat. Somebody coming to you, the five dollar wrench attack. We are quite lucky to not have too many of those but when you look at the security of for example exchanges today you would be really surprised to see 500 million or even a billion USD of Bitcoin being secured behind a single key. If you find the right people to threaten then you might be able to steal the funds which is super scary. We are trying to address that kind of stuff. Another difference is because it is for business operations we are trying to reduce the hassle of using it. Most security things are defeated if they are complex to use; people are going to bypass them. This is also very problematic so you want the traders to be able to do their job. If every time they are doing a transaction they have to ask their boss if it is ok to move the funds and check the destination address, amounts etc then it is never going to be used. They are going to bypass it somehow. We are trying to move the validation from being some kind of verification after the transaction is created to the opposite. When the funds enter, when somebody deposits money to this hedge fund or whoever uses Revault, the funds are locked. The funds are locked by default. Then you would need all the stakeholders, in my example it is four, to pre-sign the set of transactions to be able to move them. By default they are locked and then you can enable the spending by having everybody sign the set of transactions that can move them and revault them. In case of an attack you need to have a pre-signed transaction to put them back in the vault. After that the spenders, the two traders, are able to craft a transaction. That will be timelocked, you will have a delay before it is mined. Different conditions can be triggered to revault the funds if something is wrong. This could be enforced by the stakeholders, that could be third parties like watchtowers and other things like that. Of course it is more complex than that because we are really trying to emulate something like CTV with the tools that we have today. It is not really simple. Personally I am not fond of deleting private keys. Secure key erasure is not something I really like. Personally I am trying to avoid this in the design. At the end of the day it creates other problems. We are having to use co-signing servers today which is not cool. I don’t know how we will implement that properly or if we can remove it.
Antoine who is not on the call today is actively working on trying to remove this co-signing server but that might mean that we are moving towards secure key deletion as well. I think it is a trade-off and you have to see what risk you want to take. Secure key deletion creates other burdens around backups because you need to have to backup every pre-signed transaction. I hope it is a good primer. I don’t know if you have any questions. It would take a long time to dig into it I think.
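
Based only on the description given here (the real Revault protocol differs in detail), the set of transactions and signers might be summarised roughly like this:

```python
# A rough outline of which transactions exist and who must sign them in a
# multi-party vault flow like the one described above; purely illustrative.
REVAULT_FLOW = [
    # transaction        signed by                                  notes
    ("deposit",          "external payer",                          "funds locked to all stakeholders by default"),
    ("unvault",          "all stakeholders (pre-signed)",           "enables the spend path after a delay"),
    ("cancel/revault",   "all stakeholders (pre-signed)",           "returns funds to the vault if something looks wrong"),
    ("emergency",        "all stakeholders (pre-signed)",           "pushes funds to deep cold storage"),
    ("spend",            "subset of traders (+ co-signing server)", "timelocked; watchtowers can trigger cancel before it confirms"),
]

for name, signers, note in REVAULT_FLOW:
    print(f"{name:15} {signers:42} {note}")
```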

MF: The few main differences that you talked about in that podcast with Aaron van Wirdum are multiparty architecture rather than single party, and pre-signing the derivation tree depending on the amount. In Bryan’s implementation you need to know the amount in advance.

KL: In the original architecture from Bryan last year the funds are not in the vault before you know exactly how many Bitcoin you want to secure. You have to pre-sign all your tree and then you move the funds in. From that time the funds are protected. Of course that is not usable in normal business operation. At least you would have to consider a step before that where your deposit address is not part of the vault system. It is doable, you can do a system like that, it is not a problem. But it is not part of the vault before pre-signing the transaction because of the deleted private keys and things like that. We don’t have this problem. Our funds are secure as soon as they enter. Different trade-offs.

Mempool transaction pinning problems

MF: Before we wrap I do want to touch on Antoine’s mailing list post on mempool transaction pinning problems. Is this a weakness of your design Kevin? Does Bob have any thoughts on this?

BM: I think Jeremy is probably the best to respond to that as he is actively working on this.

JR: All I can say is we are all f***ed. It would be great if there was a story where one of these designs is really good in the mempool. It turns out that the mempool is really messy. We need to employ 3-5 people who are just working on making the mempool work. There is not the engineering budget for that. The mempool needs a complete rearchitecting and it doesn’t work in the way that anybody expects it to work. You think that the mempool is supposed to be doing one layer of functionality but the reality is the mempool touches everything. The mempool is there in validation, it is there in transaction relay and propagation and it is there in mining. It needs to function in all those different contexts. It needs to function quickly and performantly. What you end up getting is situations where you get pinned. What pinning means is that the mempool has all these denial of service protections built into it so that it won’t look at or consider certain transactions. Because the mempool is part of consensus it is not what it sounds like. It is not a dumb list of memory that just stores things, it is actually a consensus critical data structure that has to store transactions that could be in a valid block. It has to be able to produce a list of transactions that could go into a valid block. Because of that you really tightly guard how complicated the chains of transactions that can get into the mempool are. People have issues where they will do something that is slightly outside of those bounds and they are relying on the mempool being able to accept a transaction for a Lightning channel let’s say. Because the mempool is quite big you have things that are low fee at the bottom that can preclude a new high fee transaction coming in, which prevents you from redeeming a Lightning channel and means you are going to lose money. Especially for Lightning, especially for cross-chain atomic swaps. What is annoying about this is that because of the way UTXOs are structured this can be somebody who is completely unrelated to you spending from some change address of a change address of some other long chain. With any of these designs, if you have pending transactions you are going to have a really hard time with this stuff. I would like to say we have a plan for making the situation completely addressed. Bcash did deal with this, they got rid of child-pays-for-parent and they have unlimited block size. It turns out that if you do those two things you stop having a lot of these denial of service issues. I don’t think that is viable for Bitcoin at this point but it is not even the worst option among options that could possibly be a thing. I think we just need to invest a lot more engineering effort to see what we can elevate the mempool into. It is the type of issue where people look for carve outs, little things that can make their niche application work. Lightning did this one time with the Lightning carve out that prevents pinning in a certain use case. Six months later they found out that it doesn’t solve all the problems. I don’t think it is going to be a carve out thing that fixes some of these pinning issues. It is going to be that we have completely rearchitected the mempool to be able to handle a much broader set of use cases and applications. I am a little bit negative on the prospects of the mempool working really well. I think that is the reality. I am working on it, I am not just complaining. I have like 30 outstanding PRs worth of work but nobody has reviewed the second PR for two months.
It is not going to happen if people aren’t putting the engineering resource on it. That’s the reality.

The role of watchtowers

Q - What are your thoughts on the watchtower requirement here? I see a path of either bundling watchtowers for many services versus separate towers for separate services. Which way of handling do you think is best long term?

BB: I will probably hand this over to Bob or Jeremy about watchtowers. It is a huge problem. The prototype I put together did not include a watchtower even though it is absolutely necessary. It is really interesting. One comment I made to Bob is that vaults have revealed things that we should be doing with normal Bitcoin wallets that we just don’t do. Everyone should be watching their coins onchain at all times but most people don’t do that. In vaults it becomes absolutely necessary but is that a property of vaults or is that actually a normal everyday property of how to use Bitcoin that we have mostly been ignoring. I don’t know.

BM: There are many uses of watchtowers. As time goes on we are going to see more. Another use for watchtowers that has come up recently is the statechain discussion. Tom Trevethan posted an ECDSA based statechain that I think is pretty interesting. It also has the requirement for watchtowers. It is a method to transfer UTXOs. What you want to know is did a previous holder of the UTXO broadcast his redemption transaction and how can you deal with that? I think there is a path here to combine all of these ideas but there is so much uncertainty around it we currently wouldn’t know how to do it. There are multiple state update mechanisms in Lightning and that is still in flux. Once you start to add in vaults and then statechains with different ways to update their state there is going to be a wide variety of watchtower needs. Then you get to things like now I want to pay a watchtower. Is the watchtower a service I pay for? Can it be decentralized? Can I open a Lightning channel and pay a little bit over time to make sure this guy is still watching from his watchtower? How do I get guarantees that he is still watching my transactions for me? There is a lot of design space there which is largely unexplored. It is a terribly interesting thing to do if anybody is interested.

JR: I think part of the issue is that we are trying to solve too many problems at once. The reality is we don’t even have a good watchtower that I am operating myself and I fully trust. That should be the first step. We don’t even have the code to run your own server for these things. That has to be where you start. I agree longer term outsourcing makes sense but for sophisticated contracts we need to have at least something that does this functionality that you can run yourself. Then we can figure out these higher order constraints. I think we are putting the cart before the horse on completely functional watchtowers that are bonded. That stuff can come later.

BM: I think the first order thing to solve is how to do the state update mechanism. We are still not decided on whether we are going to get eltoo and SIGHASH_NOINPUT which implies a different update mechanism and a different set of transactions that need to be watched. That conversation doesn’t seem to be settling down anytime soon. It seems like we are not going to get SIGHASH_NOINPUT anytime soon. I don’t know. There is a lot of uncertainty.

KL: For the watchtowers I am not as skeptical as you guys, I think for multiple reasons. One of them is that anybody could be working on watchtowers today or even have watchtowers in production and we would not know about it. This is one of the cool things about watchtowers. It behaves as if it has a private key but it doesn’t have it. It has a pre-signed transaction instead. Another thing is regarding hosting it yourself or giving it to other people or having a third party watchtower. I think it is a good thing that you should have a lot of these. Of course you should have one or multiple watchtowers yourself but you should also deal with third parties. You might have to pay them of course. The fact that you should have a lot of watchtowers and no one knows how many you have is really important in terms of security. At least they don’t know if there is a single point of failure. They don’t know if it is a vector of DDOS. They don’t know who to attack to prevent the trigger of a prevention mechanism. I am really bullish on watchtowers and I know a few people working on them. I am really looking forward to seeing them in production.

MF: I’ll wrap up. In terms of transcripts I am doing a lot more transcripts than Bryan these days because obviously Bryan is very busy with his CTO role. If you want to follow new transcripts follow @btctranscripts on Twitter. Eventually there will be a site up, I’m working on it. Bryan has released a transcript of today for the first half. We’ll get that cleaned up and add any content that is missing from the second half. Apart from that the only thing to say is thank you for everybody attending. If you want to hear more about vaults, Kevin and Antoine are presenting next week. Thank you very much everyone.

Media: https://www.youtube.com/watch?v=34jMGiCAmQM

Name: Socratic Seminar

Location: London BitDevs (online)

Pastebin of the resources discussed: https://pastebin.com/3Q8MSwky

Twitter announcement: https://twitter.com/kanzure/status/1262821838654255104?s=20

The conversation has been anonymized by default to protect the identities of the participants. Those who would prefer their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is a London BitDevs Socratic Seminar. We are live-streaming on YouTube so please be aware of that for the guys and girls on the call. We are using Jitsi which is open source, free and doesn’t collect your data so check out Jitsi if you are interested in doing similar conversations and Socratics in future. Today we are doing a Socratic Seminar. For those who haven’t previously attended a Socratic Seminar, they originated at the BitDevs in New York. There are a number of people on the call who have previously attended them. The emphasis is on discussion and interaction, so feel free to ask questions and move the discussion onto whatever topics you are interested in. This isn’t a formal presentation, certainly not by me, nor even by some of the experts on the call. This was set up because we have Kevin Loaec and Antoine Poinsot presenting next week on Revault which is their vault design so that will be live-streamed as well. That will be more of a formal presentation structure and Q&A rather than today which is more of a discussion. The topic is covenants, CHECKTEMPLATEVERIFY. Jeremy Rubin has just joined the call which is fantastic. And also in the second half we will focus on vaults which is one of the use cases of CHECKTEMPLATEVERIFY. What we normally do is we start off by doing intros. A very short intro. You can raise your hand if you want to do an intro. Introduce yourself, how much you know about covenants and vaults and we’ll go through the people on the call who do want to introduce themselves. You don’t have to if you don’t want to. If you are speaking and you are happy to have the video turned on make sure you turn the audio and the video on. It will be better for the video if people can see who is speaking. If you don’t want the video on obviously don’t turn the video on. If you don’t care switch the video on when you are speaking.

Bryan Bishop (BB): I am Bryan Bishop, Avanti CTO, a Bitcoin developer, I’ve worked on Bitcoin vaults. I had a prototype release recently. I am also working with a few others on the call here on two manuscripts related to both covenants and vaults. I also did an implementation of my vault prototype with Jeremy Rubin’s BIP 119, OP_CHECKTEMPLATEVERIFY proposal.

Spencer Hommel (SH): My name is Spencer. I am currently a Bitcoin developer at Fidelity Center of Applied Technology’s blockchain incubator. I have been working on vaults since the summer of 2018. More specifically I am working on a hardware solution using pre-signed transactions with deleted keys. I am also working with Bryan Bishop on those manuscripts as well.

Kevin Loaec (KL): I am Kevin Loaec. I’m probably the least technical on this call. I’ve been interested in covenants and vaults. Not too long ago, my interest was raised by the first proposal that Bryan sent to the bitcoin-dev mailing list about 7 months ago. I have been working on it since last December when a client of ours had a specific need where they wanted a multi-party vault architecture for their hedge fund. That’s when I started digging into this and exploring different types of architecture. I am working on this project which we call Revault which is a multiparty vault architecture that is a little bit different from the one the other guys here are working on. But it also has a lot of similarities so it is going to be a very interesting talk today.

Max Hillebrand (MH): I am Max. I’m mainly a user of Bitcoin technologies and I contribute some to open source projects too, mainly Wasabi wallet. I have always been interested in the different property rights definitions that Bitcoin Script can enable. Specifically multisignatures which was part of my Bachelor thesis that I wrote. I have been following vaults specifically on the mailing list, some transcripts by Bryan and the awesome utxos.org site that Jeremy put up. Just interested in the topic in general. Looking forward to the discussion, thanks for organizing it.

Jeremy Rubin (JR): Hey everyone, I’m Jeremy. Thanks for the intro before though. You know a little bit about me. I’ve been working on BIP 119 CHECKTEMPLATEVERIFY which is a new opcode for Bitcoin that is going to enable many new types of covenants and smart contracts. One of those use cases is vaults. I released some code that is linked on utxos.org. You can see there is a vault implementation that you can check out based on CHECKTEMPLATEVERIFY. I am currently working on implementing better tools for being able to use CHECKTEMPLATEVERIFY that will hopefully make it a lot easier to implement all kinds of vaults in the future.

Sam Abbassi (SA): My name is Sam. I am also working on vaults with Bryan and Spencer over at Fidelity. I probably have the least amount of experience with respect to vaults. This is part of me gaining more exposure but happy to be here.

openoms (O): I am openoms. I am working on mainly some new services in a full node script collection called the Raspiblitz. I am generally a Bitcoin and Lightning enthusiast. I am enthusiastic about privacy as well. I don’t know a lot about vaults but I am looking forward to hearing more and learning more.

Jacob Swambo (JS): Hello everyone. My name is Jacob. I am working with Bryan, Spencer and Bob on the vaults manuscript that was mentioned on the bitcoin-dev mailing list not that long ago. I am a PhD student at King’s College London and I have been working on this for about a year and a half. I am happy to be here talking about all this.

Adam Gibson (AG): I just wanted to mention I’m here. It is Adam Gibson here, waxwing on the internet. I don’t have any specific knowledge on this topic but I am very interested.

What are covenants?

MF: Basic questions. These are for the beginners and intermediates and then we move onto the discussion for the experts on the call. To begin with what is a covenant and what problem is a covenant trying to solve?

Bob McElrath (BM): I didn’t introduce myself. I’m Bob McElrath, also working with Bryan, Spencer and Sam on a draft that will appear very soon, hopefully in a week or so you will be able to read it. I gave a talk on this topic last summer in Munich. It is a whole talk about covenants, going through the various mechanisms. A covenant is by definition a restriction on where a UTXO goes next. When I sign a UTXO I provide a signature which proves that I control it and it is mine. But I don’t have control over where it goes after that. A covenant by definition is some kind of restriction on the following transaction. With that you can create what is commonly called a vault. A vault basically says “The following transaction has to go to me one way or another.” I am going to separate my wallet into a couple of pieces, one of which is going to be more secure than the other. When I send to myself, between hot and cold or between an active wallet or something like that, this has to go to me. I am making a restriction on the transfer of the UTXO that says “If you get into my wallet you can’t directly steal this UTXO just by signing it” because the covenant enforces that it has to go to me next. From there I can send it on. I am introducing a couple of layers of complexity into my wallet to do that.
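
To make that definition concrete, here is a minimal sketch of what a covenant conceptually enforces. This is toy application-level Python, not consensus code: the names TxOut, VAULT_SCRIPT and covenant_ok are invented for illustration.

```python
# Toy illustration of a covenant: a restriction that travels with the coin.
from dataclasses import dataclass
from typing import List

@dataclass
class TxOut:
    value: int          # amount in satoshis
    script_pubkey: str  # toy representation of the destination script

# The restriction on the vaulted coin: whatever transaction spends it must
# send the funds back to a script the owner controls.
VAULT_SCRIPT = "timelocked_unvault_script_owned_by_me"

def covenant_ok(spending_outputs: List[TxOut]) -> bool:
    return all(out.script_pubkey == VAULT_SCRIPT for out in spending_outputs)

# A thief with the key can sign, but the covenant still rejects a sweep to
# their own address; the coin can only move to the owner's next stage.
print(covenant_ok([TxOut(100_000, "thief_address")]))  # False
print(covenant_ok([TxOut(100_000, VAULT_SCRIPT)]))     # True
```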

MF: It certainly does, a sophisticated answer. Any of the beginners, intermediates did you understand that answer? What was your initial understanding of covenants before this? Any questions for Bob?

AG: I think this has been an open question since the early days of these ideas. It is such an obscure name. It doesn’t immediately jump out at you what it is. I appreciate the explanation Bob, that is excellent.

BM: We can blame Emin Gun Sirer and his collaborators for that. They wrote the paper in 2016 and they named it covenants. It is all their fault.

JR: I know it is a fun game to blame Emin but the term covenants existed before that in a Bitcoin context. The historical reason is that covenants are something that you use when you transfer property. It restricts how it can be used. In the Bay Area where I live there is a dark history with covenants where they were used to prevent black people from owning property. “You can sell this house but not to a black person.” That was relatively common. When people talk about covenants they oftentimes have weird things in their deeds like “You can only ever use this property to house 25 artists.” You sell the property with the covenants and the person can’t ever remove these covenants from the property. There was some mention in the notes that people didn’t really like covenants early on. Covenants is inherently a loaded term. It was a term that was coined to cast some of this stuff in a negative light because some people don’t like covenants. Not in a Bitcoin or cryptocurrency context. In general people have a negative association with someone else controlling your own property. In a Bitcoin context, as Bob pointed out, I liked his description, it is about you controlling your own property. One of the metaphors that I like to use for this is right now each UTXO is a little bit like a treasure chest. You open it up and it is full of gold coins. Then you get to do whatever you want with the gold coins. Imagine one day you opened up your treasure chest and you found Jimi Hendrix’s guitar in it. Where do you store that? Can you take that and throw it in the back of your Subaru? No, this is a sacred thing, it needs to go in your guitar case. There is a restriction where you open up your treasure chest and it says “This is a guitar. You can only put this into another suitable guitar case.” That is what a covenant is doing. It is telling you which containers are safe for you to move your item into. It turns out for the most part we are talking about using coins. It would be “These are coins but they are made out of uranium so you have to put them in a lead box.” They need to go in a lead box, that is the next safe step. That is one of the metaphors that works for covenants. It is about you being able to control the safety and movement of your own coins.

MH: I know the term covenants from the incumbent banking system where if you have a loan contract, for example a bank to a company, the bank can make the requirement that if the company’s cashflow to equity ratio drops to a certain level then the debt has to be paid back. Or it has to be renegotiated. It is a limitation on the contract where the contract itself is terminated or changed when one of these conditions is triggered. Seeing it as a restriction also makes sense in a Bitcoin context. We have a Bitcoin contract that is “If you have the private keys to this address then you can spend it. But with the restriction that it has to go to this other address.”

MF: You dropped off there Max so we didn’t hear the second half of your answer. I think some of you have already seen this. I did create a Pastebin for some of the resources that we can talk through. That is on the Twitter and on the Meetup page. I got a bunch of these links from Jeremy’s interview on Chaincode Labs’ podcast which I thought was excellent. Jeremy talked about some of the history in terms of implementing covenants on Bitcoin. The first one is an early bitcointalk.org post on how covenants are a bad idea. Perhaps we can talk about why people thought covenants were a bad idea back in 2013 and what progress has been made since then that perhaps has changed their mind or perhaps they still think covenants are a bad idea.

Historical concern about Bitcoin covenants

BB: I’ll start off. As I recall that was a Greg Maxwell post. I have talked with him and without corrupting what his opinion actually is too much I think the major concern was mainly about recursive covenants. People using them and not really understanding how restrictive a recursive covenant really is. That was the main concern, not that covenants are actually awful. It unintentionally reads as “Do not use covenants under any circumstance” which is unfortunate but that was not the intention of the post.

JR: Greg has explicitly said in IRC somewhere something to the tune of “I don’t know why everybody thinks covenants are bad. Covenants are fine. I have never said anything about them being bad.” I said to him “Greg, everybody I talked to says that they think it is because you said that they are bad in this thread. If you don’t think they are bad you should make that clear.” You can’t make Greg write something. He has written that in IRC. He doesn’t have any hang ups or holdups about them. It is not even the recursion that is the problem, it is virality. He doesn’t want a covenant system where somebody else is doing something and then all of a sudden your coins get wrapped up into their covenant. It may be that recursion is a part of that but that is the major concern with the ones that Greg was looking at.

MF: There are two elements here. One is timing. Perhaps it was way too early to start thinking about implementing covenants back in 2013. Secondly, perhaps the view was that the covenant ideas back then were stupid or too complex and it was just a case of battening down the hatches and making sure some crazy covenant ideas didn’t get into Bitcoin. Any thoughts on that or any of the previous conversation?

BM: Any time you do a covenant… Any agreement you make when you are sending funds with the receiver is a private agreement. You can make whatever terms you want. Greg’s post enumerates a particularly bad idea where one party can impose a restriction on another party against their will. I think most people would think that is a terrible idea. It is not that covenants themselves are a bad idea. If you agree to it and I agree to it, fine. I think for the most part covenants are most useful for where it is not a two party arrangement, it is a one party arrangement. Once you get two parties involved everybody has to understand what is going on. By default it can’t be a regular wallet. The structure of the scripts has to change somehow. I have to know about those rules. As long as I agree to them I think that is completely fine.

JR: A hard disagree on that note. One of the major benefits of a covenant system comes into play with Lightning Network related stuff. It actually dramatically simplifies the protocol and increases the routability by improving the number of HTLCs that you can have and the smart contracts that can live underneath a Lightning channel feasibly with reasonable latency.

BM: I think we agree there. If you are using a Lightning wallet then you have agreed to those rules.

JR: I do agree that there is a huge amount of value when it is a single party system but multiparty things are actually really useful in covenants because the auditability of those protocols is just simpler. For a lot of the setup you end up writing half the amount of code.

MF: The next couple of links I put up were a good intro for beginners to this topic which is Aaron van Wirdum’s Bitcoin Magazine article on SECURETHEBAG. This was after Jeremy’s presentation at Scaling Bitcoin 2019. Then there’s this paper by Emin Gun Sirer, Möser and Eyal. Any thoughts on this paper? Anyone want to summarize the key findings from this paper?

The first paper on Bitcoin covenants

BM: I can give it a stab. This was the first paper in the space. They defined a new opcode that acts rather like a regular expression that says “I’m going to examine your redeem script and I am going to impose some restrictions which are essentially an arbitrary sort of regular expression on your script.” The second thing that they defined is recursive covenants, which is the thing Greg Maxwell doesn’t like, as well as a kind of protocol where if somebody manages to steal your funds you can get into a game where you keep replacing each other’s transactions until the entire value of the UTXO goes to fees. They claim this is somehow beneficial because the thief can’t actually steal anything. That aspect of the paper I don’t like very much. I don’t think anybody wants to get into a game where they lose their funds anyway even if they prevent the attacker from gaining them by sending them to fees instead. Those are broadly the three things in that paper.

BB: I’ll disagree. I think it is valuable to have the lose-everything-to-fees option because it comes down to the following. Would you rather fund an adversary or lose your money? Unfortunately in this contrived scenario those are the only options.

BM: That is not true. You can definitely have neither. You don’t have to get yourself into this game where you are paying fees.

AG: What exactly was the name of the new opcode for it? This was probably why it didn’t go anywhere.

JR: They called it OP_COV. There were a few problems with it. It wasn’t just the technical capability that it introduced. I don’t think the proposal was that secure. There were a few gotchas in it that would make it hard to deploy. With BIP 119 I tried to answer the integration questions of if you are really signing transactions what can go wrong? It turns out with a design that is not restrictive enough there are a lot of weird edge cases you can run into. That is why that proposal didn’t go anywhere.

BB: The other thing I remember is that in that paper in 2016 the manuscript was published around the time that BIP 68 and BIP 112 occurred, the relative timelocks. I think the paper itself said this is going to require a hard fork, which strikes me as odd.

JR: I think they probably just didn’t know.

BM: It was published right before those BIPs. I had a post after that that used the deleted key thing and those opcodes because it was obvious to me that they had missed that. That paper does not talk about timelocked opcodes correctly.

MF: This is Jeremy’s presentation at the Stanford Blockchain Conference in 2017. This was the first presentation of yours that I saw on covenants, Jeremy. It had a bunch of different use cases like “The Naughty Banker” and your current thinking back then. So of all these use cases which ones are still of interest now and how has your thinking changed since that presentation? I enjoyed that presentation, I thought it was very informative.

Introducing the use cases of Bitcoin covenants

JR: Here are the slides. A lot of them are still useful. Congestion control is particularly of note. The example I gave was how to use congestion control for Lightning resolution where you want to lock in a resolution and you do the details later. There’s things like optical isolated contracts, there is some vaults stuff in here too. That stuff is obviously still interesting. In this presentation I define a bunch of different types of opcodes. Those could still be interesting. One of the things that I define here is a way of doing a Tapscript style thing at the transaction level. If you had transactions that you can mark as being required to be spent within the same block then you could have scripts that expand out over the series of transactions. In my opinion that is a slightly more interesting primitive to work with because then you can have scripts that expand out over a number of blocks but then they split how the funds are being distributed to different UTXOs. You can build out some different flows and primitives based on that expansion. I think those things could be interesting in the future. I don’t think there is anything that is irrelevant in this presentation at this point. It is like carving out the small bits that we know how to do safely in Bitcoin and making it work. There are a few that aren’t here that I would be excited to add. One thing that I have been thinking about as a next step for Bitcoin after CHECKTEMPLATEVERIFY or an equivalent gets merged is an opcode that allows you to check how much value is in an output as you are executing. A really simple use case you can imagine for this is you paste an address to somebody and if it is under 1 Bitcoin you have a single key because it is not that much. But if it is over a Bitcoin then you have multisig. You can use that as a safety mechanism in a number of different applications. I think that is an important thing going forward that wasn’t in this presentation. It is worth looking at if you are thinking about how to contract in the UTXO model, what sorts of things could be possible.

The congestion control use case

MH: I am still somewhat lacking intuition on why this is an improvement for congestion and how it can save fees at times when the fee level is high. If someone could explain that a bit more succinctly that would be nice.

JR: The idea of congestion control is mostly that there’s a fundamental amount of traffic that has to happen but there is a peak demand. It is kind of like coronavirus, you want to flatten the curve. I was looking at all these diagrams of flattening the curve and I was like “Hey this has been what I’ve been working on for the last year.” Let’s say it is lunch hour and we have 10 megabytes of transaction data coming in every ten minutes but it is only for an hour. Over the rest of the day those transactions are going to be clearing. With the congestion control solution you can commit to all of them, confirm all of them and then only when they need to be redeemed do they have to pay fees. You have localized the confirmation window for all of them, confirmed all of them at one time and then you spread out the redemption window where somebody goes and gets an individual UTXO out. The reason why this ends up decreasing fees is that if you think about fees as a bidding market you are bidding for two different goods. You are bidding simultaneously for confirmation and you are bidding for redemption. That is an inefficient market because those are two separate quantities. By splitting out the quantities you bid one price for confirmation and that confirmation price can be shared among a number of actors. Then you bid a separate price for redemption. That has the effect of allowing you to have fewer people bidding in the confirmation market with CHECKTEMPLATEVERIFY and more people bidding in the redemption market. I think that is the shape of why it is going to be an improvement.
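
As a rough back-of-the-envelope illustration of this point (all feerates and transaction sizes below are invented for illustration, not measurements), compare everyone paying individually at the peak feerate against one CTV commitment confirming at the peak with redemptions trickling out later at a low feerate:

```python
# Illustrative arithmetic only: the numbers are placeholders, not real vbyte
# counts or real feerates.

PEAK_FEERATE = 100   # sat/vbyte during the rush hour
QUIET_FEERATE = 5    # sat/vbyte later in the day

N_PAYMENTS = 100
SIMPLE_TX_VBYTES = 150        # rough one-input, two-output payment
COMMITMENT_TX_VBYTES = 150    # one input, one CTV tree output
REDEEM_TX_VBYTES = 200        # rough size of one leaf redemption later on

# Everyone pays individually at the peak.
naive_cost = N_PAYMENTS * SIMPLE_TX_VBYTES * PEAK_FEERATE

# One commitment confirms everyone at the peak; redemptions happen off-peak.
ctv_cost = (COMMITMENT_TX_VBYTES * PEAK_FEERATE
            + N_PAYMENTS * REDEEM_TX_VBYTES * QUIET_FEERATE)

print(f"naive: {naive_cost} sats, congestion-controlled: {ctv_cost} sats")
```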

MH: One follow up question. Let’s say I make a transaction that pays 100 users. I get that confirmed at a low fee. How does that work with the users redeeming their coins? Does it have to happen for all the 100 users at the same time or can 10 users do it fast and the other 90 do it slower?

JR: CHECKTEMPLATEVERIFY is a general purpose opcode. It doesn’t do one of these things specifically. The answer is whatever your users want. We could also talk about mining revenue because I think it is important when we are talking about something that looks like it is reducing fees, whether it is improving revenue. I think it does improve revenue but that is a separate conversation. What you would do is you would bundle up all your 100 users into a transaction. You would have a single output for all of them. Then you would create that. You would probably end up paying a very high fee on that transaction because it is representing confirmation for 100 users. A high fee on a transaction with one output is a lot lower than a low fee on a hundred transactions or a hundred outputs. You are still saving money as the user but you are maybe paying a higher fee rate. What you give to your users is if they have an old wallet it looks like an unconfirmed spend. It would be “Here are a couple of unconfirmed spends” and you can structure the spends as any data structure that you want that is a tree of some sort. A linked list is a tree. You could have something where it is one person, the next person, the next person, but that is a little bit inefficient. It turns out that it is optimal for the users to do a tree of radix 4. You would have a tree that says “Pay out to these 4 groups” and each group of 4 pays out to 4 groups and each group of 4 pays out to 4 groups. Then the total amount of work that you have to do is log(n) to get a single redemption in transaction space and amortized over all the users it is only a constant amount of transaction overhead.
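
A sketch of the radix-4 tree idea (illustrative Python, not the actual CTV tooling): split the payout list into groups of 4 recursively, so redeeming a single leaf only requires publishing the handful of transactions on its path from the root.

```python
import math

# Hedged sketch of a radix-4 payout tree. Each internal node stands for a
# CTV-committed transaction whose outputs are its (up to) four children;
# leaves are the individual payouts. Nothing here is real wallet code.

RADIX = 4

def build_tree(payouts):
    """Recursively group payouts into a radix-4 tree (nested lists)."""
    if len(payouts) <= RADIX:
        return payouts
    group_size = math.ceil(len(payouts) / RADIX)
    return [build_tree(payouts[i:i + group_size])
            for i in range(0, len(payouts), group_size)]

def flatten(tree):
    if not isinstance(tree, list):
        return [tree]
    out = []
    for child in tree:
        out.extend(flatten(child))
    return out

def path_length(tree, target, depth=0):
    """Number of tree transactions published to redeem `target`."""
    if not isinstance(tree, list) or target in tree:
        return depth + 1
    for child in tree:
        if target in flatten(child):
            return path_length(child, target, depth + 1)
    raise ValueError("payout not in tree")

payouts = [f"user_{i}" for i in range(100)]
tree = build_tree(payouts)
# Redeeming one user touches only ~log4(100) levels of the tree, not 100 txs.
print(path_length(tree, "user_37"))
```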

BB: One interesting point here is in the tree structure the way this is set up is that in certain scenarios some users are paying a little bit more than others.

JR: Yes. One of the issues is it a balanced tree or not? It turns out that we’re already talking logarithmic so it is already going to be pretty small. We are talking maybe plus or minus one on the depth in the tree. You can pay a little bit more but the other side of it is that users who want to redeem earlier subsidize users who redeem later because it is amortized over all the users. Let’s say I have a branch that ultimately yields a group of 4 and one of those people decides that they really want their coins. Them getting their coins subsidizes the creation of everybody else who is one of their neighbors along the path. There is a thing where naturally there is a priority queue which is what you want to have I think in Bitcoin. You want the fee market to be a priority queue where people who have higher priority, higher requirement of getting their transaction through end up paying more. What this changes is the redemption to a lazy process where you do it whenever demand is low enough to justify your use case. You are not worried about confirmation. The alternative is that these transactions sit unconfirmed in the mempool. I think unconfirmed funds are far worse. The real benefit of that, this goes back to why I think this is really good for multiparty situations, these payouts can be Lightning channels. You are wondering how am I going to get liquidity, it turns out that you can immediately start routing it in the Lightning Network. That’s one of the benefits of this design is that it allows you to bootstrap Lightning channels much more easily because you are not time sensitive on the creation of the channel as long as it is confirmed.

AG: Can we dial back a little bit? I think we have jumped a few steps ahead. I want to make sure I understand the most basic concept. I think Max was asking something similar. Do I understand that the most basic concept of congestion control here is with this covenant mechanism you are able to effectively treat unconfirmed transactions or chains of unconfirmed transactions as if they are settled so to speak? This distinction about confirmation and redemption you were saying is that the receiver can treat the money as having been received even though it is not in a Bitcoin block because there is a covenant. Is that right?

JR: That is exactly correct. So we have a diagram here. I compare normal transactions where you have some inputs and blue is the payments and pink is your change UTXOs. This is normal transactions on the left. If you go to normal batching then you have a number of outputs and a single change and it is more efficient. With congestion control payments what you do is have a single output and then you have a tree of possible redemption paths underneath. Here I show a little bit more advanced demo. Ignore the part on the right. Just imagine you go down with this radix 4 tree, you go down to Option B. You expand out and then you have all these different transactions. What this diagram is showing you is that the different leaves or nodes of this transaction graph can be expanded at different times and in different blocks. If you look at the gray boxes, look at the size of them. Normal transactions are the worst. It is the biggest gray box. Then batched transactions is the next smallest. Congestion controlled transactions are even smaller. Your real time block demand is really low. Then sometime later these other transactions can be played but they are guaranteed to go down that route. The optionality that I’m showing answers the question that Max had earlier which is how do they actually redeem. You could redeem on different types of tree. The one on the right is Option A, let’s redeem as a single step and pay out to everyone. That is an immediately useful one and is maybe a little bit easier to understand and integrate into existing wallet infrastructure. It is a single unconfirmed parent for this transaction. If you want optimal efficiency on a per user basis then you would do a tree expansion. In Option A it is less fair if you are the one person that wants to redeem their funds, you’ve got to pay for everyone. On Option B you only have to pay for log(n) of everyone else which you can kind of ignore.

MF: There is a question in the YouTube chat. How is this different to child-pays-for-parent?

JR: The difference between this and child-pays-for-parent (CPFP) is that CPFP is a non-consensus rule around deciding what transactions you should mine. This is a consensus rule around being able to prove a transaction creates another transaction. In this world you do end up wanting to use CPFP where you can attach your spending transaction with a higher fee to pay for the stuff up the chain. In this example you would spend from one of these outputs and then you would attach a high fee to it. Then that would subsidize the chain of unconfirmeds. It is related to CPFP in that way but it is a distinct concept in that these pending transactions are completely confirmed. There is no requirement to do CPFP in order to get confirmation of the parent. That is the difference.

BB: I’ll point out another difference that CPFP doesn’t require a soft fork. It doesn’t accomplish the same thing either.

JR: The other thing I would add if we are going to go tongue in cheek is I am going to probably end up removing CPFP or having to completely rearchitect it. The mempool is a big project right now. There is a lot of stuff that doesn’t work how people think it works. Transaction pinning is one of these issues that comes up. It is a result of our CPFP policy. There is a very complicated relationship between a lot of these fixes, features and problems we end up having.

MH: Can we still do CPFP for that commitment CTV transaction?

JR: Yes. You just spend from the child and then it is CPFP. That’s where the mempool issues come in. CPFP doesn’t actually work. This is the problem that people are running into with Lightning. People have a model of what CPFP means and the model that they have is perfect economic rationality. It turns out that perfect economic rationality is an NP-hard problem. We are never going to have a perfectly rational mempool. We are always going to be rejecting things that look good. It just turns out that the current CPFP policy we have is really deficient for most use cases. CPFP already only works in a handful of cases. It doesn’t work for the Lightning Network. Even with the recent carve-out it still doesn’t really work properly.

SH: Is there any consideration to how exactly you structure the tree with radix 4? Is there a certain algorithm or protocol to place certain outputs in certain positions of the tree or is it left to random or open to whatever implementation?

JR: I think it is open to implementation. The opcode is generic and you can do whatever you want. That said there are some really compelling ones that I have thought of that I think would be good. One would be if you had priority information on how likely people are going to be in the same priority group. You can either choose to have a neutral priority arrangement where you try to pair high priority with low priority or you can do something which is a fair arrangement where high priority is with other high priority so people are more likely to share fees. There are also fun layouts you can do where you estimate the probability of each output being redeemed quickly and then you can Huffman encode the tree based on that. The other one I really like, and this goes into the Lightning side of things which is a bit more advanced, is you can order things by the probability of people being able to cooperate. If you had some notion of who knows other people then you can do a recursive multiparty Lightning channel tree and then if you group people by the probability that they are able to work together in a group then you make an optimal updatable tree state. That one I am really excited about as a payment pool option. The last one would be if you are making payments and they might be going to the same service. You can make a payment tree where keys that you suspect are owned by the same wallet exist in the same sub-branches. There is an opportunity for cutting out some of the redemption transactions by redeeming at that higher order node. There are a lot of options.
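
The Huffman-style layout mentioned here can be sketched as follows (hedged, illustrative; the probabilities are invented and a real implementation would also respect the radix-4 structure): leaves that are more likely to be redeemed soon sit closer to the root, exactly as Huffman coding gives short codes to frequent symbols.

```python
import heapq
import itertools

# Hedged sketch: build a Huffman-style redemption tree where payouts that are
# more likely to be redeemed early end up at shallower depths.

def huffman_tree(payouts_with_prob):
    counter = itertools.count()  # tie-breaker so heapq never compares trees
    heap = [(p, next(counter), name) for name, p in payouts_with_prob]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), (left, right)))
    return heap[0][2]

def depth(tree, target, d=0):
    if tree == target:
        return d
    if isinstance(tree, tuple):
        for child in tree:
            found = depth(child, target, d + 1)
            if found is not None:
                return found
    return None

payouts = [("impatient_user", 0.6), ("normal_user", 0.3), ("cold_storage", 0.1)]
tree = huffman_tree(payouts)
# The likely-to-redeem-soon leaf ends up closer to the root.
print(depth(tree, "impatient_user"), depth(tree, "cold_storage"))
```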

SH: My next question was about the probabilistic payouts in Lightning so thank you for answering that.

BM: Could you talk a bit more about CPFP because I think one way to describe this is instead of the sender paying fees the receiver can pull and therefore the receiver has to pay fees which means the receiver is going to have to use CPFP to do that. Could you talk a bit more about the interplay between those two?

JR: CPFP is not the only way to pay fees. There’s a litany of ways to pay fees in one of these systems. CPFP is I would say the best way because it is a pure API where you want to declare which transactions you want to do and then the paying of fees for those should be abstracted away from the actual execution. CPFP is good to express these arbitrary relationships. It actually turns out that there are better APIs. One of the other soft forks I am looking at that maybe we could do is something that gives us a much more robust fee subsidizing methodology.

BM: If the receiver wants to pull a payment out of the tree and get it confirmed for whatever reason he may have to pay fees. The transaction at the end of the tree could pay fees. It may not be enough. The receiver may have to add fees to that and they may desire to which means they have to use some kind of replacement. Due to the structure of CTV replace-by-fee is not going to be viable.

BB: I don’t think you would replace the fees at the end of it, though I guess you could. I was expecting that you would make a child transaction that pays the fees in addition to whatever you pull out of the tree.

JR: Replace-by-fee (RBF) works fine with CHECKTEMPLATEVERIFY (CTV). The only issue that comes up is if you want CHECKTEMPLATEVERIFY to be inherently Lightning compatible then RBF is not a Lightning compatible idea. In general you have to worry about the state of HTLCs in subcontracts so you can’t arbitrarily RBF because you may be bound to a specific UTXO. If you have things like ANYPREVOUT then that wouldn’t necessarily be true. You would be able to get around some of those constraints. The reason why I prefer CPFP is that it doesn’t disturb the txids in your parents and your own branch. I think txid stability at least for the current Lightning Network designs that we have is an important property. But you can use RBF, it just changes your txid. With CTV there are two forms of it. There is an unbounded form where you allow any output to be added that adds more money. There is also a bounded form that is possible through a quirk. I like that you can do it. Using a P2SH SegWit address you can specify which key is allowed to add a dynamic amount of fee. If you pick a key that is known to be of the parties in that subtree then it would only be through the coordination of those entities that the txid could be changed. If you are trying to do a Lightning thing and the RBF requires coordination of all the subowners then it can work as well, in a protected form that protects your state of HTLCs. I think that that is a complicated thing to build out and CPFP is conceptually a lot simpler. RBF does not work well for a lot of services. This was one of the debates about RBF in the first place. People didn’t like it because people wanted to issue one txid and they wanted to be an exchange and say “Here is your txid” and then not worry about having to reissue the txid because it looks like a double spend and wallets get upset. It is not awful that the code supports it but it is an awful thing to use in practice because it has bad externalities. It is just more robust, that is the reason why I’ve been advocating CPFP.

The design of CHECKTEMPLATEVERIFY (CTV)

MF: We’ve jumped straight into use cases. I’m wary of that. Jeremy, could you take a step back and explain what CTV is in comparison to some of the other covenant designs?

JR: With the presentation I gave in 2017, at that time I was like “Covenants are really cool. Let me think about the whole covenant space.” The Emin Gun Sirer paper only covers one type of covenant which is how an output has to be spent but it doesn’t cover covenants around which inputs it has to be spent with, there are a lot of things. I thought about it and I tried to get people excited, people got excited. At the implementation point people were like “This stuff is scary to do. We are not really sure what is possible to do safely in Bitcoin. We have all these properties we want to preserve around how transactions behave in re-orgs.” I was like “Let’s do a long study of how this stuff should work.” I was doing that and working on other things, figuring out what made sense. A lot of the proposals for covenants have flaws in either how much computation they are expecting a validator to do or what abstractions and boundaries they violate in terms of transaction validation context. Observing things that you are not supposed to observe. As I went by I started building vaults in 2016. I was talking to some people about building them. I had a design that ended up being somewhat similar to what Revault looks like. I was using lots of niche features like special sighash flags for making some of this stuff work. At the end of the day it really was not working that well. I went back to the drawing board, looking at how you can do big ECDSA multisignatures to emulate having big pre-signed chains. I tried to get people excited about this at one of the Core Dev meetings. People said “This stuff is not what we are interested in.” No one would review it. I stepped back and said “I am trying to accomplish this specific goal. What is the most conservative minimal opcode I could introduce to do that without having any major security impact change to Bitcoin.” I came up with CTV, it had a couple of precursors. The design is basically the same. It was actually more conservative originally, I have made it more flexible in this iteration. I presented that to the San Francisco BitDevs. The usual suspects were there. The response was very positive. People were like “This seems like a covenant proposal that does not have that much complexity we were expecting from validation. And it does not have that much potential for a negative recursive or viral use case that would add a large problem.” It used to be called SECURETHEBAG, it also used to be called CHECKOUTPUTSHASHVERIFY. It was a back and forth. I originally called it CHECKOUTPUTSHASHVERIFY because I was like “Let me call it the most boring thing that is exactly what it does” and then everybody at the meetup was like “That name sucks. You have got to name it something more fun.” I renamed it SECURETHEBAG and the other half of people were like “Bitcoin is serious business, no funny names.” I renamed it to CHECKTEMPLATEVERIFY which is conceptually there but it is not as boring as CHECKOUTPUTSHASHVERIFY. It really gets to the heart of what the idea is, what type of covenant you are writing. Essentially all you are doing is saying “Here is a specific transaction template.” A template is everything except for the specific outpoints or coins that you are spending. That covers sequences, version, locktime, scriptSigs and outputs of course and whatever other fields that I may have missed that are in the txid commitment. People are writing in the chat that they want to bring back SECURETHEBAG.
If you want to bring it back I have no business with that, I can’t be responsible. It just checks that the hash of the transaction matches those details. That’s basically it. That’s why it’s a template. Here is the specific transaction that I want to do. If you want to do more than one transaction, what if I want Option A or Option B, simple. Wrap it in an IF ELSE. If you pass in one then do Transaction 1, if you pass in zero do Transaction 2.
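
A rough sketch of the idea follows. This is not the exact BIP 119 serialization (the real hash is precisely specified, also commits to the input index, and only commits to scriptSigs when any are non-empty); it is just an approximation of what is described above: hash everything a txid commits to except the funding outpoints, then lock the coin to that hash.

```python
import hashlib
import struct

# Toy CTV-style template hash: commit to everything in the transaction except
# which outpoints fund it. Names and serialization are illustrative only.

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def toy_template_hash(version: int, locktime: int,
                      sequences: list[int],
                      outputs: list[tuple[int, bytes]]) -> bytes:
    seq_blob = b"".join(struct.pack("<I", s) for s in sequences)
    out_blob = b"".join(struct.pack("<q", value) + script
                        for value, script in outputs)
    preimage = (struct.pack("<i", version)
                + struct.pack("<I", locktime)
                + struct.pack("<I", len(sequences))   # number of inputs
                + sha256(seq_blob)
                + struct.pack("<I", len(outputs))
                + sha256(out_blob))
    return sha256(preimage)

# Conceptually the coin is then locked with:
#   <template_hash> OP_CHECKTEMPLATEVERIFY
# and the "Option A or Option B" pattern is just:
#   OP_IF <hash_A> OP_CHECKTEMPLATEVERIFY
#   OP_ELSE <hash_B> OP_CHECKTEMPLATEVERIFY OP_ENDIF

h = toy_template_hash(version=2, locktime=0, sequences=[0xFFFFFFFE],
                      outputs=[(50_000, b"\x00\x14" + b"\x11" * 20)])
print(h.hex())
```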

BB: When I was working on my Bitcoin vaults prototype and I was doing the CHECKTEMPLATEVERIFY implementation version I was originally doing secure key deletion and then I was like “I should try BIP 119.” I asked Jeremy, this IF ELSE thing sucks if you have a lot of branching. Jeremy suggested a very simple script that was significantly more concise. That was interesting.

JR: I have become a bit of a Script virtuoso. There are a lot of funny script paradigms that you can do with this stuff to make it really easy to implement. The IF ELSE thing always bothered me. Do you do a big chain of IF ELSEs or do you do the balanced tree branch conditionals and pass that in? It turns out there is a script that Bryan is referencing where you just have to pass in the number of the branch that you want to take. It is that simple. Bryan, maybe I will send it to you for review. I posted on StackExchange somewhere a script which emulates a switch statement where you pass in a number and it takes whatever branch of code you want to execute underneath. It is a little bit more verbose but it is very easy for a compiler writer to target.

AG: You said CHECKTEMPLATEVERIFY is essentially looking at what the txid encompasses, in other words the template transaction. But then you said it includes the scriptSig and it doesn’t include the outpoints. Surely it is the other way round?

JR: One critique that comes up that sometimes people say is that I have designed CTV for a very specific use case. There is a more general thing out there that maybe could be better. That is a little bit true. The very specific use case that I have in mind is where you have a single input. There is a reason for that. That is why I was talking about the malleability before. If you have a single input there is no malleability you can have with the transaction outpoint if you know one parent’s outpoint. You know that one parent’s outpoint and then you can compile down the tree. You can fill in all the details as they go. It is all deterministic. It is not specifically designed only for that use case but it is designed so that that use case works really well. When you look at the scriptSigs that is a little bit weird. It basically means that you mostly cannot use bare script for CTV because you are committing to signatures there if you have signatures. If you have a bare CTV where it is just a CTV you can use a bare script because you don’t put anything in your scriptSig. As soon as you have signatures and other things you end up having a hash cycle. The way you end up getting around that is you use a SegWit address. In a SegWit address the witness data is not committed to in the txid so your signatures and stuff are all safe. Unless it is P2SH and then you commit to the program. You can use the SegWit P2SH as a cool hack where you can commit to which other key has to be spending. That’s the reason why you are committing to the scriptSigs but not the outpoints. The scriptSigs affect the txid but given a known chain of CHECKTEMPLATEVERIFYs the outpoint does not affect the txids given a single parent known outpoint.

I’ll give you a concrete example. One of the big benefits of CTV is you have all these non-interactive protocols where I define “here’s an address” and then if enough coins move into this address then I have started the Lightning channel without having to do any back and forth with my counterparty. I still need to know, in order to update that channel state, the txid of the channel that eventually gets created. If I spend to that address and it has a single input then I know who spent to it and I know the outpoint. I can fill in all of the txids below. Those txids won’t change. Any terminal state that I am updating with a HTLC is guaranteed to be stable. If I had malleability of the txid either by having RBF or by having multiple inputs or not committing to the set of data I commit to then you would run into the issue that I am mentioning where things can get disrupted. It is a little bit abstract but if you read the BIP there is a lot of language explaining why it is set up that way.
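
The determinism described here can be sketched as follows (illustrative only, with a toy txid function rather than the real double-SHA256 of the serialized transaction): once the funding outpoint of the root is known, every txid further down a single-input CTV chain can be computed, which is what keeps the eventual channel txid stable.

```python
import hashlib

# Hedged sketch: toy txids for a chain of single-input CTV transactions.
# A real txid hashes the full serialized transaction; here a txid is just a
# hash of "parent outpoint + fixed template", which is enough to show that one
# known parent outpoint pins down every descendant txid.

def toy_txid(parent_outpoint: str, template: str) -> str:
    return hashlib.sha256(f"{parent_outpoint}:{template}".encode()).hexdigest()

# Templates (outputs, sequences, etc.) are fixed when the address is created.
templates = ["channel_funding_template", "channel_commitment_template"]

def fill_in_txids(funding_outpoint: str):
    txids = []
    outpoint = funding_outpoint
    for tmpl in templates:
        txid = toy_txid(outpoint, tmpl)
        txids.append(txid)
        outpoint = f"{txid}:0"   # the single output spent by the next step
    return txids

# As soon as the payment into the CTV address is known, every downstream txid
# is computable and will not change.
print(fill_in_txids("exchange_payment_txid:1"))
```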

SH: I think you touched on this during your CTV workshop back in February. Can you elaborate how if at all Tapscript affects some of the scripts that you and Bryan mentioned just a few minutes ago or CTV scripts in general?

JR: Tapscript makes a lot of this stuff much easier. In Tapscript you would never use an OP_IF. There are some use cases where you have a combinatorial blowup in script complexity. You would maybe use it for those purposes. You wouldn’t need to use it in most use cases. Tapscript makes a lot of these things easier to do. You could have an opcode which is “This is an intermediate output and it has to be spent by the end of this block or this transaction can’t be included.” This would give you the same functionality as CTV. It is about being able to have some branch that has to execute and you don’t need to pass in all these bytes to signify which branch you want to execute. It is painful to do that.

BM: Can you elaborate some of the arguments and counterarguments for or against the implementation of CTV? In particular there is a balance between making a super restrictive opcode, you started with something more restrictive and then you moved to something less restrictive. One of the things that I have been fooling around with is the new Simplicity language which if we got that soft forked into Bitcoin has bare access to essentially all of the transaction data. You could compose anything you wanted as far as a covenant goes. It is perhaps the polar opposite in terms of flexibility. I have been thinking about implementing CTV just for the fun of it in Simplicity to understand how it works. Can you elaborate on the spectrum here, what is too restrictive, what is not restrictive enough and why?

JR: Simplicity is really cool first off. I don’t think it does what you think it does, in the sense that you can write a valid contract in Simplicity for whatever covenant you want but it is not necessarily executable onchain. As you write more complicated scripts in Simplicity the runtime goes up and you have some certain runtime limits or fee limits on how much work a transaction can require. Unless you get a soft fork for the specific jet that you want to add you can’t do it. The way I think about Simplicity is: what if we had the optimal language for our sighash flags? What would that look like? Simplicity lets you define whatever you want and then you can easily soft fork in compatibility: if you need old clients to be able to understand a new specification, Simplicity lets you do that. Simplicity lets you express these things, it doesn’t necessarily let you make transactions based on them. One point that I would also make about the compactness, this is something I have spoken to Bram Cohen about and you can ask him for his actual opinion if I misstate it, is that even if you have a really sophisticated covenant system, general covenants are runtime compiled, where you are interpreting live in the script. CTV is ahead of time compiled. You only have to put onchain the data for the branches that you are actually doing. You could write that in Simplicity as well. I think what you would end up doing is implementing CTV in Simplicity. I don’t think that right now, given the complexity of Simplicity as a long term upgrade, we should ignore doing something that works today for that type of use case. It is basically just saying, if you want to map it, “We are doing a jet today for this CTV type script” and that will be available in Simplicity one day. Having this is both good for privacy in that you don’t reveal your whole contract but it is also good in terms of compactness in that you only reveal the parts of your contract that need to execute. There are a lot of benefits rather than having the complete program expressed in Simplicity, at least as far as I can tell.

On the question of why have something restrictive versus something general. It is really easy to audit what happens with CTV. There are a few things that you can do, a few different code paths. It is a hundred lines of code to add it to Core. It is pretty easy. Within an individual transaction context there is no major validation overhead. It is just simple to get going. It makes it easy to write tools around it. Writing tools around a Simplicity script is probably going to be relatively complicated because you are dealing with arbitrary binaries. You are probably going to be using a few well tested primitives in that use case. With CTV it is a basic primitive. The tooling ends up being pretty easy to implement as well. I think Bryan can speak to that. With respect to it originally starting more restrictive. The restrictions I had originally were basically around if you added other features to Bitcoin whether CTV would allow you to do more complicated scripts. I removed those features. People said “We want these things to be enabled.” I didn’t want CTV to occupy the space such that we added CTV and now we can’t add this other thing that we want without enabling these very complicated contracts. I said “Let me make this as restrictive as possible.” People said “No if we add those things the chances are that we do really want these more complicated contracts.” This is like OP_CAT for example. I said “Ok sure. I will remove these restrictions, make it a little bit more flexible.” Now if you were to get OP_CAT or OP_SHA256STREAM in Core then you would actually be able to start doing much more sophisticated CTV scripts. This gets to a separate question that I will pose in a second. One thing you can do for example is write a contract that says ‘This template must pay out to all of these outputs and any output of your choosing.” This can be useful if you want to add an additional output. You can’t remove any outputs that are already specified but you could add another output. It gives you some more flexibility if you had OP_CAT. But because we don’t have it you can’t really do that today. That gets to the point of why not just do ANYPREVOUT which also gives you an analog for CTV. There would be no upgrade path for ANYPREVOUT short of Simplicity that would allow ANYPREVOUT to ever gain higher order templating facilities. CTV has a nice upgrade path for more flexibility in the future if we want it.

nothingmuch: What about recursion?

JR: So basically all the recursion happens at compile time. You can recurse as much as you want. This is sort of under wraps right now but I am happy to describe it. I have been building a compiler for CTV. I hope to release it sometime soon. The compiler ends up being Turing complete where you can compile any contract you want that expresses itself in Bitcoin transactions. But the compiler produces a finite list of Bitcoin transactions at the end of the day. There is no recursion within those. Those are just a fixed set of transactions that can be produced. If you want any recursion you can in principle do that at compile time but not at the actual runtime. I don’t know what “bounded input size” means but I think that is a sufficient answer. We can follow up offline about “bounded input size.”
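
As a hedged illustration of "recursion at compile time, none at runtime" (the compiler mentioned here is unreleased, so this is only a guess at the shape of the idea): a recursive description of a contract gets expanded, on your own machine, into a flat finite list of transaction templates, and only that flat list can ever appear on chain.

```python
# Hedged sketch: a "compiler" that recursively expands a contract description
# into a finite, flat list of transaction templates. The recursion lives only
# in this local expansion step; the output is a fixed set of transactions.

def compile_annuity(total_sats: int, installments: int, depth: int = 0):
    """Recursively split a balance into yearly payouts plus a remainder."""
    if installments == 0 or total_sats == 0:
        return []
    payout = total_sats // installments
    remainder = total_sats - payout
    tx = {
        "name": f"installment_{depth}",
        "outputs": [("beneficiary", payout), ("carry_forward", remainder)],
    }
    # Compile-time recursion: expand the carry-forward into further templates.
    return [tx] + compile_annuity(remainder, installments - 1, depth + 1)

plan = compile_annuity(total_sats=1_000_000, installments=5)
for tx in plan:
    print(tx["name"], tx["outputs"])
# The result is a finite list; nothing on chain ever loops or recurses.
```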

MF: There are a couple of things I would like to cover before we transition to vaults. One is in that 2017 presentation you talked about some of the grave concerns. Were you able to address all these concerns? Fungibility, privacy, combinatorial explosion etc

JR: In terms of computational explosion I think we are completely fine. Like I mentioned compile time can be Turing complete but that is equivalent to saying “You on your own computer can run any software you want and emit whatever list of transactions you want.” At runtime it has to be a finite set of transactions. There is no infiniteness about it. Then in terms of fungibility and privacy, I think it is relatively ok. If you want privacy there are ways of getting it in a different trust model. For example, if you want privacy and you are willing to have a multisig signing server then you can use Taproot. You get a trust model where the signing server could steal your funds if you had all the parties working together but they can’t go offline and steal your funds because you have an alternative redemption path. In terms of fungibility, the issue is less around whether or not people can tag your coins because that is the privacy issue. The fungibility issue is whether your coins can be spent with other coins. Because this is a program that is guaranteed to terminate and it has to terminate in a coin that is unencumbered by any contract those coins can be spent with any other coin. There is no ongoing recursive segregation of coins. The fungibility I think is addressed. For privacy what I would say is that I think that having onchain contracts, these are really good onchain contracts in terms of you only show the part you are executing, not the whole program. You don’t learn other branches of the program that might have been there. But you are seeing that you are executing a CTV program so there is maybe a little bit of privacy harm there. The way that I like to think of this and why this is a huge win for privacy, is that this is going to enable a lot better Layer 2 protocols and things like payjoin and mixers. It is going to make a lot of those things more efficient. Our ability to add better privacy tools to Bitcoin is going to improve because we’re able to bootstrap these protocols more efficiently. It is going to be a big win for privacy overall. There is some new information revealed, I wouldn’t say there is nothing new revealed.

Other use cases of CTV

MF: Let’s go on to use cases. We’ve already discussed congestion control. Perhaps Jeremy you could put up the utxos.org site and go to the use cases tab. So one of them is congestion control, one of them is vaults. You have a bunch of other use cases there as well. Before we move on specifically to vaults perhaps you could talk about some of those different use cases and which ones are promising and which ones you are focusing on?

JR: Like I mentioned I’ve been working on a compiler. The use cases that I now have are probably triple what is here. There is a lot of stuff you can do. Every protocol I have looked at, things like discreet log contracts, becomes simpler to implement in this framework. The use cases are pretty dramatic. I am really excited about non-interactive channels. I think that is going to be huge. It gets rid of 25-50 percent of the codebase for implementing a Lightning channel because a lot of it is the initial handshaking, and it makes it possible to do certain things that are hard to do right now. The other stuff is all related to things like scaling and trustless coordination free mining pools where you can pay people out. I sent Bob at some point some graphs around this. You can set up a mining pool where every block pays out to every single miner that participated in the mining pool over the last thousand blocks. Then you can do this on a running basis. You can have something where there is no central operator. You only get to participate if you provably participated in paying out to the people as specified over the last 1000 block run. Then you can use the non-interactive channels to balance out so that the actual number of redemptions per miner ends up being 1 for every given window that they exist in. You could minimize the amount of onchain load while being completely trustless for the miners in receiving those redemptions. There is a lot of stuff that is really exciting for making Bitcoin work as a really good base layer for Layer 2. That I think is going to be the major other use case. Another thing I am excited about with vaults is that vaults exist not just as something for an institution but they are really important for people who are thinking about their last will and testament, inheritance schemes. This is where the non-interactivity becomes really important. You can set up an auditable vault system that pays out a trust fund to all your inheritors without interaction and without having to inform them a priori of what the layout is. It can be proved to an auditor which is important for tax considerations. Anytime you are like “I gave 10 million dollars of Bitcoin to my heirs”, you have to prove when they got access to those funds. That is difficult to do in the current regime. Using CTV you can actually prove that there is only one time path to redeem those funds. You can set up things where there are opportunities to reclaim your money if you were ever to come back from the dead. If you were lost on a desert island you could come back and there would still be funds remaining in the timed payouts. I am really excited about all the new types of things people are going to be able to do. Vaults I think are a really important use case. Vaults are important not just for individual businesses where you are like “How are we securing our hot wallet stuff?” I think vaults are most impactful for end users where you don’t have the resources to employ people to be managing this for you. You want to set something up where let’s say you’ve got an offline wallet that you can send money to and then funds automatically come back online to your phone. But if you ever lose your phone you can stop the flow of funds. I think that is really exciting for CTV in particular, the ability to send funds to a vault address and for that vault address to automatically move funds to your hot wallet without requiring any signatures or anything. The management overhead for a user is very low.
Your cold wallets can just be keys that are only sent to in the event of a disaster. Let’s say they are in 7 different bank vaults around the world. You have your vault that you send to and then you don’t have any requirement to actually have those recovery keys unless you have to recover. The big difference with CTV vaults is that you remove keys from the hot path completely. There is no need for signing, there is just a need to send the funds to the correct place. This vault diagram is not accurate by the way. This is a type of vault. The ones that I implemented that are in the repo are more similar I think to the form that Bryan put out.
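
A sketch of the vault flow described here, with invented names and no real key or script handling: deposits go to a vault address whose only spending path is a pre-committed move toward the hot wallet after a delay, with a clawback to cold keys available during that delay.

```python
from dataclasses import dataclass

# Hedged sketch of a CTV-style vault state machine. In a real design each step
# would be a CTV hash committing to the next transaction, and the delay would
# be an OP_CHECKSEQUENCEVERIFY relative timelock rather than a counter.

@dataclass
class VaultCoin:
    amount: int
    state: str = "vaulted"          # vaulted -> unvaulting -> hot | cold
    blocks_waited: int = 0
    UNVAULT_DELAY: int = 144        # ~1 day before funds reach the hot wallet

    def start_unvault(self):
        # No signature needed: the vault output can only be spent by the
        # pre-committed unvault transaction.
        assert self.state == "vaulted"
        self.state = "unvaulting"

    def tick(self, blocks: int = 1):
        if self.state == "unvaulting":
            self.blocks_waited += blocks
            if self.blocks_waited >= self.UNVAULT_DELAY:
                self.state = "hot"   # funds arrive on the phone wallet

    def clawback(self):
        # If the phone is lost or the unvault looks suspicious, the cold keys
        # (kept offline, touched only in a disaster) sweep the coin instead.
        assert self.state == "unvaulting"
        self.state = "cold"

coin = VaultCoin(amount=1_000_000)
coin.start_unvault()
coin.tick(100)
coin.clawback()          # stopped before the delay expired
print(coin.state)        # "cold"
```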

MF: I am assuming you are going to have to focus on one or two use cases. To get consensus on this you need to convince people that there is at least one real use case that they are going to get value out of. The flip side is making sure that it is not adding anything to the protocol that we don’t want to add. The upsides and downsides.

JR: It has been really difficult to be a singular advocate for this because you have to make a lot of conflicting arguments that ultimately work together. If you just told one side people would say “How do they work together?”. An example of this is that Bob gives me a bit of grief: “If you had to design an opcode that was specifically the best thing for vaults, would it be CTV?” My opinion is yes. The next question is “It does all this other stuff too. Is that really accurate?” The people on the other side of the fence say “CTV, you really only have a single use case that you care about. Can you show that you have hundreds of use cases because we want to have things that are really flexible and general?” I am like “Yes it is very general.” What I am hoping to show with the language I am building is that it is really flexible and it is really good for these use cases. Hopefully that will be available relatively soon. The other argument that is difficult with this is talking about fees with scaling. I am telling everybody this is going to dramatically reduce fees for users but it is also going to increase mining revenue. How can they both be true? You are making better settlement layer use of Bitcoin so the transactions happening are going to be higher fee and you are going to have more users. It is called Jevons paradox if anyone is curious. As the system becomes more efficient usage goes up. You don’t end up saving.

MH: To build on what you just said Jeremy, to combine these different use cases. Could someone speak a bit more about having these batched withdrawals where you have CTV commit to the withdrawal transaction for users, which then directly opens non-interactive channels? Would that work?

JR: That works out of the box. This is why I am very adamant that replace-by-fee is bad: what I really want to see is a world where I go to an exchange with an address and they have no idea what that address is for, they just pay to it. They pay to it in one of these trees that has one txid or a known set of possible txids for the eventual payout. That lets me immediately start using a channel. What is nice about this is the integration between those two components is zero. I don’t need to tell the exchange that I am opening a Lightning channel. I just tell them “This is my address, pay this much Bitcoin to it.” There is no co-operation. You can imagine a world where you go to Coinbase and you give them a non-interactive channel address. It creates a channel for you. You give them a vault address, it creates a vault for you. You give them an annuity, it gives you an annuity. You can set it up so that there is zero…. If you paste in the address and you send the amount of funds you get the right outcome. I think there is definitely some tooling needed to support this. I mentioned earlier having an opcode that lets you check how much money was sent to an address would be really nice. That’s an example that would make this integration a little bit easier in case the exchange sends the wrong amount of money. Most exchanges I know send exact amounts. Some don’t but I think that is a relatively easy upgrade. It also could be a new address type that specifies how much money is supposed to go there so the smart contracting side integrates really easily. Other than that they don’t need to know what your underlying contract is. I think it opens up a world in Bitcoin that works a lot more seamlessly. This is another big scaling benefit. Right now if you want to open a Lightning channel and you have funds on Coinbase you are doing at least one or two intermediate transactions to get the funds into your Lightning wallet and opening a channel with somebody. In this case you get rid of all those intermediate transactions. If you are talking about how Bitcoin is going to scale with Lightning channels, that is without having to convince exchanges to adopt a lot of new infrastructure for opening channels for users.

BM: This is basically one of the major benefits of CTV over deleted keys. A year or so ago I started making a prototype by essentially making a pre-signed transaction and deleting a key, which is a mechanism to do a covenant, but one of the major problems with it is that I have to send from my wallet. I can’t give somebody an address which sends directly to a covenant. As Jeremy has described with CTV you can because you can put the script right there. It reduces the number of total transactions because, as he mentioned, if you want to open a Lightning channel first you have to send the funds to your Lightning wallet, then you have to open the Lightning channel. It is at least two transactions. The CTV route is more efficient and perhaps more interesting in that someone can send directly to your vault. You cannot do that generally with a deleted key type of covenant.

JR: What I would add is that for the vault use case it is less a scaling benefit than a big security benefit for the user. If you did have an exchange that you set up to understand vault protocols you could say “Only allow me to withdraw to vault contracts.” It would have to receive the vault description. You don’t have to have this intermediate wallet that you move funds onto that maybe gets hacked with all the money. I think it adds a lot of user security having that story.

KL: I didn’t really catch up on the San Francisco workshop so sorry if it is a question that has been asked a lot of times. How do you compare or differentiate against SIGHASH_NOINPUT in terms of vaults specifically?

JR: With SIGHASH_NOINPUT you can perfectly emulate CTV. It is not the obvious way of using SIGHASH_NOINPUT but it is one way you can use it. It emulates something that is very similar. There are a few drawbacks to that methodology. The first drawback is that it is less efficient to validate and your fees are going to be more expensive. The other drawback is that it is a little bit harder to compile which is annoying if you are talking about making contracts where you have log(n) possible resolutions but you have a very wide number of different possible cases. Your compiler is going to be way slower. This imposes a limitation if you are using these contracts inside of Lightning channels on how many resolutions you can have. It makes them less useful in a Layer 2 context, negligibly so I guess. Signatures are 100,000 times slower than hashes. It is a lot slower if you are doing a signature based one. You are adding more functionality. SIGHASH_NOINPUT / ANYPREVOUT is less likely to get into Bitcoin. This is what I talked about when I said you have to preserve these really critical invariants in Bitcoin. It is pretty easy to show CTV doesn’t break these but with the broader set of functionalities that you have around SIGHASH_NOINPUT you do have issues of burning keys permanently because you signed with them. We have all these design constraints around SIGHASH_NOINPUT that have come out around tagging keys and having different specifiers in order to prevent these weird use cases. CTV doesn’t really have the same issues because it is using a hash that is only used for CTV; it is not using keys that are used for general purposes, making keys into some sort of toxic waste. I think that is one of the other benefits in terms of security. There are a few other reasons why you would prefer CTV. Future flexibility, if you add OP_CAT later you don’t get new features with SIGHASH_NOINPUT I think. With CTV you get a bunch of new types of custom template contracts that you can write. It has a better upgrading path in the future as well. With CTV the hashes are versioned so if you add a new version to the hashes you can add a new sighash flag field basically. So there is more flexibility down the road than with SIGHASH_NOINPUT functionality. Strictly speaking I would be very happy if SIGHASH_NOINPUT, ANYPREVOUT or ANYSCRIPT would get merged because that would let me do it today but I think it is less likely.
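
To make the comparison concrete, here is a rough Python sketch of the kind of template hash a CTV-style covenant commits to. This is a simplified illustration, not the exact BIP 119 serialization; the field encoding below is assumed for readability.

```python
import hashlib
import struct

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def simplified_template_hash(version, locktime, sequences, outputs, input_index):
    """Commit to the 'template' of a spending transaction: version, locktime,
    input count/sequences, outputs, and the index of the input being checked.
    Whatever is not committed to (e.g. which outpoints are spent) stays
    flexible, which is what lets anyone pay into a pre-committed tree.
    sequences: list of ints; outputs: list of (amount_sats, scriptPubKey bytes)."""
    seq_hash = sha256(b"".join(struct.pack("<I", s) for s in sequences))
    out_hash = sha256(b"".join(
        struct.pack("<q", amount) + struct.pack("<B", len(spk)) + spk
        for amount, spk in outputs))
    data = (struct.pack("<i", version) +
            struct.pack("<I", locktime) +
            struct.pack("<I", len(sequences)) + seq_hash +
            struct.pack("<I", len(outputs)) + out_hash +
            struct.pack("<I", input_index))
    return sha256(data)

# Example: one output of 50,000 sats to a placeholder scriptPubKey.
print(simplified_template_hash(2, 0, [0xffffffff], [(50_000, b"\x00\x14" + b"\x11" * 20)], 0).hex())
```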

Vault designs

MF: We’ll move onto vaults. Obviously for those who don’t know, some vaults need CTV, other vault designs don’t. I think Kevin (Loaec) who will speak about his design next week has got around needing CTV. He did say in the interview with Aaron van Wirdum that ideally he would have used CTV if it was available. In terms of resources that we have on this Pastebin, one of the early ones is a post from Bob McElrath on reimagining cold storage with timelocks. Bob, do you want to talk through this post?

BM: I can give a brief description. This was published shortly after Emin Gün Sirer et al published their paper on covenants. It was around the time that the timelocks came out. At the time, before Jeremy had the idea of doing CTV, there was no covenant mechanism. There have been about five different covenant mechanisms that have been proposed, none of which are active today on Bitcoin. These are all covered in the talk that I gave. There are probably more. The only thing that was actually available when I started our project over a year ago was deleting keys. That is what that post was about. There is historical precedent for this. Back in the olden days, like in the old West, people would create a bank vault with a physical timelock on it. In other words the bank operator goes home at 6pm or whatever and locks the vault such that a robber can’t get into the vault at night whilst he is away. This is a physical example of a timelock. At the time timelocks had just come out and enabled some of these use cases. The picture for the vault use case is that there are two spending branches. One of which is timelocked, one of which is not. The timelocked branch is your normal operation. This is exactly opposite to the way Lightning works. You want to enforce a timelock on your funds such that you yourself can’t spend it until let’s say 24 hours passes. There is an unvault operation. You have to take your funds and you have to unvault them. In the case of CTV or something like that you are broadcasting the redemption transaction. Or if you have deleted keys you are broadcasting a pre-signed transaction. In the case of Revault they don’t use deleted keys but they do make a big multisig and they pre-sign this transaction. The whole point of that blog post was that once you’ve done that, the signed transaction or the CTV script is a vaulted object. I can then figure out what do I do with that vaulted object. How do I move it around? Where do I store it securely? When do I broadcast it? This signed transaction is of somewhat lower risk than bare private keys. As I mentioned, there are two spending paths. One is timelocked and the second is not timelocked but has a different set of keys in it. That is your emergency back out condition. The point of the whole vault construction is that if somebody gets into your wallet and they get the bare keys they will presumably get the ones that are timelocked. If you see them unvault one of your transactions you know a thief has gotten in. You can go get the second branch of keys out of emergency cold storage and use those to reclaim the funds before the thief can get the funds.
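
A minimal sketch of the two-branch unvault output described above, written as a miniscript-style policy string. The key names and the 144-block delay are placeholders for illustration, not anyone's actual design.

```python
# "Hot" path: the day-to-day key can only spend after a relative timelock,
# giving the owner time to react. Emergency path: a different key set, kept
# in deep cold storage, can spend immediately to claw the funds back.
UNVAULT_DELAY_BLOCKS = 144          # roughly 24 hours, assumed for illustration

hot_key = "hot_pubkey"              # placeholder identifiers, not real keys
emergency_key = "emergency_pubkey"

unvault_policy = (
    f"or(and(pk({hot_key}),older({UNVAULT_DELAY_BLOCKS})),"
    f"pk({emergency_key}))"
)
print(unvault_policy)
# or(and(pk(hot_pubkey),older(144)),pk(emergency_pubkey))
```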

BB: We should not be recommending that.

BM: I am describing what the blog post says. We will discuss reasons why we shouldn’t do that.

BB: You should pre-sign a push transaction instead of having to go to your cold storage to get keys out. That is the obvious answer. You are saying that you go to cold storage when there is a problem and you use the keys to fix the problem. But really you should have a pre-signed push transaction pushing to the cold storage keys.

BM: Yes. This starts to get into a lot of design as to how do you organize these transactions. What Bryan is discussing is what we call a push-to-recovery-wallet transaction. The thief has gotten in. I have to do something and I am going to push this to another wallet. Now I have three sets of keys. I have the spending keys that I want to use, I have my emergency back out keys and then if I have to use those emergency back out keys I have to have somewhere to send those funds that the thief wouldn’t have access to. These vault designs end up getting rather complicated rather fast. I am now talking about three different wallets, each of which in principle should be multisig. If I do 2-of-3 I am now talking about 3 devices. In addition, when this happens, when a thief gets in and tries to steal funds I want to push this transaction. Who does that and how? This implies a set of watchtowers similar to Lightning watchtowers that look for this event and are tasked with broadcasting a transaction which will send it to my super, super backup wallet.

BB: One idea that I will throw out is that in my email to the bitcoin-dev mailing list last year I pointed out that what you want to do is split up your coins into a bunch of UTXOs and slowly transfer them over to your destination wallet one at a time. If you see at the destination that something gets stolen then you stop broadcasting to that wallet and you send to cold storage instead. The other important rule is that, by for example enforcing a watchtower rule, you only allow one UTXO to be available in that hot wallet at a time. If the thief steals one UTXO and you’ve split it into 100, by definition they have only stolen one percent. Then you know and you stop sending to the thief. Bob calls it a policy recommendation.
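
A toy sketch of that sharding/watchtower policy, assuming hypothetical helper functions for broadcasting, theft detection and the cold-storage sweep; the point is the control flow, not any real API.

```python
import time

def broadcast_unvault(tx):        # placeholder: would broadcast a pre-signed shard tx
    print("broadcast", tx)

def detect_theft():               # placeholder: would watch the hot wallet on-chain
    return False

def push_to_cold_storage():       # placeholder: would broadcast the clawback transactions
    print("sweeping remaining shards to cold storage")

def drip_feed_vault(shard_txs, check_interval=600):
    """shard_txs: pre-prepared transactions, each moving one shard
    (e.g. 1% of the funds) from the vault to the hot wallet."""
    for tx in shard_txs:
        broadcast_unvault(tx)       # release exactly one shard at a time
        time.sleep(check_interval)  # leave only one shard "in flight"
        if detect_theft():          # hot wallet spent unexpectedly?
            push_to_cold_storage()  # claw back everything remaining
            return "theft detected"
    return "all shards delivered"
```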

MF: There are different designs here. I am trying to logically get it in my mind. Are there certain frameworks that we can hang the different designs on? We will get onto your mailing list post in a minute Bryan. Kevin has got a different design, Bob seemed to be talking about an earlier design. How do I structure this inside of my head in terms of the different options? Are they all going to be personalized for specific situations?

JR: The way I have been thinking about it is in terms of what the base layer components are. I think that in a vault essentially what you are looking at is an annuity. You are setting up a fixed contract that has some timing condition on every next payment. At the base layer this is what you are working with in most good vault designs; in some you would do something a little bit different. The value flows along that path. At any point you can cancel the annuity. If you cancel it the amount remaining goes back somewhere else. Or you can spend the amount that has been redeemed so far. Everything else exists as either key policies on who can take those pathways or as policies on if you observe certain conditions which actions you should take. Those exist at a higher order of logic. Does that help a little bit? That backbone exists in all of these proposals. It is just whether or not you are using multisigs or you are using push-to-recover or you are using signing paths for the cold storage cancel path.
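
As a rough illustration of that backbone, here is a minimal Python sketch of a vault as a chain of timelocked steps with a cancel path. All names and numbers are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VaultStep:
    amount_released: int                       # sats redeemable at this step
    delay_blocks: int                          # timelock before this step is spendable
    next_step: Optional["VaultStep"] = None    # remaining value continues here
    cancel_to: str = "cold_recovery_spk"       # where a cancel sends the rest

# A 3-step payout of 300k sats, 10k blocks apart, cancellable at any point.
annuity = VaultStep(100_000, 10_000,
          VaultStep(100_000, 10_000,
          VaultStep(100_000, 10_000)))
```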

Bitcoin upgrade path to enable better vault designs

KL: Another important thing is what is doable today and what is not. Of course the kind of vault that for example Revault describes is practical today. You don’t need to change Bitcoin to make it work. Of course we are very far from having a properly blockchain enforced covenant. We have to use some tricks around it with either deleting private keys or for us we use co-signing servers which are very far from being perfect. At least we can somewhat emulate the fact that the output is pre-defined to follow certain rules. The early papers on covenants usually required a new opcode and that is a big problem. Who is going to work on creating an implementation of that if you don’t know if the opcode is going to be added to Bitcoin? That is what Jeremy is facing right now. He has been working hard for a few years on his opcode but you still have this uncertainty. When is it going to be added? Is it going to be 6 months? Is it going to be 2 years? Is it going to be 5? It is really hard when you are trying to push an idea like vaults especially for businesses when you are required to work a lot on the implementation itself because it is a security product, if you rely on assumptions like is this specific opcode going to be added or are there going to be some major changes to my BIP that are going to break my implementation? So for me there is also this separation between what can be done today practically even if it is not perfect versus what would be the perfect way of doing it where everything is enforced by Bitcoin itself.

JR: I think that is a super useful distinction to be drawing because it is looking at the trust models. I do think at the same time, and Bryan has publicly confirmed this idea, that CTV and multisig and pre-signed and key deletion are all basically perfectly interoperable. You could have a system with minimal changes between A and B where you are using one or the other security models. Feel free to disagree, but if CTV were available this conversation wouldn’t be happening. We would all agree that CTV was the easiest thing to work with for this goal. There may be some questions around fees but I think this is the design space. The question between Revault and “The Team” is really around whether you prefer pre-signed deleted keys or a multisig server. That is ultimately a user preference. That should be a check box. If you have to choose between pre-signed and a multisig server which one do you prefer?

BB: Another interesting way to distinguish that even further is that for secure key deletion, that works really well when a user is the primary beneficiary. It works less well when it is a group of mutually distrusting parties.

BM: You run into serious problems for instance with audits. How do I prove that the vault exists? You basically can’t. I think everyone on the call would agree that CTV is the best solution here. I know Jeremy has been very frustrated and has been working on this for a long time. Everyone is basically hedging their bets. As Kevin just said “Maybe we will get this BIP, maybe we won’t. Maybe we will go in a different direction because we are not sure.” It would be terribly fruitful if everybody on this call could get behind these ideas. There has been very little response to Jeremy’s work. There are no responses to Bryan’s post on the mailing list. We have all got to get together, do we want this or not?

BB: So about that, I know this has been an issue for a lot of us including Jeremy. Getting public feedback and review and interest. I would say from my email there has been a lot of private feedback and conversations like this. They just don’t show up on the mailing list because none of us can be bothered to write the same things twice which is a bit of an issue. This is a pet peeve of mine. It would be wonderful if there was only a single place where you had to check to see the latest update on things.

BM: It creates a perception that no one cares because there are no responses. I think this is definitely not the case. We need to rally round something, one way or another.

MF: Maybe it is just a site like Jeremy’s, utxos.org but for vaults. Then there is at least a centralized place where you keep going for updates. The big thing hanging over all of this conversation is that I don’t think there are too many people who want to discuss future soft forks until Taproot is done. People are so busy on getting Schnorr and Taproot in and nothing else is going to be getting into that soft fork. That is still a long way off. That is the big question mark, whether people have the time.

JR: I think this is a pretty big mistake. Maybe this is something as a community we can work on. With this group of people we could really make an impact on this. Taproot is having a lot of changes still. It is still not a very stable proposal. Taproot is great and I really want to see it. But that is the reality. There are changes that happened a week or two ago for the signature algorithm that are being proposed. There are changes to the point selection, if it is even or square. The horizon is perpetually looking a year out on it being a locked down document. I know that CTV has not had the level of review that Taproot has had but it is substantially simpler. I don’t think it requires any changes at this point. I do think we could have a concerted push towards getting it reviewed and slotted for roll out; there is no formal schedule for when changes have to be delivered in Bitcoin. They can be soft forked out when they are ready. I think the question is if we have a room of five people that are all saying that our lives would be made easier if we had CTV then let’s get it done.

MF: The argument against that though is it would be rushed to try to get it in before Taproot. All efforts towards Taproot and then future soft forks once Taproot is in.

JR: Why though?

BB: One conceptualization of Bitcoin Core review capacity is that it is a single mind’s eye. We can only very slowly move together and carefully examine things all at once.

JR: I think there is value to that. We only have so much focused capacity. I would make the suggestion that if that is the case then we did Taproot in the completely wrong way. We really should have done Schnorr and MAST as two separate things that we can checkmark progress along the way rather than all at once that is going to take years to roll out. There is other stuff that is important work to get done. This is my question for vault implementers generally. I think vaults is one of the most compelling new use cases for Bitcoin to dramatically improve user security. My question is does Taproot or CTV do more for the practicality of you being able to deliver vaults to your users?

MF: In the small case of vaults maybe. Obviously there are lots of other use cases.

JR: With CTV there are many other use cases too. That’s a general question more rooted in the CTV side. Here is the question just focused on vaults because we are in the vault section. What is the wishlist of things you need to have to make vaults better in Bitcoin and how do we deliver as the Bitcoin community? How can we deliver on these features to make this viable? If it is not CTV I don’t care. If we need something else what are the actual things that we need to be focused on in order to make vaults really work for people?

KL: For us for example Schnorr and Taproot would be a really good improvement already. Maybe if you have a proper covenant you can prevent a theft. Although at some point you need to be able to move your funds to wherever you want. Having a vault is more of a deterrent against an attack than a way to completely be in your own bubble where nobody can move funds outside. At some point you will need to be able to move funds outside. For us Schnorr and Taproot is a really important thing because it would completely hide the fact that we might be using vaults. It also hides some of the defense mechanism especially around the emergency transactions that Bob also uses. One of the things that I wanted to cover is that multisig today is cool but what we can do with Schnorr and Taproot is much more powerful. That would be extremely useful for vaults in my opinion.

BM: When you use Taproot your timelock script has to be revealed. A timelock is an opcode and you have to reveal it. One of the major benefits of Schnorr is that you can just sign a multisig. This is great for protocols like Lightning. Because the timelock in vaults works in the opposite way all of your spends have to reveal the Tapscript that contains the timelock. You lose a lot of privacy doing that.

JR: I don’t think that that is completely accurate. If you are signing and you are using pre-signed or multisig you sign the nLockTime and nSequence field without necessarily requiring a CheckLockTimeVerify enforcement. Does that make sense?

BM: Yes. There are a couple of ways to do timelocks there.

JR: That’s the thing with CTV that is interesting. You have to commit to all the timelock fields so you don’t need CheckLockTimeVerify or CheckSequenceVerify. It is just automatically committed to. It is the same thing if you are doing pre-signed. You don’t need those opcodes unless you want to restrict that a certain key can’t be used until a certain time.
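
A minimal sketch of that point: when the only spending path is a pre-signed or CTV-committed transaction, the timelock lives in the transaction's own nLockTime/nSequence fields rather than in a CLTV/CSV opcode in the script. The structure below is illustrative, not a real transaction format.

```python
from dataclasses import dataclass, field

@dataclass
class TxTemplate:
    n_version: int = 2
    n_lock_time: int = 0                 # absolute height/time, plays the role of CLTV
    input_sequences: list = field(default_factory=list)   # relative locks, plays the role of CSV
    outputs: list = field(default_factory=list)

# Because this exact transaction is the only one signed (or committed to by
# the template hash), setting these fields is enough to enforce the delay.
unvault_spend = TxTemplate(
    n_lock_time=0,
    input_sequences=[144],
    outputs=[("hot_wallet_spk", 1_000_000)],   # placeholder scriptPubKey and amount
)
```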

MH: Taproot is awesome, CTV is awesome. Why not both? Could we get CTV as part of Tapscript so the new opcodes that are being introduced with Taproot, could CTV be one of them?

JR: I’m not aware of any new opcodes currently being proposed with Tapscript. There might be some with slightly different semantics like the signature stuff that I’m not sure of. It is not like we are adding new things there. In my original proposal for CHECKOUTPUTSHASHVERIFY last year I was like “It looks like Taproot is happening soon so let me propose it as using these new extensions.” Then months went by and Taproot didn’t seem to be making strong headway so I said “Let me do this as a normal OP_NOP upgrade because it is really not a dependent feature.” I think that is better for Bitcoin because saying that you will only get CTV if you accept Taproot is worse for the network being able to independently consider changes. That is one reason to not layer them. The larger question you are asking is why not both? Yes, let’s get both in. I think it is a question of on our engineering timelines what is feasible to get done? One of the reasons why I think we want to get CTV very soon is it does help with congestion. It will take a year or two for people to employ those congestion control mechanisms and we are already seeing a major increase in fees right now. We have to be really focused on improving our fee situation. Taproot helps a little bit with fees but the reality is that most users are using very simple keys. Hopefully we can change that, hopefully we will add more user security. Right now the majority of transactions are not materially going to be made more efficient with Schnorr and Taproot. The majority are simple single signatures. Maybe validation will be faster but we are not increasing capacity or decreasing fees. I think we need to be doing things in that category. That’s why I think there is some urgency to do this. I think this is doable in a month. I’m not trying to advocate that timeline but I think that is the amount of review that this idea would take for people to seriously get comfortable with it. Taproot, I have reviewed it many times and I am still not completely comfortable with it. It is inherently going to take a long time. To Bryan’s point on whether we should have the eye of review on a single topic, we need as a community to only put our eye of review on things that we can more quickly process. If it is things that are very slow to process we are not going to be nimble enough as a project to actually deal with issues that come up, if we are stuck on things that are like a three year roadmap.

MF: I asked Christian Decker about SIGHASH_NOINPUT. He was very much of the opinion that we don’t want to change the proposal now. Any thinking of adding new stuff in is potentially going to open up Pandora’s box or a can of worms where it starts all this discussion up and disagreement that we want to avoid.

AG: It was interesting to hear that discussion of motivations. I am hearing both security improvements and you are really focused on this congestion control as to practical implications about trying to get this out fairly quickly. I’m curious though. Let’s say we got Taproot quickly, it would take a long time for it to have any impact on wallets and it probably wouldn’t address fees immediately in any realistic sense. I can certainly see those arguments. I am a bit worried about what this looks like. Suppose CTV was deployed today what are wallets going to have to do to make best use of this congestion control feature? You might argue nothing but I have a feeling in practice it is going to be a lot of infrastructure work.

BM: It is a lot. They have to understand the tree structure.

JR: That is only marginally accurate. It depends on what wallet you are asking about. There are two classes of wallet that we’ll look at. Let’s look at infrastructural wallets like Coinbase or Kraken and then user wallets. They both have different requirements. If you look at an infrastructural wallet they have a really hard time changing what their internal keys look like. BitMEX for example still uses uncompressed public keys. Why? They made a really great custody engine and is it worth it for them to change the code there? It has worked, they haven’t had a hack of it that I recall. If they were to change that then there is risk and risk costs a lot of money. For them maybe one day they will get SegWit. But they are probably not going to adopt even these better multisig things, for a decade. For them changing their own internal key type is really hard. Changing their output type is actually a little bit less challenging because they are just changing what address they are paying into. They have been able to do things like batching sooner. Batching is much more complicated than SegWit, keep in mind. Batching has a lot of very weird edge cases that you have to deal with in order for batching to not result in loss of funds. But they have been able to add things like batching. I think that for CTV all it is doing, at the layer where they decide which transaction they are going to spend to, is a single new address they are going to be paying to cover their liabilities. On the receiving end for CTV users who have existing wallets, those wallets just need to be able to understand, in order for this to work today, an unconfirmed transaction. I think that most wallets already understand an unconfirmed transaction. So it should work reasonably ok. At the exchange, infrastructural wallet layer they can also guarantee some of the execution of those expansions. I think Optech has a good write up of this. If you are willing to wait up to a day of blocks to get final confirmation you can save 99 percent on fees. They can take that advantage to get the confirmation at the time of request. They as the infrastructural wallet make the full redemption whenever fees are low enough later that day. I think you will see a pretty easy benefit to migrate to without too many changes to user wallets that can understand unconfirmed transactions; the processing to get fully confirmed can be handled by the exchange. I think it is easy to deploy in the relatively near term. But the more sophisticated use cases are absolutely going to take a longer amount of time.
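
A toy sketch of the congestion-control idea: commit to a whole tree of payouts with a single on-chain output now and expand it later when fees are low. The hashing and serialization here are simplified stand-ins; only the tree shape matters for the illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_payout_tree(payouts):
    """payouts: list of (address, amount). Each internal node is a hash
    committing to its children, so the root can be paid on-chain immediately
    and the payouts revealed/expanded step by step later."""
    nodes = [h(f"{addr}:{amt}".encode()) for addr, amt in payouts]
    levels = [nodes]
    while len(nodes) > 1:
        nodes = [h(nodes[i] + (nodes[i + 1] if i + 1 < len(nodes) else b""))
                 for i in range(0, len(nodes), 2)]
        levels.append(nodes)
    return levels[-1][0], levels   # root commitment + levels for later expansion

root, levels = build_payout_tree([("user1", 10_000), ("user2", 25_000),
                                  ("user3", 5_000), ("user4", 40_000)])
print(root.hex())
```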

AG: For myself I find it a little bit unclear. My main feeling about it is that it is going to be a struggle to convince the hoi polloi of exactly what is going on here. As you say wallets already understand unconfirmed. If that is all we are talking about then people will just say “Why aren’t you sending me my transactions quickly enough?” Most ordinary users just think unconfirmed is nothing and that is why they are generally willing to spend more in fees than they should be willing to spend. They don’t really get it. I don’t think they are going to get this either.

JR: I think it depends. Ultimately with this the only change that the wallets would need to have is to tag things that are observable as a CTV tree as being confirmed and treat it as confirmed. That is very minimal. I have made that change for Bitcoin Core’s wallet. It is like a 30 minute change. It is not that hard. It is just a question of if they have updated software or not and whether it shows up being fully confirmed or unconfirmed. It is hard to get wallets to upgrade but it is not the largest change around. There is this weird curve where the wallets that are worse just always spend from unconfirmed so it is not a problem for them. The wallets that are better separate them out but also people who are using those wallets are more likely to maybe receive an upgrade. I don’t think the roll out would be awful for this type of stuff. It would be we go to the exchange and we ask them “Why isn’t this confirmed?” and they say “No it is. Upgrade your wallet and you will see it.” For users who aren’t sophisticated that is a sufficient story.

BM: It does require an additional communication channel between the receiver’s wallet and the sender’s wallet? The sender has to send the tree?

JR: Just the mempool.

BM: You have the whole tree in the mempool?

JR: Yes. Congestion is really for block space. Unless you have a privacy reason for not showing what the total flow is you can always just broadcast a transaction and then it will live at the bottom of the mempool. That is how people learn of transactions right now.

BM: Another interesting thing to think about here is whether wallets in the past have upgraded to new features in general. As mentioned the vast majority of transactions out there are pay-to-pubkey-hash (P2PKH). Why is that? Most wallets don’t even use pay-to-script-hash (P2SH) or multisig. Why? The answer is because everybody who is making wallets is also making s***coin wallets. In order to have a uniform experience and uniform key management for let’s say Ethereum and Bitcoin, what they’ve done is go toward using a single key for everything. And adding things on the back end like multiparty ECDSA so that it is actually multisig on the back end. Unfortunately I don’t think this dynamic is going to go away anytime soon. In my experience very few vendors have implemented some of the more advanced features on Bitcoin unfortunately.

JR: I think that is a great point. One of the things I am worried about for Taproot for example is that the actual roll out in wallets is going to be ridiculously slow. It is going to be a new signing thing and wallets already exist with a single seed format. They are not going to want to rederive a new thing. I think it is going to take a very long time for that adoption to pick up for user wallets. That is one thing that is nice with the CTV rollout. All they have to do is what they are already doing. Most of these wallets already show an unconfirmed balance, especially the s***coin ones. They show unconfirmed balance because they are zero confirmation wallets.

BM: The benefit of using the tree is that you don’t have to put it all in the mempool. If I am going to put everything into the mempool anyway I might as well have not done the tree?

JR: That is not quite true. You can always broadcast anything that will show up in the mempool somewhere but what is important to keep decongested is the top of the mempool. The actual mempool itself, it is fine to put these things in and then they get propagated around. If the mempool backlog grows they get evicted. That is fine, that is an ok outcome. You don’t want to be in the situation where you have so much stuff in the mempool that high value transactions that are completely unconfirmed get kicked out of the mempool; because their inputs are still unspent they could be double spent, which is much more problematic for users. When you are a user and something goes in the mempool, you see it, you observe it. If it applies to you, you store it in your wallet even if it goes in and out of the mempool. It just has to go into some mempool. Most of these wallets are not using a mempool on their own wallet. They are using a mempool on the server of whoever is providing the wallet. Those can be configured to be watching for the users’ keys or whatever. Or you are filtering and they can be configured to be storing…. The mempool can be terabytes big. It doesn’t need to be a small thing. It only needs to be small if you are a miner. If it is too big and you are trying to mine, a big mempool is more problematic.

Other vault related ideas

MF: Before we go onto Bryan’s mailing list posts there are a couple of people’s work that I added to that Pastebin. One was Peter Todd’s work on single use seals. Another was Christopher Allen’s work on smart custody. Are any of these of interest to people? Any thoughts on these guys’ work?

AG: I just want to mention how incredibly easy to understand single use seals are. It was an extremely sarcastic comment.

JR: I have always found them and their description completely inscrutable. If somebody feels like they can take a shot at that.

BB: Apparently RGB is using this in their code.

AG: We just read some C++ code and it would be easier to understand than the mailing list post.

BB: Code is the universal language.

BM: I think I can describe it pretty simply but I don’t know how it is related to vaults. A single use seal is something that you can use once. If you have ever seen a tag on your shipping crate.

BB: We understand what it is. It is just Peter Todd’s description of it and how it applies to Bitcoin.

BM: Peter Todd’s description is inscrutable, I agree.

JR: Is it just spending a UTXO? That is all we are talking about?

BM: Spending a UTXO is an example of a single use seal. You can only spend a UTXO once.

AG: My sarcasm is one thing. I do think there is something interesting there. His idea of client side filtering but it is pretty abstract. It is perhaps not a topic for today, I’m not sure.

BM: I don’t know how it relates to the vault topic.

MF: Apologies for that if I am introducing noise. The smart custody stuff that Christopher Allen worked on is relevant.

BB: I was co-author with Christopher Allen on that project for some of the Smart Custody book along with Shannon Appelcline and a few others. It was the idea of let’s put together a worksheet on how to do custody for individuals, how to safely store your Bitcoin using hardware wallets. The sort of planning you might go through to be very thorough and make sure you have checklists and understand the sorts of problems you are trying to defend against. The plan was and I think it is still the plan to do a second version of this smartcustody.com work for multisig which was not covered in the original booklet.

JR: I would love that. I have some code that I can donate for that. Generating on your computer a codex which is a shuffled and then zipped BIP 39 wordlist. Then you take the wordlist and you use it as a cipher to write your seed in a different set of words. You give one party your cipher and you give the other party the encrypted word list. What is the point of that? You have a seed that is on paper somewhere and now you want to encrypt it and give a copy to two of your friends so that you have a recovery option. I feel like having little tools to allow people to do that kind of stuff would be pretty nice. Being able to generate completely offline backup keys and shard out keys to people.
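
A minimal sketch of the backup cipher being described, using a stand-in for the BIP 39 wordlist; in practice you would use the real 2048-word English list and real seed words.

```python
import random

# Stand-in for the real 2048-word BIP 39 English wordlist (assumption:
# in practice you would load the actual list from a trusted source).
wordlist = [f"word{i:04d}" for i in range(2048)]

rng = random.SystemRandom()
shuffled = wordlist[:]
rng.shuffle(shuffled)                       # this shuffled mapping *is* the "codex"

cipher = dict(zip(wordlist, shuffled))      # give this codex to friend A
seed_words = ["word0000", "word0042", "word2047"]   # example only, not a real seed
encrypted_seed = [cipher[w] for w in seed_words]    # give this to friend B

# Recovery requires both pieces: invert the cipher and map back.
inverse = {v: k for k, v in cipher.items()}
assert [inverse[w] for w in encrypted_seed] == seed_words
```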

BB: Definitely send that to Christopher Allen. He also has an air gapped signing Bitcoin wallet based off of a stripped down iPod touch with a camera and a screen for QR codes. That is on Blockchain Commons GitHub.

MF: Let’s go onto to your mailing list posts Bryan. There are two. Bitcoin vaults with anti-theft recovery/clawback mechanisms and On-chain vaults prototype.

BB: There is good news actually. There were actually three. There were two on the first day. While the first one was somewhat interesting it is actually wrong and you should focus on the second one that occurred on that same day which was the one that quickly said “Aaron van Wirdum pointed out that this is insecure and the adversary can just wait for you to broadcast an unlocking transaction and then steal your funds.” I was like “Yes that’s true.” The solution is the sharding which I talked about earlier today. Basically the idea is that if someone is going to steal your money you want them to steal less than 100 percent of your money. You can achieve something like that with vaults.

KL: Something else really interesting in it is Bryan also takes the path of deterring an attack. I think Bryan in the last step you always burn the funds although maybe it is not always?

BB: It is only if you are in an adversarial situation but yes.

KL: I think it is really cool because the whole point of this type of approach is really not to be hard on the user but to be hard to steal from. To deter the attack in the first place.

BB: My vault implementation is not the same as the version being implemented at Fidelity. It is a totally different implementation. There is some similarity. I admit this is very confusing. There are like five different vault implementations flying around at the moment. Jacob on the call here, he has his own little implementation which is number 5. Jeremy has his which is 6. There is mine and the one at Fidelity based off of secure key deletion and also some hardware wallet prototypes. Then there’s Kevin’s Revault. I’m sure I’m forgetting one at this point.

MF: There is a lot going on. I’m assuming some of them are going to be very specific to a certain use case. I don’t know what is going on at Fidelity. Perhaps they have special requirements that other custodians wouldn’t have.

SH: I can speak to what we’re doing at Fidelity. This is what I am working on. As you may know in January 2019 we released FDAS which is Fidelity Digital Asset Services. It is custodianship for institutional clients. The deleted key vault that we are working on is open sourced. We do have a work in progress implementation on our public facing GitHub page. The interesting part is the Vault-mbed repo which is currently being refactored. We are not looking at extra functionality at the moment. I’d be happy to answer any questions that someone may have.

MF: The next resource is Bryan’s Python vaults repo which is one of those particular implementations. It is in Python so I am assuming this is a toy, proof of concept type thing.

BB: Yes. This is definitely proof of concept. Don’t use it in production. The purpose was to demonstrate that this could all work. To get some sample transactions and use them against Bitcoin regtest mode. It works and definitely open to feedback and review about it. One of the interesting things in there is that there is both a version which is default that uses secure key deletion or a pre-signed transaction where you delete the key. But also an implementation using BIP 119 (OP_CHECKTEMPLATEVERIFY) as well. An interesting note, Jeremy has been polite enough to not bring it up, Jeremy’s version in his branch of Core is substantially more concise and I am a little confused about that. I’m not sure why mine isn’t as concise as yours.

JR: I think I benefit a lot from having implemented it in Core. You have access to all the different bits and bobs of Core functionality, wallet signing and stuff like that. It is an interesting point. Let me find a link so I can send out this implementation to people. I was trying to think about how I write this as a template meta program so that I have all of these recursion things handled in terms of “Here is a class that attaches to another class and there are subclasses.” I think that is a nice model. I also spent some time trying to make a template meta programming language for C++ that allows you to write all different types of smart contracts. I really hit a wall with that. What I have built now, setting up for the big punchline, is this smart contracting language that I have been trying to hype a little bit. It is called Sapio. It isn’t released yet but hopefully soon I will get it out there. If you think the implementation I have in Core is concise wait until you see this one. This one is like 20 lines of code, the whole thing. It is 20 lines of code and then thousands of lines of compiler. There is a trade-off there. I am hoping the compiler that I am working on will be general purpose and I think this is something that I’d love to follow up later with everyone who is working on vaults because I think everybody’s vaults can probably be expressed as programs in this language. You will probably save a lot of lines of code. Maybe we can put communal effort on the things that are same across all implementations. Things like how do you add signatures? How do you template those out? How do you write the PSBT finalizers? All that kind of stuff is general logic.

Q - Can you describe this language briefly? How does it compare to say Ivy or Miniscript?

JR: It is a CTV based language. You can plug out the CTV with emulated single party pre-signed or you can have a multisignature thing. That is the security model that you are willing to do and whether you have the feature available or not. Ivy and Miniscript are key description languages. They operate at the level in a metaphor like “What is my house key? What is my car key? What is my bus pass? What is my office key?” This language operates at the level of commutes. You say “I leave my house, I lock my door, I go to my car, I unlock my car, I start my car, I drive to my office. Then I unlock my office.” Or I walk to the train station, I take the train, I walk to my office and then I unlock the office. It is describing the value flow not just a single instance of an unlocking. Ivy and Miniscript describe single instances of unlocking. This language is a Turing complete meta programming language that emits lists of Bitcoin transactions rather than emitting a script. That’s the succinct version.

Revault design

MF: Just to bridge to next week’s presentation with Kevin and Antoine, I thought we could discuss Revault. Aaron tried to do this in the interview with Kevin and Antoine which was tease out the differences between Revault and some of the other vault designs such as Bryan’s and Fidelity’s.

KL: With Revault the main thing is that when we started it we started with a threat model and a situation which is different. It was a multiparty situation where we had a set of participants being the stakeholders and a subset of participants for this specific client at least that were doing the day-to-day movement of funds. To explain more clearly, they are a hedge fund; there are different participants in the hedge fund but only a subset of them are traders. The threat model includes internal threats like the two traders trying to steal the money of the fund. This is something quite new or not well covered in other proposals until now. Hopefully we will see more soon. There is the external threat model that a multisig covers. Then you have the internal threat model. The two signatures are not enough for the two traders because you want to include the others as some kind of reviewers of transactions or whatever you want to call them. Of course there is the main threat for most companies and people today in Bitcoin which is the physical threat. Somebody coming to you, the five dollar wrench attack. We are quite lucky to not have too many of those but when you look at the security at, for example, exchanges today you would be really surprised to see 500 million or even a billion USD of Bitcoin being secured behind a single key. If you find the right people to threaten then you might be able to steal the funds which is super scary. We are trying to address that kind of stuff. Another difference is that because it is for business operations we are trying to reduce the hassle of using it. Most security things are defeated if they are complex to use; people are going to bypass them. This is also very problematic so you want the traders to be able to do their job. If every time they are doing a transaction they have to ask their boss if it is ok to move the funds and check the destination address, amounts etc then it is never going to be used. They are going to bypass it somehow. We are trying to move the validation from being some kind of verification after the transaction is created to the opposite. When the funds enter, when somebody deposits money to this hedge fund or whoever uses Revault, the funds are locked. The funds are locked by default. Then you would need all the stakeholders, in my example it is four, to pre-sign the set of transactions to be able to move them. By default they are locked and then you can enable the spending by having everybody sign the transactions that are able to move them and revault them. In case of an attack you need to have a pre-signed transaction to put them back in the vault. After that the spenders, the two traders, are able to craft a transaction. That will be timelocked, you will have a delay before it is mined. Different conditions can be triggered to revault the funds if something is wrong. This could be enforced by the stakeholders, that could be third parties like watchtowers and other things like that. Of course it is more complex than that because we are really trying to emulate something like CTV with the tools that we have today. It is not really simple. Personally I am not fond of deleting private keys. Secure key erasure is not something I really like. Personally I am trying to avoid this in the design. At the end of the day it creates other problems. We are having to use co-signing servers today which is not cool. I don’t know how we will implement that properly or if we can remove it.
Antoine who is not on the call today is actively working on trying to remove this co-signing server but that might mean that we are moving towards secure key deletion as well. I think it is a trade-off and you have to see what risk you want to take. Secure key deletion creates other burdens around backups because you need to have to backup every pre-signed transaction. I hope it is a good primer. I don’t know if you have any questions. It would take a long time to dig into it I think.
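
A high-level, illustrative sketch of the flow Kevin describes, written as miniscript-style policy strings. This is not Revault's actual script; in particular the cancel path is a pre-signed transaction in Revault, shown here as a script branch only to make the two paths visible.

```python
# Placeholder key names; in a real deployment these would be per-vault derived keys.
STAKEHOLDERS = ["S1", "S2", "S3", "S4"]
TRADERS = ["T1", "T2"]
COSIGNER = "cosigning_server"
CSV_DELAY = 6   # blocks before an unvaulted spend can confirm, illustrative only

# Deposits land here: locked behind all stakeholders by default.
vault_policy = "thresh(4," + ",".join(f"pk({k})" for k in STAKEHOLDERS) + ")"

# The unvault output: traders plus the co-signing server can spend only after
# the delay; the stakeholders' branch stands in for the pre-signed cancel
# transaction that can reclaim the funds during that delay.
spend_branch = ("and(thresh(3," + ",".join(f"pk({k})" for k in TRADERS)
                + f",pk({COSIGNER})),older({CSV_DELAY}))")
unvault_policy = f"or({spend_branch},{vault_policy})"
print(unvault_policy)
```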

MF: The few main differences that you talked about in that podcast with Aaron van Wirdum are multiparty architecture rather than single party, and pre-signing the derivation tree depending on the amount. In Bryan’s implementation you need to know the amount in advance.

KL: In the original architecture from Bryan last year the funds are not in the vault before you know exactly how many Bitcoin you want to secure. You have to pre-sign all your tree and then you move the funds in. From that time the funds are protected. Of course that is not usable in normal business operation. At least you would have to consider a step before that where your deposit address is not part of the vault system. It is doable, you can do a system like that, it is not a problem. But it is not part of the vault before pre-signing the transaction because of the deleted private keys and things like that. We don’t have this problem. Our funds are secure as soon as they enter. Different trade-offs.

Mempool transaction pinning problems

MF: Before we wrap I do want to touch on Antoine’s mailing list post on mempool transaction pinning problems. Is this a weakness of your design Kevin? Does Bob have any thoughts on this?

BM: I think Jeremy is probably the best to respond to that as he is actively working on this.

JR: All I can say is we are all f***ed. It would be great if there was a story where one of these designs is really good in the mempool. It turns out that the mempool is really messy. We need to employ 3-5 people who are just working on making the mempool work. There is not the engineering budget for that. The mempool needs a complete rearchitecting and it doesn’t work in the way that anybody expects it to work. You think that the mempool is supposed to be doing one layer of functionality but the reality is the mempool touches everything. The mempool is there in validation, it is there in transaction relay and propagation and it is there in mining. It needs to function in all those different contexts. It needs to function quickly and performantly. What you end up getting is situations where you get pinned. What pinning means is the mempool has all these denial of service protections built into it so that it won’t look at or consider transactions. Because the mempool is part of consensus it is not what it sounds like. It is not a dumb list of memory that just stores things, it is actually a consensus critical data structure that has to store transactions that could be in a valid block. It has to be able to produce a list of transactions that could go into a valid block. Because of that you really tightly guard how complicated the chains of transactions that can get into the mempool are. People have issues where they will do something that is slightly outside of those bounds and they are relying on the mempool being able to accept a transaction for a Lightning channel let’s say. Because the mempool is quite big you have things that are low fee at the bottom that can preclude a new high fee transaction coming in that prevents you from redeeming a Lightning channel which means you are going to lose money. Especially for Lightning, especially for cross-chain atomic swaps. What is annoying about this is because of the way UTXOs are structured this can be somebody who is completely unrelated to you spending from some change address of a change address of some other long chain. With any of these designs, if you have pending transactions you are going to have a really hard time with this stuff. I would like to say we have a plan for making the situation completely addressed. Bcash did deal with this, they got rid of child-pays-for-parent and they have unlimited block size. It turns out that if you do those two things you stop having a lot of these denial of service issues. I don’t think that is viable for Bitcoin at this point but it is not even the worst option among options that could possibly be a thing. I think we just need to invest a lot more engineering effort to see what we can elevate the mempool into. It is the type of issue that people look for carve outs, little things that can make their niche application work. Lightning did this one time with the Lightning carve out that prevents pinning in a certain use case. Six months later they found out that it doesn’t solve all the problems. I don’t think it is going to be a carve out thing to fix some of these pinning issues. It is going to be that we have completely rearchitected the mempool to be able to handle a much broader set of use cases and applications. I am a little bit negative on the prospects of the mempool working really well. I think that is the reality. I am working on it, I am not just complaining. I have like 30 outstanding PRs worth of work but nobody has reviewed the second PR for two months.
It is not going to happen if people aren’t putting the engineering resource on it. That’s the reality.
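
A toy numerical sketch of one pinning scenario described above: an unconfirmed parent already carries a large, low-feerate descendant package, so a new high-feerate child (for example a Lightning CPFP) cannot attach. The limits used here are illustrative approximations of default relay policy, not a precise model of Bitcoin Core's rules.

```python
MAX_DESCENDANT_COUNT = 25
MAX_DESCENDANT_VSIZE = 101_000   # vbytes

# Junk descendants someone else attached to the shared unconfirmed parent.
existing_descendants = [{"vsize": 4_000, "feerate": 1.1}] * 25

def can_attach_cpfp(descendants, my_vsize=200):
    """Return True if one more child of the parent would be accepted."""
    too_many = len(descendants) + 1 > MAX_DESCENDANT_COUNT
    too_big = sum(d["vsize"] for d in descendants) + my_vsize > MAX_DESCENDANT_VSIZE
    return not (too_many or too_big)

print(can_attach_cpfp(existing_descendants))   # False: your fee bump is "pinned" out
```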

The role of watchtowers

Q - What are your thoughts on the watchtower requirement here? I see a path of either bundling watchtowers for many services versus separate towers for separate services. Which way of handling do you think is best long term?

BB: I will probably hand this over to Bob or Jeremy about watchtowers. It is a huge problem. The prototype I put together did not include a watchtower even though it is absolutely necessary. It is really interesting. One comment I made to Bob is that vaults have revealed things that we should be doing with normal Bitcoin wallets that we just don’t do. Everyone should be watching their coins onchain at all times but most people don’t do that. In vaults it becomes absolutely necessary but is that a property of vaults or is that actually a normal everyday property of how to use Bitcoin that we have mostly been ignoring. I don’t know.

BM: There are many uses of watchtowers. As time goes on we are going to see more. Another use for watchtowers that has come up recently is the statechain discussion. Tom Trevethan posted an ECDSA based statechain that I think is pretty interesting. It also has the requirement for watchtowers. It is a method to transfer UTXOs. What you want to know is did a previous holder of the UTXO broadcast his redemption transaction and how can you deal with that? I think there is a path here to combine all of these ideas but there is so much uncertainty around it we currently wouldn’t know how to do it. There are multiple state update mechanisms in Lightning and that is still in flux. Once you start to add in vaults and then statechains with different ways to update their state there is going to be a wide variety of watchtower needs. Then you get to things like now I want to pay a watchtower. Is the watchtower a service I pay for? Can it be decentralized? Can I open a Lightning channel and pay a little bit over time to make sure this guy is still watching from his watchtower? How do I get guarantees that he is still watching my transactions for me? There is a lot of design space there which is largely unexplored. It is a terribly interesting thing to do if anybody is interested.

JR: I think part of the issue is that we are trying to solve too many problems at once. The reality is we don’t even have a good watchtower that I am operating myself and I fully trust. That should be the first step. We don’t even have the code to run your own server for these things. That has to be where you start. I agree longer term outsourcing makes sense but for sophisticated contracts we need to have at least something that does this functionality that you can run yourself. Then we can figure out these higher order constraints. I think we are putting the cart before the horse on completely functional watchtowers that are bonded. That stuff can come later.

BM: I think the first order thing to solve is how to do the state update mechanism. We are still not decided on whether we are going to get eltoo and SIGHASH_NOINPUT which implies a different update mechanism and a different set of transactions that need to be watched. That conversation doesn’t seem to be settling down anytime soon. It seems like we are not going to get SIGHASH_NOINPUT anytime soon. I don’t know. There is a lot of uncertainty.

KL: For the watchtowers I am not as skeptical as you guys I think for multiple reasons. One of them is that anybody could be working on watchtowers today or even have watchtowers in production and we would not know about it. This is one of the cool things about watchtowers. It behaves as if it has a private key but it doesn’t have it. It has a pre-signed transaction instead. Another thing regarding hosting itself or giving it to other people or having a third party watchtower. I think it is a good thing that you should have a lot of these. Of course you should have one or multiple watchtowers yourself but you should also deal with third parties. You might have to pay them of course. The fact that you should have a lot of watchtowers and no one knows how many you have is really important in terms of security. At least they don’t know if there is a single point of failure. They don’t know if it is a vector of DDOS. They don’t know who to attack to prevent the trigger of a prevention mechanism. I am really bullish on watchtowers and I know a few people working on them. I am really looking forward to seeing them in production.

MF: I’ll wrap up. In terms of transcripts I am doing a lot more transcripts than Bryan these days because obviously Bryan is very busy with his CTO role. If you want to follow new transcripts follow @btctranscripts on Twitter. Eventually there will be a site up, I’m working on it. Bryan has released a transcript of today for the first half. We’ll get that cleaned up and add any content that is missing from the second half. Apart from that the only thing to say is thank you for everybody attending. If you want to hear more about vaults, Kevin and Antoine are presenting next week. Thank you very much everyone.

\ No newline at end of file diff --git a/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault/index.html b/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault/index.html index 69328634a3..a4e391f97a 100644 --- a/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault/index.html +++ b/london-bitcoin-devs/2020-05-26-kevin-loaec-antoine-poinsot-revault/index.html @@ -11,4 +11,4 @@ Kevin Loaec, Antoine Poinsot

Date: May 26, 2020

Transcript By: Michael Folkson

Tags: Covenants

Category: Meetup

Media: -https://www.youtube.com/watch?v=7CE4aiFxh10

Location: London Bitcoin Devs (online)

Kevin slides: https://www.dropbox.com/s/rj45ebnic2m0q2m/kevin%20loaec%20revault%20slides.pdf?dl=0

Antoine slides: https://www.dropbox.com/s/xaoior0goo37247/Antoine%20Poinsot%20Revault%20slides.odp?dl=0

Intro (Michael Folkson)

This is London Bitcoin Devs. This is on Zoom and live-streaming on YouTube. Last week we had a Socratic Seminar on vaults, covenants and CHECKTEMPLATEVERIFY. There is a video up for that. There is also a transcript, @btctranscripts on Twitter. Regarding questions Kevin and Antoine are going to pause during the presentation for questions and comments. We are not going to have interruptions so we will let them speak until they pause. There will be a Q&A afterwards. Kevin will start from basics, foundation level so don’t ask advanced technical questions at the beginning. He will cover the more advanced, technical stuff in the middle and the end. Apart from that I just need to introduce Kevin and Antoine. Kevin Loaec I know from Breaking Bitcoin, Building on Bitcoin conferences which were amazing. Hackathons, conferences and all the events around that. Also Kevin works at Chainsmiths and is building a growing Bitcoin community in Lisbon. Antoine (Poinsot) spoke at London Bitcoin Devs in person a couple of months ago on c-lightning plugins. Watch that presentation on YouTube, that was really good. Today he is going to talk about vaults specifically the vault design “Revault”.

Revault - A Multiparty Vault Architecture (Kevin Loaec)

Thanks for the intro. I am going to try to make my part of the presentation about 40 minutes long. Then Antoine will go through the hardcore discussion on deep technical details after that. It is going to be long. I will try to make some pauses for you to ask questions but don’t hesitate to ping me. We are going to talk about Revault. Revault is a vault architecture that Antoine and I have been working on for the past 5 months now, since December. As you will see it is targeted towards institutions so not people as such. This presentation will talk about everything from the basics of how we do security today on Bitcoin for individuals and institutions. Then I will look at the different propositions we can do with Bitcoin today for individuals and institutions. Then I go directly into Revault, what we have built. What we have and how it works. Then Antoine will talk about very specific details about the architecture. If you are really into Bitcoin development you might have some questions or you might have not considered the attacks possible. Antoine will cover a lot of those. It is really interesting. The longer it goes the more technical it will be. If it is too simple for you at the beginning that is normal. It will get increasingly more advanced. If you are a beginner, at some point if you drop out expect the rest to be even more technical. I am trying to do my best to not lose anybody at the beginning. Hopefully it will not be too simple either.

Ownership model

The first thing I want to talk about is how we do Bitcoin today. We have ownership, how do we own Bitcoin? We have the question that is who controls the keys? This is the first part of how you can protect your Bitcoin. You have three choices here. You can deal with a third party custodian. Let’s say you leave your coins on an exchange. You don’t control the keys. It is not really your Bitcoin. That is one way of doing it. The other way is self-control of your keys. If you own all of your keys then you really own your Bitcoin. Nobody can block them or steal them away from you if they don’t have your private keys. Then you also have the mixed model where you and another party have keys to move the coins. This is about control. A mixed example could be the Blockstream Green wallet. It has two signatures, you sign one then you send the transaction to them and they co-sign it. Even if your key is lost that is fine because nobody can steal your funds without having the key from Green.

Key Management

The second question on ownership is about key management. The first slide was about who has these keys but the second one is about how these keys are secured. You have different approaches again. One popular one is called cold storage. You might have heard about that from exchanges for example. Cold storage is an address or a set of keys that you are not supposed to ever touch. Those are the ones that have been generated offline, have never touched a computer connected online and are never supposed to be used until you need to move the coins. Think of that as like burying a treasure chest in your garden. A second one could be having hardware or offline wallets. An offline wallet could be a laptop that you removed all the connectivity from. You removed the wifi chip, you removed all networking capability and you generate your keys on this thing. This will never touch the internet. This is a little bit different than cold storage. This is supposed to be used to spend your coins. The private key itself will never touch a computer connected online. This is probably what most people use. Back in the day it was a paper wallet like cold storage. You would have the ones that you never use and then you would have a small wallet on your phone or computer that you would use for transactions. Now most people are using hardware wallets. Then you can go yolo, meaning you have all your funds on your phone or all your funds on your computer. This is extremely risky of course because the threat model is extremely high. Your computer could be stolen, malware could be installed on your computer, you have a lot of risk there. Don’t ever put all your funds on a laptop or on your phone. Then you can also play around with multisig. You could have multiple hardware wallets. You could have one key on a hardware wallet, one key on your phone, one key on your computer. In that case even if one or multiple keys leak that is alright depending on how the multisig is designed. Out of these two slides, the first one, who controls the keys, and the second one, how they are kept, define every way of storing Bitcoin today. As far as I know that is 99.9 percent of how coins are secured. Your own Bitcoin are probably behind one of these solutions or a mix of two. This is not good enough. This is really not good enough. This is just about securing keys, we are not talking about securing the Bitcoin itself. As soon as the keys are leaked your Bitcoin are gone. Vaults are talking about that. When we say we are dealing with a vault we are not dealing with just a key solution. We are dealing with a way to restrict how Bitcoin can move. That is the main point I want to cover here.

Threat model

Then you have threat models to discuss. Threat models are quite complex to list because they depend on each person or each group of people. The main one up until now was the online threat. How malware can steal your keys. That is why your keys should never be on your computer or on your phone. This is the main threat we have today. Then you have a physical attack. Somebody coming to you and threatening you to give them the Bitcoin. As you can already think if I come to you and I start breaking your fingers you probably will dig up this cold storage wallet you have in your garden. Being in your garden doesn’t mean you can’t access it. If I really force you to do it you will give it to me. That is not enough as a security model. Then you have third party risk. Of course if you give all your Bitcoin to Coinbase then you are protected from the random guy coming to you but Coinbase now can steal your Bitcoin. You are creating other risks in solving the first one. Then for multiparty solutions you also have the insider attacks. When it is only your Bitcoin you are fine. If it is a company it is a little bit harder. How do you deal with your co-founder in a startup going away with the coins? How do you deal with your IT guy having access to your private keys on the exchange and stealing them? That is another problem that we have to think of. When you are designing security solutions around keys you are trying to think about that. When you are designing vaults you really are trying to figure out all the different types of attacks that can happen. If they can happen they will happen especially for big players like exchanges.

What we want

What we want, this slide is about what we are trying to achieve here. A perfect solution for vaults would look like this. We would be able to restrict where the funds can go. That’s saying if an address is not in the whitelist your UTXO cannot move to it. This is impossible to do today with Bitcoin. We would like to restrict how much Bitcoin can move, maybe per day or per week. Let’s say you are super rich and you have 100 Bitcoin somewhere and you want to protect yourself by allowing you to only spend 1 Bitcoin a month, right now that cannot be done. The Bitcoin network doesn’t care how many of your Bitcoin you are spending. A vault or a covenant could be interesting for doing that. If you know you are not supposed to spend more than a Bitcoin a month can we somehow enforce this at a protocol level or on a second layer? Then you also have the when. That could be for example during business hours if you are a business. The funds should not move during the night because your employees are only there during the day. Could we enforce that? Or as an individual it could only be during the hours that you are awake. You know a thief, even if it is malware, wouldn’t be able to move your funds during the night. This is a restriction we want in a vault. As we discussed before most of the solutions we can think of add risk. Let’s say the where here, if I only allow myself to send my coins to Kraken what happens if Kraken disappears? All my coins are locked forever because Kraken doesn’t exist anymore? This is really difficult. Or they make it unusable. Unusability is interesting as well. In security you have this principle that if your system is too hard to use or too annoying to use people are not going to use it. It might be secure but just because it is annoying to use it on a day-to-day basis people are going to bypass it. Your security is defeated because it was unusable or too hard to use. This is very important when we design security solutions.

Theory is cool but…

What I wanted to cover here is that Bitcoin is in theory pretty cool but the practical Bitcoin we have today is a very limited set of blocks. We cannot do whatever we want with it so we have a specific set of tools we can play with. We can probably build anything with it but we have to be really smart about how we do it. We cannot just ask the Bitcoin Core developers to implement a new covenant opcode: “Can you please do it?” Everything takes ages to be added and modified in Bitcoin because that is the way Bitcoin is designed. I am really happy with that. I am ok to have a Bitcoin that is pretty restricted for security reasons but that also means I need to play within the rules. I have to use whatever basic blocks I have and build with them. I cannot just expect Bitcoin to change to fit my needs. That’s why today vaults are not available on the market. Nobody is giving vaults-as-a-service today. We don’t have software to do that. Of the software you can get today on Bitcoin, the most advanced thing would be multisig. We want to go further than that.

What we have

What we have today on Bitcoin are these basic blocks. These are probably the two we can use at the protocol level. We can use multisig. That is having multiple keys required to sign to be able to spend a transaction. That could be multiple people or not. That could be multiple devices whatever. We also have timelocks, I am going to cover timelocks after that. Lastly we can also imagine that we have tamper-proof hardware. That could be a hardware wallet, maybe one that is more programmable. A Coldcard or something like that where you would put some of the restriction on the hardware itself to emulate having this third party checking for your rules to be ok. What we don’t have again is covenants. What is a covenant? To explain what Bitcoin can do today we have the outputs in every transaction. The output defines how the funds can then be spent. The output has conditions on how the next person or the next spend will be able to spend them but that is only on the inputs. You cannot have a restriction on the outputs of the child transaction. Meaning that if I send Bitcoin to Michael, I can put some restriction on this. I could say “Michael will not be able to spend this transaction before December 1st through a timelock.” I could do that. Or I could do “I will send these coins to Michael but I also need to co-sign. I need a multisig with him.” What I cannot do is put a restriction on how the outputs of the spending transaction will look. I cannot say “The coins I’m sending to Michael can only go to Kraken afterwards.” There is no way of doing that today. That is what we call a covenant. We do not have covenants on Bitcoin today. There are a lot of proposals and I think Antoine is going to talk about them later. Also the meetup we had last week about CHECKTEMPLATEVERIFY, that is one of those. With CHECKTEMPLATEVERIFY you can enforce a restriction on the outputs of a child transaction. Today we don’t have that in Bitcoin. Revault is not using this covenant thing. We are trying to emulate it with the tools we have today.
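To make the “where” restriction concrete, here is a minimal sketch (plain Python, with hypothetical addresses) of the rule a covenant would enforce. No output script today can run this check over the child transaction’s outputs, which is exactly the gap covenant proposals such as CHECKTEMPLATEVERIFY aim to fill.

```python
# What a "where" covenant would have to enforce, written as a plain predicate.
# Nothing in today's Bitcoin Script can run this check: an output script only
# constrains the input that spends it, not the outputs of the spending (child)
# transaction. The addresses below are placeholders.

ALLOWED_DESTINATIONS = {"bc1q...kraken-deposit", "bc1q...company-cold"}  # hypothetical

def covenant_allows(child_tx_outputs):
    """child_tx_outputs: list of (address, amount_sats) the child wants to create."""
    return all(addr in ALLOWED_DESTINATIONS for addr, _ in child_tx_outputs)

# Today this rule can only be enforced off-chain (a co-signer that refuses to
# sign) or emulated with pre-signed transactions and key deletion, as the rest
# of the talk describes.
```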

Timelocks

The timelocks we have, in Bitcoin you have four of them. You have Locktime and Sequence that you can do at the time of crafting a transaction. When you are spending a transaction you can add a locktime or add a sequence. The Locktime is about putting a date, time in blocks but let’s say a date e.g. December 1st. This transaction will not be able to be mined before this date is reached. This is pretty cool. Sequence is relative. Sequence is about how long ago the parent you are spending, the UTXO you are spending from, was mined. If you sign a transaction with a Sequence of one week it means that you need to wait until one week after the UTXO you are spending from has been mined. It is just a delay between the UTXO you are using and the transaction that spends it. These two are interesting when you are creating the transactions but sometimes they are not enough. If you have the private key you can double spend them. You can do another transaction spending the same outputs and remove the Locktime from your own transaction. It is not really a restriction, it is somewhat restrictive but you can bypass it if you want. An attacker having your keys doesn’t care about your Locktime and Sequence. That is why these two opcodes OP_CLTV and OP_CSV were created. CheckLockTimeVerify and CheckSequenceVerify. This is put on the outputs of a transaction. That forces the one that will use this output later, the child transaction, to have a Locktime or Sequence that corresponds to whatever is restricted here. I could have a deposit address where when I send funds to it it enforces a delay of like 3 hours before I can spend them again using a CSV. This is cool. Now we are starting to talk about interesting things that we can use in weird vault designs. I just wanted to show you the different timelocks we have. Maybe you can start thinking about how we can use these together to build some of the restrictions we were trying to do earlier. As a note here Bitcoin doesn’t have an idea of time or at least it is not clear. You can do time from when a transaction was mined because mining is consensus. You cannot do time since a transaction has been seen in the mempool. The mempool is not consensus, at least not everybody has the same mempool. We cannot do timelocks from the time of signing. This is somewhat annoying because we would like that in vault systems. For example, if we had something that let me have a lock from the time of signing I could say “I can sign a transaction but I know while signing it I need to wait one day before it can be mined.” This is not possible today. That is annoying and we have to play around that.
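As a reference for how these four timelocks map onto transaction fields, here is a small sketch using the constants from BIP 68 for relative locks; the encoding details come from the BIPs, and the one-week example is only illustrative.

```python
# Sketch of how the four timelocks are expressed on a transaction, following
# BIP 65 / BIP 112 (OP_CLTV / OP_CSV) and BIP 68 (relative locks in nSequence).

SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22   # set = lock measured in 512-second units
SEQUENCE_LOCKTIME_MASK = 0x0000FFFF

def relative_lock_blocks(n_blocks: int) -> int:
    """nSequence value meaning: the parent must be n_blocks deep before this spends."""
    assert 0 < n_blocks <= SEQUENCE_LOCKTIME_MASK
    return n_blocks

def relative_lock_seconds(n_seconds: int) -> int:
    """nSequence value for a time-based relative lock (512-second granularity)."""
    units = (n_seconds + 511) // 512
    assert units <= SEQUENCE_LOCKTIME_MASK
    return SEQUENCE_LOCKTIME_TYPE_FLAG | units

# Absolute locks go in the transaction's nLockTime field instead: values below
# 500,000,000 are block heights, values at or above it are UNIX timestamps.
# OP_CLTV / OP_CSV in an output script require the spending transaction's
# nLockTime / nSequence to be at least the committed value, which is what turns
# these fields into restrictions an attacker holding the key cannot strip out.

one_week = relative_lock_seconds(7 * 24 * 3600)   # the "Sequence of one week" example
```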

Single party advanced mitigation

The title here is probably not right. I wanted to start talking about how we can do advanced mitigations, things we are not using today, to emulate these restrictions where, how and when. The most basic one you can do is adding annoying processes. For example you have a 2-of-2 and you put one of these two keys in a bank vault. You cannot spend during the night because the banks are closed at night. That is enabling your covenant on when you can spend from it. You can also have a notary that will make sure that the key you are spending to is one of your whitelist keys. This is cool but it creates other problems. What if the bank loses your vault? What if the notary cheats or doesn’t want to sign any transaction? That is one tool we could use but it is not great.

Timelock

We have timelocks as we just talked about. Timelocks are more interesting in terms of vaults or at least covenants that you can do yourself. If you deposit to an OP_CLTV saying December 1st you know that you cannot, even if you try, spend this transaction or this output before December 1st. This is great if you really know you don’t want to be able to spend your funds before a certain date. If you want to hold or you want to give a gift to somebody and you tell them “You can’t use it before you are 18 years old” you could enforce that in the Bitcoin transaction itself. That is starting to be interesting now. OP_CSV is the same thing but from the time the deposit was made. It is maybe a little more flexible because you could say “I’m going to do some dollar cost averaging on Bitcoin and every time I purchase Bitcoin I want to only be able to spend them in 2 years.” Every time you do a transaction it will move forward 2 years for this specific output. This is cool but even you cannot spend from it. It is not only against an attacker, it is a real covenant: nobody can spend these funds before the date has been reached.

Key deletion

Then we have key deletion. This is more interesting now. We are starting to get into the ideas that are not really obvious. This has been proposed many times on Bitcoin, it is not new. But it is not something you would think of while looking at Bitcoin and what you can do with transactions. Key deletion is a great way to enforce that something will happen. You can sign a transaction from a deposit UTXO, you receive some money. You can sign a transaction spending from it and then you delete the private key. You are not broadcasting the transaction yet, you are just signing it and keeping it. You delete the private key. That means that nobody, not even you, can now move these funds in any other way than the way you already signed and kept in your backup. You don’t have the private key to change this. This is a way of creating a covenant where you could choose how much is moving, where it is moving. You can put timelocks if you want, you can do a lot of things. The major risk you have here again is that you don’t have the private key. If anything is wrong with it it is too late. If you lose this pre-signed transaction, because you don’t have the private key anymore, the funds are gone. It is a risky one but you can try to play around and make cool things.

Co-signer (secure hardware)

Then you have a co-signer. A co-signer can take different forms. It could be secure hardware where you enforce checking the rules, maybe checking a whitelist or whatever. It could also be a third party if you want to use Casa or BitGo or whatever. In that case you would do a 2-of-2: you would need a co-signer to approve every transaction you do. You sign your transaction first, you send it to the co-signer and if the spending conditions are not met, maybe you spent too much this month, then it is not supposed to sign. This is another thing that could be fun. You can automate things, you can build it at home. If you know it is secure hardware and it is not easy to modify that could be a good way to ensure your transactions are not completely unchecked or too easy to spend.
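A minimal sketch of the kind of policy such a co-signer could enforce, assuming a hypothetical whitelist, a one-Bitcoin daily limit and business hours. None of this comes from a specific product; it just shows that the where, how much and when rules live in the co-signer rather than in Bitcoin Script.

```python
# Illustrative co-signer policy check. Addresses, limits and the bookkeeping of
# "spent today" are all hypothetical placeholders.

from datetime import datetime, timezone

WHITELIST = {"bc1q...exchange", "bc1q...supplier"}        # hypothetical "where"
DAILY_LIMIT_SATS = 100_000_000                            # "how much": 1 BTC per day
BUSINESS_HOURS = range(9, 18)                             # "when": 09:00-17:59 UTC

def cosigner_should_sign(outputs, spent_today_sats, now=None):
    """outputs: list of (address, amount_sats) proposed by the first signer."""
    now = now or datetime.now(timezone.utc)
    if now.hour not in BUSINESS_HOURS:
        return False  # the "when" restriction
    if any(addr not in WHITELIST for addr, _ in outputs):
        return False  # the "where" restriction
    if spent_today_sats + sum(amount for _, amount in outputs) > DAILY_LIMIT_SATS:
        return False  # the "how much" restriction
    return True
```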

Clawback

Now let’s go to the fun stuff. This is an idea from 2013 which is a clawback. This is also where the name Revault comes from. It is about sending back a transaction to a vault. Doing this is quite interesting. It looks quite heavy so I am going to explain this. You start with a transaction. Let’s call it the vaulting transaction where you have an output that you will use to be spent with a pre-signed transaction that has two different exit points. Either you use your hot wallet key like any hot wallet on your phone plus a CheckSequenceVerify of a few hours. The other way of spending from it is to use the clawback transaction that is sending into a vault. The key is deleted here. Now what it means is that we are back to the key deletion slide that I had before. This vault transaction can only be spent by this transaction because we deleted the private key. If you want to use the funds either you use your hot wallet key and you have to wait for the delay of the CSV or you can also trigger a clawback which instantly, without the delay, can put these funds back into a vault. Giving back the same protection. Sending them to a different set of keys no matter what. You can choose this at the time of crafting your clawback. Because the clawback doesn’t have the OP_CSV here you also need to delete the key for signing this clawback. You have to delete the key for the unvaulting transaction and you have to delete the key for the clawback transaction. You don’t have to do it this way but the good way of doing it is that the clawback itself is also a vaulting transaction. You are doing a kind of loop here. You need to also have deleted the key of the unvaulting at the bottom here. That means you also need another clawback here that also needs to be deleted. To build such a transaction you can do that with SegWit because now we have more stability in the txid. But the problem is that you should already pre-sign and delete all your keys before doing the first transaction in it. In this type of vault, because the amount would change the txid, you have to know the exact amount before you are able to craft the transaction here, sign it and delete the key. Same here, you need to know the amount you are spending from because otherwise the txid would change. When you do a vault with a clawback you need to know how much money you are receiving in it. If somebody reuses the address that you have in the output of the vaulting transaction here and sends any amount of funds, the txid will have changed, so you would have no way of spending from it because the private key has been deleted. It works but you have to be extremely careful because once you have deleted your private keys it is too late. If you make any mistake it is gone. Another thing is that if you lose the pre-signed transaction your funds are gone as well. You need to do backups of all of that. That is what Bryan Bishop proposed on the mailing list with his implementation recently. The first time it was discussed was probably in 2013. Also Bob McElrath has written a paper at some point on this, maybe 2015. A lot of people have thought about such a protection mechanism. I think it is pretty cool. We are not using this in Revault but this is the most advanced thing you can do right now although no one has it in production anywhere.
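The reason the amount matters so much is that the txid commits to it. The toy sketch below (not real Bitcoin serialization) only shows that changing an output amount changes the hash, so a pre-signed child that references the expected txid:vout becomes useless once the keys are deleted.

```python
# Toy illustration (not real Bitcoin serialization) of why key-deletion vaults
# must know the deposit amount in advance: the txid is a hash over the whole
# transaction, amounts included, and every pre-signed child spends a specific
# txid:vout.

import hashlib

def toy_txid(inputs, outputs):
    blob = repr((sorted(inputs), sorted(outputs))).encode()
    return hashlib.sha256(hashlib.sha256(blob).digest()).hexdigest()

vault_txid_expected = toy_txid([("deposit_txid", 0)], [("vault_script", 100_000_000)])
vault_txid_actual   = toy_txid([("deposit_txid", 0)], [("vault_script",  99_990_000)])

assert vault_txid_expected != vault_txid_actual
# The pre-signed unvault/clawback transactions reference vault_txid_expected:0;
# with the keys deleted there is no way to re-sign for vault_txid_actual, so
# the funds become unspendable.
```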

Bob McElrath: Two little points. One is that you discussed having the ability to do a timelock from the time of signing. It is cryptographically possible using timelock puzzles or verifiable delay functions but I think this is very far from anything people would want to use it right now. For those that are interested in crypto something to keep an eye on. Secondly somebody asks in the chat how do you prove the private key was deleted? Generally you can’t. An alternative is to use ECDSA key recovery but that cannot be used with Bitcoin today because the txid commits to the pubkey which you don’t know at the time you do the key recovery.

For single party it is fine you can somewhat prove to yourself that you deleted your private key. In multiparty it is much harder because you cannot prove to someone else that you did.

Bob McElrath: It is a little bit worse than that. You have some hardware that generated a key and you want it to delete the key. You should ask questions about did that hardware delete the key? If you’re using Linux and it has a swap file and memory paging this key could exist in multiple places. It is not just that I ran it on my computer, you have to get down to the hardware level and lock the memory and overwrite the memory. Then you can ask questions about things like Rowhammer, could I pull that out in some other way? It is very, very hard if not impossible to prove you deleted a key even to yourself.

You could imagine some way but sure. This is not easy to do.

Institutional mitigation

Let’s talk about institutional mitigation which is a little bit different because usually it is multiparty. Also the impact on operations is somewhat important. Of course it depends on who you are dealing with. For example if it is an exchange they need to access their cold storage quite often. That defeats the purpose entirely. A cold storage should never be used. We need to have ways that can be used without breaking the security too much. This is one of the focuses of Revault. Another thing with institutions is we have another risk which is insider risk. Of course when you are alone you are not scared of yourself stealing your own Bitcoin. Bitcoin is very similar to cash in the sense that if you have a business and a till with banknotes in it it becomes really hard to secure this. If you want to have normal operations you need people to be able to take cash from this cash till but you don’t necessarily want every stakeholder in the business to be there with a multi key thing to open the till every time. How can we somewhat not have a heavy impact on operations while being able to emulate some enforcement of things that a covenant would enable? The incentive to attack is higher for businesses because usually businesses have much more money than individuals. When we are talking about an exchange we are talking about some of them having cold storage with more than a billion dollars of Bitcoin behind a single private key. This is insane. This is what we have today in terms of security. You would be really surprised how bad security is in Bitcoin today, at least in key management today.

What most people use today is multisig. Let’s say 2-of-3, m-of-n. Usually you don’t have n-of-n because that would mean if one party loses their key the entire fund is locked. That is a big problem. You have ways to mitigate that with proper backups and things like this. What you mostly see in businesses is a multisig m-of-n. That can be done in house like some exchanges do or that can be done with a third party like BitGo for example. Then you also have the cold storage with frequent access which kind of defeats the purpose of cold storage. Most of these are often delegated to a third party. You have a lot of businesses today that exist just to provide custodial solutions for other businesses in Bitcoin. They charge a lot of money for that. At the end of the day these custodians also need to secure the keys somehow. We are back to the same problem. How are they keeping the keys? Is it just a multisig? What size of multisig? Is it just a cold storage? How is it kept? You always have a problem there. They would say they have insurance but insurance is not a security solution here.

Revault

Let’s start talking about Revault. In Revault we are trying to solve a lot of things at the same time. It started with a hedge fund contacting us. This hedge fund wanted to be capable of self hosting their own custody without impacting their operations. They also have this interesting thing that they have only a subset of active traders while the other ones are the partners in the fund. They wanted some security that would enable having the maximum security possible without impacting their operations. Revault covers up to N-1 participants being compromised. This is pretty cool. Compared to for example a BitGo 2-of-3, in a 2-of-3 you cannot sustain N-1 because you only need 2 keys to spend the funds. Revault is still secure until you have every single key being compromised. We also have physical threat mitigation. If somebody were to threaten all the participants in our scheme at the same time we still have things to partially protect the funds; only a small part of them could be stolen. But we still have a partial mitigation even with N keys being compromised which is quite interesting I think. We also have a very low operational overhead. I will show you how. In Revault it is approve by default. Every spend is approved by default but can be cancelled. Instead of having to ask all the stakeholders for agreement every time. There is a bypass possible in case of emergency. That means for example you see the price of Bitcoin mooning or crashing and you need to move the funds right now. You don’t want to enforce the timelocks we have which would be a few hours. If all the participants want to and they are not under threat then they can move the funds without being encumbered by the timelocks we have. This is very interesting for the hedge fund for example. Then we have the panic button that sends the funds into the Emergency Deep Vault. As you will see later in the slides the Emergency Deep Vault is basically a different address that is more secure than the most secure address we have in our vault system. In our case we have an N-of-N. The Emergency Deep Vault should be harder than that to spend. That could be N-of-N plus a long timelock, separating the keys into different geographical locations, N being bigger than the original N. A lot of things like that. This is a way to escape the architecture if anything goes wrong. To avoid sending back the funds to the vault it goes to a completely different security solution. Another thing different from other models and other proposals out there is that the funds are secure from the time of deposit in Revault. We don’t have an intermediary wallet that needs to put the funds in the vault. For us deposit equals secure. The deposit is a 4-of-4, N-of-N as you will see.

We are not covering an insider locking the funds, refusing to sign. This is something that is possible. If they don’t sign the funds cannot move. That is fair. The only way to bypass somebody not signing would mean we cannot have the N-1 participant compromise here. Maybe the one not signing is the only guy that is not compromised. We cannot bypass that. We have partial mitigation though. It doesn’t mean the funds would be locked forever, it is only a limited amount of the funds that would be locked. Then a little harder would be an insider burning funds, deleting their private key and deleting all backups of their private key. Deleting the Emergency Deep Vault keys, this could happen if somebody got really mad they could delete every single key and backup of keys they have. Depending on how the vault is designed the funds might be lost forever. This is something to have in mind when deploying such an architecture.

This is a graph that we have of what it looks like. I will try to explain it slowly. I know it is a lot, it is quite heavy. I will do my best. We have deposit transactions that create outputs. These outputs are already vaults. As I said before the funds are secure from the time of deposit. These deposits, let’s say you are getting money from an exchange or a client is sending you money, the address you are giving them is already encumbered by an N-of-N multisig. That will be what we call the V, the vault UTXO. To spend from this UTXO you would need a transaction that we call the Unvault transaction or you would need the N participants agreeing and sending the funds out of the vault. That is what I called the bypass before. If there is an operational emergency where the funds need to move right now without a timelock the N participants can use it and spend the funds. That would not be possible if we had key deletion but we do not have key deletion right now in Revault. When you have a deposit, when you receive funds in a vault we need the N participants to pre-sign a set of transactions. Typically they would do that once a day. In the morning they come to the office and every one of these N participants in his routine would have to check if there had been any deposits. If there have been any, he should pre-sign the transactions for them. The transactions to be pre-signed are the two emergency transactions, one from the vault directly and one from the transaction that spends from the vault. He also needs to sign a Cancel transaction which is a transaction that spends from the Unvault transaction and sends it back to a vault. I know it is quite weird if you have never seen vaults before. This is now quite common, it is a clawback. If you have never seen something like that before I know it is kind of weird. We are pre-signing these 3 transactions, the 2 red ones and the orange one. After that we sign the blue one here, the Unvault transaction. Then we keep these 4 pre-signed transactions somewhere safe. If somebody wants to spend funds, let’s say the traders or whoever is authorized to spend funds, he crafts a normal transaction that we call the Spend transaction. This transaction is actually timelocked because it can only spend from the output of the Unvault which enforces a timelock. The only difference in operations is that when the person authorized to spend is doing his transaction he will have to wait a few blocks to have his transaction mined. In the meantime, during this timelock we can have a watchtower or one of the participants triggering a Revault transaction, a transaction coming back to a vault. Or an emergency transaction sending all the funds to the Emergency Deep Vault. The thing that is out of the architecture if something really goes wrong.

Just to clarify what it looks like. These vaults are encumbered by N-of-N. You need all the participants to sign to move them. This is what we use to actually pre-sign the transaction. They use this path. The emergency that goes down here is pre-signed with these N keys. This Unvault transaction is pre-signed with these N keys. This Unvault transaction also looks like this. It is pre-signed and it has two conditions. Either M out of N, M can be anything from 1 to N, that’s the authorized people. Then you have co-signing servers that Antoine will talk about to fix a specific attack against our architecture. And an OP_CSV that enforces the timelock. Either we have this subset of authorized people that need to wait for a timelock or we have all the participants agreeing. This N-of-N is not really to be used by people. It is just to trigger the pre-signed Cancel and Unvault transactions that are using these N signatures. I think that is as far as I can go in explaining this. Lastly the Emergency Deep Vault (EDV), the conditions to spend from it are just strictly harder than M-of-N. This is the only condition we have. The obscurity in this is important. Think of an attacker from the outside trying to steal your money and scheduling an attack. It is already pretty hard to steal from this, but if on top of it he doesn’t have any idea of how hard the EDV is to spend from, maybe it will require even more than the N participants, this is a very strong deterrent against an attack.
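Written in miniscript-style policy notation, this is roughly what the two output scripts look like. This is an illustrative sketch: the key names, the 4 stakeholders / 2 managers split and the CSV of 15 blocks are placeholder values taken from examples in this talk, not the exact Revault scripts.

```python
# Roughly the Revault output policies from the diagram, in miniscript-style
# policy notation. Key names and numbers are placeholders.

stakeholders = ["S1", "S2", "S3", "S4"]   # the N participants
managers     = ["M1", "M2"]               # the M people allowed to spend day to day
cosigners    = ["C1", "C2"]               # one co-signing server per non-manager
CSV_DELAY    = 15                         # blocks the Spend must wait after the Unvault

def pk(name):
    return f"pk({name})"

def thresh_all(names):
    return f"thresh({len(names)},{','.join(pk(n) for n in names)})"

def and_all(policies):
    out = policies[0]
    for p in policies[1:]:
        out = f"and({out},{p})"
    return out

# Deposit / vault output: a plain N-of-N of the stakeholders. This is the path
# used to pre-sign the Emergency, Cancel and Unvault transactions.
vault_policy = thresh_all(stakeholders)

# Unvault output: either all N stakeholders again (used by the pre-signed
# Cancel / Emergency transactions), or the managers plus the co-signing servers
# once the relative timelock has passed.
unvault_policy = (
    f"or({thresh_all(stakeholders)},"
    f"and({thresh_all(managers)},"
    f"{and_all([pk(c) for c in cosigners] + [f'older({CSV_DELAY})'])}))"
)

print(vault_policy)
print(unvault_policy)
```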

This looks very complicated but most of it is automated. You don’t have to manually do all of this. The process looks like this. You receive a deposit. This is already protected behind an N-of-N. You are fine, even if you don’t sign it it is ok, nobody can steal these funds. At some point all stakeholders will need to pre-sign the Emergency, Cancel and Unvault transactions. That would be maybe done every morning. That makes the funds spendable. It is like confirming that they have been deposited. From this list of UTXOs that are pre-signed now any authorized person can create a spending transaction. It looks like a normal transaction. They can spend it wherever they want. It is just that they will have a timelock that, in case there is something wrong with the spending transaction, will let any of the N stakeholders or the watchtowers they control Cancel it or send it to the Emergency Deep Vault. That is what the process looks like. Funds are deposited, nobody can spend them until all stakeholders have seen it. Once they have been pre-signed it looks like a normal wallet that has a timelock on top of it for the spender to be able to spend.

It is cool for multiple reasons. It is cool because we don’t need to know any amounts in advance. There is no deletion of private keys here. We can reuse addresses, if it happens it is not a problem. Pretty much anything is possible. It doesn’t prevent us from receiving deposits even if there is no pre-signed transaction made in advance. This is different from any of the key deletion proposals because you would need to pre-sign the whole tree before depositing money in it. You would need to put the exact amount as a deposit because that would change the txid otherwise. We don’t have this problem. We can backup seeds. We don’t have to backup transactions as such. Even if one of our pre-signed transactions is lost we can still resign it with the keys. The backups we need are only for seeds for these different private keys. We don’t delete private keys. That is pretty easy to safe keep today. The proposal with the key deletion would have to have a backup every time they pre-sign a transaction. Otherwise losing the pre-signed transaction is losing the funds. Day-to-day operations are only impacted by a delay. The only difference for the normal person to use this architecture is that the funds are not going to be confirmed before x amount of blocks. We are trying to reach the security level of N-of-N although only a subset of these would be the person creating the spending transaction. To emulate that everybody is still agreeing we have watchtowers and things like this. Antoine will cover that later. Also we have the extra Emergency Deep Vault in case a lot of these keys are compromised. We can still move all the funds outside. Lastly it is open source which is pretty cool I think. There is no third party risk because there is no third party. Everything should be self-hosted. If you want to deal with third parties it is only to add security on top like using watchtowers.

Bob McElrath: Antoine was just clarifying for me where the timelocks are. The Unvault transaction has a CSV on it. The Spend transaction has a CSV. The Deposit transactions do not, correct?

Only the Unvault has a CSV. The Spend is only delayed by this OP_CSV. The Spend transaction is just going out. Out is a UTXO, it is not a transaction. The timelock is enforced by the CSV. The Sequence is in the Spend tx.

Bob McElrath: The purpose of the timelock on the Spend transaction should have been to send that Spend transaction back to the emergency vault. But there is no second branch there on the Spend transaction?

For us the Spend transaction is exactly like here. The delay is between the Unvaulting and the Spending. The clawback happens at the time of the Unvaulting being published. The reason for that is we enforce through the co-signing servers that the Unvault transaction is not pushed if the co-signers have not seen a Spend transaction child before. That means that we emulate that the distance between these two transactions is OP_CSV because if an Unvault transaction has been mined without the participants knowing about the Spend transaction that will spend from it then we trigger automatically a Cancel transaction. It is to enforce that we have the full delay to cancel the spending if needed.

Bob McElrath: What I would have called the receiving wallet on the left, the Deposit transactions, this is a standard N-of-N wallet. There are no timelocks there, no scripts. I believe in our paper we call this a “Receiving wallet” because everything else that is happening here is part of a protocol. The participants need to know what the steps are, what is going on there. In order to start the protocol you have to have full control. You can’t expect someone to deposit into a timelocked vaulted scripted transaction very easily. What you have done allows you to construct the Unvault transactions in the way you want. There are some interesting properties here. Those Unvault transactions can be for instance tranched. Let’s say you want to make ten Bitcoin tranches for different traders or something like that.

We could. In our design, although I don’t fully remember what yours is made of, we considered that each vault is a completely different thing. Different UTXO, different address, different everything. But yes we could branch out in the Unvault. I don’t think it is something we really want to do because we can have change going back to a vault or anything if needed. At least at the script level here we are not enforcing amounts. We leave that to watchtowers looking at the protocol or even the co-signers if they have some restrictions on the co-signing.

Antoine Poinsot: This is one point of using the co-signing servers. To leave the maximum flexibility for the Spend transactions to not have to commit to the amounts or the addresses so it can change. We don’t need to change the Unvault transaction.

The first one, the N-of-N here is a normal N-of-N. There is no other restriction on it. But we consider this to be the highest level of security in the architecture because the N is not the normal amount of people required to spend the money. The Spend transaction here would usually be M-of-N. The N-of-N is a lot of people, much more than would be used in a normal multisignature because they don’t have to do any special work. In the morning they have this deposit process happening and they just need to pre-sign the things. A normal company using a normal multisig for this would be using M-of-N for the spenders. We are just trying to add new signatures on top, going back to N, even if they are not part of the usual spending people, increasing the security to a bigger multisig.

N-of-N doesn’t allow for any redundancy. Have your clients been happy with that? Do they want to insert a threshold there in the first step?

The N-of-N doesn’t have redundancy here because they should have proper backups and key management in place. In the M-of-N for this spending our original client asked to have a subset of traders and only 2 out of 3 traders should be required to spend the money. Just in case one of them is unavailable. We have done more complex than this generalization.

Q - Which exchanges are best? This is referring to one of your earlier comments about the exchanges’ cold storage set ups not being great at the moment. Do you have any insight into what those set ups actually are because obviously they are shrouded in secrecy because there are hundreds of millions of dollars?

Secrecy is a big part of it. Also it is very hard to know even if it is a single key securing your funds. It could be just a single key but you don’t know how it is secured. It could be secured in an HSM, a hardware security module, where no one can extract the key. Having a single key is not necessarily a bad thing if it is properly secured. You can go to the other extreme: “I am a company and I am using BitGo, I have 2-of-3 I am very secure.” Not really because you have 2 out of the 3 keys yourself. You have your normal key and you have your backup key in case BitGo goes offline for example. In that case you are a single point of failure. Anybody can come to you and get these 2 keys quite easily. It is really hard to say that more keys is more secure. It really depends on how you manage those keys.

Bitcoin fun Revaulted (Antoine Poinsot)

I am going to talk about some fun we had while designing it and challenges encountered. Specific things about why we ended up with this architecture.

Pre-signed transactions fun

The first challenge we encountered is a very common one which we have been discussing a lot for the Lightning Network. How to get pre-signed transactions confirmed in a timely manner for multiparty contracts. Our security model relies on our Revault transactions being enforceable at any time. We need to find a way for transactions to pay the right fees according to the fee market when we want to broadcast them. We could use something like the update_fee message currently used in the Lightning Network protocol which asks our peer to sign a new commitment transaction with us as the current fee rate increases or a new one when the fee rate decreases. We can’t trust the other parties to sign the transaction. Firstly, because they might be part of the attack. With vaults we are talking about big amounts, so they have a strong incentive to take the vault money and act badly and refuse to sign a Revault transaction. Secondly even if they are honest it would just require an attacker to compromise one party to prevent a Revault transaction being executed. Finally they may not be able to sign it in the first place. They may not have their HSM available. In addition it would require them to get out their HSM each time there is a fee rate bump. This is just not practical. We are left with either using anchor outputs, which is what has been planned for the Lightning Network, or letting each party attach inputs to the transactions, aka bring your own fees. We went for the second one.

At first we leveraged the fact that emergency transactions have just one input and one output. We could use SIGHASH_SINGLE safely; if there were a difference between the number of inputs and outputs we might encounter the SIGHASH_SINGLE bug. This allows any party to bump the fee rate by adding an input and output to the transaction. Or if the transaction already has fees at the time of broadcast, just replace a SINGLE | ANYONECANPAY signature with a SIGHASH_ALL signature before broadcasting. Unfortunately this opens up a possibility for transaction pinning as we allow any stakeholder to attach a new output paying to themselves. This would allow them to decrease the Cancel transaction fee rate: while we want them to add a high-value input and a low-value output, they could just add a high-value output and a low-value input to take all the fees of the transaction and keep it just above the minimum relay fee. They could also pin the transaction in the mempool as the output is just paying to themselves, according to BIP 125 rules. We went for SIGHASH_ALL | ANYONECANPAY signatures to be exchanged between the stakeholders because they would commit to all the outputs and thus would not allow any party to add an output paying only to themselves. They cannot pin the Revault transaction itself in the mempool anymore, nor can they decrease the transaction fee. This adds a burden on the fee bumper because they need to create a separate transaction if their input is too big and they want a change output. They need a fee bump transaction whose output they attach as an input of the Revault transaction. The worst case here is that the fee bump transactions can still be pinned by a change output from an attacker. By the second rule of the RBF BIP the unconfirmed input would not be replaced in the Revault transaction. Even in this worst case scenario the party bumping the fee rate could create a high fee rate fee bump transaction and wait for it to be confirmed in the next two blocks. We can expect the Unvault transaction to have a CSV of 15 at least. It will be confirmed and then the BIP 125 second rule will not apply anymore. I have described it more in detail in my mailing list post.
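For reference, these are the byte values of the sighash flags being discussed; the values are standard Bitcoin, and the comments only restate the pinning argument above.

```python
# Sighash flags and their byte values. ANYONECANPAY is a modifier OR'd onto
# the base flag.

SIGHASH_ALL          = 0x01   # commits to all inputs and all outputs
SIGHASH_NONE         = 0x02   # commits to no outputs
SIGHASH_SINGLE       = 0x03   # commits only to the output at the same index
SIGHASH_ANYONECANPAY = 0x80   # commits only to this input; others can be added

# SINGLE|ANYONECANPAY (0x83): anyone may add inputs and outputs, which is what
# lets an attacker bolt on an output paying themselves and defund or pin the
# Cancel transaction.
single_acp = SIGHASH_SINGLE | SIGHASH_ANYONECANPAY    # 0x83

# ALL|ANYONECANPAY (0x81): anyone may still add inputs to bump the fee, but the
# signature commits to the complete output set, so no new outputs can appear.
all_acp = SIGHASH_ALL | SIGHASH_ANYONECANPAY          # 0x81
```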

Optimization / standardness fun

Revault is very customizable. We wanted to study the maximum number of participants for all the transactions to still be standard. The response is of course it depends. It depends mainly on the Unvault script as it is very customizable and the policy can be different depending on the use case of each user. For the Vault script the best one I could come up with is this one which would allow 100 participants before hitting the max standard script size for pay-to-witness-script-hash (P2WSH). This is a standardness limit on the script which is hashed in the witness program of a version zero P2WSH. It is applied on the input when you spend from the P2WSH.
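A rough back-of-the-envelope check of that 100-participant figure, assuming (my assumption, not necessarily the exact script Antoine used) an N-of-N written as a chain of `<pubkey> OP_CHECKSIGVERIFY ... <pubkey> OP_CHECKSIG`, which avoids OP_CHECKMULTISIG’s 20-key cap; 3,600 bytes is Bitcoin Core’s MAX_STANDARD_P2WSH_SCRIPT_SIZE standardness limit.

```python
# Estimate the largest N-of-N that still fits under the P2WSH standardness
# limit, assuming one 33-byte compressed pubkey push plus one CHECKSIG(VERIFY)
# opcode per participant. This is an assumed script layout, not Revault's
# exact Vault script.

MAX_STANDARD_P2WSH_SCRIPT_SIZE = 3600   # Bitcoin Core policy limit, in bytes
PER_KEY = 1 + 33 + 1                    # push opcode + compressed pubkey + opcode

def nofn_script_size(n):
    return n * PER_KEY

n = 1
while nofn_script_size(n + 1) <= MAX_STANDARD_P2WSH_SCRIPT_SIZE:
    n += 1
print(n)   # ~102 with this layout, in the same ballpark as the talk's 100
```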

Then for the Unvault script, as there are too many combinations, I arbitrarily took the number of subset members, the traders, to be half of the number of the participants overall. With such a script, in which the co-signing servers still sign for the parties who don’t sign the Spend transactions, the maximum number was 68. With other combinations we could also hit a limit on the number of push opcodes. Then for the onchain footprint I had a script which would make rough approximations of what it would cost to use the architecture. If I remember correctly it was about 1000 weight units for 3 traders and 6 overall participants. That is quite reasonable. Unfortunately I don’t have a graphic of all the values but it is on the GitHub on another branch.

Oracle fun

Then there are the co-signing servers. In my opinion co-signing servers are one of the biggest trade-offs of the architecture. They are a trusted oracle and you need one for each party not signing the spend transaction. Otherwise this party is just trusting the members managing the funds or the other co-signing servers. It is costly and it is online. It is costly because it adds signatures. Even in the worst case scenario where all the co-signing servers and all the M parties are compromised, the spending transaction still allows a race because it is forced to signal for RBF because of the CSV. The big question was how to get rid of them?
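The RBF-signalling point follows from the sequence encoding: spending a CSV-encumbered output requires setting nSequence to a BIP 68 relative lock, and any such value sits below BIP 125’s opt-out threshold. A small sketch:

```python
# Why a CSV-encumbered input always signals replace-by-fee: OP_CSV (BIP 112)
# requires the spending input's nSequence to encode a BIP 68 relative lock,
# and every such value is below 0xfffffffe, the BIP 125 threshold for opt-in
# replaceability.

BIP125_SEQUENCE_THRESHOLD = 0xFFFFFFFE      # any input nSequence below this signals RBF
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31    # must be unset for the relative lock to apply

def signals_rbf(n_sequence: int) -> bool:
    return n_sequence < BIP125_SEQUENCE_THRESHOLD

csv_15_blocks = 15                           # relative lock of 15 blocks
assert csv_15_blocks & SEQUENCE_LOCKTIME_DISABLE_FLAG == 0
assert signals_rbf(csv_15_blocks)            # so the Spend transaction can always be raced
```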

We thought of ways to eliminate the co-signing servers other than the obvious one, which is N=M: all the day-to-day participants are managers so you don’t need co-signing servers. Otherwise you can pre-sign the Spend transactions, but you can’t always because you need the set of addresses to exist, to be known in advance and to be small enough. For each destination you need to know the set of amounts you want to send in advance. That is really not practical in general but still can be applied in specific cases.

We thought about covenants in the Unvault transaction so it can only be spent by a Spend transaction. Either way it is spent by a Spend transaction, but all covenants, whether CHECKTEMPLATEVERIFY or SIGHASH_NOINPUT, would commit to the amount and destination. It would commit to the amount in the Unvault output at the time of signing. It would commit in advance and that is just not practical. We thought about key deletion but as we discussed you can’t prove that you deleted keys.

Covenant fun (BIP 119 way)

We thought about covenants. We can use a covenant like CHECKTEMPLATEVERIFY to have the Vault transaction commit to the Unvault transaction. This would really be a vault as the vault output could just be spent by an Unvault transaction and could not be bypassed. Maybe Kevin could argue that is not a feature because sometimes it is nice to have a multisig and be able to bypass the architecture. But it is a bit more secure. Unfortunately we can’t use CHECKTEMPLATEVERIFY to get rid of the co-signing servers.
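A deliberately simplified picture of the commitment CHECKTEMPLATEVERIFY would put in the vault output. This is not the exact BIP 119 template hash, which covers more fields and a precise serialization; the toy only shows why the deposit could then go nowhere except into the Unvault script, and why the amount has to be fixed in advance.

```python
# Toy, simplified stand-in for a CHECKTEMPLATEVERIFY-style commitment. The real
# BIP 119 template hash includes additional transaction fields and an exact
# serialization; here we only hash the intended child's outputs to show the idea.

import hashlib
import struct

def toy_template_hash(outputs):
    """outputs: list of (amount_sats, script_pubkey_bytes) of the intended Unvault tx."""
    ser = b"".join(struct.pack("<q", amount) + bytes([len(spk)]) + spk
                   for amount, spk in outputs)
    return hashlib.sha256(ser).digest()

unvault_spk = bytes.fromhex("0020" + "11" * 32)          # placeholder P2WSH scriptPubKey
committed = toy_template_hash([(100_000_000, unvault_spk)])
# The vault output script would then be roughly <committed> OP_CHECKTEMPLATEVERIFY,
# which is also why the amount must be fixed in advance and why address reuse
# with a different amount is dangerous, as discussed below.
```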

Covenant fun (BIP 118 way)

For SIGHASH_NOINPUT covenants it is the same. I didn’t explain it in the last slide but to have a covenant with SIGHASH_NOINPUT there is a nice scheme in which you pay to a signature. In the vault script there is a pay-to-witness-script-hash which is paying to a signature that is only valid for an Unvault transaction. Since this is a SIGHASH_NOINPUT signature only the outputs of the transaction will be hashed. This forces the transaction spending the Deposit transaction to specify an output such as the Unvault. It is really nice. It is more flexible than CHECKTEMPLATEVERIFY. It is a little bit less efficient but it is the same principle. Like CHECKTEMPLATEVERIFY it could be used for the Vault transaction but not for the Spend transactions. It would not eliminate the need for signing servers. For both CHECKTEMPLATEVERIFY and NOINPUT covenants we need a new step at set up. We can’t expose addresses with covenants such as CHECKTEMPLATEVERIFY to the public because the address will be reused no matter what we hope. If they are reused with an amount less than the previous one, for a CHECKTEMPLATEVERIFY or SIGHASH_NOINPUT covenant, the Bitcoin would be lost. If it is reused for a higher amount, all the Bitcoin above that amount would go to fees.

Covenant fun (key deletion)

We also talked about key deletion with Jacob Swambo and Bob McElrath. Kevin was interested in the Bryan Bishop architecture that uses key deletion. It isn’t provable to the other parties. You don’t care if the output pays to an N-of-N multisig. This is what Jacob came up with when we were talking with him. If this is for N-of-N outputs like the Vault script, if you just delete your key you know it won’t be spendable anymore. The pre-signed transactions are effectively a covenant.

It is possible to apply this to Revault but unfortunately it adds a lot of complexity. For example you can’t have a Cancel transaction anymore or you have to sign a lot of pre-signed transaction sets. If the keys for the Vault outputs are deleted and the only way to spend from the Vault is the Unvault transaction, the obvious way would be to have the stakeholders broadcast the Vault, have it cancelled and a new output with non-deleted keys appear. To resolve this you could have deleted these keys and created the signatures for all the next transactions but this adds complexity. We want it to be really practical and we are not convinced of the feasibility of having key deletion for a practical protocol.

BIP fun

Finally Revault is not the only proposal and there are further proposals. With CHECKTEMPLATEVERIFY or SIGHASH_NOINPUT/ANYPREVOUT you can do covenants without key deletion, which is less costly, but none of them are required at all for Revault. There are the BIPs for Taproot and Tapscript which would bring strong cost optimizations and would hide the Cancel transaction path because it is a 4-of-4, but unfortunately not the Spend path: the Spend is not pre-signed, so you have to enforce the locktime in the Unvault output script and that script has to be revealed in order to spend from the Unvault. We are even more eager to see progress in reducing mining centralization, such as Stratum version 2. Having a Revault transaction that confirms is at the base of all our security assumptions. That is why we have complicated schemes to pay honest fees, but we also need to account for miners as attackers in our architecture. We need to account for miner censorship: if mining is too centralized our transactions can be censored.

Is Revault good?

Is Revault good? It is really simple in appearance. It is designed for practical use, for people using Bitcoin today to use something more secure. It is not good but it is better than what is used today. For example it would allow a higher threshold multisig at the root of the architecture for the Vault script. Without this flexibility, without the co-signing server stuff, we would have a lower threshold multisig at the root because some stakeholders don’t want to have the burden of signing all Spend transactions.

Q&A

Bob McElrath: Since you mentioned NOINPUT, there is a way to do deleted keys in a safe way if you have NOINPUT. That is by using ECDSA key recovery. Basically the idea is you generate the signature first and from that you compute the pubkey. You can only do that if you have NOINPUT. There is a mechanism there that could be used with NOINPUT. This is how you prove that you deleted a key: you provide a hash preimage of the signature itself as a binary string. This is now a nothing-up-my-sleeve signature. Unless I can break the hash function or break the elliptic curve discrete log problem I could not have generated that signature even if I had the private key. However this doesn’t work with the current proposal: the NOINPUT/ANYPREVOUT proposal puts ANYPREVOUT in a Taproot script, which additionally commits to the pubkey at signing time. So as the proposal stands now this won’t work. But if you do have NOINPUT generally, there is a way to prove you deleted keys.
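For readers who want to see the mechanics, here is a self-contained sketch of the "nothing up my sleeve" construction Bob describes, using textbook secp256k1 arithmetic and the standard ECDSA public key recovery formula Q = r^-1 * (s*R - z*G). The strings being hashed and the message hash are placeholders; this illustrates the idea only, not the exact scheme from the paper.

```python
# Hedged sketch (not production code): pick (r, s) from hash preimages, then
# recover a public key for which that signature verifies. Nobody can know the
# private key for the recovered point.
import hashlib

# secp256k1 domain parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a, m):
    return pow(a, -1, m)

def point_add(p, q):
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0:
        return None
    if p == q:
        lam = (3 * p[0] * p[0]) * inv(2 * p[1], P) % P
    else:
        lam = (q[1] - p[1]) * inv(q[0] - p[0], P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def point_mul(k, p):
    r = None
    while k:
        if k & 1:
            r = point_add(r, p)
        p = point_add(p, p)
        k >>= 1
    return r

def lift_x(x):
    """Return a curve point with x coordinate x, or None if x is not on the curve."""
    y2 = (pow(x, 3, P) + 7) % P
    y = pow(y2, (P + 1) // 4, P)
    if y * y % P != y2:
        return None
    return (x, y if y % 2 == 0 else P - y)

# 1. Derive r from a public string, grinding until it is a valid x coordinate,
#    and derive s and the message hash z the same way (all placeholders).
ctr = 0
while True:
    r = int.from_bytes(hashlib.sha256(b"revault nums r" + bytes([ctr])).digest(), "big") % N
    R = lift_x(r)
    if R is not None:
        break
    ctr += 1
s = int.from_bytes(hashlib.sha256(b"revault nums s").digest(), "big") % N
z = int.from_bytes(hashlib.sha256(b"unvault tx sighash (NOINPUT)").digest(), "big") % N

# 2. Recover a public key Q such that (r, s) is a valid ECDSA signature on z.
Q = point_mul(inv(r, N), point_add(point_mul(s, R), point_mul(N - z, G)))

# 3. Check: standard ECDSA verification of (r, s) on z against Q.
u1, u2 = z * inv(s, N) % N, r * inv(s, N) % N
X = point_add(point_mul(u1, G), point_mul(u2, Q))
assert X[0] % N == r, "signature does not verify"
print("recovered pubkey with no known private key:", hex(Q[0]))
```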

Antoine Poinsot: That is very interesting. I thought a lot about how to make the spenders commit to the amount in the address at spending time. This would create a cyclic hash.

Bob McElrath: That’s exactly the problem. It creates a cyclic hash. That is why you need NOINPUT because the cyclicality comes from the txid committing to the addresses on the previous output which are functions of the pubkey. That is why it doesn’t work.

Michael Folkson: Kevin said that if you were to set up a watchtower for this Revault design the watchtower would potentially be broadcasting either a Revault transaction or an Emergency transaction. Is that right?

Kevin Loaec: That is right but it is being discussed. I don’t agree with Antoine on this. In my vision the watchtower should only broadcast the Revault transaction and should not have an Emergency transaction. My standpoint for that is that the Emergency transaction is such a burden. In our architecture we assume that if one Emergency transaction is broadcast then all the other ones should also be broadcast. All the funds within the architecture should move at the same time to the Emergency Deep Vault wallet. To me that is a problem because the watchtowers should be trustless. We have this problem that if somebody starts triggering one Emergency transaction it completely breaks the system, at least for a few weeks or months. I don’t know how difficult it would be to recover from the Emergency Deep Vault. That would be a massive denial of service type attack. If any of the Emergency transactions can be stolen then you force everybody to go dig up their emergency keys. You need to wait however long the timelocks are. It could be a very strong attack against us if an Emergency transaction is out in the wild. The Cancel transaction on the other hand is not really a problem. It just goes back to a Vault. The next step is to Unvault which is fine.

Antoine Poinsot: We argue but it is ok, we will agree later. My point on this is that Revault differs from a normal N-of-N multisig due to the Emergency transaction which is a significant deterrent against threats. To keep this deterrent you need to have all the parties including the external watchtowers to be able to broadcast the Emergency. The network watchtowers of each stakeholder might be known while the external watchtowers might not be. We need this to ensure some specific attacks don’t succeed.

Michael Folkson: You haven’t built out a prototype watchtower for this design because you are still finalizing exactly what the watchtower would need to look out for?

Antoine Poinsot: We don’t agree yet but we don’t have an implementation either. We just have a demo, in the demo there are just the network watchtowers. It was just a proof of concept for running functional tests.

Michael Folkson: I think Bryan (Bishop) said last week in the Socratic that he was in the same boat. He hadn’t really thought about the watchtower component.

Kevin Loaec: On the topic of the watchtower I would like to add something that maybe wasn’t explained clearly. The Unvault transaction, the one coming after the Vault, should not be broadcast before the participants or the co-signers have seen the Spend transaction first. First we need to know where the funds are going and then we release the parent of the Spend transaction. That is also part of the job of the watchtowers. If they see on the Bitcoin network that there is an Unvault transaction being mined but they are not aware of what the Spend transaction looks like, this might be an attack. If we don’t know where the Spend transaction is going then we should trigger a Cancel transaction. Part of the job of the watchtowers is to look at the network and see that if there is an Unvault broadcast without knowing where the Spend transaction is going to bring the funds, then automatically we should trigger a Revault transaction by default. It is not too heavy on the process because there is no timelock in there. It is going back to the vault and we just need to pre-sign the Unvault again. That could be done in one block if needed. The protection it brings is really high. The role of the co-signer, like Antoine said, is that there is an attack that is possible and is also possible in Bryan’s design that he published. Assuming we know where the Spend transaction is going and the Unvault transaction is mined, at the exact block where the CSV lapses there could be a race condition between the Spend transaction and an attacker’s Spend transaction if he has all M of the N keys. That’s why we needed to add the co-signers. We wanted a very dumb automated server that signs only once per UTXO. That avoids a double spend at the exact time of expiration of the locktime. That is why we added the co-signing server. The solution that Bryan has is deleting the keys and putting a maximum percentage of funds you can spend each cycle. For us it is a different compromise. As of today we haven’t found an alternative to co-signing servers sadly, but they do serve a real purpose.
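As a rough illustration of the "very dumb automated server" idea, here is a sketch with a hypothetical interface (not Revault's actual co-signing server or API): the server keeps a set of outpoints it has already signed for and refuses to sign a second, conflicting Spend for any of them.

```python
# Hedged sketch of a sign-once-per-UTXO co-signing server. The signing call
# and transaction representation are placeholders, not Revault's real code.
class CosigningServer:
    def __init__(self, sign_fn):
        self.sign_fn = sign_fn            # e.g. a function wrapping an HSM
        self.signed_outpoints = set()     # must be persisted to disk in practice

    def sign_spend(self, spend_tx):
        """Sign a Spend transaction, but only once per outpoint, ever."""
        outpoints = {(txin["txid"], txin["vout"]) for txin in spend_tx["inputs"]}
        if outpoints & self.signed_outpoints:
            raise RuntimeError("already signed a Spend for one of these outpoints")
        self.signed_outpoints |= outpoints
        return self.sign_fn(spend_tx)

# Usage: the first Spend for an Unvault output gets a signature; an attacker's
# competing Spend for the same output, broadcast when the CSV expires, does not.
server = CosigningServer(sign_fn=lambda tx: b"<signature placeholder>")
spend = {"inputs": [{"txid": "aa" * 32, "vout": 0}], "outputs": []}
server.sign_spend(spend)      # ok
# server.sign_spend(spend)    # would raise: double-sign refused
```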

Michael Folkson: I saw some very big numbers in terms of multisigs. This is for designs other than your first client where potentially they would want 50, 60, 70 signers? It seems weird to me that you would want to set up a vault with so many signers rather than just the key board members.

Antoine Poinsot: This was just something I measured so I wanted to talk about it. Maybe someone would want to do this but no, I don’t expect someone to use 68 party vaults. Technically they can on Bitcoin and have their transactions relayed.

Kevin Loaec: It is important to look at the modularity of the architecture because we are targeting institutions but two companies don’t have the same requirements. From an exchange to a hedge fund to a company there are very different needs in terms of liquidity and security. Some companies have very few stakeholders and one guy doing the finances. If it is about moving the cold storage of a few billion dollars maybe you want to have a very big set of keys. We just wanted to study how big it could get. Not that we necessarily want to implement it at that size but at least we know the theoretical limits. It is just a number. Even for that Antoine had to make some assumptions as he said in his slides. He is taking M of N, M being half the size of N. That could be anything from 1 to N. Those could even be different participants. The size is pretty irrelevant depending on how many people, co-signing servers and different participants we want to have. It is very modular and we just wanted to study how big it could get. The reason for that is the Unvault transaction: you have two branches and depending on how we implement it the branch for cancelling a transaction is heavier in terms of signatures than the first branch, if we have fewer co-signing servers than participants for example. Let’s imagine we only have one person that needs to sign a transaction to spend it and we only need one co-signing server, that is going to be two signatures. That is fine. It goes through Bitcoin. What if the transaction we pre-sign for the Revault or Emergency has 120 signers? Then this is a non-standard transaction so we can never recover the funds. That would be quite problematic. We could sign a transaction that would not be accepted by the network. It is very important for us to know the theoretical limits of the network because if the network doesn’t relay our Emergency transaction, too bad.

Antoine Poinsot: The limit I was talking about was policy and not consensus.
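To give a rough feel for where those policy limits come from, here is a back-of-the-envelope sketch using Bitcoin Core's P2WSH standardness constants (a witness script of at most 3,600 bytes and at most 100 witness stack items). The per-key and per-signature sizes are approximations and the script layout (a simple chain of key checks) is an assumption for illustration, not Revault's exact construction, so the resulting number is only an upper bound.

```python
# Hedged estimate of how many participants fit in an N-of-N P2WSH before
# hitting relay policy limits. Consensus caps OP_CHECKMULTISIG at 20 keys,
# so larger sets need another layout, assumed here as one CHECKSIG per key.
MAX_STANDARD_P2WSH_SCRIPT_SIZE = 3_600   # Bitcoin Core policy.h
MAX_STANDARD_P2WSH_STACK_ITEMS = 100     # Bitcoin Core policy.h
PUBKEY_PUSH = 34                         # 33-byte compressed key + push opcode

def fits_policy(n: int) -> bool:
    script_size = n * (PUBKEY_PUSH + 1)   # +1 byte per key for the CHECKSIG-style opcode
    witness_items = n + 1                 # n signatures + the witness script itself
    return (script_size <= MAX_STANDARD_P2WSH_SCRIPT_SIZE
            and witness_items <= MAX_STANDARD_P2WSH_STACK_ITEMS)

largest = max(n for n in range(1, 200) if fits_policy(n))
print(largest)   # 99 here: an upper bound only, the real Revault scripts have
                 # extra branches and co-signers, so fewer participants fit.
```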

Michael Folkson: You said “heavier branch”. An alarm went off in my head: Miniscript. Have you been using Miniscript to get the optimal script by using the weights of the different branches?

Kevin Loaec: Yes and no. I started doing it and then the requirements for our client didn’t work out with Miniscript. Miniscript doesn’t factor out repeated keys. If you have an M-of-N it works. But if you start having different branches with the same key in those different branches, Miniscript doesn’t optimize on that. Manually it was much easier to reduce the size and the cost of the transactions by looking at alternative ways to what can be described in the Miniscript language. We were stuck because if you put A A A as three public keys, Miniscript doesn’t realize that it is the same public key. It is not trying to optimize the weight of the transaction for that, which is annoying.

Michael Folkson: It can’t identify that the pubkey that has been repeated is the same. That’s the only thing that Miniscript didn’t have that forced you to not take advantage of Miniscript?

Kevin Loaec: I think so.

Bob McElrath: This is relayed from my collaborator Jacob Swambo. Can you talk about your development roadmap for the various components and how other people in the ecosystem could participate? I know you guys have a GitHub with your Revault demo but there are other pieces here like watchtowers, management of pre-signed transactions etc. Can you talk about your development roadmap, what the moving parts are and what needs to be developed?

Kevin Loaec: I will let Antoine answer that but just to start, there are different things there. Watchtowers in my opinion should be as generalized as possible. We shouldn’t just have one implementation of watchtowers in my opinion. The way watchtowers are designed, they are not all required to be the same implementation. Everything else is open source.

Antoine Poinsot: We need to hire other people and raise some funds in order to create a company. We expect it to be a year to create a first product. We expect to achieve that but we don’t know if we will be able to raise funds.

Michael Folkson: In the Socratic last week we talked about transaction pinning problems. Jeremy Rubin was saying that a lot of the problems are unlikely to be solved in the short term because there needs to be a total rearchitecting of the mempool in Bitcoin Core. Any views on that? How much understanding do you have on the mempool in Core and how much is that going to be a problem going forward?

Antoine Poinsot: I don’t want to make strong statements about it because I haven’t been contributing much to Core. I know the inner workings of the mempool, and a complete refactoring as Jeremy said last week would be great for all off-chain protocols actually. There are other attacks, like the transaction pinning attack on the Lightning Network that was found about two weeks ago. I don’t know. It is hard. I think I understand the position of the developers of Bitcoin Core, you need to keep it stable and refactoring the whole thing is a strong choice to make. We try to use the tools we have today. I think the ANYONECANPAY approach is not that bad. I think it is pretty workable because there is only one restriction, which is really mild: the second rule from the RBF BIP (BIP 125). I think we can work it out without a refactoring for us. But for the network it would be great.

Michael Folkson: It is interesting that there is transaction pinning and watchtowers, quite a few parallels in terms of the challenges posed to Lightning and vaults. I suppose you are well placed Antoine to be working on both.

Kevin Loaec: That is why I went to Antoine in the first place. Just working on OP_CSV on Bitcoin is a nightmare today. It should be very simple because it was implemented in Core a long time ago but you can still not do your own CSV transactions with Core. It is really hard to find proper tools for that. When looking at Lightning developers every single one has been playing with this because it is a requirement for Lightning. To me the requirement was finding people who have experience working with CSV because it is really important.

Bob McElrath: One other topic I will bring up is the statechain conversation that has been going on the bitcoin-dev mailing list also needs watchtowers. I see a generalized watchtower concept coming. If people are interested in that it is something to work on. At least three different ways to use it: Lightning, vaults and statechains.

Kevin Loaec: Watchtowers are really great because if used properly they don’t add any risk. Using more of them, as many of them as you can, increases your security without decreasing anything. This is quite exceptional. It is very rare in Bitcoin. Usually when you add security somewhere you are creating new risks. Watchtowers are I think the exception in Bitcoin, which is really cool.

Michael Folkson: In an ideal world if you were to raise funds and you could spend more time on this and hire more people would you do a CHECKTEMPLATEVERIFY version of Revault?

Kevin Loaec: Today no, because in my opinion you should not spend too much time developing a product that requires something that is not in Bitcoin today. This is really risky, you don’t know what will change. You don’t even know if it will be added to Bitcoin at any point in time. That is even valid for very big things like Schnorr. Some people were saying more than a year ago that it would be in within 6 months. It wasn’t. Bitcoin takes time. If you developed a really cool multisig system on Schnorr, Taproot, whatever, great for you, but this doesn’t work today. The risk is way too high. If you want to think about a business the risk is too high to use tools that don’t exist today. That is probably the main reason why we wouldn’t even work on a proper implementation that would be production ready before CTV or NOINPUT are added to Bitcoin.

Antoine Poinsot: In my talk when I talked about CHECKTEMPLATEVERIFY it was at a very high level. I didn’t tweak it or even test that it works. It seems to fit the need but I’m not sure, it was just high level.

Michael Folkson: I think Bryan said he had done a CTV prototype. Perhaps if you had the budget you would look at where the design would go in the future but spend most of your resources on what is in Bitcoin currently?

Kevin Loaec: Bitcoin is great for that. There might be different uses for the same thing. Watchtowers are one of them. There are already a few companies working on watchtowers. These companies should be some of the early targets to help us work on these problems. We can probably do everything in house but that is not the point; if other people are already working on really cool stuff it is really good to help each other. Also I think it is starting to be more and more common that companies in Bitcoin make their product open source. Sadly in custody that is not the case. You are not going to find many open source tools outside multisig wallets basically. That is not great. When you are dealing with hundreds of millions of dollars there is no backend that is open source today for securing that. That is not normal. Maybe vaults in general could help pave the way forward to start to have visibility on how things work at custodians.

Michael Folkson: I suppose it is whether you want security in obscurity or whether you want a battle tested open source solution?

Kevin Loaec: In Bitcoin and other open source projects we talk about obscurity being the enemy of security. That is ok for everything online. As soon as you work towards physical threats, obscurity is always stronger at least for now. This is a problem that we have in security in general. It is easier to protect yourself if people don’t know the layout of your house. It is easier to protect yourself if they don’t know whether you have a gun or not. This is also the case with hardware security modules. This is the big debate. Is Trezor better because it is open source or is Ledger because it is using a closed source thing? Attacking a device that is not open source is really hard because you need equipment that costs a lot of money. But has it been reviewed enough, is there a backdoor? It is always a compromise somewhere. Typically when it is about physical security obscurity is a great asset.

Michael Folkson: If someone is interested in learning more or contributing to Revault in the future what would you advise?

Kevin Loaec: It depends. If somebody really wants to move forward on just the idea of vaults I think right now it is good that we know each other. Dealing with Bryan, Bob, Spencer, Jacob, it is a really small crowd. I think we are getting on well with each other. Hopefully we can keep moving forward in reviewing each other’s work and contributing to each other’s work. Every security product needs a lot of auditing and review. If you are putting millions of dollars on it you need to make sure it is working. As you have seen in this presentation there are a lot of edge cases, weird things around fees and transaction pinning. Things that people don’t usually look at. We need to keep digging because maybe we are missing a massive point. Miners as well are important. Usually we assume mining is secure and we only think about the script. But the script is not everything. Bitcoin still has human players and as Antoine was saying, if the miner is the attacker what do we do? It starts to be much harder. There are a lot of things that we need to look at. It is not just on the development side, it is also on the architecture side. For the architecture, right now we are looking at building a Rust implementation. That could change. There are more and more people contributing to Rust as well which is good. Square Crypto is contributing a lot which is good as well. Maybe we will start seeing a bigger and bigger Rust community in Bitcoin. Reaching out to us would be good as well. Right now raising funds is the way to go for us because we really want to put this into production. There have been a lot of vault proposals over the years but sadly none of them are available as a product. The best we can find today is a Python demo. That is cool but not something you would put your money on. At the very beginning of this project when I reached out to a few people my idea was to set up a non-profit foundation to sustain the development, while the users, the exchanges and other companies would have a subscription with this non-profit. That could be a good way to go for the very long term. But in the short term the paperwork for that and figuring out who would be a subscriber is much more annoying than just generating normal revenue through invoicing. Invoicing is “can you customize Revault for us?”. That is a service we can provide. All work is done open source so we can release the software to everyone. For us that is the way to go right now. A private company and later we will see. But definitely everything is open source and there is no intellectual property as such behind it.

Michael Folkson: The sales pitch is that whenever we have a bull market there are going to be tonnes of smaller custodians that are going to need a solution like this. It is not just going to be the big exchanges.

Antoine Poinsot: We even thought about ATM operators. Even small players.

Kevin Loaec: Another thing I want to add is that I don’t understand why there are so few physical attacks in Bitcoin. The security of many businesses in Bitcoin is really low. I am not saying that people should do this but it is really impressive that there are still people going into your house to steal your TV, which is much harder to resell, rather than going to a Bitcoin business to steal their Bitcoin. We are very lucky that for eleven years we have had only a few physical attacks. I don’t think it is sustainable. If there is another big bull market we will see more criminals entering the space. We need better security. Even if it is just better multisig, we need better security than we are using today. To me it is more that than just the price going up. It is the risk going up. This is not good. I don’t feel really safe even though I have good key management. I would like to have more options such as cancelling transactions even in a personal wallet. If I send Bitcoin to somebody, how many times do you check that the address is right? How many times do you check that the amount is right? We put a lot of pressure on ourselves when we could have a button to allow us to claw it back in one hour or whatever. There are a lot of things I would like to see.

Michael Folkson: I don’t have any more insight than you but I have heard that exchanges have very complex cold storage solutions. It is complex physical security rather than complex software security or taking software security to its full potential. Bob, can you answer Antoine’s question on what you think about miner censorship of revocation transactions? I also saw that you dropped a paper today. Perhaps you could summarize what you published with Bryan, Jacob and those guys.

Bob McElrath: Miner censorship of transactions like that is a 51 percent attack. It only works if they have 51 percent of the hashpower. If that is the case it is game over all around. There is really nothing you can do about that. As to our paper, we dropped our paper today. This is the first of two papers. The second will focus more on the mechanisms for deleting keys. The first one is an architecture that is similar to Revault but uses deleted keys instead of the multisig. Happy to have this conversation continue and come up with some interesting solutions.

Michael Folkson: What were the key findings of the paper?

Bob McElrath: This paper is not so much a findings kind of paper, it is more of an engineering kind of paper where we describe an architecture and analyze some of its security properties.

Media: https://www.youtube.com/watch?v=7CE4aiFxh10

Location: London Bitcoin Devs (online)

Kevin slides: https://www.dropbox.com/s/rj45ebnic2m0q2m/kevin%20loaec%20revault%20slides.pdf?dl=0

Antoine slides: https://www.dropbox.com/s/xaoior0goo37247/Antoine%20Poinsot%20Revault%20slides.odp?dl=0

Intro (Michael Folkson)

This is London Bitcoin Devs. This is on Zoom and live-streaming on YouTube. Last week we had a Socratic Seminar on vaults, covenants and CHECKTEMPLATEVERIFY. There is a video up for that. There is also a transcript, @btctranscripts on Twitter. Regarding questions, Kevin and Antoine are going to pause during the presentation for questions and comments. We are not going to have interruptions so we will let them speak until they pause. There will be a Q&A afterwards. Kevin will start from basics, foundation level, so don’t ask advanced technical questions at the beginning. He will cover the more advanced, technical stuff in the middle and the end. Apart from that I just need to introduce Kevin and Antoine. Kevin Loaec I know from the Breaking Bitcoin and Building on Bitcoin conferences which were amazing. Hackathons, conferences and all the events around that. Also Kevin works at Chainsmiths and is building a growing Bitcoin community in Lisbon. Antoine (Poinsot) spoke at London Bitcoin Devs in person a couple of months ago on c-lightning plugins. Watch that presentation on YouTube, it was really good. Today he is going to talk about vaults, specifically the vault design “Revault”.

Revault - A Multiparty Vault Architecture (Kevin Loaec)

Thanks for the intro. I am going to try to make my part of the presentation about 40 minutes long. Then Antoine will go through the hardcore discussion on deep technical details after that. It is going to be long. I will try to make some pauses for you to ask questions but don’t hesitate to ping me. We are going to talk about Revault. Revault is a vault architecture that Antoine and I have been working on for the past 5 months now, since December. As you will see it is targeted towards institutions, not individuals as such. This presentation will cover everything from the basics of how we do security today on Bitcoin for individuals and institutions. Then I will look at the different things we can do with Bitcoin today for individuals and institutions. Then I will go directly into Revault, what we have built, what we have and how it works. Then Antoine will talk about very specific details of the architecture. If you are really into Bitcoin development you might have some questions or you might not have considered the attacks that are possible. Antoine will cover a lot of those. It is really interesting. The longer it goes the more technical it will be. If it is too simple for you at the beginning that is normal. It will get increasingly more advanced. If you are a beginner and at some point you drop out, expect the rest to be even more technical. I am trying to do my best to not lose anybody at the beginning. Hopefully it will not be too simple either.

Ownership model

The first thing I want to talk about is how we do Bitcoin today. We have ownership, how do we own Bitcoin? We have the question that is who controls the keys? This is the first part of how you can protect your Bitcoin. You have three choices here. You can deal with a third party custodian. Let’s say you leave your coins on an exchange. You don’t control the keys. It is not really your Bitcoin. That is one way of doing it. The other way is self-control of your keys. If you own all of your keys then you really own your Bitcoin. Nobody can block them or steal them away from you if they don’t have your private keys. Then you also have the mixed model where you and another party have keys to move the coins. This is about control. A mixed example could be the Blockstream Green wallet. It has two signatures, you sign one then you send the transaction to them and they co-sign it. Even if your key is lost that is fine because nobody can steal your funds without having the key from Green.

Key Management

The second question on ownership is about key management. The first slide was about who has these keys but the second one is about how these keys are secured. You have different approaches again. One popular one is called cold storage. You might have heard about that from exchanges for example. Cold storage is an address or a set of keys that you are not supposed to ever touch. Those are the ones that have been generated offline, have never touched a computer connected online and are never supposed to be used until you need to move the coins. Think of that as like burying a treasure chest in your garden. A second one could be having hardware or offline wallets. An offline wallet could be a laptop that you removed all the connectivity from. You removed the wifi chip, you removed all networking capability and you generate your keys on this thing. This will never touch the internet. This is a little bit different than cold storage. This is supposed to be used to spend your coins. The private key itself will never touch a computer connected online. This is probably what most people use. Back in the day it was a paper wallet like cold storage. You would have the ones that you never use and then you would have a small wallet on your phone or computer that you would use for transactions. Now most people are using hardware wallets. Then you can go yolo, meaning you have all your funds on your phone or all your funds on your computer. This is extremely risky of course because the exposure is extremely high. Your computer could be stolen, malware could be installed on your computer, you have a lot of risk there. Don’t ever put all your funds on a laptop or on your phone. Then you can also play around with multisig. You could have multiple hardware wallets. You could have one key on a hardware wallet, one key on your phone, one key on your computer. In that case even if one or multiple keys leak that is alright depending on how the multisig is designed. These two slides, the first one (who controls the keys) and the second one (how they are kept), define every way of storing Bitcoin today. As far as I know that is 99.9 percent of how coins are secured. Your own Bitcoin are probably behind one of these solutions or a mix of two. This is not good enough. This is really not good enough. This is just about securing keys, we are not talking about securing the Bitcoin itself. As soon as the keys are leaked your Bitcoin are gone. Vaults are addressing that. When we say we are dealing with a vault we are not dealing with just a key solution. We are dealing with a way to restrict how Bitcoin can move. That is the main point I want to cover here.

Threat model

Then you have threat models to discuss. Threat models are quite complex to list because they depend on each person or each group of people. The main one up until now was the online threat: how malware can steal your keys. That is why your keys should never be on your computer or on your phone. This is the main threat we have today. Then you have physical attacks: somebody coming to you and threatening you to give them the Bitcoin. As you can already imagine, if I come to you and I start breaking your fingers you probably will dig up this cold storage wallet you have in your garden. Being in the garden doesn’t mean you can’t access it. If I really force you to do it you will give it to me. That is not enough as a security model. Then you have third party risk. Of course if you give all your Bitcoin to Coinbase then you are protected from the random guy coming to you, but Coinbase can now steal your Bitcoin. You are creating other risks in solving the first one. Then for multiparty solutions you also have insider attacks. When it is only your Bitcoin you are fine. If it is a company it is a little bit harder. How do you deal with your co-founder in a startup going away with the coins? How do you deal with your IT guy having access to your private keys on the exchange and stealing them? That is another problem that we have to think of. When you are designing security solutions around keys you are trying to think about that. When you are designing vaults you really are trying to figure out all the different types of attacks that can happen. If they can happen they will happen, especially for big players like exchanges.

What we want

What we want, this slide is about what we are trying to achieve here. A perfect solution for vaults would look like this. We would be able to restrict where the funds can go. That is saying that if an address is not in the whitelist your UTXO cannot move to it. This is impossible to do today with Bitcoin. We would like to restrict how much Bitcoin can move, maybe per day or per week. Let’s say you are super rich and you have 100 Bitcoin somewhere and you want to protect yourself by allowing yourself to only spend 1 Bitcoin a month; right now that cannot be done. The Bitcoin network doesn’t care how many of your Bitcoin you are spending. A vault or a covenant could be interesting for doing that. If you know you are not supposed to spend more than a Bitcoin a month, can we somehow enforce this at the protocol level or on a second layer? Then you also have the when. That could be for example during business hours if you are a business. The funds should not move during the night because your employees are only there during the day. Could we enforce that? Or as an individual it could be only during the hours that you are awake. You know that a thief, even if it is malware, wouldn’t be able to move your funds during the night. This is a restriction we want in a vault. As we discussed before most of the solutions we can think of add risk. Let’s take the where here: if I only allow myself to send my coins to Kraken, what happens if Kraken disappears? Are all my coins locked forever because Kraken doesn’t exist anymore? This is really difficult. Or they make the system unusable. Usability is interesting as well. In security you have this principle that if your system is too hard to use or too annoying to use people are not going to use it. It might be secure but just because it is annoying to use on a day-to-day basis people are going to bypass it. Your security is defeated because it was unusable or too hard to use. This is very important when we design security solutions.

Theory is cool but…

What I wanted to cover here is that Bitcoin is in theory pretty cool but the practical Bitcoin we have today is a very limited set of building blocks. We cannot do whatever we want with it so we have a specific set of tools we can play with. We can probably build anything with it but we have to be really smart about how we do it. We cannot just ask the Bitcoin Core developers to implement a new covenant opcode: “Can you please do it?” Everything takes ages to be added and modified in Bitcoin because that is the way Bitcoin is designed. I am really happy with that. I am ok with having a Bitcoin that is pretty restricted for security reasons but that also means I need to play within the rules. I have to use whatever basic blocks I have and build with them. I cannot just expect Bitcoin to change to fit my needs. That’s why today vaults are not available on the market. Nobody is offering vaults-as-a-service today. We don’t have software to do that. Of the software you can get today on Bitcoin, the most advanced would be multisig. We want to go further than that.

What we have

What we have today on Bitcoin are these basic blocks. These are probably the two we can use at the protocol level. We can use multisig. That is having multiple keys required to sign to be able to spend a transaction. That could be multiple people or not. That could be multiple devices, whatever. We also have timelocks, I am going to cover timelocks after that. Lastly we can also imagine that we have tamper-proof hardware. That could be a hardware wallet, maybe one that is more programmable. A Coldcard or something like that where you would put some of the restrictions on the hardware itself to emulate having this third party checking that your rules are ok. What we don’t have again is covenants. What is a covenant? To explain what Bitcoin can do today, we have the outputs in every transaction. The output defines how the funds can then be spent. The output has conditions on how the next person or the next spend will be able to spend it, but that only constrains the spending inputs. You cannot have a restriction on the outputs of the child transaction. Meaning that if I send Bitcoin to Michael, I can put some restrictions on this. I could say “Michael will not be able to spend this transaction before December 1st” through a timelock. I could do that. Or I could do “I will send these coins to Michael but I also need to co-sign. I need a multisig with him.” What I cannot do is put a restriction on what the outputs of the spending transaction will be. I cannot say “The coins I’m sending to Michael can only go to Kraken afterwards.” There is no way of doing that today. That is what we call a covenant. We do not have covenants on Bitcoin today. There are a lot of proposals and I think Antoine is going to talk about them later. Also the meetup we had last week was about CHECKTEMPLATEVERIFY, which is one of those. With CHECKTEMPLATEVERIFY you can enforce a restriction on the outputs of a child transaction. Today we don’t have that in Bitcoin. Revault is not using this covenant thing. We are trying to emulate it with the tools we have today.

Timelocks

The timelocks we have: in Bitcoin you have four of them. You have Locktime and Sequence that you can set at the time of crafting a transaction. When you are spending a transaction you can add a locktime or add a sequence. The Locktime is about putting a date (a time in blocks, but let’s say a date, e.g. December 1st). This transaction will not be able to be mined before this date is reached. This is pretty cool. Sequence is relative. Sequence is about how long ago the parent you are spending, the UTXO you are spending from, was mined. If you sign a transaction with a Sequence of one week it means that you need to wait one week after the UTXO you are spending from has been mined. It is just a delay between the UTXO you are using and the transaction spending from it. These two are interesting when you are creating the transactions but sometimes they are not enough. If you have the private key you can double spend them. You can do another transaction spending the same outputs and remove the Locktime from your own transaction. It is not really a restriction, it is somewhat restrictive but you can bypass it if you want. An attacker having your keys doesn’t care about your Locktime and Sequence. That is why the two opcodes OP_CLTV and OP_CSV were created: CheckLockTimeVerify and CheckSequenceVerify. These are put on the outputs of a transaction. That forces the one that will use this output later, the child transaction, to have a Locktime or Sequence that corresponds to whatever is restricted here. I could have a deposit address where, when I send funds to it, it enforces a delay of like 3 hours before I can spend them again using a CSV. This is cool. Now we are starting to talk about interesting things that we can use in weird vault designs. I just wanted to show you the different timelocks we have. Maybe you can start thinking about how we can use these together to build some of the restrictions we were trying to do earlier. As a note here, Bitcoin doesn’t have an idea of time, or at least it is not clear. You can do time from when a transaction was mined because mining is consensus. You cannot do time since a transaction has been seen in the mempool. The mempool is not consensus, at least not everybody has the same mempool. We cannot do timelocks from the time of signing. This is somewhat annoying because we would like that in vault systems. For example, if we had something that let me have a lock from the time of signing I could say “I can sign a transaction but I know while signing it that I need to wait one day before it can be mined.” This is not possible today. That is annoying and we have to work around it.
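To make the distinction concrete, here is a small sketch (pseudo-script with placeholder values and key names) of the four timelocks: the first two are fields a signer sets on the spending transaction, the last two are opcodes in the output script that force the child transaction to set those fields.

```python
# Hedged sketch of the four timelocks described above. Values and key names
# are placeholders; scripts are written as human-readable opcode lists.

# 1. nLockTime: a field on the spending transaction ("not minable before...").
spending_tx_locktime = 900_000          # a block height (or a Unix timestamp)

# 2. nSequence: a field on an input ("the UTXO I spend must be N blocks old").
spending_tx_input_sequence = 144        # ~1 day of blocks

# 3. OP_CHECKLOCKTIMEVERIFY in the *output* script: forces the child tx to
#    carry at least this nLockTime, so even the key holder cannot spend early.
cltv_script = [900_000, "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
               "<my_pubkey>", "OP_CHECKSIG"]

# 4. OP_CHECKSEQUENCEVERIFY in the output script: forces the child tx to set
#    an nSequence of at least 144, i.e. the deposit must be 144 blocks old.
csv_script = [144, "OP_CHECKSEQUENCEVERIFY", "OP_DROP",
              "<my_pubkey>", "OP_CHECKSIG"]
```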

Single party advanced mitigation

The title here is probably not right. I wanted to start talking about how we can do advanced mitigations, things we are not using today, to emulate these restrictions on where, how and when. The most basic one you can do is adding annoying processes. For example you have a 2-of-2 and you put one of these two keys in a bank vault. You cannot spend during the night because the banks are closed at night. That is emulating your covenant on when you can spend from it. You can also have a notary that will make sure that the address you are spending to is one of your whitelisted ones. This is cool but it creates other problems. What if the bank loses your vault? What if the notary cheats or doesn’t want to sign any transaction? That is one tool we could use but it is not great.

Timelock

We have timelocks as we just talked about. Timelocks are more interesting in terms of vaults, or at least covenants that you can do yourself. If you deposit to an OP_CLTV saying December 1st you know that you cannot, even if you try, spend this transaction or this output before December 1st. This is great if you really know you don’t want to be able to spend your funds before a certain date. If you want to hold, or you want to give a gift to somebody and you tell them “You can’t use it before you are 18 years old”, you could enforce that in the Bitcoin transaction itself. That is starting to be interesting now. OP_CSV is the same thing but from the time the deposit was made. It is maybe a little more flexible because you could say “I’m going to do some dollar cost averaging on Bitcoin and every time I purchase Bitcoin I want to only be able to spend it in 2 years.” Every time you do a transaction it will move forward 2 years for this specific output. This is cool but even you cannot spend from it. It is not only against an attacker, it is a real covenant: nobody can spend these funds before the date has been reached.

Key deletion

Then we have key deletion. This is more interesting now. We are starting to get into the ideas that are not really obvious. This has been proposed many times on Bitcoin, it is not new. But it is not something you would think of while looking at Bitcoin and what you can do with transactions. Key deletion is a great way to enforce that something will happen. You can sign a transaction from a deposit UTXO, you receive some money. You can sign a transaction spending from it and then you delete the private key. You are not broadcasting the transaction yet, you are just signing it and keeping it. You delete the private key. That means that nobody, not even you, can now move these funds in any way other than the one you already signed and kept in your backup. You don’t have the private key to change this. This is a way of creating a covenant where you could choose how much is moving and where it is moving. You can put timelocks if you want, you can do a lot of things. The major risk you have here again is that you don’t have the private key. If anything is wrong with it, it is too late. If you lose this pre-signed transaction, because you don’t have the private key anymore the funds are gone. It is a risky one but you can try to play around and make cool things.
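The key deletion workflow can be summarised in a short sketch. All helper names here are hypothetical placeholders, not an existing library; this is only pseudocode of the flow just described.

```python
# Hedged pseudocode of the key-deletion covenant workflow described above.
# generate/build/sign/backup/erase helpers are hypothetical placeholders.

def lock_with_key_deletion(vault_key, deposit_utxo, destination, amount):
    """deposit_utxo pays to vault_key; after this call, the pre-signed exit
    transaction is the only way the funds can ever move."""
    exit_tx = build_tx(inputs=[deposit_utxo],
                       outputs=[(destination, amount)])
    signed_exit_tx = sign(exit_tx, vault_key)   # the one allowed spend
    secure_backup(signed_exit_tx)               # losing this means losing the funds
    secure_erase(vault_key)                     # hard to prove, as discussed below
    return signed_exit_tx
```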

Co-signer (secure hardware)

Then you have a co-signer. A co-signer can take different forms. It could be secure hardware where you enforce checking the rules, maybe checking a whitelist or whatever. It could also be a third party if you want to use Casa or BitGo or whatever. In that case you would do a 2-of-2: you would need a co-signer to approve every transaction you do. You sign your transaction first, you send it to the co-signer and if the spending conditions are not met, maybe you spent too much this month, then it is not supposed to sign. This is another thing that could be fun. You can automate things, you can build it at home. If you know it is secure hardware and it is not easy to modify, that could be a good way to ensure your transactions are not completely unchecked or too easy to spend.

Clawback

Now let’s go to the fun stuff. This is an idea from 2013 which is a clawback. This is also where the name Revault comes from. It is about sending a transaction back to a vault. Doing this is quite interesting. It looks quite heavy so I am going to explain this. You start with a transaction. Let’s call it the vaulting transaction, where you have an output that will be spent with a pre-signed transaction that has two different exit points. Either you use your hot wallet key, like any hot wallet on your phone, plus a CheckSequenceVerify of a few hours. The other way of spending from it is to use the clawback transaction that sends the funds back into a vault. The key is deleted here. Now what it means is that we are back to the key deletion slide that I had before. This vault transaction can only be spent by this transaction because we deleted the private key. If you want to use the funds either you use your hot wallet key and you have to wait for the delay of the CSV, or you can also trigger a clawback which instantly, without the delay, can put these funds back into a vault. Giving back the same protection. Sending them to a different set of keys no matter what. You can choose this at the time of crafting your clawback. Because the clawback doesn’t have the OP_CSV here you also need to delete the key for signing this clawback. You have to delete the key for the unvaulting transaction and you have to delete the key for the clawback transaction. You don’t have to do it this way but the good way of doing it is that the clawback itself is also a vaulting transaction. You are doing a kind of loop here. You also need to have deleted the key of the unvaulting at the bottom here. That means you also need another clawback here whose key also needs to be deleted. To build such a transaction you can do that with SegWit because now we have more stability in the txid. But the problem is that you should already pre-sign and delete all your keys before doing the first transaction in it. In this type of vault, because the amount would change the txid, you have to know the exact amount before you are able to craft the transaction here, sign it and delete the key. Same here, you need to know the amount you are spending from because otherwise the txid would change. When you do a vault with a clawback you need to know how much money you are receiving in it. If somebody reuses the address that you have in the output of the vaulting transaction and sends any other amount of funds, the txid would change, so you would have no way of spending from it because the private key has been deleted. It works but you have to be extremely careful because once you have deleted your private keys it is too late. If you make any mistake it is gone. Another thing is that if you lose the pre-signed transaction your funds are gone as well. You need to do backups of all of that. That is what Bryan Bishop proposed to the mailing list with his implementation recently. The first time it was discussed was probably in 2013. Also Bob McElrath has written a paper at some point on this, maybe 2015. A lot of people have thought about such a protection mechanism. I think it is pretty cool. We are not using this in Revault but this is the most advanced thing you can do right now, although no one has it in production anywhere.
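The two exit paths from the unvaulting output can be sketched as a pseudo-script (placeholder delay and key names, not Bryan Bishop's exact scripts): the hot key path has to wait out the CSV, while the clawback path is a pre-signed transaction whose signing key was deleted.

```python
# Hedged pseudo-script of the unvaulting output described above.
unvaulting_output_script = [
    "OP_IF",
        18, "OP_CHECKSEQUENCEVERIFY", "OP_DROP",   # a few hours' delay (placeholder)
        "<hot_wallet_pubkey>", "OP_CHECKSIG",      # normal spend path, after the delay
    "OP_ELSE",
        "<clawback_pubkey>", "OP_CHECKSIG",        # clawback path, no delay; the matching
    "OP_ENDIF",                                    # private key signed the clawback tx
]                                                  # (back to a new vault) and was deleted
```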

Bob McElrath: Two little points. One is that you discussed having the ability to do a timelock from the time of signing. It is cryptographically possible using timelock puzzles or verifiable delay functions but I think this is very far from anything people would want to use right now. For those that are interested in crypto it is something to keep an eye on. Secondly, somebody asks in the chat how you prove the private key was deleted. Generally you can’t. An alternative is to use ECDSA key recovery but that cannot be used with Bitcoin today because the txid commits to the pubkey which you don’t know at the time you do the key recovery.

For a single party it is fine, you can somewhat prove to yourself that you deleted your private key. In a multiparty setting it is much harder because you cannot prove to someone else that you did.

Bob McElrath: It is a little bit worse than that. You have some hardware that generated a key and you want it to delete the key. You should ask questions about whether that hardware actually deleted the key. If you’re using Linux and it has a swap file and memory paging this key could exist in multiple places. It is not just that I ran it on my computer, you have to get down to the hardware level and lock the memory and overwrite the memory. Then you can ask questions about things like Rowhammer, could I pull that out in some other way? It is very, very hard if not impossible to prove you deleted a key, even to yourself.

You could imagine some way but sure. This is not easy to do.

Institutional mitigation

Let’s talk about institutional mitigation which is a little bit different because usually it is multiparty. Also the impact on operations is somewhat important. Of course it depends on who you are dealing with. For example if it is an exchange they need to access their cold storage quite often. That defeats the purpose entirely. A cold storage should never be used. We need to have ways that can be used without breaking the security too much. This is one of the focuses of Revault. Another thing with institutions is that we have another risk which is insider risk. Of course when you are alone you are not scared of yourself stealing your own Bitcoin. Bitcoin is very similar to cash in the sense that if you have a business and a till with banknotes in it, it becomes really hard to secure this. If you want to have normal operations you need people to be able to take cash from this till, but you don’t necessarily want every stakeholder in the business to be there with a multi key thing to open the till every time. How can we avoid a heavy impact on operations while being able to emulate some enforcement of things that a covenant would enable? The incentive to attack is higher for businesses because usually businesses have much more money than individuals. When we are talking about an exchange we are talking about some of them having cold storage with more than a billion dollars of Bitcoin behind a single private key. This is insane. This is what we have today in terms of security. You would be really surprised how bad security is in Bitcoin today, at least in key management.

What most people use today is multisig. Let’s say 2-of-3, m-of-n. Usually you don’t have n-of-n because that would mean that if one party loses their key the entire fund is locked. That is a big problem. You have ways to mitigate that with proper backups and things like this. What you mostly see in businesses is a multisig m-of-n. That can be done in house like some exchanges do or that can be done with a third party like BitGo for example. Then you also have the cold storage with frequent access which kind of defeats the purpose of cold storage. Most of these are often delegated to a third party. There are a lot of businesses today that exist just to provide custodial solutions for other businesses in Bitcoin. They charge a lot of money for that. At the end of the day these custodians also need to secure the keys somehow. We are back to the same problem. How are they keeping the keys? Is it just a multisig? What size of multisig? Is it just a cold storage? How is it kept? You always have a problem there. They would say they have insurance but insurance is not a security solution here.

Revault

Let’s start talking about Revault. In Revault we are trying to solve a lot of things at the same time. It started with a hedge fund contacting us. This hedge fund wanted to be capable of self hosting their own custody without impacting their operations. They also have this interesting thing that they have only a subset of active traders while the other ones are the partners in the fund. They wanted some security that would enable having the maximum security possible without impacting their operations. Revault covers up to N-1 participants being compromised. This is pretty cool. Compared to for example a BitGo 2-of-3, in a 2-of-3 you cannot sustain N-1 being compromised because you only need 2 keys to spend the funds. Revault is still secure until you have every single key being compromised. We also have physical threat mitigation. If somebody were to threaten all the participants in our scheme at the same time we still have things to partially protect the funds; only a small part of them could be stolen. So we still have a partial mitigation even with N keys being compromised, which is quite interesting I think. We also have a very low operational overhead. I will show you how. In Revault it is approve-by-default. Every spend is approved by default but can be cancelled, instead of having to ask all the stakeholders for agreement every time. There is a bypass possible in case of emergency. That means for example you see the price of Bitcoin mooning or crashing and you need to move the funds right now. You don’t want to be held up by the timelocks we have, which would be a few hours. If all the participants want to and they are not under threat then they can move the funds without being encumbered by the timelocks we have. This is very interesting for the hedge fund for example. Then we have the panic button that sends the funds into the Emergency Deep Vault. As you will see later in the slides the Emergency Deep Vault is basically a different address that is more secure than the most secure address we have in our vault system. In our case we have an N-of-N. The Emergency Deep Vault should be harder than that to spend. That could be N-of-N plus a long timelock, with the keys separated into different geographical locations, or an N bigger than the original N. A lot of things like that. This is a way to escape the architecture if anything goes wrong. To avoid sending the funds back to the vault it goes to a completely different security solution. Another thing different from other models and other proposals out there is that the funds are secure from the time of deposit in Revault. We don’t have an intermediary wallet that needs to put the funds in the vault. For us deposit equals secure. The deposit is a 4-of-4, N-of-N as you will see.

We are not covering an insider locking the funds, refusing to sign. This is something that is possible. If they don’t sign, the funds cannot move. That is fair. Having a way to bypass somebody not signing would mean we could not cover N-1 participants being compromised; maybe the one not signing is the only guy that is not compromised. We cannot bypass that. We have partial mitigation though. It doesn’t mean the funds would be locked forever, it is only a limited amount of the funds that would be locked. Then a little harder would be an insider burning funds, deleting their private key and deleting all backups of their private key. Deleting the Emergency Deep Vault keys could also happen: if somebody got really mad they could delete every single key and backup of keys they have. Depending on how the vault is designed the funds might be lost forever. This is something to have in mind when deploying such an architecture.

This is a graph that we have of what it looks like. I will try to explain it slowly. I know it is a lot, it is quite heavy. I will do my best. We have deposit transactions that create outputs. These outputs are already vaults. As I said before the funds are secure from the time of deposit. These deposits, let’s say you are getting money from an exchange or a client is sending you money, the address you are giving them is already encumbered by an N-of-N multisig. That will be what we call V, the vault UTXO. To spend from this UTXO you would need a transaction that we call the Unvault transaction, or you would need the N participants agreeing and sending the funds out of the vault. That is what I called the bypass before. If there is an operational emergency where the funds need to move right now without a timelock, the N participants can do it and spend the funds. That would not be possible if we had key deletion but we do not have key deletion right now in Revault. When you have a deposit, when you receive funds in a vault, we need the N participants to pre-sign a set of transactions. Typically they would do that once a day. In the morning they come to the office and every one of these N participants, in his routine, would have to check if there have been any deposits. If there have been, he should pre-sign the transactions for them. The transactions to be pre-signed are the two Emergency transactions, one spending from the vault directly and one spending from the Unvault transaction. He also needs to sign a Cancel transaction, which is a transaction that spends from the Unvault transaction and sends the funds back to a vault. I know it is quite weird if you have never seen vaults before. This is now quite common, it is a clawback. If you have never seen something like that before I know it is kind of weird. We are pre-signing these 3 transactions, the 2 red ones and the orange one. After that we sign the blue one here, the Unvault transaction. Then we keep these 4 pre-signed transactions somewhere safe. If somebody wants to spend funds, let’s say the traders or whoever is authorized to spend funds, he crafts a normal transaction that we call the Spend transaction. This transaction is effectively timelocked because it can only spend from the output of the Unvault which enforces a timelock. The only difference in operations is that when the person authorized to spend is doing his transaction he will have to wait a few blocks to have his transaction mined. In the meantime, during this timelock, we can have a watchtower or one of the participants triggering a Revault transaction, a transaction sending the funds back to a vault, or an Emergency transaction sending all the funds to the Emergency Deep Vault, the thing that is outside the architecture if something really goes wrong.
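For reference, the transaction graph just described can be written out as a small data structure. Names and the layout are illustrative only, not Revault's actual code.

```python
# Hedged sketch of the pre-signed transaction graph Kevin walks through.
revault_tx_graph = {
    "deposit":           {"creates": "vault_utxo (N-of-N)"},
    "unvault":           {"spends": "vault_utxo", "pre_signed_by": "all N stakeholders"},
    "cancel":            {"spends": "unvault_output", "sends_to": "a new vault_utxo"},
    "emergency":         {"spends": "vault_utxo", "sends_to": "Emergency Deep Vault"},
    "unvault_emergency": {"spends": "unvault_output", "sends_to": "Emergency Deep Vault"},
    "spend":             {"spends": "unvault_output",
                          "signed_by": "M managers + co-signing servers",
                          "valid_after": "the CSV delay"},
}
```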

Just to clarify what it looks like. These vaults are encumbered by N-of-N. You need all the participants to sign to move them. This is what we use to actually pre-sign the transactions. They use this path. The Emergency that goes down here is pre-signed with these N keys. This Unvault transaction is pre-signed with these N keys. This Unvault transaction also looks like this. It is pre-signed and it has two conditions. Either M out of N, M can be anything from 1 to N, that’s the authorized people. Then you have co-signing servers, that Antoine will talk about, to fix a specific attack against our architecture. And OP_CSV that enforces the timelock. Either we have this subset of authorized people who need to wait for a timelock, or we have all the participants agreeing. This N-of-N is not really meant to be used by people directly. It is just to trigger the pre-signed Cancel and Unvault transactions that are using these N signatures. I think that is as far as I can go in explaining this. Lastly the Emergency Deep Vault (EDV), the conditions to spend from it are just strictly harder than M-of-N. This is the only condition we have. The obscurity in this is important. Think of an attacker from the outside scheduling an attack to steal your money. It is already pretty hard to steal from this, but if on top of it he doesn’t have any idea of how hard the EDV is to spend from, maybe it will require even more than the N participants, this is a very strong deterrent against an attack.
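
(The spending conditions described here can be sketched as Miniscript-style policy strings. The snippet below is only an illustration under assumed parameters: the placeholder key names stk_*/mgr_*/cosig_*, the counts and the CSV value are assumptions, not the actual Revault descriptors.)

```python
# Illustrative sketch (not the actual Revault descriptors) of the spending
# conditions described above, written as Miniscript-style policy strings.
N_STAKEHOLDERS = 4   # all stakeholders; N-of-N controls the vault
M_MANAGERS = 2       # day-to-day spenders authorized to use the Spend path
CSV_DELAY = 15       # blocks the Unvault output must age before a Spend

stk = [f"stk_{i}" for i in range(1, N_STAKEHOLDERS + 1)]
mgr = [f"mgr_{i}" for i in range(1, M_MANAGERS + 1)]
# One co-signing server per stakeholder not signing the Spend (an assumption
# based on the talk).
cosig = [f"cosig_{i}" for i in range(1, N_STAKEHOLDERS - M_MANAGERS + 1)]

def multi(keys):
    # thresh(k, pk(a), pk(b), ...) requiring all of the listed keys
    return f"thresh({len(keys)},{','.join(f'pk({k})' for k in keys)})"

# Vault (deposit) output: plain N-of-N of the stakeholders.
vault_policy = multi(stk)

# Unvault output: either all stakeholders (the path used by the pre-signed
# Cancel / Emergency transactions), or the managers plus the co-signing
# servers after the relative timelock.
unvault_policy = (
    f"or({multi(stk)},"
    f"and({multi(mgr)},and({multi(cosig)},older({CSV_DELAY}))))"
)

print(vault_policy)
print(unvault_policy)
```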

This looks very complicated but most of it is automated. You don’t have to manually do all of this. The process looks like this. You receive a deposit. This is already protected behind a N-of-N. Even if you don’t sign it that is ok, nobody can steal these funds. At some point all stakeholders will need to pre-sign the Emergency, Cancel and Unvault transactions. That would maybe be done every morning. That makes the funds spendable. It is like confirming that they have been deposited. From this list of UTXOs that are pre-signed, any authorized person can now create a spending transaction. It looks like a normal transaction. They can spend it wherever they want. It is just that there will be a timelock that lets any of the N stakeholders, or watchtowers they control, Cancel or Emergency Deep Vault the transaction in case there is something wrong with the spending transaction. That is what the process looks like. Funds are deposited, nobody can spend them until all stakeholders have seen it. Once they have been pre-signed it looks like a normal wallet that has a timelock on top of it for the spender to be able to spend.

It is cool for multiple reasons. It is cool because we don’t need to know any amounts in advance. There is no deletion of private keys here. We can reuse addresses, if it happens it is not a problem. Pretty much anything is possible. It doesn’t prevent us from receiving deposits even if there is no pre-signed transaction made in advance. This is different from any of the key deletion proposals because there you would need to pre-sign the whole tree before depositing money in it. You would need to put the exact amount as a deposit because that would change the txid otherwise. We don’t have this problem. We can back up seeds. We don’t have to back up transactions as such. Even if one of our pre-signed transactions is lost we can still re-sign it with the keys. The backups we need are only the seeds for these different private keys. We don’t delete private keys. That is pretty easy to safekeep today. The proposals with key deletion would need a backup every time they pre-sign a transaction. Otherwise losing the pre-signed transaction is losing the funds. Day-to-day operations are only impacted by a delay. The only difference for the normal person using this architecture is that the funds are not going to be confirmed before x amount of blocks. We are trying to reach the security level of N-of-N although only a subset of these participants would be the ones creating the spending transaction. To emulate that everybody is still agreeing we have watchtowers and things like this. Antoine will cover that later. Also we have the extra Emergency Deep Vault in case a lot of these keys are compromised. We can still move all the funds outside. Lastly it is open source which is pretty cool I think. There is no third party risk because there is no third party. Everything should be self-hosted. If you want to deal with third parties it is only to add security on top like using watchtowers.

Bob McElrath: Antoine was just clarifying for me where the timelocks are. The Unvault transaction has a CSV on it. The Spend transaction has a CSV. The Deposit transactions do not, correct?

Only the Unvault has a CSV. The Spend is only delayed by this OP_CSV. The Spend transaction is just going out; the “out” is a UTXO, it is not a transaction. The timelock is enforced by the CSV. The sequence is in the Spend tx.

Bob McElrath: The purpose of the timelock on the Spend transaction should have been to send that Spend transaction back to the emergency vault. But there is no second branch there on the Spend transaction?

For us the Spend transaction is exactly like here. The delay is between the Unvaulting and the Spending. The clawback happens at the time the Unvault is published. The reason for that is we enforce through the co-signing servers that the Unvault transaction is not pushed if the co-signers have not seen a Spend transaction child before. That means we emulate that the distance between these two transactions is the OP_CSV, because if an Unvault transaction has been mined without the participants knowing about the Spend transaction that will spend from it then we automatically trigger a Cancel transaction. It is to ensure that we have the full delay to cancel the spending if needed.

Bob McElrath: What I would have called the receiving wallet on the left, the Deposit transactions, this is a standard N-of-N wallet. There are no timelocks there, no scripts. I believe in our paper we call this a “Receiving wallet” because everything else that is happening here is part of a protocol. The participants need to know what the steps are, what is going on there. In order to start the protocol you have to have full control. You can’t expect someone to deposit into a timelocked vaulted scripted transaction very easily. What you have done allows you to construct the Unvault transactions in the way you want. There are some interesting properties here. Those Unvault transactions can be for instance tranched. Let’s say you want to make ten Bitcoin tranches for different traders or something like that.

We could. In our design, although I don’t fully remember what yours is made of, we considered that each vault is a completely different thing. Different UTXO, different address, different everything. But yes we could branch out in the Unvault. I don’t think it is something we really want to do because we can have change going back to a vault or anything if needed. At least at the script level here we are not enforcing amounts. We leave that to watchtowers looking at the protocol or even the co-signers if they have some restrictions on the co-signing.

Antoine Poinsot: This is one point of using the co-signing servers. To leave the maximum flexibility for the Spend transactions to not have to commit to the amounts or the addresses so it can change. We don’t need to change the Unvault transaction.

The first one, the N-of-N here is a normal N-of-N. There is no other restriction on it. But we consider this to be the highest level of security in the architecture because the N is not the normal amount of people required to spend the money. The Spend transaction here would usually be M-of-N. The N-of-N is a lot of people, much more than would be used in a normal multisignature, because they don’t have to do any special work. In the morning they have this deposit process happening and they just need to pre-sign the things. A normal company using a normal multisig for this would be using M-of-N for the spenders. We are just trying to add new signatures on top, going back to N, even if they are not part of the usual spending people. Increasing the security to a bigger multisig.

N-of-N doesn’t allow for any redundancy. Have your clients been happy with that? Do they want to insert a threshold there in the first step?

The N-of-N doesn’t have redundancy here because they should have proper backups and key management in place. In the M-of-N for the spending our original client asked to have a subset of traders where only 2 out of 3 traders should be required to spend the money, just in case one of them is unavailable. We have done setups more complex than this.

Q - Which exchanges are best? This is referring to one of your earlier comments about the exchanges’ cold storage set ups not being great at the moment. Do you have any insight into what those set ups actually are because obviously they are shrouded in secrecy because there are hundreds of millions of dollars?

Secrecy is a big part of it. Also it is very hard to know even if it is a single key securing your funds. It could be just a single key but you don’t know how it is secured. It could be secured in a HSM, a hardware security module, where no one can extract the key. Having a single key is not necessarily a bad thing if it is properly secured. You can go to the other extreme: “I am a company and I am using BitGo, I have 2-of-3, I am very secure.” Not really, because you have 2 out of the 3 keys yourself. You have your normal key and you have your backup key in case BitGo goes offline for example. In that case you are a single point of failure. Anybody can come to you and get these 2 keys quite easily. It is really hard to say that more keys is more secure. It really depends on how you manage those keys.

Bitcoin fun Revaulted (Antoine Poinsot)

I am going to talk about some fun we had while designing it and challenges encountered. Specific things about why we ended up with this architecture.

Pre-signed transactions fun

The first challenge we encountered is a very common one which we have been discussing a lot for the Lightning Network: how to get pre-signed transactions confirmed in a timely manner for multiparty contracts. Our security model relies on our Revault transactions being enforceable at any time. We need to find a way for transactions to pay the right fees according to the fee market when we want to broadcast them. We could use something like the update_fee message currently used in the Lightning Network protocol, which asks our peer to sign a new commitment transaction with us as the current fee rate increases, or a new one when the fee rate decreases. We can’t trust the other parties to sign the transaction. Firstly, because they might be part of the attack. With vaults we are talking big amounts, so they have a strong incentive to take the vault money, act badly and refuse to sign a Revault transaction. Secondly, even if they are honest it would just require an attacker to compromise one party to prevent a Revault transaction being executed. Finally, they may not be able to sign it in the first place. They may not have their HSM available. In addition it would require them to get out their HSM each time there is a fee rate bump. This is just not practical. We are left with either using anchor outputs, which is what has been planned for the Lightning Network, or letting each party attach inputs to the transactions, aka bring your own fees. We went for the second one.

At first we leveraged the fact that Emergency transactions have just one input and one output. We can use SIGHASH_SINGLE safely here; it is only when the numbers of inputs and outputs differ that we may encounter the SIGHASH_SINGLE bug. This allows any party to bump the fee rate by adding an input and an output to the transaction. Or, if the transaction already has enough fees at the time of broadcast, just replace the SIGHASH_SINGLE | ANYONECANPAY signature with a SIGHASH_ALL signature before broadcasting. Unfortunately this opens up a possibility for transaction pinning as we allow any stakeholder to attach a new output paying to themselves. This would allow them to decrease the Cancel transaction fee rate: while we want them to add a high-value input and a low-value output, they could just add a high-value output and a low-value input to take almost all the fees of the transaction while keeping it above the minimum relay fee. They could also pin the transaction in the mempool as the output is just paying to themselves, according to BIP 125 rules. We went for SIGHASH_ALL | ANYONECANPAY signatures to be exchanged between the stakeholders because they commit to all the outputs and thus do not allow any party to add an output paying only to themselves. They can no longer pin the Revault transaction itself in the mempool, nor can they decrease the transaction fee. This adds a burden on the fee bumper because they need to create a separate transaction if their input is too big and they want a change output. They need a fee bump transaction whose output they attach as an input of the Revault transaction. The worst case here is that the fee bump transactions can still be pinned by a change output from an attacker. Because of the second rule of the RBF BIP the replacement cannot include a new unconfirmed input, so the fee-bumped Revault transaction would not be accepted. Even in this worst case scenario the party bumping the fee rate could create a high fee rate fee bump transaction and wait for it to be confirmed in the next block or two. We can expect the Unvault transaction to have a CSV of 15 at least. Once it is confirmed the BIP 125 second rule does not apply anymore. I have described it more in detail in my mailing list post.
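
(The following toy model, with made-up transaction dictionaries and a simplified sighash function, is only meant to illustrate why SIGHASH_ALL | ANYONECANPAY blocks the fee-siphoning output while SIGHASH_SINGLE | ANYONECANPAY does not. It is not real Bitcoin serialization.)

```python
import hashlib
import json

def sighash(tx, input_index, mode):
    """Digest over the parts of `tx` that the given sighash mode commits to."""
    data = {}
    if "ANYONECANPAY" in mode:
        # Only the input being signed is committed to; other inputs can be added.
        data["inputs"] = [tx["inputs"][input_index]]
    else:
        data["inputs"] = tx["inputs"]
    if "SINGLE" in mode:
        # Only the output at the same index as the input is committed to.
        data["outputs"] = [tx["outputs"][input_index]]
    else:
        # ALL: every output is committed to.
        data["outputs"] = tx["outputs"]
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

cancel = {
    "inputs": [{"outpoint": "unvault_txid:0", "value": 10_000_000}],
    "outputs": [{"address": "vault_address", "value": 9_950_000}],
}

sig_all = sighash(cancel, 0, "ALL|ANYONECANPAY")
sig_single = sighash(cancel, 0, "SINGLE|ANYONECANPAY")

# A fee bumper adds an input (its change lives in a separate fee bump tx):
# the stakeholders' ALL|ANYONECANPAY signatures stay valid.
cancel["inputs"].append({"outpoint": "feebump_txid:0", "value": 100_000})
assert sighash(cancel, 0, "ALL|ANYONECANPAY") == sig_all

# An attacker tries to add an output siphoning the fee back to themselves:
# a SINGLE|ANYONECANPAY signature would still verify, an ALL|ANYONECANPAY
# signature does not.
cancel["outputs"].append({"address": "attacker_address", "value": 90_000})
assert sighash(cancel, 0, "SINGLE|ANYONECANPAY") == sig_single
assert sighash(cancel, 0, "ALL|ANYONECANPAY") != sig_all
```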

Optimization / standardness fun

Revault is very customizable. We wanted to study the maximum number of participants for all the transactions to still be standard. The answer is of course it depends. It depends mainly on the Unvault script as it is very customizable and the policy can be different depending on the use case of each user. For the Vault script the best one I could come up with is this one, which would allow 100 participants before hitting the maximum standard script size for pay-to-witness-script-hash (P2WSH). This is a standardness limit on the script which is hashed in the version zero witness program, the P2WSH. It is checked on the input when you spend the P2WSH.
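
(A rough sanity check of the “about 100 participants” figure against Bitcoin Core’s 3600-byte standard P2WSH witness script limit. The ~36 bytes per participant is an assumed figure for a CHECKSIG/ADD style N-of-N, not the exact script Antoine measured.)

```python
# MAX_STANDARD_P2WSH_SCRIPT_SIZE is 3600 bytes in Bitcoin Core's policy.
MAX_STANDARD_P2WSH_SCRIPT_SIZE = 3600

# Assumed per-participant cost for a CHECKSIG/ADD style N-of-N witness
# script: a 33-byte compressed key, its push byte and roughly two opcodes.
BYTES_PER_PARTICIPANT = 33 + 1 + 2
SCRIPT_OVERHEAD = 3  # final threshold push and OP_EQUAL, roughly

max_participants = (
    MAX_STANDARD_P2WSH_SCRIPT_SIZE - SCRIPT_OVERHEAD
) // BYTES_PER_PARTICIPANT
print(max_participants)  # 99 with these assumptions, in line with "about 100"
```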

Then for the Unvault script, as there are too many combinations I arbitrarily took the number of subset members, the traders, to be half of the total number of participants. With such a script, where the co-signing servers sign in place of the N parties who don’t sign the Spend transactions, the maximum number was 68. With other combinations we could also hit a limit on the number of opcodes. Then on the onchain footprint I had a script which would make rough approximations of how much it would cost to use the architecture. If I remember correctly it was about 1000 weight units for 3 traders and 6 overall participants. That is quite reasonable. Unfortunately I don’t have a graphic of all the values but it is on the GitHub on another branch.

Oracle fun

Then there are the co-signing servers. In my opinion the co-signing servers are one of the biggest trade-offs of the architecture. They are a trusted oracle and you need one for each party not signing the Spend transaction. Otherwise this party is just trusting the members managing the funds or the other co-signing servers. It is costly and it is online. It is costly because it adds signatures. Even in the worst case scenario, where all the co-signing servers and all the M parties are compromised, the Spend transaction still only allows a race, because it is forced to signal for RBF because of the CSV. The big question was how to get rid of them.

We thought of ways to eliminate the co-signing servers beyond the obvious one, which is N=M: all the day-to-day participants are managers so you don’t need co-signing servers. Otherwise you can pre-sign the Spend transactions, but you can’t always do that because you need the set of destination addresses to exist, to be known in advance and to be small enough. For each destination you need to know the set of amounts you want to send in advance. That is really not practical in general but it can still be applied in specific cases.

We thought about covenants on the Unvault transaction so it can only be spent by a Spend transaction. In a way whatever spends it is a Spend transaction, but all covenants would commit to the amount and destination, either with CHECKTEMPLATEVERIFY or SIGHASH_NOINPUT. It would commit to the amount in the Unvault output at the time of signing. It would commit in advance and that is just not practical. We thought about key deletion but as we discussed you can’t prove that you deleted keys.

Covenant fun (BIP 119 way)

We thought about covenants. We can use a covenant like CHECKTEMPLATEVERIFY to have the Vault transaction commit to the Unvault transaction. This would really be a vault as the vault output could only be spent by the Unvault transaction and could not be bypassed. Maybe Kevin would argue that is not a feature because sometimes it is nice to have a multisig and be able to bypass the architecture. But it is a bit more secure. Unfortunately we can’t use CHECKTEMPLATEVERIFY to get rid of the co-signing servers.
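
(A conceptual sketch of the CHECKTEMPLATEVERIFY vault idea. The ctv_template_hash helper below is a placeholder, not the real BIP 119 hash; it is only meant to show the deposit output committing to one specific Unvault transaction.)

```python
import hashlib

def ctv_template_hash(tx_template: dict) -> str:
    # Placeholder only: the real BIP 119 hash covers nVersion, nLockTime,
    # scriptSigs, nSequences, the outputs and the input index. Here we just
    # hash a canonical string representation of the template.
    return hashlib.sha256(repr(sorted(tx_template.items())).encode()).hexdigest()

unvault_template = {
    "version": 2,
    "locktime": 0,
    "outputs": [("unvault_witness_script_hash", 10_000_000)],
    "input_index": 0,
}

# The deposit (vault) output commits to the Unvault transaction, so it can
# only be spent by that exact transaction: a real vault, with no bypass.
deposit_script = f"{ctv_template_hash(unvault_template)} OP_CHECKTEMPLATEVERIFY"
print(deposit_script)
```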

Covenant fun (BIP 118 way)

For SIGHASH_NOINPUT covenants it is the same. I didn’t explain it in the last slide but to build a covenant with SIGHASH_NOINPUT there is a nice scheme in which you pay to a signature. In the Vault script there is a pay-to-witness-script-hash which is paying to a signature that is only valid for an Unvault transaction. Since this is a SIGHASH_NOINPUT signature only the outputs of the transaction will be hashed. This forces the transaction spending the Deposit transaction to specify outputs such as the Unvault’s. It is really nice. It is more flexible than CHECKTEMPLATEVERIFY. It is a little bit less efficient but it is the same principle. Like CHECKTEMPLATEVERIFY it could be used for the Vault transaction but not for the Spend transactions. It would not eliminate the need for the co-signing servers. For both CHECKTEMPLATEVERIFY and NOINPUT covenants we need a new step at set up. We can’t expose addresses with covenants such as CHECKTEMPLATEVERIFY to the public because the address will be reused no matter what we hope. If it is reused with an amount less than the previous one, for a CHECKTEMPLATEVERIFY or SIGHASH_NOINPUT covenant the bitcoin would be lost. If it is reused for a higher amount all the bitcoin above that amount would go to fees.
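
(A conceptual sketch of the “pay to a signature” covenant described here, with placeholder values; it is not valid consensus code.)

```python
# Placeholders standing in for real values.
fixed_pubkey = "<33-byte pubkey>"
noinput_sig_over_unvault = "<signature | SIGHASH_NOINPUT>"  # signs only the Unvault's outputs

# The witness script embeds both the signature and the key. Because the
# signature does not commit to the prevout, any transaction whose outputs
# match what it signed (the Unvault template) can spend this output, and
# nothing else can: the deposit is forced into the Unvault transaction.
witness_script = f"{noinput_sig_over_unvault} {fixed_pubkey} OP_CHECKSIG"
print(witness_script)
```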

Covenant fun (key deletion)

We also talked about key deletion with Jacob Swambo and Bob McElrath. Kevin was interested in the Bryan Bishop architecture that uses key deletion. It isn’t provable to the other parties, but you don’t care if the output pays to an N-of-N multisig. This is what Jacob came up with when we were talking with him. If this is for N-of-N outputs like the Vault script, if you just delete your own key you know it won’t be spendable anymore. The pre-signed transactions are effectively a covenant.

It is possible to apply this to Revault but unfortunately it adds a lot of complexity. For example you can’t have a Cancel transaction anymore, or you have to sign a lot of pre-signed transaction sets. If the keys for the Vault outputs are deleted and the only way to spend from the Vault is the Unvault transaction, the obvious way would be to have the stakeholders broadcast the Unvault, have it cancelled, and a new output with non-deleted keys would appear. To resolve this you could have deleted these keys and created the signatures for all the next transactions, but this adds complexity. We want it to be really practical and we are not convinced of the feasibility of having key deletion for a practical protocol.

BIP fun

Finally, Revault is not the only proposal. There are further proposals: with CHECKTEMPLATEVERIFY or SIGHASH_NOINPUT / ANYPREVOUT you can do covenants without key deletion. This is less costly, but it is not required at all for Revault. There are the BIPs for Taproot and Tapscript which would bring strong cost optimizations and would hide the Cancel transaction path, because it is a 4-of-4, but unfortunately not the Spend path, because the Spend is not pre-signed so you have to enforce the timelock in the Unvault output script. It has to be revealed in order to spend from the Unvault. We are more eager to see progress in reducing mining centralization, such as Stratum version 2. It is at the base of all our security assumptions to have a Revault transaction that confirms. That is why we have complicated schemes to pay honest fees, but we need to account for miners as attackers in our architecture. We need to account for miner censorship. If mining is too centralized our transactions can be censored.

Is Revault good?

Is Revault good? It is really simple in appearance. It is designed for practical use and for people using Bitcoin today to use something more secure. It is not good but it is better than what is used today. For example it would allow higher threshold multisig at the root of the architecture for the Vault script. Without the flexibility, without this co-signing server stuff we would have lower threshold multisig at the root. Some stakeholders don’t want to have the burden of signing all Spend transactions.

Q&A

Bob McElrath: Since you mentioned NOINPUT, there is a way to do deleted keys in a safe way if you have NOINPUT. That is by using ECDSA key recovery. Basically the idea is you generate the signature first and from that you compute the pubkey. You can only do that if you have NOINPUT. There is a mechanism there that could be used with NOINPUT. This is how you prove that you deleted a key. You provide a hash preimage of the signature itself as a binary string. This is now the nothing up my sleeve signature. Unless I can break the hash function or break the elliptic curve discrete log problem I could not have generated that signature even if I had the private key. However this doesn’t work today: the NOINPUT/ANYPREVOUT proposal puts the ANYPREVOUT key in a Taproot script, and this additionally commits to the pubkey at signing time. As the proposal stands now this won’t work. But if you do have NOINPUT generally there is a way to prove you deleted keys.

Antoine Poinsot: That is very interesting. I thought a lot about how to make the spenders commit to the amount in the address at spending time. This would create a cyclic hash.

Bob McElrath: That’s exactly the problem. It creates a cyclic hash. That is why you need NOINPUT because the cyclicality comes from the txid committing to the addresses on the previous output which are functions of the pubkey. That is why it doesn’t work.
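
(The key recovery trick Bob describes can be sketched in pure Python on secp256k1: pick (r, s) from hash outputs, then recover a public key for which that signature verifies. The message digest and the nothing-up-my-sleeve strings below are placeholders, and the sketch ignores the transaction/NOINPUT context that would make this usable for vaults.)

```python
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p1, p2):
    # Affine point addition; None is the point at infinity.
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1 % P, -1, P) % P
    else:
        lam = (y2 - y1) * pow((x2 - x1) % P, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, p):
    # Double-and-add scalar multiplication.
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

def lift_x(x):
    # Return a curve point with this x coordinate, or None if there is none.
    y_sq = (pow(x, 3, P) + 7) % P
    y = pow(y_sq, (P + 1) // 4, P)
    return (x, y) if pow(y, 2, P) == y_sq else None

def nums_scalar(tag, counter=0):
    # "Nothing up my sleeve" scalar derived from a public string.
    return int.from_bytes(hashlib.sha256(f"{tag}/{counter}".encode()).digest(), "big") % N

e = nums_scalar("placeholder message digest")   # the digest being "signed"
s = nums_scalar("nothing up my sleeve: s")
counter, R = 0, None
while R is None:
    # Retry until the chosen r is a valid x coordinate on the curve.
    r = nums_scalar("nothing up my sleeve: r", counter)
    R = lift_x(r)
    counter += 1

# ECDSA public key recovery: Q = r^-1 * (s*R - e*G).
Q = mul(pow(r, -1, N), add(mul(s, R), mul((N - e) % N, G)))

# (r, s) is a valid ECDSA signature on digest e for pubkey Q, yet nobody can
# know Q's private key without breaking the hash or the discrete log.
s_inv = pow(s, -1, N)
u1, u2 = e * s_inv % N, r * s_inv % N
assert add(mul(u1, G), mul(u2, Q))[0] % N == r
```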

Michael Folkson: Kevin said that if you were to set up a watchtower for this Revault design the watchtower would potentially be broadcasting either a Revault transaction or an Emergency transaction. Is that right?

Kevin Loaec: That is right but it is being discussed. I don’t agree with Antoine on this. In my view the watchtower should only broadcast the Revault transaction and should not have an Emergency transaction. My standpoint is that the Emergency transaction is such a burden. In our architecture we assume that if one Emergency transaction is broadcast then all the other ones should also be broadcast. All the funds within the architecture should move at the same time to the Emergency Deep Vault wallet. To me that is a problem because the watchtowers should be trustless. We have this problem that if somebody starts triggering one Emergency transaction it completely breaks the system, at least for a few weeks or months. I don’t know how difficult it would be to recover from the Emergency Deep Vault. That would be a massive denial of service type attack. If any of the Emergency transactions can be stolen then you force everybody to go dig up their emergency keys. You need to wait however long the timelocks are. It could be a very strong attack against us if an Emergency transaction is out in the wild. The Cancel transaction on the other hand is not really a problem. It just goes back to a Vault. The next step is to Unvault again, which is fine.

Antoine Poinsot: We argue but it is ok, we will agree later. My point on this is that Revault differs from a normal N-of-N multisig due to the Emergency transaction which is a significant deterrent against threats. To keep this deterrent you need to have all the parties including the external watchtowers to be able to broadcast the Emergency. The network watchtowers of each stakeholder might be known while the external watchtowers might not be. We need this to ensure some specific attacks don’t succeed.

Michael Folkson: You haven’t built out a prototype watchtower for this design because you are still finalizing exactly what the watchtower would need to look out for?

Antoine Poinsot: We don’t agree yet but we don’t have an implementation either. We just have a demo, in the demo there are just the network watchtowers. It was just a proof of concept for running functional tests.

Michael Folkson: I think Bryan (Bishop) said last week in the Socratic that he was in the same boat. He hadn’t really thought about the watchtower component.

Kevin Loaec: On the topic of the watchtower I would like to add something that maybe wasn’t explained clearly. The Unvault transaction, the one coming after the Vault, should not be broadcast before the participants or the co-signers have seen the Spend transaction first. First we need to know where the funds are going and then we release the parent of the Spend transaction. That is also part of the job of the watchtowers. If they see on the Bitcoin network that there is an Unvault transaction being mined but they are not aware of what the Spend transaction looks like, this might be an attack. If we don’t know where the Spend transaction is going then we should trigger a Cancel transaction. Part of the job of the watchtowers is to look at the network, and if there is an Unvault that is broadcast without knowing where the Spend transaction is going to bring the funds, then by default we should automatically trigger a Revault transaction. It is not too heavy on the process because there is no timelock in there. It is going back to the vault and we just need to pre-sign the Unvault again. That could be done in one block if needed. The protection it brings is really high. The role of the co-signer, like Antoine said, is that there is an attack that is possible here and is also possible in Bryan’s design that he published. Assuming we know where the Spend transaction is going and the Unvault transaction is mined, at the exact block where the CSV lapses there could be a race condition between the Spend transaction and an attacker’s Spend transaction if he has all the M-of-N keys. That’s why we needed to add the co-signers. We wanted a very dumb automated server that signs only once per UTXO. That avoids a double spend at the exact time of expiration of the timelock. That is why we added the co-signing server. The solution that Bryan has is deleting the keys and putting a maximum percentage of funds you can spend each cycle. For us it is a different compromise. As of today we haven’t found an alternative to co-signing servers sadly, but they do serve a real purpose.
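
(A minimal sketch of the two rules Kevin describes, the watchtower’s “cancel if the Spend is unknown” rule and the co-signer’s “sign at most once per outpoint” rule, using placeholder broadcast and signing functions; this is not the Revault implementation.)

```python
def broadcast(tx):
    # Placeholder for pushing a transaction to the Bitcoin network.
    print("broadcasting", tx)

def sign(key, tx):
    # Placeholder signing primitive.
    return ("signature", key, tuple(tx["inputs"]))

class Watchtower:
    """Cancels by default when an Unvault shows up without a known Spend."""
    def __init__(self, known_spend_txids, presigned_cancels):
        self.known_spend_txids = known_spend_txids   # Spends the stakeholders approved
        self.presigned_cancels = presigned_cancels   # unvault txid -> pre-signed Cancel

    def on_unvault_seen(self, unvault_txid, announced_spend_txid=None):
        if announced_spend_txid not in self.known_spend_txids:
            # We don't know where the funds are going: revault by default.
            broadcast(self.presigned_cancels[unvault_txid])

class CosigningServer:
    """Dumb automated signer that signs at most once per outpoint, so an
    attacker holding the managers' keys cannot get a second, conflicting
    Spend signed for a race at the end of the CSV delay."""
    def __init__(self, key):
        self.key = key
        self.signed_outpoints = set()

    def sign_spend(self, spend_tx):
        for outpoint in spend_tx["inputs"]:
            if outpoint in self.signed_outpoints:
                raise RuntimeError(f"already signed a spend of {outpoint}")
        self.signed_outpoints.update(spend_tx["inputs"])
        return sign(self.key, spend_tx)

# Example: the second, conflicting Spend of the same outpoint is refused.
cosigner = CosigningServer("cosig_key_1")
cosigner.sign_spend({"inputs": ["unvault_txid:0"]})
```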

Michael Folkson: I saw some very big numbers in terms of multisigs. This is for designs other than your first client where potentially they would want 50, 60, 70 signers? It seems weird to me that you would want to set up a vault with so many signers rather than just the key board members.

Antoine Poinsot: This was just something I measured so I wanted to talk about it. Maybe someone would want to do this but no, I don’t expect someone to use 68 party vaults. Technically they can on Bitcoin and have their transactions be relayed.

Kevin Loaec: It is important to look at the modularity of the architecture because we are targeting institutions and two companies don’t have the same requirements. From an exchange to a hedge fund to a company we have very different needs for liquidity and security. Some companies have very few stakeholders and one guy doing the finances. If it is about moving the cold storage of a few billion dollars maybe you want to have a very big set of keys. We just wanted to study how big it could get. Not that we necessarily want to implement it at that size but at least we know the theoretical limits. It is just a number. Even for that Antoine had to make some assumptions as he said in his slides. He is taking M-of-N, M being half the size of N. That could be anything from 1 to N. There could even be different participants. The size really depends on how many people, co-signing servers and different participants we want to have. It is very modular and we just wanted to study how big it could get. The reason for that is that in the Unvault transaction you have two branches, and depending on how we implement it the branch for cancelling a transaction is heavier in terms of signatures than the first branch, if we have fewer co-signing servers than participants for example. Let’s imagine we only have one person that needs to sign a transaction to spend it and we only need one co-signing server, that is going to be two signatures. That is fine. It goes through Bitcoin. What if the transaction we pre-sign for the Revault or Emergency has 120 signers? Then that is a non-standard transaction so we can never recover the funds. That would be quite problematic. We could sign a transaction that would not be accepted by the network. It is very important for us to know the theoretical limits of the network because if the network doesn’t relay our Emergency transaction, too bad.

Antoine Poinsot: The limit I was talking about was policy and not consensus.

Michael Folkson: You said “heavier branch”. An alarm went off in my head: Miniscript. Have you been using Miniscript to get the optimal script by using the weights of the different branches?

Kevin Loaec: Yes and no. I started doing it and then the requirements for our client didn’t work out with Miniscript. Miniscript doesn’t factorize the same keys. If you have M-of-N it works. But if you start having the same key in different branches Miniscript doesn’t optimize for that. Manually it was much easier to reduce the size and the cost of the transactions by looking at alternative ways than what can be described in the Miniscript language. We were stuck because if you put A A A as three public keys Miniscript doesn’t realize that it is the same public key. It is not trying to optimize the weight of the transaction for that, which is annoying.

Michael Folkson: It can’t identify that the pubkey that has been repeated is the same. That’s the only thing that Miniscript didn’t have that forced you to not take advantage of Miniscript?

Kevin Loaec: I think so.

Bob McElrath: This is relayed from my collaborator Jacob Swambo. Can you talk about your development roadmap for the various components and how other people in the ecosystem could participate? I know you guys have a GitHub with your Revault demo but there are other pieces here like watchtowers, management of pre-signed transactions etc. Can you talk about your development roadmap, what the moving parts are and what needs to be developed?

Kevin Loaec: I will let Antoine answer that but just to start, there are different things there. Watchtowers in my opinion should be as generalized as possible. We shouldn’t just have one implementation of watchtowers in my opinion. The way watchtowers are designed, they are not required to all be the same implementation. Everything else is open source.

Antoine Poinsot: We need to hire other people and raise some funds in order to create a company. We expect it to be a year to create a first product. We expect to achieve that but we don’t know if we will be able to raise funds.

Michael Folkson: In the Socratic last week we talked about transaction pinning problems. Jeremy Rubin was saying that a lot of the problems are unlikely to be solved in the short term because there needs to be a total rearchitecting of the mempool in Bitcoin Core. Any views on that? How much understanding do you have on the mempool in Core and how much is that going to be a problem going forward?

Antoine Poinsot: I don’t want to make strong statements about it because I haven’t been contributing much to Core. I know the inner workings of the mempool, and a complete refactoring as Jeremy said last week would be great for all off-chain protocols actually. There are other attacks, like the transaction pinning against the Lightning Network that was found about two weeks ago. I don’t know. It is hard. I think I understand the position of the developers of Bitcoin Core, you need to keep it stable and refactoring the whole thing is a strong choice to make. We try to use the tools we have today. I think the ANYONECANPAY approach is not that bad. I think it is pretty workable because there is only one restriction which is really low, the second rule from the RBF BIP. I think we can work it out without a refactoring for us. But for the network it would be great.

Michael Folkson: It is interesting that there is transaction pinning and watchtowers, quite a few parallels in terms of the challenges posed to Lightning and vaults. I suppose you are well placed Antoine to be working on both.

Kevin Loaec: That is why I went to Antoine in the first place. Just working on OP_CSV on Bitcoin is a nightmare today. It should be very simple because it was implemented in Core a long time ago but you can still not do your own CSV transactions with Core. It is really hard to find proper tools for that. When looking at Lightning developers every single one has been playing with this because it is a requirement for Lightning. To me the requirement was finding people who have experience working with CSV because it is really important.

Bob McElrath: One other topic I will bring up is that the statechain conversation that has been going on on the bitcoin-dev mailing list also needs watchtowers. I see a generalized watchtower concept coming. If people are interested in that it is something to work on. There are at least three different ways to use it: Lightning, vaults and statechains.

Kevin Loaec: Watchtowers are really great because if used properly they don’t add any risk. Using more of them, as many of them as you can, increases your security without decreasing anything. This is quite exceptional. It is very rare in Bitcoin. Usually when you add security somewhere you are creating new risks. Watchtowers are I think the exception in Bitcoin, which is really cool.

Michael Folkson: In an ideal world if you were to raise funds and you could spend more time on this and hire more people would you do a CHECKTEMPLATEVERIFY version of Revault?

Kevin Loaec: Today no, because in my opinion you should not spend too much time developing a product that requires something that is not in Bitcoin today. This is really risky, you don’t know what will change. You don’t even know if it will be added to Bitcoin at any point in time. That is even valid for very big things like Schnorr. Some people were saying more than a year ago that it would be in within 6 months. It wasn’t. Bitcoin takes time. If you developed a really cool multisig system on Schnorr, Taproot, whatever, great for you, but this doesn’t work today. The risk is way too high. If you want to think about a business the risk is too high to use tools that don’t exist today. That is probably the main reason why we wouldn’t even work on a proper implementation that would be production ready before CTV or NOINPUT are added to Bitcoin.

Antoine Poinsot: In my talk when I talked about CHECKTEMPLATEVERIFY it was at a very high level. I didn’t tweak it or even test that it works. It seems to fit the need but I’m not sure, it was just high level.

Michael Folkson: I think Bryan said he had done a CTV prototype. Perhaps if you had the budget you would look at where the design would go in the future but spend most of your resources on what is in Bitcoin currently?

Kevin Loaec: Bitcoin is great for that. There might be different uses for the same thing. Watchtowers are one of them. There are already a few companies working on watchtowers. These companies should be some of the early targets to help us work on these problems. We can probably do everything in house but that is not the point; if other people are already working on really cool stuff it is really good to help each other. Also I think it is starting to be more and more common that companies in Bitcoin make their product open source. Sadly in custody that is not the case. You are not going to find many open source tools outside of multisig wallets basically. That is not great. When you are dealing with hundreds of millions of dollars there is no backend that is open source today for securing that. That is not normal. Maybe vaults in general could help pave the way forward to start to have visibility on how it works at custodians.

Michael Folkson: I suppose it is whether you want security in obscurity or whether you want a battle tested open source solution?

Kevin Loaec: In Bitcoin and other open source projects we talk about obscurity being the enemy of security. That is ok for everything online. As soon as you work towards physical threats, obscurity is always stronger at least for now. This is a problem that we have in security in general. It is easier to protect yourself if people don’t know the layout of your house. It is easier to protect yourself if they don’t know whether you have a gun or not. This is also the case with hardware security modules. This is the big debate. Is Trezor better because it is open source or is Ledger because it is using a closed source thing? Attacking a device that is not open source is really hard because you need equipment that costs a lot of money. But has it been reviewed enough, is there a backdoor? It is always a compromise somewhere. Typically when it is about physical security obscurity is a great asset.

Michael Folkson: If someone is interested in learning more or contributing to Revault in the future what would you advise?

Kevin Loaec: It depends. If somebody really wants to move forward on just the idea of vaults I think right now it is good that we know each other. Dealing with Bryan, Bob, Spencer, Jacob, it is a really small crowd. I think we are getting on well with each other. Hopefully we can keep moving forward in reviewing each other’s work and contributing to each other’s work. Every security product needs a lot of auditing and review. If you are putting millions of dollars on it you need to make sure it is working. As you have seen in this presentation there are a lot of edge cases, weird things around fees and transaction pinning. Things that people don’t usually look at. We need to keep digging because maybe we are missing a massive point. Miners as well are important. Usually we assume the mining is secure, we are only thinking about the script. But the script is not everything. Bitcoin still has human players and as Antoine was saying, if the miner is the attacker what do we do? It is starting to be much harder. There are a lot of things that we need to look at. Not just on the development side, also on the architecture side. For the architecture right now we are looking at building a Rust implementation. That could change. There are more and more people contributing in Rust as well which is good. Square Crypto is contributing a lot which is good as well. Maybe we will start seeing a bigger and bigger Rust community in Bitcoin. Reaching out to us would be good as well. Right now raising funds is the way to go for us because we really want to put this into production. There have been a lot of vault proposals over the years but sadly none of them are available as a product. The best we can find today is a Python demo. This is cool but not something you would put your money on. At the very beginning of this project when I reached out to a few people my idea was to set up a non-profit foundation to sustain the development, while the users, the exchanges and other companies would have a subscription with this non-profit. That could be a good way to go for the very long term. But in the short term the paperwork for that and figuring out who would be a subscriber is much more annoying than just generating normal revenue through invoicing. Invoicing is “can you customize Revault for us?”. That is a service we can provide. All work is done open source so we can release the software to everyone. For us that is the way to go right now. A private company and later we will see. But definitely everything is open source and there is no intellectual property as such behind it.

Michael Folkson: The sales pitch is that whenever we have a bull market there are going to be tonnes of smaller custodians that are going to need a solution like this. It is not just going to be the big exchanges.

Antoine Poinsot: We even thought about ATM operators. Even small players.

Kevin Loaec: Another thing I want to add is that I don’t understand why there are so few physical attacks in Bitcoin. The security of many businesses in Bitcoin is really low. I am not saying that people should do this but it is really impressive that there are still people going into your house to steal your TV, which is much harder to resell, rather than going to a Bitcoin business to steal their bitcoin. We are very lucky that for eleven years we have had only a few physical attacks. I don’t think it is sustainable. If there is another big bull market we will see more criminals entering the space. We need better security. Even if it is just better multisig, we need better security than we are using today. To me it is more than just the price going up. It is the risk going up. This is not good. I don’t feel really safe even though I have good key management. I would like to have more options such as cancelling transactions even in a personal wallet. If I send bitcoin to somebody, how many times do you check that the address is right? How many times do you check that the amount is right? We put pressure on ourselves when we could have a button to allow us to claw it back in one hour or whatever. There are a lot of things I would like to see.

Michael Folkson: I don’t have any more insight than you but I have heard that exchanges have very complex cold storage solutions. It is complex physical security rather than complex software security or taking software security to its full potential. Bob can you answer Antoine’s question on what you think about miner censorship of revocation transactions? I also saw that you dropped a paper today. Perhaps you could summarize what you published with Bryan, Jacob and those guys.

Bob McElrath: Miner censorship of transactions like that is a 51 percent attack. It only works if they have 51 percent of the hashpower. If that is the case it is game over all around. There is really nothing you can do about that. As to our paper, we dropped our paper today. This is the first of two papers. The second will focus more on the mechanisms for deleting keys. The first one is an architecture that is similar to Revault but uses deleted keys instead of the multisig. Happy to have this conversation continue and come up with some interesting solutions.

Michael Folkson: What were the key findings of the paper?

Bob McElrath: This paper is not so much a findings kind of paper, it is more of an engineering kind of paper where we describe an architecture and analyze some of its security properties.

Pieter Wuille, Adam Gibson, Elichai Turkel, Tim Ruffing

Date: June 16, 2020

Transcript By: Michael Folkson

Tags: Schnorr signatures

Category: Meetup

Media: https://www.youtube.com/watch?v=uE3lLsf38O4

Pastebin of the resources discussed: https://pastebin.com/uyktht33

August 2020 update: Since this Socratic on BIP Schnorr there has been a proposed change to the BIP revisiting the squaredness tiebreaker for the R point.

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is London BitDevs, a Socratic Seminar. We are livestreaming on YouTube currently for those on the call. The plan is also to do a transcript of this conversation. The point is not to try to find gotchas with anything anybody says, this is an educational exercise. So don’t let that put you off. We can edit the video afterwards, we can edit the transcript afterwards. This is purely for education, this isn’t an authoritative resource on cryptography or any of the topics we will be discussing today. We have currently got a call on Jitsi going. Jitsi is free, open source and doesn’t collect your data. When it works it is great. It seems to be working so far. This is a Socratic Seminar in preparation for tomorrow when Tim Ruffing will be presenting. There will be a call tomorrow and there will be another livestream tomorrow with Tim Ruffing’s presentation. There will be a Q&A afterwards. Socratic Seminars, for those who haven’t been to one, they originated in New York. There is the site bitdevs.org to help you do Socratic Seminars if you haven’t been to one before. The reading list or the agenda, there is a Pastebin up. It is very much focused on BIP Schnorr. It is hard to get a clear dividing line between BIP Schnorr, BIP Taproot, BIP Tapscript. They do interact and they do inform the design and the code of each other. We will stray sometimes into Taproot but the focus is on cryptography and Schnorr in preparation for Tim’s presentation tomorrow. Maybe we will do a Socratic Seminar in a few weeks on Taproot. That could be good. We will start as we always do with intros. You don’t have to do an intro but if you want to there is a hand signal in the bottom left of the screen if you are on the call. If you click that I’ll be able to see that you have raised your hand and I will go to you. Your intro will be who you are, what you are working on, how much you know about Schnorr or what you would like to learn in this Socratic on Schnorr today. I do need volunteers. Does anyone want to click that hand button to get us going?

Pieter Wuille (PW): Hi. I’m Pieter. I do Bitcoin work at Blockstream and I am one of the co-authors of BIP Schnorr.

Adam Gibson (AG): I am Adam Gibson or waxwing on the internet. I do various coinjoin stuff. I have studied Schnorr and I did a talk on it in 2018 that is referenced.

Jonas Nick (JN): I’m Jonas. I work at Blockstream on Bitcoin research. I am also a co-author of BIP Schnorr. I am interested in learning how BIP Schnorr relates to Tim’s thesis if possible today.

MF: You may be disappointed. I do hope to cover Tim’s thesis at the end. There will obviously be lots of opportunities to ask him questions tomorrow.

Elichai Turkel (ET): I am Elichai. I work at DAGlabs and I contribute to rust-bitcoin and libsecp. I helped as much as I could for BIP Schnorr and I hope it will be merged soon.

Tim Ruffing (TR): I am Tim Ruffing. I also work at Blockstream. I am also a co-author of BIP Schnorr. I was impressed by the reading list. I didn’t plan to be here today but after this reading list I am really interested to be here because I don’t have slides yet for tomorrow. If everybody has read all the stuff then I don’t need to present anything. I’m mostly here trying to figure out what I should present tomorrow but I am sure I will find something.

MF: We can do a very long Q&A. You can just produce a few slides to get us going tomorrow.

Volker Herminghaus (VH): I am Volker. I am here for total immersive learning about cryptography. I have no credentials.

MF: That is totally fine.

Properties of a Digital Signature Scheme

MF: As in previous Socratics we will start from basics or from first principles. There are some pretty hardcore experts on the call today. Hopefully they will be patient for the first 10-15 minutes as we start from first principles. I do want people to get in the mindset of thinking what properties a digital signature scheme requires. Adam Gibson, who is on the call, did a very good presentation at London Bitcoin Devs, this is on the reading list, on the things a signature scheme requires. Some basic questions. Why is a signature scheme where the message is just multiplied by 5 insecure? Why is a hash function insecure? Why do we need a more complicated scheme and what do we need from our signature scheme in terms of security? If I want to send you a message that is “5” why is a digital signature scheme that multiplies that “5” by 7 not a sufficient signature scheme?

VH: It is reversible. You could just divide by 7 and you would get the original value. That wouldn’t be very secure.

MF: This is kind of the forgeability property.

AG: What you are describing here Michael is connected to the zero knowledgeness. Let’s not get too technical, the privacy element. You want to prove that you own a credential or a key without revealing the key. In the example 5 x 7, if your message is 5 we are imagining that your key is 7. Not the best key ever but we work with what we’ve got. If your key is 7 and you multiply your message by that, you can divide by the message to get the 7. The first property we are mentioning there is it should keep the confidentiality of the key. Technically this is the zero knowledge property.
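
(A tiny sketch of the toy multiply-by-the-key scheme being discussed, showing how one observed message/signature pair reveals the key.)

```python
# If the "signature" is just message * key, anyone who sees one
# (message, signature) pair recovers the key.
key = 7
message = 5
signature = message * key

recovered_key = signature // message   # 35 / 5 = 7: the key leaks immediately
assert recovered_key == key

# And forging a signature on any other message is trivial once the key is known.
forged = 11 * recovered_key
```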

MF: Let’s move on to say a hash function. Why would a hash function not be a sufficient signature scheme? If you just hash the message why is that not a sufficient signature scheme?

TR: A simple answer is that it doesn’t have a key. With a signature scheme what you really want is if you don’t have the secret key you can’t produce signatures. In the hash function where is the key at all?

AG: Why is a HMAC not a sufficient signature scheme?

PW: That is a better question.

MF: We are moving in the right direction. We need there to be a secret key and we need to make sure that somebody else can’t forge a signature. That secret key needs to produce a signature that nobody else can produce. In Adam’s presentation he was talking about the producer of the signature and the receiver of the signature. There are two parties and two concerns here. One, the receiver wants to know that the producer of the signature is the only one with that private key. The producer wants to know that if the producer sends that signature to the receiver, the receiver cannot do anything with that signature. This is leading into any leakage in terms of when you produce a signature, does it leak any information?

ET: I wouldn’t say that only the producer has the private key. I would say that only the producer can produce a valid signature. It can be in multiple different ways but the important part is that only the producer can produce a valid signature, not that only he has the private key.

PW: I think you define as producer as “he who has the private key”.

TR: I think Elichai has a good point here. Of course you want to make sure that if you give a signature and you use your secret key to create that signature, your secret key does not leak. That is how usual schemes do it. In the end the more basic property that you want is what Elichai just described. If you see a signature you shouldn’t be able to produce a different signature without having the secret key of course. I heard people talking about encryption and asking the same question. What should an encryption scheme provide? Sometimes people say that it should hide the secret key but this isn’t sufficient. It should hide the message. The secret key is the tool to hide the message. I think we can say the same here for signatures. Of course we don’t want to leak the full secret key because this would allow others to produce signatures. But you can ask questions like “What happens if you leak for example parts of the secret key?” There are signature schemes where this is fine. You may leak 1 bit, you leak some bits of the secret key, but still people can’t produce signatures.

PW: The end goal is unforgeability. Someone without a key can’t produce a valid signature. A corollary is that obviously you cannot learn the full key from the signature.

ET: Part of my point was also in the multiplication example. You might be able to produce another valid signature without knowing the private key.

MF: I think we are ticking off most of the properties. The properties that Adam had in his presentation were completeness, soundness and zero knowledgeness.

AG: Those are the properties of a zero knowledge proof system. I don’t know if this too theoretical but the concept of a signature as a zero knowledge proof of knowledge. This was a big part of what I was blathering on about in that talk. Whether that is a good framework…. clearly with digital signatures, there is a bit more to it than that. There are multiple different security levels. I think I did mention this, there is complete break which is when a signature reveals your key. Then the most strict notion of security is something like unforgeability under chosen message attack, something like existential unforgeability. The idea that not only can you not grab the key from the signature but you can’t produce other signatures on the key without having the key. This is what Elichai was just talking about. You can’t do that even if you’ve got a chance to have something called an oracle. You can get as many signatures as you like on different messages from an oracle. Then you can’t take any of those signatures, or some combination of them, and produce a new signature on a message that hasn’t previously been signed. The other guys will probably correct me right now.

TR: I think this was pretty precise actually. A nice example appeared last week on Twitter. I don’t know if people heard of this new SGX attack, break whatever. I don’t know a lot of the details but apparently the guys who wrote the paper about that attack, they extracted a signing key from SGX. They implemented such an oracle in a sense on Twitter. It is a Twitter bot and you can send it messages. It will give you signed messages back. It will sign messages for you. What Adam just explained, unforgeability under chosen message attack. You can ask this Twitter oracle for signatures of any message and it will reply with exactly this. It will sign any message you want. But even if you have seen as many signatures as you want you can’t produce now a signature on a message that hasn’t been signed by this oracle. This is a super strong notion. You can ask the guy with the secret key to sign everything apart from one message and still you can’t produce a signature for this one message.

PW: Then there is the notion of strong unforgeability which is even stronger. Even if you have asked for a signature on a particular message and got that from an oracle, you cannot produce another signature on the same message.

MF: The signer cannot produce another signature on the same message? Or an attacker can’t produce a signature on the same message?

PW: An attacker should not be able to produce another signature on a valid message even if they have already seen a valid signature on that message. This is related to non-malleability.

MF: That is another property we need on Bitcoin that perhaps you don’t need on a normal digital signature scheme where we don’t have money and a consensus system with signatures being shared all over the place.

PW: People are describing unforgeability under chosen message attack as a very strong property. But there is a property literally called strong unforgeability which is even stronger.

MF: I think we are going to lose the beginners very quickly. We use a signature in Bitcoin because we need to prove that we own a private key without actually telling the world what the private key is. When we publish that signature we don’t want to leak any information. If we leak any more than just one bit then somebody is potentially able to create a signature through brute force without knowing that private key. There has to be no leakage when we publish that signature onchain or in a consensus system.

AG: It was half a joke at the start. But since we are still in this beginner area I think it might actually be quite important for people to understand this. Why isn’t HMAC a digital signature? To be clear, a hash function doesn’t have a key but with a HMAC you take a hash function and you put in a secret key as part of the message that you are hashing. It is important to reflect on why that doesn’t count as a digital signature scheme in the sense that we are talking about. I can’t produce a HMAC on the message “Hello” without knowing the corresponding secret for it. HMACs are used, for example, in TLS and various other protocols to ensure integrity of the message going from one party to the other. Why doesn’t that also count, just like Schnorr or ECDSA, as a digital signature? Just having a key like 100 random bytes along with your message and then just hashing it.

VH: I think the reason why a hash function can’t be used as a signature is because it is impossible to verify the signature without knowing the original message.

PW: Well, no. I don’t think it is required that a signature scheme can be verified without having the message. The problem is that you can’t verify a HMAC without knowing the key. Inherently you need to give the secret key to the verifier in that case. That obviously breaks the unforgeability.

TR: This explains why HMAC is used for TLS connections because they are connections for two defined peers, the two endpoints of the connection. They both can have the key. If I haven’t produced this MAC message then there is only one other party in the world that has the key which is the other side of my connection. I know the other side has produced the message.

PW: You can see a HMAC as an analog of a digital signature scheme but in the symmetric world where you always have shared keys or passwords instead of a secret public, private key pair.
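
To make the distinction concrete, here is a minimal Python sketch (the key and messages are purely illustrative): with an HMAC the verifier needs the very same secret the sender used, so anyone who can verify can also forge, which is why a MAC is the symmetric analog Pieter describes rather than a digital signature.

```python
import hashlib
import hmac

secret = b"\x01" * 32          # shared key, known to both sender and verifier
msg = b"Hello"

tag = hmac.new(secret, msg, hashlib.sha256).digest()

# Verification requires the same secret key the sender used...
assert hmac.compare_digest(tag, hmac.new(secret, msg, hashlib.sha256).digest())

# ...so a verifier can just as easily produce a valid tag on a new message.
# That is fine between two trusted endpoints (as in TLS) but useless as a
# public, third-party-verifiable signature.
forged = hmac.new(secret, b"Goodbye", hashlib.sha256).digest()
```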

MF: We’ll move on. The conclusion to that is that this is very complicated and there is a reason for the phrase “don’t roll your own crypto”. Constructing a digital signature scheme that is secure and does everything you want, especially in a complex system like Bitcoin where signatures are shared all over the place and stored on the blockchain, is hard.

(For more information on signature scheme security properties see this talk from Andrew Poelstra at MIT Bitcoin Expo 2019)

Satoshi’s digital signature choices

MF: Let’s move onto Bitcoin history. In 2009 Satoshi made some choices. Satoshi used ECDSA for the digital signature algorithm. He or she used secp256k1 for the elliptic curve. And he or she used the OpenSSL library for doing the digital signing and the cryptographic operations within Bitcoin. Any thoughts on those choices? Why he chose them, were they good choices?

PW: I’m sure all of us are mind readers and can guess his or her intentions.

MF: In retrospect I think there is general consensus that OpenSSL was a good choice. The elliptic curve was perhaps a strange choice, not one that was widely used. ed25519 was more commonly used. No? Please correct me.

ET: I don’t think so. I don’t think it existed back then. ECDSA was a common choice. The exact elliptic curve was not.

AG: Perhaps secp256r1.

ET: I think secp256r1 was the most common one but I’m not sure.

PW: NIST P-256 is the same as secp256r1

TR: The choices at this time, if you don’t want to invent your own stuff, the two possible choices were elliptic curves and RSA. Satoshi didn’t like RSA signatures because they are large. On blockchains we optimize for size. I guess this was the reason why he went for elliptic curves.

PW: In retrospect OpenSSL was a bad choice. We have had lots of problems. I do say in retrospect. I certainly don’t fault anyone for making that choice at the time. Since Tim brings up RSA there is a theory. The maximum size of pushes in Bitcoin script is 520 bytes. It has been pointed out that that is exactly the maximum size of a 4096 bit RSA signature in standard serialization. Maybe that was a consideration.

ECDSA and Schnorr

https://diyhpl.us/wiki/transcripts/chaincode-labs/2019-08-16-elichai-turkel-schnorr-signatures/

MF: This is where I wanted to start in terms of the reading list. I am going to share Elichai’s presentation at Chaincode. This is probably the best concise resource that I could find explaining the differences at a high level between ECDSA and Schnorr. Let me share my screen. Perhaps Elichai you can talk about the differences between ECDSA and Schnorr and summarize the points in your presentation. What I really liked about your presentation was that you had that glossary on every slide. As you say in the presentation it is really hard to keep monitoring what letters are which, which are scalars, which are points etc.

ET: Thank you. I really tried to make it as understandable as possible for non-mathematicians. Basically the biggest difference between ECDSA and Schnorr, you can see there on the second line of my presentation we use the x coordinate of the point kG which is weird. Usually in Schnorr for example there is a full separation between point operations, what you do for verification and signing. You can see at the bottom in Schnorr you take a bunch of scalars, you multiply, you add. It is normal modular arithmetic. In ECDSA you create a point and you take the x of that point. You use that like a scalar which is really weird. It is part of the reason why there is no formal proof for ECDSA although a lot of people have tried.
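
As a rough sketch of the difference Elichai is describing (notation simplified: x is the private key, P = xG the public key, k the secret nonce, R = kG, n the group order, H a hash function):

```latex
% ECDSA: the x coordinate of the point kG is reused as a scalar r
r = (kG)_x \bmod n, \qquad s = k^{-1}\,(H(m) + r\,x) \bmod n
% Schnorr (BIP 340 style): signing is pure scalar arithmetic
e = H(R \,\|\, P \,\|\, m), \qquad s = k + e\,x \bmod n
% and verification is one linear equation over the group
sG = R + eP
```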

PW: It depends what you mean by formal proof. There are proofs, just not under only the standard assumptions of discrete logarithm and random oracle. In particular if you add an additional assumption about taking the x coordinate from a point, and assume some properties about that operation, then it is provable.

ET: Have those properties been analyzed?

PW: That is the point with assumptions.

ET: For example, the discrete log even though it is an assumption it has been analyzed for a lot of years.

PW: It is a much less common assumption. That is a very reasonable criticism. Saying there is no proof, there is a proof. It is just not one under very common assumptions.

ET: I assume that proof isn’t new. Last time I Googled for it I didn’t find it.

PW: It is linked in the BIP.

ET: There is some proof but as Pieter said the proof assumes something that hasn’t been analyzed as thoroughly as, say, discrete log. The reason behind it is probably a combination of the patent on Schnorr and some weird politics in NIST. There are some conspiracies around it but we will ignore those. We do believe that ECDSA is working and isn’t broken. There is no reason to believe otherwise. It is still nicer that Schnorr doesn’t do this weird thing. On top of the actual proof, because Schnorr doesn’t do point operations in the signing it is linear, meaning that you can add signatures, you can tweak signatures and you still get something that is valid in the framework of the signing.

Benefits of Schnorr for Bitcoin

MF: Why do we want Schnorr? What does Schnorr bring us in the Bitcoin ecosystem?

ET: Obviously there is a proof. Probably a lot of non-mathematicians don’t care about the proof. We also have a lot of things in Bitcoin Core that we didn’t prove and we will never be able to prove. It is still an improvement. Also there is the whole linearity of this. Technically you can do Taproot with ECDSA but a lot of things like MuSig, sign-to-contract you cannot do with ECDSA easily although there are improvements and new papers on that.

MF: To summarize, an improved security proof compared to ECDSA. Linearity. Schnorr signatures are also a little smaller, not a massive deal.

ET: Getting rid of DER is a very good thing. Aside from the size, which is an improvement in Bitcoin where we need to store those bytes forever, the exact encoding isn’t fun. In the past there were a lot of bugs in it that were found by some people here.

Nadav Kohen (NK): I just wanted to point out that along with all the things that have already been mentioned Schnorr signatures also have the nice property that you can compute sG, the point associated with the signature, from public information only. This enables a lot of fun schemes. ECDSA doesn’t have any equivalent to that as far as I’m aware. With Schnorr and pre-committed R values, if you know the R value, the public key and the message then you can compute sG from just that public information ahead of time without knowing what the signature is. I am referring to the discreet log contract paper more or less.
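
A sketch of the property Nadav mentions, assuming a BIP 340 style challenge: once the nonce point R is committed to in advance, the challenge is fixed by public data alone, so anyone can compute the point sG before the signature exists, which is the trick discreet log contracts build on.

```latex
% R, P and m are all public, so the challenge is known ahead of time:
e = H(R \,\|\, P \,\|\, m)
% and the signature point can be anticipated without knowing s, k or x:
sG = R + eP
```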

stefanwouldgo: Doesn’t Schnorr have the strong unforgeability property Pieter mentioned before?

TR: When we mentioned there was a security proof on better assumptions it is also the case that Schnorr has strong unforgeability built in where ECDSA does not. For ECDSA the only reason why it is not strongly unforgeable is that you can negate one of the components in the signature. If you see a signature you can negate it and it is still valid. Given a signature you can produce one other valid signature. This can be avoided by just checking that one of the numbers is in the right range. It is not strictly an improvement. You can do the same for ECDSA. You can make it strongly unforgeable.

PW: Given that we already as a policy do that in Bitcoin, it is the low s rule. With the low s rule and under these additional assumptions it is also possible to prove that ECDSA is strongly unforgeable.
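
A sketch of the malleability being described (simplified; n is the curve order): ECDSA verification cannot distinguish s from its negation, so a third party can flip it, and the low s policy pins down which of the two encodings is acceptable.

```latex
% If (r, s) verifies as an ECDSA signature on m under P, then so does (r, n - s).
(r, s)\ \text{valid} \;\Longrightarrow\; (r,\, n - s)\ \text{valid}
% The low s rule restores uniqueness by accepting only s \le n/2.
```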

TR: I think the same comment applies to the DER encoding. It is just an encoding issue. Now we move to Schnorr signatures we also fix some other things that are annoying. But you could also encode ECDSA in a nicer way.

PW: I think it is pointed out in the BIP that there are a number of improvements that we can make just because we are switching to a new scheme. We can clean up a bunch of things which includes batch verification. Due to Bitcoin’s use as a consensus system we not only have the requirement that people without a private key cannot produce a valid signature. We also need the property that someone with the private key cannot produce a signature that is only valid to some verifiers but not all. This is a very unusual property.

AG: A little side note that might be of interest or amusement to some people. A few moments ago Elichai was talking about the issue of why we didn’t have Schnorr in the first place. We had a patent. He also mentioned that there were conspiracy theories around the NSA. I found an interesting historical anecdote about this topic. Koblitz (the k in secp256k1), an elliptic curve cryptography guy wrote in a paper about the situation in 1992 when the standard for ECDSA was proposed. He wrote “At the time the proposed standard which soon after became the first digital signature algorithm ever approved by the industrial standards bodies encountered stiff opposition especially from advocates for RSA signatures and from people who mistrusted the NSA’s motives. Some of the leading cryptographers of the day tried hard to find weaknesses in the NIST proposal. A summary of the most important objections and the responses to them were published in the crypto 92 proceedings. The opposition was unable to find any significant defects in this system. In retrospect it is amazing that none of the DSA opponents noticed that when the Schnorr signature was modified the equivalence with the discrete logarithm was lost.” It is an incredible quote. They thought the NSA was up to no good. None of them noticed the most basic obvious fact which was that as we have just been describing in order to try to convince yourself that ECDSA is as secure as Schnorr in terms of being equivalent with the discrete log (ECDLP) we have to jump through hoops and write complicated papers like this paper by Fersch which I think is referred to in the BIP. It is amazing that everyone dreamt up these conspiracy theories but no one noticed that the discrete log equivalence was lost.

TR: This was a historical thing?

AG: It was the crypto 1992 proceedings.

TR: When was the first proof? 1997 I think. The Journal of Cryptology paper says received in 1997. Provable security of Schnorr wasn’t a thing at that time.

AG: Are you talking about Pointcheval and Stern? There was no concept of a reduction before that?

TR: No one figured out how to prove Schnorr signatures are secure? I don’t know, I need to check. Maybe I am wrong.

AG: It seems to me that if anyone had even thought about the whole business of nonce reuse then it is almost the same fact. The reduction is almost directly the same thing as the fact that nonce reuse breaks the signature scheme right?

TR: If you are familiar with Fiat-Shamir then it is super weird to remove the nonce from the hash. ECDSA only hashes the message, not the nonce together with the message. This is what creates the problem in the security proof.

MF: Anything else on why we want Schnorr? We’ve covered everything?

TR: It was mentioned but not really stressed. In general it is much easier to build cryptography on top of Schnorr signatures. There are so many things you can imagine and things we can’t imagine right now. At the moment we are looking into multisignatures, threshold signatures, blind signature variations. Andrew Poelstra’s scriptless scripts. All of these things are usually much easier to build on top of Schnorr because you have these nice algebraic properties that Schnorr preserves. The point of ECDSA basically was to break those algebraic properties. There is nothing definitive you can say about this but in general the math is much nicer.

NK: Essentially building anything on top of ECDSA that you can build on top of Schnorr requires zero knowledge proofs to fill in the lost properties which adds a lot of complexity.

TR: Or other things like encryption. With Schnorr signatures we have very easy constructions for multisignatures. If you have n parties who want to sign… There was work involved and people got it wrong a few times but we have it now and it is simple. For ECDSA even if you go for only two party signatures, 2-of-2, you can build this but it is pretty ugly. You need a lot of stuff just to make that work. For example you need to introduce a completely different new encryption scheme just to make the 2-of-2 signatures work.

Design considerations for the Schnorr implementation

MF: Hopefully that was a convincing section on why we want Schnorr. I am going to ask some crazy questions just for the sake of education and discussion. What would Taproot look like without Schnorr? You wouldn’t be able to have the default spend be a multisig but you could still have Taproot with ECDSA and the default case just be a single sig spend. Is that correct?

PW: You could but I think that would be pointless. I guess if it was a 1-of-n it would still make sense but apart from that.

MF: I suppose multisig is just so big. You could still have the tree with different scripts on it, with different multisigs on each leaf in that tree but it would just be way too big.

PW: You would just have MAST. Taproot without a default case, you have MAST.

MF: We won’t go into Taproot today, another time. That was the case for why we want Schnorr in Bitcoin. I am sure Pieter and Tim early on went through this thought process. But for people such as me and some others on the call, what were the different ideas in terms of implementing Schnorr? Let’s say Schnorr is a good idea. What are the different ways we could implement Schnorr? I’ll throw some suggestions out there. You could have an OP_SCHNORR opcode rather than a new SegWit version. Or you could have a hard fork, I read about this but I didn’t understand how it would work, such that previous ECDSA signatures could be verified using the Schnorr signature verification algorithm. Any other ideas on how Schnorr could have been implemented?

AG: When I heard that they were doing this, the thoughts running through my head were what is the serialization? How are the bytes serialized? How are the pubkeys represented? You probably remember that there was the possibility in the past of having uncompressed pubkeys and compressed pubkeys. A public key is a point on a curve but you have to somehow convert that into bytes. Are you printing out the x coordinate or the x and y coordinate? That question is interesting because of the nature of elliptic curve crypto. You have y^2 = f(x) which means that you will always have this situation where both y and -y are valid solutions for any particular x. So there is the question of how you serialize the pubkey. Then there is the question of how you print out the Schnorr signature. You probably remember from my talk in 2018 this idea that you have a commit, challenge, response pattern. The challenge ends up being a hash value which we often write as e in these equations. The question is do you publish s, the response, and e, the challenge? Or do you publish s and the commitment R? Both of those turn out to work fine for verification. As I’m sure several people on this call can explain in great detail there was a decision made to do one rather than the other. Those were the questions that came into my head.

TR: I wasn’t involved back in 2016 but let me correct one thing that you mentioned that you read. This idea that you could do a hard fork where you use Schnorr verification to verify the existing ECDSA signatures. You said you didn’t understand. This won’t work but maybe you meant something else. What Bitcoin Cash did is they did a hard fork and they reused the keys. A Schnorr public key is the same as an ECDSA public key and the same for the secret key. They are just elliptic curve points on the secp256k1 elliptic curve. You can take an existing ECDSA key, a point on the curve, and reuse it as a Schnorr key. I think this is what Bitcoin Cash did in their hard fork. I’m not sure if they turned it off, I think they turned off ECDSA and said “From now on you can only use Schnorr signatures.” If you have existing UTXOs that are protected by ECDSA keys those keys will be now considered Schnorr keys. This is one way you can do this. It is a weird way.

ET: You are saying because the scriptPubKey contains a public key they just say “From now on you need a Schnorr signature to spend it.”

TR: Exactly. Perhaps later I can say why I think this is not a great idea.

ET: At the very least it violates separation of concerns. You don’t want to mix multiple cryptographic schemes with the same keys. I don’t think there is an obvious attack if you use different nonce functions.

PW: Arguably we are doing that too for the sake of convenience, reusing the same keys. Given that the same BIP 32 derivation is used.

MF: One thing Pieter said in his Scaling Bitcoin 2016 presentation was that you couldn’t just slot in Schnorr as a replacement for ECDSA because of the BIP 32 problem. This is where, if you know the master public key, a signature under one child key can be converted into a signature under another child key.

“It turns out if you take Schnorr signatures naively and apply it to an elliptic curve group it has a really annoying interaction with BIP 32 when used with public derivation. If you know a master public key and you see any signature below it you can transmute that signature into a valid signature for any other key under that master key.” Pieter Wuille at Scaling Bitcoin 2016
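
A sketch of why the naive, non key-prefixed variant interacts badly with BIP 32 unhardened derivation, where a sibling key differs from the signing key by a publicly computable tweak t (BIP 340 avoids this by hashing the public key into the challenge):

```latex
% Naive Schnorr challenge does not commit to the public key:
e = H(R \,\|\, m), \qquad s = k + e\,x
% For a sibling key x' = x + t (t public), adjust the signature:
s' = s + e\,t
% Then s'G = R + e(P + tG), so (R, s') is valid for the sibling key P' = P + tG on the same message.
```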

NK: Nonce generation seems like a pretty big design choice. How we have deterministic nonce generation plus adding in auxiliary randomness.

MF: The people who were involved back then, I don’t know exactly when, maybe it started 2013, 2014, but it certainly seemed to get serious in 2016 when Pieter presented on it at Scaling Bitcoin. Were there any big changes in terms of your thinking on how Schnorr should be implemented? I know Taproot only came along in 2017 so you had to integrate the thinking on the Schnorr implementation with Taproot. We will go onto the multisignature stuff later.

PW: The biggest change I went through was thinking of Schnorr primarily as a scaling improvement. I was envisaging it to be part of a cross input aggregation scheme where we would aggregate all signatures in an entire transaction into one. With the invention of Taproot that changed to a much more simple per input for a privacy improvement technique.

ET: I think I remember you tweeting about it a while ago, the fact that you see Schnorr as doing cross input aggregation and maybe one day BLS or something like that doing cross block aggregation.

MF: So Taproot and perhaps changing priorities on what Schnorr use cases were possible. Cross input aggregation isn’t in BIP Schnorr or in a potential soft fork in the coming months or years. There is a lot of work, as I understand it, to get to a point where that could be introduced to Bitcoin.

Dangers of implementing Schnorr too early

MF: What could go wrong? Let’s say that there had been a Schnorr soft fork two, three years ago with the current thinking. Pre-Taproot, an OP_SCHNORR opcode was introduced. Perhaps Schnorr was introduced before some of the security concerns of multisignature schemes were discovered. What could’ve gone wrong if we had implemented Schnorr two, three years ago?

AG: It is a science fiction argument I suppose but what if people started making signature aggregation just by adding keys together?

PW: Whenever we are talking about hypothetical scenarios it is hard to pin down what exactly we are talking about. If it was just an OP_SCHNORR that was introduced and nothing more I don’t think there would be much risk. It would also not be particularly interesting. You wouldn’t get the cross input aggregation, you wouldn’t get Taproot. You would have the ability to do efficient multisig and threshold signatures within one input. Before we spend a lot of time thinking about all the ways that can go wrong, MuSig and its various iterations, there is certainly risk there.

NK: You would’ve still gotten adaptor signatures and that would’ve made Lightning nicer maybe. We will still get there.

VH: Because the Schnorr implementation that is supposed to come into Bitcoin eventually will use pay-to-public-key outputs, if there is eventually a quantum computer it could break a lot of things because the key is no longer hashed. It is just a public key that you pay to.

NK: It is still inside of SegWit. It is still going to be a witness pubkey hash. I could be mistaken.

PW: There is no hash.

AG: We are talking about Taproot here not Schnorr per se.

PW: That’s right. The Taproot proposal puts a tweaked public key directly in the output. There are no hashes involved.

NK: Does the tweak save this scenario at all?

PW: I would argue there is no problem at all. I believe the argument that hashes protect against a quantum computer is cargo cult. If ECDLP is broken we have a problem, period. Any interesting use beyond simple payments, and that includes BIP 32 derivation, multisig, Lightning and various other protocols, already relies on sharing public keys with different parties. If we believe that computing a private key from a public key is easy for an attacker we shouldn’t be doing these things. The correct approach is thinking, in the long term, about how we can move to an actual post quantum secure scheme and not lying to ourselves that our current use of hashes is any significant improvement.

MF: That is a conversation you have had to rehash a lot it seems. Watching from afar.

(For more information on why hashing public keys does not actually provide much quantum resistance see this StackExchange post from Andrew Chow.)

AG: Deliberate pun Michael?

MF: There is a question on the YouTube from Carel. What are the trust vectors within elliptic curves and how does it compare to Schnorr? How do the parameters compare?

NK: They are the same right?

MF: Yes. Greg has answered those in the YouTube chat. “The BIP 340 proposal uses exactly the same group as Bitcoin’s ECDSA (specifically to avoid adding new trust vectors but also because the obvious alternatives aren’t big improvements)”

Dangers of implementing Schnorr today

MF: This was the roadmap in 2017 in terms of Schnorr signature aggregation. Let’s have a discussion on what could go wrong now. We have this Schnorr proposal. We have the BIP drafted, the code is pretty much ready, there still seem to be some minor changes. What could go wrong if BIP Schnorr and BIP Taproot were merged in, say, tomorrow? What would we be having nightmares over?

NK: You mean the code or a soft fork?

MF: It needs to be merged in and then there would be a soft fork. What could go wrong? This is linking to the discussion with Pieter earlier where I said “What would’ve happened if it had been implemented really early?” Pieter said “There wouldn’t have been much of a problem if say there had been an OP_SCHNORR opcode back in 2017”. But we don’t want to have things introduced to Bitcoin where there are lots of opcodes with funds locked up using those opcodes. I think Andreas (Antonopoulos) has called this crud.

(See the end of this presentation from Andreas Antonopoulos at SF Bitcoin Devs on Advanced Bitcoin Scripting)

MF: We want to get it right first time. We don’t want to get things wrong. We want to have sorted out the multisig schemes and make sure Taproot integrates perfectly with Schnorr. We don’t want to rush things in. What could go wrong?

PW: I think my biggest concern is in how things end up being used. Just because Schnorr makes it easier to build more complicated constructions on top doesn’t mean they all interact well and securely. We have already seen many unexpected pitfalls. The two round MuSig scheme was proven insecure. We went from thinking we had a proof, to someone finding a mistake in the proof, to someone finding an actual break. All these things were discovered before deployment. I think the process is working here. There is certainly risk that people will end up using much more experimental things on top. Nonce derivation or anything that introduces more interaction at the wallet signing level introduces risk I think. At the same time it comes with huge benefits.

NK: I would also like to note that there is at least some risk of that happening on ECDSA as well now that there are a lot more feasible schemes built on top of ECDSA that are worse than the Schnorr versions.

PW: Absolutely. I think these same concerns apply to things like 2 party ECDSA, much more so in fact.

MF: We will go onto this later but do we need to have those multisig schemes bulletproof and be really confident in those multisig schemes before we merge Schnorr in? Or can we potentially make those secure in the future after a Schnorr soft fork?

TR: It is a good question. Let me circumvent the question and just say what I believe the current state is. For multisignatures we are pretty good. We have MuSig, it is there, it has a security proof. People have looked at it. I am pretty confident. It could go wrong but I hope not. Multisig is n-of-n, threshold is t-of-n where t could be different from n. The situation is already a little bit more involved. I think when we started to write the BIP we thought this is a solved problem. Then it turned out that it is harder than we thought. There are some schemes from the literature that are decades old. We thought that if people worked on this in the 1990s and there was no follow up work that pointed out issues this is probably fine. Then I looked a little bit into it, Jonas looked into it and we figured out some issues. For example those schemes assume a lot of things that are weird in practice. We changed the wording in the BIP now to be a little bit more careful there. This for sure needs more research. One thing we mention in the BIP is blind signatures and they require more research. On the other hand I think this is not very important. Going back to your question we can certainly live without a working blind signature scheme. If you really want to have blind signatures that end up on the blockchain and are valid Bitcoin signatures this is a pretty rare use case. For threshold it would be nice to have a scheme ready when Schnorr is deployed. But on the other hand I don’t think it is a dealbreaker. I share the concern of Pieter that people will build crazy things on top of it. On the other hand I see that with a lot of features that have been introduced into Bitcoin the usage is not super high so far. It is not like we introduce Schnorr and the next day after the soft fork everybody will use it. It is a slow process in the end anyway.

MF: In a worst case scenario where there was a bug or things that had to be changed after it was deployed I suppose it depends specifically what the bug was and exactly what the problem was.

TR: What Pieter mentioned in that sense is maybe not the worst case scenario. This would be a scheme on top of Schnorr failing and you could just stop using this. It is not nice but in this case you hopefully don’t need to fix the consensus code. Whenever you need to do a hard fork because of a bug in the consensus code it is a nightmare scenario.

PW: I think that is good to point out. Almost all the risk is in how things end up being adopted. Of course there could be actual bugs in a consensus implementation and so on but I think we are reasonably confident that that isn’t the case. That is not a very high bar. Imagine the Schnorr scheme we introduce is just trivially insecure. Anyone can forge a signature. That isn’t the risk for the consensus system in the sense that if nobody uses it there is no problem. Today you can send to OP_TRUE, that is also insecure. Everything depends on the actual schemes and how wallets adopt things. These were extreme examples obviously.

TR: I was surprised by your comment Pieter. That your concern is people build crazy things on top of it. This is always what I fear, coming more from a theory background. When we have Schnorr signatures you can do this fancy construction that I wrote ten minutes ago. You get all these nice properties but you need a security proof. How does it work? Writing a paper and getting peer review takes two years. I am always on the side of being cautious and try to slow down things and be careful. I fully agree with Pieter. We should be careful. Just because a scheme seems to work doesn’t mean it is secure.

TR: One point that we haven’t covered that much is batch verification. That’s a big one. This is actually something where Schnorr is better than ECDSA because we just can’t do this with the current ECDSA.

PW: With a trivial modification you could. It wouldn’t be ECDSA anymore. It is not just an encoding issue. But with a very small change to ECDSA you could.

AG: What would that be?

PW: You send the full R point rather than only the x coordinate of R modulo the curve order. That’s enough. Or what we call a compact signature that has an additional two bits to indicate whether R_x overflowed or not and which of the two to pick. There is a strong analogy between batch verification and public key recovery.
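
A sketch of the batch verification being discussed, assuming BIP 340 style signatures (R_i, s_i) on messages m_i under keys P_i with challenges e_i = H(R_i || P_i || m_i); the random weights a_i (with a_1 typically fixed to 1) prevent invalid signatures from cancelling each other out, and the whole check collapses into one large multi-exponentiation:

```latex
\Big(\sum_i a_i\, s_i\Big) G \;=\; \sum_i a_i\, R_i \;+\; \sum_i a_i\, e_i\, P_i
```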

MF: Your answer surprised me Pieter because as a protocol developer, or someone who cares about the protocol and the network, we don’t care about people building things on top and losing money. What we are worried about is funds being locked up in some bug ridden opcode etc and everyone having to verify that bug ridden opcode forever.

PW: I don’t know what the distinction really is. There are two concerns. One is that the consensus code is buggy and the network will fork. Another is an insecure scheme gets adopted. But the second concern is much more about wallet level things, especially if you go into higher level complex protocols, than about consensus code. Of course if our Schnorr proposal is trivially insecure and everybody starts adopting it that’s a problem. Maybe I am making a separation that doesn’t really matter. Just proposing something that is possible with new consensus code, if nobody adopts it there is no problem. There are already trivially insecure things that people could be adopting but don’t. People are not sending to an OP_TRUE or something. Perhaps you are talking about technical debt in stupid opcodes that end up never being used?

MF: I suppose that is one of the concerns. Crud or lots of funds being locked up in stuff that wasn’t optimal. It would’ve been much better if things that were optimal were merged in rather than rubbish.

PW: That again is about how things are used. Say Taproot goes through today as it is proposed and gets adopted but everyone uses it by putting existing analogs of multisig scripts inside the tree and don’t use those features. That is a potential outcome and it would be unfortunate but there isn’t much you can do as a protocol designer.

ET: I believe you mentioned the distinction between a consensus failure and upper level protocol failures. I think they can be just as bad. Let’s say Schnorr didn’t interact well with BIP 32 and we didn’t notice that. All the wallets would deploy Schnorr with BIP 32 and a lot of funds would get stolen. This is at least as bad as funds being locked up because of a consensus bug.

PW: Absolutely there is no doubt about that. But it depends on it being used. That is the distinction.

MF: You only need one person to use it and then everyone has to care about it. Is that the distinction? Or are you talking about lots of adoption?

ET: If there is a bug in the altstack for example it wouldn’t be the end of the world in my opinion because as far as I know almost no one uses it. You could just warn everyone against using it and use policy to deny it but I don’t think it would break Bitcoin.

Hash function requirements for Schnorr signatures

http://www.neven.org/papers/schnorr.pdf

MF: So on the reading list there is this paper by Neven, Smart and Warinschi. Who has read this paper? Anyone like to discuss what the hash functions are for Schnorr signatures? How has this paper informed how hash functions are used for Schnorr in Bitcoin?

TR: If I remember correctly this paper has a deeper look into possible alternative security proofs for Schnorr signatures. The normal security proof for Schnorr signatures models the hash function as a random oracle which is what people in crypto do because we cryptographers don’t know how to prove stuff otherwise. At least not very efficiently. We invent this model where we look at the hash function as an ideal thing and then we can play ugly tricks in the security proof. When I say ugly tricks I really mean that. The only confidence that we have in the random oracle model is that it has been used for decades now and it hasn’t led to a problem. We know it is weird but it has never failed us so I believe in it. It is already relied on in other places in Bitcoin so it is not a problem to rely on it here.

PW: I think this is something that few people realize. How crazy some of the cryptographic assumptions are that people make. The random oracle model, a couple of months ago I explained to someone who was reasonably technical but not all that into cryptography how the random oracle DL proof works. His reaction was “That is terrible. That has nothing to do with the real world.” It is true. I won’t go into the details here but it is pretty interesting. The fact that you can program a random oracle has no relation at all to how things work in the real world. If you look at it just pragmatically, how well it has helped us build secure protocols? I think it has been very useful there. In the end you just need to look at how good is this as a tool and how often has it failed us.

TR: In practice it has never failed us so it is a super useful tool. Nevertheless it is interesting to look into other models. This is what is done in this paper. In this paper they take a different approach. As I said, normally if you want to prove Schnorr signatures are secure you assume that discrete logarithm is hard. This is a standard thing. There is nothing weird about this. It is still an assumption. You can formalize it and it can be true in the real world that this is hard. You can make this crazy random oracle assumption for the hash function. I think in this paper they go the other way round. They take a simple assumption for the hash function, and they identify several assumptions, for example random-prefix second-preimage resistance. I won’t go into details but you can make that formal. It is a much weaker assumption than this crazy random oracle thing. To be able to prove something about the resulting Schnorr signature scheme what they do is they instead make the discrete logarithm problem a more ideal thing. Discrete logarithm in a mathematical group. In our case it is the group of points on the elliptic curve. In the proof they model this group as an ideal mathematical object with strange idealized properties. In this model, the generic group model, we don’t have as much confidence as in the random oracle model I would say. It is another way to produce a proof. This gives us more confidence in the resulting thing. If you go with one approach you can produce a proof, if you go with the other approach you can also produce a proof. It is even more confidence that what we are doing is sound in theory. Concretely this also has implications for the length of the hash function here. I’m not sure.

PW: I believe that is correct. I think the paper points out that you can do with 128 bit hashes in that model but of course you lose the property that your signature is a secure commitment to the message in that case. That may or may not matter. I think it is an implicit assumption that many people who think informally about the signature scheme assume it is scary to lose.

TR: Let me give some background here. This is not really relevant for the Schnorr signature scheme that we have proposed. But it is relevant to the design decisions there. You could specify a variant of Schnorr signatures which is even shorter than what we have proposed. In our proposal you send two elements, R a curve point and s the scalar. You can replace R with a hash. This is in some sense in the spirit of ECDSA. A hash is truncatable: if you have a hash function like SHA256 you can truncate it to 128 bits. You can play around with this length. If you want to send an elliptic curve point you can’t really do that. Usually you send an x coordinate. You could truncate this but then the receiver would never be able to recover the full point from it again. By switching to a hash you can tune its parameters and make the signature even smaller. This would reduce the final signature by 128 bits and it would still be secure. However this comes with several disadvantages. We explain them in the BIP. One of the disadvantages is that we lose batch verification. We don’t want to do that. Then another disadvantage is pointed out by this paper. If we switch to hashes and do this size optimization then the signature scheme would have a weird property, namely the following. I can take a message m, give a signature on it and send you the message. Later I give you a second message m’ and the signature that I sent you will also be valid for m’. I can do this as the sender. If you think about unforgeability there is nothing wrong with this because I am the sender, I am the guy who has the secret key. I can produce a signature for m and m’, or for any other message that I want, but it is still an unintuitive property that I can produce one single signature that is valid for two messages. This will confuse a lot of protocols that people want to build on top of Schnorr because they implicitly assume that this can’t happen. In particular it is going to be interesting if the sender is not a single party. In this example I mentioned I am the guy with the secret key. But what if we do something like MuSig where the secret key is split among a group of parties and maybe they don’t trust each other. Who is the sender in that case? Things start to get messy. I think this is pointed out in this paper. Another reason why we don’t want to use this hash variant.
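
A sketch of the shorter variant Tim describes, with the caveats he lists (this is not what BIP 340 uses, and the hash inputs are simplified): the signature carries the truncated challenge e instead of the point R, so each signature has to be verified on its own by reconstructing R, which is why batch verification is lost.

```latex
% Sign:   e = \operatorname{trunc}_{128}\big(H(R \,\|\, m)\big), \qquad s = k + e\,x, \qquad \text{signature} = (e, s)
% Verify: recompute R' = sG - eP and check \operatorname{trunc}_{128}\big(H(R' \,\|\, m)\big) = e
```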

AG: You mentioned that one of the reasons you don’t like this truncated hash variant is that it would remove the batch verification property. I know you mentioned that earlier as well. I wonder if it would be useful if you could explain concretely how much an advantage it would be in Bitcoin in the real world to have the batch verification property.

PW: I think a 3x or 4x reduction in CPU for verifying a block isn’t bad.

AG: Under what assumptions do you get those figures?

PW: You are right to point out that there are assumptions here. In practice many transactions are validated ahead of time. So that doesn’t apply. This is definitely something that matters for initial block download validation. And possibly even more because you could arguably batch multiple blocks together. Or even all of them. You can get even higher factors; I haven’t looked into the numbers to see how much that is.

AG: Are you saying in a typical scenario where everything was using Schnorr that you’d get a 4x speedup on verification of a block?

PW: Yes if you do it from scratch. From scratch I mean without having the transactions pre-validated.

TR: I think also there it is a trade-off. There are a lot of parameters we don’t know. You could also make the point that if you have smaller signatures and the hash variant would give this to you, then this would also speed up initial block download because things are smaller.

PW: Greg Maxwell who is apparently listening is pointing out on IRC that it also matters after you have been offline and you come back online.

stefanwouldgo: But wouldn’t it actually be worse if you have smaller signatures? You would have more signatures so you would have a longer IBD.

TR: Because blocks are full anyway.

stefanwouldgo: If we have a fixed block size and we have more signatures we get a longer IBD.

TR: This adds another point to this trade-off discussion.

PW: It depends what you are talking about. The correct answer is that there is already a limit on the number of sig ops a block can perform. That limit isn’t actually changed. The worst case scenario is a block, either before or after, that hits that limit of 80,000 signature operations per block. The worst case doesn’t change. Of course the average case may get slightly worse in terms of how many signatures you need to do. Average transactions will make more efficient use of the chain. I think the counterpoint to that is introducing the ability to batch verify. So that isn’t actually a downside for the network.

stefanwouldgo: You could actually use a hash that is shorter, which would make the signatures even shorter, but then you couldn’t batch verify them in a block. That would make things worse twice in a way.

TR: You could turn this into an argument for smaller blocks. Let’s not go into that discussion.

ET: I have another use case that I think is important for batch verification which is blocksonly mode. This is common on low powered devices and even phones, for people who run full nodes on phones.

PW: Yes that is a good point. It definitely matters to those.

Reducing Bitcoin transaction sizes with x only pubkeys

https://medium.com/blockstream/reducing-bitcoin-transaction-sizes-with-x-only-pubkeys-f86476af05d7

MF: This was Jonas Nick’s blog post on x only pubkeys. I’m sure somebody can summarize this because this was pretty important.

PW: This is about the question of whether it is fine to go to x only public keys and the trade-offs there. People sometimes make the argument that clearly you are cutting the key space in half because only half of the private keys correspond to those public keys. You might think that this inherently means a slight reduction in security. But it turns out that this is not the case. A way to see this intuitively is that if it were possible to break the discrete logarithm of an x only public key faster than breaking it for a full public key then I could use the faster algorithm to break full public keys too, by optionally negating the key at the beginning and negating the output at the end. It turns out that this doesn’t just apply to discrete logarithms, it also applies to forging signatures. I think Jonas has a nice blog post on that which is why I suggested he talk about this. The paradox here is how is it possible that you don’t get a speedup? The reason is that the structure of a negation of a point already lets you break the discrete logarithm of a full public key slightly faster. You can try two public keys at once during a break algorithm.
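
A sketch of the reduction Pieter is describing, assuming a hypothetical solver A that breaks x only keys (interpreted, as in BIP 340, as the point with even y): a full key can be handed to it after at most one negation, so x only keys cannot be meaningfully easier to break than full keys.

```latex
% Given a full key P = (x, y):
%   if y is even, run A on x to get d with dG = P and output d;
%   if y is odd,  run A on x to get d with dG = -P and output n - d, since (n - d)G = P.
% The overhead is one negation before and one after: an additive cost, not a factor.
```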

TR: It seems like a paradox. It is not that you don’t lose security, it is just super, super tiny.

PW: What do you mean by super tiny?

TR: People sometimes talk about the hardness of cryptographic problems. They say this should be 2^128 operations hard to break discrete log but they don’t say what the operations are. The reason why they don’t say is because the numbers are that big. It doesn’t really matter if you count CPU cycles or additions or whatever. In this case this explains the difference here because negating a public key sounds like an elliptic curve operation and those are much more expensive than finite field operations. If you think of an elliptic curve point it has two coordinates x and y and they are elements of a finite field. Computations in a finite field are much faster than computations on a point itself. However for negating this is not the case. If you negate the point what you do is negate the y coordinate. It is not like you lose 1 bit of security in terms of public key operations or group operations. You lose maybe 1 bit in finite field operations but it is so much smaller.

PW: No I think you are failing to distinguish two things. The difference between the optimal algorithm that can break a x only key and the optimal algorithm that can break a full key is just an additive difference. It is a precomputation and post processing step that you do once. What you are talking about is the fact that you can already use the negation to speed up computing the discrete log of a full public key. This is what resolves the paradox.

TR: Maybe you are right. I need to read Jonas’ blog post again.

ET: Even if it did cut the security in half it would just be one less bit.

PW: Or half a bit because it is square root. It isn’t even that.

TR: If you are talking about a single bit, even if you lose half a bit or one bit it is not really clear what that means. In that sense ECDSA is actually a tiny bit harder than breaking Schnorr. We don’t really know for those small factors.

stefanwouldgo: It is just a constant factor right?

TR: Right. What people usually do is switch to better schemes if they fear that breaking the existing scheme is within reach of attackers in the near future. It is of course very hard to tell, who has the best discrete log server in the world.

AG: So what I am learning from this conversation is that directory.io is only going to be half as big a website now, is that right? Does nobody get the joke? directory.io was a website where you could pull it up and it would show you the private keys of every address one by one.

BIP Schnorr (BIP 340)

https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki

MF: Does somebody want to talk about the high level points from this BIP? What they found interesting in the current BIP?

AG: One of the first things that I noticed in the BIP was the change in the deterministic nonce generation. It is going to be rehashing what is already stated in the BIP I’m sure. I wonder if Pieter or another author could explain the rationale again for the optional nonce generation function.

TR: That is a tough one.

AG: To start off, the general idea here is you have to generate a nonce every time you generate a signature and if by accident you use the same nonce twice on different messages you are going to reveal your private key. This has led to people losing money in the past from bad programming errors. There is something called deterministic randomness which is a weird concept. It means that you generate a number as a function of your private key and the message you are signing. What that means is that the number that comes out is unpredictably random, out of a hash function let’s say, but because you are using a deterministic algorithm to generate it you are not going to accidentally create the same number for different messages. That is deterministic nonce generation. The standard that was being used, and I think is still being used by pretty much all wallets, is something called RFC-6979. It is a rather complicated algorithm. It is an RFC, you can look it up. One interesting thing about this new BIP 340 for Schnorr is that it specifically doesn’t use that algorithm at all. It has another deterministic random nonce generation algorithm. I was asking what was the thinking behind that specific function.

PW: There were a number of reasons for that. One is that RFC-6979 is actually horribly inefficient. To the point that it is measurable time in the signing algorithm. Not huge but measurable. I believe it does something like 14 invocations of the SHA256 compression function just to compute one nonce. The reason for this is that it was designed as a RNG based on hashes. It is then instantiated to compute nonces. It is used in a place where on average only one nonce will be created. The whole set up is actually wasted effort. I think you can say that our scheme is inspired by the ed25519 one which actually uses SHA512 in a particular arrangement. Our scheme is based on that but we just use SHA256 instead. The reason is simply that the order of our curve is so close to 2^256 that we don’t need to worry about a hash being biased. You can directly use the output of a hash as your nonce because our order is so close to 2^256 there is no observable bias there. That simplifies things. We had a huge discussion about how and whether to advise something called synthetic nonces. You take the scheme of deterministic randomness and then add real randomness to it again in such a way that it remains secure if either the real randomness is truly random or the deterministic scheme is not attacked through breaks or through side channel attacks. This has various advantages, the synthetic nonces. It protects against hardware injection faults and other side channel attacks. We looked into various schemes and read a few papers. To be fair all of that is really hard to protect against. We needed to pick some scheme and there were a number of competing, otherwise equally good ones, so we tried to build a model of what a potential side channel attacker could do and picked the one that is best in that regard.
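
A minimal Python sketch of the default nonce derivation along the lines described in BIP 340 (simplified, not production code; the constant, tags and structure follow the BIP, the helper names are just illustrative): the nonce is deterministic in the secret key, public key and message, and optional auxiliary randomness is masked into the key first so that either entropy source alone is enough.

```python
import hashlib
import os

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def tagged_hash(tag: str, data: bytes) -> bytes:
    """BIP 340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)."""
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + data).digest()

def derive_nonce(seckey: int, pubkey_x: bytes, msg: bytes, aux_rand=None) -> int:
    """Synthetic nonce: deterministic, with fresh randomness XORed into the key."""
    if aux_rand is None:
        aux_rand = os.urandom(32)
    # Mask the secret key with the hashed auxiliary randomness.
    t = (seckey ^ int.from_bytes(tagged_hash("BIP0340/aux", aux_rand), "big")).to_bytes(32, "big")
    # Hash the masked key together with the x-only public key and the message.
    rand = tagged_hash("BIP0340/nonce", t + pubkey_x + msg)
    k = int.from_bytes(rand, "big") % N
    assert k != 0  # a real implementation has to handle this case
    return k
```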

AG: Is that in the scope of the BIP? It is a bit of gray area. You could say “You have to choose a random nonce” and that’s the end of it. Clearly a protocol specification has to say what you are supposed to do.

TR: It is a very good question that you bring up. I think our goal there was to be explicit whenever possible and give safe recommendations. I was one of the guys strongly advocating for this approach. It is always hard. You write a cryptographic algorithm or even a specification in that case and then your hope is that people will implement it. On the other hand you tell people not to implement crypto if they don’t know what they are doing. I think even people who know what they are doing and implement crypto get it wrong sometimes. We could maybe have skipped some details in the BIP draft. On the other hand if you are an average skilled programmer I still don’t think you should be the one implementing crypto. But I think the idea here is that if we give very good recommendations in the BIP then there is a higher chance that the resulting implementation will be ok. Another thing here is that this specific nonce function, even if you know what you are doing you don’t want to invent this from scratch. You might be happy that other people have thought about it.

PW: Greg gives a good point on IRC which I am going to quote. “It is important to have a good recommendation because if you let people just choose a random nonce we know they will do it poorly. The evidence of that is that in a very early version of libsecp the nonce could be passed explicitly as a call. It wasn’t generated inside the library. Some Ethereum implementation just passed a private key XOR the message as the nonce which is very insecure. The application writer isn’t a cryptographer necessarily.” I was paraphrasing in this case.

MF: There is a question from nothingmuch in the chat. Care to comment on “It is important to note that multisignature signing schemes in general are insecure with the random number generation from the default signing algorithm above” in BIP 340.

nothingmuch: Deterministic nonce generation, my understanding is that it is not compatible with MuSig right?

PW: It is compatible but only in a trivially insecure way.

TR: This is a really interesting point. Maybe tomorrow I will talk about this, variants of MuSig, I can explain this in more detail. We talked a lot about deterministic nonces. If you look at ECDSA and Schnorr signatures this is what we read everywhere. It can horribly fail if you use real randomness because if the randomness breaks down it might leak your private key and so on. So please use deterministic randomness. We hammered this into the minds of all implementers. Now as you point out, if you go to MuSig it is exactly the other way round. If you use deterministic randomness it is horribly broken.

ET: What are your thoughts on synthetic nonces with MuSig?

TR: It doesn’t help you with MuSig. The idea of synthetic nonces as Pieter said, it is secure if either the real randomness is secure or the deterministic randomness works out.

PW: And in MuSig without variants you just need to have real randomness period.

TR: Synthetic nonces, you can do this but it is not better than using real randomness in the end. Because if the real randomness breaks down with the synthetic nonce you fall back to the deterministic one and then again it is insecure. I can show the attack on this tomorrow with slides. It is easier to explain. We are working on a paper, which has been accepted but is not public yet, where we fix this problem for MuSig. The way we fix this is by adding a zero knowledge proof where each signer in the MuSig session proves that he/she generated the nonce deterministically. Because in that case it works: you can choose your nonce deterministically if you are convinced that every other signer has chosen his/her nonce deterministically. Our simple solution to this is to add a zero knowledge proof. I can talk about this tomorrow a little more. It works but it is not a bulletproof solution. It removes that problem but it introduces a large complex zero knowledge proof that is slow and gives you another hundred possibilities to fail your implementation. No pun intended Elichai, we use bulletproofs but it is not a bulletproof solution.
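
A rough sketch of the attack Tim refers to, under the assumption of a two-party MuSig-style session in which the honest signer derives its nonce k deterministically from its key and the message: a malicious co-signer runs two sessions on the same message with two different nonces of its own, which changes the aggregate nonce and hence the challenge, while the honest k stays fixed, and the two honest partial signatures then leak the key.

```latex
% Honest partial signatures from the two sessions (same deterministic k, different challenges):
s_1 = k + e_1\,x, \qquad s_2 = k + e_2\,x, \qquad e_1 \ne e_2
% Subtracting eliminates k and recovers the honest signer's secret key:
x = (s_1 - s_2)\,(e_1 - e_2)^{-1} \bmod n
```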

ET: With that you again can’t use synthetic nonces and you use 100 percent deterministic ones.

TR: Exactly then you can’t use synthetic nonces. You need 100 percent deterministic nonces, indeed.

Bitcoin Core PR 17977 implementing BIP Schnorr

https://github.com/bitcoin/bitcoin/pull/17977

MF: The next item on the agenda is the actual PR in Bitcoin Core implementing BIPs 340-342. One smaller PR, 16902, was taken out of this monster Schnorr, Taproot PR and we looked at that at the Bitcoin Core PR Review Club. Are there any possibilities to take further PRs out of that monster PR? This was for easy reviewability I think more than anything.

PW: There have been a couple that have been taken out, opened and/or merged independently. I think the only obvious one that is remaining is merging support for Schnorr signatures first in libsecp and then in Bitcoin Core. On top of that I think the code changes are fairly modest of what remains.

MF: It is pretty much ready to be merged bar a few small things? Or just needs more review? What are the next steps for it to be merged assuming consensus?

PW: I think the key word is assuming consensus. It probably requires at least a vague plan about how it will be deployed.

MF: Is that the bottleneck now, the activation conversation? You think the code is pretty much there?

PW: I think the code is pretty much done. That is of course my personal opinion. Reviewers are absolutely free to disagree with that.

MF: There needs to be an activation conversation. We’ll leave that for another time.

TR: One thing I can add when talking about things that can be taken out of the PRs. The Schnorr PR to libsecp has just been simplified. Jonas, who is mostly working on it, has taken out a few things. For example we won’t have batch validation in the beginning. This wasn’t planned as a consensus change right now. You couldn’t even call it a consensus change. It is up to you as a verifier of the blockchain whether you use batch validation or not. At the moment the plan for Bitcoin Core wasn’t to introduce batch validation. We took that out of the libsecp PR to make the PR easier with smaller steps. Were there other things removed?

PW: I don’t remember.

TR: Batch verification is a very large thing because you want to have it be very efficient. Then you want to use multi-exponentiation algorithms which so far we haven’t used in libsecp. This would touch a lot of new code. The code is already there but at the moment it is not used. If we introduce batch verification then suddenly all this code would be used. We thought it was a better idea to start with a simple thing and not add batch verification.
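(For context, a rough sketch of the batch verification check along the lines of what the BIP 340 draft describes: for signatures (R_i, s_i) on messages m_i under keys P_i, the verifier picks random scalars a_i, with a_1 = 1, and checks one big multi-exponentiation, which is why efficient multi-exponentiation code matters here.)

$$\Big(\sum_{i=1}^{u} a_i s_i\Big) G \;\stackrel{?}{=}\; \sum_{i=1}^{u} a_i R_i \;+\; \sum_{i=1}^{u} a_i e_i P_i, \qquad e_i = H(R_i \,\|\, P_i \,\|\, m_i)$$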

libsecp256k1 library

https://github.com/bitcoin-core/secp256k1

MF: The crypto stuff is outsourced to libsecp. Perhaps Elichai you can talk about what libsecp does and what contributions you have made to it.

ET: The library is pretty cool. Originally I think it was made by Pieter to test some optimization trick in ECDSA but since then it is used instead of OpenSSL. There are a lot of problems with OpenSSL as Pieter said before. It is a C library that does only what Bitcoin Core needs for ECDSA and soon Schnorr too. I try to contribute whenever I see that I can. Adding fuzzing to it, fixing some small non-serious bugs, stuff like that.

MF: How did you build it to begin with Pieter? It is in C, the same as OpenSSL, did you use parts of OpenSSL to begin with and then build it out?

PW: No libsecp at the very beginning was written in C++. It was later converted to C to be more compatible with low powered devices and things like that. And improve things like static analyzability. It is entirely written from scratch. The original algorithms, some of them were inspired by techniques used in ed25519 and things I had seen in other implementations. Later lots of very interesting novel algorithms were contributed by Peter Dettman and others.

MF: You talked about this on the Chaincode Labs podcast. The motivation for working on this was to shrink the attack surface in terms of what it is actually doing from a security perspective. OpenSSL was doing a lot of stuff that we didn’t need in Bitcoin and there was the DER encoding problem.

PW: It is reducing attack surface by being something that does one thing.

libsecp256kfun

https://github.com/LLFourn/secp256kfun

MF: The next item on the reading list was Lloyd Fournier’s Rust library. Do you have any advice or guidance? This is just for education, toy reasons. In terms of building the library from scratch any thoughts or advice?

PW: I haven’t looked at it.

TR: I also only looked at the GitHub page. It is a shame he is not here. We discussed this paper about hash function requirements on Schnorr signatures. He actually has an academic poster in the write up on hash function requirements for Taproot and MuSig. This refers to the concern that Pieter mentioned. It is not only the security of Schnorr signatures, it is combined with all the things you build on top of it. If you want to look into alternative proofs for Schnorr signatures it is also interesting to look into alternative proofs and hash function requirements for the things you want to build on top of it. Lloyd worked on exactly that which is a very useful contribution.

ET: We were talking about the secp256kfun. I just want to mention that there is a Parity implementation of secp in Rust. I looked at it a year or so ago. They tried to translate the C code in libsecp to Rust. For full disclosure they did make a bunch of mistakes. I wouldn’t recommend using it in production in any way. They replaced a bitwise AND with a logic AND which made constant time operations non-constant time and things like that.

MF: Thanks for clarifying Elichai. Lloyd has said it is not for production. It is an educational, toy implementation.

ET: I wanted to note this as a lot of people have looked at it.

Different Schnorr multisig and threshold schemes

MF: Tim, you did say you might talk about this tomorrow. The different multisig schemes. MuSig, Bellare-Neven, the MSDL-pop scheme (Boneh et al) and then there are the threshold schemes. Who can give a high level comparison between the multisig schemes and then the threshold schemes?

TR: Let me try for the ones you mentioned. The first reasonable scheme in that area was Bellare-Neven because it was the first scheme in what you would call the plain public key model. I can talk about this tomorrow too. The problem with multisignatures is…. Let’s say I have a public key and Pieter has a public key. These are just things that we claim but it doesn’t mean that we know the secret key for them. It turns out that if we are not required to prove we know the secret key for a public key we can do some nasty tricks. I can show those tomorrow. For example if we do this naively I can create a public key that depends on Pieter’s public key. After I have seen Pieter’s key I can create a key for which I don’t know the secret key. If we add those keys together to form an aggregate key for the multisignature then it would be a key for which I can sign and only I. I don’t need Pieter. This defeats the purpose of a multisignature. This is called a key cancellation attack or a rogue key attack.

MF: Key subtraction? Different terms for it.

TR: The idea is so simple I can actually explain it. I would take the public key for which I know the private key, let’s say P. Pieter gives me his key P_1. I would claim now that my key is P_2 which is P - P_1. Now if we add up our keys P_1 + P_2 then the P_1 cancels out with the minus P_1 that I put in my own key. What we get in the end is just P. P was the key for which I know the secret key. Our combined key would be just the key for which I can sign and I don’t need Pieter to create signatures. Key subtraction is another good word for those attacks. Traditionally what people have done to prevent this, this works, you can do this. You give a zero knowledge proof together with your public key that you know the secret key for it. This prevents the attack that I just mentioned because P_2, which was P - P_1, is a key I don’t know the secret key for because it involves Pieter’s key and I don’t have Pieter’s secret key. This is not too hard in practice because giving a zero knowledge proof that you know a key is what Adam mentioned at the beginning. It is a zero knowledge proof of knowledge. It can be done by a signature. What you can do is take your public key and sign it with your secret key so your new public key is the original public key with your signature added to it. People need to verify this signature. This is the solution that people have used in the past to get around those problems. You mentioned MSDL-pop. MS for multisignature, DL for discrete log and pop is proof of possession which is the additional signature you give here. Bellare-Neven on the other hand introduced another simpler model called the plain public key model so you don’t need this proof of possession. This doesn’t make a difference in theory because you could always add this proof of possession, but in practice it is much nicer to work with simple public keys as they are now. You could even find some random public keys on the chain for example and create an aggregated key of them without any additional information or talking to the people. In this model our keys stay the same. They are just elliptic curve points. This was the innovation by Bellare and Neven, to make a multisignature scheme that is secure in this setting. The problem with their scheme is that if you want to use it in Bitcoin it wouldn’t work because this now assumes we have Schnorr signatures. Maybe I shouldn’t assume but I hope it will happen of course. A Bellare-Neven signature doesn’t look like a Schnorr signature. It is slightly different and it is slightly different to verify. To verify it you need the public keys of all signers who created it. In this example of Pieter and me, let’s say we create a signature. My key is P_2, Pieter’s key is P_1. If you want to verify that signature you need both P_1 and P_2. This already shows you that it doesn’t look like a normal Schnorr signature because in a normal Schnorr signature the public key is one thing, not two things or even more things depending on the number of parties. The additional property that we need here is called key aggregation. This was introduced by MuSig. This means Pieter and I, we have our public keys P_1 and P_2, and there is a public function that combines those keys into a single aggregated key, let’s call it P_12. To verify signatures, verifiers just need that single P_12 key. This is very useful in our setting. It is actually what makes the nice privacy benefits of Taproot work in the first place.
In this example if I do a multisig with Pieter there might be a UTXO on the blockchain that we can only spend together. It will be secured by this public key P_12. The cool thing here is that we don’t need to tell the world that this is a multisig. It just looks like a regular public key. If we produce a signature it will look like a normal Schnorr signature valid on this combined public key. Only the two of us know, because we set up the keys, that this is a multisig key at all. Others can’t tell. This is exactly what we need in a Bitcoin setting. This is what MuSig gives us.
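(In equations, the key subtraction attack and the MuSig-style fix sketched above look roughly like this, where H_agg is a hash over the list L of all participating keys; this is an informal sketch rather than the full MuSig specification.)

$$P_2 := P - P_1 \;\Rightarrow\; P_1 + P_2 = P \qquad \text{(naive key sum: the attacker alone can sign)}$$

$$\tilde{P} = H_{agg}(L, P_1)\,P_1 + H_{agg}(L, P_2)\,P_2 \qquad \text{(MuSig key aggregation defeats the cancellation)}$$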

MF: I saw a presentation from you Tim last year on a threshold signature scheme that used Shamir’s Secret Sharing. It was asked at the end why use Shamir’s Secret Sharing in that threshold scheme. Perhaps we’ll leave that tomorrow. This work is being done in an Elements fork of libsecp. Are you using Liquid at all? Or is it just positioned in that repository? You are not actually doing any experimentation or testing on a sidechain?

TR: MuSig is implemented in a fork of libsecp. It is owned by the Elements project. It is our open source fork of secp. We don’t have Schnorr signatures in Elements. At the moment that is not going to happen but it is in an experimental library where we play around with things. If this is ready people will be able to use it in other blockchains and Bitcoin too of course.

MF: You don’t have Schnorr in Elements? How are you doing schemes like MuSig without Schnorr? You are just forking secp within the Elements organization. Schnorr isn’t actually implemented in Elements.

TR: There was an old Schnorr implementation in an Elements version a few years ago. Greg says I’m wrong.

PW: Elements Alpha used to have Schnorr signatures but it was subsequently dropped after we discovered we wanted something that was secure against cancellation attacks. The Schnorr code that was in Elements Alpha was plain Schnorr without pubkey prefixing even. But it was dropped. It is not currently there.

TR: I can’t speak for Elements and Liquid because I am not so much involved in development there. I guess the plan is to wait until the Bitcoin PRs have stabilized where we are confident that we can implement it in Liquid. Confidence and hope that it won’t change much afterwards. We mostly try to be compatible with the core features of Bitcoin Core because this makes development easier. Of course we have the advantage that forks are maybe easier if you have a federation.

MF: So tomorrow Tim you are happy to talk about Schnorr multisig schemes because we went through them very quickly.

TR: That was the plan. I don’t have slides yet so I can do this.

Tim Ruffing’s thesis (Cryptography for Bitcoin and Friends)

https://publikationen.sulb.uni-saarland.de/bitstream/20.500.11880/29102/1/ruffing2019.pdf

MF: We haven’t even got onto your thesis. Your thesis has a bunch of privacy schemes. I assume they were implemented on Elements? Or are they just at the theoretical stage?

TR: I started an implementation of DiceMix under the Elements project in GitHub but it is mostly abandoned. It is kind of sad. It is my plan to do this at some point in my life. So far I haven’t done it.

AG: The whole BIP 32 with Schnorr thing. Can somebody give the one minute summary? The general idea is that everything should still be usable but there are some details to pay attention to. Is that right?

PW: Very short. If you do non-key prefixed Schnorr in combination with BIP 32 then you can malleate signatures for one key into another key in the same tree if you know the derivation paths.

AG: A beautiful illustration of why it makes absolutely zero sense to not key prefix. I like that.

PW: I need to add that even without key prefixing this is not a problem in Bitcoin in reasonable settings because we implicitly commit to the public key.

AG: Except for the SIGHASH_NOINPUT thing which was mentioned in the BIP. It doesn’t exist.

PW: Still it is a scary thing that has potential interaction with things so keep prefixing.

MF: Let’s wrap up there. Thank you to all those attending. Tim will present tomorrow. The topic is going to be Schnorr multisig and threshold sig schemes.

https://www.youtube.com/watch?v=uE3lLsf38O4

Pastebin of the resources discussed: https://pastebin.com/uyktht33

August 2020 update: Since this Socratic on BIP Schnorr there has been a proposed change to the BIP revisiting the squaredness tiebreaker for the R point.

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is London BitDevs, a Socratic Seminar. We are livestreaming on YouTube currently for those on the call. The plan is also to do a transcript of this conversation. The point is not to try to find gotchas with anything anybody says, this is an educational exercise. So don’t let that put you off. We can edit the video afterwards, we can edit the transcript afterwards. This is purely for education, this isn’t an authoritative resource on cryptography or any of the topics we will be discussing today. We have currently got a call on Jitsi going. Jitsi is free, open source and doesn’t collect your data. When it works it is great. It seems to be working so far. This is a Socratic Seminar in preparation for tomorrow when Tim Ruffing will be presenting. There will be a call tomorrow and there will be another livestream tomorrow with Tim Ruffing’s presentation. There will be a Q&A afterwards. Socratic Seminars, for those who haven’t been to one, they originated in New York. There is the site bitdevs.org to help you do Socratic Seminars if you haven’t been to one before. The reading list or the agenda, there is a Pastebin up. It is very much focused on BIP Schnorr. It is hard to get a clear dividing line between BIP Schnorr, BIP Taproot, BIP Tapscript. They do interact and they do inform the design and the code of each other. We will stray sometimes into Taproot but the focus is on cryptography and Schnorr in preparation for Tim’s presentation tomorrow. Maybe we will do a Socratic Seminar in a few weeks on Taproot. That could be good. We will start as we always do with intros. You don’t have to do an intro but if you want to there is a hand signal in the bottom left of the screen if you are on the call. If you click that I’ll be able to see that you have raised your hand and I will go to you. Your intro will be who you are, what you are working on, how much you know about Schnorr or what you would like to learn in this Socratic on Schnorr today. I do need volunteers. Does anyone want to click that hand button to get us going?

Pieter Wuille (PW): Hi. I’m Pieter. I do Bitcoin work at Blockstream and I am one of the co-authors of BIP Schnorr.

Adam Gibson (AG): I am Adam Gibson or waxwing on the internet. I do various coinjoin stuff. I have done some study of Schnorr and I did a talk on it in 2018 that is referenced.

Jonas Nick (JN): I’m Jonas. I work at Blockstream on Bitcoin research. I am also a co-author of BIP Schnorr. I am interested in learning how BIP Schnorr relates to Tim’s thesis if possible today.

MF: You may be disappointed. I do hope to cover Tim’s thesis at the end. There will obviously be lots of opportunity to ask him questions tomorrow.

Elichai Turkel (ET): I am Elichai. I work at DAGlabs and I contribute to rust-bitcoin and libsecp. I helped as much as I could for BIP Schnorr and I hope it will be merged soon.

Tim Ruffing (TR): I am Tim Ruffing. I also work at Blockstream. I am also a co-author of BIP Schnorr. I was impressed by the reading list. I didn’t plan to be here today but after this reading list I am really interested to be here because I don’t have slides yet for tomorrow. If everybody has read all the stuff then I don’t need to present anything. I’m mostly here trying to figure out what I should present tomorrow but I am sure I will find something.

MF: We can do a very long Q&A. You can just produce a few slides to get us going tomorrow.

Volker Herminghaus (VH): I am Volker. I am here for total immersive learning about cryptography. I have no credentials.

MF: That is totally fine.

Properties of a Digital Signature Scheme

MF: As in previous Socratics we will start from basics or from first principles. There are some pretty hardcore experts on the call today. Hopefully they will be patient for the first 10-15 minutes as we start from first principles. I do want people to get in the mindset of thinking what properties a digital signature scheme requires. Adam Gibson who is on the call did a very good presentation at London Bitcoin Devs, this is on the reading list, on the things a signature scheme requires. Some basic questions. Why is a signature scheme where the message is just multiplied by 5 insecure? Why is a hash function insecure? Why do we need a more complicated scheme and what do we need from our signature scheme in terms of security? If I want to send you a message that is “5” why is a digital signature scheme that multiplies that “5” by 7 not a sufficient signature scheme?

VH: It is reversible. You could just divide by 7 and you would get the original value. That wouldn’t be very secure.

MF: This is kind of the forgeability property.

AG: What you are describing here Michael is connected to the zero knowledgeness. Let’s not get technical, the privacy element. You want to prove that you own a credential or a key without revealing the key. In the example 5 x 7 if your message is 5 we are imagining that your key is 7. Not the best key ever but we work with what we’ve got. If your key is 7 and you multiply your message by that you can divide by the message to get the 7. The first property we are mentioning there is it should keep the confidentiality of the key. Technically this is the zero knowledge property.
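(The toy scheme being discussed, written out; this is of course not a real signature scheme, it just shows how the "key" leaks immediately.)

```python
key = 7                      # the "private key"
message = 5
signature = message * key    # "signing" by multiplication: 35

# Anyone who sees the message and the signature recovers the key by division.
recovered_key = signature // message
assert recovered_key == key
```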

MF: Let’s move on to, say, a hash function. Why would a hash function not be a sufficient signature scheme? If you just hash the message why is that not a sufficient signature scheme?

TR: A simple answer is that it doesn’t have a key. With a signature scheme what you really want is if you don’t have the secret key you can’t produce signatures. In the hash function where is the key at all?

AG: Why is a HMAC not a sufficient signature scheme?

PW: That is a better question.

MF: We are moving in the right direction. We need there to be a secret key and we need to make sure that somebody else can’t forge a signature. That secret key needs to produce a signature that nobody else can produce. In Adam’s presentation he was talking about the producer of the signature and the receiver of the signature. There are two parties and two concerns here. One, the receiver wants to know that the producer of the signature is the only one with that private key. The producer wants to know that if the producer sends that signature to the receiver, the receiver cannot do anything with that signature. This is leading into leakage: when you produce a signature, does it leak any information?

ET: I wouldn’t say that only the producer has the private key. I would say that only the producer can produce a valid signature. It can be in multiple different ways but the important part is that only the producer can produce a valid signature, not that only he has the private key.

PW: I think you define as producer as “he who has the private key”.

TR: I think Elichai has a good point here. Of course you want to make sure that if you give a signature and you use your secret key to create that signature you want to make sure that your secret key does not leak. That is how usual schemes do that. In the end the more basic property that you want is what Elichai just described. If you see a signature you shouldn’t be able to produce a different signature without having the secret key of course. I heard people talking about encryption and asking the same question. What should an encryption scheme provide? Sometimes people say that it should hide the secret key but this isn’t sufficient. It should hide the message. The secret key is the tool to hide the message. I think we can say the same here for signatures. Of course we don’t want to leak the full secret key because this would allow others to produce signatures. But you can ask questions like “What happens if you leak for example parts of the secret key?” There are signature schemes where this is fine. You may leak 1 bit, you leak some of the signature key but still people can’t produce signatures.

PW: The end goal is unforgeability. Someone without a key can’t produce a valid signature. A corollary is that obviously you cannot learn the full key from the signature.

ET: Part of my point was also in the multiplication example. You might be able to produce another valid signature without knowing the private key.

MF: I think we are ticking off most of the properties. The properties that Adam had in his presentation were completeness, soundness and zero knowledgeness.

AG: Those are the properties of a zero knowledge proof system. I don’t know if this is too theoretical but there is the concept of a signature as a zero knowledge proof of knowledge. This was a big part of what I was blathering on about in that talk. Whether that is a good framework…. clearly with digital signatures, there is a bit more to it than that. There are multiple different security levels. I think I did mention this, there is a complete break which is when a signature reveals your key. Then the most strict notion of security is something like unforgeability under chosen message attack, something like existential unforgeability. The idea that not only can you not grab the key from the signature but you can’t produce other signatures on the key without having the key. This is what Elichai was just talking about. You can’t do that even if you’ve got a chance to have something called an oracle. You can get as many signatures as you like on different messages from an oracle. Then you can’t take any of those signatures, or some combination of them, and produce a new signature on a message that hasn’t previously been signed. The other guys will probably correct me right now.

TR: I think this was pretty precise actually. A nice example appeared last week on Twitter. I don’t know if people heard of this new SGX attack, break whatever. I don’t know a lot of the details but apparently the guys who wrote the paper about that attack, they extracted a signing key from SGX. They implemented such an oracle in a sense on Twitter. It is a Twitter bot and you can send it messages. It will give you signed messages back. It will sign messages for you. What Adam just explained, unforgeability under chosen message attack. You can ask this Twitter oracle for signatures of any message and it will reply with exactly this. It will sign any message you want. But even if you have seen as many signatures as you want you can’t produce now a signature on a message that hasn’t been signed by this oracle. This is a super strong notion. You can ask the guy with the secret key to sign everything apart from one message and still you can’t produce a signature for this one message.

PW: Then there is the notion of strong unforgeability which is even stronger. Even if you have asked for a signature on a particular message and got that from an oracle, you cannot produce another signature on the same message.

MF: The signer cannot produce another signature on the same message? Or an attacker can’t produce a signature on the same message?

PW: An attacker should not be able to produce another valid signature on a message even if they have already seen a valid signature on that message. This is related to non-malleability.

MF: That is another property we need in Bitcoin that perhaps you don’t need in a normal digital signature scheme where there isn’t money and a consensus system with signatures being shared all over the place.

PW: People are describing unforgeability under chosen message attack as a very strong property. But there is a property literally called strong unforgeability which is even stronger.
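(A rough pseudocode sketch of the unforgeability games just described; keygen, sign and verify are hypothetical placeholders standing in for any signature scheme, and the adversary is given a signing oracle it can query as often as it likes.)

```python
def unforgeability_game(keygen, sign, verify, adversary, strong=False):
    pk, sk = keygen()
    seen = set()  # (message, signature) pairs returned by the oracle

    def oracle(msg):
        sig = sign(sk, msg)
        seen.add((msg, sig))
        return sig

    msg, sig = adversary(pk, oracle)
    if not verify(pk, msg, sig):
        return False  # the forgery must at least verify
    if strong:
        # Strong unforgeability: even a *new* signature on an already queried message counts.
        return (msg, sig) not in seen
    # Existential unforgeability under chosen message attack: the message must be fresh.
    return msg not in {m for (m, _) in seen}
```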

MF: I think we are going to lose the beginners very quickly. We use a signature in Bitcoin because we need to prove that we own a private key without actually telling the world what the private key is. When we publish that signature we don’t want to leak any information. If we leak any more than just one bit then somebody is potentially able to create a signature through brute force without knowing that private key. There has to be no leakage when we publish that signature onchain or in a consensus system.

AG: It was half a joke at the start. But since we are still in this beginner area I think it might actually be quite important for people to understand this. Why isn’t HMAC a digital signature? To be clear, a hash function doesn’t have a key but with a HMAC you take a hash function and you put in a secret key as part of the message that you are hashing. It is important to reflect on why doesn’t that count as a digital signature scheme in the sense that we are talking about. I can’t produce a HMAC on the message “Hello” without knowing the corresponding secret for it. HMACs are used for example in TLS and various other protocols to ensure integrity of the message going from one party to the other. Why doesn’t that also count, just like Schnorr or ECDSA, as a digital signature? Just having a key like 100 random bytes along with your message and then just hashing it.

VH: I think the reason why a hash function can’t be used as a signature is because it is impossible to verify the signature without knowing the original message.

PW: Well, no. I don’t think it is required that a signature scheme should be able to verify it without having the message. The problem is that you can’t verify a HMAC without knowing the key. Inherently you need to give the secret key to the verifier in that case. That obviously breaks the unforgeability.

TR: This explains why HMAC is used for TLS connections because they are connections for two defined peers, the two endpoints of the connection. They both can have the key. If I haven’t produced this MAC message then there is only one other party in the world that has the key which is the other side of my connection. I know the other side has produced the message.

PW: You can see a HMAC as an analog of a digital signature scheme but in the symmetric world where you always have shared keys or passwords instead of a secret public, private key pair.
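(A minimal illustration of the point: verifying an HMAC means recomputing it, so the verifier needs the same secret key and could equally well have produced the tag. The key and message here are made up for the example.)

```python
import hashlib
import hmac

shared_key = b"secret shared by the two endpoints"  # hypothetical shared key
message = b"Hello"

tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# The verifier recomputes the tag with the same key and compares in constant time.
valid = hmac.compare_digest(tag, hmac.new(shared_key, message, hashlib.sha256).digest())
# Whoever can verify can also forge, so there is no public verifiability as with Schnorr or ECDSA.
```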

MF: We’ll move on. The conclusion to that is this is very complicated and there is a reason why there is the phrase “don’t roll your own crypto” because constructing a digital signature scheme that is secure and does everything you want especially in a complex system like Bitcoin where signatures are shared all over the place and shared on the blockchain is hard.

(For more information on signature scheme security properties see this talk from Andrew Poelstra at MIT Bitcoin Expo 2019)

Satoshi’s digital signature choices

MF: Let’s move onto Bitcoin history. In 2009 Satoshi made some choices. Satoshi used ECDSA for the digital signature algorithm. He or she used secp256k1 for the elliptic curve. And he or she used the OpenSSL library for doing the digital signing and the cryptographic operations within Bitcoin. Any thoughts on those choices? Why he chose them, were they good choices?

PW: I’m sure all of us are mind readers and can guess his or her intentions.

MF: In retrospect I think there is general consensus that OpenSSL was a good choice. The elliptic curve was perhaps a strange choice, not one that was widely used. ed25519 was more commonly used. No? Please correct me.

ET: I don’t think so. I don’t think it existed back then. ECDSA was a common choice. The exact elliptic curve was not.

AG: Perhaps secp256r1.

ET: I think secp256r1 was the most common one but I’m not sure.

PW: NIST P-256 is the same as secp256r1

TR: The choices at this time, if you don’t want to invent your own stuff, the two possible choices were elliptic curves and RSA. Satoshi didn’t like RSA signatures because they are large. On blockchains we optimize for size. I guess this was the reason why he went for elliptic curves.

PW: In retrospect OpenSSL was a bad choice. We have had lots of problems. I do say in retrospect. I certainly don’t fault anyone for making that choice at the time. Since Tim brings up RSA there is a theory. The maximum size of pushes in Bitcoin script is 520 bytes. It has been pointed out that that is exactly the maximum size of a 4096 bit RSA signature in standard serialization. Maybe that was a consideration.

ECDSA and Schnorr

https://diyhpl.us/wiki/transcripts/chaincode-labs/2019-08-16-elichai-turkel-schnorr-signatures/

MF: This is where I wanted to start in terms of the reading list. I am going to share Elichai’s presentation at Chaincode. This is probably the best concise resource that I could find explaining the differences at a high level between ECDSA and Schnorr. Let me share my screen. Perhaps Elichai you can talk about the differences between ECDSA and Schnorr and summarize the points in your presentation. What I really liked about your presentation was that you had that glossary on every slide. As you say in the presentation it is really hard to keep monitoring what letters are which, which are scalars, which are points etc.

ET: Thank you. I really tried to make it as understandable as possible for non-mathematicians. Basically the biggest difference between ECDSA and Schnorr is that, as you can see on the second line of my presentation, in ECDSA we use the x coordinate of the point kG, which is weird. In Schnorr for example there is a clean separation between point operations and the scalar arithmetic you do for verification and signing. You can see at the bottom in Schnorr you take a bunch of scalars, you multiply, you add. It is normal modular arithmetic. In ECDSA you create a point and you take the x of that point. You use that like a scalar which is really weird. It is part of the reason why there is no formal proof for ECDSA although a lot of people have tried.
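(For reference, the two signing equations over secp256k1 with group order n, secret key x, nonce k and message m, roughly as in the slides being discussed; the ECDSA equation reuses the x coordinate of the point kG as a scalar, the Schnorr one does not.)

$$\text{ECDSA:}\quad r = (kG)_x \bmod n,\qquad s = k^{-1}\,(H(m) + r\,x) \bmod n$$

$$\text{Schnorr (BIP 340 style):}\quad R = kG,\qquad e = H(R \,\|\, P \,\|\, m),\qquad s = k + e\,x \bmod n$$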

PW: It depends what you mean by formal proof. There are proofs, just not under the standard assumptions of discrete logarithm and random oracle. In particular if you add an additional assumption about taking the x coordinate from a point, and assume some properties about that, then it is provable.

ET: Have those properties been analyzed?

PW: That is the point with assumptions.

ET: For example, the discrete log even though it is an assumption it has been analyzed for a lot of years.

PW: It is a much less common assumption. That is a very reasonable criticism. Saying there is no proof, there is a proof. It is just not one under very common assumptions.

ET: I assume that proof isn’t new. Last time I Googled for it I didn’t find it.

PW: It is linked in the BIP.

ET: There is some proof but as Pieter said the proof assumes something that wasn’t analyzed as thoroughly as discrete log. The reason behind it is probably a combination of the patent on Schnorr and some weird politics in NIST. There are some conspiracies around it but we will ignore those. We do believe that ECDSA is working and isn’t broken. There is no reason to believe otherwise. It is still nicer that Schnorr doesn’t do this weird thing. On top of the actual proof, because Schnorr doesn’t do point operations in the signing it is linear, meaning that you can add signatures, you can tweak signatures and you still get something that is valid in the framework of the signing.
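(The linearity being referred to, loosely: if two signers share the same challenge e, computed on the combined nonce and combined key, their partial signatures simply add up. Real protocols such as MuSig add key aggregation coefficients and careful nonce handling on top of this to make it safe.)

$$s_1 = k_1 + e\,x_1,\quad s_2 = k_2 + e\,x_2 \;\Rightarrow\; s_1 + s_2 = (k_1 + k_2) + e\,(x_1 + x_2)$$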

Benefits of Schnorr for Bitcoin

MF: Why do we want Schnorr? What does Schnorr bring us in the Bitcoin ecosystem?

ET: Obviously there is a proof. Probably a lot of non-mathematicians don’t care about the proof. We also have a lot of things in Bitcoin Core that we didn’t prove and we will never be able to prove. It is still an improvement. Also there is the whole linearity of this. Technically you can do Taproot with ECDSA but a lot of things like MuSig, sign-to-contract you cannot do with ECDSA easily although there are improvements and new papers on that.

MF: To summarize, an improved security proof compared to ECDSA. Linearity. Schnorr signatures are also a little smaller, not a massive deal.

ET: Getting rid of DER is a very good thing. Aside from the size which is an improvement in Bitcoin where we need to save those bytes forever. The exact encoding isn’t fun. In the past there were a lot of bugs that were found by some people here.

Nadav Kohen (NK): I just wanted to point out that along with all the things that have already been mentioned Schnorr signatures also have the nice property that you can compute sG, the point associated with the signature, from public information only. This enables a lot of fun schemes. ECDSA doesn’t have any equivalent to that as far as I’m aware. With Schnorr and pre-committed R values, if you know the R value, the public key and the message then you can compute sG from just that public information ahead of time without knowing what the signature is. I am referring to the discreet log contract paper more or less.
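(This property follows directly from the verification equation: for a Schnorr signature (R, s) on message m under key P, anyone who knows R, P and m ahead of time can compute the point sG before the signature exists.)

$$sG = R + H(R \,\|\, P \,\|\, m)\,P$$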

stefanwouldgo: Doesn’t Schnorr have the strong unforgeability property Pieter mentioned before?

TR: When we mentioned there was a security proof on better assumptions, it is also the case that Schnorr has strong unforgeability built in whereas ECDSA does not. For ECDSA the only reason why it is not strongly unforgeable is that you can negate one of the components in the signature. If you see a signature you can negate that component and it is still valid. Given a signature you can produce one other valid signature. This can be avoided by just checking that one of the numbers is in the right range. It is not strictly an improvement. You can do the same for ECDSA. You can make it strongly unforgeable.

PW: Given that we already as a policy do that in Bitcoin, it is the low s rule. With the low s rule and under these additional assumptions it is also possible to prove that ECDSA is strongly unforgeable.
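(Roughly why the negation works: ECDSA verification only uses the x coordinate of the recomputed point, and a point and its negation share the same x coordinate, so (r, s) and (r, n - s) verify identically. The low s rule simply insists on the smaller of the two s values.)

$$r \stackrel{?}{=} \Big(H(m)\,s^{-1}\,G + r\,s^{-1}\,P\Big)_x \bmod n \;\Rightarrow\; (r, s)\ \text{valid} \iff (r,\, n - s)\ \text{valid}$$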

TR: I think the same comment applies to the DER encoding. It is just an encoding issue. Now we move to Schnorr signatures we also fix some other things that are annoying. But you could also encode ECDSA in a nicer way.

PW: I think it is pointed out in the BIP that there are a number of improvements that we can make just because we are switching to a new scheme. We can clean up a bunch of things which includes batch verification. Due to Bitcoin’s use as a consensus system we not only have the requirement that people without a private key cannot produce a valid signature. We also need the property that someone with the private key cannot produce a signature that is only valid to some verifiers but not all. This is a very unusual property.

AG: A little side note that might be of interest or amusement to some people. A few moments ago Elichai was talking about the issue of why we didn’t have Schnorr in the first place. We had a patent. He also mentioned that there were conspiracy theories around the NSA. I found an interesting historical anecdote about this topic. Koblitz (the k in secp256k1), an elliptic curve cryptography guy wrote in a paper about the situation in 1992 when the standard for ECDSA was proposed. He wrote “At the time the proposed standard which soon after became the first digital signature algorithm ever approved by the industrial standards bodies encountered stiff opposition especially from advocates for RSA signatures and from people who mistrusted the NSA’s motives. Some of the leading cryptographers of the day tried hard to find weaknesses in the NIST proposal. A summary of the most important objections and the responses to them were published in the crypto 92 proceedings. The opposition was unable to find any significant defects in this system. In retrospect it is amazing that none of the DSA opponents noticed that when the Schnorr signature was modified the equivalence with the discrete logarithm was lost.” It is an incredible quote. They thought the NSA was up to no good. None of them noticed the most basic obvious fact which was that as we have just been describing in order to try to convince yourself that ECDSA is as secure as Schnorr in terms of being equivalent with the discrete log (ECDLP) we have to jump through hoops and write complicated papers like this paper by Fersch which I think is referred to in the BIP. It is amazing that everyone dreamt up these conspiracy theories but no one noticed that the discrete log equivalence was lost.

TR: This was a historical thing?

AG: It was the crypto 1992 proceedings.

TR: When was the first proof? 1997 I think. The Journal of Cryptology paper says received in 1997. Provable security of Schnorr wasn’t a thing at that time.

AG: Are you talking about Pointcheval and Stern? There was no concept of a reduction before that?

TR: No one figured out how to prove Schnorr signatures are secure? I don’t know, I need to check. Maybe I am wrong.

AG: It seems to me that if anyone had even thought about the whole business of nonce reuse then it is almost the same fact. The reduction is almost directly the same thing as the fact that nonce reuse breaks the signature scheme right?

TR: If you are familiar with Fiat-Shamir then it is super weird to remove the nonce from the hash. ECDSA only hashes the message, not the nonce together with the message. This is what creates the problem in the security proof.

MF: Anything else on why we want Schnorr? We’ve covered everything?

TR: It was mentioned but not really stressed. In general it is much easier to build cryptography on top of Schnorr signatures. There are so many things you can imagine and things we can’t imagine right now. At the moment we are looking into multisignatures, threshold signatures, blind signature variations. Andrew Poelstra’s scriptless scripts. All of these things are usually much easier to build on top of Schnorr because you have these nice algebraic properties that Schnorr preserves. The point of ECDSA basically was to break those algebraic properties. There is nothing definitive you can say about this but in general the math is much nicer.

NK: Essentially building anything on top of ECDSA that you can build on top of Schnorr requires zero knowledge proofs to fill in the lost properties which adds a lot of complexity.

TR: Or other things like encryption. With Schnorr signatures we have very easy constructions for multisignatures. If you have n parties who want to sign… There was work involved and people got it wrong a few times but we have it now and it is simple. For ECDSA even if you go for only two party signatures, 2-of-2, you can build this but it is pretty ugly. You need a lot of stuff just to make that work. For example you need to introduce a completely different new encryption scheme just to make the 2-of-2 signatures work.

Design considerations for the Schnorr implementation

MF: Hopefully that was a convincing section on why we want Schnorr. I am going to ask some crazy questions just for the sake of education and discussion. What would Taproot look like without Schnorr? You wouldn’t be able to have the default spend be a multisig but you could still have Taproot with ECDSA and the default case just be a single sig spend. Is that correct?

PW: You could but I think that would be pointless. I guess if it was a 1-of-n it would still make sense but apart from that.

MF: I suppose multisig is just so big. You could still have the tree with different scripts on it, with different multisigs on each leaf in that tree but it would just be way too big.

PW: You would just have MAST. Taproot without a default case, you have MAST.

MF: We won’t go into Taproot today, another time. That was the case for why we want Schnorr in Bitcoin. I am sure Pieter and Tim early on went through this thought process. But for the people such as me and some other people on the call what are the different ideas in terms of implementing Schnorr. Let’s say Schnorr is a good idea. What are the different ways we could implement Schnorr? I’ll throw some suggestions out there. You could have an OP_SCHNORR opcode rather than a new SegWit version. Or you could have a hard fork, I read this but I didn’t understand how it would work, such that previous ECDSA signatures could be verified using the Schnorr signature verification algorithm. Any other ideas on how Schnorr could have been implemented?

AG: When I heard that they were doing this, the thoughts running through my head were what is the serialization? How are the bytes serialized? How are the pubkeys represented? You probably remember that there was the possibility in the past of having uncompressed pubkeys and compressed pubkeys. A public key is a point on a curve but you have to somehow convert that into bytes. Are you printing out the x coordinate or the x and y coordinate? That question is interesting because of the nature of elliptic curve crypto. You have a y^2 = f(x) which means that you will always have this situation where both y and -y are valid solutions for any particular x. So how do you serialize the pubkey? Then there is the question of how you print out the Schnorr signature. You probably remember from my talk in 2018 this idea that you have a commit, challenge, response pattern. The challenge ends up being a hash value which we often write as e in these equations. The question is do you publish s, the response, and e, the challenge? Or do you publish s and the commitment R? Both of those turn out to work fine for verification. As I’m sure several people on this call can explain in great detail there was a decision made to do one rather than the other. Those were the questions that came into my head.
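(The two serializations mentioned correspond to two equivalent verification equations; BIP 340 went with publishing (R, s), which among other things is the form that allows batch verification.)

$$\text{with } (R, s):\quad sG \stackrel{?}{=} R + H(R \,\|\, P \,\|\, m)\,P$$

$$\text{with } (e, s):\quad R' = sG - e\,P,\quad e \stackrel{?}{=} H(R' \,\|\, P \,\|\, m)$$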

TR: I wasn’t involved back in 2016 but let me correct one thing that you mentioned that you read. This idea that you could do a hard fork where you use Schnorr verification to verify the existing ECDSA signatures. You said you didn’t understand. This won’t work but maybe you meant something else. What Bitcoin Cash did is they did a hard fork and they reused the keys. A Schnorr public key is the same as an ECDSA public key and the same for the secret key. They are just elliptic curve points on the secp256k1 elliptic curve. You can take an existing ECDSA key, a point on the curve, and reuse it as a Schnorr key. I think this is what Bitcoin Cash did in their hard fork. I’m not sure if they turned it off, I think they turned off ECDSA and said “From now on you can only use Schnorr signatures.” If you have existing UTXOs that are protected by ECDSA keys those keys will be now considered Schnorr keys. This is one way you can do this. It is a weird way.

ET: You are saying because the scriptPubKey contains a public key they just say “From now on you need a Schnorr signature to spend it.”

TR: Exactly. Perhaps later I can say why I think this is not a great idea.

ET: At the very least it violates separation of concerns. You don’t want to mix multiple cryptographic schemes with the same keys. I don’t think there is an obvious attack if you use different nonce functions.

PW: Arguably we are doing that too for the sake of convenience, reusing the same keys. Given that the same BIP 32 derivation is used.

MF: One thing Pieter said in his Scaling Bitcoin 2016 presentation was that you couldn’t just slot in Schnorr as a replacement for ECDSA because of the BIP 32 problem. This is where you can transmute a signature under one child key into a signature under another child key if you know the master public key.

“It turns out if you take Schnorr signatures naively and apply it to an elliptic curve group it has a really annoying interaction with BIP 32 when used with public derivation. If you know a master public key and you see any signature below it you can transmute that signature into a valid signature for any other key under that master key.” Pieter Wuille at Scaling Bitcoin 2016
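(In equations, the interaction described in the quote, assuming a Schnorr variant whose challenge does not commit to the public key: unhardened BIP 32 child keys differ by a publicly computable tweak t, so a signature for one key can be shifted into a signature for another.)

$$P' = P + tG,\qquad e = H(R \,\|\, m),\qquad s' = s + e\,t \;\Rightarrow\; s'G = R + e\,P'$$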

NK: Nonce generation seems like a pretty big design choice. How we have deterministic nonce generation plus adding in auxiliary randomness.

MF: The people who were involved back then, I don’t know exactly when, maybe it started 2013, 2014 but it certainly seemed to get serious in 2016 when Pieter presented at Scaling Bitcoin on it. Were there any big changes in terms of your thinking how Schnorr should be implemented? I know Taproot only came along in 2017 so you had to integrate the thinking on the Schnorr implementation with Taproot. We will go onto the multisignature stuff later.

PW: The biggest change I went through was thinking of Schnorr primarily as a scaling improvement. I was envisaging it to be part of a cross input aggregation scheme where we would aggregate all signatures in an entire transaction into one. With the invention of Taproot that changed to a much simpler per-input technique aimed at privacy improvement.

ET: I think I remember you tweeting about it a while ago, the fact that you see Schnorr as doing cross input aggregation and maybe one day BLS or something like that doing cross block aggregation.

MF: So Taproot and perhaps changing priorities on what Schnorr use cases were possible. Cross input aggregation isn’t in BIP Schnorr or in a potential soft fork in the coming months or years. There is a lot of work, as I understand it, to get to a point where that could be introduced to Bitcoin.

Dangers of implementing Schnorr too early

MF: What could go wrong? Let’s say that there had been a Schnorr soft fork two, three years ago with the current thinking. Pre-Taproot, an OP_SCHNORR opcode was introduced. Perhaps Schnorr was introduced before some of the security concerns of multisignature schemes were discovered. What could’ve gone wrong if we had implemented Schnorr two, three years ago?

AG: It is a science fiction argument I suppose but what if people started making signature aggregation just by adding keys together?

PW: Whenever we are talking about hypothetical scenarios it is hard to know exactly what we are talking about. If it was just an OP_SCHNORR that was introduced and nothing more I don’t think there would be much risk. It would also not be particularly interesting. You wouldn’t get the cross input aggregation, you wouldn’t get Taproot. You would have the ability to do efficient multisig and threshold signatures within one input. Before we spend a lot of time thinking about all the ways that can go wrong, MuSig and its various iterations, there is certainly risk there.

NK: You would’ve still gotten adaptor signatures and that would’ve made Lightning nicer maybe. We will still get there.

VH: Because the Schnorr implementation that is supposed to come into Bitcoin eventually will be pay-to-public-key addresses which means that if there is a quantum computer eventually that could break a lot of things because it is no longer hashed. It is just a public key that you pay to.

NK: It is still inside of SegWit. It is still going to be a witness pubkey hash. I could be mistaken.

PW: There is no hash.

AG: We are talking about Taproot here not Schnorr per se.

PW: That’s right. The Taproot proposal puts a tweaked public key directly in the output. There are no hashes involved.

NK: Does the tweak save this scenario at all?

PW: I would argue there is no problem at all. I believe the argument that hashes protect against a quantum computer is cargo cult. If ECDLP is broken we have a problem period. Any interesting use beyond simple payments and that includes BIP 32 derivation, multisig, Lightning, various other protocols all rely on already sharing public keys with different parties. If we believe that computing a private key from a public key is easy for an attacker we shouldn’t be doing these things. The correct approach is thinking of in the long term how we can move to an actual post quantum secure scheme and not lie to ourselves that our current use of hashes is any significant improvement.

MF: That is a conversation you have had to rehash a lot it seems. Watching from afar.

(For more information on why hashing public keys does not actually provide much quantum resistance see this StackExchange post from Andrew Chow.)

AG: Deliberate pun Michael?

MF: There is a question on the YouTube from Carel. What are the trust vectors within elliptic curves and how does it compare to Schnorr? How do the parameters compare?

NK: They are the same right?

MF: Yes. Greg has answered those in the YouTube chat. “The BIP 340 proposal uses exactly the same group as Bitcoin’s ECDSA (specifically to avoid adding new trust vectors but also because the obvious alternatives aren’t big improvements)”

Dangers of implementing Schnorr today

MF: This was the roadmap in 2017 in terms of Schnorr signature aggregation. Let’s have a discussion about what could go wrong now. We have this Schnorr proposal. We have the BIP drafted, the code is pretty much ready, there still seem to be some minor changes. What could go wrong if BIP Schnorr and BIP Taproot were merged in, say, tomorrow? What would we be having nightmares over?

NK: You mean the code or a soft fork?

MF: It needs to be merged in and then there would be a soft fork. What could go wrong? This is linking to the discussion with Pieter earlier where I said “What would’ve happened if it had been implemented really early?” Pieter said “There wouldn’t have been much of a problem if say there had been an OP_SCHNORR opcode back in 2017”. But we don’t want to have things introduced to Bitcoin where there are lots of opcodes with funds locked up using those opcodes. I think Andreas (Antonopoulos) has called this crud.

(See the end of this presentation from Andreas Antonopoulos at SF Bitcoin Devs on Advanced Bitcoin Scripting)

MF: We want to get it right first time. We don’t want to get things wrong. We want to have sorted out the multisig schemes and make sure Taproot integrates perfectly with Schnorr. We don’t want to rush things in. What could go wrong?

PW: I think my biggest concern is in how things end up being used. Just because Schnorr makes it easier to build more complicated constructions on top doesn’t mean they all interact well and securely. We have already seen many unexpected pitfalls. The two round MuSig scheme, for example, was proven insecure. We went from thinking we had a proof, to someone finding a mistake in the proof, to someone finding an actual break. All these things were discovered before deployment. I think the process is working here. There is certainly risk that people will end up using much more experimental things on top. Nonce derivation or anything that introduces more interaction at the wallet signing level introduces risk I think. At the same time it comes with huge benefits.

NK: I would also like to note that there is at least some risk of that happening on ECDSA as well now that there are a lot more feasible schemes built on top of ECDSA that are worse than the Schnorr versions.

PW: Absolutely. I think these same concerns apply to things like 2 party ECDSA, much more so in fact.

MF: We will go onto this later but do we need to have those multisig schemes bulletproof and be really confident in those multisig schemes before we merge Schnorr in? Or can we potentially make those secure in the future after a Schnorr soft fork?

TR: It is a good question. Let me circumvent the question and just say what I believe the current state is. For multisignatures we are pretty good. We have MuSig, it is there, it has a security proof. People have looked at it. I am pretty confident. It could go wrong but I hope not. Multisig is n-of-n, threshold is t-of-n where t could be different from n. The situation is already a little bit more involved. I think when we started to write the BIP we thought this is a solved problem. Then it turned out that it is harder than we thought. There are some schemes from the literature that are decades old. We thought that if people worked on this in the 1990s and there was no follow up work that pointed out issues this is probably fine. Then I looked a little bit into it, Jonas looked into it and we figured out some issues. For example those schemes assume a lot of things that are weird in practice. We changed the wording in the BIP now to be a little bit more careful there. This for sure needs more research. One thing we mention in the BIP is blind signatures and they require more research. On the other hand I think this is not very important. Going back to your question we can certainly live without a working blind signature scheme. If you really want to have blind signatures that end up on the blockchain and are valid Bitcoin signatures this is a pretty rare use case. For threshold it would be nice to have a scheme ready when Schnorr is deployed. But on the other hand I don’t think it is a dealbreaker. I share the concern of Pieter that people will build crazy things on top of it. On the other hand I see that with a lot of features that have been introduced into Bitcoin the usage is not super high so far. It is not like we introduce Schnorr and the next day after the soft fork everybody will use it. It is a slow process in the end anyway.

MF: In a worst case scenario where there was a bug or things that had to be changed after it was deployed I suppose it depends specifically what the bug was and exactly what the problem was.

TR: What Pieter mentioned in that sense is maybe not the worst case scenario. This would be a scheme on top of Schnorr failing and you could just stop using this. It is not nice but in this case you hopefully don’t need to fix the consensus code. Whenever you need to do a hard fork because of a bug in the consensus code it is a nightmare scenario.

PW: I think that is good to point out. Almost all the risk is in how things end up being adopted. Of course there could be actual bugs in a consensus implementation and so on but I think we are reasonably confident that that isn’t the case. That is not a very high bar. Imagine the Schnorr scheme we introduce is just trivially insecure. Anyone can forge a signature. That isn’t the risk for the consensus system in the sense that if nobody uses it there is no problem. Today you can send to OP_TRUE, that is also insecure. Everything depends on the actual schemes and how wallets adopt things. These were extreme examples obviously.

TR: I was surprised by your comment Pieter. That your concern is people build crazy things on top of it. This is always what I fear, coming more from a theory background. When we have Schnorr signatures you can do this fancy construction that I wrote ten minutes ago. You get all these nice properties but you need a security proof. How does it work? Writing a paper and getting peer review takes two years. I am always on the side of being cautious and try to slow down things and be careful. I fully agree with Pieter. We should be careful. Just because a scheme seems to work doesn’t mean it is secure.

TR: One point that we haven’t covered that much is batch verification. That’s a big one. This is actually something where Schnorr is better than ECDSA because we just can’t do this with the current ECDSA.

PW: With a trivial modification you could. It wouldn’t be ECDSA anymore. It is not just an encoding issue. But with a very small change to ECDSA you could.

AG: What would that be?

PW: You send the full R point rather than only the x coordinate of R modulo the curve order. That’s enough. Or what we call a compact signature that has an additional two bits to indicate whether R_x overflowed or not and which of the two to pick. There is a strong analogy between batch verification and public key recovery.

MF: Your answer surprised me Pieter because as a protocol developer or someone who cares about the protocol and the network we don’t care about people building things on top and losing money. What we are worried about is funds being locked up in some bug ridden opcode etc and everyone having to verify that bug ridden opcode forever.

PW: I don’t know what the distinction really is. There are two concerns. One is that the consensus code is buggy and the network will fork. Another is an insecure scheme gets adopted. But the second question is much more about, especially if you go into higher level complex protocols, wallet level things than consensus code. Of course if our Schnorr proposal is trivially insecure and everybody starts adopting it that’s a problem. Maybe I am making a separation between two things that don’t really matter. Just proposing something that is possible with new consensus code, if nobody adopts it there is no problem. There are already trivially insecure things that people could be adopting but don’t. People are not sending to an OP_TRUE or something. Perhaps you are talking about technical debt in stupid opcodes that end up never being used?

MF: I suppose that is one of the concerns. Crud or lots of funds being locked up in stuff that wasn’t optimal. It would’ve been much better if things that were optimal were merged in rather than rubbish.

PW: That again is about how things are used. Say Taproot goes through today as it is proposed and gets adopted but everyone uses it by putting existing analogs of multisig scripts inside the tree and don’t use those features. That is a potential outcome and it would be unfortunate but there isn’t much you can do as a protocol designer.

ET: I believe you mentioned the distinction between a consensus failure and upper level protocol failures. I think they can be just as bad. Let’s say Schnorr didn’t interact well with BIP 32 and we didn’t notice that. All the wallets would deploy Schnorr with BIP 32 and a lot of funds would get stolen. This is at least as bad as funds being locked up because of consensus.

PW: Absolutely there is no doubt about that. But it depends on it being used. That is the distinction.

MF: You only need one person to use it and then everyone has to care about it. Is that the distinction? Or are you talking about lots of adoption?

ET: If there is a bug in the altstack for example it wouldn’t be the end of the world in my opinion because as far as I know almost no one uses it. You could just warn everyone against using it and use policy to deny it but I don’t think it would break Bitcoin.

Hash function requirements for Schnorr signatures

http://www.neven.org/papers/schnorr.pdf

MF: So on the reading list there is this paper by Neven, Smart and Warinschi. Who has read this paper? Would anyone like to discuss what the hash function requirements are for Schnorr signatures? How has this paper informed how hash functions are used for Schnorr in Bitcoin?

TR: If I remember this paper has a deeper look into possible alternative security proofs for Schnorr signatures. The normal security proof for Schnorr signatures models the hash function as a random oracle which is what people in crypto do because we cryptographers don’t know how to prove stuff otherwise. At least not very efficiently. We invent this model where we look at the hash function as an ideal thing and then we can play ugly tricks in the security proof. When I say ugly tricks I really mean that. The only confidence that we have in the random oracle model is that it has been used for decades now and it hasn’t led to a problem. We know it is weird but it has never failed us so I believe in it. It is already needed in Bitcoin so it is not a problem to rely on it because it is used in other places in Bitcoin.

PW: I think this is something that few people realize. How crazy some of the cryptographic assumptions are that people make. The random oracle model, a couple of months ago I explained to someone who was reasonably technical but not all that into cryptography how the random oracle DL proof works. His reaction was “That is terrible. That has nothing to do with the real world.” It is true. I won’t go into the details here but it is pretty interesting. The fact that you can program a random oracle has no relation at all to how things work in the real world. If you look at it just pragmatically, how well it has helped us build secure protocols? I think it has been very useful there. In the end you just need to look at how good is this as a tool and how often has it failed us.

TR: In practice it has never failed us so it is a super useful tool. Nevertheless it is interesting to look into other models. This is what is done in this paper. In this paper they take a different approach. As I said normally if you want to prove Schnorr signatures are secure you assume that discrete logarithm is hard. This is a standard thing. There is nothing weird about this. It is still an assumption. You can formalize it and it can be true in the real world that this is hard. You can make this crazy random oracle assumption for the hash function. I think in this paper they go the other way round. They take a simpler assumption for the hash function; they consider several, for example random-prefix second-preimage resistance. I won’t go into details but you can make that formal. It is a much weaker assumption than this crazy random oracle thing. To be able to prove something about the resulting Schnorr signature scheme what they do is they instead make the discrete logarithm problem a more ideal thing. Discrete logarithm in a mathematical group. In our case it is the group of points on the elliptic curve. In the proof they model this group as an ideal mathematical object with strange idealized properties. In this model, the generic group model, we don’t have as much confidence as in the random oracle model I would say. It is another way to produce a proof. This gives us more confidence in the resulting thing. If you go with that approach you can produce a proof, if you go with that other approach you can also produce a proof. It gives even more confidence that what we are doing is sound in theory. Concretely this also has implications for the length of the hash function here. I’m not sure.

PW: I believe that is correct. I think the paper points out that you can do with 128 bit hashes in that model but of course you lose the property that your signature is a secure commitment to the message in that case. That may or may not matter. I think it is a property that many people who think informally about the signature scheme implicitly assume, so it is scary to lose.

TR: Let me give some background here. This is not really relevant for the Schnorr signature scheme that we have proposed. But it is relevant to the design decisions there. You could specify a variant of Schnorr signatures which is even shorter than what we have proposed. In our proposal you send two elements, R a curve point and s the scalar. You can replace R with a hash. This is in some sense in the spirit of ECDSA. A hash is suitable because if you have a hash function like SHA256 you can truncate it after 128 bits. You can play around with this length. If you want to send an elliptic curve point you can’t really do that. Usually you send an x coordinate. You can truncate this but then the receiver will never be able to recover the full point from that again. By switching to a hash function you could tune its parameters and make it even smaller. This would reduce the final signature by 128 bits and it would still be secure. However this comes with several disadvantages. We explain them in the BIP. One of the disadvantages is that we lose batch verification. We don’t want to do that. Then another disadvantage is pointed out by this paper. If we switch to hashes and do this size optimization then the signature scheme would have a weird property, namely the following. I can take a message m, give a signature on it and send you the message. Later I give you a second message m’ and the signature that I sent you will also be valid for m’. I can do this as the sender. If you think about unforgeability there is nothing wrong with this because I am the sender, I am the guy who has the secret key. I can produce a signature for m and m’, or for any other message that I want, but it is still an unintuitive property that I can produce one single signature that is valid for two messages. This will confuse a lot of protocols that people want to build on top of Schnorr because they implicitly assume that this can’t happen. In particular it is going to be interesting if the sender is not a single party. In this example I mentioned I am the guy with the secret key. But what if we do something like MuSig where the secret key is split among a group of parties and maybe they don’t trust each other. Who is the sender in that case? Things start to get messy. I think this is pointed out in this paper. Another reason why we don’t want to use this hash variant.
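
As a reference for the two variants being compared, a rough sketch (my notation; the BIP itself should be treated as authoritative):

```latex
% Proposed variant: signature is (R, s), verified as
%   s G = R + e P,  with  e = H(R \| P \| m).
% Shorter hash variant: signature is (e, s), where e may be truncated;
% the verifier has to reconstruct the nonce point itself:
R' = s G - e P, \qquad \text{check that } e = H(R' \,\|\, P \,\|\, m)
```

Because each verification has to reconstruct its own R', the checks cannot be folded into one batch equation, and with a truncated e the signer, who knows the secret key and picks R, can search for two messages that hash to the same challenge, which gives the one-signature-for-two-messages property Tim describes.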

AG: You mentioned that one of the reasons you don’t like this truncated hash variant is that it would remove the batch verification property. I know you mentioned that earlier as well. I wonder if it would be useful if you could explain concretely how much an advantage it would be in Bitcoin in the real world to have the batch verification property.

PW: I think a 3x or 4x reduction in CPU for verifying a block isn’t bad.

AG: Under what assumptions do you get those figures?

PW: You are right to point out that there are assumptions here. In practice many transactions are validated ahead of time. So that doesn’t apply. This is definitely something that matters for initial block download validation. And possibly even more because you could arguably batch multiple blocks together. Or even all of them. You can get even higher factors there but I haven’t looked into the numbers on how much that is.

AG: Are you saying in a typical scenario where everything was using Schnorr that you’d get a 4x speedup on verification of a block?

PW: Yes if you do it from scratch. From scratch I mean without having the transactions pre-validated.

TR: I think also there it is a trade-off. There are a lot of parameters we don’t know. You could also make the point that if you have smaller signatures and the hash variant would give this to you, then this would also speed up initial block download because things are smaller.

PW: Greg Maxwell who is apparently listening is pointing out on IRC that it also matters after you have been offline and you come back online.

stefanwouldgo: But wouldn’t it actually be worse if you have smaller signatures. You would have more signatures so you would have a longer IBD?

TR: Because blocks are full anyway.

stefanwouldgo: If we have a fixed block size and we have more signatures we get a longer IBD.

TR: This adds another point to this trade-off discussion.

PW: It depends what you are talking about. The correct answer is that there is already a limit on the number of sig ops a block can perform. That limit isn’t actually changed. The worst case scenario is a block either before or after that hits that limit of 80,000 signature operations per block. The worst case doesn’t change. Of course the average case may get slightly worse in terms of how many signatures you need to do. Average transactions will make more efficient use of the chain. I think the counter point to that is by introducing the ability to batch verify. That isn’t actually a downside for the network.

stefanwouldgo: You could actually use a hash that is shorter, which would make the signatures even shorter, and you couldn’t batch verify them in a block. That would make things worse twice in a way.

TR: You could turn this into an argument for smaller blocks. Let’s not go into that discussion.

ET: I have another use case that I think is important for batch verification which is blocks only mode. This is common on low powered devices and even phones, for people who run full nodes on their phone.

PW: Yes that is a good point. It definitely matters to those.

Reducing Bitcoin transaction sizes with x only pubkeys

https://medium.com/blockstream/reducing-bitcoin-transaction-sizes-with-x-only-pubkeys-f86476af05d7

MF: This was Jonas Nick’s blog post on x only pubkeys. I’m sure somebody can summarize this because this was pretty important.

PW: This is about the question of whether it is fine to go to x only public keys and the trade-offs there. The argument that people sometimes make is that clearly you are cutting the key space in half because half of the private keys correspond to those public keys. You might think that this inherently means a slight reduction in security. But it turns out that this is not the case. A way to see this intuitively is if it is possible to break the discrete logarithm of a x only public key faster than breaking it for a full public key then I could use the faster algorithm to break full public keys too by negating it optionally in the beginning. In the end negating the output. It turns out that this doesn’t just apply to discrete logarithms, it also applies to forging signatures. I think Jonas has a nice blog post on that which is why I suggested he talk about this. The paradox here is how is it possible that you don’t get a speedup? The reason is that the structure of a negation of a point already lets you break the discrete logarithm of a full public key slightly faster. You can try two public keys at once during a break algorithm.
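
A rough restatement of the reduction being described (my notation; see Jonas’ post for the careful version that also covers forgeries rather than just discrete logs):

```latex
% Suppose algorithm A breaks x-only keys: given an x coordinate it
% returns d such that dG is one of the two curve points with that x.
% To break a full key P = (x, y):
d \gets A(x); \qquad \text{if } dG = P \text{ output } d, \ \text{otherwise output } n - d
% since in the second case dG = -P. The overhead is a single negation,
% so x-only keys can be at most negligibly easier to break than full keys.
```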

TR: It seems like a paradox. It is not that you don’t lose any security, it is just that the loss is super, super tiny.

PW: What do you mean by super tiny?

TR: People sometimes talk about the hardness of cryptographic problems. They say this should be 2^128 operations hard to break discrete log but they don’t say what the operations are. The reason why they don’t say is because the numbers are that big. It doesn’t really matter if you count CPU cycles or additions or whatever. In this case this explains the difference here because negating a public key seems to be an elliptic curve operation and they are much more expensive than finite field operations. If you think of an elliptic curve point it has two coordinates x and y and they are elements of a finite field. Computations in a finite field are much faster than computations on a point itself. However for negating this is not the case. If you negate the point what you do is negate the y coordinate. It is not like you lose 1 bit of security in terms of public key operations or group operations. You lose maybe 1 bit in finite field operations but it is so much smaller.

PW: No I think you are failing to distinguish two things. The difference between the optimal algorithm that can break a x only key and the optimal algorithm that can break a full key is just an additive difference. It is a precomputation and post processing step that you do once. What you are talking about is the fact that you can already use the negation to speed up computing the discrete log of a full public key. This is what resolves the paradox.

TR: Maybe you are right. I need to read Jonas’ blog post again.

ET: Even if it did cut the security in half it would just be one less bit.

PW: Or half a bit because it is square root. It isn’t even that.

TR: If you are talking about a single bit, even if you lose half a bit or one bit it is not really clear what that means. In that sense breaking ECDSA is actually a tiny bit harder than breaking Schnorr. We don’t really know for those small factors.

stefanwouldgo: It is just a constant factor right?

TR: Right. What people usually do is switch to better schemes if they fear that breaking the existing scheme is within reach of attackers in the near future. It is of course very hard to tell who has the best discrete log solver in the world.

AG: So what I am learning from this conversation is that directory.io is only going to be half as big as a website now is that right? Does nobody get the joke? directory.io was a website where you pull it up and it would show you the private keys of every address one by one.

BIP Schnorr (BIP 340)

https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki

MF: Does somebody want to talk about the high level points from this BIP? What they found interesting in the current BIP?

AG: One of the first things that I noticed in the BIP was the change in the deterministic nonce generation. It is going to be rehashing what is already stated in the BIP I’m sure. I wonder if Pieter or another author could explain the rationale again for the optional nonce generation function.

TR: That is a tough one.

AG: To start off by saying the general idea here is you have to generate a nonce every time you generate a signature and if by accident you use the same nonce twice on different messages you are going to reveal your private key. This has led to people losing money in the past from bad programming errors. There is something called deterministic randomness which is a weird concept. It means that you generate a number as a function of your private key and the message you are signing. What that means is that the number that comes out is unpredictably random, out of a hash function let’s say, but because you are using a deterministic algorithm to generate it you are not going to accidentally create the same number for different messages. That is deterministic nonce generation. The standard that was being used and I think is still being used by pretty much all wallets is something called RFC 6979. It is a rather complicated algorithm. It is an RFC, you can look it up. One interesting thing about this new BIP 340 for Schnorr is that it specifically doesn’t use that algorithm at all. It has another deterministic random nonce generation algorithm. I was asking what was the thinking behind that specific function.

PW: There were a number of reasons for that. One is that RFC-6979 is actually horribly inefficient. To the point that it is measurable time in the signing algorithm. Not huge but measurable. I believe it does something like 14 invocations of the SHA256 compression function just to compute one nonce. The reason for this is that it was designed as a RNG based on hashes. It is then instantiated to compute nonces. It is used in a place where on average only one nonce will be created. The whole set up is actually wasted effort. I think you can say that our scheme is inspired by the ed25519 one which uses SHA512 actually in a particular arrangement. Our scheme is based on that but we just use SHA256 instead. The reason is simply that the order of our curve is so close to 2^256 that we don’t need to worry about a hash being biased. You can directly use the output of a hash as your nonce because our order is so close to 2^256 there is no observable bias there. That simplifies things. We had a huge discussion about how and whether to advise something called synthetic nonces. You take the scheme of deterministic randomness and then add real randomness to it again in such a way that it remains secure if either the real randomness is real random or the deterministic scheme is not attacked through breaks or through side channel attacks. This has various advantages, the synthetic nonces. It protects against hardware injection faults and other side channel attacks. We looked into various schemes and read a few papers. To be fair all of that is really hard to protect against. We need to pick some scheme and there is a number of competing otherwise equally good ones, let’s try to build a model of what a potential side channel attacker could do and pick the one that is best in that regard.
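
For reference, a rough Python sketch of the nonce derivation BIP 340 recommends, including the synthetic-nonce masking Pieter describes. Function and variable names are mine, and the key negation (even-y) handling and other checks required by the BIP are omitted:

```python
import hashlib
import secrets

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def tagged_hash(tag, data):
    # BIP 340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + data).digest()

def nonce(seckey, pubkey_x, msg, aux=None):
    # Synthetic nonce: mask the secret key with hashed fresh randomness,
    # then hash the mask together with the public key and the message.
    # If aux is predictable this degrades to a purely deterministic nonce;
    # if the hash-based derivation were ever broken, fresh aux randomness
    # still protects the key. Only one of the two needs to hold.
    if aux is None:
        aux = secrets.token_bytes(32)
    t = (seckey ^ int.from_bytes(tagged_hash("BIP0340/aux", aux), "big")).to_bytes(32, "big")
    rand = tagged_hash("BIP0340/nonce", t + pubkey_x + msg)
    k = int.from_bytes(rand, "big") % N
    assert k != 0  # vanishingly unlikely; the BIP treats this as a failure
    return k
```

The order of secp256k1 is close enough to 2^256 that reducing the 32-byte hash output modulo N introduces no observable bias, which is the simplification Pieter mentions.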

AG: Is that in the scope of the BIP? It is a bit of gray area. You could say “You have to choose a random nonce” and that’s the end of it. Clearly a protocol specification has to say what you are supposed to do.

TR: It is a very good question that you bring up. I think our goal there was to be explicit whenever possible and give safe recommendations. I was one of the guys strongly advocating for this approach. It is always hard. You write a cryptographic algorithm or even a specification in that case and then your hope is that people will implement it. On the other hand you tell people not to implement crypto if they don’t know what they are doing. I think even people who know what they are doing and implement crypto get it wrong sometimes. We could maybe skip some details in the BIP draft. On the other hand if you are an average skilled programmer I still don’t think you should be the one implementing crypto. But I think the idea here is that if we give very good recommendations in the BIP then there is a higher chance that the resulting implementation will be ok. Another thing here is that this specific nonce function, even if you know what you are doing you don’t want to invent this from scratch. You might be happy that other people have thought about it.

PW: Greg gives a good point on IRC which I am going to quote. “It is important to have a good recommendation because if you let people just choose a random nonce we know they will do it poorly. The evidence of that is that in a very early version of libsecp the nonce could be passed explicitly as a call. It wasn’t generated inside the library. Some Ethereum implementation just passed a private key XOR the message as the nonce which is very insecure. The application writer isn’t a cryptographer necessarily.” I was paraphrasing in this case.

MF: There is a question from nothingmuch in the chat. Care to comment on “ It is important to note that multisignature signing schemes in general are insecure with the random number generation from the default signing algorithm above” in BIP 340.

nothingmuch: Deterministic nonce generation, my understanding is that it is not compatible with MuSig right?

PW: It is compatible but only in a trivially insecure way.

TR: This is a really interesting point. Maybe tomorrow I will talk about this. Variants of MuSig, I can explain this in more detail. We talked a lot about deterministic nonces. If you look at ECDSA and Schnorr signatures this is what we read everywhere. It can horribly fail if you use real randomness because if the randomness breaks down it might leak your private key and so on. So please use deterministic randomness. We hammered this into the minds of all implementers. Now as you point out it is exactly the case that if you go to MuSig and use deterministic randomness it is exactly the other way round. If you use deterministic randomness it is horribly broken.

ET: What are your thoughts on synthetic nonces with MuSig?

TR: It doesn’t help you with MuSig. The idea of synthetic nonces as Pieter said, it is secure if either the real randomness is secure or the deterministic randomness works out.

PW: And in MuSig without variants you just need to have real randomness period.

TR: Synthetic nonces, you can do this but it is not better than using real randomness in the end. Because if the real randomness breaks down with the synthetic nonce you fall back to the deterministic one and then again it is insecure. I can show the attack on this tomorrow with slides. It is easier to explain. We are working on a paper which has been accepted but is not public yet where we fix this problem for MuSig. The way we fix this is by adding a zero knowledge proof where each signer in the MuSig session proves that he/she generated the nonce deterministically. Because in that case it works, if you choose your nonce deterministically and if you are convinced that every other signer has chosen his/her nonce deterministically. Our simple solution to this is to add a zero knowledge proof. I can talk about this tomorrow a little more. It works but it is not a bulletproof solution. It removes that problem but it introduces a large complex zero knowledge proof that is slow and gives you another hundred possibilities to fail your implementation. No pun intended Elichai, we use bulletproofs but it is not a bulletproof solution.

ET: With that you again can’t use synthetic nonces and you use 100 percent deterministic ones.

TR: Exactly then you can’t use synthetic nonces. You need 100 percent deterministic nonces, indeed.

Bitcoin Core PR 17977 implementing BIP Schnorr

https://github.com/bitcoin/bitcoin/pull/17977

MF: The next item on the agenda is the actual PR in Bitcoin Core implementing BIPs 340-342. One smaller PR, 16902, was taken out of this monster Schnorr, Taproot PR and we looked at that at the Bitcoin Core PR Review Club. Are there any possibilities to take further PRs out of that monster PR? This was for easy reviewability I think more than anything.

PW: There have been a couple that have been taken out, opened and/or merged independently. I think the only obvious one that is remaining is merging support for Schnorr signatures first in libsecp and then in Bitcoin Core. On top of that I think the code changes are fairly modest of what remains.

MF: It is pretty much ready to be merged bar a few small things? Or just needs more review? What are the next steps for it to be merged assuming consensus?

PW: I think the key word is assuming consensus. It probably requires at least a vague plan about how it will be deployed.

MF: Is that the bottleneck now, the activation conversation? You think the code is pretty much there?

PW: I think the code is pretty much done. That is of course my personal opinion. Reviewers are absolutely free to disagree with that.

MF: There needs to be an activation conversation. We’ll leave that for another time.

TR: One thing I can add when talking about things that can be taken out of the PRs. For the Schnorr PR to libsecp it has just been simplified. Jonas who is mostly working on it has taken out a few things. For example we won’t have batch validation in the beginning. This wasn’t planned as a consensus change right now. You couldn’t even call it a consensus change. It is up to you as a verifier of the blockchain if you use batch validation or if you don’t use batch validation. At the moment this wasn’t the plan for Bitcoin Core to introduce batch validation. We took that out of the libsecp PR to make the PR easier with smaller steps. Were there other things removed?

PW: I don’t remember.

TR: Batch verification is a very large thing because you want to have it be very efficient. Then you want to use multi-exponentiation algorithms which so far we haven’t used in libsecp. This would touch a lot of new code. The code is already there but at the moment it is not used. If we introduce batch verification then suddenly all this code would be used. We thought it was a better idea to start with a simple thing and not add batch verification.

libsecp256k1 library

https://github.com/bitcoin-core/secp256k1

MF: The crypto stuff is outsourced to libsecp. Perhaps Elichai you can talk about what libsecp does and what contributions you have made to it.

ET: The library is pretty cool. Originally I think it was made by Pieter to test some optimization trick in ECDSA but since then it is used instead of OpenSSL. There are a lot of problems with OpenSSL as Pieter said before. It is a C library that does only what Bitcoin Core needs for ECDSA and soon Schnorr too. I try to contribute whenever I see I can. Adding fuzzing to it, I fixed some small non-serious bugs, stuff like that.

MF: How did you build it to begin with Pieter? It is in C, the same as OpenSSL, did you use parts of OpenSSL to begin with and then build it out?

PW: No libsecp at the very beginning was written in C++. It was later converted to C to be more compatible with low powered devices and things like that. And improve things like static analyzability. It is entirely written from scratch. The original algorithms, some of them were inspired by techniques used in ed25519 and things I had seen in other implementations. Later lots of very interesting novel algorithms were contributed by Peter Dettman and others.

MF: You talked about this on the Chaincode Labs podcast. The motivation for working on this was to shrink the attack surface in terms of what it is actually doing from a security perspective. OpenSSL was doing a lot of stuff that we didn’t need in Bitcoin and there was the DER encoding problem.

PW: It is reducing attack surface by being something that does one thing.

libsecp256kfun

https://github.com/LLFourn/secp256kfun

MF: The next item on the reading list was Lloyd Fournier’s Rust library. Do you have any advice or guidance? This is just for education, toy reasons. In terms of building the library from scratch any thoughts or advice?

PW: I haven’t looked at it.

TR: I also only looked at the GitHub page. It is a shame he is not here. We discussed this paper about hash function requirements on Schnorr signatures. He actually has an academic poster in the write up on hash function requirements for Taproot and MuSig. This refers to the concern that Pieter mentioned. It is not only the security of Schnorr signatures, it is combined with all the things you build on top of it. If you want to look into alternative proofs for Schnorr signatures it is also interesting to look into alternative proofs and hash function requirements for the things you want to build on top of it. Lloyd worked on exactly that which is a very useful contribution.

ET: We were talking about the secp256kfun. I just want to mention that there is a Parity implementation of secp in Rust. I looked at it a year or so ago. They tried to translate the C code in libsecp to Rust. For full disclosure they did make a bunch of mistakes. I wouldn’t recommend using it in production in any way. They replaced a bitwise AND with a logic AND which made constant time operations non-constant time and things like that.

MF: Thanks for clarifying Elichai. Lloyd has said it is not for production. It is an educational, toy implementation.

ET: I wanted to note this as a lot of people have looked at it.

Different Schnorr multisig and threshold schemes

MF: Tim, you did say you might talk about this tomorrow. The different multisig schemes. MuSig, Bellare-Neven, the MSDL-pop scheme (Boneh et al) and then there are the threshold schemes. Who can give a high level comparison between the multisig schemes and the threshold schemes?

TR: Let me try for the ones you mentioned. The first reasonable scheme in that area was Bellare-Neven because it was the first scheme in what you would call the plain public key model. I can talk about this tomorrow too. The problem with multisignatures is… Let’s say I have a public key and Pieter has a public key. These are just things that we claim but it doesn’t mean that we know the secret key for them. It turns out that if we don’t know the secret key for a public key we can do some nasty tricks. I can show those tomorrow. For example if we do this naively I can create a public key that depends on Pieter’s public key. After I have seen Pieter’s key I can create a key for which I don’t know the secret key. If we add those keys together to form an aggregate key for the multisignature then it would be a key for which I can sign and only I. I don’t need Pieter. This defeats the purpose of a multisignature. This is called a key cancellation attack or a rogue key attack.

MF: Key subtraction? Different terms for it.

TR: The idea is so simple that I can actually explain it. I would take the public key for which I know the private key, let’s say P. Pieter gives me his key P_1. I would claim now that my key is P_2 which is P - P_1. Now if we add up our keys P_1 + P_2, then the P_1 cancels out with the minus P_1 that I put in my own key. What we get in the end is just P. P was the key for which I know the secret key. Our combined key would be just a key for which I can sign and I don’t need Pieter to create signatures. Key subtraction is another good word for those attacks. Traditionally what people have done to prevent this, this works, you can do this. You give a zero knowledge proof together with your public key that you know the secret key for it. This prevents the attack that I just mentioned because P_2, which was P - P_1, I don’t know the secret key for because it involves Pieter’s key and I don’t have Pieter’s secret key. This is not too hard in practice because giving a zero knowledge proof that you know a key is what Adam mentioned at the beginning. It is a zero knowledge proof of knowledge. It can be done by a signature. What you can do is take your public key and sign it with your secret key so your new public key is the original public key with your signature added to it. People need to verify this signature. This is the solution that people have done in the past to get around those problems. You mentioned MSDL-pop. MS for multisignature, DL for discrete log and pop is proof of possession which is the additional signature you give here. Bellare-Neven on the other hand introduced another simpler model called the plain public key model so you don’t need this proof of possession. This doesn’t make a difference in theory because you could always add this proof of possession, but in practice it is much nicer to work with simple public keys as they are now. You could even find some random public keys on the chain for example and create an aggregated key of them without any additional information or talking to the people. In this model our keys stay the same. They are just elliptic curve points. This was the innovation by Bellare and Neven to make a multisignature scheme that is secure in this setting. The problem with their scheme is that if you want to use it in Bitcoin it wouldn’t work because this now assumes we have Schnorr signatures in Bitcoin. Maybe I shouldn’t assume that but I hope it will happen of course. A Bellare-Neven signature doesn’t look like a Schnorr signature. It is slightly different and it is slightly different to verify. To verify it you need the public keys of all signers who created it. In this example of Pieter and me, let’s say we create a signature. My key is P_2, Pieter’s key is P_1. If you want to verify that signature you need both P_1 and P_2. This already shows you that it doesn’t look like a normal Schnorr signature because in a normal Schnorr signature the public key is one thing, not two things or even more things depending on the number of parties. The additional property that we need here is called key aggregation. This was introduced by MuSig. This means Pieter and I, we have our public keys P_1 and P_2, and there is a public function that combines those keys into a single aggregated key, let’s call it P_12. To verify signatures, verifiers just need that single P_12 key. This is very useful in our setting. It is actually what makes the nice privacy benefits of Taproot work in the first place. 
In this example if I do a multisig with Pieter there might be a UTXO on the blockchain that we can only spend together. It will be secured by this public key P_12. The cool thing here is that we don’t need to tell the world that this is a multisig. It just looks like a regular public key. If we produce a signature it will look like a normal Schnorr signature valid for this combined public key. Only the two of us know, because we set up the keys, that this is a multisig key at all. Others can’t tell. This is exactly what we need in a Bitcoin setting. This is what MuSig gives us.
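
A compact restatement of the attack and of the MuSig aggregation that prevents it (my notation; L stands for the full set of participating keys):

```latex
% Key cancellation against naive aggregation: knowing x with P = xG,
% after seeing Pieter's key P_1 I claim
P_2 = P - P_1 \quad\Rightarrow\quad P_1 + P_2 = P,
% a key I alone can sign for. MuSig instead weights each key with a hash
% of the whole key set L = \{P_1, P_2\}:
a_i = H_{\mathrm{agg}}(L, P_i), \qquad P_{12} = a_1 P_1 + a_2 P_2.
% Choosing P_2 as a function of P_1 no longer cancels anything, because
% the coefficients themselves depend on P_2.
```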

MF: I saw a presentation from you Tim last year on a threshold signature scheme that used Shamir’s Secret Sharing. It was asked at the end why use Shamir’s Secret Sharing in that threshold scheme. Perhaps we’ll leave that tomorrow. This work is being done in an Elements fork of libsecp. Are you using Liquid at all? Or is it just positioned in that repository? You are not actually doing any experimentation or testing on a sidechain?

TR: MuSig is implemented in a fork of libsecp. It is owned by the Elements project. It is our open source fork of secp. We don’t have Schnorr signatures in Elements. At the moment that is not going to happen but it is in an experimental library where we play around with things. If this is ready people will be able to use it in other blockchains and Bitcoin too of course.

MF: You don’t have Schnorr in Elements? How are you doing schemes like MuSig without Schnorr? You are just forking secp within the Elements organization. Schnorr isn’t actually implemented in Elements.

TR: There was an old Schnorr implementation in an Elements version a few years ago. Greg says I’m wrong.

PW: Elements Alpha used to have Schnorr signatures but it was subsequently dropped after we discovered we wanted something that was secure against cancellation attacks. The Schnorr code that was in Elements Alpha was plain Schnorr without pubkey prefixing even. But it was dropped. It is not currently there.

TR: I can’t speak for Elements and Liquid because I am not so much involved in development there. I guess the plan is to wait until the Bitcoin PRs have stabilized to the point where we are confident that we can implement it in Liquid. Confidence and hope that it won’t change much afterwards. We mostly try to be compatible with the core features of Bitcoin Core because this makes development easier. Of course we have the advantage that forks are maybe easier if you have a federation.

MF: So tomorrow Tim you are happy to talk about Schnorr multisig schemes because we went through them very quickly.

TR: That was the plan. I don’t have slides yet so I can do this.

Tim Ruffing’s thesis (Cryptography for Bitcoin and Friends)

https://publikationen.sulb.uni-saarland.de/bitstream/20.500.11880/29102/1/ruffing2019.pdf

MF: We haven’t even got onto your thesis. Your thesis has a bunch of privacy schemes. I assume they were implemented on Elements? Or are they just at the theoretical stage?

TR: I started an implementation of DiceMix under the Elements project in GitHub but it is mostly abandoned. It is kind of sad. It is my plan to do this at some point in my life. So far I haven’t done it.

AG: The whole BIP 32 with Schnorr thing. Can somebody give the one minute summary? The general idea is that everything should still be usable but there are some details to pay attention to. Is that right?

PW: Very short. If you do non-key prefixed Schnorr in combination with BIP 32 then you can malleate signatures for one key into another key in the same tree if you know the derivation paths.

AG: A beautiful illustration of why it makes absolutely zero sense to not key prefix. I like that.

PW: I need to add that even without key prefixing this is not a problem in Bitcoin in reasonable settings because we implicitly commit to the public key.

AG: Except for the SIGHASH_NOINPUT thing which was mentioned in the BIP. It doesn’t exist.

PW: Still it is a scary thing that has potential interaction with things so keep prefixing.
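
A rough sketch of the malleation Pieter refers to (my notation; “non-key-prefixed” means the challenge hash does not commit to the public key):

```latex
% Non-prefixed Schnorr on P = xG: signature (R, s) with
s = k + e x, \qquad e = H(R \,\|\, m), \qquad \text{verified as } sG = R + eP.
% An unhardened BIP 32 child key is P' = P + tG, where the tweak t is
% computable from the parent extended public key and the path. Then
s' = s + e\,t \quad\Rightarrow\quad s'G = R + e(P + tG) = R + eP',
% so (R, s') is a valid signature for P' on the same message. With key
% prefixing, e = H(R \| P \| m) changes with the key and this fails.
```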

MF: Let’s wrap up there. Thank you to all those attending. Tim will present tomorrow. The topic is going to be Schnorr multisig and threshold sig schemes.

Adam Gibson, Max Hillebrand, Bob McElrath, Ruben Somsen

Date: June 23, 2020

Transcript By: Michael Folkson

Tags: Coinswap

Category: Meetup

Media: https://www.youtube.com/watch?v=u7l6rP49hIA

Pastebin of the resources discussed: https://pastebin.com/zbegGmb8

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is London BitDevs, this is a Socratic Seminar. We had two events last week so it is great to see so many people on the call. I wondered if there would be lethargy after two last week. They were both great, videos and transcripts are up on BIP-Schnorr and Tim Ruffing’s presentation. Today is on CoinSwap. CoinSwap has been in the news because Chris Belcher has just got funding from the Human Rights Foundation to work on it and there seems to be a lot of enthusiasm. At least a couple of people have been getting deep into CoinSwap. A couple of guys from Wasabi have been talking about CoinSwap at the Wasabi Research Club. This is being livestreamed. There is a call currently on Jitsi. Jitsi is free, open source and doesn’t collect your data. There will be a transcript but it can be anonymized and edited so please don’t let that put you off. This isn’t an exercise to find gotchas, this is purely for educational reasons. There is a Pastebin up with the resources we will look at to bring structure to the discussion. As is normal we’ll do intros for anybody that wants to do an intro. There is a hand signal in the bottom left of your screen. If you want to do an intro, who you are, what interests or what understanding you have on CoinSwap. You don’t need to give your real name if you don’t want.

Adam Gibson (AG): I have some knowledge of CoinSwap because I did an implementation of it called CoinSwapCS a couple of years ago. I also corrected Greg Maxwell’s 2013 protocol which I don’t think many people know. I am also the author of the diagram you have at the top of your page.

Max Hillebrand (MH): I am building on different Bitcoin tools, currently focusing a lot on privacy specifically onchain with Wasabi wallet but also have been thinking about CoinSwaps for quite a while. I am very excited about the promise of Schnorr, scriptless scripts and adaptor signatures. I am even more excited that all of this is becoming possible with Schnorr-less 2PECDSA.

Openoms (O): I am contributing to the RaspiBlitz project which is a Lightning Network node implementation and a lot else. I was recently working on integrating Joinmarket on it and I have been working on a Terminal based menu called JoininBox which is helping Joinmarket CLI usage. I am generally very enthusiastic about privacy on Bitcoin and excited about the CoinSwap proposal.

Aviv Milner (AM): I am part of the Wasabi Research Club alongside Max. I am really into privacy. One thing I am really proud of is that I got to interview Adam Gibson not long ago on elliptic curve math which was great.

What is an atomic swap?

MF: As is custom in the previous Socratics we start from basics. I will throw out a basic question. What is an atomic swap? What is an atomic swap trying to achieve? This has been discussed for years in terms of swapping coins between chains. Can anybody explain how a conventional atomic swap works?

MH: A swap is about exchanging property right titles, very generally speaking. There are two parties, Alice and Bob. Alice has something, Bob has something else and they want to have the thing that the other person has. They swap it, they exchange it. What makes this whole thing atomic is not that it is small but that it is binary. Either the trade happens or it does not happen. Either Alice gets the good of Bob and Bob gets the good of Alice or nothing happens at all and both retain their own goods. The cool thing is that nobody can steal. It is a bulletproof property rights exchange contract. There is no possibility that Alice can have both her own good and the good from Bob. This is not possible in this scheme. It is achieved by utilizing cryptography and Bitcoin Script in order to enforce this with some timing constraints that I am sure we will talk about.

O: The word atomic means that it is all or nothing. Max described that there is no way that the trade goes halfway forward and not all the way. That is why we call it atomic.

AG: It might be worth mentioning that the achievement of that goal is contingent on trusting that the blockchain’s so to speak clock operates correctly. For example no re-orgs or no deep re-orgs depending on how you set it up. It is not like you get the atomicity for free. It is depending on the blockchain behaving itself.

MF: You could say the same about Lightning. It is basically the same assumptions.

AG: It is the same primitive involved in at least one aspect of the Lightning Network, similar.

Bob McElrath (BM): There is a fun theorem that says atomic swaps are impossible without a trusted third party. We don’t get away from that theorem here. The two assets live on two different blockchains presumably so one solution is to put them both on the same blockchain which is the way all ERC20 swaps work. What Bitcoin does is use two interleaved timelocks, it is not actually atomic, it doesn’t happen at one point in time. One swap has to go forward and then the second one has to follow but there is a back out where if one half of the transaction doesn’t go through the other half can retrieve their coins and effect the non-execution of the swap.

MH: What theorem suggests they are impossible? Is there a difference between swapping between chains like Bitcoin and Litecoin and swapping on the same chain, Bitcoin UTXOs on the base layer?

BM: If the base layer mediates the atomicity and the atomicity happens on the base layer, there are two transactions that are confirmed simultaneously and they can only be confirmed simultaneously. Then the base layer is enforcing atomicity. In order to do that you have to have a base layer that supports multiple assets. That is what is done on many of the blockchains out there that support sub-assets if you will and the whole DeFi thing.

MF: Alex has shared in the chat a resource on understanding HTLCs.

At a high level HTLCs function as an escrow with a timelock. Once you put money into the escrow it will either execute successfully based on certain conditions or refund the payer once the timelock has expired. HTLCs are one of the primitives used on the Lightning Network. The potential for HTLCs seems boundless in my opinion.

MF: And PTLCs, we will get onto them later. nothingmuch says in the chat is that due to the Fischer-Lynch-Paterson impossibility? Or is it something specific to these swaps?

BM: I think that sounds right. This statement doesn’t necessarily apply only to cryptocurrencies, it also applies to anything else in the world including pdf files, physical objects, anything else. You can’t exchange anything electronically or otherwise without a trusted third party.

AG: It is usually referred to as fair exchange. The title of the paper was “On the Impossibility of Fair Exchange without a Trusted Third Party.” It is the same thing.

BM: In the blockchain space we outsource the third party in a couple of different ways. One interesting way is in the case of Ethereum with ERC20 tokens, the blockchain is actually mediating it and it is the rules of the blockchain that enforce the atomicity. There are ways to trust minimize that third party, the whole statechains conversation is an example of a trust minimized third party but it is still a trusted third party that enforces atomicity. Of course with HTLCs we use the timelocks.

AG: That is why I was saying at the beginning that the simplest way to understand it perhaps is the trade-off that we are trusting the blockchain clock is behaving as we intend it to in our timelocks. There is a lot you could say about this, we have said a lot of already.

Greg Maxwell Bitcointalk post (2013)

https://bitcointalk.org/index.php?topic=321228.0

MF: As is custom for these historical timeline exercises the first link is a Bitcointalk post by Greg Maxwell. This is on CoinSwap. This is at the idea phase in 2013, obviously lots has changed since then. The foundation or the crux is kind of here. It is not as trust minimized as some of the designs today, is that correct?

AG: It is kind of complicated to say the least but it is not less trust minimized, it is the same basic thing. He chose to present it, perhaps logically, in the form of a three party protocol rather than two party. In a sense we are missing a step because I think people usually refer back to a TierNolan post from a little earlier when they first talked about atomic swaps. This was Greg specifically introducing the concept that even on a single blockchain it could be a very useful construct to have this kind of atomic swap in order to preserve privacy. He was trying to explain how you could do the atomic swap without revealing the contract onchain. That is the most important concept that is being played out there. It is a little complicated, the way it is presented there. I couldn’t explain it to you right now, it was years ago I looked at this. I did actually find a mistake in it, it was quite funny telling Greg, he wasn’t too pleased. It was a minor thing. Even if there was no mistake in that layout it was written before there was such a thing as CLTV and CSV, CheckLockTimeVerify and CheckSequenceVerify. They activated years after. He was writing how to do it with timelocks on backout contracts rather than integrating it into one script. Also it being three party makes it more complicated.

MF: This was before timelocks?

AG: They weren’t in the code or activated.

BM: There was an nLocktime in the original Bitcoin. The CLTV and CSV soft forks in 2015 and 2016 added the CheckLockTimeVerify and CheckSequenceVerify opcodes. That is what gave us HTLCs. The original timelock that was in Bitcoin didn’t behave in a way that anybody expected. It was not really very useful.

MF: This party Carol in the middle. There is no trust in Carol? What is the point of Carol if there is no trust required in Carol in this scheme?

AG: I haven’t read this for three years at least so I’d have to read it. It is very long and very complicated.

BM: I am not sure about this specific proposal but generally the third party is added to enforce atomicity. You can minimize trust down to only enforcing atomicity and not knowing anything else.

MF: Towards the end of Greg’s post he gives the comparison to Coinjoin which is quite interesting.

AG: Even if you just look at the introductory statements at the top of the diagram. Phase 1 “makes it so that if Bob gets paid there is no way for Carol to fail to get paid.” Of course there are such things as escrows with trust but I don’t think there is anything interesting here with regard to having trusted parties. This is about enforcing atomicity at the transaction level. The only reason it is different from a CLTV type construct is you have to use extra transactions with nLocktime timelocks as backouts. This isn’t about having a trusted relationship.

TierNolan scheme (2013)

https://bitcointalk.org/index.php?topic=193281.msg2224949#msg2224949

MF: You mentioned Adam the TierNolan scheme. This is May 2013, Greg’s was October 2013. This was before Greg’s. Could you explain the TierNolan scheme and how it differs to Greg’s?

AG: Not me. I never understood it. I remember reading it three or four times and never quite understanding what was going on.

BM: Probably best to skip directly to how HTLCs work today and timelocks there. The other reference I would point out here is Sharon Goldberg’s talk from the MIT Bitcoin Expo this year which enhances the TierNolan proposal and adds three timelocks. It adds the feature that either party can go first. TierNolan’s proposal has to have one person going first. Broadly there are two timelocks, there is a window in which if Alice goes first she can get her coins back and if Bob does go, everything is good, it is executed. There are two timelocks there.

MF: The motivation for Greg’s scheme was as a privacy technique while the TierNolan was set up as a swap between two different cryptocurrencies?

nothingmuch: If I am not mistaken the history is as follows. First there was the Bitter to Better paper by Elaine Shi and others which introduced the idea of swaps using hashlocks and timelocked transactions and a multisig for privacy. I believe TierNolan came after and generalized it to do swaps between different blockchains. Greg’s CoinSwap post describes essentially the same thing as an atomic swap between two parties on Bitcoin specifically for privacy. A minor correction for earlier is that Carol is whoever Alice wants to pay, Carol doesn’t know she is participating in a CoinSwap. It is typically meant to be Alice herself. The reason he added it was to emphasize you can also send payments from a CoinSwap without an intermediate step. I think the main innovation from a privacy point of view in the CoinSwap post over the previous protocols is the notion of a cooperative path where the only onchain footprint is the multisig outputs. None of the hash locked contracts make it onto the chain unless one of the parties defects and the refund is actually used. This is why it was a substantial improvement for privacy. I hope that that is accurate, that is my recollection.

AG: That sounds good. Maybe it nicely illustrates that while it is natural to go down a historical route in this it might actually be better educationally to look at those two basic ideas. First we could mechanically state how an atomic swap construct works without any privacy concerns. Then we could describe as Greg laid out how you take that basic construct and you overlay a certain thing on it to make it work for privacy.

How does an atomic swap work?

AG: The basic idea of an atomic swap is this idea of a hash preimage, one side pays to a script that can only be spent out of by revealing a hash preimage. Two people are exchanging coins for whatever reason, cross blockchain or on the same blockchain, they are both able to extract two chunks of coins if they reveal the hash preimage. One of them starts with an advantage in that he knows the hash preimage. If you did it in a naive way he could claim the coins himself and then try to claim the other ones too. The basic idea of course is when you broadcast a transaction that reveals a hash preimage and the script says something like “Check that this data hashes to this previously agreed hash output.” Because you have to broadcast it in order to claim the coins that means that hash preimage by definition becomes public. A naive way to do it would be Alice has the hash preimage, tries to claim one of the outputs using the hash preimage and then Bob would see that hash preimage on the blockchain because it was published inside the scriptSig of the transaction that Alice uses to claim. He would take the hash preimage and essentially do the same thing. He would take that hash preimage and construct a valid scriptSig for the other output. He gets his one coin, they are both trying to claim one coin. The problem with that is it doesn’t have the security you want. Alice is in the preferential situation where she knows how to claim both the coins, she knows both of the hash preimages. You add another layer to it. You add that only one party can claim by adding the requirement of signing against a public key. Why do we need timeouts? If you had this setup as I described it where both sides would claim the coins if they know the preimage to a hash but also it was locked to one of their keys, it looks secure but they have a deadlock problem. If the first party refuses to reveal the hash preimage, the one that knows it, because you have had to deposit coins into these scripts the coins are locked up until that person decides to reveal it. Maybe they die or they disappear and then your money is dead. You do need a backout. The question becomes how does each side have a backout to ensure safety against the other party misbehaving. I haven’t explained that in full detail, I am sure we will at some point. How do we convert it into a private form? In talking about this you might find the diagram on Michael’s meetup page useful for this. We have got Alice and Carol and TX-0 and TX-1 on that diagram show Alice and Carol respectively funding a 2-of-2. Why do they fund a 2-of-2? Because they need to have joint control of the funds.

MF: One thing I saw going through these resources on atomic swaps, Sergio Demian Lerner had an attempt back in 2012 with a trustless exchange protocol called P2PTradeX. That was the idea that was refined and formalized by TierNolan in 2013. The guys at Decred worked on executing an atomic swap with Litecoin in 2017. Most of the activity seems to have been swapping currencies between chains until recently.

AG: The first thing we need to get clear is a HTLC. HTLC is a slightly unfortunate name, it is a correct name but it isn’t an intuitive name. Forget about coin swaps for a minute and focus on this idea that you could have a custom script that pays out if you provide a hash preimage or it pays out after a delay. Why do we want the after a delay clause? Because you don’t want to pay into a hash, something where the script says “Check if the data provided in the scriptSig hashes to this value.” We don’t want to pay into a script like that and have it sitting there infinitely because the other party walked away. The concept of backout is absolutely crucial. That is why the timelock in a hash timelocked contract (HTLC) exists. As well as paying into a hash we are paying into that or something is locked in say 100 or 1000 blocks forward or possibly time instead. Hash timelocked contract refers to that. You could think of a HTLC as a specific kind of Bitcoin script. I seem to remember that at one point some people tried to produce a BIP to standardize this but the only standardization is in certain places like Lightning. You have a script and that is cool but the idea of an atomic swap is you lock together two such scripts. It could be on different blockchains, it could be Litecoin and Bitcoin for example. Or it could be on the same blockchain. Forgetting privacy the idea is both Alice and Bob pay into such a script. Alice pays into the first one and it says “This script releases the coins if you provide the preimage to the hash or it releases, without the hash preimage, after 100 blocks.” Bob does the same. They use the same hash value in the script but they use slightly different timelocks. If you think about trading, perhaps Alice is paying 1 Bitcoin in exchange for 100 Litecoin. It should be atomic. If Bob receives 1 Bitcoin and Alice receives 100 Litecoin, if one of them happens both should happen. The idea is that when one of them happens it gets broadcast onto the chain with the hash preimage such that the other one should be able to be broadcast also. That is the core concept. The timelocks exist to make sure that both parties are never in danger of putting their coins into such a script but never getting them out again. Maybe that is the simplest explanation you can give for an atomic swap.
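
To make the two locked scripts concrete, here is a rough sketch. The opcode template and the Python wrapper are purely illustrative (not tied to any particular library or standard) and the timelock numbers are placeholders. Alice knows the secret x with H = SHA256(x), funds the contract that Bob will claim, and gives it the longer refund timeout, because Bob only learns x once Alice claims on the other side.

```python
import hashlib

# Alice's secret and its hash; both scripts commit to the same hash.
secret = b"some sufficiently unguessable 32-byte secret...."[:32]
H = hashlib.sha256(secret).hexdigest()

def htlc_script(hash_hex, claim_pubkey, refund_pubkey, refund_after_blocks):
    # Claim path: reveal the preimage and sign with claim_pubkey.
    # Refund path: after the timeout, refund_pubkey takes the coins back.
    return f"""
    OP_IF
        OP_SHA256 {hash_hex} OP_EQUALVERIFY
        {claim_pubkey} OP_CHECKSIG
    OP_ELSE
        {refund_after_blocks} OP_CHECKSEQUENCEVERIFY OP_DROP
        {refund_pubkey} OP_CHECKSIG
    OP_ENDIF"""

# Funded by Alice, claimable by Bob once he learns x. Longer timeout,
# so Bob still has time to claim after Alice reveals x on the other chain.
script_funded_by_alice = htlc_script(H, "<bob_pubkey>", "<alice_pubkey>", 288)

# Funded by Bob, claimable by Alice by revealing x. Shorter timeout,
# so Bob can always back out before Alice's own refund window opens.
script_funded_by_bob = htlc_script(H, "<alice_pubkey>", "<bob_pubkey>", 144)

print(script_funded_by_alice)
print(script_funded_by_bob)
```

When Alice spends the script Bob funded she necessarily publishes the preimage in her scriptSig or witness, which is both what lets Bob complete his side and, as discussed further down, what links the two transactions for anyone watching the chains.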

MF: I suspect most people know what a HTLC is, but here we have a HTLC in one direction and another HTLC going in the other direction with a link between those two HTLCs.

AG: I am not sure about the directionality. If you are thinking about HTLC in a Lightning context then obviously we have a chain of them, not just two. But it is basically the same thing. If one transaction is broadcast the other one can also be broadcast. It is only two and not 3, 4, 5 in a Lightning path.

MF: This is talking about different cryptocurrencies. The question in the chat is on moving value from a legacy address to a bech32. I suppose this is going in the direction of Chris Belcher’s CoinSwap, you are swapping coins in a legacy address for coins in a bech32.

AG: I don’t think you should focus on that yet. If you understood that, what is the advantage of doing an atomic swap from a privacy perspective?

O: The amount and timing correlation?

AG: Those are both valid points but we will get onto them more when we are trying to achieve privacy. There is a more fundamental reason why a basic atomic swap as I just described is poor for privacy.

O: The peers would need to communicate the secret out of band?

AG: They wouldn’t need to communicate the secret out of band because the whole idea is that when one of them broadcasts a transaction with the hash preimage then it is on the blockchain and the other one can simply scan the blockchain to see it. It is actually the case that in practice you would communicate the secret out of band but that is not the principal issue with an atomic swap.

BM: Everybody sees your hash preimage and can correlate them across chains.

AG: That would be the answer. You can generalize this point to just custom scripts generally. When you use some kind of exotic script and not a simple pay-to-public-key-hash or the various equivalents. Even multisig, you are revealing something about your transactions. This is the ultimate extreme of that problem. Here if we simply do a trade of one coin for another using the hash preimage, some string like “Hello”, obviously you would use a more secure string than that as a hash preimage, then that hash preimage has to appear in both of the scriptSigs of the transactions that you are using to claim the coins. By definition it has to if we do it in this simple way. That unambiguously links those two transactions on one chain or on both chains. That is terrible for privacy.
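
A minimal sketch of the linkage just described, assuming a toy transaction format in which the revealed preimage is visible in each claiming transaction: anyone scanning both chains can pair up the two claims because they expose the same secret.

```python
# Toy "chains": each entry stands for a claiming transaction and the preimage it revealed.
btc_chain = [{"txid": "btc-claim", "preimage": b"Hello"},
             {"txid": "btc-unrelated", "preimage": None}]
ltc_chain = [{"txid": "ltc-claim", "preimage": b"Hello"}]

def link_claims(chain_a, chain_b):
    """Return (txid_a, txid_b) pairs that published the same hash preimage."""
    revealed = {tx["preimage"]: tx["txid"] for tx in chain_a if tx["preimage"] is not None}
    return [(revealed[tx["preimage"]], tx["txid"])
            for tx in chain_b if tx["preimage"] in revealed]

print(link_claims(btc_chain, ltc_chain))  # [('btc-claim', 'ltc-claim')]
```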

MF: We will go onto adaptor signatures, PTLCs and that kind of stuff later.

Alex Bosworth talk on submarine swaps at London Bitcoin Devs, 2019

https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2019-07-03-alex-bosworth-submarine-swaps/

MF: This is Alex Bosworth’s talk at London Bitcoin Devs on submarine swaps in 2019. This is introducing a new concept where rather than swapping currencies on different chains you are swapping Bitcoin onchain for Bitcoin on the Lightning Network. This is useful in the context of submarine swaps and wanting to change your inbound, outbound capacity by making onchain transactions. nothingmuch says it makes more sense to do CoinSwap first, perhaps it does but I will stick with this as we are following the timeline. I think the discussion on submarine swaps was happening a year or two ago, there wasn’t the discussion on CoinSwap for privacy.

AG: I totally disagree. I spent a lot of time in 2017 writing a library to do it and we used it. CoinSwap was always a very minor topic. The Lightning white paper came out in early 2016, it took a long time to get the ball rolling with Lightning. During that period there were a few of us, you could see people occasionally getting excited about CoinSwap as a privacy idea but it was always difficult to get it off the ground. Perhaps we can discuss why that was at some point. I wouldn’t say CoinSwap was a thing that came after submarine swaps.

nothingmuch: The main innovation in CoinSwap was the observation that you could do a cooperative multipath spend and keep the information offchain. I believe that was the main insight that these kinds of things could happen offchain. You could have a binding set of transactions that makes assurances to both parties without ever making it to the blockchain so long as both parties cooperate. By having an incentive to cooperate, in this case fees, that is how you gain privacy. The hash preimage is never revealed if both parties actually follow through with the protocol.

AG: I am not sure about the multipath aspect. There has been discussion about that at various points. Almost everything else I totally agree with.

nothingmuch: I shouldn’t have used that word because multipath is overloaded. I meant the different contingencies that may arise.

MF: It felt like there was a lot more conversation going on around submarine swaps than CoinSwap. The privacy wiki has a small section on it. But when I was looking at resources on this I was seeing very few resources on CoinSwap. Perhaps it was all being done behind closed doors.

AG: The public’s imagination was captured principally by the concept of an atomic swap, especially in the middle of the craze we experienced from 2016 to 2018. Partly because of all the altcoins and everybody got very excited about how you could trade trustlessly. Lightning exploded in some sense amongst a certain group of people between 2017 and 2018 when it went live on mainnet. Alex being the fountain of incredible ideas that he is, pushed this submarine swap idea. It is just another variant but it is a more complex variant of the things we are discussing here. It might make more sense to go atomic swap, CoinSwap and then talk about the more exotic things.

CoinSwapCS (2017)

https://github.com/AdamISZ/CoinSwapCS

AG: I wrote code and built it in 2017. That repo is probably more interesting for the Issues list than for the code. I went through Greg Maxwell’s post in detail and it took me like 2 days before I eventually realized that there was an error in what he wrote. That is only important in that it let me write down what the protocol should be. You have this on your diagram. Greg’s basic idea was to have an atomic swap, both parties have an exotic script that I have already described, pay to a hash or to a timelock. Then have this overlay which is what nothingmuch was describing. This is a very important principle, we are seeing it in Taproot, all kinds of contracting ideas. If the two parties cooperate then the conditions of the contract do not need to be exposed to the world. Instead you overlay on top of the contract completion a simple direct payout from a 2-of-2 multisig to whatever destination. On the meetup diagram it is the transactions at the top, the TX-4 and TX-5 that pay from the 2-of-2 directly to Carol’s keys and directly to Alice’s keys. CoinSwapCS was just me thinking I can actually concretely make a code example of how you could have CoinSwap servers and clients. This is perhaps something that we will get onto with Chris Belcher’s recent description. We talked about timeouts and how it depends on the clock of the blockchain; the nature of it is that it is a two step protocol. There is the initial funding phase and then when that is committed you can build these contracts. You need to commit into shared control. In that specific version of the protocol Alice pays into a 2-of-2 with Alice and Carol. Carol also pays into a 2-of-2 with Alice and Carol after of course having pre-agreed transactions that spend out of it just like many of these contracting systems. I coded that up where I had Carol as a server and Alice would be querying Carol and saying “Can I do a CoinSwap for this amount?” They arrange the transactions and they set it up. It is a two phase thing. Fundamentally it is more interactive than something like a Coinjoin where you are just preparing and arranging a single transaction or more importantly a single phase. As I explained in CoinjoinXT it could be multiple transactions. It is still a single negotiation phase in Coinjoin. With CoinSwap it is a lot more powerful of a technique because what you end up with is histories not being merged but histories being completely disconnected. The trade-off is you have cross block interactivity. Part of that cross block interactivity is you have to trust the blockchain not to re-org which means you have to wait not just ten minutes but quite a long time between these phases of interactivity. One of my blog posts nearly three years ago was on how CoinSwaps work, the different types of them. With later blog posts I talked about the Schnorr variants and so on. It is all about that basic concept that you have an atomic swap, because you are using multisigs you can overlay it with a direct spend out to the two parties so that what appears onchain is not a hash or anything, it just looks like an ordinary payment. But it is multisig and this is something that Chris Belcher has tried to address in his write up.
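
As a rough sketch of the structure being described, assuming the TX-0/TX-1/TX-4/TX-5 labels from the meetup diagram and hypothetical names for the pre-signed backouts, the transaction set looks something like this; only the funding and cooperative payout transactions hit the chain in the happy path.

```python
# Illustrative only: labels and fields are assumptions based on the description above.
coinswap_2017_style = {
    "TX-0": {"from": "Alice", "to": "2-of-2 (Alice, Carol)", "phase": "funding"},
    "TX-1": {"from": "Carol", "to": "2-of-2 (Alice, Carol)", "phase": "funding"},
    # Pre-agreed backout ("contract") transactions, only broadcast on non-cooperation.
    "backout-A": {"spends": "TX-0", "script": "hash preimage OR timelock", "phase": "contract"},
    "backout-C": {"spends": "TX-1", "script": "hash preimage OR timelock", "phase": "contract"},
    # Cooperative overlay: plain-looking payouts, no exotic script ever revealed on chain.
    "TX-4": {"spends": "TX-0", "to": "Carol's key", "phase": "cooperative payout"},
    "TX-5": {"spends": "TX-1", "to": "Alice's key", "phase": "cooperative payout"},
}

happy_path = [name for name, tx in coinswap_2017_style.items()
              if tx["phase"] in ("funding", "cooperative payout")]
print(happy_path)  # ['TX-0', 'TX-1', 'TX-4', 'TX-5']
```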

MF: Some of the issues to tee up the discussion of Chris’ proposal. There are a few interesting things here. One is this tweak, this is the bug you found in Greg’s initial design?

AG: Yeah that was me explaining what the problem was with the original post. It is all in the weeds, I wouldn’t worry about it, it is details.

MF: Your blog post, you talk about some problems that I’m assuming Chris would have had to address as well. One of them was malleability. Is that an issue we should discuss?

AG: Not really because that was solved with SegWit. This was around the time of activation so we were trying to figure out whether we needed to address non-SegWit or not. Much like Lightning.

MF: It is exactly the same concept.

AG: Yeah.

Design for a CoinSwap implementation (Chris Belcher, 2020)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html

MF: This is Chris’ mailing list post from May of this year. There’s also the gist with more details on the design. How does this proposal differ from your work in 2017, Adam?

AG: It wasn’t just me, many others were enthusiastic back then. How is this different to what goes before? It isn’t really. It is a fuller fleshing out of the key ideas. There are a couple of things that he has added into the mix. It is a long post and it has several ideas. Using ECDSA two party: what this means is that we need to use multisig but we don’t really want to use ordinary multisig. The thinking behind this is subtle. When you do a Coinjoin in the current form like Wasabi, Joinmarket, Samourai etc you are doing a completely non-steganographic protocol. In other words you are not hiding that you are doing the protocol which means that the anonymity set of the participants in this privacy protocol is transparently the set of participants at least at the pseudonym level that went in. The specific set of addresses that went into the protocol. When you try to move to a more steganographic hiding protocol, with something like CoinSwap specifically you are hoping to not make it obvious that you have done that. As has been discussed at great length in the context of Payjoin and other things there is a huge benefit if we can achieve the goal of having any one of these protocols be steganographic because it could share a huge anonymity set rather than a limited one. If you want to achieve that goal with CoinSwap the first step is to use an overlay contracting system such that the outputs in the cooperative case at least are not exposing things like hash preimages or other custom scripts. At the least you want the outputs to be let’s say 2-of-2 multisig. The reason he is mentioning and expounding on ECDSA two party is that he is talking specifically about the scheme of Yehuda Lindell where you use somewhat exotic cryptography such as the Paillier cryptosystem in order to achieve the goal of a 2-of-2 multisig inside a single ECDSA key. If you achieve that goal then you are going to create such a CoinSwap protocol in which all of the transactions at least at this level look like they use existing single ECDSA keys, which is basically all the keys we have today apart from a few multisigs here and there.
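
As a toy illustration only (plain modular arithmetic rather than a real elliptic curve, and nothing like Lindell's actual signing protocol), the key-sharing idea is that the private key is the product of two shares, so the resulting public key is indistinguishable from any ordinary single key while neither party ever holds the whole private key.

```python
import random

# Toy discrete-log group: a Mersenne prime modulus and a small generator (illustrative).
P = 2**127 - 1
G = 5

x1 = random.randrange(2, P - 1)   # Alice's key share
x2 = random.randrange(2, P - 1)   # Bob's key share

# Alice sends G^x1; Bob raises it to x2. The joint key equals G^(x1*x2),
# i.e. it looks like one single key, but neither party knows x1*x2 on their own.
joint_pubkey = pow(pow(G, x1, P), x2, P)
assert joint_pubkey == pow(G, (x1 * x2) % (P - 1), P)
```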

BM: I guess the things that have changed since 2013, first of all we got CSV, timelocks are way better. Secondly we got Lindell’s two party ECDSA protocol and the multiparty ECDSA protocol which doesn’t use Paillier, it uses ElGamal instead which is more compatible with …curves. Both are good for this, different security assumptions. The third thing, which I’m not sure is used here, is adaptor signatures. We are going to talk about PTLCs in a minute. That hash preimage reveal is deanonymizing by itself. With adaptor signatures, which you can do with both Schnorr and ECDSA, you can reveal that preimage in a manner that only the recipient can find it.

MF: This is the ECDSA-2P section in Chris’ design.

BM: He doesn’t really talk about it at all. It uses Lindell’s proposal.

MF: This is to have a 2-of-2 multisig with only one signature going onchain before we get Schnorr. It is space savings onchain as well as the adaptor signature property of superior privacy, not having that script telling the world exactly what is going on.

BM: That’s right. The price you pay for that is more rounds of communication between the sender and receiver. There are some zero knowledge proofs that need to be communicated. I’m not sure what the count of the number of rounds is. It is at least three. For offchain payment protocols it is not that big a deal. If you look at MuSig and Schnorr you have a similar number of rounds in the protocol anyway because they have to decide upon the nonce for a group sourced Schnorr signature.

nothingmuch: A paper by Malte Moser and Rainer Bohme called “Anonymous Alone?” is an attempt to quantify what they call second generation anonymity techniques on Bitcoin. It contains an interesting bit of data, an upper bound for how many CoinSwaps could be going on. It is only an upper bound under the assumption that any pair of multisigs that co-exist, coincide in time on the blockchain and are of a similar amount is potentially a CoinSwap. That is how they quantify the anonymity set. I think that gives an intuition for all of the main ideas that Belcher proposed. This is the multiparty ECDSA stuff that makes the scripts less distinguishable. It is the introduction of routing which means that it is not necessarily a two party arrangement. And the splitting of amounts into multiple transactions so that the onchain amounts are decorrelated.

AG: Because of that paper they were making an assumption that multisig is going to be a fingerprint of CoinSwaps and this removes that fingerprint. That is one aspect?

nothingmuch: That paper tried to quantify how many users of privacy techniques are there. It addresses Coinjoins, stealth addresses and CoinSwaps. For CoinSwaps because the data is offchain all they can really do is give an upper bound how much CoinSwapping is occurring under some fairly strong assumptions. The first of these is that any CoinSwap is going to be anchored into two UTXOs that coincide in time that are either 2-of-2 or 2-of-3 and where the amounts are relatively close.

BM: There is one aspect of all this which is a pet peeve of mine. Belcher’s example has Alice dividing 15 Bitcoin into 6 plus 7 plus 2. What we need to do in order to get this is binary decompositions of value. 16 million satoshis plus 8 million satoshis, using a binary decomposition you can get any value you want. Each set of binary decompositions becomes its own anonymity set. That’s the best you can possibly do as far as the anonymity set. But if everybody is dividing their value arbitrarily down to the satoshi you have a very strong fingerprint and a very strong ability to correlate. If I ask the question “How many pairs of outputs on the blockchain can I sum to get a third output on the blockchain?”, maybe those two are engaged in a CoinSwap. The ability to do that is still extremely strong.
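
A minimal sketch of the decomposition BM is describing, assuming an illustrative minimum lot of 8 million satoshis: amounts are split into power-of-two multiples of the lot plus a residual, so many users end up producing the same standard output sizes.

```python
MIN_LOT = 8_000_000  # sats; an illustrative minimum lot size, not a proposal

def binary_lots(amount_sats, min_lot=MIN_LOT):
    """Split an amount into power-of-two multiples of min_lot plus a residual."""
    lots = []
    units, residual = divmod(amount_sats, min_lot)
    bit = 0
    while units:
        if units & 1:
            lots.append(min_lot << bit)  # 8M, 16M, 32M, 64M ... sats
        units >>= 1
        bit += 1
    return lots, residual

lots, residual = binary_lots(1_500_000_000)  # 15 BTC
print(lots, residual)  # six power-of-two lots summing to 1,496,000,000 plus a 4,000,000 residual
```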

AG: That is a very good point. I would make the argument in the Joinmarket scenario it is not necessarily as easy as you might think because you tend to get a lot of false positives. But I think your fundamental point is almost certainly correct.

nothingmuch: This is a bit of a digression. I have been thinking about what I have been calling preferred value series. It generalizes to more than just binary, to any minimum Hamming weight amount; it could be decimal as well, decimal with a preferred value series. The intuition there is that if you have a smaller set of amounts that you can combine together to create arbitrary amounts then the likelihood that those amounts will have an anonymity set is much larger. The issue with that is that you need a UTXO for every significant bit in your amount; let’s say the average Hamming weight of an amount is 5 significant digits or something, then you create a substantial overhead. We haven’t shared anything yet because as a group we haven’t come to consensus about that. For the unrelated stuff about Wabisabi and Coinjoins this is definitely the approach I have in mind and I am happy to share. Finally there is a way to make it practical without necessarily having that problem of a 5x, 10x UTXO bloat factor.

BM: There is definitely a bloat factor there. The way this is handled in traditional markets is if you try to buy certain kinds of stock you can only buy stocks in lots of hundred shares. On the foreign exchange markets in lots of hundred thousand dollars. I envision that if a trading market were to arise in this manner there would be a minimum lot size significantly larger than the dust limit. You are obviously not going to go down to satoshi because you can’t make a single satoshi output because you can’t spend it and it is below the dust limit. Something like 8 million satoshis, 16 million satoshis would probably end up being the minimum lot size and then you do binary decomposition above that.

nothingmuch: The transaction structure I have in mind, I’m only speaking for myself here, is the Coinjoins where you have different mixing pools for those Hamming weight amounts, every power of two, that occur as parts of larger Coinjoin transactions. You can imagine a perfect Coinjoin that has a nice switching network topology and there would be representative inputs and outputs of every such class. If you have a Coinjoin transaction that is constructed in this way then in theory this gives you a blinding factor for all of the other arbitrary amounts. This is based on the Knapsack paper’s definitions of a non-derived sub transaction, the idea of how many different ways there are of combining, assuming that all of the inputs and outputs are self spends inside a Coinjoin. A Payjoin breaks this heuristic. If you do the sudoku attack where you constrain things and find partitions of the transaction that balance to zero the idea there is that you always have ambiguity in that case. Every possible amount could be an arbitrary composition of those Coinjoin elements that have that power of two. The reason I’m bringing it up is that I think it is really relevant for this case because a big problem with CoinSwap coordination generally is that if two users are swapping history then one of them in theory gets a tainted history. Unless there is a common understanding that taint is no longer a valid assumption then there may be a disincentive for people to offer swaps because if you don’t know the provenance of the coin you are swapping against maybe you lose fungibility in that transaction. In this case if there are Coinjoins that allow the creation of fungible values of arbitrary amounts and you can do a CoinSwap where one side of the CoinSwap is an output from such a Coinjoin, I think that creates a very complementary onchain scenario. Users can swap arbitrary amounts without having a very strongly identifiable footprint.

BM: That’s absolutely right. That is what concerns me greatly about these CoinSwap proposals. It is literally a swap. I am going to give you my UTXO, you are going to give me yours. Some of the other proposals, statechains, have the same property where I am going to get a new UTXO. If there is a pool of UTXOs out there and one of them came from a darknet and is blacklisted somehow, somebody is going to get that UTXO, they are going to get the full UTXO and it is going to be a bad one. This is a strong disincentive to use such a system. Whereas with a Coinjoin I have 100 inputs and 100 outputs and I don’t know which is which and so each one is tainted 1 percent. With the CoinSwap proposal one UTXO that somebody is going to get and it might be you, is tainted at the 100 percent level and the others are not tainted at all.
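
As a toy worked example of the arithmetic just described, assuming a naive uniform-mixing view of taint: one tainted input among 100 in a Coinjoin spreads to roughly 1 percent per output, whereas in a swap one party inherits the whole thing.

```python
# Illustrative numbers only; "taint" itself is an exchange heuristic, not a protocol property.
coinjoin_inputs = 100
tainted_inputs = 1
taint_per_coinjoin_output = tainted_inputs / coinjoin_inputs   # 0.01, about 1 percent each

taint_for_unlucky_swapper = 1.0   # whoever receives the tainted UTXO in a CoinSwap
taint_for_everyone_else = 0.0

print(taint_per_coinjoin_output, taint_for_unlucky_swapper, taint_for_everyone_else)
```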

MF: I had exactly the same thought. Let’s say you’ve got completely clean Bitcoin straight from being mined versus Bitcoin that has been through an identified drug mixer you are not going to be too happy swapping those two. Unless as soon as you get your Bitcoin you do loads of transactions to yourself so that you beat the taint score or the taint measure that people like exchanges use.

nothingmuch: I don’t think that is robust because it is pretty easy to do modular decomposition of graphs. You can identify a strongly connected component that originates from a small set of outputs. Even if it is a larger graph it is still equivalent to a single transaction in that case. Unless you are actively mixing with other users’ history I don’t think that approach buys you much in terms of privacy. It may be a very effective way to get past simple heuristics of exchanges but I would caution against actually thinking about that as a privacy enhancing technique.

MF: I am talking about the CoinSwap being the privacy technique. But let’s say you got the short straw, you’ve gone into the CoinSwap and you’ve got these “tainted” Bitcoin. You then need to get them untainted if you want to go into an exchange or you want to use a merchant that has restrictions on which Bitcoin they will accept.

nothingmuch: That process of untainting, it works today just because the heuristics the exchanges use are very crude. They are only doing this as a cost saving measure to reduce liability anyway. If this becomes more common then all they have to do is slightly ramp up the computational difficulty of identifying these graphs of self spends or whatever. It is not a difficult problem to identify those because the subgraphs are disjoint. If there are no transactions connecting it to the rest of the transaction graph it is fairly easy to isolate.

MF: On the YouTube shinobius monk says “I demand an immediate in depth comment from everyone on how to deal with the timing issues of successive CoinSwaps hitting chain in short time period.” Filling up blocks with too many CoinSwaps?

I think what he says is that CoinSwaps can be identified if they are hitting the chain in a very short period and maybe the amounts can be correlated. How do you know they are CoinSwaps? I don’t know, maybe there are ways to guess.

AG: Is he talking about timing correlation? I think this relates back to something I was thinking we should definitely discuss which is what are the practicality issues around CoinSwap? I think people should take note of an interesting historical fact that in late 2013 Greg Maxwell made two very similar long Bitcointalk posts, one called Coinjoin and one called CoinSwap. I think Coinjoin was implemented in some form or another within one month of the Coinjoin post but when I looked at it in early 2017, four years later, nobody had implemented CoinSwap at all.

It is because there was a Coinjoin bounty.

AG: I don’t think so. I think fundamentally CoinSwap is a more powerful protocol but there are some practicality issues that we should start discussing assuming that everyone on the call now understands what a CoinSwap is.

MF: shinobius also says “You arguably don’t get atomicity in the sense of state guarantees across two different proof of work chains.” I’m assuming that it is due to re-orgs, I think that is his point there.

AG: It is definitely worse because you are dealing with two clocks, it is significantly more messy. It is kind of the same basic problem I guess.

MF: There are a lot more re-orgs on other chains with less proof of work. It depends on how s*** your s***coin is. Josh Smith brings up the blockchain space issue which is mitigated by having one signature onchain with ECDSA-2P or Schnorr.

AG: It depends what you are comparing it to. Think about it in comparison to Coinjoin as a privacy technology. It is fundamentally a lot more fungibility per byte onchain. It is difficult to quantify but it must be partly because of this steganographic thing, what is your anonymity set if you do a 50 party Coinjoin? You could say my anonymity set is 50. But what is your anonymity set if you do a CoinSwap properly such that nobody even knows it is a CoinSwap. It is either a lot or you can’t define it. It is pretty efficient which is one of the main reasons Chris Belcher, myself and many other people were always very interested in CoinSwap even if we thought it was quite difficult to implement.

nothingmuch: Furthermore it has a positive sum type interaction. When you CoinSwap you are also helping the privacy of every non-CoinSwapped output that has the same script type.

AG: We discussed in Payjoin the same dynamic.

MF: Another question from shinobius. “Belcher’s idea of routing amounts through successive CoinSwaps and the impracticality of staggering the time between individual transactions in a chain of CoinSwaps for intervals that are too long.” This was a clarification for the previous question. Having too many successive CoinSwaps and staggering the time if there is an overlap between one CoinSwap that you are engaging in and the next CoinSwap.

Routing CoinSwaps

https://gist.github.com/chris-belcher/9144bd57a91c194e332fb5ca371d0964#routing-coinswaps-to-avoid-a-single-points-of-trust

MF: The problem that I understand routed CoinSwaps is addressing is if I do a CoinSwap with Adam and Adam is a malicious party, a chain surveillance company, then I am telling Adam what my history is which isn’t ideal from a privacy perspective. But if you have this circle where everyone is doing CoinSwaps with each other and there are multiple parties you now have to worry about all of them colluding rather than one malicious party. How does this routed CoinSwap work?

I think we can believe that assumption is correct. It is more important to explore how that assumption changes other properties of CoinSwaps if it changes them at all. There is a problem with two party CoinSwaps because the other person learns the history. It might be a more interesting question to explore accepting the solution works. Does it ruin the user experience? Does it do something else other than merely fixing the problem? Is it harder to implement?

AG: I haven’t thought about this much. I think Chris talked about some of the issues on the CoinSwapCS repo a couple of years ago. I remember him talking about this and multi-transaction. As for this addressing a privacy problem, it seems to me we are looking at exactly what we see on the Lightning Network. It is not exactly the same but it is almost exactly the same mechanic. We are going to have staggered timeouts to make sure that no party is ever in a position where they could lose on both ends of the trade. It addresses a privacy problem in that same way where every participant in the route doesn’t know for sure the originator.

nothingmuch: I think it would be useful to go over the privacy issues with Lightning first as it stands today with HTLCs. If we assume that the routing is only a problem for CoinSwap makers much like the public Lightning graph is not considered to be a privacy issue. When you do a routed payment you are making a HTLC with the first party in the route and giving them an incentive to enter a HTLC with the next party. But along the entire path even though anonymity is guaranteed everybody shares the same secret. We mentioned it earlier, the PTLCs as a solution to this, also anonymous multihop locks and a bunch of other approaches. If I’m not mistaken Belcher doesn’t go into detail about how routing is supposed to work. The implication is that it uses adaptor signatures.

AG: Anonymous multihop locks is actually the same thing. It uses points to achieve the security proof to avoid the wormhole attack but of course it has other excellent properties.

nothingmuch: I am not 100 percent on the details to explain how you can use adaptor signatures to create something that is equivalent to a whole route sharing the same preimage without that secret actually being shared by all parties.

AG: You are just talking about the general idea of point timelocked contracts? In the anonymous multihop locks paper they explain how you can have a tweak to the nonce essentially at every hop such that every party has their own single secret and it is added along the path so that everyone gets the same guarantee as they would if everyone just used one secret. But each person’s secret is different using the linearity of Schnorr. You can think of it as using the linearity of Schnorr to make it so that we are not all using the same hash preimage, we are all using something like a hash preimage that is locked together. But each of our numbers is different. When we talk about adaptor signatures, are we talking about using a CoinSwap of the type that was initially proposed by Andrew Poelstra? I wrote it up in the blog post “Flipping the scriptless script on Schnorr”. We are talking about using an adaptor signature to effect a CoinSwap so that we don’t have to worry about the hash preimages all being onchain where someone can connect them. Because the equivalent of a hash preimage is actually embedded inside of the nonce in this adaptor signature it means that nobody sees it anyway. It means we don’t need this overlay protocol, we can make a much simpler CoinSwap. Are we doing that kind of thing across multiple hops but we are using ECDSA adaptor signatures? Is that what we are doing?
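
A toy arithmetic sketch of that linearity, assuming integers modulo a made-up modulus as a stand-in for Schnorr scalars (the real anonymous multihop locks construction uses curve points and adaptor signatures): each hop's lock differs by a per-hop tweak, and settlement walks backwards by subtracting tweaks, so no common value ever links the hops.

```python
import random

N = 2**61 - 1  # toy modulus, not a real curve order

base_secret = random.randrange(N)                  # settled by the final recipient
tweaks = [random.randrange(N) for _ in range(3)]   # one per hop, chosen by the sender

# Each hop's lock is the base secret plus the cumulative tweaks, so every hop sees a different value.
locks, acc = [], base_secret
for t in tweaks:
    acc = (acc + t) % N
    locks.append(acc)

# Settlement flows backwards: each hop recovers the previous lock by subtracting its own tweak.
revealed = locks[-1]
for i in range(len(tweaks) - 1, 0, -1):
    revealed = (revealed - tweaks[i]) % N
    assert revealed == locks[i - 1]
assert (revealed - tweaks[0]) % N == base_secret
```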

BM: I don’t know that adaptor signatures are required or used here. My understanding of Belcher’s proposal was that Alice who is doing the CoinSwap is actually mediating multiple CoinSwaps at the same time. Two of the people she is mediating, Bob and Charlie, Bob doesn’t know Charlie exists. It is centrally routed around Alice as opposed to the linear chain that happens on Lightning.

AG: There are two parts. There is a multi transaction part where Alice parallelizes. Then there is a routing part. He talks about them in two separate sections. And he talks about combining multi transaction and routing together. There is routing CoinSwaps to avoid a single point of trust and then you’ve got combining multi-transaction with routing. First it is multi transaction CoinSwaps to avoid amount correlation. That is just Alice and Bob making multiple CoinSwaps between those two parties. Then there is routed CoinSwaps, where it shows an actual chain Alice, Bob, Charlie, Dennis. Finally it talks about combining multi-transaction with routing. Between Alice and Bob they have multi-transaction and between Bob and Charlie they have multi-transaction and so on. It is both effects that are being talked about. In terms of adaptor signatures, he is only talking about adaptor signatures in this document in connection with Succinct Atomic Swaps. He is not including that in the proposal. You would have different hash preimages in each of the hops along a route here. That is different to Lightning.

MF: It is not a route is it? It is a circle from Alice to Bob to Charlie and back to Alice. It is not like a Lightning payment where I am trying to get to a destination and there are a bunch of intermediaries helping me get it to that destination. It is almost like a ring signature?

AG: I disagree. I don’t think it is fundamental that Alice is sending to Alice in that example. Think of it as Alice’s source and Alice’s destination. If her destination happens to be something that she owns then it is but it is not fundamental to what is going on.

MF: A question on the YouTube from shinobius. “Isn’t the idea to just chain the transactions together without signing/submitting the first “funding” for the routed CoinSwap? That’s what I inferred from the proposal”

AG: I think you guys must be right. The whole thing must be atomic. We are still in the situation where only in the cooperative case would we have proper privacy. Perhaps that reveals that we have extra fragility if we have multiple participants. You already have the concern when you do a single CoinSwap between two participants that if the other guy doesn’t respond in time you are going to have to publish onto the blockchain. It is going to be obvious you did a CoinSwap and you maybe didn’t want that. I am worried that if you have a lot of participants there is the potential for at least one party to drop out. Would that just affect that hop? If you had four hops and the guy who is third along drops out and doesn’t respond then maybe two of the four hops end up broadcasting the hash to chain.

nothingmuch: The parties could in that case choose to still settle cooperatively but the entire route may learn. Suppose Alice sends to Bob who sends to Carol. Carol sends the final payment and then Carol defects and reveals the hashlock. This is why I mistakenly assumed earlier that Chris was relying on adaptor signatures. In that case Bob will still learn where Alice’s funds went. Whereas in the optimal case Bob has no idea where the final payment is sent. He only knows that he is sending to Carol.

AG: Mentioning adaptor signatures makes a lot of sense because it does take away that problem. Then there is the question of whether in an ECDSA context, given recent work by Lloyd Fournier, those kinds of single signer ECDSA things might work. We can do adaptor signatures with ECDSA, that Lindell thing?

Ruben Somsen (RS): I believe that is possible. Lloyd’s work is specific to single signer ECDSA. You wouldn’t have the privacy that Belcher wants. Generally speaking my intuition would be if you have a multiparty protocol it will either be like Lightning where you have A depends on B depends on C. Or you are going to have the protocol that you wrote up, Multiparty S6. I think you’d need to use one of the two in order to make it work. Belcher writes up a bunch of variations on how to do CoinSwaps.

AG: It is either going to be a chain or it is going to be a big network all connected. That S6 thing, it is kind of a chain anyway.

RS: That would be a way to do adaptor signatures.

AG: The point me and nothingmuch were arriving at is if you are going to do this kind of onchain chain of swaps, Lightning uses offchain chains of HTLCs, if there is one link in the chain that defects it is going to be revealing information onchain. You get that with two parties but the risk is much lower. If you have six parties the risk gets higher and higher. If you were to use adaptors then it would avoid that problem because you don’t have the defection problem in the adaptor signature CoinSwap because the preimage is embedded in the nonce.

RS: I agree with that. Any third party wouldn’t be able to figure out what the preimage is.

AG: If he is going to use ECDSA-2P which the proposal is, this makes a lot of sense because it shares the anonymity set, then maybe wedge in adaptor signatures even if it is a weird ECDSA variant of adaptor signatures if that is possible.

RS: I believe it is compatible with 2P-ECDSA.

BM: It definitely is.

RS: Very generally speaking I think it makes sense to use adaptor signatures. Even in the non-cooperative case you are going to end up with less block space usage so why not if it is possible?

AG: We talked about routing. What about multi-transaction?

I am wondering if I completely misread the entire section on routing through multiple transactions in Belcher’s proposal. What I took away from the write up on GitHub was that you effectively have the taker coordinating between multiple makers with these intermediary UTXOs that are two party ECDSA addresses. You coordinate that route, get all the things to the end of it signed and then you sign and fund it, to compare it to Lightning. That is binary, it succeeds or fails, with 2P-ECDSA preventing anything conflicting from being signed. You could even enforce staggering delays of hitting the chain with the nLocktime field.

BM: Doesn’t every party have to create a funding transaction? The original proposal had four total transactions. As I understand it Belcher’s proposal has two transactions total per swap. One transaction per party. I think that means in the routing network there is one funding transaction per entity and then you have to close it out. This whole routing network is not offchain like Lightning is. It is all onchain.

nothingmuch: I believe that is correct. In this regard the difference between Lightning goes away. In Lightning the preimages are shared peer-to-peer but the route is derived by the sender. Here because everybody is.. broadcast network, the blockchain itself, the parties along the route don’t have to communicate with each other directly like they do in Lightning. Everything is mediated by Alice.

AG: Alice must be choosing the set of participants.

nothingmuch: Yes. That is also true of Lightning.

AG: Source routing.

nothingmuch: The difference is in this scenario because there isn’t the offchain aspect, Bob and Carol are not in some sort of long term arrangement between them in a payment channel. The only situation where they need to coordinate with each other is in case the payment doesn’t go through. They have the blockchain for that because it is onchain.

BM: The original proposal had four transactions. Alice swaps with Bob. Both Alice and Bob have to create a funding transaction. At the end of the swap they have an exit transaction where they retrieve the coins to their wallet. It is four total transactions. Using the notion that Chris Belcher talked about, swapping private keys where it is effectively a 2-of-2 multisig, at the end of the process I am going to give you one of the two private keys and you already have the other one. Now you have unilateral control and it doesn’t need to go onchain.

AG: I think you are talking about Succinct Atomic Swaps which is Ruben Somsen’s thing.

CoinSwap comparison with Succinct Atomic Swaps

https://gist.github.com/RubenSomsen/8853a66a64825716f51b409be528355f

RS: There are two things about the Succinct Atomic Swap. The thing you are talking about now is possible with regular atomic swaps as well. You do the swap, you don’t broadcast the final transaction, you just give each other your key of the funding transaction. The problem you have then is that there is still this refund transaction that becomes valid after some period of time. But it allows you to direct the outputs towards whatever you want before that timelock expires.

BM: You are still going to have to exit onchain so we are still talking four total transactions. You can delay the final exit but I cannot reuse that commitment transaction for another swap. If Alice and Bob set up the two commitment transactions that have to be executed or cancelled and if I want to swap with Charlie I need to create a new commitment transaction involving Charlie’s key.

RS: The point there is that it is another funding transaction. You go from funding transaction to funding transaction and you skip the settlement phase because you have full control over the UTXO and can start funding again. As long as you react before that timelock expires, before the refund transaction becomes valid you can direct the funds anywhere you want including another swap. That is the basic premise, if you have a server that does a lot of swapping or you want to swap a bunch of times then you do not have two transactions, you only have one transaction per swap. Succinct Atomic Swaps are a separate thing where you don’t have the timelock on one of the two outputs that you are swapping. That gives you some additional benefits because then that one doesn’t expire.

BM: Worst case scenario there are four transactions per swap, best case scenario two transactions per atomic swap.

RS: The worst case is that your swap doesn’t go through. If your swap doesn’t go through it is four transactions.

BM: I was really hoping this was doing swaps offchain.

CoinSwap comparison with Lightning

AG: That brings up the interesting question which is raised in the document, which is how does this compare with Lightning? Shouldn’t we just use Lightning? He talks about several angles to that.

BM: Let’s assume there are multiple Lightning Networks here, one on Bitcoin and one on Litecoin and I can swap between those two. Isn’t that far superior? I don’t think anyone has really done that except in small experiments.

AG: This isn’t attempting to replace cross blockchain swapping or trading. It is attempting to provide a new functionality for privacy in Bitcoin. The question of why don’t we just use Lightning I think is a good one but it is addressed specifically in a whole section in the document. His first advocacy for this approach really surprised me. It wasn’t the first one I would think of. “CoinSwap can be adopted unilaterally and is onchain.” He is pointing out that in a world where your friendly local Bitcoin exchange or similar company decides not to adopt Lightning and you are sat there twiddling your thumbs getting annoyed with them, this can be done today and doesn’t require anyone’s permission. If you can make payments via either a CoinSwap or some complex routed CoinSwap then you don’t need the permission of the recipient of the payment. It is an interesting point to lead with. He makes a couple of other points about liquidity and size. When I was looking at this in 2017 and writing code for it I was thinking is it worth looking into this as a separate technology considering Lightning in 2017 was being developed actively and was almost ready for prime time? It may only have a small role but it might have a role simply because it might be able to support larger sizes. It could be a more heavy protocol. It doesn’t have the same advantages as an actual offchain protocol but it creates a privacy effect similar to what Lightning does and perhaps for a very large payment. He also talks about sybil resistance and he is talking about fidelity bonds there, something he worked on recently. We recently merged that code into Joinmarket although it isn’t active yet. The idea of fighting sybil attacks with fidelity bonds is interesting. I haven’t thought that through in the context of CoinSwap, how that is different. He makes two or three arguments as to why it is still an interesting thing even though Lightning exists.

BM: I don’t buy his argument that onchain is good. Onchain payments are dead. We should not be using them, we should be using Lightning everywhere. It is going to take time for the market to adopt that. The argument that somebody is still using BitPay from 2014 is not a good argument.

AG: On onchain consumer payments being dead, I would almost but not entirely agree with you, and they are not a particularly interesting thing. They exist in small pockets, that is fine. Bitcoin is not just consumer payments.

RS: I think it is fundamentally different. With Lightning you have a channel and you still have to open the channel. When you are opening it it won’t be clear until after you’ve closed the channel how much of that money is going to be yours. But it is clear that is a UTXO you are controlling. Before you open the Lightning channel you still need some kind of privacy. At the very least swapping into a Lightning channel or something along those lines seems useful to me. I think it is orthogonal in a sense and there does seem to be a use case even if Lightning becomes amazing and perfect and everybody uses only Lightning. I think it still helps with privacy in the sense that you still have some onchain footprint.

nothingmuch: I would go further. It is not just orthogonal it is also composable. This is exactly why I thought it is better to wait to bring up submarine swaps because that is exactly the use case. You are still swapping on Bitcoin but you are swapping an onchain and an offchain balance.

MF: To be clear, Chris Belcher’s design is fully onchain. Everything is hitting the chain and the part of the document which Adam was discussing is comparing it to Lightning. Chris is saying there is not enough liquidity on Lightning. If you want to do large amounts Lightning is not up to the job and perhaps never will be for large amounts.

BM: That is a very valid argument. Lightning also forces you to put your keys online to a certain extent. It may be very useful for retail payments but is not going to be useful for high volume traders. Does this give you better properties as far as keeping your keys offline than Lightning? I think it does.

AG: It is a sliding scale really. It does yeah. You could do this entirely with a hardware wallet for example.

BM: Not necessarily. The 2P-ECDSA has multiple rounds of communication. That means you have to put your keys online too.

nothingmuch: These differences are not fundamental though. It is a spectrum of design choices and implementation choices and a function of the adoption. I think it is difficult to speculate at our current vantage point in the present whether Lightning is going to have liquidity issues in two, three years time.

MF: If we are to move briefly onto submarine swaps, when you are swapping Bitcoin onchain for Bitcoin on Lightning then the liquidity issue really comes into play because you could expect it to be a large amount onchain and a smaller amount on Lightning yet if you are doing it for privacy reasons that amount needs to be the same. Amounts that make sense economically to be transferred onchain are not necessarily going to be the same amounts that are being transferred on Lightning.

nothingmuch: What I can see in my mind in an ideal future is people having access to Coinjoins, to CoinSwaps, to Lightning with public as well as private channels for various different purposes. Depending on your threat model per payment you can choose your desired level of finality or desired level of hot wallet security, desired level of privacy etc. None of these trade-offs are fundamental to any of these technologies. In principle it is more of a question of implementation. Right now it is very impractical to say you have fluidity with these things. We can envisage a situation where you have a smart wallet that only has watching keys and manages your funds. If you are going to make a very small payment it may make it via Lightning. If you are going to make a large payment to a party who you don’t trust it is going to do an onchain CoinSwap to give you stronger privacy and stronger finality. It also lets you move your balance between a hot wallet and a cold wallet or something like that. The entire spectrum is in principle available. It is just a matter of actually implementing that.

MF: Ruben is discussing in the chat submarine swaps plus statechains. I don’t think we have time to go into statechains.

RS: That’s a whole other topic, maybe next time.

Practicalities of implementing CoinSwap

AG: Can we address the serious practical question: CoinSwaps are an idea, they are pretty old. What is going to be the motivation? I think we all agree that it could really improve fungibility and privacy. Whether Lightning is better than it or not is not really the issue because at the end of the day we have to do transactions onchain anyway. People might say “I don’t want to do a CoinSwap because I might get a dirty UTXO.” There is that and then it is a pain. You are going to sit around and wait 100 blocks? Nobody wants to wait 100 blocks to do anything at all.

MF: Where is that 100 blocks coming from?

AG: I just made that up but the point is you have to be safe from re-orgs whereas you don’t with an ordinary payment.

MF: In practice Bitcoin doesn’t have too many re-orgs of more than a block or two. That applies to s***coins but for Bitcoin that is not a problem.

AG: If you are paying for a house how many confirmations? It is a large amount and we are largely talking about larger amounts here. Maybe it isn’t 100 blocks, maybe that is ridiculous. Maybe it is 50. There is cross block interactivity. I am setting aside all the complexity involved with things like ECDSA-2P multisig which is a very exotic construction. What is the practical reality here? Is it going to be implemented? How are we going to get people to use it? Is a market mechanism an important part of getting people to use it? That might overcome people’s concerns on that being a dirty UTXO. If there is a market incentive to accept a “dirty” UTXO maybe it flips the whole perception. What are the actual practicalities here? Is it going to happen?

MF: That incentive question sounds the same as Joinmarket, doesn’t it?

AG: He mentions it in the document as I recall.

I think that CoinSwaps have a lot of potential synergies with Coinjoins in the sense of a way to maintain individual anonymity post Coinjoin with those UTXOs as they fragment. Eventually you could hoover them back into a Coinjoin and repeat a synergistic cycle between the two. After a Coinjoin you have to be vigilant with that otherwise you undo the gains from the Coinjoin.

RS: I believe the change outputs in a Coinjoin have amounts that aren’t the same as everybody else’s. That becomes a tainted UTXO that you can’t spend together with your Coinjoined coins. Maybe for that change amount it would make sense to do a CoinSwap or something along those lines.

In a general sense of a way to further disconnect your post Coinjoin spending activity. Even keeping things isolated trying to do things like Payjoin, eventually things are going to narrow themselves down in a long enough time horizon. CoinSwap is another postmix tool in that situation.

AG: I am not sure it really answers my question. That is an interesting area of discussion of how you can make even more powerful constructions but I am asking about practicality. Will people do this and what are the vectors that decide whether we will end up getting people doing this or not?

MF: I don’t like that as the number of parties increases the chances of it failing increase. You have the same dynamic on Lightning where the more hops you have on your route the more chance that one of those hops isn’t going to work and so your payment is going to fail. This is even worse because at least in Lightning all the intermediaries are playing the role of a routing node and most routing nodes do what they are supposed to be doing. In this sense it is creating a privacy scheme where people don’t really know what they are doing. Someone could drop out and make the CoinSwap fail. As the parties increase I can’t see this being a good scheme.

Maybe people selling tainted coins on the dark market to people with clean coins who want to patronize dark markets.

AG: Belcher has a section on Liquidity market. It is just the flip side to what if I participate in a CoinSwap, I’m a normal person doing it in a normal way, and I end up with a direct 100 percent taint to some criminal activity. It is a very valid question. The point Bob was making was when you do Coinjoins you don’t get this binary either it is totally a criminal coin or it is not. You get this 1 percent taint or 2 percent taint or whatever which is something people can stomach more easily. I am ideological about this but most people are not like me. It is a concern in practice, in reality whether we have this binary taint problem.

RS: Isn’t the goal to make it so that everybody’s coins are tainted? The heuristic needs to break. You are paying someone with a 100 percent tainted coin according to some heuristic but then it is not a 100 percent tainted coin because you are not involved with that. We need enough people to do this in order for the heuristic to break so this can no longer be a reliable way of doing business. One thing I am thinking of, more to your question, is if we have things like Payswap where you are paying a merchant and simultaneously doing a swap. If the merchant receives some tainted coin and they want to go to an exchange with those tainted coins what do you expect? Either we have this whitelist system where you only take the coins that some authority said are clean and everybody checks off the list. Or it becomes unmanageable. I hope it becomes unmanageable but I do agree with your point that it is something that people are going to think about. Do I want to swap my coins if I have a reasonably ok coin and I am going to get some really bad coin from the dark market or something? That is definitely a concern. The previous point on tainted coins and untainted coins, having two of these markets, that is a real disaster. There needs to be some flow that allows those two to intermingle in a way that is not traceable. If you really have two separate coins, clean ones and black ones, that is going to be terrible.

I think CoinSwaps on Coinjoins make it much more practical because you don’t have to worry about the amounts anymore. That is a huge win there because you are doing equal amounts anyway. On the other hand what is the downside here? The only downside I can see is that now you don’t have this cumulative effect of CoinSwap obfuscating all the transactions on the blockchain. I would argue that wouldn’t happen anyway because of wallet fingerprinting. Coinjoin is a wallet fingerprint and it still happens but only on the Coinjoin outputs. I can imagine a future where there are very small Coinjoins, very fast and this is good for 99 percent of the people. For those 1 percent who are the Julian Assanges of this world they might do a CoinSwap on those equal amounts. Everyone is happy, grandma can use Coinjoins with very few user experience drawbacks.

BM: I agree. I don’t think the answer is either or, it is both. I think the next interesting question is what is the best way to combine these two? Can I make my commitment to a CoinSwap also be the input to a Coinjoin? Or the output from a Coinjoin be an input to a CoinSwap? I think that is the next iteration of this set of ideas that combines the best of both worlds.

AG: I think an interesting model is the CoinjoinXT thing, one of the things I had with it at that time when I was trying to figure out how do we address amount correlation across multiple transactions, I realized if you have a Lightning channel open as one of the endpoints… Think of a CoinjoinXT as a multi transaction Coinjoin. But it wouldn’t have to be a Lightning channel open, it could just as easily be a CoinSwap.

BM: I guess there are three concepts here. You’ve got Coinjoins, CoinSwaps and Lightning channel opens and closes. In principle any of those three could be any of those other three. The output of a Coinjoin could be a Lightning opening, it could be a CoinSwap. I should be able to compose these things. The extent to which we are successful in creating a protocol which naturally composes those three things and picks which one is best for the circumstance and increases the anonymity set of all three while decreasing the taint of all three as well, that is the ideal case.

Which should come first? CoinSwap or Coinjoin? Which makes more sense?

BM: We already have Coinjoin and Lightning.

RS: I like the argument of Coinjoining first and then using the result in a CoinSwap. That seems pretty good in the sense that you do have some anonymity already. Whoever receives that UTXO is not going to receive a fully tainted UTXO, it is already mixed with other Coinjoins.

nothingmuch: For completeness, submarine swaps also bridge the gap. That pertains to Adam’s point about CoinjoinXT. You can effectively have a Coinjoin transaction where one of the outputs is a submarine swap that moves some of the balance into a Lightning channel just like you can have a CoinSwap transaction either funded or settled through a Coinjoin. An interesting post from the Bitcoin dev mailing list recently is about batched CoinSwaps where you have a single counterparty as a maker servicing multiple takers’ CoinSwaps simultaneously which again blurs the boundary between Coinjoins and CoinSwaps.

Succinct Atomic Swaps

https://gist.github.com/RubenSomsen/c9f0a92493e06b0e29acced61ca9f49a

RS: The main thing here is that with a regular atomic swap you set up a channel with Alice and Bob on two UTXOs. Then there is some kind of secret, one party can take the coin from the other party and simultaneously reveal their secret. That secret allows the other side to happen as well. With Succinct Atomic Swaps there are two secrets and they are both inside of one UTXO. The second UTXO is literally just a combination of those two secrets. Alice has a secret, Bob has a secret. Either the first UTXO gets refunded to Alice and then Alice’s secret is revealed or Bob takes it and Bob’s secret is revealed. That determines who gets the second UTXO. Either they both get refunded or they do the swap. This creates this very asymmetric shape where one side has all the transaction complexity, where ideally if everyone cooperates it is just a single transaction. If people don’t cooperate it ends up being roughly three transactions. On the other side it literally is one transaction that settles immediately as soon as the timelocked transaction has completed.
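
A toy arithmetic sketch of the two-secret idea, assuming integers as stand-ins for private keys (real Succinct Atomic Swaps use curve points and adaptor signatures): the second UTXO is keyed by the combination of the two secrets, so whichever secret the first UTXO's outcome reveals determines who ends up knowing the full key.

```python
import random

N = 2**61 - 1  # toy modulus, not a real curve order
alice_secret = random.randrange(N)
bob_secret = random.randrange(N)

# The second UTXO is locked to a single key that is just the combination of both secrets.
second_utxo_key = (alice_secret + bob_secret) % N

# In the swap case Alice learns bob_secret and can reconstruct the full key herself.
assert (alice_secret + bob_secret) % N == second_utxo_key

def controller(revealed_secret_owner):
    """Whoever learns the other party's secret now holds both halves of the key."""
    return "Alice" if revealed_secret_owner == "Bob" else "Bob"

assert controller("Bob") == "Alice"    # swap case: Bob takes UTXO 1, his secret leaks, Alice gets UTXO 2
assert controller("Alice") == "Bob"    # refund case: Alice is refunded, her secret leaks, Bob reclaims UTXO 2
```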

MF: We discussed earlier that with Chris Belcher’s CoinSwap everything is going onchain and that creates a cost and impact on block space. One of the key differences between CoinSwap and Succinct Atomic Swaps is that more stuff is going offchain and less needs to go onchain. You have cut it down to two transactions.

RS: Cutting it down to two transactions, that is already possible in the original atomic swap as well if you assume either you are going to transact again before the timelock or you are trusting your counterparty not to try to claim the refund. There is always a refund transaction on a regular atomic swap. You can set it up in such a way where it becomes like a Lightning channel. If you try to claim the refund there is a relative timelock that starts ticking. If you respond before that relative timelock ends the refund can be cancelled. You do the swap, there is still the risk that your counterparty tries to claim the refund. If they do so you have to respond with yet another transaction. The downside to that is that it becomes three transactions. The worst case on the traditional atomic swaps would be both sides trying to claim the refund transaction. If you want the channel to be open indefinitely, at that point you would have to respond. You could have a worst case of six onchain transactions. That seems pretty terrible. That is one way of doing it if you want to do the traditional style. You want to make it two transactions only. You assume that the refund transaction will never be attempted to be claimed. But with Succinct Atomic Swaps one side is literally settled. Once you learn the secret the money is yours. Or if you tell the counterparty your secret then they have the UTXO. It is one or the other. You only have this watchtower-like transaction where you have to pay attention on one of the two sides. That cuts the worst case down to four transactions. It is either equivalent to regular atomic swaps or if you don’t want the watchtower construction and you do want to settle then it is going to be three transactions. It is superior in most ways if what you are doing is a regular swap, one person to another. With multi swap protocols or even the partially blind atomic swap those protocols don’t seem to play well with Succinct Atomic Swaps. That is the downside. It depends on how fancy your atomic swap needs to be. If it is just a one on one swap it seems like Succinct Atomic Swaps are almost always superior.

MF: As long as you’ve got the online requirement. Chris’ CoinSwap doesn’t need the online requirement or the watchtower requirement.

RS: Even without that it is three transactions versus four transactions. Even without the online requirement it is still superior. What Chris is doing is saying “Let’s swap and then let’s swap again.” That way you need two transactions instead of four transactions. That is true but that is also true of Succinct Atomic Swaps. It depends on what your use case is. If you are constantly swapping then it doesn’t really matter. Only the final transaction you might want to consider doing a Succinct Atomic Swap because then at least one of the two parties doesn’t have to close out their swap session. It seems to be superior in most other cases.

nothingmuch: I wouldn’t say it is a requirement to be online rather it is a choice. You can choose to save on one transaction if you stay online but you don’t have to which is not the case for Lightning. With Lightning if there is a non-cooperative close you must stay online.

RS: It would be like closing your Lightning channel. You could choose to close your Lightning channel. You don’t have to be online. It is similar in that sense. You have the option to close it out.

AG: I think nothingmuch’s point is important there. Because there is a punishment mechanism there is a game theoretic aspect to a Lightning channel that doesn’t exist in a pure atomic swap or CoinSwap.

MF: That’s because you are keeping your channel open for a long period of time. In a swap you are just doing one swap, you are not opening a swap channel.

AG: There aren’t multiple states. The whole idea with that offchain protocol is that you have to overwrite a previous state without going to the blockchain. You use a punishment mechanism. With CoinSwap we just use the blockchain always.

MF: With Succinct Atomic Swaps you do have that punishment-like mechanism.

AG: Yes which is the key point. It is a very beautiful protocol, I was really impressed reading it. That’s the crux of the matter. It does have a punishment mechanism to achieve its goal.

RS: nothingmuch’s point is you can choose to close it and then it is a three transaction protocol. It is an optional thing. There are two tricks. One is the asymmetric swap where one transaction doesn’t even need a timelock so you don’t need to worry about it. The second trick is the watchtower requirement. The watchtower requirement is completely separate. You can transfer that watchtower requirement to a regular atomic swap but then you have the watchtower requirement on both UTXOs that you are swapping. In order to have some kind of watchtower requirement you would have to have yet another transaction. There needs to be a trigger transaction that starts your claim to a refund. You broadcast one transaction and that signals “If you don’t do anything I am going to get my refund.” The party that is the rightful owner after a swap, Bob has the UTXO that Alice first created. Alice can claim the refund. Bob has to respond before Alice succeeds in claiming her refund.

MF: You briefly mentioned Jonas Nick’s partially blind atomic swap using adaptor signatures. With PTLCs you are getting the privacy in that you are not having a script onchain. What additional privacy are you getting from this partially blind atomic swap?

RS: I think it is already outdated because there is this other paper called A2L which is a better version from what I am told. The general idea is you have one party in the middle that facilitates swaps. Alice, Bob and Carol, they all propose a swap to the server S. On the other side the server doesn’t really know whether it is swapping with Alice, Bob or Carol. But it is an atomic swap. The end result is similar to a Coinjoin, a blind Chaumian Coinjoin but there is no single transaction on the blockchain. Instead you just see separate transactions like 1 Bitcoin transfers or something. There are two things I should mention that might be of interest. You can use MuSig on one of the swap transactions, the one that doesn’t require a timelock. You are not actually ever spending from it. It can be a regular, single key ECDSA UTXO. You can do Succinct Atomic Swaps on that. For one of the two UTXOs. The one that requires the timelocks where you reveal the secret you cannot do it. You would need a 2P-ECDSA or something there. On the other side it can be single key MuSig. You reveal half of the key and then that is efficient. That is very useful for privacy. The second thing is that because you are never signing from it, if you want to fund that with a Coinjoin that is possible. That is possible today. That would be a very interesting way of doing it. You set up a Succinct Atomic Swap and then on one side you do the complex timelocked transactions. On the other side you fund it through a Coinjoin. Normally you would need to know the txid or something along those lines because you need to create a transaction spending from it before you start the protocol. That is not necessary here at all. You could fund it through a Coinjoin or something along those lines.

AG: Yeah because usually we have to symmetrically fund these two multisigs.

Externally Funded Succinct Atomic Swaps (Max Hillebrand)

https://github.com/MaxHillebrand/research/blob/master/ExternallyFundedSuccinctAtomicSwaps.md

MF: Max you have this research Externally Funded Succinct Atomic Swaps based on Ruben’s Succinct Atomic Swaps. Perhaps you could explain what use case you are addressing here and how it works?

MH: This is utilizing Succinct Atomic Swaps with one tweak in the user experience. A very important one with consequences. There is a client and a server relationship. The server coordinates the swap and the client utilizes the swap. This is the first thing. However the second aspect is that it might be that the user is CoinSwapping a coin that he is receiving in the future. There is a setup transaction where the coordinator commits Bitcoin into a swap script. This is at time point 1. Later at time point 2 the user can send any Bitcoin from anywhere to a specific address that the coordinator can spend from. The interesting part here is that second transaction that triggers the setup transaction, this does not have to be from the user itself. This is very interesting. In the original Succinct Atomic Swap this was the Litecoin side of the chain basically. This side does not have to speak the CoinSwap proposal because it just sends to a regular address. Here a user or a different person who pays the user in our context can do so directly into a CoinSwap address. This would mean at a UX level that the user receives non-anonymous coins from anyone and instantly, as soon as he receives it, he gains access to funds out of a CoinSwap which presumably has a lot of anonymity. Therefore the user buys onchain privacy very quickly: in any transaction in which he receives Bitcoin he swaps into a coin which has a lot of privacy.

RS: Maybe I can give one example that I am thinking of now that might clarify for people. Obviously I have talked to Max so I know what he is talking about. I can imagine it is complicated to follow. The end result is that for instance what you could do is withdraw from an exchange and the money that the exchange is sending is funding a CoinSwap. The address that the exchange is sending to is not actually where you are receiving the money because you are instantly swapping those coins. That would be one example of what you are achieving.

MH: Yes exactly. This is the very nice thing about this aspect: one side of this Succinct Atomic Swap is very stupid basically. It is just a single key address that anyone can fund. This makes it easier in this sense. There are two important downsides that we should consider talking about. The first is that the server needs to go first. This has financial implications. If this is supposed to be a company or a user seeking to be profitable he will have to open a payment channel to a new user which is most likely an anonymous Tor identity with no reputation. Therefore there is a denial of service risk in the sense that an attacker creates multiple identities and lets the server open multiple swaps to the user, a malicious actor in this case; this will be very capital intensive for the server. There is a need to figure out how to make a fee that will pay the service provider in advance when he is committing coins into the swap.

RS: It seems like Lightning would be perfect for that. You send a payment to open a channel and you even pay for the adversarial closing of that channel. If the channel is not closed adversarially you give a partial refund minus whatever fees you are charging.

MH: There is one more important thing to note. Because the server goes first and a potential third party, external user is going to fund the CoinSwap, the tricky thing is the server does not know exactly how much Bitcoin the third party funder is going to send to the user. It might be any amount. It could be 0.1 Bitcoin, it could be 10 Bitcoin. The server does not know in advance.

RS: I do think it depends on the example though. The example I just gave of withdrawing from an exchange and then immediately swapping, in that case you would know how much you are planning to withdraw ahead of time. It depends on what your use case is. Do you have a specific use case in mind where you are not certain how much money you are going to receive? Is it a payment from a customer? What are you thinking?

MH: In the case of the exchange yes you might figure it out. But it will change in nuances, how high the transaction fee will be and how much will be in the output that you get. The example that I had was Alice and Bob go to a restaurant and Alice needs to pay Bob back. Bob knows that roughly 100,000 satoshis will come in but he doesn’t know if it is 95,000 or 110,000 for example. It is not exactly clear how many Bitcoin will be put into this thing. This is very tricky to figure out. What do you guys think? Is this a problem? Is it a problem not to know in advance, when setting this up, how much Bitcoin will be swapped?

MF: You mean a problem for this third party, this privacy expert?

MH: I think it is mainly a problem for the swap partner.

RS: One thing I am thinking is you can cancel the swap if you are not satisfied with the amount. Then, until you are satisfied with the amount, maybe you can do some kind of additional Lightning payment. It makes it more complex when you require the Lightning Network: a Lightning payment if the amount is not exactly what you were expecting. There is a little bit of a trust factor there. Or you can make it so that the Lightning payment only goes through if the swap goes through. That should be possible I think. That also relies on a secret. Maybe you can solve it like that but obviously you get a lot of overhead trying to utilize the Lightning Network while doing a swap. That may not be very practical but that would be one way of doing it.

MH: It is an interesting approach. What Ruben and I figured out eventually was that if the amount is actually unknown then what the service provider can do, instead of sending to a regular swap with a 2-of-2 pre-signed refund transaction, is use a payment channel. This has the interesting aspect that if the user Alice thinks that she will receive under 1 Bitcoin but above 0.5 Bitcoin, the service provider can open a payment channel of 1 Bitcoin to the user with all of the money on the side of the server. When an external funder, Carol, comes along and pays Alice 0.7 Bitcoin she pays this into an address that the server can spend. But only after they negotiate a payment channel update in that channel where the user will have 0.7 and the server 0.3. I am still not exactly sure if this actually works, to have a Succinct Atomic Swap with that payment channel set up on one side. This changes the timeline of the amounts a bit because there still needs to be one more payment channel update after the transaction is received from the third party funder.

RS: To very briefly summarize, the idea is to allow somebody to make a payment and instead of receiving it on that address you are receiving it on a CoinSwap. You are using that amount immediately to swap. The other side of the swap can be a channel where every time you do a swap you move a little bit of the balance of that Lightning channel to the other side. You can then use that same channel for multiple swaps. After maybe three payments are received now the channel is full and you close the channel. Something along those lines.

MH: Making a payment in this scheme, as a user of the coin that was just swapped, is basically closing the channel with the cooperative closing transaction. And what someone could do potentially, I am still not sure, is to do splicing. You close the channel in the input and then you open a channel again in the output of a transaction. Even better maybe would be to swap some balances. If the user has two channels open to the server, one smaller and one larger, then the user could swap the value of the smaller channel into the larger channel with an atomic swap, a Lightning Network channel update, and then close the smaller channel. In a way that there is no change. As soon as the server and client are online and can communicate I think they can negotiate nice swapping ceremonies.

RS: I believe it is theoretically possible.

MH: The very nice thing with 2P-ECDSA and adaptor signature ECDSA is that this all looks like single public keys and nothing else in the cooperative case. This is fantastic.

https://www.youtube.com/watch?v=u7l6rP49hIA

Pastebin of the resources discussed: https://pastebin.com/zbegGmb8

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is London BitDevs, this is a Socratic Seminar. We had two events last week so it is great to see so many people on the call. I wondered if there would be lethargy after two last week. They were both great, videos and transcripts are up on BIP-Schnorr and Tim Ruffing’s presentation. Today is on CoinSwap. CoinSwap has been in the news because Chris Belcher has just got funding from the Human Rights Foundation to work on it and there seems to be a lot of enthusiasm. At least a couple of people have been getting deep into CoinSwap. A couple of guys from Wasabi have been talking about CoinSwap at the Wasabi Research Club. This is being livestreamed. There is a call currently on Jitsi. Jitsi is free, open source and doesn’t collect your data. There will be a transcript but it can be anonymized and edited so please don’t let that put you off. This isn’t an exercise to find gotchas, this is purely for educational reasons. There is a Pastebin up with the resources we will look at to bring structure to the discussion. As is normal we’ll do intros for anybody that wants to do an intro. There is a hand signal in the bottom left of your screen. If you want to do an intro, who you are, what interests or what understanding you have on CoinSwap. You don’t need to give your real name if you don’t want.

Adam Gibson (AG): I have some knowledge of CoinSwap because I did an implementation of it called CoinSwapCS a couple of years ago. I also corrected Greg Maxwell’s 2013 protocol which I don’t think many people know. I am also the author of the diagram you have at the top of your page.

Max Hillebrand (MH): I am building on different Bitcoin tools, currently focusing a lot on privacy specifically onchain with Wasabi wallet but also have been thinking about CoinSwaps for quite a while. I am very excited about the promise of Schnorr, scriptless scripts and adaptor signatures. I am even more excited that all of this is becoming possible with Schnorr-less 2PECDSA.

Openoms (O): I am contributing to the RaspiBlitz project which is a Lightning Network node implementation and a lot else. I was recently working on integrating Joinmarket on it and I have been working on a Terminal based menu called JoininBox which is helping Joinmarket CLI usage. I am generally very enthusiastic about privacy on Bitcoin and excited about the CoinSwap proposal.

Aviv Milner (AM): I am part of the Wasabi Research Club alongside Max. I am really into privacy. One thing I am really proud with, I got to interview Adam Gibson not long ago on elliptic curve math which was great.

What is an atomic swap?

MF: As is custom in the previous Socratics we start from basics. I will throw out a basic question. What is an atomic swap? What is an atomic swap trying to achieve? This has been discussed for years in terms of swapping coins between chains. Can anybody explain how a conventional atomic swap works?

MH: A swap is about exchanging property right titles very generally speaking. There are two parties, Alice and Bob. Alice has something, Bob has something else and they want to have the thing that the other person has. They swap it, they exchange it. What makes this whole thing atomic is not that it is small but that it is binary. Either the trade happens or it does not happen. Either Alice gets the good of Bob and Bob gets the good of Alice or nothing happens at all and both retain their own goods themselves. The cool thing is that nobody can steal. It is a bulletproof property rights exchange contract. There is no possibility that Alice can have both her own good and the good from Bob. This is not possible in this scheme. It is achieved by utilizing cryptography and Bitcoin Script in order to enforce this with some timing constraints that I am sure we will talk about.

O: The word atomic means that it is all or nothing. Max described that there is no way that the trade goes halfway forward and not all the way. That is why we call it atomic.

AG: It might be worth mentioning that the achievement of that goal is contingent on trusting that the blockchain’s so to speak clock operates correctly. For example no re-orgs or no deep re-orgs depending on how you set it up. It is not like you get the atomicity for free. It is depending on the blockchain behaving itself.

MF: You could say the same about Lightning. It is basically the same assumptions.

AG: It is the same primitive involved in at least one aspect of the Lightning Network, similar.

Bob McElrath (BM): There is a fun theorem that says atomic swaps are impossible without a trusted third party. We don’t get away from that theorem here. The two assets live on two different blockchains presumably so one solution is to put them both on the same blockchain which is the way all ERC20 swaps work. What Bitcoin does is use two interleaved timelocks, it is not actually atomic, it doesn’t happen at one point in time. One swap has to go forward and then the second one has to follow but there is a back out where if one half of the transaction doesn’t go through the other half can retrieve their coins and affect the non-execution of the swap.

MH: What theorem suggests they are impossible? Is there a difference between swapping between chains like Bitcoin and Litecoin and swapping on the same chain, Bitcoin UTXOs on the base layer?

BM: If the base layer mediates the atomicity and the atomicity happens on the base layer, there are two transactions that are confirmed simultaneously and they can only be confirmed simultaneously. Then the base layer is enforcing atomicity. In order to do that you have to have a base layer that supports multiple assets. That is what is done on many of the blockchains out there that support sub-assets if you will and the whole DeFi thing.

MF: Alex has shared in the chat a resource on understanding HTLCs.

At a high level HTLCs function as an escrow with a timelock. Once you put money into the escrow it will either execute successfully based on certain conditions or refund the payer once the timelock has expired. HTLCs are one of the primitives used on the Lightning Network. The potential for HTLCs seems boundless in my opinion.

MF: And PTLCs, we will get onto them later. nothingmuch says in the chat is that due to the Fischer-Lynch-Paterson impossibility? Or is it something specific to these swaps?

BM: I think that sounds right. This statement doesn’t necessarily apply only to cryptocurrencies, it also applies to anything else in the world including pdf files, physical objects, anything else. You can’t exchange anything electronically or otherwise without a trusted third party.

AG: It is usually referred to as fair exchange. The title of the paper was “On the Impossibility of Fair Exchange without a Trusted Third Party.” It is the same thing.

BM: In the blockchain space we outsource the third party in a couple of different ways. One interesting way is in the case of Ethereum with ERC20 tokens, the blockchain is actually mediating it and it is the rules of the blockchain that enforce the atomicity. There are ways to trust minimize that third party, the whole statechains conversation is an example of a trust minimized third party but it is still a trusted third party that enforces atomicity. Of course with HTLCs we use the timelocks.

AG: That is why I was saying at the beginning that the simplest way to understand it perhaps is the trade-off that we are trusting the blockchain clock is behaving as we intend it to in our timelocks. There is a lot you could say about this, we have said a lot already.

Greg Maxwell Bitcointalk post (2013)

https://bitcointalk.org/index.php?topic=321228.0

MF: As is custom for these historical timeline exercises the first link is a Bitcointalk post by Greg Maxwell. This is on CoinSwap. This is at the idea phase in 2013, obviously lots has changed since then. The foundation or the crux is kind of here. It is not as trust minimized as some of the designs today, is that correct?

AG: It is kind of complicated to say the least but it is not less trust minimized, the same basic thing. He chose to present it, perhaps logically, in the form of a three party protocol rather than two party. In a sense we are missing a step because I think people usually refer back to a TierNolan post from a little earlier when they first talked about atomic swaps. This was Greg specifically introducing the concept that even on a single blockchain it could be a very useful construct to have this kind of atomic swap in order to preserve privacy. He was trying to explain how you could do the atomic swap without revealing the contract onchain. That is the most important concept that is being played out there. It is a little complicated, the way it is presented there. I couldn’t explain it to you right now, it was years ago I looked at this. I did actually find a mistake in it, it was quite funny telling Greg, he wasn’t too pleased. It was a minor thing. Even if there was no mistake in that layout it was written before there was such a thing as CLTV and CSV, CheckLockTimeVerify and CheckSequenceVerify. They activated years later. He was writing how to do it with timelocks on backout contracts rather than integrating it into one script. Also it is three party which makes it more complicated.

MF: This was before timelocks?

AG: They weren’t in the code or activated.

BM: There was an nLockTime in the original Bitcoin. The CLTV soft fork in 2015 and the CSV soft fork in 2016 added the CheckLockTimeVerify and CheckSequenceVerify opcodes. That is what gave us HTLCs. The original timelock that was in Bitcoin didn’t behave in a way that anybody expected. It was not really very useful.

MF: This party Carol in the middle. There is no trust in Carol? What is the point of Carol if there is no trust required in Carol in this scheme?

AG: I haven’t read this for three years at least so I’d have to read it. It is very long and very complicated.

BM: I am not sure about this specific proposal but generally the third party is added to enforce atomicity. You can minimize trust down to only enforcing atomicity and not knowing anything else.

MF: Towards the end of Greg’s post he gives the comparison to Coinjoin which is quite interesting.

AG: Even if you just look at the introductory statements at the top of the diagram. Phase 1 “makes it so that if Bob gets paid there is no way for Carol to fail to get paid.” Of course there are such things as escrows with trust but I don’t think there is anything interesting here with regard to having trusted parties. This is about enforcing atomicity at the transaction level. The only reason it is different from a CLTV type construct is that you have to use extra transactions with nLockTime timelocks as backouts. This isn’t about having a trusted relationship.

TierNolan scheme (2013)

https://bitcointalk.org/index.php?topic=193281.msg2224949#msg2224949

MF: You mentioned Adam the TierNolan scheme. This is May 2013, Greg’s was October 2013. This was before Greg’s. Could you explain the TierNolan scheme and how it differs to Greg’s?

AG: Not me. I never understood it. I remember reading it three or four times and never quite understanding what was going on.

BM: Probably best to skip directly to how HTLCs work today and timelocks there. The other reference I would point out here is Sharon Goldberg’s talk from the MIT Bitcoin Expo this year which enhances the TierNolan proposal and adds three timelocks. It adds the feature that either party can go first. TierNolan’s proposal has to have one person going first. Broadly there are two timelocks, there is a window in which if Alice goes first she can get her coins back and if Bob does go, everything is good, it is executed. There are two timelocks there.

MF: The motivation for Greg’s scheme was as a privacy technique while the TierNolan was set up as a swap between two different cryptocurrencies?

nothingmuch: If I am not mistaken the history is as follows. First there was the Bitter to Better paper by Elaine Shi and others which introduced the idea of using swaps using hashlocks and timelocked transactions and a multisig for privacy. I believe TierNolan came after and generalized to do swaps between different blockchains. Greg’s CoinSwap post describes essentially the same thing as an atomic swap between two parties on Bitcoin specifically for privacy. A minor correction for earlier is Carol is whoever Alice wants to pay, Carol doesn’t know she is participating in a CoinSwap. It is typically meant to be Alice herself. The reason he added it was to emphasize you can also send payments from a CoinSwap without an intermediate step. I think the main innovation from a privacy point of view in the CoinSwap post over the previous protocols is the notion of a cooperative path where the only onchain footprint is the multisig outputs. None of the hash locked contracts make it onto the chain unless one of the parties defects and the refund is actually used. This is why it was a substantial improvement for privacy. I hope that that is accurate, that is my recollection.

AG: That sounds good. Maybe it nicely illustrates that while it is natural to go down a historical route in this it might actually be better educationally to look at those two basic ideas. First we could mechanically state how an atomic swap construct works without any privacy concerns. Then we could describe as Greg laid out how you take that basic construct and you overlay a certain thing on it to make it work for privacy.

How does an atomic swap work?

AG: The basic idea of an atomic swap is this idea of a hash preimage, one side pays to a script that can only be spent out of by revealing a hash preimage. Two people are exchanging coins for whatever reason, cross blockchain or on the same blockchain, they are both able to extract two chunks of coins if they reveal the hash preimage. One of them starts with an advantage in that he knows the hash preimage. If you did it in a naive way he could claim the coins himself and then try to claim the other ones too. The basic idea of course is when you broadcast a transaction that reveals a hash preimage and the script says something like “Check that this data hashes to this previously agreed hash output.” Because you have to broadcast it in order to claim the coins that means that hash preimage by definition becomes public. A naive way to do it would be Alice has the hash preimage, tries to claim one of the outputs using the hash preimage and then Bob would see that hash preimage on the blockchain because it was published inside the scriptSig of the transaction that Alice uses to claim. He would take the hash preimage and essentially do the same thing. He would take that hash preimage and construct a valid scriptSig for the other output. He gets his one coin, they are both trying to claim one coin. The problem with that is it doesn’t have the security you want. Alice is in the preferential situation where she knows how to claim both the coins, she knows both of the hash preimages. You add another layer to it. You add that only one party can claim by adding the requirement of signing against a public key. Why do we need timeouts? If you had this setup as I described it where both sides would claim the coins if they know the preimage to a hash but also it was locked to one of their keys, it looks secure but they have a deadlock problem. If the first party refuses to reveal the hash preimage, the one that knows it, because you have had to deposit coins into these scripts the coins are locked up until that person decides to reveal it. Maybe they die or they disappear and then your money is dead. You do need a backout. The question becomes how does each side have a backout to ensure safety against the other party misbehaving. I haven’t explained that in full detail, I am sure we will at some point. How do we convert it into a private form? In talking about this you might find the diagram on Michael’s meetup page useful for this. We have got Alice and Carol and TX-0 and TX-1 on that diagram show Alice and Carol respectively funding a 2-of-2. Why do they fund a 2-of-2? Because they need to have joint control of the funds.

MF: One thing I saw going through these resources on atomic swaps, Sergio Demian Lerner had an attempt back in 2012 with a trustless exchange protocol called P2PTradeX. That was the idea that was refined and formalized by Tier Nolan in 2013. The guys at Decred worked on executing an atomic swap with Litecoin in 2017. Most of the activity seems to be swapping currencies between chains it seems until recently.

AG: The first thing we need to get clear is a HTLC. HTLC is a slightly unfortunate name, it is a correct name but it isn’t an intuitive name. Forget about coin swaps for a minute and focus on this idea that you could have a custom script that pays out if you provide a hash preimage or it pays out after a delay. Why do we want the after a delay clause? Because you don’t want to pay into a hash, something where the script says “Check if the data provided in the scriptSig hashes to this value.” We don’t want to pay into a script like that and have it sitting there infinitely because the other party walked away. The concept of backout is absolutely crucial. That is why the timelock in a hash timelocked contract (HTLC) exists. As well as paying into a hash we are paying into that or something is locked in say 100 or 1000 blocks forward or possibly time instead. Hash timelocked contract refers to that. You could think of a HTLC as a specific kind of Bitcoin script. I seem to remember that at one point some people tried to produce a BIP to standardize this but the only standardization is in certain places like Lightning. You have a script and that is cool but the idea of an atomic swap is you lock together two such scripts. It could be on different blockchains, it could be Litecoin and Bitcoin for example. Or it could be on the same blockchain. Forgetting privacy the idea is both Alice and Bob pay into such a script. Alice pays into the first one and it says “This script releases the coins if you provide the preimage to the hash or it releases, without the hash preimage, after 100 blocks.” Bob does the same. They use the same hash value in the script but they use slightly different timelocks. If you think about trading, perhaps Alice is paying 1 Bitcoin in exchange for 100 Litecoin. It should be atomic. If Bob receives 1 Bitcoin and Alice receives 100 Litecoin, if one of them happens both should happen. The idea is that when one of them happens it gets broadcast onto the chain with the hash preimage such that the other one should be able to be broadcast also. That is the core concept. The timelocks exist to make sure that both parties are never in danger of putting their coins into such a script but never getting them out again. Maybe that is the simplest explanation you can give for an atomic swap.
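
A rough way to see the two spending conditions Adam describes is the Python sketch below; it is not Bitcoin Script, it omits the signature check each branch would also require, and the heights are arbitrary example numbers.

```python
# Minimal sketch of the two HTLC spending conditions described above, illustrative only.
import hashlib

def htlc_can_spend(preimage, current_height, hash_lock, timeout_height):
    # Branch 1: reveal the preimage of the agreed hash.
    if preimage is not None and hashlib.sha256(preimage).digest() == hash_lock:
        return True
    # Branch 2: after the timeout the refund path becomes spendable.
    return current_height >= timeout_height

secret = b"some secret preimage"
lock = hashlib.sha256(secret).digest()

assert htlc_can_spend(secret, 100, lock, 1000)       # claimed with the preimage
assert htlc_can_spend(None, 1001, lock, 1000)        # refunded after the timeout
assert not htlc_can_spend(None, 100, lock, 1000)     # neither condition met yet
```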

MF: I suspect most people know what a HTLC is but a HTLC in one direction and another HTLC going in the other direction with a link between those two HTLCs.

AG: I am not sure about the directionality. If you are thinking about HTLC in a Lightning context then obviously we have a chain of them, not just two. But it is basically the same thing. If one transaction is broadcast the other one can also be broadcast. It is only two and not 3, 4, 5 in a Lightning path.

MF: This is talking about different cryptocurrencies. The question in the chat is on moving value from a legacy address to a bech32. I suppose this is going in the direction of Chris Belcher’s CoinSwap, you are swapping coins in a legacy address for coins in a bech32.

AG: I don’t think you should focus on that yet. If you understood that, what is the advantage of doing an atomic swap from a privacy perspective?

O: The amount and timing correlation?

AG: Those are both valid points but we will get onto them more when we are trying to achieve privacy. There is a more fundamental reason why a basic atomic swap as I just described is poor for privacy.

O: The peers would need to communicate the secret out of band?

AG: They wouldn’t need to communicate the secret out of band because the whole idea is that when one of them broadcasts a transaction with the hash preimage then it is on the blockchain and the other one can simply scan the blockchain to see it. It is actually the case that in practice you would communicate the secret out of band but that is not the principal issue with an atomic swap.

BM: Everybody sees your hash preimage and can correlate them across chains.

AG: That would be the answer. You can generalize this point to just custom scripts generally. When you use some kind of exotic script and not a simple pay-to-public-key-hash or the various equivalents. Even multisig, you are revealing something about your transactions. This is the ultimate extreme of that problem. Here if we simply do a trade of one coin for another using the hash preimage, some string like “Hello”, obviously you would use a more secure string than that as a hash preimage, then that hash preimage has to appear in both of the scriptSigs of the transactions that you are using to claim the coins. By definition it has to if we do it in this simple way. That unambiguously links those two transactions on one chain or on both chains. That is terrible for privacy.
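
To make the linkage concrete, here is a small sketch of what an observer could do against a naive swap; the transactions and preimages are invented purely for illustration.

```python
# Sketch of the linkage problem just described: in a naive atomic swap the same hash
# preimage appears in the claiming scriptSigs on both sides, so an observer only has
# to group transactions by the preimages they reveal.
import hashlib
from collections import defaultdict

observed_claims = [
    {"txid": "claim_on_chain_A", "revealed_preimage": b"swap secret"},
    {"txid": "claim_on_chain_B", "revealed_preimage": b"swap secret"},
    {"txid": "unrelated_spend", "revealed_preimage": b"something else"},
]

claims_by_hash = defaultdict(list)
for tx in observed_claims:
    digest = hashlib.sha256(tx["revealed_preimage"]).hexdigest()
    claims_by_hash[digest].append(tx["txid"])

linked = {h: txids for h, txids in claims_by_hash.items() if len(txids) > 1}
print(linked)  # the two halves of the swap are unambiguously tied together
```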

MF: We will go onto adaptor signatures, PTLCs and that kind of stuff later.

Alex Bosworth talk on submarine swaps at London Bitcoin Devs, 2019

https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2019-07-03-alex-bosworth-submarine-swaps/

MF: This is Alex Bosworth’s talk at London Bitcoin Devs on submarine swaps in 2019. This is introducing a new concept where rather than swapping currencies on different chains you are swapping Bitcoin onchain for Bitcoin on the Lightning Network. This is useful in the context of submarine swaps and wanting to change your inbound, outbound capacity by making onchain transactions. nothingmuch says it makes more sense to do CoinSwap first, perhaps it does but I will stick with this as we are following the timeline. I think the discussion on submarine swaps was happening a year or two ago, there wasn’t the discussion on CoinSwap for privacy.

AG: I totally disagree. I spent a lot of time in 2017 writing a library to do it and we used it. CoinSwap was always a very minor topic. The Lightning white paper came out in early 2016, it took a long time to get the ball rolling with Lightning. During that period there were a few of us, you could see people occasionally getting excited about CoinSwap as a privacy idea but it was always difficult to get it off the ground. Perhaps we can discuss why that was at some point. I wouldn’t say CoinSwap was a thing that came after submarine swaps.

nothingmuch: The main innovation in CoinSwap was the observation that you could do a cooperative multipath spend and keep the information offchain. I believe that was the main insight that these kinds of things could happen offchain. You could have a binding set of transactions that makes assurances to both parties without ever making it to the blockchain so long as both parties cooperate. By having an incentive to cooperate, in this case fees, that is how you gain privacy. The hash preimage is never revealed if both parties actually follow through with the protocol.

AG: I am not sure about the multipath aspect. There has been discussion about that at various points. Almost everything else I totally agree with.

nothingmuch: I shouldn’t have used that word because multipath is overloaded. I meant the different contingencies that may arise.

MF: It felt like there was a lot more conversation going on around submarine swaps than CoinSwap. The privacy wiki has a small section on it. But when I was looking at resources on this I was seeing very few resources on CoinSwap. Perhaps it was all being done behind closed doors.

AG: The public’s imagination was captured principally by the concept of an atomic swap, especially in the middle of the craze we experienced from 2016 to 2018. Partly because of all the altcoins and everybody got very excited about how you could trade trustlessly. Lightning exploded in some sense amongst a certain group of people between 2017 and 2018 when it went live on mainnet. Alex being the fountain of incredible ideas that he is, pushed this submarine swap idea. It is just another variant but it is a more complex variant of the things we are discussing here. It might make more sense to go atomic swap, CoinSwap and then talk about the more exotic things.

CoinSwapCS (2017)

https://github.com/AdamISZ/CoinSwapCS

AG: I wrote code and built it in 2017. That repo is probably more interesting for the Issues list than for the code. I went through Greg Maxwell’s post in detail and it took me like 2 days before I eventually realized that there was an error in what he wrote. That is only important in that it let me write down what the protocol should be. You have this on your diagram. Greg’s basic idea was to have an atomic swap, both parties have an exotic script that I have already described, pay to a hash or to a timelock. Then have this overlay which is what nothingmuch was describing. This is a very important principle, we are seeing it in Taproot, all kinds of contracting ideas. If the two parties cooperate then the conditions of the contract do not need to be exposed to the world. Instead you overlay on top of the contract completion a simple direct payout from a 2-of-2 multisig to whatever destination. On the meetup diagram it is the transactions at the top, the TX-4 and TX-5 that pay from the 2-of-2 directly to Carol’s keys and directly to Alice’s keys. CoinSwapCS was just me thinking I can actually concretely make a code example of how you could have CoinSwap servers and clients. This is perhaps something that we will get onto with Chris Belcher’s description recently, we talked about timeouts and how it depends on the clock of the blockchain, the nature of it is it is a two step protocol. The initial funding phase and then when that is committed then you can build these contracts. You need to commit into shared control. In that specific version of the protocol Alice pays into a 2-of-2 with Alice and Carol. Carol also pays into a 2-of-2 with Alice and Carol after of course having pre-agreed transactions that spend out of it just like many of these contracting systems. I coded that up where I had Carol as a server and Alice would be querying Carol and saying “Can I do a CoinSwap for this amount?” They arrange the transactions and they set it up. It is a two phase thing. Fundamentally it is more interactive than something like a Coinjoin where you are just preparing and arranging a single transaction or more importantly a single phase. As I explained in CoinjoinXT it could be multiple transactions. It is still a single negotiation phase in Coinjoin. With CoinSwap it is a lot more powerful of a technique because what you end up with is histories not being merged but histories being completely disconnected. The trade-off is you have cross block interactivity. Part of that cross block interactivity is you have to trust the blockchain not to re-org which you means you have to wait not just ten minutes but quite a long time between these phases of interactivity. One of my blog posts nearly three years ago was on how CoinSwap work, the different types of them. With later blog posts I talked about the Schnorr variants and so on. It is all about that basic concept that you have an atomic swap, because you are using multisigs you can overlay it with a direct spend out to the two parties so that what appears onchain is not a hash or anything, it just looks like an ordinary payment. But it is multisig and this is something that Chris Belcher has tried to address in his write up.
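
A deliberately oversimplified sketch of that two-phase structure might look like the following; transactions are plain dictionaries and every name is hypothetical, so nothing here corresponds to CoinSwapCS code or a real wallet API.

```python
# Runnable toy sketch of the two-phase CoinSwap flow described above.

def fund_2of2(funder, other, amount):
    # Phase 1 funding output under joint control of both parties.
    return {"from": funder, "to": "2-of-2 {}/{}".format(funder, other), "amount": amount}

def cooperative_spend(funding_tx, destination):
    # Phase 2 overlay: a direct payout so the backout contracts never hit the chain.
    return {"from": funding_tx["to"], "to": destination, "amount": funding_tx["amount"]}

amount = 100_000_000  # 1 BTC in satoshis

# Phase 1: both parties fund joint outputs, having first pre-agreed the hash/timelock
# backout transactions that spend from them (kept off chain, not modelled here).
tx0 = fund_2of2("Alice", "Carol", amount)
tx1 = fund_2of2("Carol", "Alice", amount)
# ...wait for confirmations: the protocol spans blocks and must tolerate re-org risk...

# Phase 2, cooperative case: each side receives the other's coins at a fresh address.
tx4 = cooperative_spend(tx1, "fresh address for Alice")
tx5 = cooperative_spend(tx0, "fresh address for Carol")

print(tx4)
print(tx5)
```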

MF: Some of the issues to tee up the discussion of Chris’ proposal. There are a few interesting things here. One is this tweak, this is the bug you found in Greg’s initial design?

AG: Yeah that was me explaining what the problem was with the original post. It is all in the weeds, I wouldn’t worry about it, it is details.

MF: Your blog post, you talk about some problems that I’m assuming Chris would have had to addressed as well. One of them was malleability. Is that an issue we should discuss?

AG: Not really because that was solved with SegWit. This was around the time of activation so we were trying to figure out whether we needed to address non-SegWit or not. Much like Lightning.

MF: It is exactly the same concept.

AG: Yeah.

Design for a CoinSwap implementation (Chris Belcher, 2020)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html

MF: This is Chris’ mailing list post from May of this year. There’s also the gist with more details on the design. How does this proposal differ from your work Adam in 2017?

AG: It wasn’t just me, many others were enthusiastic back then. How is this different to what goes before? It isn’t really. It is a fuller fleshing out of the key ideas. A couple of things that he has added into the mix. It is a long post and it has several ideas. Using ECDSA two party, what this means is because we need to use multisig but we don’t really want to use ordinary multisig. The thinking behind this is subtle. When you do a Coinjoin in the current form like Wasabi, Joinmarket, Samourai etc you are doing a completely non-steganographic protocol. In other words you are not hiding that you are doing the protocol which means that the anonymity set of the participants in this privacy protocol is transparently the set of participants at least at the pseudonym level that went in. The specific set of addresses that went into the protocol. When you try to move to a more steganographic hiding protocol, with something like CoinSwap specifically you are hoping to not make it obvious that you have done that. As has been discussed at great length in the context of Payjoin and other things there is a huge benefit if we can achieve the goal of having any one of these protocols be steganographic because it could share a huge anonymity set rather than a limited one. If you want to achieve that goal with CoinSwap the first step is to use an overlay contracting system such that the outputs in the cooperative case at least are not exposing things like hash preimages or other custom scripts. At the least you want the outputs to be let’s say 2-of-2 multisig. The reason he is mentioning and expounding on ECDSA two party, he is talking about specifically the scheme of Yehuda Lindell where you use somewhat exotic cryptography such as the Paillier cryptosystem in order to achieve the goal of a 2-of-2 multisig inside a single ECDSA key. If you achieve that goal then you are going to create such a CoinSwap protocol in which all of the transactions at least at this level look like existing single ECDSA keys which is basically all the keys we have today apart from a few multisigs here and there.
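
The single-key idea can be pictured with a toy scalar sketch; in Lindell-style 2P-ECDSA the sharing is multiplicative and the shares are never actually combined in one place, with signatures produced interactively, so this only illustrates why the on-chain key looks like any other single key.

```python
# Toy scalar sketch (not a real protocol) of the single-key 2-of-2 idea: the on-chain
# key is an ordinary-looking ECDSA key whose private key is shared multiplicatively
# between the two parties. In practice the shares are never combined like this.
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

x1 = secrets.randbelow(N - 1) + 1  # Alice's share
x2 = secrets.randbelow(N - 1) + 1  # Carol's share
x = (x1 * x2) % N                  # effective private key behind the single public key

# Neither share alone is the key, so spending needs both parties, yet the output on
# chain is indistinguishable from an ordinary single-key payment.
assert x != x1 and x != x2
```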

BM: I guess the things that have changed since 2013, first of all we got CSV, timelocks are way better. Secondly we got Lindell’s two party ECDSA protocol and the multiparty ECDSA protocol which doesn’t use Paillier, it uses ElGamal instead which is more compatible with …curves. Both are good for this, different security assumptions. The third thing, which I’m not sure is used here, is adaptor signatures. We are going to talk about PTLCs in a minute. That hash preimage reveal is deanonymizing by itself. With adaptor signatures which you can do with both Schnorr and ECDSA you can reveal that preimage in a manner that only the recipient can find it.

MF: This is the ECDSA-2P section in Chris’ design.

BM: He doesn’t really talk about it at all. It uses Lindell’s proposal.

MF: This is to have a 2-of-2 multisig with only one signature going onchain before we get Schnorr. It is space savings onchain as well as the adaptor signature property of superior privacy, not having that script telling the world exactly what is going on.

BM: That’s right. The price you pay for that is more rounds of communication between the sender and receiver. There are some zero knowledge proofs that need to be communicated. I’m not sure what the count of the number of rounds is. It is at least three. For offchain payment protocols it is not that big a deal. If you look at MuSig and Schnorr you have a similar number of rounds in the protocol anyway because they have to decide upon the nonce for a group sourced Schnorr signature.

nothingmuch: A paper by Malte Moser and Rainer Bohme called “Anonymous Alone?” is an attempt to quantify what they call second generation anonymity techniques on Bitcoin. It contains an interesting bit of data, an upper bound for how many CoinSwaps could be going on. It is only an upper bound under the assumption that any pair of multisigs that co-exist, are of a similar amount and coincide in time on the blockchain is potentially a CoinSwap. That is how they quantify the anonymity set. I think that that gives an intuition for all of the main ideas that Belcher proposed. This is the multiparty ECDSA stuff that makes the scripts less distinguishable. It is the introduction of routing which means that it is not necessarily a two party arrangement. And the splitting of amounts into multiple transactions so that the onchain amounts are decorrelated.

AG: Because of that paper they were making an assumption that multisig is going to be a fingerprint of CoinSwaps and this removes that fingerprint. That is one aspect?

nothingmuch: That paper tried to quantify how many users of privacy techniques are there. It addresses Coinjoins, stealth addresses and CoinSwaps. For CoinSwaps because the data is offchain all they can really do is give an upper bound how much CoinSwapping is occurring under some fairly strong assumptions. The first of these is that any CoinSwap is going to be anchored into two UTXOs that coincide in time that are either 2-of-2 or 2-of-3 and where the amounts are relatively close.

BM: There is one aspect of all this which is a pet peeve of mine. Belcher’s example has Alice dividing 15 Bitcoin into 6 plus 7 plus 2. What we need to do in order to get this is we need to be doing binary decompositions of value. 16 million satoshis plus 8 million satoshis, using a binary decomposition you can get any value you want. Each set of binary decompositions becomes its own anonymity set. That’s the best you can possibly do as far as the anonymity set. But if everybody is dividing their value arbitrarily down to the satoshi you have a very strong fingerprint and a very strong ability to correlate. If I ask the question “How many pairs of outputs on the blockchain can I sum to get a third output on the blockchain?” Maybe these two are engaged in a CoinSwap. The ability to do that is still extremely strong.

AG: That is a very good point. I would make the argument in the Joinmarket scenario it is not necessarily as easy as you might think because you tend to get a lot of false positives. But I think your fundamental point is almost certainly correct.

nothingmuch: This is a bit of digression. I have been thinking about what I have been calling preferred value series. It generalizes to more than just binary but to any minimum Hamming weight amount, it could be decimal as well, it could be decimal with preferred value series. The intuition there is that if you have a smaller set of amounts that you can combine together to create arbitrary amounts then the likelihood that those amounts will have an anonymity set is much larger. The issues with that are if you need a UTXO for every significant bit in your amount, let’s say the average Hamming weight of an amount is 5 significant integers or something then you create a substantial overhead. We haven’t shared anything yet because as a group we haven’t come to consensus about that. For the unrelated stuff about Wabisabi and Coinjoins this is definitely the approach I have in mind and I am happy to share. Finally there is a way to make it practical without necessarily having that problem of this is 5x, 10x UTXO bloat factor.

BM: There is definitely a bloat factor there. The way this is handled in traditional markets is if you try to buy certain kinds of stock you can only buy stocks in lots of hundred shares. On the foreign exchange markets in lots of hundred thousand dollars. I envision that if a trading market were to arise in this manner there would be a minimum lot size significantly larger than the dust limit. You are obviously not going to go down to satoshi because you can’t make a single satoshi output because you can’t spend it and it is below the dust limit. Something like 8 million satoshis, 16 million satoshis would probably end up being the minimum lot size and then you do binary decomposition above that.
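
A minimal sketch of the kind of decomposition Bob describes, purely illustrative, with the 8 million satoshi lot size taken from his example:

```python
# Sketch of binary decomposition of amounts: split a value into power-of-two multiples
# of a minimum lot size so that each denomination can share an anonymity set with
# everyone else using the same denominations.

def binary_decompose(amount_sats, lot_size=8_000_000):
    lots = amount_sats // lot_size
    remainder = amount_sats % lot_size   # below one lot; handled separately in practice
    outputs = []
    bit = 0
    while lots:
        if lots & 1:
            outputs.append((1 << bit) * lot_size)
        lots >>= 1
        bit += 1
    return outputs, remainder

# 15 BTC decomposes into six standard denominations plus a sub-lot remainder.
print(binary_decompose(1_500_000_000))
```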

nothingmuch: The transaction structure in mind, I’m only speaking for myself here, is the Coinjoins where you have different mixing pools for those Hamming weight amounts, every power of two, that occur as parts of larger Coinjoin transactions. You can imagine a perfect Coinjoin that has a nice switching network topology and there would be representative inputs and outputs of every such class. If you have a Coinjoin transaction that is constructed in this way then in theory this gives you a blinding factor for all of the other arbitrary amounts. This is based on the Knapsack papers definitions of a non-derived sub transaction, the idea of how many different ways are there of combining assuming that all of the inputs and outputs are self spends inside a Coinjoin. A Payjoin breaks this heuristic. If you do the sudoku attack where you constrain things and find partitions of the transaction that balance to zero the idea there is that you always have ambiguity in that case. Every possible amount could be an arbitrary composition of those Coinjoin elements that have that power of two. The reason I’m bringing it up is that I think it is really relevant for this case because a big problem with CoinSwap coordination generally is that if two users are swapping history then one of them in theory gets a tainted history. Unless there is a common understanding that taint is no longer a valid assumption then there may be an disincentive for people to offer swaps because if you don’t know the provenance of the coin you are swapping against maybe you lose fungibility in that transaction. In this case if there are Coinjoins that allow the creation of fungible values of arbitrary amounts you can do a CoinSwap where one side of the CoinSwap is an output from such a Coinjoin I think that creates a very complementary onchain scenario. Users can swap arbitrary amounts without having a very strongly identifiable footprint.

BM: That’s absolutely right. That is what concerns me greatly about these CoinSwap proposals. It is literally a swap. I am going to give you my UTXO, you are going to give me yours. Some of the other proposals, statechains, have the same property where I am going to get a new UTXO. If there is a pool of UTXOs out there and one of them came from a darknet and is blacklisted somehow, somebody is going to get that UTXO, they are going to get the full UTXO and it is going to be a bad one. This is a strong disincentive to use such a system. Whereas with a Coinjoin I have 100 inputs and 100 outputs and I don’t know which is which and so each one is tainted 1 percent. With the CoinSwap proposal one UTXO that somebody is going to get and it might be you, is tainted at the 100 percent level and the others are not tainted at all.

MF: I had exactly the same thought. Let’s say you’ve got completely clean Bitcoin straight from being mined versus Bitcoin that has been through an identified drug mixer you are not going to be too happy swapping those two. Unless as soon as you get your Bitcoin you do loads of transactions to yourself so that you beat the taint score or the taint measure that people like exchanges use.

nothingmuch: I don’t think that is robust because it is pretty easy to do modular decomposition of graphs. You can identify a strongly connected component that originates from a small set of outputs. Even if it is a larger graph it is still equivalent to a single transaction in that case. Unless you are actively mixing with other users’ history I don’t think that approach buys you much in terms of privacy. It may be a very effective way to get past simple heuristics of exchanges but I would caution against actually thinking about that as a privacy enhancing technique.

MF: I am talking about the CoinSwap being the privacy technique. But let’s say you got the short straw, you’ve gone into the CoinSwap and you’ve got these “tainted” Bitcoin. You then need to get them untainted if you want to go into an exchange or you want to use a merchant that has restrictions on which Bitcoin they will accept.

nothingmuch: That process of untainting, it works today just because the heuristics the exchanges use are very crude. They are only doing this as a cost saving measure to reduce liability anyway. If this becomes more common then all they have to do is slightly ramp up the computational difficulty of identifying these graphs of self spends or whatever. It is not a difficult problem to identify those because the subgraphs are disjoint. If there are no transactions connecting it to the rest of the transaction graph it is fairly easy to isolate.

MF: On the YouTube shinobius monk says “I demand an immediate in depth comment from everyone on how to deal with the timing issues of successive CoinSwaps hitting chain in short time period.” Filling up blocks with too many CoinSwaps?

I think what he says is that CoinSwaps can be identified if they are hitting the chain in a very short period and maybe the amounts can be correlated. How do you know they are CoinSwaps? I don’t know, maybe there are ways to guess.

AG: Is he talking about timing correlation? I think this relates back to something I was thinking we should definitely discuss which is what are the practicality issues around CoinSwap? I think people should take note of an interesting historical fact that in late 2013 Greg Maxwell made two very similar long Bitcointalk posts, one called Coinjoin and one called CoinSwap. I think Coinjoin was implemented in some form or another within one month of the Coinjoin post but when I looked at it in early 2017, four years later, nobody had implemented CoinSwap at all.

It is because there was a Coinjoin bounty.

AG: I don’t think so. I think fundamentally CoinSwap is a more powerful protocol but there are some practicality issues that we should start discussing assuming that everyone on the call now understands what a CoinSwap is.

MF: shinobius also says “You arguably don’t get atomicity in the sense of state guarantees across two different proof of work chains.” I’m assuming that it is due to re-orgs, I think that is his point there.

AG: It is definitely worse because you are dealing with two clocks, it is significantly more messy. It is kind of the same basic problem I guess.

MF: There are a lot more re-orgs on other chains with less proof of work. It depends on how s*** your s***coin is. Josh Smith brings up the blockchain space issue which is mitigated by having one signature onchain with ECDSA-2P or Schnorr.

AG: It depends what you are comparing it to. Think about it in comparison to Coinjoin as a privacy technology. It fundamentally gives you a lot more fungibility per byte onchain. It is difficult to quantify but it must be partly because of this steganographic thing. What is your anonymity set if you do a 50 party Coinjoin? You could say my anonymity set is 50. But what is your anonymity set if you do a CoinSwap properly such that nobody even knows it is a CoinSwap? It is either a lot or you can’t define it. It is pretty efficient which is one of the main reasons Chris Belcher, myself and many other people were always very interested in CoinSwap even if we thought it was quite difficult to implement.

nothingmuch: Furthermore it has a positive sum type interaction. When you CoinSwap you are also helping the privacy of every non-CoinSwapped output that has the same script type.

AG: We discussed the same dynamic with Payjoin.

MF: Another question from shinobius. “Belcher’s idea of routing amounts through successive CoinSwaps and the impracticality of staggering the time between individual transactions in a chain of CoinSwaps for intervals that are too long.” This was a clarification for the previous question. Having too many successive CoinSwaps and staggering the time if there is an overlap between one CoinSwap that you are engaging in and the next CoinSwap.

Routing CoinSwaps

https://gist.github.com/chris-belcher/9144bd57a91c194e332fb5ca371d0964#routing-coinswaps-to-avoid-a-single-points-of-trust

MF: The problem that I understand routed CoinSwaps to be addressing is that if I do a CoinSwap with Adam and Adam is a malicious party, a chain surveillance company, then I am telling Adam what my history is which isn’t ideal from a privacy perspective. But if you have this circle where everyone is doing CoinSwaps with each other and there are multiple parties you now have to worry about all of them colluding rather than one malicious party. How does this routed CoinSwap work?

I think we can believe that assumption is correct. It is more important to explore how that assumption changes other properties of CoinSwaps, if it changes them at all. There is a problem with two party CoinSwaps because the other person learns the history. It might be a more interesting question to explore, accepting that the solution works: does it ruin the user experience? Does it do something else other than merely fixing the problem? Is it harder to implement?

AG: I haven’t thought about this much. I think Chris talked about some of the issues on the CoinSwapCS repo a couple of years ago. I remember him talking about this and multi-transaction. This is addressing a privacy problem; it seems to me we are looking at exactly what we see on the Lightning Network. It is not exactly the same but it is almost exactly the same mechanic. We are going to have staggered timeouts to make sure that no party is ever in a position where they could lose on both ends of the trade. It addresses a privacy problem in that same way where every participant in the route doesn’t know for sure the originator.

nothingmuch: I think it would be useful to go over the privacy issues with Lightning first as it stands today with HTLCs, if we assume that the routing is only a problem for CoinSwap makers, much like the public Lightning graph is not considered to be a privacy issue. When you do a routed payment you are making a HTLC with the first party in the route and giving them an incentive to enter a HTLC with the next party. But along the entire path, even though there is some anonymity, everybody shares the same secret. We mentioned earlier PTLCs as a solution to this, also anonymous multihop locks and a bunch of other approaches. If I’m not mistaken Belcher doesn’t go into detail about how routing is supposed to work. The implication is that it uses adaptor signatures.

AG: Anonymous multihop locks is actually the same thing. It uses points to achieve the security proof to avoid the wormhole attack but of course it has other excellent properties.

nothingmuch: I am not 100 percent sure on the details to explain how you can use adaptor signatures to create something that is equivalent to a whole route sharing the same preimage without that secret actually being shared by all parties.

AG: You are just talking about the general idea of point timelocked contracts? In the anonymous multihop locks paper they explain how you can have a tweak to the nonce essentially at every hop, such that every party has their own single secret and it is added along the path, so that everyone gets the same guarantee as they would if everyone just used one secret. But each person’s secret is different, using the linearity of Schnorr. You can think of it as using the linearity of Schnorr to make it so that we are not all using the same hash preimage, we are all using something like a hash preimage that is locked together, but each of our numbers is different. When we talk about adaptor signatures, are we talking about using CoinSwap of the type that was initially proposed by Andrew Poelstra? I wrote it up in the blog post “Flipping the scriptless script on Schnorr”. We are talking about using an adaptor signature to effect a CoinSwap so that instead of the hash preimages all being onchain, where someone can connect them, the equivalent of a hash preimage is actually embedded inside the nonce in this adaptor signature, which means that nobody sees it anyway. It means we don’t need this overlay protocol, we can make a much simpler CoinSwap. Are we doing that kind of thing across multiple hops but we are using ECDSA adaptor signatures? Is that what we are doing?
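To make the linearity argument concrete, here is a minimal Python sketch of the per-hop tweak bookkeeping (not from the discussion; a real construction locks coins to curve points via adaptor signatures, whereas this only tracks the scalars, and the names and path are purely illustrative):

```python
# Toy sketch of the anonymous multihop locks idea: every link on the path is
# locked to a different secret, built by adding per-hop tweaks, and each hop
# opens its incoming lock by subtracting its own tweak from the secret it
# learns on its outgoing lock.
import secrets

# secp256k1 group order, used here only as the modulus for scalar arithmetic.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Path: Alice (sender) -> Bob -> Carol -> Dave (receiver).
base = secrets.randbelow(N)                  # secret for the first link
tweak = {"Bob": secrets.randbelow(N),        # one tweak per intermediary
         "Carol": secrets.randbelow(N)}

# Onchain each lock would be the point secret*G; scalars shown for clarity.
lock = {
    "Alice->Bob":  base,
    "Bob->Carol":  (base + tweak["Bob"]) % N,
    "Carol->Dave": (base + tweak["Bob"] + tweak["Carol"]) % N,
}

# Settlement runs backwards: the sender gave Dave the last secret, and each
# intermediary subtracts its own tweak to open the lock on its incoming link.
dave_reveals = lock["Carol->Dave"]
carol_reveals = (dave_reveals - tweak["Carol"]) % N
bob_reveals = (carol_reveals - tweak["Bob"]) % N

assert carol_reveals == lock["Bob->Carol"]
assert bob_reveals == lock["Alice->Bob"]
# Unlike a shared hash preimage, an observer who sees two of these secrets
# cannot link them without knowing the tweaks.
```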

BM: I don’t know that adaptor signatures are required or used here. My understanding of Belcher’s proposal was that Alice who is doing the CoinSwap is actually mediating multiple CoinSwaps at the same time. Two of the people she is mediating, Bob and Charlie, Bob doesn’t know Charlie exists. It is centrally routed around Alice as opposed to the linear chain that happens on Lightning.

AG: There are two parts. There is a multi-transaction part where Alice parallelizes. Then there is a routing part. He talks about them in two separate sections, and he talks about combining multi-transaction and routing together. There is routing CoinSwaps to avoid a single point of trust, and then you’ve got combining multi-transaction with routing. First it is multi-transaction CoinSwaps to avoid amount correlation. That is just Alice and Bob making multiple CoinSwaps between those two parties. Then there is routed CoinSwaps, where it shows an actual chain Alice, Bob, Charlie, Dennis. Finally it talks about combining multi-transaction with routing. Between Alice and Bob they have multi-transaction and between Bob and Charlie they have multi-transaction and so on. It is both effects that are being talked about. In terms of adaptor signatures, he is only talking about adaptor signatures in this document in connection with Succinct Atomic Swaps. He is not including that in the proposal. You would have different hash preimages in each of the hops along a route here. That is different to Lightning.

MF: It is not a route is it? It is a circle from Alice to Bob to Charlie and back to Alice. It is not like a Lightning payment where I am trying to get to a destination and there are a bunch of intermediaries helping me get it to that destination. It is almost like a ring signature?

AG: I disagree. I don’t think it is fundamental that Alice is sending to Alice in that example. Think of it as Alice’s source and Alice’s destination. If her destination happens to be something that she owns then it is a circle, but that is not fundamental to what is going on.

MF: A question on the YouTube from shinobius. “Isn’t the idea to just chain the transactions together without signing/submitting the first “funding” for the routed CoinSwap? That’s what I inferred from the proposal”

AG: I think you guys must be right. The whole thing must be atomic. We are still in the situation where only in the cooperative case would we have proper privacy. Perhaps that reveals that we have extra fragility if we have multiple participants. You already have the concern when you do a single CoinSwap between two participants that if the other guy doesn’t respond in time you are going to have to publish onto the blockchain. It is going to be obvious you did a CoinSwap and maybe you didn’t want that. I am worried that if you have a lot of participants there is the potential for at least one party to drop out. Would that just affect that hop? If you had four hops and the guy who is third along drops out and doesn’t respond then maybe two of the four hops end up broadcasting the hash to chain.

nothingmuch: The parties could in that case choose to still settle cooperatively but the entire route may learn. Suppose Alice sends to Bob who sends to Carol. Carol sends the final payment and then Carol defects and reveals the hashlock. This is why I mistakenly assumed earlier that Chris was relying on adaptor signatures. In that case Bob will still learn where Alice’s funds went. Whereas in the optimal case Bob has no idea where the final payment is sent. He only knows that he is sending to Carol.

AG: Mentioning adaptor signatures makes a lot of sense because it does take away that problem. Then there is the question of whether in an ECDSA context, given recent work by Lloyd Fournier, those kinds of single signer ECDSA things might work. We can do adaptor signatures with ECDSA, that Lindell thing?

Ruben Somsen (RS): I believe that is possible. Lloyd’s work is specific to single signer ECDSA. You wouldn’t have the privacy that Belcher wants. Generally speaking my intuition would be that if you have a multiparty protocol it will either be like Lightning where you have A depends on B depends on C, or you are going to have the protocol that you wrote up, Multiparty S6. I think you’d need to use one of the two in order to make it work. Belcher writes up a bunch of variations on how to do CoinSwaps.

AG: It is either going to be a chain or it is going to be a big network all connected. That S6 thing, it is kind of a chain anyway.

RS: That would be a way to do adaptor signatures.

AG: The point nothingmuch and I were arriving at is that if you are going to do this kind of onchain chain of swaps, where Lightning uses offchain chains of swaps, HTLCs, then if there is one link in the chain that defects it is going to reveal information onchain. You get that with two parties but the risk is much lower. If you have six parties the risk gets higher and higher. If you were to use adaptors then it would avoid that problem because you don’t have the defection problem in the adaptor signature CoinSwap, because the preimage is embedded in the nonce.

RS: I agree with that. Any third party wouldn’t be able to figure out what the preimage is.

AG: If he is going to use ECDSA-2P, which the proposal does because it shares the anonymity set, this makes a lot of sense: then maybe wedge in adaptor signatures, even if it is a weird ECDSA variant of adaptor signatures, if that is possible.

RS: I believe it is compatible with 2P-ECDSA.

BM: It definitely is.

RS: Very generally speaking I think it makes sense to use adaptor signatures. Even in the non-cooperative case you are going to end up with less block space usage so why not if it is possible?

AG: We talked about routing. What about multi-transaction?

I am wondering if I completely misread the entire section on routing through multiple transactions in Belcher’s proposal. What I took away from the write up on GitHub was that you effectively have the taker coordinating between multiple makers with these intermediary UTXOs that are two party ECDSA addresses. You coordinate that route, get everything towards the end of it signed, and then you sign and fund it, to compare it to Lightning. That is binary, it succeeds or fails, with 2P-ECDSA preventing anything conflicting from being signed. You could even enforce staggering delays of hitting the chain with the nLockTime field.

BM: Doesn’t every party have to create a funding transaction? The original proposal had four total transactions. As I understand it Belcher’s proposal has two transactions total per swap. One transaction per party. I think that means in the routing network there is one funding transaction per entity and then you have to close it out. This whole routing network is not offchain like Lightning is. It is all onchain.

nothingmuch: I believe that is correct. In this regard the difference with Lightning goes away. In Lightning the preimages are shared peer-to-peer but the route is derived by the sender. Here, because everybody is on the broadcast network, the blockchain itself, the parties along the route don’t have to communicate with each other directly like they do in Lightning. Everything is mediated by Alice.

AG: Alice must be choosing the set of participants.

nothingmuch: Yes. That is also true of Lightning.

AG: Source routing.

nothingmuch: The difference is in this scenario because there isn’t the offchain aspect, Bob and Carol are not in some sort of long term arrangement between them in a payment channel. The only situation where they need to coordinate with each other is in case the payment doesn’t go through. They have the blockchain for that because it is onchain.

BM: The original proposal had four transactions. Alice swaps with Bob. Both Alice and Bob have to create a funding transaction. At the end of the swap they have an exit transaction where they retrieve the funds to their wallet. It is four total transactions. Using the notion that Chris Belcher talked about, swapping private keys where it is effectively a 2-of-2 multisig, at the end of the process I am going to give you one of the two private keys and you already have the other one. Now you have unilateral control and it doesn’t need to go onchain.

AG: I think you are talking about Succinct Atomic Swaps which is Ruben Somsen’s thing.

CoinSwap comparison with Succinct Atomic Swaps

https://gist.github.com/RubenSomsen/8853a66a64825716f51b409be528355f

RS: There are two things about the Succinct Atomic Swap. The thing you are talking about now is possible with regular atomic swaps as well. You do the swap, you don’t broadcast the final transaction, you just give each other your key of the funding transaction. The problem you have then is that there is still this refund transaction that becomes valid after some period of time. But it allows you to direct the outputs towards whatever you want before that timelock expires.

BM: You are still going to have to exit onchain so we are still talking four total transactions. You can delay the final exit but I cannot reuse that commitment transaction for another swap. If Alice and Bob set up the two commitment transactions that have to be executed or cancelled and if I want to swap with Charlie I need to create a new commitment transaction involving Charlie’s key.

RS: The point there is that it is another funding transaction. You go from funding transaction to funding transaction and you skip the settlement phase because you have full control over the UTXO and can start funding again. As long as you react before that timelock expires, before the refund transaction becomes valid you can direct the funds anywhere you want including another swap. That is the basic premise, if you have a server that does a lot of swapping or you want to swap a bunch of times then you do not have two transactions, you only have one transaction per swap. Succinct Atomic Swaps are a separate thing where you don’t have the timelock on one of the two outputs that you are swapping. That gives you some additional benefits because then that one doesn’t expire.

BM: Worst case scenario there are four transactions per swap, best case scenario two transactions per atomic swap.

RS: The worst case is that your swap doesn’t go through. If your swap doesn’t go through it is four transactions.

BM: I was really hoping this was doing swaps offchain.

CoinSwap comparison with Lightning

AG: That brings up the interesting question, which is raised in the document, of how does this compare with Lightning? Shouldn’t we just use Lightning? He talks about several angles to that.

BM: Let’s assume there are multiple Lightning Networks here, one on Bitcoin and one on Litecoin and I can swap between those two. Isn’t that far superior? I don’t think anyone has really done that except in small experiments.

AG: This isn’t attempting to replace cross blockchain swapping or trading. It is attempting to provide a new functionality for privacy in Bitcoin. The question of why don’t we just use Lightning I think is a good one but it is addressed specifically in a whole section in the document. His first advocacy for this approach really surprised me. It wasn’t the first one I would think of. “CoinSwap can be adopted unilaterally and is onchain.” He is pointing out that in a world where your friendly local Bitcoin exchange or similar company decides not to adopt Lightning and you are sat there twiddling your thumbs getting annoyed with them, this can be done today and doesn’t require anyone’s permission. If you can make payments via either a CoinSwap or some complex routed CoinSwap then you don’t need the permission of the recipient of the payment. It is an interesting point to lead with. He makes a couple of other points about liquidity and size. When I was looking at this in 2017 and writing code for it I was thinking is it worth looking into this as a separate technology considering Lightning in 2017 was being developed actively and was almost ready for prime time? It may only have a small role but it might have a role simply because it might be able to support larger sizes. It could be a more heavyweight protocol. It doesn’t have the same advantages as an actual offchain protocol but it creates a privacy effect similar to what Lightning does and perhaps for a very large payment. He also talks about sybil resistance and he is talking about fidelity bonds there, something he worked on recently. We recently merged that code into Joinmarket although it isn’t active yet. The idea of fighting sybil attacks with fidelity bonds is interesting. I haven’t thought that through in the context of CoinSwap, how that is different. He makes two or three arguments as to why it is still an interesting thing even though Lightning exists.

BM: I don’t buy his argument that onchain is good. Onchain payments are dead. We should not be using them, we should be using Lightning everywhere. It is going to take time for the market to adopt that. The argument that somebody is still using BitPay from 2014 is not a good argument.

AG: Onchain consumer payments are dead? I would almost but not entirely agree with you, and they are not a particularly interesting thing. They exist in small pockets, that is fine. Bitcoin is not just consumer payments.

RS: I think it is fundamentally different. With Lightning you have a channel and you still have to open the channel. When you are opening it it won’t be clear until after you’ve closed the channel how much of that money is going to be yours. But it is clear that is a UTXO you are controlling. Before you open the Lightning channel you still need some kind of privacy. At the very least swapping into a Lightning channel or something along those lines seems useful to me. I think it is orthogonal in a sense and there does seem to be a use case even if Lightning becomes amazing and perfect and everybody uses only Lightning. I think it still helps with privacy in the sense that you still have some onchain footprint.

nothingmuch: I would go further. It is not just orthogonal it is also composable. This is exactly why I thought it is better to wait to bring up submarine swaps because that is exactly the use case. You are still swapping on Bitcoin but you are swapping an onchain and an offchain balance.

MF: To be clear, Chris Belcher’s design is fully onchain. Everything is hitting the chain and the part of the document which Adam was discussing is comparing it to Lightning. Chris is saying there is not enough liquidity on Lightning. If you want to do large amounts Lightning is not up to the job and perhaps never will be for large amounts.

BM: That is a very valid argument. Lightning also forces you to put your keys online to a certain extent. It may be very useful for retail payments but is not going to be useful for high volume traders. Does this give you better properties as far as keeping your keys offline than Lightning? I think it does.

AG: It is a sliding scale really. It does yeah. You could do this entirely with a hardware wallet for example.

BM: Not necessarily. The 2P-ECDSA has multiple rounds of communication. That means you have to put your keys online too.

nothingmuch: These differences are not fundamental though. It is a spectrum of design choices and implementation choices and a function of the adoption. I think it is difficult to speculate at our current vantage point in the present whether Lightning is going to have liquidity issues in two, three years time.

MF: If we are to move briefly onto submarine swaps, when you are swapping Bitcoin onchain for Bitcoin on Lightning then the liquidity issue really comes into play because you could expect it to be a large amount onchain and a smaller amount on Lightning yet if you are doing it for privacy reasons that amount needs to be the same. Amounts that make sense economically to be transferred onchain are not necessarily going to be the same amounts that are being transferred on Lightning.

nothingmuch: What I can see in my mind in an ideal future is people having access to Coinjoins, to CoinSwaps, to Lightning with public as well as private channels, for various different purposes. Depending on your threat model per payment you can choose your desired level of finality or desired level of hot wallet security, desired level of privacy etc. None of these trade-offs are fundamental to any of these technologies. In principle it is more of a question of implementation. Right now it is very impractical to say you have fluidity with these things. We can envisage a situation where you have a smart wallet that only has watching keys and manages your funds. If you are going to make a very small payment it may make it via Lightning. If you are going to make a large payment to a party who you don’t trust it is going to do an onchain CoinSwap to give you stronger privacy and stronger finality. It also lets you move your balance between a hot wallet and a cold wallet or something like that. The entire spectrum is in principle available. It is just a matter of actually implementing that.

MF: Ruben is discussing in the chat submarine swaps plus statechains. I don’t think we have time to go into statechains.

RS: That’s a whole other topic, maybe next time.

Practicalities of implementing CoinSwap

AG: Can we address the serious practical question? CoinSwaps are an idea, they are pretty old. What is going to be the motivation? I think we all agree that it could really improve fungibility and privacy. Whether Lightning is better than it or not is not really the issue because at the end of the day we have to do transactions onchain anyway. People might say “I don’t want to do a CoinSwap because I might get a dirty UTXO.” There is that and then it is a pain. You are going to sit around and wait 100 blocks? Nobody wants to wait 100 blocks to do anything at all.

MF: Where is that 100 blocks coming from?

AG: I just made that up but the point is you have to be safe from re-orgs whereas you don’t with an ordinary payment.

MF: In practice Bitcoin doesn’t have many re-orgs of more than a block or two. That applies to s***coins but for Bitcoin that is not a problem.

AG: If you are paying for a house how many confirmations do you want? It is a large amount and we are largely talking about larger amounts here. Maybe it isn’t 100 blocks, maybe that is ridiculous. Maybe it is 50. There is cross block interactivity. I am setting aside all the complexity involved with things like ECDSA-2P multisig which is a very exotic construction. What is the practical reality here? Is it going to be implemented? How are we going to get people to use it? Is a market mechanism an important part of getting people to use it? That might overcome people’s concerns about it being a dirty UTXO. If there is a market incentive to accept a “dirty” UTXO maybe it flips the whole perception. What are the actual practicalities here? Is it going to happen?

MF: That incentive question sounds the same as Joinmarket doesn’t it?

AG: He mentions it in the document as I recall.

I think that CoinSwaps have a lot of potential synergies with Coinjoins in the sense of a way to maintain individual anonymity post Coinjoin with those UTXOs as they fragment. Eventually you could hoover them back into a Coinjoin and repeat a synergistic cycle between the two. After a Coinjoin you have to be vigilant with that otherwise you undo the gains from the Coinjoin.

RS: I believe the change outputs in a Coinjoin have amounts that aren’t the same as everybody else’s. That becomes a tainted UTXO that you can’t spend together with your Coinjoined coins. Maybe for that change amount it would make sense to do a CoinSwap or something along those lines.

In a general sense of a way to further disconnect your post Coinjoin spending activity. Even keeping things isolated trying to do things like Payjoin, eventually things are going to narrow themselves down in a long enough time horizon. CoinSwap is another postmix tool in that situation.

AG: I am not sure it really answers my question. That is an interesting area of discussion of how you can make even more powerful constructions but I am asking about practicality. Will people do this and what are the vectors that decide whether we will end up getting people doing this or not?

MF: I don’t like that as the number of parties increases the chances of it failing increase. You have the same dynamic on Lightning where the more hops you have on your route the more chance that one of those hops isn’t going to work and so your payment is going to fail. This is even worse because at least in Lightning all the intermediaries are playing the role of a routing node and most routing nodes do what they are supposed to be doing. In this sense it is creating a privacy scheme where people don’t really know what they are doing. Someone could drop out and make the CoinSwap fail. As the parties increase I can’t see this being a good scheme.

Maybe people selling tainted coins on the dark market to people with clean coins who want to patronize dark markets.

AG: Belcher has a section on a liquidity market. It is just the flip side: what if I participate in a CoinSwap, I’m a normal person doing it in a normal way, and I end up with a direct 100 percent taint to some criminal activity? It is a very valid question. The point Bob was making was that when you do Coinjoins you don’t get this binary of either it is totally a criminal coin or it is not. You get this 1 percent taint or 2 percent taint or whatever which is something people can stomach more easily. I am ideological about this but most people are not like me. It is a concern in practice, in reality, whether we have this binary taint problem.

RS: Isn’t the goal to make it so that everybody’s coins are tainted? The heuristic needs to break. You are paying someone with a 100 percent tainted coin according to some heuristic but then it is not a 100 percent tainted coin because you are not involved with that. We need enough people to do this in order for the heuristic to break so this can no longer be a reliable way of doing business. One thing I am thinking of, more to your question, is if we have things like Payswap where you are paying a merchant and simultaneously doing a swap. If the merchant receives some tainted coin and they want to go to an exchange with those tainted coins what do you expect? Either we have this whitelist system where you only take the coins that some authority said are clean and everybody checks off the list, or it becomes unmanageable. I hope it becomes unmanageable but I do agree with your point that it is something that people are going to think about. Do I want to swap my coins if I have a reasonably ok coin and I am going to get some really bad coin from the dark market or something? That is definitely a concern. The previous point on tainted coins and untainted coins forming two of these markets, that would be a real disaster. There needs to be some flow that allows those two to intermingle in a way that is not traceable. If you really have two separate coins, clean ones and black ones, that is going to be terrible.

I think CoinSwaps on Coinjoins make it much more practical because you don’t have to worry about the amounts anymore. That is a huge win there because you are doing equal amounts anyway. On the other hand what is the downside here? The only downside I can see is that now you don’t have this cumulative effect of CoinSwap obfuscating all the transactions on the blockchain. I would argue that wouldn’t happen anyway because of wallet fingerprinting. Coinjoin is a wallet fingerprint and it still happens but only on the Coinjoin outputs. I can imagine a future where there are very small Coinjoins, very fast, and this is good for 99 percent of the people. For the 1 percent who are the Julian Assanges of this world they might do a CoinSwap on those equal amounts. Everyone is happy, grandma can use Coinjoins with very few user experience costs or drawbacks.

BM: I agree. I don’t think it is either or, it is both. I think the next interesting question is what is the best way to combine these two? Can I make my commitment to a CoinSwap also be the input to a Coinjoin? Or the output from a Coinjoin be an input to a CoinSwap? I think that is the next iteration of this set of ideas that combines the best of both worlds.

AG: I think an interesting model is the CoinjoinXT thing, one of the things I had with it at that time when I was trying to figure out how do we address amount correlation across multiple transactions, I realized if you have a Lightning channel open as one of the endpoints… Think of a CoinjoinXT as a multi transaction Coinjoin. But it wouldn’t have to be a Lightning channel open, it could just as easily be a CoinSwap.

BM: I guess there are three concepts here. You’ve got Coinjoins, CoinSwaps and Lightning channel opens and closes. In principle any of those three could be any of those other three. The output of a Coinjoin could be a Lightning opening, it could be a CoinSwap. I should be able to compose these things. The extent to which we are successful in creating a protocol which naturally composes those three things and picks which one is best for the circumstance and increases the anonymity set of all three while decreasing the taint of all three as well, that is the ideal case.

Which should come first? CoinSwap or Coinjoin? Which makes more sense?

BM: We already have Coinjoin and Lightning.

RS: I like the argument of Coinjoining first and then using the result in a CoinSwap. That seems pretty good in the sense that you do have some anonymity already. Whoever receives that UTXO is not going to receive a fully tainted UTXO, it is already mixed with other Coinjoins.

nothingmuch: For completeness, submarine swaps also bridge the gap. That pertains to Adam’s point about CoinjoinXT. You can effectively have a Coinjoin transaction where one of the outputs is a submarine swap that moves some of the balance into a Lightning channel just like you can have a CoinSwap transaction either funded or settled through a Coinjoin. An interesting post from the Bitcoin dev mailing list recently is about batched CoinSwaps where you have a single counterparty as a maker servicing multiple takers’ CoinSwaps simultaneously which again blurs the boundary between Coinjoins and CoinSwaps.

Succinct Atomic Swaps

https://gist.github.com/RubenSomsen/c9f0a92493e06b0e29acced61ca9f49a

RS: The main thing here is that with a regular atomic swap you set up a channel between Alice and Bob on two UTXOs. Then there is some kind of secret: one party can take the coin from the other party and simultaneously reveal their secret. That secret allows the other side to happen as well. With Succinct Atomic Swaps there are two secrets and they are both inside of one UTXO. The second UTXO is literally just a combination of those two secrets. Alice has a secret, Bob has a secret. Either the first UTXO gets refunded to Alice and then Alice’s secret is revealed, or Bob takes it and Bob’s secret is revealed. That determines who gets the second UTXO. Either they both get refunded or they do the swap. This creates this very asymmetric shape where one side has all the transaction complexity, where ideally if everyone cooperates it is just a single transaction and if people don’t cooperate it ends up being roughly three transactions. On the other side it is literally one transaction that settles immediately as soon as the timelocked transaction has completed.
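A toy sketch of that “two secrets, one UTXO” idea (not part of the discussion; in practice the combined secret corresponds to an aggregated key such as MuSig or 2P-ECDSA, and only the scalar arithmetic is shown here, with illustrative names):

```python
# Toy illustration: the second UTXO is controlled by the combination of
# Alice's and Bob's secrets. Revealing one secret hands full control to the
# party who already holds the other, which is what settles the swap.
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

alice_secret = secrets.randbelow(N)
bob_secret = secrets.randbelow(N)

# The second UTXO is locked to the combined key; onchain this would be the
# point (alice_secret + bob_secret) * G behind an aggregated signature.
combined_key = (alice_secret + bob_secret) % N

# Swap succeeds: Bob sweeps the first UTXO, revealing bob_secret to Alice,
# who already knows alice_secret and so gains unilateral control.
alice_now_knows = (alice_secret + bob_secret) % N
assert alice_now_knows == combined_key

# Swap cancelled: the refund of the first UTXO reveals alice_secret to Bob,
# who already knows bob_secret and so keeps unilateral control.
bob_now_knows = (bob_secret + alice_secret) % N
assert bob_now_knows == combined_key
```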

MF: We discussed earlier that Chris Belcher’s CoinSwap, everything is going onchain and that is creating a cost and impact on block space. One of the key differences between CoinSwap and Succinct Atomic Swaps is more stuff is going offchain and less needs to go onchain. You have cut it down to two transactions.

RS: Cutting it down to two transactions is already possible in the original atomic swap as well, if you assume either you are going to transact again before the timelock or you are trusting your counterparty not to try to claim the refund. There is always a refund transaction on a regular atomic swap. You can set it up in such a way that it becomes like a Lightning channel. If you try to claim the refund there is a relative timelock that starts ticking. If you respond before that relative timelock ends the refund can be cancelled. You do the swap, there is still the risk that your counterparty tries to claim the refund. If they do so you have to respond with yet another transaction. The downside to that is that it becomes three transactions. The worst case on the traditional atomic swaps would be both sides try to claim the refund transaction. If you want the channel to be open indefinitely, at that point you would have to respond. You could have a worst case of six onchain transactions. That seems pretty terrible. That is one way of doing it if you want to do the traditional style and you want to make it two transactions only. You assume that the refund transaction will never be attempted to be claimed. But with Succinct Atomic Swaps one side is literally settled. Once you learn the secret the money is yours. Or if you tell the counterparty your secret then they have the UTXO. It is one or the other. You only have this watchtower-like transaction where you have to pay attention on one of the two sides. That cuts the worst case down to four transactions. It is either equivalent to regular atomic swaps or, if you don’t want the watchtower construction and you do want to settle, then it is going to be three transactions. It is superior in most ways if what you are doing is a regular swap, one person to another. With multi swap protocols or even the partially blind atomic swap, those protocols don’t seem to play well with Succinct Atomic Swaps. That is the downside. It depends on how fancy your atomic swap needs to be. If it is just a one on one swap it seems like Succinct Atomic Swaps are almost always superior.

MF: As long as you’ve got the online requirement. Chris’ CoinSwap doesn’t need the online requirement or the watchtower requirement.

RS: Even without that it is three transactions versus four transactions. Even without the online requirement it is still superior. What Chris is doing is saying “Let’s swap and then let’s swap again.” That way you need two transactions instead of four transactions. That is true but that is also true of Succinct Atomic Swaps. It depends on what your use case is. If you are constantly swapping then it doesn’t really matter. Only the final transaction you might want to consider doing a Succinct Atomic Swap because then at least one of the two parties doesn’t have to close out their swap session. It seems to be superior in most other cases.

nothingmuch: I wouldn’t say it is a requirement to be online rather it is a choice. You can choose to save on one transaction if you stay online but you don’t have to which is not the case for Lightning. With Lightning if there is a non-cooperative close you must stay online.

RS: It would be like closing your Lightning channel. You could choose to close your Lightning channel. You don’t have to be online. It is similar in that sense. You have the option to close it out.

AG: I think nothingmuch’s point is important there. Because there is a punishment mechanism there is a game theoretic aspect to a Lightning channel that doesn’t exist in a pure atomic swap or CoinSwap.

MF: That’s because you are keeping your channel open for a long period of time. In a swap you are just doing one swap, you are not opening a swap channel.

AG: There aren’t multiple states. The whole idea with an offchain protocol is that you have to overwrite a previous state without going to the blockchain. You use a punishment mechanism. With CoinSwap we just use the blockchain always.

MF: With Succinct Atomic Swaps you do have that punishment like mechanism.

AG: Yes which is the key point. It is a very beautiful protocol, I was really impressed reading it. That’s the crux of the matter. It does have a punishment mechanism to achieve its goal.

RS: nothingmuch’s point is you can choose to close it and then it is a three transaction protocol. It is an optional thing. There are two tricks. One is the asymmetric swap where one transaction doesn’t even need a timelock so you don’t need to worry about it. The second trick is the watchtower requirement. The watchtower requirement is completely separate. You can transfer that watchtower requirement to a regular atomic swap but then you have the watchtower requirement on both UTXOs that you are swapping. In order to have some kind of watchtower requirement you would have to have yet another transaction. There needs to be a trigger transaction that starts your claim to a refund. You broadcast one transaction and that signals “If you don’t do anything I am going to get my refund.” The party that is the rightful owner after a swap, Bob has the UTXO that Alice first created. Alice can claim the refund. Bob has to respond before Alice succeeds in claiming her refund.

MF: You briefly mentioned Jonas Nick’s partially blind atomic swap using adaptor signatures. With PTLCs you are getting the privacy in that you are not having a script onchain. What additional privacy are you getting from this partially blind atomic swap?

RS: I think it is already outdated because there is this other paper called A2L which is a better version from what I am told. The general idea is you have one party in the middle that facilitates swaps. Alice, Bob and Carol all propose a swap to the server S. On the other side the server doesn’t really know if they are swapping with Alice, Bob or Carol. But it is an atomic swap. The end result is similar to a Coinjoin, a blind Chaumian Coinjoin, but there is no single transaction on the blockchain. Instead you just see separate transactions like 1 Bitcoin transfers or something. There are two things I should mention that might be of interest. You can use MuSig on one of the swap transactions, the one that doesn’t require a timelock. You are not actually ever spending from it. It can be a regular, single key ECDSA UTXO. You can do Succinct Atomic Swaps on that, for one of the two UTXOs. The one that requires the timelocks where you reveal the secret you cannot do it. You would need a 2P-ECDSA or something there. On the other side it can be single key MuSig. You reveal half of the key and then that is efficient. That is very useful for privacy. The second thing that does is, because you are never signing from it, if you want to fund that with a Coinjoin that is possible. That is possible today. That would be a very interesting way of doing it. You set up a Succinct Atomic Swap and then on one side you do the complex timelocked transactions. On the other side you fund it through a Coinjoin. Normally you would need to know the txid or something along those lines because you need to create a transaction spending from it before you start the protocol. That is not necessary here at all. You could fund it through a Coinjoin or something along those lines.

AG: Yeah because usually we have to symmetrically fund these two multisigs.

Externally Funded Succinct Atomic Swaps (Max Hillebrand)

https://github.com/MaxHillebrand/research/blob/master/ExternallyFundedSuccinctAtomicSwaps.md

MF: Max you have this research Externally Funded Succinct Atomic Swaps based on Ruben’s Succinct Atomic Swaps. Perhaps you could explain what use case you are addressing here and how it works?

MH: This is utilizing Succinct Atomic Swaps with one tweak in the user experience, a very important one with consequences. There is a client and a server relationship. The server coordinates the swap and the client utilizes the swap. This is the first thing. However the second aspect is that it might be that the user is CoinSwapping the coin that he is receiving in the future. There is a setup transaction where the coordinator commits Bitcoin into a swap script. This is at time point 1. Later at time point 2 the user can send any Bitcoin from anywhere to a specific address that the coordinator can spend from. The interesting part here is that the second transaction that triggers the setup transaction does not have to be from the user itself. This is very interesting. In the original Succinct Atomic Swap this was the Litecoin side of the chain basically. This side does not have to speak the CoinSwap protocol because it just sends to a regular address. Here a user, or a different person who pays the user in our context, can do so directly into a CoinSwap address. This would mean at a UX level that the user receives non-anonymous coins from anyone and instantly, as soon as he receives them, he gains access to funds out of a CoinSwap which presumably has a lot of anonymity. Therefore the user buys onchain privacy very quickly by swapping any transaction in which he receives Bitcoin into a coin which has a lot of privacy.

RS: Maybe I can give one example that I am thinking of now that might clarify for people. Obviously I have talked to Max so I know what he is talking about. I can imagine it is complicated to follow. The end result is that for instance what you could do is withdraw from an exchange and the money that the exchange is sending is funding a CoinSwap. The address that the exchange is sending to is not actually where you are receiving the money because you are instantly swapping those coins. That would be one example of what you are achieving.

MH: Yes exactly. This is the very nice thing about this aspect because one side of this Succinct Atomic Swap is very stupid basically. It is just a single key address that anyone can fund. This makes it easier in this sense. There are two important downsides that we should consider talking about. The first is that the server needs to go first. This has financial implications. If this is supposed to be a company or a user seeking to be profitable he will have to open a payment channel to a new user which is most likely an anonymous Tor identity with no reputation. Therefore there is a denial of service risk in the sense that an attacker creates multiple identities and lets the server open multiple swaps to the user, a malicious actor in this case, this will be very capital intensive. There is a need to figure out how to make a fee that will pay the service provider in advance when he is committing coins into the swap.

RS: It seems like Lightning would be perfect for that. You send a payment to open a channel and you even pay for the adversarial closing of that channel. If the channel is not closed adversarially you give a partial refund minus whatever fees you are charging.

MH: There is one more important thing to note. Because the server goes first and a potential third party, external user is going to fund the CoinSwap, the tricky thing is the server does not know exactly how much Bitcoin the third party funder is going to send to the user. It might be any amount. It could be 0.1 Bitcoin, it could be 10 Bitcoin. The server does not know in advance.

RS: I do think it depends on the example though. The example I just gave of withdrawing from an exchange and then immediately swapping, in that case you would know how much you are planning to withdraw ahead of time. It depends on what your use case is. Do you have a specific use case in mind where you are not certain how much money you are going to receive? Is it a payment from a customer? What are you thinking?

MH: In the case of the exchange yes you might figure it out. But it will change in nuances, how high the transaction fee will be and how much will be in the output that you get. The example that I had was Alice and Bob go to a restaurant and Alice needs to pay Bob back. Bob knows that roughly 100,000 satoshis will come in but he doesn’t know if it is 95,000 or 110,000 for example. It is not exactly clear how many Bitcoin will be put into this thing. This is very tricky to figure out. What do you guys think? Is this a problem? To not know in advance how much Bitcoin will be swapped when doing the setup?

MF: You mean a problem for this third party, this privacy expert?

MH: I think it is mainly a problem for the swap partner.

RS: One thing I am thinking is you can cancel the swap if you are not satisfied with the amount. Then before you are satisfied with the amount maybe you can do some kind of additional Lightning payment. It makes it more complex when you require the Lightning Network: a Lightning payment if the amount is not exactly what you were expecting. There is a little bit of a trust factor there. Or you can make it so that the Lightning payment only goes through if the swap goes through. That should be possible I think. That also relies on a secret. Maybe you can solve it like that but obviously you get a lot of overhead trying to utilize the Lightning Network while doing a swap. That may not be very practical but that would be one way of doing it.

MH: It is an interesting approach. What Ruben and I figured out eventually was that if the amount is actually unknown then what the service provider can do is, instead of sending to a regular swap with a 2-of-2 pre-signed refund transaction, he uses a payment channel. That has the interesting aspect that the user Alice might think that she will receive under 1 Bitcoin but above 0.5 Bitcoin. The service provider can open a payment channel of 1 Bitcoin to the user with all of the money on the side of the server. When an external funder, Carol, comes along and pays Alice 0.7 Bitcoin she pays this into an address that the server can spend, but only after they negotiate a payment channel update in that channel where the user will have 0.7 and the server 0.3. I am still not exactly sure if this actually works, to have a Succinct Atomic Swap with that payment channel set up on one side. This changes the timeline of the amounts a bit because there still needs to be one more payment channel update after the transaction is received from the third party funder.

RS: To very briefly summarize, the idea is to allow somebody to make a payment and instead of receiving it on that address you are receiving it on a CoinSwap. You are using that amount immediately to swap. The other side of the swap can be a channel where every time you do a swap you move a little bit of the balance of that Lightning channel to the other side. You can then use that same channel for multiple swaps. After maybe three payments are received now the channel is full and you close the channel. Something along those lines.

MH: Making a payment in this scheme as a user of that coin that was just swapped is basically closing the channel with the cooperative closing transaction. And what someone could do potentially, I am still not sure, is do splicing. You close the channel in the input and then you open a channel again in the output of a transaction. Even better maybe would be to swap some balances. If the user has two channels open to the server, one smaller and one larger, then the user could swap the value of the smaller channel into the larger channel with an atomic swap, a Lightning Network channel update, and then close the smaller channel, in a way that there is no change. As soon as the server and client are online and can communicate I think they can negotiate nice swapping ceremonies.

RS: I believe it is theoretically possible.

MH: The very nice thing with 2P-ECDSA and adaptor signature ECDSA is that this all looks like single public keys and nothing else in the cooperative case. This is fantastic.

Pieter Wuille, Elichai Turkel, Russell O’Connor

Date: July 21, 2020

Transcript By: Michael Folkson

Tags: Taproot, Mast, Simplicity

Category: Meetup

Media: https://www.youtube.com/watch?v=bPcguc108QM

Pastebin of the resources discussed: https://pastebin.com/vsT3DNqW

Transcript of Socratic Seminar on BIP-Schnorr: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr/

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is a Socratic Seminar on BIP-Taproot. Sorry for the delay for the people on the YouTube livestream. This is in partnership between London BitDevs and Bitcoin Munich. We were going to have two events on the same day at the same time so we thought rather than have two events clashing we would combine them, have the same event and have people from both London and Munich on the call. But there are people from everywhere not just London and Munich. A few words on Bitcoin Munich. It is the original Bitcoin meetup in Munich. I’ve had the pleasure of attending a Socratic last year, the week before The Lightning Conference. It has been around for years, certainly a few more years than we have at London BitDevs. Socratic Seminars, we’ve had a few in the past, I don’t need to speak about Socratic Seminars. Originated at BitDevs in New York, discussion not presentation, feel free to interrupt, ask questions. Questions, comments on YouTube, we will be monitoring the YouTube, we will be monitoring Twitter and IRC ##ldnbitcoindevs. If you are watching the livestream questions and comments are very welcome. There will be a transcript as well but please don’t let that put you off participating. We can edit the transcript afterwards, we can edit the video afterwards, we are not trying to catch people out or whatever. This is purely for educational purposes. The subject is Taproot, Tapscript is also fine, BIP 341, BIP 342. We have already had a Socratic Seminar on BIP-Schnorr so we’ll avoid Schnorr generally but it is fine to stray onto Schnorr because obviously there is interaction between Schnorr and Taproot. What we won’t be discussing though is activation. No discussion on activation. Maybe we will have a Socratic next month on activation. If you are interested in activation join the IRC channel ##taproot-activation. We start with introductions. If you want to do an introduction please do, who you are, what you are working on and what you are interested in in terms of Taproot stuff.

Emzy (E): I am Emzy, I am involved in running servers for Bisq, the decentralized exchange and I am really thrilled about Taproot and learning more Bitcoin in depth.

Albert M (AM): I am Albert, I am an information security consultant and I am also interested in the privacy aspects of this new proposal.

Pieter Wuille (PW): I’m Pieter, I work at Blockstream and on Bitcoin and I am one of the co-authors of the proposal being discussed.

Auriol (A): I am Auriol. I am just curious as to how the conversation has transitioned over the past year. On the topic I am very interested in the privacy aspects of this new technology.

Elichai Turkel (ET): Hi I’m Elichai, I work at DAGlabs and I work on Bitcoin and libsecp. I hope we can get this in the next year or so.

Will Clark (WC): I am Will, I have been working with goTenna doing some Lightning stuff over mesh networks. Like Albert and Auriol I am interested in the privacy benefits of this.

Introduction to MAST

MF: There is a reading list that I shared. What we normally do is we start from basics. For the people, there are a couple of new people on the call, we’ll start with MAST and discuss and explain how MAST works. Absolute basics does someone want to explain a Merkle tree?

WC: It is a structure of hashes where you pair up the hashes. You need an even number of hashes, so if you’ve got an odd number then you duplicate the last hash. You pair them up in a binary tree structure so that you can show a path along the tree whose size grows only logarithmically.
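A minimal Python sketch of that pairwise construction (illustrative only and not from the discussion; Bitcoin’s transaction Merkle tree uses double-SHA256 and BIP 341 uses tagged hashes, so this just shows the shape):

```python
# Toy Merkle root: hash the leaves, then repeatedly hash adjacent pairs,
# duplicating the last hash when a level has an odd number of elements.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate the last hash
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# With four leaves, linking one leaf to the root takes a proof of only
# two hashes (log2 of the number of leaves).
scripts = [b"script A", b"script B", b"script C", b"script D"]
print(merkle_root(scripts).hex())
```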

MF: Basically we are trying to combine a lot of information within a tree structure. A hash condenses anything, the contents of a whole book, an encyclopedia into just one small digest. With a tree you’ve got a bunch of hashes all on leaves in the tree and they get hashed up pairwise up until the top of the Merkle tree which is called the root. It is a way of condensing a lot of information into a very small digest at the top which is effectively the root. If that is a Merkle tree what was the idea behind MAST?

AM: A Merkle tree is a structure of inputs and outputs combined with the coinbase?

PW: That is one use of Merkle trees in Bitcoin but not the one we are talking about here.

A: Different conditions are covered in the Merkle tree; instead of combining all that information into a single hash, it separates out different hashes so they can remain individual, as opposed to combining the information and revealing too much about all of it in one lump sum.

E: My understanding is that if you have different contracts for moving value on Bitcoin you can reveal only one of the paths in the Merkle tree and use it without showing the other paths that are possible.

MF: Let’s go through the reading list then because there are some interesting intricacies around MAST and it dates back to 2012, 2013 as most things do. The first link is the Socratic Seminar that we had on Schnorr before. Then there is the transcript to Tim Ruffing’s presentation. Tim Ruffing presented last month on Taproot and Schnorr multisignature and threshold signature schemes. Then we have links on MAST.

Aaron van Wirdum Bitcoin Magazine article on MAST (2016)

https://bitcoinmagazine.com/articles/the-next-step-to-improve-bitcoin-s-flexibility-scalability-and-privacy-is-called-mast-1476388597

MF: The first link is that Aaron van Wirdum Bitcoin Magazine article on MAST. That is a good intro to what MAST is. He describes it as essentially merging the potential of P2SH with that of Merkle trees. He gives a primer of Merkle trees and says that instead of locking Bitcoin up in a single script, with MAST the same Bitcoin can be locked up into a series of different scripts which was effectively what Emzy was saying.

David Harding article on What is a Bitcoin Merklized Abstract Syntax Tree (2017)?

https://bitcointechtalk.com/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast-33fdf2da5e2f

MF: I found a link which I will share in the chat, which was David Harding’s article from 2017 on what is a Bitcoin Merklized Abstract Syntax Tree? That has some info that I hadn’t seen before, including the fact that Russell O’Connor is universally credited with first describing MAST in a discussion. Russell is apparently on the call. That first idea of MAST, perhaps Pieter you can talk about the discussion you had back then?

PW: I really need Russell here because he will disagree with me. I seem to recall that the first idea of breaking up a script into a Merkle tree of spendability conditions is something that arrived in a private discussion I had with Russell a number of years ago. In my mind it has always been he who came up with it but maybe he thinks different.

BIP 114 and BIP 116 MAST proposals

https://github.com/bitcoin/bips/blob/master/bip-0114.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki

MF: Some of the ideas around MAST, there was BIP 116 which was OP_MERKLEBRANCHVERIFY, that was from Mark Friedenbach. There was also BIP 114 which was another detailed BIP from Johnson Lau on Merklized Abstract Syntax Trees. There did seem to be a considerable effort to get MAST formalized back in 2016-17. When MAST was being finalized, we’ll come onto Key Tree Signatures which you discussed at SF Bitcoin Devs at around the same time, there did seem to be an effort to get MAST into Bitcoin even before SegWit. Perhaps Pieter you could enlighten us with your thoughts on these BIPs and some of this work done by Johnson Lau and Mark Friedenbach?

PW: I think they just didn’t have enough momentum at the time. There were a lot of things to do around SegWit and it is hard to focus on multiple things. It is really hard to say what causes one thing to get more traction than others. I think both BIP 114 and MERKLEBRANCHVERIFY were more flexible than what is now included in BIP 341. MERKLEBRANCHVERIFY didn’t really construct a Merkle root structure itself in script validation but it enabled you to implement it yourself inside the script language which is more flexible. It can do something like grab multiple things from a Merkle tree. Say you have a thousand keys and you want to do a 3-of-10 out of it for example, this is not something that BIP 341 can do. At the same time by not doing it as part of a script but as part of a script structure you get some efficiency gains. It is hard to go into detail but you get some redundancy if you want to create a script that contains a Merkle root and then verifies against that Merkle root that a particular subscript is being executed. It is a trade-off between flexibility in structure and efficiency.

MF: Russell (O’Connor) is here. The very first conversations on MAST, can you remember the discussion? Was it a light bulb moment of "Let’s use Merkle trees and condense scripts into a Merkle tree"? I saw that you were credited with the idea.

Russell O’Connor (RO): It goes back to some possibly private IRC conversations I had with Pieter back in 2012 I believe. At that time I was musing about if we were to design a Bitcoin or blockchain from scratch what would it look like? I am a bit of a language person, I have been interested in Script and Script alternatives. I was like "This concatenation thing that we have, the original concatenation of scripts idea in Bitcoin doesn’t actually really work very well because it was the source of this OP_RETURN bug." It is weird to do computation in the scriptSig half of that concatenation because all you are doing is setting up the environment for the scriptPubKey to execute in. This is reflected in the modern day situation where even in SegWit there is no real scriptSig program in a sense. It just sets up a stack. That is the environment in which the scriptPubKey runs. I am a bit of a functional programmer so I was thinking about alternative functional programs where the inputs would just be the environment that the program runs in and then the program would execute. Then when you start thinking that way, the branches in your case expressions can be pruned away because they don’t have to show up on the blockchain. That is where I got this idea of MAST, where I coined the name MAST, Merklized Abstract Syntax Trees. If you take the abstract syntax and look at its tree or DAG structure then instead of just hashing it as a linear piece of text you can hash it according to the expression constructs. This allows you to immediately start noticing you can prune away unused parts of those expressions. In particular the case expressions that are not executed. That is where that idea came from. Eventually that original idea turned into the design for Simplicity which I have been working on for the last couple of years. But the MAST aspect of that is more general and it appears in Taproot and other earlier proposals.

MF: Russell do you remember looking through the BIPs from Johnson Lau and Mark Friedenbach? Or is it too far away that you’ve forgotten the high level details.

RO: I wasn’t really involved in the construction of those proposals so I am not a good person to discuss them.

MF: Some of the interesting stuff that I saw was this tail call stuff. An implicit tail call execution semantics in P2SH and how “a normal script is supposed to finish with just true or false on the stack. Any script that finishes execution with more than a single element on the stack is in violation of the so-called clean-stack rule and is considered non-standard.” I don’t think we have anybody on the call who has any more details on those BIPs, the Friedenbach and Johnson Lau work. There was also Jeremy Rubin’s paper on Merklized Abstract Syntax Trees which again I don’t think Jeremy is here and I don’t think people on the call remember the details.

PW: One comment I wanted to make is I think what Russell and I talked about originally with the term MAST isn’t exactly what it is referred to now. Correct me if I’m wrong Russell but I think the name MAST better applies to the Simplicity style where you have an actual abstract syntax tree where every node is a Merklization of its subtree as opposed to BIP 114, 116, BIP-Taproot, which is just a Merkle tree of conditions and the scripts are all at the bottom. Does that distinction make sense? In BIP 341 we don’t use the term MAST except as a reference to the name because what it is doing shouldn’t be called MAST. There is no abstract syntax tree.

MF: To clarify all the leaves are at the bottom of the trees, as far down as you need to go.

PW: I think the term MAST should refer to the case where the script is the tree, not where you have a bunch of scripts in the leaves, which is what the modern MAST-named proposals do.

RO: This is a good point. Somebody suggested the alternative reinterpretation of the acronym as Merklized Alternative Script Trees which is maybe a more accurate description of what is going on in Taproot than what is going on in Simplicity where it is actually the script itself that is Merklized rather than the collection of leaves.

PW: To be a bit more concrete, in something that is actually MAST every node would be annotated with an opcode. It would be AND of these two subtrees or OR of these two subtrees, as opposed to pushing all the scripts down at the bottom.

MF: I think we were going to come onto this after we discussed Key Tree Signatures. While we are on it, this is the difference between all the leaves just being standalone scripts versus having combinations of leaves. There could potentially be a design where there are two leaves and you do an OR between those two leaves or an AND between those two leaves. Whereas with Taproot you don’t, you just take one leaf and satisfy that one leaf. Is that correct?

RO: I think that is a fair statement.

Nothingmuch (N): We haven’t really defined what abstract syntax tree means in the wider setting but maybe it makes sense to go over that given that Bitcoin Script is a Forth like language it doesn’t really have syntax per se. The OP_IF, ELSE, THEN are handled a little bit differently than real Forth so you could claim that that has a tree structure. In a hypothetical language with a real syntax tree it makes a lot more sense to treat the programs as hierarchical whereas in Script they are historically encoded as just a linear sequence of symbols. In this regard the tree structure doesn’t really pertain to the language itself. It pertains to combining leaves of this type in the modern proposals into a large disjunction.

PW: When we are talking about “real” MAST it would not be something that is remotely similar to Script today. It is just a hierarchically structured language and every opcode hashes its arguments together. If you compare that with BIP 341 every inner node in the tree is an OR. You cannot have a node that does anything else. I guess that is a difference.

MF: Why does it have to be like that?

PW: Just limitation of the design space. It takes us way too far if you want to do everything. That is my opinion to be clear.

RO: We could talk about the advantages and disadvantages of that design decision. The disadvantage is that you have to pull all your IF statements that you would have in your linear script up to the top level. In particular if you have an IF then ELSE statement followed by a second IF then ELSE statement or block of code you have two control paths that join back together and then split into another two control paths. But when you lift that to the Taproot construction you basically enumerate all the code paths and you have to have four leaves for those four possible ways of traversing those pairs of code paths. This causes a combinatorial explosion in the number of leaves that you have to specify. But of course on the flip side because of the binary tree structure of the Taproot redemption you only need a logarithmic number of Merkle branch nodes to get to any given leaf in general. You only need logarithmic space for the exponentially exploded number of cases and it balances out.

PW: To give an example. Say you want to do a 3-of-1000 multisig. You could write it as a single linear script that just pushes 1000 keys, asks for three signatures and does some scripting to do the verification. In Taproot you would expand this to the exact number of combinations there are for the 3-of-1000, probably in the range of 100 million. Make one leaf for each of the combinations. In a more expressive script you could choose a different trade-off. Just have three different trees that only need 1000 elements. It would be much simpler to construct a script but you also lose some privacy.

MF: I see that there is potential complexity if you are starting to use multiple leaves at the same time in different combinations. The benefit is that you are potentially able to minimize the number of levels you need to go down. If every Tapscript was just a combination of different other Tapscripts you wouldn’t have to go so far down. You wouldn’t have to reveal so many hashes down the different levels which could potentially be an efficiency gain.

PW: Not just an efficiency. It may make it tractable. If I don’t do 3-of-1000 but 6-of-1000 enumerating all combinations isn’t tractable anymore. It is like trillions of combinations you need to go through. Just computing the Merkle root of that is not something you can reasonably do anymore. If you are faced with such a policy that you want to implement you need something else. Presumably that would mean you create a linear script, old style scripting, that does “Check a signature, add it, check a signature, add it, check a signature, add it” and see that it adds up to 6. This works but in a more expressive script language you could still partially Merklize this without blowing up the combination space.
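To put numbers on that, a quick check with Python’s math.comb shows why enumerating every combination as its own leaf is just about workable for 3-of-1000 but not for 6-of-1000:

```python
from math import comb

# Number of leaves needed if every k-of-1000 combination becomes its own Taproot leaf.
print(comb(1000, 3))  # 166,167,000 -- large but enumerable
print(comb(1000, 6))  # roughly 1.4 * 10**15 -- far too many Merkle leaves to even hash
```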

RO: I think the advantage here is that we still use the same script language at the leaves and we get this very easy and very powerful benefit of excluding combinations just by putting this tree structure on an outer layer containing script. Whereas to get the full advantages of a prunable script language it means reinventing script.

MF: We’ll get onto Key Tree Signatures but then you do have to outline all the different combinations of signatures that could perhaps satisfy the script. Pieter had a slide on that SF Bitcoin Devs presentation that we will get onto later which had “First signature, second signature”, the next leaf would be “First signature, third signature” and the next leaf would be “Second signature, third signature”, you did have to outline all the different options. But I don’t understand why you’d have to do that in a script sense. Why do you have to outline all those different combinations? Why can’t you just say “I am going to satisfy a combination of Leaf A and Leaf D”?

PW: You are now talking about why can’t you do this in a linear script?

MF: Yeah

PW: You absolutely can. But it has privacy downsides because you are now revealing your entire policy when you are spending. While if you break it up into a Merkle tree you are only revealing this is the exact keys that signed and there were other options. There were probably many but you don’t reveal what those were. The most extreme is a linear script. Right now in a script language you write out a policy as an imperative program and it is a single program that has everything. The other extreme is what BIP 341 is aiming for, that is you break down your policy in as small pieces as possible and put them in a Merkle tree and now you only reveal the one you actually use. As long as that is tractable, that is usually very close to optimal. But with a more expressive language you have more levels between those two where you can say “I am going to Merklize some things but this part that is intractable I am not going to Merklize.” We chose not to do that in BIP 341 just because of the design space explosion you get. We didn’t want to get into designing a new script language from scratch.

A: How do you know that a Merkle root is in fact the Merkle root for a given tree? Say it is locking up funds for participants, how are participants sure that it is not a leaf of a larger tree or a group of trees? Is there a way to provide proofs against this? What Elichai suggested is that it is as if you are using two preimages. He says that it would break the hash function to do this.

ET: Before Pieter starts talking about problems specific in Merkle trees, there could be a way if you implement the Merkle tree badly that you can fake a node to also be a leaf because of the construction without breaking the hash function. But assuming the Merkle tree is good then you shouldn’t be able to fake that without breaking the hash function.

PW: If a Merkle tree is constructed well it is a cryptographic commitment to the list of its inputs, of the leaves. All the properties that you expect from a hash function really apply. Such as given a particular Merkle root you cannot just find another set of leaves that hash to the same thing. Or given a set of leaves you cannot find another set of leaves that hash to the same thing. Or you cannot find two distinct set of leaves that hash to the same thing and so on. Maybe at a higher level if you are a participant in a policy that is complex and has many leaves you will probably want to see the entire tree before agreeing to participate. So you know what the exact policy is.

A: You are talking about collisions correct?

PW: Yes collision and preimage attacks. If a Merkle tree is constructed well and is constructed using a hash function that has normal properties then it is collision and preimage resistant.

MF: In the chat nothingmuch says does it make sense to consider P2SH and OP_EVAL discussions? That helped nothingmuch understand better.

N: I think we are past that point.

MF: We talked a little about P2SH. We didn’t discuss OP_EVAL really.

N: To discuss Auriol’s point, one thing that I think nobody addressed is and maybe the reason for the confusion is that every Taproot output commits to a Merkle root directly. So the root is given as it were. What you need to make sure is that the way that you spend it relates to a specific known root not the other way round. For the P2SH and OP_EVAL stuff it was a convenient segue for myself a few years ago reading about this to think about what you can really do with Bitcoin Script? From a theoretical computer science point of view it is not very much given that it doesn’t have looping and stuff like that. Redeem scripts and P2SH add a first order extension of that where you can have one layer of indirection where the scriptPubKey effectively calls a function which is the redeem script. But you can’t do this recursively as far as I know. OP_EVAL was a BIP by Gavin Andresen and I think it was basically the ability to evaluate something that is analogous to a redeem script as part of a program so you can have a finite number of nested levels. You can imagine a script that has two branches with two OP_EVALs for committing to separate redeem scripts and that structure is already very much like a MAST structure. That is why I brought it up earlier.

Pieter Wuille at SF Bitcoin Devs on Key Tree Signatures

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2015-08-24-pieter-wuille-key-tree-signatures/

MF: This is your talk Pieter on Key Tree Signatures. A high level summary, this is using Merkle trees to do multisig. This is where the leaves at the bottom of the tree are all the different combinations. If you have a 2-of-3 and the parties are A, B and C you need a leaf that is A,B, a leaf that is A,C and a leaf that is B,C. All the possible options to get a multisig within a Merkle tree structure.
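As a toy illustration of that enumeration (the key labels A, B and C are just placeholders), the leaves for a 2-of-3 are simply every 2-element combination:

```python
from itertools import combinations

keys = ["A", "B", "C"]
# One leaf per way of satisfying the 2-of-3: (A, B), (A, C), (B, C)
leaves = list(combinations(keys, 2))
print(leaves)
```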

PW: Key Tree Signatures was really exploiting the observation that at the time in Elements Alpha we had, even unintentionally, enabled functionality that did this. It didn’t have Merkle tree functionality and it didn’t have key aggregation. It didn’t have any of those things. But it had enough opcodes that you could actually implement a Merkle tree in the Script language. The interesting thing about that was that it didn’t require any specific support beyond what Elements Alpha at the time had. What BIP 341 does is much more flexible than that because it actually lets you have a separate script in every leaf. The only thing Key Tree Signatures could do was a Merkle tree where every leaf was a public key. At the same time it did go into what the efficiency trade-offs are and how things scale. Those map well.

MF: It could be implemented straight on Elements Alpha but it couldn’t be implemented on Bitcoin Core. It needed OP_CAT?

PW: Yes it couldn’t be implemented on Bitcoin at the time and still can’t.

MF: There are no plans to enable OP_CAT anytime soon?

PW: I have heard people talk about that. There are some use cases for that. Really the entire set of use cases that Key Tree Signatures aims to address is completely subsumed by Taproot. By introducing a native Merkle tree structure you can do these things way more efficiently and with more flexibility because you are not restricted to having a single key in every leaf. I think historically what is interesting about that talk is the complexity and efficiency trade-offs, where you can look at a graph of how the size of a script compares to a naive linear script. The implementation details of how that was done in Key Tree Signatures aren’t relevant anymore.

MF: There are no edge cases where perhaps you get more efficiency assuming we had OP_CAT using a Key Tree scheme rather than using the Taproot design?

PW: The example I gave earlier is the fact you might not be able to break up your script into leaves.

MF: The privacy thing.

PW: This is a more restricted version of the more general Merkle tree verification in Script that you would get with OP_MERKLEBRANCHVERIFY for example. I think in practice almost all use cases will be covered by Taproot. But it is not exactly the same, this is correct.

N: I think a lot of this goes away under the assumption that you are only spending an output once. A lot of the benefit you get from reusing an element of your tree for different conditions comes when you are going to evaluate that script multiple times. That doesn’t really make sense in the context of Bitcoin.

PW: I am not sure that is true. You want every spend to be efficient. It doesn’t matter if there is one or more. I agree that in general you are only going to reveal one but I don’t think this changes any of the goals or trade-offs.

N: Let me be a bit more precise. If you have a hypothetical system which has something like OP_MERKLEBRANCHVERIFY you can always flatten it out to a giant disjunction and create a Taproot tree for that. Every leaf is a specific path through your tree with reuse. If you are only ever going to reveal the one leaf then what matters is that that final condition is efficient.

PW: What you are calling reuse is really having multiple leaves simultaneously.

N: Yes.

PW: There is a good example where there may actually be multiple cases in one tree. That is if you have some giant multisignature and an intractably large set of combinations from it. The example of a 6-of-1000 I gave before, you may want to have a Merkle tree over just those thousand and have a way of expressing "I want six of these leaves to be satisfied." I don’t know how realistic that is as a real world use case but it is something feasibly interesting.

N: That is definitely a convincing argument that I didn’t account for in my previous statement.

MF: That covers pre-Taproot.

Andrew Poelstra on Tales From The Crypt Podcast (2019)

https://diyhpl.us/wiki/transcripts/tftc-podcast/2019-06-18-andrew-poelstra-tftc/

MF: Let’s move to Taproot. This was an interesting snippet I found on Tales From The Crypt podcast with Andrew Poelstra. He talked about where the idea came from. Apparently there was this diner in California, 100 miles from San Francisco, where the idea came into being. Perhaps Bitcoiners will go to this diner and it will become known as the Taproot Diner. Do you remember this conversation at the diner Pieter?

PW: It is a good morning in Los Altos.

MF: So what Andrew Poelstra said in this podcast was that Greg was asking about more efficient script constructions, hiding a timelocked emergency clause. So perhaps talk about the problem Taproot solves and the jump that Taproot gave us on top of all that work on MAST.

PW: I think the click you need to make and we had to make was that really in most contracts, in more complex constructions you want to build on top of script, there is some set of signers that are expected to sign. I have talked about this as the "everyone agrees condition" but it doesn’t really need to be everyone. You can easily see that whenever all the parties involved in a contract agree with a particular spend there is no reason to disallow that spend. In a Lightning channel if both of the participants in the channel agree to do something with the money it is ok that that is done with the money. There is nobody else who cares about it other than the participants. If they agree we are good. Thanks to key aggregation and MuSig you can represent the single condition "this set of keys signs and nothing else". These people need to agree and nothing else. You can express that as a single public key. This leads to the notion that whatever you do with your Merkle tree, you want very near the top a branch that is "this set of signers agrees." That is going to be the usual case. In the unusual case it is going to be one of these complex branches inside this Merkle tree. There is going to be this one top condition that we expect to be the most common one. You want to put it near the top because you expect it to be the most common way of spending. It is cheaper because the path is shorter the closer you set it to the top. What Taproot really does is taking that idea and combining it with pay-to-contract where you say "Instead of paying to a Merkle root I am going to pay to a tweaked version of that public key." You can just spend by signing with that key. Or the alternative is I can reveal to the world that this public key was actually derived from another public key by tweaking it with this Merkle root. Hence I am allowed to instead spend it with that Merkle root.

MF: There is that trick in terms of the key path spend or the script path spend. The normal case and then all the complex stuff covered by the tree. Then effectively having an OR construction between the key path spend and the script path spend.

PW: Taproot is just a way of having a super efficient one level of a Merkle tree at the top but it only works under the condition that it is just a public key. It cannot be a script. It makes it super efficient because you are not even revealing to the world that there was a tree in the first place.

MF: And with schemes like MuSig or perhaps even threshold schemes that key path spend can potentially be an effective multisig or threshold sig but it needs to be condensed into one key.

PW: Exactly. Without MuSig or any other kind of aggregation scheme all of this doesn’t really make sense. It works but it doesn’t make sense because you are never going to have a policy that consists of “Here is some complex set of conditions or this one guy signs.” I guess it can happen, it is a 1-of-2 or a 1-of-3 or so but those are fairly rare things. In order for this “everybody agrees” condition to be the more common one, we need the key aggregation aspect.

MF: There is that conceptual switch. In this podcast transcript Greg says "Screw CHECKSIG. What if that was the output? What if we just put the public key in the output and by default you signed it." That is kind of a second part. How do you combine the two into one once you have that conceptual key path and script path spend?

PW: The way to accomplish that is by saying “We are going to take the key path, take that key and tweak it with the script path in such a way that if you were able to sign for the original key path you can still sign for the tweaked version.” The tweaked version is what you put in the scriptPubKey. You are paying to a tweaked version of the aggregate of everyone’s keys. You can either spend by just signing for it, nothing else. There is no script involved at all. There is a public key and a scriptPubKey and you spend it by giving a signature. Or in the unusual case you reveal that actually this key was tweaked by something else. I reveal that something else and now I can do whatever that allowed me to do.

Greg Maxwell Bitcoin dev mailing list post on Taproot (2018)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html

MF: One of the key points here once we have discussed that conceptual type stuff is that pre-Taproot we thought it was going to be an inefficiency to try to have this construction. The key breakthrough with Taproot is that it avoids any larger scripts going onchain and really doesn’t have any downsides. Greg says “You make use cases as indistinguishable as possible from the most common and boring payments.” No privacy downsides, in fact privacy is better and also efficiency. We are getting the best of both worlds on a number of different axes.

PW: I think the post explains the goals and what it accomplishes pretty well. It is a good read.

Andrew Poelstra at MIT Bitcoin Expo on Taproot (2020)

https://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2020/2020-03-07-andrew-poelstra-taproot/

MF: There were a few parts to this presentation that I thought were good. He talks about what Taproot is, he talks about scripts and witnesses, key tricks and then the Taproot assumption. I thought it was a good quotable Taproot assumption “If all interested parties agree no other conditions matter.” You really don’t have to worry about all that complexity as the user as long as you are using that key path spend.

C + H(C || S)·G = P

N: Since some people already know this and some people don’t maybe it makes sense to dwell on this for a minute so everybody is on the same page for how it works. It is really not magic but it kind of seems like magic the first time you see it.

MF: Can you explain what the equation is saying?

N: The basic details are that public keys are elements of the group that you can define on top of the secp256k1 curve. You can multiply keys by scalars which are just numbers basically. If you take a hash function which gives you a number and you derive something like a public key by multiplying that number by the generator point, then anybody who knows the preimage of the hash knows the equivalent of a secret key as it were. You take a public key that is actually a public key in the sense that those numbers are secret, not known except to the owner. C would be that key and you can always add that tweak which is the hash of C and the script times G to derive P which is a new key. Anybody who knows the secret underlying C, that is the discrete logarithm of C with respect to G, and under MuSig that may only be a group of people collectively, is able to sign with the key P because they also know the difference between the two keys. That takes care of the key spend path. Anybody who can compute a signature with C can compute a signature with P because the difference between them is just the hash of some string. But then also anybody who doesn’t necessarily know how to sign with C can prove that P is the sum of C and that hash times G. The reason for this is that the preimage for the hash commits to C itself. You can clearly show here that P is the sum of two points. One of them is a point whose discrete logarithm is a hash, and that hash contains the other term in the sum, so it is inconceivable, unless somebody knows how to find second preimages for the hash, to tweak the key in that way and still be able to convince people that the hash really commits to the script. Because the hash does that, the intent of including an additional script in the hash is to convey to the network that that’s one of the authorized ways to spend this output. I hope that was less confusing and not more confusing than before.
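Restating that explanation as equations, in the same notation as the slide (a summary of the sketch above rather than the exact BIP 341 formulation, which uses tagged hashes and a Merkle root):

P = C + H(C || S)·G

Key path spend: whoever can sign for C (i.e. knows c with C = c·G, possibly only collectively via MuSig) can sign for P using the secret c + H(C || S).

Script path spend: reveal C and S, the verifier checks that P = C + H(C || S)·G and then evaluates the script S.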

MF: That was a good explanation. I also like the slide from Tim Ruffing’s London Bitcoin Devs presentation that has that public key equation and shows how you can get the script out as well as satisfying it with just a normal single key.

pk = g^(x+H(g^x, script))

PW: Maybe a bit confusing because that slide uses multiplicative notation and in everything else we have been using additive notation. This exponentiation that you see in this slide is what we usually call an elliptic curve notation. g^x we usually write as xG, well some people. There are often interesting fights on Twitter between proponents of additive notation and multiplicative notation.
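In the additive notation used elsewhere in this discussion, the slide’s equation reads pk = (x + H(xG, script))·G = X + H(X, script)·G with X = xG, which is the same tweak construction as above.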

MF: When you first hear of the idea it doesn’t sound plausible that you could have the same security whilst taking a script out of a public key. It almost feels as if you are halving the entropy because you have two things in the same key. You actually do get exactly the same security.

PW: Do you think the same about Merkle trees that you are able to take more out than you took in? You are absolutely right that entropy just isn’t the right notion here. It is really not all that different from the fact that you can hash bigger things into smaller things and then still prove that those bigger things were in it.

MF: I understand it now. But when I first heard of it I didn’t understand how that was possible. I think it is a different concept because I understand the tree concept where you are hashing all the leaves up into a root but this was hiding…

PW: Ignore the tree. It is just the hash.

MF: It is the root being hidden within the public key. But that didn’t seem possible without reducing the entropy.

PW: The interesting thing is being able to do it without breaking the ability to spend from that public key. Apart from that it is just hashing.

RO: I just want to make a minor comment on the very good description that was given. You don’t have to know the discrete log of the public key in order to manipulate signatures operating on the tweaked public key. In fact when you are doing a MuSig proposal no individual person ever really knows the discrete log of the aggregated key to begin with and they don’t have to know. It is the case that in Schnorr signatures it is slightly false but very close to being true to say that if you have a signature on a particular public key you can tweak the signature to get a proper signature on the tweaked public key without knowing the private key. The only quibble is that the public key is actually hashed into the equation. You have to know the public key of the tweak before you start this process but the point is that no one has to learn the discrete log of the public key to manipulate this tweak thing.

PW: This is absolutely true. On the flip side whenever you have a protocol that works if a single party knows the private key you can imagine having that private key be shared knowledge in a group of people and design a multiparty protocol that does the same thing. The interesting thing is that that multiparty protocol happens to be really efficient but there is nothing surprising about the fact that you can.

MF: Is there a slight downside that in comparison to a normal pay-to-pub-key, if you want to do a script path spend you do need to know the script and if you want to do a key path spend you do need to know the pubkey? There is more information that needs to be stored to be able to spend from it. Is that correct?

PW: Not more than what you need to spend from a P2SH. It just happens to be in a more structured version. Instead of being a single script that does everything you need to know the structure. Generally if you are a participant in some policy enforced by an output you will need to understand how that policy relates to that output.

MF: With a pay-to-script-hash you still need to have that script to be able to spend from that pay-to-script-hash. In the same way here you need to know the script to be able to spend from the pay-to-taproot.

AJ Towns on formalizing the Taproot proposal (December 2018)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html

MF: The next item on the reading list was AJ Towns’ first attempt to formalize this in a mailing list post in 2018. How much work did it take to go from that idea to formalizing it into a BIP?

PW: If you look at BIP 341 and BIP 342 there is only a very small portion of it that is actually the Taproot construction. That is because our design goal wasn’t to make Taproot possible but to look at the cool things you can accomplish with Taproot and make sure all of those actually work. That includes a number of unrelated changes that were known issues that needed to be fixed such as signing all the input amounts that we have recently seen. Let me step back a bit. When Taproot came out, at first I personally thought the best way to integrate this was to do all the things. We were at the time already working on Schnorr multisignatures and cross input aggregation. Our interest in getting Schnorr out there was to enable cross input aggregation which is the ability to have, across multiple inputs of a transaction, just a single signature that signs for all of them instead of separate ones. It turns out to be a fairly messy and hard problem. Then Taproot came out and it was like "We need to add that to it because this is clearly something really cool that has privacy and efficiency advantages." It took a couple of months after that to realize that we are not going to be able to build a single proposal that does all these things because it all interacts in many different ways. Then the focus came on "We want things like batch verification. We want extensibility. We want to fix known bugs and we want to exploit Taproot to the fullest." Anything else that can be done outside of that is going to have to wait for other independent proposals or a successor. I think a lot of time went into defining exactly what.

MF: Perhaps all the drama and contention of the SegWit fork, did that push you down a road of stripping back some of the more ambitious goals for this? We will get onto some of the things that didn’t make it into the proposal. Did you have half an eye on that? You wanted as little controversy as possible, minimize the complexity?

PW: Clearly we cannot just put every possible idea and every possible improvement that anyone comes up with into one proposal. How are you going to get everyone to agree on everything? Independent improvements should have some form of independence in its progression towards being active on mainnet. At the same time there are really strong incentives to not do every single thing entirely independently. Doing the Merklization aspect of BIP 341, the Taproot aspect of it and the Schnorr signature aspect, if you don’t do all three of them at the same time you get something that is seriously less efficient and less private. It is trade-off between those things. Sometimes things really interact and they really need to go together but other times they don’t.

John Newbery on reducing size of Taproot output by 1 vbyte (May 2019)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016943.html

MF: One of the first major changes was this post from John (Newbery) on reducing the size of the pubkey. The consideration always is we don’t want anyone to lose out. Whatever use case they have, whether they have a small script or a really large script, we don’t want them to be any worse off than before because otherwise you then have this problem of some people losing out. It seems like a fiendish problem to make sure that at least everyone’s use case is not hurt even if it is a very small byte difference. I suppose that is what is hanging over this discussion and John’s post here.

PW: I think there is something neat about not using 33 bytes when you can have 32 with the same security. It just feels wasteful.

MF: That’s John’s post. But there was also a conversation I remember on a very basic key path spend being a tiny bit bigger than a normal key path spend pre-Taproot. Is that right?

PW: Possibly I’d need to reread the post I think.

MF: I don’t think this was in John’s post, I think that was a separate discussion.

PW: It is there at the bottom. “The current proposal uses (1). Using (3) or (4) would reduce the size of a taproot output by one byte to be the same size as a P2WSH output. That means that it’s not more expensive for senders compared to sending to P2WSH.” That is part of the motivation as well. Clearly today people are fine with paying to P2WSH which has 32 byte witness programs. It could be argued that it is kind of sad that Taproot would change that to 33. But it is a very minor thing.

Steve Lee presentation on “The Next Soft Fork” (May 2019)

https://bitcoinops.org/en/2019-exec-briefing/#the-next-softfork

MF: There was this presentation from Steve Lee at Optech giving the summary of the different soft fork proposals. There were alternatives that we can’t get into now because there is too much to discuss already. Other potential soft forks, Great Consensus Cleanup was one, there was another one as well. There is a timeline in this presentation which looks very optimistic now, with activation projected for maybe 6-12 months ago. Just on the question of timing, there seems to have been progress or changes to either the BIPs or the code continuously throughout this time. It is not as if nothing has been happening. There have been small improvements happening and I suppose it is just inevitable that things are going to take longer than you would expect. There was a conversation earlier with a Lightning dev being frustrated by the pace of change. We won’t go onto activation and why these changes are taking so long. Andrew Poelstra talked about the strain of getting soft fork changes into Bitcoin now that it is such a massive ecosystem and there is so much value on the line.

RO: In order to activate it you need an activation proposal. I think that might be the most stressful thing for developers to talk about maybe.

ET: That is true. I remember that about a year ago I started to work on descriptors for Taproot and I talked with Pieter and I was like "It should probably get in in a few months" and he was laughing. A year later and, as Russell said, we don’t even have an activation path yet.

MF: I certainly think it is possible that we should’ve got the activation conversation started earlier. Everyone kind of thought at the back of their head it was going to be a long conversation. Perhaps the activation discussion should’ve been kicked off earlier.

ET: I think people are a little bit traumatized from SegWit and so don’t really want to talk about it.

MF: A few people were dreading the conversation. But we won’t discuss activation, maybe another time, not today.

Pieter Wuille mailing list post on Taproot updates (no P2SH wrapped Taproot, tagged hashes, increased depth of Merkle tree, October 2019)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-October/017378.html

MF: The next item on the reading list, you gave an update Pieter in October 2019 on the mailing list. The key items here were no P2SH wrapped Taproot. Perhaps you could talk about why people wanted P2SH wrapped Taproot. I suspect it is exactly the same reason why people wanted P2SH wrapped SegWit. There is also tagged hashes and increased depth of Merkle tree.

PW: Incremental improvements I think. The P2SH thing is just based on adoption of BIP 173 and expecting that probably we don’t want to end up in a situation where long term use of Taproot is split between P2SH and native because it is a very slight privacy issue. You are revealing whether the sender supports native SegWit outputs or not. It is better to have everything in a single uniform output type. Given the timeline it looked like we are probably ok with dropping P2SH. The 32 byte pubkeys, a small incremental improvement. The tagged hashes were another. There was one later change which was changing public keys from implicitly square to implicitly even for better compatibility with existing infrastructure which was maybe a couple of months after this email. Since then there haven’t been any semantic changes to the BIP, only clarifications.

RO: Also, adding coverage of the input scripts by the signature was very recent.

PW: Yes you’re right. That was the last change.

RO: I am assuming it was changed.

PW: Yes it was.

MF: We wanted P2SH wrapped SegWit because we were introducing bech32, a different format. What was the motivation for wanting P2SH wrapped Taproot?

PW: The advantage would be that people with non-SegWit or non BIP173 compliant wallet software would be able to send to it. That is the one and only reason to have P2SH support in the first place because it has lower security, it has extra overhead. There is really no reason to want it except compatibility with software that can’t send to bech32 addresses.

ET: I thought another reason was because we can and it went well with the flow of the code.

PW: Sure. It was easy because SegWit was designed to have it. The reason SegWit had it was because we wanted compatibility with old senders. Given that we already had it it was relatively easy to keep it.

RO: There is also a tiny concern of people attempting to send to P2SH wrapped Taproot outputs because currently those are just not secure and can be stolen.

PW: You mean you can have policy rules around sending to future Taproot versions in wallet software while you can’t do that in P2SH?

RO: People might mistakenly produce P2SH wrapped Taproot addresses because they have incorrectly created a wallet code that way. If we supported both then their funds would be secured against that mistake.

PW: That is fair, yeah.

Pieter Wuille at SF Bitcoin Devs on BIP-Taproot and BIP-Tapscript (December 2019)

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2019-12-16-bip-taproot-bip-tapscript/

MF: This is the transcript of Pieter’s talk at SF Bitcoin Devs, this was an update at the end of 2019. There was a conversation you had with Bram (Cohen) and this is talking about being concerned with facilitating future changes, things like Graftroot which we will get onto. But then also making sure that existing applications or existing use cases aren’t hurt, things like colored coins which perhaps you might not be interested in at all yourself and perhaps the community generally isn’t. How much thought do you have to put into making sure things like colored coins aren’t hurt, use cases that very few people are using but you feel as if it is your responsibility to make sure that you don’t break them with this upgrade?

PW: This is a very hard question for me because I strongly believe that colored coins make no sense. If you formulate it a bit more generally I think there is a huge amount of potential ideas of what if someone wants to build something like this later? Is there some easy change we can make to our proposal to facilitate that? For example the annex thing in BIP 341 is an example of an extensibility feature that would enable a number of things that would be really hard to do otherwise if it wasn’t done right now. In general I think that is where perhaps the majority of the effort in fleshing out the details goes, making sure that it is as compatible with future changes as possible.

MF: What is the problem specifically? Can you go into a bit more detail on why partial delegation is challenging with Taproot?

PW: That is just a separate feature. It is one we deliberately chose not to include because the design space is too big and there are too many ways of doing this. Keep it for something after Taproot.

Potential criticisms of Taproot and arguments for alternatives on mailing list (Bitcoin Optech, Feb 2020)

https://bitcoinops.org/en/newsletters/2020/02/19/#discussion-about-taproot-versus-alternatives

MF: There hasn’t been much criticism and there doesn’t appear to have been much opposition to Taproot itself. We won’t talk about quantum resistance because it has already been discussed a thousand times. There was this post on the mailing list with potential criticisms of Taproot in February that is covered by the Optech guys. Was there any valid criticism in this? Any highlights from this post? It didn’t seem as if the criticism was grounded in too much concern or reality.

PW: I am not going to comment. There was plenty of good discussion on the mailing list around it.

Andrew Kozlik on committing to all scriptPubKeys in the signature message (April 2020)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017801.html

MF: This is what Russell was alluding to. This is Andrew Kozlik’s post on committing to all scriptPubKeys in the signature message. Why is it important to commit to scriptPubKeys in the signature message?

RO: I don’t actually understand why it is a good idea. It just doesn’t seem like a bad idea. Maybe Pieter or someone else can comment.

PW: The commitment to all the scriptPubKeys being spent?

MF: Kozlik talked about this in certain applications. So it is specific to things like Coinjoin and not necessarily applicable to everything?

PW: Right but you don’t want to make it optional because if you make it optional you are now again revealing to the world that you care about this thing. I believe that the attack was something of the form where you are lying to a hardware wallet about which inputs of a transaction are yours. Using a variant of the amount attack… I believe it is I do a Coinjoin where I try to spend both outputs from you but the first time I convince you that only one of the inputs is yours and then the second time I convince you that the other one is yours. In both times you think “I am only sending 0.1 BTC” but actually you are spending 0.2. You wouldn’t know this because your hardware wallet has no state that is kept between the two iterations. In general it makes sense to include this information because it is information you are expected to give to a hardware wallet. It is strange that they would not sign it. I think it made perfect sense as soon as the attack was described.

Coverage of Taproot eliminating SegWit fee overpayment attack in Bitcoin Optech (June 2020)

https://bitcoinops.org/en/newsletters/2020/06/10/#fee-overpayment-attack-on-multi-input-segwit-transactions

MF: This was a nice example of Taproot solving a problem that had cropped up. This was the fee overpayment attack on multi input SegWit transactions. Taproot fixes this. This is nice as an example of something Taproot clearly fixes rather than just adding functionality, better privacy, better efficiency. It is a nice add-on.

PW: It was a known problem and we had to fix it in any successor proposal, whatever it was.

Possible extensions to Taproot that didn’t make it in

Greg Maxwell on Graftroot (Feb 2018)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html

AJ Towns on G’root (July 2018)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016249.html

Pieter Wuille on G’root (October 2018)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016461.html

AJ Towns on cross input signature aggregation (March 2018)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-March/015838.html

AJ Towns on SIGHASH_ANYPREVOUT (May 2019)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html

MF: The next links are things that didn’t make it in. There is Graftroot, G’root, cross input signature aggregation, ANYPREVOUT/NOINPUT. As the authors of Taproot what thought do you have to put in in terms of making sure that we are in the best position to add these later?

PW: We are not. You need a successor to Taproot to do these things period.

MF: But you have made sure that Taproot is as extensible as possible.

PW: To the extent possible sure. Again there are trade-offs to be made. You can’t support everything. Graftroot and cross input aggregation are such deeply conceptual changes. You can’t permit building them later. It is such a structural change to how scripts work. These things are not something that can be just added later on top of Taproot. You need a successor.

MF: I thought some of the extensibility was giving a stepping stone to doing this later but it is not. It is a massive overhaul again on top of what is kind of an overhaul with Taproot.

PW: Lots of things can be reused. It is not like we need to start over from scratch. You want Schnorr signatures, you want leaf versioning, you want various extensibility mechanisms for new opcodes. Graftroot is maybe not the perfect example. It depends to what extent you want to do it. Cross input aggregation, the concept of script verification is no longer a per input thing but it is a per transaction thing. You can’t do it with optimal efficiency, I guess you can invent things. The type of extensibility that is built in is new opcodes, new types of public keys, new sighash types, all these things are made fairly easy and come with almost no downsides compared to not doing them immediately. Real structural changes to script execution, they need something else.

MF: And perhaps these extensions, we might as well do them because there is no downside. We don’t know the future so we might as well lay the foundations for as many extensions as possible because we don’t know what we will need in future.

PW: Everything is a trade-off between how much engineering and specification and testing work is it compared to what it might gain us.

Taproot and Tapscript BIPs

https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0342.mediawiki

MF: There is BIP-Taproot. BIP-Tapscript, as I understand it, exists because BIP-Taproot was getting too long, so there was a separate BIP which has a few changes to script. The main one being getting rid of CHECKMULTISIG and introducing CHECKSIGADD. CHECKSIGADD is for the multisignature schemes where multiple signatures are actually going onchain. It is more efficient for batch verification if you are doing a multisig with multiple signatures going onchain. Although the hope is that multisig will be done with MuSig schemes so that multiple signatures won’t go onchain.

PW: Exactly.

MF: The design of CHECKSIGADD, it is like a counter. With CHECKMULTISIG there was no counter, you just tried all the signatures to see if there were enough signatures to get success from that script. But CHECKSIGADD introduces a counter which is more efficient for batch verification. Why is there not an index for keys and signatures? Why is it not like "Key 1, Key 2, Key 3 and Key 4" and then you say "I’m providing Signature 2 which matches Key 2." Why is it not designed like that?

PW: Again design space. If you go in that direction there are so many ways of doing it. Do you want to support arbitrary subsets up to a certain size? You could imagine some efficient encoding of saying "All possible policies up to 5 keys I can put into a single number. Why not have an opcode that does that?" We just picked the simplest thing that made sure that multisignatures weren’t suddenly a lot more gratuitously inefficient compared to what existed before because the feeling is if you remove a feature you need to compensate with an alternative. Due to OP_SUCCESSx it is really easy to add a new opcode that does any of these things you are suggesting with really no downside.
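To make the counting behaviour concrete, here is a minimal Python sketch of how a 2-of-3 CHECKSIGADD script behaves; check_sig is a hypothetical stand-in for real Schnorr verification and the key labels are made up:

```python
# Script being modelled: <pk1> CHECKSIG <pk2> CHECKSIGADD <pk3> CHECKSIGADD 2 NUMEQUAL
# The witness supplies one signature slot per key, empty ("") if that key does not sign.

def check_sig(sig: str, pubkey: str) -> bool:
    # Hypothetical stand-in for Schnorr signature verification.
    return sig == f"sig_for_{pubkey}"

def run_2_of_3(sigs: list, pubkeys: list) -> bool:
    counter = 0
    for sig, pubkey in zip(sigs, pubkeys):
        if sig == "":
            continue              # an empty signature adds 0 to the counter
        if not check_sig(sig, pubkey):
            return False          # a non-empty but invalid signature fails the script
        counter += 1              # a valid signature adds 1
    return counter == 2           # the final "2 NUMEQUAL" against the threshold

print(run_2_of_3(["", "sig_for_B", "sig_for_C"], ["A", "B", "C"]))  # True
print(run_2_of_3(["sig_for_A", "", ""], ["A", "B", "C"]))           # False, only one valid signature
```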

MF: You could do an indexed multisig using CHECKSIGADD?

PW: Using OP_SUCCESSx you can add any opcode. The focus is fixing existing problems and making sure batch verification works, but beyond that anything else, actual new features, we leave to future improvements.

N: The ADD variant of CHECKMULTISIG, that also addresses the quadratic complexity of CHECKMULTISIG? It is not just for batch verification?

PW: There is no quadratic complexity in CHECKMULTISIG?

N: Doesn’t it need to loop for the signatures and the keys?

PW: No because they have to be in the same order. It is inefficient but it is at worst proportional to the number of keys given. Ideally we want something that is just proportional to the number of signatures given. It is unnecessarily inefficient but it is only linearly so. There were or are quadratic problems, for example pre-SegWit if you have many public keys each of them would individually rehash the entire transaction. The bigger you make your transaction the amount of data hashed goes up quadratically. But that is already fixed since SegWit.

RO: I believe OP_ROLL is still quadratic even in Taproot.

PW: I think it is just linear but with a pretty bad constant factor.

RO: The time to execute an OP_ROLL is proportional to, can be as large as the size of the script. So a script that contains only OP_ROLLs has quadratic complexity in terms of the length of the script.

PW: Right but there is a limit on the stack size.

RO: Of course.

PW: Without that limit it would be quadratic, absolutely. Also in theory a different data structure for the execution stack is possible that would turn it into O(n log(n)) instead of O(n^2) to have unbounded ROLLs.
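As a rough illustration of this point (a sketch only, not the actual interpreter), a list-backed stack makes each ROLL cost proportional to its depth, so a script that is nothing but deep ROLLs grows quadratically in the absence of a stack size limit:

```python
def op_roll(stack, k):
    # Move the k-th-from-top element to the top; a list-backed stack
    # has to shift O(k) elements to do this.
    stack.append(stack.pop(-1 - k))

stack = list(range(1000))
for _ in range(1000):
    op_roll(stack, len(stack) - 1)   # each roll touches nearly the whole stack
```

With the existing stack size limit the worst case stays bounded; a balanced-tree representation of the stack would bring unbounded rolls down to O(n log n), as mentioned above.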

Bitcoin Core BIP 340-342 PR 17977

https://github.com/bitcoin/bitcoin/pull/17977

MF: This is the PR open in Bitcoin Core. This is all the code including the Schnorr libsecp code. My understanding is that ideally, certainly if you have sufficient expertise, you would help review the Schnorr part in libsecp. If not that then start reviewing some of this very large Taproot PR in Core. I am thinking about trying to organize a PR review club maybe just taking a few commits from this PR. I know we covered a couple of the smaller ones earlier.

PW: I think if you ignored the libsecp part it is not all that big, a couple of hundred lines excluding tests.

MF: That is doable for a Bitcoin Core PR review club. Perhaps trying to do the Schnorr code in libsecp is too big. I don’t know if we could narrow that down, focus on just a few commits in the libsecp Schnorr PR. I think either you or Greg said on IRC anybody with C++, C experience is useful in terms of review for that libsecp stuff because the cryptography is on solid ground but you need some code review on that libsecp PR.

PW: I don’t remember saying that.

RO: I think Greg has said that. The difficulties with the libsecp C code will come not from the cryptography but from C.

MF: C specific problems, in terms of the language.

RO: Yes

MF: I did split it into a few commits. There are quite a few functional tests to look at on the Taproot PR. I am trying to think of an accessible way for people to start delving in and looking at the tests and running the tests is often a good first step.

Bitcoin Stack Exchange question on Simplicity and Taproot

https://bitcoin.stackexchange.com/questions/97049/in-theory-could-we-skip-the-proposed-taproot-soft-fork-activate-simplicity-inst

MF: Shall we talk a bit about Simplicity? There was a Bitcoin Stack Exchange question on why not skip Taproot and just merge in Simplicity. Why aren’t we doing that?

RO: I don’t know. That seems like a great idea to me (joke). Simplicity is not completed yet and not reviewed and totally not ready. It might be good to go with something that is actually completed and will provide some benefit rather than waiting another four years, I don’t know.

MF: Is that possible longer term? I know that on the Stack Exchange question Pieter says you’d still want Taproot because you can use Simplicity within Taproot. I know you talked about avoiding a SIGHASH_NOINPUT soft fork with Simplicity. If Simplicity was soft forked in you could potentially avoid the SIGHASH_NOINPUT, ANYPREVOUT soft fork.

RO: You can’t get the root part of Taproot with Simplicity. You can’t really program that. The fact that you can have a 32 byte witness program and spend that as a public key is something that is not really possible in Simplicity.

PW: I think the biggest advantage of Taproot is that it very intentionally makes one particular way of spending and creating scripts super efficient in the hope to incentivize that. You get the biggest possible policy based privacy where hopefully nearly everything is spent using just a key path and nothing else. If you just want to emulate that construction in another language be it Simplicity or through new opcodes in Script you won’t be able to do that with the same relative efficiency gains. You would lose that privacy incentive at least to some extent.

RO: Because the root part of Taproot is not something that is inside Script. It is something that is external to Script. Even replacing Script isn’t adequate.

MF: But if you were to get really wacky you could have a Taproot tree with Script and Simplicity on different leaves. You could use one leaf that is using Simplicity or another leaf that is using Bitcoin Script?

RO: Yes and that would probably be the natural state of things if Simplicity goes in that direction into Bitcoin.

MF: Because you’d only want to use Simplicity where you are getting a real benefit of using it?

RO: Simplicity is an alternative to Bitcoin Script. The leaf versioning aspect of Taproot allows you to put in alternatives to Bitcoin Script which don’t have to be Simplicity, any alternative to Bitcoin Script. That is both an upgrade mechanism for Taproot but it also implies this ability to mix a Tapleaf version for Script with a Tapleaf version for Simplicity with a Tapleaf version for whatever else we want.

MF: The benefits would be being able to do stuff that you can’t do in Script. What other benefits, why would I want to use Simplicity rather than use Script on a leaf of my tree other than to get functionality that I can’t get with Script? Is there efficiency with a Simplicity equivalent of Script in some cases?

RO: I think the extended functionality would be the primary benefit, extended features is the primary benefit for using Simplicity over Script. It is possible that in a head to head competition Simplicity might even beat Script in terms of efficiency and weight. That is probably not the case. I suspect that when things settle down Simplicity will not quite be able to beat Script at its own game. It is a little bit early to tell whether that is true or not.

PW: It also really depends on what kind of jets are implemented and with what kind of encoding. I suspect that for many things that even though it is theoretically possible to do anything in Simplicity it may become really exorbitantly expensive to do so if you need to invent your own sighash scheme or something.

Update on Simplicity

https://blockstream.com/2018/11/28/en-simplicity-github/

MF: Can you give an update on Simplicity Russell? What are the next steps? How near is it to being a potential soft fork proposal for Bitcoin?

RO: I am basically at the point of Simplicity’s functional completeness in that the final operation called disconnect which supports delegation is implemented and is under review. That’s at the point where people who are particularly keen to try out Simplicity in principle can start writing Simplicity programs or describing Simplicity programs. There are two major aspects that still need working on. One is that there is a bunch of anti-malleability checks that need to be implemented that are currently not implemented. This doesn’t affect the functional behavior but of course there are many ways of trivially denial of service attacking Simplicity. While Simplicity doesn’t have loops you can make programs that take an exponential amount of time. We need a mechanism to analyze Simplicity programs and put an upper bound on their runtime costs; this is part of the design but not yet implemented. There are also various anti witness malleability checks that need to be put in. These are not implemented but they are mostly designed. Then what is unfortunately the most important thing for people, and the one that comes at the end of the timeline, is “What is a good library of jets that we should make available?” This is where experimenting on Elements Alpha and sidechains, potentially Liquid, will be helpful. To try to figure out what a good broad class of jets is, jets being intrinsic operations that you would add to the Simplicity language, and what sort of class of jets you want. I am aiming for a very broad class of jets. Jets for elliptic curve point multiplication so you can potentially start putting in very exotic cryptographic protocols and integrating them into your language. I’d probably like to support alternative hashing functions like SHA3 potentially and stuff like that. Although that has got a very large state space so we’ll see how that goes. The things that would inform that come from trying Simplicity. As we explore the uses of Simplicity people will naturally come up with little constructs that they would like to be jets and then we can incorporate that. It would be nice to get a broad understanding of what jets people will want earlier because the mechanisms for soft forking new jets into Simplicity are a little bit less nice than I was hoping for. Basically I have been reduced to thinking that you’ll just need different versions of Simplicity with different sets of jets as we progress that need to be soft forked in.

MF: So a Tapleaf version being Simplicity and then having different Simplicity versions within that Tapleaf version? I know we are getting ahead of ourselves.

RO: Probably what would happen. The design space of how to soft fork in jets has maybe not been completely explored yet. That is something we would want to think about.

N: This is about jets and soft forking actually. From the consensus layer point of view it is only down to the validation costs right? If you have a standard set of jets those are easier for full nodes to validate with a reasonable cost and therefore should be cheaper? Semantically there should be no difference from interpreting and executing jets? Or is it also what needs to be included on the blockchain? Can they be implicitly omitted?

RO: Originally I did want to make them implicit but there turns out to be subtle… Simplicity has this type system and there is this subtle problem with how jets interact with the type system that makes it problematic to make jets completely implicit in the way that I was originally thinking. The current plan is to make jets explicit and then you would need some sort of soft forking mechanism to add more jets as you go along. In particular, and part of the reason why it is maybe not so bad, in order to provide the witness discount…. The whole point of jets is that these are bits of Simplicity programs that we are going to natively understand so we don’t have to run them through the Simplicity interpreter to process them, we are going to run them with native C code. They are going to be cheaper and then we can discount the costs to incentivize their use. To give a sense of the discounts: it takes maybe an hour to run Schnorr signature verification on my laptop written in pure Simplicity. Of course as you add arithmetic jets that comes down to 15 seconds. But of course we want the cost to be on the order of milliseconds. That’s the purpose of jets there. In order to provide that discount we have to be aware of what the jets are at the consensus level.

MF: Are you looking at any of these things that didn’t make it into the Taproot soft fork proposal as potential functionality that jumps out as a Simplicity use case? We talked about SIGHASH_NOINPUT but it is potentially too early for that because we’ll want to get that in before Simplicity is ready. The Graftroot, G’root, all this other stuff that didn’t make it in, anything jumps out at you as a Simplicity functionality first use case?

RO: It is probably restricted to the set of things that would be implemented by opcodes. SIGHASH_NOINPUT, delegation are the two things that come to mind. This is what I like about Simplicity. Simplicity is designed to enable people to do permissionless innovation. My design of Simplicity predates SIGHASH_NOINPUT and it is just a natural consequence of Simplicity’s design that you can do SIGHASH_NOINPUT. Delegation was a little bit different, it was explicitly put into the Simplicity design to support that. But a lot of things, covenants for example, are just a consequence of Simplicity’s design and the fact you can’t avoid covenants if you have a really flexible programming language. Things like Graftroot and cross input signature aggregation, those things are outside of the scope of Script and generally not enabled by Simplicity by itself. Certainly Simplicity has no way of doing cross input aggregation. You can draw an analogy between Graftroot and delegation. It has a bit of Graftrootness to it but it doesn’t have that root part of Graftroot in the same way that Simplicity doesn’t have the root part of Taproot.

MF: Covenants is a use case. So perhaps richer covenants depending on if we ever get CHECKTEMPLATEVERIFY or something equivalent in Script?

RO: CHECKTEMPLATEVERIFY is also covered by Simplicity.

MF: Soft forks, you were talking about soft forks with jets. Is the process of doing a soft fork with jets as involved as doing a soft fork in Bitcoin?

RO: It would probably be comparable to soft forking in new opcodes.

MF: But still needs community consensus and people to upgrade.

RO: Yes. In particular you have a lot of arguments over what an appropriate discount factor is.

MF: I thought we could avoid all the activation conversations with Simplicity.

RO: The point is that these jets won’t enable any more functionality that Simplicity doesn’t already have. It is just a matter of making the price for those contracts that people want to use more reasonable. You can write a SHA3 compression function in Simplicity but without a suitable set of jets it is not going to be a feasible thing for you to run. Although if we are lucky and we have a really rich set of jets it might not be infeasible to write SHA3 out of existing jets. That would be my goal of having a nice big robust set of midlevel and low level jets so that people can build these complicated, not thought of or maybe not even invented, hash functions and cryptographic operations in advance without them necessarily being exorbitantly costly even if we don’t have specific jets for them.

Q&A

MF: I have been very bad with YouTube because I kept checking and nothing was happening. But now lots has happened and I’ve missed it. Apologies YouTube. Luced asks when Taproot? We don’t know, we hope next year. It is probably not going to be this year, we have to sort out the activation conversation that we deliberately avoided today. Spike asks “How hard would signature aggregation be to implement after this soft fork?” We have key aggregation (corrected) with this soft fork or at least key aggregation schemes. We just don’t have cross input signature aggregation.

PW: Taproot only has it at the wallet level. The consensus rules don’t know or care about aggregation at all. They see a signature and a public key and they verify. While cross input aggregation or any kind of onchain aggregation, before the fact aggregation, needs a different scheme. To answer how much work it is that really depends on what you are talking about.

MF: This is the key aggregation versus signature aggregation conversation?

PW: Not really. It is whether it is done offchain or onchain. Cross input aggregation necessarily needs it onchain because different outputs that are being spent by different inputs of a transaction inevitably have different public keys. You cannot have them aggregated before creating the outputs because you already have the outputs. So spending them simultaneously means that onchain there needs to be aggregation. The consensus rules need to be aware of something that does this aggregation. That is a very fundamental change to how script validation works because right now script validation is conceptually a boolean function you run on every input and it returns TRUE or FALSE. If they all return TRUE you are good. With cross input aggregation you now need some context that is shared across multiple inputs. In a way, and this may actually be a good step towards the implementation side of that, batch validation even for Taproot also needs that. While BIP 341, 342, 340 support batch validation this is not implemented in the current pull request to Bitcoin Core. That is something we expect to do after it is in because it is an optional efficiency improvement that the BIP was intentionally designed to support but it is a big practical change in implementation. It turns out that the implementation work needed for that is probably a step towards making cross input aggregation easier once there are consensus rules for that.

MF: Janus did respond to that question “are you referring to MuSig or BLS style aggregation?” MuSig we are hopefully getting but BLS style aggregation we are not with proposed Taproot.

PW: BLS lets you do non-interactive aggregation. That is not something that can be done with Schnorr.

MF: On the PR Greg Sanders says, regarding the Taproot functional tests, that he’d suggest explaining the whole “Spender” framework that’s in them. It took him a couple of days to really understand it. Who has written the functional tests on Taproot, is it you Pieter?

PW: Yes and Johnson Lau and probably a few other people. They were written a while ago. Other people have contributed here and there that I forget now. It does have this framework where a whole bunch of output and input functions are passed. It creates random blocks, randomly combines these into transactions and tries to spend them. It checks that things that should work do work and things that shouldn’t work don’t.

MF: Now Greg understands it perhaps he is the perfect candidate for the Bitcoin Core PR review club on Taproot functional tests. Volunteering you Greg. Janus asks about a source for BLS style aggregation being possible with Schnorr. Pieter you’ve just said that is not possible?

PW: It is not. If by BLS style aggregation you mean non-interactive aggregation then no. There is no scheme known based on discrete logarithm that has this. We’d need a different curve, different security assumptions, different efficiency profile for that.

MF: Greg says “Jet arguments will likely be very similar to what ETHerians call “pre-compile”. You can technically do SNARKs and whatever in EVM but not practically without those.”

RO: Yeah that sounds true to me. Again I would hope that with enough midlevel jets that even complicated things are not grossly expensive so that people would be unable to run them. That is going to be seen in the future whether that is true or not.

MF: Spike said “Wuille said he started looking at Schnorr specifically for cross input aggregation but they found out it will make other stuff like Taproot and MAST more complicated so it was delayed.” That sounds about right. That is the YouTube comments. I think we have got through the reading list. Are there any other last comments or any questions for anyone on the call?

Next steps for Taproot

RO: Recently I got excited about Taproot pending activation and I wanted to go through and find things that need to be done before Taproot can be deployed. This might be a useful exercise for other people. I found a dangling issue on BIP 173, SegWit witness versions. There was an issue with the insertion bug or something in the specification. I thought it would be easy to fix but it turns out it is complicated. As far as I know the details for BIP 340 are not complete with regards to synthetic nonces. Although that is unrelated to Taproot itself, the fact that Taproot depends on BIP 340 suggests that BIP 340 should be completed before Taproot is deployed. I guess my point with this comment is that there are things that should be done before Taproot is deployed. We should go out and find all those things and try to cross them off.

PW: I do think there is a distinction to be made between things that need to be done before Taproot consensus rules and things that need to be done before wallets can use it. Something like synthetic nonces isn’t an issue until someone writes a wallet. It won’t affect the consensus rules. Similarly standardization of MuSig or threshold schemes is something that needs to be done, integration with descriptors and so on. It is not on the critical path to activation. We can work on how the consensus rules need to activate without having those details worked out. The important thing is just we know they are possible.

MF: Russell, do you have any other ideas other than the one you suggested for things we need to look out for?

RO: Nothing comes to mind but it wouldn’t surprise me if there are other issues out there. I didn’t even think about the design for a MuSig protocol. Pieter is of course right when he says these aren’t necessarily blocking things for Taproot but it feels like an appropriate time to start on them and it is things that everyone can do.

MF: I had a conversation with someone in the Lightning community, the gist being that there is so much other stuff to do that they don’t want to work on Lightning post Taproot given that we don’t know when it is going to be activated. I don’t think there is anybody on the call who is really involved in the Lightning ecosystem but perhaps they are frustrated with the pace or perhaps want some of this to be happening faster than it is. There are lots of challenges to work on in Lightning. There were two final links on the reading list: Nadav Kohen’s talk on “Replacing Payment Hashes with Payment Points” and Antoine Riard’s talk “Schnorr Taproot’d Lightning” at Advancing Bitcoin. Any thoughts on Lightning post Taproot? There has been a discussion on how useful Miniscript can be with Lightning. Any thoughts on how useful Simplicity could be with Lightning?

RO: I am not that familiar with PTLCs (point time locked contracts) so I am not too sure what details are involved with that. Simplicity is a generic programming language so it is exactly these innovative things that Simplicity is designed to support natively without people necessarily needing to soft fork in new jets for them. Elliptic curve multiplication should already be a jet and it is one of those things where I am hoping that permissionless innovation can be supported right out of the box.

MF: If they are using adaptor signatures post Taproot and scriptless scripts there is not much use for Miniscript and not much use for Simplicity?

RO: Adaptor signatures are offchain stuff and so are outside of the scope. Simplicity can take advantage of them because it has Schnorr signature support but it doesn’t have any influence on offchain stuff. I can say that there has been some work towards a Miniscript to Simplicity compiler. That would be a good way of generating common policies within Simplicity and then you could combine those usual or normal policies with more exotic policies using the Simplicity combinators.

MF: To go through the last few comments on the YouTube. “You can do ZKP/STARKs and anything else you want for embedded logic on Bitcoin for stuff like token layers like USDT, protocols soft forks are specifically for handling Bitcoin.” I don’t know what that is in reference to. Jack asks “Do PTLCs do anything with Taproot?” The best PTLCs need Schnorr which comes within the Taproot soft fork but you are not using Taproot with PTLCs because you are just using adaptor signatures. “Covenants would make much safer and cheaper channels” says Spike.

RO: I’m not familiar with that. It is probably true but I can’t comment on it.

MF: There is another PTLC question from Janus. “Will it still be necessary to trim HTLCs when using PTLCs on Taproot. Tadge mentioned that they complicate matters a bit.” I don’t know the answer to that and I don’t think there are any Lightning people on the call. That is all the YouTube comments, No questions on IRC, nothing on Twitter. We will wrap up. Thank you very much to everyone for joining. Thanks to everyone on YouTube. We will get a video up, we will get a transcript up. If you have said your name or introduced yourself then I will attribute your comments and questions on the transcript but please contact me if you would rather be anonymous. Good night from London.

https://www.youtube.com/watch?v=bPcguc108QM

Pastebin of the resources discussed: https://pastebin.com/vsT3DNqW

Transcript of Socratic Seminar on BIP-Schnorr: https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr/

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Introductions

Michael Folkson (MF): This is a Socratic Seminar on BIP-Taproot. Sorry for the delay for the people on the YouTube livestream. This is in partnership between London BitDevs and Bitcoin Munich. We were going to have two events on the same day at the same time so we thought rather than have two events clashing we would combine them, have the same event and have people from both London and Munich on the call. But there are people from everywhere not just London and Munich. A few words on Bitcoin Munich. It is the original Bitcoin meetup in Munich. I’ve had the pleasure of attending a Socratic last year, the week before The Lightning Conference. It has been around for years, certainly a few more years than we have at London BitDevs. Socratic Seminars, we’ve had a few in the past, I don’t need to speak about Socratic Seminars. Originated at BitDevs in New York, discussion not presentation, feel free to interrupt, ask questions. Questions, comments on YouTube, we will be monitoring the YouTube, we will be monitoring Twitter and IRC ##ldnbitcoindevs. If you are watching the livestream questions and comments are very welcome. There will be a transcript as well but please don’t let that put you off participating. We can edit the transcript afterwards, we can edit the video afterwards, we are not trying to catch people out or whatever. This is purely for educational purposes. The subject is Taproot, Tapscript is also fine, BIP 341, BIP 342. We have already had a Socratic Seminar on BIP-Schnorr so we’ll avoid Schnorr generally but it is fine to stray onto Schnorr because obviously there is interaction between Schnorr and Taproot. What we won’t be discussing though is activation. No discussion on activation. Maybe we will have a Socratic next month on activation. If you are interested in activation join the IRC channel ##taproot-activation. We start with introductions. If you want to do an introduction please do, who you are, what you are working on and what you are interested in in terms of Taproot stuff.

Emzy (E): I am Emzy, I am involved in running servers for Bisq, the decentralized exchange and I am really thrilled about Taproot and learning more Bitcoin in depth.

Albert M (AM): I am Albert, I am an information security consultant and I am also interested in the privacy aspects of this new proposal.

Pieter Wuille (PW): I’m Pieter, I work at Blockstream and on Bitcoin and I am one of the co-authors of the proposal being discussed.

Auriol (A): I am Auriol. I am just curious as to how the conversation has transitioned over the past year. On the topic I am very interested in the privacy aspects of this new technology.

Elichai Turkel (ET): Hi I’m Elichai, I work at DAGlabs and I work on Bitcoin and libsecp. I hope we can get this in the next year or so.

Will Clark (WC): I am Will, I have been working with goTenna doing some Lightning stuff over mesh networks. Like Albert and Auriol and I am interested in the privacy benefits to this.

Introduction to MAST

MF: There is a reading list that I shared. What we normally do is we start from basics. There are a couple of new people on the call so we’ll start with MAST and discuss and explain how MAST works. Absolute basics, does someone want to explain a Merkle tree?

WC: It is a structure of hashes where you pair up the hashes. You make an even number of hashes and if you’ve got an odd number then you will duplicate the last hash. You pair them up in a binary tree structure so that you can show in a logarithmically expanding size a path along the tree.

MF: Basically we are trying to combine a lot of information within a tree structure. A hash condenses anything, the contents of a whole book, an encyclopedia into just one small digest. With a tree you’ve got a bunch of hashes all on leaves in the tree and they get hashed up pairwise up until the top of the Merkle tree which is called the root. It is a way of condensing a lot of information into a very small digest at the top which is effectively the root. If that is a Merkle tree what was the idea behind MAST?
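As a small illustration of the pairing-up just described, here is a generic sketch only; BIP 341’s script tree uses tagged hashes and handles odd numbers of nodes differently, so this is not the consensus construction:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves pairwise level by level, duplicating the last
    element when a level has an odd count, until one digest remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"script A", b"script B", b"script C"])
```

Revealing one leaf only requires the sibling hashes along its path to the root, so the proof size grows logarithmically with the number of leaves.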

AM: A Merkle tree is a structure of inputs and outputs combined with the coinbase?

PW: That is one use of Merkle trees in Bitcoin but not the one we are talking about here.

A: Different conditions that are covered in Merkle tree, instead of combining all that information into a single hash, it separates out different hashes so they can remain individualistic as opposed to combining information and revealing too much about all information in one lump sum.

E: I understand that it is if you have different contracts for moving value on Bitcoin you can reveal only one of the paths in the Merkle tree and use this without showing the other paths that are possible.

MF: Let’s go through the reading list then because there are some interesting intricacies around MAST and it dates back to 2012, 2013 as most things do. The first link is the Socratic Seminar that we had on Schnorr before. Then there is the transcript to Tim Ruffing’s presentation. Tim Ruffing presented last month on Taproot and Schnorr multisignature and threshold signature schemes. Then we have links on MAST.

Aaron van Wirdum Bitcoin Magazine article on MAST (2016)

https://bitcoinmagazine.com/articles/the-next-step-to-improve-bitcoin-s-flexibility-scalability-and-privacy-is-called-mast-1476388597

MF: The first link is that Aaron van Wirdum Bitcoin Magazine article on MAST. That is a good intro to what MAST is. He describes it as essentially merging the potential of P2SH with that of Merkle trees. He gives a primer of Merkle trees and says that instead of locking Bitcoin up in a single script, with MAST the same Bitcoin can be locked up into a series of different scripts which was effectively what Emzy was saying.

David Harding article on What is a Bitcoin Merklized Abstract Syntax Tree (2017)?

https://bitcointechtalk.com/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast-33fdf2da5e2f

MF: I found a link which I will share in the chat which was David Harding’s article from 2017 on what is a Bitcoin Merklized Abstract Syntax Tree? That has some info that I hadn’t seen before including the fact that Russell O’Connor is universally credited with first describing MAST in a discussion. Russell is apparently on the call. That first idea of MAST, perhaps Pieter you can talk about the discussion you had back then?

PW: I really need Russell here because he will disagree with me. I seem to recall that the first idea of breaking up a script into a Merkle tree of spendability conditions is something that arrived in a private discussion I had with Russell a number of years ago. In my mind it has always been he who came up with it but maybe he thinks different.

BIP 114 and BIP 116 MAST proposals

https://github.com/bitcoin/bips/blob/master/bip-0114.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki

MF: Some of the ideas around MAST, there was BIP 116 which was OP_MERKLEBRANCHVERIFY, that was from Mark Friedenbach. There was also BIP 114 which was another detailed BIP from Johnson Lau on Merklized Abstract Syntax Trees. There did seem to be a considerable effort to get MAST formalized back in 2016,17. When MAST was being finalized, we’ll come onto Key Tree Signatures which you discussed at SF Bitcoin Devs at around the same time, there did seem to be an effort to get MAST into Bitcoin even before SegWit. Perhaps Pieter you could enlighten us with your thoughts on these BIPs and some of this work done by Johnson Lau and Mark Friedenbach?

PW: I think they just didn’t have enough momentum at the time. There were a lot of things to do around SegWit and it is hard to focus on multiple things. It is really hard to say what causes one thing to get more traction than others. I think both BIP 114 and MERKLEBRANCHVERIFY were more flexible than what is now included in BIP 341. MERKLEBRANCHVERIFY didn’t really construct a Merkle root structure itself in script validation but it enabled you to implement it yourself inside the script language which is more flexible. It can do something like grab multiple things from a Merkle tree. Say you have a thousand keys and you want to do a 3-of-10 out of it for example, this is not something that BIP 341 can do. At the same time by not doing it as part of a script but as part of a script structure you get some efficiency gains. It is hard to go into detail but you get some redundancy if you want to create a script that contains a Merkle root and then verifies against that Merkle root that a particular subscript is being executed. It is a trade-off between flexibility in structure and efficiency.

MF: Russell (O’Connor) is here. The very first conversations on MAST, can you remember the discussion? Was it a light bulb moment of “Let’s use Merkle trees and condense scripts into a Merkle tree”? I saw that you were credited with the idea.

Russell O’Connor (RO): It goes back to some possibly private IRC conversations I had with Pieter back in 2012 I believe. At that time I was musing about, if we were to design a Bitcoin or blockchain from scratch, what would it look like? I am a bit of a language person, I have been interested in Script and Script alternatives. I was like “This concatenation thing that we have, the original concatenation of scripts idea in Bitcoin doesn’t actually really work very well because it was the source of this OP_RETURN bug.” It is weird to do computation in the scriptSig half of that concatenation because all you are doing is setting up the environment for the scriptPubKey to execute in. This is reflected in the modern day situation where even in SegWit there is no real scriptSig program in a sense. It just sets up a stack. That is the environment in which the scriptPubKey runs. I am a bit of a functional programmer so I was thinking about alternative functional programs where the inputs would just be the environment that the program runs in and then the program would execute. Then when you start thinking that way, the branches in your case expressions can be pruned away because they don’t have to show up on the blockchain. That is where I got this idea of MAST, where I coined the name MAST, Merklized Abstract Syntax Trees. If you take the abstract syntax and look at its tree or DAG structure then instead of just hashing it as a linear piece of text you can hash it according to the expression constructs. This allows you to immediately start noticing you can prune away unused parts of those expressions. In particular the case expressions that are not executed. That is where that idea came from. Eventually that original idea turned into the design for Simplicity which I have been working on for the last couple of years. But the MAST aspect of that is more general and it appears in Taproot and other earlier proposals.

MF: Russell do you remember looking through the BIPs from Johnson Lau and Mark Friedenbach? Or is it too far away that you’ve forgotten the high level details.

RO: I wasn’t really involved in the construction of those proposals so I am not a good person to discuss them.

MF: Some of the interesting stuff that I saw was this tail call stuff. An implicit tail call execution semantics in P2SH and how “a normal script is supposed to finish with just true or false on the stack. Any script that finishes execution with more than a single element on the stack is in violation of the so-called clean-stack rule and is considered non-standard.” I don’t think we have anybody on the call who has any more details on those BIPs, the Friedenbach and Johnson Lau work. There was also Jeremy Rubin’s paper on Merklized Abstract Syntax Trees which again I don’t think Jeremy is here and I don’t think people on the call remember the details.

PW: One comment I wanted to make is I think what Russell and I talked about originally with the term MAST isn’t exactly what it is referred to now. Correct me if I’m wrong Russell but I think the name MAST better applies to the Simplicity style where you have an actual abstract syntax tree where every node is a Merklization of its subtree as opposed to BIP 114, 116, BIP-Taproot, which is just a Merkle tree of conditions and the scripts are all at the bottom. Does that distinction make sense? In BIP 341 we don’t use the term MAST except as a reference to the name because what it is doing shouldn’t be called MAST. There is no abstract syntax tree.

MF: To clarify all the leaves are at the bottom of the trees, as far down as you need to go.

PW: I think the term MAST should refer to the script is the tree. Not you have a bunch of trees in the leaves which is what modern MAST named proposals do.

RO: This is a good point. Somebody suggested the alternative reinterpretation of the acronym as Merklized Alternative Script Trees which is maybe a more accurate description of what is going on in Taproot than what is going on in Simplicity where it is actually the script itself that is Merklized rather than the collection of leaves.

PW: To be a bit more concrete in something actually MAST every node would be annotated with an opcode. It would be AND of these two subtrees or OR of these two subtrees. As opposed to pushing all the scripts down at the bottom.

MF: I think we were going to come onto this after we discussed Key Tree Signatures. While we are on it, this is the difference between all the leaves just being standalone scripts versus having combinations of leaves. There could potentially be a design where there is two leaves and you do an OR between those two leaves or an AND between these two leaves. Whereas with Taproot you don’t, you just take one leaf and satisfy that one leaf. Is that correct?

RO: I think that is a fair statement.

Nothingmuch (N): We haven’t really defined what abstract syntax tree means in the wider setting but maybe it makes sense to go over that given that Bitcoin Script is a Forth like language it doesn’t really have syntax per se. The OP_IF, ELSE, THEN are handled a little bit differently than real Forth so you could claim that that has a tree structure. In a hypothetical language with a real syntax tree it makes a lot more sense to treat the programs as hierarchical whereas in Script they are historically encoded as just a linear sequence of symbols. In this regard the tree structure doesn’t really pertain to the language itself. It pertains to combining leaves of this type in the modern proposals into a large disjunction.

PW: When we are talking about “real” MAST it would not be something that is remotely similar to Script today. It is just a hierarchically structured language and every opcode hashes its arguments together. If you compare that with BIP 341 every inner node in the tree is an OR. You cannot have a node that does anything else. I guess that is a difference.

MF: Why does it have to be like that?

PW: Just limitation of the design space. It takes us way too far if you want to do everything. That is my opinion to be clear.

RO: We could talk about the advantages and disadvantages of that design decision. The disadvantage is that you have to pull all your IF statements that you would have in your linear script up to the top level. In particular if you have an IF then ELSE statement followed by a second IF then ELSE statement or block of code you have two control paths that join back together and then you get another two control paths. But when you lift that to the Taproot construction you basically enumerate all the code paths and you have to have four leaves for those four possible ways of traversing those pairs of code paths. This causes a combinatorial explosion in the number of leaves that you have to specify. But of course on the flip side because of the binary tree structure of the Taproot redemption you only need a logarithmic number of Merkle branch nodes to get to any given leaf. You just need logarithmic space for an exponentially exploding number of cases and it balances out.

PW: To give an example. Say you want to do a 3-of-1000 multisig. You could write it as a single linear script that just pushes 1000 keys, asks for three signatures and does some scripting to do the verification. In Taproot you would expand this to the exact number of combinations there are for the 3-of-1000, probably in the range of 100 million. Make one leaf for each of the combinations. In a more expressive script you could choose a different trade-off. Just have three different trees that only need 1000 elements. It would be much simpler to construct a script but you also lose some privacy.

MF: I see that there is potential complexity if you are starting to use multiple leaves at the same time in different combinations. The benefit is that you are potentially able to minimize the number of levels you need to go down. If every Tapscript was just a combination of different other Tapscripts you wouldn’t have to go so far down. You wouldn’t have to reveal so many hashes down the different levels which could potentially be an efficiency gain.

PW: Not just an efficiency. It may make it tractable. If I don’t do 3-of-1000 but 6-of-1000 enumerating all combinations isn’t tractable anymore. It is like trillions of combinations you need to go through. Just computing the Merkle root of that is not something you can reasonably do anymore. If you are faced with such a policy that you want to implement you need something else. Presumably that would mean you create a linear script, old style scripting, that does “Check a signature, add it, check a signature, add it, check a signature, add it” and see that it adds up to 6. This works but in a more expressive script language you could still partially Merklize this without blowing up the combination space.
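The rough numbers behind this tractability argument can be checked directly (illustrative arithmetic only):

```python
from math import comb, log2

print(comb(1000, 3))               # 166,167,000 leaves for a fully expanded 3-of-1000
print(comb(1000, 6))               # ~1.4e15 leaves for 6-of-1000, infeasible to enumerate
print(round(log2(comb(1000, 3))))  # 27: the Merkle path to any one leaf stays short
```

So the fully expanded 3-of-1000 tree is enormous but buildable, while the fully expanded 6-of-1000 tree is not, which is why some other construction is needed in that case.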

RO: I think the advantage here is that we still use the same script language at the leaves and we get this very easy and very powerful benefit of excluding combinations just by putting this tree structure on an outer layer containing script. Whereas to get the full advantages of a prunable script language it means reinventing script.

MF: We’ll get onto Key Tree Signatures but then you do have to outline all the different combinations of signatures that could perhaps satisfy the script. Pieter had a slide on that SF Bitcoin Devs presentation that we will get onto later which had “First signature, second signature”, the next leaf would be “First signature, third signature” and the next leaf would be “Second signature, third signature”, you did have to outline all the different options. But I don’t understand why you’d have to do that in a script sense. Why do you have to outline all those different combinations? Why can’t you just say “I am going to satisfy a combination of Leaf A and Leaf D”?

PW: You are now talking about why can’t you do this in a linear script?

MF: Yeah

PW: You absolutely can. But it has privacy downsides because you are now revealing your entire policy when you are spending. While if you break it up into a Merkle tree you are only revealing this is the exact keys that signed and there were other options. There were probably many but you don’t reveal what those were. The most extreme is a linear script. Right now in a script language you write out a policy as an imperative program and it is a single program that has everything. The other extreme is what BIP 341 is aiming for, that is you break down your policy in as small pieces as possible and put them in a Merkle tree and now you only reveal the one you actually use. As long as that is tractable, that is usually very close to optimal. But with a more expressive language you have more levels between those two where you can say “I am going to Merklize some things but this part that is intractable I am not going to Merklize.” We chose not to do that in BIP 341 just because of the design space explosion you get. We didn’t want to get into designing a new script language from scratch.

A: How do you know that a Merkle root is in fact the Merkle root for a given tree? Say it is locking up funds for participants, how are participants sure that it is not a leaf of a larger tree or a group of trees? Is there a way to provide proofs against this? What Elichai suggested is that it is as if you are using two preimages. He says that it would break the hash function to do this.

ET: Before Pieter starts talking about problems specific in Merkle trees, there could be a way if you implement the Merkle tree badly that you can fake a node to also be a leaf because of the construction without breaking the hash function. But assuming the Merkle tree is good then you shouldn’t be able to fake that without breaking the hash function.

PW: If a Merkle tree is constructed well it is a cryptographic commitment to the list of its inputs, of the leaves. All the properties that you expect from a hash function really apply. Such as given a particular Merkle root you cannot just find another set of leaves that hash to the same thing. Or given a set of leaves you cannot find another set of leaves that hash to the same thing. Or you cannot find two distinct set of leaves that hash to the same thing and so on. Maybe at a higher level if you are a participant in a policy that is complex and has many leaves you will probably want to see the entire tree before agreeing to participate. So you know what the exact policy is.

A: You are talking about collisions correct?

PW: Yes collision and preimage attacks. If a Merkle tree is constructed well and is constructed using a hash function that has normal properties then it is collision and preimage resistant.

MF: In the chat nothingmuch says does it make sense to consider P2SH and OP_EVAL discussions? That helped nothingmuch understand better.

N: I think we are past that point.

MF: We talked a little about P2SH. We didn’t discuss OP_EVAL really.

N: To discuss Auriol’s point, one thing that I think nobody addressed is and maybe the reason for the confusion is that every Taproot output commits to a Merkle root directly. So the root is given as it were. What you need to make sure is that the way that you spend it relates to a specific known root not the other way round. For the P2SH and OP_EVAL stuff it was a convenient segue for myself a few years ago reading about this to think about what you can really do with Bitcoin Script? From a theoretical computer science point of view it is not very much given that it doesn’t have looping and stuff like that. Redeem scripts and P2SH add a first order extension of that where you can have one layer of indirection where the scriptPubKey effectively calls a function which is the redeem script. But you can’t do this recursively as far as I know. OP_EVAL was a BIP by Gavin Andresen and I think it was basically the ability to evaluate something that is analogous to a redeem script as part of a program so you can have a finite number of nested levels. You can imagine a script that has two branches with two OP_EVALs for committing to separate redeem scripts and that structure is already very much like a MAST structure. That is why I brought it up earlier.

Pieter Wuille at SF Bitcoin Devs on Key Tree Signatures

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2015-08-24-pieter-wuille-key-tree-signatures/

MF: This is your talk Pieter on Key Tree Signatures. A high level summary, this is using Merkle trees to do multisig. This is where every leaf at the bottom of the tree is one of the different combinations. If you have a 2-of-3 and the parties are A, B and C you need a leaf that is A, B, you need a leaf that is A, C, you need a leaf that is B, C. All possible options to get a multisig within a Merkle tree structure.

PW: Key Tree Signatures was really exploiting the observation that at the time in Elements Alpha we had, even unintentionally, enabled functionality that made this possible. It didn’t have Merkle tree functionality and it didn’t have key aggregation. It didn’t have any of those things. But it had enough opcodes that you could actually implement a Merkle tree in the Script language. The interesting thing about that was that it didn’t require any specific support beyond what Elements Alpha at the time had. What BIP 341 does is much more flexible than that because it actually lets you have a separate script in every leaf. The only thing Key Tree Signatures could do was a Merkle tree where every leaf was a public key. At the same time it did go into what the efficiency trade-offs are and how things scale. Those map well.
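For the 2-of-3 example described above, the leaf set is simply every pair of signers. A toy enumeration, using placeholder labels A, B, C:

```python
from itertools import combinations

signers = ["A", "B", "C"]
leaves = list(combinations(signers, 2))
print(leaves)   # [('A', 'B'), ('A', 'C'), ('B', 'C')]; a spend reveals only one pair
```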

MF: It could be implemented straight on Elements Alpha but it couldn’t be implemented on Bitcoin Core. It needed OP_CAT?

PW: Yes it couldn’t be implemented on Bitcoin at the time and still can’t.

MF: There are no plans to enable OP_CAT anytime soon?

PW: I have heard people talk about that. There are some use cases for that. Really the entire set of use cases that Key Tree Signatures aim to address are completely subsumed by Taproot. By introducing a native Merkle tree structure you can do these things way more efficiently and with more flexibility because you are not restricted to having a single key in every leaf. I think historically what is interesting about that talk is the complexity and efficiency trade-offs where you can look at a graph of how does the size of a script compare to a naive linear script. The implementation details of how that was done in Key Tree Signatures aren’t relevant anymore.

MF: There are no edge cases where perhaps you get more efficiency assuming we had OP_CAT using a Key Tree scheme rather than using the Taproot design?

PW: The example I gave earlier is the fact you might not be able to break up your script into leaves.

MF: The privacy thing.

PW: This is a more restricted version of the more general Merkle tree verification in Script that you would get with OP_MERKLEBRANCHVERIFY for example. I think in practice almost all use cases will be covered by Taproot. But it is not exactly the same, this is correct.

N: I think a lot of this goes away under the assumption that you are only spending an output once. A lot of the benefit from reusing an element of your tree for different conditions only materializes if you are going to evaluate that script multiple times. That doesn’t really make sense in the context of Bitcoin.

PW: I am not sure that is true. You want every spend to be efficient. It doesn’t matter if there is one or more. I agree that in general you are only going to reveal one but I don’t think this changes any of the goals or trade-offs.

N: Let me be a bit more precise. If you have a hypothetical system which has something like OP_MERKLEBRANCHVERIFY you can always flatten it out to a giant disjunction and create a Taproot tree for that. Every leaf is a specific path through your reusing tree. If you are only ever going to reveal the one leaf then what matters is that that final condition is efficient.

PW: What you are calling reuse is really having multiple leaves simultaneously.

N: Yes.

PW: There is a good example where there may actually be multiple cases in one tree. That is if you have some giant multisignature and an intractably large set of combinations from it. The example of a 6-of-1000 I gave before, you may want to have a Merkle tree over just those thousand keys and have a way of expressing “I want six of these leaves to be satisfied.” I don’t know how realistic that is as a real world use case but it is something feasibly interesting.

N: That is a definitely a convincing argument that I didn’t account for in my previous statement.

MF: That covers pre-Taproot.

Andrew Poelstra on Tales From The Crypt Podcast (2019)

https://diyhpl.us/wiki/transcripts/tftc-podcast/2019-06-18-andrew-poelstra-tftc/

MF: Let’s move to Taproot. This was an interesting snippet I found on Tales From The Crypt podcast with Andrew Poelstra. He talked about where the idea came from. Apparently there was this diner in California, 100 miles from San Francisco where apparently the idea came into being. Perhaps Bitcoiners will go to this diner and it will become known as the Taproot Diner. Do you remember this conversation at the diner Pieter?

PW: It is a good morning in Los Altos.

MF: So what Andrew Poelstra said in this podcast was that Greg was asking about more efficient script constructions, hiding a timelocked emergency clause. So perhaps talk about the problem Taproot solves and the jump that Taproot gave us on top of all that work on MAST.

PW: I think the click you need to make and we had to make was that really in most contracts, in more complex constructions you want to build on top of script, there is some set of signers that are expected to sign. I have talked about this as the “everyone agrees condition” but it doesn’t really need to be everyone. You can easily see that whenever all the parties involved in a contract agree with a particular spend there is no reason to disallow that spend. In a Lightning channel if both of the participants in the channel agree to do something with the money it is ok that that is done with the money. There is nobody else who cares about it than the participants. If they agree we are good. Thanks to key aggregation and MuSig you can represent the single “some set of keys and nothing else”. These people need to agree and nothing else. You can express that as a single public key. This leads to the notion that whatever you do with your Merkle tree, you want very near the top a branch that is “this set of signers agrees.” That is going to be the usual case. In the unusual case it is going to be one of these complex branches inside this Merkle tree. There is going to be this one top condition that we expect to be the most common one. You want to put it near the top because you expect it to be the most common way of spending. It is cheaper because the path is shorter the closer you set it to the top. What Taproot really does is taking that idea and combining it with pay-to-contract where you say “Instead of paying to a Merkle root I am going to pay to a tweaked version of that public key.” You can just spend by signing with that key. Or the alternative is I can reveal to the world that this public key was actually derived from another public key by tweaking it with this Merkle root. Hence I am allowed to instead spend it with that Merkle root.

MF: There is that trick in terms of the key path spend or the script path spend. The normal case and then all the complex stuff covered by the tree. Then effectively having an OR construction between the key path spend and the script path spend.

PW: Taproot is just a way of having a super efficient one level of a Merkle tree at the top but it only works under the condition that it is just a public key. It cannot be a script. It makes it super efficient because you are not even revealing to the world that there was a tree in the first place.

MF: And with schemes like MuSig or perhaps even threshold schemes that key path spend can potentially be an effective multisig or threshold sig but it needs to be condensed into one key.

PW: Exactly. Without MuSig or any other kind of aggregation scheme all of this doesn’t really make sense. It works but it doesn’t make sense because you are never going to have a policy that consists of “Here is some complex set of conditions or this one guy signs.” I guess it can happen, it is a 1-of-2 or a 1-of-3 or so but those are fairly rare things. In order for this “everybody agrees” condition to be the more common one, we need the key aggregation aspect.

MF: There is that conceptual switch. In this podcast transcript Greg says “Screw CHECKSIG. What if that was the output? What if we just put the public key in the output and by default you signed it.” That is kind of a second part. How do you combine the two into one once you have that conceptual key path and script path spend?

PW: The way to accomplish that is by saying “We are going to take the key path, take that key and tweak it with the script path in such a way that if you were able to sign for the original key path you can still sign for the tweaked version.” The tweaked version is what you put in the scriptPubKey. You are paying to a tweaked version of the aggregate of everyone’s keys. You can either spend by just signing for it, nothing else. There is no script involved at all. There is a public key and a scriptPubKey and you spend it by giving a signature. Or in the unusual case you reveal that actually this key was tweaked by something else. I reveal that something else and now I can do whatever that allowed me to do.

Greg Maxwell Bitcoin dev mailing list post on Taproot (2018)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html

MF: One of the key points here once we have discussed that conceptual type stuff is that pre-Taproot we thought it was going to be an inefficiency to try to have this construction. The key breakthrough with Taproot is that it avoids any larger scripts going onchain and really doesn’t have any downsides. Greg says “You make use cases as indistinguishable as possible from the most common and boring payments.” No privacy downsides, in fact privacy is better and also efficiency. We are getting the best of both worlds on a number of different axes.

PW: I think the post explains the goals and what it accomplishes pretty well. It is a good read.

Andrew Poelstra at MIT Bitcoin Expo on Taproot (2020)

https://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2020/2020-03-07-andrew-poelstra-taproot/

MF: There were a few parts to this presentation that I thought were good. He talks about what Taproot is, he talks about scripts and witnesses, key tricks and then the Taproot assumption. I thought it was a good quotable Taproot assumption “If all interested parties agree no other conditions matter.” You really don’t have to worry about all that complexity as the user as long as you are using that key path spend.

C + H(C || S) = P

N: Since some people already know this and some people don’t maybe it makes sense to dwell on this for a minute so everybody is on the same page for how it works. It is really not magic but it kind of seems like magic the first time you see it.

MF: Can you explain what the equation is saying?

N: The basic details are that public keys are elements of the group that you can define on top of the secp256k1 curve. You can multiply keys by scalars which are just numbers basically. If you take a hash function which gives you a number and you derive something like a public key by multiplying that number by the generator point, then anybody who knows the preimage of the hash knows the equivalent of a secret key as it were. You take a public key that is actually a public key in the sense that those numbers are secret, not known except to the owner. C would be that key and you can always add that tweak which is the hash of C and the script times G to derive P which is a new key. Anybody who knows the secret underlying C, what is the discrete logarithm of C with respect to G, anybody who knows that, under MuSig that is only a group of people, are able to sign with the key P because they also know the diff between the two keys. That takes care of the key spend path. Anybody who can compute a signature with C can compute a signature with P because the difference between them is just the hash of some string. But then also anybody who doesn’t necessarily know how to sign with C can prove that P is the sum of C and that hash. The reason for this is that the preimage for the hash commits to C itself. You can clearly show here that P is the sum of two points. One of them is the discrete logarithm of that point as a hash and that hash contains the other term in the sum so it is inconceivable unless somebody knows how to find second preimages for the hash, to be able to tweak the key in that way and still be able to convince people that really the hash commits to the script. Because the hash does that, the intent of including an additional script in the hash is to convey to the network that that’s one of the authorized ways to spend this output. I hope that was less confusing and not more confusing than before.
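
To make that concrete, here is a minimal sketch of the tweak in Python, using the slide’s names (C the inner key, S the script, P the tweaked key). This is illustrative only: BIP 341 actually uses a tagged “TapTweak” hash over an x-only key and a Merkle root rather than a raw script, and the point helpers here (ser_point, point_add, point_mul, G, n) are assumed rather than real library calls.

from hashlib import sha256

# assumed helpers: ser_point serializes a point, point_add/point_mul are the group
# operations, G is the generator and n the curve order
def tweak_pubkey(C, S):
    t = int.from_bytes(sha256(ser_point(C) + S).digest(), "big") % n
    return point_add(C, point_mul(G, t))   # P = C + H(C || S)*G

def tweak_seckey(c, C, S):
    t = int.from_bytes(sha256(ser_point(C) + S).digest(), "big") % n
    return (c + t) % n                     # whoever can sign for C can sign for P

The script path is then just the corresponding proof: reveal C and S, recompute t, and check that P really equals C + t*G.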

MF: That was a good explanation. I also like the slide from Tim Ruffing’s London Bitcoin Devs presentation that has that public key equation and shows how you can get the script out as well as satisfying it with just a normal single key.

pk = g^(x+H(g^x, script))

PW: Maybe a bit confusing because that slide uses multiplicative notation and in everything else we have been using additive notation. The exponentiation that you see in this slide is what we usually write in elliptic curve notation as multiplication: g^x we usually write as xG, well some people do. There are often interesting fights on Twitter between proponents of additive notation and multiplicative notation.

MF: When you first hear of the idea it doesn’t sound plausible that you could have the same security whilst taking a script out of a public key. It almost feels as if you are halving the entropy because you have two things in the same key. You actually do get exactly the same security.

PW: Do you think the same about Merkle trees, that you are able to take more out than you put in? You are absolutely right that entropy just isn’t the right notion here. It is really not all that different from the fact that you can hash bigger things into smaller things and then still prove that those bigger things were in it.

MF: I understand it now. But when I first heard of it I didn’t understand how that was possible. I think it is a different concept because I understand the tree concept where you are hashing all the leaves up into a root but this was hiding…

PW: Ignore the tree. It is just the hash.

MF: It is the root being hidden within the public key. But that didn’t seem possible without reducing the entropy.

PW: The interesting thing is being able to do it without breaking the ability to spend from that public key. Apart from that it is just hashing.

RO: I just want to make a minor comment on the very good description that was given. You don’t have to know the discrete log of the public key in order to manipulate signatures operating on the tweaked public key. In fact when you are doing a MuSig proposal no individual person ever really knows the discrete log of the aggregated key to begin with and they don’t have to know. It is the case that in Schnorr signatures it is slightly false but very close to being true to say that if you have a signature on a particular public key you can tweak the signature to get a proper signature on the tweaked public key without knowing the private key. The only quibble is that the public key is actually hashed into the equation. You have to know the public key of the tweak before you start this process but the point is that no one has to learn the discrete log of the public key to manipulate this tweak thing.

PW: This is absolutely true. On the flip side whenever you have a protocol that works if a single party knows the private key you can imagine having that private key be shared knowledge in a group of people and design a multiparty protocol that does the same thing. The interesting thing is that that multiparty protocol happens to be really efficient but there is nothing surprising about the fact that you can.

MF: Is there a slight downside that in comparison to a normal pay-to-pub-key, if you want to do a script path spend you do need to know the script and if you want to do a key path spend you do need to know the pubkey? There is more information that needs to be stored to be able to spend from it. Is that correct?

PW: Not more than what you need to spend from a P2SH. It just happens to be in a more structured version. Instead of being a single script that does everything you need to know the structure. Generally if you are a participant in some policy enforced by an output you will need to understand how that policy relates to that output.

MF: With a pay-to-script-hash you still need to have that script to be able to spend from that pay-to-script-hash. In the same way here you need to know the script to be able to spend from the pay-to-taproot.

AJ Towns on formalizing the Taproot proposal (December 2018)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html

MF: The next item on the reading list was AJ Towns’ first attempt to formalize this in a mailing list post in 2018. How much work did it take to go from that idea to formalizing it into a BIP?

PW: If you look at BIP 341 and BIP 342 there is only a very small portion of it that is actually the Taproot construction. That is because our design goal wasn’t to make Taproot possible but to look at the cool things you can accomplish with Taproot and make sure all of those actually work. That includes a number of unrelated changes that were known issues that needed to be fixed, such as signing all the input amounts which we have recently seen. Let me step back a bit. When Taproot came out, at first me personally, I thought the best way to integrate this was to do all the things. We were at the time already working on Schnorr multisignatures and cross input aggregation. Our interest in getting Schnorr out there was to enable cross input aggregation which is the ability to have, across multiple inputs of a transaction, just a single signature that signs for all of them instead of separate ones. It turns out to be a fairly messy and hard problem. Then Taproot came out and it was like “We need to add that to it because this is clearly something really cool that has privacy and efficiency advantages.” It took a couple of months after that to realize that we are not going to be able to build a single proposal that does all these things because it all interacts in many different ways. Then the focus came on “We want things like batch verification. We want extensibility. We want to fix known bugs and we want to exploit Taproot to the fullest.” Anything else that can be done outside of that is going to have to wait for other independent proposals or a successor. I think a lot of time went into defining exactly what was in scope.

MF: Perhaps all the drama and contention of the SegWit fork, did that push you down a road of stripping back some of the more ambitious goals for this? We will get onto some of the things that didn’t make it into the proposal. Did you have half an eye on that? You wanted as little controversy as possible, to minimize the complexity?

PW: Clearly we cannot just put every possible idea and every possible improvement that anyone comes up with into one proposal. How are you going to get everyone to agree on everything? Independent improvements should have some form of independence in their progression towards being active on mainnet. At the same time there are really strong incentives to not do every single thing entirely independently. Doing the Merklization aspect of BIP 341, the Taproot aspect of it and the Schnorr signature aspect, if you don’t do all three of them at the same time you get something that is seriously less efficient and less private. It is a trade-off between those things. Sometimes things really interact and they really need to go together but other times they don’t.

John Newbery on reducing size of Taproot output by 1 vbyte (May 2019)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016943.html

MF: One of the first major changes was this post from John (Newbery) on reducing the size of the pubkey. The consideration always is we don’t want anyone to lose out. Whatever use case they have, whether they have a small script or a really large script, we don’t want them to be any worse off than before because otherwise you then have this problem of some people losing out. It seems like a fiendish problem to make sure that at least everyone’s use case is not hurt even if it is a very small byte difference. I suppose that is what is hanging over this discussion and John’s post here.

PW: I think there is something neat about not using 33 bytes when you can have 32 with the same security. It just feels wasteful.

MF: That’s John’s post. But there was also a conversation I remember on a very basic key path spend being a tiny bit bigger than a normal key path spend pre-Taproot. Is that right?

PW: Possibly I’d need to reread the post I think.

MF: I don’t think this was in John’s post, I think that was a separate discussion.

PW: It is there at the bottom. “The current proposal uses (1). Using (3) or (4) would reduce the size of a taproot output by one byte to be the same size as a P2WSH output. That means that it’s not more expensive for senders compared to sending to P2WSH.” That is part of the motivation as well. Clearly today people are fine with paying to P2WSH which has 32 byte witness programs. It could be argued that it is kind of sad that Taproot would change that to 33. But it is a very minor thing.
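
As a rough worked comparison of the output sizes being discussed (standard scriptPubKey layouts, not figures quoted from the post):

OP_0 <32 byte script hash>  = 34 byte scriptPubKey (P2WSH)
OP_1 <32 byte pubkey>       = 34 byte scriptPubKey (Taproot with 32 byte x-only keys)
OP_1 <33 byte pubkey>       = 35 byte scriptPubKey (Taproot with 33 byte compressed keys)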

Steve Lee presentation on “The Next Soft Fork” (May 2019)

https://bitcoinops.org/en/2019-exec-briefing/#the-next-softfork

MF: There was this presentation from Steve Lee at Optech giving the summary of the different soft fork proposals. There were alternatives that we can’t get into now because there is too much to discuss already. Other potential soft forks, Great Consensus Cleanup was one, there was another one as well. There is a timeline in this presentation which looks very optimistic now, with activation projected maybe 6-12 months ago. Just on the question of timing, there seems to have been progress or changes to either the BIPs or the code continuously throughout this time. It is not as if nothing has been happening. There have been small improvements happening and I suppose it is just inevitable that things are going to take longer than you would expect. There was a conversation earlier with a Lightning dev being frustrated by the pace of change. We won’t go onto activation and why these changes are taking so long. Andrew Poelstra talked about the strain of getting soft fork changes into Bitcoin now that it is such a massive ecosystem and there is so much value on the line.

RO: In order to activate it you need an activation proposal. I think that might be the most stressful thing for developers to talk about maybe.

ET: That is true. I remember that about a year ago I started to work on descriptors for Taproot and I talked with Pieter and I was like “It should probably get in in a few months” and he was laughing. A year later and, as Russell said, we don’t even have an activation path yet.

MF: I certainly think it is possible that we should’ve got the activation conversation started earlier. Everyone kind of thought at the back of their head it was going to be a long conversation. Perhaps the activation discussion should’ve been kicked off earlier.

ET: I think people are a little bit traumatized from SegWit and so don’t really want to talk about it.

MF: A few people were dreading the conversation. But we won’t discuss activation, maybe another time, not today.

Pieter Wuille mailing list post on Taproot updates (no P2SH wrapped Taproot, tagged hashes, increased depth of Merkle tree, October 2019)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-October/017378.html

MF: The next item on the reading list, you gave an update Pieter in October 2019 on the mailing list. The key items here were no P2SH wrapped Taproot. Perhaps you could talk about why people wanted P2SH wrapped Taproot. I suspect it is exactly the same reason why people wanted P2SH wrapped SegWit. There is also tagged hashes and increased depth of Merkle tree.

PW: Incremental improvements I think. The P2SH thing is just based on adoption of BIP 173 and expecting that probably we don’t want to end up in a situation where long term use of Taproot is split between P2SH and native because it is a very slight privacy issue. You are revealing whether the sender supports native SegWit outputs or not. It is better to have everything in a single uniform output type. Given the timeline it looked like we are probably ok with dropping P2SH. The 32 byte pubkeys, a small incremental improvement. The tagged hashes were another. There was one later change which was changing public keys from implicitly square to implicitly even for better compatibility with existing infrastructure which was maybe a couple of months after this email. Since then there haven’t been any semantic changes to the BIP, only clarifications.

RO: Also adding the coverage of the input scriptPubKeys by the signature was very recent.

PW: Yes you’re right. That was the last change.

RO: I am assuming it was changed.

PW: Yes it was.

MF: We wanted P2SH wrapped SegWit because we were introducing bech32, a different format. What was the motivation for wanting P2SH wrapped Taproot?

PW: The advantage would be that people with non-SegWit or non BIP173 compliant wallet software would be able to send to it. That is the one and only reason to have P2SH support in the first place because it has lower security, it has extra overhead. There is really no reason to want it except compatibility with software that can’t send to bech32 addresses.

ET: I thought another reason was because we can and it went well with the flow of the code.

PW: Sure. It was easy because SegWit was designed to have it. The reason SegWit had it was because we wanted compatibility with old senders. Given that we already had it it was relatively easy to keep it.

RO: There is also a tiny concern of people attempting to send Taproot outputs to P2SH wrapped SegWit outputs because currently those are just not secure and can be stolen.

PW: You mean you can have policy rules around sending to future Taproot versions in wallet software while you can’t do that in P2SH?

RO: People might mistakenly produce P2SH wrapped Taproot addresses because they have incorrectly created a wallet code that way. If we supported both then their funds would be secured against that mistake.

PW: That is fair, yeah.

Pieter Wuille at SF Bitcoin Devs on BIP-Taproot and BIP-Tapscript (December 2019)

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2019-12-16-bip-taproot-bip-tapscript/

MF: This is the transcript of Pieter’s talk at SF Bitcoin Devs, this was an update end of 2019. There was a conversation you had with Bram (Cohen) and this is talking about being concerned with facilitating future changes, things like Graftroot which we will get onto. But then also making sure that existing applications or existing use cases, things like colored coins which perhaps you might not be interested in at all yourself and perhaps the community generally isn’t. How much thought do you have to put into making sure things like colored coins aren’t hurt, use cases that very few people are using but you feel as if it is your responsibility to make sure that you don’t break them with this upgrade?

PW: This is a very hard question for me because I strongly believe that colored coins make no sense. If you formulate it a bit more generally I think there is a huge amount of potential ideas of the form: what if someone wants to build something like this later? Is there some easy change we can make to our proposal to facilitate that? For example the annex thing in BIP 341 is an example of an extensibility feature that would enable a number of things that would be really hard to do otherwise if it wasn’t done right now. In general I think that is where perhaps the majority of the effort in fleshing out the details goes, making sure that it is as compatible with future changes as possible.

MF: What is the problem specifically? Can you go into a bit more detail on why partial delegation is challenging with Taproot?

PW: That is just a separate feature. It is one we deliberately chose not to include because the design space is too big and there are too many ways of doing this. Keep it for something after Taproot.

Potential criticisms of Taproot and arguments for alternatives on mailing list (Bitcoin Optech, Feb 2020)

https://bitcoinops.org/en/newsletters/2020/02/19/#discussion-about-taproot-versus-alternatives

MF: There hasn’t been much criticism and there doesn’t appear to have been much opposition to Taproot itself. We won’t talk about quantum resistance because it has already been discussed a thousand times. There was this post on the mailing list with potential criticisms of Taproot in February that was covered by the Optech guys. Was there any valid criticism in this? Any highlights from this post? It didn’t seem as if the criticism was grounded in too much concern or reality.

PW: I am not going to comment. There was plenty of good discussion on the mailing list around it.

Andrew Kozlik on committing to all scriptPubKeys in the signature message (April 2020)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017801.html

MF: This is what Russell was alluding to. This is Andrew Kozlik’s post on committing to all scriptPubKeys in the signature message. Why is it important to commit to scriptPubKeys in the signature message?

RO: I don’t actually understand why it is a good idea. It just doesn’t seem like a bad idea. Maybe Pieter or someone else can comment.

PW: The commitment to all the scriptPubKeys being spent?

MF: Kozlik talked about this in certain applications. So it is specific to things like Coinjoin and not necessarily applicable to everything?

PW: Right but you don’t want to make it optional because if you make it optional you are now again revealing to the world that you care about this thing. I believe that the attack was something of the form where you are lying to a hardware wallet about which inputs of a transaction are yours. Using a variant of the amount attack… I believe it is I do a Coinjoin where I try to spend both outputs from you but the first time I convince you that only one of the inputs is yours and then the second time I convince you that the other one is yours. In both times you think “I am only sending 0.1 BTC” but actually you are spending 0.2. You wouldn’t know this because your hardware wallet has no state that is kept between the two iterations. In general it makes sense to include this information because it is information you are expected to give to a hardware wallet. It is strange that they would not sign it. I think it made perfect sense as soon as the attack was described.

Coverage of Taproot eliminating SegWit fee overpayment attack in Bitcoin Optech (June 2020)

https://bitcoinops.org/en/newsletters/2020/06/10/#fee-overpayment-attack-on-multi-input-segwit-transactions

MF: This was a nice example of Taproot solving a problem that had cropped up. This was the fee overpayment attack on multi input SegWit transaction. Taproot fixes this. This is nice as an example of something Taproot clearly fixes rather than just adding functionality, better privacy, better efficiency. It is a nice add-on.

PW: It was a known problem and we had to fix it in any successor proposal, whatever it was.
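
A loose sketch of the relevant commitments in the BIP 341 signature message, to show why both of these attacks are closed. Field names follow the BIP; the serialization helpers and the spent_utxos list here are illustrative, and tagged hashing is omitted.

from hashlib import sha256

# every Taproot signature commits to the amounts and scriptPubKeys of all inputs being spent
sha_amounts       = sha256(b"".join(ser_amount(u.value) for u in spent_utxos)).digest()
sha_scriptpubkeys = sha256(b"".join(ser_script(u.script_pubkey) for u in spent_utxos)).digest()
# these hashes go into the signature message alongside sha_prevouts, sha_sequences and
# sha_outputs, so a signer can no longer be lied to about amounts or about which inputs are its own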

Possible extensions to Taproot that didn’t make it in

Greg Maxwell on Graftroot (Feb 2018)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html

AJ Towns on G’root (July 2018)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-July/016249.html

Pieter Wuille on G’root (October 2018)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-October/016461.html

AJ Towns on cross input signature aggregation (March 2018)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-March/015838.html

AJ Towns on SIGHASH_ANYPREVOUT (May 2019)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016929.html

MF: The next links are things that didn’t make it in. There is Graftroot, G’root, cross input signature aggregation, ANYPREVOUT/NOINPUT. As the authors of Taproot what thought do you have to put in in terms of making sure that we are in the best position to add these later?

PW: We are not. You need a successor to Taproot to do these things period.

MF: But you have made sure that Taproot is as extensible as possible.

PW: To the extent possible sure. Again there are trade-offs to be made. You can’t support everything. Graftroot and cross input aggregation are such deeply conceptual changes. You can’t permit building them later. It is such a structural change to how scripts work. These things are not something that can be just added later on top of Taproot. You need a successor.

MF: I thought some of the extensibility was giving a stepping stone to doing this later but it is not. It is a massive overhaul again on top of what is kind of an overhaul with Taproot.

PW: Lots of things can be reused. It is not like we need to start over from scratch. You want Schnorr signatures, you want leaf versioning, you want various extensibility mechanisms for new opcodes. Graftroot is maybe not the perfect example. It depends to what extent you want to do it. Cross input aggregation, the concept of script verification is no longer a per input thing but it is a per transaction thing. You can’t do it with optimal efficiency, I guess you can invent things. The type of extensibility that is built in is new opcodes, new types of public keys, new sighash types, all these things are made fairly easy and come with almost no downsides compared to not doing them immediately. Real structural changes to script execution, they need something else.

MF: And perhaps these extensions, we might as well do them because there is no downside. We don’t know the future so we might as well lay the foundations for as many extensions as possible because we don’t know what we will need in future.

PW: Everything is a trade-off between how much engineering and specification and testing work is it compared to what it might gain us.

Taproot and Tapscript BIPs

https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0342.mediawiki

MF: There is BIP-Taproot and BIP-Tapscript. As I understand it, BIP-Taproot was getting too long so there was a separate BIP which has a few changes to script. The main one being getting rid of CHECKMULTISIG and introducing CHECKSIGADD. CHECKSIGADD is for the multisignature schemes where multiple signatures are actually going onchain. It is more efficient for batch verification if you are doing a multisig with multiple signatures going onchain. Although the hope is that multisig will be with MuSig schemes so that multiple signatures won’t go onchain.

PW: Exactly.

MF: The design of CHECKSIGADD, it is like a counter. With CHECKMULTISIG there was no counter, you just tried all the signatures to see if there were enough signatures to get success from that script. But CHECKSIGADD introduces a counter which is more efficient for batch verification. Why is there not an index for keys and signatures? Why is it not like “Key 1, Key 2, Key 3 and Key 4” and then you say “I’m providing Signature 2 which matches to Key 2.” Why is it not designed like that?

PW: Again design space. If you go in that direction there are so many ways of doing it. Do you want to support arbitrary subsets up to a certain size? You could imagine some efficient encoding of saying “All possible policies up to 5 keys I can put into a single number. Why not have an opcode that does that?” We just picked the simplest thing that made sure that multisignatures weren’t suddenly a lot more gratuitously inefficient compared to what existed before because the feeling is if you remove a feature you need to compensate with an alternative. Due to OP_SUCCESSx it is really easy to add a new opcode that does any of these things you are suggesting with really no downside.
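
For reference, the counter style being described looks something like this for a 2-of-3 leaf (a sketch following BIP 342; keys that do not sign get an empty signature in the witness):

<pubkey A> OP_CHECKSIG <pubkey B> OP_CHECKSIGADD <pubkey C> OP_CHECKSIGADD OP_2 OP_NUMEQUAL

Each opcode adds 1 to the running total when its signature is valid and 0 when the witness supplies an empty signature for that key, so only the signatures actually present get verified, which is what keeps batch verification workable.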

MF: You could do an indexed multisig using CHECKSIGADD?

PW: Using OP_SUCCESSx you can add any opcode. The focus is fixing existing problems and making sure batch verification works but beyond that anything else we leave. Actually new features we leave to future improvements.

N: The ADD variant of CHECKMULTISIG, that also addresses the quadratic complexity of CHECKMULTISIG? It is not just for batch verification?

PW: There is no quadratic complexity in CHECKMULTISIG?

N: Doesn’t it need to loop for the signatures and the keys?

PW: No because they have to be in the same order. It is inefficient but it is just at worst proportional to the number of the keys given. Ideally we want something that is just proportional to the number of signatures given. It is unnecessarily inefficient but it is only linearly so. There were, or are, quadratic problems, for example pre-SegWit if you have many public keys, each of them would individually rehash the entire transaction. The bigger you make your transaction, the amount of data hashed goes up quadratically. But that is already fixed since SegWit.

RO: I believe OP_ROLL is still quadratic even in Taproot.

PW: I think it is just linear but with a pretty bad constant factor.

RO: The time to execute an OP_ROLL is proportional to, can be as large as the size of the script. So a script that contains only OP_ROLLs has quadratic complexity in terms of the length of the script.

PW: Right but there is a limit on the stack size.

RO: Of course.

PW: Without that limit it would be quadratic, absolutely. Also in theory a different data structure for the execution stack is possible that would turn it into O(n log(n)) instead of O(n^2) to have unbounded ROLLs.

Bitcoin Core BIP 340-342 PR 17977

https://github.com/bitcoin/bitcoin/pull/17977

MF: This is the PR open in Bitcoin Core. This is all the code including the Schnorr libsecp code. My understanding with this is that ideally, certainly if you have sufficient expertise, you would help review the Schnorr part in libsecp. If not that then start reviewing some of this very large Taproot PR in Core. I am thinking about trying to organize a PR review club maybe just taking a few commits from this PR. I know we covered a couple of the smaller ones earlier.

PW: I think if you ignored the libsecp part it is not all that big, a couple of hundred lines excluding tests.

MF: That is doable for a Bitcoin Core PR review club. Perhaps trying to do the Schnorr code in libsecp is too big. I don’t know if we could narrow that down, focus on just a few commits in the libsecp Schnorr PR. I think either you or Greg said on IRC anybody with C++, C experience is useful in terms of review for that libsecp stuff because the cryptography is on solid ground but you need some code review on that libsecp PR.

PW: I don’t remember saying that.

RO: I think Greg has said that. The difficulties with the libsecp C code will come from not the cryptography but from C.

MF: C specific problems, in terms of the language.

RO: Yes

MF: I did split it into a few commits. There are quite a few functional tests to look at on the Taproot PR. I am trying to think of an accessible way for people to start delving in and looking at the tests and running the tests is often a good first step.

Bitcoin Stack Exchange question on Simplicity and Taproot

https://bitcoin.stackexchange.com/questions/97049/in-theory-could-we-skip-the-proposed-taproot-soft-fork-activate-simplicity-inst

MF: Shall we talk a bit about Simplicity? There was a Bitcoin Stack Exchange question on why not skip Taproot and just merge in Simplicity. Why aren’t we doing that?

RO: I don’t know. That seems like a great idea to me (joke). Simplicity is not completed yet and not reviewed and totally not ready. It might be good to go with something that is actually completed and will provide some benefit rather than waiting another four years, I don’t know.

MF: Is that possible longer term? I know that on the Stack Exchange question Pieter says you’d still want Taproot because you can use Simplicity within Taproot. I know you talked about avoiding a SIGHASH_NOINPUT soft fork with Simplicity. If Simplicity was soft forked in you could potentially avoid the SIGHASH_NOINPUT, ANYPREVOUT soft fork.

RO: You can’t get the root part of Taproot with Simplicity. You can’t really program that. The fact that you can have a 32 byte witness program and spend that as a public key is something that is not really possible in Simplicity.

PW: I think the biggest advantage of Taproot is that it very intentionally makes one particular way of spending and creating scripts super efficient in the hope to incentivize that. You get the biggest possible policy based privacy where hopefully nearly everything is spent using just a key path and nothing else. If you just want to emulate that construction in another language be it Simplicity or through new opcodes in Script you won’t be able to do that with the same relative efficiency gains. You would lose that privacy incentive at least to some extent.

RO: Because the root part of Taproot is not something that is inside Script. It is something that is external to Script. Even replacing Script isn’t adequate.

MF: But if you were to get really wacky you could have a Taproot tree with Script and Simplicity on different leaves. You could use one leaf that is using Simplicity or another leaf that is using Bitcoin Script?

RO: Yes and that would probably be the natural state of things if Simplicity goes in that direction into Bitcoin.

MF: Because you’d only want to use Simplicity where you are getting a real benefit of using it?

RO: Simplicity is an alternative to Bitcoin Script. The leaf versioning aspect of Taproot allows you to put in alternatives to Bitcoin Script which don’t have to be Simplicity, any alternative to Bitcoin Script. That is both an upgrade mechanism for Taproot but it also implies this ability to mix a Tapleaf version for Script with a Tapleaf version for Simplicity with a Tapleaf version for whatever else we want.

MF: The benefits would be being able to do stuff that you can’t do in Script. What other benefits, why would I want to use Simplicity rather than use Script on a leaf of my tree other than to get functionality that I can’t get with Script? Is there efficiency with a Simplicity equivalent of Script in some cases?

RO: I think the extended functionality would be the primary benefit, extended features is the primary benefit for using Simplicity over Script. It is possible that in a head to head competition Simplicity might even beat Script in terms of efficiency and weight. That is probably not the case. I suspect that when things settle down Simplicity will not quite be able to beat Script at its own game. It is a little bit early to tell whether that is true or not.

PW: It also really depends on what kind of jets are implemented and with what kind of encoding. I suspect that for many things that even though it is theoretically possible to do anything in Simplicity it may become really exorbitantly expensive to do so if you need to invent your own sighash scheme or something.

Update on Simplicity

https://blockstream.com/2018/11/28/en-simplicity-github/

MF: Can you give an update on Simplicity Russell? What are the next steps? How near is it to being a potential soft fork proposal for Bitcoin?

RO: I am basically at the point of Simplicity’s functional completeness in that the final operation called disconnect which supports delegation is implemented and is under review. That’s at the point where people who are particularly keen to try out Simplicity in principle can start writing Simplicity programs or describing Simplicity programs. There are two major aspects that still need working on. One is that there is a bunch of anti-malleability checks that need to be implemented that are currently not implemented. This doesn’t affect the functional behavior but of course there are many ways of trivially denial of service attacking Simplicity. While Simplicity doesn’t have loops you can make programs that take an exponential amount of time. We need a mechanism to analyze Simplicity programs. This is part of the design but not implemented to analyze programs and put an upper bound on their runtime costs. There is also various anti witness malleability checks that need to be put in. These are not implemented but they are mostly designed. Then what is unfortunately the most important for people and things that come to the end of the timeline is “What is a good library of jets that we should make available?” This is where experimenting on Elements Alpha and sidechains, potentially Liquid, will be helpful. To try to figure out what are a good broad class of jets, which are intrinsic operations that you would add to the Simplicity language, what sort of class of jets do you want? I am aiming for a very broad class of jets. Jets for elliptic curve point multiplication so you can start potentially putting in very exotic cryptographic protocols and integrating that into your language. I’d probably like to support alternative hashing functions like SHA3 potentially and stuff like that. Although that has got a very large state space so we’ll see how that goes. The things that would inform that come from trying Simplicity. As we explore the uses of Simplicity people will naturally come up with little constructs that they would like to be jets and then we can incorporate that. It would be nice to get a large understanding of what those jets that people will want earlier because the mechanisms for soft forking in new jets into Simplicity are a little bit less nice than I was hoping for. Basically I have been reduced to thinking that you’ll just need different versions of Simplicity with different sets of jets as we progress that need to be soft forked in.

MF: So a Tapleaf version being Simplicity and then having different Simplicity versions within that Tapleaf version? I know we are getting ahead of ourselves.

RO: Probably what would happen. The design space of how to soft fork in jets has maybe not been completely explored yet. That is something we would want to think about.

N: This is about jets and soft forking actually. From the consensus layer point of view it is only down to the validation costs right? If you have a standard set of jets those are easier for full nodes to validate with a reasonable cost and therefore should be cheaper? Semantically there should be no difference from interpreting and executing jets? Or is it also what needs to be included on the blockchain? Can they be implicitly omitted?

RO: Originally I did want to make them implicit but there turns out to be subtle… Simplicity has this type system and there is this subtle problem with how jets interact with the type system that makes it problematic to make jets completely implicit in the way that I was originally thinking. The current plan is to make jets explicit and then you would need some sort of soft forking mechanism to add more jets as you go along. In particular, and part of the reason why it is maybe not so bad, in order to provide the witness discount…. The whole point of jets is that these are bits of Simplicity programs that we are going to natively understand from the Simplicity interpreter so we don’t have to run through the Simplicity interpreter to process them but we are going to run them with native C code. They are going to be cheaper and then we can discount the costs to incentivize their use. The discounts for a native Schnorr signature, it takes maybe an hour to run Schnorr signature verification on my laptop written in pure Simplicity. Of course as you add arithmetic jets that comes down to 15 seconds. But of course we want the cost to be on the order of milliseconds. That’s the purpose of jets there. In order to provide that discount we have to be aware of what jets are at the consensus level.

MF: Are you looking at any of these things that didn’t make it into the Taproot soft fork proposal as potential functionality that jumps out as a Simplicity use case? We talked about SIGHASH_NOINPUT but it is potentially too early for that because we’ll want to get that in before Simplicity is ready. The Graftroot, G’root, all this other stuff that didn’t make it in, anything jumps out at you as a Simplicity functionality first use case?

RO: It is probably restricted to the set of things that would be implemented by opcodes. SIGHASH_NOINPUT, delegation are the two things that come to mind. This is what I like about Simplicity. Simplicity is designed to enable people to do permissionless innovation. My design of Simplicity predates SIGHASH_NOINPUT and it is just a natural consequence of Simplicity’s design that you can do SIGHASH_NOINPUT. Delegation was a little bit different, it was explicitly put into the Simplicity design to support that. But a lot of things, covenants is just a consequence of Simplicity’s design and the fact you can’t avoid covenants if you have a really flexible programming language. Things like Graftroot and cross input signature aggregation, those things are outside of the scope of Script and generally not enabled by Simplicity by itself. Certainly Simplicity has no way of doing cross input aggregation. You can draw an analogy between Graftroot and delegation. It has a bit of Graftrootness to it but it doesn’t have that root part of the Graftroot in the same way that Simplicity doesn’t have the root part of Taproot.

MF: Covenants is a use case. So perhaps richer covenants depending on if we ever get CHECKTEMPLATEVERIFY or something equivalent in Script?

RO: CHECKTEMPLATEVERIFY is also covered by Simplicity.

MF: Soft forks, you were talking about soft forks with jets. Is the process of doing a soft fork with jets as involved as doing with a soft fork with Bitcoin?

RO: It would probably be comparable to soft forking in new opcodes.

MF: But still needs community consensus and people to upgrade.

RO: Yes. In particular you have a lot of arguments over what an appropriate discount factor is.

MF: I thought we could avoid all the activation conversations with Simplicity.

RO: The point is that these jets won’t enable any more functionality that Simplicity doesn’t already have. It is just a matter of making the price for those contracts that people want to use more reasonable. You can write a SHA3 compression function in Simplicity but without a suitable set of jets it is not going to be a feasible thing for you to run. Although if we are lucky and we have a really rich set of jets it might not be infeasible to write SHA3 out of existing jets. That would be my goal of having a nice big robust set of midlevel and low level jets so that people can build these complicated, not thought of or maybe not even invented, hash functions and cryptographic operations in advance without them necessarily being exorbitantly costly even if we don’t have specific jets for them.

Q&A

MF: I have been very bad with YouTube because I kept checking and nothing was happening. But now lots of happened and I’ve missed it. Apologies YouTube. Luced asks when Taproot? We don’t know, we hope next year. It is probably not going to be this year we have to sort out the activation conversation that we deliberately avoided today. Spike asks “How hard would signature aggregation to implement after this soft fork?” We have key aggregation (corrected) with this soft fork or at least key aggregation schemes. We just don’t have cross input signature aggregation.

PW: Taproot only has it at the wallet level. The consensus rules don’t know or care about aggregation at all. They see a signature and a public key and they verify. While cross input aggregation or any kind of onchain aggregation, before the fact aggregation, needs a different scheme. To answer how much work it is that really depends on what you are talking about.

MF: This is the key aggregation versus signature aggregation conversation?

PW: Not really. It is whether it is done offchain or onchain. Cross input aggregation necessarily needs it onchain because different outputs that are being spent by different inputs of a transaction inevitably have different public keys. You cannot have them aggregated before creating the outputs because you already have the outputs. So spending them simultaneously means that onchain there needs to be aggregation. The consensus rules need to be aware of something that does this aggregation. That is a very fundamental change to how script validation works because right now with this conceptually boolean function you run on every input and it returns TRUE or FALSE. If they all return TRUE you are good. With cross input aggregation you now need some context that is across multiple inputs. In a way and this may actually be a good step towards the implementation side of that, batch validation even for Taproot also needs that. While BIP 341, 342, 340 support batch validation this is not implemented in the current pull request to Bitcoin Core. Something we expect to do after it is in because it is an optional efficiency improvement that the BIP was intentionally designed to support but it is a big practical change in implementation. It turns out that the implementation work needed for that is probably a step towards making cross input aggregation easier once there are consensus rules for that.

MF: Janus did respond to that question “are you referring to MuSig or BLS style aggregation?” MuSig we are hopefully getting but BLS style aggregation we are not with proposed Taproot.

PW: BLS lets you do non-interactive aggregation. That is not something that can be done with Schnorr.

MF: On the PR Greg Sanders says Taproot functional tests, he’d suggest explaining the whole “Spender” framework that’s in it? It took him a couple of days to really understand it. Who has written the functional tests on Taproot, is it you Pieter?

PW: Yes and Johnson Lau and probably a few other people. They were written a while ago. Other people have contributed here and there that I forget now. It does have this framework where a whole bunch of output and input functions are passed. It creates random blocks that randomly combine these into transactions and tries to spend them. It checks that things that should work do work and things that shouldn’t work don’t.

MF: Now Greg understands it perhaps he is the perfect candidate for the Bitcoin Core PR review club on Taproot functional tests. Volunteering you Greg. Janus asks about a source for BLS style aggregation being possible with Schnorr. Pieter you’ve just said that is not possible?

PW: It is not. If by BLS style aggregation you mean non-interactive aggregation then no. There is no scheme known based on discrete logarithm that has this. We’d need a different curve, different security assumptions, different efficiency profile for that.

MF: Greg says “Jet arguments will likely be very similar to what ETHerians call “pre-compile”. You can technically do SNARKs and whatever in EVM but not practically without those.”

RO: Yeah that sounds true to me. Again I would hope that with enough midlevel jets that even complicated things are not grossly expensive so that people would be unable to run them. That is going to be seen in the future whether that is true or not.

MF: Spike said “Wuille said he started looking at Schnorr specifically for cross input aggregation but they found out it will make other stuff like Taproot and MAST more complicated so it was delayed.” That sounds about right. That is the YouTube comments. I think we have got through the reading list. Are there any other last comments or any questions for anyone on the call?

Next steps for Taproot

RO: Recently I got excited about Taproot pending activation and I wanted to go through and find things that need to be done before Taproot can be deployed. This might be a useful exercise for other people. I found a dangling issue on BIP 173, SegWit witness versions. There was an issue with the insertion bug or something in the specification. I thought it would be easy to fix but it turns out it is complicated. As far as I know the details for BIP 340 are not complete with regards to synthetic nonces. Although that is unrelated to Taproot, the fact that Taproot depends on BIP 340 suggests that BIP 340 should be completed before Taproot is deployed. I guess my point with this comment is that there are things that should be done before Taproot is deployed. We should go out and find all those things and try to cross them off.

PW: I do think there is a distinction to be made between things that need to be done before Taproot consensus rules and things that need to be done before wallets can use it. Something like synthetic nonces isn’t an issue until someone writes a wallet. It won’t affect the consensus rules. Similarly standardization of MuSig or threshold schemes is something that needs to be done, integration with descriptors and so on. It is not on the critical path to activation. We can work on how the consensus rules need to activate without having those details worked out. The important thing is just we know they are possible.

MF: Russell, do you have any other ideas other than the one you suggested for things we need to look out for?

RO: Nothing comes to mind but it wouldn’t surprise me if there are other issues out there. I didn’t even think about the design for a MuSig protocol. Pieter is of course right when he says these aren’t necessarily blocking things for Taproot but it feels like an appropriate time to start on them and it is things that everyone can do.

MF: The conversation with someone in the Lightning community was that there is so much other stuff to do that they don’t want to work on Lightning post Taproot given that we don’t know when it is going to be activated. I don’t think there is anybody on the call who is really involved in the Lightning ecosystem but perhaps they are frustrated with the pace or perhaps want some of this to be happening faster than it is. There are lots of challenges to work on Lightning. There were two final links on the reading list: Nadav Kohen’s talk on “Replacing Payment Hashes with Payment Points” and Antoine Riard’s talk “Schnorr Taproot’d Lightning” at Advancing Bitcoin. Any thoughts on Lightning post Taproot? There has been a discussion on how useful Miniscript can be with Lightning. Any thoughts on how useful Simplicity could be with Lightning?

RO: I am not that familiar with PTLCs (point time locked contracts) so I am not too sure what details are involved with that. Simplicity is a generic programming language so it is exactly these innovative things that Simplicity is designed to support natively without people needing to necessarily soft fork in new jets for. Elliptic curve multiplication should already be a jet and it is one of those things where I am hoping that permissionless innovation can be supported right out of the box.

MF: If they are using adaptor signatures post Taproot and scriptless scripts there is not much use for Miniscript and not much use for Simplicity?

RO: Adaptor signatures are offchain stuff and so are outside of the scope. Simplicity can take advantage of it because it has Schnorr signature support but it doesn’t have any influence on offchain stuff. I can say that there has been some work towards a Miniscript to Simplicity compiler. That would be a good way of generating common policies within Simplicity and then you could combine those usual or normal policies with more exotic policies using the Simplicity combinators.

MF: To go through the last few comments on the YouTube. “You can do ZKP/STARKs and anything else you want for embedded logic on Bitcoin for stuff like token layers like USDT, protocols soft forks are specifically for handling Bitcoin.” I don’t know what that is in reference to. Jack asks “Do PTLCs do anything with Taproot?” The best PTLCs need Schnorr which comes within the Taproot soft fork but you are not using Taproot with PTLCs because you are just using adaptor signatures. “Covenants would make much safer and cheaper channels” says Spike.

RO: I’m not familiar with that. It is probably true but I can’t comment on it.

MF: There is another PTLC question from Janus. “Will it still be necessary to trim HTLCs when using PTLCs on Taproot. Tadge mentioned that they complicate matters a bit.” I don’t know the answer to that and I don’t think there are any Lightning people on the call. That is all the YouTube comments, No questions on IRC, nothing on Twitter. We will wrap up. Thank you very much to everyone for joining. Thanks to everyone on YouTube. We will get a video up, we will get a transcript up. If you have said your name or introduced yourself then I will attribute your comments and questions on the transcript but please contact me if you would rather be anonymous. Good night from London.


Socratic Seminar - Signet

Date: August 19, 2020

Transcript By: Michael Folkson

Tags: Signet, Soft fork activation

Category: Meetup

Media: https://www.youtube.com/watch?v=b0AiucAuX3E

Pastebin of the resources discussed: https://pastebin.com/rAcXX9Tn

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Intro

Michael Folkson (MF): This is a Socratic Seminar organized by London BitDevs. We have had a few in the past. We had a couple on BIP-Schnorr and BIP-Taproot that were really good; Pieter Wuille, Russell O’Connor and various other people joined those. Videos and transcripts for those are up. For those who haven’t attended a Socratic Seminar before this isn’t a presentation. Kalle (Alm) is on the call which is great but this isn’t a presentation from Kalle, this is a discussion. We have got a reading list up that I have shared in various places on Twitter and on the YouTube. We’ll be going through those links and that will be the structure of the discussion. We will start off from basics. Early on is a great time for people who don’t know too much about Signet to participate. It will get more technical and we will go into the implementation details later. We are livestreaming on YouTube. Questions, comments on the YouTube. We will be monitoring Twitter @LDNBitcoinDevs and the IRC channel ##ldnbitcoindevs. The topic today is Signet. As I said Kalle is here which is great. He knows more about Signet than probably anybody else on the planet.

Kalle Alm (KA): AJ (Towns) is starting to zoom away from me. I’m starting to be like “What are you doing now?”

MF: We start off with intros. We’ll start with you Kalle. What have you been working on in terms of Signet for the last couple of years?

KA: I am Kalle. I am living in Tokyo right now. I am working for DG Labs. I am doing mostly Bitcoin Core related things. Also working on a debugger for Bitcoin Core called btcdeb. For the last few years Signet has been my top priority. I am excited. It is moving forward now and finally I am hoping to get Signet in place right in time for Taproot. If we can get those two in, add Taproot to Signet, that would be cool.

Emzy (E): I have only heard about Signet. I tried to use testnet for something and I see that we need Signet because testnet has so many problems. Mostly you can’t use it really, it is annoying.

AJ Towns (AT): I have previously been working at Xapo and am now at Paradigm, in both cases just working on random Bitcoin Core stuff. I have been involved in the Taproot stuff for a while now. I agree that testnet is terrible and really want to see some sort of live way of playing with new soft forks and consensus rules before we deploy them. Signet sounds awesome. I have been looking at that for the last month or two with some seriousness.

MF: We’ll start off with basics exploring what testnet and regtest offer and some of the problems, why there was a motivation for Signet in the first place.

Testnet in Mastering Bitcoin (p207)

https://github.com/bitcoinbook/bitcoinbook/blob/25569ba10142a55a0c26d32033d06c5b1033e7ea/ch09.asciidoc#bitcoins-test-blockchains

MF: This is part of Andreas Antonopoulos’ book Mastering Bitcoin. Starting from first principles what is testnet? Why do we have testnet? What do people use testnet for? Why was it set up in the first place?

E: Testnet, I can talk why I used it. I used it in the beginning to test out Lightning because the first implementations of all the Lightning nodes used testnet. That is one example to simply use something that has no value and try out the whole system. I think that is the purpose for testnet. It has many people using it and without any value you can try out things.

MF: If you are building an application and you want to test it, you don’t want to have real money on the line; that would be one use case. We’ll get into some of the problems with testnet but if you are trying out new mining hardware this can really impact the experience of other people’s application testing on testnet because the difficulty of mining blocks goes sky high. The interaction in terms of testing certain applications does screw up the testing of other applications on testnet. In terms of how testnet works how are blocks mined on testnet? What is the difference between mining blocks on testnet versus mining blocks on mainnet?

E: There is not directly a difference. There was an improvement because the difficulty was really ramping up. They would stop mining and then the difficulty was high and no new blocks were found. I think something changed on the mining side for the difficulty adjustment. Other than that it should be pretty much the same as mainnet.

MF: There was the adjustment with testnet3. There was the problem that there would be loads of hash power coming onboard and that would mean once that hash power left the block generation time was really, really long. There was that change for testnet3 in 2012. If a block hadn’t been found after a certain time the difficulty would plummet down to the lowest possible.

KA: I am not entirely sure. I haven’t done a lot of mining on testnet. From what I understand yeah if a block hasn’t been seen for a while the difficulty drops. This has some issues but I’m sure we’ll get there.

Testnet on Bitcoin wiki

https://en.bitcoin.it/wiki/Testnet

MF: There are a few differences. Different DNS seeds. In terms of trying to find other nodes the DNS seeds are different on testnet to what they are on mainnet. I think Emzy is a DNS seed on mainnet. Are you a DNS seed on testnet Emzy?

E: Half of them from mainnet are also running testnet nodes. I am not running a testnet node because I think it is not that important.

MF: And obviously the port needs to be different. There is a minimal difficulty that we’ve gone over. Then you can do transactions with non-standard scripts. Again we’ll leave that to later because there’s that discussion on Signet. Whether you should be able to experiment with scripts that you can’t on mainnet on say testnet, regtest, Signet. That final one on this testnet wiki is that it is a different genesis block to the main network. There was a new genesis block which was reset for the 0.7 Bitcoin release.
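
To make the comparison concrete, selecting testnet on a standard Bitcoin Core install is just a network flag; the different port, network magic, DNS seeds and genesis block are all picked automatically. A minimal sketch:

```
# start a node on testnet; port, magic bytes, DNS seeds and genesis block
# are selected automatically by the network flag
bitcoind -testnet -daemon

# check which chain the node is on and how far it has synced
bitcoin-cli -testnet getblockchaininfo
```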

16k blocks were found in one day on testnet3

https://web.archive.org/web/20160910173004/https://blog.blocktrail.com/2015/04/in-the-darkest-depths-of-testnet3-16k-blocks-were-found-in-1-day/

MF: Some of the problems with testnet. This was a crazy example. I don’t know exactly how this happened but apparently there were 16,000 blocks found in a day. Why is this a problem?

KA: I’m not sure if this is a case of a re-org. I am assuming it is. Probably there was a lag between the last blocks and then the difficulty dropped really far down. They put a miner on it and then it went haywire because of the minimum difficulty. That is a problem if you go back and re-org purposefully. You can get the same result because with older blocks, if one is treated as if it were the last block on the chain, you would still get this difficulty drop relative to today’s time. You could go back, get lower difficulty and mine more blocks than you could if you just mined on the top of the chain.

MF: There are two issues with it. There is the issue of too many blocks coming through. Depending on your application, exactly what you are testing, let’s say you are testing a new opcode or something, this may not be a big issue but it is just crazy that you are getting so many confirmations through in such a rush. One issue is that block generation times are so volatile and variable. As Kalle said, you also get re-orgs. If you are testing a transaction that you think has 5 confirmations and then there is a massive re-org and the transaction drops out because it was in a block that is now discarded, that is also a problem. But it is very dependent on exactly what you are testing. If you are testing new mining hardware, or testing a change to Bitcoin Core mining or Bitcoin Core P2P, that is very different from testing a new script, a non-standard script, where perhaps you don’t care about confirmation times or re-orgs as much.

Has the testnet ever been reset?

https://bitcoin.stackexchange.com/questions/9975/has-the-testnet-ever-been-reset

MF: My understanding of why testnet was reset back in 2012, this was testnet2 to testnet3, is that one of the big problems was that people were selling their testnet coins for real monetary value. Was it just a dearth of testnet faucets and people couldn’t get their hands on coins to do testing? Why did testnet coins suddenly get a value at that point in time?

AT: It was totally before my time so I’ve got no idea but I thought the idea was more you’d reset testnet just to make sure it was easy to sync with in the first place and didn’t have years long history of transactions that nobody really cares about anymore.

KA: It was reset twice because both times somebody started selling them. That is what I heard anyway. I haven’t seen why anyone would buy coins like that.

MF: We can only speculate I suppose. It might just be pure stupidity, having no idea that the testnet coins weren’t real Bitcoin, or thinking it was an altcoin or something. That was one of the motivations for resetting it back in 2012. As AJ says, block sync time, we will get onto that in a few links, whether testnet should be big and whether it should take a long time to sync testnet. What were the other reasons why it was reset? There does seem to be a problem because with non-standard scripts enabled, after a certain period of time it does get messy. Sometimes you need a reset just because people are struggling to sync the chain. There are rule changes and people are struggling to get to that latest blockchain tip, ignoring the time it takes, physically unable to sync up to the latest block.

E: I tested syncing testnet3 and it was really fast without any problems. I think the problem is if you don’t reset it, if there is no precedent of it being reset, I can see it becoming like any other altcoin. It can get value if you don’t reset it. Now that it has been reset once or twice it doesn’t get any value.

Testnet version history

https://bitcoin.stackexchange.com/questions/36252/testnet-version-history

MF: On this Bitcoin StackExchange post there is this comment (from Gavin Andresen). “The main reason for the reset is to get a more sane test network; with the BIP16 and BIP30 and testnet difficulty blockchain rule changes the old testnet is a mess with old clients serving up different, incompatible chains.” I don’t know whether that is people not maintaining or not keeping up to date with the latest Bitcoin Core changes but there does seem to be a bigger problem than just purely sync time or IBD time. It does appear there needs to be a reset every so often to get rid of some of these issues that collect up over time on the testnet. I don’t think people have thought much about testnet, so I don’t know if there are any lessons to be learnt from these resets and whether Signets will need similar resets in time.

KA: Testnet is for making sure that your node that you wrote in Javascript can sync everything properly and you can go through all the testnet3 stuff and find all these weird scripts that are trying to break it. It is a resource for that reason I think. A way to try out weird stuff that people have done because it is on testnet and it is worth nothing. On mainnet you wouldn’t be experimenting whereas on a test network you would. I know Nicolas Dorier was running a full node in C# and he used testnet to make sure it worked. Both testnet and mainnet of course.

Testnet reset?

https://github.com/bitcoin/bitcoin/issues/19666

MF: Given it has been so long since we reset the testnet (we haven’t reset it since 2012) there has been discussion popping up at various points on whether there should be a testnet reset. I thought about it and I was thinking that perhaps we don’t want to reset the testnet until we’ve worked out how effective Signet is and whether Signet solves all the problems. If it does solve all the problems it doesn’t really matter. If we were to reset testnet say tomorrow and everyone moved to testnet there wouldn’t be any problems, because obviously it was reset and you are not inheriting all the problems of years of testnet. Then perhaps people wouldn’t be using Signet. So maybe we do need a testnet reset but we should do it once we’ve worked out whether Signet solves all our testing needs.

AT: I don’t think a reset would make testnet all that much more useful. One of the things that stops it being useful for testing Lightning is that you get these huge variations in the number of blocks per hour or day. The locktime restrictions that Lightning uses are there to make sure that you have time to publish your revocation transactions and punish people for doing the wrong thing. You have got to have actual time for that. If thousands of blocks get found in a minute then you don’t have time for that. If there are huge re-orgs, a reset won’t fix that. Even if it was reset tomorrow and worked fine for what it is I still think that leaves Signet as a useful thing to have.

MF: A testnet reset is certainly not going to mean that Signet isn’t required. Lightning is obviously a case where Signet is going to be clearly miles better than testnet for the reasons you just said. But for a testing use case like testing a new ASIC or testing IBD or testing P2P stuff perhaps you do want the craziness that happens on testnet to stress test your ASIC or stress test the latest IBD change in a way that perhaps it would be difficult to replicate that craziness on Signet.

AT: For testing an ASIC Signet is not all that useful because you have got to do the signing as well as the proof of work. You’ve got a whole different setup than you have for mainnet for that. Whereas testnet is almost exactly the same, just point an ASIC at it and you’re fine. I don’t know how much the craziness is useful for anything realistic you are testing. Putting the craziness in a regtest thing so you can have a couple of hundred blocks that demonstrate the craziness, putting it in a functional test and having it run on every commit, seems a lot better than having to sync testnet and run against that for changes you might make.

MF: That is an argument for resetting testnet? For those testing use cases such as testing a new ASIC.

AT: If you want to test an ASIC against testnet you have got to sync it first. If you get bored of it and don’t sync it all the way then you risk doing a re-org which then interferes with everyone else. There is an argument on its own for resetting testnet every two years or on a schedule rather than every 8 or 10 years or whatever. I don’t know if that is a strong argument.

MF: I’m not sure what you are saying. Are you saying it should be regular every 2, 3 years or are you saying it is not needed at all?

AT: I don’t think it is needed at all. If you run a testnet node, once you get your new ASIC you point it at your testnet node and you are instantly mining new testnet blocks. I don’t think you need to do anything special to support that use case. If you wanted to make it so that you just go from scratch, plug in your new Raspberry Pi to run your testnet node, plug in your new ASIC and have it all working in 20 minutes, then that is an argument for having it reset maybe.

KA: I still don’t even know why testnet would exist if Signet exists to be honest. If you have an ASIC and you want to try it out why don’t you try to get money? Why don’t you point it at mainnet. I don’t see why you wouldn’t do that. I don’t think you are going to break mainnet if you try out your new ASIC on mainnet.

MF: You are testing the firmware and you want to make sure that it is working properly before you direct loads of hash power towards the main chain?

KA: Yeah maybe. I would just try it on mainnet myself probably.

MF: I see what you mean. There is nothing to lose.

KA: You mine a block. Oh no.

MF: But there are other use cases like changes to Bitcoin Core mining or P2P code. If you wanted to test changes to that before pushing them out on mainnet, does it make sense as a completely open, crazy, chaotic staging ground to test it first before you start pushing it out to users.

KA: I guess. If you have mining software you want to try it first maybe.

E: If you want to try to do solo mining it is way too hard to do that on mainnet. You are trying out some pool software for pool mining and you really want to try to find a few blocks. That is for sure much easier on testnet.

KA: You could start a new custom Signet. You can mine there.

MF: It is very hard to simulate that craziness on Signet. The whole point of Signet is that you are trying to prevent that craziness. You are putting constraints in there so that that craziness doesn’t happen. If you actually want that craziness I don’t know how much sense it would make to try to simulate that craziness either on Signet or to start a new Signet with craziness on it. Nicolas Dorier says testnet should be replaced by a Signet. Nicolas is in your camp Kalle. He doesn’t see the need for a new testnet. Testnet version 1 and 2, we don’t know if they are syncable anymore. At some point we would assume that testnet v3 won’t be syncable anymore for the issues that v1 and v2 had. If there are any use cases then we should think about a testnet v4.

KA: The link you showed where there were issues and clients were giving different chains. That would be a reason to reset but I don’t think we’ve seen that on testnet3 at this point. I don’t think testnet3 is giving different chains for different clients right now.

AT: Testnet3 was giving different chains at a point when some of the miners were trying out one of the block size increase BIPs. Some of the miners mined with whichever bit it was to lock it in. Then still on that chain someone else mined a too small block or something that was invalid. I think a whole bunch of those nodes got stalled. I think there was some sort of re-org that avoided that at some point, I’m not really sure. For a while there were definitely two chains on testnet.

MF: I would guess if you are allowing more functionality on testnet you are not going through the same rigor and the same processes that you do on mainnet in terms of soft forks and discussing the Taproot soft fork for years. I would think as you’ve got more freedom and more flexibility there are going to be those problems. Especially if you start reverting things, making things more restrictive and then allowing things that aren’t on mainnet. I haven’t really followed testnet closely so I don’t know whether there have been lots of changes.

KA: I think I synced testnet one time several years ago. Then I couldn’t figure out how to get any coins. I was like “What do I do now?”

MF: I don’t think many people are monitoring it for obvious reasons. Because it has no value. Most people are focusing on mainnet.

testnet4

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017031.html

MF: Peter Todd, back when they were discussing a reset of testnet, put forward the view that testnet should be a large blockchain. That motivation that AJ said earlier about wanting to reset the testnet was just because it gets too big, it takes too long to sync, IBD takes too long. Peter is saying in this mailing list post that you actually want it to be similar size to mainnet if not bigger. Let’s say the block size, block weight discussion starts up again in 5 years, you want to know what is possible before there start to be performance issues across the network. Perhaps you do want testnet to be really big for those use cases, to experiment with a massive chain prior to thinking about whether we should make block sizes smaller or bigger or whatever.

KA: I agree.

MF: Perhaps that’s an argument for not doing a reset. Or at least doing a reset but with a massive chain included in that reset. I don’t know.

AT: That would make sense to a point but if you just want to test random wallet and application software I don’t think you want to download an even bigger chain than mainnet to have to do that. Not when we are having problems with the IBD performance of mainnet in the first place. I thought some of the experiments they did with Bcash with their gigamega blocks make more sense there. You just have some separate offline test to check that performance rather than inflicting it on everyone before it is ready.

MF: This is where some of the use cases would make sense to be on Signet. You just leave testnet for that experimentation with massive chains and long IBDs and P2P issues. Maybe you leave testnet for those use cases, and for all the other use cases, such as the wallet one you put forward, you’d use Signet. I am certainly not saying testnet fulfills everything that Signet can do. Signet is going to be better in the vast majority of cases. It is just whether we need that testnet, whether we need a testnet reset and what applications or what use cases are going to be tested on it in future.

KA: I think there is an incentive misalignment with miners mining on testnet. They are mining but there is really no reason for them to mine. They are wasting energy and money, electricity bills to create blocks that are supposed to be worth nothing. I think Signet will solve that problem. People who want to mine testnet coins for whatever reason, I have met several of them. They have been very concerned when I mentioned Signet because they think testnet is going to go away. But it is not going to go away. If miners mine testnet it is going to stay there. Even if I wanted to remove testnet I couldn’t by the very nature of how it is set up.

Richard Bondi workshop at Austin Bitcoin Devs on regtest

https://diyhpl.us/wiki/transcripts/austin-bitcoin-developers/2018-08-17-richard-bondi-bitcoin-cli-regtest/

MF: Let’s move onto regtest then. What do you know about regtest? How is it different to testnet? How does it work?

E: There is no proof of work. Most of the time you run it without any network connections. You simply type in a command and get a block or get 100 blocks, whatever you need for your test case. It is not like a network, it is more like you are doing it on your own node.

KA: It is like a fancy unit testing network. You can’t use it with people but it is perfect for using on your own computer to try stuff out.

MF: You are not setting up a network beyond your own machine. For some of those use cases we were thinking about in terms of mining or different ASICs competing with each other or P2P stuff, you are just not able to test them on regtest. Regtest is good for small changes that aren’t network level changes. They are just small things you want to experiment with locally.

KA: Taproot is a great example. You can create an address and spend it to yourself on regtest. But having interactions between multiple people is probably harder to do on regtest. Signet will help.

MF: This item on the reading list, this was really good, I think this is still online. Richard Bondi did a regtest workshop at Austin Bitcoin Devs. This was getting set up on your laptop, generating blocks. Obviously there is no proof of work so you can just use a generate command to produce a block and to get the block reward. You don’t need any hash power to generate a block. There is no competition. Does it have the same genesis block as testnet or is there just an empty genesis block?

KA: They are different.

MF: I would recommend that, that’s a really good workshop if you want to play around with regtest and do small changes locally. There is also a tutorial from Bisq.
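
For reference, a minimal regtest sketch along the lines of that workshop. Note that newer Bitcoin Core releases may require creating a wallet explicitly before the wallet RPCs work:

```
# start an isolated regtest node (no external network, trivial proof of work)
bitcoind -regtest -daemon

# newer releases do not create a default wallet automatically
bitcoin-cli -regtest createwallet "test"

# mine 101 blocks to our own address so the first coinbase matures and is spendable
ADDR=$(bitcoin-cli -regtest getnewaddress)
bitcoin-cli -regtest generatetoaddress 101 "$ADDR"
bitcoin-cli -regtest getbalance
```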

PR 17032 (closed) Tests: Chainparams: Make regtest almost fully customizable

https://github.com/bitcoin/bitcoin/pull/17032

MF: There was this PR that jtimon opened back in 2019 in terms of making regtest fully customizable. This is moving into Signet territory. I think you commented Kalle on why this isn’t a good idea. The security of opening up regtest to external nodes is not the same security as you’d get opening nodes up to other external nodes on a Signet network.

KA: I have very little experience with all of the altcoins out there. One of the big problems I noticed when I looked at one is that miners are automating mining on the lowest difficulty chains all the time. You have an altcoin that has a value. You will have miners coming regularly, mining a tonne, getting the difficulty up to whatever makes it economically unviable for them, and then they’d move onto the next altcoin. If you had a regtest with mining, without some kind of other security check like the signature in Signet, they would have that problem there as well, griefing and whatever.

MF: It is a mining issue. Your quote here Kalle is “You can in practice start up a regtest and addnode people all over the world to create a network though it would be completely vulnerable.” Are there any additional P2P security stuff you would have to worry about if you were to open up a regtest to external nodes? Is there anything additional that Signet provides?

KA: I can do a one line thing where I delete your entire chain for you on your regtest chain. I can just erase it anytime I want. I just invalidate your block 1 and I mine a bunch of regtest blocks until my chain is bigger than yours and then you are going to switch over to mine. You have to invalidate my block and then you may go back to yours. You are going to have this back and forth because I can make blocks for free.

MF: That is different to how I thought it worked. I thought it was immutable. Once you’ve generated a block that was a block in the chain. But no. How do you ditch a block that someone has generated?

KA: You just type invalidateblock and the block hash and hit Enter. Then Bitcoin Core will throw that away and all of the other blocks after it. If you generate a block after doing that you are going to generate a block on the block before it. If I invalidate block 1 in your chain then my node will mine on top of the genesis block. It will mine the first block, second, third, fourth. If you have 100 blocks and I mine 101 blocks your chain is going to see my 101 blocks and say “This is the longer chain. Let’s go there.”
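
A minimal sketch of the “free re-org” Kalle is describing, assuming an attacking regtest node that is connected to the victim’s node:

```
# on the attacker's node: mark the victim chain's first block invalid
HASH=$(bitcoin-cli -regtest getblockhash 1)
bitcoin-cli -regtest invalidateblock "$HASH"

# mine a longer replacement chain on top of the genesis block; regtest blocks
# cost essentially nothing, so peers re-org onto it as soon as it is longer
bitcoin-cli -regtest generatetoaddress 150 "$(bitcoin-cli -regtest getnewaddress)"
```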

MF: That is a massive problem. That is something I didn’t realize. That is probably the strongest motivation I have heard for Signet. I was already convinced of the need for Signet. Do you remember how this PR worked in terms of making regtest fully customizable? Is jtimon introducing proof of work?

KA: He has multiple versions of this I think. He was working alongside me when I was doing Signet in the beginning. He was an Elements developer, the Blockstream project. I think he worked to put my Signet stuff into the Elements stuff. I think this is the one he did after I did Signet. His idea was instead of having Signet as a set of parameters with a hard coded network you would just use regtest and customize it to say “We need a signature for the block to be valid.” I think ultimately it was a lot of code for something that didn’t seem like we were going to use a whole lot.

MF: I suppose it depends on whether it is one Signet that everyone uses as a drop-in replacement for testnet or whether there are multiple Signets. If there was to be only one Signet then maybe it makes sense to try to experiment with different types of customizable regtests because that one Signet doesn’t satisfy everyone’s needs or desires in terms of how it is set up.

KA: Let’s say you take Jeremy Rubin’s CHECKTEMPLATEVERIFY idea and you make a pull request for this new feature. Then you also change the Signet default values to be a new custom Signet that you make which enables the CHECKTEMPLATEVERIFY code. If you do that people who download your pull request from Bitcoin Core, compile it and run with -signet they would immediately be on your custom chain. No changes done, they don’t need to do any customization at all. They are just on your chain. They can now try CHECKTEMPLATEVERIFY stuff, they will validate it and they will do all the things that a Core node would do normally. I think people are against that idea a little. People would rather have one Signet with all bells and whistles turned on and then people can pick which soft fork they want to use. I think the jtimon idea was in the same vein but a bit complex. More complex than what was needed.

MF: There is a question on the YouTube. Lukas asks wouldn’t the difficulty be way lower on testnet, allowing you to test if your software can indeed find a block? In comparison to mainnet that is obviously the case. Because there is no value on testnet the difficulty is obviously going to be lower. It is just that it varies hugely, but from a much lower base.

KA: Or you could join a pool. If you join a pool, even if the difficulty is relatively high you are still going to get shares with your miner if it is working correctly. If you get shares you are going to get money. You can do that or you can use testnet: one if you want money, the other if you don’t.

MF: This was the discussion earlier. The worst case is your mining hardware, ASIC doesn’t work and you don’t get anything. That’s exactly the same case as if you are mining on testnet. You might as well throw it on mainnet and even if it doesn’t work you are not losing anything from not being on testnet.

What are the key differences between regtest and Signet?

https://bitcoin.stackexchange.com/questions/89640/what-are-the-key-differences-between-regtest-and-the-proposed-signet

MF: Let’s move onto Signet. What do you know about Signet, how Signet works?

E: I only heard about it and as far as I know instead of mining you sign the blocks. Other than that I don’t know how it works.

MF: There were a couple of good quotes that I put on the reading list from Bitcoin Optech. I am going to read them out. On the Bitcoin Optech Signet topics page “The signet protocol allows creating testnets where all valid new blocks must be signed by a centralized party. Although this centralization would be antithetical to Bitcoin, it’s ideal for a testnet where testers sometimes want to create a disruptive scenario (such as a chain reorganization) and other times just want a stable platform to use for testing software interoperation. On Bitcoin’s existing testnet, reorgs and other disruptions can occur frequently and for prolonged lengths of time, making regular testing impractical.” The operators of the signet can “control the rate of block production and the frequency and magnitude of block chain reorganizations.” This avoids “commonly encountered problems on testnet such as too-fast block production, too-slow block production, and reorganizations involving thousands of blocks.” Any other high level things in terms of how Signet works? You have still got the proof of work but you can only successfully mine a block if it is signed by I think it is just you at the moment Kalle? Or is it currently set up to be a 1-of-2 between you and AJ?

KA: It is me and AJ right now as of two days ago.

MF: That means that if I mine a block at the latest difficulty I need to send that block to either you or AJ to provide a signature before it can go on the chain?

KA: Yes exactly.

MF: There is no way of you or AJ or both giving your signature in advance for someone to start mining. It is going to be “Kalle I’ve got a new block” “Ok I’ll sign it.” “Kalle I’ve got a new block” “I’ll sign it”. It is not a “Kalle can I be a miner please?” and then Kalle goes “Ok fine.”

KA: Yes the signature commits to the block itself. So you have to have the block before you can create the signature.

Signet on bitcoin-dev mailing list (March 2019)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html

MF: This is the announcement on the dev mailing list. Let’s have this discussion then on having one Signet which everyone uses or having multiple Signets. The problem that I foresee if there is just one is that then you get into the same issue that you do on mainnet where it is like “I want my change on Signet.” Jeremy Rubin comes and says “I want CHECKTEMPLATEVERIFY” and you go “Right fine.” Then someone else comes along with a change that no one is particularly interested in or everyone thinks is a bad idea. “I want this on Signet.” Then you are in the same situation that we are in on mainnet where it is very hard to get new features onto the Signet. At least with you and AJ as the signers, it needs you or AJ to sign off that a change is worthy of experimentation on Signet. Would you agree that is one of the downsides to having one Signet that everybody uses?

KA: Yes

MF: Any other downsides to having one Signet?

AT: I thought your example of having a really huge testnet so we can test how things work when we’ve got way too much data is the perfect example of why you want a completely specialized chain that some people use for their purposes but most people don’t want to use because it is too big for no good reason. The way Signet is implemented as is, it lets people do custom ones, which I think makes a lot of sense.

MF: You certainly want to allow people to do it. Maybe it will end up that there is one Signet that everybody uses most of the time and then if you want to go off and do something crazy that most of the network isn’t interested in then you do your own Signet. You will have one with a network effect and lots of other splintered, fragmented Signets.

Enabling soft forks on Signet

https://bitcoin.stackexchange.com/questions/98642/can-we-experiment-on-signet-with-multiple-proposed-soft-forks-whilst-maintaining

KA: I think so. There is the default one, I call it the default Signet. If you don’t provide a custom block signing script or anything. There is something you can do even if you only have one. I know you were talking about the downsides of having one. Your example of Jeremy Rubin’s CHECKTEMPLATEVERIFY and Taproot. I could see how the default Signet would have both of those turned on. People who were running Signet could pick one. As long as the miner accepts the transactions then the users can choose not to care about specific soft forks. I can say “I don’t want to care about CHECKTEMPLATEVERIFY stuff. I just want to accept it. I don’t care.” It is like OP_TRUE. That’s how it works because it is a soft fork. The only thing is there is peer to peer stuff.

MF: You are saying because people obviously care a lot more what goes on mainnet than goes on Signet they would be open to a lot of those OP_TRUEs on Signet. They don’t really care because it is Signet.

KA: If I am Pieter Wuille and I don’t care about CHECKTEMPLATEVERIFY. I want to make sure that the Taproot stuff works. I tell my Signet node to only care about Taproot and not care about CHECKTEMPLATEVERIFY. Even if the network accepts both I can pick one if I want to. It is a minor detail so it is not a big deal. It is one of the things that is cute about Signet. The miner has both soft forks enabled but the user can choose not to enable them if they want to.

MF: That makes it difficult. There is a reason why you can’t do that on mainnet because you can’t verify the chain… Let’s say I wanted to have SegWit disabled on Signet. I’d have a version of Bitcoin Core pre-SegWit. Then I experiment with Jeremy Rubin’s CHECKTEMPLATEVERIFY without SegWit. Then you are not going to be able to verify the chain. You are going to have some of the problems that we’ve seen on previous testnet resets where people are doing different things and activating different things. People struggling to verify the blocks.

KA: You can definitely not do that on mainnet. The reason why you can do it on Signet is because you trust the miner. The miner signs off the block so you know it is valid.

AT: If you were trying to disable SegWit on mainnet, then if you tried to spend someone else’s SegWit coins without doing the SegWit stuff and miners accepted that, anyone could do it. If you did that on mainnet anyone can mine as long as they dedicate the electricity to it. Then you can get different forks. But on Signet only a couple of people can mine. You can create those transactions but they will never actually make it anywhere. As long as you’ve got all the miners not doing anything that would violate the soft forks then you can treat them as enabled for as long as you like until that changes. The difference in the security model, having only a few signers where you can’t have an attacker mine blocks successfully, changes the scenario a bit.

MF: You still want to be able to verify the chain. You still want to be able to do the IBD process even though Kalle or AJ or both have signed off on blocks into the chain. You still want to verify that all the transactions in those blocks are valid in certain use cases.

KA: You can still do it. The only thing is your node is going to see a mined block and it is going to say “This block contains a transaction that I don’t understand” which in Bitcoin land means accept. That’s in mainnet as well. If someone makes a transaction with SegWit version 1 and it gets into a block or SegWit version 58 and it gets into a block everyone is going to accept it.

AT: There is no SegWit version 58, it only goes up to 16.

KA: Ok, sorry.

MF: Certainly in a hard fork case, let’s say someone wanted to hard fork Signet you and AJ would just not allow that. There would not be a hard fork on Signet.

AT: If you were willing to do a hard fork you could make the hard fork change the people who sign the blocks. Once you bring hard forks into it anything can happen.

MF: I was thinking before this that it would be a problem people coming with their own proposed soft fork but it sounds like with OP_TRUE type stuff it wouldn’t.

AT: If you are coming with your own soft fork you still have to get the signatories to mine blocks that will include those transactions. If you started mining CTV blocks then the miners don’t have the code to verify that and they would consider the transactions non-standard and just ignore them. Same with the Taproot transaction right now.

MF: Do you see there being a BIP like process, as part of the soft fork discussion there being an application to Kalle, AJ, whoever else is signing blocks to have their soft fork on Signet?

AT: I am playing around with code to make that as straightforward as possible. We’ve got a Taproot PR in the Bitcoin repository at the moment. The last modification date of that is a couple of days ago. I think what makes sense is to say that if you merge this code, once blocks have passed that modification date and the user passes an -enabletaproot flag or something then they will relay Taproot transactions to anyone else that has also got that flag set. If there happens to be a miner on the network that is willing to mine those transactions then they will get mined. I think that is all feasible and would be compatible with doing Taproot and CTV and everything else and work with custom Signets without requiring lots of configuration. Still a work in progress.

KA: To answer your question on an application to get your soft fork in, ultimately Signet is going to be based on Bitcoin Core. I am hoping other implementations and wallets are going to support it. The way that I see it is if something is merged into Bitcoin Core, into the master repository, that is going to trickle down into Signet. Maybe there is a Signet branch on Bitcoin Core, I don’t know. That would be automatically pulled and built and run by the miners somehow. I don’t think AJ and I would sit and take applications or anything. We don’t even know if we are going to be the maintainers or run the miners in the future. It could be anyone. It is more going to be we have a codebase where we have a pull request, the pull request gets merged and after merging it goes into a Signet phase or something. Kind of like how they had the SegWit testnet for a while.

MF: There are definitely trade-offs for both approaches. There is a downside to having that permissioned “Kalle, AJ please enable my soft fork” type stuff. But that does bring all the benefits of Signet with a network effect so that everyone is using that same Signet. If everyone was just splitting off and using their own Signet with their own soft fork on it and no one else is on their Signet then there is not much point. You are almost just on a local regtest like network. You are not getting the benefits of a public network.

KA: If you have a soft fork you want to try out starting your own custom Signet and asking people to join is probably the most straightforward thing to do.
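
As a hedged sketch of what joining such a custom Signet could look like once the Signet options are in Bitcoin Core; the challenge script and node address below are placeholders, not a real network:

```
# bitcoin.conf for a hypothetical custom signet
signet=1

[signet]
# the block signing challenge (an output script in hex) that defines this signet;
# "51" is just OP_TRUE, used here as a placeholder
signetchallenge=51
# a known node on that custom signet (placeholder address)
addnode=signet.example.org:38333
```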

MF: One thing I did see, on testnet you can do non-standard scripts. Some of those opcodes that are disabled on Core you can actually use on testnet but your plan is to not have them enabled on Signet. What was the reason for that?

KA: Ultimately people want to make sure that whatever they are doing is going to work with real money. I think it makes more sense to be close to the things that are possible on mainnet and not accept things that won’t be accepted in Bitcoin. That’s the only reason. It is supposed to be as close to mainnet as possible.

MF: At your workshop at Advancing Bitcoin, the hope is that we have Taproot on Signet before we have Taproot on mainnet. Otherwise there is not much point. It does have to make jumps ahead of mainnet so that you can test stuff but perhaps the bar for getting on Signet is where Taproot is now rather than a soft fork that comes out of nowhere and doesn’t have the discussion, review, the BIP, code, a PR to Bitcoin Core etc.

KA: There are several issues that you have to tackle when you are trying to test out new features with the Bitcoin setup. Once you turn it on if you find a bug do we reset the chain now? I think AJ has been working on various ideas to make it so that you can turn it on but then you say “Oops we didn’t actually turn it on. That was fake. Ignore that stuff.” Then you turn it on again later. So you can test out different stages of a feature even early stages. You can turn it on, see what happens and then selectively turn them off after you realize things were broken.

AT: The problem is if you turned Taproot on two weeks ago all the signatures for that would’ve been with the square version of R. If we change it to the even version of R they’d be invalid if you tried running the future Taproot rules against them. I think you can make that work as long as you keep updating the activation date of Taproot so that all those transactions, which weren’t actually Taproot transactions at all, they were pretend OP_TRUE sort of things, don’t need to have any validation code run over them anymore. You only have to do the Taproot rules from this block instead of that previous block. Going back to the current version of Signet, that does allow you to accept non-standard transactions.

KA: I don’t think so.

AT: I don’t think we will ever actually mine it but the command line is TestChain which is set to TRUE for Signet at the moment I think.

KA: RequireStandard is TRUE for Signet. It is in chainparams.

AT: It is just using IsTestChain isn’t it?

KA: If testing is TRUE but RequireStandard is also TRUE.

AT: Isn’t fRequireStandard a default?

KA: It is set to TRUE anyway.

AT: I think it gets overridden in init.cpp if you do the command line parameter. I think this because I claimed all the empty things that I mined. I don’t think I had to change the code to do that.

KA: I’ll have to look at it later.

Keeping multiple Signets separate

MF: There are going to be other Signets. Even if we didn’t want them there are going to be other Signets. There are reasons for having other Signets as we’ve explored. How do you separate the different Signets so they don’t start interfering with each other? There is network magic that you can set so there are different Signets. You can have different addresses beginning with sb1 and sb2. What else do you need to make sure they don’t impact each other?

KA: The network magic, if you change the magic the nodes aren’t going to talk to each other. All the network messages include the magic so they are not going to communicate. You are pretty good there as long as the magic is different. With Signet the genesis block is the same for all of them. A lot of software hardcodes all of the genesis blocks for all the chains that they support. It is hard to have a dynamic genesis block, it is harder to implement for people. The genesis block is the same for everyone but the magic changes. It should be enough but you can have collisions.

Bitcoin Core PR 16630 (closed) to skip genesis block POW check

https://github.com/bitcoin/bitcoin/pull/16630

MF: On that genesis block, there was a PR to skip the genesis block proof of work check on Core mainnet. Can you explain what the issue is? This PR wasn’t merged but why would we want to change the genesis proof of work check on mainnet? And why do we want different genesis blocks on either Signet or multiple Signets?

KA: I am not entirely sure why the genesis block’s proof of work check was added before. It makes sense in a way to not check it because it is hardcoded. You literally put the genesis block values in there yourself. Why would you go and check them? In the case of Signet, because Signet has proof of work and the genesis block has to follow the proof of work rules if you have proof of work enabled, whenever you created a new Signet before you had to also mine the genesis block. There was an entire grindblock command inside the Signet proposal that would turn your genesis challenge into a block with a nonce and everything. This was just more work for everyone. You’d have to have this nonce, you’d have to say this Signet genesis nonce is 1,325,000. That was when the genesis block was dynamically generated for each Signet which is not the case anymore. I don’t have any problem with proof of work being tested now.

MF: What is the argument for different Signets having different genesis blocks? Is that just an extra separation factor?

KA: The argument for is, say you want to try out Lightning between multiple chains and you want to spin up a couple of Signets. You want to try them out. If they have the same genesis block c-lightning is not going to be able to use them. They use the genesis block to define the chain. But the argument against that is you can just go in and change the code. You can change the genesis block if you want to try two. It is not hard to do. I know that was one of the reasons why Jorge (jtimon) wanted to have the genesis block be different for each Signet because he was working on multi asset Lightning stuff.

MF: I hadn’t thought much about Lightning in terms of what you’d need on Signet for Lightning. There are quite a few things where Lightning influences the design of Signet.

KA: I have pull requests to make it possible to do Lightning stuff.

MF: I am just going to highlight some good resources on Signet for the video’s sake. There’s your mailing list post initially announcing Signet. Then there’s the Signet wiki which talks about some of the reasons why you’d want to run Signet and some of the differences between mainnet and Signet. I think we’ve gone through a few of these.

“All signets use the same hardcoded genesis block (block 0) but independent signets can be differentiated by their network magic (message start bytes). In the updated protocol, the message start bytes are the first four bytes of a hash digest of the network’s challenge script (the script used to determine whether a block has a valid signature). The change was motivated by a desire to simplify the development of applications that want to use multiple signets but which need to call libraries that hardcode the genesis block for the networks they support.” Bitcoin Optech

Kalle Alm at Advancing Bitcoin (presentation and workshop)

https://diyhpl.us/wiki/transcripts/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/

https://diyhpl.us/wiki/transcripts/advancing-bitcoin/2020/2020-02-07-kalle-alm-signet-workshop/

MF: Then there’s Kalle’s talk at Advancing Bitcoin where you have the various elevator pitches. Steps to integrate Signet testing. c-lightning has the PR merged. Once Signet is merged into Core you will be able to do c-lightning experimentation on Signet. The Lightning implementations don’t need to wait for it to be merged into Core before getting it set up on their Lightning implementation. They can just start using it whenever it is merged into Core or even before. They could start using it now. Then in the workshop you had the participants set up a Signet to experiment with Taproot. Because currently Signet isn’t merged into Core they had to merge sipa’s Taproot branch with the Signet branch. Once Signet is merged into Core and certainly when Taproot is merged into Core then it will be very easy. A lot of that fiddling goes away.
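
Roughly what that fiddling looks like in practice, as a sketch; the Taproot branch and remote names below are placeholders for whatever sipa publishes:

```
# check out the open Signet PR (18267) on top of Bitcoin Core
git clone https://github.com/bitcoin/bitcoin.git
cd bitcoin
git fetch origin pull/18267/head:signet
git checkout signet

# merge a Taproot branch on top (placeholder remote/branch names)
git remote add sipa https://github.com/sipa/bitcoin.git
git fetch sipa
git merge sipa/taproot    # resolve conflicts if any, then build as usual
```

Once both are merged into Core this should all reduce to building master and starting the node with the -signet flag.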

KA: If Signet is merged, even if Taproot is not merged then as soon as Pieter Wuille updates the Taproot pull request then you can just grab that and you would have Signet in it.

MF: In terms of the Signet faucet, you have one set up. I used it yesterday and I was successful the first time. Then I tried to get some more. I think the message was “Nuh-uh”. Perhaps the error message needs to be improved. How is that set up to prevent people from draining that faucet?

KA: It is one time per IP per day.

MF: I managed to get some later that day again. Maybe I was using a different IP. That stops it being drained. Other people can also set up their own faucets. They just need to either get some Signet from your faucet or they would need to mine a block on Signet. Then they can set up their own faucet and people can get Signet from their faucet rather than your faucet.

KA: If they have the private key to sign a block.

MF: Or they mine a block on Signet that you sign off.

KA: Right.

MF: There is a Signet explorer. I think this is based on the open source Blockstream Esplora explorer. A normal block explorer for Signet. There are currently Taproot transactions on this default Signet? No there are no Taproot transactions currently on the Signet blockchain.

KA: No

Activating Taproot on the default Signet

MF: What needs to be done to get a Taproot transaction on the Signet chain? Is it just the case of the merging that we were discussing?

KA: That is the easiest way I think. I think the other way, you merge the Signet and Taproot together again and then start up a temporary Signet. Then you can use that to do Taproot stuff today. We did it in February at Advancing Bitcoin. I know Stepan Snigirev even used it for his hardware wallet demo.

MF: There would need to be an activation like discussion on when to activate Taproot on the default Signet?

KA: Yes.

AT: There has to be some indication of at what height to start enforcing the rules which gets into some complexity. The complexity is ultimately that if there has to be changes made to the Taproot code before it is rolled out on mainnet then the old transactions might no longer satisfy the new rules and the new transactions that satisfy the new rules might not satisfy the old rules. What that adds up to is if you have enabled Taproot on your Signet server and the rules change underneath you then it is effectively like a hard fork. You have to upgrade or your view of the chain stalls. I think that is probably going to be fine for testing as long as we can get the user experience worked out so that if you enable Taproot you put this command line in. But then you need to keep paying attention to see if there are any changes because then you will have to upgrade or everything will get broken. I am just poking at my Terminal to try to push the branch where I have merged the things on somewhere public.

BIP 325 PR (merged) to change signature scheme to be tx-based

https://github.com/bitcoin/bips/pull/947

MF: There was this change to the BIP recently and this was AJ again, changing the block signing algorithm to be based on tx validation instead of a custom method. Can you explain what is going on here and what the motivation is for changing it?

AT: The original design was that it took the block and extracted the place where the signature would go from it and then made a SHA256 of all that and signed that. Because that’s different to anything that Bitcoin has done before it needed some special signing code infrastructure to let that happen. That special signing code is exactly the sort of signing code that Craig Wright was using to fake all the keys for other stuff. You just pass a hash into it of the message you are claiming to sign but you don’t ever prove that you knew what the stuff was that hashed to it. That’s not a secure protocol. There were some complaints on the PR about that approach. I think there was a similar suggestion for the Bitcoin sign message stuff in that rather than having a new protocol for this we have already got a protocol that lets us sign transactions. Why not reuse that? That’s what this basically is. Instead of creating a new serialization for the block data and running SHA256 over it, it reconstructs it into a transaction like thing and then does the usual transaction signing stuff. After a good suggestion by Pieter it is now at the point where we can convert the block into a PSBT and then just run it through whatever PSBT tools we have so that we could even do a Signet that is signed by a hardware wallet. I think.

MF: This is moving into the conversation on what a malicious actor could do to try to screw up Signet.

AT: It is not really that far. It is more just how could a malicious actor mislead you if you didn’t understand all the details of what you thought was going on in the code. It wasn’t that there were bugs or an actual exploit or anything. It was just an architectural design sort of thing. The result of changing it means that we can then stick in other tools so it is a little bit more flexible in practice now I think.

MF: There is nothing glaring a malicious actor could do to screw up Signet other than trying to get one of your two signing keys?

KA: Even without the signing keys you can customize Bitcoin Core so that it will ignore the signing keys. I think it is pretty easy to modify it so that it will let you mine blocks only using proof of work. Those blocks would not be accepted obviously by the nodes. Light clients, people who are using the nodes and trust them, they can be fooled by this unless they have support for the signature verification part. One of the ideas that’s been floating around is to move the signature into the header of the block. This would mean that even light clients who only download the header, they can choose to also get the signature. Then they can do the bulk validation on their own. That’s something for the future. I am not sure when that would happen. That’s a potential attack vector I guess. You can mine blocks even if they are not valid. Some people will accept them.

MF: It is inevitable there are going to be certain attacks that aren’t possible on mainnet that are going to be possible on Signet because there is no value on it. There is no motivation to do some of the more hardcore attacks that might be tried on mainnet. There could be an eclipse-like attack where your node isn’t connected to Kalle’s or AJ’s on Signet and you don’t realize you are being fed Signet blocks that haven’t been signed by Kalle or AJ.

KA: If you are running Bitcoin Core node then you will check the signature and realize. If you are running a light client then someone could fool you for a while.

MF: I don’t think we need to worry too much about that with no money on the line. There is not much motivation for an attacker to do stuff like that I wouldn’t have thought.

Signet PRs in Bitcoin Core PR 16411 (closed) and PR 18267 (open)

https://github.com/bitcoin/bitcoin/pull/16411

https://github.com/bitcoin/bitcoin/pull/18267

MF: There was the initial PR that is closed (16411) and then there is the PR that is open (18267) and is currently being reviewed. You have been working on this for at least a year or two now Kalle, what big changes have happened at a high level over that time? There has been some restructuring of the PR to make it more reviewable and some commit reorganization, but what are the high level changes over the last year or two?

KA: There have not been a lot of changes that are high level in that sense. The thing that I realized is that it is really hard to get people to review the code. It was kind of big and it had a lot of stuff in it. I was optimistic for a while. But then I started splitting it into multiple pull requests and doing that let me get more traction. AJ joined in a couple of months ago. He has been really helpful in keeping me motivated and nudging other people to help out with review. I think mostly it is the nature of being a big change and it taking a while to move through the process and get it into the system. It is also about me generally being bad at getting reviewers.

MF: It is certainly one of the big challenges to get review on your PRs just observing all the PRs in Core at the moment. There does seem to be impetus now with Taproot. If we don’t get Signet up to test Taproot what is the point of Signet at all? It is the perfect opportunity. We have a proposed soft fork and we want to do testing before it could potentially activate and get deployed. It is split now. What is currently being reviewed is this PR 18267. There were a couple of steps afterwards. Once that is merged you want to do RPC tools and utility scripts.

KA: This PR 16411 (closed) here is kind of outdated. AJ’s stuff with the transaction based approach made a lot of this stuff obsolete. There are some utility scripts that I really want in there though. For example I have a script called getcoins which you run from the command line and it will connect to the faucet and get you coins. That for me was a big blocker for testnet when I started testing it out when I was new. How do I get coins? I don’t know. Now you just run a one liner in the command line and it gets you coins. Now you have coins to send to your buddy or whatever. There is other stuff as well. A lot of the mining stuff I don’t know if it is ever going to go into Core because you can just run custom stuff. The miners can have a different branch. I don’t know if any of that is going to go in to be honest.
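
A sketch of how that could look from the command line; the script name and location are placeholders for whatever ends up shipping with the PR:

```
# request coins from the default signet faucet (placeholder path and name for the utility script)
./contrib/signet/getcoins.py

# the coins should then show up in the local signet wallet
bitcoin-cli -signet getbalance
```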

MF: You mean it is important that you and AJ are running the right software but it doesn’t really matter if other people aren’t if they are not interested in verifying the chain?

KA: No, verifying is one thing. Mining, generating blocks and signing them and stuff. I don’t know if that has to be in Bitcoin Core. It would be useful if it was. People could start up their own chains but it doesn’t have to be for Signet to work.

Reviewing the Signet PR

“There is a tiny amount of risk involved as it touches consensus validation code, but this is a tiny code change. If for some reason someone managed to flip the “signet” flag on a running node, for example, that would cause those nodes to start rejecting bitcoin blocks. I don’t foresee this as very likely, personally.” Kalle Alm

MF: On reviewing this it does touch consensus but only marginally. The only thing to worry about in terms of touching consensus is someone could hack into your machine and turn the -signet flag on. Is that the only thing we’ve really got to worry about? We’ve obviously got to be careful with PRs that touch consensus. It is just those chain parameters.

KA: Everything is dependent on the signed blocks flag being on or not. Signet adds stuff to the consensus, it doesn’t remove anything really. You couldn’t fake Signet on a node and have it suddenly accept invalid blocks to fool you into thinking you had money or something.

MF: When I tried it, it all worked fine. Fee estimation, obviously there are no fees. Should it be enabled that you can send a Signet transaction without setting the fee? I got an error message when I didn’t set the fee.

KA: I get that on regtest too. When I start up a regtest node and I try to send a transaction. I get the same error, I have to do settxfee or something. I think it is an odd feature to be honest. I guess something should be done about it.

MF: It is just a small thing. It could be addressed in future. With fees, anything is going to be accepted. It could have zero fee but as long as you or AJ sign it off it can get in the chain.

KA: Other nodes are going to reject zero fee transactions I think. The Core nodes will have a minimum 1 satoshi per byte rule in policy.
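
As a workaround sketch, assuming the wallet RPCs behave on Signet the way they do on regtest, setting an explicit feerate at or above the default minimum relay feerate avoids that error:

```
# set an explicit feerate of 0.00001 BTC/kvB (1 sat/vbyte, the default minimum relay feerate)
bitcoin-cli -signet settxfee 0.00001

# the send then works without any fee estimation data (placeholder address and amount)
bitcoin-cli -signet sendtoaddress "<signet address>" 0.1
```

Starting the node with -fallbackfee=0.00001 is another way to get the same effect.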

MF: Fabian (Jahr) has got a Signet2 set up with ANYPREVOUT and Taproot. Have you been discussing that with him?

KA: I hadn’t heard of it.

MF: Maybe it is personal testing.

generatetoaddress doesn’t work on custom signets

https://github.com/bitcoin/bitcoin/pull/16411#issuecomment-522466194

MF: With regtest you can just do the generatetoaddress command because there are no requirements on generating a block. That doesn’t work for Signet obviously because it hasn’t got the signature from you or AJ. I think Nicolas Dorier said he’d like that to work but there is no way for that command to work unless it goes via you or AJ.

KA: What he is saying is he thinks it should work for me and AJ. We have our private key so we can sign the blocks. He thinks generatetoaddress should figure out that it is Signet and go and do the signing and all this stuff under the hood. Because then he can start a custom Signet on his side and he can generate blocks on demand in some kind of testing network or framework.

MF: With multiple Signets are you going to have different data directories, a signet1 data directory, a signet2 data directory? How is it going to be managed on the file system with different Signets?

KA: If you have multiple ones you are going to have to do what you do with regtest. You have to do -datadir=.
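A minimal sketch of running two custom Signets side by side, using the option names that later shipped in Bitcoin Core (the challenge hex values and directory paths are placeholders):

    # each signet gets its own datadir and its own block challenge script
    bitcoind -signet -signetchallenge=<challenge1-hex> -datadir=$HOME/.signet1 -daemon
    bitcoind -signet -signetchallenge=<challenge2-hex> -datadir=$HOME/.signet2 -daemon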

MF: In terms of block generation currently you have a script, I think you said at the workshop at Advancing Bitcoin, to do a block every ten minutes. It churns away on the proof of work for a minute, succeeds and then you have a pause. Is that going to be the block generation script going forward? Any thoughts on how that could change or would need to change?

KA: It seems to be working fine right now. If there are any problems we will change it. I don’t really know if there are going to be problems. The difficulty is very low. If you put a modern ASIC on it it would literally create a couple of hundred thousand blocks or something before it slowed down. How big of a deal is that? I don’t know. That could be a problem. If it is then we may have to increase the difficulty. Having an ASIC on a high profile Signet may make sense.

MF: What could go wrong? At the moment there is no hash power directed to it. Let’s say Signet is being widely used and there are a lot of people competing to mine blocks with your or AJ’s signature. Would you or AJ increase the difficulty if there were lots of mining going on manually? Tweak it like that?

KA: The Signet is working exactly like mainnet in terms of proof of work changes. It looks at the last 2016 blocks and if they were mined in 10 minute intervals it keeps the difficulty. If we realized that we had to increase the difficulty we would put more hash power on Signet and then it would gradually increase. The reason I am doing the 9 minute pause right now is because I don’t see a reason for my CPU to be spending time on mining the entire time. If I mine the entire time then the difficulty is going to go up until we hit 10 minutes per block. Then my CPU is going to sit there and sweat. But that could change if we need more security beyond the signatures. Someone could spam a bunch of block headers that are valid.

MF: The difficulty is only adjusted every two weeks on mainnet. There is still the possibility that blocks get found really quickly if there was loads of hash power directed to the Signet chain. The signature just makes sure that blocks aren’t found too fast.

KA: The signature makes sure no one else can mine. If everyone could mine…

MF: Two people could submit a block to you. You only accept one of them because you don’t want to accept two blocks within a short period of time.

KA: I don’t think people would be sending blocks to me. I think they would make blocks and they would talk to my node through the network. My node would get transactions like normal in mainnet and it would mine them, mine transactions into blocks.

MF: If I have an ASIC and I am mining and I find a block, I need to get your signature.

KA: I don’t think it is going to work that way. I don’t think you are going to mine blocks. You are going to put transactions out in the network and eventually they are going to be mined just like in mainnet. If you do try to mine then yes you are going to have to have a signature in your block for it to be valid. Even if you did mine you can’t get it on the network.

AT: The code also works the other way at the moment because of where the signature is placed. You have to construct the block first which is all the transactions. Then you have to get it signed and only then do you generate the proof of work after that.

MF: You are only starting the proof of work once you’ve got the signature. If loads of people are asking for that signature to start mining perhaps you wouldn’t give it to everyone in case they all find blocks within 10 minutes?

KA: If we were to do that then only one of them would be valid because they would all be mining on the same block. There would be one chain and they would mine on top of it. Even if all of them found a block there would only be a one height change anyway. Ultimately we don’t get blocks from people. We just get transactions over the P2P network like in mainnet. Then our nodes put them into blocks. Like when you do generatetoaddress, the same thing.

MF: If a block is mined and then I get your signature and I find a block 30 seconds later, do you only announce that block after 10 minutes and essentially put a nine and a half minute delay on it?

KA: No. If we have a signature in a block then we just put it out there.

MF: There can be blocks every minute or two. There is still going to be that variability just not testnet level variability. Mainnet variability basically.

KA: Right now it is not going to be that way because I am putting a 9 minute sleep between every block and I am creating all the blocks.

MF: No one is mining.

KA: It is just me mining.

btcdeb

https://github.com/bitcoin-core/btcdeb

MF: Can you explain what btcdeb does? Why people should use it and why it is interesting. Then we’ll talk about enabling Taproot scripts on btcdeb.

KA: It is a toolset for Bitcoin related things. It is based on Bitcoin Core code so it has many of the classes defined in Bitcoin Core. They are also in btcdeb. It has several tools. The main one is btcdeb, you can hand it Bitcoin scripts and it will show you step by step what happens when you run those scripts. If you are learning Bitcoin it is something you can use to experiment with Script. It is even more useful if you hand it two transactions, one spending the other; then for a P2SH it will show you the redeem script and how the scriptPubKey and scriptSig interact with each other. Step by step how the system verifies that your transaction is valid or not valid. When Pieter Wuille opened the Taproot pull request I wrote a version of btcdeb that allows Taproot. Along with that I also made a tool called tap which you can use to both create Taproot, Tapscript UTXOs and spend them with multiple scripts. I am hoping that can be useful for people testing the Taproot stuff and finding bugs.
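For example, btcdeb can be handed a spending transaction and the transaction it spends, or a bare script plus initial stack items; the invocations below are a sketch based on the project README and may differ slightly between versions:

    # step through the script interaction of a spend: --tx is the spending tx, --txin the tx being spent
    btcdeb --tx=<spending-tx-hex> --txin=<spent-tx-hex>

    # or debug a raw script directly, supplying the initial stack (signature and pubkey here)
    btcdeb '[OP_DUP OP_HASH160 <pubkey-hash> OP_EQUALVERIFY OP_CHECKSIG]' <signature> <pubkey>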

MF: There is a worked through example where you can step through a script.

KA: The Taproot example is in the Taproot branch. That branch is outdated right now. I don’t think you can find that Taproot example right now.

MF: The example that you’ve got up in the documentation isn’t Taproot. You’d need the Taproot branch to be able to step through a Taproot script. One thing I did see was that you disabled Miniscript. You did have Miniscript enabled at one point. Was it an old version of Miniscript? What was the problem with Miniscript?

KA: I based it on this old conceptual version that Pieter Wuille or someone made. I didn’t realize until after I got it working that it was missing a bunch of new features. It was basically just a limping version of Miniscript that didn’t really do a lot. I decided to just rip it out and redo it. Also now that there is Minsc, that might be an even better approach to go from there. I don’t know. I have to look at what shesek has done and how it works. It is possible that it would be a Minsc debugger in btcdeb, I don’t know.

MF: The stack visual isn’t needed for Miniscript or Minsc. One of the cool things about btcdeb is stepping through the stack operations of the script. I suppose if it was integrated with Miniscript you’d get all the benefits of Miniscript with the benefits of being able to analyze and debug the script. That would be cool. Let’s wrap up then. Thank you very much Kalle, AJ, Emzy and everyone else on the YouTube. There was Kalle’s tweet on successfully funding and spending a Taproot script from January. Then your tweet an hour or two before this was upgrading btcdeb for the latest Taproot branch. With that branch you should be able to debug Taproot scripts.

KA: And create them.

MF: Awesome. Thank you. The video will be up, we’ll get a transcript up.

https://www.youtube.com/watch?v=b0AiucAuX3E

Pastebin of the resources discussed: https://pastebin.com/rAcXX9Tn

The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Intro

Michael Folkson (MF): This is a Socratic Seminar organized by London BitDevs. We have had a few in the past. We had a couple on BIP-Schnorr and BIP-Taproot that were really good with Pieter Wuille, Russell O’Connor and various other people joining those. Videos and transcripts for those are up. For those who haven’t attended a Socratic Seminar before this isn’t a presentation. Kalle (Alm) is on the call which is great but this isn’t a presentation from Kalle, this is a discussion. We have got a reading list up that I have shared in various places on Twitter and on the YouTube. We’ll be going through those links and that will be the structure of the discussion. We will start off from basics. Early on is a great time for people who don’t know too much about Signet to participate. It will get more technical and we will go into the implementation details later. We are livestreaming on YouTube. Questions, comments on the YouTube. We will be monitoring Twitter @LDNBitcoinDevs and ##ldnbitcoindevs on IRC. The topic today is Signet. As I said Kalle is here which is great. He knows more about Signet than probably anybody else on the planet.

Kalle Alm (KA): AJ (Towns) is starting to zoom away from me. I’m starting to be like “What are you doing now?”

MF: We start off with intros. We’ll start with you Kalle. What have you been working on in terms of Signet for the last couple of years?

KA: I am Kalle. I am living in Tokyo right now. I am working for DG Labs. I am doing mostly Bitcoin Core related things. I am also working on a debugger for Bitcoin Core called btcdeb. For the last few years Signet has been my top priority. I am excited. It is moving forward now and finally I am hoping to get Signet in place right in time for Taproot. If we can get those two in, add Taproot to Signet, that would be cool.

Emzy (E): I have only heard about Signet. I tried to use testnet for something and I see that we need Signet because testnet has so many problems. Mostly you can’t use it really, it is annoying.

AJ Towns (AT): I have previously been working at Xapo and am now at Paradigm, in both cases just working on random Bitcoin Core stuff. I have been involved in the Taproot stuff for a while now. I agree that testnet is terrible and really want to see some sort of live way of playing with new soft forks and consensus rules before we deploy them. Signet sounds awesome. I have been looking at that for the last month or two with some seriousness.

MF: We’ll start off with basics exploring what testnet and regtest offer and some of the problems, why there was a motivation for Signet in the first place.

Testnet in Mastering Bitcoin (p207)

https://github.com/bitcoinbook/bitcoinbook/blob/25569ba10142a55a0c26d32033d06c5b1033e7ea/ch09.asciidoc#bitcoins-test-blockchains

MF: This is part of Andreas Antonopoulos’ book Mastering Bitcoin. Starting from first principles what is testnet? Why do we have testnet? What do people use testnet for? Why was it set up in the first place?

E: Testnet, I can talk why I used it. I used it in the beginning to test out Lightning because the first implementations of all the Lightning nodes used testnet. That is one example to simply use something that has no value and try out the whole system. I think that is the purpose for testnet. It has many people using it and without any value you can try out things.

MF: If you are building an application and you want to test it, you don’t want to have real money on the line, that would be one use case. We’ll get into some of the problems with testnet but if you are trying out new mining hardware this can really impact the experience of other people’s application testing on testnet because the difficulty of mining blocks goes sky high. The interaction in terms of testing certain applications does screw up the testing of other applications on testnet. In terms of how testnet works, how are blocks mined on testnet? What is the difference between mining blocks on testnet versus mining blocks on mainnet?

E: There is not directly a difference. There was an improvement because the difficulty was really ramping up. They would stop mining and then the difficulty was high and no new blocks were found. I think something changed on the mining side for the difficulty adjustment. Other than that it should be pretty much the same as mainnet.

MF: There was the adjustment with testnet3. There was the problem that there would be loads of hash power coming onboard and that would mean once that hash power left the block generation time was really, really long. There was that change for testnet3 in 2012. If a block hadn’t been found after a certain time the difficulty would plummet down to the lowest possible.

KA: I am not entirely sure. I haven’t done a lot of mining on testnet. From what I understand yeah if a block hasn’t been seen for a while the difficulty drops. This has some issues but I’m sure we’ll get there.

Testnet on Bitcoin wiki

https://en.bitcoin.it/wiki/Testnet

MF: There are a few differences. Different DNS seeds. In terms of trying to find other nodes the DNS seeds are different on testnet to what they are on mainnet. I think Emzy is a DNS seed on mainnet. Are you a DNS seed on testnet Emzy?

E: Half of them from mainnet are also running testnet nodes. I am not running a testnet node because I think it is not that important.

MF: And obviously the port needs to be different. There is a minimal difficulty that we’ve gone over. Then you can do transactions with non-standard scripts. Again we’ll leave that to later because there’s that discussion on Signet. Whether you should be able to experiment with scripts that you can’t on mainnet on say testnet, regtest, Signet. That final one on this testnet wiki is that it is a different genesis block to the main network. There was a new genesis block which was reset for the 0.7 Bitcoin release.

16k blocks were found in one day on testnet3

https://web.archive.org/web/20160910173004/https://blog.blocktrail.com/2015/04/in-the-darkest-depths-of-testnet3-16k-blocks-were-found-in-1-day/

MF: Some of the problems with testnet. This was a crazy example. I don’t know exactly how this happened but apparently there were 16,000 blocks found in a day. Why is this a problem?

KA: I’m not sure if this is a case of a re-org. I am assuming it is. Probably there was a lag between the last blocks and then the difficulty dropped really far down. They put a miner on it and then it went haywire because of the minimum difficulty. That is a problem if you go back and re-org purposefully. You can get the same result because with older blocks, if they were seen as if they were the last block on the chain, you would still get this difficulty drop based on today. You could go back and get lower difficulty and mine more blocks than you could if you just mined on the top of the chain.

MF: There are two issues with it. There is the issue of too many blocks coming through. Depending on your application, exactly what you are testing, let’s say you are testing a new opcode or something, this may not be a big issue but it is just crazy that you are getting so many confirmations through in such a rush. So there are two issues. One is that block generation times are so volatile and variable. The other, as Kalle said, is that you also get re-orgs. If you are testing a transaction that you think has 5 confirmations and then there is a massive re-org and the transaction drops out because it was in a block that is now discarded, that is also a problem. But it is very dependent on exactly what you are testing. If you are testing new mining hardware, or you are testing a change to Bitcoin Core mining or Bitcoin Core P2P, that is very different from testing a new script, a non-standard script where perhaps you don’t care about confirmation times or re-orgs as much.

Has the testnet ever been reset?

https://bitcoin.stackexchange.com/questions/9975/has-the-testnet-ever-been-reset

MF: My understanding of why testnet was reset back in 2012, this was testnet2 to testnet3, is that one of the big problems was that people were selling their testnet coins for real monetary value. Was it just a dearth of testnet faucets meaning people couldn’t get their hands on coins to do testing? Why did testnet coins suddenly get a value at that point in time?

AT: It was totally before my time so I’ve got no idea but I thought the idea was more you’d reset testnet just to make sure it was easy to sync with in the first place and didn’t have years long history of transactions that nobody really cares about anymore.

KA: It was reset twice because both times somebody started selling them. That is what I heard anyway. I haven’t seen why anyone would buy coins like that.

MF: We can only speculate I suppose. It might just be pure stupidity, having no idea that the testnet coins weren’t real Bitcoin, or an altcoin or something. That was one of the motivations for resetting it back in 2012. As AJ says block sync time, we will get onto that in a few links, whether testnet should be big and whether it should take a long time to sync testnet. What were the other reasons it was reset? There does seem to be a problem because with non-standard scripts enabled, after a certain period of time it does get messy. Sometimes you need a reset just because people are struggling to sync the chain. There are rule changes and people are struggling to get to the latest blockchain tip, ignoring the time it takes, physically unable to sync up to the latest block.

E: I tested syncing testnet3 and it was really fast without any problems. I think the problem is if you don’t reset it, if there is no precedent of it being reset, I can see it becoming like any other altcoin. It can get value if you don’t reset it. Now that it has been reset once or twice it doesn’t get any value.

Testnet version history

https://bitcoin.stackexchange.com/questions/36252/testnet-version-history

MF: On this Bitcoin StackExchange post there is this comment (from Gavin Andresen). “The main reason for the reset is to get a more sane test network; with the BIP16 and BIP30 and testnet difficulty blockchain rule changes the old testnet is a mess with old clients serving up different, incompatible chains.” I don’t know whether that is people not maintaining or not keeping up to date with the latest Bitcoin Core changes but there does seem to be a bigger problem than just purely sync time or IBD time. It does appear there needs to be a reset every so often to get rid of some of these issues that collect up over time on the testnet. I don’t think people have thought much about testnet; I don’t know if there are any lessons to be learnt from these resets and whether Signets will need similar resets in time.

KA: Testnet is for making sure that your node that you wrote in Javascript can sync everything properly and you can go through all the testnet3 stuff and find all these weird scripts that are trying to break it. It is a resource for that reason I think. A way to try out weird stuff that people have done because it is on testnet and it is worth nothing. On mainnet you wouldn’t be experimental whereas on a test network you would. I know Nicolas Dorier was running a full node in C# and he used testnet to make sure it worked. Both testnet and mainnet of course.

Testnet reset?

https://github.com/bitcoin/bitcoin/issues/19666

MF: Given it has been so long since we reset testnet, we haven’t reset it since 2012, there have been discussions popping up at various points on whether there should be a testnet reset. I thought about it and I was thinking that perhaps we don’t want to reset testnet until we’ve worked out how effective Signet is and whether Signet solves all the problems. If it does solve all the problems it doesn’t really matter. If we were to reset testnet say tomorrow and everyone moved to testnet there wouldn’t be any problems because obviously it was reset. You are not inheriting all the problems of years of testnet. Then perhaps people wouldn’t be using Signet and so maybe we do need a testnet reset, but we should do it once we’ve worked out whether Signet solves all our testing needs.

AT: I don’t think a reset would make testnet all that more useful. One of the things that stops it being useful for testing Lightning is that you get these huge variations in number of blocks per hour or day. The locktime restrictions that Lightning uses to make sure that you have time to be able to publish your revocation transactions and punish people for doing the wrong thing. You have got actual time for that. If thousands of blocks get found in a minute then you don’t have time for that. If there are huge re-orgs, a reset won’t fix that. Even if it was reset tomorrow and worked fine for what it is I still think that leaves Signet as a useful thing to have.

MF: A testnet reset is certainly not going to mean that Signet isn’t required. Lightning is obviously a case where Signet is going to be clearly miles better than testnet for the reasons you just said. But for a testing use case like testing a new ASIC or testing IBD or testing P2P stuff perhaps you do want the craziness that happens on testnet to stress test your ASIC or stress test the latest IBD change in a way that perhaps it would be difficult to replicate that craziness on Signet.

AT: For testing an ASIC Signet is not all that useful because you have got to do the signing as well as the proof of work. You’ve got a whole different setup than you have for mainnet. Whereas testnet is almost exactly the same, just point an ASIC at it and you’re fine. I don’t know how much the craziness is useful for anything realistic you are testing. Putting the craziness in a regtest thing so you have a couple of hundred blocks that demonstrate it, putting it in a functional test and having it run on every commit, seems a lot better than having to sync testnet and run changes you might make against that.

MF: That is an argument for resetting testnet? For those testing use cases such as testing a new ASIC.

AT: If you want to test an ASIC against testnet you have got to sync it first. If you get bored of it and don’t sync it all the way then you risk doing a re-org which then interferes with everyone else. There is an argument on its own for resetting testnet every two years or on a schedule rather than every 8 or 10 years or whatever. I don’t know if that is a strong argument.

MF: I’m not sure what you are saying. Are you saying it should be regular every 2, 3 years or are you saying it is not needed at all?

AT: I don’t think it is needed at all. If you run a testnet node, once you get your new ASIC you point it at your testnet node and you are instantly mining new testnet blocks. I don’t think you need to do anything special to support that use case. If you wanted to make it so that you just go from scratch, plug in your new Raspberry Pi to run your testnet node, plug in your new ASIC and have it all working in 20 minutes then that is an argument for having it reset maybe.

KA: I still don’t even know why testnet would exist if Signet exists to be honest. If you have an ASIC and you want to try it out why don’t you try to get money? Why don’t you point it at mainnet. I don’t see why you wouldn’t do that. I don’t think you are going to break mainnet if you try out your new ASIC on mainnet.

MF: You are testing the firmware and you want to make sure that it is working properly before you direct loads of hash power towards the main chain?

KA: Yeah maybe. I would just try it on mainnet myself probably.

MF: I see what you mean. There is nothing to lose.

KA: You mine a block. Oh no.

MF: But there are other use cases like changes to Bitcoin Core mining or P2P code. If you wanted to test changes to those before pushing them out on mainnet, does it make sense as a completely open, crazy, chaotic staging ground to test it first before you start pushing it out to users?

KA: I guess. If you have mining software you want to try it first maybe.

E: If you want to try to do solo mining it is way too hard to do that on mainnet. You are trying out some pool software for pool mining and you really want to try to find a few blocks. That is for sure much easier on testnet.

KA: You could start a new custom Signet. You can mine there.

MF: It is very hard to simulate that craziness on Signet. The whole point of Signet is that you are trying to prevent that craziness. You are putting constraints in there so that that craziness doesn’t happen. If you actually want that craziness I don’t know how much sense it would make to try to simulate that craziness either on Signet or to start a new Signet with craziness on it. Nicolas Dorier says testnet should be replaced by a Signet. Nicolas is in your camp Kalle. He doesn’t see the need for a new testnet. Testnet version 1 and 2, we don’t know if they are syncable anymore. At some point we would assume that testnet v3 won’t be syncable anymore for the issues that v1 and v2 had. If there are any use cases then we should think about a testnet v4.

KA: The link you showed where there were issues and clients were giving different chains. That would be a reason to reset but I don’t think we’ve seen that on testnet3 at this point. I don’t think testnet3 is giving different chains for different clients right now.

AT: Testnet3 was giving different chains at a point when some of the miners were trying out one of the block size increase BIPs. Some of the miners mined with whichever bit it was to lock it in. Then still on that chain someone else mined a too small block or something that was invalid. I think a whole bunch of those nodes got stalled. I think there was some sort of re-org that avoided that at some point, I’m not really sure. For a while there were definitely two chains on testnet.

MF: I would guess if you are allowing more functionality on testnet you are not going through the same rigor and the same processes that you do on mainnet in terms of soft forks, like discussing the Taproot soft fork for years. I would think as you’ve got more freedom and more flexibility that there are going to be those problems. Especially if you start reverting, making things more restrictive and then allowing things that aren’t on mainnet. I haven’t really followed testnet closely so I don’t know whether there have been lots of changes.

KA: I think I synced testnet one time several years ago. Then I couldn’t figure out how to get any coins. I was like “What do I do now?”

MF: I don’t think many people are monitoring it for obvious reasons. Because it has no value. Most people are focusing on mainnet.

testnet4

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017031.html

MF: Peter Todd back in 2018 when they were discussing a reset of testnet put forward the view that testnet should be a large blockchain. That motivation that AJ mentioned earlier about wanting to reset testnet just because it gets too big, it takes too long to sync, IBD takes too long. Peter is saying in this mailing list post that you actually want it to be of similar size to mainnet if not bigger. Let’s say the block size, block weight discussion starts up again in 5 years, you want to know what is possible before there start to be performance issues across the network. Perhaps you do want testnet to be really big for those use cases, to experiment with a massive chain prior to thinking about whether we should make block sizes smaller or bigger or whatever.

KA: I agree.

MF: Perhaps that’s an argument for not doing a reset. Or at least doing a reset but with a massive chain included in that reset. I don’t know.

AT: That would make sense to a point but if you just want to test random wallet and application software I don’t think you want to download an even bigger chain than mainnet to do that. Not when we are having problems with the IBD performance of mainnet in the first place. I thought some of the experiments they did with Bcash with their gigamega blocks make more sense there. You just have some separate offline test to check that performance rather than inflicting it on everyone before it is ready.

MF: This is where some of the use cases would make sense to be on Signet. You just leave testnet for that experimentation for massive chains and long IBDs and P2P issues. Maybe you leave testnet for those use cases and all the use cases such as the wallet use case you put forward, you’d use Signet for that. I am certainly not saying testnet fulfills everything that Signet can do. Signet is going to be better in the vast majority of cases. It is just whether we need that testnet, whether we need a testnet reset and what applications or what use cases are going to be tested on it in future.

KA: I think there is an incentive misalignment with miners mining on testnet. They are mining but there is really no reason for them to mine. They are wasting energy and money, electricity bills to create blocks that are supposed to be worth nothing. I think Signet will solve that problem. People who want to mine testnet coins for whatever reason, I have met several of them. They have been very concerned when I mentioned Signet because they think testnet is going to go away. But it is not going to go away. If miners mine testnet it is going to stay there. Even if I wanted to remove testnet I couldn’t by the very nature of how it is set up.

Richard Bondi workshop at Austin Bitcoin Devs on regtest

https://diyhpl.us/wiki/transcripts/austin-bitcoin-developers/2018-08-17-richard-bondi-bitcoin-cli-regtest/

MF: Let’s move onto regtest then. What do you know about regtest? How is it different to testnet? How does it work?

E: There is no proof of work. Most of the time you run it without any network connections. You simply type in a command and get a block or get 100 blocks, whatever you need for your test case. It is not like a network, it is more like you are doing it on your own node.

KA: It is like a fancy unit testing network. You can’t use it with people but it is perfect for using on your own computer to try stuff out.

MF: You are not setting up a network beyond your own machine. For some of those use cases we were thinking about in terms of mining or different ASICs competing with each other or P2P stuff, you are just not able to test them on regtest. Regtest is good for small changes that aren’t network level changes. They are just small things you want to experiment with locally.

KA: Taproot is a great example. You can create an address and spend it to yourself on regtest. But having interactions between multiple people is probably harder to do on regtest. Signet will help.

MF: This item on the reading list, this was really good, I think this is still online. Richard Bondi did a regtest workshop at Austin Bitcoin Devs. This was getting set up on your laptop, generating blocks. Obviously there is no proof of work so you can just use a generate command to produce a block and to get the block reward. You don’t need any hash power to generate a block. There is no competition. Does it have the same genesis block as testnet or is there just an empty genesis block?

KA: They are different.

MF: I would recommend that, that’s a really good workshop if you want to play around with regtest and do small changes locally. There is also a tutorial from Bisq.
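The core of such a regtest session is only a few commands; a minimal sketch:

    # start an isolated regtest node
    bitcoind -regtest -daemon
    # mine 101 blocks to our own address so the first coinbase matures and is spendable
    ADDR=$(bitcoin-cli -regtest getnewaddress)
    bitcoin-cli -regtest generatetoaddress 101 $ADDR
    bitcoin-cli -regtest getbalance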

PR 17032 (closed) Tests: Chainparams: Make regtest almost fully customizable

https://github.com/bitcoin/bitcoin/pull/17032

MF: There was this PR that jtimon opened back in 2019 in terms of making regtest fully customizable. This is moving into Signet territory. I think you commented Kalle on why this isn’t a good idea. The security of opening up regtest to external nodes is not the same security as you’d get opening nodes up to other external nodes on a Signet network.

KA: I have very little experience with all of the altcoins out there. One of the big problems I have noticed when I looked at one is that miners are automating mining on the lowest difficulty chains all the time. You have an altcoin that has a value. You will have miners coming regularly and mining a tonne, getting the difficulty up to whatever makes it economically unviable for them, and then they’d move onto the next altcoin. If you had a regtest with mining, without some kind of other security check like the signature in Signet, they would have that problem there as well, griefing and whatever.

MF: It is a mining issue. Your quote here Kalle is “You can in practice start up a regtest and addnode people all over the world to create a network though it would be completely vulnerable.” Are there any additional P2P security stuff you would have to worry about if you were to open up a regtest to external nodes? Is there anything additional that Signet provides?

KA: I can do a one line thing where I delete your entire chain for you on your regtest chain. I can just erase it anytime I want. I just invalidate your block 1 and I mine a bunch of regtest blocks until my chain is bigger than yours and then you are going to switch over to mine. You have to invalidate my block and then you may go back to yours. You are going to have this back and forth because I can make blocks for free.

MF: That is different to how I thought it worked. I thought it was immutable, that once you’ve generated a block that was a block in the chain. But no. How do you ditch a block that someone has generated?

KA: You just type invalidateblock and the block hash and hit Enter. Then Bitcoin Core will throw that block away and all of the other blocks after it. If you generate a block after doing that you are going to generate a block on the block before it. If I invalidate block 1 in your chain then my node will mine on top of the genesis block. It will mine the first block, second, third, fourth. If you have 100 blocks and I mine 101 blocks your chain is going to see my 101 blocks and say “This is the longer chain. Let’s go there.”
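Concretely, the scenario Kalle describes only needs a couple of RPC calls on the attacker’s own node; a sketch:

    # discard everything from height 1 upwards on my own regtest node
    bitcoin-cli -regtest invalidateblock $(bitcoin-cli -regtest getblockhash 1)
    # mine a longer replacement chain; connected regtest peers following the longest chain will reorg onto it
    bitcoin-cli -regtest generatetoaddress 200 $(bitcoin-cli -regtest getnewaddress)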

MF: That is a massive problem. That is something I didn’t realize. That is probably the strongest motivation I have heard for Signet. I was already convinced of the need for Signet. Do you remember how this PR worked in terms of making regtest fully customizable? Is jtimon introducing proof of work?

KA: He has multiple versions of this I think. He was working alongside me when I was doing Signet in the beginning. He was an Elements developer, the Blockstream project. I think he worked to put my Signet stuff into the Elements stuff. I think this is the one he did after I did Signet. His idea was instead of having Signet as a set of parameters with a hard coded network you would just use regtest and customize it to say “We need a signature for the block to be valid.” I think ultimately it was a lot of code for something that didn’t seem like we were going to use a whole lot.

MF: I suppose it depends on whether it is one Signet that everyone uses as a slot in replacement for testnet or whether there are multiple Signets. If there was to be only one Signet then maybe it makes sense to try to experiment with different types of customizable regtests because that one Signet doesn’t satisfy everyone’s needs or desires in terms of how it is set up.

KA: Let’s say you take Jeremy Rubin’s CHECKTEMPLATEVERIFY idea and you make a pull request for this new feature. Then you also change the Signet default values to be a new custom Signet that you make which enables the CHECKTEMPLATEVERIFY code. If you do that people who download your pull request from Bitcoin Core, compile it and run with -signet they would immediately be on your custom chain. No changes done, they don’t need to do any customization at all. They are just on your chain. They can now try CHECKTEMPLATEVERIFY stuff, they will validate it and they will do all the things that a Core node would do normally. I think people are against that idea a little. People would rather have one Signet with all bells and whistles turned on and then people can pick which soft fork they want to use. I think the jtimon idea was in the same vein but a bit complex. More complex than what was needed.

MF: There is a question on the YouTube. Lukas asks wouldn’t the difficulty be way lower on testnet allowing you to test if your software can indeed find a block? In comparison to mainnet that is obviously the case. Because there is no value on testnet the difficulty is obviously going to be lower. It is just that it varies hugely, from a much lower base.

KA: Or you could join a pool. If you join a pool, even if the difficulty is relatively high you are still going to get shares with your miner if it is working correctly. If you get shares you are going to get money. You can do that if you want money, or you can use testnet if you don’t.

MF: This was the discussion earlier. The worst case is your mining hardware, ASIC doesn’t work and you don’t get anything. That’s exactly the same case as if you are mining on testnet. You might as well throw it on mainnet and even if it doesn’t work you are not losing anything from not being on testnet.

What are the key differences between regtest and Signet?

https://bitcoin.stackexchange.com/questions/89640/what-are-the-key-differences-between-regtest-and-the-proposed-signet

MF: Let’s move onto Signet. What do you know about Signet, how Signet works?

E: I only heard about it and as far as I know instead of mining you sign the blocks. Other than that I don’t know how it works.

MF: There were a couple of good quotes that I put on the reading list from Bitcoin Optech. I am going to read them out. On the Bitcoin Optech Signet topics page “The signet protocol allows creating testnets where all valid new blocks must be signed by a centralized party. Although this centralization would be antithetical to Bitcoin, it’s ideal for a testnet where testers sometimes want to create a disruptive scenario (such as a chain reorganization) and other times just want a stable platform to use for testing software interoperation. On Bitcoin’s existing testnet, reorgs and other disruptions can occur frequently and for prolonged lengths of time, making regular testing impractical.” The operators of the signet can “control the rate of block production and the frequency and magnitude of block chain reorganizations.” This avoids “commonly encountered problems on testnet such as too-fast block production, too-slow block production, and reorganizations involving thousands of blocks.” Any other high level things in terms of how Signet works? You have still got the proof of work but you can only successfully mine a block if it is signed by I think it is just you at the moment Kalle? Or is it currently set up to be a 1-of-2 between you and AJ?

KA: It is me and AJ right now as of two days ago.

MF: That means that if I mine a block at the latest difficulty I need to send that block to either you or AJ to provide a signature before it can go on the chain?

KA: Yes exactly.

MF: There is no way of you or AJ or both giving your signature in advance for someone to start mining. It is going to be “Kalle I’ve got a new block” “Ok I’ll sign it.” “Kalle I’ve got a new block” “I’ll sign it”. It is not a “Kalle can I be a miner please?” and then Kalle goes “Ok fine.”

KA: Yes the signature commits to the block itself. So you have to have the block before you can create the signature.

Signet on bitcoin-dev mailing list (March 2019)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html

MF: This is the announcement on the dev mailing list. Let’s have this discussion then on having one Signet which everyone uses or having multiple Signets. The problem that I foresee if there is just one is that you get into the same issue that you do on mainnet where it is like “I want my change on Signet.” Jeremy Rubin comes and says “I want CHECKTEMPLATEVERIFY” and you go “Right, fine.” Then someone else comes along with a change that no one is particularly interested in or everyone thinks is a bad idea. “I want this on Signet.” Then you are in the same situation that we are on mainnet where it is very hard to get new features onto the Signet. At least with you and AJ as the signers, it needs you or AJ to sign off that the change is worthy of experimentation on Signet. Would you agree that is one of the downsides to having one Signet that everybody uses?

KA: Yes

MF: Any other downsides to having one Signet?

AT: I thought your example of having a really huge testnet so we can test how things work when we’ve got way too much data, that is the perfect example of why you want a completely specialized chain that some people use for their purposes but most people don’t want to use because it is too big for no good reason. The way Signet is implemented as is, it lets people do custom ones, which I think makes a lot of sense.

MF: You certainly want to allow people to do it. Maybe it will end up that there is one Signet that everybody uses most of the time and then if you want to go off and do something crazy that most of the network isn’t interested in then you do your own Signet. You will have one with a network effect and lots of other splintered, fragmented Signets.

Enabling soft forks on Signet

https://bitcoin.stackexchange.com/questions/98642/can-we-experiment-on-signet-with-multiple-proposed-soft-forks-whilst-maintaining

KA: I think so. There is the default one, I call it the default Signet. If you don’t provide a custom block signing script or anything. There is something you can do even if you only have one. I know you were talking about the downsides of having one. Your example of Jeremy Rubin’s CHECKTEMPLATEVERIFY and Taproot. I could see how the default Signet would have both of those turned on. People who were running Signet could pick one. As long as the miner accepts the transactions then the users can choose not to care about specific soft forks. I can say “I don’t want to care about CHECKTEMPLATEVERIFY stuff. I just want to accept it. I don’t care.” It is like OP_TRUE. That’s how it works because it is a soft fork. The only thing is there is peer to peer stuff.

MF: You are saying because people obviously care a lot more what goes on mainnet than goes on Signet they would be open to a lot of those OP_TRUEs on Signet. They don’t really care because it is Signet.

KA: If I am Pieter Wuille and I don’t care about CHECKTEMPLATEVERIFY. I want to make sure that the Taproot stuff works. I tell my Signet node to only care about Taproot and not care about CHECKTEMPLATEVERIFY. Even if the network accepts both I can pick one if I want to. It is a minor detail so it is not a big deal. It is one of the things that is cute about Signet. The miner has both soft forks enabled but the user can choose not to enable them if they want to.

MF: That makes it difficult. There is a reason why you can’t do that on mainnet because you can’t verify the chain… Let’s say I wanted to have SegWit disabled on Signet. I’d have a version of Bitcoin Core pre-SegWit. Then I experiment with Jeremy Rubin’s CHECKTEMPLATEVERIFY without SegWit. Then you are not going to be able to verify the chain. You are going to have some of the problems that we’ve seen on previous testnet resets where people are doing different things and activating different things. People struggling to verify the blocks.

KA: You can definitely not do that on mainnet. The reason why you can do it on Signet is because you trust the miner. The miner signs off the block so you know it is valid.

AT: If you were trying to disable SegWit on mainnet, then if you tried to spend someone else’s SegWit coins without doing the SegWit stuff and miners accepted that, anyone could do it. If you did that on mainnet anyone can mine as long as they dedicate the electricity to it. Then you can get different forks. But on Signet only a couple of people can mine. You can create those transactions but they will never actually make it anywhere. As long as you’ve got all the miners not doing anything that would violate the soft forks then you can treat them as enabled for as long as you like until that changes. The difference in the security model of having only a few signers, where you can’t have an attacker mine blocks successfully, changes the scenario a bit.

MF: You still want to be able to verify the chain. You still want to be able to do the IBD process even though Kalle or AJ or both have signed off on blocks into the chain. You still want to verify that all the transactions in those blocks are valid in certain use cases.

KA: You can still do it. The only thing is your node is going to see a mined block and it is going to say “This block contains a transaction that I don’t understand” which in Bitcoin land means accept. That’s in mainnet as well. If someone makes a transaction with SegWit version 1 and it gets into a block or SegWit version 58 and it gets into a block everyone is going to accept it.

AT: There is no SegWit version 58, it only goes up to 16.

KA: Ok, sorry.

MF: Certainly in a hard fork case, let’s say someone wanted to hard fork Signet you and AJ would just not allow that. There would not be a hard fork on Signet.

AT: If you were willing to do a hard fork you could make the hard fork change the people who sign the blocks. Once you bring hard forks into it anything can happen.

MF: I was thinking before this that it would be a problem people coming with their own proposed soft fork but it sounds like with OP_TRUE type stuff it wouldn’t.

AT: If you are coming with your own soft fork you still have to get the signatories to mine blocks that will include those transactions. If you started mining CTV blocks then the miners don’t have the code to verify that and they would consider the transactions non-standard and just ignore them. Same with the Taproot transaction right now.

MF: Do you see there being a BIP like process, as part of the soft fork discussion there being an application to Kalle, AJ, whoever else is signing blocks to have their soft fork on Signet?

AT: I am playing around with code to make that as straightforward as possible. We’ve got a Taproot PR in the Bitcoin repository at the moment. The last modification date of that is a couple of days ago. I think what makes sense is to say that if you merge this code, once blocks have passed that modification date and the user passes an -enabletaproot flag or something then they will relay Taproot transactions to anyone else that has also got that flag set. If there happens to be a miner on the network that is willing to mine those transactions then they will get mined. I think that is all feasible and would be compatible with doing Taproot and CTV and everything else, and work with custom Signets without requiring lots of configuration. Still a work in progress.

KA: To answer your question on an application to get your soft fork in. Ultimately Signet is going to be based on Bitcoin Core. I am hoping other implementations and wallets are going to support it. The way that I see it is if something is merged into Bitcoin Core, into the master repository, that is going to trickle down into Signet. Maybe there is a Signet branch on Bitcoin Core, I don’t know. That would be automatically pulled and built and run by the miners somehow. I don’t think AJ and I would sit and take applications or anything. We don’t even know if we are going to be the maintainers or run the miners in the future. It could be anyone. It is more going to be we have a codebase where we have a pull request, the pull request gets merged and after merging it goes into a Signet phase or something. Kind of like how they had the SegWit testnet for a while.

MF: There are definitely trade-offs for both approaches. There is a downside to having that permissioned “Kalle, AJ please enable my soft fork” type stuff. But that does bring all the benefits of Signet with a network effect so that everyone is using that same Signet. If everyone was just splitting off and using their own Signet with their own soft fork on it and no one else is on their Signet then there is not much point. You are almost just on a local regtest like network. You are not getting the benefits of a public network.

KA: If you have a soft fork you want to try out starting your own custom Signet and asking people to join is probably the most straightforward thing to do.
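In practice starting a custom Signet means choosing a block challenge script (for example a single-key CHECKSIG) and having others configure the same challenge; a sketch using the option names that later shipped in Bitcoin Core, with placeholder values:

    # operator: run with your own challenge and sign/mine blocks yourself
    bitcoind -signet -signetchallenge=<your-challenge-hex> -daemon
    # participants: same challenge, plus a node to connect to (38333 is the default signet port)
    bitcoind -signet -signetchallenge=<your-challenge-hex> -addnode=<operator-host>:38333 -daemon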

MF: One thing I did see, on testnet you can do non-standard scripts. Some of those opcodes that are disabled on Core you can actually use on testnet but your plan is to not have them enabled on Signet. What was the reason for that?

KA: Ultimately people want to make sure that whatever they are doing is going to work with real money. I think it makes more sense to be close to the things that are possible on mainnet and not accept things that won’t be accepted in Bitcoin. That’s the only reason. It is supposed to be as close to mainnet as possible.

MF: At your workshop at Advancing Bitcoin, the hope is that we have Taproot on Signet before we have Taproot on mainnet. Otherwise there is not much point. It does have to make jumps ahead of mainnet so that you can test stuff but perhaps the bar for getting on Signet is where Taproot is now rather than a soft fork that comes out of nowhere and doesn’t have the discussion, review, the BIP, code, a PR to Bitcoin Core etc.

KA: There are several issues that you have to tackle when you are trying to test out new features with the Bitcoin setup. Once you turn it on if you find a bug do we reset the chain now? I think AJ has been working on various ideas to make it so that you can turn it on but then you say “Oops we didn’t actually turn it on. That was fake. Ignore that stuff.” Then you turn it on again later. So you can test out different stages of a feature even early stages. You can turn it on, see what happens and then selectively turn them off after you realize things were broken.

AT: The problem is if you turned Taproot on two weeks ago all the signatures for that would’ve been with the square version of R. If we change it to the even version of R they’d be invalid if you tried running the future Taproot rules against them. I think you can make that work as long as you keep updating the activation date of Taproot so that all those transactions, they weren’t actually Taproot transactions at all, they were pretend OP_TRUE sort of things that you don’t need to run any validation code over anymore. You only have to do the Taproot rules from this block instead that previous block. Going back to the current version of Signet, that does allow you to accept non-standard transactions.

KA: I don’t think so.

AT: I don’t think we will ever actually mine it but the command line is TestChain which is set to TRUE for Signet at the moment I think.

KA: RequireStandard is TRUE for Signet. It is in chainparams.

AT: It is just using IsTestChain isn’t it?

KA: If testing is TRUE but RequireStandard is also TRUE.

AT: Isn’t fRequireStandard a default?

KA: It is set to TRUE anyway.

AT: I think it gets overridden in init.cpp if you do the command line parameter. I think this because I claimed all the empty things that I mined. I don’t think I had to change the code to do that.

KA: I’ll have to look at it later.
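The command line parameter being referred to is presumably -acceptnonstdtxn, which Bitcoin Core only permits on test chains; something like:

    # relay and mine non-standard transactions; rejected on mainnet, allowed on test chains
    bitcoind -signet -acceptnonstdtxn=1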

Keeping multiple Signets separate

MF: There are going to be other Signets. Even if we didn’t want them there are going to be other Signets. There are reasons for having other Signets as we’ve explored. How do you separate the different Signets so they don’t start interfering with each other? There is network magic that you can set so there are different Signets. You can have different addresses beginning with sb1 and sb2. What else do you need to make sure they don’t impact each other?

KA: The network magic, if you change the magic the nodes aren’t going to talk to each other. All the network messages include the magic so they are not going to communicate. You are pretty good there as long as the magic is different. With Signet the genesis block is the same for all of them. A lot of software hardcodes all of the genesis blocks for all the chains that they support. It is hard to have a dynamic genesis block, it is harder to implement for people. The genesis block is the same for everyone but the magic changes. It should be enough but you can have collisions.

Bitcoin Core PR 16630 (closed) to skip genesis block POW check

https://github.com/bitcoin/bitcoin/pull/16630

MF: On that genesis block, there was a PR to skip the genesis block proof of work check on Core mainnet. Can you explain what the issue is? This PR wasn’t merged but why would we want to skip the genesis proof of work check on mainnet? And why would we want different genesis blocks on either Signet or multiple Signets?

KA: I am not entirely sure why the genesis block’s proof of work check was added before. It makes sense in a way to not check it because it is hardcoded. You literally put the genesis block values in there yourself. Why would you go and check them? In the case of Signet, because Signet has proof of work and the genesis block has to follow the proof of work rules if you have proof of work enabled, whenever you created a new Signet before you had to also mine the genesis block. There was an entire grindblock command inside the Signet proposal that would turn your genesis challenge into a block with a nonce and everything. This was just more work for everyone. You’d have to have this nonce, you’d have to say this Signet genesis nonce is 1,325,000. That was when the genesis block was dynamically generated for each Signet which is not the case anymore. I don’t have any problem with proof of work being tested now.

MF: What is the argument for different Signets having different genesis blocks? Is that just an extra separation factor?

KA: The argument for is say you want to try out Lightning between multiple chains and you want to spin up a couple of Signets. You want to try them out. If they have the same genesis block c-lightning is not going to be able to use them. They use the genesis block to define the chain. But the argument against that is you can just go in and change the code. You can change the genesis block if you want to try two. It is not hard to do. I know that was one of the reasons why Jorge (jtimon) wanted to have the genesis block be different for each Signet, because he was working on multi asset Lightning stuff.

MF: I hadn’t thought much about Lightning in terms of what you’d need on Signet for Lightning. There are quite a few things where Lightning influences the design of Signet.

KA: I have pull requests to make it possible to do Lightning stuff.

MF: I am just going to highlight some good resources on Signet for the video’s sake. There’s your mailing list post initially announcing Signet. Then there’s the Signet wiki which talks about some of the reasons why you’d want to run Signet and some of the differences between mainnet and Signet. I think we’ve gone through a few of these.

“All signets use the same hardcoded genesis block (block 0) but independent signets can be differentiated by their network magic (message start bytes). In the updated protocol, the message start bytes are the first four bytes of a hash digest of the network’s challenge script (the script used to determine whether a block has a valid signature). The change was motivated by a desire to simplify the development of applications that want to use multiple signets but which need to call libraries that hardcode the genesis block for the networks they support.” Bitcoin Optech

Kalle Alm at Advancing Bitcoin (presentation and workshop)

https://diyhpl.us/wiki/transcripts/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/

https://diyhpl.us/wiki/transcripts/advancing-bitcoin/2020/2020-02-07-kalle-alm-signet-workshop/

MF: Then there’s Kalle’s talk at Advancing Bitcoin where you have the various elevator pitches. Steps to integrate Signet testing. c-lightning has the PR merged. Once Signet is merged into Core you will be able to do c-lightning experimentation on Signet. The Lightning implementations don’t need to wait for it to be merged into Core before getting it set up on their Lightning implementation. They can just start using it whenever it is merged into Core or even before. They could start using it now. Then in the workshop you had the participants set up a Signet to experiment with Taproot. Because currently Signet isn’t merged into Core they had to merge sipa’s Taproot branch with the Signet branch. Once Signet is merged into Core and certainly when Taproot is merged into Core then it will be very easy. A lot of that fiddling goes away.
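The kind of merge the workshop participants had to do looks roughly like this; the remotes are the authors’ public forks, but the branch names are illustrative rather than the exact ones used:

    git clone https://github.com/bitcoin/bitcoin.git && cd bitcoin
    git remote add kallewoof https://github.com/kallewoof/bitcoin.git
    git remote add sipa https://github.com/sipa/bitcoin.git
    git fetch kallewoof && git fetch sipa
    # combine the signet branch with the taproot branch locally
    git checkout -b signet-taproot kallewoof/signet
    git merge sipa/taproot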

KA: If Signet is merged, even if Taproot is not merged then as soon as Pieter Wuille updates the Taproot pull request then you can just grab that and you would have Signet in it.

MF: In terms of the Signet faucet, you have one set up. I used it yesterday and I was successful the first time. Then I tried to get some more. I think the message was “Nuh-uh”. Perhaps the error message needs to be improved. How is that set up to prevent people from draining that faucet?

KA: It is one time per IP per day.

MF: I managed to get some later that day again. Maybe I was using a different IP. That stops it being drained. Other people can also set up their own faucets. They just need to either get some Signet from your faucet or they would need to mine a block on Signet. Then they can set up their own faucet and people can get Signet from their faucet rather than your faucet.

KA: If they have the private key to sign a block.

MF: Or they mine a block on Signet that you sign off.

KA: Right.

MF: There is a Signet explorer. I think this is based on the open source Blockstream Esplora explorer. A normal block explorer for Signet. There are currently Taproot transactions on this default Signet? No there are no Taproot transactions currently on the Signet blockchain.

KA: No

Activating Taproot on the default Signet

MF: What needs to be done to get a Taproot transaction on the Signet chain? Is it just the case of the merging that we were discussing?

KA: That is the easiest way I think. I think the other way, you merge the Signet and Taproot together again and then start up a temporary Signet. Then you can use that to do Taproot stuff today. We did it in February at Advancing Bitcoin. I know Stepan Snigirev even used it for his hardware wallet demo.

MF: There would need to be an activation like discussion on when to activate Taproot on the default Signet?

KA: Yes.

AT: There has to be some indication of at what height to start enforcing the rules, which gets into some complexity. The complexity is ultimately that if there have to be changes made to the Taproot code before it is rolled out on mainnet then the old transactions might no longer satisfy the new rules and the new transactions that satisfy the new rules might not satisfy the old rules. What that adds up to is if you have enabled Taproot on your Signet server and the rules change underneath you then it is effectively like a hard fork. You have to upgrade or your view of the chain stalls. I think that is probably going to be fine for testing as long as we can get the user experience worked out: if you enable Taproot you put this command line option in, but then you need to keep paying attention to see if there are any changes because you will have to upgrade or everything will get broken. I am just poking at my Terminal to try to push the branch where I have merged the things to somewhere public.

BIP 325 PR (merged) to change signature scheme to be tx-based

https://github.com/bitcoin/bips/pull/947

MF: There was this change to the BIP recently and this was AJ again, changing the block signing algorithm to be based on tx validation instead of a custom method. Can you explain what is going on here and what the motivation is for changing it?

AT: The original design was that it took the block and extracted the place where the signature would go from it and then made a SHA256 of all that and signed that. Because that’s different to anything that Bitcoin has done before it needed some special signing code infrastructure to let that happen. That special signing code is exactly the sort of signing code that Craig Wright was using to fake all the keys for other stuff. You just pass a hash into it of the message you are claiming to sign but you don’t ever prove that you knew what the stuff was that hashed to it. That’s not a secure protocol. There were some complaints on the PR about that approach. I think there was a similar suggestion for the Bitcoin sign message stuff in that rather than having a new protocol for this we have already got a protocol that lets us sign transactions. Why not reuse that? That’s what this basically is. Instead of creating a new serialization for the block data and running SHA256 over it, it reconstructs it into a transaction like thing and then does the usual transaction signing stuff. After a good suggestion by Pieter it is now at the point where we can convert the block into a PSBT and then just run it through whatever PSBT tools we have so that we could even do a Signet that is signed by a hardware wallet. I think.

MF: This is moving into the conversation on what a malicious actor could do to try to screw up Signet.

AT: It is not really that far. It is more just how could a malicious actor mislead you if you didn’t understand all the details of what you thought was going on in the code. It wasn’t that there were bugs or an actual exploit or anything. It was just an architectural design sort of thing. The result of changing it means that we can then stick in other tools so it is a little bit more flexible in practice now I think.

MF: There is nothing glaring a malicious actor could do to screw up Signet other than trying to get one of your two signing keys?

KA: Even without the signing keys you can customize Bitcoin Core so that it will ignore the signing keys. I think it is pretty easy to modify it so that it will let you mine blocks only using proof of work. Those blocks would not be accepted obviously by the nodes. Light clients, people who are using the nodes and trust them, they can be fooled by this unless they have support for the signature verification part. One of the ideas that’s been floating around is to move the signature into the header of the block. This would mean that even light clients who only download the header, they can choose to also get the signature. Then they can do the bulk validation on their own. That’s something for the future. I am not sure when that would happen. That’s a potential attack vector I guess. You can mine blocks even if they are not valid. Some people will accept them.

MF: It is inevitable there are going to be certain attacks that aren’t possible on mainnet that are going to be possible on Signet because there is no value on it. There is no motivation to do some of the more hardcore attacks that might be tried on mainnet. There could be an eclipse-like attack where your node isn’t connected to Kalle’s or AJ’s on Signet and you don’t realize you are being fed Signet blocks that haven’t been signed by Kalle or AJ.

KA: If you are running Bitcoin Core node then you will check the signature and realize. If you are running a light client then someone could fool you for a while.

MF: I don’t think we need to worry too much about that with no money on the line. There is not much motivation for an attacker to do stuff like that I wouldn’t have thought.

Signet PRs in Bitcoin Core PR 16411 (closed) and PR 18267 (open)

https://github.com/bitcoin/bitcoin/pull/16411

https://github.com/bitcoin/bitcoin/pull/18267

MF: There was the initial PR that is closed (16411) and then there is the PR that is open (18267) and is currently being reviewed. You have been working on this for at least a year or two now Kalle, what big changes have happened at a high level over that time? There has been some restructuring of the PR to make it more reviewable and some commit reorganization, but what are the high level changes over the last year or two?

KA: There have not been a lot of changes that are high level in that sense. The thing that I realized is that it is really hard to get people to review the code. It was kind of big and it had a lot of stuff in it. I was optimistic for a while. But then I started splitting it into multiple pull requests and doing that let me get more traction. AJ joined in a couple of months ago. He has been really helpful to keep me motivated and nudge other people to help out with review. I think mostly it is the nature of being a big change and it taking a while to move through the process and get it into the system. It is also about me generally being bad at getting reviewers.

MF: It is certainly one of the big challenges to get review on your PRs just observing all the PRs in Core at the moment. There does seem to be impetus now with Taproot. If we don’t get Signet up to test Taproot what is the point of Signet at all? It is the perfect opportunity. We have a proposed soft fork and we want to do testing before it could potentially activate and get deployed. It is split now. What is currently being reviewed is this PR 18267. There were a couple of steps afterwards. Once that is merged you want to do RPC tools and utility scripts.

KA: This PR 16411 (closed) here is kind of outdated. AJ’s stuff with the transaction based approach made a lot of this stuff obsolete. There are some utility scripts that I really want in there though. For example I have a script called getcoins which you run from the command line and it will connect to the faucet and get you coins. That for me was a big blocker for testnet when I started testing it out when I was new. How do I get coins? I don’t know. Now you just run a one liner in the command line and it gets you coins. Now you have coins to send to your buddy or whatever. There is other stuff as well. A lot of the mining stuff I don’t know if it is ever going to go into Core because you can just run custom stuff. The miners can have a different branch. I don’t know if any of that is going to go in to be honest.

MF: You mean it is important that you and AJ are running the right software but it doesn’t really matter if other people aren’t if they are not interested in verifying the chain?

KA: No, verifying is one thing. Mining, generating blocks and signing them and stuff. I don’t know if that has to be in Bitcoin Core. It would be useful if it was. People could start up their own chains but it doesn’t have to be for Signet to work.

Reviewing the Signet PR

“There is a tiny amount of risk involved as it touches consensus validation code, but this is a tiny code change. If for some reason someone managed to flip the “signet” flag on a running node, for example, that would cause those nodes to start rejecting bitcoin blocks. I don’t foresee this as very likely, personally.” Kalle Alm

MF: On reviewing this it does touch consensus but only marginally. The only thing to worry about in terms of touching consensus is someone could hack into your machine and turn the -signet flag on. Is that the only thing we’ve really got to worry about? We’ve obviously got to be careful with PRs that touch consensus. It is just those chain parameters.

KA: Everything is dependent on the signed blocks flag being on or not. Signet adds stuff to the consensus, it doesn’t remove anything really. You couldn’t fake Signet on a node and then suddenly it would accept invalid blocks to fool you into thinking you had money or something.

MF: When I tried it, it worked all fine. Fee estimation, obviously there are no fees. Should you be able to send a Signet transaction without setting the fee? I got an error message when I didn’t set the fee.

KA: I get that on regtest too. When I start up a regtest node and I try to send a transaction. I get the same error, I have to do settxfee or something. I think it is an odd feature to be honest. I guess something should be done about it.

MF: It is just a small thing. It could be addressed in future. With fees, anything is going to be accepted. It could have zero fee but as long as you or AJ sign it off it can get in the chain.

KA: Other nodes are going to reject zero fee transactions I think. The Core nodes will have a minimum 1 satoshi per byte rule in policy.
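For anyone who hits the same error, the usual workarounds are to give the wallet an explicit feerate or a fallback feerate (illustrative values; both rates are in BTC/kvB, so 0.00001 corresponds to the 1 satoshi per vbyte minimum Kalle mentions):

```
# Give the wallet an explicit feerate when fee estimation has no data to use:
bitcoin-cli -signet settxfee 0.00001
# Or start the node with a fallback feerate for sends to fall back to:
bitcoind -signet -fallbackfee=0.00001
```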

MF: Fabian (Jahr) has got a Signet2 set up with ANYPREVOUT and Taproot. Have you been discussing that with him?

KA: I hadn’t heard of it.

MF: Maybe it is personal testing.

generatetoaddress doesn’t work on custom signets

https://github.com/bitcoin/bitcoin/pull/16411#issuecomment-522466194

MF: With regtest you can just do the generatetoaddress command because there are no requirements on generating a block. That doesn’t work for Signet obviously because it hasn’t got the signature from you or AJ. I think Nicolas Dorier said he’d like that to work but there is no way for that command to work unless it goes via you or AJ.

KA: What he is saying is he thinks it should work for me and AJ. We have our private key so we can sign the blocks. He thinks generatetoaddress should figure out that it is Signet and go and do the signing and all this stuff under the hood. Because then he can start a custom Signet on his side and he can generate blocks on demand in some kind of testing network or framework.

MF: With multiple Signets are you going to have different data directories, a signet1 data directory, a signet2 data directory? How is it going to be managed on the file system with different Signets?

KA: If you have multiple ones you are going to have to do what you do with regtest. You have to do -datadir=.
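As an illustration, and assuming the -signetchallenge option from the Signet PR, running two custom Signets side by side might look something like this (the challenge hex and paths are placeholders):

```
bitcoind -signet -signetchallenge=<hex challenge A> -datadir=/home/user/.signet-a
bitcoind -signet -signetchallenge=<hex challenge B> -datadir=/home/user/.signet-b
```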

MF: In terms of block generation currently you have a script, I think you said at the workshop at Advancing Bitcoin, to do a block every ten minutes. It churns away on the proof of work for a minute, succeeds and then you have a pause. Is that going to be the block generation script going forward? Any thoughts on how that could change or would need to change?

KA: It seems to be working fine right now. If there are any problems we will change it. I don’t really know if there are going to be problems. The difficulty is very low. If you put a modern ASIC on it it would literally create a couple of hundred thousand blocks or something before it slowed down. How big of a deal is that? I don’t know. That could be a problem. If it is then we may have to increase the difficulty. Having an ASIC on a high profile Signet may make sense.

MF: What could go wrong? At the moment there is no hash power directed to it. Let’s say Signet is being widely used and there are a lot of people competing to mine blocks with your or AJ’s signature. Would you or AJ increase the difficulty if there were lots of mining going on manually? Tweak it like that?

KA: The Signet is working exactly like mainnet in terms of proof of work changes. It looks at the last 2016 blocks and if they were mined in 10 minute intervals it keeps the difficulty. If we realized that we had to increase the difficulty we would put more hash power on Signet and then it would gradually increase. The reason I am doing the 9 minute pause right now is because I don’t see a reason for my CPU to be spending time on mining the entire time. If I mine the entire time then the difficulty is going to go up until we hit 10 minutes per block. Then my CPU is going to sit there and sweat. But that could change if we need more security beyond the signatures. Someone could spam a bunch of block headers that are valid.
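As a rough sketch of the retargeting rule Kalle is describing, which Signet inherits unchanged from mainnet (the real implementation works on compact “bits” targets and has a well known off-by-one quirk in the window, so treat this as illustrative only):

```python
# Rough sketch of Bitcoin's difficulty retargeting, which signet reuses as-is.
TARGET_TIMESPAN = 2016 * 600  # two weeks of 10 minute blocks, in seconds

def next_target(old_target: int, actual_timespan: int) -> int:
    # Clamp the measured timespan to within a factor of 4, as mainnet does,
    # then scale the target proportionally (bigger target = lower difficulty).
    actual_timespan = max(TARGET_TIMESPAN // 4, min(actual_timespan, TARGET_TIMESPAN * 4))
    return old_target * actual_timespan // TARGET_TIMESPAN

# Example: if the last window took half the expected time, the target halves
# and difficulty doubles.
assert next_target(1 << 200, TARGET_TIMESPAN // 2) == (1 << 200) // 2
```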

MF: The difficulty is only adjusted every two weeks on mainnet. There is still the possibility that blocks get found really quickly if there was loads of hash power directed to the Signet chain. The signature just makes sure that blocks aren’t found too fast.

KA: The signature makes sure no one else can mine. If everyone could mine…

MF: Two people could submit a block to you. You only accept one of them because you don’t want to accept two blocks within a short period of time.

KA: I don’t think people would be sending blocks to me. I think they would make blocks and they would talk to my node through the network. My node would get transactions like normal in mainnet and it would mine them, mine transactions into blocks.

MF: If I have an ASIC and I am mining and I find a block. I need to get your signature.

KA: I don’t think it is going to work that way. I don’t think you are going to mine blocks. You are going to put transactions out in the network and eventually they are going to be mined just like in mainnet. If you do try to mine then yes you are going to have to have a signature in your block for it to be valid. Even if you did mine a block without a signature you couldn’t get it on the network.

AT: The code also works the other way at the moment because of where the signature is placed. You have to construct the block first which is all the transactions. Then you have to get it signed and only then do you generate the proof of work after that.

MF: You are only starting the proof of work once you’ve got the signature. If loads of people are asking for that signature to start mining perhaps you wouldn’t give it to everyone in case they all find blocks within 10 minutes?

KA: If we were to do that then only one of them would be valid because they would all be mining on the same block. There would be one chain and they would mine on top of it. Even if all of them found a block there would only be a one height change anyway. Ultimately we don’t get blocks from people. We just get transactions over the P2P network like in mainnet. Then our nodes put them into blocks. Like when you do generatetoaddress, the same thing.

MF: If a block is mined and then I get your signature and I find a block 30 seconds later. Do you only announce that block after 10 minutes and essentially put a 9 minute and a half delay on it?

KA: No. If we have a signature in a block then we just put it out there.

MF: There can be blocks every minute or two. There is still going to be that variability just not testnet level variability. Mainnet variability basically.

KA: Right now it is not going to be that way because I am putting a 9 minute sleep between every block and I am creating all the blocks.

MF: No one is mining.

KA: It is just me mining.

btcdeb

https://github.com/bitcoin-core/btcdeb

MF: Can you explain what btcdeb does? Why people should use it and why it is interesting. Then we’ll talk about enabling Taproot scripts on btcdeb.

KA: It is a toolset for Bitcoin related things. It is based on Bitcoin Core code so it has many of the classes defined in Bitcoin Core. They are also in btcdeb. It has several tools. The main one is btcdeb, you can hand it Bitcoin scripts and it will show you step by step what happens when you run those scripts. If you are learning Bitcoin it is something you can use to experiment with Script. It is even more useful if you hand it two transactions, one spending the other, then for a P2SH it will show you the redeem script and how the scriptPubKey and scriptSig interact with each other. Step by step how the system verifies that your transaction is valid or not valid. When Pieter Wuille opened the Taproot pull request I wrote a version of btcdeb that allows Taproot. Along with that I also made a tool called tap which you can use to create Taproot and Tapscript UTXOs. You can also spend them with multiple scripts. I am hoping that can be useful for people testing the Taproot stuff and finding bugs.
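As a rough illustration of that workflow (the exact arguments and prompt are from memory, so check the btcdeb README before relying on them), you hand btcdeb a script plus the initial stack items and then step through it at the interactive prompt:

```
btcdeb '[OP_DUP OP_HASH160 <pubkey-hash> OP_EQUALVERIFY OP_CHECKSIG]' <signature> <pubkey>
btcdeb> step
```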

MF: There is a worked through example where you can step through a script.

KA: The Taproot example is in the Taproot branch. That branch is outdated right now. I don’t think you can find that Taproot example right now.

MF: The example that you’ve got up on the documentation isn’t Taproot. You’d need the Taproot branch to be able to step through a Taproot script. One thing I did see was that you disabled Miniscript. You did have Miniscript enabled with this. It was an old version of Miniscript? What was the problem with the Miniscript?

KA: I based it on this old conceptual version that Pieter Wuille or someone made. I didn’t realize until after I got it working that it was missing a bunch of new features. It was basically just a limping version of Miniscript that didn’t really do a lot. I decided to just rip it out and redo it. Also now that there is Minsc, that might be an even better approach to go from there. I don’t know. I have to look at what shesek has done and how it works. It is possible that it would be a Minsc debugger in btcdeb, I don’t know.

MF: The stack visual isn’t needed on Miniscript or Minsc. One of the cool things about this btcdeb is stepping through the stack operations of the script. I suppose if it was integrated with Miniscript you’d get all the benefits of Miniscript with the benefits of being able to analyze and debug the script. That would be cool. Let’s wrap up then. Thank you very much Kalle, AJ, Emzy and everyone else on the YouTube. There was Kalle’s tweet on successfully funding and spending a Taproot script from January. Then your tweet an hour or two before this was upgrading btcdeb for the latest Taproot branch. With that branch you should be able to debug Taproot scripts.

KA: And create them.

MF: Awesome. Thank you. The video will be up, we’ll get a transcript up.


Socratic Seminar - Taproot is locked in. What now?

Date: July 20, 2021

Transcript By: Michael Folkson

Tags: Taproot, Soft fork activation

Category: Meetup

Media: https://www.youtube.com/watch?v=GAkLuZNsZzw

Gist of resources discussed: https://gist.github.com/michaelfolkson/0803271754f851530fe8242087859254

This event was livestreamed on YouTube and so comments are attributed to specific individuals by the name or pseudonym that was visible on the livestream. If you were a participant and would like your comments to be anonymized please get in touch.

Intro

Michael Folkson (MF): Welcome to everyone on the call, welcome to anybody on YouTube. This is a Socratic Seminar on Taproot rollout or Taproot support post activation in November. If there are some developers on the call or people working on particular projects hopefully we’ll be able to discuss some of the challenges or gotchas in terms of rolling out Taproot support, gauging where we are at in terms of whether people are thinking about this for their developer roadmaps in their businesses, in their companies, on their open source projects. Everyone should know that activation is in November, block height 709632 and that is a few months away so there is no rush but it is a good time before the summer holidays to be discussing plans for rolling this out. We’ll start with intros. Who you are, name or pseudonym is fine, and also what project and what company you are either contributing to or what software you are using in terms of what you are interested in for what projects and what businesses support Taproot once November comes around.

Oscar Pacey (OP): I’m interested to catch up on the news, follow the format, get more accustomed to it for future meetings.

Alex Waltz (AW): Hi I’m Alex, I don’t work for any company, I just wanted to come in and listen. I hope to see the upgrade in more wallets even though I think that is not going to happen so fast.

MF: We’ll get into that later.

Aaron van Wirdum (AvW): I’m Aaron van Wirdum, I work for Bitcoin Magazine so I am tuning in to see what is going on over here.

Craig Raw (CR): Hi I’m Craig Raw, developer of Sparrow wallet. I have recently been integrating single key Taproot in. I guess I’m here on the call having done that work on my own. I’m interested to hear what the progress of the greater community is on it, get some feedback perhaps.

MF: Awesome. How was that process? Was it pretty easy to do? Or did it require looking up loads of stuff? Did you look at how Core has implemented it or just did it yourself?

CR: I pretty much followed the BIPs. I didn’t find them particularly easy to read. I also referred to the Optech Taproot workshop which is a little bit out of date. There is a PR that updates it to 0.21. I followed that but the workshop is not exactly what is in the BIPs either. It wasn’t the easiest process I must admit but I do think all the information is out there. It can be a little bit difficult to piece it altogether.

MF: That is really good feedback, thanks Craig. Those Optech workshops are really good but yes there were a few done a year and a half ago so maybe there are a few details that are out of date or slight changes that weren’t implemented in the Optech workshops. Luke says you can propose changes to the BIPs to make them easier to work with. You mean the Taproot BIPs?

Luke Dashjr (LD): Yeah. He was saying he needed help besides what was in the BIPs. It might make sense to clarify what goes in the BIPs so that is easier for other people to work with when they are in the same situation.

MF: What are your thoughts in terms of changing the BIPs, it is kind of dangerous… I suppose if it is guidance for users then it is not really a problem.

LD: The purpose of the BIPs is that people can read them and understand how to work with it. If it is not accomplishing that I don’t know what is missing, it sounds like something could be added to make it clearer, whatever was missing for him.

MF: Good point. Craig, after this if there is anything in the BIPs you don’t like perhaps consider opening a PR.

CR: It all depends on what background you’ve had coming into them. That’s obviously a difficult thing to assume everyone’s background reading them for the first time. I didn’t have a particular issue with the BIPs, they are quite dense but that doesn’t necessarily mean they’re bad. I just think we perhaps need more accompanying material that can assist with explaining them. Obviously as the support grows and the amount of code grows you have more reference points. There aren’t that many at this point.

MF: It is still early. In terms of SegWit time we are still mid 2017. I am always trying to think where we are in terms of SegWit and comparing to SegWit. I have a section later where we’ll talk about some lessons on why SegWit rollout and SegWit adoption were so slow. Andrew says in the chat the BIPs are pretty dense so it can be easy to miss things.

MF: This is the reading list that I put together. We did have Socratics last year discussing the design of BIP-Schnorr and BIP-Taproot, there are transcripts and videos from June, July last year. Anybody who is unclear on the details, some of that discussion might be helpful. Obviously there are BIPs and other resources. Hopefully there will be a lot more resources coming out to help people like Craig implement this.

Recap on miner signaling (activation at block height 709632, approximately mid November)

MF: I thought we’d recap the miner signaling. Activation in November, we had the taproot.watch site that was great for monitoring which mining pools were signaling, that seemed to get a lot of attention on social media etc. I was really impressed by some of the mining pools, pretty much everyone did a good job signaling pretty speedily and reaching that 90 percent threshold pretty fast. Special shoutouts to Slush Pool who were the first mining pool to signal and they put out a tonne of blog posts and podcasts discussing activation. I thought that was really impressive. Also Alejandro from Poolin did the podcast scene discussing the challenges of a mining pool signaling. They had a few issues, they weren’t as fast off the tracks as Slush Pool but I thought he did a really great job discussing miner signaling and he had the taprootactivation.com site. I hope that going into activation in November this aspect of technical competence, staying up to date with the latest features being rolled out and user interest in what changes are available, continues not just with miners but with exchanges, wallets, hardware wallets etc. I hope that momentum continues because that would be really cool and give us a headstart getting Taproot rolled out across products, companies and the community. Then Pieter Wuille had this tweet thread where he said “This is just the beginning. The real work will be the ecosystem of wallets and other applications adopting changes to make use of Taproot.” This is just the beginning, even though we’ve got it activated, and hopefully this call might play a small part in getting people thinking about and discussing it well in advance of November. Any thoughts on the miner signaling part?

AvW: I think an interesting point about miner signaling is a question of what is it for? There seemed to be different perspectives on what the actual purpose of miner signaling is. Is it purely a technical thing, is it purely something to trigger nodes into starting to enforce the rules or is it also some sort of social signal to users that there are going to be new rules. As an overall question I think there are different perspectives on what is miner signaling actually for ultimately.

Andrew Chow (AC): We use miner signaling as a coordination mechanism because it is easy to measure. It is hard to measure whether users are upgrading, it is hard to measure whether nodes are upgrading but it is really easy to measure whether miners seem to want the soft fork. That is why we use miner signaling at least as a first step because it is easy to measure. For further steps we haven’t really gotten there yet so that is up for debate.

LD: Miner signaling is rather the last step and it doesn’t really matter what miners want. That’s not its purpose for sure. The purpose is to have something concrete onchain that says “This chain has these rules” so the users that want it know to activate it and the users that don’t want it know they have to fork off.

MF: I think your point Luke is that miners and mining pools should have reviewed the BIP well in advance of signaling. If they are unhappy with it the time of signaling is not the point to raise that opposition or raise that criticism. It is well in advance of any activation.

AC: Luke’s point is more that miner signaling is the coordination mechanism but they should only be signaling when it is clear that the community wants the soft fork to activate.

LD: If that isn’t clear there shouldn’t be any signaling at all.

MF: There shouldn’t be any signaling at all if there is not consensus on the soft fork, but if the miners and mining pools are signaling that not only suggests there is consensus on the soft fork, it also signals readiness for that soft fork to be activated. But it will be interesting, there seems to be a lot of churn and a lot of changes in the mining community so perhaps in November the mining pool breakdown will be totally different than it was when the signaling actually happened. I don’t know if there is anything to come of that. Hopefully it will go smoothly and miners will enforce the Taproot rules on that block and there won’t be any disruption.

What Taproot offers us and Taproot uses

MF: Let’s start with what does Taproot offer us?

AW: There was this whole thing with signature aggregation and cross input signature aggregation. I am still confused about that. I thought the whole thing with Schnorr is that you can add signatures. Doesn’t that mean aggregating? Can someone clarify what does it mean? What is signature aggregation, what is the other type of aggregation and why is that not here? How is it going to be added and what is it going to be used for?

AC: Signature aggregation is basically the ability to take multiple signatures and combine them into a single signature. Cross input signature aggregation would be to do that across the inputs in your transaction. If you have 5 single signature inputs, instead of having 5 signatures in your transaction you would have 1 signature. Taproot doesn’t implement this and I haven’t seen any concrete proposals to do this yet because this would require significant changes to the transaction format. But signature aggregation itself is something that Schnorr signatures provide because of the linearity properties of the signature. If you have a multisig inside a single input you could then do signature aggregation with something like MuSig where you also do key aggregation. If you have a 2-of-2 multisig instead of doing what we do now where you put 2 keys and 2 signatures, you would combine the 2 keys into 1 key and combine the 2 signatures into 1 signature. It becomes 1 key and 1 signature. That is signature aggregation in general. It is combining signatures. Key aggregation is combining keys. It gets more complicated from there.

AW: The cross input one is just a meme?

AC: It is something that people want and it is something that we thought we could do without Taproot. It is something you can do with Schnorr signatures in general, signature aggregation. But it is not happening because it would require significantly more engineering work and more changes to the transaction format. I think people want the Taproot changes sooner rather than later. Doing cross input signature aggregation would put a several year long delay on that.

AW: This would also require a soft fork? How would this be done?

AC: It would require some kind of fork because you would need to change the transaction format. Maybe there is some trick we could do similar to SegWit where we add on more data to the transaction format. But this is something that still needs to be worked out. I don’t think there have been any solid technical proposals for this yet.

LD: There is no reason we couldn’t have a new witness version and just have the signature only in the very last one or the very first one?

AC: I think that would probably work.

LD: There’s not much you can do at that point to combine the keys.

AC: There might be problems around if you want to combine some inputs but not other ones. I’m not sure. If you have inputs 1 and 2 are one signature and inputs 3 and 4 are a different signature because those are two different parties in a coinjoin. As I said requires more thought and engineering work to actually have a concrete proposal.

LD: I just don’t think it is as big a deal as SegWit would be.

Fabian Jahr (FJ): Schnorr signatures allow you to aggregate signatures locally for transactions that use Schnorr signatures, but to make the protocol more efficient you have to make very deep changes to the protocol, as Andrew has said. It is possible to do it but to utilize it in the protocol you have to change the protocol.

LD: My understanding is that even without the signature aggregation Taproot is still a significant improvement on the efficiency.

MF: It is more efficient batching wise. There is a StackExchange post on the difference between key aggregation and signature aggregation. With Schnorr and Taproot we have key aggregation but we don’t have cross input signature aggregation. Schemes like MuSig and MuSig2 allow us to have the equivalent of a multisig scheme with only one key and one signature going onchain. But the signature aggregation part is collecting a bunch of signatures and aggregating all those signatures into one. Rather than going to the chain with just a key and a signature having done the equivalent of a multisig scheme.
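To make the linearity being described here concrete, a simplified sketch of naive 2-of-2 Schnorr aggregation, where each signer i has key pair (x_i, P_i = x_i·G) and nonce R_i = k_i·G; written this way it is insecure against rogue key attacks, which is exactly the problem MuSig and MuSig2 solve:

```latex
% Naive 2-of-2 Schnorr key and signature aggregation (illustrative only).
\begin{align*}
P &= P_1 + P_2, \qquad R = R_1 + R_2, \qquad e = H(R \,\|\, P \,\|\, m) \\
s_i &= k_i + e \cdot x_i, \qquad s = s_1 + s_2 \\
\text{verify: } \; s \cdot G &= R + e \cdot P
\end{align*}
```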

MF: Anything else that Taproot offers us?

OP: It sounds like we are moving towards a Mimblewimble style protocol. The next step would be, if I understand the math, something like block cut through where you start to be able to remove history from the ledger thus saving disk space and bandwidth. Is that correct rationale and is it conceivable that would ever be interesting to the Bitcoin world?

AC: I haven’t heard or seen anything proposed for Bitcoin like that yet. I think a lot of the talk of cross input signature aggregation has been around making large transactions more efficient. You have a lot of inputs, now we can have just one signature instead of 20 signatures. Save on fees, save on block space so now we can have more transactions in a block.

MF: There is going to be a lot of discussions in terms of future soft forks. That is a whole another discussion. Cross input signature aggregation would definitely be a candidate and I suppose that is a stepping stone towards this potential Mimblewimble end goal. I think that’s a discussion for another day.

FJ: At least from today’s perspective it seems unrealistic that this is going to come. On the one hand we can cut off old blocks, do assumevalid or prune, which is a simpler solution to the space that you need on disk. In addition to that there is Utreexo which is another approach that also tackles how much you need to save on disk. There are other approaches that seem easier to do or that are further along but maybe we are thinking about it differently in 5 years or so. From today’s perspective it doesn’t seem the most likely scenario that we’ll do block cut through. If we do, it will be a big approach similar to what Utreexo is looking at for introducing it.

MF: That’s long term, many years down the line. We’ll get back to the original question. What do we get with Schnorr and Taproot. Why should an exchange, a wallet, a hardware wallet be interested in implementing this come November? What are they going to gain?

AC: There is actually one other thing that Schnorr signatures provide that I don’t see talked about as much and that is batch verification. If you have a tonne of Schnorr signatures you can verify them all in one step instead of n steps. This really improves block verification time. If we see a lot of Taproot inputs and Taproot being adopted, with blocks that contain a lot of Taproot transactions, a new node that is syncing and revalidating everything can validate these blocks really quickly because all the Schnorr signatures are verified at the same time in a single step instead of individually and sequentially. This is something else that Taproot provides. I haven’t seen much discussion of that before.

MF: I think it is on the order of 40 percent speedup in terms of batch verification?

AC: It is quite big. There used to be a graph in BIP 340 and I can’t find it. I think there was also a mistake in the original benchmark and it is actually faster than was originally stated but I can’t find my reference for this.
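For reference, the batch verification trick described in BIP 340 replaces checking each equation s_i·G = R_i + e_i·P_i separately with a single check of a random linear combination, using weights a_i (a_1 = 1, the rest chosen at random by the verifier so that invalid signatures cannot cancel each other out):

```latex
% Sketch of BIP 340 style batch verification over u signatures.
\Big(\sum_{i=1}^{u} a_i s_i\Big) G \;=\; \sum_{i=1}^{u} a_i R_i \;+\; \sum_{i=1}^{u} a_i e_i P_i
```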

MF: Batch verification is definitely great from a network and community perspective, bringing IBD time down is great. Wallets, exchanges, hardware wallets, Lightning: what benefits are they getting? Instantly with activation or once they do some work to implement some things?

LD: One thing I also haven’t heard talked about much is in theory wallets will be able to do a ring signature proof of funds. You can prove that you have money for a payment or whatever without showing the proposed recipient which funds on the chain are actually yours.

MF: Did Jonas Nick do something like that? I might have seen a tweet.

LD: I’m not sure if anyone is actually working on a BIP for that yet or not.

MF: That did look cool. Did he do it on Elements or Liquid or main chain?

AC: I think he did it on Signet and if I remember correctly it mostly looks like a cool party trick and not something that will be usable. I know when he did it he did it as a cool party trick and not something that would actually be used but perhaps someone will work out how to actually use it and build a proper protocol around it.

LD: My understanding is that this was one of the reasons that we weren’t hashing the keys.

MF: For this particular use case?

AC: I don’t think so. I think this came up afterwards. The not hashing the keys is a completely different conversation.

MF: I’ll try to summarize. Obviously MuSig is a big deal, eventually it will be a big deal for Lightning, it will mean with the 2-of-2 only one signature will go onchain rather than two. With threshold schemes like Liquid with its 11-of-15 it could potentially be putting one signature onchain rather than 11. These are massive space savings for multisig and threshold sig. That’s kind of longer term. With the Taproot tree we can have complex scripts and we can put them in the leaves of the Merkle tree. This is great for privacy because you only reveal one of the scripts that you are spending and also it is great for saving block space because rather than putting a really, really long script onchain all you need to do is prove that one of the single leaf scripts was within the tree. That will make complex scripts, combinations of multisig, timelocks etc much more feasible in a higher fee environment. That’s a bunch of stuff in terms of what Taproot offers us. This was a site that Jeremy put up. These are some of the projects that are planning to use Taproot. Either at the idea stage or further than that, actual implementations.

Lessons learnt from SegWit adoption

https://www.coindesk.com/one-year-later-whats-holding-back-segwit-adoption-on-bitcoin

MF: I thought we’d have a brief session if there are any interesting observations or thoughts on why SegWit adoption was so slow, any lessons from that period. I suspect because there’s not so much focus on altcoins and all this kind of stuff, there is at least a small number of exchanges and hardware wallets that are really interested in implementing the latest cutting edge things that come out from Core or that are available to use on the protocol. Hopefully it will be different this time. Are there any lessons from SegWit? Anything that we shouldn’t repeat this time when we try to get businesses, exchanges, open source projects to actually implement Taproot and offer the ability to send and receive to P2TR addresses.

LD: Aside from Lightning wallets there was no reason for wallets to receive with SegWit. That’s not quite the case with Taproot.

MF: There were fee savings though right with the discount?

LD: At the expense of the network. It was harming the network.

MF: It depends whether you are looking at it from a network, blockchain perspective. Luke takes the view that it allowed for bigger blocks and so longer IBD times. But in terms of a user that doesn’t care about the network and doesn’t care about the chain they did actually receive fee discounts if they used SegWit. That incentive or motivation to use SegWit doesn’t apply as much to Taproot. Perhaps it is going to be harder to get Taproot adoption than SegWit adoption.

AC: I think a lot of the delay in getting exchanges and other wallets to support SegWit was due to the complexity of SegWit. SegWit changed the transaction format, introduced a completely new address format and implementing all of this is a considerable effort. I think that is a significant factor in why SegWit’s rollout was so slow. But with Taproot the transaction format stays the same, the address format changes a little bit but it is almost identical to bech32. That doesn’t really change. A lot of code that was written for SegWit can be reused in Taproot. The Taproot things that need to be implemented are Schnorr signatures and the whole Taproot tree thing. That’s complex but I don’t think it is as complex as all the things that SegWit did. I think Taproot will probably have better adoption or faster rollout.

FJ: I think SegWit showed that the fee savings weren’t that big of an argument for exchanges that are putting most of the transactions onchain. I think that is not such a huge factor probably again since the effect is even smaller. But Taproot has this privacy effect. I can’t really give a prediction but it is going to be interesting to see. It is going to be easy to tell which projects go in that direction and implement it first. It is going to be interesting to see how much of a pull the privacy aspect is versus the fee aspect, which wasn’t that big for SegWit.

CR: I’d like to disagree with Andrew a little there. Having implemented both I suspect that I found Taproot the harder of the two to implement. As I was saying earlier the BIPs are quite dense, you have to refer to all 3 of them to understand what is going on. I think that unless we get quite a bit more entry level material around Taproot implementation it might be a while before we see a lot of other wallets folding it in.

MF: Luke says in the comments he thinks Taproot provides more benefits than SegWit so there is more incentive for switching to it. I would challenge that by saying if you are using multisig or complex scripts then there is greater incentive. There’s an Optech link on whether that incentive exists if you are just doing simple 1-of-1 pubkey spends.

LD: Better for the network and you get the privacy improvements either way. I think overall it is more incentive for more people to adopt Taproot than there was for SegWit. Even if the fee difference isn’t as big.

MF: I think your focus, which is good because you are a Core developer, is always on the network and making sure the network is as healthy as possible. But some users and exchanges obviously don’t have that perspective, they will be looking at this purely from a selfish “what does this give me?” perspective. We need both types of people. It is great we have people like Luke thinking like that.

MF: During SegWit there were tonnes of resources being produced. These are a bunch of resources Optech put together for bech32 which came in with SegWit in 2017. I think Craig’s point that there were a lot more resources to help people with SegWit is certainly a fair one. Hopefully we’ll get more and more resources as the weeks and months go by up until November.

Monitoring Taproot adoption

MF: This is a Bitcoin wiki. Murch is tracking which wallets, which implementations, hardware wallets, web wallets, exchanges, explorers. I think he is trying to contact them to see what their plans are and whether they’ll be supporting Taproot to some extent in November or perhaps later. We do have 0xB10C here. Let’s discuss your site. What it currently tracks, how you built it and what you plan to support going forward up until activation and beyond.

0xB10C: My site does support Taproot outputs and inputs, both key path spends and script path spends. Of course there aren’t any spends yet but there are some outputs as you can see on screen. I think half of them are from testing sending to witness version 1, bech32m addresses. There was a thread on the mailing list in the past year and I think half of these 6 outputs that exist to P2TR are from that. Some others are from other people experimenting. Of course these are at the moment anyone-can-spend. If you have connections to a mining pool these are spendable by supplying an empty witness.

MF: Are you planning to do the same graphs for signet and testnet? Hopefully we’ll see a bunch of people playing and experimenting with Taproot transactions in the run up to November.

0xB10C: I did look at signet and testnet. I was surprised… On signet Taproot has been active for its whole existence or at least since the release of Bitcoin Core with signet implemented. I think there were like 20 key path spends and 5 script path spends. So not much experimentation done on signet. There were 20 or so key path spends in the last week or so. Some people pay to Taproot outputs but not much experimentation done yet. If there is need for it and if people want it we can of course support a signet or testnet version of this site as well.

MF: If you are looking through all the different parties or projects in the space you’d want blockchain explorers to be supporting Taproot from activation. If you had to think about projects that would be a priority you’d want blockchain explorers, you’d want to be able to look up your Taproot transactions immediately from activation in November. Luke says in the chat block explorers are disinformation. I am sure Esplora will support it. I think Esplora supports signet and testnet Taproot transactions now and I’m sure it will do mainnet transactions in November. There is a conversation in the chat about the outputs that are currently on mainnet, the Taproot outputs. Luke asks “Are these trivially stealable?” and Andrew said “Yes they are trivially stealable. I’m surprised they haven’t been stolen yet.”

Tweet discussing the Taproot outputs on mainnet: https://twitter.com/RCasatta/status/1413049169745481730?s=20

0xB10C blog post on spending these Taproot outputs: https://b10c.me/blog/007-spending-p2tr-pre-activation/

AJ Towns tweet on those running a premature Taproot ruleset patch being forked off the network: https://twitter.com/ajtowns/status/1418555956439359490?s=20

MF: I did listen to your podcast Andrew with Stephan Livera that came out in the last couple of days. You said a lot of people have been trying with their wallets to send Bitcoin to mainnet Taproot addresses. Some of them failed, some of them succeeded and some of them just locked up their funds so they couldn’t even spend them with an anyone-can-spend. Do you know what happened there?

AC: There is a mailing list post from November where this testing was happening. I think Mike Schmidt had a big summary of every wallet that he tested and the result. There were some wallets that sent successfully. These used a bech32 address, not a bech32m. This was before bech32m was finalized. Some of them sent successfully and made SegWit v1 outputs, some of them failed to parse the address, some of them failed to make the transaction. They accepted the address but something else down the line failed and so the transaction wasn’t made. Some of them made a SegWit v0 output which means that the coins are now burnt. As we saw on the website there are some Taproot outputs out there and there are some that should have been Taproot outputs but aren’t.

MF: They sent them to bech32 Taproot addresses rather than bech32m Taproot addresses and the bech32 Taproot addresses can’t be spent. Is that why they are locked up forever?

AC: bech32 and bech32m, on the blockchain there is still a similar SegWit style scriptPubKey. It is just the display to the user that is different. It is that SegWit is v0 and Taproot is v1. So the address that it uses is supposed to make a v1 output but some wallets made a v0 output instead of v1. And that is incorrect. This was part of the motivation for switching to bech32m because several wallets were doing it wrong. It is ok to introduce another address format.

bech32 and bech32m addresses

MF: This is what a bech32 address looks like, bc1q…... The q stands for SegWit v0 and so a mainnet bech32m address will have that bc1p instead of the q but otherwise will look similar to that. Then obviously if you play around with signet and testnet, testnet recently activated a few days ago so Taproot is active on both testnet and signet and obviously regtest. The address will start tb1… for testnet and signet. There will still be that p in tb1p… because that stands for witness version 1. In terms of the first thing an exchange or a hardware wallet would think about doing in the run up to November is allowing the wallet to send to Taproot addresses, P2TR addresses. That would be the first step I think if we imagine ourselves to be an exchange or a wallet. Craig, was the first thing you did to add the ability to send to a P2TR address?

CR: Yes it was. That was the first thing that I built in and then I worked on the receiving part afterwards.

MF: So that’s the bech32 address, bech32m is pretty similar, just the checksum is different. This link is explaining the problems with bech32 and why we’ve gone to bech32m. That part shouldn’t be too hard for implementers.
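To make “just the checksum is different” concrete, here is a minimal sketch based on the reference code in BIP 173 and BIP 350 (reproduced from memory, so verify against the BIPs before relying on it); the polymod computation is identical and only the constant the checksum must equal changes:

```python
# Minimal sketch of the bech32 vs bech32m checksum difference.
BECH32_CONST = 1            # SegWit v0 addresses (bc1q...)
BECH32M_CONST = 0x2bc830a3  # witness v1+ addresses such as Taproot (bc1p...)

def bech32_polymod(values):
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((top >> i) & 1) else 0
    return chk

def hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def verify_checksum(hrp, data, const):
    # data is the address payload decoded to a list of 5-bit integers
    return bech32_polymod(hrp_expand(hrp) + data) == const
```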

Implementation gotcha when only needing key path

MF: I think this is one of the first gotchas. Andrew talked about this I think on the Stephan Livera podcast. If you think back to the diagram I had before with the key path and the script path. If you just want to use the key path and you don’t want a script path or vice versa, you just want the script path and not the key path, there is a potential gotcha here. This is explaining how to construct a P2TR address if I just want to use the key path. Andrew can you explain the potential gotcha here because the public key isn’t the one going into the descriptor. The guidance is to always tweak it. Even if you don’t actually want to use the script path the guidance is to tweak that key path public key by something.

AC: Taproot, the general construction is you have an internal key and then you have your script path tree. To make the Taproot output which is also the output pubkey you take the internal key and you tweak it with the Merkle root, something to that effect. But if you don’t have a script path what do you tweak it with? The naive thing would be you just put the internal key as the output key but that is not what the BIPs recommend. Instead they recommend that you tweak the internal key by the hash of itself. If you don’t read the BIPs carefully you might miss this, I think it is mentioned once. There is one sentence that says that. I definitely missed it when I was going through the BIP for reviewing the Taproot code. If you are doing key path spends you should do this tweaking, tweak the internal key with the hash of itself rather than putting the internal key as the output key of the Taproot output.

MF: Does that make sense to everyone?

CR: You used the word “should” there Andrew. Surely the requirement is “must”, or is that just best practice? You obviously won’t be able to meet the test vectors you put into BIP 86 unless you do that.

AC: BIP 86 says “must” so that you meet the test vectors. But BIP 341, I think it only says “should” because you can’t verify whether this was actually done. When my node receives your transaction I can’t tell whether you did this tweak of the internal key even after you spend it. Frankly for my node it doesn’t matter whether you did or did not. You should do this as the wallet software but it is not “must” because it can’t be enforced.

CR: Ok, thanks.

MF: It is guidance rather than literally this would be rejected by the consensus rules if you don’t do this. People can do it without the tweak, it is just recommended not to. Would that be correct wording?

AC: Yeah. Also BIP 86 says must because we need it for compatibility. Everyone who uses BIP 86 has to do exactly the same thing which means doing this tweak so that they can generate the same addresses. But if you’re not doing that for whatever reason you don’t have to but you probably should.
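A minimal sketch of the tweak being described, using the BIP 340/341 tagged hash construction; the final point addition needs a proper secp256k1 library, so it is only indicated in a comment here:

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP 340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

def key_path_only_tweak(internal_pubkey: bytes) -> int:
    # With no script tree there is no Merkle root, so per BIP 341/BIP 86 the
    # internal key is tweaked by the hash of itself rather than used raw.
    return int.from_bytes(tagged_hash("TapTweak", internal_pubkey), "big")

# The output key that goes into the scriptPubKey is then:
#   output_key = internal_key + tweak * G
# which needs secp256k1 point arithmetic (e.g. libsecp256k1 bindings).
```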

MF: [This] is the flip side. If you don’t want to use the key path and you just want to use a single script path or multiple script paths. You don’t want that key path option. The BIP instructs you to use a particular point, internal key, but you can change that key. Again Andrew are we in a similar space where this is the guidance, you should follow this guidance, but the consensus rules won’t reject your transaction if you do something slightly different?

AC: I don’t follow.

MF: This is eliminating the key path. In BIP 341 you have to pick an internal key with an unknown discrete logarithm to slot into the key path.

AC: If you don’t want the key path spend, because BIP 341 still requires an internal key to do the tweak against, your internal key needs to be a NUMS number, nothing up my sleeve number. It is just a pubkey that no one knows the private key for. It could be the hash of something, it could be a random number, it could be anything you want. It just needs to be something that no one knows the discrete log of or no one knows the private key of, in order to not have a key path spend.

MF: The problem that you’re concerned about here is there being a hidden… Assuming you don’t want a key path spend you want to make sure there isn’t a hidden key path spend. If you just want the key path and you don’t want the script path you want to ensure that there is not a hidden script path. Both of those are the challenges that you want to make sure you are ticking off.
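On the script-path-only side, a minimal sketch of one published NUMS construction: if I recall correctly BIP 341 suggests using the SHA256 of the uncompressed encoding of the generator point G as the internal key’s x coordinate, optionally blinded by adding r·G for a random r only you know, so outsiders cannot tell the key path is unusable:

```python
import hashlib

# Well known secp256k1 generator coordinates.
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

# "Nothing up my sleeve" x coordinate: nobody knows a private key for this
# point because it is derived from a hash rather than chosen.
uncompressed_g = b"\x04" + GX.to_bytes(32, "big") + GY.to_bytes(32, "big")
nums_x = hashlib.sha256(uncompressed_g).hex()
print(nums_x)  # candidate x-only internal key with no known discrete log
```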

Optech series on “Preparing for Taproot”

https://bitcoinops.org/en/preparing-for-taproot/

MF: Bitcoin Optech has been doing a series on preparing for Taproot. This is what we alluded to earlier, is Taproot worth it for single sig? Luke was saying it is because it brings the network benefit. For a selfish individual exchange, wallet or hardware wallet provider the fee savings benefit is pretty small. That small benefit accrues to the spender. There are some figures here that Optech have done comparing the vbyte size of Taproot with SegWit v0. This goes to the crux of what we were discussing, whether there is that incentive for an exchange or a wallet or a hardware wallet that is doing single sig. Is there that incentive for them from a selfish perspective to make sure that they are ready with P2TR address support in November?

The Bitcoin Core descriptor wallet

https://bitcoinops.org/en/preparing-for-taproot/#taproot-descriptors

MF: Let’s discuss descriptors. Andrew can tell us better than I can, the Core wallet took a long time to support SegWit back in 2017. Now Andrew has done the heavy lifting of moving the Core wallet to use descriptors instead. It now means the Core wallet will be supporting at least limited Taproot scripts in November. That’s due to Andrew’s hard work on descriptors. I think there are some other wallets, other hardware wallets using descriptors, I don’t know if it is common across the ecosystem and whether it is going to be tough to implement Taproot if it is not a descriptor wallet.

AC: For Core, since we are moving towards descriptors as the main way the wallet handles scriptPubKeys Pieter designed some descriptors for Taproot. You can use them in a descriptor wallet to get Taproot addresses. The main thing is because we have descriptor wallets it was fairly easy to do this. It was just adding a new descriptor for Taproot and then the existing descriptor wallet stuff could automatically handle the generation of Taproot addresses and the use of Taproot. This is in contrast to the legacy wallet which is the wallet before descriptor wallets. Those will not support Taproot and it is probably non-trivial to support Taproot in the legacy wallet. I haven’t looked at whether it can because we’ve decided we are not going to support Taproot with legacy wallets.

LD: Even the SegWit stuff in the legacy wallet was kind of a hack.

AC: Yeah, the SegWit stuff in the legacy wallet, it is not great. It definitely took a long time for that to be implemented, it was 3 or so versions after SegWit itself made it into a release.

MF: It is likely given that the descriptor wallet is the non-legacy part of the Core wallet that no one will work on supporting Taproot for the legacy wallet. You are not planning to do it and you don’t know if anyone else will.

AC: The legacy wallet will not have Taproot support. For the descriptor wallet, in 22.0 which is going to be released soon hopefully, there will be Taproot descriptor support and after Taproot activates Taproot descriptors can be imported into the descriptor wallet. For 23.0, the next major release, I expect that descriptor wallets will start making a Taproot descriptor by default. You won’t have to enable Taproot manually.

MF: Are there lots of other wallets using descriptors? Are they going to find it hard supporting Taproot if they don’t currently have descriptor wallets setup now?

AC: I have no idea. I don’t think supporting Taproot without descriptors will be terribly difficult. It all depends on how the wallet is structured. At least for Core the easiest thing for us is to use descriptors. I don’t know about other wallets.

MF: Craig, with implementing Taproot did you use the descriptor approach?

CR: Sparrow does support and use descriptors but it doesn’t directly influence the wallet itself since when it retrieves information it uses the Electrum server approach. This uses a hash of the scriptPubKey. You don’t really need to worry too much about what is in it. However I can say that for example libraries that Sparrow depends on, for instance the one that it uses to connect directly to Bitcoin Core which is the library BWT I believe uses the legacy wallet. That will need to upgrade to the descriptor approach to support Taproot. I can’t speak to other wallets like Specter desktop, I’m not sure what they currently use.

The descriptor BIPs

MF: Descriptors overdue for a BIP, Luke says. There is a bunch of descriptor BIPs.

AC: There is a PR open. Feel free to assign 7 numbers for me, thanks.

LD: I didn’t notice that yet, I’ll get to that.

MF: Most of the descriptor BIPs aren’t Taproot, most of them are just outlining how descriptors currently work without Taproot. One of the BIPs, bip-descriptors-tr is the Taproot descriptor BIP.

AC: There are 7 documents. One of them describes the general philosophy and shared expressions like keys and the checksum. The other 6 describe specific descriptors. There is one for non-SegWit things, pkh and sh. There is one for the multisig descriptors, multi and sortedmulti. There is one for SegWit things, wpkh and wsh. There is one for Taproot and the other two are legacy compatibility things, the combo descriptor and the raw and addr descriptors. The reason there is 7 of them is so that people can say “We support these BIP numbers” instead of saying “We support BIP X” and then have to spell out which descriptors individually. I think that will get really complicated versus giving a list of numbers.

MF: In terms of actual design decisions, most of these BIPs in the PR are outlining stuff that has already been agreed right? The new stuff is just the Taproot BIP.

AC: Descriptors has been discussed in the Bitcoin Core repo. I took that document and cut out pieces for each individual BIP here. It is basically just rewording documentation that has already existed and formatting it in the form of a BIP.

MF: I suppose this is getting onto what Core is supporting although there is an overlap between what the Core wallet supports and the Taproot descriptor BIP. The Taproot descriptor BIP supports the Taproot script as long as all the leaves are 1-of-1.

AC: The way that descriptors work is we specify things generically. “Here is a script expression and the script expression produces a script.” A Taproot descriptor can take key and script expressions and encapsulate those together as a Taproot scriptPubKey. Right now the only supported script expression that works inside of Taproot is normal pubkeys, pubkey OP_CHECKSIG type thing. There can be future script expressions that can be used inside of a Taproot context. Really the only other script expression we have that kind of makes sense is multi but we’ve decided to limit this to just be the old OP_CHECKMULTISIG method of multisig instead of overloading its meaning. The multi descriptor cannot be used inside of Taproot but there will probably be a different multi that uses OP_CHECKSIGADD.

MF: You can do the equivalent of a 1-of-n if you are doing 1-of-1 on all the leaves. But you can’t do a threshold because that would require either MuSig, Murch’s blog post on how to do a 2-of-3 with MuSig with 2-of-2 on the leaves, or CHECKSIGADD which would be a normal multisig but under the Taproot rules.

AC: We haven’t made descriptors for these yet. Once we do make such descriptors there will be yet another BIP for them. It will be another script expression that we say can be used inside of a Taproot descriptor.

LD: Does multi error if you try to use it inside Taproot?

AC: Yes. multi, I think we’ve limited to just top level or inside sh and wsh. At least in Core our descriptor parser will tell you you are doing something wrong if you try multi inside of tr.

MF: I have got a couple of links here. I was scouring the internet to see whether other people are using descriptors, whether other exchanges, wallets have been discussing supporting Taproot. I did see Bitcoin Development Kit is currently using descriptors which is promising for supporting Taproot in November. I am assuming they can follow a similar approach to what Andrew and the wallet devs have done in Core. This was a field report in Optech on what River are doing and a blog post. I did see the Bitcoin Development Kit has also opened an issue for supporting P2TR addresses in November. I don’t know if anyone on the call has been following some of these other projects, BDK is in Rust so I’m not sure if people like Andrew and Luke are following what they are doing.

Miniscript support

MF: The Miniscript part is going to be interesting. There is a rust-miniscript library and Pieter Wuille’s C++ Miniscript implementation. Miniscript will allow us to do generic scripts. As I said before we can do Taproot functionality as long as there are only 1-of-1 leaves in the script path, but currently with the Bitcoin Core wallet anyway we can’t have threshold signatures as a leaf. We also can’t have those complex scripts, a combination of different timelocks and multisigs, the long scripts that we were talking about that could fit into a leaf script. Perhaps we can talk a bit about Miniscript, any thoughts on Miniscript?
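As a rough illustration of the kind of generic script Miniscript targets (a made-up example, not something from this discussion; the syntax and compilation should be checked against the Miniscript documentation, and keys A and B are placeholders):

# A spending policy "key A at any time, or key B after 1008 blocks",
# its Miniscript expression, and the Bitcoin Script it compiles to.
policy     = "or(pk(A),and(pk(B),older(1008)))"
miniscript = "or_d(pk(A),and_v(v:pk(B),older(1008)))"
script_asm = "<A> OP_CHECKSIG OP_IFDUP OP_NOTIF <B> OP_CHECKSIGVERIFY <1008> OP_CHECKSEQUENCEVERIFY OP_ENDIF"
# Under Taproot each such branch could also live in its own leaf script
# instead of one combined script.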

AC: Miniscript is cool. I think there will be a renewed effort to review the PR for that and get it merged into Core. Right now the C++ implementation is kind of in limbo. Because Miniscript is part of Bitcoin what Pieter did was copy and paste several chunks of Bitcoin Core code out and use them in his C++ implementation. This means that over time they will start to diverge. It will be hard for one to be merged into the other. It would be better to get that into Core so it is more easily maintained. Then we can use it.

MF: There’s this discussion between Nicolas Dorier and Pieter Wuille on the work that still needs doing. As you said Andrew there is the Miniscript PR to Core but there is still work to do on that and it is a bit out of date; that needs addressing before it can get merged into Core. It doesn’t support Taproot and we’ll probably need to get the Miniscript PR merged into Core before getting Miniscript to support Taproot. Craig, you are not thinking about Miniscript? I don’t think anyone can really think about Miniscript until people like Pieter Wuille, Andrew Poelstra, Sanket have updated Miniscript to support Taproot. I think that’s the stepping stone we need to pass to be able to have lots of generic scripts in a Taproot tree. I don’t know if anyone is going to try to implement Taproot generic complex scripts before that Miniscript support. Any thoughts on that?

CR: I think in terms of user education, users are frankly only just starting to get their heads around multisig. Anything more advanced than that just feels like a few years off. That doesn’t mean we shouldn’t start working on it but as a wallet developer you have to allocate your time to where you think users are actually going to use the features that you build. I really love the idea of Miniscript but I can’t see many users actually using it yet because creating complex scripts to lock Bitcoin up and unlock it is not well understood.

MF: Sparrow is a multisig wallet. You haven’t done the multisig part, supporting the CHECKSIGADD opcode that you’d need to do for Taproot multisig?

CR: Yes, that’s correct. I’ve only done single key, key path spends thus far. I think there is quite a bit of thinking work still required on how to do interactive MuSig. How does that work if you have QR capable hardware wallets? It is just not something that I think is ready and easy to do today. Obviously there is not much point in having all of the keys in your multisig just coming out of the same wallet. That is something that I need to give more thought. I think the whole industry needs to start figuring out how we develop good UX around that.

Multisig and threshold sig Taproot support

MF: At least from what I’ve read in the discussions of some of the core devs, those stepping stones are going to be CHECKSIGADD support in Taproot descriptors and then MuSig support in either descriptors or Miniscript. We’ll have Taproot PSBT support but MuSig2 support does seem as if it is further down the line, 6-12 months. Any support for multisig in Taproot is likely to be using that CHECKSIGADD opcode rather than MuSig. This post, I referred to this earlier, this is how you get 2-of-3 using MuSig when we do have MuSig support. You’ll have a 2-of-2 on the key path and then two leaf scripts that are also 2-of-2. You can choose which 2 keys you want to sign and that gets you the equivalent of 2-of-3. That’s because MuSig just supports multisig and not threshold sig. There will be schemes like FROST that will support threshold sig even further down the line from MuSig. That seems a few stepping stones away so we can get a threshold scheme equivalent of MuSig. There are a few links there in terms of MuSig support in Elements. (Also https://github.com/ElementsProject/secp256k1-zkp/pull/131 and https://github.com/ElementsProject/scriptless-scripts/pull/24). The code is there if you want to play around with MuSig but it is not going to be supported in the descriptor spec or in Core for a while I suspect.
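A hedged sketch of the two routes to a 2-of-3 being discussed, with placeholder keys A, B and C (the exact script form and the MuSig layout should be checked against the respective references):

# (1) Script-path multisig with OP_CHECKSIGADD, available once wallets
#     support Taproot script paths; a single leaf script such as:
checksigadd_leaf = "<A> OP_CHECKSIG <B> OP_CHECKSIGADD <C> OP_CHECKSIGADD OP_2 OP_NUMEQUAL"
#
# (2) The MuSig layout from Murch's post, which needs MuSig support and is
#     further out:
#       key path : musig(A,B)   -- the expected pair of signers
#       leaf 1   : musig(A,C)
#       leaf 2   : musig(B,C)
#     Any 2 of the 3 keys can sign, and the happy path spends with a single
#     signature on the key path.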

Taproot’d Lightning

MF: Then I have some links on Lightning.

https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning/

https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html

https://github.com/ElementsProject/scriptless-scripts/blob/master/md/multi-hop-locks.md

MF: Does Sparrow support Lightning Craig?

CR: No it doesn’t at this time. That is still coming.

MF: Do you try to list priorities in terms of what would be cool for Sparrow? Where does Lightning support sit versus supporting paying to P2TR addresses or receiving to P2TR addresses or being one of the first to support MuSig? Is this a jumble in your head or are there certain things that you are really excited about implementing in Sparrow as soon as you possibly can?

CR: I think every wallet has its own particular focus. Sparrow is a desktop wallet, it focuses mainly on Bitcoin’s store of value usage. This is why we’ve got a strong multisig and hardware wallet focus. Lightning is certainly something I want to support. I still think there is some way to go before Lightning itself becomes mainstream. I think I have some time there. There are a lot of layer 1 features that I believe are still interesting and worth building out at this time. Many of us I’m sure expected fees to be higher at this point, the reality is that they aren’t. Fees are ultimately what is going to drive us towards layer 2. That situation can obviously change quite fast but it is not the case today. Today I still focus very much on layer 1 and making sure that Sparrow addresses the store of value need to the best extent that it can.

MF: Seeing how the fee environment develops, if fees were going through the roof perhaps that would completely change your priorities or things that would be high on your objectives. With fees low that changes where your priorities are.

FJ: On the adoption of Lightning, it is a very good fit, these teams are very technically advanced of course. I haven’t talked to anyone personally about this but my feeling is that they are very excited about it and have already started working on it probably. I would bet that c-lightning, lnd and eclair will support Taproot before the first exchange will have sent the first Taproot output. The privacy aspect of Lightning has always been a major argument for it and Taproot improves that greatly. I think it is a great fit and I would put money on this going to be the first wave of adoption of Taproot that we’ll see.

MF: I think, hopefully as Covid eases, they’ll have some Lightning in person protocol discussions and figure out what priorities there are across the protocol. I agree Fabian that you would expect MuSig to be rolled out first on Lightning and then perhaps later on point timelocked contracts (PTLCs) replacing HTLCs. There is a tonne of stuff that can be done on Lightning with Taproot, it is just a question of priorities for the protocol and for those individual implementations. There is a c-lightning PR for channel types. The approach that they are likely to take is to have different channel types. Perhaps a MuSig channel type for channels that are using MuSig and then upgrading to a new channel type that is perhaps a PTLC rather than a HTLC. There is definitely thought going on amongst the Lightning protocol devs and implementations. They probably need an in person protocol meeting or two to figure out what order to implement some of these Taproot benefits in. I suspect it will be P2TR addresses first, then MuSig support and then potentially very far down the line PTLCs rather than HTLCs. I think that was all my links. Any final thoughts?

MF: So Fabian, you’re a Core contributor have you been following all the Taproot stuff? Is anything complicated to you? Or are you on top of all of this Taproot stuff?

FJ: I try to review Andrew’s PRs. I organize the Socratics for Berlin, we talk about all the Taproot stuff there. Taproot is one of the main things that I’m excited about. I try to be helpful with the adoption of it, currently mainly through reviews.

MF: Are you optimistic for November that we’ll have lots of exchanges, businesses, hardware wallets, wallets, maybe even miners paying to Taproot addresses and receiving from them?

FJ: For November I am rather pessimistic especially looking at exchanges from the SegWit era. I think there is going to be slow adoption. As I mentioned Lightning, even though they have a longer way to go, there is going to be more effort there, more motivation, a lot of development power dedicated to it. That is where I expect the most action over the next couple of months in terms of Taproot adoption and discussion.

MF: Any final thoughts? Thanks Craig for coming along, it was great to get an outside of Core perspective trying to implement stuff. I think a few of us are focused on Core most of the time and don’t really know what it is like to implement this outside of Core.

CR: My biggest hope for Taproot in terms of what I hope it can bring is better privacy, particularly for multisig, and if we see something like lnd adopt it, perhaps even as the default, then we can really get the kind of large anon set that we would need for that to happen. That is where I’m personally hoping to be able to benefit apart from all the other things that it brings. Those are my thoughts.

MF: And more resources, I have linked to a few StackExchange posts but hopefully there will be tutorials and videos and things in the run up to November so that there are some resources to look at when trying to implement this stuff. Ok we’ll wrap up. Thanks a lot everyone for attending and fingers crossed that we have some progress to observe in the ecosystem in the run up to November.

+https://www.youtube.com/watch?v=GAkLuZNsZzw

Gist of resources discussed: https://gist.github.com/michaelfolkson/0803271754f851530fe8242087859254

This event was livestreamed on YouTube and so comments are attributed to specific individuals by the name or pseudonym that was visible on the livestream. If you were a participant and would like your comments to be anonymized please get in touch.

Intro

Michael Folkson (MF): Welcome to everyone on the call, welcome to anybody on YouTube. This is a Socratic Seminar on Taproot rollout or Taproot support post activation in November. If there are some developers on the call or people working on particular projects hopefully we’ll be able to discuss some of the challenges or gotchas in terms of rolling out Taproot support, gauging where we are at in terms of whether people are thinking about this for their developer roadmaps in their businesses, in their companies, on their open source projects. Everyone should know that activation is in November, block height 709632 and that is a few months away so there is no rush but it is a good time before the summer holidays to be discussing plans for rolling this out. We’ll start with intros. Who you are, name or pseudonym is fine, what project or company you are contributing to or what software you are using, and which projects and businesses you are interested in seeing support Taproot once November comes around.

Oscar Pacey (OP): I’m interested to catch up on the news, follow the format, get more accustomed to it for future meetings.

Alex Waltz (AW): Hi I’m Alex, I don’t work for any company, I just wanted to come in and listen. I hope to see the upgrade in more wallets even though I think that is not going to happen so fast.

MF: We’ll get into that later.

Aaron van Wirdum (AvW): I’m Aaron van Wirdum, I work for Bitcoin Magazine so I am tuning in to see what is going on over here.

Craig Raw (CR): Hi I’m Craig Raw, developer of Sparrow wallet. I have recently been integrating single key Taproot in. I guess I’m here on the call having done that work on my own. I’m interested to hear what the progress of the greater community is on it, get some feedback perhaps.

MF: Awesome. How was that process? Was it pretty easy to do? Or did it require looking up loads of stuff? Did you look at how Core has implemented it or just did it yourself?

CR: I pretty much followed the BIPs. I didn’t find them particularly easy to read. I also referred to the Optech Taproot workshop which is a little bit out of date. There is a PR that updates it to 0.21. I followed that but the workshop is not exactly what is in the BIPs either. It wasn’t the easiest process I must admit but I do think all the information is out there. It can be a little bit difficult to piece it all together.

MF: That is really good feedback, thanks Craig. Those Optech workshops are really good but yes there were a few done a year and a half ago so maybe there are a few details that are out of date or slight changes that weren’t implemented in the Optech workshops. Luke says you can propose changes to the BIPs to make them easier to work with. You mean the Taproot BIPs?

Luke Dashjr (LD): Yeah. He was saying he needed help besides what was in the BIPs. It might make sense to clarify what goes in the BIPs so that it is easier for other people to work with when they are in the same situation.

MF: What are your thoughts in terms of changing the BIPs, it is kind of dangerous… I suppose if it is guidance for users then it is not really a problem.

LD: The purpose of the BIPs is that people can read them and understand how to work with it. If it is not accomplishing that I don’t know what is missing, it sounds like something could be added to make it clearer, whatever was missing for him.

MF: Good point. Craig, after this if there is anything in the BIPs you don’t like perhaps consider opening a PR.

CR: It all depends on what background you’ve had coming into them. That’s obviously a difficult thing to assume everyone’s background reading them for the first time. I didn’t have a particular issue with the BIPs, they are quite dense but that doesn’t necessarily mean they’re bad. I just think we perhaps need more accompanying material that can assist with explaining them. Obviously as the support grows and the amount of code grows you have more reference points. There aren’t that many at this point.

MF: It is still early. In terms of SegWit time we are still mid 2017. I am always trying to think where we are in terms of SegWit and comparing to SegWit. I have a section later where we’ll talk about some lessons on why SegWit rollout and SegWit adoption were so slow. Andrew says in the chat the BIPs are pretty dense so it can be easy to miss things.

MF: This is the reading list that I put together. We did have Socratics last year discussing the design of BIP-Schnorr and BIP-Taproot, there are transcripts and videos from June, July last year. Anybody who is unclear on the details, some of that discussion might be helpful. Obviously there are BIPs and other resources. Hopefully there will be a lot more resources coming out to help people like Craig implement this.

Recap on miner signaling (activation at block height 709632, approximately mid November)

MF: I thought we’d recap the miner signaling. Activation in November, we had the taproot.watch site that was great for monitoring which mining pools were signaling, that seemed to get a lot of attention on social media etc. I was really impressed by some of the mining pools, pretty much everyone did a good job signaling pretty speedily and reaching that 90 percent threshold pretty fast. Special shoutouts to Slush Pool who were the first mining pool to signal and they put out a tonne of blog posts and podcasts discussing activation. I thought that was really impressive. Also Alejandro from Poolin did the rounds on podcasts discussing the challenges of a mining pool signaling. They had a few issues, they weren’t as fast off the tracks as Slush Pool but I thought he did a really great job discussing miner signaling and he had the taprootactivation.com site. Going into activation in November I hope this aspect of technical competence, staying up to date with the latest features being rolled out and user interest in what changes are available extends beyond miners to exchanges, wallets, hardware wallets etc. I hope that momentum continues because that would be really cool and give us a headstart getting Taproot rolled out across products, companies and the community. Then Pieter Wuille had this tweet thread where he said “This is just the beginning. The real work will be the ecosystem of wallets and other applications adopting changes to make use of Taproot.” This is just the beginning, even though we’ve got it activated and hopefully this call might play a small part in getting people thinking about and discussing it well in advance of November. Any thoughts on the miner signaling part?

AvW: I think an interesting point about miner signaling is a question of what is it for? There seemed to be different perspectives on what the actual purpose of miner signaling is. Is it purely a technical thing, is it purely something to trigger nodes into starting to enforce the rules or is it also some sort of social signal to users that there are going to be new rules. As an overall question I think there are different perspectives on what is miner signaling actually for ultimately.

Andrew Chow (AC): We use miner signaling as a coordination mechanism because it is easy to measure. It is hard to measure whether users are upgrading, it is hard to measure whether nodes are upgrading but it is really easy to measure whether miners seem to want the soft fork. That is why we use miner signaling at least as a first step because it is easy to measure. For further steps we haven’t really gotten there yet so that is up for debate.

LD: Miner signaling is rather the last step and it doesn’t really matter what miners want. That’s not its purpose for sure. The purpose is to have something concrete onchain that says “This chain has these rules” so the users that want it know to activate it and the users that don’t want it know they have to fork off.

MF: I think your point Luke is that miners and mining pools should have reviewed the BIP well in advance of signaling. If they are unhappy with it the time of signaling is not the point to raise that opposition or raise that criticism. It is well in advance of any activation.

AC: Luke’s point is more that miner signaling is the coordination mechanism but they should only be signaling when it is clear that the community wants the soft fork to activate.

LD: If that isn’t clear there shouldn’t be any signaling at all.

MF: There shouldn’t be any signaling at all if there is not consensus on the soft fork. But if the miners and mining pools are signaling, that means not only that there is consensus on the soft fork but also that they are signaling readiness for that soft fork to be activated. But it will be interesting, there seems to be a lot of churn and a lot of changes in the mining community so perhaps in November the mining pool breakdown will be totally different than it was when the signaling actually happened. I don’t know if there is anything to come of that. Hopefully it will go smoothly and miners will enforce the Taproot rules on that block and there won’t be any disruption.

What Taproot offers us and Taproot uses

MF: Let’s start with what does Taproot offer us?

AW: There was this whole thing with signature aggregation and cross input signature aggregation. I am still confused about that. I thought the whole thing with Schnorr is that you can add signatures. Doesn’t that mean aggregating? Can someone clarify what does it mean? What is signature aggregation, what is the other type of aggregation and why is that not here? How is it going to be added and what is it going to be used for?

AC: Signature aggregation is basically the ability to take multiple signatures and combine them into a single signature. Cross input signature aggregation would be to do that across the inputs in your transaction. If you have 5 single signature inputs, instead of having 5 signatures in your transaction you would have 1 signature. Taproot doesn’t implement this and I haven’t seen any concrete proposals to do this yet because this would require significant changes to the transaction format. But signature aggregation itself is something that Schnorr signatures provide because of the linearity properties of the signature. If you have a multisig inside a single input you could then do signature aggregation with something like MuSig where you also do key aggregation. If you have a 2-of-2 multisig instead of doing what we do now where you put 2 keys and 2 signatures, you would combine the 2 keys into 1 key and combine the 2 signatures into 1 signature. It becomes 1 key and 1 signature. That is signature aggregation in general. It is combining signatures. Key aggregation is combining keys. It gets more complicated from there.
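To make the linearity concrete, here is a toy sketch in Python. It is not MuSig (no key-prefixing coefficients, no nonce commitment round, so it is vulnerable to rogue-key attacks) and the group parameters are tiny and insecure; it is purely to show how two Schnorr-style keys and two partial signatures add up into one key and one signature that verify together.

import hashlib, random

p, q, g = 2039, 1019, 4          # toy group: p = 2q + 1, g generates the order-q subgroup

def H(*args):
    data = ":".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)                   # private key
    return x, pow(g, x, p)                       # public key P = g^x

def verify(P, m, R, s):
    e = H(R, P, m)
    return pow(g, s, p) == (R * pow(P, e, p)) % p

(x1, P1), (x2, P2) = keygen(), keygen()
m = "pay to bc1p..."
P = (P1 * P2) % p                                # aggregate public key: one key onchain
k1, k2 = random.randrange(1, q), random.randrange(1, q)
R = (pow(g, k1, p) * pow(g, k2, p)) % p          # aggregate nonce
e = H(R, P, m)                                   # shared challenge
s = (k1 + e * x1 + k2 + e * x2) % q              # partial signatures simply add up
assert verify(P, m, R, s)                        # verifies as one ordinary signature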

AW: The cross input one is just a meme?

AC: It is something that people want and it is something that we thought we could do without Taproot. It is something you can do with Schnorr signatures in general, signature aggregation. But it is not happening because it would require significantly more engineering work and more changes to the transaction format. I think people want the Taproot changes sooner rather than later. Doing cross input signature aggregation would put a several year long delay on that.

AW: This would also require a soft fork? How would this be done?

AC: It would require some kind of fork because you would need to change the transaction format. Maybe there is some trick we could do similar to SegWit where we add on more data to the transaction format. But this is something that still needs to be worked out. I don’t think there have been any solid technical proposals for this yet.

LD: There is no reason we couldn’t have a new witness version and just have the signature only in the very last one or the very first one?

AC: I think that would probably work.

LD: There’s not much you can do at that point to combine the keys.

AC: There might be problems around if you want to combine some inputs but not other ones. I’m not sure. If you have inputs 1 and 2 are one signature and inputs 3 and 4 are a different signature because those are two different parties in a coinjoin. As I said requires more thought and engineering work to actually have a concrete proposal.

LD: I just don’t think it is as big a deal as SegWit would be.

Fabian Jahr (FJ): Schnorr signatures allow you to locally aggregate the signatures of transactions that have Schnorr signatures, but to make the protocol more efficient you have to make very deep changes to the protocol as Andrew has said. It is possible to do it but to utilize it in the protocol you have to change the protocol.

LD: My understanding is that even without the signature aggregation Taproot is still a significant improvement on the efficiency.

MF: It is more efficient batching wise. There is a StackExchange post on the difference between key aggregation and signature aggregation. With Schnorr and Taproot we have key aggregation but we don’t have cross input signature aggregation. Schemes like MuSig and MuSig2 allow us to have the equivalent of a multisig scheme with only one key and one signature going onchain. But the signature aggregation part is collecting a bunch of signatures and aggregating all those signatures into one. Rather than going to the chain with just a key and a signature having done the equivalent of a multisig scheme.

MF: Anything else that Taproot offers us?

OP: It sounds like we are moving towards a Mimblewimble style protocol. The next step would be, if I understand the math, something like block cut through where you start to be able to remove history from the ledger thus saving disk space and bandwidth. Is that the correct rationale and is it conceivable that this would ever be interesting in the Bitcoin world?

AC: I haven’t heard or seen anything proposed for Bitcoin like that yet. I think a lot of the talk of cross input signature aggregation has been around making large transactions more efficient. You have a lot of inputs, now we can have just one signature instead of 20 signatures. Save on fees, save on block space so now we can have more transactions in a block.

MF: There is going to be a lot of discussions in terms of future soft forks. That is a whole another discussion. Cross input signature aggregation would definitely be a candidate and I suppose that is a stepping stone towards this potential Mimblewimble end goal. I think that’s a discussion for another day.

FJ: At least from today’s perspective it seems unrealistic that this is going to come. On the one hand we can cut off old blocks, do an assumevalid or prune which is another simpler solution to the space that you need on disk. In addition to that there is Utreexo which is another approach that also tackles how much you need to save on disk. There are other approaches that seem easier to do or that are further along but maybe we are thinking about it differently in 5 years or so. From today’s perspective it doesn’t seem the most likely scenario that we’ll do block cut through. If we do it will be a big undertaking, similar to what Utreexo is looking at in terms of introducing it.

MF: That’s long term, many years down the line. We’ll get back to the original question. What do we get with Schnorr and Taproot. Why should an exchange, a wallet, a hardware wallet be interested in implementing this come November? What are they going to gain?

AC: There is actually one other thing that Schnorr signatures provide that I don’t see talked about as much and that is batch verification. If you have a tonne of Schnorr signatures you can verify them all in one step instead of n steps. This really improves block verification time. If we see a lot of Taproot inputs and Taproot being adopted blocks that contain a lot of Taproot transactions, a new node that is syncing and revalidating everything they can validate these blocks really quickly because all the Schnorr signatures are verified at the same time in a single step instead of individually sequentially. This is something else that Taproot provides. I haven’t seen much discussion of that before.
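Roughly, the batch check from BIP 340 looks like the following, where the verifier sets $a_1 = 1$ and picks the remaining $a_i$ at random so the $n$ individual verifications collapse into one multi-scalar multiplication (see the BIP for the exact rules):

$$\Big(\sum_{i=1}^{n} a_i s_i\Big) \cdot G \;=\; \sum_{i=1}^{n} a_i R_i \;+\; \sum_{i=1}^{n} (a_i e_i) \cdot P_i$$

where $e_i$ is the BIP 340 challenge hash over $(R_i, P_i, m_i)$ for each signature.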

MF: I think it is on the order of 40 percent speedup in terms of batch verification?

AC: It is quite big. There used to be a graph in BIP 340 and I can’t find it. I think there was also a mistake in the original benchmark and it is actually faster than was originally stated but I can’t find my reference for this.

MF: Batch verification is definitely great from a network, community perspective; bringing IBD time down is great. Wallets, exchanges, hardware wallets, Lightning: what benefits are they getting? Instantly with activation or once they do some work to implement some things?

LD: One thing I also haven’t heard talked about much is in theory wallets will be able to do a ring signature proof of funds. You can prove that you have money for a payment or whatever without showing the proposed recipient which funds on the chain are actually yours.

MF: Did Jonas Nick do something like that? I might have seen a tweet.

LD: I’m not sure if anyone is actually working on a BIP for that yet or not.

MF: That did look cool. Did he do it on Elements or Liquid or main chain?

AC: I think he did it on Signet and if I remember correctly it mostly looks like a cool party trick rather than something that will be usable. I know when he did it he did it as a cool party trick and not something that would actually be used, but perhaps someone will work out how to actually use it and build a proper protocol around it.

LD: My understanding is that this was one of the reasons that we weren’t hashing the keys.

MF: For this particular use case?

AC: I don’t think so. I think this came up afterwards. The not hashing the keys is a completely different conversation.

MF: I’ll try to summarize. Obviously MuSig is a big deal, eventually it will be a big deal for Lightning, it will mean with the 2-of-2 only one signature will go onchain rather than two. With threshold schemes like Liquid with its 11-of-15 it could potentially be putting one signature onchain rather than 11. These are massive space savings for multisig and threshold sig. That’s kind of longer term. With the Taproot tree we can have complex scripts and we can put them in the leaves of the Merkle tree. This is great for privacy because you only reveal one of the scripts that you are spending and also it is great for saving block space because rather than putting a really, really long script onchain all you need to do is reveal one leaf script and prove that it was within the tree. That will make complex scripts, combinations of multisig, timelocks etc much more feasible in a higher fee environment. That’s a bunch of stuff in terms of what Taproot offers us. This was a site that Jeremy put up. These are some of the projects that are planning to use Taproot. Either at the idea stage or further than that, actual implementations.
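As a small illustration of the tree structure being described (a hedged sketch: the two leaf scripts are placeholders, only the hashing construction follows BIP 341), the Merkle root over two leaves can be computed like this, and a spender later reveals just one leaf script plus its sibling hash:

import hashlib

def tagged_hash(tag, msg):
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def tap_leaf(script, leaf_version=0xc0):
    # single-byte length prefix is fine for scripts under 253 bytes
    return tagged_hash("TapLeaf", bytes([leaf_version, len(script)]) + script)

def tap_branch(a, b):
    return tagged_hash("TapBranch", min(a, b) + max(a, b))

leaf_a = tap_leaf(bytes.fromhex("20" + "11" * 32 + "ac"))  # <key A> OP_CHECKSIG (placeholder key)
leaf_b = tap_leaf(bytes.fromhex("20" + "22" * 32 + "ac"))  # <key B> OP_CHECKSIG (placeholder key)
merkle_root = tap_branch(leaf_a, leaf_b)
# The output key is the internal key tweaked by
# tagged_hash("TapTweak", internal_key_xonly + merkle_root).
print(merkle_root.hex())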

Lessons learnt from SegWit adoption

https://www.coindesk.com/one-year-later-whats-holding-back-segwit-adoption-on-bitcoin

MF: I thought we’d have a brief session if there are any interesting observations or thoughts on why SegWit adoption was so slow, any lessons from that period. I suspect because there’s not so much focus on altcoins and all this kind of stuff, there is at least a small number of exchanges and hardware wallets that are really interested in implementing the latest cutting edge things that come out from Core or that are available to use on the protocol. Hopefully it will be different this time. Are there any lessons from SegWit? Anything that we shouldn’t repeat this time when we try to get businesses, exchanges, open source projects to actually implement Taproot and offer the ability to send and receive to P2TR addresses.

LD: Aside from Lightning wallets there was no reason for wallets to receive with SegWit. That’s not quite the case with Taproot.

MF: There were fee savings though right with the discount?

LD: At the expense of the network. It was harming the network.

MF: It depends whether you are looking at it from a network, blockchain perspective. Luke takes the view that it allowed for bigger blocks and so longer IBD times. But in terms of a user that doesn’t care about the network and doesn’t care about the chain they did actually receive fee discounts if they used SegWit. That incentive or motivation to use SegWit doesn’t apply as much to Taproot. Perhaps it is going to be harder to get Taproot adoption than SegWit adoption.

AC: I think a lot of the delay in getting exchanges and other wallets to support SegWit was due to the complexity of SegWit. SegWit changed the transaction format, introduced a completely new address format and implementing all of this is a considerable effort. I think that is a significant factor in why SegWit’s rollout was so slow. But with Taproot the transaction format stays the same, the address format changes a little bit but it is almost identical to bech32. That doesn’t really change. A lot of code that was written for SegWit can be reused in Taproot. The Taproot things that need to be implemented are Schnorr signatures and the whole Taproot tree thing. That’s complex but I don’t think it is as complex as all the things that SegWit did. I think Taproot will probably have better adoption or faster rollout.

FJ: I think SegWit showed that the fee savings weren’t that big of an argument for exchanges that are putting most of the transactions onchain. I think that is not such a huge factor probably, again since the effect is even smaller. But Taproot has this privacy effect. I can’t really give a prediction but it is going to be interesting to see. It is going to be easy to tell which projects go in that direction and implement it first. It is going to be interesting how much of a pull the privacy aspect is versus the fee aspect, which wasn’t that big for SegWit.

CR: I’d like to disagree with Andrew a little there. Having implemented both I suspect that I found Taproot the harder of the two to implement. As I was saying earlier the BIPs are quite dense, you have to refer to all 3 of them to understand what is going on. I think that unless we get quite a bit more entry level material around Taproot implementation it might be a while before we see a lot of other wallets folding it in.

MF: Luke says in the comments he thinks Taproot provides more benefits than SegWit so there is more incentive for switching to it. I would challenge that by saying if you are using multisig or complex scripts then there is greater incentive. There’s an Optech link on whether that incentive exists if you are just doing simple 1-of-1 pubkey spends.

LD: Better for the network and you get the privacy improvements either way. I think overall it is more incentive for more people to adopt Taproot than there was for SegWit. Even if the fee difference isn’t as big.

MF: I think your focus, which is good because you are a Core developer, is always on the network and making sure the network is as healthy as possible. But some users and exchanges obviously don’t have that perspective and will be purely looking at this from a selfish “what does this give me?” perspective. We need both types of people. It is great we have people like Luke thinking like that.

MF: During SegWit there were tonnes of resources being produced. These are a bunch of resources Optech put together for bech32 which was SegWit 2017. I think Craig’s point that there were a lot more resources to help people with SegWit is certainly a fair one. Hopefully we’ll get more and more resources as the weeks and months go by up until November.

Monitoring Taproot adoption

MF: This is a Bitcoin wiki. Murch is tracking which wallets, which implementations, hardware wallets, web wallets, exchanges, explorers. I think he is trying to contact them to see what their plans are and whether they’ll be supporting Taproot to some extent in November or perhaps later. We do have 0xB10C here. Let’s discuss your site. What it currently tracks, how you built it and what you plan to support going forward up until activation and beyond.

0xB10C: My site does support Taproot outputs and inputs, both key path spends and script path spends. Of course there aren’t any spends yet but there are some outputs as you can see on screen. I think half of them are from testing sending to witness version 1, bech32m addresses. There was a thread on the mailing list in the past year and I think half of these 6 outputs that exist to P2TR are from that. Some others are from other people experimenting. Of course these are at the moment anyone-can-spend. If you have connections to a mining pool these are spendable by supplying an empty witness.

MF: Are you planning to do the same graphs for signet and testnet? Hopefully we’ll see a bunch of people playing and experimenting with Taproot transactions in the run up to November.

0xB10C: I did look at signet and testnet. I was surprised… On signet Taproot has been active for its whole existence or at least since the release of Bitcoin Core with signet implemented. I think there were like 20 key path spends and 5 script path spends. So not much experimentation done on signet. There were 20 or so key path spends in the last week or so. Some people pay to Taproot outputs but not much experimentation done yet. If there is need for it and if people want it we can of course support a signet or testnet version of this site as well.

MF: If you are looking through all the different parties or projects in the space you’d want blockchain explorers to be supporting Taproot from activation. If you had to think about projects that would be a priority you’d want blockchain explorers, you’d want to be able to look up your Taproot transactions immediately from activation in November. Luke says in the chat block explorers are disinformation. I am sure Esplora will support it. I think Esplora supports signet and testnet Taproot transactions now and I’m sure it will do mainnet transactions in November. There is a conversation in the chat about the outputs that are currently on mainnet, the Taproot outputs. Luke asks “Are these trivially stealable?” and Andrew said “Yes they are trivially stealable. I’m surprised they haven’t been stolen yet.”

Tweet discussing the Taproot outputs on mainnet: https://twitter.com/RCasatta/status/1413049169745481730?s=20

0xB10C blog post on spending these Taproot outputs: https://b10c.me/blog/007-spending-p2tr-pre-activation/

AJ Towns tweet on those running a premature Taproot ruleset patch being forked off the network: https://twitter.com/ajtowns/status/1418555956439359490?s=20

MF: I did listen to your podcast Andrew with Stephan Livera that came out in the last couple of days. You said a lot of people have been trying with their wallets to send Bitcoin to mainnet Taproot addresses. Some of them failed, some of them succeeded and some of them just locked up their funds so they couldn’t even spend them with an anyone-can-spend. Do you know what happened there?

AC: There is a mailing list post from November where this testing was happening. I think Mike Schmidt had a big summary of every wallet that he tested and the result. There were some wallets that sent successfully. These used a bech32 address, not a bech32m. This was before bech32m was finalized. Some of them sent successfully and made SegWit v1 outputs, some of them failed to parse the address, some of them failed to make the transaction. They accepted the address but something else down the line failed and so the transaction wasn’t made. Some of them made a SegWit v0 output which means that the coins are now burnt. As we saw on the website there are some Taproot outputs out there and there are some that should have been Taproot outputs but aren’t.

MF: They sent them to bech32 Taproot addresses rather than bech32m Taproot addresses and the bech32 Taproot addresses can’t be spent. Is that why they are locked up forever?

AC: bech32 and bech32m, on the blockchain there is still a similar SegWit style scriptPubKey. It is just the display to the user that is different. It is that SegWit is v0 and Taproot is v1. So the address that it uses is supposed to make a v1 output but some wallets made a v0 output instead of v1. And that is incorrect. This was part of the motivation for switching to bech32m, because several wallets were doing it wrong, so it was okay to introduce another address format.

bech32 and bech32m addresses

MF: This is what a bech32 address looks like, bc1q… The q stands for SegWit v0 and so a mainnet bech32m address will have that bc1p instead of the q but otherwise will look similar to that. Then obviously if you play around with signet and testnet, Taproot recently activated on testnet a few days ago so Taproot is active on both testnet and signet and obviously regtest. The address will start tb1… for testnet and signet. There will still be that p in tb1p… because that stands for witness version 1. The first thing an exchange or a hardware wallet would think about doing in the run up to November is allowing the wallet to send to Taproot addresses, P2TR addresses. That would be the first step I think if we imagine ourselves to be an exchange or a wallet. Craig, was the first thing you did to add the ability to send to a P2TR address?

CR: Yes it was. That was the first thing that I built in and then I worked on the receiving part afterwards.

MF: So that’s the bech32 address, bech32m is pretty similar, just the checksum is different. This link is explaining the problems with bech32 and why we’ve gone to bech32m. That part shouldn’t be too hard for implementers.
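A minimal sketch of the address difference being described, assuming only the standard bech32 character set (the checksum constants are the ones from BIP 173 and BIP 350):

# The first data character after "bc1" encodes the witness version using the
# bech32 character set, which is why v0 addresses start bc1q and v1 (Taproot)
# addresses start bc1p.
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
assert CHARSET[0] == "q"   # witness version 0 -> P2WPKH / P2WSH
assert CHARSET[1] == "p"   # witness version 1 -> P2TR
# bech32 and bech32m differ only in the constant the six checksum characters
# must satisfy (1 for bech32, 0x2bc830a3 for bech32m), which is why a v1
# address encoded with the old checksum fails bech32m validation.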

Implementation gotcha when only needing key path

MF: I think this is one of the first gotchas. Andrew talked about this I think on the Stephan Livera podcast. If you think back to the diagram I had before with the key path and the script path. If you just want to use the key path and you don’t want a script path or vice versa, you just want the script path and not the key path, there is a potential gotcha here. This is explaining how to construct a P2TR address if I just want to use the key path. Andrew can you explain the potential gotcha here because the public key isn’t the one going into the descriptor. The guidance is to always tweak it. Even if you don’t actually want to use the script path the guidance is to tweak that key path public key by something.

AC: Taproot, the general construction is you have an internal key and then you have your script path tree. To make the Taproot output which is also the output pubkey you take the internal key and you tweak it with the Merkle root, something to that effect. But if you don’t have a script path what do you tweak it with? The naive thing would be you just put the internal key as the output key but that is not what the BIPs recommend. Instead they recommend that you tweak the internal key by the hash of itself. If you don’t read the BIPs carefully you might miss this, I think it is mentioned once. There is one sentence that says that. I definitely missed it when I was going through the BIP for reviewing the Taproot code. If you are doing key path spends you should do this tweaking, tweak the internal key with the hash of itself rather than putting the internal key as the output key of the Taproot output.
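A rough sketch of the tweak being described, using Python’s hashlib for the tagged hash. The internal key below is a placeholder, and the final point addition needs an EC library so it is only indicated in a comment; BIP 341 and BIP 86 are the normative references.

import hashlib

def tagged_hash(tag, msg):
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

internal_key = bytes.fromhex("11" * 32)   # placeholder x-only internal pubkey P
merkle_root = b""                         # key-path-only: no script tree at all
t = tagged_hash("TapTweak", internal_key + merkle_root)
# Key-path-only case (BIP 86): tweak by the hash of the key itself,
# t = H_TapTweak(P); with a script tree it is t = H_TapTweak(P || merkle_root).
# The output key is then Q = P + t*G, rather than naively using P directly.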

MF: Does that make sense to everyone?

CR: You used the word “should” there Andrew. Surely the requirement is “must” or is that just best practice. You obviously won’t be able to meet the test vectors you put into BIP 86 unless you do that.

AC: BIP 86 says “must” so that you meet the test vectors. But BIP 341, I think it only says “should” because you can’t verify whether this was actually done. When my node receives your transaction I can’t tell whether you did this tweak of the internal key even after you spend it. Frankly for my node it doesn’t matter whether you did or did not. You should do this as the wallet software but it is not “must” because it can’t be enforced.

CR: Ok, thanks.

MF: It is guidance rather than literally this would be rejected by the consensus rules if you don’t do this. People can do it without the tweak, it is just recommended not to. Would that be correct wording?

AC: Yeah. Also BIP 86 says must because we need it for compatibility. Everyone who uses BIP 86 has to do exactly the same thing which means doing this tweak so that they can generate the same addresses. But if you’re not doing that for whatever reason you don’t have to but you probably should.

MF: [This] is the flip side. If you don’t want to use the key path and you just want to use a single script path or multiple script paths. You don’t want that key path option. The BIP instructs you to use a particular point, internal key, but you can change that key. Again Andrew are we in a similar space where this is the guidance, you should follow this guidance, but the consensus rules won’t reject your transaction if you do something slightly different?

AC: I don’t follow.

MF: This is eliminating the key path. In BIP 341 you have to pick an internal key with an unknown discrete logarithm to slot into the key path.

AC: If you don’t want the key path spend, because BIP 341 still requires an internal key to do the tweak against, your internal key needs to be a NUMS number, nothing up my sleeve number. It is just a pubkey that no one knows the private key for. It could be the hash of something, it could be a random number, it could be anything you want. It just needs to be something that no one knows the discrete log of or no one knows the private key of, in order to not have a key path spend.
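One hedged sketch of how a wallet might derive such a nothing-up-my-sleeve internal key: hash a public, arbitrary string to get a candidate x-coordinate, retrying until it lies on the curve. The curve check itself needs an EC library and is only indicated here; BIP 341 also suggests a specific NUMS point derived from the generator’s encoding.

import hashlib

seed = b"this string is public and arbitrary"
counter = 0
candidate_x = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
# If lift_x(candidate_x) is not a valid curve point, increment counter and retry.
# Because the x-coordinate comes from a hash of public data, nobody can
# plausibly know a private key for it, so only the script path(s) can spend.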

MF: The problem that you’re concerned about here is there being a hidden… Assuming you don’t want a key path spend you want to make sure there isn’t a hidden key path spend. If you just want the key path and you don’t want the script path you want to ensure that there is not a hidden script path. Both of those are the challenges that you want to make sure you are ticking off.

Optech series on “Preparing for Taproot”

https://bitcoinops.org/en/preparing-for-taproot/

MF: Bitcoin Optech has been doing a series on preparing for Taproot. This is what we alluded to earlier, is Taproot worth it for single sig? Luke was saying it is because it brings the network benefit. From a selfish individual exchange or individual wallet, hardware wallet provider the benefit is pretty small for fee savings. That small benefit accrues to the spender. There are some figures here that Optech have done comparing the vbyte size of Taproot with SegWit v0. This is getting to the crux of what we were discussing, whether there is that incentive for an exchange or a wallet or a hardware wallet that is doing single sig. Is there that incentive for them from a selfish perspective to make sure that they are ready with P2TR address support in November?
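For a rough sense of the single sig numbers being referenced (approximate back-of-the-envelope figures assuming a 72-byte ECDSA signature and ignoring shared transaction overhead; the Optech comparison is the authoritative one):

# vbytes = non-witness bytes + witness bytes / 4
p2wpkh_in  = 36 + 1 + 4 + (1 + 1 + 72 + 1 + 33) / 4   # outpoint + scriptSig len + sequence + witness
p2tr_in    = 36 + 1 + 4 + (1 + 1 + 64) / 4            # Schnorr signature is a fixed 64 bytes
p2wpkh_out = 8 + 1 + 22                               # value + len + v0 20-byte program
p2tr_out   = 8 + 1 + 34                               # value + len + v1 32-byte program
print(p2wpkh_in, p2tr_in)    # ~68.0 vs ~57.5 vbytes: spending a P2TR output is cheaper
print(p2wpkh_out, p2tr_out)  # 31 vs 43 vbytes: the P2TR output itself is larger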

The Bitcoin Core descriptor wallet

https://bitcoinops.org/en/preparing-for-taproot/#taproot-descriptors

MF: Let’s discuss descriptors. Andrew can tell us better than I can, the Core wallet took a long time to support SegWit back in 2017. Now Andrew has done the heavy lifting of moving the Core wallet to use descriptors instead. It now means the Core wallet will be supporting at least limited Taproot scripts in November. That’s due to Andrew’s hard work on descriptors. I think there are some other wallets, other hardware wallets using descriptors, I don’t know if it is common across the ecosystem and whether it is going to be tough to implement Taproot if it is not a descriptor wallet.

AC: For Core, since we are moving towards descriptors as the main way the wallet handles scriptPubKeys Pieter designed some descriptors for Taproot. You can use them in a descriptor wallet to get Taproot addresses. The main thing is because we have descriptor wallets it was fairly easy to do this. It was just adding a new descriptor for Taproot and then the existing descriptor wallet stuff could automatically handle the generation of Taproot addresses and the use of Taproot. This is in contrast to the legacy wallet which is the wallet before descriptor wallets. Those will not support Taproot and it is probably non-trivial to support Taproot in the legacy wallet. I haven’t looked at whether it can because we’ve decided we are not going to support Taproot with legacy wallets.

LD: Even the SegWit stuff in the legacy wallet was kind of a hack.

AC: Yeah, the SegWit stuff in the legacy wallet, it is not great. It definitely took a long time for that to be implemented, it was 3 or so versions after SegWit itself made it into a release.

MF: It is likely given that the descriptor wallet is the non-legacy part of the Core wallet that no one will work on supporting Taproot for the legacy wallet. You are not planning to do it and you don’t know if anyone else will.

AC: The legacy wallet will not have Taproot support. For the descriptor wallet, in 22.0 which is going to be released soon hopefully, there will be Taproot descriptor support and after Taproot activates Taproot descriptors can be imported into the descriptor wallet. For 23.0, the next major release, I expect that descriptor wallets will start making a Taproot descriptor by default. You won’t have to enable Taproot manually.

MF: Are there lots of other wallets using descriptors? Are they going to find it hard supporting Taproot if they don’t currently have descriptor wallets setup now?

AC: I have no idea. I don’t think supporting Taproot without descriptors will be terribly difficult. It all depends on how the wallet is structured. At least for Core the easiest thing for us is to use descriptors. I don’t know about other wallets.

MF: Craig, with implementing Taproot did you use the descriptor approach?

CR: Sparrow does support and use descriptors but it doesn’t directly influence the wallet itself since when it retrieves information it uses the Electrum server approach. This uses a hash of the scriptPubKey. You don’t really need to worry too much about what is in it. However I can say that for example libraries that Sparrow depends on, for instance the one that it uses to connect directly to Bitcoin Core which is the library BWT I believe uses the legacy wallet. That will need to upgrade to the descriptor approach to support Taproot. I can’t speak to other wallets like Specter desktop, I’m not sure what they currently use.

The descriptor BIPs

MF: Descriptors overdue for a BIP, Luke says. There is a bunch of descriptor BIPs.

AC: There is a PR open. Feel free to assign 7 numbers for me, thanks.

LD: I didn’t notice that yet, I’ll get to that.

MF: Most of the descriptor BIPs aren’t Taproot, most of them are just outlining how descriptors currently work without Taproot. One of the BIPs, bip-descriptors-tr is the Taproot descriptor BIP.

AC: There are 7 documents. One of them describes the general philosophy and shared expressions like keys and the checksum. The other 6 describe specific descriptors. There is one for non-SegWit things, pkh and sh. There is one for the multisig descriptors, multi and sortedmulti. There is one for SegWit things, wpkh and wsh. There is one for Taproot and the other two are legacy compatibility things, the combo descriptor and the raw and addr descriptors. The reason there is 7 of them is so that people can say “We support these BIP numbers” instead of saying “We support BIP X” and then have to spell out which descriptors individually. I think that will get really complicated versus giving a list of numbers.

MF: In terms of actual design decisions, most of these BIPs in the PR are outlining stuff that has already been agreed right? The new stuff is just the Taproot BIP.

AC: Descriptors has been discussed in the Bitcoin Core repo. I took that document and cut out pieces for each individual BIP here. It is basically just rewording documentation that has already existed and formatting it in the form of a BIP.

MF: I suppose this is getting onto what Core is supporting although there is an overlap between what the Core wallet supports and the Taproot descriptor BIP. The Taproot descriptor BIP supports the Taproot script as long as all the leaves are 1-of-1.

AC: The way that descriptors work is we specify things generically. “Here is a script expression and the script expression produces a script.” A Taproot descriptor can take key and script expressions and encapsulate those together as a Taproot scriptPubKey. Right now the only supported script expression that works inside of Taproot is normal pubkeys, pubkey OP_CHECKSIG type thing. There can be future script expressions that can be used inside of a Taproot context. Really the only other script expression we have that kind of makes sense is multi but we’ve decided to limit this to just be the old OP_CHECKMULTISIG method of multisig instead of overloading its meaning. The multi descriptor cannot be used inside of Taproot but there will probably be a different multi that uses OP_CHECKSIGADD.

MF: You can do the equivalent of a 1-of-n if you are doing 1-of-1 on all the leaves. But you can’t do a threshold because that would require either MuSig (Murch has a blog post on how to do a 2-of-3 with MuSig using 2-of-2 on the leaves) or CHECKSIGADD, which would be a normal multisig but under the Taproot rules.
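As a rough sketch of that 1-of-n construction (placeholder keys, checksum omitted; the nested-brace tree syntax is the Taproot descriptor syntax discussed above):

# 1-of-3 built from single-key leaves: any one leaf key can spend via its own
# script path, and the internal key can spend via the key path.
one_of_three = "tr(internal_key,{pk(keyA),{pk(keyB),pk(keyC)}})"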

AC: We haven’t made descriptors for these yet. Once we do make such descriptors there will be yet another BIP for them. It will be another script expression that we say can be used inside of a Taproot descriptor.

LD: Does multi error if you put it inside Taproot?

AC: Yes. multi, I think we’ve limited to just top level or inside sh and wsh. At least in Core our descriptor parser will tell you you are doing something wrong if you try multi inside of tr.

MF: I have got a couple of links here. I was scouring the internet to see whether other people are using descriptors, whether other exchanges, wallets have been discussing supporting Taproot. I did see Bitcoin Development Kit is currently using descriptors which is promising for supporting Taproot in November. I am assuming they can follow a similar approach to what Andrew and the wallet devs have done in Core. This was a field report in Optech on what River are doing and a blog post. I did see the Bitcoin Development Kit has also opened an issue for supporting P2TR addresses in November. I don’t know if anyone on the call has been following some of these other projects, BDK is in Rust so I’m not sure if people like Andrew and Luke are following what they are doing.

Miniscript support

MF: The Miniscript part is going to be interesting. There is a rust-miniscript library and Pieter Wuille’s C++ Miniscript implementation. Miniscript will allow us to do generic scripts. As I said before, we can do Taproot functionality as long as there are only 1-of-1 leaves on the script path, but currently with the Bitcoin Core wallet anyway we can’t have threshold signatures as a leaf. We also can’t have those complex scripts, a combination of different timelocks and multisigs, the long scripts that we were talking about that could fit into a leaf script. Perhaps we can talk a bit about Miniscript, any thoughts on Miniscript?

AC: Miniscript is cool. I think there will be a renewed effort to review the PR for that and get it merged into Core. Right now the C++ implementation is kind of in limbo. Because Miniscript is part of Bitcoin what Pieter did was copy and paste several chunks of Bitcoin Core code out and use it in his C++ implementation. This means that over time they will start to diverge. It will be hard for one to be merged into the other. It would be better to get that into Core so it is more easily maintained. Then we can use it.

MF: There’s this discussion between Nicolas Dorier and Pieter Wuille on the work that still needs doing. As you said Andrew there is the Miniscript PR to Core but there is still work to do on that and it is a bit out of date; that needs addressing before it can get merged into Core. It doesn’t support Taproot and we’ll probably need to get the Miniscript PR merged into Core before getting Miniscript to support Taproot. Craig, you are not thinking about Miniscript? I don’t think anyone can really think about Miniscript until people like Pieter Wuille, Andrew Poelstra, Sanket have updated Miniscript to support Taproot. I think that’s the stepping stone we need to pass to be able to have lots of generic scripts in a Taproot tree. I don’t know if anyone is going to try to implement Taproot generic complex scripts before that Miniscript support. Any thoughts on that?

CR: In terms of user education, users frankly are only just starting to get their heads around multisig. Anything more advanced than that just feels like a few years off. That doesn’t mean we shouldn’t start working on it but as a wallet developer you have to allocate your time to where you think users are actually going to use the features that you build. I really love the idea of Miniscript but I can’t see many users actually using it yet because creating complex scripts to lock bitcoin up and unlock it is not yet well understood.

MF: Sparrow is a multisig wallet. You haven’t done the multisig part, supporting the CHECKSIGADD opcode that you’d need to do for Taproot multisig?

CR: Yes, that’s correct. I’ve only done single key, key path spends thus far. I think there is quite a bit of thinking work still required on how to do interactive MuSig. How does that work if you have QR capable hardware wallets? It is just not something that I think is ready and easy to do today. Obviously there is not much point in having all of the keys in your multisig just coming out of the same wallet. That is something that I need to give more thought. I think the whole industry needs to start figuring out how we develop good UX around that.

Multisig and threshold sig Taproot support

MF: At least from what I’ve read in the discussions of some of the core devs, those stepping stones are going to be CHECKSIGADD support in Taproot descriptors and then MuSig support in either descriptors or Miniscript. We’ll have Taproot PSBT support but MuSig2 support does seem as if it is further down the line, 6-12 months. Any support for multisig in Taproot is likely to be using that CHECKSIGADD opcode rather than MuSig. This post, I referred to this earlier, this is how you get 2-of-3 using MuSig when we do have MuSig support. You’ll have a 2-of-2 on the key path and then two leaf scripts that are also 2-of-2. You can choose which 2 keys you want to sign and that gets you the equivalent of 2-of-3. That’s because MuSig just supports multisig and not threshold sig. There will be schemes like FROST that will support threshold sig even further down the line from MuSig. That seems a few stepping stones away so we can get a threshold scheme equivalent of MuSig. There are a few links there in terms of MuSig support in Elements. (Also https://github.com/ElementsProject/secp256k1-zkp/pull/131 and https://github.com/ElementsProject/scriptless-scripts/pull/24). The code is there if you want to play around with MuSig but it is not going to be supported in the descriptor spec or in Core for a while I suspect.
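The Murch construction referenced above can be sketched roughly as below. This is hypothetical notation only: a musig() key expression does not exist in the descriptor BIPs or in Core today, so it is illustrative rather than something you can import.

# 2-of-3 over keys A, B, C using MuSig 2-of-2 aggregates, as described above:
# the key path is the aggregate of A and B, and each leaf covers one of the
# remaining pairs, so any two of the three keys can cooperate to spend.
two_of_three = "tr(musig(keyA,keyB),{pk(musig(keyA,keyC)),pk(musig(keyB,keyC))})"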

Taproot’d Lightning

MF: Then I have some links on Lightning.

https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning/

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html

https://github.com/ElementsProject/scriptless-scripts/blob/master/md/multi-hop-locks.md

MF: Does Sparrow support Lightning Craig?

CR: No it doesn’t at this time. That is still coming.

MF: Do you try to list priorities in terms of what would be cool for Sparrow? Where does Lightning support sit versus supporting paying to P2TR addresses or receiving to P2TR addresses or being one of the first to support MuSig? Is this a jumble in your head or are there certain things that you are really excited about implementing in Sparrow as soon as you possibly can?

CR: I think every wallet has its own particular focus. Sparrow is a desktop wallet, it focuses mainly on Bitcoin’s store of value usage. This is why we’ve got a strong multisig and hardware wallet focus. Lightning is certainly something I want to support. I still think there is some way to go before Lightning itself becomes mainstream. I think I have some time there. There are a lot of layer 1 features that I believe are still interesting and worth building out at this time. Many of us I’m sure expected fees to be higher at this point, the reality is that they aren’t. Fees are ultimately what is going to drive us towards layer 2. That situation can obviously change quite fast but it is not the case today. Today I still focus very much on layer 1 and making sure that Sparrow addresses the store of value need to the best extent that it can.

MF: Seeing how the fee environment develops, if fees were going through the roof perhaps that would completely change your priorities or things that would be high on your objectives. With fees low that changes where your priorities are.

FJ: On the adoption of Lightning, it is a very good fit, these teams are very technically advanced of course. I haven’t talked to anyone personally about this but my feeling is that they are very excited about it and have already started working on it probably. I would bet that c-lightning, lnd and eclair will support Taproot before the first exchange will have sent the first Taproot output. The privacy aspect of Lightning has always been a major argument for it and Taproot improves that greatly. I think it is a great fit and I would put money on this going to be the first wave of adoption of Taproot that we’ll see.

MF: I think, hopefully with Covid, they’ll have some Lightning in person protocol discussions and figure out what priorities there are across the protocol. I agree Fabian that you would expect MuSig to be rolled out first on Lightning and then perhaps later on point timelocked contracts (PTLCs) replacing HTLCs. There is a tonne of stuff that can be done on Lightning with Taproot, it is just a question of priorities for the protocol and for those individual implementations. There is a c-lightning PR for channel types. The approach that they are likely to take is to have different channel types. Perhaps a MuSig channel type for channels that are using MuSig and then upgrading to a new channel type that is perhaps a PTLC rather than a HTLC. There is definitely thought going on amongst the Lightning protocol devs and implementations. They probably need an in person protocol meeting or two to figure out what order to implement some of these Taproot benefits in. I suspect it will be P2TR addresses first, then MuSig support and then potentially very far down the line PTLCs rather than HTLCs. I think that was all my links. Any final thoughts?

MF: So Fabian, you’re a Core contributor have you been following all the Taproot stuff? Is anything complicated to you? Or are you on top of all of this Taproot stuff?

FJ: I try to review Andrew’s PRs. I organize the Socratics for Berlin, we talk about all the Taproot stuff there. Taproot is one of the main things that I’m excited about. I try to be helpful with the adoption of it, currently mainly through reviews.

MF: Are you optimistic for November that we’ll have lots of exchanges, businesses, hardware wallets, wallets, maybe even miners paying to Taproot addresses and receiving from them?

FJ: For November I am rather pessimistic especially looking at exchanges from the SegWit era. I think there is going to be slow adoption. As I mentioned Lightning, even though they have a longer way to go, there is going to be more effort there, more motivation, a lot of development power dedicated to it. That is where I expect the most action over the next couple of months in terms of Taproot adoption and discussion.

MF: Any final thoughts? Thanks Craig for coming along, it was great to get an outside of Core perspective trying to implement stuff. I think a few of us are focused on Core most of the time and don’t really know what it is like to implement this outside of Core.

CR: My biggest hope for Taproot in terms of what I hope it can bring is better privacy, particularly for multisig, and if we see something like lnd adopt it, perhaps even as the default, then we can really get the kind of large anon set that we would need for that to happen. That is where I’m personally hoping to be able to benefit apart from all the other things that it brings. Those are my thoughts.

MF: And more resources, I have linked to a few StackExchange posts but hopefully there will be tutorials and videos and things in the run up to November so that there are some resources to look at when trying to implement this stuff. Ok we’ll wrap up. Thanks a lot everyone for attending and fingers crossed that we have some progress to observe in the ecosystem in the run up to November.


Socratic Seminar - Discreet Log Contracts

Date: August 10, 2021

Transcript By: Michael Folkson

Tags: Dlc, Lightning, Adaptor signatures

Category: Meetup

Media: https://www.youtube.com/watch?v=h6SkBwOCFsA

Gist of resources discussed: https://gist.github.com/michaelfolkson/f5da6774c24f99dba5c6c16ec8d499e9

This event was livestreamed on YouTube and so comments are attributed to specific individuals by the name or pseudonym that was visible on the livestream. If you were a participant and would like your comments to be anonymized please get in touch.

Intro

Michael Folkson (MF): Welcome to everyone on the call, welcome to people watching on YouTube. This is a Socratic Seminar, we’ve got a Zoom call and it is being livestreamed on YouTube. Anybody watching on YouTube, feel free to comment and ask questions. There is a reading list that I’ll share the link to and that gives us some structure for the discussion; we don’t have to stick to it but we’ll see where it takes us. Let’s do intros first. We’ll start with the Suredbits guys.

Chris Stewart (CS): I’m Chris Stewart, I’m the Founder of Suredbits. We do discreet log contracts.

Ben Carman (BC): I’m Ben, been at Suredbits for almost 2 years doing Bitcoin development stuff and DLCs.

Patrick (P): I’m non-technical, just an enthusiast and here to soak up some knowledge.

Roman (R): I also work at Suredbits working on DLCs.

Basic concept of a DLC

MF: Let’s start with the basic concept. As we were saying before there are tonnes of resources out there and I don’t want to just redo some of the conversations and interviews that have already been had. But I feel we have to go over the basic concept first before we go into the advanced stuff. What is a DLC?

P: It is a way to have an oracle system I guess via the Lightning Network?

CS: I think we should publicly shame him for not being an expert in 20 minutes of watching videos on YouTube (joke). How do you not know everything?

MF: From your watching of the videos is there anything that makes you interested in DLCs? Is this an interesting concept to you?

P: Yeah. I’ve been looking into the different things you can do… The other day I bumped into secure multiparty computation and that blew my mind. I don’t know what those things really mean but I’m trying to understand it a bit more. This seems like another one of those applications that sounds very interesting.

MF: So let’s go to one of our experts. What is the concept of a DLC? How is it different to say a smart contract on Ethereum? Is it a bet? What types of contracts are we talking about?

CS: Very simply put, DLCs are a way to make Bitcoin transactions contingent on things that are happening in the real world. Bitcoin’s scripting system, any blockchain system for that matter, only can check and know about data that’s within its actual code I guess. It can’t reach out into the real world and determine who won the Presidential election. Even simple things like what’s the Bitcoin price or what’s the hash rate that Bitcoin has currently or what’s the last block hash, even or odd… Those are things that are almost native to the Bitcoin scripting system but with Bitcoin’s limitations it can’t build Bitcoin transactions based on that information. DLCs are a way to allow transactions that are contingent on real world events or even Bitcoin network specific information in a trust transparent way is what I call it. It is definitely contingent on an oracle attesting to something that happened. You do need to trust that oracle but you can clearly tell when that oracle has misbehaved or lied. There is cryptographic evidence at least that you can use to show that this oracle was misbehaving.

MF: Cool, everyone should know this but Discreet Log Contracts is a play on words. It is a play on “discrete log” and being “discreet” or being private with what you are actually doing.

CS: Most people seem to spell “discrete” but again going on the play on words it is actually “discreet”. This is the pun that Tadge (Dryja) has in his white paper which was phenomenal of course, Tadge is great.

How DLCs compare to Ethereum smart contracts

MF: But you curse him for the naming, maybe. So why are we only now talking about discreet log contracts on Bitcoin when supposedly Ethereum has taken the smart contract use case? Why has it taken so long to start doing these kinds of smart contracts on Bitcoin? What is the difference between a DLC on Bitcoin and a smart contract on Ethereum?

BC: I would say the reason it has taken so long is for one, that Tadge Dryja paper didn’t come out until 2017. There wasn’t really too much development on making these things a reality until 2019, 2020. Now with us and Crypto Garage and a couple of other open source guys we actually have working clients. It has taken a while to build out all this infrastructure to make it happen but it is becoming a reality now, we are no longer larping. What is the difference with Ethereum? An Ethereum contract, if you code it up in Solidity and post it to the blockchain, if I want to use that I will always use the same contract that everybody else is using. It is very explicit that I am using this contract. Whereas with the DLC I am just entering into a 2-of-2 multisig and all of my contract logic is done offchain. If an oracle signs then it is possible to produce one transaction that spends that 2-of-2 multisig. It is a very different model versus publishing your actual contract execution and verification to the blockchain. With a DLC, execution and verification are a lot more client side, which makes it a lot more scalable and a lot more private. As Bitcoiners we definitely want that.

MF: The trade-off Ethereum took, the term they used was “rich statefulness”. We are pushing everything offchain and trying to touch the chain as little as possible while they are able to have an oracle built within the Ethereum virtual machine and that does have some benefits. The contract can automatically update rather than going to an oracle, getting a signature and sending an updated transaction to the chain. But the trade-off that you get with that is obviously that it is completely unscalable because everything is done with the virtual machine and you are not pushing anything offchain whatsoever. Would you agree with that?

CS: What Ben is saying is right. DLCs are vastly superior in scalability and privacy. The thing we give up though with the UTXO model versus the account model is the market actually existing onchain which is what has led to the rise of things like Uniswap with their automated market makers. That’s functionality that is very hard to replicate in the UTXO model. We are still figuring out how you can facilitate a liquid market that trades 24/7 without having jurisdictional requirements impressed upon you because you are a centralized company in whatever jurisdiction you are in. One of the more interesting developments lately has been the whole Uniswap company versus protocol saga where they ended up delisting securities via their web interface but the underlying Uniswap protocol, as far as I know, still has grey market or black market liquidity pools available to them. Putting aside the Ethereum stuff, in DLC land it is very hard for us to have this market structure exist on a third party collective resource like a blockchain and give people these abilities to trade 24/7. That is another one of these fundamental architecture differences here. They give up on privacy, scalability, they get this 24/7 liveness that people really like. We are still working on that; we’ve got 2 of these great properties but the market infrastructure piece is still an open question.

BC: This isn’t a DLC specific problem. Things like Joinmarket, Bisq, Lightning Pool, all have the same problem. Bisq has a different model, Joinmarket has a different model. These are solvable things, it is picking one or picking all of them and seeing what works.

MF: It is the problem of the matching engine. What Chris was saying is that you can have the matching engine onchain and that brings certain benefits even though it is totally unscalable. Otherwise if you are doing that offchain you are having to build that whole matching engine independently without automatic writes to the blockchain whenever you want?

CS: Matching engines can be built and that is a pretty well understood space. It is the regulatory, legal, compliance restrictions that come with if I was to host these things on Suredbits servers I’d need to comply with United States regulations for offering DLCs. If Michael, you were to host these things in London, you’re the person who is hosting a centralized server where trades are being matched. You have legal requirements there too. Now if it is hosted on a blockchain it is a collective resource. So you are not necessarily liable, I’m not necessarily liable if I’m running a validator. It is this network that is hosting the market rather than a single company’s servers. It is very unscalable, it is very not private but there are certain benefits to that model when it comes to the legal and regulatory world, at least as it stands now. Maybe this changes in the United States and certain three letter agencies decide to try to hold validators liable which is a contemporary issue I guess. It definitely is on stronger footing when it comes to legal and regulatory stuff compared to the UTXO model in that regard.

MF: I don’t want to get into regulation… but if you are uploading data onto the Ethereum blockchain why is that any different to an oracle offchain uploading data to a website? I never thought the matching engine would be subject to similar regulation as a custodian, that didn’t even occur to me. You are not taking any funds, it is like a dating service. You are just matching two parties that want the opposite things. I suspect you’ve looked at the regulation more than I have.

CS: Maybe this is United States compared to the rest of the world, maybe it is different. It is still an active area of debate. I guess I’m turning this into a regulatory talk now rather than a technical one so I’ll stop talking about that.

MF: London Bitcoin Regulators can discuss that conversation. London Bitcoin Devs will stick to the technical part.

Examples of DLCs in the wild

MF: I’ve got this reading list up. We’ve kind of covered the basic concept of a discreet log contract. Z-man has some excellent thoughts on smart contracts on Bitcoin. We’ve covered the differences between Bitcoin and Ethereum. Some of the examples that I could find, and the Suredbits guys can tell me if there are others: one was Nicolas Dorier and Chris (Stewart) entering into a bet on the election. That was organized via Twitter so does that make Twitter a matching engine? Should Twitter be under regulation? That was an election bet. Who acted as the oracle in that case between you and Nicolas?

CS: I don’t think the oracle has revealed himself publicly yet so I won’t dox him. He does have a Twitter account, OutcomeObserver on Twitter I believe is his handle. He distributed his oracle information via Twitter so that we could take the information embedded in his tweets and start constructing DLCs based off the cryptographic information he published. He followed up after the fact and also revealed what we call attestations, a fancy word for signatures, that corresponded to the outcome of the election. In this case I believe it was a bet on the US Presidential election in 2020, the two candidates running were Joe Biden and Donald Trump. Nicolas took 60/40 odds where I put up 0.6 Bitcoin for Joe Biden winning and Nicolas put up 0.4 Bitcoin for Donald Trump winning. We built a DLC that encoded these conditions inside of a set of Bitcoin transactions. The way this is done with DLCs is you end up publishing a funding transaction that is a 2-of-2 multisig output and then you have encrypted versions of the settlement transactions that correspond to every outcome that can possibly occur during the bet. If I recall correctly during this bet the three outcomes were Joe Biden winning, one transaction, Donald Trump winning, another transaction, and then I believe that we had a tie case or a draw case. It is always good to have a backup case for either the oracle going haywire… In this case it wouldn’t be the oracle going haywire, just an unresolved bet. It is unclear who won the election so it is always good to have a draw or another case, that was the third encrypted transaction.
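For concreteness, the contract terms just described can be summarized like the sketch below (amounts in BTC). The draw case refunding each side’s collateral is an assumption; the discussion only says a third transaction existed for that case.

# One pre-built, adaptor-encrypted settlement transaction per possible outcome.
chris_collateral, nicolas_collateral = 0.6, 0.4
total = chris_collateral + nicolas_collateral
settlements = {
    "Biden wins": {"chris": total, "nicolas": 0.0},
    "Trump wins": {"chris": 0.0, "nicolas": total},
    "Draw / unresolved": {"chris": chris_collateral, "nicolas": nicolas_collateral},  # assumed refund
}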

MF: We’ll get into how DLCs have evolved from Tadge’s initial design to now. I just wanted to go through a few of the examples, the parameters of these contracts before we do that, to give some examples to people who are new to this. The next one I had up was Chris and Nadav on price volatility. There are various derivatives that you can just copy straight from the financial sector and substitute Bitcoin in for whatever fiat currency or asset price that you are interested in. This one was a bet on the volatility of the Bitcoin price, I think one of you bet that there would be a lot of volatility and the other one bet that there would be less volatility?

CS: Yeah. If you go back to that graph up there at the top I can add a little bit of commentary to this graph. For people who are new to DLCs this graph here represents what we call a payout curve. On the x axis of this graph you have US dollars, you can see it starts all the way down at 12,000 dollars. The triangle that we have here hits the x axis at 19,000 dollars and it goes all the way up to 28,000 dollars on the far right hand side of the x axis. What this is saying here is “We’re creating this volatility contract at a strike price of 19,000 US dollars.” If I remember correctly in this contract Nadav was short volatility which means that he wants the Bitcoin price to stay around 19,000 dollars when we settle. If it is still close to 19,000 dollars that means there wasn’t very much volatility during the interval or the period that we were monitoring for. However, me on the other side of the trade was long volatility. I want to get paid more as the Bitcoin price diverges from 19,000 US dollars. As the Bitcoin price hits 20,000 dollars I start getting paid more satoshis on the y axis. On the y axis it specifies the payouts in satoshis that I receive if the oracle attests to a certain outcome. If the price that the oracle attests to was 19,000 dollars I get zero satoshis. That sucks, I don’t want 19,000 dollars to be attested to. However if the oracle attests to the price being 17,000 dollars that is very good for me. I am getting all the money in the DLC here. I get 200,000 satoshis if you look at the y axis. Depending on what the oracle attests to, the more volatile the Bitcoin market is the more profitable this DLC is for me. However, if the Bitcoin price doesn’t move at all that is not very good for me and I end up losing the amount of money I contributed into the DLC. I am realizing now I didn’t specify how much collateral I contributed upfront. If I remember correctly Nadav contributed 100,000 sats of collateral and I contributed another 100,000 sats of collateral. We are equally collateralized. If the Bitcoin price doesn’t move Nadav gets all the money. If the Bitcoin price moves a lot I get all of his money. Those are the financial conditions for this DLC.
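As a minimal sketch, the payout curve described above for the long-volatility side can be written as a small function. The numbers are taken from the discussion (strike at 19,000 dollars, zero payout at the strike, the full 200,000 sats of combined collateral at 17,000 dollars); the 100 sats-per-dollar slope is inferred from those two points rather than stated explicitly.

def long_vol_payout_sats(attested_price_usd, strike=19_000, total_collateral=200_000, sats_per_dollar=100):
    # Payout in sats to the long-volatility side: zero at the strike, growing
    # linearly with divergence from the strike, capped at the combined collateral.
    divergence = abs(attested_price_usd - strike)
    return min(divergence * sats_per_dollar, total_collateral)

# Points matching the description above:
# long_vol_payout_sats(19_000) == 0        (no volatility, the short side keeps everything)
# long_vol_payout_sats(17_000) == 200_000  (large move, the long side takes the whole pot)
# The short-volatility side receives total_collateral minus this amount.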

MF: It sounds esoteric but anybody who has followed derivatives in financial markets knows that in the fiat economy this is billions and billions of dollars traded everyday on this kind of stuff, on price volatility of various assets. But as you say Chris, the big difference at least with how DLCs are set up now is we do have that limited collateral within the DLC. You can only lose the collateral that you post. In the conventional financial industry you can lose your shirt, you can lose everything. You do a leveraged trade betting that a price is going to go up or down or that it is going to be particularly volatile or less volatile. If you get it completely wrong you can lose a lot more than your initial collateral. That is the big difference.

BC: Something important too is you can only gain as much as your other counterparty puts up too. In the fiat system you can 100x leverage and get a million dollars. Versus this, if your counterparty only puts up 1 Bitcoin you can only win 1 Bitcoin.

MF: But maybe as this matures, if there was a big enough market with everyone betting their collateral then perhaps you then expand into where there is custodianship and you can win more than the initial collateral posted. That was volatility, the next one I had up was Blockstream and Crypto Garage on the Bitcoin price. Again this is derivatives, this was a forward contract?

CS: This was published in 2019 from Blockstream. As far as I know this is the first DLC executed on the Bitcoin main chain. We have come quite a ways from what is detailed in this blog post from a technical perspective I think. The actual financial engineering side of things here is pretty straightforward. If I recall correctly it is just a linear payout based on what the oracle attests to. I have to admit I did not look through this old blog post before I jumped on. Sorry about that.

Nadav Kohen (NK): The forward contract they executed pretty manually using handwritten scripts as opposed to libraries that autogenerate things. This was also before the use of adaptor signatures. They used quite a bit more script and more complicated stuff.

MF: They used the Japanese Yen stablecoin on Liquid as part of the derivative?

CS: I think that is in the more to come section.

MF: Ok it was purely a Bitcoin…

NK: It was a proof of concept.

CS: Maybe we are saving technical stuff for later but this would be a fun topic to touch on when we get to the technical discussion because it really shows the iterations that we’ve taken on designing the DLC protocol. If I remember correctly this is almost word for word what Tadge has in his white paper and we have diverged a bit from the original white paper as we find more innovations in the space. We can talk about that in the technical section.

MF: Let’s get onto that. There are tonnes of examples but I think that gives a flavor of some of the examples that you can do with DLCs. Let’s get onto the technical stuff, let’s get onto the evolution since Tadge’s initial… Just before we do that this looked cool. I just saw this earlier today. Lloyd Fournier wrote a paper on how to make a prediction market on Twitter with Bitcoin. Similar to what you and Nicolas did on Twitter. He has also made a betting wallet for betting on the Bitcoin price for “plebs, degenerates and revolutionaries”. I just found that earlier, that looked cool.

CS: Ben gave this a try this morning? Ben, do you want to add any color on that?

BC: He put it in our DLC chat the other day asking for people to test it out. I tested it out last night. I did a simple bet against Lloyd on a coin flip. It happens in two days so we’ll see if I win. It is pretty cool. It is only a command line tool at the moment. It is not too easy to use yet but I think it is pretty cool. I think it has a lot of potential.

NK: Just out of curiosity is Lloyd’s tool for binary option stuff or does it work for more general stuff?

BC: I don’t know.

MF: It seems very early, I think he just posted it today or yesterday. It looked cool so I added it to the reading list.

Evolution of the DLC: adaptor signatures

MF: Let’s go through the evolution of the DLC because that hasn’t really been covered I think on other forums, other podcasts, videos etc. The very first resource is Tadge’s original DLC white paper. What was the initial concept of a discreet log contract when Tadge initially released this paper?

NK: I highly recommend people read the white paper if you are interested. It is only 5 pages, it is written very clearly. There is a little bit of math in there but that is about it, that second page. If you are comfortable with the math it is also a decent place to learn about Schnorr signatures. This is where I learnt about Schnorr signatures, they were defined here, the idea. There are two ideas going on: the higher level one and the technical trick that enables it. Up on page 1 I think he states it pretty well in the abstract. Essentially the goal, he mentions financial use, but more generally speaking it is a private contract scheme that is scalable. It addresses scalability and privacy concerns. You also want to minimize the amount of trust. It has a very specific oracle model in mind where you have oblivious oracles that are being used to execute contracts on Bitcoin. As the name puns out we use a discrete log and it is private hence the misspelling with two e’s and a t. Essentially the trick that is used, I don’t know if it is Tadge’s innovation or if it is more so putting things together here, the idea is that with Schnorr signatures and other kinds of schemes you can anticipate a signature of a specific message by some entity such as an oracle. You can have an oracle whose public keys are known, they publish some commitments to an event and you can essentially create encryption keys that anticipate a specific event. You can create one for each possible event. This is what is used in order to construct contracts. In the initial white paper it really doesn’t go much further than that. The idea is you have some list of outcomes and you have a payout for each one, that is about where it ends. It is mentioned at the end about further optimizations, how you can do some other tricks but they are not very fleshed out. They are not exactly what we ended up using but they are the right general idea for how you would begin if you were to try to generalize this to more interesting kinds of contracts. It is short and sweet, 5 pages, explains the motivation, the model for the oracles, that they don’t know anything, they just broadcast, and that they can’t even see themselves after things have gone onchain. There is a section lower down about how you can anticipate a signature for each possible outcome. There is actually a rather vague section in which it says that you can then use this anticipation point in order to construct a contract in Bitcoin. This vagueness is where the older versions of DLCs diverge from the newer ones. The original ones took a more literal approach. You take this anticipation point for a given event and you use it as a tweak on an onchain key. There is always messiness involved with that and proving that it is secure is rather tricky. Newer iterations use these things that I’m sure that we’ll talk about later called adaptor signatures which are a much fancier, much more elegant solution to how we take this property of Schnorr signatures that you can anticipate signatures and use them to actually build out our contracts.
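To make the anticipation trick concrete, here is a toy, deliberately insecure sketch of the algebra in Python. It uses a plain multiplicative group mod a prime rather than secp256k1, made-up keys and nonces, and is not any real library’s API; it only demonstrates that anyone can compute, ahead of time, the group element that the oracle’s eventual Schnorr-style signature for a given outcome must satisfy.

import hashlib

# Toy group parameters (NOT secp256k1, NOT safe for real use)
p = 2**127 - 1          # group modulus (a Mersenne prime)
g = 3                   # toy generator
n = p - 1               # exponents are reduced mod the group order

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

x = 123456789           # oracle's long-term secret key (made up)
P = pow(g, x, p)        # oracle's public key (published)
k = 987654321           # oracle's secret nonce for one announced event (made up)
R = pow(g, k, p)        # announced nonce for that event (published ahead of time)

def anticipation_point(m):
    # Anyone can compute, before the event, the group element that the oracle's
    # eventual signature s for outcome m must satisfy: g^s == R * P^H(R,P,m).
    return (R * pow(P, H(R, P, m), p)) % p

def attest(m):
    # When the event resolves, the oracle publishes s = k + H(R,P,m)*x,
    # the discrete log of the anticipation point for the actual outcome.
    return (k + H(R, P, m) * x) % n

# The relation holds for whichever outcome the oracle ends up signing.
for outcome in ["Biden", "Trump", "Draw"]:
    assert pow(g, attest(outcome), p) == anticipation_point(outcome)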

MF: Just to be clear, it doesn’t use adaptor signatures? Tadge is the co-author of the Lightning white paper as well. This is only shortly after Lightning. It is not using adaptor signatures, it is not using Lightning and Schnorr obviously was years away from being activated. It is using Schnorr?

NK: It is using Schnorr. It is using Schnorr offchain. The oracle signatures are Schnorr signatures but you can use the oracle Schnorr signatures to then create ECDSA things. These anticipation points are derived from offchain Schnorr stuff. But you can still then use that for onchain Schnorr or onchain ECDSA or in theory other signature schemes. The original DLCs that were executed with this paper’s publication on testnet and later with Blockstream and Crypto Garage and then later with us, were all actually using ECDSA onchain, everything works today. But we are actually using a Schnorr variant offchain.

MF: That hasn’t quite clicked for me, how you can have a Schnorr signature…

NK: Oracles are not related to the Bitcoin blockchain in this model. An oracle is just an entity that signs things, it announces “In the future I am going to sign X”, “I am going to sign the Bitcoin price on this weekend, on Saturday”. On Saturday it broadcasts the signature of what the Bitcoin price is. Those are the only two functions it has. It doesn’t have anything to do with the blockchain, it doesn’t look at a blockchain, it doesn’t need a blockchain. Then what it is doing is it is using Schnorr signatures, it is posting announcements with pubkeys and then the signature of the price is actually a Schnorr signature. This isn’t a thing that is going on the blockchain, you can use whatever signature scheme that admits an anticipation thing that you want. Schnorr just happens to be a really convenient compact efficient nice one that has libraries in Bitcoin now. So essentially the oracle is just using Schnorr and then the fancy trick that is in this paper explains how from the Schnorr stuff we can get this public key that we can use in anything. In ECDSA we can use this public key to tweak our signatures that are ECDSA signatures in order to enforce our contracts. Tadge was actually using Schnorr signatures in the paper and we do use Schnorr signatures for DLCs generally speaking.

MF: This is a different trick of an oracle signature feeding into who gets paid, different to an adaptor signature?

NK: An adaptor signature actually is a version… There are a couple of different ways you can do this. One way that is the original way is you can use your public key literally to add it to other public keys. That gets kind of nasty. With adaptor signatures you use it as an encryption key. Either way you are using this Schnorr stuff and you are using it to tweak or do some variant scheme on your normal onchain signatures. The original way of doing that was kind of nasty, that is for example what Crypto Garage and Blockstream did and what we did initially. Then adaptor signatures only came later on as an improved way to use these pubkeys that correspond to real life events.
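In the same toy setting, here is a sketch of the “signature encrypted to a point” interface that adaptor signatures provide. The implementations discussed here actually use ECDSA adaptor signatures (now in libsecp256k1-zkp); this Schnorr-style toy with made-up keys and a plain mod-p group only illustrates the sign, verify, decrypt and secret-recovery steps, not any real API.

import hashlib

p = 2**127 - 1          # toy group modulus (NOT secp256k1, NOT safe for real use)
g = 3
n = p - 1

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

a = 42                  # a DLC participant's secret signing key (made up)
A = pow(g, a, p)        # their public key

t = 7_000_003           # stand-in for the oracle's eventual attestation (unknown when signing)
T = pow(g, t, p)        # the anticipation point for one outcome (known when signing)

def adaptor_sign(msg, r=555_555):
    # Pre-sign the settlement transaction "encrypted" to T: the nonce is shifted
    # by T, so the result verifies as an adaptor signature but is not yet a valid
    # signature on its own.
    R_a = pow(g, r, p)
    e = H((R_a * T) % p, A, msg)
    return R_a, (r + e * a) % n

def adaptor_verify(msg, R_a, s_adapt):
    e = H((R_a * T) % p, A, msg)
    return pow(g, s_adapt, p) == (R_a * pow(A, e, p)) % p

def decrypt(s_adapt):
    # Once the oracle reveals t (its attestation), anyone holding the adaptor
    # signature can complete it into a valid signature and broadcast the settlement.
    return (s_adapt + t) % n

msg = "settlement tx for outcome X"
R_a, s_adapt = adaptor_sign(msg)
assert adaptor_verify(msg, R_a, s_adapt)

s = decrypt(s_adapt)
R_full = (R_a * T) % p
e = H(R_full, A, msg)
assert pow(g, s, p) == (R_full * pow(A, e, p)) % p   # ordinary Schnorr-style check passes
assert (s - s_adapt) % n == t                        # and the counterparty learns t from the broadcast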

MF: I think I am going to have to read the paper again. I thought it wasn’t using adaptor signatures but it sounds like it is using a flavor of adaptor signatures that isn’t quite the adaptor signatures we use today.

NK: Adjacent. It has a similar interface to adaptor signatures but it is actually vague, it doesn’t say. It is very easy these days to read it and be like “He is talking about adaptor signatures right now”. When really he was saying something much more vague. You need something that acts like this and then doesn’t state which one it is or how you have something that acts like that. These days we use adaptor signatures because they act like that. But the initial iterations did not use adaptor signatures. They used other things that were similar, probably less secure and definitely worse in other ways.

MF: You also have this blog post where you explain how they work before the latest adaptor signatures that are used currently with DLCs.

CS: I believe one of the key things that came along for us in April 2020 was Lloyd figuring out how to do ECDSA adaptor signatures which is a primitive that is available on the Bitcoin blockchain so that we can enforce these things actually onchain for settlement of transactions.

NK: Lloyd figured out that this was likely possible in December 2019 and posted under a pseudonym to the mailing list. Then he fleshed it out a bit more and published an actual GitHub paper draft of how this works and what the security is like. That was, I want to say, February or March 2020. Then there was an online Lightning Hack Day where me, Jonas Nick, a bunch of other folks from Suredbits and some others, including Waxwing (Adam Gibson) and Lloyd, were involved. We got together and we implemented ECDSA adaptor signatures and a little thing built on top of them. That was the first implementation of ECDSA adaptor signatures in April 2020. Then it was cleaned up, tests were added, it was made more production ready and it was merged into libsecp256k1-zkp maybe 4 months ago. That was Jesse Posner who picked up nickler’s work, tidied it up and took it over the finish line. When we were originally working on DLCs we had about a year before ECDSA adaptor signatures existed and could be used. We were doing other nastier tricks that are explained in that blog post. They are not terribly nasty or anything but from a code perspective it was 4 times as much code to get it to work. If I recall, when we switched to ECDSA adaptor signatures it cut the codebase to a quarter of its size.

MF: Wow, interesting. This is just you saying “We’ve got a better design than what Tadge initially put in the white paper. We want to use this flavor of adaptor signatures, we don’t have Schnorr yet, can we do this with ECDSA? It turns out we can in a convoluted way but once we have Schnorr it will be a bit easier.”

NK: That’s right. The way we are doing it now we are using ECDSA adaptor signatures is going to make it much easier to transfer to using Schnorr because we are already using all of the same interfaces as Schnorr adaptor signatures and the like. It will be much closer to just swapping out ECDSA for Schnorr than it would have been if we had to do all of these changes all at once. It also helps from a development perspective where we’ve moved to the thing that’s more Schnorr like even though it is not Schnorr yet. It is better in every other way too because Schnorr is great.

Evolution of the DLC: Onchain, payment channels and Lightning routed

MF: Let’s continue with the evolution of DLCs for a moment. There are a bunch of different evolutions here. There is the adaptor signatures that you’ve just covered Nadav. Then there is the “Do we do this onchain? Do we do this in a payment channel? Perhaps in the future do we do this on a routed payment across the Lightning Network?” The examples that we showed earlier were within a payment channel were they? With no routing? Is that right?

CS: I think the examples that we were talking about earlier were strict onchain DLCs. As of now we have not begun development on a Layer 2 DLC. We are starting to get some inbound interest from Lightning implementations looking to support at least rudimentary DLCs between a counterparty and another person. But yeah where we are at with development today is we have an onchain wallet that works well. You can go download it at suredbits.com and play around with this stuff. We have got a lot of improvements on the engineering front in the pipeline but we have also published videos out there that Ben made to show how to do a DLC between two counterparties. To be explicit it is just onchain currently and we are trying to get our code solidified in that regard and the spec solidified in that regard before going up to Layer 2.

NK: There has been a lot of work on offchain DLCs but all offchain DLCs are paper DLCs. They haven’t actually existed on computers yet. But we have a pretty good idea on what that is going to look like when it is done. There are a couple of barriers before that work can actually begin. My understanding is that most of the work that needs to happen to unblock offchain DLC work is on the Lightning implementation side of things. There is a lot more generalization that needs to happen. Hopefully the move to Taproot will be a good motivator to start this work. That is our hope.

MF: The first thing that you would do to take advantage of Schnorr and Taproot would be to just use Schnorr in an onchain setting. Use Schnorr adaptor signatures if DLCs are currently all onchain you will be using Schnorr onchain to begin with?

NK: Certainly. Schnorr/Taproot will certainly be something that benefits DLCs even onchain. More so what I was referencing is that there are going to be changes coming to the implementation structure of Lightning nodes in the near future because they are going to be moving to Taproot and they are still going to have to support old channels for some period of time. That is going to force them to have to be able to do more than one thing. Right now Lightning implementations are quite hardcoded to do just the one thing they do. There is a lot of generalization that needs to happen in order to allow other kinds of things to live in Lightning channels other than just payments. In order to have DLCs we need a more general state machine, the internals need to be able to handle more general kinds of things than just payments. My hope is that the first step in this direction will be when Lightning implementations have to support both pre and post Taproot. Their internals are going to have to be more flexible. Once they are more flexible it should be much easier for us to start doing other things inside of Lightning channels.

MF: I see two possible alternative directions. One which is you try to get integrated into Lightning implementations so that you can work within existing… I think this was the approach Tadge was doing initially with his Lightning implementation Lit which was just to do payment channels and set up a network of payment channels, not do the routing and almost set up a new network that is almost separate to Lightning. To get off the ground given that you are not taking advantage of the routing to begin with.

NK: From an engineer’s perspective the thing we hate most is redoing work. If we gut a Lightning implementation and put DLCs in there and then later in the future we are also going to put DLCs into normal more generalized channels… I’m not saying we wouldn’t necessarily do the first thing but we’d want to make sure we were doing the first thing in a way that is compatible in the future with these more generalized channels. It feels like we are just a year, maybe two years or something like this away from having an actual framework for what it is going to look like to have more generalized channels. We are hesitant to start work that might have to be thrown away in the future when we know that soon enough we are going to know what direction is the right one to take.

MF: I get that. There are going to be various upgrades to Lightning. At which upgrade to Lightning is best for the DLC people to jump in?

NK: I would argue PTLCs.

MF: Yes it is probably going to be PTLCs rather than the MuSig(2) upgrade which will probably happen earlier before the PTLC upgrade.

CS: On what Nadav was saying, the oldest parts of these Lightning implementations are their HTLC state machines. There is a very strong resistance to changing these things because a) it is dangerous, you might introduce bugs, b) sometimes the people who wrote this initial code have left these companies and now there are newcomers that have to maintain this and are scared of introducing bugs because they aren’t the ones who wrote this stuff. It is going to be a long conversation I think in the Lightning community to get them comfortable with changing these core state machines, specifically their HTLC state machines, and incorporating these more general channels. When talking about more general channels I don’t know if there is a clear winner in that regard unless you are just talking about PTLCs as your more general channel?

NK: The ideal would be more general channels which would obviously let you do PTLCs on top of other things. But I imagine PTLCs are a good first generalizing step so they’ll probably happen prior to that. It will happen in that direction. Also I want to add a c), another reason why people don’t want to change the Lightning state machines, for the things it does right now it is provably optimal. Everyone is always like “Changing things comes at a slight disadvantage to the current use case”.

MF: You mean the scripts are optimal? What do you mean by provably optimal?

NK: I mean the actual internals for how they handle… From a computer science perspective the algorithm that they use is provably optimal and as fast as possible if the only thing you are doing is HTLCs. The HTLC state machine that is implemented in the current Lightning implementations is optimal. What we call for is less optimal for HTLCs but can do much more. There is that trade-off. I don’t think there is significant pushback but it is more an inertia that has to be overcome. People need to be convinced that it is very much worth it to be working on changing these things when what we have is so nice for this one thing that it does.

MF: To play devil’s advocate, if it is going to take a while for Lightning to get MuSig and PTLCs is that not an argument for bootstrapping a DLC network of payment channels. Then you could start with everything that you want. You are not sitting there waiting for Lightning to implement what you want them to implement. That always takes longer than you’d hope.

NK: I still think it is totally possible to take that approach but I would again just say we would probably wait to start that work until we actually see how we are supposed to do it. There are a couple of different ways that this can be done, how you would gut it and replace it. We want to make sure that we do it in a way that can be reused in the future. That is something that I think will be much clearer in less than 2 or 3 years.

CS: I was talking with some Lightning folks this weekend and talking about what’s on the roadmap for various Lightning companies. My argument to them is we need more adoption on the Lightning Network, I think everybody agrees with this, and number is going up currently and that is great. One of the reasons that number is going up could be attributed to lnd adding new features like keysend. I have issues with keysend but I don’t think it can be argued that there are a lot more applications that are enabled by this. I think we need to pitch them the same way on PTLCs. We have written a whole archive about things you can do with PTLCs, it is going to enhance the expressiveness of the Lightning Network which is going to make number go up even more on the Lightning Network. We need to pitch them on this new feature set, your last features didn’t really go through the consensus process for Lightning to integrate. You’ve got some interesting apps out there for that. The same thing can happen in PTLC world. Unfortunately, going back to what we already hashed over, it is much more invasive to do PTLC stuff around the core HTLC state machine logic that already exists.

NK: I posted an update to the Lightning dev mailing list around a year ago, after the hack day had happened and we had ECDSA adaptor signatures. I posted an update on PTLCs, what work had been done, onchain proofs of concept that we had executed and what the set of things that needed to be changed in the Lightning Network was. And I believe roasbeef responded to this mailing list post “I wouldn’t call this a minimal change, this is the largest change that would have ever happened to the Lightning Network. You are changing the state machine.” And I was like “Ok I guess you are right.” For context roasbeef once said this would be the biggest change to Lightning that has ever happened. Not DLCs to be clear, something much smaller than DLCs that is a required first step: changing the state machine at all, which has not happened yet.

MF: I get the sense roasbeef is more on the disruptive side rather than the conservative side.

NK: I have talked to him quite a bit since about generalizing channels. It is something he wants to do and it is something that he is working on. It will certainly happen, I am not worried. I think the consensus is that it is the right approach, it is just a lot of work and the work is happening slowly.

How the DLC use case could increase usage of the Lightning Network

Fode Diop (FD): First of all, hello everybody. I just wanted to add to what Chris was saying about number go up in Lightning. I think it will happen. Especially in Central America, we are only 3 and a half weeks away from the government wallet being released here. It is not just El Salvador, it is all the neighboring countries. If you think about it, most of the banks here are owned by foreign banks. The three major banks here are owned by Colombian banks which means that as they help the banks integrate with Lightning they are also themselves getting ready to integrate with Lightning as well. We have Colombia next door, we have Guatemala which is right next door and all exchanges here are pretty much integrating with Lightning as well. I think it is just a matter of time until we see an explosion of usage here. El Salvador alone is 6,7 million people. If you onboard an entire nation on it I think there will be an increase in usage but also a good way to stress test the Lightning Network.

MF: I don’t know how much of it is literally on Lightning. Is there a big uptick in terms of Lightning channels because of this adoption or are they all just doing a custodial service?

FD: It is custodial in the beginning, nobody even knows yet if the wallet coming out on September 7th is going to be an onchain or a Lightning wallet. We are hearing that it will have an integration with Lightning from the get go. Whether it is a custodial or a non-custodial wallet I think it doesn’t really matter because ultimately if the wallet can allow for people to withdraw onchain or through Lightning then anyone can use any wallet out here. That is what the government is trying to push forward, for anybody to use any wallet that is available. There is Bitcoin Beach, Wallet of Satoshi, some people are using Muun and Strike of course. I think over time as people get more comfortable and they understand how to use wallets they will use whatever wallet works for them. I think there will be an increased level of usage. I don’t want to be a prophet and say something that might not happen but from what I can see here there is going to be an explosion of usage.

MF: This might seem like a diversion from DLCs but it is not actually a diversion. AJ Towns’ idea that he posted on the mailing list and he also spoke at the Sydney Socratic on this, AJ is trying to get more adoption on Lightning, more people using Lightning. AJ’s argument was that in the early days of Bitcoin Satoshi Dice was the biggest use case of Bitcoin. There were tonnes and tonnes of transactions. Now we look back and go “How terrible it was. My node has to verify all these terrible transactions”. But you could argue back then any adoption… Bitcoin was nothing, no one was using it. It gave Bitcoin a purpose, people were actually using Bitcoin for something. AJ’s argument was, you can read the mailing list post, that you could do similar Satoshi Dice style gambling on dice or coin tosses on Lightning to get the adoption up, if your objective is to get more people using Lightning. It sounds like from your perspective Fode that we don’t need to be as concerned with adoption on Lightning, people are going to be using Lightning for payments.

FD: Not at all.

MF: We don’t need the DLCs to get that adoption up.

FD: I think we need every kind of technology out there because usually the markets will determine what they want. The market will say what is needed. To me we try it all, throw the kitchen sink at it and see what happens. Whatever sticks sticks and whatever doesn’t will be discarded like natural selection. I am not saying DLCs will not be needed per se, I am curious about DLCs. Since I met Chris in Miami I really need to see how this works.

MF: Just to be clear I think DLCs have got a definite use case. It is just whether they are the first killer use case for Lightning or whether it is a separate network and Lightning is going to be fine on its own. DLCs take advantage of Lightning or set their own network up, whatever works for DLCs.

CS: I think since the earliest days of us working on DLCs we’ve always had the implicit assumption that eventually Bitcoin blockchain space is going to fill, it is going to be uneconomical for lower value DLCs to happen on Bitcoin so let’s use our Layer 2 solution for what it is. With DLCs in channels, if you are doing smaller DLCs, less than a Bitcoin, I think doing them on Lightning makes a lot of sense. If you are doing hundreds or thousands of Bitcoin in your DLC the same security incentives that apply with Lightning also apply with DLCs. They are just another output in your Lightning channel. You need to be mindful of very large DLCs on Lightning in the same way as you need to be mindful of very large HTLCs on Lightning as well. Going back a step or two, I always try not to be judgmental of the economic activity that is happening on the chain assuming that it is not parasitic to the underlying blockchain. I don’t think we as Bitcoin developers or the Bitcoin community need to get in the business of judging whether this is worthy to be on the Bitcoin blockchain or if it isn’t. The fee market is there to determine that. That is the whole purpose of fees onchain, to price out things that don’t have as much economic value so they need to go to a Layer 2 solution. That’s fine in my opinion.

MF: The downsides are totally different now. If we are doing parallels with Satoshi Dice and DLCs on Lightning, every single transaction was onchain. We are talking about a potential future where there are tonnes of DLCs going over Lightning with barely any touching the chain. So the downsides are very different but perhaps there are upsides in that this could be a use case to get more people using Lightning, I don’t know.

NK: I also think DLCs are a nice test run for doing more generalized offchain stuff. Layer 2 currently is in its first iteration where it is just payments that are happening. I think in the future, as Chris mentioned, almost everything is going to happen offchain, it is going to have to. Certainly nothing cooperative should ever happen onchain, that is a principle. There is no reason to be using a blockchain directly for anything other than assurance that if things stop being cooperative then you have a backup. As a first resort you should never actually be using a blockchain if any activity that you are doing is cooperative. In the future we are actually going to need Layer 2 to handle almost everything. We have payments down as the first step and I think DLCs are an ideal second step. They are pretty minimal in their package for what you need in a technical sense, they have immediate use cases so they are marketable to people. You can have these things get implemented and integrated, and they are not as trivial as a payment. It is not that a payment is trivial, Layer 2 took quite a while to come around, but as far as contingent payments go it is about as simple as it gets. I think in some sense it can be viewed as the natural next step after payment channels, introducing things like DLCs into offchain ecosystems.

The challenges of a routed DLC on Lightning

MF: I do want to have a final section on oracles before we wrap up. Just to finish the Lightning section, this is looking forward to when there are DLCs on Lightning. When we get to routed DLCs, this is looking very far ahead, not only a DLC within a payment channel but routed across multiple payment channels, the problem there is if you do a bet that settles in 6-12 months then the capital would be locked up on every hop. That is not ideal. We are kind of in the same space as HODL invoices with capital being locked up.

NK: I would argue that it is exactly the same space.

MF: So there are good reasons not to like that, if capital is locked up for no reason when routing nodes want to be doing payments that settle very quickly all the time. You’ve said barrier escrows could potentially be a solution to this?

NK: There are two separate things. With HODL invoices there is the current problem: the model under which we operate Lightning is a model in which the use case is fast payments that are executed and then resolved. HODL HTLCs or HODL invoices are essentially payments that are longstanding so they don’t complete immediately. They get set up and then they stay in that state of being set up, locking up the capital along the route until some future date when the payment actually gets settled. It is a delayed settlement Lightning payment in which all of this capital gets locked up. In the current model this is essentially indistinguishable from spam. If I was going to spam attack the Lightning Network I would just set up a bunch of very long route payments to myself and never complete them. For each satoshi I have to lock up I can lock up 20 satoshis of other people’s capital because of the routing. You have this huge leveraged power where you can spam the Lightning Network. I would say today the Lightning Network is not generally spam resistant. There are I think four different competing proposals that are in the works right now for how to solve this problem. All of these proposals essentially boil down to: you have to pay people for their capital lockup, not just on a one time basis as it is right now. But also in some sense you have to compensate them either probabilistically or otherwise. You have to take into account the amount of time that capital is locked up. You are paying for the use of other people’s capital in routing and you are using the capital much more if it is a HODL invoice than if it is a normal one. Essentially there are a bunch of different proposals out there for how to price this in. These people who are holding and locking up their collateral are getting paid compensation that is proportional to that cost for them. It is fundamentally a problem of spam, secondarily a problem for HODL invoice HTLCs and I would say only tertiarily it happens to also be true for things like routed DLCs and other contracts. If you solve the problem for spam then the problem also becomes solved for DLCs. It will just be priced in, DLCs will be a little more expensive to do if you are routing them than they would be to route them today. But it wouldn’t be bad for the network. There is some stuff that isn’t priced in under the current Lightning Network model. In the future it will be. I think that essentially this problem reduces to the problem of spam which is a problem a lot of people are working on.
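
A toy Python sketch to make that leverage and the idea of time-proportional pricing concrete. The per-block rate and the fee formula are made up for illustration and are not taken from any of the competing proposals.

```python
# Illustration only: how a routed payment locks up capital at every hop, and how a
# hypothetical hold-time fee could price that in (numbers and formula are made up).

def locked_capital(amount_sats: int, num_hops: int) -> int:
    """Total capital tied up along the route while the payment is outstanding."""
    # Every hop has roughly the payment amount locked in an HTLC/PTLC, so a
    # 20-hop route locks ~20x the payment in other people's funds.
    return amount_sats * num_hops

def hold_fee(amount_sats: int, hold_blocks: int, rate_ppm_per_block: float) -> float:
    """A made-up time-proportional fee on top of the usual one-time routing fee."""
    return amount_sats * hold_blocks * rate_ppm_per_block / 1_000_000

print(locked_capital(100_000, 20))    # 2,000,000 sats locked for a 100k sat payment
print(hold_fee(100_000, 10, 1.0))     # fast payment, held ~10 blocks: ~1 sat
print(hold_fee(100_000, 4_000, 1.0))  # HODL/DLC-style hold, ~4,000 blocks: ~400 sats
```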

CS: Maybe DLCs would even be good for Lightning in this regard if the spam problem is solved because then people who are putting their capital on Lightning, maybe they want to route DLCs because they can have longer standing DLCs on their Lightning channels rather than these HTLCs that appear quickly and then should be cleared quickly too. Maybe it is a longer term source of income for Lightning node routers when they are routing these larger DLCs that are hanging off the channel for a while.

NK: The problem that barrier escrows solve is a related but different one.

MF: I misunderstood this. There is the capital lockup problem and there is also the problem of getting a free option. You enter into an agreement, the price swings and then you leave that agreement because you’ve basically got a free option.

NK: A simple example. Say I want to bet with Chris whether or not it is going to rain where I am tomorrow. I say it won’t rain, he says it will. We want to do a routed DLC over Lightning. What needs to happen is I need to set up a payment to him contingent on it raining. I set up a payment to him and the point associated with that payment, the tweak, is the oracle anticipation for the event “It is raining”. We’ll say it is for 1 Bitcoin just for simple numbers. Then he needs to set up a payment to me for 1 Bitcoin contingent on it won’t rain. Only one of those two things is going to happen. Say it does rain, then Chris can claim his payment and I am never going to be able to claim my payment. The oracle is never going to publish “It didn’t rain”. I can just fail that payment and return the collateral. The problem here though is there are these two payments that need to be set up from one to the other. The question is who moves first? If I set up my payment to Chris he can never set up a payment to me and now that’s a free option. If it does rain he gets paid and if it doesn’t rain he doesn’t lose anything. He hasn’t set this up in return. This notion of a barrier escrow which came out of a discussion between me and ZmnSCPxj was a solution to a similar problem that happens to also solve this problem where we can essentially atomically set up two payments. How it works is we have the payment I am going to set up to Chris and the payment Chris is going to set up to me. We add one extra lock to both of them that’s the same lock with some other point, say E, that belongs to this barrier escrow. Then what me and Chris do is we do an AMP, an atomic multipath payment to this third party. They don’t know who we are, they don’t know what they are being used for. All they do is they accept payments. Once both me and Chris’ payments have been set up to them then it releases. How this looks is rather than I set up a payment to Chris and he sets one up to me, what we do is we both set up payments to each other that go through each other and to some shared point elsewhere on the network. That shared point happens atomically using the existing thing which is AMP. Using just AMP we can essentially bootstrap AMP into helping with atomic multipayment setup. You can set up as many different consecutive payments as you want for all the different possible outcomes. You can get fancy with it, that essentially lets you do DLCs routed over Lightning if you have PTLCs.
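
A minimal toy sketch of that setup, using plain integers as stand-ins for curve points (point = secret with a unit generator). The names, secrets and amounts are purely illustrative; this is only the algebraic shape of the barrier escrow idea, not the real protocol messages.

```python
from itertools import chain, combinations

G = 1  # toy generator: point(x) = x * G, so "points" are just integers here

def point(secret: int) -> int:
    return secret * G

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(1, len(xs) + 1))

def can_claim(lock: int, known_secrets) -> bool:
    # A PTLC locked to a sum of points needs the sum of the matching secrets.
    return any(point(sum(combo)) == lock for combo in subsets(known_secrets))

# Oracle anticipation points for the two outcomes (from the oracle announcement)
rain_secret, no_rain_secret = 1111, 2222   # revealed only via the oracle attestation
RAIN, NO_RAIN = point(rain_secret), point(no_rain_secret)

# Barrier escrow point E; the escrow reveals e only after both AMP shards arrive
e = 3333
E = point(e)

# Each routed payment is locked to (outcome point + barrier point):
nadav_to_chris_lock = RAIN + E      # pays Chris if it rains
chris_to_nadav_lock = NO_RAIN + E   # pays Nadav if it doesn't rain

# Before the escrow releases e, neither payment is claimable, even with an attestation
assert not can_claim(nadav_to_chris_lock, [rain_secret])
# Once the escrow releases e, only the attested outcome's payment becomes claimable;
# the other payment is simply failed back and the collateral returned
assert can_claim(nadav_to_chris_lock, [rain_secret, e])
assert not can_claim(chris_to_nadav_lock, [rain_secret, e])
```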

MF: With that problem where I make an offer to you Nadav and then you have time to either decide whether to accept it or if something swings in your favor… Say you bet on the rain. While you are deciding whether you want to enter into the bet on whether it rains or not you are looking out the window to see if it rains.

NK: Something like that.

MF: You’ve got this information where you can decide whether to agree or disagree. The solution is to introduce a third party so there is no advantage or no delay where one of the parties can collect more information.

NK: In general this is what it is like. In this particular example it is actually even a bit worse since there are just two outcomes. Chris can always accept my payment and never send one back. In a normal DLC there are more than two outcomes and there are multiple payments being set up between two parties. You could interweave them and stop at some other point which would be more analogous to what you are saying. In some sense that is exactly the problem, that someone can verbally agree to something online and then during the process of actually committing to it they can take advantage of the other person if you are not careful. Barrier escrows solve this problem in a trust minimized way. You don’t have to trust your counterparty in any way anymore. You have a much smaller amount of trust that is much more economically motivated that you put into some third party that is publicly auditable. Especially if you are already doing DLCs arguably there is much less trust that is required in a barrier escrow than in an oracle. It seems like an acceptable thing to do at least for the DLC use case.

MF: The solution to the long term capital lockup problem is either you do it in a payment channel where both parties in the payment channel are happy to lock it up between each other as you would onchain. If you are locking it up onchain you are not getting anything for that capital while it is locked up. Or you are paying higher routing fees effectively.

NK: Exactly. Presumably the longer it will take to execute the contract the more proportionally you’ll have to pay in routing fees. Then it becomes a calculation of is it worth it to do it offchain or onchain? If onchain fees are super high it is still going to be worth it to pay those higher routing fees.

MF: I saw Thibaut has opened a PR for rust-lightning on DLCs. It looks like there is some activity on rust-lightning from Thibaut. You guys are Scala guys right so are you trying to get DLC stuff into eclair?

CS: We would love to. The eclair guys are not the most friendly guys for merging code that is not theirs. Funnily enough Roman actually implemented the Postgres support that they have in eclair so they do have a different database backend now mostly thanks to Roman. Roman has also done work on demoing ECDSA adaptor signatures on Lightning as well to see how invasive these changes to this HTLC state machine are. If people want to look at what that world might look like I can dump a link to the implementation to the chat here.

R: The HTLC is the core thing in eclair. Recently I tried to rebase my branch with PTLCs with their current branch. It is a nightmare. There are so many changes in so many places. It is impossible to track. That is probably one of the reasons we need some kind of general concept for channels.

MF: It is complicated. Just to correct Chris the eclair guys are very friendly but it is hard to get stuff in and it should be hard to get stuff in.

NK: Are they still a smaller team than Lightning Labs? They used to be, I don’t know if they still are. They have finite resources and a lot of work and currently generalizing channels is not at the top of their priority list. Not much progress happening there. But we do have DLCs implemented in Scala so if they ever want it it is pretty easy to plug in. That is the easy part. The hard part is the actual making it possible to plug in.

MF: I was looking through some of the work Tadge and other people had done in his Lit implementation. There are a bunch of PRs on DLCs. If you were to go down the different implementation approach rather than trying to get it integrated into an existing implementation.

NK: This is the pre adaptor signature implementation by the way.

Oracles

MF: So let’s go onto oracles. The oracles in DLCs, this was what Tadge initially described in his paper… With a 2-of-3 multisig all the participants need to identify who the oracle is. The oracle is going to be signing in the 2-of-3. The two participants in the contract have to find an oracle who is willing to sign in case they don’t come to agreement. Tadge’s concept was that instead the oracle is just going to sign various statements and not actually know they are participating in any of these contracts. That’s what you have gone with, there are various reasons why you’ve gone with that approach. Why is it better if the oracle doesn’t know that their signed statements are being used in contracts?

BC: The first one you were talking about, the 2-of-3 multisig, your third party will know the exact UTXOs you are using onchain, they can see your transaction, they have to sign it. They can track which party won the money, they know where the funds are going, where the funds are coming from. It is really not private and you have an extra public key onchain so it is a little less scalable. For the actual oracle or your third party helper, they have a lot more regulatory risk. They might need to KYC your users, they might need to be a public entity. It makes it harder to do. Versus with the DLC model the oracle just signs “It is raining outside” or signs the Bitcoin price or “Biden won the Presidency”. That single signature can be used for every single user that is using them as an oracle. It is a lot easier for the oracle, they just put it out. For the users it is better because the oracle doesn’t know they are being used, they are hiding from the oracle, it is only between them and their counterparty.

NK: In some sense it is the difference between the “don’t be evil” and “can’t be evil” models. The 2-of-3 is the “don’t be evil” trust model. I guess it is not “can’t be evil” but the DLC oracle model is that it is much harder to be evil because you know so much less and you are essentially telling everyone the same thing.

MF: So let me take the evil case, take the devil’s advocate case. The disadvantages are that it is hard to incentivize the oracle to sign a lot of statements. You need an exact statement to be signed. If it is raining it needs to be exactly “Yes” or “No”. It can’t be “Yes it was raining” that was signed. You need the exact statement to be signed perfectly. If there is any difference then that signature won’t be a valid signature to feed into the DLC.

NK: This is actually one of the biggest motivations for writing a specification rather than just using this technology. The first month we built out fully functioning DLCs and since then it has been “Ok but everyone needs to be doing the same thing and working together” so this is nice and standardized. In the future this won’t be a problem because we will have standards that everyone can follow for announcing what messages they might sign and then signing exactly those messages. Signing a slightly different message is considered fraud in this world where we have these standards. In the same way that lying is considered fraud. I do think it is true that it is harder to pay an oblivious entity but this is actually a problem Lightning solves. There are a few different ways that one could end up incentivizing oracles. At the end of the day as long as one way works the market will make sure that it does. It does become more feasible once you also realize that there is a lot less overhead required to be a DLC oracle than to be a 2-of-3 signer where you have to be dealing with individuals directly. As opposed to just broadcasting messages. If you look at the DLC code that has been written there is a tonne of client side code and just a couple of files of DLC oracle side code. Essentially all the complexity in DLCs goes to the client, goes to the DLC participants and the oracle does the bare minimum. They are slightly harder to incentivize, I think that is universally true because of the constraints but I think it is still possible and it is cheaper to operate a DLC oracle. I think it works out.
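
As a rough illustration of why exact matching matters, here is a sketch of a client-side check against an announcement. The message structures are hypothetical and not the dlcspecs wire format.

```python
# A minimal sketch (not the actual DLC spec messages) of why an oracle must attest
# to exactly one of the strings it announced: clients match the attested message
# byte-for-byte against the announcement when unlocking their transactions.

from dataclasses import dataclass

@dataclass
class Announcement:
    event_id: str
    outcomes: list      # the exact strings the oracle has committed to sign

@dataclass
class Attestation:
    event_id: str
    outcome: str        # must match one announced outcome exactly

def check_attestation(ann: Announcement, att: Attestation) -> str:
    if att.event_id != ann.event_id:
        return "wrong event"
    if att.outcome in ann.outcomes:
        return "ok"
    # "Yes it was raining" instead of "Yes" is useless to the contract participants
    # and would be treated as oracle misbehaviour under such a standard.
    return "unusable: message differs from what was announced"

ann = Announcement("rain-tomorrow", ["Yes", "No"])
print(check_attestation(ann, Attestation("rain-tomorrow", "Yes")))                 # ok
print(check_attestation(ann, Attestation("rain-tomorrow", "Yes it was raining")))  # unusable
```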

BC: To highlight the complexity of creating the oracle. Nadav and me probably spent a couple of months writing the actual DLC wallet code.

NK: At least a year.

BC: The initial spec. For the initial oracle I did it in a weekend.

MF: These downsides, I think they are all manageable. The other downside I could think of… Oracle announcements, rather than the flow being “Chris I want to enter into a bet on whether it is going to rain where I currently am” and us going through the oracle announcements and going “But there is no oracle that is willing to be part of this contract because the oracle doesn’t know whether it is raining where I live”. That’s the other downside. The flow is you go to the oracle announcements and you see what contract you want to enter into rather than Chris and me saying “Let’s do a contract on some weird bet that there isn’t an oracle announcement on.”

NK: There is certainly a bootstrapping problem when it comes to oracles with DLCs. That is one thing that we are working a lot on in this site. It is not just us posting these things, anyone can come post oracle stuff on this site. You can think of it like a forum, everyone is posting stuff on here that they can attest to. I think this is maybe a slightly more transient problem. In the future you could imagine weather.com, all their data is signed on top of just published. It is not that much more work for them necessarily. If they are doing any kind of oracle stuff, making it so that all of their stuff is signed isn’t a big next step. I hope in the future that finding an oracle isn’t the hardest problem. The hardest problem will be finding a counterparty which has its own struggles though this is also true for the 2-of-3 case. The other thing worth noting is that you can still ask someone to be your oracle even if they know you. Essentially the trust model reduces down to the 2-of-3 in that case. I would say DLCs do just as well as 2-of-3 in that case. They do better in cases where oracles already exist for other things. In the 2-of-3 case you still have to find someone who was willing to sign off on whatever is happening. You can do that and find them, tell them to be a DLC oracle, you are just not really using them like a DLC oracle at that point. Your trust model has reduced down to that of a 2-of-3.

BC: You still have some benefits. You and Chris enter into a bet, I don’t need to know how big your bet is or what your UTXOs are.

NK: It is a slight improvement.

MF: So it is just getting that bootstrap flow right. Perhaps the oracle announcements should all be “respectable”. Or people who everyone knows that are saying what the price is. Rather than a random person like me coming onto your oracle site and saying “It is raining outside my house now” when nobody cares.

NK: One consideration is that on this oracle site, one goal is to actually show someone’s past as well. You can see things they’ve done before and stuff like that. Maybe in the future there will be a more explicit reputation built into things. I think this is one of Lloyd’s motivations in using Twitter. You can bootstrap on Twitter reliability, whatever metrics exist there, they are faulty there as well. If someone has got a blue checkmark maybe it is worth a little bit more. But yeah, essentially I think Lloyd is using Twitter to get around various problems to do with verification and reputation when it comes to oracles.

MF: There is some very cool cryptographic stuff with oracles. You’ve talked about this on other things, we don’t need to go into too much detail, but the trick where signing two conflicting statements means losing your private key is really cool. You can only safely sign one of the two statements. That’s the punishment. There is also the possibility to have multiple oracles. A multisig on statements.

NK: That works today.

MF: That works today in your current implementation of adaptor signatures onchain?

NK: Yes and that is in the specification.

MF: That is using what for the multisig?

NK: A lot of math.

MF: Can you use MuSig for that?

NK: We aren’t using direct public key… You don’t want to think of it like a multisignature. We are essentially combining anticipated… What you get from an oracle is their pubkey information and their potential outcomes. From that you can do some math using Tadge’s trick to compute an elliptic curve point. You can think of it like a public key but it is a point on an elliptic curve. That’s the anticipation for a specific event. Traditionally what you do with that is you use it as an encryption key for your signatures, that is what adaptor signatures do. What we are doing for multi oracle is you take not just one oracle’s point for one event but you combine a bunch of different oracles’ points for a bunch of different events in fancy ways. Then you do elliptic curve math on them to create this aggregate point that is associated with an event. It is a bit more complicated. Rather than a point for “Alice said it is raining” you can have a point for the event “Alice says it is raining and Bob says it is raining and Chris says that it is either raining or it is partly cloudy”. You can create these kinds of ANDs and ORs of these points and create new ones and then use those to lock up the actual transactions that pay out to those events. It is quite a bit more complicated. This is one of the biggest things we’ve done where the white paper doesn’t outline how to do it or even really mention what approaches can be taken. We had to develop our own algorithms and approaches. Multi oracle support and numeric contract support more generally uses some clever tricks that I and a couple of others have come up with to essentially enable multi oracle numeric contracts that even allow oracles to not attest to exactly the same number down to the last digit. They can have some variance that you get to specify as the DLC participant for what differences between oracles are acceptable.
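
To make the AND/OR composition a little more concrete, here is the same toy integer-point arithmetic again. Real DLCs do this with secp256k1 points and adaptor signatures, so this only shows the algebraic shape, with made-up names and values.

```python
# Toy illustration (integers standing in for curve points). AND of two oracle events
# is the sum of their anticipation points: claiming needs both attestation secrets.
# OR is handled by having a separate adaptor signature / settlement tx per branch.

G = 1

def point(secret: int) -> int:
    return secret * G

alice_rain_secret, bob_rain_secret, chris_cloudy_secret = 11, 22, 40
ALICE_RAIN = point(alice_rain_secret)
BOB_RAIN = point(bob_rain_secret)
CHRIS_CLOUDY = point(chris_cloudy_secret)

# "Alice says rain AND Bob says rain": one aggregate point, needs both secrets
and_point = ALICE_RAIN + BOB_RAIN
assert point(alice_rain_secret + bob_rain_secret) == and_point  # both attestations unlock it
assert point(alice_rain_secret) != and_point                    # one alone does not

# "(Alice AND Bob say rain) OR (Alice AND Chris say cloudy)": one locked settlement
# transaction per branch, each encrypted to its own aggregate point
or_branches = [ALICE_RAIN + BOB_RAIN, ALICE_RAIN + CHRIS_CLOUDY]
revealed = [alice_rain_secret, chris_cloudy_secret]             # what actually got attested
claimable = [p for p in or_branches if point(sum(revealed)) == p]
assert claimable == [ALICE_RAIN + CHRIS_CLOUDY]                 # only that branch unlocks
```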

MF: This is adaptor signatures, this isn’t multisig with all the signatures going onchain or MuSig where it is key aggregation. This is math that kind of works out as multisig.

NK: That’s right. The generalization of multisig in computer science is called a monotone access structure if anyone is curious.

MF: That’s one to look up. The oracle stuff is very cool. Oracle failure cases, we didn’t go into this but obviously if you are only using one oracle and the oracle lies then you’ve got problems. That is where a reputation system comes in or using multiple oracles.

The impact of possible future soft forks on DLCs

MF: There is tonnes of stuff to do with Schnorr and Taproot. Is there anything for future soft forks that would be cool for DLCs? I saw one blog post from Jeremy Rubin on betting on Taproot activation. This wasn’t actually using CHECKTEMPLATEVERIFY, I think he was just using his Sapio programming language to do a bet on whether Taproot would activate. I think Ben put up a Taproot activation oracle at one point on Suredbits.

BC: On our most recent spec call Lloyd actually detailed a way of how to do DLCs not using adaptor signatures or anything, just using covenants. That is another potential way to do it but it is an alternative way, you don’t really get new functionality. There are some improvements with things like SIGHASH_ANYPREVOUT where you can make them more composable.

NK: It will be much easier to put DLCs in eltoo Lightning channels than in penalty Lightning channels. It is not that it will be easier. Today if we found a way to hack together and create a slightly more generalized Lightning channel and put a DLC in it, just between two parties, no routing, having that DLC in your channel would slow down your channel operation. Because every time you want to do something else with that channel like a payment you have to re-sign all of the stuff to do with the DLC. If it is a small contract, only two outcomes or something, it is not a big deal. But if you have a large numeric contract that depends on the Bitcoin price or something inside of your Lightning channel then every single update that you make to your Lightning channel is going to take a couple of seconds, it is not great, even more sometimes. With eltoo this isn’t the case. You can use SIGHASH_ANYPREVOUT to dynamically bind your DLC into your channel so you don’t have to re-sign the DLC every time. The DLC can be a part of all future channels until it is kicked out or settled or whatever. You can think at a very high level, SIGHASH_ANYPREVOUT lets you dynamically rebind transactions so they don’t just spend this output, they can spend any of the future outputs. That is how the Lightning update stuff works. Any transaction can spend any previous commitment transaction in an eltoo channel. Likewise any DLC output, you don’t have to re-sign all the time because you can dynamically rebind it onto future channel states without re-entering into the DLC from scratch. I think that is going to be a big improvement once we have DLCs in Lightning channels, to have SIGHASH_ANYPREVOUT. I think there are other tricks Ben that you know more about for how SIGHASH_ANYPREVOUT can be used to optimize DLCS.

BC: I think you can do more complex things. Say when I get paid to this address from a DLC, instead of it being issued to me I could have an already signed transaction that spends it from that address to 50 other parties. That way my DLC only needs one key to sign but the payout goes to 50 people. There are other fancy things you can do. In Chris’ blog post on DLC put contracts that would help as well, you don’t need to constantly renegotiate it.

CS: With the coinbase put contract example the tough part for creating a DLC predicated on the coinbase transaction is the fact that miners tweak the scriptSig and the witness commitment which ends up changing the funding transaction ID for your DLC every time a hash is performed. With ANYPREVOUT you abstract away what the specific funding transaction is, rather you say it just has to match this template. You can bind it to any transaction that follows this template which makes things like what I talk about in this blog post more viable in the market.

Upcoming releases

MF: At some point in the future we’ll be doing DLCs between all of us in a channel factory. There is a lot of work to do. We’ll wrap up. What would you want people to do? Do you want people to find friends to do DLCs with, to use your software? I saw various videos of tutorials. Do you need oracles? Do you need people entering into DLCs? What do you need?

CS: We definitely need people entering into DLCs. We are about to land a pretty major upgrade to the user experience of the DLC process courtesy of a lot of hard work by Ben and Roman, streamlining the negotiation process of DLCs. It is going to look much more like Lightning channel openings now rather than the very complicated copy, paste message back and forth process that we had previously. It is going to be a much more streamlined experience. We should have that out in the next week or two. That’s when we really want people to get off to the races and really start trying out these things. The demo video you have up here is for becoming an oracle which isn’t going to change significantly in the next 3-6 months. That’s something you can do today. You don’t even need to know anything about a chain state data source. You can just start posting oracles on oracle.suredbits.com and telling your friends to enter into DLCs that are predicated off you being an oracle if they trust you of course.

NK: And if anyone out there is super into Rust I know Thibaut (Le Guilly) who is the maintainer of rust-dlc, he just needs review on his thousands of lines of code. That sounds daunting but there is always room for contribution on rust-dlc these days if you know any Rust.

BC: Check out the spec. We have a bunch of small issues, lazy things that we haven’t done yet, adding pretty pictures, correctly linking things around. There is other basic stuff. There is also high level stuff that we’d love review on too. Anything would help.

NK: If something is unreadable post an issue asking questions. Maybe we will get around to fixing it and making it easier to understand. We are adding lots of pictures, stuff like that. It is coming together.

MF: And lobby the eclair guys to support DLCs in eclair soon.

NK: Bother all your Lightning developer friends until they generalize their Lightning channels and put PTLCs into them.

CS: Or until they hate the Suredbits team.

MF: Great work guys, thanks for coming on. I’ll get a transcript up. I look forward to seeing the latest UX that comes out in a week or two.

Media: https://www.youtube.com/watch?v=h6SkBwOCFsA

Gist of resources discussed: https://gist.github.com/michaelfolkson/f5da6774c24f99dba5c6c16ec8d499e9

This event was livestreamed on YouTube and so comments are attributed to specific individuals by the name or pseudonym that was visible on the livestream. If you were a participant and would like your comments to be anonymized please get in touch.

Intro

Michael Folkson (MF): Welcome to everyone on the call, welcome to people watching on YouTube. This is a Socratic Seminar, we’ve got a Zoom call and it is being livestreamed on YouTube. Anybody watching on YouTube, feel free to comment and ask questions. There is a reading list that I’ll share the link to and that gives us some structure for the discussion; we don’t have to stick to it but we’ll see where it takes us. Let’s do intros first. We’ll start with the Suredbits guys.

Chris Stewart (CS): I’m Chris Stewart, I’m the Founder of Suredbits. We do discreet log contracts.

Ben Carman (BC): I’m Ben, been at Suredbits for almost 2 years doing Bitcoin development stuff and DLCs.

Patrick (P): I’m non-technical, just an enthusiast and here to soak up some knowledge.

Roman (R): I also work at Suredbits working on DLCs.

Basic concept of a DLC

MF: Let’s start with the basic concept. As we were saying before there are tonnes of resources out there and I don’t want to just redo some of the conversations and interviews that have already been had. But I feel we have to go over the basic concept first before we go into the advanced stuff. What is a DLC?

P: It is a way to have an oracle system I guess via the Lightning Network?

CS: I think we should publicly shame him for not being an expert in 20 minutes of watching videos on YouTube (joke). How do you not know everything?

MF: From your watching of the videos is there anything that makes you interested in DLCs? Is this an interesting concept to you?

P: Yeah. I’ve been looking into the different things you can do… The other day I bumped into secure multiparty computation and that blew my mind. I don’t know what those things really mean but I’m trying to understand it a bit more. This seems like another one of those applications that sounds very interesting.

MF: So let’s go to one of our experts. What is the concept of a DLC? How is it different to say a smart contract on Ethereum? Is it a bet? What types of contracts are we talking about?

CS: Very simply put, DLCs are a way to make Bitcoin transactions contingent on things that are happening in the real world. Bitcoin’s scripting system, any blockchain system for that matter, only can check and know about data that’s within its actual code I guess. It can’t reach out into the real world and determine who won the Presidential election. Even simple things like what’s the Bitcoin price or what’s the hash rate that Bitcoin has currently or what’s the last block hash, even or odd… Those are things that are almost native to the Bitcoin scripting system but with Bitcoin’s limitations it can’t build Bitcoin transactions based on that information. DLCs are a way to allow transactions that are contingent on real world events or even Bitcoin network specific information in a trust transparent way is what I call it. It is definitely contingent on an oracle attesting to something that happened. You do need to trust that oracle but you can clearly tell when that oracle has misbehaved or lied. There is cryptographic evidence at least that you can use to show that this oracle was misbehaving.

MF: Cool, everyone should know this but Discreet Log Contracts is a play on words. It is a play on “discrete log” and being “discreet” or being private with what you are actually doing.

CS: Most people seem to spell it “discrete” but again going on the play on words it is actually “discreet”. This is the pun that Tadge (Dryja) has in his white paper which was phenomenal of course, Tadge is great.

How DLCs compare to Ethereum smart contracts

MF: But you curse him for the naming, maybe. So why are we only now talking about discreet log contracts on Bitcoin when supposedly Ethereum has taken the smart contract use case? Why has it taken so long to start doing these kinds of smart contracts on Bitcoin? What is the difference between a DLC on Bitcoin and a smart contract on Ethereum?

BC: I would say the reason it has taken so long is, for one, that the Tadge Dryja paper didn’t come out until 2017. There wasn’t really too much development on making these things a reality until 2019, 2020. Now with us and Crypto Garage and a couple of other open source guys we actually have working clients. It has taken a while to build out all this infrastructure to make it happen but it is becoming a reality now, we are no longer larping. What is the difference with Ethereum? An Ethereum contract, if you code it up in Solidity and post it to the blockchain, if I want to use that I will always use the same contract that everybody else is using. It is very explicit that I am using this contract. Whereas with the DLC I am just entering into a 2-of-2 multisig and all of my contract logic is done offchain. If an oracle signs then it is possible to produce one transaction that spends that 2-of-2 multisig. It is a very different model versus publishing your actual contract execution and verification to the blockchain. With a DLC the execution and verification are a lot more client side which makes it a lot more scalable and a lot more private. As Bitcoiners we definitely want that.

MF: The trade-off Ethereum took, the term they used was “rich statefulness”. We are pushing everything offchain and trying to touch the chain as little as possible while they are able to have an oracle built within the Ethereum virtual machine and that does have some benefits. The contract can automatically update rather than going to an oracle, getting a signature and sending an updated transaction to the chain. But the trade-off that you get with that is obviously that it is completely unscalable because everything is done with the virtual machine and you are not pushing anything offchain whatsoever. Would you agree with that?

CS: What Ben is saying is right. DLCs are vastly superior in scalability and privacy. The thing we give up though with the UTXO model versus the account model is the market actually existing onchain which is what has led to the rise of things like Uniswap with their automated market makers. That’s functionality that is very hard to replicate in the UTXO model. We are still figuring out how you can facilitate a liquid market that trades 24/7 without having jurisdictional requirements impressed upon you because you are a centralized company in whatever jurisdiction you are in. One of the more interesting developments lately has been the whole Uniswap company versus protocol saga where they ended up delisting securities via their web interface but the underlying Uniswap protocol, as far as I know, still has grey market or black market liquidity pools available to them. Putting aside the Ethereum stuff, in DLC land it is very hard for us to have this market structure exist on a third party collective resource like a blockchain and give people these abilities to trade 24/7. That is another one of these fundamental architecture differences here. They give up on privacy and scalability, they get this liveness 24/7 that people really like. We are still working on that. We’ve got two of these great principles but the market infrastructure piece is still an open question.

BC: This isn’t a DLC specific problem. Things like Joinmarket, Bisq, Lightning Pool, all have the same problem. Bisq has a different model, Joinmarket has a different model. These are solvable things, it is picking one or picking all of them and seeing what works.

MF: It is the problem of the matching engine. What Chris was saying is that you can have the matching engine onchain and that brings certain benefits even though it is totally unscalable. Otherwise if you are doing that offchain you are having to build that whole matching engine independently without automatic writes to the blockchain whenever you want?

CS: Matching engines can be built and that is a pretty well understood space. It is the regulatory, legal, compliance restrictions that come with if I was to host these things on Suredbits servers I’d need to comply with United States regulations for offering DLCs. If Michael, you were to host these things in London, you’re the person who is hosting a centralized server where trades are being matched. You have legal requirements there too. Now if it is hosted on a blockchain it is a collective resource. So you are not necessarily liable, I’m not necessarily liable if I’m running a validator. It is this network that is hosting the market rather than a single company’s servers. It is very unscalable, it is very not private but there are certain benefits to that model when it comes to the legal and regulatory world, at least as it stands now. Maybe this changes in the United States and certain three letter agencies decide to try to hold validators liable which is a contemporary issue I guess. It definitely is on stronger footing when it comes to legal and regulatory stuff compared to the UTXO model in that regard.

MF: I don’t want to get into regulation… but if you are uploading data onto the Ethereum blockchain why is that any different to an oracle offchain uploading data to a website? I never thought the matching engine would be subject to similar regulation as a custodian, that didn’t even occur to me. You are not taking any funds, it is like a dating service. You are just matching two parties that want the opposite things. I suspect you’ve looked at the regulation more than I have.

CS: Maybe this is United States compared to the rest of the world, maybe it is different. It is still an active area of debate. I guess I’m turning this into a regulatory talk now rather than a technical one so I’ll stop talking about that.

MF: London Bitcoin Regulators can discuss that conversation. London Bitcoin Devs will stick to the technical part.

Examples of DLCs in the wild

MF: I’ve got this reading list up. We’ve kind of covered the basic concept of a discreet log contract. Z-man has some excellent thoughts on smart contracts on Bitcoin. We’ve covered the differences between Bitcoin and Ethereum. Some of the examples that I could find, you can tell me if there are others, Suredbits guys, one was Nicolas Dorier and Chris (Stewart) entering into a bet on the election. That was organized via Twitter so does that make Twitter a matching engine? Should Twitter be under regulation? That was an election bet. Who took the oracle role in that case between you and Nicolas?

CS: I don’t think the oracle has revealed himself publicly yet so I won’t dox him. He does have a Twitter account, OutcomeObserver on Twitter I believe is his handle. He distributed his oracle information via Twitter so that we could take the information embedded in his tweets and start constructing DLCs based off the cryptographic information he published. He followed up after the fact and also revealed what we call attestations, a fancy word for signatures, that corresponded to the outcome of the election. In this case I believe it was a bet on the US Presidential election in 2020, the two candidates running were Joe Biden and Donald Trump. Nicolas took 60/40 odds where I put up 0.6 Bitcoin for Joe Biden winning and Nicolas put up 0.4 Bitcoin for Donald Trump winning. We built a DLC that encoded these conditions inside of a set of Bitcoin transactions. The way this is done with DLCs is you end up publishing a funding transaction that is a 2-of-2 multisig output and then you have encrypted versions of the settlement transactions that correspond to every outcome that can possibly occur during the bet. If I recall correctly during this bet the three outcomes were Joe Biden winning, one transaction, Donald Trump winning, another transaction, and then I believe that we had a tie case or a draw case. It is always good to have a backup case for either the oracle going haywire… In this case it wouldn’t be the oracle going haywire, just an unresolved bet. If it is unclear who won the election it is always good to have a draw or another case; that was the third encrypted transaction.
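
For readers new to the structure, here is a rough sketch of that contract’s shape in Python. The data types are illustrative only, not the dlcspecs message format, and the adaptor-signature encryption is only described in comments.

```python
# Structural sketch of the election DLC described above: one 2-of-2 funding output
# plus one adaptor-encrypted settlement transaction (CET) per possible outcome.

from dataclasses import dataclass

@dataclass
class ContractExecutionTx:
    outcome: str                # the message the oracle would attest to
    chris_payout_btc: float
    nicolas_payout_btc: float
    # In the real protocol each CET is pre-signed with an adaptor signature
    # encrypted to that outcome's oracle anticipation point.

funding_collateral = {"chris": 0.6, "nicolas": 0.4}   # 1.0 BTC in a 2-of-2 output
total = sum(funding_collateral.values())

cets = [
    ContractExecutionTx("Biden", total, 0.0),    # Chris wins the pot
    ContractExecutionTx("Trump", 0.0, total),    # Nicolas wins the pot
    ContractExecutionTx("Other", 0.6, 0.4),      # draw / unresolved: refund collateral
]

# Only the CET matching the oracle's attestation can ever be decrypted and broadcast
attested = "Biden"
settlement = next(cet for cet in cets if cet.outcome == attested)
print(settlement)
```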

MF: We’ll get into how DLCs have evolved from Tadge’s initial design to now. I just wanted to go through a few of the examples, the parameters of these contracts before we do that, to give some examples to people who are new to this. The next one I had up was Chris and Nadav on price volatility. There are various derivatives that you can just copy straight from the financial sector and substitute Bitcoin in for whatever fiat currency or asset price that you are interested in. This one was a bet on the volatility of the Bitcoin price, I think one of you bet that there would be a lot of volatility and the other one bet that there would be less volatility?

CS: Yeah. If you go back to that graph up there at the top I can add a little bit of commentary to this graph. For people who are new to DLCs this graph here represents what we call a payout curve. On the x axis of this graph you have US dollars, you can see it starts all the way down at 12,000 dollars. The triangle that we have here hits the x axis at 19,000 dollars and it goes all the way up to 28,000 dollars on the far right hand side of the x axis. What this is saying here is “We’re creating this volatility contract at a strike price of 19,000 US dollars.” If I remember correctly in this contract Nadav was short volatility which means that he wants the Bitcoin price to stay around 19,000 dollars when we settle. If it is still close to 19,000 dollars that means there wasn’t very much volatility during the interval or the period that we were monitoring for. However, me on the other side of the trade was long volatility. I want to get paid more as the Bitcoin price diverges from 19,000 US dollars. As the Bitcoin price hits 20,000 dollars I start getting paid more satoshis on the y axis. On the y axis it specifies the payouts in satoshis that I receive if the oracle attests to a certain outcome. If the price that the oracle attests to was 19,000 dollars I get zero satoshis. That sucks, I don’t want 19,000 dollars to be attested to. However if the oracle attests to the price being 17,000 dollars that is very good for me. I am getting all the money in the DLC here. I get 200,000 satoshis if you look at the y axis. Depending on what the oracle attests to, the more volatile the Bitcoin market is the more profitable this DLC is for me. However, if the Bitcoin price doesn’t move at all that is not very good for me and I end up losing the amount of money I contributed into the DLC. I am realizing now I didn’t specify how much collateral I contributed upfront. If I remember correctly Nadav contributed 100,000 sats of collateral and I contributed another 100,000 sats of collateral. We are equally collateralized. If the Bitcoin price doesn’t move Nadav gets all the money. If the Bitcoin price moves a lot I get all of his money. Those are the financial conditions for this DLC.
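
A rough reconstruction of that payout curve in Python. The slope is inferred from the 19,000 and 17,000 dollar points Chris mentions, so treat the exact shape as illustrative rather than the real contract terms.

```python
# Long-volatility payout around a $19,000 strike with 200,000 sats total collateral
# (100,000 from each side); slope chosen so the long side is fully paid out once
# the price has moved $2,000 from the strike, as in the $17,000 example above.

STRIKE_USD = 19_000
TOTAL_COLLATERAL_SATS = 200_000
SATS_PER_USD_OF_MOVE = 100    # illustrative slope: 200,000 sats over a $2,000 move

def long_vol_payout(attested_price_usd: int) -> int:
    move = abs(attested_price_usd - STRIKE_USD)
    return min(TOTAL_COLLATERAL_SATS, move * SATS_PER_USD_OF_MOVE)

def short_vol_payout(attested_price_usd: int) -> int:
    # The short-volatility side gets whatever is left of the pot.
    return TOTAL_COLLATERAL_SATS - long_vol_payout(attested_price_usd)

for price in (17_000, 18_500, 19_000, 20_000, 28_000):
    print(price, long_vol_payout(price), short_vol_payout(price))
# 17000 -> 200000/0, 19000 -> 0/200000, 20000 -> 100000/100000
```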

MF: It sounds esoteric but anybody who has followed derivatives in financial markets knows that in the fiat economy this is billions and billions of dollars traded everyday on this kind of stuff, on price volatility of various assets. But as you say Chris, the big difference at least with how DLCs are set up now is we do have that limited collateral within the DLC. You can only lose the collateral that you post. In the conventional financial industry you can lose your shirt, you can lose everything. You do a leveraged trade betting that a price is going to go up or down or that it is going to be particularly volatile or less volatile. If you get it completely wrong you can lose a lot more than your initial collateral. That is the big difference.

BC: Something important too is you can only gain as much as your other counterparty puts up too. In the fiat system you can 100x leverage and get a million dollars. Versus this, if your counterparty only puts up 1 Bitcoin you can only win 1 Bitcoin.

MF: But maybe as this matures, if there was a big enough market with everyone betting their collateral then perhaps you then expand into where there is custodianship and you can win more than the initial collateral posted. That was volatility, the next one I had up was Blockstream and Crypto Garage on the Bitcoin price. Again this is derivatives, this was a forward contract?

CS: This was published in 2019 from Blockstream. As far as I know this is the first DLC executed on the Bitcoin main chain. We have come quite a ways from what is detailed in this blog post from a technical perspective I think. The actual financial engineering side of things here is pretty straightforward. If I recall correctly it is just a linear payout based on what the oracle attests to. I have to admit I did not look through this old blog post before I jumped on. Sorry about that.

Nadav Kohen (NK): The forward contract they executed pretty manually using handwritten scripts as opposed to libraries that autogenerate things. This was also before the use of adaptor signatures. They used quite a bit more script and more complicated stuff.

MF: They used the Japanese Yen stablecoin on Liquid as part of the derivative?

CS: I think that is in the more to come section.

MF: Ok it was purely a Bitcoin…

NK: It was a proof of concept.

CS: Maybe we are saving technical stuff for later but this would be a fun topic to touch on when we get to the technical discussion because it really shows the iterations that we’ve taken on designing the DLC protocol. If I remember correctly this is almost word for word what Tadge has in his white paper and we have diverged a bit from the original white paper as we find more innovations in the space. We can talk about that in the technical section.

MF: Let’s get onto that. There are tonnes of examples but I think that gives a flavor of some of the examples that you can do with DLCs. Let’s get onto the technical stuff, let’s get onto the evolution since Tadge’s initial… Just before we do that this looked cool. I just saw this earlier today. Lloyd Fournier wrote a paper on how to make a prediction market on Twitter with Bitcoin. Similar to what you and Nicolas did on Twitter. He has also made a betting wallet for betting on the Bitcoin price for “plebs, degenerates and revolutionaries”. I just found that earlier, that looked cool.

CS: Ben gave this a try this morning? Ben, do you want to add any color on that?

BC: He put it in our DLC chat the other day asking for people to test it out. I tested it out last night. I did a simple bet against Lloyd on a coin flip. It happens in two days so we’ll see if I win. It is pretty cool. It is only a command line tool at the moment. It is not too easy to use yet but I think it is pretty cool. I think it has a lot of potential.

NK: Just out of curiosity is Lloyd’s tool for binary option stuff or does it work for more general stuff?

BC: I don’t know.

MF: It seems very early, I think he just posted it today or yesterday. It looked cool so I added it to the reading list.

Evolution of the DLC: adaptor signatures

MF: Let’s go through the evolution of the DLC because that hasn’t really been covered I think on other forums, other podcasts, videos etc. The very first resource is Tadge’s original DLC white paper. What was the initial concept of a discreet log contract when Tadge initially released this paper?

NK: I highly recommend people read the white paper if you are interested. It is only 5 pages, it is written very clearly. There is a little bit of math in there but that is about it, that second page. If you are comfortable with the math it is also a decent place to learn about Schnorr signatures. This is where I learnt about Schnorr signatures, they were defined here, the idea. There are two ideas going on: the higher level one and the technical trick that enables it. Up on page 1 I think he states it pretty well in the abstract. Essentially the goal, he mentions financial use, but more generally speaking it is a private contract scheme that is scalable. It addresses scalability and privacy concerns. You also want to minimize the amount of trust. It has a very specific oracle model in mind where you have oblivious oracles that are being used to execute contracts on Bitcoin. As the name puns out we use a discrete log and it is private hence the misspelling with two e’s and a t. Essentially the trick that is used, I don’t know if it is Tadge’s innovation or if it is more so putting things together here, the idea is that you can with Schnorr signatures and other kinds of schemes you can anticipate a signature of a specific message by some entity such as an oracle. You can have an oracle whose public keys are known, they publish some commitments to an event and you can essentially create encryption keys that anticipate a specific event. You can create one for each possible event. This is what is used in order to construct contracts. In the initial white paper it really doesn’t go much further than that. The idea is you have some list of outcomes and you have a payout for each one, that is about where it ends. It is mentioned at the end about further optimizations, how you can do some other tricks but they are not very fleshed out. They are not exactly what we ended up using but they are the right general idea for how you would begin if you were to try to generalize this to more interesting kinds of contracts. It is short and sweet, 5 pages, explains the motivation, the model for the oracles that they don’t know anything and they just broadcast and that they can’t even see themselves after things have gone onchain. There is a section lower down about how you can anticipate a signature for each possible outcome. There is actually a rather vague section in which it says that you can then use this anticipation point in order to construct a contract in Bitcoin. This vagueness is where the older versions of DLCs diverge from the newer ones. The original ones took a more literal approach. You take this anticipation point for a given event and you use it as a tweak on an onchain key. There is always messiness involved with that and proving that it is secure is rather tricky. Newer iterations use these things that I’m sure that we’ll talk about later called adaptor signatures which are a much fancier, much more elegant solution to how we take this property of Schnorr signatures that you can anticipate signatures and use them to actually build out our contracts.

MF: Just to be clear, it doesn’t use adaptor signatures? Tadge is the co-author of the Lightning white paper as well. This is only shortly after Lightning. It is not using adaptor signatures, it is not using Lightning and Schnorr obviously was years away from being activated. It is using Schnorr?

NK: It is using Schnorr. It is using Schnorr offchain. The oracle signatures are Schnorr signatures but you can use the oracle Schnorr signatures to then create ECDSA things. These anticipation points are derived from offchain Schnorr stuff. But you can still then use that for onchain Schnorr or onchain ECDSA or in theory other signature schemes. The original DLCs that were executed with this paper’s publication on testnet and later with Blockstream and Crypto Garage and then later with us, were all actually using ECDSA onchain, everything works today. But we are actually using a Schnorr variant offchain.

MF: That hasn’t quite clicked for me, how you can have a Schnorr signature…

NK: Oracles are not related to the Bitcoin blockchain in this model. An oracle is just an entity that signs things, it announces “In the future I am going to sign X”, “I am going to sign the Bitcoin price on this weekend, on Saturday”. On Saturday it broadcasts the signature of what the Bitcoin price is. Those are the only two functions it has. It doesn’t have anything to do with the blockchain, it doesn’t look at a blockchain, it doesn’t need a blockchain. Then what it is doing is it is using Schnorr signatures, it is posting announcements with pubkeys and then the signature of the price is actually a Schnorr signature. This isn’t a thing that is going on the blockchain, you can use whatever signature scheme that admits an anticipation thing that you want. Schnorr just happens to be a really convenient compact efficient nice one that has libraries in Bitcoin now. So essentially the oracle is just using Schnorr and then the fancy trick that is in this paper explains how from the Schnorr stuff we can get this public key that we can use in anything. In ECDSA we can use this public key to tweak our signatures that are ECDSA signatures in order to enforce our contracts. Tadge was actually using Schnorr signatures in the paper and we do use Schnorr signatures for DLCs generally speaking.
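
For anyone who wants to see the algebra, here is a toy, non-cryptographic sketch of that anticipation trick, with integers modulo a prime standing in for secp256k1 points. Only the structure of the equations is meaningful; the keys, nonce and hash convention are made up for illustration.

```python
from hashlib import sha256

n = 2**31 - 1   # toy prime group order; NOT secp256k1
G = 1           # toy generator, so point(x) = x * G is just x mod n

def point(secret: int) -> int:
    return (secret * G) % n

def H(*parts) -> int:
    return int(sha256("|".join(map(str, parts)).encode()).hexdigest(), 16) % n

# Oracle announcement: a long-lived pubkey P and a per-event nonce point R
x = 123456789    # oracle private key (secret)
k = 987654321    # oracle per-event nonce (secret)
P, R = point(x), point(k)

# From public data alone, anyone can compute the anticipation point for each
# possible outcome m: S_m = R + H(R, m) * P. Participants use S_m as the
# encryption key (adaptor point) for the settlement transaction paying out on m.
def anticipation_point(msg: str) -> int:
    return (R + H(R, msg) * P) % n

# When the event resolves, the oracle publishes a Schnorr-style attestation
# s = k + H(R, m) * x for the one true outcome m; s is the discrete log of S_m.
def attest(msg: str) -> int:
    return (k + H(R, msg) * x) % n

s = attest("rain")
assert point(s) == anticipation_point("rain")      # s unlocks the "rain" settlement
assert point(s) != anticipation_point("no rain")   # but not the other outcome's
```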

MF: This is a different trick of an oracle signature feeding into who gets paid, different to an adaptor signature?

NK: An adaptor signature actually is a version… There are a couple of different ways you can do this. One way that is the original way is you can use your public key literally to add it to other public keys. That gets kind of nasty. With adaptor signatures you use it as an encryption key. Either way you are using this Schnorr stuff and you are using it to tweak or do some variant scheme on your normal onchain signatures. The original way of doing that was kind of nasty, that is for example what Crypto Garage and Blockstream did and what we did initially. Then adaptor signatures only came later on as an improved way to use these pubkeys that correspond to real life events.

MF: I think I am going to have to read the paper again. I thought it wasn’t using adaptor signatures but it sounds like it is using a flavor of adaptor signatures that isn’t quite the adaptor signatures we use today.

NK: Adjacent. It has a similar interface to adaptor signatures but it is actually vague, it doesn’t say. It is very easy these days to read it and be like “He is talking about adaptor signatures right now”, when really he was saying something much more vague: you need something that acts like this, without stating which one it is or how you get something that acts like that. These days we use adaptor signatures because they act like that. But the initial iterations did not use adaptor signatures. They used other things that were similar, probably less secure and definitely worse in other ways.

MF: You also have this blog post where you explain how they work before the latest adaptor signatures that are used currently with DLCs.

CS: I believe one of the key things that came along for us in April 2020 was Lloyd figuring out how to do ECDSA adaptor signatures which is a primitive that is available on the Bitcoin blockchain so that we can enforce these things actually onchain for settlement of transactions.

NK: Lloyd figured out that this was likely possible in December 2019 and posted under a pseudonym to the mailing list. Then he fleshed it out a bit more and published an actual GitHub paper draft of how this works and what the security is like. That was, I want to say, February or March 2020. Then there was an online Lightning Hack Day in which me, Jonas Nick, a bunch of other folks from Suredbits and some others, Waxwing (Adam Gibson) and Lloyd were involved. We got together and we implemented ECDSA adaptor signatures and a little thing built on top of them. That was the first implementation of ECDSA adaptor signatures in April 2020. Then it was cleaned up, tests were added, it was made more production ready and it was merged into libsecp256k1-zkp maybe 4 months ago. That was Jesse Posner who picked up nickler’s work, tidied it up and took it over the finish line. When we were originally working on DLCs we had about a year before ECDSA adaptor signatures existed and could be used. We were doing other nastier tricks that are explained in that blog post. They are not terribly nasty or anything but from a code perspective it was 4 times as much code to get it to work. If I recall, when we switched to ECDSA adaptor signatures it cut the codebase to a quarter of the size.

MF: Wow, interesting. This is just you saying “We’ve got a better design than what Tadge initially put in the white paper. We want to use this flavor of adaptor signatures, we don’t have Schnorr yet, can we do this with ECDSA? It turns out we can in a convoluted way but once we have Schnorr it will be a bit easier.”

NK: That’s right. The way we are doing it now, using ECDSA adaptor signatures, is going to make it much easier to transition to using Schnorr because we are already using all of the same interfaces as Schnorr adaptor signatures and the like. It will be much closer to just swapping out ECDSA for Schnorr than it would have been if we had to do all of these changes all at once. It also helps from a development perspective where we’ve moved to the thing that’s more Schnorr like even though it is not Schnorr yet. It is better in every other way too because Schnorr is great.

Evolution of the DLC: Onchain, payment channels and Lightning routed

MF: Let’s continue with the evolution of DLCs for a moment. There are a bunch of different evolutions here. There is the adaptor signatures that you’ve just covered Nadav. Then there is the “Do we do this onchain? Do we do this in a payment channel? Perhaps in the future do we do this on a routed payment across the Lightning Network?” The examples that we showed earlier were within a payment channel were they? With no routing? Is that right?

CS: I think the examples that we were talking about earlier were strictly onchain DLCs. As of now we have not begun development on a Layer 2 DLC. We are starting to get some inbound interest from Lightning implementations looking to support at least rudimentary DLCs between a counterparty and another person. But yeah where we are at with development today is we have an onchain wallet that works well. You can go download it at suredbits.com and play around with this stuff. We have got a lot of improvements on the engineering front in the pipeline but we have also published videos out there that Ben made to show how to do a DLC between two counterparties. To be explicit it is just onchain currently and we are trying to get our code solidified in that regard and the spec solidified in that regard before going up to Layer 2.

NK: There has been a lot of work on offchain DLCs but all offchain DLCs are paper DLCs. They haven’t actually existed on computers yet. But we have a pretty good idea on what that is going to look like when it is done. There are a couple of barriers before that work can actually begin. My understanding is that most of the work that needs to happen to unblock offchain DLC work is on the Lightning implementation side of things. There is a lot more generalization that needs to happen. Hopefully the move to Taproot will be a good motivator to start this work. That is our hope.

MF: The first thing that you would do to take advantage of Schnorr and Taproot would be to just use Schnorr in an onchain setting. Using Schnorr adaptor signatures, given DLCs are currently all onchain, you would be using Schnorr onchain to begin with?

NK: Certainly. Schnorr/Taproot will certainly be something that benefits DLCs even onchain. More so what I was referencing is that there are going to be changes coming to the implementation structure of Lightning nodes in the near future because they are going to be moving to Taproot and they are still going to have to support old channels for some period of time. That is going to force them to have to be able to do more than one thing. Right now Lightning implementations are quite hardcoded to do just the one thing they do. There is a lot of generalization that needs to happen in order to allow other kinds of things to live in Lightning channels other than just payments. In order to have DLCs we need a more general state machine, the internals need to be able to handle more general kinds of things than just payments. My hope is that the first step in this direction will be when Lightning implementations have to support both pre and post Taproot. Their internals are going to have to be more flexible. Once they are more flexible it should be much easier for us to start doing other things inside of Lightning channels.

MF: I see two possible alternative directions. One which is you try to get integrated into Lightning implementations so that you can work within existing… I think this was the approach Tadge was doing initially with his Lightning implementation Lit which was just to do payment channels and set up a network of payment channels, not do the routing and almost set up a new network that is almost separate to Lightning. To get off the ground given that you are not taking advantage of the routing to begin with.

NK: From an engineer’s perspective the thing we hate most is redoing work. If we gut a Lightning implementation and put DLCs in there and then later in the future we are also going to put DLCs into normal more generalized channels… I’m not saying we wouldn’t necessarily do the first thing but we’d want to make sure we were doing the first thing in a way that is compatible in the future with these more generalized channels. It feels like we are just a year, maybe two years or something like this away from having an actual framework for what it is going to look like to have more generalized channels. We are hesitant to start work that might have to be thrown away in the future when we know that soon enough we are going to know what direction is the right one to take.

MF: I get that. There are going to be various upgrades to Lightning. At which upgrade to Lightning is best for the DLC people to jump in?

NK: I would argue PTLCs.

MF: Yes it is probably going to be PTLCs rather than the MuSig(2) upgrade which will probably happen earlier before the PTLC upgrade.

CS: On what Nadav was saying, the oldest parts of these Lightning implementations are their HTLC state machines. There is a very strong resistance to changing these things because a) it is dangerous, you might introduce bugs, b) sometimes the people have left these companies that wrote this initial code and now there are newcomers that have to maintain this and are scared of introducing these bugs because they aren’t the initial people that wrote this stuff. It is going to be a long conversation I think in the Lightning community to get them comfortable with changing these core state machines specifically their HTLC state machines and incorporating these more general channels. I think when talking about more general channels I don’t know if there is a clear winner in that regard unless you are just talking about PTLCs as your more general channel?

NK: The ideal would be more general channels which would obviously let you do PTLCs on top of other things. But I imagine PTLCs are a good first generalizing step so they’ll probably happen prior to that. It will happen in that direction. Also I want to add a c), another reason why people don’t want to change the Lightning state machines, for the things it does right now it is provably optimal. Everyone is always like “Changing things comes at a slight disadvantage to the current use case”.

MF: You mean the scripts are optimal? What do you mean by provably optimal?

NK: I mean the actual internals for how they handle… From a computer science perspective the algorithm that they use is provably optimal and as fast as possible if the only thing you are doing is HTLCs. The HTLC state machine that is implemented in the current Lightning implementations is optimal. What we call for is less optimal for HTLCs but can do much more. There is that trade-off. I don’t think there is significant pushback but it is more an inertia that has to be overcome. People need to be convinced that it is very much worth it to be working on changing these things when what we have is so nice for this one thing that it does.

MF: To play devil’s advocate, if it is going to take a while for Lightning to get MuSig and PTLCs is that not an argument for bootstrapping a DLC network of payment channels. Then you could start with everything that you want. You are not sitting there waiting for Lightning to implement what you want them to implement. That always takes longer than you’d hope.

NK: I still think it is totally possible to take that approach but I would again just say we would probably wait to start that work until we actually see how we are supposed to do it. There are a couple of different ways that this can be done, how you would gut it and replace it. We want to make sure that we do it in a way that can be reused in the future. That is something that I think will be much clearer in less than 2 or 3 years.

CS: I was talking with some Lightning folks this weekend about what’s on the roadmap for various Lightning companies. My argument to them is we need more adoption on the Lightning Network, I think everybody agrees with this, and number is going up currently and that is great. One of the reasons that number go up could be attributed to lnd adding new features like keysend. I have issues with keysend but it can’t be denied that there are a lot more applications that are enabled by this. I think we need to pitch PTLCs to them the same way. We have written a whole archive about things you can do with PTLCs, it is going to enhance the expressiveness of the Lightning Network which is going to make number go up even more on the Lightning Network. We need to pitch them on this new feature set: your last features didn’t really go through the consensus process for Lightning to integrate and you’ve got some interesting apps out there for that, the same thing can happen in the PTLC world. Unfortunately, going back to what we already hashed over, it is much more invasive to do PTLC stuff around the core HTLC state machine logic that already exists.

NK: I posted an update to the Lightning dev mailing list around a year ago, after the hack day had happened and we had ECDSA adaptor signatures. I posted an update on PTLCs, what work had been done, onchain proofs of concept that we had executed and what the set of things that needed to be changed in the Lightning Network was. And I believe roasbeef responded to this mailing list “I wouldn’t call this a minimal change, this is the largest change that would have ever happened to the Lightning Network. You are changing the state machine.” And I was like “Ok I guess you are right.” For context roasbeef once said this would be the biggest change to Lightning that has ever happened. Not DLCs to be clear, but something much smaller than DLCs that is a required first step: changing the state machine at all, which has not happened yet.

MF: I get the sense roasbeef is more on the disruptive side rather than the conservative side.

NK: I have talked to him quite a bit since about generalizing channels. It is something he wants to do and it is something that he is working on. It will certainly happen, I am not worried. I think the consensus is that it is the right approach, it is just a lot of work and the work is happening slowly.

How the DLC use case could increase usage of the Lightning Network

Fode Diop (FD): First of all, hello everybody. I just wanted to add to what Chris was saying about number go up in Lightning. I think it will happen. Especially in Central America, we are only 3 and a half weeks away from the government wallet being released here. It is not just El Salvador, it is all the neighboring countries. If you think about it, most of the banks here are owned by foreign banks. The three major banks here are owned by Colombian banks which means that as they help the banks integrate with Lightning they are also themselves getting ready to integrate with Lightning as well. We have Colombia next door, we have Guatemala which is right next door and all exchanges here are pretty much integrating with Lightning as well. I think it is just a matter of time until we see an explosion of usage here. El Salvador alone is 6.7 million people. If you onboard an entire nation on it I think there will be an increase in usage but it will also be a good way to stress test the Lightning Network.

MF: I don’t know how much of it is literally on Lightning. Is there a big uptick in terms of Lightning channels because of this adoption or are they all just doing a custodial service?

FD: It is custodial in the beginning, nobody even knows yet if the wallet is going to be onchain or a Lightning wallet, the wallet coming up on September 7th. We are hearing that it will have an integration with Lightning from the get go. Whether it is a custodial or a non-custodial wallet I think it doesn’t really matter because ultimately if the wallet can allow for people to withdraw onchain or through Lightning then anyone can use any wallet out here. That is what the government is trying to push forward, for anybody to use any wallet that is available. There is Bitcoin Beach, Wallet of Satoshi, some people are using Muun and Strike of course. I think over time as people get more comfortable and they understand how to use wallets they will use whatever wallet works for them. I think there will be increased level of usage. I don’t want to be a prophet and say something that might not happen but what I can see here there is going to be an explosion of usage.

MF: This might seem like a diversion from DLCs but it is not actually a diversion. AJ Towns posted an idea on the mailing list and also spoke at the Sydney Socratic on it; AJ is trying to get more adoption on Lightning, more people using Lightning. AJ’s argument was that in the early days of Bitcoin Satoshi Dice was the biggest use case of Bitcoin. There were tonnes and tonnes of transactions. Now we look back and go “How terrible it was. My node has to verify all these terrible transactions”. But you could argue back then any adoption… Bitcoin was nothing, no one was using it. It meant Bitcoin had a purpose, people were actually using Bitcoin for something. AJ’s argument was, you can read the mailing list post, that you could do similar Satoshi Dice style gambling on dice or coin tosses on Lightning to get the adoption up, if your objective is to get adoption up on Lightning, more people using Lightning. It sounds like from your perspective Fode that we don’t need to be as concerned with adoption on Lightning, people are going to be using Lightning for payments.

FD: Not at all.

MF: We don’t need the DLCs to get that adoption up.

FD: I think we need every kind of technology out there because usually the markets will determine what they want. The market will say what is needed. To me we try it all, throw the kitchen sink at it and see what happens. Whatever sticks sticks and whatever doesn’t will be discarded like natural selection. I am not saying DLCs will not be needed per se, I am curious about DLCs. Since I met Chris in Miami I really need to see how this works.

MF: Just to be clear I think DLCs have got a definite use case. It is just whether they are the first killer use case for Lightning or whether it is a separate network and Lightning is going to be fine on its own. DLCs take advantage of Lightning or set their own network up, whatever works for DLCs.

CS: I think since the earliest days of us working on DLCs we’ve always had the implicit assumption that eventually Bitcoin blockchain space is going to fill, it is going to be uneconomical for lower value DLCs to happen on Bitcoin so let’s use our Layer 2 solution for what it is. With DLCs in channels, if you are doing smaller DLCs, less than a Bitcoin, I think doing them on Lightning makes a lot of sense. If you are doing hundreds or thousands of Bitcoin in your DLC the same security incentives that apply with Lightning also apply with DLCs. They are just another output in your Lightning channel. You need to be mindful of very large DLCs on Lightning in the same way as you need to be mindful of very large HTLCs on Lightning as well. Going back a step or two, I always try not to be judgmental of the economic activity that is happening on the chain assuming that it is not parasitic to the underlying blockchain. I don’t think we as Bitcoin developers or the Bitcoin community need to get in the business of judging whether this is worthy to be on the Bitcoin blockchain or if it isn’t. The fee market is there to determine that. That is the whole purpose of fees onchain, to price out things that don’t have as much economic value so that they need to go to a Layer 2 solution. That’s fine in my opinion.

MF: The downsides are totally different now. If we are doing parallels with Satoshi Dice and DLCs on Lightning, every single transaction was onchain. We are talking about a potential future where there are tonnes of DLCs going over Lightning with barely any touching the chain. On the downsides it is very different but perhaps there are the upsides in terms of this could be a use case to get more people using Lightning, I don’t know.

NK: I also think DLCs are a nice test run for doing more generalized offchain stuff. Layer 2 currently is in its first iteration where it is just payments that are happening. I think in the future, as Chris mentioned, almost everything is going to happen offchain, it is going to have to. Certainly nothing cooperative should ever happen onchain, that is a principle. There is no reason to be using a blockchain directly for anything other than assurance that if things stop being cooperative then you have a backup. As a first resort you should never actually be using a blockchain if any activity that you are doing is cooperative. In the future we are actually going to need Layer 2 to handle almost everything. We have payments down as the first step and I think DLCs are an ideal second step. They are pretty minimal in their package for what you need in a technical sense, they have immediate use cases so they are marketable to people. You can have these things get implemented and integrated, and they are not as trivial as a payment. It is not that a payment is trivial, Layer 2 took quite a while to come around, but as far as contingent payments go it is about as simple as it gets. I think in some sense it can be viewed as the natural next step after payment channels, introducing things like DLCs into offchain ecosystems.

The challenges of a routed DLC on Lightning

MF: I do want to have a final section on oracles before we wrap up. Just to finish the Lightning section, this is looking forward to when there are DLCs on Lightning. When we get to routed DLCs, this is looking very far ahead, not only a DLC within a payment channel but routed across multiple payment channels, the problem there is if you do a bet that settles in 6-12 months then the capital would be locked up on every hop. That is not ideal. We are kind of in the same space as HODL invoices with capital being locked up.

NK: I would argue that it is exactly the same space.

MF: So there are good reasons not to like that, capital being locked up for no reason when routing nodes want to be doing payments that settle very quickly all the time. You’ve said barrier escrows could potentially be a solution to this?

NK: There are two separate things. With HODL invoices there is a current problem: the model under which we operate Lightning is one in which the use case is fast payments that are executed and then resolved. HODL HTLCs or HODL invoices are essentially payments that are longstanding so they don’t complete immediately. They get set up and then they stay in that state of being set up, locking up the capital along the route until some future date when the payment actually gets settled. It is a delayed settlement Lightning payment in which all of this capital gets locked up. In the current model this is essentially indistinguishable from spam. If I was going to spam attack the Lightning Network I would just set up a bunch of very long route payments to myself and never complete them. For each satoshi I have to lock up I can lock up 20 satoshis of other people’s capital because of the routing. You have this huge leveraged power where you can spam the Lightning Network. I would say today the Lightning Network is not generally spam resistant. There are I think four different competing proposals that are in the works right now for how to solve this problem. All of these proposals essentially boil down to: you have to pay people for their capital lockup, not just on a one time basis as it is right now. You also in some sense have to compensate them, either probabilistically or otherwise, taking into account the amount of time that capital is locked up. You are paying for the use of other people’s capital in routing and you are using the capital much more if it is a HODL invoice than if it is a normal one. Essentially there are a bunch of different proposals out there for how to price this in. These people who are holding and locking up their collateral are getting paid compensation that is proportional to that cost for them. This is fundamentally a problem of spam, secondarily a problem for HODL invoice HTLCs, and I would say only tertiarily is it a problem for things like routed DLCs and other contracts. If you solve the problem for spam then the problem also becomes solved for DLCs. It will just be priced in, DLCs will be a little more expensive to do if you are routing them than they would be to route them today. But it wouldn’t be bad for the network. There is some stuff that isn’t priced in under the current Lightning Network model. In the future it will be. I think that essentially this problem reduces to the problem of spam which is a problem a lot of people are working on.

CS: Maybe DLCs would even be good for Lightning in this regard if the spam problem is solved because then people who are putting their capital on Lightning, maybe they want to route DLCs because they can have longer standing DLCs on their Lightning channels rather than these HTLCs that appear quickly and then should be cleared quickly too. Maybe it is a longer term source of income for Lightning node routers when they are routing these larger DLCs that are hanging off the channel for a while.

NK: The problem that barrier escrows solve is a related but different one.

MF: I misunderstood this. There is the capital lockup problem and there is also the problem of getting a free option. You enter into an agreement, the price swings and then you leave that agreement because you’ve basically got a free option.

NK: A simple example. Say I want to bet with Chris whether or not it is going to rain where I am tomorrow. I say it won’t rain, he says it will. We want to do a routed DLC over Lightning. What needs to happen is I need to set up a payment to him contingent on it raining. I set up a payment to him and the point associated with that payment, the tweak, is the oracle anticipation for the event “It is raining”. We’ll say it is for 1 Bitcoin just for simple numbers. Then he needs to set up a payment to me for 1 Bitcoin contingent on it not raining. Only one of those two things is going to happen. Say it does rain, then Chris can claim his payment and I am never going to be able to claim my payment. The oracle is never going to publish “It didn’t rain”. I can just fail that payment and return the collateral. The problem here though is there are these two payments that need to be set up from one to the other. The question is who moves first? If I set up my payment to Chris he can simply never set up a payment to me and now that’s a free option. If it does rain he gets paid and if it doesn’t rain he doesn’t lose anything. He hasn’t set this up in return. This notion of a barrier escrow, which came out of a discussion between me and ZmnSCPxj, was a solution to a similar problem that happens to also solve this one: it lets us essentially atomically set up two payments. How it works is we have the payment I am going to set up to Chris and the payment Chris is going to set up to me. We add one extra lock to both of them that’s the same lock with some other point, say E, that belongs to this barrier escrow. Then what me and Chris do is we do an AMP, an atomic multipath payment to this third party. They don’t know who we are, they don’t know what they are being used for. All they do is they accept payments. Once both my and Chris’ payments have been set up to them then it releases. How this looks is rather than me setting up a payment to Chris and him setting one up to me, what we do is we both set up payments to each other that go through each other and to some shared point elsewhere on the network. The release at that shared point happens atomically using the existing thing which is AMP. Using just AMP we can essentially bootstrap it into helping with atomic multipayment setup. You can set up as many different consecutive payments as you want for all the different possible outcomes. You can get fancy with it, that essentially lets you do DLCs routed over Lightning if you have PTLCs.
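
A rough pseudocode rendering of the setup Nadav describes might look like the following. This is purely illustrative: the object methods (route_ptlc, send_amp_half) are made up, and point_add and anticipation_point are the same hypothetical helpers as in the earlier sketches.

    # Illustrative pseudocode of the barrier escrow flow described above.
    def setup_routed_dlc(alice, bob, escrow, rain_point, no_rain_point):
        E = escrow.point
        # Each leg is locked to (outcome point + escrow point), so neither
        # payment is claimable until the escrow's secret is revealed.
        alice.route_ptlc(to=bob, amount=1, lock=point_add(rain_point, E))
        bob.route_ptlc(to=alice, amount=1, lock=point_add(no_rain_point, E))
        # The AMP to the escrow only completes once both halves arrive; at
        # that point the escrow releases its secret, and whichever outcome
        # the oracle later attests to becomes claimable.
        alice.send_amp_half(to=escrow)
        bob.send_amp_half(to=escrow)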

MF: With that problem where I make an offer to you Nadav and then you have time to either decide whether to accept it or if something swings in your favor… Say you bet on the rain. While you are deciding whether you want to enter into the bet on whether it rains or not you are looking out the window to see if it rains.

NK: Something like that.

MF: You’ve got this information where you can decide whether to agree or disagree. The solution is to introduce a third party so is there is no advantage or no delay where one of the parties can collect more information.

NK: In general this is what it is like. In this particular example it is actually even a bit worse since there are just two outcomes. Chris can always accept my payment and never send one back. In a normal DLC there are more than two outcomes and there are multiple payments being set up between two parties. You could interweave them and stop at some other point which would be more analogous to what you are saying. In some sense that is exactly the problem, that someone can verbally agree to something online and then during the process of actually committing to it they can take advantage of the other person if you are not careful. Barrier escrows solve this problem in a trust minimized way. You don’t have to trust your counterparty in any way anymore. You have a much smaller amount of trust that is much more economically motivated that you put into some third party that is publicly auditable. Especially if you are already doing DLCs arguably there is much less trust that is required in a barrier escrow than in an oracle. It seems like an acceptable thing to do at least for the DLC use case.

MF: The solution to the long term capital lockup problem is either you do it in a payment channel where both parties in the payment channel are happy to lock it up between each other as you would onchain. If you are locking it up onchain you are not getting anything for that capital while it is locked up. Or you are paying higher routing fees effectively.

NK: Exactly. Presumably the longer it will take to execute the contract the more proportionally you’ll have to pay in routing fees. Then it becomes a calculation of is it worth it to do it offchain or onchain? If onchain fees are super high it is still going to be worth it to pay those higher routing fees.

MF: I saw Thibaut has opened a PR for rust-lightning on DLCs. It looks like there is some activity on rust-lightning from Thibaut. You guys are Scala guys right so are you trying to get DLC stuff into eclair?

CS: We would love to. The eclair guys are not the most friendly guys for merging code that is not theirs. Funnily enough Roman actually implemented the Postgres support that they have in eclair so they do have a different database backend now mostly thanks to Roman. Roman has also done work on demoing ECDSA adaptor signatures on Lightning as well to see how invasive these changes to this HTLC state machine are. If people want to look at what that world might look like I can dump a link to the implementation to the chat here.

R: The HTLC is the core thing in eclair. Recently I tried to rebase my branch with PTLCs onto their current branch. It is a nightmare. There are so many changes in so many places. It is impossible to track. That is probably one of the reasons we need some kind of general concept for channels.

MF: It is complicated. Just to correct Chris the eclair guys are very friendly but it is hard to get stuff in and it should be hard to get stuff in.

NK: Are they still a smaller team than Lightning Labs? They used to be, I don’t know if they still are. They have finite resources and a lot of work and currently generalizing channels is not at the top of their priority list. Not much progress happening there. But we do have DLCs implemented in Scala so if they ever want it it is pretty easy to plug in. That is the easy part. The hard part is the actual making it possible to plug in.

MF: I was looking through some of the work Tadge and other people had done in his Lit implementation. There are a bunch of PRs on DLCs. If you were to go down the different implementation approach rather than trying to get it integrated into an existing implementation.

NK: This is the pre adaptor signature implementation by the way.

Oracles

MF: So let’s go onto oracles. The oracles in DLCs, this was what Tadge initially described in his paper… With a 2-of-3 multisig all the participants need to identify who the oracle is. The oracle is going to be signing in the 2-of-3. The two participants in the contract have to find an oracle who is willing to sign in case they don’t come to agreement. Tadge’s concept was that instead the oracle is just going to sign various statements and not actually know they are participating in any of these contracts. That’s what you have gone with, there are various reasons why you’ve gone with that approach. Why is it better if the oracle doesn’t know that their signed statements are being used in contracts?

BC: The first one you were talking about, the 2-of-3 multisig, your third party will know the exact UTXOs you are using onchain, they can see your transaction, they have to sign it. They can track which party won the money, they know where the funds are going, where the funds are coming from. It is really not private and you have an extra public key onchain so it is a little less scalable. For the actual oracle or your third party helper, they have a lot more regulatory risk. They might need to KYC your users, they might need to be a public entity. It makes it harder to do. Versus with the DLC model the oracle just signs “It is raining outside” or signs the Bitcoin price or “Biden won the Presidency”. That single signature can be used for every single user that is using them as an oracle. It is a lot easier for the oracle, they just put it out. For the users it is better because the oracle doesn’t know they are being used, they are hiding from the oracle, it is only between them and their counterparty.

NK: In some sense it is the difference between the “don’t be evil” and “can’t be evil” models. The 2-of-3 is the “don’t be evil” trust model. I guess it is not “can’t be evil” but the DLC oracle model is that it is much harder to be evil because you know so much less and you are essentially telling everyone the same thing.

MF: So let me take the evil case, take the devil’s advocate case. The disadvantages are that it is hard to incentivize the oracle to sign a lot of statements. You need an exact statement to be signed. If it is raining it needs to be exactly “Yes” or “No”. It can’t be “Yes it was raining” that was signed. You need the exact statement to be signed perfectly. If there is any difference then that signature won’t be a valid signature to feed into the DLC.

NK: This is actually one of the biggest motivations for writing a specification rather than just using this technology. The first month we built out fully functioning DLCs and since then it has been “Ok but everyone needs to be doing the same thing and working together” so this is nice and standardized. In the future this won’t be a problem because we will have standards that everyone can follow for announcing what messages they might sign and then signing exactly those messages. Signing a slightly different message is considered fraud in this world where we have these standards. In the same way that lying is considered fraud. I do think it is true that it is harder to pay an oblivious entity but this is actually a problem Lightning solves. There are a few different ways that one could end up incentivizing oracles. At the end of the day as long as one way works the market will make sure that it does. It does become more feasible once you also realize that there is a lot less overhead required to be a DLC oracle than to be a 2-of-3 signer where you have to be dealing with individuals directly. As opposed to just broadcasting messages. If you look at the DLC code that has been written there is a tonne of client side code and just a couple of files of DLC oracle side code. Essentially all the complexity in DLCs goes to the client, goes to the DLC participants and the oracle does the bare minimum. They are slightly harder to incentivize, I think that is universally true because of the constraints but I think it is still possible and it is cheaper to operate a DLC oracle. I think it works out.

BC: To highlight the difference in complexity of creating the oracle: Nadav and I probably spent a couple of months writing the actual DLC wallet code.

NK: At least a year.

BC: The initial spec. For the initial oracle I did it in a weekend.

MF: These downsides, I think they are all manageable. The other downside I could think of… Oracle announcements, rather than the flow being “Chris I want to enter into a bet on whether it is going to rain where I currently am” and us going through the oracle announcements and going “But there is no oracle that is willing to be part of this contract because the oracle doesn’t know whether it is raining where I live”. That’s the other downside. The flow is you go to the oracle announcements and you see what contract you want to enter into rather than Chris and me saying “Let’s do a contract on some weird bet that there isn’t an oracle announcement on.”

NK: There is certainly a bootstrapping problem when it comes to oracles with DLCs. That is one thing that we are working on a lot with this site. It is not just us posting these things, anyone can come post oracle stuff on this site. You can think of it like a forum, everyone is posting stuff on here that they can attest to. I think this is maybe a slightly more transient problem. In the future you could imagine weather.com having all their data signed on top of just being published. It is not that much more work for them necessarily. If they are doing any kind of oracle stuff, making it so that all of their stuff is signed isn’t a big next step. I hope in the future that finding an oracle isn’t the hardest problem. The hardest problem will be finding a counterparty which has its own struggles though this is also true for the 2-of-3 case. The other thing worth noting is that you can still ask someone to be your oracle even if they know you. Essentially the trust model reduces down to the 2-of-3 in that case. I would say DLCs do just as well as 2-of-3 in that case. They do better in cases where oracles already exist for other things. In the 2-of-3 case you still have to find someone who was willing to sign off on whatever is happening. You can do that and find them, tell them to be a DLC oracle, you are just not really using them like a DLC oracle at that point. Your trust model has reduced down to that of a 2-of-3.

BC: You still have some benefits. You and Chris enter into a bet, I don’t need to know how big your bet is or what your UTXOs are.

NK: It is a slight improvement.

MF: So it is just getting that bootstrap flow right. Perhaps the oracle announcements should all be “respectable”. Or people who everyone knows that are saying what the price is. Rather than a random person like me coming onto your oracle site and saying “It is raining outside my house now” when nobody cares.

NK: One consideration is that on this oracle site, one goal is to actually show someone’s past as well. You can see things they’ve done before and stuff like that. Maybe in the future there will be a more explicit reputation built into things. I think this is one of Lloyd’s motivations in using Twitter. You can bootstrap on Twitter reliability, whatever metrics exist there, they are faulty there as well. If someone has got a blue checkmark maybe it is worth a little bit more. But yeah, essentially I think Lloyd is using Twitter to get around various problems to do with verification and reputation when it comes to oracles.

MF: There is some very cool cryptographic stuff with oracles. You’ve talked about this on other things, we don’t need to go into too much detail, the signing two statements and losing your private key is a really cool trick. You can only sign one of the two statements. That’s the punishment. There is also the possibility to have multiple oracles. A multisig on statements.

NK: That works today.

MF: That works today in your current implementation of adaptor signatures onchain?

NK: Yes and that is in the specification.

MF: That is using what for the multisig?

NK: A lot of math.

MF: Can you use MuSig for that?

NK: We aren’t using direct public key… You don’t want to think of it like a multisignature. We are essentially combining anticipated… What you get from an oracle is their pubkey information and their potential outcomes. From that you can do some math using Tadge’s trick to compute an elliptic curve point. You can think of it like a public key but it is a point on an elliptic curve. That’s the anticipation for a specific event. Traditionally what you do with that is you use it as an encryption key for your signatures, that is what adaptor signatures do. What we are doing for multi oracle is you take not just one oracle’s point for one event but you combine a bunch of different oracles’ points for a bunch of different events in fancy ways. Then you do elliptic curve math on them to create this aggregate point that is associated with an event. It is a bit more complicated. Rather than just Alice saying “It is raining” you can have a point for the event “Alice says it is raining and Bob says it is raining and Chris says that it is either raining or it is partly cloudy”. You can create these kinds of ANDs and ORs of these points and create new ones and then use those to lock up the actual transactions that pay out to those events. It is quite a bit more complicated. This is one of the biggest things we’ve done that isn’t outlined in the white paper, which doesn’t even really mention what approaches can be taken. We had to develop our own algorithms and approaches. Multi oracle support, and numeric contract support more generally, uses some clever tricks that I and a couple of others have come up with to essentially enable multi oracle numeric contracts that even allow oracles not to attest to exactly the same number down to the last digit. They can have some variance, which you get to specify as the DLC participant, for what are acceptable differences between oracles.
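
As a rough illustration of the kind of point math being described here (the actual multi-oracle algorithms in the DLC spec are considerably more involved), an "AND" of two oracles' anticipation points for compatible outcomes can be built by simply adding the points, while an "OR" just means producing a separate locked payout for each acceptable case. point_add is the same hypothetical helper used in the earlier sketches.

    # Rough illustration only; not the DLC spec's actual algorithms.
    def and_of_anticipation_points(S_a, S_b):
        # A payout locked to S_a + S_b only becomes spendable once both
        # oracles have attested, because the needed secret is s_a + s_b.
        return point_add(S_a, S_b)

    # An "OR" is not a single aggregated point: you produce a separate
    # adaptor signature (a separate locked payout transaction) for each
    # combination of outcomes you are willing to accept.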

MF: This is adaptor signatures, this isn’t multisig with all the signatures going onchain or MuSig where it is key aggregation. This is math that kind of works out as multisig.

NK: That’s right. The generalization of multisig in computer science is called a monotone access structure if anyone is curious.

MF: That’s one to look up. The oracle stuff is very cool. Oracle failure cases, we didn’t go into this but obviously if you are only using one oracle and the oracle lies then you’ve got problems. That is where a reputation system comes in or using multiple oracles.

The impact of possible future soft forks on DLCs

MF: There is tonnes of stuff to do with Schnorr and Taproot. Is there anything for future soft forks that would be cool for DLCs? I saw one blog post from Jeremy Rubin on betting on Taproot activation. This wasn’t actually using CHECKTEMPLATEVERIFY, I think he was just using his Sapio programming language to do a bet on whether Taproot would activate. I think Ben put up a Taproot activation oracle at one point on Suredbits.

BC: On our most recent spec call Lloyd actually detailed a way of how to do DLCs not using adaptor signatures or anything, just using covenants. That is another potential way to do it but it is an alternative way, you don’t really get new functionality. There are some improvements with things like SIGHASH_ANYPREVOUT where you can make them more composable.

NK: It will be much easier to put DLCs in eltoo Lightning channels than in penalty Lightning channels. It is not just that it will be easier. Today if we found a way to hack together and create a slightly more generalized Lightning channel and put a DLC in it, just between two parties, no routing, having that DLC in your channel would slow down your channel operation. Because every time you want to do something else with that channel like a payment you have to re-sign all of the stuff to do with the DLC. If it is a small contract, only two outcomes or something, it is not a big deal. But if you have a large numeric contract that depends on the Bitcoin price or something inside of your Lightning channel then every single update that you make to your Lightning channel is going to take a couple of seconds, sometimes even more, which is not great. With eltoo this isn’t the case. You can use SIGHASH_ANYPREVOUT to dynamically bind your DLC into your channel so you don’t have to re-sign the DLC every time. The DLC can be a part of all future channel states until it is kicked out or settled or whatever. At a very high level you can think of SIGHASH_ANYPREVOUT as letting you dynamically rebind transactions so they don’t just spend this one output, they can spend any of the future outputs. That is how the Lightning update stuff works. Any transaction can spend any previous commitment transaction in an eltoo channel. Likewise any DLC output, you don’t have to re-sign all the time because you can dynamically rebind it onto future channel states without re-entering into the DLC from scratch. I think that is going to be a big improvement once we have DLCs in Lightning channels, to have SIGHASH_ANYPREVOUT. I think there are other tricks Ben that you know more about for how SIGHASH_ANYPREVOUT can be used to optimize DLCs.

BC: I think you can do more complex things. Say when I get paid to this address from a DLC, instead of it being issued to me I could have an already signed transaction that spends it from that address to 50 other parties. That way my DLC only needs one key to sign but the payout goes to 50 people. There are other fancy things you can do. That would help with the DLC put contracts in Chris’ blog post as well, you don’t need to constantly renegotiate it.

CS: With the coinbase put contract example the tough part for creating a DLC predicated on the coinbase transaction is the fact that miners tweak the scriptSig and the witness commitment which ends up changing the funding transaction ID for your DLC every time a hash is performed. With ANYPREVOUT you abstract away what the specific funding transaction is, rather you say it just has to match this template. You can bind it to any transaction that follows this template which makes things like what I talk about in this blog post more viable in the market.

Upcoming releases

MF: At some point in the future we’ll be doing DLCs between all of us in a channel factory. There is a lot of work to do. We’ll wrap up. What would you want people to do? Do you want people to find friends to do DLCs with, to use your software? I saw various videos of tutorials. Do you need oracles? Do you need people entering into DLCs? What do you need?

CS: We definitely need people entering into DLCs. We are about to land a pretty major upgrade to the user experience of the DLC process courtesy of a lot of hard work by Ben and Roman, streamlining the negotiation process of DLCs. It is going to look much more like Lightning channel openings now rather than the very complicated copy, paste message back and forth process that we had previously. It is going to be a much more streamlined experience. We should have that out in the next week or two. That’s when we really want people to get off to the races and really start trying out these things. The demo video you have up here is for becoming an oracle which isn’t going to change significantly in the next 3-6 months. That’s something you can do today. You don’t even need to know anything about a chain state data source. You can just start posting oracles on oracle.suredbits.com and telling your friends to enter into DLCs that are predicated off you being an oracle if they trust you of course.

NK: And if anyone out there is super into Rust, I know Thibaut (Le Guilly) who is the maintainer of rust-dlc just needs review on the thousands of lines of his code. That sounds daunting but there is always room for contribution on rust-dlc these days if you know any Rust.

BC: Check out the spec. We have a bunch of small issues, lazy things that we haven’t done yet, adding pretty pictures, correctly linking things around. There is other basic stuff. There is also high level stuff that we’d love review on too. Anything would help.

NK: If something is unreadable post an issue asking questions. Maybe we will get around to fixing it and making it easier to understand. We are adding lots of pictures, stuff like that. It is coming together.

MF: And lobby the eclair guys to support DLCs in eclair soon.

NK: Bother all your Lightning developer friends until they generalize their Lightning channels and put PTLCs into them.

CS: Or until they hate the Suredbits team.

MF: Great work guys, thanks for coming on. I’ll get a transcript up. I look forward to seeing the latest UX that comes out in a week or two.

diff --git a/london-bitcoin-devs/2021-11-02-socratic-seminar-assumeutxo/index.html b/london-bitcoin-devs/2021-11-02-socratic-seminar-assumeutxo/index.html index e091f1cdad..e22155f258 100644 --- a/london-bitcoin-devs/2021-11-02-socratic-seminar-assumeutxo/index.html +++ b/london-bitcoin-devs/2021-11-02-socratic-seminar-assumeutxo/index.html @@ -9,4 +9,4 @@ James O'Beirne, Calvin Kim, Tadge Dryja

Date: November 2, 2021

Transcript By: Michael Folkson

Tags: Assume utxo, Utreexo

Category: Meetup

Media: -https://www.youtube.com/watch?v=JottwT-kEdg

Reading list: https://gist.github.com/michaelfolkson/f46a7085af59b2e7b9a79047155c3993

Intros

Michael Folkson (MF): This is a discussion on AssumeUTXO. We are lucky to have James O’Beirne on the call. There is a reading list that I will share in a second with a bunch of links going right from concept, some of the podcasts James has done, a couple of presentations James has done. And then towards the end hopefully we will get pretty technical and in the weeds of some of the current, past and future PRs. Let’s do intros.

James O’Beirne (JOB): Thanks Michael for putting this together and for having me. It is nice to be able to participate in meetings like this. My name is James O’Beirne. I have been working on Bitcoin Core on and off for about 6 years. Spent some time at Chaincode Labs and continue to work full time on Bitcoin Core today. Excited to talk about AssumeUTXO and to answer any questions anybody has. The project has been going on for maybe 2 and a half years now which is a lot longer than I’d anticipated. Always happy to talk about it so thanks for the opportunity.

Why do we validate the whole blockchain from genesis?

https://btctranscripts.com/andreas-antonopoulos/2018-10-23-andreas-antonopoulos-initial-blockchain-download/

MF: We generally start with basics, some people who are beginners will get lost later. I am going to pick on some volunteers to discuss some of the basics. The first question is what is initial block download? Why do we do it? Why do we bother validating the whole blockchain from genesis?

Openoms (O): The whole point is to build the current UTXO set so we can see what coins are available to be spent currently and which have been spent before. We do it by downloading and validating all the blocks from the genesis block, from the beginning, going through which coins have been spent and ending up at the current chain tip, which builds the current UTXO set. We can know what has been spent before and what is available now.

JOB: At a very high level if you think of the state of Bitcoin as being a ledger you want to validate for yourself without taking anything for granted the current state of the ledger. The way you would do that is by replaying every single transaction that has ever happened to create the current state of the ledger and verifying for yourself that each transaction is valid. That is essentially what the initial block download is.
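
As a minimal conceptual sketch of what that replay computes, assuming simple block and transaction objects and ignoring coinbase handling, fee checks, script details and the rest of the consensus rules, the core loop might look like this (verify_signature is a hypothetical helper for the script checks):

    # Conceptual sketch only, not Bitcoin Core's actual validation code.
    def build_utxo_set(blocks):
        utxos = {}  # (txid, output_index) -> output
        for block in blocks:
            for tx in block.transactions:
                for txin in tx.inputs:
                    # Every input must spend an output that exists and is unspent.
                    spent = utxos.pop((txin.prev_txid, txin.prev_index))
                    verify_signature(txin, spent)  # the expensive part of IBD
                for index, txout in enumerate(tx.outputs):
                    utxos[(tx.txid, index)] = txout
        return utxos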

MF: We are replaying absolutely everything from genesis, all the rule changes, all the transactions, all the blocks and eventually we get to that current blockchain tip.

How long does the initial block download take?

https://blog.bitmex.com/bitcoins-initial-block-download/

https://blog.lopp.net/bitcoin-full-validation-sync-performance/

https://blog.lopp.net/2020-bitcoin-node-performance-tests/

MF: This does take a very long time to do. There are a few posts from BitMEX and Jameson Lopp on how long this takes. Although there have been improvements in recent years it can still take hours or days depending on your machine and your hardware. Any experience from the people on the call on how long this takes? Has anyone done an IBD recently?

O: A couple of hours on high end hardware with an i7 processor and a decent amount of RAM. On a Raspberry Pi with 4GB of RAM and a SSD it is between 2 and 3 days.

Stephan (S): I was using Raspiblitz and it took me like 5 days with a SSD. Similar magnitude.

O: It depends a lot on the peers you end up downloading from. For example if you have a full node on your local network and you add it as a peer then it can be quite quick in that sense. Also if you are downloading through Tor that can slow down things a lot.

Assumedvalid and checkpoints

https://bitcoincore.org/en/2017/03/08/release-0.14.0/#assumed-valid-blocks

MF: So obviously from a convenience perspective spending hours or days in this state where you can’t do anything isn’t ideal; a better thing to do would be to try to shorten that time. An obvious thing to do would be to just download blocks or validate blocks from a certain point in time rather than from genesis. The first attempt at either bringing this time down or at least providing some assurance to users that they are not spending hours or days verifying the wrong blockchain was assumed valid. Is anyone on the call aware of assumed valid other than James?

Fabian Jahr (FJ): Assumed valid is a point in the blockchain, a certain block height and/or block hash, from which point it is assumed that this is not going to be re-orged out. These are hardcoded in the Bitcoin Core codebase. This feature, this is before I got active in Bitcoin Core, I don’t know what the conversation around it was exactly, it is not really used anymore. There are no new assumed valid points introduced regularly into the codebase. But I am not 100 percent sure what the discussion was around that, what the latest status is, what are people’s opinions on that.

MF: I just highlighted what I’ve got on the shared screen, it was introduced in 0.3.2 so very early. The intention was more to prevent denial of service attacks during IBD rather than shortening the time of IBD. Presumably at 0.3.2 the time of IBD wasn’t anywhere near as long as it is now. James can correct me, but did it at that point or later skip validating signatures up until that assumed valid block?

JOB: Yes. It looks like when it was introduced it was called checkpoints which ended up being a pretty controversial change. A checkpoint dictates what the valid chain is. That is subtly different from what assumed valid does today which is to just say “If you are connecting blocks on this chain and you are connecting a block that is beneath the block designated by the assumed valid hash then you don’t have to check the signature”. It is not ruling out other chains which is what checkpoints do. The idea was to prevent denial of service attacks. Maybe someone sends you a really long chain of headers but that chain of headers has really low difficulty associated with it. Checkpointing is a different thing than assumed valid. As you said assumed valid really just skips the signature checks for each transaction below a certain block as an optimization. It doesn’t rule out another longer valid chain from coming along. For example if someone had been secretly mining a much longer chain than we have on mainnet with some unknown hash rate that is much greater than the hash rate we’ve been using, they could still come along and hypothetically re-org out the whole chain under assumed valid, whereas checkpoints don’t allow that. Checkpoints are a little bit controversial because they hardcode validity in the source code whereas assumed valid is just an optimization. It is worth noting that with assumed valid we are obviously still checking a lot about the block. We are checking the structure of the block, we are checking that the transactions all hash to the Merkle root and the Merkle root is in the block header and the block headers connect. You are still validating quite a bit about the block, you are just not doing the cryptographic signature checks that the community has done over and over again.
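
A conceptual sketch of that distinction, not Bitcoin Core's actual code: under assumed valid everything about a block is still checked except the script and signature checks for blocks that are ancestors of the assumed valid hash, whereas a checkpoint would reject competing chains outright. All helper names below are hypothetical.

    # Conceptual sketch only.
    def connect_block(block, chain, assumevalid_hash):
        check_pow_and_header(block, chain)        # always done
        check_merkle_root_and_structure(block)    # always done
        # True only if this block is an ancestor of the assumed valid block
        # on this chain; competing chains are NOT rejected, just fully checked.
        skip_scripts = is_ancestor_of_assumevalid(block, assumevalid_hash, chain)
        for tx in block.transactions:
            check_amounts_and_update_utxos(tx)    # always done
            if not skip_scripts:
                verify_scripts_and_signatures(tx) # skipped under assumed valid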

S: To be clear assumed valid is still being used today, it is just the checkpoints that are deprecated?

JOB: Yes. Assumed valid is used by default. It is the default configuration. Typically every release the assumed valid value is bumped to a recent block as of the time of the code. That typically happens every release. Calvin (Kim) is saying that checkpoints are still used. They are probably just really low height and could probably be removed. Assumed valid is still very much maintained. It is the default setting. If you want to opt out of it and do a full signature check for the entire chain you can do that by passing the value 0 for an assumed valid configuration.

Calvin Kim (CK): Just a point on removing the old checkpoints. Technically that is a hard fork.

JOB: Yeah, exactly. It would broaden the consensus rules, it would allow blocks that were previously invalid to be valid. You are right about that Calvin.

MF: With the existing checkpoints it is technically impossible to validate a chain which conflicts with the checkpoints hardcoded in, say, Core or another Bitcoin implementation.

JOB: Yes.

MF: Obviously these are unlikely scenarios but let’s say you were validating a chain that didn’t agree with or match the assumed valid block. What is the user experience then? Is it just flagged to the user? Is there a log that says the assumed valid in the code doesn’t match the chain that you are validating? Or is it based on proof of work?

JOB: It is still possible. If we were considering a chain that was more work than the chain that we know now that violated the hardcoded assumed valid default, that would indicate a re-org of thousands and thousands of blocks. It would probably be catastrophic at a higher level. I think the most that you would get as a user is maybe a log statement. I don’t even know if we have a log statement for that case. If something is on a chain that isn’t assumed valid you just do the signature validation. It is not any kind of special behavior.

S: The difference in performance would be that you skip signature validation until the previous assumed valid? There is only one valid hash in the binary? It is not like you can go back to the previous valid hash? It is all or nothing? If there is a re-org with a couple of thousand blocks the user would have to validate all signatures since the genesis block and not use the assumed valid hash of the previous Core release?

JOB: That sounds right. I am not sure offhand, I’d have to check the code.

S: There is only a single hash in the code right? Whereas checkpoints you have one every I don’t know 50,000 blocks. But for assumed valid it is only a single hash.

CK: That hash is backed by minimum chain work. There are some checks to make sure you are on the right chain.

JOB: Yeah minimum chain work is definitely another relevant setting here. That prevents someone from sending a really long chain of headers that may be longer in a strict count than mainnet but is lower work.
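
As an aside on how that check works, here is a rough Python sketch of comparing a headers chain’s cumulative work against a hardcoded minimum; the constant below is a placeholder, not Bitcoin Core’s actual nMinimumChainWork value.

    # Illustrative sketch: reject a headers chain whose total work is below a
    # hardcoded minimum, no matter how many headers it contains.
    def block_work(target: int) -> int:
        # Expected number of hashes to find a block at this target: ~2^256 / (target + 1).
        return (1 << 256) // (target + 1)

    MINIMUM_CHAIN_WORK = 10**20  # placeholder standing in for nMinimumChainWork

    def acceptable_headers_chain(targets) -> bool:
        return sum(block_work(t) for t in targets) >= MINIMUM_CHAIN_WORK

    # A very long chain of very easy (high-target) headers still fails the check.
    print(acceptable_headers_chain([2**255] * 100_000))  # False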

MF: This is relevant in an AssumeUTXO context because the concept is that you are going to validate only a fraction of the chain and then potentially do transactions, paying or receiving having only done a fraction of the initial block download. The rest of it from genesis would be happening in the background but you are already functional, you are already up and running, you are already potentially doing transactions, paying and receiving.

JOB: That’s the sketch of AssumeUTXO. With assumedvalid you are still downloading all the blocks. You have all the data on hand. AssumeUTXO allows you to take a serialized snapshot of the UTXO set, deserialize that, load that in from a certain height. In the same way as with assumedvalid we expect that by the time the code is distributed there are going to be hundreds if not thousands of blocks on top of the assumedvalid value, the same would go for AssumeUTXO. We would expect that the initial block download that you have to do is truncated but it is not completely eliminated. You are still downloading a few thousand blocks on top of that snapshot. In the background we then start a full initial block download.

The AssumeUTXO concept

Bitcoin Optech topics page: https://bitcoinops.org/en/topics/assumeutxo/

MF: Let’s go over the concept of AssumeUTXO. I’ve got the Optech topics page up. We are bootstrapping new full nodes, we are reducing this initial blockchain download time by only doing a fraction of it before we are able to send and receive transactions, before we are up and running. During IBD you can still play around with your node. You can still make RPC calls and do CLI commands etc. But you can’t do anything with the wallet until you’ve fully completed IBD, is that right?

JOB: That’s right.

MF: One thing it says in Optech is “embedded in the code of the node would be a hash of the set of all spendable bitcoins”. Is that correct? Are there alternatives to embedding it in the code of Core? I think you’ve talked before James about getting this UTXO snapshot out of band and then using a RPC command to start the process. Then it wouldn’t necessarily be in the code of Core or the implementation you are using?

JOB: The value that’s important to hardcode is the hash. If you take the UTXO set, the collection of unspent coins, you order them somehow, and then you hash them up. You get a single value like you might get at the root of a Merkle tree or something. Hardcoding that is important. It is more important to hardcode that and not allow it to be specified through the command line say in the same way as assumedvalid. If you can allow an attacker to give you a crafted value of that hash and then give you a bad snapshot then they can really mislead you about the current state of the UTXO set before you’ve finished back validating. We decided in the early stages of designing this proposal that that value should be hardcoded in the source code and shouldn’t be user specifiable. Unless you are recompiling the source code in which case anybody can tell you to make any change and potentially cause loss of funds that way. To your question once you have the UTXO set hash hardcoded you have to actually obtain the file of the UTXO set itself which is serialized out. That is going to be on the order of 4 gigabytes (correction) or so depending on how big the UTXO set is of course. In the early days of the proposal there was a phase 1 and phase 2. Phase 1 just added a RPC command that allowed you to activate a UTXO snapshot that you had obtained somehow, whether that’s through HTTP or torrent or Tor or whatever. The idea there is it really doesn’t matter how you obtain it because if you are doing a hash based on the content of the file anyway the channel that you get it through doesn’t have to be secure. When you download a copy of Bitcoin Core you don’t necessarily care about the HTTPS part if you are checking the binary hash validates against what you expect. It is the same idea here. Phase 2 is going to be building in some kind of distribution mechanism to the peer-to-peer network. In my opinion Phase 2 is looking less and less likely just because it would be a lot of work to do. It requires the use of something called erasure coding to ensure we can split up UTXO snapshots so no one is burdened with storing too much. It would be a really big job. In my opinion it doesn’t matter how you get the snapshot. There are lots of good protocols for doing decentralized distribution of this kind of stuff via torrents say. I think it is less likely there will be a bundled way of getting the snapshot within Core although that is certainly not out of the question.
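
As a rough illustration of why the delivery channel does not need to be trusted, here is a sketch of checking an out-of-band snapshot against a hardcoded hash before using it. Note the real implementation hashes the deserialized UTXO set contents in a specific way rather than the raw file bytes, and the file name and expected hash below are placeholders.

    import hashlib

    EXPECTED_SNAPSHOT_HASH = "placeholder-for-the-hash-hardcoded-in-the-source"

    def verify_snapshot(path: str, expected_hash: str = EXPECTED_SNAPSHOT_HASH) -> bool:
        """Hash a snapshot obtained over any channel (HTTP, torrent, a friend's USB stick)
        and only accept it if it matches the value shipped with the source code."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == expected_hash

    # Usage: the transport is untrusted; the hardcoded hash is what provides integrity.
    # if not verify_snapshot("utxo-snapshot.dat"):
    #     raise SystemExit("snapshot does not match the hardcoded hash; refusing to load")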

MF: These are the two phases you’ve discussed in other forums, podcasts and presentations. You said Phase 1 was allowing snapshot use via RPC with no distribution. This would be getting the snapshot from potentially anywhere, a friend, a torrent, whatever. Phase 2 which you say is unlikely now is building the distribution into the peer-to-peer layer. I was thinking that sounds very difficult. Then you have questions over how many nodes are supplying it and whether you can get a hold of it. Core and other implementations have to decide how recent a snapshot they need to provide which gets hairy right?

JOB: Absolutely.

S: Is the problem that you are making it very difficult for novice users to use AssumeUTXO? They have to go and find those snapshot files themselves in addition to making sure they get the right version of Bitcoin Core? If it is not bundled they are probably not going to do it or they are going to rely on third parties like Umbrel or Raspiblitz to bundle it for them. If you push towards that model you are relying on those third parties.

JOB: I think you are totally right, there is a convenience trade-off there. It is unfortunate that it is not easier for us to make this a seamless experience. My hope I guess is that eventually once the RPC commands are merged we can work on a GUI change that will allow someone to drag and drop the serialized UTXO set into the GUI or something. Maybe also link to a torrent for the snapshot from bitcoincore.org where you get the binary. It could say “If you want to expedite the IBD then do this”. You are totally right that it does complicate things for the end user. I thought that these changes were fairly simple for Phase 1 and I thought they would be integrated within a year and a half or so. It is now two and a half years later. We are maybe 60-70 percent done with Phase 1. I am very pessimistic about the pace of doing what I think is a much more complicated change in terms of the distribution stuff.

FJ: The types of users that really want this, the question is who really wants to get started within an hour or so of plugging the node in? There is a big overlap of the people who really want that and the people who get a Raspiblitz or Umbrel or whatever comes next that is really plug and play. This feature matches really well with that user base. Making it available for other people who are on the self sovereign side and do more by themselves, that would be great for them as well. But where I see the biggest leverage and adoption, Raspiblitz people are asking for it all the time. BTCPay Server built their own type of feature that was hacky and not completely trustless. There is a very good overlap between the people that are going to use this and that crowd.

O: There is a use case for example for the Specter wallet. Some people would want to use a self validating wallet connected to Bitcoin Core and get started as soon as possible. They have a similar solution to what BTCPay Server uses but even more trusted. The website is prunednode.today. You can download a signed version of a pruned snapshot from a node of Stepan Snigirev. Get started with a pruned node so you don’t even need to have the storage available for the whole blockchain. There is another alternative that LND uses already on mobile by default, the Breez wallet for example, which is Neutrino (BIP 157/158). Also the Wasabi wallet on desktop uses this. You only download the block headers and compact block filters, fetching blocks as needed and not more than that. It is more like a light wallet function rather than a fully validating one.

JOB: Yeah that’s a good point. In the early days of this proposal we were actually debating whether it would be easier to just integrate a more SPV or light client like behavior into Bitcoin Core instead of a proposal like this. It turned out that for a variety of reasons this seemed preferable. One, it seemed like less of an implementation burden because to build in SPV behavior to the node in a first class way would be a big change. Secondly I think the validation model here is quite a bit better than SPV. With a full view of the UTXO set you can do first class validation for recent blocks at the tip, better assurance.

O: There is no risk of having information withheld from you like with Neutrino. That’s the worry there, you are getting blocks which are not containing the transaction you are after.

JOB: Exactly.

Security model of AssumeUTXO

MF: We are kind of on the security model now. We won’t spend too much time on this but we should cover it. Presumably this has slowed progress on it. There are different perspectives. There is a perspective that perhaps you shouldn’t make any transactions, send or receive, until you’ve done a full initial block download. I’m assuming some people think that. Then you’ve got people in the middle that are kind of like “It is ok if the snapshot is really, really deep”, like years deep. Others will say months deep, some would say weeks. Are concerns over security slowing progress on this or have slowed progress up until this point?

JOB: I think at the beginning there were some misconceptions about security. There was really only one developer that had consistent misconceptions about security. That was Luke Dashjr. I think he is now in favor of the proposal. There are a few important aspects here. Number one, as someone mentioned earlier, there are a variety of approaches now with PGP signed data directories that people are passing around that I think are very subpar. I think everybody who is doing that has good intentions but it is the kind of thing where if one person’s PGP key gets compromised or one hosting provider gets compromised then you have the possibility of some really bad behavior. Number two, initial block download is a process that is linear with the growth of the chain. If the growth of the chain is going to continue indefinitely that’s an infinite process. At some point this needs to be truncated to something that is not infinite which AssumeUTXO does. I think that needs to happen at some point and I think this is the most conservative implementation of that. The third thing, this is an opt-in feature. Unlike assumedvalid which is a default setting you have to proactively use this. At least at this point given this implementation users aren’t nudged into doing AssumeUTXO in the same way as they are with assumedvalid. I think that’s an important consideration. Maybe one day this will be some kind of default but there is a lot more that would go into that before that’s the case. It is important to think about what kind of risks this introduces over assumedvalid. One of the main differences between assumedvalid and AssumeUTXO, in assumedvalid you still have the block, you still have the structure of the block and you are validating that the Merkle root hashes check out. Whereas with AssumeUTXO all you have are the results of that computation, the UTXO set. The way you would attack AssumeUTXO is to both give someone a malicious UTXO set hash that they are going to put in their source code and then to give them a maliciously constructed UTXO snapshot. Right now if you give them a maliciously constructed UTXO snapshot its contents will be hashed and the hash won’t validate against the hardcoded hash in the source code. The thinking there was that if someone can modify your source code arbitrarily, if someone can insert a bad AssumeUTXO hash into your source code there are all kinds of easier ways to cause loss of funds or steal your Bitcoin. If you assume someone can get to your source code and recompile your binary then you are already cooked anyway so it doesn’t really matter. Aside from that there really aren’t any practical differences in the security model between assumedvalid and AssumeUTXO. I am curious if anybody else has thinking on that.

MF: In a model where Core was putting a snapshot into the codebase that was thousands of blocks deep, months or years worth of blocks deep, as you say the only concern there is someone having control over the binary in its entirety in which case they can do anything. I am thinking about this scenario where people are passing around snapshots between friends. Perhaps someone uploads a snapshot that is way too recent. How is Core or another codebase going to stop people from using snapshots that are way too recent? Let’s say 6 blocks ago or 3 blocks ago, how do you stop them from doing that?

JOB: If someone say generates a snapshot that is 3 blocks old what would happen is they’d give that snapshot to someone else. If you were using an unmodified version of Bitcoin Core you would attempt to load that snapshot, we would hash the contents of the snapshot and we wouldn’t be able to find an entry in the AssumeUTXO data structure in the source code to validate against that. We would reject it as invalid. You can think about it like a whitelist of snapshots that are allowed per the code.
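
A sketch of that whitelist idea, with made-up hashes; in the actual implementation the allowed entries live in the chain parameters and only snapshots whose base block appears there can be activated.

    # Illustrative whitelist: base block hash -> expected hash of the UTXO set at that block.
    # Keys and values here are placeholders, not real mainnet data.
    ALLOWED_SNAPSHOTS = {
        "base_block_hash_at_height_700000": "expected_utxo_set_hash_at_height_700000",
    }

    def try_activate_snapshot(base_block_hash: str, utxo_set_hash: str) -> bool:
        expected = ALLOWED_SNAPSHOTS.get(base_block_hash)
        if expected is None:
            # e.g. a snapshot taken 3 blocks ago: its base block is not whitelisted.
            print("rejecting snapshot: base block not in the hardcoded list")
            return False
        if utxo_set_hash != expected:
            print("rejecting snapshot: contents do not hash to the expected value")
            return False
        print("snapshot accepted; syncing from its base block to the tip")
        return True

    try_activate_snapshot("base_block_hash_3_blocks_ago", "whatever")  # rejected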

O: Transacting with an unsynced node there is a big difference between sending and receiving. If you can send and the network accepts it and especially if the receiver receives it then you don’t need to verify that anymore. The question is when you are receiving something and you want confirmation. That would need the new blocks fed to you as well which is a much more complicated question.

JOB: That’s a great point. There is an asymmetry there.

O: You wouldn’t be able to verify without reaching the chain tip, fully synchronizing. You might be able to construct a transaction without doing that and then broadcast it.

JOB: Yes.

MF: So this is sounding less controversial than I thought it was initially.

Ross Clelland (RC): Are there concerns where the snapshot is being taken when a Lightning channel is opened? It is subsequently closed but you’ve given that snapshot to someone, they think the channel is still open so they could use that. Or would Lightning itself prevent that money from being sent?

JOB: I think that’s equivalent to the simple payment example where someone gives you a snapshot, you load in the snapshot. Until you’ve gotten to the network tip, the most recent block that everyone has seen, you can’t make assertions about what is spent and what is unspent. I think they are analogous.

MF: I’m wavering on this now. I came into this thinking you would definitely want the user to complete that initial block download from genesis. As you say this is conservatively designed in that you do actually complete the full IBD, it is just you do the from genesis part after you are up and running. Whether you should actually do that IBD from genesis at all, and what the point of it is, seems like an open question.

JOB: It is a question. I agree that the value of doing the background IBD is debatable. From my perspective as the person proposing this and doing the implementation I wanted to make it as appealing and as conservative as I could so I didn’t strip that out. But there are certainly other engineers who thought that it was necessary. If you were willing to accept the UTXO security model for a few days or a few weeks… In assumedvalid we don’t in the background validate all the signatures after we are done syncing. The analog for AssumeUTXO wouldn’t be to do background validation. I figured I’d rather people default into doing the initial block download at some point and having the block data on hand, being able to serve it to their peers if necessary, than not. But I think you raise a good question that frankly still could be debated and at some point could change.

MF: There would definitely be people that oppose that.

O: AssumeUTXO, would it be able to start in a pruned node and not need to download the full chain then?

JOB: AssumeUTXO is compatible with pruning. You still have to retain enough data to be able to arrive at the UTXO set that you started the snapshot from. It basically for most intents and purposes doesn’t change pruning.

O: The big benefit of having a validated chain or having a full chain, validated or not, is if you want to be able to use an already used wallet with some transaction history. You need to have the blocks containing those transactions at hand so you can rescan and reproduce the history of a used wallet. A pruned node is all fine until you need to rescan. Then you would need to start from the beginning most of the time unless the blocks you need haven’t been pruned yet.

JOB: Exactly. Even with that one interesting line of thinking is to use the compact block filters and to index those compact block filters. If you need to rescan, use the filters, decide which blocks you are interested in and go out and retrieve those from the network as necessary. That’s an unrelated question.

O: There is an implementation of a similar idea called btc-rpc-proxy. That is outside Bitcoin Core but it is a service that simulates a full chain to the software calling the Bitcoin RPC interface. It sits behind Bitcoin Core and the Bitcoin RPC basically. You still have a pruned node, but if a request comes in for a block which is not on disk anymore because it has been pruned, it fetches that block, similar to how block filter clients fetch blocks, and serves it to the requesting software. To that software it looks like it is an unpruned node but in reality it is not. There is this full node company called Embassy who are running on an SD card by default and they use a pruned chain. They also develop and use this btc-rpc-proxy which allows them to not need 300 GB of storage, they just need an SD card with the OS and the pruned node. This is something in between the block filters and the pruned, full node trade-off.

MF: I’m guessing the strongest argument for still doing IBD from genesis is what you said James, in principle ideally we want everyone to be able to fetch and validate the whole chain. And so if we relax that conservatism and don’t do that by default, then perhaps those blocks and transactions from the first few years of Bitcoin become just impossible to retrieve.

JOB: Right. Someone early in the proposal made the argument, in theory everybody is incentivized just to run a pruned node. Why would you keep the extra storage around? I don’t think they are quite comparable. Running a pruned node doesn’t make the IBD go faster. Pretty much everyone wants IBD to go faster. If it was easy enough almost everybody would want to do AssumeUTXO, it gives you an operable node much more quickly. In that case you would then have the default of not having the blocks. In practice it may be the case that many more people don’t have the full chain history on disk. I think you are right.

MF: There would be disagreement I think on relaxing that conservative default. I’m sure some people would oppose that. Is there anything else on the security model of AssumeUTXO?

S: James you mentioned if you already trust the developers and the Core binaries to be secure, trusting the AssumeUTXO hash wouldn’t introduce extra vulnerabilities. Isn’t that a dangerous path because we know the different parts of the codebase have different levels of scrutiny? Some code is reviewed much more in depth than maybe some test PR. The implications of a malicious AssumeUTXO hash are pretty big. It is a magic number that would get updated every Core release? Or maybe in 5 years time people won’t be surprised if they see a new hash and it won’t get reviewed as much. The impact this can have is huge. If this was ever changed maliciously it would have huge implications on the reliability and trust of the ecosystem. Whereas if the assumedvalid hash is wrong it is no big deal. People might take longer to do IBD but no money can be stolen. There is no perceived inflation and so on. There is a scale of trust in developers and you don’t want to assume everything is correct all the time.

JOB: This is a really, really good question. It gets to the heart of the design with AssumeUTXO which I feel really optimistic about. In exactly the way you are saying there are certain parts of the codebase that are scrutinized more heavily than others. In my experience working on some of the code that handles the UTXO set it can be very, very difficult to reason about. Even for people who are well versed in C++, even for people who know the Core codebase. What I love about the AssumeUTXO design is that it is very easy for even someone who is semi technical to verify that an AssumeUTXO hash is correct or not correct. As long as they have a full node they can run a script and check that value. Whereas if we try to do optimizations that are slightly more sophisticated or opaque or technically complicated there are very few people capable of actually scrutinizing those changes. It is very hard to read a lot of Core’s code. Not many people these days casually browse the repo with C++ knowledge sufficient to do that. I really like the idea of crystallizing validation into this one value. You are right, it is toxic waste and has to be scrutinized, but it is very readily scrutinized by a wider variety of people than say the CCoin cache is.

S: Independent nodes keeping a full state, that could be checked to make sure the hash is always correct.

JOB: Definitely. You could automatically parse the chain params, the cpp file, extract out the AssumeUTXO values and validate that against your own node pretty trivially. That is a great idea.
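
A sketch of that kind of automated check, assuming you can compute your own node’s UTXO set hash at the relevant heights (for example with the invalidateblock procedure James describes later); the regular expression is only a guess at the chainparams layout and would need adjusting to the real source.

    import re

    def extract_assumeutxo_entries(chainparams_source: str):
        """Pull (height, hash) pairs out of a chainparams.cpp-style listing. The pattern
        assumes entries look roughly like {700000, AssumeutxoHash{uint256S("0x...")}},
        which is an assumption about the layout, not a guarantee."""
        pattern = re.compile(r'\{\s*(\d+)\s*,\s*AssumeutxoHash\{uint256S\("0x([0-9a-f]+)"\)\}')
        return [(int(height), digest) for height, digest in pattern.findall(chainparams_source)]

    def check_against_own_node(entries, own_hashes):
        """own_hashes: {height: hash} computed independently from your own full node."""
        for height, expected in entries:
            status = "OK" if own_hashes.get(height) == expected else "MISMATCH"
            print(f"height {height}: {status}")

    # Hypothetical usage with a made-up hash value:
    src = '{700000, AssumeutxoHash{uint256S("0xabc123")}}'
    check_against_own_node(extract_assumeutxo_entries(src), {700000: "abc123"})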

MF: I was thinking there would be a warning in the GUI if you say made a payment or received a payment without having completed IBD. I was thinking there would be information to the user “You haven’t completed IBD. Therefore there is a slightly higher risk”. But I’m thinking that is not even necessary. The risk is not zero, there could be some crazy re-org back there but it is basically zero to many decimal places.

JOB: I agree. I don’t know what the user’s action would be subsequently. It is such a vanishingly low probability that there would be some kind of attack there. If you assume that the user got a malicious binary that had a bad AssumeUTXO hash in it they would strip out that warning too.

Thaddeus Dryja (TD): It seems like the only thing you can do is crash. The binary is not self consistent. It is like “Hey, this binary has detected a state that the binary itself asserts could never happen”. So crash or something.

How AssumeUTXO interacts with other projects like Utreexo

Utreexo: https://dci.mit.edu/utreexo

Utreexo paper: https://eprint.iacr.org/2019/611.pdf

Utreexo demo release: https://medium.com/mit-media-lab-digital-currency-initiative/utreexo-demo-release-0-2-ac40a1223a38

Faster Blockchain Validation with Utreexo Accumulators: https://blog.bitmex.com/faster-blockchain-validation-with-utreexo-accumulators/

MF: The next topic I have up, it is good Tadge and Calvin are here, how this interacts with other projects. From where I’m sitting this lays some foundations for doing more interesting stuff with IBD to reduce IBD time. With AssumeUTXO this is the first step to not having a linear IBD from genesis to tip. This is the first heavy lifting and refactoring of the Core codebase to allow that to happen. One potential next step is Tadge’s and Calvin’s project which is Utreexo. You could perhaps, I don’t know the viability of this, have two or multiple fractions of the chain being validated in parallel. Rather than just having the snapshot to the tip and when that’s done the genesis to the snapshot, you could perhaps have genesis to the snapshot being verified at the same time as snapshot to tip. We can go to James first for thoughts on whether this does actually lay foundations for Utreexo and then we’ll go to Tadge and Calvin and anybody else who has more thoughts on that.

JOB: Definitely eager to hear Calvin and Tadge talk about Utreexo. I am really glad that it seems like a lot of the prerequisite refactoring for AssumeUTXO has already paid some dividends in terms of a project from Carl Dong called libbitcoinkernel. This is trying to in a really rigorous way separate out the consensus critical data structures and runtime from Bitcoin Core so we can have other implementations safely wrap the consensus stuff, do different implementations of the mempool or peer-to-peer network. Some general background, refactoring in Bitcoin Core is not often kindly looked upon. Having the AssumeUTXO project as justification allowed me to do some fairly fundamental refactoring that would have been hard to justify otherwise. The way that things previously worked was in the codebase there was a global chainstate object that managed a single view of the UTXO set, a single view of the blocks and the headers. What we did in lieu of that is introduce an object called the ChainstateManager which abstracts a bunch of operations on the chain and allows us in the particular case of AssumeUTXO to have a background chainstate and a foreground or active chainstate. That is what we do right now. In principle the abstraction of the ChainstateManager allows us to do pretty much anything. We could have any number of particular chainstates running round doing things, being validated and use the ChainstateManager to abstract that. It is a nice feature of the code now to be able to do that.
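
A toy Python sketch of the shape of that abstraction, with invented method names, just to illustrate the active versus background chainstate split; the real ChainstateManager is C++ and considerably more involved.

    class Chainstate:
        """One view of the chain: its own UTXO set and tip (heavily simplified)."""
        def __init__(self, name, start_height):
            self.name = name
            self.utxo_set = {}
            self.tip_height = start_height

    class ChainstateManager:
        """Owns any number of chainstates; callers ask it which one is active."""
        def __init__(self):
            self.chainstates = []

        def activate_snapshot(self, snapshot_height):
            # The active chainstate starts from the snapshot and tracks the network tip;
            # the background chainstate keeps validating from genesis up to the snapshot.
            self.chainstates = [
                Chainstate("background (genesis..snapshot)", 0),
                Chainstate("active (snapshot..tip)", snapshot_height),
            ]

        def active_chainstate(self):
            return self.chainstates[-1]

    chainman = ChainstateManager()
    chainman.activate_snapshot(700_000)
    print(chainman.active_chainstate().name)  # active (snapshot..tip)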

TD: Calvin has written a bunch of code to do exactly that, running multiple threads at once or even multiple machines at once doing it.

CK: James talked about how you could have a background IBD and a little bit of IBD from the AssumeUTXO set. The Utreexo project represents the UTXO set in a bunch of different Merkle trees. It allows you to represent the UTXO set in a very few hashes. Just the roots of each Merkle tree. This reduces the size of how you could represent things. With AssumeUTXO you have to hash the entire UTXO set, with Utreexo it is Merkle tree-ified. You take these roots and that is what you would validate against. That is essentially your UTXO set. What you could do is represent the UTXO set at a height in less than a kilobyte. That allows a lot of flexibility. Whereas before you had to download these 4GB, now you don’t really have to. It is possible to fit everything into a binary. That almost solves the problem of distributing these 4GB. It introduces some other problems but yeah. The projects that I have been working on, you have a bunch of different threads going and each thread is syncing a different part of the chain. Let’s say you start up 3 threads, one would be syncing 0 to 1000, another would be syncing 1001 to 2000 and the third 2001 to 3000. Once all those are finished you would check against the Utreexo set representation. If you start from genesis and reach block 1000, that’s where you were provided the accumulated root. You would compare against it; if they match up, great, you just saved a bunch of time. You already started verifying from 1000 to 2000. You do this a lot. It allows for more efficient use of your CPU cores. You could go further and say “We don’t even need to do this on a single machine. You could split the IBD into multiple machines.” A really fun thing you could do is have a phone and your laptop do IBD validation at the same time. It doesn’t even have to be continuous. If you have a Raspberry Pi going in the background you could be like “I’m sleeping. I don’t need my laptop. I am going to use my laptop to help my Raspberry Pi sync up.” Those are some fun things you could do. Or if you are like “I trust my family. Could you bring your laptop? We’ll do the IBD.”
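
A rough sketch of the parallel chunks idea, with a toy hash chain standing in for the Utreexo forest roots; in practice the per-chunk roots would be real accumulator states provided alongside the chain data.

    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    def toy_root(start, end):
        """Stand-in for the accumulator root after processing blocks start..end."""
        h = hashlib.sha256()
        for height in range(start, end + 1):
            h.update(height.to_bytes(4, "little"))  # stand-in for real block processing
        return h.hexdigest()

    # Roots we were handed for each chunk (computed honestly here just for the demo).
    chunks = [(0, 1000), (1001, 2000), (2001, 3000)]
    provided_roots = {c: toy_root(*c) for c in chunks}

    def validate_chunk(chunk):
        return chunk, toy_root(*chunk) == provided_roots[chunk]

    # Each worker validates its own slice of the chain; results are stitched together.
    with ThreadPoolExecutor(max_workers=3) as pool:
        for chunk, ok in pool.map(validate_chunk, chunks):
            print(chunk, "matches provided root" if ok else "MISMATCH")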

TD: We have software that does this, testing software, and it does seem to give a big speedup. Even on one machine, if you are like “Let me have 8 threads that are all independently doing IBD from different points in the blockchain”. You get a lot better usage of your CPU because you have got so many different things going on. They are small enough that you can throw them around and keep them in CPU cache. You do get a bit of a speedup there.

MF: From a high level perspective this seems like a big change rather than like a refactoring type reorganization change that AssumeUTXO seems to be.

TD: Different peer-to-peer messages, yeah.

MF: I’ve heard Utreexo has a different commitment structure so it can’t use rolling hashes. How much of this is building on AssumeUTXO?

TD: Conceptually there are some similarities. It would be really cool if there was a way to switch between them. Somehow to say “If you have your AssumeUTXO giant UTXO commitment you could turn that into a Utreexo style commitment or vice versa.” It is not super obvious how you do that. In Utreexo the order that things were inserted and deleted matters because of how the accumulator works. It is not like you can, given just the UTXO set, generate the commitment state of Utreexo. Maybe someday we can make something like that, that would be really useful, but we don’t have any software right now to say “Take a current node that is synced up. It has got its chainstate folder with the LevelDB of the 80 million or whatever UTXOs that exist. That is what the Utreexo accumulator represents. Take all that data and turn it into a Utreexo accumulator.” We can’t right now because the order things were added and deleted changes the accumulator unfortunately. It is not just what is in it. Utreexo does need its own weird messages, you have to send proofs. Whereas in AssumeUTXO nobody is going to know you are using this. Nobody you are connected to sees any difference. Maybe if they are really trying they can be like “He asked for block 700,000 and then a few hours later he asked for block 5. That’s weird.” But even that, you’d have to be connected to the same serving nodes and it would be hard to tell. Whereas something like Utreexo, it is totally different messages. It is very clear. If no one is there to serve you these things you are not going to be able to use it. It is a bigger change in that sense.

JOB: I think the strongest relationship between the two is that the AssumeUTXO work from a Core implementation standpoint paved the way to make it easier to do Utreexo in a sense. Some of the abstractions that it introduced are going to make it a lot easier to lift in the Utreexo operations. It took probably a year to get in the really foundational refactoring necessary to move away from a single chainstate and a single UTXO set.

TD: The stuff we’ve been working on, some of it dealing with multiple chainstates, but also dealing with not having a database. It is kind of like “Wait, what?”. There is no database. A lot of the software definitely assumes that you’ve got LevelDB there. That is what you are using when you are validating blocks and stuff. A lot of software is like “There are ways to pretend that it is there or make a disposable chainstate every time you validate a block and things like that”. That’s some of the refactoring we’ve been looking at. That is probably going even further than the changes in AssumeUTXO. But definitely using that and changing even more.

JOB: For anybody who is not familiar with Utreexo, Utreexo kind of inverts the validation process. Instead of checking some all encompassing UTXO set whether a coin is spendable you basically attach a proof along with the transaction that shows that given a Merkle root for the entire Utreexo set at some height the coin exists in that tree and is spendable.
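
For readers unfamiliar with that inversion, here is a small sketch of verifying a Merkle inclusion proof against a root. Real Utreexo proofs work over a forest of perfect trees with leaf positions, so this only conveys the flavour of the idea.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
        """proof is a list of (sibling_hash, sibling_is_left) pairs from leaf to root."""
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    # Tiny demo: a 4-leaf tree, proving the leaf "utxo2" is in the set.
    leaves = [h(x) for x in (b"utxo0", b"utxo1", b"utxo2", b"utxo3")]
    n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
    root = h(n01 + n23)
    proof_for_utxo2 = [(leaves[3], False), (n01, True)]
    print(verify_inclusion(b"utxo2", proof_for_utxo2, root))  # True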

TD: One thing I do want to clarify, a lot of people think it is a UTXO commitment scheme. In the software we are using and I think just theoretically it bears no commitment. The miners or the blocks never tell you what that UTXO set is. Similar to today, if you are downloading a block there is no indication what the UTXO set is. You generate it yourself and hopefully everyone’s UTXO set is the same because they started from nothing and built the same thing based on the blocks. In Utreexo as well no one gives you that, you build it yourself. It is very much a direct replacement for the database. No one is going to give it to you but you build it yourself. Hopefully everyone’s is the same. It is a bit different in that if everyone’s is not the same it breaks. Whereas today, hopefully we don’t, it is a bug, but you may have a slightly different UTXO set than someone else. One thing we saw when we were working on this involved stuff like OP_RETURN or is_unspendable outputs, things you can immediately see and say “I’m not going to even put that in the UTXO set because you are never going to spend that”. You can totally run a Bitcoin Core node where you put all the OP_RETURNs in the UTXO database. It will work, it is a waste of space because they are never going to get spent but you could do that, it’ll still work. With something like Utreexo that becomes a consensus rule which is kind of scary. If everyone doesn’t have the same set the proofs are no longer compatible. If someone says “I put in OP_RETURNs” or “I put in these kinds of things” they are going to be incompatible. That is a little scary. That’s another issue. I did want to clarify that. There has been so much talk over the years about commitments, “Let’s commit to the UTXO set in some kind of message the miner makes”. This does not do that. That may be something that you could later do with the same technology but that is a whole other can of worms. None of these are doing that right now.

JOB: It is worth noting as well that at some point there was talk about incorporating a UTXO set hash into the block header. Instead of having a specifically hardcoded AssumeUTXO hash in the codebase we could use the block headers. But as Tadge is saying that is a much bigger question.

TD: A lot of people think it is a commitment scheme. I don’t know that it actually gives you that much in either case. Even AssumeUTXO, you are going to want to validate the last couple of weeks or months anyway. You can hardcode that far back in the binary. Does it really help that much to have miners committing to these things? I don’t know.

JOB: That’s a fundamental shift of trust towards the miners as well.

TD: Even if you were willing to do that, “Let’s make this a commitment” there are a lot of downsides. The biggest that I’ve talked to people about, at least for Utreexo, it keeps changing. We have cool ideas and now it is twice as fast as it was last month or something. We definitely don’t want to commit to some kind of soft fork because what if you find a way better idea in a year but now you’ve got this soft fork thing stuck in that no one wants to use anymore. Also it doesn’t seem to get you that much. For the risk and the change in trust it seems like you can do a lot of really cool things without that. I haven’t really even looked into that much because it doesn’t seem it will help us that much. Let’s not even go there. It is the same sort of idea with AssumeUTXO? It doesn’t seem like there is any need to involve miners?

JOB: Absolutely. Given it has taken two and a half years just to do the implementation I didn’t want to undertake anything even getting close to a soft fork.

TD: That’s what is cool about it. These are improvements that aren’t even forks at all. If you want to use it, cool and if you don’t… That’s one of the things we talk about a lot with Utreexo, it is a big enough change that it is going to be hard to get into Bitcoin Core and rightly so. Bitcoin Core is quite conservative with changing all this stuff and it is a pretty big change. Let’s try testnet versions that aren’t Core and play around with that first. Maybe it is the kind of thing where people use it for a year or two without it being merged into Core. Then it is long enough and well tested enough that people are like “We can move it in now”.

JOB: I was going to say that hopefully with libbitcoinkernel it might even be practical or advisable to do Utreexo as a separate project. At the same time I think you guys are going to be modifying so much of what is considered consensus critical. The benefits of having a shared library like libbitcoinkernel probably aren’t as much as if you just want to do your own mempool or do a different peer-to-peer relay.

TD: The messages are different. I have talked to other people who are interested in libbitcoinkernel, libconsensus, that kind of stuff. Getting rid of the database code as consensus code is something people are very interested in. Talking to fanquake, fanquake got rid of OpenSSL in Bitcoin Core a year or two ago. He was like “If we could get rid of LevelDB that would be amazing”. LevelDB is pretty big and does affect consensus. It would be great if it didn’t or if it was walled off or isolated a bit more. LevelDB seems fine and everyone is using it. The transition from BerkeleyDB to LevelDB was this big unintentional hard fork. That’s a scary thing to do if you want to change database stuff. But you may want to in the future. Walling that off would be another project that we want. This is part of it. You sort of need to wall it off a bit to do things like AssumeUTXO as well.

MF: We could definitely have a whole session on Utreexo but we should probably move on. Final question, I am just trying to understand how they could potentially play together. Could you perhaps do a normal IBD from snapshot to tip and then use Utreexo from genesis to snapshot? How are they going to play together? Would you use both?

TD: That you could. You couldn’t do the opposite. You couldn’t say “I am going to do normal IBD from snapshot to tip without doing the genesis to snapshot through Utreexo”. Unfortunately right now Utreexo does need to build itself up from scratch if you want to prove things. You could potentially do something like that. That is sort of a weird case, “I want to start really fast but then I want to be able to provide all possible services to everyone”. It could be a use case.

CK: Just to note, feature wise I think they are fairly different. It is just conceptually and in terms of code a lot of stuff that needs to happen are the same. That’s where they are similar.

TD: It would be cool and something we should look at. I did talk about that a little bit a few weeks ago, a different accumulator design. Maybe there are ways to switch from a complete UTXO set and into the accumulator itself. Right now AssumeUTXO does have this hash that it hardcodes in, “Here’s the UTXO set. Go download it and make sure it matches this hash.” If you could also make that something that would accept Utreexo proofs then that is really cool. It works for both. Maybe nodes that support one could provide UTXO set data to the other and stuff. I don’t know how to do that yet. I have some vague ideas that I need to work on. That would be cool if it were possible, it might not be.

MF: There are a few other proposals or mini projects I thought we’d briefly cover. The impact of the rolling UTXO set hash on AssumeUTXO. Have you used this as part of AssumeUTXO? At least in my understanding there was an intersection between these two projects. Is that correct?

FJ: There is not really an intersection. The hashes that they use now, they could use MuHash instead. But right now it uses the “legacy” hash of the UTXO set. James was already working on AssumeUTXO when I got started with this stuff. The whole UTXO set project was based on the old style of hash. It was very unclear if and when the rolling hash, MuHash, would get in. There was really no reason to change that. Now it is in there could be potentially in the future, instead of the serialized hash, the MuHash could be used. That would give some upside if you run the CoinStats index. It is much easier to get hashes of UTXO sets that are in the past. James has a script to do it with a node but I guess the node is not really operational while you are running the script. It is a bit hacky. This could potentially make it a lot nicer but then of course you would still need to make changes in the sense that these saved hashes in the codebase would need to be MuHash hashes. Of course you could have both in there. That is still a bit off, MuHash first needs to be considered 100 percent safe. We still need to watch it a bit more and get a bit more experience with it and give it a bit more time before that makes sense.
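
To illustrate what a rolling set hash buys you, here is a toy multiplicative construction in Python; the real MuHash in Bitcoin Core uses 3072-bit arithmetic and a specific hash-to-integer step, so the modulus and hashing below are simplified placeholders.

    import hashlib

    MODULUS = 2**127 - 1  # illustrative prime; MuHash3072 uses a 3072-bit modulus

    def element_to_int(element: bytes) -> int:
        return int.from_bytes(hashlib.sha256(element).digest(), "big") % MODULUS

    class RollingSetHash:
        """Order-independent set hash with incremental add and remove."""
        def __init__(self):
            self.acc = 1

        def add(self, element: bytes):
            self.acc = (self.acc * element_to_int(element)) % MODULUS

        def remove(self, element: bytes):
            # Divide the element back out via its modular inverse.
            self.acc = (self.acc * pow(element_to_int(element), -1, MODULUS)) % MODULUS

    # Adding a coin when it is created and removing it when it is spent keeps the hash
    # current without rehashing the whole set, and the result is order-independent.
    a, b = RollingSetHash(), RollingSetHash()
    a.add(b"utxo1"); a.add(b"utxo2"); a.remove(b"utxo1")
    b.add(b"utxo2")
    print(a.acc == b.acc)  # True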

JOB: Definitely. As Fabian mentioned the process right now for checking the AssumeUTXO hash on your own node is pretty hacky because what you have to do is call invalidateblock on the block after the snapshot block so that you roll your chain back to the snapshot point. Then you calculate the hash. All the while obviously your node is not at network tip and so not operable. You have to reconsider a block and rewind back to tip. That does take probably 15 minutes at least.
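
In practice that procedure is a short sequence of RPC calls. A rough sketch driving bitcoin-cli from Python is below; the exact gettxoutsetinfo field to compare (the legacy serialized hash versus a MuHash value) depends on the version you are running, so treat the field names as assumptions.

    import json, subprocess

    def cli(*args):
        """Call bitcoin-cli against a running node and parse JSON results where possible."""
        out = subprocess.run(["bitcoin-cli", *args], capture_output=True,
                             text=True, check=True).stdout
        try:
            return json.loads(out)
        except json.JSONDecodeError:
            return out.strip()

    SNAPSHOT_HEIGHT = 700000  # placeholder height

    # Roll the chain back to the snapshot block, hash the UTXO set, then roll forward again.
    block_after_snapshot = cli("getblockhash", str(SNAPSHOT_HEIGHT + 1))
    cli("invalidateblock", block_after_snapshot)   # node is off the network tip while rolled back
    info = cli("gettxoutsetinfo")                  # field name varies between versions
    print(info.get("hash_serialized_2") or info.get("hash_serialized_3") or info.get("muhash"))
    cli("reconsiderblock", block_after_snapshot)   # re-validate forward to the real tip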

MF: It says here “It is conceivable that AssumeUTXO could use the rolling UTXO set hash but it doesn’t currently”. That’s right yeah?

JOB: AssumeUTXO is really agnostic in terms of the particular commitment as long as you hardcode it. As Fabian said you could imagine that if we decide MuHash has the security properties that we’re comfortable with you could have a side by side listing or something to facilitate migration, just move to MuHash.

MF: There were a couple more that were interesting. I don’t know if James or anybody else wants to comment on these. There was the periodic automatic UTXO database backup that has been open for a number of years from Wladimir.

JOB: I haven’t really looked at this in depth.

MF: The other one was the UHS, I don’t know what UHS stands for but that was a post from Cory Fields back in 2018.

TD: Unspent hash set I think.

MF: Full node security without maintaining a full UTXO set.

TD: I know a little bit about that. Definitely talking to Cory about that stuff was one of the things that led to Utreexo. Let’s keep LevelDB, the UTXO set, instead of storing the outpoint and the pubkey script and the amount, let’s just take the hash of all that. You do need to send these proofs, the preimages, when you are spending a transaction but they are very small. Most of the data you can get from the transaction itself. You just can’t get the amount I guess because you don’t know exactly how many satoshis. The txid and the index itself you’ll see right there in the input of the transaction. The fun thing is most of the time you don’t need the pubkey script. You can figure it out yourself. With regular P2PKH you see the pubkey posted in the input and the signature, you can just hash that pubkey and see if it matches. It should if it works. You only have to send 12 bytes or something for each UTXO you are spending. We use this code and this technique in Utreexo as well. This is a much simpler thing though because you still have the UTXO set in your database, it is just hashes instead of the preimage data. It is not that much smaller. It cuts in half-ish.
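
A sketch of what storing hashes instead of full UTXO data might look like, with an invented serialization; the point is that the spender supplies the few missing preimage pieces (mainly the amount) and everything else is reconstructed from the spending transaction itself.

    import hashlib

    def uhs_entry(txid: str, index: int, script_pubkey: bytes, amount_sats: int) -> bytes:
        """Hash of everything we would otherwise store for one UTXO (illustrative encoding)."""
        preimage = f"{txid}:{index}:{amount_sats}:".encode() + script_pubkey
        return hashlib.sha256(preimage).digest()

    # Instead of a map from outpoint -> (script, amount), keep only a set of hashes.
    utxo_hashes = {uhs_entry("aa" * 32, 0, b"\x00\x14" + b"\x11" * 20, 50_000)}

    def spend(txid: str, index: int, script_pubkey: bytes, claimed_amount: int) -> bool:
        """txid/index/script are derived from the spending transaction; the amount rides
        along as a small proof."""
        entry = uhs_entry(txid, index, script_pubkey, claimed_amount)
        if entry not in utxo_hashes:
            return False
        utxo_hashes.remove(entry)
        return True

    print(spend("aa" * 32, 0, b"\x00\x14" + b"\x11" * 20, 60_000))  # False: wrong amount
    print(spend("aa" * 32, 0, b"\x00\x14" + b"\x11" * 20, 50_000))  # True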

The AssumeUTXO implementation in Bitcoin Core

https://github.com/bitcoin/bitcoin/pull/15606

MF: Let’s talk about the implementation in Core then to finish. The evolution of this, you have the prototype just to check it works or convince yourself and convince others that this was worth pursuing. You said this wasn’t mergeable, this was just a prototype. Any interesting things that came out of this other than just initial feedback?

JOB: That started out as a rough prototype. I think it was a single commit when I opened it. It has slowly evolved into something resembling mergeable. At the moment I think it needs a rebase but beyond that from my perspective it is code that could be considered for merging. What I’ve been doing is trying to carve off little bits of it and open separate PRs so each change can get the appropriate level of scrutiny. Maybe people can goad me into actually writing unit tests and things like that. This PR does represent the current state of everything needed to do AssumeUTXO. I am regularly testing it.

MF: I misunderstood then. I thought you’d spun off all the other PRs and that was the implementation of it that you thought was going to get merged. This was just the first attempt. But I’ve misunderstood that.

JOB: It initially was just the first attempt. Bit by bit I’ve been keeping it up to date with the changes I do make. All the merged PRs so far have been commits that I’ve carved off from this one. One of the incidental sad things about that PR is that it has got so many comments and so much discussion now that it is almost impossible to navigate based on GitHub’s UI. I don’t know what is to be done about that.

MF: A lot of people complain about the GitHub UI and comments, it is a massive headache.

JOB: Can’t live with it, can’t live without it kind of thing.

MF: You also talked earlier about Carl’s work. He deglobalized ChainstateManager which I’m assuming allowed progress on AssumeUTXO. You needed to get rid of various globals to be able to do AssumeUTXO at all?

JOB: Yeah there were a few globals. One, pcoinsTip was a pointer into the UTXO set itself, the CCoinsView cache which is the in-memory cache entry point into the UTXO set. Another was ChainstateActive or CChain or something, I can’t remember the name of it. It was a global representing the Chainstate object. The block index was a separate global itself. I had to take a few globals and package them up into this ChainstateManager which itself became a global. Then Carl came along and made the good change of removing that global favoring passing the chainman as an argument to the various places that it is needed. That allows us to do better testing, not rely on global state as much. And I guess to eventually consolidate all the consensus critical stuff into a shared library which is what he is working on now.

MF: That was a big project in itself wasn’t it? I suppose you could still have a global ChainstateManager that manages two chainstates.

JOB: Yeah exactly, that is the whole point of the ChainstateManager versus a particular chainstate object.

MF: Then there’s the (open) PRs. This is the issue.

JOB: The issue and the original prototype PR got a little bit muddled. There was some conceptual talk in both places.

MF: The project has all the PRs. We won’t go through them one by one. Quite a few of them are refactors, certainly the early ones were refactors. Add ChainstateManager was an early one we were just discussing that has been deglobalized by Carl. Any particular ones that have been merged that are worth discussing or worth pulling out at this point? James has dropped off the call. We’ll see if James comes back. Let’s go back to Utreexo for a moment. What’s the current status of the project? I saw there was an implementation in Go using the btcd implementation.

TD: We’ve been mostly working in Go. Nicolas and Calvin are making some C++ code but we are working more in Go initially to try things out and then port it over to C++. Someone was working on a Rust version as well.

CK: That was me.

TD: If anyone wants to join the Utreexo IRC channel on Libera, it is utreexo.

MF: In terms of the PRs that have been merged, are there any particular ones that are worth discussing and pulling up? And then we’ll discuss what still needs to be merged.

JOB: The UTXO snapshot activation thing could be worth a look. The ChainstateManager introduction is kind of interesting. But to talk through them here is probably kind of laborious. I would just say if you are interested they should hopefully be fairly clearly titled. You can go and read through the changes for any you are particularly interested in.

MF: Any particular context for reviewing current PRs?

JOB: No, I think a lot of the stuff that got merged is pretty self contained. The upshot of what you need to know is that now there is this ChainstateManager object, there is an active chainstate, the chainstate that is being consulted for the legacy operation of the system. If you want to check if a coin is spendable or do validation you consult the active chainstate but there may or may not be this background chainstate doing stuff in the background. That’s the current state of where we are. Right now for example one of the PRs that’s open, the “have LoadBlockIndex account for snapshot use” is really just adapting different parts of the system, in this case part of the initialization to account for the possible presence of a background chainstate that we have to treat a little bit differently than the active chainstate.

MF: In terms of the review and testing, it looks like you’ve only got one PR left to get merged? Will that be Phase 1 completed?

JOB: I wish that were the case. There is another PR that is open, fanquake has been very kindly maintaining this project because I don’t have the GitHub permissions to add stuff to projects. There is another change in parallel that is being considered in addition to those 3 or 4 PRs up there. What I don’t want to do is open all the PRs I want to get merged at once. I think that will overwhelm people. I gradually as merges happen open up new PRs. If you want to see all the commits that have yet to go in for the completion of Phase 1 you can go to the AssumeUTXO PR. That has all the commits that are required for that to happen. That is a listing of all the commits that are left to go in.

MF: What’s an easy way to test it? I saw Sjors had found a bug or two and done some good reviews by playing around with it. What could people do to be useful? Is it functional in a way that you can use the RPC and try to do IBD from the snapshot?

JOB: It should be pretty usable. Right now there’s a hardcoded hash at I think 685,000 maybe. Sjors and I have at various points been hosting the UTXO snapshot over Torrent. You can clone this branch, build it and play around with doing an IBD using that snapshot. We should probably create a more recent one but I haven’t thought to do that recently. I bundled a little script in contrib that allows you to do a little mock usage on mainnet. It will spin up a new data directory and sync maybe 50,000 blocks, which happens very quickly. It will generate a snapshot and then it will tell you what to recompile your code with to allow you to accept that snapshot so you can demonstrate to yourself that the change works. I use that as a system or functional test to make sure the main branch still works.
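
For anyone who wants to poke at it by hand, the rough shape of a manual test run looks something like the following, assuming a branch that exposes the snapshot-loading RPC from the PR (called loadtxoutset there); the paths and the exact RPC surface are assumptions, not a description of released Bitcoin Core.

    import subprocess

    def cli(datadir, *args):
        return subprocess.run(["bitcoin-cli", f"-datadir={datadir}", *args],
                              capture_output=True, text=True, check=True).stdout.strip()

    # Node A: an already-synced node produces a snapshot of its UTXO set.
    cli("/path/to/synced-node", "dumptxoutset", "/tmp/utxo-snapshot.dat")

    # Node B: a fresh datadir syncs headers, then loads the snapshot. loadtxoutset is the
    # RPC added on the AssumeUTXO branch; it only succeeds if the snapshot's base block
    # is in the node's hardcoded allowed list.
    cli("/path/to/fresh-node", "loadtxoutset", "/tmp/utxo-snapshot.dat")

    # From here node B syncs from the snapshot height to the network tip in the foreground
    # while the background chainstate re-validates from genesis.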

MF: It is functional yet there are still a lot of commits and a lot of code still to merge.

JOB: Absolutely, yeah.

MF: It is working on your branch, gotcha. Any edge cases or tests that need to be written? I was trying to think of edge cases but the edge cases I was thinking of would be like a snapshot that was really recent and then there was a re-org or something. But that is not possible.

JOB: It would be great to have a test for doing pruning and building all the indexes to make sure that code still works. I don’t think I’ve tested all that stuff very rigorously. There is no reason it shouldn’t work but it definitely needs to be tested with pruning and indexing both enabled.

MF: Why might this be a problem?

JOB: There are some accounting complexities. The way that the indexers work is they consume these events on something called the validation interface queue. I had to introduce a new kind of event, a block connected event. I have to distinguish for the purpose of index building between a block connection on an active chainstate versus a background chainstate. The index is now being built out of order instead of in order. If you are connecting blocks from the tip and then afterwards connecting blocks early on that’s a change in assumptions that the indexers are making. On pruning there is some new pruning logic that ensures that when we prune the active chainstate we don’t erase blocks needed by the background chainstate. There are a few changes in there. It would be good if someone tested that stuff. I will at some point, hopefully in a formalized functional test in the test suite. Right now there is a functional test but it just does pretty basic UTXO snapshot operations. It does test a sync between two separate nodes using snapshots. But I haven’t gotten around to writing tests for the indexers and pruning.

FJ: I have a pull request open where I change the way pruning is blocked when indexers are running. There I extend the tests for pruning and indexers together quite a lot. I think it would be easy to add that on after that gets merged.

JOB: That’s right, I think I ACKed that.

FJ: I hope so. There are some comments that I need to address.

JOB: I think I reviewed that a couple of months ago. That would be a really good change to get in. If anybody is looking for things to do reviewing that PR is a good one.

MF: What’s the PR number?

FJ: 21726

MF: Ok we’ll wrap up. Thanks James and thanks to everyone who attended, it was great to hear about some other projects. The YouTube video will be up and I’ll do a transcript as well.

JOB: Thank you for having me. Really great to hear from everybody, some really insightful stuff.

\ No newline at end of file +https://www.youtube.com/watch?v=JottwT-kEdg

Reading list: https://gist.github.com/michaelfolkson/f46a7085af59b2e7b9a79047155c3993

Intros

Michael Folkson (MF): This is a discussion on AssumeUTXO. We are lucky to have James O’Beirne on the call. There is a reading list that I will share in a second with a bunch of links going right from the concept through some of the podcasts and presentations James has done. And then towards the end hopefully we will get pretty technical and in the weeds of some of the current, past and future PRs. Let’s do intros.

James O’Beirne (JOB): Thanks Michael for putting this together and for having me. It is nice to be able to participate in meetings like this. My name is James O’Beirne. I have been working on Bitcoin Core on and off for about 6 years. Spent some time at Chaincode Labs and continue to work full time on Bitcoin Core today. Excited to talk about AssumeUTXO and to answer any questions anybody has. The project has been going on for maybe 2 and a half years now which is a lot longer than I’d anticipated. Always happy to talk about it so thanks for the opportunity.

Why do we validate the whole blockchain from genesis?

https://btctranscripts.com/andreas-antonopoulos/2018-10-23-andreas-antonopoulos-initial-blockchain-download/

MF: We generally start with the basics, otherwise some people who are beginners will get lost later. I am going to pick on some volunteers to discuss some of the basics. The first question is what is initial block download? Why do we do it? Why do we bother validating the whole blockchain from genesis?

Openoms (O): The whole point is to build the current UTXO set so we can see what coins are available to be spent currently and which have been spent before. We do it by validating all the blocks, downloading all the blocks from the genesis block, from the beginning. Going through which coins have been spent and ending up with the current chain tip which will end up building the current UTXO set. We can know what has been spent before and what is available now.

JOB: At a very high level if you think of the state of Bitcoin as being a ledger you want to validate for yourself without taking anything for granted the current state of the ledger. The way you would do that is by replaying every single transaction that has ever happened to create the current state of the ledger and verifying for yourself that each transaction is valid. That is essentially what the initial block download is.

MF: We are replaying absolutely everything from genesis, all the rule changes, all the transactions, all the blocks and eventually we get to that current blockchain tip.
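To make the replay idea concrete, here is a minimal conceptual sketch (in Python, with simplified stand-in data structures rather than Bitcoin Core's actual classes) of how the UTXO set is built up during IBD: every block's inputs remove coins and its outputs add new ones.

```python
# Minimal conceptual sketch of building a UTXO set during IBD.
# Block/transaction structures are simplified stand-ins, not Core's classes.

def apply_block(utxo_set, block):
    """Update the UTXO set with one block: spend inputs, add new outputs."""
    for tx in block["txs"]:
        # Spend the inputs (coinbase transactions have no real inputs here).
        for outpoint in tx.get("inputs", []):
            if outpoint not in utxo_set:
                raise ValueError(f"spends a missing or already spent coin: {outpoint}")
            del utxo_set[outpoint]
        # Add the newly created outputs as spendable coins.
        for index, output in enumerate(tx["outputs"]):
            utxo_set[(tx["txid"], index)] = output

def initial_block_download(blocks):
    """Replay every block from genesis to arrive at the current UTXO set."""
    utxo_set = {}  # maps (txid, output index) -> output data
    for block in blocks:
        apply_block(utxo_set, block)
    return utxo_set

# Toy example: a coinbase output that is later spent into two new outputs.
blocks = [
    {"txs": [{"txid": "aa", "outputs": [{"value": 50, "script": "pk1"}]}]},
    {"txs": [{"txid": "bb", "inputs": [("aa", 0)],
              "outputs": [{"value": 20, "script": "pk2"},
                          {"value": 30, "script": "pk3"}]}]},
]
print(initial_block_download(blocks))
```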

How long does the initial block download take?

https://blog.bitmex.com/bitcoins-initial-block-download/

https://blog.lopp.net/bitcoin-full-validation-sync-performance/

https://blog.lopp.net/2020-bitcoin-node-performance-tests/

MF: This does take a very long time to do. There are a few posts from BitMEX and Jameson Lopp on how long this takes. Although there have been improvements in recent years it still can take hours or days depending on your machine and your hardware. Any experience from the people on the call on how long this takes? Has anyone done an IBD recently?

O: A couple of hours on high end hardware with an i7 processor and a decent amount of RAM. On a Raspberry Pi with 4GB of RAM and an SSD it is between 2 and 3 days.

Stephan (S): I was using Raspiblitz and it took me like 5 days with an SSD. Similar magnitude.

O: It depends a lot on the peers you end up downloading from. For example if you have a full node on your local network and you add it as a peer then it can be quite quick in that sense. Also if you are downloading through Tor that can slow down things a lot.

Assumedvalid and checkpoints

https://bitcoincore.org/en/2017/03/08/release-0.14.0/#assumed-valid-blocks

MF: So obviously from a convenience perspective spending hours or days in this state where you can’t do anything isn’t ideal. From a convenience perspective a better thing to do would be to try to shorten that time. An obvious thing to do would be to just download blocks or validate blocks from a certain point in time rather than from genesis. The first attempt at either bringing this time down or at least providing some assurance to users that they are not spending hours or days verifying the wrong blockchain was assumed valid. Is anyone on the call aware of assumed valid other than James?

Fabian Jahr (FJ): Assumed valid is a point in the blockchain, a certain block height and/or block hash, from which point it is assumed that this is not going to be re-orged out. These are hardcoded in the Bitcoin Core codebase. This feature was introduced before I got active in Bitcoin Core, so I don’t know exactly what the conversation around it was, but it is not really used anymore. There are no new assumed valid points introduced regularly into the codebase. But I am not 100 percent sure what the discussion was around that, what the latest status is, what are people’s opinions on that.

MF: I just highlighted what I’ve got on the shared screen, it was introduced in 0.3.2 so very early. The intention was more to prevent denial of service attacks during IBD rather than shortening the time of IBD. Presumably at 0.3.2 the time of IBD wasn’t anywhere near as long as it is now. James can correct me, it did at that point or later, it skipped validating signatures up until that assumed valid block?

JOB: Yes. It looks like when it was introduced it was called checkpoints which ended up being a pretty controversial change. A checkpoint dictates what the valid chain is. That is subtly different from what assumed valid does today which is just say “If you are connecting blocks on this chain and you are connecting a block that is beneath the block designated by the assumed valid hash then you don’t have to check the signature”. It is not ruling out other chains which is what checkpoints do. The idea was to prevent denial of service attacks. Maybe someone sends you a really long chain of headers but that chain of headers has really low difficulty associated with it. Checkpointing is a different thing than assumed valid. As you said assumed valid really just skips the signature checks for each transaction below a certain block as an optimization. It doesn’t rule out another longer valid chain from coming along. For example if someone had been secretly mining a much longer chain than we have on mainnet with some unknown hash rate that is much greater than the hash rate we’ve been using, they could still come along and re-org out the whole chain hypothetically with assumed valid in use, whereas checkpoints don’t allow that. Checkpoints are a little bit controversial because they hardcode validity in the source code whereas assumed valid is just an optimization. It is worth noting that with assumed valid we are obviously still checking a lot about the block. We are checking the structure of the block, we are checking that the transactions all hash to the Merkle root and the Merkle root is in the block header and the block headers connect. You are still validating quite a bit about the block, you are just not doing the cryptographic signature checks that the community has done over and over again.

S: To be clear assumed valid is still being used today, it is just the checkpoints that are deprecated?

JOB: Yes. Assumed valid is used by default. It is the default configuration. Typically every release the assumed valid value is bumped to a recent block as of the time of the code. Calvin (Kim) is saying that checkpoints are still used. They are probably just really low height and could probably be removed. Assumed valid is still very much maintained. It is the default setting. If you want to opt out of it and do a full signature check for the entire chain you can do that by passing the value 0 for an assumed valid configuration.
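A rough sketch of the optimization James describes, with illustrative structures rather than Core's actual code: blocks that are ancestors of the hardcoded assumed valid block skip script and signature verification, while everything else is still checked, and the operator can opt out entirely (for example with -assumevalid=0).

```python
# Illustrative sketch of the assumed-valid optimization, not Bitcoin Core's code.
# Blocks that are ancestors of the hardcoded assumed-valid block skip the
# expensive signature checks; structure, Merkle root and UTXO checks still run.

def connect_block(block, ancestors_of_assumed_valid, stats):
    # Cheap checks always happen (stubbed out here as simple assertions).
    assert "merkle_root" in block and "txs" in block
    skip_sigs = block["hash"] in ancestors_of_assumed_valid
    for tx in block["txs"]:
        if not skip_sigs:
            stats["signature_checks"] += len(tx.get("inputs", []))  # the expensive part
        # UTXO set updates would happen here either way.

ancestors = {"b1", "b2"}  # everything at or below the assumed-valid block
chain = [
    {"hash": "b1", "merkle_root": "m1", "txs": [{"inputs": ["x"]}]},
    {"hash": "b2", "merkle_root": "m2", "txs": [{"inputs": ["y", "z"]}]},
    {"hash": "b3", "merkle_root": "m3", "txs": [{"inputs": ["w"]}]},  # above it
]
stats = {"signature_checks": 0}
for block in chain:
    connect_block(block, ancestors, stats)
print(stats)  # only the block above the assumed-valid point costs signature checks
```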

Calvin Kim (CK): Just a point on removing the old checkpoints. Technically that is a hard fork.

JOB: Yeah, exactly. It would broaden the consensus rules, it would allow blocks that were previously invalid to be valid. You are right about that Calvin.

MF: With the existing checkpoints it is technically impossible to validate a chain which conflicts with the checkpoints hardcoded in, say, Core or another Bitcoin implementation. It is impossible.

JOB: Yes.

MF: Obviously these are unlikely scenarios but let’s say you were validating a chain that didn’t meet an assumed valid, it didn’t agree or match the assumed valid. What is the user experience then? Is it just flagged to the user? Is there a log that says the assumed valid in the code doesn’t match the chain that you are validating? Or is it based on proof of work? What’s the user experience if that assumed valid isn’t matched with your experience of validating the chain?

JOB: It is still possible. If we were considering a chain that was more work than the chain that we know now that violated the hardcoded assumed valid default, that would indicate a re-org of thousands and thousands of blocks. It would probably be catastrophic at a higher level. I think the most that you would get as a user is maybe a log statement. I don’t even know if we have a log statement for that case. If something is on a chain that isn’t assumed valid you just do the signature validation. It is not any kind of special behavior.

S: The difference in performance would be that you skip signature validation until the previous assumed valid? There is only one valid hash in the binary? It is not like you can go back to the previous valid hash? It is all or nothing? If there is a re-org with a couple of thousand blocks the user would have to validate all signatures since the genesis block and not use the assumed valid hash of the previous Core release?

JOB: That sounds right. I am not sure offhand, I’d have to check the code.

S: There is only a single hash in the code right? Whereas checkpoints you have one every I don’t know 50,000 blocks. But for assumed valid it is only a single hash.

CK: That hash is backed by minimum chain work. There are some checks to make sure you are on the right chain.

JOB: Yeah minimum chain work is definitely another relevant setting here. That prevents someone from sending a really long chain of headers that may be longer in a strict count than mainnet but is lower work.
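A toy illustration of the minimum chain work idea: what matters is the total proof of work attached to a header chain, not how many headers it has, so a long but low-work chain is ignored. The numbers here are made up.

```python
# Toy illustration of the minimum-chain-work idea: a long header chain with
# little work attached does not meet the hardcoded minimum, so it is ignored.
# Numbers are made up; Core's real value (nMinimumChainWork) is a 256-bit total.

MINIMUM_CHAIN_WORK = 1_000_000  # hypothetical hardcoded total work

def total_work(headers):
    # Each header contributes work inversely proportional to its target.
    return sum(h["work"] for h in headers)

def worth_downloading(headers):
    return total_work(headers) >= MINIMUM_CHAIN_WORK

honest_chain = [{"work": 5_000}] * 300     # fewer headers, lots of work
attacker_chain = [{"work": 1}] * 100_000   # many headers, trivial work
print(worth_downloading(honest_chain))     # True
print(worth_downloading(attacker_chain))   # False: long in count, low in work
```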

MF: This is relevant in an AssumeUTXO context because the concept is that you are going to validate only a fraction of the chain and then potentially do transactions. Pay or receive having only done a fraction of the initial block download. The rest of it from genesis would be happening in the background but you are already functional, you are already up and running, you are already potentially doing transactions, paying and receiving.

JOB: That’s the sketch of AssumeUTXO. With assumedvalid you are still downloading all the blocks. You have all the data on hand. AssumeUTXO allows you to take a serialized snapshot of the UTXO set, deserialize that, load that in from a certain height. In the same way as with assumedvalid we expect that by the time the code is distributed there are going to be hundreds if not thousands of blocks on top of the assumedvalid value, the same would go for AssumeUTXO. We would expect that the initial block download that you have to do is truncated but it is not completely eliminated. You are still downloading a few thousand blocks on top of that snapshot. In the background we then start a full initial block download.

The AssumeUTXO concept

Bitcoin Optech topics page: https://bitcoinops.org/en/topics/assumeutxo/

MF: Let’s go over the concept of AssumeUTXO. I’ve got the Optech topics page up. We are bootstrapping new full nodes, we are reducing this initial blockchain download time by only doing a fraction of it before we are able to send and receive transactions, before we are up and running. During IBD you can still play around with your node. You can still make RPC calls and do CLI commands etc. But you can’t do anything with the wallet until you’ve fully completed IBD, is that right?

JOB: That’s right.

MF: One thing it says in Optech is “embedded in the code of the node would be a hash of the set of all spendable bitcoins”. Is that correct? Are there alternatives to embedding it in the code of Core? I think you’ve talked before James about getting this UTXO snapshot out of band and then using a RPC command to start the process. Then it wouldn’t necessarily be in the code of Core or the implementation you are using?

JOB: The value that’s important to hardcode is the hash. If you take the UTXO set, the collection of unspent coins, you order them somehow, and then you hash them up. You get a single value like you might get at the root of a Merkle tree or something. Hardcoding that is important. It is more important to hardcode that and not allow it to be specified through the command line say in the same way as assumedvalid. If you can allow an attacker to give you a crafted value of that hash and then give you a bad snapshot then they can really mislead you about the current state of the UTXO set before you’ve finished back validating. We decided in the early stages of designing this proposal that that value should be hardcoded in the source code and shouldn’t be user specifiable. Unless you are recompiling the source code in which case anybody can tell you to make any change and potentially cause loss of funds that way. To your question once you have the UTXO set hash hardcoded you have to actually obtain the file of the UTXO set itself which is serialized out. That is going to be on the order of 4 gigabytes (correction) or so depending on how big the UTXO set is of course. In the early days of the proposal there was a phase 1 and phase 2. Phase 1 just added a RPC command that allowed you to activate a UTXO snapshot that you had obtained somehow, whether that’s through HTTP or torrent or Tor or whatever. The idea there is it really doesn’t matter how you obtain it because if you are doing a hash based on the content of the file anyway the channel that you get it through doesn’t have to be secure. When you download a copy of Bitcoin Core you don’t necessarily care about the HTTPS part if you are checking the binary hash validates against what you expect. It is the same idea here. Phase 2 is going to be building in some kind of distribution mechanism to the peer-to-peer network. In my opinion Phase 2 is looking less and less likely just because it would be a lot of work to do. It requires the use of something called erasure coding to ensure we can split up UTXO snapshots so no one is burdened with storing too much. It would be a really big job. In my opinion it doesn’t matter how you get the snapshot. There are lots of good protocols for doing decentralized distribution of this kind of stuff via torrents say. I think it is less likely there will be a bundled way of getting the snapshot within Core although that is certainly not out of the question.
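The "channel doesn't matter" point is the same content-addressing idea as checking a release binary's hash: hash whatever you downloaded and compare it to a value you trust from elsewhere. A minimal sketch follows; note that Bitcoin Core actually hashes the deserialized UTXO set contents rather than the raw file bytes, and the file name and expected digest below are placeholders.

```python
import hashlib

# Content-addressing sketch: it does not matter whether the snapshot came over
# HTTP, a torrent or a USB stick, as long as its hash matches a value you
# trust. Bitcoin Core hashes the deserialized UTXO set contents rather than
# the raw file bytes; this only shows the general idea. Values are placeholders.

EXPECTED_DIGEST = "placeholder_hex_digest"

def sha256_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_snapshot(path):
    digest = sha256_file(path)
    if digest != EXPECTED_DIGEST:
        raise SystemExit(f"snapshot hash mismatch: got {digest}")
    print("snapshot hash matches, safe to load")

# verify_snapshot("utxo-685000.dat")  # hypothetical file name
```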

MF: These are the two phases you’ve discussed in other forums, podcasts and presentations. You said Phase 1 was allowing snapshot use via RPC with no distribution. This would be getting the snapshot from potentially anywhere, a friend, a torrent, whatever. Phase 2 which you say is unlikely now is building the distribution into the peer-to-peer layer. I was thinking that sounds very difficult. Then you have questions over how many nodes are supplying it and whether you can get a hold of it. Core and other implementations have to decide how recent a snapshot they need to provide which gets hairy right?

JOB: Absolutely.

S: Is the problem that you are making it very difficult for novice users to use AssumeUTXO? They have to go and find those binaries themselves in addition to making sure they get the right version of Bitcoin Core? If it is not bundled they are probably not going to do it or they are going to rely on third parties like Umbrel or Raspiblitz to bundle it for them. If you push towards that model you are relying on those third parties.

JOB: I think you are totally right, there is a convenience trade-off there. It is unfortunate that it is not easier for us to make this a seamless experience. My hope I guess is that eventually once the RPC commands are merged we can work on a GUI change that will allow someone to drag and drop the serialized UTXO set into the GUI or something. Maybe also link to a torrent for the snapshot from bitcoincore.org where you get the binary. It could say “If you want to expedite the IBD then do this”. You are totally right that it does complicate things for the end user. I thought that these changes were fairly simple for Phase 1 and I thought they would be integrated within a year and a half or so. It is now two and a half years later. We are maybe 60-70 percent done with Phase 1. I am very pessimistic about the pace of doing what I think is a much more complicated change in terms of the distribution stuff.

FJ: The types of users that really want this, the question is who really wants to get started within an hour or so of plugging the node in? There is a big overlap of the people who really want that and the people who get a Raspiblitz or Umbrel or whatever comes next that is really plug and play. This feature matches really well with that user base. Making it available for other people who are on the self sovereign side and do more by themselves, that would be great for them as well. But where I see the biggest leverage and adoption, Raspiblitz people are asking for it all the time. BTCPay Server built their own type of feature that was hacky and not completely trustless. There is a very good overlap between the people that are going to use this and this crowd.

O: There is a use case for example for the Specter wallet. Some people would want to use a self validating wallet connected to Bitcoin Core and get started as soon as possible. They have a similar solution to what BTCPay Server uses but even more trusted. The website is prunednode.today. You can download a signed version of a pruned snapshot from a node of Stepan Snigirev. Get started with a pruned node so you don’t even need to have the storage available for the whole blockchain. There is another alternative that LND uses already on mobile by default, the Breez wallet for example, which is Neutrino (BIP 157/158). Also the Wasabi wallet on desktop uses this. You only download the block hashes as needed and not more than that. It is more like a light wallet function rather than a fully validating one.

JOB: Yeah that’s a good point. In the early days of this proposal we were actually debating whether it would be easier to just integrate a more SPV or light client like behavior into Bitcoin Core instead of a proposal like this. It turned out that for a variety of reasons this seemed preferable. One, it seemed like less of an implementation burden because to build in SPV behavior to the node in a first class way would be a big change. Secondly I think the validation model here is quite a bit better than SPV. With a full view of the UTXO set you can do first class validation for recent blocks at the tip, better assurance.

O: There is no risk of having information withheld from you like with Neutrino. That’s the worry there, you are getting blocks which are not containing the transaction you are after.

JOB: Exactly.

Security model of AssumeUTXO

MF: We are kind of on the security model now. We won’t spend too much time on this but we should cover it. Presumably this has slowed progress on it. There are different perspectives. There is a perspective that perhaps you shouldn’t make any transactions, send or receive, until you’ve done a full initial block download. I’m assuming some people think that. Then you’ve got people in the middle that are kind of like “It is ok if the snapshot is really, really deep”, like years deep. Others will say months deep, some would say weeks. Are concerns over security slowing progress on this or have slowed progress up until this point?

JOB: I think at the beginning there were some misconceptions about security. There was really only one developer that had consistent misconceptions about security. That was Luke Dashjr. I think he is now in favor of the proposal. There are a few important aspects here. Number one, as someone mentioned earlier, there are a variety of approaches now with PGP signed data directories that people are passing around that I think are very subpar. I think everybody who is doing that has good intentions but it is the kind of thing where if one person’s PGP key gets compromised or one hosting provider gets compromised then you have the possibility of some really bad behavior. Number two, initial block download is a process that is linear with the growth of the chain. If the growth of the chain is going to continue indefinitely that’s an infinite process. At some point this needs to be truncated to something that is not infinite which AssumeUTXO does. I think that needs to happen at some point and I think this is the most conservative implementation of that. The third thing, this is an opt-in feature. Unlike assumedvalid which is a default setting you have to proactively use this. At least at this point given this implementation users aren’t nudged into doing AssumeUTXO in the same way as they are with assumedvalid. I think that’s an important consideration. Maybe one day this will be some kind of default but there is a lot more that would go into that before that’s the case. It is important to think about what kind of risks this introduces over assumedvalid. One of the main differences between assumedvalid and AssumeUTXO, in assumedvalid you still have the block, you still have the structure of the block and you are validating that the Merkle root hashes check out. Whereas with AssumeUTXO all you have are the results of that computation, the UTXO set. The way you would attack AssumeUTXO is to both give someone a malicious UTXO set hash that they are going to put in their source code and then to give them a maliciously constructed UTXO snapshot. Right now if you give them a maliciously constructed UTXO snapshot its contents will be hashed and the hash won’t validate against the hardcoded hash in the source code. The thinking there was that if someone can modify your source code arbitrarily, if someone can insert a bad AssumeUTXO hash into your source code there are all kinds of easier ways to cause loss of funds or steal your Bitcoin. If you assume someone can get to your source code and recompile your binary then you are already cooked anyway so it doesn’t really matter. Aside from that there really aren’t any practical differences in the security model between assumedvalid and AssumeUTXO. I am curious if anybody else has thinking on that.

MF: In a model where Core was putting a snapshot into the codebase that was thousands of blocks deep, months or years worth of blocks deep, as you say the only concern there is someone having control over the binary in its entirety in which case they can do anything. I am thinking about this scenario where people are passing around snapshots between friends. Perhaps someone uploads a snapshot that is way too recent. How is Core or another codebase going to stop people from using snapshots that are way too recent? Let’s say 6 blocks ago or 3 blocks ago, how do you stop them from doing that?

JOB: If someone say generates a snapshot that is 3 blocks old what would happen is they’d give that snapshot to someone else. If you were using an unmodified version of Bitcoin Core you would attempt to load that snapshot, we would hash the contents of the snapshot and we wouldn’t be able to find an entry in the AssumeUTXO data structure in the source code to validate against that. We would reject it as invalid. You can think about it like a whitelist of snapshots that are allowed per the code.
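A minimal sketch of that whitelist idea, with placeholder values: the node only accepts a snapshot whose base block and UTXO set hash appear in a table shipped with the source code (in Core this lives in the chain parameters).

```python
# Sketch of the "whitelist of snapshots" idea: the node only accepts a snapshot
# whose base block and UTXO set hash appear in a table shipped with the source
# code. Heights and hashes below are placeholders, not real values.

ASSUMEUTXO_ALLOWED = {
    # base block height -> expected hash of the UTXO set at that height
    685_000: "placeholder_utxo_set_hash",
}

def try_load_snapshot(base_height, utxo_set_hash):
    expected = ASSUMEUTXO_ALLOWED.get(base_height)
    if expected is None or expected != utxo_set_hash:
        raise ValueError("snapshot not in the allowed list, rejecting it")
    print(f"snapshot at height {base_height} accepted, syncing blocks on top")

try_load_snapshot(685_000, "placeholder_utxo_set_hash")   # accepted
# try_load_snapshot(700_003, "some_other_hash")           # would be rejected
```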

O: Transacting with an unsynced node there is a big difference between sending and receiving. If you can send and the network accepts it and especially if the receiver receives it then you don’t need to verify that anymore. The question is when you are receiving something and you want confirmation. That would need the new blocks fed to you as well which is a much more complicated question.

JOB: That’s a great point. There is an asymmetry there.

O: You wouldn’t be able to verify without reaching the chain tip, fully synchronizing. You might be able to construct a transaction without doing that and then broadcast it.

JOB: Yes.

MF: So this is sounding less controversial than I thought it was initially.

Ross Clelland (RC): Are there concerns where the snapshot is being taken when a Lightning channel is opened? It is subsequently closed but you’ve given that snapshot to someone, they think the channel is still open so they could use that. Or would Lightning itself prevent that money from being sent?

JOB: I think that’s equivalent to the simple payment example where someone gives you a snapshot, you load in the snapshot. Until you’ve gotten to the network tip, the most recent block that everyone has seen, you can’t make assertions about what is spent and what is unspent. I think they are analogous.

MF: I’m wavering on this now. I came into this thinking you would definitely want the user to complete that initial block download from genesis. As you say this is conservatively designed in that you do actually complete the full IBD, it is just you do the from genesis part after you are up and running. This argument of whether you should actually do that IBD from genesis and what’s the point of that seems like a question.

JOB: It is a question. I agree that the value of doing the background IBD is debatable. From my perspective as the person proposing this and doing the implementation I wanted to make it as appealing and as conservative as I could so I didn’t strip that out. But there are certainly other engineers who thought that it was necessary. If you were willing to accept the UTXO security model for a few days or a few weeks… In assumedvalid we don’t in the background validate all the signatures after we are done syncing. The analog for AssumeUTXO wouldn’t be to do background validation. I figured I’d rather people default into doing the initial block download at some point and having the block data on hand, being able to serve it to their peers if necessary, than not. But I think you raise a good question that frankly still could be debated and at some point could change.

MF: There would definitely be people that oppose that.

O: AssumeUTXO, would it be able to start in a pruned node and not need to download the full chain then?

JOB: AssumeUTXO is compatible with pruning. You still have to retain enough data to be able to arrive at the UTXO set that you started the snapshot from. It basically for most intents and purposes doesn’t change pruning.

O: The big benefit of having a validated chain or having a full chain, validated or not, is if you want to be able to use an already used wallet with some transaction history. You should have the blocks which are containing those transactions at hand so you rescan and be able to reproduce the history of a used wallet. A pruned node is all fine until you need to rescan. Then you would need to start from the beginning most of the time unless you didn’t prune long enough.

JOB: Exactly. Even with that one interesting line of thinking is to use the compact block filters and to index those compact block filters. If you need to rescan, use the filters, decide which blocks you are interested in and go out and retrieve those from the network as necessary. That’s an unrelated question.
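A toy sketch of that rescan flow: keep only a small per-block filter and fetch the full block only when the filter matches one of the wallet's scripts. Real BIP 157/158 filters are Golomb-Rice coded and probabilistic; here each "filter" is just a set of script hashes, to show the control flow.

```python
import hashlib

# Toy rescan-with-filters flow: for each block we only keep a small filter,
# and we fetch the full block only when the filter matches one of the wallet's
# scripts. Real BIP 157/158 filters are Golomb-Rice coded and probabilistic;
# here a "filter" is just a set of script hashes.

def script_key(script):
    return hashlib.sha256(script).hexdigest()[:16]

def build_filter(block_scripts):
    return {script_key(s) for s in block_scripts}

def rescan(filters_by_height, wallet_scripts, fetch_block):
    wanted = {script_key(s) for s in wallet_scripts}
    for height, blk_filter in filters_by_height.items():
        if wanted & blk_filter:            # possible match, fetch the block
            yield height, fetch_block(height)

# Toy data: only block 101 touches one of the wallet's scripts.
blocks = {100: [b"scriptA"], 101: [b"walletScript", b"scriptB"], 102: [b"scriptC"]}
filters = {h: build_filter(s) for h, s in blocks.items()}
hits = list(rescan(filters, [b"walletScript"], lambda h: blocks[h]))
print(hits)  # [(101, [b'walletScript', b'scriptB'])]
```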

O: There is an implementation of a similar idea called btc-rpc-proxy. That is outside Bitcoin Core but it is a service that simulates a full chain to the software calling the Bitcoin RPC interface. It sits behind Bitcoin Core and the Bitcoin RPC basically. You still have a pruned node but there is a request getting to a block which is not on disk anymore because it is pruned. It requests it like with the block filters and serves it to the requester software. To that software it looks like it is an unpruned node but in reality it is not. There is this full node company called Embassy who are running on an SD card by default and they use a pruned chain. They also develop and use this btc-rpc-proxy which allows them to not need 300 GB of storage, they just need an SD card with the OS and the pruned node. This is something in between the block filters and the pruned, full node trade-off.

MF: I’m guessing the strongest argument for still doing IBD from genesis is what you said James, in principle ideally we want everyone to be able to fetch and validate the whole chain. And so if we relaxed that conservatism and didn’t do it by default, then perhaps those blocks and transactions from the first few years of Bitcoin would just become impossible to retrieve.

JOB: Right. Someone early in the proposal made the argument, in theory everybody is incentivized just to run a pruned node. Why would you keep the extra storage around? I don’t think they are quite comparable. Running a pruned node doesn’t make the IBD go faster. Pretty much everyone wants IBD to go faster. If it was easy enough almost everybody would want to do AssumeUTXO, it gives you an operable node much more quickly. In that case you would then have the default of not having the blocks. In practice it may be the case that many more people don’t have the full chain history on disk. I think you are right.

MF: There would be disagreement I think on relaxing that conservative default. I’m sure some people would oppose that. Is there anything else on the security model of AssumeUTXO?

S: James you mentioned if you already trust the developers and the Core binaries to be secure, trusting the AssumeUTXO hash wouldn’t introduce extra vulnerabilities. Isn’t that a dangerous path because we know the different parts of the codebase have different levels of scrutiny? Some code is reviewed much more in depth than maybe some test PR. The implications of a malicious AssumeUTXO hash are pretty big. It is a magic number that would get updated every Core release? Or maybe in 5 years time people won’t be surprised if they see a new hash and it won’t get reviewed as much. The impact this can have is huge. If this was ever changed maliciously it would have huge implications on the reliability and trust of the ecosystem. Whereas if the assumedvalid hash is wrong it is no big deal. People might take longer to do IBD but no money can be stolen. There is no perceived inflation and so on. There is a scale of trust in developers and you don’t want to assume everything is correct all the time.

JOB: This is a really, really good question. It gets to the heart of the design with AssumeUTXO which I feel really optimistic about. In exactly the way you are saying there are certain parts of the codebase that are scrutinized more heavily than others. In my experience working on some of the code that handles the UTXO set it can be very, very difficult to reason about. Even for people who are well versed in C++, even for people who know the Core codebase. What I love about the AssumeUTXO design is that it is very easy for even someone who is semi technical to verify that an AssumeUTXO hash is correct or not correct. As long as they have a full node they can run a script and check that value. Whereas if we try to do optimizations that are slightly more sophisticated or opaque or technically complicated there are very few people capable of actually scrutinizing those changes. It is very hard to read a lot of Core’s code. Not many people these days casually browse the repo with C++ knowledge sufficient to do that. I really like the idea of crystallizing validation into this one value. You are right, it is toxic waste and has to be scrutinized, but it is very readily scrutinized by a wider variety of people than say the CCoin cache is.

S: Independent nodes keeping a full state, that could be checked to make sure the hash is always correct.

JOB: Definitely. You could automatically parse the chain params, the cpp file, extract out the AssumeUTXO values and validate that against your own node pretty trivially. That is a great idea.
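A rough sketch of that cross-check, assuming you can compute the UTXO set hash on your own node (for example with gettxoutsetinfo, via the procedure described later): pull candidate hashes out of the chain parameters source file and see whether your node's value is among them. The file path and regex are assumptions about the layout, not guaranteed.

```python
import re

# Sketch of the cross-checking idea: extract the AssumeUTXO hashes shipped in
# the source code and compare them against a hash you computed on your own
# full node. Path and regex are assumptions about the source layout.

def extract_assumeutxo_hashes(chainparams_path):
    text = open(chainparams_path, encoding="utf-8").read()
    # Deliberately loose assumed pattern: any 64-character hex string in the file.
    return set(re.findall(r"\b[0-9a-f]{64}\b", text))

def cross_check(chainparams_path, my_node_hash):
    hashes = extract_assumeutxo_hashes(chainparams_path)
    if my_node_hash in hashes:
        print("my node's UTXO set hash matches a value shipped in the source")
    else:
        print("no match found, investigate before trusting this snapshot hash")

# cross_check("src/chainparams.cpp", "<hash computed on my own node>")
```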

MF: I was thinking there would be a warning in the GUI if you say made a payment or received a payment without having completed IBD. I was thinking there would be information to the user “You haven’t completed IBD. Therefore there is a slightly higher risk”. But I’m thinking that is not even necessary. The risk is not zero, there could be some crazy re-org back there but it is basically zero to many decimal places.

JOB: I agree. I don’t know what the user’s action would be subsequently. It is such a vanishingly low probability that there would be some kind of attack there. If you assume that the user got a malicious binary that had a bad AssumeUTXO hash in it they would strip out that warning too.

Thaddeus Dryja (TD): It seems like the only thing you can do is crash. The binary is not self consistent. It is like “Hey, this binary has detected a state that the binary itself asserts should never happen”. So crash or something.

How AssumeUTXO interacts with other projects like Utreexo

Utreexo: https://dci.mit.edu/utreexo

Utreexo paper: https://eprint.iacr.org/2019/611.pdf

Utreexo demo release: https://medium.com/mit-media-lab-digital-currency-initiative/utreexo-demo-release-0-2-ac40a1223a38

Faster Blockchain Validation with Utreexo Accumulators: https://blog.bitmex.com/faster-blockchain-validation-with-utreexo-accumulators/

MF: The next topic I have up, it is good Tadge and Calvin are here, how this interacts with other projects. From where I’m sitting this lays some foundations for doing more interesting stuff with IBD to reduce IBD time. With AssumeUTXO this is the first step to not having a linear IBD from genesis to tip. This is the first heavy lifting and refactoring of the Core codebase to allow that to happen. One potential next step is Tadge’s and Calvin’s project which is Utreexo. You could perhaps, I don’t know the viability of this, have two or multiple fractions of the chain being validated in parallel. Rather than just having the snapshot to the tip and when that’s done the genesis to the snapshot, you could perhaps have genesis to the snapshot being verified at the same time as snapshot to tip. We can go to James first for thoughts on whether this does actually lay foundations for Utreexo and then we’ll go to Tadge and Calvin and anybody else who has more thoughts on that.

JOB: Definitely eager to hear Calvin and Tadge talk about Utreexo. I am really glad that it seems like a lot of the prerequisite refactoring for AssumeUTXO has already paid some dividends in terms of a project from Carl Dong called libbitcoinkernel. This is trying to in a really rigorous way separate out the consensus critical data structures and runtime from Bitcoin Core so we can have other implementations safely wrap the consensus stuff, do different implementations of the mempool or peer-to-peer network. Some general background, refactoring in Bitcoin Core is not often kindly looked upon. Having the AssumeUTXO project as justification allowed me to do some fairly fundamental refactoring that would have been hard to justify otherwise. The way that things previously worked was in the codebase there was a global chainstate object that managed a single view of the UTXO set, a single view of the blocks and the headers. What we did in lieu of that is introduce an object called the ChainstateManager which abstracts a bunch of operations on the chain and allows us in the particular case of AssumeUTXO to have a background chainstate and a foreground or active chainstate. That is what we do right now. In principle the abstraction of the ChainstateManager allows us to do pretty much anything. We could have any number of particular chainstates running round doing things, being validated and use the ChainstateManager to abstract that. It is a nice feature of the code now to be able to do that.

TD: Calvin has written a bunch of code to do exactly that, running multiple threads at once or even multiple machines at once doing it.

CK: James talked about how you could have a background IBD and a little bit of IBD from the AssumeUTXO set. The Utreexo project represents the UTXO set in a bunch of different Merkle trees. It allows you to represent the UTXO set in a very few hashes. Just the roots of each Merkle tree. This reduces the size of how you could represent things. With AssumeUTXO you have to hash the entire UTXO set, with Utreexo it is Merkle tree-ified. You take these roots and that is what you would validate against. That is essentially your UTXO set. What you could do is represent the UTXO set at a height in less than a kilobyte. That allows a lot of flexibility. Whereas before you had to download these 4GB, now you don’t really have to. It is possible to fit everything into a binary. That almost solves the problem of distributing these 4GB. It introduces a bit of other problems but yeah. The projects that I have been working on, you have a bunch of different threads going and each thread is syncing a different part of the chain. Let’s say you start up 3 threads, one would be syncing 0 to 1000, another would be syncing 1001 to 2000 and the third 2001 to 3000. Once all those are finished you would check against the Utreexo set representation. If you start from genesis and reach block 1000, that’s where you were provided the accumulated root. You would compare against it, if they match up great you just saved a bunch of time. You already started verifying from 1000 to 2000. You do this a lot. It allows for more efficient use of your CPU cores. You could go further and say “We don’t even need to do this on a single machine. You could split the IBD into multiple machines.” A really fun thing you could do is have a phone and your laptop do IBD validation at the same time. It doesn’t even have to be continuous. If you have a Raspberry Pi going in the background you could be like “I’m sleeping. I don’t need my laptop. I am going to use my laptop to help my Raspberry Pi sync up.” Those are some fun things you could do. Or if you are like “I trust my family. Could you bring your laptop? We’ll do the IBD.”

TD: We have software that does this, testing software, it does seem to give a big speedup. Even on one machine, if you are like “Let me have 8 threads that are all independently doing IBD from different points in the blockchain”. You get a lot better usage of your CPU because you have got so many different things going on. They are small enough that you can throw them around and keep them in CPU cache. You do get a bit of a speedup there.
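A toy version of that parallel sync, where "state" is just a hash chain standing in for Utreexo accumulator roots: each worker validates a different block range starting from a provided intermediate state, and the state it ends with must match the state the next range started from.

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

# Toy version of the parallel IBD idea: each worker validates a different block
# range starting from a provided intermediate state, and the state it ends up
# with must match the state the next range was started from. "State" here is
# a simple hash chain standing in for Utreexo accumulator roots.

def validate_range(start_state, blocks):
    state = start_state
    for block in blocks:                      # stand-in for full block validation
        state = hashlib.sha256(state + block).digest()
    return state

def parallel_ibd(all_blocks, range_size, provided_states):
    ranges = [all_blocks[i:i + range_size] for i in range(0, len(all_blocks), range_size)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(validate_range, provided_states[:-1], ranges))
    # Each range's final state must equal the state the next range started from.
    return all(got == expected for got, expected in zip(results, provided_states[1:]))

blocks = [bytes([i]) for i in range(30)]
# The provided intermediate states would come with the accumulator/snapshot data.
states = [b"genesis"]
for i in range(0, 30, 10):
    states.append(validate_range(states[-1], blocks[i:i + 10]))
print(parallel_ibd(blocks, 10, states))  # True: all three ranges check out
```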

MF: From a high level perspective this seems like a big change rather than like a refactoring type reorganization change that AssumeUTXO seems to be.

TD: Different peer-to-peer messages, yeah.

MF: I’ve heard Utreexo has a different commitment structure so it can’t use rolling hashes. How much of this is building on AssumeUTXO?

TD: Conceptually there are some similarities. It would be really cool if there was a way to switch between them. Somehow to say “If you have your AssumeUTXO giant UTXO commitment you could turn that into a Utreexo style commitment or vice versa.” It is not super obvious how you do that. In Utreexo it matters the order the things were inserted and deleted because of how the accumulator works. It is not like you can just given the UTXO set generate the commitment state of Utreexo. Maybe someday we can make something like that, that would be really useful, but we don’t have any software right now to say “Take a current node that is synced up. It has got its chainstate folder with the LevelDB of the 80 million or whatever UTXOs that exist. That is what the Utreexo accumulator represents. Take all that data and turn it into a Utreexo accumulator.” We can’t right now because the order things were added and deleted changes the accumulator unfortunately. It is not just what is in it. Utreexo does need its own weird messages, you have to send proofs. Whereas in AssumeUTXO nobody is going to know you are using this. Nobody you are connected to sees any difference. Maybe if they are really trying they can be like “He asked for block 700,000 and then a few hours later he asked for block 5. That’s weird.” But even that, you’d have to be connected to the same serving nodes and it would be hard to tell. Whereas something like Utreexo, it is totally different messages. It is very clear. If no one is there to serve you these things you are not going to be able to use it. It is a bigger change in that sense.

JOB: I think the strongest relationship between the two is that the AssumeUTXO work from a Core implementation standpoint paved the way to make it easier to do Utreexo in a sense. Some of the abstractions that it introduced are going to make it a lot easier to lift in the Utreexo operations. It took probably a year to get in the really foundational refactoring necessary to move away from a single chainstate and a single UTXO set.

TD: The stuff we’ve been working on, some of it dealing with multiple chainstates, but also dealing with not having a database. It is kind of like “Wait, what?”. There is no database. A lot of the software definitely assumes that you’ve got LevelDB there. That is what you are using when you are validating blocks and stuff. A lot of software is like “There are ways to pretend that it is there or make a disposable chainstate every time you validate a block and things like that”. That’s some of the refactoring we’ve been looking at. That is probably going even further than the changes in AssumeUTXO. But definitely using that and changing even more.

JOB: For anybody who is not familiar with Utreexo, Utreexo kind of inverts the validation process. Instead of checking some all encompassing UTXO set whether a coin is spendable you basically attach a proof along with the transaction that shows that given a Merkle root for the entire Utreexo set at some height the coin exists in that tree and is spendable.
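A minimal sketch of that proof-carrying idea: given only a Merkle root, a leaf (a UTXO) is shown to be in the set by supplying its sibling hashes up the tree. The hashing details are simplified relative to Utreexo's actual forest design.

```python
import hashlib

# Minimal Merkle inclusion proof sketch: a leaf (a UTXO) is shown to be in the
# set by supplying its sibling hashes up to the root. Simplified relative to
# Utreexo's actual forest; works for a power-of-two number of leaves.

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Collect sibling hashes from the leaf up to the root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

utxos = [b"coin0", b"coin1", b"coin2", b"coin3"]
root = merkle_root(utxos)
proof = prove(utxos, 2)
print(verify(root, b"coin2", proof))   # True: coin2 is provably in the set
print(verify(root, b"coinX", proof))   # False
```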

TD: One thing I do want to clarify, a lot of people think it is a UTXO commitment scheme. In the software we are using and I think just theoretically it bears no commitment. The miners or the blocks never tell you what that UTXO set is. Similar to today, if you are downloading a block there is no indication what the UTXO set is. You generate it yourself and hopefully everyone’s UTXO set is the same because they started from nothing and built the same thing based on the blocks. In Utreexo as well no one gives you that, you build it yourself. It is very much a direct replacement for the database. No one is going to give it to you but you build it yourself. Hopefully everyone’s is the same. It is a bit different in that if everyone’s is not the same it breaks. Whereas today, hopefully we don’t, it is a bug, but you may have a slightly different UTXO set than someone else. One bug we saw when we were working on this, stuff like OP_RETURN or is_unspendable, things you immediately can see. I’m not going to even put that in the UTXO set because you are never going to spend that. You can totally run a Bitcoin Core node where you put all the OP_RETURNs in the UTXO database. It will work, it is a waste of space because they are never going to get spent but you could do that, it’ll still work. With something like Utreexo that becomes a consensus rule which is kind of scary. If everyone doesn’t have the same set the proofs are no longer compatible. If someone says “I put in OP_RETURNs” or “I put in these kinds of things” they are going to be incompatible. That is a little scary. That’s another issue. I did want to clarify that. There has been so much talk over the years about commitments, “Let’s commit to the UTXO set in some kind of message the miner makes”. This does not do that. That may be something that you could later do with the same technology but that is a whole other can of worms. None of these are doing that right now.

JOB: It is worth noting as well that at some point there was talk about incorporating a UTXO set hash into the block header. Instead of having a specifically hardcoded AssumeUTXO hash in the codebase we could use the block headers. But as Tadge is saying that is a much bigger question.

TD: A lot of people think it is this. I don’t know that it actually gives you that much in either case. Even AssumeUTXO, you are going to want to validate the last couple of weeks or months anyway. You can hardcode that far back in the binary. Does it really help that much to have miners committing to these things? I don’t know.

JOB: That’s a fundamental shift of trust towards the miners as well.

TD: Even if you were willing to do that, “Let’s make this a commitment” there are a lot of downsides. The biggest that I’ve talked to people about, at least for Utreexo, it keeps changing. We have cool ideas and now it is twice as fast as it was last month or something. We definitely don’t want to commit to some kind of soft fork because what if you find a way better idea in a year but now you’ve got this soft fork thing stuck in that no one wants to use anymore. Also it doesn’t seem to get you that much. For the risk and the change in trust it seems like you can do a lot of really cool things without that. I haven’t really even looked into that much because it doesn’t seem it will help us that much. Let’s not even go there. It is the same sort of idea with AssumeUTXO? It doesn’t seem like there is any need to involve miners?

JOB: Absolutely. Given it has taken two and a half years just to do the implementation I didn’t want to undertake anything even getting close to a soft fork.

TD: That’s what is cool about it. These improvements that aren’t even forks at all. If you want to use it, cool and if you don’t… That’s one of the things we talk about a lot with Utreexo, it is a big enough change that it is going to be hard to get into Bitcoin Core and rightly so. Bitcoin Core is quite conservative with changing all this stuff and it is a pretty big change. Let’s try testnet versions that aren’t Core and play around with that first. Maybe it is the kind of thing where people use it for a year or two without it being merged into Core. Then it is long enough and well tested enough that people are like “We can move it in now”.

JOB: I was going to say that hopefully with libbitcoinkernel it might even be practical or advisable to do Utreexo as a separate project. At the same time I think you guys are going to be modifying so much of what is considered consensus critical. The benefits of having a shared library like libbitcoinkernel probably aren’t as much as if you just want to do your own mempool or do a different peer-to-peer relay.

TD: The messages are different. I have talked to other people who are interested in libbitcoinkernel, libconsensus, that kind of stuff. Getting rid of the database code as consensus code is something people are very interested in. Talking to fanquake, fanquake got rid of OpenSSL in Bitcoin Core a year or two ago. He was like “If we could get rid of LevelDB that would be amazing”. LevelDB is pretty big and does affect consensus. It would be great if it didn’t or if it was walled off or isolated a bit more. LevelDB seems fine and everyone is using it. The transition from BerkeleyDB to LevelDB was this big unintentional hard fork. That’s a scary thing to do if you want to change database stuff. But you may want to in the future. Walling that off would be another project that we want. This is part of it. You sort of need to wall it off a bit to do things like AssumeUTXO as well.

MF: We could definitely have a whole session on Utreexo but we should probably move on. Final question, I am just trying to understand how they could potentially play together. Could you perhaps do a normal IBD from snapshot to tip and then use Utreexo from genesis to snapshot? How are they going to play together? Would you use both?

TD: That you could. You couldn’t do the opposite. You couldn’t say “I am going to do normal IBD from snapshot to tip without doing the genesis to snapshot through Utreexo”. Unfortunately right now Utreexo does need to build itself up from scratch if you want to prove things. You could potentially do something like that. That is sort of a weird case, “I want to start really fast but then I want to be able to provide all possible services to everyone”. It could be a use case.

CK: Just to note, feature wise I think they are fairly different. It is just conceptually and in terms of code a lot of stuff that needs to happen are the same. That’s where they are similar.

TD: It would be cool and something we should look at. I did talk about that a little bit a few weeks ago, a different accumulator design. Maybe there are ways to switch from a complete UTXO set and into the accumulator itself. Right now AssumeUTXO does have this hash that it hardcodes in, “Here’s the UTXO set. Go download it and make sure it matches this hash.” If you could also make that something that would accept Utreexo proofs then that is really cool. It works for both. Maybe nodes that support one could provide UTXO set data to the other and stuff. I don’t know how to do that yet. I have some vague ideas that I need to work on. That would be cool if it were possible, it might not be.

MF: There are a few other proposals or mini projects I thought we’d briefly cover. The impact of the rolling UTXO set hash on AssumeUTXO. Have you used this as part of AssumeUTXO? At least in my understanding there was an intersection between these two projects. Is that correct?

FJ: There is not really an intersection. The hashes that they use now, they could use MuHash instead. But right now it uses the “legacy” hash of the UTXO set. James was already working on AssumeUTXO when I got started with this stuff. The whole UTXO set project was based on the old style of hash. It was very unclear if and when the rolling hash, MuHash, would get in. There was really no reason to change that. Now it is in there could be potentially in the future, instead of the serialized hash, the MuHash could be used. That would give some upside if you run the CoinStats index. It is much easier to get hashes of UTXO sets that are in the past. James has a script to do it with a node but I guess the node is not really operational while you are running the script. It is a bit hacky. This could potentially make it a lot nicer but then of course you would still need to make changes in the sense that these saved hashes in the codebase would need to be MuHash hashes. Of course you could have both in there. That is still a bit off, MuHash first needs to be considered 100 percent safe. We still need to watch it a bit more and get a bit more experience with it and give it a bit more time before that makes sense.

JOB: Definitely. As Fabian mentioned the process right now for checking the AssumeUTXO hash on your own node is pretty hacky because what you have to do is call invalidateblock on the block after the snapshot block so that you roll your chain back to the snapshot point. Then you calculate the hash. All the while obviously your node is not at network tip and so not operable. You have to reconsider a block and rewind back to tip. That does take probably 15 minutes at least.
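That procedure can be scripted against a node's RPC interface (invalidateblock, gettxoutsetinfo and reconsiderblock are existing RPCs). A rough sketch, assuming bitcoin-cli is on the PATH and noting that the exact gettxoutsetinfo fields vary by version and that the node is unusable at the tip while this runs:

```python
import json, subprocess

# Sketch of the "hacky" procedure: rewind the chain to the snapshot height with
# invalidateblock, compute the UTXO set hash, then roll forward again.
# Assumes a fully synced node and bitcoin-cli on the PATH.

def cli(*args):
    out = subprocess.check_output(["bitcoin-cli", *args]).decode().strip()
    try:
        return json.loads(out)
    except json.JSONDecodeError:
        return out

def utxo_hash_at_height(snapshot_height):
    next_hash = str(cli("getblockhash", str(snapshot_height + 1)))
    cli("invalidateblock", next_hash)      # rewind the chain to the snapshot block
    try:
        info = cli("gettxoutsetinfo")      # hashes the UTXO set at that height
        return info.get("hash_serialized_2", info)  # field name varies by version
    finally:
        cli("reconsiderblock", next_hash)  # roll forward to the network tip again

# print(utxo_hash_at_height(685000))
```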

MF: It says here “It is conceivable that AssumeUTXO could use the rolling UTXO set hash but it doesn’t currently”. That’s right yeah?

JOB: AssumeUTXO is really agnostic in terms of the particular commitment as long as you hardcode it. As Fabian said you could imagine that if we decide MuHash has the security properties that we’re comfortable with you could have a side by side listing or something to facilitate migration, just move to MuHash.
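For intuition on why a rolling hash like MuHash makes historical UTXO set hashes cheap: each element maps to a number and the set hash is their product modulo a prime, so coins can be added or removed incrementally without rehashing the whole set. A simplified sketch (real MuHash3072 uses 3072-bit arithmetic and different element hashing; the small modulus below is just a stand-in):

```python
import hashlib

# Simplified illustration of a rolling set hash in the spirit of MuHash: adding
# an element multiplies the accumulator, removing one multiplies by the modular
# inverse, and the result is independent of the order of operations.

P = 2**61 - 1  # small prime stand-in for MuHash's much larger modulus

def elem(value):
    return int.from_bytes(hashlib.sha256(value).digest(), "big") % P or 1

class RollingSetHash:
    def __init__(self):
        self.acc = 1
    def add(self, value):
        self.acc = (self.acc * elem(value)) % P
    def remove(self, value):
        self.acc = (self.acc * pow(elem(value), -1, P)) % P

a, b = RollingSetHash(), RollingSetHash()
a.add(b"utxo1"); a.add(b"utxo2"); a.add(b"utxo3"); a.remove(b"utxo2")
b.add(b"utxo3"); b.add(b"utxo1")           # different order, same final set
print(a.acc == b.acc)                       # True: order independent
```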

MF: There were a couple more that were interesting. I don’t know if James or anybody else wants to comment on these. There was the periodic automatic UTXO database backup that has been open for a number of years from Wladimir.

JOB: I haven’t really looked at this in depth.

MF: The other one was the UHS, I don’t know what UHS stands for but that was a post from Cory Fields back in 2018.

TD: Unspent hash set I think.

MF: Full node security without maintaining a full UTXO set.

TD: I know a little bit about that. Definitely talking to Cory about that stuff was one of the things that led to Utreexo. Let’s keep LevelDB, the UTXO set, instead of storing the outpoint and the pubkey script and the amount, let’s just take the hash of all that. You do need to send these proofs, the preimages, when you are spending a transaction but they are very small. Most of the data you can get from the transaction itself. You just can’t get the amount I guess because you don’t know exactly how many satoshis. The txid and the index itself you’ll see right there in the input of the transaction. The fun thing is most of the time you don’t need the pubkey script. You can figure it out yourself. With regular P2PKH you see the pubkey posted in the input and the signature, you can just hash that pubkey and see if it matches. It should if it works. You only have to send 12 bytes or something for each UTXO you are spending. We use this code and this technique in Utreexo as well. This is a much simpler thing though because you still have the UTXO set in your database, it is just hashes instead of the preimage data. It is not that much smaller. It cuts in half-ish.
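A toy sketch of that UHS idea: the node keeps only a hash per unspent output, and the spender supplies the preimage (outpoint, script, amount), which is re-hashed and looked up. The encodings here are simplified placeholders.

```python
import hashlib

# Toy sketch of the UHS ("unspent hash set") idea: store only a hash of each
# UTXO's data; the spender supplies the preimage, which is re-hashed and looked
# up. Encodings are simplified placeholders, not a real serialization.

def utxo_hash(txid, index, script, amount):
    preimage = f"{txid}:{index}:{script}:{amount}".encode()
    return hashlib.sha256(preimage).digest()

class UHS:
    def __init__(self):
        self.hashes = set()
    def add(self, txid, index, script, amount):
        self.hashes.add(utxo_hash(txid, index, script, amount))
    def spend(self, txid, index, script, amount):
        h = utxo_hash(txid, index, script, amount)
        if h not in self.hashes:
            raise ValueError("no such unspent output (or wrong preimage)")
        self.hashes.remove(h)

uhs = UHS()
uhs.add("aa", 0, "p2pkh:key1", 50_000)
uhs.spend("aa", 0, "p2pkh:key1", 50_000)   # ok: preimage matches a stored hash
# uhs.spend("aa", 0, "p2pkh:key1", 50_000) # would fail: already spent
```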

The AssumeUTXO implementation in Bitcoin Core

https://github.com/bitcoin/bitcoin/pull/15606

MF: Let’s talk about the implementation in Core then to finish. The evolution of this, you have the prototype just to check it works or convince yourself and convince others that this was worth pursuing. You said this wasn’t mergeable, this was just a prototype. Any interesting things that came out of this other than just initial feedback?

JOB: That started out as a rough prototype. I think it was a single commit when I opened it. It has slowly evolved into something resembling mergeable code. At the moment I think it needs a rebase but beyond that from my perspective it is code that could be considered for merging. What I’ve been doing is trying to carve off little bits of it and open separate PRs so each change can get the appropriate level of scrutiny. Maybe people can goad me into actually writing unit tests and things like that. This PR does represent the current state of everything needed to do AssumeUTXO. I am regularly testing it.

MF: I misunderstood then. I thought you’d spun off all the other PRs and that was the implementation of it that you thought was going to get merged. This was just the first attempt. But I’ve misunderstood that.

JOB: It initially was just the first attempt. Bit by bit I’ve been keeping it up to date with the changes I do make. All the merged PRs so far have been commits that I’ve carved off from this one. One of the incidental sad things about that PR is that it has got so many comments and so much discussion now that it is almost impossible to navigate based on GitHub’s UI. I don’t know what is to be done about that.

MF: A lot of people complain about the GitHub UI and comments, it is a massive headache.

JOB: Can’t live with it, can’t live without it kind of thing.

MF: You also talked earlier about Carl’s work. He deglobalized ChainstateManager which I’m assuming allowed progress on AssumeUTXO. You needed to get rid of various globals to be able to do AssumeUTXO at all?

JOB: Yeah there were a few globals. One, pcoinsTip was a pointer into the UTXO set itself, the CCoinsView cache which is the in-memory cache entry point into the UTXO set. Another was ChainstateActive or CChain or something, I can’t remember the name of it. It was a global representing the Chainstate object. The block index was a separate global itself. I had to take a few globals and package them up into this ChainstateManager which itself became a global. Then Carl came along and made the good change of removing that global favoring passing the chainman as an argument to the various places that it is needed. That allows us to do better testing, not rely on global state as much. And I guess to eventually consolidate all the consensus critical stuff into a shared library which is what he is working on now.

MF: That was a big project in itself wasn’t it? I suppose you could still have a global ChainstateManager that manages two chainstates.

JOB: Yeah exactly, that is the whole point of the ChainstateManager versus a particular chainstate object.

MF: Then there’s the (open) PRs. This is the issue.

JOB: The issue and the original prototype PR got a little bit muddled. There was some conceptual talk in both places.

MF: The project has all the PRs. We won’t go through them one by one. Quite a few of them are refactors, certainly the early ones were refactors. Add ChainstateManager was an early one we were just discussing that has been deglobalized by Carl. Any particular ones that have been merged that are worth discussing or worth pulling out at this point? James has dropped off the call. We’ll see if James comes back. Let’s go back to Utreexo for a moment. What’s the current status of the project? I saw there was an implementation in Go using the btcd implementation.

TD: We’ve been mostly working in Go. Nicolas and Calvin are making some C++ code but we are working more in Go initially to try things out and then port it over to C++. Someone was working on a Rust version as well.

CK: That was me.

TD: If anyone wants to join Utreexo IRC channel on Libera, utreexo.

MF: In terms of the PRs that have been merged, are there any particular ones that are worth discussing and pulling up? And then we’ll discuss what still needs to be merged.

JOB: The UTXO snapshot activation thing could be worth a look. The ChainstateManager introduction is kind of interesting. But to talk through them here is probably kind of laborious. I would just say if you are interested they should hopefully be fairly clearly titled. You can go and read through the changes for any you are particularly interested in.

MF: Any particular context for reviewing current PRs?

JOB: No, I think a lot of the stuff that got merged is pretty self contained. The upshot of what you need to know is that now there is this ChainstateManager object, there is an active chainstate, the chainstate that is being consulted for the legacy operation of the system. If you want to check if a coin is spendable or do validation you consult the active chainstate but there may or may not be this background chainstate doing stuff in the background. That’s the current state of where we are. Right now for example one of the PRs that’s open, the “have LoadBlockIndex account for snapshot use” is really just adapting different parts of the system, in this case part of the initialization to account for the possible presence of a background chainstate that we have to treat a little bit differently than the active chainstate.

MF: In terms of the review and testing, it looks like you’ve only got one PR left to get merged? Will that be Phase 1 completed?

JOB: I wish that were the case. There is another PR that is open, fanquake has been very kindly maintaining this project because I don’t have the GitHub permissions to add stuff to projects. There is another change in parallel that is being considered in addition to those 3 or 4 PRs up there. What I don’t want to do is open all the PRs I want to get merged at once. I think that would overwhelm people. Gradually, as merges happen, I open up new PRs. If you want to see all the commits that have yet to go in for the completion of Phase 1 you can go to the AssumeUTXO PR. That has all the commits that are required for that to happen; it is a listing of everything that is left to go in.

MF: What’s an easy way to test it? I saw Sjors had found a bug or two and done some good reviews by playing around with it. What could people do to be useful? Is it functional in a way that you can use the RPC and try to do IBD from the snapshot?

JOB: It should be pretty usable. Right now there’s a hardcoded hash at I think 685,000 maybe. Sjors and I have at various points been hosting the UTXO snapshot over Torrent. You can clone this branch, build it and play around with doing an IBD using that snapshot. We should probably create a more recent one but I haven’t thought to do that recently. I bundled a little script in contrib that allows you to do a little mock usage on mainnet. It will spin up a new data directory, it will sync maybe 50,000 blocks that happens very quickly. It will generate a snapshot and then it will tell you what to recompile your code with to allow you to accept that snapshot so you can demonstrate to yourself that the change works. I use that as a system or functional test to make sure the main branch still works.
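
For readers who want to try what James describes, here is a minimal sketch of that flow against a node built from the AssumeUTXO branch. `dumptxoutset` is an existing Bitcoin Core RPC; `loadtxoutset` is the RPC proposed in that branch; the ports, credentials and snapshot height below are illustrative assumptions, not part of the project.

```python
# Minimal sketch (assumptions noted above): create a UTXO snapshot on a synced node
# and load it on a fresh node built from the AssumeUTXO branch.
import json
import requests

def rpc(port, method, params=None, user="user", password="pass"):
    """Hypothetical helper: call bitcoind's JSON-RPC interface."""
    payload = {"jsonrpc": "1.0", "id": "assumeutxo-demo",
               "method": method, "params": params or []}
    r = requests.post(f"http://127.0.0.1:{port}/",
                      auth=(user, password), data=json.dumps(payload))
    r.raise_for_status()
    return r.json()["result"]

# 1. On a synced node: write the current UTXO set to disk (dumptxoutset is a real RPC).
snapshot = rpc(8332, "dumptxoutset", ["utxo-685000.dat"])
print("snapshot of", snapshot["coins_written"], "coins at height", snapshot["base_height"])

# 2. On a fresh node built from the AssumeUTXO branch, whose source contains the
#    hardcoded hash for this height: load the snapshot. The active chainstate then
#    syncs from the snapshot tip while a background chainstate validates history.
result = rpc(18332, "loadtxoutset", ["utxo-685000.dat"])
print("snapshot chainstate activated:", result)
```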

MF: It is functional yet there are still a lot of commits and a lot of code still to merge.

JOB: Absolutely, yeah.

MF: It is working on your branch, gotcha. Any edge cases or tests that need to be written? I was trying to think of edge cases but the edge cases I was thinking of would be like a snapshot that was really recent and then there was a re-org or something. But that is not possible.

JOB: It would be great to have a test for doing pruning and building all the indexes to make sure that code still works. I don’t think I’ve tested all that stuff very rigorously. There is no reason it shouldn’t work but it definitely needs to be tested with pruning and indexing both enabled.

MF: Why might this be a problem?

JOB: There are some accounting complexities. The way that the indexers work is they consume these events on something called the validation interface queue. I had to introduce a new kind of event, a block connected event. I have to distinguish for the purpose of index building between a block connection on an active chainstate versus a background chainstate. The index is now being built out of order instead of in order. If you are connecting blocks from the tip and then afterwards connecting blocks early on that’s a change in assumptions that the indexers are making. On pruning there is some new pruning logic that ensures that when we prune the active chainstate we don’t erase blocks needed by the background chainstate. There are a few changes in there. It would be good if someone tested that stuff. I will at some point, hopefully in a formalized functional test in the test suite. Right now there is a functional test but it just does pretty basic UTXO snapshot operations. It does test a sync between two separate nodes using snapshots. But I haven’t gotten around to writing tests for the indexers and pruning.
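
To picture the accounting problem James mentions, here is a toy sketch (in Python, deliberately not Bitcoin Core's actual C++ validation interface) of why an index that assumed blocks always arrive in order now needs to know which chainstate a block connected event came from.

```python
# Toy illustration only: the real logic lives in Bitcoin Core's C++ index classes.
# It just shows why "block connected" events now need to carry the chainstate role.
class ToyTxIndex:
    def __init__(self):
        self.best_height = -1          # old assumption: blocks arrive strictly in order
        self.indexed = {}              # height -> block hash

    def on_block_connected(self, height, block_hash, role):
        # role is "active" (syncing from the snapshot tip) or "background"
        # (re-validating history below the snapshot base).
        if role == "active" and height != self.best_height + 1:
            # With AssumeUTXO the active chainstate starts at the snapshot height,
            # so a naive in-order check like this would choke on the very first block.
            print(f"out-of-order connect at {height}, last indexed {self.best_height}")
        self.indexed[height] = block_hash
        if role == "active":
            self.best_height = height

idx = ToyTxIndex()
idx.on_block_connected(685001, "00ff..aa", role="active")   # from the snapshot tip
idx.on_block_connected(1, "000a..bb", role="background")    # history backfill
```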

FJ: I have a pull request open where I change the way pruning is blocked when indexers are running. There I extend the tests for pruning and indexers together quite a lot. I think it would be easy to add that on after that gets merged.

JOB: That’s right, I think I ACKed that.

FJ: I hope so. There are some comments that I need to address.

JOB: I think I reviewed that a couple of months ago. That would be a really good change to get in. If anybody is looking for things to do reviewing that PR is a good one.

MF: What’s the PR number?

FJ: 21726

MF: Ok we’ll wrap up. Thanks James and thanks to everyone who attended, it was great to hear about some other projects. The YouTube video will be up and I’ll do a transcript as well.

JOB: Thank you for having me. Really great to hear from everybody, some really insightful stuff.

\ No newline at end of file diff --git a/london-bitcoin-devs/2022-03-01-lightning-panel/index.html b/london-bitcoin-devs/2022-03-01-lightning-panel/index.html index f49be98363..aa5725c2e1 100644 --- a/london-bitcoin-devs/2022-03-01-lightning-panel/index.html +++ b/london-bitcoin-devs/2022-03-01-lightning-panel/index.html @@ -11,4 +11,4 @@ Christian Decker, Bastien Teinturier, Oliver Gugger, Michael Folkson

Date: March 1, 2022

Transcript By: Michael Folkson

Tags: Lightning

Category: Meetup

Media: -https://www.youtube.com/watch?v=fqhioVxqmig

Topic: The Lightning Network in 2022

Location: London Bitcoin Devs

Introductions

Ali Taylor-Cipolla (ATC): Ali coming in from Coinbase Recruiting. I’d like to introduce you to my friend Tiago, who is on the Europe side.

Tiago Schmidt (TS): Hello everyone. Quick introduction, sitting in the recruiting team here in London and leading the expansion for engineering hiring across UK and Ireland. Coinbase is in hyper-growth at the moment and we will be looking to hire loads of people. We have over 200 engineers across all levels here in UK and Ireland and further down the line in more European countries as we continue to expand. We are going to be at the conference on Thursday and Friday so if you have any questions we’ll be there to answer them.

Q - Are you hiring Lightning engineers?

ATC: Sure are. I have one more friend.

Trent Fuenmayor (TF): I’m Trent from Coinbase Giving. We have developer grants available for people in the community doing Bitcoin Core development, blockchain agnostic. We have four themes, you can read more about them by grabbing one of these flyers over here. If you are looking for money to do some Core development please apply to us, if you are looking for a full time gig please apply to us, and if you just want to enjoy the drinks and food, enjoy.

Greenlight demo (Christian Decker)

Blog post: https://blog.blockstream.com/en-greenlight-by-blockstream-lightning-made-easy/

Michael Folkson (MF): So we’ll start with the demo and then we’ll crack on with the panel. Christian is going to demo Greenlight, as far as I know as an exclusive.

Christian Decker (CD): This is the first time anybody has seen it. It is kind of adhoc so I can’t guarantee it is working. It is also not going to look very nice.

Audience: Classic Blockstream

CD: I suck at interfaces. I was hoping to skip the demo completely but Michael is forcing me to do it. Let’s see how this goes. For those who have never heard about Greenlight it is a service that we hope to be offering very soon to end users who might not yet know what operating a Lightning node looks like but who would like to dip their toes into it and get a feel for what Lightning can do for you before investing the time to learn and upgrade your own experience. This is not intended for routing nodes, this is not intended for you if you know how to run a Lightning node. This is for you if you want to see what it can do for you but are not prepared to do the investment just yet. The way it works is we run the infrastructure on your behalf but we don’t have access to your keys, the keys that are necessary to sign off on anything that would move funds. What we call it is a hosted non-custodial Lightning as a service, service. Did I mention this is adhoc? I will walk you through quickly how you can get a node up and running and interact with it, do a payment with it. It is going to be from a command line which I know all of you will appreciate. Hopefully we will have a UI for this.

As you can see this is an empty directory. I want to register a node for mainnet. What this does is goes out, talks to our service, allocates a node for you and it will give you back the access credentials to talk to it. What we have here is the seed phrase that was generated locally, we will never share this with anybody, it will never leave this device at all. And we have the credentials here. What we do next is we ask it “Where is it running?” The node with this ID is currently not running anywhere. Let’s change that. What I do now is I tell it to spin up this node, I want to use it and talk to it. I talk to the scheduler and after a 2 second break it will tell me my node is running here. What I can do is I can talk to it. When I call getinfo it will talk to this node and say “Hey I am node VIOLENTCHASER with this ID running this version and I am synchronized at this block height.” We can tail the logs and we can interact with it. We have a whole bunch of useful command line tools that we can use. We can open channels, we can disconnect, we can stop it, we can generate a new address, stuff like that. That is kind of cute but it is not helping us much. So what I did before as with every cooking show you’ve ever seen I prepared something. What we have here is a node I setup a couple of days ago, it is called T2, very imaginative. When we say “I’d like to talk to this guy” this is going out to the scheduler, starting it and talking to the node, getting the getinfo. This is VIOLENTSEAGULL. If we check where it is connected to using listpeers we see it has a connection but it is disconnected. What we’ve just done is we have connected the signer to the node, we have connected the front end to the node and now this node is fully functional. It has reconnected to its peer and it could perform payments. We can do listpeers again and we see that it is connected, it is talking to a node and it is up to date.

Now I go over to my favorite demo shop (Starblocks) by our friends at eclair. This is the best way to demonstrate anything on testnet and buying coffee has always been my go-to example. I create a Lightning invoice and I copy this invoice and I pay it. And it works. Now I can breathe a sigh of relief. This is always scary. What just happened here? This machine acted as a signer, it held onto the private keys, and it acted as a front end. The node itself did not have any control over what happened. The front end was sending authenticated commands to the node, the node was computing what the change should be in the channels and to affect those changes we had to reach out to the signer to verify, sign off on changes and send back the signatures. You might have seen this scroll by very quickly. What it did here, the signer received an incoming signature request, signed them and sent them back. This allows you to spin up a node in less than a second. When you need it you open the app, it will spin up the node again, it will connect, it will do everything automatically. You can just use it as if it was any app you would normally use. What does this mean for app developers? You don’t have to learn how to operate Lightning nodes anymore because we do it for you. What does this mean for end users? You don’t have to learn about how to operate a Lightning node anymore before seeing what you could do with a Lightning node. You first see what the upside is and then we give you the tools to learn and upgrade your experience. Eventually we want you to take your node and take self custody of it, now you are a fully self sovereign individual, you are a fully self sovereign node operator in the Lightning Network. This is our attempt to onboard, educate and then offboard. That is pretty much it. I hope this is something that you find interesting. We will have many more updates coming soon and will have some sexier interfaces as well. Thank you.
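
As a rough mental model of what the demo just did, here is a hypothetical sketch of the flow: the seed and signer stay on the user’s device, the scheduler runs the node on hosted infrastructure, and the front end sends authenticated commands over gRPC. All class and method names below are made up for illustration; they are not the actual Greenlight client API.

```python
# Hypothetical sketch only: names are illustrative, not the real Greenlight client API.
from secrets import token_bytes

class Signer:
    """Runs on the user's device; the seed never leaves it, only signatures do."""
    def __init__(self, seed: bytes):
        self.seed = seed
    def sign(self, request: bytes) -> bytes:
        return b"signature-over-" + request   # placeholder for real signing logic

class NodeStub:
    """Stands in for a gRPC stub pointing at the node the scheduler just started."""
    def get_info(self) -> dict:
        return {"alias": "VIOLENTSEAGULL", "blockheight": 723000}
    def pay(self, bolt11: str, signer: Signer) -> dict:
        # The node computes the channel state changes; the signer verifies and signs them.
        sig = signer.sign(bolt11.encode())
        return {"status": "complete", "signature": sig.hex()}

class Scheduler:
    """Stands in for the hosted control plane: register a node, start it on demand."""
    def register(self, signer: Signer, network: str) -> dict:
        return {"node_id": "02abc...", "credentials": "device-cert-and-key"}
    def schedule(self, node_id: str) -> NodeStub:
        return NodeStub()                     # in reality: spin the node up in ~2 seconds

seed = token_bytes(32)                        # generated locally, never shared
signer = Signer(seed)
scheduler = Scheduler()
creds = scheduler.register(signer, network="mainnet")
node = scheduler.schedule(creds["node_id"])
print(node.get_info())
print(node.pay("lnbc1...", signer))
```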

MF: Cool, the way this is going to work today is we’ll have topics and we’ll open it up to questions on each topic. The topic now is Greenlight. Does anyone have any questions or comments on Greenlight?

Q - Is there a reason to not use this long term?

A - The way we achieve efficiency with this is if you are not using the node we will spin it down. We can’t do any changes so we will spin it down and free those resources and use them for other users. If you are online continuously, make use of it and keep it alive then we currently impose a 1 hour limit but we could lift that for you as a business. Ultimately if you are a business you are probably better off running your own node. You probably don’t want to run this anyway. You want to have more control over your funds. This is mainly for onboarding new users, teaching them how things can look like before they have to make any investment. But we don’t prevent you from using it if you want to have a business on it. One of the use cases that I can imagine is you are at a hackathon, you need a quick Lightning backend. It takes less than a second to spin this up and have an experimental backend for the weekend. Once you see that it is working we will allow you to export the node and make it a fully fledged Lightning node in your own infrastructure that you fully control.

Q - Is Greenlight itself open source so others can run this service?

A - We plan to open source many parts of Greenlight, among which is the networking interface. I know this is an old friend of yours, complaining about the lack of network RPC in c-lightning. We will open source the components that allow you to run the nodes themselves on your infrastructure. What we are still considering is whether we want to open source the control plane: how you register a node and how you schedule a node. That might also be an option because we want as many of these offerings as possible to help as many people onboard to the Lightning Network as possible.

Q - Will Blockstream be implementing this from their own front ends for Green wallet and things like this? If so what library images would that be?

A - The way we interact with the nodes is just a gRPC interface. Any language that can talk gRPC can interact with the nodes running our own services. The goal is to eventually integrate it into Green. We didn’t want to force the Green team to spend time on this as of yet because they tend to be quite busy. We are working with a couple of companies in a closed beta right now to explore how to get Lightning integrations into as many applications as possible. The name “Greenlight” gives away that it is planned for Green I guess.

Q - How are you managing the liquidity per user? What does that look like in an export scenario?

A - The liquidity is handled by us coordinating with external liquidity service providers to make sure that you as the end user have the cheapest and best possible offer you could have to open channels to the wider network. We plan to have a lower bound offering by ourselves as well to make sure that every user that wants liquidity can get it. For the export this is running open source c-lightning. What we do with the export is we give you a copy of the database, mark the node as exported in our own database so we don’t start it again and you can import the database into your own node. This means that you don’t have any downtime, you don’t have to close channels, you don’t have to reopen channels, your node will be exactly as is at the moment that you export it. We also plan to offer a couple of integration helpers such as a reverse proxy, giving each node its own URL. If you have any wallet attached to your node it will still be able to talk to the node even if you offboard it into your own self hosted infrastructure. Making it a zero config change for front ends.

Q - On the demo the HSM secret there, you connect it to your hardware module or is it just a soft signer?

A - That name has always been a bit aspirational to be honest. We will take the learnings from this to inform also the way that the open source c-lightning project will be designed going forward. We will use information that we gather from this to make c-lightning more efficient in future. Part of that is the way that we independently verify in the signer whether it is an authenticated command or not. That will eventually inform how we can build hardware security modules including hardware wallets for Lightning nodes themselves. This is very much a learning experience for us to eventually get a Ledger or Trezor that can be used to run a Lightning node.

Q - You will expand the HWI library perhaps?

A - The HWI might not be sufficient in this case for us. It would require quite a bit more state management on the hardware itself to get this working in a reliable way and make sure that we can verify the full context of whatever we are signing.

Contrasting the different Lightning implementations

MF: So we’ll start the panel. I can see a lot of familiar faces. For anyone who is new to the Lightning Network I’ll just give the basics. The Lightning Network is a network built on top of the Bitcoin network. It allows for a lot of transaction throughput that doesn’t touch the chain. There are various implementations that try to stay compatible using the spec process. Today we have three representatives of three implementations. We have a representative of a fourth implementation at the back so for any LDK questions we’ll try to get a mic to the back. We have Christian representing c-lightning, Bastien representing eclair and Oliver representing LND. I thought to start, something that might be quite fun to do, a short sales pitch for your implementation and also an anti sales pitch, what you think your implementation is currently bad at doing.

CD: So c-lightning, written in C so very efficient. It adheres to the UNIX philosophy of doing one thing and one thing very well. We don’t force decisions on you, c-lightning is very modular and customizable so you can make it very much your own however you like, however you need it to be. The anti sales pitch, it is very bare bones, it is missing a network RPC and you have to do work to get it working. It is probably a corollary to the sales pitch.

Bastien Teinturier (BT): So eclair, its goals are stability, reliability, scalability and security as well. We were not aiming for maximum number of features, we are a bit lacking a developer community. Our goal is to have something stable that does payments right. You don’t have to care about your node, it just runs and never crashes, there are never any issues. The anti pitch is that it is in Scala, no one knows Scala but it is actually really great.

Oliver Gugger (OG): LND tries to be developer first. We want developers to be able to pick it up easily, can integrate it into their product, build apps on top of it and distribute it as a wallet or a self hosted node. Bringing it to the plebs. We focus mainly on having a great developer interface. We do gRPC, REST and try to build in everything and also make it secure and scalable. If you run a very large node then currently database size is an issue. We are working very hard on that to make that not a big issue. We are working on external database support, lots to do.

MF: Cool. So Christian went through Greenlight just then. Perhaps Bastien you can talk about one of the biggest if not the biggest Lightning node on the network?

BT: It depends on what you count but yeah probably the biggest.

MF: I know that you don’t intensely manage that node but can you give any insights into running such a big node, I think there were scaling challenges that you could maybe talk about briefly.

BT: So the main challenge is sleeping at night knowing that you have that many Bitcoin in a hot wallet. We didn’t have any issues scaling because there is not that much volume on Lightning today. It is really easy to support that volume. We built eclair to be easily horizontally scalable. One thing that we did a year ago, our node is running on multiple machines. There is one main machine that does all the channel and important logic but there are also two front end machines that do all of the connection management, routing gossip management. The things that are bandwidth intensive and can easily be scaled across many different machines, each connection is independent from other connections. To be honest we don’t need it. It could easily be run on a single machine even at that scale. It is a proof of concept that we can do it and if we need to scale it is easy to then scale these front end machines independently. It was a bit simpler than we expected in a way. On the scaling issue, the main scaling issue is not related to Lightning the implementation, more onboarding users and getting many people to have channels and to run in non-custodial settings. That is independent of implementation and more Lightning as a whole, allowing people to be self-sovereign and be able to use Lightning without any pain points.

MF: It is such a big node because you have the Phoenix mobile wallet and lots of users connecting to this massive node? That is the reason why you have such a big node on the network?

BT: We had the biggest node on the network even before we launched Phoenix. It helps but it is not the only reason.

Audience: You did have eclair, the first proper mobile wallet in my opinion.

BT: But eclair could connect to anything, you could open channels to anyone.

MF: With these different models that are emerging for running a Lightning node, perhaps Oliver you can talk about Lightning Node Connect. This is offering a very different service to Greenlight and what Christian was talking about earlier. This is not getting a company to do all the hard stuff, this is allowing you to set up infrastructure between say two servers and splitting up the functionality of that node and wallet.

OG: Lightning Node Connect is a way to connect to your node from let’s say a website or a different device. It helps you punch through your home router and establish a connection. Before what we had was something called LND Connect. It was a QR code, you had to do port forwarding, there was a certificate in there and a macaroon. It was hard to set up. What Lightning Node Connect is trying to do is bridge that gap to your node through an external proxy so you can connect from any device to your node, even if you are running behind a firewall and Tor only. It gives you a 10 word pairing phrase that you can use to connect to your node. The idea is that this is implementation agnostic. Currently it only runs on LND but it is very similar to the Noise protocol. It is a secure protocol to connect to a node behind a firewall. We want to see this being adopted by other implementations as well. It could be cool for c-lightning, eclair to use this. We have an early version released, it needs some work but if that sounds interesting please take a look.

MF: This is separate to the remote signing. I was combining the two. These are two separate projects. One is addressing NAT traversal, the other one is addressing private key management.

OG: Remote signing is different. It is just separating out the private key part of the LND node. Currently it is just splitting it out, you need to run a secondary LND. There is a gRPC interface in between. Now we have the separation we can implement more logic to make it implement policies on when to sign, how often. It is just a first step.

MF: So the topic is comparison of implementations or any particular questions on a specific implementation. We do have Antoine in the back if anyone has any LDK questions, contrasting LDK with these approaches.

Q - Does LND have plans for being able to create a similar environment to Greenlight, like you were mentioning fully remote signing where keys are totally segregated?

OG: If you mean with an environment like a service provider then no. We won’t be offering a service comparable to Greenlight. We have partners like Voltage that already do that. If you are just asking technology wise we want to have complete separation of private keys and you being able to apply policies on how often you can sign, how much and on what keys, whatever. That is the plan. But us hosting a remote signing service I don’t think so, no.

Q - I’m coming from the app user experience. The app can just hold the keys for the user. It could be from the wallet provider, it doesn’t have to be from LND’s service. Just being able to have that user experience.

OG: That is something we are definitely thinking about, how to achieve that and how it could be done. Not all the keys though; if you have to wake it up for gossip stuff it might not be very efficient.

Q - Since not too many people seem to be implementing eclair stuff do you have plans for doing similar services or looking at other applications use your implementation? Or is your primary goal just to make your implementation functional for your wallet?

BT: eclair implements all of the spec so what exactly are you referring to?

Q - A lot of people are looking at mostly using LND for applications, I don’t know if it is just because Scala isn’t popular and they don’t mess with the node at all. Now that Blockstream has Greenlight, it is at least a solution to be able to put the keys in users’ hands. Granted you have to connect it to a Blockstream service instead of your own service but they’ve said they’ll open source it.

BT: I think eclair is really meant for server nodes. We have a lot of ways to build on top of eclair. We have a plugin system where you can interact with eclair. You can send messages to all of the components of eclair and implement almost anything that you like in any JVM language. JVM is just not popular so we don’t have many people running plugins but in theory you could. What I think is more interesting for you is we not only have the eclair codebase for a node implementation but we also have an implementation of Lightning that is targeted only for wallets. At first our wallets were completely based on eclair, on the Scala implementation and on a specific branch that forked off of eclair and removed some of the stuff that was unnecessary for wallets. That could only run on Android, on the JVM, and not on iOS. When we wanted to ship Phoenix on iOS we considered many things. We decided to rewrite the Lightning implementation and optimize it for wallets, leaving many things out. Since there was a Kotlin stack that works on both Android and iOS we started based on that. It was really easy to port Scala code to Kotlin because these are two very similar languages. It took Fabrice a lot of months to bootstrap this. We started from the architecture of eclair and simplified it with what we learned, what we could remove for wallets. That is something an application developer could have a look at as well. It is quite a simple library to use. It is quite minimal, it doesn’t do everything that a node needs to do. It is only for wallets so there are things left out. I think it can be an interesting thing to build upon.

Q - LND has Neutrino and generally the user experience is not that great. It takes a pretty long time to do a full sync, you have to re-sync if you haven’t had your wallet online for a while. Does LND have plans for a node that is performant and more appropriate to mobile? Same question for c-lightning.

CD: I personally never understood why Neutrino should be a good fit for Lightning nodes. Neutrino does away with the downloading of blocks that you are not interested in but since most blocks nowadays contain channel opens you are almost always interested in all blocks so you are not actually saving anything unless you are not verifying that the channels are being opened. That caveat aside. c-lightning has a very powerful plugin infrastructure. We have a very modular architecture that allows you to swap out individual parts including the Bitcoin backend. By default it talks to a bitcoind full node and it will fully verify the entire blockchain. But you can very easily swap it out for something that talks Neutrino or talks to a block explorer if you are so inclined to trust a block explorer. In the case of Greenlight we do have a central bitcoind that serves stripped blocks to our c-lightning nodes. It is much quicker to actually catch up to the blockchain. This is the customizability that I was talking about before. If you put in the time to make c-lightning work for your environment you are going to be very happy with it. But we do ship with defaults that are sane in that sense. There are many ways of running a Bitcoin backend that could be more efficient than just processing entire blocks, with Greenlight we are using stripped blocks. You could talk to a central server that serves you this kind of information. It very much depends on your security needs, how much you trust whatever the source of this ground truth data is. By default we do use the least amount of assumptions but if your environment allows it we can speed it up. That includes server nodes, you name it.
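
As an illustration of the backend swapping Christian mentions, here is a hedged sketch of a c-lightning plugin that answers the Bitcoin backend calls from a block explorer instead of a local bitcoind. The method names follow the interface that the built-in bcli plugin implements; the Esplora URL and the exact argument and return shapes should be checked against the plugin documentation, and two of the required methods are left out for brevity.

```python
#!/usr/bin/env python3
# Hedged sketch: a c-lightning Bitcoin-backend plugin backed by a block explorer,
# which (as Christian notes) means trusting that explorer.
import requests
from pyln.client import Plugin

plugin = Plugin()
EXPLORER = "https://blockstream.info/api"   # Esplora-style REST API

@plugin.method("getchaininfo")
def getchaininfo(plugin, **kwargs):
    # Report the chain and current height; "ibd" is hardcoded since the explorer is synced.
    tip = int(requests.get(f"{EXPLORER}/blocks/tip/height").text)
    return {"chain": "main", "headercount": tip, "blockcount": tip, "ibd": False}

@plugin.method("getrawblockbyheight")
def getrawblockbyheight(plugin, height, **kwargs):
    # Look up the block hash at this height, then fetch the raw block.
    blockhash = requests.get(f"{EXPLORER}/block-height/{height}").text
    raw = requests.get(f"{EXPLORER}/block/{blockhash}/raw").content.hex()
    return {"blockhash": blockhash, "block": raw}

@plugin.method("sendrawtransaction")
def sendrawtransaction(plugin, tx, **kwargs):
    # Broadcast through the explorer instead of a local mempool.
    r = requests.post(f"{EXPLORER}/tx", data=tx)
    return {"success": r.ok, "errmsg": "" if r.ok else r.text}

# estimatefees and getutxout are also required by the backend interface; omitted here.
plugin.run()
```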

OG: As far as I know there are no plans to support another kind of chain backend than we currently have. btcd, bitcoind and Neutrino. There are still a few performance optimisations that can be done on Neutrino. It needs a bit more love. We could do some pre-loading of block headers, an optimization with the database because it is the same database technology that we use in LND. It has come to an end. I feel like Neutrino is still a viable model but maybe we need to invest a little more time. What is being worked on is to have an alternative to ZMQ with bitcoind. You don’t need an actual ZMQ connection. One contributor is working on allowing you to use a remote bitcoind at home as an alternative if that could be an interesting option. Apart from that I am not aware of any other plans.

MF: It seems to me there are specific use cases that are gravitating towards the separate implementations. Perhaps merchants getting set up for the first time would use Greenlight and c-lightning, mobile users would use eclair and developers using an API, there are some exciting gaming use cases of Lightning using LND. Do you think these use cases are sticking to your particular implementation or do you think your implementation can potentially do everything?

CD: We definitely have a persona that we’re aiming for. We do have a certain type of user that we try to optimize for and we are looking into supporting as much as we can. Of course this persona is probably not the same for all of us but there is a bit of overlap. I like the fact that users get to choose what implementation they want to run. Depending on what their use case is one might be slightly better than the other. I think all 3 implementations, all 4 implementations, sorry Antoine…

MF: Given Antoine is so far in the back perhaps Christian you can talk about the use case that LDK is centering on.

CD: Correct me if I’m wrong but LDK is very much as the name suggests a development kit that aims to enable app developers to integrate Lightning into their applications without forcing them to adhere to a certain set of rules. They have a very large toolset of components that you can pick and choose from and you can use to adapt to whatever your situation might be. Whether that is a mobile app or a server app, if I’m not mistaken the Square Crypto app now uses LDK. There is a wide variety of things that you can use it for. It is not a self contained node but more aimed at developers that want a tight integration with the Lightning node itself. Whereas the three implementations here are a software package that can be run out of the box. Some more opinionated and feature complete, some that give you the opportunity of customizing it and are less opinionated. I think the main differentiating factor is that LDK is a development kit that gives you the tools and doesn’t tell you how to actually do it. The documentation is actually quite good.

MF: Another use case that Matt (Corallo) talked about, I don’t know how much progress you’ve made on this was an existing Bitcoin wallet integrating Lightning functionality. A Bitcoin onchain wallet coming and using the LDK and bringing Lightning functionality into that wallet. That sounds really cool but also hard. I don’t know if any progress has been made on that.

Audience: There is a talk at the conference on that.

BT: With ACINQ it seems to be confusing. We have two very different personas and only one is sticking. People think ACINQ is the mobile persona whereas our first persona is actually big, reliable routing and merchant nodes. That has been our first focus. But we have also embraced a different persona of working on mobile. A mistake was probably to name our first wallet the same thing as our node so people thought we were only doing that wallet part. The reason we did that is we really wanted to understand the whole experience of using Lightning. We thought that using Lightning for everyone would not go through running your own node. We wanted to understand what pain points people on a mobile phone would have when using Lightning. You don’t have the same environment at all as when you are running a node in a data center or even at home. That’s why we embraced this second persona of doing mobile wallets to see what we needed to fix at the spec level and implementation level for routing nodes for Lightning to work end-to-end, from a user on a mobile phone to someone paying. We have these two personas and we are trying to separate them a bit more by having our wallet implementation be different from our server node implementation. Don’t think we are only doing mobile.

CD: So what you are saying is Amazon would choose eclair to accept payments?

BT: Yeah probably.

OG: I’m not sure what persona I would ascribe to LND other than the developers themselves, we have a batteries included experience for the developer so they can choose the persona. We have a lot of features, everything is in a single binary so we are also trying to unbundle some things so developers can have a more configurable experience. We have something for everyone almost. We also have some quite big nodes running LND; I wouldn’t say the largest nodes are the main goal but we definitely want to get there as well. We are limited by this database, we don’t have replication built in just yet but we want to go to SQL so we can also cover this persona. I guess we want to serve everyone and anyone.

CD: While everyone is trying to cut out their niche there is friendly competition in trying to give users the options here. We aren’t going to limit ourselves to just one persona or another, you will be the ones profiting from it.

Q - I’ve been in the Lightning space for a while. I am very invested in users using mobile wallets. This is a question for eclair. In my opinion the eclair wallet on Android is one of the best mobile wallets if not the best. Especially the UX. My question is, it is 2022, I recently moved over to an iPhone and there are very few non-custodial Lightning clients available, what is the biggest thing in the way right now for greater mobile wallet adoption and creation?

CD: Apple.

BT: I think you mean a mobile wallet for someone who understands the technical detail and wants to manage the technical detail, someone who wants to see the channels. The approach we have taken with Phoenix is different from the approach we’ve taken with eclair-mobile. Our approach with eclair-mobile was to make a wallet that anyone could use but we failed. Lightning is complicated when you have to manage your channels and you don’t even understand why we cannot tell you exactly how much you are able to send. We started again from scratch with Phoenix. Our goal was anyone, anyone who doesn’t care that it is not onchain Bitcoin, just wants something that sends payments and it just works. If you want that we already have it with Phoenix and Breez and other wallets are doing the same kind of things. If you want something that is more expert that gives you control over channel management maybe what you should look for is more of a remote control over your node at home. If you are at that level of technicality maybe you want to have your node at home control everything. Other than that I think the libraries are getting mature enough so that these wallets will emerge if there is demand. I’m not sure if there is such a big demand for people who don’t run a node at home but want a wallet that gives them full control over the channels. I don’t know. If the demand is there the tools and libraries are starting to emerge for people to build those wallets.

CD: “Channels, it’s complicated” should be Lightning’s slogan actually.

Priorities in the coming year for each implementation

MF: Before we move onto the network as a whole and spec stuff, anything to add on priorities for your implementation in the coming year? I suppose it kind of links to the anti sales pitch, anything in addition that you want to be focusing on this year on your implementation?

CD: We’ve certainly identified a couple of gaps that we’ve had in the past including for example giving users more tools to make c-lightning work for them, be more opinionated, give them something to start with. The first step to this end is we are building a gRPC interface with mutual TLS authentication to allow people to talk to their nodes. That has been a big gap in the past, we were hoping that users would come and tell us how they expect to work with c-lightning and gRPC is definitely the winner there. We are also working on a couple of long requested features that I might not want to reveal just now. You are not going to be limited to one channel per peer much longer. We are going to work much more with a specification effort to bring some of the features that we have implemented, standardize them and make them more widely available. That includes our dual funding approach, that includes our splicing proposals, that includes the liquidity ads that we have. Hopefully we will be able to standardize them, make them much more widely accessible by removing the blockers that have been there so far on the specification. And hopefully Greenlight will work out, we’ll take the learnings from Greenlight and apply them back into the open source c-lightning project, make c-lightning more accessible and easier to work with.

BT: On the eclair side I would say that our first focus is going to be continuing to work on security, reliability and making payments work better, improving payments. The only thing that Lightning needs to do and needs to do well, your payments must work, be fast and be reliable. There are still a lot of things to do. There are big spec changes that have helped Lightning get better but they need a lot of implementation work. They create a lot of details that make it hard and can be improved. Also there are a lot of spec proposals that will really help Lightning get better as a whole. The three that Christian mentioned, we are actively trying to implement them and we want to ship them this year. There are also other proposals, some of which we pushed forward and we hope to see other implementations add like trampoline (routing) and route blinding. Route blinding is already in c-lightning because it is a dependency for offers and onion messages. I think it is really good for privacy. Better payments, better security, better privacy and better reliability. All of these spec proposals all help in a way to get to that end goal where all of these specs are better.

OG: Our main focus is of course stability, scalability and reliability. Payment failures are never fun. These are the biggest things to look at. If we want to get Lightning in as many hands as possible then we will experience these scaling issues. The next step will be Taproot on the wallet level and of course on the spec level. We want to push for everything that is needed to get Lightning upgraded with Taproot but also in our products and services. With Loop and Pool we can take advantage of some of the privacy and scalability things. I personally think we should take a close look at some of the privacy features such as route blinding and what is proposed in offers. I think we should do more on that side.

Security of the network

MF: That was the implementations. We’ll move onto the network as a whole now. I thought we’d start with security. You can give me your view on whether this is a good split but I kind of see three security vectors here. There is the DoS vector, the interaction with the P2P layer of Bitcoin itself and then there are bugs that crop up. Perhaps we’ll start with Christian on the history so far on the Lightning Network of bugs that have cropped up. The most serious one that I remember is the one where you would be tricked into entering into a channel without actually committing any funds onchain. That seemed like a scary bug. Christian can’t remember that one. Let’s start with the bugs that people can remember and then we’ll go onto those other two categories.

BT: I think there is only this one to remember, it was quite bad. The good thing is that it was easy to check if you’ve been attacked. I think no one lost funds in this because it was fixed quickly. We were lucky in that it was early enough that people did the upgrade. There are so many things that are getting better in every new version of an implementation. It helps us ship bug fixes quickly. The process to find these kinds of issues, we need to have more people continually testing these things, we need to have more people with adversarial thinking trying to probe the implementations, trying to run attacks on the existing nodes on regtest or something else. That needs to happen more. We are doing it all the time but it would be good if other people outside of the main teams were doing it as well. It would be valuable and would bring new ideas. Researchers sometimes try to probe the network at a theoretical level right now, doing it more practical and hands on would help a lot. I would like to see more of that.

CD: This gets us a bit into the open source and security dilemma. We work in the open, you can see every change that we do. That puts us in a situation where we sometimes have to hide a bug fix from you that might otherwise be exploited by somebody that knows about this issue. It is very hard for us to fix issues and tell you right away about it because that could expose others in the network to the risk. When we tell you to update please do so. When we ask you twice do so twice.

OG: That bug you mentioned was one of the biggest that affects all of us. LND has had a few bugs as well that the other implementations weren’t affected by. We had one with the signature with a low s. Someone could produce a signature that LND would think was ok but the network wouldn’t. That was a while ago. Then of course bugs that are affecting users, maybe a crash or incompatibility with c-lightning, stuff like that. We’ve since put more focus on stability.

MF: The next category, we’ll probably go to Bastien for this one, the mempool, replaceable transactions. Perhaps you can talk about that very long PR you did implementing replaceable transactions in eclair and some of the challenges of interacting with mempool policy in Core.

BT: I don’t know how well you know the technicalities of Lightning. There was a big change we made recently, almost a year ago, it took a long time to get into production, called anchor outputs. We made a small but fundamental change in how we use a channel. Before that you had to choose the fee rate of your channel transactions beforehand and sign for that. You couldn’t change it. That means you had to predict the future fee rate for when that channel would close. If you guessed wrong then you are screwed. That was bad. Now with anchor outputs you can set the fee rate when the channel closes, when you actually need it. You don’t have this issue anymore but you run into other issues. Yes you can set the fee rate of your transactions when you broadcast them, but there are still quirks in how the Bitcoin P2P layer relays transactions and lets you bump their fees, which means you are not guaranteed to be able to get these transactions to a miner and get them confirmed. If you don’t get them confirmed before a specific timelock then you are exposed to attacks by a malicious actor. We have definitely improved one thing but we have shifted complexity to another layer and we have to deal with that complexity now. We have to make it better and this is not an easy task. This is a very hard, subtle issue that touches many aspects of Bitcoin and the Lightning Network. This isn’t easy to fix but it is something that we really need to do to make things much better security wise. It is getting better, we are working towards it. I think we will be in a good state in the end but there is still a lot of work to do.
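
A small, hedged illustration of the fee mechanics Bastien is describing: with anchor outputs the commitment transaction is pre-signed at a modest feerate, and at broadcast time a child transaction spending the anchor output pays whatever extra fee is needed for the package to reach the target feerate (CPFP). The sizes and feerates below are made-up numbers for the arithmetic, not protocol constants.

```python
# Back-of-the-envelope CPFP arithmetic for an anchor-outputs channel close.
commitment_vsize = 700        # vbytes, commitment tx with a few HTLCs (illustrative)
commitment_fee = 700          # sat, pre-signed at roughly 1 sat/vB
child_vsize = 150             # vbytes, child spending the anchor plus one wallet input
target_feerate = 30           # sat/vB needed to confirm before the HTLC timelock

package_vsize = commitment_vsize + child_vsize
child_fee = target_feerate * package_vsize - commitment_fee
print(f"child must pay {child_fee} sat "
      f"({child_fee / child_vsize:.0f} sat/vB on its own) to bump the package")
```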

MF: I guess the concern is with the bugs, the first category, you can solve those problems. You identify and squash those bugs. This one seems like a long term one where we can kind of incrementally make progress and make it more secure but there is no hallelujah moment where this bug is fixed.

BT: Lightning starts from a statement that may or may not be true. You are able to get transactions confirmed in a specific time period. If you cannot guarantee that you cannot guarantee fund safety. That is not always that easy to guarantee. In very high fee environments where the mempool is very congested it may cost you a lot to be able to guarantee that security. We want to find the right trade-off where it doesn’t cost you too much but you are still completely secure. It is a hard trade-off to find.

CD: It is as much a learning experience for us as it is a learning experience for Bitcoin Core, the Bitcoin peer-to-peer layer. That is what makes this exciting. This is very much a research in progress kind of thing. What excites me and gets me up in the morning.

MF: I looked up the stats on Clark Moody. There’s 3,500 Bitcoin on the Lightning Network that we know publicly. That’s about 130 million US dollars. Does that scare you? Is that about right? If it was a billion would you be scared? 10 billion? Any thoughts?

CD: I would be lying if I was saying this is not a scary amount of money. There is a lot of trust that people put into our code. We do our best to rise up to that trust. That being said we are also learning at the same time as many of you are while operating a Lightning node. The more you can tell us about your experiences while operating Lightning the better we can help you make it more secure, easier to use and more private to use as well. We depend very much on your feedback, just as much as you depend on our code.

Q - What part of the implementation is most prone to breaking interoperability?

CD: We are not pointing fingers. When breaking interoperability you also have two sides. You have one side that has performed some changes, the other side is no longer compatible with it. It is sometimes really hard to figure out which one is the wrong one. The one that has changed might have just addressed a bug that the other one hasn’t addressed yet. It is very much up to the spec effort to say “This is the right behavior, this one isn’t.” That sometimes is an after the fact issue. The spec sometimes gives a bit of leeway that allows you to interpret some parts of the specification in a certain way. Without clarifying comments of which one is the intended behavior you can’t really assign blame to either of them. It might sometimes be the specification that is underspecified causing these kinds of issues. I remember recently roasbeef reached out to ask whether the way that we interpreted one sentence in the specification was the same way he interpreted that one sentence in the specification. It turns out to be different. Is it LND that interpreted it wrong or was it us who interpreted it wrong? There is no right or wrong in this case. It is up to us to come back to the table and clarify what the right interpretation is. That is the vast majority of these kinds of incompatibilities.

Q - How much longer can we have BOLTs without version numbers associated with them? If we want to say we are BOLT whatever compliant it is kind of amorphous. We are changing it, modifying it. It seems really prudent for us to start versioning BOLTs to say “eclair, LND, c-lightning release is BOLT 2.5 compatible” or whatever. What benefits do you see to that and what potential downsides?

BT: It would be really convenient but it is really hard to achieve. This would require central planning of “This feature will make it into the 2.0 release.” All of the features that we are adding to the spec are things that require months of work for each implementation. To have everyone commit to say “Yes I will implement this thing before this other one and before this other one” which is already a one year span with all the unknowns that can happen in a year. It is really hard to get because this is decentralised development. That is something that we would really like to get but there are so many different things that we want to work on. People on each implementation assign different priorities to it and I think that part is healthy. It is really hard to say “This is going to be version 1.0, 1.1, 1.2”. I used to think that we really needed to do that. I don’t think that anymore.

CD: We actually tagged version 1.0 at some point. It was sort of the lowest common denominator among all the implementations. This was to signal that we have all achieved a certain amount of maturity. But very early on we decided on having a specification be a living document. That also means that it is evolving over time. Maybe we will end up with a version 1.1 at some point that declares a level playing field among all of the implementations with some implementations adding optional features and some of them deciding that it is not the way they want to go. It is very much in the nature of the Lightning specification to always have stuff in flight, always have multiple things in flight and there not being as much comparability as there could be maybe if we were to have a more RFC like process. There you say “I’ve implemented RFC1, I’ve implemented RFC2.” You build up the specification by picking and choosing which part of the specification you build. That was very much a choice early on to have this be a living document that evolves over time and has much flexibility as possible built into it.

BT: One thing I would like to add to that, the main issue is right now we’ve only been adding stuff. We are at the point where we’ve added a lot of stuff but we are starting to remove the old stuff. That is when it gets better. We are able to remove the old stuff. When there are two things, one modern thing and one legacy thing and everyone knows about the modern thing, we start removing the old things from the spec, it helps the spec get better, get smaller, get easier. Rusty has started doing that by deprecating one thing a week ago. I am hoping that we will be able to deprecate some other things and remove the old things from the spec.

Q - I know it is technically legal but probing feels like dodgy behavior. What are we going to do about probing? There is a lot of probing going on.

CD: I don’t like that categorization of probing being dodgy or anything because I like to do it. Just for context, probing is sending out a payment that is already destined to fail but along the way you learn a lot about how the liquidity is distributed in the network. The fear is that it might be abused to learn where payments are going in the network by very precisely figuring out how much capacity is in those channels and on what side. When there is a change you can see there is 13 satoshis removed there and 13 satoshis removed there so those two changes might be related. It is very hard to get to that level of precision and we are adding randomness to prevent you from doing that. The main attack vector is pretty much mitigated at this point in time. It also takes quite a bit of time to get to that precision even if you weren’t randomizing. You always have a very fuzzy view even if you are very good at probing, so much for probing as an attack. When it comes to the upsides of probing, probing does tell you a lot about the liquidity in the network. If you probe your surroundings before you start a payment you can be relatively certain that your view of the network is up to date. You can skip a lot of failed payment attempts that you’d otherwise have to do in order to learn that information. It also provides cover traffic for other payments. When you perform payments you are causing traffic in the network. A passive observer in the network would have a very hard time saying “This is a real payment going through and this is just a probe going through”. In my opinion probing does add value, it doesn’t remove as much privacy as people fear it does. On the other hand it adds to the chances of your payments succeeding and to you providing cover traffic for people that might need it. I pretty much like probing to be honest.
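
For the curious, here is a hedged sketch of what a probe like the ones Christian describes can look like against a local c-lightning node using pyln-client: pick a random payment hash (so the payment can never be settled), send it along a route, and read the failure. The socket path is illustrative, parameter names vary a little between c-lightning versions, and the failure parsing here is simplified.

```python
# Hedged probing sketch: a payment with a random hash can never succeed, but the
# failure tells you whether the route could have carried the amount.
from secrets import token_hex
from pyln.client import LightningRpc, RpcError

rpc = LightningRpc("/home/user/.lightning/bitcoin/lightning-rpc")  # path is illustrative

def probe(destination: str, amount_msat: int) -> bool:
    payment_hash = token_hex(32)              # random: nobody knows the preimage
    route = rpc.getroute(destination, amount_msat, 10)["route"]
    rpc.sendpay(route, payment_hash)
    try:
        rpc.waitsendpay(payment_hash)
    except RpcError as e:
        # A failure from the final node about unknown payment details means every hop
        # had enough liquidity; any other failure points at where liquidity ran out.
        return "INCORRECT_OR_UNKNOWN_PAYMENT_DETAILS" in str(e)
    return False                               # cannot happen: the preimage is unknown
```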

Q - I was less worried about the privacy side of it and more the 95 percent of payments going through my node appear to be probes or failed payments at least. I guess the argument is cover traffic works for that.

CD: The one downside I can see with a huge amount of probes going through a node is that it may end up bloating your database. Every time we add or remove an HTLC to a channel we have to flush that to disk, otherwise we might incur losses there. There is work you have to do even for probes. There are ways we could replace those probes with more lightweight probes that do not have this property. You could have lightweight probes that don’t add to your database but still give you some information about how the liquidity situation is in parts of the network and provide that kind of cover traffic. It is not exactly free because yes you are storing a couple of bytes for each failed probe. Over time that might accumulate but I think in many cases the upsides outweigh the downsides. Apparently I’m the only one probing so sorry for those extra bytes. I’ll buy you a USB stick.

Tensions in the BOLT spec process

MF: So we’ve left the slight controversy until the end. The spec process and the BOLTs. Alex Bosworth put the cat amongst the pigeons with a few comments in an email that was shared on Twitter. I’ll read out a couple of the quotes. Obviously he’s not a fan of BOLT 12 but there were some specific comments on the BOLT process itself. To clarify, Alex is speaking for himself and not speaking for Lightning Labs. I think roasbeef and other people clarified that, it is just his personal opinion. “The way the BOLTs are standardized is arbitrary” and “if your side produces Lightning software that only 1 or 2 percent of the network uses, you still get to set the standard for the remaining 98 or 99 percent”. I guess with any spec process or any standardization process there are always going to be tensions. I haven’t followed any others so this is quite new to me. Obviously with different priorities and different business models and different wishes for the direction in which the spec goes it is almost inevitable there are going to be these tensions. Firstly thoughts on Alex’s comments, thoughts on how the spec process has evolved? Is there anything we can improve or is this just an inevitable side effect of having a standardization process with lots of different players, lots of different competing interests.

CD: I think those are very strong statements from someone who has never participated in a single spec meeting. As you rightly pointed out there is a bit of contention in the spec process but that is by design. If one implementation were able to dictate what the entire network looks like we would end up with a very myopic view of what the network could be and we wouldn’t be able to serve all of the different use cases that we are serving. And so yes, sometimes the spec process is frustrating, I totally agree with that. We certainly have different views on what the network should look like. But by this thesis, antithesis and synthesis process we come up with a system that is much more able to serve our users than if one implementation were to go it alone.

BT: I’ll address the frustration about the spec. Yes it is frustrating. Personally I opened my first big spec PR in 2019, it was trampoline, I was hoping that this would get accepted in 6 months. It takes a long time so 6 months should be great. In 2021 I closed it and opened the 2021 edition of trampoline. This time, everyone says they will implement it so in 3 months we will have it and it is still there. But it is ok. If in the end we get something better, between the 2019 edition and the 2021 edition I have already improved some stuff. Some stuff has changed based on feedback, based on things that we learnt running this in one of our wallets. If this is what it takes for the end result to be good I am ok with it. We are not in a rush here. We are in this for the long haul. I think that is the mindset you should have when you approach the spec process. You should not aim for results, you should not aim for merges, you should aim for a learning path. Getting in the end to something that many people contributed to. That in the end is really good for your users and good for the network. If you are ready to wait I think this is a good process. But it is frustrating and it is hard sometimes but I think it is a good thing.

OG: I personally don’t work on the spec so I don’t feel qualified to give an answer. I just wanted to add that I don’t necessarily agree with all the points that Alex mentioned. I definitely would have said it in a different way as well. I think lack of resources to work on the spec sometimes is interpreted as us blocking stuff which is not the intention and not our goal of course. We want to put in more work on the spec so I hope we will improve there. It is an interesting thing to observe, how that frustration sometimes comes to the surface. Thank you for all the work you do on the spec. I need to pick up as well so I’ll do my best.

BT: Two things I want to mention. Of course the spec takes a long time, this is time you could spend doing support for your users, time you could spend doing code for your users, time you could spend doing bug fixes. It is really hard to arbitrate. We all have small teams and a big userbase so it is really hard to find the time to do those things and choose how to allocate your time. It is hard to blame someone because they don’t have the time but I think it is still an important thing to do and so you should try to find some time to allocate to it. I really hope that people don’t think the spec is an ivory tower kind of thing. “These guys are doing the spec and it is complicated. I cannot participate.” Not at all. You should come, you should learn, you should listen, you should ask questions and you should propose things. It is ok to fail in public. That is a good thing. It is ok to embarrass yourself because you proposed something and it was completely wrong. I’m doing it recently with all these RBF things on Bitcoin Core. I am thinking “Why can’t we do that? Wouldn’t that work?” and it is completely dumb because there are a lot of things I don’t know. That’s how you learn. It is ok, you just need to not have too much of an ego. No one will judge you for trying something and learning that there were things you didn’t know. This is a really complex project so don’t be afraid to come in and say things, maybe you will be wrong but that is how you learn. There’s no ivory tower and we are welcoming of all contributions.

Q - What is the actual problem with implementations leading the spec and treating the Lightning Network like competition? You don’t need unanimous consensus like you do with the base layer. In the end you can’t even actually achieve it in the Lightning Network because you already right now don’t have completely compatible specs with the different implementations. What’s the actual problem with one implementation taking a risk, implementing something not in the spec and seeing if it floats, seeing if it sticks? It brings new users and new use cases to the Lightning Network, letting the other implementations agree by seeing it demonstrated and all agreeing to up the lowest common denominator to that spec. What is the problem with that?

CD: Like Bastien said there is no problem in implementations trying out experimental things, it is very much welcome. Back in 2016 we came from 3 different directions and decided to join all of the things that we learned during this initial experimentation phase into a single specification so that we could collaborate and interoperate. This experimental phase must always be followed up by a proposal that is introspectable by everybody else and can be implemented by everybody else. Sometimes that formal proposal is missing and that prevents the other implementations giving their own review on that feature. This review is very important to make sure it works for everybody and that it is the best we can make it.

Q - Is that where the tension is coming from? I know there are some arguments on offers, AMP, LNURL. There are a lot of different payment methods in Lightning. What is all the drama about? There seems to be some form of drama emerging. Is it just that people who are trying to lead by implementation are not going back and making a spec?

MF: Just to be clear there have to be two implementations that implement it.

Q - That’s arbitrary.

MF: But if you don’t do that and one implementation implements it they are attempting to set the standard. Someone comes 6 months later and goes “You’ve made lots of mistakes all over the place. This is a bad design decision, this is a bad design decision.” But because it is out there in the network you’ve got to follow it.

Q - It is only bad if it fails. It is a subjective thing to say it is bad if no one does it. What if it succeeds and is effective?

CD: Very concretely, one of the cases that comes to mind is a pending formal proposal that was done by one team that has been discussed for several months before. Suddenly out of the blue comes a counter proposal that does not have a formal write up, that is only implemented in one codebase and that is being used to hold up the process on a formal proposal without additional explanation why this should be preferred over the formal proposal or why there isn’t a formal proposal. There is a bit of tension there that progress was held up by that argument.

Q - Incompatibility is inevitable though in some features. I believe PTLC channels aren’t compatible with HTLC channels.

CD: One thing is incompatibility, the other one is holding up the spec. Holding up everybody else to catch up to the full feature set that you are building into your own implementation.

MF: And of course every implementation if they wanted to could have their own network of that implementation. They are free to do that but the whole point of the spec process is to try to get all the implementations when they implement something to be compatible on that feature. But they are free to set up a new network and be completely independent of the BOLT compliant network.

Q - I don’t make an implementation so you don’t have to worry about me. I just feel like it is a bit idealistic in that the competition could result in even more iteration and faster evolution of the network than the spec process.

CD: That is a fair argument, that’s not what I am arguing against. Like the name Lightning Network suggests it very much profits from the network effects we get by being compatible, by being able to interoperate and enabling all implementations to play on a level playing field.

Q - The last part sounds like fairness which is not really a competitive aspect. If one implementation led the whole spec, just incidentally, not necessarily by tyranny, “We know the way”, and they were right and brought a million users to Lightning, the other implementations would have to fall in line but we would have a million users instead of x thousand.

CD: That’s why we still have Internet Explorer?

Q - I don’t think it is the same thing. You don’t need total compatibility, you just need a minimum amount. If you have some basic features that work in all implementations you are ok. If there is some new spec thing that isn’t in all of them that brings in new people it would become evident.

CD: That assumes that the new features are backwards compatible and do not benefit from a wider part of the network implementing it.

Q - 10,000 users compared to 100,000 is a different story. If it is useful people will use it.

CD: But then you shouldn’t be pretending that you are part of an open source community that is collaborating on a specification.

MF: So there are conflicting opinions on BOLT 12, I think a lot of people support BOLT 12, there are a couple of oppositions but it is quite a new proposal. Let’s talk about a couple of proposals that have been stuck for at least a year or two. You mentioned your trampoline proposal, there is also dual funding and liquidity ads that I think Lisa is frustrated about with lack of progress on. Perhaps we can talk about what’s holding up these. Is it business models? Is it proprietary services not wanting decentralized versions of that proprietary service? Is it not wanting too much to go over the Lightning Network so that it becomes spam? What are the considerations here that are holding up progress on some of these proposals?

BT: I think the main consideration is developer time. We have done all the simple things which was Lightning 1.0. Now all that remains is the harder things that take more time to build and take more time to review and that involve trade-offs that aren’t perfect solutions. That takes time to review, that takes time for people to agree that this is the right trade-off that they want to implement in their implementation. That takes time for people to actually implement it, test compatibility. I think we have a lot of proposals right now that are floating in the spec and most of us agree that this is a good thing, this is something we want to build, we just don’t have the time to do everything at once. Everyone has to prioritize what they do. But I don’t think any of those are really stuck, they are making slow progress. All of these in my opinion are progressing.

CD: All 3 or 4 proposals that you mentioned…. Trampoline isn’t that big but dual funding is quite big, splicing is quite big, liquidity ads is quite big, offers is quite big. It is only natural that it takes time to review them, hammer out all the fine details and that requires setting aside resources to do so. We are all small teams. It comes down to the priorities of the individual team, how much you want to dedicate to the specification effort. c-lightning has always made an effort to have all of its developers on the specification process as have other teams as well. But it is a small community, we sometimes get frustrated if our own pet project doesn’t make it through the process as quickly as we’d like it to.

The far future of the Lightning Network

MF: So we’ll end with this: when you dream about what the Lightning Network looks like in 5 years, what does that look like? I know Christian has previously talked about channel factories. As the months go on everything seems further away. The more work you do, the more progress you make, the further you are away from it almost. Certainly with Taproot, perhaps Oliver can talk about some of the stuff he’d like to get with Taproot’d Lightning. What do you hope to get and which direction do you want to go in over the next few years?

CD: Five years ago I would not have expected to be here. This network has grown much much quicker than I thought it would. It has been much more successful than I thought it would. It has surpassed my wildest expectations which probably should give you an idea of how bad I am at estimating the future. You shouldn’t ask me about predictions. What the Lightning Network will look like in 5 years doesn’t really depend on what we think is the right way to go. There are applications out there we Lightning spec developers cannot imagine. We are simply way too deep in the process to take those long shots. I am always amazed by what applications the users are building on top of Lightning. It is really you guys who are going to build the big moonshots. We are just building the bases that you can start off on. That’s really the message here. You are the guys who are doing the innovation here so thank you for that.

BT: I don’t have anything to add, that’s perfect.

OG: I’m just going to be bold and say I’d like to see a billion users or a billion people actually using Lightning in one way or another. Hopefully as non-custodial as possible but number of users go up.

Q&A

MF: So final audience questions. Rene (Pickhardt) had one on Twitter. How should we proceed with the zero base fee situation? Should LN devs do nothing? Should we have a default zero recommendation in the spec and/or implementations? Should we deprecate the base fee? Something else? Any thoughts on zero base fee?

CD: Since Rene and I have been talking a lot about why zero base fee is sensible or why it wouldn’t be I’ll add a quick summary. Zero base fee is a proposal by Rene Pickhardt to remove the fixed fee and make the fee that every node charges for forwarding payments just proportional to the amount that is being forwarded. It is not removing the fees, it is just removing that initial offset. Why is that important? It turns out that the computation that we do to compute an optimal route inside the Lightning Network might be much, much harder if we don’t remove it, according to his model. If we were to remove the base fee then that would allow us to compute optimal routes inside of the Lightning Network with a maximum chance of succeeding in the shortest amount of time possible. That’s a huge upside. The counter-argument is that it is a change in the protocol and maybe we can get away with an approximation that might not be as optimal as the optimal solution would be but is still pretty good. As to whether we as the spec process should be dictating those rules, I don’t think it is up to us to dictate anything here. We have given Lightning node operators the ability to set fees however they want. It is up to the operators to decide whether that huge performance increase for their users is worth setting the base fee to zero or whether they are ok with charging a bit more for the fixed work they have to do and having slightly worse routing performance. As spec developers it is always difficult to set defaults because defaults are sticky. It would be us deciding on behalf of Lightning node operators what the path forward should be. Whereas it should be much more us listening to what you guys want from us and taking decisions based on that.
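For reference, the forwarding fee a routing node charges is a fixed base fee plus a part proportional to the amount (per BOLT 7); “zero base fee” simply means dropping the fixed term so the fee is purely proportional, which is what makes the min-cost-flow style optimal routing computation tractable in Rene’s model. A small worked example:

```python
def forwarding_fee_msat(amount_msat: int, base_fee_msat: int,
                        fee_proportional_millionths: int) -> int:
    # BOLT 7 forwarding fee: a fixed base fee plus a proportional part.
    return base_fee_msat + amount_msat * fee_proportional_millionths // 1_000_000

# Forwarding 100,000 sat (100,000,000 msat) through a hop charging a
# 1 sat base fee and 200 ppm costs 1,000 + 20,000 = 21,000 msat.
assert forwarding_fee_msat(100_000_000, 1_000, 200) == 21_000
# With zero base fee the same forward costs exactly 20,000 msat.
assert forwarding_fee_msat(100_000_000, 0, 200) == 20_000
```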

OG: I met with Rene recently and we discussed this proposal again. We kind of have a plan for how we could integrate some kind of proof of concept into LND that can work around some of the issues with the base fee. Just to get a sense of how this computes, how fast it is compared to normal pathfinding and how much better it could be. So we can actually run some numbers, do some tests, get some real world comparison, real world numbers which I thought was a bit lacking before, and have actual results to show. My goal is to give it a shot, implement something very naive and very stupid based on his paper and see where it goes. I am curious to see what comes out of that.

CD: As you can see many of our answers come down to us being very few people working on the specification. I think all 3 or 4 teams are currently looking for help there. If you or someone you know is interested in joining the Lightning efforts I think we are all hiring.

OG: Definitely.

CD: Raise your hand, join the ranks and we can definitely use your help.

Q - The base chain is always quite incentive compatible, something that is good for you is good for the network and vice versa. In Lightning there are some deviations from that. For example if you lock up liquidity for a long time you don’t pay for that. I think it is fine for now as we are bootstrapping but going forward when we get more adoption do you see that as something we need to move away from? Are there any technical obstacles, something like stuckless payments? What are your views on paying for liquidity over time and not just for a one-off?

BT: You say this is not an issue because we don’t have a lot of volume but I think it is an issue even if we don’t have a lot of volume. It is just that we don’t have a perfect solution to that and I don’t think there is a perfect solution to that. We have done a lot of research, there have been a lot of proposals trying different things to fix it but all of these proposals either don’t completely work or work but require quite a lot of implementation changes and a network wide upgrade that takes time. We are actively trying to fix this but this is still very much in research phase. If more people want to look at it from a research angle that would be very much appreciated because I think it is an interesting problem to solve and there is still some design space that hasn’t been evaluated. But we haven’t focused on implementing anything because we are not yet convinced by the solutions we’ve found. This is very much something that we are trying to fix in the short, mid term.

CD: I think saying that the base chain is always incentive compatible is probably also a bit of a stretch. We still disagree on whether RBF rules are incentive compatible or not. We are still fighting over that. That being said I do agree that the incentive incompatibilities on the base chain are much fewer because the base chain is much simpler. There is much less stuff that can go wrong on Bitcoin mainnet than can go wrong on Lightning. The surface where stuff can go wrong on Lightning is much bigger. Bitcoin itself has had quite a lot more time to mature. I’ve been with Bitcoin since 2009 and trust me I’ve seen some s*** go down. So I think we shouldn’t set up the same requirements when it comes to being perfectly incentive compatible or perfectly secure or perfectly private either. We should be able to take the time to address these issues in a reasonable way, take those learnings and address them as they surface. That being said there are many proposals flying around, we are evaluating quite a few of them including paying for time of funds locked up, you mentioned stuckless payments which might become possible with PTLCs which might become possible with Taproot. There are ways we can improve and only the future will tell if we address them completely or whether we have to go over the books again and find a better solution.

Q - Do any of you have any general advice for aspiring Lightning developers? Pitfalls to avoid or things specifically to focus on?

CD: As an aspiring Lightning developer you probably want to go from your own experience, go from what you’ve seen while running a Lightning node yourself, whether that is for yourself or somebody else. Or try explaining to somebody else what your own disagreements are with what Lightning is nowadays. Try to come up with a solution, you can propose something completely theoretical at first or it can be something that you built on top of Lightning. All of that is experience that you can bring to the table and is very valuable for us as implementers of the various implementations or as spec developers. Those additional views that you bring to the table can inform how we can go forward. My personal tip is try out an implementation, try to build something on top of it and gain experience with it. Then come to us and say “This is not how I expected it to work. Why is it? How can we do it better? Wouldn’t this be better?” Things like that. That is a very gentle approach to this whole topic and usually the way that you can make a permanent change in this system as well.

BT: My biggest feedback and advice on the Lightning learning experience is to not try to take it all in at once. This is a multi-year learning process, there is a lot you have to learn. This is a very complex subject. You have to master Bitcoin, then Lightning. It is a huge beast with a lot of subtleties. Accept that there are many things that will not make sense for a very long time. You have to be ok with that and just take it piece by piece, learning some stuff, starting with one part of Lightning and then moving onto something else. This is going to be a long journey but a really good one.

OG: Very good points. Just to add onto everything already said, the main constraint in resources isn’t that we don’t have enough people creating pull requests but that we don’t have enough people reviewing, testing, running stuff, making sure that new PRs are at the quality that is required to be merged. As a new developer if your first goal is to create a PR it might be frustrating because it might lie there for a while if no one has time to look at it. Start by testing out a PR, giving feedback, “I ran this. It works or it doesn’t work”. You learn a lot and also help way more than just adding a new feature that might not be the most important thing at the moment.

CD: I think Bastien said it about half an hour ago. I think it is worth repeating. Don’t hesitate to ask questions. There are no dumb questions. Even the simplest of questions allows us to shine a light on something from a different perspective. It is something that I personally enjoy a lot, to walk people through the basics or more advanced features. Or maybe it is an opportunity for me to learn a new perspective as well. We are kind of ok, we don’t bite. Sometimes we have cookies.

MF: Thank you to our panelists and thank you to our sponsors, Coinbase Recruiting.

\ No newline at end of file +https://www.youtube.com/watch?v=fqhioVxqmig

Topic: The Lightning Network in 2022

Location: London Bitcoin Devs

Introductions

Ali Taylor-Cipolla (ATC): Ali coming in from Coinbase Recruiting. I’d like to introduce my friend Tiago who is on the Europe side.

Tiago Schmidt (TS): Hello everyone. Quick introduction, sitting in the recruiting team here in London and leading the expansion for engineering hiring across UK and Ireland. Coinbase is in hyper-growth at the moment and we will be looking to hire loads of people. We have over 200 engineers across all levels here in UK and Ireland and further down the line in more European countries as we continue to expand. We are going to be at the conference on Thursday and Friday so if you have any questions we’ll be there to answer them.

Q - Are you hiring Lightning engineers?

ATC: Sure are. I have one more friend.

Trent Fuenmayor (TF): I’m Trent from Coinbase Giving. We have developer grants available for people in the community doing Bitcoin Core development, blockchain agnostic. We have four themes, you can read more about them by grabbing one of these flyers over here. If you are looking for money to do some Core development please apply to us, if you are looking for a full time gig please apply to us, and if you just want to enjoy the drinks and food, enjoy.

Greenlight demo (Christian Decker)

Blog post: https://blog.blockstream.com/en-greenlight-by-blockstream-lightning-made-easy/

Michael Folkson (MF): So we’ll start with the demo and then we’ll crack on with the panel. Christian is going to demo Greenlight, as far as I know as an exclusive.

Christian Decker (CD): This is the first time anybody has seen it. It is kind of ad hoc so I can’t guarantee it is working. It is also not going to look very nice.

Audience: Classic Blockstream

CD: I suck at interfaces. I was hoping to skip the demo completely but Michael is forcing me to do it. Let’s see how this goes. For those who have never heard about Greenlight, it is a service that we hope to be offering very soon to end users who might not yet know what operating a Lightning node looks like but who would like to dip their toes into it and get a feel for what Lightning can do for them before investing the time to learn and upgrade their own experience. This is not intended for routing nodes, this is not intended for you if you know how to run a Lightning node. This is for you if you want to see what it can do for you but are not prepared to make that investment just yet. The way it works is we run the infrastructure on your behalf but we don’t have access to your keys, the keys that are necessary to sign off on anything that would move funds. What we call it is hosted, non-custodial Lightning-as-a-service. Did I mention this is ad hoc? I will walk you through quickly how you can get a node up and running and interact with it, do a payment with it. It is going to be from a command line which I know all of you will appreciate. Hopefully we will have a UI for this.

As you can see this is an empty directory. I want to register a node for mainnet. What this does is goes out, talks to our service, allocates a node for you and it will give you back the access credentials to talk to it. What we have here is the seed phrase that was generated locally, we will never share this with anybody, it will never leave this device at all. And we have the credentials here. What we do next is we ask it “Where is it running?” The node with this ID is currently not running anywhere. Let’s change that. What I do now is I tell it to spin up this node, I want to use it and talk to it. I talk to the scheduler and after a 2 second break it will tell me my node is running here. What I can do is I can talk to it. When I call getinfo it will talk to this node and say “Hey I am node VIOLENTCHASER with this ID running this version and I am synchronized at this block height.” We can tail the logs and we can interact with it. We have a whole bunch of useful command line tools that we can use. We can open channels, we can disconnect, we can stop it, we can generate a new address, stuff like that. That is kind of cute but it is not helping us much. So what I did before as with every cooking show you’ve ever seen I prepared something. What we have here is a node I setup a couple of days ago, it is called T2, very imaginative. When we say “I’d like to talk to this guy” this is going out to the scheduler, starting it and talking to the node, getting the getinfo. This is VIOLENTSEAGULL. If we check where it is connected to using listpeers we see it has a connection but it is disconnected. What we’ve just done is we have connected the signer to the node, we have connected the front end to the node and now this node is fully functional. It has reconnected to its peer and it could perform payments. We can do listpeers again and we see that it is connected, it is talking to a node and it is up to date.
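In pseudo-Python, the flow demonstrated above looks roughly like the following. All class and method names here (the scheduler client, `register`, `schedule`, `get_info`) are purely illustrative, not the actual Greenlight client API; the point is that the seed stays local while the scheduler hands back a handle to a remotely hosted node.

```python
import os

def greenlight_demo(scheduler):
    # `scheduler` is a hypothetical client for Greenlight's scheduling service.
    seed = os.urandom(32)                   # generated locally, never leaves this device
    creds = scheduler.register(seed=seed, network="mainnet")  # returns access credentials
    node = scheduler.schedule(creds)        # spins the node up and returns its endpoint
    info = node.get_info()                  # authenticated command sent to the remote node
    print(info["id"], info["blockheight"])
    # Any state change the node computes must still be signed by the local
    # signer before funds can move.
```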

Now I go over to my favorite demo shop (Starblocks) by our friends at eclair. This is the best way to demonstrate anything on testnet and buying coffee has always been my go-to example. I create a Lightning invoice and I copy this invoice and I pay it. And it works. Now I can breathe a sigh of relief. This is always scary. What just happened here? This machine acted as a signer, it held onto the private keys, and it acted as a front end. The node itself did not have any control over what happened. The front end was sending authenticated commands to the node, the node was computing what the change should be in the channels and to effect those changes we had to reach out to the signer to verify, sign off on changes and send back the signatures. You might have seen this scroll by very quickly. What it did here: the signer received incoming signature requests, signed them and sent them back. This allows you to spin up a node in less than a second. When you need it you open the app, it will spin up the node again, it will connect, it will do everything automatically. You can just use it as if it was any app you would normally use. What does this mean for app developers? You don’t have to learn how to operate Lightning nodes anymore because we do it for you. What does this mean for end users? You don’t have to learn about how to operate a Lightning node anymore before seeing what you could do with a Lightning node. You first see what the upside is and then we give you the tools to learn and upgrade your experience. Eventually we want you to take your node and take self custody of it, now you are a fully self-sovereign individual, you are a fully self-sovereign node operator in the Lightning Network. This is our attempt to onboard, educate and then offboard. That is pretty much it. I hope this is something that you find interesting. We will have many more updates coming soon and will have some sexier interfaces as well. Thank you.

MF: Cool, the way this is going to work today is we’ll have topics and we’ll open it up to questions on each topic. The topic now is Greenlight. Does anyone have any questions or comments on Greenlight?

Q - Is there a reason to not use this long term?

A - The way we achieve efficiency with this is if you are not using the node we will spin it down. Without your signer attached the node can’t make any changes anyway, so we spin it down and free those resources and use them for other users. If you are online continuously, make use of it and keep it alive then we currently impose a 1 hour limit but we could lift that for you as a business. Ultimately if you are a business you are probably better off running your own node. You probably don’t want to run this anyway. You want to have more control over your funds. This is mainly for onboarding new users, teaching them what things can look like before they have to make any investment. But we don’t prevent you from using it if you want to have a business on it. One of the use cases that I can imagine is you are at a hackathon and you need a quick Lightning backend. It takes less than a second to spin this up and have an experimental backend for the weekend. Once you see that it is working we will allow you to export the node and make it a fully fledged Lightning node in your own infrastructure that you fully control.

Q - Is Greenlight itself open source so others can run this service?

A - We plan to open source many parts of Greenlight, among which is the networking interface. I know this is an old friend of yours, complaining about the lack of a network RPC in c-lightning. We will open source the components that allow you to run the nodes themselves on your infrastructure. What we are still considering is whether we want to open source the control plane: how you register a node and how you schedule a node. That might also be an option because we want as many of these offerings as possible to help as many people onboard to the Lightning Network as possible.

Q - Will Blockstream be implementing this from their own front ends for Green wallet and things like this? If so what library images would that be?

A - The way we interact with the nodes is just a gRPC interface. Any language that can talk gRPC can interact with the nodes running our own services. The goal is to eventually integrate it into Green. We didn’t want to force the Green team to spend time on this as of yet because they tend to be quite busy. We are working with a couple of companies in a closed beta right now to explore how to get Lightning integrations into as many applications as possible. The name “Greenlight” gives away that it is planned for Green I guess.
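As a rough illustration of “any language that can talk gRPC”, a Python client could connect along these lines. The generated stub names (`node_pb2`, `NodeStub`, `GetInfo`) are placeholders for whatever the published proto definitions actually generate, not a confirmed API; only the `grpc` library calls themselves are standard.

```python
import grpc
# import node_pb2, node_pb2_grpc   # hypothetical bindings generated from the service's .proto

def connect(target: str) -> grpc.Channel:
    # Mutual TLS: verify the node's certificate and present our own client identity.
    creds = grpc.ssl_channel_credentials(
        root_certificates=open("ca.pem", "rb").read(),
        private_key=open("client-key.pem", "rb").read(),
        certificate_chain=open("client.pem", "rb").read(),
    )
    return grpc.secure_channel(target, creds)

# channel = connect("mynode.example.com:443")
# stub = node_pb2_grpc.NodeStub(channel)
# print(stub.GetInfo(node_pb2.GetInfoRequest()))
```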

Q - How are you managing the liquidity per user? What does that look like in an export scenario?

A - The liquidity is handled by us coordinating with external liquidity service providers to make sure that you as the end user have the cheapest and best possible offer you could have to open channels to the wider network. We plan to have a lower bound offering by ourselves as well to make sure that every user that wants liquidity can get it. For the export this is running open source c-lightning. What we do with the export is we give you a copy of the database, mark the node as exported in our own database so we don’t start it again and you can import the database into your own node. This means that you don’t have any downtime, you don’t have to close channels, you don’t have to reopen channels, your node will be exactly as is at the moment that you export it. We also plan to offer a couple of integration helpers such as a reverse proxy, giving each node its own URL. If you have any wallet attached to your node it will still be able to talk to the node even if you offboard it into your own self hosted infrastructure. Making it a zero config change for front ends.

Q - In the demo, the HSM secret there, do you connect it to your hardware module or is it just a soft signer?

A - That name has always been a bit aspirational to be honest. We will take the learnings from this to inform also the way that the open source c-lightning project will be designed going forward. We will use information that we gather from this to make c-lightning more efficient in future. Part of that is the way that we independently verify in the signer whether it is an authenticated command or not. That will eventually inform how we can build hardware security modules including hardware wallets for Lightning nodes themselves. This is very much a learning experience for us to eventually get a Ledger or Trezor that can be used to run a Lightning node.

Q - You will expand the HWI library perhaps?

A - The HWI might not be sufficient in this case for us. It would require quite a bit more state management on the hardware itself to get this working in a reliable way and make sure that we can verify the full context of whatever we are signing.

Contrasting the different Lightning implementations

MF: So we’ll start the panel. I can see a lot of familiar faces. For anyone who is new to the Lightning Network I’ll just give the basics. The Lightning Network is a network built on top of the Bitcoin network. It allows for a lot of transaction throughput that doesn’t touch the chain. There are various implementations that try to stay compatible using the spec process. Today we have three representatives of three implementations. We have a representative of a fourth implementation at the back so for any LDK questions we’ll try to get a mic to the back. We have Christian representing c-lightning, Bastien representing eclair and Oliver representing LND. I thought to start, something that might be quite fun to do, a short sales pitch for your implementation and also an anti sales pitch, what you think your implementation is currently bad at doing.

CD: So c-lightning, written in C so very efficient. It adheres to the UNIX philosophy of doing one thing and one thing very well. We don’t force decisions on you, c-lightning is very modular and customizable so you can make it very much your own however you like, however you need it to be. The anti sales pitch, it is very bare bones, it is missing a network RPC and you have to do work to get it working. It is probably a corollary to the sales pitch.

Bastien Teinturier (BT): So eclair, its goals are stability, reliability, scalability and security as well. We were not aiming for maximum number of features, we are a bit lacking a developer community. Our goal is to have something stable that does payments right. You don’t have to care about your node, it just runs and never crashes, there are never any issues. The anti pitch is that it is in Scala, no one knows Scala but it is actually really great.

Oliver Gugger (OG): LND tries to be developer first. We want developers to be able to pick it up easily, can integrate it into their product, build apps on top of it and distribute it as a wallet or a self hosted node. Bringing it to the plebs. We focus mainly on having a great developer interface. We do gRPC, REST and try to build in everything and also make it secure and scalable. If you run a very large node then currently database size is an issue. We are working very hard on that to make that not a big issue. We are working on external database support, lots to do.

MF: Cool. So Christian went through Greenlight just then. Perhaps Bastien you can talk about one of the biggest if not the biggest Lightning node on the network?

BT: It depends on what you count but yeah probably the biggest.

MF: I know that you don’t intensely manage that node but can you give any insights into running such a big node, I think there were scaling challenges that you could maybe talk about briefly.

BT: So the main challenge is sleeping at night knowing that you have that many Bitcoin in a hot wallet. We didn’t have any issues scaling because there is not that much volume on Lightning today. It is really easy to support that volume. We built eclair to be easily horizontally scalable. One thing that we did a year ago, our node is running on multiple machines. There is one main machine that does all the channel and important logic but there are also two front end machines that do all of the connection management, routing gossip management. The things that are bandwidth intensive and can easily be scaled across many different machines, each connection is independent from other connections. To be honest we don’t need it. It could easily be run on a single machine even at that scale. It is a proof of concept that we can do it and if we need to scale it is easy to then scale these front end machines independently. It was a bit simpler than we expected in a way. On the scaling issue, the main scaling issue is not related to Lightning the implementation, more onboarding users and getting many people to have channels and to run in non-custodial settings. That is independent of implementation and more Lightning as a whole, allowing people to be self-sovereign and be able to use Lightning without any pain points.

MF: It is such a big node because you have the Phoenix mobile wallet and lots of users connecting to this massive node? That is the reason why you have such a big node on the network?

BT: We had the biggest node on the network even before we launched Phoenix. It helps but it is not the only reason.

Audience: You did have eclair, the first proper mobile wallet in my opinion.

BT: But eclair could connect to anything, you could open channels to anyone.

MF: With these different models that are emerging for running a Lightning node, perhaps Oliver you can talk about Lightning Node Connect. This is offering a very different service to Greenlight and what Christian was talking about earlier. This is not getting a company to do all the hard stuff, this is allowing you to set up infrastructure between say two servers and splitting up the functionality of that node and wallet.

OG: Lightning Node Connect is a way to connect to your node from let’s say a website or a different device. It helps you punch through your home router and establish a connection. Before what we had was something called LND Connect. It was a QR code, you had to do port forwarding, there was a certificate in there and a macaroon. It was hard to set up. What Lightning Node Connect is trying to do is bridge that gap to your node through an external proxy so you can connect from any device to your node, even if you are running behind a firewall and Tor only. It gives you a 10 word pairing phrase that you can use to connect to your node. The idea is that this is implementation agnostic. Currently it only runs on LND but it is very similar to the Noise protocol. It is a secure protocol to connect to a node behind a firewall. We want to see this being adopted by other implementations as well. It could be cool for c-lightning, eclair to use this. We have an early version released, it needs some work but if that sounds interesting please take a look.

MF: This is separate to the remote signing. I was combining the two. These are two separate projects. One is addressing NAT traversal, the other one is addressing private key management.

OG: Remote signing is different. It is just separating out the private key part of the LND node. Currently it is just splitting it out, you need to run a secondary LND. There is a gRPC interface in between. Now that we have the separation we can implement more logic to enforce policies on when to sign and how often. It is just a first step.

MF: So the topic is comparison of implementations or any particular questions on a specific implementation. We do have Antoine in the back if anyone has any LDK questions, contrasting LDK with these approaches.

Q - Does LND have plans for being able to create a similar environment to Greenlight, like you were mentioning fully remote signing where keys are totally segregated?

OG: If you mean with an environment like a service provider then no. We won’t be offering such a service compared to Greenlight. We have partners like Voltage that already do that. If you are just asking technology wise we want to have complete separation of private keys and you being able to apply policies on how often you can sign, how much and on what keys, whatever. That is the plan. But us hosting a remote signing service, I don’t think so, no.

Q - I’m coming from the app user experience. The app can just hold the keys for the user. It could be from the wallet provider, it doesn’t have to be from LND’s service. Just being able to have that user experience.

OG: That is something we are definitely thinking about, how to achieve that and how it could be done. Not all the keys though; if you have to wake up for gossip stuff it might not be very efficient.

Q - Since not too many people seem to be implementing eclair stuff do you have plans for doing similar services or looking at other applications using your implementation? Or is your primary goal just to make your implementation functional for your wallet?

BT: eclair implements all of the spec so what exactly are you referring to?

Q - A lot of people are looking at mostly using LND for applications, I don’t know if it is just because Scala isn’t popular and they don’t mess with the node at all. Now that Blockstream has Greenlight, it is at least a solution to be able to put the keys in users’ hands. Granted you have to connect it to a Blockstream service instead of your own service but they’ve said they’ll open source it.

BT: I think eclair is really meant for server nodes. We have a lot of ways to build on top of eclair. We have a plugin system where you can interact with eclair. You can send messages to all of the components of eclair and implement almost anything that you like in any JVM language. JVM is just not popular so we don’t have many people running plugins but in theory you could. What I think is more interesting for you is we not only have the eclair codebase for a node implementation but we also have an implementation of Lightning that is targeted only at wallets. At first our wallets were completely based on eclair, on the Scala implementation and on a specific branch that forked off of eclair and removed some of the stuff that was unnecessary for wallets. That could only run on Android, on the JVM, and not on iOS. When we wanted to ship Phoenix on iOS we considered many things. We decided to rewrite the Lightning implementation and optimize it for wallets, leaving many things out. Since there was a Kotlin stack that works on both Android and iOS we started based on that. It was really easy to port Scala code to Kotlin because these are two very similar languages. It took Fabrice many months to bootstrap this. We started from the architecture of eclair and simplified it with what we learned, what we could remove for wallets. That is something an application developer could have a look at as well. It is quite a simple library to use. It is quite minimal, it doesn’t do everything that a node needs to do. It is only for wallets so there are things left out. I think it can be an interesting thing to build upon.

Q - LND has Neutrino and generally the user experience is not that great. It takes a pretty long time to do a full sync, you have to re-sync if you haven’t had your wallet online for a while. Does LND have plans for a node that is performant and more appropriate to mobile? Same question for c-lightning.

CD: I personally never understood why Neutrino should be a good fit for Lightning nodes. Neutrino does away with downloading the blocks that you are not interested in, but since most blocks nowadays contain channel opens you are almost always interested in all blocks, so you are not actually saving anything unless you are not verifying that the channels are being opened. That caveat aside, c-lightning has a very powerful plugin infrastructure. We have a very modular architecture that allows you to swap out individual parts including the Bitcoin backend. By default it talks to a bitcoind full node and it will fully verify the entire blockchain. But you can very easily swap it out for something that talks Neutrino or talks to a block explorer if you are so inclined to trust a block explorer. In the case of Greenlight we do have a central bitcoind that serves stripped blocks to our c-lightning nodes. It is much quicker to actually catch up to the blockchain. This is the customizability that I was talking about before. If you put in the time to make c-lightning work for your environment you are going to be very happy with it. But we do ship with defaults that are sane in that sense. There are many ways of running a Bitcoin backend that could be more efficient than just processing entire blocks; with Greenlight we are using stripped blocks. You could talk to a central server that serves you this kind of information. It very much depends on your security needs and how much you trust whatever the source of this ground truth data is. By default we use the fewest assumptions but if your environment allows it we can speed it up. That includes server nodes, you name it.
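As a sketch of what “swapping out the Bitcoin backend” means in practice, a c-lightning plugin can register the backend methods that the default bcli plugin provides (getchaininfo, getrawblockbyheight, and so on) and serve them from any block source. This assumes the pyln-client library; treat the exact method signatures and response fields as approximate and check lightningd’s plugin documentation before relying on them.

```python
#!/usr/bin/env python3
from pyln.client import Plugin

plugin = Plugin()

def my_block_source(height):
    # Placeholder: fetch the block from bitcoind, an Esplora server, a
    # stripped-block service, or whatever your environment provides.
    raise NotImplementedError

@plugin.method("getchaininfo")
def getchaininfo(plugin, **kwargs):
    # Tell lightningd which chain we are on and how far our block source has synced.
    return {"chain": "main", "headercount": 750000,
            "blockcount": 750000, "ibd": False}

@plugin.method("getrawblockbyheight")
def getrawblockbyheight(plugin, height, **kwargs):
    # Return the block hash and raw block hex for the requested height.
    blockhash, rawblock = my_block_source(height)
    return {"blockhash": blockhash, "block": rawblock}

plugin.run()
```

Started with something like `lightningd --disable-plugin bcli --plugin=/path/to/this.py`, a plugin along these lines would stand in for the default bitcoind-backed one; again, this is only a sketch of the idea.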

OG: As far as I know there are no plans to support another kind of chain backend than we currently have: btcd, bitcoind and Neutrino. There are still a few performance optimizations that can be done on Neutrino. It needs a bit more love. We could do some pre-loading of block headers, an optimization with the database because it is the same database technology that we use in LND. It has come to an end. I feel like Neutrino is still a viable model but maybe we need to invest a little more time. What is being worked on is an alternative to ZMQ with bitcoind so you don’t need an actual ZMQ connection. One contributor is working on allowing you to use a remote bitcoind at home as an alternative if that is an interesting option for you. Apart from that I am not aware of any other plans.

MF: It seems to me there are specific use cases that are gravitating towards the separate implementations. Perhaps merchants getting set up for the first time would use Greenlight and c-lightning, mobile users would use eclair and developers using an API, there are some exciting gaming use cases of Lightning using LND. Do you think these use cases are sticking to your particular implementation or do you think your implementation can potentially do everything?

CD: We definitely have a persona that we’re aiming for. We do have a certain type of user that we try to optimize for and we are looking into supporting as much as we can. Of course this persona is probably not the same for all of us but there is a bit of overlap. I like the fact that users get to choose what implementation they want to run. Depending on what their use case is one might be slightly better than the other. I think all 3 implementations, all 4 implementations, sorry Antoine…

MF: Given Antoine is so far in the back perhaps Christian you can talk about the use case that LDK is centering on.

CD: Correct me if I’m wrong but LDK is very much as the name suggests a development kit that aims to enable app developers to integrate Lightning into their applications without forcing them to adhere to a certain set of rules. They have a very large toolset of components that you can pick and choose from and you can use to adapt to whatever your situation might be. Whether that is a mobile app or a server app, if I’m not mistaken the Square Crypto app now uses LDK. There is a wide variety of things that you can use it for. It is not a self contained node but more aimed at developers that want a tight integration with the Lightning node itself. Whereas the three implementations here are a software package that can be run out of the box. Some more opinionated and feature complete, some that give you the opportunity of customizing it and are less opinionated. I think the main differentiating factor is that LDK is a development kit that gives you the tools and doesn’t tell you how to actually do it. The documentation is actually quite good.

MF: Another use case that Matt (Corallo) talked about, I don’t know how much progress you’ve made on this was an existing Bitcoin wallet integrating Lightning functionality. A Bitcoin onchain wallet coming and using the LDK and bringing Lightning functionality into that wallet. That sounds really cool but also hard. I don’t know if any progress has been made on that.

Audience: There is a talk at the conference on that.

BT: With ACINQ it seems to be confusing. We have two very different personas and only one is sticking. People think ACINQ is the mobile persona whereas our first persona is actually big, reliable routing and merchant nodes. That has been our first focus. But we have also embraced a different persona of working on mobile. A mistake was probably to name our first wallet the same thing as our node so people thought we were only doing that wallet part. The reason we did that is we really wanted to understand the whole experience of using Lightning. We thought that using Lightning for everyone would not go through running your own node. We wanted to understand what pain points people on a mobile phone would have when using Lightning. You don’t have the same environment at all as when you are running a node in a data center or even at home. That’s why we embraced this second persona of doing mobile wallets to see what we needed to fix at the spec level and implementation level for routing nodes for Lightning to work end-to-end, from a user on a mobile phone to someone paying. We have these two personas and we are trying to separate them a bit more by having our wallet implementation be different from our server node implementation. Don’t think we are only doing mobile.

CD: So what you are saying is Amazon would choose eclair to accept payments?

BT: Yeah probably.

OG: I’m not sure what persona I would ascribe to LND other than the developers themselves, we have a batteries included experience for the developer so they can choose the persona. We have a lot of features, everything is in a single binary so we are also trying to unbundle some things so developers can have a more configurable experience. We have something for everyone almost. We also have some quite big nodes running LND but I wouldn’t say the largest nodes are the main goal, though we definitely want to get there as well. We are limited by this database, we don’t have replication built in just yet but we want to go to SQL so we can also cover this persona. I guess we want to serve everyone and anyone.

CD: While everyone is trying to cut out their niche there is friendly competition in trying to give users the options here. We aren’t going to limit ourselves to just one persona or another, you will be the ones profiting from it.

Q - I’ve been in the Lightning space for a while. I am very invested in users using mobile wallets. This is a question for eclair. In my opinion the eclair wallet on Android is one of the best mobile wallets if not the best. Especially the UX. My question is, it is 2022, I recently moved over to an iPhone and there are very few non-custodial Lightning clients available. What is the biggest thing in the way right now for greater mobile wallet adoption and creation?

CD: Apple.

BT: I think you mean a mobile wallet for someone who understands the technical detail and wants to manage the technical detail, someone who wants to see the channels. The approach we have taken with Phoenix is different from the approach we’ve taken with eclair-mobile. Our approach with eclair-mobile was to make a wallet that anyone could use but we failed. Lightning is complicated when you have to manage your channels and you don’t even understand why we cannot tell you exactly how much you are able to send. We started again from scratch with Phoenix. Our goal was anyone, anyone who doesn’t care that it is not onchain Bitcoin, just wants something that sends payments and it just works. If you want that we already have it with Phoenix and Breez and other wallets are doing the same kind of things. If you want something that is more expert that gives you control over channel management maybe what you should look for is more of a remote control over your node at home. If you are at that level of technicality maybe you want to have your node at home control everything. Other than that I think the libraries are getting mature enough so that these wallets will emerge if there is demand. I’m not sure if there is such a big demand for people who don’t run a node at home but want a wallet that gives them full control over the channels. I don’t know. If the demand is there the tools and libraries are starting to emerge for people to build those wallets.

CD: “Channels, it’s complicated” should be Lightning’s slogan actually.

Priorities in the coming year for each implementation

MF: Before we move onto the network as a whole and spec stuff, anything to add on priorities for your implementation in the coming year? I suppose it kind of links to the anti sales pitch, anything in addition that you want to be focusing on this year on your implementation?

CD: We’ve certainly identified a couple of gaps that we’ve had in the past including for example giving users more tools to make c-lightning work for them, be more opinionated, give them something to start with. The first step to this end is we are building a gRPC interface with mutual TLS authentication to allow people to talk to their nodes. That has been a big gap in the past, we were hoping that users would come and tell us how they expect to work with c-lightning and gRPC is definitely the winner there. We are also working on a couple of long requested features that I might not want to reveal just now. You are not going to be limited to one channel per peer for much longer. We are going to work much more with the specification effort to bring some of the features that we have implemented, standardize them and make them more widely available. That includes our dual funding approach, that includes our splicing proposals, that includes the liquidity ads that we have. Hopefully we will be able to standardize them, make them much more widely accessible by removing the blockers that have been there so far on the specification. And hopefully Greenlight will work out, we’ll take the learnings from Greenlight and apply them back into the open source c-lightning project, making c-lightning more accessible and easier to work with.

BT: On the eclair side I would say that our first focus is going to be continuing to work on security, reliability and making payments work better, improving payments. The one thing that Lightning needs to do and needs to do well is payments: they must work, be fast and be reliable. There are still a lot of things to do. There are big spec changes that have helped Lightning get better but they need a lot of implementation work; they create a lot of details that make it hard and that can be improved. Also there are a lot of spec proposals that will really help Lightning get better as a whole. The three that Christian mentioned we are actively trying to implement and we want to ship them this year. There are also other proposals, some of which we pushed forward and we hope to see other implementations add, like trampoline (routing) and route blinding. Route blinding is already in c-lightning because it is a dependency for offers and onion messages. I think it is really good for privacy. Better payments, better security, better privacy and better reliability. All of these spec proposals help in a way to get to that end goal.

OG: Our main focus is of course stability, scalability and reliability. Payment failures are never fun. These are the biggest things to look at. If we want to get Lightning in as many hands as possible then we will experience these scaling issues. The next step will be Taproot on the wallet level and of course on the spec level. We want to push for everything that is needed to get Lightning upgraded with Taproot but also in our products and services. With Loop and Pool we can take advantage of some of the privacy and scalability things. I personally think we should take a close look at some of the privacy features such as route blinding and what is proposed in offers. I think we should do more on that side.

Security of the network

MF: That was the implementations. We’ll move onto the network as a whole now. I thought we’d start with security. You can give me your view on whether this is a good split but I kind of see three security vectors here. There is the DoS vector, the interaction with the P2P layer of Bitcoin itself and then there are bugs that crop up. Perhaps we’ll start with Christian on the history so far on the Lightning Network of bugs that have cropped up. The most serious one that I remember is the one where you would be tricked into entering into a channel without actually committing any funds onchain. That seemed like a scary bug. Christian can’t remember that one. Let’s start with the bugs that people can remember and then we’ll go onto those other two categories.

BT: I think there is only this one to remember, it was quite bad. The good thing is that it was easy to check if you’ve been attacked. I think no one lost funds in this because it was fixed quickly. We were lucky in that it was early enough that people did the upgrade. There are so many things that are getting better in every new version of an implementation. It helps us ship bug fixes quickly. The process to find these kinds of issues, we need to have more people continually testing these things, we need to have more people with adversarial thinking trying to probe the implementations, trying to run attacks on the existing nodes on regtest or something else. That needs to happen more. We are doing it all the time but it would be good if other people outside of the main teams were doing it as well. It would be valuable and would bring new ideas. Researchers sometimes try to probe the network at a theoretical level right now, doing it more practical and hands on would help a lot. I would like to see more of that.

CD: This gets us a bit into the open source and security dilemma. We work in the open, you can see every change that we do. That puts us in a situation where we sometimes have to hide a bug fix from you that might otherwise be exploited by somebody that knows about this issue. It is very hard for us to fix issues and tell you right away about it because that could expose others in the network to the risk. When we tell you to update please do so. When we ask you twice do so twice.

OG: That bug you mentioned was one of the biggest that affects all of us. LND has had a few bugs as well that the other implementations weren’t affected by. We had one with the signature with a low s. Someone could produce a signature that LND would think was ok but the network wouldn’t. That was a while ago. Then of course bugs that are affecting users, maybe a crash or incompatibility with c-lightning, stuff like that. We’ve since put more focus on stability.

MF: The next category, we’ll probably go to Bastien for this one, the mempool, replaceable transactions. Perhaps you can talk about that very long PR you did implementing replaceable transactions in eclair and some of the challenges of interacting with mempool policy in Core.

BT: I don’t know how well you know the technicalities of Lightning. There was a big change we made recently, almost a year ago, it took a long time to get into production, called anchor outputs. We made a small but fundamental change in how we used a channel. Before that you had to choose the fee rate of your channel transactions beforehand and sign for that. You couldn’t change it. That means you had to predict the future fee rate for when that channel would close. If you guessed wrong then you are screwed. That was bad. Now with anchor outputs you can set the fee rate when the channel closes, when you actually need it. You don’t have this issue anymore but you run into other issues. Yes you can set the fee rate of your transactions when you broadcast them, but there are still quirks about how the Bitcoin P2P layer will relay transactions and let you bump the fees of transactions, and those don’t guarantee that you will be able to get these transactions to a miner and get them confirmed. If you don’t get them confirmed before a specific timelock then you are exposed to attacks by a malicious actor. We have definitely improved one thing but we have shifted complexity to another layer and we have to deal with that complexity now. We have to make it better and this is not an easy task. This is a very hard, subtle issue that touches many aspects of Bitcoin and the Lightning Network. This isn’t easy to fix but it is something that we really need to do to make it much better security wise. It is getting better, we are working towards it. I think we will be in a good state in the end but there is still a lot of work to do.
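In practice the fee rate is set at broadcast time by attaching a child transaction that spends the anchor output (child-pays-for-parent). As a rough illustration, here is a minimal sketch of that CPFP arithmetic; the function name and all numbers are hypothetical, not taken from any implementation.

```python
# Minimal sketch of the CPFP arithmetic behind anchor outputs (illustrative only).
# The commitment transaction is signed at a low feerate; either party can attach
# a child transaction spending its anchor output to raise the effective feerate
# of the whole package at broadcast time.

def child_fee_needed(parent_fee_sat: int, parent_vsize: int,
                     child_vsize: int, target_feerate_sat_per_vb: float) -> int:
    """Fee the anchor-spending child must pay so the package reaches the target feerate."""
    package_vsize = parent_vsize + child_vsize
    total_fee_needed = target_feerate_sat_per_vb * package_vsize
    return max(0, round(total_fee_needed - parent_fee_sat))

# Hypothetical numbers: a 700 vB commitment transaction paying 1 sat/vB,
# bumped with a 150 vB child so the package reaches 30 sat/vB.
print(child_fee_needed(parent_fee_sat=700, parent_vsize=700,
                       child_vsize=150, target_feerate_sat_per_vb=30))  # -> 24800
```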

MF: I guess the concern is with the bugs, the first category, you can solve those problems. You identify and squash those bugs. This one seems like a long term one where we can kind of incrementally make progress and make it more secure but there is no hallelujah moment where this bug is fixed.

BT: Lightning starts from a statement that may or may not be true. You are able to get transactions confirmed in a specific time period. If you cannot guarantee that you cannot guarantee fund safety. That is not always that easy to guarantee. In very high fee environments where the mempool is very congested it may cost you a lot to be able to guarantee that security. We want to find the right trade-off where it doesn’t cost you too much but you are still completely secure. It is a hard trade-off to find.

CD: It is as much a learning experience for us as it is a learning experience for Bitcoin Core, the Bitcoin peer-to-peer layer. That is what makes this exciting. This is very much a research in progress kind of thing. What excites me and gets me up in the morning.

MF: I looked up the stats on Clark Moody. There’s 3,500 Bitcoin on the Lightning Network that we know publicly. That’s about 130 million US dollars. Does that scare you? Is that about right? If it was a billion would you be scared? 10 billion? Any thoughts?

CD: I would be lying if I was saying this is not a scary amount of money. There is a lot of trust that people put into our code. We do our best to rise up to that trust. That being said we are also learning at the same time as many of you are while operating a Lightning node. The more you can tell us about your experiences while operating Lightning the better we can help you make it more secure, easier to use and more private to use as well. We depend very much on your feedback, just as much as you depend on our code.

Q - What part of the implementation is most prone to breaking interoperability?

CD: We are not pointing fingers. When breaking interoperability you also have two sides. You have one side that has performed some changes, the other side is no longer compatible with it. It is sometimes really hard to figure out which one is the wrong one. The one that has changed might have just addressed a bug that the other one hasn’t addressed yet. It is very much up to the spec effort to say “This is the right behavior, this one isn’t.” That sometimes is an after the fact issue. The spec sometimes gives a bit of leeway that allows you to interpret some parts of the specification in a certain way. Without clarifying comments of which one is the intended behavior you can’t really assign blame to either of them. It might sometimes be the specification that is underspecified causing these kinds of issues. I remember recently roasbeef reached out to ask whether the way that we interpreted one sentence in the specification was the same way he interpreted that one sentence in the specification. It turns out to be different. Is it LND that interpreted it wrong or was it us who interpreted it wrong? There is no right or wrong in this case. It is up to us to come back to the table and clarify what the right interpretation is. That is the vast majority of these kinds of incompatibilities.

Q - How much longer can we have BOLTs without version numbers associated with them? If we want to say we are BOLT whatever compliant it is kind of amorphous. We are changing it, modifying it. It seems really prudent for us to start versioning BOLTs to say “eclair, LND, c-lightning release is BOLT 2.5 compatible” or whatever. What benefits do you see to that and what potential downsides?

BT: It would be really convenient but it is really hard to achieve. This would require central planning of “This feature will make it into the 2.0 release.” All of the features that we are adding to the spec are things that require months of work for each implementation. To have everyone commit to say “Yes I will implement this thing before this other one and before this other one” which is already a one year span with all the unknowns that can happen in a year. It is really too hard to get because this is decentralised development. That is something that we would really like to get but there are so many different things that we want to work on. People on each implementation assign different priorities to it and I think that part is healthy. It is really hard to say “This is going to be version 1.0, 1.1, 1.2”. I used to think that we really needed to do that. I don’t think that anymore.

CD: We actually tagged version 1.0 at some point. It was sort of the lowest common denominator among all the implementations. This was to signal that we have all achieved a certain amount of maturity. But very early on we decided on having a specification be a living document. That also means that it is evolving over time. Maybe we will end up with a version 1.1 at some point that declares a level playing field among all of the implementations with some implementations adding optional features and some of them deciding that it is not the way they want to go. It is very much in the nature of the Lightning specification to always have stuff in flight, always have multiple things in flight and there not being as much compatibility as there could be maybe if we were to have a more RFC like process. There you say “I’ve implemented RFC1, I’ve implemented RFC2.” You build up the specification by picking and choosing which part of the specification you build. That was very much a choice early on to have this be a living document that evolves over time and has as much flexibility as possible built into it.

BT: One thing I would like to add to that, the main issue is right now we’ve only been adding stuff. We are at the point where we’ve added a lot of stuff but we are starting to remove the old stuff. That is when it gets better. We are able to remove the old stuff. When there are two things, one modern thing and one legacy thing and everyone knows about the modern thing, we start removing the old things from the spec, it helps the spec get better, get smaller, get easier. Rusty has started doing that by deprecating one thing a week ago. I am hoping that we will be able to deprecate some other things and remove the old things from the spec.

Q - I know it is technically legal but probing feels like buggy behavior. What are we going to do about probing? There is a lot of probing going on.

CD: I don’t like that categorization of probing being dodgy or anything because I like to do it. Just for context, probing is sending out a payment that is already destined to fail but along the way you learn a lot about how the liquidity is distributed in the network. The fear is that it might be abused to learn where payments are going in the network by very precisely figuring out how much capacity is in those channels and on what side. When there is a change you can see there is 13 satoshis removed there and 13 satoshis removed there so those two changes might be related. It is very hard to get to that level of precision and we are adding randomness to prevent you from doing that. The main attack vector is pretty much mitigated at this point in time. It also takes quite a bit of time to get to that precision even if you weren’t randomizing. You always have a very fuzzy view even if you are very good at probing; so much for probing as an attack. When it comes to the upsides of probing, probing does tell you a lot about the liquidity in the network. If you probe your surroundings before you start a payment you can be relatively certain that your view of the network is up to date. You can skip a lot of failed payment attempts that you’d otherwise have to do in order to learn that information. It also provides cover traffic for other payments. When you perform payments you are causing traffic in the network. A passive observer in the network would have a very hard time to say “This is a real payment going through and this is just a probe going through”. In my opinion probing does add value, it doesn’t remove as much privacy as people fear it does. On the other hand it adds to the chances of your payments succeeding and to you providing cover traffic for people that might need it. I pretty much like probing to be honest.
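To make the mechanics concrete, a balance probe typically binary-searches the amount that still gets forwarded through the target channel. Here is a toy sketch under that assumption; `send_probe` is a hypothetical stand-in for sending a payment with an unknown payment hash and observing where it fails.

```python
# Toy illustration of channel balance probing (not from any implementation).
# send_probe(amount_sat) is a hypothetical stand-in: it returns True if the probe
# made it through the target channel (and failed later, e.g. with an unknown
# payment hash at the destination), False if it failed earlier for lack of liquidity.

def estimate_balance(send_probe, upper_bound_sat: int, tolerance_sat: int = 1000) -> int:
    """Binary-search the spendable balance of a channel down to `tolerance_sat`."""
    lo, hi = 0, upper_bound_sat
    while hi - lo > tolerance_sat:
        mid = (lo + hi) // 2
        if send_probe(mid):
            lo = mid   # mid satoshis fit through the channel
        else:
            hi = mid   # mid satoshis did not fit
    return lo
```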

Q - I was less worried about the privacy side of it and more the 95 percent of payments going through my node appear to be probes or failed payments at least. I guess the argument is cover traffic works for that.

CD: The one downside I can see with a huge amount of probes going through a node is that it may end up bloating your database. Every time we add or remove an HTLC on a channel we have to flush that to disk, otherwise we might incur losses there. There is work you have to do even for probes. There are ways we could replace those probes with more lightweight probes that do not have this property. You could have lightweight probes that don’t add to your database but still give you some information about how the liquidity situation is in parts of the network and provide that kind of cover traffic. It is not exactly free because yes you are storing a couple of bytes for each failed probe. Over time that might accumulate but I think in many cases the upsides outweigh the downsides. Apparently I’m the only one probing so sorry for those extra bytes. I’ll buy you a USB stick.

Tensions in the BOLT spec process

MF: So we’ve left the slight controversy until the end. The spec process and the BOLTs. Alex Bosworth put the cat amongst the pigeons with a few comments in an email that was shared on Twitter. I’ll read out a couple of the quotes. Obviously he’s not a fan of BOLT 12 but there were some specific comments on the BOLT process itself. To clarify, Alex is speaking for himself and not speaking for Lightning Labs. I think roasbeef and other people clarified that, it is just his personal opinion. “The way the BOLTs are standardized is arbitrary” and “if your side produces Lightning software that only 1 or 2 percent of the network uses, you still get to set the standard for the remaining 98 or 99 percent”. I guess with any spec process or any standardization process there are always going to be tensions. I haven’t followed any others so this is quite new to me. Obviously with different priorities and different business models and different wishes for the direction in which the spec goes it is almost inevitable there are going to be these tensions. Firstly thoughts on Alex’s comments, thoughts on how the spec process has evolved? Is there anything we can improve or is this just an inevitable side effect of having a standardization process with lots of different players, lots of different competing interests.

CD: I think those are very strong statements from someone who has never participated in a single spec meeting. As you rightly pointed out there is a bit of contention in the spec process but that is by design. If one implementation were able to dictate what the entire network looks like we would end up with a very myopic view of what the network could be and we wouldn’t be able to serve all of the different use cases that we are serving. And so yes, sometimes the spec process is frustrating, I totally agree with that. We certainly have different views on what the network should look like. But by this thesis, antithesis and synthesis process we come up with a system that is much more able to serve our users than if one implementation were to go it alone.

BT: I’ll address the frustration about the spec. Yes it is frustrating. Personally I opened my first big spec PR in 2019, it was trampoline, I was hoping that this would get accepted in 6 months. It takes a long time so 6 months should be great. In 2021 I closed it and opened the 2021 edition of trampoline. This time, everyone says they will implement it so in 3 months we will have it and it is still there. But it is ok. If in the end we get something better, between the 2019 edition and the 2021 edition I have already improved some stuff. Some stuff has changed based on feedback, based on things that we learnt running this in one of our wallets. If this is what it takes for the end result to be good I am ok with it. We are not in a rush here. We are in this for the long haul. I think that is the mindset you should have when you approach the spec process. You should not aim for results, you should not aim for merges, you should aim for a learning path. Getting in the end to something that many people contributed to. That in the end is really good for your users and good for the network. If you are ready to wait I think this is a good process. But it is frustrating and it is hard sometimes but I think it is a good thing.

OG: I personally don’t work on the spec so I don’t feel qualified to give an answer. I just wanted to add that I don’t necessarily agree with all the points that Alex mentioned. I definitely would have said it in a different way as well. I think lack of resources to work on the spec sometimes is interpreted as us blocking stuff which is not the intention and not our goal of course. We want to put in more work on the spec so I hope we will improve there. It is an interesting thing to observe, how that frustration sometimes comes to the surface. Thank you for all the work you do on the spec. I need to pick up as well so I’ll do my best.

BT: Two things I want to mention. Of course the spec takes a long time, this is time you could spend doing support for your users, time you could spend doing code for your users, time you could spend doing bug fixes. It is really hard to arbitrate. We all have small teams and a big userbase so it is really hard to find the time to do those things and choose how to allocate your time. It is hard to blame someone because they don’t have the time but I think it is still an important thing to do and so you should try to find some time to allocate to it. I really hope that people don’t think the spec is an ivory tower kind of thing. “These guys are doing the spec and it is complicated. I cannot participate.” Not at all. You should come, you should learn, you should listen, you should ask questions and you should propose things. It is ok to fail in public. That is a good thing. It is ok to embarrass yourself because you proposed something and it was completely wrong. I’m doing it recently with all these RBF things on Bitcoin Core. I am thinking “Why can’t we do that? Wouldn’t that work?” and it is completely dumb because there are a lot of things I don’t know. That’s how you learn. It is ok, you just need to not have too much of an ego. No one will judge you for trying something and learning that there were things you didn’t know. This is a really complex project so don’t be afraid to come in and say things, maybe you will be wrong but that is how you learn. There’s no ivory tower and we are welcoming of all contributions.

Q - What is the actual problem with implementations leading the spec and treating the Lightning Network like competition? You don’t need unanimous consensus like you do with the base layer. In the end you can’t even actually achieve it in the Lightning Network because you already right now don’t have completely compatible specs with the different implementations. What’s the actual problem with one implementation taking a risk, implementing something not in the spec and seeing if it floats, seeing if it sticks. It brings new users and new use cases to the Lightning Network, letting the other implementations agree by seeing it demonstrated and all agreeing to upping the lowest common denominator to that spec. What is the problem with that?

CD: Like Bastien said there is no problem in implementations trying out experimental things, it is very much welcome. Back in 2016 we came from 3 different directions and decided to join all of the things that we learned during this initial experimentation phase into a single specification so that we could collaborate and interoperate. This experimental phase must always be followed up by a proposal that is introspectable by everybody else and can be implemented by everybody else. Sometimes that formal proposal is missing and that prevents the other implementations giving their own review on that feature. This review is very important to make sure it works for everybody and that it is the best we can make it.

Q - Is that where the tension is coming from? I know there are some arguments on offers, AMP, LNURL. There are a lot of different payment methods in Lightning. Where is the drama at all? There seems to be some form of drama emerging. Is it just people that are trying to lead by implementation are not going back and making a spec.

MF: Just to be clear there has to be two implementations that implement it.

Q - That’s arbitrary.

MF: But if you don’t do that and one implementation implements it they are attempting to set the standard. Someone comes 6 months later and goes “You’ve made lots of mistakes all over the place. This is a bad design decision, this is a bad design decision.” But because it is out there in the network you’ve got to follow it.

Q - It is only bad if it fails. It is a subjective thing to say it is bad if no one does it. If it succeeds and if it is effective?

CD: Very concretely, one of the cases that comes to mind is a pending formal proposal that was done by one team that has been discussed for several months before. Suddenly out of the blue comes a counter proposal that does not have a formal write up, that is only implemented in one codebase and that is being used to hold up the process on a formal proposal without additional explanation why this should be preferred over the formal proposal or why there isn’t a formal proposal. There is a bit of tension there that progress was held up by that argument.

Q - Incompatibility is inevitable though in some features. I believe PTLC channels aren’t compatible with HTLC channels.

CD: One thing is incompatibility, the other one is holding up the spec. Holding up everybody else to catch up to the full feature set that you are building into your own implementation.

MF: And of course every implementation if they wanted to could have their own network of that implementation. They are free to do that but the whole point of the spec process is to try to get all the implementations when they implement something to be compatible on that feature. But they are free to set up a new network and be completely independent of the BOLT compliant network.

Q - I don’t make an implementation so you don’t have to worry about me. I just feel like it is a bit idealistic in that the competition could result in even more iteration and faster evolution of the network than the spec process.

CD: That is a fair argument, that’s not what I am arguing against. Like the name Lightning Network suggests it very much profits from the network effects we get by being compatible, by being able to interoperate and enabling all implementations to play on a level playing field.

Q - The last part sounds like fairness which is not really a competitive aspect. If one implementation led the whole spec, just incidentally not necessarily by tyranny, “We know the way” and they were right and brought a million users to Lightning the other specs would have to go inline but we would have a million users instead of x thousand.

CD: That’s why we still have Internet Explorer?

Q - I don’t think it is the same thing. You don’t need total compatibility, you just need a minimum amount. If you have some basic features that work in all implementations you are ok. If there is some new spec thing that isn’t in all of them that brings in new people it would become evident.

CD: That assumes that the new features are backwards compatible and do not benefit from a wider part of the network implementing it.

Q - 10,000 users compared to 100,000 is a different story. If it is useful people will use it.

CD: But then you shouldn’t be pretending that you are part of an open source community that is collaborating on a specification.

MF: So there are conflicting opinions on BOLT 12, I think a lot of people support BOLT 12, there are a couple of oppositions but it is quite a new proposal. Let’s talk about a couple of proposals that have been stuck for at least a year or two. You mentioned your trampoline proposal, there is also dual funding and liquidity ads that I think Lisa is frustrated about with lack of progress on. Perhaps we can talk about what’s holding up these. Is it business models? Is it proprietary services not wanting decentralized versions of that proprietary service? Is it not wanting too much to go over the Lightning Network so that it becomes spam? What are the considerations here that are holding up progress on some of these proposals?

BT: I think the main consideration is developer time. We have done all the simple things which was Lightning 1.0. Now all that remains is the harder things that take more time to build and take more time to review and that involve trade-offs that aren’t perfect solutions. That takes time to review, that takes time for people to agree that this is the right trade-off that they want to implement in their implementation. That takes time for people to actually implement it, test compatibility. I think we have a lot of proposals right now that are floating in the spec and most of us agree that this is a good thing, this is something we want to build, we just don’t have the time to do everything at once. Everyone has to prioritize what they do. But I don’t think any of those are really stuck, they are making slow progress. All of these in my opinion are progressing.

CD: All 3 or 4 proposals that you mentioned…. Trampoline isn’t that big but dual funding is quite big, splicing is quite big, liquidity ads is quite big, offers is quite big. It is only natural that it takes time to review them, hammer out all the fine details and that requires setting aside resources to do so. We are all small teams. It comes down to the priorities of the individual team, how much you want to dedicate to the specification effort. c-lightning has always made an effort to have all of its developers on the specification process as have other teams as well. But it is a small community, we sometimes get frustrated if our own pet project doesn’t make it through the process as quickly as we’d like it to.

The far future of the Lightning Network

MF: So we’ll end with when you dream about what the Lightning Network looks like in 5 years what does that look like? I know Christian has previously talked about channel factories. As the months go on everything seems further away. The more work you do, the more progress you make the further you are away from it almost. Certainly with Taproot, perhaps Oliver can talk about some of the stuff he’d like to get with Taproot’d Lightning. What do you hope to get and in which direction do you want to go in in the next few years?

CD: Five years ago I would not have expected to be here. This network has grown much much quicker than I thought it would. It has been much more successful than I thought it would. It has surpassed my wildest expectations which probably should give you an idea of how bad I am at estimating the future. You shouldn’t ask me about predictions. What the Lightning Network will look like in 5 years doesn’t really depend on what we think is the right way to go. There are applications out there we Lightning spec developers cannot imagine. We are simply way too deep in the process to take those long shots. I am always amazed by what applications the users are building on top of Lightning. It is really you guys who are going to build the big moonshots. We are just building the bases that you can start off on. That’s really the message here. You are the guys who are doing the innovation here so thank you for that.

BT: I don’t have anything to add, that’s perfect.

OG: I’m just going to be bold and say I’d like to see a billion users or a billion people actually using Lightning in one way or another. Hopefully as non-custodial as possible but number of users go up.

Q&A

MF: So final audience questions. Rene (Pickhardt) had one on Twitter. How should we proceed with the zero base fee situation? Should LN devs do nothing? Should we have a default zero recommendation in the spec and/or implementations? Should we deprecate the base fee? Something else? Any thoughts on zero base fee?

CD: Since Rene and I have been talking a lot about why zero base fee is sensible or why it wouldn’t be I’ll add a quick summary. Zero base fee is a proposal by Rene Pickhardt to remove the fixed fee and make the fee that every node charges for forwarding payments just proportional to the amount that is being forwarded. It is not removing the fees, it is just removing that initial offset. Why is that important? It turns out that the computation that we do to compute an optimal route inside the Lightning Network might be much, much harder if we don’t remove it according to his model. If we were to remove the base fee then that would allow us to compute optimal routes inside of the Lightning Network with a maximum chance of succeeding in the shortest amount of time possible. That’s a huge upside. The counter-argument is that it is a change in the protocol and maybe we can get away with an approximation that might not be as optimal as the optimal solution would be but is still pretty good. As to whether we as the spec process should be dictating those rules, I don’t think it is up to us to dictate anything here. We have given Lightning node operators the ability to set fees however they want. It is up to the operators to decide whether that huge performance increase for their users is worth setting the base fee to zero or whether they are ok with charging a bit more for the fixed work they have to do and having slightly worse routing performance. As spec developers it is always difficult to set defaults because defaults are sticky. It would be us deciding on behalf of Lightning node operators what the path forward should be. Whereas it should be much more us listening to you guys about what you want from us and taking decisions based on that.
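For reference, the forwarding fee a routing node advertises has two components, the fixed base fee and an amount-proportional part (the fee_base_msat and fee_proportional_millionths fields of a BOLT 7 channel_update); the zero base fee proposal simply sets the first term to zero. A minimal sketch with made-up example numbers:

```python
# Standard Lightning forwarding fee formula (BOLT 7 channel_update fields).
# "Zero base fee" means fee_base_msat = 0, leaving only the proportional term.

def forwarding_fee_msat(amount_msat: int, fee_base_msat: int,
                        fee_proportional_millionths: int) -> int:
    return fee_base_msat + (amount_msat * fee_proportional_millionths) // 1_000_000

# Hypothetical parameters: 1 sat base fee, 100 ppm proportional fee,
# forwarding a 100,000 sat payment.
print(forwarding_fee_msat(100_000_000, fee_base_msat=1_000,
                          fee_proportional_millionths=100))  # -> 11000 msat
# With a zero base fee the same forward would cost 10000 msat.
```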

OG: I met with Rene recently and we discussed this proposal again. We kind of have a plan how we could integrate some kind of proof of concept into LND that can work around some of the issues with the base fee. Just to get a sense of how does this compute, how fast is it compared to normal pathfinding and how much better could it be? So we can actually run some numbers, do some tests, get some real world comparison, real world numbers which I thought was a bit lacking before. Have actual results being shown. My goal is to give it a shot, implement something very naive and very stupid based on his paper and see where it goes. I am curious to see what comes out of that.

CD: As you can see many of our answers come down to us being very few people working on the specification. I think all 3 or 4 teams are currently looking for help there. If you or someone you know is interested in joining the Lightning efforts I think we are all hiring.

OG: Definitely.

CD: Raise your hand, join the ranks and we can definitely use your help.

Q - The base chain is always quite incentive compatible, something that is good for you is good for network and vice versa. In Lightning there are some deviations from that. For example if you lock up liquidity for a long time you don’t pay for that. I think it is fine for now as we are bootstrapping but going forward when we get more adoption do you see that as something we need to move away from? Are there any technical obstacles, something like stuckless payments? What are your views on paying for liquidity over time and not just for a one-off?

BT: You say this is not an issue because we don’t have a lot of volume but I think it is an issue even if we don’t have a lot of volume. It is just that we don’t have a perfect solution to that and I don’t think there is a perfect solution to that. We have done a lot of research, there have been a lot of proposals trying different things to fix it but all of these proposals either don’t completely work or work but require quite a lot of implementation changes and a network wide upgrade that takes time. We are actively trying to fix this but this is still very much in research phase. If more people want to look at it from a research angle that would be very much appreciated because I think it is an interesting problem to solve and there is still some design space that hasn’t been evaluated. But we haven’t focused on implementing anything because we are not yet convinced by the solutions we’ve found. This is very much something that we are trying to fix in the short, mid term.

CD: I think saying that the base chain is always incentive compatible is probably also a bit of a stretch. We still disagree on whether RBF rules are incentive compatible or not. We are still fighting over that. That being said I do agree that the incentive incompatibilities on the base chain are much fewer because the base chain is much simpler. There is much less stuff that can go wrong on Bitcoin mainnet than can go wrong on Lightning. The surface where stuff can go wrong on Lightning is much bigger. Bitcoin itself has had quite a lot more time to mature. I’ve been with Bitcoin since 2009 and trust me I’ve seen some s*** go down. So I think we shouldn’t set up the same requirements when it comes to being perfectly incentive compatible or perfectly secure or perfectly private either. We should be able to take the time to address these issues in a reasonable way, take those learnings and address them as they surface. That being said there are many proposals flying around, we are evaluating quite a few of them including paying for time of funds locked up, you mentioned stuckless payments which might become possible with PTLCs which might become possible with Taproot. There are ways we can improve and only the future will tell if we address them completely or whether we have to go over the books again and find a better solution.

Q - Do any of you have any general advice for aspiring Lightning developers? Pitfalls to avoid or things specifically to focus on?

CD: As an aspiring Lightning developer you probably want to go from your own experience, go from what you’ve seen while running a Lightning node yourself, whether that is for yourself or somebody else. Or trying to explain to somebody else where are your own disagreements with what Lightning is nowadays. Try to come up with a solution, you can propose something completely theoretical at first or it can be something that you built on top of Lightning. All of that is experience that you can bring to the table and is very valuable for us as implementers of the various implementations or as spec developers. Those additional views that you bring to the table can inform on how we can go forward. My personal tip is try out an implementation, try to build something on top of it and make an experience with it. Then come to us and say “This is not how I expected it to work. Why is it? How can we do it better? Wouldn’t this be better?” Things like that. That is a very gentle approach to this whole topic. Usually the way that you can make a permanent change in this system as well.

BT: My biggest feedback and advice on the Lightning learning experience is to not try to take it all in at once. This is a multi-year learning process, there is a lot you have to learn. This is a very complex subject. You have to master Bitcoin, then Lightning. It is a huge beast with a lot of subtleties. Accept that there are many things that will not make sense for a very long time. You have to be ok with that and just take piece by piece learning some stuff starting with some part of Lightning and then moving onto something else. This is going to be a long journey but a really good one.

OG: Very good points. Just to add onto everything already said, the main constraint in resources isn’t that we don’t have enough people creating pull requests but we don’t have enough people reviewing, testing, running stuff, making sure that new PRs are at the quality that is required to be merged. As a new developer if your first goal is to create a PR it might be frustrating because it might lie there for a while if no one has time to look at it. Start by testing out a PR, giving feedback, “I ran this. It works or it doesn’t work”. You learn a lot and also help way more than just adding a new feature that might not be the most important thing at the moment.

CD: I think Bastien said it about half an hour ago. I think it is worth repeating. Don’t hesitate to ask questions. There are no dumb questions. Even the simplest of questions allows us to shine a light on something from a different perspective. It is something that I personally enjoy a lot, to walk people through the basics or more advanced features. Or maybe it is an opportunity for me to learn a new perspective as well. We are kind of ok, we don’t bite. Sometimes we have cookies.

MF: Thank you to our panelists and thank you to our sponsors, Coinbase Recruiting.

Tim Ruffing

Date: August 11, 2022

Transcript By: Michael Folkson

Tags: Musig

Category: Meetup

Media: https://www.youtube.com/watch?v=TpyK_ayKlj0

Reading list: https://gist.github.com/michaelfolkson/5bfffa71a93426b57d518b09ebd0998c

Introduction

Michael Folkson (MF): This is a Socratic Seminar, we are going to be discussing MuSig2 and we’ll move onto adjacent topics such as FROST and libsecp256k1 later. We have a few people on the call including Tim (Ruffing). If you want to do short intros, you don’t have to, for the people on the call.

Tim Ruffing (TR): Hi. Thanks for having me. I am Tim Ruffing, my work is at Blockstream. I am an author of the MuSig2 paper, I guess that’s why I’m here. In general I work on cryptography for Bitcoin and cryptocurrencies and in a more broad sense applied crypto. I am also a maintainer of the libsecp256k1 library which is a Bitcoin Core cryptographic library for elliptic curve operations.

Elizabeth Crites (EC): My name is Elizabeth Crites, I am a postdoc at the University of Edinburgh. I work on very similar research to Tim. I also work on the FROST threshold signature scheme which I’m sure we’ll talk about at some point during this.

Nigel Sharp (NS): I’m Nigel Sharp, bit of a newbie software engineer, Bitcoiner, just listening in on the call in case I have any questions.

Grant (G): Hi, I’m Grant. Excited about MuSig. I may have a couple of questions later on but I think I’ll mostly just observe for the moment.

A retrospective look at BIP340

MF: This is a BitDevs, there are a lot of BitDevs all over the world in various cities. There is a list here. This is going to be a bit different in that we’re going to focus on one particular topic and libsecp256k1. Also we are going to assume a base level of knowledge from a Socratic and presentation Tim did a couple of years ago. This isn’t going to be an intro. If you are looking for an intro this won’t be for you. But anyone is free to listen in and participate and ask questions. The Socratic we did with Tim, this is before BIP340 was finalized, this was before Taproot was activated, this was before Schnorr signatures were online and active on the Bitcoin blockchain. First question, BIP340 related, has anything in BIP340 been a disappointment or problematic or caused any challenges with any future work? It is always very difficult to finalize and put something in stone, put it in the consensus rules forever especially when you don’t know what is coming down the pipeline in terms of other projects and other protocols, other designs. The only thing I’ve seen, you can tell me if there is anything else, is the x-only pubkeys for the TLUV opcode. The choice of x-only pubkeys in BIP340, that has had a complication for a covenant opcode in its current design. That’s the only thing I’ve heard from BIP340 that has posed any kind of disappointment.

TR: I think that is the main point here. BIP340 is the Schnorr signature BIP for Bitcoin. One special thing about the way we use Schnorr signatures in Bitcoin is that we have x-only public keys. What this means, if you look at an elliptic curve point, there is an x and a y coordinate, there are two coordinates. If you have an x coordinate you can almost compute the y coordinate, up to the sign basically. For each valid x coordinate there are two valid y coordinates. That means if you want to represent that point or store it or make a public key out of it you need the x coordinate and you need 1 bit of the y coordinate. What we actually did in the BIP here, in our design, we dropped that additional bit. The reason is that it is often encoded in a full byte and maybe you can’t even get around this. This saves 1 byte in the end, brings down our public keys from 33 bytes to 32 bytes. This sounds good, it is just 1 byte or 1 bit but I think it is a good idea to squeeze out every saving we can because space is very scarce. This was the idea. There’s a nice blog post by Jonas (Nick). This blog post explains that this is actually not a loss of security. Even though we drop a bit in the public key it doesn’t mean that the scheme becomes a bit less secure, it is actually the same security. That is explained in this blog post. It turns out that if you just want to do Schnorr signatures on Bitcoin with normal transactions then it is pretty cool, it saves space. If you want to do more complex things, for example MuSig or more complex crypto, then this bit is always a little pain in the ass. We can always get around this but whenever we do advanced stuff like tweaking keys, a lot of schemes involve tweaking keys, even Taproot itself involves tweaking, MuSig2 has key aggregation and so on. You always have to implicitly remember that bit because it is not explicitly there. You have to implicitly remember it sometimes. This makes specifications annoying. I don’t think it is a problem for security but for engineering it is certainly annoying. In hindsight it is not clear if we would make that decision again. I still think it is good because we save a byte but you can also say the increased engineering complexity is not worth it. At this point I can understand both points of view. You mentioned one thing on the bitcoin-dev mailing list. There was a suggestion by AJ (Towns) for a new Bitcoin opcode that would allow a lot of advanced constructions, for example coinpools. You have a shared UTXO between a lot of people, the fact that we don’t have the sign of the elliptic curve point of the public key would have created a problem there. You need the sign when you want to do arithmetic. If you want to add a point to another point, A+B, then if B is actually -B then it doesn’t work out. You are not adding, you are subtracting, this is the basic problem there. There were some ugly workarounds proposed. I recently came up with a better workaround. It is still ugly but not as ugly as the other ones. It is just an annoying thing. We can always deal with these x-only keys and still save space, but sometimes they are annoying. The only hope is that we can hide all the annoying things in BIPs and specifications and libraries so that actual protocol developers don’t need to care too much about this.
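To make the dropped bit concrete, this is roughly what lifting a 32-byte x-only key back to a full point looks like, condensed from the lift_x pseudocode in BIP340: the missing y parity is fixed by convention (even y), which is exactly the bit that more complex protocols have to track implicitly.

```python
# Sketch of BIP340-style lift_x for secp256k1 (condensed, illustrative).
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F  # field prime

def lift_x(x: int):
    """Return the curve point (x, y) with even y, or None if x is not a valid x coordinate."""
    if x >= p:
        return None
    c = (pow(x, 3, p) + 7) % p           # curve equation: y^2 = x^3 + 7
    y = pow(c, (p + 1) // 4, p)          # candidate square root (works since p % 4 == 3)
    if pow(y, 2, p) != c:
        return None                      # x^3 + 7 has no square root: not on the curve
    return (x, y) if y % 2 == 0 else (x, p - y)   # pick the even-y point by convention
```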

MF: I think coinpool was using this particular covenant opcode but it is a complication for the covenant opcode TAPLEAF_UPDATE_VERIFY. Then various things might use it if that was ever activated on the Bitcoin blockchain. That’s the only thing that has come to light regarding BIP340.

TR: Mostly. The other thing, it is not really a pain but we are working on a tiny change. At the moment the message size is restricted to 32 bytes so you can sign only messages that are exactly 32 bytes. This isn’t a problem in Bitcoin because we pre-hash messages and then they are 32 bytes. But it turned out that some people would like to see this restriction lifted. We are working on a small update that allows you to sign messages of arbitrary size. This is really just a tiny detail. The algorithm works but we specified it to accept only 32 bytes. We need to drop this check and then everything is fine basically.

MF: That’s BIP340. I thought we’d cover that because before it wasn’t active onchain and now it is. Hopefully that will be the only thing that comes to light regarding BIP340 frustrations.

MuSig2 history

MF: Let’s go onto MuSig2. I have a bunch of links, a couple of talks from Tim Ruffing and Jonas Nick. I thought I’d start with a basic history. The first paper I found was this from Bellare, Neven (2006). They tried to do multisig, I guess key aggregation multisig. I don’t know what the motivation is for this. Before Bitcoin and before blockchains what were people using multisig for? What use cases were there where there were so many signatures flying around? Not only that but also wanting to have multiple people signing? And then on top of that wanting to aggregate it to get the privacy or space benefit? Is it just a case of academics pursuing their interests and not really worrying about the use case? Or is there a use case out there where people wanted this?

TR: You could have a look at the paper, if they have a use case.

MF: I don’t think there is much about use cases.

TR: No. I think blockchain and cryptocurrencies and Bitcoin changed cryptography a lot. A lot of things were brought from theory to practice. Cryptographers were always busy with ideas and schemes but no one really deployed the advanced stuff. Of course you always had encryption, you had TLS, SSL, we had signatures and certificates also used in SSL, TLS. We had the basic stuff, encryption, secure channels, signatures. But almost everything beyond this simple stuff wasn’t really used anywhere in practice. It was proposed in papers and there were great ideas but cryptocurrencies have changed the entire field. Back then you wrote a paper and you hoped that maybe someone would use it in 10 years. Nowadays sometimes you write a paper, you upload it on ePrint, it is not even peer reviewed and people will implement it two days later. It is a bit extreme. I’m not sure what Elizabeth thinks. That is my impression.

EC: Given that this paper was 2006, this is my perspective because I work on anonymous credentials, I think that was popular in 2001 to 2006, pre-Bitcoin era. I imagine that some of the motivation would have been distributing trust for certificate authorities. But I don’t know. I think they do say that in this particular paper as a motivation. As far as I know no one actually implemented this.

TR: Back then from a practitioner’s point of view, even if the goal is to distribute trust you could have a simpler construction of multisignatures with multiple public keys and multiple signatures. For CAs maybe that is good enough. This really becomes interesting when you have things like key aggregation and combined signatures that are as large as normal signatures. Space on the blockchain is so expensive, I think this is what makes it interesting to implement something like this. In the end all these modern multisignature schemes are interactive. Interaction is annoying, you have to send messages around and so on. I think if you don’t need to care about every byte on the blockchain maybe it is just not worth it.

MF: I suppose the challenge would be would they need privacy in that multisig setting or would they be motivated by the space savings. Why can’t it just be 5 signatures?

TR: This paper doesn’t have key aggregation?

EC: I think it does. I think this was the first paper that had key aggregation.

TR: I should know, let’s check.

MF: It has a section on rogue key attacks.

EC: They had the first idea to avoid the rogue key attacks.

TR: You can still have rogue key attacks even if you don’t have key aggregation.

EC: I think that was the thing, how to aggregate keys without rogue key attacks.

TR: Let’s have a look at the syntax.

MF: If it is talking about rogue key attacks that is surely key aggregation, no?

EC: It doesn’t have to be.

TR: You could do it internally.

EC: You could just prove possession of your secret key and then multiply the keys together. But I think the interesting thing was that they were trying to not do that and have a key aggregation mechanism that still holds without being susceptible to that.

TR: Maybe go to Section 5.

EC: Also I think this was one of the first papers that abstracted the Forking Lemma from being a scheme specific type thing to a general…

TR: You see they just have a key generation algorithm, they don’t have a key aggregation algorithm. Of course during signing they kind of aggregate keys but you couldn’t use this for example for something like Taproot because from the API it doesn’t give you a way to get the combined key. They still can have rogue key attacks because in signing you still add up keys. You need to be careful to avoid this.

EC: How do they aggregate keys? I’m curious.

TR: They don’t because verification takes all the public keys as input.

EC: I thought that was one of the main motivations of this.

TR: MuSig1 was the first to solve that problem. It was there for BLS I guess already.

EC: I think this paper is a good reference for the transition for the Forking Lemma. There was a different Forking Lemma for every scheme you wanted to prove. I think this one abstracted away from something being specific for a signature scheme. Maybe I’m getting the papers confused. This is the one that referenced the change in the Forking Lemma.

TR: They say it is a general Forking Lemma but it is my impression that it is still not general. Almost every paper needs a new one.

EC: That’s true. It made it an abstract… lemma versus being for a signature scheme. You’re right, you end up needing a new one for every paper anyway. That’s the main contribution I see from this if I’m remembering this paper correctly.

MF: So we’ll move onto MuSig1. MuSig1 addressed the rogue key attack because that was well known. But MuSig1 had this problem where you had parallel signing sessions and you could get a forgery if you had a sufficient number of parallel signing sessions right?

TR: Yes and no. If you are going from Bellare Neven to..?

MF: Let’s go Bellare Neven, then insecure MuSig1 then secure MuSig1.

TR: The innovation then of MuSig1 compared to Bellare Neven was you had key aggregation. You have a public key algorithm where you can get all the keys and aggregate them. This didn’t exist in Bellare Neven. But yeah the first version of MuSig1, the first upload of ePrint here had a flaw in the proof. This turned out to be exploitable. The scheme was insecure in the first version.

MF: This is committing to nonces to fix broken MuSig1 to get to secure MuSig1.

TR: They started with a 2 round scheme, this 2 round scheme wasn’t secure. The problem is that if you are the victim and you start many parallel sessions with a lot of signers, 300 sessions in parallel, then the other guy should get only 300 signatures but it turns out they can get 301 signatures. The last one on a message totally chosen by the attacker. This was the attack.

MF: Was there a particular number of signing sessions that you need to be able to have to get that forgery? Or was that a number plucked out of anywhere?

TR: You need a handful but the security goes down pretty quickly. There are two answers. With the first attack, the one based on Wagner’s algorithm, it goes down pretty quickly. I never implemented it but around 100 I think it is doable in practice. If you have a powerful machine with a lot of computation power maybe much lower, I don’t know. Recently there was a new paper that improves this attack to super low computation. There you specifically need 256. If you have 256 sessions, or maybe 257 or something like this, then the attack is suddenly super simple. You could probably do it on your pocket calculator if you spend half an hour.

Different security models for security proofs

Jonas Nick on OMDL: https://btctranscripts.com/iacr/2021-08-16-jonas-nick-musig2/#one-more-dl-omdl

MF: One thing neither you nor Jonas have really discussed in depth in the presentations you’ve done so far, for good reason probably, is the different security proofs. My understanding is there are four different security models for security proofs. There’s the Random Oracle Model (ROM), the Algebraic Group Model (AGM), One More Discrete Logarithm (OMDL) and then Algebraic One More Discrete Logarithm (AOMDL) which is weaker than One More Discrete Logarithm (OMDL). Can you dig a little bit deeper into that? I was getting confused.

TR: I can try and Elizabeth can chime in if I am talking nonsense which I hope I don’t. Let me think about how to explain it best. First of all we need to differentiate between models and assumptions. They have some things in common but they are different beasts. In cryptography we usually need some problem where we assume that it is hard to solve, the most fundamental problem that we use here is the discrete logarithm. If the public key is g^x and you see the public key it is hard to compute x just from g^x but the other way round is easy. Other assumptions that we typically know, factoring, it is easy to multiply large numbers but it is hard to factor them. For MuSig we need a special form of this discrete logarithm. The discrete logarithm assumption is just what I said, it is an assumption that says it is hard given g^x to compute x. The reason why I call this an assumption is that it is really just an assumption. We don’t know, we can’t prove this, this is not covered by the proof. People have tried for decades to do this and they’ve failed so we believe it holds. This sounds a little bit worrisome but in practice it is not that worrisome. All crypto is built on this and usually the assumptions don’t fail. Maybe there is a quantum computer soon that will compute discrete logarithms but let’s not talk about this now, that’s another story. For MuSig we need some special form of this discrete logarithm problem which is called the One More Discrete Logarithm problem. It is a generalisation. The normal problem is given g^x compute x. Here it is “I give you something like 10 g^x with different x’s and I give you an oracle, a magic way to compute 9 of these discrete logarithms, but you have to solve all 10.” You always need to solve one more than I give you a magic way to solve. This is One More Discrete Logarithm.
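Stated slightly more formally (a paraphrase of the usual definition, not a quote from the MuSig2 paper), the OMDL problem looks roughly like this:

```latex
\textbf{OMDL (informal).} The challenger gives the adversary $q+1$ challenge
elements $X_1 = g^{x_1}, \dots, X_{q+1} = g^{x_{q+1}}$ together with access to a
discrete-logarithm oracle $\mathsf{DLog}(\cdot)$ that it may query at most $q$
times on group elements of its choice. The adversary wins if it outputs all of
$x_1, \dots, x_{q+1}$, i.e.\ it must solve ``one more'' discrete logarithm than
the oracle handed to it. The assumption is that no efficient adversary wins.
```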

MF: There was an incorrect security proof for the broken MuSig1 that was based on One More Discrete Logarithm. That’s a normal model for a security proof. It is just that there was a problem with the scheme?

TR: Yeah. The security theorem would look like “If this OMDL problem is really hard to compute, I need to say if because we don’t know, it is just an assumption, then the scheme is secure.” The implication was wrong but not the OMDL problem. This is still fine. It is an assumption, we don’t know if it is hard, but this was not a problem. The algebraic OMDL, this is a tiny variant but it is not really worth talking about. Maybe when we come to the models. We need this OMDL assumption for MuSig2 also. And we also need the random oracle model, the random oracle model is basically a cheat. Cryptographers are really bad at proving that particular things are hard, for example OMDL, so we need to assume them. We are also pretty bad at arguing about hash functions. Modeling hash functions without special tricks is possible but it is very, very restricted and we can’t really prove a lot. That is why at some point people had the idea to cheat a little bit and assume that the hash function behaves like a random oracle. What is a random oracle? An oracle is basically a theoretical word for an API, a magic API. You can ask it something, you send it a query and you get a response back. When we say we model a hash function as a random oracle, that means when doing a security proof we assume that the hash function behaves like such an oracle where the reply is totally random. You send it some bit string x as input and you get a totally random response. Of course it is a function, if you send the same bit string x twice you get the same response back. But otherwise if you send a different one you get a totally new random response. If you think about practical hash functions, SHA256, there is kind of some truth in it. You put some string in there and you get a random looking response. But modeling this as an oracle is really an entirely different beast in a sense. It basically says an attacker can’t run this on their own. A normal hash function in the real world, they are algorithms, they have code, you can run them, you can inspect the code, you can inspect the computation while it is running, middle states and all that stuff. Whereas an oracle is some magic thing you send a message to and you get a magic reply back. You can’t really see what is going on in the middle for example. This is something where we restrict the attacker, where we force the attacker to call this oracle and this makes life in the proofs much easier. Now you can say “This is cheating” and in some sense it is but it has turned out in practice that it works pretty well. We haven’t seen a real world example where this causes trouble. That is why this methodology is accepted, at least for applied cryptography I would say.
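The "magic API" picture is often made concrete with lazy sampling: the oracle keeps a table, answers any fresh query with fresh randomness, and repeats itself on repeated queries. A minimal illustrative sketch (purely a toy, not how any proof or library is written):

```python
import os

# Lazy-sampling picture of a random oracle (toy illustration).
# Fresh queries get an independent uniformly random 32-byte answer; repeated
# queries get the same answer back. In a proof the reduction controls this
# table, so it sees every query the attacker makes to the "hash function".

class RandomOracle:
    def __init__(self):
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(32)   # fresh random answer for a new input
        return self.table[x]

oracle = RandomOracle()
assert oracle.query(b"hello") == oracle.query(b"hello")  # consistent, like a function
```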

G: I have a question about that. I noticed that in BIP340 and also in MuSig there has been talk about using variable length for the message as a parameter. Does that variable length play into this random oracle model assumption or is that completely separate?

TR: I think it is separate. If you have longer messages, you need to hash the messages at some point, they will go as an input to the hash function. Depending on how you model that hash function, that will be an algorithm or a magic random oracle. But I think the question of how long the message is is kind of orthogonal to how you model the hash function. To give you an idea how this helps in the proofs, it helps at several points. One easy example is that we can force the attacker to show us inputs. When we do a security proof what does it look like? The theorem is something like “If OMDL is difficult to compute and we model the hash function as a random oracle then MuSig2 is secure”. Roughly like this. How do we do this? We prove by contradiction. We assume there is a way to attack MuSig2, a concrete attacker algorithm that breaks MuSig2, and then we try to use that algorithm to break OMDL. If we assume OMDL is not breakable then this is a contradiction, the attack against MuSig2 in the first place can’t exist. This is the proof by contradiction methodology that we call reductions. The random oracle model for example helps us to see how the attacker algorithm that we are given in this proof by contradiction uses the hash function. If we were just using SHA256 as an algorithm then this attacker algorithm can use this SHA256 internally and we can’t really see what is going on. If we don’t give it the SHA256 algorithm but we force the attacker to call this random oracle, to call this magic thing, then we see the calls. We see all the inputs that the attacker algorithm gives to the hash function. Then we can play around with them because we see them. We can make use of that knowledge that we get. But if you think about it, this model isn’t the real world: if there is some attacker somewhere in the world trying to attack MuSig2, it is not as friendly and doesn’t send us over all its inputs to the hash function. It just doesn’t happen. That is why this model is kind of cheating. As I said it turned out to be pretty successful. Elizabeth, maybe you want to add something? Whenever I try to explain these things I gloss over so many details.

EC: We use this notion of this idealized something, right. In the case of random oracle model we are saying a hash function should output something random. So we are going to pretend that it is a truly random function. What that means is that when we idealize a hash function in that way it says “If you are going to break MuSig you have to do something really clever with the explicit hash function that is instantiated”. Your proof of security says “The only way you are going to break this scheme is if you do something clever with the hash”. It kind of eliminates all other attack vectors and just creates that one. So far nobody has been able to do anything with the hash. It just eliminates every other weakness except for that which is the explicit instantiation that is used. That’s my two cents.

TR: That’s a good way to say it. When you think about a theorem, if OMDL holds this is one thing and if you model the hash function as a random oracle then the scheme is secure. This gives you a recipe to break MuSig2 in a sense. Either you try to break OMDL or you try to break the hash function. As long as those things hold up you can be sure that MuSig2 is secure. You also asked about AGM, the Algebraic Group Model. This is another idealized model. Here we are not talking about the hash function, we are talking about the group arithmetic. All this discrete logarithm stuff takes place in an algebraic group. This is now very confusing when I use this terminology. When I say “algebraic group” I just mean group in algebra terminology, a mathematical object. But when cryptographers say “algebraic” they mean something different. Here we restrict the attacker in some other way. We kind of say “The only way to compute in this group is to add elements”. If you have a public key A and a public key B the only thing you can do with this is compute A+B or maybe A-B or A+2B or something like this. You can’t come up with your own public key except drawing a new x and computing g^x. You can do this point addition, you can add public keys, and you can generate new public keys by drawing the exponent and raising g the generator to the exponent. But we assume that this is the only way you can come up with group elements. This is again very helpful in the proof. When I talked about the random oracle I said the attacker needs to show his inputs to the hash function. Here in the Algebraic Group Model the attacker needs to show us how it came up with group elements. For example what could happen is that we send the attacker the public key g^x and ask it to produce a signature on this public key that we haven’t created before. That would be a break of MuSig2. Now the attacker could say “Here is a signature” and the signature will contain a group element and then the attacker needs to tell us how it computed this element. This is an implication from what I said earlier. If you assume the only thing the attacker can do is add those things up then this is basically equivalent to forcing the attacker to tell us how it came up with these elements, give us the numbers basically. This again helps in the security proof. It mostly has theory implications only I would say as long as this model really holds up. It was a special thing with MuSig2 that we had two variants: the variant with the Algebraic Group Model where we need this stronger assumption is a little bit more efficient.
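
As an editor’s illustration of what “algebraic” buys the proof (not from the talk): every group element the adversary outputs must come with coefficients expressing it in terms of the generator and the elements it has already received, and the reduction can check, and then exploit, that representation. A toy check, assuming a multiplicative group mod a prime:

```python
def check_representation(Z, coeffs, seen, g, p):
    """Verify Z == g^a0 * seen[0]^a1 * seen[1]^a2 * ... (mod p)."""
    acc = pow(g, coeffs[0], p)
    for base, a in zip(seen, coeffs[1:]):
        acc = (acc * pow(base, a, p)) % p
    return acc == Z

# e.g. the adversary saw X = g^5 and claims Z = g^2 * X^3, i.e. Z = g^17
assert check_representation(pow(2, 17, 101), [2, 3], [pow(2, 5, 101)], 2, 101)
```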

MF: There aren’t neat, strictly increasing levels of security, right? It is just different paradigms? Because the Algebraic One More Discrete Logarithm (AOMDL) is weaker security than One More Discrete Logarithm. But how does One More Discrete Logarithm compare to AGM and ROM?

TR: Let me try to explain it the other way round. So far we have talked about assumptions and models. They are a little bit similar. The random oracle model (ROM) is in some sense an assumption. Whenever we add an assumption our proofs get weaker because we need to assume more. For the basic version of MuSig2, for the security, we need to assume that OMDL is hard and we need to assume the random oracle model (ROM). If we on top assume the Algebraic Group Model (AGM) then our security proof gets weaker. That means there is an additional thing that can break down. This Algebraic Group Model thing which is another cheating thing, if this doesn’t hold up in practice then the scheme might be insecure. On the other hand what we get out of the security proof is we now can have a more efficient way to do MuSig2. The Algebraic One More Discrete Logarithm (AOMDL) thing is really a slight variation of OMDL. This is very confusing even in the paper. Maybe this was your impression, if we add this algebraic assumption then this becomes weaker. We have a stronger assumption so our results become weaker. For the assumption it is kind of the other way round. I could explain it but I’m not sure if anyone will understand it if I try to explain it now.

MF: Ok, we’ll move on. We are not going to do a whole session on security proofs.

TR: If you are really interested, in the paper we explain it. But of course it is written with a specific audience in mind.

MF: I’ve just heard “This is weaker”, “This is stronger”, “We’ve got a stronger proof”, “We’ve got a weaker proof”. I was struggling to understand which one you are going from to another.

TR: It is hard because you say “weaker proof” and “stronger assumption” and even we get confused.

EC: The term “weaker” and “stronger” means different things in different contexts. A weaker assumption means better. A stronger assumption is worse. But then you have “stronger security”, it is a point of confusion in the terminology that we’re using.

TR: And it even confuses cryptographers. When you look at algebraic OMDL it is a weaker assumption than OMDL, slightly weaker which is slightly better. We have to assume less which gives us a stronger result. Even cryptographers are trained to believe whenever they hear the word “algebraic” it is something bad because now we have to make all these assumptions. In this specific case we are actually making a weaker assumption, that is why we stressed it so much in the paper. Whenever we say algebraic OMDL we have a relative clause that says it is actually weaker than OMDL to remind the reader that what we’re doing is actually a good thing and not a bad thing.

MuSig-DN

Paper: https://eprint.iacr.org/2020/1057.pdf

Blog post: https://medium.com/blockstream/musig-dn-schnorr-multisignatures-with-verifiably-deterministic-nonces-27424b5df9d6

Comparing MuSig-DN with MuSig1 and MuSig2: https://bitcoin.stackexchange.com/questions/98845/which-musig-scheme-is-optimal-classic-musig-or-this-new-musig-dn-scheme/

MF: Ok let’s move on but thank you for those explanations. I’ll look at the papers again and hopefully I’ll understand it a bit better. So we were doing history. Let’s get back to the history. We went through broken MuSig1, we went through corrected non-broken MuSig1. Then there was MuSig-DN.

TR: DN was before MuSig2.

MF: Right, we haven’t got onto MuSig2 yet. MuSig-DN, it hasn’t been implemented, there is no one working on that. It was an advancement in terms of academic research but there’s no use case that you’ve seen so far why people would use MuSig-DN?

TR: Maybe there are some use cases but it is much more difficult. From the practical point of view what does it do? Multisignatures have some specific risks when using them in practice, at least all these schemes we’re talking about here. For example you need secure random numbers while signing. If you have broken random number generators, this has happened in the past, not in the context of multisignatures but in other contexts, then you might lose security entirely. You try to sign a message and the others can extract your entire secret key which is a catastrophic failure. MuSig-DN is an attempt to get rid of this requirement to have real randomness. This sounds good in practice because it removes one way things can go wrong but on the other hand the way we are doing this is we add a huge zero knowledge proof to the protocol. First of all we lose a lot of efficiency but even if you ignore this, it comes with a lot of engineering complexity. All this engineering complexity could mean something is wrong there. In a sense we are removing one footgun and maybe adding another one. Adding that other one is a lot of work to implement. I think that is why it hasn’t really been used in practice so far. I still believe that there are niche use cases where it is useful. There has been at least one other paper that does a very similar thing to what MuSig-DN does but in a more efficient way. Their method is more efficient than ours but it is still very complex. It doesn’t use zero knowledge proofs, it is complex in a totally different way. But again a lot of engineering complexity. It seems there is no simple solution to get rid of this randomness requirement.

MF: So MuSig-DN is better for the randomness requirement and stateless signing. It is just that those benefits aren’t obvious in terms of why someone would really want those benefits for a particular use case.

TR: The randomness and the statelessness, they are very related if you look at the details. MuSig-DN has two rounds. When I say it is stateless it basically means that in one round I send a message, I receive a message from the others and then I send another message. That is why it is two rounds. When I say it has state the signer needs to remember secret state from the first round to the second round. This can be problematic because weird things could happen while you remember state. For example you could run in a VM, you do the second round and then someone resets the VM to the point where you did the first round but not the second round and you still have the state. You do another second round and, again, you lose your secret key. The basic idea is if we remove the requirement to have randomness then everything will be deterministic. That is why we don’t have to remember state. When we come to the second round we could just recompute the first round again as it is deterministic. We can redo the first round again so we don’t need to remember it because there is no randomness involved. That is why having no state and requiring no randomness are pretty related in the end. I think the motivation is we have seen random number generators fail in practice. Even in Bitcoin, people lost their coins due to this. It is a real problem but for multisignatures we currently don’t have a good way to avoid the randomness requirement. From an engineering point of view it could make more sense to focus on really getting the randomness right instead of trying to work around it.

MF: Anything to add on MuSig-DN Elizabeth?

EC: No, I’d just say that I think there are some niche use cases for having a deterministic nonce, if it gets used in combination with something else. I personally have thought about a few things that might use it. I think it is actually a really nice result even with the expensive zero knowledge proofs. Maybe there is something that could be done there, I don’t know. I think it is not just pushing the problem to somewhere else, it is a different construction and I do think that determinism is useful when plugged into other schemes. I still think it is something worth pursuing and I like this result.

TR: We thought about using the core of it in other protocols and problems but we haven’t really found anything.

MuSig2

MuSig2 paper: https://eprint.iacr.org/2020/1261.pdf

Blog post: https://medium.com/blockstream/musig2-simple-two-round-schnorr-multisignatures-bf9582e99295

MF: Ok so then MuSig2. This is getting back to 2 rounds. This is exchanging nonce commitments in advance to remove the third round of communication.

TR: Right, the fixed version of MuSig1 had 3 rounds. They needed to introduce this precommitment round before the other 2 rounds to fix the problem in the security proof, make the attack go away. It wasn’t just a problem with the proof, the scheme was insecure. Then MuSig2 found the right trick to remove that first round again. Whenever I say this I should mention that Chelsea Komlo and Ian Goldberg found the same trick for FROST. It is really the same idea just in a different setting. Even Alper and Burdges, they found the same trick in parallel. It was really interesting to see three teams independently work on the same problem.

MF: And this is strictly better than fixed MuSig1? MuSig-DN did still have some advantages. It is just that for most use cases we expect MuSig2 to be the best MuSig.

TR: Yeah. In this blog post we argue that MuSig2 is strictly better than MuSig1. It has the same safety requirements, it just gets rid of one round. You can precompute the first round without having a message which is another big advantage. A 2-of-2 Lightning channel may in the future use MuSig2; what you can do is do the first round already before you know what to sign. Then when a message arrives that you want to sign, a payment in Lightning that you want to forward or want to make, only then can you do the second round. Because you already did the first round the second round is just one more message. It is particularly useful for these 2-of-2 things. Then the transaction arrives at one endpoint of the Lightning channel, this endpoint can complete the second round of MuSig2 for that particular transaction, compute something locally, and then just send it over to the other endpoint. The other endpoint will do its part of the second round and at this point it already has the full signature. It is really just one message that goes over the network. That’s why you could call this semi-interactive or half-interactive. It is still interactive because you have two rounds but you can always precompute this first round. Whenever you really want to sign a message then it is just one more message over the network. That’s why we believe MuSig2 is strictly better than MuSig1 in practice. But compared to MuSig-DN it is really hard to compare. MuSig-DN has this additional feature, it doesn’t need randomness. If you do the zero knowledge proof correctly and implement all of this correctly you definitely remove that footgun of needing randomness. This is a safety advantage in a sense. But on the other hand it is slower, it has more implementation complexity and you can’t do this precomputation trick so it is always two rounds. You can’t precompute the first round. These would be drawbacks. But as I said it has these safety improvements, you can’t say that one is strictly better than the other. But in practice what people want is MuSig2 because it is simpler and we like simple protocols. Not only because they are easier to implement but they are also easier to implement correctly. Crypto is hard to implement so whenever we have a simple protocol we have much less potential to screw it up.
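
An editor’s sketch of that message flow only (the “crypto” below is fake string plumbing; the helper names mirror the draft BIP’s NonceGen/NonceAgg/Sign/PartialSigAgg algorithms, not the real library API): the point is that round 1 can happen before the message to sign exists, so signing later costs a single network message.

```python
def nonce_gen(who):                              # round 1: needs fresh randomness
    return f"secnonce({who})", f"pubnonce({who})"

def nonce_agg(pubnonces):
    return "aggnonce(" + ",".join(pubnonces) + ")"

def sign(secnonce, who, aggnonce, msg):          # round 2: purely local
    return f"psig({who},{msg})"

def partial_sig_agg(psigs):
    return "sig(" + "+".join(psigs) + ")"

# channel setup / idle time: run round 1 and exchange pubnonces in advance
sec_a, pub_a = nonce_gen("alice")
sec_b, pub_b = nonce_gen("bob")

# later, a transaction to sign arrives: one network message completes it
msg = "commitment_tx"
agg = nonce_agg([pub_a, pub_b])
psig_a = sign(sec_a, "alice", agg, msg)          # Alice signs locally and sends psig_a
psig_b = sign(sec_b, "bob", agg, msg)            # Bob signs locally...
print(partial_sig_agg([psig_a, psig_b]))         # ...and now holds the full signature
```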

MF: And on the security proofs you’ve proved the security of MuSig2 in the random oracle model. “We prove the security of MuSig2 in the random oracle model, and the security of a more efficient variant in the combination of the random oracle and the algebraic group model. Both our proofs rely on a weaker variant of the OMDL assumption”. Maybe I’ll never understand that. MuSig2, any other comments on MuSig2?

SpeedyMuSig and proofs of possession

Paper that references SpeedyMuSig: https://eprint.iacr.org/2021/1375.pdf

Comparing SpeedyMuSig with MuSig2: https://bitcoin.stackexchange.com/questions/114244/how-does-speedymusig-compare-to-musig2

MF: Let’s move onto SpeedyMuSig, that’s an additional MuSig protocol. This is your paper with Chelsea Komlo and Mary Maller. This has SpeedyMuSig in it and it is using proofs of possession which MuSig1, MuSig-DN or MuSig2 all don’t use. You argue in the paper that this potentially could be better in certain examples of use cases.

EC: Yeah, MuSig2 has this really nice key aggregation technique. What we do instead is we include proofs of possession of your keys. We talked a little bit before about how you want to avoid these rogue key attacks. One way is if you prove knowledge of your secret key then you can’t do that kind of attack. If you do that then the aggregate key for all the parties signing is just the multiplication of their individual keys. That makes for a more efficient scheme. I would say a use case where this might work really well is if you are reusing a key. It is a little bit more expensive to produce your key in the first place because you are computing this proof of possession. But if you are reusing keys a lot to sign then it is an alternative.
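
An editor’s toy sketch of that idea (tiny insecure group, purely illustrative, not SpeedyMuSig’s actual specification): the proof of possession is just an ordinary Schnorr signature whose message is the public key itself, and once every key carries a valid proof, the aggregate key is simply the product of the individual keys, with no MuSig-style coefficients needed.

```python
import hashlib
import secrets

p, q, g = 23, 11, 4     # subgroup of order q = 11 inside Z_23*; toy numbers only

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return x, pow(g, x, p)                # (x, X = g^x)

def pop_sign(x, X):
    """Proof of possession: a Schnorr signature on the public key itself."""
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)
    return R, (r + H(R, X) * x) % q

def pop_verify(X, pop):
    R, s = pop
    return pow(g, s, p) == (R * pow(X, H(R, X), p)) % p

def aggregate(keys_with_pops):
    """With valid PoPs, the aggregate key is just the product of the keys."""
    assert all(pop_verify(X, pop) for X, pop in keys_with_pops)
    agg = 1
    for X, _ in keys_with_pops:
        agg = (agg * X) % p
    return agg

x1, X1 = keygen(); x2, X2 = keygen()
agg_key = aggregate([(X1, pop_sign(x1, X1)), (X2, pop_sign(x2, X2))])
```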

MF: Is this where Bitcoin has very specific requirements which make MuSig2 better than using proofs of possession, a SpeedyMuSig like scheme? In the MuSig2 paper it says proofs of possession “makes key management cumbersome, complicates implementations and is not compatible with existing and widely used key serialization formats”. This is Bitcoin specific stuff right? BIP32 keys and things like this? What is it that makes MuSig2 better than SpeedyMuSig in a Bitcoin context?

TR: I’m not sure it is really “better” and whether that is the right word. It is another trade-off. The proofs of possession are pieces of data that you need to add to the public keys. This means that when you want to aggregate a few public keys you need this additional piece of data. That’s data that we didn’t have so far in the public key and we wouldn’t have it on blockchains. For example if you do some crazy protocol where you take some random public keys or specific public keys from the chain without any additional context and you aggregate them then certainly MuSig2 is better because you wouldn’t have access to the proofs of possession. You would need to ask the other signers to send the proofs of possession before you can aggregate the key. On the other hand you can say “In practice we usually talk anyway to the people that we want to have multisigs with so they could send us their proofs of possession when we ask them.” Both are true in a sense.

EC: I guess the idea is that the key is a little bit larger. If you are reusing the key constantly to produce these signatures then you only have to do it once and you can reuse it. It depends on the use case mostly.

TR: Yeah. SpeedyMuSig, the name already says it. It is a little bit more speedy, it is faster and MuSig2 is maybe a little bit more convenient depending on your use case. This is the trade-off. In the end I think it is not a big deal, you could use either for most applications.

MF: Maybe this is completely off-topic but I thought Bitcoin using BIP32 keys and having specific requirements, it would want to use MuSig2 over SpeedyMuSig. But that’s veering off course. BIP32 is a separate issue?

TR: Yes and no. BIP32, it maybe depends on which particular way of derivation you use here. For the others BIP32 is a way to derive multiple public keys, a lot of public keys, from single key material, a single secret key in a sense. There are two ways of using BIP32. There is public derivation and hardened derivation I think. In the hardened derivation, this is what people mostly do, there is just a signer with a secret key and this guy creates all the public keys and sends them to other people. In that model it doesn’t really make a lot of difference because the signer anyway has to generate a public key and send it to someone else. It could attach this piece of data, the proof of possession. If you do public derivation which is a little more advanced where you as a signer give a master public key to other people and they can derive more public keys that belong to you, that would be harder because they can’t derive the proofs of possession. Maybe they wouldn’t even need to, I never thought about this, maybe this would work. I should think about this.

EC: I think we should chat more about this.

TR: I think it would work.

EC: I should say that proof of possession in our case, it adds a couple of elements. It is a Schnorr signature.

TR: It is really just a Schnorr signature. It is confusing because we say “proof of possession” but it is again another Schnorr signature.

EC: It is another Schnorr signature because we love them so much. We are just adding one group element and one field element, a Schnorr signature to your public key.

MF: The Bitcoin specific things, you were describing the hierarchical deterministic key generation, a tree of keys that we have in Bitcoin, I don’t know if they use it on other blockchains, other cryptocurrencies. As you said this is generating lots of keys from a root key. We have a thing in Bitcoin where we don’t want to reuse public keys and addresses. We want to keep rotating them. So perhaps that Bitcoin requirement or expectation that you only use your public key once for a single transaction and then rotate to a new key generated from that tree impacts whether you’d want to use MuSig2 or SpeedyMuSig.

TR: It is hard to say. I think there is nothing specific about the way you use keys in Bitcoin that would mean you can only use MuSig2 and not SpeedyMuSig, that you can’t use proofs of possession. It makes key management a little easier and also more backwards compatible for what we already have in our implementations. But it is not a fundamental thing, I think almost everything you could do with MuSig2 you could also do with SpeedyMuSig. Maybe you should ask Pieter about this because he has stronger opinions. I always argue proofs of possession are not that bad and Pieter says “Let’s avoid them”. It is not really a fundamental difference.

MF: I saw a presentation from Andrew Poelstra, apparently Ethereum has public key recovery from the signature. Someone (Bob McElrath) asked about this and Andrew said that’s not a good idea because of BIP32 keys in Bitcoin and that you need to commit to a public key. He was quite strong on not wanting to enable that. Apparently there is a covenant proposal that comes out of that if you can do public key recovery from the signature. Andrew was strong on saying that we shouldn’t enable that because of BIP32 keys which is kind of assuming BIP32 keys are fundamental to how we use Bitcoin in real life.

TR: I don’t know what he had in mind there. I don’t know this proposal but at the moment I can’t think of how BIP32 would interact with recovery.

MF: A question for Andrew then. Comments on YouTube, we’ve got one saying “Bellare Neven needed all keys for verifying”, maybe.

TR: Right, we covered that.

Adam Gibson (AG): We have pubkey recovery in pre-Taproot already because we have it in ECDSA.

TR: We can’t do it in Schnorr, at least not with BIP340.

MuSig2 draft BIP

MuSig2 draft BIP: https://github.com/jonasnick/bips/blob/musig2/bip-musig2.mediawiki

MF: MuSig2 now has a draft BIP.

TR: I just updated it yesterday. Still a work in progress.

MF: Can you talk a bit about what work still needs to be done? Why is it still a work in progress? If it was changed to be SpeedyMuSig that would be a significant change.

TR: That would be a significant change.

MF: What other things could potentially change in this BIP?

TR: Maybe go to the issue list. If I look at this list we merged a few PRs yesterday. Most of the things in the actual algorithms are pretty stable. Almost all of what I see here is either already worked on or is improving the writing, the presentation but not the actual algorithms. One very interesting point here maybe is Issue 32, the principle of just-in-time x-only. This is maybe something I forgot to mention when I talked about x-only keys. Here Lloyd (Fournier) argues that we should use x-only keys in a different way that reduces some of the pain that I mentioned earlier. I said x-only keys save 1 byte on the blockchain, in the public key, but they introduce a lot of pain in the algorithms. Lloyd here has a proposal, a different way to think about it, which reduces some pain. It is a really long discussion. At least we agreed on making certain specific changes to the BIP. We haven’t reached a conclusion on our fundamental philosophical views on how x-only keys should be thought about but at least we made progress. I promised to write a document (TODO) that explains how protocol designers and implementers should think about x-only keys. When they should use x-only keys and when they should use the keys that we used earlier.

MF: So the x-only thing is subtly complicating MuSig2 as well, I didn’t appreciate that. I thought it was just the TLUV proposal.

TR: It is what I mentioned earlier, it makes MuSig2 for example more complicated. On one side you can say it is not that bad because we can handle this complexity in the BIP and then we have a library where it is implemented, this handles all the complexity but still it is complexity. This discussion more turned into a discussion on how people should think about x-only keys. When we released BIP340 and now it is active with Taproot, some people got the impression that this is the new shiny key format that everybody should use. This is maybe the wrong way to think about it. It saves a byte but it saves this byte by losing a bit of information. It is a little less powerful. Maybe a better way to think about it, if you know that with a specific public key you really only want to do signatures, then it is ok to have it x-only. But if you may want to do other stuff with the public key, maybe add it to other keys, do MuSig2 key aggregation, do some other more advanced crypto, then it might be better to stick to the old format. “Old” is even the wrong word. We propose the word “plain” now. When I say “old” it comes with this flavor of old and legacy. It is actually not the case, it is just a different format. It really makes sense on the blockchain because there we save space. But for example if you send public keys over the network without needing to store them on the blockchain it might be better to use the plain format which is 33 bytes, 1 byte more. You can do more with these keys, they have 1 bit more information. They are more powerful in a sense. I can totally understand, if you read BIP340 you can understand that this is the new thing that everyone should migrate to. In particular also because 32 is just a number that implementers for some reason like more than 33. It is a power of 2, it doesn’t make a lot of difference but it looks nicer. Maybe that is the wrong way to think about it and that is why we had this issue and I volunteered to write a document, maybe a blog post, or maybe an informational BIP even, that explains how people should think about these keys and how they should be used in practice, at least according to my opinion. That’s another place where x-only keys made the world a little bit more complex. Now they exist there are two formats and people have to choose.

MF: So it is quite serious then. We’ll probably be cursing that x-only choice. As I was saying earlier it is just impossible to predict how the future goes. You have to converge on a proposal at some point, you can’t keep putting it off. Then no one will work on anything because they just think it will never get onchain.

TR: We had a lot of eyes on it. Of course in hindsight you could say if we had written the MuSig2 BIP and the code back then we would have maybe noticed that this introduces a lot of complexity. It is hard to say. At some point you need to decide and say “We are confident enough. A lot of clever people have looked at this and they agree that it is a good thing.” And it still may even be a good thing. It is not really clear. It saves space, maybe if people use it in the right way it will be good in the end.

MF: That’s the keys we are using, keygen. Are the other parts of the MuSig2 protocol kind of set in stone. Key aggregation, tweak, nonce generation. This is going to be solidified right?

TR: It is pretty much set in stone by now. If anybody here is really working on an implementation, when I say set in stone, don’t count on it. It is still experimental and we could change things. This may lead to weird consequences if you use it now in production and then switch to a new version and so on. But if you ask me, yeah I think the algorithms are pretty much set in stone. Unless someone else finds a problem. One thing that is not set in stone when it comes to the real algorithms is one thing that was discussed in this x-only issue, where we want to change the key aggregation slightly. We should really hurry up now, it has been there for a while now and we should try to get it finished.

MF: The BIP? Hurry up and finalize the BIP?

TR: Yeah because people really want to use it. In particular the Lightning people.

MF: In terms of the things that can go wrong in a MuSig protocol, you do need signatures from all parties so if one party is not responsive then you can’t produce a signature. You’d need a backout, an alternative script path.

TR: This is true in every multisig thing. If the other person doesn’t want to or is offline there is no chance.

MF: Either unresponsive or providing an invalid signature and it just can’t complete. In terms of sorting pubkeys, signatures, does that present a complication? If people are receiving pubkeys and signatures in different orders do you need a finalizer or combiner? Do you need to fallback onto that? Do you need to allocate a finalizer or combiner at the start?

TR: For key aggregation the way the BIP is written you can do both. The key aggregation algorithm in the BIP gets an ordered list as a data structure. You can pass an arbitrary list, which means you can sort the list of public keys before you pass it or you can choose to not sort it. We took this approach because it is the most flexible and different callers may have different requirements. For example if you use multisignatures with your hardware wallets at home you probably don’t want to have the complexity and just sort the keys. But maybe some applications already have a canonical order of keys, for example in Lightning. I’m not sure how they do it, I’m making this up, maybe there you have one party that initiates the channel and the other party which is the responder in the channel. Then you could say “Let’s put the initiator always at position 1 and the responder always at position 2”. I think we described this in the BIP in the text somewhere. If you don’t want to care about the order you are free to sort the public keys before you pass them to key generation. If you care about the order you can pass in whatever you want. If you pass two lists or two different versions of a list in a different ordering then the resulting key will be different because we don’t sort internally. For signatures it doesn’t matter in which order you get them. This is really handled in the protocol, that doesn’t matter. But if you are asking for things that can go wrong, I think the main thing that can go wrong is what we already talked about, the requirement to have good randomness. This is really the one thing where you need to be careful as an implementer. Of course you always need to be careful. You need some good source of randomness which is typically the random number generator in your operating system.
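
An editor’s minimal sketch of the sorting option (the draft BIP describes sorting serialized public keys lexicographically; the constant keys below are made-up placeholders): callers that don’t care about order sort first, because the aggregate key depends on the order and the aggregation does not sort internally.

```python
def keysort(pubkeys: list[bytes]) -> list[bytes]:
    """Lexicographically sort serialized public keys so every signer derives
    the same aggregate key regardless of the order the keys arrived in."""
    return sorted(pubkeys)

# either agree on a canonical order (e.g. channel initiator first)
# or sort before aggregating:
keys = [bytes.fromhex("03" + "11" * 32), bytes.fromhex("02" + "aa" * 32)]
assert keysort(keys) == keysort(list(reversed(keys)))
```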

MF: And this is a new requirement because before with single signatures you are using the hash of various inputs to get the randomness. Now you do actually need an independent source of randomness.

TR: Yeah, right. That’s exactly the thing that MuSig-DN solves in a sense but by adding a lot of complexity. This is also a very, very important thing to know. People have tried to use this trick with the hash generating the nonce by hashing the secret key and the message for multisignatures. Don’t do this. If this was possible we would have added it to the BIP. But it is totally insecure. For years we told people when they implement signatures they should generate the nonce deterministically because this is way safer. This is true for ordinary single signer signatures. But funnily it is the exact opposite for multisignatures. This is really dangerous because we’ve seen people do this. We told them for years to derive nonces deterministically and then they saw the multisignature proposal and said “Ok it is safer to derive nonces deterministically”. Now it is the other way round. It is strange but that is how it is. When you look at nonce generation here it has this note. “NonceGen must have access to a high quality random generator to draw an unbiased, uniformly random value rand”. We say it explicitly, you must not derive deterministically.
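
An editor’s sketch of the point being made (hypothetical helper names, not the BIP’s exact algorithms): deterministic nonce derivation in the RFC 6979 / BIP340 style is fine for single-signer Schnorr but must not be used for MuSig2, where every signing session needs fresh randomness.

```python
import hashlib
import secrets

def nonce_seed_fresh() -> bytes:
    # what the draft BIP requires: an unbiased, uniformly random value
    # from a high quality source, fresh for every signing session
    return secrets.token_bytes(32)

def nonce_seed_deterministic(seckey: bytes, msg: bytes) -> bytes:
    # fine for ordinary single-signer signatures (RFC 6979 / BIP340 style),
    # but INSECURE for MuSig2: a co-signer can get you to reuse the same
    # nonce with two different challenges and extract your secret key
    return hashlib.sha256(seckey + msg).digest()
```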

MuSig2 support in Lightning and hardware signers

MF: Is this going to be the biggest challenge? Everything seems to take years these days but assuming we get hardware signer support for MuSig2, the hardware signer would need to provide this randomness that it currently wouldn’t.

TR: It is hard to say. You need randomness for doing crypto at all. You need randomness to generate your seed. In the hardware wallet case you could say “Let’s draw some physical dice”. For normal computers I think you assume you have some way to generate real randomness. We have seen cases in practice where this broke down and people lost their coins. This is one risk. Another risk is that you have this statefulness. If you do this… you precompute the first round in particular, which is what they want to do in a Lightning channel. Then you also have to make sure that you can’t really reuse that state twice. Whenever you have state you have some risk in practice that the state of your machine gets reset. For example if you run in a VM or you try to backup. Let’s say you do the first round of MuSig2, you have to keep the secret state in order to perform the second round, once you have a message. Now it would be a very bad idea to write that state to disk or even write it to a backup. What could happen is you complete the second round by sending out signatures and then at some point the user will restore the backup, you have the risk that you perform the second round again now with different inputs. This is exactly what we want to avoid, then you expose your secret key. That’s why we say in the BIP that you shouldn’t serialize the state. Even in our implementation we don’t give the user a way to serialize the state because this could mean some user will write it to some backup or store it somewhere else or whatever. If you really crash after the first round no big deal, just do the first round again. That’s totally fine. This is probably what Lightning will do. If a node crashes then they have to reestablish a connection to the other Lightning node and then they can run the first round of MuSig again.

MF: You can tell me to mind my own business but have you had any discussions with hardware wallet devs? You’ve got a hardware wallet at Blockstream. Any discussions with those guys? How long would it take? The BIP has to be finalized, you probably want a lot of experimentation before you make that investment.

TR: It is a good question. We should talk to hardware wallet people more. So far we mostly are in touch with Lightning people. I have the impression they really want to use it so they talk about it very often. I don’t know why because I don’t know the details of the Lightning protocol, I think in Lightning they said the statefulness is ok because if you lose state and you crash you need to trust the other side that they won’t steal your entire channel. They already have this assumption in their threat model so the additional state introduced by MuSig2 doesn’t change that really.

MF: I have a few links. I think it is specifically the Lightning Labs, LND devs, Laolu etc. I don’t know if they are already using it. Loop is one of their products, maybe MuSig2 is already being used.

TR: Of course I hope my stuff is used in production but they need to be careful if we update the BIP and do some small changes. They have to be careful that it doesn’t break. At the moment we don’t care too much about backwards compatibility. It is not 0.1, it is not even a BIP, it is a BIP draft.

MF: It is merged but perhaps it is just an option, they aren’t actually using it in production.

AG: I know Laolu wrote an implementation of MuSig2 in the btcsuite golang backend but for sure it isn’t exactly final.

FROST

FROST paper: https://eprint.iacr.org/2020/852.pdf

FROST PR in libsecp256k1-zkp: https://github.com/ElementsProject/secp256k1-zkp/pull/138

MF: So FROST. I have a quote from you Tim, a couple of years ago. I don’t know if you still have this view. “It should be possible to marry MuSig2 and FROST papers together and get a t-of-n with our provable guarantees and our key aggregation method”. Do you still share that view now? My understanding of FROST is there’s a lot more complexity, there are a lot more design choices.

TR: I believe that is still true. Only a belief, I’m not sure what Elizabeth thinks. I think it is possible to create a scheme where you have a tree of key aggregations where some of those would be MuSig2, some of those would be FROST. Some of those would be multisig, some of those would be threshold aggregations. It is very hard to formalize and prove secure because then you get all the complexity from all the different schemes.

EC: I think it is worth pointing out that because FROST is a threshold signature scheme there’s a lot of added complexity with the key generation. When you have a multisig you have a nice, compact representation of the key representing the group. But as soon as you move to the threshold setting now you need some kind of distributed key generation which often has multiple rounds. There’s a trade-off there in terms of flexibility or having a nice key generation algorithm. These are different aspects.

TR: Maybe it is not related to that point of the discussion, I’m sure there is this RFC proposal for FROST. I think this wasn’t included in the reading list.

MF: Sorry, what specifically?

TR: There is a draft RFC for FROST.

EC: FROST is being standardized through NIST. There’s a CFRG draft which has gone through several iterations.

TR: IRTF?

EC: Yeah. It would be helpful to pop those links into the list also.

MF: Ok I’ll definitely do that. So it is a threshold scheme but obviously with the k-of-n, the k can equal n. You can still do multisig with FROST. It is just a question of whether you would do that.

TR: I think you wouldn’t. If k equals n then you only get the disadvantages of FROST and not the advantages.

EC: Exactly.

MF: A couple of things I wanted to confirm with you guys. Jesse Posner who is writing the FROST implementation in libsecp256k1-zkp, he said a couple of things, I wanted to confirm that this is true. I think he was a bit unsure when he said them. For FROST you can swap out public keys for other public keys without having an onchain transaction and you can also change from say a 2-of-3 to a 3-of-5 again without an onchain transaction? There is a lot of flexibility in terms of what you can do for moving the off-chain protocol around and making changes to it without needing an onchain transaction? This seems very flexible, certainly when you compare it to MuSig2. The only way you can change the multisig arrangement for MuSig2 would be to have an onchain transaction.

EC: I’m not sure if I understand that statement. If you are changing keys around at all then you need to rerun the distributed key generation protocol.

TR: I’m also not sure what Jesse is talking about here. I think in the pull request there were some discussions. What you certainly can do, you can downgrade n-of-n to k-of-n. This has been discussed. For example swapping out a key to a new key, maybe Elizabeth knows more, there are some key resharing schemes, I’m not really aware of those.

EC: Yeah. That’s what I was saying about doing the distributed key generation again. Say you run distributed key generation once and everybody has their secret shares of the overall group key. At least the DKG that is used in conjunction in FROST, the original one which is what we prove in our paper, it is based on Shamir’s Secret Sharing. There are some pretty standard ways to reshare using Shamir. That is possible. It is still non-trivial.
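
An editor’s toy illustration of the Shamir sharing that sits underneath such DKGs (small field, no DKG, no verification or resharing, purely to show the t-of-n reconstruction property; real schemes work over the secp256k1 group order):

```python
import secrets

P = 2**127 - 1   # toy prime field

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(secret=42, t=2, n=3)
assert reconstruct(shares[:2]) == 42     # any 2 of the 3 shares suffice
```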

TR: You can do resharing but it is more like a forward security thing. You can’t reshare to an entirely new group of signers. You would still trust the old group of signers.

EC: You can transition from some group of signers to a new group of signers also. There are ways to reshare the keys. Or you can keep the same group and reshare a sharing of zero so your shares essentially stay the same, same group, or you can switch the group of signers. But it does involve performing the resharing. There’s a little bit that has to be done there.

MF: You would need to redo the protocol which surely means you’d need an onchain transaction in Bitcoin or blockchain lingo? It is not the case that you can have some funds stuck behind an address, a public key, and keep that public key associated with funds but rejig the protocol behind that public key.

EC: When you reshare you do have to send some values to the other parties so that everybody has their new shares.

TR: The old group will still have the signing keys. When you have a normal coin with a normal UTXO and single signer, of course what you can do is take your secret key and send it to another person, handover the keys. But this is not really secure because the other person needs to trust you, you still have the key. This would be the same in this FROST resharing. We can do the same transition from one group where you have a 2-of-3 sharing to totally different parties that again would have a 2-of-3. But they still would need to trust the old group not to spend the coins. In that sense it doesn’t give you a magic way to overcome this problem. Whenever you want to handover coins to a new person in a secure way you need to do a transaction.

EC: I think it is worth pointing out too that distributed key generation is dealt with as an entire field by itself, how to reshare things. When we say “FROST” it consists of these two components. There is the distributed key generation protocol and then the signing. What we tend to focus on is the signing because you could incorporate any number of distributed key generation protocols with the FROST signing mechanism. In terms of properties you want out of key resharing this is within the realm of distributed key generation research. Not really FROST research.

MF: And FROST also uses proof of possession, PedPop.

EC: Yeah. The original FROST paper proposed a distributed key generation mechanism which is basically the Pedersen DKG, this is a pretty standard DKG in the literature. With the addition of proofs of possession. That was why we proved our version of FROST together with the key generation protocol that they proposed. But it is basically just Pedersen plus proofs of possession.

TR: Maybe one thing to mention here, a step back and to reiterate, threshold schemes are in general more complex than multisignature schemes because not only the signing part is interactive but also the key generation part is interactive. In the MuSig paradigm you can just take some public keys and aggregate them. In threshold schemes, at least in those that work in the group we have in Bitcoin, the crypto in Bitcoin, even the key setup process is now interactive. This is of course again another layer of complexity. One interesting side effect here is that if we always during key generation need to talk to everybody, then we can certainly also send around proofs of possession. There is a requirement that we need to talk to everybody else during key generation. Then sending proofs of possession around is the natural thing to do. There is no drawback. We have the drawback already in a sense. That is why all these proposals use proof of possession and not MuSig key aggregation. This would probably be possible but you lose some efficiency.

MF: The Bitcoin implementation of FROST uses MuSig key aggregation.

TR: I think it does. Jesse said it is easier because then it is the same method, we could reuse some code. And it would be possible to do what he described there. Maybe you could start with a 3-of-3 setup and then downgrade it to a 2-of-3, something like this. Then it would make sense to start with the MuSig setup. When you start with the n-of-n thing you probably want to have the MuSig method. This was still an open point in this PR. He came up with this use case, I wasn’t really convinced that people want to do this in practice. Others maybe weren’t convinced either. I think we haven’t decided yet. At least in the Bitcoin world when you talk about this PR, it is much more work in progress than MuSig2. With MuSig2 we already have the BIP draft and it is almost finalized. Here we don’t even have a specification, we just have this draft PR which tries to implement it but without an explicit specification. It is worth mentioning that outside Bitcoin it is maybe the other way round. There is this IRTF proposal for FROST and there is nothing for multisignature.

MF: So why is that? Given that MuSig is simpler. It is just that Elizabeth has done better speaking to the IRTF than the Bitcoin guys.

TR: I think there are a few things to say. First of all yes MuSig is simpler than FROST. Multisignature is simpler than threshold signature. Mostly because of the thing we just mentioned, key generation is non-interactive in multisignatures and interactive in threshold signatures. The second thing is use cases. In cryptocurrencies multisignatures have some use cases because in cryptocurrencies you always have an escape hatch when the other parties go offline. For example in a 2-of-2 Lightning channel you always need to have some timelock that says “If the other party disappears I can get the money back after some time”. If the other party disappears we can’t just create a new key because there’s money stored on that key and the money would be gone. In other applications, and the IRTF is probably not interested in cryptocurrencies, threshold signatures make a lot of sense. They also make sense in a lot of blockchain applications, as I said with blockchains we have these special cases where multisignatures make sense. But in general if you ignore cryptocurrencies for a moment then threshold makes more sense. You can scale it better. It is pretty useless to have a 10-of-10. What is the goal of all these schemes? To distribute trust. But with multisignatures you increase security but you decrease safety. Two things can happen, either some people get malicious or steal your keys, that is great with 10-of-10, 9 people can get compromised, their keys stolen, and the scheme is still secure. But on the other hand you always increase the probability that you lose the key material. If you have a 10-of-10 and you lose one of the 10 pieces the key is gone. You increase security against attacks but you decrease security against losing your keys. The only way out of this dilemma is threshold signatures where you don’t need to do 10-of-10 but you could do 7-of-10 or something like this. Then you can trade off this security against attackers versus security against losing stuff. This is something you can really only do with threshold keys. This is why in general people are more interested in threshold things. On top of this in Bitcoin I mentioned contracts, for example Lightning or other complicated things, we always have an escape hatch in these contracts. Even in other setups, for example at home when you store some larger amount of coins, maybe you want to have a 2-of-3 between 3 hardware wallets or 2 hardware wallets and your machine. This is a setup that I think a lot of people use. Even there you want threshold because if you lose one hardware wallet you don’t want to lose all your coins. But using Taproot there are some ways you could do this even with multisignatures. For example say you have 3 signers, 2 hardware wallets and a backup wallet somewhere else. You could have a Taproot output that is just a 2-of-2 of the two main signing wallets and you have some script paths that cover the other possibilities. You have two primary devices and as long as you have access to those you use this normal key spend. Whenever you need to resort to the backup device, then you need to open the Taproot commitment and show the script path, use the other MuSig combinations. You could implement this 2-of-3 via 3 combinations of 2-of-2s. In the end this may be more convenient in practice than going through the hassle of running this interactive distributed key generation that you would need for FROST. I think this is another reason why multisignatures are interesting in Bitcoin. But of course this doesn’t scale. 
If you want to do 20-of-60 or 20-of-40, you probably don’t want to do this at home, you can’t enumerate all the possibilities here. It would be very inefficient.
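
An editor’s sketch of that enumeration idea (the labels are hypothetical and the actual Taproot construction and MuSig2 aggregation are omitted): the 2-of-2 of the two primary devices becomes the key path, and the remaining pairs become script path leaves.

```python
from itertools import combinations

signers = ["hw_wallet_1", "hw_wallet_2", "backup"]
primary = ("hw_wallet_1", "hw_wallet_2")

# key path: MuSig2 aggregate of the two primary devices
# script leaves: every other 2-of-2 combination, used only if a device is lost
script_leaves = [pair for pair in combinations(signers, 2) if pair != primary]
print(script_leaves)   # [('hw_wallet_1', 'backup'), ('hw_wallet_2', 'backup')]
```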

MF: I guess finalizing the BIP, it is a bit closer to the use case than an IRTF standard. IRTF would be a “We have this paper. Academics are happy with the proof.” But it is not spec’ing it for a particular real world use case. Would that be correct to say?

TR: I think it is spec’ing it. It is a specification. Maybe Elizabeth knows more about the use cases that they have in mind.

MF: Are there particular use cases that this IRTF standard is being geared towards? With Bitcoin we have a couple of Bitcoin specific requirements that might not apply to other blockchains or other use cases. How close to a use case is this IRTF standard for FROST? I think Elizabeth has dropped off the call. And of course you can do nested MuSig within FROST and nested FROST within MuSig. But from a Bitcoin specific perspective it makes sense to finalize MuSig first and then work out how one might nest into the other or vice versa.

TR: I really need to stress that we believe we can do this but we really don’t know. I could write down a scheme but I don’t know if it is secure. We’d need to prove it was secure and nobody has done that so far. I wouldn’t recommend using nested MuSig at the moment.

libsecp256k1 scope

MF: Let’s finish off with libsecp256k1. I don’t know which is the priority. Finalizing the MuSig2 BIP or getting an API release out for libsecp256k1. What’s your current thinking, obviously there are other contributors, other reviewers, other maintainers, on how close we are to having a libsecp256k1 release and a formal API?

TR: You mean MuSig2 in libsecp256k1?

MF: I’m assuming MuSig2 won’t be in that first release.

TR: No, I really hope we have it in that first release.

MF: This x-only pubkey issue needs to be resolved before there would be a libsecp256k1 release. And any other things that need to be finalized in terms of that MuSig2 BIP.

TR: I think the roadmap for MuSig2, at the moment we’re making changes to the BIP, it is mostly there. There are some final issues, mostly related to the x-only things where we need to do some changes, we also need some writing changes. They can always be done later. Once we are really confident about the algorithms that we have we can update the implementation in libsecp256k1-zkp or do it on the fly even. I need to say here that the current implementation based on an earlier version of the BIP draft is not in libsecp256k1 but in libsecp256k1-zkp which is a fork of this library maintained by Blockstream where we add a few pieces of functionality mostly relevant to our Liquid sidechain. But we also use it as a testbed in a sense for new development. That is why we added MuSig2 first there. Once we have the BIP finalized we will first update the libsecp256k1-zkp implementation to match the finalized BIP. When I say “finalized BIP” at the moment it is a draft, it is not even proposed. We need to get it a BIP number, put it out for review. Maybe there are more comments and we need to update things. The plan is then to update the implementation in libsecp256k1-zkp and once that is stable I really want to see this in libsecp256k1. Recently there has been discussion about the scope of the fork and the library and so on. I tend to believe that things that are useful to the wider Bitcoin ecosystem should be in libsecp256k1. I believe MuSig2 is useful to the wider ecosystem so we should add it there. At the moment it is maybe nice in libsecp256k1-zkp because it is easier for us to get things merged and easier to play around with it in an experimental state. But once that code is stable we should add it to libsecp256k1.

MF: The libsecp256k1-zkp repo, other people are using it. Is it designed to be a repo that other people use? I suppose if you’ve flagged as experimental the things that you think are experimental it is kind of a user beware type thing. Would there be a release for libsecp256k1-zkp as well? That’s predominantly for Elements, Liquid but if you use it it is kind of a user, developer beware type thing.

TR: That’s a good question. I think at the moment it is a little bit of both and that is why I started this issue and this discussion. The discussion was there before in more informal channels. If you ask me personally what I’d like to see is in libsecp256k1 we have the things for the Bitcoin ecosystem and in libsecp256k1-zkp we have things for the Elements ecosystem and the Liquid ecosystem. If we have that kind of separation then the separation would be clearer. On a possible release of libsecp256k1-zkp, so far we haven’t talked about this at all. Maybe we wouldn’t need a release because it is used mostly by the Elements and the Liquid ecosystem that we (Blockstream) mostly control. Now that I think about it, that is not even true. There are hardware wallets that support Liquid, they use libsecp256k1-zkp. I guess whenever you have a library, releases make a lot of sense. The truth is so far we were too lazy to put out a release for libsecp256k1, we should do this now. Maybe that is a good starting point to think about a possible release for the libsecp256k1-zkp library too. Independently of whether we move MuSig2 to this other library or not.

MF: It is hard, there are a lot of grey areas here. You don’t know where to draw the line. Which features should be in each library if any. Whether a feature should be in libsecp256k1 or libsecp256k1-zkp or neither. And also how much time the maintainers including yourself can dedicate to reviewing it.

TR: Upstream, the discussion in libsecp256k1 is much more interesting in that respect. There we really have a community project. We want to have the things there that are relevant for Bitcoin I think. But of course our time is limited. In the libsecp256k1-zkp fork, it is a repo controlled by Blockstream, we could do whatever we want with this repo. That’s why we don’t care that much what we put in there but maybe we should care a little bit more and have a discussion on the upstream repo. Then it would be more meaningful for everybody.

Reusing code for future proposals (e.g. CISA)

MF: There was some discussion here. You’ve discussed some cross input signature aggregation type proposals, you were on a podcast recently talking about half signature aggregation. There are some things that can be reused. You are having to think “Not only is this code potentially being used in the use case that we’re coding it up for but it might be used in future”. It is not consensus so it is not cemented on the blockchain but I suppose you are trying to think ahead in making sure it is modular and a potential module could do the same function for both use cases, even long term things.

TR: It is always hard to predict the future when you are writing code. Maybe not yet at an implementation level but at least at the level of writing papers we are thinking about this. You mentioned cross input signature aggregation, it is a different beast, it is not multisignatures, from a theory point of view it is a different thing. It turns out you could probably use something very similar to MuSig2 to do cross input signature aggregation, though we would need to write a paper about it and prove it secure before we can be sure. But we are not sure on whether this is the best approach even. If I remember correctly our most recent thinking about cross input signature aggregation is if we want to propose something like this then the resulting crypto scheme would look more like the original Bellare Neven than MuSig. This is a little bit easier to prove secure probably and it is more efficient. In that setting Bellare Neven would be better, or maybe a variant of Bellare Neven. If you think about cross input signature aggregation, you have multiple UTXOs, you are spending them all together, or you even have different parties controlling them, they run an interactive protocol and want to spend them all together in a single transaction with a single signature. If you think about it now you have all the public keys on the chain. Whereas in MuSig you don’t have this. In MuSig you have a Taproot output which is an aggregate key, aggregated from multiple keys, but you don’t see the individual keys on the blockchain. But here you see them on the blockchain. This is exactly what makes Bellare Neven work in this setting. As we discussed earlier, with Bellare Neven all the individual keys go into the verification algorithm so you need them. But for cross input signature aggregation you have them so you could use Bellare Neven. You probably wouldn’t use it exactly, you would do some minor tweaks to it, but at least it is a candidate there. On this issue, I didn’t really read this post, I think here Russell (O’Connor) is assuming that we would use something like MuSig2 for cross input signature aggregation. But this is over a year ago. As I said our current thinking is that we would rather use something like Bellare Neven. Jonas mentioned it here, somewhere in this discussion.

MF: It is not critically important. It is just a nice to have type thing. This long term forward thinking, what can be reused for what. I suppose if you are doing a BIP then you don’t exactly want to go through the hassle of changing the BIP.

TR: When you are writing code you can think about future applications. It is probably good to think about but it might turn out in a month you want to do it differently. That’s how it works.

ROAST

ROAST paper: https://eprint.iacr.org/2022/550.pdf

ROAST blog post: https://medium.com/blockstream/roast-robust-asynchronous-schnorr-threshold-signatures-ddda55a07d1b

Tim Ruffing presentation on ROAST: https://btctranscripts.com/misc/2022-07-14-tim-ruffing-roast/

MF: I think we’re close to wrapping up then. Perhaps you can explain ROAST. I think Elizabeth briefly discussed it in her presentation recently at Zcon too. So ROAST is a protocol, a wrapper on top of FROST. What specifically are you concerned with regarding FROST that makes you want something on top of it, so that it is not susceptible to certain things?

TR: Let’s say you have some threshold setup, I think the example I use mostly here is 11-of-15. You have an 11-of-15 setup and you want to create a signature so you need 11 people. The problem with FROST is you need exactly 11 people. You really need to commit to the exact subset of 11 people that you want to use. If it turns out that one of these signers in this set is offline or doesn’t want to sign then you need to restart the protocol from scratch. You need to pick another subset of 11 signers, kicking out maybe that one signer who was offline and replacing it with another one. But then it could fail again and fail again and so on.

MF: You have to decide upfront when you start the FROST protocol which 11 signers are going to be signing. If one doesn’t sign then you fail and you have to start again.

TR: Right. ROAST gives you a clever way to restart in a sense so you don’t need too many sessions in the end. Maybe I have to start a few protocol runs of FROST but I do it in a clever way so I don’t have to start too many of them. Another nice feature is that the resulting thing, what ROAST gives you, is an asynchronous protocol in a sense. You can always make progress, you never need to have timeouts. For example in the simple approach that I just described here, you start with 11, maybe 10 of them reply and you have some 11th signer that never replies. It is not clear, maybe the network is super slow and at some point it will reply or it might never reply. If you do this naively at some point you would need to raise a timeout. Then maybe you’re wrong because you raise a timeout and a second later the message arrives. You would have aborted but actually the guy was there. ROAST avoids this because it never aborts sessions, it just starts new ones. It never needs to abort existing sessions. It works by starting one session and at some point after a few people reply you already start a second session. Either the first session can complete or the second session. If a few people respond in the second session you may even start a third session and so on. You leave them all open so you never miss a message. You never kick out somebody because he seems to be offline. You never need some kind of timeout. This is useful in real networks because timeouts are difficult. You never know what the right value for a timeout is.
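To make the session bookkeeping a bit more concrete, here is a heavily simplified sketch of that idea in Python. It is not the paper’s pseudocode or any real implementation; names like `start_frost_session` and `wait_for_next_reply` are hypothetical placeholders. The point is only that sessions are never aborted: whenever enough responsive signers are idle, the coordinator simply opens another FROST session with them.

```python
# Simplified ROAST-style coordinator sketch (hypothetical helpers, for illustration only).
def roast_coordinator(signers, t, start_frost_session, wait_for_next_reply):
    responsive = set()                                        # signers whose latest reply arrived
    sessions = [start_frost_session(set(list(signers)[:t]))]  # first attempt: some choice of t signers

    while True:
        signer, session, share = wait_for_next_reply()        # never time out, just wait for the next message
        session.add_share(signer, share)
        if session.is_complete():                             # t valid shares collected in this session
            return session.signature()                        # one session finishing is enough
        responsive.add(signer)
        if len(responsive) >= t:
            # These t signers have all replied and are waiting on us, so open a new
            # session with them. Earlier sessions stay open in case stragglers still reply.
            sessions.append(start_frost_session(set(responsive)))
            responsive.clear()
```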

MF: The parties of the 11-of-15 on Liquid are expected to be online all the time? I suppose it is just a hard decision, if you’ve got 15 parties why choose a specific 11 of them? Why would you ditch 4 of them? There’s no reason to if they are all equal parties.

TR: All ROAST does is start FROST sessions. It needs to start a first session for example and for this first session you need to make some choice. You need to pick some 11 with which you make the first attempt. But yeah you could just randomize it. Take a new random selection every time for example, or always take the ones that responded quickly in the past. That is another meaningful strategy. I’d guess you would want to have some randomness involved. If you always pick the same 11 and they always work you would never notice if someone failed. Maybe one of those 4 that you don’t pick goes offline and you wouldn’t even notice. I guess you would notice because the TCP connection drops or something like this. But you want to have some randomization to make sure that everyone is picked from time to time to see if things are working. This is mostly just an implementation detail I think. If people are interested in ROAST specifically I gave a talk about it which is also on YouTube. Maybe you can add it to the reading list.

MF: We were discussing before about changing it from an 11-of-15 to an 11-of-16 or whatever. If that is going to change regularly or semi-regularly then ideally you would avoid that onchain transaction if you could, but it sounds like you can’t do that. You would need an onchain transaction every time you changed the setup for Liquid.

TR: But I don’t think that is a big deal.

MF: It hasn’t changed for years? Is it still the same 15 that it has always been? Or have they been swapped in and out?

TR: I don’t think they have been swapped in and out. But it is not a big problem to make onchain transactions because they do onchain transactions all the time, they do payouts and peg-outs from the Liquid network. Doing an onchain transaction isn’t a big deal. Even if you wouldn’t do it constantly, how often do you add a member to this 15? Maybe once a month or so.

MF: I don’t know, I have no idea how it works. I know it is an 11-of-15 but I have no idea whether parties have been swapped in or out, it doesn’t seem to have changed from the original 11-of-15.

TR: We are adding more members to the federation but at the moment we are not adding more members to this set of signers because we have committed to this 11-of-15 thing, because of some specific reason, how multisigs currently work in Bitcoin. If you go beyond 15 it will be more inefficient. ROAST in the long term would maybe give us a way to scale that number without losing efficiency onchain. Even in that setting, if it was easy to add people we wouldn’t add a new member every day. Even if you do it every day, one transaction per day, that’s not that expensive.

MF: Not currently. We don’t know what the fees will be like in future. Back to 2017 fee levels.

TR: At least at the moment it is ok.

MF: We don’t care at the moment. Is it currently randomized in terms of choosing the 11-of-15 in the CHECKMULTISIG? How do the 11 decide on who is going to sign?

TR: That’s a good question. I’m not sure. I think at the moment we have a rotating leader and the leader will ask everybody else to send a signature. We just take the first 11 signatures that we get and put them on the transaction. It uses this naive Bitcoin threshold sig, this is exactly where it is better than FROST in a sense. With this approach you don’t need to precommit on which 11 you want to use. You can just ask everybody for their partial signature and you wait until you have 11. But you can ask every one of the 15 members and take the first 11. Maybe we wait for a second longer and then randomize it, I don’t know. I bet we take the first 11 but I would need to read the code or ask people who know more about Liquid.

MF: I’m assuming it still uses CHECKMULTISIG rather than CHECKSIGADD. The benefit of using CHECKSIGADD is probably not worth the hassle of changing it.

TR: I’d need to ask the Liquid engineers.

Q&A

Q - Back in MuSig land, in the BIP there is a section called verify failed test cases and it provides some signatures. One of them exceeds the group size and it fails. One of them is the wrong signer and so it fails. And one of them is the wrong signature… why is that an important test to make?

A - In the test vector file, just yesterday we converted that to a JSON file. The reason is we really want to have a specific property in signatures. There are different security notions of signatures in theory. What we usually expect from a signature scheme, even without multisigs, just from a single signer, normal signature scheme, is some unforgeability notion. This means you can only sign messages if you have the secret key. There are various forms of unforgeability and what I just described is weak unforgeability. There’s also strong unforgeability, a little stronger as the name says. Let’s say I give you a signature on a specific message. You can’t even come up with a second signature, a different signature, on this specific message. This is strong unforgeability. If you didn’t check for the negation then you could easily do this. I send you a signature, you negate it and you have a different signature. It is just negated but it is different. The reason why we want to have this strong unforgeability is mostly just because we can have it. It is there. Have you ever heard of malleability issues in Bitcoin? This is specifically also related to this because this is one source of malleability. In SegWit signature data became witness data and from SegWit on it doesn’t influence the transaction IDs anymore. But before SegWit and with ECDSA this was one source of malleability. With ECDSA you could do exactly this. An ECDSA signature is like a Schnorr signature, it is two components. You could just negate one of them.
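As a concrete illustration of the malleability being described (a standard fact about ECDSA, not something stated in the transcript): ECDSA verification recovers a point $R' = (H(m)/s)G + (r/s)P$ and only checks its x coordinate against $r$, so negating $s$ modulo the group order $n$ negates $R'$ without changing that x coordinate.

```latex
% Classic ECDSA malleability: if (r, s) verifies for message m under key P, so does (r, n - s).
\[
(r, s)\ \text{valid} \;\Longrightarrow\; (r,\ n - s)\ \text{valid},
\qquad\text{since}\quad
\frac{H(m)}{-s}G + \frac{r}{-s}P = -R'
\quad\text{and}\quad x(-R') = x(R').
\]
```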

Q - For the negation is that multiplying the whole point by negative one (-1) for example or is that mirroring it on the other side of an axis. What does negation actually look like when it is performed?

A - I’m not sure what exactly is negated in this test case but if you look at the Schnorr signature it has two components. One is a group element, a point on the curve, and the other is a scalar. The scalar is in a sense an exponent, something like a secret key. You could negate either. BIP340 makes sure that no matter what you negate it will always be an invalid signature. But in general it is possible to negate the point or it is possible to negate the scalar. Look into the old ECDSA specification because it had this problem and you can see how the negation was ignored there. When you have a negation, you have a signature, you negate one of the components and you get another valid signature.

MF: I went through the Schnorr test vectors a while back and Pieter said negating means taking the complement with the group order n. “The signature won’t be valid if you verify it using the negated message rather than the actual message used in the signature.”

TR: This is negating the scalar, what Pieter is talking about.

MF: It is the same thing that is being asked about?

TR: It is the same thing. There are two things you could potentially negate. A Schnorr signature has two components R and s and you could negate R and you could negate s. I guess s has been negated but the comment doesn’t say.

MF: Ok thank you very much Tim, thank you Elizabeth for joining and thank you everyone else for joining.

TR: Thanks for inviting me.

https://www.youtube.com/watch?v=TpyK_ayKlj0

Reading list: https://gist.github.com/michaelfolkson/5bfffa71a93426b57d518b09ebd0998c

Introduction

Michael Folkson (MF): This is a Socratic Seminar, we are going to be discussing MuSig2 and we’ll move onto adjacent topics such as FROST and libsecp256k1 later. We have a few people on the call including Tim (Ruffing). If you want to do short intros for the people on the call feel free, though you don’t have to.

Tim Ruffing (TR): Hi. Thanks for having me. I am Tim Ruffing, my work is at Blockstream. I am an author of the MuSig2 paper, I guess that’s why I’m here. In general I work on cryptography for Bitcoin and cryptocurrencies and in a more broad sense applied crypto. I am also a maintainer of the libsecp256k1 library which is a Bitcoin Core cryptographic library for elliptic curve operations.

Elizabeth Crites (EC): My name is Elizabeth Crites, I am a postdoc at the University of Edinburgh. I work on very similar research to Tim. I also work on the FROST threshold signature scheme which I’m sure we’ll talk about at some point during this.

Nigel Sharp (NS): I’m Nigel Sharp, bit of a newbie software engineer, Bitcoiner, just listening in on the call in case I have any questions.

Grant (G): Hi, I’m Grant. Excited about MuSig. I may have a couple of questions later on but I think I’ll mostly just observe for the moment.

A retrospective look at BIP340

MF: This is a BitDevs, there are a lot of BitDevs all over the world in various cities. There is a list here. This is going to be a bit different in that we’re going to focus on one particular topic and libsecp256k1. Also we are going to assume a base level of knowledge from a Socratic and presentation Tim did a couple of years ago. This isn’t going to be an intro. If you are looking for an intro this won’t be for you. But anyone is free to listen in and participate and ask questions. The Socratic we did with Tim, this is before BIP340 was finalized, this was before Taproot was activated, this was before Schnorr signatures were online and active on the Bitcoin blockchain. First question, BIP340 related, has anything in BIP340 been a disappointment or problematic or caused any challenges with any future work? It is always very difficult to finalize and put something in stone, put it in the consensus rules forever especially when you don’t know what is coming down the pipeline in terms of other projects and other protocols, other designs. The only thing I’ve seen, you can tell me if there is anything else, is the x-only pubkeys for the TLUV opcode. The choice of x-only pubkeys in BIP340, that has had a complication for a covenant opcode in its current design. That’s the only thing I’ve heard from BIP340 that has posed any kind of disappointment.

TR: I think that is the main point here. BIP340 is the Schnorr signature BIP for Bitcoin. One special thing about the way we use Schnorr signatures in Bitcoin is that we have x-only public keys. What this means, if you look at an elliptic curve point, there is an x and a y coordinate, there are two coordinates. If you have an x coordinate you can almost compute the y coordinate, up to the sign basically. For each valid x coordinate there are two valid y coordinates. That means if you want to represent that point or store it or make a public key out of it you need the x coordinate and you need 1 bit of the y coordinate. What we actually did in the BIP here, in our design, we dropped that additional bit. The reason is that it is often encoded in a full byte and maybe you can’t even get around this. This saves 1 byte in the end, brings down our public keys from 33 bytes to 32 bytes. This sounds good, it is just 1 byte or 1 bit but I think it is a good idea to squeeze out every saving we can because space is very scarce. This was the idea. There’s a nice blog post by Jonas (Nick). This blog post explains that this is actually not a loss of security. Even though we drop a bit in the public key it doesn’t mean that the scheme becomes a bit less secure, it is actually the same security. That is explained in this blog post. Though it turns out that if you just want to do Schnorr signatures on Bitcoin with normal transactions then it is pretty cool, it saves space. If you want to do more complex things, for example MuSig or more complex crypto then this bit is always a little pain in the ass. We can always get around this but whenever we do advanced stuff like tweaking keys, a lot of schemes involve tweaking keys, even Taproot itself involves tweaking, MuSig2 has key aggregation and so on. You always have to implicitly remember that bit because it is not explicitly there. You have to implicitly remember it sometimes. This makes specifications annoying. I don’t think it is a problem for security but for engineering it is certainly annoying. In hindsight it is not clear if we would make that decision again. I still think it is good because we save a byte but you can also say the increased engineering complexity is not worth it. At this point I can understand both points of view. You mentioned one thing on the bitcoin-dev mailing list. There was a suggestion by AJ (Towns) for a new Bitcoin opcode that would allow a lot of advanced constructions, for example coinpools. You have a shared UTXO between a lot of people, the fact that we don’t have the sign of the elliptic curve point of the public key would have created a problem there. You need the sign when you want to do arithmetic. If you want to add a point to another point, A+B, then if B is actually -B then it doesn’t work out. You are not adding, you are subtracting, this is the basic problem there. There were some ugly workarounds proposed. I recently came up with a better workaround. It is still ugly but not as ugly as the other ones. It is just an annoying thing. We can always deal with these x-only keys and still save space, but sometimes they are annoying. The only hope is that we can hide all the annoying things in BIPs and specifications and libraries so that actual protocol developers don’t need to care too much about this.
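To make the “two candidate y coordinates” point concrete, here is a minimal Python sketch of lifting an x-only key back to a full point, roughly in the spirit of BIP340’s lift_x (an illustrative sketch of the idea, not a copy of the BIP’s reference code): the dropped bit is resolved by always choosing the candidate with an even y coordinate.

```python
# Minimal sketch of lifting a 32-byte x-only key to a full secp256k1 point.
p = 2**256 - 2**32 - 977            # secp256k1 field prime, p % 4 == 3

def lift_x(x: int):
    """Return the curve point (x, y) with even y, or None if x is not on the curve."""
    if x >= p:
        return None
    y_sq = (pow(x, 3, p) + 7) % p    # curve equation: y^2 = x^3 + 7
    y = pow(y_sq, (p + 1) // 4, p)   # candidate square root (works because p % 4 == 3)
    if pow(y, 2, p) != y_sq:
        return None                  # x^3 + 7 is not a square: no point with this x coordinate
    return (x, y) if y % 2 == 0 else (x, p - y)   # pick the even-y candidate by convention
```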

MF: I think coinpool was using this particular covenant opcode but it is a complication for the covenant opcode TAPLEAF_UPDATE_VERIFY. Then various things might use it if that was ever activated on the Bitcoin blockchain. That’s the only thing that has come to light regarding BIP340.

TR: Mostly. The other thing, it is not really a pain but we are working on a tiny change. At the moment the message size is restricted to 32 bytes so you can sign only messages that are exactly 32 bytes. This isn’t a problem in Bitcoin because we pre-hash messages and then they are 32 bytes. But it turned out that some people would like to see this restriction lifted. We are working on a small update that allows you to sign messages of arbitrary size. This is really just a tiny detail. The algorithm works but we specified it to accept only 32 bytes. We need to drop this check and then everything is fine basically.

MF: That’s BIP340. I thought we’d cover that because before it wasn’t active onchain and now it is. Hopefully that will be the only thing that comes to light regarding BIP340 frustrations.

MuSig2 history

MF: Let’s go onto MuSig2. I have a bunch of links, a couple of talks from Tim Ruffing and Jonas Nick. I thought I’d start with a basic history. The first paper I found was this from Bellare, Neven (2006). They tried to do multisig, I guess key aggregation multisig. I don’t know what the motivation is for this. Before Bitcoin and before blockchains what were people using multisig for? What use cases were there where there were so many signatures flying around? Not only that but also wanting to have multiple people signing? And then on top of that wanting to aggregate it to get the privacy or space benefit? Is it just a case of academics pursuing their interests and not really worrying about the use case? Or is there a use case out there where people wanted this?

TR: You could have a look at the paper, if they have a use case.

MF: I don’t think there is much about use cases.

TR: No. I think blockchain and cryptocurrencies and Bitcoin changed cryptography a lot. A lot of things were brought from theory to practice. Cryptographers were always busy with ideas and schemes but no one really deployed the advanced stuff. Of course you always had encryption, you had TLS, SSL, we had signatures and certificates also used in SSL, TLS. We had the basic stuff, encryption, secure channels, signatures. But almost everything beyond this simple stuff wasn’t really used anywhere in practice. It was proposed in papers and there were great ideas but cryptocurrencies have changed the entire field. Back then you wrote a paper and you hoped that maybe someone would use it in 10 years. Nowadays sometimes you write a paper, you upload it on ePrint, it is not even peer reviewed and people will implement it two days later. It is a bit of an extreme. I’m not sure what Elizabeth thinks. That is my impression.

EC: Given that this paper was 2006, this is my perspective because I work on anonymous credentials, I think that was popular in 2001 to 2006, pre-Bitcoin era. I imagine that some of the motivation would have been distributing trust for certificate authorities. But I don’t know. I think they do say that in this particular paper as a motivation. As far as I know no one actually implemented this.

TR: Back then from a practitioner’s point of view, even if the goal is to distribute trust you could have a simpler construction of multisignatures with multiple public keys and multiple signatures. For CAs maybe that is good enough. This really becomes interesting when you have things like key aggregation and combined signatures that are as large as normal signatures. Space on the blockchain is so expensive, I think this is what makes it interesting to implement something like this. In the end all these modern multisignature schemes are interactive. Interaction is annoying, you have to send messages around and so on. I think if you don’t need to care about every byte on the blockchain maybe it is just not worth it.

MF: I suppose the challenge would be would they need privacy in that multisig setting or would they be motivated by the space savings. Why can’t it just be 5 signatures?

TR: This paper doesn’t have key aggregation?

EC: I think it does. I think this was the first paper that had key aggregation.

TR: I should know, let’s check.

MF: It has a section on rogue key attacks.

EC: They had the first idea to avoid the rogue key attacks.

TR: You can still have rogue key attacks even if you don’t have key aggregation.

EC: I think that was the thing, how to aggregate keys without rogue key attacks.

TR: Let’s have a look at the syntax.

MF: If it is talking about rogue key attacks that is surely key aggregation, no?

EC: It doesn’t have to be.

TR: You could do it internally.

EC: You could just prove possession of your secret key and then multiply the keys together. But I think the interesting thing was that they were trying to not do that and have a key aggregation mechanism that still holds without being susceptible to that.
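For readers who have not seen it, the textbook rogue-key attack against naive key aggregation (this illustration is not from the transcript) works as follows: an attacker who sees an honest key $X_1$ and controls some key $X^{*}$ with known secret key can announce a crafted second key so that the naive product of keys collapses to a key the attacker fully controls.

```latex
% Rogue-key attack on naive aggregation X~ = X_1 * X_2 (multiplicative notation).
\[
X_2 := X^{*} \cdot X_1^{-1}
\;\Longrightarrow\;
\tilde{X} = X_1 \cdot X_2 = X^{*},
\]
% so the attacker can sign for the "aggregate" key alone. Proofs of possession or
% MuSig-style aggregation coefficients are two ways to rule this out.
```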

TR: Maybe go to Section 5.

EC: Also I think this was one of the first papers that abstracted the Forking Lemma from being a scheme specific type thing to a general…

TR: You see they just have a key generation algorithm, they don’t have a key aggregation algorithm. Of course during signing they kind of aggregate keys but you couldn’t use this for example for something like Taproot because from the API it doesn’t give you a way to get the combined key. They still can have rogue key attacks because in signing you still add up keys. You need to be careful to avoid this.

EC: How do they aggregate keys? I’m curious.

TR: They don’t because verification takes all the public keys as input.

EC: I thought that was one of the main motivations of this.

TR: MuSig1 was the first to solve that problem. It was there for BLS I guess already.

EC: I think this paper is a good reference for the transition for the Forking Lemma. There was a different Forking Lemma for every scheme you wanted to prove. I think this one abstracted away from something being specific for a signature scheme. Maybe I’m getting the papers confused. This is the one that referenced the change in the Forking Lemma.

TR: They say it is a general Forking Lemma but it is my impression that it is still not general. Almost every paper needs a new one.

EC: That’s true. It made it an abstract… lemma versus being for a signature scheme. You’re right, you end up needing a new one for every paper anyway. That’s the main contribution I see from this if I’m remembering this paper correctly.

MF: So we’ll move onto MuSig1. MuSig1 addressed the rogue key attack because that was well known. But MuSig1 had this problem where you had parallel signing sessions and you could get a forgery if you had a sufficient number of parallel signing sessions right?

TR: Yes and no. If you are going from Bellare Neven to..?

MF: Let’s go Bellare Neven, then insecure MuSig1 then secure MuSig1.

TR: The innovation then of MuSig1 compared to Bellare Neven was that you had key aggregation. You have a key aggregation algorithm where you can take all the keys and aggregate them. This didn’t exist in Bellare Neven. But yeah the first version of MuSig1, the first upload on ePrint here, had a flaw in the proof. This turned out to be exploitable. The scheme was insecure in the first version.
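For reference, the key aggregation that MuSig introduced (and that MuSig2 keeps) looks roughly like this in multiplicative notation, with $H_{\mathrm{agg}}$ a hash used only for the aggregation coefficients; the per-key coefficients are what rule out the rogue-key trick sketched earlier. This is a simplified presentation, not the exact BIP formulation.

```latex
% MuSig-style key aggregation for keys X_1, ..., X_n with key list L = (X_1, ..., X_n).
\[
a_i = H_{\mathrm{agg}}(L, X_i), \qquad
\tilde{X} = \prod_{i=1}^{n} X_i^{\,a_i}.
\]
```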

MF: This is committing to nonces to fix broken MuSig1 to get to secure MuSig1.

TR: They started with a 2 round scheme, this 2 round scheme wasn’t secure. The problem is that if you are the victim and you start many parallel sessions with a lot of signers, 300 sessions in parallel, then the other guy should get only 300 signatures but it turns out they can get 301 signatures. The last one on a message totally chosen by the attacker. This was the attack.

MF: Was there a particular number of signing sessions that you need to be able to have to get that forgery? Or was that a number plucked out of anywhere?

TR: You need a handful but the security goes down pretty quickly. There are two answers. With the first attack, the one covered here, using Wagner’s algorithm, it goes down pretty quickly. I never implemented it but around 100 I think it is doable in practice. If you have a powerful machine with a lot of computation power maybe much lower, I don’t know. Recently there was a new paper that improves this attack to super low computation. There you specifically need 256. If you have 256 sessions, or maybe 257 or something like this, then the attack is suddenly super simple. You could probably do it on your pocket calculator if you spend half an hour.

Different security models for security proofs

Jonas Nick on OMDL: https://btctranscripts.com/iacr/2021-08-16-jonas-nick-musig2/#one-more-dl-omdl

MF: One thing neither you nor Jonas in the presentations you’ve done so far, for good reason probably, you haven’t really discussed in depth the different security proofs. My understanding is there are four different security models for security proofs. There’s Random Oracle Model (ROM), Algebraic Group Model (AGM), One More Discrete Logarithm (OMDL) and then Algebraic One More Discrete Logarithm (AOMDL) that is weaker than the One More Discrete Logarithm (OMDL). Can you dig a little bit deeper into that? I was getting confused.

TR: I can try and Elizabeth can chime in if I am talking nonsense which I hope I don’t. Let me think about how to explain it best. First of all we need to differentiate between models and assumptions. They have some things in common but they are different beasts. In cryptography we usually need some problem where we assume that it is hard to solve, the most fundamental problem that we use here is the discrete logarithm. If the public key is g^x and you see the public key it is hard to compute x just from g^x but the other way round is easy. Other assumptions that we typically know, factoring, it is easy to multiply large numbers but it is hard to factor them. For MuSig we need a special form of this discrete logarithm. The discrete logarithm assumption is just what I said, it is an assumption that says it is hard given g^x to compute x. The reason why I call this an assumption is that it is really just an assumption. We don’t know, we can’t prove this, this is not covered by the proof. People have tried for decades to do this and they’ve failed so we believe it holds. This sounds a little bit worrisome but in practice it is not that worrisome. All crypto is built on this and usually the assumptions don’t fail. Maybe there is a quantum computer soon that will compute discrete logarithms but let’s not talk about this now, that’s another story. For MuSig we need some special form of this discrete logarithm problem which is called the One More Discrete Logarithm problem. It is a generalisation. The normal problem is given g^x compute x. Here it is “I give you something like 10 values g^x with different x’s and I give you an oracle, a magic way to compute 9 of these discrete logarithms but you have to solve all 10.” You always need to solve one more than I give you a magic way to solve it. This is One More Discrete Logarithm.
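Written out a bit more formally (a standard textbook-style statement, not a quote from the paper), the $n$-OMDL assumption says that even with access to a discrete-log oracle the adversary cannot solve one more challenge than the oracle answered:

```latex
% n-OMDL game sketch: n+1 challenges, at most n discrete-log oracle queries.
\[
\Pr\Big[\,(x_1,\dots,x_{n+1}) \leftarrow \mathcal{A}^{\mathsf{DLog}(\cdot)}\big(g^{x_1},\dots,g^{x_{n+1}}\big)
\;:\; \mathcal{A}\ \text{makes at most}\ n\ \text{oracle queries}\,\Big]
\ \text{is negligible for every efficient adversary}\ \mathcal{A}.
\]
```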

MF: There was an incorrect security proof for the broken MuSig1 that was based on One More Discrete Logarithm. That’s a normal model for a security proof. It is just that there was a problem with the scheme?

TR: Yeah. The security theorem would look like “If this OMDL problem is really hard to compute, I need to say if because we don’t know, it is just an assumption, then the scheme is secure.” The implication was wrong but not the OMDL problem. This is still fine. It is an assumption, we don’t know if it is hard, but this was not a problem. The algebraic OMDL, this is a tiny variant but it is not really worth talking about. Maybe when we come to the models. We need this OMDL assumption for MuSig2 also. And we also need the random oracle model, the random oracle model is basically a cheat. Cryptographers are not only really bad at proving that particular things are hard, for example OMDL, where we need to assume them. We are also pretty bad at arguing about hash functions. Modeling hash functions without special tricks is possible but it is very, very restricted and we can’t really prove a lot. That is why at some point people had the idea to cheat a little bit and assume that the hash function behaves like a random oracle. What is a random oracle? An oracle is basically a theoretical word for an API, a magic API. You can ask it something, you send it a query and you get a response back. When we say we model a hash function as a random oracle, that means when doing a security proof we assume that the hash function behaves like such an oracle where the reply is totally random. You send it some bit string x as input and you get a totally random response. Of course it is a function, if you send the same bit string x twice you get the same response back. But otherwise if you send a different one you get a totally new random response. If you think about practical hash functions, SHA256, there is kind of some truth in it. You put some string in there and you get a random looking response. But modeling this as an oracle is really an entirely different beast in a sense. It basically says an attacker can’t run this on their own. A normal hash function in the real world, they are algorithms, they have code, you can run them, you can inspect the code, you can inspect the computation while it is running, middle states and all that stuff. Whereas an oracle is some magic thing you send a message to and you get a magic reply back. You can’t really see what is going on in the middle for example. This is something where we restrict the attacker, where we force the attacker to call this oracle and this makes life in the proofs much easier. Now you can say “This is cheating” and in some sense it is but it has turned out in practice that it works pretty well. We haven’t seen a real world example where this causes trouble. That is why this methodology is accepted, at least for applied cryptography I would say.
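A minimal sketch (my illustration, not anything from the talk or a real library) of how a random oracle is typically realised inside a proof, by lazy sampling: each new query gets a fresh random answer, repeated queries get the same answer, and because the challenger runs the oracle itself it sees every input the adversary hashes.

```python
import os

class RandomOracle:
    """Lazy-sampled random oracle: consistent, uniformly random answers per query."""
    def __init__(self, out_len: int = 32):
        self.table = {}                  # query -> previously sampled answer
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)   # sample on first use
        return self.table[x]

# ro = RandomOracle(); ro.query(b"hello") == ro.query(b"hello")  -> True
# In a reduction the challenger controls `table`, so it observes (and can program)
# every hash query the adversary makes -- exactly the extra power the proofs rely on.
```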

G: I have a question about that. I noticed that in BIP340 and also in MuSig there has been talk about using variable length for the message as a parameter. Does that variable length play into this random oracle model assumption or is that completely separate?

TR: I think it is separate. If you have longer messages, you need to hash the messages at some point, they will go as an input to the hash function. Depending on how you model that hash function, that will be an algorithm or a magic random oracle. But I think the question of how long the message is is kind of orthogonal to how you model the hash function. To give you an idea how this helps in the proofs, it helps at several points. One easy example is that we can force the attacker to show us inputs. When we do a security proof what does it look like? The theorem is something like “If OMDL is difficult to compute and we model the hash function as a random oracle then MuSig2 is secure”. Roughly like this. How do we do this? We prove by contradiction. We assume there is a way to attack MuSig2, a concrete attacker algorithm that breaks MuSig2, and then we try to use that algorithm to break OMDL. If we assume OMDL is not breakable then this is a contradiction, the attack against MuSig2 in the first place can’t exist. This is the proof by contradiction methodology that we call reductions. The random oracle model for example helps us to see how the attacker algorithm that we are given in this proof by contradiction uses the hash function. If we were just using SHA256 as an algorithm then this attacker algorithm can use this SHA256 internally and we can’t really see what is going on. If we don’t give it the SHA256 algorithm but we force the attacker to call this random oracle, to call this magic thing, then we see the calls. We see all the inputs that the attacker algorithm gives to the hash function. Then we can play around with them because we see them. We can make use of that knowledge that we get. But if you think about it, “model” isn’t quite the right word: if there is some attacker somewhere in the world trying to attack MuSig2 it is not as friendly and doesn’t send us all its inputs to the hash function. It just doesn’t happen. That is why this model is kind of cheating. As I said it turned out to be pretty successful. Elizabeth, maybe you want to add something? Whenever I try to explain these things I gloss over so many details.

EC: We use this notion of this idealized something, right. In the case of random oracle model we are saying a hash function should output something random. So we are going to pretend that it is a truly random function. What that means is that when we idealize a hash function in that way it says “If you are going to break MuSig you have to do something really clever with the explicit hash function that is instantiated”. Your proof of security says “The only way you are going to break this scheme is if you do something clever with the hash”. It kind of eliminates all other attack vectors and just creates that one. So far nobody has been able to do anything with the hash. It just eliminates every other weakness except for that which is the explicit instantiation that is used. That’s my two cents.

TR: That’s a good way to say it. When you think about a theorem, if OMDL holds this is one thing and if you model the hash function as a random oracle then the scheme is secure. This gives you a recipe to break MuSig2 in a sense. Either you try to break OMDL or you try to break the hash function. As long as those things hold up you can be sure that MuSig2 is secure. You also asked about AGM, the Algebraic Group Model. This is another idealized model. Here we are not talking about the hash function, we are talking about the group arithmetic. All this discrete logarithm stuff takes place in an algebraic group. This is now very confusing when I use this terminology. When I say “algebraic group” I just mean group in algebra terminology, a mathematical object. But when cryptographers say “algebraic” they mean something different. Here we restrict the attacker in some other way. We kind of say “The only way to compute in this group is to add elements”. If you have a public key A and a public key B the only thing you can do with this is compute A+B or maybe A-B or A+2B or something like this. You can’t come up with your own public key except drawing a new x and computing g^x. You can do this point addition, you can add public keys, and you can generate new public keys by drawing the exponent and raising g the generator to the exponent. But we assume that this is the only way you can come up with group elements. This is again very helpful in the proof. When I talked about the random oracle I said the attacker needs to show his inputs to the hash function. Here in the Algebraic Group Model the attacker needs to show us how it came up with group elements. For example what could happen is that we send the attacker the public key g^x and ask it to produce a signature on this public key that we haven’t created before. That would be a break of MuSig2. Now the attacker could say “Here is a signature” and the signature will contain a group element and then the attacker needs to tell us how it computed this element. This is an implication from what I said earlier. If you assume the only thing the attacker can do is add those things up then this is basically equivalent to forcing the attacker to tell us how it came up with these elements, give us the numbers basically. This again helps in the security proof. It mostly has theory implications only I would say as long as this model really holds up. It was a special thing with MuSig2 that we had two variants: the variant with the Algebraic Group Model where we need this stronger assumption is a little bit more efficient.
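Stated slightly more formally (again a standard presentation, not a quote): in the Algebraic Group Model the adversary must accompany every group element it outputs with a representation in terms of the group elements it has received so far, for example the generator $g$ and the public keys or nonces $X_1, \dots, X_k$.

```latex
% AGM: each output group element Z comes with an explicit representation.
\[
Z = g^{a} \prod_{i=1}^{k} X_i^{\,b_i}
\quad\text{for coefficients } a, b_1, \dots, b_k \text{ supplied by the adversary.}
\]
```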

MF: There aren’t neat, strictly increasing levels of security, right? It is just different paradigms? Because the Algebraic One More Discrete Logarithm (AOMDL) is weaker security than One More Discrete Logarithm. But how does One More Discrete Logarithm compare to AGM and ROM?

TR: Let me try to explain it the other way round. So far we have talked about assumptions and models. They are a little bit similar. The random oracle model (ROM) is in some sense an assumption. Whenever we add an assumption our proofs get weaker because we need to assume more. For the basic version of MuSig2, for the security, we need to assume that OMDL is hard and we need to assume the random oracle model (ROM). If we on top assume the Algebraic Group Model (AGM) then our security proof gets weaker. That means there is an additional thing that can break down. This Algebraic Group Model thing which is another cheating thing, if this doesn’t hold up in practice then the scheme might be insecure. On the other hand what we get out of the security proof is we now can have a more efficient way to do MuSig2. The Algebraic One More Discrete Logarithm (AOMDL) thing is really a slight variation of OMDL. This is very confusing even in the paper. Maybe this was your impression, if we add this algebraic assumption then this becomes weaker. We have a stronger assumption so our results become weaker. For the assumption it is kind of the other way round. I could explain it but I’m not sure if anyone will understand it if I try to explain it now.

MF: Ok, we’ll move on. We are not going to do a whole session on security proofs.

TR: If you are really interested, in the paper we explain it. But of course it is written with a specific audience in mind.

MF: I’ve just heard “This is weaker”, “This is stronger”, “We’ve got a stronger proof”, “We’ve got a weaker proof”. I was struggling to understand which one you are going from to another.

TR: It is hard because you say “weaker proof” and “stronger assumption” and even we get confused.

EC: The term “weaker” and “stronger” means different things in different contexts. A weaker assumption means better. A stronger assumption is worse. But then you have “stronger security”, it is a point of confusion in the terminology that we’re using.

TR: And it even confuses cryptographers. When you look at algebraic OMDL it is a weaker assumption than OMDL, slightly weaker which is slightly better. We have to assume less which gives us a stronger result. Even cryptographers are trained to believe that whenever they hear the word “algebraic” it is something bad because now we have to make all these assumptions. In this specific case we are actually making a weaker assumption, that is why we stressed it so much in the paper. Whenever we say algebraic OMDL we have a relative clause that says it is actually weaker than OMDL, to remind the reader that what we’re doing is actually a good thing and not a bad thing.

MuSig-DN

Paper: https://eprint.iacr.org/2020/1057.pdf

Blog post: https://medium.com/blockstream/musig-dn-schnorr-multisignatures-with-verifiably-deterministic-nonces-27424b5df9d6

Comparing MuSig-DN with MuSig1 and MuSig2: https://bitcoin.stackexchange.com/questions/98845/which-musig-scheme-is-optimal-classic-musig-or-this-new-musig-dn-scheme/

MF: Ok let’s move on but thank you for those explanations. I’ll look at the papers again and hopefully I’ll understand it a bit better. So we were doing history. Let’s get back to the history. We went through broken MuSig1, we went through corrected non-broken MuSig1. Then there was MuSig-DN.

TR: DN was before MuSig2.

MF: Right, we haven’t got onto MuSig2 yet. MuSig-DN, it hasn’t been implemented, there is no one working on that. It was an advancement in terms of academic research but there’s no use case that you’ve seen so far why people would use MuSig-DN?

TR: Maybe there are some use cases but it is much more difficult. From the practical point of view what does it do? Multisignatures have some specific risks when using them in practice, at least all these schemes we’re talking about here. For example you need secure random numbers while signing. If you have broken random number generators, this has happened in the past, not in the context of multisignatures but in other contexts, then you might lose security entirely. You try to sign a message and the others can extract your entire secret key which is a catastrophic failure. MuSig-DN is an attempt to get rid of this requirement to have real randomness. This sounds good in practice because it removes one way things can go wrong but on the other hand the way we are doing this is we add a huge zero knowledge proof to the protocol. First of all we lose a lot of efficiency but even if you ignore this it comes with a lot of engineering complexity. All this engineering complexity could mean something is wrong there. In a sense we are removing one footgun and maybe adding another one. Adding that other one is a lot of work to implement. I think that is why it hasn’t really been used in practice so far. I still believe that there are niche use cases where it is useful. There has been at least one other paper that does a very similar thing to what MuSig-DN does but in a more efficient way. Their method is more efficient than ours but it is still very complex. It doesn’t use zero knowledge proofs, it is complex in a totally different way. But again a lot of engineering complexity. It seems there is no simple solution to get rid of this randomness requirement.

MF: So MuSig-DN is better for the randomness requirement and stateless signing. It is just that those benefits aren’t obvious in terms of why someone would really want those benefits for a particular use case.

TR: The randomness and the statelessness, they are very related if you look at the details. MuSig-DN has two rounds. When I say it is stateless it basically means that in one round I send a message, I receive a message from the others and then I send another message. That is why it is two rounds. When I say it has state, I mean the signer needs to remember secret state from the first round to the second round. This can be problematic because weird things could happen while you remember state. For example you could run in a VM, you do the second round and then someone resets the VM to the point where you did the first round but not the second round and you still have the state. You do another second round and again you can lose your key. The basic idea is if we remove the requirement to have randomness then everything will be deterministic. That is why we don’t have to remember state. When we come to the second round we could just recompute the first round again as it is deterministic. We can redo the first round again so we don’t need to remember it because there is no randomness involved. That is why having no state and requiring no randomness are pretty related in the end. I think the motivation is we have seen random number generators fail in practice. Even in Bitcoin, people lost their coins due to this. It is a real problem but for multisignatures we currently don’t have a good way to avoid the randomness requirement. From an engineering point of view it could make more sense to focus on really getting the randomness right instead of trying to work around it.

MF: Anything to add on MuSig-DN Elizabeth?

EC: No, I’d just say that I think there are some niche use cases for having a deterministic nonce, if it gets used in combination with something else. I personally have thought about a few things that might use it. I think it is actually a really nice result even with the expensive zero knowledge proofs. Maybe there is something that could be done there, I don’t know. I think it is not just pushing the problem to somewhere else, it is a different construction and I do think that determinism is useful when plugged into other schemes. I still think it is something worth pursuing and I like this result.

TR: We thought about using the core of it in other protocols and problems but we haven’t really found anything.

MuSig2

MuSig2 paper: https://eprint.iacr.org/2020/1261.pdf

Blog post: https://medium.com/blockstream/musig2-simple-two-round-schnorr-multisignatures-bf9582e99295

MF: Ok so then MuSig2. This is getting back to 2 rounds. This is exchanging nonce commitments in advance to remove the third round of communication.

TR: Right, the fixed version of MuSig1 had 3 rounds. They needed to introduce this precommitment round before the other 2 rounds to fix the problem in the security proof, make the attack go away. It wasn’t just a problem with the proof, the scheme was insecure. Then MuSig2 found the right trick to remove that first round again. Whenever I say this I should mention that Chelsea Komlo and Ian Goldberg found the same trick for FROST. It is really the same idea just in a different setting. Even Alper and Burdges, they found the same trick in parallel. It was really interesting to see three teams independently work on the same problem.

MF: And this is strictly better than fixed MuSig1? MuSig-DN did still have some advantages. It is just that for most use cases we expect MuSig2 to be the best MuSig.

TR: Yeah. In this blog post we argue that MuSig2 is strictly better than MuSig1. It has the same safety requirements, it just gets rid of one round. You can precompute the first round without having a message which is another big advantage. A 2-of-2 Lightning channel may in the future use MuSig2, what you can do is do the first round already before you know what to sign. Then when a message arrives that you want to sign, a payment in Lightning that you want to forward or want to make, only then can you do the second round. Because you already did the first round the second round is just one more message. It is particularly useful for these 2-of-2 things. Then the transaction arrives at one endpoint of the Lightning channel, this endpoint can complete the second round of MuSig2 for that particular transaction, compute something locally, and then just send it over to the other endpoint. The other endpoint will do its part of the second round and at this point it already has the full signature. It is really just one message that goes over the network. That’s why you could call this semi-interactive or half-interactive. It is still interactive because you have two rounds but you can always precompute this first round. Whenever you really want to sign a message then it is just one more message over the network. That’s why we believe MuSig2 is strictly better than MuSig1 in practice. But compared to MuSig-DN it is really hard to compare. MuSig-DN has this additional feature, it doesn’t need randomness. If you do the zero knowledge proof correctly and implement all of this correctly you definitely remove that footgun of needing randomness. This is a safety advantage in a sense. But on the other hand it is slower, it has more implementation complexity and you can’t do this precomputation trick so it is always two rounds. You can’t precompute the first round. These would be drawbacks. But as I said it has these safety improvements, you can’t say that one is strictly better than the other. But in practice what people want is MuSig2 because it is simpler and we like simple protocols. Not only because they are easier to implement but they are also easier to implement correctly. Crypto is hard to implement so whenever we have a simple protocol we have much less potential to screw it up.
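For orientation, the two rounds look roughly like this in multiplicative notation (a simplified sketch that ignores the x-only and tagged-hash details of the BIP; $x_i$, $X_i$ are signer $i$’s keys, $a_i$ the aggregation coefficients, $\tilde X$ the aggregate key): round 1 is message-independent, which is exactly what allows it to be precomputed.

```latex
% MuSig2, simplified. Round 1 (no message needed): signer i picks r_{i,1}, r_{i,2}
% and broadcasts R_{i,1} = g^{r_{i,1}}, R_{i,2} = g^{r_{i,2}}.
% Round 2 (message m known):
\[
R_j = \prod_i R_{i,j}, \quad
b = H_{\mathrm{non}}(\tilde X, R_1, R_2, m), \quad
R = R_1 R_2^{\,b}, \quad
c = H_{\mathrm{sig}}(\tilde X, R, m), \quad
s_i = c\, a_i\, x_i + r_{i,1} + b\, r_{i,2},
\]
% and (R, s) with s = sum_i s_i verifies like an ordinary Schnorr signature under the aggregate key.
```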

MF: And on the security proofs you’ve proved the security of MuSig2 in the random oracle model. “We prove the security of MuSig2 in the random oracle model, and the security of a more efficient variant in the combination of the random oracle and the algebraic group model. Both our proofs rely on a weaker variant of the OMDL assumption”. Maybe I’ll never understand that. MuSig2, any other comments on MuSig2?

SpeedyMuSig and proofs of possession

Paper that references SpeedyMuSig: https://eprint.iacr.org/2021/1375.pdf

Comparing SpeedyMuSig with MuSig2: https://bitcoin.stackexchange.com/questions/114244/how-does-speedymusig-compare-to-musig2

MF: Let’s move onto SpeedyMuSig, that’s an additional MuSig protocol. This is your paper with Chelsea Komlo and Mary Maller. This has SpeedyMuSig in it and it is using proofs of possession which MuSig1, MuSig-DN or MuSig2 all don’t use. You argue in the paper that this potentially could be better in certain examples of use cases.

EC: Yeah, MuSig2 has this really nice key aggregation technique. What we do instead is we include proofs of possession of your keys. We talked a little bit before about how you want to avoid these rogue key attacks. One way is if you prove knowledge of your secret key then you can’t do that kind of attack. If you do that then the aggregate key for all the parties signing is just the multiplication of their individual keys. That makes for a more efficient scheme. I would say a use case where this might work really well is if you are reusing a key. It is a little bit more expensive to produce your key in the first place because you are computing this proof of possession. But if you are reusing keys a lot to sign then it is an alternative.

MF: Is this where Bitcoin has very specific requirements which make MuSig2 better than using proofs of possession, a SpeedyMuSig like scheme. In the MuSig2 paper it says proofs of possession “makes key management cumbersome, complicates implementations and is not compatible with existing and widely used key serialization formats”. This is Bitcoin specific stuff right? BIP32 keys and things like this? What is it that makes MuSig2 better than SpeedyMuSig in a Bitcoin context?

TR: I’m not sure it is really “better” and whether that is the right word. It is another trade-off. The proofs of possession are pieces of data that you need to add to the public keys. This means that when you want to aggregate a few public keys you need this additional piece of data. That’s data that we didn’t have so far in the public key and we wouldn’t have it on blockchains. For example if you do some crazy protocol where you take some random public keys or specific public keys from the chain without any additional context and you aggregate them then certainly MuSig2 is better because you wouldn’t have access to the proofs of possession. You would need to ask the other signers to send the proofs of possession before you can aggregate the key. On the other hand you can say “In practice we usually talk anyway to the people that we want to have multisigs with so they could send us their proofs of possession when we ask them.” Both are true in a sense.

EC: I guess the idea is that the key is a little bit larger. If you are reusing the key constantly to produce these signatures then you only have to do it once and you can reuse it. It depends on the use case mostly.

TR: Yeah. SpeedyMuSig, the name already says it. It is a little bit more speedy, it is faster and MuSig2 is maybe a little bit more convenient depending on your use case. This is the trade-off. In the end I think it is not a big deal, you could use either for most applications.

MF: Maybe this is completely off-topic but I thought Bitcoin using BIP32 keys and having specific requirements, it would want to use MuSig2 over SpeedyMuSig. But that’s veering off course. BIP32 is a separate issue?

TR: Yes and no. BIP32, it maybe depends on which particular way of derivation you use here. For the others BIP32 is a way to derive multiple public keys, a lot of public keys, from single key material, a single secret key in a sense. There are two ways of using BIP32. There is public derivation and hardened derivation I think. In the hardened derivation, this is what people mostly do, there is just a signer with a secret key and this guy creates all the public keys and sends them to other people. In that model it doesn’t really make a lot of difference because the signer anyway has to generate a public key and send it to someone else. It could attach this piece of data, the proof of possession. If you do public derivation which is a little more advanced where you as a signer give a master public key to other people and they can derive more public keys that belong to you, that would be harder because they can’t derive the proofs of possession. Maybe they wouldn’t even need to, I never thought about this, maybe this would work. I should think about this.

EC: I think we should chat more about this.

TR: I think it would work.

EC: I should say that proof of possession in our case, it adds a couple of elements. It is a Schnorr signature.

TR: It is really just a Schnorr signature. It is confusing because we say “proof of possession” but it is again another Schnorr signature.

EC: It is another Schnorr signature because we love them so much. We are just adding one group element and one field element, a Schnorr signature to your public key.

MF: The Bitcoin specific things, you were describing the hierarchical deterministic key generation, a tree of keys that we have in Bitcoin, I don’t know if they use it on other blockchains, other cryptocurrencies. As you said this is generating lots of keys from a root key. We have a thing in Bitcoin where we don’t want to reuse public keys and addresses. We want to keep rotating them. So perhaps that Bitcoin requirement or expectation that you only use your public key once for a single transaction and then rotate to a new key generated from that tree impacts whether you’d want to use MuSig2 or SpeedyMuSig.

TR: It is hard to say. I think there is nothing specific about the way you use keys in Bitcoin that would mean you can only use MuSig2 and not SpeedyMuSig, that you can’t use proofs of possession. It makes key management a little easier and also more backwards compatible for what we already have in our implementations. But it is not a fundamental thing, I think almost everything you could do with MuSig2 you could also do with SpeedyMuSig. Maybe you should ask Pieter about this because he has stronger opinions. I always argue proofs of possession are not that bad and Pieter says “Let’s avoid them”. It is not really a fundamental difference.

MF: I saw a presentation from Andrew Poelstra, apparently Ethereum has public key recovery from the signature. Someone (Bob McElrath) asked about this and Andrew said that’s not a good idea because of BIP32 keys in Bitcoin and that you need to commit to a public key. He was quite strong on not wanting to enable that. Apparently there is a covenant proposal that comes out of that if you can do public key recovery from the signature. Andrew was strong on saying that we shouldn’t enable that because of BIP32 keys which is kind of assuming BIP32 keys are fundamental to how we use Bitcoin in real life.

TR: I don’t know what he had in mind there. I don’t know this proposal but at the moment I can’t think of how BIP32 would interact with recovery.

MF: A question for Andrew then. Comments on YouTube, we’ve got one saying “Bellare Neven needed all keys for verifying”, maybe.

TR: Right, we covered that.

Adam Gibson (AG): We have pubkey recovery in pre-Taproot already because we have it in ECDSA.

TR: We can’t do it in Schnorr, at least not with BIP340.

MuSig2 draft BIP

MuSig2 draft BIP: https://github.com/jonasnick/bips/blob/musig2/bip-musig2.mediawiki

MF: MuSig2 now has a draft BIP.

TR: I just updated it yesterday. Still a work in progress.

MF: Can you talk a bit about what work still needs to be done? Why is it still a work in progress? If it was changed to be SpeedyMuSig that would be a significant change.

TR: That would be a significant change.

MF: What other things could potentially change in this BIP?

TR: Maybe go to the issue list. If I look at this list we merged a few PRs yesterday. Most of the things in the actual algorithms are pretty stable. Almost all of what I see here is either already worked on or is improving the writing, the presentation but not the actual algorithms. One very interesting point here maybe is Issue 32, the principle of just-in-time x-only. This is maybe something I forgot to mention when I talked about x-only keys. Here Lloyd (Fournier) argues that we should use x-only keys in a different way that reduces some of the pain that I mentioned earlier. I said x-only keys save 1 byte on the blockchain, in the public key, but they introduce a lot of pain in the algorithms. Lloyd here has a proposal, a different way to think about it, which reduces some pain. It is a really long discussion. At least we agreed on making certain specific changes to the BIP. We haven’t reached a conclusion on our fundamental philosophical views on how x-only keys should be thought about but at least we made progress. I promised to write a document (TODO) that explains how protocol designers and implementers should think about x-only keys. When they should use x-only keys and when they should use the keys that we used earlier.

MF: So the x-only thing is subtly complicating MuSig2 as well, I didn’t appreciate that. I thought it was just the TLUV proposal.

TR: It is what I mentioned earlier, it makes MuSig2 for example more complicated. On one side you can say it is not that bad because we can handle this complexity in the BIP and then we have a library where it is implemented, this handles all the complexity but still it is complexity. This discussion more turned into a discussion on how people should think about x-only keys. When we released BIP340 and now it is active with Taproot, some people got the impression that this is the new shiny key format that everybody should use. This is maybe the wrong way to think about it. It saves a byte but it saves this byte by losing a bit of information. It is a little less powerful. Maybe a better way to think about it, if you know that with a specific public key you really only want to do signatures, then it is ok to have it x-only. But if you may want to do other stuff with the public key, maybe add it to other keys, do MuSig2 key aggregation, do some other more advanced crypto, then it might be better to stick to the old format. “Old” is even the wrong word. We propose the word “plain” now. When I say “old” it comes with this flavor of old and legacy. It is actually not the case, it is just a different format. It really makes sense on the blockchain because there we save space. But for example if you send public keys over the network without needing to store them on the blockchain it might be better to use the plain format which is 33 bytes, 1 byte more. You can do more with these keys, they have 1 bit more information. They are more powerful in a sense. I can totally understand, if you read BIP340 you can understand that this is the new thing that everyone should migrate to. In particular also because 32 is just a number that implementers for some reason like more than 33. It is a power of 2, it doesn’t make a lot of difference but it looks nicer. Maybe that is the wrong way to think about it and that is why we had this issue and I volunteered to write a document, maybe a blog post, or maybe an informational BIP even, that explains how people should think about these keys and how they should be used in practice, at least according to my opinion. That’s another place where x-only keys made the world a little bit more complex. Now they exist there are two formats and people have to choose.
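
A small sketch of the one bit that x-only keys drop (real secp256k1 field arithmetic, but illustrative only and not the libsecp256k1 API): every x coordinate on the curve corresponds to two points, one with an even and one with an odd y. A 33-byte plain key records which of the two you mean in its leading 0x02/0x03 byte; a 32-byte x-only key drops that byte, and BIP340 fixes the even-y point by convention.

```python
# Recover both candidate points for a given x coordinate on secp256k1.
P = 2**256 - 2**32 - 977          # secp256k1 field prime

def lift_x(x: int):
    """Return the even-y and odd-y points with this x coordinate, or None if x is not on the curve."""
    y2 = (pow(x, 3, P) + 7) % P
    y = pow(y2, (P + 1) // 4, P)  # square root works this way because P % 4 == 3
    if (y * y) % P != y2:
        return None
    return ((x, y), (x, P - y)) if y % 2 == 0 else ((x, P - y), (x, y))

# x coordinate of the secp256k1 generator point G
gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
even_pt, odd_pt = lift_x(gx)
print("even-y candidate:", hex(even_pt[1]))
print("odd-y candidate: ", hex(odd_pt[1]))
# A plain 33-byte key distinguishes these two; a 32-byte x-only key cannot.
```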

MF: So it is quite serious then. We’ll probably be cursing that x-only choice. As I was saying earlier it is just impossible to predict how the future goes. You have to converge on a proposal at some point, you can’t keep putting it off. Then no one will work on anything because they just think it will never get onchain.

TR: We had a lot of eyes on it. Of course in hindsight you could say if we had written the MuSig2 BIP and the code back then we would have maybe noticed that this introduces a lot of complexity. It is hard to say. At some point you need to decide and say “We are confident enough. A lot of clever people have looked at this and they agree that it is a good thing.” And it still may even be a good thing. It is not really clear. It saves space, maybe if people use it in the right way it will be good in the end.

MF: That’s the keys we are using, keygen. Are the other parts of the MuSig2 protocol kind of set in stone? Key aggregation, tweaking, nonce generation. This is going to be solidified, right?

TR: It is pretty much set in stone by now. If anybody here is really working on an implementation, when I say set in stone, don’t count on it. It is still experimental and we could change things. This may lead to weird consequences if you use it now in production and then switch to a new version and so on. But if you ask me, yeah I think the algorithms are pretty much set in stone. Unless someone else finds a problem. One thing that is not set in stone when it comes to the real algorithms is one thing that was discussed in this x-only issue, where we want to change the key aggregation slightly. We should really hurry up now, it has been there for a while now and we should try to get it finished.

MF: The BIP? Hurry up and finalize the BIP?

TR: Yeah because people really want to use it. In particular the Lightning people.

MF: In terms of the things that can go wrong in a MuSig protocol, you do need signatures from all parties so if one party is not responsive then you can’t produce a signature. You’d need a backout, an alternative script path.

TR: This is true in every multisig thing. If the other person doesn’t want to or is offline there is no chance.

MF: Either unresponsive or providing an invalid signature and it just can’t complete. In terms of sorting pubkeys, signatures, does that present a complication? If people are receiving pubkeys and signatures in different orders do you need a finalizer or combiner? Do you need to fallback onto that? Do you need to allocate a finalizer or combiner at the start?

TR: For key aggregation the way the BIP is written you can do both. The key aggregation algorithm in the BIP gets an ordered list as a data structure. You can pass an arbitrary list, which means you can sort the list of public keys before you pass it or you can choose not to sort it. We took this approach because it is the most flexible and different callers may have different requirements. For example if you use multisignatures with your hardware wallets at home you probably don’t want to have the complexity and just sort the keys. But maybe some applications already have a canonical order of keys, for example in Lightning. I’m not sure how they do it, I’m making this up, maybe there you have one party that initiates the channel and the other party which is the responder in the channel. Then you could say “Let’s put the initiator always at position 1 and the responder always at position 2”. I think we described this in the BIP in the text somewhere. If you don’t want to care about the order you are free to sort the public keys before you pass them to key generation. If you care about the order you can pass in whatever you want. If you pass two lists or two different versions of a list in a different ordering then the resulting key will be different because we don’t sort internally. For signatures it doesn’t matter in which order you get them. This is really handled in the protocol, that doesn’t matter. But if you are asking for things that can go wrong, I think the main thing that can go wrong is what we already talked about, the requirement to have good randomness. This is really the one thing where you need to be careful as an implementer. Of course you always need to be careful. You need some good source of randomness which is typically the random number generator in your operating system.
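
A minimal sketch of why the ordering matters, using a toy group and a made-up hash rather than the BIP's actual key aggregation algorithm: each key's aggregation coefficient is derived from a hash of the whole ordered list, so two different orderings produce two different aggregate keys, and sorting the list before passing it in is what restores order-independence.

```python
import hashlib

p, q, g = 2039, 1019, 4           # toy multiplicative group, not secp256k1

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def key_agg(pubkeys):
    L = tuple(pubkeys)            # the ordered list itself is hashed
    agg = 1
    for X in pubkeys:
        a = H("keyagg coeff", L, X)
        agg = (agg * pow(X, a, p)) % p
    return agg

keys = [pow(g, x, p) for x in (11, 22, 33)]
print(key_agg(keys))                                 # one aggregate key...
print(key_agg(list(reversed(keys))))                 # ...a different one for another order
print(key_agg(sorted(keys)) == key_agg(sorted(reversed(keys))))  # True: sorting first removes the ambiguity
```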

MF: And this is a new requirement because before with single signatures you are using the hash of various inputs to get the randomness. Now you do actually need an independent source of randomness.

TR: Yeah, right. That’s exactly the thing that MuSig-DN solves in a sense but by adding a lot of complexity. This is also a very, very important thing to know. People have tried to use this trick with the hash generating the nonce by hashing the secret key and the message for multisignatures. Don’t do this. If this was possible we would have added it to the BIP. But it is totally insecure. For years we told people when they implement signatures they should generate the nonce deterministically because this is way safer. This is true for ordinary single signer signatures. But funnily it is the exact opposite for multisignatures. This is really dangerous because we’ve seen people do this. We told them for years to derive nonces deterministically and then they saw the multisignature proposal and said “Ok it is safer to derive nonces deterministically”. Now it is the other way round. It is strange but that is how it is. When you look at nonce generation here it has this note. “NonceGen must have access to a high quality random generator to draw an unbiased, uniformly random value rand”. We say it explicitly, you must not derive deterministically.
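
A sketch of the two approaches with hypothetical helper functions (not the BIP's NonceGen algorithm): the first is the pattern the BIP requires, the second is the single-signer trick that must not be carried over to multisignatures.

```python
import hashlib, secrets

def noncegen_random() -> int:
    # Fresh, unbiased randomness from the OS RNG: this is what MuSig2 requires.
    return int.from_bytes(secrets.token_bytes(32), "big")

def noncegen_deterministic(seckey: int, msg: bytes) -> int:
    # RFC6979-style derivation from (secret key, message): safe for single-signer
    # Schnorr/ECDSA, but insecure for MuSig2 -- shown only as the anti-pattern.
    return int.from_bytes(hashlib.sha256(seckey.to_bytes(32, "big") + msg).digest(), "big")
```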

MuSig2 support in Lightning and hardware signers

MF: Is this going to be the biggest challenge? Everything seems to take years these days but assuming we get hardware signer support for MuSig2, the hardware signer would need to provide this randomness that it currently wouldn’t.

TR: It is hard to say. You need randomness for doing crypto at all. You need randomness to generate your seed. In the hardware wallet case you could say “Let’s roll some physical dice”. For normal computers I think you assume you have some way to generate real randomness. We have seen cases in practice where this broke down and people lost their coins. This is one risk. Another risk is that you have this statefulness. If you do this… you precompute the first round in particular, which is what they want to do in a Lightning channel. Then you also have to make sure that you never use that state twice. Whenever you have state you have some risk in practice that the state of your machine gets reset. For example if you run in a VM or you restore from a backup. Let’s say you do the first round of MuSig2, you have to keep the secret state in order to perform the second round, once you have a message. Now it would be a very bad idea to write that state to disk or even write it to a backup. What could happen is you complete the second round by sending out signatures and then at some point the user will restore the backup, and you have the risk that you perform the second round again now with different inputs. This is exactly what we want to avoid, then you expose your secret key. That’s why we say in the BIP that you shouldn’t serialize the state. Even in our implementation we don’t give the user a way to serialize the state because this could mean some user will write it to some backup or store it somewhere else or whatever. If you really crash after the first round no big deal, just do the first round again. That’s totally fine. This is probably what Lightning will do. If a node crashes then they have to reestablish a connection to the other Lightning node and then they can run the first round of MuSig again.
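
As a sketch of that design principle (a hypothetical wrapper, not the libsecp256k1 or libsecp256k1-zkp API): keep the secret nonce state only in memory, never expose a way to serialize it, and destroy it after one use so a restored backup cannot complete the second round a second time.

```python
class SigningSession:
    """Holds the secret nonce from round 1; it can be used at most once and is never persisted."""

    def __init__(self, secnonce: bytes):
        self._secnonce = secnonce          # never written to disk or to a backup

    def partial_sign(self, sign_fn, *args):
        if self._secnonce is None:
            raise RuntimeError("secret nonce already used; rerun round 1 instead")
        sig = sign_fn(self._secnonce, *args)
        self._secnonce = None              # make accidental reuse impossible
        return sig
```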

MF: You can tell me to mind my own business but have you had any discussions with hardware wallet devs? You’ve got a hardware wallet at Blockstream. Any discussions with those guys? How long would it take? The BIP has to be finalized, you probably want a lot of experimentation before you make that investment.

TR: It is a good question. We should talk to hardware wallet people more. So far we mostly are in touch with Lightning people. I have the impression they really want to use it so they talk about it very often. I don’t know why because I don’t know the details of the Lightning protocol, I think in Lightning they said the statefulness is ok because if you lose state and you crash you need to trust the other side that they won’t steal your entire channel. They already have this assumption in their threat model so the additional state introduced by MuSig2 doesn’t change that really.

MF: I have a few links. I think it is specifically the Lightning Labs, LND devs, Laolu etc. I don’t know if they are already using it. Loop is one of their products, maybe MuSig2 is already being used.

TR: Of course I hope my stuff is used in production but they need to be careful if we update the BIP and do some small changes. They have to be careful that it doesn’t break. At the moment we don’t care too much about backwards compatibility. It is not 0.1, it is not even a BIP, it is a BIP draft.

MF: It is merged but perhaps it is just an option, they aren’t actually using it in production.

AG: I know Laolu wrote an implementation of MuSig2 in the btcsuite golang backend but for sure it isn’t exactly final.

FROST

FROST paper: https://eprint.iacr.org/2020/852.pdf

FROST PR in libsecp256k1-zkp: https://github.com/ElementsProject/secp256k1-zkp/pull/138

MF: So FROST. I have a quote from you Tim, a couple of years ago. I don’t know if you still have this view. “It should be possible to marry MuSig2 and FROST papers together and get a t-of-n with our provable guarantees and our key aggregation method”. Do you still share that view now? My understanding of FROST is there’s a lot more complexity, there are a lot more design choices.

TR: I believe that is still true. Only a belief, I’m not sure what Elizabeth thinks. I think it is possible to create a scheme where you have a tree of key aggregations where some of those would be MuSig2, some of those would be FROST. Some of those would be multisig, some of those would be threshold aggregations. It is very hard to formalize and prove secure because then you get all the complexity from all the different schemes.

EC: I think it is worth pointing out that because FROST is a threshold signature scheme there’s a lot of added complexity with the key generation. When you have a multisig you have a nice, compact representation of the key representing the group. But as soon as you move to the threshold setting now you need some kind of distributed key generation which often has multiple rounds. There’s a trade-off there in terms of flexibility or having a nice key generation algorithm. These are different aspects.

TR: Maybe it is not related to that point of the discussion, but there is this RFC proposal for FROST. I think this wasn’t included in the reading list.

MF: Sorry, what specifically?

TR: There is a draft RFC for FROST.

EC: FROST is being standardized through NIST. There’s a CFRG draft which has gone through several iterations.

TR: IRTF?

EC: Yeah. It would be helpful to pop those links into the list also.

MF: Ok I’ll definitely do that. So it is a threshold scheme but obviously with the k-of-n, the k can equal n. You can still do multisig with FROST. It is just a question of whether you would do that.

TR: I think you wouldn’t. If k equals n then you only get the disadvantages of FROST and not the advantages.

EC: Exactly.

MF: A couple of things I wanted to confirm with you guys. Jesse Posner who is writing the FROST implementation in libsecp256k1-zkp, he said a couple of things, I wanted to confirm that this is true. I think he was a bit unsure when he said them. For FROST you can swap out public keys for other public keys without having an onchain transaction and you can also change from say a 2-of-3 to a 3-of-5 again without an onchain transaction? There is a lot of flexibility in terms of what you can do for moving the off-chain protocol around and making changes to it without needing an onchain transaction? This seems very flexible, certainly when you compare it to MuSig2. The only way you can change the multisig arrangement for MuSig2 would be to have an onchain transaction.

EC: I’m not sure if I understand that statement. If you are changing keys around at all then you need to rerun the distributed key generation protocol.

TR: I’m also not sure what Jesse is talking about here. I think in the pull request there were some discussions. What you certainly can do, you can downgrade n-of-n to k-of-n. This has been discussed. For example swapping out a key to a new key, maybe Elizabeth knows more, there are some key resharing schemes, I’m not really aware of those.

EC: Yeah. That’s what I was saying about doing the distributed key generation again. Say you run distributed key generation once and everybody has their secret shares of the overall group key. At least the DKG that is used in conjunction with FROST, the original one which is what we prove in our paper, it is based on Shamir’s Secret Sharing. There are some pretty standard ways to reshare using Shamir. That is possible. It is still non-trivial.

TR: You can do resharing but it is more like a forward security thing. You can’t reshare to an entirely new group of signers. You would still trust the old group of signers.

EC: You can transition from some group of signers to a new group of signers also. There are ways to reshare the keys. Or you can keep the same group and reshare a sharing of zero so your shares essentially stay the same, same group, or you can switch the group of signers. But it does involve performing the resharing. There’s a little bit that has to be done there.
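
A toy sketch of that resharing idea over a small prime field (plain Shamir arithmetic, not FROST or a production DKG): adding a fresh sharing of zero to everyone's share refreshes the shares while the shared secret, and hence the group key, stays the same.

```python
import secrets

Q = 1019  # toy prime field

def share(secret, t, n):
    """Split `secret` into n Shamir shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, Q) for k, c in enumerate(coeffs)) % Q for i in range(1, n + 1)}

def reconstruct(shares, t):
    pts = list(shares.items())[:t]
    total = 0
    for i, yi in pts:                      # Lagrange interpolation at x = 0
        num = den = 1
        for j, _ in pts:
            if j != i:
                num = (num * -j) % Q
                den = (den * (i - j)) % Q
        total = (total + yi * num * pow(den, Q - 2, Q)) % Q
    return total

secret = 777
old = share(secret, t=2, n=3)
zero = share(0, t=2, n=3)                          # a fresh sharing of zero
new = {i: (old[i] + zero[i]) % Q for i in old}     # refreshed shares
assert reconstruct(new, 2) == secret               # same secret, same group key
print("refreshed shares still reconstruct:", reconstruct(new, 2))
```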

MF: You would need to redo the protocol, which in Bitcoin or blockchain lingo surely means you’d need an onchain transaction? It is not the case that you can have some funds stuck behind an address, a public key, and keep that public key associated with the funds but rejig the protocol behind that public key.

EC: When you reshare you do have to send some values to the other parties so that everybody has their new shares.

TR: The old group will still have the signing keys. When you have a normal coin with a normal UTXO and single signer, of course what you can do is take your secret key and send it to another person, handover the keys. But this is not really secure because the other person needs to trust you, you still have the key. This would be the same in this FROST resharing. We can do the same transition from one group where you have a 2-of-3 sharing to totally different parties that again would have a 2-of-3. But they still would need to trust the old group not to spend the coins. In that sense it doesn’t give you a magic way to overcome this problem. Whenever you want to handover coins to a new person in a secure way you need to do a transaction.

EC: I think it is worth pointing out too that distributed key generation is dealt with as an entire field by itself, how to reshare things. When we say “FROST” it consists of these two components. There is the distributed key generation protocol and then the signing. What we tend to focus on is the signing because you could incorporate any number of distributed key generation protocols with the FROST signing mechanism. In terms of properties you want out of key resharing this is within the realm of distributed key generation research. Not really FROST research.

MF: And FROST also uses proof of possession, PedPop.

EC: Yeah. The original FROST paper proposed a distributed key generation mechanism which is basically the Pedersen DKG, this is a pretty standard DKG in the literature. With the addition of proofs of possession. That was why we proved our version of FROST together with the key generation protocol that they proposed. But it is basically just Pedersen plus proofs of possession.

TR: Maybe one thing to mention here, a step back and to reiterate, threshold schemes are in general more complex than multisignature schemes because not only the signing part is interactive but also the key generation part is interactive. In the MuSig paradigm you can just take some public keys and aggregate them. In threshold schemes, at least in those that work in the group we have in Bitcoin, the crypto in Bitcoin, even the key setup process is now interactive. This is of course again another layer of complexity. One interesting side effect here is that if we always during key generation need to talk to everybody, then we can certainly also send around proofs of possession. There is a requirement that we need to talk to everybody else during key generation. Then sending proofs of possession around is the natural thing to do. There is no drawback. We have the drawback already in a sense. That is why all these proposals use proof of possession and not MuSig key aggregation. This would probably be possible but you lose some efficiency.

MF: The Bitcoin implementation of FROST uses MuSig key aggregation.

TR: I think it does. Jesse said it is easier because then it is the same method, we could reuse some code. And it would be possible to do what he described there. Maybe you could start with a 3-of-3 setup and then downgrade it to a 2-of-3, something like this. Then it would make sense to start with the MuSig setup. When you start with the n-of-n thing you probably want to have the MuSig method. This was still an open point in this PR. He came up with this use case, I wasn’t really convinced that people want to do this in practice. Others maybe weren’t convinced either. I think we haven’t decided yet. At least in the Bitcoin world when you talk about this PR, it is much more work in progress than MuSig2. With MuSig2 we already have the BIP draft and it is almost finalized. Here we don’t even have a specification, we just have this draft PR which tries to implement it but without an explicit specification. It is worth mentioning that outside Bitcoin it is maybe the other way round. There is this IRTF proposal for FROST and there is nothing for multisignature.

MF: So why is that? Given that MuSig is simpler. It is just that Elizabeth has done better speaking to the IRTF than the Bitcoin guys.

TR: I think there are a few things to say. First of all yes MuSig is simpler than FROST. Multisignature is simpler than threshold signature. Mostly because of the thing we just mentioned, key generation is non-interactive in multisignatures and interactive in threshold signatures. The second thing is use cases. In cryptocurrencies multisignatures have some use cases because in cryptocurrencies you always have an escape hatch when the other parties go offline. For example in a 2-of-2 Lightning channel you always need to have some timelock that says “If the other party disappears I can get the money back after some time”. If the other party disappears we can’t just create a new key because there’s money stored on that key and the money would be gone. In other applications, and the IRTF is probably not interested in cryptocurrencies, threshold signatures make a lot of sense. They also make sense in a lot of blockchain applications, as I said with blockchains we have these special cases where multisignatures make sense. But in general if you ignore cryptocurrencies for a moment then threshold makes more sense. You can scale it better. It is pretty useless to have a 10-of-10. What is the goal of all these schemes? To distribute trust. But with multisignatures you increase security but you decrease safety. Two things can happen, either some people get malicious or steal your keys, that is great with 10-of-10, 9 people can get compromised, their keys stolen, and the scheme is still secure. But on the other hand you always increase the probability that you lose the key material. If you have a 10-of-10 and you lose one of the 10 pieces the key is gone. You increase security against attacks but you decrease security against losing your keys. The only way out of this dilemma is threshold signatures where you don’t need to do 10-of-10 but you could do 7-of-10 or something like this. Then you can trade off this security against attackers versus security against losing stuff. This is something you can really only do with threshold keys. This is why in general people are more interested in threshold things. On top of this in Bitcoin I mentioned contracts, for example Lightning or other complicated things, we always have an escape hatch in these contracts. Even in other setups, for example at home when you store some larger amount of coins, maybe you want to have a 2-of-3 between 3 hardware wallets or 2 hardware wallets and your machine. This is a setup that I think a lot of people use. Even there you want threshold because if you lose one hardware wallet you don’t want to lose all your coins. But using Taproot there are some ways you could do this even with multisignatures. For example say you have 3 signers, 2 hardware wallets and a backup wallet somewhere else. You could have a Taproot output that is just a 2-of-2 of the two main signing wallets and you have some script paths that cover the other possibilities. You have two primary devices and as long as you have access to those you use this normal key spend. Whenever you need to resort to the backup device, then you need to open the Taproot commitment and show the script path, use the other MuSig combinations. You could implement this 2-of-3 via 3 combinations of 2-of-2s. In the end this may be more convenient in practice than going through the hassle of running this interactive distributed key generation that you would need for FROST. I think this is another reason why multisignatures are interesting in Bitcoin. But of course this doesn’t scale. 
If you want to do 20-of-60 or 20-of-40, you probably don’t want to do this at home, you can’t enumerate all the possibilities here. It would be very inefficient.
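
A sketch of that layout with placeholder signer names (not a real wallet or output descriptor): the two primary devices form the Taproot key path as a 2-of-2 MuSig2 key, and every other 2-of-2 combination gets its own script path, which covers 2-of-3 with three combinations but clearly cannot scale to something like 20-of-60.

```python
from itertools import combinations

signers = ["primary_1", "primary_2", "backup"]
primary = ("primary_1", "primary_2")

key_path = primary                                           # spent with one aggregated MuSig2 signature
script_paths = [c for c in combinations(signers, 2) if c != primary]

print("key path (2-of-2 MuSig):", key_path)
for leaf in script_paths:
    print("script path (2-of-2 MuSig):", leaf)               # only revealed when the backup is needed
print("leaves needed for 2-of-3:", 1 + len(script_paths))    # 3 combinations in total
```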

MF: I guess finalizing the BIP, it is a bit closer to the use case than an IRTF standard. IRTF would be a “We have this paper. Academics are happy with the proof.” But it is not spec’ing it for a particular real world use case. Would that be correct to say?

TR: I think it is spec’ing it. It is a specification. Maybe Elizabeth knows more about the use cases that they have in mind.

MF: Are there particular use cases that this IRTF standard is being geared towards? With Bitcoin we have a couple of Bitcoin specific requirements that might not apply to other blockchains or other use cases. How close to a use case is this IRTF standard for FROST? I think Elizabeth has dropped off the call. And of course you can do nested MuSig within FROST and nested FROST within MuSig. But from a Bitcoin specific perspective it makes sense to finalize MuSig first and then work out how one might nest into the other or vice versa.

TR: I really need to stress that we believe we can do this but we really don’t know. I could write down a scheme but I don’t know if it is secure. We’d need to prove it was secure and nobody has done that so far. I wouldn’t recommend using nested MuSig at the moment.

libsecp256k1 scope

MF: Let’s finish off with libsecp256k1. I don’t know which is the priority. Finalizing the MuSig2 BIP or getting an API release out for libsecp256k1. What’s your current thinking, obviously there are other contributors, other reviewers, other maintainers, on how close we are to having a libsecp256k1 release and a formal API?

TR: You mean MuSig2 in libsecp256k1?

MF: I’m assuming MuSig2 won’t be in that first release.

TR: No, I really hope we have it in that first release.

MF: This x-only pubkey issue needs to be resolved before there would be a libsecp256k1 release. And any other things that need to be finalized in terms of that MuSig2 BIP.

TR: I think the roadmap for MuSig2, at the moment we’re making changes to the BIP, it is mostly there. There are some final issues, mostly related to the x-only things where we need to do some changes, we also need some writing changes. They can always be done later. Once we are really confident about the algorithms that we have we can update the implementation in libsecp256k1-zkp or do it on the fly even. I need to say here that the current implementation based on an earlier version of the BIP draft is not in libsecp256k1 but in libsecp256k1-zkp which is a fork of this library maintained by Blockstream where we add a few pieces of functionality mostly relevant to our Liquid sidechain. But we also use it as a testbed in a sense for new development. That is why we added MuSig2 first there. Once we have the BIP finalized we will first update the libsecp256k1-zkp implementation to match the finalized BIP. When I say “finalized BIP” at the moment it is a draft, it is not even proposed. We need to get it a BIP number, put it out for review. Maybe there are more comments and we need to update things. The plan is then to update the implementation in libsecp256k1-zkp and once that is stable I really want to see this in libsecp256k1. Recently there has been discussion about the scope of the fork and the library and so on. I tend to believe that things that are useful to the wider Bitcoin ecosystem should be in libsecp256k1. I believe MuSig2 is useful to the wider ecosystem so we should add it there. At the moment it is maybe nice in libsecp256k1-zkp because it is easier for us to get things merged and easier to play around with it in an experimental state. But once that code is stable we should add it to libsecp256k1.

MF: The libsecp256k1-zkp repo, other people are using it. Is it designed to be a repo that other people use? I suppose if you’ve marked as experimental the things that you think are experimental it is kind of a user-beware type thing. Would there be a release for libsecp256k1-zkp as well? That’s predominantly for Elements, Liquid but if you use it, it is kind of a user, developer beware type thing.

TR: That’s a good question. I think at the moment it is a little bit of both and that is why I started this issue and this discussion. The discussion was there before in more informal channels. If you ask me personally what I’d like to see is in libsecp256k1 we have the things for the Bitcoin ecosystem and in libsecp256k1-zkp we have things for the Elements ecosystem and the Liquid ecosystem. If we have that kind of separation then the separation would be clearer. On a possible release of libsecp256k1-zkp, so far we haven’t talked about this at all. Maybe we wouldn’t need a release because it is used mostly by the Elements and the Liquid ecosystem that we (Blockstream) mostly control. Now that I think about it, that is not even true. There are hardware wallets that support Liquid, they use libsecp256k1-zkp. I guess whenever you have a library, releases make a lot of sense. The truth is so far we were too lazy to put out a release for libsecp256k1, we should do this now. Maybe that is a good starting point to think about a possible release for the libsecp256k1-zkp library too. Independently of whether we move MuSig2 to this other library or not.

MF: It is hard, there are a lot of grey areas here. You don’t know where to draw the line. Which features should be in each library if any. Whether a feature should be in libsecp256k1 or libsecp256k1-zkp or neither. And also how much time the maintainers including yourself can dedicate to reviewing it.

TR: Upstream, the discussion in libsecp256k1 is much more interesting in that respect. There we really have a community project. We want to have the things there that are relevant for Bitcoin I think. But of course our time is limited. In the libsecp256k1-zkp fork, it is a repo controlled by Blockstream, we could do whatever we want with this repo. That’s why we don’t care that much what we put in there but maybe we should care a little bit more and have a discussion on the upstream repo. Then it would be more meaningful for everybody.

Reusing code for future proposals (e.g. CISA)

MF: There was some discussion here. You’ve discussed some cross input signature aggregation type proposals, you were on a podcast recently talking about half signature aggregation. There are some things that can be reused. You are having to think “Not only is this code potentially being used in the use case that we’re coding it up for but it might be used in future”. It is not consensus so it is not cemented on the blockchain but I suppose you are trying to think ahead in making sure it is modular and a potential module could do the same function for both use cases, even long term things.

TR: It is always hard to predict the future when you are writing code. Maybe not yet at an implementation level but at least at the level of writing papers we are thinking about this. You mentioned cross input signature aggregation, it is a different beast, it is not multisignatures, from a theory point of view it is a different thing. It turns out you could use something very similar to MuSig2, probably, we need to write a paper about it and prove it secure before we can be sure, to do cross input signature aggregation. But we are not sure on whether this is the best approach even. If I remember correctly our most recent thinking about cross input signature aggregation is if we want to propose something like this then the resulting crypto scheme would look more like the original Bellare Neven than MuSig. This is a little bit easier to prove secure probably and it is more efficient. In that setting Bellare Neven would be better or a variant maybe of Bellare Neven. If you think about cross input signature aggregation, you have multiple UTXOs, you are spending them altogether or you have different parties controlling them even, they run an interactive protocol and want to spend them altogether in a single transaction with a single signature. If you think about it now you have all the public keys on the chain. Whereas in MuSig you don’t have this. In MuSig you have a Taproot output which is an aggregate key, aggregated from multiple keys, but you don’t see the individual keys on the blockchain. But here you see them on the blockchain. This is exactly what makes Bellare Neven work in this setting. As we discussed earlier, with Bellare Neven all the individual keys go into the verification algorithm so you need them. But for cross input signature aggregation you have them so you could use Bellare Neven. You probably wouldn’t use it exactly, do some minor tweaks to it but at least it is a candidate there. On this issue, I didn’t really read this post, I think here Russell (O’Connor) is assuming that we would use something like MuSig2 for cross input signature aggregation. But this is over a year ago. As I said our current thinking is that we would rather use something like Bellare Neven. Jonas mentioned it here, somewhere in this discussion.

MF: It is not critically important. It is just a nice to have type thing. This long term forward thinking, what can be reused for what. I suppose if you are doing a BIP then you don’t exactly want to go through the hassle of changing the BIP.

TR: When you are writing code you can think about future applications. It is probably good to think about but it might turn out in a month you want to do it differently. That’s how it works.

ROAST

ROAST paper: https://eprint.iacr.org/2022/550.pdf

ROAST blog post: https://medium.com/blockstream/roast-robust-asynchronous-schnorr-threshold-signatures-ddda55a07d1b

Tim Ruffing presentation on ROAST: https://btctranscripts.com/misc/2022-07-14-tim-ruffing-roast/

MF: I think we’re close to wrapping up then. Perhaps you can explain ROAST. I think Elizabeth briefly discussed it in her presentation too recently at Zcon. So ROAST is a protocol, a wrapper on top of FROST. What specifically concerns you about FROST that you want something on top of it, so it is not susceptible to certain things?

TR: Let’s say you have some threshold setup, I think the example I use mostly here is 11-of-15. You have an 11-of-15 setup and you want to create a signature so you need 11 people. The problem with FROST is you need exactly 11 people. You really need to commit on the exact subset of 11 people that you want to use. If it turns out that one of these signers in this set is offline or doesn’t want to sign then you need to reset the protocol from scratch. You need to pick another subset of 11 signers, kicking out maybe that one signer who was offline and replacing it with another one. But then it could fail again and fail again and so on.

MF: You have to decide upfront when you start the FROST protocol which 11 signers are going to be signing. If one doesn’t sign then you fail and you have to start again.

TR: Right. ROAST gives you a clever way to restart in a sense so you don’t need too many sessions in the end. Maybe I have to start a few protocol runs of FROST but I do it in a clever way so I don’t have to start too many of them. Another nice feature is that the resulting thing, what ROAST gives you is an asynchronous protocol in a sense. You can always make progress, you never need to have timeouts. For example in the simple approach that I just described here, you start with 11, maybe 10 of them reply and you have some 11th signer that never replies. It is not clear, maybe the network is super slow and at some point it will reply or it might never reply. If you do this naively at some point you would need to raise a timeout. Then maybe you’re wrong because you raise a timeout and a second later the message arrives. You would have aborted but actually the guy was there. ROAST avoids this because it never aborts sessions, it just starts new ones. It never needs to abort existing sessions. It works by starting one session and at some point after a few people reply you already start a second session. Either the first session can complete or the second session. If a few people respond in the second session you may even start a third session and so on. You leave them all open so you never miss a message. You never kick out somebody because he seems to be offline. You never need some kind of timeout. This is useful in real networks because timeouts are difficult. You never know what the right value for a timeout is.
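
A rough sketch of that coordinator bookkeeping, with the FROST session logic and the network left as placeholder callables (see the ROAST paper for the actual algorithm and its robustness argument): sessions are never aborted, and a new one is opened whenever t signers have replied and are free again.

```python
def roast_coordinator(all_signers, t, new_frost_session, next_reply):
    """new_frost_session(members) and next_reply() are placeholders for FROST and the network."""
    first_pick = list(all_signers)[:t]                 # some initial choice of t signers
    open_sessions = [new_frost_session(first_pick)]    # kept open forever; never aborted
    responsive = []                                    # signers that replied and are free for a newer session
    while True:
        signer, session, partial_sig = next_reply()    # blocks; nobody is ever timed out
        session.add(signer, partial_sig)
        if session.is_complete():                      # all t members of this session replied
            return session.aggregate_signature()
        responsive.append(signer)
        if len(responsive) == t:                       # enough proven-responsive signers: open another session
            open_sessions.append(new_frost_session(responsive))
            responsive = []
```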

MF: The parties of the 11-of-15 on Liquid are expected to be online all the time? I suppose it is just a hard decision, if you’ve got 15 parties why choose a specific 11 of them? Why would you ditch 4 of them? There’s no reason to if they are all equal parties.

TR: All ROAST does is start FROST sessions. It needs to start a first session for example and for this first session you need to make some choice. You need to pick some 11 with which you make the first attempt. But yeah you could just randomize it. Take a new random selection every time for example or always take the ones that responded quickly in the past. That is another meaningful strategy. I’d guess you would want to have some randomness involved. If you always pick the same 11 and they always work you would never notice if someone failed. Maybe one of those 4 that you don’t pick goes offline and you wouldn’t even notice. I guess you would notice because the TCP connection drops or something like this. But you want to have some randomization to make sure that everyone is picked from time to time to see if things are working. This is mostly just an implementation detail I think. If people are interested in ROAST specifically I gave a talk about it which is also on YouTube. Maybe you can add it to the reading list.

MF: We were discussing before about changing it from an 11-of-15 to an 11-of-16 or whatever. If that is going to change regularly or semi regularly I suppose ideally if you could avoid that onchain transaction you would but it sounds like you can’t do that. You would need an onchain transaction every time you changed the setup for Liquid.

TR: But I don’t think that is a big deal.

MF: It hasn’t changed for years? Is it still the same 15 that it has always been? Or have they been swapped in and out?

TR: I don’t think they have been swapped in and out. But it is not a big problem to make onchain transactions because they do onchain transactions all the time, they do pay outs and peg outs from the Liquid network. Doing an onchain transaction isn’t a big deal. Even if you wouldn’t do it constantly how often do you add a member to this 15? Maybe once a month or so.

MF: I don’t know, I have no idea how it works. I know it is an 11-of-15 but I have no idea whether parties have been swapped in or out, it doesn’t seem to have changed from the original 11-of-15.

TR: We are adding more members to the federation but at the moment we are not adding more members to this set of signers because we have committed on this 11-of-15 thing because of some specific reason, how multisigs currently work in Bitcoin. If you go beyond 15 it will be more inefficient. ROAST in the long term would maybe give us a way to scale that number without losing efficiency onchain. Even in that setting if it was easy to add people we wouldn’t add a new member every day. Even if you do it every day one transaction per day, that’s not that expensive.

MF: Not currently. We don’t know what the fees will be like in future. Back to 2017 fee levels.

TR: At least at the moment it is ok.

MF: We don’t care at the moment. Is it currently randomized in terms of choosing the 11-of-15 in the CHECKMULTISIG? How do the 11 decide on who is going to sign?

TR: That’s a good question. I’m not sure. I think at the moment we have a rotating leader and the leader will ask everybody else to send a signature. We just take the first 11 signatures that we get and put them on the transaction. It uses this naive Bitcoin threshold sig, this is exactly where it is better than FROST in a sense. With this approach you don’t need to precommit on which 11 you want to use. You can just ask everybody for their partial signature and you wait until you have 11. But you can ask every one of the 15 members and take the first 11. Maybe we wait for a second longer and then randomize it, I don’t know. I bet we take the first 11 but I would need to read the code or ask people who know more about Liquid.

MF: I’m assuming it still uses CHECKMULTISIG rather than CHECKSIGADD. The benefit of using CHECKSIGADD is probably not worth the hassle of changing it.

TR: I’d need to ask the Liquid engineers.

Q&A

Q - Back in MuSig land, in the BIP there is a section called verify failed test cases and it provides some signatures. One of them exceeds the group size and it fails. One of them is the wrong signer and so it fails. And one of them is the wrong signature… why is that an important test to make?

A - In the test vector file, just yesterday we converted that to a JSON file. The reason is we really want to have a specific property in signatures. There are different security notions of signatures in theory. What we usually expect from a signature scheme, even without multisigs, just from a single signer, normal signature scheme, is some unforgeability notion. This means you can only sign messages if you have the secret key. There are various forms of unforgeability and what I just described is weak unforgeability. There’s also strong unforgeability, a little stronger as the name says. Let’s say I give you a signature on a specific message. You can’t even come up with a second signature, a different signature, on this specific message. This is strong unforgeability. If you didn’t check for the negation then you could easily do this. I send you a signature, you negate it and you have a different signature. It is just negated but it is different. The reason why we want to have this strong unforgeability is mostly just because we can have it. It is there. Have you ever heard of malleability issues in Bitcoin? This is specifically also related to this because this is one source of malleability. In SegWit signature data became witness data and from SegWit on it doesn’t influence the transaction IDs anymore. But before SegWit and with ECDSA this was one source of malleability. With ECDSA you could do exactly this. An ECDSA signature is like a Schnorr signature, it is two components. You could just negate one of them.

Q - For the negation is that multiplying the whole point by negative one (-1) for example or is that mirroring it on the other side of an axis. What does negation actually look like when it is performed?

A - I’m not sure what exactly is negated in this test case but if you look at the Schnorr signature it has two components. It is a group element, a point on the curve, and it is a scalar. The scalar is in a sense an exponent, something like a secret key. You could negate either. BIP340 makes sure that no matter what you negate it will always be an invalid signature. But in general it is possible to negate the point or it is possible to negate the scalar. Look into the old ECDSA specification because this had this problem and you can see how the negation was ignored there. When you have a negation, you have a signature, you negate one of the components and you get another valid signature.
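
A sketch of the ECDSA case being referred to, using a minimal textbook implementation over secp256k1 (illustrative only, nothing like production code): negating the s component negates the point whose x coordinate the verifier checks, and negation does not change an x coordinate, so (r, n - s) verifies just as well as (r, s).

```python
import hashlib, secrets

P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(A, B):                      # textbook affine point addition on secp256k1
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % P == 0: return None
    if A == B: m = 3 * x1 * x1 * pow(2 * y1, P - 2, P) % P
    else:      m = (y2 - y1) * pow(x2 - x1, P - 2, P) % P
    x3 = (m * m - x1 - x2) % P
    return x3, (m * (x1 - x3) - y1) % P

def mul(k, A):                      # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, A)
        A = add(A, A); k >>= 1
    return R

def sign(d, z):
    while True:
        k = secrets.randbelow(N - 1) + 1
        r = mul(k, G)[0] % N
        s = pow(k, N - 2, N) * (z + r * d) % N
        if r and s: return r, s

def verify(Q, z, sig):
    r, s = sig
    w = pow(s, N - 2, N)
    X = add(mul(z * w % N, G), mul(r * w % N, Q))
    return X is not None and X[0] % N == r

d = secrets.randbelow(N - 1) + 1
Q = mul(d, G)
z = int.from_bytes(hashlib.sha256(b"hello").digest(), "big") % N
r, s = sign(d, z)
print(verify(Q, z, (r, s)), verify(Q, z, (r, N - s)))   # True True: two distinct valid signatures
```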

MF: I went through the Schnorr test vectors a while back and Pieter said negating means taking the complement with the group order n. “The signature won’t be valid if you verify it using the negated message rather than the actual message used in the signature.”

TR: This is negating the scalar, what Pieter is talking about.

MF: It is the same thing that is being asked about?

TR: It is the same thing. There are two things you could potentially negate. A Schnorr signature has two components R and s and you could negate R and you could negate s. I guess s has been negated but the comment doesn’t say.

MF: Ok thank you very much Tim, thank you Elizabeth for joining and thank you everyone else for joining.

TR: Thanks for inviting me.

\ No newline at end of file diff --git a/london-bitcoin-devs/index.xml b/london-bitcoin-devs/index.xml index 5f31dd3629..89c737d41b 100644 --- a/london-bitcoin-devs/index.xml +++ b/london-bitcoin-devs/index.xml @@ -57,8 +57,8 @@ Slides: https://www.dropbox.com/s/cyh97jv81hrz8tf/alex-bosworth-submarine-swaps- https://twitter.com/kanzure/status/1151158631527849985 Intro Thanks for inviting me. I’m going to talk about submarine swaps and Lightning Loop which is something that I work on at Lightning Labs. Bitcoin - One Currency, Multiple Settlement Networks Something that is pretty important to understand about Lightning is that it is a flow network so once you set up your channels and you’re in a network of channels with other people, the balances can only shift along flow properties of the network.
Hardware Wallets (History of Attacks)https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Wed, 01 May 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Slides: https://www.dropbox.com/s/64s3mtmt3efijxo/Stepan%20Snigirev%20on%20hardware%20wallet%20attacks.pdf -Pieter Wuille on anti covert channel signing techniques: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html -Introduction This talk is the second in the series after my previous talk in London a few months ago at the Advancing Bitcoin conference. There I was talking mostly about general attacks on hardware, more from the theoretical perspective. I didn’t say anything bad hardware wallets that exist and I didn’t specify anything on the bad side. Here I feel a bit more free as it is a meetup not a conference so I can say bad things about everyone.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html +Pieter Wuille on anti covert channel signing techniques: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html +Introduction This talk is the second in the series after my previous talk in London a few months ago at the Advancing Bitcoin conference. There I was talking mostly about general attacks on hardware, more from the theoretical perspective. I didn’t say anything bad hardware wallets that exist and I didn’t specify anything on the bad side. Here I feel a bit more free as it is a meetup not a conference so I can say bad things about everyone.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html Draft BIP: https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki Intro I am going to talk about BetterHash this evening. If you are coming to Advancing Bitcoin don’t worry I am talking about something completely different. You are not going to get duplicated content. That talk should be interesting as well though admittedly I haven’t written it yet. We’ll find out. 
BetterHash is a project that unfortunately has some naming collisions so it might get renamed at some point, I’ve been working on for about a year to redo mining and the way it works in Bitcoin.Bitcoin Core and hardware walletshttps://btctranscripts.com/london-bitcoin-devs/2018-09-19-sjors-provoost-core-hardware-wallet/Wed, 19 Sep 2018 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2018-09-19-sjors-provoost-core-hardware-wallet/Topic: Using Bitcoin Core with hardware wallets Location: London Bitcoin Devs diff --git a/misc/2018-01-24-rusty-russell-future-bitcoin-tech-directions/index.html b/misc/2018-01-24-rusty-russell-future-bitcoin-tech-directions/index.html index e746d526eb..b7cd38957f 100644 --- a/misc/2018-01-24-rusty-russell-future-bitcoin-tech-directions/index.html +++ b/misc/2018-01-24-rusty-russell-future-bitcoin-tech-directions/index.html @@ -9,4 +9,4 @@ < Misc < Future Bitcoin Tech Directions

Future Bitcoin Tech Directions

Speakers: Rusty Russell

Date: January 24, 2018

Transcript By: Bryan Bishop

Media: -https://www.youtube.com/watch?v=VABppufTdpA

Future technological directions in bitcoin: An unreliable guide

notes: https://docs.google.com/document/d/1iNzJtJRq9-0vdILQAe9Bcj7r1a7xYVf8CSY0_oFrH9c/edit or http://diyhpl.us/~bryan/papers2/bitcoin/Future%20technological%20directions%20in%20bitcoin%20-%202018-01-24.pdf

https://twitter.com/kanzure/status/957318453181939712

Introduction

Thank you very much for that introduction. Alright, well polished machine right there. Let’s try that again. Thank you for that introduction. My talk today is all about this. I’m not going to be talking about this. If you see me reaching for the red hat, please stop me. Before I begin, I want to apologize in advance: this talk is a summary of things coming up in bitcoin that some people are working on, which means first I need to apologize to the people whose work I am about to compress into about 85 second segments each, because I am going to gloss over a lot of things. I had to pick what I thought was most interesting. Also I apologize to you, my audience, because this is going to feel like 23 lightning talks back-to-back in quick succession. I am going to give you a vague taste of things, and you’ll have to dig in later. Hopefully I can convey some of the excitement as I go.

Talk overview

I believe that everyone talking about cryptocurrencies should disclose their holdings. In my case, bitcoin. Fairly straight-forward. Okay, moving on. First we’re going to go through a bitcoin keyword primer. This is basically not enough bitcoin to do anything useful but it does mean you know a few keywords and you can sound like you know what you’re talking about. Then we’re going to talk about two kinds of proposed improvements. The first being consensus improvements, changes to the blockchain itself (transactions and blocks), in particular we’re talking about backwards-compatible changes. And then non-consensus changes, such as changes in how peers on the network talk with each other, which does not involve changing the blockchain format. Today, for a lack of time, I am not going to be talking about anything built on top of bitcoin, layer 2. I am not going to be talking about hard-forks (incompatible changes to bitcoin).

Bitcoin keyword primer

https://bitcoin.org/en/vocabulary

https://bitcoin.org/en/developer-documentation

So here comes a very quick bitcoin keyword primer. This is a bitcoin transaction. It contains outputs (TXOs). Each output has an amount and a script (scriptPubKey). That output script is a very simple stack-based scripting language with no loops that validates who can spend this output. Here’s another transaction spending it. You can see that this transaction has an input, and the input has an input script (scriptSig) spending the previous output. In this case, it’s a key and the signature of that key. When you evaluate that, and then the output script, it leaves “true” on the stack, meaning it’s valid, and therefore that the transaction is valid and authorized.
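
A toy sketch of that evaluation (a fake CHECKSIG and a pay-to-pubkey style script, nothing like real consensus code): run the input script and then the output script on one stack and accept the spend if "true" is left on top.

```python
def checksig(sig, key):
    return sig == "valid_sig_for_" + key            # placeholder, not real signature checking

def run(input_script, output_script):
    stack = []
    for item in input_script + output_script:
        if item == "OP_CHECKSIG":
            key, sig = stack.pop(), stack.pop()     # pop the key, then the signature beneath it
            stack.append(checksig(sig, key))
        else:
            stack.append(item)                      # everything else is a data push
    return bool(stack) and stack[-1] is True

# scriptSig pushes the signature; scriptPubKey pushes the key and checks the signature.
print(run(["valid_sig_for_alice"], ["alice", "OP_CHECKSIG"]))   # True: spend is authorized
print(run(["bad_sig"], ["alice", "OP_CHECKSIG"]))               # False
```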

The other term that we come across is “txid” which is the hash of the entire transaction. Very simple.

Transactions are built up into chains like this. Of the 19 outputs on this diagram, 6 of them are unspent. They are considered members of the unspent transaction output set (UTXO set). And that’s important because to validate a new transaction, the transaction must spend a member of the UTXO set. That’s the set of things that you need to check against, to spend a new transaction.

Bitcoin uses a blockchain. A bitcoin block contains the hash of the previous block, causing it to be a chain. And it also has a set of transactions in the block. Now the way that transactions are put into the bitcoin block is kind of interesting. The txid is a hash of the transaction itself. We take pairs of txids, hash those together, and we build up a tree. We put the root of the tree in the bitcoin block header. That kind of hash tree is called a merkle tree. The cool thing about a merkle tree is that I can prove to you that this transaction is in the merkle tree. If you have the block header then I can prove that the transaction is in the block. But because all I have to give you is the txid and the hash of the other neighbor at that level of the tree, you can combine the hashes yourself, and check that it matches the merkle root. If it does, then you know that the transaction is in the block.
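Here is a small sketch of both operations, building the root and checking an inclusion proof. It follows bitcoin’s pair-and-duplicate-the-last rule but ignores the internal byte-order conventions:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    """Bitcoin-style merkle root: hash pairs of nodes, duplicating the last one on odd levels."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(txid, proof, root):
    """proof is a list of (neighbour_hash, neighbour_is_on_the_right) pairs going up the tree."""
    node = txid
    for neighbour, right in proof:
        node = h(node + neighbour) if right else h(neighbour + node)
    return node == root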

That’s all that I am going to tell you about bitcoin keywords.

Soft forks and consensus changes

https://www.youtube.com/watch?v=VABppufTdpA&t=5m26s

Let’s talk about a first set of improvements that have been proposed and researched. These are called soft-forks. A soft-fork is a backwards-compatible change. Old nodes still work. You can make things that are currently legal become illegal, but you can’t make things that are currently illegal become legal because that would break compatibility.

Segwit

https://bitcoincore.org/en/2016/06/24/segwit-next-steps/

https://bitcoincore.org/en/2016/01/26/segwit-benefits/

https://bitcoincore.org/en/segwit_wallet_dev/

https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki

https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/segregated-witness-and-its-impact-on-scalability/

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/segwit-lessons-learned/

As a warm-up, I am going to talk about an upgrade that was activated in August 2017 called segregated witness (segwit). And this addressed the problem of script upgradeability, UTXO bloat, and unwanted malleability. What really is segwit? Well, it’s a new output script witness type. And it’s literally just a version and then a hash. For version 0, it’s either a 20 byte hash (hash of a public key), or a 32 byte hash (hash of a script). The “input” which normally spends this is empty. The real input data goes into something called the witness. And the witness contains the input stack and the key or script which we promised in the output. You can check whether it’s the right one by hashing to check whether the hashes match up. Okay. Here’s an old style transaction, remember the txid was the hash of the whole thing. If it was a segwit transaction, the input script would be empty and this new witness thing would contain the pubkey and the signature for example. Now, as you’ve noticed, that is no longer hashed into the txid, it has been segregated out of the txid. This turns out to be really important because there’s a long list of different input scripts that can satisfy those output conditions. You can sign it again, and there’s a way for third parties to change signatures and make them still work but completely change the txid; this is called transaction malleability. And malleate is an obsolete word, it means to hit something with a hammer, so I imagine someone out there whacking transactions with a hammer to change their txid. Segwit helps several things. We see that the version byte helps with upgradeability– everything above version 0, like version 1, is given a free pass at the moment. It helps layer 2, because if you’re building things on top of bitcoin, you can now rely on the fact that txid values don’t change. Someone can’t change the chain of transactions by malleating one of the earlier transactions in your pending transaction tree. It also helps hardware wallets because of a checksig change. But importantly, it helps with UTXO bloat incentives and throughput. Without segwit, it’s actually cheaper to create a new output than it is to spend an existing one. This creates incentives to create more UTXOs, which everyone has to remember. The witness is outside the old rules– it’s a separate thing, and those bytes are not counted in the same way. The bitcoin rules say that you cannot have more than a million bytes in a block. The witnesses, which do not fall under that rule, count as a quarter of a byte each. This means that you can fit more into a block, and in particular it means that it’s now almost as cheap to spend an output as it is to create a new one, so the incentives are more aligned. Also, 60% of the block is usually the inputs, and segwit is a significant capacity increase in bitcoin: instead of a megabyte, we end up at in theory about 4 MB but in practice probably about 2 MB to 2.3 MB blocks. That’s in the past, now.
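To make that discount concrete, here is the post-segwit weight arithmetic in a few lines. The example byte counts are made up, but the 4x/1x weighting and the “virtual size” ceiling division are the actual accounting:

```python
# Weight accounting after segwit: witness bytes count a quarter as much as other bytes.
# vsize = ceil(weight / 4), and the consensus limit is 4,000,000 weight units per block.
def tx_weight(non_witness_bytes, witness_bytes):
    return non_witness_bytes * 4 + witness_bytes

def tx_vsize(non_witness_bytes, witness_bytes):
    return -(-tx_weight(non_witness_bytes, witness_bytes) // 4)   # ceiling division

# e.g. a 200-byte transaction whose witness makes up 107 of those bytes:
print(tx_weight(93, 107))   # 479 weight units
print(tx_vsize(93, 107))    # 120 vbytes, versus 200 "raw" bytes
```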

Segwit addresses: bech32 (bip173)

https://www.youtube.com/watch?v=VABppufTdpA&t=9m10s

https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses/

http://bitcoin.sipa.be/bech32/demo/demo.html

https://github.com/sipa/ezbase32/blob/master/dist32.cpp#L13L25

The first of the proposals I want to talk about for the future addresses this new problem: in segwit, we have this new output format but we have no address format. The original bitcoin addresses are base58, a nice round number, with a 32-bit sha256 checksum. It’s just an example address on screen, don’t send money there. The new addresses are base32 with a 30-bit BCH checksum code. According to pieterwuillefacts.com, this is apparently the name of Pieter Wuille’s dog. So here’s wnullywullewuelewuulywulewuluhwuelwu… Anyway, the code is guaranteed to detect up to 4 errors. And for “similar” letter substitutions it detects up to 5 errors. And if you have other errors, it has a 1 in a billion chance of falsely okaying the result. This is a much stronger checksum than the old 32-bit checksum that we were using. With bech32, readability is improved, bech32 is case-insensitive, it fits better on QR codes, and it has error correction and error detection. We’re seeing this being rolled out now. I think we’re going to see this coming out very soon.
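For the curious, the checksum at the heart of bech32 is small enough to show in full. This is a condensed sketch following the bip173 checksum definition; the rest of the address encoding (converting the witness program into 5-bit groups, the character set, and so on) is omitted:

```python
def bech32_polymod(values):
    GEN = [0x3b6a57b2, 0x26508e6b, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((top >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    # the human-readable part ("bc" for mainnet) is mixed into the checksum too
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32_verify_checksum(hrp, data):
    # data is the 5-bit payload including the 6 trailing checksum characters
    return bech32_polymod(bech32_hrp_expand(hrp) + data) == 1
```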

MAST (bip114 vs bip116 + bip117)

https://github.com/bitcoin/bips/blob/master/bip-0114.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0117.mediawiki

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees/

https://bitcointechtalk.com/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast-33fdf2da5e2f

This solves the problem of really large scripts. MAST stands for merkleized abstract syntax trees. When bitcoiners talk about this, what they are really talking about is turning a script into a merkle tree of scripts. So then in order to spend it, you provide one of those scripts, and one of the hash proofs and the neighbors to prove that it’s in the tree. Before, we might have had a complex script like this where I can spend it after 144 blocks or you can spend it with your key. And currently using a segwit output script it would be 0 HASH(script). That would be the output script.

In bip114, it would break those two conditions apart and create a merkle tree out of them, a merkle tree with only two leaves. And then you would commit to it using the version 1 segwit script they are proposing, which is “here’s my merkle root”. You would provide the merkle root in the output script originally, and to spend it you would provide the merkle inclusion proof, and that has a version field as well. The actual implementation is slightly more complex than this, but you get the general idea.

But bip114 is not the only game in town. There’s a separate pair of proposals (bip116 + bip117). bip116 proposes adding a new opcode OP_MERKLEBRANCHVERIFY which does the merkle tree check on the top stack elements. It takes the script, the proof and the merkle root and checks that it indeed is in the merkle tree. That doesn’t get you MAST, it just gets you a check. What bip117 does is that it adds a new rule: extra stuff you’ve left on the stack after valid execution, is popped off the stack and executed in a tail-recursive fashion. So you use bip116 to prove that yes this was in the merkle tree, and then you use bip117 to execute whatever you left on the stack. There are some issues with this but I still expect this to happen within the next year or so. In addition to size, it also enhances privacy. It’s another set of conditions that are summarized with a hash. You don’t need to reveal that your backup plan involves a 3-of-5 multisig or whatever, so that’s kind of nice.
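A toy example of the commitment and the reveal (the exact leaf and node hashing here is my own simplification; bip114 and bip116 each define their own tree construction):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Two alternative spending conditions, committed to as a two-leaf merkle tree.
script_a = b'144 OP_CHECKSEQUENCEVERIFY OP_DROP <my key> OP_CHECKSIG'
script_b = b'<your key> OP_CHECKSIG'
leaf_a, leaf_b = h(script_a), h(script_b)
merkle_root = h(leaf_a + leaf_b)          # this is what the output commits to

# To spend via script_b, reveal that script plus its neighbour hash; the verifier
# recomputes the root without ever seeing script_a.
revealed, neighbour = script_b, leaf_a
assert h(neighbour + h(revealed)) == merkle_root
```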

Taproot

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html

There is, however, a problem with MAST which is that MAST inputs are obvious. You’re doing a MAST payment. There’s a proposal called taproot. This is why you check twitter during talks. It uses a new style of pay-to-pubkeyhash. You can– the users, when they are setting this up, use a base key and a hash of a script that they want the alternative to be. They use that to make a new key, then they just say use this new thing and pay to that key. They can pay using key and signature like before. Or you can reveal the base key and the script, and it will execute the script for you. In the common case, it looks like pay-to-pubkeyhash which helps fungibility. In the exceptional case, you provide the components and you can do MAST. This overrides what I had said earlier about MAST coming soon because this proposal looks really awesome but taproot doesn’t have any code yet.
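To give a feel for the idea, here is a toy sketch of the tweak described in that mailing list post, using a minimal, slow, illustration-only implementation of secp256k1 point arithmetic. The hash used for the tweak and the serialization are assumptions of this sketch, not a specification:

```python
import hashlib

# secp256k1 parameters
FIELD = 2**256 - 2**32 - 977
ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and a[1] != b[1]: return None              # P + (-P) = point at infinity
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], FIELD - 2, FIELD) % FIELD
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], FIELD - 2, FIELD) % FIELD
    x = (lam * lam - a[0] - b[0]) % FIELD
    return (x, (lam * (a[0] - x) - a[1]) % FIELD)

def point_mul(pt, k):
    result = None
    while k:
        if k & 1:
            result = point_add(result, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return result

def ser(pt):
    return bytes([2 + (pt[1] & 1)]) + pt[0].to_bytes(32, 'big')  # compressed point encoding

# The taproot trick: the output commits to C = P + H(P || script) * G.
internal_priv = 12345
internal_key = point_mul(G, internal_priv)
script = b'<timeout branch, multisig, whatever>'
t = int.from_bytes(hashlib.sha256(ser(internal_key) + script).digest(), 'big') % ORDER
output_key = point_add(internal_key, point_mul(G, t))

# Key path: the holder of the internal key also knows the tweaked private key...
assert point_mul(G, (internal_priv + t) % ORDER) == output_key
# ...and the script path just reveals internal_key and script so anyone can recheck the tweak.
```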

Key aggregation

https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures.html

https://eprint.iacr.org/2018/068

Another problem in the bitcoin space is these keys and signatures. There’s a separate key for every signer. In particular, there are many cases where you have multisignatures and you have to have a key for each signer. Say we have a deal where we both have to sign it and we use OP_CHECKMULTISIG. There are, however, simpler signature schemes, like schnorr signature schemes, where we can combine both keys and also signatures. The added keys become a single valid key. Same with signatures. And then we’ll need a new OP_CHECKSCHNORRSIGVERIFY. It’s smaller, making transactions cheaper. It’s also fungible, and whether it’s multisig or single signature is not publicly identifiable.
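As a toy illustration of why schnorr-style signatures combine so nicely, here is a sketch over a small Schnorr group, not secp256k1, with fixed nonces for the demo. Note also that naively adding keys like this is exactly the rogue-key problem that the MuSig paper linked above is about:

```python
import hashlib

q = 1019               # order of the subgroup (toy-sized, obviously insecure)
p = 2 * q + 1          # 2039, a safe prime
g = 4                  # a quadratic residue mod p, so it generates the order-q subgroup

def challenge(R, msg):
    return int.from_bytes(hashlib.sha256(f'{R}|{msg}'.encode()).digest(), 'big') % q

def verify(pubkey, msg, R, s):
    return pow(g, s, p) == (R * pow(pubkey, challenge(R, msg), p)) % p

# Two signers; their public keys multiply together into one aggregate key.
x1, x2 = 111, 222
agg_key = (pow(g, x1, p) * pow(g, x2, p)) % p       # behaves like the key for x1 + x2
k1, k2 = 7, 13                                      # per-signer nonces, fixed for the demo
R = pow(g, k1 + k2, p)                              # combined nonce commitment
e = challenge(R, 'pay to bob')
s = ((k1 + e * x1) + (k2 + e * x2)) % q             # partial signatures simply add up
assert verify(agg_key, 'pay to bob', R, s)          # one signature, one check
```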

Signature aggregation

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/

Still, there is a single signature for every input, which takes up space. It turns out that with certain signature aggregation schemes, you can aggregate those signatures too. Say you had an OP_CHECKAGGSIG opcode. You provide a signature and a key. After the first signature, everything is empty, you just say refer to the one above. OP_CHECKAGGSIG wouldn’t actually fail during script execution itself. When you get to the end of the transaction, it would add up the signatures you said to check, and then it would check that the combined signature was valid, all at once, and then pass/fail the transaction at that time.

This helps input size, which makes more room for other transactions in the blockchain. And validation speed is improved- it’s not one signature check per input, it’s one signature check per transaction. Fantastic. But also it provides an incentive to combine transactions. There’s a protocol called coinjoin where two people who don’t trust each other can negotiate to combine things they want to spend into one mega-transaction. By two I mean any number. This makes a massive transaction. Using signature aggregation, a single coinjoin transaction is much smaller than having two separate transactions, so signature aggregation provides a coinjoin incentive.

Batch signature validation

https://bitcoincore.org/logs/2016-05-zurich-meeting-notes.html

It also turns out that there’s still a lot of signatures to check in every block. But these signatures are actually amenable to a batch validation mode where you do some prep work and then check that all of them work at once. This scales super-linearly: it’s much faster to batch validate 10k signatures than to run 10k separate signature validation operations. The downside is that all you get is a single answer, either they were all good or something went wrong, however that might be okay because when you’re checking a block for validity that batch answer is all you care about anyway.

In practice, this doesn’t actually help with block validation. When you get a block, you have already seen many of the pending transactions and already validated them. But batch signature validation does help with initial block download (IBD) speed.

Scriptless scripts

http://diyhpl.us/wiki/transcripts/realworldcrypto/2018/mimblewimble-and-scriptless-scripts/

http://diyhpl.us/wiki/transcripts/scalingbitcoin/stanford-2017/using-the-chain-for-what-chains-are-good-for/

Scripts are sometimes unnecessary. Scriptless scripts originate from work done by Andrew Poelstra on a project called mimblewimble. If you ever want to be amused, go look up the origins of the mimblewimble project. It turns out that scripts are sometimes unnecessary. Consider atomic swaps between blockchains, where Alice wants to swap some coins with Bob on another blockchain. The way that Alice does this is that she creates an unguessable big secret and hashes it to x. And then she produces a transaction that spends to Bob if Bob can provide the value that hashes to x. And Bob sets up the same thing on his blockchain, with the same x. To collect the coins, Alice spends by revealing her secret. That means that Bob can see that transaction and now spend Alice’s. So without trusting each other they have managed a swap of coins.

With these kinds of signature schemes, you can set things up by negotiating between the two parties with keys. There’s enough information in that for Bob to create the signature he needs to take Alice’s coins. And this concept is called scriptless scripts. On the blockchain all that appears is one transaction saying pay to some key, and some people spend them, basically. Nobody but Alice and Bob knows that any of this fancy coinswap thing is happening. So this helps fungibility and input size. It’s a little bit in the future because it would only be able to happen once we get a soft-fork that introduces new signature schemes.

Simplicity

https://www.youtube.com/watch?v=VABppufTdpA&t=19m56s

https://blockstream.com/simplicity.pdf

https://bitcoinmagazine.com/articles/introducing-programming-language-so-simple-it-fits-t-shirt/

Along the lines of script improvements, there is a problem with scripts, particularly as they become more powerful. There are many feature proposals but it’s hard to analyze the scripts and make provable statements about what the scripts will do. People have a hard time writing bug-free code. There’s a language called Simplicity. It’s a typed composition language. Read the paper. It’s specifically designed for the blockchain use case. This helps the security of smart contracts by allowing you to do validation. Right now you would not write Simplicity directly, but you would write in another language that compiles down to Simplicity.

Covenants

http://fc16.ifca.ai/bitcoin/papers/MES16.pdf

http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/covenants/

https://github.com/jl2012/bips/blob/vault/bip-0ZZZ.mediawiki

Covenants would be a powerful improvement to bitcoin scripts. Current bitcoin scripts can’t really do any kind of sophisticated introspection. In particular, you can’t write a script that says “you can spend me only if the transaction that spends me has this kind of output script”. You might want to say: you can spend this output, but the transaction that spends it must have an output with a one week delay, or one that can be spent by an emergency key. This is the basic idea of the “Vault” proposal. You would put your money into this kind of output and then you would watch the network. To take money out of the vault, you wait a week. If you see money going out of the vault and you didn’t authorize it, then that means someone has stolen your key and is trying to spend your money. At that point, you go to your backyard and dig up your emergency secret key and you take the funds back during that one week period. The only transaction that the thief was allowed to make was, due to the covenant, one with this one week lock-up where only the emergency key could spend the coins during that period.

Confidential transactions

https://people.xiph.org/~greg/confidential_values.txt

https://blockstream.com/bitcoin17-final41.pdf

http://diyhpl.us/wiki/transcripts/gmaxwell-confidential-transactions/

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015346.html

Enough about scripts for a moment. The blockchain is public. In particular, the amounts are public, and this actually reveals a lot of information. By looking at the amounts, you can often tell which one is a change output going back to the same person. All that nodes really need to know is not necessarily the amounts, but rather that there is no inflation in the system. They only need to know that the sum of the input amounts is equal to the sum of the output amounts plus the miner fee amount. They also need to know that there was no overflow. You don’t want a 1 million BTC transaction and a -1 million BTC transaction.

There’s a way of doing this using math, and it’s called confidential transactions. It takes the 8 byte value and changes it to a 2400 byte value. To process each input that spends one of these takes about 14 ms. This is against 0.1 ms for a normal signature check. It’s heavy and slow. But it provides the additional privacy on amounts.
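The homomorphic trick that makes this possible can be shown with toy numbers. This sketch uses a toy multiplicative group rather than the elliptic-curve Pedersen commitments real confidential transactions use, and it omits the range proofs that prevent overflow, which is where most of those 2400 bytes go:

```python
# Toy Pedersen-style commitments (illustration only; these parameters are not secure,
# and in a real scheme nobody may know the discrete log relating the two bases).
p = 2**61 - 1            # a Mersenne prime used as the modulus of a toy group
g, h = 3, 7              # base for the amount, base for the blinding factor

def commit(value, blind):
    return (pow(g, value, p) * pow(h, blind, p)) % p

# input: 5 BTC;  outputs: 3 BTC + 1.9 BTC, fee 0.1 BTC (amounts in satoshis)
in_c  = commit(500_000_000, 1111)
out1  = commit(300_000_000, 600)
out2  = commit(190_000_000, 511)          # blinding factors chosen to sum to the input's
fee_c = commit( 10_000_000, 0)            # the fee is public, so it gets a zero blinding factor
assert in_c == (out1 * out2 * fee_c) % p  # amounts balance without being revealed individually
```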

Bulletproofs

https://eprint.iacr.org/2017/1066.pdf

https://joinmarket.me/blog/blog/bulletpoints-on-bulletproofs/

http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/bulletproofs/

There was a paper released a couple months ago called “bulletproofs” which cuts these numbers down to 4.2 ms and about 674 bytes. Interestingly this time does benefit from batch validation, where you can reduce that time somewhat. And proofs can be combined across outputs, so if you have 8 outputs you only add a few hundred bytes to the proof. At this point you’re thinking, this would incentivize some kind of coinjoin scheme and everything else. I think this needs more research because it’s a big change, the fungibility benefits are huge, it’s fairly new math, and bitcoin is generally conservative. There might be more squeezing or improvements we could do. Get it down by a factor of 10 and then we’ll talk.

Client-side validation

http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/client-side-validation/

https://scalingbitcoin.org/milan2016/presentations/D2%20-%20A%20-%20Peter%20Todd.pdf

UTXO commitments

https://bitcointalk.org/index.php?topic=101734.0

https://bitcointalk.org/index.php?topic=88208.0

https://bitcointalk.org/index.php?topic=204283.0

https://github.com/petertodd/python-merbinnertree

https://github.com/bramcohen/MerkleSet

UTXO commitments are a very very old idea in bitcoin. The problem is that lite nodes, which don’t have the whole UTXO set and don’t track the whole blockchain, can’t tell if a transaction is valid. The idea is to put a hash of the UTXO set in the block. We use a merkle tree and put the hash root somewhere. This would allow you to prove the presence of an unspent output by using a merkle inclusion proof. You could also prove that a utxo wasn’t in there: if the tree were ordered, you could provide two neighbors and show that there aren’t any members between them. You could get the UTXO set, check that it’s committed to by that block, and then work forward from there, giving you some kind of partial semi-trusted initial block download and sync. This would help with lite node security. Unfortunately there are some problems with UTXO set commitments that put it in the “needs more research” flying cars phase of the future.

UTXO proofs

The unspent transaction output set (UTXO set) sits at about 63 million outputs. If everyone in the world wants to own bitcoin, then that size is going to increase to about 7 billion items. So what happens when that UTXO set size grows too large for most nodes? If we had UTXO commitments then nodes could just remember the merkle commitment, and when you want to spend bitcoin, you would provide proofs that each of your coins is in the UTXO set. The downside is that wallets would need to be able to track those proofs. These proofs would change with each block, so now you need not just your private keys but you also need to track the blockchain. The proofs themselves will probably be about 1 kilobyte per input. So you’re trading bandwidth for UTXO storage. Because of these problems, unless the crisis is acute, this probably won’t happen; it’s in the flying cars research phase category.

TXO commitments

https://petertodd.org/2016/delayed-txo-commitments#slow-path-calculating-pending-txo-commitments

There’s a proposal that addresses another problem we have with UTXO set commitments: validating the UTXO tree is really slow when it gets really big. If you randomly add and delete nodes from the tree you do a lot of hashing to regenerate the merkle root. And everyone has to do that to validate the block. The idea of TXO commitments is that you commit all transaction outputs (TXOs) into the block, with a bit that says whether each was spent or not. And you basically build up this merkle tree. If you have a merkle tree where you continue to add data, you end up with a data structure where you have a number of peaks that you need to keep up with, and petertodd calls this a “merkle mountain range” that you build up and you just append and append and append. You wouldn’t do this for the current block, but rather for some previous older block, and you would just remember the changes since then. And then the wallet has to prove that the output was in the TXO tree. That proof, that yes it was there, gives you enough information to change it from unspent to spent and re-calculate the root, because you have all of the sibling hashes. Unfortunately the wallets would still need to track the TXO proofs, and it might still be about as bad as tracking other things, and it’s still 1 kb of extra transmission for every input, but it certainly helps bring the UTXO commitment idea closer.
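A sketch of the append-only “merkle mountain range” shape: you keep one peak per power-of-two subtree and merge peaks of equal height as you append. The hashing and the tuple representation here are my own simplifications of the idea:

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b''.join(parts)).digest()

def mmr_append(peaks, leaf):
    """peaks is a list of (height, hash); append a leaf and merge equal-height peaks."""
    node = (0, h(leaf))
    while peaks and peaks[-1][0] == node[0]:
        left = peaks.pop()
        node = (node[0] + 1, h(left[1], node[1]))
    peaks.append(node)
    return peaks

peaks = []
for txo in [b'txo-0', b'txo-1', b'txo-2', b'txo-3', b'txo-4']:
    peaks = mmr_append(peaks, txo)
# 5 leaves -> one peak of height 2 (four leaves) plus one lone leaf of height 0
assert [height for height, _ in peaks] == [2, 0]
```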

Fraud proofs

https://en.bitcoin.it/wiki/User:Gmaxwell/features#Proofs

https://github.com/bitcoin/bips/blob/master/bip-0180.mediawiki

https://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2016/fraud-proofs-petertodd/

Another idea that goes back even further, back to the bitcoin whitepaper, is the idea of fraud proofs. I sort of alluded to this problem earlier, which is that lite nodes don’t actually run the bitcoin protocol and they simply trust the miners. Like mobile phone nodes. “If it’s in a block then it must be good”. This goes back to the original Satoshi paper where he suggested that maybe nodes could give alerts. Well, it doesn’t work, firstly because someone could give false alerts all the time and force you into downloading all the blocks like a full node anyway. And secondly, because you can’t validate a block by itself without the UTXO set, you would have no idea whether the block is valid.

What can you compactly prove? If a transaction is completely invalid or malformed, it’s easy: you can give the lite node the malformed transaction and then a merkle inclusion proof for that transaction in the block, and it’s easy to verify that the transaction is malformed and that the merkle inclusion proof is valid. If a transaction can’t spend the output it’s claiming to spend, it’s the same thing: a proof that the transaction is in the block, plus either the transaction that already spent that output or the transaction that created it, together with their inclusion proofs, and done. However, you can’t prove that an input transaction simply doesn’t exist, unless you had UTXO commitments, perhaps with a soft-fork upgrade, or a lighter version where a block commits to the locations of its inputs. You also can’t prove that the fees collected in the block are incorrect. Miners are supposed to add up the fees from every transaction and pay themselves that amount. What if they take too much? There’s a merkle tree variant that could possibly fix this, called a merkle sum tree; with it you can in fact prove this, and you could do this with a soft-fork.

What if the merkle root hash of the transaction tree in a block is completely wrong? Just junk? In general you can’t prove this. This is a subset of a problem called a data withholding attack. In this, the miner basically puts out a new block but doesn’t provide any of the transactions in it. Lite nodes say it looks fine, and full nodes say they can’t validate it, but the full nodes can’t prove that to help the lite nodes. There’s speculation that a proposal could be made that employs fountain codes in such a way that you are required to reveal information… but this is definitely in the realm of flying cars future research, a lot more research needed if we were to go in that direction.

Part 3: Peer protocol changes

I am going to talk about peer protocol changes. I think I have time for– nope.

TXO bitfields

http://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/

Okay, TXO bitfields. This is a different approach for the problem of the UTXO set becoming far too large. In this, each node remembers a bitfield for each block. The bits represent whether each transaction output (TXO) was spent or not. It also remembers a merkle root for that block. And then the spending node sends a merkle proof that proves that the transaction output it is trying to spend is, say, the 3000th output of that block, so the full node checks the transaction is correct (is it output 3000?), then checks bit 3000 and flips the bit if necessary.

This works because the size of an unspent output entry is around 32 bytes, while the size of the bitfield entry is about 1/8th of a byte (only 1 bit). It turns out the ratio between TXOs and UTXOs is only 12.6, so we end up with a data structure about 20x smaller. If it turns out that most things are spent, then there are ways to compress it further to make the data structure more dense.
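The back-of-envelope arithmetic behind the “about 20x smaller” claim, using the numbers from the talk:

```python
# Rough arithmetic behind the ~20x figure (numbers quoted in the talk).
utxo_entry_bytes = 32          # approximate cost of remembering one unspent output
txo_bit_bytes = 1 / 8          # one bit per transaction output ever created
txo_per_utxo = 12.6            # ratio of all TXOs to currently-unspent ones

saving = utxo_entry_bytes / (txo_bit_bytes * txo_per_utxo)
print(round(saving, 1))        # ~20.3
```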

The cool thing about this is that proof size might still be an issue, but the proof of TXO position is static. Once the block is set, you can create the proof. It doesn’t update like UTXO proofs do. This is also not a consensus change. Nothing has to go into blocks. Peers just have to choose to do this on their own. It does not require consensus between nodes to validate blocks or whatever.

As far as I know, nobody is actually working on it, so it’s in the further future, but it does help bitcoin node scalability.

Rolling UTXO set hashes

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html

Another subset of the UTXO set problem is that there’s no compact way to prove that a UTXO set is correct. XORing all the UTXOs together is not secure. There’s this idea of a rolling UTXO set hash. It’s a 256-bit number that’s fairly easy to calculate, and you can update this one hash when a new block comes in. If you’re a full node, you can record your UTXO set hash and then validate that your UTXO set hasn’t been corrupted. But it also helps the idea of initial node bootstrap and initial block download. If you get the UTXO set hash from someone you trust, say someone writing the bitcoin software, you can go anywhere and get the UTXO set, download it, and just check that the hash matches. Maybe if things keep getting bigger, then this might be a middle ground between running a lite node and running a fully-validating full node from the dawn of time itself.
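One way to build such a rolling hash is to multiply per-UTXO hashes together modulo a large prime, so elements can be added and removed in any order. A sketch with illustrative parameters, not the exact construction from the mailing list proposal:

```python
import hashlib

P = 2**521 - 1                     # a Mersenne prime used as the modulus of this sketch

def elem_hash(utxo: bytes) -> int:
    return int.from_bytes(hashlib.sha512(utxo).digest(), 'big') % P

def add(acc: int, utxo: bytes) -> int:
    return (acc * elem_hash(utxo)) % P

def remove(acc: int, utxo: bytes) -> int:
    return (acc * pow(elem_hash(utxo), -1, P)) % P   # multiply by the modular inverse

acc = 1
acc = add(acc, b'utxo-a')
acc = add(acc, b'utxo-b')
acc = remove(acc, b'utxo-a')
assert acc == add(1, b'utxo-b')    # the order of additions and removals doesn't matter
```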

Transaction compression

http://diyhpl.us/wiki/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation/

We have this other problem that transactions are too large. We’re bumping up to 170 GB for the blockchain. It turns out that with a dedicated compressor you can get 28% compression. This still needs to be benchmarked on how fast decompression is. That number is for compressing a single transaction on its own, without any surrounding context. This can help bandwidth and storage.

Set reconciliation for transaction relay

http://diyhpl.us/wiki/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation/

Most of the bandwidth used by full nodes today is not spent sending blocks, but announcing that you have a txid. It’s very chatty like that. Broadcast to all. What about instead announcing it to one or two peers and then using set reconciliation every so often to catch the ones you have missed? We know how to do efficient set reconciliation. This would help bandwidth and full nodes.

Block template delta compression

http://diyhpl.us/wiki/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation/

Block propagation is fairly efficient. It takes about 40 kb to send a block. This is achieved because of compact blocks, which sends a summary of what’s in the blocks. Assuming that you have seen most of these transactions before. There are two modes: high bandwidth mode and low bandwidth mode. In high bandwidth mode, I basically just send you the summary every time I get a block. It’s only 40 kilobytes. But in low bandwidth mode, I notify you, and then you request it, and then I send the summary, so there’s another roundtrip in there in low bandwidth mode.
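A sketch of the reconstruction step that makes compact blocks cheap. The short IDs here are a stand-in (bip152 actually uses salted SipHash and a specific wire format); the point is just that the receiver rebuilds the block from its own mempool and only requests what it is missing:

```python
import hashlib

def short_id(txid: bytes) -> bytes:
    # Stand-in for bip152's salted short IDs; a truncated SHA256 keeps the sketch simple.
    return hashlib.sha256(txid).digest()[:6]

def build_compact_block(block_txids):
    return [short_id(txid) for txid in block_txids]

def reconstruct(compact, mempool_txids):
    by_short = {short_id(txid): txid for txid in mempool_txids}
    have, missing = [], []
    for index, sid in enumerate(compact):
        if sid in by_short:
            have.append(by_short[sid])      # already in our mempool, nothing to download
        else:
            missing.append(index)           # indexes to request in a follow-up round trip
    return have, missing
```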

Bitcoin Core by default picks 3 peers to be high bandwidth peers and everyone else is a low bandwidth peer. The 3 that get picked are the last 3 that gave us a block, because they are probably pretty well-connected in the bitcoin p2p network.

The idea with block template delta compression is that every 30 seconds we take those 3 peers and we send them a template of what we expect to be in the next block, and when the next block occurs, we just send a delta between what the template had and what the block actually had. It turns out that about 22% of the time, that delta is under 1 packet, which of course is the golden point for low latency. This particularly helps especially if you are trying to do something crazy like bitcoin networking over a satellite link.

Dandelion

https://www.youtube.com/watch?v=VABppufTdpA&t=36m25s

https://arxiv.org/abs/1701.04439

https://github.com/sbaks0820/bitcoin-dandelion/issues/1

Another problem with fungibility on the network is that today there’s a lot of nodes that connect to as many nodes as they can and look for transactions. These nodes are essentially looking for where transactions are originating from in an attempt to deanonymize the network. I ban a lot of them from my nodes, for example.

Dandelion was a proposal to solve this: when a node gets a transaction, it passes it on to just a single peer 90% of the time, and in the other 10% of cases it broadcasts it to everyone. These are the stem phase and the fluff phase. There are a few other rules you have to add to make this robust and make it work. I hope to see this sooner rather than later because it really does help fungibility on the network.
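The core relay decision is tiny. A sketch, with the caveat that the real proposal adds timers and per-connection routing so transactions can’t get stuck in a dead-end or be trivially traced:

```python
import random

FLUFF_PROBABILITY = 0.1   # roughly the 90/10 split described above

def relay(tx, stem_peer, all_peers):
    """Dandelion-style relay decision (sketch only)."""
    if random.random() < FLUFF_PROBABILITY:
        for peer in all_peers:         # fluff phase: normal flood to everyone
            peer.send(tx)
    else:
        stem_peer.send(tx)             # stem phase: pass it quietly to a single peer
```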

Neutrino: Client-side block filtering (bip157/bip158)

https://youtube.com/watch?v=7FWKc8lM4Ek

https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki

https://github.com/lightninglabs/neutrino

https://github.com/bitcoin/bips/blob/master/bip-0157.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0158.mediawiki

Another proposal that helps fungibility on the network is this proposal called Neutrino. The current way that lite nodes work is that they tell the full nodes a bloom filter of all the things they are interested in. This works against privacy, as you might expect. Once you have seen one of those addresses, it’s almost trivial to figure out which peer is interested in that address, in other words which peer has that address in their wallet. It also requires every node to look through the bloom filters and do calculations every time a block comes through, in an attempt to figure out whether to send the transaction to the lite node.

Neutrino inverts this. The full nodes produce a 20 kilobyte block summary and the algorithm to do this is pretty awesome. This summary allows the lite node to say this is a block that I’m interested in, and pull down the whole block, ideally from a separate peer that isn’t even related to the first peer. This would definitely help lite node privacy. But the reason why I say it’s a few years out is that the originator of this, roasbeef, is deep in lightning right now and I expect lightning network stuff to consume all his cycles for some time.
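Roughly what the per-block filter does, in a much-simplified form. Real bip158 filters hash scriptPubKeys with SipHash and Golomb-Rice encode the sorted values; this sketch skips the compression so the matching logic stays visible:

```python
import hashlib

def block_filter(scripts, bits=20):
    """Per-block filter: a small set of truncated hashes of every script in the block."""
    mask = (1 << bits) - 1
    return sorted({int.from_bytes(hashlib.sha256(s).digest()[:8], 'big') & mask for s in scripts})

def might_match(filter_values, my_script, bits=20):
    """Lite node side: probabilistic test; a hit means 'go fetch the whole block'."""
    mask = (1 << bits) - 1
    return (int.from_bytes(hashlib.sha256(my_script).digest()[:8], 'big') & mask) in filter_values
```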

NODE_NETWORK_LIMITED (bip159)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014314.html

https://github.com/bitcoin/bips/blob/master/bip-0159.mediawiki

Another problem on the network is that a number of years ago we introduced pruned nodes on the network, which download all the blocks but then throw away the old ones. This saves a lot of disk space. It’s still a full node, but it throws away old stuff it doesn’t need. There’s a bit in the service field, when you advertise a full node, which says you will serve all blocks. But the pruned nodes can’t set this. So there’s a proposal for a few other bits, to say that actually I do have the last 500 or 10,000 blocks, and this will really help the bandwidth load on the current full archival nodes that aren’t pruned. I expect this very soon.
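A sketch of how a peer might interpret the service bits, assuming the bip159 bit assignment; treat the constants and the “recent” rule here as illustrative rather than normative:

```python
NODE_NETWORK = 1 << 0            # node serves the entire historical chain
NODE_NETWORK_LIMITED = 1 << 10   # bip159: node serves only recent blocks (assumed bit value)

def can_serve_block(services, block_is_recent):
    """Decide whether this peer is worth asking for a given block."""
    if services & NODE_NETWORK:
        return True
    return bool(services & NODE_NETWORK_LIMITED) and block_is_recent
```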

Peer encryption and peer authentication (bip150/bip151)

https://github.com/bitcoin/bips/blob/master/bip-0150.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0151.mediawiki

https://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/

It might also interest you that in the bitcoin p2p network we don’t currently authenticate or encrypt traffic between bitcoin nodes. It’s to some extent already public information, but there are traffic analysis attacks against this. Encryption is low hanging fruit against passive attacks. But it also helps with trusted nodes, like if you have a lite node and you want to connect to your own dedicated full node. This would make trusted nodes easier, since there’s nothing out-of-the-box that does this yet.

Things that I wanted to mention

There are a lot of things I wanted to mention in this talk but I couldn’t, such as: bip8, spoonnet, zerolink, coinshuffle++, zero-knowledge contingent payments, discreet log contracts, lightning network, etc. I put them on this slide to make myself feel better. I am not going to mention any of them, unfortunately.

Conclusion

I’ll tweet out a list of references, it’s currently a google doc. If there are any questions then I would be delighted to answer them.

Q&A

Q: What do your hats say?

A: This one says “someone in bitcoin did something awesome again” and this one says “someone in bitcoin did something stupid again”. Anthony Towns suggested that this pair covers everything in bitcoin.

Q: Which of those projects are you working on?

A: No. I am working on none of these. I am actually working on lightning, and it’s incredibly exciting, I’ve been working on it for about 2.5 years. It’s soon going to be usable by normal people without losing money. It’s an exciting time. My timing is terrible today though, I’m here talking about other stuff.

Q: There’s a question on twitter that asks you to prove that you are not the output of an artificial intelligence.

A: Okay. I anticipated this question. I’m glad it was asked. I don’t believe such a proof is possible.

Q: Hi Rusty. On the dandelion project, do you think it could potentially hinder the ability to monitor the network for any nodes that are doing anything dodgey?

A: So, dandelion itself, not particularly. Dandelion does this stem and fluff approach. The idea is that the transaction will travel some way before it gets broadcast to the network, so the broadcast point is not necessarily the original point. There’s a timer in there so that if it gets sent down a dead-end then it will get fluffed out anyway. So what percentage of nodes need to be evil before this scheme is compromised? I think the numbers look good. It’s hard to monitor the entire bitcoin network. That’s kind of a feature in a way. I don’t think this will be the main issue. I was shocked once to see a block explorer identify, by geography, that I was the one who sent a bitcoin transaction.

Q: On one of your randomly generated slides, you made an intriguing comment where you said people were trying to deanonymize the network. Who are these people?

A: There are companies that do this as their job.

Q: Why?

A: I stay out of that world. I believe they are trying to sell information to places that have KYC/AML requirements and say “this person is dodgy”. KYC/AML are regulatory rules about knowing who your customers are. It sometimes conflicts with an anonymous cash system. They are generally not that clever, but it’s still a bit of an arms race. We need to keep on top of it.

Rusty, thank you so much for this presentation on the state of blockchain. Here’s a gift of gratitude from everyone and the organizers. Let’s give a round of applause to Rusty. Yes I’ll stick around for the rest of the conference and I’ll be around to answer questions. We have the penguin dinner tonight. It’s behind building 1, under the blue space, in the green space, is where the groups start walking.

\ No newline at end of file +https://www.youtube.com/watch?v=VABppufTdpA

Future technological directions in bitcoin: An unreliable guide

notes: https://docs.google.com/document/d/1iNzJtJRq9-0vdILQAe9Bcj7r1a7xYVf8CSY0_oFrH9c/edit or http://diyhpl.us/~bryan/papers2/bitcoin/Future%20technological%20directions%20in%20bitcoin%20-%202018-01-24.pdf

https://twitter.com/kanzure/status/957318453181939712

Introduction

Thank you very much for that introduction. Alright, well polished machine right there. Let’s try that again. Thank you for that introduction. My talk today is all about this. I’m not going to be talking about this. If you see me reaching for the red hat, please stop me. Before I begin, I want to apologize in advance: this talk is a summary of things coming up in bitcoin that some people are workin on, which means first I need to apologize to the people whose work I am about to compress into about 85 second segments each, because I am going to gloss over a lot of things. I had to pick what I thought was most interesting. Also I apologize to you, my audience, because this is going to feel like 23 lightning talks back-to-back in quick succession. I am going to give you a vague taste of things, and you’ll have to dig in later. Hopefully I can convey some of the exciement as I go.

Talk overview

I believe that everyone talking about cryptocurrencies should disclose their holdings. In my case, bitcoin. Fairly straight-forward. Okay, moving on. First we’re going to go through a bitcoin keyword primer. This is basically not nenough bitcoin to do anything useful but it does mean you know a few keywords and you can sound like you know what you’re talking about. Then wen’re going to talk about two kinds of proposed improvements. The first being consensus improvements, changes to the blockchain itself (transactions and blocks), in particular we’re talking about backwards-compatible changes. And then non-consensus changes, such as changes in how peers on the network talk with each other, which does not involve changing the blockchain format. Today, for a lack of time, I am not going to be talking about anything built on top of bitcoin, layer 2. I am not going to be talking about hard-forks (incompatible changes to bitcoin).

Bitcoin keyword primer

https://bitcoin.org/en/vocabulary

https://bitcoin.org/en/developer-documentation

So here comes a very quick bitcoin keyword primer. This is a bitcoin transaction. It contains outputs (TXOs). Each output has an amount and a script (scriptPubKey). That output script is a very simple stack-based scripting language with no loops that validates who can spend this output. Here’s another transaction spending it. You can see that this transaction has an input, and the input has an input script (scriptSig) spending the previous output. In this case, it’s a key and the signature of that key. When you evaluate that, and then the output script, it leaves “true” on the stack, meaning it’s valid, and therefore that the transaction is valid and authorized.

The other term that we come across is “txid” which is the hash of the entire transaction. Very simple.

Transactions are built up into chains like this. Of the 19 outputs on this diagram, 6 of them are unspent. They are considered members of the unspent transaction output set (UTXO set). And that’s important because to validate a new transaction, the transaction must spend a member of the UTXO set. That’s the set of things that you need to check against, to spend a new transaction.

Bitcoin uses a blockchain. A bitcoin block contains the hash of the previous block, causing it to be a chain. And it also has a set of transactions in the block. Now the way that transactions are put into the bitcoin block is kind of interesting. The txid is a hash of the transaction itself. We take pairs of txids, hash those together, and we build up a tree. We put the root of the tree in the bitcoin block header. That kind of hash tree is called a merkle tree. The cool thing about a merkle tree is that I can prove to you that this transaction is in the merkle tree. If you have the block header then I can prove that the transaction is in the block. But because all I have to give you is the txid and the hash of the other neighbor at that level of the tree, you can combine the hashes yourself, and check that it matches the merkle root. If it does, then you know that the transaction is in the block.

That’s all that I am going to tell you about bitcoin keywords.

Soft forks and consensus changes

https://www.youtube.com/watch?v=VABppufTdpA&t=5m26s

Let’s talk about a first set of improvements that have been proposed and researched. These are called soft-forks. A soft-fork is the backwards compatible changes. Old nodes still work. You can make things that are currently legal to be illegal, but you can’t make things that are currently illegal to be legal because that would break compatibility.

Segwit

https://bitcoincore.org/en/2016/06/24/segwit-next-steps/

https://bitcoincore.org/en/2016/01/26/segwit-benefits/

https://bitcoincore.org/en/segwit_wallet_dev/

https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki

https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/segregated-witness-and-its-impact-on-scalability/

https://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/segwit-lessons-learned/

As a warm-up, I am going to talk about an upgrade that was activated in August 2017 called segregated witness (segwit). And this addressed the problem of script upgradeability, UTXO bloat, and unwanted malleability. What really is segwit? Well, it’s a new output script witness type. And it’s literally just a version and then a hash. For version 0, it’s either a 20 byte hash (hash of a public key), or a 32 byte hash (hash of a script). The “input” which normally spends this is empty. The real input data goes into something called the witness. And the witness contains the input stack and the key or script which we promised in the output. You can check whether it’s the right one by hashing to check whether the hashes match up. Okay. Here’s an old style transaction, remember the txid was the hash of the whole thing. If it was a segwit transaction, the input script would be empty and this new witness thing would contain the pubkey and the signature for example. Now, as you’ve noticed, that is no longer hashed into the txid, it has been segregated out of the txid. This turns out to be really important because there’s a long list of different input scrips that can satisfy those output conditions. You can sign it again, there’s a way for third parties to change signatures and make them still work but they completely change the txid, this is called transaction malleability. And malleate is an obsolete word, it means to hit something with a hammer, so I imagine someone out there whacking transactions with a hammer to change their transaction txid. Segwit helps several things. We see that version byte helps with– everything else above version 0, like version 1, is given a free pass at the moment. It helps layer 2, because if you’re building things on top of bitcoin, you can now rely on the fact that txid values don’t change. Someone can’t change the chain of transactions by malleating one of the earlier transactions in your pending transaction tree. It also helps hardware wallets because of a checksig change. But importantly, it helps bloat incentives and throughput. Without segwit, it’s actually cheaper to create a new output than it is to spend an existing one. This creates incentives to create more UTXOs, which everyone has to remember. With this witness, it’s outside the old rules– it’s a separate thing. These bytes are not counted in the same way. The bitcoin rules say that you cannot have more than a million bytes in a block. The witnesses, which do not apply under that rule, to count a quarter of a byte each. This means that you can fit more into a block, and in particular what it means is that it’s now almost as cheap to spend an output as it is to create a new one so the incentives are more aligned. Also, 60% of the block is usually the inputs, and segwit is a significant capacity increase in bitcoin: instead of a megabyte, we end up at in theory about 4 MB but in practice probably about 2 MB to 2.3 MB blocks. That’s in the past, now.

Segwit addresses: bech32 (bip173)

https://www.youtube.com/watch?v=VABppufTdpA&t=9m10s

https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki

https://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-03-29-new-address-type-for-segwit-addresses/

http://bitcoin.sipa.be/bech32/demo/demo.html

https://github.com/sipa/ezbase32/blob/master/dist32.cpp#L13L25

The first of the proposals of the things that I want to talk about for the future is this new problem, in segwit, we have this new output format but we have no address format. The original bitcoin addresses are base58, a nice round number, with a 32-bit sha256 checksum. It’s just an example address on screen, don’t send money there. The new addresses are base32 with a 30-bit BCH checksum code. According to pieterwuillefacts.com, this is apparently the name of Pieter Wuille’s dog. So here’s wnullywullewuelewuulywulewuluhwuelwu… Anyway, the code is guaranteed to detect up to 4 errors. And for “similar” letter substitutions it detects up to 5 errors. And if you have other errors, it has a 1 in a billion chance of falsely okaying the result. This is a much stronger checksum than the old 32-bit checksum that we were using. With bech32, readability is improved, bech32 is case-insensitive, it fits better on QR codes, and it has error correction and error detection. We’re seeing this being rolled out now. I think we’re going to see this coming out very soon.

MAST (bip114 vs bip116 + bip117)

https://github.com/bitcoin/bips/blob/master/bip-0114.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0116.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0117.mediawiki

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-07-merkleized-abstract-syntax-trees/

https://bitcointechtalk.com/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast-33fdf2da5e2f

This solves the problem of really large scripts. MAST stands for merkleized abstract syntax trees. When bitcoiners talk about this, what they are really talking about is urning a script into a merkle tree of scripts. So then in order to spend it, you provide one of those scripts, and one of hte hash proofs and the neighbors to prove that it’s in the tree. Before, we might have had a complex script like this where I can spend it after 144 blocks or you can spend it with your key. And currently using a segwit output script it would be 0 HASH(script). That would be the output script.

In bip114, it would break those two and create a merkle tree out of those conditions and it would be a merkle tree with only two leaves. And then you would commit version 1 segwit script they are proposing, which is “here’s my merkle root”. To spend it, you would provide the merkleroot in the output script originally, and to spend it you would provide the merkle inclusion proof and that has a version field as well. The actual implementation is slightly more complex than this, but you get the general idea.

But bip114 is not the only game in town. There’s a separate pair of proposals (bip116 + bip117). bip116 proposes adding a new opcode OP_MERKLEBRANCHVERIFY which does the merkle tree check on the top stack elements. It takes the script, the proof and the merkle root and checks that it indeed is in the merkle tree. That doesn’t get you MAST, it just gets you a check. What bip117 does is that it adds a new rule: extra stuff you’ve left on the stack after valid execution, is popped off the stack and executed in a tail-recursive fashion. So you use bip116 to prove that yes this was in the merkle tree, and then you use bip117 to execute whatever you left on the stack. There are some issues with this but I still expect this to happen within the next year or so. In addition to size, it also enhances privacy. It’s another set of conditions that are summarized with a hash. You don’t need to reveal that your backup plan involves a 3-of-5 multisig or whatever, so that’s kind of nice.

Taproot

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-January/015614.html

There is, however, a problem with MAST which is that MAST inputs are obvious. You’re doing a MAST payment. There’s a proposal called taproot. This is why you check twitter during talks. It uses a new style of pay-to-pubkeyhash. You can– the users, when they are setting this up, use a base key and a hash of a script that they want the alternative to be. They use that to make a new key, then they just say use this new thing and pay to that key. They can pay using key and signature like before. Or you can reveal the base key and the script, and it will execute the script for you. In the common case, it looks like pay-to-pubkeyhash which helps fungibility. In the exceptional case, you provide the components and you can do MAST. This overrides what I had said earlier about MAST coming soon because this proposal looks really awesome but taproot doesn’t have any code yet.

Key aggregation

https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures.html

https://eprint.iacr.org/2018/068

Another problem in the bitcoin space are these keys and signatures. There’s a separate key for every signer. In particular, there are many cases where you have multisignatures and you have to have a key for each signer. Say we have a deal where we both have to sign it and we use OP_CHECKMULTISIG. There are however I would say, simpler signature schemes, like schnorr signature schemes where we can combine both keys and also signatures. The added keys become a single valid key. Same with signatures. And then we’ll need a new OP_CHECKSCHNORRSIGVERIFY. It’s smaller, making transactions cheaper. It’s also fungible, and whether it’s multisig or single signaure is not publicy identifiable.

Signature aggregation

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-06-signature-aggregation/

Still, there is a single signature for every input, which takes up space. It turns out that with some certain signature aggregation schemes, you can aggregate signatures. Say you had an OP_CHECKAGGSIG opcode. You provide a signature and a key. After the first signature, everything is empty, you say refer to the one above. OP_CHECKAGGSIG wouldn’t actually fail during script execution itself. When you get to the end of the transaction, it would add up the signatures you said to check, and then it would check that the signature was valid, all at once, and then pass/fail the transaction at that time.

This helps input size, which makes more room for other transactions in the blockchain. And validation speed is improved- it’s not one signature check per input, it’s one signature check per transaction. Fantastic. But also it provides incentive to combine transactions. There’s a protocol called coinjoin where two people who don’t trust each other can negotiate to combine things they want to spend into one mega-transaction. By two I mean any number. This makesa massive signature. Using signature aggregation, a single coinjoin transaction is much smaller than having two separate transactions, so signature aggregation provides a coinjoin incentive.

Batch signature validation

https://bitcoincore.org/logs/2016-05-zurich-meeting-notes.html

It also turns out that there’s still a lot of signatures to check in every block. But these signatures are actually ameniable to a batch validation mode where you do some prep work and you can check all of them work. This scales super-linearly. It’s much faster to batch validate 10k signatures rather than running 10k separate signature validation operations. The downside is that all you get is an answer like they were all good or something went wrong, however that might be okay because when you’re checking a block for validity that batch answer is all you care about anyway.

In practice, this doesn’t actually help with block validation. When you get a block, you have already seen many of the pending transactions and already validated them. But batch signature validation does help with initial block download (IBD) speed.

Scriptless scripts

http://diyhpl.us/wiki/transcripts/realworldcrypto/2018/mimblewimble-and-scriptless-scripts/

http://diyhpl.us/wiki/transcripts/scalingbitcoin/stanford-2017/using-the-chain-for-what-chains-are-good-for/

Scripts are sometimes unnecessary. Scriptless scripts originate from work done by Andrew Poelstra on a project called mimblewimble. If you ever want to be amused, go look up the origins of the mimblewimble project. It turns out that scripts are sometimes unnecessary. Atomic swaps between blockchains where Alice wants to swap some coins across with Bob on another blockchain. The way that Alice does this is that she creates an unguessable big secret and hashes it to x. And then she produces a transaction that spends to Bob if Bob can provide some information. And Bob says the same thing on his blockchain and the value that hashes to x. To collect this, Alice spends it by revealing her secret. That means that Bob can see that transaction and now spend Alice’s. So without trusting each other they have managed a swap of coins.

With these kind of signature schemes, you can set things up by negotiating between the two parties with keys. There’s enough information in that for Bob to create the signature he needs to take Alice’s coins. And this concept is called scriptless scripts. On the blockchain all that appears is one transaction saying pay to some key and some people spend them, basically. Nobody but Alice and Bob knows that any of this fancy coinswap thing happening. So this helps fungibility and input size. It’s a little bit in the future because it would only be able to happen once we get a soft-fork that introduces new signature schemes.

Simplicity

https://www.youtube.com/watch?v=VABppufTdpA&t=19m56s

https://blockstream.com/simplicity.pdf

https://bitcoinmagazine.com/articles/introducing-programming-language-so-simple-it-fits-t-shirt/

Along the line of script improvements, there is a problem with scripts particularly as they become more powerful. There are many feature proposals but it’s hard to analyze the scripts and make provable statements about what the scripts will do. People have a hard time writing bugfree code. There’s a language called Simplicity. It’s a typed composition language. Read the paper. It’s specifically designed for the blockchain use case. This helps the security of smart contracts by allowing you to do validation. Right now you would not write simplicity, but you would write in another language that compiles down to simplicity.

Covenants

http://fc16.ifca.ai/bitcoin/papers/MES16.pdf

http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/covenants/

https://github.com/jl2012/bips/blob/vault/bip-0ZZZ.mediawiki

Covenants would be a powerful improvement to bitcoin scripts. Current bitcoin scripts can’t really do any kind of sophisticated introspection. In particular, you can’t write a script that says “you can spend me only if the transaction that spends me has this kind of output script”. You might want to say: you can spend this output, but the transaction that spends it must have an output with a one week delay, or one that can be spent immediately by an emergency key. This is the basic idea of the “The Vault” proposal. You would put your money into this kind of output and then you would watch the network. To take money out of the vault, you wait a week. If you see money going out of the vault and you didn’t authorize it, then that means someone has stolen your key and is trying to spend your money. At that point, you go to your backyard, dig up your emergency secret key, and take the funds back during that one week period. The only transaction the thief was allowed to make was (due to the covenant) one with this one week lock-up, where only the emergency key could spend the coins during that period.

Confidential transactions

https://people.xiph.org/~greg/confidential_values.txt

https://blockstream.com/bitcoin17-final41.pdf

http://diyhpl.us/wiki/transcripts/gmaxwell-confidential-transactions/

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015346.html

Enough about scripts for a moment. The blockchain is public. In particular, the amounts are public, and this actually reveals a lot of information. By looking at the amounts, you can often tell which one is a change output going back to the same person. What nodes really need to know is not necessarily the amounts, but rather that there is no inflation in the system. They only need to know that the sum of the input amounts is equal to the sum of the output amounts plus the miner fee amount. They also need to know that there was no overflow. You don’t want a 1 million BTC output and a -1 million BTC output cancelling out.
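
The check the speaker describes, sketched with plaintext amounts; confidential transactions perform the same check homomorphically over commitments:

    # The validity check described above, with plaintext amounts for illustration.
    MAX_MONEY = 21_000_000 * 100_000_000   # consensus cap, in satoshis

    def check_no_inflation(input_amounts, output_amounts, fee):
        # every amount must be in range (no negative or overflowing values)...
        for v in list(input_amounts) + list(output_amounts) + [fee]:
            if not (0 <= v <= MAX_MONEY):
                return False
        # ...and inputs must cover outputs plus the miner fee exactly
        return sum(input_amounts) == sum(output_amounts) + fee

    assert check_no_inflation([100_000, 50_000], [140_000], fee=10_000)
    assert not check_no_inflation([100_000], [100_000_000_000_000, -99_999_900_000_000], fee=0)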

There’s a way of doing this using math, and it’s called confidential transactions. It takes the 8 byte value and changes it to a 2400 byte value. Processing each input that spends one of these takes about 14 ms, against 0.1 ms for a normal signature check. It’s heavy and slow, but it provides the additional privacy on amounts.

Bulletproofs

https://eprint.iacr.org/2017/1066.pdf

https://joinmarket.me/blog/blog/bulletpoints-on-bulletproofs/

http://diyhpl.us/wiki/transcripts/blockchain-protocol-analysis-security-engineering/2018/bulletproofs/

There was a paper released a couple of months ago called “bulletproofs” which cuts these numbers down to 4.2 ms and about 674 bytes. Interestingly, this time does benefit from batch validation, where you can reduce that time somewhat, and proofs can be combined across outputs: if you have 8 outputs you only add a few hundred bytes to the proof. At this point you’re thinking this would incentivize some kind of coinjoin scheme and everything else. I think this needs more research because it’s a big change; the fungibility benefits are huge, but it’s fairly new math and bitcoin is generally conservative. There might be more squeezing or improvements we could do. Get it down by a factor of 10 and then we’ll talk.

Client-side validation

http://diyhpl.us/wiki/transcripts/scalingbitcoin/milan/client-side-validation/

https://scalingbitcoin.org/milan2016/presentations/D2%20-%20A%20-%20Peter%20Todd.pdf

UTXO commitments

https://bitcointalk.org/index.php?topic=101734.0

https://bitcointalk.org/index.php?topic=88208.0

https://bitcointalk.org/index.php?topic=204283.0

https://github.com/petertodd/python-merbinnertree

https://github.com/bramcohen/MerkleSet

UTXO commitments are a very, very old idea in bitcoin. The problem is that lite nodes, which don’t have the whole UTXO set and don’t track the whole blockchain, can’t tell if a transaction is valid. The idea is to put a hash of the UTXO set in the block. We use a merkle tree and put the hash root somewhere. This would allow you to prove the presence of an unspent output by using a merkle inclusion proof. You could also prove that a utxo wasn’t in there: if the tree is ordered, you can provide two adjacent entries and show that there’s nothing between them. You could get the UTXO set, check that it’s committed to by that block, and then work forward from there, giving you some kind of partial semi-trusted initial block download and sync. This would help with lite node security. Unfortunately there are some problems with UTXO set commitments that put it in the “needs more research” flying cars phase of the future.
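
A small sketch of the merkle inclusion proof machinery (real proposals also need a canonical ordering and a commitment in the block itself):

    # Sketch of building a merkle root over a UTXO set and verifying an inclusion
    # proof against it; duplicates the last element at odd levels for simplicity.
    import hashlib

    def h(b): return hashlib.sha256(b).digest()

    def merkle_root_and_proof(leaves, index):
        """Return (root, proof) where proof is a list of (sibling_hash, sibling_is_right)."""
        level = [h(x) for x in leaves]
        proof = []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])            # pad odd levels
            sib = index ^ 1
            proof.append((level[sib], sib > index))
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return level[0], proof

    def verify_inclusion(root, leaf, proof):
        cur = h(leaf)
        for sibling, sibling_is_right in proof:
            cur = h(cur + sibling) if sibling_is_right else h(sibling + cur)
        return cur == root

    utxos = [b"utxo-%d" % i for i in range(7)]
    root, proof = merkle_root_and_proof(utxos, 3)
    assert verify_inclusion(root, utxos[3], proof)
    assert not verify_inclusion(root, b"not-in-set", proof)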

UTXO proofs

The unspent transaction output set (UTXO set) sits at about 63 million outputs. If everyone in the world wants to own bitcoin, then that size is going to increase to about 7 billion items. So what happens when the UTXO set grows too large for most nodes? If we had UTXO commitments then nodes could just remember the merkle root, and when you want to spend bitcoin, you would provide proofs that each of your coins is in the UTXO set. The downside is that wallets would need to be able to track those proofs. These proofs change with each block, so now you need not just your private keys but you need to track the blockchain. The proofs themselves will probably be about 1 kilobyte per input, so you’re trading bandwidth for UTXO storage. Because of these problems, unless the crisis is acute it probably wouldn’t help; it’s in the flying cars research phase category.

TXO commitments

https://petertodd.org/2016/delayed-txo-commitments#slow-path-calculating-pending-txo-commitments

There’s a related proposal that addresses another problem we have with UTXO sets: validating the UTXO tree is really slow when it gets really big. If you randomly add and delete nodes from the tree you do a lot of hashing to regenerate the merkle root, and everyone has to do that to validate the block. The idea of TXO commitments is that you commit to all transaction outputs (TXOs) in the block, with a bit that says whether each was spent or not, and you build up a merkle tree. If you have a merkle tree where you continue to add data, you end up with a data structure with a number of peaks that you need to keep track of; petertodd calls this a “merkle mountain range” that you build up by appending and appending and appending. You wouldn’t do this for the current block, but rather for some previous older block, and you would just remember the changes since then. Then the wallet has to prove that the output was in the TXO tree. That proof gives you enough information to change it from unspent to spent and recalculate the root, because you have all of the sibling hashes you need. Unfortunately the wallets would still need to track the TXO proofs, which might be about as bad as tracking other things, and it’s still 1 kb of extra transmission for every input, but it certainly brings the UTXO commitment idea closer.

Fraud proofs

https://en.bitcoin.it/wiki/User:Gmaxwell/features#Proofs

https://github.com/bitcoin/bips/blob/master/bip-0180.mediawiki

https://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2016/fraud-proofs-petertodd/

Another idea that goes back even further, to the bitcoin whitepaper, is the idea of fraud proofs. I alluded to this problem earlier: lite nodes, like mobile phone nodes, don’t actually run the full bitcoin protocol and simply trust the miners. “If it’s in a block then it must be good.” This goes back to the original Satoshi paper, where he suggested that maybe nodes could send out alerts. Well, it doesn’t work, firstly because false alerts could be sent all the time and force you into downloading all the blocks like a full node anyway, and secondly because you can’t validate a block by itself without the UTXO set, so you would have no idea whether the block is valid.

What can you compactly prove? If a transaction is completely invalid or malformed, it’s easy: you give the lite node the malformed transaction and a merkle inclusion proof for that transaction in the block, and it’s easy to verify that the transaction is malformed and that the merkle inclusion proof is valid. If a transaction can’t spend the output it’s claiming to spend, it’s the same thing: a proof that the transaction is in the block, plus either the transaction that already spends that output or the input transaction itself, and you’re done. However, you can’t prove that an input transaction simply doesn’t exist, unless you had UTXO commitments, perhaps with a soft-fork upgrade, or a lighter version where a block commits to the locations of its inputs. You also can’t prove that the fees collected in the block are incorrect. Miners are supposed to add up the fees from every transaction and pay themselves that amount; what if they take too much? There’s a merkle tree variant, called a merkle sum tree, with which you can in fact prove this, and you could add it with a soft-fork.
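
A sketch of the merkle sum tree idea, where every node commits to a hash and a running fee total:

    # Sketch of a merkle sum tree: each node is (hash, sum), and parents commit to
    # their children's sums, so the root pins down the total fee in the block.
    import hashlib

    def h(b): return hashlib.sha256(b).digest()

    def leaf(fee, txid):
        return (h(fee.to_bytes(8, "big") + txid), fee)

    def parent(left, right):
        lh, ls = left
        rh, rs = right
        total = ls + rs
        return (h(lh + rh + total.to_bytes(8, "big")), total)

    fees = [1000, 250, 4000, 700]
    leaves = [leaf(f, b"tx%d" % i) for i, f in enumerate(fees)]
    l0 = parent(leaves[0], leaves[1])
    l1 = parent(leaves[2], leaves[3])
    root = parent(l0, l1)
    assert root[1] == sum(fees)   # a miner claiming more than this total is provably lying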

What if the merkle root hash of the transaction tree in a block is completely wrong? Just junk? In general you can’t prove this. This is a subset of a problem called a data withholding attack, where the miner puts out a new block but doesn’t provide any of the transactions in it. Lite nodes say it looks fine, and full nodes say they can’t validate it, but the full nodes can’t prove that to help the lite nodes. There’s speculation that a proposal could be made that employs fountain codes in such a way that you are required to reveal information, but this is definitely in the realm of flying cars future research; a lot more research is needed if we were to go in that direction.

Part 3: Peer protocol changes

I am going to talk about peer protocol changes. I think I have time for– nope.

TXO bitfields

http://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/

Okay, TXO bitfields. This is a different approach to the problem of the UTXO set becoming far too large. In this scheme, each node remembers a bitfield for each block, where the bits represent whether each transaction output (TXO) was spent or not, along with a merkle root for that block. The spender sends a merkle proof that proves “this transaction output I’m trying to spend is the 3000th output of the block”, so the full node checks that the proof is correct, then checks bit 3000 and flips the bit if necessary.

This works because the size of an unspent output is around 32 bytes, while the size of the bitfield entry is about 1/8th of a byte (only 1 bit). It turns out the ratio between TXOs and UTXOs is only about 12.6, so we end up with a data structure about 20x smaller. If it turns out that most things are spent, then there are ways to compress it further to make the data structure more dense.
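
A tiny sketch of the bookkeeping, with a hypothetical block object for illustration:

    # Sketch of per-block TXO bitfields: bit i says whether the i-th output created
    # in that block has been spent; spending checks and flips the bit.
    class TxoBitfield:
        def __init__(self, num_outputs):
            self.bits = bytearray((num_outputs + 7) // 8)   # ~1 bit per output

        def is_spent(self, i):
            return bool(self.bits[i // 8] & (1 << (i % 8)))

        def spend(self, i):
            if self.is_spent(i):
                raise ValueError("output %d already spent (double spend)" % i)
            self.bits[i // 8] |= 1 << (i % 8)

    some_block = TxoBitfield(num_outputs=5000)   # hypothetical block with 5000 outputs
    some_block.spend(3000)        # a merkle proof showed this tx is output #3000 of the block
    assert some_block.is_spent(3000)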

The cool thing about this is that proof size might still be an issue, but the proof of TXO position is static. Once the block is set, you can create the proof; it doesn’t update the way UTXO proofs do. This is also not a consensus change. Nothing has to go into blocks; peers just have to choose to do this on their own. It does not require consensus between nodes to validate blocks.

As far as I know, nobody is actually working on it, so it’s in the further future, but it does help bitcoin node scalability.

Rolling UTXO set hashes

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html

Another subset of the UTXO set problem is that there’s no compact way to prove that a UTXO set is correct. XORing all the UTXOs together is not secure. Hence the idea of a rolling UTXO set hash: a 256-bit number that is fairly easy to calculate and that you can update whenever a new block comes in. If you’re a full node, you can record your UTXO set hash and then validate that your UTXO set hasn’t been corrupted. But it also helps with initial node bootstrap and initial block download. If you get the UTXO set hash from someone you trust, say someone shipping the bitcoin software, you can go anywhere to get the UTXO set, download it, and just check that the hash matches. Maybe, if things keep getting bigger, this might be a middle ground between running a lite node and running a fully-validating full node from the dawn of time itself.
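
One way such a rolling hash can work, sketched here in a MuHash-like style (this is an illustration of the idea, not the exact construction from the mailing list post):

    # MuHash-style rolling set hash sketch: order-independent, cheap to update per block.
    import hashlib

    P = 2**127 - 1   # toy prime modulus for illustration; a real scheme uses a much larger group

    def elem(utxo_bytes):
        # hash the UTXO into a nonzero group element
        return int.from_bytes(hashlib.sha256(utxo_bytes).digest(), "big") % P or 1

    class RollingUtxoHash:
        def __init__(self):
            self.acc = 1

        def add(self, utxo):                 # new output created in a block
            self.acc = (self.acc * elem(utxo)) % P

        def remove(self, utxo):              # output spent by a block
            self.acc = (self.acc * pow(elem(utxo), -1, P)) % P

    a, b = RollingUtxoHash(), RollingUtxoHash()
    a.add(b"utxo1"); a.add(b"utxo2"); a.remove(b"utxo1")
    b.add(b"utxo2")                          # different history, same final set
    assert a.acc == b.acc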

Transaction compression

http://diyhpl.us/wiki/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation/

We have this other problem that transactions are too large. We’re bumping up to 170 GB for the blockchain. It turns out that with a dedicated compressor you can get 28% compression. This still needs to be benchmarked on how fast decompression is, and this number is only for compressing a single transaction without any surrounding context. This can help bandwidth and storage.

Set reconciliation for transaction relay

http://diyhpl.us/wiki/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation/

Most of the bandwidth used by full nodes today is not spent sending blocks, but announcing that you have a txid. It’s very chatty like that: broadcast to all. What if instead you announced it to one or two peers and then used set reconciliation every so often to catch the ones you have missed? We know how to do efficient set reconciliation. This would help bandwidth and full nodes.

Block template delta compression

http://diyhpl.us/wiki/transcripts/gmaxwell-2017-11-27-advances-in-block-propagation/

Block propagation is fairly efficient. It takes about 40 kb to send a block. This is achieved with compact blocks, which send a summary of what’s in the block, assuming that you have already seen most of its transactions. There are two modes: high bandwidth mode and low bandwidth mode. In high bandwidth mode, I just send you the summary every time I get a block; it’s only 40 kilobytes. In low bandwidth mode, I notify you, then you request it, and then I send the summary, so there’s an extra roundtrip.

Bitcoin Core by default picks 3 peers to be high bandwidth peers and everyone else is a low bandwidth peer. The 3 that get picked are the last ones that gave us a block, because they are probably pretty well connected in the bitcoin p2p network.

The idea with block template delta compression is that every 30 seconds we take those 3 peers and send them a template of what we expect to be in the next block, and when the next block arrives, we just send a delta between what the template had and what the block actually had. It turns out that about 22% of the time that delta fits in under 1 packet, which of course is the golden point for low latency. This particularly helps if you are trying to do something crazy like bitcoin networking over a satellite link.

Dandelion

https://www.youtube.com/watch?v=VABppufTdpA&t=36m25s

https://arxiv.org/abs/1701.04439

https://github.com/sbaks0820/bitcoin-dandelion/issues/1

Another problem with fungibility on the network is that today there are a lot of nodes that connect to as many peers as they can and listen for transactions. These nodes are essentially looking for where transactions originate from, in an attempt to deanonymize the network. I ban a lot of them from my nodes, for example.

Dandelion was a proposal to solve this: when a node gets a transaction, 90% of the time it passes it on to a single peer, and in the other 10% of cases it broadcasts it to everyone. These are the stem phase and the fluff phase. There are a few other rules you have to add to make this robust and make it work. I hope to see this sooner rather than later because it really does help fungibility on the network.

Neutrino: Client-side block filtering (bip157/bip158)

https://youtube.com/watch?v=7FWKc8lM4Ek

https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki

https://github.com/lightninglabs/neutrino

https://github.com/bitcoin/bips/blob/master/bip-0157.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0158.mediawiki

Another proposal that helps fungibility on the network is Neutrino. The current way that lite nodes work is that they give full nodes a bloom filter of all the things they are interested in. This works against privacy, as you might expect: once you have seen one of those addresses, it’s almost trivial to figure out which peer is interested in that address, in other words which peer has that address in their wallet. It also requires every full node to look through the bloom filters and do calculations every time a block comes through, in an attempt to figure out whether to send the transaction to the lite node.

Neutrino inverts this. The full nodes produce a roughly 20 kilobyte block summary, and the algorithm to do this is pretty awesome. This summary allows the lite node to say “this is a block that I’m interested in” and pull down the whole block, ideally from a separate peer that isn’t even related to the first peer. This would definitely help lite node privacy. But the reason I say it’s a few years out is that the originator of this, roasbeef, is deep in lightning right now and I expect lightning network stuff to consume all his cycles for some time.

NODE_NETWORK_LIMITED (bip159)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014314.html

https://github.com/bitcoin/bips/blob/master/bip-0159.mediawiki

Another problem on the network: a number of years ago we introduced pruned nodes, which download all the blocks but throw away old ones. This saves a lot of disk space. It’s still a full node, but it throws away old data it doesn’t need. There’s a bit in the service field when you advertise a full node which says you will serve all blocks, but pruned nodes can’t set this. So there’s a proposal for a few other bits, to say “actually I do have the last 500 or 10,000 blocks”, and this will really help the bandwidth load on the current full archival nodes that aren’t pruned. I expect this very soon.

Peer encryption and peer authentication (bip150/bip151)

https://github.com/bitcoin/bips/blob/master/bip-0150.mediawiki

https://github.com/bitcoin/bips/blob/master/bip-0151.mediawiki

https://btctranscripts.com/sf-bitcoin-meetup/2017-09-04-jonas-schnelli-bip150-bip151/

It might also interest you that in the bitcoin p2p network we don’t currently authenticate or encrypt traffic between bitcoin nodes. It’s to some extent already public information, but there are analysis attacks against this. Encryption is low hanging fruit against passive attacks. But it also helps with trusted nodes, like if you have a lite node and you want to connect to your dedicated full node; this would make trusted setups easier, since there’s nothing out-of-the-box that does this yet.

Things that I wanted to mention

There are a lot of things I wanted to mention in this talk but I couldn’t, such as: bip8, spoonnet, zerolink, coinshuffle++, zero-knowledge contingent payments, discreet log contracts, lightning network, etc. I put them on this slide to make myself feel better. I am not going to mention any of them, unfortunately.

Conclusion

I’ll tweet out a list of references, it’s currently a google doc. If there are any questions then I would be delighted to answer them.

Q&A

Q: What do your hats say?

A: This one says “someone in bitcoin did something awesome again” and this one says “someone in bitcoin did something stupid again”. Anthony Towns suggested that this pair covers everything in bitcoin.

Q: Which of those projects are you working on?

A: No. I am working on none of these. I am actually working on lightning, and it’s incredibly exciting, I’ve been working on it for about 2.5 years. It’s soon going to be usable by normal people without losing money. It’s an exciting time. My timing is terrible today though, I’m here talking about other stuff.

Q: There’s a question on twitter that asks you to prove that you are not the output of an artificial intelligence.

A: Okay. I anticipated this question. I’m glad it was asked. I don’t believe such a proof is possible.

Q: Hi Rusty. On the dandelion project, do you think it could potentially hinder the ability to monitor the network for any nodes that are doing anything dodgy?

A: So, dandelion itself, not particularly. Dandelion does this stem and fluff approach. The idea is that the transaction will travel some way before it gets broadcast to the network, so the broadcast point is not necessarily the origin. There’s a timer in there so that if it gets sent down a dead-end then it will still get fluffed out. So what percentage of nodes need to be evil before this scheme is compromised? I think the numbers look good. It’s hard to monitor the entire bitcoin network; that’s kind of a feature in a way. I don’t think this will be the main issue. I was shocked once to see a block explorer identify, by geography, that I was the one who sent a bitcoin transaction.

Q: On one of your randomly generated slides, you made an intriguing comment where you said people were trying to deanonymize the network. Who are these people?

A: There are companies that do this as their job.

Q: Why?

A: I stay out of that world. I believe they are trying to sell information to places that have KYC/AML requirements and say “this person is dodgy”. KYC/AML are regulatory rules about knowing who your customers are. It sometimes conflicts with an anonymous cash system. They are generally not that clever, but it’s still a bit of an arms race. We need to keep on top of it.

Rusty, thank you so much for this presentation on the state of blockchain. Here’s a gift of gratitude from everyone and the organizers. Let’s give a round of applause to Rusty. Yes I’ll stick around for the rest of the conference and I’ll be around to answer questions. We have the penguin dinner tonight. It’s behind building 1, under the blue space, in the green space, is where the groups start walking.

\ No newline at end of file diff --git a/misc/2019-01-05-unchained-capital-socratic-seminar/index.html b/misc/2019-01-05-unchained-capital-socratic-seminar/index.html index dd7dc5e504..d9a96f493f 100644 --- a/misc/2019-01-05-unchained-capital-socratic-seminar/index.html +++ b/misc/2019-01-05-unchained-capital-socratic-seminar/index.html @@ -12,4 +12,4 @@

Unchained Capital Socratic Seminar

Date: January 5, 2019

Transcript By: Bryan Bishop

Tags: Research, Taproot

Category: -Meetup

Unchained Capital Bitcoin Socratic Seminar

Socratic Seminar 87 at BitDevs NYC has all of the links on meetup.com available.

https://www.meetup.com/BitDevsNYC/events/256924041/

https://www.meetup.com/Austin-Bitcoin-Developers/events/257718282/

https://twitter.com/kanzure/status/1081674259880120323

Last time

Last time Andrew Poelstra gave a talk about research at Blockstream. He also talked about scriptless scripts and using the properties of Schnorr signatures to do all of your scripting. There was also talk about taproot and graftroot.

J: The chaincode labs guys in NYC .. we had a lot of participation, it was very cool. John Newbery has been helping me to do Socratics. He goes through Bitcoin Core PRs and things like that. Tries to get everyone involved.

JM: He’s a very efficient person. He gets a lot done.

J: Yeah, hats off to John Newbery.

JM: When we started this, there was a signal group of everyone running meetups all over the world. He coached me a little bit, gave me some confidence. There were 3 people at the first one, and 4 people at the second one. Richard showed up both times.

J: Our first time we met was summer 2013 and it was a video game shop in Chinatown.

JM: Michael Flaxman was telling me about some of the first bitdevs meetups.

PL: At the one in NY, there was about 120 people there. They had everyone introduce themselves. 5-10 seconds per person tops. Unchained started hosting a few events recently.

Introduction

Bitdevs is a community for people interested in discussing bitcoin research and developments. We have a meetup that occurs once a month in NY. These are socratic seminars. In the weeks preceding the event, we collate all of the interesting content and research going on in the space. We put it together into a newsletter which you can find on the meetup page. All of these links are on the meetup page; then we gather, try to investigate these things, and work together to understand them. To give an idea of some of the content we look at: CVEs, research papers, IRC logs, technical blog posts, network statistics, pull requests in popular repositories like Bitcoin Core, lnd, joinmarket, etc. We sometimes look at altcoins, but they have to be interesting or relevant in some way.

The idea is that if you find something you want to discuss and it’s not in the prepared list then you can send it to the organizer and you can even lead discussion on that topic. We can spend anywhere from 1-10 minutes on a topic. We can give a setup on the topic, or someone else can, and if there’s a question or a comment or opinion to be had then feel free to speak your mind. Don’t feel concerned about being wrong; people can correct each other. We’re all learning here. We try not to disseminate bad information, but it happens. We try to catch each other.

We’ll start by going around the room; take 30 seconds to a minute to say your name and what you’re interested in, or if you’re organizing events and you want people to know. The idea is that if someone says something that interests you then you can hang out with them later. Then we go through the content. At the end of the meetup we have presentations: if someone is working on an open-source project or a company relevant to the space then we get them up here and let them get feedback from the community. Then we hang out afterwards. Once a month, that’s the drill.

No names in the transcript please.

Difficulty adjustment

I like to start with hacks or CVEs or something. Lately we have seen some significant downward difficulty adjustments. We haven’t seen difficulty adjustments downwards to that degree since 2011 or 2012; these were quite large. The importance of hashrate can get lost when we talk about proof-of-stake and things like that. Not too long ago, you have probably seen this, http://crypto51.app assumes that you can rent hashrate: if you can rent hashrate at going market rates, how much does it cost to 51% attack these coins? It shows just the cost, not how much you would make. How much you make depends on what type of attack you launch, naturally.

You can’t rent enough hashpower on the market to 51% attack bitcoin. But some of the coins have GPU-friendly algorithms, and there’s a ton of rentable hashrate out there. When you have low market cap coins with low hashrates that are worth enough to attack, you run into dangerous scenarios. Look at Ethereum Classic: 105% of its full hashrate is rentable. Nicehash is a hashrate rental service. You can rent more than enough hashpower for a very low cost, under $5,000, to 51% attack Ethereum Classic. When you see the market in the state that it is right now, people are burned pretty badly and so on; in normal scenarios attackers might be more interested in doing straight-up hacks of exchanges or wallet services or whatever. This is me opining here, but when prices go down and hashrate goes down, the attack surface for these 51% attacks grows…

Vertcoin

So how many of you know about vertcoin? I don’t know anything about it. They got hit real hard. There were a number of deep reorgs, the largest one having a depth of 307 blocks. I think we did some of the math at the last socratic: it was like 50-60 blocks deep in bitcoin if you just match by time. A 60 block reorg in bitcoin would be totally crazy; we figure we’re safe around 6-7 confirmations. This is particularly apt when you’re looking at these exchanges that are adding every shitcoin under the sun; they are very much exposed to these kinds of attacks. There’s the cost of adding the coin itself, getting the node running, adding infrastructure, but then you also have to consider, which I don’t think exchanges consider, how dangerous is this for our customers? If Coinbase adds vertcoin, and they say you only need 10-50 confirmations, someone deposits coins, a 51% attack occurs, a huge reorg, and all of a sudden Coinbase has lost some crazy amount of vertcoin or whatever. Their customers thought vertcoin was safe because maybe Coinbase Pro added it. An attacker can short the coin on the market, or they could do double spending. You can do a reorg where you get the vertcoin back and keep the bitcoin you traded it for. So maybe you also shorted, probably on another exchange.

The economic limits of bitcoin and the blockchain

paper: “The economic limits of bitcoin and the blockchain”

If I invest $100m in bitcoin mining hardware, I don’t necessarily want to attack bitcoin and lose my future profits. But as the hashrate goes down, and the price goes down, there are some scenarios, like miner exit scam scenarios, that Eric calls “collapse scenarios”. Under which scenarios does it make sense for a miner to burn the house down? To forego the entire future income from their hardware? What kind of attacks might allow this to occur? This has to do with the cost of ASICs relative to the wider hashrate in the network. We’re seeing a lot of older ASICs become more or less useless because they are not power efficient. If you’re a miner and you’ve got a hoard of old ASICs, what do you do with them? You don’t throw them out; they are going to sit somewhere in storage. If at the same time that bitcoin’s hashrate is coming down, you’ve tied a huge amount of money into future hardware that is not returning any investment, and you’ve got this completely useless old hardware as well, you can spin up the old hardware and launch an attack to recoup the costs of your future hardware and previous hardware and burn the house down. You just want to make your money back. When you have all of this hardware sitting around, it poses an existential risk. There are economic sabotage attacks too, like short selling, and a theoretical possibility that miners will want to exit scam. These are called “collapse scenarios”.

There are some mistakes in the paper; he thought that one day FPGAs would be made more efficient than ASICs. It might be possible to have something between an FPGA and an ASIC that is more cost efficient, but the assumption is not entirely reasonable.

Their source of income depends on the type of attack. Maybe they were able to do some double spends on exchanges and get an OTC wire that is more or less irreversible in some circumstances. They could do double spending and short the network at the same time.

The challenge is coordination without being detected. If you attack an altcoin, it’s easy to cash out to bitcoin. Vertcoin getting 51% attacked didn’t make the news in bitcoin world. You have a stable store of value to cash out to. But if you’re trying to attack bitcoin, you can only cash out to dollars, which is much harder because you have AML/KYC. And wire transfers aren’t that fast. Bitcoin Cash and its forks might be more vulnerable to this. Although, they checkpoint everything so it doesn’t matter.

There are definitely smaller altcoins where it is very cheap to do 51% attacks. There might not even be derivative markets here. You don’t need derivative markets for the altcoins because you can still do double spending attacks. Vertcoin’s attack is something we could learn from. Their double spending attack is more a lesson to me that having a lot of small altcoins with low hashrates is not really helpful to the ecosystem.

A maximum reorg depth is a bad idea because an attacker can feed a new node a private blockchain history, and the node won’t be able to get onto the right chain on the network until later.

You have all these miners that could be doing something productive in mining and protecting the network. The mining rigs could still have value. Could they generate value in excess of their recovery value to do an attack on the network? The economic reason why this might be unlikely is that if they are not able to mine profitably, they are probably not sitting with a ton of capital resources or liquidity to be able to expend the energy because it requires real cost to do that.

You need a lot of energy to run these miners, often at something like $0.02/kWh. You can’t get a giant burst on a short-term basis. No energy company is going to sell you $0.02/kWh power and let it burst to however much you need to do a 51% attack for only a week; they force you into 6 or 12 month contracts. This is why all the miners went offline at the same time: you saw the difficulty go down significantly over 3 difficulty adjustments; those were all expiring contracts in China. When the energy contract is over, they say they aren’t going to renew because the bitcoin market price is bad. This is how they have to plan. If you want to execute an attack on something like bitcoin, you would need not only the hardware, but an energy production facility like a hydroelectric dam or some insane array of solar panels where you have direct access. Nobody is going to sell you that much energy.

I was talking with a miner producing at $0.025/kWh. The marginal cost was $0.06 or $0.07 or higher; they could not be profitable if their costs were above $0.06 all-in. If you’re able to produce energy at a profit, or to procure energy and mine bitcoin at $0.025, and the marginal cost to produce is $0.05, then you have an economic incentive not to attack the network but to continue to act rationally and mine. There’s this huge economic web that says if you can mine profitably, you will. If you can’t mine profitably, and you can’t find energy below $0.06 in that scenario, can you profitably attack it? It’s not just gross, it’s net of whatever you are spending.

Do we have a guess at what the breakeven price of bitcoin is for people mining at $0.025/kWh? I’ve heard the break-even price is around $3800 or so for miners at 6 cents per kWh or below to be profitable. If there’s another drop in the markets, the cost curves adjust and you’re able to mine a larger share of it. There will still be profitable miners: the difficulty will come down, but the most efficient miners with the best hardware and best power contracts will stay around longer. When the price drops, there’s more likely to be 51% attacks or other hashrate attacks. Right now the bottleneck is hardware, but eventually the bottleneck is going to be energy. It will be hard to get 51% of the energy necessary to attack bitcoin because hardware will be ubiquitous; it will get cheaper, better, faster. Energy will be the bottleneck. Producing something like 51% of the energy is a really hard thing to do. Right now that’s extremely hard, and as hardware gets easier to obtain, energy is going to be a much bigger bottleneck.

Energy production is unusual because it has a natural diseconomy of scale. If you’re building an energy production facility, you’re not thinking about 51% attacking a cryptocurrency in the next 6 months. This is a 40 year capex project, and you’re concerned if a giant customer goes away. If it’s a nuclear facility, you can’t make adjustments like oh I am just going to make 10% less energy because one of my customers is gone. Acquiring energy at scale is really hard. When power becomes your limiting factor, that’s a really cool protection. This idea is from BlueMatt.

Blocksonly mode, mempool set reconciliation and minisketch

gmaxwell bitcointalk post about blocksonly mode

In Bitcoin Core v0.12, a blocksonly mode was introduced. This is where you operate a node and you only receive the blocks themselves; you don’t relay transactions, so the amount of bandwidth your node uses is going to be fixed at roughly 144 MB/day. However, you might want to participate in the relay network and see unconfirmed transactions before they get into the blockchain. Currently, the thing that uses the most bandwidth in your node is p2p transaction relay, which works through the mempool. Those roundtrips and the data being passed back and forth are by far the largest consumer of bandwidth in your node. This is a bit of a problem: we want to limit the resources needed to operate a node while still giving the node operator the greatest insight into what’s going on in the network, so they can avoid double spends and build blocks quickly. You want to know because you want to know. The question is, how can we address this huge bandwidth issue?

In 2016, gmaxwell outlined a method of doing mempool set reconciliation. The name gives you the idea. Suppose you have a few different mempools; after all, there’s never a universal mempool, and we all see different parts of the network. Normally we do INV (inventory) messages back and forth. But instead, what if you reach out to your peers and reconcile the differences in your mempools? The idea is to use sketches, something in computer science that I was not familiar with before. It’s a set reconciliation scheme.

sipa recently published a library called minisketch, which is an optimized library for BCH-based set reconciliation. He went through some popular sketch schemes. Look at the sketch size factor for 30 bit elements: it’s constant, which is really nice. This is an alternative to invertible bloom lookup tables (IBLTs), which are probabilistic and have failures. The old school SPV lite client scheme, where a lite client uses bloom filters to update the set of unspent transactions in your wallet, has serious privacy issues and so on, so it’s not useful for this particular use case. But minisketch is a great library; it has all kinds of useful documentation on how you can use it in practice and implement your own set reconciliation. This is useful for reducing bandwidth.

What’s the tradeoff, or is it a free lunch? What if we have an adversary with a poisoned mempool sending garbage data to another peer? Well, dosban stuff will kill that peer eventually. It won’t propagate because it’s invalid transaction data. With minisketch you need to do some computation on the client side to compute the sketch reconciliation.
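
As a toy illustration of the sketch idea (capacity for a single difference only; minisketch generalizes this with BCH-style power sums over a finite field):

    # Toy set-reconciliation sketch with capacity 1: each peer sums (and sums the
    # squares of) its txids mod a prime; subtracting sketches recovers one missing txid.
    P = 2**61 - 1   # toy prime field

    def sketch(txids):
        s1 = sum(txids) % P
        s2 = sum(t * t for t in txids) % P
        return s1, s2

    mine   = {101, 202, 303, 404}
    theirs = {101, 202, 303}        # peer is missing txid 404

    d1 = (sketch(mine)[0] - sketch(theirs)[0]) % P
    d2 = (sketch(mine)[1] - sketch(theirs)[1]) % P
    # If exactly one element differs, d1 is that element and d2 confirms it.
    assert d2 == (d1 * d1) % P
    assert d1 == 404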

Revisiting bip125 RBF policy

Russell O’Connor wrote an email to bitcoin-dev about “Revisiting bip125 RBF policy”. Using replace-by-fee, you can signal in your transaction that you may want to replace it in the future with another transaction that pays a higher fee. It has some anti-DoS mechanisms in it: you can’t just bump your transaction by 0.00001 satoshi or whatever, there are minimums you have to comply with.
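
For reference, bip125 explicit signaling is just an nSequence check, roughly:

    # BIP125 explicit signaling: any input with nSequence < 0xfffffffe opts the
    # transaction in to replacement (unconfirmed ancestors can also signal for it).
    def signals_rbf(input_sequences):
        return any(seq < 0xfffffffe for seq in input_sequences)

    assert signals_rbf([0xfffffffd, 0xffffffff])     # explicitly replaceable
    assert not signals_rbf([0xffffffff, 0xfffffffe]) # final / not signaling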

The use case for replace-by-fee is pretty obvious, like if you send a payment to an exchange but it doesn’t get confirmed on the blockchain fast enough. If there’s a tree of pending transactions, my RBF transaction has been pinned. I would have to pay a lot to get the whole tree going, depending on how big that tree is.

See also “CPFP carve-out for fee-prediction issues in contracting applications (lightning network etc)”. If I’m signing commitment transactions that close LN channels and I’m not broadcasting them right away, how do I predict the fee rate I’m going to pay? I don’t know what the fee rate is going to be in a month. I need pre-signed transactions, and I need to talk to my counterparty to get their signature anyway. So the question is what do we do if a month goes by and the mempool and fee rates spike. Maybe my counterparty tried to do a unilateral close against me with an old commitment, so my counterparty is cheating. In LN, what do you do? You have a justice transaction; there’s a timelock delta where I have 144 blocks to prove that my counterparty cheated against me. So I have to broadcast the most recent commitment to close that channel, and if that transaction doesn’t get confirmed in that window, then my counterparty wins. Why not use replace-by-fee or child-pays-for-parent? In CPFP, you spend the unconfirmed parent with a child that pays a high enough fee rate to cover the whole package. But there are issues with both of these things in the lightning network context. You can’t do replace-by-fee in LN because it’s multisig, and if the counterparty doesn’t care then they aren’t going to sign a new transaction for you. You can’t do CPFP because of the nature of the scripts that are conditionally locking the outputs. These were previously called breach remedy transactions. Your counterparty could try to pin your justice transaction behind a low fee-rate transaction, or do CPFP themselves, and then it’s a fee competition between both parties.

In BlueMatt’s post, he mentioned you can’t do CPFP on a commitment close transaction. But what could be done is that small outputs for both parties, unrelated to the lightning commitments, could be added to the transactions. Then either party could do unconfirmed spends of the commitment without signatures from the counterparty, but this exposes you to the transaction pinning attack, where your counterparty could try to CPFP it, get it stuck, and get it pinned.

One solution is to add more relay policies. At the node level, you have relay policies, which Bryan was talking about earlier, like the minimum feerate to relay transactions or the replace-by-fee rules. These are not consensus rules; each node can have its own version of them. One rule that would help prevent the problem BlueMatt identified: at various points there have been discussions about solving the pinning problem by marking transactions as “likely to be RBF”, where children of such transactions would be rejected by policy unless the resulting package would be near the top of the mempool. So if I try to do a pinning attack that would leave the transaction package with a low fee rate relative to other transactions on the network, that attempt would be rejected by nodes on the network. RBF alone isn’t going to work because of the multisig aspect of these LN transactions.

Would ajtown’s OP_MASK cover the nsequence value so you can hide replace-by-fee?

Replace-by-fee transactions are currently 5-6% of the transactions on the network. I was evaluating wallets and one of the criteria was RBF support, and the situation is not good. Even with some of the big exchanges, I had replace-by-fee transactions pending in my own wallet for over a month. In some cases, some of the wallets actually credit my balance so it looks like I have double my balance. It’s not an available balance, and I wasn’t able to take funds in that time, but the confusion for the user is there from a usability perspective.

Electrum handles replace-by-fee correctly. It’s not a long list of which wallets support this. If we want large volume BTC companies like exchanges to adopt things like replace-by-fee they want to make sure the ecosystem supports it. It can’t be just petertodd’s opentimestamps server using replace-by-fee.

jnewbery is looking for feedback on the replace-by-fee chapter in the Bitcoin Optech book or documentation. It’s in progress.

Some of these web wallets, even after one of the replace-by-fee transactions has been confirmed, the other one still contributes to a pending balance on the web interface, because it has a different txid even if it will never confirm. So some of these are pretty poorly built.

MtGox blamed transaction malleability for their problems. Most developers think a different txid means a different transaction, but you really have to look at the utxos being spent and see which transactions replace each other. It’s not trivially obvious. People think a txid is a perfect identifier, and then they don’t test for this because it’s a rare edge case.

txprobe paper

“TxProbe: Discovering bitcoin’s network topology using orphan transactions”

Who knows about Chainalysis? There are operators in the network whose job it is to provide metadata to companies that have compliance programs, law enforcement agencies, and other actors who are interested in knowing which public keys are associated with which identities. Chainalysis sells you this information; they have a detailed map of the network. If a transaction has been associated with Coinbase, they will probably know. At one point back in 2013, they famously got caught up in this claim that they had deployed a non-trivial number of sybil “full nodes” across the bitcoin network. They were attempting to connect to every node in the network. Why would they do that? If you’re running a full node and not running it over tor, then they figured they could tell which nodes were the source of a transaction. If an attacker has a connection to every single node in the network, and is listening to all the INV messages, the first node to relay a transaction is probably the source of that transaction. A sybil attacker can deanonymize transactions in the network by sybil attacking the network and doing this kind of analysis.

These attacks are difficult to protect against. The characteristics of the network protocol in Bitcoin Core are such that a crafty attacker can leverage them to do different types of attacks. Another example is eclipse attacks, where I as an attacker attempt to saturate a node’s connections so that they are only connected to me. If I do this with a miner, I can do selfish mining attacks; if it’s a merchant then I can do double spending, and they are in my own little universe. In your node, you have buckets of IP addresses and so on, and a lot of these attacks have been mitigated to some degree; the mitigations that were proposed have been implemented. But based on how IP addresses get kicked out of or pulled into buckets, an attacker with a reasonable number of IP addresses can saturate connections on a node just by kicking nodes out of each bucket. A lot of researchers have spent time looking at deanonymization attacks.

Andrew Miller and some other researchers looked at how Bitcoin Core’s mempool handles orphan transactions. In this context, an orphan transaction is a transaction you receive whose parent you don’t have: it spends a utxo from a transaction that isn’t in your mempool. They investigated the mechanics of the buffer that holds these, the MapOrphanTransactions buffer, and found a way to use it to map the network’s connections, which is a critical piece of launching other types of attacks, such as eclipse attacks. They weren’t trying to figure out the source of transactions, but to figure out the connectivity of the network; the important thing to understand is where the edges in the network are.

If you have two sets of nodes and you want to figure out the connections between these two nodes or clusters of nodes, you can use marker transactions and flood transactions and other types. You use a flood transaction which is an orphan transaction and you send this flood transaction to a sector set. It’s an orphan, so it goes in their orphan buffer. Then for the number of nodes in the sector set, you create n double spends that are parents of these flood transactions. If you have 10 nodes in the sector set, you create 10 conflicting parent transactions and you send those to the sync set. And then what you do is you send marker transactions to the sector set members, and depending on how quickly the nodes are responding to the INV messages about which they will or will not accept, you can deduce which nodes are connected to each other.

Something like dandelion or delays on orphan transaction relay may be able to help mitigate this attack.

Dandelion (bip156)

“What is the tradeoff between privacy and implementation complexity of dandelion bip156”

Dandelion is another useful privacy development. There’s a stem phase and a disperse phase. This is a separate relay protocol. It’s called dandelion because it looks like a dandelion when you look at the network. Before dandelion, you would broadcast to all your peers. In dandelion, you broadcast with a stem phase and then later it becomes a fluff phase. Each hop during the stem phase is a coin flip as to whether it will go into the fluff phase.

In dandelion there’s a big tradeoff; there’s a great stackexchange post by sdaftuar about this. It relates to denial-of-service vectors. If someone sends me a bad transaction or a bad block, we ban them, right? But the issue here is that my immediate peers in the stem phase don’t know what the transaction is; it might be invalid or maybe not something that would get into the mempool in the first place. A bad guy could leverage dandelion to DoS dandelion peers, sending garbage through these stem phases that won’t actually make it into the mempool. So you have this DoS vector in protocols like dandelion. You want the privacy but you get this exposure to DoS. It’s an unfortunate tradeoff.

The takeaway here is: “if we don’t have simpler solutions that work, is it worth implementing something akin to the current mempool logic to introduce dandelion into Bitcoin Core?” We don’t know if it makes sense. How do you encrypt it so that only…

Dandelion is voluntary during the stem phase, it’s not encrypted. Each node does a coin flip, you have to rely on the good behavior of the node you’re connected to.

What’s the best way to send an anonymous transaction? Maybe use blockchain.info and use pushtx while using tor. Some mining pools have pushtx endpoints on the web which are custom home-built. I know luke-jr had one for a long time. Hitbtc has a credit card form for one, so you can pay for the broadcast of your transaction.

Statistics

Let’s do some statistics, which is usually pretty fun. The number of utxos has gone down, so probably a service has been consolidating their utxos. Not long after I looked at the graph, I saw that Coinbase tweeted that they moved 5% of all BTC into their cold wallet, and 25% of all LTC, and 8% of all ETH. During this movement they were consolidating utxos. They have like 1% in their hot wallet.

Every full node has to keep the current UTXO set in RAM. You want this lookup to be fast. You need this if you’re a miner. If you need to have this in hard drives, then that’s very slow. The size of the UTXO set is how much RAM you need on your box if you’re a miner. So it’s important that the utxo set size is low. If you want to run a fully validating node, you need the full utxo set. So by reducing utxos, you can reduce the total resource requirements.

What are the storage requirements for running a pruned node now? When you are running a pruned node, what are you storing? What are you validating against? You validate the header, each transaction, but then you update your utxo set, and you throw away the extra data. You are fully validating, but you don’t have an archive log of spent transactions, you only have the current state. You need the entire transaction for the utxo, because that’s the tx hash, so you need that. At Breaking Bitcoin, someone made large transactions but only spent one output, and it just broke pruned nodes or something because of how it worked out. For reorgs, it’s not completely pruned, you have a few hundred blocks.

Mempool statistics

The mempool has been relatively quiet. We saw a bump on December 18th. Oh, that was Coinbase, same day as their tweet. At the peak there was about 18 megabytes of transactions in the mempool on December 18th. That was the biggest spike. This is joechen’s bitcoin mempool stats; I can post the link later. It’s based on someone else’s work, he’s just repurposing it. It’s color coded. For fee estimation, you could look at https://whatthefee.io or https://bitcoinfees.21.co or something. Some people use Bitcoin Core’s fee estimator. Mempool-based fee estimation is something that has more recently cropped up in discussions in the community. At 8:30am CST there’s always a bump because that’s when Bitmex does their consolidation. Sometimes people use vanity addresses, which is a bad idea due to the privacy problems of address reuse of course; don’t reuse addresses.

So those 18 MB, would it take 18 blocks? Well, yes, assuming no new better fee rate transactions come in.

What’s the average mempool size? Well let’s use joechen, go to the few year view, you can see the huge spike from late 2017. Good question.

Block size has been consistent for the past month. You can see the witness size in there too, it’s on https://statoshi.info it’s very useful.

Asicboost

There are two generally understood mechanisms by which miners can leverage what is now called asicboost: one is known as overt asicboost and the other as covert asicboost. This became a particularly contentious topic back in 2016-2017 when segwit was attempting to be deployed. The header of a bitcoin block is 80 bytes. sha256 takes input in 64 byte chunks and puts each through a compression function, producing a midstate which doesn’t have as much entropy as the final hash. You can find collisions for those midstates. See “The problem with ASICBOOST” by Jeremy Rubin. Here’s your 80 byte header: you have the version bytes, previous block, merkle root, timestamp, nbits, nonce. sha256 has to break that down, and at the 64 byte breakpoint some of the merkle root is cut off into the second 64-byte chunk. There are two pieces of data in the first chunk that can be “rolled”, i.e. modified without breaking consensus. The first is the version field: it’s not a consensus-enforced field, so a miner can put whatever they want within constraints. The second is the merkle root, which is determined by the order of the transactions, and that ordering is not consensus-enforced. (Bitcoin Cash has a canonical transaction ordering, but even then they could probably still do covert asicboost.) Because version rolling is visible to everyone, it is called overt asicboost. If you’re changing the version constantly, the midstate will be different, and the final hash will be different. It’s possible to find inputs to the hash function that collide at the midstate, and if you can find collisions at the midstate then you can reuse that work. You’re not going to be mining faster, but you will gain some amount of efficiency in your mining operation, because you’re using less power per hash output: part of the function you don’t have to do repeatedly, you just do it once, find collisions there, and then continue with the mining process. You can also do the same thing with the merkle root, which is more covert, because from the outside you can’t necessarily tell that a miner is reordering transactions, though there are indications that miners do. Once a block is public on the network, it’s not clear that a miner did that.
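
To make the 64-byte boundary concrete, here is a small Python sketch of how the 80-byte header splits across sha256 chunks (field layout per the standard header format):

    # The 80-byte block header split into sha256's 64-byte chunks.
    fields = [("version", 4), ("prev_block", 32), ("merkle_root", 32),
              ("timestamp", 4), ("nbits", 4), ("nonce", 4)]
    assert sum(size for _, size in fields) == 80

    offset = 0
    for name, size in fields:
        in_first_chunk = max(0, min(offset + size, 64) - offset)
        print(name, "bytes in first chunk:", in_first_chunk)
        offset += size
    # version: 4, prev_block: 32, merkle_root: 28 -> the merkle root straddles the
    # chunk boundary, which is what makes both the version-rolling and the
    # transaction-reordering tricks described above possible.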

This became contentious in 2016-2017 in relation to segwit, because segwit has a second merkle root, which is not put into the header of the block since that would have made it a hard-fork. There’s a new serialization format for blocks where the signatures are stored in a new way, and because the signatures are not in the transactions themselves, there has to be an indication of where the signatures are. To make segwit a soft-fork, that merkle root is put in the coinbase transaction, not in the header. Segwit therefore broke covert asicboost: the transaction reordering technique no longer works, because with segwit active you can’t reorder transactions and still gain that asicboost efficiency.

Greg Maxwell investigated some firmware and concluded that it had logic capable of performing asicboost. Bitmain probably didn’t want to lose the roughly 14% efficiency gain they got by implementing asicboost, so they had an incentive to block segwit’s deployment. If you want to do overt asicboost, most people are okay with that. There’s software out there to do it, like slushpool’s Braiins firmware, and Bitmain also published the ability for their users to do this. Over the last couple of months, the number of blocks produced with rolled version fields has increased. With version rolling you’re effectively getting more nonce space, and it’s way more efficient than covert asicboost by orders of magnitude. Miners have an immense incentive to use overt asicboost; it’s a huge efficiency gain. Miners are naturally competitive and they will do whatever they can to compete. As long as it’s not creating perverse incentives to delay network upgrades, overt asicboost sounds okay to me.

Bitcoin playground

This is a script playground where you can watch scripts be replayed basically in realtime, like the pushes and pops interacting with the stack. This is useful if you’re learning about bitcoin script. Also there’s btcdeb.

Output script descriptors

This was an idea proposed by sipa. A private key inofitself does not transmit any information about the scripts which are conditionally locking the outputs associated with that key. There’s not much information you can gleam from a private key on its own. The idea of output script descriptors is that it’s appended to a private key and gives information about that key itself, like what’s the script– is it 2-of-3, 3-of-5, is it a pay-to-pubkeyhash, is it a pay-to-pubkey? It will even give you information on the derivation path so if it’s a hierarchical deterministic wallet, and the account structure, etc. If you’re importing and exporting keys, you really want to know this information. Especially with partially signed bitcoin transactions (PSBT), a new serialization format which helps with things like offline signing with a hardware wallet.

PR 14477: Add ability to convert solvability info to descriptor

These help your wallet figure out how to sign for the scriptpubkey.

PR 14955: Swithc all RNG code to the built-in PRNG

bip66 was a soft-fork where the … strict rules were enforced for signatures. OpenSSL had a bug where on Windows 32-bit vs 64-bit would disagree about signature invalidity. Bitcoin Core has been stripping out openssl for a long time. One of the places that it is still being used in Bitcoin Core is in some RNG scenarios. So getting rid of it and replacing it is pretty important. So what sipa has been working on is designing a context-dependent random number generator scheme where you have different tiers of RNGs that have performance tradeoffs based on how much randomness you need. You need a ton of randomness if you’re generating a key, but sometimes you just need a random number. PR 14955 doesn’t remove openssl but it does introduce other RNG schemes that will be in its place and it will be slowly deprecated over time. This is important work being done here.

Bitcoin Core uses rfc6979 for the k value. When you’re producing a signature, you need to produce some randomness and if it’s insufficiently random or reused across signatures then an attacker can deduce your private key. There were a couple of bad wallets did this. I think the Bitcoin wallet for Android had this problem back in the day where they were seeding the RNG with bad entropy. Also, android securerandom wasn’t secure, it was like bouncycastle or something. It was the library that people recommended which is ironic. Deterministic K reuse is why ps3 got hacked too. I think that was by geohot, same guy who did the iphone jailbreak? It wasn’t bouncycastle on android actually, .. it was the Java Cryptography Application (JCA)…

Any chance of BLS and pairing crypto getting into bitcoin? It doesn’t have the K value there. Uh let’s get Schnorr signatures first.

Stricter and more performant invariant checking in CheckBlock, PR 14837

Stricter and more performant invariant checking in CheckBlock, PR 14837

This was where the double spending bug was introduced. Refactoring consensus code is dangerous because it’s really hard to get this right. We have to be careful. This is a pull request from Jeremy Rubin. He’s adding additional checks to protect against the failure that occurred in that double spending CVE… This is changing consensus-critical code. There’s a big debate in the thread about whether it’s worth the risk in the first place. In this bug, a miner could produce a block that had a transaction that spends the same output a second time. The mempool checks are much more strict than just checkBlock. Folks considered the duplicate check was a tthe mempool level… but if a block was produced by a miner that includes transactions never propagated across the network, that duplicate duplicate check is needed. That was a failure of testing, peer review, and a general overall failure. This bug was introduced in v0.13, and v0.12 nodes would not have been effected by it and it would have caused a hard-fork or nasty split. Anyway, I just wanted to have a short discussion about modifying consensus code.

libconsensus would be helpful because you need to be bug-for-bug compatible in a consensus network like tihs. You can’t write a nice yellow paper to make a specification for bitcoin… This is also an argument against multiple implementations of bitcoin consensus rules. If a block comes out tha thas a transaction that breaks some nodes but doesn’t break other nodes, then you’re causing a network split. That’s a dangerous thing. The argument is that if we’re going to have a failure let’s all fail together.

Lightning stuff

lnd, the go implementation of lightning funded by Lightning Labs, recently hit v0.5.1. There’s some bigfuxes in here.

Height hint cache: What happens if you have an open channel and for one reason or another, your node goes offline? What’s the first thing you want to do when you come online? You want to check out your transactions, did your party attempt to do a unilateral close against you? So you check the blocks real quick, and then you go on your way. A lite client scenario is more of an ordeal here… you have to call out to a full node, you have to get your lite filters the client side filters, and you might have to go far pretty back in history. The height hint cache helps you cache the results of previous scans and start scans from where you previously left off. Useful performance improvement, pretty simple.

Validating received channel updates: We were trusting nodes in the network to give us channel update messages from other nodes in the network that were not good, and we were not checking to see if what they were saying was true. So someone could lie to me about the state of a channel or if there was something wrong with it or if it was unroutable, and he could do that to such a degree that he could shift the way that payments were being routed in the network. From there, you can do nasty things like maybe some massive closure attack against the network or deanonymization attacks or something. This was a nasty vulnerability in lnd. So now there’s some signature checks and making sure the messages are coming from who they are supposed to come from.

Rate limiting: There were some other DoS vulnerabilities exposed. Older nodes were spewing out tons of channel update messages, causing DoS attacks against the nodes they were talking about. An old node could cause this problem, but so could an adversary. lnd will now rate limit peers that are requesting excessive amounts of data to ratelimit DoS attacks.

PR 2006 Scoring-based autopilot attachment heuristic: This autopilot scheme will open a channel for you. It’s not very smart, it looks at the network and weights nodes by the number of channels they have and then randomly selects a node to connect to based on this weighting. This ends up causing nodes that have a lot of channels are often selected by autopilot to open up a new channel, which is not good for network topology. But you could allow the autopilot scheme to have multiple heuristics by which to score nodes in the network, and then open up the channel on the network. This doesn’t implement the other heuristics, but this is the framework allowing autopilot to be better. Also, c-lightning is doing its own autopilot scheme. This is a centralization problem and also it could cause routing problems if lots of nodes went offline or something.

PR 2313

multi: Implement new safe static channel backup and recovery scheme, RPCs, and CLI commands

We’re spoiled with bip32 hierarchical deterministic wallets, you can recreate all of your keys from your seed. But with lightning you don’t have that luxury. The commitment transactions are not deterministic, they are based on signatures and scripts and so on. The nature of lightning is that you can’t have just a seed and from that reconstruct all the commitment transactions you’ve made with a counterparty. The danger here is that you’ve made a lot of commitment updates, you go offline, you lose your data and you don’t have a backup, or your backup is old, if I try to broadcast an old state then it’s going to look like to my counteprarty that I’m cheating which is bad. So the other guy is going to use breach remedy or justice transaction. All because my node didn’t know about the latest state. Having sufficient backups to reconstruct the channel if you have some sort of failure at the node level is critical. Some people are using rsync, but this doesn’t work if you’re doing rapid commitment updates. So you might not stay in sync with yourself. So in this PR, roasbeef is working on doing static backups. You have some seed, and this only works for opens and closes right now. Ultimately there will be backups for individual commitments. This is just the beginning. Every time you do opens and closes, you’re taking some seed and encrypting a backup file and dumping it somewhere. There’s this whole recovery flow- I don’t want to say anything wrong, I haven’t read it closely. This is a “channels.backup” file. This is the same problem with p2sh and redeemScripts. Output script descriptors sure do cover p2sh.

pylightning: Add a plugin framework

cdecker did a pull request introducing pylightning. This is a plugin framework for c-lightning. In lnd, you can create cookies or macaroons that have more limited access controls for RPCs, where a given user can only produce invoices but can't actually spend coins in the wallet or something. If you're a merchant, you don't want your coins exposed in a hot wallet, but you might want to be able to generate invoices so that customers can pay you. In c-lightning, they are going for a plugin framework: little scripts that can talk over json-rpc to the node. They have been working on adding this framework, kind of flask-style. The plugins here are written in python, but you're communicating through a unix socket, so it doesn't have to be python. It's json-rpc over a unix socket.
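
As a rough sketch of what such a plugin can look like (names here follow the pylightning framework as I remember it; check the c-lightning repository for the actual API):

```python
# A minimal c-lightning plugin sketch using the pylightning framework
# described above. Class and decorator names are from memory of that
# framework and may differ from the current API.
from lightning import Plugin

plugin = Plugin()

@plugin.method("hello")
def hello(plugin, name="world"):
    """A new JSON-RPC method that lightningd exposes once the plugin loads."""
    return {"greeting": "Hello, {}".format(name)}

# lightningd launches the plugin as a subprocess and exchanges JSON-RPC
# messages with it; run() enters that message loop.
plugin.run()
```

You would point lightningd at the script with its plugin option and then call the new method through lightning-cli.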

cdecker gave a good talk about the architecture of c-lightning. Rusty said I know C and I’m a linux developer so I’ll write it in C.

Doesn’t look like there’s any authentication here. Good question. Maybe TLS over the socket or something.

Don’t trust, verify: A bitcoin private case study (coinmetrics)

This was a simultaneous fork of Bitcoin and zcash, or a fork-merge or something. With zclassic, you have a shielded pool and zero-knowledge proof magic. In bitcoin, all the outputs are visible and you can validate that coins are not being created out of thin air. In the shielded pool, assuming the zero-knowledge proofs and the math hold, by validating proofs you should be able to know that coins are not printed out of thin air. The reason I'm bringing up this blog post is because there are interesting conversations to be had about modifications to bitcoin. It's been proposed that in theory we might want to see confidential transactions in bitcoin, where we use additive homomorphism to mask the amounts in the outputs. I recommend reading the grin introduction for bitcoiners document on their github repository, it's a good introduction.
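
The additive homomorphism being referred to can be illustrated with a toy Pedersen commitment. This is only a sketch: it uses exponentiation modulo a small prime with made-up generators, not a real elliptic curve group.

```python
# Toy Pedersen commitments: C = g^value * h^blinding (mod p). The product of
# two commitments commits to the sum of the values, which is what lets
# confidential transactions check inputs == outputs without revealing amounts.
# Parameters are illustrative, not secure.
import random

p = 2**31 - 1      # small prime modulus (toy size)
g, h = 5, 7        # "generators" assumed to have an unknown discrete log relation

def commit(value, blinding):
    return (pow(g, value, p) * pow(h, blinding, p)) % p

r1, r2 = random.randrange(1, p), random.randrange(1, p)
c1 = commit(100, r1)    # commitment to an input amount
c2 = commit(42, r2)     # commitment to another amount

# Homomorphism: multiplying commitments adds the committed values and blindings.
assert (c1 * c2) % p == commit(142, r1 + r2)
```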

The tradeoff you have to make is perfect binding vs perfect hiding. Confidential transactions rest on the discrete log problem. We already know that if the discrete log problem is broken in bitcoin, then for transactions with exposed public keys (signatures and address reuse) an attacker would be able to steal those coins. But once it's not only the coins that are conditionally locked with ECDSA keys, you as an implementer of the scheme have to decide what happens when an attacker leverages a discrete log break: will they be able to decrypt the entire history of confidential transactions, or will they be able to print coins without anybody ever knowing? That's perfect hiding vs perfect binding.

With zksnarks, I think they picked perfect hiding. So if zksnarks are broken, then someone will be able to print zcoins or whatever they are called and nobody would be the wiser, unless someone pulled them out of the shielded pool. Maybe the Zcash company is already doing that and causing silent inflation. For bitcoin, it could break the scarcity of the coins. But some people might prefer perfect privacy. Some would prefer for the privacy features to be done on a sidechain. Some people want mimblewimble on a bitcoin sidechain. You have perfect hiding in the sidechain but perfect binding on the mainchain. You isolate any sort of shenanigans into that sidechain.

Blockstream’s new product Liquid.. they forked Bitcoin Core v0.14 and implemented confidential transactions and confidential assets. You could deterministically peg coins in there, mix them around privately, and then pop out. Masking the amounts in the outputs is not enough to give anonymity in the confidential transaction scheme of course, but it certainly helps.

The first implementation of ring signatures was in bytecoin. There was no premine or whatever, but I think something like this happened in dash as well actually… the folks who released the code had a fast miner that they kept to themselves, something CUDA-based. They released the code, everyone started mining, and they were mining many orders of magnitude faster than everyone else. So they had a head start on everyone. Okay, the guy who started dash was the one… But with bytecoin, they secretly mined for a while and then said hey guys, you just haven't seen this, it's been here the whole time. Monero also uses cryptonote. They use RingCT. There was an inflation bug that was found in monero before anyone exploited it there, but someone did exploit it on bytecoin. It got exploited there and then monero patched it. It's very easily detected.

Stackexchange has a question “How do I use the CryptoNote inflation bug” - nobody seems to have answered that one hmm.

OP_MASK

bitcoin-dev post: Schnorr and taproot (etc) upgrade

This lets you sign anything that matches this form, and you can mask certain parts of the transaction. This is instead of SIGHASH_ALL or SIGHASH_NOINPUT. It might be similar to Elements' bitmask method. If you can MASK the sequence field, then you have replace-by-fee because your counterparty has already signed off on it. So then you would have it. I don't know how granular the masking gets; that's the part I was curious about. The lightning network whitepaper was originally written with SIGHASH_NOINPUT. Roasbeef was saying you don't even need to sign amounts, which you could MASK, in which case that one signature is good for the entire channel state all the way through because the amounts are masked off. If they try to steal from you, you have the revocation hash and then you could fill in whatever you think is the fair thing. In segwit v1, if you have all of this, then lightning gets much simpler, because you don't have to send so much data back and forth. With watchtowers: if you're using an offline node or something, you can't do justice because you're not watching. But with a watchtower, you would give them the commitment or justice transaction and they would monitor the chain for you. The nature of lightning now is that the watchtower would need to keep every single commitment. So this is burdensome and annoying, but with eltoo they don't need to store all states, they can just store one state and rewrite it effectively. It's much more efficient in that sense. You could introduce better lightning protocols without forcing everyone to upgrade: when you do the handshake with your counterparty, if they are eltoo-savvy then you can use that or another version or whatever.

References

Bitcoin Core PRs:

Optech Newsletters:

[bitcoin-dev] Schnorr and taproot (etc) upgrade https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html

[bitcoin-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016518.html

[bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016488.html

Minisketch: an optimized library for BCH-based set reconciliation https://github.com/sipa/minisketch

Lightning:

lnd v0.5.1 release https://github.com/lightningnetwork/lnd

c-lightning and lnd PRs:

SLP39 Olaoluwa Osuntokun (Roasbeef), CTO of Lightning Labs https://stephanlivera.com/episode/39

Research:

MIT DCI’s Cryptocurrency Research Review #1 https://mitcryptocurrencyresearch.substack.com/p/mit-dcis-cryptocurrency-research

Simplicity: High-Assurance Smart Contracting https://blockstream.com/2018/11/28/simplicity-github/

Pedersen swap: https://github.com/apoelstra/scriptless-scripts/blob/master/md/pedersen-swap.md

TxProbe: Discovering Bitcoin’s Network Topology Using Orphan Transactions http://arxiv.org/abs/1812.00942

JuxtaPiton: Enabling Heterogeneous-ISA Research with RISC-V and SPARC FPGA Soft-cores http://arxiv.org/abs/1811.08091

Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains https://eprint.iacr.org/2018/1188

Misc:

Who controls Bitcoin Core? https://medium.com/@lopp/who-controls-bitcoin-core-c55c0af91b8a

What is the tradeoff between privacy and implementation complexity of Dandelion (BIP156): https://bitcoin.stackexchange.com/questions/81503/what-is-the-tradeoff-between-privacy-and-implementation-complexity-of-dandelion/81504

bitcoin script playground: https://nioctib.tech/#/transaction/f2f398dace996dab12e0cfb02fb0b59de0ef0398be393d90ebc8ab397550370b/input/0/interpret?automatic=true

Information Security:

Long-tail cryptocurrency is 51% attacked: Vertcoin edition https://www.theblockcrypto.com/2018/12/03/long-tail-cryptocurrency-is-51-attacked-vertcoin-edition/

Let’s Not Speculate: Discovering and Analyzing Speculative Execution Attacks https://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/d66e56756964d8998525835200494b74

Grifter Journalist Jerry Ji Guo Jailed for Alleged $3.5 Million Bitcoin Heist https://www.thedailybeast.com/grifter-journalist-jerry-ji-guo-jailed-for-alleged-bitcoin-long-con

Cryptojacking Malware Now Infects More Than 415,000 Routers Globally https://www.digitaltrends.com/computing/415000-routers-globally-infected-with-cryptojacking-malware/

Timothy May RIP https://www.facebook.com/lucky.green.73/posts/10155498914786706

Meetup

Unchained Capital Bitcoin Socratic Seminar

Socratic Seminar 87 at BitDevs NYC has all of the links on meetup.com available.

https://www.meetup.com/BitDevsNYC/events/256924041/

https://www.meetup.com/Austin-Bitcoin-Developers/events/257718282/

https://twitter.com/kanzure/status/1081674259880120323

Last time

Last time Andrew Poelstra gave a talk about research at Blockstream. He also talked about scriptless scripts and using the properties of Schnorr signatures to do all of your scripting. There was also talk about taproot and graftroot.

J: The Chaincode Labs guys in NYC… we had a lot of participation, it was very cool. John Newbery has been helping me to do Socratics. He goes through Bitcoin Core PRs and things like that. He tries to get everyone involved.

JM: He’s a very efficient person. He gets a lot done.

J: Yeah, hats off to John Newbery.

JM: When we started this, there was a signal group of everyone running meetups all over the world. He coached me a little bit, gave me some confidence. There were 3 people at the first one, and 4 people at the second one. Richard showed up both times.

J: Our first meetup was in summer 2013, at a video game shop in Chinatown.

JM: Michael Flaxman was telling me about some of the first bitdevs meetups.

PL: At the one in NY, there were about 120 people there. They had everyone introduce themselves, 5-10 seconds per person tops. Unchained has started hosting a few events recently.

Introduction

Bitdevs is a community for people interested in discussing bitcoin research and developments. We have a meetup that occurs once a month in NY. These are socratic seminars. In the weeks preceding the event, we go and collate all of the interesting content and interesting research going on in the space. We put it together into a newsletter which you can find on the meetup page. All of these links are on the meetup page; then we gather, investigate these things, learn about them, and work together to understand them. To give an idea of some of the content we look at: it's CVEs, research papers, IRC logs, technical blog posts, network statistics, pull requests in popular repositories like Bitcoin Core, lnd, joinmarket, etc. We sometimes look at altcoins, but they have to be interesting or relevant in some way.

The idea is that if you find something you want to discuss and it's not in the prepared list, then you can send it to the organizer and you can even lead discussion on that topic. We can spend anywhere from 1-10min on a topic. We can give a setup on the topic, or someone else can, and if there's a question or a comment or opinion to be had then feel free to speak your mind. Don't feel concerned about being wrong; people can correct each other. We're all learning here. We try not to disseminate bad information, but it happens. We try to catch each other.

We'll start by going around the room and doing 30sec-1min introductions: you can say your name and what you're interested in, or mention if you're organizing events and you want people to know. The idea is that if someone says something that interests you, then you can hang out with them later. Then we go through the content. At the end of the meetup we have presentations, like if someone is working on an open-source project or a company relevant to the space then we get them up here and let them get feedback from the community. Then we hang out afterwards. Once a month, that's the drill.

No names in the transcript please.

Difficulty adjustment

I like to start with hacks or CVEs or something. Lately we have seen some significant difficulty adjustments downwards. We haven't seen downward difficulty adjustments of that degree since 2011 or 2012. These are very non-trivial, they were quite large. The question of how important hashrate is can get lost when we talk about proof-of-stake and things like that. Not too long ago, as you have probably seen, http://crypto51.app started assuming that you can rent hashrate. If you can rent hashrate at going market rates, how much does it cost to 51% attack these coins? How much could you make from doing these 51% attacks? The site just shows the cost, not how much you make. How much you make depends on what type of attack you launch, naturally.

You can't rent enough hashpower on the market to 51% attack bitcoin. But some of the coins have gpu-friendly algorithms, and there's a ton of rentable GPU hashrate out there. When you have low market cap coins with low hashrates that are worth enough to attack, you run into dangerous scenarios. Let's look at Ethereum Classic: 105% of its full hashrate is rentable. Nicehash is a hashrate rental service. You can rent more than enough hashpower for a very low cost here, for under $5,000, and 51% attack Ethereum Classic. When you see the market in the state that it is right now, people burned pretty badly and so on; in normal scenarios attackers might be more interested in doing straight-up hacks of exchanges or wallet services or whatever. This is me opining here, but when prices go down and hashrate goes down, the attack surface for these 51% attacks…
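
As a rough sketch of the arithmetic behind a site like crypto51.app (every number below is a made-up assumption, purely to show the shape of the estimate):

```python
# Back-of-the-envelope cost of renting enough hashrate to 51% attack a coin.
# All figures are illustrative assumptions, not real market data.

network_hashrate_mh = 9_000_000        # network hashrate, in MH/s (assumed)
rental_price_per_mh_hour = 0.0005      # $ per MH/s per hour on a rental market (assumed)
attack_hours = 6                       # how long you need majority hashrate (assumed)

# To out-mine the honest network you need a bit more than its current hashrate.
attack_hashrate_mh = network_hashrate_mh * 1.05

cost = attack_hashrate_mh * rental_price_per_mh_hour * attack_hours
print(f"Estimated rental cost for a {attack_hours}h majority attack: ${cost:,.0f}")
```

Note that this is only the cost side; what the attack earns depends on the double spend or short position you pair it with.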

Vertcoin

So how many of you know about vertcoin? I don't know much about it, but they got hit real hard. There were a number of deep reorgs, the largest one having a depth of 307 blocks. I think we did some of the math at the last socratic: it would be like a 50-60 block deep reorg in bitcoin if you just match by time. A 60 block reorg in bitcoin would be totally crazy; we figure we're safe around 6-7 confirmations. This is particularly apt when you're looking at these exchanges that are adding every shitcoin under the sun… they are very much exposed to these kinds of attacks. There's the cost of adding the coin itself, getting the node running, adding infrastructure, but then you also have to consider something I don't think exchanges consider: how dangerous is this for our customers? If coinbase adds vertcoin, they say you only need 10-50 confirmations, someone deposits coins, a 51% attack occurs with a huge reorg, and all of a sudden coinbase has lost some crazy amount of vertcoin or whatever. Their customers thought vertcoin was safe because maybe Coinbase Pro added it… An attacker can short the coin on the market, or they could do double spending. You can do a reorg where you get the vertcoin back while keeping the bitcoin you traded it for. So maybe you also shorted, probably on another exchange.

The economic limits of bitcoin and the blockchain

paper: “The economic limits of bitcoin and the blockchain”

If I invest $100m in bitcoin mining hardware, I don't necessarily want to attack bitcoin and lose my future profits. But as the hashrate goes down and the price goes down, there are some scenarios, like miner exit scam scenarios, that Eric calls "collapse scenarios". Under which scenarios does it make sense for a miner to burn the house down, to forego the entire future income from their hardware? What kind of attacks might allow this to occur? This has to do with the cost of ASICs relative to the wider hashrate in the network. We're seeing a lot of older ASICs become more or less useless because they are not power efficient. If you're a miner and you've got a hoard of old ASICs, what are you doing with them? You don't throw them out; they are going to sit somewhere in storage. If at the same time that bitcoin's hashrate is coming down you've invested in newer hardware, a huge amount of money tied up in hardware that is not returning any investment, and you've got this completely useless old hardware as well, you can spin up the old hardware and launch an attack to recoup the costs of both, and burn the house down. You just want to make your money back. When you have all of this hardware sitting around, it poses an existential risk. There are economic sabotage attacks too, like short selling, and a theoretical possibility that miners will want to exit scam. These are called "collapse scenarios".

There are some mistakes in the paper: he thought that one day FPGAs would be made more efficient than ASICs. It might be possible to have something between an FPGA and an ASIC, but it's not clear that would be cost efficient. That part is not entirely reasonable.

Their source of income depends on the type of attack. Maybe they were able to do some double spends on exchanges and get an OTC wire that is more or less irreversible in some circumstances. They could do double spending and short the network at the same time.

The challenge is coordination without being detected. If you attack an altcoin, it’s easy to cash out to bitcoin. Vertcoin getting 51% attacked didn’t make the news in bitcoin world. You have a stable store of value to cash out to. But if you’re trying to attack bitcoin, you can only cash out to dollars, which is much harder because you have AML/KYC. And wire transfers aren’t that fast. Bitcoin Cash and its forks might be more vulnerable to this. Although, they checkpoint everything so it doesn’t matter.

There are definitely smaller altcoins where it is very cheap to do 51% attacks. There might not even be derivative markets here. You don’t need derivative markets for the altcoins because you can still do double spending attacks. Vertcoin’s attack is something we could learn from. Their double spending attack is more a lesson to me that having a lot of small altcoins with low hashrates is not really helpful to the ecosystem.

A maximum reorg depth is a bad idea because a node that has been fed a private blockchain history by an attacker won't be able to get onto the right blockchain on the network until later.

You have all these miners that could be doing something productive in mining and protecting the network. The mining rigs could still have value. Could they generate value in excess of their recovery value to do an attack on the network? The economic reason why this might be unlikely is that if they are not able to mine profitably, they are probably not sitting with a ton of capital resources or liquidity to be able to expend the energy because it requires real cost to do that.

You need a lot of energy to run these miners, often at something like $0.02/kWh. You can't get a giant burst on a short-term basis. No energy company is going to sell you $0.02/kWh and let it burst to however much you need in order to do a 51% attack for only a week or so. They force you into 6 month or 12 month contracts. This is why all the miners went offline at the same time: you saw the difficulty go down significantly over 3 difficulty adjustments; those were all expiring contracts in China. When the energy contract is over, they say they aren't going to renew because the bitcoin market price is bad. This is how they have to plan. If you want to execute an attack on something like bitcoin, you would need not only the hardware, but an energy production facility like a hydroelectric dam or some insane array of solar panels where you have direct access. Nobody is going to sell you that much energy.

I was talking with a miner producing at $0.025/kWh. The marginal cost was $0.06 or $0.07 or higher; they could not be profitable if their costs were above $0.06 all-in. If you're able to produce or procure energy and mine bitcoin at $0.025, and the marginal cost to produce is $0.05, then you have an economic incentive not to attack the network but to continue to act rationally and mine. There's this huge economic web that says if you can mine profitably, you will. If you can't mine profitably, and you can't find energy below $0.06 in that scenario, can you profitably attack it? It's not just gross, it's net of whatever you are spending.

Do we have a guess at what the breakeven bitcoin price is for people mining at $0.025/kWh? I've heard the break-even power price is $0.06/kWh. At around $3800 or so, 6 cents or below is profitable. If there's another drop in the markets, the cost curves adjust and you're able to mine a larger share of it. There will still be profitable miners. The difficulty will come down, but the most efficient miners with the best hardware and best power contracts will stay around longer. When the price drops, there's more likely to be 51% attacks or other hashrate attacks. Right now the bottleneck is hardware, but eventually the bottleneck is going to be energy. It will be hard to get 51% of the energy necessary to attack bitcoin because hardware will be ubiquitous; it will get cheaper, better, faster. Energy will be the bottleneck. Producing something like 51% of the energy, that's a really hard thing to do. Right now that's extremely hard, and as it gets more difficult, it's going to be a much bigger bottleneck.
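
A sketch of the break-even arithmetic being discussed here, with assumed hardware and network figures (roughly S9-class hardware and early-2019 numbers, but treat them all as illustrative):

```python
# Rough break-even electricity price for a miner, using assumed numbers
# (hardware efficiency, network hashrate, and BTC price are illustrative).

btc_price = 3800.0                 # $/BTC (assumed, roughly the figure mentioned)
block_reward_btc = 12.5            # subsidy per block at the time, ignoring fees
blocks_per_day = 144

network_hashrate_ths = 40_000_000  # network hashrate in TH/s (assumed)
miner_hashrate_ths = 14            # one ASIC, ~14 TH/s class hardware (assumed)
miner_power_kw = 1.4               # its power draw in kW (assumed)

# Expected revenue for this miner per day
share = miner_hashrate_ths / network_hashrate_ths
revenue_per_day = share * blocks_per_day * block_reward_btc * btc_price

# Electricity consumed per day
kwh_per_day = miner_power_kw * 24

breakeven_price_per_kwh = revenue_per_day / kwh_per_day
print(f"Revenue/day: ${revenue_per_day:.2f}, break-even power price: ${breakeven_price_per_kwh:.3f}/kWh")
```

With these assumptions the break-even lands around $0.07/kWh, which is in the ballpark of the figures quoted above.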

Energy production is unusual because it has a natural diseconomy of scale. If you’re building an energy production facility, you’re not thinking about 51% attacking a cryptocurrency in the next 6 months. This is a 40 year capex project, and you’re concerned if a giant customer goes away. If it’s a nuclear facility, you can’t make adjustments like oh I am just going to make 10% less energy because one of my customers is gone. Acquiring energy at scale is really hard. When power becomes your limiting factor, that’s a really cool protection. This idea is from BlueMatt.

Blocksonly mode, mempool set reconciliation and minisketch

gmaxwell bitcointalk post about blocksonly mode

In Bitcoin Core v0.12, they introduced a blocksonly mode. This is where you operate a node and you only receive the blocks themselves. You don't relay transactions, so you know the amount of bandwidth that your node is using is going to be fixed at 144 MB/day. However, you might want to participate in the relay network and see unconfirmed transactions before they get into the blockchain. Currently, the thing that uses the most bandwidth in your node is p2p transaction relay, which works through the mempool. These roundtrips and the data being passed back and forth are by far the largest consumer of bandwidth in your node. This is a bit of a problem: we always want to limit the resources needed to operate a node while still giving the node operator the greatest insight into what's going on in the network, so they can avoid double spends and build blocks quickly. You want to know because you want to know. The question is, how can we address this huge bandwidth issue?

In 2016, gmaxwell outlined a method of doing mempool set reconciliation. The name gives you the idea. Suppose you have a few different mempools; after all, there's never a universal mempool, we all see different parts of the network. Normally we do INV (inventory) messages back and forth. But instead, what if you reach out to your peers and reconcile the differences between your mempools? The idea is to use sketches, something in computer science that I was not familiar with before. It's a set reconciliation scheme.

sipa recently published a library called minisketch which is an optimized library for BCH-based set reconciliation. He went through some popular sketch schemes. Look at the sketch size factor for 30 bit elements- it’s constant. It’s really nice. This is an alternative to invertible bloom lookup tables (IBLTs), which are probabilistic and have failures. The old school SPV lite client scheme where you use a lite client and it uses bloom filters to update the set of unspent transactions in your wallet… it has serious privacy issues and so on, so it’s not a useful thing for this particular use case. But minisketch is a great library, it has all kinds of useful documentation on how you can use this in practice and implement your own set reconciliations. This is useful for reducing bandwidth.
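
A toy sketch of why reconciliation saves bandwidth compared to announcing every txid, assuming two peers whose mempools differ in only a handful of transactions (a real sketch, as in minisketch, encodes the difference algebraically; here we only count items):

```python
# Toy comparison of naive INV flooding vs set reconciliation.
# Real sketches encode the symmetric difference in a compact algebraic form;
# here we just count how many items each approach needs to communicate.
import random

# Two peers share most of their mempool and differ in a few transactions.
shared = {random.getrandbits(256) for _ in range(10_000)}
only_a = {random.getrandbits(256) for _ in range(5)}
only_b = {random.getrandbits(256) for _ in range(7)}
mempool_a = shared | only_a
mempool_b = shared | only_b

# Naive relay: each peer announces every txid it has (INV messages).
naive_announcements = len(mempool_a) + len(mempool_b)

# Set reconciliation: the data exchanged scales with the size of the
# symmetric difference, not with the size of the mempools.
sym_diff = mempool_a ^ mempool_b
reconciliation_items = len(sym_diff)

print(f"naive: {naive_announcements} announcements, "
      f"reconciliation: ~{reconciliation_items} items")
```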

What's the tradeoff, or is it a free lunch? What if we have an adversary with a poisoned mempool sending garbage data to another peer? Well, the DoS-ban logic will disconnect that peer eventually, and it won't propagate because it's invalid transaction data. With minisketch you do need to do some computation on the client side to compute the sketch reconciliation.

Revisiting bip125 RBF policy

Russell O'Connor wrote an email to bitcoin-dev about "Revisiting bip125 RBF policy". Using replace-by-fee, you can signal in your transaction that you may want to replace that transaction in the future with another transaction that pays a higher fee. It has some anti-DoS mechanisms in it. You can't just replace your transaction with one paying 0.00001 satoshi or whatever; there are minimums you have to comply with.
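
For reference, the signaling rule bip125 keys off of is simple to state; here is a small sketch of it (the transaction structure is a stand-in for illustration, not a real parser or wallet API):

```python
# BIP125: a transaction signals replaceability if at least one of its inputs
# has an nSequence value below 0xfffffffe. This is a toy structure for
# illustration, not a real transaction parser.

MAX_NON_RBF_SEQUENCE = 0xfffffffe

def signals_rbf(tx):
    """Return True if any input's nSequence opts the tx into replacement."""
    return any(vin["sequence"] < MAX_NON_RBF_SEQUENCE for vin in tx["vin"])

tx = {
    "vin": [
        {"txid": "ab" * 32, "vout": 0, "sequence": 0xfffffffd},  # signals RBF
        {"txid": "cd" * 32, "vout": 1, "sequence": 0xffffffff},
    ],
    "vout": [{"value": 0.01, "scriptPubKey": "..."}],
}

print(signals_rbf(tx))  # True
```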

The use case for replace-by-fee is pretty obvious, like if you send a payment to an exchange but it doesn't get confirmed on the blockchain fast enough. But if there's a tree of pending transactions hanging off mine, my RBF replacement has been pinned: I would have to pay a lot to replace the whole tree, depending on how big that tree is.

See also "CPFP carve-out for fee-prediction issues in contracting applications (lightning network etc)". If I'm signing commitment transactions that close LN channels and I'm not broadcasting right away, how do I predict the fee rate I'm going to pay? I don't know what the fee rate is going to be in a month. I need the pre-signed transactions, and I need to talk to my counterparty to get their signature anyway. So the question is what do we do if we go for a month and the mempool spikes and the fee rate spikes. Maybe my counterparty tried to do a unilateral close against me with an old commitment, so my counterparty is cheating. In LN, what do you do? You have a justice transaction; there's a locktime delta where I have 144 blocks to prove that my counterparty cheated against me, so I have to broadcast the most recent commitment to close that channel. If that transaction doesn't get confirmed in that window, then your counterparty wins. Why not use replace-by-fee or child-pays-for-parent? In CPFP, you can spend the unconfirmed parent with a child that pays a high enough fee rate to cover the whole package. But there are issues with both of these things in the lightning network context. You can't do replace-by-fee in LN because it's multisig, and if the counterparty doesn't care then they aren't going to sign a new transaction for you. You can't do CPFP because of the nature of the scripts that are conditionally locking the outputs. These were previously called breach remedy transactions. Your counterparty could try to pin your justice transaction to a low fee-rate transaction, or do CPFP themselves, and then it's a fee competition between both parties.
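
A quick sketch of the child-pays-for-parent arithmetic: how much fee the child needs so that the parent plus child package reaches a target feerate (sizes and rates below are assumptions):

```python
# Child-pays-for-parent: miners evaluate the parent and child as a package,
# so the child can pay enough fee to pull a low-fee parent into a block.
# Numbers are illustrative.

def child_fee_needed(parent_fee, parent_vsize, child_vsize, target_feerate):
    """Fee (in sats) the child must pay so the package hits target_feerate (sat/vB)."""
    package_vsize = parent_vsize + child_vsize
    needed_total = target_feerate * package_vsize
    return max(0, needed_total - parent_fee)

parent_fee = 200        # sats; the stuck commitment transaction paid ~1 sat/vB
parent_vsize = 200      # vbytes
child_vsize = 150       # vbytes
target_feerate = 20     # sat/vB needed to confirm soon (assumed)

print(child_fee_needed(parent_fee, parent_vsize, child_vsize, target_feerate))
# -> 6800 sats: the child pays for itself and tops up the parent
```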

In BlueMatt's post, he mentioned you can't do CPFP on a commitment close transaction. But what could be done is that small outputs from both parties, unrelated to the lightning commitments, could be added to the transactions. Then you could both do unconfirmed spends of the commitment without signatures from your counterparty, but this exposes you to the transaction pinning attack where your counterparty could try to CPFP it in a way that gets it stuck and pinned.

One solution is to add more relay policies. At the node level, you have relay policies, which Bryan was talking about earlier, like the minimum feerate to relay transactions or replace-by-fee rules. These are not consensus rules; each node can have its own version of them. One rule that would help prevent the problem BlueMatt identified: at various points there have been discussions about solving the pinning problem by marking transactions as "likely to be RBF", where children of such transactions could be rejected by policy unless the resulting package would be near the top of the mempool. So if I try to do a pinning attack that would leave the transaction package with a low fee rate relative to other transactions on the network, that attempt would be rejected by nodes on the network. RBF on its own isn't going to work because of the multisig aspect of these LN transactions.

Would ajtowns' OP_MASK cover the nSequence value so you can hide replace-by-fee?

Replace-by-fee transactions are currently 5-6% of the transactions on the network. I was evaluating wallets and one of the criteria was RBF support, and it's not good. Even with some of the big exchanges, I was executing RBF transactions and they sat pending in my own wallet for over a month. In some cases, some of the wallets actually credit my balance for both, so it looks like I have double my balance. It's not an available balance, and I wasn't able to take funds in that time, but the confusion for the user is there from a usability perspective.

Electrum handles replace-by-fee correctly. It’s not a long list of which wallets support this. If we want large volume BTC companies like exchanges to adopt things like replace-by-fee they want to make sure the ecosystem supports it. It can’t be just petertodd’s opentimestamps server using replace-by-fee.

jnewbery is looking for feedback on the replace-by-fee chapter in the Bitcoin Optech book or documentation. It’s in progress.

With some of these web wallets, even after one of the replace-by-fee transactions has been confirmed, the other one still contributes to a pending balance on the web interface, because it has a different txid even though it will never confirm. So some of these are pretty poorly built.

MtGox blamed transaction malleability for their problems. Most developers assume a different txid means a different transaction, but you really have to look at all the utxos and see which transactions replace each other. It's not trivially obvious. People think txid is a perfect identifier, and then they don't test for that because it's a rare edge case.

txprobe paper

“TxProbe: Discovering bitcoin’s network topology using orphan transactions”

Who knows about Chainalysis? There are operators in the network whose job it is to provide metadata to companies that have compliance programs, law enforcement agencies, and other actors who are interested in knowing whose public keys are associated with which identities. Chainalysis sells you this information; they have a detailed map of the network. If a transaction has been associated with Coinbase, they will probably know. At one point back in 2013, they famously got caught up in this claim that they had deployed a non-trivial number of sybil "full nodes" across the bitcoin network, attempting to connect to every node in the network. Why would they do that? If you're running a full node and not running it over tor, then they figured they could tell which nodes were the source of a transaction. If an attacker has a connection to every single node in the network and is listening to all the INV messages, the first node to relay a transaction is probably the source of the transaction. A sybil attacker can deanonymize transactions by sybil attacking the network and doing this kind of analysis.
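
A toy simulation of that "first spy" heuristic, with a made-up topology and delays, just to show why hearing a transaction first is real evidence about its source:

```python
# Toy "first spy" estimator: a supernode connected to every peer guesses that
# whichever peer announced a transaction first is its source. Delays and
# topology are made up; real propagation (trickling, diffusion) is messier.
import random

NUM_NODES = 50
TRIALS = 1000

def propagation_delay():
    # per-hop relay delay, arbitrary units
    return random.expovariate(1.0)

correct = 0
for _ in range(TRIALS):
    source = random.randrange(NUM_NODES)
    # Time at which each node announces the tx to the listening supernode:
    # the source announces after one hop to the spy, everyone else only after
    # the tx has crossed at least one extra hop of the network.
    announce_time = {}
    for node in range(NUM_NODES):
        extra_hops = 0 if node == source else 1
        announce_time[node] = sum(propagation_delay() for _ in range(extra_hops + 1))
    guess = min(announce_time, key=announce_time.get)
    correct += (guess == source)

print(f"first-spy guess accuracy: {correct / TRIALS:.0%} (vs {1 / NUM_NODES:.0%} baseline)")
```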

These attacks are difficult to protect from. The characteristics of the network protocol in Bitcoin Core are such that a crafty attacker can leverage them to do different types of attacks. Another example is eclipse attacks, where I as an attacker attempt to saturate a node's connections so that they are only connected to me. If I do this to a miner, I can do selfish mining attacks; if it's a merchant then I can do double spending, because they are in my own little universe. In your node, you have buckets of IP addresses and so on; a lot of these attacks have been mitigated to some degree, and the mitigations that were proposed have been implemented. But based on how IP addresses get kicked out of or pulled into buckets, an attacker with a reasonable number of IP addresses can saturate a node's connections just by kicking other nodes out of each bucket. A lot of researchers have spent time looking at deanonymization attacks.

Andrew Miller and some other researchers looked at how Bitcoin Core's mempool handles orphan transactions. In this context, an orphan transaction is when you receive a transaction that spends a utxo whose parent transaction isn't in your mempool. They investigated the mechanics of the buffer that holds these, the MapOrphanTransactions buffer, and found a way to discover the edges in the network… if you're able to map the network, that's a critical piece of launching other types of attacks, like eclipse attacks. They weren't trying to figure out the source of transactions, but to figure out the connectivity of the network.

If you have two sets of nodes and you want to figure out the connections between these two clusters of nodes, you can use marker transactions, flood transactions, and other types. You use a flood transaction, which is an orphan transaction, and you send it to the sector set. It's an orphan, so it goes in their orphan buffer. Then, for the number of nodes in the sector set, you create n double spends that are parents of these flood transactions. If you have 10 nodes in the sector set, you create 10 conflicting parent transactions and you send those to the sync set. And then you send marker transactions to the sector set members, and depending on how the nodes respond to the INV messages, which ones they will or will not accept and how quickly, you can deduce which nodes are connected to each other.

Something like dandelion or delays on orphan transaction relay may be able to help mitigate this attack.

Dandelion (bip156)

“What is the tradeoff between privacy and implementation complexity of dandelion bip156”

Dandelion is another useful privacy development. There's a stem phase and a fluff (disperse) phase. This is a separate relay protocol. It's called dandelion because the relay pattern looks like a dandelion when you look at the network. Without dandelion, you broadcast a transaction to all your peers. With dandelion, you relay it along a stem first, and only later does it get broadcast in the fluff phase. At each hop during the stem phase there is a coin flip as to whether it will go into the fluff phase.
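
A toy sketch of that stem/fluff coin flip (the topology and the fluff probability are assumptions for illustration; bip156 specifies the actual rules):

```python
# Toy Dandelion-style relay: during the stem phase a transaction is forwarded
# to a single peer, and at each hop a coin flip decides whether to switch to
# the fluff (broadcast) phase. Parameters are illustrative.
import random

FLUFF_PROBABILITY = 0.1   # chance at each hop to enter the fluff phase (assumed)

def relay(tx, origin, pick_stem_peer, broadcast):
    node = origin
    path = [node]
    # Stem phase: pass the tx along one peer at a time.
    while random.random() > FLUFF_PROBABILITY:
        node = pick_stem_peer(node)
        path.append(node)
    # Fluff phase: the current node diffuses the tx to everyone, so an
    # observer sees it appear "from" a node several hops away from the origin.
    broadcast(node, tx)
    return path

# Example wiring with a made-up ring topology of 20 nodes.
N = 20
path = relay("tx", origin=0,
             pick_stem_peer=lambda n: (n + 1) % N,
             broadcast=lambda n, tx: print(f"node {n} fluffs {tx}"))
print("stem path:", path)
```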

In dandelion there's a big tradeoff; this is a great stackexchange post by sdaftuar. It relates to denial-of-service vectors. If someone sends me a bad transaction or bad block, we ban them, right? But the issue here is that my immediate peers in the stem phase don't know what the transaction is; it might be invalid, or maybe not something that would get into the mempool in the first place. A bad guy could leverage dandelion to DoS dandelion peers, sending garbage through these stem phases that won't actually make it into the mempool. So you have this DoS vector in protocols like dandelion. You want the privacy but you get this exposure to DoS. It's an unfortunate tradeoff.

The takeaway here is: "if we don't have simpler solutions that work, is it worth implementing something akin to the current mempool logic to introduce dandelion into Bitcoin Core?" We don't know if it makes sense. How do you encrypt it so that only…

Dandelion is voluntary during the stem phase, it’s not encrypted. Each node does a coin flip, you have to rely on the good behavior of the node you’re connected to.

What’s the best way to send an anonymous transaction? Maybe use blockchain.info and use pushtx while using tor. Some mining pools have pushtx endpoints on the web which are custom home-built. I know luke-jr had one for a long time. Hitbtc has a credit card form for one, so you can pay for the broadcast of your transaction.

Statistics

Let's do some statistics, which is usually pretty fun. The number of utxos has gone down, so probably a service has been consolidating their utxos. Not long after I looked at the graph, I saw that Coinbase tweeted that they moved 5% of all BTC into their cold wallet, and 25% of all LTC, and 8% of all ETH. So during this movement they were consolidating utxos. They have something like 1% in their hot wallet.

Every full node wants to keep the current UTXO set in RAM, because you want this lookup to be fast, especially if you're a miner. If you have to go to the hard drive for it, that's very slow. The size of the UTXO set roughly determines how much RAM you want on your box if you're a miner, so it's important that the utxo set size stays low. If you want to run a fully validating node, you need the full utxo set. So by reducing utxos, you can reduce the total resource requirements.

What are the storage requirements for running a pruned node now? When you are running a pruned node, what are you storing, and what are you validating against? You validate the header and each transaction, then you update your utxo set and throw away the extra data. You are fully validating, but you don't have an archive log of spent transactions, you only have the current state. You need the full transaction data behind each utxo, because that's what the tx hash commits to, so you need that. At Breaking Bitcoin, someone made large transactions but only spent one output, and it broke pruned nodes or something because of how that worked out. For reorgs, it's not completely pruned; you keep a few hundred blocks.

Mempool statistics

The mempool has been relatively quiet. We saw a bump on December 18th. Oh, that was Coinbase, same day as their tweet. At peak there were about 18 megabytes of transactions in the mempool on December 18th; that was the biggest spike. This is joechen's bitcoin mempool stats, I can post the link later. It's based on someone else's work, he's just repurposing it. It's color coded. For fee estimation, you could look at https://whatthefee.io or https://bitcoinfees.21.co or something. Some people use Bitcoin Core's fee estimator. Mempool-based fee estimation is something that has more recently cropped up in discussions in the community. At 8:30am CST, there's always a bump because that's when Bitmex does their consolidation. Sometimes people use vanity addresses, which is a bad idea due to privacy problems of course; don't reuse addresses.

So those 18 MB, would it take 18 blocks? Well, yes, assuming no new better fee rate transactions come in.
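
A rough sketch of that arithmetic, plus the mempool-based fee estimation idea mentioned above (block capacity and the example mempool snapshot are assumptions):

```python
# Given a feerate-sorted view of the mempool, estimate (a) how many blocks it
# would take to clear, and (b) what feerate puts a transaction within the
# next N blocks. Numbers are illustrative.

BLOCK_VSIZE = 1_000_000  # ~1 MvB of transaction data per block (rough figure)

# (feerate sat/vB, total vbytes at that feerate): a made-up mempool snapshot
mempool = [(50, 300_000), (20, 2_700_000), (5, 15_000_000)]  # ~18 MvB total

total_vsize = sum(v for _, v in mempool)
print("blocks to clear:", -(-total_vsize // BLOCK_VSIZE))  # ceiling division -> 18

def feerate_for_next_n_blocks(mempool, n):
    """Smallest feerate that lands inside the first n blocks' worth of vbytes."""
    budget = n * BLOCK_VSIZE
    for feerate, vsize in sorted(mempool, reverse=True):
        budget -= vsize
        if budget <= 0:
            return feerate + 1   # need to outbid this band
    return 1                     # mempool fits entirely; minimum relay fee is enough

print("feerate to be in the next block:", feerate_for_next_n_blocks(mempool, 1), "sat/vB")
```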

What’s the average mempool size? Well let’s use joechen, go to the few year view, you can see the huge spike from late 2017. Good question.

Block size has been consistent for the past month. You can see the witness size in there too, it’s on https://statoshi.info it’s very useful.

Asicboost

There are two generally understood mechanisms by which miners can leverage what is now called asicboost. One is known generally as overt asicboost and the other is covert asicboost. This became a particularly contentious topic back in 2016-2017 when segwit was attempting to be deployed. The header of a bitcoin block is 80 bytes. sha256 takes input in 64 byte chunks, puts each through a compression function, and then there's a midstate which doesn't have as much entropy as the final hash. You can find collisions for those midstates. See "The Problem with ASICBOOST" by Jeremy Rubin. Here's your 80 byte header: you have the version bytes, previous block hash, merkle root, timestamp, nbits, nonce. Sha256 has to break that down, and at the 64 byte breakpoint part of the merkle root spills over into the 2nd 64-byte chunk. There are two pieces of data in this first chunk that can be "rolled", meaning modified without breaking consensus. The first is the version field: it's not a consensus-enforced field, so a miner can put whatever they want within constraints. The second is the merkle root, which is determined by the order of the transactions, and that order is not consensus-enforced. Bitcoin Cash has a canonical transaction ordering, but even then they could probably still do covert asicboost. Because version rolling is visible, everyone can see that someone is rolling versions; that's overt asicboost. If you're changing the version, the midstate changes, and the final hash will be different. It's possible to find inputs to the hash function that collide at that midstate. If you can find collisions at the midstate then you can reuse that work. If you can reuse the inputs, if you can find these collisions, you're not going to be mining faster, but you will be gaining some amount of efficiency in your mining operation, because you're using less power to get hash outputs: part of the work you don't have to redo each time, you just do it once, find collisions there, and then you can continue on with the mining process. You can also do the same thing with the merkle root, which is more covert because you can't necessarily know the transaction ordering a miner chose, but there are indications that miners are reordering transactions. Once a block is public on the network, it's not clear that a miner did that.
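
A small sketch of where the 64-byte sha256 chunk boundary falls in the 80-byte header (field sizes are the standard ones; the values are placeholders):

```python
# The 80-byte block header is hashed by SHA-256 in 64-byte chunks, so the
# first chunk covers version + prev block + 28 bytes of the merkle root, and
# the second chunk gets the merkle root's last 4 bytes, time, nbits and nonce.
# Values below are placeholders.
import struct

version     = struct.pack("<I", 0x20000000)   # 4 bytes (rollable: overt asicboost)
prev_block  = bytes(32)                       # 32 bytes
merkle_root = bytes(32)                       # 32 bytes (rollable via tx ordering)
timestamp   = struct.pack("<I", 1546300800)   # 4 bytes
nbits       = struct.pack("<I", 0x17272fbd)   # 4 bytes
nonce       = struct.pack("<I", 0)            # 4 bytes

header = version + prev_block + merkle_root + timestamp + nbits + nonce
assert len(header) == 80

chunk1, chunk2 = header[:64], header[64:]
# chunk1 = version || prev_block || merkle_root[:28]
# chunk2 = merkle_root[28:] || timestamp || nbits || nonce (plus SHA-256 padding)
print(len(chunk1), len(chunk2))   # 64 16
# Candidates that share an identical chunk let part of the hashing work be
# reused across attempts; that reuse is the asicboost efficiency gain.
```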

This became contentious in 2016-2017 related to segwit because segwit has a second merkle root, which is not put into the header of the block since that would have made it a hard-fork. There's a new serialization format for blocks where the signatures are stored in a new way. Because the signatures are not in the transactions themselves, there has to be an indication of where the signatures are. To make segwit a soft-fork, the witness merkle root is committed to in the coinbase transaction, not in the header. So segwit broke covert asicboost, because the transaction ordering technique no longer works: if you reorder transactions you can't gain that asicboost efficiency once segwit is active.

Greg Maxwell investigated some firmware and concluded that it had logic capable of performing asicboost. Bitmain probably didn't want to lose the 14% efficiency gain they got by implementing asicboost, so they had an incentive to block segwit's deployment. If you want to do overt asicboost, most people are okay with that. There's some software out there to do it, like slushpool's Braiins firmware, and Bitmain also published the ability for their users to do this. Over the last couple of months, the number of blocks produced with rolled version fields has increased. With version rolling, you're getting more nonce space, and it's way more efficient than covert asicboost, by orders of magnitude. Miners have an immense incentive to use overt asicboost; it's a huge efficiency gain. Miners are naturally competitive and they will do whatever they can to compete. As long as it's not creating perverse incentives to delay network upgrades, overt asicboost sounds okay to me.

Bitcoin playground

This is a script playground where you can watch scripts be replayed basically in realtime, like the pushes and pops interacting with the stack. This is useful if you’re learning about bitcoin script. Also there’s btcdeb.

Output script descriptors

This was an idea proposed by sipa. A private key in and of itself does not transmit any information about the scripts which are conditionally locking the outputs associated with that key; there's not much information you can glean from a private key on its own. The idea of output script descriptors is that extra information is attached to the key describing how it is used: what's the script, is it 2-of-3, 3-of-5, is it a pay-to-pubkeyhash, is it a pay-to-pubkey? It will even give you information on the derivation path if it's a hierarchical deterministic wallet, the account structure, etc. If you're importing and exporting keys, you really want to know this information, especially with partially signed bitcoin transactions (PSBT), a new serialization format which helps with things like offline signing with a hardware wallet.
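
A few example descriptors to show the shape of the language (keys, fingerprints and paths are placeholders):

```python
# Example output script descriptors (keys/fingerprints are placeholders).
# Each one states exactly how a scriptPubKey is constructed from the key(s),
# which a bare private key alone cannot tell you.
descriptors = [
    # pay-to-pubkey-hash for a single key
    "pkh(02c6047f9441ed7d6d3045406e95c07cd85c778e4b8cef3ca7abac09b95c709ee5)",
    # native segwit pay-to-witness-pubkey-hash with key origin info and a
    # derivation path for a hierarchical deterministic wallet
    "wpkh([d34db33f/84h/0h/0h]xpub6ERApfZwUNrhL.../0/*)",
    # 2-of-3 multisig wrapped in P2SH
    "sh(multi(2,xpub1.../0/*,xpub2.../0/*,xpub3.../0/*))",
]
for d in descriptors:
    print(d)
```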

PR 14477: Add ability to convert solvability info to descriptor

These help your wallet figure out how to sign for the scriptpubkey.

PR 14955: Switch all RNG code to the built-in PRNG

bip66 was a soft-fork where strict DER encoding rules were enforced for signatures. OpenSSL had a bug where 32-bit and 64-bit Windows builds could disagree about signature validity. Bitcoin Core has been stripping out openssl for a long time. One of the places it is still being used in Bitcoin Core is in some RNG scenarios, so getting rid of it and replacing it is pretty important. What sipa has been working on is a context-dependent random number generator scheme where you have different tiers of RNGs with performance tradeoffs based on how much randomness you need. You need a ton of randomness if you're generating a key, but sometimes you just need a random number. PR 14955 doesn't remove openssl, but it does introduce other RNG schemes to take its place, and openssl will be slowly deprecated over time. This is important work being done here.

Bitcoin Core uses rfc6979 for the k value. When you're producing a signature, you need to produce some randomness, and if it's insufficiently random or reused across signatures then an attacker can deduce your private key. There were a couple of bad wallets that did this. I think the Bitcoin wallet for Android had this problem back in the day, where they were seeding the RNG with bad entropy. Also, android SecureRandom wasn't secure; it was like bouncycastle or something. It was the library that people recommended, which is ironic. K reuse is why the ps3 got hacked too. I think that was by geohot, the same guy who did the iphone jailbreak? It wasn't bouncycastle on android actually… it was the Java Cryptography Architecture (JCA)…
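
A much-simplified sketch of the idea behind deterministic nonces: derive k from the private key and the message so there is no RNG to get wrong. This is not the actual rfc6979 construction (which uses an HMAC-DRBG loop with retries), just the intuition:

```python
# Simplified illustration of deterministic nonce generation for signatures.
# NOT the real RFC6979 algorithm; this only shows the idea: k depends on the
# private key and the message, so the same message always yields the same k
# and different messages yield different, unpredictable k values.
import hashlib
import hmac

SECP256K1_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def toy_deterministic_k(private_key: bytes, msg_hash: bytes) -> int:
    digest = hmac.new(private_key, msg_hash, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % SECP256K1_ORDER

priv = bytes.fromhex("11" * 32)
k1 = toy_deterministic_k(priv, hashlib.sha256(b"message one").digest())
k2 = toy_deterministic_k(priv, hashlib.sha256(b"message two").digest())
assert k1 != k2                                                        # distinct messages, distinct nonces
assert k1 == toy_deterministic_k(priv, hashlib.sha256(b"message one").digest())  # repeatable
```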

Any chance of BLS and pairing crypto getting into bitcoin? It doesn’t have the K value there. Uh let’s get Schnorr signatures first.

Stricter and more performant invariant checking in CheckBlock, PR 14837

This was where the double spending bug was introduced. Refactoring consensus code is dangerous because it's really hard to get this right. We have to be careful. This is a pull request from Jeremy Rubin. He's adding additional checks to protect against the failure that occurred in that double spending CVE… This is changing consensus-critical code, and there's a big debate in the thread about whether it's worth the risk in the first place. In this bug, a miner could produce a block that had a transaction spending the same output a second time. The mempool checks are much more strict than CheckBlock alone. Folks assumed the duplicate check at the mempool level was enough… but if a block is produced by a miner that includes transactions never propagated across the network, that duplicate check in CheckBlock is needed. That was a failure of testing, of peer review, and a general overall failure. This bug was introduced in v0.13; v0.12 nodes would not have been affected by it, so it could have caused a hard-fork or nasty split. Anyway, I just wanted to have a short discussion about modifying consensus code.

libconsensus would be helpful because you need to be bug-for-bug compatible in a consensus network like this. You can’t write a nice yellow paper to make a specification for bitcoin… This is also an argument against multiple implementations of bitcoin consensus rules. If a block comes out that has a transaction that breaks some nodes but doesn’t break other nodes, then you’re causing a network split. That’s a dangerous thing. The argument is that if we’re going to have a failure let’s all fail together.

Lightning stuff

lnd, the go implementation of lightning funded by Lightning Labs, recently hit v0.5.1. There are some bugfixes in here.

Height hint cache: What happens if you have an open channel and for one reason or another, your node goes offline? What’s the first thing you want to do when you come online? You want to check on your transactions: did your counterparty attempt to do a unilateral close against you? So you check the blocks real quick, and then you go on your way. A lite client scenario is more of an ordeal here… you have to call out to a full node, you have to get your lite filters (the client-side filters), and you might have to go pretty far back in history. The height hint cache helps you cache the results of previous scans and start scans from where you previously left off. Useful performance improvement, pretty simple.
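
The idea is just to remember how far you have already scanned for each channel, so a restart resumes from there instead of rescanning from the funding height. A minimal sketch (my own illustration, not lnd’s implementation):

# Minimal height-hint cache: remember the highest block height already scanned per channel outpoint.
class HeightHintCache:
    def __init__(self):
        self.hints = {}                               # outpoint -> last scanned height

    def start_height(self, outpoint, funding_height):
        return self.hints.get(outpoint, funding_height)

    def commit(self, outpoint, scanned_up_to):
        self.hints[outpoint] = max(scanned_up_to, self.hints.get(outpoint, 0))

cache = HeightHintCache()
cache.commit(("fundingtxid", 0), 560000)
assert cache.start_height(("fundingtxid", 0), 550000) == 560000   # resume, don't rescan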

Validating received channel updates: We were trusting nodes in the network to give us channel update messages about other nodes’ channels, and we were not checking to see if what they were saying was true. So someone could lie to me about the state of a channel, or say there was something wrong with it or that it was unroutable, and he could do that to such a degree that he could shift the way that payments were being routed in the network. From there, you can do nasty things like maybe some massive channel closure attack against the network or deanonymization attacks or something. This was a nasty vulnerability in lnd. So now there are signature checks making sure the messages are coming from who they are supposed to come from.

Rate limiting: There were some other DoS vulnerabilities exposed. Older nodes were spewing out tons of channel update messages, causing DoS attacks against the nodes they were talking about. An old node could cause this problem, but so could an adversary. lnd will now rate limit peers that are requesting excessive amounts of data to ratelimit DoS attacks.

PR 2006 Scoring-based autopilot attachment heuristic: This autopilot scheme will open a channel for you. It’s not very smart: it looks at the network, weights nodes by the number of channels they have, and then randomly selects a node to connect to based on this weighting. This ends up causing nodes that already have a lot of channels to be selected by autopilot for new channels even more often, which is not good for network topology. But you could allow the autopilot scheme to have multiple heuristics by which to score nodes in the network, and then open up the channel based on that. This PR doesn’t implement the other heuristics, but it is the framework allowing autopilot to be better. Also, c-lightning is doing its own autopilot scheme. This is a centralization problem and it could also cause routing problems if lots of nodes went offline or something.
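
The degree-weighted selection it describes is roughly the following (a sketch of the heuristic as described, not lnd’s code), which shows why already well-connected nodes keep getting picked:

import random

# Pick an autopilot peer with probability proportional to its existing channel count.
def pick_peer(channel_counts):
    nodes = list(channel_counts)
    weights = [channel_counts[node] for node in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

graph = {"big-hub": 500, "medium": 20, "tiny": 1}
# "big-hub" wins the vast majority of draws, reinforcing the hub-and-spoke topology.
print(pick_peer(graph))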

PR 2313

multi: Implement new safe static channel backup and recovery scheme, RPCs, and CLI commands

We’re spoiled with bip32 hierarchical deterministic wallets: you can recreate all of your keys from your seed. But with lightning you don’t have that luxury. The commitment transactions are not deterministic, they are based on signatures and scripts and so on. The nature of lightning is that you can’t have just a seed and from that reconstruct all the commitment transactions you’ve made with a counterparty. The danger here is that you’ve made a lot of commitment updates, you go offline, you lose your data and you don’t have a backup, or your backup is old; if I then try to broadcast an old state it’s going to look to my counterparty like I’m cheating, which is bad. So the other guy is going to use the breach remedy or justice transaction. All because my node didn’t know about the latest state. Having sufficient backups to reconstruct the channel if you have some sort of failure at the node level is critical. Some people are using rsync, but this doesn’t work if you’re doing rapid commitment updates. So you might not stay in sync with yourself. So in this PR, roasbeef is working on doing static backups. You have some seed, and this only works for opens and closes right now. Ultimately there will be backups for individual commitments. This is just the beginning. Every time you do opens and closes, you’re taking some seed and encrypting a backup file and dumping it somewhere. There’s this whole recovery flow- I don’t want to say anything wrong, I haven’t read it closely. This is a “channels.backup” file. This is the same problem as with p2sh and redeemScripts. Output script descriptors sure do cover p2sh.

pylightning: Add a plugin framework

cdecker did a pull request introducing pylightning. This is a plugin framework for c-lightning. In lnd, you can create cookies or macaroons that have more limited access controls for RPCs, where a given user can only produce invoices but can’t actually spend coins in the wallet or something. If you’re a merchant, you don’t want your coins exposed in a hot wallet but you might want to be able to generate invoices so that customers can pay you or something. In c-lightning, they are going for a plugin framework, which is little scripts that can talk over json-rpc to the node. They have been working on adding this framework, kind of flask-style. The plugins here are written in python, but you’re communicating through a unix socket, so it doesn’t have to be python. It’s json-rpc over a unix socket.
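
For a flavor of what json-rpc over a unix socket looks like, here is a bare-bones client sketch (my own illustration; the socket path and the choice of method are placeholders, and real plugins would use the framework rather than raw sockets):

import json, socket

def call(socket_path, method, params=None):
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(socket_path)
        s.sendall(json.dumps(request).encode())
        return json.loads(s.recv(1 << 20).decode())   # sketch: assumes the reply fits in one recv

# Hypothetical usage against a node's rpc socket:
# print(call("/tmp/lightning-rpc", "getinfo"))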

cdecker gave a good talk about the architecture of c-lightning. Rusty said I know C and I’m a linux developer so I’ll write it in C.

Doesn’t look like there’s any authentication here. Good question. Maybe TLS over the socket or something.

Don’t trust, verify: A bitcoin private case study (coinmetrics)

This was a simultaneous fork of Bitcoin and zcash or a fork-merge or something. With zclassic, you have a shielded pool and zero-knowledge proof magic. In bitcoin, all the outputs are there and you can validate that coins are not being created out of thin air. In the shielded pool, assuming the zero-knowledge proofs and math holds, by validating proofs you should be able to know that coins are not printed out of thin air. The reason I’m bringing up this blog post is because there’s interesting conversations to be had about modifications to bitcoin. It’s been proposed that in theory we might want to see confidential transactions in bitcoin where we use additive homomorphism to mask the amounts in the outputs. I recommend reading the grin introduction for bitcoiners document on their github repository, it’s a good introduction.

The tradeoff you have to make is perfect binding vs perfect hiding. Confidential transactions rest on the discrete log problem. We already know that if the discrete log is broken in bitcoin, then for transactions with exposed signatures and address reuse a quantum attacker would be able to steal those coins. But if, on top of coins being conditionally locked with keys that use the ECDSA scheme, the amounts are also hidden, then you as an implementer of the scheme have to decide what happens when the discrete log is broken and an attacker leverages it: will they be able to unblind the entire confidential transaction history, or will they be able to print coins and nobody will ever know? That’s perfect hiding vs perfect binding.
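
A toy way to see the binding half of that tradeoff (my own illustration in a tiny multiplicative group, not the actual confidential transactions construction): a Pedersen-style commitment C = g^v * h^r is perfectly hiding because r masks v, but it is only computationally binding, because anyone who learns the discrete log x with h = g^x can open the same commitment to a different value.

# Toy Pedersen-style commitment in the order-11 subgroup of Z_23* (tiny, insecure parameters).
p, q, g = 23, 11, 2              # modulus, subgroup order, generator of the subgroup
x = 3                            # discrete log of h with respect to g (normally unknown to everyone)
h = pow(g, x, p)

def commit(value, blinding):
    return (pow(g, value, p) * pow(h, blinding, p)) % p

v, r = 5, 7
C = commit(v, r)

# Someone who knows x can open C to a *different* value v2 by adjusting the blinding factor:
v2 = 9
r2 = (r + (v - v2) * pow(x, -1, q)) % q
assert commit(v2, r2) == C       # binding is broken once the discrete log is known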

With zksnarks, I think they picked perfect hiding. So if zksnarks are broken, then someone will be able to print zcoins or whatever they are called and nobody would be any wiser, unless they pulled them out of the shielded pool. Maybe the zcash company is already doing that and causing silent inflation. For bitcoin, it could break the scarcity of the coins. But some people might prefer perfect privacy. Some would prefer for the privacy features to be done on a sidechain. Some people want mimblewimble on a bitcoin sidechain. You would have perfect hiding in the sidechain but perfect binding on the mainchain. You isolate any sort of shenanigans into that sidechain.

Blockstream’s new product Liquid.. they forked Bitcoin Core v0.14 and implemented confidential transactions and confidential assets. You could deterministically peg coins in there, mix them around privately, and then pop out. Masking the amounts in the outputs is not enough to give anonymity in the confidential transaction scheme of course, but it certainly helps.

The first implementation of ring signatures was in bytecoin. There was no premine or whatever, but I think something like this happened in dash as well actually… the folks who released the code had a fast miner that they kept to themselves, something CUDA-based. They released the code, everyone started mining, and they were mining many orders of magnitude faster than everyone else. So they had a head start on everyone. Okay, the guy who started dash was the one… But with bytecoin it was that they secretly mined for a while and then said hey guys, you just haven’t seen this, it’s been here the whole time. Monero also uses cryptonote. They use RingCT. There was an inflation bug in cryptonote that was found before anyone exploited it on monero. But someone did exploit it on bytecoin; it was exploited there and then monero patched it. It’s very easily detected.

Stackexchange has a question “How do I use the CryptoNote inflation bug” - nobody seems to have answered that one hmm.

OP_MASK

bitcoin-dev post: Schnorr and taproot (etc) upgrade

This lets you sign anything that matches this form, and you can mask certain parts of the transaction. This is instead of SIGHASH_ALL or SIGHASH_NOINPUT. Might be similar to Elements’ bitmask method. If you can MASK the sequence field, then you have replace-by-fee because your counterparty has already signed off on it. So then you would have it. I don’t know how granular the masking gets. That’s the part I was curious about. The lightning network whitepaper was originally written with SIGHASH_NOINPUT. Roasbeef was saying you don’t even need to sign amounts, which you could MASK, in which case that one signature is good for the entire channel state all the way through because the amounts are masked off. If they try to steal from you, you have the revocation hash and then you could fill in whatever you think is the fair thing. In segwit v1, if you have all of this, then lightning gets much simpler, because you don’t have to send so much data back and forth. On watchtowers: if you’re running an offline node or something, you can’t do justice because you’re not watching. But with a watchtower, you would give them the commitment or justice transaction and they would monitor the chain for you. The nature of lightning now is that the watchtower would need to keep every single commitment. So this is burdensome and annoying, but with eltoo they don’t need to store all states, they can just store one state and rewrite it effectively. It’s much more efficient in that sense. You could introduce better lightning protocols without forcing everyone to upgrade: when you do the handshake with your counterparty, if they are eltoo-savvy then you can do that or another version or whatever.

References

Bitcoin Core PRs:

Optech Newsletters:

[bitcoin-dev] Schnorr and taproot (etc) upgrade https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-December/016556.html

[bitcoin-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016518.html

[bitcoin-dev] Safer sighashes and more granular SIGHASH_NOINPUT https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016488.html

Minisketch: an optimized library for BCH-based set reconciliation https://github.com/sipa/minisketch

Lightning:

lnd v0.5.1 Release https://github.com/lightningnetwork/lnd

c-lightning and lnd PRs:

SLP39 Olaoluwa Osuntokun (Roasbeef), CTO of Lightning Labs https://stephanlivera.com/episode/39

Research:

MIT DCI’s Cryptocurrency Research Review #1 https://mitcryptocurrencyresearch.substack.com/p/mit-dcis-cryptocurrency-research

Simplicity: High-Assurance Smart Contracting https://blockstream.com/2018/11/28/simplicity-github/

Pedersen Swap: https://github.com/apoelstra/scriptless-scripts/blob/master/md/pedersen-swap.md

TxProbe: Discovering Bitcoin’s Network Topology Using Orphan Transactions http://arxiv.org/abs/1812.00942

JuxtaPiton: Enabling Heterogeneous-ISA Research with RISC-V and SPARC FPGA Soft-cores http://arxiv.org/abs/1811.08091

Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains https://eprint.iacr.org/2018/1188

Misc:

Who controls Bitcoin Core? https://medium.com/@lopp/who-controls-bitcoin-core-c55c0af91b8a

What is the tradeoff between privacy and implementation complexity of Dandelion (BIP156): https://bitcoin.stackexchange.com/questions/81503/what-is-the-tradeoff-between-privacy-and-implementation-complexity-of-dandelion/81504

bitcoin script playground: https://nioctib.tech/#/transaction/f2f398dace996dab12e0cfb02fb0b59de0ef0398be393d90ebc8ab397550370b/input/0/interpret?automatic=true

Information Security:

Long-tail cryptocurrency is 51% attacked: Vertcoin edition https://www.theblockcrypto.com/2018/12/03/long-tail-cryptocurrency-is-51-attacked-vertcoin-edition/

Let’s Not Speculate: Discovering and Analyzing Speculative Execution Attacks https://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/d66e56756964d8998525835200494b74

Grifter Journalist Jerry Ji Guo Jailed for Alleged $3.5 Million Bitcoin Heist https://www.thedailybeast.com/grifter-journalist-jerry-ji-guo-jailed-for-alleged-bitcoin-long-con

Cryptojacking Malware Now Infects More Than 415,000 Routers Globally https://www.digitaltrends.com/computing/415000-routers-globally-infected-with-cryptojacking-malware/

Timothy May RIP https://www.facebook.com/lucky.green.73/posts/10155498914786706

\ No newline at end of file diff --git a/misc/2019-02-09-mcelrath-on-chain-defense-in-depth/index.html b/misc/2019-02-09-mcelrath-on-chain-defense-in-depth/index.html index cf229b51b7..9b46555866 100644 --- a/misc/2019-02-09-mcelrath-on-chain-defense-in-depth/index.html +++ b/misc/2019-02-09-mcelrath-on-chain-defense-in-depth/index.html @@ -18,4 +18,4 @@ 2 <Clawback Pubkey1> <Clawback Pubkey2> 2 OP_CHECKMULTISIG OP_ENDIF ) -

The challenge of the “vault” mechanism is to enforce that the vaulted address can only spend to the unlocked address that we created. This is how immediate theft can be prevented. Encumbrance of a future output was termed a “covenant” by Eyal et al.

This unlock transaction has that script; it is a pay-to-scripthash type transaction, so your address is a hash of the script. In the script I have an if-else statement. There is a 72 block timelock, about 12 hours. First, we’re going to check that this timelock has passed. Second, we are going to have a 2-of-2 multisig. During the course of a normal withdrawal, we would wait out this 12 hours and then send the funds off to the customer requesting the withdrawal. The else statement is the clawback, with a separate set of keys. The idea is that the thief might steal the first set of private keys, but not be able to steal the second set of private keys. If that happens, and the thief only steals the first set, then I can use the else condition and the thief cannot. Using this else condition there is no timelock, and I can send the funds to another address. Because the clawback has no timelock while the normal branch does, I have a window of 72 blocks where I do not have a race condition with the thief.
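
A sketch of the structure being described (my reconstruction for illustration; the exact opcodes and keys in a production script may differ):

OP_IF
    <72> OP_CHECKSEQUENCEVERIFY OP_DROP
    2 <Hot Pubkey1> <Hot Pubkey2> 2 OP_CHECKMULTISIG
OP_ELSE
    2 <Clawback Pubkey1> <Clawback Pubkey2> 2 OP_CHECKMULTISIG
OP_ENDIF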

There’s another method where, if a thief sends a transaction, there’s 10 minutes before it gets mined (roughly), and I could play a game where I try to double spend him during that 10 minutes. But then I am in a race, and I want to avoid that race because the thief is much more willing to burn coins to miner fees than I am.

The timelock in the above script prevents that race. I have control during the 72-block period, and the thief does not.

The if-else statement is fairly straightforward, but the challenge of doing this is that you want to enforce that this vaulted address can only spend to the unlock address. There are two transactions and two addresses here, and they are tied together. So I have to enforce that the next transaction goes to this specific address, and that’s the hard part. We generally can’t do that in bitcoin today. But if you could, then you could prevent immediate theft by someone who has stolen the hot or cold keys.

This form of encumbrance was termed a “covenant” by Eyal and his collaborators in 2016.

Vault features

The remainder of this talk is going to be dedicated to discussing several possibilities for achieving this mechanism. Come on in guys, grab a seat.

  • The vault -> unlocked unvaulting transaction starts a clock. This is one way to understand why two transactions are required.

  • The blockchain is the clock and enforces the policy.

  • We term the alternative, non-timelocked branch of the spend a clawback.

  • You have the opportunity to watch the blockchain for your own addresses, and see any unvaulting that you didn’t create. You then have timelock-amount of time to pull your clawback keys from cold storage and create the clawback transaction.

  • Clawback keys are held offline, never used, and are more difficult to obtain than the hot/cold keys. ((There could even be a cost associated with accessing these keys.))

  • The target wallet for a clawback transaction needs to be prepared ahead of time (backup cold storage).

This transaction that sends your funds from a vault to this unlocked address effectively starts a clock. We want to enforce a clock. The reason why it needs two transactions is that we’re actually using the blockchain as a clock. The first transaction goes and gets mined into a block, with a timestamp in that block. Then the blockchain, via a relative locktime, counts the number of blocks that come after, and that’s where enforcement comes in. I don’t know when the thief is going to steal the funds, so therefore I don’t know when to start the clock. This is why it requires two transactions. The first transaction starts the clock.

Q: Maybe I’m rolling this back just a bit, but I’m a bit hazy on what the scenario is. What is the threat scenario? What are those details?

A: The threat is that I have cold storage, a thief somehow manages to exfiltrate my private keys, which normally means game over and all funds lost. This is a mechanism to actually save yourself in that situation. How they exfiltrate those private keys depends on how exactly your wallet is built, and I am not going into the details of how Fidelity built its wallet.

Q: So the threat is simple capture of private keys.

A: Yep, basically. It doesn’t matter how they obtain them. There are side-channel attacks, insider threat attacks, signature vulnerability attacks, and certain circumstances where if you can convince someone in ECDSA to sign a message and reuse some nonce k then you can actually get the private key. There are multiple ways you can get the private key if there was some flaw in the software.

Q: Do you have any statistics on this?

A: Not in this talk, but there have been many hundreds of exchange hacks. It’s estimated that 20% of all bitcoin in existence has been stolen. I’m making up that number off the top of my head, I’m probably wrong about it. It’s a large number though. Just last week, the QuadrigaCX exchange in Canada claimed it was hacked. It was shut down. $100 million was either lost or stolen by the founder. Nobody knows yet.

Q: Is there a reason for focusing on working around the fundamental limitations in the security of the keys, the system, and focusing on this workaround?

A: As opposed to what?

Q: Rearchitecting the mechanisms by which keys are used or protected, and making them more consistent with the ways that keys are recommended to be used in cryptography in general.

A: Well, here the idea is that we’re using defense in depth: do both. This talk is focusing on what we can do on the blockchain. You can do a lot of things offline too, like Shamir sharding my keys, using multi-party computation to compute the signature, and we do use a lot of those tools. But that has already had a lot of focus around the industry, and on-chain defense has not. The vaults discussed here are also not possible today because they require upgrades to the bitcoin consensus rules. The purpose of this talk is to get people to think about this scenario. If we want to implement these changes in bitcoin, exactly what changes would we want to implement, and why? These are going to require at least a soft-fork. There’s probably a soft-fork upgrade coming this summer which will include a few things I am going to talk about. I would very much like there to be a mechanism in there so that we could use vaults. It’s a tricky question. Honestly, this talk doesn’t have much of a conclusion. I am going to present several alternatives and I hope to raise people’s awareness to think about it, and maybe over the next few months we can get this into the next soft-fork.

Or maybe it’s just a terrible idea and we shouldn’t use it at all.

Q: … more profitable for you to… delay this… in a system like this, … you wouldn’t be protected against that. You will… start blaming… either close that channel… I could… proof of.. transaction… and record the… to … So these systems are also needed for… as for private keys…. important.. but also cooperation in second layers as well.

Q: That’s a different threat.

A: This basic script structure I showed is also used in lightning channels. The way I am using it here is actually backwards compared to how it is used in lightning and plasma and other state-channel approaches. Here, the one with the timelock is the normal execution branch. In plasma or lightning, the one without the timelock is the default execution branch. In lightning, the one with the timelock is called an adversarial close of the channel. It’s a similar construct and a similar use of features. I don’t quite know off the top of my head how to incorporate this with state channels because of that flip. But we can discuss it afterwards.

The vault features include a way to see the theft transaction: we see it and we know about it because we have other nodes watching the blockchain that are not connected to the system from which the thief stole the keys. This requires an online approach. It’s appropriate for exchanges and online service providers, but not really appropriate for an individual’s phone.

We’re calling the alternative spend route a “clawback”. That’s the terminology here: vaults, clawbacks and covenants. You have the opportunity to watch the blockchain and observe for unvaulting transactions, at which time there is a timelock and the opportunity to broadcast the clawback transaction if that’s what you wish to do. This gives you time to pull out the clawback keys and use them.

The clawback keys are a different set of keys than the ones that the thief stole. Presumably these clawback keys are more difficult or costly to access than the hot keys. So these are very cold keys, or it might be a much larger multisig arrangement than your regular cold keys. The idea is that the thief stole your hot keys or even your cold keys, and these clawback keys should be more secure because they are used less often and not used in the course of normal operations. This makes them more difficult to steal by an attacker who is watching your operation: someone who watches how things move through your system can discover how keys are controlled and who has access to them, and then attack those people. But the clawback keys should not be used until you have to execute the clawback mechanism. If you see a theft like that, you probably want to shut down your system, figure out what happened, and then later not use the system in the same way again without fixing the vulnerability that the attacker exploited.

Last point here, the clawback wallet– the destination of the funds when I execute the clawback mechanism– needs to be different from the original one. The assumption here is that the thief has stolen the cold wallet keys, or other things in addition to that. So I need to decide what the clawback should be doing, upfront, and where it should be sending the funds when I successfully clawback the funds from the thief. This can be another vault or some really deeply cold storage keys.

For the rest of this talk, I will dig into some rather technical details about how to do this. Let me just pause here and ask if there’s any questions about the general mechanism.

Q: It sounds similar to the time period between the time a transaction happens in bank account, and the time of settlement, during which time you can “undo”.

A: Absolutely. Those kinds of timelocks are present all over finance. The way that they are implemented is that there is a central computer system that looks at the time on the clock and says okay the time hasn’t passed yet. But these systems aren’t cryptographically secure. If I hack into the right system and twiddle the right bit in the system, I can convince you the time is different and successfully evade that control mechanism.

The notion of timelock itself is very old. Back in the old west, they would have physical mechanical clocks where the bank owner would leave and go home but first he would set the timer on the vault such that the vault wouldn’t open up overnight when he wasn’t there, only at 8am the next morning would the vault open. It’s a physical mechanism. I actually have a photo of this in a blog post I wrote a while back.

But yes, this is standard practice. You want to insert time gaps in the right places for security measures and authorization control in your system.

Q: Do you have control over how many days or how much time elapses?

A: Of course.

Q: If I said I want it to be 72 hours, and you want it to be 12 hours, that’s usually different from the banks.

A: The time here is set by the service provider. All the transactions and timelocks are created by the service provider. This service provider is providing a service to the customers who are trusting them with their funds, whether an exchange or a custodian or something else. Everything happening here is set by the service provider. If you want those times to be different, then you can talk with your service provider. If you want the timelocks to be short, then it’s higher risk and you’re going to have to ask about your insurance and etcetera etcetera.

Q: Could miners theoretically.. by their mining power.. attack the timelock?

A: No.

Q: But it is measured in blocks?

A: It is.

Q: So if they stop mining….

A: There are two ways to specify timelocks. The mechanism that I showed here is OP_CHECKLOCKTIMEVERIFY, which is an absolute timelock. There’s also a relative timelock, and each can be specified in blocks or in clock time. So there’s relative timelock, absolute timelock, block-based and clock-based, and by how you encode this number here, you can select any of those options. In this script I am using a relative timelock, specified as a number of blocks.
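
For the relative, block-based case the number ends up encoded in the input’s nSequence field; a sketch of the BIP68-style bit layout (my illustration):

# BIP68-style relative locktime encoding inside nSequence.
DISABLE_FLAG = 1 << 31     # if set, no relative locktime is enforced for this input
TYPE_FLAG    = 1 << 22     # if set, the value is in units of 512 seconds; otherwise in blocks
VALUE_MASK   = 0xFFFF      # low 16 bits hold the locktime value

def relative_locktime(value, in_blocks=True):
    assert 0 <= value <= VALUE_MASK
    return value if in_blocks else (TYPE_FLAG | value)

n_sequence = relative_locktime(72)        # the 72-block delay from the vault example
assert n_sequence & VALUE_MASK == 72 and not (n_sequence & DISABLE_FLAG)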

Q: …

A: The number comes from the miners putting timestamps in the blockheader, so if all the miners want to mess with this then they could. But that’s your standard 51% attack. Nothing in bitcoin works if you have a 51% attack.

Q: … It could effect it to a lesser degree, like if 30% of the miners…

A: If you want to speed up or slow down the timelock, then …

Q: …

A: If you’re a miner, you can slow it down by turning things off. But presumably you want to turn things on. You can calculate the cost of doing that. Today, at whatever the hashrate is. In 2017, there was approximately $4b spent on bitcoin mining total. If I divide that by the number of blocks in 2017, then I can calculate how much each block is worth and then I can see the value of a transaction might be $10m and the cost of an attack is $x dollars. So I can set the timelock based on the ratio of those two numbers, like the cost of the attack versus the value of the transaction. What this means is that high-value transactions will have longer timelocks. This is frankly a very good risk strategy that I haven’t seen many people in the bitcoin space use. Different value transfers should have different timelocks. It’s a risk parameter.
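
As a back-of-the-envelope version of that sizing rule (the dollar figures are the speaker’s rough numbers; the arithmetic is my own illustration):

# Size the timelock so that out-mining it costs more than the value being protected.
annual_mining_spend = 4_000_000_000            # ~$4B spent on mining in 2017 (rough figure from the talk)
blocks_per_year = 6 * 24 * 365                 # ~52,560 blocks
cost_per_block = annual_mining_spend / blocks_per_year   # ~$76,000 per block

value_at_risk = 10_000_000                     # e.g. a $10M withdrawal
timelock_blocks = round(value_at_risk / cost_per_block)  # ~131 blocks, a bit under a day
print(round(cost_per_block), timelock_blocks)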

Q: You said there’s a need for some fork to allow some functionalities.. what is the functionality?

A: That’s the rest of the talk.

Vaulting mechanisms

There are four mechanisms that I know of that have been proposed so far. There are probably others.

Remember, a vaulted UTXO is a UTXO encumbered such that the spending transaction must have a certain defined structure.

Eyal, Sirer and Moser wrote a paper in 2016 and showed it at Financial Crypto 2016. They proposed a new opcode OP_CHECKOUTPUTVERIFY.

The second mechanism is something I came up with a few months later where I described how to do this using pre-signed transactions. That allows you to basically do this mechanism without actually modifying bitcoin. I will talk mostly about that one since it was mostly my idea and I know a lot about it.

jl2012 wrote a BIP about OP_PUSHTXDATA where the idea is to take data on the transaction and place it on the stack so that you can do computation on the data.

The last idea referenced here is from O’Connor and it’s implemented in Blockstream Elements called OP_CHECKSIGFROMSTACK. They had a paper in Financial Crypto 2017 about it. This mechanism essentially adds an opcode called OP_CHECKSIGFROMSTACK and it literally checks the signature on the stack and there’s a trick by which you can use that to implement a covenant.

I am going to go through these one-by-one.

OP_CHECKOUTPUTVERIFY (Eyal 2016)

As the name implies, OP_CHECKOUTPUTVERIFY examines a specific output and does computation on it. It’s sort of like a regular transaction. I have placed a transaction template here. There’s a pattern with placeholders, which could be pubkeys. This then encumbers the next output to say the script of the next output must be a script that matches this pattern.

This checks the value, for instance. This lets you encumber the value and check the script pattern, which implicitly lets you check the public keys or any other logical structure in the output script.

Personally I think this is kind of complicated, and it’s sort of an ethereum-like approach to computation where you put everything on the stack and write a program to deal with it. This is not really how things are done in bitcoin. One of the ways we describe the difference between bitcoin and ethereum is that bitcoin requires proof-of-correct-computation for scripts, whereas ethereum actually does the computation on-chain. So my preference, and I think the bitcoin community’s preference, is that rather than doing computation in the scripts, which has scaling and privacy consequences, we should instead do the simplest possible thing to prove you did the right computation. At the end of the day, you can prove that you have done almost anything just by having a certain signature.

Pay-to-timelocked pre-signed transaction (P2TST) and SIGHASH_NOINPUT

I came up with the pay-to-timelocked pre-signed transaction (P2TST) proposal. The idea is the following. When you give someone a deposit address, instead of giving them an address to which you have the private key, you randomly generate a new private key at the time that they request the deposit. Then you create and pre-sign a transaction that does the vault-to-unlocked send, with one input and one output. Then you delete that private key. As long as that key was really deleted, I have just created an enforcement where the funds have to move from this vault to the unlock, because the only thing that exists at this point is the pre-signed transaction. Then I give the address for the incoming transaction on that pre-signed transaction to the user who wishes to make a deposit.

This pre-signed transaction unfortunately depends on the pre-signed txids. So we have to wait for Alice to broadcast her transaction before we can do this. So this private key has to live for some time while we wait for Alice to send the transaction. That’s a pretty big risk: during that waiting period, the private key could be stolen. We could talk about secure hardware and other options, though. But this is a risk on its own. If somebody is able to figure out how to obtain this private key, then they can evade the vault mechanism entirely, which is not what we want. So instead of exfiltrating the private keys for the cold storage, they would just exfiltrate the pre-signing keys and this would completely evade the vault mechanism.

There’s a second idea here, which is that you create a transaction called a P2TST, a pay-to-timelocked pre-signed transaction. It sends from the vault to the unlocked state. With this construct, your funds storage is now a set of pre-signed transactions instead of being keys. You have keys also, but your main control is the set of pre-signed transactions. Then there are new questions like what do I do with these pre-signed transactions and where do I put them.

Instead of creating a key and deleting it, you could instead create a signature using a random number generator and then, through the magic of ECDSA public key recovery, you can compute the corresponding public key. I would have to break the ECDSA discrete log assumption in order to compute the private key. By using this technique, I have created a valid pre-signed transaction and I have enforced this policy, having never had a private key at all. There are some caveats though: this requires some changes to bitcoin, either SIGHASH_NOINPUT, also with a flag like SIGHASH_SCRIPTHASH, and an allowance for pubkey recovery. Unfortunately some of the upcoming Schnorr signature proposals explicitly do not allow for pubkey recovery techniques.

Because the SIGHASH depends on the input txid, which in turn depends on the previous transaction’s output address, which depends on the pubkey, we have a circular dependency that could only be satisfied by breaking our hash function. It’s a circular dependency on the pubkey. I couldn’t do this, unless I break my hash function. I can’t compute the pubkey and then put it into the input hash that depends on the pubkey in the first place.

What’s a SIGHASH?

The way this works is that when you do EC recovery on a signature, you have three things really– you have a signature, a pubkey, and you’ve got a message. Those are the three things involved in verifying a signature. The pubkey and signature are obvious to everyone here right? The message here is really the SIGHASH. It’s a concatenation of a whole bunch of data extracted from the transaction. It depends on the version, hash of the previous outputs, and the outputs, and various other things. The signature authorizes that. What we want to do is not depend on the pubkey in the input. That creates a circular dependency where you can’t do EC recovery.
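
Conceptually the message being signed is just a double-SHA256 over a serialization of transaction fields plus the sighash type; a much-simplified sketch of that idea (my illustration, not the exact BIP143 field list or serialization rules):

import hashlib, struct

def sha256d(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_sighash(n_version, prevouts, outputs, n_locktime, sighash_type):
    # Real sighash algorithms commit to more fields and use strict serialization rules.
    msg = (struct.pack("<I", n_version) + sha256d(prevouts) + sha256d(outputs)
           + struct.pack("<I", n_locktime) + struct.pack("<I", sighash_type))
    return sha256d(msg)

print(toy_sighash(2, b"serialized prevouts...", b"serialized outputs...", 0, 1).hex())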

There’s a proposal out there called SIGHASH_NOINPUT, which was originally proposed for the lightning network, where it would be nice to be able to take the set of outputs on the channel and be able to rearrange them without making an on-chain transaction. If my transactions commit to the txids on-chain, then I can’t do that. SIGHASH_NOINPUT decouples your lightning transactions from those inputs and lets you rearrange them, and therefore lets you add more funds and remove funds from the transaction. I think it’s very likely that NOINPUT will be implemented and deployed because it is so helpful to the lightning network. So, can we repurpose SIGHASH_NOINPUT for a vault?

Schnorr BIP and SIGHASH_NOINPUT discussion

Brief interlude on Schnorr signatures. In elliptic curves, pubkey recovery is well known. In ethereum, there’s even an opcode, or a common function, that does it. The way you check signatures in ethereum is often to compute the pubkey and see if it’s the same. So there is now a Schnorr BIP put out by Pieter Wuille. The way it works is the following: this here is basically the equation he proposes that a Schnorr signature has to satisfy.

Schnorr signatures basically for those who don’t know the history actually predate ECDSA signatures and unfortunately they were patented. Therefore they saw basically zero use whatsoever. At the time that Satoshi wrote bitcoin in 2007-2009, it was almost the same time that the Schnorr patent expired like in 2009 or 2010. But since there was no history or libraries that used Schnorr signatures, nobody used them. Schnorr signatures have a lot of advantages over ECDSA signatures. If you take two pubkeys and add them together, the sum of the signatures validates for the sum of the pubkeys. That’s not something as easily done with ECDSA. This is very powerful for blockchain because blockchain has scalability problems in the first place. If I can smash all the signatures together and validate just one signature, that’s a very big win for scalability.

There’s a lot of other fancy things you can do with schnorr signatures. They are linear, so you can add things to them. You can do native multisig inside of a Schnorr signature. There’s a lot of reasons that people want this. A BIP has been proposed. The mechanism to get it into bitcoin has not been proposed yet, but the BIP describes what the signatures are, how you get them and how you check them.

Assuming that we get this BIP later this year, this still has the problem of committing to the public key and creating a circular dependency that again I cannot satisfy. The way that happens here is that they have intentionally put the pubkey inside this hash, and the reason for that is that they don’t want people to be able to reuse signatures. The standard way of specifying a Schnorr signature is slightly different: there’s no pubkey in the hash there. For clarity, G is the standard basepoint, (r, s) are the two components of my signature, and H is the hash of the pubkey and message, times my pubkey. So with this original proposal, you can do pubkey recovery. I can just put things on the other side of the equation and compute the pubkey, which is what I need for my pre-signed transaction idea.

The reason why this formula was not chosen, in favor of this other one, is that I can convert any signature (r, s) into a signature on something else by shifting the s by an elliptic curve point. This means that I can basically take a signature off the blockchain and reuse it for a different transaction, which is bad, because it might allow someone else to send funds in a way that you didn’t want them to. If I take this signature and shift it by this much, it’s now a valid signature for something else. Whether this is okay reduces to the question: are there any transactions that reuse the same message value m? It turns out in practice that the answer is yes, there are. One way this might happen: if you have a service provider and someone is sending them a lot of funds, it’s standard practice for someone to do a penny test where they send a small amount of funds first to check if things work, and the other thing that they will do is tranche transactions where they will divide up the total amount into several smaller denominated transactions in order to lower the risk on the entire transfer. If you’re sending a billion dollars all at once, and it’s all lost, then it sucks. If I divide it up into 10 parts, and I only lose 1/10th of it, then that sucks less. But the consequence of this is that you will end up having transactions with exactly the same value and exactly the same address. As much as we would want to say hey client, please don’t reuse addresses or send to the same address twice, we technically can’t prevent them from doing that. There are good reasons why they probably would reuse addresses too. If I take two different transactions and I tranche them, say I send 100 BTC to the same address, twice, this is a case where I can reuse the signature. If we then move those funds to another cold wallet or something like that, the second transaction is now vulnerable. They can take the signature off the first transaction and apply it to the second and do some tweaks and have some monkey business going on. We don’t want that. The reason why this works is because of SIGHASH_NOINPUT. If we were to use SIGHASH_NOINPUT, which allows for removing the txid from the input, then that is a circumstance under which the Schnorr signature replay would work. If we don’t use NOINPUT, then this is not a problem.

Half-baked idea: input-only txid (itxid)

I had a half-baked idea this morning to try to fix this. There’s probably something wrong with this, and if so let me know.

https://vimeo.com/316301424&t=33m47s

The problem with pubkey circular dependency also comes from the input txids: the txid is defined as sha256d(nVersion|txins|txouts|nLockTime). Segwit also defines another txid used internally in the witness merkle tree (not referenced by transactions): wtxid = sha256d(nVersion|marker|flag|txins|txouts|witness|nLockTime), where the txouts are a concatenation of value|scriptPubKey|value|scriptPubKey…, and scriptPubKey(P2PKH) is OP_DUP OP_HASH160 pubkeyhash OP_EQUALVERIFY OP_CHECKSIG and scriptPubKey(segwit) is 0 sha256d(witnessScript), which references our pubkey, creating the circular dependence. However, let us define an input-only txid itxid where itxid = sha256d(nVersion|txins|nLockTime). This is sufficient to uniquely identify the input (since you can’t double spend an input), but does not commit to the outputs. Outputs are committed to in sighashes anyway (and, if desired, modulo SIGHASH_NOINPUT). This would allow us to use pubkey recovery and RNG-generated signatures with no known private key. This implies a new or modified txid index and a SIGHASH_ITXID flag. This doesn’t seem to be useful for lightning’s SIGHASH_NOINPUT usage where they need the ability to change the inputs. This evades the problem of Schnorr signature replay since the message m (sighash) is different for each input.
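
In code form, the distinction being drawn is just which serialized fields feed the hash (a sketch; the byte-level serialization of each field is hand-waved here):

import hashlib

def sha256d(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txid(n_version, txins, txouts, n_locktime):
    # The normal txid commits to the outputs, whose scripts reference pubkeys.
    return sha256d(n_version + txins + txouts + n_locktime)

def itxid(n_version, txins, n_locktime):
    # The proposed input-only txid drops the outputs, breaking the circular dependency.
    return sha256d(n_version + txins + n_locktime)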

The reason why we ended up with the circular dependency is because the pubkey is referenced by the txid. The txid is defined as mentioned above. Segwit also defines another txid called wtxid which adds a few more things including the witness program. Segwit takes a lot of the usual transaction data and moves it to a separate block, which removes the possibility of malleability. Signatures and scripts get moved to this extra block of data. So there’s another merkle tree of this witness block that gets committed to.

In P2PKH, the pubkeyhash gets hashed into the — which creates the circular dependency. ((You can use P2PK to not require a pubkeyhash.))

So what if we instead introduce input-only txids called itxid. I remove the outputs from the original txid calculation. This only involves the inputs. If I use this to reference which inputs I am actually spending, this would still uniquely identify the inputs I am spending. It doesn’t say anything about the outputs. The outputs are tied to the signature hash in the transaction, if you desire it to, which ties them together. The outputs are then here in the witness tree. Everything is still committed to in the right way, in a block. I don’t think there’s a problem with that. This allows us to compute the pubkey and create pre-signed transactions, without actually having the private key.

This is a bit complex. I literally had the idea this morning. You would have to change the txid index. Bitcoin is like a big list of txids. If I want to change that index, I basically have to double the indices. This will not go over well, I suspect.

Pre-signed transactions

Your wallet is a set of pre-signed transactions here. This is an interesting object for risk management. These objects are not funds in and of themselves. There is a private key somewhere that is needed to do anything with them. But they can be aggregated into amounts of known denominations, such that they can be segregated and separated. It gives you an extra tool for risk management.

These pre-signed transactions are less security-critical than private keys, and they are less costly to create than an entire private key ceremony. A key signing ceremony can be a multi-million dollar affair that is audited and has all sorts of controls around it to make sure we don’t mess up.

If an attacker steals just the pre-signed transactions, they don’t have access to the funds because they also have to steal the keys with them. So you could back these up, you could put them with a lawyer or a custodian or something like that.

There are downsides to pre-signed transactions. A responsible hard-fork that implements replay protection cannot be claimed by a pre-signed transaction. If the fork changes how transactions are signed, and I provably don’t know the private key, then by definition I cannot possibly generate a valid signature to claim the coins on the other side of the hard-fork. These pre-signed transactions won’t be valid on any fork that changes the sighash. I think hard-forks are a waste of time and people should stop doing them. But some people would say this is a nasty consequence of pre-signed transactions.

I talked around at Fidelity about the “delete the key” idea, and generally that’s not how things are done. Our cybersecurity people generally prefer to always backup keys just in case. You never know, put it in a really really deep vault. The biggest issue with delete-the-key is: how do you know that the key was really deleted? Software and hardware mechanisms can evade the deletion.

SIGHASH_NOINPUT is generally unsafe for wallet usage. So, don’t use it for wallets. The reason why it’s okay for lightning is that lightning is a protocol with certain rules around it. People aren’t just sending arbitrary funds at arbitrary times using SIGHASH_NOINPUT to arbitrary lightning addresses; you have to follow the lightning protocol. That’s why we run into the problem with penny tests and tranche testing: whether to create tranches is the sender’s decision and policy, not the receiver’s. That’s the root of the problem there.

Lau: OP_PUSHTXDATA

OP_PUSHTXDATA seems to be created to enable arbitrary computation on transaction data. This is generally not the approach favored by most bitcoin developers; instead they prefer verification that a transaction is correctly authorized, rather than computation (which is more similar to ethereum). In my opinion, lots of funny things can be created with OP_PUSHTXDATA, and there are many unintended consequences.

This was proposed by jl2012. It’s kind of a swiss-army type opcode. What it does is it takes the random data from the transaction and puts it on the stack and then lets you do a computation on it, like input index, input size, output size, locktime, fee amount, version, locktimes, weight, total size, base size, etc.

This was proposed I think even before Eyal’s 2016 paper. It’s kind of gone nowhere, though. This draft BIP never became an actual BIP. Given the philosophy of “let’s not do fancy computation in scripts” this proposal is probably not going to go anywhere.

OP_CHECKSIGFROMSTACK

Of the other three (the opcode proposals), the one that I am most in favor of is OP_CHECKSIGFROMSTACK. The way a vault works with this opcode looks like the following: you have this:

script: OP_OVER OP_SHA256 pubkey 2 OP_PICK 1 OP_CAT OP_OVER OP_CHECKSIGVERIFY OP_CHECKSIGFROMSTACKVERIFY

initial stack: signature sha256(sigTransactionData)

So you do two checksig verifies. The idea here is that the initial stack, when I spend this output, will have the signature and the hash of the transaction data. I am going to first check the signature that is on the stack, and then separately I check the signature on the transaction itself. If the signature of the transaction is exactly equal to the one that is on the stack, then this implies the transaction data is exactly what was committed to. It implicitly verifies that this transaction has a specific structure encoded in its sighash value.

It’s generally possible to create recursive covenants with this. Once you put this opcode in that encumbers the script of a next transaction, you can require that the next transaction has any structure at all. You can require it has another CHECKSIGFROMSTACK or another PUSHTXDATA in there or whatever. There might be uses for this.

Another drawback of using these kinds of opcodes is that it’s fairly obvious to everyone what you’re doing; all your code ends up in the blockchain and now someone knows that you’re doing vaults. This is revealed with every transaction, not just when you execute the clawback transaction. Any regular send reveals this information.

Taproot

It would be awesome if there was a way to hide the fact that I was using a vault mechanism or using OP_CHECKSIGFROMSTACK. Just in general, it is useful to hide the existence of an alternative clawback branch in the script.

Taproot works by “tweaking” your public key such that your public key P is in fact public key P plus a merkleized abstract syntax tree of your script times the generator G.

If you have the private key, then you can also sign for P + MG just as you could for P, by tweaking your private key in the same way. When spending, we reveal the committed alternatives M in the scriptPubKey, along with the executed Merkle branch. But the remaining non-executed branches and alternatives remain private.

This is good for optimistic protocols where the alternatives are used less frequently than the default case. Transactions are indistinguishable from a single-key P2PKH type output. Multisig can be done via MuSig with Schnorr, and an interactive threshold signature scheme exists that also results in a single signature.

For a vault in taproot, the default script would be: (timelock) OP_CHECKLOCKTIMEVERIFY/OP_CHECKSEQUENCEVERIFY hotpubkey OP_CHECKSIG.

Taproot was proposed in 2018 by Greg Maxwell. The idea is that you can add alternative spending paths to your script. You do this by tweaking your public key, which is done here by addition. M is the merkle root of a merkleized abstract syntax tree of alternative scripts (MAST). MAST is where you take a big block of if-else statements and turn it into a single merkle tree.

If you have a private key, you could sign by tweaking your private key in the same way. But the key might be constructed by MuSig or something. The interesting thing is that nobody can tell you tweaked the public key, if you don’t reveal it. I can’t take an arbitrary pubkey and claim I did this when I didn’t; that would be equivalent to breaking the discrete log problem on ECDSA.
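
The tweak arithmetic itself is tiny; here is a toy version in a small multiplicative group (my own illustration, writing the P-plus-MG tweak as multiplication by g^m):

# Toy key tweak: public key P' = P * g^m, matching secret d' = d + m (mod the group order).
p, q, g = 23, 11, 2        # tiny insecure group: the order-11 subgroup of Z_23*
d = 6                      # internal private key
m = 9                      # stand-in for the Merkle root of the alternative scripts
P = pow(g, d, p)
P_tweaked = (P * pow(g, m, p)) % p
d_tweaked = (d + m) % q
assert pow(g, d_tweaked, p) == P_tweaked   # the holder of d can still sign for the tweaked key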

By “optimistic” protocol I mean the alternatives are used less frequently than the default case. You can hide all kinds of stuff. With Schnorr signatures, you can do multisig in a single key and single signature. This is pretty darn fancy, and I am pretty happy about this. It would completely let you hide that you did this.

Your vault script ends up looking pretty simple, which is just a timelock and a CHECKSIG operation.

Fee-only wallets

You generally want to add fee wallets when you do this. If you create these pre-signed transactions or use covenant opcodes, the encumbrance you create doesn’t know anything about the fee market. Bitcoin has a fee market where fees go up and down based on block pressure. The time that you claim these funds or do a withdrawal request is separated in time from when the deposit happened, and the fee market will not be the same. So generally you have to add a wallet that is capable of bumping the fees. There are many mechanisms for doing that.

Conclusions

Vaults are a powerful tool for secure custody that have been around for at least 2-3 years and haven’t gone anywhere. My read of the ecosystem is that most of the developers think it’s a good idea, but we haven’t done the work to make it happen. Fidelity would use vaults if we had these tools.

Pre-signed transactions, modulo my itxid idea, are kind of undesirable because of delete-the-key. If you don’t delete the key, then third-parties can steal that key. If you can use NOINPUT, then you can get around that.

Of the new opcodes, the one I am most in favor of is OP_CHECKSIGFROMSTACK which is already deployed in Elements and Liquid. It’s also deployed on Bitcoin Cash. As far as I know, there’s no issues with it. It’s a simple opcode, it just checks a signature. But you can create covenants this way. There might be other ways to use this opcode. One of the ways in the original covenants 2016 paper was that you can make a colored coin with OP_CHECKOUTPUTVERIFY. But what if colored coins get mixed with other coins? Well, you can enforce that this doesn’t happen, using these kinds of mechanisms. This is one other potential use case for all three of the opcode proposals we have discussed.

One of the reasons I like OP_CHECKSIGFROMSTACK better is that it’s simpler than the other ones, and it’s less like ethereum, and it just lets me prove that you have the right thing. All three opcode options let you encumber things recursively, so it means that bitcoin would no longer be fungible: is that desirable?

In all cases here, batching is difficult. This is one consequence of this. You generally can’t take pre-signed transactions and combine them with other inputs and outputs. The signatures will sign the whole transaction. There are ways to do it such that you only sign one of the inputs and one of the outputs, but then if I do that and batch a bunch of transactions together then I don’t really gain much from the batching. So batching requires more thought.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015793.html

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html or https://twitter.com/kanzure/status/1159101146881036289

https://blog.oleganza.com/post/163955782228/how-segwit-makes-security-better

https://bitcointalk.org/index.php?topic=5111656

“tick method”: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017237.html

http://web.archive.org/web/20180503151920/https://blog.sldx.com/re-imagining-cold-storage-with-timelocks-1f293bfe421f?gi=da99a4a00f67

http://www0.cs.ucl.ac.uk/staff/P.McCorry/preventing-cryptocurrency-exchange.pdf

\ No newline at end of file +

The challenge of the “vault” mechanism is to enforce that the vaulted address can only spend to the unlocked address that we created. This is how immediate theft can be prevented. Encumberence of a future output was termed a “covenant” by Eyal et al.

If this unlock transaction has that script, and this is a pay-to-scripthash type transaction, and your address is a hash of the script. I have an if-else statement. There is a 72 block timelock, about 12 hours. First, we’re going to check that this locktime has passed. Second, we are going to have a 2-of-2 multisig. During the course of a withdrawal normal, we would wait out this 12 hours and then send it off to the customer requesting the withdrawal. The else statement is the clawback with a separate set of keys. The idea is that the thief might steal the first set of private keys, but not able to steal the second set of private keys. If that happens, and the thief only steals the first set, then I can use the else condition and the thief cannot. Using this else condition, there is no timelock here, and I can send it to another address. By not having a timelock here, I have a window of 72 blocks where I do not have a race condition with the thief.

There’s another method where, if a thief sends a transaction, there’s 10 minutes before it gets mined (roughly), and I could play a game where I try to double spend him during that 10 minutes. But then I am in a race, and I want to avoid that race because the thief is much more willing to burn coins to miner fees than I am.

The timelock in the above script prevents that race. I have control during the 72 hour period, and the thief does not.

The if-else statement is fairly straightforward, but the challenge of doing this is that you want to enforce that this vaulted address can only spend to the unlock address. There are two transactions and two addresses here, and they are tied together. So I have to enforce that the next transaction goes to this specific address, and that’s the hard part. We generally can’t do that in bitcoin today. But if you could, then you could prevent immediate theft from stealing the hot or cold keys.

This form of encumbrance was termed a “covenant” by Eyal and his collaborators in 2016.

Vault features

The remainder of this talk is going to be dedicated to discussing several possibilities for achieving this mechanism. Come on in guys, grab a seat.

  • The vault -> unlocked unvaulting transaction starts a clock. This is one way to understand why two transactions are required.

  • The blockchain is the clock and enforces the policy.

  • We term the alternative, non-timelocked branch of the spend a clawback.

  • You have the opportunity to watch the blockchain for your own addresses, and see any unvaulting that you didn’t create. You then have timelock-amount of time to pull your clawback keys from cold storage and create the clawback transaction.

  • Clawback keys are held offline, never used, and are more difficult to obtain than the hot/cold keys. ((There could even be a cost associated with accessing these keys.))

  • The target wallet for a clawback transaction needs to be prepared ahead of time (backup cold storage).

This transaction that sends your funds from a vault to this unlocked address effectively starts a clock. We want to enforce a clock. The reason why it needs two transactions is that we’re actually using the blockchain as a clock. The first transaction gets mined into a block with a timestamp in that block. Then the blockchain, via a relative locktime, counts the number of blocks that come after, and that’s when enforcement comes in. I don’t know when the thief is going to steal the funds, so I don’t know when to start the clock. This is why it requires two transactions: the first transaction starts the clock.
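For reference, relative timelocks of this kind are what BIP 68 provides today; a minimal sketch of how a 72-block relative lock might be encoded in the spending input’s nSequence field (simplified, block-based variant):

    # BIP 68 encodes a relative timelock in the spending input's nSequence:
    # bit 31 clear = relative lock enabled, bit 22 clear = measured in blocks.
    SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31
    SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22

    relative_lock_blocks = 72
    n_sequence = relative_lock_blocks  # low 16 bits carry the block count

    assert n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG == 0  # lock is enforced
    assert n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG == 0     # counted in blocks, not time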

Q: Maybe I’m rolling this back just a bit, but I’m a bit hazy on what the scenario is. What is the threat scenario? What are those details?

A: The threat is that I have cold storage, and a thief somehow manages to exfiltrate my private keys, which normally means game over and all funds lost. This is a mechanism to actually save yourself in that situation. How they exfiltrate those private keys depends on how exactly your wallet is built, and I am not going into the details of how Fidelity built its wallet.

Q: So the threat is simple capture of private keys.

A: Yep, basically. It doesn’t matter how they obtain them. There are side-channel attacks, insider threat attacks, signature vulnerability attacks, and certain circumstances where, if you can convince someone to sign ECDSA messages reusing the same nonce k, you can actually recover the private key. There are multiple ways you can get the private key if there was some flaw in the software.

Q: Do you have any statistics on this?

A: Not in this talk, but there have been many hundreds of exchange hacks. It’s estimated that 20% of all bitcoin in existence has been stolen. I’m making up that number off the top of my head, and I’m probably wrong about it, but it’s a large number. Just last week, the QuadrigaCX exchange in Canada claimed it was hacked. It was shut down. $100 million was either lost or stolen by the founder. Nobody knows yet.

Q: Is there a reason for focusing on working around the fundamental limitations in the security of the keys, the system, and focusing on this workaround?

A: As opposed to what?

Q: Rearchitecting the mechanisms by which keys are used or protected, and making them more consistent with the ways that keys are recommended to be used in cryptography in general.

A: Well, here the idea is that we’re using defense in depth: do both. This talk is focusing on what we can do on the blockchain. You can do a lot of things offline too, like Shamir sharding my keys or using multi-party computation to compute the signature, and we do use a lot of those tools. But that has already had a lot of focus around the industry, and on-chain defense has not. The vaults discussed here are also not possible today because they require upgrades to the bitcoin consensus rules. The purpose of this talk is to get people to think about this scenario. If we want to implement these changes in bitcoin, exactly what changes would we want to implement, and why? These are going to require at least a soft-fork. There’s probably a soft-fork upgrade coming this summer which will include a few things I am going to talk about. I would very much like there to be a mechanism in there so that we could use vaults. It’s a tricky question. Honestly, this talk doesn’t have much of a conclusion. I am going to present several alternatives and I hope to raise people’s awareness to think about it, and maybe over the next few months we can get this into the next soft-fork.

Or maybe it’s just a terrible idea and we shouldn’t use it at all.

Q: … more profitable for you to… delay this… in a system like this, … you wouldn’t be protected against that. You will… start blaming… either close that channel… I could… proof of.. transaction… and record the… to … So these systems are also needed for… as for private keys…. important.. but also cooperation in second layers as well.

Q: That’s a different threat.

A: This basic script structure I showed is also used in lightning channels. The way I am using it here is actually backwards compared to how it is used in lightning and plasma and other state-channel approaches. Here, the one with the timelock is the normal execution branch. In plasma or lightning, the one without the timelock is the default execution branch. In lightning, the one with the timelock is called an adversarial close of the channel. It’s a similar construct and a similar use of features. I don’t quite know off the top of my head how to incorporate this with state channels because of that flip. But we can discuss it afterwards.

The vault features include a way to see the theft transaction: we see it and we know about it because we have other nodes watching the blockchain that are not connected to the system from which the thief stole the keys. This requires an online approach. It’s appropriate for exchanges and online service providers, but not really appropriate for an individual’s phone.

We’re calling the alternative spend route a “clawback”. That’s the terminology here: vaults, clawbacks and covenants. You have the opportunity to watch the blockchain and observe for unvaulting transactions, at which time there is a timelock and the opportunity to broadcast the clawback transaction if that’s what you wish to do. This gives you time to pull out the clawback keys and use them.

The clawback keys are a different set of keys than the ones the thief stole. Presumably these clawback keys are more difficult or costly to access than the hot keys. So these are very cold keys, or it might be a much larger multisig arrangement than your regular cold keys. The idea is that the thief stole your hot keys or even your cold keys, but the clawback keys should be more secure because they are used less often and not used in the course of normal operations. That makes them more difficult to steal: an attacker who is watching your operation and how things move through your system can discover how keys are controlled and who has access to them, and then attack those people. The clawback keys should not be used until you have to execute the clawback mechanism. If you see a theft like that, you probably want to shut down your system, figure out what happened, and then not use the system in the same way again without fixing the vulnerability that the attacker exploited.

Last point here, the clawback wallet– the destination of the funds when I execute the clawback mechanism– needs to be different from the original one. The assumption here is that the thief has stolen the cold wallet keys, or other things in addition to that. So I need to decide what the clawback should be doing, upfront, and where it should be sending the funds when I successfully clawback the funds from the thief. This can be another vault or some really deeply cold storage keys.

For the rest of this talk, I will dig into some rather technical details about how to do this. Let me just pause here and ask if there’s any questions about the general mechanism.

Q: It sounds similar to the time period between the time a transaction happens in bank account, and the time of settlement, during which time you can “undo”.

A: Absolutely. Those kinds of timelocks are present all over finance. The way that they are implemented is that there is a central computer system that looks at the time on the clock and says okay the time hasn’t passed yet. But these systems aren’t cryptographically secure. If I hack into the right system and twiddle the right bit in the system, I can convince you the time is different and successfully evade that control mechanism.

The notion of timelock itself is very old. Back in the old west, they would have physical mechanical clocks where the bank owner would leave and go home but first he would set the timer on the vault such that the vault wouldn’t open up overnight when he wasn’t there, only at 8am the next morning would the vault open. It’s a physical mechanism. I actually have a photo of this in a blog post I wrote a while back.

But yes, this is standard practice. You want to insert time gaps in the right places for security measures and authorization control in your system.

Q: Do you have control over how many days or how much time elapses?

A: Of course.

Q: If I said I want it to be 72 hours, and you want it to be 12 hours, that’s usually different from the banks.

A: The time here is set by the service provider. All the transactions and timelocks are created by the service provider. This service provider is providing a service to the customers who are trusting them with their funds, whether it’s an exchange or a custodian or something else. Everything happening here is set by the service provider. If you want those times to be different, then you can talk with your service provider. If you want the timelocks to be short, then it’s higher risk and you’re going to have to ask about your insurance and etcetera etcetera.

Q: Could miners theoretically.. by their mining power.. attack the timelock?

A: No.

Q: But it is measured in blocks?

A: It is.

Q: So if they stop mining….

A: There are two ways to specify timelocks. The mechanism I mentioned here is OP_CHECKLOCKTIMEVERIFY, which is an absolute timelock. There’s also a relative timelock specified in a number of blocks. So there’s relative timelock, absolute timelock, block-based and clock-based, and by how you encode this number here, you can select any of those options. Here I am using a relative timelock, specified as a number of blocks.

Q: …

A: The number comes from the miners putting timestamps in the blockheader, so if all the miners want to mess with this then they could. But that’s your standard 51% attack. Nothing in bitcoin works if you have a 51% attack.

Q: … It could effect it to a lesser degree, like if 30% of the miners…

A: If you want to speed up or slow down the timelock, then …

Q: …

A: If you’re a miner, you can slow it down by turning things off. But presumably you want to turn things on. You can calculate the cost of doing that. Today, at whatever the hashrate is. In 2017, there was approximately $4b spent on bitcoin mining total. If I divide that by the number of blocks in 2017, then I can calculate how much each block is worth and then I can see the value of a transaction might be $10m and the cost of an attack is $x dollars. So I can set the timelock based on the ratio of those two numbers, like the cost of the attack versus the value of the transaction. What this means is that high-value transactions will have longer timelocks. This is frankly a very good risk strategy that I haven’t seen many people in the bitcoin space use. Different value transfers should have different timelocks. It’s a risk parameter.
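A back-of-the-envelope version of that calculation, using the rough numbers from the answer above (the figures are the speaker’s illustrative estimates, not precise statistics):

    annual_mining_spend_usd = 4_000_000_000  # rough 2017 estimate from the talk
    blocks_per_year = 6 * 24 * 365           # ~52,560 blocks
    cost_per_block = annual_mining_spend_usd / blocks_per_year  # ~$76,000 per block

    transaction_value_usd = 10_000_000
    # Pick a timelock long enough that stalling it costs more than the value at risk.
    min_timelock_blocks = transaction_value_usd / cost_per_block  # ~131 blocks, about 22 hours
    print(round(cost_per_block), round(min_timelock_blocks))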

Q: You said there’s a need for some fork to allow some functionalities.. what is the functionality?

A: That’s the rest of the talk.

Vaulting mechanisms

There are four mechanisms that I know of that have been proposed so far. There are probably others.

Remember, a vaulted UTXO is a UTXO encumbered such that the spending transaction must have a certain defined structure.

Eyal, Sirer and Moser wrote a paper in 2016 and showed it at Financial Crypto 2016. They proposed a new opcode OP_CHECKOUTPUTVERIFY.

The second mechanism is something I came up with a few months later where I described how to do this using pre-signed transactions. That allows you to basically do this mechanism without actually modifying bitcoin. I will talk mostly about that one since it was mostly my idea and I know a lot about it.

jl2012 wrote a BIP about OP_PUSHTXDATA where the idea is to take data on the transaction and place it on the stack so that you can do computation on the data.

The last idea referenced here is from O’Connor, and it’s implemented in Blockstream’s Elements as OP_CHECKSIGFROMSTACK. They had a paper at Financial Crypto 2017 about it. This mechanism adds an opcode called OP_CHECKSIGFROMSTACK which literally checks a signature against data on the stack, and there’s a trick by which you can use that to implement a covenant.

I am going to go through these one-by-one.

OP_CHECKOUTPUTVERIFY (Eyal 2016)

As the name implies, OP_CHECKOUTPUTVERIFY examines a specific output and does computation on it. It’s sort of like a regular transaction. I have placed a transaction template here. There’s a pattern with placeholders, which could be pubkeys. This then encumbers the next output to say the script of the next output must be a script that matches this pattern.

This checks the value, for instance. This lets you encumber the value and check the script pattern, which implicitly lets you check the public keys or any other logical structure in the output script.

Personally I think this is kind of complicated, and it’s sort of an ethereum-like approach to computation where you put everything on the stack and write a program to deal with it. This is not really how things are done in bitcoin. One of the ways we describe the difference between bitcoin and ethereum is that bitcoin requires proof-of-correct-computation for scripts, whereas ethereum actually does the computation on-chain. So my preference, and I think the bitcoin community’s preference, is that rather than doing computation in the scripts, which has scaling and privacy consequences, we should instead do the simplest possible thing to prove you did the right computation. At the end of the day, you can prove that you have done almost anything just by having a certain signature.

Pay-to-timelocked pre-signed transaction (P2TST) and SIGHASH_NOINPUT

I came up with the pay-to-timelocked pre-signed transaction (P2TST) proposal. The idea is the following. When you give someone a deposit address, instead of giving them an address to which you have the private key, you randomly generate a new private key at the time that they request the deposit. Then you create and pre-sign a transaction that does the vault-to-unlocked send with one input and one output. Then you delete that private key. As long as that key was really deleted, I have just created an enforcement that the funds have to move from this vault to the unlocked address, because the only thing that exists at this point is the pre-signed transaction. Then I give the deposit address for the incoming transaction to the user who wishes to make a deposit.

This pre-signed transaction unfortunately depends on the txid of the incoming deposit transaction. So we have to wait for Alice to broadcast her transaction before we can pre-sign, which means this private key has to live for some time while we wait for Alice to send the transaction. That’s a pretty big risk: during that waiting period, the private key could be stolen. We could talk about secure hardware and other options, though. But this is a risk on its own. If somebody is able to figure out how to obtain this private key, then they can evade the vault mechanism entirely, which is not what we want. So instead of exfiltrating the private keys for the cold storage, they would just exfiltrate the pre-signing keys and this would completely evade the vault mechanism.
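A conceptual sketch of that flow (every helper name here is a hypothetical stand-in, not a real wallet or bitcoind API), showing why the throwaway key has to survive until the deposit txid is known:

    import os

    def vault_deposit_lifecycle(derive_address, build_unvault_tx, sign_with, securely_delete):
        privkey = os.urandom(32)                   # ephemeral signing key for this one deposit
        deposit_address = derive_address(privkey)  # in reality derived via the corresponding pubkey

        def on_deposit_broadcast(deposit_txid: str, vout: int):
            # The unvault tx must reference the deposit's txid, so pre-signing (and hence
            # key deletion) can only happen after Alice broadcasts: the window of risk.
            unvault_tx = build_unvault_tx(deposit_txid, vout)  # vault -> unlocked, 1-in 1-out
            presigned = sign_with(privkey, unvault_tx)
            securely_delete(privkey)               # the enforcement hinges on this really happening
            return presigned

        return deposit_address, on_deposit_broadcast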

There’s a second idea here, which is that you create a transaction called a P2TST, a pay-to-timelocked pre-signed transaction. It sends from the vault to the unlocked state. With this construct, your funds storage is now a set of pre-signed transactions instead of keys. You have keys also, but your main control is the set of pre-signed transactions. Then there are new questions, like what do I do with these pre-signed transactions and where do I put them.

Instead of creating a key and deleting it, you could instead create a signature using a random number generator, and then through the magic of ECDSA public key recovery you can compute the corresponding public key. I would have to break the ECDSA discrete log assumption in order to compute the private key. By using this technique, I have created a valid pre-signed transaction and I have enforced this policy, having never had a private key at all. There are some caveats though: this requires some changes to bitcoin, namely SIGHASH_NOINPUT together with a flag like SIGHASH_SCRIPTHASH, plus an allowance for pubkey recovery. Unfortunately some of the upcoming Schnorr signature proposals explicitly do not allow for pubkey recovery techniques.
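A small sketch of that trick using the third-party python `ecdsa` package (the recovery call and its exact signature are assumptions about that library; the point is only to show that a randomly generated signature yields recoverable public keys for which nobody holds a private key):

    import os, hashlib
    from ecdsa import VerifyingKey, SECP256k1

    sighash = hashlib.sha256(b"placeholder transaction data").digest()

    while True:
        fake_sig = os.urandom(64)  # random r||s, chosen before any key exists
        try:
            candidates = VerifyingKey.from_public_key_recovery_with_digest(
                fake_sig, sighash, curve=SECP256k1)
            break
        except Exception:
            continue  # not every random r is a valid curve x-coordinate; retry

    # Each candidate pubkey verifies fake_sig over sighash, and computing its
    # private key would require solving the elliptic curve discrete log problem.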

Because the SIGHASH depends on the input txid, which in turn depends on the previous transaction’s output address, which depends on the pubkey, we have a circular dependency that could only be satisfied by breaking our hash function. It’s a circular dependency on the pubkey. I couldn’t do this, unless I break my hash function. I can’t compute the pubkey and then put it into the input hash that depends on the pubkey in the first place.

What’s a SIGHASH?

The way this works is that when you do EC recovery on a signature, you have three things really– you have a signature, a pubkey, and you’ve got a message. Those are the three things involved in verifying a signature. The pubkey and signature are obvious to everyone here right? The message here is really the SIGHASH. It’s a concatenation of a whole bunch of data extracted from the transaction. It depends on the version, hash of the previous outputs, and the outputs, and various other things. The signature authorizes that. What we want to do is not depend on the pubkey in the input. That creates a circular dependency where you can’t do EC recovery.
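A heavily simplified sketch of that idea (the real BIP 143 sighash has more fields and exact little-endian encodings; this only shows the “serialize transaction data and hash it” shape being described):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def toy_sighash(n_version: bytes, prevouts: bytes, outputs: bytes, n_locktime: bytes) -> bytes:
        # The "message" a transaction signature actually signs is a hash of concatenated tx data.
        return sha256d(n_version + prevouts + outputs + n_locktime)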

There’s a proposal out there called SIGHASH_NOINPUT which was originally proposed for the lightning network, where it would be nice to be able to take the set of outputs on the channel and rearrange them without making an on-chain transaction. If my transactions commit to the txids on-chain, then I can’t do that. SIGHASH_NOINPUT decouples your lightning transactions from those inputs and lets you rearrange them, and therefore lets you add funds to and remove funds from the transaction. I think it’s very likely that NOINPUT will be implemented and deployed because it is so helpful to the lightning network. So, can we repurpose SIGHASH_NOINPUT for a vault?

Schnorr BIP and SIGHASH_NOINPUT discussion

Brief interlude on Schnorr signatures. In elliptic curves, pubkey recovery is well known. In ethereum, there’s even an opcode that does it, or a common function. The way you check signatures in ethereum is often to compute the pubkey and see if it’s the same. There is now a Schnorr BIP put out by Pieter Wuille. The way it works is the following: the equation shown here is basically the one he proposes that valid Schnorr signatures must satisfy.

Schnorr signatures basically for those who don’t know the history actually predate ECDSA signatures and unfortunately they were patented. Therefore they saw basically zero use whatsoever. At the time that Satoshi wrote bitcoin in 2007-2009, it was almost the same time that the Schnorr patent expired like in 2009 or 2010. But since there was no history or libraries that used Schnorr signatures, nobody used them. Schnorr signatures have a lot of advantages over ECDSA signatures. If you take two pubkeys and add them together, the sum of the signatures validates for the sum of the pubkeys. That’s not something as easily done with ECDSA. This is very powerful for blockchain because blockchain has scalability problems in the first place. If I can smash all the signatures together and validate just one signature, that’s a very big win for scalability.

There are a lot of other fancy things you can do with Schnorr signatures. They are linear, so you can add things to them. You can do native multisig inside of a Schnorr signature. There are a lot of reasons that people want this. A BIP has been proposed. The mechanism to get it into bitcoin has not been proposed yet, but the BIP describes what the signatures are, how you get them and how you check them.

Assuming that we get this BIP later this year, it still has the problem of committing to the public key, creating a circular dependency that again I cannot satisfy. The way that happens here is that they have intentionally put the pubkey inside this hash, and the reason for that is that they don’t want people to be able to reuse signatures. The standard way of specifying a Schnorr signature is slightly different: there’s no pubkey in the hash there. For clarity, G is the standard basepoint, (r, s) are the two components of my signature, and the hash term, which covers the nonce and the message (plus the pubkey in the BIP’s version), multiplies my pubkey in the verification equation. So with the original formulation, you can do pubkey recovery: I can just move things to the other side of the equation and compute the pubkey, which is what I need for my pre-signed transaction idea.
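Loosely, the two verification equations being contrasted are the following, with G the basepoint, R the nonce point, P the pubkey, m the message and H a hash (the exact serialization in the BIP differs):

    s·G = R + H(R || P || m)·P     (key-prefixed form in the proposed Schnorr BIP)
    s·G = R + H(R || m)·P          (original Schnorr form, no pubkey in the hash)
    P = H(R || m)^-1 · (s·G - R)   (pubkey recovery, only possible without key prefixing)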

The reason why this formula was not chosen, in favor of the other one, is that I can convert any signature (r, s) into a signature on something else by shifting the s value. This means that I can basically take a signature off the blockchain and reuse it for a different transaction, which is bad, because it might allow someone else to send funds in a way that you didn’t want them to. If I take this signature and shift it by the right amount, it’s now a valid signature for something else. Whether this is okay reduces to the question: are there any transactions that reuse the same message value m? It turns out in practice that the answer is yes, there are.

One way this might happen is if you have a service provider and someone is sending them a lot of funds. It’s standard practice for the sender to do a penny test, where they send a small amount of funds first to check that things work, and the other thing they will do is tranche the transfer, dividing the total amount into several smaller denominated transactions in order to lower the risk on the entire transfer. If you’re sending a billion dollars all at once and it’s all lost, that sucks. If I divide it up into 10 parts and only lose 1/10th of it, that sucks less. But the consequence is that you will end up having transactions with exactly the same value and exactly the same address. As much as we would want to say, hey client, please don’t reuse addresses or send to the same address twice, we technically can’t prevent them from doing that, and there are good reasons why they probably would reuse addresses. If I tranche a transfer and send 100 BTC to the same address twice, this is a case where the signature can be reused. If we then move those funds to another cold wallet or something like that, the second transaction is now vulnerable: they can take the signature off the first transaction, apply it to the second with some tweaks, and have some monkey business going on. We don’t want that. The reason this works is SIGHASH_NOINPUT: if we use SIGHASH_NOINPUT, which removes the txid from what the signature commits to, then that is a circumstance under which the Schnorr signature replay would work. If we don’t use NOINPUT, then this is not a problem.

Half-baked idea: input-only txid (itxid)

I had a half-baked idea this morning to try to fix this. There’s probably something wrong with this, and if so let me know.

https://vimeo.com/316301424&t=33m47s

The problem with the pubkey circular dependency also comes from the input txids: the txid is defined as sha256d(nVersion|txins|txouts|nLockTime). Segwit also defines another txid used internally in the witness merkle tree (not referenced by transactions): wtxid = sha256d(nVersion|marker|flag|txins|txouts|witness|nLockTime), where the txouts are a concatenation of value|scriptPubKey|value|scriptPubKey…, and scriptPubKey(P2PKH) is OP_DUP OP_HASH160 pubkeyhash OP_EQUALVERIFY OP_CHECKSIG and scriptPubKey(segwit) is 0 sha256d(witnessScript), which references our pubkey, creating the circular dependence. However, let us define an input-only txid, itxid, where itxid = sha256d(nVersion|txins|nLockTime). This is sufficient to uniquely identify the input (since you can’t double spend an input), but does not commit to the outputs. Outputs are committed to in sighashes anyway (and, if desired, modulo SIGHASH_NOINPUT). This would allow us to use pubkey recovery and RNG-generated signatures with no known private key. It implies a new or modified txid index and a SIGHASH_ITXID flag. This doesn’t seem to be useful for lightning’s SIGHASH_NOINPUT usage, where they need the ability to change the inputs. It evades the problem of Schnorr signature replay since the message m (the sighash) is different for each input.
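A toy comparison of the existing txid and the proposed itxid (placeholder byte strings stand in for the serialized fields; real serialization is little-endian and more involved):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    n_version = b"\x02\x00\x00\x00"
    txins = b"<serialized inputs>"
    txouts = b"<serialized outputs, which reference pubkeys>"
    n_locktime = b"\x00\x00\x00\x00"

    txid = sha256d(n_version + txins + txouts + n_locktime)   # commits to outputs, hence the circularity
    itxid = sha256d(n_version + txins + n_locktime)           # inputs only: the pubkey drops out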

The reason why we ended up with the circular dependency is because the pubkey is referenced by the txid. The txid is defined as mentioned above. Segwit also defines another txid called wtxid which adds a few more things including the witness program. Segwit takes a lot of the usual transaction data and moves it to a separate block, which removes the possibility of malleability. Signatures and scripts get moved to this extra block of data. So there’s another merkle tree of this witness block that gets committed to.

In P2PKH, the pubkeyhash gets hashed into the scriptPubKey, which the txid commits to, and that creates the circular dependency. ((You can use P2PK to not require a pubkeyhash.))

So what if we instead introduce an input-only txid, called itxid? I remove the outputs from the original txid calculation, so it only involves the inputs. If I use this to reference which inputs I am actually spending, it still uniquely identifies the inputs I am spending. It doesn’t say anything about the outputs. The outputs are still committed to by the signature hash of the transaction, if you want them to be, which ties them together, and the outputs also end up in the witness tree. Everything is still committed to in the right way, in a block. I don’t think there’s a problem with that. This allows us to compute the pubkey and create pre-signed transactions without actually having the private key.

This is a bit complex. I literally had the idea this morning. You would have to change the txid index. Bitcoin is like a big list of txids. If I want to change that index, I basically have to double the indices. This will not go over well, I suspect.

Pre-signed transactions

Your wallet is a set of pre-signed transactions here. This is an interesting object for risk management. These objects are not funds in and of themselves. There is a private key somewhere that is needed to do anything with them. But they can be aggregated into amounts of known denominations, such that they can be segregated and separated. It gives you an extra tool for risk management.

These pre-signed transactions are less security-critical than private keys, and they are less costly to create than an entire private key ceremony. A key signing ceremony can be a multi-million dollar affair that is audited and has all sorts of controls around it to make sure we don’t mess up.

If an attacker steals just the pre-signed transactions, they don’t have access to the funds because they also have to steal the keys with them. So you could back these up, you could put them with a lawyer or a custodian or something like that.

There are downsides to pre-signed transactions. Funds held in a pre-signed transaction cannot be claimed across a responsible hard-fork that implements replay protection. If I change how transactions are signed, and I provably don’t know the private key, then by definition I cannot possibly generate a valid signature to claim the coins on the other side of the hard-fork. These pre-signed transactions won’t be valid on any fork that changes the sighash. I think hard-forks are a waste of time and people should stop doing them. But some people would say this is a nasty consequence of pre-signed transactions.

I talked around at Fidelity about the “delete the key” idea, and generally that’s not how things are done. Our cybersecurity people generally prefer to always back up keys just in case; you never know, put it in a really really deep vault. The biggest question with delete-the-key is: how do you know that the key was really deleted? Software and hardware mechanisms can evade the deletion.

SIGHASH_NOINPUT is generally unsafe for wallet usage, so don’t use it for wallets. The reason it’s okay for lightning is that lightning is a protocol with certain rules around it. People aren’t just sending arbitrary funds at arbitrary times to arbitrary lightning addresses using SIGHASH_NOINPUT; you have to follow the lightning protocol. That’s why we run into the problem with penny tests and tranching. Whether to create tranches is the sender’s decision and policy, not the receiver’s. That’s the root of the problem there.

Lau: OP_PUSHTXDATA

OP_PUSHTXDATA seems to have been created to enable arbitrary computation on transaction data. This is generally not the approach favored by most bitcoin developers; they prefer verification that a transaction is correctly authorized, rather than computation (which is more similar to ethereum). In my opinion, lots of funny things can be created with OP_PUSHTXDATA, and there are many unintended consequences.

This was proposed by jl2012. It’s kind of a swiss-army type opcode. What it does is it takes the random data from the transaction and puts it on the stack and then lets you do a computation on it, like input index, input size, output size, locktime, fee amount, version, locktimes, weight, total size, base size, etc.

This was proposed I think even before Eyal’s 2016 paper. It’s kind of gone nowhere, though. This draft BIP never became an actual BIP. Given the philosophy of “let’s not do fancy computation in scripts” this proposal is probably not going to go anywhere.

OP_CHECKSIGFROMSTACK

Of the three opcode options, the one I am most in favor of is OP_CHECKSIGFROMSTACK. The way a vault works with this opcode looks like the following:

script: OP_OVER OP_SHA256 pubkey 2 OP_PICK 1 OP_CAT OP_OVER OP_CHECKSIGVERIFY OP_CHECKSIGFROMSTACKVERIFY

initial stack: signature sha256(sigTransactionData)

So you do two checksig verifies. The idea here is that the initial stack, when I send the transaction, will have the signature and the hash of the transaction data. I am going to first check the signature against the data that is on the stack, and then separately I check the signature of the transaction itself. If the signature of the transaction is exactly equal to the one that is on the stack, then this implies the transaction data is exactly the same as the data on the stack. It implicitly verifies that this transaction has the specific structure encoded in its sighash value.
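A minimal sketch of why the double signature check acts as a covenant (verify_sig stands in for signature verification; the two calls correspond to OP_CHECKSIGVERIFY and OP_CHECKSIGFROMSTACKVERIFY):

    def covenant_holds(signature, pubkey, stack_tx_hash, spending_tx_hash, verify_sig) -> bool:
        ok_over_tx = verify_sig(pubkey, spending_tx_hash, signature)     # OP_CHECKSIGVERIFY
        ok_over_stack = verify_sig(pubkey, stack_tx_hash, signature)     # OP_CHECKSIGFROMSTACKVERIFY
        # The same signature validating under the same pubkey for both messages implies
        # spending_tx_hash == stack_tx_hash, i.e. the spending tx has the committed structure.
        return ok_over_tx and ok_over_stack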

It’s generally possible to create recursive covenants with this. Once you put this opcode in that encumbers the script of a next transaction, you can require that the next transaction has any structure at all. You can require it has another CHECKSIGFROMSTACK or another PUSHTXDATA in there or whatever. There might be uses for this.

Another drawback of using these kinds of opcodes is that it’s fairly obvious to everyone what you’re doing; all your code ends up in the blockchain and now someone knows that you’re doing vaults. This is revealed with every transaction, not just when you execute the clawback transaction. Any regular send reveals this information.

Taproot

It would be awesome if there was a way to hide the fact that I was using a vault mechanism or using OP_CHECKSIGFROMSTACK. Just in general, it is useful to hide the existence of an alternative clawback branch in the script.

Taproot works by “tweaking” your public key: the published key Q is your public key P plus M times the generator G, where M is the merkle root of a merkleized abstract syntax tree of your scripts.

If you have the private key, then you can also sign for Q = P + MG just as you could for P, by tweaking your private key in the same way. When spending through a script, we reveal the commitment M along with the executed merkle branch. The remaining non-executed branches and alternatives stay private.
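In the simplified form used here (the actual proposal hashes the commitment together with P before tweaking), the relationship is roughly:

    Q = P + M·G      (published output key; M is the MAST merkle root)
    q = p + M        (the matching private key tweak, so a key-path spend still works)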

This is good for optimistic protocols where the alternatives are used less frequently than the default case. Transactions are indistinguishable from a single-key P2PKH type output. Multisig can be done via MuSig with Schnorr, and an interactive threshold signature scheme exists that also results in a single signature.

For a vault in taproot, the default script would be: (timelock) OP_CHECKLOCKTIMEVERIFY/OP_CHECKSEQUENCEVERIFY hotpubkey OP_CHECKSIG.

Taproot was proposed in 2018 by Greg Maxwell. The idea is that you can add alternative spending paths to your script. You do this by tweaking your public key, which is done here by addition. M is the merkle root of a merkleized abstract syntax tree of alternative scripts (MAST). MAST is where you take a big block of if-else statements and turn it into a single merkle tree.

If you have a private key, you could sign by tweaking your private key in the same way. But the key might be constructed by MuSig or something. The interesting thing is that nobody can tell you tweaked the public key, if you don’t reveal it. I can’t take an arbitrary pubkey and claim I did this when I didn’t; that would be equivalent to breaking the discrete log problem on ECDSA.

By “optimistic” protocol I mean the alternatives are used less frequently than the default case. You can hide all kinds of stuff. With Schnorr signatures, you can do multisig with a single key and single signature. This is pretty darn fancy, and I am pretty happy about this. It would completely let you hide that you did this.

Your vault script ends up looking pretty simple, which is just a timelock and a CHECKSIG operation.

Fee-only wallets

You generally want to add fee wallets when you do this. If you create these pre-signed transactions or use covenant opcodes, the encumbrance you create doesn’t know anything about the fee market. Bitcoin has a fee market where fees go up and down based on block space pressure. The time that you claim these funds or do a withdrawal request is separated in time from when the deposit happened, and the fee market will not be the same. So generally you have to add a wallet that is capable of bumping the fees. There are many mechanisms for doing that.
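One such mechanism is child-pays-for-parent; a rough sketch with made-up numbers of how a separate fee wallet can raise the effective feerate of a pre-signed transaction whose own fee is fixed:

    parent_fee, parent_vsize = 500, 200   # sats / vbytes, fixed when the tx was pre-signed
    child_fee, child_vsize = 5_000, 150   # added later by the fee-only wallet

    parent_feerate = parent_fee / parent_vsize                                  # 2.5 sat/vB
    package_feerate = (parent_fee + child_fee) / (parent_vsize + child_vsize)   # ~15.7 sat/vB
    print(parent_feerate, round(package_feerate, 1))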

Conclusions

Vaults are a powerful tool for secure custody. The idea has been around for at least 2-3 years and hasn’t gone anywhere. My read of the ecosystem is that most developers think it’s a good idea, but we haven’t done the work to make it happen. Fidelity would use vaults if we had these tools.

Pre-signed transactions, modulo my itxid idea, are kind of undesirable because of delete-the-key. If you don’t delete the key, then third-parties can steal that key. If you can use NOINPUT, then you can get around that.

Of the new opcodes, the one I am most in favor of is OP_CHECKSIGFROMSTACK which is already deployed in Elements and Liquid. It’s also deployed on Bitcoin Cash. As far as I know, there’s no issues with it. It’s a simple opcode, it just checks a signature. But you can create covenants this way. There might be other ways to use this opcode. One of the ways in the original covenants 2016 paper was that you can make a colored coin with OP_CHECKOUTPUTVERIFY. But what if colored coins get mixed with other coins? Well, you can enforce that this doesn’t happen, using these kinds of mechanisms. This is one other potential use case for all three of the opcode proposals we have discussed.

One of the reasons I like OP_CHECKSIGFROMSTACK better is that it’s simpler than the other ones, and it’s less like ethereum, and it just lets me prove that you have the right thing. All three opcode options let you encumber things recursively, so it means that bitcoin would no longer be fungible: is that desirable?

In all cases here, batching is difficult. This is one consequence of this. You generally can’t take pre-signed transactions and combine them with other inputs and outputs. The signatures will sign the whole transaction. There are ways to do it such that you only sign one of the inputs and one of the outputs, but then if I do that and batch a bunch of transactions together then I don’t really gain much from the batching. So batching requires more thought.

1h

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015793.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html or https://twitter.com/kanzure/status/1159101146881036289

https://blog.oleganza.com/post/163955782228/how-segwit-makes-security-better

https://bitcointalk.org/index.php?topic=5111656

“tick method”: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017237.html

http://web.archive.org/web/20180503151920/https://blog.sldx.com/re-imagining-cold-storage-with-timelocks-1f293bfe421f?gi=da99a4a00f67

http://www0.cs.ucl.ac.uk/staff/P.McCorry/preventing-cryptocurrency-exchange.pdf

\ No newline at end of file diff --git a/misc/adam3us-bitcoin-scaling-tradeoffs/index.html b/misc/adam3us-bitcoin-scaling-tradeoffs/index.html index 631b493959..e724858e9f 100644 --- a/misc/adam3us-bitcoin-scaling-tradeoffs/index.html +++ b/misc/adam3us-bitcoin-scaling-tradeoffs/index.html @@ -12,4 +12,4 @@ Adam Back

Date: April 5, 2016

Transcript By: Bryan Bishop

Tags: Scalability

Category: Conference

Media: -https://www.youtube.com/watch?v=HEZAlNBJjA0

Bitcoin scaling tradeoffs with Adam Back (adam3us) at Paralelni Polis

Institute of Cryptoanarchy http://www.paralelnipolis.cz/

slides: http://www.slideshare.net/paralelnipolis/bitcoin-scaling-tradeoffs-with-adam-back

description: “In this talk, Adam Back overviews the Bitcoin scaling and discuses his personal view about its tradeoffs and future development.”

Intro fluff

And now I would like to introduce you to a great guest from the U.K. or Malta, Adam Back. Adam is a long-time professional cryptologist and also a Bitcoin expert. He’s the inventor of hashcash, which was used for example by Bitcoin. I heard that Satoshi Nakamoto was inspired by hashcash.

Adam: I think I got the first email received by anyone from Satoshi just to ask about hashcash.

The second thing that slush0 told me was that hashcash is used by the bitcoin mining protocol. This is a really big thing. Also, I know hashcash is used by many anti-spam solutions. If you have implemented some anti-spam solutions, quite likely you are using hashcash.

I also read on your Wikipedia page that you developed a system based on David Chaum’s ideas. David Chaum is a cryptologist. I invited him to a cryptoanarchist conference last year. Unfortunately he probably didn’t read his emails, so maybe next year.

And the last thing is that Adam is also President of the company Blockstream. That’s a very brief introduction. Now Adam, it is your turn.

Talk intro actual

Adam: Thank you. Hi. So. I am going to talk for a relatively short period of time. People should ask questions. Sorry. Okay. Yes. So, people should interrupt and ask questions. The slides and discussion are mostly to give some structure to the conversation, so feel free to ask questions as we go, or at the end as you prefer.

Many hats

So as many people in Bitcoin, I started as an enthusiast. I saw bitcoin and found it exciting and interesting. As many people, I tried to do something to move bitcoin forward by starting a company to improve on the technology and so forth. As probably other people can attest, once you start a company you have multiple hats now. I have to preface what I’m saying by explaining which hat I am speaking with. I am speaking as an individual who very much likes bitcoin and wants it to succeed. As was said in the introduction, I have been playing around with and doing applied research in electronic cash systems for a long time. I also worked at a company doing Tor-like things before Tor existed. And ecash systems relate to this as well. I am also not a spokesperson for Bitcoin Core.

Bitcoin Core is a decentralized group of people that work by consensus, perhaps similar to the way that IETF runs discussion forums. I am speaking as an individual. These sentences are the only Blockstream part. Blockstream as a company is reliant on Bitcoin and they need it to scale and succeed. Same as any other company. Everyone at Blockstream owns bitcoin for themselves and they are excited about bitcoin.

Requirements for scaling bitcoin

With that out of the way, I thought I would talk about it in terms of requirements. People who have been through software engineering in startups or various projects know that you often interface with a customer by stating requirements. They are not always technical requirements; they are about the effect that you want to achieve. There has to be a constructive conversation with a client who is buying something, or maybe they want to buy an open-source system. It’s useful because then the conversation is at a level where hopefully everyone understands, either the customer or the user, and the technical people can be sure that they have the same idea of what success is or what they want. In Bitcoin, it’s a little bit ambiguous who the customer is. It’s an open-source project, it’s a currency, it’s a network, it’s a peer-to-peer currency so it’s also a user currency, so we should care most about what users want and why they value bitcoin. This can be difficult to determine because there are many users with different viewpoints.

Bitcoin also depends on companies to succeed. There are many companies popularizing access to Bitcoin, like exchanges and hosted wallets that make it simpler to use Bitcoin, and miners who are an important part of the ecosystem to secure it, including mining pools. If we’re going to do requirements engineering around bitcoin, we have to balance the interests of users and companies; different parts of the ecosystem, such as miners, payment processors and exchanges, may have different and slightly conflicting requirements.

If we were to optimize solely for the benefit of miners, we might find the outcome to be one thing, but optimizing for the sole benefit of payment processors might take a different direction. Each thing you do tends to have an impact on someone else, so it’s sort of a zero-sum system in that sense. The system needs to be in balance and we don’t want one sector to have an overly strong influence over the others. One part of the ecosystem gaining an advantage over the others is not something desired. The companies are, in some respects, trying to improve the value for users: users buy their services because they find them convenient, and companies succeed by delivering value to users, getting more users to start using bitcoin, and building the network effect.

Let’s talk about a what-if scenario. We say that we want bitcoin to scale. But how much? Let’s put some numbers on it. Hypothetically, say we want the scale to double every year for the next three years. Let’s draw a rough outline of how that’s going to happen. As a requirement, that’s something that companies can think about; hopefully we can do better, but it’s something they can plan around, they can look at user growth numbers. Some ecosystem people on the business side have said that they think it has been scaling around this rate over the last year or two. It’s not an arbitrary number; it has been scaling at maybe 2x or 2.5x, something like that. And then the more recent interesting technology is Lightning and other layer 2 protocols, where we get much more exciting increases in scale. It depends on the usage pattern and on recirculation, but you hear numbers like 100x to 10,000x transaction throughput using the same base technology. We’ll talk in a bit about how that works roughly.

If we can achieve these targets taken together, I hope that the companies could be happy with them. It’s a conversation topic, because they could come back and say I need it more or I need it sooner; at least you’re having a conversation that the people designing protocol upgrades can work with and consider.

What is bitcoin?

There are other system requirements that are sort of invariants or hard requirements, which are around bitcoin’s properties. Bitcoin is very importantly permissionless. The internet opened up permissionless innovation. The permissionless nature of it is significantly credited to driving the fast pace of innovation. Many people think that what bitcoin will allow is that rate of innovation into financial payment networks where it’s so far been relatively closed and more like the closed telephone networks from pre-internet era. There’s some analogy there.

There are some other interesting bitcoin properties that we don’t want to lose, such as fungibility, which is important for a payment mechanism. Fungibility is the concept that, like cash, one bank note or one coin is the same as any other. This concept arose because you can sometimes distinguish between bank notes, for example by serial numbers. In 17th century Scotland, there was an old case where a businessman sent some high value bank notes to someone else, they got stolen in the mail, and they were deposited in a bank. He tried to sue to get them returned as his property, there was a court case, and the courts decided he should not get the notes back, because otherwise people would lose confidence in their money, and reliable money is very important for an economy. That started the legal concept of fungibility, and I guess other countries arrived at similar concepts for similar reasons.

It’s important for bitcoin to have fungibility in a practical sense. You have seen attempts to trace bitcoin, or to find bitcoins that have been used at Silk Road or are connected by two or three hops to a transaction used there, but if we have this activity then people will end up with coins they won’t be able to spend because of the hops and the taint. Consider how it would fail in the Scottish bank note case: if the ruling had gone the other way, such that the merchant got his notes back, then nobody would want to accept bank notes without rushing to the bank and depositing them. This would cause the currency to fail at its purpose. It’s important that bitcoin is fungible and that it has privacy, because too much transparency causes taint.

We do have some functional requirements; I put these in rough priority order. These are obvious things that we need the system to do to be effective. It has to be secure. We have to scale it so that everyone can use it. We want it to be reliable and predictable, with a good user experience. You want the system to be cheap so that many people can use it, such as in third world countries where the average spend is much lower, or for different types of spending.

Since I was asking about requirements, I think it’s interesting to ask, what is bitcoin? It’s actually an interesting conversation to have with anyone you meet who is interested in bitcoin. To ask them, what’s most important about bitcoin, to them? Another way to ask this would be, which of the features of bitcoin if it lost, when would you stop using it? Just to run through the top ones quickly, it’s better bearer ecash, cash-like, irreversible no way to take back payments, unseizable because it’s bearer, and very importantly there’s no third party or central point of trust, no bank. That’s important to consider if that’s a requirement for Bitcoin. We can obviously scale bitcoin by running a central server that holds all the bitcoin. But that would lose that important differentiator that there’s no third party that has to be trusted. We talked about permissionless already. Also has to be borderless and network neutral, there should be no central party at the base layer that says they don’t like some transactions or whatever. In some countries, well I guess Wikileaks is an example, they had their payments blocked. It was not done through legal means, there was no court order deciding it. Some politicians decided to make some phone calls and ask some favors to some large companies, and their payments were blocked. In bitcoin there’s nobody to call up to achieve that kind of effect. Thus you have seen some adoption of bitcoin by parties that have that unfortunate vulnerability. Fungibility we discussed, privacy we discussed, it’s a virtual commodity gold-like, virtual mining etcetera. Specifically in terms of the economic properties of it, it’s not a political currency like fiat currency.

It’s purely free market, there’s no central party that can adjust the rate of new coin production, nobody can do quantitative easing or buy the money back. There’s no party that can adjust inflation, there’s no central party with special authority to set an interest rate. It’s purely free market.

If we look at the three standard properties of money (store of value, means of exchange and unit of account), it depends on your viewpoint, but from my perspective the store of value has probably been the most strongly achieved of the three. Means of exchange somewhat; people are doing bitcoin transactions, sure. But you could argue that perhaps more of the value comes from an investment perspective for now. They are related. Maybe one is important for a period of time, and another one becomes important later. I have heard people argue that it’s important for bitcoin to have a stable exchange rate and a stable robust value because this would make it easier to use as a unit of account. This area is subject to debate because it’s about economic opinion. Unit of account, I guess, you know, the coffee shop downstairs and everything you buy in this building is a test case for unit of account, but because of the volatility, most people have been thinking of bitcoin prices in dollars or other currencies. Maybe we will get there in the future though?

Upgrade methods

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=17m50s

Next I wanted to talk about upgrade methods. There are a number of different technical means by which we can upgrade the bitcoin network, to add new features to bitcoin. They have different tradeoffs. I think possibly the current scaling discussion should have a conversation around upgrades at an earlier stage, for example with the Scaling Bitcoin conferences or something. There needs to be a more timely discussion.

https://github.com/bitcoin/bips/blob/master/bip-0099.mediawiki

But let’s have that discussion now anyway. People are probably familiar with soft-forks and hard-forks. There are some different bitcoin upgrade method variants and some have had more recent analysis to talk about. People are still learning new things about bitcoin. There are people who understand the code, but the implications of the protocol and the ways that you can extend it and upgrade it are still being realized and understood. That’s a topic of ongoing learning even amongst Bitcoin developers and so on.

An example of that is a “firm fork” or “soft hard-fork”, which has been talked about recently. It was quite obscure or not widely understood or known before. Let’s talk about the tradeoffs. Backwards compatibility means: will the existing bitcoin wallets that are in use today still be able to send transactions after the upgrade? One of the marks on the slide is wrong; this should be a tick, not a cross. A soft-fork is backwards compatible because existing wallets can pay to new wallets and new wallets can pay to old wallets. Full hard-forks are not backwards compatible; by design they change the formats to improve something fundamental. All clients, even smartphone clients, even full node clients, have to upgrade after a full hard-fork. A simple hard-fork is a restricted hard-fork that smartphone clients don’t have to upgrade with. The firm fork is a kind of hybrid between the two, which we will talk about in a moment.

One of the factors for an upgrade is, how quickly can you do it? How safe is it? A soft-fork is quite safe to use and has been the method of all previous planned upgrades to bitcoin, of which there have been quite a few, including some last year and some ongoing this year. It’s quite well understood how to do them and what the security properties are, and it doesn’t require tremendous coordination. Mostly miners have to focus on the upgrade, and people who are running merchant software or full nodes, they have to do an upgrade reasonably quickly but they have some flexibility in that, they are protected by miners if they take a little bit longer to do the upgrade.

A firm fork is also fast. A simple hard-fork depends: if you do it very quickly it probably introduces risk, but if you want to be conservative, then it takes longer than a soft-fork. That’s a potential topic for discussion when someone is looking at these tradeoffs. They might say, I want to take the risk because I want to see it quickly, but others might want to take longer and be slower to be more secure; perhaps they would prefer a soft-fork done first and then a hard-fork done later, because the soft-fork could happen more quickly. And a full hard-fork is similar in terms of speed.

Then, what I was saying about how we have the experience of doing soft-forks: you don’t need to coordinate with literally everyone to achieve a soft-fork upgrade. The soft-fork and the firm fork have similarity to the historic upgrades in bitcoin so far. The hard-forks require everybody in the network to upgrade. That requires much closer coordination than has been attempted before in bitcoin. It’s something that could be done, but there are new coordination risks. In the network, you can look at the software currently running and find some quite old versions still in use. We don’t know if they have value depending on them, but it’s clear that people have not been keeping up to date with versions, so that might be something to be aware of, because if we were trying to do a coordinated upgrade we would have to somehow contact them, and whether they have a timeframe for upgrading or some exercise like that…..

Another topic is whether a fork is opt-in or not. Do the people who run the software, do they all have to upgrade? Do they make the decision? With hard-forks, everyone has to upgrade. Everybody has to get together and agree to upgrade for the system to upgrade, the users have a direct choice, if a proposal was made for a hard-fork, well users could veto it and just not upgrade. Conversely, with a soft-fork, it’s somewhat more automatic, the miners are making the decision. It’s indirect in the sense that we expect miners would want to make an upgrade only if users wanted it, because miners depend on users that want bitcoin and like bitcoin. If users don’t want it, it seems unlikely for miners to want to do an upgrade. But still, there’s a more direct decision with a hard-fork. This comes with the cost that it’s more complicated to do the upgrade because you have to coordinate with all those people.

The next line in the slide is about whether SPV or smartphone wallets need an upgrade. With a simple hard-fork, that is generally not needed: due to some limitations in the security model that most smartphone wallets use, it turns out you can increase the block size and the SPV wallets won’t notice; the software will continue to operate. With a soft-fork, users don’t have to upgrade smartphone wallets because they continue to function, and the transactions are backwards and forwards compatible, but users get an advantage by upgrading.

Another aspect of software is technical debt (1 2 3), or sort of built-up long-running bugs or design defects which you generally want to fix, because if you don’t then you tend to run into problems in software. We will talk about this a little bit more. The soft-fork proposal in bip141 segregated witness includes a number of technical debt fixes, some technical design fixes which will help many companies and use cases and generally help the state of the software. It’s hard to know what features other forks include because it depends on what you choose to implement in them. For a firm-fork, the tradeoff would be that the more fixes you implement, the longer it would take to do the design, implementation and testing. One of the criteria is: how quickly can you do the upgrade? Typically that means the simple hard-fork has included the minimal possible features, so no technical debt fixes, just minimal work-arounds to avoid immediate problems. That’s why I say, if it’s done quickly, then there are only minimal fixes, and that has the side effect that the problems caused by those bugs will continue to persist for 6 or 12 months more, and the features that rely on those fixes, like Lightning relying on malleability fixes, might also get delayed, and therefore delay the higher scale opportunity that we talked about in the requirements for layer 2. In the case of a full hard-fork, it’s actually a motivation to do technical debt fixes. Typically you wouldn’t do a full hard-fork unless you really wanted to do some overhaul or data structure reorganization in order to fix bugs or defects, so it would tend to have fixes in it as an assumption.

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=28min40s

So this is an interesting question. You could look at a hard-fork as, we were talking about it requiring everyone to opt-in, so it’s a form of referendum in a way, where you need almost unanimous support and agreement for it to happen safely. Conversely, with soft-forks we hope that miners will take into account that they should only make an upgrade if users want it, but if the feature is not controversial, then it’s cheaper to do a soft-fork because we have lots of experience doing them, they can be done quickly, and they don’t require as much coordination.

If we look at some different kinds of examples of controversial and uncontroversial things: most of the bitcoin differentiators that we talked about in the previous slide are things that users would be very upset about if they were removed from the system. A major reduction in privacy, an imposition that you require permission to use bitcoin at all, things of that nature; another one would be increasing the total number of coins. These are things that no user and no ecosystem company that uses bitcoin would want to contemplate, so we never want to do those. On the other end of the spectrum are uncontroversial things, things which preserve all of the interesting features of bitcoin, they are just better: they fix limitations, faster transactions, more scale, cheaper transactions, bug fixes, and so forth; in principle they should be uncontroversial. I think where it gets interesting is that there are, unfortunately, tradeoffs, and situations arise where we would like both sides of the tradeoff to go in our collective favor, but this can be difficult to achieve in the short-term. One example is the scale versus security and permissionlessness tradeoff. I’ll talk about why that tradeoff arises in a bit. Centralization can be a problem because at the extremes it tends to put the properties that make bitcoin interesting at risk of erosion. To make a simple and extreme example, if all bitcoin mining ended up being controlled by a single pool or single miner, or a very small number of them, and there were companies controlling them perhaps in the United States or another country, there would be a risk that the company would be asked by law enforcement to make policy changes that users would not like. And so we can see that decentralization is the main technical mechanism providing many of the interesting bitcoin differentiators.

So the question then, this is just a question and I’m not sure what the answer is: I think if you’d asked people last year or the year before whether they preferred a hard-fork for every change, or whether they were okay with soft-forks, they might have said hard-forks are generally better because they give users the choice to opt-in. ….. The ongoing discussion about scalability has shown that referendums are expensive and have their own controversy. In the same way that if a country has a referendum, it encourages people to think about the decision and campaign about it and think about whether it’s good or bad for them and try to pick an outcome that is advantageous for them. So if we are actually making a change that is uncontroversial, like a small increase in scale for example, maybe you could argue in hindsight that a referendum is more expensive than the benefit you get, because if it’s not a question that users really care about, like nobody really disagreeing about a small increase in scale or something, then it might be that it’s quicker to just do it by soft-fork if we use that as a bar. That’s a question that people can have different views on.

Decentralization

So regarding decentralization, this manifests itself with miners and pools. There’s a problem called the orphan rate. Because it’s a distributed system and miners are running a lottery, winning blocks every 10 minutes on average, there’s a certain amount of time it takes to transmit a block through the network. There’s a chance that two miners will create a block at close to the same time; one of them will win, one of them will lose, and the one who loses will lose mining revenue. Miners keep a close eye on their orphan rate, they monitor it, they try to optimize it away. People say that this is due to bandwidth, because miners are sometimes in remote locations with poor bandwidth availability, but it turns out the limit is latency and not bandwidth, because the actual block is transmitted at the end, in the last 3 seconds or so. 1 megabyte is not much in bandwidth terms; it’s actually quite technically difficult to achieve reliable and fair broadcast in 3 seconds, meaning that small miners and large miners receive the block in similar timeframes, and this is not happening today. In many ways you are already seeing side-effects of the broadcast latency issue. What tends to happen is that people use workarounds. One thing they do is use a pool instead of mining on their own; so you have slush0 and slush’s pool. If you were solo mining and had slow bandwidth, you would use a pool, which would solve the problem for you. Another one is the relay network, which is a custom optimized block transfer that can do much better than the p2p network, both because of the routes it has selected and because of the network compression. This was introduced to help medium-sized miners and maybe small miners keep up with large miners in terms of block transfer time. Large miners have been more able to negotiate peering arrangements with other miners so they can reduce orphan risk in that way.
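
((Editor note: a rough sketch of the orphan-rate arithmetic discussed above. It assumes block discovery is a Poisson process with a 600 second average interval, so the chance a competing block appears during a propagation delay of t seconds is roughly 1 - e^(-t/600). The function name and delay values are illustrative, not from the talk.))

    import math

    def orphan_probability(delay_seconds, block_interval=600.0):
        """Chance another block is found while ours is still propagating."""
        return 1.0 - math.exp(-delay_seconds / block_interval)

    for delay in (3, 10, 30):
        print(f"{delay:>2}s propagation -> ~{orphan_probability(delay):.2%} orphan risk")
    # ~0.50% at 3s, ~1.65% at 10s, ~4.88% at 30s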

Validationless mining

Another phenomenon has been the so-called “validationless mining”, where one pool or miner fetches the block proposal from another pool without checking the block themselves ((thus the block can trivially include transactions or other properties that violate the bitcoin rules)). They just accept the block on trust that the other party has checked it. This can cascade problems: if a miner has done something inconsistent with the network rules, then there could be a sequence of blocks found by other miners that have also not been checked. This happened in 2015 with a soft-fork, because people did not realize how widespread validationless mining was; before anyone intervened, some invalid blocks were mined, and some miners lost bitcoins because of it. The reason people do validationless mining is that it’s faster and it reduces orphan rate. As far as I know, people are still doing it, because it’s still worthwhile overall relative to the loss they experienced when it went wrong. It reduces security for SPV wallets and smartphone wallets, because the proof-of-work is supposed to be an assertion of transaction validity, but it turns out that PoW can be calculated without ensuring transaction validity. So there could be multiple blocks on an invalid chain due to validationless mining.

If we increase the block size and the orphan rate goes up, is something disadvantageous going to happen? From what we have seen from workarounds lately, we think miners are pragmatic, they are in business to make a profit, so they are going to use whatever techniques to work around any increased orphan rate. So they are going to use more validationless mining, they are going to use a larger pool, more centralization. This cuts into decentralization. The side effect of these things is more centralization and increasing the policy risk that we talked about earlier.

Bitcoin companies should participate in bitcoin mining

We don’t always hear so much about decentralization in terms of doing something about it; it is treated more as a passive problem. But actually it’s an ecosystem problem. I would argue that if the ecosystem put its mind to it, there could be things done to improve decentralization. First of all, what exactly do we mean by decentralization? There are pools and economic nodes, which are topical. Some people are using smartphones, some people are running a full node, which is the full implementation of bitcoin. People who are running merchant services, exchanges, high-value vaults: you should run economic fully-validating nodes. It’s not miners that enforce the protocol consensus rules, it’s the economic fully-validating nodes. The economic nodes are receiving transactions and making decisions about what the node says. If a user was to receive some coins by running a shop and try to deposit them into an exchange, and the exchange is running a full node, and the exchange says the coins are invalid because their node says so, then the coins are invalid. It’s more important to think about a reasonable decentralization of nodes run by power users and small, medium and large companies in many different countries, such that a large proportion of bitcoin transactions by value get tested against economic nodes soon, like after not too many peer-to-peer hops of payments; then that collective action enforces the network rules. The number of economic nodes is lower at present than we would like. An analogy for this function of economic nodes is counterfeit testing equipment. Some shops have equipment to pass notes through to determine whether they are forgeries. This provides integrity for the money supply: even though it’s run by shops and merchants for their own security, it actually provides currency integrity for everybody. The number of economic nodes is unfortunately decreasing, for a few reasons, mostly I think because there are services that started outsourcing full nodes, they will run a full node for you. Perhaps this is good because they are specialized and know how to run them securely, but it’s also bad because there are fewer companies running full nodes. If the number of full nodes falls too far, then users might have policy decisions imposed on them that are undesirable from a user perspective. If we let too many undesirable side effects creep into bitcoin, then users will become disenfranchised and go use something other than bitcoin. It’s in the ecosystem’s interest to retain the differentiating properties of bitcoin because that’s what attracts users to bitcoin at all. We should all want to protect those properties.

Because mining is becoming increasingly centralized, and economic nodes are somewhat centralized, it makes it difficult to do large increases in block size for example, for the reasons just mentioned above. In terms of the side effects, BitFury did some analysis to see what percentage of nodes would disconnect if block size reached different levels ((page 4)). If the decentralization metrics were measurable and were quite strong, such as in 2013, and we increased the block size quite a bit perhaps, and 20% or 30% of the nodes dropped off, it wouldn’t be much of a problem because there were quite a lot of nodes still around and the properties could still be protected.

Let’s say there are two metrics of decentralization. If one was strong and one was weak, we could probably work with that. If we had very decentralized mining but not many full nodes, that might be OK, because miners would be patching over the decentralization problem, or vice versa. I think it’s not always articulated clearly that we could, you know, be more relaxed about changing the block size if decentralization was fixed. This has been discussed in a passive sense, but we should attack it in an active sense. The ecosystem should coordinate this. We have talked about the ecosystem coordinating a hard-fork if that became necessary, but we should consider proactively coordinating decentralization improvements. We could all buy some miners, even a few terahashes each; if that’s spread around, with power users running a bigger percentage of the network, that helps. Ecosystem companies that are not professional miners, perhaps vault services or wallet services, could also buy a small percentage of mining to improve decentralization. Their reason to do this would be that their business depends on scale, scale indirectly depends on decentralization, and their business depends on bitcoin retaining the properties that users want and want to buy services based on.

Another thing you could do is mine on smaller pools first. For whatever reason people have tended to cluster; it’s moved around over time, one pool has been big for a while, then it shrinks and another one takes over. This is somewhat of a user interface problem: it’s an arbitrary decision, and you might tend to pick the pool that is biggest, with the assumption that because it’s big others have probably validated it already and thus it might be a good choice, right? Well, this actually hurts the decentralization of bitcoin.

Bitcoin mining ASICs

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=47m55s

Another problem is access to ASICs. There are only three or four companies directly selling ASICs. There were more companies a few years ago; some of them have consolidated or gone out of business, and most of them have failed to deliver on time because it’s a very time-sensitive business. Companies could sell ASICs to power users and smaller users. At the moment ASICs are mostly sold in bulk, at bulk discounts, to mining firms, and are not really available to small miners at all.

There are economies of scale. People running professional mining farms get cheaper power because they can choose their location; residential power is often more expensive. They can also buy ASICs at bulk discounts and get them cheaper, and perhaps the manufacturers won’t even go to the trouble of selling small quantities.

If you are a company selling one or two miners per customer, that’s a cost to you. You need to handle support calls and deal with questions from people learning to mine for the first time, which is a cost that a manufacturer might want to avoid. We could encourage them to sell to smaller buyers anyway; this is in their own self-interest. If miners let centralization build up, and it erodes the interesting properties of bitcoin, then bitcoin will become less valuable, users will lose, and then their ASICs won’t really be sellable. ASICs are actually sensitive to price fluctuations: if the price goes down significantly, that can be the difference between profit and loss on mining.

If you don’t have an ASIC, or you are paying above-average prices for electricity, something else that could hypothetically be done is an open-source ASIC that could provide baseline availability for users so that everyone can get a reasonably cost-effective device; perhaps the organization behind it would be a non-profit or something. This is an important problem for bitcoin’s success; it’s one of the big drivers behind current scale limitations and behind retaining bitcoin’s differentiating properties.

How does scale via soft-fork work

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=51m

This is a brief technical detour into how soft-forks work and how you could increase scale using soft-forks. I think there was a time when people were thinking you couldn’t increase the block size by soft-fork. There’s a mechanism where soft-forks can only restrict rules, they can’t relax them, so it was counter-intuitive that it would be possible to increase block size; it turns out that it is possible, and the segregated witness proposal uses this property. The way it works is that for the average bitcoin transaction about 60% of the size is signatures and the other 40% is transaction information. The segwit soft-fork stores the signatures separately, in a witness area, apart from the basic transaction. The 1 megabyte limit is then applied only to that other 40%. So this would allow in principle up to a 2.5x increase in block size by soft-fork. For various technical reasons, this soft-fork ends up providing 1.8 to 2 MB depending on the types of signatures: multisigs are bigger, single sigs are smaller, and so forth.
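
((Editor note: the capacity argument above can be checked with back-of-the-envelope arithmetic. The function and the witness fractions below are illustrative assumptions consistent with the 1.8x-2.5x range mentioned, not exact protocol numbers.))

    BASE_LIMIT_MB = 1.0  # the existing 1 MB limit, applied only to non-witness data

    def effective_block_mb(witness_fraction):
        """Total block data if only the non-witness part counts toward the limit."""
        return BASE_LIMIT_MB / (1.0 - witness_fraction)

    print(effective_block_mb(0.60))  # ~2.5 MB upper bound if ~60% is witness data
    print(effective_block_mb(0.45))  # ~1.8 MB if signatures are a smaller share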

The interesting observation here is that you can do many more things with soft-forks than people had been assuming a couple of years ago. This is an example of a new thing being discovered about the bitcoin protocol. Actually you can, in principle, although there is a discussion to be had about whether this would be good or not, increase the block size beyond segregated witness. Segwit itself is a one-off mechanism, we can’t repeat it to get more scale, but another block size increase by soft-fork is possible in principle and might not be too inelegant in fact.

We talked earlier about software engineering and technical debt. For people who have done software and programming, it’s been a hard-earned experience that if you do not write down technical debt, meaning fix bugs and design defects such that each software version tries to improve the design, fix previous bugs, or improve the design you discovered organically through use, problems start to happen. What tends to happen if you don’t do that is that complexity builds up over time. People put in work-arounds that have limitations and arbitrary behaviors. If you do it again another time, then you have a workaround on top of another workaround. It’s a common problem in software engineering because sometimes, let’s say, the management of a software project might not themselves be technical, or they might feel commercial pressures, and they demand that the bugs be deferred until the next release. If they aren’t showstoppers, they say wait to fix them until later. The danger is that these bugs will persist forever. This tends to be counterproductive after a while: the software becomes more complex and slower to develop, it ends up costing more, and it makes the software less reliable. There are a number of technical debt items. On the bitcoin wiki there is a page called the hard-fork wishlist, which has a large number of known issues, some of which are simple, but bitcoin has some very strict backwards-compatibility requirements, so while it should in principle be easy to implement these, it takes a while to deploy anything like this. The segregated witness implementation comes with quite a wide set of technical debt fixes, which many companies are excited to see: the primary fix, which is how this feature arose, and then a number of other technical debt writedowns.

Some technical debt writedowns provided by the segregated witness soft-fork proposal

The first one is malleability, which is a long-standing design defect that needs to be fixed for Lightning to work, and for payment processors and so on. Another type of fix is having the signature cover the amount or value of the bitcoin being spent. It’s a small design defect that created complexity for hardware wallets; there was a lot of work needed to deal with the fact that this wasn’t fixed in the early bitcoin protocol. Maybe you could have a different interface or a lower-powered CPU or something on a trezor if this had been fixed earlier…. Most people will be pretty happy about these fixes.

Some of these problems are related to scaling. As we’re talking about scaling, we should want to improve scaling and write down technical debt that frustrates scaling. One of them is the O(n^2) hashing problem. Another problem is change buildup. This is analogous to how some people handle physical cash: they withdraw some notes from an ATM, they end up with a pocket full of change, they throw it in a jar, they keep doing it, and then they end up with a full jar of change. Bitcoin wallets have a design defect that makes that the optimal thing to do, because it’s cheaper to split a note than it is to combine change, at least in bitcoin. So even if the wallet has change to use up, it will usually choose to split a new coin. This is a scaling problem because the ledger gets bigger: each coin needs to be individually tracked, the ledger has to be indexed, you need more storage, more memory, probably some more CPU. It increases the minimum footprint or specification of a full node that can exist on the network, because the UTXO set is larger than it needs to be. People who have been following the discussions may have heard about a discount in segwit; it makes spending change and splitting coins into change approximately the same cost, so that wallets won’t have that perverse incentive anymore.
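
((Editor note: the “discount” referred to here is the witness discount in segwit’s weight accounting (bip141). A minimal sketch of that formula follows; the byte sizes are illustrative assumptions, roughly a pay-to-witness-pubkey-hash input and output, not figures from the talk.))

    def tx_weight(base_size, witness_size):
        """bip141: non-witness bytes count 4 weight units, witness bytes count 1."""
        return 4 * base_size + witness_size

    def vsize(base_size, witness_size):
        """Fees are effectively charged per virtual byte (weight / 4)."""
        return tx_weight(base_size, witness_size) / 4.0

    # Spending an extra input is mostly witness (signature) data, while adding a
    # change output is all base data, so consuming change is penalized less than
    # it was before segwit.
    print(vsize(base_size=41, witness_size=107))  # extra input: ~68 vbytes
    print(vsize(base_size=31, witness_size=0))    # extra output: 31 vbytes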

There’s also a fix… well, in the Satoshi whitepaper there was a concept of fraud proofs discussed. There were some limitations which made fraud proofs impractical. Segwit has been set up to help make fraud proofs more closely realizable and possible.

Another thing is a more extensible script system, which allows for example Schnorr signatures. When the developers look to make a change to Bitcoin, they have to provide high assurance that no security defects slipped in. This kind of script extension mechanism is much simpler to assure the correctness of than the existing system.

Future scale sketch (my opinion)

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=1h

This is coming full circle back to the requirements at the beginning. These are the requirements that we were talking about: to double the transactions per second each year for three years in a row or something, and in parallel have Lightning scalability as well. This is a sketch of a sequence of upgrades which should be able to easily achieve that throughput. This is my opinion. Things can be done in a different sequence, or different developers might think that IBLT should happen before Schnorr or in parallel or afterwards or something; these details can get worked out. This is my sketch of what I think is reasonably realistic, using the scalability roadmap (FAQ) as an outline.

If we start with the segregated witness soft-fork, we can get approximately 2 MB as wallets and companies opt-in, and that’s in current late-stage testing. The last testnet before production is running right now, I think segnet4. That should be relatively soon if the ecosystem wants to activate it and opt-in and start adopting it to achieve scale and the other fixes it comes with.

Another thing we could do after segwit adoption is use the script extension from the previous slide to get interesting scale by making the transactions smaller. From the same block size, we can get more transactions if we use a different type of signature, maybe between 1.5x and 2x the transactions per block. The actual physical block size on the network could still be 2 MB, but it could achieve equivalent throughput to a 3 or 4 megabyte block using the old signature type. The Schnorr signature mechanism is already implemented in the libsecp256k1 signature library that Bitcoin uses, and the mechanism to deploy it is included in segregated witness. This is relatively close technology, there are not many unknowns here; this could deliver ahead-of-schedule scale later this year, assuming people adopt it.
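
((Editor note: the 1.5x-2x figure is simple arithmetic: if signature aggregation shrinks the average transaction by some factor, transactions per block grow by the inverse. The sizes below are illustrative assumptions, not measurements from the talk.))

    def throughput_gain(avg_tx_size_before, avg_tx_size_after):
        """How many more transactions fit in the same block space."""
        return avg_tx_size_before / avg_tx_size_after

    # e.g. a multisig-heavy transaction whose several ECDSA signatures collapse
    # into one aggregate Schnorr signature might shrink from ~400 to ~250 bytes.
    print(throughput_gain(400, 250))  # ~1.6x more transactions per block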

Something to say about the adoption and opt-in of segregated witness is that it provides scale to the users that opt-in. If I am running a payment processor and I upgrade the library I am using, and move to the new type of address, which is backwards compatible, then I get cheaper transactions and access to more scale. It’s interesting also that people who do not upgrade get access to scale too: people who upgrade leave empty space in the base block, which can be used by people who haven’t upgraded. So this supports quite well an incremental scaling that builds up over time as people opt-in, and the new space left by people opting-in is used by new users coming in or by existing users who haven’t upgraded their wallets and are creating new transactions. Hopefully new users will of course be using segregated witness compatible wallets, though…

We were talking about the orphan problem and how that is significant for mining. There is an interesting technical solution to this, which is to convert a latency bottleneck into a bandwidth bottleneck. The physical network has excess bandwidth in the transport mechanisms between miners and pools. Full nodes that are not mining are not as sensitive to how quickly they receive blocks; they don’t need to receive a block in 3 seconds, 10 seconds would probably be fine. The idea of weak blocks is that we could push the network harder by using up the excess bandwidth currently going unused. Assuming this happens next, and weak blocks and IBLT go live, then we would be in a position to make use of the excess bandwidth without worrying about the current orphan rate problems. We could then increase our use of the extra bandwidth, perhaps with a hard-fork planned ahead; it could potentially be done with a soft-fork, but I think a hard-fork would be more likely.

Another potential upgrade would be a kind of flexible size, a block size that could grow over time automatically, maybe reacting to demand in some way, and that’s what the flexcap outline proposal kind of does. It’s possible that this would happen next or a simpler block size change would happen next. This should deliver another 2x scale increase. We can see with these three changes we get to the scale that was talked about in the requirements section of this presentation at the beginning.

Future scale sketch (Lightning)

Then we can talk about layer 2, or Lightning, and that can happen in parallel; it’s not waiting or deferring, there are different people on different teams developing Lightning today. I think there are 4 or 5 companies working on this. Most of it is open-source, where you can contribute as well, with mailing lists and source code posted. The one requirement is that it needs some of the technical fixes in segregated witness, or those fixes can be deployed in other ways: it needs the malleability fix, it needs bip65 checklocktime which has already been deployed previously, and it ideally needs CSV (bip112 CHECKSEQUENCEVERIFY) which is in the process of being deployed now separately from segregated witness. So the existing segregated witness testnet is now being used by people working on Lightning, because it provides everything they need; once it goes live, there will be no network features missing that would prevent Lightning. That’s exciting progress. In terms of Lightning throughput, the estimates vary, it depends on how the usage pattern works out, but there are estimates of maybe 100 to 10,000 times more transactions than on-chain transactions. It’s important to point out that Lightning transactions are real native Bitcoin transactions ((zero-conf (zero confirmations) but with an initial transaction that gets committed into the blockchain)). They could be posted on-chain, but there’s a caching mechanism that collapses them so that they don’t all need to be sent to the chain. It’s like a write-coalescing disk cache or something.
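
((Editor note: a hedged sketch of where the 100x-10,000x estimates come from. Only the channel open and close touch the chain, so the multiplier is roughly the payments carried per channel divided by the on-chain transactions used for that channel. The numbers below are illustrative assumptions.))

    def effective_multiplier(payments_per_channel, onchain_txs_per_channel=2):
        """Off-chain payments supported per unit of on-chain block space."""
        return payments_per_channel / onchain_txs_per_channel

    print(effective_multiplier(200))      # ~100x if a channel carries 200 payments
    print(effective_multiplier(20_000))   # ~10,000x for a long-lived, busy channel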

What could we do with this huge amount of scale? We might see new types of use cases, like micropayments, or low-value payments, bringing in new users and use cases. For example, sending a small amount of bitcoin with an email or some people have talked about using this to pay a website a small amount of money per page view and that would maybe provide them a better source of revenue and less frustration than ad blocking, and something like Lightning might be able to provide this.

Another interesting property of Lightning is that it provides instant and secure final confirmation of transactions. One of the problems that people have with Bitcoin payments is that technically you should wait until the first or second confirmation, which is about 10 minutes for the first confirmation. This is far too slow for retail payments. There’s a chance of “accepting” a payment and then not getting paid. Lightning can provide a secure and instant confirmation which is great for that retail problem as well.

Questions

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=1h10m30s

Time for questions.

Thank you for your comprehensive and interesting presentation about Bitcoin scaling issues. I see in the audience a lot of important people from the Czech and Slovak bitcoin community. I think these people have questions for you. Any questions? Don’t hesitate.

Q: I have a question of course. Thank you Adam for coming. Your presentation was quite technical. My question is completely non-technical, from the social side. You were involved in bitcoin from the very beginning, maybe one of the first people to interact with Satoshi Nakamoto. But I heard that you only really started to think about bitcoin in 2013, when you bought your first bitcoin for real? How could this happen?

A: I guess I am happy to think about the technical protocols. Other people are more practical and are eager to try software out. At the time, Hal Finney was one of the first people to try out Bitcoin and write a report about how it works. I was content to read the report and think “that’s very cool”. Also, for some reason it seemed to me it was uncertain whether this would bootstrap, so I was kind of taking a wait and see approach. Different people saw the potential earlier or later, some tried it out and kept some coins; yeah, that’s how that happened.

Q: This is contrary to how people think, that the people who were in at the beginning saw that bitcoin would prevail and rise. For example, for me, I switched to bitcoin really fast, but that is only because I didn’t see the past where a lot of trials failed.

Q: Hello everybody. My name is Maria. I am relatively new to this topic. I wanted to ask two questions. I read before that bitcoins can be stolen. What if someone sends you a virus over the internet? Is that even possible? The second question is, let’s say with money, with paper, with the system that we have, people find illegal ways to make money, like money laundering and so on. Is there a legal way to make bitcoins?

A: Can you repeat the first question?

Q: I read that bitcoins can be stolen.

A: Right, the theft problem. Bitcoin is interesting, but irreversible transactions mean that it’s relatively unforgiving. It stresses computer security. For an average Windows machine, it could be dangerous to store private keys. Maybe you only want to put $10 or $20 on it, or something you would feel OK losing. At least if you lost it you would know that you had a virus and that you should reinstall the machine or something. For higher security applications, people should be using hardware wallets like the trezor, or smartphones that they don’t install much software on, though even smartphones can have security vulnerabilities. You can also potentially use trustless vaults: there’s a multisignature mechanism where you can work with a provider that helps you retain security. Such a service can prevent you from spending more than $100/day but still leave you in control of your bitcoins; it can help protect you from theft because you can set rules about spending money. Your second question was regarding illegal uses of bitcoin. It has some privacy, but it’s not great privacy. All of the transactions can be followed on the network; you can follow the trail, the entire ledger is public data. You can see, for example, in the second Silk Road trial, two of the FBI and DEA law enforcement agents got greedy and stole some of the Silk Road bitcoins. There was a presentation recently by one of the internal investigation team members who were investigating the corrupt law enforcement agents; they were able to trace the transactions, figure out how much BTC they took and determine it was indeed them. In many ways, bitcoin is more traceable than other forms of payment. Physical paper cash is more attractive these days for illegal behavior. There’s far more volume in physical paper cash too: way more crime and grey-market transactions go on in the world than the entire market value of bitcoin, by a big magnitude. If people want to focus on reducing crime, there are other areas that are much more productive to focus on.

Q: I have a question about hashcash. Have you thought about this idea for combatting asymmetric problems like DDoS attacks? It is very easy to send traffic to a website but difficult to absorb it. I think the hashcash approach is very nice. I implemented this in an anti-DDoS proxy. It seems like the idea is frozen. I am wondering if you have any new thoughts on this or new developments.

A: There were some attempts to use hashcash for DDoS. There was somebody using it to deter click fraud, where people receive money per advertising click; they would have the client mine hashcash on the CPU and only count the click if the work was actually done. There’s a wordpress plugin that does something similar to deter abusive blog spam, which is people trying to artificially increase search engine rankings by pasting links everywhere. Another idea was more dynamic: I think there was an internet draft by some people at Cisco some years ago where they proposed that you would connect to a web server, and if the web server was under load it would request some work, and if it was under more load it would request more work. If the web server would have crashed anyway, some people could get through this way, but only the people with the plugin or the person with the most powerful hardware would get to use the service. This was so that some people would be able to get some level of service rather than nobody. There is a really old RFC document from IETF about hashcash about this; I forget the name of the primary author of this proposal. I don’t know if it was ever used. The other thing it was used for was anti-spam, so spamassassin actually respects hashcash postage stamps. The microsoft mail suite, like outlook and exchange and hotmail, has its own hashcash with a different format; it’s not compatible with hashcash. They implemented that as an anti-spam mechanism in that ecosystem, I think it’s called postmark, and they released it as an open specification so that anyone can implement it in theory, although I think Microsoft was the only one to implement it. The other problem with hashcash is that people make ASICs. I figured that if this was massively successful, spammers would make hardware to overcome the limitation. I was thinking that if hashcash became widely adopted, individuals should have ASICs too, to keep the playing field level.
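
((Editor note: for readers unfamiliar with the mechanism being discussed, here is a minimal hashcash-style proof-of-work sketch. The stamp format is simplified (real hashcash v1 stamps also carry version, bits, date, resource, extension, random and counter fields); the resource string and difficulty are illustrative. Minting at 20 bits takes on the order of a second or two of CPU time.))

    import hashlib
    from itertools import count

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
            else:
                return bits + (8 - byte.bit_length())
        return bits

    def mint(resource: str, bits: int = 20) -> str:
        """Search counters until the SHA-1 of the stamp has enough leading zero bits."""
        for counter in count():
            stamp = f"{resource}:{counter}"
            if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
                return stamp

    def verify(stamp: str, resource: str, bits: int = 20) -> bool:
        """Cheap check: one hash, versus roughly 2^bits hashes to mint."""
        return (stamp.startswith(resource + ":") and
                leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits)

    stamp = mint("adam@example.com")
    print(stamp, verify(stamp, "adam@example.com"))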

Q: …

A: Yeah, that could help. That’s a related topic for bitcoin. Some people wonder whether, in the very long term, we should consider, with a lot of notice like 3 years, changing the hash function in some way. I think there are some coins or proposals to change the hash function every 6 months. Maybe that would prevent or deter ASICs, but I think it wouldn’t ultimately solve the problem, because it’s a universal rule of software that hardware wins. People would look at the catalog of hash functions, look at the common properties, and make things that accelerate them, or make optimized FPGAs, or make GPUs that are optimized for that purpose without the graphics IO and so on. Ultimately, specialized hardware always wins, even if the problem is dynamic, because the space of techniques and functions has to be specified upfront. The other problem is that it makes it harder to make ASICs: there was eventually an scrypt ASIC, because although scrypt is an ugly complex thing with memory, it turned out not to be that inconvenient to put memory into ASIC technology. So in many ways it’s best to have a very simple hash function, so that many parties can make the ASIC rather than only one person having the ASIC tech. Also, it’s part of the social contract that changing the PoW hash function is controversial.

Q: It’s possible to not just change the hash function, but the problem itself. If you have a web server giving you a javascript PoW problem, if you solve it, and if an attacker can create an ASIC that can solve all the problems, it’s profit anyway.

A: I think the general fast problem solver for proof-of-work is something like a GPU, or there’s a company making CPUs with a very high core count, very simple RISC cores, like 1000 cores in a chip. Those kinds of things, maybe their single-threaded performance is weak, but they have a much higher execution throughput than a conventional CPU. With mining, ASICs can be fault-tolerant: if you make a general purpose CPU you need some huge number of 9s of reliability, but for a mining ASIC even one 9 of reliability would be OK. You can take those kinds of CPUs, push them to the limit, and then use them for general purpose execution with a JIT compiler for whatever the web server is sending you. You can still get a pretty decent advantage over a conventional user. It doesn’t take a strong advantage to break things. ASICs are thousands or 10,000x faster than GPUs, which are several times faster than CPUs. Even if the advantage from a somewhat customized hardware solution is only 50%, it’s probably already economically broken; it doesn’t take much for a miner to win most of the mining output or all of it. It is challenging to make a fair algorithm, and it pushes the hardware design in a different direction which is more complicated.

Q: Regarding IBLT and weak block, we operate Slush pool in China. For us it’s both latency and bandwidth.

A: Interesting.

Q: Because of the great firewall of china. I could imagine a scenario where you build a block on the other end, but you are still missing the transactions stuck in China. What are your thoughts on this? Perhaps the solution is to get rid of China so that they don’t have the majority, but that’s hard to do.

A: So you have a node in China? That’s the right thing to do. Two nodes? Because the interesting observation is that, to the extent that there is a lower bandwidth situation or higher latency in China, and China has more than 50% of the hashrate, that’s actually a problem for people outside of China, not China’s problem. People misunderstand this sometimes. But you said you were concerned about bandwidth? Well, there are some separate things to do about bandwidth. There are a number of proposals to compress blocks. The IBLT part compresses asymptotically by a factor of 2: today all the transactions go across the network loose and then again in a block, but with IBLT you basically send the transactions only once and then you send a compact list of which transactions are in the block, like “these are the transactions I am using”, which could fit in a handful of TCP packets. This is what you see with the relay network; it conveys how to construct the block with a single TCP packet most of the time. There’s a high bandwidth saving there, and not much more bandwidth saving you could do: you must receive the transactions, and there’s a compression limit beyond which you cannot compress them further. Another thing that could be done for people trying to run nodes in constrained bandwidth situations is to turn off relaying. It turns out that relaying is using the majority of the bandwidth; something like 80% of the bandwidth being used is actually relaying transactions to other nodes on the p2p network. You could be a leech on the p2p network where you just receive transactions and do not relay them. It’s not giving back to the network, but perhaps it’s better for people with high bandwidth to provide the relay function to the network instead. I think it’s important that blocks be constructed locally. Sometimes when people talk about bandwidth constraints in China or something, they say, okay, but the miners can just rent a server in Singapore or something with high bandwidth and low cost and relatively close, and that will solve the problem, but it’s another form of centralization. What makes bitcoin have its properties like policy neutrality and permissionlessness is that there are too many jurisdictions involved to impose policy. Because of the many jurisdictions, there might be one thing blocked in Singapore but not blocked in China. If miners are not constructing their own local blocks, we lose that diversity of policy. It’s interesting to know that you have gone to the step of obtaining a node in China; I don’t think many people have done that.
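
((Editor note: rough, illustrative arithmetic for the two bandwidth points in this answer; the per-block figures are assumptions, not measurements. Transactions are downloaded once either way, and compact or IBLT-style relay avoids downloading them a second time inside the block, hence the asymptotic factor-of-2 saving; a receive-only node additionally avoids the outbound relay traffic said above to be around 80% of usage.))

    loose_tx_mb_per_day = 144 * 1.0    # ~144 blocks/day worth of loose transactions
    block_mb_per_day    = 144 * 1.0    # the same data re-sent inside full blocks
    compact_mb_per_day  = 144 * 0.02   # compact/IBLT relay: ~20 kB of short IDs per block

    print("naive relay:  ", loose_tx_mb_per_day + block_mb_per_day, "MB/day")
    print("compact relay:", loose_tx_mb_per_day + compact_mb_per_day, "MB/day (~2x saving)")

    # A node that receives transactions but does not relay them to peers saves the
    # outbound relay traffic, said above to be ~80% of total bandwidth usage.
    print("receive-only node: ~", 100 * (1 - 0.80), "% of a relaying node's bandwidth")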

Q: It’s much faster to.. than it is to…

A: The relay network is doing some really odd things with routing. I guess this is why you have those nodes; you are probably doing the same thing. You would think maybe you should route over the internet and it will go through the shortest route, but in the relay network, BlueMatt has rented VPSes in very strange places that can achieve a shorter route, faster than you could achieve over the public internet, because the public routes are otherwise too ridiculous.

Q: I have a question about the hard-fork that was proposed at the beginning of the year. I am wondering about your take on this. What do you think about this hard-fork by the Bitcoin Classic team? Do you think it will happen again? How do you prevent this? Is it preventable? It brings up another, more philosophical question: is decentralization attainable? Is it utopian? When you take something like the linux kernel, it’s open-source software and everyone can add new features, but you still have one guy managing Linux. Is it a mistake to not have one person deciding?

A: I’ll talk about the second question first. The counterargument has been, if you apply it to yourself and imagine that you have decisionmaking power about what features go into Bitcoin, I would feel scared to be in that position, because over time there are powerful forces such as governments that would like to change the properties. You can see a preview of this with the Ripple company, where the government asked them to make changes to the protocol which were not popular with the users. They were in central control, so the government asked them to make those changes. We have to achieve neutrality and keep primary the features that users value in Bitcoin. Having developers in different countries, working for different companies, with strong independence, is maybe a more robust way to keep the system independent and retain the properties that Bitcoin users prefer. We’re looking at a snapshot in time, wondering what will happen in the future as companies and governments might want to influence Bitcoin protocol design. I think that if too many properties are lost, Bitcoin will lose its value. Bitcoin companies at the moment should find it in their own interest to retain the existing Bitcoin features even if governments maybe bully them. Your other question about hard-forks, I think it’s a question about tradeoffs. There are different ways to do upgrades; they have advantages and disadvantages. It’s possible for different ecosystem companies to have different views because maybe they specialize in a different area of business. A miner might prefer one feature, a payment processor might like another, and a user might like neither. What you are seeing is that some people have different preferences. In some sense it ties back to “what are bitcoin’s differentiating properties?”. Not necessarily everyone agrees on what makes Bitcoin desirable. If you make different assumptions about what’s important, you can end up with different conclusions. I think that one of the debates is how fast you can do a hard-fork. On my slide, I put the simple hard-fork, which is like bip109 which I think you are referring to, and in my view, and I think quite a lot of the developers believe, there is a tradeoff where the faster you do it, the more risky it is. If you do it tomorrow it would be a disaster; in a month it might be a rush to see everyone upgrade, and there are risks for people who haven’t upgraded, and it requires a lot of coordination which the bitcoin network and ecosystem hasn’t tried to do before. Like, how do I call up this person, how do I reach this node, how do I know who is running a node; there are even bitcoin services running that are economically active but whose nodes are running old software. There was a story about a mining pool still running that had a few petahashes and was mining invalid blocks for a few months. There were miners pointing at the pool but the miners weren’t checking whether they were receiving shares. There have also been cases of pools that were defunct, nobody was maintaining them, they had reasonable hashrate but no payouts, but they kept getting listed on mining pool comparison sites as having a 100% fee because none of the payouts were working. It’s hard to do strict planning. In a top-down managed network there are reporting responsibilities and known contacts, and the people you contact will cooperate and collaborate to achieve it. But bitcoin is a peer-to-peer network, so it’s difficult sometimes to reach or identify people.
In many ways this is a feature of bitcoin that some components of it will be possible without identity. It’s good that miners can be permissionless and not necessarily identified, because this makes it harder to apply policy requests to the miner. This at the same time also makes it difficult to contact the miner if they are mining invalid blocks or if we want them to upgrade or something. It’s a tradeoff, it’s a grey area, it depends on how optimistic you are, it depends on how important immediate higher scale is to you. Maybe some other users wont want to take the same risks you would prefer to take. I think it’s a question of different users and types of ecosystem companies having slightly conflicting views and preferences. We’ll see how it plays out. Ultimately, for the network to upgrade and scale, it needs people to work together, it needs backwards compatibility, upgrade mechanisms will not work if people don’t work together. It’s up to the ecosystem and the users really. At the end of the day, developers are writing software and if nobody runs their software then that kind of shows that the decision is the users’ to make, they can choose what software to run. As I mentioned briefly, the economic nodes control the consensus rules, the miners have to follow the economic views of the running software on the network. There’s no hard and fast answer but I am hopeful that Bitcoin will scale. I also gave my sketch regarding what I think will happen. We’ll see, it really depends on the users and which software they choose to run and whether miners choose to activate one method or another.

Q: Some people proposed hard-forks and bigger block sizes. They say that soft-forks are inherently insecure, because in segwit for example the new version is lying to the old version about what’s in the blocks. It’s saying something is not a transaction when it is a transaction, and the old versions cannot verify it correctly.

A: This is an argument that has been made. It’s not a good argument in the sense that this is not a new observation. All bitcoin protocol upgrades so far have been soft-forks. They all have the exact same property and nobody was complaining for the last however-many protocol upgrades really…. The risk depends on what kind of node you are running. If you are running a smartphone SPV wallet, then you’re not checking a lot of stuff anyway, and a soft-fork isn’t going to suddenly switch you to using the full Bitcoin protocol rules. If you are running a fully-validating full node with some economic value on it, the fact that it is a soft-fork does not mean you can relax; you should definitely upgrade, but if you are on holiday and don’t get back for a few days then miners are going to protect you while that happens. It means that people can upgrade more flexibly and in a less coordinated way. For the period where you haven’t upgraded, you are at a reduced security model, in the same way the smartphone is trusting miners, but the intention is that it’s temporary, and your wallet isn’t using the upgraded coins anyway so it should not be a concern. There are non-trivial parts of the network running 0.8, 0.9, 0.11 versions of Bitcoin Core, for example. This means that all of those nodes are not upgraded; it might be like 30% of the network. We don’t know if those are economic nodes. We don’t know if they are being used by an exchange, or if they are nodes not being used by anyone. This is a problem because SPV clients could connect to them and get bad information. SPV clients tend to connect to multiple nodes and cross-check, so they would notice a discrepancy, we hope. There’s so much old software on the network; it’s an interesting effect, another example where anonymity or privacy is beneficial but has a negative side-effect, which is that we can’t tell which nodes are economic nodes. That’s good, because otherwise you could build a map with geolocation and go steal the bitcoin held by those nodes. The ones that are economic are the important ones to upgrade; someone should try to contact them and suggest that they upgrade for their own safety, kind of thing. From that perspective, the segwit soft-fork is the same as any other soft-fork. I have heard some people try to make the case that it is somehow different, but I don’t agree. A soft-fork is a soft-fork. There exists a transaction that you can create which would look valid to an old node but isn’t actually valid. If you have hashrate, you can abuse that to potentially defraud someone who is using an SPV client. It doesn’t matter what feature that transaction is using, or what the effect of the soft-fork is, it creates the same problem. If someone has a remote root exploit for Linux and it gains root, you don’t care how it got root, it’s already game over. I think any soft-fork means that economic nodes should try to upgrade quickly, but they are better protected than during a hard-fork. With a hard-fork, a different kind of failure occurs, which is that the old nodes are ignoring the current network. They are on a low-hashrate network. There is some risk that they would accept invalid transactions from someone with a moderate amount of hashrate if they were connected to that network. This also splits the currency.
You can make full-node wallets and SPV wallets that stop and warn you if they see transactions on the longest chain with a more recent version than they can understand, and at that point the user should be given the choice to upgrade or continue with the weakened security model. It’s the same in both the soft-fork and hard-fork cases, and there have been some proposals to do this in future releases of Bitcoin Core. Perhaps the default should be stopping, rather than continuing at risk without making a decision to do that.

Unfortunately we are out of time, thanks a lot Adam for your presentation. Thanks a lot for attending this presentation everyone.

https://www.youtube.com/watch?v=HEZAlNBJjA0

Bitcoin scaling tradeoffs with Adam Back (adam3us) at Paralelni Polis

Institute of Cryptoanarchy http://www.paralelnipolis.cz/

slides: http://www.slideshare.net/paralelnipolis/bitcoin-scaling-tradeoffs-with-adam-back

description: “In this talk, Adam Back overviews the Bitcoin scaling and discuses his personal view about its tradeoffs and future development.”

Intro fluff

And now I would like to introduce you to a great guest from the U.K., or Malta, Adam Back. Adam is a long-time professional cryptologist and also a Bitcoin expert. He’s the inventor of hashcash, which is used for example by Bitcoin. I heard that Satoshi Nakamoto was inspired by hashcash.

Adam: I think I got the first email received by anyone from Satoshi just to ask about hashcash.

The second thing that slush0 told me was that hashcash is used by the bitcoin mining protocol. This is a really big thing. Also, I know hashcash is used by many anti-spam solutions. If you have implemented some anti-spam solutions, quite likely you are using hashcash.

I also read on your Wikipedia page that you developed a system based on David Chaum’s ideas. David Chaum is a cryptologist. I invited him to a cryptoanarchist conference last year. Unfortunately he probably didn’t read his emails, so maybe next year.

And the last thing is that Adam is also President of the company Blockstream. That’s a very brief introduction. Now, Adam, it is your turn.

Talk intro actual

Adam: Thank you. Hi. So. I am going to talk for a relatively short period of time. People should ask questions. Sorry. Okay. Yes. So, people should interrupt and ask questions. The slides are mostly there to give some structure to the discussion, so feel free to ask questions as we go, or at the end, as you prefer.

Many hats

So as many people in Bitcoin, I started as an enthusiast. I saw bitcoin and found it exciting and interesting. As many people, I tried to do something to move bitcoin forward by starting a company to improve on the technology and so forth. As probably other people can attest, once you start a company you have multiple hats now. I have to preface what I’m saying by explaining which hat I am speaking with. I am speaking as an individual who very much likes bitcoin and wants it to succeed. As was said in the introduction, I have been playing around with and doing applied research in electronic cash systems for a long time. I also worked at a company doing Tor-like things before Tor existed. And ecash systems relate to this as well. I am also not a spokesperson for Bitcoin Core.

Bitcoin Core is a decentralized group of people that work by consensus, perhaps similar to the way that IETF runs discussion forums. I am speaking as an individual. These sentences are the only Blockstream part. Blockstream as a company is reliant on Bitcoin and they need it to scale and succeed. Same as any other company. Everyone at Blockstream owns bitcoin for themselves and they are excited about bitcoin.

Requirements for scaling bitcoin

With that out of the way, I thought I would talk about it in terms of requirements. People who have been through software engineering in startups or various projects know that you often interface with a customer by stating requirements. They are not always technical requirements; they are about the effect that you want to achieve. There has to be a constructive conversation with a client who is buying something, or maybe they want to buy an open-source system. It’s useful because then the conversation is at a level where hopefully everyone understands, either the customer understands or the user understands, and the technical people can be sure that they have the same idea of what success is, or what they want. In Bitcoin, it’s a little bit ambiguous who the customer is. It’s an open-source project, it’s a currency, it’s a network, it’s a peer-to-peer currency, so it’s also a user currency, so we should most care what users want and why they value bitcoin. This can be difficult to determine because there are many users with different viewpoints.

Bitcoin also depends on companies to succeed. There are many companies popularizing access to Bitcoin, like exchanges and hosted wallets, to make it simpler to use Bitcoin, and miners who are an important part of the ecosystem to secure it, including mining pools. If we’re going to do requirements and engineering around bitcoin, we have to balance the interests of users and companies, and different ecosystem participants like miners, payment processors and exchanges may have different and slightly conflicting requirements.

If we were to optimize solely for the benefit of miners, we might find the outcome to be one thing, but optimizing for the sole benefit of payment processors might point in a different direction. Each thing you do tends to have an impact on someone else, so it’s sort of a zero-sum system in that sense. The system needs to be in balance, and we don’t want one sector to have an overly strong influence over the others; one type of ecosystem participant gaining an advantage over the others is not something desirable. The companies, in some respects, are trying to improve the value for users: users buy their services because they find them convenient, and companies succeed by delivering value to users and getting more users to start using bitcoin and build the network effect.

Let’s talk about a what-if scenario. We say that we want bitcoin to scale. But how much? Let’s put some numbers on it. Hypothetically, say we want the scale to double every year for the next three years, and let’s draw a rough outline of how that’s going to happen. As a requirement, that’s something that companies can think about; hopefully we can do better, but it’s something they can plan around, they can look at user growth numbers. Some ecosystem people on the business side have said that they think it has been scaling at around this rate over the last year or two. It’s not an arbitrary number; it has been scaling at maybe 2x or 2.5x, something like that. And then the more recent interesting technology is Lightning and other layer 2 protocols, where we get much more exciting increases in scale. It depends on the usage pattern, it depends on recirculation, but you hear numbers like 100x to 10,000x transaction throughput using the same base technology. We’ll talk in a bit about how that works roughly.

If we can achieve these targets taken together, I hope that the companies could be happy with them. It’s a conversation topic, because they could come back and say “I need more” or “I need it sooner”, but at least you’re having a conversation that the people designing protocol upgrades can work with and consider.

What is bitcoin?

There are other system requirements that are sort of invariants or hard requirements, which are around bitcoin’s properties. Bitcoin is, very importantly, permissionless. The internet opened up permissionless innovation, and its permissionless nature is significantly credited with driving the fast pace of innovation. Many people think that bitcoin will allow that rate of innovation in financial payment networks, which have so far been relatively closed, more like the closed telephone networks from the pre-internet era. There’s some analogy there.

There are some other interesting bitcoin properties that we don’t want to lose, such as fungibility, which is important for a payment mechanism. Fungibility is the concept that, like cash, one bank note or one coin is the same as any other. This concept arose because sometimes you can distinguish between bank notes by their serial numbers. In 17th century Scotland there was an old case where a businessman sent some high value bank notes to someone else; they got stolen in the mail and deposited in a bank, and he sued to get them returned as his property. The courts decided he should not get the notes back, because otherwise people would lose confidence in their money, and reliable money is very important for an economy. That started the legal concept of fungibility, and I guess other countries arrived at similar concepts for similar reasons. It’s important for bitcoin to have fungibility in a practical sense. You have seen attempts to trace bitcoin, or to find bitcoin that have been used at Silk Road or are connected by two or three hops to a transaction used there, but if we have this activity then people will end up with coins they won’t be able to spend because of those hops and taint. The way that it fails is like the Scottish bank note case: if the ruling had gone the other way, such that the merchant got his notes back, then nobody would want to accept bank notes without rushing to the bank to deposit them. This would cause the currency to fail at its purpose. It’s important that bitcoin is fungible and that it has privacy, because too much transparency causes taint.

We do have some functional requirements; I put these in rough priority order. These are obvious things that we need the system to do to be effective. It has to be secure. We have to scale it, so that everyone can use it. We want it to be reliable, predictable, a good user experience. You want the system to be cheap so that many people can use it, such as in third world countries where the average spend is much lower, or for different types of spending.

Since I was asking about requirements, I think it’s interesting to ask: what is bitcoin? It’s actually an interesting conversation to have with anyone you meet who is interested in bitcoin. Ask them: what’s most important about bitcoin, to them? Another way to ask this would be: which feature of bitcoin, if it were lost, would make you stop using it? Just to run through the top ones quickly: it’s a better bearer ecash, cash-like, irreversible with no way to take back payments, unseizable because it’s bearer, and very importantly there’s no third party or central point of trust, no bank. That’s important to consider if that’s a requirement for Bitcoin. We could obviously scale bitcoin by running a central server that holds all the bitcoin, but that would lose the important differentiator that there’s no third party that has to be trusted. We talked about permissionless already. It also has to be borderless and network neutral; there should be no central party at the base layer that says they don’t like some transactions or whatever. In some countries, well I guess Wikileaks is an example, they had their payments blocked. It was not done through legal means, there was no court order deciding it. Some politicians decided to make some phone calls and ask some favors of some large companies, and their payments were blocked. In bitcoin there’s nobody to call up to achieve that kind of effect. Thus you have seen some adoption of bitcoin by parties that have that unfortunate vulnerability. Fungibility we discussed, privacy we discussed; it’s a virtual commodity, gold-like, virtual mining etcetera. Specifically in terms of its economic properties, it’s not a political currency like fiat currency.

It’s purely free market, there’s no central party that can adjust the rate of new coin production, nobody can do quantitative easing or buy the money back. There’s no party that can adjust inflation, there’s no central party with special authority to set an interest rate. It’s purely free market.

If we look at the three standard properties of money, store of value, medium of exchange and unit of account, it depends on your viewpoint, but I think from my perspective the store of value is probably the most strongly achieved of the three properties so far. Medium of exchange somewhat; people are doing bitcoin transactions, sure, but you could argue that perhaps more of the value comes from an investment perspective for now. They are related. Maybe one is important for a period of time, and another one becomes important later. I have heard people argue that it’s important for bitcoin to have a stable exchange rate and stable robust value, because this would make it easier to use as a unit of account. This area is subject to debate because it’s about economic opinion. Unit of account, I guess, you know, the coffee shop downstairs and everything you buy in this building is a test case of unit of account, but because of the volatility, most people have been thinking of bitcoin by pricing it in dollars or other currencies. Maybe we will get there in the future though?

Upgrade methods

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=17m50s

Next I wanted to talk about upgrade methods. There are a number of different technical means by which we can upgrade the bitcoin network, to add new features to bitcoin. They have different tradeoffs. I think possibly the current scaling discussion should have included a conversation about upgrade methods at an earlier stage, for example at the Scaling Bitcoin conferences or something. There needs to be a more timely discussion.

https://github.com/bitcoin/bips/blob/master/bip-0099.mediawiki

But let’s have that discussion now anyway. People are probably familiar with soft-forks and hard-forks. There are some different variants of bitcoin upgrade methods, and some have had more recent analysis worth talking about. People are still learning new things about bitcoin. There are people who understand the code, but the implications of the protocol and the ways that you can extend it and upgrade it are still being realized; new understandings are still being found. That’s a topic of ongoing learning even amongst Bitcoin developers and so on.

An example of that is a “firm fork” or “soft hard-fork”, which has been talked about recently. It was quite obscure and not widely understood or known before. Let’s talk about the tradeoffs. Backwards compatibility means: will the existing bitcoin wallets in use today still be able to send transactions after the upgrade? One of the ticks on the slide is wrong; this should be a tick, not a cross. A soft-fork is backwards compatible because existing wallets can pay to new wallets and new wallets can pay to old wallets. Full hard-forks are not backwards compatible; by design they change the formats to improve something fundamental. All clients, even smartphone clients, even full node clients, have to upgrade after a full hard-fork. A simple hard-fork is a restricted hard-fork where smartphone clients don’t have to upgrade. The firm fork is a kind of hybrid between the two, which we will talk about in a moment.

One of the factors for an upgrade is, how quickly can you do it? How safe is it? A soft-fork is quite safe to use and has been the method of all previous planned upgrades to bitcoin, of which there have been quite a few, including some last year and some ongoing this year. It’s quite well understood how to do them and what the security properties are, and it doesn’t require tremendous coordination. Mostly miners have to focus on the upgrade, and people who are running merchant software or full nodes, they have to do an upgrade reasonably quickly but they have some flexibility in that, they are protected by miners if they take a little bit longer to do the upgrade.

A firm fork is also fast. A simple hard-fork, it depends: if you do it very quickly it probably introduces risk, but if you want to be conservative then it takes longer than a soft-fork. That’s a potential topic for discussion when someone is looking at these tradeoffs. They might say, I want to take the risk because I want to see it quickly, but others might want to take longer and be slower to be more secure; perhaps they would prefer a soft-fork done first and then a hard-fork done later, because the soft-fork could happen more quickly. And a full hard-fork is similar in terms of speed.

Then, what I was saying about how we have the experience of doing soft-forks: you don’t need to coordinate with literally everyone to achieve a soft-fork upgrade. The soft-fork and the firm fork have similarity to the historic upgrades in bitcoin so far. The hard-forks require everybody on the network to upgrade. That requires much closer coordination than has been attempted before in bitcoin. It’s something that could be done, but there are new coordination risks. On the network, you can look at the software currently running and find some quite old versions still running. We don’t know if they have value depending on them, but it’s clear that people have not been keeping up to date with versions, so that might be something to be aware of, because if we were trying to do a coordinated upgrade we would have to somehow contact them, and work out whether they have a timeframe for upgrading, or some exercise like that.

Another topic is whether a fork is opt-in or not. Do the people who run the software all have to upgrade? Do they make the decision? With hard-forks, everyone has to upgrade. Everybody has to get together and agree to upgrade for the system to upgrade, so the users have a direct choice: if a proposal was made for a hard-fork, users could veto it by just not upgrading. Conversely, with a soft-fork it’s somewhat more automatic; the miners are making the decision. It’s indirect in the sense that we expect miners would only want to make an upgrade if users wanted it, because miners depend on users who want bitcoin and like bitcoin. If users don’t want it, it seems unlikely that miners would want to do the upgrade. But still, there’s a more direct decision with a hard-fork. That comes with the cost that it’s more complicated to do the upgrade, because you have to coordinate with all those people.

The next line on the slide is about whether SPV or smartphone wallets need an upgrade. With a simple hard-fork, that is generally not the case: an upgrade can happen and, due to some limitations in the security model that most smartphone wallets use, it turns out you can increase the block size and the SPV wallets won’t notice; the software will continue to operate. With a soft-fork, users don’t have to upgrade smartphone wallets because they continue to function, and the transactions are backwards and forwards compatible, but users get an advantage by upgrading.

Another aspect of software is technical debt (1 2 3), the built-up long-running bugs or design defects which you generally want to fix, because if you don’t then you tend to run into problems in software. We will talk about this a little bit more. The soft-fork proposal in bip141 segregated witness includes a number of technical debt fixes, some technical design fixes which will help many companies and use cases and generally help the state of the software. It’s hard to know what fixes other forks include because it depends on what you choose to implement in them. For a firm fork, the tradeoff would be that the more fixes you implement, the longer it would take to do the implementation, design and testing. One of the criteria is: how quickly can you do the upgrade? Typically that means the simple hard-fork has included the minimal possible features, so in particular no technical debt fixes, just minimal work-arounds to avoid immediate problems. That’s why I say, if it’s done quickly, then there are only minimal fixes, and that has the side effect that the problems caused by those bugs will continue to persist for 6 or 12 months more, and the features that rely on those fixes, like Lightning relying on malleability fixes, might also get delayed, and therefore delay the higher scale opportunity that we talked about in the requirements for layer 2. In the case of a full hard-fork, technical debt fixes are actually the motivation. Typically you wouldn’t do a full hard-fork unless you really wanted to do some overhaul or data structure reorganization in order to fix bugs or defects, so it would tend to have fixes in it as an assumption.

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=28min40s

So this is an interesting question. You could look at a hard-fork as, we were talking about it requiring everyone to opt in, so it’s a form of referendum in a way, where you need almost unanimous support and agreement for it to happen safely. Conversely, with soft-forks, we hope that miners will take into account that they should only make an upgrade if users want the upgrade, but if the feature is not controversial, then it’s cheaper to do a soft-fork because we have lots of experience doing them, they can be done quickly, and they don’t require as much coordination.

If we look at some different kinds of examples of controversial and uncontroversial things: most of the bitcoin differentiators that we talked about on the previous slide are things that users would be very upset about if they were removed from the system. A major reduction in privacy, an imposition that you require permission to use bitcoin at all, things of that nature; another one would be increasing the total number of coins. These are things that no user and no ecosystem company that uses bitcoin would want to contemplate, so we never want to do those. On the other end of the spectrum are uncontroversial things, things which preserve all of the interesting features of bitcoin and are just better: they fix limitations, faster transactions, more scale, cheaper transactions, bug fixes, and so forth; in principle they should be uncontroversial. Where it gets interesting is that there are, unfortunately, tradeoffs, and we get situations where we would like both sides of the tradeoff to go in our collective favor, but this can be difficult to achieve in the short term. One example is the scale versus security and permissionlessness tradeoff. I’ll talk about why that tradeoff arises in a bit. Centralization can be a problem because at the extremes it tends to put the properties that make bitcoin interesting at risk of erosion. To take a simple and extreme example, if all bitcoin mining ended up being controlled by a single pool or single miner, or a very small number of them, and there were companies controlling them, perhaps in the United States or another country, there would be a risk that the company would be asked by law enforcement to make policy changes that users would not like. So we can see that decentralization is the main technical mechanism providing many of the interesting bitcoin differentiators.

So the question then, and this is just a question, I’m not sure what the answer is: I think if you’d asked people last year or the year before whether they preferred a hard-fork for every change, or whether they were okay with soft-forks, they might have said hard-forks are generally better because they give users the choice to opt in. ….. The ongoing discussion about scalability has shown that referendums are expensive and have their own controversy. In the same way that if a country has a referendum, it encourages people to think about the decision and campaign about it and think about whether it’s good or bad for them and try to pick an outcome that is advantageous for them. So if we are actually making a change that is uncontroversial, like a small increase in scale for example, maybe you could argue in hindsight that a referendum is more expensive than the benefit you get, because if it’s not a question that users really care about, like nobody really disagreeing about a small increase in scale or something, then it might be quicker to just do it by soft-fork, if we use that as a bar. That’s a question that people can have different views on.

Decentralization

So regarding decentralization, this manifests itself with miners and pools. There’s a problem called the orphan rate. Because it’s a distributed system, and miners are running a lottery to win blocks every 10 minutes, there’s a certain amount of time it takes to transmit a block through the network. There’s a chance that two miners will create a block at close to the same time; one of them will win, one of them will lose, and the one who loses will lose mining revenue. Miners keep a close eye on their orphan rate, they monitor it, they try to optimize it away. People say that this is due to bandwidth, since miners are sometimes in remote locations with bad bandwidth availability, but it turns out the limit is latency and not bandwidth, because the actual block is transmitted at the end, in the last 3 seconds or so. So 1 megabyte is not much in bandwidth terms, but it’s actually quite technically difficult to achieve reliable and fair broadcast in 3 seconds, meaning that small miners and large miners receive the block in similar timeframes, and this is not happening today. In many ways, you are already seeing side-effects of the broadcast latency issue. What tends to happen is that people use workarounds. One thing they do is use a pool instead of mining on their own; so you have slush0 and slush’s pool here. If you were solo mining and had slow bandwidth, then you would use a pool, which would solve the problem for you. Another one is the relay network, which is a custom optimized block transfer that can do much better than the p2p network, both because of the routes that it has selected plus the network compression. This was introduced to help medium-sized miners and maybe small miners keep up with large miners in terms of block transfer time. Large miners have been more able to negotiate peering arrangements with other miners, so they can reduce orphan risk in that way.
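To give a feel for why a few seconds of propagation matters, here is a rough back-of-envelope sketch (my addition, not from the talk): it assumes block discovery is a Poisson process with a 600-second average interval and that a competing block found during propagation causes an orphan. Real orphan rates depend on many more factors.

    import math

    def orphan_rate(propagation_seconds, block_interval=600.0):
        # Probability a competing block is found while ours is still propagating,
        # under a simple Poisson model of block discovery (rough approximation).
        return 1.0 - math.exp(-propagation_seconds / block_interval)

    for tau in (3, 10, 30):
        print(f"{tau:>2}s propagation -> ~{orphan_rate(tau) * 100:.2f}% orphan risk")
    # roughly 0.50%, 1.65% and 4.88% respectively

Even a half-percent difference in orphan rate is a meaningful revenue difference between a well-connected miner and a poorly-connected one.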

Validationless mining

Another phenomenon has been so-called “validationless mining”, where one pool or miner fetches the block proposal from another pool without checking the block themselves ((thus the block can trivially include transactions or other properties that violate the bitcoin rules)). They just accept the block on trust that the other person has checked it. This can cascade problems: if a miner has done something inconsistent or confusing with the network rules, then there could be a sequence of blocks found by other miners that have also not been checked. This happened in 2015 with a soft-fork, because people did not realize how widespread validationless mining was; before anyone intervened, there were some invalid blocks mined. Some miners lost bitcoins because of this. The reason why people do validationless mining is because it’s faster and it reduces orphan rate. As far as I know, people are still doing it, because it’s still worthwhile overall relative to the loss they experienced when it went wrong. It reduces security for SPV wallets and smartphone wallets, because the proof-of-work is supposed to be an assertion of transaction validity, but it turns out that PoW can be calculated without ensuring transaction validity. So there could be multiple blocks on an invalid chain due to validationless mining.

If we increase the block size and the orphan rate goes up, is something disadvantageous going to happen? From what we have seen from workarounds lately, we think miners are pragmatic; they are in business to make a profit, so they are going to use whatever techniques to work around any increased orphan rate. They are going to use more validationless mining, they are going to use larger pools, more centralization. This cuts into decentralization. The side effect of these things is more centralization and increasing the policy risk that we talked about earlier.

Bitcoin companies should participate in bitcoin mining

We don’t always hear so much about decentralization in terms of doing something about it; it’s treated more as a passive problem. But actually it’s an ecosystem problem. I would argue that if the ecosystem put its mind to it, there are things that could be done to improve decentralization. First of all, what exactly do we mean by decentralization? There are pools and economic nodes, which are topical. Some people are using smartphones, some people are running a full node, which is the implementation of bitcoin. People who are running merchants, exchanges, services, high-value vaults: you should run economic fully-validating nodes. It’s not miners that enforce the protocol consensus rules, it’s the economic fully-validating nodes. The economic nodes are receiving transactions and making decisions about what the node says. If a user were to receive some coins by running a shop and try to deposit them into an exchange, and the exchange is running a full node, and the exchange says the coins are invalid because their node says so, then the coins are invalid. It’s more important to think about a reasonable decentralization of nodes run by power users and small, medium and large sized companies in many different countries, so that a large proportion of the bitcoin transactions by value are being tested against economic nodes soon, like after not too many peer-to-peer hops of payments; then that collective action enforces the network rules. The number of economic nodes is lower at present. An analogy for this role of economic nodes would be counterfeit testing equipment. Some shops have equipment to pass notes through to determine whether they are forgeries. This provides integrity for the money supply even though it’s run by the shops and merchants for their own security; it actually provides currency integrity for everybody. The number of economic nodes is unfortunately decreasing, for a few reasons, mostly I think because there are services that started outsourcing full nodes: they will run a full node for you. Perhaps this is good because they are specialized and know how to run them securely, but it’s also bad because there are fewer companies running full nodes. If the number of full nodes falls too far, then users might have policy decisions imposed on them that are undesirable from a user perspective. If we let too many undesirable side effects creep into bitcoin, then users will become disenfranchised and go use something other than bitcoin. It’s in the ecosystem’s interest to retain the differentiating properties of bitcoin, because that’s what attracts users to bitcoin at all. We should all want to protect those properties.

Because mining is becoming increasingly centralized, and economic nodes are somewhat centralized, it makes it difficult to do large increases in block size, for the reasons just mentioned above. In terms of the side effects, BitFury did some analysis to see what percentage of nodes would disconnect if the block size reached different levels ((page 4)). If the decentralization metrics were measurable and quite strong, as in 2013, and we increased the block size quite a bit and 20% or 30% of the nodes dropped off, it wouldn’t be much of a problem, because there would still be quite a lot of nodes around and the properties could still be protected.

Let’s say there are two metrics of decentralization. If one was strong and one was weak, we could probably work with that. If we had very decentralized mining but not many full nodes, that might be OK, because miners would be patching over the decentralization problem, or vice versa. I think it’s not always articulated clearly that we could be more relaxed about changing the block size if decentralization was fixed. This has been discussed in a passive sense, but we should attack it in an active sense. The ecosystem should coordinate this. We have talked about the ecosystem coordinating a hard-fork if that became necessary, but we should consider proactively coordinating decentralization improvements. We could all buy some miners, even a few terahashes each; even if that’s spread around, with power users running a bigger percentage of the network, that helps. Ecosystem companies that are not professional miners, perhaps vault services or wallet services, could also buy a small percentage of mining to improve decentralization. Their reason to do this would be that their business depends on scale, scale indirectly depends on decentralization, and their business depends on bitcoin retaining the properties that users want and want to buy services based on.

Another thing you could do is mine on smaller pools first. For whatever reason, people have tended, and it’s moved around over time, one pool has been big for a while, then it shrinks and another one takes over. This is somewhat of a user interface problem; it’s an arbitrary decision and you might tend to pick the pool that is biggest, with the assumption that it’s big so maybe others have validated it already and thus it might be a good choice, right? Well, this actually hurts the decentralization of bitcoin.

Bitcoin mining ASICs

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=47m55s

Another problem is access to ASICs. There are only three or four companies directly selling ASICs. There were more companies a few years ago; some of them have consolidated or gone out of business, and most of them failed to produce on time, because it’s a very time sensitive business. Companies could sell ASICs to power users and smaller users. At the moment they are maybe bulk discounted and sold in bulk to mining farms, and not really available to small miners at all.

There are economies of scale. People running professional mining farms get cheaper power because they can choose their location; residential power is often more expensive. They can also get ASICs cheaper with bulk discounts, and perhaps the manufacturers won’t even go to the trouble of selling small quantities.

If you are a manufacturer selling one or two miners per customer, that’s a cost to you. Then you need to handle support calls and deal with questions from people learning to mine for the first time, which is a cost a manufacturer might want to avoid. We could encourage manufacturers to sell to small users anyway; this is in their own self-interest. If they let centralization build up and it erodes the interesting properties of bitcoin, then bitcoin will become less valuable, users will lose, and their ASICs won’t be as sellable. ASICs are actually sensitive to price fluctuations; if the price goes down significantly, the value of an ASIC’s output can be the difference between profit and loss on mining.

If you don’t have an ASIC, or you are paying above average for electricity, something else that could hypothetically be done is an open-source ASIC that could provide baseline availability for users, so that everyone can get a reasonably cost effective price; perhaps the organization would be a non-profit or something. This is an important problem for bitcoin’s success; it’s one of the big drivers behind current scale limitations and retaining bitcoin’s differentiating properties.

How does scaling via soft-fork work?

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=51m

This is a brief technical detour into how soft-forks work and how you could increase scale using soft-forks. I think there was a time when people were thinking you couldn’t increase the block size with a soft-fork. There’s a mechanism where soft-forks can only restrict rules, not relax them, so it was counter-intuitive that it would be possible to increase the block size. It turns out that it is possible, and the segregated witness proposal uses this property. The way it works is that, for the average bitcoin transaction, about 60% of the size is signatures and the other 40% is transaction information. The segwit soft-fork stores the signatures separately, in a witness area, apart from the basic transaction data. The 1 megabyte limit is then applied only to that other 40%. So this would allow in principle up to a 2.5x increase in block size by soft-fork. For various technical reasons, this soft-fork ends up providing 1.8 to 2 MB depending on the types of signatures; multisigs are bigger, single sigs are smaller, and so forth.
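As a rough illustration of that arithmetic (a simplified model of the accounting described here, not the exact consensus rule; the witness fractions are assumed averages):

    # Simplified model (illustration only): the 1 MB limit applies to the
    # non-witness part of transactions, so effective capacity = 1 MB / base_fraction.
    def effective_capacity_mb(witness_fraction):
        base_fraction = 1.0 - witness_fraction
        return 1.0 / base_fraction

    print(effective_capacity_mb(0.60))  # 2.5 MB if 60% of bytes are signatures/witness
    print(effective_capacity_mb(0.50))  # 2.0 MB for signature-lighter transaction mixes
    print(effective_capacity_mb(0.45))  # ~1.8 MB

The spread between 1.8 MB and 2.5 MB in the talk comes directly from how signature-heavy the actual transaction mix turns out to be.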

The interesting observation here is that you can do more things with soft-forks than people had been assuming a couple of years ago. This is an example of a new thing being discovered about the bitcoin protocol. Actually you can, in principle, although there is a discussion to be had about whether it would be good or not, increase the block size beyond segregated witness. Segwit is sort of a one-off mechanism, we can’t repeat it to get more scale, but another block size increase by soft-fork is possible in principle and might not be too inelegant in fact.

We talked earlier about software engineering and technical debt. For people who have done software and programming, it’s been a hard-earned lesson that if you do not write down technical debt, that is, fix bugs and design defects so that each software version tries to improve the design, fix previous bugs, or improve the design you discovered organically through use, problems start to happen. What tends to happen is that, over time, it creates complexity. People put in work-arounds that have limitations and arbitrary behaviors. If you do it again another time, then you have a workaround on top of another workaround. It’s a common problem in software engineering because sometimes, let’s say, the management of a software project might not themselves be technical, or they might feel commercial pressures, and they demand that the bugs be deferred until the next release. If they aren’t showstoppers, they say wait to fix them until later. The danger is that these bugs will persist forever. This tends to be counterproductive after a while: the software becomes more complex and slower to develop, it ends up costing more and it makes the software less reliable. There are a number of technical debt items. On the bitcoin wiki there is a page called the hard-fork wishlist, which has a large number of known issues, some of which are simple, but bitcoin has some very strict backwards-compatibility requirements. It should in principle be easy to implement these, but it takes a while to deploy anything like this. The segregated witness implementation comes with quite a wide set of technical debt fixes, which many companies are excited to see: the primary one, malleability, which is how this feature arose, and then a number of other technical debt writedowns.

Some technical debt writedowns provided by the segregated witness soft-fork proposal

The first one is malleability, which is a long-standing design defect that needs to be fixed for Lightning to work, and for payment processors and so on. Another fix is that the signature now covers the amount or value being spent by the transaction. It was a small design defect that created complexity for hardware wallets; there was a lot of work related to the fact that this wasn’t fixed in the early bitcoin protocol. Maybe you could have a different interface or a lower powered CPU or something on the trezor if this hadn’t been the case. Most people will be pretty happy about these fixes.

Some of these problems are related to scaling. As we’re talking about scaling, we should want to improve scaling and write down technical debt that frustrates scaling. One of them is the O(n^2) signature hashing problem. Another problem is change buildup. This is analogous to how some people handle physical cash: they withdraw some notes from an ATM, they end up with a pocket full of change, they throw it in a jar, and they keep doing it, and then they end up with a full jar of change. Bitcoin wallets have a design defect that makes that the optimal thing to do, because in bitcoin it’s cheaper to split a note than it is to combine change. So even if the wallet has change to use up, it will usually choose to split a new coin. This is a scaling problem because the ledger gets bigger, each coin needs to be individually tracked, the ledger has to be indexed, and you need more storage, more memory, probably more CPU. It increases the minimum footprint or specification of a full node that can exist on the network, because the UTXO set is larger than it needs to be. People who have been following the discussions may have heard about the witness discount in segwit; it makes spending change and splitting coins into change approximately the same cost, so that wallets won’t have that perverse incentive anymore.
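A minimal sketch of that accounting, using the segwit weight formula (weight = 4 x base bytes + witness bytes, virtual size = weight / 4); the per-input and per-output byte counts below are rough, assumed estimates for illustration only:

    # Sketch of the segwit weight discount; byte sizes are rough estimates.
    def vsize(base_bytes, witness_bytes):
        weight = 4 * base_bytes + witness_bytes
        return weight / 4.0

    p2wpkh_input_base, p2wpkh_input_witness = 41, 107   # outpoint+sequence vs signature+pubkey
    p2wpkh_output_base = 31

    print(vsize(p2wpkh_input_base, p2wpkh_input_witness))  # ~67.75 vbytes to consume a coin
    print(vsize(p2wpkh_output_base, 0))                     # 31 vbytes to create a change output
    # A legacy input carrying the same ~107 signature bytes in the base transaction
    # would cost roughly 148 bytes, so the discount narrows the gap between
    # spending existing change and splitting off new change.

Because consuming an input is mostly witness data, the discount makes cleaning up change relatively cheaper, which is the incentive change being described.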

There’s also a fix… well, in the Satoshi whitepaper there was a concept of fraud proofs discussed. There were some limits which caused fraud proofs to be impractical. Segwit has been set up to help make fraud proofs more closely realizable and possible.

Another thing is a more extensible script system, which allows for example Schnorr signatures. When the developers look to make a change to Bitcoin, they have to provide high assurance that no security defects slipped in. This kind of script extension mechanism is much simpler to assure the correctness of than the existing system.

Future scale sketch (my opinion)

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=1h

This is coming full circle back to the requirements at the beginning. These are the requirements that we were talking about: to double the transactions per second for three years in a row or something, and in parallel have Lightning scalability as well. This is a sketch of a sequence of upgrades which should be able to easily achieve that throughput. This is my opinion. Things can be done in a different sequence, or different developers might think that IBLT should happen before Schnorr or in parallel or afterwards or something; these details can get worked out. This is my sketch of what I think is reasonably realistic, using the scalability roadmap (FAQ) as an outline.

If we start with the segregated witness soft-fork, we can get approximately 2 MB as wallets and companies opt-in, and that’s in current late-stage testing. The last testnet before production is running right now, I think segnet4. That should be relatively soon if the ecosystem wants to activate it and opt-in and start adopting it to achieve scale and the other fixes it comes with.

Another thing we could do after segwit adoption is use the script extension from the previous slide to get interesting scale by making the transactions smaller. From the same block size we can get more transactions: if we use a different type of signature, we could get between 1.5x and 2x the transactions per block. The actual physical block size on the network could still be 2 MB, but it could achieve the equivalent throughput of a 3 or 4 megabyte block using the old signature type. The Schnorr signature mechanism is already implemented in the libsecp256k1 signature library that Bitcoin uses, and the mechanism to deploy it is included in segregated witness. This is relatively close technology, there are not many unknowns here; this could deliver ahead-of-schedule scale later this year, assuming people adopt it.
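To show roughly where that saving comes from, here is a sketch with assumed, approximate byte sizes; the idea of replacing several per-signer signatures with one aggregated Schnorr signature (and one aggregate key) is described only loosely in the talk, so treat the scheme and numbers as illustrative:

    # Rough, assumed sizes in bytes, just to show the shape of the saving.
    ECDSA_SIG, SCHNORR_SIG, PUBKEY = 72, 64, 33

    def multisig_witness_today(m, n):
        # m separate ECDSA signatures plus a script listing n public keys
        return m * ECDSA_SIG + n * PUBKEY

    def multisig_witness_aggregated():
        # one aggregated Schnorr signature and one aggregate key (hypothetical scheme)
        return SCHNORR_SIG + PUBKEY

    print(multisig_witness_today(2, 3))   # ~243 bytes of signature data for a 2-of-3 input
    print(multisig_witness_aggregated())  # ~97 bytes

For transaction mixes heavy in multisig inputs, per-input shrinkage of that order is where a 1.5x-2x transactions-per-block figure could plausibly come from.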

Something to say about the adoption and opt-in of segregated witness is that it provides scale to those users that opt in. If I am running a payment processor and I upgrade the library I am using and move to the new types of addresses, which are backwards compatible, then I get cheaper transactions and access to more scale. It’s interesting also that people who do not upgrade get access to scale too: people who upgrade leave empty space in the base block which can be used by people who haven’t upgraded. So this supports quite well an incremental scaling that builds up over time as people opt in, and the new space left by people opting in is used by new users coming in, or by existing users who haven’t upgraded their wallets and are creating new transactions. Hopefully new users will of course be using segregated witness compatible wallets, though…

We were talking about the orphan problem and how that is significant for mining. There is an interesting technical solution to this: convert a latency bottleneck into a bandwidth bottleneck. The physical network has excess bandwidth in the transport mechanisms between miners and pools. Full nodes that are not mining are not as sensitive to how quickly they receive blocks; they don’t need to receive a block in 3 seconds, 10 seconds would probably be fine. The idea of weak blocks is that we could push the network harder by using up the excess bandwidth currently going unused. Assuming this happens next, and weak blocks and IBLT go live, then we would be in a position to make use of the excess bandwidth without worrying about the current orphan rate problems. We could increase the use of the extra bandwidth, perhaps with a hard-fork planned ahead; it could potentially be done with a soft-fork, but I think a hard-fork would be more likely.

Another potential upgrade would be a kind of flexible size, a block size that could grow over time automatically, maybe reacting to demand in some way, and that’s what the flexcap outline proposal kind of does. It’s possible that this would happen next or a simpler block size change would happen next. This should deliver another 2x scale increase. We can see with these three changes we get to the scale that was talked about in the requirements section of this presentation at the beginning.

Future scale sketch (Lightning)

Then we can talk about layer 2, or Lightning, and that can happen in parallel; it’s not waiting on or deferring anything, there are different people on different teams developing Lightning today. I think there are 4 or 5 companies working on this. Most of it is open-source where you can contribute as well, with mailing lists and source code posted. The one requirement is that it needs some of the technical fixes in segregated witness, or those fixes could be deployed in other ways: it needs the malleability fix, it needs bip65 CHECKLOCKTIMEVERIFY, which has already been deployed previously, and it ideally needs CSV (bip112 CHECKSEQUENCEVERIFY), which is in the process of being deployed now, separately from segregated witness. So the existing segregated witness testnet is now being used by people working on Lightning, because it provides everything they need; once it goes live, there will be no network features missing that would prevent Lightning. That’s exciting progress. In terms of Lightning, the estimates vary, it depends on how the usage pattern works out, but there are estimates of maybe 100 to 10,000 times more transactions than on-chain transactions. It’s important to point out that Lightning transactions are real native Bitcoin transactions ((zero-conf (zero confirmations) but with an initial transaction that gets committed into the blockchain)). They could be posted on-chain, but there’s a caching mechanism that collapses them so that they don’t all need to be sent to the chain. It’s like a write-coalescing disk cache or something.

What could we do with this huge amount of scale? We might see new types of use cases, like micropayments, or low-value payments, bringing in new users and use cases. For example, sending a small amount of bitcoin with an email or some people have talked about using this to pay a website a small amount of money per page view and that would maybe provide them a better source of revenue and less frustration than ad blocking, and something like Lightning might be able to provide this.

Another interesting property of Lightning is that it provides instant and secure final confirmation of transactions. One of the problems that people have with Bitcoin payments is that technically you should wait until the first or second confirmation, which is about 10 minutes for the first confirmation. This is far too slow for retail payments. There’s a chance of “accepting” a payment and then not getting paid. Lightning can provide a secure and instant confirmation which is great for that retail problem as well.

Questions

https://www.youtube.com/watch?v=HEZAlNBJjA0&t=1h10m30s

Time for questions.

Thank you for your comprehensive and interesting presentation about Bitcoin scaling issues. I see in the audience a lot of important people from the Czech and Slovak bitcoin community. I think these people have questions waiting for you. Any questions? Don’t hesitate.

Q: I have a question of course. Thank you Adam for coming. Your presentation was quite technical. My question is completely untechnical, from the social side. You were involved in bitcoin from the very beginning, maybe one of the first people to interact with Satoshi Nakamoto. But I heard that you only really started to think about bitcoin in 2013, when you bought your first bitcoin for real? How could this happen?

A: I guess I am happy to think about the technical protocols. Other people are more practical and are eager to try software out. At the time, Hal Finney was one of the first people to try out Bitcoin and write a report about how it works. I was content to read the report and think “that’s very cool”. Also, for some reason it seemed to me it was uncertain whether this would bootstrap. I was kind of taking a wait and see approach. Different people saw the potential earlier or later, some tried it out and kept some coins; yeah, that’s how that happened.

Q: This is contrary to how people think, that people who were there at the beginning saw that bitcoin would prevail and rise. For example, for me, I switched to bitcoin really fast, but this is only because I didn’t see the past where a lot of trials failed.

Q: Hello everybody. My name is Maria. I am relatively new to this topic. I wanted to ask two questions. I read before that bitcoins can be stolen. What if someone sends you a virus over the internet? Is that even possible? The second question is, let’s say with money, with paper, with this system that we have, people find illegal ways to make money, like money laundering and so on and so on. Is there an illegal way to make bitcoins?

A: Can you repeat the first question?

Q: I read that bitcoins can be stolen.

A: Right, the theft problem. Bitcoin is interesting, but irreversible transactions mean that it’s relatively unforgiving. It stresses computer security. For an average Windows machine, it could be dangerous to store private keys. Maybe you only want to put $10 or $20 on it, or something you would feel OK losing. At least if you lost it you would know that you had a virus and that you should reinstall the machine or something. For higher security applications, people should be using hardware wallets like the trezor, or smartphones on which they don’t install much software, though even smartphones can have security vulnerabilities. You can also potentially use trustless vaults; there’s a multisignature mechanism where you can work with a provider that helps you retain security. Some services can prevent you from spending more than $100/day but still leave you in control of your bitcoins; it can help protect you from theft because you can set rules about spending money. Your second question was regarding illegal uses of bitcoin. It has some privacy, but it’s not great privacy. All of the transactions can be followed on the network; you can follow the trail. The entire ledger is public data. You can see, for example, in the second Silk Road trial, two FBI and DEA law enforcement agents got greedy and stole some of the Silk Road bitcoins. There was a presentation recently by one of the internal investigation team members who were investigating the corrupt law enforcement agents; they were able to trace the transactions, figure out how much BTC they took and determine it was indeed them. In many ways, bitcoin is more traceable than other forms of payment. Physical paper cash is more attractive these days for illegal behavior. There’s far more volume in physical paper cash too: way more crime and greymarket transactions go on in the world than the entire market value of bitcoin, by a big magnitude. If people want to focus on reducing crime, there are other areas that are much more productive to focus on.

Q: I have a question about hashcash. Have you thought about this idea for combatting asymmetric problems like DDoS attacks? It’s very easy to send traffic to a website but difficult for the website to consume it. I think the hashcash approach is very nice. I implemented this in an anti-DDoS proxy. It seems like the idea is frozen. I am wondering if you have any new thoughts on this or new developments.

A: There were some attempts to use hashcash for DDoS. There was somebody using it to deter click fraud where people are receiving money per advertising click. They would have people mine hashcash on the CPU and only count the click if that actually happened. There’s a wordpress plugin that does something similar to deter abusive blog spam, which is people trying to artificially increase search engine rankings by pasting links everywhere. Another idea was more dynamic: I think there was an internet draft by some people at Cisco some years ago where they proposed that you would connect to a web server and, if the web server was under load, it would request some work, and if it was under more load it would request more work. If the web server would have crashed anyway, some people could get through this way, but only the people with the plugin or the person with the most powerful hardware would get to use the service. This was so that some people would be able to get some level of service rather than nobody. There is a really old RFC document from the IETF about hashcash about this; I forget the name of the primary author of this proposal. I don’t know if it was ever used. The other thing it was used for was anti-spam, so SpamAssassin actually respects hashcash postage stamps. The Microsoft mail suite, like Outlook and Exchange and Hotmail, has its own hashcash with a different format; it’s not compatible with hashcash. They implemented that as an anti-spam mechanism in that ecosystem, I think it’s called Postmark, and they released it as an open specification so that anyone can implement it in theory, although I think Microsoft was the only one to implement it. The other problem with hashcash is that people make ASICs. I figured that if this was massively successful, spammers would make hardware to overcome this limitation. I was thinking that if hashcash became widely adopted, individuals should have ASICs too, to keep the playing field level.
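For readers unfamiliar with the mechanism being discussed, here is a minimal toy sketch of the hashcash idea (find a hash with enough leading zero bits over a challenge string); it is illustrative only, does not follow the real hashcash stamp format, and the resource string is made up:

    import hashlib
    import itertools

    def mint(resource, bits=20):
        # Search for a counter whose SHA-256 hash has `bits` leading zero bits.
        # Expensive for the client, roughly 2**bits hash attempts on average.
        target = 1 << (256 - bits)
        for counter in itertools.count():
            stamp = f"{resource}:{counter}".encode()
            if int.from_bytes(hashlib.sha256(stamp).digest(), "big") < target:
                return stamp

    def verify(stamp, bits=20):
        # A single hash to verify, so the server's cost stays negligible.
        return int.from_bytes(hashlib.sha256(stamp).digest(), "big") < (1 << (256 - bits))

    stamp = mint("example.com/login")  # costly to create...
    print(verify(stamp))               # ...cheap to check

The asymmetry, expensive to mint but cheap to verify, is what makes it a candidate rate-limiter for spam and DDoS-style abuse.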

Q: …

A: Yeah, that could help. That’s a related topic for bitcoin. Some people wonder whether in the very long term we should consider, with a lot of notice like 3 years notice, changing the hash function in some way. I think there are some coins or proposals to change the hash function every 6 months. Maybe that would prevent or deter ASICs, but I think it wouldn’t ultimately solve the problem, because it’s a universal rule that hardware wins. People would look at the catalog of hash functions, look at the common properties, and make things that accelerate them, or make optimized FPGAs, or make GPUs that are optimized for that purpose without the graphics IO and whatnot. Ultimately, specialized hardware always wins, even if the problem is dynamic, because the space of techniques and functions has to be specified upfront. The other problem is that a complex hash function makes it harder to make ASICs. There was finally an scrypt ASIC, but it took a while because scrypt was an ugly, complex thing with memory and it was inconvenient to put memory into ASIC technology. So in many ways it’s best to have a very simple hash function, so that many parties can make the ASIC rather than only one person having the ASIC tech. Also, it’s part of the social contract that changing the PoW hash function is controversial.

Q: It’s possible to change not just the hash function, but the problem itself. If you have a web server giving you a javascript PoW problem, you solve it; but if an attacker can create an ASIC that can solve all the problems, it pays off for them anyway.

A: I think the general fast problem solver for proof-of-work is something like a GPU, or there’s a company making CPUs with a high core count, very simple RISC cores, like 1000 cores in a chip. Those kinds of things, maybe their single-threaded performance is weak, but they have a much higher execution throughput than a conventional CPU. With ASICs, mining is fault-tolerant: if you make a general purpose CPU you need some huge number of 9s of reliability, but for a mining ASIC even one 9 of reliability would be OK. You can get those kinds of CPUs, push them to the limit, and then use them for general purpose execution with a JIT compiler for whatever the web server is sending you. You can still get a pretty decent advantage over a conventional user. It doesn’t take a strong advantage to break things. ASICs are 1000x or 10,000x faster than GPUs, which are several times faster than CPUs. Even if the advantage from a somewhat customized hardware solution is only 50%, it’s probably already economically broken. It doesn’t take much of an advantage for a miner to win most or all of the mining output. It is challenging to make a fair algorithm. It pushes the hardware design in a different direction, which is more complicated.

Q: Regarding IBLT and weak blocks, we operate Slush pool in China. For us it’s both latency and bandwidth.

A: Interesting.

Q: Because of the great firewall of China. I could imagine a scenario where you build a block on the other end, but you are still missing the transactions stuck in China. What are your thoughts on this? Perhaps the solution is to get rid of China so that they don’t have the majority, but that’s hard to do.

A: So you have a node in China? That’s the right thing to do. Two nodes? The interesting observation is that, to the extent that there is a lower bandwidth situation in China or higher latency, and China has more than 50% of the hashrate, that’s actually a problem for people outside of China, not China’s problem. People misunderstand this sometimes. But you said you were concerned about bandwidth? Well, there are some separate things to do about bandwidth. There are a number of proposals to compress blocks. The IBLT part compresses asymptotically by a factor of 2: normally all the transactions go across the network once when relayed and then again in a block, but with IBLT you basically send the transactions only once, and then you send a compact list of which transactions are included, like “these are the transactions I am using”, which could fit in a handful of TCP packets. This is what you see with the relay network: it conveys how to construct the block with a single TCP packet most of the time. There’s a high bandwidth saving there, and not much more saving you could do beyond that; you must receive the transactions, they exist, and there’s a compression limit beyond which you cannot compress them further. Another thing which could be done for people trying to run nodes in constrained bandwidth situations is to turn off relaying. It turns out that relaying uses the majority of the bandwidth; something like 80% of the bandwidth being used is actually relaying transactions to other nodes on the p2p network. You could be a leech on the p2p network where you just receive transactions and do not relay them. It’s not giving back to the network, but perhaps it’s better for people with high bandwidth to provide the relay function to the network instead. I think it’s important that blocks be constructed locally. Sometimes when people talk about bandwidth constraints in China or something, they say, okay, but they can just rent a server in Singapore or something with high bandwidth and low cost and relatively close, and that will solve the problem, but that’s another form of centralization. What makes bitcoin have its properties like policy neutrality and permissionlessness is that there are too many jurisdictions involved to impose policy. Because of the many jurisdictions, there might be one thing blocked in Singapore but not blocked in China. If miners are not constructing their own local blocks, we lose that diversity of policy. It’s interesting to know that you have gone to the step of obtaining a node in China. That’s interesting, I don’t think many people have done that.
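To illustrate the compression idea being described, here is a toy sketch only; this is not the actual relay network, IBLT, or compact block wire format, and the 6-byte identifier and the pretend mempool are assumptions for the example:

    import hashlib

    def short_id(txid, nbytes=6):
        # Toy short identifier derived from the full transaction id.
        return hashlib.sha256(txid).digest()[:nbytes]

    # Pretend mempool: both peers already received these transactions via normal relay.
    mempool = {hashlib.sha256(i.to_bytes(4, "big")).digest(): b"...tx bytes..." for i in range(2000)}

    # Announcing a block only needs short ids, not the transactions all over again.
    block_txids = list(mempool)[:1000]
    announcement = [short_id(txid) for txid in block_txids]

    # The receiving peer reconstructs the block from its own mempool.
    by_short_id = {short_id(txid): txid for txid in mempool}
    reconstructed = [mempool[by_short_id[sid]] for sid in announcement]
    print(len(announcement) * 6, "bytes announced for", len(reconstructed), "transactions")

Because each transaction crosses the wire only once during normal relay, the block announcement itself shrinks to a few kilobytes, which is the factor-of-two style saving mentioned above.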

Q: It’s much faster to.. than it is to…

A: The relay network is doing some really odd things with routing. Sometimes, and I guess this is why you have those nodes, you are probably doing the same thing. You would think maybe we should route over the internet and it will go through the shortest route. But in the relay network, BlueMatt has rented VPSs in very strange places that can achieve a very short route, faster than you could achieve over the public internet, because the default routes are otherwise too ridiculous.

Q: I have a question about the hard-fork which happened at the beginning of the year. I am wondering about your take on this. What do you think about this hard-fork by the Bitcoin Classic team? Do you think it will happen again? How do you prevent this? Is it preventable? It brings up another, more philosophical question: is decentralization attainable? Is it utopic? When you take something like the Linux kernel, it’s open-source software and everyone can add new features, but you still have one guy managing Linux. Is it a mistake to not have one person deciding?

A: I’ll talk about the second question first. The counterargument is that if you apply it to yourself and imagine that you have decisionmaking power over what features go into Bitcoin, I would feel scared to be in that position, because over time there are powerful forces, such as governments, that would like to change the properties. You can see a preview of this in the Ripple company, where the government asked them to make changes to the protocol which were not popular with the users. They were in central control, so the government asked them to make those changes. We have to achieve neutrality and keep primary the features that users value in Bitcoin. Having developers in different countries, working for different companies and having strong independence, is maybe a more robust way to keep the system independent and retain the properties that Bitcoin users prefer. We’re looking at a snapshot in time, wondering what will happen in the future as companies and governments might want to influence Bitcoin protocol design. I think that if too many properties are lost, Bitcoin will lose its value. Bitcoin companies at the moment should find it in their own interest to retain the existing Bitcoin features, even if governments maybe bully them. Your other question about hard-forks, I think it’s a question about tradeoffs. There are different ways to do upgrades. They have advantages and disadvantages. It’s possible for different ecosystem companies to have different views because maybe they specialize in a different area of business. A miner might prefer one feature, a payment processor might like another, and a user might like neither. What you are seeing is that some people have different preferences. In some sense it ties back to “what are bitcoin’s differentiating properties?”. Not necessarily everyone agrees on what makes Bitcoin desirable. If you make different assumptions about what’s important, you can end up with different conclusions. I think that one of the debates is how fast you can do a hard-fork. On my slide, I put the simple hard-fork, which is like bip109, which I think you are referring to, and in my view, and I think quite a lot of the developers believe this, there is a tradeoff where the faster you do it, the more risky it would be. If you do it tomorrow it would be a disaster; in a month it might be a rush to see everyone upgrade and there are risks for people who haven’t upgraded; and it requires a lot of coordination which the bitcoin network and ecosystem hasn’t tried to do before, like how do I call up this person, how do I reach this node, how do I know who is running a node. There are even bitcoin services running that are economically active but whose nodes are running old software. There was a story about a mining pool still running that had a few petahashes and was mining invalid blocks for a few months. There were miners pointing at the pool, but the miners weren’t checking if they were receiving shares. There have also been cases of pools that were defunct, nobody was maintaining them, they had reasonable hashrate but no payouts, but they kept getting listed on mining pool comparison sites as having a 100% fee because none of the payouts were working. It’s hard to do strict planning. In a top-down managed network, there are reporting responsibilities and known contacts, and the people they contact will cooperate and collaborate to achieve that. But bitcoin is a peer-to-peer network, so it’s difficult sometimes to reach or identify people.
In many ways it is a feature of bitcoin that some components of it can work without identity. It’s good that miners can be permissionless and not necessarily identified, because this makes it harder to apply policy requests to the miner. At the same time this also makes it difficult to contact the miner if they are mining invalid blocks or if we want them to upgrade or something. It’s a tradeoff, it’s a grey area, it depends on how optimistic you are, it depends on how important immediate higher scale is to you. Maybe some other users won’t want to take the same risks you would prefer to take. I think it’s a question of different users and types of ecosystem companies having slightly conflicting views and preferences. We’ll see how it plays out. Ultimately, for the network to upgrade and scale, it needs people to work together, it needs backwards compatibility; upgrade mechanisms will not work if people don’t work together. It’s up to the ecosystem and the users really. At the end of the day, developers are writing software and if nobody runs their software then that kind of shows that the decision is the users’ to make; they can choose what software to run. As I mentioned briefly, the economic nodes control the consensus rules, and the miners have to follow the economic views of the software running on the network. There’s no hard and fast answer but I am hopeful that Bitcoin will scale. I also gave my sketch of what I think will happen. We’ll see, it really depends on the users and which software they choose to run and whether miners choose to activate one method or another.

Q: Some people proposed hard-forks and bigger block sizes. They say that soft-forks are inherently insecure because in segwit for example the new version is lying to the old version about what’s in the blocks. It’s saying it’s not a transaction when it is a transaction. The old versions cannot verify it correctly.

A: This is an argument that has been made. It’s not a good argument in the sense that this is not a new observation. All bitcoin protocol upgrades so far have been soft-forks. They all have the exact same problem and nobody was complaining for the last X protocol upgrades really…. The risk depends on what kind of node you are running. If you were running a smartphone SPV wallet, then you’re not checking a lot of stuff anyway and a soft-fork isn’t suddenly going to change which rules you are checking. If you are running a fully-validating full node with some economic value on it, the fact that it is a soft-fork does not mean you can relax, you should definitely upgrade, but if you are on a holiday and don’t get back for a few days then miners are going to protect you while that happens. It means that people can upgrade more flexibly and in a less coordinated way. For the period where you haven’t upgraded, you are at a reduced security model in the same way the smartphone is trusting miners, but the intention is that it’s temporary, and your wallet isn’t using the upgraded coins anyway so it should not be a concern. There are non-trivial parts of the network running 0.8, 0.9, 0.11 versions of Bitcoin Core for example. This means that all of those nodes are actually not upgraded. It might be like 30% of the network. We don’t know if those are economic nodes. We don’t know if they are being used by an exchange, or if they are nodes not being used by anyone. This is a problem because SPV clients could connect to them and get bad information. SPV clients tend to connect to multiple nodes and cross check, so they would notice a discrepancy, we hope. There’s so much old software on the network. It’s an interesting effect, another example where anonymity or privacy is beneficial but has a negative side-effect: we can’t tell which nodes are economic nodes, and this is good because otherwise you could make a map with geolocation and go steal the bitcoin held by those nodes. The ones that are economic are important to upgrade; someone should try to contact them and suggest that they upgrade for their own safety kind of thing. From that perspective, the segwit soft-fork is the same as any other soft-fork. I have heard some people try to make the case that it is somehow different but I don’t agree. A soft-fork is a soft-fork. There exists a transaction that you can create which would look valid to an old node but isn’t actually valid. If you have hashrate you can abuse that to potentially defraud someone who is using an SPV client. It doesn’t matter what feature that transaction is using, or what the effect of the soft-fork is, it would create the same problem. If someone has a remote root exploit for Linux and it gains root, you don’t care how; it’s a root exploit, it’s already game over. I think any soft-fork means that economic nodes should try to upgrade quickly but they are better protected than during a hard-fork. With a hard-fork, a different kind of failure occurs, which is that the old nodes are ignoring the current network. They are on a low hashrate network. There is some risk that they would accept invalid transactions from someone with a moderate amount of hashrate if they were connecting to that network. This also splits the currency.
You can make wallets, both full-node wallets and SPV wallets, that stop and warn you if they see transactions on the longest chain that are a more recent version than they can understand, and at that point the user should be given a choice to upgrade or continue with the weakened security model. It’s the same in both the soft-fork and hard-fork cases, and there have been some proposals to do this in future releases of Bitcoin Core. Perhaps the default should be stopping rather than continuing at risk without having made a decision to do that.
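
To illustrate the kind of wallet behavior being described, here is a minimal, hypothetical Python sketch (not an existing Bitcoin Core feature; the constants and structure are invented for illustration): a wallet that halts when it sees block or transaction versions newer than it understands, so the user must explicitly choose to upgrade or accept weaker security.

    # Hypothetical sketch: halt when the best chain contains versions newer
    # than this wallet understands. Real proposals differ in detail.
    KNOWN_MAX_BLOCK_VERSION = 4
    KNOWN_MAX_TX_VERSION = 2

    def check_best_chain_tip(block):
        unknown = block["version"] > KNOWN_MAX_BLOCK_VERSION or any(
            tx["version"] > KNOWN_MAX_TX_VERSION for tx in block["txs"])
        if unknown:
            # Safer default: stop instead of silently continuing with a
            # weakened (SPV-like) security model.
            raise SystemExit("Unknown upgrade detected on the best chain; "
                             "upgrade or explicitly accept the risk.")

    check_best_chain_tip({"version": 4, "txs": [{"version": 2}]})  # passes silently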

Unfortunately we are out of time, thanks a lot Adam for your presentation. Thanks a lot for attending this presentation everyone.

Andrew Poelstra, Pieter Wuille, Brian Deery, Chris Odom

Date: August 7, 2016

Transcript By: Bryan Bishop

Tags: Adaptor signatures, Sidechains

Category: Podcast

Media: http://web.archive.org/web/20200926001539/https://soundcloud.com/heryptohow/mimblewimble-andrew-poelstra-peter-wuille-brian-deery-and-chris-odom

  • Andrew Poelstra (andytoshi)
  • Pieter Wuille (sipa)
  • Brian Deery
  • Chris Odom

Starts at 26min.

I am going to introduce our guests here in just a minute.

Launch of zcash might be delayed in order to allow the code to be analyzed by multiple third-party auditors. Zooko stated, “I feel bad that we didn’t do a better job of making TheDAO disaster-like problems harder for people”.

Our guests on the line are Andrew Poelstra and Pieter Wuille. Andrew are you there?

AP: Hey. We’re here.

host: I am going to let you guys give our audience some background. Andrew, tell us about yourself and what you do in bitcoin.

AP: Sure. I showed up in the bitcoin space around late 2011 while I was starting a PhD in mathematics. I wound up hanging around on the research side of things, like IRC channels centered on cryptography research. These days I work on the libsecp256k1 project which does the underlying cryptography stuff for Bitcoin Core and related projects. That’s mostly what I spend my days doing, implementing crypto code.

host: That’s pretty awesome. This is probably one of the first times we’ve had a hard-core cryptography person on this show. We should probably have you back at some point in the future. But as it turns out, we have another one on the phone now too. Pieter Wuille is also a cryptography expert. Pieter, go ahead and tell us about yourself as well.

PW: Sure. I discovered bitcoin around the end of 2010 and I was immediately attracted to the development side of things. I started coding on bitcoind, which became Bitcoin Core. This is now my full-time job, and I work on the cryptography libraries as well.

host: You’re one of the Core developers?

PW: There’s no good definition for that word, but yes I suppose.

host: Fair enough. But definitely a person with very deep knowledge of bitcoin and very integral in its development. Thank you both for coming on, I really appreciate it. Courtesy of our in-studio bitcoin expert, who is with us here a lot, Brian Deery at Factom, actually real quick too… Factom has had some good news lately…

BD: Oh we don’t want to talk about that. No.

host: Well why have we brought these two on the show?

BD: Something exciting happened last week. On the same day that Bitfinex lost a lot of money, a mysterious Harry Potter fan, who was also a world-class cryptographer, who was and remains anonymous, announced a new cryptographic protocol to the world.

host: Was this Satoshi again?

BD: Probably not.

host: Was it Craig Wright?

BD: There would be a lot more swearing involved.

host: So what is the new protocol about?

BD: The working title is mimblewimble. This is a Harry Potter spell that stops other wizards from being able to cast spells against you. It’s a very fitting name for an anonymity protocol that allows you to spend money without having other people use their own magic of watching you and how you spend your money.

host: Is this another cryptocurrency-like protocol? How would you characterize it?

BD: This is a way to, well, why don’t we let our guests answer that? Andrew was the first one to poke some holes in this protocol and make a few fixes and tweaks to it. And so, of all the people in the world who would come on the show and who aren’t hiding behind tor at the moment, he’s the most appropriate; he’s the world expert.

host: Andrew could you elaborate on mimblewimble?

AP: Mimblewimble.

host: Forgive me, I’m not as big of a Harry Potter fan as I should be. Tell us more about that.

AP: Sure. On Monday, I think it was, on Monday evening, someone logged on to one of our research channels and dropped a paper under the name Tom which is the name of Voldemort in the French translation of Harry Potter. You can find the paper online, and it’s written by Voldemort. He dropped this, then he signed off and that’s the last we heard from this person.

host: It sounds like really intelligent nerds. Instead of frat boys dropping burning bags of shit on people’s front doors, it’s like dropping research instead. Okay, so someone dropped the paper. Continue.

AP: Yep. Okay so he dropped the paper. What this paper describes is the cryptography behind something like bitcoin but not bitcoin. You could build an altcoin with this. Or more usefully you could build a sidechain with this. It’s a way to create transactions, unlike bitcoin where you have this bitcoin script system and you have to solve the bitcoin script; it uses straightforward digital signatures. You can spend money by having a secret key, you can do multisignature things, etc. It’s structured in such a way that when you make a transaction and someone else makes a transaction, you can combine them and make a bigger single transaction. In bitcoin, you can’t do this, because transactions are atomic, except with coinjoin where you can do interactive transaction merging. Instead of blocks being a giant list of transactions, in mimblewimble each block is one giant transaction, and you can’t tell which parts correspond to different transactions. This is a big thing for privacy and anonymity. It takes your entire transaction graph and squishes it into one transaction per block.
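
To make the merging idea concrete, here is a minimal, hypothetical Python sketch (not from the podcast, and not the real data structures): if a transaction is modeled as nothing more than lists of inputs, outputs and kernels, then merging two transactions is just concatenation, and the merged result no longer records which pieces originally belonged together.

    # Toy non-interactive aggregation: a "transaction" is only lists of
    # inputs, outputs and kernels; merging is plain concatenation.
    def merge(tx_a, tx_b):
        return {
            "inputs": tx_a["inputs"] + tx_b["inputs"],
            "outputs": tx_a["outputs"] + tx_b["outputs"],
            "kernels": tx_a["kernels"] + tx_b["kernels"],  # one kernel per original tx
        }

    tx1 = {"inputs": ["in_A"], "outputs": ["out_B"], "kernels": ["kern_1"]}
    tx2 = {"inputs": ["in_C"], "outputs": ["out_D"], "kernels": ["kern_2"]}

    block = merge(tx1, tx2)  # a block is itself just one big merged transaction
    print(block)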

PW: I have something to add here. This is Pieter. What Andrew is describing is that somewhere in 2013 there was another anonymous author who dropped a paper on bitcointalk.

((37min 58sec))

host: So is mimblewimble Polish? No, it’s Pottish. Okay, at least they laugh. That’s good. So just to remind ourselves, this protocol can combine transactions, not just one transaction and another transaction, but you’re saying all transactions in a single block? Pieter, please continue with your point.

PW: Andrew was explaining how this mechanism allows transactions to be combined together. This mechanism was described 3 years ago by another anonymous author who dropped a paper anonymously on bitcointalk, called OWAS. The difference is that OWAS required a new type of cryptography called pairing crypto, which is not well-trusted in the academic space yet. Mimblewimble accomplishes the same thing as OWAS, and more, but does not need the new assumptions, and it only uses the elliptic curve crypto like Bitcoin is using.

host: When you say this other form of crypto is not well trusted, is the reason that it uses a certain set of unproven assumptions?

PW: That’s right. They are impossible to prove. They were just assuming that no efficient algorithm exists to break that. The assumptions for pairing crypto are a bit stronger than for elliptic curve crypto. In short, it’s newer.

BD: Does mimblewimble rely on discrete logarithm problem?

PW: Yes. The elliptic curve discrete logarithm problem.

BD: Just like bitcoin?

PW: Yes.

host: What are the implications of combining all transactions into one in a single block?

AP: The implication is that the transaction graph, which is sort of a technical way of describing all the inputs and all the outputs of each transaction for money in and money out, those transaction graphs no longer give you a way to follow the coins to learn anything.

PW: This is the same thing that coinjoin tries to accomplish. In coinjoin, all the participants need to be online and collaborate at the same time. Mimblewimble allows anyone on the network to take any two transactions and combine them. It simplifies coinjoin.

host: Users have to take steps to do this?

PW: Every node on the network would do mimblewimble automatically, which is not possible with interactive coinjoin.

AP: This is only half of mimblewimble. It’s pretty cool that you can get OWAS without pairing crypto.

host: So were we discussing the mimble or the wimble?

AP: So the second part is what this allows you to do, and I think OWAS could have been coerced into doing this, but mimblewimble definitely does. It’s that, if you have a series of blocks and you want to validate all those blocks, then rather than getting every full block with every transaction, somebody could just give you the effect on the blockchain of all those blocks put together. So if a transaction had an output, and a later block had that output spent, then it doesn’t appear; you can delete that data, and you can give someone the entire chain with that data missing, and that person can still verify the entire chain. This is something that you can’t do in bitcoin right now.

((42min 54sec))

PW: Specifically, this actually means that the blockchain could shrink. We could have a block that spends more than it creates, and the result would be that the entire blockchain would shrink. The amount of data I need to give you to prove that the state of the ledger is correct, could theoretically go down over time. Whereas in bitcoin, we append blocks all the time.
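
As a rough illustration of this cut-through effect, here is a hypothetical Python sketch (names and structure invented for illustration): an output that is created and later spent within the aggregated history can be dropped from both sides, so the data a new verifier must download can shrink.

    # Toy cut-through: drop anything that appears both as a created output
    # and as a spent input in the aggregated history (kernels omitted).
    def cut_through(inputs, outputs):
        spent, created = set(inputs), set(outputs)
        return ([i for i in inputs if i not in created],
                [o for o in outputs if o not in spent])

    history_inputs = ["coinbase_0", "out_A"]      # out_A is created below, then spent
    history_outputs = ["out_A", "out_B", "out_C"]

    print(cut_through(history_inputs, history_outputs))
    # -> (['coinbase_0'], ['out_B', 'out_C']); out_A disappears entirely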

host: So it’s not just that the chain would grow in an incrementally smaller space, it’s that the total data could go down over time.

BD: Ethereum has the full UTXO set equivalent in the patricia merkle tree.

host: You should explain that.

PW: There’s a bit of a difference here in that the UTXO set is the state of the ledger. Knowing how much money everyone has. That’s what the UTXO set is. In bitcoin, you could be given the UTXO set and you wouldn’t have to verify the history if you trust me. If you don’t trust me, then you need to see all blocks in history to verify that the state of the ledger is correct. In mimblewimble, the data I need to show you to prove it’s correct, without you having to trust me, can go down.

AP: Ethereum does not change that. It has commitments. Now you’re trusting a miner that did a lot of PoW and maybe is more trustworthy, but you’re still ultimately trusting that miner that the state of the chain is what it is. With mimblewimble, it’s as if you downloaded the entire blockchain and verified it, but you don’t have to.

host: So as far as blockchain and block size, it sounds like there would be no need to increase the block size with bitcoin in terms of transactions per 10 minutes or whatever because in practice there’s just so little data going across….?

AP: That’s complicated. In real-time, the data still has to go across. It’s only for the people that join later. They get to reap the benefits of all that deleted data. I can give you the mimblewimble coin history, and it will be really small. But while you’re watching the network, you still have to participate in real time.

host: So if you have 100 million transactions in a block, you have to keep listening?

BD: You have to keep up with the fire hose in real-time.

PW: Good analogy. Part of this operation happens within a single block. If I send you some money and you spend it, and both transactions go into the same block, then those two cancel out with each other, and it doesn’t appear in the chain anymore.

BD: So now we have to have two transactions? Someone needs to spend money, and I need to spend it too?

PW: Oh, no. I am just describing transaction inputs and outputs. The transactions where I send you and where you take it can be merged together and they cancel out. So the joined transaction of those 2 is smaller than the sum of the individual transactions.

host: Interesting. It’s a lot to wrap our head around. We’re so used to …

PW: It’s very different from how bitcoin works.

host: We are so used to that, we have a mental inertia of how we understand bitcoin, it’s a little difficult to wrap our heads around this.

BD: Is this going to have to be an interactive protocol? In coinjoin, I need real-time communication with the people I’m mixing with. Do I need interactive communication to make this transaction?

AP: You need to be in communication with the person you’re spending to. That’s interactive. But the merging is non-interactive.

host: So not all transactions are guaranteed to be merged?

AP: One of the rules in the mimblewimble….

host: Andrew and Pieter are here. One of them lives in town. We’ll have to have them on our show in the future. Mimblewimble. Having a crypto expert on the show is cool because in something like bitcoin, we’re grateful to have experts on our show to enlighten us and our listeners about these arcane aspects of a wonderful technology and related tech. That cryptographic stuff is always something that is a little bit missing with others that we have on. It’s a key element that’s nice to fill in. In a future episode we would like to explore that more. When we were coming to break, was it … someone I forget.. was trying to say something on the way out, I don’t know if either of you remember what that would have been.

BD: We were talking about the block size, was that it?

host: Well there was something related before that. In that case, Brian, you had a good question about multiple assets using this tech.

BD: Okay, sure. In the sidechains paper, there was talk of having multiple assets in a single sidechain. You could have a dogecoin sidechain and a bitcoin sidechain, and inside that sidechain you would have atomic swaps, which are basically trustless but rely on everything pretty much being transparent. But now, not having transparency, can you have multiple assets on the same ledger with this mimblewimble tech?

AP: Um, sure. Blockstream has a sidechain called Elements Alpha which only supports bitcoin. It has something called confidential transactions, which is a way to encrypt amounts. It’s a way of encrypting amounts where you can still verify that the transaction is valid. If you had asset pegs on this, you could have verifiers check which inputs have a certain asset type, and which outputs have another asset type, and that for every single asset nothing was created or destroyed. The way that mimblewimble works is that it uses confidential transactions in some creative ways. These ways are not so creative that they would prevent asset pegs. So you have two transactions that don’t create or destroy any assets. After merging, this would still be the case and thus it is compatible. The amounts are still encrypted. The assets would not be encrypted in mimblewimble. But they are still compatible.
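
For readers who want the balance check spelled out, here is a minimal, hypothetical Python sketch of the confidential-transactions bookkeeping Andrew refers to. The real scheme commits to amounts with elliptic curve points; the plain modular arithmetic below only demonstrates the additive accounting, not the hiding or binding properties, and all names and numbers are made up. A per-asset version would simply run the same check once per asset type.

    # Toy Pedersen-style commitments: commit(v, r) = v*H + r*G (mod N).
    # Insecure integer stand-in for elliptic curve points; illustration only.
    N = 2**61 - 1
    G, H = 7919, 104729  # pretend-independent "generators"

    def commit(value, blind):
        return (value * H + blind * G) % N

    # spend a 10-coin input into a 7-coin payment and 2-coin change, fee of 1
    in_c = commit(10, 1111)
    out_1 = commit(7, 2222)
    out_2 = commit(2, 3333)
    fee = 1

    # verifier: outputs + fee*H - inputs must carry zero value overall,
    # i.e. it equals a pure blinding term excess*G (here 2222 + 3333 - 1111)
    excess = (2222 + 3333 - 1111) % N
    assert (out_1 + out_2 + fee * H - in_c) % N == (excess * G) % N
    print("amounts balance without being revealed (in the real scheme)")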

host: After merging, the assets would still be encrypted? or what were you saying?

PW: The balance would still be encrypted. For each output you would see which asset type, but you wouldn’t see how much. And improving on that is something we’re working on. I guess the baseline answer to your question is that there’s a conflict in how in a blockchain we make everything public because we think it’s the only way that things can be verified. Usually we try to make protocols where you don’t need to reveal everything. Confidential transactions is one way. Mimblewimble accomplishes the same thing.

host: Could mimblewimble be applied to bitcoin or be added as a layer on top? Would it instead operate as a potential competing cryptocurrency?

PW: So, it works very differently from how bitcoin works. Introducing mimblewimble into bitcoin in a backwards-compatible way would be a difficult exercise. It may not be impossible, but it would be hard. I think, if people were experimenting with this, I would expect it to be an experimental separate chain or sidechain. In a sidechain we would not introduce a new cryptocurrency but it would be a separate chain. There are some downsides to mimblewimble. In particular, it does not have a scripting language.

host: For some who do not know the implications of that, why is that a bad thing?

AP: So, something we have talked about on the show before is the lightning network and payment channels, which is a way to do a transaction off-chain. In mimblewimble, those are not supported. I think I can get payment channels to work, actually. Bitcoin can do payment channels with its scripting language. Another big one is cross-chain atomic swaps where I could for example trade with Brian some dogecoin for some bitcoin. And dogecoin and bitcoin have nothing to do with each other, but we can still trustlessly swap these using a cross-chain atomic swap, because both of them have a very expressive scripting language so you can find all sorts of trickery that lets you do things like this, and in mimblewimble you cannot. And I don’t see any way to make mimblewimble able to do that, actually. And there’s lots of other examples I could think of. Bitcoin can do a lot of stuff beyond just passing money between people.

PW: Yeah, the nice thing about the bitcoin scripting language, and other cryptocurrencies with expressive scripts, is that people end up doing things with it that it was not designed for. Nobody was thinking about cross-chain atomic swaps and lightning when bitcoin was invented. And yet they require almost no changes to the protocol, at the scripting level at least. So having a relatively expressive language in transactions allows much easier experimentation, doing things it was not designed for. This would be much harder in mimblewimble. It’s basically just for sending money. It could do things like multisignature and a few others, but it’s much more limited in its possibilities.

BD: Could it do 2-of-3 multisig?

PW: Yes. Any of them. Larger ones require a lot of interaction between the participants, in worst case scenario I think exponentially so. Things like bitcoin, yes definitely. If we’re just talking about multisig on a small scale like bitcoin does, yes. It doesn’t go as far as what we can do with Schnorr or key tree signatures (see key tree signatures transcript).

((58min 7sec))

host: So just to clarify then, because bitcoin has an expressive scripting language, it has a lot of versatility compared to mimblewimble’s narrow scope? It doesn’t have this sort of flexibility to be adapted to other things?

PW: Yes.

host: So that’s a good thing to know, not that it would be a bad thing if it was otherwise. So it’s not in direct competition with bitcoin, it can share the space with bitcoin.

PW: Yeah, there’s an argument to be made that an expressive scripting language is sort of an inherent privacy and fungibility risk to the system, in that say tomorrow AwesomeWallet.com opens and they might be the only one in the bitcoin space that uses 3-of-7 multisig, and now you can identify every transaction on the blockchain that uses that wallet because they are the only 3-of-7 multisig users. A scripting language is very neat to play with, but it has a privacy downside. Mimblewimble takes this to the other side where you have very good privacy but at the expense of no other features any more.

host: Just to clarify, what you’re saying is that it’s a downside because the ledger shows….

PW: The scripts go into the blockchain. So if you want a transaction to be indistinguishable from another one, and you use a different script that nobody else is using, then you have already failed.

host: Oh okay got you.

PW: Go ahead.

BD: There was some talk about zcash during our crypto news minutes. How does this crypto compare? Would you trust this more or less than the crypto that underlies zcash?

PW: So the crypto that zcash relies on is SNARKs. Andrew knows this better than I do. It’s pairing based but it requires a trusted setup. There’s a ceremony where a number of people need to come together and create a private key for the system itself, and then destroy it. Anyone who has that private key can do anything they want in the system. This is called a trusted setup and it’s something that zcash has; mimblewimble does not require a trusted setup.

(1h 1min)

BD: Are there any signatures in mimblewimble?

host: Sounds like it needs some thought to answer. We can answer this after the break. Alright guys welcome back. Let’s get back to our guests. We have both of the Chuck Norrises of bitcoin cryptography, Pieter Wuille and Andrew Poelstra. So we’re talking about mimblewimble, which can help maximize privacy. Brian, you had a question for one of the guys as we went out?

BD: There was some talk in the break about whether this uses signatures. I guess that’s not an easy question.

PW: Sure. Yes, it definitely uses digital signatures but more in an implicit way. There are no OP_CHECKSIG and there’s no scriptSig in mimblewimble. In bitcoin, signatures are very explicit. You say “I expect a signature here” and then you verify it. In mimblewimble, they are part of the transaction structure. I guess this is not a useful answer. I guess the answer is yes.

host: Is there a way you can elaborate and clarify on that to explain that better, or is it a deep thorny issue?

BD: So it maintains a state and then cryptographic operations which are similar to the bitcoin signatures that we all know and love, transform from one state to another state however.. that’s not really.. you can really go from the first state to the last state without all the things in between it, and all the morphs are squished together in a single.. to go from the beginning to the end very quickly?

AP: Here’s one way to think about it. In bitcoin, everything uses digital signature. What a digital signature does is say that someone authorizes a state transition. To spend a coin, there is a key related to that coin, you sign a message saying this coin is no longer this coin, but now there’s a new coin and it belongs to someone else with a different key that could authorize its future transfer. And that’s how you sign stuff. With these signatures, they just take some arbitrary data and sign it. They authorize something. There’s no mathematical connection between the input and the output in a normal signature scheme. What mimblewimble does is it does make the inputs and outputs mathematically related but in a way where you can’t actually make a transaction unless you know a secret key. So, I mean, this is kind of a signature in some sense.
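
Continuing the toy commitment sketch from earlier, here is one hypothetical way to picture the implicit signature Andrew describes (again using insecure integer arithmetic in place of elliptic curve operations, with made-up names and numbers): the leftover blinding term, which only the transacting parties can know, behaves like a private key, and a Schnorr-style signature made with it is what authorizes the transaction.

    # Toy "implicit signature": the excess blinding factor acts as a private
    # key; verifiers derive its public key from the commitments themselves.
    import hashlib, secrets

    N = 2**61 - 1
    G = 7919

    def hash_to_scalar(*parts):
        data = b"|".join(str(p).encode() for p in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

    def sign(priv, msg):
        k = secrets.randbelow(N)
        R = (k * G) % N
        return R, (k + hash_to_scalar(R, msg) * priv) % N

    def verify(pub, msg, sig):
        R, s = sig
        return (s * G) % N == (R + hash_to_scalar(R, msg) * pub) % N

    excess = 4444                   # sum(output blinds) - sum(input blinds)
    excess_pub = (excess * G) % N   # computable by anyone from the commitments
    print(verify(excess_pub, "kernel: fee=1", sign(excess, "kernel: fee=1")))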

PW: Maybe important to add is that there are no addresses in mimblewimble.

AP: Oh yeah, that too.

PW: You just talk directly to the receiver and you jointly construct the transaction as you are talking. So there are private keys, public keys and signatures involved there, but it’s not like I create an output to this address and to spend this I sign off using that address. It’s deeper, I guess, inside the engine.

host: How do I know how many mimblewimble coins I have then, if I don’t have an address?

PW: Ah. Well, basically you want to receive some coins and you construct the part of the transaction with the amounts in it, and then you give it to the sender for them to create the transaction. You know how much is in there because that’s what you put there.

BD: Okay so is there no concept of change anymore? or miner fees?

PW: Oh yeah, there’s still definitely …

AP: Imagine I am trying to send Brian some money. When we jointly create our transaction, I basically make a transaction that’s got a bunch of inputs, and some change, and Brian then adds to that transaction some outputs that he created of the value that he wanted. And then we can also leave some value sitting on the table there. We send the transaction out, and then anybody can add an output and take the remaining value. So this means miners can take their own fees, more or less. At each stage of creating a transaction, you create the part of the transaction that you want to authorize and that you know how to create, and then the other party, I guess the recipient, tacks on some outputs that they know the value of, that they can take ownership of, and other nodes and miners can take fees and any floating value; you have to explicitly say what the floating value is, and they can take some of that, and they won’t be able to make the transaction add up if they take more than is available. So there’s this sort of neat thing where everybody receiving coins gets to make their part of the transaction, basically.

host: I think a lot of this is still cooking in my brain. So these transactions between two people are still broadcasted to the network?

BD: You need to broadcast the transaction and the leftover fees that someone can then calculate. Would the transaction be valid if the inputs and outputs weren’t exact? If the balances didn’t match?

AP: No.

PW: It has to match exactly, so the amounts in the inputs must be exactly equal to the amounts in the outputs plus the fee that you declare. If the number is not exactly the same, then the transaction is irrevocably invalid.

host: So obviously, as the transaction goes through, how long does it take?

PW: Milliseconds.

((1h 12min 30sec))

host: So Pieter, you were saying during the break about what “mimble wimble” actually means in the Harry Potter books.

PW: I actually learned this from Andrew.

host: Well Andrew, you go ahead then. You’re the source.

AP: Well I also learned it this week, but not from anyone in this room.

PW: Sourcer.

host: Ahh that was good. We always love good puns on the crypto show. You are the number one punster for the show. Anyway, go ahead.

AP: What mimblewimble does in Harry Potter is that it prevents someone from saying some sort of information, like any sort of information. This was used before the books even happened, when Harry’s parents were killed. They were part of some group called the Order of the Phoenix and they were being hidden in some house whose location was secret. And the members of the Order of the Phoenix used this mimblewimble spell on each other to make sure that none of them could reveal the location of this house. So they were magically unable to give away the secret location.

host: That’s interesting. So they had a secret society, and instead of blood oaths to keep the secret, they cast spells on each other.

BD: That’s a perfect name for the protocol.

PW: I really love how earlier this week we had totally serious discussions with sentences like “So what Voldemort told us ….”

host: ((laughter)) And that’s the beauty of the crypto space. All the brilliant minds still love their Harry Potter and bringing in the Harry Potter terminology. In France they called Voldemort what?

AP: I might be saying this wrong. It’s Tom Elvis… which is an anagram of Voldemort.

host: Oh.

AP: There’s a scene in the second movie or the second book where Voldemort conjures up some letters for “I am lord Voldemort” and the letters re-arrange themselves and it turns out to be an anagram for “Tom M Jedusor”. Sorry, he says, “Tom Marvolo Riddle” and he rearranges these letters to “I am Lord Voldemort”. So in the French books they needed a different name that would anagram to “je suis Voldemort”, which is “I am Voldemort” in French.

PW: And that’s what the author of the mimblewimble paper called himself.

host: That’s awesome. That’s amazing. I love the French. So back to serious matters. Go ahead.

BD: And a rant.

host: Uh oh.

BD: So the state itself is what we advance slowly over time. There was a proposal called “flipping the chain” way back, perhaps attributed to Alan Reiner, only using UTXO commitments so it was the state of the chain, which is what Ethereum is right now… and now on to my rant about Alan Reiner. With Armory Wallet… he’s shifted to the dark side, he’s working for Iron Net Cybersecurity which is Keith Alexander’s cybersecurity company, who was director of the NSA for about a decade until 2014. So this kind of falls in with a lot of other wallet providers who have turned to the dark side. Yaun Moeller is one of the founders of Chainalysis, he’s the original founder of Mycelium.

host: Really?

BD: Well not Mycelium itself. But the original author of Bitcoin Spinner which turned into mycelium. So the dark side is strong with us in the wallet industry.

host: So it’s a project to deanonymize what, you didn’t finish.

BD: Bitcoin transactions.

host: Really? And so what is the purported advantage of benefit to that?

BD: Well if you wanted to track flows of money…

host: Well fair enough. But I mean if you’re not the government basically….

PW: A government can force businesses to use Chainalysis and other services, to make sure money received isn’t stolen or something. And my attitude towards such services is that they are great because they give us a target. Our job in developing this system is to make their job impossible.

host: You mean with mimblewimble?

PW: As an example, yes. There are various other things we can do. You can’t blame people for trying things that are possible. They show us what is possible so that we have a target to improve upon.

host: So they give you more purpose. I like that.

BD: As a corollary to this though, one of the benefits of bitcoin is that I can provably show that I have paid someone some money, cryptographically proven. If I make one of these transactions will I be able to cryptographically show that I paid some money to someone even though it was anonymous?

AP: Yes. Yep, you can. The way that this is done, say that when we’re talking to build this transaction, you need to create an output for this transaction. Part of this output is a piece of confidential transactions, called a range proof, which you need to make on that output. And to make this, you need to use the secret key for that output. As soon as I see that output, I know that you were in control of that key. When I build the whole transaction, I know that the output is your output, because I was with you when we were creating the transaction. And then we send out the whole thing, this sort of built-together transaction, and what shows up in the block is a whole bunch of inputs, a whole bunch of outputs, lots of merged transactions that you can’t separate. But I can see in the blockchain that one of the outputs in that merged transaction is the one that you gave me when we were talking (interactively), and we can both see that and we both have the transcript of our conversation.

BD: Could I prove it to a third-party as a sender?

PW: Yes. You can show the transaction and then show it was linked in the chain. This is usually the tradeoff that I think is optimal where the public … where you always have the ability to prove to an auditor that something happened. It’s voluntary.

host: That makes sense. As we talked about off air, I was wondering if you guys have any opinions about the memo from a year ago that the NSA put out, basically discouraging people from upgrading to elliptic curve crypto from RSA if they haven’t done so already, sort of implying that it wasn’t worth it. The context was that the NSA expected advancements in quantum computing to make such a switch irrelevant. I believe that is what they were implying. I am curious what you guys thought about this. As Brian was explaining, there were two cryptographers that explored, game-theoretically, different scenarios under which the NSA might make such an announcement, and what motives they might have had for that. Do you guys have comments on that?

AP: On the NSA? So my recollection of the NSA announcement, I had forgotten or didn’t read the part about RSA. There was a recommendation as part of this to ..

PW: I just looked it up. This is from an article. I don’t know the exact quote. Complementary to a decision to move away from elliptic curve crypto, also what the NSA … p256 .. blah blah blah… uh.. more resistant to advancements in quantum computing. So the claim is that there’s some belief that especially the smaller elliptic curves in the future could be vulnerable to quantum computing. Whether that’s the real reason, I don’t know.

host: So maybe just go to bigger curves and that would ..?

PW: With a few people we visited the Stanford campus a week ago and met with Dan Boneh, who is an expert in elliptic curve crypto and other things. And he basically confirmed that even if quantum computers were real, there’s a chance that they would not be able to attack larger elliptic curves while still being able to attack smaller ones, at least that was my understanding from what he said.

host: Would the reason for that be that, in the same sense as for any kind of cryptographic breaking, even with quantum computers the bigger curves would take too much time?

PW: The usual assumption is that if we had a large fast quantum computer, then all of elliptic curve crypto would break instantly. This is not how things work in engineering. We don’t go from no quantum computer to one big fast stable usable machine. But basically what I learned there with Dan Boneh is that there’s some theoretical understanding that quantum computers in practice might be able to completely break the security of smaller curves while not being able to break larger ones.

BD: Is secp256k1 that bitcoin uses, is that a small one or a big one?

PW: It’s a small one.

host: Oh, well….

PW: Well as far as I know the largest quantum computer is like 5 qubits right now. And we would need a system with a few thousand qubits to apply this attack.

BD: UT Austin had a recent big win in the quantum computing field. Professor Scott Aaronson has decided to move down and teach at UT Austin and set up a quantum computing laboratory at the university. They wouldn’t give his wife tenure at MIT along with… he already had tenure, so they offered… he wanted his wife to be with him, and so UT Austin gave both of them tenure.

host: Good for UT. That sounds awesome. Screw you MIT. That’s interesting. … That’s not to say that the government does or does not have quantum computing… I have been fascinated by Fermat’s last theorem ever since I was in high school. I read a book a while back about proving Fermat’s last theorem and it had to do with elliptic curves. In y’alls education in cryptography, did the proof of Fermat’s last theorem by Andrew Wiles, was that significant at all in the progress of cryptography?

AP: The short answer is no. The long answer is that I have tried to understand that proof many times. It starts with elliptic curves stuff. The whole history of Fermat’s last theorem involves elliptic curves. This is why elliptic curve groups were invented, which allows us to do fancy algebra on elliptic curves. All this machinery was invented largely to deal with Fermat’s last theorem and other algebraic equations we didn’t understand.

PW: Oh I didn’t know that.

AP: Yep that’s true. And actually in the first chapter of most any general algebra book on elliptic curves, you will find a few special cases of Fermat’s last theorem that have simple solutions via elliptic curves. But in the most general form it starts with the curve and then it goes on to higher-level stuff and higher-level stuff probably 100 times, and I can’t even begin to understand what I’m unable to begin to understand about that proof.

PW: It took him 7 years to write it, right. It’s not something you would expect to like read over and say “oh sure that’s obvious”.

host: So if it took 7 years to write, it could take 7+ years to read. The book said that, right. Lots of mathematicians had difficulty grasping it because it was so arcane and deep and so involved, so that’s interesting, it’s not surprising that you go down one rabbit hole and not everyone can necessarily follow you without that same amount of time to go down that rabbit hole. Interesting response. Yeah it’s something, I got to AP Calculus and that’s about it. The higher-order math and cryptography has always fascinated me. So I was really curious to hear what you thought about that, so it’s an interesting response, thank you. Great discussion on mimblewimble. We have a minute left. It’s been a pleasure to have you guys on. Thank you.

AP: http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble.txt and https://www.reddit.com/r/Bitcoin/comments/4vub3y/mimblewimble_noninteractive_coinjoin_and_better/ and https://www.reddit.com/r/Bitcoin/comments/4woyc0/mimblewimble_interview_with_andrew_poelstra_and/ and https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/012927.html

and https://www.reddit.com/r/Bitcoin/comments/4xge51/mimblewimble_how_a_strippeddown_version_of/

and https://www.reddit.com/r/Bitcoin/comments/4y4ezm/mimblewimble_how_a_strippeddown_version_of/

\ No newline at end of file +http://web.archive.org/web/20200926001539/https://soundcloud.com/heryptohow/mimblewimble-andrew-poelstra-peter-wuille-brian-deery-and-chris-odom

  • Andrew Poelstra (andytoshi)
  • Pieter Wuille (sipa)
  • Brian Deery
  • Chris Odom

Starts at 26min.

I am going to introduce our guests here in just a minute.

Launch of zcash might be delayed in order to allow the code to be analyzed by multiple third-party auditors. Zooko stated, “I feel bad that we didn’t do a better job of making TheDAO disaster-like problems harder for people”.

Our guests on the line are Andrew Poelstra and Pieter Wuille. Andrew are you there?

AP: Hey. We’re here.

host: I am going to let you guys give our audience some background. Andrew, tell us about yourself and what you do i nbitcoin.

AP: Sure. I showed up in the bitcoin space around late 2011 while I was starting a PhD in mathematics. I wound up hanging around on the research side of things, like IRC channels centered on cryptography research. These days I work on the libsecp256k1 project which does the underlying cryptography stuff for Bitcoin Core and related projects. That’s mostly what I spend my days doing, implementing crypto code.

host: That’s pretty awesome. This is probably one of the first times we’ve had a hard-core cryptography person on this show. We should probably have you back at some point in the future. But as it turns out, we have another one on the phone now too. Pieter Wuille is also a cryptography expert. Pieter, go ahead and tell su about yourself as well.

PW: Sure. I discovered bitcoin around the end of 2010 and I was immediately attracted to the development side of things. I started coding on bitcoind and it became Bitcoin Core. This is now my full time job and working as well on the cryptography libraries.

host: You’re one of the Core developers?

PW: There’s no good definition for that word, but yes I suppose.

host: Fair enough. But definitely a person with very deep knowledge of bitcoin and very integral in its development. Thank you both for coming on, I really appreciate it. Courtesy of our in-studio bitcoin expert, who is with us here a lot, Brian Deery at Factom, actually real quick too… Factom has had some good news lately…

BD: Oh we don’t want to talk about that. No.

host: Well why have we brought these two on the show?

BD: Something exciting happened last week. On the same day that Bitfinex lost a lot of money, a mysterious Harry Potter fan, who was also a world-class cryptographer, who was and remains anonymous, announced a new cryptographic protocol to the world.

host: Was this Satoshi again?

BD: Probably not.

host: Was it Craig Wright?

BD: There would be a lot more swearing involved.

host: So what is the new protocol about?

BD: The working title is mimblewimble. This is a Harry Potter spell that stops other wizards from being able to cast spells against you. It’s a very fitting name for an anonymity protocol that allows you to spend money without having other people watch you using their own magic of watching you and how yo uspend your money.

host: Is this another cryptocurrency-like protocol? How would you characterize it?

BD: This is a way to, why don’t we let our guests answer that? Andrew was the first one to poke some holes in this protocol and make a few fixes and tweaks to it. And so, of all the people in the world would come on the show who isn’t hiding behind tor at the moment, he’s the most appropriate, he’s the world expert.

host: Andrew could you elaborate on mimblewimble?

AP: Mimblewimble.

host: Forgive me, I’m not as big of a Harry Potter fan as I should be. Tell us more about that.

AP: Sure. On Monday, I think it was, on Monday evening, someone logged on to one of our research channels and dropped a paper under the name Tom which is the name of Voldemort in the French translation of Harry Potter. You can find the paper online, and it’s written by Voldemort. He dropped this, then he signed off and that’s the last we heard from this person.

host: It sounds like really intelligent nerds. Instead of frat boys dropping burning bags of shit on people’s front doors, it’s like dropping research instead. Okay, so someone dropped the paper. Continue.

AP: Yep. Okay so he dropped the paper. What this paper describes is the cryptography behind something like bitcoin but not bitcoin. You could build an altcoin with this. Or more usefully you could build a sidechain with this. It’s a way to create transactions, unlike bitcoin where you have this bitcoin script system and you have to solve the bitcoin script; it uses straightforward digital signatures. You can spend money by having a secret key, you can do multisignature things, etc. It’s structured in such a way that when you make a transaction and someone else makes a transaction, you can combine them and make a bigger single transaction. In bitcoin, you can’t do this, because transactions are atomic except in coinjoin where you can do interactive transaction merging in bitcoin. Instead of blocks being a giant list of transactions, in mimblewimble each block is one giant transaction, and you can’t tell which parts correspond to different transactions. This is a big thing for privacy and anonymity. It takes your entire transaction graph and squishes it into one transaction per block.

PW: I have something to add here. This is Pieter. What Andrew is describing is that somewhere in 2013 there was another anonymous author who dropped a paper on bitcointalk.

((37min 58sec))

host: So is mimblewimble polish? No, it’s pottish. Okay at least they laugh. That’s good. So just to remind ourselves, this protocol can combine transactions, but not just one transaction and another transaction, but you’re saying all transactions in a single block? Pieter please continue with your point.

PW: Andrew was explaining how this mechanism allows transactions to be combined together. This mechanism was described 3 years ago by another anonymous author who dropped a paper anonymously on bitcointalk, called OWAS. The difference is that OWAS required a new type of cryptography called pairing crypto, which is not well-trusted in the academic space yet. Mimblewimble accomplishes the same thing as OWAS, and more, but does not need the new assumptions, and it only uses the elliptic curve crypto like Bitcoin is using.

host: When you say this other form of crypto is not well trusted, is the reason is that it uses a certain set of unproven assumptions?

PW: That’s right. They are impossible to prove. They were just assuming that no efficient algorithm exists to break that. The assumptions for pairing crypto are a bit stronger than for elliptic curve crypto. In short, it’s newer.

BD: Does mimblewimble rely on discrete logarithm problem?

PW: Yes. The elliptic curve discrete logarithm problem.

BD: Just like bitcoin?

PW: Yes.

host: What are the implications of combining all transactions into one in a single block?

AP: The implication is that the transaction graph, which is sort of a technical way of describing all the inputs and all the outputs of each transaction for money in and money out, those transaction graphs no longer give you a way to follow the coins to learn anything.

PW: This is the same thing that coinjoin tries to accomplish. In coinjoin, all the participants need to be online and collaborate at the same time. Mimblewimble allows anyone on the network to take any two transactions and combine them. It simplifies coinjoin.

host: Users have to take steps to do this?

PW: Every node on the network would do mimblewimble automatically, which is not possible with interactive coinjoin.

AP: This is only half of mimblewimble. It’s pretty cool that you can get OWAS without pairing crypto.

host: So were we discussing the mimble or the wimble?

AP: So the second part is that what this allows you to do, and I think OWAS could have been coerced into doing this, but mimblewimble definitely does. It’s that, if you have a series of blocks, then somebody can give you if you want to validate all those blocks, rather than getting every full block with every transaction, they could just give you the effect on the blockchain of all those blocks put together. So if a transaction had an output, and a later block had the output spent, then it doesn’t appear, you can delete that data, and you can give someone the entire chain with the missing data, and that person can still verify the entire chain. This is something that you can’t do in bitcoin right now.

((42min 54sec))

PW: Specifically, this actually means that the blockchain could shrink. We could have a block that spends more than it creates, and the result would be that the entire blockchain would shrink. The amount of data I need to give you to prove that the state of the ledger is correct, could theoretically go down over time. Whereas in bitcoin, we append blocks all the time.

host: So it’s not just that the chain would grow in an incrementally smaller space, it’s that the total data could go down over time.

BD: Ethereum has the full UTXO set equivalent in the patricia merkle tree.

host: You should explain that.

PW: There’s a bit of a difference here in that the UTXO set is the state of the ledger. Knowig how much money everyone has. That’s what the UTXO set is. In bitcoin, you could be given the UTXO set and you wouldn’t have to verify the history if you trust me. If you don’t trust me, then you need to see all blocks in history to verify that the state of the ledger is correct. In mimblewimble, the data I need to show you, you don’t trust me, I can prove that it’s correct, that amount of data can go down.

AP: Ethereum does not change that. It has commitments. Now you’re trusting a miner that did a lot of PoW and maybe is more trustworthy, but you’re still ultimately trusting that miner that the state of the chain is what it is. With mimblewimble, it’s as if you downloaded the entire blockchain and verified it, but you don’t.

host: So as far as blockchain and block size, it sounds like there would be no need to increase the block size with bitcoin in terms of transactions per 10 minutes or whatever because in practice there’s just so little data going across….?

AP: That’s complicated. In real-time, the data still has to go across. It’s only for the people that join later. They get to reap the benefits of all that deleted data. I can give you the mimblewimble coin history, and it will be really small. But while you’re watching the network, you still have to participate in real time.

host: So if you have 100 million transactions in a block, you have to keep listening?

BD: You have to keep up with the fire hose in real-time.

PW: Good analogy. Part of this operation happens within a single block. If I send you some money and you spend it, and both transactions go into the same block, then those two cancel out with each other, and it doesn’t appear in the chain anymore.

BD: So now we have to have two transactions? Someone needs to spend money, and I need to spend it too?

PW: Oh, no. I am just describing transaction inputs and outputs. The transactions where I send you and where you take it can be merged together and they cancel out. So the joined transaction of those 2 is smaller than the sum of the individual transactions.

host: Interesting. It’s a lot to wrap our head around. We’re so used to …

PW: It’s very different from how bitcoin works.

host: We are so used to that, we have a mental inertia of how we understand bitcoin, it’s a little difficult to wrap our heads around this.

BD: is this going to have to be an interactive protocol? In coinjoin, I need real-time communication with the people I’m mixing with. Do I need interactive communication to make this transaction?

AP: You need to be in communication with the person you’re spending to. That’s interactive. But the merging is non-interactive.

host: So not all transactions are guaranteed to be merged?

AP: One of the rules in the mimblewimble….

host: Andrew and Pieter are here. One of them lives in town. We’ll have to have them on our show in the future. Mimblewimble. Having a crypto expert on the show is cool because in something like bitcoin, we’re grateful to have experts on our show to enligthen us and our listeners about these arcane aspects of a wonderful technology and related tech. That cryptographic stuff is always something that is a little bit missing with others that we have on. It’s a key element that’s nice to fill in. In a future episode we would like to explore that more. When we were coming to break, was it … someone I forget.. was trying to say something on the way out, I don’t know if either of you remember what that would have been.

BD: We were talking about the block size, was that it?

host: Well there was something related before that. In that case, Brian, you had a good question about multiple assets using this tech.

BD: Okay, sure. In the sidechains paper, there was talk of having multiple assets in a single sidechain. You could have a dogecoin sidechain and a bitcoin sidechain and inside that sidechain you would have atomic swaps which are basically trustless but that rely on everything pretty much being transparent. But now, not having transparency, is multiple assets can you have multiple assets on the same ledger with this mimblewimble tech?

AP: Um, sure. In something like, and Blockstream has a sidechain called Elements Alpha which only supports bitcoin. It has something called confidential transactions which is a way to encrypt amounts. It’s a way of encrypting amounts where you can still verify that the transaction is valid. If you had asset pegs on this, you could have verifiers check which inputs have a certain asset type, and which outputs have another asset type, and that for every single asset nothing was created or destroyed. The way that mimblewimble works is that it uses confidential transactions in some creative ways. These ways are not too creative that would prevent asset pegs. So you have two transactions that don’t create or destroy any assets. After merging, this would still be the case and thus it is compatible. The amounts are still encrypted. The assets would not be encrypted in mimblewimble. But they are still compatible.

host: After merging, the assets would still be encrypted? or what were you saying?

PW: The balance would still be encrypted. For each output you would see which asset type, but you wouldn’t see how much. And improving on that is something we’re working on. I guess that, the baseline answer to your question is, there’s a conflict between how in a blockchain we make everything public because we think it’s the only way that things can be verified. Usually we try to make protocols where you don’t need to reveal everything. Confidential transactions is one way. Mimblewimble accomplishes the same thing.

host: Could mimblewimble be applied to bitcoin or be added as a layer on top? Would it instead operate as a potential competing cryptocurrency?

PW: So, it works very differently from how bitcoin works. Introducing mimblewimble into bitcoin in a backwards-compatible way would be a difficult exercise. It may not be impossible, but it would be hard. I think the way if people were experimenting with this, I would expect it to be an experimental separate chain or sidechain. In a sidechain we would not introduce a new cryptocurrency but it would be a separate chain. There are some downsides to mimblewimble. In particular, it does not have a scripting language.

host: For some who do not know the implications of that, why is that a bad thing?

AP: So, something we have talked about on the show before is the lightning network and payment channels, which are a way to do transactions off-chain. In mimblewimble, those are not supported out of the box, although I think I can actually get payment channels to work. Bitcoin can do payment channels with its scripting language. Another big one is cross-chain atomic swaps, where I could for example trade with Brian some dogecoin for some bitcoin. Dogecoin and bitcoin have nothing to do with each other, but we can still trustlessly swap these using a cross-chain atomic swap, because both of them have a very expressive scripting language, so you can find all sorts of trickery that lets you do things like this, and in mimblewimble you cannot. And I don’t see any way to make mimblewimble able to do that, actually. And there’s lots of other examples I could think of. Bitcoin can do a lot of stuff beyond just passing money between people.

PW: Yeah, the nice thing about bitcoin’s scripting language, and other cryptocurrencies with expressive scripts, is that people end up doing things with it that it was not designed for. Nobody was thinking about cross-chain atomic swaps and lightning when bitcoin was invented, and yet they require almost no changes to the protocol, at the scripting level at least. So having a relatively expressive language in transactions allows much easier experimentation, by doing things it was not designed for. This would be much harder in mimblewimble. It’s basically just for sending money. It could do things like multisignature and a few others, but it’s much more limited in its possibilities.

BD: Could it do 2-of-3 multisig?

PW: Yes. Any of them. Larger ones require a lot of interaction between the participants, in worst case scenario I think exponentially so. Things like bitcoin, yes definitely. If we’re just talking about multisig on a small scale like bitcoin does, yes. It doesn’t go as far as what we can do with Schnorr or key tree signatures (see key tree signatures transcript).

((58min 7sec))

host: So just to clarify then, because bitcoin has an expressive scripting language, it has a lot of versatility compared to mimblewimble’s narrow scope? It doesn’t have this sort of flexibility to be adapted to other things?

PW: Yes.

host: So that’s a good thing to know, not that it would be a bad thing if it was otherwise. So it’s not in direct competition with bitcoin, it can share the space with bitcoin.

PW: Yeah there’s an argument to be made that an expressive scripting language is sort of an inherent privacy and fungibility risk to the system, in that say tomorrow AwesomeWallet.com opens and they might be the only one in the bitcoin space that uses 3-of-7 multisig, and now you can identify every transaction on the blockchain that uses that wallet because they are the only 3-of-7 multisig users. A scripting language is very neat to play with, but it has a privacy downside. Mimblewimble takes this to the other side where you have very good privacy but at the expense of no other features any more.

host: Just to clarify, what you’re saying is that it’s a downside because the ledger shows….

PW: The scripts go into the blockchain. So if you want a transaction to be indistinguishable from another one, and you use a different script that nobody else is using, then you have already failed.

host: Oh okay got you.

PW: Go ahead.

BD: There was some talk about zcash during our crypto news minutes earlier. How does this crypto compare? Would you trust this more or less than the crypto that underlies zcash?

PW: So the crypto that zcash relies on is SNARKs. Andrew knows this better than I do. It’s pairing based but it requires a trusted setup. There’s a ceremony where a number of people need to come together and create a private key for the system itself, and then destroy it. Anyone who has that private key can do anything they want in the system. This is called a trusted setup; it’s something that zcash has, and mimblewimble does not require a trusted setup.

(1h 1min)

BD: Are there any signatures in mimblewimble?

host: Sounds like it needs some thought to answer. We can answer this after the break. Alright guys welcome back. Let’s get back to our guests. We have both of the Chuck Norrises of bitcoin cryptography, Pieter Wuille and Andrew Poelstra. So we’re talking about mimblewimble, which can help maximize privacy. Brian, you had a question for one of the guys as we went out?

BD: There was some talk in the break about whether this uses signatures. I guess that’s not an easy question.

PW: Sure. Yes, it definitely uses digital signatures but more in an implicit way. There are no OP_CHECKSIG and there’s no scriptSig in mimblewimble. In bitcoin, signatures are very explicit. You say “I expect a signature here” and then you verify it. In mimblewimble, they are part of the transaction structure. I guess this is not a useful answer. I guess the answer is yes.

host: Is there a way you can elaborate and clarify on that to explain that better, or is it a deep thorny issue?

BD: So it maintains a state and then cryptographic operations which are similar to the bitcoin signatures that we all know and love, transform from one state to another state however.. that’s not really.. you can really go from the first state to the last state without all the things in between it, and all the morphs are squished together in a single.. to go from the beginning to the end very quickly?

AP: Here’s one way to think about it. In bitcoin, everything uses digital signature. What a digital signature does is say that someone authorizes a state transition. To spend a coin, there is a key related to that coin, you sign a message saying this coin is no longer this coin, but now there’s a new coin and it belongs to someone else with a different key that could authorize its future transfer. And that’s how you sign stuff. With these signatures, they just take some arbitrary data and sign it. They authorize something. There’s no mathematical connection between the input and the output in a normal signature scheme. What mimblewimble does is it does make the inputs and outputs mathematically related but in a way where you can’t actually make a transaction unless you know a secret key. So, I mean, this is kind of a signature in some sense.

PW: Maybe important to add is that there are no addresses in mimblewimble.

AP: Oh yeah, that too.

PW: You just talk directly to the receiver and you jointly construct the transaction as you are talking. So there are private keys, public keys and signatures involved there, but it’s not like I create an output to this address and to spend this I sign off using that address. It’s deeper, I guess, inside the engine.

host: How do I know how many mimblewimble coins I have then, if I don’t have an address?

PW: Ah. Well, basically you want to receive some coins and you construct the part of the transaction with the amounts in it, and then you give it to the sender for them to create the transaction. You know how much is in there because that’s what you put there.

BD: Okay so is there no concept of change anymore? or miner fees?

PW: Oh yeah, there’s still definitely …

AP: Imagine I am trying to send Brian some money. When we jointly create our transaction, I basically make a transaction that’s got a bunch of inputs, and some change, and Brian then adds to that transaction some outputs that he created of the value that he wanted. And then we can also leave some value sitting on the table there. We send the transaction out, and then anybody can add an output and take the remaining value. So this means miners can take their own fees, more or less. At each stage of creating a transaction, you create the part of the transaction that you want to authorize and that you know how to create, and then the other party, I guess the recipient, tacks on some outputs that they know the value of, that they can take ownership of, and other nodes and miners can take fees and any floating value is – you have to explicitly say what the floating value is, and they can take some of that, and they wont be able to make the transaction add up if they take more than is available. So there’s this sort of neat thing where everybody receiving coins can get to make their part of the transaction, basically.

host: I think a lot of this is still cooking in my brain. So these transactions between two people are still broadcasted to the network?

BD: You need to broadcast the transaction and the leftover fees that someone can then calculate. Would the transaction be valid if it wasn’t exactly inputs and outputs? if the balances didn’t match?

AP: No.

PW: It has to match exactly: the amounts in the inputs must be exactly equal to the amounts in the outputs plus the fee that you declare. If the numbers are not exactly the same, then the transaction is irrevocably invalid.
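
As a toy sketch of that balance check, here is the same arithmetic in Python with integers modulo a prime standing in for Pedersen commitments on an elliptic curve. It is neither hiding nor binding, so it is not the real construction; it only illustrates how the homomorphic commitments to inputs, outputs and the declared fee have to cancel out.

```python
# Toy model of the confidential-transaction balance check: inputs must equal
# outputs plus the declared fee. Real mimblewimble commits to values on an
# elliptic curve; here the "group" is just integers mod a prime, so this is
# NOT secure, it only demonstrates the arithmetic.
P = 2**61 - 1          # toy group modulus
G, H = 3, 7            # stand-ins for the two curve generators

def commit(value, blinding):
    return (value * H + blinding * G) % P

def balances(input_commits, output_commits, fee, excess_blinding):
    # Sum(outputs) + fee*H - Sum(inputs) should commit to the value zero,
    # i.e. equal excess_blinding*G, where excess_blinding is the net blinding.
    lhs = (sum(output_commits) + fee * H - sum(input_commits)) % P
    return lhs == (excess_blinding * G) % P

# A 10-coin input paying a 7-coin output, 2 coins of change, and a 1-coin fee.
inp  = commit(10, blinding=5)
out1 = commit(7, blinding=11)
out2 = commit(2, blinding=13)
print(balances([inp], [out1, out2], fee=1, excess_blinding=11 + 13 - 5))  # True
print(balances([inp], [out1, out2], fee=2, excess_blinding=11 + 13 - 5))  # False
```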

host: So obviously, as the transaction goes through, how long does it take?

PW: Milliseconds.

((1h 12min 30sec))

host: So Pieter, you were saying during the break about what “mimble wimble” actually means in the Harry Potter books.

PW: I actually learned this from Andrew.

host: Well Andrew, you go ahead then. You’re the source.

AP: Well I also learned it this week, but not from anyone in this room.

PW: Sourcer.

host: Ahh that was good. We always love good puns on the crypto show. You are the number one punster for the show. Anyway, go ahead.

AP: What mimblewimble does in Harry Potter is that it prevents someone from saying some sort of information, like any sort of information. This was used in the backstory, before the books even happened, when Harry’s parents were killed. They were part of some group called the Order of the Phoenix and they were being hidden in some house whose location was secret. And the members of the Order of the Phoenix used this mimblewimble spell on each other to make sure that none of them could reveal the location of this house. So they were magically unable to give away the secret location.

host: That’s interesting. So they had a secret society, and instead of blood oaths to keep the secret, they cast spells on each other.

BD: That’s a perfect name for the protocol.

PW: I really love how earlier this week we had totally serious discussions with sentences like “So what Voldemort told us ….”

host: ((laughter)) And that’s the beauty of the crypto space. All the brilliant minds still love their Harry Potter and bringing in the Harry Potter terminology. In France they called Voldemort what?

AP: I might be saying this wrong. It’s Tom elvis… which is an anagram of Voldemort.

host: Oh.

AP: There’s a scene in the second movie or the second book where Voldemort conjures up some letters for “I am lord Voldemort” and the letters re-arrange themselves and it turns out to be an anagram for “Tom M Jedusor”. Sorry, he says, “Tom Marvolo Riddle” and he rearranges these letters to “I am Lord Voldemort”. So in the French books they needed a different name that would anagram to “je suis Voldemort”, which is “I am Voldemort” in French.

PW: And that’s what the author of the mimblewimble paper called himself.

host: That’s awesome. That’s amazing. I love the French. So back to serious matters. Go ahead.

BD: And a rant.

host: Uh oh.

BD: So the state itself is what we advance slowly over time. There was a proposal called “flipping the chain” way back when, perhaps attributed to Alan Reiner, only using UTXO commitments so it was the state of the chain, which is what Ethereum is right now… and now on to my rant about Alan Reiner. With Armory Wallet… he’s shifted to the dark side, he’s working for IronNet Cybersecurity, which is Keith Alexander’s cybersecurity company; Alexander was director of the NSA for a decade until 2014. So this kind of falls in with a lot of other wallet providers who have turned to the dark side. Jan Moeller is one of the founders of Chainalysis, he’s the original founder of Mycelium.

host: Really?

BD: Well not Mycelium itself. But the original author of Bitcoin Spinner which turned into mycelium. So the dark side is strong with us in the wallet industry.

host: So it’s a project to deanonymize what, you didn’t finish.

BD: Bitcoin transactions.

host: Really? And so what is the purported advantage of benefit to that?

BD: Well if you wanted to track flows of money…

host: Well fair enough. But I mean if you’re not the government basically….

PW: A government can force businesses to use Chainalysis and other services, to make sure money received isn’t stolen or something. And my attitude towards such services is that they are great because they give us a target. Our job in developing this system is to make their job impossible.

host: You mean with mimblewimble?

PW: As an example, yes. There are various other things we can do. You can’t blame people for trying things that are possible. They show us what is possible so that we have a target to improve upon.

host: So they give you more purpose. I like that.

BD: As a corollary to this though, one of the benefits of bitcoin is that I can provably show that I have paid someone some money, cryptographically proven. If I make one of these transactions will I be able to cryptographically show that I paid some money to someone even though it was anonymous?

AP: Yes. Yep, you can. The way that this is done, say that when we’re talking to build this transaction, you need to create an output for this transaction. Part of this output is a part of confidential transactions, called a range proof, which you need to make on that output. And to make this, you need to use the secret key for that output. As soon as I see that output, I know that you were in control of that key. When I build the whole transaction, I know that the output is your output, because I was with you when we were creating the transaction. And then we send out the whole thing, this sort of built together transaction, and what shows up in the block is a whole bunch of inputs, whole bunch of outputs, lots of merged transactions that you can’t separate. But I can see that one of the outputs in that merged transaction is the one that you gave me when we were talking (interactively) in the blockchain and we can both see that and we both have the transcript of our conversation.

BD: Could I prove it to a third-party as a sender?

PW: Yes. You can show the transaction and then show it was linked in the chain. This is usually the tradeoff that I think is optimal where the public … where you always have the ability to prove to an auditor that something happened. It’s voluntary.

host: That makes sense. As we talked about off air, I was wondering if you guys have any opinions about the memo from a year ago that the NSA put out, basically discouraging people from upgrading to elliptic curve crypto from RSA if they haven’t done so already, sort of implying that it wasn’t worth it. The context was that the NSA expected advancements in quantum computing to make such a switch irrelevant. I believe that is what they were implying. I am curious what you guys thought about this. As Brian was explaining, there were two cryptographers that explored a game theoretic– different scenarios under which the NSA might make such an announcement, and what motives they might have had for that. Do you guys have comments on that?

AP: On the NSA? So my recollection of the NSA announcement, I had forgotten or didn’t read the part about RSA. There was a recommendation as part of this to ..

PW: I just looked it up. This is from an article. I don’t know the exact quote. Complementary to a decision to move away from elliptic curve crypto, also what the NSA … p256 .. blah blah blah… uh.. more resistant to advancements in quantum computing. So the claim is that there’s some belief that especially the smaller elliptic curves in the future could be vulnerable to quantum computing. Whether that’s the real reason, I don’t know.

host: So maybe just go to bigger curves and that would ..?

PW: With a few people we visited the Stanford campus here a week ago and met with Dan Boneh, who is an expert in elliptic curve crypto and other things. And he basically confirmed that even if quantum computers were real, there’s a chance that they would not be able to attack larger elliptic curves while still being able to attack smaller ones, at least that was my understanding from what he said.

host: Would the reason for that be, in the same sense as for any kind of cryptographic breaking, that even with quantum computers the bigger curves would take too much time?

PW: The usual assumption is that if we had a large fast quantum computer, then all of elliptic curve crypto would break instantly. This is not how things work in engineering. We don’t go from no quantum computer to one big fast stable usable machine. But basically what I learned there with Dan Boneh is that there’s some theoretical understanding that quantum computers in practice might be able to completely break the security of smaller curves while not being able to break larger ones.

BD: Is secp256k1 that bitcoin uses, is that a small one or a big one?

PW: It’s a small one.

host: Oh, well….

PW: Well as far as I know the largest quantum computer is like 5 qubits right now. And we would need a system with a few thousand qubits to apply this attack.

BD: UT Austin had a recent big win in the quantum computing field. Professor Scott Aaronson has decided to move down and teach at UT Austin and set up a quantum computing laboratory at the university. They wouldn’t give his wife tenure at MIT along with… he already had tenure, so they offered… he wanted his wife to be with him, and so UT Austin gave both of them tenure.

host: Good for UT. That sounds awesome. Screw you MIT. That’s interesting. … That’s not to say that the government does or does not have quantum computing… I have been fascinated by Fermat’s last theorem ever since I was in high school. I read a book a while back about proving Fermat’s last theorem and it had to do with elliptic curves. In y’alls education in cryptography, was the proof of Fermat’s last theorem by Andrew Wiles significant at all in the progress of cryptography?

AP: The short answer is no. The long answer is that I have tried to understand that proof many times. It starts with elliptic curves stuff. The whole history of Fermat’s last theorem involves elliptic curves. This is why elliptic curve groups were invented, which allows us to do fancy algebra on elliptic curves. All this machinery was invented largely to deal with Fermat’s last theorem and other algebraic equations we didn’t understand.

PW: Oh I didn’t know that.

AP: Yep that’s true. And actually in the first chapter of most any general algebra book on elliptic curves, you will find a few special cases of Fermat’s last theorem that have simple solutions using elliptic curves. But for the most general form, the proof starts with the curve and then it builds higher-level stuff on higher-level stuff probably 100 times, and I can’t even begin to understand what I’m unable to begin to understand about that proof.

PW: It took him 7 years to write it, right. It’s not something you would expect to like read over and say “oh sure that’s obvious”.

host: So if it took 7 years to write, it could take 7+ years to read. The book said that, right. Lots of mathematicians had difficulty grasping it because it was so arcane and deep and so involved, so that’s interesting, it’s not surprising that you go down one rabbit hole and not everyone can necessarily follow you without that same amount of time to go down that rabbit hole. Interesting response. Yeah it’s something, I got to AP Calculus and that’s about it. The higher-order math and cryptography has always fascinated me. So I was really curious to hear what you thought about that, so it’s an interesting response, thank you. Great discussion on mimblewimble. We have a minute left. It’s been a pleasure to have you guys on. Thank you.

AP: http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble.txt and https://www.reddit.com/r/Bitcoin/comments/4vub3y/mimblewimble_noninteractive_coinjoin_and_better/ and https://www.reddit.com/r/Bitcoin/comments/4woyc0/mimblewimble_interview_with_andrew_poelstra_and/ and https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/012927.html

and https://www.reddit.com/r/Bitcoin/comments/4xge51/mimblewimble_how_a_strippeddown_version_of/

and https://www.reddit.com/r/Bitcoin/comments/4y4ezm/mimblewimble_how_a_strippeddown_version_of/

\ No newline at end of file diff --git a/mit-bitcoin-expo/mit-bitcoin-expo-2017/scaling-and-utxos/index.html b/mit-bitcoin-expo/mit-bitcoin-expo-2017/scaling-and-utxos/index.html index 3aebb6fbcd..6fb1521167 100644 --- a/mit-bitcoin-expo/mit-bitcoin-expo-2017/scaling-and-utxos/index.html +++ b/mit-bitcoin-expo/mit-bitcoin-expo-2017/scaling-and-utxos/index.html @@ -10,4 +10,4 @@ < Scaling and UTXO commitments

Scaling and UTXO commitments

Speakers: Peter Todd

Date: March 4, 2017

Transcript By: Bryan Bishop

Category: Conference

Media: -https://www.youtube.com/watch?v=0mVOq1jaR1U&feature=youtu.be&t=1h20m

https://twitter.com/kanzure/status/838481311992012800

Thank you. I’ll admit the last one was probably the most fun other than. On the schedule, it said scaling and UTXO and I threw something together. TXO commitments were a bold idea but recently there were some observations that make it more interesting. I thought I would start by going through what problem they are actually solving.

Like David was saying in the last presentation, running a full node is kind of a pain. How big is the archival blockchain data? Earlier this morning on blockchain.info, it was 125 GB. They don’t have a good track record, who knows, but it’s roughly the right number. History doesn’t disappear, it keeps growing. In this case, as in David’s talk, we were able to mitigate that problem with pruning: if you want to run a pruned node, you don’t have to keep all that 100 gigs on disk all the time. Currently there are some issues with pruned nodes where we don’t have the infrastructure to do initial synchronization from each other. But long story short, it’s pretty easy to see how we can solve the archival history problem by essentially splitting it up and letting people contribute disk space.

However, pruned nodes still have the problem of the UTXO set, which is that if your node wants to go verify someone spending a coin, you must have a copy of that data. Otherwise, how do you know that the coin is real? Even a pruned node is carrying around 50 million UTXOs right now. That number, while it can go up and down a bit, fundamentally will always go up in the long run, because people will lose private keys. That’s enough to guarantee that the UTXO set size will continue to grow. 50 million translates to about 2-3 GB of data, which doesn’t sound like a lot, but remember that it can grow just as fast as the blockchain does. That can grow by 50 GB per year. If we want to go scale up block size, and segwit does that, the amount that the UTXO set grows can go up too. So it’s not necessarily a big problem right now, but in the future having the UTXOs around can be a problem.

When you look at the UTXO set, if you’re going to process a block quickly, you need to have the UTXO set in reasonably fast access storage. If you have a block with 5000 inputs, you need to do 5000 queries to wherever you’re storing the UTXO set… so you know, as that goes up, the amount of relatively fast random access memory you need goes up so that you can have decent performance. You can run this on a hard drive with all the UTXO data, and the node will run a lot slower, and that’s not good for consensus either. In the future we’re going to have to fix this problem.

How is the UTXO data stored anyway? With this crowd, you’re all thinking about a merkle tree. The reality is that this is an oversimplification of the leveldb architecture. Basically everything in existence that stores data has some kind of tree: you start at the top and you go access your data. You can always take something that uses pointers, hash it, and convert it into what we usually call a merkle tree. The other thing to remember with UTXO sets is that not all the coins are going to be spent. In this diagram, suppose the red coins are the ones that people are likely to go spend in the future and they have the private keys. And the grey ones, maybe they lost the private keys. If we’re going to scale up the system, we have some problems there. First of all, if we’re going to scale it, not everyone wants to have all that data. So if I go and hash that data, you know, I can go and extract proofs, so I can go and outsource who has a copy of this. For instance, go back a second, if you imagine all this data being on my hard drive and I want to not have it, I could hash it all up, throw away most of it, and if someone wants to spend a coin, we can tell the person: hey, here’s all the stuff you didn’t have, you know it’s correct because the hashing matches, now you can update your state and continue block processing.

With lost coins, the issue is that, who has this UTXO set data? How are we going to go and split that up to get a scalability benefit out of this? And, where was I…. I mean, so the technique that I came up with a while ago was why don’t we go and make this insertion-ordered? And what’s interesting about insertion-ordered… imagine the first transaction output ever created ends up at position 0 on the left, and our not-so-used coin, we have 20 inputs here maybe, a lot of them are lost… but because people lose their keys over time, we could make the assumption that entries in the UTXO set that are older are the ones that are less likely to be spent. Right? Because obviously people are going to go spend their coins on a regular basis. And the freshly created coins are most likely to correspond to coins that someone is about to actually spend. The grey ones are dead. But sometimes maybe someone spends an old coin from way back when. But first and foremost, if you’re insertion-ordering, what happens when you add a new coin to the set? What data do you need to do that? If we go back to UTXO set commitments, if we’re storing that by the hash of the transaction and the output number, that’s essentially a randomly distributed key space because txids are random. So I could end up having to insert data into that data structure almost anywhere. Whereas if you do insertion-ordering, you only need basically the nodes on the right. Right? Because I always know what part of the big data set I’m going to change and add something new to it.
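
As a minimal sketch of why insertion-ordering helps, here is a merkle-mountain-range-style accumulator in Python. This particular structure is only assumed for illustration and differs in its details from the actual TXO commitment design, but it shows the property being described: appending a new entry only ever touches the handful of nodes on the right-hand edge.

```python
# Insertion-ordered accumulator sketch: appending needs only the O(log n)
# "peaks" on the right-hand edge of the tree, never its interior.
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

class TxoAccumulator:
    def __init__(self):
        # peaks[i] holds the root of a perfect subtree of 2**i leaves, or None.
        self.peaks = []
        self.size = 0

    def append(self, txo: bytes):
        carry = h(txo)
        level = 0
        # Merge equal-sized subtrees, exactly like binary addition with carry.
        while level < len(self.peaks) and self.peaks[level] is not None:
            carry = h(self.peaks[level], carry)
            self.peaks[level] = None
            level += 1
        if level == len(self.peaks):
            self.peaks.append(None)
        self.peaks[level] = carry
        self.size += 1

    def root(self):
        # Bag the peaks together into a single commitment hash.
        acc = b""
        for peak in self.peaks:
            if peak is not None:
                acc = h(peak, acc)
        return acc.hex()

acc = TxoAccumulator()
for i in range(5):
    acc.append(b"txo-%d" % i)
print(acc.size, acc.root())
```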

Which also means that in addition to this, we have a cache …. we can go pick some recent history and keep that around, and the other stuff you discard it and now your disk space usage goes down. Just like in the UTXO commitment example, someone could still provide you that extra data on demand. You threw away the data, but you still had verified it. Just like bittorrent lets you download a file from people you don’t trust. So we can still get spend data when needed. Oops, where is it, there we go.

When that guy finally spends his txo created a year ago, he could provide you with a proof that it is real, and you temporarily go and fill in that and you wind up being able to go record that. Now, here’s an interesting observation though.
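
Here is a sketch of what "filling in pruned data" could look like: the spender of the old txo supplies the sibling hashes you discarded, and you hash your way back up to a root you still remember. This is illustrative Python, not Bitcoin Core code.

```python
# Verifying a merkle branch for a txo you pruned: the spender provides the
# sibling hashes, and you recompute up to a root hash you kept. No miner is
# involved in any of this.
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def verify_branch(leaf, branch, root):
    # branch is a list of (sibling_hash, sibling_is_left) pairs, leaf to root.
    acc = h(leaf)
    for sibling, sibling_is_left in branch:
        acc = h(sibling, acc) if sibling_is_left else h(acc, sibling)
    return acc == root

# Build a tiny 4-leaf tree once (when the blocks were processed), keep the root.
leaves = [b"txo-a", b"txo-b", b"txo-c", b"txo-d"]
l = [h(x) for x in leaves]
n01, n23 = h(l[0], l[1]), h(l[2], l[3])
root = h(n01, n23)

# Years later, the owner of "txo-c" supplies the pruned siblings as a proof.
proof = [(l[3], False), (n01, True)]
print(verify_branch(b"txo-c", proof, root))   # True
```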

If we’re going to implement all this, which sounds good, we can run nodes with less than the full UTXO set, does this actually need to be a consensus protocol change? Do miners have to do this? I recently realized the answer is no. We’ve often been talking about this technique in the wrong way. We think of this as TXO proofs. Proofs that things exist. In reality, when you look at the details of this, if we’re basing this on standard data structures that you otherwise build with pointers, we’re always talking about something where data you pruned away and discarded, that’s not really a proof. You’re just filling in some details that are missing from someone’s database. I’m not proving that something is true. I’m simply helping you to go prove it for yourself. Which then also means, why do we care that miners do any of this? Why can’t I just have a full node that computes what the TXO set commitment would be, computes the hashes of all these states in the database, and then among my peers, follows a similar protocol and gives each other the data that we threw away. If I want to convince your node that an output from 2 years ago was valid, I am going to give you data that you probably processed at some point but long since discarded. I don’t need a miner to do that. I can go do that just between you and me. If miners do this, it’s irrelevant to the argument.

We could deploy this without a big political fight with guys scattered around the world that might not have our best interests in their hearts. This makes it all the more interesting.

The other interesting thing is that if this is not a consensus protocol change, it can be a lot faster. People actually implemented…. Mark Friedenbach implemented a UTXO set commitment scheme, where he took the set and hashed it and did state changes, and he found that the performance was kind of bad because you’re updating this big cryptographic data structure every time a new block comes in and you have to do it quickly. Well, if we’re not putting this into consensus itself, and we’re just doing this locally, then my node can compute the intermediate hashes lazily. So for instance we’re looking at our recently created UTXOs cache example… I could keep track of the tree, but I don’t have to re-hash anything. I could go treat it like any other pointer-based data structure and then at some point deep in the tree, on the left side, maybe I can keep some of the hashes and then someone else can fill me in on the details later. A peer would give me a small bit of data, just enough to lead to something that in my local copy of the set has a hash attached to it… and if I add that to the data I already knew about, it doesn’t matter that I didn’t bother aggressively rehashing everything all the time. I have implemented this and I’m going to have to go see if this has any performance improvements.
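
A sketch of that lazy-hashing idea, under the same caveat that this is illustration rather than the implementation being described: treat the set as an ordinary pointer-based tree, mark nodes dirty when they change, and only recompute hashes when a commitment is actually needed.

```python
# Lazy merkle hashing: mutate the tree freely during block processing and
# only rehash the dirty path on demand, instead of after every block.
import hashlib

class Node:
    def __init__(self, left=None, right=None, leaf=b""):
        self.left, self.right, self.leaf = left, right, leaf
        self._hash = None            # cached hash; None means "dirty"

    def invalidate(self):
        self._hash = None

    def hash(self):
        if self._hash is None:       # recompute lazily, children first
            if self.left is None and self.right is None:
                data = self.leaf
            else:
                data = self.left.hash() + self.right.hash()
            self._hash = hashlib.sha256(data).digest()
        return self._hash

a, b = Node(leaf=b"old txo"), Node(leaf=b"new txo")
root = Node(left=a, right=b)
print(root.hash().hex())     # forces hashing of the whole tree once
b.leaf = b"updated txo"      # local mutation during block processing
b.invalidate(); root.invalidate()
print(root.hash().hex())     # only the dirty path is rehashed; a's hash is reused
```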

And finally, the last thing I’d point out with this is that setting up new nodes takes a long time. David talked about how many hours are spent re-hashing and re-validating and checking old blockchain data and so on. If you have a commitment to the state of the transaction output set, well, you could go and get the state of that from someone you trust. We recently did this in Bitcoin Core version 0.14 where we added something called “assumevalid”. My big contribution to that was that I came up with the name. With that command line option, we assume a particular blockhash is valid. Rather than rechecking all the signatures leading up to that, which is a big chunk of time of the initial synchronization, we assume that if the blockchain you’re synchronizing ends in that particular blockhash, then we skip all the signature validation. You might think this is a terrible security model, but remember the default value is part of the Bitcoin Core source code. And if you can’t trust that the source code isn’t being malicious, well, it could do anything. A 32 byte hash in the middle of the source code is really easy to audit by just re-running the process of block validation, so that’s one of your least concerns among potential attacks; if that value is wrong, that’s a very obvious thing that people are going to point out. It’s much more likely that someone distributing the code would go and make your node do something bad in a more underhanded way. I would argue that assumevalid is a fair bit less dodgy than assuming miners are honest.

If we implement TXO commitment schemes on the client side without changing the consensus protocol, and you take advantage of it by having a trusted mechanism to assume that the UTXO state is correct, that’s actually a better security model than having miners involved. In BU, you assume that if miners say something is true then it is true… But I would much rather know who I am trusting. It could be part of the security model I already have: the software that I am trusting and auditing. Why add more parties to that? And with this model, what would running a bitcoin node look like? You download your software, it has an assumed-valid TXO state in it, then you ask your peers to fill in the data you’re missing. Your node would start immediately and get full security in the same trust model that the software had to begin with. I think this would be a major improvement.

I set up some nodes on Scaleway and it took about 5 days for them to get started. Maybe we could go implement this in Bitcoin Core in the next year or two. Thank you.

http://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2015/peter-todd-scalability/

http://diyhpl.us/wiki/transcripts/mit-bitcoin-expo-2016/fraud-proofs-petertodd/

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html

https://s3.amazonaws.com/peter.todd/bitcoin-wizards-13-10-17.log

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2013-November/003714.html

http://gnusha.org/bitcoin-wizards/2015-09-18.log

\ No newline at end of file diff --git a/mit-bitcoin-expo/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency/index.html b/mit-bitcoin-expo/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency/index.html index 6395227657..f8bbb43598 100644 --- a/mit-bitcoin-expo/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency/index.html +++ b/mit-bitcoin-expo/mit-bitcoin-expo-2018/improving-bitcoin-smart-contract-efficiency/index.html @@ -11,4 +11,4 @@ Tadge Dryja

Transcript By: Bryan Bishop

Tags: Taproot, Contract protocols, Dlc

Category: Conference

Media: -https://www.youtube.com/watch?v=i-0NUqIVVV4&t=53m14s

https://twitter.com/kanzure/status/980545280973201410

Introduction

I will be talking about discreet log contracts, taproot, and graftroot. Not in that order, the order will actually be taproot, graftroot, and then discreet log contracts.

Quick intro: I’m Tadge Dryja. I work here at MIT down the street at Digital Currency Initiative. I was coauthor on the lightning network paper, and I worked on lightning. I also was the author of a paper introducing discreet log contracts. It’s a more recent thing that looks a lot like lightning. I’m going to talk about taproot, graftroot and discreet log contracts.

I only have about 20 minutes and this is only meant to get you introduced to it, please bug me later if you would like even more details.

Smart contracts

So, what is a smart contract? There’s no real good answer to that question. A lot of people think one thing is a smart contract while another isn’t or something. You could say all outputs in bitcoin are smart contracts because they use bitcoin script, even pay-to-pubkeyhash with OP_DUP OP_HASH160 and OP_EQUALVERIFY… But in my view, pay-to-pubkeyhash is not really a smart contract. You’re just sending money. It’s functionally the same as using cash.

Multisig, is multisig a smart contract where you have multiple signatures and maybe some kind of threshold? Yeah, maybe, kind of. That’s iffy. But it’s sort of novel, it’s not how cash works, it’s kind of a smart contract.

What about OP_CHECKLOCKTIMEVERIFY where there’s an output and it can’t be spent until after a time has passed? Okay, that’s getting there. That’s something.

And then there’s stuff like the lightning network where the scripts in lightning are using multisig, OP_CHECKLOCKTIMEVERIFY and revealing secrets to revoke old state. I’d say, yeah, that’s a smart contract. It’s a fairly limited one compared to what people are looking at, but this is a smart contract with conditional payments and things like that.

Smart contracts in bitcoin

How do we do smart contracts in bitcoin? It’s not the same model as in ethereum. You have– there’s basically two output types that you can see on bitcoin today. There’s pay-to-pubkeyhash (P2PKH) and pay-to-scripthash (P2SH). In P2PKH, there’s a key, you can spend to the key, pretty straightforward.

In P2SH, you take some kind of script or program or a predicate, you hash it, and you can send to that. When you want to spend it, you have to reveal what the script actually was.

So you have these two different output types in bitcoin. And these two different output types look different, and that’s bad. Why is that bad? Well, in bitcoin, you want fungibility and uniformity. You want it so that when Chainalysis and other similar companies that when they look at the blockchain that they get nothing. Ideally they just have nothing and just see random numbers and random outputs with uniform distribution of addresses. So you want to make everything uniform. One idea to do this is something called taproot.

Taproot

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/

Taproot is an idea from Greg Maxwell, who is somewhat of a bitcoin guru. He knows a lot about bitcoin. He wrote a post introducing taproot a couple of weeks ago. It merges p2sh and p2pkh. It does this in a very annoyingly-simple way where we started wondering why nobody thought of this before.

You start by making a public key, the same way you do to create your addresses, and say you also have a script S. What you do is you compute a combined key C and you send to that taproot output. I skipped the elliptic curve operations, but if you’re familiar with any of this stuff, this is how you turn a private key into a public key: you multiply by G. You can add public keys together, it’s quick and really simple to do. And if you add the public keys, then you can sign with the sum of the private keys, which is a really cool detail. So say you have a regular key pair, and you also have this script, and you perform the taproot equation: you hash the script and the public key together, you use that hash as a private key, turn it into a public key, and add that to the existing public key. This allows you to have both a script and a key squished into one thing. C is essentially a public key, but it also has a script in it. When you want to spend from it, you have the option to use the key part or the script part. If you want to treat it as P2PKH, where you’re a person signing off with this, then your private key will just be your regular private key plus this hash that you computed, which you know. So you sign as if there were no script, and nobody will be able to detect that there was a script; it looks like a regular signature. But if you want to reveal the script, you can do so, and then people can verify that it is still valid, and then they can execute the script. It’s a way of merging p2pkh and p2sh into one.
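
As a rough illustration of that construction, here is a toy Python version with integers modulo a prime standing in for curve points. It is not secure and not the full taproot proposal; it only shows the algebra C = P + H(P || S)*G and why the owner of the original key can still sign for C without revealing the script. The script string is a made-up placeholder.

```python
# Toy taproot arithmetic: C = P + H(P || S)*G, and a key-path spend works
# because the owner knows the tweaked private key priv + H(P || S).
import hashlib

P_MOD = 2**61 - 1
G = 5                                   # stand-in for the curve generator

def point(privkey):                     # "multiply by G"
    return (privkey * G) % P_MOD

def h_int(*parts):
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % P_MOD

priv = 123456789
pub = point(priv)
script = b"<some fallback script, e.g. a timeout clause>"   # placeholder

tweak = h_int(pub.to_bytes(32, "big"), script)
taproot_output = (pub + point(tweak)) % P_MOD      # C = P + H(P || S)*G

# Key-path spend: the tweaked private key still matches the output key,
# and nothing about the script is revealed on chain.
tweaked_priv = (priv + tweak) % P_MOD
assert point(tweaked_priv) == taproot_output
print("key-path spend works, script stays hidden")
```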

It’s nice because in many cases in smart contracts, there’s a bunch of people getting together for a smart contract. If all of those people sign off on the outcome, then you don’t really care what the smart contract was. If everyone agrees, then don’t use the smart contract. Everyone just agrees. In real life, you enter into legal agreements all the time and you don’t go to court, there’s no disagreement so you don’t need to use the contracts themselves… the contracts are really only useful in the event of a disagreement, but they still need to be there.

…. ((stream died)) …

Graftroot

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/

.. or I sign a script and then execute that script. The really cool thing about this is that I can add scripts after I’ve received the money. I can say “it’s my money, OR it can be this 2-of-3 multisig” and I can sign off on that, and I can keep adding people into the contract after I’ve created it, without touching the blockchain at all. This is really cool. It’s in some ways like merkleized abstract syntax trees (MAST), which is a way of having a whole bunch of different scripts that could be executed. It’s simple, and the scaling is really great. You can sign a million different scripts with this key, and all million are different possible ways of spending the money, but there’s no additional blockchain overhead, it’s just 32 bytes because the 32 bytes you can sort of squish the signatures together.

The downside is that you have to sign. In taproot and p2sh, it’s only operating on public keys. Anyone can take your public key and send money to it, or anyone can take your public key and a script and combine those in the case of taproot. In the case of graftroot, there are signatures, and someone’s private key has to be online, so this doesn’t work as well with cold storage situations or offline wallets. But the upsides are really huge and you get these really cool scripting and scaling benefits.

I have 10 minutes left.

Discreet log contracts

http://diyhpl.us/wiki/transcripts/discreet-log-contracts/

https://adiabat.github.io/dlc.pdf

Discreet log contracts can use some of those scripts. This is an application, though. It looks a lot like lightning network. I came up with it from working on lightning network for years. I wondered if we could do discreet log contracts instead, and hopefully we can reuse 80-90% of the lightning network source code to do this.

The idea of lightning is that you have this multisig output, you keep making new transactions. In lightning, the most recent transaction is valid, and the older transactions are invalid.

In discreet log contracts, you have a different way of choosing which transaction is valid. You have this non-interactive oracle which can determine a valid transaction. I have a little graphic about this. Let me do the graphic first. So yeah, in lightning, you say, okay, we have this funding transaction, you have 10 BTC shared between Alice and Bob and they can keep sending them back and forth. And they update these amounts per each state, and they invalidate the previous state to make sure that neither of them can successfully broadcast the earlier states. They are really only able to broadcast the latest most recent state. If they try to broadcast the older states, then they lose everything. That’s what lightning does. But in discreet log contracts, both parties create all possible outcomes of the contract when they enter into the agreement. It looks similar in terms of transactions, with a multisig 2-of-2 at the top, and they make a million transactions descending from that. But in this case, they take an oracle’s key and they say there might be 3 different possible outcomes for tomorrow, and given the outcome as determined by the oracle, that will determine which of the millions of transactions or 3 different transactions is going to be the one valid transaction. The oracle’s signature is going to be used to endorse one specific transaction.

This uses Schnorr signatures. This all happens offline. We don’t need Schnorr signatures to be on the bitcoin network. The idea of a Schnorr signature is pretty straightforward. You have a pubkey, you make a temporary pubkey like k * G(r), and then you compute a temporary private key minus a hash of a message and R times the public key, and then to verify you multiply everything by G and see whether it equals what it should be. They give you (r, s) and you get this and here’s what you compute. In bitcoin, (R, s) is the signature. It’s ECDSA in bitcoin today not Schnorr. You get a point and a scalar and see if this relationship holds true and if it does then it’s a valid signature.

In discreet log contracts, – well, until now, the public key is a point on the curve and so on. But what if we keep the equations the same, but we say the pubkey is two points, and we pre-commit to the R value. We just move one thing to the beginning, when we’re sharing the public key. What’s interesting is that– normally, you don’t do this because this lets you only sign once. If you reuse the same R value, then people can find the private key. This happened on ps3 back in the day with geohot. I think blockchain.info had problems with people reusing R values. it makes your signatures one-time use. But it also allows you to figure out what the signatures are going to look like. If I know (A, R) and I can come up with the expected messages, I can compute the signature times G, and if we think of a signature as a private key, then I can compute what that public key would be. So this is a weird thing you can do where you can’t figure out the signature but you can figure out the pubkey version of the signature. You have to think of signatures as private keys. This lets you take this oracle’s signature and use it as a private key in your own transaction, and combine it with other existing keys.

It’s a third-party oracle model. In many other oracle models like in ethereum, generally the oracle has all visibility into the smart contract so they know what is going to happen if they say the price is one thing or the other. But in my model, the oracle might not have any idea that they are being used in this arrangement. Maybe smart contracts use their signatures maybe not, you can’t tell without being party to the discreet log contracts. The oracle wont necessarily know if anyone is using their services, and it might be difficult for them to make money, and it has good privacy properties.

There are use cases like currency futures where you can make contracts on the price of bitcoin that settles into bitcoin. It’s the opposite of CME and CBOE where they do cash-settled futures. You can also do a lot of other stuff like betting and gambling which I guess people will do. The downside is that you have to enumerate all possible outcomes, so some things it doesn’t work, like if there’s tens and tens of millions of possible outcomes then it might become unscalable. But if you have a small finite number of outputs, and it’s bounded, it’s based on how much data you can store, and the only thing going into the blockchain in all cases is just the first one transaction and maybe the spending transaction.

Q&A

https://www.youtube.com/watch?v=i-0NUqIVVV4&t=1h9m54s

Q: Delegated timing?

A: Yes, kind of, yes. Graftroot, you can, the issue with graftroot is that there is a root key that creates the scripts. You can make the root key interactive. But you can make a way where it doesn’t exist and it’s endorsed to script. You can delegate, you can say you have this key and you can sign off on a script where you have a key and you can sign. But do we make it recursive? Like a tree of delegation? It wouldn’t be hard to make it recursive, but last week at the Bitcoin Core dev meetup, people were arguing it might be crazy to allow that. It might allow indefinite delegation.

Q: Since it doesn’t have a dependency on bitcoin network, then it should be… Lightning network and passing a message to any blockchain…

A: So you’re using bitcoin or a similarly compatible network as a court, essentially. As long as everyone is getting along, it’s fine. Only the results get posted to the blockchain.

Q: It could be like…

A: Graftroot and taproot would be a soft-fork. Discrete log contracts, there’s no soft-fork needed, it looks, if you look at it, it looks the same as lightning. It could work today, I just have to code some of this. You don’t need Schnorr signature soft-fork– the non-interactive oracle uses Schnorr signatures, but it’s all offline, to compute private keys which we then use for ECDSA on bitcoin.

Q: …

A: The smart contracts in bitcoin script? There’s not a programming language for discreet log contracts. You enumerate all the outcomes, and you make a bitcoin transaction for all of those. You can look at this as a giant OR gate, circuit-wise.

Okay, thanks a lot. Ask questions to me later.

https://www.youtube.com/watch?v=i-0NUqIVVV4&t=53m14s

https://twitter.com/kanzure/status/980545280973201410

Introduction

I will be talking about discreet log contracts, taproot, and graftroot. Not in that order, the order will actually be taproot, graftroot, and then discreet log contracts.

Quick intro: I’m Tadge Dryja. I work here at MIT down the street at Digital Currency Initiative. I was coauthor on the lightning network paper, and I worked on lightning. I also was the author of a paper introducing discreet log contracts. It’s a more recent thing that looks a lot like lightning. I’m going to talk about taproot, graftroot and discreet log contracts.

I only have about 20 minutes and this is only meant to get you introduced to it, please bug me later if you would like even more details.

Smart contracts

So, what is a smart contract? There’s no really good answer to that question. A lot of people think one thing is a smart contract while another isn’t. You could say all outputs in bitcoin are smart contracts because they use bitcoin script; even pay-to-pubkeyhash is a script, with OP_DUP, OP_HASH160, OP_EQUALVERIFY and so on. But in my view, pay-to-pubkeyhash is not really a smart contract. You’re just sending money. It’s functionally the same as using cash.

Multisig, is multisig a smart contract where you have multiple signatures and maybe some kind of threshold? Yeah, maybe, kind of. That’s iffy. But it’s sort of novel, it’s not how cash works, it’s kind of a smart contract.

What about OP_CHECKLOCKTIMEVERIFY where there’s an output and it can’t be spent until after a time has passed? Okay, that’s getting there. That’s something.

And then there’s stuff like the lightning network where the scripts in lightning are using multisig, OP_CHECKLOCKTIMEVERIFY and revealing secrets to revoke old state. I’d say, yeah, that’s a smart contract. It’s a fairly limited one compared to what people are looking at, but this is a smart contract with conditional payments and things like that.

Smart contracts in bitcoin

How do we do smart contracts in bitcoin? It’s not the same model as in ethereum. There are basically two output types that you see on bitcoin today: pay-to-pubkeyhash (P2PKH) and pay-to-scripthash (P2SH). In P2PKH, there’s a key, you send to the key and spend with a signature from it, pretty straightforward.

In P2SH, you take some kind of script or program or a predicate, you hash it, and you can send to that. When you want to spend it, you have to reveal what the script actually was.
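
To make that commit-then-reveal pattern concrete, here is a minimal Python sketch of the P2SH idea. The script encoding and the plain SHA256 hash are simplified stand-ins (real P2SH uses HASH160 of a serialized script and actually executes the script against the provided signatures); the names are made up for illustration.

    import hashlib

    def script_hash(script: bytes) -> bytes:
        # Real P2SH uses RIPEMD160(SHA256(script)); plain SHA256 keeps the toy simple.
        return hashlib.sha256(script).digest()

    # Funding: the output commits only to the hash of the redeem script.
    redeem_script = b"2 <pubA> <pubB> <pubC> 3 CHECKMULTISIG"   # placeholder encoding
    output_commitment = script_hash(redeem_script)

    # Spending: the spender reveals the script; nodes check it matches the hash,
    # then (in real Bitcoin) execute it against the provided signatures.
    def check_reveal(revealed_script: bytes, commitment: bytes) -> bool:
        return script_hash(revealed_script) == commitment

    assert check_reveal(redeem_script, output_commitment)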

So you have these two different output types in bitcoin. And these two different output types look different, and that’s bad. Why is that bad? Well, in bitcoin, you want fungibility and uniformity. You want it so that when Chainalysis and other similar companies look at the blockchain, they get nothing. Ideally they just see random numbers and random outputs with a uniform distribution of addresses. So you want to make everything uniform. One idea to do this is something called taproot.

Taproot

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/

Taproot is an idea from Greg Maxwell, who is somewhat of a bitcoin guru. He knows a lot about bitcoin. He wrote a post introducing taproot a couple of weeks ago. It merges p2sh and p2pkh. It does this in a very annoyingly-simple way where we started wondering why nobody thought of this before.

You start by making a public key, the same way you do to create your addresses. Say you have your script S. What you do is you compute the key C and you send to that taproot key. I skipped the elliptic curve operations, but if you’re familiar with any of this stuff then this is how you turn a private key into a public key, you multiply by G. You can add public keys together, it’s quick and really simple to do. And if you add the public keys, then you can sign with the sum of the private keys, which is a really cool detail. What you do is say you have a regular key pair, and you also have this script and you perform this taproot equation. You hash the script and the public key together, you use that as a private key, turn that into a public key, and add that to your existing public key. This allows you to have both a script and a key squished into one thing. C is essentially a public key but it also has a script in it. When you want to spend from it, you have the option to use the key part or the script part. If you want to treat it as P2PKH, where you’re a person signing off with this, then you know your private key will just be your regular private key plus this hash that you computed which you know. So you sign as if there were no scripts and nobody will be able to detect that there was a script. It looks like a regular signature. But if you want to reveal the script, you can do so, and then people can verify that it is still valid, and then they can execute the script. It’s a way of merging p2pkh and p2sh into one.
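
A minimal sketch of that construction might look like this, using a toy additive group (plain modular arithmetic) in place of the real secp256k1 curve so the algebra of C = P + H(P, S)*G and the two spend paths is visible. The constants and the script string are made up; this is an illustration, not a consensus-accurate implementation.

    import hashlib

    Q = 2**127 - 1   # order of a toy additive group (a prime), NOT secp256k1
    G = 7            # toy "generator"

    def point(scalar):
        # In the toy group, "scalar * G" is plain modular multiplication.
        return (scalar * G) % Q

    def h(*parts):
        data = "|".join(str(p) for p in parts).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    p = 123456789                                  # internal private key
    P = point(p)                                   # internal public key
    S = "2-of-3 multisig OR timeout with key E"    # the committed script

    C = (P + point(h(P, S))) % Q                   # taproot key: C = P + H(P, S)*G

    # Key path: sign with the tweaked private key p + H(P, S); on chain it
    # looks like an ordinary single-key spend.
    assert point((p + h(P, S)) % Q) == C

    # Script path: reveal P and S; anyone can check C really committed to S,
    # then (in real Bitcoin) execute the script.
    assert (P + point(h(P, S))) % Q == C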

It’s nice because in many cases in smart contracts, there’s a bunch of people getting together for a smart contract. If all of those people sign off on the outcome, then you don’t really care what the smart contract was. If everyone agrees, then don’t use the smart contract. Everyone just agrees. In real life, you enter into legal agreements all the time and you don’t go to court, there’s no disagreement so you don’t need to use the contracts themselves… the contracts are really only useful in the event of a disagreement, but they still need to be there.

…. ((stream died)) …

Graftroot

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-February/015700.html

http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/

.. or I sign a script and then execute that script. The really cool thing about this is that I can add scripts after I’ve received the money. I can say “it’s my money, OR it can be this 2-of-3 multisig” and I can sign off on that, and I can keep adding people into the contract after I’ve created it, without touching the blockchain at all. This is really cool. It’s in some ways like merkleized abstract syntax trees (MAST), which is a way of having a whole bunch of different scripts that could be executed. It’s simple, and the scaling is really great. You can sign a million different scripts with this key, and all million are different possible ways of spending the money, but there’s no additional blockchain overhead, it’s just 32 bytes, because you can sort of squish the signatures together.
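
Here is a rough sketch of that delegation idea, reusing the same toy group as the taproot sketch plus a toy Schnorr signature (s = k - H(m, R)*a). The scripts, keys and nonces are made up, the script satisfaction itself is not modelled, and this only shows the shape of the two spend paths, not how a real graftroot deployment would be encoded.

    import hashlib

    Q = 2**127 - 1   # same toy additive group as the taproot sketch
    G = 7

    def point(x):
        return (x * G) % Q

    def h(*parts):
        data = "|".join(str(p) for p in parts).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    def sign(priv, msg, k):
        R = point(k)
        return R, (k - h(msg, R) * priv) % Q       # toy Schnorr: s = k - H(m, R)*a

    def verify(pub, msg, sig):
        R, s = sig
        return point(s) == (R - h(msg, R) * pub) % Q

    root_priv = 31337
    root_pub = point(root_priv)                    # this is what the funds were sent to

    # Delegation happens after the money was received, entirely off chain: the
    # root key endorses a new script by signing it.
    new_script = "2-of-3 multisig with keys D, E, F"
    delegation = sign(root_priv, new_script, k=555)

    # Spend path 1: the root key simply signs the spending transaction.
    spend_tx = "pay 1 BTC to address X"
    assert verify(root_pub, spend_tx, sign(root_priv, spend_tx, k=777))

    # Spend path 2: present the delegated script, the root key's endorsement of
    # it, and (not modelled here) a satisfaction of that script.
    assert verify(root_pub, new_script, delegation)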

The downside is that you have to sign. In taproot and p2sh, it’s only operating on public keys. Anyone can take your public key and send money to it, or anyone can take your public key and a script and combine those in the case of taproot. In the case of graftroot, there are signatures, and someone’s private key has to be online, so this doesn’t work as well with cold storage situations or offline wallets. But the upsides are really huge and you get these really cool scripting and scaling benefits.

I have 10 minutes left.

Discreet log contracts

http://diyhpl.us/wiki/transcripts/discreet-log-contracts/

https://adiabat.github.io/dlc.pdf

Discreet log contracts can use some of those scripts. This is an application, though. It looks a lot like lightning network. I came up with it from working on lightning network for years. I wondered if we could do discreet log contracts instead, and hopefully we can reuse 80-90% of the lightning network source code to do this.

The idea of lightning is that you have this multisig output, you keep making new transactions. In lightning, the most recent transaction is valid, and the older transactions are invalid.

In discreet log contracts, you have a different way of choosing which transaction is valid. You have this non-interactive oracle which can determine a valid transaction. I have a little graphic about this. Let me do the graphic first. So yeah, in lightning, you say, okay, we have this funding transaction, you have 10 BTC shared between Alice and Bob and they can keep sending them back and forth. And they update these amounts per each state, and they invalidate the previous state to make sure that neither of them can successfully broadcast the earlier states. They are really only able to broadcast the latest most recent state. If they try to broadcast the older states, then they lose everything. That’s what lightning does. But in discreet log contracts, both parties create all possible outcomes of the contract when they enter into the agreement. It looks similar in terms of transactions, with a multisig 2-of-2 at the top, and they make a million transactions descending from that. But in this case, they take an oracle’s key and they say there might be 3 different possible outcomes for tomorrow, and given the outcome as determined by the oracle, that will determine which of those million transactions, or 3 transactions in this case, is going to be the one valid transaction. The oracle’s signature is going to be used to endorse one specific transaction.

This uses Schnorr signatures. This all happens offline. We don’t need Schnorr signatures to be on the bitcoin network. The idea of a Schnorr signature is pretty straightforward. You have a pubkey A = a*G, you make a temporary pubkey R = k*G from a temporary private key k, and then you compute s = k - H(m, R)*a: the temporary private key minus the hash of the message and R, times your private key. To verify, you multiply everything by G and see whether it equals what it should be: s*G = R - H(m, R)*A. The signer gives you (R, s), a point and a scalar, and if that relationship holds true then it is a valid signature. In bitcoin today the signature algorithm is ECDSA, not Schnorr.

In discreet log contracts – well, until now, the public key is a point on the curve and so on. But what if we keep the equations the same, but we say the pubkey is two points, and we pre-commit to the R value? We just move one thing to the beginning, when we’re sharing the public key. What’s interesting is that normally, you don’t do this because this lets you only sign once. If you reuse the same R value, then people can find the private key. This happened on the PS3 back in the day with geohot. I think blockchain.info had problems with people reusing R values. It makes your signatures one-time use. But it also allows you to figure out what the signatures are going to look like. If I know (A, R) and I can come up with the expected messages, I can compute the signature times G, and if we think of a signature as a private key, then I can compute what that public key would be. So this is a weird thing you can do where you can’t figure out the signature but you can figure out the pubkey version of the signature. You have to think of signatures as private keys. This lets you take this oracle’s signature and use it as a private key in your own transaction, and combine it with other existing keys.
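
A small sketch of that anticipation trick, with the same toy group and toy Schnorr convention as above. The outcome strings and numbers are invented; the point is just that the signature points can be computed before any signature exists, and that the later revealed s behaves like a private key.

    import hashlib

    Q = 2**127 - 1   # same toy additive group and toy Schnorr as the earlier sketches
    G = 7

    def point(x):
        return (x * G) % Q

    def h(*parts):
        data = "|".join(str(p) for p in parts).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    # The oracle publishes its key A and a precommitted nonce point R up front.
    a, k = 424242, 99999
    A, R = point(a), point(k)

    outcomes = ["price <= 7000", "7000 < price <= 9000", "price > 9000"]

    # Contract setup: without seeing any signature, both parties compute the
    # signature *point* for every possible message: s_i*G = R - H(m_i, R)*A.
    sig_points = {m: (R - h(m, R) * A) % Q for m in outcomes}

    # Settlement: the oracle publishes the one real signature scalar s for the
    # actual outcome. That s is exactly the private key of the point computed
    # in advance, so it can be folded into the keys of the matching transaction.
    actual = "7000 < price <= 9000"
    s = (k - h(actual, R) * a) % Q
    assert point(s) == sig_points[actual]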

It’s a third-party oracle model. In many other oracle models like in ethereum, generally the oracle has all visibility into the smart contract so they know what is going to happen if they say the price is one thing or the other. But in my model, the oracle might not have any idea that they are being used in this arrangement. Maybe smart contracts use their signatures, maybe not; you can’t tell without being party to the discreet log contracts. The oracle won’t necessarily know if anyone is using their services, and it might be difficult for them to make money, and it has good privacy properties.

There are use cases like currency futures where you can make contracts on the price of bitcoin that settle into bitcoin. It’s the opposite of CME and CBOE where they do cash-settled futures. You can also do a lot of other stuff like betting and gambling which I guess people will do. The downside is that you have to enumerate all possible outcomes, so for some things it doesn’t work: if there are tens and tens of millions of possible outcomes then it might become unscalable. But if you have a small, finite, bounded number of outcomes, it’s based on how much data you can store, and the only thing going into the blockchain in all cases is just the one funding transaction and maybe the spending transaction.

Q&A

https://www.youtube.com/watch?v=i-0NUqIVVV4&t=1h9m54s

Q: Delegated timing?

A: Yes, kind of, yes. With graftroot the issue is that there is a root key that creates the scripts. You can make the root key interactive, or you can set things up so that the root key effectively no longer exists once it has endorsed a script. You can delegate: you have this key and you can sign off on a script where someone else has a key and they can sign. But do we make it recursive, like a tree of delegation? It wouldn’t be hard to make it recursive, but last week at the Bitcoin Core dev meetup, people were arguing it might be crazy to allow that. It might allow indefinite delegation.

Q: Since it doesn’t have a dependency on bitcoin network, then it should be… Lightning network and passing a message to any blockchain…

A: So you’re using bitcoin or a similarly compatible network as a court, essentially. As long as everyone is getting along, it’s fine. Only the results get posted to the blockchain.

Q: It could be like…

A: Graftroot and taproot would be a soft-fork. Discreet log contracts need no soft-fork; if you look at it, it looks the same as lightning. It could work today, I just have to code some of this. You don’t need the Schnorr signature soft-fork: the non-interactive oracle uses Schnorr signatures, but it’s all offline, to compute private keys which we then use for ECDSA on bitcoin.

Q: …

A: The smart contracts in bitcoin script? There’s not a programming language for discreet log contracts. You enumerate all the outcomes, and you make a bitcoin transaction for all of those. You can look at this as a giant OR gate, circuit-wise.

Okay, thanks a lot. Ask questions to me later.

Andrew Poelstra

Date: March 7, 2020

Transcript By: Michael Folkson

Tags: Taproot

Category: Conference

Media: https://www.youtube.com/watch?v=1VtQBVauvhw

Topic: Taproot - Who, How, Why

Location: MIT Bitcoin Expo 2020

BIP 341: https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki

Intro (Hugo Uvegi)

I am excited to introduce our first speaker, Andrew Poelstra. He is coming to us from Blockstream where he is Director of Research. He is going to be talking about Taproot which I think is on everybody’s mind these days.

Intro (Andrew Poelstra)

Thanks for waking up so early for the first day of the conference and making the trek from the registration room over here. It is more faces than I expected considering. Hello to all the livestream people hiding from the coronavirus. I hope it doesn’t reach you, best of luck. I told myself I was going to write my slides on the plane. As you can tell I instead spent the entire plane ride drawing this title slide. I am very proud of it. I hope you enjoy it. I am going to split this talk into two halves. I have only got 20 or 30 minutes. I can’t really describe Taproot in full detail in half an hour, probably not even in a couple of hours. Instead I am going to give a brief overview of what Taproot is, describe some of the technical reasons for it being what it is and give a high level summary of the components of it. In the second half of the talk I am going to talk more generally about how we design proposals for Bitcoin, the considerations we have to make for a system with such high uptime requirements with so many diverse stakeholders who all more or less have a veto over proposals but nobody has the ability to push things through. And where everything is very conservative. We are all very afraid of deploying broken crypto or somehow breaking the system or causing a consensus failure or who knows what. Let’s get into it.

What is Taproot?

First half, what is Taproot? Taproot is a proposal for Bitcoin that was developed originally by Pieter Wuille, Greg Maxwell and myself. It has since been taken up by probably 10 major contributors who have been doing various things on IRC and on the mailing list over the last year or two. It is a new transaction output version, meaning that it is a new way to define spending conditions for your coins on Bitcoin. I am going to talk about what that means.

Spending Conditions: Keys and Scripts

First off for those who don’t know, Bitcoin has a scripting system. It has the ability to specify spending conditions on all of the coins. Typically, for casual users, the way we think of Bitcoin is: you have an address, and the address represents some sort of public key. You have a secret key corresponding to the public key and if you can produce a signature with that secret key you can spend the coins. This is actually a special case of what Bitcoin can do. It is not just one key, one signature kind of thing. We have the ability to describe arbitrary spending conditions where arbitrary means specifically you can check signatures with various public keys like you do with the normal one key standard wallet. You can check hashlocks which means you can put a hash of some data on the blockchain and it will enforce that somebody reveals the preimage of that which is a way to do a forced publication of some shared secret say. It can do timelocks where it won’t allow coins to move until some amount of time has gone by. You can do arbitrary monotone functions of these. You create a circuit out of ANDs and ORs, threshold 2-of-3 or 5-of-10 of these different checks. You can do arbitrary sets of these. The mechanism for these is called Bitcoin script. Script can do a fair number of other things, most of which are not super exciting. It can’t do a whole bunch of things. In particular you can’t use Bitcoin script to enforce things like velocity limits. A common thing people want to do is have a wallet where, say, only a certain amount of coins are allowed to move in a given day. That kind of thing you can’t do with Bitcoin script. For people thinking about future research directions for Bitcoin this is the kind of missing functionality we have. Although as we will see by the end of this talk it is not as straightforward as having a cool idea and everybody cheering for you.
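
This is not Bitcoin Script, but a toy Python model of the kind of monotone policy being described: primitive checks (signatures, hash preimages, timelocks) combined with ANDs, ORs and thresholds. The key names and heights are made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class Witness:
        signers: set          # which keys actually signed
        preimages: set        # which hash preimages were revealed
        height: int           # current block height

    def sig(key):          return lambda w: key in w.signers
    def preimage(h):       return lambda w: h in w.preimages
    def after(height):     return lambda w: w.height >= height
    def all_of(*cs):       return lambda w: all(c(w) for c in cs)
    def any_of(*cs):       return lambda w: any(c(w) for c in cs)
    def threshold(k, *cs): return lambda w: sum(c(w) for c in cs) >= k

    # "2-of-3 of Alice/Bob/Carol, OR (emergency key AND 1000 blocks have passed)"
    policy = any_of(
        threshold(2, sig("alice"), sig("bob"), sig("carol")),
        all_of(sig("emergency"), after(1000)),
    )

    assert policy(Witness({"alice", "carol"}, set(), height=10))
    assert policy(Witness({"emergency"}, set(), height=1500))
    assert not policy(Witness({"alice"}, set(), height=10))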

Spending Conditions: Scripts and Witnesses

An interesting thing about script. We use this word “script” which conjures up connotations of scripting languages like shell or Python or PHP or Node or whatever people use these days. A difference between Bitcoin Script and an ordinary scripting language is that in Bitcoin Script you are describing conditions under which a spend is valid. You aren’t executing a bunch of code. You literally are executing a bunch of code but morally what you are doing is demonstrating that some conditions exist that were sufficient to spend the coins and you have met those conditions. Scripts often specify a wide set of conditions. Say you have a 2-of-3 signature check, then there are 3 different public keys. Any pair of those could be used to spend them. You may have a timelock with an emergency key. Maybe after a certain time has gone by, if the original 3 keys have been lost, then there is an alternate key. You can do this but what hits the blockchain when you are spending coins is only one condition. You have the script that is describing a whole bunch of different things. Ultimately only one of them matters. Only one of them matters once the coins are spent. If they don’t get spent none of them matter. So it would be nice from a privacy/scalability perspective if I could bundle those up; there is usually a trade-off there. For the purposes of this talk privacy and scalability are going to come hand in hand. It would be ideal if we weren’t even revealing all these spending conditions. If at most one of them matters why are we publishing them all? Why are we making everybody download them? Why are we making everybody parse these? Why are we making everybody check that they make sense and that they hash to the right thing, etc.?

So around 2011, 2012, on Bitcointalk I believe, which is where all Bitcoin ideas were invented (you can Google them and resurrect them), there was an idea called MAST, Merklized Abstract Syntax Tree. I think now it is Merklized Alternate Script something or other. It is not quite MAST. The idea is that you take all these different spending conditions, you put them in what is called a Merkle tree. You take all the conditions, you hash them up, you take all the hashes, you bundle those together and hash them up. You get this cryptographic object that lets you cheaply reveal any one of the conditions or any subset of the conditions without needing to reveal all of them. This is smaller. What actually hits the chain is just a single 32 byte hash representing all the different conditions. When you use one of the conditions you have to reveal that condition and also a couple of hashes that give a cryptographic proof that the original hash committed to it. This idea has been floating around for quite a while. It has never been implemented. Why hasn’t it? For a couple of reasons that I am going to go into in more detail. One is that for something like MAST there is a wide range of design surface and because changes in Bitcoin are so far reaching and so difficult to do, nobody wants to propose something for Bitcoin, and nobody wants to accept a proposal for Bitcoin, that isn’t the best possible proposal that does what we are trying to do. For years we have had variations on different ways to do MAST. Different ways to hide script components. Questions like: should we improve the script system at the same time as we are doing this? Should we change the output type? And so on. Since 2012 we have had a number of different upgrade mechanisms appear. We have learnt a lot more about the difference between hard forks and soft forks and when hard forks are necessary and what they are appropriate for. We have learned new ways to soft fork in changes, especially changes to the script system, in ways that minimize disruption to nodes that haven’t updated. On all levels of this kind of change we have made a lot of progress over the last several years. In one sense it has been worthwhile. It is great that we didn’t try to deploy this in 2012 because what we did would have sucked. On the other hand it is 2020 and we still don’t have it. There is this trade-off that I’m going to talk about a bit more.
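
A minimal sketch of that Merkle tree idea: hash each spending condition, combine the hashes pairwise up to a single root, and later reveal one condition plus the sibling hashes that prove the root committed to it. This leaves out details a real construction needs (leaf/node domain separation, the exact hash choices), and the conditions are placeholders.

    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [H(l) for l in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])            # duplicate last if odd
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_proof(leaves, index):
        level = [H(l) for l in leaves]
        proof = []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sibling = index ^ 1
            proof.append((level[sibling], sibling < index))
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify_proof(leaf, proof, root):
        acc = H(leaf)
        for sibling, sibling_is_left in proof:
            acc = H(sibling + acc) if sibling_is_left else H(acc + sibling)
        return acc == root

    conditions = [b"2-of-3 Alice/Bob/Carol", b"emergency key after 1000 blocks",
                  b"hashlock on secret X", b"plain key spend"]
    root = merkle_root(conditions)
    proof = merkle_proof(conditions, 2)
    assert verify_proof(conditions[2], proof, root)   # reveal only one condition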

Spending Conditions: Keys Tricks

That is one of the two major ideas in Taproot. It is this thing MAST. You put all your spending conditions in a Merkle tree. You only have to reveal one. Nobody can see how many there are. Nobody can see what the other ones are, everything is great. The second part of Taproot is this family of things I am going to call key tricks. The standard Bitcoin script, address, whatever you want to describe it as, has a single key. You spend the coin by providing a single signature. Traditionally a public key belongs to one entity. It identifies that entity, the person who holds the private key, the person who is able to spend those coins. The idea is that there is one person with this private key. One person has complete and sole custody of the coins. It turns out there is a lot more you can do with keys, with single keys. A lot of this stuff is made much easier using Schnorr signatures versus ECDSA. I am not going to go into that but I am going to throw that out there. These are two different digital signature algorithms. They both use the same kind of keys but Schnorr signatures let you do some cool things with the keys in a much simpler way. The most important one that I have highlighted here is multisignatures. If you have several participants who all individually have a signing key, it is possible for them to combine all of their keys into one. What they do is they all choose some randomizers, they all multiply their key by some randomizer. This is a technical thing that prevents them from creating malicious keys that cancel out other participants. They add them together. I am using add in the sense of elliptic curves which is not really the addition that most people are familiar with. But it behaves algebraically exactly like addition so we call it addition. You add these keys together, you get a single key out of this. Then what is cool is all these participants by cooperating are then able to produce a single signature for this single key and publish that to the blockchain. What the blockchain is going to see is one key and one signature. The same as if there was only one participant. The same as if it was an ordinary wallet that is not doing anything remarkable. But in fact behind the scenes there are multiple parties that all share custody of these coins and they all had to cooperate to move the coins. That is a cool thing. You can do variants of this. You can do threshold signatures. Instead of having 5 participants who all combine their keys and then all 5 of them cooperate, you can have 5 participants combine their keys in such a way that any 3 of them might cooperate. There are 5 choose 3 different possibilities here and any of those 5 choose 3 possibilities of sets of signers are able to spend the coins. This requires a little bit more complicated interaction protocol between the individual participants but again what the blockchain sees is just one key, one signature. You can do more interesting things than thresholds. You can do arbitrary monotone functions, arbitrary different sets of signers can all be bundled together into one key which is pretty cool.
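
A sketch of the key aggregation just described, with MuSig-style randomizer coefficients, in the same toy additive group used in the earlier sketches (plain modular arithmetic standing in for the curve). It only shows why the combined public key corresponds to the matching combination of individual private keys; the actual interactive nonce and signing protocol is omitted, and the key values are made up.

    import hashlib

    Q = 2**127 - 1   # toy group order, NOT secp256k1
    G = 7

    def point(x):
        return (x * G) % Q

    def h(*parts):
        data = "|".join(str(p) for p in parts).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    privs = [111, 222, 333]                       # each participant's key
    pubs = [point(p) for p in privs]
    L = tuple(sorted(pubs))                       # the set of all keys

    # Each key is multiplied by a randomizer H(L, P_i) so nobody can pick a key
    # that cancels the others out, then everything is added together.
    coeffs = [h(L, P) for P in pubs]
    agg_pub = sum(c * P for c, P in zip(coeffs, pubs)) % Q

    # Cooperating signers can jointly sign because the matching private key is
    # just the same combination of their individual private keys.
    agg_priv = sum(c * p for c, p in zip(coeffs, privs)) % Q
    assert point(agg_priv) == agg_pub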

Another thing you can do with keys and signatures that we have learned is something called adaptor signatures. If you have two parties doing a 2-of-2 multisignature, they both have to cooperate to spend the coins, they can modify the multisigning protocol such that when the second party finishes the protocol, they complete the signature, by doing so they reveal a decryption key for some secret to the other party. A lot of what we use hash preimages and hashlocks for in Bitcoin is when you have two parties and you want one to have to reveal a secret to the other as a condition of taking their coins. We can bundle that into the signatures. I am not going to go into that but the keyword to lookup would probably be adaptor signatures or scriptless scripts. Adaptor signatures are the specific construction I am describing. This is the only equation that I am going to have in all these slides, last year I did 100 equations in a row in half a hour at the first talk of the Expo and I was told I scared people. This is a commitment equation.

P -> P + H(P,m)G

What is going on here? On the left hand side I have P for public key. I am going to modify my public key here. I am going to add the hash of the original public key and some arbitrary message m. I am going to multiply that by the generator of my elliptic curve group. What this multiplication does is converts this hash, which is a number, into a point which is a public key. This allows me to add them together. The effect of doing this transformation is that before I had a public key that I or some people knew the secret key to. Afterwards I have a different public key which the same set of people know the secret to. I have just offset it by this value which is a hash of public data. Anybody can compute it, I have just offset it. I haven’t changed the signing set at all. What I have done is turn the key from just a boring old key into a key that is actually a cryptographic commitment to this message m. I am using an arbitrary hash function H. If that hash is a cryptographic commitment, specifically if it is reasonable to model it as a random oracle, then this construction also works as a commitment. If you model it as a random oracle you can see that as long as I have a uniform distribution of hashes I am going to get a uniform distribution of points out of this. What is the point of this? The point of this is that if I am publishing a key on the blockchain, this key represents some sort of spending conditions. Now I can do one better, I not only have a key on the blockchain I have a commitment to some secret data. What is this good for? This is good for a couple of non-blockchain things like timestamping say. If I have some data, I want to prove that it existed at some point I can hide it inside of one of my public keys that I was going to put on the blockchain anyway. That goes into the Bitcoin blockchain, which timestamps it. I have a whole bunch of proof of work on it. There are a certain number of blocks that were created after it. Everybody has a good idea of when every Bitcoin block was created, at least within a few minutes or a few hours. I have a cryptographic anchor for my message m to the blockchain. You can also use this to associate extra data to a transaction that the Bitcoin blockchain doesn’t care about.
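
The same equation in code, again in a toy group standing in for the curve: tweak a key to commit to a message, check the owner can still sign for the tweaked key, and check that revealing P and m lets anyone verify the commitment, which is the timestamping use just described. The values are illustrative and the construction is not secure in this toy form.

    import hashlib

    Q = 2**127 - 1   # same toy additive group as the earlier sketches
    G = 7

    def point(x):
        return (x * G) % Q

    def h(*parts):
        data = "|".join(str(p) for p in parts).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

    p = 987654321                      # original private key
    P = point(p)                       # original public key
    m = "document I want to timestamp"

    tweaked_pub = (P + point(h(P, m))) % Q     # P -> P + H(P, m)*G, what goes on chain

    # The same people can still sign: the tweaked private key is just offset.
    assert point((p + h(P, m)) % Q) == tweaked_pub

    # Later, revealing P and m lets anyone verify the key committed to m.
    def committed(pub, orig_pub, msg):
        return pub == (orig_pub + point(h(orig_pub, msg))) % Q

    assert committed(tweaked_pub, P, m)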

For example at Blockstream we work on a project called Liquid which is a sidechain, it is a chain where you can move coins from the Bitcoin chain onto this other chain, Liquid. The mechanism of doing that is that all the coins on the alternate chain are, from Bitcoin’s perspective, actually in the custody of an 11-of-15 quorum of what we call functionaries. To move coins onto the chain, you send them to the functionaries and then you go onto the sidechain and you write a special transaction saying “I locked up these Bitcoin, please give them to me on the sidechain.” The consensus rules of the sidechain know how to look at Bitcoin and verify that you did so. How do you say this is me? You are sending the coins to the functionaries, they are the same 15 people all the time. How do you identify that you were the one who locked up the coins when from Bitcoin’s perspective you gave them to the same people that everybody else did? You use this construction. You put some identifying thing here in this message m, throw that onto the Bitcoin blockchain. You then reveal m on the sidechain and the sidechain validators can verify this equation is satisfied. That is an example use of this.

Taproot Assumption

“If all interested parties agree, no other conditions matter”

The coolest use is going to be in Taproot. Let me throw out this maxim, the Taproot assumption. In most situations, most uses of Bitcoin script, you have this wide range of spending conditions that represent different possibilities for how your parties might interact, but ultimately you have a fixed set of parties that are known upfront. In a Lightning payment channel you have got the two participants in the channel. In an escrow type arrangement you have got the two parties in the escrow. In Liquid you have got the 15 functionaries who are all signing stuff. On a standard wallet you have got the one individual party. If everyone who has an interest in these coins agrees to move the coins they can just sign for the coins. As I mentioned two slides ago they can sign for the coins using a single key that represents all of their joint interests and do so interactively. The Taproot assumption is that in the common case, in the happy case of moving Bitcoin you only actually need a key and a signature. All this scripting stuff is there as a backstop for when things don’t go well or when you have weird requirements or weird assumptions. With that said we can get into where pay-to-contract comes in, where this commitment thing comes in. Here is where I describe what Taproot is.

Taproot

We use MAST to hide our spending conditions in a giant Merkle tree. We get a single hash. We take that hash, we use our key commitment construction to commit to that hash inside of a public key which you put on the blockchain. Then we say the public key is how you spend the coins. What hits the chain is a single key, in the happy case with a single signature. Nobody even sees that any additional conditions exist. If you do reveal that they only see one of the conditions. In the typical case whether you are a normal wallet with a single key that is on a user’s hardware wallet or something, or if you are doing an escrow, or if you are doing a Lightning channel, or if you are doing a Liquid transfer, if you are doing a non-custodial trade, whatever you are doing, what hits the chain is one key, one signature. This is cheaper for everyone to validate than putting all the conditions explicitly on there. It is also much better for privacy because you are not revealing what the conditions are, you aren’t revealing that there were any special conditions. You are not revealing how many participants were involved, how many people have an interest and what that interest looks like. You are not revealing any of that. That’s Taproot. There’s a whole bunch of detailed design stuff that I am not going to go into here. At a high level that’s the idea.

Designing for Bitcoin

In the next five minutes let me talk about some of the design considerations that went into this. The different ways that we had to think about Taproot.

Is Bitcoin Dead?

Before I do that let me quickly talk about Bitcoin development. I know a lot of people here are MIT students or students from other universities. There is a perception that there is a lot of really cool stuff happening in the cryptocurrency world. There are all these new things being developed, all these new technologies being deployed. Meanwhile Bitcoin is the dinosaur in the room. It never really changes and it doesn’t have any of the cool stuff. It doesn’t have the cool scripting language, it doesn’t have all the cool privacy tech. It doesn’t have DAGs, all this cool stuff. There is this idea that Bitcoin maybe hasn’t really changed in the last several years. We don’t have new features and new press releases saying “Here is a cool thing you can do on Bitcoin that you couldn’t do before.” On some level everything you can do in Bitcoin in 2020 was technically possible in 2009. Although very very difficult and very inefficient for many reasons. The reason for this perception is that deploying new things on Bitcoin is very slow. If you have a proposal you need to write it up, you need to have a detailed description of the proposal. You need to have code that is written. You need to have a fair bit of buy-in from the developer community. That is to just have a proposal, to have something that somebody is willing to give a BIP number to. A BIP number means almost nothing. Then you need to go through probably years of review, you need to get input from various stakeholders in the ecosystem, you need to go through all this rigor and rigmarole. It is a very long process and it can feel frustrating because there are a lot of other projects out there where you have a cool idea, you show up on the IRC channel and they are like “Wow somebody is interested in our stuff. We will deploy your thing of course.” Then you get stuff out there. You see various projects that are having hard forks every six months or something, deploying cool new stuff that is very experimental and very bold. That is super exciting but Bitcoin can’t do that. The requirements in Bitcoin are much higher. In particular Bitcoin is by far the most scalable cryptocurrency that is deployed and it is probably not scalable enough for serious worldwide usage. We are really hesitant to do anything that is going to slow down validation. Even to do anything that doesn’t speed up validation. That is maybe the most pressing concern. Others would argue that privacy is the most pressing concern. That is also a very valid viewpoint. Unfortunately improving privacy often comes with very difficult trade-offs that Bitcoin is unable to make in terms of weird new crypto assumptions or speed or size. Despite the difficulty in deploying things the pace of research in Bitcoin is incredibly fast. I hinted at all of these things we can do with keys and signatures. Over the last two years we have seen this explosion of different cool things you can do just with keys and signatures. There is an irony, it is so slow to deploy stuff on Bitcoin. What do we have? We have keys, what can we do with keys? But we have actually done a tremendous amount with keys, far more than anybody even in the academic cryptography space would’ve thought. Let’s do cryptography but the constraint is you are only allowed to output a key and a signature at the end. First of all they would say “That is the most ridiculous thing I have ever heard.” I actually did a talk at NIST once and I got belly laughs from people. 
They thought it was hilarious that there was this community of Bitcoin people who had tied their hands behind their backs in such a dramatic way. A result of all this is that there is a tremendous amount of research that is developing really cool stuff. Really innovative things that wind up having better scalability and better privacy than those things that we would’ve deployed in the standard way where we are allowed to have new cryptographic assumptions, we are allowed to use as much space as we want or we are allowed to spend quite a bit of time verifying stuff.

The Unbearable Heaviness of Protocol Changes

As I mentioned, I am going to rush through these two slides, there is a lot of difficulty even if you have a proposal that checks all these boxes you have got to get through a whole bunch of hoops. This change has to be accepted by the entire community which includes very many people. It includes miners, the protocol developers, the wallet developers who often have opposing goals, HSM developers who are in their own little world where they have no memory, no space and no state and they want the protocol to be set. We have retail users who just want their stuff to work and who often want bad things to not happen even when the cryptography guarantees that bad things will happen to them. You have institutional users who care even more about bad things not happening. Exchanges, custodians etc. All of these people have some interest in the system. All of these categories represent people who have a large economic stake in the system. If any change makes their lives meaningfully worse without giving them tremendous benefit they are going to complain and you are not going to get your proposal anywhere. You are going to have endless fights on the development mailing list. Just proposing an upgrade at all is making people’s lives worse because now they have to read stuff and you are going to have fights about that.

Bitcoin, I checked this morning, has a market capitalization of about 170 billion dollars. This is not flash in the pan money, it has been over 100 billion dollars for several years now. When we deploy changes to Bitcoin on a worldwide consensus system these changes can’t be undone. If we screw up the soundness and it forks into a million different things, there is no more agreement on the state of the chain, and probably that is game over. If people lose their money, if coins can get stolen it is just game over. It may even be game over for the whole cryptocurrency space. That would be such a tremendous loss of faith in this technology. Remember in the eyes of the wider public, as slow as Bitcoin is to us, it is really fast, reckless, all this crazy cypherpunk stuff, going into a computer system that has nobody in charge of it that is supposed to guarantee everybody’s life savings. It is nuts. If we screw it up we screw it up, game over. We all find new jobs I guess. Maybe go on the speaking circuit apologizing. That is the heaviness of protocol changes.

Tradeoffs

A couple of quick words about cryptography. In the first half of the talk I was talking about all these cool things we can do with just keys, just signatures. Isn’t this great? No additional resources on the chain. That is not quite true. You would think adding these new features would involve some increase of resources at least for some users. But in fact we have been able to keep this to a couple of bytes here and there. In certain really specific scenarios somebody has to reveal more hashes than they otherwise would. We have been spoilt with the magic of cryptography over the last several years. We have been able by grinding on research to find all these cool new scalability and privacy improvements that have no trade-offs other than deployment complexity and so forth. Cryptography can’t do everything, we think. There aren’t really any hard limits on what cryptography can do that necessarily prevent us from just doing everything in an arbitrarily small amount of space. But it is an ongoing research project. Every new thing is something that takes many years of research to come up with. When we are making deployments, I said if we make anyone’s lives worse then it is not going to go through. This includes wasting a couple of bytes. For example on Taproot one technical thing I am going to go into is that we had public keys that took 33 bytes to represent: 32 bytes plus one extra bit which represents a choice between two different points that have the same x coordinate. We found a way to drop that extra bit, we had to add some complexity. There was an argument about how we wanted to drop that extra bit and what the meaning of the bit would have been. Would it be the evenness or oddness of the y coordinate we elided, would it be whether it was a quadratic residue, would it be which part of the range of possible values it lives in, stuff like this. That is the kind of stuff that we spent quite a while grinding on even though it is not very exciting. It is certainly not some cool new flash loan technology or whatever that various other projects are deploying. This is stuff that is important for getting something through on a worldwide system where everybody is a stakeholder and no one wants to spend money on extra bytes.

Political Things

Finally a few general words about politics. I deliberately ran out of time here so I wouldn’t have to linger on this slide. I have said most of this. Usually when we think about Bitcoin politics, those of us who have been around for a little while think about the SegWit debacle where we had this UASF thing going on. We had miners doing secret AsicBoost and we had misalignment of incentives between users, developers and miners. There was this fork, Bitcoin Cash, all this grandstanding. People saying “We are going to create a fork such that we have no replay protection so that if you don’t give us what we want we will cause all this money loss.” That was pretty dramatic but that is not really what Bitcoin politics is like generally. Generally Bitcoin politics are the things that I have been talking about. You have a whole wide set of participants who are generally afraid of change and complexity, for very good reason by the way. We have seen a lot of technology failures on other projects deploying things too rapidly. We have a lot of people who feel that Bitcoin is increasingly onerous to validate. The blockchain is getting too large, it is already too much of a verification burden. That is what we should be doing: reducing that somehow. We have people who think privacy is the most important thing. Again with good reason. Bitcoin’s privacy story is absolutely horrible. We have an aversion to reading stuff; as people in this room are probably aware, when you propose things for Bitcoin it can be hard to get people to read your emails. Especially if you have some cool new crypto that requires a lot of cognitive load for people to read and for people to deploy. It can be difficult to compete for people’s attention. Even once you succeed on that there is a long process. There is going to be a lot of bikeshedding on various trivial features of your proposal that you have to be polite about while trying to come to a conclusion. On the opposite end from bikeshedding, you are going to get demands for proof, demands that you prove your idea and you deploy it in a solid way. That can take quite a bit of time and energy.

Q&A

Q - This would work for any blockchain or ledger? Grin etc.? It would work for all of them?

A - Absolutely. But in Bitcoin there is a much more extreme aversion to experimental technology. All the blockchains you mentioned were deployed around some new technology that they wanted to prove. By nature these are more willing to accept new ideas or ideas that maybe have different trade-offs in terms of algorithmic complexity or cryptographic assumptions or something like that. But any blockchain that expects to survive and expects to continue to work for its users, all these considerations apply.

\ No newline at end of file +https://www.youtube.com/watch?v=1VtQBVauvhw

Topic: Taproot - Who, How, Why

Location: MIT Bitcoin Expo 2020

BIP 341: https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki

Intro (Hugo Uvegi)

I am excited to introduce our first speaker, Andrew Poelstra. He is coming to us from Blockstream where he is Director of Research. He is going to be talking about Taproot which I think is on everybody’s mind these days.

Intro (Andrew Poelstra)

Thanks for waking up so early for the first day of the conference and making the trek from the registration room over here. It is more faces than I expected considering. Hello to all the livestream people hiding from the Corona virus. I hope it doesn’t reach you, best of luck. I told myself I was going to write my slides on the plane. As you can tell I instead spent the entire plane ride drawing this title slide. I am very proud of. I hope you enjoy it. I am going to split this talk into two halves. I have only got 20 or 30 minutes. I can’t really describe Taproot in full detail in half a hour, probably even in a couple of hours. Instead I am going to give a brief overview of what Taproot is, describe some of the technical reasons for it being what it is and give a high level summary of the components of it. In the second half of the talk I am going to talk more generally about how we design proposals for Bitcoin, the considerations we have to make for a system with such high uptime requirements with so many diverse stakeholders who all more or less have a veto over proposals but nobody has the ability to push things through. And where everything is very conservative. We are all very afraid of deploying broken crypto or somehow breaking the system or causing a consensus failure or who knows what. Let’s get into it.

What is Taproot?

First half, what is Taproot? Taproot is a proposal for Bitcoin that was developed originally by Pieter Wuille, Greg Maxwell and myself. It has since been taken up by probably 10 major contributors who have been doing various things on IRC and on the mailing list over the last year or two. It is a new transaction output version, meaning that it is a new to define spending conditions for your coins on Bitcoin. I am going to talk about what that means.

Spending Conditions: Keys and Scripts

First off for those who don’t know Bitcoin has a scripting system. It has the ability to specify spending conditions on all of the coins. Typically for casual users the way we think of Bitcoins, you have an address, the address represents some sort of public key. You have a secret key corresponding to the public key and if you can produce a signature with that secret key you can spend the coins. This is actually a special case of what Bitcoin can do. It is not just one key, one signature kind of thing. We have the ability to describe arbitrary spending conditions where arbitrary means specifically you can check signatures with various public keys like you do with the normal one key standard wallet. You can check hashlocks which means you can put a hash of some data on the blockchain and it will enforce that somebody reveals the preimage of that which is a way to do a forced publication of some shared secret say. It can do timelocks where it won’t allow coins to move until some amount of time has gone by. You can do arbitrary monotone functions of these. You create a circuit out of ANDs and ORs, threshold 2-of-3 or 5-of-10 of these different checks. You can do arbitrary sets of these. The mechanism for these is called Bitcoin script. Script can do a fair number of other things, most of which are not super exciting. It can’t do a whole bunch of things. In particular you can’t use Bitcoin script to enforce things like velocity limits. A common thing people want to do is have a wallet where they have some coins that say only a certain amount of coins are allowed to move in a given day. That kind of thing you can’t do with Bitcoin script. For people thinking about future research directions for Bitcoin this the kind of missing functionality we have. Although as we will see by the end of this talk it is not as straightforward as having a cool idea and everybody cheering for you.

Spending Conditions: Scripts and Witnesses

An interesting thing about script. We use this word “script” which conjures up connotations of scripting languages like shell or Python or PHP or Node or whatever people use these days. A difference between Bitcoin Script and an ordinary scripting language is that in Bitcoin Script you are describing conditions under which a spend is valid. You aren’t executing a bunch of code. You literally are executing a bunch of code but morally what you are doing is demonstrating that some conditions exist that were sufficient to spend the coins and you have met those conditions. Scripts often specify a wide set of conditions. Say you have a 2-of-3 signature check then there are 3 different public keys. Any pair of those could be used to spend them. You may have a timelock with an emergency key. Maybe after a certain time has gone by, the original 3 keys have been lost, then there is an alternate key. You can do this but what hits the blockchain when you are spending coins is only one condition. You have the script that is describing a whole bunch of different things. Ultimately only one of them matters. Only one of them matters once the coins are spent. If they don’t get spent none of them matter. So it would be nice from a privacy/scalability perspective, it is nice I can bundle those up, there is usually a trade-off there. For the purposes of this talk privacy and scalability are going to come hand in hand. It would be ideal if we weren’t even revealing all these spending conditions. If at most one of them matters why are we publishing them all? Why are we making everybody download them? Why are we making everybody parse these? Why are we everybody check that they make sense and that they hash to the right thing etc.

So around 2011, 2012 on Bitcointalk I believe, which is where all Bitcoin ideas were invented. You can Google them and resurrect them as easy publication. There was an idea called MAST, Merklized Abstract Syntax Tree. I think now it is Merklized Alternate Script something or other. It is not quite MAST. The idea is that you take all these different spending conditions, you put them in what is called a Merkle tree. You take all the conditions, you hash them up, you take all the hashes, you bundle those together and hash them up. You get this cryptographic object that lets you cheaply reveal any one of the conditions or any subset of the conditions without needing to reveal all of them. This is smaller. What actually hits the chain is just a single 32 byte hash representing all the different conditions. When you use one of the conditions you have to reveal that condition and also a couple of hashes that give a cryptographic proof that the original hash committed to it. This idea has been floating around for quite a while. It has never been implemented. Why hasn’t it? For a couple of reasons that I am going to go into in more detail. One is that for something like MAST there is a wide range of design surface and because changes in Bitcoin are so far reaching and so difficult to do nobody wants to post something for Bitcoin and nobody wants to accept a proposal for Bitcoin that isn’t the best possible proposal that does what we are trying to do. For years we have had variations on different ways to do MAST. Different ways to hide script components. Questions about should we improve the script system at the same time as we are doing this? Should we change the output type and so on. Since 2012 we have had a number of different upgrade mechanisms appear. We have learnt a lot more about the difference between hard forks and soft forks and when hard forks are necessary and what they are appropriate for. We have learned new ways to soft fork in changes, especially changes to the script system, in ways that minimize disruption to nodes that haven’t updated. On all levels of this kind of change we have made a lot of progress over the last several years. In one sense it has been worthwhile. It is great that we didn’t try to deploy this in 2012 because what we did would have sucked. On the other hand it is 2020 and we still don’t have it. There is this trade-off that I’m going to talk about a bit more.

Spending Conditions: Key Tricks

That is one of the two major ideas in Taproot. It is this thing MAST. You put all your spending conditions in a Merkle tree. You only have to reveal one. Nobody can see how many there are. Nobody can see what the other ones are, everything is great. The second part of Taproot is this family of things I am going to call key tricks. The standard Bitcoin script, address, whatever you want to describe it as, has a single key. You spend the coin by providing a single signature. Traditionally a public key belongs to one entity. It identifies that entity and it identifies the person who holds the private key, the person who is able to spend those coins. The idea is that there is one person with this private key. One person has complete and sole custody of the coins. It turns out there is a lot more you can do with keys, with single keys. A lot of this stuff is made much easier using Schnorr signatures versus ECDSA. I am not going to go into that but I am going to throw that out there. These are two different digital signature algorithms. They both use the same kind of keys but Schnorr signatures let you do some cool things with the keys in a much simpler way. The most important one that I have highlighted here is multisignatures. If you have several participants who all individually have a signing key, it is possible for them to combine all of their keys into one. What they do is they all choose some randomizers and they all multiply their key by some randomizer. This is a technical thing that prevents them from creating malicious keys that cancel out other participants. Then they add them together. I am using add in the sense of elliptic curves, which is not really the addition most people are familiar with, but it behaves algebraically exactly like addition so we call it addition. You add these keys together, you get a single key out of this. Then what is cool is all these participants by cooperating are then able to produce a single signature for this single key and publish that to the blockchain. What the blockchain is going to see is one key and one signature. The same as if there was only one participant. The same as if it was an ordinary wallet that is not doing anything remarkable. But in fact behind the scenes there are multiple parties that all share custody of these coins and they all had to cooperate to move the coins. That is a cool thing. You can do variants of this. You can do threshold signatures. Instead of having 5 participants who all combine their keys and then the 5 of them cooperate, you can have 5 participants combine their keys in such a way that any 3 of them might cooperate. There are 5 choose 3 different possibilities here and any of those 5 choose 3 possible sets of signers are able to spend the coins. This requires a slightly more complicated interaction protocol between the individual participants but again what the blockchain sees is just one key, one signature. You can do more interesting things than thresholds. You can do arbitrary monotone functions; arbitrary different sets of signers can all be bundled together into one key, which is pretty cool.
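
As a rough illustration of the aggregation algebra, here is a toy sketch. It stands an insecure additive group in for the elliptic curve (a “point” is just an integer modulo N), so it only demonstrates the arithmetic, not the real MuSig protocol or its security; the randomizer derivation is a simplified assumption, not the actual scheme.

```python
import hashlib

# Toy stand-in for the curve: point_of(secret) plays the role of secret*G.
# Completely insecure (discrete logs are trivial here); illustration only.
N = (1 << 255) - 19
G = 2

def point_of(secret: int) -> int:
    return secret * G % N

def h_int(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

# Three participants, each with their own secret key.
secrets = [11, 22, 33]
pubkeys = [point_of(s) for s in secrets]

# Each key gets a randomizer derived from all the keys (the defence against
# key-cancellation mentioned above), then the randomized keys are added.
encoded = b"".join(p.to_bytes(32, "big") for p in pubkeys)
coeffs = [h_int(encoded, p.to_bytes(32, "big")) for p in pubkeys]
agg_pub = sum(c * p for c, p in zip(coeffs, pubkeys)) % N

# The matching aggregate secret is the same combination of the individual
# secrets, so jointly the participants can sign for the single aggregate key.
agg_secret = sum(c * s for c, s in zip(coeffs, secrets)) % N
assert point_of(agg_secret) == agg_pub
```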

Another thing you can do with keys and signatures that we have learned is something called adaptor signatures. If you have two parties doing a 2-of-2 multisignature, where they both have to cooperate to spend the coins, they can modify the multisigning protocol such that when the second party finishes the protocol and completes the signature, by doing so they reveal a decryption key for some secret to the other party. A lot of what we use hash preimages and hashlocks for in Bitcoin is when you have two parties and you want one to have to reveal a secret to the other as a condition of taking their coins. We can bundle that into the signatures. I am not going to go into that but the keyword to look up would probably be adaptor signatures or scriptless scripts. Adaptor signatures are the specific construction I am describing. This is the only equation that I am going to have in all these slides; last year I did 100 equations in a row in half an hour at the first talk of the Expo and I was told I scared people. This is a commitment equation.

P -> P + H(P,m)G

What is going on here? On the left hand side I have P for public key. I am going to modify my public key here. I am going to take the hash of the original public key and some arbitrary message m. I am going to multiply that by the generator of my elliptic curve group. What this multiplication does is convert this hash, which is a number, into a point, which is a public key. This allows me to add them together. The effect of doing this transformation is that before I had a public key that I or some people knew the secret key to. Afterwards I have a different public key which the same set of people know the secret to. I have just offset it by this value which is a hash of public data. Anybody can compute it, I have just offset it. I haven’t changed the signing set at all. What I have done is turn the key from just a boring old key into a key that is actually a cryptographic commitment to this message m. I am using an arbitrary hash function H. If that hash is a good cryptographic hash, specifically if it is reasonable to model it as a random oracle, then this construction also works as a commitment. If you model it as a random oracle you can see that as long as I have a uniform distribution of hashes I am going to get a uniform distribution of points out of this. What is the point of this? The point of this is if I’m on the blockchain, I’m publishing a key and this key represents some sort of spending conditions. Now I can do one better: I not only have a key on the blockchain, I have a commitment to some secret data. What is this good for? This is good for a couple of non-blockchain things like timestamping, say. If I have some data and I want to prove that it existed at some point, I can hide it inside of one of my public keys that I was going to put on the blockchain anyway. That goes into the Bitcoin blockchain, which timestamps it. I have a whole bunch of proof of work on it. There are a certain number of blocks that were created after it. Everybody has a good idea of when every Bitcoin block was created, at least within a few minutes or a few hours. I have a cryptographic anchor for my message m to the blockchain. You can also use this to associate extra data to a transaction that the Bitcoin blockchain doesn’t care about.
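
Here is the commitment equation written out in the same toy-group style as the earlier sketch. Again the group is an insecure stand-in for the curve; the point is only to show that the tweak H(P,m) offsets the public key and the secret key by the same publicly computable amount, so the original signer can still sign and anyone who is later shown P and m can check the commitment.

```python
import hashlib

# Insecure toy group, illustration only.
N = (1 << 255) - 19
G = 2

def point_of(secret: int) -> int:
    return secret * G % N

def h_int(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

x = 12345                                   # original secret key
P = point_of(x)                             # original public key P

m = b"data I want to commit to (a document, a sidechain claim, ...)"
tweak = h_int(P.to_bytes(32, "big"), m)     # H(P, m)

P_committed = (P + point_of(tweak)) % N     # P -> P + H(P,m)*G
x_committed = (x + tweak) % N               # the signer just offsets their secret key

# The same party can still sign, and anyone given P and m can recompute the
# tweak and check that P_committed really commits to m.
assert point_of(x_committed) == P_committed
assert P_committed == (P + h_int(P.to_bytes(32, "big"), m) * G) % N
```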

For example at Blockstream we work on a project called Liquid which is a sidechain. It is a chain where you can move coins from the Bitcoin chain onto this other chain, Liquid. The mechanism of doing that is that all the coins that are on the alternate chain are, from Bitcoin’s perspective, actually in the custody of an 11-of-15 quorum of what we call functionaries. To move coins onto the chain, you send them to the functionaries and then you go onto the sidechain and you write a special transaction saying “I locked up these Bitcoin, please give them to me on the sidechain.” The consensus rules of the sidechain know how to look at Bitcoin and verify that you did so. How do you say this is me? You are sending the coins to the functionaries, they are the same 15 people all the time. How do you identify that you were the one who locked up the coins when from Bitcoin’s perspective you gave them to the same people that everybody else did? You use this construction. You put some identifying thing here in this message m and throw that onto the Bitcoin blockchain. You then reveal m on the sidechain and the sidechain validators can verify this equation is satisfied. That is an example use of this.

Taproot Assumption

“If all interested parties agree, no other conditions matter”

The coolest use is going to be in Taproot. Let me throw out this maxim, the Taproot assumption. In most situations, most uses of Bitcoin script, you have this wide range of spending conditions that represent different possibilities for how your parties might interact, but ultimately you have a fixed set of parties that are known upfront. In a Lightning payment channel you have got the two participants in the channel. In an escrow type arrangement you have got the two parties in the escrow. In Liquid you have got the 15 functionaries who are all signing stuff. On a standard wallet you have got the one individual party. If everyone who has an interest in these coins agrees to move the coins they can just sign for the coins. As I mentioned two slides ago they can sign for the coins using a single key that represents all of their joint interests and do so interactively. The Taproot assumption is that in the common case, in the happy case of moving Bitcoin you only actually need a key and a signature. All this scripting stuff is there as a backstop for when things don’t go well or when you have weird requirements or weird assumptions. With that said we can get into where pay-to-contract comes in, where this commitment thing comes in. Here is where I describe what Taproot is.

Taproot

We use MAST to hide our spending conditions in a giant Merkle tree. We get a single hash. We take that hash, we use our key commitment construction to commit to that hash inside of a public key which you put on the blockchain. Then we say the public key is how you spend the coins. What hits the chain is a single key, in the happy case with a single signature. Nobody even sees that any additional conditions exist. If you do reveal one, they only see that one condition. In the typical case whether you are a normal wallet with a single key that is on a user’s hardware wallet or something, or if you are doing an escrow, or if you are doing a Lightning channel, or if you are doing a Liquid transfer, if you are doing a non-custodial trade, whatever you are doing, what hits the chain is one key, one signature. This is cheaper for everyone to validate than putting all the conditions explicitly on there. It is also much better for privacy because you are not revealing what the conditions are, you aren’t revealing that there were any special conditions. You are not revealing how many participants were involved, how many people have an interest and what that interest looks like. You are not revealing any of that. That’s Taproot. There’s a whole bunch of detailed design stuff that I am not going to go into here. At a high level that’s the idea.
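
Putting the two pieces together, here is a toy sketch of that construction: commit the Merkle root of the scripts into the key with the pay-to-contract tweak, so the output is just a key. As before this uses an insecure stand-in group and none of BIP341’s exact tagged-hash or tree-encoding rules; the names and numbers are illustrative.

```python
import hashlib

# Insecure toy group, illustration only.
N = (1 << 255) - 19
G = 2

internal_secret = 54321                          # e.g. the aggregate key of all the parties
internal_key = internal_secret * G % N

# Stand-in for the Merkle root of the script tree (see the earlier MAST sketch).
script_root = hashlib.sha256(b"merkle root of all the alternative scripts").digest()

tweak = int.from_bytes(
    hashlib.sha256(internal_key.to_bytes(32, "big") + script_root).digest(), "big") % N

output_key = (internal_key + tweak * G) % N      # the only thing that hits the chain

# Happy path: the parties co-sign with the tweaked secret and reveal nothing else.
assert (internal_secret + tweak) * G % N == output_key
```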

Designing for Bitcoin

In the next five minutes let me talk about some of the design considerations that went into this. The different ways that we had to think about Taproot.

Is Bitcoin Dead?

Before I do that let me quickly talk about Bitcoin development. I know a lot of people here are MIT students or students from other universities. There is a perception that there is a lot of really cool stuff happening in the cryptocurrency world. There are all these new things being developed, all these new technologies being deployed. Meanwhile Bitcoin is the dinosaur in the room. It never really changes and it doesn’t have any of the cool stuff. It doesn’t have the cool scripting language, it doesn’t have all the cool privacy tech. It doesn’t have DAGs, all this cool stuff. There is this idea that Bitcoin maybe hasn’t really changed in the last several years. We don’t have new features and new press releases saying “Here is a cool thing you can do on Bitcoin that you couldn’t do before.” On some level everything you can do in Bitcoin in 2020 was technically possible in 2009, although very very difficult and very inefficient for many reasons. The reason for this perception is that deploying new things on Bitcoin is very slow. If you have a proposal you need to write it up, you need to have a detailed description of the proposal. You need to have code that is written. You need to have a fair bit of buy-in from the developer community. That is to just have a proposal, to have something that somebody is willing to give a BIP number to. A BIP number means almost nothing. Then you need to go through probably years of review, you need to get input from various stakeholders in the ecosystem, you need to go through all this rigor and rigmarole. It is a very long process and it can feel frustrating because there are a lot of other projects out there where you have a cool idea, you show up on the IRC channel and they are like “Wow somebody is interested in our stuff. We will deploy your thing of course.” Then you get stuff out there. You see various projects that are having hard forks every six months or something, deploying cool new stuff that is very experimental and very bold. That is super exciting but Bitcoin can’t do that. The requirements in Bitcoin are much higher. In particular Bitcoin is by far the most scalable cryptocurrency that is deployed and it is probably not scalable enough for serious worldwide usage. We are really hesitant to do anything that is going to slow down validation. Even to do anything that doesn’t speed up validation. That is maybe the most pressing concern. Others would argue that privacy is the most pressing concern. That is also a very valid viewpoint. Unfortunately improving privacy often comes with very difficult trade-offs that Bitcoin is unable to make in terms of weird new crypto assumptions or speed or size. Despite the difficulty in deploying things the pace of research in Bitcoin is incredibly fast. I hinted at all of these things we can do with keys and signatures. Over the last two years we have seen this explosion of different cool things you can do just with keys and signatures. There is an irony here: it is so slow to deploy stuff on Bitcoin, so what do we have? We have keys, what can we do with keys? We have actually done a tremendous amount with keys, far more than anybody even in the academic cryptography space would’ve thought. If you told academic cryptographers “let’s do cryptography but the constraint is you are only allowed to output a key and a signature at the end”, first of all they would say “That is the most ridiculous thing I have ever heard.” I actually did a talk at NIST once and I got belly laughs from people.
They thought it was hilarious that there was this community of Bitcoin people who had tied their hands behind their backs in such a dramatic way. A result of all this is that there is a tremendous amount of research that is developing really cool stuff. Really innovative things that wind up having better scalability and better privacy than those things that we would’ve deployed in the standard way where we are allowed to have new cryptographic assumptions, we are allowed to use as much space as we want or we are allowed to spend quite a bit of time verifying stuff.

The Unbearable Heaviness of Protocol Changes

As I mentioned, I am going to rush through these two slides. There is a lot of difficulty here: even if you have a proposal that checks all these boxes you have got to get through a whole bunch of hoops. This change has to be accepted by the entire community, which includes very many people. It includes miners, the protocol developers, the wallet developers who often have opposing goals, HSM developers who are in their own little world where they have no memory, no space and no state and they want the protocol to be set. We have retail users who just want their stuff to work and who often want bad things to not happen even when the cryptography guarantees that bad things will happen to them. You have institutional users who care even more about bad things not happening. Exchanges, custodians etc. All of these people have some interest in the system. All of these categories represent people who have a large economic stake in the system. If any change makes their lives meaningfully worse without giving them tremendous benefit they are going to complain and you are not going to get your proposal anywhere. You are going to have endless fights on the development mailing list. Just proposing an upgrade at all is making people’s lives worse because now they have to read stuff, and you are going to have fights about that.

Bitcoin, I checked this morning, has a market capitalization of about 170 billion dollars. This is not flash in the pan money, it has been over 100 billion dollars for several years now. When we deploy changes to Bitcoin, a worldwide consensus system, these changes can’t be undone. If we screw up the soundness and it forks into a million different things, there is no more agreement on the state of the chain, and probably that is game over. If people lose their money, if coins can get stolen, it is just game over. It may even be game over for the whole cryptocurrency space. That would be such a tremendous loss of faith in this technology. Remember in the eyes of the wider public, as slow as Bitcoin is to us, it is really fast, reckless, all this crazy cypherpunk stuff, going into a computer system that has nobody in charge of it and that is supposed to guarantee everybody’s life savings. It is nuts. If we screw it up we screw it up, game over. We all find new jobs I guess. Maybe go on the speaking circuit apologizing. That is the heaviness of protocol changes.

Tradeoffs

A couple of quick words about cryptography. In the first half of the talk I was talking about all these cool things we can do with just keys, just signatures. Isn’t this great? No additional resources on the chain. That is not quite true. You would think adding these new features would involve some increase of resources at least for some users. But in fact we have been able to keep this to a couple of bytes here and there. In certain really specific scenarios somebody has to reveal more hashes than they otherwise would. We have been spoilt with the magic of cryptography over the last several years. We have been able, by grinding on research, to find all these cool new scalability and privacy improvements that have no trade-offs other than deployment complexity and so forth. Cryptography can’t do everything, we think. There aren’t really any hard limits on what cryptography can do that necessarily prevent us from just doing everything in an arbitrarily small amount of space. But it is an ongoing research project. Every new thing is something that takes many years of research to come up with. When we are making deployments, I said if we make anyone’s lives worse then it is not going to go through. This includes wasting a couple of bytes. For example on Taproot one technical thing I am going to go into is that we had public keys that took 33 bytes to represent: 32 bytes plus one extra bit which represents a choice of two different points that have the same x coordinate. We found a way to drop that extra bit, but we had to add some complexity. There was an argument about how we wanted to drop that extra bit and what the meaning of the bit would have been. Would it be the evenness or oddness of the number we elided, would it be whether it was a quadratic residue? Would it be what part of the range of possible values it lives in, stuff like this. That is the kind of stuff that we spent quite a while grinding on even though it is not very exciting. It is certainly not some cool new flash loan technology or whatever that various other projects are deploying. This is stuff that is important for getting something through on a worldwide system where everybody is a stakeholder and no one wants to spend money on extra bytes.

Political Things

Finally a few general words about politics. I deliberately ran out of time here so I wouldn’t have to linger on this slide. I have said most of this. Usually when we think about Bitcoin politics, those of us who have been around for a little while think about the SegWit debacle where we had this UASF thing going on. We had miners doing secret AsicBoost and we had misalignment of incentives between users, developers and miners. There was this fork, Bitcoin Cash, all this grandstanding. People saying “We are going to create a fork such that we have no replay protection so that if you don’t give us what we want we will cause all this money loss.” That was pretty dramatic but that is not really what Bitcoin politics is like generally. Generally Bitcoin politics are the things that I have been talking about. You have a whole wide set of participants who are generally afraid of change and complexity, for very good reason by the way. We have seen a lot of technology failures on other projects deploying things too rapidly. We have a lot of people who feel that Bitcoin is increasingly onerous to validate. The blockchain is getting too large, it is already too much of a verification burden. That is what we should be doing, reducing that somehow. We have people who think privacy is the most important thing. Again with good reason. Bitcoin’s privacy story is absolutely horrible. We have an aversion to reading stuff; as people in this room are probably aware, when you propose things for Bitcoin it can be hard to get people to read your emails. Especially if you have some cool new crypto that requires a lot of cognitive load for people to read and for people to deploy. It can be difficult to compete for people’s attention. Even once you succeed on that there is a long process. There is going to be a lot of bikeshedding on various trivial features of your proposal that you have to be polite with and try to come to a conclusion on. On the opposite end from bikeshedding you are going to get demands for proof, demands that you prove your idea and deploy it in a solid way. That can take quite a bit of time and energy.

Q&A

Q - This would work for any blockchain or ledger? Grin etc.? It would work for all of them?

A - Absolutely. But in Bitcoin there is a much more extreme aversion to experimental technology. All the blockchains you mentioned were deployed around some new technology that they wanted to prove. By nature these are more willing to accept new ideas or ideas that maybe have different trade-offs in terms of algorithmic complexity or cryptographic assumptions or something like that. But any blockchain that expects to survive and expects to continue to work for its users, all these considerations apply.

\ No newline at end of file diff --git a/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap/index.html b/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap/index.html index d5ba9e5389..53f014414b 100644 --- a/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap/index.html +++ b/ruben-somsen/2020-05-11-ruben-somsen-succinct-atomic-swap/index.html @@ -10,4 +10,4 @@ < Ruben Somsen < Succinct Atomic Swap

Succinct Atomic Swap

Speakers: Ruben Somsen

Date: May 11, 2020

Transcript By: Michael Folkson

Media: -https://www.youtube.com/watch?v=TlCxpdNScCA

Topic: Succinct Atomic Swap (SAS)

Location: Ruben Somsen YouTube channel

SAS resources: https://gist.github.com/RubenSomsen/8853a66a64825716f51b409be528355f

Intro

Happy halvening. Today I have a special technical presentation for you. Basically what we are going to do is cut atomic swaps in half. I call this succinct atomic swaps.

Motivation

First let me tell you my motivation. I think Bitcoin is a very important technology. We need to find ways to utilize it at its maximum potential without sacrificing decentralization. In order to do that you need to come up with some smart ways to do more with less. That is the protocol design that I try to come up with. In line with that I came up with this.

Going from four transactions….

What we are going to do is take atomic swaps which are a protocol where you have a UTXO on two chains and you want to swap them. Or maybe on the same chain. You could think Bitcoin to Litecoin or even two Bitcoin UTXOs where you want privacy and that is why you swap them. The protocol that works today is four transactions. You have a preparation transaction on the first chain. Then you have another preparation transaction on the other chain. It is generally a multisig in both cases. You have some kind of timelock. They do a swap. We are taking that and we are bringing it down to only two transactions. You might think how is that possible? You are going to find out.

Setup Alice

Let’s get started. We start with Alice who has some Bitcoin on the Bitcoin blockchain. She wants to prepare this for transfer for a swap with Bob. What she does is she locks it up, Alice’s key and Bob’s key and they lock it up together. She is not actually sending her Bitcoin yet. This is the transaction that is going to be on the blockchain. This is an output that is locked and it can be unlocked by Alice’s and Bob’s signature but she hasn’t actually sent it yet. Before we actually put this on the blockchain we make some preparatory transactions. The first one is almost the same. It is again Alice and Bob but this one has a one day timelock. This is a timelock that is on the transaction level meaning that this transaction cannot even go onto the blockchain until one day has passed. From this transaction we create a transaction where the money goes back to Alice. This is necessary because if Alice sends this to the blockchain and Bob doesn’t do anything Alice needs a way to get her money back. This one has a relative timelock. Meaning that first the transaction in the middle here has to go to the blockchain. Then this transaction where Alice gets her money back can be sent to the blockchain two days from starting this whole process. There is a star * there. What does the * mean? It is an adaptor signature. Meaning that if Alice wants to broadcast this, remember both Alice and Bob put a signature on it, Bob’s signature is only valid if Alice reveals the secret to Bob. This is something that adaptor signatures can do. What you have to remember here is if this transaction ever goes to the blockchain Alice is revealing a secret which we call AS here. Finally we have this other transaction which shouldn’t ever go to the blockchain unless Alice is completely not paying attention, where Bob gets the money. The only reason for this one is to make sure that Alice actually does something and doesn’t sit on it and do nothing. At this point we have a guarantee that either Alice gets her money back and she reveals a secret or Bob gets the money. Alice knowing that she will actually respond in time and get her money back in time is ready to send this to the blockchain. Now this first transaction goes onchain.
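
For intuition, here is a toy sketch of the adaptor signature mechanism being relied on here, using a simplified Schnorr-style signature over an insecure stand-in group. Real implementations use secp256k1 and, as mentioned later, MuSig or single-signer ECDSA adaptor constructions; the names and numbers below are illustrative assumptions. The key property is the last line: publishing the completed signature hands the counterparty the secret.

```python
import hashlib

# Toy additive group and toy Schnorr-style signatures (insecure, illustration only).
N = (1 << 255) - 19
G = 2
pt = lambda x: x * G % N
h_int = lambda *ints: int.from_bytes(
    hashlib.sha256(b"".join(i.to_bytes(32, "big") for i in ints)).digest(), "big") % N

msg = h_int(123456789)            # stand-in for the hash of the refund transaction

# Alice holds the secret AS; Bob only knows the corresponding point.
alice_secret = 777
AS_point = pt(alice_secret)

# Bob's signing key (in the real protocol this would be his share of the 2-of-2).
bob_secret, bob_pub = 999, pt(999)

# Bob hands Alice a *pre-signature*: it only becomes valid once Alice adds her secret.
k = 555                           # nonce (random in practice)
R = pt(k)
e = h_int((R + AS_point) % N, bob_pub, msg)
s_pre = (k + e * bob_secret) % N
assert pt(s_pre) == (R + e * bob_pub) % N            # Alice can verify the pre-signature

# Alice completes and broadcasts: (R + AS_point, s) verifies as an ordinary signature.
s = (s_pre + alice_secret) % N
assert pt(s) == ((R + AS_point) % N + e * bob_pub) % N

# Bob sees s on the chain and learns Alice's secret AS.
assert (s - s_pre) % N == alice_secret
```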

Setup Bob

Now it is Bob’s turn. Bob knowing that he will either learn the secret or get the money, he locks it up on the other chain, let’s say Litecoin, with two keys, Alice’s secret and Bob’s secret.

Refund

What that means is that if the Bitcoin atomic swap side goes to the blockchain like this then AS, Alice’s secret is revealed to Bob and Bob gets his money back on the Litecoin chain. Because of this guarantee Bob is secure in locking up his money with these two keys with no timelock whatsoever.

Swap

From this point on we do the swap. How do we do that? We create another transaction where the money simply goes to Bob but again it is an adaptor signature. This time it is Alice who wants Bob to reveal secrets in order to send this transaction to the blockchain. What that means is that Bob can now claim the money at the top but he has to reveal Bob’s secret to Alice. If he goes ahead and does that and sends this to the blockchain, Bob has the Bitcoin at the top and Alice at the bottom, she learns BS, Bob’s secret. Now Alice has control of the bottom transaction, Bob has control of the top transaction. However we are doing three transactions here not two so what is going on here? First three transactions is already better than what we have today because we have four transactions right now so this is already an improvement. We can do even better. How do we do that? We just don’t send that transaction to the blockchain but instead Bob gives Bob’s secret to Alice and now Alice has control over the coins on the Litecoin side and Alice does the same thing, gives Alice’s key to Bob. Now Bob has control of the Bitcoin at the top.

Bob should be online

In two transactions they both have control over the money but there is one caveat which is this transaction still exists. There is still the possibility for Alice to send this transaction to the blockchain and reclaim the top Bitcoin. However because of the way we have the timelocks constructed Bob can simply be online, pay attention and respond if Alice ever tries to do this. She first has to send the middle transactions to the blockchain and then that final refund transaction where Alice would get her money back. If the second transaction here in the middle, this A+B and a 1 day lock, goes to the blockchain Bob simply responds and since Bob knows both keys A and B he can send it to himself at that point. There is this online requirement where Alice can’t get the money if Bob is paying attention. We assume and hope that Alice doesn’t try. This is very similar to how Lightning Network works. If that is indeed the case then we did an atomic swap in two transactions.

Negative

The negative is the online requirement for one of the two parties, in this case Bob. There is this state. You have to remember the secrets that you learned during the process. This is different from running a regular Bitcoin wallet where you do a backup once and then you have all your own money. In this case you actually have to make sure you backup those secrets or you definitely have them in case something goes wrong, on your phone or whatever device you are using. That’s a little bit more work that you have to do there. That is very similar to how the Lightning Network works today as well. It is a good compromise considering what you get in return. Only two transactions instead of four.

Positive

It works today, that is a good thing. You can do this with MuSig and Schnorr. That will be the most efficient way of doing it without any weird math that you have to do. Recently Lloyd Fournier came up with a way to do a single signer ECDSA adaptor signature. That allows you to do this today. If you utilize that kind of technique then you can do adaptor signatures with single signatures on the Bitcoin blockchain today. That’s really cool. Lloyd also helped me out by reviewing this Succinct Atomic Swap that I created so I want to thank him for that. Another advantage is that it is two transactions not four, which is great. It is scriptless so you don’t really have anything huge going to the blockchain. It really is, in the case of MuSig, one signature, and in the case of ECDSA, two signatures going to the blockchain per transaction. It is asymmetric, meaning that one of the chains only has one transaction going onchain at any time even if the protocol fails. That is nice because if one of the two chains is more expensive to use, let’s say you go from Litecoin to Bitcoin, then you want to have Bitcoin be the place where only one transaction takes place. That is more efficient. The other thing already mentioned is that one of the two chains doesn’t require a timelock. That might be good if there are some blockchains out there that don’t have any scripting whatsoever, including timelocks. Lastly there is something called Payswap which might be useful to do with this protocol. Payswap is an idea by ZmnSCPxj on the Bitcoin mailing list where you have a payment where you send a full output to one person and the change, which is normally inside of the same transaction, is an atomic swap. I might be sending 1.5 Bitcoin to somebody for buying something and in another transaction that is seemingly unrelated that person sends me back 0.5 because I only intended to send 1 Bitcoin, let’s say. The nice thing about this is now you don’t really have any connection between the amounts. The amounts are different now. It is not as obvious as if you were to do an atomic swap where the amounts are the same. You do a payment and an atomic swap in one and that gives you an additional amount of privacy. This protocol wasn’t very practical before because it required four transactions. But now you could maybe do it in two transactions, or three if you don’t want the online requirements.

Maybe

The last thing is that maybe, I’m not sure about this, we could use this to swap in and out of Lightning in one single transaction. The way that would work is imagine you have a Lightning route from Alice to Bob to Carol. What you want to do is create a payment from Alice to Carol where depending on whether the payment goes through or not either Alice’s secret is revealed or Carol’s secret is revealed. That is not how Lightning works today. It is a change and it is an open question whether or not this is possible or whether there is something that makes it impossible to do this over routing. I’m not sure what the answer is there. If it is possible that would be really cool. You could make a payment onchain and have a Lightning payment go through on the other end and make that be atomic so both things go through or neither. Both in and out of Lightning. Hopefully that works but I’ll need some feedback on that. Please let me know if you are willing to look into this and think it may or may not work.

Resources

If you want to learn more. This was a brief overview of how it works. I have a full write up where you will find the details of this protocol as well as all of my other work including my work on statechains. I also have some write ups on blind merged mining or at least a variant that uses a fee bidding structure where only the highest paying person gets to put his hash into the blockchain. That is interesting. And also perpetual one-way peg which is a way to do sidechains without having the store of value property that you are used to from Bitcoin. It is an interesting thing that I am hoping to present about at Bitcoin 2020 conference in San Francisco. This will hopefully happen in a couple of months but we will see what happens. The situation right now is tricky. You might be able to find me there. If I do go there and you go there as well I’ll see you there. Thank you very much for listening and have a nice day.

\ No newline at end of file diff --git a/scalingbitcoin/hong-kong-2015/a-bevy-of-block-size-proposals-bip100-bip102-and-more/index.html b/scalingbitcoin/hong-kong-2015/a-bevy-of-block-size-proposals-bip100-bip102-and-more/index.html index ac45665759..f88200999b 100644 --- a/scalingbitcoin/hong-kong-2015/a-bevy-of-block-size-proposals-bip100-bip102-and-more/index.html +++ b/scalingbitcoin/hong-kong-2015/a-bevy-of-block-size-proposals-bip100-bip102-and-more/index.html @@ -12,4 +12,4 @@ < A Bevy Of Block Size Proposals Bip100 Bip102 And More

A Bevy Of Block Size Proposals Bip100 Bip102 And More

Speakers: Jeff Garzik

Transcript By: Bryan Bishop

Category: Conference

Media: -https://www.youtube.com/watch?v=fst1IK_mrng&t=3h52m35s

slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_1_garzik.pdf

slides: http://www.slideshare.net/jgarzik/a-bevy-of-block-size-proposals-scaling-bitcoin-hk-2015

Alternative video: https://www.youtube.com/watch?v=37LiYOOevqs&t=1h16m6s

We’re going to be talking about every single block size proposal. This is going to be the least technical presentation at the conference. I am not going to go into the proposals themselves. Changing the block size itself is pretty simple. You change a constant. It’s more a discussion of algorithms, roll-out, etc.

What are some of the high level concerns with block size? We have been addressing technical scaling issues like mempool and other work. I think this has been deferring some of the economic and game theory issues, which is not so good. There’s the “FOMC problem” where humans pick the block size and are functionally choosing the economics, versus the free market approach where the free market chooses the block size but maybe it chooses wrong and runs bitcoin off a cliff.

You have to achieve a zen balance where at the low end the users are forced onto Coinbase or walled gardens. But at the high end, most nodes can’t process big blocks, and you centralize that way when you force them off the network. Either way, you defeat the security and privacy of the system.

What is a healthy fee market? We haven’t figured that out yet. Is the block size algorithm easy to game? Can miners game it? Can users game it? Can you generate lots of transactions? There is very little field data on hard-forks. We have had incidents like the March 2013 fork. We have very little field data on hard-forks in general, on rolling them out to a large user populace.

Signaling bitcoin growth to external parties, some people are ((mistakenly)) looking at block size as a proxy for whether users can upgrade bitcoin and whether they should start projects on bitcoin. Is there going to be a crisis when block size changes? Users want predictability.

At a high level there is also the issue of miners mining without validating. As it relates to block size, today miners can choose to reduce the block size, but they can’t choose to increase the max block size. That’s the average block size over the past two years; it’s going up, not racing up. That’s the average over the past 180 days. It’s not really going up very much.

Thinking about the fee market. From a user experience standpoint, fees are very difficult to reason about and predict by design, that’s just how the system works. Fees are disconnected from transaction value, because fees are size-based. You might have a low-value transaction that is big in terms of bytes, so you are paying a high fee on a low-value transaction. You might have a super-large value transaction that has only one small UTXO, so the fee is tiny.
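
A quick illustration of that point, with made-up numbers: the fee depends on the size in bytes, not on the value being moved.

```python
# Illustrative only: fee = size in bytes * fee rate, regardless of the amount moved.
fee_rate = 50                                   # satoshis per byte, assumed for the example
low_value_many_inputs = 800 * fee_rate          # ~800 byte tx moving 0.001 BTC -> 40,000 sat fee
high_value_one_input = 200 * fee_rate           # ~200 byte tx moving 100 BTC   -> 10,000 sat fee
print(low_value_many_inputs, high_value_one_input)
```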

From a user point of view, they only have choices in terms of what they get for what they pay. You can pay a high fee, and that’s “I want it as soon as possible”; you can pay an average fee, slightly below the average fee, or zero and have a very long wait. These are the definitions from the user’s point of view. They don’t have direct control based on fees. The block generation times are noisy, you might have a burst of two blocks inside of a minute, and you might have to wait an hour for another block. Even if you pay a high fee, there’s no guarantee that it will confirm in the next ten minutes, only that it will confirm in the next block. From a user perspective, transaction fees are hard to reason about. Wallets have a difficult time figuring out what the best fee is to pay ((see Bram Cohen’s work on how wallets can handle transaction fee estimation)). How do you present that in the user interface?

The fee market status, the changes, the economics, market reaction, all plays into the block size as well. The fee market exists today in a narrow band based on simplistic wallet software fee behavior. If you think through some scenarios about block size changes, you think if you have full blocks and then we change the block size, that might reboot the fee market and then introduce chaos into the user experience. If you don’t have full blocks, then you might not have that hurdle. A large block size step might reboot the fee market.

Let’s move on to bip100. Have I been talking for 23 minutes? I don’t know if I have been time traveling or not. bip100’s theme is to shift block size from the developers to the miner market. The limit floats between 1 MB and 32 MB, changing at most 1.09% per diff period, which is 100% growth every 2.5 years. It’s a slow-motion miner vote. There’s a continuous 3-month voting window. At each 2 week diff period, the new size calculation looks at the last 3 months of coinbase votes, examines the 90th percentile of votes and takes the average. The activation method is flag day: on this day, 6 months from now, that’s when the network changes.

Analysis of bip100. This shifts block size selection to the miners. This avoids anointing developers as the FOMC for humans that pick constants. The miners have inputs on fee income. Community feedback is that it gives miners too much control, miners can sell votes costlessly, limit increase is too large. That last point was addressed. Miner control I will cover later.

That’s bip100 in a nutshell. That’s a template for some of the other proposals. This is my analysis, but what’s the community feedback as well? That’s quite relevant.

bip101 has the theme of predictable growth. Immediate jump to 8 MB, doubles every 2 years. Activation is 750 of 1,000 blocks to indicate support, then a 2 week grace period to turn on. It’s predictable increase, so that’s good from a user perspective. There’s no miner sensitivity. The fee market is significantly postponed, as blocks are a limited resource, and transaction fees are bidding for that limited resource, if you have 4 MB of traffic in 8 MB max block size, then you have no fee competition so fees would stay low. Community feedback is that it’s a big size bump. There was negative community reaction to politics around bitcoin-xt drama.

That’s all I have to say about that.

bip103’s theme is block size following technological growth. It increases the max block size by 4.4% every 97 days, leading to 17.7% growth per year. Activation is a while out to give time to upgrade. It is a predictable block size increase. No miner sensitivity. Fee market sooner. Not much community feedback, but the main criticism from redditors was that it is way too small an increase, so a later hard-fork would have to jump beyond this, and that 2017 activation is way too far out.
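
As a quick sanity check of those quoted numbers, compounding 4.4% every 97 days over a year:

```python
# 4.4% every 97 days compounds to roughly the quoted ~17.7% per year.
per_period = 1.044
periods_per_year = 365.25 / 97
annual_growth = per_period ** periods_per_year - 1
print(f"{annual_growth:.1%}")   # prints approximately 17.6%
```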

bip105 is consensus-based block size retargeting. Miners would vote to up or down the block size target by a max of 10%. Miners would pay with difficulty, providing a proportionally higher block hash target. You either get lucky, or you mine for a longer time. Miners have to pay a cost to increase the block size. It shifts the block size selection to miners, away from developers. Miners have an input on fee income. Community feedback has been little; my personal opinion is that pay-with-difficulty skews the incentives and it’s difficult for miners to reason about incentives. Meni Rosenfeld mentioned pay-to-future-miner concept. I do think there’s a potential modification to this called pay-to-future-miner, which Meni Rosenfeld proposed. You use CLTV to lock some funds but you make it an ANYONECANSPEND so you’re paying to whatever miner is 2 years in the future or 2 months in the future or whatever.

bip106 is dynamically-controlled block size max cap. There are two variants here. Variant one is per difficulty period. It says if blocks are full, then 2x the block size; if blocks are less than 50% full 90% of the time, then halve the block size. For the second variant, I encourage you to read the bip because the formula is quite complex. This is the interesting variant because it looks at transaction fee pressure. If transaction fee pressure is increasing, then it increases the block size. That’s pretty interesting. It shifts block size selection to miners, it avoids anointing high priests. Not much community feedback. Variant number one is easily gamed, you can fill the blocks with spam and pump up the max block size. Variant two is more interesting, but it encodes the economic idea of too much fee pressure. So you’re having software decide when fees are too high, so that’s an open question, needs analysis. Is it a good idea? Is it not? etc.
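
A sketch of variant one as literally described above; the thresholds and the "full" test here are illustrative assumptions, and the actual BIP text defines the exact trigger conditions.

```python
# One literal reading of bip106 "variant one", evaluated once per difficulty period.
def next_max_block_size(current_max: int, block_sizes: list[int]) -> int:
    n = len(block_sizes)
    full = sum(1 for s in block_sizes if s >= current_max)       # blocks at the limit
    small = sum(1 for s in block_sizes if s < current_max / 2)   # blocks under 50% full
    if full == n:                  # "if blocks are full, then 2x the block size"
        return current_max * 2
    if small >= 0.9 * n:           # "less than 50% full 90% of the time then halve"
        return current_max // 2
    return current_max

print(next_max_block_size(1_000_000, [1_000_000] * 2016))   # -> 2,000,000
```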

bip102: aka the backup plan. One-time bump to 2 MB. Activation is 6 months in the future flag day, with non-binding miner voting to signal hashpower for that type of upgrade. It’s intended as a backup option. It’s conservative with regards to block size limit. It’s conservative to the algorithm, it’s just changing a constant. It’s a conservative method for obtaining hard-fork field data, for all the unknowns that we don’t know. It’s highly predictable, always 2 MB. This seems acceptable to most people. This does not avoid the FOMC problem. It requires another hard-fork in the future to change again.

Similarly, Tadge really nailed it with bip numbering. BIP 248 summary is 2-4-8 proposed by Adam Back. I am calling this bip248. 2 megabytes now, 4 megabytes in 2 years, 8 megabytes in 4 years. I assume activation is similar to flag day with non-binding miner voting. This is very similar to bip102. Highly predictable. Allows us to learn from hard-forks and get field data about what goes wrong what goes right. Similar to bip102. Does not avoid FOMC problem. Requires another hard-fork like bip102 to move beyond this.

bip000 is again Tadge beat me on the bip numbering. That’s status quo. Keep the current block size until change is obviously necessary. Keep the current max block size limit. Keep the node numbers at the current numbers. It’s maximally conservative, it’s what’s been working today and will continue to work until the blocks are full. It goes against the economic majority of bitcoin startups. We can’t really sample users. There seems to be clear economic consensus that they want to increase, but not about target. If you ask exchanges, businesses, miners, those guys, they all want way bigger numbers for bragging rights. Staying the same, wallet software is not prepared for transaction fee estimation. It’s difficult for users to reason. ((Is any block size appropriate if everyone wants to store everything on the blockchain?)) Just like in politics elsewhere, policymaking in a crisis is probably the worst policymaking you could do. Centralization on the low-end, if you stick at 1 MB, there’s potentially users pushed into zero-conf transactions as fee pressure increases.

My personal thoughts, this is not speaking for anyone else except for myself, all vendor hats are off now. I think we need a small bump now to gather crucial field data. You can theorize and test and so on, but the internet and the real world is the best test lab in the world. You can only get that full accounting of field data if you actually do a real hard-fork. So the venture capital consensus wants to g beyond 1 MB. The technical consensus is that going above 1 MB is risky. I think it’s poor signalling to users.

We have been kicking the can down the road, we have integrated libsecp256k1 to increase validation speed and validation cost. These are big metrics in our system. We have been making positive strides on this. This should reduce some of the pressure to change the block size. The difficulty is finding an algorithm that cannot be gamed, cannot be bought, and is sensitive to miners. You can get 2 out of 3 but not all 3.

….

Q: …

A: … I haven’t seen any of that so far, well that’s not true, there’s someone there who has done some, Rusty has done some numbers. In general we need much more data.

Q: What fee is too much? What about the number of people? Perhaps this is more viable than fee number or fee level.

A: Yes, that’s one of the inputs to block size debate. It’s very difficult to measure users. ((We would have to keep verification super super cheap. Increasingly cheaper.)) … In the past, we would rather onboard bitcoin users with low-fees, rather than high-fees in bitcoin’s current stage. This is a valid economic choice as well,

Q: Bandwidth solutions over non-bandwidth solutions?

A: Both. That needs to continue. We have promising technologies which are risky. It’s risky whether they will be effective. The original vision was p2p decentralized cash, as described in the Bitcoin whitepaper.

Q: Miners think they need to choose one bip over another bip. I think it will be pretty easy to find consensus if miners could vote for several proposals that they approve. So the point would be just as they could vote for a single one, like bip100, bip102, and if one of those reach some threshold, then we could start talking about doing the hard-fork. What do you think about this?

A: That’s related to some of the roll-out proposals. How would you roll-out a hard-fork? You definitely want to engage users. A flag day was the preferred solution, 6 months in the future, but you want a non-binding miner vote to say the hashpower is ready for that hard-fork. So there’s two leaves to that rollout, and definitely it makes a lot of sense to have the hashpower say we support this proposal this proposal this proposal.

Q: Looking for comments on segregated witness stuff.

A: Totally cool and totally missed the talk.

Q: How do you expect we will arrive at consensus?

A: That’s the question of the hour. I think it’s more of a process whereby in Montreal we did some input data stage. In Hong Kong now, we’re looking at here are all the issues, the validation costs, the various proposals et cetera. Now step 3 is you take this back, you noodle with the busineses, users, miners ((but not the developers??)), then you get a rough consensus. My general response is you need to make your thoughts known. Everyone can know this is what jgarzik thinks, or this is what BitPay thinks. I think that transparency and discussion are the ways we find this. I think backroom deals, private visits to various people, that’s not the way to do it. You have to do it in public. That’s the open-source way.

Q: What’s preventing us from moving to 2 MB block sizes tomorrow?

A: Well, a lot of factors. There’s still resistance to change in general. There’s the valid concern of hard-fork risks, splitting the community. Lack of testing. Lack of patches. In general there has been a lot of work not on block size, so moving to it tomorrow would just not work. I think all of us in this room need to work towards testing, data, simulation, generate patches. It’s not a magic formula. It’s just a lot of hard work.

Q: 1 month?

A: That’s reasonable. You are going to be rolling out a patch, which says 6 months in the future we are going to do X. So you have 6 months of time to do additional testing and simulation etc.

Q: In one of those slides, you said 75% levels, so 750 out of 1000 blocks. That was bip101. What is your optimal choice for that? Is it 75% 80% 90%?

A: That’s why we’re leaning towards flag day with non-binding miner voting. It’s to avoid picking a specific number. You want to have a clear majority of hashpower on the fork that users prefer. I don’t want to pick a number. I don’t have a number. It’s just a supermajority of the hashpower. That’s step 2. But step 1 is the users agreeing by running the new software.

Q: Would you break SPV clients in a hard-fork?

A: Potentially, if that needs to be done.

Q: Would you deliberately do it?

A: No. I want to maintain backwards-compatibility if at all possible. Not all wallets are SPV. Some are SPV-light.

Q: Do you think increasing on-chain bandwidth will interfere with the value of bitcoin?

A: It will increase the value of bitcoin. It will signal that we are willing to enhance the system.

bip100 http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf

bip101 https://github.com/bitcoin/bips/blob/master/bip-0101.mediawiki

bip102 https://github.com/bitcoin/bips/blob/master/bip-0102.mediawiki

bip103 https://github.com/bitcoin/bips/blob/master/bip-0103.mediawiki

bip105 https://github.com/bitcoin/bips/blob/master/bip-0105.mediawiki

bip106 https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

2-4-8 (“bip248”) https://twitter.com/adam3us/status/636410827969421312 (better link available? let me know…)

how do bips work? https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki

\ No newline at end of file +https://www.youtube.com/watch?v=fst1IK_mrng&t=3h52m35s

slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/3_tweaking_the_chain_1_garzik.pdf

slides: http://www.slideshare.net/jgarzik/a-bevy-of-block-size-proposals-scaling-bitcoin-hk-2015

Alternative video: https://www.youtube.com/watch?v=37LiYOOevqs&t=1h16m6s

We’re going to be talking about every single block size proposal. This is going to be the least technical presentation at the conference. I am not going to go deep into the proposals themselves. Changing the block size itself is pretty simple: you change a constant. It’s more a discussion of algorithms, roll-out, etc.

What are some of the high level concerns with block size? We have been addressing technical scaling issues like the mempool and other work. I think this has been deferring some of the economic and game theory issues, which is not so good. There’s the “FOMC problem”, where humans pick the block size and are functionally choosing the economics, versus the free market, where the market chooses the block size but maybe it chooses wrong and runs bitcoin off a cliff.

You have to achieve a zen balance, where at the low end the users are forced onto Coinbase or other walled gardens, but at the high end most nodes can’t process big blocks, and you centralize that way when you force them off the network. Either way, you defeat the security and privacy of the system.

What is a healthy fee market? We haven’t figured that out yet. Is the block size algorithm easy to game? Can miners game it? Can users game it? Can you generate lots of transactions? There is very little field data on hard-forks in general; we have had incidents like the March 2013 fork, but very little data on rolling a hard-fork out to a large user populace.

Signaling bitcoin growth to external parties, some people are ((mistakenly)) looking at block size as a proxy for whether users can upgrade bitcoin and whether they should start projects on bitcoin. Is there going to be a crisis when block size changes? Users want predictability.

At a high level there are also miners mining without validating. As it relates to block size, today miners can choose to reduce the size of the blocks they produce, but they can’t choose to increase the max block size. That’s the average block size over the past two years; it’s going up, not racing up. That’s the average over the past 180 days. It’s not really going up very much.

Thinking about the fee market. From a user experience standpoint, fees are very difficult to reason about and predict by design, that’s just how the system works. Fees are disconnected from transaction value, because they are size-based. You might have a low-value transaction that is big in terms of bytes, so you are paying a high fee on a low-value transaction. You might have a super-large value transaction that has only one small UTXO, so the fee is tiny.

From a user point of view, they only have choices in terms of what they get for what they pay. You can pay a high fee, which means “I want it as soon as possible”; you can pay an average fee, a slightly below-average fee, or zero and have a very long wait. These are the definitions from the user’s point of view. They don’t have direct control over confirmation time based on fees. The block generation times are noisy: you might have a burst of two blocks inside of a minute, and you might have to wait an hour for another block. Even if you pay a high fee, there’s no guarantee that it will confirm in the next ten minutes, only that it will confirm in the next block. From a user perspective, transaction fees are hard to reason about. Wallets have a difficult time figuring out what the best fee is to pay ((see Bram Cohen’s work on how wallets can handle transaction fee estimation)). How do you present that in the user interface?
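
To make the size-versus-value point concrete, here is a toy calculation; the feerate and transaction sizes below are made-up illustrative numbers, not figures from the talk:

```python
# Fees are paid per byte of transaction data, not per unit of value moved.
feerate = 50  # satoshis per byte (assumed for illustration)

low_value_big_tx = {"value_btc": 0.01, "size_bytes": 1500}     # many small inputs
high_value_small_tx = {"value_btc": 500.0, "size_bytes": 226}  # one input, two outputs

for name, tx in [("low value, big tx", low_value_big_tx),
                 ("high value, small tx", high_value_small_tx)]:
    fee = tx["size_bytes"] * feerate
    print(f'{name}: {tx["value_btc"]} BTC pays {fee} sat in fees')
```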

The fee market status, the changes, the economics, market reaction, all plays into the block size as well. The fee market exists today in a narrow band based on simplistic wallet software fee behavior. If you think through some scenarios about block size changes, you think if you have full blocks and then we change the block size, that might reboot the fee market and then introduce chaos into the user experience. If you don’t have full blocks, then you might not have that hurdle. A large block size step might reboot the fee market.

Let’s move on to bip100. Have I been talking for 23 minutes? I don’t know if I have been time traveling or not. bip100 shifts block size selection from developers to the miner market. The limit floats between 1 MB and 32 MB, moving at most 1.09% per difficulty period, or 100% growth every 2.5 years. It’s a slow-motion miner vote. There’s a continuous 3-month voting window. At each 2-week difficulty period, the new size looks at the last 3 months of coinbase votes, examines the 90th percentile of votes, and takes the average. The activation method is a flag day: on this day, 6 months from now, that’s when the network changes.
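
As a rough illustration of the voting mechanics just described, here is a minimal sketch of how such a retarget could be computed; the percentile rule, the 1.09% step cap, and the clamping are my reading of the talk, not the normative bip100 algorithm:

```python
def bip100_new_limit(coinbase_votes, current_limit):
    """coinbase_votes: block size votes (bytes) from roughly 3 months of coinbases."""
    MIN_LIMIT, MAX_LIMIT = 1_000_000, 32_000_000   # limit floats between 1 MB and 32 MB
    MAX_STEP = 1.0109                               # ~1.09% per difficulty period

    votes = sorted(coinbase_votes)
    cutoff = votes[int(0.9 * (len(votes) - 1))]     # 90th percentile of votes
    considered = [v for v in votes if v <= cutoff]
    target = sum(considered) / len(considered)      # average of the considered votes

    # Move toward the target by at most ~1.09%, staying inside the 1-32 MB window.
    stepped = min(max(target, current_limit / MAX_STEP), current_limit * MAX_STEP)
    return int(min(max(stepped, MIN_LIMIT), MAX_LIMIT))
```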

Analysis of bip100. This shifts block size selection to the miners. This avoids anointing developers as the FOMC, the humans that pick constants. The miners have input on fee income. Community feedback is that it gives miners too much control, that miners can sell votes costlessly, and that the limit increase is too large. That last point was addressed. Miner control I will cover later.

That’s bip100 in a nutshell. That’s a template for some of the other proposals. This is my analysis, but what’s the community feedback as well? That’s quite relevant.

bip101 has the theme of predictable growth. Immediate jump to 8 MB, doubling every 2 years. Activation is 750 of 1,000 blocks to indicate support, then a 2 week grace period to turn on. It’s a predictable increase, so that’s good from a user perspective. There’s no miner sensitivity. The fee market is significantly postponed: blocks are a limited resource and transaction fees are bidding for that limited resource, so if you have 4 MB of traffic with an 8 MB max block size, you have no fee competition and fees would stay low. Community feedback is that it’s a big size bump, and there was negative community reaction to the politics around the bitcoin-xt drama.
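
A minimal sketch of the bip101 schedule and activation rule as summarized above; the real proposal grows the limit smoothly between doublings, which this step function ignores:

```python
def bip101_max_block_size(years_since_activation: float) -> int:
    # 8 MB at activation, doubling every 2 years (step function simplification).
    return int(8_000_000 * 2 ** int(years_since_activation // 2))

def bip101_activated(last_1000_block_versions, signal_bit: int) -> bool:
    # Activation once 750 of the last 1,000 blocks signal support,
    # followed by a 2 week grace period (not modeled here).
    return sum(1 for v in last_1000_block_versions if v & signal_bit) >= 750
```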

That’s all I have to say about that.

bip103’s theme is block size following technological growth. It increases the max block size by 4.4% every 97 days, leading to 17.7% growth per year. Activation is a while out to give time to upgrade. Predictable block size increase. No miner sensitivity. The fee market arrives sooner. Not much community feedback, but the main criticism from redditors was that it’s way too small an increase, so a later hard-fork would have to jump beyond this, and that 2017 activation is way too far out.
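
The quoted numbers are easy to sanity-check, since 4.4% every 97 days simply compounds over the year:

```python
periods_per_year = 365.25 / 97            # about 3.77 increases per year
annual_growth = 1.044 ** periods_per_year
print(f"{(annual_growth - 1) * 100:.1f}% per year")  # about 17.6%, close to the quoted 17.7%
```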

bip105 is consensus-based block size retargeting. Miners would vote to move the block size target up or down by a max of 10%. Miners would pay with difficulty, meeting a proportionally harder block hash target: you either get lucky, or you mine for a longer time. Miners have to pay a cost to increase the block size. It shifts block size selection to miners, away from developers. Miners have an input on fee income. There has been little community feedback; my personal opinion is that pay-with-difficulty skews the incentives and makes it difficult for miners to reason about them. There is a potential modification to this called pay-to-future-miner, which Meni Rosenfeld proposed: you use CLTV to lock some funds but make the output anyone-can-spend, so you’re paying to whatever miner is 2 years in the future or 2 months in the future or whatever.
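
One plausible way to build such a pay-to-future-miner output is a locktime check followed by an anyone-can-spend clause; the opcode values are standard Bitcoin script opcodes, but the script shape is only my sketch of the concept, not a specified format:

```python
OP_CHECKLOCKTIMEVERIFY, OP_DROP, OP_TRUE = 0xB1, 0x75, 0x51

def pay_to_future_miner_script(lock_height: int) -> bytes:
    # <lock_height> OP_CHECKLOCKTIMEVERIFY OP_DROP OP_TRUE
    # Before lock_height the output is unspendable; after it, anyone
    # (i.e. whichever miner wants it) can claim the funds.
    n = lock_height.to_bytes((lock_height.bit_length() + 8) // 8, "little")
    return bytes([len(n)]) + n + bytes([OP_CHECKLOCKTIMEVERIFY, OP_DROP, OP_TRUE])
```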

bip106 is a dynamically-controlled block size max cap. There are two variants here. Variant one works per difficulty period: if blocks are full, then 2x the block size; if blocks are less than 50% full 90% of the time, then halve the block size. For the second variant I encourage you to read the bip because the formula is quite complex. It is the interesting variant because it looks at transaction fee pressure: if transaction fee pressure is increasing, then it increases the block size. That’s pretty interesting. It shifts block size selection to miners, it avoids anointing high priests. Not much community feedback. Variant number one is easily gamed: you can fill the blocks with spam and pump up the max block size. Variant two is more interesting, but it encodes an economic judgment about how much fee pressure is too much. So you’re having software decide when fees are too high; that’s an open question and needs analysis. Is it a good idea? Is it not? Etc.
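
A sketch of variant one as paraphrased here; the exact fullness thresholds and look-back window are assumptions on my part, and variant two's fee-pressure formula is deliberately not attempted:

```python
def bip106_v1_next_limit(block_sizes, current_limit):
    # block_sizes: sizes (bytes) of the blocks in the last difficulty period.
    n = len(block_sizes)
    full = sum(1 for s in block_sizes if s >= current_limit)
    under_half = sum(1 for s in block_sizes if s < current_limit / 2)
    if full == n:                 # "if blocks are full, then 2x the block size"
        return current_limit * 2
    if under_half >= 0.9 * n:     # "less than 50% full 90% of the time"
        return current_limit // 2
    return current_limit
```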

bip102: aka the backup plan. One-time bump to 2 MB. Activation is a flag day 6 months in the future, with non-binding miner voting to signal hashpower readiness for that type of upgrade. It’s intended as a backup option. It’s conservative with regard to the block size limit. It’s conservative with regard to the algorithm: it’s just changing a constant. It’s a conservative method for obtaining hard-fork field data, for all the unknowns that we don’t know about. It’s highly predictable, always 2 MB. This seems acceptable to most people. It does not avoid the FOMC problem. It requires another hard-fork in the future to change again.

Similarly, Tadge really nailed it with the bip numbering. The 2-4-8 proposal from Adam Back is what I am calling bip248: 2 megabytes now, 4 megabytes in 2 years, 8 megabytes in 4 years. I assume activation would be similar to a flag day with non-binding miner voting. This is very similar to bip102. Highly predictable. It allows us to learn from hard-forks and get field data about what goes wrong and what goes right, similar to bip102. It does not avoid the FOMC problem, and it requires another hard-fork, like bip102, to move beyond this.
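
The 2-4-8 schedule is simple enough to write down directly; the activation date below is a placeholder assumption, since the proposal was only an outline:

```python
from datetime import date

def bip248_max_block_size(today: date, activation: date = date(2016, 1, 1)) -> int:
    # 2 MB now, 4 MB in 2 years, 8 MB in 4 years.
    years = (today - activation).days / 365.25
    if years < 2:
        return 2_000_000
    if years < 4:
        return 4_000_000
    return 8_000_000
```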

bip000: again Tadge beat me on the bip numbering. That’s the status quo. Keep the current block size until change is obviously necessary. Keep the current max block size limit. Keep the node numbers at the current numbers. It’s maximally conservative: it’s what’s been working today and will continue to work until the blocks are full. It goes against the economic majority of bitcoin startups. We can’t really sample users. There seems to be clear economic consensus that people want an increase, but not about the target. If you ask exchanges, businesses, miners, those guys, they all want way bigger numbers for bragging rights. If we stay the same, wallet software is not prepared for transaction fee estimation, and it’s difficult for users to reason about. ((Is any block size appropriate if everyone wants to store everything on the blockchain?)) Just like in politics elsewhere, policymaking in a crisis is probably the worst policymaking you could do. There is also centralization on the low end: if you stick at 1 MB, users are potentially pushed into zero-conf transactions as fee pressure increases.

My personal thoughts, and this is not speaking for anyone else except for myself, all vendor hats are off now: I think we need a small bump now to gather crucial field data. You can theorize and test and so on, but the internet and the real world is the best test lab in the world. You can only get that full accounting of field data if you actually do a real hard-fork. So the venture capital consensus wants to go beyond 1 MB. The technical consensus is that going above 1 MB is risky. I think it’s poor signalling to users.

We have been kicking the can down the road; we have integrated libsecp256k1 to increase validation speed and reduce validation cost. These are big metrics in our system. We have been making positive strides on this. This should reduce some of the pressure to change the block size. The difficulty is finding an algorithm that cannot be gamed, cannot be bought, and is sensitive to miners. You can get 2 out of 3 but not all 3.

….

Q: …

A: … I haven’t seen any of that so far, well that’s not true, there’s someone there who has done some, Rusty has done some numbers. In general we need much more data.

Q: What fee is too much? What about the number of people? Perhaps this is more viable than fee number or fee level.

A: Yes, that’s one of the inputs to the block size debate. It’s very difficult to measure users. ((We would have to keep verification super super cheap. Increasingly cheaper.)) … In the past, we would rather onboard bitcoin users with low fees than with high fees at bitcoin’s current stage. This is a valid economic choice as well.

Q: Bandwidth solutions over non-bandwidth solutions?

A: Both. That needs to continue. We have promising technologies which are risky. It’s risky whether they will be effective. The original vision was p2p decentralized cash, as described in the Bitcoin whitepaper.

Q: Miners think they need to choose one bip over another bip. I think it will be pretty easy to find consensus if miners could vote for several proposals that they approve. So the point would be, just as they could vote for a single one, they could vote for several, like bip100 and bip102, and if one of those reaches some threshold, then we could start talking about doing the hard-fork. What do you think about this?

A: That’s related to some of the roll-out proposals. How would you roll out a hard-fork? You definitely want to engage users. A flag day was the preferred solution, 6 months in the future, but you want a non-binding miner vote to say the hashpower is ready for that hard-fork. So there are two legs to that rollout, and definitely it makes a lot of sense to have the hashpower say we support this proposal, this proposal, this proposal.

Q: Looking for comments on segregated witness stuff.

A: Totally cool and totally missed the talk.

Q: How do you expect we will arrive at consensus?

A: That’s the question of the hour. I think it’s more of a process, whereby in Montreal we did the input and data-gathering stage. In Hong Kong now, we’re looking at here are all the issues, the validation costs, the various proposals et cetera. Now step 3 is you take this back, you noodle with the businesses, users, miners ((but not the developers??)), then you get a rough consensus. My general response is you need to make your thoughts known. Everyone can know this is what jgarzik thinks, or this is what BitPay thinks. I think that transparency and discussion are the ways we find this. I think backroom deals, private visits to various people, that’s not the way to do it. You have to do it in public. That’s the open-source way.

Q: What’s preventing us from moving to 2 MB block sizes tomorrow?

A: Well, a lot of factors. There’s still resistance to change in general. There’s the valid concern of hard-fork risks, splitting the community. Lack of testing. Lack of patches. In general there has been a lot of work not on block size, so moving to it tomorrow would just not work. I think all of us in this room need to work towards testing, data, simulation, generate patches. It’s not a magic formula. It’s just a lot of hard work.

Q: 1 month?

A: That’s reasonable. You are going to be rolling out a patch, which says 6 months in the future we are going to do X. So you have 6 months of time to do additional testing and simulation etc.

Q: In one of those slides, you said 75% levels, so 750 out of 1000 blocks. That was bip101. What is your optimal choice for that? Is it 75% 80% 90%?

A: That’s why we’re leaning towards flag day with non-binding miner voting. It’s to avoid picking a specific number. You want to have a clear majority of hashpower on the fork that users prefer. I don’t want to pick a number. I don’t have a number. It’s just a supermajority of the hashpower. That’s step 2. But step 1 is the users agreeing by running the new software.

Q: Would you break SPV clients in a hard-fork?

A: Potentially, if that needs to be done.

Q: Would you deliberately do it?

A: No. I want to maintain backwards-compatibility if at all possible. Not all wallets are SPV. Some are SPV-light.

Q: Do you think increasing on-chain bandwidth will interfere with the value of bitcoin?

A: It will increase the value of bitcoin. It will signal that we are willing to enhance the system.

bip100 http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf

bip101 https://github.com/bitcoin/bips/blob/master/bip-0101.mediawiki

bip102 https://github.com/bitcoin/bips/blob/master/bip-0102.mediawiki

bip103 https://github.com/bitcoin/bips/blob/master/bip-0103.mediawiki

bip105 https://github.com/bitcoin/bips/blob/master/bip-0105.mediawiki

bip106 https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

2-4-8 (“bip248”) https://twitter.com/adam3us/status/636410827969421312 (better link available? let me know…)

how do bips work? https://github.com/bitcoin/bips/blob/master/bip-0001.mediawiki

\ No newline at end of file diff --git a/scalingbitcoin/hong-kong-2015/overview-of-bips-necessary-for-lightning/index.html b/scalingbitcoin/hong-kong-2015/overview-of-bips-necessary-for-lightning/index.html index 0775f36de8..a326d79d4b 100644 --- a/scalingbitcoin/hong-kong-2015/overview-of-bips-necessary-for-lightning/index.html +++ b/scalingbitcoin/hong-kong-2015/overview-of-bips-necessary-for-lightning/index.html @@ -11,4 +11,4 @@ Tadge Dryja

Transcript By: Bryan Bishop

Tags: Lightning

Category: Conference

Media: -https://www.youtube.com/watch?v=fst1IK_mrng&t=1h5m50s

\ No newline at end of file +https://www.youtube.com/watch?v=fst1IK_mrng&t=1h5m50s

Scalability of Lightning with different BIPs and some back-of-the-envelope calculations.

slides: https://scalingbitcoin.org/hongkong2015/presentations/DAY2/1_layer2_2_dryja.pdf

I don’t have time to introduce the idea of zero-confirmation transactions and lightning. We have given talks about this. There’s some documentation available. There’s an implementation being worked on. I think it helps a lot with scalability by keeping some transactions off the blockchain. Ideally, many or most transactions. Instead, you would open and close channels. That’s all that needs to be on the blockchain. When it fails, it should fail back to the efficiency and security of Bitcoin. When people are cooperating, they should use this. When they stop cooperating, they can drop back to Bitcoin and be okay.

I think that’s a good tradeoff.

But how much can this get us? What do we need in order to get this? Can Lightning work today? Well, check back next week. bip65 is going to be active pretty soon. bip65 is not sufficient, but it’s necessary. We need relative timelocks, and the ability to reliably spend from an unconfirmed transaction, which segregated witness allows. OP_CLTV is almost active. OP_CSV (bip112) is maybe soon.

There are levels of lightning that we are prepared to accept. If we never get segregated witness, if we never get checksequenceverify, we can still use lightning, it just won’t be as good. Channels can work with only OP_CLTV (checklocktimeverify), but it’s much less efficient ((see here for why segregated witness is useful for lightning)). This could be ready to go next week.

With Lightning Network level 1, you need 1.25 MB/user/year. With Lightning Network level 3, you need 3 KB/user/year.

Lightning level 1 with OP_CLTV only. Only checklocktimeverify. To open a channel, this is assuming there is one participant funding the entire channel, which I think is going to be very common. Joseph in the next talk will talk about the topology of the lightning payment channels. This is a fairly optimistic size, it’s the minimum size you can have. You need 1 input to open a channel. You have some UTXO outpoint, you have a couple BTC, one signature spending from that, and then 3 outputs. 2 of them would be 2-of-2 multisig with some extra opcode, and one of them would be change back to the funder. The channel close transaction will be 2 inputs, 4 signatures because of 2-of-2 multisig for both, and 2 outputs back to the participants of the channel, which would be about 700 bytes. Signatures are what take space and time in these transactions. Putting in new outputs is pretty easy. Channel duration would be, say, 1 week; it might be longer or shorter, I am just making this up. In the CLTV-only model, the channel duration is fixed at the beginning of this sequence of transactions. It would last at least 1 week as well. If the other guy on your channel disappears or becomes uncooperative, you have to wait the whole duration. It takes a transaction to open the channel; if you make the length really long, you can keep it open longer and do more things with it, but if your counterparty disappears, then you have to wait for up to a month. If the channel is opened for only a day total, then if the counterparty disappears, you can get your money back the next day. But you have to renew the channel daily.
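
A back-of-the-envelope size estimate along these lines; the per-component byte counts are rough approximations I am assuming, and the real scripts with their extra opcodes would be somewhat larger, which is consistent with the ~700 byte figure for the close:

```python
INPUT_BASE, SIG_AND_KEY, OUTPUT, TX_OVERHEAD = 41, 107, 34, 10  # rough bytes per component

open_tx = TX_OVERHEAD + 1 * (INPUT_BASE + SIG_AND_KEY) + 3 * OUTPUT       # 1 input, 3 outputs
close_tx = TX_OVERHEAD + 2 * (INPUT_BASE + 2 * SIG_AND_KEY) + 2 * OUTPUT  # 2 inputs, 2-of-2 sigs each, 2 outputs
print(open_tx, close_tx)   # roughly 260 and 590 bytes before the multisig redeem scripts
```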

Channel monitoring, if your counterparty goes rogue and tries to broadcast a previous channel state to rip you off, in CLTV-only situation, the user must verify and must observe that. You must watch the blockchain for those 20 byte P2SH scripts. If you see that script, you know your counterparty is trying to rip you off, so then you would submit the revocation hash. It’s safe, but you have to keep watching during that 1 week.

There’s also a generalized channel awesomeness rating, which in this case is low. Just some subjective measurement.

The second lightning level is OP_CSV, or OP_MATURITY, or whatever is the coolest name. It’s relative to when the input got into a block, so it’s like a relative checklocktimeverify. This is a cool opcode. It lets you say you can spend this, but only once it’s a week old, or you know, a thousand blocks old. This allows a lot of cool things for lightning channels. The channel open and close transactions are about the same size. The channel duration is indefinite and you can keep it open for as long as you want. Both parties can cooperatively close it “immediately”. There is a timeout period for the uncooperative close, which you can make perhaps 1 week like in the other case: if the counterparty disappears and he’s gone, you can push the current channel state to the blockchain and then wait a week to collect the money back to a non-lightning P2SH output. You could tweak this from 1 week to 1 day or 1 month or something. You have to watch the blockchain at least once during that period. You have to check the phone or blockchain at least once a week or something. That checksequenceverify timeout matters for channel monitoring, which must still be done by the user.
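
For reference, relative timelocks of this kind are expressed in a transaction input's nSequence field; here is a sketch following the BIP 68 encoding, which was still a draft at the time of this talk, so treat the exact bit layout as an assumption:

```python
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22   # set: value is in 512-second units; unset: block count

def relative_locktime_nsequence(blocks=None, seconds=None) -> int:
    if blocks is not None:
        return blocks & 0xFFFF
    return SEQUENCE_LOCKTIME_TYPE_FLAG | ((seconds // 512) & 0xFFFF)

one_week_csv = relative_locktime_nsequence(seconds=7 * 24 * 3600)  # "spendable once it's a week old"
```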

The monitoring itself could be done by a third party. The third party could inform the user: you could tell them, if you ever see these 20 bytes on the blockchain, please send me an email. They could email you and tell you to sign to get your money back; the user still has to sign, of course. The user still has to do something.

The generalized channel awesomeness of the second type, is medium. It helps a lot. It’s good.

Lightning level 3, which includes SIGHASH_NOINPUT and segregated witness… This can reduce the channel size by half. It reduces it a lot. Channel duration is still indefinite, if you have OP_CSV. The channel monitoring can be outsourced in a sort of anonymous, trustless way: if you see these 20 bytes on the blockchain, please broadcast this transaction. The general channel awesomeness of this channel type is high.

The way to do this is to have a reliable spend of an unconfirmed transaction. It deals with malleability in that, I can know, here is the txid of a transaction. It’s a transaction spending from… the txid could be changed, so you can’t sign. You have to watch for what the txid is. So there are two ways to fix this. Segregated witness is one of them. Segregated witness is pretty cool. It’s also kind of weird, but in a good way. We might want some more time to test it out and figure out how to do it; it’s a pretty big rethinking of how things work. I think that it’s cool and we’ll use it if it gets in. There’s a simpler way, which is called SIGHASH_NOINPUT. It does not fix malleability, because the txid can still be changed, but the signatures can be applicable to multiple txid values. The txid is not included in the signature, so if the txid changes, it’s okay. What I’m including in the signature are my outputs and my input scripts, so it’s still referencing the correct script, and I’m signing with the pubkey, so as long as I don’t reuse pubkeys, it’s really safe. There are some risks to this. If you did it the wrong way, you could for example send 2 bitcoins to a pubkeyhash, send 3 bitcoins to that same pubkeyhash, and then try to spend the two, and sign, and you’re trying to send the original 2 BTC; well, a miner could redirect that signature to the 3 BTC output, and cause you to spend 3 and take that extra 1. So this is somewhat dangerous in that you don’t want to reuse addresses with this sighash type. You could do this with a soft-fork, and do it with multisig only so that it’s much less likely that someone would replay the signature. It would require, in 2-of-2 multisig, for both people to reuse the same pubkey, and they both would have to be doing something stupid, which is much less likely. If you were going to do a soft-fork, you could also remove the extra 0 bug in multisig. There was a pathological transaction a few months ago where the whole block was a single transaction with thousands of inputs, and you were hashing an enormous amount of data per signature. This helps a little. Segregated witness would also allow this lightning level 3 channel, however.

I would now want to figure out how many people can use this stuff, given 3 different levels based on different BIP availability. We could have 1 megabyte, which is like inertia, what we have now. What about with BIP 2-4-8, and where we meet in 10 years? What was discussed yesterday, bip100: this would be where miners vote and decide, but still with a 32 MB cap that they would be deciding within the range of. These seem like reasonable sizes to discuss. The 3 bips for lightning channels seem possible.

The assumptions I’m making are that half of the transactions in blocks are channel open and channel close transactions. Half of the LN closes and opens are merged, such that you close and immediately open another channel in the same transaction. This is quite possible if you have the cooperation of your counterparty, like Alice saying to Bob, hey, I am going to open a channel with Carol, please make this switch on the blockchain with me. Also, I am assuming there are no non-cooperative channel closes; hopefully only a fraction of the channels will end uncooperatively. It will happen, but hopefully it will be extremely rare. Also, let’s assume that channels tend to last about 6 months.

At level 1, we would need 150 channels/year, 150 closes, 75 opens. With levels 2 and 3, you need 6 channels/year, 6 closes, 3 opens. So these are the basic results of the scalability matrix. (see slide).

When you only have OP_CLTV, it’s about 1.25 MB per user per year. At level 2, it’s kilobytes. You could have 5 million users with OP_CSV, and 42 million users at the roughly 5 KB per user per year of level 2. With fairly reasonable block sizes, you could get 100s of millions of users. Users are the right metric. How many transactions are they going to do? Well, as many as they want. Once you have the channel open, you can do 100s of transactions per hour, there is plenty of room. To some extent these are pretty vague, but I would say if you do have 200 million people using the lightning network, it’s probably the case that not half but more like 80 to 90% of the blockchain transactions are LN related, in which case you could probably get to 800 million users.
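
Those figures can be roughly reproduced from the stated assumptions (6 closes and 3 opens per user per year at levels 2/3, half of block space used for opens and closes); the per-transaction sizes are my approximations:

```python
close_bytes, open_bytes = 700, 300                       # assumed transaction sizes
per_user_per_year = 6 * close_bytes + 3 * open_bytes     # about 5 KB per user per year at levels 2/3

blocks_per_year = 6 * 24 * 365                           # about 52,560 blocks
for max_block_size in (1_000_000, 8_000_000):
    ln_bytes_per_year = 0.5 * max_block_size * blocks_per_year   # half of block space for LN
    users = ln_bytes_per_year / per_user_per_year
    print(f"{max_block_size // 1_000_000} MB blocks -> ~{users / 1e6:.0f} million users")
```

With 1 MB blocks this lands around 5 million users and with 8 MB blocks around 41 million, in the same ballpark as the numbers quoted above.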

LN can help out with scaling. Relative locktime helps out a lot. Going from level 1 to level 2 is huge. OP_CSV is really really helpful: you go from 1.25 megabytes to kilobytes. Segwit improves scalability significantly. It’s not the same amount of improvement going from level 2 to level 3 as from level 1 to level 2, but it’s still useful. We can support a significant fraction of all humans with reasonable block sizes.

Q: On the last slide, one of the assumptions was 3 channels per person. Assuming payment channels wouldn’t be useful for retail sales, because you don’t want to open a channel just to buy a coffee and close it immediately. Is that correct?

A: Joseph might expand on this. Let’s say you buy a coffee. You’re probably buying a coffee only once, right? Well, maybe the coffee is $5, and you put $50 into the channel and leave it open. Then someone else comes to the coffee shop and she does the same thing. But she has a channel with the grocery store. There’s me, coffee shop, Alice, grocery store, they all have channels. When I go to the grocery store next time, I don’t have to open a channel. Payments are routed.

Q: It sounded like everybody would have to open new channels.

A: I am guessing the mean is going to be 3, but it will probably be an exponential distribution. Most people will probably have 1 channel, and then some might have 100s of channels open.

Q: You can make the sighash noinput slightly safer by also signing the input value.

A: Yes, that would be an option.

Q: Do they have to store lots of money in a hot wallet?

A: Yes. There’s going to be people with lots of channels and lots of hops. If you look at graph theory and how this works, it ends up working a lot better than expected. Even if you have 100 million nodes and each node only has 1 to 3 channels, 6 or 7 hops is the most you would ever need. If you are ever unable to route a payment through the channels in your subgraph, you can always open a new channel anyway.

Q: How hard is it to .. do automatically… ? Path discovery and routing.

A: That’s another whole issue. It’s cool, hard work, we’re working on it. It’s not really bitcoin-related. The bitcoin network doesn’t see that stuff. It never ends up on the blockchain. It’s not something we have to do for Bitcoin. The lightning clients can handle that. It’s not too bad. There’s a lot of research in doing that.

Q: What about including privacy?

A: Yes, there’s privacy benefits, you could do onion routing for payments. That might be later-on stuff. Initially we want something perhaps less private, just to get something working.

Q: With those large numbers that this would be able to support, what would happen if a big hub goes uncooperative or malicious, and large number of users have to settle suddenly? How would this play out in the different levels you described?

A: The first line of defense is to try to make sure there isn’t a mega-hub of some kind with millions of channels. If that node goes down or goes rogue, you’re going to have a backlog of millions of people trying to close out of their channels. If it exceeds the OP_CSV time, then bad things happen and people could lose money. So first, we need to make sure the network doesn’t look like that. Some people have talked about a timestop, where if blocks are super-congested, the checksequenceverify doesn’t increment. If you have a huge backlog of people trying to close out of channels, then we could have a statute-of-limitations freeze perhaps. Maybe an easier way is to make sure the topology doesn’t look like that.

Thank you.

small correction about the scalability matrix: https://twitter.com/tdryja/status/673705980668993537

Other lightning presentation from same day: http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/network-topologies-and-their-scalability-implications-on-decentralized-off-chain-networks/

Some other resources:

http://lightning.network/

http://lightning.network/lightning-network-summary.pdf

http://lightning.network/lightning-network-technical-summary.pdf

http://lightning.network/lightning-network-paper.pdf

other presentation https://www.youtube.com/watch?v=8zVzw912wPo and slides http://lightning.network/lightning-network.pdf

SF Bitcoin Social presentation slides http://lightning.network/lightning-network-presentation-sfbitcoinsocial-2015-05-26.pdf

slides re: time and bitcoin http://lightning.network/lightning-network-presentation-time-2015-07-06.pdf

http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-failure-modes-and-the-role-of-the-lightning-network/

https://github.com/ElementsProject/lightning

https://gnusha.org/url/https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

irc.freenode.net #lightning-dev

\ No newline at end of file diff --git a/scalingbitcoin/hong-kong-2015/validation-cost-metric/index.html b/scalingbitcoin/hong-kong-2015/validation-cost-metric/index.html index fa814b680b..ae40741666 100644 --- a/scalingbitcoin/hong-kong-2015/validation-cost-metric/index.html +++ b/scalingbitcoin/hong-kong-2015/validation-cost-metric/index.html @@ -86,4 +86,4 @@ Also it guarantees that average blocks are always allowed to have the maximum block size.

Conclusion

In conclusion, Bitcoin places various resource requirements on full nodes, and it is essential that blocksize proposals account at least for the most important ones, or extreme worst cases become possible. A cost metric helps with that because it sets the requirements in relation to each other.

We’ve seen that estimating a function for validation cost only is straightforward when assuming a reference machine, collecting data, and fitting a linear function.

A more complete cost function that includes bandwidth, validation and utxo requirements is difficult to derive from the bottom up. -But as we showed we can build on existing blocksize proposals to get some of the advantages of a cost metric,

  • such as confining the worst-case while
  • allowing to trade-off various block aspects
  • and setting the right incentives.

see also https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011662.html

\ No newline at end of file +But as we showed we can build on existing blocksize proposals to get some of the advantages of a cost metric,

  • such as confining the worst-case while
  • allowing to trade-off various block aspects
  • and setting the right incentives.

see also https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011662.html

\ No newline at end of file diff --git a/scalingbitcoin/milan-2016/coin-selection/index.html b/scalingbitcoin/milan-2016/coin-selection/index.html index 07dbbfe68d..d0f87d6731 100644 --- a/scalingbitcoin/milan-2016/coin-selection/index.html +++ b/scalingbitcoin/milan-2016/coin-selection/index.html @@ -10,4 +10,4 @@ < Milan (2016) < Coin Selection

Coin Selection

Speakers: Mark Erhardt

Transcript By: Bryan Bishop

Category: -Conference

Simulation-based evaluation of coin selection strategies

https://twitter.com/kanzure/status/785061222316113920

Introduction

Thank you. ((applause))

Hi. Okay. I’m going to talk about coin selection. I have been doing this for my master thesis. To take you through what I will be talking about, I will be shortly outlining my work, then talking about coin selection in general. And then the design space of coin selection. I will also be introducing the framework I have been using for simulating coin selection. Also I will talk about my results.

To get right started, you may have seen this graph before, created by Pieter Wuille. What it shows is that the set of unspent transaction outputs is growing pretty quickly: it has doubled since last year, and grown 7x in the last 3 years. Both the value of the utxos and their size in megabytes are growing; we’re at more than 1.4 GB of UTXO set size now. Why is that a problem? Well, for miners to be most competitive, they have to keep the UTXO set in RAM to verify blocks as quickly as possible. This is more of an economic problem for smaller miners.

How has it come about that the UTXO set is growing so much? We have more coins every day. The average utxo value has shrunk from almost 1 bitcoin 2 years ago to less than 0.4 bitcoin. Looking at Core for example, it has a minimum change output of 1 bitcent, which means we have a lot of room to grow this UTXO set still. So maybe we have to think about what we can do to make it smaller. We have heard today that segwit will give a discount for spending inputs, and if we get to Schnorr signatures that will help to combine and aggregate signatures and make inputs even cheaper to spend. I think coin selection can also help us to reduce the UTXO set size.

Coin selection

Coin selection is basically just the question of how do we spend the coins we have, to fund transactions? Which ones do we use to fund the transaction? My hypothesis for my master thesis was to improve coin selection and have an impact on UTXO set size. Just looking at the hard constraints of the coin selection problem, you basically have to use the UTXOs that your wallet has. And then you have to pay for the transaction and the fee. Since we don’t want to create non-standard transactions, we must not create dust outputs, and our change has to be a certain size as a result.

We want to minimize the cost for the user, so we have to minimize inputs. On the other hand, we want to reduce the UTXO set size, so we have to use more than a few UTXOs because that’s the only way to shrink the set. Also we want users to have as much privacy as possible. So we have to pick one or two of those and doing all three is hard.

Conditions and factors

Priority-based selection for inclusions into blocks has largely changed to a fee market now, for miners to optimize their fee income. Block space demand has been increasing over the past few years. So the coin selection background has changed a little bit over the past few years. And then there’s some factors that are partially individual– it depends on what kinds of payments you’re doing. If you’re doing very large payments, you might need a different coin selection algorithm. Short-term cost is a different target compared to total cost over lifetime of the wallet. If you are looking at merchants might use bitcoin and how a mobile wallet user might use bitcoin, it’s very different. Mobile wallet client might get a big payment incoming and then make lots of small payments, whereas merchants might have the opposite trend.

One of the interesting thing is that a factor that can influence this is the size of change outputs. This also has a large impact on the utxo set composition going forward in your wallet.

Improvement

There has been quite a bit of speculation about coin selection so far. I wanted to introduce some of the ideas floating around.

Luke-Jr suggested we hsould try to ccreate change outputs up to the average size of the payments that the wallet has been making. So look at the payments the wallet has made over time, and adapt change to that. Another idea is that instead of creating tiny change, and have a little leftover beyond what we want to pay; instead of giving it back to the user, perhaps give it to the miner. It can be more costly to spend a UTXO than the value created by the UTXO if it is very small.

We could generally try to match the spending amount with the change output. If we’re trying to spend 1` coin or create a change output of 1 coin, which was speculated to probably have good properties in the wallet if we repeat the same size payment over time. This has come up quite often. Oh, one more. I wanted to try random selection on the utxo set size.

These ideas have been floating around, but not quantified so much. So what is a good coin selection algorithm?

Simulator

I created a coin selection simulator. We have a scenario that is just a stack of payments that happen in the simulation. We have a next payment coming in, it might be an incoming payment, and then we add one more unspent transaction output to the unspent transaction output pool of the wallet. If it’s an outgoing payment, we consider a fee level and a block height. We create the transaction. And then the transaction uses up some of the UTXOs, and it might create a new change output to the UTXO set size. This is basically how it works in regular wallets. I just don’t look at the network stack and stuff.

So what I’m considering is the selection policy of the wallet that I’m simulating. I am simulating fees, but it’s a fixed number, it’s 10,000 satoshi BTC per kilobyte. I have looked at this with different transaction formats, like P2PKH and P2SH and P2WPKH from segwit. For some of the coin selection approaches, it’s important how old the UTXO is for the selection, so I have also added blockheight into my simulation.

What I don’t do yet– which might be a disappointment to some– I have not yet implemented addresses into the simulation, so I can’t talk so much about the privacy impact of coin selection algorithms.

The most interesting part of the simulator is this box where coin selection works. I have looked at some of the prevalent policies in the space. Breadwallet has a policy of just spending the oldest UTXOs first, first-in first-out (FIFO). They also have some change outputs that get added to the miner fee. That’s one of the most commonly used coin selection algorithms in bitcoin because a lot of people use Electrum and Breadwallet.

As a second one, I looked at Mycelium, which uses a similar approach. They select by oldest first. After they have selected, they minimize the input set by removing the smallest inputs. They also add change, and they select the higher limit to add change to the fee. I think a lot of people are also using mycelium.

The bitcoin wallet for android is also popular. I implemented their approach partly, but the core idea is that it is select by priority, which was the prevalent paradigm before the fee market. This is the age in blockheight times the value in satoshi BTC. If UTXOs have aged for a while, the value of the UTXO is the more important part of the priority calculation. The implications of this will show up in the simulation results.

I also looked at electrum “private mode” which has a different idea about matching the target amount.. They select random buckets and select the bucket that has the least distance between the change output and target amount. I have not simulated this.

In Bitcoin Core, their coin selection uses… I put it over here. It is not the most easiest way to go about it. Bitcoin Core’s main idea could be described as trying to create a direct match for the amount to be paid. It tries to hit this with 0 satoshi difference. First it will look for a direct match UTXO. Second it will try to find all the UTXOs that are smaller than what they want to spend, which is useful. And third it just tries to do a knapsack solver and add up some UTXOs to create another match, and if that fails, it does a knapsack solver to select the smallest possible combination to spend a transaction and create a minimum change of 10 millicoins. It created me quite a while to figure out what Bitcoin Core does. One funny thing is that it first estimates the fee, and then it does try to find a selection, and then it finds out that zero-fee can’t work, and then it starts again with a higher fee estimated on top of the previous solution. I think we might be able to improve that later sometime.

Data

I’ve been using the only available real-life data set. This was provided by Mr…. is he here? I haven’t met him yet. He offered his data set on github. It has 24,388 incoming payments and 11,860 outgoing payments. It’s basically the only big set of real-life data that I know about. Running on this data set, here’s a histogram of all the payments on the set. It kind of looks roughly gaussian, but it has spikes at round numbers. I think the most common payment is about 10 millibitcoin or 1 bitcent.

Simulation results

policynum utxochange mBTCtotal cost mBTCinputs
FIFO182.87399.62629.073.03
pruned FIFO763.73169.93623.392.91
highest priority2551.52789.52629.052.50
Core180.3031.75819.033.05
  • FIFO maintains almost as few UTXO as Core.

  • Pruned FIFO and highest priority accumulate small UTXO

  • Bitcoin Core: overpays fees, computationally expensive, only 0.5% Direct Matches (63 of 11860)

It has this queue of all transactions and it basically selects them until it has enough money to pay for a transaction and then it minimizes the set. It leaves always the smallest UTXOs left over. It leaves the smallest UTXOs at the front of the queue. And then it will create a huge transaction that spends a bunch of them. Otherwise it keeps them in the UTXO set forever.

With bitcoinj, the highest priority approach, basically means always use the biggest UTXO you have in your wallet. It does grind down largest UTXOs to smaller UTXOs until basically they are not of high enough priority to be spent. And lastly, because of the way that Bitcoin Core estimates fee, by first selecting a bunch of inputs and then estimating how much that would cost to spend, if it can’t spend a transaction at that point, then it will start that process over again with the previously estimated fee. If it picked 10 inputs and then realized it didn’t have enough to pay for it, and then again to select for 2, then it will still pay for 10. This makes Bitcoin Core coin selection a little bit more expensive than the other approaches.

Also, for all the effort that we put into making direct approaches for Bitcoin Core, we could only do 63 of 11860 payments as direct matches. It seems much more likely to make a change output and think about what size we want, and reduce complexity of coin selection.

Here’s one more figure to look at. This is the surviving UTXOs in each of the wallets after the simulation has finished. What you can see here is that the FIFO approach has all kinds of UTXO sizes because it just spends them as it goes. On mycelium where it prunes the smallest outputs, it has almost 400 of these 1 satoshi outputs even after the end of 36,000 payments. The most prevalent UTXO set size lower than 10,000 satoshi, so it has a bunch of really small UTXOs left over. The third one here is the bitcoinj highest priority approach, and also as you can see it has grinded down all the big UTXOs to small UTXOs. And in Bitcoin Core, it has only … UTXOs left, and they are all pretty big size.

Simulation results with other strategies

PolicyUTXOchange (mBTC)total cost (mBTC)inputs
Average target137.89207.39767.083.04
Wider match donation165.2432.95829.383.02
Double target225.0198.39832.413.03
Single random draw (no MC)185.16384.43629.133.03
Single random draw (0.01 BTC)173.27424.15628.983.04
Core180.3031.75819.033.05

Conclusion

I have hopefully shown that we can do a few improvements on coin selection in the future. We will be able to find coin selection simulation framework on github later this month when I am finished writing my thesis which is due in 3 weeks. I will probably notify you on bitcoin-dev mailing list. I am hoping in the future, I don’t think it’s a hard change, to add in addresses into my coin simulation framework and think more about what privacy implications it has. We can also use it to do multisig addresses and all kinds of stuff later because the size of transactions that I’m simulating are just a few hard-coded or just a few variables in the framework. That’s it.

Q&A

Q: What about grouping by correlation? Trying to reuse them such that you can always reuse– if they are already correlated such that you don’t leak?

A: Linked to the same address? That has been a frequent request and I have not looked at it yet, because I am not modeling addresses yet. But that’s definitely something to look at.

Q: Why not pull data from the blockchain?

A: It’s easy to crawl blocks and add up transactions. It’s hard to model why that would be representative of any use case on the network because you don’t know if the payments are from the same person. Here the use case is a gambling website, so lots of small payments and if someone wins it’s a larger payout. If I could have more data, I would like to try with different data sets.

Q: An average user of the person on a blockchain is not the same kind as the distribution you see on the chain itself; some users have many transactions. You need something that works for every use case rather than average use case. How do you scale from this– what happens if you double the amounts, or half the amounts? How does it impact the results?

A: Haven’t done that yet. I was thinking of grouping incoming and outgoing payment amounts to have the same ingoing and outgoing amounts. I just wanted to present the one use case that I had real-life data from.

Q: Do you have timing data?

A: Unfortunately, no. Just the values.

Q: In your summaries, it seemed there was a bad tradeoff between being efficient with the UTXO and minimizing the fee in the sense that the fee strategies tend to follow the UTXO, and compact UTXO strategies tend to do the reverse. Have you had the chance to model what would happen if the data set you had was re-done with segwit, would there be a strategy for minimal fees that also minimized UTXO use?

A: Pieter suggested that a few weeks ago. I ran the numbers with P2WPKH for witness input and output scripts. It did not change much. It’s still slightly in favor of outputs than inputs. So for these strategies the cost is roughly half of what was shown, but the UTXO footprint and all that seems mostly the same.

References

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-September/013131.html

paper http://murch.one/wp-content/uploads/2016/09/CoinSelection.pdf

https://github.com/Xekyo/CoinSelectionSimulator/blob/master/scala/ops9.txt

\ No newline at end of file +Conference

Simulation-based evaluation of coin selection strategies

https://twitter.com/kanzure/status/785061222316113920

Introduction

Thank you. ((applause))

Hi. Okay. I’m going to talk about coin selection. I have been doing this for my master thesis. To take you through what I will be talking about, I will be shortly outlining my work, then talking about coin selection in general. And then the design space of coin selection. I will also be introducing the framework I have been using for simulating coin selection. Also I will talk about my results.

To get right started, you may have seen this graph before, created by Pieter Wuille. What it shows is that the set of unspent transaction outputs is growing pretty quickly: it has doubled since last year and grown 7x over the last 3 years. In terms of the value of the utxos and the size in megabytes, we're at more than 1.4 GB of UTXO set size now. Why is that a problem? Well, for miners to be most competitive, they have to keep the UTXO set in RAM to verify blocks as quickly as possible. This is more of an economic problem for smaller miners.

How does it come about that the UTXO set is growing so much? We have more coins every day. The average utxo value has shrunk from almost 1 bitcoin 2 years ago to less than 0.4 bitcoin. Looking at Core for example, they have a minimum change of 1 bitcent, which means we have a lot of room to grow this UTXO set still. So maybe we have to think about what we can do to make it smaller. We have heard today that segwit will give a discount for spending inputs, and if we get to Schnorr signatures that will help to combine and aggregate signatures and make inputs even cheaper to spend. I think coin selection can also help us to reduce the UTXO set size.

Coin selection

Coin selection is basically just the question of how we spend the coins we have to fund transactions: which ones do we use to fund the transaction? My hypothesis for my master thesis was that improving coin selection can have an impact on UTXO set size. Just looking at the hard constraints of the coin selection problem, you basically have to use the UTXOs that your wallet has. And then you have to pay for the transaction and the fee. Since we don't want to create non-standard transactions, we must not create dust outputs, and our change has to be a certain size as a result.

We want to minimize the cost for the user, so we have to minimize inputs. On the other hand, we want to reduce the UTXO set size, so we have to use more than a few UTXOs, because that's the only way to shrink the set. Also we want users to have as much privacy as possible. So we have to pick one or two of those; doing all three is hard.

Conditions and factors

Priority-based selection for inclusion into blocks has largely changed to a fee market now, with miners optimizing their fee income. Block space demand has been increasing over the past few years. So the background for coin selection has changed a little bit over the past few years. And then there are some factors that are partially individual: it depends on what kinds of payments you're doing. If you're doing very large payments, you might need a different coin selection algorithm. Short-term cost is a different target compared to total cost over the lifetime of the wallet. If you look at how merchants might use bitcoin versus how a mobile wallet user might use bitcoin, it's very different. A mobile wallet client might get a big payment incoming and then make lots of small payments, whereas merchants might have the opposite trend.

One interesting factor that can influence this is the size of change outputs. This also has a large impact on the utxo set composition of your wallet going forward.

Improvement

There has been quite a bit of speculation about coin selection so far. I wanted to introduce some of the ideas floating around.

Luke-Jr suggested we should try to create change outputs up to the average size of the payments that the wallet has been making. So look at the payments the wallet has made over time, and adapt the change to that. Another idea is that instead of creating tiny change when there is a little left over beyond what we want to pay, instead of giving it back to the user, perhaps give it to the miner. It can be more costly to spend a UTXO than the value carried by the UTXO if it is very small.

We could generally try to match the spending amount with the change output: if we're trying to spend 1 coin, also create a change output of 1 coin, which was speculated to have good properties if the wallet repeats payments of the same size over time. This has come up quite often. Oh, one more. I also wanted to try random selection on the utxo set.

These ideas have been floating around, but not quantified so much. So what is a good coin selection algorithm?

Simulator

I created a coin selection simulator. We have a scenario that is just a stack of payments that happen in the simulation. We take the next payment: if it is an incoming payment, we add one more unspent transaction output to the unspent transaction output pool of the wallet. If it's an outgoing payment, we consider a fee level and a block height, and we create the transaction. The transaction uses up some of the UTXOs, and it might add a new change output to the UTXO set. This is basically how it works in regular wallets. I just don't look at the network stack and stuff.
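
As a rough illustration of the loop just described, here is a minimal sketch; the payment format, constants and fee handling are simplifying assumptions rather than the thesis code, and the policy shown is a plain oldest-first (FIFO) selection.

```python
# Minimal sketch of the simulation loop described above (illustrative only).
# A payment is a dict with "amount" (positive = incoming, negative = outgoing)
# and "height"; the wallet pool is a list of {"value", "height"} UTXOs.
FEE_PER_KB = 10_000          # satoshi per kilobyte, as in the talk
DUST_LIMIT = 546             # assumed threshold below which change goes to fees

def fifo_select(utxo_pool, target):
    """Oldest-first (FIFO) selection."""
    selected, total = [], 0
    for utxo in sorted(utxo_pool, key=lambda u: u["height"]):
        selected.append(utxo)
        total += utxo["value"]
        if total >= target:
            return selected, total
    raise ValueError("insufficient funds")

def simulate(payments, policy=fifo_select):
    pool, total_fees = [], 0
    for p in payments:
        if p["amount"] > 0:                          # incoming payment
            pool.append({"value": p["amount"], "height": p["height"]})
            continue
        target = -p["amount"]
        fee = FEE_PER_KB // 4                        # crude fixed-size estimate
        inputs, total_in = policy(pool, target + fee)
        for u in inputs:
            pool.remove(u)
        change = total_in - target - fee
        if change > DUST_LIMIT:
            pool.append({"value": change, "height": p["height"]})
        else:
            fee += change                            # tiny change goes to the miner
        total_fees += fee
    return pool, total_fees
```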

So what I'm considering is the selection policy of the wallet that I'm simulating. I am simulating fees, but it's a fixed number: 10,000 satoshi per kilobyte. I have looked at this with different transaction formats, like P2PKH and P2SH and P2WPKH from segwit. For some of the coin selection approaches, it's important how old a UTXO is at selection time, so I have also added blockheight into my simulation.

What I don’t do yet– which might be a disappointment to some– I have not yet implemented addresses into the simulation, so I can’t talk so much about the privacy impact of coin selection algorithms.

The most interesting part of the simulator is this box where coin selection works. I have looked at some of the prevalent policies in the space. Breadwallet has a policy of just spending the oldest UTXOs first, first-in first-out (FIFO). They also have some change outputs that get added to the miner fee. That’s one of the most commonly used coin selection algorithms in bitcoin because a lot of people use Electrum and Breadwallet.

As a second one, I looked at Mycelium, which uses a similar approach. They select oldest first. After they have selected, they minimize the input set by removing the smallest inputs. They also add change, and they have a higher limit below which change is added to the fee. I think a lot of people are also using Mycelium.

The bitcoin wallet for Android is also popular. I implemented their approach partly, but the core idea is that it selects by priority, which was the prevalent paradigm before the fee market. Priority is the age in blocks times the value in satoshi. Once UTXOs have aged for a while, the value of the UTXO is the more important part of the priority calculation. The implications of this will show up in the simulation results.
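
A toy version of that priority metric and the resulting selection might look like the following; the field names are assumptions and the real wallet logic differs in its details.

```python
# Toy illustration of priority-based selection: age in blocks times value in satoshi.
def priority(utxo, current_height):
    age = current_height - utxo["height"]
    return age * utxo["value"]

def highest_priority_select(utxo_pool, target, current_height):
    selected, total = [], 0
    ordered = sorted(utxo_pool, key=lambda u: priority(u, current_height), reverse=True)
    for utxo in ordered:
        selected.append(utxo)
        total += utxo["value"]
        if total >= target:
            return selected, total
    raise ValueError("insufficient funds")
```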

I also looked at Electrum "private mode", which has a different idea about matching the target amount. They select random buckets and pick the bucket that has the least distance between the change output and the target amount. I have not simulated this.

In Bitcoin Core, their coin selection uses… I put it over here. It is not the easiest way to go about it. Bitcoin Core's main idea could be described as trying to create a direct match for the amount to be paid. It tries to hit this with 0 satoshi difference. First it will look for a single UTXO that matches directly. Second it will check whether all the UTXOs smaller than what they want to spend add up exactly, which is useful. And third it tries a knapsack solver to add up some UTXOs to create another exact match, and if that fails, it uses the knapsack solver to select the smallest possible combination that funds the transaction and creates a minimum change of 10 millicoins. It took me quite a while to figure out what Bitcoin Core does. One funny thing is that it first estimates the fee, then tries to find a selection, then finds out that zero fee can't work, and then starts again with a higher fee estimated on top of the previous solution. I think we might be able to improve that later sometime.
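
To make the three stages easier to follow, here is a hedged sketch of a Core-like selection; Bitcoin Core's actual wallet code is considerably more involved, so treat this purely as an illustration of the description above.

```python
# Sketch of the described stages: direct match, sum of all smaller UTXOs,
# then a randomized knapsack-style search. Not the real Bitcoin Core code.
import random

MIN_CHANGE = 1_000_000   # 10 mBTC in satoshi, the minimum change mentioned above

def core_like_select(utxo_pool, target, tries=1000):
    # 1. a single UTXO that matches the target exactly
    for u in utxo_pool:
        if u["value"] == target:
            return [u]
    # 2. if all UTXOs smaller than the target add up exactly, use them
    smaller = [u for u in utxo_pool if u["value"] < target]
    if sum(u["value"] for u in smaller) == target:
        return smaller
    # 3. randomized search: hope for an exact sum, otherwise take the smallest
    #    combination that reaches target + MIN_CHANGE
    best, best_total = None, None
    for _ in range(tries):
        order = random.sample(utxo_pool, len(utxo_pool))
        picked, total = [], 0
        for u in order:
            picked.append(u)
            total += u["value"]
            if total == target:
                return picked                        # exact match found
            if total >= target + MIN_CHANGE:
                if best_total is None or total < best_total:
                    best, best_total = list(picked), total
                break
    if best is None:
        raise ValueError("insufficient funds (in this simplified sketch)")
    return best
```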

Data

I’ve been using the only available real-life data set. This was provided by Mr…. is he here? I haven’t met him yet. He offered his data set on github. It has 24,388 incoming payments and 11,860 outgoing payments. It’s basically the only big set of real-life data that I know about. Running on this data set, here’s a histogram of all the payments on the set. It kind of looks roughly gaussian, but it has spikes at round numbers. I think the most common payment is about 10 millibitcoin or 1 bitcent.

Simulation results

Policy           | num UTXO | change (mBTC) | total cost (mBTC) | inputs
FIFO             |   182.87 |        399.62 |            629.07 |   3.03
pruned FIFO      |   763.73 |        169.93 |            623.39 |   2.91
highest priority |  2551.52 |        789.52 |            629.05 |   2.50
Core             |   180.30 |         31.75 |            819.03 |   3.05
  • FIFO maintains almost as few UTXO as Core.

  • Pruned FIFO and highest priority accumulate small UTXO

  • Bitcoin Core: overpays fees, computationally expensive, only 0.5% Direct Matches (63 of 11860)

It has this queue of all transactions and it basically selects from it until it has enough money to pay for a transaction, and then it minimizes the set. That always leaves the smallest UTXOs over, and it leaves the smallest UTXOs at the front of the queue. Eventually it will create a huge transaction that spends a bunch of them; otherwise it keeps them in the UTXO set forever.

With bitcoinj, the highest priority approach basically means always use the biggest UTXO you have in your wallet. It grinds down the largest UTXOs into smaller UTXOs until they are no longer of high enough priority to be spent. And lastly, because of the way that Bitcoin Core estimates the fee, by first selecting a bunch of inputs and then estimating how much that would cost to spend, if it can't fund the transaction at that point, it will start the process over again with the previously estimated fee. If it picked 10 inputs, realized the estimate didn't work out, and then reselected only 2 inputs, it will still pay the fee estimated for 10. This makes Bitcoin Core coin selection a little bit more expensive than the other approaches.

Also, for all the effort that we put into finding direct matches in Bitcoin Core, we could only do 63 of 11860 payments as direct matches. It seems much more sensible to just make a change output, think about what size we want it to be, and reduce the complexity of coin selection.

Here's one more figure to look at. This is the surviving UTXOs in each of the wallets after the simulation has finished. What you can see here is that the FIFO approach has all kinds of UTXO sizes because it just spends them as it goes. Mycelium, which prunes the smallest outputs, has almost 400 of these 1 satoshi outputs even after the end of 36,000 payments. The most prevalent UTXO size is lower than 10,000 satoshi, so it has a bunch of really small UTXOs left over. The third one here is the bitcoinj highest priority approach, and as you can see it has ground down all the big UTXOs to small UTXOs. And Bitcoin Core has only … UTXOs left, and they are all pretty big.

Simulation results with other strategies

Policy                        | num UTXO | change (mBTC) | total cost (mBTC) | inputs
Average target                |   137.89 |        207.39 |            767.08 |   3.04
Wider match donation          |   165.24 |         32.95 |            829.38 |   3.02
Double target                 |   225.01 |         98.39 |            832.41 |   3.03
Single random draw (no MC)    |   185.16 |        384.43 |            629.13 |   3.03
Single random draw (0.01 BTC) |   173.27 |        424.15 |            628.98 |   3.04
Core                          |   180.30 |         31.75 |            819.03 |   3.05

Conclusion

I have hopefully shown that we can make a few improvements to coin selection in the future. You will be able to find the coin selection simulation framework on github later this month when I am finished writing my thesis, which is due in 3 weeks. I will probably notify you on the bitcoin-dev mailing list. I am hoping in the future, and I don't think it's a hard change, to add addresses into my coin selection simulation framework and think more about what privacy implications coin selection has. We can also use it for multisig addresses and all kinds of stuff later, because the sizes of the transactions that I'm simulating are just a few hard-coded variables in the framework. That's it.

Q&A

Q: What about grouping by correlation? Trying to spend UTXOs together such that, if they are already correlated, you don't leak anything new?

A: Linked to the same address? That has been a frequent request and I have not looked at it yet, because I am not modeling addresses yet. But that’s definitely something to look at.

Q: Why not pull data from the blockchain?

A: It’s easy to crawl blocks and add up transactions. It’s hard to model why that would be representative of any use case on the network because you don’t know if the payments are from the same person. Here the use case is a gambling website, so lots of small payments and if someone wins it’s a larger payout. If I could have more data, I would like to try with different data sets.

Q: The average user is not the same as the distribution you see on the chain itself; some users have many transactions. You need something that works for every use case rather than the average use case. How does this scale: what happens if you double the amounts, or halve the amounts? How does it impact the results?

A: I haven't done that yet. I was thinking of scaling incoming and outgoing payment amounts together so that they keep the same relationship to each other. I just wanted to present the one use case that I had real-life data from.

Q: Do you have timing data?

A: Unfortunately, no. Just the values.

Q: In your summaries, it seemed there was a bad tradeoff between being efficient with the UTXO set and minimizing the fee, in the sense that the fee-efficient strategies tend to be worse on the UTXO set, and compact-UTXO strategies tend to do the reverse. Have you had the chance to model what would happen if the data set was re-done with segwit? Would there be a strategy for minimal fees that also minimized UTXO use?

A: Pieter suggested that a few weeks ago. I ran the numbers with P2WPKH for witness input and output scripts. It did not change much. It's still slightly in favor of outputs over inputs. So for these strategies the cost is roughly half of what was shown, but the UTXO footprint and all that stays mostly the same.

References

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-September/013131.html

paper http://murch.one/wp-content/uploads/2016/09/CoinSelection.pdf

https://github.com/Xekyo/CoinSelectionSimulator/blob/master/scala/ops9.txt

\ No newline at end of file diff --git a/scalingbitcoin/milan-2016/mimblewimble/index.html b/scalingbitcoin/milan-2016/mimblewimble/index.html index a72391efd3..dee703999e 100644 --- a/scalingbitcoin/milan-2016/mimblewimble/index.html +++ b/scalingbitcoin/milan-2016/mimblewimble/index.html @@ -11,4 +11,4 @@ Andrew Poelstra

Date: October 8, 2016

Transcript By: Bryan Bishop

Tags: Adaptor signatures, Sidechains

Category: Conference

Media: https://www.youtube.com/watch?v=8BLWUUPfh2Q&t=5368s


https://twitter.com/kanzure/status/784696648597405696

slides: http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble-2016-scaling-bitcoin-slides.pdf

Hi, I am from Blockstream. I am here to talk about Mimblewimble. Mimblewimble is not mine. It’s from the dark lord. I am going to start with the history of the paper. It has been in the news a little bit. I will talk about the transaction structure, the block structure, try to explain the trust model and argue that it’s almost as strong as bitcoin. There’s less data to verify than in bitcoin. If I have time, then I might talk about some extensions I have done to the protocol to make it extend the scaling beyond what was possible. And then next steps and open problems.

History

This started over 2 months ago, on August 2nd, when someone logged on to IRC with the nickname “majorplayer”, posted a link to this paper, said here's some paper, and then signed off. Myself and Bryan Bishop, who hosts a lot of files related to bitcoin, got this file from the onion link. It was a text file, called mimblewimble.txt, signed “Tom Elvis Jedusor”, which is the name of Lord Voldemort in the French Harry Potter books. It's unlikely that this is his real name. But that's the name we have.

I should mention that I was working on something similar, well, not quite similar, something related but much smaller. When I encountered this, I was surprised; it was what I was working on but much more powerful. It was exciting to me. Over the next week or so, I tried to understand how what I was doing compared to mimblewimble. I thought it was the same as what I was doing in Elements. I had some discussions with Greg Sanders, who broke a lot of my assumptions… I think that we have got it now.

Over the next couple of months, I spent a little while thinking about the open problems posed at the end of the paper about getting further scalability. I think I have managed to get some of it, although some of the stuff I have done would be another talk.

I am not Lord Voldemort. I don’t know who did it. A number of people have asked me recently who it was. I don’t know.

What is mimblewimble?

It was a paper with a neat name; in the Harry Potter books it's a tongue-tying curse that prevents someone from revealing secrets. So this is a fungibility spell. I don't think it was described this way in canon. It's designed for a blockchain. A separate blockchain could be made to be compatible; it could be soft-forked in as an extension block or something. This could be applied to bitcoin. For any currency to which you can add a mimblewimble sidechain, you get the scalability and fungibility improvements on top of that original chain.

In bitcoin, the way that bitcoin transactions work, you have a pile of inputs and a pile of outputs. Every input is an output from a previous transaction. Every input has to sign the entire transaction. This proves that the person who owns those inputs intends to send the transaction and authorizes this. In mimblewimble, rather than having a complex script system and allowing arbitrary scripts, mimblewimble just has EC keys. You can add and subtract any two EC keys and get another elliptic curve key. If I have some coins attached to some key, and someone wants to move the coins from my key to their key, we can take the difference between the two keys and create a multisig of that difference, and that's almost as good as anything. So we sum all the input keys and all the output keys, take the difference, get a single key, and sign with that. So it's really just operations on keys. It's bitcoin with script removed... well no, it's designed from the ground up to be different. I'll mention later that we can get locktime, and we can definitely get multisignatures and locktime. You can use this to get unidirectional payment channels. So in principle you can have lightning on top of mimblewimble, although not in an elegant way. I think this is very cool.

So mimblewimble transactions have inputs just like bitcoin transactions. The inputs are confidential transaction outputs rather than plain bitcoin inputs. They are homomorphically-encrypted values. What you can do with confidential transactions is add up all the output values and subtract the input values as elliptic curve points, and check whether they add up to zero. You can't check anything else. There's a privileged zero point in elliptic curve groups; everything else looks like a uniformly random elliptic curve point. There are no distinguishing features. As far as fungibility is concerned, this is pretty cool: every output looks like a random number. There's an annoying feature of confidential transactions where it's in principle possible to encrypt negative values, so I could create an output with a positive million bitcoins and a negative million bitcoins, throw the negative one away and have a million bitcoin. The way that confidential transactions solves this is with a range proof. In Elements Alpha, we use a range proof developed by Greg Maxwell, which proves the value is in a range; you can make it whatever range you want, and as long as it is much less than 2^256, you can have any value you want, and it proves it's in the range. It's also a proof of knowledge of the blinding key of the output.
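
To make the balance check concrete, here is a toy sketch that uses scalar arithmetic modulo a prime in place of real elliptic curve points; the generators, values and blinding factors are made up, and range proofs are omitted entirely.

```python
# Toy Pedersen-style commitments: commit(v, r) = r*G + v*H (modular arithmetic
# stands in for elliptic curve point addition; do not use this for anything real).
P = 2**127 - 1          # toy group order
G, H = 7, 11            # toy "generators" for the blinding factor and the value

def commit(value, blinding):
    return (blinding * G + value * H) % P

# A transaction spending a 5-coin input into a 3-coin and a 2-coin output.
in_c  = commit(5, blinding=123456)
out_1 = commit(3, blinding=222)
out_2 = commit(2, blinding=333)

# Outputs minus inputs leaves the "excess": a commitment to zero value whose
# blinding factor (222 + 333 - 123456) only the transaction author knows,
# so only the author can produce a signature with it.
excess = (out_1 + out_2 - in_c) % P
assert excess == commit(0, blinding=(222 + 333 - 123456) % P)
print("values balance; the excess commits to zero")
```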

Confidential transactions use homomorphic encryption. Mimblewimble uses this key both as a signing key and a blinding key. It uses range proofs to show that the key is known. In confidential transactions, you add up the points and show they all add up to zero, which shows that all the input values equal the output values. Here you instead show that they add up to some other curve point. That curve point has a key, and it proves that it's an encryption of the value zero. We can exploit this in mimblewimble to allow handing over the key without the recipient learning the sender's keys. We can hand off keys by signing with their difference, and mimblewimble solves this with an ….

I am going to explain everything else from here on out with pictures. Here are two mimblewimble transactions. The way that we verify that the two transactions are legitimate is that the inputs reference old curve points. We subtract them, see if we get an excess value, we check the signature and the range proofs, and then we have a valid transaction. The validity claim is not a bunch of different signatures over signed data; it's rather that a bunch of things add up to another thing. A miner watching the network can combine transactions and get another valid transaction. There's a bit of an oddity here with excess values: you end up with another signature on the excess values. You can no longer tell which inputs and which outputs were involved. This is non-interactive coinjoin, or one-way aggregatable signatures (OWAS).

So what does this give us? There's fungibility, there's non-interactive coinjoin. Well, observe this: one of these transactions spent an output of another. Since I'm adding the inputs and subtracting the outputs, or vice versa, if the same thing occurs on both sides, I can just drop it. These transactions: one is valid if and only if the other is valid. This is called transaction cut-through, which can also be done non-interactively. So I immediately get a scaling and privacy and fungibility benefit. These excess values are unrelated to…. there's no trace that this output ever existed.

Blocks are just one transaction, because of this. A block is a header and a single transaction: a list of inputs, a list of outputs, a list of excess values.

Here's a picture of a bunch of mimblewimble blocks in a row. The white boxes are coinbase inputs. Assuming we don't have sidechain stuff, every block has 50 mimblewimble coins or whatever. They don't have to be written down anywhere; you just add a commitment to 50 coins. Suppose you have this mimblewimble chain and some user wants to show up and learn the status of the chain. In bitcoin, you have to download the entire blockchain, you have to validate everything, you have to replay everything to see that the current UTXO set is valid and nothing was removed and nothing invalid was added. What we can do here is send this person all the blocks, but all the transactions I can just merge together. The blocks commit to the individual commitments; each input is committed to by a different block maybe. You can see that a lot of these inputs are actually outputs of other transactions. Every input except for coinbase inputs is an output of some transaction. So I just have the coinbase inputs, which don't need to be encoded explicitly; what I give the person is a list of block headers, a list of currently unspent outputs and their range proofs, and the excess values, because if we could aggregate them in any other way then … but for each transaction, that's a pubkey and a signature, which is like 100 bytes. So with the stuff in the Voldemort paper with no extensions, everything is compressed to 100 bytes per transaction. Compare bitcoin, where we have 150 million transactions and historic blockchain data and the UTXO set and the range proofs. If we had confidential transactions in bitcoin, the CT size would exceed the entire size of the current chain, but not by much. With CT in bitcoin, we would have 1 terabyte of blockchain data by now, if bitcoin had CT from day one. It's a lot of data for a new verifier to download. In mimblewimble you wouldn't have to do that; you only need range proofs on the unspent outputs. It means that the data still grows linearly with the number of blocks and transactions, but it's a very small amount of data. The bulk of the data would be UTXOs, which can shrink, because you can create a transaction that shrinks the total UTXO set. So in the mimblewimble chain, in addition to whatever anti-DoS resource limits, you might want to limit the UTXO set size, and then incentivize transactions that shrink the UTXO set size with a fee market. There's unbounded growth in the linear part, but it's very, very slow unbounded growth. So even as the chain goes on for years and decades, it's very slow growth. It's very exciting.
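
As a rough back-of-envelope based on the figures just mentioned: assuming an 80-byte header and an illustrative block count (both assumptions, not numbers from the talk), the per-transaction data a new verifier keeps is small, and the UTXO set with its range proofs comes on top of this.

```python
# Back-of-envelope sketch using the rough figures from the talk:
# ~150 million historical transactions, ~100 bytes of excess pubkey + signature each.
n_blocks       = 430_000        # assumed chain height, for illustration only
n_transactions = 150_000_000    # figure quoted in the talk

headers = n_blocks * 80         # bytes of block headers (80 bytes assumed)
kernels = n_transactions * 100  # bytes of excess values plus signatures

print(f"headers: {headers / 1e6:.0f} MB")
print(f"excess values + signatures: {kernels / 1e9:.1f} GB")
```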

Okay, let's try to explain the trust model in mimblewimble. A transaction is considered valid if it's non-inflationary: you can't create or destroy coins, and the outputs minus inputs equal your excess values. Also the owner of the transaction has to sign off on it, by the excess value having a signature on it. A multisig still requires that every party signs off on this, so this would not be weakened by any extensions to the excess values. So this is almost the same as in bitcoin. In bitcoin, the owners of the inputs sign off on this specific transaction and this specific data. You can see from this picture that this is what we get here.

In the blockchain, let’s consider this new verifier that downloaded the chain data. What can the verifier tell? It can tell that a transaction, once it appears in a block… it can check the excess values committed in every block. It just knows the shadow, and the shadow never left. So the transaction was not computed after the fact. No net inflation: if you look at the… the unspent outputs minus however many coins should exist, it should equal zero. However, you cannot tell the exact sequence of transactions. I could be giving this verifier a false blockchain where there was a lot of inflation along the way, or maybe I stole a bunch of coins or something, but I wouldn’t gain anything from such a forgery, other than “haha, I did all this proof-of-work and I lied and you can’t tell”. It’s not an attack, but it is a different trust model.

As far as being a fungible, usable currency goes, this is just as strong and as usable as bitcoin’s trust model, and it gets you a massive scalability improvement.

Sinking signatures

Can we aggregate the excess values in a block? This is in general hard to do. You could do some trickery with a variant of BLS signatures. If you use those, then you get aggregation within a block, but it becomes possible for someone who has seen a transaction to reverse it by swapping the inputs and outputs and putting a minus sign in front of …. and if the excess value doesn’t sign itself, meaning you can negate it when you… that’s very bad. Historic transactions shouldn’t be reversible without rewriting blocks, but now this is suddenly not the case. So instead you can do this scheme where every transaction signs the current blockheight, and now the reversal can only appear in the block in which the original transaction appears. Reversing it within the same block is trivial and not important. Just by itself, this means 20 gigabytes of historic data can become 20 megabytes. So this is a 1,000x savings. But now there are a lot of pairings to verify, which is a bit slow.
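To see the reversal problem concretely, here is a toy sketch (integer pairs stand in for Pedersen commitments; all names and numbers are made up): swapping inputs and outputs and negating the excess produces another perfectly balancing "transaction", which is exactly what signing the blockheight is meant to pin down.

```python
# Toy: commitments modelled as (value, blinding) pairs. Illustrative only.
def excess_of(tx):
    total = [0, 0]
    for v, r in tx["outputs"]:
        total[0] += v; total[1] += r
    for v, r in tx["inputs"]:
        total[0] -= v; total[1] -= r
    return tuple(total)          # should be (0, k): zero value, blinding key k

tx  = {"inputs": [(10, 7)], "outputs": [(4, 3), (6, 9)]}
rev = {"inputs": tx["outputs"], "outputs": tx["inputs"]}
print(excess_of(tx))             # (0, 5)  -- the key the owner signs with
print(excess_of(rev))            # (0, -5) -- the negated excess

# With plain aggregation nothing stops `rev` from showing up in a later block
# and undoing `tx`. A sinking signature makes the excess also sign the height
# it appears at, so the negation is only usable in that same block, where
# cancelling out is harmless.
```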

You can do better than this. You can use proof-of-proof-of-work, or compact SPV proofs from the sidechains whitepaper, to basically skip a whole lot of headers. You can give the verifier about 300 blocks that have enough PoW on them to have the same total difficulty as the original chain, where the difficulty… in the original chain, a block counts for some amount of work, and in the compact chain, if a block exceeded its target, you give it credit for all the excess work it did. So using sinking signatures, allowing these excess values to sign blockheights as well as previous blockheights, you can aggregate all the excess values and their signatures, and now those 22 megs become 1 megabyte, and you can verify that with unoptimized code in 20 seconds on my laptop. I think that’s a pretty reasonable resource load for validation.
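A rough sketch of the "credit for excess work" idea (an illustrative simplification, not the exact construction in the sidechains paper):

```python
def work_credit(block_hash: int, target: int) -> int:
    """Credit a block for roughly how far it beat its target: a hash that is
    2^k times smaller than the target counts for about 2^k blocks of work."""
    if block_hash > target:
        return 0                           # not even a valid block
    return target // max(block_hash, 1)

def compact_chain_work(header_hashes, target):
    # The compact chain is convincing if its credited work matches the full chain.
    return sum(work_credit(h, target) for h in header_hashes)
```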

This does change the trust model a little bit. The way it changes is that when you do the compression, the compressed chain has the same expected amount of work as the original chain, but the variance is much higher. We can formalise this. If I give you 400k blocks of bitcoin, this is a proof of roughly 395,000 blocks worth of work: an attacker who did substantially less than the full 400k blocks of work has almost zero chance of producing it. In compact chains this is not the case. So you need to give the verifier about 5000 blocks with the full PoW, which proves that what I was giving them was no accident, it wasn’t a fluke, someone did a lot of work to produce it. And I also give them a compact chain from genesis to that 5k-block suffix. So anyone who wants to produce a forgery should expect to do as much work as the original chain, and then the Satoshi argument applies: they would rather extend the main chain than generate forgeries. One thing proposed every so often is to drop old blocks and just verify the tip… the problem is that dropping the historic data creates a cliff, where someone can forge because they only need to redo part of the chain. We no longer get that cliff here.

Where are we?

I have been working on a consensus library for mimblewimble. I have gotten hung up on pairing curve selection, which I’m working on now. I am going to publish the source code I’ve got and encourage people to contribute and do peer review. It’s an exciting project. Once that is done and there’s a consensus library written, then we can start doing cool stuff: we can write a p2p layer and write wallets and so on, and those will be much easier to write, in friendlier languages, and you will have a lot more freedom because there doesn’t need to be consensus on that kind of source code. Right now we’re still in the boring consensus part of development. I hope to get past this soon. As far as sidechains, as I’ve been saying, … we can actually make mimblewimble a sidechain. If, in addition to all the unspent outputs, you commit to pegs-in and pegs-out, then you can apply an excess signature on every peg-in and peg-out. And that’s it. I’m over time. So I’m going to stop there. Thank you all.

Q&A

Q: What curve are you using? Could you use ed25519?

A: Good questions. When you create a transaction, the total input value has to equal the total output value, so any fee would be released as an explicit unblinded output value. As it’s floating around the network, the miner would commit to that amount of fee value, so the miner adds its own output to the aggregated transaction. On the blockchain there’s an explicit fee. For curve selection, I can’t use secp256k1 and I can’t use Ed25519: for sinking signatures I need a pairing operation. I am using a 341-bit curve that I came up with, but I’ll probably replace it as I learn more about what I should be doing.
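Sketching how an explicit fee fits the balance equation from earlier (my notation, hedged the same way):

$$\sum_j C^{\mathrm{out}}_j + fH - \sum_i C^{\mathrm{in}}_i = \sum_k E_k$$

The fee \(f\) is an unblinded amount, so everyone can see it, and the miner who includes the transaction adds an output of its own that claims that \(fH\).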

andytoshi’s mimblewimble paper http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble-andytoshi-INCOMPLETE-DRAFT-2016-10-06-001.pdf

original mimblewimble paper http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble.txt

mimblewimble podcast http://diyhpl.us/wiki/transcripts/mimblewimble-podcast/

other mimblewimble follow-up https://www.reddit.com/r/Bitcoin/comments/4vub3y/mimblewimble_noninteractive_coinjoin_and_better/ and https://www.reddit.com/r/Bitcoin/comments/4woyc0/mimblewimble_interview_with_andrew_poelstra_and/ and https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/012927.html

and https://www.reddit.com/r/Bitcoin/comments/4xge51/mimblewimble_how_a_strippeddown_version_of/

and https://www.reddit.com/r/Bitcoin/comments/4y4ezm/mimblewimble_how_a_strippeddown_version_of/

http://gnusha.org/bitcoin-wizards/2016-08-02.log

http://gnusha.org/bitcoin-wizards/2016-08-03.log

https://www.reddit.com/r/Mimblewimble/comments/58knkc/minimal_implementation_of_the_mimblewimble/

\ No newline at end of file diff --git a/scalingbitcoin/milan-2016/onion-routing-in-lightning/index.html b/scalingbitcoin/milan-2016/onion-routing-in-lightning/index.html index 43816959e0..3e21d93edf 100644 --- a/scalingbitcoin/milan-2016/onion-routing-in-lightning/index.html +++ b/scalingbitcoin/milan-2016/onion-routing-in-lightning/index.html @@ -13,4 +13,4 @@ Olaoluwa Osuntokun

Date: October 8, 2016

Transcript By: Bryan Bishop

Tags: Lightning, Routing

Category: Conference

Media: -https://www.youtube.com/watch?v=Gzg_u9gHc5Q&t=164s

http://lightning.network/

https://twitter.com/kanzure/status/784742298089299969

Privacy-preserving decentralized micropayments

We’re excited about lightning because the second layer could be an important opportunity to improve privacy and fungibility. Also, there might be timing information in the payments themselves.

Distributed set of onion routers (OR; cite the original onion routing paper). Users create circuits with a sub-set of nodes. Difficult for onion routers to gain more info than the predecessor and successor in the path. Low latency - usable within the greater internet. The notable success is tor and its network of onion routing nodes.

In lightning, we have some goals to make it private and censorship resistant. We use source-routing which means the source fully specifies the entire route. In lightning we want to know what the fees are. We want to know the entire route, the timelocks and the path itself.

We use sphinx to get a provably secure mix format. It has various desirable capabilities. With the encryption, the ciphertext is unlinkable to the plaintext due to randomization. There’s non-trivial padding and shifting to make sure everything stays a fixed size throughout. With a fixed size, the packet gives away no positional information: if you are the fifth person in the path, you don’t want the packet to give away information about the graph or your position in it.

In sphinx you can derive a shared secret. To achieve unlinkability, we need to randomize the base key of the diffie-hellman itself. You could carry n keys in the packet itself, with something like RSA, but then you have kilobytes of packet because you need material for all the shared secrets. Instead it’s a single group element at each hop. It’s a cool trick. This can be generalized to other cryptosystems: elliptic curves, RSA, we could do LWE and other things like that.

So you get a session key for the session and then a list of the public keys of all the nodes in the route. a0 is the initial public key for the first hop; the hop derives a shared secret s0 by raising it to the power of its secret exponent, and then there’s a blinding factor. Each intermediate node uses the blinding factor to randomize the public key for the next hop: a1 is g^(x*b0). So each node derives a blinding factor from the shared secret and uses this to randomize the key element for the next hop. We have a constant-size key element at each hop and we achieve unlinkability.
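Here is a toy of that blinding chain, using exponentiation modulo a prime as a stand-in for the elliptic curve group (illustrative parameters, not the real lightning-onion code); the point is that the sender and each hop derive the same per-hop secret while the carried group element keeps changing:

```python
import hashlib

p, g = 2**61 - 1, 5                      # toy group parameters

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1) or 1

def hop(alpha, node_priv):
    """What hop i does: derive the shared secret, blind alpha for the next hop."""
    s = pow(alpha, node_priv, p)         # s_i = alpha_i ^ x_i
    b = H(alpha, s)                      # blinding factor b_i
    return s, pow(alpha, b, p)           # alpha_{i+1} = alpha_i ^ b_i

def sender(session_key, node_pubs):
    """The sender derives the same secrets using only the nodes' public keys."""
    secrets, exp = [], session_key
    for pub in node_pubs:
        alpha = pow(g, exp, p)
        s = pow(pub, exp, p)             # same s_i, computed from the other side
        secrets.append(s)
        exp = (exp * H(alpha, s)) % (p - 1)
    return secrets

privs = [11, 23, 37]
pubs = [pow(g, x, p) for x in privs]
alpha, got = pow(g, 9, p), []
for x in privs:
    s, alpha = hop(alpha, x)
    got.append(s)
assert got == sender(9, pubs)            # both sides agree on every hop's secret
```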

In sphinx, packet processing is simple. ProcessSphinxPacket…. if the MAC doesn’t check out, then someone tampered with the packet and it’s rejected immediately. Also, you have to protect against replay attacks as well. And you re-randomize the group element with a new blinding factor for the next hop.
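A pseudocode-level sketch of that per-hop processing (the helper callables and field names are placeholders I made up, not the actual lightning-onion API):

```python
def process_sphinx_packet(packet, node_priv, replay_log, *, dh, mac, peel, blind):
    secret = dh(packet.ephemeral_key, node_priv)      # DH against the carried group element
    if mac(secret, packet.routing_info) != packet.hmac:
        raise ValueError("invalid MAC: packet was tampered with, reject")
    if secret in replay_log:
        raise ValueError("replayed packet, reject")
    replay_log.add(secret)
    hop_payload, next_packet = peel(secret, packet)   # decrypt one onion layer
    next_packet.ephemeral_key = blind(packet.ephemeral_key, secret)  # re-randomize
    return hop_payload, next_packet
```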

One thing we’ve done is we’ve started to make some modifications to sphinx itself. We realized it’s useful as-is, but maybe we could add some features for lightning. cdecker from Blockstream has done some work with this to make it more lightning-like. We added a version byte to the header itself so we can have new versions later on if we use different cryptosystems. We also added the public key, and the MAC is now over the entire packet. Originally in sphinx it’s not over the entire packet, because sphinx was designed for a mixnet with indistinguishable replies; you can’t have a MAC over everything because then the replies wouldn’t be indistinguishable. We don’t have that reply use-case in lightning itself. We switched from AES to chacha20 for speed. We also have a payload that attaches instructions for the hop itself: if one hop has several links, which one should be used? So there’s some information about which link to forward on, and some other information like timing information.
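Roughly, the modified per-hop packet described here looks something like this (field names and sizes are illustrative, not the eventual BOLT onion format):

```python
from dataclasses import dataclass

@dataclass
class OnionPacket:
    version: int          # 1 byte, lets us swap cryptosystems later
    ephemeral_key: bytes  # the blinded group element carried hop to hop
    routing_info: bytes   # fixed-size encrypted per-hop payload (which link, timing)
    hmac: bytes           # MAC computed over the whole packet
```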

We have this code and it’s done. We could start a mixnet to have an alternative transaction propagation mechanism.

Performance considerations

There were two asterisks on two lines of code; those were the asymmetric crypto operations for forwarding the HTLC, where you derive a shared secret and randomize a …. so it might be nice to eliminate that, or amortize it so we only do it one time and avoid it otherwise. The onion routers themselves need to maintain per-session state: the hash, the incoming link and the outgoing link. They need to maintain this state to forward down the path. If you forget the state and you get the settle, you don’t know who to send it to; if you forget the state you can’t remember the HTLC. So this would be a DoS vector. It needs to be persisted to disk, too. If we could overcome those limitations it could be faster and the routers could be stateless.

Hornet

It’s an extension of Sphinx and overcomes some of the detrimental problems there. Hornet is a progression of sphinx and it is targeting internet scale. They want to eliminate the asymmetric crypto operations. It does two important things. It gets the asymmetric operations out of the critical path of data forwarding: during initial setup we derive our symmetric keys, and after that we only do fast symmetric crypto operations. Another nice part of hornet is that it creates a bidirectional circuit. Sphinx is set-up-and-forget, but hornet gives you a real-time, bidirectional channel.

Hornet uses sphinx. Hornet uses sphinx initially to derive the shared secrets, which then allows intermediate nodes to put a special key into the packet which they can use during data forwarding. The nice thing about hornet is that the nodes only need to maintain constant state: the packets carry all the state, which is pushed to the endpoints rather than the intermediate nodes. The major part is the anonymous header. A forwarding segment is encrypted by a node and can only be decrypted by that node, and it contains routing information like the next hop to go to and some session information to avoid replay attacks.
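A toy sketch of the forwarding-segment idea (purely illustrative: a hash-pad "seal" stands in for real authenticated encryption, and none of this is the HORNET wire format): each hop's routing info is sealed under that hop's own long-lived key and carried in the packet, so the hop stores nothing per session:

```python
import json, hashlib

def seal(key: bytes, obj) -> bytes:
    pad = hashlib.sha256(key).digest()
    raw = json.dumps(obj).encode().ljust(len(pad), b" ")[: len(pad)]
    return bytes(a ^ b for a, b in zip(raw, pad))

def unseal(key: bytes, blob: bytes):
    pad = hashlib.sha256(key).digest()
    return json.loads(bytes(a ^ b for a, b in zip(blob, pad)).decode())

# The source builds the anonymous header once, at setup time, from each hop's
# forwarding segment (FS).
hops = {b"key-A": {"next": "B"}, b"key-B": {"next": "C"}, b"key-C": {"next": "dst"}}
header = [seal(k, fs) for k, fs in hops.items()]

# A hop opens only its own segment and forwards; no per-session state is kept.
fs = unseal(b"key-A", header[0])
print(fs["next"])   # "B"
```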

Nodes only need to maintain their own symmetric keys, they can decrypt the packets and then continue forwarding.

Hornet can help with the payment flow of lightning itself. The payment flow is hypothesized to be out-of-band: Alice and Bob want to send payments through lightning, so maybe there’s tor, maybe not; maybe they exchange tor information so they can send money over lightning. Maybe they are doing several payments and need to talk back and forth. We can eliminate that out-of-band communication and move it into the network. This assumes Alice and Bob have a route. Alice can create a hornet session to Bob and then they have a bi-directional link where they can exchange details. I can get payment values and r values from Bob through the network itself.

Maybe some of the links won’t have sufficient capacity to send everything in one go, so I could fragment a payment across several routes in lightning at the same time.

Shared secret log

The important part of maintaining a shared secret is that you need to reject a packet if it’s sent again. An adversary could resend a packet, and we want to reject that. So you need a growing log of all the shared secrets: if it’s in the log, you reject it. But if I need to maintain shared secrets for my whole lifetime, I have unbounded growth, so we need to garbage collect part of the log. We can do that with key rotation. There are a few ways to do this. In tor, they have a central directory server and everyone uploads their new signed keys, but ideally we would like to avoid that because it’s a central point of authority, so there are a few ways to do ad hoc key rotation. Let’s assume that nodes have an identity key, and they authenticate their onion key with that identity key. So maybe each day they broadcast a new onion key to the network. One subtlety of this is that the rotation has to be loosely synchronized. If you rotate keys and then someone sends you a payment over the old key, then it looks like an invalid MAC and you would have to reject it. So there needs to be a grace period instead, where for a few days after key rotation you still accept old packets: you check with the old key and check with the new key and maybe one of them works.
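A small sketch of that bookkeeping (hypothetical helper names; the `try_decrypt` callable stands in for "process the packet under this onion key and return the shared secret if the MAC checks out"):

```python
class OnionKeyState:
    """Replay log per key epoch, with a grace period for the previous key."""
    def __init__(self, current_key, previous_key=None):
        self.keys = [(current_key, set()), (previous_key, set())]

    def rotate(self, new_key):
        # The previous epoch (and its replay log) is dropped: garbage collection.
        self.keys = [(new_key, set()), self.keys[0]]

    def accept(self, packet, try_decrypt):
        for key, seen in self.keys:
            if key is None:
                continue
            secret = try_decrypt(key, packet)   # None if the MAC doesn't check out
            if secret is not None:
                if secret in seen:
                    return False                # replayed packet, reject
                seen.add(secret)
                return True
        return False                            # invalid under both epochs
```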

Active key rotation requires more bandwidth, because now we need to broadcast new keys every 24 hours or something. With a million-node network, that’s a lot of data to download.

Passive key rotation

You publish a key and use bip32 public key derivation. The people at the edges have the blockhash and your master public key from bip32, so everyone can do your key rotation by themselves at the edges. There’s a gotcha, though. It’s common knowledge that with public derivation in bip32, if you have the master pubkey and you leak one of the derived private keys…. so.. with that, you lose everything; you would want forward secrecy if a private key is leaked. You can have an intermediate point, as in a step in the derivation, and then leave that point out later. If you use some crypto magic maybe you can solve that.
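A toy of the public-derivation idea, reusing the same mod-p stand-in group as before (illustrative; real bip32 uses HMAC-SHA512 over serialized curve points, which is not reproduced here):

```python
import hashlib

p, g = 2**61 - 1, 5

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def epoch_pub(master_pub, block_hash):
    t = H(master_pub, block_hash)              # public tweak anyone can compute
    return (master_pub * pow(g, t, p)) % p     # the edges derive the node's onion key

def epoch_priv(master_priv, block_hash):
    t = H(pow(g, master_priv, p), block_hash)
    return (master_priv + t) % (p - 1)         # only the node can compute this

master_priv = 123456789
master_pub = pow(g, master_priv, p)
bh = "some agreed-upon recent block hash"
assert pow(g, epoch_priv(master_priv, bh), p) == epoch_pub(master_pub, bh)

# The gotcha from the talk: the tweak t is public, so leaking one epoch_priv
# reveals master_priv (it's just epoch_priv - t). Public derivation alone gives
# no forward secrecy.
```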

So for passive key rotation, you could do pairing crypto and have passive non-interactive key rotation. Three cyclic groups, and a bilinear pairing. We can have more meaningful identifiers like roasbeef@lightning.network and use that instead. So we have three cyclic groups and a pairing operation which takes elements from two of the groups and maps them into the third one, and that group is multiplicative. If you look at the pairing operation, … that’s the magical step that allows you to do a bunch of other cool stuff.

Every node now advertises a master public key. With regular IBE, there’s a trusted agent that distributes private keys to everyone else; here we can have the nodes themselves play that role in the protocol. We need a primitive called a hash-to-curve function…. we want the hash function to map directly to a point on the curve. You could do this iteratively: you have a string, you hash it, you get an output that is a point on the curve, being careful not to map to the identity point, the element at infinity. Everyone has an ID, which is a blockhash, and the ID maps to a group element, a public key. So given a blockhash, everyone can use this abstraction to derive another key, a group element, that matches it. If you remember, in sphinx we had this g^r; we can use that to do non-interactive key rotation. Alice knows the key schedule of the other nodes. So she uses that, together with g^r, to derive a shared secret: she takes Bob’s master key and raises it to r, which is the current session key, with maybe some blinding factors. Bob takes g^r itself and does a pairing operation with his private key. So it works out that they both arrive at a shared secret, which is the pairing of g and Bob’s ID raised to the right powers, and now they have a shared secret. With this we can achieve passive key rotation from the edges, but now there’s a pairing operation cost as well. We can use this for key rotation, or just use the other scheme as a stopgap.
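One way to write down what I think is being described, in the Boneh–Franklin / Sakai–Ohgishi–Kasahara style (my notation, hedged): Bob has master secret \(n\) and advertises \(g^n\); the current epoch's identity point is \(Q = \mathrm{HashToPoint}(\text{blockhash})\). Alice picks an ephemeral \(r\), puts \(g^r\) in the packet, and both sides compute

$$k \;=\; e(Q, g^n)^r \;=\; e(Q^n, g^r) \;=\; e(Q, g)^{nr},$$

so they agree on a shared secret without any interaction, and the effective onion key rotates automatically whenever the blockhash (and hence \(Q\)) changes.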

Limitations in the scheme

Assumes a high degree of path diversity: there are many ways to get from Alice to Bob. If there’s only one path, then you know she’s sending it to Bob. You can also have some correlation with payment values, where you know there’s no other link that could support a payment of this size, so therefore they must be using this particular route.

You can do timing attacks where you can see Bob and Alice are sending payments– well you can see packet size correlation and so on.

Future directions

Maybe we can gain capabilities … and move the asymmetric operations out of the forwarding path. We could figure out the payload structure. How do I identify different chains? What is the timing? What about high-latency systems that give us more privacy guarantees? Maybe lightning is just for instantaneous payments, and maybe you could tolerate a minute and that’s still okay.

We can also look into non-source-routed privacy schemes. Since everyone has to give out information about their links, you lose some privacy. There are some ways you could do this, but they involve trusted hardware at the node, using oblivious RAM (ORAM) and doing shortest-path search over the graph, and that would require special hardware. We probably won’t do that. It’s a cool research direction to look into for the future.

Q&A

Q: Statistical correlation in onion routing?

A: Randomized delays. We could pad out the packet sizes so that every packet, no matter if you were doing an opening or a forwarding, is always the same size. All the packets would be encrypted. We could do other things like add some delay or move to higher-latency networks, which could help us hide some of this metadata.

Q: What do you do about… network layer attackers?

A: For the payment network problems, we could say all payments in lightning are a particular value, and then we fix the channels, and then everything is completely uniform. With sphinx we talked about unlinkability; the packets themselves are indistinguishable, but the r value is the same. Versioning: same path, same r value. We could use some techniques to randomize the r value just like we do for the group element in sphinx. We could do a scheme where we randomize the r values, where we add a generic point multiplication, or use fancy signatures where you do a single-show signature which forces you to use a certain value for the r value in the signature; if you sign with the key then you reveal the r value. Without that, it’s somewhat limited. Maybe we could do this at the end of v1.

onion routing specification https://lists.linuxfoundation.org/pipermail/lightning-dev/2016-July/000557.html

onion routing protocol for lightning https://github.com/cdecker/lightning-rfc/blob/master/bolts/onion-protocol.md

https://github.com/lightningnetwork/lightning-onion and https://github.com/cdecker/lightning-onion/tree/chacha20

\ No newline at end of file diff --git a/scalingbitcoin/stanford-2017/index.xml b/scalingbitcoin/stanford-2017/index.xml index e7d2187e8d..09b918e669 100644 --- a/scalingbitcoin/stanford-2017/index.xml +++ b/scalingbitcoin/stanford-2017/index.xml @@ -23,7 +23,7 @@ It&rsquo;s a system of smart contracts using bitcoin which is similar to lig This is joint work with some people I work with at UMass, including Gavin Andresen. The problem I am going to focus on in this presentation is how to relay information to a new block, to a neighbor, that doesn&rsquo;t know about next. This would be on the fast relay network or on the regular p2p network. It&rsquo;s about avoiding a situation where, the naieve situation is to send the entire block data.
Introductionhttps://btctranscripts.com/scalingbitcoin/stanford-2017/intro/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/intro/https://scalingbitcoin.org/ https://scalingbitcoin.org/event/stanford2017 -Sorry that we&rsquo;re running a little bit late. My name is Anton Yemelyanov. I am one of the organizers running this event. Welcome everybody. It&rsquo;s great to have a lot of familiar faces here. First of all, as you may know, we have ran over the last 2 days, Bitcoin Edge Dev++ where we have trained over 80 new developers that have joined our ecosystem. They have covered a number of topics including ECDSA and discreet log contracts.Optimizing Fee Estimation Via Mempool Statehttps://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/I am a Bitcoin Core developer and I work at DG Lab. Today I would like to talk about fees. There&rsquo;s this weird gap between&ndash; there are two things going on. People complain about high fees. But people are confused about why Bitcoin Core is giving high fees but if you set fees manually you can get a much lower fee and get a transaction mined pretty fast. I started to look into this in detail and did simulations and a bunch of stuff.Redesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html +Sorry that we&rsquo;re running a little bit late. My name is Anton Yemelyanov. I am one of the organizers running this event. Welcome everybody. It&rsquo;s great to have a lot of familiar faces here. First of all, as you may know, we have ran over the last 2 days, Bitcoin Edge Dev++ where we have trained over 80 new developers that have joined our ecosystem. They have covered a number of topics including ECDSA and discreet log contracts.Optimizing Fee Estimation Via Mempool Statehttps://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/I am a Bitcoin Core developer and I work at DG Lab. Today I would like to talk about fees. There&rsquo;s this weird gap between&ndash; there are two things going on. People complain about high fees. But people are confused about why Bitcoin Core is giving high fees but if you set fees manually you can get a much lower fee and get a transaction mined pretty fast. I started to look into this in detail and did simulations and a bunch of stuff.Redesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html https://www.reddit.com/r/Bitcoin/comments/72qi2r/redesigning_bitcoins_fee_market_a_new_paper_by/ paper: https://arxiv.org/abs/1709.08881 He will be exploring alternative auction markets. 
diff --git a/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/index.html b/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/index.html index e482b72b69..79120229d9 100644 --- a/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/index.html +++ b/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/index.html @@ -1,5 +1,5 @@ Redesigning Bitcoin Fee Market | ₿itcoin Transcripts - \ No newline at end of file diff --git a/scalingbitcoin/tel-aviv-2019/work-in-progress/index.html b/scalingbitcoin/tel-aviv-2019/work-in-progress/index.html index 2ddc57b3a5..5fd0cbd04f 100644 --- a/scalingbitcoin/tel-aviv-2019/work-in-progress/index.html +++ b/scalingbitcoin/tel-aviv-2019/work-in-progress/index.html @@ -11,4 +11,4 @@ < Tel Aviv (2019) < Work In Progress Sessions

Work In Progress Sessions

Speakers: Mikael Dubrovsky, Yoshinori Hashimoto, Daniel Marquez, Thomas Eizinger

Transcript By: Bryan Bishop

Category: -Conference

((this needs to break into individual transcripts))

https://twitter.com/kanzure/status/1172173183329476609

Optical proof of work

Mikael Dubrovsky

I am working on optical proof-of-work. I’ll try to pack a lot into the 10 minutes. I spent a year and a half at the Technion.

Physical scaling limits

Probably the three big ones are, there’s first the demand for blockchain or wanting to use bitcoin more. I think more people want to use bitcoin. Most of the world does not have access to property rights. There’s a lot of pent up demand for something like bitcoin. Bitcoin is $100 billion and offshore banking is $20 trillion. There is definitely demand, but that doesn’t seem to be the bottleneck for now.

Throughput seems like the bottleneck of this conference, so I won’t address it.

Scalability of PoW

The energy use of bitcoin has grown monotonically, or even worse, with the bitcoin market cap. Nobody really knows how much energy bitcoin is using. But we do know that we’re getting centralization of miners, often in countries where the governments are not ideal. People mine in some of these countries because they can make deals for better electricity.

Part of the problem with electricity-based PoW is that it’s centralized, open to partitioning attacks, and open to regulation in general since it sits next to power plants and waterfalls. It’s huge, it needs cooling, it can’t be hidden. It’s unfairly distributed, especially geographically, and it excludes nearly all large cities, which excludes most living people. Also, you need a lot of electricity.

Redesigning the economics of PoW

So instead of getting rid of PoW, can we change the economics for miners?

Fundamentally, what we want to do… for this conference, this is not an important slide describing PoW. But looking at this problem creatively, there’s nothing we know how to prove remotely other than computation. It might be ideal to just burn diamonds, but you can’t prove you did it. So you’re stuck proving some kind of computation. I think we’re stuck with computation for now.

We can try to pick a computation where, on the optimal hardware for that computation, you get a better CAPEX/OPEX ratio: you pay more for the equipment and less for the energy. This ratio is just arbitrary. For ASICs today, you’re mostly paying for energy and much less for hardware.

The benefits you would get from a high-CAPEX proof-of-work are that it would be hard to arbitrage compared to electricity, access to capital is much more democratic, it scales better, it’s geographically distributed, there are fewer partition attacks on the network, and the hashrate is more resilient to the coin price. This is a totally fake graph, but the hashrate follows the bitcoin price. The hashrate doesn’t grow all the time because people turn off their miners. If you buy a miner and it has a low operating cost, you wouldn’t turn it off, so even if the price is volatile the hashrate wouldn’t be volatile.

PoW algorithm design goals

The high-level goals for a new PoW would be: hardware that is accelerated and highly energy-efficient; digital verifiability even if the hardware is analog; being optimal on the hardware you’re targeting; and the same or better security than we have now.

Silicon photonic co-processors

There’s a number of emerging platforms for analog computing, but this one is very promising because you can go to TSMC or GlobalFoundries. These kinds of chips are already commercial for processing data coming out of fiber. They do a fourier transform on the optical data in the silicon chip using waveguides made of silicon. This is already commercial, and lots of companies are starting to do machine learning with this stack.

The way a chip like this works is that you have a laser input, and there are multiple architectures for this: the light gets split, your bits are converted into light intensities, and then they go through a number of interferometers. You can set the tuning on the interferometers to get a particular transformation, like a vector-matrix multiplication, and then you collect your output and convert it back from light to bits. If you have a good chip, you’ve saved a lot of energy, in theory.

We wanted to stick to something as close to hashcash as possible since it’s easy to implement and people might use it. We didn’t want to work with all-optical hashing because people probably wouldn’t use it. We also didn’t want to design a hash. We didn’t want to use SGX or some trusted setup. And we wanted to use hardware that we can go to the foundry for.

We composed a function that you can do on the optical chip with hashes. It’s “HeavyHash”, something like H(F(H(I))). It’s more work to do one hash, but it ends up that the total hashrate of the system is lower, so you’re doing fewer sha hashes because you’re doing vector-matrix operations, and instead of paying for those operations in electricity you’re paying for them in chip space on the photonic chip.
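A rough sketch of that composition (everything here, from the matrix size to the quantization, is made up for illustration; it just shows the H(F(H(I))) shape with F as the matrix-vector step a photonic chip would accelerate):

```python
import hashlib
import random

N = 64  # hypothetical matrix dimension

def F(digest: bytes, matrix) -> bytes:
    v = [b & 0x0F for b in digest] + [b >> 4 for b in digest]   # 64 small values
    prod = [sum(matrix[i][j] * v[j] for j in range(N)) & 0xFF for i in range(N)]
    # fold the product back into the digest so F preserves entropy from the input
    return bytes(d ^ prod[i] for i, d in enumerate(digest))

def heavy_hash(header: bytes, matrix) -> bytes:
    inner = hashlib.sha256(header).digest()      # H
    mixed = F(inner, matrix)                     # F, the photonic-friendly part
    return hashlib.sha256(mixed).digest()        # H again

rng = random.Random(0)
M = [[rng.randrange(16) for _ in range(N)] for _ in range(N)]
print(heavy_hash(b"block header bytes", M).hex())
```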

The requirements for the function F are that it be acceleratable on low-opex hardware like photonic chips, that it preserve entropy, and that it have the “minimal effective hard” property.

Progress

We have some prototypes for these chips. Here’s a bench prototype. We were going to bring this, but it got stuck in customs. We hope to publish a paper soon.

Feasibility for bitcoin

Is it time to change bitcoin’s PoW to optical proof-of-work? Probably not now, but these geographical centralization issues over time are going to be an issue and at some point bitcoin is going to hit a wall.

A Lucas critique of the difficulty adjustment algorithm of the bitcoin system

Yoshinori Hashimoto

Introduction

I am going to talk about our research project about difficulty adjustment algorithm. BUIDL is my employer in Japan. This is joint work with my colleagues.

Difficulty adjustment

The difficulty adjustment algorithm is used to adjust the difficulty of the bitcoin system to achieve one block every 10 minutes based on recent hashrate.

Policy goal

There’s a winning rate, a global hash rate, and the block time is an exponential distribution with intensity W x H. The current bitcoin DAA adjusts the difficulty every 2016 blocks (every 2 weeks), and adjusts so that the block time is about 10 minutes. If block generation is too slow, then the winning rate is increased, and if it is too fast, then the winning rate is decreased.
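For reference, the existing rule being described: every 2016 blocks the target is rescaled by the ratio of the observed timespan to the expected two weeks, clamped to a factor of four, so slow blocks raise the winning rate and fast blocks lower it. A simplified helper (illustrative, not consensus code, which also has a well-known off-by-one in the measured interval):

```python
def retarget(old_target: int, first_block_time: int, last_block_time: int) -> int:
    """Bitcoin-style retarget across a 2016-block window."""
    expected = 2016 * 600                                    # two weeks, in seconds
    actual = last_block_time - first_block_time
    actual = max(expected // 4, min(actual, expected * 4))   # clamp to 4x either way
    return old_target * actual // expected                   # slower blocks -> easier target
```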

Sample analogue

If the hashrate was directly observable, it would be easier to determine an ideal winning rate. Still we can compute an estimation of the hashrate. Bitcoin DAA achieves the policy goal by using the estimated historical hash rate.

Miners’ incentive

If we change the winning rate, then mining profitability will also change. If we increase the winning rate, then it is easier to get the mining reward and more hashrate will be provided. This increases the global hash rate. For miners, it is easy to adjust the hashrate because they can do it just by turning on and off their ASIC machines. Change in winning rate also changes the hash rate.

((.. I think he’s reading from his notes, can we just copy+paste the notes into this document? Thank you.))

Policy suggestions

We suggest upgrading Bitcoin DAA to Bitcoin Cash DAA which is a good candidate. There might be some other better one.

https://ssrn.com/abstract=3410460

Offline transactions

Daniel Marquez

Introduction

The inspiration for this work came from the concept of banking the unbanked. The Open Money Initiative introduced me to it, and then there was MIT DCI’s sponsorship of the ethics course and the blockchain ethics course. This is a subset of that which I would like to talk about, which involves offline transactions.

Use cases

I am interested in post-natural-disaster scenarios or suboptimal infrastructure: Venezuela, where the infrastructure might be down but I want to buy basic necessities, or maybe a hurricane goes through Miami and I want things to keep working while everything is offline.

Possible solutions

There are some possible solutions like the OpenDime wallet or other HSM wallets. There’s layer 2 like the lightning network.

Lightning network

Why LN? It’s robust, it’s getting better, a lot of the talks have focused on LN. What about payment channels when parties go offline? But this doesn’t fit the use case, because how can two parties instantiate offline payments?

WIP

This is very much a work in progress. Email me or tweet at me: dmarquez@mit.edu or tweet at bumangues92. The idea is to enable these transactions over WLAN or bluetooth or something.

Q: What about trusted execution environments where you prove that you deleted the private key?

A: I can’t really talk about that, but I’ll look into that.

Q: I don’t think they have attestations. I would want a proof that you are running the hardware you’re claiming you’re running. We can’t do …

BB: You can do 2-of-2 multisig with the sender and the recipient and then delete the private key. But they can double spend it out from under you unfortunately. You can probably do offline transactions if you are willing to tolerate an on-chain delay of like 3 months.

Conditional transfer of tokens on top of bitcoin

Thomas Eizinger (coblox.tech)

I want to work on trustlessly rebalancing your LN channels with USDT. Let me tell you how lightning channels work. Just kidding ((laughter)). Does anyone remember omnilayer? It was originally called mastercoin. It has tokens on it like USDT. Omnilayer is a separate consensus layer on top of bitcoin, using OP_RETURN to embed extra data into bitcoin transactions where this data is not validated by bitcoin nodes but is validated by omnilayer clients.

Can we use HTLCs with omnilayer simple_send? The answer is yes, we can do that. It works just as you would expect it to work. You have an output, and the consensus rules require the redeeming transaction to satisfy the HTLC: it provides the preimage to the hash.
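The HTLC itself is the standard bitcoin script pattern; sketched here as an opcode template (illustrative only; the details of wiring this into an omnilayer simple_send are not shown):

```python
def htlc_script(payment_hash, recipient_pubkey, sender_pubkey, locktime):
    """Classic hash-time-locked contract: claim with the preimage, or refund
    after the timelock expires."""
    return [
        "OP_IF",                                     # claim path: reveal the preimage
            "OP_SHA256", payment_hash, "OP_EQUALVERIFY",
            recipient_pubkey, "OP_CHECKSIG",
        "OP_ELSE",                                   # refund path: wait out the timelock
            locktime, "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
            sender_pubkey, "OP_CHECKSIG",
        "OP_ENDIF",
    ]
```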

How is this useful? We’ve learned several things while doing this investigation. If you do assets on top of bitcoin that aren’t bitcoin, you don’t need to include any extra functionality to get conditional transfers: we can define assets on top of bitcoin and still use HTLCs and scripts to conditionally spend those assets. Omnilayer actually has a lot of commands to do all kinds of things. You can’t directly buy tether with bitcoin; you have to go through a weird omni coin to buy tether, which I think is just there to make them money. But you can construct a transaction where, within one transaction, someone else sends bitcoin and you get tokens back. It’s an atomic swap on the same chain.

Omni can potentially be used with everything that uses HTLCs such as cross-chain atomic swaps or LN.

Constructing these transactions is non-trivial. You can make a lot of mistakes: if you put the wrong address in, the consensus layer may not recognize where you wanted to transfer it. This is where our WIP comes in. We want to avoid one-off implementations. We want to implement the use case of rebalancing your LN channel with USDT, and we want to avoid the kind of implementation where all you have to build upon is the blockchain node.

We are working on COMIT, a framework for cryptographic protocols. We’re trying to give you primitives that you will need when you build a cryptographic protocol so that you can actually focus on your work. Not just a prototype. Exchanging messages over a robust framework, or negotiating keys, or most cryptographic protocols have timeouts because parties can leave at any point so you need to watch the chain. You need some reusable pieces. You want to hide cryptographic complexity from users.

Storm: layer 2/3 storage and messaging

Maxim Orlovsky (Pandora Core)

This is the idea of using economic incentives to pay for storage. This is early work in progress.

The problem we’re trying to solve is storage. I am not going to describe why we have to store all this data like lightning network, scriptless scripts and single-use seals that all have off-chain data that we need to store remotely.

Can it be trustless but also guaranteed? Yes. By using probabilistically checkable proofs, HTLCs and partially-signed bitcoin transactions.

Basically the idea is to have users pay other users for data storage. Bob can prove the fact that he has the data in a succinct way both to Alice and on-chain with a bitcoin script. We use probabilistically checkable proofs for that. Alice gets obscured data from Bob encrypted with his yet unknown public key and is able to decrypt them only when Bob takes his payment for storage. Alice only learns this at that point.

I won’t have time to fully describe probabilistically checkable proofs. It’s a kind of zero-knowledge technology. You use the hash of a merkle root, through some random oracle, to select some pieces of the data, and then you provide those pieces together with their merkle paths. You can’t choose those pieces upfront, so by revealing only a small part of the data you’re still able to prove something about all of it with some probability, and I’ll show how we are using that for storage.
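A sketch of that challenge-selection idea (chunk sizes, the number of samples, and the seeding are all made up here; the point is only that a public seed picks the chunks, so the prover can't choose which parts to keep):

```python
import hashlib

def merkle_root(leaves):
    layer = [hashlib.sha256(x).digest() for x in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])                     # duplicate the odd leaf
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

def challenge_indices(seed: bytes, n_chunks: int, n_samples: int = 8):
    """Derive the sampled chunk indices from a public seed via a random oracle."""
    out, counter = [], 0
    while len(out) < n_samples:
        h = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        out.append(int.from_bytes(h, "big") % n_chunks)
        counter += 1
    return out

chunks = [f"chunk-{i}".encode() for i in range(1024)]
seed = merkle_root(chunks)
print(challenge_indices(seed, len(chunks)))  # Bob must reveal these chunks + merkle paths
```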

Bob stores data for Alice. She puts up a payment for that storage, and it goes into an escrow timelock contract with two inputs, coming from Alice and Bob. There are two ways it can be spent. If Alice forgets about her data (maybe she dies or whatever), Bob still takes the payment for storage and gets his stake back. After that, they sign a number of transactions but they don’t publish them. Bob encrypts Alice’s data with some public/private key pair. Bob constructs a special PCP proof showing Alice that he has really encrypted the original data. If they close cooperatively, she signs Bob’s pre-signed transaction. When Bob claims the funds, he reveals a decryption key that allows Alice to decrypt the data; Alice can verify that she has the correct encrypted blob through some mechanism. If Bob doesn’t provide Alice with the key and he doesn’t take his money, Alice will be able to get her money back plus Bob’s stake, to compensate for the loss of the data.

There are also two uncooperative cases. Maybe Alice is unhappy with Bob’s proofs because he is trying to cheat: she signs another pre-signed transaction, and after some delay she will be able to get both her money back and Bob’s stake, again to compensate for Bob’s negligence. But if Alice is the one trying to cheat, Bob can appeal and provide another PCP proof showing on-chain that he really has the data and Alice just doesn’t want to pay. In this case, Bob takes his stake back and the reward from Alice, maybe not the full reward; it depends on the agreement between the two parties when they set up the contract.

How can Bob prove on chain that he has the data?

At setup time, Alice uses her newly derived public key for both the funding transaction output and the deterministic selection of some small portion of the source data. She puts into the funding transaction the hash of the public key she will be using later. She uses this public key to select some random portion of the data. This portion is double-hashed by some hash and included in an HTLC fallback. This hash can be unlocked only if you have the source data and Alice’s public key. When Bob wants to prove he still has the data available, he sees that Alice has published a management transaction, he extracts the public key from that transaction, and now he knows what part of the source data to use; he hashes that, and then he …. to unlock his… output.

Why is this important? This is tech for incentivized data storage. Alice only needs to store a seed phrase. This can potentially be done on top of the lightning network, and zero transactions will reach the blockchain.

Building ultra-light clients with SNARKs

Noah Vesely (C-Labs)

…..

Signet

Karl-Johan Alm

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/signet/

https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-07-signet/

Miniscripts

Andrew Poelstra

https://diyhpl.us/wiki/transcripts/stanford-blockchain-conference/2019/miniscript/

https://diyhpl.us/wiki/transcripts/noded-podcast/2019-05-11-andrew-poelstra-miniscript/

Vaults

Bryan Bishop

oh wait that’s me (who types for the typer?)

https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017231.html

\ No newline at end of file +Conference

((this needs to break into individual transcripts))

https://twitter.com/kanzure/status/1172173183329476609

Optical proof of work

Mikael Dubrovsky

I am working on optical proof-of-work. I’ll try to pack a lot into the 10 minutes. I spent a year and a half at Techion.

Physical scaling limits

Probably the three big ones are, there’s first the demand for blockchain or wanting to use bitcoin more. I think more people want to use bitcoin. Most of the world does not have access to property rights. There’s a lot of pent up demand for something like bitcoin. Bitcoin is $100 billion and offshore banking is $20 trillion. There is definitely demand, but that doesn’t seem to be the bottleneck for now.

Throughput seems like the bottleneck of this conference, so I won’t address it.

Scalability of PoW

The energy use of bitcoin has grown monotonically, or even worse, with the bitcoin market cap. Nobody really knows how much energy bitcoin is using. But we do know that we're getting centralization of miners, often in countries where the governments are not ideal. People mine in some of these countries because they can make deals for better electricity.

Part of the problem with electricity-based PoW is that it's centralized, open to partitioning attacks, and open to regulation in general since it sits next to power plants and waterfalls. It's huge, it needs cooling, it can't be hidden. It's unfairly distributed, especially geographically, and it excludes nearly all large cities, which excludes most living people. Also, you need a lot of electricity.

Redesigning the economics of PoW

So instead of getting rid of PoW, can we change the economics for miners?

Fundamentally, this slide describing PoW is not an important one for this conference. But looking at this problem creatively, there's nothing we know how to prove remotely other than computation. It might be ideal to just burn diamonds, but you can't prove you did it. So you're stuck proving some kind of computation. I think we're stuck with computation for now.

We can try to pick a computation where the optimal hardware for it gives you a better CAPEX/OPEX ratio, where you pay more for the equipment and less for the energy. This ratio is just arbitrary. For ASICs, you're mostly paying for energy and much less for hardware.

The benefits you would get from a high-CAPEX proof-of-work are that it would be hard to arbitrage compared to electricity, access to capital is much more democratic, it scales better, it's geographically distributed, there are fewer partition attacks on the network, and the hashrate is more resilient to the coin price. This is a totally fake graph, but the hashrate follows the bitcoin price. The hashrate doesn't grow all the time because people turn off their miners. If you buy a miner and it has a low operating cost, you wouldn't turn it off, so even if the price is volatile the hashrate wouldn't be volatile.

PoW algorithm design goals

The high-level goals for a new PoW would be to have it accelerated by highly energy-efficient hardware, to have it be digitally verifiable even if the hardware is analog, to have it be optimal on the hardware you're targeting, and to have the same or better security than we have now.

Silicon photonic co-processors

There’s a number of emerging platforms for analog computing, but this one is very promising because you can go to TMSC or Global Foundries. These kinds of chips are already commercial for processing data coming out of fiber. They do a fourier transform on the optical data in the silicon chip using waveguides made of silicon. This is already commercial and lots of companies are starting to do machine learning with this stack.

The way a chip like this works is that you have a laser input, and there are multiple architectures for this: the light gets split, your bits are converted into light intensities, and then they go through a number of interferometers. You can set the tuning on the interferometers to get a different transformation, like a vector-matrix multiplication, and then you collect your output and convert it back from light to bits. If you have a good chip, you've saved a lot of energy in theory.

We wanted to stick to something as close to hashcash as possible since it’s easy to implement and people might use it. We didn’t want to work with all-optical hashing because people probably wouldn’t use it. We also didn’t want to design a hash. We didn’t want to use SGX or some trusted setup. And we wanted to use hardware that we can go to the foundry for.

We composed a function that you can do on the optical chip with hashes. It's "HeavyHash", like H(F(H(I))). It's more work to do this one hash, but it ends up being that the total hashrate of the system is lower, so you're doing fewer SHA hashes because you're doing vector-matrix operations, and instead of paying for those operations in electricity you're paying for them in chip space on the photonic chip.

The requirements for the function F are that it be acceleratable on low-OPEX hardware like photonic chips, that it preserve entropy, and that it have a "minimal effective hardness" property.
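
As a rough illustration of the H(F(H(I))) composition described above, here is a minimal sketch: the SHA-256 calls stand in for H, and a placeholder matrix multiply stands in for the analog transform F that the photonic chip would compute. The matrix, its dimensions, and the chunk sizes are arbitrary assumptions, not the actual scheme.

    import hashlib
    import numpy as np

    # Placeholder mixing matrix; in the real design this transform is what the
    # optical hardware computes cheaply, paid for in chip area rather than energy.
    MATRIX = np.random.RandomState(seed=1).randint(0, 4, size=(32, 32))

    def F(digest: bytes) -> bytes:
        vec = np.frombuffer(digest, dtype=np.uint8)
        out = (MATRIX @ vec) % 256                      # vector-matrix multiply
        return out.astype(np.uint8).tobytes()

    def heavy_hash(block_header: bytes) -> bytes:
        inner = hashlib.sha256(block_header).digest()   # H(I)
        mixed = F(inner)                                 # F(H(I)), the "heavy" analog step
        return hashlib.sha256(mixed).digest()            # H(F(H(I)))

    print(heavy_hash(b"example header").hex())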

Progress

We have some prototypes for these chips. Here’s a bench prototype. We were going to bring this, but it got stuck in customs. We hope to publish a paper soon.

Feasibility for bitcoin

Is it time to change bitcoin’s PoW to optical proof-of-work? Probably not now, but these geographical centralization issues over time are going to be an issue and at some point bitcoin is going to hit a wall.

A Lucas critique of the difficulty adjustment algorithm of the bitcoin system

Yoshinori Hashimoto

Introduction

I am going to talk about our research project on the difficulty adjustment algorithm. BUIDL is my employer in Japan. This is joint work with my colleagues.

Difficulty adjustment

The difficulty adjustment algorithm is used to adjust the difficulty of the bitcoin system to achieve one block every 10 minutes based on recent hashrate.

Policy goal

There’s a winning rate, a global hash rate, and the block time is an exponential distribution with intensity W x H. The current bitcoin DAA adjusts the difficulty every 2016 blocks (every 2 weeks), and adjusts so that the block time is about 10 minutes. If block generation is too slow, then the winning rate is increased, and if it is too fast, then the winning rate is decreased.

Sample analogue

If the hashrate were directly observable, it would be easy to determine an ideal winning rate. Still, we can compute an estimate of the hashrate. The Bitcoin DAA achieves the policy goal by using the estimated historical hash rate.

Miners’ incentive

If we change the winning rate, then mining profitability will also change. If we increase the winning rate, then it is easier to get the mining reward and more hashrate will be provided, which increases the global hash rate. For miners it is easy to adjust the hashrate, because they can do it just by turning their ASIC machines on and off. A change in the winning rate therefore also changes the hash rate.

((.. I think he’s reading from his notes, can we just copy+paste the notes into this document? Thank you.))

Policy suggestions

We suggest upgrading the Bitcoin DAA; the Bitcoin Cash DAA is a good candidate, though there might be some other, better one.

https://ssrn.com/abstract=3410460

Offline transactions

Daniel Marquez

Introduction

The inspiration for this work came from the concept of banking the unbanked; the Open Money Initiative introduced me to it, along with MIT DCI's sponsorship of the blockchain ethics course. There is a subset of that which I would like to talk about, which involves offline transactions.

Use cases

I am interested in post-natural-disaster scenarios or places with suboptimal infrastructure: Venezuela, where the infrastructure might be down but I want to buy basic necessities, or maybe a hurricane goes through Miami and I want things to keep working while everything is offline.

Possible solutions

There are some possible solutions like the OpenDime wallet or other HSM wallets. There’s layer 2 like the lightning network.

Lightning network

Why LN? It’s robust, it’s getting better, a lot of the talks have focused on LN. What about payment channels when parties go offline? But this doesn’t fit the use case, because how can two parties instantiate offline payments?

WIP

This is very much a work in progress. Email me at dmarquez@mit.edu or tweet at @bumangues92. The idea is to enable these transactions over WLAN or bluetooth or something.

Q: What about trusted execution environments where you prove that you deleted the private key?

A: I can’t really talk about that, but I’ll look into that.

Q: I don’t think they have attestations. I would want a proof that you are running the hardware you’re claiming you’re running. We can’t do …

BB: You can do 2-of-2 multisig with the sender and the recipient and then delete the private key. But they can double spend it out from under you unfortunately. You can probably do offline transactions if you are willing to tolerate an on-chain delay of like 3 months.

Conditional transfer of tokens on top of bitcoin

Thomas Eizinger (coblox.tech)

I want to work on trustlessly rebalancing your LN channels with USDT. Let me tell you how lightning channels work. Just kidding ((laughter)). Does anyone remember omnilayer? It was originally called mastercoin. It has tokens on it like USDT. Omnilayer is a separate consensus layer on top of bitcoin, using OP_RETURN to embed extra data into bitcoin transactions where this data is not validated by bitcoin nodes but is validated by omnilayer clients.
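
For a sense of what "extra data in OP_RETURN" looks like, here is a small sketch of an omnilayer-style simple send payload embedded in an OP_RETURN output. The field layout (the "omni" marker, version, type, property id, amount) is my approximation of the class C encoding, and property id 31 for USDT is an assumption; bitcoin nodes treat the payload as opaque data, only omnilayer clients parse it.

    import struct

    def simple_send_payload(property_id: int, amount: int) -> bytes:
        # "omni" magic, tx version, tx type (0 = simple send), property id, amount
        return b"omni" + struct.pack(">HHIQ", 0, 0, property_id, amount)

    def op_return_script(payload: bytes) -> bytes:
        assert len(payload) <= 75                          # a single-byte push suffices here
        return b"\x6a" + bytes([len(payload)]) + payload   # OP_RETURN <payload>

    script = op_return_script(simple_send_payload(31, 100_000_000))
    print(script.hex())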

Can we use HTLCs with omnilayer simple_send? The answer is yes, we can do that. It works just as you would expect it to work. You have an output, and the consensus rules require you to provide the redeem transaction for the HTLC, which provides the preimage to the hash.

How is this useful? We've learned several things while doing this investigation. If you do assets on top of bitcoin that aren't bitcoin, you don't need to include any functionality that gives you conditional transfers. We can define assets on top of bitcoin and still use HTLCs and scripts to conditionally spend those assets. Omnilayer actually has a lot of commands to do all kinds of things. You can't directly buy tether with bitcoin; you have to go through a weird omnicoin to buy tether, which I think is just there to make them money. But you can construct a transaction where, within one transaction, someone else sends bitcoin and you get tokens back. It's an atomic swap on the same chain.

Omni can potentially be used with everything that uses HTLCs such as cross-chain atomic swaps or LN.

Constructing these transactions is non-trivial. You can make a lot of mistakes: if you put the wrong address in, the consensus layer may not recognize where you want it to transfer the tokens. This is where our WIP comes in. We want to avoid one-off implementations. We want to implement the use case of rebalancing your LN channel with USDT, and we want to avoid the kind of implementation where all you have to build upon is the blockchain node.

We are working on COMIT, a framework for cryptographic protocols. We're trying to give you the primitives that you will need when you build a cryptographic protocol so that you can actually focus on your work, and not just build a prototype: exchanging messages over a robust framework, negotiating keys, and watching the chain, since most cryptographic protocols have timeouts because parties can leave at any point. You need some reusable pieces. You want to hide cryptographic complexity from users.

Storm: layer 2/3 storage and messaging

Maxim Orlovsky (Pandora Core)

This is the idea of economic incentives for doing data storage. This is early work in progress.

The problem we’re trying to solve is storage. I am not going to describe why we have to store all this data like lightning network, scriptless scripts and single-use seals that all have off-chain data that we need to store remotely.

Can it be trustless but also guaranteed? Yes. By using probabilistically checkable proofs, HTLCs and partially-signed bitcoin transactions.

Basically the idea is to have users pay other users for data storage. Bob can prove the fact that he has the data in a succinct way, both to Alice and on-chain with a bitcoin script. We use probabilistically checkable proofs for that. Alice gets obscured data from Bob, encrypted with a public key of his that she doesn't yet know, and she is able to decrypt it only when Bob takes his payment for storage; she only learns the key at that point.

I won’t have time to describe probabilistically checkable proofs. It’s a kind of zero knowledge technology. You use the hash of a merkle root over some random oracle to select more pieces of the data, and then you provide those pieces together with a merkle path tree to demonstrate with some probability you can’t assume this number upfront but for a small number of the data you have, you’re still able to prove something on that and I’ll show how we are using that for storage.

Bob stores data for Alice. She puts up a payment for that storage, and Bob puts it under an escrow timelock contract. There are two inputs, coming from Alice and Bob, and two ways it can be spent. If Alice forgets about her data (maybe she dies or whatever), Bob still takes the payment for storage and his stake back. After that, they sign a number of transactions but they don't publish them. Bob encrypts Alice's data with some public and private key pair. Bob constructs a special PCP proof showing Alice that he has really encrypted the original data. If they close cooperatively, she signs Bob's pre-signed transaction. When Bob claims the funds, he reveals a decryption key that allows Alice to decrypt the data. Alice can verify that she has the correct encrypted blob through some mechanism. If Bob doesn't provide Alice with the key and doesn't take his money, Alice will be able to get her money back plus Bob's stake, to compensate for the loss of the data.

There are also two uncooperative cases. Maybe Alice is unhappy with Bob's proofs and he is trying to cheat. She signs another pre-signed transaction. After some delay, she will be able to get both her money back and Bob's stake, again to compensate for Bob's negligence. But if Alice is trying to cheat, Bob can appeal to that fact and provide another PCP proof showing on-chain that he really has this data and Alice just doesn't want to pay. In this case, Bob takes his stake back and the reward from Alice, maybe not the full reward; it depends on the agreement between the two parties when they set up the contract.

How can Bob prove on chain that he has the data?

At setup time, Alice uses her newly derived public key both for the funding transaction output and for deterministically defining some small portion of the source data. She puts into the funding transaction the hash of the public key she will be using later. She uses this public key to select some random portion of the data she has. This portion is double-hashed and included in an HTLC fallback. This hash can be unlocked only if you have the source data and Alice's public key. When Bob wants to prove he still has the data available, he sees that Alice has published a management transaction, he extracts the public key from that transaction, and now he knows what part of the source data to use; he hashes that, and then he …. to unlock his… output.

Why is this important? This is tech for incentivized data storage. Alice only needs to store a seed phrase. This can potentially be done on top of the lightning network, and zero transactions will reach the blockchain.

Building ultra-light clients with SNARKs

Noah Vesely (C-Labs)

…..

Signet

Karl-Johan Alm

https://diyhpl.us/wiki/transcripts/scalingbitcoin/tel-aviv-2019/edgedevplusplus/signet/

https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2019-06-07-signet/

Miniscripts

Andrew Poelstra

https://diyhpl.us/wiki/transcripts/stanford-blockchain-conference/2019/miniscript/

https://diyhpl.us/wiki/transcripts/noded-podcast/2019-05-11-andrew-poelstra-miniscript/

Vaults

Bryan Bishop

oh wait that’s me (who types for the typer?)

https://www.coindesk.com/the-vault-is-back-bitcoin-coder-to-revive-plan-to-shield-wallets-from-theft

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017229.html

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-August/017231.html

Atomic swaps

Thomas Eizinger

Date: October 7, 2018

Transcript By: Bryan Bishop

Tags: Research, Htlc, Adaptor signatures

Category: Conference

Media: https://www.youtube.com/watch?v=FI9cwksTrQs&t=1025s

https://twitter.com/kanzure/status/1048790167283068929

History

This should be easy to follow. I want to start off with some history. The first mention of atomic swap concepts was in 2013 in a bitcointalk.org thread where people were wondering how to exchange coins between independent ledgers in a trustless way. There were a few comments on the thread every now and then asking has anyone implemented this. It was quiet until last year, when people actually started doing atomic swaps. Lightning Labs did one for bitcoin-litecoin, and then there were some for bitcoin-ethereum, bitcoin-dogecoin, all kinds of things.

Why do I care?

My name is Thomas Eizinger. I am based in Sydney. We are building the COMIT protocol, a decentralized protocol aiming to connect blockchains together. We chain atomic swaps together; because they are all linked together, we can trustlessly exchange funds across multiple chains.

Conditional transfers

The main building block for atomic swaps is the hash time-locked contract (HTLC). But it's not the abstraction we should be talking about. Instead, we should talk about conditional transfers. We need a secret, and a timeout to get the funds back when we don't want to reveal the secret. There are two major ways to achieve this: by using scripting languages like bitcoin script, or by using "scriptless scripts", created by Andrew Poelstra, which use adaptor signatures and transient private keys. I am sure we will see other ways to do conditional transfers.

HTLCs

Here is a bitcoin script for an HTLC. It uses OP_SHA256 and it checks whether the hash matches the expected hash. Everything after that is just a regular p2pkh transaction where you compare to a pubkeyhash and verify the signature at the end.
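
For reference, an HTLC of the kind being described looks roughly like the following. This is a generic template with placeholder values, not necessarily the exact script from the slide: the first branch pays whoever presents the SHA256 preimage plus the redeemer's signature, the second branch refunds the sender after a timeout.

    # Generic HTLC template; <...> values are placeholders.
    htlc_script = """
    OP_IF
        OP_SHA256 <hash> OP_EQUALVERIFY
        OP_DUP OP_HASH160 <redeem_pubkey_hash>
    OP_ELSE
        <timeout> OP_CHECKLOCKTIMEVERIFY OP_DROP
        OP_DUP OP_HASH160 <refund_pubkey_hash>
    OP_ENDIF
    OP_EQUALVERIFY OP_CHECKSIG
    """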

Atomic swaps

How do atomic swaps work? We need four transactions, and 2 conditional payments. This is what it looks like. There’s Alice and Bob and two separate ledgers like bitcoin and litecoin. Alice sends an HTLC that pays Bob. In bitcoin script, she puts one of those scripts, puts in Bob’s pubkeyhash, she computes a secret and keeps it secret, hashes it, puts the hash into the script, then spends money to that address. Bob does the same thing, using the same hash that Alice used, but puts in a different pubkeyhash and pays Alice on the other chain. Alice sees this, takes the secret, redeems the money on the other chain, and by redeeming it she reveals the script and the secret and now Bob can redeem the coins on the other chain.

Optionality

This is an unfair protocol: the problem is that if Alice is a greedy person, and Alice monitors the LTC:BTC exchange ratio, then she could just wait for the timeout of her HTLC and then take her money back. We have some money locked up on both chains. There’s an exchange rate, and it can’t change because it’s on the blockchain. There might be a timeout of like 6 hours, and the actual exchange rate that you could get at a centralized custodial exchange might be quite different and maybe Alice wants to use that instead of continuing with the established atomic swap.

Technically, Alice has an option that she didn't pay for: the freedom of deciding whether or not to go through with the trade.

Fixing the atomic swap protocol

First we have to attribute the fault, then we introduce punishment into the protocol. There’s a uniquely attributable fault that is important: if we want to punish someone, we want to be certain we’re punishing the right person. We want to be sure that we can uniquely and with certainty identify the player we want to punish. In this case it’s easy because Alice didn’t release the secret, so we know we should punish Alice. Thankfully we’re already in a system dealing with money. We could get Alice to lose something and this is how we could punish her. On which chain can we punish Alice? If this step doesn’t happen, can we tell on the bitcoin chain whether the step has happened? The two chains are independent of each other, so we can’t punish Alice for something she didn’t do on the litecoin chain on the bitcoin chain. We need to take away some collateral from her if she didn’t behave in the right way.

Atomic swap revision 1 with fairness

What’s the difference between revision 0 and 1? What we introduce is a second HTLC or a conditional payment where Alice instead of sending the HTLC that pays Bob on the bitcoin chain, she puts up collateral on the litecoin chain and she gets it back if she redeems the money.

Collateral design

The collateral is a conditional payment. It uses the same preimage and hash she used in the other HTLC, except the payout paths are reversed. In the case that the secret is presented, the money goes to Alice, and in the case the timeout is activated, the money goes to Bob. If Alice tries to wait until the timelock passes, the collateral goes to Bob. Alice is now incentivized to reveal the preimage to Bob. Now they no longer have to trust Alice to actually go through with the whole thing.
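
Under the same generic HTLC template sketched earlier, the collateral output would look something like this (placeholders again): same hash, but the preimage branch refunds Alice's collateral while the timeout branch pays Bob.

    # Collateral output: same hash as the swap HTLC, payout paths swapped.
    collateral_script = """
    OP_IF
        OP_SHA256 <same_hash_as_swap_htlc> OP_EQUALVERIFY
        OP_DUP OP_HASH160 <alice_pubkey_hash>
    OP_ELSE
        <timeout> OP_CHECKLOCKTIMEVERIFY OP_DROP
        OP_DUP OP_HASH160 <bob_pubkey_hash>
    OP_ENDIF
    OP_EQUALVERIFY OP_CHECKSIG
    """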

But now Alice has to trust Bob, which is bad. Why does Alice have to trust Bob? If you look at the diagram, who forces Bob to do anything in this protocol? Bob could just wait for the timeout and then get free money. Who doesn’t want free money? This is also not ideal.

Atomic swap revision 2 with fairness

We introduce another tweak. What we actually want to achieve is that Alice should be able to take her collateral back, but only as long as Bob hasn't committed to the whole thing. For this, these two payments need to be in the same transaction. This could look like this: Alice sends an HTLC that pays Bob through the bitcoin chain, then she constructs a transaction that puts in her collateral and has an output that should be the sum of whatever Bob is willing to pay plus her collateral. She partially signs it and sends it to Bob, then Bob adds his funds, and then he sends it to the chain.

Bob is now guaranteed that he has this transaction on his machine and he can add an output and an input at any time. If he sends it to the chain, then Alice is incentivized to take it out and thereby Bob gets his secret which is in the end what he wants. Alice can cancel the whole thing by spending the output somewhere else. As long as Bob didn’t put it on the chain, it’s not yet there. If Bob waits too long, Alice just spends it somewhere else. If Bob tries to include it, then it’s rejected by the network as a double spend.

But we have lightning network!

It’s not quite true that LN solves this. You might argue the optionality exists because there’s a six hour difference between the time the transaction gets into the chain and the time you have to refund it because of the timeout part. It’s not effected by fast or slow transactions, it’s caused by the underlying protocol being unfair. If the incentives don’t match up, then no matter how fast your protocol, you could always delay revealing the secret and this delay in revealing the secret introduces this optionality.

Privacy

Now that we have fair atomic swaps, let’s talk about privacy. How can we make atomic swaps private? We heard about this idea called scriptless scripts which uses signatures instead of hash functions and this makes it indistinguishable from other transactions on the chain. So how could we use this for atomic swaps?

Alice and Bob start off with their respective keys and some secret that only Alice knows at the beginning. Alice computes an address that contains Bob's private key plus some value t. Alice can do this without knowing Bob's private key. Bob locks the funds in a multisignature where they need both private keys in order to redeem the money. At this stage, neither of them can spend this money: Alice doesn't know Bob's key, so she can't redeem the money she has locked, and Bob doesn't know t, or Alice's private key, so he can't spend that money. Neither of them can spend. How do we proceed? Bob creates an adaptor signature, a sort of partial signature on this thing, but designed in such a way that in order to make it a fully valid signature, Alice has to include the value t in the signature. Alice completes the signature and gets a fully valid signature for this transaction, but she's forced to reveal t by doing so. This makes it so that Bob is able to learn the value t afterwards, and now everyone can spend their funds.
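
A toy sketch of that "complete the signature, reveal t" mechanic, using a Schnorr-style adaptor signature over a multiplicative group of integers rather than secp256k1, and a single signer rather than the 2-of-2 setup in the talk; the parameters are assumptions chosen so the arithmetic is easy to check, not a secure implementation.

    import hashlib, secrets

    p = 2**255 - 19          # toy prime modulus (assumption)
    q = p - 1                # exponents live mod p-1
    g = 5

    def H(*parts) -> int:
        data = b"".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Bob's key and nonce; Alice's secret t with adaptor point T = g^t.
    x = secrets.randbelow(q); P = pow(g, x, p)
    r = secrets.randbelow(q); R = pow(g, r, p)
    t = secrets.randbelow(q); T = pow(g, t, p)
    msg = "swap-tx"

    # Bob's adaptor (pre-)signature: verifiable, but not yet a valid signature.
    e = H(R * T % p, P, msg)
    s_pre = (r + e * x) % q
    assert pow(g, s_pre, p) == R * pow(P, e, p) % p

    # Alice folds in t to complete it; the published signature is (R*T, s).
    s = (s_pre + t) % q
    assert pow(g, s, p) == (R * T % p) * pow(P, e, p) % p

    # Seeing s on-chain, Bob recovers t, which lets him claim the other side.
    assert (s - s_pre) % q == t
    print("t revealed by completing the signature")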

This has the same exact properties as an atomic swap with HTLCs. Alice starts with a secret that she only knows in the beginning, but she can’t take Bob’s money without atomically revealing her secret. This is the only property we care about in the beginning. Both parties can take their money back because they had pre-signed nlocktime transactions for each other before beginning.

Open research questions

How to do scriptless scripts across curves? https://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-05-cross-curve-atomic-swaps/ What about between different chains with different curves? Is there a way to construct the signatures such that we can use scriptless scripts to perform atomic swaps across chains even if the curves are different? Also, monero doesn't have an nlocktime feature. But if we can do the conditional part, then at least we're one step closer to doing atomic swaps.

How can we uniquely attribute fault if we’re in a multi-hop payment channel? This might seem like a big jump from doing all this on-chain stuff and then talking about payment channels, but really all this atomic swap stuff works in payment channels in the lightning network.

How can we construct HTLCs, or conditional payments in general, without knowing the recipient upfront? What do we need for an HTLC or a conditional payment? It's a redeem address, a refund address, a timeout and a hash: the pubkey hash of whoever can take the money in the success case, the one for the timelock case, the actual timelock and then the hash. The last three parts can actually be chosen by whoever constructs the conditional payment. But the redeem address needs to be provided by whoever wants to receive the money. If we can somehow get rid of the redeem address, then we can get rid of a communication step between the two parties that want to do an atomic swap. By removing that, you can just send this out there, and people can send an offer for atomic swaps for the conditional payment: would you sell me this secret? You could build a whole market on selling secrets; it would be easy to build a decentralized exchange on top of this if we could figure out how to make a conditional payment that doesn't need the redeem address upfront but still guarantees that only one person can receive it. If you take out the pubkey check in the first branch, then miners would figure out that they can just take the money and forward it to themselves. It's not as simple as just taking out the variable.

References

Instantiating (Scriptless) 2P-ECDSA: Fungible 2-of-2 MultiSigs for Today's Bitcoin

Media: https://www.youtube.com/watch?v=3mJURLD2XS8&t=3623s

https://twitter.com/kanzure/status/1048483254087573504

maybe https://eprint.iacr.org/2018/472.pdf and https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html

Introduction

Alright. Thank you very much. Thank you Pedro, that was a great segue into what I’m talking about. He has been doing work on formalizing multi-hop locks. I want to also talk about what changes might be necessary to deploy this on the lightning network.

History

For what it’s worth, these dates are rough. Andrew Poelstra started working on this and released something in 2016 for a Schnorr-based scriptless script model. This gave you deniable swaps and the ability to do cross-chain transfers without having to reveal shared data, which is often in a payment hash. It doesn’t link transactions. He later provided a construction for turning this into a lightning network replacement for hash-preimage challenges. This was in March 2017. Finally, the main win here is that as Pedro mentioned… ensures payment information is randomized at every hop. All the payments are settled in the reserve order in which they are propagated.

In late 2017, Yehuda Lindell published an efficient 2p-ECDSA signing protocol. It got put to the side for a little bit while we were working on other things. This allows someone to do a 2-of-2 ECDSA multisig without using bitcoin's multisig. All the old bitcoin nodes are able to verify these signatures. It's entirely backwards compatible. The anonymity set now encompasses all the pay-to-pubkeyhash and pay-to-witness-pubkeyhash outputs that exist today. It would blend in with existing transactions out there.

Later, in April 2018, Pedro released the multi-hop locks paper which formalized the framework for LN hop decorrelation. It includes a scriptless script construction based on the 2p-ECDSA protocol. When Schnorr gets integrated into bitcoin, we won't have to fork the lightning network to have them interop; this will require no changes from the nodes and it won't fragment the network. At some point, the network will have to diverge based on this payment hash scheme as we get away from payment hashes entirely. It's probably better to do this while the network is younger and then be able to smoothly transition to Schnorr once it's fully integrated into the consensus layer.

Today, we have some working 2p-ECDSA scriptless scripts.

Agenda

I am going to give an overview of 2p-ECDSA. It will be a little technical but I hope I can deliver some insight. It's fairly involved and uses some heavy cryptography, but hopefully this will be obvious to you and it will offer some opportunities to deploy scriptless scripts and other protocols. In addition, I'll talk about benchmarks and deployment considerations and how we will go about deploying this.

2p-ECDSA overview: Regular ECDSA

Regular ECDSA can be defined in like 5 lines, which is nice. The next slides will be the two-party setting. So to begin with the 2p-ECDSA overview: the three links are at the bottom. They are Yehuda Lindell's paper, "Efficient RSA key generation and threshold Paillier in the two-party setting" from Hazay et al, and "A generalization and simplification of Paillier's probabilistic public-key system" by Damgård and Jurik.
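
For reference, a minimal recap of the textbook ECDSA signing being alluded to (my notation, not the slide's), with generator $G$ of order $q$, private key $x$, public key $P = xG$, and message hash $m$:

$$k \leftarrow \text{random nonce}, \qquad R = kG, \qquad r = R_x \bmod q, \qquad s = k^{-1}(m + rx) \bmod q$$

The signature is $(r, s)$, and verification checks that the x-coordinate of $s^{-1}mG + s^{-1}rP$ equals $r$. The two-party protocol described below produces the same kind of $s$, with $x$ and $k$ each split between Alice and Bob.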

Alice has a private key and a public key, and Bob has his own. Alice knows a and Bob knows b; each knows both public keys A and B. They jointly create the public key Q = ab * G with private key ab, but neither knows ab outright… together they can create valid ECDSA signatures under Q. It requires two algorithms. KeyGen is offline and sets up Alice and Bob for participation in the online signing protocol; it's more expensive, but it's only executed once. Then there's an online signing protocol which is more efficient; it requires two roundtrips, and that's not counting scriptless scripts.

Keygen

Basically, Alice and Bob exchange their public keys and provide a discrete log proof-of-knowledge. We use a standard Schnorr signature here to prove that you know the secret key to these public keys. All this stuff is abridged here. Alice generates a Paillier keypair, from a cryptosystem related to RSA, and she provides a zero-knowledge proof that the key is constructed of these two primes p1 and p2; this is so that she can't cheat Bob down the road. Alice encrypts her private key under PPK, creating ciphertext c. She sends PPK and c to Bob. Bob is going to store this. It's going to be a two-party private key. Alice also provides a zero-knowledge proof that the ciphertext contains a small value. In fact, the ciphertext is much much bigger than the actual encrypted value. The implication there is that, because Bob only receives the ciphertext, he may not know the size of the underlying value, so Alice needs to prove that it's small: the fact that it's less than the curve order (secp256k1 in this case) and that it decrypts in a certain way. Lindell had to invent a new zero-knowledge proof for this, and it mixes two different cryptosystems. If you're interested in novel cryptography, check that out. Finally, Alice does a lot of proving to generate all these proofs and the Paillier pubkey. Bob receives all these proofs, and then they generate Q. The output is: Alice saves a 2p-ECDSA private key (a, PSK) with public key Q = ab * G. Bob saves a 2p-ECDSA private key (b, c, PPK) with public key Q = ab * G. Surprisingly, this is not vulnerable to key loss. It might seem like if you lose these it might corrupt the channel, but in fact that's not true, because as long as Alice and Bob know that it's derived from an HD seed then they are able to redo the keygen protocol to resurrect the channel. I thought this was going to be a downside compared to Schnorr, but it turns out it's not a problem for 2p-ECDSA.

Signing

So why all this Paillier nonsense? You're all probably asking this question. The main thing is that we can't add public keys and signatures in the same way as for Schnorr. They are not linear. This does not bode well for non-interactive aggregation. But Paillier ciphertexts exhibit partially homomorphic properties. They are additive, and they are scalar-multiplicative, where you can get the scalar times the original message. This is enough to create a signature under mostly encrypted data and then decrypt a nearly valid signature. All of these operations can be done without private knowledge. All the operations take the parameter n, which is the Paillier pubkey that Bob kept around so that he can do these manipulations.

An ECDSA signature here is an (R, s) pair, and then there's this k value and the x-coordinate. So to sign, Alice and Bob exchange nonces with a discrete log proof of knowledge. Bob then encrypts his nonce inverse times the message, and the other value, which is the inverse of his nonce times the x-coordinate of the aggregate nonce and then times his private key. The interesting thing is that he can take the ciphertext and manipulate it using the transformations described earlier. In the color coding here, purple means both parties know the value, red means only Alice knows the value, and the other color means only Bob knows the value. At the beginning, Bob had a in encrypted form. He can manipulate it and send it back to Alice, she can decrypt it and then multiply it by the inverse of her nonce, and this generates the signature. If you compare this to the simple ECDSA protocol, k is replaced by Alice-and-Bob's k and the secret keys are replaced too. These equations are similar and this actually works as an ECDSA signature. The final step is making sure the s value is small.
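
Here is a toy end-to-end check of that homomorphic trick, under assumed toy parameters (tiny Paillier primes, a fake curve order, fixed nonces and a made-up nonce x-coordinate rather than real elliptic curve points); it only illustrates how Bob's ciphertext manipulation plus Alice's decryption yields s = (k1*k2)^-1 * (H(m) + r*a*b) mod q.

    import math, secrets

    P1, P2 = 1000003, 1000033            # toy Paillier primes (assumption)
    n = P1 * P2
    n2 = n * n
    lam = math.lcm(P1 - 1, P2 - 1)
    mu = pow(lam, -1, n)

    def enc(m):                          # Paillier Enc(m) = (1+n)^m * r^n mod n^2
        r = secrets.randbelow(n - 2) + 2
        return pow(1 + n, m, n2) * pow(r, n, n2) % n2

    def dec(c):                          # Dec(c) = L(c^lam mod n^2) * mu mod n
        return (pow(c, lam, n2) - 1) // n * mu % n

    q = 65521                            # toy stand-in for the curve order
    a, b = 1234, 5678                    # Alice's and Bob's private key shares
    k1, k2 = 111, 222                    # their nonces
    r_x = 4321                           # x-coordinate of the aggregate nonce (made up)
    m_hash = 9999                        # H(message) reduced mod q

    c = enc(a)                           # keygen: Bob holds Enc(a)
    k2_inv = pow(k2, -1, q)
    c_prime = enc(k2_inv * m_hash % q)                              # Enc(k2^-1 * H(m))
    c_prime = c_prime * pow(c, k2_inv * r_x * b % q, n2) % n2       # add k2^-1 * r * b * a homomorphically

    s = pow(k1, -1, q) * dec(c_prime) % q          # Alice decrypts, applies her nonce inverse
    assert s == pow(k1 * k2, -1, q) * (m_hash + r_x * a * b) % q
    print("2p-ECDSA toy signing consistent:", s)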

Scriptless script signing is similar but it requires an extra round to extract the secret data. The protocols are very similar so I’ll omit the details. The details are in Pedro’s paper.

Benchmarks

For what it’s worth, these were computed last night. I’ve been optimizing it over the last week on the flight and stuff. I tried to get keygen under 1 second but overall it’s about 1.07 second. I was pretty surprised about this, I thought it would be slower. Given all the actual crypto and complexity of the keygen protocol, this is pretty great. The probably the slowest thing is the use of golang’s bigint library whic his notoriously not constant time and has memory performance problems. I think this could be driven down by 2, 5 or 10x.

Signing, as you can see, is under 30 milliseconds. That's pretty huge. As far as an online protocol goes, that works pretty well.

For scriptless signing, the allocation time and basically the numbers across the board are pretty promising. There's a lot of optimization work remaining.

For what it’s worth, this was tested on my laptop. It was single process, it was 2.8 Ghz Intel Core i7 16 GB 2133 MHz LPDDR3. It’s single process, no network latency or serialization, I was using a non-interactive discrete log proof of knowledge nad proof of pallier l pallier key correctness. There’s some golang code.

https://github.com/cfromknecht/tpec

https://github.com/Kzen-networks/multi-party-ecdsa

Deployment considerations

There would need to be some script modifications on the lightning network. The funding outputs are 2-of-2 multisig at the moment; this requires a 2-of-2 signature to spend. The cooperative closes are where both parties agree to close the channel; it's not a unilateral close, it's just splitting the balance based on the latest state. There are also commitment transactions which are pre-signed. In the offline update protocol in lightning, you need a 2-of-2 multisig to update these commitment transactions and HTLCs. But all of this could be replaced with a single p2wpkh-looking output. The HTLC outputs use 2-of-2 multisig in non-standard HTLC scripts. The two types of HTLC scripts are the offered script and the received script. They are mostly similar, but it depends on which direction the HTLC is going across the channel, and both require a 2-of-2 multisig, or a timeout. It requires a 2-of-2 sig to spend the offered-timeout and received-success clauses. This can be replaced with a much simpler HTLC script.

With scriptless 2p-ECDSA/Schnorr, we can remove payment hashes from HTLC scripts, and by extension we can remove preimages from witnesses so right there you’re saving 52 bytes in many of the witnesses. Just in terms of space savings, those are fee improvements and ultimately less load on the blockchain as well, which is the whole point of this conference.

Funding output scripts

Considering regular 2-of-2 multisigs, Schnorr 2-of-2 multisigs, and 2p-ECDSA 2-of-2 multisigs… here's a table. The space is roughly halved: what was two keys is now one key, and what was two signatures is now one signature. This is a huge privacy win for non-advertised channels. If you are a lightning node and you're advertising a channel, you're giving up some privacy. But if you're a private channel, then this gives a huge win, because funding transactions look like regular p2pkh or p2wpkh transactions. That's a huge win, especially as we move to a network where many mobile devices, or a node you keep on your laptop, are not going to advertise their channels at all for routing. And finally, there's less bandwidth at the gossip layer because of the one less signature, so that improves the network load situation in general.

HTLC scripts

Here’s a receive-HTLC witness script. You’re probably looking at this and saying “oh shit”. It’s hard to reason about. When you’re flipping pubkeys and verifying timelocks, that’s totally unacceptable. But you could have a simpler receive-HTLC witness script with 2p-ECDSA, and it’s all governed with a single signature, and the timeout clause will have an explicit parameter and other than that they are virtually identical. This is a 20% reduction in the script size, and that’s basically- when you spend any HTLC outputs on this, that’s a 20% reduction in the witness script size. It improves the readability of the script and it’s easier to reason about what’s happening here. In terms of witness sizes, I picked the receive-HTLC witness script because it highlights the worst case for closing channels unilaterally. In the success witness size, it’s a 78% reduction in script size. The revocation witness is 30% smaller, and the timeout witness stays the same. You can expect similar improvements for the offered-HTLC scripts but it should be smaller.

Bidirectional 2p-ECDSA instances

Channels are bidirectional. When you try to update the HTLC, you send all the updates and parameters to the other side of the channel, and then you send a batch of signatures that you create on that. If you noticed from Pedro's talk, when you do this, only one party ends up with the signature, whereas typically with a normal script multisig, where you have Alice's pubkey, Bob's pubkey and then two signatures, each could contribute theirs independently, and that simplifies things significantly. So you have a single funding output on chain, and to spend from this and have either party initiate the ability to sign, you are going to set up two instances of the same key pair so that either party can update this. This will allow them to sign commitment transactions spending from this output, or allow either party to do a cooperative close of the channel. Some of you might ask how this applies to eltoo… on the commitment transaction level, they would share one key. It's roughly about the same.

Onion packets

Here’s the current onion packet data structure. Right now I’m able to read these data, and the 32 byte MAC covering everything, and then some authenticated and encrypted bytes at the end which get shifted around when each person decrypts. What’s going to change is the data in the per-hop payload. The new one will include a schnorr signature and the difference between the incoming and outgoing locks that Pedro described. The packet size grows by about 3x, but when you’re constructing and decrypting these, the majority of the bottleneck is in asymmetric crypto operations like deriving ephemeral keys and so on. This scales linearly in the number of hops, and I got this down from quadratic back in February.

Benefits

This can be deployed today. I may have already done this, and you would never know because it's totally private. The smaller scripts and witnesses mean lower fees. There's increased privacy on-chain via an increased anonymity set for funding outputs, and increased privacy off-chain via hop decorrelation. 2p-ECDSA blends into all existing traffic, whereas if you start with Schnorr then the anonymity set will be smaller at the start. But I think there will be a world where most users will be migrating to Schnorr over time. Anyway, that's the only immediate win for 2p-ECDSA other than being performant. You also get real proofs of payment ("invoice tunneling"). There's some more work being done in this area, I know Pedro is working on it.

One of the disadvantages is the complexity: it took a year to start and finish this, obviously not full-time, but still. It takes a long time to analyze everything and review it. The resource usage on mobile devices isn't so great, and you don't have so many cores to do the proofs in parallel. Also, it takes more steps to update commitment transactions and requires more round trips. But it might be possible to pipeline the signing protocol and shave off a roundtrip.

New address type for segwit addresses

Pieter Wuille

Date: March 29, 2017

Transcript By: Bryan Bishop

Tags: Bech32

Category: Meetup

Media: https://www.youtube.com/watch?v=NqiN9VFE4CU

Topic: Bech32 addresses for Bitcoin

Slides: https://prezi.com/gwnjkqjqjjbz/bech32-a-base32-address-format/

Proposal: https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki

Demo website: http://bitcoin.sipa.be/bech32/demo/demo.html

Twitter announcement: https://twitter.com/kanzure/status/847569047902273536

Transcript completed by: Bryan Bishop Edited by: Michael Folkson

Intro

Can everyone hear me fine through this microphone? Anyone who can’t hear me please raise your hand. Oh wait. All good now? Tonight I will be speaking on a project I’ve been working on on and off for the past year or so, which is the question of what kind of addresses we will be using in Bitcoin in the future. Recently I proposed a BIP after several long discussions among some people. I think we have a great proposal. So today I will be talking about the proposal itself and how it came to be. This was joint work with several people, in particular Greg Maxwell who is here as well, and my colleagues at Blockstream. Most of this work was done thanks to the computation power of their computers. I’ll talk about that more. So this is the outline of my talk. First I’ll talk about why we need a new address type going forward. The decision to use base32 rather than base58 as has been used historically. Once the choice for base32 has been made, there are a bunch of open design questions like what checksum to use, what character set to use, and what the address structure looks like. Optimal character set depends on optimal choice of checksum, which may be surprising. And then combining this into a new format, which I am calling bech32

Why?

Segregated Witness is a proposal that I presented a bit over a year ago in Hong Kong for the first time. It is now in a state of perhaps being deployed on the Bitcoin network. Segregated Witness needs to encode new address types. They are described in BIP 141: pay-to-witness-pubkey-hash (P2WPKH) and pay-to-witness-script-hash (P2WSH), and there are some possible extensions later. SegWit supports two ways of using these. One is inside of P2SH, which is an address type that has been supported for years on the Bitcoin network, making it backward and forward compatible with every wallet out there created in the past few years. However, going forward, we really want this to happen natively. This gives us better efficiency for spending as we don’t need the backward compatibility layer of the redeem script that P2SH gives us. Secondly it gives us 128 bit security for script hashes; P2SH only delivers 80 bits, which is becoming questionable. This proposal replaces BIP 142, which was an older base58 based proposal, and I think this one is much better.

Base32

Why base32? First of all, due to the more limited alphabet we can restrict ourselves to just lowercase or just uppercase, making the address format case insensitive. This makes it much easier to read or write addresses down, as anyone who has ever tried to write an address down or type it after someone reads it over the phone will easily confirm. To be clear, my hope is that in the long term Bitcoin doesn’t need addresses anymore. We will get a real solution where humans don’t need to interact with the cryptographic material at all anymore. There have been proposals going in that direction but it seems for the time being we are not there, so we need a solution regardless. 32 being a power of 2 means it is much easier to convert. We can just take the bytes from the data, split them into bits, rearrange them into groups of 5, and those groups of 5 become your base32 characters. Compare this to base58 where you really need BigNum logic to turn the whole thing into a huge number and then convert it to a new base and so on, which is a quadratic algorithm. For all of that we get a downside: it is 17 percent larger than base58, just due to less information fitting in one character. That’s going to be the main topic of what I’ll be talking about. Due to 32 being a prime power we can support a mathematical field over the characters. We can use a lot of research on strong error detection codes which doesn’t exist for something like base58. Due to being case insensitive it is also more compact to store in QR codes, which have a special mode for encoding alphanumeric data. But this only works for case insensitive things. Base32 is also being used in many cases already, including onion addresses in Tor and I2P and several other projects.
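
To illustrate the regrouping, here is a minimal sketch of repacking 8-bit bytes into 5-bit groups with shifts only, no bignum arithmetic (simplified, not the reference implementation’s full conversion routine with its padding rules):

```python
# Minimal sketch: repack 8-bit bytes into 5-bit base32 symbol values.
def to_base32_groups(data: bytes):
    acc = bits = 0
    out = []
    for byte in data:
        acc = (acc << 8) | byte          # append 8 more bits to the accumulator
        bits += 8
        while bits >= 5:                 # peel off complete 5-bit groups
            bits -= 5
            out.append((acc >> bits) & 31)
    if bits:                             # pad the final partial group with zero bits
        out.append((acc << (5 - bits)) & 31)
    return out

print(to_base32_groups(bytes([0x75, 0x1e])))   # [14, 20, 15, 0]: four 5-bit symbols
```

This is linear in the length of the data, in contrast with the quadratic base-conversion that base58 requires.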

Checksum

We fix the idea that we are going to use base32 for all these reasons. What are we going to do about the checksum, the character set and the structure? First I should point out a few design decisions we fixed early on. All of this will be about the chance of misdetecting an invalid address as a valid address, because when that happens you will be sending money into a black hole. That’s what we want to prevent. Six characters is the minimum we need to make the chance that a random string gets accepted as an address less than one in a billion. That’s just a design choice. Six characters, that is what we are going with. This means that if this address type is going to be generic for every witness output, which can have up to 320 bits of data, we really need code words of 71 characters including the checksum. So we are looking for error detecting codes that detect as many errors as possible with a checksum of fixed length, over a total message of length 71. I am now going to look at a few of the design choices we went through, ending up with the one we picked, the BCH code, at the end.
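
A quick back-of-the-envelope check on those numbers (illustrative arithmetic only; one plausible way to arrive at 71 is a 320-bit witness program plus one version character plus the checksum):

```python
# 6 base32 characters = 30 checksum bits, so a random string passes
# with probability 2^-30, comfortably below one in a billion.
print(2 ** -30)               # ~9.3e-10

# 320-bit witness program -> 64 data characters, + 1 witness version
# character + 6 checksum characters = 71 characters in total.
print(320 // 5 + 1 + 6)       # 71
```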

Minimum distance of a code

First I need to clarify the concept of distance. Anyone who knows something about coding theory will find this trivial but I’ll explain anyway. The distance of a code, or the minimum distance of a code, or the Hamming distance of a code, is how many characters within a valid address you need to change, at minimum, to turn it into another valid address. The minimum number of different characters between two different addresses. This diagram gives a nice demonstration. All the lines are single character changes and all the black dots are valid addresses. The minimum distance of the code shown here is 4: you need to cross at least 4 lines to get between any two black dots. There is a very fundamental theorem that says if your distance is n you can detect up to n-1 errors. This is really obvious to see. If you start from any of these black dots and you make up to 3 errors, you follow up to 3 lines, you never end up on another black dot. This shows you how a distance 4 code can detect 3 errors. There is also a related bound: a code that can detect n errors can correct up to half that many, rounded down. From any point in the diagram you go to the closest black dot. If you are at a point that is distance 2 from several black dots, there are multiple choices you could make. So you cannot correct 2 errors, but you can correct 1. If the dots were all 5 apart you could correct 2. What we will be looking for is things that are 5 apart.
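
A minimal sketch of the distance idea in code (the two strings and the distance value below are purely illustrative):

```python
# Hamming distance between two equal-length strings, plus the standard
# detect/correct bounds that follow from a code's minimum distance d.
def hamming(a: str, b: str) -> int:
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

d = 5                          # the minimum distance aimed for in the talk
detectable  = d - 1            # d-1 changes can never reach another valid string
correctable = (d - 1) // 2     # nearest-valid-string decoding is unambiguous up to this

print(hamming("qpzry9", "qpzrx8"), detectable, correctable)   # 2 4 2
```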

CRC codes

The first thing to look at is CRC codes. They are the most traditional checksum algorithm used in a wide variety of protocols. However, they are bit based. This makes sense because in most protocols what we care about are bit errors. However, here this is not the case. We don’t care directly about bit errors, we care about symbol errors or character errors. Whenever someone makes a mistake they will make an entire character error, and every character is 5 bits. Here is an example where the B is turned into a J and the D is turned into a V. Even though only two characters are changed, you can see that it is actually 9 bits that are changed. CRC codes are designed to optimize for detecting a number of bit errors. Which means that if we want something that can detect 4 character errors we really need something that can detect 20 bit errors, which is a lot. It turns out that finding something that can detect 20 bit errors is impossible. But we don’t really care about that. We really care about these symbol errors which result in bit errors that are somewhat structured. They always occur in groups of 5 and we care about the number of groups that are wrong, not just the bits. It is in fact possible to find a CRC that gives us distance 4 but we can do better.
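
A tiny illustration of the symbol-versus-bit distinction (the two 5-bit character values below are hypothetical, not the ones from the slide):

```python
# One character substitution in a 5-bit alphabet can flip anywhere from 1 to 5 bits.
old, new = 0b01011, 0b10100
flipped = bin(old ^ new).count("1")
print(flipped)   # 5: a single "symbol error" that a bit-oriented code sees as 5 bit errors
```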

RS codes

Probably the best known type of checksum algorithm that allows error correction are Reed-Solomon codes. These work directly over symbols which is great. Unfortunately they are limited in length to the size of the alphabet minus 1. Typically Reed-Solomon codes are done over 8 bits of data which means that they can work over code words of length 255 which is 2^8 - 1. But in our case our alphabet size is 32 using base32, which means we would be limited to doing error detection in strings of length 31. This is too short. We cannot fit enough data into that. So a possibility is to use an alphabet extension where you are really looking at two characters at once. You are really doing a size 1024 which is 2^10 alphabet. You see 2 characters in your code as one symbol. This is possible. However it is still limited to distance 4. We are really trying to see if there is nothing better we can get.

BCH codes

About a year ago I found out about BCH codes, which are a generalization of Reed-Solomon codes that drops this restriction of being limited to the alphabet size minus 1. Most of the research on BCH codes is actually about bit-based ones, but this is not necessary; a lot of the research is applicable to larger alphabets as well. We are going to create a BCH code over our size 32 alphabet. It turns out from the theory, which you can read about in nice articles on Wikipedia, that if you do the math you can construct a BCH code with distance 5. Yay! I’ll soon show you a diagram for why this actually matters. It turns out there is still a huge design space, many parameters are free in this BCH class of codes. So we’ll need a way to figure out which one to use.

BCH code selection

Even if you fix the field size there are about 160,000 different ones. When confronted with this I thought: how are we ever going to pick one? I started trying to do random sampling and see which ones are actually better even if you give them more errors than they are designed for. These are codes that are designed for having distance 4 or 5, meaning they will detect 3 or 4 errors. But what if you give them one more error? Probably, if these are all different codes, some of them are actually better when you go beyond their limit. So I started on this project of trying to characterize all the different codes that are possible here.

How do you characterize such a code? The set of all 71 character addresses that you would need to go over is ridiculously large, about 2^350, so way, way, way beyond the number of atoms in the universe. However, BCH codes belong to a class called linear codes. Linear means that if you take a valid code word, look at the values that every character represents, and add them character by character to another valid code word, the sum will again be a valid code word. So if you want to see whether this code is below distance 5, you only need to check whether there exists any set of 4 non-zero values over these 71 positions whose checksum is zero. Because if that is the case you can add it to any valid code word and the result will be another valid code word; now you have two valid code words at distance 4. It turns out that is still 12 trillion combinations, which is painfully large. Maybe not entirely impossible but more than we were able to do. The next realization is that you can use a collision search. Instead of looking for 4 errors what you do is look for only 2. You build a table with all the 2-error patterns, compute what their checksum would need to be to correct them, sort that table, and look whether there are any 2 identical ones. Now you have 2 patterns that need the same checksum to correct them. If you XOR them, if you add them together, you now have 2 x 2 = 4 changes and the checksum cancels out. Through this collision search you only have to do the square root of the amount of work, which makes it feasible.
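
To make the square-root trick concrete, here is a toy sketch of the table-and-collide bookkeeping with a stand-in syndrome function. Every name here is hypothetical and the syndromes are random integers, not the actual 2-error patterns of the BCH code; it only illustrates the shape of the search, not the real one:

```python
# Toy sketch of the collision search (NOT the real BCH search): pretend each
# integer id names a 2-error pattern and toy_syndrome() is the checksum value
# that pattern would need. Two ids with the same syndrome combine into a
# 4-error pattern the toy checksum cannot detect.
import random

def toy_syndrome(pattern_id, bits=20):
    random.seed(pattern_id)        # deterministic stand-in; 20 bits so the demo finishes fast
    return random.getrandbits(bits)

seen = {}
pattern_id = 0
while True:
    s = toy_syndrome(pattern_id)
    if s in seen:                  # birthday collision: two "patterns" need the same correction
        print(f"patterns {seen[s]} and {pattern_id} share syndrome {s:#x}")
        break
    seen[s] = pattern_id
    pattern_id += 1
```

The point is that enumerating pairs and looking for equal syndromes costs roughly the square root of enumerating all 4-error patterns directly.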

There are a bunch of other optimizations on top which I may talk about later if there is interest. So we are able to do this search. We start from these 159,605 codes and require that they actually have distance 5 at length 71. There are 28,825 left. Which one to pick now? What you want to do is look at how they behave at 1 beyond the limit. All of these 28,825 codes detect 4 errors at length 71. But what if you give them 5 errors? What if you give them 5 errors that appear in a burst very close together, or 5 errors that are randomly distributed? It turns out that if we pick some reasonable assumptions about how we weigh the random case versus the worst case, there are 310 best ones that are identical.

Effective error detecting power as a function of error rate (graph)

To show you how much this matters I want to show this graph, which contains a lot of information. What it shows you is the detecting power of a particular error detection code as a function of the chance that each character individually is wrong. This makes the assumption that every character is independent from every other. For example, at the 1 percent mark the red line, the error detection code we chose, is at about 2^(-38), which means the chance that an error you make would not be detected is about 1 in 250 billion. The blue line is what the 32 bit hash function in the current address format, base58, would do. You can see it doesn’t matter what your error rate is; any error has exactly the same chance of being detected. We can reasonably assume that the average number of errors expected within an address is not large. We can assume that it is maybe 1 per address, hopefully less than 1, especially since we switch to case insensitive encoding, which makes errors even less likely. The yellow line shows what we could have chosen if we didn’t do this exhaustive analysis to find the best one. The yellow line is actually the worst possible code. You can see that it has this annoying bump where its detection power even drops below 30 bits, so we don’t even get the 1 in a billion guarantee for that code. This is just to show you how valuable optimizing for a low number of errors is, because under these assumptions it gives you much better behavior. But it also shows you how much it matters to do this analysis. It clearly makes a difference.

BCH code selection (continued)

From those 310 we pick the codes with the best bit error rate behavior. We still have these 310 identical codes; in terms of how many characters can be wrong they behave identically, no difference at all. We still need some criterion to pick one, and we have all this analysis available anyway, so what to pick? It will soon become clear why it is useful to optimize for low bit error rates. What this means is: what if we only look at errors that change 1 bit within a character? How does the code behave then? For random errors they are identical anyway. After this we only have 2 left and we just pick one of them. This took many, many years of CPU time. I think we had something like 200 CPU cores available to do analysis on, so it only took a couple of weeks, but in total it was more than ten years of computation time. Then we discovered that these 310 identical ones are a pattern: all codes appear in groups of 310 identical ones. A couple of weeks ago, when we identified exactly what these changes were, suddenly a lot more became feasible to analyze. We were no longer restricted to those 160,000 BCH codes. We could in fact look at all the 3.6 million 6 character cyclic codes, which is a much larger class of codes. It turns out that if you make the range of things you are looking for larger, you find better things. However we did not pick anything from this. The search was now feasible after dividing by 310, because we could test one code from each class; instead of a billion codes there were only 3.6 million left. It turns out some of them were slightly better than what we had already found, but we are not changing it, for the reason that there are efficient error location algorithms available for these BCH codes which aren’t available if you pick an arbitrary code. The difference isn’t much so we are sticking with it.

Character set

Now the question of the character set. There exist various character sets for base32 already. There is RFC 3548, which is a standard. There’s z-base-32, and various other standards for base32 data that have been used in the past. We’re still going to pick a different one. The reason is that we were able to select our code to optimize for low bit error rates. Wouldn’t it be great if we could choose the character set in such a way that the likely errors are 1 bit errors? This character set is the result of another year of CPU time spent optimizing for this. We found a bunch of tables with information on similarity between various characters. As you can see on this slide, the z and the 2 are considered similar in some fonts or handwriting, and their values are 8 apart: one is 2 and the other is 10, so they are 1 bit apart. And r and t are 1 bit apart. And y and v are 1 bit apart. And x and k are 1 bit apart. And e and a are 1 bit apart. And s and 5 are 1 bit apart. And 4 and h are 1 bit apart. There are many more similar pairs overlaid in this data that you can look for. It’s pretty cool. We made a character set that optimizes for 1 bit errors. As a result our code has distance 6 for 1 bit errors: if you only make errors like this, we can detect 5 of them.
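
You can check the quoted pairs yourself against the character set published in BIP 173; a quick sketch (the pair list is the one read out above):

```python
# The bech32 character assignment from BIP 173: visually confusable characters
# are mapped to values that differ in exactly one bit.
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

pairs = [("z", "2"), ("r", "t"), ("y", "v"), ("x", "k"),
         ("e", "a"), ("s", "5"), ("4", "h")]
for a, b in pairs:
    va, vb = CHARSET.index(a), CHARSET.index(b)
    print(a, va, b, vb, bin(va ^ vb).count("1"))   # last column is always 1
```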

Q - Did you take into account QWERTY keyboard distance in that selection?

A - We did not take QWERTY keyboard distance into account.

Greg Maxwell: We considered it but the visual component is uniform. In formal testing it looked like the visual component was more commonly a source of errors. But the visual component is almost always in the single path whereas a QWERTY keyboard may not be in the single path.

Effective error detecting power as a function of error rate

What this diagram shows you: the blue line is again what the 32 bit hash, the current address format, would do. The red line is for arbitrary errors using the checksum algorithm we’ve chosen. The purple line is the same thing for the checksum algorithm we chose but restricted to 1 bit errors. So if you only make this class of errors that we consider more likely, you can see that it’s even stronger. At 1 percent you get another 5 bits of detection power, making it 32 times less likely for something like that to go undetected. Something else you can see on this diagram is the line for 1 expected error in the whole address. You can see that there is a crossover point at 3.53 expected errors per address. What this means is that our checksum, despite being shorter (it is only a 30 bit checksum), is actually stronger than the 32 bit checksum used in base58 for anything up to 3.53 expected errors per address. If you only count the likely errors it is even up to 4.85 per address. A great win.

Structure

One last thing: how do we combine all of that into a real address? We have a character set and we have a checksum algorithm; how are we going to structure SegWit addresses? An address consists of three major parts. The first is the human readable part, which for Bitcoin addresses in our proposal will be bc, standing for Bitcoin. For testnet it is tb, which is still only two characters but visually distinct from bc. Then there is the separator, which is always 1. 1 is a character that does not appear in the character set. This means that the human readable part is always unambiguously separated from the data that follows, even if the human readable part itself contains a 1. That makes it extra flexible. Then there is the data part, which uses the character set as I described before. For SegWit addresses, though this is generic, the data consists of the witness version, the witness program and the checksum, which is the last 6 characters. The result of this for a pay-to-witness-pubkey-hash (P2WPKH) address would be 42 characters rather than 34, so it is a bit longer. Base32 is a bit less efficient, the checksum is longer and the prefix of two characters adds up. But I don’t expect this to be a big problem. It is slightly larger, but it is more compact in QR codes and much easier to read and write; the length only matters for things that care about visual space. It is 62 characters for pay-to-witness-script-hash (P2WSH) because it uses 256 bit hashes rather than 160 bit for higher security. For future witness versions which support up to 320 bit hashes the length can go up to 74, though I don’t expect that, as 256 bits is a very reasonable security target.
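
As a concrete layout example, here is what I believe is the P2WPKH example address from BIP 173, split into those parts. This is a layout sketch only, not a decoder: real decoding has to find the last ‘1’ in the string and verify the checksum.

```python
# Example P2WPKH address (BIP 173 test vector), split into its parts.
addr = "bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4"
hrp, sep, data = addr[:2], addr[2], addr[3:]
assert (hrp, sep, len(addr)) == ("bc", "1", 42)

version_char  = data[0]      # 'q' -> witness version 0
program_chars = data[1:-6]   # 32 characters -> 160-bit (20-byte) witness program
checksum      = data[-6:]    # last 6 characters are the checksum
print(version_char, len(program_chars), checksum)
```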

Bech32

All of this together gives bech32, which is a generic data format for things like addresses. One instance of it is used for SegWit addresses, but it could be used for various other things that have similar requirements; I don’t know what. It seems strange that most of the research on checksumming human readable data uses 1 or 2 checksum characters. Think about bank account numbers or a few similar things. There seems to be very little research on how to make an actually strong checksum that is still designed for human consumption. I hope this can perhaps become a standard for how to do this. There is a link to the BIP. In all of this I have not mentioned one of the most important design goals, code simplicity. I’ve been talking about these error detection codes which are typically very complicated to deal with. However they are only complicated to deal with if you are actually interested in error correction. Error detection is trivial.

Checksum algorithm code

https://github.com/sipa/bech32/blob/master/ref/python/segwit_addr.py#L27-L36

Ta da. This is the checksum algorithm. It uses no bignum conversion, it has no SHA256 dependency. It is just these ten lines. Of course you need more for the character set conversion and converting to bytes for the witness program. But the mathematical part of the spec is just this.
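
For readers without the link handy, the core of it looks roughly like this (adapted from the linked reference implementation; treat the BIP and the linked file as authoritative):

```python
def bech32_polymod(values):
    """BCH checksum computation over GF(32), as in the linked reference code."""
    generator = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for value in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ value
        for i in range(5):
            chk ^= generator[i] if ((top >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    """Expand the human readable part into 5-bit values for the checksum."""
    return [ord(x) >> 5 for x in hrp] + [0] + [ord(x) & 31 for x in hrp]

def bech32_verify_checksum(hrp, data):
    """data is the list of 5-bit values of the data part, checksum included."""
    return bech32_polymod(bech32_hrp_expand(hrp) + data) == 1
```

Verifying an address then amounts to mapping the data-part characters to their 5-bit values via the character set and checking that bech32_verify_checksum returns True.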

Demo website

http://bitcoin.sipa.be/bech32/demo/demo.html

We also made a small demo website. Because there is an actual error correction algorithm behind the scenes, even if we are not using it for correction, you could optionally implement it to do error location. The spec allows this. It strongly advises against doing actual error correction, because if someone types an address wrong you want to complain and tell the user to go look up what the address is, not try to correct it for them. They might end up with a valid address that is not the one they intended. Here is an example. If you change this v to an x it will point out that the x is likely what you got wrong. This can even be inside the checksum. The algorithm can locate up to 2 errors. We have an algorithm that supports up to 3, but it is still quite frequently wrong. There are ways to deal with this like showing multiple possibilities to the user, but none of the contributors to this project are great UI people so we really didn’t know how to do this. Here’s a P2WSH example as well. This website is linked from the BIP. You can play with it if you are interested. That’s it. Thank you very much for your attention.

Q&A

Q - Do you have any idea how many Bitcoin are lost per year due to human readable character mistakes that this corrects for?

A - I have no idea. I expect it to be low but I also expect that we wouldn’t know about it. It is hard to quantify and it is very nice to just categorically improve the state of the art there.

Q - In the error location code, can it suggest corrections there?

A - Yes. The location code actually does full error correction. It knows what the likely error is but it intentionally does not show you because you should go back and look at what it should be rather than try things. Greg suggests I explain why this matters. A code with distance d can either detect d-1 errors or correct (d-1)/2. This means that our code which is distance 5 can detect 4 errors but can only correct 2. This is because you can be in the middle of two valid code words and then you don’t know which one to go to. So if you make 4 errors, which will never result in a valid code word, you may however have gotten closer to another valid code word than the one you started off from. This means that your ability to do error detection is eroded by trying to do correction.

Greg Maxwell: The key point to make here is that if you make 4 errors it will almost certainly correct it to a valid address which is not the address you intended. This is why you don’t want to correct the errors.

As Greg points out, if you make 4 errors and run the error correction algorithm which can make up to 2, it will correct to the wrong thing with a very high probability. For 4 it is almost certain, with 3 it is likely, with 2 never. This is for example different when you are dealing with private keys. Maybe that is something I should mention as future work. We are also looking at a similar standard like the one here for encoding private keys. But for private keys telling the user “Sorry your private key is wrong”, that is not what you want. You really want to do the error correction there. There we are looking at something with a stronger checksum that has more than 6 characters extra but actually can correct a reasonable number of errors and not just detect them.

Q - The private key thing made me wonder whether this could be used also for master keys at the root of BIP 32 or something similar to it?

A - The checksum algorithm we chose here ends up being really good up to length 89. It was designed to be good up to 71. It turns out it is really good up to 89. Not more. This is approximately 450 bits of data. That is enough for a seed but not enough for a master key, which has 512 bits of entropy because it has a chaincode as well as a key. Future work is looking for a stronger checksum which can both correct a reasonable number of errors for short things like private keys, but also has good performance for longer things. Given that we are talking about longer strings there anyway you don’t care whether it adds 6 or 10 or 15 characters.

Q - Could bech32 implicitly support SegWit in the practice of false signaling where miners are signaling for something other than Core while running Core in the background?

A - That is completely orthogonal. All of what I have been talking about here is on the wallet implementation side. It has no relation at all to what is implemented in consensus rules, full nodes, peer-to-peer protocols, miners. None of this cares about it. This is just something that wallets need to implement if they want. So no.

Q - At the beginning of your talk you mentioned that there were some proposals that would completely abstract Bitcoin addresses away from users. Can you talk briefly about what those might be?

A - I wouldn’t say promising, but the only proposal that had some traction in the past is BIP 70. This is the payment protocol you may have heard about. Instead of having an address you have a file that is a payment descriptor, and your address becomes a URL to that file. In almost all cases where a Bitcoin transaction is taking place the sender is already interacting with the recipient anyway, so why do we need an address? They are on their website; they can just give the information through that to the software. This doesn’t solve the problem of what happens if your website is being intercepted, but neither do addresses. The complication is that, I think, some mistakes were made in the specification that make it less useful than it could have been. There is not all that much software that implements it. It is hard to implement, it requires SSL certificates. Something in that direction, done with what we have learned so far, would I think be great, but it may be very hard to get something adopted. I am not very hopeful unfortunately.

Q - With this implementation does RIPEMD160 come into it?

A - Yes it does because pay-to-witness-pubkey-hash (P2WPKH) addresses still contain a RIPEMD160 hash of the public key being sent to. That is part of the SegWit proposal. That doesn’t really have anything to do with this address type that abstracts over any data. The data being encoded in the address for a P2WPKH is RIPEMD160. It is no longer for P2WSH, it is just SHA256 there.

Q - I am curious if you have done the optimization in terms of the classic spending error correction style. For example what if I miss one character and end up adding a second one at a later point?

Greg Maxwell: What you are describing, where someone drops a character and inserts a character, ends up with the correct length. What this results in is a burst of errors, all confined within the short span between where the character was dropped and where one was inserted. Although Pieter didn’t talk about it, the codes that we use are also selected, by virtue of their construction, to have very good properties for small bursts as well. They detect them better than you would expect by chance from a 30 bit check. Although we haven’t implemented it, we could implement something that provides useful hints for the drop-and-insert case. There’s a table in the BIP that gives a breakdown of errors occurring within a window, which is specifically interesting for this case because shorter windows are more common for burst errors, just from a counting argument.

Q - With traditional Bitcoin addresses you have a 1 and a 3 that tells the wallet what type of scriptPubKey. In this case it is the length that determines…?

A - No it isn’t. The q you see in red is the witness version. All of it is SegWit outputs. But SegWit outputs have a version number and a hash in combination with the length. Both P2WPKH and P2WSH use version 0 but one uses 20 bytes of hash and the other uses 32 bytes.

Greg Maxwell: Commenting more on the length. Because future SegWit versions may use different lengths, the specification allows the lengths to be many possible lengths. There is some validation code on what lengths are allowed. Not every possible length is an allowed length as a result of the 8 bit to 5 bit conversion that occurs between the two sizes.

It is also important that this address type intentionally makes an abstraction. While the former thing, using the 1 or the 3, selected P2PKH or P2SH, someone implementing this address type does not really need to know about the existence of a witness pubkey hash or a witness script hash, just the version number and the data. The sender shouldn’t even care about what is in there, that is the receiver’s business. There are some sanity checks you can do if you know more.

Greg Maxwell: We generally consider it a flaw that the sender knows anything about what kind of scheme the receiver is using. We try to abstract that away.

Q - How does the probability change for address collisions from the old addresses to the new?

A - What do you mean by address collision exactly?

Q - The chances of you generating the same addresses on the old system are absolutely miniscule. Does this increase that probability that two people generate the same address?

A - There are still 160 bits or 256 bits of data inside the address and that’s the only thing that matters. All of this is about the checksum, not about the data in the addresses, which still uses traditional hash functions. Nothing changes there if you are using a 160 bit one, and the chance even goes dramatically down if you are using 256 bits, because now you have something proportional to 2^(-128) rather than 2^(-80) for a collision.

Q - Do you see any applications outside of Bitcoin for this? Maybe it will help with marketing.

A - Anything where humans are dealing with small strings of cryptographic material. The example I gave already that uses base32 is onion addresses in Tor, for example. I don’t know if they are interested but something similar could be applicable there. They have different requirements for error detection. I guess it isn’t all that bad if you accidentally…. Maybe it is also important to point out, this does not really prevent intentionally similar looking addresses. We have this guarantee that any two valid addresses differ in at least 5 characters, which makes it somewhat hard for an attacker to generate two similar looking ones. But making that many characters similar is already computationally very hard due to the hash function. Sorry, that was not really an answer to your question.

Q - Are most of the design decisions around things like the character map and the code generator, are they documented along with the BIP?

A - Yes. They are briefly. There is a rationale section that explains why many of the decisions were made.

Q - Is there any way to use this scheme for traditional pay-to-pubkey or pay-to-script-hash?

A - Intentionally not, as I think it would be very confusing if there were multiple encodings, multiple addresses, that refer to the same output. You might get someone using one form of address because that is what their wallet does, but then they go to a block explorer website which shows them another. They can’t tell which one was intended or whether it is compatible with their software. I think we should think of addresses as the thing you are sending to. It happens to map to a particular scriptPubKey, but we shouldn’t see it as just an encoding of some data. That could be confusing.

Q - I know it is super early but have you talked to any wallet implementers about this yet?

A - While designing this we talked to various people before publishing it. In particular, GreenAddress, Armory, Electrum. There have been comments by various others. Many of them gave suggestions.

Q - I’m sure they appreciate the simplicity.

A - I hope so. I had comments like “You are using an error detection algorithm. We need to implement new cryptography.” I’m like “These ten lines of code.” In practice you need to implement more than that of course. The whole reference implementation for encoding and decoding in Python is 120 lines.

Q - What is your opinion on everyone not caring about Bitcoin anymore and flipping to Ethereum?

A - People do what they want.

\ No newline at end of file +https://www.youtube.com/watch?v=NqiN9VFE4CU

Topic: Bech32 addresses for Bitcoin

Slides: https://prezi.com/gwnjkqjqjjbz/bech32-a-base32-address-format/

Proposal: https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki

Demo website: http://bitcoin.sipa.be/bech32/demo/demo.html

Twitter announcement: https://twitter.com/kanzure/status/847569047902273536

Transcript completed by: Bryan Bishop Edited by: Michael Folkson

Intro

Can everyone hear me fine through this microphone? Anyone who can’t hear me please raise your hand. Oh wait. All good now? Tonight I will be speaking on a project I’ve been working on on and off for the past year or so, which is the question of what kind of addresses we will be using in Bitcoin in the future. Recently I proposed a BIP after several long discussions among some people. I think we have a great proposal. So today I will be talking about the proposal itself and how it came to be. This was joint work with several people, in particular Greg Maxwell who is here as well, and my colleagues at Blockstream. Most of this work was done thanks to the computation power of their computers. I’ll talk about that more. So this is the outline of my talk. First I’ll talk about why we need a new address type going forward. The decision to use base32 rather than base58 as has been used historically. Once the choice for base32 has been made, there are a bunch of open design questions like what checksum to use, what character set to use, and what the address structure looks like. Optimal character set depends on optimal choice of checksum, which may be surprising. And then combining this into a new format, which I am calling bech32

Why?

Segregated Witness is a proposal that I presented a bit over a year ago in Hong Kong for the first time. It is now in a state of perhaps being deployed on the Bitcoin network. Segregated Witness needs to encode new address types. They are described in BIP 143 pay-to-witness-pubkey-hash (P2WPKH) and pay-to-witness-script-hash (P2WSH) and there are some possible extensions later. SegWit supports two ways of using these, either inside of P2SH which is an address type that has been supported for years on the Bitcoin network, making it backward and forward compatible with every wallet out there created in the past few years. However, going forward, we really want this to happen natively. This gives us better efficiency for spending as we don’t need the overall backward compatibility layer of the redeem script that P2SH gives us. Secondly it gives us 128 bit security for script hashes. P2SH only delivers 80 bits which is becoming questionable. This proposal replaces BIP 142 which was an older base58 based proposal and I think this one is much better.

Base32

Why base32? First of all due to the more limited alphabet we can restrict ourselves to just lowercase or just uppercase making the address format case insensitive. This makes it much easier to read or write addresses down as anyone who has ever tried to write an address down or type it after someone reads it over the phone will easily confirm. To be clear, my hope is that in the long term Bitcoin doesn’t need addresses anymore. We will get a real solution where humans don’t need to interact with the cryptographic material at all anymore. There have been proposals going in that direction but it seems for the time being we are not there so we need a solution regardless. 32 being a power of 2 means it is much easier to convert. We can just take the bytes from the data, split them into bits, rearrange them into groups of 5, take these groups of 5 and those become your base32 characters. Compare this to base58 where you really need BigNum logic to turn the whole thing into a huge number and then convert to a new basis and so on which is a quadratic algorithm. For all of that we get a downside. It is 17 percent larger than base58 just due to being less information fitting in one character. That’s going to be the main topic of what I’ll be talking about. Due to 32 being a prime power we can support a mathematical field over the characters. We can use a lot of research on strong error detection codes which doesn’t exist something like base58. Due to being case insensitive it is also more compact to store in QR codes, which have a special mode for encoding alphanumeric data. But this only works for case insensitive things. Base32 is also being used for many cases already including onion addresses in Tor and I2P and several other projects.

Checksum

We fix the idea that we are going to use base32 for all these reasons. What are we going to do about the checksum character set and structure? First I should point out a few design decisions we fix early on. All of this will be about the chance of misdetecting an invalid address as a valid address because when that happens you will be sending money into a black hole. That’s what we want to prevent. Six characters is the minimum we need to make the chance that a random string gets accepted as an address less than one in a billion. That’s just a design choice. Six characters, that is what we are going with. This means if this address type is going to be generic for every witness output which can have up to 320 bits of data, we really need 71 characters under the checksum. So we are looking for error detecting codes that detect as many errors as possible with a checksum of length fixed, with a total message of length 71. I am now going to look at a few of the design choices we went through ending up with the one we picked, the BCH code at the end.

Minimum distance of a code

First I need to clarify the concept of distance. Anyone who knows something about coding theory will find this trivial but I’ll explain anyway. The distance of a code or the minimum distance of a code or the Hamming distance of a code is how many characters within a valid address you need to at least change to turn it into another valid address. The minimum number of different characters between two different addresses. This diagram gives a nice demonstration. All the lines are single character changes and all the black dots are valid addresses. The minimum distance of the code shown here is 4. You need to at least cross black lines between any two black dots. There is a very fundamental theorem that says if your distance is n you can detect up to n-1 errors. This is really obvious to see. If you start from any of these black dots and you make up to 3 errors, you follow up to 3 black lines, you never end up with another black dot. This shows you how a distance 4 code can detect 3 errors. There is also an equivalence. If you have a code that can detect n errors it can also correct n-2 errors. From any point in the diagram you go to the closest black dot. If you are on a black dot that is on an intersection point that is a distance 2 from a number of black dots, there are multiple choices you can make. You cannot correct 2 errors but you can correct 1. If they were all 5 apart you could correct 2. What we will be looking for is things that are 5 apart.

CRC codes

The first thing to look at is CRC codes. They are the most traditional checksum algorithm used in a wide variety of protocols. However, they are bit based. This makes sense because in most protocols what we care about are bit errors. However, here this is not the case. We don’t care directly about bit errors, we care about symbol errors or character errors. Whenever someone makes a mistake they will make an entire character error and every character is 5 bits. Here is an example where the B is turned into a J and the D is turned a V. Even though this is only two characters that are changed you can see that it is actually 9 bits that are changed. CRC codes are designed to optimize for detecting a number of bit errors. Which means that if we want something that can detect 4 errors we really need something that can detect 20 bit errors, which is a lot. It turns out that finding something that can detect 20 bit errors is impossible. But we don’t really care about that. We really care about these symbol errors which result in bit errors that are somewhat structured. They always occur in groups of 5 and we care about the number of groups that are wrong, not just the bits. It is in fact possible to find a CRC that gives us distance 4 but we can do better.

RS codes

Probably the best known type of checksum algorithm that allows error correction are Reed-Solomon codes. These work directly over symbols which is great. Unfortunately they are limited in length to the size of the alphabet minus 1. Typically Reed-Solomon codes are done over 8 bits of data which means that they can work over code words of length 255 which is 2^8 - 1. But in our case our alphabet size is 32 using base32, which means we would be limited to doing error detection in strings of length 31. This is too short. We cannot fit enough data into that. So a possibility is to use an alphabet extension where you are really looking at two characters at once. You are really doing a size 1024 which is 2^10 alphabet. You see 2 characters in your code as one symbol. This is possible. However it is still limited to distance 4. We are really trying to see if there is nothing better we can get.

BCH codes

About a year ago I found out about BCH codes which are a generalization of Reed-Solomon code that drop this restriction of being limited to the alphabet size minus 1. Most of the research on BCH code is actually about bits based ones but this is not necessary. A lot of research is applicable to larger ones as well. We are going to create a BCH code over our size 32 alphabet. In fact it turns out the theory that you can read about in nice articles on Wikipedia, if you do the math you can construct a BCH code with distance 5. Yay! I’ll soon show you a diagram for why this actually matters. It turns out there is still a huge design space, many parameters are free in this BCH class of codes. So we’ll need a way to figure out which one to use.

BCH code selection

Even if you fix the field size there are about 160,000 different ones. When confronted with this I thought how are we ever going to pick one? I started trying to do random sampling and see which ones are actually better even if you give it more errors than they are designed for. These are codes that are designed for having distance 4 or 5 meaning they will detect 3 or 4 errors. But what if you give them one more error? Probably if these are all different codes some of them are actually better if you go beyond their limit. I started on this project of trying to characterize all the different codes that are possible here.

How do you characterize such a code because all 71 character addresses is that ridiculously large number that we need to go over to characterize them? This is about 2^350 so way, way, way beyond the number of atoms in the universe. However, BCH codes belong to a class called linear codes. Linear codes means if you have a valid code word and you look at the values that every character represents and pairwise add every character to another valid code word the sum will again be a valid code word. What you only need to look for if you want to see whether this code below distance 5, you need to check does there exist any pair of 4 non-zero values over these 71 positions whose checksum is zero. Because if that is the case you can add that to any valid code word and your result will be another valid code word. Now we have established two valid code words with distance 4. It turns out that is still 12 trillion combinations which is painfully large. Maybe not entirely impossible but more than we were able to do. The next realization is that you can use a collision search. Instead of looking for 4 errors what you do is you look for only 2. You build a table with all the results on 2 errors, compute what their checksum would be to correct it, sort that table and look whether there are any 2 identical ones. Now you have 2 code words that need the same checksum to correct it. If you XOR them, if you add them together, now you have 2 x 2 = 4 changes and the checksum cancels out. Through this collision search you can now find by only doing the square root of the amount of work which makes it feasible.

There are a bunch of other optimizations on top which I may talk about later if there is interest. We are able to do some search. We start from this 159,605 codes, let’s require that they actually have distance 5 at length 71. There are 28,825 left. Which one to pick from now? What you want to do is look at how they behave at 1 beyond the limit. All of these codes, these 28,825 codes, they all detect 4 errors at length 71. But what if you give them 5 errors? What if you give them 5 errors that are only appearing in a burst very close together or you give them randomly distributed behaviors? It turns out if we pick some reasonable assumptions about how we weigh the random case versus the worst case there are 310 best ones that are identical.

Effective error detecting power as a function of error rate (graph)

To show you how much this matters I want to show this graph that contains a lot of information. What this graph shows you is the detecting power of a particular error detection code in a function of the chance that every character individually is wrong. This makes the assumption that every character is independent from every other. For example you can see that the 1 percent line, the red line is the error detection code we chose. You can see it is about 2^(-38) which means that it is 1 in 250 billion, the chance that an error you make would not be detected. The blue line is what the 32 bit hash function would do as the current address format, base58. You can see it doesn’t matter what your error rate is. Any error has exactly the same chance of being detected. We can reasonably assume that the average number of errors that would be expected within an address is not large. We can assume that it is maybe 1 in an address, hopefully less than 1 especially if we switch to case insensitive coding. It would become even less likely. The yellow line shows what we could have chosen if we didn’t do this exhaustive analysis to find the best one. The yellow line is actually the worst possible code. You can see that it has this annoying bump where its detection probability even goes below 1/(2^30). We don’t even guarantee this 1 in a billion chance for that code. This is just to show you how great optimizing for a low number of errors is because it gives you under these assumptions a much better model. But it also shows you how much it matters to do this analysis. It clearly makes a difference.

BCH code selection (continued)

From those 310 we pick the codes with the best bit error rate behavior. We still have these 310 identical codes. For how many characters can be wrong they behave identically. No difference at all. We still need some criteria to pick one. We have all this analysis available anyway so what to pick? It will soon become clear why it is useful to optimize for low bit error rates. What this means is what if we are only looking at errors that change 1 bit in a character? How does the code behave then? For random errors they are identical anyway. Now we only have 2 left and we just pick one of them. This took many, many years of CPU time. I think we have something like 200 CPU cores available to do analysis on. This only took a couple of weeks. But in total it was more than ten years of computation time. Until we discovered that these 310 identical ones, it is a pattern. All codes appear in groups of 310 identical ones. A couple of weeks ago, when we identified what these exact changes were, suddenly a lot more became feasible to analyze. We were no longer restricted to those 160,000 BCH codes. We could in fact look at all the 3.6 million 6 character cyclic codes which is a much larger class of functions. It turns out that if you make the range of things you are looking for larger, you find better things. However we did not pick anything from this. The results from this search, which was now feasible after dividing by 310, because we could from each class test one. Instead of a billion codes there were only 3.6 million left. It turns out some of them were slightly better than what we had already found but we are not changing it for the reason that there are efficient error location algorithms available for these BCH codes. These aren’t available if you pick an arbitrary code. The difference isn’t much so we are sticking to it.

Character set

Now the question of the character set. There exists various character sets for base32 already. There is the RFC3548 which is a standard. There’s the z based base32 standard, various other standards for base32 data that have been used in the past. We’re still going to pick a different one. The reason for this is we were able to select our code to optimize for low bit error rates. Wouldn’t it be great if we could choose the character set in such a way that 1 bit errors are more likely than non 1 bit errors. This character set is the result of another year of CPU time to optimize for this. We found a bunch of information on tables for similarity between various characters. What you can see on this slide, the z and the 2 are considered similar in some fonts or writing. As you can see they are 8 apart. One is 2 and the other is 10 so they are 1 bit apart. And r and t are 1 bit apart. And y and v are 1 bit apart. And x and k are 1 bit apart. And e and a are 1 bit apart. And s and 5 are 1 bit apart. And 4 and h are 1 bit apart. There are way more similar errors overlaid in this data that you can look for. It’s pretty cool. We made a character set that optimizes for 1 bit errors. As a result our code is distance 6 for 1 bit errors. If you just look at these 1 bit errors we guarantee 5 errors. If you only make errors like this you can detect 5.

Q - Did you take into account QWERTY keyboard distance in that selection?

A - We did not take QWERTY keyboard distance into account.

Greg Maxwell: We considered it but the visual component is uniform. In formal testing it looked like the visual component was more commonly a source of errors. But the visual component is almost always in the single path whereas a QWERTY keyboard may not be in the single path.

Effective error detecting power as a function of error rate

What this diagram shows you, the blue line is again what the 32 bit hash, the current address format, would do. The red line is for arbitrary errors using the checksum algorithm we’ve chosen. The purple line is the same thing for the checksum algorithm we chose but restricted to 1 bit errors. So if you only make this class of errors that we consider more likely, you can see that it’s even stronger. For 1 percent you can see you get another 5 bit of detection chance making it 32 times less likely for something like that to not be detected. Something else you can see on this diagram is the line for 1 expected error in the whole address. You can see that there is a crossover point at 3.53 (expected errors per address). What this means is that the checksum algorithm despite being shorter, it is only a 30 bit checksum, despite that for up to 3.53 expected errors in your whole address it is actually stronger than the 32 bit checksum that was being used in the base58. For only likely errors it is even up to 4.85 per address. A great win.

Structure

One last thing, how do we combine all of that into a real address because we have a character set and we have a checksum algorithm. How are we going to structure SegWit addresses? It consists of three major parts. The first is the human readable part which for Bitcoin addresses in our proposal will be bc standing for Bitcoin. For testnet it is tb which is still only two characters but visually distinct from bc. Then there is the separator which is always 1. 1 is a character that does not appear in your character set. This means that the human readable part is always unambiguously separated from the data that follows. Even if the human readable part itself contains a 1. That makes it extra flexible. Then there is the data part which uses the character set as I described before. For SegWit addresses, but this is generic, in there is the data: the witness version, the witness program and the checksum which is the last 6 characters. The result of this for a pay-to-witness-pubkey-hash (P2WPKH) address would be 42 characters rather than 34 so it is a bit longer. Base32 is a bit less efficient, the checksum is longer and the prefix of two characters adds up. But I don’t expect this to be a big problem. It is slightly larger but it is more compact in QR codes. It is much easier to read and write. Only things that care about visual space does this matter. It is 62 for pay-to-witness-script-hash (P2WSH) because it uses 256 bit hashes rather than 160 for higher security. For future witness versions which support up to 320 bit hashes the length can go up to 74. Though I don’t expect that as 256 bit is a very reasonable security target.

Bech32

All of this together gives bech32 which is a generic data format for things like addresses. One instance of it for the use of SegWit addresses but it could be used for various other things that have similar requirements. I don’t know what. It seems strange that most of the research on checksum human readable data uses 1 checksum character or 2 checksum characters. Think about bank account numbers or a few similar things. There seems to be very little research on how to make an actually strong checksum that is still designed for human consumption. I hope this can become perhaps a standard for how to do this. There is a link to the BIP. In all of this I have not mentioned one of the most important design goals, code simplicity. I’ve been talking about these error detection codes which are typically very complicated to deal with. However they are only complicated to deal with if you are actually interested in error correction. Error detection is trivial.

Checksum algorithm code

https://github.com/sipa/bech32/blob/master/ref/python/segwit_addr.py#L27-L36

Ta da. This is the checksum algorithm. It uses no bignum conversion, it has no SHA256 dependency. It is just these ten lines. Of course you need more for the character set conversion and converting to bytes for the witness program. But the mathematical part of the spec is just this.
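For reference, the heart of the linked code is essentially the following polymod function (transcribed here from memory of the reference code at the URL above; consult the link for the authoritative version):

```python
def bech32_polymod(values):
    """Compute the bech32 checksum state over a list of 5-bit values."""
    generator = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for value in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ value
        for i in range(5):
            chk ^= generator[i] if ((top >> i) & 1) else 0
    return chk
```

No bignum arithmetic, no SHA256, just shifts and XORs over 30-bit state.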

Demo website

http://bitcoin.sipa.be/bech32/demo/demo.html

We also made a small demo website. Because there is an actual error correction algorithm behind the scenes, even if we are not using it, you could optionally implement it to do error location. The spec allows this. It strongly advises against doing actual error correction because if someone types an address wrong you want to complain and tell the user to go look up what the address is, not try to correct it for them. They might end up with a valid address that is not the one they intended. Here is an example. If you change this v to an x it will point out that the x is likely what you did wrong. This can even be inside the checksum. The algorithm can locate up to 2 errors. We have an algorithm that supports up to 3 but it is still quite frequently wrong. There are ways to deal with this like showing multiple possibilities to the user. None of the contributors to this project are great UI people so we really didn’t know how to do this. Here’s a P2WSH example as well. This website is linked from the BIP. You can play with it if you are interested. That’s it. Thank you very much for your attention.

Q&A

Q - Do you have any idea how many Bitcoin are lost per year due to human readable character mistakes that this corrects for?

A - I have no idea. I expect it to be low but I also expect that we wouldn’t know about it. It is hard to quantify and it is very nice to just categorically improve the state of the art there.

Q - In the error location code, can it suggest corrections there?

A - Yes. The location code actually does full error correction. It knows what the likely error is but it intentionally does not show you because you should go back and look at what it should be rather than try things. Greg suggests I explain why this matters. A code with distance d can either detect d-1 errors or correct (d-1)/2. This means that our code which is distance 5 can detect 4 errors but can only correct 2. This is because you can be in the middle of two valid code words and then you don’t know which one to go to. So if you make 4 errors, which will never result in a valid code word, you may however have gotten closer to another valid code word than the one you started off from. This means that your ability to do error detection is eroded by trying to do correction.
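Stated as a formula, the tradeoff just described: a code with minimum distance d either detects up to d − 1 errors or corrects up to ⌊(d − 1)/2⌋, so for the distance 5 code used here:

```latex
\text{detectable errors} = d - 1 = 4, \qquad
\text{correctable errors} = \left\lfloor \tfrac{d-1}{2} \right\rfloor = 2
\qquad (d = 5)
```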

Greg Maxwell: The key point to make here is that if you make 4 errors it will almost certainly correct it to a valid address which is not the address you intended. This is why you don’t want to correct the errors.

As Greg points out, if you make 4 errors and run the error correction algorithm, which can correct up to 2, it will correct to the wrong thing with a very high probability. For 4 it is almost certain, with 3 it is likely, with 2 never. This is for example different when you are dealing with private keys. Maybe that is something I should mention as future work. We are also looking at a similar standard, like the one here, for encoding private keys. But for private keys telling the user “Sorry, your private key is wrong” is not what you want. You really want to do the error correction there. There we are looking at something with a stronger checksum that has more than 6 extra characters but can actually correct a reasonable number of errors and not just detect them.

Q - The private key thing made me wonder whether this could be used also for master keys at the root of BIP 32 or something similar to it?

A - The checksum algorithm we chose here ends up being really good up to length 89. It was designed to be good up to 71. It turns out it is really good up to 89. Not more. This is approximately 450 bits of data. That is enough for a seed but not enough for a master key, which has 512 bits of entropy because it has a chaincode as well as a key. Future work is looking for a stronger checksum which can both correct a reasonable number of errors for short things like private keys, but also has good performance for longer things. Given that we are talking about longer strings there anyway you don’t care whether it adds 6 or 10 or 15 characters.

Q - Could bech32 implicitly support SegWit in the practice of false signaling where miners are signaling for something other than Core while running Core in the background?

A - That is completely orthogonal. All of what I have been talking about here is on the wallet implementation side. It has no relation at all to what is implemented in consensus rules, full nodes, peer-to-peer protocols, miners. None of this cares about it. This is just something that wallets need to implement if they want. So no.

Q - At the beginning of your talk you mentioned that there were some proposals that would completely abstract Bitcoin addresses away from users. Can you talk briefly about what those might be?

A - I wouldn’t say promising, but the only proposal that had some traction in the past is BIP 70. This is the payment protocol you may have heard about. Instead of having an address you have a file that is a payment descriptor. Your address becomes a URL to that file. In almost all cases where a Bitcoin transaction is taking place the sender is already interacting with the recipient anyway, so why do we need an address? They are on their website. They can just give the information through that to the software. This doesn’t solve any of the problems of what if your website is being intercepted? But neither do addresses. The complication with this is, I think, some mistakes that were made in its specification that make it less useful than it could have been. There is not all that much software that implements it. It is hard to implement, it requires SSL certificates. Something in that direction, done with what we have learned so far, I think would be great, but it may be very hard to get something adopted. I am not very hopeful unfortunately.

Q - With this implementation does RIPEMD160 come into it?

A - Yes it does because pay-to-witness-pubkey-hash (P2WPKH) addresses still contain a RIPEMD160 hash of the public key being sent to. That is part of the SegWit proposal. That doesn’t really have anything to do with this address type that abstracts over any data. The data being encoded in the address for a P2WPKH is RIPEMD160. It is no longer for P2WSH, it is just SHA256 there.

Q - I am curious if you have done the optimization in terms of the classic spending error correction style. For example what if I miss one character and end up adding a second one at a later point?

Greg Maxwell: What you are describing, where someone drops a character and inserts a character, results in the correct length. What this produces is a burst of errors all confined to the short span between where they dropped and inserted. Although Pieter didn’t talk about it, the codes that we use are also selected, by virtue of their construction, to have very good properties for small bursts. They detect them better than you would expect by chance from it being a 30 bit check. Although we haven’t implemented it, we could implement one that provides useful hints for the drop and insert case. There’s a table in the BIP that gives a breakdown of all the errors occurring within a window, which is specifically interesting for this case because shorter windows are more common for burst errors, just from a counting argument.

Q - With traditional Bitcoin addresses you have a 1 and a 3 that tell the wallet what type of scriptPubKey it is. In this case it is the length that determines…?

A - No it isn’t. The q you see in red is the witness version. All of it is SegWit outputs. But SegWit outputs have a version number and a hash in combination with the length. Both P2WPKH and P2WSH use version 0 but one uses 20 bytes of hash and the other uses 32 bytes.

Greg Maxwell: Commenting more on the length. Because future SegWit versions may use different lengths, the specification allows many possible lengths. There is some validation code on what lengths are allowed. Not every possible length is an allowed length, as a result of the 8 bit to 5 bit conversion that occurs between the two sizes.

It is also important that this address type intentionally makes an abstraction. While the former thing, using the 1 or the 3, selected P2PKH or P2SH, someone implementing this address type does not really need to know about the existence of a witness pubkey hash or a witness script hash, just the version number and data. The sender shouldn’t even care about what is in there, it is the receiver’s business. There are some sanity checks if you know more.

Greg Maxwell: We generally consider it a flaw that the sender knows anything about what kind of scheme the receiver is using. We try to abstract that away.

Q - How does the probability change for address collisions from the old addresses to the new?

A - What do you mean by address collision exactly?

Q - The chances of you generating the same addresses on the old system are absolutely miniscule. Does this increase that probability that two people generate the same address?

A - There is still 160 bits or 256 bits of data inside the address and that is the only thing that matters. All of this is about the checksum, not about the data in the addresses, which is still produced using traditional hash functions. Nothing changes there if you are using a 160 bit one, and it even goes down dramatically if you are using 256 bits, because now you have something proportional to 2^(-128) rather than 2^(-80) for a collision.

Q - Do you see any applications outside of Bitcoin for this? Maybe it will help with marketing.

A - Anything where humans are dealing with small strings of cryptographic material. The example I gave already that uses base32 is onion addresses in Tor, for example. I don’t know if they are interested but something similar could be applicable there. They have different requirements for error detection. I guess it isn’t all that bad if you accidentally…. Maybe it is also important to point out, this does not really prevent intentionally similar looking addresses. We have the guarantee that any two valid addresses differ in at least 5 characters, which makes it somewhat hard for an attacker to generate two similar looking ones. But making that many characters similar is already computationally very hard due to the hash function. Sorry, that was not really an answer to your question.

Q - Are most of the design decisions around things like the character map and the code generator, are they documented along with the BIP?

A - Yes, briefly. There is a rationale section that explains why many of the decisions were made.

Q - Is there any way to use this scheme for traditional pay-to-pubkey or pay-to-script-hash?

A - Intentionally not, as I think it would be very confusing if there were multiple encodings, multiple addresses that refer to the same output. You might get someone using one form of address because that is what their wallet does, but then they go to a block explorer website which shows them another. They can’t tell which one was intended or whether it is compatible with their software. I think we should think of addresses as the thing you are sending to. It happens to map to a particular scriptPubKey but we shouldn’t see it as just an encoding of some data. That could be confusing.

Q - I know it is super early but have you talked to any wallet implementers about this yet?

A - While designing this we talked to various people before publishing it. In particular, GreenAddress, Armory, Electrum. There have been comments by various others. Many of them gave suggestions.

Q - I’m sure they appreciate the simplicity.

A - I hope so. I had comments like “You are using an error detection algorithm. We need to implement new cryptography.” I’m like “These ten lines of code.” In practice you need to implement more than that of course. The whole reference implementation for encoding and decoding in Python is 120 lines.

Q - What is your opinion on everyone not caring about Bitcoin anymore and flipping to Ethereum?

A - People do what they want.

\ No newline at end of file diff --git a/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/index.html b/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/index.html index 64284f9919..d53a6f3af8 100644 --- a/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/index.html +++ b/sf-bitcoin-meetup/2017-07-08-bram-cohen-merkle-sets/index.html @@ -10,4 +10,4 @@ Bram Cohen

Date: July 8, 2017

Transcript By: Bryan Bishop

Tags: Cryptography

Category: Meetup

Media: -https://www.youtube.com/watch?v=52FVkHlCh7Y

code: https://github.com/bramcohen/MerkleSet

https://twitter.com/kanzure/status/888836850529558531

Introduction

There’s been a fair amount of talk about putting UTXO commitments in bitcoin blocks. Whether this is a good idea or not, on the off-chance that it might be a good idea, I spent a fair amount of time thinking about and actually making a good merkle set implementation, which is what you need to put UTXO set commitments in bitcoin blocks. There are other benefits to this, in that there are actually a million and one uses for merkle sets in general, and it’s actually a generally interesting problem and it performs well. So I figured I would make it, and if it’s not useful to me, then it might be useful to someone anyway.

High-performance merkle set implementation

Okay, so. Merkle set. For those of you who aren’t aware, sometimes people call it a hash set, based on a hash tree, and then Ralph Merkle gets really mad at you. Since he’s still alive, we still call them merkle trees. A merkle set is a set which contains arrays of bytes, and it keeps track of its merkle root. The things at the terminal positions are the things contained in the set, in the middle are summaries, and then there’s the root. You can use the root to prove, to someone who has the root, whether something is or is not in the set.

There’s a series of interesting engineering problems to make these perform well. What hash function should we use? The obvious choice is sha256 because it’s the standard. The really big benefit of sha256 is that it’s really the only thing that has hardware acceleration and it’s the only thing that is going to have widespread hardware acceleration for the foreseeable future. Unfortunately it’s not super fast in software and, on top of that, it has funky padding issues. If you concatenate 2 hashes, it doesn’t line up nicely, and there’s another block it has to compute on. The best thing to use would be blake2s, because everything here has 256 bit outputs and blake2s has a block size of 512 bits. There’s this slightly frustrating thing with the flavors of blake: there’s blake2s, which is optimized for 32-bit systems, and blake2b, which is optimized for 64-bit systems. Nothing is 32-bit anymore, so we need to use blake2b; unfortunately it has a 1024 bit block size, so it’s actually slower. I talked with the authors of blake to see if it would be possible to do a new variant, and they told me it would be a lot of work, so they said no. This is irritating.

On top of that, it turns out that sometimes you do want just a little teeny bit of metadata. blake2b is really good about when you give it 512 bits: it doesn’t need to do any padding, it processes exactly one block. But you do actually want some metadata, so it’s a good idea, if you’re going this route, to sacrifice 2 bits to metadata, so that you can encode information about what you’re hashing. The values we’re using are whether it’s “terminal”, meaning it’s just a hash string that is being stored, “empty”, meaning there’s nothing here, “intermediary”, which means this is a hash of multiple things below it, or “invalid”, which basically means this hasn’t been calculated yet, which is really handy when you’re doing lazy evaluation.
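As a hedged illustration of the idea (my own sketch, not Bram’s code; the exact bit positions and tag values are assumptions), hashing a pair of 256-bit children with 2 metadata bits stolen from each child could look like this:

```python
import hashlib

# Node-type tags encoded in the 2 high-order bits of each 32-byte child hash.
TERMINAL, EMPTY, INTERMEDIARY, INVALID = 0, 1, 2, 3

def tag(child_hash, node_type):
    # Overwrite the top 2 bits of the first byte with the metadata tag.
    first = (child_hash[0] & 0x3f) | (node_type << 6)
    return bytes([first]) + child_hash[1:]

def parent_hash(left, left_type, right, right_type):
    # 2 x 32 bytes = 512 bits of input, i.e. a single blake2 block, no padding.
    data = tag(left, left_type) + tag(right, right_type)
    return hashlib.blake2b(data, digest_size=32).digest()
```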

So I’m using a patricia trie. There are a number of subtly different ways of making merkle sets. The most straightforward way is basically you just sort everything and put it into a list. That can’t be efficiently updated, because the positions of everything aren’t super-canonical, so everyone who is spending time on this has wound up using a patricia trie. It positions everything based on what the bits in it are. Here’s the root. Everything that has a 0 as its first bit goes here. Everything with a 1 as its first bit goes here. This spot right here: everything starts with the 0 bit, so there are these 2 things here which both have 0 bits, but you include an explicit “empty” so that you can have nice proofs of non-inclusion. If you try to optimize out the empties, you could still have proofs of non-membership, but it’s really ugly.
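A toy sketch of that bit-based positioning (my own illustration; it omits the hashing and the pass-through trick described next, and keeps explicit empties so non-membership proofs stay simple):

```python
EMPTY = None  # explicit empty marker

def bit(key, depth):
    # depth-th bit of a byte string, most significant bit first
    return (key[depth // 8] >> (7 - depth % 8)) & 1

def insert(node, key, depth=0):
    if node is EMPTY:
        return ("terminal", key)
    if node[0] == "terminal":
        if node[1] == key:
            return node                        # already in the set
        children = [EMPTY, EMPTY]
        children[bit(node[1], depth)] = node   # push the old terminal down
        node = ("branch", children)
    node[1][bit(key, depth)] = insert(node[1][bit(key, depth)], key, depth + 1)
    return node
```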

There’s one more trick that’s actually part of the semantics of this that I came up with, which is that if you have only two things below a particular point, and it’s empty on the other side, you do a pass-through. If you look at the bottom here, there are 2 terminals, so the thing that has those 2 terminals as children has to do a calculation to figure out the root value there. Above that, there’s something where the other side is empty, so that does a “pass-through” because the other side is empty and it only has those 2 values. And then there’s another layer that is also empty on the other side, so it does a pass-through. At the top of my diagram, there’s a layer where there’s no empty on the other side, so it has to do the calculation. The other one has an empty at the very top of the tree, but it doesn’t work out because there’s hashing to be done there. So this gets rid of a lot of the hashing. It’s a substantial optimization and it’s worth doing.

Reinventing memory management for fun and profit

I have on github a merkle set git repository which is an implementation of this. It’s in python, which is ridiculous. It needs to be ported to C soon. It has a reference implementation which is very straightforward, and a high performance implementation that isn’t high performance yet. There are subtleties to what the performance needs are. In modern architectures, you spend all of your time waiting for memory. The exact subtleties of how that works aren’t clear. When you’re accessing memory, you want to get things near each other all together. In the extreme version, it’s a physical hard drive where a seek is expensive but reading everything at once isn’t an issue. On SSD this is less so, but it continues up the memory hierarchy. So you have to arrange things so that things that are near each other logically are near each other in memory.

I didn’t invent the concept of memory management – I have a chunk of memory called a block. When I first started playing around with this, it wanted gigantic blocks of memory allocated up front. I figured out how to make it act normally after a number of rewrites. The sizes of blocks aren’t obvious. There’s really bad documentation everywhere about how long things wait, how near things need to be to each other to get better read times, and how much of an improvement that is, so I made that parameterizable in my implementation and it needs to be optimized for the local CPU, memory bus architecture and phase of the moon.

That’s the high-level viewpoint of what this is going to do.

There are two types of blocks. There are branch blocks and leaf blocks. The root is a branch block. Every block can have many children, but only one parent. That makes things behave a lot better. No sibling relationships between them of any kind. Any time there is an overflow from a parent, it goes into one of the children. It’s possible for multiple outputs of a parent to go into the same child if the child block is a leaf block. Usually the terminals are leaves, and everything above them are branches. This is fairly realistic in that it’s pretty normal for a branch block to have one or a few leaves below it. Branches tend to fan out very fast, and when you’re doing a memory look-up it tends to go branch, branch, leaf.

These are the formats. A branch block has a reference; all of the things of size B here are memory pointers. It has a pointer to its active child, which is a leaf block, and it has a patricia of some depth; the branch has depth as a parameter to how it works, and its size is roughly 2^(depth). Each patricia node has a left hash for everything starting with a 0 bit below the current position and a right hash for everything starting with a 1 bit below the current position, followed by the two child things, the patricias of size n minus 1, so this is a recursive data structure. This one trick of just having the left and right next to each other should improve look-up times a lot, when you’re updating especially. When you are recalculating hashes, you want the siblings to be right there. When you get down to patricia 0, there’s a pointer to one of the children, a position within that child, and indices within those blocks. If a child is a branch block, the position just looks like FFFF, an invalid value.

Branch blocks are balanced by their nature. Things don’t move around. The biggest subtlety here is with respect to the active child. The branch block might overflow with itself, where there are 2 things that need to go to the same out position. Sometimes leaf blocks will overflow as well, and when that happens, it needs to decide where to put the overflow. It puts it into the active child. This does a good job of memory allocation. When you need to, you go and make a new active child until it’s full, and then you usually have children that are mostly full except for one that has overflow, and you really only do memory allocation when you need to. So this does a pretty good job of not pre-allocating any more than it actually needs.

Within leaf blocks, and this is an interesting subtlety, a leaf block is basically a list of nodes. It has a first node used, a position within itself. Everything in here that is 2 bytes is a position. Num inputs is the number of times the parent has inputs going into it, and then there is a list of nodes. A full node (not an empty node) has a hash of everything on the left side, a hash of everything on the right side, and then a position of the child node on the left and a position of the child node on the right; when I say left I mean the 0 bit, and right is the 1 bit. The empty node has a next pointer and then just zeroes. The reason for this first unused node and the empty nodes is that the unused nodes form a linked list, so when you want to allocate a new node, you pick the first one and point to the next one after it. If you hit the last one, you figure out you have to do an overflow, and you copy out into the active child, unless it’s full or you are the active child, in which case you make a new active child. If you have only one input yourself, you don’t copy out into the active child; you take your whole block and copy it into a new branch block, which likely has a new leaf block beneath it. There’s a lot of recursive stuff in here, in the code. It’s a little bit subtle, but it looks simple.
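A minimal sketch of that free-list idea (my own illustration in Python; the real thing uses fixed-size byte records and 2-byte positions): the unused slots form a linked list threaded through the slots themselves, so allocating and freeing a node is O(1).

```python
class LeafBlock:
    def __init__(self, capacity):
        self.nodes = [None] * capacity
        # Each unused slot stores the index of the next unused slot (-1 = none),
        # playing the role of the "next pointer" stored in an empty node.
        self.next_free = list(range(1, capacity)) + [-1]
        self.first_unused = 0

    def allocate(self, node):
        if self.first_unused == -1:
            raise MemoryError("leaf block full: overflow into the active child")
        idx = self.first_unused
        self.first_unused = self.next_free[idx]
        self.nodes[idx] = node
        return idx

    def free(self, idx):
        self.nodes[idx] = None
        self.next_free[idx] = self.first_unused
        self.first_unused = idx
```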

I have another interesting thing to talk about. First I’d like to ask if there are any questions.

Q: Why do you.. not.. hash.. mechanism.. under a merkle tree?

A: What do you mean by the skip hash? Oh, you mean… if you want to have a proof of non-membership, you terminate at an empty. When there’s only 2 things below, you just terminate with that, you don’t want to blow up the proof size.

Q: Difference between merkle tree and patricia trie?

A: I’m not really sure of the real difference between tries and trees. I think tries have an i in them and trees don’t. I think patricia tries are called tries, and if you have a more straightforward approach of cramming everything together by position, that’s usually called a tree, and it’s usually not easily updated.

Q: Trie is from a … it’s a tree where search is easy, where the points… splitting based on the key value rather than based on balancing. Every trie is a tree.

A: Okay, so Pieter says that it implies everything is balanced in a trie. Anything even vaguely resembling what I’m talking about here, even without efficient updating, would be a trie.

Q: There’s like a million merkle tree libraries out there. Why did you do this?

A: The ones that are out there right now don’t do a good job of getting good cache coherence. Having to worry about cache coherence is a relatively new thing in computer science, having to do with computers being weird and fast. Things moving around the bus are a problem. Really what it amounts to is that a die is this 2d thing that needs to get data around on these highways going over it, and the area of the die is quadratic while the width of the highway is linear, and now that we have megalopolises on chips it’s actually problematic. There aren’t too many things that talk about this kind of optimization. More recent introductions to algorithms talk about sorting algorithms that will work optimally regardless of how your cache coherence properties work, and things like that. But it’s not something that people usually worry about too much. So I decided to try to make this as performant as I possibly could.

Q: Could you talk about that for a sec? When you say performance, you mean handling larger trees?

A: The relevant performance characteristics have to do with– well, there are some operations it can do. It can check for inclusion, and it can update. When it’s doing updates, there are different usage patterns. Maybe it has a lot of retrieves interspersed with updates, and maybe it has whole batches of updates followed by retrieves. When you’re doing retrievals maybe you care about proofs, and maybe you don’t. If you don’t care about proofs, you should be using an ordinary set, because a hash set has really really good memory cache coherence properties because it already knows exactly where to look without any tree traversal. But traversing a tree is going to be very slow by comparison. If you want to check for inclusion and don’t care about proofs, just have two sets, a merkle set and a regular set. The most relevant thing here, as I was thinking about this, was how long it would take you to go over the entire blockchain history. For that particular use case, for each block, you’re doing a whole bunch of updates, followed by getting the root. And then you do a validation with a bunch of retrieves for the next block, to make sure everything is in there. We could write something to try that; it has a lot of lazy evaluation because of the big batches of updates. When you’re doing big batches of updates, you should sort them before actually applying the updates. The way that I am obliterating the two high order bits, you actually want to sort by the third bit not the first bit, but I haven’t inserted a convenience function for batch updates; because it has to be done in the right order, it’s a weird gotcha. There are a bunch of different patterns, and because it’s so hard to actually get even sane information about exactly where things bottleneck in memory hardware, I just optimized everything I could.

Q: I don’t know if people… blake2 has this personalization functionality, have you looked at using that for robbing bits?

A: Not familiar.

Q: The initial state of the blake2 hash you can set to different values based on running blake2 to pre-initialize it. So you can use this to separate the domain of different hashes.

A: Yes, that would be a much more elegant way of adding the necessary metadata than losing the 2 bits of security. It really is necessary to do something along those lines because otherwise someone might try to store a tree value in the set and you can’t tell whether it’s an internal or external value.

Q: I think personalization will help you there.

A: Sounds like a better way to do that. The default APIs for this stuff don’t tend to give you those fancy bells and whistles. I was unaware of that feature.
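For what it’s worth, Python’s hashlib already exposes blake2 personalization as the person argument (up to 16 bytes for blake2b), so a hedged sketch of the suggestion, with made-up tag strings, would be:

```python
import hashlib

# Domain-separate node types via blake2 personalization instead of
# stealing 2 bits from each child hash. Tag strings here are illustrative.
def node_hash(left, right, kind):
    return hashlib.blake2b(left + right, digest_size=32, person=kind).digest()

internal = node_hash(b"\x00" * 32, b"\x11" * 32, b"merkleset-branch")
terminal = node_hash(b"\x00" * 32, b"\x11" * 32, b"merkleset-leaf")
```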

Q: You were talking about… testing.. have you done?

A: Because my implementation so far is in python, it needs to be ported to C. It’s really weird looking python that is meant to be ported to C. There are data structures that have pointers, and you can’t do that in python, it doesn’t like you using bytes as a pointer to a python object, so I took all of the memory data structures and used a hash table and wrote wrapper methods for getting things in and out of the hash table. Yeah, it’s kind of weird. It’s handy to have everything in there; as part of debugging there are audit methods, and you can just go over the whole hashtable and check that everything was allocated or deallocated as expected. I did read over data about how hard drives and SSD drives work. SSD usually has page sizes like 2 kilobytes and you probably want to align with those, and for hard drives things should be pretty big, like 1 megabyte for the blocks. And then you get things like where you only want the terminal leaves to be on the disk, and everything above that to be in memory, and then you spend all of your time waiting on the disk. That’s… when it gets to more fine-grained things, well, if I’m storing the entire thing in memory, not on SSD, not on a physical drive, it’s all in memory, how big should my branch and leaf blocks be? The available docs are really bad; seemingly similar machines have just radically different behaviors. There’s a library called the Fastest Fourier Transform in the West – the first time you run it, it benchmarks itself on the current machine, and it decides what its magic numbers should be. I think this might be the most reasonable solution for now.

Q: Your block structure doesn’t change the resulting root hash?

A: Yeah. The block structure does not change the root hash whatsoever. I implemented this as a very simple reference implementation which is quite straightforward. It has some nice properties. If you are doing things with lots of rollbacks, and you want to keep lots of roots without copying everything over, everything is immutable in it. In the performant one everything is really mutable and if you want to get back you have to roll back. They do absolutely the same exact thing, and I have extensive tests. The way these tests work is that they have a random bunch of strings, they repeatedly add one string at a time, they report the root each time, and they make sure everything added is there and everything not yet added isn’t there, and then they delete them one at a time and work their way back. Once it has tested the reference implementation, it does this for the performant one with other parameters, including fairly degenerate ones. I found a lot of bugs. It has 98% code coverage with a fairly small amount of test code.

TXO bitfields

Any more questions? Okay, something that I am fairly sure will be controversial. Unfortunately the bitcoin-dev mailing list has got completely drowned in discussions of subtleties of upgrade paths recently (laughter). So there hasn’t been too much discussion of real engineering, much to my dismay. But, so, using a merkle set is kind of a blunt hammer, all problems can be solved with a merkle set. And maybe in some cases, something a little bit more white box, that knows a little bit more about what’s going on under the hood, might perform better. So here’s a thought. My merkle set proposal has an implementation with defined behavior. But right now, the way things work, you implicitly have a UTXO set, and a wallet has a private key, it generates a transaction, sends it to a full node that has a UTXO set so that it can validate the transaction, and the UTXO set size is approximately the number of things in the UTXO set multiplied by 32 bytes. So the number over here is kind of big and you might want it to be smaller.

So there’s been discussions of maybe using UTXO bit fields. I had a good discussion with petertodd about his alternative approach to this whole thing, in which, because he likes these bit fields, I made the weird comment that the thing that might be useful there is that you can compress things down a lot. And it turns out that this really helps a lot. I had this idea for a TXO bit field. The TXO set is all the transaction outputs; it includes all the unspents. The idea with a TXO bit field is that you have a wallet with a proof of position and its private key. As things are added to blocks, each one has a position, and no matter how many things are added later, it stays in the same position always. So to make validation easier for the full node, the wallet will give the proof of position, which it remembers, for the relevant inputs that it’s using, bundle that with the transaction and send it to the full node, and then a miner puts it into a block. The proofs of position will be substantially larger than the transactions, so that’s a tradeoff.

So this goes to a full node, and it has to remember a little bit more than before. It has to remember a list of position roots per block. For every single block, it remembers the root of a commitment that can be canonically calculated for that block, of the positions of all the outputs in it, to allow it to verify the proofs of position. And it also has a TXO bitfield. The full node takes the proof of position and verifies the alleged position using the data it remembers for validating that. And then it looks it up in the TXO bit field, which in the simplest implementation is a trivial data structure, it’s a bit field, and you look it up by position. It’s not complex at all. It’s one look-up, and it’s probably very close to other lookups because you’re probably looking at recent information, and these are all 1 bit each, so they are much closer to each other. The size of the TXO bit field is equal to the TXO set size divided by 8. So this is a constant factor improvement of 256. Computer science people usually don’t care about constant factors, but 256 makes a big difference in the real world. This also has the benefit that my merkle set is a couple hundred lines of fairly technical code; it has extensive testing but it’s not something that I particularly trust someone to go ahead and re-implement well from that, it’s something where I would expect someone to port the existing code and existing tests. Whereas I would have much more confidence that someone could implement a TXO bit field from a spec and get it right.
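In Python the plain (non-sparse) bitfield really is almost embarrassingly simple. This sketch (my own illustration, with a made-up class name) is indexed by the permanent position a TXO gets when it is created:

```python
class TxoBitfield:
    """One bit per transaction output ever created; bit set = spent."""

    def __init__(self):
        self.bits = bytearray()
        self.count = 0  # total TXOs ever created

    def append_output(self):
        """Assign the next permanent position to a newly created output."""
        pos = self.count
        if pos % 8 == 0:
            self.bits.append(0)
        self.count += 1
        return pos

    def spend(self, pos):
        self.bits[pos // 8] |= 1 << (pos % 8)

    def is_spent(self, pos):
        return bool(self.bits[pos // 8] & (1 << (pos % 8)))
```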

The downside is that these proofs of position are much bigger than the transactions. And this is based on the TXO set size, rather than the UTXO set size, which is probably trending towards some constant. In the long term it might grow without bound. There is a pretty straightforward way of fixing that, which is making a fancier bit field. When it’s sparse, at the expense of it being much more interesting to implement, you can retain the exact same semantics while making the physical size that the bitfield uses equal to the number of bits that are set to 1 times the log of the total number of bits, which isn’t scary in the slightest. So this is still going to be quite small. To get this working, you’d have to go through an exercise like the one for making a good merkle set: someone is going to have to think about it, experiment with it and figure out what implementation is going to perform well. Whereas a straightforward bit field is just trivial.
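Stated as a formula, just restating the sentence above, with s the number of bits set to 1 and t the total number of bits in the field:

```latex
\text{size of compacted bitfield} \approx s \cdot \log_2 t \ \text{bits}
```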

Any discussion of this got drowned out by the mailing list’s very important discussion about segwit. But I think this is an approach worth considering; with the given caveats about its performance, it has a lot going for it, rather than doing UTXO commitments in blocks in bitcoin. Merkle sets can do a lot more things for a lot more stuff than just bitcoin. I think they shine when you have truly gigantic sets of data – probably people that have permissioned blockchains should be using merkle sets, not really bitcoin I guess.

Q: The proof of position… is that position data immutable?

A: Yes. It only grows.

Q: That’s the biggest distinction from petertodd’s proposal, where he chooses bitfield and proof of position…

A: With petertodd’s idea of keeping the whole UTXO, he likes indexing things by position on the grounds that the recent stuff is getting updated a lot, probably, and by organizing things in memory that way you get memory-efficient look-ups because it’s always hitting recent stuff. My approach gets this and more: you have this compacted down, so you are more likely to hit things that are near because it’s all in a smaller area, and all of the recent stuff is right there. So that’s a benefit. But I don’t think that is nearly as much of a benefit as the trivial bitfield implementation knowing exactly where the bit is, a single look-up; the constant factor of 256 is a huge deal, and it’s a really easy implementation to do.

Q: The real challenge of petertodd’s previous proposals was that the wallet had to track basically the proofs of position and how they change over time.

A: Oh right, that’s what inspired me to do this. I don’t want wallets to have to keep track of everything. I don’t want wallets to have to keep track of blocks. I want them to come back to life after years of being deactivated and still be able to make functional transactions. And by making proofs of position never change after a few blocks, once you’re past the reorg period, that keeps that property, so it makes wallets really braindead simple to write and have them function.

Q: Yeah. Makes sense.

A: I got through this talk very quickly. Anyone have any more questions about this?

Q: Can you describe the proof of position? What is information in that? How does that affect how bitcoin blocks or transactions?

A: So… to make a proof of position, you have a canonical way of taking a block and calculating a position root. We’re not adding anything to bitcoin blocks. You get a block, and you look at it, and you say, well, if I take the information about the order of everything in here, put it into a sorted order of their positions and compute a root for it, it’s a canonical root; me and everyone else will calculate the same value. I calculate everything about this, I keep the root, I know it’s valid, but I don’t keep the whole thing. Someone who wants to prove a position knows that everything up to this point has this number of things in it, and here’s the list of things in order. Someone who wants to make the proof of position will include the subset of the things under that root that gives enough information to verify the offset of one particular TXO from the value that was at the beginning of the block. No new things are added to bitcoin commitments and it’s just this canonical calculation that everyone can calculate.
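As a hedged sketch of what verifying such a proof could look like (a generic Merkle path check; the exact tree layout and hash function are not specified in the talk, so sha256 and this pairing rule are assumptions):

```python
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def verify_position(position_root, leaf, index, path):
    """Check that `leaf` sits at offset `index` under a block's position root,
    given the sibling hashes along the way (the "proof of position")."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == position_root
```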

Q: Mind if I re-state the benefits of this?

A: Sure.

Q: So the advantage of this proposal is that currently, with a full node today, to validate blocks you have this UTXO set which is maybe 2 GB of data, and what Bram is proposing is a scheme where the full node doesn’t need to store that. It stores a bit field which is 256x smaller, plus an additional hash per block. That’s a huge savings in resources for a minimal full node. The downside is that transactions are now much larger: instead of a few hundred bytes, like 226 bytes, it’s now like a kilobyte, to remind the node of the data it has forgotten. There have been other proposals, like petertodd’s, where the wallet would have to keep a passive view of the network to keep updating its proofs. Bram’s proposal doesn’t have that problem. Well, then there’s the question of what’s worse for people: a lot of storage on disk, or big transactions and a lot of bandwidth for the transaction overhead? There are a lot of wrinkles in the discussion on this – you could have a p2p tradeoff where some nodes have the merkle set of data and they don’t have to send it over the wire, or things like that. Bram’s proposal scales with the TXO size, not the UTXO size. The TXO count is all history, not just the unspents, so it’s a lot larger, but because of the big savings maybe it doesn’t matter.

A: Yeah. And like I was saying, with maybe too many words trying to explain: when the ratio between TXO and UTXO set sizes becomes large, you can compactify this bitfield and the asymptotic size of this bitfield will then be the UTXO set size times the log of the ratio between them, which is totally under control. But it’s not trivial to implement; doing that well is the same order of effort as the effort I had to put into making a decent merkle set implementation, which was non-trivial.

Q: …

A: It’s not necessary right now, this bitfield is so small that it’s not scary in the slightest.

Q: Just to clarify.. we have 1600 transactions/block on average.

Q: Inputs and outputs.. so … 2500 per block.

Q: Are we talking about adding so much data that we will limit the number of transactions?

A: We are not proposing to change the number of transactions per block.

Q: The way to think about this is to think about what’s more expensive for users.. bandwidth or storage? If bandwidth was free, then sure. There’s a tradeoff circus where this makes more or less sense.

A: We’re trying to make technical solutions that can keep up. We’re trying to avoid anything that touches the semantics of bitcoin at all; we just want to make some magic pixie dust that makes the issues go away, and then people can stop discussing hard-forks to fix them.

Q: I want to answer that question differently. Your blocks would be the same size. Your transactions would be the same size. All the accounting is the same. But when I give you a block or a transaction, I would also give you the proofs that the outputs it’s spending existed in the past. It’s auxiliary information that you pass around; it doesn’t change the canonical block data, it just changes what I need to give to you in order for you to verify it.

Q: Proof of position….?

A: The wallet can be given the proof of position and it will remember it.

Q: That is the solution that petertodd suggests. Yeah, farm it out to someone else. But in this case, there’s no reason to do it. You only need a third party if you went offline immediately after getting paid, but if you were online when you got paid, you just save it and you’re done. It only changes during a reorg. Once your transaction is confirmed, you save the data. If you lose the data, you go to a third party, and you give them a transaction without proofs plus an extra output to pay them, and they give you the proof in order to get your money.

A: You only need to find one full node to look up the position, and then that proof can be sent around to everyone else in the whole network.

Q: Are these bitfields stored per block, are they aggregated?

A: One giant bitfield for the whole history. The very first bit is the first UTXO of the genesis block.

Q: No. Block 0 doesn’t create any outputs. It was never inserted in the database.

A: Uh… well since block 0 doesn’t create any outputs, the first one would be block 1. And then block 2, 3, 4 and so on. So yeah it grows without bound off of the end of the thing.

Q: If we have time, can I elaborate on how I think the proof works? So that we can make sure I am thinking about the same thing.

A: Go right ahead.

Q: I think you’re going to make, in every block, a merkle tree for its outputs, based on position and whatever the value is in the TXO, which is the amount, the script and txid.

A: It just needs position information.

Q: The hash needs to..

A: It needs to commit to the hash, yes.

Q: When I want to spend something, I give you the output I am spending, plus a merkle path, which is per block, from the block it was created in.

A: It doesn’t have to be per block. I like per block, because it’s a good idea for a node to store a bunch of the top layers of a merkle tree, and it’s a good spot to stop at.

Q: But we already have a linear amount of data from the blocks that are available. I agree.

delayed txo commitments https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html

TXO commitments do not need a soft-fork to be useful https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013591.html

rolling UTXO set hashes https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html

\ No newline at end of file +https://www.youtube.com/watch?v=52FVkHlCh7Y

code: https://github.com/bramcohen/MerkleSet

https://twitter.com/kanzure/status/888836850529558531

Introduction

There’s been a fair amount of talk about putting UTXO commitments in bitcoin blocks. Whether this is a good idea or not, but on the off-chance that it might be a good idea, I spent a fair amount of time thinking and actually good merkle set implementation whic his what you you need to put utxo set commitments in bitcoin blocks. There are other benefits to this, in that there are actually a million and one uses for merkle sets in general, and it’s actually a general interesting problem and it performs well. So I figured I would make it, and if it’s not useful to me, then it might be useful to someone anyway.

High-performance merkle set implementation

Okay, so. Merkle set. For those of you who aren’t aware, this is sometimes– sometimes people call it a hash set, based on hash tree, and then Ralph Merkle gets really mad at you. Since he’s still alive, we still call them merkle trees. A merkle set is a set which contains arrays of bytes, it keeps track of its merkle root. All the things at the terminal things are the things contained in the set, in the middle are summaries, and then the root. You can use the root to prove, for someone who has the root, to prove whether something is or is not in the set.

There’s a series of interesting engineering problems to make these perform well. What hash function should we use? The obvious choice is sha256 because it’s the standard. The really big benefit of sha256 is that it’s really the only thing that has hardware acceleration and it’s the only thing that is going to have widespread hardware acceleration for the foreseeable future. Unfortunately it’s not super fast in software and on top of not being super fast in software, it has funky padding issues. If you concatenate 2 hashes, it doesn’t align up nicely, and it has other block it has to compute on. The best thing to use is blake2s because everything here has 256 bit outputs, and blake2s has a block size of 512 bits. There’s this slightly frustrating thing with the flavors of blake, that there’s there’s blake2s, which is optimized for 32-bit systems, and blake2b which is optimized for 64-bit systems. So nothing is 32-bit anymore, so we need to use blake2b, unfortunately it has a 1024 bit block size, so it’s actually slower. I talked with the authors of blake to see if it would be possible to do a new variant, and they told me it would be a lot of work, so they said no. This is irritating.

On top of that, it turns out that sometimes you do want just a little teeny bit of metadata. blake2b is really good about when you give it 512 bits, it doesn’t need to do any padding, it processes exactly one block. But you do actually want some metadata, so it’s a good idea, if you’re doing this route, to sacrifice 2 bits to metadata, so that you can encode information what you’re encoding, so the values we’re using are whether it’s “terminal” meaning it’s just a hash string that is being stored, “empty” meaning there’s nothing here, or “intermediary” which means this is a hash of multiple things below it, or “invalid” which basically means this hasn’t been calculated yet when you’re doing lazy evaluation this is really handy.

So I’m using a patricia trie. There’s a number of subtle different ways of making merkle sets. The most straightforward way is basically you just sort everything and put it into a list. That can’t be efficiently updated, because the positions of everything aren’t super-canonical, so everyone who is spending time on this has wound up using a patricia trie. It positions everything based on what the bits in it are. Here’s the root. Everything that has a 0 as its first bit, goes here. Everything with a 1 as its firs tbit, goes here. This spot right here– everything starts with the 0 bit, so there’s these 2 things here which both have 0 bits, but you include explicit “empty” so that you can have nice proofs of non-inclusion. If you try to optimize out the empties, you could still have proofs of nonmembership, but it’s really ugly.

There’s one more trick that’s actually part of the semantics of this that I came up with, which is that if you have only two things below a particular point, and it’s empty on the other side, you do a pass-through. If you look at the bottom here, there’s 2 terminals, so the thing that has those 2 terminals children has to do a calculation to figure out the root value there. Above that, there’s something where the other side is empty, so that does a “pass-through” because the other side is empty and it only has those 2 values. And then there’s another layer that is also empty on the other side, so it does a pass-through. At the top of my diagram, there’s a layer where there’s no empty on the other side, so it has to do the tree. The other one has an empty at the very top of the tree, but it doesn’t work out because there’s hashing to be done there. So this gets rid of a lot of the hashing. It’s a substantial optimization and it’s worth doing.

Reinventing memory management for fun and profit

I have on github a merkle set git repository which is an implementation of this. It’s in python, which is ridiculous. It needs to be ported to C soon. That has a reference implementation which is very straightforward, and a high performance implementation that isn’t high performance yet. There are subtleties to what the performance needs are. In modern architectures, you spend all of your time waiting for memory. The exact subtleties of how that works aren’t clear. When you’re accessing memory, you want to get things near each other all together. In the extreme version, it’s a physical hard drive where a seek is expensive but reading everything at once isn’t an issue. In SSD this is less so, but it continues up the memory hierarchy. So you have to arrange things that are near each other are near each other in memory.

I didn’t invent the concept of memory management– I have a chunk of memory called a block. When I first started playing around with this, it wanted gigantic blocks of memory allocated up front. I figured out how to make it act normally after a number of rewrites. The sizes of blocks aren’t obvious. There’s really bad documentation everywhere about how long things wait for and how near things need to be to each other to get better read times and how much of an improvement those are, so I made that a parameterizable in my iplementation and it needs to be optimized for local CPU memory bus architecture and phase of the moon.

That’s the high-level viewpoint of what this is going to do.

There are two types of blocks. There are branch blocks and … blocks. The root is a branch block. Every block can have many children, but only one parent. That makes things behave a lot better. No sibling relationships between them of any kind. Any time there is an overflow from a parent, it goes into one of the children. It’s possible for multiple outputs of a parent to go into the same child if the child block is a … block. Usually the terminals are leaves, and everything above them are branches. This way, this is fairly realistic in that it’s pretty normal for a branch block to have one or a few leaves before it. Branches tend to fan out very fast, and when you’re doing a memory look-up it tends to go branch branch leaf.

These are the formats. A branch block has a reference, all of the things of size B here are memory pointers. It has a pointer to its active child, which is a leaf block, and it has a patricia of depth, the branch has depth as a parameter to how it works, its size is roughly 2^(depth). Each patricia .. 0 bit below the current position. Call it by the right hash, everything is starting 1-bit below the current position. Followed by the two child things, the patricia of 0 size n minus 1, and so this is a recursive data structure here. This one trick of just having a left and right next to each other, should improve look-up times a lot. When you’re updating, especially. When you are recalculating hashes, you want the siblings to be right there. When you get down to patricia 0, there’s a pointer to one of the children, and a position within that child. And indices within those blocks. If a child is a branch block position, it just looks like FFFF, an invalid node.

Branch blocks are balanced, by their nature. Things don’t move around. The biggest subtlety here is with respect to the active child. The branch block might overflow with itself, where there’s 2 things that need to go to the same out position. Sometimes leaf blocks will overflow as well, and when that happens, it needs to decide where to put the overflow. It puts it into the active child. This does a good job of memory allocation. When you need to, you go and make a new active child, until it’s full, and then you usually have active children with mostly full, except for one that has overflow, and you really only do memory allocation when you need to do. So this does a pretty good job of not pre-allocating any more than it actually needs.

Within leaf blocks, this is an interesting subtlety, a leaf block is basically a list of nodes. It has first node used, a position in itself. Everything in here tha tis 2 bytes is a position. Num inputs, the number of times the parent has inputs going into it, and a list of nodes, a full node (not an empty node) has – it has a hash of everything on the left side, hash of everything on the right side, and then a position on the child node on the left, and a position of the child node on the right, when I say left I mean 0-order low bit, and right is 1-order high bit. The empty node has a next pointer of just zeroes. The reason for this first unused and empty node here is that the unused nodes form a linked list, so when you want to allocate a new node, you pick the first one and point to the next one after it. If you hit the last one, you figure out you have to do an overflow, and you copy-out into the active child unless it’s full or if you are the active child, in which case you make a new active child. If you have only one input yourself, you don’t copy out into the active child, you take your whole block that you are, and copy it into a new branch block, which likely has a new leaf block beneath it. There’s a lot of recursive stuff in here, in the code. It’s a little bit subtle, but it looks simple.

I have another interesting thing to talk about. First I’d like to ask if there are any questions.

Q: Why do you.. not.. hash.. mechanism.. under a merkle tree?

A: What do you mean by the skip hash? Oh, you mean… if you want to have a proof of non-membership, you terminate at an empty. When there’s only 2 things below, you just terminate with that, you don’t want to blow up the proof size.

Q: Difference between merkle tree and patricia trie?

A: I’m not really sure the difference between trees and tries. I’m not really sure the real difference between the tries and trees. I think tries have an I in it, and trees don’t. I think patricia tries are called tries, and if you have a more straightforward approach of cramming everything together by position, that’s usually called a tree, and it’s usually not easily updated.

Q: Trie is from a … it’s a tree where search is easy, where the points… splitting based on the key value rather than based on balancing. Every trie is a tree.

A: Okay, so Pieter says that it implies everything is balanced in a trie. Anything even vaguely resembling what I’m talking about here, even without efficient updating, would be a trie.

Q: There’s like a million merkle tree libraries out there. Why did you do this?

A: The ones that are out there right now don’t do a good job of getting good cache coherence. Having to worry about cache coherence is sort of a relatively new thing in computer science, having to do with computers being weird and fast. Things moving around the bus are a problem. Really what it amounts to is a die is this 2d thing that needs to get, data around on these highways going over it, and the size of the die is quadratic and the width of the highway is linear, and now we have megalopsies on chips, it’s actually problematic. There aren’t too many things that talk about this kind of optimization. More recent versions of introductions to algorithms talk about sorting algorithms that will work optimally regardless of how your cache coherence properties work, and things like that. But it’s not something that people usually worry about too much. So I decided to try to make this as performant as I possibly could.

Q: Could you talk about that for a sec? When you say performance, you mean handling larger trees?

A: The relevant performance characteristics have to do with– well, there are some operations it can do. It can check for inclusion, and it can update. When it’s doing updates, there are different usage patterns. Maybe it has a lot of retrieves interspersed with updates, and maybe it has whole batches of updates followed by retrieves. When you’re doing retrievals maybe you care about proofs, and maybe you don’t. If you don’t care about proofs, you should be using an ordinary set, because a hash set has really good memory cache coherence properties since it already knows exactly where to look any time a traversal happens. Traversing a tree is going to be very slow by comparison. If you want to check for inclusion and don’t care about proofs, just have two sets, a merkle set and a regular set. The most relevant thing, as I was thinking about this, was how long it would take to go over the entire blockchain history. For that particular use case, for each block, you’re doing a whole bunch of updates, followed by getting the root. And then you do a validation with a bunch of retrieves for the next block, to make sure everything is in there. We could write something to try that; it has a lot of lazy evaluation because of the big batches of updates. When you’re doing big batches of updates, you should sort them before actually applying them. The way that I am obliterating the two high-order bits, you actually want to sort starting from the third bit, not the first bit, but I haven’t inserted a convenience function for batch updates, because it has to be done in the right order; it’s a weird gotcha. There are a bunch of different patterns, and because it’s so hard to get even sane information about exactly where things bottleneck in memory hardware, I just optimized everything I could.
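As a hedged illustration of that batch-update gotcha, the sketch below sorts a batch of hashes with the two high-order bits masked off before applying them one at a time; the masking detail and the merkle_set.add interface are assumptions, not the actual API.

    # Hypothetical helper: because the implementation repurposes the two high-order
    # bits of each hash, sort the batch ignoring them (i.e. from the third bit onward)
    # before applying the updates in order.
    def apply_batch(merkle_set, hashes):
        for h in sorted(hashes, key=lambda h: bytes([h[0] & 0x3F]) + h[1:]):
            merkle_set.add(h)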

Q: I don’t know if people… blake2 has this personalization functionality, have you looked at using that instead of robbing bits?

A: Not familiar.

Q: The initial state of the blake2 hash you can set to different values based on running blake2 to pre-initialize it. So you can use this to separate the domain of different hashes.

A: Yes, that would be a much more elegant way of adding the necessary metadata than losing the 2 bits of security. It really is necessary to do something along those lines because otherwise someone might try to store a tree value in the set and you can’t tell whether it’s an internal or external value.

Q: I think personalization will help you there.

A: Sounds like a better way to do that. The default APIs for this stuff don’t tend to give you those fancy bells and whistles. I was unaware of that as a feature.
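For reference, Python’s hashlib exposes exactly this: blake2b takes a person argument that domain-separates hashes without giving up output bits. A minimal sketch, with made-up personalization strings:

    import hashlib

    # Distinct personalization strings keep leaf (terminal) hashes and internal-node
    # hashes in separate domains, instead of stealing two bits of the output.
    leaf = hashlib.blake2b(b"some stored value", digest_size=32, person=b"merkleset-leaf").digest()
    node = hashlib.blake2b(b"left-hash||right-hash", digest_size=32, person=b"merkleset-node").digest()
    assert leaf != hashlib.blake2b(b"some stored value", digest_size=32, person=b"merkleset-node").digest()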

Q: You were talking about… testing.. have you done?

A: Because my implementation so far is in Python, it needs to be ported to C. It’s really weird-looking Python that is meant to be ported to C. There are data structures that have pointers, and you can’t do that in Python (it doesn’t like you using bytes as a pointer to a Python object), so I took all of the in-memory data structures, used a hash table, and wrote wrapper methods for getting things in and out of the hash table. Yeah, it’s kind of weird. It’s handy to have everything in there: as part of debugging there are audit methods, and you can just go over the whole hash table and check that everything was allocated or deallocated correctly. I did read over data about how hard drives and SSDs work. SSDs usually have page sizes like 2 kilobytes, and you probably want to align with those; with hard drives, things should be pretty big, like 1 megabyte for the blocks. And then you get things like wanting only the terminal leaves to be on disk and everything above that in memory, so you spend all of your time waiting on the disk. When it gets to more fine-grained questions, like if I’m storing the entire thing in memory, not on SSD, not on a physical drive, how big should my branch and leaf blocks be? The available docs are really bad; seemingly similar machines have radically different behaviors. There’s a library called Fastest Fourier Transform in the West: the first time you run it, it benchmarks itself on the current machine and decides what its magic numbers should be. I think this might be the most reasonable solution for now.

Q: Your block structure doesn’t change the resulting root hash?

A: Yeah. The block structure does not change the root hash whatsoever. I implemented this as a very simple reference implementation which is quite straightforward. It has some nice properties. If you are doing things with lots of fallbacks, and you want to keep lots of roots without copying everything over, everything in it is immutable. In the performant one, everything is mutable, and if you want to get back you have to roll back. They do absolutely the same exact thing, and I have extensive tests. The way these tests work is that they take a random bunch of strings, repeatedly add one string at a time, record the root each time, and check that everything added is still there and everything not yet added isn’t there; then they delete the strings one at a time and work their way back. Once it has tested the reference implementation, it does this for the performant one with various parameters, including fairly degenerate ones. I found a lot of bugs. It has 98% code coverage with a fairly small amount of test code.
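A rough sketch of that test strategy, with ReferenceMerkleSet and the performant MerkleSet standing in for the two implementations (the class and method names here are assumptions):

    import os, hashlib

    def exercise(ref_cls, fast_cls, n=50):
        items = [hashlib.sha256(os.urandom(16)).digest() for _ in range(n)]
        ref, fast = ref_cls(), fast_cls()
        roots = [ref.get_root()]
        for i, item in enumerate(items):
            ref.add(item); fast.add(item)
            assert ref.get_root() == fast.get_root()
            assert all(ref.is_included(x) for x in items[:i + 1])      # everything added is there
            assert not any(ref.is_included(x) for x in items[i + 1:])  # nothing else is there yet
            roots.append(ref.get_root())
        for i, item in reversed(list(enumerate(items))):
            ref.remove(item); fast.remove(item)
            assert ref.get_root() == fast.get_root() == roots[i]       # roots retrace on the way back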

TXO bitfields

Any more questions? Okay, something that I am fairly sure will be controversial. Unfortunately the bitcoin-dev mailing list has got completely drowned in discussions of subtleties of upgrade paths recently (laughter). So there hasn’t been too much discussion of real engineering, much to my dismay. But, so, using a merkle set is kind of a blunt hammer, all problems can be solved with a merkle set. And maybe in some cases, something a little bit more white box, that knows a little bit more about what’s going on under the hood, might perform better. So here’s a thought. My merkle set proposal has an implementation with defined behavior. But right now, the way things work, you implicitly have a UTXO set, and a wallet has a private key, it generates a transaction, sends it to a full node that has a UTXO set so that it can validate the transaction, and the UTXO set size is approximately the number of things in the UTXO set multiplied by 32 bytes. So the number over here is kind of big and you might want it to be smaller.

So there have been discussions of maybe using TXO bit fields. I had a good discussion with petertodd about his alternative approach to this whole thing, wherein I made this offhand comment, because he likes bit fields, that the thing that might be useful there is that you can compress things down a lot. And it turns out that this really helps a lot. I had this idea for a TXO bit field. The TXO set is all transaction outputs ever created, spent or unspent, whereas the UTXO set is just the unspents. The idea with a TXO bit field is that the wallet holds a proof of position along with its private key. As things are added to blocks, each output gets a position, and no matter how many things are added later, it stays in that same position forever. So to make validation easier for the full node, the wallet gives the proof of position, which it remembers for the relevant inputs it’s spending, bundles that with the transaction, and sends it to the full node, and then a miner puts it into a block. The proofs of position will be substantially larger than the transactions, so that’s a tradeoff.

So this goes to a full node, which has to remember a little bit more than before. It has to remember a list of position roots, one per block. For every single block, it remembers the root of a commitment, which can be canonically calculated for that block, of the positions of all the outputs in it, to allow it to verify proofs of position. And it also has a TXO bitfield. The full node takes the proof of position and verifies the alleged position using the data it remembers for validating that. Then it looks it up in the TXO bit field, which in the simplest implementation is a trivial data structure: it’s a bit field, and you look it up by position. It’s not complex at all. It’s one lookup, it’s probably very close to other lookups because you’re probably looking at recent information, and these are all 1 bit each, so they are much closer to each other. The size of the TXO bit field, in bytes, is the TXO count divided by 8. So this is a constant factor improvement of 256. Computer science people usually don’t care about constant factors, but 256 makes a big difference in the real world. This also has the benefit that my merkle set is a couple hundred lines of fairly technical code; it has extensive testing, but it’s not something where I would particularly trust someone to re-implement it well from the description. It’s something where I would expect someone to port the existing code and existing tests. Whereas I would have much more confidence that someone could implement a TXO bit field from a spec and get it right.
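A toy sketch of that lookup structure (names and interface are assumptions; a real implementation would sit behind the per-block position-root checks described above):

    class TxoBitfield:
        """One bit per transaction output ever created, indexed by absolute position.
        A set bit means the output is still unspent; size is roughly TXO count / 8 bytes."""
        def __init__(self):
            self.bits = bytearray()
            self.num_txos = 0

        def create_output(self):
            pos = self.num_txos
            if pos % 8 == 0:
                self.bits.append(0)
            self.bits[pos // 8] |= 1 << (pos % 8)    # newly created outputs start unspent
            self.num_txos += 1
            return pos

        def is_unspent(self, pos):
            return bool(self.bits[pos // 8] & (1 << (pos % 8)))

        def spend(self, pos):
            assert self.is_unspent(pos), "already spent or bad proof of position"
            self.bits[pos // 8] &= ~(1 << (pos % 8))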

The downside is that these proofs of position are much bigger than the transactions. And this is based on the TXO set size, rather than the UTXO set size, which is probably trending towards some constant; in the long term the TXO set might grow without bound. There is a pretty straightforward way of fixing that, which is making a fancier bit field. When it’s sparse, at the expense of being much more interesting to implement, you can retain the exact same semantics while making the physical size of the bitfield equal to the number of bits that are set to 1 times the log of the total number of bits, which isn’t scary in the slightest. So this is still going to be quite small. To get that working well, you’d have to go through an exercise similar to making a good merkle set: someone is going to have to think about it, experiment with it and figure out what implementation is going to perform well. Whereas a straightforward bit field is just trivial.
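A minimal sketch of the sparse variant’s semantics, with a Python set standing in for whatever compact encoding a real implementation would use; storing only the positions of the 1 bits costs roughly (number of unspent outputs) times log2(total outputs) bits, as described above.

    class SparseTxoBitfield:
        """Same semantics as the dense bitfield, but only the 1 (unspent) positions are stored."""
        def __init__(self):
            self.unspent = set()   # stand-in for a compact encoding of set-bit positions
            self.num_txos = 0

        def create_output(self):
            pos = self.num_txos
            self.unspent.add(pos)
            self.num_txos += 1
            return pos

        def is_unspent(self, pos):
            return pos in self.unspent

        def spend(self, pos):
            self.unspent.remove(pos)   # raises if already spent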

Any discussion of this got drowned out by the mailing list’s very important discussion about segwit. But I think this is an approach worth considering; with the given caveats about its performance, it has a lot going for it, rather than doing UTXO commitments in blocks in bitcoin. Merkle sets can do a lot more things for a lot more use cases than just bitcoin. I think they shine when you have truly gigantic sets of data; probably people that have permissioned blockchains should be using merkle sets, not really bitcoin I guess.

Q: The proof of position… is that position data immutable?

A: Yes. It only grows.

Q: That’s the biggest distinction from petertodd’s proposal, where he chooses bitfield and proof of position…

A: petertodd’s idea of keeping the whole UTXO set: he likes indexing things by position on the grounds that the recent stuff is probably getting updated a lot, and by organizing things in memory that way, you get memory-efficient lookups because it’s always hitting recent stuff. My approach gets this and more: everything is compacted down, so you are more likely to hit things that are near each other because it’s all in a smaller area, and all of the recent stuff is right there. So that’s a benefit. But I don’t think that is nearly as much of a benefit as a trivial bitfield implementation knowing exactly where the bit is, one lookup; the constant factor of 256 is a huge deal, and it’s a really easy implementation to do.

Q: The real challenge of petertodd’s earlier proposals was that the wallet had to track the proofs of position as they change over time.

A: Oh right, that’s what inspired me to do this. I don’t want wallets to have to keep track of everything. I don’t want wallets to have to keep track of blocks. I want them to be able to come back to life after years of being deactivated and still be able to make functional transactions. And by making proofs of position never change after a few blocks, once you’re past the reorg period, that keeps that property, so it makes wallets really braindead simple to write and have them function.

Q: Yeah. Makes sense.

A: I got through this talk very quickly. Anyone have any more questions about this?

Q: Can you describe the proof of position? What is information in that? How does that affect how bitcoin blocks or transactions?

A: So… to make a proof of position, you have a canonical way of taking a block and calculating a position root. We’re not adding anything to bitcoin blocks. You get a block, you look at it, and you say: if I take the information about the order of everything in here, put it into a sorted order of positions and compute a root for it, it’s a canonical root; me and everyone else will calculate the same value. I calculate everything about this, I keep the root, I know it’s valid, but I don’t keep the whole thing. Someone who wants to prove position says: everything up to this point has this number of things in it, and here’s the list of things in order. They include the subset of the things under that root that gives enough information to verify the offset of one particular TXO from the value that was at the beginning of the block. Nothing new is added to bitcoin commitments; it’s just a canonical calculation that everyone can do.
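Purely as an illustration of what such a canonical per-block position root could look like (the hashing details here are my assumptions, not a specification): a merkle root over the block’s newly created outputs in order lets a path to leaf i prove an offset of i within the block, and the absolute position is that offset plus the count of outputs created before the block.

    import hashlib

    def h(x):
        return hashlib.sha256(x).digest()

    def position_root(output_ids):
        """output_ids: identifiers of the block's newly created outputs, in block order."""
        assert output_ids
        layer = [h(o) for o in output_ids]
        while len(layer) > 1:
            if len(layer) % 2:
                layer.append(layer[-1])          # duplicate the odd leaf, purely for illustration
            layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        return layer[0]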

Q: Mind if I re-state the benefits of this?

A: Sure.

Q: So the advantage of this proposal is that currently, with a full node today, to validate blocks you have this UTXO set which is maybe 2 GB of data, and what Bram is proposing is a scheme where the full node doesn’t need to store that. It stores a bit field which is 256 times smaller, plus an additional hash per block. That’s a huge savings in resources for a minimal full node. The downside is that transactions are now much larger: instead of a few hundred bytes, like 226 bytes, it’s now like a kilobyte, to remind the node of the data it has forgotten. There have been other proposals, like petertodd’s, where the wallet would have to keep a passive view of the network to keep updating its proofs. Bram’s proposal doesn’t have that problem. Then there’s the question of what’s worse for people: a lot of storage on disk, or big transactions and a lot of bandwidth for the transaction overhead? There are a lot of wrinkles in the discussion on this; you could have a p2p tradeoff where some nodes have the merkle set of data and they don’t have to send it over the wire, or things like that. Bram’s proposal scales with the TXO size, not the UTXO size. The TXO count is all history, not just the unspents, so it’s a lot larger, but because of the big savings maybe it doesn’t matter.

A: Yeah. And like I was saying, with maybe too many words: when the ratio between the TXO and UTXO set sizes becomes large, you can compactify this bitfield, and the asymptotic size of the bitfield will then be the UTXO set size times the log of the ratio between them, which is totally under control. But it’s not trivial to implement; doing that well is the same order of effort as the effort I had to put into making a decent merkle set implementation, which was non-trivial.

Q: …

A: It’s not necessary right now, this bitfield is so small that it’s not scary in the slightest.

Q: Just to clarify.. we have 1600 transactions/block on average.

Q: Inputs and outputs.. so … 2500 per block.

Q: Are we talking about adding so much data that we will limit the number of transactions?

A: We are not proposing changing the number of transactions per block.

Q: The way to think about this is to think about what’s more expensive for users.. bandwidth or storage? If bandwidth was free, then sure. There’s a tradeoff circus where this makes more or less sense.

A: We’re trying to make technical solutions that can keep up. We’re not– we’re trying to avoid anything that touches the semantics of bitcoin at all; we just want to make some magic pixie dust, make the issues go away, and then people can stop discussing hard-forks to fix them.

Q: I want to answer that question differently. Your blocks would be the same size. Your transactions would be the same size. All the accounting is the same. But when I give you a block or transaction, I would also give you the proofs that the outputs it’s spending existed in the past. It’s auxiliary information that you pass around; it doesn’t change the canonical block data, it just changes what I need to give to you in order for you to verify it.

Q: Proof of position….?

A: The wallet can be given the proof of position and it will remember it.

Q: That is the solution that petertodd suggests. Yeah, farm it out to someone else. But in this case, there’s no reason to do it. You only need a third party if you went offline immediately after getting paid, but if you were online when you got paid, you just save it and you’re done. It only changes during a reorg. Once your transaction is confirmed, you save the data. If you lose the data, you go to a third party, and you give them a transaction without proofs plus an extra output to pay them, and they give you the proof in order to get your money.

A: You only need to find one honest full node to look up the position, and then that proof can be sent around to everyone else in the whole network.

Q: Are these bitfields stored per block, are they aggregated?

A: One giant bitfield for the whole history. The very first bit is the first UTXO of the genesis block.

Q: No. Block 0 doesn’t create any outputs. It was never inserted in the database.

A: Uh… well since block 0 doesn’t create any outputs, the first one would be block 1. And then block 2, 3, 4 and so on. So yeah it grows without bound off of the end of the thing.

Q: If we have time, can I elaborate on how I think the proof works? So that we can make sure I am thinking about the same thing.

A: Go right ahead.

Q: I think you’re going to make, in every block, a merkle tree over its outputs, based on their positions and whatever the value is in the TXO, which is the amount, the script and the txid.

A: It just needs position information.

Q: The hash needs to..

A: It needs to commit to the hash, yes.

Q: When I want to spend something, I give you the output I am spending, plus a merkle path, which is per block, from the block it was created in.

A: It doesn’t have to be per block. I like per block, because it’s a good idea for a node to store the top bunch of layers of a merkle tree, and it’s a good spot to stop at.

Q: But we already have a linear amount of data from the blocks that are available. I agree.

delayed txo commitments https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012715.html

TXO commitments do not need a soft-fork to be useful https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013591.html

rolling UTXO set hashes https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html

\ No newline at end of file diff --git a/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/index.html b/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/index.html index 89b028e3ae..c9bfcc8524 100644 --- a/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/index.html +++ b/sf-bitcoin-meetup/2018-07-09-taproot-schnorr-signatures-and-sighash-noinput-oh-my/index.html @@ -11,4 +11,4 @@ Pieter Wuille

Date: July 9, 2018

Transcript By: Bryan Bishop

Tags: Taproot, Schnorr signatures, Musig, Adaptor signatures, Mast, Contract protocols

Category: Meetup

Media: -https://www.youtube.com/watch?v=YSUVRj8iznU

https://twitter.com/kanzure/status/1021880538020368385

slides: https://prezi.com/view/YkJwE7LYJzAzJw9g1bWV/

Introduction

I am Pieter Wuille. I will just dive in. Today I want to talk about improvements to the bitcoin scripting language. There is feedback in the microphone. Okay.

I will mostly be talking about improvements to the bitcoin scripting language. This is by no means an exhaustive list of all the things that are going on. It’s more my personal focus in the things that I am interested in and excited about seeing happen.

I will first start with some general ideas about how we should think about a scripting language, then go into two kinds of topics. In particular, signatures and structure of the bitcoin scripting system and changes to that. And then conclude with some remarks about how this can be brought into production which is a non-trivial thing.

Script system goals

Much of what’s going on is that I want to convince you that there are many things going on and that we need to prioritize in how we work on these things because there are engineering tradeoffs. I’ll get to that in the end.

First of all, we should think about bitcoin scripting language as a way for specifying conditions for spending a transaction output. Bitcoin uses a UTXO model. It has significant advantages in terms of privacy. Really everything that our scripting language does is specifying under what conditions can you spend an output. We want to do that under various constraints. One of them is privacy. You don’t really want to reveal to the world who you are or really what you were doing, which is contradictory because we want a scripting language where you don’t say what you want to do. You also want to be space efficient because space on the blockchain is expensive and there’s a price to it so everyone is incentivized to keep things as small as possible. And of course there’s the computational efficiency- we want things that are efficient to verify so that it’s easy to run full nodes to fully audit the system.

All of these concerns are related, not perfectly, but generally, if you try to reveal less about what you’re doing then you will be saying less and using less storage and as a result there will be less data to verify. All of these things generally go hand-in-hand.

An important concern here is to not think about a scripting language as a programming language that describes execution. We’re working in a strange model where any time a transaction is included in the blockchain, every node in the world forever will be validating that transaction, executing the same steps and coming to the same conclusion. We don’t want it to compute anything. I already know the outcome- I don’t need 100,000 computers in the world to replicate this behavior. The only thing I am trying to do is convince them that the thing I’m doing is authorized. It’s really about not so much computing things, but more verifying that a computation was done correctly.

This has many similarities with proof systems. In the extreme, we can aim for a zero-knowledge proof system really where all you say is I have some condition and its hash was X and here is a proof that this condition was satisfied and nothing else. Unfortunately there are computational and other difficulties right now. I think that should be our ultimate goal, for building things where we’re looking at things that way.

Signature system improvements

Regarding improvements, I want to talk about three things. One is Schnorr signatures. Some of you may have seen that I recently a couple days ago published a draft BIP for incorporating Schnorr signatures into bitcoin. I’ll talk a bit about that.

I will also be talking about signature aggregation (or here), in particular aggregation across multiple inputs in a transaction. There’s really two separate things, I believe. There’s the signature system and then the integration and we should talk about them separately. Lots of the interest in the media on this topic are easily conflated between the two issues.

Third, I also want to talk about SIGHASH_NOINPUT and signature hashing. There’s a number of developments there as well.

Schnorr signatures

https://www.youtube.com/watch?v=YSUVRj8iznU&t=5m50s

Schnorr signatures are the prototypical digital signature system that has been around for a long time, building on the discrete logarithm problem. This is the same security assumption as ECDSA has. The two have an interesting history where DSA, the predecessor to ECDSA was created as a way to avoid the patents that Schnorr had on his signature scheme. The Schnorr signature scheme is better in almost every way. There’s no reason not to use it, apart from the fact that due to it being patented until 2008, standardization focused mostly on ECDSA and we should see if that can be improved.

In particular, I’m not going into the real details of the verification equation. I’m just putting them on the slide here to show that they are very similar in structure. It’s just re-arranging the variables a little bit and putting a hash in there. The interesting thing is that Schnorr signatures have a security proof: we know that if the discrete log problem of our group is hard in the random oracle model, then there is a proof that the Schnorr signatures cannot be forged. There is no such proof for ECDSA. This is a nice-to-have.

  • Schnorr is sG = R + H(R,P,m)*P
  • ECDSA is sR = mG + rP
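To make the structural similarity concrete, here is a toy Schnorr sign/verify in Python over a small multiplicative group, so the additive equation sG = R + H(R,P,m)·P becomes g^s = R·P^e. The parameters are tiny and purely illustrative; this is nothing like the actual BIP or libsecp256k1.

    import hashlib

    # Toy group: squares modulo a safe prime (insecure; for illustration only).
    p, q, g = 2039, 1019, 4      # p = 2q + 1, g generates the order-q subgroup

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def sign(x, m, k):
        P = pow(g, x, p)              # public key
        R = pow(g, k, p)              # nonce commitment
        e = H(R, P, m)                # challenge e = H(R, P, m)
        return R, (k + e * x) % q     # signature (R, s)

    def verify(P, m, sig):
        R, s = sig
        e = H(R, P, m)
        return pow(g, s, p) == (R * pow(P, e, p)) % p   # g^s == R * P^e

    x = 123
    P = pow(g, x, p)
    assert verify(P, "hello", sign(x, "hello", k=77))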

As we are potentially introducing a new signature scheme anyway, we have the opportunity to make a number of changes unrelated to the theoretical foundations of the signature scheme. We can get rid of the dumb DER encoding from ECDSA signatures. There’s some six or seven bytes of overhead just for stuff like “there’s an integer that follows that is this long”, and we really just want 64 bytes, so let’s just use 64 bytes.

Batch verification

Another thing that we can focus on is batch verifiability: the ability to take a number of (public key, message, signature) triplets, throw them all at a verification algorithm, and have the algorithm tell you whether all of them are valid or not. You wouldn’t learn where the faults are, but generally during block validation we don’t care about that. We only care whether the block is valid or not. This seems like an almost perfect match for what we want to do. This batch verifiability is an interesting property that we want to maintain.

Schnorr signature BIP draft

A few days ago, I published a Schnorr signature BIP draft which was the combined work of a number of people including Greg Maxwell. And many other people who are listed in the BIP draft. This BIP accomplishes all of those goals. It’s really just a signature scheme, it doesn’t talk about how we might go about integrating that into bitcoin. I’m going to talk about my ideas about that integration problem later.

Schnorr signature properties

One of the most interesting properties that Schnorr signatures have, and the reason we looked into them in the first place, is the fact that they are linear. The linearity property means that, roughly speaking (it’s a bit more complex than this), you can take multiple Schnorr signatures by different keys on the same message and add them together in a certain way, and the result is a valid signature for the sum of those keys. This is a remarkable property which is the basis for all the cool things we want to do on top of them. The most obvious one is that we can change how many of our multisignature policies work. In bitcoin, you have the ability to have k-of-n signatures; there’s a built-in construction to do this. Especially when it’s n-of-n, where you have a group of people and you require all of them to sign before an output can be spent, that reduces with Schnorr signatures to a single signature. Roughly speaking, you just send money to the sum of their keys, and now all of them need to sign in order to be able to spend. It’s a bit more complex than that; please don’t do exactly that, there will be specifications for how to deal with this.

Musig

https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures.html

https://eprint.iacr.org/2018/068

There are more advanced protocols that do even more with Schnorr signatures, where you can implement any k-of-n policy or even further any monotone boolean function over different keys, such as “this key and this key and that key, or that key and this key and that key or that key” basically anything you can build with ANDs or ORs over keys is doable, but the protocol for doing so is tricky.

The downside is that these things are not accountable. If you use a protocol like this, with a 2-of-3 multisig, you cannot tell from the multisig which of the two signers produced the signature. There are other constructions for doing this, though.

Also, there’s an interactive key setup protocol for anything other than n-of-n. You really need the different signers to run a protocol among themselves before they can spend. For just n-of-n, we came up with a construction called musig. It was coauthored by Andrew Poelstra, Greg Maxwell, Yannick Seurin and myself. It’s a construction for doing this non-interactively: you take keys, combine them, and then you can send to the combined key.

Adaptor signatures

Another advantage of Schnorr signatures, and one of the more exciting things in this space, are adaptor signatures, which are a way of implementing atomic swaps that look completely like ordinary payments and can’t be linked together. Roughly how they work: you lock up your funds on both sides (say you have two assets on two different chains, or on the same chain) into a 2-of-2 multisig, and then you each produce a damaged signature, where you prove to the other party that the amount you damaged these signatures by is equal in both cases. Then, as soon as you take the money, you reveal the real signature on one side; the other party computes the difference between the damaged and real one, applies the same difference on the other side, and takes the money there. Your taking money from one side is in fact what reveals to the other party the ability to take the money on the other side. There’s a recent paper that describes how to use this to build a payment channel system with good privacy properties.

Cross-input signature aggregation

Cross-input signature aggregation comes from the fact that the Schnorr construction where you can sign the same message with multiple keys can be generalized to having multiple different messages signed by different people while still having just a single signature. The ability to do so would in theory allow us to reduce the total number of signatures in a transaction to just one. This has been the initial drive for looking into Schnorr signatures, and it’s such an awesome win. There are many complications in implementing this, it turns out, but this is the goal that we want to get to. It has an impact on how we validate transactions. Right now, for every input you just run the scripts and out comes TRUE or FALSE, and if there’s a FALSE then the transaction is invalid. This needs to be changed to a model where script validation returns TRUE or FALSE and also returns a list of public keys, the set of keys that must still sign for that input; then we need a single signature covering all of them, and it needs to be transaction-wide rather than per-input.

Another complication is soft-fork compatibility. When you want different versions of software to validate the same set of inputs, and there is only a single signature, you must make sure that they both understand that this signature is about the same set of keys. If they disagree about the set of keys or about who has to sign, then that would be bad. Any new feature that gets added to the scripting language that changes the set of signers is inherently incompatible with aggregation. This is solvable, but it’s something to take into account, and it interacts with many things.

Sighash modes

Another new development is thinking about new sighash modes. When I’m signing for a transaction, what am I really signing? This has traditionally been a modified version of the transaction with certain values blanked out, permitting certain changes that can still be made to the transaction. There’s the anyonecanpay flag, for example. Instead of signing the full transaction, the anyonecanpay flag causes you to sign a transaction as if it only had a single input, which is the one you’re putting in. This is useful for crowdfunding-like constructions where you say “I want to pay, but only when enough other people chip in to make the amount match, and I don’t want my signature to be dependent on them” (mutual assurance contracts). A number of interesting constructions have been come up with, but there are really only six modes. So we have to wonder, is this the best way of dealing with this?

SIGHASH_NOINPUT

Recently there has been a proposal by cdecker for SIGHASH_NOINPUT, where you sign the scripts and not the txids. The scary downside of this construction is that such signatures are replayable. Say I pay to an address, you spend using SIGHASH_NOINPUT, and then someone else for whatever reason sends to the same address; the receiver of that first spend can take the first signature, put it in a new transaction, and take the new coins that were sent there. So this is something that should only be used in certain applications that can make sure this is not a problem.

eltoo

It also has some pretty interesting advantages; one of them is the eltoo proposal that those guys in the back know more about than I do. My general understanding is that this permits payment channels that are no longer punitive. Currently in lightning-like constructions, if someone broadcasts an old state and tries to revert the state of a channel where they sent money to you, you get the ability to take their coins, but you have to be watching. With eltoo, this changes: they can still broadcast an old state, but you can broadcast a newer state on top of it, so it becomes something that always does the right thing rather than needing to rely on incentives.

Then there are thoughts (regarding sighash modes) about delegation or the ability to verify signatures from the stack. There’s many kinds of thinking. I’m not going to go into these things because I haven’t spent much time myself on it.

Script structural improvements

Thinking about the structure of scripts… especially following that model of thinking about script as verification and not execution, thinking less about it as a programming language and more as a verification system, I’m going to go through the different steps and try to explain taproot and graftroot as the final step.

Pay-to-scripthash (P2SH)

We have to start with bip16 P2SH, which was a change made as a soft-fork in 2012. Initially the program, the script, was put into the output, which meant the sender had to know what your script was. So this was changed to something where instead of putting the script itself in the output, you put the hash of the script. When spending it, you reveal that this hash was really this script, and now you can run it, and here are the inputs to it. This had a number of advantages that at the time we might now take for granted. All of the outputs look identical, apart from the single key ones where we’re still using p2pkh (pay-to-pubkeyhash). The sender doesn’t need to care about what your policy is: if you happen to use a hardware wallet or some escrow service to protect your coins, then the sender shouldn’t need to know about that. By turning it into a hash, that’s accomplished. And also, because bitcoin nodes need to maintain the full UTXO set between the time an output is created and spent, they only need to know the hash, not the full script. But you still reveal the full script when spending, which is not that great for privacy. What can we do better?

Merkleized abstract syntax tree (MAST)

One of the ideas that has been around for a while is merkleized abstract syntax trees, as it was called originally. According to Russell O’Connor, that’s not what we should be calling what we talk about with merkle branches today. The observation is that most scripts that you see in practice are just a disjunction of a number of possibilities: you can spend if A and B sign, or if C has signed and some time has passed, or if D and A have signed and some hash has been revealed. Pretty much everything we have seen to date is some combination of these things. It’s unfortunate that any time you want to spend anything, you have to reveal the complete script with all possibilities. The observation is that you can instead build a merkle tree, a hash tree where you pairwise combine different scripts together, and then in your output you do not put the script or the hash of the script; you put the merkle root of all the possibilities you want to permit for spending. At spending time, you reveal the script, the path along the merkle tree to prove that the output really contained that script, and the inputs to that script. This has log(n) size in the number of possibilities, and you only need to reveal the branch actually taken. This is an interesting idea that has been around for a while. There have been a number of proposals, in particular by Mark Friedenbach (bip116 and bip117) and Johnson Lau (bip114), who have worked on ideas around this.
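As a toy illustration of that structure (the hashing and path encoding here are assumptions for illustration, not any proposed consensus rules): commit to the merkle root over the possible scripts, then at spend time reveal one script plus the path proving it is under the root.

    import hashlib

    def h(x):
        return hashlib.sha256(x).digest()

    def merkle_root(scripts):
        layer = [h(s) for s in scripts]
        while len(layer) > 1:
            if len(layer) % 2:
                layer.append(layer[-1])
            layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        return layer[0]

    def verify_branch(script, path, root):
        """path: list of (sibling_hash, revealed_is_left) pairs from leaf to root."""
        acc = h(script)
        for sibling, revealed_is_left in path:
            acc = h(acc + sibling) if revealed_is_left else h(sibling + acc)
        return acc == root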

Merkle trees with unanimity branch

I want to make an intermediate step here, where I want to go into what a ‘unanimity branch’ is. The observation is that in almost all interactions between different parties that want to participate in a script or smart contract, it’s fine, not necessarily required, but fine to have a branch that is “everyone agrees”. I’m modeling that here by adding an additional branch to my merkle tree. Due to Schnorr multisig, we can have a single key that represents a collection of signers that all agree. To explain this, I want to go into an abstraction for the blockchain called “the court model”, which is that we can think about the blockchain as a perfectly fair court that will always rule according to whatever was agreed to in the contract. However, the court only has limited capacity. In the real world, hardly any disputes between two parties ever get sent to a jury or a judge in the judicial system. The observation is that having the ability to go to a court is sufficient in many cases to make people behave honestly even if they don’t actually go to the court. There’s a similarity here with the blockchain, because knowing that you have the ability to publish the full script and have whatever the agreed-upon contract was executed is enough to make people say “well, you know, we all agree, we can spend the money, there’s no need to actually present the entire script”. You could think about this as settling out of court, where you just say “hi judge, we agree to settle this”. That’s how you can represent this as a single key that is “everyone agrees” in this merkle tree.

Taproot

However, if you think about the scenario here: even in the case where everyone agrees, what we still have to publish on the chain is our key and the top-right branch hash. That’s an additional 64 bytes that need to be revealed just for this super common case that hopefully will be taken all the time. Can we do better? That is where taproot comes in. It’s something that Greg Maxwell came up with.

Taproot gives the ability to say– it’s based on the idea that we can use a construction called pay-to-contract which was originally invented by Timo Hanke in 2013 I think, to tweak a public key with a script using the equation there on the screen.

It has a number of properties. Namely, if I know the original public key and I know the script, then I can compute the tweaked public key. If I know the original secret key and I know the script, then I can compute the secret key corresponding to the tweaked public key. And if you have a tweaked public key, then you cannot come up with another original key and script that has the same tweak. It works like a cryptographic commitment.

We can use this to combine pay-to-scripthash and pay-to-pubkey into a single thing. We make the script output a public key that is tweaked by some script. You are permitted to spend either using the public key or the script, but what goes on the chain in the scriptPubKey is just a public key. The way you spend through the key path is just by signing with it: if you were authorized to spend you knew the original private key, and you know the script, so you can compute the modified secret key and just sign with it. If you want to use the script path, the fallback strategy of running the full contract, you reveal the original public key and the script, and everyone can verify that the tweaked public key from the chain indeed matches that data.
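A toy sketch of that pay-to-contract tweak, in the same small multiplicative group as the earlier Schnorr sketch (so P + H(P,script)·G becomes P·g^H(P,script)); parameters and hashing are assumptions, for illustration only.

    import hashlib

    p, q, g = 2039, 1019, 4                 # toy group; insecure, illustration only

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    x = 321                                 # internal secret key
    P = pow(g, x, p)                        # internal public key
    script = "timeout fallback script"
    t = H(P, script)                        # commitment to the script
    Q = (P * pow(g, t, p)) % p              # tweaked output key: Q = P * g^H(P, script)

    # Key path: the owner knows x and the script, so they know the tweaked secret x + t.
    assert pow(g, (x + t) % q, p) == Q
    # Script path: revealing P and the script lets anyone check the commitment to Q.
    assert (P * pow(g, H(P, script), p)) % p == Q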

The awesome part is that what goes on the chain in the case of a spend through the key path is just a signature. You don’t reveal what the script was, and you don’t even reveal there was a script in the first place. The unanimity branch that we assumed every contract has, the “everyone agrees” case, just becomes a single signature in taproot. We have all outputs looking identical, and all collaborative spends also looking identical, which is an enormous win for privacy.

Taproot also interacts with adaptor signatures. For those you generally need an escape valve in case of a timeout. As I explained before, you both put your money in the 2-of-2 multisig, but you wouldn’t want one party to just say “nah, I’m not going to cooperate” and now everything is locked up. You want a fallback so that after some time everyone can take their money back. With something like taproot, that still remains a single public key: you just hide the timeout script in the commitment to the public key. In the case that everything goes as planned, the only thing that touches the blockchain is a single signature, and not the timeout script.

Graftroot

If we start from this assumption that there exists a single key that represents the “everyone agrees” case, then we can actually do better and make use of delegation.

Delegation means we’re now going to permit spending by saying “I have a signature with this taproot key (the key that represents everyone involved)”, revealing a script, revealing a signature with that key over the script, and providing the inputs to it. There’s a group of participants that represents the “everyone agrees” case, and they have the ability to delegate spending to other scripts and other participants.

The advantage of graftroot over a merkle tree is that you can have as many spending paths as you want and they are all the same size. All you do is reveal a single signature. The downside is that it is an inherently interactive key setup. If you are a part of one of these scripts s1, s2, or s3, you cannot spend without having been given the signature by the keys involved.

This may mean difficulty with backups, for example, because your money is lost if you lose the signature.

Half-aggregation

There is another concept called half-aggregation ((unpublished improvements forthcoming)) that Tadge Dryja came up with that lets you half-aggregate signatures non-interactively together. It doesn’t turn them into an entire single thing but it turns them into half the size. With that, graftroot even for the most simple of cases is more efficient in terms of space than a merkle branch. But there are tradeoffs.

Taproot and graftroot in practice

In practice, my impression with all of the things going on is that there’s a lot of ideas and we cannot focus on everything at once. Even worse, if you don’t call this worse I guess, there are incentives to do everything at once. In particular, I have talked about various structures for how script execution should work but you need to commit upfront about what your structure should be. We need upgrade mechanisms, it doesn’t exist in a vacuum. Thanks to segwit, we have script versioning. But needing to reveal to the world that there’s a new feature and you need to use it for your script is itself a privacy leak and this is sort of unfortunate. This is an incentive to do everything at once so that you don’t need to introduce multiple versions.

Also, signature aggregation doesn’t work across soft-forks because we need to make sure that different versions of the software agree on what the keys are that get signed. Again, it’s an incentive to do more things at once. Any time a new change gets introduced, it cannot be aggregated together with the old things.

There are some engineering tradeoffs to be made here, I think, where you cannot let these incentives drive the development of a system where you need broad agreement in the community about the way to go forward. So my initial focus here is Schnorr signatures and taproot. The reason for this focus is that the ability to make any input and output in the cooperative case look identical is an enormous win for how script execution works. Schnorr is necessary for this because without it we cannot encode multiple parties into a single key. Having multiple branches in there is a relatively simple change. If you look at the consensus changes necessary for these things, it’s really remarkably small, dozens of lines of code. It looks like a lot of the complexity is in explaining why these things are useful and how to use them, and not so much in the impact on the consensus rules. Things like aggregation, I think, can be done after we have explored various options for structural improvements to the scripting language, once it’s clear what the structuring should be, because we will probably learn from the deployments how these things get used in practice. That’s what I’m working on with a number of collaborators, and we’ll hopefully be proposing something soon. That’s the end of my talk.

Q&A

https://www.youtube.com/watch?v=YSUVRj8iznU&t=38m38s

Christopher Allen: In the Schnorr BIP, you published a reference Python implementation that is not production-safe. I’m already seeing a Rust implementation, again not safe. I’m obviously enthusiastic about Schnorr signatures, but I’m not finding good resources for what are the processes and methods by which to make…

A: Of doing this in a production ready correct way? Is that your question?

Christopher Allen: Yes. Do you have sources or your own plans?

A: We’re working on an implementation that does things correctly, which is constant time, and based on libsecp256k1. I’m rather surprised by how fast people have written implementations already. I believe it’s been 3 days. ((laughter)) That’s exciting, but it probably means we should hurry up with publishing an implementation. There’s more on top, because the BIP is single-key Schnorr, which is what you would need to integrate into a consensus system. One of the advantages I talked about is all these multisig and adaptor signature constructions, and we will have a reference implementation for those too.

Christopher Allen: What is the kind of rigorous list of things that ought to be done or ought to be checked, or where do people find out how to do this? I’m not finding good resources for how to do that.

Q: In the atomic swap case, do you need both chains to understand Schnorr signatures?

A: What?

gmaxwell: When doing atomic swaps across chains, do you need both chains to implement Schnorr signatures?

A: Ah, yes, that’s a good question. The adaptor signature based— the question is, and correct me if I’m wrong, is if you use an adaptor signature to do an atomic swap between two chains, then do both chains need to support Schnorr signatures? The answer is no. The construction works as long as one of them has Schnorr signatures. The reason is that– the exact same construction does not work between ECDSA and ECDSA. The reason is that you need to be able to lock things up into a 2-of-2 multisig which is a single atomic signature. Otherwise, the other party could just change their part of the signature and not reveal the piece of the damaged one they showed before. I believe in the paper (maybe this one?) that builds payment channels on top of these adaptor signature constructions, they actually come up with a way to do it between ECDSA and ECDSA. I don’t know how complex it is.

roasbeef: There’s zero-knowledge proofs.

gmaxwell: Andytoshi has a construction that allows you to use this across different elliptic curve groups. You could do this with secp curve and something using a bernstein curve.

roasbeef: It’s a lot heavier. If you use ECDSA, you encrypt a private key, they do a half-signature, then you decrypt it or something.

A: You project things into a completely different encryption system and then you map it down or something.

roasbeef: For cross-sig aggregation, musig or BN, which one is a better fit?

A: I will repeat the question slower. ((laughter)) For cross-input signature aggregation, what’s the better choice– Bellare-Neven or musig signatures? I think the answer is that in theory there is no difference. What musig does is that it starts from the security assumptions and attack model of Bellare-Neven but reintroduces the ability to do key aggregation. Bellare-Neven is a generalization of Schnorr signatures where you have multiple parties that sign a message. But the result does not look like a signature for any specific given key. And for cross-input aggregation, this is fine because all the public keys are known to the verifier anyway. In practice, I think the answer is Bellare-Neven because it’s older and it’s peer reviewed. There are a few advantages: it’s an often-cited construction, while musig is very new. In theory, they should both work.

jrubin: Musig… hi Pieter. Musig lets us non-interactively combine any m-of-n. And interactively using related constructions we could do any monotone k-of-n.

A: The signing is always interactive, but the key setup is non-interactive for musig.

jrubin: For key setup, can I take an interactively generated key, and put that in into an n-of-n musig?

A: I see no reason why not. The procedure for doing the k-of-n is where you have a number of parties that each come up with their own key and then split those into shares, distribute the shares among them, but really the sum of all the individual keys is the key the signature will look like under and you can put that key into a musig construction again. Also, and we don’t have a security proof for this, you can do musig in musig in musig and things like that.

Q: .. one of the big use cases for Schnorr signatures was in the area of blind signatures, the u-prove brandian signatures. Are there any insights or thoughts if other people have used this construction for blind signatures or if you have any thoughts there?

A: I know there is interest in doing blind and partially blind signatures on top of Schnorr signatures. I’m not an expert in that topic.

roasbeef: What are the implications of scripted batch validation? Are there any modifications needed? A year ago, they were talking about checksig verifies or something?

A: The question was whether any changes are needed for the scripting system to support batch validation. Generally, yes, because you cannot have signature checks that fail but are still permitted. Specifically, right now you could write a script that is “public key of maaku, checksig”, which means anyone can take this money except maaku. But this is of course nonsensical; there is no reason why you would want to write such a script, because if you don’t permit this key to spend then just don’t sign with it, and it’s always bypassable. But that construction is a problem for batch validation, because in batch validation you run all your scripts, out come the public keys, then you do your overall check. What if maaku did sign? Well, now the block is invalid. So we need to use a model where it is, I guess, statically known at execution time whether any check will succeed or fail. An easy way of doing this is only allowing the empty signature as a failing one, where you are not allowed to provide an invalid signature, but you can provide an empty one and then things still work. Generally, the verify type of opcodes, only use… oh, and another problem with this is that the current checkmultisig execution, say you want to do 2-of-5, is going to try various combinations of which key matches which signature, because the current language doesn’t let you say which one matches which. That is not compatible with batch validation either, but I think it could be improved by requiring telling the opcode which signature corresponds to which key, because it’s also a waste of execution time to spend validating all the possibilities.

roasbeef: Wouldn’t the new versions have better properties like lower cost and higher efficiency?

A: For cross-input aggregation, I hope that the incentives to adopt it will be self-sufficient, as in people will want to adopt it simply because it’s cheaper to use. Before that point, yeah. I think the reality is regardless of which changes are proposed, adoption of these things takes a long time and that’s the reality we face and that’s fine. We’re aiming for long-term anyway.

Q: Have you looked at BLS signature schemes?

A: The extent to which these aggregation constructions can be done with pairing-based cryptography is far better than anything that can be done with elliptic curves. It’s very appealing to work on these things. In particular, there’s some recent work by Dan Boneh and Neven and others where they permit even multisignatures where you can have multiple keys that sign, and it’s just an elliptic curve operation per key and then a pairing operation for the whole thing, rather than a pairing operation for each individual one. I also think that getting something adopted in the bitcoin ecosystem is much harder if you need to first introduce new cryptographic assumptions such as pairing. The counterargument to this is: well, why don’t you propose having the option of doing both? The problem with that is, again, that it reduces your anonymity set. You’re now forcing people to make a choice between “oh I want to perhaps have the–” what is the advice that you give to people about which of the two they should be using? I think it’s very appealing to look at BLS, and we should look into it, but I think it’s something for the longer term.

Q: For atomic swaps, how can you use different curves? Could you go over that?

A: I believe it involves a proof of the discrete logarithm equivalence between the two. There’s an existing construction that you can use to say the ratio between two points is equal to the ratio between two points in another curve and when you plug that in, it works out. That’s the best that I can explain it for now.

Q: You mentioned that when you’re looking at these different elements you’ve talked about, there are tradeoffs in engineering and you can’t do everything at once. Is this engineering time, or is this consensus persuasion?

A: I think it’s all of those things. A first step in getting changes like these adopted is convincing the technical community that these changes are worthwhile and safe. If that doesn’t work, then you’re already at a dead end I think. The engineering and review time in convincing a group of technical people is a first step. But then yes, the real bigger picture is just a concern that is a much harder to get anything adopted at all.

\ No newline at end of file +https://www.youtube.com/watch?v=YSUVRj8iznU

https://twitter.com/kanzure/status/1021880538020368385

slides: https://prezi.com/view/YkJwE7LYJzAzJw9g1bWV/

Introduction

I am Pieter Wuille. I will just dive in. Today I want to talk about improvements to the bitcoin scripting language. There is feedback in the microphone. Okay.

I will mostly be talking about improvements to the bitcoin scripting language. This is by no means an exhaustive list of all the things that are going on. It’s more my personal focus in the things that I am interested in and excited about seeing happen.

I will first start with some general ideas about how we should think about a scripting language, then go into two kinds of topics. In particular, signatures and structure of the bitcoin scripting system and changes to that. And then conclude with some remarks about how this can be brought into production which is a non-trivial thing.

Script system goals

Much of what’s going on is that I want to convince you that there are many things going on and that we need to prioritize in how we work on these things because there are engineering tradeoffs. I’ll get to that in the end.

First of all, we should think about bitcoin scripting language as a way for specifying conditions for spending a transaction output. Bitcoin uses a UTXO model. It has significant advantages in terms of privacy. Really everything that our scripting language does is specifying under what conditions can you spend an output. We want to do that under various constraints. One of them is privacy. You don’t really want to reveal to the world who you are or really what you were doing, which is contradictory because we want a scripting language where you don’t say what you want to do. You also want to be space efficient because space on the blockchain is expensive and there’s a price to it so everyone is incentivized to keep things as small as possible. And of course there’s the computational efficiency- we want things that are efficient to verify so that it’s easy to run full nodes to fully audit the system.

All of these concerns are related, not perfectly, but generally, if you try to reveal less about what you’re doing then you will be saying less and using less storage and as a result there will be less data to verify. All of these things generally go hand-in-hand.

An important concern here is to not think about a scripting language as a programming language that describes execution. We’re working in a strange model where any time a transaction is included in the blockchain, every node in the world forever will be validating that transaction, executing the same steps and coming to the same conclusion. We don’t want it to compute anything. I already know the outcome- I don’t need 100,000 computers in the world to replicate this behavior. The only thing I am trying to do is convince them that the thing I’m doing is authorized. It’s really about not so much computing things, but more verifying that a computation was done correctly.

This has many similarities with proof systems. In the extreme, we can aim for a zero-knowledge proof system really where all you say is I have some condition and its hash was X and here is a proof that this condition was satisfied and nothing else. Unfortunately there are computational and other difficulties right now. I think that should be our ultimate goal, for building things where we’re looking at things that way.

Signature system improvements

Regarding improvements, I want to talk about three things. One is Schnorr signatures. Some of you may have seen that I recently a couple days ago published a draft BIP for incorporating Schnorr signatures into bitcoin. I’ll talk a bit about that.

I will also be talking about signature aggregation (or here), in particular aggregation across multiple inputs in a transaction. There are really two separate things here, I believe: there's the signature system and then its integration into bitcoin, and we should talk about them separately. A lot of the interest in the media on this topic easily conflates the two issues.

Third, I also want to talk about SIGHASH_NOINPUT and signature hashing. There’s a number of developments there as well.

Schnorr signatures

https://www.youtube.com/watch?v=YSUVRj8iznU&t=5m50s

Schnorr signatures are the prototypical digital signature system that has been around for a long time, building on the discrete logarithm problem. This is the same security assumption as ECDSA has. The two have an interesting history where DSA, the predecessor to ECDSA was created as a way to avoid the patents that Schnorr had on his signature scheme. The Schnorr signature scheme is better in almost every way. There’s no reason not to use it, apart from the fact that due to it being patented until 2008, standardization focused mostly on ECDSA and we should see if that can be improved.

In particular, I’m not going into the real details of the verification equation. I’m just putting them on the slide here to show that they are very similar in structure. It’s just re-arranging the variables a little bit and putting a hash in there. The interesting thing is that Schnorr signatures have a security proof: we know that if the discrete log problem of our group is hard in the random oracle model, then there is a proof that the Schnorr signatures cannot be forged. There is no such proof for ECDSA. This is a nice-to-have.

  • Schnorr is sG = R + H(R,P,m)*P
  • ECDSA is sR = mG + rP
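
To make the equations above concrete, here is a minimal, illustrative sketch in Python of Schnorr signing and verification over secp256k1. This is toy code following the textbook equation sG = R + H(R,P,m)*P, not the BIP draft itself (which additionally uses x-only keys, tagged hashes and deterministic nonces); all helper names are made up for illustration, and real applications should use a hardened library such as libsecp256k1.

    # Toy Schnorr over secp256k1: illustrative only, not constant time, no
    # input validation, and not the exact encoding or hashing of the BIP draft.
    import hashlib, secrets

    P = 2**256 - 2**32 - 977                       # field prime
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def point_add(a, b):
        if a is None: return b
        if b is None: return a
        if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None    # opposite points
        if a == b:
            lam = (3 * a[0] * a[0]) * pow(2 * a[1], P - 2, P) % P   # tangent slope
        else:
            lam = (b[1] - a[1]) * pow(b[0] - a[0], P - 2, P) % P    # chord slope
        x = (lam * lam - a[0] - b[0]) % P
        return (x, (lam * (a[0] - x) - a[1]) % P)

    def point_mul(pt, k):                           # double-and-add scalar multiplication
        result = None
        while k:
            if k & 1: result = point_add(result, pt)
            pt = point_add(pt, pt)
            k >>= 1
        return result

    def enc(pt):                                    # naive 64-byte point encoding
        return pt[0].to_bytes(32, "big") + pt[1].to_bytes(32, "big")

    def challenge(R, pub, msg):                     # e = H(R, P, m)
        return int.from_bytes(hashlib.sha256(enc(R) + enc(pub) + msg).digest(), "big") % N

    def sign(x, msg):
        pub = point_mul(G, x)
        k = 1 + secrets.randbelow(N - 1)            # random nonce (the BIP derives it deterministically)
        R = point_mul(G, k)
        e = challenge(R, pub, msg)
        return (R, (k + e * x) % N)                 # signature (R, s)

    def verify(pub, msg, sig):
        R, s = sig
        e = challenge(R, pub, msg)
        return point_mul(G, s) == point_add(R, point_mul(pub, e))  # s*G == R + e*P

    x = 1 + secrets.randbelow(N - 1)
    assert verify(point_mul(G, x), b"hello", sign(x, b"hello"))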

As we are potentially introducing a new signature scheme anyway, we have the opportunity to make a number of changes really unrelated to the theoretical foundations of the signature scheme. We can get rid of the dumb DER encoding from ECDSA signatures. There's some six, seven bytes of overhead for just stuff like "there's an integer that follows that is this long", and we really just want 64 bytes, so let's just use 64 bytes.

Batch verification

Another thing we want, and we focused on this too, is batch verifiability, which means the ability to take a number of triplets of (public key, message, signature), throw them all at a verification algorithm, and have the algorithm tell you whether all of them are valid or not all of them are valid. You wouldn't learn where the faults are, but generally during block validation we don't care about that. We only care whether the block is valid or not. This seems like an almost perfect match for what we want to do. This batch verifiability is an interesting property that we want to maintain.
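
As a sketch of what that batch check could look like, here is a toy batch verifier that reuses the point_add, point_mul, enc, challenge, sign, G and N helpers from the Schnorr sketch above. The random per-signature weights are what keep one invalid signature from cancelling against another; real implementations additionally use fast multi-scalar multiplication, which is where the actual speedup comes from.

    # Toy all-or-nothing batch verification (reuses helpers from the sketch above).
    import secrets

    def batch_verify(triples):
        # triples: list of (pubkey_point, message_bytes, (R, s)).
        # Checks sum(a_i*s_i)*G == sum(a_i*R_i) + sum(a_i*e_i*P_i) for random a_i.
        lhs = 0
        rhs = None
        for pub, msg, (R, s) in triples:
            a = 1 + secrets.randbelow(N - 1)        # random weight per signature
            e = challenge(R, pub, msg)
            lhs = (lhs + a * s) % N
            rhs = point_add(rhs, point_mul(R, a))
            rhs = point_add(rhs, point_mul(pub, a * e % N))
        return point_mul(G, lhs) == rhs

    x1, x2 = (1 + secrets.randbelow(N - 1) for _ in range(2))
    sigs = [(point_mul(G, x), m, sign(x, m)) for x, m in ((x1, b"tx one"), (x2, b"tx two"))]
    assert batch_verify(sigs)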

Schnorr signature BIP draft

A few days ago, I published a Schnorr signature BIP draft which was the combined work of a number of people including Greg Maxwell. And many other people who are listed in the BIP draft. This BIP accomplishes all of those goals. It’s really just a signature scheme, it doesn’t talk about how we might go about integrating that into bitcoin. I’m going to talk about my ideas about that integration problem later.

Schnorr signature properties

One of the most interesting properties that Schnorr signatures have, and the reason why we looked into them in the first place, is the fact that they are linear. The linearity property is that, roughly speaking (it's a bit more complex than this), you can take multiple Schnorr signatures by different keys on the same message and add them together in a certain way, and the result is a valid signature for the sum of those keys. This is a remarkable property which is the basis for all the cool things we want to do on top of them. The most obvious one is that we can change how many of our multisignature policies work. In bitcoin, you have the ability to have k-of-n signatures. There's a built-in construction to do this. Especially when it's n-of-n, where you have a group of people and you require all of them to sign before an output can be spent, that reduces with Schnorr signatures to a single signature. Roughly speaking, you just send money to the sum of their keys, and now all of them need to sign in order to be able to spend with this. It's a bit more complex than that, please don't do exactly that, there will be specifications for how to deal with this.
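
A tiny sketch of that linearity, again reusing the toy helpers from the Schnorr sketch above. This is exactly the naive "send to the sum of the keys" construction; as just noted, don't do exactly this in practice (protocols such as the musig construction described below add protections, for example against rogue-key attacks), so treat it purely as an illustration of the algebra.

    # Naive key aggregation: the sum of two private keys signs for the sum of
    # the two public keys. Illustration only; vulnerable to rogue-key attacks.
    import secrets

    x1 = 1 + secrets.randbelow(N - 1); P1 = point_mul(G, x1)
    x2 = 1 + secrets.randbelow(N - 1); P2 = point_mul(G, x2)

    P_joint = point_add(P1, P2)            # "send money to the sum of their keys"
    x_joint = (x1 + x2) % N                # known to nobody unless both cooperate;
                                           # a real protocol signs interactively instead
    assert point_mul(G, x_joint) == P_joint
    assert verify(P_joint, b"joint spend", sign(x_joint, b"joint spend"))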

Musig

https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures.html

https://eprint.iacr.org/2018/068

There are more advanced protocols that do even more with Schnorr signatures, where you can implement any k-of-n policy or even further any monotone boolean function over different keys, such as “this key and this key and that key, or that key and this key and that key or that key” basically anything you can build with ANDs or ORs over keys is doable, but the protocol for doing so is tricky.

The downside is that these things are not accountable. If you use a protocol like this, with a 2-of-3 multisig, you cannot tell from the signature which two of the three signers produced it. There are other constructions for doing this, though.

Also there’s an interactive key setup protocol for anything other than n-of-n. You really need the different signers to run a protocol among themselves before they can spend. For just n-of-n, we came up with a construction called musig. It was coauthored by Andrew Poelstra, Greg Maxwell, Yannick Seurin and myself. It’s a construction for doing this non-interactively and take keys, combine them, and then you can send to them.

Adaptor signatures

Another advantage of Schnorr signatures, and it's one of the more exciting things in this space, are adaptor signatures, which are a way of implementing atomic swaps that look completely like ordinary payments and can't be linked together. Roughly how they work is: you lock up your funds on both sides, say you have two assets on two different chains or on the same chain, you lock up both funds into a 2-of-2 multisig, and then you each produce a damaged signature where you prove to the other party that the amount you damaged these signatures by is equal in both cases. Then as soon as you take the money, you reveal the real signature on one side; they compute the difference between the damaged and real one, apply the same difference to the other side, and take the money. Your taking the money on one side is in fact what reveals the secret and gives the other party the ability to take theirs. There's a recent paper that described how to use this to build a payment channel system with good privacy properties.

Cross-input signature aggregation

Cross-input signature aggregation comes from the fact that the Schnorr signature construction, where you can sign the same message with multiple keys, can be generalized to having multiple different messages signed by different people and still have just a single signature. The ability to do so would in theory allow us to reduce the total number of signatures in a transaction to just one. This has been the initial drive for looking into Schnorr signatures and it's such an awesome win. There are many complications in implementing this, it turns out, but this is the goal that we want to get to. It has an impact on how we validate transactions. Right now, for every input you just run the scripts and out comes TRUE or FALSE, and if there's a FALSE then the transaction is invalid. This needs to be changed to a model where script validation returns TRUE or FALSE and also returns a list of public keys, the set of keys that must still sign for that input, and then we need a single signature check that is transaction-wide rather than per-input.

Another complication is soft-fork compatibility. The complication is that when you want different versions of the software to validate the same set of inputs, and there is only a single signature, you must make sure that both versions understand that this signature is about the same set of keys. If they disagree about the set of keys or about who has to sign, then that would be bad. Any new feature that gets added to the scripting language that changes the set of signers is inherently incompatible with aggregation. This is solvable, but it's something to take into account, and it interacts with many things.

Sighash modes

Another new development is thinking about new sighash modes. When I'm signing for a transaction, what am I really signing? This has traditionally been a modified version of the transaction with certain values blanked out, permitting you to choose certain changes that can still be made to the transaction. There's the anyonecanpay flag, for example. Instead of signing the full transaction, the anyonecanpay flag causes you to sign the transaction as if it only had a single input, which is the one you're putting in. This is for example useful for crowdfunding-like constructions where you say "I want to pay, but only when enough other people chip in to make this amount match, and I don't want my signature to be dependent on them" (mutual assurance contracts). A number of interesting constructions have come up with these, but there are really only six modes. So we have to wonder, is this the best way of dealing with this?
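
As a purely conceptual sketch (hypothetical field names, nothing like Bitcoin's actual sighash serialization), this is roughly which parts of a transaction the signature commits to; ALL/NONE/SINGLE crossed with anyonecanpay gives the six existing modes mentioned above.

    # Conceptual only: what the signature commits to under the legacy sighash modes.
    def committed_data(tx, input_index, mode="ALL", anyonecanpay=False):
        inputs = [tx["inputs"][input_index]] if anyonecanpay else tx["inputs"]
        if mode == "ALL":                   # commit to all outputs
            outputs = tx["outputs"]
        elif mode == "SINGLE":              # commit only to the output at the same index
            outputs = [tx["outputs"][input_index]]
        else:                               # "NONE": commit to no outputs at all
            outputs = []
        return {"inputs": inputs, "outputs": outputs}

    tx = {"inputs": ["my pledge", "someone else's pledge"], "outputs": ["crowdfund target"]}
    # The crowdfunding example: sign only your own input, let others add theirs.
    assert committed_data(tx, 0, "ALL", anyonecanpay=True) == \
        {"inputs": ["my pledge"], "outputs": ["crowdfund target"]}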

SIGHASH_NOINPUT

Recently there has been a proposal by cdecker for SIGHASH_NOINPUT where you sign the scripts and not the txids. The scary downside of this construction is that such signatures are replayable. I pay to an address, you spend using SIGHASH_NOINPUT, and then if someone else for whatever reason sends to the same address, the receiver of that first spend can take the first signature, put it in a new transaction, and take the new coins that were sent there. So this is something that should only be used in certain applications that can make sure this is not a problem.

eltoo

It also has some pretty interesting advantages; one of them is the eltoo proposal that those guys in the back know more about than I do. My general understanding is that this permits payment channels that are no longer punitive. If someone broadcasts an old state and tries to revert the state of a channel where they sent money to you, currently in lightning-like constructions you get the ability to take their coins, but you have to be watching. With eltoo, this can be changed to: they can still broadcast an old state and you can update and broadcast a newer state on top of it, so it makes it into something that always does the right thing rather than needing to rely on incentives.

Then there are thoughts (regarding sighash modes) about delegation or the ability to verify signatures from the stack. There’s many kinds of thinking. I’m not going to go into these things because I haven’t spent much time myself on it.

Script structural improvements

Thinking about the structure of scripts… especially following that model of thinking about script as verification rather than execution, thinking less about it as a programming language and more as a verification system, I'm going to go through the different steps and try to explain taproot and graftroot as the final step.

Pay-to-scripthash (P2SH)

We have to start with bip16 P2SH, which was a change that was made as a soft-fork in 2012. Initially the program, the script, was put into the output itself, which meant the sender had to know what your script was. So this was changed to something where instead of putting the script itself in the output, you put the hash of the script. When spending it, you reveal that this hash was really this script, and now you can run it, and here are the inputs to it. This had a number of advantages that at the time we might now take for granted. All of the outputs look identical, apart from the single key ones where we're still using p2pkh (pay-to-pubkeyhash). The sender doesn't need to care about what your policy is: if you happen to use a hardware wallet or some escrow service to protect your coins, then the sender shouldn't need to know about that. By turning it into a hash, that's accomplished. And also, because bitcoin nodes need to maintain the full UTXO set between the time the output is created and spent, they only need to know the hash, not the full script. But you still reveal the full script when spending, which is not that great for privacy. What can we do better?

Merkleized abstract syntax tree (MAST)

One of the ideas that has been around for a while is merkleized abstract syntax trees, as it was called originally. According to Russell O’Connor, it’s not what we should be talking about when we talk about merkle branches today. The observation is that most scripts that you see in practice are something that is just this junction of a number of possibilities. You can spend if A and B sign, or if C has signed and some time has passed, or D and A has signed and some hash has been revealed. Pretty much everything we have seen to date is some combination of these things. It’s unfortunate that we have to reveal all possibilities. Any time you want to spend anything, you have to reveal the complete script. The observation is that you can instead build a merkle tree, a hash tree where you pairwise combine different scripts together, and then in your output you do not put the script or the hash of the script, you put the merkle root of all the possibilities you want to permit spending. At spending time, you reveal the script, you reveal the path along the merkle tree to prove that the output really contained that script and the inputs to that script. This has log(n) size in the number of possibilities, and you only need to reveal the actually taken branch. This is an interesting idea that has been around for a while. There have been a number of proposals in particular by Mark Friedenbach (bip116 and bip117) and Johnson Lau (bip114) who have worked on ideas around this.
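
As a rough illustration of the mechanism (a toy encoding, not any deployed or proposed Bitcoin format; the "leaf:"/"node:" prefixes are made-up stand-ins for proper domain separation), here is a sketch of committing to several scripts with a merkle root and then proving membership of the one branch actually taken.

    import hashlib

    def H(data):
        return hashlib.sha256(data).digest()

    def merkle_root_and_proof(leaves, index):
        """Return (root, proof) where proof is a list of (sibling_hash, sibling_is_right)."""
        layer = [H(b"leaf:" + leaf) for leaf in leaves]
        proof = []
        while len(layer) > 1:
            if len(layer) % 2:
                layer.append(layer[-1])            # duplicate last node on odd layers
            sibling = index ^ 1
            proof.append((layer[sibling], sibling > index))
            layer = [H(b"node:" + layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
            index //= 2
        return layer[0], proof

    def verify_branch(root, leaf, proof):
        h = H(b"leaf:" + leaf)
        for sibling, sibling_is_right in proof:
            h = H(b"node:" + h + sibling) if sibling_is_right else H(b"node:" + sibling + h)
        return h == root

    scripts = [b"A and B sign", b"C signs after timeout", b"D and A sign, hash revealed"]
    root, proof = merkle_root_and_proof(scripts, 1)
    # The spender reveals only scripts[1] plus a log(n)-sized path, not the other branches.
    assert verify_branch(root, scripts[1], proof)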

Merkle trees with unanimity branch

I want to make an intermediate step here, where I want to go into what is a ‘unanimity branch’. The observation is that in almost all interactions between different parties that want to participate in a script or smart contract, it's fine, not necessarily required, but fine to have a branch that is "everyone agrees". I'm modeling that here by adding an additional branch to my merkle tree. Due to Schnorr multisig, we can have a single key that represents a collection of signers that all agree. To explain this, I want to go into an abstraction for the blockchain called "the court model", which is that we can think about the blockchain as a perfectly fair court that will always rule according to whatever was agreed to in the contract. However, the court only has limited capacity. In the real world, hardly any disputes between two parties ever get sent to a jury or a judge in the judicial system. The observation is that having the ability to go to a court is sufficient in many cases to make people behave honestly even if they don't actually go to the court. There's a similarity here with the blockchain, because knowing that you have the ability to publish the full script and have whatever the agreed upon contract was executed is enough to make people say "well, you know, we all agree, we can spend the money, there's no need to actually present the entire script". You could think about this as settling out of court where you just say "hi judge, we agree to settle this". That's how you can represent this as a single "everyone agrees" key in this merkle tree.

Taproot

However, if you think about the scenario here: we generally expect everyone to agree, yet what we still have to publish on the chain is our key and the top-right branch hash. That's an additional 64 bytes that need to be revealed just for this super common case that hopefully will be taken all the time. Can we do better? That is where taproot comes in. It's something that Greg Maxwell came up with.

Taproot gives the ability to say– it’s based on the idea that we can use a construction called pay-to-contract which was originally invented by Timo Hanke in 2013 I think, to tweak a public key with a script using the equation there on the screen.

It has a number of properties. Namely, if I know the original public key and I know the scripts, then I can compute the tweaked public key. If I know the original secret key and I know the public key, then I can compute the secret key corresponding to the tweaked public key. And if you have a tweaked public key, then you cannot come up with another original key and script that has the same tweak. It works like a cryptographic commitment.

We can use this to combine pay-to-scripthash and pay-to-pubkey into a single thing. We make a script output a public key that is tweaked by some script. You are permitted to spend either using the public key or the script. But what goes on in the chain in the scriptpubkey is just a public key. The way you spend the key path is just by signing with it, because you knew the original private key if you were authorized to spend, you know the script, so you can compute the modified secret key and just sign with it. If you want to use the script path, if you want to use the fallback strategy and run the full contract, you reveal the original public key and the script and everyone can verify that this tweaked public key from the chain matches that data indeed.
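
Here is a small sketch of that tweak, reusing the toy EC helpers and the sign/verify functions from the Schnorr sketch earlier in this talk. The hash used here is plain sha256 for illustration only; the concrete commitment scheme in an actual proposal would differ in its details.

    # Toy pay-to-contract / taproot-style tweak (reuses helpers from the Schnorr sketch).
    import hashlib, secrets

    script = b"<fallback script>"
    x = 1 + secrets.randbelow(N - 1)
    pub = point_mul(G, x)                               # inner key (possibly an aggregate)
    t = int.from_bytes(hashlib.sha256(enc(pub) + script).digest(), "big") % N
    Q = point_add(pub, point_mul(G, t))                 # output key: Q = P + H(P||script)*G

    # Key-path spend: whoever holds x and knows the script also holds the key for Q.
    q = (x + t) % N
    assert point_mul(G, q) == Q
    assert verify(Q, b"spending tx digest", sign(q, b"spending tx digest"))

    # Script-path spend: reveal P and the script; anyone can recheck the commitment.
    t_check = int.from_bytes(hashlib.sha256(enc(pub) + script).digest(), "big") % N
    assert point_add(pub, point_mul(G, t_check)) == Q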

The awesome part of this is that what goes on the chain in the case of a spend through the key path is just a signature. You don't reveal what the script was, and you don't even reveal there was a script in the first place. The unanimity branch that we assumed exists in every contract, the "everyone agrees" case, just becomes a single signature in taproot. We have all outputs looking identical, and all collaborative spends also looking identical, which is an enormous win for privacy.

Taproot also interacts with adaptor signatures. For those you generally need an escape valve in the case of a timeout. As I explained before, you both put your money in the 2-of-2 multisig, but you wouldn't want one party to just say "nah, I'm not going to take your money" and now everything is locked up. You want a fallback so that after some time everyone can take their money back. With something like taproot, that still remains a single public key, where you just hide the timeout script in a commitment to the public key. In the case that everything goes as planned, the only thing that touches the blockchain is a single signature and not the timeout script.

Graftroot

If we start from this assumption that there exists a single key that represents the “everyone agrees” case, then we can actually do better and make use of delegation.

Delegation means we’re now going to permit spending, by saying “I have a signature with this taproot key (the key that represents everyone involved)”, revealing a script, revealing a signature with a key on that script, and the inputs to it. There’s a group of participants that represent the “everyone agrees” case, and they have the ability to delegate spending to other scripts and other particpants.

The advantage of graftroot over a merkle tree is that you can have as many spending paths as you want and they are all the same size. All you do is reveal a single signature. The downside here is that it is an inherently interactive key setup. If you are a party in one of these scripts s1, s2, or s3, you cannot spend without having been given the signature by the keys involved.

This may mean difficulty with backups, for example, because your money is lost if you lose the delegating signature.

Half-aggregation

There is another concept called half-aggregation ((unpublished improvements forthcoming)) that Tadge Dryja came up with that lets you half-aggregate signatures non-interactively together. It doesn’t turn them into an entire single thing but it turns them into half the size. With that, graftroot even for the most simple of cases is more efficient in terms of space than a merkle branch. But there are tradeoffs.

Taproot and graftroot in practice

In practice, my impression with all of the things going on is that there’s a lot of ideas and we cannot focus on everything at once. Even worse, if you don’t call this worse I guess, there are incentives to do everything at once. In particular, I have talked about various structures for how script execution should work but you need to commit upfront about what your structure should be. We need upgrade mechanisms, it doesn’t exist in a vacuum. Thanks to segwit, we have script versioning. But needing to reveal to the world that there’s a new feature and you need to use it for your script is itself a privacy leak and this is sort of unfortunate. This is an incentive to do everything at once so that you don’t need to introduce multiple versions.

Also, signature aggregation doesn’t work across soft-forks because we need to make sure that different versions of the software agree on what the keys are that get signed. Again, it’s an incentive to do more things at once. Any time a new change gets introduced, it cannot be aggregated together with the old things.

There are some engineering tradeoffs to be made here, I think, where you cannot let these incentives drive the development of a system where you need vast agreement in the community about the way to go forward. So my initial focus here is Schnorr signatures and taproot. The reason for this focus is that the ability to make every input and output in the cooperative case look identical is an enormous win for how script execution works. Schnorr is necessary for this because without it we cannot encode multiple parties into a single key. Having multiple branches in there is a relatively simple change. If you look at the consensus changes necessary for these things, it's really remarkably small, dozens of lines of code. It looks like a lot of the complexity is in explaining why these things are useful and how to use them, and not so much in the impact on the consensus rules. Things like aggregation, I think, are something that can be done after we have explored various options for structural improvements to the scripting language, once it's clear what the structuring should be, because we will probably learn from the deployments how these things get used in practice. That's what I'm working on with a number of collaborators and we'll hopefully be proposing something soon, and that's the end of my talk.

Q&A

https://www.youtube.com/watch?v=YSUVRj8iznU&t=38m38s

Christopher Allen: In the Schnorr BIP, you published a reference Python implementation that is not safe for production use. I'm already seeing a rust implementation, again not safe. I'm obviously enthusiastic about Schnorr signatures but I'm not finding good resources for what are the processes and methods by which to make…

A: Of doing this in a production ready correct way? Is that your question?

Christopher Allen: Yes. Do you have sources or your own plans?

A: We’re working on an implementation that does things correctly, which is constant time, and based on libsecp256k1. I’m rather surprised by how fast people have written implementations already. I believe it’s been 3 days. ((laughter)) That’s exciting, but it probably means we should hurry up with publishing an implementation. Ther’es more on top, because the BIP is single key Schnorr is what you would need to integrate into a consensus system. One of the advantages I talked about is all these multisig adaptor signature constructions and we will have a reference implementation for that.

Christopher Allen: What is the kind of rigorous list of things that ought to be done or ought to be checked, or where do people find out how to do this? I'm not finding good resources for how to do that.

Q: In the atomic swap case, do you need both chains to understand Schnorr signatures?

A: What?

gmaxwell: When doing atomic swaps across chains, do you need both chains to implement Schnorr signatures?

A: Ah, yes, that’s a good question. The adaptor signature based— the question is, and correct me if I’m wrong, is if you use an adaptor signature to do an atomic swap between two chains, then do both chains need to support Schnorr signatures? The answer is no. The construction works as long as one of them has Schnorr signatures. The reason is that– the exact same construction does not work between ECDSA and ECDSA. The reason is that you need to be able to lock things up into a 2-of-2 multisig which is a single atomic signature. Otherwise, the other party could just change their part of the signature and not reveal the piece of the damaged one they showed before. I believe in the paper (maybe this one?) that builds payment channels on top of these adaptor signature constructions, they actually come up with a way to do it between ECDSA and ECDSA. I don’t know how complex it is.

roasbeef: There’s zero-knowledge proofs.

gmaxwell: Andytoshi has a construction that allows you to use this across different elliptic curve groups. You could do this with secp curve and something using a bernstein curve.

roasbeef: It’s a lot heavier. If you use ECDSA, you encrypt a private key, they do a half-signature, then you decrypt it or something.

A: You project things into a completely different encryption system and then you map it down or something.

roasbeef: For cross-sig aggregation, musig or BN, which one is a better fit?

A: I will repeat the question slower. ((laughter)) For cross-input signature aggregation, what's the better choice: Bellare-Neven or musig signatures? I think the answer is that in theory there is no difference. What musig does is that it starts from the security assumptions and attack model of Bellare-Neven but reintroduces the ability to do key aggregation. Bellare-Neven is a generalization of Schnorr signatures where you have multiple parties that sign a message, but the result does not look like a signature for any specific given key. And for cross-input aggregation, this is fine because all the public keys are known to the verifier anyway. In practice, I think the answer is Bellare-Neven because it's older and peer reviewed; it has a few advantages there. It's an often-cited construction while musig is very new. In theory, they should both work.

jrubin: Musig… hi Pieter. Musig lets us non-interactively combine any m-of-n. And interactively using related constructions we could do any monotone k-of-n.

A: The signing is always interactive, but the key setup is non-interactive for musig.

jrubin: For key setup, can I take an interactively generated key, and put that in into an n-of-n musig?

A: I see no reason why not. The procedure for doing the k-of-n is where you have a number of parties that each come up with their own key and then split those into shares, distribute the shares among them, but really the sum of all the individual keys is the key the signature will look like under and you can put that key into a musig construction again. Also, and we don’t have a security proof for this, you can do musig in musig in musig and things like that.

Q: .. one of the big use cases for Schnorr signatures was in the area of blind signatures, the u-prove brandian signatures. Are there any insights or thoughts if other people have used this construction for blind signatures or if you have any thoughts there?

A: I know there is interest in doing blind and partially blind signatures on top of Schnorr signatures. I’m not an expert in that topic.

roasbeef: What are the implications of scripted batch validation? Are there any modifications needed? A year ago, they were talking about checksig verifies or something?

A: The question was whether any changes are needed for the scripting system to support batch validation. Generally, yes, because you cannot have signature checks that fail but are still permitted. Specifically, right now you could write a script that is "public key of maaku, checksig", which means anyone can take this money except maaku. But this is of course nonsensical; there is no reason why you would want to write such a script, because if you don't want to permit this key to spend then just don't sign with it, and it's always bypassable. But that construction is a problem for batch validation, because in batch validation you run all your scripts, out come the public keys, then you do your overall check. What if maaku did sign? Well, now the block is invalid. So we need to use a model where it is, I guess, statically known at execution time whether any check will succeed or fail. An easy way of doing this is only allowing the empty signature as a failing one: you are not allowed to provide an invalid signature, but you could leave it empty and then things still work. Generally, the verify type of opcodes, only use… oh, and another problem with this is the current checkmultisig execution. Say you want to do 2-of-5: it is going to try various combinations of which key matches which signature, because the current language doesn't let you say which one matches which. This is not compatible with batch validation either, but I think it could be improved by requiring the opcode to be told which signature corresponds to which key, because it's also a waste of execution time to spend validating all the possibilities.

roasbeef: Wouldn’t the new versions have better properties like lower cost and higher efficiency?

A: For cross-input aggregation, I hope that the incentives to adopt it will be self-sufficient, as in people will want to adopt it simply because it’s cheaper to use. Before that point, yeah. I think the reality is regardless of which changes are proposed, adoption of these things takes a long time and that’s the reality we face and that’s fine. We’re aiming for long-term anyway.

Q: Have you looked at BLS signature schemes?

A: The extent to which these aggregation constructions can be done with pairing-based cryptography is far better than anything that can be done with elliptic curves. It’s very appealing to work on these things. In particular, there’s some recent work by Dan Boneh and Neven and others where they permit even multisignatures where you can have multiple keys that sign and it’s just an elliptic curve operation per key and then a pairing operation for the whole thing rather than a pairing operation for each individual one. I also think that the difficulty of getting something adopted in the bitcoin ecosystem is much harder if you need to first introduce new cryptographic assumptions such as pairing. The counterargument to this is that well why don’t you propose having the option of doing both. The problem with that is, again, that it reduces your anonymity set. You’re now forcing people to make a choice between “oh I want to perhaps have the–” what is the advice that you give to people about which of the two they should be using? I think it’s very appealing to look at BLS, and we should look into it, but I think it’s something for the longer term.

Q: For atomic swaps, how can you use different curves? Could you go over that?

A: I believe it involves a proof of the discrete logarithm equivalence between the two. There’s an existing construction that you can use to say the ratio between two points is equal to the ratio between two points in another curve and when you plug that in, it works out. That’s the best that I can explain it for now.

Q: You mentioned that when you're looking at these different elements you've talked about, that there are tradeoffs in engineering and you can't do everything at once. Is this engineering time, or is this consensus persuasion?

A: I think it’s all of those things. A first step in getting changes like these adopted is convincing the technical community that these changes are worthwhile and safe. If that doesn’t work, then you’re already at a dead end I think. The engineering and review time in convincing a group of technical people is a first step. But then yes, the real bigger picture is just a concern that is a much harder to get anything adopted at all.


BIP Taproot and BIP Tapscript

Speakers: Pieter Wuille

Date: December 16, 2019

Transcript By: Bryan Bishop

Tags: Taproot, Tapscript

Category: Meetup

slides: https://prezi.com/view/AlXd19INd3isgt3SvW8g/

https://twitter.com/kanzure/status/1208438837845929987

https://twitter.com/SFBitcoinDevs/status/1206678306721894400

bip-taproot: https://github.com/sipa/bips/blob/bip-schnorr/bip-taproot.mediawiki

bip-tapscript: https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki

bip-schnorr: https://github.com/sipa/bips/blob/bip-schnorr/bip-schnorr.mediawiki

Please try to find seats. Today we have a special treat. We have Pieter here, who is well known as a contributor to Bitcoin Core, stackexchange, the mailing list, and he has been around forever. He’s the author of multiple BIPs. Today he is going to talk about taproot and tapscript which is basically what the whole Schnorr movement thing became. He’s probably just giving us an update and all the small details and everything else as well. We’d like to thank our sponsors: River Financial, Square Crypto and Digital Garage for making this possible.

Introduction

Thank you, Mark. My name is Pieter Wuille. I do bitcoin stuff. I work at Blockstream. Today I am going to give an update on the status of really three BIPs that a number of us have been working on for a while, at least 1.5 years. These BIPs are coauthored together with Anthony Towns, Jonas Nick, Tim Ruffing and many other people.

Over the past few weeks, Bitcoin Optech has organized structured taproot review sessions (news) (and workshops and workshop transcript and here) which has brought in attention and lots of comments from lots of people which have been very useful.

Of course, the original idea of taproot is due to Greg Maxwell who came up with it a year or two ago. Thanks to him as well. And all the other people have been involved in this, too.

I always make my slides at the very last minute. These people have not seen my slides. If there’s anything wrong on them, that’s on me.

Agenda

Okay, so what will this talk be about? I wanted to talk about the actual BIPs and the actual changes that we’re proposing to make to bitcoin to bring taproot, Schnorr signatures, and merkle trees, and a whole bunch of other things. I am mostly not going to talk about taproot as an abstract concept. I previously gave a talk about taproot here I think 1.5 years ago in July 2018. So if you want to know more about the history or the reasoning why we want this sort of thing, then please go have a look at that talk. Here, I am going to talk about a lot of the other things that were brought in that we realized we had to change along the way or that we could and should. I am going to try to justify those things.

I think we’re nearing the end of– we’re nearing the point where these BIPs are getting ready to be an actual proposal for bitcoin. But still feel free, if you have comments, then you’re more than welcome to post them on my github repository, on the mailing list, on IRC, or here in person to make them and I’m happy to answer any questions.

So during this talk, I am really going to go over step-by-step a whole bunch of small and less small details that were introduced. Feel free to raise your hand at any time if you have questions. As I said, I am not going to talk so much about taproot as a concept, but this might mean that the justification or rationale for things is not clear, so feel free to ask. Okay.

Design goals

Why do we want something like taproot? The reason is that we have realized it is possible to improve the privacy, efficiency and flexibility of the bitcoin script system, and doing so without changing the security assumptions.

Primarily what we’re focusing on in terms of privacy is something I’m calling “policy privacy”. There are many types of information that we leak on the network when we’re making transactions. Some of them on-chain, and some of them just during, like by revealing things on the p2p network. But this is only addressing one of them; namely the fact that when you create a script on-chain, you create an output that commits to a script and you spend it, you reveal what that script is to the network. That means that if tomorrow some wallet provider comes up with a fancy 4-of-7 multisig checklocktimeverify script and you start using it, then any transaction you’re doing when someone on the network sees a 4-of-7 multisig with checklocktimeverify then they can probably guess with high certainty that you are using that particular wallet provider.

We’re trying to address policy privacy, which is about not leaking the policy of the spendability conditions of your outputs to the network. The ideal that we’re trying to achieve here would be possible in theory with a recursive zero-knowledge proof construction where you show something like “I know the hash of a program and I know the inputs to that program that will satisfy it” but without revealing what either of those inputs or program are. There are good reasons not to do that. One is, all the generic zero-knowledge proof constructions that we could use are either very large, computationally expensive, rely on new security assumptions, need a trusted setup, all kinds of stuff that we’d really rather avoid at this stage. But things are progressing fast in that domain and I hope that at some point we’ll actually be able to do something moon mathy that completely hides the policy.

When working on proposals for bitcoin, I like to focus on things that are likely to be accepted. That's another reason to focus on things that don't change the security assumptions. Right now, bitcoin at the very least requires ECDSA, whose security relies on the hardness of the discrete logarithm problem over the secp256k1 group. It makes a whole bunch of assumptions about hash functions, both standard and non-standard ones. We're not changing those assumptions at all. In fact, we're reducing them. Schnorr signatures, which I'll show that we use, actually rely on fewer assumptions.

I think it’s important to point out that it’s not even– a question you could ask is, well, but it should be possible to say “optionally introduce a feature that people can use that changes the security assumptions”. Probably that is what we want to do at some point eventually, but even— if I don’t trust some new digital signature scheme that offers some new awesome features that we may want to use, and you do, so you use the wallet that uses it, effectively your coins become at risk and if I’m interacting with you… Eventually the whole ecosystem of bitcoin transactions is relying on…. say several million coins are somehow encumbered by security assumptions that I don’t trust. Then I probably won’t have faith in the currency anymore. What I’m trying to get at is that the security assumptions of the system are not something you can just choose and take. It must be something that really the whole ecosystem accepts. For that reason, I’m just focusing on not changing them at all, because that’s obviously the easiest thing to argue for. The result of this is that we end up exploring to the extent possible all the possible things that are possible with these, and we have discovered some pretty neat things along the way.

Over the past few years, a whole bunch of technologies and techniques have been invented that could be used to improve the efficiency, flexibility or privacy of bitcoin Script in some way. There’s merkle trees and MASTs, taproot, graftroot, generalized taproot (also known as groot). Then there’s some ideas about new opcodes, new sighash modes such as SIGHASH_NOINPUT, cross-input aggregation which is actually what started all of this… A problem is that there’s a tradeoff. The tradeoff is, on the one hand we want to have– it turns out that combining more things actually gives you better efficiency and privacy especially when we’re talking about policy privacy, say there’s a dozen possible things that interact and every two months there’s a new soft-fork that introduces some new feature…. clearly, you’re going to be revealing more to the network because you’re using script version 7 or something, and it added this new feature, and you must have had a reason to migrate to script version 7. This makes for an automatic incentive to combine things together. Also, the fact that probably not– people will not want to go through an upgrade changing their wallet logic every couple of months. When you introduce a change like this, you want to make it large enough that people are effectively incentivized to adopt it. On the other hand, putting everything all at once together becomes really complex, becomes hard to explain, and is just from an engineering perspective and a political perspective too, a really hard thing. “Here’s 300 pages of specification of a new thing, take it or leave it” is really not how you want to do things.

The balance we end up with is combining some things by focusing on just one thing, and its dependencies, bugfixes and extensions to it, but let’s not do everything at once. In particular, we avoid things that can be done independently. If we can argue that doing some feature as a new soft-fork independently is just as good as doing it at the same time, then we avoid it. As you’ll see, there’s a whole bunch of extension mechanisms that we prefer over adding features themselves. In particular, there will be a way to easily add new sighash modes, and as a result we don’t have to worry about having those integrated into the proposal right away. Also new opcodes; we’re not really adding new opcodes because there’s an extension mechanism that will easily let us do that later.

So the focus is on merkle trees, merkleized abstract syntax trees (MASTs), and taproot, and the things that are required to make those efficient. We'll look at what benefits those can give us, and make sure they're usable. In particular, this is a focus on things that are usable without interactive setup. Both merkle trees and taproot have the advantage that I can just get your public key, compute an address, and have people send to it, and none of you need to run to your safes, hardware wallets, vaults or anything else.

Another thing that is possible is graftroot, which can in some cases offer much better savings and complexity than taproot, but it has the disadvantage that you inherently need access to the private keys. That’s not a reason not to do that, but it’s a reason to avoid it in the first steps.

So let’s actually stop talking about abstract stuff and move on.

Taproot

What is taproot? Trying to make all output scripts and most spends indistinguishable. How are we going to do that? Instead of having separate concepts for pay-to-pubkey and pay-to-scripthash, we combine them into one and make every output both. Every output will be spendable by one key and zero or more scripts. We’re going to make it in such a way that spending with just a public key will be super efficient: it will only require a single signature on-chain. The downside is that spending with scripts will be slightly less efficient. It’s a very interesting tradeoff you can make where you can actually choose to make one branch more efficient than the others in the script.

What is the justification for doing so? With Schnorr signatures, one key can easily be an aggregate of multiple keys. With multiple keys, it's easy to say that most spends of fairly complex scripts on-chain could be replaced with a script branch of "everyone agrees". Clearly, if I take all the public keys involved in a script and they all agree, then that should be sufficient to spend, regardless of what the scripts actually are. It doesn't even need to be all of them anyway; say it's a 2-of-3 where 2 of the keys are online and one is offline: you make the efficient side the two keys that are online, and in 99% of the cases you'll be able to use that branch.

How are we constructing an output? On the slide, you can see s1, s2, and s3. Those correspond to three possible scripts that we want to spend with. Then we put them into a merkle tree. This is a very simple merkle tree of three elements where we compute m2 as the inner node over s2 and s3. Then m1 is the merkle root that combines it with s1. But then, as a layer on top of that, instead of using that merkle root directly in an output, we're going to use it to tweak the public key P. So P corresponds to the public key that might be an aggregate of multiple keys, but it's our happy path. We really hope that pretty much all the time we'll be able to spend with P alone. Our output becomes Q = P + H(P || merkle root)*G: take P, hash it together with the merkle root, multiply that hash by the generator, and add the result to P. The end result is a new public key that has been tweaked with the merkle root.

So we're going to introduce a new witness version. Segwit as defined in bip141 offered the possibility of having multiple script versions. So instead of using v0 as we've used so far, we define a new one which is script v1. Its program is not a hash; it is in fact the x-coordinate of that point Q. There are some interesting observations here. One, we just store the x-coordinate and not the y-coordinate. A common intuition that people have is that by dropping the y-coordinate we're actually reducing the key space in half, so people think well, maybe this is half a bit of reduction in security. It's easy to prove that this is in fact no reduction in security at all. The intuition is that if you have an algorithm to break a public key given just an x-coordinate, you would in fact always use it; you would even use it on public keys that also had a y-coordinate. So it is true that there's some structure in public keys, and we're exploiting that by just storing the x-coordinate, but that structure is always there and it can always be exploited. It's easy to actually prove this. Jonas Nick wrote a blog post about that not so long ago, which is an interesting read and gives a glimpse of the proofs that are used in this sort of thing.

As I said, we're defining witness v1. Other witness versions remain unencumbered, obviously, because we don't want to say anything yet about v2 and beyond. But also, we want to keep other lengths unencumbered. I believe this was a mistake we made in witness v0, which is only valid with a 20 or 32 byte hash: a 20 byte hash corresponds to a public key hash, and a 32 byte hash refers to a script hash. The space of witness versions and their programs is limited. It's sad that we've removed the possibility to use v0; there's only 16 versions. To avoid that, we leave other lengths unencumbered, but the downside is that this exposes us to – a couple of months ago, an issue was discovered in bech32 (bip173) where it is under some circumstances possible to insert characters into an address without invalidating it. I've posted on the mailing list a strategy and an analysis that shows how to fix that bech32 problem. It's unfortunate though that we're now becoming exposed by not doing this encumbrance.

As I said, the output is an x-coordinate directly. It's not a hash. This is perhaps the most controversial part, I expect, of the taproot proposal. There is a common thing that people say. They say, oh, bitcoin is quantum resistant because it hashes public keys. I think that statement is nonsense. There are several reasons why it's nonsense. First, it makes assumptions about how fast quantum computers are. Clearly when spending an output, you're revealing the public key. If within that time it can be attacked, then it can be attacked. Plus, there are several million bitcoin right now in outputs that actually have known public keys and can be spent with known public keys. There's no real reason to assume that number will go down. The reason for that is that really, any interesting use of the bitcoin protocol involves revealing public keys. If you're using lightning, you're revealing public keys. If you're using multisig, you're revealing public keys to your cosigners. If you're using various kinds of lite clients, they are sending public keys to their servers. It's just an unreasonable assumption that…. simply said, we cannot treat public keys as secret.

In this proposal, we exploit that by just putting the point directly in the outputs and this just saves us 32 bytes because now we don’t need to reveal both the hash and the outputs((??)). Yes, to save the bytes. Yeah.

Q: What is the reason for not putting the parity of the y-coordinate in there?

A: It’s just to save the bytes. A better justification is that it literally adds no security, so why should we waste a byte on it? Also note that because we’re not hashing things, this byte would go in the output directly where it is not witness discounted.

Also, a relatively recent change to bip-taproot and bip-tapscript is no P2SH support. The reasons for removing P2SH support are that P2SH only has 80 bits of collision resistance security. If you're jointly constructing a script with someone else, 80 bits of security is not something that we expect to hold up in the long term. Another reason is that P2SH is an inefficient use of the chain. It also reduces our ability to achieve the goal of making all outputs look identical, because if we have both P2SH outputs and non-P2SH outputs then that just gratuitously gives up bits of information or privacy. At this point, I think native segwit outputs and bech32 have been adopted sufficiently that we expect that by the time taproot gets rolled out, if and when it does, the stragglers at that point probably won't upgrade at all.

More BIP details

I told you that in taproot an output is a public key that we're tweaking with the merkle root of a tree whose leaves are scripts. Here is the big trick that makes taproot work. x here is the private key to P. Let's say P is a single participant; it works for multiple participants too, but it's easier to show this way. Q, which is P + H(P || merkle root)*G, is actually equal to (x + that hash)*G. In other words, the private key to Q equals x plus that hash. In other words, if you have the private key to P and you know the merkle root, then you know the private key to Q. In the proposal, it says it is possible to spend any taproot output by just giving a signature with that private key to Q. Yes, we just pick a fixed parity. At signing time, if the parity is wrong, you flip your private key just before signing. So literally the only thing that goes on-chain is a single signature, in the happy case. Using Schnorr signatures, that P can actually be an aggregate, and cooperative spends can just use that single key.

Another advantage of Schnorr signatures is that they are batch verifiable. If we have hundreds or hundreds of thousands of signatures, then we can verify them more efficiently than verifying each one individually. This scales approximately with n/log(n). If you have thousands of keys, then you easily get a factor of 3x, which is nice for initial block download where in theory you could batch verify over multiple blocks at once.

I lied though: I told you that the leaves of this merkle tree are scripts. They are actually not scripts. They are tuples of a version and a script, so (version, script). The reason for this is simply that we have 5 bits available in the header. We couldn't get rid of the bytes, there were 5 bits available, so why not use them as an extension mechanism and say for now we only define semantics for one of them, and if you have another one then for now it's treated as ANYONECANSPEND and it's unencumbered? So in a way you have the versioning number at the segwit level, which is an exposed version number because your transaction output reveals what that version number is. But this introduces a new subversion number which is per script. Every leaf can have a different version, which is nice for privacy because now you will only reveal – if you have 10 branches and only one branch needs a new feature implemented in a new version, then you only reveal that you're using that new version when you're actually revealing that particular script branch, which is really when you need it.

This is primarily useful for large re-designs of script. I don't think there's any such thing in the near or medium-term future, but I think it's good to have something where we can just swap things out. To make a clear distinction about where you would use a witness version number versus this leaf version number: I think the witness version is something that we want to use for changing the execution structure, like if we want to change the merkle tree or if we want to change things like cross-input aggregation or graftroot; those changes would need a new witness version and we can't put them into the leaf version number. When we want to replace the tree structure, use a new witness version number. If you just want to make some changes to script, you can use the leaf version number; in fact you might want to use another mechanism which I'll talk about later.

At this point, only one version is defined, c0. I'll get back to why that's not just 0 and why it's c0. This version we call tapscript. That is actually the second BIP, which describes the modifications to script as they apply to scripts under leaf version c0 in taproot.

Another advantage of having this second layer of versioning is that if, say, a later change introduces graftroot, which would need a new witness version, we might still want to reuse the leaf versions because there are no repercussions on script itself.

When talking about that merkle tree, this is something that Eric over there observed… we don’t actually care about where in the tree our script is, we only care about the fact that it is somewhere. It’s fairly annoying to come up with a bitwise encoding that tells you go left here, go right here, go left here, etc. So the idea is, before hashing, the two branches will be sorted lexicographically. Pick the lowest first, and now you don’t need to say which side you go on. You just reveal your leaf and you reveal the hashes to combine it with, and you’re done. This is also a very weak privacy improvement, because it automatically scrambles the tree order: if you change the public keys in it, or something, you’re not actually revealing where in your policy that position was. It’s not perfect, but it’s something of a privacy improvement.
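A minimal sketch of that sorting rule (plain sha256 is used here only for illustration; the proposal actually uses the tagged hashes described in the next section):

```python
from hashlib import sha256

def merkle_branch(left: bytes, right: bytes) -> bytes:
    # Sort the two child hashes lexicographically before hashing, so the
    # spender never has to encode left/right directions along the path.
    if right < left:
        left, right = right, left
    return sha256(left + right).digest()
```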

When spending through the script path, what do you need to do? You need to reveal the script, the leaf version, the inputs to that script, and a merkle branch that proves it was committed to by your root. How do we do that? On the witness stack, when spending an input, you give the inputs to the script, then the script itself (so far the same as P2SH), but a third element is added called a “control block” which contains marker bits, then the leaf version. It stores the sign of Q, and unfortunately we do need to reveal the sign of Q, otherwise the result is not batch verifiable: there are two possible equations and you’d get a combinatorial explosion if you tried to verify multiples at once, so we need one bit to convey the sign of Q, and then we need the merkle path.

What these marker bits are for, is that by setting the top two bits of the first byte of the last stack element in the witness stack, to 1, we have the remarkable property that we can detect taproot spends without having access to the inputs. I don’t know what it’s useful for, but it feels like there’s some static analysis advantage where I can look at a transaction and an input and say this is a taproot spend and I don’t need to go lookup the UTXO or anything to know that. The reason for this is that any such bytes at the end would imply a P2WSH spend with a disabled opcode… so clearly those can’t exist.

For verification, we compute S from the leaf and the script as the first hash, then compute the merkle root from that S together with the path you provided. Verify that equation, and again this is batch verifiable, together with the Schnorr signatures. They can all be thrown into one batch. Then execute the script with the input stack and then fail if it fails execution.

Another change is that in all… should I stop a moment and ask if there are questions so far about anything? No, all good?

Murch: If anyone has a question, just raise your hand and I’ll bring you a microphone. Okay, go on.

sipa: We’ll give it another five seconds.

Tagged hashes

Another thing that we’re proposing is that instead of just using single or double sha256 directly, we suggest using tagged hashes. The idea is that every hash is given a tag. The goal of this is domain separation. We really don’t want a hash used for computing a signature hash to ever collide with, or be reinterpretable as, a hash in a merkle tree, a hash used to derive a nonce, or a hash used to tweak the public key in taproot. An easy way to do that is by including a tag with every hash you compute. How we do this: you take your tag, which is an ASCII string, you hash it, you repeat that hash twice so it becomes 64 bytes, and that is a prefix that goes before the data you’re hashing yourself. Now, because 64 bytes is the block size of sha256, it means that for any given constant tag, sha256 with that tag actually just becomes sha256 with a different initialization vector. This costs nothing, because we precompute the new initialization vector after hashing the tag. Because nowhere in bitcoin today, to the best of my knowledge, is there a protocol that hashes two identical single sha256 hashes at the beginning of another sha256 hash, this should never collide with any existing usage either. The rationale here is that the bitcoin transaction merkle tree has actually had vulnerabilities in the past from not having such a separation. It uses the same algorithm for hashing the inner nodes and the leaves, and there’s a whole bunch of scary bugs that result from this, some more recent, but the earliest was 2012; you can look up the details. Anyway, these things are scary and we’re providing a simple standard to avoid this.
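A minimal sketch of that construction (the exact tag strings are the ones defined in the BIPs; the name used below is just an example):

```python
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    # sha256(sha256(tag) || sha256(tag) || data): the 64-byte prefix fills
    # exactly one sha256 block, so each tag is effectively sha256 with a
    # different, precomputable initial state.
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + data).digest()

# e.g. combining two sorted merkle nodes: tagged_hash("TapBranch", left + right)
```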

All the tags that we’re using: for the leaves, combining the script with the version number, it’s tapleaf; for the branches of the merkle tree, it’s tapbranch; for tweaking the public key, it’s taptweak. Then there’s one for sighashes, and inside the Schnorr signatures we use them as well. Many of these are very likely overkill, but I think it’s good practice to start doing this.

Not a balanced merkle tree

Another observation is that this merkle tree of all the scripts that you might want to spend a particular output with doesn’t need to be a balanced merkle tree. We’re never even necessarily materializing it. The verifier clearly doesn’t care whether it’s balanced, because he can’t even observe the whole tree; you just give him a path. There are good reasons why you may not want to make it balanced. In particular, if you have some branches that are far less likely than others, then you can put them further away and move the more likely ones further up. There’s actually a very simple algorithm for doing this, the Huffman tree construction used in simple compression algorithms: you make a list of all your possibilities together with their probabilities, repeatedly combine the two smallest into a node, and keep building until you have a single tree, which is in fact the optimal one. In the BIP today there’s a limit of 128 levels, which is clearly more than what is practical for a balanced tree. You’re clearly not going to construct things with billions of spending possibilities… it would be too hard to even compute the address; it could easily take seconds, minutes or more. But because of this unbalancing, we still want a limit for spam protection so that you can’t just dump kilobytes into your merkle tree. This is something to consider that may make things more efficient.
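A small sketch of the Huffman idea mentioned above (this is only an illustration of how to shape the tree, not BIP code; the leaf scripts and probabilities are placeholders):

```python
import heapq

def huffman_taptree(leaves):
    """Build an unbalanced script tree where likelier scripts sit closer to the
    root, so their revealed merkle paths are shorter.
    leaves: list of (probability, script). Returns a nested (left, right) tuple tree."""
    heap = [(prob, i, script) for i, (prob, script) in enumerate(leaves)]
    heapq.heapify(heap)
    next_id = len(heap)  # tiebreaker so tuples never compare the scripts themselves
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next_id, (a, b)))
        next_id += 1
    return heap[0][2]

# e.g. huffman_taptree([(0.90, "cooperative"), (0.09, "timeout"), (0.01, "revocation")])
```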

Annex

A final thing is something we’re calling the annex. Observe that today in bitcoin transactions you have a locktime and an nSequence value. The locktime has always been there. The nSequence value was useless because its original design didn’t actually work; its semantics were later changed in bip68 to turn it into a relative locktime. The question is, what if we want more of those things? What if we wanted a way to have an input that could only be spent in a chain whose blockhash at a certain height is a certain value? The obvious way would be to add something like another nLockTime-style field to the transaction, but there is no such field and we can’t easily add one, at least not without running into all sorts of complications. So what we’re doing is that the witness stack, when spending a taproot output, can optionally have a final element called an annex, which is just ignored. Right now there are no semantics associated with it, except that it is signed by all signatures: you can’t add or remove it, as long as your transaction has signatures. It’s also non-standard to use it. This would let you very elegantly say: if the first byte of the annex is this, then it’s given these certain semantics. Another use case, which I’ll get back to once I talk about resource limits in script, is that you could use an annex to declare an expensive opcode and pay for it upfront, because we may not want to add very expensive opcodes otherwise. I recognize that this might be the least obvious part of the presentation; is this clear?

Tapscript

This is the other BIP that describes the semantics when executing a script, specifically when the leaf version is c0. This is bip-tapscript. By now, the reason that it is c0 and not 0 is because of those marker bits that let you detect that it is a taproot spend even without the inputs.

We start from the script semantics as they were in bip141 segwit, but with a number of changes. The most obvious change is that CHECKSIG and CHECKSIGVERIFY now use Schnorr signatures instead of ECDSA signatures. The justification is simply that we want these to be batch verifiable, and therefore there’s no reason to support both. Another difference is that CHECKMULTISIG is no longer available. The reason is that with CHECKMULTISIG today you give it a list of public keys and then a list of signatures, and it finds which signature matches which public key. This retry behavior is not something you can batch verify. A possibility would be to just declare which key belongs to which signature, but it turns out that wasn’t actually much more efficient than not having an opcode for that at all. So instead, our proposal adds CHECKSIGADD, which is the same as CHECKSIG but it increments an accumulator with whether or not the signature check was successful. This lets you implement a CHECKMULTISIG as “key CHECKSIG key CHECKSIGADD key CHECKSIGADD 5 equal”, and this will do the same thing. It’s a few more opcode bytes, but for simple things you’ll probably want to turn it into a merkle tree where every branch is one combination of your multisig anyway, which is more efficient.
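To make the pattern concrete, here is a hypothetical 2-of-3 threshold spelled out the same way (placeholder keys, written as a Python list purely for readability; this example is not taken from the talk):

```python
# Each signature check adds 0 or 1 to an accumulator; the final comparison
# requires that exactly two of the three checks succeeded.
threshold_2_of_3 = [
    "<pubkey_a>", "OP_CHECKSIG",
    "<pubkey_b>", "OP_CHECKSIGADD",
    "<pubkey_c>", "OP_CHECKSIGADD",
    "OP_2", "OP_EQUAL",
]
```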

The next thing to discuss is OP_SUCCESS. In bitcoin script today, we have a whole bunch of NOP opcodes. These were probably added specifically with the intent of having an upgrade mechanism, so that we can easily add new opcodes to the bitcoin script language. So far they have been used twice, for checklocktimeverify and checksequenceverify. The problem with redefining these NOPs is that in order to be soft-fork compatible they can only do one of two things: abort, or not do anything at all. This is the reason why the CHECKLOCKTIMEVERIFY and CHECKSEQUENCEVERIFY opcodes today don’t pop their argument from the stack and you always need an OP_DROP after them: because they are redefinitions of a NOP, they cannot modify the stack in any way.

A solution to that is instead of having an opcode that doesn’t do anything (a NOP), have an opcode that just returns TRUE and that’s OP_SUCCESS. That might sound unsafe, but it’s exactly as unsafe as having new witness versions that are undefined. You won’t use them until you have a use for them, and you won’t use them until they have defined locked-in semantics on the network. We’re taking a whole swath of disabled and never-defined opcode numbers, and turning them into “return TRUE”. Later, these opcodes can be redefined to be anything, because everything is soft-fork compatible with “return TRUE”.

Bram: Is that “return TRUE” at parse time?

sipa: It is at parse time. There’s a preprocessing step where the script gets decoded before execution, and even if you have an OP_SUCCESS in an unexecuted IF branch, for example, it still means “return TRUE”. This means it can be used to introduce completely new semantics, it can change resource limits, it can even change the parse function in a way, if you say the first byte of your script becomes OP_SUCCESS, and so on. So this is just the most powerful way of doing it.

Of all the different upgrade mechanisms that are in this proposal, OP_SUCCESS is the one I don’t want to lose. The leaf versions can effectively be subsumed by OP_SUCCESS: just start your script with an opcode like OP_UPGRADE and now your script has completely new semantics. This is really powerful and should make it much easier to add new opcodes that do this or that.

Another thing is upgradeable pubkey types. The idea is that if a public key passed to CHECKSIG, CHECKSIGVERIFY or CHECKSIGADD is not the usual 32 bytes (not 33 bytes anymore; it’s 32 bytes because there too we’re just using the x-coordinate), then we treat that public key as an unknown public key type whose signature check automatically succeeds. This means that you can do things like introduce a new digital signature scheme without introducing new opcodes every time. Maybe more short-term, it’s also usable to introduce new signature hashing schemes, where otherwise you would have to say “oh, I have slightly different sighash semantics”, like SIGHASH_NOINPUT or ANYPREVOUT or whatever it’s called these days. Otherwise, introducing each of those would need three new opcodes every time. Using upgradeable public key types, this problem goes away.
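A minimal sketch of that dispatch; the actual Schnorr verification is passed in as a callback since it is not the point here, and the real rules in bip-tapscript have more cases (for example empty signatures):

```python
from typing import Callable

def checksig_taproot(pubkey: bytes, sig: bytes, sighash: bytes,
                     schnorr_verify: Callable[[bytes, bytes, bytes], bool]) -> bool:
    # Only the 32-byte x-only key type has defined semantics today; any other
    # length is an "unknown pubkey type" whose check simply succeeds, leaving
    # room for future signature or sighash schemes via soft fork.
    if len(pubkey) == 32:
        return schnorr_verify(pubkey, sighash, sig)
    return True
```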

Another is making “minimal IF” a consensus rule. “Minimal IF” is currently a standardness rule that has been in segwit forever and I haven’t seen anyone complain about it. It says that the input to an OP_IF or an OP_NOTIF in the scripting language has to be exactly TRUE or FALSE and it cannot be any other number or bytearray. It’s really hard to make non-malleable scripts without it. This is actually something that we stumbled upon when doing research on miniscript and tried to formalize what non-malleability in bitcoin script means, and we have to rely on “Minimal IF” and otherwise you get ridiculous scripts where you have two or three opcodes before every IF to guarantee they’re right. So that’s the justification, it’s always been there, we’re forced to rely on it, we better make it a consensus rule.

The last one is new code separator semantics. OP_CODESEPARATOR has been in bitcoin forever. It was probably originally intended as a delegation mechanism, because it restricted what part of the script was being signed by signatures, and you could actually implement a delegation mechanism with this. Unfortunately, in mid-2010, when the execution of the scriptSig and the scriptPubKey was split so that a CODESEPARATOR in one no longer influences the signatures in the other, that functionality was broken. So we looked at it and thought, well, what is it still useful for? There’s one thing: when you have multiple execution branches through your script, like IF THEN ELSE, and you want your signatures to commit to which branch you’re taking, then by putting a CODESEPARATOR in one of them you change what script they are signing, which indirectly lets you commit to what branch you’re taking. I don’t know if anyone is using this in production, but I’ve certainly heard of people thinking about using it. So we’re going to keep this part and drop the rest: signatures simply commit to the position of the last executed CODESEPARATOR.

Resource limits

Okay, resource limits. Bitcoin script today has a 10,000 byte script size limit, which we propose to drop. The reason is that it has no use: there is literally no data structure left in taproot whose size depends on the size of your script, nor any way in which execution time is more than proportional to your script size. Before, even in segwit, every signature hash contains the entire script being executed, which means that if you have a script of 10,000 bytes and they’re all CHECKSIGs, then you’re going to do 10,000 CHECKSIGs that each hash 10,000 bytes. This is a remaining version of the quadratic hashing problem: segwit left fewer quadratic hashing issues than the legacy system that came before it, but in taproot they’re all gone. We actually just pre-hash the script once, and then that hash gets included in the signature hash every time. So with that, we don’t need the limit anymore.

Also, the 201-non-push-ops limit, which is my favorite crazy thing in bitcoin: we got rid of that too. To show how crazy it is: right now in bitcoin script there is a rule that says the total number of non-push opcodes in your script can be at most 201. This counts executed and unexecuted opcodes. However, when you execute a CHECKMULTISIG, the number of public keys participating in that CHECKMULTISIG counts towards this limit, but only for CHECKMULTISIGs that are actually executed. I discovered this while working on miniscript, where I had a fuzzer constructing random scripts; miniscript would reason about whether a script was valid and hand it to Bitcoin Core to verify, and some didn’t pass, and it was because of this CHECKMULTISIG counting rule. There’s really no reason for this weird limit.

Lastly, we remove the block-wide 80,000-sigop limit and replace it with a rule of one sigop per 50 bytes of witness. This accomplishes a really cool thing: we get rid of multi-dimensional optimization. Right now, there are two independent resource limits for blocks: the weight and the sigops. In normal transactions the number of sigops is way below the ratio where this matters, but you can maliciously construct scripts that no miner will know how to optimize for correctly. This is very vaguely exploitable; it’s just an income reduction, right: a miner will not be able to find the optimal bin-packing solution given all these transactions with their fees, weights and sigops. But the downside is that if they tried to be smarter, fee estimation on the network would become harder. By removing the sigop limit and basically translating sigops into bytes, which is what this does (every 50 bytes of witness gives you a credit of one sigop), and given that a public key plus a signature already costs 96 bytes, there’s absolutely no reason why you would ever need to go over this limit. That way, there’s only a single limit left to optimize for, and things get a lot easier.
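A rough sketch of the shape of that rule (the exact accounting, including the small per-input allowance assumed below, is what bip-tapscript defines; the numbers here just illustrate the idea):

```python
def taproot_sigops_ok(witness_size: int, executed_sig_checks: int) -> bool:
    # Every 50 bytes of witness data earns one signature check of budget.
    # Since a key plus signature already takes ~96 bytes, ordinary scripts
    # can never run out.
    budget = witness_size + 50  # assumed free per-input allowance
    return executed_sig_checks * 50 <= budget
```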

Maybe you wonder: why only sigops? Why not big hashes or other expensive opcodes? I did some benchmarking: what are the slowest scripts we could come up with, in terms of execution time per byte, that don’t use CHECKSIGs? It turns out they’re fairly close, like within an order of magnitude (which I guess is not really close, but it’s not crazy far off), to the worst you can do with just CHECKSIG. So that’s the reason for only counting CHECKSIGs here. But imagine in the future there’s an OP_SNARKVERIFY or OP_X86 or OP_WEBASSEMBLY that costs orders of magnitude more per byte than a CHECKSIG does now; we will want to price those proportionally. It doesn’t matter that it’s exactly proportional, but you need to be able to reason that a block full of these is not going to become an attack vector. So how do you do that? The problem is, say this SNARKVERIFY costs the equivalent of 1000 bytes worth of CHECKSIGs, but your script isn’t 1000 bytes and no reasonable input to this opcode will be 1000 bytes. You’d be incentivized to, you know, stuff your transaction with zero bytes or something just to make it pass. That’s actually another justification for the annex, where you can use the annex on an input to say: virtually increase the cost of this transaction by this many bytes, just treat it as if it were a kilobyte bigger, and I’ll even pay fees for it. Now you can see why it is important that you can statically determine that an input is a taproot spend without having the spent outputs available, because otherwise I hand you a transaction on the network and you want to compute its fee rate; to compute the fee rate you need to know its size, and its size is now being amended by this annex. By allowing this to be done statically, using a recognizable pattern for taproot spends, this is much easier and you don’t need the UTXO set or anything else available to compute the size of transactions. Likely this never happens, and I have no concrete ideas for what this would be useful for, but it’s an extension mechanism that we considered and found interesting.

There are also some changes to the signature hashing algorithm, as I alluded to earlier with the scriptpubkeys; the sizes don’t matter anymore. We have the same sighash modes, which is the easiest to argue for. I don’t think the existing sighash modes (ALL, SINGLE, NONE, and ANYONECANPAY) are particularly great; most of them are never used. But it’s just easier to argue for not changing the semantics, especially since with upgradeable pubkeys we can easily introduce new sighash modes later, so it’s less of a concern to get this perfectly right now. A big one is that inputs commit to all input amounts, not just the amount of the output they are spending. The reason is a particular attack that exists today on hardware wallets or offline signing devices, where malicious wallet software colludes with a miner to spend all of someone’s coins into fees. This is why we commit to amounts in the first place in segwit, but it is not sufficient, because you actually need to commit to all of them. The reason is that, as a malicious wallet, I can give the hardware wallet two signing attempts. The first time, I lie about the amount of the first input but I’m honest about the second one. On the second attempt, I’m honest about the first one but I lie about the second. Now I can take the signatures from these two attempts and combine them into one single valid transaction that moves everything into fees. This attack has been described; it’s fairly obscure to pull off. To address it, we commit to all input amounts and this problem goes away. Committing to all input amounts and all output amounts effectively means you’re committing to the fee itself.

Another one is committing to the scriptPubKey. There are some issues where you can lie to a hardware device about, say, whether something is a P2SH or a non-P2SH spend, which could in theory make it report fee rates incorrectly or something. It’s good practice, and this issue can be completely avoided, by committing to the scriptPubKey.

All variable length data is now pre-hashed. The hashing that you do for any checksig is now just a constant, up to 200 something bytes. This is the reason why we can get rid of the script size limits.

In that sighash, transaction-level data comes before input-level data, which in theory allows some additional caching where you pre-compute that whole part once. Then obviously, all the new things: you commit to the annex, the leaf hash, and the CODESEPARATOR position.

That’s all I have about changes introduced in tapscript. I have another slide about future things, but if there’s any questions about any of these things, feel free.

murch: How has the organized taproot review been going? I haven’t heard that much about that.

sipa: Bitcoin Optech has organized a public structured taproot review session. It was a 7-week thing where every week there was a curriculum, which was some subset of the changes to discuss, and once or twice a week there would be a moment where people could ask questions and review things. The first weeks went very well. There were tons of questions and very good suggestions for improvements. In the later weeks it’s been dropping off significantly. I think this is the last week; tomorrow, I think, is the last Q&A session. Overall I’m happy, and this gave a lot of exposure of these ideas to people, who gave lots of good comments and some pull requests. There were only a few actual semantic changes that came out of this, or ideas about that. Most of it has been about some things being unclear, rationales being unclear, and requests for stating motivations better.

Bram: The RETURN TRUE extension semantics… that implies that you’re not doing any delegation? Or you’re only delegating to a trusted thing or a thing of known code at the time you do the call? It makes it so you can’t do a graftroot where you have a graftroot.

sipa: You can, I think.

Bram: You can’t do partial delegations. So you’re either completely trusting something, or not calling it.

sipa: First of all, graftroot is not included here so that’s not a concern. Longer-term, I don’t see the problem really. When delegating, you delegate to a new script and if you delegate to a script that has an OP_RETURN TRUE, then that’s your problem- you’re not going to do that.

Bram: I know you don’t want to support colored coins, but if I did, then you have this sort of partial delegation- this outer thing that is enforcing the color, and an inner thing that might support transactions within the colored coin itself.

sipa: I think you can still do that as long as you structure the requirements on the caller as an assertion. I think the semantics you want is that, when delegating you first completely execute that level of the script. If it returns false, for any reason, you return false, and then you do all of the invoked delegations.

Bram: The main issue.. and I don’t have a good solution for this, is if you want an extension that returns some value. Like some new function that returns a new thing, then this extension mechanism makes it hard to do that.

sipa: Sounds like something to discuss when this becomes an issue.

murch: Any more questions?

What’s next

We need to finish addressing all the comments from review. There’s still a few open issues. This is the last week of the Bitcoin Optech review sessions. I hope and I think the sentiment in general is positive. I am hopeful that we will find widespread acceptance for these changes. We’ll have to work on requesting a BIP number, I think we’ll have it in the next few days. Then there’s work on reference code, unit tests, and just review, test vectors, and need to open up a demo pull request to Bitcoin Core with an implementation… and then some unclear stuff… and then one day it will activate on the network. I am purposefully not going into details on how activation might happen. I think different people have different opinions on that aspect. That’s something up to the community to decide how these things make their way forward.

In parallel, once it’s clear that these things will happen, there’s work that can be done on reference implementations for wallets and libraries. Things like bip174 PSBT extensions; I know Elichai has been working on that. Miniscript will probably want to be extended to support taproot. Given the whole structure introduced with the merkle tree and the tweaked key, just getting an overview of what script or policy is enforced by a particular output becomes harder, so we will want better and better tools to actually deal with this. There will be fewer and fewer ways that you can do these things manually.

That was my talk, thank you very much.

Q&A

Q: What’s the type of pushback if any you are seeing?

sipa: I think I’ve seen some comments on putting public keys directly in an output… but even that has been far less than I had feared.

murch: So deploy in 2 weeks?

sipa: No, I think the most valuable feedback is critical review including questions like “why are you doing this, is this part necessary” or comments like “this is a bad idea”. I haven’t heard any such comments. That might be due to either the right people haven’t looked at it yet, or the proposal is already polished enough that we’re past that stage. I think all kinds of comments are useful. It’s very hard to judge where acceptance lies. I can go on twitter and ask hey all in favor… I don’t think that the responses would be very useful, even the positive ones. This is a hard question. In the end, I think we’ll know consensus when we see it.

Q: Around this merkle tree and creating more complex scripts, what is the future looking like there with tools and wallets to actually build those things? What’s the work that needs to be done there?

sipa: Yeah, so, it’s interesting that at this point we’re really focusing on defining the semantics, primarily the consensus side of things. But there’s a whole lot of work to do on integration and on the signing side. A lot of it is optional. I suspect that many wallets will just implement single-key signing because that’s all they care about, and single-key signing isn’t all that much harder in taproot than elsewhere. At the same time, there’s a lot more flexibility and there are more possibilities available.

Q: Given that the depth of the tree can be 128, you can have quite sizable scripts. Have you seen anything novel coming out of that in review?

sipa: There’s obvious things like: you take a multisig and you split it out into a tree where every leaf is one combination of keys, and you aggregate the keys per leaf together. So you just have a couple thousand leaves… say maybe 5-of-10 or something, I don’t know. And then you have a couple dozen leaves that are each just a single key. It’s far more efficient and much more private than doing it directly as a script that just gives all the public keys. Really a lot of this depends on what kind of policies people want.

Q: In the last major soft-fork, we activated segwit and we saw some pushback from some stakeholders and ecosystem people that hadn’t been involved that much in design. Are you aware of any activity to reach out to those subsets of the community like miners in particular?

sipa: No, I am not aware. But everyone is welcome to review. The ecosystem is probably more well organized now than it was a while ago. I don’t expect it to be a problem. Of course, we don’t know if miners will go along and how long it will take. This is all up to activation mechanism which is still an open question.

Q: What about potentially simplifying multisig applications; what’s going to be the most common or powerful use case you can get from this? If you reduce down to a single signature on-chain.

sipa: That’s a possibility, but it comes with complexity. The verification side of things becomes simpler, but actually building an interactive protocol between multiple parties that jointly sign for a single key is more complicated than signing a multisig today. It has pitfalls. I wouldn’t be surprised if in practice it ends up only being a few players that do this. That’s fine, because things are still indistinguishable.

Q: A quick question about the annex. Is that strictly for making transactions being able to weight their incentives properly…

sipa: That’s one reason. There’s another example which is the anti-fee sniping thing. Right now there’s a technique for anti-fee sniping where you put a locktime on a transaction for a block not far in the past. If a miner tries to intentionally reorg the chain to grab the fees of this perhaps high-fee-paying transaction, then they wouldn’t be able to do so and they would still need to wait. If there was a way to make a transaction that says this is only valid in a chain that builds off of this blockhash, then this becomes a lot more powerful. I don’t know if it’s sufficiently desirable, but it’s an example of something that you could do.

Q: You could even create a bounty to vote against a split or something like that, because you commit to only one side of the split.

sipa: This is fork protection, too. I think that’s the context in which it was originally proposed.

1h 26min

Other resources

http://gnusha.org/bitmetas/

Meetup

slides: https://prezi.com/view/AlXd19INd3isgt3SvW8g/

https://twitter.com/kanzure/status/1208438837845929987

https://twitter.com/SFBitcoinDevs/status/1206678306721894400

bip-taproot: https://github.com/sipa/bips/blob/bip-schnorr/bip-taproot.mediawiki

bip-tapscript: https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki

bip-schnorr: https://github.com/sipa/bips/blob/bip-schnorr/bip-schnorr.mediawiki

Please try to find seats. Today we have a special treat. We have Pieter here, who is well known as a contributor to Bitcoin Core, stackexchange, the mailing list, and he has been around forever. He’s the author of multiple BIPs. Today he is going to talk about taproot and tapscript which is basically what the whole Schnorr movement thing became. He’s probably just giving us an update and all the small details and everything else as well. We’d like to thank our sponsors: River Financial, Square Crypto and Digital Garage for making this possible.

Introduction

Thank you, Mark. My name is Pieter Wuille. I do bitcoin stuff. I work at Blockstream. Today I am going to give an update on the status of really three BIPs that a number of us have been working on for a while, at least 1.5 years. These BIPs are coauthored together with Anthony Towns, Jonas Nick, Tim Ruffing and many other people.

Over the past few weeks, Bitcoin Optech has organized structured taproot review sessions and workshops, which have brought in attention and lots of comments from lots of people, and that has been very useful.

Of course, the original idea of taproot is due to Greg Maxwell who came up with it a year or two ago. Thanks to him as well. And all the other people have been involved in this, too.

I always make my slides at the very last minute. These people have not seen my slides. If there’s anything wrong on them, that’s on me.

Agenda

Okay, so what will this talk be about? I wanted to talk about the actual BIPs and the actual changes that we’re proposing to make to bitcoin to bring taproot, Schnorr signatures, and merkle trees, and a whole bunch of other things. I am mostly not going to talk about taproot as an abstract concept. I previously gave a talk about taproot here I think 1.5 years ago in July 2018. So if you want to know more about the history or the reasoning why we want this sort of thing, then please go have a look at that talk. Here, I am going to talk about a lot of the other things that were brought in that we realized we had to change along the way or that we could and should. I am going to try to justify those things.

I think we’re nearing the point where these BIPs are getting ready to be an actual proposal for bitcoin. But still, if you have comments, you’re more than welcome to post them on my github repository, on the mailing list, on IRC, or make them here in person, and I’m happy to answer any questions.

So during this talk, I am really going to go over step-by-step a whole bunch of small and less small details that were introduced. Feel free to raise your hand at any time if you have questions. As I said, I am not going to talk so much about taproot as a concept, but this might mean that the justification or rationale for things is not clear, so feel free to ask. Okay.

Design goals

Why do we want something like taproot? The reason is that we have realized it is possible to improve the privacy, efficiency and flexibility of the bitcoin script system, and doing so without changing the security assumptions.

Primarily what we’re focusing on in terms of privacy is something I’m calling “policy privacy”. There are many types of information that we leak on the network when we’re making transactions, some of them on-chain, and some of them just by revealing things on the p2p network. But this is only addressing one of them: namely the fact that when you create a script on-chain, you create an output that commits to a script, and when you spend it you reveal what that script is to the network. That means that if tomorrow some wallet provider comes up with a fancy 4-of-7 multisig checklocktimeverify script and you start using it, then whenever someone on the network sees a 4-of-7 multisig with checklocktimeverify being spent, they can probably guess with high certainty that you are using that particular wallet provider.

We’re trying to address policy privacy, which is about not leaking the policy of the spendability conditions of your outputs to the network. The ideal that we’re trying to achieve here would be possible in theory with a recursive zero-knowledge proof construction where you show something like “I know the hash of a program and I know the inputs to that program that will satisfy it” but without revealing what either of those inputs or program are. There are good reasons not to do that. One is, all the generic zero-knowledge proof constructions that we could use are either very large, computationally expensive, rely on new security assumptions, need a trusted setup, all kinds of stuff that we’d really rather avoid at this stage. But things are progressing fast in that domain and I hope that at some point we’ll actually be able to do something moon mathy that completely hides the policy.

When working on proposals for bitcoin, I like to focus on things that are likely to be accepted. That’s another reason to focus on things that don’t change the security assumptions. Right now, bitcoin at the very least requires ECDSA, whose security relies on the hardness of the discrete logarithm problem in the secp256k1 group. It makes a whole bunch of assumptions about hash functions, both standard and non-standard ones. We’re not changing those assumptions at all. In fact, we’re reducing them: Schnorr signatures, which as I’ll show we use, actually rely on fewer assumptions.

I think it’s important to point out something here. A question you could ask is, well, shouldn’t it be possible to optionally introduce a feature that people can use, one that changes the security assumptions? Probably that is what we want to do at some point eventually. But even so: if I don’t trust some new digital signature scheme that offers some new awesome features that we may want to use, and you do, so you use the wallet that uses it, effectively your coins become at risk, and if I’m interacting with you… Eventually the whole ecosystem of bitcoin transactions is relying on it; say several million coins are somehow encumbered by security assumptions that I don’t trust. Then I probably won’t have faith in the currency anymore. What I’m trying to get at is that the security assumptions of the system are not something you can just choose and take; it must be something that really the whole ecosystem accepts. For that reason, I’m just focusing on not changing them at all, because that’s obviously the easiest thing to argue for. The result is that we end up exploring, to the extent possible, everything that is possible under these assumptions, and we have discovered some pretty neat things along the way.

Over the past few years, a whole bunch of technologies and techniques have been invented that could be used to improve the efficiency, flexibility or privacy of bitcoin Script in some way. There’s merkle trees and MASTs, taproot, graftroot, generalized taproot (also known as groot). Then there’s some ideas about new opcodes, new sighash modes such as SIGHASH_NOINPUT, cross-input aggregation which is actually what started all of this… A problem is that there’s a tradeoff. The tradeoff is, on the one hand we want to have– it turns out that combining more things actually gives you better efficiency and privacy especially when we’re talking about policy privacy, say there’s a dozen possible things that interact and every two months there’s a new soft-fork that introduces some new feature…. clearly, you’re going to be revealing more to the network because you’re using script version 7 or something, and it added this new feature, and you must have had a reason to migrate to script version 7. This makes for an automatic incentive to combine things together. Also, the fact that probably not– people will not want to go through an upgrade changing their wallet logic every couple of months. When you introduce a change like this, you want to make it large enough that people are effectively incentivized to adopt it. On the other hand, putting everything all at once together becomes really complex, becomes hard to explain, and is just from an engineering perspective and a political perspective too, a really hard thing. “Here’s 300 pages of specification of a new thing, take it or leave it” is really not how you want to do things.

The balance we end up with is combining some things by focusing on just one thing, and its dependencies, bugfixes and extensions to it, but let’s not do everything at once. In particular, we avoid things that can be done independently. If we can argue that doing some feature as a new soft-fork independently is just as good as doing it at the same time, then we avoid it. As you’ll see, there’s a whole bunch of extension mechanisms that we prefer over adding features themselves. In particular, there will be a way to easily add new sighash modes, and as a result we don’t have to worry about having those integrated into the proposal right away. Also new opcodes; we’re not really adding new opcodes because there’s an extension mechanism that will easily let us do that later.

So the focus is on merkle trees, merkleized abstract syntax trees (MASTs), and taproot, and the things that are required to make those efficient. We’ll look at what benefits those can give us, and make sure they’re usable. In particular, this is a focus on things that are usable without interactive setup. Both merkle trees and taproot have the advantage that I can just get your public key, compute an address and have people send to it, and none of you need to run to your safe, hardware wallets, vaults or anything else.

Another thing that is possible is graftroot, which can in some cases offer much better savings and complexity than taproot, but it has the disadvantage that you inherently need access to the private keys. That’s not a reason not to do that, but it’s a reason to avoid it in the first steps.

So let’s actually stop talking about abstract stuff and move on.

Taproot

What is taproot? Trying to make all output scripts and most spends indistinguishable. How are we going to do that? Instead of having separate concepts for pay-to-pubkey and pay-to-scripthash, we combine them into one and make every output both. Every output will be spendable by one key and zero or more scripts. We’re going to make it in such a way that spending with just a public key will be super efficient: it will only require a single signature on-chain. The downside is that spending with scripts will be slightly less efficient. It’s a very interesting tradeoff you can make where you can actually choose to make one branch more efficient than the others in the script.

What is the justification for doing so? With Schnorr signatures, one key can easily be an aggregate of multiple keys. With that, it’s easy to argue that most spends of fairly complex scripts on-chain could be replaced with a branch of “everyone agrees”: clearly, if I take all the public keys involved in a script and they all agree, then that should be sufficient to spend, regardless of what the scripts actually are. It doesn’t even need to be all of them; say it’s a 2-of-3 where 2 of the keys are online and one is offline… you make the efficient side the two keys that are online, and in 99% of the cases you’ll be able to use that branch.

How are we constructing an output? On the slide you can see s1, s2, and s3. Those correspond to three possible scripts that we want to be able to spend with. Then we put them into a merkle tree. This is a very simple merkle tree of three elements where we compute m2 as the inner node over s2 and s3; then m1 is the merkle root that combines it with s1. But then, as a layer on top of that, instead of using that merkle root directly in an output, we use it to tweak the public key P. P corresponds to the public key, which might be an aggregate of multiple keys, but it’s our happy path: we really hope that pretty much all the time we’ll be able to spend with P alone. Our output becomes Q, which is the formula that takes P, hashes it together with the merkle root, multiplies that by the generator, and adds it to P. The end result is a new public key that has been tweaked with the merkle root.

So we’re going to introduce a new witness version. Segwit, as defined in bip141, offered the possibility of having multiple script versions. Instead of using v0 as we’ve used so far, we define a new one, script v1. Its program is not a hash; it is in fact the x-coordinate of that point Q. There are some interesting observations here. One, we just store the x-coordinate and not the y-coordinate. A common intuition people have is that by dropping the y-coordinate we’re reducing the key space in half, so people think well maybe this is 1/2 bits reduction in security. It’s easy to prove that this is in fact no reduction in security at all. The intuition is that if you had an algorithm to break a public key given just an x-coordinate, you would in fact always use it; you would even use it on public keys that also had a known y-coordinate. So it is true that there’s some structure in public keys, and we’re exploiting that by just storing the x-coordinate, but that structure is always there and it can always be exploited. It’s easy to actually prove this. Jonas Nick wrote a blog post about that not so long ago, which is an interesting read and gives a glimpse of the proofs that are used in this sort of thing.

As I said, we’re defining witness v1. Other witness versions remain unencumbered, obviously, because we don’t want to say anything yet about v2 and beyond. But we also want to keep other program lengths unencumbered. I believe this was a mistake we made in witness v0, which is only valid with a 20-byte or 32-byte hash: the 20-byte one corresponds to a public key hash, and the 32-byte one to a script hash. The space of witness versions and their programs is limited; there are only 16 versions, and it’s sad that we’ve removed the possibility to use v0 with other lengths. To avoid that, we leave other lengths unencumbered, but the downside is that this exposes us to an issue discovered a couple of months ago in bech32 (bip173), where it is under some circumstances possible to insert characters into an address without invalidating it. I’ve posted on the mailing list a strategy and an analysis that shows how to fix that bech32 problem. It’s unfortunate, though, that by not doing this encumbrance we’re now exposed to it.

As I said, the output is an x-coordinate directly. It’s not a hash. This is perhaps the most controversial part of the taproot proposal, I expect. There is a common thing that people say: bitcoin is quantum resistant because it hashes public keys. I think that statement is nonsense, for several reasons. First, it makes assumptions about how fast quantum computers are: clearly, when spending an output you’re revealing the public key, and if it can be attacked within that time, then it can be attacked. Plus, there are several million bitcoin right now in outputs whose public keys are already known and which can be spent with those known public keys. There’s no real reason to assume that number will go down, because really, any interesting use of the bitcoin protocol involves revealing public keys. If you’re using lightning, you’re revealing public keys. If you’re using multisig, you’re revealing public keys to your cosigners. If you’re using various kinds of lite clients, they are sending public keys to their servers. It’s just an unreasonable assumption that… simply said, we cannot treat public keys as secret.

In this proposal, we exploit that by just putting the point directly in the output, and this saves us 32 bytes because now we don’t need to reveal both the hash and the key. Yes, to save the bytes. Yeah.

Q: What is the reason for not putting the parity of the y-coordinate in there?

A: It’s just to save the bytes. A better justification is that it literally adds no security, so why should we waste a byte on it? Also note that because we’re not hashing things, this byte would go in the output directly where it is not witness discounted.

Also, a relatively recent change to bip-taproot and bip-tapscript is that there is no P2SH support. The reasons for removing P2SH support are that P2SH only has 80 bits of collision resistance security; if you’re jointly constructing a script with someone else, 80 bits of security is not something we expect to hold up in the long term. Another reason is that P2SH is an inefficient use of the chain. It also reduces our ability to achieve the goal of making all outputs look identical, because if we have both P2SH and non-P2SH outputs then that just gratuitously gives up bits of information and privacy. At this point, I think native segwit outputs and bech32 have been adopted sufficiently that we expect, by the time taproot gets rolled out, if and when it does, the remaining stragglers probably won’t upgrade at all anyway.

More BIP details

I told you that in taproot an output is a public key that we’re tweaking with the merkle root of a tree whose leaves are scripts. Here is the big trick that makes taproot work. x here is the private key to P. Let’s say P is a single participant’s key; it works for aggregates too, but it’s easier to show this way. Q, which is P + H(P || merkle root)*G, is actually equal to (x + that hash)*G. In other words, the private key to Q equals x plus that hash. So if you have the private key to P and you know the merkle root, then you know the private key to Q. The proposal says it is possible to spend any taproot output by just giving a signature with that private key to Q. Yes, we just pick a fixed parity. At signing time, if the parity is wrong, you negate your private key just before signing. So literally the only thing that goes on-chain is a single signature, in the happy case. Using Schnorr signatures, that P can actually be an aggregate, and cooperative spends can just use that single key.

Another advantage of Schnorr signatures is that they are batch verifiable. If we have hundreds or hundreds of thousands of signatures, we can verify them together more efficiently than verifying each one individually. The speedup scales approximately with n/log(n). If you have thousands of keys, you easily get a factor of 3x, which is nice for initial block download where in theory you could batch verify over multiple blocks at once.

I lied though– I told you that the leaves of this merkle tree are scripts. They are actually not scripts. They are tuples of a version and a script, so (version, script). The reason for this is simple: we have 5 bits available in the header. We couldn’t get rid of them, the 5 bits were there anyway, so why not use them as an extension mechanism and, for now, only define semantics for one value; anything else is treated as anyone-can-spend and left unencumbered. So in a way you have the version number at the segwit level, which is an exposed version number because your transaction output reveals what that version number is. But this introduces a new subversion number which is per script. Every leaf can have a different version, which is nice for privacy: if you have 10 branches and only one branch needs a feature implemented in a new version, then you only reveal that you’re using that new version when you’re actually revealing that particular script branch, which is exactly when you need it.

This is primarily useful for large re-designs of script. I don’t think there’s any such thing in the near or medium-term future, but I think it’s good to have something where we can just swap things out. To make a clear distinction about where you would use a witness version number versus this leaf version number: I think the witness version is something that we want to use for changing the execution structure, like if we want to change the merkle tree or if we want to change things like cross-input aggregation or graftroot; those changes would need a new witness version and we can’t put them into the leaf version number. When we want to replace the tree structure, use a new witness version number. If you just want to make some changes to script, you can use the leaf version number; in fact you might want to use another mechanism which I’ll talk about later.

At this point, only one version is defined, c0. I’ll get back on why that’s not just 0 and why it’s c0. This version we call tapscript. That is actually the second BIP which describes the modifications to script as they apply to scripts under leaf version c0 in taproot.

Another advantage of having this second layer of versioning is that if, say, a later change introduces graftroot, which would need a new witness version, we might still want to reuse the leaf versions because there are no repercussions on script itself.

When talking about that merkle tree, this is something that Eric over there observed… we don’t actually care about where in the tree our script is, we only care about the fact that it is somewhere. It’s fairly annoying to come up with a bitwise encoding that tells you go left here, go right here, go left here, etc. So the idea is, before hashing, the two branches will be sorted lexicographically. Pick the lowest first, and now you don’t need to say which side you go on. You just reveal your leaf and you reveal the hashes to combine it with, and you’re done. This is also a very weak privacy improvement, because it automatically scrambles the tree order: if you change the public keys in it, or something, you’re not actually revealing where in your policy that position was. It’s not perfect, but it’s something of a privacy improvement.

When spending through the script path, what do you need to do? You need to reveal the script, the leaf version, the inputs to that script, and a merkle branch that proves it was committed to by your root. How do we do that? On the witness stack, when spending an input, you give the inputs to the script, then the script itself (so far the same as P2SH), but a third element is added called a “control block” which contains marker bits, then the leaf version. It stores the sign of Q, and unfortunately we do need to reveal the sign of Q, otherwise the result is not batch verifiable: there are two possible equations and you’d get a combinatorial explosion if you tried to verify multiples at once, so we need one bit to convey the sign of Q, and then we need the merkle path.

What these marker bits are for, is that by setting the top two bits of the first byte of the last stack element in the witness stack, to 1, we have the remarkable property that we can detect taproot spends without having access to the inputs. I don’t know what it’s useful for, but it feels like there’s some static analysis advantage where I can look at a transaction and an input and say this is a taproot spend and I don’t need to go lookup the UTXO or anything to know that. The reason for this is that any such bytes at the end would imply a P2WSH spend with a disabled opcode… so clearly those can’t exist.

For verification, we compute S from the leaf and the script as the first hash, then compute the merkle root from that S together with the path you provided. Verify that equation, and again this is batch verifiable, together with the Schnorr signatures. They can all be thrown into one batch. Then execute the script with the input stack and then fail if it fails execution.

Another change is that in all… should I stop a moment and ask if there are questions so far about anything? No, all good?

Murch: If anyone has a question, just raise your hand and I’ll bring you a microphone. Okay, go on.

sipa: We’ll give it another five seconds.

Tagged hashes

Another thing that we’re proposing is that instead of just using single or double sha256 directly, we suggest using tagged hashes. The idea is that every hash is given a tag. The goal of this is domain separation. We really don’t want a hash used for computing a signature hash to ever collide with, or be reinterpretable as, a hash in a merkle tree, a hash used to derive a nonce, or a hash used to tweak the public key in taproot. An easy way to do that is by including a tag with every hash you compute. How we do this: you take your tag, which is an ASCII string, you hash it, you repeat that hash twice so it becomes 64 bytes, and that is a prefix that goes before the data you’re hashing yourself. Now, because 64 bytes is the block size of sha256, it means that for any given constant tag, sha256 with that tag actually just becomes sha256 with a different initialization vector. This costs nothing, because we precompute the new initialization vector after hashing the tag. Because nowhere in bitcoin today, to the best of my knowledge, is there a protocol that hashes two identical single sha256 hashes at the beginning of another sha256 hash, this should never collide with any existing usage either. The rationale here is that the bitcoin transaction merkle tree has actually had vulnerabilities in the past from not having such a separation. It uses the same algorithm for hashing the inner nodes and the leaves, and there’s a whole bunch of scary bugs that result from this, some more recent, but the earliest was 2012; you can look up the details. Anyway, these things are scary and we’re providing a simple standard to avoid this.

All the tags that we’re using: for the leaves, combining the script with the version number, that’s TapLeaf. For the branches of the merkle tree, it’s TapBranch. For tweaking the public key, it’s TapTweak. Then there’s one for sighashes, and inside Schnorr signatures we also use them. Many of these are very likely overkill, but I think it’s good practice to start doing this.

Not a balanced merkle tree

Another observation is that this merkle tree of all the scripts that you might want to spend a particular output with doesn’t need to be a balanced merkle tree. We’re never even necessarily materializing it. The verifier clearly doesn’t care if it’s balanced, because he can’t even observe the whole tree; you just give it a path. There are good reasons why you may not want to make it balanced. In particular, if you have some branches that are far less likely than others, then you can put them further down and put the more likely ones further up. There’s actually a very simple algorithm for doing this called a Huffman tree, which is used in simple compression algorithms: you make a list of all your possibilities together with their probabilities, combine the two smallest into a node, and keep combining until you’ve built a tree, and that tree is in fact the optimal one. In the BIP today there’s a limit of 128 levels, which is clearly more than what is practical for a balanced tree. You’re clearly not going to construct things with billions of spending possibilities… it would be too hard to even compute the address; it could easily take seconds, minutes and more. But because of this unbalancing, we still want a limit for spam protection so that you can’t just dump kilobytes of data into your merkle tree. This is something to consider that may make things more efficient.
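A hedged sketch of that Huffman-style construction (the function name, probabilities and scripts are illustrative, not from the talk):

```python
# Repeatedly merge the two least likely nodes so that likelier scripts end up
# with shorter merkle paths (and therefore cheaper witnesses).
import heapq
from itertools import count

def build_tree(leaves):
    """leaves: list of (probability, script) tuples."""
    tiebreak = count()  # avoids comparing scripts when probabilities tie
    heap = [(p, next(tiebreak), ("leaf", script)) for p, script in leaves]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, n1 = heapq.heappop(heap)
        p2, _, n2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tiebreak), ("branch", n1, n2)))
    return heap[0][2]

tree = build_tree([(0.70, b"likely script"),
                   (0.25, b"backup script"),
                   (0.05, b"rare recovery script")])
# The most likely script sits nearest the root, so spending it needs the
# shortest merkle path.
```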

Annex

A final thing is something we’re calling the annex. Observe that today in bitcoin transactions you have a locktime and an nSequence value. The locktime has always been there. The nSequence value was useless because its original design didn’t actually work; its semantics were later changed in bip68 to turn it into a relative locktime. The question is, what if we want more of those things? What if we wanted a way to have an input that could only be spent in a chain whose blockhash at a certain height is a certain value? The obvious way would be to add a new field to the transaction for this, but there is no such field and we can’t easily add one, at least not without running into all sorts of complications. So what we’re doing is that the witness stack when spending a taproot output can optionally have a final element called an annex, which is just ignored. Right now there are no semantics associated with it, except that it is signed by all signatures: you can’t add or remove it, as long as your transaction has signatures. It’s also non-standard to use it. This would let you very elegantly say: if the first byte of the annex is this, then these certain semantics apply. Another use case, which I’ll get back to once I talk about resource limits in script, is that you could use an annex to declare an expensive opcode and pay for it upfront, because we may not want to add very expensive opcodes otherwise. I recognize that this might be the least obvious part of the presentation; is this clear?

Tapscript

This is the other BIP, which describes the semantics when executing a script, specifically when the leaf version is c0. This is bip-tapscript. By the way, the reason that it is c0 and not 0 is because of those marker bits that let you detect that it is a taproot spend even without the inputs.

We start from the script semantics as they were in bip141 segwit, but with a number of changes. The most obvious change is that instead of ECDSA signatures, CHECKSIG and CHECKSIGVERIFY now use Schnorr signatures. The justification is simply that we want these to be batch verifiable, and therefore there’s no reason to support both. Another difference is that CHECKMULTISIG is no longer available. The reason is that with CHECKMULTISIG today you give it a list of public keys and then a list of signatures, and it finds which signature matches which public key. This retry behavior is not something you can batch verify. A possibility would be to just declare which key belongs to which signature, but it turns out that wasn’t actually much more efficient than not having an opcode for it at all. So instead, our proposal adds CHECKSIGADD, which is the same as CHECKSIG but it increments an accumulator with whether or not the signature check was successful. This lets you implement a multisig as key CHECKSIG key CHECKSIGADD key CHECKSIGADD 5 EQUAL, and this will do the same thing. It’s a few more opcode bytes, but for simple things you’ll probably want to turn it into a merkle tree where every branch is one combination of your multisig anyway, which is more efficient.
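As an illustrative sketch (opcode names only, my own example of a 2-of-3, not tied to any particular script-builder library), the CHECKSIGADD pattern could be assembled like this:

```python
# Build a threshold multisig tapscript in the CHECKSIGADD style:
# <key1> CHECKSIG <key2> CHECKSIGADD <key3> CHECKSIGADD <k> NUMEQUAL
def multisig_tapscript(pubkeys, threshold):
    ops = []
    for i, pk in enumerate(pubkeys):
        ops.append(("push", pk))
        ops.append("OP_CHECKSIG" if i == 0 else "OP_CHECKSIGADD")
    # The accumulator of successful checks is compared against the threshold.
    ops += [("push", threshold), "OP_NUMEQUAL"]
    return ops

script = multisig_tapscript([b"<key1>", b"<key2>", b"<key3>"], 2)
```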

The next thing to discuss is OP_SUCCESS. In bitcoin script today, we have a whole bunch of NOP opcodes. These were probably added specifically with the intent of having an upgrade mechanism so that we can easily add new opcodes to the bitcoin script language. So far they have been used twice, for CHECKLOCKTIMEVERIFY and CHECKSEQUENCEVERIFY. The problem with redefining these NOPs is that in order to be soft-fork compatible, they can only do one of two things: abort, or not do anything at all. This is the reason why the CHECKLOCKTIMEVERIFY and CHECKSEQUENCEVERIFY opcodes today don’t pop their argument from the stack and you always need an OP_DROP after them: because they are redefinitions of a NOP, they cannot modify the stack in any way.

A solution to that is instead of having an opcode that doesn’t do anything (a NOP), have an opcode that just returns TRUE and that’s OP_SUCCESS. That might sound unsafe, but it’s exactly as unsafe as having new witness versions that are undefined. You won’t use them until you have a use for them, and you won’t use them until they have defined locked-in semantics on the network. We’re taking a whole swath of disabled and never-defined opcode numbers, and turning them into “return TRUE”. Later, these opcodes can be redefined to be anything, because everything is soft-fork compatible with “return TRUE”.

Bram: Is that “return TRUE” at parse time?

sipa: It is at parse time. There’s a preprocessing step where the script gets decoded before execution, and even if you have an OP_SUCCESS in an unexecuted IF branch, for example, it still means “return TRUE”. This means it can be used to introduce completely new semantics, it can change resource limits, it can even change the parse function in a way, if you say for example that the first byte of your script being OP_SUCCESS means something else, and so on. So this is just the most powerful way of doing it.

Of all the different upgrade mechanisms that are in this proposal, OP_SUCCESS is the one I don’t want to lose. The leaf versions can effectively be subsumed by OP_SUCCESS: just start your script with an opcode like OP_UPGRADE and now your script has completely new semantics. This is really powerful and should make it much easier to add new opcodes.

Another thing is upgradeable pubkey types. The idea is that if a public key passed to CHECKSIG, CHECKSIGVERIFY or CHECKSIGADD is not the usual 32 bytes (not 33 bytes anymore; it’s 32 bytes because there too we’re just using the x-coordinate), then we treat that public key as an unknown public key type whose signature check automatically succeeds. This means that you can do things like introduce a new digital signature scheme without introducing new opcodes every time. Maybe more short-term, it’s also usable to introduce new signature hashing schemes: otherwise, if you wanted slightly different sighash semantics like SIGHASH_NOINPUT or ANYPREVOUT or whatever it’s called these days, you would need three new opcodes every time. Using upgradeable public key types, this problem goes away.

Another is making “minimal IF” a consensus rule. “Minimal IF” is currently a standardness rule that has been in segwit forever and I haven’t seen anyone complain about it. It says that the input to an OP_IF or an OP_NOTIF in the scripting language has to be exactly TRUE or FALSE, and cannot be any other number or byte array. It’s really hard to make non-malleable scripts without it. This is actually something we stumbled upon when doing research on miniscript and trying to formalize what non-malleability in bitcoin script means: we have to rely on “minimal IF”, and otherwise you get ridiculous scripts where you need two or three extra opcodes before every IF to guarantee the input is right. So that’s the justification: it’s always been there, we’re forced to rely on it, we had better make it a consensus rule.

The last one is new code separator semantics. OP_CODESEPARATOR has been in bitcoin forever. It was probably originally intended as a delegation mechanism, because it restricted what part of the script was being signed by signatures, and you could actually implement a delegation mechanism with this. Unfortunately, in mid-2010, when the execution of the scriptsig and the scriptpubkey was split and a CODESEPARATOR in one no longer influenced the signatures in the other, that functionality was broken. So we looked at it and thought, well, what is it still useful for? There’s one thing it can still be used for, namely where you have multiple execution branches through your script, like IF THEN ELSE, and you want your signatures to commit to which of the branches you’re taking: by putting a CODESEPARATOR in one of them, you change what script they are signing, and as a result it indirectly lets you commit to what branch you’re taking in the script. I don’t know if anyone is using this in production, but I’ve certainly heard of people thinking about using it. So we’re going to keep this part and drop the rest: we’re just going to make signatures commit to the position of the last executed CODESEPARATOR, without the old truncation behavior.

Resource limits

Okay, resource limits. Bitcoin script today has a 10,000 byte script size limit, which we propose to drop. The reason is that it no longer has a use: there is literally no data structure left in taproot whose size depends on the size of your script, nor any way in which execution time is more than proportional to your script size. Before, even in segwit, every signature hash contains the entire script being executed, which means that if you have a script of 10,000 bytes and they’re all CHECKSIGs, then you’re going to do 10,000 CHECKSIGs that each hash 10,000 bytes. This is a version of the quadratic hashing problem. It was improved before: in segwit there are fewer quadratic hashing issues left than in the legacy system that came before it, but in taproot they’re all gone. We just pre-hash the script once, and then that hash gets included in the signature hash every time. So with that, we don’t need the limit anymore.

Also, the 201-non-push-opcode limit, which is my favorite crazy thing in bitcoin: we got rid of that too. To show how crazy it is: right now in bitcoin script there is a rule that says that the total number of non-push opcodes in your script can be at most 201. This counts executed and unexecuted opcodes. However, when you execute a CHECKMULTISIG, the number of public keys participating in that CHECKMULTISIG counts towards this limit, but only for the executed ones. I discovered this while working on miniscript, where I had a fuzzer constructing random scripts, miniscript reasoning about whether each was a valid script, and then handing them to Bitcoin Core to verify; some didn’t pass, and it was because of this CHECKMULTISIG thing. There’s really no reason for this weird limit.

Lastly, we removed the block-wide 80,000 sigop limit and replaced it with a one-sigop-per-50-witness-bytes rule. This accomplishes something really nice: we get rid of multi-dimensional optimization. Right now there are two independent resource limits for blocks: the weight and the sigops. In normal transactions the number of sigops is way below the ratio where this matters, but you can maliciously construct scripts that no miner will know how to optimize for correctly. This is only very vaguely exploitable; it’s just an income reduction. A miner will not be able to find the optimal bin packing solution given all these transactions with their fees, weights and sigops. But the downside is that if miners did try to be smarter, fee estimation on the network would become harder. So we remove the sigops limit and basically translate sigops into bytes, which is what this does: every 50 bytes gives you a credit of 1 sigop. Given that a public key plus a signature already costs 96 bytes, there’s absolutely no reason why you would ever need to go over this limit. That way, there’s only a single limit left to optimize for, and things get a lot easier.
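A rough sketch of that accounting (constants as stated in the talk and in bip-tapscript; the helper names are made up):

```python
def validation_budget(witness_size_bytes: int) -> int:
    # The per-input budget starts at the serialized witness size plus a small
    # base credit, so roughly one sigop is available per 50 witness bytes.
    return 50 + witness_size_bytes

def charge_sigop(budget: int) -> int:
    # Every executed signature check spends 50 units of the budget.
    budget -= 50
    if budget < 0:
        raise ValueError("script exceeds its sigop budget")
    return budget
```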

Maybe you wonder, well, why only sigops? Why not big hashes or other expensive opcodes? So I did some benchmarking: what are the slowest scripts we could come up with, in terms of execution time per byte, that don’t use CHECKSIGs? It turns out they’re fairly close, like within an order of magnitude, which I guess is not really close, but it’s not crazy far off the worst you can do with just CHECKSIG. So that’s the reason for only counting CHECKSIGs here. But imagine in the future there’s an OP_SNARKVERIFY or OP_X86 or OP_WEBASSEMBLY that costs orders of magnitude more than a CHECKSIG per byte; we will want to price those proportionally. It doesn’t matter that it’s exactly proportional, but you need to be able to reason that a block full of these is not going to become an attack vector. So how do you do that? The problem is, say this SNARKVERIFY costs the equivalent of 1000 bytes worth of CHECKSIGs, but your script isn’t 1000 bytes and no reasonable input to this opcode will be 1000 bytes. You’d be incentivized to, you know, stuff your transaction with zero bytes or something just to make it pass. That’s actually another justification for the annex, where you can use an annex on the input to say: virtually increase the cost of this transaction by this many bytes, just treat it as if it was a kilobyte bigger, and I’ll even pay fees for it. Now you can see why it is important that you can statically determine that an input is a taproot spend without having the inputs available, because otherwise I hand you a transaction on the network and you want to compute its fee rate, and in order to compute the fee rate you need to know its size, and its size is now being amended by this annex. By allowing this to be done statically, by using a recognizable pattern for taproot spends, this is much easier and you don’t need the UTXO set or anything else available to compute the size of transactions. Likely this never happens, I have no concrete ideas for what this would be useful for, but it’s an extension mechanism that we considered and found interesting.

There are also some changes to the signature hashing algorithm, as I alluded to earlier, with the scriptpubkeys. The sizes don’t matter anymore. We keep the same sighash modes, which is the easiest to argue for. I don’t think the existing sighash modes like ALL, SINGLE, NONE and ANYONECANPAY are particularly great — most of them are never used — but it’s just easier to argue for not changing the semantics, especially since with upgradeable pubkeys we can easily introduce new sighash modes later. So it’s less of a concern to get this perfectly right now. A big one is that inputs commit to all input amounts, not just the amount of the output they are spending. The reason for that is a particular attack that exists today on hardware wallets or offline signing devices, where there is malicious wallet software, a hardware wallet, and the malicious software colludes with a miner to spend all of someone’s coins into fees. This is why we committed to amounts in the first place in segwit, but that is not sufficient, because you actually need to commit to all of them. The reason is that, as a malicious wallet, you can give the hardware wallet two signing attempts. The first time, I lie about the amount on the first input but I’m honest about the second one. On the second attempt, I’m honest about the first but I lie about the second. Now I can take the signatures from these two attempts and combine them into one single valid transaction that moves everything into fees. This attack has been described; it’s fairly obscure to pull off. To address it, we commit to all input amounts and this problem goes away. Committing to all input amounts and all output amounts effectively means you’re committing to the fee itself.
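As a toy numeric illustration of those two signing attempts (amounts entirely made up, not from the talk):

```python
# Under per-input amount commitments only, the signer cannot detect lies
# about the *other* input's amount, so each attempt yields one valid
# signature for the real (high-fee) transaction.
true_in0, true_in1, output = 5.0, 5.0, 5.5

# Attempt 1: lie about input 0, honest about input 1 -> valid sig for input 1.
apparent_fee_1 = 1.0 + true_in1 - output          # looks like 0.5
# Attempt 2: honest about input 0, lie about input 1 -> valid sig for input 0.
apparent_fee_2 = true_in0 + 1.0 - output          # looks like 0.5

# Combining the two valid signatures authorizes a transaction the device
# never meant to approve:
real_fee = true_in0 + true_in1 - output           # 4.5, all into fees

# With every signature committing to all input amounts (and hence the fee),
# either lie invalidates the combined transaction.
```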

Another one is committing to the scriptpubkey. There are some issues where you can lie to a hardware device, for example claiming this is a P2SH versus a non-P2SH spend, which would in theory make it report fee rates incorrectly or something. It’s good practice, and this can be completely avoided by committing to the scriptpubkey.

All variable length data is now pre-hashed. The hashing that you do for any checksig is now just a constant, up to 200 something bytes. This is the reason why we can get rid of the script size limits.

In that sighash, the transaction-level data goes first, before the input-level data, which in theory allows some additional caching where you pre-compute that whole transaction-level part once. Then obviously all the new things: you commit to the annex, the leaf hash, and the CODESEPARATOR position.

That’s all I have about changes introduced in tapscript. I have another slide about future things, but if there’s any questions about any of these things, feel free.

murch: How has the organized taproot review been going? I haven’t heard that much about that.

sipa: Bitcoin Optech has organized a public structured taproot review. It was a 7-week thing where every week there was a curriculum covering some subset of the changes to discuss, and once or twice a week there would be a moment where people could ask questions and review things. The first weeks went very well. There were tons of questions and very good suggestions for improvements. In the later weeks it’s been dropping off significantly. I think this is the last week; tomorrow I think is the last Q&A session. Overall I’m happy, and this gave these ideas a lot of exposure to people, who gave lots of good comments and some pull requests. There were only a few actual semantic changes that came out of it, or ideas for such changes. Most of it has been about some things being unclear, rationales being unclear, and requests for stating motivations better.

Bram: The RETURN TRUE extension semantics… that implies that you’re not doing any delegation? Or you’re only delegating to a trusted thing or a thing of known code at the time you do the call? It makes it so you can’t do a graftroot where you have a graftroot.

sipa: You can, I think.

Bram: You can’t do partial delegations. So you’re either completely trusting something, or not calling it.

sipa: First of all, graftroot is not included here so that’s not a concern. Longer-term, I don’t see the problem really. When delegating, you delegate to a new script and if you delegate to a script that has an OP_RETURN TRUE, then that’s your problem- you’re not going to do that.

Bram: I know you don’t want to support colored coins, but if I did, then you have this sort of partial delegation- this outer thing that is enforcing the color, and an inner thing that might support transactions within the colored coin itself.

sipa: I think you can still do that as long as you structure the requirements on the caller as an assertion. I think the semantics you want is that, when delegating you first completely execute that level of the script. If it returns false, for any reason, you return false, and then you do all of the invoked delegations.

Bram: The main issue.. and I don’t have a good solution for this, is if you want an extension that returns some value. Like some new function that returns a new thing, then this extension mechanism makes it hard to do that.

sipa: Sounds like something to discuss when this becomes an issue.

murch: Any more questions?

What’s next

We need to finish addressing all the comments from review. There are still a few open issues. This is the last week of the Bitcoin Optech review sessions. I hope, and I think, the sentiment in general is positive, and I am hopeful that we will find widespread acceptance for these changes. We’ll have to work on requesting a BIP number; I think we’ll have it in the next few days. Then there’s work on reference code, unit tests, test vectors, and just review, and we need to open up a demo pull request to Bitcoin Core with an implementation… and then some unclear stuff… and then one day it will activate on the network. I am purposefully not going into details on how activation might happen. I think different people have different opinions on that aspect; that’s something up to the community to decide, how these things make their way forward.

In parallel, once it’s clear that these things will happen, there’s work that can be done on reference implementations for wallets and libraries. Things like bip174 PSBT extensions — I know Elichai has been working on that. Miniscript will probably want to be extended to support taproot. Given the whole structure introduced with the merkle tree and the tweaked key, just getting an overview of what script or what policy is enforced by a particular output becomes harder, so we will want better and better tools to actually be able to deal with this. There will be fewer and fewer ways that you can do these things manually.

That was my talk, thank you very much.

Q&A

Q: What’s the type of pushback if any you are seeing?

sipa: I think I’ve seen some comments on putting public keys directly in an output… but even that has been far less than I had feared.

murch: So deploy in 2 weeks?

sipa: No, I think the most valuable feedback is critical review including questions like “why are you doing this, is this part necessary” or comments like “this is a bad idea”. I haven’t heard any such comments. That might be due to either the right people haven’t looked at it yet, or the proposal is already polished enough that we’re past that stage. I think all kinds of comments are useful. It’s very hard to judge where acceptance lies. I can go on twitter and ask hey all in favor… I don’t think that the responses would be very useful, even the positive ones. This is a hard question. In the end, I think we’ll know consensus when we see it.

Q: Around this merkle tree and creating more complex scripts, what is the future looking like there with tools and wallets to actually build those things? What’s the work that needs to be done there?

sipa: Yeah, so, it’s interesting that at this point we’re really focusing on defining the semantics, primarily the consensus side of things. But there’s a whole lot of work to do on integration and on the signing side. A lot of it is optional. I suspect that many wallets will just implement single key signing because that’s all they care about, and single key signing isn’t all that much harder in taproot than anything else. At the same time, there’s a lot more flexibility and there are many more possibilities there.

Q: Given that the depth of the tree can be 128, you can have quite sizable scripts. Have you seen anything novel coming out of that in review?

sipa: There are obvious things like: you take a multisig and you split it out into a tree where every leaf is one combination of keys, and you aggregate the keys per leaf together. So you just have a couple thousand leaves… say for a 5-of-10 or something, I don’t know. And then you have a couple dozen leaves that are each just a single key. It’s far more efficient and much more private than doing it directly as a script that just lists all the public keys. Really a lot of this depends on what kind of policies people want.

Q: In the last major soft-fork, we activated segwit and we saw some pushback from some stakeholders and ecosystem people that hadn’t been involved that much in design. Are you aware of any activity to reach out to those subsets of the community like miners in particular?

sipa: No, I am not aware. But everyone is welcome to review. The ecosystem is probably more well organized now than it was a while ago. I don’t expect it to be a problem. Of course, we don’t know if miners will go along and how long it will take. This is all up to activation mechanism which is still an open question.

Q: What about potentially simplifying multisig applications; what’s going to be the most common or powerful use case you can get from this? If you reduce down to a single signature on-chain.

sipa: That’s a possibility, but it comes with complexity. The verification side of things becomes simpler, but actually building an interactive protocol between multiple parties that jointly sign for a single key is more complicated than signing a multisig today. It has pitfalls. I wouldn’t be surprised if in practice it ends up only being a few players that do this. That’s fine, because things are still indistinguishable.

Q: A quick question about the annex. Is that strictly for making transactions being able to weight their incentives properly…

sipa: That’s one reason. There’s another example which is the anti-fee sniping thing. Right now there’s a technique for anti-fee sniping where you put a locktime on a transaction for a block not far in the past. If a miner tries to intentionally reorg the chain to grab the fees of this perhaps high-fee-paying transaction, then they wouldn’t be able to do so and they would still need to wait. If there was a way to make a transaction that says this is only valid in a chain that builds off of this blockhash, then this becomes a lot more powerful. I don’t know if it’s sufficiently desirable, but it’s an example of something that you could do.

Q: You could even create a bounty to vote against a split or something like that, because you commit to only one side of the split.

sipa: This is fork protection, too. I think that’s the context in which it was originally proposed.

1h 26min

Other resources

http://gnusha.org/bitmetas/


Socratic Seminar 20

Date: November 30, 2020

Transcript By: Bryan Bishop

Category: Meetup

SF Bitcoin Devs socratic seminar #20

((Names anonymized.))

XX01: For everyone new who has joined- this is an experiment for us. We’ll see how this first online event goes. So far no major issues. Nice background, Uyluvolokutat. Should we let Harkenpost in?

XX06: People around America are showing up since they don’t have to show up in person.

XX01: How do we block NY IPs?

XX06: We already have people from Western Africa joining.

XX05: Oh he’s here? You can increase the tile number. To get everyone in the same tile display. You can do non-power of 2, but you can also set it if you want.

XX04: Can you hear me now?

XX01: Yes. I like your throne.

XX04: It’s just a couch. It’s out in the garage. Maybe I should get it. I think I should get it.

XX01: There’s an agenda published. It’s also linked in the chat.

https://www.sfbitcoindevs.org/socratic/2020/11/30/socratic-20.html

Ground rules

XX01: I think I’m going to screenshare my screen. Then we’ll go through the topics. I think the best way to run this is for people to raise their hand. Can someone test raising their hand right now? Okay, great. I’ll kickstart each topic.

XX01: The way we run these is that we have a list of topics to discuss. Anyone is welcome to participate in the conversation, ask questions, give their input into any insights they have into the topics we’re going through, the list of topics is linked in the chat. I’ll be sharing my screen with a webpage for each of the topics we go through. It will be something visual like a pull request, charts, news article.

XX01: If you’re not talking, just mute yourself for everyone else’s sake. Once I kickstart each topic, I think I’ll just call on people who have their hands raised and that will allow everyone to either ask a question or give their thoughts on it. Sometimes I might call on a specific person for a specific topic because it’s something they worked on or built. We’ll start going through the list now and see how this works. Other ground rules: be respectful. Typically we have a rule about privacy and no photos, which is hard to enforce here.

XX05: It’s been so long, there’s a lot of topics to go over.

XX01: If this goes well, we’ll just do this every month and it’s actually easier. Denise and I don’t need to lug around drinks, chairs and tables and all that.

XX01: The nice thing about this is that we have this sidebar for commentary. If you want to leave comments about things happening, feel free to throw it in there. If you have random questions, feel free to throw them in there. If you have comments you just want to make on the side, feel free to use the chat for that. I think it’s a nice non-distracting way to give input. I’ll share my screen here.

XX01: Can everyone see my screen?

XX01: Welcome everyone. I think all of you were here when we went over the rules. Let’s get started. We’ll go over news, statistics, pull requests, lightning pull requests, etc.

GLV patent expired

https://twitter.com/pwuille/status/1309707188575694849

Brink

https://brink.dev/

David Harding, John Newbery and Mike Schmidt have started a new bitcoin development funding program. They are taking a unique approach to funding bitcoin development. Are they here? Steve?

XX08: I don’t think they are here. If they are, they could speak to it. I can speak to it a little bit. I think they are introducing the first mentorship program for open-source bitcoin development. The first formal mentorship program. I think that’s exciting for the space. I think they have identified their first fellow and will be announcing that person soon. I guess the way I view it, it’s another great organization like Chaincode and Blockstream, and MIT DCI and Square Crypto and these different orgs focused on open-source bitcoin development, it’s great to have another entity focused on that.

XX01: Cool. It will be cool to see what they will roll out this year. Looks like John Pfeffer and Wences Casares are both funders and supporting this group. Pretty awesome.

XX05: It’s a one-year program. It seems long. That’s also good, because it can be more intensive. Chaincode was a summer or a few weeks. It’s a year of vocational school, in a sense, to learn bitcoin stuff.

XX08: You’re right, it’s a year. Brink has both the mentorship program which they are calling Fellowships and more traditional grants too. They will be funding established developers through regular grants, as well as the fellowships. It’s more substantial than a few weeks. They have already signed up several funders not listed on their page, like Square Crypto, Kraken, and one more. Gemini and Human Rights Foundation (HRF).

Bitcoin Core wallet 0.21 what’s new

https://achow101.com/2020/10/0.21-wallets

XX01: Bitcoin Core v0.21 is the most recent release; it has a number of awesome features in the wallet itself, including wallet descriptors and a sqlite database backend, and achow101 has a nice writeup on his site. We’ll go into the details of these pull requests a little bit later tonight. I just wanted to give a little taste.

Lightning pool

https://lightning.engineering/pool/

Lightning pool is a marketplace for lightning channels. It’s the next step in building out a true market for liquidity on the lightning network. Would someone from Lightning Labs like to share more? Oh, Segfault is here.

XX05: It’s an auction that runs every 10 minutes and people can buy/sell channels through leases. So you can buy a channel for 20 days for 2%. It can be negotiated, and has a few other cool features. We’re building on some ideas we talked about in the past.

There’s a @lightningpool on twitter, it’s not made by us. Roasbeef worked a lot on the lightning pool paper. Also shadowchain stuff?

XX05: We have something called a shadowchain which is a reusable application framework for applications on bitcoin in general. This can be used for fidelity bonds, for example, or an untrusted system– you have a protocol where you have off-chain validation, versus actively putting in all chains. Every batch in pool is its own thing; signature aggregation is another example. It’s a protocol that can sign off on blocks of data they want. It’s like pessimistic roll-up where you only care about things on your own input, so you have fraud proofs and other things. Also, we have a channel on our slack.

XX01: Looks like fees during the alpha will range from 5 to 25 basis points?

XX05: That’s what we as the operator will be charging for our services, like coordination or matching. You have an embedded chain in the bitcoin blockchain that has all the auctions and all the L2 modifications. So we can create an authenticated tries of the … on bitcoin itself.

https://lightning.engineering/lightning-pool-whitepaper.pdf

XX11: Between 5 and 8% APR. Loalu doesn’t like that wording though. It’s not compounding, so you can think about the risk-free yield for coins on lightning in a non-custodial manner.

XX01: Any questions? Steve? Your hand is up. This is working better than I expected to be honest.

bitcoin-core-pr-review-club

https://bitcoincore.reviews/

XX01: John Newbery hosts Bitcoin Core PR review club. I haven’t participated yet, but I look at what they put out, and it’s very interesting. It’s a good way to get a sense of what’s happening in bitcoin development, dig through the code, have walkthroughs of what’s going on.

XX03: This isn’t a recent thing. It’s been going on for a year or two.

XX01: Did I miss it? Was there always a website for it?

??: He recently had the 100th review.

XX01: Wow, I missed that then. It’s pretty cool.

Crypto Open Patent Alliance

https://open-patent.org/

XX01: Square Crypto is helping to kick this off. The goal is to create an alliance around patents to prevent malicious use of IP in this industry. Steve?

XX08: Technically, it’s Square, not Square Crypto; it’s a Square initiative. They announced it a couple months ago. Blockstream did work around this a few years ago with a similar intent and program. The idea here is to help defend against patent trolls, or any aggressive, offensive crypto patent thicket. This serves a few purposes: member companies or individual developers (anyone can join, including projects or developers) commit to not asserting their patents offensively, and those patents are added to a pool which any of the members can use to defend themselves. You don’t need to have any patents to join. If you don’t have any patents and don’t intend to, but you still want to protect yourself, it can be a good idea to join.

XX08: We’ll soon be announcing some major companies that will be joining. I think Blockstream is on the public record saying they are going to join. It’s a good way to fend off potential problems in the future.

Bitcoin design grants

https://medium.com/@squarecrypto/square-crypto-designer-grants-a9a3982c1921

XX01: In the design world, there’s some attempts to bring more designers into the fold and help create user experiences for bitcoin. Square has a number of crypto designer grants that they have available. I don’t know if any have been awarded yet. Are you still looking for people to apply?

XX08: The initial thinking of Square Crypto was that we would hire a full-time designer to contribute. After speaking with 50-60 designers around the world, it was clear that there were many designers quite interested in bitcoin, but they were off on their own little islands and they just really needed a community to participate in and get to know each other. So this summer we announced an open-source bitcoin design community, with now 750 people that are part of it. There are about 30-40 who are fairly active contributors. We have given out 7 or 8 grants so far to both designers and user-experience researchers. The primary work product or outcome we hope this community will develop is a set of open-source design guidelines that will help wallet or application designers avoid starting from scratch. Any designer new to this space, or frankly anyone new to bitcoin, finds bitcoin quite counterintuitive when you have to build a non-custodial wallet and face those challenges; experience building client-server mobile apps doesn’t help so much. So hopefully the design guide will help with that. A number of the people we’ve been giving grants to are helping specific projects in the space improve their design. There are volunteers who don’t have Square Crypto grants who are now more engaged on Bitcoin Core and trying to help the Bitcoin Core GUI. Square Crypto helped kick it off, but it’s intended to be a public good and an open-source bitcoin thing. Square does not want any specific control over this. We’d like to see other companies fund designers and researchers as well.

Bitcoin treasuries

https://bitcointreasuries.org/

XX01: Bitcoin has seen large adoption in corporate treasuries making allocations. I think it has implications on the tech downstream, like custody solutions, and I think it’s important to understand who is using bitcoin now. This is pretty interesting to see. I don’t know if anyone has specific things they want to discuss.

((isn’t there an accounting reason why this doesn’t work for companies?))

GLV patent has expired

https://twitter.com/pwuille/status/1309707188575694849

XX03: This is the GLV endomorphism patent that enables a faster elliptic curve multiplication on the secp curve. The secp curve was specifically designed to enable this optimization. libsecp256k1 started as a hobby project of mine trying to figure out how much of a gain it would actually be. Due to patent concerns, it was later made optional. The optimization was never used in any releases of Bitcoin Core or otherwise. 0.21 will have this optimization enabled. The code for not doing the optimization has in fact been removed from the library.

XX01: So the impact of that, if I recall, is a 25% speedup in signature validation?

XX03: Something like that. It’s huge. It may not sound like that much, but let’s just say optimizations of a couple percents are rare.

GM: Hal Finney actually wrote code to implement this optimization while he was still posting on bitcointalk, and he sort of dropped that code there– but his implementation was built on top of openssl and so it didn’t have a lot of other optimizations. So Pieter’s initial work was combining that along with other high performance optimizations.

XX01: It’s cool to see this finally live. Are there any other patented optimizations that we’re waiting to unleash? Or is this the last one lined up for a while?

XX03: There might be one. It’s much smaller. Co-zee stuff?

XX04: I think that we were actually on the cozee stuff as far as patents went. That was one of those things where we wanted to do some more research on the patentability of it. But it’s a small optimization, in any case.

XX03: For anyone who would go to Google Patents and see that it claims that it is still live, I think their information is rather outdated. Blockstream actually had a patent attorney… ah, it says it’s expired now. A couple of weeks ago it was still saying it was live. Blockstream had a patent attorney verify that it actually expired.

“This invention provides a method for accelerating multiplication of an elliptic curve point Q(x,y) by a scalar.”

XX04: This was originally a Certicom patent, and then they got acquired by RIM, and then they were acquired by Blackberry.

XX01: Okay, let’s move on from news to statistics.

Segwit adoption

https://charts.woobull.com/bitcoin-segwit-adoption/

XX01: I found these charts interesting. I wanted to throw out some trends and see if anyone had anything to say about it.

XX06: One thing that I thought was interesting on bitcoin segwit adoption chart is that you can see it going down lately. I think what we’re seeing there is a lot of people are batching payments. If you look at the charts on payments that use segwit versus transactions, it’s just that people who adopted segwit early are also the people who adopted payment batching early. The number of payments using segwit has been going up.

XX04: What about old coins excluded? People are often bringing up pre-segwit coins that are old and there’s no way they could have used segwit.

XX06: I’m not aware of that chart, but it makes sense. We should mention that in public, and someone will do it.

XX01: One of these lines is hard to see here. The payments using segwit have increased, it’s just hard to see that on my black background. The other reason I wanted to bring this up is because taproot is obviously a proposed soft-fork coming with a new version of segwit addresses, and seeing how the previous network upgrade’s adoption went, and where it has potentially plateaued, is interesting especially in light of a new upgrade coming down the road. I have a lot I could say about this myself, having run a bitcoin company and having seen how infrastructure ossifies and companies get stuck with legacy infrastructure that they don’t bother to update. What can we do to make it easier…

BB: address index wasn’t that interesting.

XX01: At Redacted we have a golang indexer and wallet infrastructure, watchers, script descriptors. We’d like to hear what you’re doing at Avanti. Our implementation is in golang. I also wanted to talk about bitcoin dev kit (BDK), which might help here with developers building out more standard infrastructure projects for companies to use.

XX05: I think taproot is going to have slower uptick. A lot of the lightning nodes will probably upgrade to taproot because of the privacy benefits. Lightning pool— whenever you have a batch of transactions, single payout script, which will be interesting. I think lightning operators will upgrade quickly, and be a new constituent in the pool of people upgrading, for things in the future including script updates.

XX04: About 30% of all the transactions are still blockchain.info as of a few months ago, which is a supermajority of the non-segwit transactions that were going on at the time. Someone pointed that out to me. I suspect it’s still the case now.

XX06: …. now it’s around the time where wallets can think about making segwit their default. Have you ever bumped into trouble? Is there anyone that can’t send to native segwit by now?

XX01: It’s rare these days. Most of our people are buying bitcoin for the first time, so they use new software. There are still a lot of existing exchanges that don’t support sending to it. I don’t think Binance supports pay-to-segwit.

XX03: Witness scripthash.

XX01: We use multisig segwit deposit addresses, sometimes they’re too long for certain wallets to send to. For those of you who don’t know, I can pull it up here, there’s a compatibility table on bitcoinoptech for who supports what.

Mining stats

https://bitaps.com/blocks

https://www.blockchain.com/pools

XX01: Looks like Poolin is quite large these days, and F2Pool. I’m not close to the mining space but I like to look in once in a while. What’s happening in mining these days? Any interesting trends or surprises? Things that people want to bring up?

XX11: There was news that– because of the rainy season– a lot of mining within China was moved. Why have we never heard that before? Why was that a thing this year if it’s an annual event where the hashrate drops?

XX04: We’ve heard this every year for the last four years that it was seasonal. That’s not new.

XX11: The drop was more dramatic this year.

XX04: True, but we have seen this every year in the rainy season because of all the hydropower in China.

XX05: One of the engineers that left Bitmain started one of those pools. It’s cool to see them with mindshare with some of the OGs in there as well. I haven’t looked at this chart in a long time.

XX11: Poolin had a non-compete and couldn’t mine bitcoin for a year. Now they are back after the year. It was about the unbundling of Bitmain, in a sense.

XX06: I think that the latest numbers on how much of the mining power are in China are down to something like 65%. I haven’t tracked that over the years. I think that’s lower than it has been in a while.

XX01: Most of these pools seem like they are based in China, but I don’t know where the hashrate is.

XX04: The big advantage of mining in China is proximity to manufacturers and lower cost of building out mining farms. There’s a lot of inexpensive power in North America. So it doesn’t surprise me that there’s some transition out of China.

XX01: I have seen some increase in the US like with Layer1 and some of the pool stuff Blockstream is working on. I don’t know if any of that is material or if anyone has information about that.

XX09: (redacted) Like Uyluvolokutat said, there’s a lot of very cheap power and good rule of law in North America. It’s attractive.

XX09: Binance Pool is usually around 6-10%. It’s interesting that these big exchanges using pools as a loss leader to build relationships with miners. I don’t know how to think about that strategy other than building relationships or maybe being able to negotiate cheaper bitcoin buys and cutting out market makers. I think that’s an interesting new thing.

XX01: The financialization of this industry is still in its infancy. There’s going to be a lot of second-order effects across the industry.

XX01: One other thing. I know there’s increasing popularity on the financialization trend or professionalization around this– Matrixport is doing well with miners in China, derivatives, things to help them hedge. It’s an interesting trend worth keeping an eye on.

Taproot activation

https://taprootactivation.com/

Al: Taproot activation is a major topic of discussion lately. I think everyone has a bit of PTSD from the block size wars and segwit, to say the least. Nonetheless, I think there’s a lot of excitement about taproot – an upgrade that has been in the making for years now. It seems to have an increasing amount of broad support – though that remains to be seen when it comes to how it is to be activated. When people need to put their money where their mouths are, we’ll see.

XX01: Poolin has created a tracker to track the hashrate that has indicated they would be supportive of the taproot soft-fork. It’s at taprootactivation.com. I think it’s interesting to see. A lot of the major pools have indicated support for this upgrade. I’d be curious to hear any insights from people closer to the ground on this stuff than what any– any insights on there?

XX05: What’s with the “no answer” on Binance Pool?

XX04: It’s a “no comment” kind of answer, not a no.

XX05: So it’s “I’m not running for President”.

XX04: There were a few crappy news outlets that sent out an article saying Binance had rejected it. But “they hadn’t even looked at it” was the comment I got from them. I think those numbers add up to like 65% of the hashpower or something like that.

XX01: It says 82% here.

XX04: I think we’re going to see… the challenge here is that this is a verbal commitment. When it comes time to update their software, the schedule may get pushed out and it might take a while to get that level of support. But it looks good to me.

XX06: There’s only 91% of all hashrate accounted for on this list. So with an activation threshold of 95% if we do the same as before, we haven’t even talked with everyone.

XX04: I think this is really good, though, because people had a lot of concerns that this would be some really dramatic, drama-filled activation mechanism issue this time around. I think this is at least a good first example that no, this can be activated without a lot of drama.

XX08: I’ve had direct conversations with poolin, Huobi and Slush and got positive vibes consistent with them saying yes. Again, that doesn’t mean anything until it’s time, but they understand taproot and I think they are legitimately supportive of it. To me the big open question remains the activation method and there’s not consensus among developers and as you can see on this table there’s not consensus among pools. Somewhat unclear to me how that gets resolved.

XX04: I think this goes a long way towards resolving that question. The question among developers was how aggressive does the activation mechanism need to be? If there’s broad support from pools, it will justify a less aggressive mechanism because it will activate anyway. We’ll see, though.

XX06: Also, we can always start with the least aggressive and, as we build rapport, we can commit a level further and ramp it up.

Revisiting squaredness tiebreaker for R point in bip340

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html

XX01: Pieter asked about revisiting the squaredness tiebreaker for schnorr signatures. Do you want to give an overview of that discussion and the end result?

XX03: In short, we thought we had a good reason for using quadratic residues instead of evenness as a tiebreaker. We only want to store an x-coordinate because wasting a byte to encode 1 bit is stupid. Which of the two y coords do you take, corresponding to a particular x coord? We thought we had a good reason for picking quadratic residue which was kind of cool and seemed faster, but later we realized actually you don’t want to do that for public keys because it has some compatibility issues and even later we realized it wasn’t a good idea in the first place and we shouldn’t do it. This email you’re pointing out is where I explain why we had come to the incorrect conclusion and why some recent work on faster algorithms made it not only the case that it wasn’t actually faster, but that it was actually slower. So that’s all finalized now, and it’s now using evenness everywhere and that’s what’s in the bip and that’s what’s implemented. Everyone can forget about the quadratic residues.
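A minimal sketch of the evenness tiebreaker being described (matching the lift_x idea in bip340; this is illustrative, not library code):

```python
# Given an x-only key or nonce, recover the unique secp256k1 point with an
# even y coordinate (curve: y^2 = x^3 + 7 mod P).
P = 2**256 - 2**32 - 977  # secp256k1 field prime

def lift_x_even_y(x: int):
    y_sq = (pow(x, 3, P) + 7) % P
    y = pow(y_sq, (P + 1) // 4, P)   # square root works since P % 4 == 3
    if pow(y, 2, P) != y_sq:
        return None                   # x is not on the curve
    return (x, y if y % 2 == 0 else P - y)
```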

XX01: For those of you who might be out of the loop— a public key is a point on a curve; in bitcoin you have two coordinates but you only need the x coordinate. There’s a positive and a negative y coordinate corresponding to the x coordinate, and you need a standard method to choose which of the y coordinates to use so that you don’t have to signal which one was chosen.

XX03: This isn’t just public keys, but also the public nonce inside the signature, which is what this was actually about.

Bitcoin archaeology

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-November/018269.html

BB: (timestamp your old emails, archaeology….)

XX01: What does archaeology in bitcoin mean at all? Dan Bryant went through and tried to figure out how to run the oldest versions of bitcoin core, using the old libraries that it linked to. It’s very interesting. I think people are interested in history.

XX04: I was able to get bitcoin v0.3 to sync, back in 2017. You have to work around some bdb lock issues.

XX03: If you go further back, the problem is v0.2.9 which didn’t have the checksum in p2p protocols so you need some sort of adapter to make that talk to the current network. But it should be possible, in theory.

XX01: That’s a pretty interesting project. I like that.

bech32 updates

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-October/018236.html

XX01: Rusty brought up some ideas for how to deal with potential issues with bech32 encoding including the malleability issue discovered a year or two ago at this point with the checksum. Pieter or Rusty, do you want to give a high level overview of this discussion?

XX03: I was planning to reply to this email today, actually. In short, I agree, and I think we should do the clean break and switch to a new checksum for v1 and up, regardless of length. …. This will protect against a situation where old software is given a new-style address, which is something you don’t get with the proposal we originally had, because v1 would keep the same checksum algorithm. Anything to add?

XX10: I wonder if there’s some background notes?

XX05: Why? Uyluvolokutat says we messed up? Are addresses longer?

XX10: That was the dream, that there would be one upgrade and then it would all be great. There wasn’t an understanding at the time that this would be how it works. So people do the checks for v0, and then they reject for v1, so there are already a lot of implementations that— some of these things were implicit, not explicit, that they should accept v1 and v2. But it turns out that the people who implemented it in c-lightning also didn’t understand: if it’s v2 do the length checks, if it’s v1 just fail. So it was a common mistake. Bitcoin Core also didn’t relay them early on, so in the lightning code we had to be sure to not relay them or accept them. ….. Since everyone is going to update their software anyway, it will force some people to get upgraded, so sometimes the way out of this mess is to fix it the right way, and the right way to fix it is saying if it’s v0 it’s this checksum algorithm and otherwise you use another checksum. People are going to upgrade anyway, so let’s let them upgrade to make this problem go away.

XX04: There’s more than one service where if you send to a v1 address right now, they just burn it to a v0 address using the v1 payload and the coins are unrecoverable. Just refusing wouldn’t be the end of the world, but burning the coins is pretty bad.

XX01: Is the proposed new checksum for segwit v1+…. Is the consensus right now for taproot addresses to use the existing checksum?

XX03: The idea — so, Rusty’s proposal, which I agree with — is that for v0 witness outputs we keep using the checksum we have been using so far. Then for v1 and all future versions, the new checksum algorithm would be used.
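As a hedged sketch of that rule (the second constant shown is the one eventually adopted in BIP350 as “bech32m”; it was not settled in this discussion):

```python
# Pick the checksum constant from the witness version: v0 keeps the original
# bech32 constant, v1 and up use a different one.
BECH32_CONST = 1
BECH32M_CONST = 0x2bc830a3  # the constant later standardized in BIP350

def checksum_constant(witness_version: int) -> int:
    return BECH32_CONST if witness_version == 0 else BECH32M_CONST
```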

XX04: The problem with continuing to use the old one with v1 with the length restriction is that we’re still left with the coin burning problem.

XX03: There’s also an issue that all software would remain vulnerable, even if we added a length restriction to v0. All software would be vulnerable to the length problem…. and by forcing all receivers to change the algorithm, we protect all old senders simultaneously.

XX10: It’s clear that they haven’t upgraded. We should probably generate a v2 address and ask people to start testing their sends to that. A miner could scoop it up if they want to. Just to make sure this doesn’t happen again for the next time.

XX03: Another angle that might have caused the problem, mainly…. OP_0 is different from OP_1 through OP_16, so maybe having the checksum differ between them is not such a terrible thing. v1 through v16 you handle identically anyway, but that’s not true for v0.

https://cbeci.org/mining_map

https://garethrees.org/2013/06/12/archaeology/

Hold fees

https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002826.html

XX01: A little bit of a lightning discussion. I thought this was interesting: this idea of trying to prevent spam on the lightning network — locking up HTLCs, pinging nodes too much — by requiring payments for actually doing things on the lightning network. This email is basically positing a few ideas for how to prevent spam on the lightning network. Just a cool conversation.

XX05: It tries to present a different way…. we have an old page talking about… but it was a hacky solution, and I’ve gone away from making people pay for failures. …. someone’s stale payments could cause your wallet to be drained, so that’s bad UX. Having this effective… privacy preserving of … there could be more legs there. From my point of view, I don’t know that it’s correct that people can be paid for failures. So you can delegate your right to BCI because they have… but I think some gains were made in different ways, like the forwards and the backwards things, but I don’t know if that’s the right way to handle it personally.

XX10: I agree with you, redacted. There’s a feeling that perhaps eventually we’re going to have to require some anti-spam payment.

XX05: To Bryan’s comment, you can have a privacy-preserving bond of coins in some output and I can prove that– it’s like attribution of HTLCs and the problem goes away. I have some ideas about doing it without too much heavy stuff.

XX04: Joinmarket has done some privacy-preserving anti-spam stuff as well.

XX10: It turns out there’s a time difference between a short payment and a long payment. There are <5 second payments, and then there’s a multi-day case where it is hard to come up with a number upfront. A tiny fee can stop fast spam, but what about slow spam? It’s an interesting design space here. There’s no obvious wins. The problem with not solving this is that it increases an incentive to deanonymize the network.

XX05: I think this is going to be mitigated by… being compensated by putting liquidity into the network and having passive compensation over time, so people are prepared for the worst case OP_CSV deadlock. I think it’s another thing to factor in.

XX10: That’s more a griefing attack, but you can do it, but it will cost you. So you can magnify how much you put in by about 20 something? You can squeeze some more in there if you really try, but that’s still annoying. You can attack the network today with that, pretty limited capital. If we have a liquidity provider war…. attacking one person because you want liquidity on yours; etc. There’s a lot of work ongoing on this. I think it’s important.

XX01: Another post by gleb was around “Mitigating channel jamming with stake certificates”.

XX05: This is related to what kanzure and gmax talked about with respect to proof-of-burn to prevent spam. This is about proving to you that you have an output locked up for 30 days, and my proof is unique to you at a particular time. You would be able to attribute who is routing towards you; so nodes are able to quantify their relative risk. This problem has a lot of sub problems. What about separating spam from the griefing and everything? This can solve one of them.

Bitcoin dev kit

https://bitcoindevkit.org/

XX01: Since February when we last did this; another one that is interesting is Bitcoin Dev Kit. It helps wallet developers join forces and works … BDK, a rust bitcoin library. What do people think?

XX08: A few different developers started two different projects earlier this year, and then merged it into one. Steve Myers is a grant recipient of Square Crypto. And another one funded by Bitfinex. I’m bullish on this project. As was discussed earlier, Bryan Bishop talked about address index and other things that a lot of companies have to build themselves. At Optech, we learned that lots of companies are rolling their own in terms of bitcoin libraries. Some people use bitcoinj but nobody is satisfied. bdk is a wallet library kit that makes it easier for any wallet developer to build a wallet. I think they are focused on mobile first. It uses rust-bitcoin and builds on it, but it does much more than the primitives in rust-bitcoin. It does coin selection, which I think Murch can speak to. I think Mark has been looking into this project.

XX06: They have native segwit, script descriptors, and they have implemented a very simple constellation of added random selection and branch and bound.

BB: There’s also a python library from Jimmy Song and Michael Flaxman.

XX03: Also libwally.

XX06: There were two multisig wallets that appeared recently; Specter and Nunchuk. They have both open-sourced their wallet code, not necessarily their UX code, but there are a lot of new things coming out.

https://specter.solutions/

XX01: I am excited to try out Foundation Devices.

bolt12 and reusable invoices

https://twitter.com/rusty_twit/status/1328826839024865281

XX10: I have lots of stuff to say about it. This is basically– we have been mumbling about this for about 2 years now. I tried to implement it and get a spec out there. Basically it’s the classic solving the problem by another layer of indirection. You have an invoice and you make a request over the lightning network to get the real invoice. At its most trivial, it gives you a reusable invoice format. This is an offer for something; you scan it, your wallet reaches out, gets the real invoice, and pays it. This gives you a lot of cute things – the obvious one being that people abuse invoices at the moment. The secret you promise in return for payment is a secret and you can only send that invoice to one person at a time because it’s a secret. You should only be using it once. This is painful because you shouldn’t be tweeting it to people. … It’s nice to have a transient key you can associate with an invoice request, so if they want to prove they are you later, you can get a proof of payer and get a proof that yeah this was me, like for refunds. It uses bech32 in the draft without a checksum because we didn’t know what the checksum was going to be anyway; it’s not clear how much we gain from having a checksum, we just needed some random encoding and the codebases already have bech32 somewhere. There’s multiple threads on that; I spent a week tweeting about it. If you want to read it and see all the goodies, most of them are implemented already.

XX03: Very cool.

XX01: Looks like you have a regtest offer here already. Neat.

XX10: Yes, but it’s so drafty, it might not work next week.

XX01: Random question on invoices and bech32. Is the discussion– the discussion earlier, relevant? I have also heard it mentioned that the checksum for bech32 in general is meant for quite short strings, and wasn’t meant for strings as long as an invoice. Is there any updated discussion around that given the new developments in that encoding standard?

XX10: LN invoices are pretty short, for offers. The default is bitcoin mainnet so you don’t have to include the genesis block in that case. Anything under 1024 characters has some guarantees in the original checksum for bech32. If you don’t have the checksum, you scan it, you try to pay it, we would basically do key recovery to figure out what node I need to pay to and if there’s an error somewhere it will say I don’t know how to pay that node, it’s nice for it to say invalid invoice instead of saying I don’t know the node. That’s why we wanted to keep the checksum in the first place. This uses x-only pubkeys which is what all the cool kids are using; it’s very clean, it’s quite nice, we switched to it within a day. There’s no key recovery for that format; there’s explicit key in there– so the checksum doesn’t really buy us anything anymore.

??: Is there a use case for spontaneous payments?

XX10: Spontaneous payments are really cool. Being able to tip someone without having to interact with them. You don’t have to request them to make an offer first. I think that’s cool. The problem with spontaneous payments is that you don’t get a receipt and you can’t prove that you made a payment. If you have a static offer on your page and you say here use this to send me money, then …. cryptographic signature across that, I can prove the amount is paid, that I paid it, because I hold the payer key. You can’t fake a payment, and you can’t fake that you paid something, and you can prove that you paid. It has a lot of advantages. The downside is that if you try to do a lot of them, there’s some latency fetching the invoices since it’s a roundtrip.

XX05: Spontaneous payments… using something like d-log based thing like HTLCs, does allow a secret to be… one thing we realized is that we can’t probe a window-sized payment because the …. not really sure how to… the entire flow. …. do that aggressively by not giving them the last transaction; so to probe across many channels, you have to do something like this. Once you have a … you can do the best of both worlds and I think they both work in those use cases. We have seen some people roll other solutions we have seen in the past.

Signet merged

https://bitcoincore.reviews/18267

https://github.com/bitcoin/bitcoin/pull/18267

XX01: Pretty cool. Nice development. We’re thinking about using it internally for some of our environments at Redacted. Excited to see adoption here pick up. I’d be curious to hear if anyone has had experience playing around with it. Signet has taproot activated, yes. There’s a taproot spend— someone forked Esplora.

XX05: Who are the signers? Can you dynamically add and remove signers to the thing?

XX03: There’s signet and then there’s advanced signet, you can define your own. The default signet is a network that is signed by ajtowns and bellawoof…. alm? kallewoof. I think it’s 1-of-2 multisig. I think there’s no affordance for changing the set of signers at runtime. But it’s fairly cheap to change the network. Also I believe that as far as Bitcoin Core is concerned it does not have taproot activated. But given the network is mined by two private instances, they could choose to enforce taproot rules if they want and then it would be active. In such a centralized setting, they individually define the consensus.

XX01: How do you get signet coins?

XX03: I think there’s a faucet, or even a tool that lets you request coins automatically.

XX01: That was a pull request we followed from beginning to end. Yes, there’s a bitcoin review club that went over it. There’s a link in the doc for that.

bip 340-342

https://github.com/bitcoin/bitcoin/pull/19953

XX01: The big one. The big topic of discussion. bip340, bip341 and bip342 were merged into Bitcoin Core. This was the effort of a lot of hard work from a lot of different people. Great job all around. It’s exciting to see the progress here and see the excitement across the various stakeholders in bitcoin. I’d be curious to hear updates on anything that we haven’t discussed yet that you might think is worth bringing up. Has anyone spent some time building it into their own infrastructure or preparing their own projects to support it? Any insights that anyone would like to share? Anything else you want to add?

XX05: From my end, I’m starting to look at implementing it for real. I had a toy thing that I was letting sit for a while. At this point, I want to start adding it to btcsuite and lnd and maybe get around to testnet activation. This is back on my radar.

XX01: Phil has been working on some go stuff about this.

XX05: I saw his Schnorr thing. It doesn’t use some of the better integer representations, but we can work with that.

XX01: Anyone else? Anything else to add about taproot?

XX04: when wallet support?

XX03: We have time. Now that we have descriptor wallet support in 0.21, or will have it once it gets released– we have cut two release candidates, it’s imminent.

XX03: It’s 0.21, but it will be 22 in the future for the next release. It would have been nice to have “version 21”, I know.

XX01: Oh look who just joined. Funny timing. Andrew Chow just joined.

XX02: i got a message from Murch saying I should join.

XX04: Murch is kind of super-natural, so that explains it.

XX03: … adding things like taproot to the wallet, at least a simple single-key version of it but probably also some more complex things, will be significantly more simple than hacking it into the old code.

XX01: Andrew, I think you had a post about this or there was some pull request? Not sure what the status is. I saw a discussion somewhere about bitcoin versions are changing. What’s the latest there?

XX02: The next version will be called “22.0” instead of “0.22”. All we did was drop the leading zero. That was merged shortly after branching of v0.21.

XX01: So there will never be a v1.0?

XX02: We’re skipping 1.0 through 21.0 and going straight to 22.0.

XX01: So version inflation?

XX03: We should have made it 42.

XX01: Shouldn’t we have made it 0.0000021 since it’s bitcoin? That’s just a joke. Okay. We also have descriptor wallets as well. We can get to that.

XX06: About taproot, I’ve been looking at how 2-of-3 multisig would end up transaction-size-wise versus single sig and so on, and it’s just another huge size improvement. It’s going to be 58 vbytes. Comparing that to 2-of-3 multisig with P2SH at 297 bytes, it’s almost a 5x reduction in blockchain space which I think is pretty remarkable. Then the other thing that I’ve been really thinking about is if people are now going to start implementing sending to v1 addresses first, like they did with native segwit, where sending to it was supported first and later it was enabled as a non-default receive address…. then I think there’s a lot of room for people to upgrade to native segwit by default, because when they support taproot they will therefore go to native segwit. So we will see a big push of segwit adoption with taproot getting merged.
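Rough arithmetic behind those numbers, as a sanity check (approximate, assuming maximum-size signatures; not taken from the discussion itself):

```python
def input_vbytes(non_witness_bytes, witness_bytes):
    # Witness bytes get the 4x segwit discount in weight accounting.
    return non_witness_bytes + witness_bytes / 4.0

# Taproot key-path spend: outpoint (36) + empty scriptSig length (1) + nSequence (4),
# witness = stack item count (1) + push length (1) + 64-byte Schnorr signature.
taproot_keypath = input_vbytes(36 + 1 + 4, 1 + 1 + 64)            # ~57.5 vbytes

# Legacy P2SH 2-of-3: outpoint (36) + 3-byte scriptSig length + scriptSig
# (OP_0, two ~72-byte DER sigs with push bytes, 105-byte redeem script push) + nSequence.
p2sh_2of3 = input_vbytes(36 + 3 + (1 + 2 * 73 + 2 + 105) + 4, 0)  # ~297 bytes

print(round(taproot_keypath, 1), round(p2sh_2of3))                # 57.5 297
```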

XX01: … how far are we from the world where can someone can spin up a 3-of-5 cold storage with a single pubkey representing a threshold required to sign for it?

XX03: … the first round can be pre-computed before you know the message. There are some edge cases to worry about, but in that case you have a non-interactive signing protocol. You have everyone publish their partial signatures, and then they can be combined. It’s a remarkable technique. It’s just using two nonces instead of one nonce. Then you MuSig those two nonces at signing time. This was pretty much invented simultaneously by three distinct groups of people.
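As a loose sketch of the two-nonce idea in MuSig2-style notation (an illustration of the structure only, not the normative scheme; here the a_i are key-aggregation coefficients and P-tilde is the aggregate key):

```latex
\begin{align*}
R_1 &= \sum_i R_{i,1}, \qquad R_2 = \sum_i R_{i,2}, \qquad
b = H_{\mathrm{non}}(R_1, R_2, \widetilde{P}, m), \qquad R = R_1 + b\,R_2 \\
s_i &= r_{i,1} + b\,r_{i,2} + H_{\mathrm{sig}}(R, \widetilde{P}, m)\,a_i\,x_i \pmod{n},
\qquad s = \sum_i s_i
\end{align*}
```

Each signer can publish its nonce pair before the message is known; because b depends on everyone’s nonces, no participant can steer the effective nonce, which is what makes the final signing round non-interactive.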

XX03: I should also mention that this is not yet in the MuSig2 paper… but MuSig2 is compatible with privately nested musig, where you can have a multisig of a number of participants and one of them is a multisig themselves of a hardware wallet and a software wallet, or something, and it doesn’t need to reveal that to the other participants. This is not something you can do with normal MuSig. I need to stress that this is not proven yet.

….

Use wtxid for transaction relay

https://github.com/bitcoin/bitcoin/pull/18044

XX03: The lingering issue that existed after segwit was that transactions are announced and requested on the protocol using txids. In segwit, the txid does not fully commit to all the data you use to verify a transaction, in particular the witness. If someone gives you a segwit transaction with an invalid witness, you can’t blacklist that txid for denial of service protection, because you don’t know the txid is invalid; you only know that particular witness is invalid. This has some effects on potential bandwidth where you request the same transaction multiple times. When nodes disagree on what the policy is on the network, then you could have the same transactions announced over and over again and they could be retrieved from nodes over and over again. This problem specifically has been addressed using much simpler changes as well. This was a fundamental improvement where nodes can negotiate– just using wtxids instead of txids for announcements and requests. It’s in, it works.
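A minimal sketch of the distinction; the two serialize_* helpers are hypothetical placeholders for “without witness” and “with witness” transaction serialization:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def txid(tx) -> str:
    # Commits to inputs and outputs but NOT to the witness, so a peer cannot
    # safely blame this identifier for an invalid witness.
    return dsha256(serialize_without_witness(tx))[::-1].hex()

def wtxid(tx) -> str:
    # Commits to the witness as well; announcing and requesting by wtxid means
    # the identifier matches exactly what was (or was not) validated.
    return dsha256(serialize_with_witness(tx))[::-1].hex()
```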

TORv3

XX01: The previous version of advertising node location did not support the longer torv3 addresses, because there was a length limit, and this has been updated and merged into core and will be shipped in the v0.21 release?

XX03: Exactly. It was a bip written originally around 3 years ago by Wladimir with the idea that we would eventually need the longer address length. Torv2 is deprecated and the shorter address length will disappear in less than a year. We couldn’t use the hack of putting it in an ipv6 packet. There’s a specific network class now for a torv2 address, torv3 address, ipv4, and ipv6, and i2p, and what’s the other one supported? CJDNS.

XX03: torv2 addresses are only 80 bits. We sent them over the network by putting them in some private ipv6 range which was something we’ve been doing since 2012. But now I think ipv6 addresses… … and torv3 is 256 bits.
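For reference, these are the network IDs and fixed address payload sizes defined by BIP155 (addrv2), which is what makes the 256-bit torv3 addresses expressible; values are taken from the BIP:

```python
# BIP155 (addrv2) network IDs and address payload sizes, in bytes.
ADDRV2_NETWORKS = {
    1: ("IPV4", 4),
    2: ("IPV6", 16),
    3: ("TORV2", 10),   # the legacy 80-bit onion identifiers mentioned above
    4: ("TORV3", 32),   # ed25519-pubkey-based onion addresses
    5: ("I2P", 32),
    6: ("CJDNS", 16),
}
```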

XX04: The reason why the Tor network changed is because they did a substantial rework of their hidden services, like switching off of 1024-bit RSA, and they also fixed a tor hidden service enumeration attack that existed with Torv2. The torv3 addresses use a bip32 derivation (similar to that, anyway), and it requires them to include a pubkey in the address rather than having a hash or something like that.

??: There was a lot of privacy implications for the hidden service directory where they could observe and get information about the topology, which were improved in Torv3.

XX03: Yes, public keys should be public.

GLV commit

https://github.com/bitcoin-core/secp256k1/commit/949bea92624fbd65bfb21d773f1df6a115af71ff

XX01: This goes back to 2013. This was just now enabled by default in Bitcoin Core due to the patent expiration. So that’s cool.

sqlite as a new wallet database for Bitcoin Core

https://github.com/bitcoin/bitcoin/pull/19077

XX01: I like the idea of having a standardized descriptor describe a wallet. Also adding sqlite as a database in Bitcoin Core with the long-term goal of removing berkeley db (bdb).

XX02: This pull request is for using sqlite as the wallet database. Basically the idea behind why now was that descriptor wallets were going to be a completely new type of wallet that would be backwards incompatible, and one of the reasons we hadn’t switched off bdb for such a long time was because of the need for backwards compatibility. If we’re going to move to something completely backwards incompatible on the application side, then we might as well change the database to be backwards incompatible as well. Even though descriptor wallets don’t need sqlite, they are orthogonal in nature, they are being bundled together as part of the same pull request. That’s mostly it.

??: Does it use an ORM in front of the sqlite itself?

XX02: We don’t even use any of the sql stuff. We use it as a key-value store like bdb is. We’re just serializing all our records in the same way we did with bdb. You could in theory upgrade a legacy wallet from bdb to sqlite but this is not recommended.

XX01: So there’s just one table?

XX02: It’s just one table. We’re using it purely as a data file format.
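A sketch of what “sqlite purely as a key-value store” looks like in practice; the table name and schema here are illustrative assumptions, not Bitcoin Core’s exact DDL:

```python
# Using sqlite as a single-table key-value store, all data in one file.
import sqlite3

db = sqlite3.connect("wallet.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS main (key BLOB PRIMARY KEY, value BLOB NOT NULL)")

def write(key: bytes, value: bytes):
    with db:  # one implicit transaction; everything lands in the single db file
        db.execute("INSERT OR REPLACE INTO main (key, value) VALUES (?, ?)", (key, value))

def read(key: bytes):
    row = db.execute("SELECT value FROM main WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None
```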

XX01: So there’s no longer-term goal here to replace leveldb with sqlite?

XX02: sqlite is going to only be for the wallet. We’re only using it because we don’t want to use bdb anymore.

XX01: So there’s no desire here for the actual blockchain itself or chainstate to use sqlite either?

XX02: Those will all remain as leveldb for the forseeable future.

??: If sqlite is only being used as a key-value store, why not move the wallet functionality and store that in leveldb so there’s only one database backend?

XX03: One reason is that leveldb is sort of annoying in that it’s a whole directory with log files and whatever. Sqlite is just a single file; you can back up the file and you’re done. I think leveldb….

XX02: leveldb is more similar to bdb. One of the major pain points with bdb was that it had these log files. Every time we flush the database, stuff would end up in the log file and not the database file. So if you did the stupid thing of copying a wallet file while it was still in use, then you would end up missing some data. I actually wrote a test case for this. It’s very easy to miss data because it was in the log file and not in the database file. One of the nice properties of sqlite is that it has a mode where if you flush, everything flushes to the database file. You don’t have the log file, or if you do, it’s temporary and doesn’t stick around for more than a couple milliseconds. Mostly everything is in the database file; you can copy the wallet file while it is open, which is not recommended, but you are far less likely to lose data – not to the log file, though you could certainly still lose it to corruption.

XX03: …. has a history of corruption crashes. sqlite is known to be extraordinarily resilient to that sort of stuff. It’s of course annoying whenever something like that happens, but for something that stores private keys, you want to use the most stable thing out there.

Native descriptor wallets

https://github.com/bitcoin/bitcoin/pull/16528

XX02: It changes the concept of the wallet from something that is key-based to something that is script based. This is backwards incompatible with previous versions of Bitcoin Core. We disabled a bunch of RPCs that no longer make sense in a descriptor wallet world. hdseed no longer makes sense. The descriptor wallet doesn’t have a global wallet seed anymore. Everything is based around the descriptor now, at least for address generation and all that. To most users, when you use a descriptor wallet, unless you’re doing something with multisigs, or with watch-only, you won’t feel anything different. So it stores addresses in the same way; you still sign transactions, you still use the rawtransaction API, you still use PSBT, the whole— not even this PR, but the 5 PRs before it, were refactors to make an interface where we could do this without having the user feel anything different from this change. So if you do multisig stuff, or watchonly things, it’s now all under a single descriptor import RPC and you just import a descriptor and it deals with it magically.
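For the “import a descriptor and it deals with it magically” flow, the shape of an importdescriptors request looks roughly like this; the descriptor, fingerprint, xpub and checksum below are placeholders:

```python
import json

request = [{
    "desc": "wpkh([d34db33f/84h/0h/0h]xpub.../0/*)#checksum",  # placeholder descriptor
    "timestamp": "now",
    "active": True,        # use this descriptor to hand out new receive addresses
    "range": [0, 999],
}]
print(json.dumps(request))  # e.g. passed to the importdescriptors RPC
```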

Multiprocess bitcoin

https://github.com/bitcoin/bitcoin/pull/10102

XX01: This is potentially the longest running open effort in Bitcoin Core that I know…. but, this experimental branch to make the Bitcoin Core codebase multiprocess and switch out the core, switch out the various components into separate processes. ryanofsky is constantly and diligently maintaining this branch. I don’t know if there has been any updates at a high level around the thinking around this, or the prospects of this happening. Does anyone have any updates?

XX02: I think something was merged that enables this as a build option, I think. You can try that out if you want.

ZMQ: Create “sequence” notifier, enabling client-side mempool tracking

https://github.com/bitcoin/bitcoin/pull/19572

XX03: I thought it was cool that if you’re consuming mempool stuff from bitcoind, this makes it easier than following “getrawmempool”.

XX03: With zeromq, you still have to poll. No deliverability guarantees. But yeah, I’m sure it’s easier.
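A minimal pyzmq consumer for the new topic, assuming bitcoind was started with -zmqpubsequence=tcp://127.0.0.1:28332; the message layout in the comments is paraphrased and approximate:

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect("tcp://127.0.0.1:28332")
sock.setsockopt(zmq.SUBSCRIBE, b"sequence")

while True:
    topic, body, seq = sock.recv_multipart()
    # body is roughly: a 32-byte hash, a 1-byte event label
    # ('A'/'R' for mempool add/remove, 'C'/'D' for block connect/disconnect),
    # and, for mempool events, an 8-byte mempool sequence number.
    label = chr(body[32])
    print(label, body[:32][::-1].hex())
```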

Lightning pull requests

We’ll go over some lightning pull requests.

Meetup

SF Bitcoin Devs socratic seminar #20

((Names anonymized.))

XX01: For everyone new who has joined- this is an experiment for us. We’ll see how this first online event goes. So far no major issues. Nice background, Uyluvolokutat. Should we let Harkenpost in?

XX06: People around America are showing up since they don’t have to show up in person.

XX01: How do we block NY IPs?

XX06: We already have people from Western Africa joining.

XX05: Oh he’s here? You can increase the tile number. To get everyone in the same tile display. You can do non-power of 2, but you can also set it if you want.

XX04: Can you hear me now?

XX01: Yes. I like your throne.

XX04: It’s just a couch. It’s out in the garage. Maybe I should get it. I think I should get it.

XX01: There’s an agenda published. It’s also linked in the chat.

https://www.sfbitcoindevs.org/socratic/2020/11/30/socratic-20.html

Ground rules

XX01: I think I’m going to screenshare my screen. Then we’ll go through the topics. I think the best way to run this is for people to raise their hand. Can someone test raising their hand right now? Okay, great. I’ll kickstart each topic.

XX01: The way we run these is that we have a list of topics to discuss. Anyone is welcome to participate in the conversation, ask questions, give their input into any insights they have into the topics we’re going through, the list of topics is linked in the chat. I’ll be sharing my screen with a webpage for each of the topics we go through. It will be something visual like a pull request, charts, news article.

XX01: If you’re not talking, just mute yourself for everyone else’s sake. Once I kickstart each topic, I think I’ll just call on people who have their hands raised and that will allow everyone to either ask a question or give their thoughts on that. Sometimes I might call on a specific person for a specific topic because it’s something they have worked on or built. We’ll just go through the– we’ll start going through the list now and see how this works. Other ground rules- be respectful. Typically we have a rule about privacy and no photos, which is hard to enforce here.

XX05: It’s been so long, there’s a lot of topics to go over.

XX01: If this goes well, we’ll just do this every month and it’s actually easier. Denise and I don’t need to lug around drinks, chairs and tables and all that.

XX01: The nice thing about this is that we have this sidebar for commentary. If you want to leave comments about things happening, feel free to throw it in there. If you have random questions, feel free to throw them in there. If you have comments you just want to make on the side, feel free to use the chat for that. I think it’s a nice non-distracting way to give input. I’ll share my screen here.

XX01: Can everyone see my screen?

XX01: Welcome everyone. I think all of you were here when we went over the rules. Let’s get started. We’ll go over news, statistics, pull requests, lightning pull requests, etc.

GLV patent expired

https://twitter.com/pwuille/status/1309707188575694849

Brink

https://brink.dev/

David Harding, John Newbery and Mike Schmidt have started a new bitcoin development funding program. They are taking a unique approach to funding bitcoin development. Are they here? Steve?

XX08: I don’t think they are here. If they are, they could speak to it. I can speak to it a little bit. I think they are introducing the first mentorship program for open-source bitcoin development. The first formal mentorship program. I think that’s exciting for the space. I think they have identified their first fellow and will be announcing that person soon. I guess the way I view it, it’s another great organization like Chaincode and Blockstream, and MIT DCI and Square Crypto and these different orgs focused on open-source bitcoin development, it’s great to have another entity focused on that.

XX01: Cool. It will be cool to see what they will roll out this year. Looks like John Pfeffer and Wences Casares are both funders and supporting this group. Pretty awesome.

XX05: It’s a one-year program. It seems long. That’s also good, because it can be more intensive. Chaincode was a summer or a few weeks. It’s a year of vocational school, in a sense, to learn bitcoin stuff.

XX08: You’re right, it’s a year. Brink has both the mentorship program which they are calling Fellowships and more traditional grants too. They will be funding established developers through regular grants, as well as the fellowships. It’s more substantial than a few weeks. They have already signed up several funders not listed on their page, like Square Crypto, Kraken, and one more. Gemini and Human Rights Foundation (HRF).

Bitcoin Core wallet 0.21 what’s new

https://achow101.com/2020/10/0.21-wallets

XX01: Bitcoin Core v0.21 is– the most recent release; it has a number of awesome features arriving in the wallet itself, including descriptor wallets and a sqlite database backend, and achow101 has a nice writeup on his site. We’ll go into the details of these pull requests a little bit later on tonight. I just wanted to give a little taste.

Lightning pool

https://lightning.engineering/pool/

Lightning pool is a marketplace for lightning channels. It’s the next step in building out a true market for liquidity on the lightning network. Would someone from Lightning Labs like to share more? Oh, Segfault is here.

XX05: It’s an auction that runs every 10 minutes and people can buy/sell channels through leases. So you can buy a channel for 20 days for 2%. It can be negotiated, and has a few other cool features. We’re building on some ideas we talked about in the past.

There’s a @lightningpool on twitter, it’s not made by us. Roasbeef worked a lot on the lightning pool paper. Also shadowchain stuff?

XX05: We have something called a shadowchain which is a reusable application framework for applications on bitcoin in general. This can be used for fidelity bonds, for example, or an untrusted system– you have a protocol where you have off-chain validation, versus actively putting it all on-chain. Every batch in pool is its own thing; signature aggregation is another example. It’s a protocol that can sign off on blocks of data they want. It’s like a pessimistic roll-up where you only care about things on your own input, so you have fraud proofs and other things. Also, we have a channel on our slack.

XX01: Looks like fees during the alpha will range from 5 to 25 basis points?

XX05: That’s what we as the operator will be charging for our services, like coordination or matching. You have an embedded chain in the bitcoin blockchain that has all the auctions and all the L2 modifications. So we can create an authenticated tries of the … on bitcoin itself.

https://lightning.engineering/lightning-pool-whitepaper.pdf

XX11: Between 5 and 8% APR. Loalu doesn’t like that wording though. It’s not compounding, so you can think about the risk-free yield for coins on lightning in a non-custodial manner.

XX01: Any questions? Steve? Your hand is up. This is working better than I expected to be honest.

bitcoin-core-pr-review-club

https://bitcoincore.reviews/

XX01: John Newbery hosts Bitcoin Core PR review club. I haven’t participated yet, but I look at what they put out, and it’s very interesting. It’s a good way to get a sense of what’s happening in bitcoin development, dig through the code, have walkthroughs of what’s going on.

XX03: This isn’t a recent thing. It’s been going on for a year or two.

XX01: Did I miss it? Was there always a website for it?

??: He recently had the 100th review.

XX01: Wow, I missed that then. It’s pretty cool.

Crypto Open Patent Alliance

https://open-patent.org/

XX01: Square Crypto is helping to kick this off. The goal is to create an alliance around patents to prevent malicious use of IP in this industry. Steve?

XX08: Technically, it’s Square not Square Crypto. It’s a Square initiative. They announced it a couple months ago. Blockstream did work around this a few years ago, a similar intent and program. The idea here is to help defend against patent trolls. Any aggressive, offensive crypto patent thicket. This serves a few purposes: member companies or individual developers (anyone can join, including projects or developers) can commit to not asserting the patents they have offensively and then add them to a pool which any of the members can use to defend themselves. You don’t need to have any patents to join. If you don’t have any patents and don’t intend to, but you still want to protect yourself, it can be a good idea to join.

XX08: We’ll soon be announcing some major companies that will be joining. I think Blockstream is on the public record saying they are going to join. It’s a good way to fend off potential problems in the future.

Bitcoin design grants

https://medium.com/@squarecrypto/square-crypto-designer-grants-a9a3982c1921

XX01: In the design world, there’s some attempts to bring more designers into the fold and help create user experiences for bitcoin. Square has a number of crypto designer grants that they have available. I don’t know if any have been awarded yet. Are you still looking for people to apply?

XX08: The initial thinking of Square Crypto was that we would hire a full-time designer to contribute. After speaking with 50-60 designers around the world, it was clear that there were many designers quite interested in bitcoin but they were off in their own little island and they just really needed a community to participate in and get to know each other. So this summer we announced an open-source bitcoin design community with now 750 people that are part of that. There are about 30-40 who are fairly active contributors. We have given out 7 or 8 grants so far to both designers and user-experience researchers. The primary work product or outcome we hope this community develops is a set of open-source design guidelines that will help any wallet or application designers not start from scratch. Any new designer to this space or frankly anyone new to bitcoin finds bitcoin quite counterintuitive when you have to build a non-custodial wallet and those challenges. Experience building mobile apps that are client-server based doesn’t help so much. So hopefully the design guide will help with that. A number of people we’ve been giving grants to are helping specific projects in the space improve their design. There are volunteers who don’t have Square Crypto grants who are more engaged on Bitcoin Core now and trying to help the Bitcoin Core GUI. Square Crypto helped kick it off, but it’s intended to be a public good and open-source bitcoin thing. There’s no…. Square does not want any specific control over this. We’d like to see other companies fund designers and researchers as well.

Bitcoin treasuries

https://bitcointreasuries.org/

XX01: Bitcoin has seen large adoption in corporate treasuries making allocations. I think it has implications on the tech downstream, custody solutions, and I think it’s important to understand who is using bitcoin now. This is pretty interesting to see. I don’t know if anyone has specific things they want to discuss.

((isn’t there an accounting reason why this doesn’t work for companies?))

GLV patent has expired

https://twitter.com/pwuille/status/1309707188575694849

XX03: This is the GLV endomorphism patent that enables a faster elliptic curve multiplication on the secp256k1 curve. The curve was specifically designed to enable this optimization. libsecp256k1 started as a hobby project of mine trying to figure out how much of a gain it would actually be. Due to patent concerns, it was later made optional. The optimization was never used in any releases of Bitcoin Core or otherwise. 0.21 will have this optimization enabled. The code for not doing the optimization has been removed from the library, in fact.
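Loosely sketched, the trick is that secp256k1 admits an efficiently computable endomorphism, phi(x, y) = (beta·x, y), which acts on the group as multiplication by a known constant lambda, so one full-width scalar multiplication can be split into two half-width ones evaluated jointly:

```latex
\begin{align*}
kP = k_1 P + k_2\,\phi(P), \qquad k \equiv k_1 + k_2 \lambda \pmod{n}, \qquad |k_1|, |k_2| \approx \sqrt{n}
\end{align*}
```

Doing the two half-width multiplications simultaneously roughly halves the number of point doublings, which is where the speedup discussed next comes from.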

XX01: So the impact of that, if I recall, is a 25% speedup in signature validation?

XX03: Something like that. It’s huge. It may not sound like that much, but let’s just say optimizations of a couple percents are rare.

GM: Hal Finney actually wrote code to implement this optimization while he was still posting on bitcointalk, and he sort of dropped that code– but his implementation was built on top of openssl and so it didn’t have a lot of other optimizations. So, Pieter’s initial work was combining that along with other high performance optimizations.

XX01: It’s cool to see this finally live. Are there any other patented optimizations that we’re waiting to unleash? Or is this the last one lined up for a while?

XX03: There might be one. It’s much smaller. Co-zee stuff?

XX04: I think that we were actually on the cozee stuff as far as patents went. That was one of those things where we wanted to do some more research on the patentability of it. But it’s a small optimization, in any case.

XX03: For anyone who would go to Google Patents and see that it claims that it is still live, I think their information is rather outdated. Blockstream actually had a patent attorney… ah, it says it’s expired now. A couple of weeks ago it was still saying it was live. Blockstream had a patent attorney verify that it actually expired.

“This invention provides a method for accelerating multiplication of an elliptic curve point Q(x,y) by a scalar.”

XX04: This was originally a Certicom patent, and then they got acquired by RIM, and then they were acquired by Blackberry.

XX01: Okay, let’s move on from news to statistics.

Segwit adoption

https://charts.woobull.com/bitcoin-segwit-adoption/

XX01: I found these charts interesting. I wanted to throw out some trends and see if anyone had anything to say about it.

XX06: One thing that I thought was interesting on bitcoin segwit adoption chart is that you can see it going down lately. I think what we’re seeing there is a lot of people are batching payments. If you look at the charts on payments that use segwit versus transactions, it’s just that people who adopted segwit early are also the people who adopted payment batching early. The number of payments using segwit has been going up.

XX04: What about old coins excluded? People are often bringing up pre-segwit coins that are old and there’s no way they could have used segwit.

XX06: I’m not aware of that chart, but it makes sense. We should mention that in public, and someone will do it.

XX01: One of these lines is hard to see here. The payments using segwit have increased, it’s just hard to see that on my black background. The other reason I wanted to bring this up is because taproot is obviously a proposed soft fork coming with a new version of segwit addresses, and seeing how the previous network upgrade’s adoption went, and where it has potentially plateaued, is interesting especially in light of a new upgrade coming down the road. I myself have a lot that I could say about this, having run a bitcoin company, having seen how infrastructure ossifies and companies get stuck with legacy infrastructure that they don’t bother to update. What can we do to make it easier to make …

BB: address index wasn’t that interesting.

XX01: At Redacted we have a golang indexer and wallet infrastructure, watchers, script descriptors. We’d like to hear what you’re doing at Avanti. Our implementation is in golang. I also wanted to talk about the bitcoin dev kit (BDK), which might help here with developers building out more standard infrastructure projects for companies to use.

XX05: I think taproot is going to have slower uptick. A lot of the lightning nodes will probably upgrade to taproot because of the privacy benefits. Lightning pool— whenever you have a batch of transactions, single payout script, which will be interesting. I think lightning operators will upgrade quickly, and be a new constituent in the pool of people upgrading, for things in the future including script updates.

XX04: About 30% of all the transactions are still blockchain.info as of a few months ago, which is a supermajority of the non-segwit transactions that were going on at the time. Someone pointed that out to me. I suspect it’s still the case now.

XX06: …. now it’s around the time where wallets can think about making segwit their default. Have you ever bumped into trouble? Is there anyone that can’t send to native segwit by now?

XX01: It’s rare these days. Most of our people are buying bitcoin for the first time, so they use new software. There are still a lot of existing exchanges that don’t support sending to it. I don’t think Binance supports pay-to-segwit.

XX03: Witness scripthash.

XX01: We use multisig segwit deposit addresses, sometimes they’re too long for certain wallets to send to. For those of you who don’t know, I can pull it up here, there’s a compatibility table on bitcoinoptech for who supports what.

Mining stats

https://bitaps.com/blocks

https://www.blockchain.com/pools

XX01: Looks like Poolin is quite large these days, and F2pool. I’m not close to the mining space but I like to look in once in a while. What’s happening in mining these days? Any interesting trends or surprises? Things that people want to bring up?

XX11: There was news that– because of the rainy season– a lot of mining within China was moved. Why have we never heard that before? Why was that a thing this year if it’s an annual event where the hashrate drops?

XX04: We’ve heard this every year for the last four years that it was seasonal. That’s not new.

XX11: The drop was more dramatic this year.

XX04: True, but we have seen this every year in the rainy season because of all the hydropower in China.

XX05: One of the engineers that left Bitmain started one of those pools. It’s cool to see them with mindshare with some of the OGs in there as well. I haven’t looked at this chart in a long time.

XX11: Poolin had a non-compete and couldn’t mine bitcoin for a year. Now they are back after the year. It was about the unbundling of Bitmain in a sense.

XX06: I think that the latest numbers on how much of the mining power are in China are down to something like 65%. I haven’t tracked that over the years. I think that’s lower than it has been in a while.

XX01: Most of these pools seem like they are based in China, but I don’t know where the hashrate is.

XX04: The big advantage of mining in China is proximity to manufacturers and lower cost of building out mining farms. There’s a lot of inexpensive power in North America. So it doesn’t surprise me that there’s some transition out of China.

XX01: I have seen some increase in the US like with Layer1 and some of the pool stuff Blockstream is working on. I don’t know if any of that is material or if anyone has information about that.

XX09: (redacted) Like Uyluvolokutat said, there’s a lot of very cheap power and good rule of law in North America. It’s attractive.

XX09: Binance Pool is usually around 6-10%. It’s interesting that these big exchanges are using pools as a loss leader to build relationships with miners. I don’t know how to think about that strategy other than building relationships or maybe being able to negotiate cheaper bitcoin buys and cutting out market makers. I think that’s an interesting new thing.

XX01: The financialization of this industry is still in its infancy. There’s going to be a lot of second-order effects across the industry.

XX01: One other thing. I know there’s increasing popularity on the financialization trend or professionalization around this– Matrixport is doing well with miners in China, derivatives, things to help them hedge. It’s an interesting trend worth keeping an eye on.

Taproot activation

https://taprootactivation.com/

Al: Taproot activation is a major topic of discussion lately. I think everyone has a bit of PTSD from the block size wars and segwit to say the least. Nonetheless, I think there’s a lot of excitement about taproot – something that is– it’s an upgrade that has been in the making for years now. It seems to have an increasing amount of broad support– which will remain to be seen regarding how it is to be activated. When people need to put their money where their mouth is, we’ll see.

XX01: Poolin has created a tracker to track the hashrate that has indicated they would be supportive of the taproot soft-fork. It’s at taprootactivation.com. I think it’s interesting to see. A lot of the major pools have indicated support for this upgrade. I’d be curious to hear any insights from people closer to the ground on this stuff than what any– any insights on there?

XX05: What’s with the “no answer” on Binance Pool?

XX04: It’s a “no comment” kind of answer, not a no.

XX05: So it’s “I’m not running for President”.

XX04: There were a few crappy news outlets that sent out an article saying Binance had rejected it. But they hadn’t even looked at it, was the comment I got from them. I think those numbers add up to like 65% of the hashpower or something like that.

XX01: It says 82% here.

XX04: I think we’re going to see… the challenge here is that this is a verbal commitment. When it comes time to update their software, the schedule may get pushed out and it might take a while to get that level of support. But it looks good to me.

XX06: There’s only 91% of all hashrate accounted for on this list. So with an activation threshold of 95% if we do the same as before, we haven’t even talked with everyone.

XX04: I think this is really good, though, because people had a lot of concerns that this would be some really dramatic drama-filled activation mechanism issue this time around. I think this is at least a good first example that no, this can be activated without a lot of drama.

XX08: I’ve had direct conversations with poolin, Huobi and Slush and got positive vibes consistent with them saying yes. Again, that doesn’t mean anything until it’s time, but they understand taproot and I think they are legitimately supportive of it. To me the big open question remains the activation method and there’s not consensus among developers and as you can see on this table there’s not consensus among pools. Somewhat unclear to me how that gets resolved.

XX04: I think this goes a long way towards resolving that question. The question among developers was how aggressive does the activation mechanism need to be? If there’s broad support from pools, it will justify a less aggressive mechanism because it will activate anyway. We’ll see, though.

XX06: Also, we can always start with the least aggressive and as we build rapport we can commit a level further and ramp it up.

Revisiting squaredness tiebreaker for R point in bip340

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html

XX01: Pieter asked about revisiting the squaredness tiebreaker for schnorr signatures. Do you want to give an overview of that discussion and the end result?

XX03: In short, we thought we had a good reason for using quadratic residues instead of evenness as a tiebreaker. We only want to store an x-coordinate because wasting a byte to encode 1 bit is stupid. Which of the two y coords do you take, corresponding to a particular x coord? We thought we had a good reason for picking quadratic residue which was kind of cool and seemed faster, but later we realized actually you don’t want to do that for public keys because it has some compatibility issues and even later we realized it wasn’t a good idea in the first place and we shouldn’t do it. This email you’re pointing out is where I explain why we had come to the incorrect conclusion and why some recent work on faster algorithms made it not only the case that it wasn’t actually faster, but that it was actually slower. So that’s all finalized now, and it’s now using evenness everywhere and that’s what’s in the bip and that’s what’s implemented. Everyone can forget about the quadratic residues.

XX01: For those of you who might be out of the loop— a public key is a point on a curve in bitcoin, you have two coords but you only need the x coordinate. There’s a positive and negative y coord corresponding to the x coordinate, and you need a standard method to choose which of the y coords to use so that you don’t have to signal which one to choose.

XX03: This isn’t just public keys, but also the public nonce inside the signature, which is what this was actually about.
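The evenness rule that ended up in BIP340 amounts to the following reconstruction of a point from its x coordinate, essentially the spec’s lift_x, shown here as a small Python sketch:

```python
# secp256k1 field size
p = 2**256 - 2**32 - 977

def lift_x(x: int):
    if x >= p:
        return None
    y_sq = (pow(x, 3, p) + 7) % p
    y = pow(y_sq, (p + 1) // 4, p)          # valid square root because p % 4 == 3
    if (y * y) % p != y_sq:
        return None                          # x is not on the curve
    return (x, y if y % 2 == 0 else p - y)   # pick the candidate with even Y
```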

Bitcoin archaeology

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-November/018269.html

BB: (timestamp your old emails, archaeology….)

XX01: What does archaeology in bitcoin mean at all? Dan Bryant went through and tried to figure out how to run the oldest versions of bitcoin core, using the old libraries that it linked to. It’s very interesting. I think people are interested in history.

XX04: I was able to get bitcoin v0.3 to sync, back in 2017. You have to work around some bdb lock issues.

XX03: If you go further back, the problem is v0.2.9 which didn’t have the checksum in the p2p protocol, so you need some sort of adapter to make that talk to the current network. But it should be possible, in theory.

XX01: That’s a pretty interesting project. I like that.

bech32 updates

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-October/018236.html

XX01: Rusty brought up some ideas for how to deal with potential issues with bech32 encoding including the malleability issue discovered a year or two ago at this point with the checksum. Pieter or Rusty, do you want to give a high level overview of this discussion?

XX03: I was planning to reply to this email today, actually. In short, I agree, and I think we should do the clean break and switch the checksum for v1 and up, regardless of length. …. This will protect against a situation where old software enters a new-style address, which is something you don’t get with the proposal we originally had, because v1 would keep the same checksum algorithm. Anything to add?

XX10: I wonder if there’s some background notes?

XX05: Why? Uyluvolokutat says we messed up? Are addresses longer?

XX10: That was the dream, that there would be one upgrade and then it would all be great. There wasn’t an understanding at the time that this would be how it works. So people do the checks for v0, and then they reject for v1 so there’s already a lot of implementations that— some of these things were implicit not explicit, that they should accept v1 and v2. But it turns out that the people who implemented it in c-lightning also didn’t understand, if it’s v2 do the length checks, if it’s v1 just fail. So it was a common mistake. Bitcoin Core also didn’t relay it early on, so in the lightning code we had to be sure to not relay them or accept them. ….. since everyone is going to update their softwre anyway, it will force some people to get upgraded, so sometimes the way out of this mess is to fix it the right way, and the right way to fix it is saying if v0 it’s this checksum algorithm and if it’s this other one you use another checksum. People are going to upgrade anyway, so let’s let them upgrade to make this problem go away.

XX04: There’s more than one service where if you send to a v1 address right now, they just burn it to a v0 address using the v1 payload and the coins are unrecoverable. Just refusing wouldn’t be the end of the world, and burning the coins is pretty bad.

XX01: Is the proposed new checksum for segwit v1+…. Is the consensus right now for taproots to use the existing checksum?

XX03: The idea that– so, Rusty’s proposal which I agree with, is that for v0 witness we keep using the thing we have been using so far. Then for v1 and all future versions, this new checksum algorithm would be used.

XX04: The problem with continuing to use the old one with v1 with the length restriction is that we’re still left with the coin burning problem.

XX03: There’s also an issue that all software would remain vulnerable, even if we added a length restriction to v0. All software would be vulnerable to the length problem…. and by forcing all receivers to change the algorithm, we protect all old senders simultaneously.

XX10: It’s clear that they haven’t upgraded. We should probably generate a v2 address and ask people to start testing their sends to that. A miner could scoop it up if they want to. Just to make sure this doesn’t happen again for the next time.

XX03: Another angle that might have caused the problem, mainly…. OP_0 is different from OP_1 to OP_16 so maybe having the checksum type that – is not such a terrible thing. v1 to v16 you handle identically anyway, but it’s not true for v0.

https://cbeci.org/mining_map

https://garethrees.org/2013/06/12/archaeology/

Hold fees

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002826.html

XX01: A little bit of a lightning discussion. I thought this was interesting. This idea of trying to prevent spam on the lightning network, locking up HTLCs, pinging nodes too much, creating a– requiring payments for actually even doing things in the lightning network. This email is basically positing a few ideas of how to prevent spam on a lightning network. Just a cool conversation.

XX05: It tries to present a different way…. we have an old page talking about… but it was a hacky solution, I’ve gone away from making people pay for failures. …. someone’s stale payments could cause your wallet to be drained, so that’s bad UX. Having this effective… privacy preserving of … there could be more legs there. From my point of view, I don’t know it’s correct that people can be paid for failures. So you can delegate your right to BCI because they have… but I think some measure gains were made in different ways like the forwards and the backwards thigns, but I don’t know if that’s the right way to handle it personally.

XX10: I agree with you, redacted. There’s a feeling that perhaps eventually we’re going to have to require some anti-spam payment.

XX05: To Bryan’s comment, you can have a privacy-preserving bond of coins in some output and I can prove that– it’s like attribution of HTLCs and the problem goes away. I have some ideas about doing it without too much heavy stuff.

XX04: Joinmarket has done some privacy-preserving anti-spam stuff as well.

XX10: It turns out there’s a time difference between short payment and a long payment. There’s <5 seconds payments, and then there’s a multi-day case where it is hard to come up with a number upfront. A tiny fee can stop fast spam, but what about slow spam? It’s an interesting design space here. There’s no obvious wins. The problem with not solving this is that it increases an incentive to deanonymize the network.

XX05: I think this is going to be mitigated by… being compesnated by putting liquidity into the network and having passive compensation over time, so people are prepared for the worst case OP_CSV deadlock. I think it’s another thing to factor in.

XX10: That’s more a griefing attack, but you can do it, but it will cost you. So you can magnify how much you put in by about 20 something? You can squeeze some more in there if you really try, but that’s still annoying. You can attack the network today with that, pretty limited capital. If we have a liquidity provider war…. attacking one person because you want liquidity on yours; etc. There’s a lot of work ongoing on this. I think it’s important.

XX01: Another post by gleb was around “Mitigating channel jamming with stake certificates”.

XX05: This is related to what kanzure and gmax talked about with respect to proof-of-burn to prevent spam. This is about proving to you that you have an output locked up for 30 days, and my proof is unique to you at a particular time. You would be able to attribute who is routing towards you; so nodes are able to quanitfy their relative risk. This problem has a lot of sub problems. What about separating spam from the griefings and everythign? This can solve one of them.

Bitcoin dev kit

https://bitcoindevkit.org/

XX01: Since February when we last did this; another one that is interesting is Bitcoin Dev Kit. It helps wallet developers join forces and works … BDK, a rust bitcoin library. What do people think?

XX08: A few different developers started two different projects earlier this year, and then merged it into one. Steve Mieiers is a grant recipient of Square Crypto. And another one funded by Bitfinex. I’m bullish on this project. As was discussed earlier, Bryan Bishop talked about address index and other things that a lot of companies have to build themselves. At Optech, we learned that lots of companies are rolling their own in terms of bitcoin libraries. Some people use bitcoinj but nobody is satisfied. bdk is a wallet library kit that makes it easier for any wallet developer to build a wallet. I think they are focused on mobile first. It uses rust-bitcoin and builds on it, but it does much more than the primitives in rust-bitcoin. It does coin selection, which I think Murch can speak to. I think Mark has been looking into this project.

XX06: They have native segwit, script descriptors, and they have implemented a very simple constellation of added random selection and branch and bound.

BB: There’s also a python library from Jimmy Song and Michael Flaxman.

XX03: Also libwally.

XX06: There were two multisig wallets that appeared recently; Spectre and Nunchuck. They have both open-sourced their wallet code, not necessarily their UX code but there’s a lot of new things coming out.

https://specter.solutions/

XX01: I am excited to try out Foundation Devices.

bolt12 and reusable invoices

https://twitter.com/rusty_twit/status/1328826839024865281

XX10: I have lots of stuff to say about it. This is basically– we have been mumbling about this for about 2 years now. I tried to implement it and get a spec out there. Basically it’s the classic solving the problem by another layer of indirection. You have an invoice and you make a request over the lightning network to get the real invoice. At its most trivial, it gives you a reusable invoice format. This is an offer for something; you scan it, your wallet reaches out, gets the real invoice, and pays it. This givs you a lot of cute things- the obvious one being that, people abuse invoices at the moment. The secret you promise in return for payment is a secret and you can only send that invoice to one person at a time because it’s a secret. You should only be using it once. This is painful because you shouldn’t be tweeting it to people. … It’s nice to have a transient key you can associate with an invoice request,s o if they want to prove they are you later, you can get a proof of payer and get a proof that yeah this was me like for refunds. It uses bech32 in the draft without a checksum because we didn’t know what the checksum was going to be anyway; it’s not clear how much we gain from having a checksum, we just needed some random encoding and the codebases already have bech32 somewhere. There’s multiple threads on that; I spent a week tweeting about it. If you want to read it and see all the goodies, most of them are already implemented already.

XX03: Very cool.

XX01: Looks like you have a regtest offer here already. Neat.

XX10: Yes, but it’s so drafty, it might not work next week.

XX01: Random question on invoices and bech32. Is the discussion from earlier relevant here? I have also heard it mentioned that the checksum for bech32 in general was meant for quite short strings and wasn't meant for strings as long as an invoice. Is there any updated discussion around that given the new developments in that encoding standard?

XX10: LN invoices are pretty short, for offers. The default is bitcoin mainnet so you don’t have to include the genesis block in that case. Anything under 1024 characters has some guarantees in the original checksum for bech32. If you don’t have the checksum, you scan it, you try to pay it, we would basically do key recovery to figure out what node I need to pay to and if there’s an error somewhere it will say I don’t know how to pay that node, it’s nice for it to say invalid invoice instead of saying I don’t know the node. That’s why we wanted to keep the checksum in the first place. This uses x-only pubkeys which is what all the cool kids are using; it’s very clean, it’s quite nice, we switched to it within a day. There’s no key recovery for that format; there’s explicit key in there– so the checksum doesn’t really buy us anything anymore.

??: Is there a use case for spontaneous payments?

XX10: Spontaneous payments are really cool. Being able to tip someone without having to interact with them. You don’t have to request them to make an offer first. I think that’s cool. The problem with spontaneous payments is that you don’t get a receipt and you can’t prove that you made a payment. If you have a static offer on your page and you say here use this to send me money, then …. cryptographic signature across that, I can prove the amount is paid, that I paid it, because I hold the payer key. You can’t fake a payment, and you can’t fake that you paid something, and you can prove that you paid. It has a lot of advantages. The downside is that if you try to do a lot of them, there’s some latency fetching the invoices since it’s a roundtrip.

XX05: Spontaneous payments… using something like a d-log based thing like HTLCs does allow a secret to be… one thing we realized is that we can't probe a window-sized payment because the …. not really sure how to… the entire flow. …. do that aggressively by not giving them the last transaction; so to probe across many channels, you have to do something like this. Once you have a … you can do the best of both worlds and I think they both work in those use cases. We have seen some people roll other solutions we have seen in the past.

Signet merged

https://bitcoincore.reviews/18267

https://github.com/bitcoin/bitcoin/pull/18267

XX01: Pretty cool. Nice development. We’re thinking about using it internally for some of our environments at Redacted. Excited to see adoption here pick up. I’d be curious to hear if anyone has had experience playing around with it. Signet has taproot activated, yes. There’s a taproot spend— someone forked Esplora.

XX05: Who are the signers? Can you dynamically add and remove signers to the thing?

XX03: There's signet and then there's advanced signet, where you can define your own. The default signet is a network that is signed by ajtowns and kallewoof. I think it's 1-of-2 multisig. I think there's no affordance for changing the set of signers at runtime. But it's fairly cheap to change the network. Also I believe that as far as Bitcoin Core is concerned it does not have taproot activated. But given the network is mined by two private instances, they could choose to enforce taproot rules if they want and then it would be active. In such a centralized setting, they individually define the consensus.
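As a rough illustration of the difference (the flags are real bitcoind options, but the challenge script and seed host below are placeholders):

# Sketch: default signet vs a custom signet (placeholders marked below).
import subprocess

# Default signet: the challenge and seed nodes are baked into Bitcoin Core.
subprocess.run(["bitcoind", "-signet", "-daemon"], check=True)

# Custom signet: provide your own signetchallenge (the script that block
# signatures must satisfy) and somewhere to connect to.
subprocess.run([
    "bitcoind",
    "-signet",
    "-signetchallenge=<hex of your challenge script>",   # placeholder
    "-signetseednode=signer.example.com:38333",          # placeholder
    "-daemon",
], check=True)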

XX01: How do you get signet coins?

XX03: I think there’s a faucet, or even a tool that lets you request coins automatically.

XX01: That was a pull request we followed from beginning to end. Yes, there’s a bitcoin review club that went over it. There’s a link in the doc for that.

bip 340-342

https://github.com/bitcoin/bitcoin/pull/19953

XX01: The big one. The big topic of discussion. bip340, bip341, bip342 was merged into Bitcoin Core. This was the effort of a lot of hard work from a lot of different people. Great job all around. It’s exciting to see the progress here and see the excitement across the various stakeholders in bitcoin. I’d be curious to hear updates on anything that we haven’t discussed yet that you might think is worth bringing up. Has anyone spent some time building it into their own infrastructure or preparing their own projects to support it? Any insights that anyone would like to share? Anything else you want to add?

XX05: From my end, I'm starting to look at implementing it for real. I had a toy thing that I was letting sit for a while. At this point, I want to start adding it to btcsuite and lnd and maybe get around to testnet activation. This is back on my radar.

XX01: Phil has been working on some go stuff about this.

XX05: I saw his Schnorr thing. It doesn’t use some of the better integer representations, but we can work with that.

XX01: Anyone else? Anything else to add about taproot?

XX04: when wallet support?

XX03: We have time. Now that we have descriptor wallet support in 0.21, or will have it once it gets released– we have cut two release candidates, it’s imminent.

XX03: It’s 0.21, but it will be 22 in the future for the next release. It would have been nice to have “version 21”, I know.

XX01: Oh look who just joined. Funny timing. Andrew Chow just joined.

XX02: I got a message from Murch saying I should join.

XX04: Murch is kind of super-natural, so that explains it.

XX03: … adding things like taproot to the wallet, at least a simple single key version of it, and probably some other more complex things, will be significantly simpler than hacking it into the old code.

XX01: Andrew, I think you had a post about this or there was some pull request? Not sure what the status is. I saw a discussion somewhere about bitcoin version numbers changing. What's the latest there?

XX02: The next version will be called "22.0" instead of "0.22". All we did was drop the leading zero. That was merged shortly after branching off v0.21.

XX01: So there will never be a v1.0?

XX02: We're skipping 1.0 through 21.0 and going straight to 22.0.

XX01: So version inflation?

XX03: We should have made it 42.

XX01: Shouldn’t we have made it 0.0000021 since it’s bitcoin? That’s just a joke. Okay. We also have descriptor wallets as well. We can get to that.

XX06: About taproot, I've been looking at how 2-of-3 multisig and single sig would end up transaction size wise, and it's just another huge size improvement. It's going to be 58 vbytes. Comparing that to 2-of-3 multisig with P2SH at 297 bytes, it's almost a 5x reduction in blockchain space which I think is pretty remarkable. Then the other thing that I've been really thinking about is whether people are now going to start implementing sending to v1 addresses first, like they did with native segwit, where sending to it was supported first and only later was it enabled as a non-default receive address. Then I think there's a lot of room for people to upgrade to native segwit by default, because when they support taproot they will therefore go to native segwit. So we will see a big push of segwit adoption with taproot getting merged.
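The rough arithmetic behind those numbers, assuming a taproot key-path spend and a worst-case P2SH scriptSig (approximate, for illustration only):

# Approximate per-input sizes, for illustration.
# Taproot key-path spend: 36 (outpoint) + 1 (empty scriptSig length) + 4 (sequence)
# non-witness bytes, plus a 66-weight-unit witness (item count + length + 64-byte sig).
p2tr_input_vbytes = 36 + 1 + 4 + (1 + 1 + 64) / 4
print(p2tr_input_vbytes)                       # 57.5, quoted above as ~58 vbytes

# Legacy P2SH 2-of-3: OP_0, two ~72-byte DER signatures with push bytes,
# and a pushdata plus ~105-byte redeem script, plus a 3-byte scriptSig length.
script_sig = 1 + 2 * (1 + 72) + (2 + 105)
p2sh_input_bytes = 36 + 3 + script_sig + 4
print(p2sh_input_bytes)                        # 297, as quoted above

print(p2sh_input_bytes / p2tr_input_vbytes)    # roughly a 5x difference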

XX01: … how far are we from the world where someone can spin up a 3-of-5 cold storage with a single pubkey representing a threshold required to sign for it?

XX03: … the first round can be pre-computed before you know its message. There’s some edge cases to worry about, but in that case you have a non-interactive signing protocol. You have everyone publish their partial signatures, and then they can be combined. It’s a remarkable technique. It’s just using two nonces instead of one nonce. Then you MuSig those two nonces at signing time. This was pretty much invented simultaneously by three distinct groups of people.
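A rough sketch of that two-nonce structure in MuSig2 notation (simplified; L is the list of pubkeys, m the message, the H's are tagged hashes, and the exact derivations are in the paper):

a_i = H_{agg}(L, X_i), \qquad X = \sum_i a_i X_i
R_1 = \sum_i R_{i,1}, \quad R_2 = \sum_i R_{i,2}, \qquad b = H_{non}(X, R_1, R_2, m)
R = R_1 + b\,R_2, \qquad e = H_{sig}(X, R, m)
s_i = k_{i,1} + b\,k_{i,2} + e\,a_i\,x_i \pmod{n}, \qquad s = \sum_i s_i

The signature is (R, s). Because the nonce pairs (R_{i,1}, R_{i,2}) do not depend on the message, they can be generated and shared ahead of time, which is what makes the signing round effectively non-interactive.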

XX03: I should also mention that this is not yet in Musig2 paper… but Musig2 is compatible with privately nested musig where you can have a multisig of a number of participants and one of them is a multisig themselves of a hardware wallet and a software wallet, or something, and it doesn’t need to reveal that to the other participants. This is not something you can do with normal musig. I need to stress that this is not proven yet.

….

Use wtxid for transaction relay

https://github.com/bitcoin/bitcoin/pull/18044

XX03: The lingering issue that existed after segwit was that transactions are announced and requested on the protocol using txids. In segwit, the txid does not fully commit to all the data you use to verify a transaction, in particular the witness. If someone gives you a segwit transaction with an invalid witness, you can't blacklist that txid for denial of service protection, because you don't know the txid is invalid, you only know that particular witness is invalid. This has some effects on potential bandwidth where you request the same transaction multiple times. When nodes disagree on what the policy is on the network, then you could have the same transactions announced over and over again and retrieved from nodes over and over again. That specific problem has also been addressed with some much simpler changes. This was a more fundamental improvement where nodes can negotiate just using wtxids instead of txids for announcements and requests. It's in, it works.
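One way to see the distinction against a running node (a sketch; <txid> is a placeholder and the node needs to know the transaction, e.g. via -txindex or the mempool): the verbose getrawtransaction output exposes both identifiers, where "hash" is the wtxid and only differs from "txid" for segwit transactions.

# Sketch: txid (no witness commitment) vs wtxid (commits to the witness).
import json, subprocess

def cli(*args):
    return json.loads(subprocess.check_output(["bitcoin-cli"] + list(args)))

tx = cli("getrawtransaction", "<txid>", "true")   # placeholder txid
print("txid :", tx["txid"])    # hash of the transaction without witness data
print("wtxid:", tx["hash"])    # hash including the witness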

TORv3

XX01: The previous way of advertising node location did not support the longer torv3 addresses, because there was a length limit, and this has been updated and merged into Core and will be shipped in the v0.21 release?

XX03: Exactly. It was a bip written originally around 3 years ago by Wladimir, with the idea that we would eventually need the longer address length. Torv2 is deprecated and the shorter address length will disappear in less than a year. We couldn't keep using the hack of putting it in an ipv6 address. There's a specific network class now for a torv2 address, torv3 address, ipv4, ipv6, i2p, and what's the other one supported? CJDNS.

XX03: torv2 addresses are only 80 bits. We sent them over the network by putting them in some private ipv6 range which was something we’ve been doing since 2012. But now I think ipv6 addresses… … and torv3 is 256 bits.
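For the curious, a sketch of that old embedding trick (the fd87:d87e:eb43::/48 prefix is the one Bitcoin Core used, as far as I know; treat the details as illustrative): a torv2 address is 16 base32 characters, i.e. 80 bits, which fits in the last 10 bytes of a fake IPv6 address.

# Sketch of the old "OnionCat" style embedding of an 80-bit torv2 address
# into a private-range IPv6 address, so it fits the existing addr format.
import base64, ipaddress

ONIONCAT_PREFIX = bytes.fromhex("fd87d87eeb43")        # prefix Bitcoin Core used

def torv2_to_fake_ipv6(onion):
    label = onion.removesuffix(".onion")               # 16 base32 characters
    payload = base64.b32decode(label.upper())          # -> 10 bytes (80 bits)
    return ipaddress.IPv6Address(ONIONCAT_PREFIX + payload)

print(torv2_to_fake_ipv6("expyuzz4wqqyqhjn.onion"))    # a well-known old v2 onion

A 256-bit torv3 key obviously cannot be squeezed into 10 bytes, hence the new variable-length address format (BIP 155).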

XX04: The reason why the Tor network changed is because they did a substantial rework of their hidden services, like switching off of 1024-bit RSA, and they also fixed a tor hidden service enumeration attack that existed with Torv2. The torv3 addresses use a bip32 derivation (similar to that, anyway), and it requires them to include a pubkey in the address rather than having a hash or something like that.

??: There was a lot of privacy implications for the hidden service directory where they could observe and get information about the topology, which were improved in Torv3.

XX03: Yes, public keys should be public.

GLV commit

https://github.com/bitcoin-core/secp256k1/commit/949bea92624fbd65bfb21d773f1df6a115af71ff

XX01: This goes back to 2013. This was just now enabled by default in Bitcoin Core's secp256k1 library due to the patent expiration. So that's cool.

sqlite as a new wallet database for Bitcoin Core

https://github.com/bitcoin/bitcoin/pull/19077

XX01: I like the idea of having a standardized descriptor describe a wallet. Also adding sqlite as a database in Bitcoin Core with the long-term goal of removing berkeley db (bdb).

XX02: This pull request is for using sqlite as the wallet database. Basically the idea behind why now was that descriptor wallets was going to be a completely new type of wallet that would be backwards incompatible and one of the reasons we hadn’t switched off bdb for such a long time was because of the need for backwards compatibility. If we’re going to move to something completely backwards incompatible on the application side, then we might as well change the database to be backwards incompatible as well. Even though descriptor wallets don’t need sqlite, they are orthogonal in nature, they are being bundled together as part of the same pull request. That’s mostly it.

??: Does it use an ORM in front of the sqlite itself?

XX02: We don't even use any of the sql stuff. We use it as a key-value store like bdb is. We're just serializing all our records in the same way we did with bdb. You could in theory upgrade a legacy wallet from bdb to sqlite but this is not recommended.

XX01: So there’s just one table?

XX02: It’s just one table. We’re using it purely as a data file format.
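To illustrate the shape of that design (a toy sketch, not Bitcoin Core's actual schema or record serialization): one key-value table where both columns are opaque blobs, and all the structure lives in how the application serializes its records.

# Toy sketch of a "single table, pure key-value" sqlite wallet file.
# This mirrors the design described above; it is NOT Bitcoin Core's schema.
import sqlite3

db = sqlite3.connect("toy_wallet.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS main (key BLOB PRIMARY KEY, value BLOB)")
db.execute("INSERT OR REPLACE INTO main VALUES (?, ?)",
           (b"name\x00default", b"my toy wallet"))     # records are opaque blobs
db.commit()

for key, value in db.execute("SELECT key, value FROM main"):
    print(key, value)
db.close()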

XX01: So there’s no longer-term goal here to replace leveldb with sqlite?

XX02: sqlite is going to only be for the wallet. We’re only using it because we don’t want to use bdb anymore.

XX01: So there’s no desire here for the actual blockchain itself or chainstate to use sqlite either?

XX02: Those will all remain as leveldb for the forseeable future.

??: If sqlite is only being used as a key-value store, why not move the wallet functionality and store that in leveldb so there’s only one database backend?

XX03: One reason is that leveldb is sort of annoying in that it's a whole directory with log files and whatever. Sqlite is just a single file, you can back up the file and you're done. I think leveldb….

XX02: leveldb is more similar to bdb. One of the major pain points with bdb was that it had these log files. Every time we flush the database, stuff would end up in the log file and not the database file. So if you did the stupid thing of copying a wallet file while it was still in use, then you would end up missing some data. I actually wrote a test case for this. It's very easy to miss data because it was in the log file and not in the database file. One of the nice properties of sqlite is that it has a mode where if you flush, everything flushes to the database file. You don't have the log file, or if you do, it's temporary and doesn't stick around for more than a couple milliseconds. Mostly everything is in the database file; you can copy the wallet file while it is open, which is not recommended, but you are far less likely to lose data to the log file issue, though you could certainly still lose it to corruption.

XX03: …. has a history of corruption crashes. sqlite is known to be extraordinarily resilient to that sort of stuff. It's of course annoying whenever something like that happens, but for something that stores private keys, you want to use the most stable thing out there.

Native descriptor wallets

https://github.com/bitcoin/bitcoin/pull/16528

XX02: It changes the concept of the wallet from something that is key-based to something that is script-based. This is backwards incompatible with previous versions of Bitcoin Core. We disabled a bunch of RPCs that no longer make sense in a descriptor wallet world. hdseed no longer makes sense. The descriptor wallet doesn't have a global wallet seed anymore. Everything is based around the descriptor now, at least for address generation and all that. To most users, when you use a descriptor wallet, unless you're doing something with multisigs or with watch-only, you won't feel anything different. It stores addresses in the same way; you still sign transactions, you still use the rawtransaction API, you still use PSBT. Not even this PR, but the 5 PRs before it, were refactors to make an interface where we could do this without having the user feel anything different from this change. So if you do multisig stuff or watch-only things, it's now all under a single descriptor import RPC and you just import a descriptor and it deals with it magically.
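For example, setting up a watch-only multisig under the new model is roughly one call (a sketch; the wallet name, xpubs and checksum are placeholders, and the real checksum would come from getdescriptorinfo):

# Sketch: import a watch-only multisig descriptor into a descriptor wallet.
import json, subprocess

def cli(*args):
    return json.loads(subprocess.check_output(
        ["bitcoin-cli", "-rpcwallet=watchonly"] + list(args)))   # placeholder wallet name

request = [{
    "desc": "wsh(sortedmulti(2,xpubA.../0/*,xpubB.../0/*,xpubC.../0/*))#checksum",  # placeholders
    "timestamp": "now",      # only scan for transactions from now on
    "active": True,          # hand out addresses from this descriptor
    "range": [0, 999],
    "internal": False,
}]
print(cli("importdescriptors", json.dumps(request)))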

Multiprocess bitcoin

https://github.com/bitcoin/bitcoin/pull/10102

XX01: This is potentially the longest running open effort in Bitcoin Core that I know of…. It is an experimental branch to make the Bitcoin Core codebase multiprocess and split the various components out into separate processes. ryanofsky is constantly and diligently maintaining this branch. I don't know if there have been any updates at a high level around the thinking on this, or the prospects of this happening. Does anyone have any updates?

XX02: I think something was merged that enables this as a build option. You can try that out if you want.

ZMQ: Create “sequence” notifier, enabling client-side mempool tracking

https://github.com/bitcoin/bitcoin/pull/19572

XX03: I thought it was cool that if you're consuming mempool stuff from bitcoind, this makes it easier than following "getrawmempool".

XX03: With zeromq, you still have to poll. No deliverability guarantees. But yeah, I’m sure it’s easier.
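A minimal consumer sketch (assumes bitcoind was started with something like -zmqpubsequence=tcp://127.0.0.1:28332; the one-letter event codes are my reading of the PR, so double-check them against the docs):

# Sketch: consume the new "sequence" ZMQ notifications for mempool tracking.
# Event codes (per my reading of the PR): C/D = block connected/disconnected,
# A/R = transaction added to / removed from the mempool.
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect("tcp://127.0.0.1:28332")
sock.setsockopt_string(zmq.SUBSCRIBE, "sequence")

while True:
    topic, body, zmq_seq = sock.recv_multipart()
    event_hash = body[:32].hex()       # may need byte-reversal to match RPC display
    event_type = chr(body[32])
    mempool_seq = int.from_bytes(body[33:41], "little") if len(body) > 33 else None
    print(event_type, event_hash, mempool_seq)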

Lightning pull requests

We’ll go over some lightning pull requests.

\ No newline at end of file diff --git a/speakers/alex-myers/index.xml b/speakers/alex-myers/index.xml index 16c6aa434b..6c1a0e897b 100644 --- a/speakers/alex-myers/index.xml +++ b/speakers/alex-myers/index.xml @@ -1,6 +1,6 @@ Alex Myers on ₿itcoin Transcriptshttps://btctranscripts.com/speakers/alex-myers/Recent content in Alex Myers on ₿itcoin TranscriptsHugo -- gohugo.ioenTue, 07 Jun 2022 00:00:00 +0000Minisketch and Lightning gossiphttps://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Tue, 07 Jun 2022 00:00:00 +0000https://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Location: Bitcoin++ Slides: https://endothermic.dev/presentations/magical-minisketch -Rusty Russell on using Minisketch for Lightning gossip: https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html +Rusty Russell on using Minisketch for Lightning gossip: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html Minisketch library: https://github.com/sipa/minisketch Bitcoin Core PR review club on Minisketch (3 sessions): https://bitcoincore.reviews/minisketch-26 diff --git a/speakers/anthony-towns/index.xml b/speakers/anthony-towns/index.xml index 379902870e..aa495f4b20 100644 --- a/speakers/anthony-towns/index.xml +++ b/speakers/anthony-towns/index.xml @@ -2,7 +2,7 @@ Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Schnorr Taproot Tapscript BIPshttps://btctranscripts.com/stephan-livera-podcast/2019-12-27-aj-towns-schnorr-taproot/Fri, 27 Dec 2019 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2019-12-27-aj-towns-schnorr-taproot/Stephan Livera: AJ, welcome to the show. AJ Towns: Howdy. Stephan Livera: Thanks for joining me AJ. I know you’re doing a lot of really cool work with the Schnorr and Taproot proposal and the review club. But let’s start with a bit of your background and I know you’re working at Xapo as well. diff --git a/speakers/conner-fromknecht/index.xml b/speakers/conner-fromknecht/index.xml index 645960b340..b977afad4d 100644 --- a/speakers/conner-fromknecht/index.xml +++ b/speakers/conner-fromknecht/index.xml @@ -14,7 +14,7 @@ Podcast: https://noded.org/podcast/noded-0360-with-olaoluwa-osuntokun-and-conner Pierre: Welcome to the Noded Bitcoin podcast. This is episode 36. We’re joined with Laolu and Conner from Lightning Labs as well as my co-host Michael Goldstein. How’s it going guys? 
roasbeef: Not bad Pierre: So I listened to Laolu on Stephan Livera’s podcast and I would encourage any of our listeners to go listen to that first so that they can hear all about some of the latest developments in lightning.Instantiating (Scriptless) 2P-ECDSA: Fungible 2-of-2 MultiSigs for Today's Bitcoinhttps://btctranscripts.com/scalingbitcoin/tokyo-2018/scriptless-ecdsa/Sat, 06 Oct 2018 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/tokyo-2018/scriptless-ecdsa/https://twitter.com/kanzure/status/1048483254087573504 -maybe https://eprint.iacr.org/2018/472.pdf and https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html +maybe https://eprint.iacr.org/2018/472.pdf and https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html Introduction Alright. Thank you very much. Thank you Pedro, that was a great segue into what I&rsquo;m talking about. He has been doing work on formalizing multi-hop locks. I want to also talk about what changes might be necessary to deploy this on the lightning network. History For what it&rsquo;s worth, these dates are rough. Andrew Poelstra started working on this and released something in 2016 for a Schnorr-based scriptless script model.Lightning Overviewhttps://btctranscripts.com/layer2-summit/2018/lightning-overview/Wed, 25 Apr 2018 00:00:00 +0000https://btctranscripts.com/layer2-summit/2018/lightning-overview/https://twitter.com/kanzure/status/1005913055333675009 https://lightning.network/ diff --git a/speakers/kalle-alm/index.xml b/speakers/kalle-alm/index.xml index 9ba3254d64..f78dc388cc 100644 --- a/speakers/kalle-alm/index.xml +++ b/speakers/kalle-alm/index.xml @@ -2,7 +2,7 @@ Let’s prepare mkdir workspace cd workspace git clone https://github.com/bitcoin/bitcoin.git cd bitcoin git remote add kallewoof https://github.com/kallewoof/bitcoin.git git fetch kallewoof git checkout signet ./autogen.sh ./configure -C --disable-bench --disable-test --without-gui make -j5 When you try to run the configure part you are going to have some problems if you don’t have the dependencies. If you don’t have the dependencies Google your OS and “Bitcoin build”. If you have Windows you’re out of luck.Signet Integrationhttps://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Thu, 06 Feb 2020 00:00:00 +0000https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Slides: https://www.dropbox.com/s/6fqwhx7ugr3ppsg/Signet%20Integration%20V2.pdf BIP 325: https://github.com/bitcoin/bips/blob/master/bip-0325.mediawiki Signet on Bitcoin Wiki: https://en.bitcoin.it/wiki/Signet -Bitcoin dev mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Bitcoin dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html Bitcoin Core PR 16411 (closed): https://github.com/bitcoin/bitcoin/pull/16411 Bitcoin Core PR 18267 (open): https://github.com/bitcoin/bitcoin/pull/18267 Intro I am going to talk about Signet. Do you guys know what Signet is? A few people know. I will explain it briefly. I have an elevator pitch, I have three actually depending on the height of the elevator. 
Basically Signet is testnet except all the broken parts are removed.Signet annd its uses for developmenthttps://btctranscripts.com/edgedevplusplus/2019/signet/Tue, 10 Sep 2019 00:00:00 +0000https://btctranscripts.com/edgedevplusplus/2019/signet/https://twitter.com/kanzure/status/1171310731100381184 diff --git a/speakers/lloyd-fournier/index.xml b/speakers/lloyd-fournier/index.xml index 2a455b4812..ab58de36b0 100644 --- a/speakers/lloyd-fournier/index.xml +++ b/speakers/lloyd-fournier/index.xml @@ -3,7 +3,7 @@ Tim: Yes, so my name is Tim Ruffing. I am a maintainer of the libsecp256k1 libra Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Scriptless Lotterieshttps://btctranscripts.com/scalingbitcoin/tel-aviv-2019/scriptless-lotteries/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/tel-aviv-2019/scriptless-lotteries/Scriptless lotteries on bitcoin from oblivious transfer Lloyd Fournier (lloyd.fourn@gmail.com) https://twitter.com/kanzure/status/1171717583629934593 diff --git a/speakers/luke-dashjr/index.xml b/speakers/luke-dashjr/index.xml index 9e4305674a..fb7e90869c 100644 --- a/speakers/luke-dashjr/index.xml +++ b/speakers/luke-dashjr/index.xml @@ -1,6 +1,6 @@ -Luke Dashjr on ₿itcoin Transcriptshttps://btctranscripts.com/speakers/luke-dashjr/Recent content in Luke Dashjr on ₿itcoin TranscriptsHugo -- gohugo.ioenWed, 17 Mar 2021 00:00:00 +0000How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html -T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -F7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +Luke Dashjr on ₿itcoin Transcriptshttps://btctranscripts.com/speakers/luke-dashjr/Recent content in Luke Dashjr on ₿itcoin TranscriptsHugo -- gohugo.ioenWed, 17 Mar 2021 00:00:00 +0000How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html +T1-T6 and F1-F6 arguments for LOT=true and LOT=false: 
https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Transcript by: Stephan Livera Edited by: Michael Folkson Intro Stephan Livera (SL): Luke, welcome to the show. Luke Dashjr (LD): Thanks. @@ -22,7 +22,7 @@ Luke Dashjr (LD): OK. How are you?Segwit, PSBT Location: LA BitDevs (online) CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199 Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860 -Greg Sanders Bitcoin dev mailing list post in April 2017: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html +Greg Sanders Bitcoin dev mailing list post in April 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html The vulnerability The way Bitcoin transactions are encoded in the software is there is a list of coins essentially and then there is a list of destinations and the amount being sent to each destination. The destinations do not include the fee. Nothing in the transaction tells you what the fee of the transaction is.</description></item><item><title>Abstract Thinking About Consensus Systemshttps://btctranscripts.com/edgedevplusplus/2018/abstract-thinking-about-consensus-systems/Fri, 05 Oct 2018 00:00:00 +0000https://btctranscripts.com/edgedevplusplus/2018/abstract-thinking-about-consensus-systems/https://twitter.com/kanzure/status/1048039485550690304 slides: https://drive.google.com/file/d/1LiGzgFXMI2zq8o9skErLcO3ET2YeQbyx/view?usp=sharing Serialization Blocks are usually thought of as a serialized data stream. But really these are only true on the network serialization or over the network. A different implementation of bitcoin could in theory use a different format. The format is only used on the network and the disk, not the consensus protocol. The format could actually be completely different. diff --git a/speakers/matt-corallo/index.xml b/speakers/matt-corallo/index.xml index 9183f9128f..b60c619886 100644 --- a/speakers/matt-corallo/index.xml +++ b/speakers/matt-corallo/index.xml @@ -26,7 +26,7 @@ Matt Corallo (MC): Hi John Newbery (JN): Hi Matt AJ: Today we are going to do a little bit of a “This is your life Bitcoin”. MC: I am not dead yet. -AJ: You have a lot of contributions over the years so there is lots to talk about but we’ll start with three. Let’s start with compact blocks. Tell us a little bit about what compact blocks are and then we can dive a little bit deeper.Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +AJ: You have a lot of contributions over the years so there is lots to talk about but we’ll start with three. Let’s start with compact blocks. 
Tell us a little bit about what compact blocks are and then we can dive a little bit deeper.Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introduction There&rsquo;s not much new to talk about. Unclear about CODESEPARATOR. You want to make it a consensus rule that transactions can&rsquo;t be larger than 100 kb. No reactions to that? Alright. Fine, we&rsquo;re doing it. Let&rsquo;s do it. Does everyone know what this proposal is? Validation time for any block&ndash; we were lazy about fixing this. Segwit was a first step to fixing this, by giving people a way to do this in a more efficient way.The State of Bitcoin Mininghttps://btctranscripts.com/magicalcryptoconference/2019/the-state-of-bitcoin-mining/Sun, 12 May 2019 00:00:00 +0000https://btctranscripts.com/magicalcryptoconference/2019/the-state-of-bitcoin-mining/Topic: The State of Bitcoin Mining: No Good, The Bad, and The Ugly @@ -34,7 +34,7 @@ So I&rsquo;m going to talk a little bit about kind of the state of mining an Slides: https://docs.google.com/presentation/d/154bMWdcMCFUco4ZXQ3lWfF51U5dad8pQ23rKVkncnns/edit#slide=id.p https://twitter.com/kanzure/status/1144256392490029057 Introduction Thanks for having me. I want to talk a little bit about a project I’ve been working on for about a year called rust-lightning. I started it in December so a year and a few months ago. This is my first presentation on it so I’m excited to finally get to talk about it a little bit. -Goals It is yet another Lightning implementation because somehow we needed another one I guess or really not.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html +Goals It is yet another Lightning implementation because somehow we needed another one I guess or really not.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html Draft BIP: https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki Intro I am going to talk about BetterHash this evening. If you are coming to Advancing Bitcoin don’t worry I am talking about something completely different. You are not going to get duplicated content. That talk should be interesting as well though admittedly I haven’t written it yet. We’ll find out. 
BetterHash is a project that unfortunately has some naming collisions so it might get renamed at some point, I’ve been working on for about a year to redo mining and the way it works in Bitcoin.Bitcoin's Miner Security Modelhttps://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2017/bitcoin-mining-and-trustlessness/Sat, 04 Mar 2017 00:00:00 +0000https://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2017/bitcoin-mining-and-trustlessness/https://twitter.com/kanzure/status/838500313212518404 Yes, something like that. As mentioned, I&rsquo;ve been contributing to Bitcoin Core since 2011 and I&rsquo;ve worked on various parts of bitcoin, such as the early payment channel work, some wallet stuff, and now I work for Chaincode Lab one of the sponsors&ndash; we&rsquo;re a bitcoin research lab. We all kind of hang out and work on various bitcoin projects we find interesting. I want to talk about reflections on trusting trust.Fungibility Overviewhttps://btctranscripts.com/scalingbitcoin/milan-2016/fungibility-overview/Sat, 08 Oct 2016 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/milan-2016/fungibility-overview/https://twitter.com/kanzure/status/784676022318952448 diff --git a/speakers/or-sattath/index.xml b/speakers/or-sattath/index.xml index c337ed190b..d937053835 100644 --- a/speakers/or-sattath/index.xml +++ b/speakers/or-sattath/index.xml @@ -1,4 +1,4 @@ -Or Sattath on ₿itcoin Transcriptshttps://btctranscripts.com/speakers/or-sattath/Recent content in Or Sattath on ₿itcoin TranscriptsHugo -- gohugo.ioenRedesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html +Or Sattath on ₿itcoin Transcriptshttps://btctranscripts.com/speakers/or-sattath/Recent content in Or Sattath on ₿itcoin TranscriptsHugo -- gohugo.ioenRedesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html https://www.reddit.com/r/Bitcoin/comments/72qi2r/redesigning_bitcoins_fee_market_a_new_paper_by/ paper: https://arxiv.org/abs/1709.08881 He will be exploring alternative auction markets. diff --git a/speakers/pieter-wuille/index.xml b/speakers/pieter-wuille/index.xml index 4915c07964..025e2436de 100644 --- a/speakers/pieter-wuille/index.xml +++ b/speakers/pieter-wuille/index.xml @@ -46,7 +46,7 @@ Draft of BIP-Schnorr: https://github.com/sipa/bips/blob/bip-schnorr/bip-schnorr. 
Draft of BIP-Taproot: https://github.com/sipa/bips/blob/bip-schnorr/bip-taproot.mediawiki Draft of BIP-Tapscript: https://github.com/sipa/bips/blob/bip-schnorr/bip-tapscript.mediawiki Part 1 Adam: On this episode we&rsquo;ll be digging deeply into some of the most important changes coming soon to the Bitcoin protocol in the form of BIPs or Bitcoin Improvement Proposals focused on Taproot, Tapscript and Schnorr signatures.Taproothttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ previously: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/speakers/ruben-somsen/index.xml b/speakers/ruben-somsen/index.xml index d66c9bc84d..612c83d0e6 100644 --- a/speakers/ruben-somsen/index.xml +++ b/speakers/ruben-somsen/index.xml @@ -29,7 +29,7 @@ Intro Happy halvening. Today I have a special technical presentation for you. Ba Motivation First let me tell you my motivation. I think Bitcoin is a very important technology. We need to find ways to utilize it at its maximum potential without sacrificing decentralization.Statechainshttps://btctranscripts.com/edgedevplusplus/2019/statechains/Tue, 10 Sep 2019 00:00:00 +0000https://btctranscripts.com/edgedevplusplus/2019/statechains/Schnorr signatures, adaptor signatures and statechains https://twitter.com/kanzure/status/1171345418237685760 Introduction If you want to know the details of statechains, I recommend checking out my talk from Breaking Bitcoin 2019 Amsterdam. I&rsquo;ll give a quick recap of Schnorr signatures and adaptor signatures and then statechains. I think it&rsquo;s important to understand Schnorr signatures to the point where you&rsquo;re really comfortable with it. A lot of the cool stuff in bitcoin right now is related to or using Schnorr signatures.Blind statechains: UTXO transfer with a blind signing serverhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html overview: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 statechains paper: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf previous transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/speakers/stepan-snigirev/index.xml b/speakers/stepan-snigirev/index.xml index bd12530115..f2eddfd6e8 100644 --- a/speakers/stepan-snigirev/index.xml +++ b/speakers/stepan-snigirev/index.xml @@ -14,7 +14,7 @@ Introduction We have 30 minutes. 
This was a talk by me and Jimmy but Jimmy talke Imagine you want to start working in bitcoin development. There&rsquo;s a chance you will end up using a hardware wallet. In principle, it&rsquo;s not any different from other software development. There&rsquo;s still programming involved, like for firmware. Wallets have certain features and nuances compared to software wallets.Hardware Walletshttps://btctranscripts.com/austin-bitcoin-developers/2019-06-29-hardware-wallets/Sat, 29 Jun 2019 00:00:00 +0000https://btctranscripts.com/austin-bitcoin-developers/2019-06-29-hardware-wallets/https://twitter.com/kanzure/status/1145019634547978240 see also: Extracting seeds from hardware wallets The future of hardware wallets coredev.tech 2019 hardware wallets discussion Background A bit more than a year ago, I went through Jimmy Song&rsquo;s Programming Blockchain class. That&rsquo;s where I met M where he was the teaching assistant. Basically, you write a python bitcoin library from scratch. The API for this library and the classes and fnuctions that Jimmy uses is very easy to read and understand.Hardware Wallets (History of Attacks)https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Wed, 01 May 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Slides: https://www.dropbox.com/s/64s3mtmt3efijxo/Stepan%20Snigirev%20on%20hardware%20wallet%20attacks.pdf -Pieter Wuille on anti covert channel signing techniques: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html +Pieter Wuille on anti covert channel signing techniques: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html Introduction This talk is the second in the series after my previous talk in London a few months ago at the Advancing Bitcoin conference. There I was talking mostly about general attacks on hardware, more from the theoretical perspective. I didn’t say anything bad hardware wallets that exist and I didn’t specify anything on the bad side. Here I feel a bit more free as it is a meetup not a conference so I can say bad things about everyone.The Future of Hardware Walletshttps://btctranscripts.com/breaking-bitcoin/2019/future-of-hardware-wallets/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/breaking-bitcoin/2019/future-of-hardware-wallets/D419 C410 1E24 5B09 0D2C 46BF 8C3D 2C48 560E 81AC https://twitter.com/kanzure/status/1137663515957837826 Introduction We are making a secure hardware platform for developers so that they can build their own hardware wallets. Today I want to talk about certain challenges for hardware wallets, what we&rsquo;re missing, and how we can get better. diff --git a/stanford-blockchain-conference/2019/htlcs-considered-harmful/index.html b/stanford-blockchain-conference/2019/htlcs-considered-harmful/index.html index e86e4ef4b3..82052d2100 100644 --- a/stanford-blockchain-conference/2019/htlcs-considered-harmful/index.html +++ b/stanford-blockchain-conference/2019/htlcs-considered-harmful/index.html @@ -9,4 +9,4 @@ < Htlcs Considered Harmful

Htlcs Considered Harmful

Speakers: Daniel Robinson

Transcript By: Bryan Bishop

Tags: Lightning

Category: Conference

https://twitter.com/kanzure/status/1091033955824881664

Introduction

I am here to talk about a design pattern that I don’t like. You see HTLCs a lot. You see them in a lot of protocols for cross-chain atomic exchanges and payment channel networks such as lightning. But they have some downsides, which people aren’t all familiar with. I am not affiliated with Interledger, but they convinced me that HTLCs were a bad idea and they developed an alternative to replace HTLCs. There’s a simpler way to solve the same problem.

The problem of fair exchange

What is the problem that HTLCs solve? It’s not atomicity. What HTLCs actually solve is the problem of fair exchange. In Raiders of the Lost Ark, Indy is trying to get across the pit and the other guy says if you throw me the idol then I’ll throw you the rope to get across. He doesn’t have a choice there, he has no way to guarantee that when he throws over the idol the counterparty will throw the rope. It doesn’t work out so well for the assistant in the movie, and in a payment channel the thief is going to win out in the same way. This is the problem that we’re trying to solve.

Atomic transactions on a shared ledger are comparatively trivial to solve. It’s a huge benefit of a blockchain that you have all your assets on the same ledger and it can all happen in one transaction. But it turns out that’s unrealistic even in the blockchain world; there are many ledgers that are totally unrelated to each other. So trying to do atomic transactions across different ledgers is a problem.

Cross-chain atomic swap

The primary use case of Litecoin is as an example for cross-chain atomic swaps between bitcoin and litecoin. Suppose we’re doing an on-chain payment from Alice to Bob in exchange for Bob’s litecoin. Alice can go first and make a payment. But then Bob could cheat Alice by not making the payment. Bob could go first, but then Alice can cheat Bob by not making the bitcoin payment. So we need a protocol to make sure that value is exchanged fairly.

HTLCs

Originally people came up with hashed timelock contracts (HTLCs) for things like cross-chain atomic swaps. Sometimes people call them hash timelock contracts. Are they contracts though? HTLC doesn’t really stand for anything anymore.

HTLCs provide cross-ledger atomic transactions. Either both transactions complete, or neither do. I say cross-ledger because it’s not just a blockchain it could be payment channels or whatever.

Here is an illustration of HTLCs in Ivy. There are two ways to complete an HTLC: either one of the recipients can complete by revealing a secret, or the sender can cancel after a timeout. So this is an example of just one HTLC, and it’s a two-phase commit protocol for these two transactions to happen atomically.
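
To make those two spending paths concrete, here is a minimal Python sketch (not the Ivy code from the slide) that models an HTLC’s two ways of completing: the recipient claims by revealing the preimage, or the sender reclaims the funds after the timeout. The class and method names are made up for illustration.

```python
import hashlib
import time

class Htlc:
    """Toy model of an HTLC's two spending conditions."""
    def __init__(self, payment_hash: bytes, timeout: float, amount_sats: int):
        self.payment_hash = payment_hash  # hash of the recipient's secret
        self.timeout = timeout            # unix time after which the sender can refund
        self.amount_sats = amount_sats

    def claim_with_preimage(self, preimage: bytes) -> bool:
        # Recipient path: reveal the secret whose hash was committed to.
        return hashlib.sha256(preimage).digest() == self.payment_hash

    def refund_after_timeout(self, now: float) -> bool:
        # Sender path: reclaim the funds once the timeout has passed.
        return now >= self.timeout

# Alice locks funds to the hash of her secret with a 48 hour timeout.
secret = b"alice's secret"
htlc = Htlc(hashlib.sha256(secret).digest(), time.time() + 48 * 3600, 100_000_000)
assert htlc.claim_with_preimage(secret)            # completes by revealing the preimage
assert not htlc.refund_after_timeout(time.time())  # refund path not available yet
```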

I’m sure you have seen this before. It’s bitcoin for litecoin. Alice locks a bitcoin into a 48-hour HTLC, using a hash of Alice’s secret. Bob locks his litecoin on another network with a 24-hour HTLC using the same secret. Bob will learn the secret when Alice spends to get the litecoin. This is the happy case when Alice reveals the secret. But what happens when she doesn’t reveal the secret? Well then they both get their money back after the timeouts.

Your timeouts can’t be too short, because if someone sees something on one of the chains or in a payment channel, they need time to get their transaction included in a block on the other side. There’s a limit on how short you can make these timeouts.

If you change a little bit about this thought experiment, you can change from a cross-chain atomic swap into a cross-chain atomic payment like a cross-chain lightning network payment. It’s like a trustless shapeshift.io implementation. These don’t have to be on-chain, HTLCs can be embedded in payment channels. The mechanism for this is somewhat complicated but you can do these ledger updates with the ledgers themselves being the payment channels. This is how lightning works- there are multi-hop payment channels that go through a network of HTLC locks. To complete the payment, the receiver reveals the preimage, and it propagates back down the path to the sender. At any point if there’s a dispute, then you can go to the main chain and settle it there. All this money gets locked up along the path, and then when the receiver reveals the preimage, it all ripples back.

I don’t know if lightning network was named lightning for this reason, but if you notice, lightning in a thunderstorm doesn’t really go down. When you see these paths of the ions going down, this isn’t the lightning strike. Lightning actually strikes up, it goes backward. This payment goes here, and it comes back. Anyway, I don’t know if that’s why they call it lightning network, but it’s a good way to remember that.

People who use lightning don’t think of it like that. I think of it as payment channels and HTLCs. These are two composable separate ingredients. Payment channels give you off-chain bilateral ledgers, and HTLCs give you cross-ledger atomic transactions. HTLCs are the part that I am picking on. I love payment channels. HTLCs are a problem.

So, why are HTLCs harmful?

There’s a lot of reasons. It’s a free option, there’s a griefing attack, and there’s complexity.

Free option problem

This problem has gone mainstream recently which is great. The problem here is that Alice locks hers first and then Bob locks his. This matters when you’re doing a multi-asset swap with HTLCs. Usually Alice goes to complete the payment immediately, but what if Alice just sits and watches the price and decides at the last minute whether to do it? Alice is getting a free option to execute the transaction. It could be that the price moves and the trade is no longer economic, in which case she could let the HTLC time out and cancel the trade. In the unlikely event that the litecoin market price rises, Alice can complete the transaction and get her new money. This is basically an American-style call option. This is worth a premium, but Alice isn’t actually paying that premium in an HTLC. In fact, in lightning, if you bail on an HTLC you don’t actually pay any fee for it at all. So this is a vulnerability and can be attacked.

My good friend ZmnSCPxj made a good argument for a single-asset lightning network on lightning-dev a month ago. It’s a good analysis, I recommend looking at it: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001752.html

Griefing problem

Remember how these arrows go? There are multiple hops in the payment path. We are just waiting for lightning to strike. But what if it doesn’t? Everybody’s money is stuck in that lock all day. It’s just a feature of the system. And this might not be a 2 hop payment. It could be a 20 hop payment. Bob could have constructed a convoluted path to mess with everyone on that path. This is one way that someone could cause a lot of problems on the lightning network and cause something like a 20x loss. For someone in the middle, they don’t know Bob, they just route his payment. They didn’t sign up for their bitcoin to be locked up for the whole day, and they don’t even get paid for it, they just get cancelled.

Complexity

There’s one more problem with HTLCs. Embedding HTLCs in payment channels and using them in general is kind of complicated. It depends on what kind of features you support in the payment channel, and on the base ledger features, and it’s a pain to settle them. And it’s expensive. On bitcoin, you need to do a few transactions to settle an HTLC. The default minimum value for HTLCs right now seems to be about 50 cents on bitcoin, depending on what transaction fees are and the current market price. Doing payments smaller than that with HTLCs is not really economical with adversarial counterparties.

The alternative: packetized payments

Packetized payments are a piece of what has been developed as the Interledger protocol by people at Ripple. It’s an independent protocol for doing a lightning-like payment channel network but potentially across a wider class of assets.

Packetized payments take some inspiration from the packet switching networks in the history of the internet. When the internet was being developed, files were being shared over dedicated connections between computers. You can think of it like this- and this is an oversimplification because I don’t have a computer science degree and I have no idea how any of this works- instead of sending a whole file as one big chunk over a dedicated connection across a bunch of hops, you split it up into tiny packets and switch them around, and it doesn’t matter what order they arrive in, and if some of the packets get dropped that’s fine. In Interledger, we do this, but we do it with payments.

Let’s go back to what the cross-chain atomic swap problem was. Alice might exit scam Bob. But what if the payment amount was tiny? Then it doesn’t matter if the counterparty disappears, because the payments are so small. If you do it all in sequence, one tiny bit at a time, you eventually make the whole payment. Your counterparty can only steal a small bit at most, so it doesn’t matter.

So you split your big atomic trade into small, economically-insignificant trades. You take turns executing tiny pieces of it, in sequence. If your counterparty cheats you at any point, close the channel. This works for multi-hop payments as well. The pieces are so tiny that it wouldn’t even be worth having the ledger enforce them anyway.
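
As a rough sketch of that turn-taking (hypothetical names and packet size, not any real wallet’s API): the trade is cut into tiny slices, each side alternates, and the moment the counterparty stops reciprocating you stop sending and close the channel, capping your loss at one slice.

```python
def packetized_swap(total_sats: int, packet_sats: int, counterparty_honest_until: int):
    """Exchange `total_sats` one tiny packet at a time.

    `counterparty_honest_until` simulates a counterparty that stops
    reciprocating after some number of packets; the most they can
    steal is a single packet.
    """
    sent = received = 0
    packets = 0
    while sent < total_sats:
        amount = min(packet_sats, total_sats - sent)
        sent += amount                       # we send our slice first
        if packets >= counterparty_honest_until:
            print("counterparty stopped reciprocating; closing channel")
            break
        received += amount                   # honest counterparty sends theirs back
        packets += 1
    return sent, received

sent, received = packetized_swap(total_sats=1_000_000, packet_sats=1_000,
                                 counterparty_honest_until=400)
print(f"sent {sent} sats, received {received} sats, at risk: {sent - received} sats")
```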

The griefing risk and free option problem are minimized. They can have very tiny packets with very short timeouts. The free option problem isn’t a big deal because if you do give someone an American-style call option, it’s only for an extremely small amount of time. Also, this works over any payment medium that supports small, cheap payments. It works even for bank wire transfers, or tossing pennies across the grand canyon or something.

Is this expensive? Well, not in payment channels. Payment channels are extremely cheap and fast. If the money is not in a payment channel, put it in a payment channel first.

Doesn’t require that much trust. You can literally have a satoshi as the payment amount. That’s probably not the right value for most of these relationships. Only your immediate counterparty can cheat you. You can bound the trust limit to an arbitrarily small amount. The griefing risk with HTLCs requires way more trust in your counterparties (and everyone downstream from them). Lightning does something very similar for payments below the HTLC dust limit (default around 50 cents). It doesn’t even make sense to put an HTLC in that case… when you’re making a 1 satoshi payment on the lightning network, you’re not even using HTLCs. It’s very similar to how Interledger does it, and there are some slight differences that are irrelevant, but it shows that people are willing to enter into this amount of trust. Counterparties can grief you for the fees too, so you should factor that into the calculus of what you are willing to lose from the channel.

What if a payment fails halfway through? Well, find another route. Refund it. In the worst case scenario you can go to the main chain and make a payment to them there, which is the same fallback you have if there’s a problem with HTLCs completing. I don’t think this is a big deal, we just need transport layers and application layers that don’t freak out about partial payment completion. I also think that if you have a liquid network, and nobody is griefing the whole thing, and you have tiny payments, it should be very likely that you will find an alternative route.

Lightning people make a big deal about atomic multipath payments. I think that’s bad as well. If you want to send an HTLC too large for your channel, you make part of it on one channel and part on another path. The way that atomic multipath payments work is that, as part of the protocol, one of the HTLCs isn’t completed until the other one has been routed to the recipient. But that’s literally asking them to grief the network, holding the HTLCs open until they get the payment they want. And why? Because they will be confused if they receive a slightly smaller payment? I think that’s the wrong choice.

Larger payments will take more time. The amount of latency will depend on the size of the payment, somewhat. If you have computers close to each other, then most of the work in a payment channel update is done just by the communication and you can get very fast. Say you can do 20 payments per second and 50 cents per payment, then that’s $10/second which isn’t too bad. If you go with higher amounts then you get a lot more throughput.

I am not sure lightning is a good idea for large payments anyway. As a payment gets larger, it’s harder to route it over the lightning network. There are more risks, and with HTLCs there’s griefing risk. So you have this curve that slopes upwards in how much it costs and how difficult it is to settle a payment on the lightning network the larger the payment gets, and at some point you should just go settle on the main chain instead. Probably it will mostly be small payments that can’t be economical on the main chain.

Other downsides of packetized payments: it doesn’t work for non-fungible assets. You can’t put a CryptoKitty in a payment channel. You could do it, but it won’t be useful. You need a liquid, fungible asset for this. Also, some more complex protocols can’t be supported this way. But for the basic use case of payments in a liquid asset across a payment channel network, I think packetized payments are better.

Conclusion

I am over HTLCs. I don’t want to write HTLC code ever again. Lightning network does support HTLCs, sure. You can do packetized payments on top of lightning by making small payments on lightning network. Interledger doesn’t care what the substrate is. There’s a beta Interledger connector for lightning network, where the payments get settled on lightning network.

Thanks again to Evan Schwartz for the inspiration.

Christian Decker

Date: August 13, 2020

Transcript By: Michael Folkson

Tags: Sighash anyprevout, Eltoo, Transaction pinning, Multipath payments, Trampoline payments, Privacy problems

Category: Podcast

Media: https://stephanlivera.com/episode/200/

Transcript completed by: Stephan Livera Edited by: Michael Folkson

Latest ANYPREVOUT update

ANYPREVOUT BIP (BIP 118): https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki

Stephan Livera (SL): Christian welcome back to the show.

Christian Decker (CD): Hey Stephan, thanks for having me.

SL: I wanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about: ANYPREVOUT, MPP, Lightning attacks, the latest with Lightning Network. Let’s start with ANYPREVOUT. I see that yourself and AJ Towns just recently did an update and I think AJ Towns did an email to the mailing list saying “Here’s the update to ANYPREVOUT.” Do you want to give us a bit of background? What motivated this recent update?

CD: When I wrote up the NOINPUT BIP it was just a bare bones proposal that did not take Taproot into consideration at all, simply because we didn’t know as much about Taproot as we do now. What I did for NOINPUT (BIP118) was to have a minimal working solution that we could use to implement eltoo on top of, and a number of other proposals. But we didn’t integrate it with Taproot simply because that wasn’t at a stage where we could use it as a solid foundation yet. Since then that has changed. AJ went ahead and did the dirty work of actually integrating the two proposals with each other. That’s where ANYPREVOUT and ANYPREVOUTANYSCRIPT, the two variants, came out. Now it’s very nicely integrated with the Taproot system. Once Taproot goes live we can deploy ANYPREVOUT directly without a lot of adaptation having to happen. That’s definitely a good change. ANYPREVOUT supersedes the NOINPUT proposal which was a bit of a misnomer. Using ANYPREVOUT we get the effects that we want to have for eltoo and some other protocols and have them nicely integrated with Taproot. We can propose them once Taproot is merged.

Eltoo

Christian Decker on Eltoo at Chaincode Labs: https://diyhpl.us/wiki/transcripts/chaincode-labs/2019-09-18-christian-decker-eltoo/

SL: Let’s talk a little bit about the background. For the listeners who aren’t familiar, what is eltoo? Why do we want that as opposed to the current model for the Lightning Network?

CD: Eltoo is a proposal that we came up with about two years ago. It is an alternative update mechanism for Lightning. In Lightning we use what’s called an update mechanism to go from one state to the next one and make sure that the old state is not enforceable. If we take an example, you and I have a channel open with 10 dollars on your side. The initial state reflects this. 10 dollars goes to Stephan and zero goes to Christian. If we do any sort of transfer, some payment that we are forwarding over this channel or a direct payment that we want to have between the two of us, then we need to update this state. Let’s say you send me 1 dollar. The new state becomes 9 dollars to Stephan and 1 dollar to Christian. But we also need to make sure that the old state cannot be enforced anymore. You couldn’t go back and say “Hey I own 10 out of 10 dollars on this contract.” I need to have the option of saying “Wait that’s outdated. Please use this version instead.” What eltoo does is exactly that. We create a transaction that reflects our current state. We have a mechanism to activate that state and we have a mechanism to override that state if it turns out to be an old one instead of the latest one. For this to be efficient what we do is we say “The newest state can be attached to any of the old states.” Traditionally this would be done by taking the signature and, if there are n old states, creating n variants with n signatures, one for each binding to an old state. With the ANYPREVOUT or NOINPUT proposal we have the possibility of having one transaction that can be bound to any of the previous states without having to re-sign. That’s the entire trick. We make one transaction applicable to multiple old states by leaving out the exact location from where we are spending. We leave out the UTXO reference that we’re spending when signing. We can modify that later on without invalidating the signature.

SL: Let me replay my understanding there. This is the current model of Lightning. You and I set up a channel together. What we’re doing is we’re putting a multisignature output onto the blockchain and that is a 2-of-2. Then what we’re doing is we’re passing back and forward the new states to reflect the new output. So let’s say it was initially 10 dollars to me and zero to you and then 9 dollars to me and 1 dollar to you. In the current model if somebody tries to cheat the other party. Let’s say I’m a scammer and I try to cheat you. I publish a Bitcoin transaction to the blockchain that is the pre-signed commitment transaction that closes the channel. The idea is your Lightning node is going to be watching the chain and say “Oh look, Stephan’s trying to cheat me. Let me do my penalty close transaction.” In the current model that would put all the 10 dollars onto your side.

CD: Exactly. For any of my wrong actions you have a custom tailored reaction to that that punishes me and penalizes me by crediting you with all the funds. That’s the exact issue that we’re facing is that these reactions have to be custom tailored to each and every possible misbehavior that I could do. Your set of retaliatory transactions grows every time that we perform a state change. We might have had 1 million states since the beginning and for each of these 1 million states you have to have a tailored reaction that you can replay if I end up publishing transaction 993 for example. This is one of the core innovations that eltoo brings to the table. You no longer have this custom tailored transaction to each of the previous states. Instead you can tailor it on the fly to match whatever I just did. You do not have to keep an ever-growing set of retaliation transactions in your database backed up somewhere or at the ready.

SL: In terms of benefits, it softens the penalty model. Instead of one party cheating the other and then losing everything, now if somebody publishes a wrong transaction or an old state then the other party just publishes the most up to date one that they have. The other benefit here is a scaling one that it might be easier for someone to host watchtowers because it’s less hard drive usage.

CD: Exactly. It is definitely the case that it becomes less data intensive in the sense that a watchtower or even yourself do not have to manage an ever growing set of transactions. Instead all you need to do is to have the latest transaction in your back pocket. Then you can react to whatever happens onchain. That’s true for you as well as for watchtowers. Watchtowers therefore become really cheap because they just have to manage these 200 bytes of information. When you hand them a new transaction, a new reaction, they throw out the old one and keep the new one. The other effect that you mentioned is that we now override the old state instead of using the old state but then penalizing. That has a really nice effect. What we do in the end is enforce a state that we agreed upon instead of enforcing “This went horribly wrong and now I have to grab all of the money.” It changes the semantics of what we do a bit: we can only update the old state, not force an issue on the remote end and steal money from them. That’s really important when it comes to backups with Lightning. As it is today backups are almost impossible to do because whenever you restore you cannot be sure that it’s really the latest state and when you publish it that it’s not going to be seen as a cheating attempt. Whereas with eltoo you can take any old state, publish it and the worst that can happen is that somebody else comes along and says “This is not the latest state, there’s a newer one. Here it is.” You might not get your desired state. Let’s say you want to take all 10 out of 10 dollars from the channel but you will still get the 9 out of 10 that you own in the latest state because all I can do is override your “10 go to Stephan” with my “9 go to Stephan and 1 goes to Christian”. We’ve reduced the penalty for misbehavior in the network from being devastating and losing all of the funds to a more reasonable level where we can say “At least I agreed to the state and it’s going to be a newer state that I agreed upon.” I often compare it to the difference between the Lightning penalty being death by beheading whereas eltoo is death by a thousand paper cuts. The cost of misbehaving is much reduced, allowing us to have working backups and a lot of nice properties that we can probably talk about later such as true multiparty channels with any number of participants. That’s all due to the fact that we no longer insist on penalizing the misbehaving party, we now instead correct the effects that the misbehaving party wanted to trigger.

SL: Fantastic. Your eltoo paper introduces the idea of state numbers, an onchain enforceable variant of sequence numbers. As I understand it there’s a ratchet effect: once you move up to a new state, that’s now the current one. It means that at least one of our nodes has the ability to enforce the correct latest state. Could you just explain that state numbers idea?

CD: The state numbers idea actually connects back to the very first iteration of Bitcoin with the nSequence proposal that Satoshi himself added. nSequence meant that you could have multiple versions of transactions and miners were supposed to pick the one with the highest sequence number and confirm that, basically replacing any previous transaction that had a lower sequence number. That had a couple of issues, namely that there is no way to force miners to do this. You can always bribe a miner to use a version of a transaction that suits you better or they might be actively trying to defraud you. There is no really good way of enforcing nSequence numbers. On the other hand, what we do with the state numbers is that we do not give the miners the freedom to choose which transaction to confirm. What we do is we say “We have transaction 100 and this transaction 100 can be attached to any previous transaction, confirmed or unconfirmed, that has a state number lower than 100.” In eltoo we say “This latest state represented by this transaction with a state number of 100 can be attached to any of the previous transactions and override their effect by ratcheting forward the state.” Let’s say you have a published state 90. That means that anything with state number 91, 92, 93 and so on can be attached to your transaction. Your transaction might confirm but the effects that you want are in the settlement part of the transaction. If I can come in and attach a newer version of that state, a new update transaction, to your published transaction, then I can basically detach the settlement part of the transaction from this ratcheting forward. I have just disabled your attempt at settlement by ratcheting forward and initiating the settlement for state 100. Then you could come along and say “Sorry I forgot about state 100. Here’s state 110.” Even while closing the channel we can still continue making updates to the eltoo channel using these state numbers. The state numbers are really nothing other than an explicit way of saying “This number 100 overrides whatever came before it.” Whereas with LN penalty the only association you have between the individual transactions is by following the “is spent by” relationship. You have a set of transactions that can be published together. But there is no sense of transitive overriding of effects.
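
A minimal sketch of that ratchet, under the simplifying assumption that any update can bind to any published state with a lower number (this illustrates the idea only, not the actual eltoo transaction format):

```python
class UpdateTx:
    """Toy eltoo-style update transaction identified only by its state number."""
    def __init__(self, state_number: int):
        self.state_number = state_number

def can_attach(later: UpdateTx, published: UpdateTx) -> bool:
    # A later update can bind to, and override, any published state
    # with a strictly lower state number.
    return later.state_number > published.state_number

published = UpdateTx(90)    # counterparty publishes old state 90
latest = UpdateTx(100)      # we hold the latest state 100
assert can_attach(latest, published)            # we override their settlement attempt
assert not can_attach(UpdateTx(85), published)  # older states cannot override newer ones
```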

SL: A naive question that a listener might be thinking, Christian what if I tried to set my state number higher than yours, what’s stopping me from doing that?

CD: You can certainly try. But since we are still talking about 2-of-2 multisig outputs I would have to countersign that. I might sign it but then I will make sure that if we later come to a new agreement on what the latest state should be that that state number must be higher than whatever I signed before. This later state can then override your spuriously numbered state. In fact that’s something that we propose in the paper to hide the number of updates that were performed on a channel. Not to go incrementing one by one but have different sized increment steps so that when we settle onchain we don’t tell the rest of the world “Hey, by the way we just had 93 updates.”

RBF pinning

Matt Corallo on RBF pinning: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html

SL: From watching the Bitcoin dev mailing list, I saw some discussion around this idea of whether the Lightning node should also be looking into what’s going on in the mempool of Bitcoin versus only looking for transactions that actually get confirmed into the chain. Can you comment on how you’re thinking about the security model? As I understand, you’re thinking more that we’re just looking at what’s happening on the chain and the mempool watching is a nice to have.

CD: With all of these protocols we can usually replay them only onchain and we don’t need to look at the mempool. That’s true for eltoo as it is true for Lightning penalty. Recently we had a lengthy discussion about an issue that is dubbed the RBF pinning attack which makes this a bit harder. The attack is a bit involved but it basically boils down to the attacker placing a placeholder transaction in the mempool of the peers, making sure that that transaction does not confirm. But being in the mempool that transaction can result in rejections for future transactions. That comes into play when we are talking about HTLCs which span multiple channels. We can have effects where the downstream channel is still locked because the attacker placed a placeholder transaction in the mempool. We are frantically trying to react to this HTLC being timed out but our transaction is not making it into the mempool because it is being rejected by this poison transaction there. If that happens on a single channel that’s ok because eventually we will be able to resolve that and an HTLC is not a huge amount usually. Where this becomes a problem is if we were forwarding that payment and we have a matching upstream HTLC that now also needs to time out or succeed. That depends on the downstream HTLC which we don’t get to see. So it might happen that the upstream HTLC gets timed out. Our upstream node told us “Here’s 1 dollar. I promise to give it to you if you can show me this hash preimage in a reasonable amount of time.” You turned around and forwarded that promise to the attacker and said “Here’s 1 dollar. You can have it if you give me the secret in time.” The downstream attacker doesn’t tell you in time so you will be ok with the upstream one timing out. But it turns out the downstream one can succeed. So in the end you’re out of pocket for the forwarded amount. That is a really difficult problem to solve without looking at the mempool because the mempool is the only indication that this attack is going on and therefore that we should be more aggressive in reacting to it. Most lightning nodes do not actually look at the mempool currently. There are two proposals that we’re trying to pursue. One is to make the mempool logic a bit less unpredictable, namely that we can still make progress without reaction even though there is this poison transaction. That is something that we’re trying to get the Bitcoin Core developers interested in. On the other side we are looking into mechanisms to look at the mempool, see what is happening and then start alerting nodes that “Hey you might be under attack. Please take precautions and react accordingly.”

SIGHASH flags

SL: I also wanted to chat about SIGHASH flags because ANYPREVOUT and ANYPREVOUTANYSCRIPT are some new SIGHASH flags. Could you take us through some of the basics on what is a SIGHASH flag?

CD: A sighash flag is sometimes confused with an opcode. It is just a modifier of existing opcodes, namely the OP_CHECKSIG, OP_CHECKSIGVERIFY, OP_CHECKMULTISIG and OP_CHECKMULTISIGVERIFY variants, that basically instructs the CHECKSIG operation as to which part of the transaction should be signed and which part should not be signed. In particular what we do with SIGHASH_ANYPREVOUT is when computing the signature and verifying the signature, we do not include the previous outputs in the signature itself. These can be modified if desired without invalidating the signature. It is like a kid having a bad grade at school coming home and needing a signature from the parents. What he does is he covers up part of the permission slip and the parents still sign it. Only then does he uncover the covered part. Changing what was covered afterwards does not invalidate the signature itself. That’s sort of a nefarious example but it can be really useful. If you’ve ever given out a blank check for example where you could fill in the amount at a later point in time or fill out the recipient at a later point in time, that’s a very useful tool. For eltoo we take the reaction transaction to something that our counterparty has done and adapt it in such a way that it can cleanly attach to what the counterparty has done. There are already some existing SIGHASH flags. The default one is SIGHASH_ALL which covers the entirety of the transaction without the input scripts. There’s SIGHASH_SINGLE, which has been used in a couple of places. It signs the input and the matching output but there can be other inputs and outputs as well that are not covered by the signature. You can amend a transaction later on, adding new funds to that transaction and new recipients to that transaction. We use that for example to attach fees to transactions in eltoo. Fees in eltoo are not intrinsic to the update mechanism itself. They are attached like a sidecar, which removes the need for us to negotiate fees between the endpoints, something that in the beginning of Lightning caused a lot of channels to die, simply through disagreement on fees. There’s also SIGHASH_NONE which basically signs nothing. It signs the overall structure of the transaction but it doesn’t restrict which inputs can be used, it doesn’t restrict which outputs can be used. If you get one of these transactions, you can basically rewrite it at will, sending yourself all the money that would have been transferred by it.

SL: I guess for most users, without knowing when they’re doing standard single signature spending on their phone wallet or whatever, they’re probably using SIGHASH_ALL. That’s what their wallet is using in the background for them. If the listener wants to see how this might work, they could pull up a block explorer and on a transaction you can see the different inputs and outputs. What we’re talking about here is what we are committing to when we sign. What is that signature committing to?

CD: It takes the transaction, it passes through a hash function and then the hash is signed. The effect that we have is that if anything in the transaction itself is modified, which was also part of the hash itself, then the signature is no longer valid. It means that I both authorize this transaction and I authorize it only in this form that I’m currently signing. There can be no modification afterwards or else the signature would have to change in order to remain valid. If with the SIGHASH flags we remove something from the commitment to the transaction then we give outsiders or ourselves the ability to modify without having to re-sign.
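
As a toy illustration of that idea, here is a hedged Python sketch in which a flag decides which transaction fields go into the digest that gets signed. The field selection is deliberately simplified and is not the real BIP143/BIP341/BIP118 digest algorithm.

```python
import hashlib
import json

def sighash_digest(tx: dict, flag: str) -> bytes:
    """Deliberately simplified: choose which transaction fields the signature commits to."""
    committed = {"version": tx["version"], "locktime": tx["locktime"], "outputs": tx["outputs"]}
    if flag == "ALL":
        committed["prevouts"] = [i["prevout"] for i in tx["inputs"]]  # exact coins being spent
    # "ANYPREVOUT" leaves the prevout references out, so the input can later be rebound.
    return hashlib.sha256(json.dumps(committed, sort_keys=True).encode()).digest()

tx = {"version": 2, "locktime": 0,
      "inputs": [{"prevout": "abcd:0"}],
      "outputs": [{"value": 90_000, "script": "key X"}]}

all_before = sighash_digest(tx, "ALL")
apo_before = sighash_digest(tx, "ANYPREVOUT")
tx["inputs"][0]["prevout"] = "ef01:1"                   # rebind to a different previous output
assert sighash_digest(tx, "ANYPREVOUT") == apo_before   # digest (and signature) stays valid
assert sighash_digest(tx, "ALL") != all_before          # a SIGHASH_ALL signature would break
```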

SL: For the typical user just doing single signature, their wallet is going to use SIGHASH_ALL but where they are doing some sort of collaborative transaction or there’s some kind of special construction that’s where we’re using some of these other sighash flags. With eltoo and ANYPREVOUT the idea is that these ANYPREVOUT sighash flags will allow us to rebind to the prior update correct?

CD: Exactly yes.

Difference between ANYPREVOUT and ANYPREVOUTANYSCRIPT

SL: Could we talk about ANYPREVOUT and ANYPREVOUTANYSCRIPT? What’s the difference there?

CD: What we do with SIGHASH_ANYPREVOUT is no longer explicitly saying “Hey by the way I’m spending those funds over there.” Instead what we say is “The output script and the input script have to match. Other than that we can mix these transactions however we want.” Instead of having an explicit binding saying “Hey my transaction 100 now connects to transaction 99, and then the scripts have to match. By the scripts I mean the output script would specify that the spender has to sign with public key X and the input script would contain a signature by public key X. Instead of binding by both the explicit reference and the scripts we now bind solely by the scripts. That means that as long as the output says “The spender has to sign with public key X and the input of the other transaction that is being bound to it has a valid signature for public key X in it” then we can attach these two. The difference between ANYPREVOUT and ANYPREVOUTANYSCRIPT is whether we include the output script in the hash of the spending transaction or not. For ANYPREVOUT we still commit to what script we are spending. We take a copy of the script saying that the spending transaction needs to be signed by public key X. We move that into the spending transaction and then include it into the signature computation so that if the output script is modified we cannot bind to it. Whereas the ANYPREVOUTANYSCRIPT says “We don’t take a copy of the output script into the input of the spending transaction but instead we have a blank script. We can bind it to any output whose output script matches our input script.” It is a bit more freedom but it is also something that we need for eltoo to work because the output script of the transaction we’re binding to includes the state number and that obviously changes from state to state. We still want to have the freedom of taking a later state and attaching it to any of the previous states. For eltoo we’d have to use ANYPREVOUTANYSCRIPT. There are a couple of use cases where ANYPREVOUT is suitable on its own. For example if we have any sort of transaction malleability and we still want to take a transaction that connects to a potentially malleable transaction, then we can use SIGHASH_ANYPREVOUT. If the transaction gets malleated in the public network before it is being confirmed we can still connect to it using the connection between the output script and the input script and the commitment of the output script in the spending transaction.
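
The distinction can be shown with a small variation of the same toy digest: ANYPREVOUT still commits to the script being spent, ANYPREVOUTANYSCRIPT does not, so only the latter can bind to outputs whose script changes from state to state, as eltoo’s state-number scripts do. Again a simplified sketch, not the actual BIP 118 rules.

```python
import hashlib
import json

def toy_digest(outputs, spent_script, flag):
    committed = {"outputs": outputs}
    if flag == "ANYPREVOUT":
        committed["spent_script"] = spent_script   # binds the signature to this exact script
    # ANYPREVOUTANYSCRIPT leaves the spent script out entirely.
    return hashlib.sha256(json.dumps(committed, sort_keys=True).encode()).digest()

outputs = [{"value": 90_000, "to": "settlement"}]
d_99 = toy_digest(outputs, "update, state >= 99", "ANYPREVOUTANYSCRIPT")
d_98 = toy_digest(outputs, "update, state >= 98", "ANYPREVOUTANYSCRIPT")
assert d_99 == d_98   # one signature can bind to any earlier state's script

a_99 = toy_digest(outputs, "update, state >= 99", "ANYPREVOUT")
a_98 = toy_digest(outputs, "update, state >= 98", "ANYPREVOUT")
assert a_99 != a_98   # an ANYPREVOUT signature breaks when the spent script differs
```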

Transaction malleation

SL: You were mentioning malleation there. Could you outline what is malleation?

CD: Malleation is the bane of all offchain protocols. Malleation is something that we’ve known about for well over seven years now. If you remember, the MtGox hack was for some time attributed to malleability. They said “Our transactions were malleated. We didn’t recognize them anymore so we paid out multiple times.” What happens is I create a transaction and this transaction includes some information that is covered by the signature and can therefore not be changed, but it also could include some information that cannot possibly be covered by the signature. For example, the signature itself. In the input script of a transaction we need to have the signatures but we cannot include the signatures in the signature itself. Otherwise we’d have this circular argument. So while signing, the input scripts are set to blank and not committed to. That means that if we then publish this transaction there are places in the transaction that can be modified without invalidating the signature anymore. Some of these include push operations, for example normalizations of signatures themselves. We can add prefixes to stuff. We can add dummy operations to the input script, changing how the transaction looks just slightly without invalidating the signature itself. The transaction now looks different and is getting confirmed in this different form, but we might have a dependent transaction where we’re referring to the old form by its hash, by its unchanged form. This follow up transaction that was referencing the unmodified transaction can no longer be used to spend those funds because the miner will just see this new transaction, go look for the old output that it is spending and this output doesn’t exist because it looks slightly different now, because the hash changed. It will say “I don’t know where you’re getting that money from. Go away.” That transaction gets thrown away and it will not get confirmed. Whereas with SIGHASH_ANYPREVOUT we can counter this by having the transaction in the wider network be modified, be confirmed in this modified state, and then the sender of the follow up transaction can just say “Ok I see that there has been a modification to the transaction that I’m trying to spend from. Let me adjust my existing transaction by changing the reference inside of the input to this new alias that everybody else knows the old transaction by.” Now we can publish this transaction. We did not have to re-sign the transaction. We did not have to modify the signature. All we had to do was take the reference and update it to point to the real confirmed transaction. That makes offchain protocols a lot easier because while having a single signer re-sign a transaction might be easy to do, if we’re talking about multisig transactions where multiple of us have to sign off on any change, that might not be so easy to implement. ANYPREVOUT gives us this freedom of reacting to stuff that happens onchain or in the network without having to go around and convince everybody “Hey please sign this updated version of this transaction because somebody did something in the network.”
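
A rough sketch of why malleation breaks dependent transactions, assuming a toy txid that, like the legacy txid, covers the input scripts: adding a harmless dummy to the input script changes the txid, so a pre-signed child that references the old txid no longer points at anything that exists.

```python
import hashlib
import json

def txid(tx: dict) -> str:
    # Legacy-style txid: covers the whole transaction, input scripts included.
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()[:16]

parent = {"inputs": [{"prevout": "f00d:0", "script_sig": "<sig> <pubkey>"}],
          "outputs": [{"value": 50_000, "script": "2-of-2 multisig"}]}
child_ref = txid(parent)                           # a child was pre-signed to spend this txid

# Someone in the network malleates the input script without invalidating the signature.
parent["inputs"][0]["script_sig"] = "OP_NOP <sig> <pubkey>"
assert txid(parent) != child_ref                   # the confirmed txid no longer matches,
                                                   # so the pre-signed child is unspendable
```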

Risks to everyday Bitcoin users

SL: If I understood correctly the way eltoo has been constructed, it is defending against that risk. You’re trying to use this new functionality of being able to rebind dynamically. For listeners who are concerned that maybe there’s a risk this is all opt in? It is only if you want to use Lightning in the eltoo model. You and I have this special type of SIGHASH flag where we have a special kind of output that we are doing the updates on our channel. If somebody doesn’t want to they can just not use Lightning.

CD: Absolutely. It is fully opt in. It is a SIGHASH flag. We do have a couple of SIGHASH flags already but no wallet that I’m aware of implements anything but SIGHASH_ALL. So if you don’t want to use Lightning or you don’t want to use any of the offchain protocols that are based on SIGHASH_ANYPREVOUT, simply don’t use a wallet that can sign with them. These are very specific escape hatches from the existing functionality that we need to implement more advanced technologies on top of the Bitcoin network. But it is by no means something that suddenly everybody should start using just because it’s a new thing that is out there. If we’re careful not to even implement SIGHASH_ANYPREVOUT in everyday consumer wallets then this will have no effect whatsoever on the users that do not want to use these technologies. It is something that has a very specific use case. It’s very useful for those use cases but by no means does everybody need to use it. We’re trying to add as many security features as possible. For example if you sign with a SIGHASH flag that is not SIGHASH_ALL you as the signing party are the only one that is deciding whether to use the sighash flag or not. Whereas with the ANYPREVOUT changes that were introduced, and AJ has done a lot of work on this, he introduces a new public key format that explicitly says “Hey I’m available for SIGHASH_ANYPREVOUT.” Even the output that is being spent from now has the ability to opt into ANYPREVOUT being used or not. Both have to match, the public key that is being signed for has to have opted in for ANYPREVOUT and the signing party has to opt in as well. Otherwise we will fall back to existing semantics.

ANYPREVOUT reliance on Taproot

SL: As I understand this BIP 118, there is a reliance on Taproot being activated first before ANYPREVOUT?

CD: Obviously we would have liked to have ANYPREVOUT as soon as possible but one of the eternal truths of software development is that reviewer time is scarce. We decided to not push too hard on ANYPREVOUT being included in Taproot itself to keep Taproot very minimal, clean and easy to review. Then try to do a ANYPREVOUT soft fork at a future point in time at which we will hopefully have enough confidence in our ability to perform soft forks that we can actually roll out ANYPREVOUT in a reasonable amount of time. For now it’s more important for us to get Taproot through. Taproot is an incredible enabling technology for a number of changes, not just for Lightning or eltoo but for a whole slew of things that are based on Taproot. Any delay in Taproot would definitely not be in our interest. We do see the possibility of rolling out ANYPREVOUT without too many stumbling stones at a second stage once we have seen Taproot be activated correctly.

Signature replay

SL: Also in the BIP118 document by AJ there’s a discussion around signature replay. What is signature replay? How is that being stopped?

CD: Signature replay is one of the big concerns around the activation of ANYPREVOUT. The concern is that if I have one transaction that can be rebound to a large number of transactions, nothing forces me to use that transaction only in a specific context; I could use it in a different context. For example, if we were to construct an offchain protocol that was broken and couldn’t work, we could end up in a situation where you have two outputs of the same value that opted in for ANYPREVOUT and you have one transaction that spends one of them. Since both opted into ANYPREVOUT and both have the identical script and the identical value, I could replay that transaction on both outputs at once. So instead of the intended effect of me giving you let’s say 5 dollars in one output, you can claim 5 dollars twice by replaying this multiple times. This is true for offchain protocols that are not well developed and are broken, because well designed offchain protocols will only ever have one transaction that you can bind to. You cannot have multiple outputs that all can be spent by the same ANYPREVOUT transaction. But it might still happen that somebody goes onto a blockchain explorer, looks up the address and then sends some money that happens to be the exact same value to that output. What we’re trying to do is find good ways to prevent exactly the scenario of somebody accidentally sending funds and creating an output that could potentially be claimed by SIGHASH_ANYPREVOUT, by making these scripts unaddressable. We create a new script format for which there is no bech32 encoding. Suddenly you cannot go on to a blockchain explorer and manually interfere with an existing offchain protocol. There are a number of steps that we are trying to take to reduce this accidental replayability. That being said, in eltoo for example the ability to rebind a transaction to any of the previous matching ones is exactly what we were trying to achieve. It is a very useful tool but in the wrong hands it can be dangerous. So don’t go play with SIGHASH_ANYPREVOUT if you don’t know what you are doing.

Potential ANYPREVOUT activation

SL: What would the pathway be to activate ANYPREVOUT? What stage would it be in terms of people being able to review it or test it?

CD: I had a branch for SIGHASH_NOINPUT which was used by Richard Myers to implement a prototype of eltoo in Python that is working. I’m not exactly sure if ANYPREVOUT has any code that can be used just yet. I would have to definitely check with AJ or implement it myself. It shouldn’t be too hard to implement given that SIGHASH_NOINPUT consisted of two IF statements and a total of four lines changed. I don’t foresee any huge technical challenges. It is mostly just the discussion on making it safe, making it usable and making it efficient that will take a bit longer. We have that time as well because we are waiting for Taproot to be activated in the meantime.

SL: Is there anything else you wanted to mention about ANYPREVOUT or shall we now start talking about MPP?

CD: ANYPREVOUT is absolutely cool. We’re finding so many use cases. It’s really nice. I would so love to see it.

MPP (Multi-Part Payments)

MPP blog post: https://medium.com/blockstream/all-paths-lead-to-your-destination-bc8f1a76c53d

SL: We’ll see what all the Bitcoin people out there are thinking. The benefits of having eltoo in Lightning would be pretty cool. It enables multi-party channels which for listeners who haven’t listened to our first episode, I think it is Episode 59 off the top of my head, have a listen to that. There’s a lot of possibilities there in terms of multi-party channels. That helps in terms of being able to get around that idea that there won’t necessarily be enough UTXOs for every person on earth. That’s why multi-party channels might be a handy thing to have. So let’s have a look into MPP then, multi-part payments. Listeners also check out my earlier episode with Rusty on this one. Christian, you had a great blog post talking about MPP and how it’s been implemented in c-lightning. Do you want to tell us the latest with that?

CD: We implemented multi-part payments as part of our recent 0.9.0 release which we published about 10 days ago. Multi-part payments is one of those features that has been long awaited because it enables us to be much more resilient and adapt ourselves better to the network conditions that we encounter. It boils down to this: instead of sending a payment all in one chunk, we split the payment into multiple partial payments and send them on different paths from us to the destination, thus making better use of the network liquidity and allowing us to create bigger payments since we are no longer constrained by the capacity of individual channels. Instead we can bundle multiple channels’ capacities and use the aggregate of all of these channels together. It also allows us to make the payments much more reliable. We send out parts, get back information and only retry the parts that failed on our first attempt.

SL: So there’s a lot of benefits.

CD: There’s also a couple of benefits when it comes to privacy but we’ll probably talk about those a bit later.

SL: What are some of the main constraints if your node wants to construct an MPP multi-part payment onion package?

CD: There are two parts to MPP. One is the recipient part which is “I know I should receive 10 dollars and I received only two. I’ll just keep on waiting until I get the rest, holding on to the initial two that I already have for sure.” The recipient grabs money and waits for it to all be there before claiming the full amount. On the sender’s side what we do is we split the payment into multiple parts. For each of these partial payments we compute a route from us to the destination. For each of these we go ahead and compute the routing onion. Each individual part has its own routing onion, has its own path, has its own fate so to speak. Then we send out the partial payment with its onion on its merry way until we either get to the destination, at which point the destination will collect the promise of incoming funds and, if it has all of the funds that were promised available, it will release the payment preimage thus atomically locking in all of the partial payments. Or we get back an error saying “This channel down the road doesn’t have enough capacity. Please try again.” At which point we then update our view of the network. We compute a new route for this payment and we try again. If we cannot find a new route for the amount that we have, we split it in half and now we have two parts that we try to send independently from each other. The sender side is pretty much in control: how big do we make these parts? How do we schedule them? How do we route them? How do we detect whether we have a fatal error that we cannot recover from? When do we detect that this part is ok but this part is about to be retried? The sender part is where all of the logic is. The recipient just waits for incoming pieces and then at some point decides “Ok I have enough. I’ll claim all of them.” The sender side required us to reengineer quite a lot of our payment flow. That also enabled us to build a couple of other improvements like keysend support for example, which we so far only had a Python plugin for but now have a C plugin for as well.
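
A hedged sketch of the recipient side described here (illustrative names, not c-lightning’s internal API): incoming partial HTLCs for the same payment hash are held, and the preimage is only released once the full invoice amount has arrived, which claims all parts atomically.

```python
class MppReceiver:
    """Toy MPP recipient: hold partial payments until the full amount has arrived."""
    def __init__(self, invoice_amount_msat: int, preimage: bytes):
        self.invoice_amount_msat = invoice_amount_msat
        self.preimage = preimage
        self.held_parts = []                 # incoming HTLCs we hold but don't resolve yet

    def on_incoming_part(self, amount_msat: int):
        self.held_parts.append(amount_msat)
        if sum(self.held_parts) >= self.invoice_amount_msat:
            # Releasing the preimage atomically claims every held part.
            return self.preimage
        return None                           # keep holding, wait for the rest

rx = MppReceiver(invoice_amount_msat=1_000_000, preimage=b"secret")
assert rx.on_incoming_part(400_000) is None   # first part arrives, keep waiting
assert rx.on_incoming_part(600_000) == b"secret"  # full amount reached, claim everything
```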

SL: You were talking through the two different processes. You’ve got the pre-split and then you mentioned the adaptive splitting. Once you’ve tried it one time and it failed, now you can take that knowledge and try a slightly different split or slightly different route. It will then create the new payment and try to send that?

CD: Exactly. The adaptive splitting is exactly the part that we mentioned before. We try once and then depending on what comes back we decide do we retry? Do we split? What do we do now? Is this something that we can still retry and have a chance of completing? Or do we give up?

SL: Let’s say you have installed c-lightning and you’re trying to do a payment. In the background what’s going on is your node has its own graph of the network and it’s trying to figure out “Here’s where the channels are that I know about. Here’s what I know of the capacity.” Does it then have better information and therefore each successive try is a little bit better. How does that work?

CD: Exactly. So initially what we have in the network is we see channels as total capacities. If the two of us opened a 10 dollar channel then somebody else would see it as 10 dollars. They would potentially try to send 8 dollars through this channel. Depending on the ownership of those 10 dollars this might be possible or not. For example, if we each own five there’s no way for us to send 8 dollars through this channel. We will report an error back to the sender and the sender will then know 8 was more than the capacity. The sender will remember this upper limit on the capacity. It might be lower, but we know that we cannot send 8 dollars or more through that channel. As we learn more about the network our information will be more and more precise and we will be able to make better predictions as to which channels are usable and which channels aren’t for a given payment of a given size. There is no point in us retrying this 8 dollar payment through our well balanced channel again, because that cannot happen. But if we split in two and now have two 4 dollar parts then one of them might go through our channels. Knowing that we have 5 and 5 it will actually go through. Now the sender is left with a much easier task of finding a second 4 dollar path from himself to the destination rather than having this one big chunk of 8 all at once.
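
A very rough sketch of the sender-side loop described above, with hypothetical `find_route` and `try_send` helpers standing in for the real routing and sending machinery; it shows the shape of the retry-or-split logic, not c-lightning’s actual pay plugin.

```python
def pay_with_adaptive_split(amount_msat, find_route, try_send, min_part_msat=100_000):
    """Retry failed parts; split a part in half when no route can carry it."""
    pending = [amount_msat]
    while pending:
        part = pending.pop()
        route = find_route(part)          # uses what we've learned about channel capacities
        if route and try_send(route, part):
            continue                      # this part reached the destination
        if part // 2 >= min_part_msat:
            pending += [part // 2, part - part // 2]   # split and retry both halves
        else:
            raise RuntimeError("gave up: part too small to split further")

# Toy usage: pretend routes exist only for parts of at most 300k msat.
pay_with_adaptive_split(
    1_000_000,
    find_route=lambda amt: "route" if amt <= 300_000 else None,
    try_send=lambda route, amt: True,
)
```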

Impact of fees on MPP

SL: From the blog post the way the fees work is there’s a base fee and there’s typically a percentage fee. If you split your MPP up into a hundred different pieces you’re going to end up paying a massive amount of base fee across all of those hundred pieces. Your c-lightning node has to make a decision on how many pieces to split into.

CD: Exactly. We need to have a lower value below which we say “From now on it’s unlikely that we’re going to find any path because the payment is so small in size that it will be dominated by the base fee itself.” This is something that we encountered quite early on already when the first games started popping up on the Lightning Network. For example Satoshi’s Place: if you wanted to color in one pixel on Satoshi’s Place you’d end up paying one millisatoshi but the base fee to get there would already be one satoshi. You’d be paying a 100,000 percent fee for your one millisatoshi transfer which is absolutely ludicrous. So we added an exception for really tiny transfers that we call signaling transfers, because their intent is not really to pay somebody, it’s more to signal activity. In those cases we allow you to have a rather large fee upfront. That is not applicable to MPP payments because if we were to give them a fee budget of 5 satoshis each then these would accumulate across all of the different parts and we’d end up with a huge fee. So we decided to give up if a payment is below 100 satoshis in size. Not give up, but not split any further, because at that size the base fee would dominate the overall cost of the transfers. What we did there was take the network graph and compute all end to end paths that are possible, all shortest paths, and compute what the base fee for these paths would be. If I’m not mistaken we have a single digit percent of payments that may still go through even though they are below 100 satoshis in size. We felt that aborting at something that is smaller than 100 satoshis is safe. We will still retry different routes but we will not split any further because that would double the cost in base fees at each split.
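
The arithmetic behind that cutoff, as a quick sketch using the numbers from the discussion (a 1 satoshi base fee and the 100 satoshi splitting threshold; the 3 hop route is an assumed example):

```python
MSAT_PER_SAT = 1000

def base_fee_percent(payment_msat: int, hops: int, base_fee_msat: int = 1000) -> float:
    """Total base fees across all hops, as a percentage of the payment amount."""
    return 100 * hops * base_fee_msat / payment_msat

# 1 msat pixel payment over one hop with a 1 sat base fee: a 100,000% fee.
print(base_fee_percent(payment_msat=1, hops=1))                      # 100000.0
# A 100 sat part over an assumed 3 hop route already pays 3% in base fees alone;
# splitting it further doubles that base fee cost at each split.
print(base_fee_percent(payment_msat=100 * MSAT_PER_SAT, hops=3))     # 3.0
```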

SL: In practice most people are opening channels much, much larger than that. A hundred satoshis is really trivial to be able to move that through. At current prices we’re talking like 6 or 7 cents or something.

CD: Speaking of channels and the expected size of a payment that we can send through, that brings me back to our other payment modifier, the pre-split modifier. Instead of only having this adaptive mechanism where we try, learn something and then retry with this new information incorporated, we decided to do something a bit more clever and say “Wait, why do we even try these really large payments in the first place when we know perfectly well that most of them will not succeed at first?” I took my Lightning node and tried to send payments of various sizes to different endpoints by probing them. Unlike my previous probes where I was interested in seeing if I could reach those nodes, this time I was more interested in: if I can reach them, how much capacity could I get on this path? What is the biggest payment that would still succeed getting to the destination? What we did is we measured the capacity of channels along the shortest path from me to I think 2000 destinations. Then we plotted it and it was pretty clear that amounts below 10,000 satoshis, approximately 1 dollar, have a really good chance of succeeding on the first try. We measured the capacities in the network and found that payments of 10,000 satoshis in size can succeed relatively well. We have an 83 percent success rate for payments of exactly 10,000 satoshis; smaller amounts will have higher success rates. Instead of trying these large chunks at first and then slowly moving towards these sizes anyway, we decided to split right at the beginning of the payment into roughly 1 dollar sized chunks and then send them on their way. These already have a way better chance of succeeding on the first try than this one huge chunk would have initially.

SL: To clarify, that percentage you were mentioning, that is on the first try, correct? It will then retry multiple times and the actual payment success rate is even higher than that for 10,000 sats?

CD: Absolutely, yeah.

SL: I think this is an interesting idea because it makes it easier for the retail HODLer or retail Lightning enthusiast to be able to set up his node and be a meaningful user of the network that they’re not so reliant on routing through the well known massive nodes, the ACINQ node, the Bitrefill node or the Zap node. It is easier for an individual because now you can split those payments across multiple channels?

CD: Absolutely. What we do with the pre-split and adaptive splitter, we make better use of the network resources that are available by spreading a single payment over a larger number of routes. We give each of the nodes on those routes a tiny sliver of fees instead of going through the usual suspects and giving them all of the fees. We make revenue from routing payments more predictable. We learn more about the network topology. While doing MPP payments we effectively probe the network and find places that are broken and will cause them to close channels that are effectively of no use anyway. Something that we’ve seen with the probing that we did for the Lightning Network conference was that if we end up somewhere where the channel is non-functional, we will effectively close that channel and prune the network of these relics that are of no use. We also speed up the end-to-end time by doing all of this in parallel instead of sequentially where each payment attempt would be attempted one by one. We massively parallelized that to learn about the network and make better use of what we learned by speeding up the payment as well.

Privacy improvements of MPP

SL: I also wanted to touch on the privacy elements. I guess there’s probably two different ways you could think of, multiple angles I can think of. One angle might be if somebody was trying to surveil the network and they wanted to try to understand the channel balances and ascertain or infer from the movement in the balances who is paying who, MPP changes that game a little bit. It makes it harder for them. But then maybe on the downside you might say because we haven’t moved to the Schnorr payment points PTLC idea then it’s still the same payment preimage. It is asking the same question to use the phrasing Rusty used. In that sense it might theoretically be easier for a hypothetical surveillance company to set up spy Lightning nodes and see that they’re asking the same question. What are your thoughts there?

CD: There is definitely some truth in the statement that by distributing a payment over more routes and therefore involving more forwarding nodes, we are telling a larger part of the network about a payment that we are performing. That’s probably worse than our current system where even if we were using a big hub that hub would see one payment and the rest of the network would be none the wiser. On the plus side however the one big hub thing would give away the exact value you’re transferring to the big hub. Whereas if we pre-split to 1 dollar amounts and then do adaptive splitting, each of the additional nodes that are now involved in this payment learns a tiny bit about the payment being performed, namely that there is a payment, but since we use this homogeneous split where everything splits to 1 dollar, they don’t really know much more than that. They will learn that somebody is paying someone but they will not learn the amount, they will not learn the source and destination. And we are making traffic analysis a lot harder for ISP level attackers by really increasing the chattiness of the network itself. We make it much harder for observers to associate or to collate individual observations into one payment. It is definitely not the perfect solution to tell a wider part of the network about the payment being done, but it is an incremental step towards the ultimate goal of making every payment indistinguishable from each other which we are getting with Schnorr and the point timelocked contracts. Once we have the point timelocked contracts we truly have a system where we are sending back and forth payments that are not collatable by payment hash as you correctly pointed out. Not even by amount, because all of the payments have roughly the same amounts. It is the combination of multiple of these partial payments that gives you the actual transferred amount. I think it’s not a clear loss or a clear win for privacy that we’re now telling a larger part of the network. But I do think that the pre-splitter and the adaptive splitting when combined with PTLCs will be an absolute win no matter where you look at it.

PTLCs

SL: I think that’s a very fair way to summarize. In terms of getting PTLCs, point timelocked contracts, the requirement for that would be the Schnorr Taproot soft fork? Or is there anything else that’s also required?

CD: Taproot and Schnorr are the only things required for PTLCs. I’m expecting the Lightning Network specification to be really quick at adopting it, pushing it out to the network and actually making use of the new found freedom that we have with PTLCs and Schnorr.

Lightning onchain footprint

SL: I suppose the other component to think about and consider from a privacy perspective is the onchain footprint aspect of Lightning. Maybe some listeners might not be familiar but when you’re doing Lightning you still have to do the open and close of a channel. You did some recent work at the recent Lightning conference showing an ability to understand which ones of these were probably Lightning channel opens. That is another thing where Taproot might help particularly in the case of a collaborative close. Once we have Taproot, let’s say you and I open a channel together and it’s the happy path, the collaborative close, that channel close is indistinguishable from a normal Taproot key path spend?

CD: Exactly. Our opens will always look exactly like somebody paying to a single sig. The single sig under the covers happens to be a 2-of-2 multisig disguised as a single sig through the signature aggregation proposals that we have. The close transactions, if they are collaborative closes, will also look like single sig spends to the destinations that are owned by the endpoints. It might be worth pointing out that non-collaborative closes will leak some information about the usage of eltoo or Lightning penalty simply because we enter this disputed phase where we reveal all of the internals of our agreement, namely how we intend to overwrite or penalize the misbehaving party. Then we can still read out some of the information from a channel. That’s where I mentioned before that you might not want to increment state numbers one by one for example. This is also the reason why in LN penalty we hide the commitment number in the locktime field, but encrypted. That information might still eventually end up on the blockchain where it could be analyzed. But we’d gossip about most of this information anyway because we need to have a local view of the network in order to route payments.

SL: It is a question of what path you really need to be private I guess. One other part where I wanted to confirm my understanding is with the Taproot proposal, my understanding is you’ll have a special kind of Taproot output. The cool thing about the Schnorr signatures aspect is that people can do more cryptography and manipulation on that. That’s this idea of tweaking. My understanding is you either have the key path spend which is the indistinguishable spend. That’s the collaborative close example. But then in the non-collaborative close, that would be a script path spend. As part of Taproot there’s a Merkle tree. You have to expose which of the scripts that you want to spend. You’re showing the script I want to spend and the signatures in relation to it. Is that right?

CD: That’s right. The Taproot idea comes out of this discussion for Merklized Abstract Syntax Trees. It adds a couple of new features to it as well. A Merklized Abstract Syntax Tree is a mechanism of having multiple scripts that are added to a Merkle tree and summed up until we get to the root. The root would be what we put into our output script. When we spend that output we would say “That Merkle tree corresponds to this script and here is the input that matches this script, proving that I have permission to spend these coins.” Taproot goes one step further and says “That Merkle tree root is not really useful. We could make that a public key and mix in the Merkle root through this tweaking mechanism.” That would allow us to say “Either we sign using the root key into which we tweak the Merklized Abstract Syntax Tree. That’s the key path spend.” Or we can say “I cannot sign with this pubkey alone but I can show the script that corresponds to this commitment, and for that I do have all of the information I need to sign off.” In the normal case for a channel close we use the root key to sign off on the close transaction. In the disputed case we say “Here’s the script that we agreed upon before. Now let’s run through it and resolve this dispute that we have by settling onchain and having the blockchain as a mediator for our dispute.”
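For readers who want the tweak spelled out, here is a small sketch of the hash side of that mechanism, assuming the tagged-hash construction from BIP 340/341; the elliptic curve step itself is only noted in a comment:

```python
# Sketch of the Taproot tweak described above, using the BIP 340/341 tagged
# hash. Only the hash part is shown; the EC step (output key Q = P + t*G)
# would need an elliptic curve library and is left as a comment.

import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

def taproot_tweak(internal_pubkey: bytes, merkle_root: bytes) -> bytes:
    """t = H_TapTweak(P || merkle_root). Key path: sign with the tweaked key.
    Script path: reveal one leaf script plus its Merkle proof instead."""
    return tagged_hash("TapTweak", internal_pubkey + merkle_root)

# Dummy 32-byte values, purely for illustration.
print(taproot_tweak(bytes(32), bytes(32)).hex())
```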

Lightning Network attacks

SL: I also wanted to talk about some of the Lightning attacks that are coming out in articles. From my understanding from chatting with yourself and some of the other Lightning protocol developers, it seems to me like there’s a bunch of these that have been known for a while but some of them are now coming out as papers. An interesting recent one is called “Flood and Loot: A Systemic Attack on the Lightning Network”. As I understand this, it requires establishing channels and then trying to send through a lot of HTLC payments. They go non-responsive and then they force the victim to try to go to chain. The problem is if they’ve done it with many people and many channels all at once they wouldn’t be able to get confirmed. That’s where the victim would lose some money. Could you help me there? Did I explain that correctly?

CD: That’s correct. The idea is to have an attacker send a massive number of payments through the victims to a second node that he owns. What you end up doing there is you add a lot of HTLCs to the channel of your victim and then you hold on to these payments on the recipient side of the channel. This is something that we’ve known about for quite some time. We know that holding onto HTLCs is kind of dangerous. This attacker will hold onto HTLCs so long that the timeout approaches for the HTLC. The HTLC has two possible outcomes, either it is successful and the preimage is shown to the endpoint that added the HTLC, or we have a timeout. Then the funds revert back to the endpoint that added the HTLC. Normally this works because there is no race between the success transaction and the timeout transaction. If there is no success for let’s say 10 hours then we will trigger the timeout because we can be confident that the success will not come after the timeout. The Flood and Loot attack does exactly that: by holding onto the HTLC, it forces us to have a race between the timeout and the success transaction. The problem is that our close transaction, having all of these HTLCs attached, is so huge that it will not confirm for quite some time. So they can force the close to take so long that the timeout has expired. We are suddenly in a race between the success transaction and the timeout transaction. That’s the attack. To bloat somebody else’s channel such that the confirmation of the close transaction that follows takes so long that we can actually get into a situation where we are no longer sure whether the timeout or the success transaction will succeed in the end.

SL: I guess there’s a lot of moving parts here. You could say “Let’s modify the CSV window and make that longer” or “Let’s change the number of HTLCs and restrict that for each channel”. Could you talk to us about some of those different moving parts here?

CD: It is really hard to say “One number is better than the other”. But one way of reducing the impact of this attack is to limit the number of HTLCs that we add to our own transaction. That will directly impact the size of our commitment transaction and therefore our chances of getting confirmed in a reasonable amount of time, and avoid having this race condition between success and timeout. The reason why I’m saying that there is no clear solution is that reducing the number of HTLCs that we add to our channels reduces the utility of the network as a whole, because once we have 10 HTLCs added and we only allow 10 to be added at once, that means that we cannot forward the 11th payment for example. If our attacker knows that we have this limit, they could effectively run a DOS attack against us by opening 10 HTLCs, exhausting our budget for HTLCs and therefore making our channel unusable until they release some of the HTLCs. That’s an attack that we are aware of and so far hasn’t been picked up by academia but I’m waiting for it. All of these parameters are a trade-off between various goals that we want to have. We don’t currently have a clean solution that has only upsides. The same goes for CSVs. If we increase the CSV timeouts then this attack might be harder to pull off because we can spread confirmation of transactions out a bit further. On the downside having large CSVs means that if we have a non-collaborative close for a channel then the funds will return only once the CSV timeout expires. That means that the funds are sure to come back to us but might not be available for a couple of days before we can reuse them.

SL: It is an opportunity cost because you want to be able to use that money now or whatever. There are trade-offs and there’s no perfect answer to them. Let’s say somebody tries to jam your channels. How do HTLCs release? What’s the function there?

CD: Each HTLC has a timeout. The endpoint that has added the HTLC can use this timeout to recover funds that are in this HTLC after this timeout expires. Each HTLC that is added starts a new clock that counts down until we can recover our funds. If the success case happens before this timeout then we’re happy as well. If this timeout is about to expire and we need to resolve this HTLC onchain then we will have to force this channel onchain before this timeout expires, a couple of blocks before. Then force our counterparty to either reveal the preimage or grab back our funds through the timeout. We then end up with a channel closing slightly before the timeout and then an onchain settlement of that HTLC.

SL: So we could think of it like we set up our node, we set up the channels and over time HTLCs will route through. It is usually going to be a CSV or maybe a CLTV where over time those HTLCs will expire out because the timer has run out on them. Now you’ve got that capacity back again.

CD: In these cases they are CLTVs because we need absolute times for HTLCs. That’s simply because we need to make sure the HTLC that we forwarded settles before the upstream HTLC, the one we received, settles. We need to have this time to extract the information from the downstream HTLC, turn around and forward it to the upstream HTLC in order to settle the upstream HTLC correctly. That’s where the notion of the CLTV delta comes in. That is a parameter that each node sets for itself and says “I am confident that if my downstream node settles in 10 blocks I have enough time to turn around and inform my upstream node about this downstream settlement so that my channel can stay active.”
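As a toy illustration of how those per-hop deltas stack up into absolute expiry heights, with made-up numbers:

```python
# Toy illustration of how per-hop CLTV deltas accumulate along a route, as
# described above. Block heights and deltas are example values only.

def required_expiries(current_height: int, final_delta: int, hop_deltas: list) -> list:
    """Absolute CLTV expiry each node requires on the HTLC it receives,
    walking backwards from the destination to the sender."""
    expiry = current_height + final_delta
    expiries = [expiry]
    for delta in reversed(hop_deltas):
        expiry += delta           # each forwarder needs `delta` blocks of slack
        expiries.append(expiry)   # ...to settle downstream and inform upstream
    return list(reversed(expiries))

# Sender -> A (delta 40) -> B (delta 10) -> destination (final delta 9)
print(required_expiries(700_000, 9, [40, 10]))  # [700059, 700019, 700009]
```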

SL: I also wanted to touch on the commitment transaction size. Part of this attack in the Flood and Loot example depends on having a very large commitment transaction. If there’s a lot of pending HTLCs why does that make the transaction bigger? Is it that there’s a lot more outputs there?

CD: That’s exactly the case. The commitment transaction varies in size over time as we change our state. Initially when we have a single party funding the channel then the entirety of the funds will revert back to that party. The commitment transaction will have one output that sends all of the funds back to the funding party. As soon as the counterparty has ownership of some funds in the channel then we will add a second output, one going to endpoint A and one going to endpoint B. Those reflect the settled capacity that is owned by the respective party. Then we have a third place where we add new outputs, that’s exactly the HTLCs. Each HTLC doesn’t belong to either A or B but it’s somewhere in the middle. If it succeeds, it belongs to B and if it doesn’t succeed, it reverts back to A. Each of the HTLCs has its own place in the commitment transaction in the form of an output reflecting the value of the HTLC and the output script, the resolution script of the HTLC, which spells out “Before block height X I can be claimed by this and after block height X I can be reverted back to whoever added me”. Having a lot of HTLCs attached to a channel means that the commitment transaction is really large in size. That’s also why we have this seemingly random limit on the total number of HTLCs in the protocol of 483 maximum HTLCs attached to a single transaction, because at that point with 483 HTLCs we’d end up with a commitment transaction that is 100 kilobytes in size, I think.
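A hedged back-of-the-envelope sketch of that size growth, using approximate per-output byte counts rather than the exact commitment format; settling each HTLC onchain adds further transactions and witness data on top, which is presumably where the ~100 kilobyte figure comes from:

```python
# Back-of-the-envelope numbers for why pending HTLCs bloat the commitment
# transaction. Constants are rough approximations, not the exact BOLT 3 format.

HTLC_OUTPUT_BYTES = 43   # approx: 8 (value) + 1 (script length) + 34 (P2WSH script)
BASE_TX_BYTES = 300      # rough size with only the two main balance outputs

def commitment_output_bytes(num_htlcs: int) -> int:
    return BASE_TX_BYTES + num_htlcs * HTLC_OUTPUT_BYTES

print(commitment_output_bytes(10))   # ~730 bytes
print(commitment_output_bytes(483))  # ~21,000 bytes of outputs alone
```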

SL: That’s pretty big. A standard transaction might be like 300 bytes?

CD: It’s a massive cost as well to get that confirmed. It definitely is a really evil attack because not only are you stealing from somebody but you’re also forcing them to pay a considerable amount of money to get their channel to settle.

SL: The other point there is that because we count fees in terms of sats per byte and if you’ve done that fee negotiation between the two nodes upfront, let’s say you and I negotiated that early on and then one of us goes offline because it’s the flood and loot attack. You’d have this huge transaction but you wouldn’t have enough fees to close it.

CD: Exactly. We would stop adding HTLCs before we no longer have any funds to settle it but it would still be costly if we ever end up with a large commitment transaction where something like 50 percent of our funds go to fees because it’s this huge thing.

Different Lightning usage models

SL: Maybe we can step back and talk about Lightning generally, the growth of the Lightning Network and some of the different models that are out there. In terms of how people use a Lightning node today there’s the Phoenix wallet ACINQ style where it is non-custodial but there’s certain trade offs there and it’s all going through the ACINQ node. Then you’ve got Wallet of Satoshi style. They’re kind of like a Bitcoin Lightning bank and the users are customers of that bank. Then you’ve got some people who are going full mobile node Neutrino style and then maybe the more self-sovereign style where people might run node packages like myNode, Nodl or Raspiblitz and have a way to remote in with their Blue wallet or Zap or Zeus or Spark wallet. Do you have any thoughts on what models you think will be more popular over time?

CD: I can definitely see the first and last models working quite nicely, namely the sort of mobile wallet that has somebody on the operational side taking care of operating your node but where you are still in full control of your funds. That would be the Phoenix ACINQ model where you care for your own node but the hard parts of maintaining connectivity and maintaining routing tables and so on would be taken care of by a professional operator. That’s also why together with ACINQ we came up with the trampoline routing mechanism and some other mechanisms to outsource routing to online nodes. Running a full Lightning node on a mobile phone, while way easier than a Bitcoin full node, is still going to use quite a considerable amount of resources in terms of battery and data to synchronize the view of the network and to find paths from you to your destination. You would also need to monitor the blockchain in a reliable way so that if something happens, one of your channels goes down, you are there to react. Having somebody taking care of those parts, namely to preprocess the changes in the network view and providing access to the wider network through themselves, is definitely something that I can see being really popular. On the other side, I can definitely see people that are more into operating a node themselves going towards a self-sovereign node style at home where they have a home base that their whole family might share or they might administer it for a group of friends and each person would get a node that they can remote into and operate from there. There is the issue of synchronizing routing tables and so on to the actual devices that you’re running around with like a mobile phone or your desktop, but it doesn’t really matter because you have this 24 hour node online that will take care of those details. The fully mobile nodes, I think they’re interesting to see and they definitely surface a lot of interesting challenges, but it might be a bit too much for the average user to have to take care of all of the stuff themselves. To learn about what a channel is, to open a channel, to curate channels to make sure that they are well connected to the network. Those are all details that I would like to hide as much as possible from the end user because while important for your performance and your ability to pay they are also hard concepts that I for example would not want to try to explain to my parents.

SL: Of course. Obviously your focus is very deep technical protocol level but do you have any thoughts on what is needed in terms of making Lightning more accessible to that end user? Is it better ways to remote into your home node? Do you have any ideas around that or what you would like to see?

CD: I think at least from the protocol side of things we still have a lot we can do to make all of this more transparent to the user and enable non tech savvy people to take care of a node themselves. I don’t know what the big picture is at the end but I do know that we can certainly abstract away and hide some of the details in the protocol itself to make it more accessible and make it more usable to end users. As for the nice UI and user experience that we don’t have yet, I think that will crystallize itself out in the coming months. We will see some really good looking things from wallet developers. I’m not a very graphical person so I can’t tell you what that’s going to look like but I’m confident that there are people out there that have a really good idea on what this could look like. I’m looking forward to seeing it myself.

SL: There’s a whole bunch of different models because people who just want something easy to get started, something like Phoenix might be a good one for them. If you’re more technical then obviously you can go and do the full set up your own c-lightning and Spark or set up lnd and Zap or whatever you like. It is building out better options to make it easy for people even if we know not everyone is going to be capable to do the full self-sovereign style as we would like.

CD: Absolutely. It is one of the pet peeves that I have with the Bitcoin community, we have a tendency to jump right to the perfect solution and shame people that do not see this perfect solution right away. This shaming of newcomers into believing that there is this huge amount of literature they have to go through before even touching Bitcoin for the first time can be a huge barrier to entry. I think what we need to have is a wide range of utilities so that as users grow in their own understanding of Bitcoin they can upgrade or downgrade accordingly to reflect their own understanding of the system itself. We shouldn’t always mandate that the most secure solution is the only one that is to be used. I think that there are trade offs when it comes to user friendliness and privacy and security, and we have to accept that some people might not care so much about the perfect setup, they might be ok with a decent one.

Trampoline routing

Bastien Teinturier at Lightning Conference: https://diyhpl.us/wiki/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/

Bastien Teinturier on the Lightning dev mailing list: https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002100.html

SL: That’s a very good comment there. I wanted to talk about trampoline routing. You mentioned this earlier as well. I know the ACINQ guys are keen on this idea though I know that there has also been some discussion on GitHub from some other Lightning developers who said “I see a privacy issue there because there might not be enough people who run trampoline routers and therefore there’s a privacy concern there. All those mobile users will be doxing their privacy to these trampoline routers.” Do you have any thoughts on that or where are you placed on that idea?

CD: Just to reiterate, trampoline routing is a mechanism for a mobile wallet or a resource constrained wallet to contact somebody in the network that offers this trampoline service and forward a payment to that trampoline node. When the trampoline node unpacks the routing onion it will see “I’m not the destination and I should forward it to somebody.” But instead of telling me exactly whom I have to forward it to it tells me the final destination of the payment. Let’s say I’m a mobile phone and I have a very limited knowledge of my surroundings in the network but I know that you Stephan are a trampoline node. Then when I want to pay Rusty for example I can look in my vicinity to see if I have a trampoline node. I can build a payment to you with instructions to forward it to Rusty whom I don’t know how to reach. Then I send this payment. When you unpack your onion you just receive it like usual. You don’t know exactly who I am because I’m still onion routing to you. You unpack this onion and see “Somebody who has sent me this payment has left me 100 satoshis in extra fees. I’m supposed to send 1 dollar to Rusty. Now I have 100 satoshis as a budget to get this to Rusty.” I outsourced my route finding to you. What have you seen from this payment? You’ve obviously seen that Rusty is the destination and that he should receive 1 dollar worth of Bitcoin. But you still don’t know me. We could go one step further and say “Instead of having this one trampoline hop we can also chain multiple of them.” Instead of telling you to go to Rusty I would tell you to go to somebody else who also happens to be a trampoline and then he can forward it to Rusty. We can expand on this concept and make it an onion routed payment inside of individual onion routed hops. What does the node learn about the payment he is forwarding? If we only do this one trampoline hop then you might guess that I’m somewhere in your vicinity, network wise, and you learned that Rusty is the destination. If we do multiple trampoline hops then you will learn that somebody has sent you a payment. Big surprise, that’s what you always knew. You can no longer say that I’m in your vicinity, I the original sender, because you might have gotten it from some other trampoline node. You can also not know whether the next trampoline you’re supposed to send to is the destination or whether that’s an intermediate trampoline as well. We can claw back some of the privacy primitives that we have in pure onion routing, that is source based routing, inside of the trampoline routing. But it does alleviate the issue of the sender having to know a good picture of the network topology in order to send a payment in the first place. I think we can make a good case for this not being much worse but much more reliable than what we had before. We also have a couple of improvements that come alongside trampoline routing. Let’s go back to the initial example of me sending to you, you being the trampoline and then sending to Rusty. Once you get the instruction to send to the final destination, you can retry yourself instead of having to tell me “This didn’t work. Please try something else.” We can do in network retries which is really cool especially for mobile phones that might have a flaky connection or might be slow. We can outsource retrying multiple attempts to the network itself without having to be in the active path ourselves.
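A purely conceptual sketch of what such a trampoline payment carries and who learns what; the field names are hypothetical and this is not the spec’s onion format:

```python
# Conceptual sketch only: not the actual trampoline onion from the proposal.
# All field names are invented to illustrate the information flow described.

def build_trampoline_payment(trampoline_node: str, destination: str,
                             amount_msat: int, fee_budget_msat: int) -> dict:
    # The inner payload would itself be onion-encrypted to the trampoline node,
    # so hops on the way to the trampoline learn nothing about it.
    inner_payload = {
        "final_destination": destination,    # the trampoline learns the destination...
        "amount_msat": amount_msat,          # ...and the amount to deliver
        "fee_budget_msat": fee_budget_msat,  # whatever is left over is its fee
    }
    # The outer onion only needs a route from the sender to the trampoline.
    return {"outer_route_target": trampoline_node,
            "trampoline_payload": inner_payload}

# "Send 1 dollar (~10,000 sats) to Rusty via Stephan, with 100 sats of fee budget."
payment = build_trampoline_payment("Stephan", "Rusty", 10_000_000, 100_000)
print(payment["trampoline_payload"]["final_destination"])  # Rusty
```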

SL: Fascinating. If I had to summarize some of your thinking there, it’s kind of like think through a little bit more clearly about exactly who are you doxing and what are you doxing to who? If you haven’t doxed any personal information about yourself to me then really what’s the privacy loss there? Maybe it would become the case that the hardcore Bitcoin Lightning people might run trampoline routing boxes in a similar way to some hardcore people run Electrum public servers to benefit people on the network.

CD: Absolutely. It is not just because of the kindness of your heart that you’re running trampoline nodes. One thing that I mentioned before is that you get a lot of fees in order for you to be able to find a route. The sender cannot estimate how much it’s going to cost to reach the destination. They are incentivized to overpay the trampoline node to find a route for them. This difference then goes to the trampoline node. Running a trampoline node can be really, really lucrative as well.

SL: Yeah, that’s fascinating. I didn’t think about that. That’s a good point. In some ways there’s more incentive to do it than running an Electrum public server because people don’t pay Electrum public servers right now. It is even better in that sense.

CD: Yeah and it’s not hard to implement. We can implement trampoline routing as a plugin right now.

Privacy attacks on Lightning

SL: Another thing I was interested to touch on is privacy attacks on Lightning. With channel probing the idea is that people construct a false onion that they know cannot go through and then try to figure out based on that. They sort of play Price is Right. Try 800 sats, try 8 dollars and then figure it out based on knowing roughly how much is available in that channel. People say that’s violating the privacy principles of Lightning but how bad is that really? What’s the actual severity of it? Just losing some small amount of privacy in a small way that doesn’t really stop the network growing? Do you have any reflections on that?

CD: I do because probing was one of my babies. I really like probing the network to be honest. I come from a background that is mostly measurements and probing of the Bitcoin network. I was really happy when I found a way to probe the Lightning Network and see how well it works and if we can detect some failures inside of the network. You’re right that probing boils down to attempting a payment that we know will never succeed because we gave it a payment hash that doesn’t correspond to anything that the recipient knows. What we can do is compute a route to whichever node I’m trying to probe, construct an onion and then send out an HTLC that cannot possibly be claimed by the recipient. Depending on the error message that comes back, whether the destination says “I don’t know what you’re talking about” or some intermediate node says “Insufficient capacity”, we can determine how far we got with this probe and what kind of error happened at the point where it failed. We can learn something about the network and how it operates in the real world. That’s invaluable information. For example we measured how probable a stuck payment is, something that has been dreaded for a long time. It turns out that stuck payments are really rare. They happen in 0.18 percent of cases for payments. It’s also really useful to estimate the capacity that we have available for sending payments to a destination. That’s something that we’ve done for the pre-split analysis for example. We said “Anything below 10,000 satoshis has a reasonable chance of success. Anything above might be tricky.” Before even trying anything we split right at the start into smaller chunks. Those are all upsides for probes but I definitely do see that there is a downside to probing and that is that we leak some privacy. What privacy do we leak? It’s the channel capacities. Why are channel capacities dangerous to be known publicly? It could enable you to trace a payment through multiple hops. Let’s say for example we have channels A, B and C that are part of a route. Along these three channels we detect a change in capacity of 13 satoshis. Now 13 satoshis is quite a specific number. The probability of that all belonging to the same payment is quite high. But for us to collate this information into reconstructing payments based solely on observing capacity changes we also need to make sure that our observations are relatively close together. Because if an intermediate payment comes through that might obscure the signal that allows us to collate the payment. That’s where I think that MPP payments can actually hugely increase privacy simply by providing enough noise to make this collating of multiple observations really hard because channel balances now change all the time. You cannot have a channel that is constant for hours and hours and then suddenly a payment goes through and you can measure it. Instead you have multiple payments going over a channel in different combinations and the balance changes cannot be collated into an individual payment anymore. That is combined with efforts like Rene Pickhardt’s just-in-time rebalancing where you obscure your current balance by rebalancing on the fly while you are holding onto an HTLC. The channel can pretend to be larger than it actually is simply because we rebalance it on the fly. I think probing can be really useful when it comes to measuring the performance metrics for the Lightning Network.
It could potentially be a privacy issue but at the timeframes that we’re talking today it’s really improbable to be able to trace a payment through multiple channels.
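A small sketch of how such a capacity probe could be driven by binary search, with simplified stand-ins for the real failure codes:

```python
# Sketch of capacity probing as described: send HTLCs with a payment hash the
# destination cannot fulfil and binary-search on the amount. The failure names
# are simplified stand-ins for the actual BOLT 4 error codes.

import random

def probe_once(route_capacity_msat: int, amount_msat: int) -> str:
    """Stand-in for sending a single probe with an unknown payment hash."""
    if amount_msat > route_capacity_msat:
        return "temporary_channel_failure"          # failed at an intermediate hop
    return "incorrect_or_unknown_payment_details"   # made it to the destination

def estimate_capacity(route_capacity_msat: int, upper_msat: int = 2 ** 30) -> int:
    lo, hi = 0, upper_msat
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if probe_once(route_capacity_msat, mid) == "incorrect_or_unknown_payment_details":
            lo = mid   # mid made it through, so capacity is at least mid
        else:
            hi = mid   # mid was too large
    return lo

secret = random.randint(1, 2 ** 30 - 1)
print(estimate_capacity(secret) == secret)  # True
```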

SL: Especially with MPP and once you add all these different layers it seems quite low risk I guess. Christian, I’ve really enjoyed chatting with you. We’ve almost gone two hours at this point. I’ve definitely learned a lot and I’m sure SLP listeners will appreciate being able to learn from you today. For any listeners who want to find you online, where can they find you?

CD: I’m @cdecker on GitHub and @snyke on Twitter.

SL: Fantastic. I’ve really enjoyed chatting with you. Thank you for joining me.

CD: Thank you so much. Pleasure as always and keep on doing the good work.

\ No newline at end of file +https://stephanlivera.com/episode/200/

Transcript completed by: Stephan Livera Edited by: Michael Folkson

Latest ANYPREVOUT update

ANYPREVOUT BIP (BIP 118): https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki

Stephan Livera (SL): Christian welcome back to the show.

Christian Decker (CD): Hey Stephan, thanks for having me.

SL: I wanted to chat with you about a bunch of stuff that you’ve been doing. We’ve got a couple of things that I was really interested to chat with you about: ANYPREVOUT, MPP, Lightning attacks, the latest with Lightning Network. Let’s start with ANYPREVOUT. I see that yourself and AJ Towns just recently did an update and I think AJ Towns did an email to the mailing list saying “Here’s the update to ANYPREVOUT.” Do you want to give us a bit of background? What motivated this recent update?

CD: When I wrote up the NOINPUT BIP it was just a bare bones proposal that did not consider or take into consideration Taproot at all, simply because we didn’t know as much about Taproot as we do now. What I did for NOINPUT (BIP 118) was to have a minimal working solution that we could use to implement eltoo and a number of other proposals on top. But we didn’t integrate it with Taproot simply because that wasn’t at a stage where we could use it as a solid foundation yet. Since then that has changed. AJ went ahead and did the dirty work of actually integrating the two proposals with each other. That’s where ANYPREVOUT and ANYPREVOUTANYSCRIPT, the two variants, came out. Now it’s very nicely integrated with the Taproot system. Once Taproot goes live we can deploy ANYPREVOUT directly without a lot of adaptation having to happen. That’s definitely a good change. ANYPREVOUT supersedes the NOINPUT proposal which was a bit of a misnomer. Using ANYPREVOUT we get the effects that we want to have for eltoo and some other protocols and have them nicely integrated with Taproot. We can propose them once Taproot is merged.

Eltoo

Christian Decker on Eltoo at Chaincode Labs: https://diyhpl.us/wiki/transcripts/chaincode-labs/2019-09-18-christian-decker-eltoo/

SL: Let’s talk a little bit about the background. For the listeners who aren’t familiar, what is eltoo? Why do we want that as opposed to the current model for the Lightning Network?

CD: Eltoo is a proposal that we came up with about two years ago. It is an alternative update mechanism for Lightning. In Lightning we use what’s called an update mechanism to go from one state to the next one and make sure that the old state is not enforceable. If we take an example, you and I have a channel open with 10 dollars on your side. The initial state reflects this: 10 dollars go to Stephan and zero goes to Christian. If we do any sort of transfer, some payment that we are forwarding over this channel or a direct payment that we want to have between the two of us, then we need to update this state. Let’s say you send me 1 dollar. The new state becomes 9 dollars to Stephan and 1 dollar to Christian. But we also need to make sure that the old state cannot be enforced anymore. You shouldn’t be able to go back and say “Hey I own 10 out of 10 dollars on this contract.” I need to have the option of saying “Wait that’s outdated. Please use this version instead.” What eltoo does is exactly that. We create a transaction that reflects our current state. We have a mechanism to activate that state and we have a mechanism to override that state if it turns out to be an old one instead of the latest one. For this to be efficient what we do is we say “The newest state can be attached to any of the old states.” Traditionally this would be done by taking the signature and, if there are n old states, creating n variants with n signatures, one for each binding to an old state. With the ANYPREVOUT or NOINPUT proposal we have the possibility of having one transaction that can be bound to any of the previous states without having to re-sign. That’s the entire trick. We make one transaction applicable to multiple old states by leaving out the exact location from where we are spending. We leave out the UTXO reference that we’re spending when signing. We can modify that later on without invalidating the signature.

SL: Let me replay my understanding there. This is the current model of Lightning. You and I set up a channel together. What we’re doing is we’re putting a multisignature output onto the blockchain and that is a 2-of-2. Then what we’re doing is we’re passing back and forward the new states to reflect the new output. So let’s say it was initially 10 dollars to me and zero to you and then 9 dollars to me and 1 dollar to you. In the current model if somebody tries to cheat the other party. Let’s say I’m a scammer and I try to cheat you. I publish a Bitcoin transaction to the blockchain that is the pre-signed commitment transaction that closes the channel. The idea is your Lightning node is going to be watching the chain and say “Oh look, Stephan’s trying to cheat me. Let me do my penalty close transaction.” In the current model that would put all the 10 dollars onto your side.

CD: Exactly. For any of my wrong actions you have a custom tailored reaction that punishes me and penalizes me by crediting you with all the funds. The exact issue that we’re facing is that these reactions have to be custom tailored to each and every possible misbehavior that I could commit. Your set of retaliatory transactions grows every time that we perform a state change. We might have had 1 million states since the beginning and for each of these 1 million states you have to have a tailored reaction that you can replay if I end up publishing transaction 993 for example. This is one of the core innovations that eltoo brings to the table. You no longer have this custom tailored transaction for each of the previous states. Instead you can tailor it on the fly to match whatever I just did. You do not have to keep an ever-growing set of retaliation transactions in your database backed up somewhere or at the ready.

SL: In terms of benefits, it softens the penalty model. Instead of one party cheating the other and then losing everything, now if somebody publishes a wrong transaction or an old state then the other party just publishes the most up to date one that they have. The other benefit here is a scaling one that it might be easier for someone to host watchtowers because it’s less hard drive usage.

CD: Exactly. It is definitely the case that it becomes less data intensive in the sense that a watchtower or even yourself do not have to manage an ever growing set of transactions. Instead all you need to do is to have the latest transaction in your back pocket. Then you can react to whatever happens onchain. That’s true for you as well as for watchtowers. Watchtowers therefore become really cheap because they just have to manage these 200 bytes of information. When you hand them a new transaction, a new reaction, they throw out the old one and keep the new one. The other effect that you mentioned is that we now override the old state instead of using the old state but then penalizing. That has a really nice effect. What we do in the end is enforce a state that we agreed upon instead of enforcing “This went horribly wrong and now I have to grab all of the money.” It changes the semantics of what we do a bit: we can only update the old state, not force an issue on the remote end and steal money from them. That’s really important when it comes to backups with Lightning. As it is today backups are almost impossible to do because whenever you restore you cannot be sure that it’s really the latest state and that when you publish it it’s not going to be seen as a cheating attempt. Whereas with eltoo you can take any old state, publish it and the worst that can happen is that somebody else comes along and says “This is not the latest state, there’s a newer one. Here it is.” You might not get your desired state. Let’s say you want to take all 10 out of 10 dollars from the channel but you will still get the 9 out of 10 that you own in the latest state because all I can do is override your “10 go to Stephan” with my “9 go to Stephan and 1 goes to Christian”. We’ve reduced the penalty for misbehavior in the network from being devastating and losing all of the funds to a more reasonable level where we can say “At least I agreed to the state and it’s going to be a newer state that I agreed upon.” I often compare it to the difference between Lightning penalty being death by beheading whereas eltoo is death by a thousand paper cuts. The cost of misbehaving is much reduced, allowing us to have working backups and a lot of other nice properties that we can probably talk about later such as true multiparty channels with any number of participants. That’s all due to the fact that we no longer insist on penalizing the misbehaving party, we now instead correct the effects that the misbehaving party wanted to trigger.

SL: Fantastic. From your eltoo paper it introduces the idea of state numbers and onchain enforceable variant of sequence numbers. As I understand there’s a ratchet effect that once you move up to that new state that’s now the new one. It means that at least one of our nodes has the ability to enforce the correct latest state. Could you just explain that state numbers idea?

CD: The state numbers idea actually connects back to the very first iteration of Bitcoin, like we had with the nSequence proposal that Satoshi himself added. nSequence meant that you could have multiple versions of transactions and miners were supposed to pick the one with the highest sequence number and confirm that, basically replacing any previous transaction that had a lower sequence number. That had a couple of issues, namely that there is no way to force miners to do this. You can always bribe a miner to use a version of a transaction that suits you better or they might be actively trying to defraud you. There is no really good way of enforcing nSequence numbers. On the other hand, what we do with the state numbers is that we do not give the miners the freedom to choose which transaction to confirm. What we do is we say “We have transaction 100 and this transaction 100 can be attached to any previous transaction, confirmed or unconfirmed, that has a state number lower than 100.” In eltoo we say “This latest state represented by this transaction with a state number of 100 can be attached to any of the previous transactions and override their effect by ratcheting forward the state.” Let’s say you have published state 90. That means that anything with state number 91, 92, 93 and so on can be attached to your transaction. Your transaction might confirm but the effects that you want are in the settlement part of the transaction. If I can come in and attach a newer version of that state, a new update transaction, to your published transaction, then I can basically detach the settlement part of the transaction from it by ratcheting forward. I have just disabled your attempt at settlement by ratcheting forward and initiating the settlement for state 100. Then you could come along and say “Sorry I forgot about state 100. Here’s state 110.” Even while closing the channel we can still continue making updates to the eltoo channel using these state numbers. The state numbers are really nothing more than an explicit way of saying “This number 100 overrides whatever came before it.” Whereas with LN penalty the only association you have between the individual transactions and so on is by following the “is spent by” relationship. You have a set of transactions that can be published together. But there is no sense of transitive overriding of effects.
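The ratchet rule itself is tiny; as an illustrative sketch (not the actual script logic):

```python
# Tiny sketch of the ratchet rule described above: a later update transaction
# can be attached to any published state with a lower state number, detaching
# that state's settlement. Purely illustrative.

def can_override(published_state_number: int, new_state_number: int) -> bool:
    """Higher state numbers ratchet forward past anything that came before."""
    return new_state_number > published_state_number

print(can_override(90, 100))  # True: state 100 overrides a published state 90
print(can_override(100, 93))  # False: old state 93 cannot override state 100
```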

SL: A naive question that a listener might be thinking, Christian what if I tried to set my state number higher than yours, what’s stopping me from doing that?

CD: You can certainly try. But since we are still talking about 2-of-2 multisig outputs I would have to countersign that. I might sign it but then I will make sure that if we later come to a new agreement on what the latest state should be, that state number must be higher than whatever I signed before. This later state can then override your spuriously numbered state. In fact that’s something that we propose in the paper to hide the number of updates that were performed on a channel. Not to go incrementing one by one but to have different sized increment steps so that when we settle onchain we don’t tell the rest of the world “Hey, by the way we just had 93 updates.”

RBF pinning

Matt Corallo on RBF pinning: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-April/017757.html

SL: From watching the Bitcoin dev mailing list, I saw some discussion around this idea of whether the Lightning node should also be looking into what’s going on in the mempool of Bitcoin versus only looking for transactions that actually get confirmed into the chain. Can you comment on how you’re thinking about the security model? As I understand, you’re thinking more that we’re just looking at what’s happening on the chain and the mempool watching is a nice to have.

CD: With all of these protocols we can usually enforce them purely onchain and we don’t need to look at the mempool. That’s true for eltoo as it is true for Lightning penalty. Recently we had a lengthy discussion about an issue that is dubbed the RBF pinning attack which makes this a bit harder. The attack is a bit involved but it basically boils down to the attacker placing a placeholder transaction in the mempool of the peers making sure that that transaction does not confirm. But being in the mempool that transaction can result in rejections for future transactions. That comes into play when we are talking about HTLCs which span multiple channels. We can have effects where the downstream channel is still locked because the attacker placed a placeholder transaction in the mempool. We are frantically trying to react to this HTLC being timed out but our transaction is not making it into the mempool because it is being rejected by this poison transaction there. If that happens on a single channel that’s ok because eventually we will be able to resolve that and an HTLC is not a huge amount usually. Where this becomes a problem is if we were forwarding that payment and we have a matching upstream HTLC that now also needs to time out or succeed. That depends on the downstream HTLC which we don’t get to see. So it might happen that the upstream HTLC gets timed out. Our upstream node told us “Here’s 1 dollar. I promise to give it to you if you can show me this hash preimage in a reasonable amount of time.” You turned around and forwarded that promise and said “Hey attacker, here’s 1 dollar. You can have it if you give me the secret in time.” The downstream attacker doesn’t tell you in time so you will be ok with the upstream one timing out. But it turns out the downstream one can succeed. So in the end you’re out of pocket for the forwarded amount. That is a really difficult problem to solve without looking at the mempool because the mempool is the only indication that this attack is going on and therefore that we should be more aggressive in reacting to it. Most Lightning nodes do not actually look at the mempool currently. There are two proposals that we’re trying to do. One is to make the mempool logic a bit less unpredictable, namely that our reaction can still make progress even though there is this poison transaction. That is something that we’re trying to get the Bitcoin Core developers interested in. On the other side we are looking into mechanisms to look at the mempool, see what is happening and then start alerting nodes that “Hey you might be under attack. Please take precautions and react accordingly.”

SIGHASH flags

SL: I also wanted to chat about SIGHASH flags because ANYPREVOUT and ANYPREVOUTANYSCRIPT are some new SIGHASH flags. Could you take us through some of the basics on what is a SIGHASH flag?

CD: A sighash flag is sometimes confused with an opcode. It is just a modifier of existing opcodes, namely the OP_CHECKSIG, OP_CHECKSIGVERIFY, OP_CHECKMULTISIG and OP_CHECKMULTISIGVERIFY variants, that basically instructs the CHECKSIG operation which part of the transaction should be signed and which part should not be signed. In particular what we do with SIGHASH_ANYPREVOUT is, when computing the signature and verifying the signature, we do not include the previous outputs in the signature itself. These can be modified if desired without invalidating the signature. It is like a kid having a bad grade at school coming home and needing a signature from the parents. What he does is he covers up part of the permission slip and the parents still sign it. Only then does he uncover the covered part. This changing of what was signed does not invalidate the signature itself. That’s sort of a nefarious example but it can be really useful. If you’ve ever given out a blank check for example where you could fill in the amount at a later point in time or fill out the recipient at a later point in time, that’s a very useful tool. For eltoo we use the reaction transaction to something that our counterparty has done and adapt it in such a way that it can cleanly attach to what your counterparty has done. There are already some existing SIGHASH flags. The default one is SIGHASH_ALL which covers the entirety of the transaction without the input scripts. There’s SIGHASH_SINGLE, which has been used in a couple of places. It signs the input and the matching output but there can be other inputs and outputs as well that are not covered by the signature. You can amend a transaction and later on add new funds to that transaction and new recipients to that transaction. We use that for example to attach fees to transactions in eltoo. Fees in eltoo are not intrinsic to the update mechanism itself. They are attached like a sidecar which removes the need for us to negotiate fees between the endpoints. Something that in the beginning of Lightning caused a lot of channels to die, simply through disagreement on fees. There’s also SIGHASH_NONE which basically signs nothing. It signs the overall structure of the transaction but it doesn’t restrict which inputs can be used, it doesn’t restrict which outputs can be used. If you get one of these transactions, you can basically rewrite it at will, sending yourself all the money that would have been transferred by it.
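For reference, the standard flag values, plus the ones proposed in the BIP 118 draft (which could still change):

```python
# The standard sighash flag values. The two ANYPREVOUT values are the ones
# proposed in the current BIP 118 draft and are subject to change.

SIGHASH_ALL = 0x01            # sign all inputs and outputs (the default)
SIGHASH_NONE = 0x02           # commit to no outputs at all
SIGHASH_SINGLE = 0x03         # commit only to the output matching this input's index
SIGHASH_ANYONECANPAY = 0x80   # modifier: commit only to this input, more may be added

# Proposed in the BIP 118 draft:
SIGHASH_ANYPREVOUT = 0x41
SIGHASH_ANYPREVOUTANYSCRIPT = 0xc1
```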

SL: I guess for most users, without knowing when they’re doing standard single signature spending on their phone wallet or whatever, they’re probably using SIGHASH_ALL. That’s what their wallet is using in the background for them. If the listener wants to see how this might work, they could pull up a block explorer and on a transaction you can see the different inputs and outputs. What we’re talking about here is what we are committing to when we sign. What is that signature committing to?

CD: It takes the transaction, it passes through a hash function and then the hash is signed. The effect that we have is that if anything in the transaction itself is modified, which was also part of the hash itself, then the signature is no longer valid. It means that I both authorize this transaction and I authorize it only in this form that I’m currently signing. There can be no modification afterwards or else the signature would have to change in order to remain valid. If with the SIGHASH flags we remove something from the commitment to the transaction then we give outsiders or ourselves the ability to modify without having to re-sign.

SL: For the typical user just doing single signature, their wallet is going to use SIGHASH_ALL but where they are doing some sort of collaborative transaction or there’s some kind of special construction that’s where we’re using some of these other sighash flags. With eltoo and ANYPREVOUT the idea is that these ANYPREVOUT sighash flags will allow us to rebind to the prior update correct?

CD: Exactly yes.

Difference between ANYPREVOUT and ANYPREVOUTANYSCRIPT

SL: Could we talk about ANYPREVOUT and ANYPREVOUTANYSCRIPT? What’s the difference there?

CD: What we do with SIGHASH_ANYPREVOUT is no longer explicitly saying “Hey by the way I’m spending those funds over there.” Instead what we say is “The output script and the input script have to match. Other than that we can mix these transactions however we want.” Instead of having an explicit binding saying “Hey my transaction 100 now connects to transaction 99”, where the scripts then also have to match. By the scripts I mean the output script would specify that the spender has to sign with public key X and the input script would contain a signature by public key X. Instead of binding by both the explicit reference and the scripts we now bind solely by the scripts. That means that as long as the output says “The spender has to sign with public key X” and the input of the other transaction that is being bound to it has a valid signature for public key X in it, then we can attach these two. The difference between ANYPREVOUT and ANYPREVOUTANYSCRIPT is whether we include the output script in the hash of the spending transaction or not. For ANYPREVOUT we still commit to the script we are spending. We take a copy of the script saying that the spending transaction needs to be signed by public key X. We move that into the spending transaction and then include it in the signature computation so that if the output script is modified we cannot bind to it. Whereas ANYPREVOUTANYSCRIPT says “We don’t take a copy of the output script into the input of the spending transaction but instead we have a blank script. We can bind it to any output whose output script matches our input script.” It is a bit more freedom but it is also something that we need for eltoo to work because the output script of the transaction we’re binding to includes the state number and that obviously changes from state to state. We still want to have the freedom of taking a later state and attaching it to any of the previous states. For eltoo we’d have to use ANYPREVOUTANYSCRIPT. There are a couple of use cases where ANYPREVOUT is suitable on its own. For example if we have any sort of transaction malleability and we still want to take a transaction that connects to a potentially malleable transaction, then we can use SIGHASH_ANYPREVOUT. If the transaction gets malleated in the public network before being confirmed, we can still connect to it using the connection between the output script and the input script and the commitment to the output script in the spending transaction.
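A much-simplified sketch of that distinction, with illustrative field names; several fields of the real BIP 341/118 sighash (amounts, annex, spend type and so on) are deliberately omitted:

```python
# Much-simplified comparison of what the signature commits to in each mode,
# following only the script/prevout distinction drawn above. Not the real
# sighash serialization.

def committed_fields(mode: str) -> set:
    fields = {"version", "locktime", "outputs", "this_input_sequence"}
    if mode == "default":
        # normal signature: also pins the exact prevout and its script
        fields |= {"prevout_txid_and_index", "script_being_spent"}
    elif mode == "anyprevout":
        # the specific prevout is dropped, but the script being spent stays in
        fields |= {"script_being_spent"}
    elif mode == "anyprevoutanyscript":
        # neither the prevout nor the script is committed to; eltoo needs this
        # because the script embeds the state number, which changes every update
        pass
    return fields

print(committed_fields("anyprevout") - committed_fields("anyprevoutanyscript"))
# {'script_being_spent'}
```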

Transaction malleation

SL: You were mentioning malleation there. Could you outline what malleation is?

CD: Malleation is the bane of all offchain protocols. Malleation is something that we've known about for well over seven years now. If you remember, the MtGox hack was for some time attributed to malleability. They said "Our transactions were malleated. We didn't recognize them anymore so we paid out multiple times." What happens is I create a transaction and this transaction includes some information that is covered by the signature and can therefore not be changed, but it also includes some information that cannot possibly be covered by the signature. For example, the signature itself. In the input script of a transaction we need to have the signatures but we cannot include the signatures in the signature itself, otherwise we'd have a circular argument. So while signing, the input scripts are set to blank and not committed to. That means that if we then publish this transaction there are places in the transaction that can be modified without invalidating the signature. Some of this includes push operations, for example normalizations of the signatures themselves. We can add prefixes to stuff. We can add dummy operations to the input script. We change how the transaction looks just slightly without invalidating the signature itself. The transaction now looks different and gets confirmed in this different form, but we might have a dependent transaction that refers to the old form by its hash, its unchanged form. This follow up transaction that was referencing the unmodified transaction can no longer be used to spend those funds because the miner will just see this new transaction, go look for the old output that it is spending, and that output doesn't exist because it looks slightly different now that the hash changed. It will say "I don't know where you're getting that money from. Go away." It will throw away that transaction and it will not get confirmed. Whereas with SIGHASH_ANYPREVOUT we can counter this: the transaction in the wider network can be modified and confirmed in this modified state, and then the sender of the follow up transaction can just say "Ok, I see that there has been a modification to the transaction that I'm trying to spend from. Let me adjust my existing transaction by changing the reference inside of the input to this new alias that everybody else knows the old transaction by." Now we can publish this transaction. We did not have to re-sign the transaction. We did not have to modify the signature. All we had to do was take the reference and update it to point to the real confirmed transaction. That makes offchain protocols a lot easier because while having a single signer re-sign a transaction might be easy to do, if we're talking about multisig transactions where multiple of us have to sign off on any change, that might not be so easy to implement. ANYPREVOUT gives us this freedom of reacting to stuff that happens onchain or in the network without having to go around and convince everybody "Hey, please sign this updated version of this transaction because somebody did something in the network."
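
A toy illustration of why this breaks child transactions: the legacy txid commits to the input scripts, so a cosmetic change there gives the same transaction a different id, and any child referencing the old id becomes unspendable. The serializations below are stand-ins, not real transaction encodings.

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    # Legacy txid: double SHA256 over the whole serialization, input scripts included.
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()[::-1].hex()

# Same signature and outputs, but the second version has a harmless extra push
# in the input script, which the signature does not (and cannot) cover.
original  = b"version|prevout=99:0|scriptSig=<sig>|outputs|locktime"
malleated = b"version|prevout=99:0|scriptSig=<dummy push><sig>|outputs|locktime"

print(txid(original))
print(txid(malleated))
print(txid(original) == txid(malleated))  # False: a child spending txid(original) is now orphaned
```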

Risks to everyday Bitcoin users

SL: If I understood correctly, the way eltoo has been constructed, it is defending against that risk. You're trying to use this new functionality of being able to rebind dynamically. For listeners who are concerned that maybe there's a risk, this is all opt in? It only applies if you want to use Lightning in the eltoo model. You and I would use this special type of SIGHASH flag with a special kind of output that we do the updates on our channel with. If somebody doesn't want to they can just not use Lightning.

CD: Absolutely. It is fully opt in. It is a SIGHASH flag. We do have a couple of SIGHASH flags already but no wallet that I'm aware of implements anything but SIGHASH_ALL. So if you don't want to use Lightning or you don't want to use any of the offchain protocols that are based on SIGHASH_ANYPREVOUT, simply don't use a wallet that can sign with them. These are very specific escape hatches from the existing functionality that we need to implement more advanced technologies on top of the Bitcoin network. But it is by no means something that suddenly everybody should start using just because it's a new thing that is out there. If we're careful not to even implement SIGHASH_ANYPREVOUT in everyday consumer wallets then this will have no effect whatsoever on the users that do not want to use these technologies. It is something that has a very specific use case. It's very useful for those use cases but by no means does everybody need to use it. We're trying to add as many security features as possible. For example if you sign with a SIGHASH flag that is not SIGHASH_ALL, you as the signing party are the only one that is deciding whether to use the sighash flag or not. Whereas with the ANYPREVOUT changes that were introduced, AJ has done a lot of work on this, he introduces a new public key format that explicitly says "Hey, I'm available for SIGHASH_ANYPREVOUT." Even the output that is being spent from now has the ability to opt into ANYPREVOUT being used or not. Both have to match: the public key that is being signed for has to have opted in for ANYPREVOUT and the signing party has to opt in as well. Otherwise we fall back to existing semantics.

ANYPREVOUT reliance on Taproot

SL: As I understand this BIP 118, there is a reliance on Taproot being activated first before ANYPREVOUT?

CD: Obviously we would have liked to have ANYPREVOUT as soon as possible but one of the eternal truths of software development is that reviewer time is scarce. We decided not to push too hard on ANYPREVOUT being included in Taproot itself to keep Taproot very minimal, clean and easy to review. Then try to do an ANYPREVOUT soft fork at a future point in time, at which we will hopefully have enough confidence in our ability to perform soft forks that we can actually roll out ANYPREVOUT in a reasonable amount of time. For now it's more important for us to get Taproot through. Taproot is an incredible enabling technology for a number of changes, not just for Lightning or eltoo but for a whole slew of things that are based on Taproot. Any delay in Taproot would definitely not be in our interest. We do see the possibility of rolling out ANYPREVOUT without too many stumbling blocks at a second stage once we have seen Taproot activated correctly.

Signature replay

SL: Also in the BIP118 document by AJ there’s a discussion around signature replay. What is signature replay? How is that being stopped?

CD: Signature replay is one of the big concerns around the activation of ANYPREVOUT. If I have one transaction that can be rebound to a large number of transactions, I am not forced to use that transaction only in a specific context; I could use it in a different context. For example, if we were to construct an offchain protocol that was broken and couldn't work, we could end up in a situation where you have two outputs of the same value that opted in for ANYPREVOUT and you have one transaction that spends one of them. Since both opted into ANYPREVOUT and both have the identical script and the identical value, I could replay that transaction on both outputs at once. So instead of the intended effect of me giving you, let's say, 5 dollars in one output, you can claim 5 dollars twice by replaying this multiple times. This is true for offchain protocols that are not well developed and are broken, because well designed offchain protocols will only ever have one transaction that you can bind to. You cannot have multiple outputs that all can be spent by the same ANYPREVOUT transaction. But it might still happen that somebody goes onto a blockchain explorer, looks up the address and then sends some money that happens to be the exact same value to that output. What we're trying to do is find good ways to prevent exactly the scenario of somebody accidentally sending funds and creating an output that could potentially be claimed by SIGHASH_ANYPREVOUT, by making these scripts unaddressable. We create a new script format for which there is no bech32 encoding, so suddenly you cannot go onto a blockchain explorer and manually interfere with an existing offchain protocol. There are a number of steps that we are trying to take to reduce this accidental replayability. That being said, in eltoo for example, the ability to rebind a transaction to any of the previous matching ones is exactly what we were trying to achieve. It is a very useful tool but in the wrong hands it can be dangerous. So don't go play with SIGHASH_ANYPREVOUT if you don't know what you are doing.

Potential ANYPREVOUT activation

SL: What would the pathway be to activate ANYPREVOUT? What stage would it be in terms of people being able to review it or test it?

CD: I had a branch for SIGHASH_NOINPUT which was used by Richard Myers to implement a prototype of eltoo in Python that is working. I’m not exactly sure if ANYPREVOUT has any code that can be used just yet. I would have to definitely check with AJ or implement it myself. It shouldn’t be too hard to implement given that SIGHASH_NOINPUT consisted of two IF statements and a total of four lines changed. I don’t foresee any huge technical challenges. It is mostly just the discussion on making it safe, making it usable and making it efficient that will take a bit longer. We have that time as well because we are waiting for Taproot to be activated in the meantime.

SL: Is there anything else you wanted to mention about ANYPREVOUT or shall we now start talking about MPP?

CD: ANYPREVOUT is absolutely cool. We’re finding so many use cases. It’s really nice. I would so love to see it.

MPP (Multi-Part Payments)

MPP blog post: https://medium.com/blockstream/all-paths-lead-to-your-destination-bc8f1a76c53d

SL: We’ll see what all the Bitcoin people out there are thinking. The benefits of having eltoo in Lightning would be pretty cool. It enables multi-party channels which for listeners who haven’t listened to our first episode, I think it is Episode 59 off the top of my head, have a listen to that. There’s a lot of possibilities there in terms of multi-party channels. That helps in terms of being able to get around that idea that there won’t necessarily be enough UTXOs for every person on earth. That’s why multi-party channels might be a handy thing to have. So let’s have a look into MPP then, multi-part payments. Listeners also check out my earlier episode with Rusty on this one. Christian, you had a great blog post talking about MPP and how it’s been implemented in c-lightning. Do you want to tell us the latest with that?

CD: We implemented multi-part payments as part of our recent 0.9.0 release which we published about 10 days ago. Multi-part payments is one of those features that has been long awaited because it enables us to be much more resilient and adapt better to the network conditions that we encounter. It boils down to this: instead of sending a payment all in one chunk, we split the payment into multiple partial payments and send them on different paths from us to the destination. That makes better use of the network liquidity and allows us to create bigger payments since we are no longer constrained by the capacity of individual channels. Instead we can bundle multiple channels' capacities and use the aggregate of all of these channels together. It also allows us to make payments much more reliable. We send out parts, get back information and only retry the parts that failed on our first attempt.

SL: So there’s a lot of benefits.

CD: There’s also a couple of benefits when it comes to privacy but we’ll probably talk about those a bit later.

SL: What are some of the main constraints if your node wants to construct an MPP multi-part payment onion package?

CD: There's two parts to MPP. One is the recipient part, which is "I know I should receive 10 dollars and I received only two. I'll just keep on waiting until I get the rest, holding on to the initial two that I already have for sure." The recipient grabs money and waits for it all to be there before claiming the full amount. On the sender's side what we do is split the payment into multiple parts. For each of these partial payments we compute a route from us to the destination. For each of these we go ahead and compute the routing onion. Each individual part has its own routing onion, has its own path, has its own fate so to speak. Then we send out the partial payment with its onion on its merry way until we either get to the destination, at which point the destination will collect the promise of incoming funds and, if it has all of the funds that were promised available, it will release the payment preimage, thus locking in atomically all of the partial payments. Or we get back an error saying "This channel down the road doesn't have enough capacity. Please try again." At which point we update our view of the network, compute a new route for this payment and try again. If we cannot find a new route for the amount that we have, we split it in half and now we have two parts that we try to send independently from each other. The sender side is pretty much in control: how big do we make these parts? How do we schedule them? How do we route them? How do we detect a fatal error that we cannot recover from? When do we decide that this part is ok but that part is about to be retried? The sender part is where all of the logic is. The recipient just waits for incoming pieces and then at some point decides "Ok, I have enough. I'll claim all of them." The sender side required us to reengineer quite a lot of our payment flow. That also enabled us to build a couple of other improvements like keysend support for example, which we so far only had a Python plugin for but now have a C plugin for as well.
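
For readers who like pseudocode, here is a rough, hedged sketch of the retry-or-split loop just described. The route finding and sending are stubbed out with random stand-ins and the names are invented; this is the shape of the logic, not c-lightning's actual implementation.

```python
import random

class PaymentFailed(Exception):
    pass

def find_route(amount_msat):
    """Stub: route-finding succeeds less often as the part gets bigger."""
    return "route" if random.random() < 10_000_000 / max(amount_msat, 10_000_000) else None

def send_part(route, amount_msat):
    """Stub: a sent part either reaches the destination or fails mid-route."""
    return random.random() < 0.8

def pay(amount_msat, min_part_msat=100_000, max_attempts=100):
    """Retry failed parts and split a part in half when no route can carry it."""
    pending, delivered = [amount_msat], 0
    while pending and max_attempts:
        max_attempts -= 1
        part = pending.pop()
        route = find_route(part)
        if route is None:
            if part // 2 < min_part_msat:          # ~100 sat floor: base fees would dominate
                raise PaymentFailed("cannot split any further")
            pending += [part // 2, part - part // 2]
        elif send_part(route, part):
            delivered += part                      # recipient holds it until everything arrives
        else:
            pending.append(part)                   # update our view, then retry this part
    if pending:
        raise PaymentFailed("gave up after too many attempts")
    return delivered

print(pay(50_000_000))   # deliver 50,000 sat (amounts in msat)
```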

SL: You were talking through the two different processes. You’ve got the pre-split and then you mentioned the adaptive splitting. Once you’ve tried it one time and it failed, now you can take that knowledge and try a slightly different split or slightly different route. It will then create the new payment and try to send that?

CD: Exactly. The adaptive splitting is exactly the part that we mentioned before. We try once and then depending on what comes back we decide do we retry? Do we split? What do we do now? Is this something that we can still retry and have a chance of completing? Or do we give up?

SL: Let’s say you have installed c-lightning and you’re trying to do a payment. In the background what’s going on is your node has its own graph of the network and it’s trying to figure out “Here’s where the channels are that I know about. Here’s what I know of the capacity.” Does it then have better information and therefore each successive try is a little bit better. How does that work?

CD: Exactly. So initially what we see in the network is channels as total capacities. If the two of us opened a 10 dollar channel then somebody else would see it as 10 dollars. They would potentially try to send 8 dollars through this channel. Depending on the ownership of those 10 dollars this might be possible or not. For example, if we each own five there's no way for us to send 8 dollars through this channel. We will report an error back to the sender and the sender will then know that 8 was more than the capacity. I will remember this upper limit on the capacity. It might be lower, but we know that we cannot send 8 dollars or more through that channel. As we learn more about the network our information will be more and more precise and we will be able to make better predictions as to which channels are usable and which channels aren't for a given payment of a given size. There is no point in us retrying this 8 dollar payment through our well balanced channel again, because that cannot work. But if we split in two and now have two 4 dollar parts then one of them might go through our channel. Knowing that we have 5 and 5 it will actually go through. Now the sender is left with a much easier task of finding a second 4 dollar path from himself to the destination rather than pushing this one big chunk of 8 all at once.
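
The bookkeeping for this can be as simple as remembering, per channel, the smallest amount we have seen fail; anything at or above that bound is not worth retrying. A hedged sketch (names and units are made up to match the dollars example):

```python
# Remember an upper bound on how much a channel can forward after a failed attempt.
capacity_hint = {}   # channel -> best known upper bound on what it can forward

def on_insufficient_capacity(channel, attempted):
    prev = capacity_hint.get(channel, float("inf"))
    capacity_hint[channel] = min(prev, attempted)   # it can forward strictly less than this

on_insufficient_capacity("our_10_dollar_channel", 8)
print(capacity_hint)   # {'our_10_dollar_channel': 8}: no point retrying 8 or more here
```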

Impact of fees on MPP

SL: From the blog post, the way the fees work is there's a base fee and there's typically a percentage fee. If you split your MPP up into a hundred different pieces you're going to end up paying a massive amount of base fee across all of those hundred pieces. Your c-lightning node has to make a decision on how many pieces to split into.

CD: Exactly. We need to have a lower value after which we say "From now on it's unlikely that we're going to find any path because the payment is so small in size that it will be dominated by the base fee itself." This is something that we encountered quite early on when the first games started popping up on the Lightning Network. For example Satoshi's Place: if you wanted to color in one pixel on Satoshi's Place you'd end up paying one millisatoshi, but the base fee to get there would already be one satoshi. You'd be paying a 100,000 percent fee for your one millisatoshi transfer, which is absolutely ludicrous. So we added an exception for really tiny transfers that we call signaling transfers, because their intent is not really to pay somebody, it's more to signal activity. In those cases we allow you to have a rather large fee upfront. That is not applicable to MPP payments because if we were to give them a fee budget of 5 satoshis each then these would accumulate across all of the different parts and we'd end up with a huge fee. So we decided to give up if a payment is below 100 satoshis in size. Not give up, but not split any further, because at that size the base fee would dominate the overall cost of the transfers. What we did there was take the network graph, compute all end to end paths that are possible, all shortest paths, and compute what the base fee for these paths would be. If I'm not mistaken we have a single digit percent of payments that may still go through even though they are below 100 satoshis in size. We felt that stopping at something that is smaller than 100 satoshis is safe. We will still retry different routes but we will not split any further because that would double the cost in base fees at each split.
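
As a rough worked example of why splitting stops around 100 satoshis, assume a fairly typical fee policy of a 1 satoshi (1,000 msat) base fee plus 100 ppm per hop; the policy numbers are assumptions, but they show how the base fee dominates tiny parts:

```python
def fee_msat(amount_msat, base_fee_msat=1000, ppm=100):
    # Typical channel fee structure: a fixed base fee plus a proportional part.
    return base_fee_msat + amount_msat * ppm // 1_000_000

# 1 msat (one Satoshi's Place pixel), 100 sat, and 10,000 sat, each through one hop.
for amount in (1, 100_000, 10_000_000):
    fee = fee_msat(amount)
    print(f"{amount} msat pays {fee} msat in fees = {100 * fee / amount:.3f}%")
```

With these assumptions a 1 msat transfer pays a 100,000 percent fee, a 100 satoshi part pays about 1 percent, and a 10,000 satoshi part pays a fraction of a percent, which is why splitting below roughly 100 satoshis stops making sense.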

SL: In practice most people are opening channels much, much larger than that. A hundred satoshis is really trivial to be able to move that through. At current prices we’re talking like 6 or 7 cents or something.

CD: Speaking of channels and the expected size of a payment that we can send through, that brings me back to our other payment modifier, the pre-split modifier. Instead of having this adaptive mechanism where we try, learn something and retry with the new information incorporated, we decided to do something a bit more clever and say "Wait, why do we even try these really large payments in the first place when we know perfectly well that most of them will not succeed at first?" I took my Lightning node and tried to send payments of various sizes to different endpoints by probing them. Unlike my previous probes where I was interested in seeing if I could reach those nodes, I was more interested in, if I can reach them, how much capacity could I get on this path? What is the biggest payment that would still succeed in getting to the destination? What we did is we measured the capacity of channels along the shortest path from me to, I think, 2000 destinations. Then we plotted it and it was pretty clear that amounts below 10,000 satoshis, approximately 1 dollar, have a really good chance of succeeding on the first try. We measured the capacities in the network and found that payments of 10,000 satoshis in size can succeed relatively well. We have an 83 percent success rate for payments of exactly 10,000 satoshis; smaller amounts will have higher success rates. Instead of trying these large chunks at first and then slowly moving towards these sizes anyway, we decided to split right at the beginning of the payment into roughly 1 dollar sized chunks and then send them on their way. These already have a way better chance of succeeding on the first try than this one huge chunk would have initially.
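
The pre-split step itself is then very simple; a hedged sketch of cutting a payment into roughly 10,000 satoshi parts before any attempt is made (the target size is the one from the measurement above, the function name is invented):

```python
PRESPLIT_TARGET_MSAT = 10_000_000   # ~10,000 sat, roughly 1 dollar at the time of the episode

def presplit(amount_msat, target=PRESPLIT_TARGET_MSAT):
    # Split into target-sized parts plus one remainder part, before sending anything.
    parts = [target] * (amount_msat // target)
    if amount_msat % target:
        parts.append(amount_msat % target)
    return parts

print(presplit(45_000_000))  # 45,000 sat -> four 10,000 sat parts and one 5,000 sat part
```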

SL: To clarify, that percentage you were mentioning, that is on the first try, correct? It will then retry multiple times and the actual payment success rate is even higher than that for 10,000 sats?

CD: Absolutely, yeah.

SL: I think this is an interesting idea because it makes it easier for the retail HODLer or retail Lightning enthusiast to be able to set up his node and be a meaningful user of the network that they’re not so reliant on routing through the well known massive nodes, the ACINQ node, the Bitrefill node or the Zap node. It is easier for an individual because now you can split those payments across multiple channels?

CD: Absolutely. What we do with the pre-split and adaptive splitter is make better use of the network resources that are available by spreading a single payment over a larger number of routes. We give each of the nodes on those routes a tiny sliver of fees instead of going through the usual suspects and giving them all of the fees. We make revenue from routing payments more predictable. We learn more about the network topology. While doing MPP payments we effectively probe the network and find places that are broken, which will cause them to close channels that are effectively of no use anyway. Something that we've seen with the probing that we did for the Lightning Network conference was that if we end up somewhere where the channel is non-functional, we will effectively close that channel and prune the network of these relics that are of no use. We also speed up the end-to-end time by doing all of this in parallel instead of sequentially, where each payment attempt would be made one by one. We massively parallelized that to learn about the network and make better use of what we learned by speeding up the payment as well.

Privacy improvements of MPP

SL: I also wanted to touch on the privacy elements. There are probably multiple angles to think of. One angle might be if somebody was trying to surveil the network and wanted to understand the channel balances, to ascertain or infer from the movement in the balances who is paying whom. MPP changes that game a little bit; it makes it harder for them. But then maybe on the downside you might say that because we haven't moved to the Schnorr payment points, PTLC idea, it's still the same payment preimage. It is asking the same question, to use the phrasing Rusty used. In that sense it might theoretically be easier for a hypothetical surveillance company to set up spy Lightning nodes and see that they're asking the same question. What are your thoughts there?

CD: There is definitely some truth in the statement that by distributing a payment over more routes, and therefore involving more forwarding nodes, we are telling a larger part of the network about a payment that we are performing. That's probably worse than our current system where even if we were using a big hub, that hub would see one payment and the rest of the network would be none the wiser. On the plus side however, the one big hub approach would give away the exact value you're transferring to the big hub. Whereas if we pre-split to 1 dollar amounts and then do adaptive splitting, each of the additional nodes that are now involved in this payment learns a tiny bit about the payment being performed, namely that there is a payment, but since we use this homogeneous split where everything splits to 1 dollar, they don't really know much more than that. They will learn that somebody is paying someone but they will not learn the amount, and they will not learn the source and destination. And we are making traffic analysis a lot harder for ISP level attackers by really increasing the chattiness of the network itself. We make it much harder for observers to associate or collate individual observations into one payment. It is definitely not the perfect solution to tell a wider part of the network about the payment being done, but it is an incremental step towards the ultimate goal of making every payment indistinguishable from each other, which we are getting with Schnorr and the point timelocked contracts. Once we have the point timelocked contracts we truly have a system where we are sending back and forth payments that are not collatable by payment hash, as you correctly pointed out. Not even by amount, because all of the payments have roughly the same amounts. It is the combination of multiple of these partial payments that gives you the actual transferred amount. I think it's not a clear loss or a clear win for privacy that we're now telling a larger part of the network. But I do think that the pre-splitter and the adaptive splitting, when combined with PTLCs, will be an absolute win no matter how you look at it.

PTLCs

SL: I think that’s a very fair way to summarize. In terms of getting PTLCs, point timelocked contracts, the requirement for that would be the Schnorr Taproot soft fork? Or is there anything else that’s also required?

CD: Taproot and Schnorr are the only things required for PTLCs. I'm expecting the Lightning Network specification to be really quick at adopting it, pushing it out to the network and actually making use of that newfound freedom that we get with PTLCs and Schnorr.

Lightning onchain footprint

SL: I suppose the other component to think about and consider from a privacy perspective is the onchain footprint of Lightning. Maybe some listeners might not be familiar, but when you're doing Lightning you still have to do the open and close of a channel. You did some work at the recent Lightning conference showing an ability to identify which of these were probably Lightning channel opens. That is another thing where Taproot might help, particularly in the case of a collaborative close. Once we have Taproot, let's say you and I open a channel together and it's the happy path, the collaborative close, that channel close is indistinguishable from a normal Taproot key path spend?

CD: Exactly. Our opens will always look exactly like somebody paying to a single sig. The single sig under the covers happens to be a 2-of-2 multisig disguised as a single sig through the signature aggregation proposals that we have. The close transactions, if they are collaborative closes, will also look like single sig spends to the destinations that are owned by the endpoints. It might be worth pointing out that non-collaborative closes will leak some information about the usage of eltoo or Lightning penalty, simply because we enter this disputed phase where we reveal all of the internals of our agreement, namely how we intend to overwrite or penalize the misbehaving party. Then we can still read out some of the information from a channel. That's why I mentioned before that you might not want to increment state numbers one by one, for example. This is also the reason why in LN penalty we hide the commitment number in the locktime field but encrypt it. That information might still eventually end up on the blockchain where it could be analyzed. But we'd gossip about most of this information anyway because we need to have a local view of the network in order to route payments.

SL: It is a question of which path you really need to be private, I guess. One other part where I wanted to confirm my understanding is with the Taproot proposal: you'll have a special kind of Taproot output. The cool thing about the Schnorr signatures aspect is that people can do more cryptography and manipulation on that. That's this idea of tweaking. My understanding is you either have the key path spend, which is the indistinguishable spend. That's the collaborative close example. But then in the non-collaborative close, that would be a script path spend. As part of Taproot there's a Merkle tree. You have to expose which of the scripts you want to spend. You're showing the script you want to spend and the signatures in relation to it. Is that right?

CD: That's right. The Taproot idea comes out of this discussion of Merklized Abstract Syntax Trees and adds a couple of new features to it as well. A Merklized Abstract Syntax Tree is a mechanism of having multiple scripts that are added to a Merkle tree and summed up until we get to the root. The root would be what we put into our output script. When we spend that output we would say "That Merkle tree corresponds to this script and here is the input that matches this script, proving that I have permission to spend these coins." Taproot goes one step further and says "That Merkle tree root is not really useful. We could make that a public key and mix in the Merkle root through this tweaking mechanism." That allows us to say "Either we sign using the root key into which we tweak the Merklized Abstract Syntax Tree", that's the key path spend, or we can say "I cannot sign with this pubkey alone but I can show the script that corresponds to this commitment, and for that I do have all of the information I need to sign off." In the normal case for a channel close we use the root key to sign off on the close transaction. In the disputed case we say "Here's the script that we agreed upon before. Now let's run through it and resolve this dispute that we have by settling onchain and having the blockchain as a mediator for our dispute."
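
For the curious, the tweak being described is a tagged hash of the internal public key and the Merkle root; the output key is then the internal key plus that tweak times the generator point. A small sketch of just the hash part (the elliptic curve point addition is left out, and the zero-byte inputs are placeholders):

```python
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    # BIP 340 style tagged hash, as used for the taproot tweak.
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

def taproot_tweak(internal_pubkey_xonly: bytes, merkle_root: bytes) -> bytes:
    # Output key Q = P + int(t) * G. Spending with Q alone is the key path
    # (the cooperative close); revealing a leaf script plus its Merkle proof
    # and satisfying it is the script path (the disputed close).
    return tagged_hash("TapTweak", internal_pubkey_xonly + merkle_root)

print(taproot_tweak(bytes(32), bytes(32)).hex())
```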

Lightning Network attacks

SL: I also wanted to talk about some of the Lightning attacks that are coming out in articles. From my understanding, from chatting with yourself and some of the other Lightning protocol developers, it seems like there's a bunch of these that have been known for a while but some of them are now coming out as papers. An interesting recent one is called "Flood and Loot: A Systemic Attack on the Lightning Network". As I understand it, it requires establishing channels and then trying to send through a lot of HTLC payments. The attacker goes non-responsive and then forces the victim to try to go to chain. The problem is if they've done it with many people and many channels all at once they wouldn't be able to get confirmed. That's where the victim would lose some money. Could you help me there? Did I explain that correctly?

CD: That's correct. The idea is for an attacker to send a massive number of payments from one node he owns to a second node he also owns, routed through the victims. What you end up doing there is you add a lot of HTLCs to the channel of your victim and then you hold on to these payments on the recipient side of the channel. This is something that we've known about for quite some time: we know that holding onto HTLCs is kind of dangerous. The attacker will hold onto the HTLCs so long that the timeout approaches. An HTLC has two possible outcomes: either it is successful and the preimage is shown to the endpoint that added the HTLC, or we have a timeout and the funds revert back to the endpoint that added the HTLC. This works because normally there is no race between the success transaction and the timeout transaction. If there is no success for, let's say, 10 hours then we will trigger the timeout because we can be confident that the success will not come after the timeout. The Flood and Loot attack, by holding onto the HTLCs, forces us into exactly such a race between the timeout and the success transaction. The problem is that our close transaction, having all of these HTLCs attached, is so huge that it will not confirm for quite some time. So they can force the close to take so long that the timeout has expired. We are suddenly in a race between the success transaction and the timeout transaction. That's the attack: to bloat somebody else's channel such that the confirmation of the close transaction that follows takes so long that we can actually get into a situation where we are no longer sure whether the timeout or the success transaction will win in the end.
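
In timing terms the attack comes down to a single inequality: if the bloated close cannot confirm before the HTLC's absolute timeout, both the success path and the timeout path become spendable and the honest party may lose the race. The block heights below are invented purely for illustration:

```python
cltv_expiry = 700_050        # height after which the attacker's timeout path is spendable
broadcast_height = 700_040   # when the victim broadcasts the bloated force close
confirm_delay = 15           # assumed blocks until such a huge transaction confirms

if broadcast_height + confirm_delay >= cltv_expiry:
    print("timeout path already valid at confirmation: success vs timeout is now a race")
else:
    print("close confirmed in time: the HTLC resolves normally onchain")
```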

SL: I guess there’s a lot of moving parts here. You could say “Let’s modify the CSV window and make that longer” or “Let’s change the number of HTLCs and restrict that for each channel”. Could you talk to us about some of those different moving parts here?

CD: It is really hard to say that one number is better than another. But one way of reducing the impact of this attack is to limit the number of HTLCs that we add to our own transaction. That directly impacts the size of our commitment transaction and therefore our chances of getting confirmed in a reasonable amount of time, and it avoids this race condition between success and timeout. The reason why I'm saying that there is no clear solution is that reducing the number of HTLCs that we add to our channels reduces the utility of the network as a whole, because once we have 10 HTLCs added and we only allow 10 to be added at once, that means we cannot forward the 11th payment for example. If an attacker knows that we have this limit, they could effectively run a DoS attack against us by opening 10 HTLCs, exhausting our budget for HTLCs and therefore making our channel unusable until they release some of the HTLCs. That's an attack that we are aware of and that so far hasn't been picked up by academia, but I'm waiting for it. All of these parameters are a trade-off between various goals that we want to achieve. We don't currently have a clean solution that has only upsides. The same goes for CSVs. If we increase the CSV timeouts then this attack might be harder to pull off because we can spread confirmation of transactions out a bit further. On the downside, having large CSVs means that if we have a non-collaborative close for a channel then the funds will return only once the CSV timeout expires. The funds are sure to come back to us but might not be available for a couple of days before we can reuse them.

SL: It is an opportunity cost because you want to be able to use that money now or whatever. There are trade-offs and there's no perfect answer to them. Let's say somebody tries to jam your channels. How do HTLCs get released? What's the function there?

CD: Each HTLC has a timeout. The endpoint that has added the HTLC can use this timeout to recover funds that are in this HTLC after this timeout expires. Each HTLC that is added starts a new clock that counts down until we can recover our funds. If the success case happens before this timeout then we’re happy as well. If this timeout is about to expire and we need to resolve this HTLC onchain then we will have to force this channel onchain before this timeout expires, a couple of blocks before. Then force our counterparty to either reveal the preimage or grab back our funds through the timeout. We then end up with a channel closing slightly before the timeout and then an onchain settlement of that HTLC.

SL: So we could think of it like we set up our node, we set up the channels and over time HTLCs will route through. It is usually going to be a CSV or maybe a CLTV where over time those HTLCs will expire out because the timer has run out on them. Now you’ve got that capacity back again.

CD: In these cases they are CLTVs because we need absolute times for HTLCs. That's simply because we need to make sure the HTLC that we forwarded settles before the upstream HTLC, the one we received, settles. We need that time to extract the information from the downstream HTLC, turn around and forward it to the upstream HTLC in order to settle the upstream HTLC correctly. That's where the notion of the CLTV delta comes in. That is a parameter that each node sets for itself, saying "I am confident that if my downstream node settles within 10 blocks I have enough time to turn around and inform my upstream node about this downstream settlement so that my channel can stay active."
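
In practice those deltas simply accumulate along the route. A small illustration with made-up per-node policies, walking backwards from the destination:

```python
current_height = 700_000
final_cltv = 18                                      # safety margin the recipient asks for
route = [("Alice", 40), ("Bob", 14), ("Carol", 6)]   # (hop, its cltv_expiry_delta), sender side first

# Every hop needs its incoming HTLC to expire cltv_expiry_delta blocks later than the
# one it forwards, so it has time to learn the preimage downstream and settle upstream.
expiry = current_height + final_cltv
for hop, delta in reversed(route):
    expiry += delta
    print(f"HTLC offered to {hop} must expire at block {expiry} or later")
```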

SL: I also wanted to touch on the commitment transaction size. Part of this attack in the Flood and Loot example depends on having a very large commitment transaction. If there’s a lot of pending HTLCs why does that make the transaction bigger? Is it that there’s a lot more outputs there?

CD: That's exactly the case. The commitment transaction varies in size over time as we change our state. Initially, when a single party funds the channel, the entirety of the funds will revert back to that party. The commitment transaction will have one output that sends all of the funds back to the funding party. As soon as the counterparty has ownership of some funds in the channel we will add a second output, one going to endpoint A and one going to endpoint B. Those reflect the settled capacity that is owned by the respective party. Then we have a third place where we add new outputs, and that's exactly the HTLCs. Each HTLC doesn't belong to either A or B but is somewhere in the middle. If it succeeds it belongs to B and if it doesn't succeed it reverts back to A. Each of the HTLCs has its own place in the commitment transaction in the form of an output reflecting the value of the HTLC and the output script, the resolution script of the HTLC, which spells out "Before block height X I can be claimed by this and after block height X I can be reverted back to whoever added me". Having a lot of HTLCs attached to a channel means that the commitment transaction is really large. That's also why we have this seemingly random limit in the protocol of 483 maximum HTLCs attached to a single transaction, because at that point, with 483 HTLCs, we'd end up with a commitment transaction that is 100 kilobytes in size, I think.

SL: That’s pretty big. A standard transaction might be like 300 bytes?

CD: It’s a massive cost as well to get that confirmed. It definitely is a really evil attack because not only are you stealing from somebody but you’re also forcing them to pay a considerable amount of money to get their channel to settle.

SL: The other point there is that we count fees in terms of sats per byte, and the fee negotiation between the two nodes is done upfront. Let's say you and I negotiated that early on and then one of us goes offline because it's the Flood and Loot attack. You'd have this huge transaction but you wouldn't have enough fees to close it.

CD: Exactly. We would stop adding HTLCs before we no longer have any funds to settle it but it would still be costly if we ever end up with a large commitment transaction where something like 50 percent of our funds go to fees because it’s this huge thing.

Different Lightning usage models

SL: Maybe we can step back and talk about Lightning generally, the growth of the Lightning Network and some of the different models that are out there. In terms of how people use a Lightning node today there’s the Phoenix wallet ACINQ style where it is non-custodial but there’s certain trade offs there and it’s all going through the ACINQ node. Then you’ve got Wallet of Satoshi style. They’re kind of like a Bitcoin Lightning bank and the users are customers of that bank. Then you’ve got some people who are going full mobile node Neutrino style and then maybe the more self-sovereign style where people might run node packages like myNode, Nodl or Raspiblitz and have a way to remote in with their Blue wallet or Zap or Zeus or Spark wallet. Do you have any thoughts on what models you think will be more popular over time?

CD: I definitely can see the first and last model quite nicely, namely the sort of mobile wallet that has somebody on the operational side taking care of operating your node but where you are still in full control of your funds. That would be the Phoenix ACINQ model, where you care for your own node but the hard parts of maintaining connectivity and maintaining routing tables and so on are taken care of by a professional operator. That's also why, together with ACINQ, we came up with the trampoline routing mechanism and some other mechanisms to outsource routing to online nodes. Running a full Lightning node on a mobile phone, while way easier than a Bitcoin full node, is still going to use quite a considerable amount of resources in terms of battery and data to synchronize the view of the network and to find paths from you to your destination. You would also need to monitor the blockchain in a reliable way so that if something happens, one of your channels goes down, you are there to react. Having somebody take care of those parts, namely to preprocess the changes in the network view and to provide access to the wider network through themselves, is definitely something that I can see being really popular. On the other side, I can definitely see people that are more into operating a node themselves going towards a self sovereign node style at home, where they have a home base that their whole family might share, or they might administer it for a group of friends and each person would get a node that they can remote into and operate from there. There is the issue of synchronizing routing tables and so on to the actual devices that you're running around with, like a mobile phone or your desktop, but it doesn't really matter because you have this 24 hour node online that will take care of those details. The fully mobile nodes, I think they're interesting to see and they definitely show up a lot of interesting challenges, but it might be a bit too much for the average user to have to take care of all of the stuff themselves. To learn about what a channel is, to open a channel, to curate channels to make sure that they are well connected to the network. Those are all details that I would like to hide as much as possible from the end user because, while important for your performance and your ability to pay, they are also hard concepts that I for example would not want to try to explain to my parents.

SL: Of course. Obviously your focus is very deep technical protocol level but do you have any thoughts on what is needed in terms of making Lightning more accessible to that end user? Is it better ways to remote into your home node? Do you have any ideas around that or what you would like to see?

CD: I think at least from the protocol side of things we still have a lot we can do to make all of this more transparent to the user and enable non tech savvy people to take care of a node themselves. I don’t know what the big picture is at the end but I do know that we can certainly abstract away and hide some of the details in the protocol itself to make it more accessible and make it more usable to end users. As for the nice UI and user experience that we don’t have yet, I think that will crystallize itself out in the coming months. We will see some really good looking things from wallet developers. I’m not a very graphical person so I can’t tell you what that’s going to look like but I’m confident that there are people out there that have a really good idea on what this could look like. I’m looking forward to seeing it myself.

SL: There's a whole bunch of different models. For people who just want something easy to get started, something like Phoenix might be a good one for them. If you're more technical then obviously you can go and do the full setup with your own c-lightning and Spark, or set up lnd and Zap or whatever you like. It is building out better options to make it easy for people, even if we know not everyone is going to be capable of doing the full self-sovereign style as we would like.

CD: Absolutely. It is one of the pet peeves I have with the Bitcoin community: we have a tendency to jump right to the perfect solution and shame people that do not see this perfect solution right away. This shaming of newcomers into believing that there is this huge amount of literature they have to go through before even touching Bitcoin the first time can be a huge barrier to entry. I think what we need to have is a wide range of utilities so that as users grow in their own understanding of Bitcoin they can upgrade or downgrade accordingly, to reflect their understanding of the system itself. We shouldn't always mandate that the most secure solution is the only one to be used. I think that there are trade offs when it comes to user friendliness and privacy and security, and we have to accept that some people might not care so much about the perfect setup, they might be ok with a decent one.

Trampoline routing

Bastien Teinturier at Lightning Conference: https://diyhpl.us/wiki/transcripts/lightning-conference/2019/2019-10-20-bastien-teinturier-trampoline-routing/

Bastien Teinturier on the Lightning dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-August/002100.html

SL: That’s a very good comment there. I wanted to talk about trampoline routing. You mentioned this earlier as well. I know the ACINQ guys are keen on this idea though I know that there has also been some discussion on GitHub from some other Lightning developers who said “I see a privacy issue there because there might not be enough people who run trampoline routers and therefore there’s a privacy concern there. All those mobile users will be doxing their privacy to these trampoline routers.” Do you have any thoughts on that or where are you placed on that idea?

CD: Just to reiterate, trampoline routing is a mechanism for a mobile wallet or a resource constrained wallet to contact somebody in the network that offers this trampoline service and forward a payment to that trampoline node. When the trampoline node unpacks the routing onion it will see "I'm not the destination and I should forward it to somebody." But instead of telling me exactly whom I have to forward it to, it tells me the final destination of the payment. Let's say I'm a mobile phone and I have a very limited knowledge of my surroundings in the network, but I know that you, Stephan, are a trampoline node. When I want to pay Rusty, for example, I can look in my vicinity to see if I have a trampoline node. I can build a payment to you with instructions to forward it to Rusty, whom I don't know how to reach. Then I send this payment. When you unpack your onion you just receive it like usual. You don't know exactly who I am because I'm still onion routing to you. You unpack this onion and see "Somebody who has sent me this payment has left me 100 satoshis in extra fees. I'm supposed to send 1 dollar to Rusty. Now I have 100 satoshis as a budget to get this to Rusty." I have outsourced my route finding to you. What have you seen from this payment? You've obviously seen that Rusty is the destination and that he should receive 1 dollar worth of Bitcoin. But you still don't know me. We could go one step further and say "Instead of having this one trampoline hop we can also chain multiple of them." Instead of telling you to go to Rusty I would tell you to go to somebody else who also happens to be a trampoline, and then he can forward it to Rusty. We can expand on this concept and make it an onion routed payment inside of individual onion routed hops. What does the node learn about the payment he is forwarding? If we only do this one trampoline hop then you might guess that I'm somewhere in your vicinity, network wise, and you learn that Rusty is the destination. If we do multiple trampoline hops then you will learn that somebody has sent you a payment. Big surprise, that's what you always knew. You can no longer say that I, the original sender, am in your vicinity, because you might have gotten it from some other trampoline node. You also don't know whether the next trampoline you're supposed to send to is the destination or whether that's an intermediate trampoline as well. We can claw back some of the privacy primitives that we have in pure onion routing, that is source based routing, inside of trampoline routing. But it does alleviate the issue of the sender having to have a good picture of the network topology in order to send a payment in the first place. I think we can make a good case for this not being much worse but being much more reliable than what we had before. We also have a couple of improvements that come alongside trampoline routing. Let's go back to the initial example of me sending to you, you being the trampoline and then sending to Rusty. Once you get the instruction to send to the final destination, you can retry yourself instead of having to tell me "This didn't work. Please try something else." We can do in-network retries, which is really cool especially for mobile phones that might have a flaky or slow connection. We can outsource retrying multiple attempts to the network itself without having to be in the active path ourselves.
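
Conceptually the onion the mobile wallet builds looks something like the sketch below. The names and fields are invented for illustration and do not match the actual BOLT onion format:

```python
# Inner payload that only the trampoline node can decrypt: it names the destination
# (or the next trampoline) and a fee budget, but not the sender or a concrete route.
trampoline_payload = {
    "outgoing_node": "rusty",                # final destination, or the next trampoline
    "amount_to_deliver_msat": 100_000_000,   # ~1 dollar's worth in the example
    "fee_budget_msat": 100_000,              # ~100 sat the trampoline may spend on routing
}

# Outer onion: a normal source-routed payment from the wallet to the trampoline node.
outer_onion = {
    "route": ["hop_a", "hop_b", "stephan_the_trampoline"],
    "payload_for_last_hop": trampoline_payload,
}
print(outer_onion)
```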

SL: Fascinating. If I had to summarize some of your thinking there, it’s kind of like think through a little bit more clearly about exactly who are you doxing and what are you doxing to who? If you haven’t doxed any personal information about yourself to me then really what’s the privacy loss there? Maybe it would become the case that the hardcore Bitcoin Lightning people might run trampoline routing boxes in a similar way to some hardcore people run Electrum public servers to benefit people on the network.

CD: Absolutely. And it is not just out of the kindness of your heart that you'd run a trampoline node. One thing that I mentioned before is that you get a lot of fees for finding a route. The sender cannot estimate how much it's going to cost to reach the destination, so they are incentivized to overpay the trampoline node to find a route for them. The difference then goes to the trampoline node. Running a trampoline node can be really, really lucrative as well.

SL: Yeah, that’s fascinating. I didn’t think about that. That’s a good point. In some ways there’s more incentive to do it than running an Electrum public server because people don’t pay Electrum public servers right now. It is even better in that sense.

CD: Yeah and it’s not hard to implement. We can implement trampoline routing as a plugin right now.

Privacy attacks on Lightning

SL: Another thing I was interested to touch on is privacy attacks on Lightning. With channel probing the idea is that people construct a false onion that they know cannot go through and then try to figure out based on that. They sort of play Price is Right. Try 800 sats, try 8 dollars and then figure it out based on knowing roughly how much is available in that channel. People say that’s violating the privacy principles of Lightning but how bad is that really? What’s the actual severity of it? Just losing some small amount of privacy in a small way that doesn’t really stop the network growing? Do you have any reflections on that?

CD: I do, because probing was one of my babies. I really like probing the network, to be honest. I come from a background that is mostly measurements and probing of the Bitcoin network. I was really happy when I found a way to probe the Lightning Network and see how well it works and whether we can detect some failures inside of the network. You're right that probing boils down to attempting a payment that we know will never succeed because we gave it a payment hash that doesn't correspond to anything the recipient knows. What we can do is compute a route to whichever node I'm trying to probe, construct an onion and then send out an HTLC that cannot possibly be claimed by the recipient. Depending on the error message that comes back, whether the destination says "I don't know what you're talking about" or some intermediate node says "Insufficient capacity", we can determine how far we got with this probe and what kind of error happened at the point where it failed. We can learn something about the network and how it operates in the real world. That's invaluable information. For example we measured how probable a stuck payment is, something that has been dreaded for a long time. It turns out that stuck payments are really rare. They happen in 0.18 percent of cases for payments. It's also really useful to estimate the capacity that we have available for sending payments to a destination. That's something that we've done for the pre-split analysis for example. We said "Anything below 10,000 satoshis has a reasonable chance of success. Anything above might be tricky." Before even trying anything we split right at the start into smaller chunks. Those are all upsides of probes, but I definitely do see that there is a downside to probing, and that is that we leak some privacy. What privacy do we leak? It's the channel capacities. Why are channel capacities dangerous to be known publicly? It could enable you to trace a payment through multiple hops. Let's say for example we have channels A, B and C that are part of a route. Along these three channels we detect a change in capacity of 13 satoshis. Now 13 satoshis is quite a specific number. The probability of that all belonging to the same payment is quite high. But for us to collate this information into reconstructing payments, based solely on observing capacity changes, we also need to make sure that our observations are relatively close together, because if an intermediate payment comes through that might obscure the signal that allows us to collate the payment. That's where I think MPP payments can actually hugely increase privacy, simply by providing enough noise to make this collating of multiple observations really hard, because channel balances now change all the time. You cannot have a channel that is constant for hours and hours and then suddenly a payment goes through and you can measure it. Instead you have multiple payments going over a channel in different combinations, and the balance changes cannot be collated into an individual payment anymore. That is combined with efforts like Rene Pickhardt's just-in-time rebalancing, where you obscure your current balance by rebalancing on the fly while you are holding onto an HTLC. A channel can then pretend to be larger than it actually is simply because we rebalance it on the fly. I think probing can be really useful when it comes to measuring performance metrics for the Lightning Network. It could potentially be a privacy issue, but at the timeframes that we're talking about today it's really improbable that you could trace a payment through multiple channels.
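
A hedged sketch of the probing idea: pay towards a hash whose preimage nobody knows, so the probe can never be claimed, and classify where and why it failed. The `send_htlc` stub and the result handling are invented; the two failure codes shown do exist in the BOLT 4 error set, but a real probe has to handle more cases.

```python
import os, hashlib
from collections import namedtuple

Failure = namedtuple("Failure", ["erring_node", "code"])

def send_htlc(route, amount_msat, payment_hash):
    """Stub for the real HTLC machinery; always fails since nobody has the preimage."""
    return Failure("destination", "incorrect_or_unknown_payment_details")

def probe(route, amount_msat):
    fake_hash = hashlib.sha256(os.urandom(32)).digest()    # preimage unknown to everyone
    err = send_htlc(route, amount_msat, fake_hash)
    if err.code == "incorrect_or_unknown_payment_details":
        return "reached the destination: this route can carry the probed amount"
    if err.code == "temporary_channel_failure":
        return f"{err.erring_node} lacks capacity for {amount_msat} msat"
    return f"failed at {err.erring_node}: {err.code}"

print(probe(["a", "b", "destination"], 10_000_000))
```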

SL: Especially with MPP and once you add all these different layers it seems quite low risk I guess. Christian, I’ve really enjoyed chatting with you. We’ve almost gone two hours at this point. I’ve definitely learned a lot and I’m sure SLP listeners will appreciate being able to learn from you today. For any listeners who want to find you online, where can they find you?

CD: I'm @cdecker on GitHub and @snyke on Twitter.

SL: Fantastic. I’ve really enjoyed chatting with you. Thank you for joining me.

CD: Thank you so much. Pleasure as always and keep on doing the good work.

How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blocks (Stephan Livera Podcast with Luke Dashjr, March 17, 2021)

SL: Some people make this argument that you've got developers, you've got miners and you've got users. Each of them can propose things; developers can propose and write code and put it up there. Ultimately it's up to people to run that code. What if you let miners signal it on their own and activate on their own? I guess that is one of the arguments for LOT=false. What are your thoughts on that idea of letting them signal on their own, and if they do not signal, then changing at that point?

LD: There’s no point to it though. LOT=true gives them the whole year to signal on their own. If they do then there’s no difference whatsoever between the two. The only question is do they have the ability to refuse to collaborate? And if they refuse to collaborate does that essentially cancel the soft fork? There’s no reason to ever do false. If they collaborate great, then it works. If they don’t collaborate then it works as well.

The prospect of a bug in the Taproot implementation

SL: Another reason I’ve heard to have LOT=false is if let’s say in this activation period, there is a bug found and coordinating the stoppage of the soft fork is easier in a LOT=false scenario than if we were to go with LOT=true as default. What’s your thoughts on that?

LD: First of all, the possibility of finding a bug at this stage is very unlikely. There’s been years of review on this, or at least during development there’s been review. And then even after it was merged there has been more review. At this point it’s just sitting there waiting to be activated. There’s not really much more review to be done with it. The next time we’re going to see any possible bugs would be after it’s activated. At that point it’s, after all this is relevant, it’s activated at that point. The second point on LOT is that it doesn’t actually make it any easier to cancel it. Sure, you could just not activate it. But if the miners have the trigger only the miners can not activate it. So you as an economic user running a full node, you want to change to different software, you don’t want to allow the miners to activate it. Finally, even in the best case scenario you would still have to update again because you’re going to want to activate Taproot at the end of the day with that bug fix. There’s nothing to gain in that regard.

SL: This is more of a meta or long term argument, that if developers are unilaterally able to put out this code and everyone just adopts it, in the future maybe a large government or a large business could try to bring pressure to bear onto developers to co-opt or change the protocol or somehow sabotage the protocol. Do you see a potential risk on that side?

LD: No, not really. The developers, no matter what we release at the end of the day, if the users won’t run it then it doesn’t have any effect. Again if users just blindly run whatever developers release that is another failure scenario for Bitcoin. That’s something that’s a problem no matter what we do with the legitimate soft forks.

SL: Of course. I guess in practice though not everybody is a software developer and even for the people who are software developers they may not be familiar with the Bitcoin Core codebase. It is a sliding scale. There’ll be some who are loosely familiar and then others like yourself who are much more closely familiar with the codebase. To some extent there’s some level of trust placed in people who are a little bit closer to the detail. The argument then is in reality not everybody can review the code?

LD: Right. But we can provide honest explanations in the release notes of what this code does. No matter what we do with the legitimate soft forks it does not change what a malicious soft fork or a 51 percent attack rather can attempt to do. If developers are going to put out malicious code it doesn’t matter what we do with Taproot, the malicious code is going to be malicious no matter what.

SL: And so either way we are reliant on there being enough eyes on the code, enough people reviewing that if there were some malicious code inserted that somebody would raise a flag and let everybody know. Then there’d be a warning about it and people would kick up a stink about it basically.

LD: You would hope so. But regardless, what we do with legitimate soft forks has no influence on that scenario.

Bitcoin Core releasing LOT=false and UASF releasing LOT=true

Luke Dashjr on why LOT=false shouldn’t be used: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html

T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

F7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html

SL: The other argument I have heard is if Bitcoin Core were to release a client with LOT=false and another contingent of developers and users who want to go out and do similar to the UASF and release an alternate client with LOT=true. The average user can’t review all the Bitcoin code and they would now have to decide whether they want to run this alternate client that does include LOT=true. So what are your thoughts on that aspect?

LD: That’s no riskier than running the one with LOT=false. LOT=false for other reasons doesn’t come to a coherent view of consensus. It will not be very useful to people who are on the LOT=false client. For that reason, I think Core releasing LOT=false would actually be an abdication of duties towards users. Obviously Bitcoin Core, there’s this expectation that it’s going to follow what the users want and be safe to use. LOT=false is simply not safe to use.

Dealing with unlikely chain splits

SL: There’s also this discussion on the code in Bitcoin Core dealing with chain splits. People have points like “If there were to be a chain split at that time how would Bitcoin Core deal with the peer-to-peer ramifications of finding other peers who are on the same chain you are on and stuff like that?” Could you outline any thoughts on that aspect of it? I guess that’s also why in your view is LOT=true is the way to proceed with this.

LD: LOT=true minimizes any risk of chain splits. That only happens if the miners are malicious at that point. But if they are, they could be again, they could be malicious tomorrow and cause problems. In any case Bitcoin Core can definitely improve on its handling of such miner attacks but it’s not really related to Taproot or lockinontimeout or any of that. It’s a general issue that could be improved but there’s no reason to link it to lockinontimeout or any of this.

Miner signaling threshold

SL: in terms of the minor signaling ratio or percentage. 95 percent is the current threshold as I understand. What’s your thought on that level and what sort of scenarios could happen if the hash power were to be evenly split? If it wasn’t just one or two blocks before everyone figures out this is the correct chain and this is what we’re going with?

LD: SegWit had 95 percent but that one failed. The current consensus for Taproot seems to be around 90 percent. As long as you’re relying on miners to protect the nodes that haven’t upgraded yet you probably don’t want it to be much lower. I would say 85 percent at least but once you get to after the year is over and we’re activating, regardless of the miners at that point, hopefully the whole economy has upgraded and we aren’t relying on the miners anymore because that could be kind of messy. At that point the hash rate doesn’t matter quite so much as long as there’s enough blocks to use it.

SL: Let me take a step here just to summarize that for listeners who are maybe a little newer and trying to follow the discussion. The point I think you’re making there is that miners are in some sense helping enforce the rules for the older nodes. Because the older nodes aren’t validating the full set of rules, the way Bitcoin works old nodes still have backward compatibility. Old nodes could theoretically be put onto the wrong chain…

LD: Old nodes essentially become light wallets at that point. To get a better view of the timeline overall, before any signaling, before the miners even have the opportunity to activate, you really want the economic majority to have upgraded by that point. When the miners activate the enforcement of the rules also agrees and so you have the majority of the economy enforcing the rules no matter when the miners activate. For the next year the miners are enforcing the rules on the mining side. So if someone were to make an invalid block, the longest chain would still enforce the Taproot rules and by doing that they protect the nodes that have not upgraded yet, the light wallets and such. After that year is over, that’s why you would hope at that point that the entire economy plus or minus one or two small actors has upgraded and is enforcing the rules. Regardless of what the miners do at that point the rules are still being enforced. If you lose money because you haven’t updated your full node, that’s kind of on you at that point. You’ve had a whole year to get ready.

SL: Let’s say if somebody had not upgraded at that point, there wouldn’t be enough hash power actually pointed at that incorrect chain such that people would be kept from the correct chain. Even if they are an old node because of the rule about the most work?

LD: After the full year you’re no longer relying on that assumption. The miners, if they were to produce an invalid block, then everyone’s expected to use their own full node to reject that block no matter how much work that chain has.

Speedy Trial

Speedy Trial proposal: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Speedy Trial support: https://gist.github.com/michaelfolkson/92899f27f1ab30aa2ebee82314f8fe7f

SL: Now is a good point to bring up the Speedy Trial idea. For anyone who is not familiar with that could you outline what that is and also what your views are on that?

LD: Speedy Trial is essentially a new idea where signaling starts basically immediately and pretty quickly. If at any point during that 3 months the miners signal 90 percent or whatever the threshold is, it activates 3 months after that. It is a total of 6 months into the future. At that point Taproot is considered active. This gives us a 6 month window where the economic majority has an opportunity to upgrade. Because of the short window it doesn’t conflict with the so to speak “real plan” hopefully LOT=true. They don’t overlap with signaling and if Speedy Trial activates sooner, great, we don’t even have to go with the regular one. It’s just active in 6 months. If it doesn’t work that’s ok too. We just go forward as if it had never been tried.

SL: So I presume then you’re also in favor of Speedy Trial in that case and you’re encouraging other people to go with that approach?

LD: I think it’s still important that we move forward with the LOT=true. If Speedy Trial manages to preempt it that’s ok too. It’s not really an alternative as much as another way that the miners could activate before the deadline.

BIP 8

https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki

SL: While we’re here as well, it might be a good point to talk about BIP8. As I understand I think there are some parts about it that you would prefer to change if you were to be writing it today. Could you outline that?

LD: In light of lockinontimeout=false being unsafe it really doesn’t have a purpose. If it were solely up to me I would just remove the parameter and make it always true effectively. I don’t think it would be right for me to make that change unilaterally when there’s still disagreement about that. But if it were up to me I don’t think that there’s any purpose in having a parameter in there, it should just always be true at this point. There’s no point adding a bug back in once we fixed it.

SL: If you had to think about what is the most likely outcome at this point, what do you think that would be?

LD: Considering all the miner support for Taproot I’m guessing that Speedy Trial might succeed. But if it doesn’t that’s fine. If people have to switch to a so called Bitcoin Core with Taproot added that’s ok. It might actually be better than if Bitcoin Core were to release in the main line release because then it’s the users even more explicitly acting. I think it really should have been released by now with the timeline that had been proposed by the meetings a month ago but I’m not about to do it myself. If there isn’t enough community support to actually get it done then it’s flawed and it’s not going to succeed in the first place. I’m happy to help the community if they want to move forward with this. But I do think it should happen sooner rather than later, we don’t want to wait until after Speedy Trial and then realize “Oh we should have done this 3 months ago.”

The importance of network consensus

Suhas Daftuar on Taproot activation and consensus changes: https://medium.com/@sdaftuar/on-taproot-activation-and-consensus-changes-in-bitcoin-5b3453e91c4e

SL: Suhas Daftuar from Chaincode Labs wrote a blog. His overall guiding thrust was the important thing is to keep network consensus. I’m not sure if you have any views on that or if you’ve had a chance to read that blog post?

LD: I didn’t read the whole thing, I did skim through it. I agree that keeping the network consensus is probably very high priority, if not the highest. I think LOT=true does that.

SL: I guess those are probably the key points at least from my reading of the community discussion. People out there, if I missed any questions, let me know. Luke, did you have any other points around the Taproot activation conversation that you wanted to make?

LD: I did think it was important to point out that the miners aren’t going to be caught by surprise with the requirement for signaling. If they haven’t signaled for a whole year, they’ve had that whole year to prepare for the inevitable need to signal and to make valid blocks. If they have an outlier server somewhere and they’re setting the wrong version, they’ve had a whole year to work out that problem. There’s no risk that there’s going to be an accidental chain split with LOT=true. I’ve noticed there’s been a lot of fear being spread about accidental chain splits and all that but the only way a chain split would occur at that time with LOT=true would be if the miners are maliciously, intentionally creating invalid blocks, there’s no accidental risk.

SL: If I had to summarize your view it’s essentially we should be pursuing the LOT=true approach. As you’ve said that maximally reduces the risk of these splits. Given there’s been no serious sustained objections to Taproot that’s just the way to proceed.

LD: We can go ahead with Speedy Trial too, that doesn’t hurt. But I do think we should be doing both in parallel in case that doesn’t succeed.

Thoughts on the Bitcoin block size

Luke Dashjr presentation on why block sizes should not be too big: https://diyhpl.us/wiki/transcripts/magicalcryptoconference/2019/why-block-sizes-should-not-be-too-big/

SL: While we’ve got you here Luke, I thought it would be interesting to hear more about your views on the small block approach. This is one of those things where you have been campaigning for and agitating for this idea of smaller blocks in Bitcoin. Can you outline some of your thoughts on this and why is that an important thing to pursue?

LD: Earlier you mentioned users who just want to use a smartphone and that’s pretty much impractical these days. With a full node you have to download and process 350 gigabytes of blockchain and that’s just way too much for a smartphone. So that ship has pretty much sailed. What would reducing block size now get us? It would hopefully accelerate the return to the blockchain being manageable as smartphones get better. Right now the blockchain is still growing faster than the smartphone or any technology improves. So it’s actually getting harder and harder for phones or computers to keep up with it. So reducing the block size would get us to the point where the technology improvements, hopefully if they keep pace, will finally catch up and maybe someday smartphones will again be usable for a full node. The best we can hope for in the meantime is people run a full node at home and they remotely access it from their phone.

SL: In that case what about the idea of using pruned nodes on the smartphone and things like that? Is that a possibility in your mind? Or do you think that even that ship has already sailed?

LD: That ship’s already sailed. I was assuming that in the first place because even with the pruned node you still have to download and process all 350 gigabytes of data. It’s just what is requires to be a full node even if you prune it.

SL: There are also the battery and internet considerations as well because people are walking around with a smartphone. They might not want to take that kind of battery loss.

LD: Yeah and when the CPU is pegged it’s going to get hot. That also destroys the phone usually if it’s running too hot for too long.

SL: So I wonder then whether smartphone use might not have been feasible even if it stayed at 350 gigabytes just because of the battery and the CPU aspects.

LD: No because the technology would continue to improve while the blockchain grows slower than the improvement. It would remain viable if it had been reduced in a reasonably timely manner. It may have actually needed to have been before SegWit but there was a point in time where the reduction would have preserved that use case. I remember back in 2012 or 2013 I was actually running a full node on my phone with no problem at all.

SL: What about your thoughts on making it easier to remote into your home node? Right now a lot of people can do the whole Raspberry Pi thing with one of the different packaged nodes.

LD: That’s pretty much the approach I’ve been looking at lately and working toward. I’ve got the whole pairing thing that I have in Bitcoin Knots. There’s a QR code where you can point your phone’s wallet to the QR code and scan it and then it will automatically connect to your node on the computer that you’re showing the QR code on. That’s the area I’ve been trying to focus more on lately, trying to get it so that people can use Bitcoin as easily as they want to but still have a full node of their own for security.

SL: And your thoughts on the compact block filter approach?

LD: That’s just another light wallet. It’s no more secure than bloom filters. In fact the existence of that feature is harmful because there’s no longer a privacy incentive to run your own full node.

SL: In your view you would rather that not exist and you would rather people just all be on their full node at home kind of thing.

LD: Yeah. And it’s actually less efficient than the bloom filter protocol.

SL: That’s interesting. Why is it less efficient?

LD: Because now your light wallet has to download the block filters for every block. Whereas with bloom filters, you just tell your full node at home this is what addresses my wallet has and your full node at home just tells you “These are the blocks you need to worry about and nothing else.”

SL: There’s a privacy trade-off there but it is less computationally demanding?

LD: There is a privacy trade-off if you’re using somebody else’s full node. If you’re using your own full node then it doesn’t matter.

SL: That makes sense to me. Longer term as Bitcoin grows it eventually will hit a point where not every user will be able to hold their own UTXO right. Putting context and some numbers on this. We’re speaking in March 2021, the population of the world is about 7.8 billion. The current estimates for Bitcoin users around the world, it might be something like 100 to 200 million people but obviously not all those people are using it directly on the chain. Some of those are just custodial users, they’ve just got their Bitcoin on some exchange somewhere. Let’s say over the next 5-10 years we get a big increase in the number of people using Bitcoin. What happens when they can’t all fit on the chain?

LD: I’m not sure. Hopefully we’ll have a comparable increase of developers working on solving that.

SL: Even if you were to go with Lightning, I’m a fan of Lightning, it might be difficult because by the time you get each person opening or closing channels then it would just completely blow out the capacity in terms of block space.

LD: It can do what it can do. What it can’t do we can try to solve. Maybe we will, maybe we won’t come up with the solution. It is hard to tell at this point. There’s already plenty of work for developers without having to try to look at things like that.

SL: Of course. I guess the optimistic view would be something like we have some kind of multiparty channel thing going where multiple people can share one UTXO and then people sort of splice in and out of a channel factory or something like that. That allows people to preserve some more sovereignty in their use of Bitcoin rather than a lot of people having to be custodial users of Bitcoin.

LD: I haven’t given it much thought but it’s possible that Taproot might actually enable that since it has multiparty Schnorr signatures.

SL: Another approach I’ve heard is using covenants and things which is related to what Jeremy Rubin is doing with CTV. Do you have any thoughts on that kind of approach using covenants?

LD: I haven’t given it much thought. There’s just so many things going on here now.

SL: It is a big world out there and there’s so many different ideas going around. It’s obviously very difficult to keep up with all of that. For some people who maybe don’t want small blocks, they might be thinking “Let’s say we did lower the block size. It might make it even harder right now for people who want to open and close their Lightning channels. It might not be enough in terms of block space. We have to remember that Lightning does rely on us being able to react accordingly if somebody tries to cheat us or something goes wrong. We still have to be able to get our penalty close transaction or justice transaction in the Lightning Labs parlance and get that back into the chain in time. If the block size was lower at that point that’s another consideration.

LD: I haven’t looked at the specs but my understanding is that Lightning does not count the time as long as the blocks are being mined.

SL: It is mostly around relative timelocks, is that what you’re getting at?

LD: Not so much relative timelocks but that while the blocks are full it’s not counting towards your time limit on the penalty. You have more time if the blocks are full.

SL: I’m not familiar on this part of how Lightning interacts with Bitcoin. I probably can’t comment any further there.

LD: I could be wrong. My understanding of Lightning is mostly based on the theory rather than what has been implemented.

SL: Have you used any Lightning stuff yourself or have you mostly just been focused at the Bitcoin Core level?

LD: Mostly at the Bitcoin Core level. I’m not going to say I haven’t used it but pretty much just to the extent of losing a bunch of testnet coins.

SL: Have you tried any phone wallets on Lightning?

LD: No, I like to understand what is actually happening with my Bitcoin. I have a really high bar, I will not even let it touch my Bitcoins if I haven’t looked at the code and compiled it myself.

SL: What if someone just sent you 10 bucks on a small Lightning wallet or something like that?

LD: I’ve had some people offer to do that. I should probably figure out something for that. I also don’t want to use a custodial Lighting wallet. In my position I don’t want to set a bad example. If I’m going to do it, I’m going to do it right.

SL: You could use one of the non-custodial ones. There are some out there depending on how much trust or how much self sovereignty you want. There are different choices out there, things like Phoenix or Breez.

LD: I’m sure there are, I just haven’t seen much yet.

SL: With the whole small blocks thing is your hope that there might be other people out there in the community who agree and help agitate for the idea or are you sort of resigned to the block size as it is now and the block weight limit as it is now?

LD: There’s only so much I can do. If the community tomorrow decides that we’re ready to reduce the block size, sure we can do it. But until that happens, it’s just a matter of I think the block size should be lower and right now there’s not enough support. So there’s really nothing more to do until other people agree with me.

SL: It may be that large holders of Bitcoin are the ones who are much more able to fully be self-sovereign with their own full node, holding their own keys and things. Maybe to some extent, the people with a smaller stack of Bitcoins using more custodial services and things. They are more reliant on the protection and the rewards of the wholecoiners or the large coiners.

LD: Well also you have to consider that even if you have a lot of Bitcoins, if you’re not very economically active with those Bitcoins, you may find yourself at a loss if you’re running a full node and nobody else is. If you’re cut out of the economy because other people have accepted an invalid chain, your Bitcoins aren’t going to be worth quite as much anymore.

SL: I see what you’re saying. Even if you’re a whale sitting on over a thousand coins or whatever. The scenario would be that if a lot of other people out there get tricked…

LD: Then the economy moves on to another chain that doesn’t necessarily recognize your rules. No matter how many Bitcoins you have you’re still at the mercy of the economic majority. That is essentially what puts the price on the Bitcoin and the value. Even if you’re not valuing it in dollars the value all comes from the economy.

SL: So your purchasing power could be impacted. I guess it also is about if you are running a business then you’re regularly doing transactions. In that sense you are helping enforce your vision of what the rules of Bitcoin should be out there in the network. You’re helping influence that in some way as long as you’re running a business. Even if you are regularly accumulating and you’re regularly receiving you are helping enforce in that sense.

LD: Yes to a limited extent. Obviously it would be very bad if there was a handful of people that made up the whole economic majority.

Closing thoughts

SL: It has been a very interesting discussion, Luke. I wonder if you’ve got any closing thoughts that you want to leave for the listeners?

LD: I guess if you are interested in working on Bitcoin Core with Taproot or getting Taproot activated, join the IRC channels. If you’re not comfortable with your skill in doing it, I can help teach.

SL: Luke, for any listeners who want to find you online, or they’d like to get in contact with you, or maybe they want to follow your work, where’s the best place for them to find you?

LD: If they just want to follow my work or ask me questions in general, probably best way these days is probably Mastodon or Twitter. My handle is @LukeDashjr, and then on Mastodon that’s bitcoinhackers.org. If you have a reason to reach out privately, there’s some privacy sensitive information, you can always email me directly. It’s just luke at dashjr.org for my email.

SL: Excellent. Well, Luke, I enjoyed chatting with you and thank you for joining me.

LD: Thank you for inviting me.

Media: https://stephanlivera.com/episode/260/

Luke Dashjr arguments against LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html

T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html

Transcript by: Stephan Livera Edited by: Michael Folkson

Intro

Stephan Livera (SL): Luke, welcome to the show.

Luke Dashjr (LD): Thanks.

SL: So, Luke for listeners who are unfamiliar, maybe you could take a minute and tell us a little bit about your background and how long you’ve been developing and contributing with Bitcoin Core.

LD: I first learned about Bitcoin back at the end of 2010, it was a new year’s party and I’ve been contributing since about a week later. So I recently got past the decade mark.

Bitcoin Knots

GitHub org: https://github.com/bitcoinknots

Website: https://bitcoinknots.org/

SL: So I know you have done a lot of different things in the Bitcoin world, and I know you have different projects running. One of them is Knots. Can you tell us a little bit about that?

LD: Bitcoin Knots is pretty much a product of the same development process that creates Bitcoin Core but it doesn’t have as onerous review requirements for things to be merged. And it also preserves some features that have been removed from the mainline Core such as the transaction priority, coin age and all that.

History of SegWit activation

SL: Okay, great. I know you have been developing in Bitcoin Core for a while and there are certain things that you’ve helped progress. I believe it was your suggestion that enabled SegWit to be done as a soft fork as well. So maybe it’d be good to also dive into some of the different views on how SegWit got activated, because I think looking back now there are different visions of how exactly SegWit was activated on the Bitcoin network in 2017. So perhaps if you want to set the scene in terms of how you viewed the lead up to 2017? As I understand there were multiple attempts to raise the block size at that time, there was XT and Unlimited and a few others. Can you spell out for us your view on what that looked like leading up to 2017?

LD: There were a lot of people who wanted to increase the block size mainly as a PR stunt to say “Bitcoin can handle so many more transactions”. There were a lot of reasons not to do that. At the end of the day, when we got SegWit made up as a soft fork, a block size increase was included in that as sort of a compromise with the big block faction, pretty much doubling the effective block size that Bitcoin blocks can be. For whatever reason, they didn’t like that, they wanted to have more. I guess they wanted to take control of the protocol rules away from the community. And so they pushed forward with trying to do a hard fork despite that.

SL: The view at that time is that people who were more in the small block camp or even in the “Let’s just keep things the way they are and not change it too much” some of them viewed that as an attack, correct?

LD: Yeah. But SegWit was pretty widely accepted as a compromise between the big blockers and the small blockers because SegWit not only increased the block size, it also enabled Lightning to be a lot more effective and secure and that hopefully will eventually help reduce the block sizes significantly.

SL: Yeah, I see. By fixing some of the different bugs, for example transaction malleability, and setting up the possibility for Lightning and so on, it was viewed that that would be the way forward. In the community it seems that there are somewhat different views on how exactly SegWit was activated. Could you perhaps tell us the story from your perspective, how do you believe SegWit was activated at that time in 2017? I know there were various attempts and different BIPs and proposals made.

LD: Initially we had configured SegWit activation to be done with the BIP 9 version bits, which was an upgrade over the previous model that simply used the version number as an integer for each new feature and we’d increment it. Then eventually all the old version numbers would become invalid and all the blocks had to be upgraded. So version bits, the idea was we can just temporarily assign each of the bits for the activation. Then once activation is over, we stop using it. That way we can have up to 20 soft forks activating in parallel, which at this point seems like why would we ever have more than one? But at the time things were speeding up and it looked like that was going to become a possible issue that we’d want to have multiple in progress at a time. So SegWit used this and because of the whole big block controversy, there may have been other motives in the middle there, but it turns out the miners decided they were going to, instead of coordinating that upgrade, actually just refuse to coordinate it and effectively stop the upgrade in its tracks. It was never intended to be a way for miners to make decisions about the protocol. The community had already decided SegWit was going to happen. Otherwise you don’t deploy an activation at all. But in any case the fact that it was relying on the miners to coordinate meant the miners were able to effectively prevent it. And so BIP 9 at that point had pretty much failed. An anonymous developer named Shaolin Fry proposed that we fix this by making them signal, it’s no longer optional. The miners have to no later than a certain date signal or their blocks wouldn’t be valid. This meant closing the loophole that miners could refuse to coordinate. They still had the coordination involved but if they didn’t coordinate then it would still activate anyway. That was BIP 148. At one point it was found that due to some bug in BIP 9 it would have to be moved forward and ended up being moved to August 1st. It was very rushed and somewhat risky because the timeframe was only 3-5 months depending on when you first heard about it and very controversial obviously, but pulled it off at the end of the day without any issues at all. Despite everything it had going against it it still was a complete success.
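
As a rough illustration of what mandatory signaling means in practice, here is a simplified Python sketch of a BIP 148 style validity check. It is not the actual Bitcoin Core code; the dates and bit number are the ones used for SegWit.

```python
# Simplified sketch of the BIP 148 rule (not the actual Bitcoin Core
# code): during the enforcement window, upgraded nodes reject any block
# that does not signal the SegWit version bit.

SEGWIT_BIT = 1                       # version bit assigned to SegWit (BIP 141)
VERSIONBITS_TOP = 0x20000000         # BIP 9 "version bits" top bits

def signals(version: int, bit: int) -> bool:
    """True if the block version uses version bits and sets the given bit."""
    return (version & 0xE0000000) == VERSIONBITS_TOP and (version >> bit) & 1 == 1

def bip148_valid(block_version: int, median_time_past: int) -> bool:
    start = 1501545600               # 1 August 2017 00:00 UTC
    end = 1510704000                 # 15 November 2017 00:00 UTC
    if start <= median_time_past < end:
        return signals(block_version, SEGWIT_BIT)
    return True                      # outside the window, no extra rule
```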

SL: Could you also outline what BIP 91 is? And any impact that had?

LD: BIP 91 to be frank was essentially a 51 percent attack. The miners collaborated against the network. The only reason it was acceptable at all was because it was in effect the miners complying with BIP 148. BIP 91 was effectively a way that the miners could say “Yeah we activated SegWit” even though they didn’t really have a choice at that point; BIP 148 was just a day or two later.

SL: Going back to my recent episode with Matt Corallo, we were talking about this idea of so-called playing chicken with the network. I guess that’s one of the concerns that people had, that there could potentially be a split caused in the network. This is why the idea is you want the miners to come along to help enforce that rule. This comes onto the topic of forced signaling. I think that’s essentially what BIP 148 was helping achieve because all of the nodes that are running this version are essentially saying “If you do not signal for SegWit we will not recognize your blocks as valid”, right?

LD: That’s a framing that revolves around BIP 148 and the events of that time. It doesn’t really apply to the current situation where all the miners are friendly and hopefully going to just activate it anyway.

SL: Of course. So we’ll get to the Taproot stuff but I just wanted to get your view on the SegWit stuff.

LD: I don’t know that I would portray it as a game of chicken though. The users pretty much said “SegWit is going to be activated.” That’s how it is. That’s not necessarily going to cause a network split. The network only splits if miners violate these new rules that the users have decided to enforce.

SL: I’m kind of more in the camp of UASF myself but I guess the counterargument would be something like “Even if all you UASF nodes go off and create your own Bitcoin network you might not have the same level of hash power and therefore not the same level of security. Or maybe even at the start you might not get that many blocks coming through because the difficulty would not have adjusted back down.” How would you think about that? Or would you disagree with that?

LD: I completely disagree with the premise that UASF splits off at all. It’s just one more rule that the blocks have to follow to be enforced. Miners can violate that rule but they could violate that rule tomorrow. If they wanted to they can violate another rule. If the miners decide to violate rules that’s just the miners splitting the network. That has nothing to do with the actual UASF.

SL: Moving on with the events in 2017, could you spell out your thoughts on how you viewed the SegWit2x aspect? That was later in the year post SegWit being activated for listeners who are unfamiliar.

LD: That was right before BIP 91 which was a few days before BIP 148 activated. They saw that essentially SegWit was going to happen, the UASF was going to work. So they decided they were going to tack on their hard fork as an after effect or try to anyway. Obviously Bitcoin doesn’t work like that. You can’t just force users to do something so it was a complete failure.

SL: In your view was it important that, for example, Bitfinex had a B1X token and a B2X token to represent a futures market on how people valued Bitcoin versus the SegWit2x coin, which never actually eventuated? At that time the futures traded something like 9 to 1 in favor of Bitcoin over the SegWit2x coin as to which one was the true Bitcoin. Was that significant, did that matter at all? Or did that just not really matter?

LD: At the end of the day it would have had the same outcome. I’m not going to say it didn’t matter at all. Obviously it helped get us there quicker. It made it clear before the fact that it wasn’t going anywhere, which I guess caused them to give up early. But at the end of the day the users are the final rule on what the protocol is. So it’s not like it would have succeeded without a futures market involved.

Economic majority concept

SL: Of course. I think another important topic to bring up here is the concept of economic majority. It is one thing for people to spin up a node and not actually transact or receive Bitcoin. It is the act of receiving Bitcoin and saying “Yes I recognize this as valid Bitcoins” or “No these are not valid Bitcoins” that in some sense helps influence the rules of the network. And so I guess the argument from the people who believe that it was not UASF that did it might be “Ok there might just have been a few people on Twitter but they were not actually the economic majority. The economic majority would have been actors like Coinbase and other big exchanges.” What would you say to that line of thinking?

LD: It is kind of historical revisionism. Everyone knows that the exchanges have to do what their customers want. They’re not really the economic actors, the economic actors are the people who are offering services or products for the Bitcoin. When you receive the alleged Bitcoin, if it’s not valid you have to be able to say “No I’m not going to give you a product” or “I’m not going to give you a service.” Otherwise you’re not really enforcing anything.

SL: Perhaps the counterargument, and again I’m more on your side, but just for the sake of talking it out and thinking about what it would look like: what if a lot of users are naive or do not understand this aspect of it? Maybe they’re not as engaged in the conversation around what Bitcoin is and thinking about the technical ramifications of what’s going on. They are just an everyday user and they just see on their wallet or front end, whether that’s Coinbase or some other front end, “Oh I’ve received SegWit2x coin and I think that’s Bitcoin.” What about that?

LD: That’s a scenario where Bitcoin has failed. Bitcoin only works because of decentralized enforcement. If it is centralized enforcement you may as well go back to PayPal because that’s all you have. Except it’s more expensive than PayPal because it’s inefficient. It is trying to simulate decentralization without actually having decentralization at that point.

SL: Absolutely, it’s important that people take that on, actually treat Bitcoin seriously and try to learn about it and be more actively engaged in how they use Bitcoin. But there are a lot of people out there, maybe they are only on a mobile phone or maybe they are not very technically savvy. What’s to be said for those users or potentially, and I know you might not agree with this, for the people who are lightweight client users?

LD: Hopefully they’re an economic minority at all times. Bitcoin just doesn’t work if the majority isn’t enforcing the rules, there’s nobody else that’s going to do it for them.

SL: In your view then is it not feasible? Let’s say there would be enough users who might call out a service and say “Hey, they’re not actually valid. They’re not giving me real Bitcoins.” And then maybe everyone stops using that service. People go to some other service that is using “true” Bitcoin.

LD: I would hope that would occur. I would imagine if any exchange tried to pass off fake Bitcoins then they would probably get a class action lawsuit.

SL: Of course. That’s what we would hope to see. I guess the worst case would be nobody can natively interact with Bitcoin or very few can natively interact with Bitcoin. Then we end up in this scenario where people are essentially all having to trust somebody else. Obviously that’s very antithetical to the idea and the very notion of Bitcoin. Is there a question around some nodes being more important than others because some actors may hold more coins? They might be more interested in it. They might be the ones in some sense defending the ruleset of the network in a loose sense. Do you understand what I’m saying there?

LD: That’s back to the economic majority and economic activity being used. Using your node to validate is essentially the weight of your node on the network enforcement. If your node doesn’t validate any transactions that people are using in real economic activity, then to be frank that node is not doing any enforcement at all. If you’re selling products every day for Bitcoins then you’ve got a lot of push compared to someone who’s only selling something maybe once in a while.

SL: Yeah. So more active users and people who are using it to receive Bitcoins, they’re the ones who are in some sense enforcing the rules.

LD: And of course people who have the Bitcoins and are willing to spend it get to choose who is receiving the most.

SL: You’re right there. Are there any other points around SegWit and 2017 that you wanted to touch on? It’s ok if not, I just wanted to make sure you had your chance to say your view.

LD: I’m trying to think. I’m not sure that there was too much else.

Taproot activation

SL: So let’s move on to Taproot now. We’ve got this new soft fork that most people want. There’s been no serious sustained objections to it. Can you spell out your thoughts on how Taproot activation has gone so far?

LD: We had three, maybe four meetings a month or two ago. Turnout wasn’t that great, only a hundred people or so showed up for them. At the end of the day we came to consensus on pretty much everything except for the one lockinontimeout (LOT) parameter. Since then a bunch of people have started throwing out completely new ideas. It is great to discuss them but I think they should be saved for the next soft fork. We’ve already got near consensus on Taproot activation, might as well just go forward with that. There’s not consensus on lockinontimeout but there’s enough community support to enforce it. I think we should just move forward with that how it is and we can do something different next time if there’s a better idea that comes around. Right now that is the least risky option on the table.

SL: With lockinontimeout there’s been a lot of discussion back and forth about true or false. And other ideas proposed such as just straight flag day activation or this other idea of Speedy Trial. Could you outline some of the differences between those different approaches?

LD: Lockinontimeout=true is essentially what we ended up having to do with SegWit. It gives a full year to the miners so they can collaborate cooperatively and protect the network while it’s being activated early. If the miners don’t do that for whatever reason, at the end it activates. If we were to set lockinontimeout=false we essentially undo that bug fix and give miners control again. It would be like reintroducing the inflation bug that was fixed not so long ago. It doesn’t really make sense to do that. At the end of the day it is a lot less secure. You don’t really want to be running it as an economic actor so you would logically want to run lockinontimeout=true. Therefore a lot of economic actors are likely to run it true. In most of the polls I’ve seen most of the community seems to want true. As far as a flag day, that’s essentially the same thing as lockinontimeout=true except that it doesn’t have the ability for miners to activate it early. So we’d have to wait the whole 18 months for it to activate and it doesn’t have any signaling. At the end of the day we don’t really know if it activated or if the miners are just not mining stuff that violates Taproot, which is the difference between centralized and decentralized verification. It is still the economic majority that will matter for the enforcement, but you want to be able to say “This chain has Taproot activated.” You don’t want it to be an opinion. I say Taproot is activated, you say it isn’t. Who’s to say which one of us is right? Without a signal on the chain we’re both in a limbo where we’re both looking at the same chain and there’s no clear objective answer to that question: is Taproot activated?
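
The difference between the two settings boils down to a single branch in the BIP 8 state machine. The sketch below is a heavily simplified paraphrase of BIP 8 (transitions are really evaluated once per 2016-block retarget period, with more careful height checks), not a faithful implementation.

```python
# Heavily simplified view of BIP 8 state transitions. The only place
# lockinontimeout matters is what happens when the timeout height is
# reached without the signaling threshold having been met.

def next_state(state, period_start_height, timeout_height,
               threshold_met, lockinontimeout):
    if state == "STARTED":
        if threshold_met:
            return "LOCKED_IN"                    # miners activated early
        if period_start_height >= timeout_height:
            # lockinontimeout=true: one final period of mandatory signaling,
            # which guarantees lock-in. lockinontimeout=false: give up.
            return "MUST_SIGNAL" if lockinontimeout else "FAILED"
        return "STARTED"
    if state == "MUST_SIGNAL":
        return "LOCKED_IN"
    if state == "LOCKED_IN":
        return "ACTIVE"
    return state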

SL: I know this may be a bit more contentious but I think there are other developers and other people out there who have made an argument that they think setting lockinontimeout=true is “putting a gun to the head of the miners” and forcing them to signal in a certain way. What would you say in response to that?

LD: Are we putting a gun to the miners and forcing them to not mine transactions stealing Satoshi’s coins or whatever? There’s rules and the miners have to follow the rules. They’ve got a whole year to figure out whatever they need to do to enforce the rules themselves. It’s not any different than any other rule. Would you add the inflation bug back because we don’t want to force the miners not to inflate?

SL: Of course not.

LD: Kind of a nonsensical argument.

Devs propose, miners activate, users override

Rusty Russell blog post: https://rusty.ozlabs.org/?p=628

SL: Some people make this argument that you’ve got developers, you’ve got miners and you’ve got users. Each of them can propose things, developers can propose and write code and put it up there. Ultimately it’s up to people to run that code. What if you let miners signal it on their own and activate on their own? I guess that is one of the arguments for LOT=false. What are your thoughts on that idea of letting them signal on their own and then changing? If they do not signal then change at that point?

LD: There’s no point to it though. LOT=true gives them the whole year to signal on their own. If they do then there’s no difference whatsoever between the two. The only question is do they have the ability to refuse to collaborate? And if they refuse to collaborate does that essentially cancel the soft fork? There’s no reason to ever do false. If they collaborate great, then it works. If they don’t collaborate then it works as well.

The prospect of a bug in the Taproot implementation

SL: Another reason I’ve heard to have LOT=false is if let’s say in this activation period, there is a bug found and coordinating the stoppage of the soft fork is easier in a LOT=false scenario than if we were to go with LOT=true as default. What’s your thoughts on that?

LD: First of all, the possibility of finding a bug at this stage is very unlikely. There’s been years of review on this, or at least during development there’s been review. And then even after it was merged there has been more review. At this point it’s just sitting there waiting to be activated. There’s not really much more review to be done with it. The next time we’re going to see any possible bugs would be after it’s activated, and by the time any of this is relevant it’s activated anyway. The second point on LOT is that it doesn’t actually make it any easier to cancel it. Sure, you could just not activate it. But if the miners have the trigger, only the miners can choose not to activate it. So you as an economic user running a full node would have to change to different software anyway if you don’t want to allow the miners to activate it. Finally, even in the best case scenario you would still have to update again because you’re going to want to activate Taproot at the end of the day with that bug fix. There’s nothing to gain in that regard.

SL: This is more of a meta or long term argument, that if developers are unilaterally able to put out this code and everyone just adopts it, in the future maybe a large government or a large business could try to bring pressure to bear onto developers to co-opt or change the protocol or somehow sabotage the protocol. Do you see a potential risk on that side?

LD: No, not really. The developers, no matter what we release at the end of the day, if the users won’t run it then it doesn’t have any effect. Again if users just blindly run whatever developers release that is another failure scenario for Bitcoin. That’s something that’s a problem no matter what we do with the legitimate soft forks.

SL: Of course. I guess in practice though not everybody is a software developer and even for the people who are software developers they may not be familiar with the Bitcoin Core codebase. It is a sliding scale. There’ll be some who are loosely familiar and then others like yourself who are much more closely familiar with the codebase. To some extent there’s some level of trust placed in people who are a little bit closer to the detail. The argument then is in reality not everybody can review the code?

LD: Right. But we can provide honest explanations in the release notes of what this code does. No matter what we do with the legitimate soft forks it does not change what a malicious soft fork or a 51 percent attack rather can attempt to do. If developers are going to put out malicious code it doesn’t matter what we do with Taproot, the malicious code is going to be malicious no matter what.

SL: And so either way we are reliant on there being enough eyes on the code, enough people reviewing that if there were some malicious code inserted that somebody would raise a flag and let everybody know. Then there’d be a warning about it and people would kick up a stink about it basically.

LD: You would hope so. But regardless, what we do with legitimate soft forks has no influence on that scenario.

Bitcoin Core releasing LOT=false and UASF releasing LOT=true

Luke Dashjr on why LOT=false shouldn’t be used: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html

T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html

SL: The other argument I have heard is: what if Bitcoin Core were to release a client with LOT=false and another contingent of developers and users, wanting to do something similar to the UASF, released an alternate client with LOT=true? The average user can’t review all the Bitcoin code and they would now have to decide whether they want to run this alternate client that does include LOT=true. So what are your thoughts on that aspect?

LD: That’s no riskier than running the one with LOT=false. LOT=false for other reasons doesn’t come to a coherent view of consensus. It will not be very useful to people who are on the LOT=false client. For that reason, I think Core releasing LOT=false would actually be an abdication of duties towards users. Obviously Bitcoin Core, there’s this expectation that it’s going to follow what the users want and be safe to use. LOT=false is simply not safe to use.

Dealing with unlikely chain splits

SL: There’s also this discussion on the code in Bitcoin Core dealing with chain splits. People have points like “If there were to be a chain split at that time how would Bitcoin Core deal with the peer-to-peer ramifications of finding other peers who are on the same chain you are on and stuff like that?” Could you outline any thoughts on that aspect of it? I guess that’s also why, in your view, LOT=true is the way to proceed with this.

LD: LOT=true minimizes any risk of chain splits. That only happens if the miners are malicious at that point. But if they are, then again, they could be malicious tomorrow and cause problems. In any case Bitcoin Core can definitely improve on its handling of such miner attacks but it’s not really related to Taproot or lockinontimeout or any of that. It’s a general issue that could be improved but there’s no reason to link it to lockinontimeout or any of this.

Miner signaling threshold

SL: In terms of the miner signaling ratio or percentage, 95 percent is the current threshold as I understand it. What’s your thought on that level and what sort of scenarios could happen if the hash power were to be evenly split? If it wasn’t just one or two blocks before everyone figures out this is the correct chain and this is what we’re going with?

LD: SegWit had 95 percent but that one failed. The current consensus for Taproot seems to be around 90 percent. As long as you’re relying on miners to protect the nodes that haven’t upgraded yet you probably don’t want it to be much lower. I would say 85 percent at least but once you get to after the year is over and we’re activating, regardless of the miners at that point, hopefully the whole economy has upgraded and we aren’t relying on the miners anymore because that could be kind of messy. At that point the hash rate doesn’t matter quite so much as long as there’s enough blocks to use it.
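
For a sense of what those percentages mean per retarget period, a quick back-of-the-envelope calculation (not part of any BIP text): the 95 and 90 percent rows should correspond to the 1916- and 1815-block thresholds used for SegWit’s BIP 9 deployment and for Taproot respectively, and the 85 percent row is just Luke’s suggested floor expressed in blocks.

```python
PERIOD = 2016                          # blocks per difficulty retarget period

for label, pct in [("SegWit (BIP 9)", 95), ("Taproot", 90), ("85% floor", 85)]:
    needed = -(-PERIOD * pct // 100)   # ceiling division
    print(f"{label}: {pct}% of a period = at least {needed} signaling blocks")
# SegWit (BIP 9): 95% of a period = at least 1916 signaling blocks
# Taproot: 90% of a period = at least 1815 signaling blocks
# 85% floor: 85% of a period = at least 1714 signaling blocks
```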

SL: Let me take a step here just to summarize that for listeners who are maybe a little newer and trying to follow the discussion. The point I think you’re making there is that miners are in some sense helping enforce the rules for the older nodes. Because the older nodes aren’t validating the full set of rules, the way Bitcoin works old nodes still have backward compatibility. Old nodes could theoretically be put onto the wrong chain…

LD: Old nodes essentially become light wallets at that point. To get a better view of the timeline overall: before any signaling, before the miners even have the opportunity to activate, you really want the economic majority to have upgraded by that point. Whenever the miners activate, the economy’s enforcement of the rules is already in place, and so you have the majority of the economy enforcing the rules no matter when the miners activate. For the next year the miners are enforcing the rules on the mining side. So if someone were to make an invalid block, the longest chain would still enforce the Taproot rules and by doing that they protect the nodes that have not upgraded yet, the light wallets and such. After that year is over you would hope that the entire economy, plus or minus one or two small actors, has upgraded and is enforcing the rules. Regardless of what the miners do at that point the rules are still being enforced. If you lose money because you haven’t updated your full node, that’s kind of on you at that point. You’ve had a whole year to get ready.

SL: Let’s say if somebody had not upgraded at that point, there wouldn’t be enough hash power actually pointed at that incorrect chain such that people would be kept from the correct chain. Even if they are an old node because of the rule about the most work?

LD: After the full year you’re no longer relying on that assumption. The miners, if they were to produce an invalid block, then everyone’s expected to use their own full node to reject that block no matter how much work that chain has.

Speedy Trial

Speedy Trial proposal: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html

Speedy Trial support: https://gist.github.com/michaelfolkson/92899f27f1ab30aa2ebee82314f8fe7f

SL: Now is a good point to bring up the Speedy Trial idea. For anyone who is not familiar with that could you outline what that is and also what your views are on that?

LD: Speedy Trial is essentially a new idea where signaling starts basically immediately and pretty quickly. If at any point during that 3 months the miners signal 90 percent or whatever the threshold is, it activates 3 months after that. It is a total of 6 months into the future. At that point Taproot is considered active. This gives us a 6 month window where the economic majority has an opportunity to upgrade. Because of the short window it doesn’t conflict with the so to speak “real plan” hopefully LOT=true. They don’t overlap with signaling and if Speedy Trial activates sooner, great, we don’t even have to go with the regular one. It’s just active in 6 months. If it doesn’t work that’s ok too. We just go forward as if it had never been tried.
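
A rough sketch of that timeline, using approximate calendar months for readability; the deployed parameters were actually expressed in block heights, with activation deferred to a minimum activation height rather than a fixed delay after lock-in, and the dates below are purely illustrative.

```python
from datetime import date, timedelta
from typing import Optional

MONTH = timedelta(days=30)   # rough; the real deployment used block heights

def speedy_trial(start: date, locked_in: Optional[date]) -> str:
    """Approximate Speedy Trial timeline as described above."""
    signaling_ends = start + 3 * MONTH
    if locked_in is None or locked_in > signaling_ends:
        return "no lock-in during the window: proceed as if it had never been tried"
    # Activation is deferred roughly another three months after lock-in
    # so the economy has time to upgrade.
    return f"locked in {locked_in}, Taproot treated as active from {locked_in + 3 * MONTH}"

# Illustrative dates only.
print(speedy_trial(date(2021, 5, 1), date(2021, 6, 1)))
print(speedy_trial(date(2021, 5, 1), None))
```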

SL: So I presume then you’re also in favor of Speedy Trial in that case and you’re encouraging other people to go with that approach?

LD: I think it’s still important that we move forward with the LOT=true. If Speedy Trial manages to preempt it that’s ok too. It’s not really an alternative as much as another way that the miners could activate before the deadline.

BIP 8

https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki

SL: While we’re here as well, it might be a good point to talk about BIP8. As I understand I think there are some parts about it that you would prefer to change if you were to be writing it today. Could you outline that?

LD: In light of lockinontimeout=false being unsafe it really doesn’t have a purpose. If it were solely up to me I would just remove the parameter and make it always true effectively. I don’t think it would be right for me to make that change unilaterally when there’s still disagreement about that. But if it were up to me I don’t think that there’s any purpose in having a parameter in there, it should just always be true at this point. There’s no point adding a bug back in once we fixed it.

SL: If you had to think about what is the most likely outcome at this point, what do you think that would be?

LD: Considering all the miner support for Taproot I’m guessing that Speedy Trial might succeed. But if it doesn’t that’s fine. If people have to switch to a so called Bitcoin Core with Taproot added that’s ok. It might actually be better than if Bitcoin Core were to release it in the main line release because then it’s the users even more explicitly acting. I think it really should have been released by now with the timeline that had been proposed by the meetings a month ago but I’m not about to do it myself. If there isn’t enough community support to actually get it done then it’s flawed and it’s not going to succeed in the first place. I’m happy to help the community if they want to move forward with this. But I do think it should happen sooner rather than later, we don’t want to wait until after Speedy Trial and then realize “Oh we should have done this 3 months ago.”

The importance of network consensus

Suhas Daftuar on Taproot activation and consensus changes: https://medium.com/@sdaftuar/on-taproot-activation-and-consensus-changes-in-bitcoin-5b3453e91c4e

SL: Suhas Daftuar from Chaincode Labs wrote a blog. His overall guiding thrust was the important thing is to keep network consensus. I’m not sure if you have any views on that or if you’ve had a chance to read that blog post?

LD: I didn’t read the whole thing, I did skim through it. I agree that keeping the network consensus is probably very high priority, if not the highest. I think LOT=true does that.

SL: I guess those are probably the key points at least from my reading of the community discussion. People out there, if I missed any questions, let me know. Luke, did you have any other points around the Taproot activation conversation that you wanted to make?

LD: I did think it was important to point out that the miners aren’t going to be caught by surprise with the requirement for signaling. If they haven’t signaled for a whole year, they’ve had that whole year to prepare for the inevitable need to signal and to make valid blocks. If they have an outlier server somewhere and they’re setting the wrong version, they’ve had a whole year to work out that problem. There’s no risk that there’s going to be an accidental chain split with LOT=true. I’ve noticed there’s been a lot of fear being spread about accidental chain splits and all that but the only way a chain split would occur at that time with LOT=true would be if the miners are maliciously, intentionally creating invalid blocks, there’s no accidental risk.

SL: If I had to summarize your view it’s essentially we should be pursuing the LOT=true approach. As you’ve said that maximally reduces the risk of these splits. Given there’s been no serious sustained objections to Taproot that’s just the way to proceed.

LD: We can go ahead with Speedy Trial too, that doesn’t hurt. But I do think we should be doing both in parallel in case that doesn’t succeed.

Thoughts on the Bitcoin block size

Luke Dashjr presentation on why block sizes should not be too big: https://diyhpl.us/wiki/transcripts/magicalcryptoconference/2019/why-block-sizes-should-not-be-too-big/

SL: While we’ve got you here Luke, I thought it would be interesting to hear more about your views on the small block approach. This is one of those things where you have been campaigning for and agitating for this idea of smaller blocks in Bitcoin. Can you outline some of your thoughts on this and why is that an important thing to pursue?

LD: Earlier you mentioned users who just want to use a smartphone and that’s pretty much impractical these days. With a full node you have to download and process 350 gigabytes of blockchain and that’s just way too much for a smartphone. So that ship has pretty much sailed. What would reducing block size now get us? It would hopefully accelerate the return to the blockchain being manageable as smartphones get better. Right now the blockchain is still growing faster than the smartphone or any technology improves. So it’s actually getting harder and harder for phones or computers to keep up with it. So reducing the block size would get us to the point where the technology improvements, hopefully if they keep pace, will finally catch up and maybe someday smartphones will again be usable for a full node. The best we can hope for in the meantime is people run a full node at home and they remotely access it from their phone.

SL: In that case what about the idea of using pruned nodes on the smartphone and things like that? Is that a possibility in your mind? Or do you think that even that ship has already sailed?

LD: That ship’s already sailed. I was assuming that in the first place because even with the pruned node you still have to download and process all 350 gigabytes of data. It’s just what is required to be a full node even if you prune it.

SL: There are also the battery and internet considerations as well because people are walking around with a smartphone. They might not want to take that kind of battery loss.

LD: Yeah and when the CPU is pegged it’s going to get hot. That also destroys the phone usually if it’s running too hot for too long.

SL: So I wonder then whether smartphone use might not have been feasible even if it stayed at 350 gigabytes just because of the battery and the CPU aspects.

LD: No because the technology would continue to improve while the blockchain grows slower than the improvement. It would remain viable if it had been reduced in a reasonably timely manner. It may have actually needed to have been before SegWit but there was a point in time where the reduction would have preserved that use case. I remember back in 2012 or 2013 I was actually running a full node on my phone with no problem at all.

SL: What about your thoughts on making it easier to remote into your home node? Right now a lot of people can do the whole Raspberry Pi thing with one of the different packaged nodes.

LD: That’s pretty much the approach I’ve been looking at lately and working toward. I’ve got the whole pairing thing that I have in Bitcoin Knots. There’s a QR code where you can point your phone’s wallet to the QR code and scan it and then it will automatically connect to your node on the computer that you’re showing the QR code on. That’s the area I’ve been trying to focus more on lately, trying to get it so that people can use Bitcoin as easily as they want to but still have a full node of their own for security.

SL: And your thoughts on the compact block filter approach?

LD: That’s just another light wallet. It’s no more secure than bloom filters. In fact the existence of that feature is harmful because there’s no longer a privacy incentive to run your own full node.

SL: In your view you would rather that not exist and you would rather people just all be on their full node at home kind of thing.

LD: Yeah. And it’s actually less efficient than the bloom filter protocol.

SL: That’s interesting. Why is it less efficient?

LD: Because now your light wallet has to download the block filters for every block. Whereas with bloom filters, you just tell your full node at home this is what addresses my wallet has and your full node at home just tells you “These are the blocks you need to worry about and nothing else.”

SL: There’s a privacy trade-off there but it is less computationally demanding?

LD: There is a privacy trade-off if you’re using somebody else’s full node. If you’re using your own full node then it doesn’t matter.
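
As a rough illustration of the trade-off being described here, the toy Python sketch below shows who does the matching in each model. It is an assumption-laden stand-in: plain sets play the role of BIP 37 bloom filters and BIP 158 Golomb-coded sets, and no real wire protocol is involved.

```python
# Toy comparison of the two light client models discussed above.
blocks = {
    1: {"scriptA", "scriptB"},
    2: {"scriptC"},
    3: {"scriptA", "scriptD"},
}
wallet_scripts = {"scriptA"}

def bip37_style(wallet_scripts):
    """BIP 37: the client hands its scripts to the full node (a privacy leak) and
    the node replies with the blocks that are relevant."""
    return sorted(h for h, scripts in blocks.items() if scripts & wallet_scripts)

def bip157_style(wallet_scripts):
    """BIP 157/158: the client downloads one filter per block and matches locally,
    so the node never learns which scripts the wallet is watching."""
    relevant = []
    for height, scripts in sorted(blocks.items()):
        per_block_filter = set(scripts)    # stands in for the compact block filter
        if per_block_filter & wallet_scripts:
            relevant.append(height)        # only now fetch the full block
    return relevant

assert bip37_style(wallet_scripts) == bip157_style(wallet_scripts) == [1, 3]
print("blocks the wallet needs:", bip157_style(wallet_scripts))
```

The bandwidth point falls out of this: with compact filters the client fetches a filter for every block whether or not it matches, while with bloom filters the server does that work, at the cost of learning the wallet’s scripts.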

SL: That makes sense to me. Longer term as Bitcoin grows it eventually will hit a point where not every user will be able to hold their own UTXO right. Putting context and some numbers on this. We’re speaking in March 2021, the population of the world is about 7.8 billion. The current estimates for Bitcoin users around the world, it might be something like 100 to 200 million people but obviously not all those people are using it directly on the chain. Some of those are just custodial users, they’ve just got their Bitcoin on some exchange somewhere. Let’s say over the next 5-10 years we get a big increase in the number of people using Bitcoin. What happens when they can’t all fit on the chain?

LD: I’m not sure. Hopefully we’ll have a comparable increase of developers working on solving that.

SL: Even if you were to go with Lightning, I’m a fan of Lightning, it might be difficult because by the time you get each person opening or closing channels then it would just completely blow out the capacity in terms of block space.

LD: It can do what it can do. What it can’t do we can try to solve. Maybe we will, maybe we won’t come up with the solution. It is hard to tell at this point. There’s already plenty of work for developers without having to try to look at things like that.

SL: Of course. I guess the optimistic view would be something like we have some kind of multiparty channel thing going where multiple people can share one UTXO and then people sort of splice in and out of a channel factory or something like that. That allows people to preserve some more sovereignty in their use of Bitcoin rather than a lot of people having to be custodial users of Bitcoin.

LD: I haven’t given it much thought but it’s possible that Taproot might actually enable that since it has multiparty Schnorr signatures.

SL: Another approach I’ve heard is using covenants and things which is related to what Jeremy Rubin is doing with CTV. Do you have any thoughts on that kind of approach using covenants?

LD: I haven’t given it much thought. There’s just so many things going on here now.

SL: It is a big world out there and there’s so many different ideas going around. It’s obviously very difficult to keep up with all of that. For some people who maybe don’t want small blocks, they might be thinking “Let’s say we did lower the block size. It might make it even harder right now for people who want to open and close their Lightning channels. It might not be enough in terms of block space. We have to remember that Lightning does rely on us being able to react accordingly if somebody tries to cheat us or something goes wrong. We still have to be able to get our penalty close transaction or justice transaction in the Lightning Labs parlance and get that back into the chain in time. If the block size was lower at that point that’s another consideration.

LD: I haven’t looked at the specs but my understanding is that Lightning does not count the time as long as the blocks are being mined.

SL: It is mostly around relative timelocks, is that what you’re getting at?

LD: Not so much relative timelocks but that while the blocks are full it’s not counting towards your time limit on the penalty. You have more time if the blocks are full.

SL: I’m not familiar on this part of how Lightning interacts with Bitcoin. I probably can’t comment any further there.

LD: I could be wrong. My understanding of Lightning is mostly based on the theory rather than what has been implemented.

SL: Have you used any Lightning stuff yourself or have you mostly just been focused at the Bitcoin Core level?

LD: Mostly at the Bitcoin Core level. I’m not going to say I haven’t used it but pretty much just to the extent of losing a bunch of testnet coins.

SL: Have you tried any phone wallets on Lightning?

LD: No, I like to understand what is actually happening with my Bitcoin. I have a really high bar, I will not even let it touch my Bitcoins if I haven’t looked at the code and compiled it myself.

SL: What if someone just sent you 10 bucks on a small Lightning wallet or something like that?

LD: I’ve had some people offer to do that. I should probably figure out something for that. I also don’t want to use a custodial Lightning wallet. In my position I don’t want to set a bad example. If I’m going to do it, I’m going to do it right.

SL: You could use one of the non-custodial ones. There are some out there depending on how much trust or how much self sovereignty you want. There are different choices out there, things like Phoenix or Breez.

LD: I’m sure there are, I just haven’t seen much yet.

SL: With the whole small blocks thing is your hope that there might be other people out there in the community who agree and help agitate for the idea or are you sort of resigned to the block size as it is now and the block weight limit as it is now?

LD: There’s only so much I can do. If the community tomorrow decides that we’re ready to reduce the block size, sure we can do it. But until that happens, it’s just a matter of I think the block size should be lower and right now there’s not enough support. So there’s really nothing more to do until other people agree with me.

SL: It may be that large holders of Bitcoin are the ones who are much more able to fully be self-sovereign with their own full node, holding their own keys and things. Maybe to some extent, the people with a smaller stack of Bitcoins using more custodial services and things. They are more reliant on the protection and the rewards of the wholecoiners or the large coiners.

LD: Well also you have to consider that even if you have a lot of Bitcoins, if you’re not very economically active with those Bitcoins, you may find yourself at a loss if you’re running a full node and nobody else is. If you’re cut out of the economy because other people have accepted an invalid chain, your Bitcoins aren’t going to be worth quite as much anymore.

SL: I see what you’re saying. Even if you’re a whale sitting on over a thousand coins or whatever. The scenario would be that if a lot of other people out there get tricked…

LD: Then the economy moves on to another chain that doesn’t necessarily recognize your rules. No matter how many Bitcoins you have you’re still at the mercy of the economic majority. That is essentially what puts the price on the Bitcoin and the value. Even if you’re not valuing it in dollars the value all comes from the economy.

SL: So your purchasing power could be impacted. I guess it also is about if you are running a business then you’re regularly doing transactions. In that sense you are helping enforce your vision of what the rules of Bitcoin should be out there in the network. You’re helping influence that in some way as long as you’re running a business. Even if you are regularly accumulating and you’re regularly receiving you are helping enforce in that sense.

LD: Yes to a limited extent. Obviously it would be very bad if there was a handful of people that made up the whole economic majority.

Closing thoughts

SL: It has been a very interesting discussion, Luke. I wonder if you’ve got any closing thoughts that you want to leave for the listeners?

LD: I guess if you are interested in working on Bitcoin Core with Taproot or getting Taproot activated, join the IRC channels. If you’re not comfortable with your skill in doing it, I can help teach.

SL: Luke, for any listeners who want to find you online, or they’d like to get in contact with you, or maybe they want to follow your work, where’s the best place for them to find you?

LD: If they just want to follow my work or ask me questions in general, the best way these days is probably Mastodon or Twitter. My handle is @LukeDashjr, and then on Mastodon that’s bitcoinhackers.org. If you have a reason to reach out privately, there’s some privacy sensitive information, you can always email me directly. It’s just luke at dashjr.org for my email.

SL: Excellent. Well, Luke, I enjoyed chatting with you and thank you for joining me.

LD: Thank you for inviting me.

\ No newline at end of file diff --git a/stephan-livera-podcast/index.xml b/stephan-livera-podcast/index.xml index 056389fa5e..3806de5d5b 100644 --- a/stephan-livera-podcast/index.xml +++ b/stephan-livera-podcast/index.xml @@ -216,9 +216,9 @@ Craig welcome to the show. Craig Raw: Hi there, Stephan! It’s great to be here. Stephan Livera: -So Craig I’ve been seeing what you’re doing with Sparrow wallet and I thought it’s time to get this guy on the show. So can you, I mean, obviously I know you’re under a pseudonym, right? So don’t dox anything about yourself that you don’t want that you’re not comfortable to, but can you tell us a little bit about how you got into Bitcoin and why you’re interested in Bitcoin?
How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html -T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -F7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +So Craig I’ve been seeing what you’re doing with Sparrow wallet and I thought it’s time to get this guy on the show. So can you, I mean, obviously I know you’re under a pseudonym, right? So don’t dox anything about yourself that you don’t want that you’re not comfortable to, but can you tell us a little bit about how you got into Bitcoin and why you’re interested in Bitcoin?How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html +T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Transcript by: Stephan Livera Edited by: Michael Folkson Intro Stephan Livera (SL): Luke, welcome to the show. Luke Dashjr (LD): Thanks. diff --git a/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/index.html b/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/index.html index b3c9859869..89a32b5391 100644 --- a/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/index.html +++ b/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/index.html @@ -11,4 +11,4 @@ < Sydney Bitcoin Meetup < Socratic Seminar

Socratic Seminar

Date: May 19, 2020

Transcript By: Michael Folkson

Tags: Fee management, Dual funding

Category: -Meetup

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Google Doc of the resources discussed: https://docs.google.com/document/d/1hCTlQdt_dK6HerKNt0kl6X8HAeRh16VSAaEtrqcBvlw/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

0xB10C blog post on mempool observations

https://b10c.me/mempool-observations/2-bitmex-broadcast-13-utc/

These 3-of-4 multisig inputs don’t have compressed public keys so they are pretty large in current transaction sizes. They could potentially be way smaller. I thought about what the effects of the daily broadcast are.

You are saying that they don’t use compressed public keys. They use full 64 byte public keys?

65 bytes yes. With uncompressed public keys, an average input is bigger than 500 bytes. I looked at the distribution over time. I found out that they process withdrawals at 13:00 UTC. I looked at the effects this broadcast had. If you imagine really big transactions being broadcast at the same time at different fee levels. They take up a large part of the next blocks that are found. I clearly saw that the median fee rate levels rose quite significantly. They had a real spiky increase. Even the observed fee rates entering the mempool, there was the same increase which is a likely reaction to estimating a higher fee so users pay a higher fee. There are the two charts. The first one is the median estimated fee rate, what the estimators give me. The second one is the fee rate relative to midnight UTC. There is this clear huge increase visible at the same time BitMEX does the broadcast. The fee rate is elevated for the next few hours.
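
To put rough numbers on the “bigger than 500 bytes” claim, here is a back-of-envelope sketch of one legacy 3-of-4 P2SH multisig input. The byte counts (72 byte signatures, standard opcode and varint overhead) are approximations rather than exact figures from the blog post.

```python
def p2sh_3of4_input_size(pubkey_size):
    """Rough serialized size in bytes of one legacy 3-of-4 P2SH multisig input
    (signatures assumed to be ~72 bytes; all counts are approximations)."""
    sig_push = 1 + 72                                      # push opcode + DER signature
    redeem_script = 1 + 4 * (1 + pubkey_size) + 1 + 1      # OP_3 <4 keys> OP_4 OP_CHECKMULTISIG
    redeem_push = (2 if redeem_script <= 255 else 3) + redeem_script  # OP_PUSHDATA1/2 + data
    script_sig = 1 + 3 * sig_push + redeem_push            # OP_0 (CHECKMULTISIG quirk) + 3 sigs + redeem script
    return 36 + 3 + script_sig + 4                         # outpoint + length varint + scriptSig + sequence

print("uncompressed keys:", p2sh_3of4_input_size(65), "bytes")  # ~533 bytes, over the 500 mark
print("compressed keys:  ", p2sh_3of4_input_size(33), "bytes")  # ~404 bytes
```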

The effect you are pointing to there is there is this cascading effect where everyone else starts estimating the fee higher than they theoretically should. Would that be accurate in terms of how to explain that?

The estimators think “We want to outbid the transactions that are currently in the mempool.” If there are really big transactions in the mempool…. For example these BitMEX transactions pay a much higher fee. I estimated, based on an assumption of a 4 satoshi per byte increase over 8 hours, that around 1.7 Bitcoin in total fees are paid additionally due to this broadcast every day. This is about 7 percent of the total transaction fees daily.
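
The roughly 1.7 BTC per day figure can be sanity checked with a back-of-envelope calculation. The 4 sat/vbyte bump and the 8 hour duration come from the quote above; the assumption of roughly full blocks is mine, so the result only needs to land in the same ballpark.

```python
fee_bump_sat_per_vbyte = 4
hours_elevated = 8
blocks_per_hour = 6
block_vsize = 1_000_000        # assume roughly full blocks, in vbytes

extra_fees_sat = fee_bump_sat_per_vbyte * hours_elevated * blocks_per_hour * block_vsize
print(extra_fees_sat / 100_000_000, "BTC of extra fees per day")  # ~1.9 BTC, same ballpark as the ~1.7 BTC estimate
```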

Happy days for the miners. Potentially even more now during a high fee time?

I haven’t looked at what the current effect is. I also noticed that the minimum fee rate that a block includes spikes as well. Transactions with very low fee rates won’t get confirmed for a few hours. I talk about the improvements that can be made. One thing I mention is these uncompressed public keys started to be deprecated as early as 2012. By 2017 more than 95 percent of all keys added to the blockchain per day were compressed. The 5 percent remaining are mainly from BitMEX. For example transaction batching could be helpful as well. In the end the really big part is if you pay to a P2SH multisig address which you can only spend from if you supply a 3-of-4 redeem script then you always have these big inputs that you need to spend. There might be other ways to decrease the size there. I am sure it is not something doable for BitMEX because they have really high security standards. One thing that could be done in theory would be to say “We let users deposit on a non multisig address and then transfer them into a colder multisig wallet and have very few UTXOs lying in these big inputs.” Obviously using SegWit would help as well, reducing the size as the script is moved into the witness. BitMEX mentioned they are working on using P2SH wrapped SegWit and they estimate a 65 percent reduction there. Potentially in the future BitMEX could explore utilizing Schnorr and Taproot. The Musig scheme, I don’t know if that would work for them or not. Combine that with batching and you would have really big space savings. It might even increase BitMEX’s privacy. They use vanity addresses so everything incoming and outgoing is quite visible for everyone who wants to observe what is going there. There is definitely a lot to be done. They are stepping in the right direction with SegWit but it might be possible for them to do more. I spoke to Jonny from BitMEX at the Berlin Socratic and they have hired someone specifically for this reason to improve their onchain footprint over the next year or so. That is good news. Any questions?
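
The roughly 65 percent reduction mentioned for P2SH-wrapped SegWit can also be approximated: the signatures and script move into the witness, which counts only a quarter towards virtual size. A sketch under the same byte-count assumptions as the earlier input-size example:

```python
def p2sh_p2wsh_3of4_input_vsize():
    """Rough vsize of a 3-of-4 multisig input spent as P2SH-wrapped P2WSH
    (compressed keys, ~72 byte signatures; approximations only)."""
    base = 36 + 1 + 35 + 4                       # outpoint + varint + scriptSig (push of witness program) + sequence
    witness_script = 1 + 4 * 34 + 1 + 1          # OP_3 <4 compressed keys> OP_4 OP_CHECKMULTISIG
    witness = 1 + 1 + 3 * (1 + 72) + 1 + witness_script  # item count + empty item + 3 sigs + script
    weight = base * 4 + witness                  # witness bytes count only once towards weight
    return -(-weight // 4)                       # ceil(weight / 4) = virtual size

legacy_uncompressed = 533                        # from the previous sketch
vsize = p2sh_p2wsh_3of4_input_vsize()
print(vsize, "vbytes, roughly", round(100 * (1 - vsize / legacy_uncompressed)), "percent smaller")  # ~167 vbytes, ~69%
```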

With the way the wallets react with this giant transaction in the mempool, they increase their fee rate. It slowly drops after that. Do you think the wallets are improving correctly in this situation? The algorithm they use to choose the fees is adapting to this giant transaction efficiently or is it inefficient?

I am not sure. Typically if you estimate fee rates you say “I want to be included in the next block or in the next 3 blocks. Or 10 blocks maybe.” If that is a constraint of your use case or your wallet then maybe that is the right reaction. If you want to pay as low fees as possible it is maybe not the correct reaction.

It is a really great blog post. It strikes a nice balance between really going into the detail but actually explaining things in a way that is easy for the layman to understand.

Fabian Jahr blog post on “Where are the coins?”

https://medium.com/okcoin-blog/btc-developer-asks-where-are-the-coins-8ea70b1734f4

I remember running a node for the first time and doing what it said in the tutorial to run gettxoutsetinfo. It gave me a really weird number for the total amount. It was not clear and there was no detailed information on exactly what went where. I found this post from Pieter Wuille pretty late. He explained it similarly to how I explain it and he also gives particular numbers. He doesn’t talk about gettxoutsetinfo. He talks about the total amount, how many coins will there ever be in total. It is a different view and this is the reason why I didn’t find it in the beginning. With my post I tried to add even more information so I go over the numbers of the very recent state of the UTXO set. I used the work that I did with the CoinStatsIndex, an open pull request. I wanted to link to the different parts of the codebase that are implementing this or why things work the way they do. It is also on my website. OkCoin reposted it and this got a little bit more traction. A lot of these links go to the codebase. My article is aimed at someone who is running a node. I hope with this information I can take them to get more into the code and explore why things work the way they do and form that mindset of going deeper from just running a node the same way that I did over the last couple of years.
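
For anyone who wants to reproduce the comparison, the “expected” supply at a given height is just the sum of the halving schedule. A minimal sketch follows; the number reported by `bitcoin-cli gettxoutsetinfo` (its total_amount field) comes in slightly lower because of the genesis output, the overwritten duplicate coinbases, rewards miners never claimed and provably unspendable outputs.

```python
def expected_supply_sats(height, initial_subsidy=50 * 100_000_000, halving_interval=210_000):
    """Total block subsidy issued up to and including `height`, in satoshis,
    assuming every miner claimed the full subsidy."""
    total, h, subsidy = 0, 0, initial_subsidy
    while h <= height and subsidy > 0:
        blocks = min(halving_interval, height - h + 1)
        total += blocks * subsidy
        h += blocks
        subsidy //= 2
    return total

# Compare against the "total_amount" field of `bitcoin-cli gettxoutsetinfo`
print(expected_supply_sats(630_000) / 1e8, "BTC expected by height 630,000")  # 18,375,006.25 BTC
```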

At what point did you come across the Pieter Wuille post?

It was almost done. I had produced some bugs in my code. I was running the unclaimed miner reward code to get the exact number and make sure it really adds up 100 percent. The rest of the text was mostly written. It was great that I found it because I took this blog that I copied over from his stuff. I am not sure I would have found this information otherwise. I could have found in which blocks what went missing but Pieter at the time was maybe in contact with people to identify bugs from the miners. That was great and definitely helped me finish up much more quickly.

I think it is a really cool post.

These are things Fabian that you stumbled across whilst contributing to Core right? This isn’t you using the Core wallet and thinking “I don’t know how this works. I need to research this”?

Initially when I saw the first time that was the case. I was confused by it and I wanted to know why there is this weird number but I didn’t have the tools to analyze it. I accepted the incomplete information that I found on Bitcointalk and went on. At some point I realized what I am doing for CoinStats, that is the same stuff that I need to know to analyze what this exact number is. It is a different view on the same statistic. I realized I can actually write an article because I already have all the knowledge and I am working with the code. I can just adapt it a little bit in order to get these exact numbers. That is what motivated me to actually write a blog post. It is not something that I do very often.

I recall seeing some discussion on Twitter where Matt Corallo was talking with one OG about the time he tried to intentionally claim one less satoshi in his block reward as a miner in the early days.

With BIP 30 you have this messy solution where overriding blocks are hardcoded into the codebase. Is it possible at this point to actually fork back to an earlier block in 2010? Or do these rules and other rules essentially add a checkpoint at some point after that?

That’s what BIP 34 does. I don’t really remember how exactly that works. There are measures here but I am not exactly sure how it would play out if we would go back that far. There are a lot of comments above the code.

It says we will enforce BIP 30 every time unless the height is one of these particular heights and the hash is these two particular blocks. If you do a re-org as far back as you like you won’t be able to do anything apart from those two blocks as violations of it.

This rule applies to every block apart from these two blocks.
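
A minimal sketch of that exception logic is below. The two heights are the well-known duplicate coinbase blocks, while the hash strings are deliberately left as placeholders rather than the exact values hardcoded in Bitcoin Core.

```python
# Heights 91842 and 91880 contain coinbases that duplicate earlier ones (from
# heights 91812 and 91722); they are the only blocks exempted from the BIP 30 check.
BIP30_EXCEPTIONS = {
    91842: "<hash of the block at height 91842>",   # placeholder, not the real hash
    91880: "<hash of the block at height 91880>",   # placeholder, not the real hash
}

def enforce_bip30(height, block_hash):
    """Return True if the duplicate-txid rule (BIP 30) must be enforced for this block."""
    return BIP30_EXCEPTIONS.get(height) != block_hash

# Every block enforces BIP 30 unless it is exactly one of the two exceptions.
assert enforce_bip30(91841, "<any hash>")
assert not enforce_bip30(91842, "<hash of the block at height 91842>")
```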

PTLCs (lightning-dev mailing list)

https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002647.html

This is the mailing list post from Nadav Kohen summarizing a lot of the different ideas about PTLCs.

Nadav is keen on working on PTLCs now despite not having Schnorr. There is this discussion about whether it is worth it in the case of discreet log contracts where you have one version that doesn’t use adaptor signatures and you have the original one that uses some kind of Lightning construction where you have a penalty transaction. The adaptor signature one gets rid of the penalty. You have an oracle who is releasing a different secret key depending on the outcome of the bets. It is Liverpool versus Manchester. The oracle releases one secret key based on Liverpool winning, another one on Manchester winning and another one if they tie. It is really simple to do a bet on this kind of thing if you have the adaptor based one. What you do is you both put money into a joint output, if you don’t have Schnorr you use OP_CHECKMULTISIG. Then you put adaptor signatures on each of these outcomes, tie, one team wins, the other team wins. The point, I call it the encryption key, for the signature for each of those outcomes is the oracle’s point. If the oracle releases one secret key what it means is you can decrypt your adaptor signature, get the real signature, put that on the chain. Then you get the transaction which pays out to whoever gets money in that situation. Or if they tie both get money back for example. This is a really simple protocol. The one without adaptor signatures is pretty complicated. I was just pointing out in this mailing list post it is far preferable to have the adaptor one in this case. Even though the ECDSA adaptor signatures are not as elegant as Schnorr it would probably be easier to do that now than do what the original discreet log contract paper suggests.
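
To make the “decrypt your adaptor signature” step concrete, here is a toy, Schnorr-style adaptor signature in pure Python over secp256k1. It is a sketch only: the mailing list work discussed here is about an ECDSA variant (Schnorr was not yet available), the secrets and nonce are hardcoded toy values, and none of this is production code.

```python
import hashlib

# secp256k1 parameters
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    """Elliptic curve point addition (None is the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def challenge(R, X, msg):
    data = b"".join(c.to_bytes(32, "big") for c in R + X) + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Toy secrets (never use fixed values like these for real keys or nonces)
x = 0xC0FFEE            # signer's private key
X = mul(x, G)
t = 0xDECAF             # the oracle's secret for the "Liverpool wins" outcome
T = mul(t, G)           # published in advance as that outcome's anticipation point
k = 0xF00D              # signing nonce
R = mul(k, G)

msg = b"payout transaction: Liverpool wins"

# Adaptor (pre-)signature: an ordinary Schnorr signature except the nonce point is R + T
R_plus_T = add(R, T)
e = challenge(R_plus_T, X, msg)
s_pre = (k + e * x) % n

# Once the oracle reveals t, the pre-signature can be completed into a valid signature
s = (s_pre + t) % n
assert mul(s, G) == add(R_plus_T, mul(e, X))   # standard Schnorr-style verification with nonce point R + T

# Anyone who sees both the broadcast signature and the pre-signature learns t
assert (s - s_pre) % n == t
print("oracle secret recovered from the two signatures:", hex((s - s_pre) % n))
```

The DLC construction described above then just prepares one such pre-signature per outcome (Liverpool wins, Manchester wins, tie), each tied to a different oracle point T.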

Rust DLC stuff was worked on at the Lightning Hack Sprint?

Jonas, Nadav and the rest of the team worked on doing a PTLC. The latest one, they tried to do a Lightning discreet log contract. You are trying to modify rust-lightning to do something it wasn’t meant to do yet. I don’t think much was completed other than learning how rust-lightning works. It looks like it will go somewhere. The rust-lightning Slack has a DLC channel where some subsequent discussions are going on.

There was a question on IRC yesterday about a specific mailing list post on one party ECDSA. Are there different schemes on how to do adaptor like signatures with one party ECDSA?

The short story is that Andrew Poelstra invented the adaptor signature. Then Pedro Moreno-Sanchez invented the ECDSA one. When he explained it he only explained it with a two party protocol. What I did is say we can simplify this to a single signer one. This doesn’t give you all the benefits but it makes it far simpler and much more practical for use in Bitcoin today. That was that mailing list post. It seems solid to me. There might be some minor things that might change but from a security perspective it passes all the things that I would look for. I implemented it myself and I made several mistakes. When I looked at whether Jonas Nick had made the same mistakes, he didn’t. He managed to do all that in a hackathon, very impressive.

On the scalability issues of onboarding millions of LN clients (lightning-dev mailing list)

https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002678.html

Antoine Riard was asking the question of scalability, what happens with BIP 157 and the incentive for people to run a full node versus SPV or miner consensus takeover.

I think the best summary of my thoughts on it in much better words than I could is John Newbery’s answer. These are valid concerns that Antoine is raising. On the other hand they are not specific to BIP 157. They are general concerns for light clients. BIP 157 is a better light client than we had before with bloom filters. As long as we accept that we will have light clients we should do BIP 157.

This conversation happened on both the bitcoin-dev mailing list and lightning-dev mailing list.

This sums up my feelings on it. There were a lot of messages but I would definitely recommend reading John’s one.

The Luke Dashjr post was referring back to some of Nicolas Dorier’s arguments. He would rather people use an explorer wallet in his parlance, Samourai wallet or whatever, calling back to the provider as opposed to a SPV wallet. John’s comment addresses that at least somewhat. It is not specific to BIP 157.

The argument is really about do we want to make light clients not too good in order to disincentivize people to use them or do we not want to do that? It also comes down to how we believe the future will play out. That’s why I would recommend people to read Nicolas Dorier’s article and look at the arguments. It is important that we still think about these things. Maybe I am on the side of being a little more optimistic about the future. That more people will run their own full nodes still or enough people. I think that is also because everyone who values their privacy will still want to run a full node. It is not like people will forget about that because there is a good light client. I honestly believe that we will have more full nodes. It is not like full nodes will stay at the same number because we have really good light clients.

To reflect some of the views I saw elsewhere in the thread. Richard Myers is working on mesh networking and people trying to operate using Bitcoin who can’t run their own full node. If they are in a more developing world country and don’t have the same level of internet or accessibility of a home node they can call back to. Even there there are people working on different ideas. One guy in Africa just posted about how they are trying to use Raspiblitz as a set up with solar panels. That is a pretty cool idea as well.

This is the PGP model. When PGP first came out it was deliberately hard to use so you would have to read documentation so you wouldn’t use it insecurely. That was a terrible idea. It turns out that users who are not prepared to invest time, you scare off all these users. The idea that we should make it hard for them so they do the right thing, if you make it hard for them they will go and do something else. There is a whole gradation here. At the moment a full node is pretty heavy but this doesn’t necessarily have to be so. A full node is something that builds up its own UTXO set. At some stage it does need to download all the blocks. It doesn’t need to do so to start with. You could have a node that over time builds it up and becomes a full node for example, gradually catches up. Things like Blockstream’s satellite potentially help with that. Adding more points on the spectrum is helpful. This idea that if they are not doing things we like we should serve them badly hasn’t worked already. People are using random Electrum peers instead. If we’ve got a better way of doing it we should do it. I’ve always felt we should channel the Bitcoin block headers at least over the Lightning Network. The more sources of block headers we have the less chance you’ve got of being eclipsed and sybil attacked.

I recall some people were talking about the idea of fork detection. Maybe you might be not able to run your own full node and you are running some kind of SPV or BIP 157 light client. It would know that there is a fork so now I should be more careful instead of merely trusting whatever miner or node has served me that filter.

There are certainly more things we can do. The truth is that people are running light clients today. I don’t think that is going to stop. If you really want to stop them work really hard on making a better full client than we have today.

I liked Laolu’s post too. You can leverage existing infrastructure to serve blocks. There was a discussion around how important economic weight is in the Bitcoin network and how easy it would be to try to take over consensus and change to their fork of 22 million Bitcoin or whatever.

This was what happened with SegWit2x, aggressively going after the SPV wallets. They were like “We are going to carry all the SPV wallets with us”. It is a legitimate threat. There is an argument that if you have significant economic weight you should be running a full node, that is definitely true. But everyone with their little mobile phone with ten dollars on it is maybe not so important.

One of the arguments that came up in this thread is if you make it so that the full nodes have to keep serving the blocks for the light clients that might increasingly put more pressure on the full nodes to serve the filters. That might create this weird downward spiral incentive to not run their full nodes because of the increasing demands on them.

Serving filters is less intensive than serving blocks. We’ve already got that. We are already there.

If you don’t want to serve the filters you don’t have to serve them. It is a command line option, it is off by default.

How easy is it to run your full node and pair it with your own phone? It looks like we are looking for an enterprising entrepreneur to create a full node that is also a home router, a modem and a Pi hole. Does anyone have any thoughts on what could be done to make that easier?

Shouldn’t we have protocol level Bitcoin authentication and encryption first so we can guarantee that we are connecting to our own node? That seems like the obvious thing.

I am maybe one of the few people who doesn’t buy the economic weight argument about these light clients. The thinking goes that if a large proportion of the economy are not verifying their own transactions the miners could just change the rules. Create Bitcoin Miner’s Vision with 22 million Bitcoin and everyone will just go along with it. Even if a large amount of economic weight is with that I fail to see how people could just go along with it. It is going to be noticed very quickly. Light clients would have to switch. It would be a bad scenario if a lot of infrastructure is running on light clients. But I don’t think it is going to change the definition of what is Bitcoin. I can’t see it happening. Of course if no one is running full nodes then yeah. As long as it is easy to run a full node, you can run it at home on your Raspberry Pi. Some people decide not to because of their phone or they don’t have the right hardware or internet connection. I can’t see how this could change what is Bitcoin in any scenario. What Bitcoin will be is always what bitcoind full nodes think is Bitcoin. If you have some other coin on some other fork of it, it is not going to be Bitcoin and it is not going to be worth anything.

That was literally what SegWit2x was about. That was literally what they tried to do. They said “We are taking Bitcoin in this direction and we’re taking the light clients with us because we’ll have the longer chain.” It is not a theoretical, sounds unlikely, that was literally what the plan was. The idea was that it would force enough people to flip and that would be where the economic majority are now. It would lever the small number of people running full nodes into switching to follow that chain as well because that was going to be where everyone was. Your useless old Bitcoins weren’t going to be worth anything. This is not some theoretic argument, this was literally what happened. Saying this is not going to happen again because reasons is probably not going to carry weight for at least ten years until people have forgotten about that one.

But would it work? If there were more light clients is it more likely to work? I don’t see it.

It would absolutely work. All the light clients would follow it. Then your exchange or service doesn’t work anymore if you are not following the same chain as all the light clients talking to you. Follow the light clients if they are the economic majority that is where you have to go.

It becomes a question of economics then. I don’t see it actually playing out that way. The reason is the people who decide what is Bitcoin is not really the owners of Bitcoin or the people in the P2P network. It is more like the people who have US dollars who want to buy Bitcoin. They are the ones who decide what it is at the end of the day. The exchange is going to offer them two different ticker symbols for these two different Bitcoins and they are going to decide which one is the real one. They are not going to decide based on which one the light clients are following. I don’t see how having light clients follow you is going to help you too much in deciding what is the real Bitcoin.

If Bitcoin was used for real daily economic activity which it isn’t today then you’ve got two choices. This change happens, the miners push this through and the light clients go one way. Say you’re relying on Bitcoin income and people sending you Bitcoin. You have two choices. All the light clients don’t work for your customers or you change, follow the miners and go with the economic majority. You may rail against it, “This is not real Bitcoin”, it doesn’t matter. People on the margins will go with the majority. If the economic majority goes one way you will go that way because otherwise you will suffer. At the moment we have this luxury that most people are not living day to day on Bitcoin. They could sit it out and wait to see what wins. But if there is real monetary flow with Bitcoin then that isn’t necessarily the case and you can place time pressure on people. The economic majority is significant then. The time it would take to get a response, to get all the SPV nodes to flip across or reject whatever blocks could be significant. It could be long enough that people will cave. Given we’ve seen serious actors try this once and a significant amount of mining power threaten to try this I think it is a real scenario.

Let’s say in that scenario SPV wallet developers think how can I let users select which chain they wish to be on. At that point the SPV wallet developers would have to quickly update their code to help their users select the correct chain, not this fork 22 million Bitcoin chain. Theoretically it is possible but it is not so easy to quickly put in place to help them fight the adversarial change.

Imagine the chaos that would ensue. The people who haven’t upgraded can’t send money to people who have upgraded. They are also under pressure to get their software working. The easiest way to get their software working is to follow everyone else.

I see the argument and I think you’ve put it better than I’ve ever heard it before. I still think there is “Ok I have to make a decision about what I am going to do.” I can follow this chain but if the people who have US dollars who are investing in Bitcoin, big players, don’t think that chain has Bitcoin on it. I do see the economic majority argument but just not from economic majority of holders of Bitcoin.

You are simplifying the argument. There are different constituents. Even though I agree that one set of constituents are the whales that hold Bitcoin or are buying lots of Bitcoin, if they are going to lose money because all the other constituents in the ecosystem say that chain isn’t Bitcoin then it doesn’t matter what they say. They either lose money or they go with the ecosystem majority. It is not as simple as saying one constituent has all the power. This is definitely not the case. There are four or five different constituents.

They all have one thing in common. They are bidding for Bitcoin with US dollars or some other currency or goods. People who have Bitcoin do not get to decide what the real Bitcoin is because they are going to have both Bitcoin and the forked coin. They don’t get to decide which one is the real one. The guys who decide are the people who purchase it on exchanges. They are going to say which is the real one, we have a social consensus on which one is the real one. I see there could be an attack on it but I don’t see how light clients really are going to be the key to the malicious attack winning.

This is where I agree with you. I think it is not light client or full node. It is light client or custodial. Custodial are just as bad as light clients if not worse. Then you are fully trusting the exchange’s full node. You also only need one exchange full node to serve billions of users. If we can move people from custodial to light client I see that as an improvement. That is the comparison people don’t often make.

This seems counter to the Nicolas Dorier argument. He would rather you use a custodial wallet because that is just bad for your risk because you are not holding your own keys. That is resisting a miner takeover.

I would think that would make the economic weight argument even worse. Because then they actually have the Bitcoin, you have nothing. If you have a SPV wallet you ostensibly have the keys and you can do something else with them.

If you want to discuss the bad things you can do having the majority of clients controlled by a custodial entity is arguably even worse than the SPV mode.

What Nicolas says is “I would rather use a custodial wallet for myself if that means the whole ecosystem doesn’t use light clients.”

That is the PGP disease. Let’s make it really hard to use them so everyone is forced into a worse solution. It is a terrible idea.

What are some of the ideas that would make it easier for people to run their own full node? You mentioned making it easier for people to call back to their own node. Right now it is difficult.

The friends and family idea. That computer geek you know runs a full node for you which is second best to running your own. The idea of a node that slowly works its way backwards. It has a UTXO snapshot that it has to trust originally. Then it slowly works its way backwards and downloads blocks for however long it takes. Eventually it gets double ticked. It is like “I have actually verified. This is indeed the genuine UTXO set.” A few megabytes every ten minutes gets lost in the noise these days.

That is related to the James O’Beirne assumeutxo idea.

And UTXO set hashes that have been proposed in the past are pretty easy to validate. You can take your full node and you can check that this is the hash you should have at this block. You can spread that pretty widely to people who trust you and check that is something you’ve got as well. Creating a social consensus allows you to bootstrap really fast. I can grab that UTXO set from anywhere and slowly my node will work its way backwards. With some block compression and other tricks we can do that. The other issue is of course that there is a lot of load. If you serve an archived full node and serve old blocks you get slammed pretty hard with people grabbing blocks. I had a half serious proposal for adopt a block a while ago. Every light client would hold 16 blocks. They would advertise that and so you would be able to grab 16 blocks from here and 16 blocks from there. Rather than hammering these full nodes even light clients would hold a small number of blocks for you. You could do this thing where you would slowly wind your way backwards and validate your UTXO set. I think the effort is better put into making it easier to run a full node. If you don’t like light clients move the needle and introduce things that are closer to what you want in a full client. Work on that rather than trying to make a giant crater so that no one can have a light client. That is short term and long term a bad idea.
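
The “trust a snapshot now, verify it later” bootstrap being discussed can be sketched in a few lines. The block and UTXO structures below are invented stand-ins, not Bitcoin Core’s actual assumeutxo implementation.

```python
import hashlib

def utxo_hash(utxos):
    """Stand-in for a canonical UTXO set hash."""
    return hashlib.sha256(repr(sorted(utxos)).encode()).hexdigest()

# Toy chain: each block simply creates one new coin
chain = [{"new_coins": [f"coin{h}"]} for h in range(10)]

SNAPSHOT_HEIGHT = 6
snapshot = {f"coin{h}" for h in range(SNAPSHOT_HEIGHT + 1)}
TRUSTED_SNAPSHOT_HASH = utxo_hash(snapshot)   # obtained out of band, e.g. by social consensus

# 1. Start from the snapshot and validate new blocks immediately
utxos = set(snapshot)
for block in chain[SNAPSHOT_HEIGHT + 1:]:
    utxos.update(block["new_coins"])

# 2. In the background, replay history from genesis and check the snapshot was genuine
rebuilt = set()
for block in chain[:SNAPSHOT_HEIGHT + 1]:
    rebuilt.update(block["new_coins"])
assert utxo_hash(rebuilt) == TRUSTED_SNAPSHOT_HASH, "snapshot did not check out"

print("snapshot verified; the node now has fully validated history")
```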

Will Casarin made a comment around how he had set up his Bitcoin node with his phone to call back to his own node. It is more like enthusiast level or developer level to use your own full node and have your phone paired with it. To have your own Electrum Rust Server and pair your laptop back to it. How can that be made easy for the Uncle Jim, friends and family node so he can set it up so they call back to his node.

If you stop encouraging people to experiment with this stuff you can inadvertently stop people from experimenting and stop entrepreneurs from figuring out cool new ways to have your phone paired with your full node, making it easy. You can stop people from getting into Bitcoin. This option doesn’t exist now. That lowers demand, that lowers the price and the price lowers the hashrate. You can have your pure Bitcoin network with only full nodes but it is at a lower price and a lower hash rate. Is your security better? It is definitely up for debate.

On Neutrino the way I read that was it made it easier for me as a local enthusiast to effectively serve SPV clients. What doesn’t get brought up too much is the counterargument that if you make it easier to serve SPV you are going to get a lot more honest servers out there that will sway the economic majority. Instead of there being blockchain.info as the only party serving SPV filters and they own the money, if I’m serving SPV and others are serving SPV there will be a lot more of the network who are acting honestly and serving SPV filters.

The problem with SPV is that it trusts the person who serves you.

But my node isn’t going to serve you rubbish forks.

I have been running an Electrum server for many years. Often I see people at conferences telling me they are using my node. They are trusting me. I say “Don’t trust me” but it has happened this way. At least they know this guy is running a node and I have to trust him. It is not some random node on the internet.

If you trust a certain full node or you connect to a certain full node and you are getting light client filters from it then you are fine.

I am still a random person for them. Maybe they saw me once or twice at a conference. I always tell them “Run your own node.”

This is where you need to have a ladder. It is much better people are going “I am connecting to this guy’s node. He is serving me filters” than none of the options apart from a full node are there and therefore I am just going to trust Coinbase or Kraken to deal with it. I am not going to worry about how my balance is being verified. The situation that you describe is superior to custodial. And it is a long journey. As long as people understand what they are doing you can see in 5, 10 years they are like “I don’t want to trust this guy’s node anymore. I don’t want to have filters served from this guy’s node. I am going to run my own node.” You have given them a stepping stone towards running a full node that wouldn’t have been there if you had ditched all the superior light client options.

RFC: Rust code integration into Bitcoin Core

https://github.com/bitcoin/bitcoin/issues/17090

Last month we spoke about this idea.

This has been a topic that has floated around for a while, a Rust integration in Bitcoin Core. It first started in the middle of last year when Cory Fields and Jeremy Rubin hacked together some initial Rust integration and had a proof of concept as to how you might integrate a simple Rust module into the Core codebase. There was an initial PR that has a lot of conceptual level discussion in it. I would definitely recommend that anyone interested in Rust integration bootstrap by reading through this discussion. There is Cory’s presentation he gave at the MIT conference (2019). That summarizes a lot of the thinking why a language like Rust might be something good for Bitcoin. When we are having a lot of these discussions a lot of thinking is five or ten years down the line. Thinking about where we want to be at that point. As opposed to we should rewrite everything in Rust right now. Cory brings this up as well. You can look at a lot of other big security critical open source projects like Firefox. Mozilla is one of the orgs that kicked off a lot of this research into safer languages. Web browsers are exactly the kind of thing you want to build using safer languages. Maybe Rust makes sense for us. There would be downsides to a blanket switch to using Rust next week. There are a lot of long term contributors working on Bitcoin Core who may not be interested in or as fluent in Rust as they are in C or C++. You can’t necessarily just switch to a new language and expect everyone else to pick that up and be happy with using it going forward. We have thousands and thousands of lines of code and tests that have worked for near on a decade. There is not necessarily a good reason to rewrite for the sake of rewriting it. Maybe better languages come along. Particularly not when some of the benefits that Rust might present are not necessarily applicable to those parts of the codebase. Since the original discussion around the end of last year Matt ended up opening a few PRs. A serving DNS headers over Rust PR he put forward. A parallel P2P client in Rust PR. He wrote another networking stack or peer-to-peer stack in Rust that would run alongside the C++ implementation. If the C++ implementation crashed for some reason maybe you could failover to the Rust implementation. Try understanding things like that. Maybe you want to run the Rust implementation and failover to the C++ implementation. There is lots of code there. I think it is still worth discussing but I can’t imagine how the discussion ended. It got to the point where a lot of our PRs and discussions end up. There’s a flurry of comments and opinions for a week or two and then it dies down and everyone moves onto discussing and arguing the next interesting thing to come up. It is probably one of the things that needs a champion to pick it up and really push the use case. Obviously Matt has tried to do that and it hasn’t worked. I don’t know what needs to be done differently or not. It is one of these open for discussion topics for the repo. Personally I think it is really interesting. I have played around with writing some Rust. I like writing it. It seems to be where big software projects are going. That includes things like operating systems for desktop and mobile, web browsers. Idealistically in five or ten years time if we manage to fix a lot of the other things we need to fix up and trim down in the Core codebase, at that point if some or all of it has been transitioned into Rust I wouldn’t be against it.
The path to that is definitely unclear at the moment.

Pathway wise, is it even possible to have a parallel Rust thing going at the same time? Is that potentially going to run into problems where the Rust version disagrees with the C++ version?

You have already got Bitcoin clients that exist written in Rust. You’ve got rust-bitcoin that Andrew (Poelstra) and Matt (Corallo) work on which still has a big warning at the top of the README saying “Don’t use this in production for consensus critical applications.”

I have never seen a rust-bitcoin node be listed on a bitnodes.io site as a node that is following the consensus rules and you can connect to it. So it appears people are following that warning.

I am not sure who may or may not be running them. Maybe Square Crypto are running some rust-bitcoin nodes. Maybe at Chaincode when Matt was there they had a few running. I’m not entirely sure. It looks like rust-bitcoin has some extra features that Core doesn’t.

I have never seen a rust-bitcoin node anywhere.

I have never seen one on a node explorer.

Coindance or the Clark Moody site. That is all I’m familiar with.

What is the right way to introduce Rust? Would it be a pull request where it is just integration and the conversation is focused on whether Rust should be introduced into Core or not? Or should it be included in a feature pull request where the feature in some kind of way shows off how Rust can be used and what the pros of Rust are? Then you would be discussing the feature and Rust at the same time.

Splitting the conceptual discussion and actual implementation is tricky. As soon as you turn up with an implementation the conversation is always likely to diverge back to conceptual again. I think we have gone in circles enough with the conceptual stuff. At this point we may have to open enough PRs and throw enough Rust code at the wall until something sticks that we can get some agreement on. Even if we can’t get it into the actual Core codebase, if we start pulling it into some of our build systems and tooling. That is another way to inject it into or get in front of a lot of the developers working on our codebase. We had some horrible C tools that were used as part of the Mac OS build process. I have played around with rewriting one of those in Rust. It works but there is very little interest in the build system let alone operating system specific components of it. I thought it might not be the best way forward for some Rust integration. Maybe writing some sort of test harness or some other test component in Rust. Something that is very easily optional to begin with as well. As soon as you integrate Rust code into Core then it has to become part of the full reproducible build process as well. This makes it more tricky. If you can jam it into an additional test feature that might be a way to get some in. Regardless of what you do as soon as you open a PR that has some sort of Rust code in it there are going to be some people who are going to turn up and say they are not interested regardless of what it does. Be prepared to put up with that. If it is something that you are interested in I am happy to talk about that and discuss that and help you to figure out the best path to get it done.

Rust is definitely promising. I don’t know but reading through the conversation I think there is consensus people find the language interesting and find the properties that it brings promising. To steelman the opposing argument it is not only that a lot of contributors can’t write or review Rust code, it is also how much of this is a bet on Rust as a language. There have been so many languages that have sprung up in the past and showed promise but the maintainers of that language or the path it went down meant it fizzled out and it didn’t become the really useful language that most people are using. At what point do you have to make a bet on an early language and an early ecosystem and whether Core should be making that bet?

That is a good thing to think about. I think Rust is like 11 or 12 years old. There is one argument to be made that it is not stable yet. Maybe that is something that will improve over the next 2, 3, 4 years. Maybe we shouldn’t be the software project that makes bets on “new” languages. At the same time there are a lot of big companies making that same bet. I dare say Windows is one of them that is looking at using Rust. Mozilla obviously started Rust and has a very long history of using Rust in production for many many years now in security critical contexts. I wish I had more examples than that but I don’t off the top of my head. The long term contributors not necessarily knowing Rust is obviously a good one. But at the same time the consensus critical code is going to be the absolute last thing that we rush to rewrite in Rust. The stuff that you would be rewriting in Rust is probably going to be very mundane to begin with. I would also like to think that if we do pick up a language like Rust maybe we pick up more contributors. We have seen a few PRs opened by people who aren’t necessarily actively contributing to Core all the time but they may very well contribute more if they could contribute in a language like Rust, which would be great. There is no end of arguments from either side to integrate or not.

I really like Rust. It is going from strength to strength and it has got definite merit. But you have got existing expertise, people who have tooled up in a certain language. Some of that is self selected but whether that is true or not they exist. Now you are asking people to learn two languages. If you want to broadly read the codebase you have to know two languages and that is a significant burden. Perhaps in five years time when Rust is perhaps an obvious successor to C++ then that intersection… Arguing that we will get more contributors for Rust is probably not true. You might get some different contributors but C++ is pretty well known now. There is the sexy argument. “Work on this, it is in Rust.” Unfortunately that will become less compelling too over time as Rust gets bigger, more projects are using it and there are other cool things you could be doing with Rust anyway. It would be better to do stuff in Rust as far as security goes. I am cautiously hopeful that the size of bitcoind is going down not up. Stuff should be farmed out to the wallet etc with better RPCs and better library calls so it is not this monolithic project anymore. The argument that we are going to be introducing all these security holes if we don’t use Rust is probably weaker. As you said the last thing that will change is the consensus code. That said I like projects that stick to two languages. It is really common to have projects where you have to know multiple languages to deal with them. Working around the edges and reworking some of the tooling in Rust, building up those skills slowly, is definitely the way forward. It will take longer than you want and hope but it is a pretty clear direction to head in. But in terms of heading in that direction, it will be a decade before we are talking about finally rewriting the core in Rust. It is not going to happen overnight.

I definitely share your optimism for the size of bitcoind going down over time as well. Even if we couldn’t do Rust, if all we could do is slash so much out of that repository I would be very happy.

At least you don’t have the problem that your project is commonly known as c-bitcoin.

Is there a slippery slope argument where if you start introducing a language even right at the periphery that it is hard to ever get it out. It is the wrong word but it is kind of like a cancer. Once it is in there it starts going further and further in. If you are going to start introducing it even at the periphery you need to be really bought in and really confident. There must have been examples with Linux where people wanted to introduce other languages than C and it didn’t happen as far as I know.

It didn’t happen. You have a base of developers who are comfortable with a language and you are not going to move to anything else. The only way you get Linux in Rust is to rewrite everything.

I thought there were people introducing some Rust things into the Linux kernel.

We went through the whole wave of people adding modules in C++ as well. But Linux has an advantage. If Linus (Torvalds) decided he really liked Rust and it was the way forward he could drag everyone else with him. It is the benevolent dictator model. I can’t think of an open source project that has switched implementation languages and survived. I can’t think of many that have tried to switch implementation languages. The answer is generally if you are going to rewrite it is usually a different group that come along and rewrite it.

What about making a fresh one starting from scratch in Rust?

There is rust-bitcoin.

It is just missing a community of maintainers.

And people using it as well.

If you are going to introduce things into bitcoind you could do the piecemeal approach where you take certain things and shove Rust in there. Wouldn’t it be cool to have a whole working thing in Rust and then take bits out as you want them into the bitcoind project?

The idea of having an independent implementation of Bitcoin has been pretty roundly rejected by the core developers.

Bitcoin Core is not modular right now so I think there would be a lot more steps before that would be possible.

You would need libconsensus to be a real thing.

libconsensus is another good thing to argue about if we have endless time.

We haven’t talked much about the promise of Rust. Rust is really exciting. Definitely look into Rust. It is just whether there is a potential roadmap to get it into Core and whether that is a good thing or not is the question.

If we had managed to have a Core dev meetup earlier this year, we didn’t because it got cancelled, I’m sure that would have been one of the things very much discussed in person. We’ll wait and see.

Dual funding in c-lightning

https://github.com/ElementsProject/lightning/pull/3418

Lisa and I are doing daily pair programming four days a week working on the infrastructure required for dual funding. We have gone through the spec, we are happy with the way the spec works but we have to implement it. The thing with dual funding is instead of creating a funding transaction unilaterally you get some information from the peer, what their output should look like etc. With dual funding you negotiate. I throw in some UTXOs, you throw in some UTXOs, I throw in some outputs, you throw in some outputs. You are effectively doing a coinjoin between the two of you. That is a huge ball of wax. Coinjoin is still a bleeding edge technology in Bitcoin land. We use libwally which has PSBT support now but it turns out to be really hard, we hit some nasty corner cases, I just filed a bug report on it. PSBT is set up for “We’ve got this transaction, let’s just get the signatures now.” But actually building a transaction where you contribute some parts and I need to modify my PSBT to add and delete parts and put them in the right order is something that the library doesn’t handle well. Even just building up the infrastructure so you can do this is really interesting. We have got a PR that plumbs PSBTs through c-lightning now as a first class citizen which is the beginning of this. The PSBT format is an obvious consequence of the fact that increasingly in Bitcoin we don’t put all the stuff onchain. You have to have metadata associated with things. Some of that is obvious like pay-to-pubkey-hash, you need to know what the real pubkey is, that doesn’t appear. To more subtle things like you need to know the input amount to create a signature in SegWit. That is not part of the transaction as such, it is kind of implied. Plumbing all that through is step zero in getting dual funding to work. We will get onto the dual funding part which is the fun bit but we have spent a week on c-lightning infrastructure to get that going. What we are doing is for 0.9, the next release which Christian (Decker) will be captaining, we are trying to get as much of the infrastructure in as possible. That monster PR was this massive number of commits, this giant unreviewable thing. We said “Let’s break it up into the infrastructure part and PSBT support”. Once we’ve got all that done we will have an implementation that will be labelled experimental. With c-lightning you can enable experimental features. It will turn on all these cool not spec finalized things that will only work with other things that have been modified to work with it. One of those will be dual funding. We are hoping that this will give us some active research that will feed back into the spec process and will go around like that. Dual funding has been a long time coming. Lisa doesn’t really want to work on dual funding, she wants to work on the advertisement mechanism where you can advertise that you are willing to dual fund. She has to get through this. That’s the carrot. We are definitely hoping that it will make the next release in experimental form and then we can push on the spec. The spec rules are you have to have two independent implementations in order to get it in. Ours will be one and then we will need one of the others to step up and validate that our spec is implementable and they are happy with it. Then we get to the stage where it gets in the spec. It is a long road but this is important. It does give us a pay-to-endpoint (P2EP) like capability.
Not only does it give us both the ability to add inputs and outputs but also the ability to add the relevant inputs and outputs. You can use it to do consolidation and things like that that will really throw off chain analysis because there is a channel coming out of this as well.

Would you be trying to do that equal output thing as well as part of that channel open? Or is it just a batched transaction?

When we have Schnorr signatures of course you won’t be able to distinguish the channel opens. Then it starts to look completely random. At the moment you can tell that that is obviously a channel close even if it is a mutually agreed close. If it is 2-of-2 it is almost certainly a channel. In the longer term you will then get this coinjoin ability. In the initial implementation we made sure it is possible to negotiate multiple channel opens at once and produce this huge batch. You could also be using it for consolidation. Someone opens a channel with you, while you are here and making this transaction may as well throw this input and output in. We’ll do that as well. You can get pretty aggressive batching with this stuff.

It requires interoperability with some other Lightning client like lnd or eclair or whoever. Presumably the idea then is c-lightning would create a PSBT with one of the other clients to jointly construct a transaction so they can make the channel together.

We are not using generic PSBT across the wire. We are using a cut down add input, add output kind of thing. It is a subset of PSBT. We will use that internally. That also gives us a trivial ability for us to improve our hardware wallet support. Once you are supporting PSBTs then it is trivial to plug it into hardware wallets as well. That is pretty exciting. We are using dual funding to motivate us to do PSBTs.

Have you looked at how lnd have implemented it yet? Is that informing how you implement it in c-lightning?

They haven’t implemented dual funding.

They have implemented PSBT support.

PSBT support itself is actually pretty easy to do. You can already do PSBT today in c-lightning as a plugin. It wasn’t in the core. You would build a PSBT based on the information the core would give you. That works. You have been able to use hardware wallets with c-lightning since before the last lnd release. It wasn’t very polished. This will bring it to a level of polish where we can recommend people use it. Rather than telling people they need to string these pieces together to get it to work. The PSBT itself, we are using libwally. We import libwally, pretty easy.

Would PSBT become part of the Lightning spec at that point?

No. We let you add arbitrary outputs but we don’t need the full PSBT for that. You just tell it the output. As long as you have got the bits that we need that opens the channel and our outputs go where we expect, we let you put whatever stuff you want in the funding transaction. If there is something wrong with it it won’t get mined. That is always true. You could put a valid input and then double spend that input. We always need to handle the case where the opening transaction doesn’t work for some reason. We might as well let you use anything. Just give us the output script, we don’t need the full PSBT flexibility. There is some caution with allowing arbitrary PSBTs. Other than generally having a large attack surface it would be a requirement that you understand the PSBT format and we are not ready to go that far. It is quite easy. “Here’s an output, here’s an output number” and then you can delete the number. The negotiation is quite complicated. “Add this output. Add this input. Add this output. Flip that input. Flip that output.” As long as at the end you’ve got all the stuff. That turns out to be important. If you are negotiating with multiple people you have to make sure everyone sees things in the same order. You have an index for each one and you specify that it will be in ascending order. That is required because if you are reordering an output I might need to move my output out of the way and put yours in there because that is what you expect. We all have to agree. It turns out to be quite a tricky problem. The actual specification of the inputs and outputs themselves is trivial so we haven’t used PSBT for that. The whole industry has been moving to PSBT because it solves a real problem. That problem is that there is metadata that you need to deal with a transaction that is not actually inside the transaction format that hits the blockchain. This is good, keep the minimum amount of stuff on the blockchain. It does create this higher hurdle for what we need to deal with. The lack of standardization has slowed down development. It is another thing you need to be aware of when you are developing a wallet or hardware wallet. PSBT really does fill that gap well. I expect to see in the future more people dealing with PSBTs. A Bitcoin tx is almost like an artifact that comes out of that. Everyone starts talking PSBTs and not talking transactions at all.
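
To make the metadata point concrete, here is a rough sketch of the kind of per-input data a BIP 174 PSBT carries alongside the bare transaction. The types below are invented for illustration only; they are not libwally’s or rust-bitcoin’s actual API.

```rust
// Illustrative sketch only: hypothetical types showing the per-input metadata
// a BIP 174 PSBT carries that never appears in the final onchain transaction.

use std::collections::BTreeMap;

type Script = Vec<u8>;
type PublicKey = Vec<u8>; // 33-byte compressed key in practice
type Signature = Vec<u8>;

#[derive(Default)]
struct PsbtInput {
    // The UTXO being spent: for SegWit inputs the amount is needed to sign,
    // so the previous output (or the whole previous tx) travels with the PSBT.
    witness_utxo: Option<(u64 /* sats */, Script /* scriptPubKey */)>,
    non_witness_utxo: Option<Vec<u8>>, // full previous transaction, serialized
    // Scripts the final scriptSig/witness must reveal (e.g. P2SH redeem script).
    redeem_script: Option<Script>,
    witness_script: Option<Script>,
    // Signatures collected so far, keyed by the signing public key.
    partial_sigs: BTreeMap<PublicKey, Signature>,
}

#[derive(Default)]
struct Psbt {
    unsigned_tx: Vec<u8>,   // the bare transaction, no signatures yet
    inputs: Vec<PsbtInput>, // one metadata bundle per input
}

fn main() {
    let mut psbt = Psbt::default();
    psbt.inputs.push(PsbtInput {
        witness_utxo: Some((50_000, vec![0x00, 0x14])), // stand-in scriptPubKey prefix
        ..Default::default()
    });
    println!("inputs carrying metadata: {}", psbt.inputs.len());
}
```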

In my recent episode with Lisa (Neigut) we were talking about the script descriptors in Bitcoin Core. Would that be something that comes into Lightning wallet management as well so that you would have an output descriptor to say “This is the onchain wallet of my Lightning wallet”?

We would love to get out of the wallet game. PSBT to some extent lets us do that. PSBT gives us an interoperable layer so we can get rid of our internal wallet altogether and you can use whatever wallet you want. I think that is where we are headed. The main advantage of having normal wallets understand a little bit of Lightning is that you could theoretically import your Lightning seed and get them to scrape and dredge up your funds which would be cool. Then they need to understand some more templated script outputs. We haven’t seen demand for that yet. It is certainly something that we have talked about in the past. A salvage operation. If you constrain the CSV delay for example it gets a lot easier. Unfortunately for many of the outputs that you care about you can’t generate all from a seed, some of the information has to come from the peer. Unless you talk with the peer again you don’t know enough to even figure out what the script actually is. Some of them you can get, some of them are more difficult.

I read Lisa’s initial mailing list post where she wrote down how it works. This is a protocol design question. Why is it add input, add output and not just one message which adds a bunch of inputs and outputs and then the other guy sends you a message with his inputs and outputs and you go from there?

It has to be stateful anyway because you have to have multiple rounds. If you want to let them negotiate with multiple parties they are not going to know everything upfront. I send to Alice and Bob “Here’s things I want to add.” Then Alice sends me stuff. I have to mirror that and send it to Bob. We have to have multiple rounds. If we are going to have multiple rounds keep the protocol as simple as possible. Add, remove, delete. We don’t care about byte efficiency, there is not much efficiency to be gained. A simple protocol to just have “Add input, add input, add input” rather than “Add inputs” with a number n. It simplifies the protocol.

I would have thought it would be harder to engineer if you are modifying your database and then you get another output coming in. Now maybe you have to do locking. I guess you have to do that anyway?

The protocol ends up being an alternating protocol where you say “I have nothing further to add and I’m finished.” You end up ping ponging back and forth. It came out of this research on figuring out how to coordinate multiple of these at once. Not that we are planning to implement that but we wanted to make sure it was possible to do that. That does add some complexity to the protocol. The rule is you don’t check until the end. Obviously they need to have enough inputs to fund the channel that they’ve promised and stuff like that. But they might need to change an input and remove a UTXO and use a different one. Then they would delete one and add another. That is completely legal even though at some point it could be a transaction with zero inputs. It is only when it is finalized that you do all the checks. It is not that hard as long as it is carefully specified. Says he who hasn’t implemented it yet. That’s one of the reasons we implement it, to see if it is a dumb idea.
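
A minimal sketch of that alternating add/remove negotiation, with made-up message and field names rather than the actual spec wire format, might look like this: nothing is validated until both sides have said they are finished, and everything is sorted by serial number so all parties agree on ordering.

```rust
// Sketch of the interactive "add input / add output" negotiation described
// above, with simplified message and field names (not the real spec messages).

#[derive(Debug)]
enum Msg {
    AddInput { serial: u64, value_sats: u64 },
    AddOutput { serial: u64, value_sats: u64, script: Vec<u8> },
    RemoveInput { serial: u64 },
    RemoveOutput { serial: u64 },
    Complete, // "I have nothing further to add"
}

#[derive(Default)]
struct Negotiation {
    inputs: Vec<(u64, u64)>,           // (serial, value)
    outputs: Vec<(u64, u64, Vec<u8>)>, // (serial, value, script)
    us_done: bool,
    them_done: bool,
}

impl Negotiation {
    fn apply(&mut self, from_us: bool, msg: Msg) {
        match msg {
            Msg::AddInput { serial, value_sats } => self.inputs.push((serial, value_sats)),
            Msg::AddOutput { serial, value_sats, script } => {
                self.outputs.push((serial, value_sats, script))
            }
            // Removing and re-adding is legal mid-negotiation; the transaction
            // may even be momentarily empty. Nothing is checked until the end.
            Msg::RemoveInput { serial } => self.inputs.retain(|(s, _)| *s != serial),
            Msg::RemoveOutput { serial } => self.outputs.retain(|(s, _, _)| *s != serial),
            Msg::Complete => {
                if from_us { self.us_done = true } else { self.them_done = true }
            }
        }
    }

    /// Checks run only once both sides have said Complete: sort by serial so
    /// everyone agrees on ordering, then verify the funding adds up.
    fn finalize(&mut self, required_funding: u64) -> Result<(), &'static str> {
        if !(self.us_done && self.them_done) {
            return Err("negotiation still in progress");
        }
        self.inputs.sort_by_key(|(serial, _)| *serial);
        self.outputs.sort_by_key(|(serial, _, _)| *serial);
        let in_total: u64 = self.inputs.iter().map(|(_, v)| v).sum();
        let out_total: u64 = self.outputs.iter().map(|(_, v, _)| v).sum();
        if in_total < out_total || out_total < required_funding {
            return Err("insufficient funding");
        }
        Ok(())
    }
}

fn main() {
    let mut n = Negotiation::default();
    n.apply(true, Msg::AddInput { serial: 2, value_sats: 120_000 });
    n.apply(false, Msg::AddOutput { serial: 1, value_sats: 100_000, script: vec![] });
    n.apply(true, Msg::Complete);
    n.apply(false, Msg::Complete);
    println!("{:?}", n.finalize(100_000));
}
```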

Bitcoin Core MacOS functional tests failing intermittently

https://github.com/bitcoin/bitcoin/issues/18794

I am not entirely sure why this is on the list, it is not that interesting. I think this is a case of us reenabling some functional tests that we had disabled in Travis a while back. When we turned them back on they were still failing. The failures were Python socket OS type errors that no one could really recreate locally. They just happened on Travis intermittently. It was a pain to have them fail now and again for no specific reason.

Two elements. One, an update on the state of functional tests in Core. The other thing is there seems to be a few problems with Mac OS. Things like fuzzing are difficult to get up and running on Mac OS. I just wondered what problems there were with functional tests on Mac OS as well.

I think we have got the functional tests fairly nailed down on Mac OS now. Especially over the past 18-24 months Mac OS has turned into a horrible operating system to use for a lot of stuff we are doing. It comes pre-installed with lots of really old versions of tools like make 3.8. It has got Apple clang by default, which is a version of some upstream LLVM clang with a bunch of Apple stuff hacked in, that we spent a lot of time working around in our build system. There were issues with the fuzzers as well. I patched some of those. That was linking issues. The alternative is to throw away Apple clang and use a proper LLVM clang 10 or whatever is the latest release. It works now. I don’t think our docs are entirely correct. There are a few command line flags, there is a flag in there telling people to disable the asm assembly optimizations. I don’t think that is required for Mac OS specifically for any reason. That will work, but if you want to do a lot of fuzzing or have something that continuously runs a lot of the functional and unit tests for Core, spin up a Linux instance somewhere, preferably a recent Ubuntu 20 or Debian 10/11, and run them on there instead. On the CI more generally, we have obviously used Travis for a long time with sometimes good success, other times worse success. I think Marco (Falke) has got to the point where he got fed up with Travis and has really genericized a lot of our CI infrastructure so you can run it wherever you want. I know he runs it on Cirrus and something else as well regularly. He has a btc_nightly repo with scripts for doing that that you can fork and use as well. There are continual issues with Travis with certain builds running out of disk space. We reported this 4 or 5 months ago and we still haven’t got answers about why it happens. Maybe we will migrate to something else. Anytime it comes up, do we pay for something else and where are we going to migrate to? We don’t want to end up on something worse than what we currently have.

Modern soft fork activation

https://twitter.com/LukeDashjr/status/1260598347322265600?s=20

This one was on soft fork deployment.

We discussed this last month. AJ and Matt Corallo put up mailing list posts on soft fork activation. We knew that Luke Dashjr and others have strong views. This was asking what his view is. I think we are going to need a new BIP which will be BIP 8 updated to be BIP 9 plus BIP 148, but I haven’t looked into how that compares to what Matt and AJ were discussing on the mailing list.

What was discussed on the mailing list at the start of the year was, as Luke says, BIP 9 plus BIP 149. BIP 149 was going to be the SegWit activation: the original one expires in November, we’ll give it a couple of months and then we’ll do a new activation in January 2018 I guess, that will last a year. There will be signaling but at the end of that year, whether the signaling works or not, it will just activate. It divides it up into two periods. You’ve got the first period which is regular BIP 9 and the second period is essentially the same thing again but at the end it succeeds no matter what. The difference between that and BIP 148 is what miners have to do. What we actually had happen was before the 12 months of SegWit’s initial signaling ended we said “For this particular period starting around August miners have to signal for SegWit. If they don’t, this hopefully economic majority of full nodes will drop their blocks so that they will be losing money.” Because they are all signaling for it and we’ve got this long timeframe it will activate no matter what. It will all be great. The difference is mostly that BIP 148 got it working in a shorter timeframe because we didn’t have to start it again in January, but it also forced the miners to have this panic of “We might be losing money. How do we coordinate?” That was BIP 91 that allowed them to coordinate. The difference is that BIP 148 we know has worked at least once. BIP 149 is similar to how stuff was activated in the early days like P2SH. We don’t know 100 percent that it will work but it is less risk for miners at least.
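
A rough sketch of that two-period idea, heavily simplified (real deployments track state per retarget period and lock-in rules, which are ignored here): activate early if miner signalling hits the threshold in either period, and activate unconditionally once the second period ends.

```rust
// Simplified sketch of the two-period activation idea discussed above:
// period one behaves like BIP 9 (activate only if signalling reaches the
// threshold), period two signals again but activates at its end no matter what.

#[derive(Debug, PartialEq)]
enum Deployment {
    Active,
    Pending,
}

fn activation_status(
    signalling_ratio: f64, // fraction of blocks signalling in the current window
    threshold: f64,        // e.g. 0.95
    in_second_period: bool,
    second_period_over: bool,
) -> Deployment {
    if signalling_ratio >= threshold {
        return Deployment::Active; // miners activated it early, in either period
    }
    if in_second_period && second_period_over {
        // End of the second period: activate regardless of signalling.
        return Deployment::Active;
    }
    Deployment::Pending
}

fn main() {
    // Low signalling during period one: still pending.
    assert_eq!(activation_status(0.30, 0.95, false, false), Deployment::Pending);
    // Second period has ended without enough signalling: activates anyway.
    assert_eq!(activation_status(0.30, 0.95, true, true), Deployment::Active);
    println!("ok");
}
```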

The contrasting approaches are how quickly it gets pushed through and how much opportunity miners are given to flag that they are happy with the change?

In both cases miners can flag they are happy with it pretty quickly. It is just how quickly everyone else can respond if miners aren’t happy with it for stupid reasons.

If miners dragged their heels or they don’t want to proceed with it one method is more forcing?

Yes. I don’t think there is any reason we can’t be prepared to do both. Set it up so that we’ll have this one year and this other year and at the end of two years it will definitely be activated as long as it seems sensible. If it is taking too long we can have the emergency and do the exact same thing we did with BIP 148 and force signaling earlier. They are not necessarily incompatible ideas either.

AltNet, a pluggable framework for alternative transports

https://github.com/bitcoin/bitcoin/pull/18988

We have this one on AltNet from Antoine Riard.

There were 2 PRs he opened all at once. One is this AltNet framework and there is this other watchdog PR. One of Marco’s comments was that we can have these drivers but he hasn’t thought about all the trade-offs. It seems beneficial if add-ons can be attached and removed during runtime as well as developed independently of the Bitcoin Core development process. I definitely agree that if we are going to be building these drivers that has to happen outside of the Bitcoin Core repository. As far as loading and unloading plugins at runtime goes, I am definitely not a fan of that idea.

Are you not a fan of it at runtime as in while the thing is running or not a fan of it even as part of the configure option at runtime?

If it was a configure option so you had a line in bitcoin.conf “add driver XYZ” and then you start and you’re running, sure. But he actually means you run for a day and then you go I’ll unload this driver and reload this driver, add a new driver while you’re still running…

A bitcoin.conf option makes sense but a RPC call to load and unload seems complicated?

Yes it seems like an attack surface. What are these drivers going to look like? Are you loading some random shared libraries? We definitely don’t want to go down the route of are these drivers authenticated in some way? Have we built them reproducibly and someone has signed off that they are ok to load? That sounds like all sorts of things that we don’t want to be managing or have to do in our codebase.

I also briefly skimmed over it when he released it. The way I understood the watchdog PR was that it is looking at the network and giving alerts on things that it sees that might hint at something fishy going on. My thought on that was that in practice I don’t really know how that is going to work out. If we have two systems that are independent and one says there is a problem and the other system doesn’t say… I would prefer us to have one source of truth. With this watchdog there could be a way where it gives alerts and they are not really that important. People start ignoring the alerts like the version warnings in the blocks. Everyone ignores them. That is a slippery slope to go in that direction. I would have to look at the code and conceptually what the plan is for it to play together with the actual network and what it says is fishy and what is not fishy.

What benefits do you see from this? Would it enable different methods of communication? Radio and mesh networking and so on?

I would have to think about it more and how it is going to be implemented. How does this AltNet play together with the main actual net. I don’t have a model in my head right now how that would work. I would need to think about it.

secp256kfun

https://github.com/LLFourn/secp256kfun

I have released this library called secp256kfun. It is for doing experimental cryptography stuff using Rust and having it work on Bitcoin. It doesn’t use Pieter Wuille’s library. It uses one written by Parity to do Ethereum stuff. That is more low level and gives me the things I needed to get the result I wanted. It is a mid level library really. It gives you nice, clean APIs to do things. It catches errors. For example the zero element one: you have the point at infinity which is an invalid public key but a valid group element. It can know at compile time whether a point is the point at infinity or a valid public key and can give you a compile time error if you try to pass something that could be the point at infinity into a function. Also it has an interesting way of dealing with variable time and constant time. When running operations on secret data it is really important that they run in constant time, so that by measuring the execution time you can’t figure out anything about the secret input to the function. With this library I mark things as secret or public and then the values themselves get passed into functions and depending on whether they are marked as secret or public it uses a variable time or constant time algorithm to do the operation. This is all decided at compile time, there is no runtime cost. It is experimental, I think it is cool. I’d like people to check it out if they are doing some Bitcoin cryptography research.
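
The compile-time secret/public marking described here can be sketched roughly as follows. These marker types and the toy "multiply" operation are invented for illustration; they are not secp256kfun’s actual API, only the general idea of picking an algorithm via a type parameter with no runtime cost.

```rust
// Illustrative sketch of compile-time secret/public marking, in the spirit of
// what is described above. Types and names are made up; not the library's API.

use std::marker::PhantomData;

struct Secret;
struct Public;

struct Scalar<Marker> {
    value: u64, // stand-in for a real field element
    _marker: PhantomData<Marker>,
}

impl<M> Scalar<M> {
    fn new(value: u64) -> Self {
        Scalar { value, _marker: PhantomData }
    }
}

// The marker picks the algorithm at compile time: a constant-time path for
// secret data, a faster variable-time path for public data.
trait TimedOp {
    fn multiply(a: u64, b: u64) -> u64;
}

impl TimedOp for Secret {
    fn multiply(a: u64, b: u64) -> u64 {
        // Pretend constant-time implementation.
        a.wrapping_mul(b)
    }
}

impl TimedOp for Public {
    fn multiply(a: u64, b: u64) -> u64 {
        // Pretend variable-time (faster) implementation with an early exit.
        if a == 0 || b == 0 { 0 } else { a.wrapping_mul(b) }
    }
}

fn mul<M: TimedOp>(x: &Scalar<M>, y: u64) -> u64 {
    M::multiply(x.value, y)
}

fn main() {
    let sk: Scalar<Secret> = Scalar::new(7);
    let tweak: Scalar<Public> = Scalar::new(11);
    println!("{} {}", mul(&sk, 3), mul(&tweak, 3));
}
```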

This is for playing around as a toy implementation and can be used for educational reasons too. A stepping stone to understanding what is going on in the actual secp256k1 library.

I think that is a strong point for it. There is an Examples folder and I implemented BIP 340 in very few lines of code. It really helps you get an idea of how Schnorr signatures work or ECDSA signatures work because you aren’t forced to deal with these low level details. But at the same time you are not missing anything. You are not hacking around anything. You are doing it the way you are meant to do it. As a teaching tool for how this cryptography works, it is pretty good. And for doing research quickly. We have been talking about Rust a lot. As a researcher I don’t want to go down into C and implement cryptography using libsecp, this C library. I am not very good at C. I do know Rust pretty well and there is a lot less work that goes into an implementation in Rust. Especially if you are willing to cut corners and are not paranoid about security. Just for a demonstration or getting benchmarks for your paper or getting a basic implementation to demonstrate your idea. That is what it is there for.

Meetup

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Google Doc of the resources discussed: https://docs.google.com/document/d/1hCTlQdt_dK6HerKNt0kl6X8HAeRh16VSAaEtrqcBvlw/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

0xB10C blog post on mempool observations

https://b10c.me/mempool-observations/2-bitmex-broadcast-13-utc/

These 3-of-4 multisig inputs don’t have compressed public keys so they are pretty large in current transaction sizes. They could potentially be way smaller. I thought about what the effects of the daily broadcast are.

You are saying that they don’t use compressed public keys. They use full 64 byte public keys?

65 bytes yes. With uncompressed public keys, an average input is bigger than 500 bytes. I looked at the distribution over time. I found out that they process withdrawals at 13:00 UTC. I looked at the effects this broadcast had. If you imagine really big transactions being broadcast at the same time at different fee levels, they take up a large part of the next blocks that are found. I clearly saw that the median fee rate levels rose quite significantly. They had a real spiky increase. Even the observed fee rates entering the mempool showed the same increase, which is a likely reaction to estimating a higher fee so users pay a higher fee. There are the two charts. The first one is the median estimated fee rate, what the estimators give me. The second one is the fee rate relative to midnight UTC. There is this clear huge increase visible at the same time BitMEX does the broadcast. The fee rate is elevated for the next few hours.
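
For a rough sense of those numbers, here is a back-of-the-envelope estimate of a legacy P2SH 3-of-4 multisig input size with uncompressed versus compressed keys. Signature and push sizes are approximations, so the totals are ballpark figures only.

```rust
// Rough size estimate for a legacy P2SH 3-of-4 multisig input, comparing
// uncompressed (65-byte) and compressed (33-byte) public keys.

fn input_size(pubkey_len: u64) -> u64 {
    let sig_push = 1 + 72; // length byte + ~72-byte DER signature
    // OP_3 <four pushed keys> OP_4 OP_CHECKMULTISIG
    let redeem_script = 1 + 4 * (1 + pubkey_len) + 1 + 1;
    let redeem_push = if redeem_script > 255 { 3 } else { 2 }; // OP_PUSHDATA2 vs OP_PUSHDATA1
    let script_sig = 1 /* OP_0 */ + 3 * sig_push + redeem_push + redeem_script;
    36 /* outpoint */ + 3 /* scriptSig length varint */ + script_sig + 4 /* sequence */
}

fn main() {
    // Prints roughly 533 vs 404 bytes with these assumptions.
    println!("uncompressed keys: ~{} bytes per input", input_size(65));
    println!("compressed keys:   ~{} bytes per input", input_size(33));
}
```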

The effect you are pointing to there is there is this cascading effect where everyone else starts estimating the fee higher than they theoretically should. Would that be accurate in terms of how to explain that?

The estimators think “We want to outbid the transactions that are currently in the mempool.” If there are really big transactions in the mempool… For example these BitMEX transactions pay a much higher fee. I estimated, based on an assumption of a 4 satoshi per byte increase over 8 hours, that around 1.7 Bitcoin in total fees are paid additionally due to this broadcast every day. This is about 7 percent of the total transaction fees daily.
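
That estimate can be roughly reproduced with round numbers, assuming about 48 blocks in 8 hours and about one million vbytes of fee-paying transactions per block; the post’s own figure of around 1.7 BTC suggests the real inputs were somewhat smaller than these round assumptions.

```rust
// Back-of-the-envelope reproduction of the "extra fees" estimate above, under
// assumed round numbers: ~48 blocks in 8 hours, ~1,000,000 vbytes of paying
// transactions per block, and a 4 sat/vbyte fee rate increase.

fn main() {
    let fee_rate_increase_sat_per_vbyte: u64 = 4;
    let vbytes_per_block: u64 = 1_000_000;
    let blocks_in_8_hours: u64 = 8 * 6; // ~6 blocks per hour on average

    let extra_sats = fee_rate_increase_sat_per_vbyte * vbytes_per_block * blocks_in_8_hours;
    let extra_btc = extra_sats as f64 / 100_000_000.0;

    // Prints roughly 1.92 BTC with these round assumptions.
    println!("additional fees over 8 hours: ~{:.2} BTC", extra_btc);
}
```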

Happy days for the miners. Potentially even more now during a high fee time?

I haven’t looked at what the current effect is. I also noticed that the minimum fee rate that a block includes spikes as well. Transactions with very low fee rates won’t get confirmed for a few hours. I talk about the improvements that can be made. One thing I mention is these uncompressed public keys started to be deprecated as early as 2012. By 2017 more than 95 percent of all keys added to the blockchain per day were compressed. The 5 percent remaining are mainly from BitMEX. For example transaction batching could be helpful as well. In the end the really big part is if you pay to a P2SH multisig address which you can only spend from if you supply a 3-of-4 redeem script then you always have these big inputs that you need to spend. There might be other ways to decrease the size there. I am sure it is not something doable for BitMEX because they have really high security standards. One thing that could be done in theory would be to say “We let users deposit on a non multisig address and then transfer them into a colder multisig wallet and have very few UTXOs lying in these big inputs.” Obviously using SegWit would help as well, reducing the size as the script is moved into the witness. BitMEX mentioned they are working on using P2SH wrapped SegWit and they estimate a 65 percent reduction there. Potentially in the future BitMEX could explore utilizing Schnorr and Taproot. The Musig scheme, I don’t know if that would work for them or not. Combine that with batching and you would have really big space savings. It might even increase BitMEX’s privacy. They use vanity addresses so everything incoming and outgoing is quite visible for everyone who wants to observe what is going there. There is definitely a lot to be done. They are stepping in the right direction with SegWit but it might be possible for them to do more. I spoke to Jonny from BitMEX at the Berlin Socratic and they have hired someone specifically for this reason to improve their onchain footprint over the next year or so. That is good news. Any questions?

With the way the wallets react to this giant transaction in the mempool, they increase their fee rate. It slowly drops after that. Do you think the wallets are reacting correctly in this situation? Is the algorithm they use to choose the fees adapting to this giant transaction efficiently or is it inefficient?

I am not sure. Typically if you estimate fee rates you say “I want to be included in the next block or in the next 3 blocks. Or 10 blocks maybe.” If that is a constraint of your use case or your wallet then maybe that is the right reaction. If you want to pay as low fees as possible it is maybe not the correct reaction.

It is a really great blog post. It strikes a nice balance between really going into the detail but actually explaining things in a way that is easy for the layman to understand.

Fabian Jahr blog post on “Where are the coins?”

https://medium.com/okcoin-blog/btc-developer-asks-where-are-the-coins-8ea70b1734f4

I remember running a node for the first time and doing what it said in the tutorial to run gettxoutsetinfo. It gave me a really weird number for the total amount. It was not clear and there was no detailed information on what exactly went where. I found this post from Pieter Wuille pretty late. He explained it similarly to how I explain it and he also gives particular numbers. He doesn’t talk about gettxoutsetinfo. He talks about the total amount, how many coins will there ever be in total. It is a different view and this is the reason why I didn’t find it in the beginning. With my post I tried to add even more information so I go over the numbers of the very recent state of the UTXO set. I used the work that I did with the CoinStatsIndex, an open pull request. I wanted to link to the different parts of the codebase that implement this and explain why things work the way they do. It is also on my website. OkCoin reposted it and this got a little bit more traction. A lot of these links go to the codebase. My article is aimed at someone who is running a node. I hope with this information I can take them to get more into the code and explore why things work the way they do and form that mindset of going deeper from just running a node, the same way that I did over the last couple of years.

At what point did you come across the Pieter Wuille post?

It was almost done. I had produced some bugs in my code. I was running the unclaimed miner reward code to get the exact number and make sure it really adds up 100 percent. The rest of the text was mostly written. It was great that I found it because I took this blog that I copied over from his stuff. I am not sure I would have found this information otherwise. I could have found in which blocks what went missing but Pieter at the time was maybe in contact with people to identify bugs from the miners. That was great and definitely helped me finish up much more quickly.

I think it is a really cool post.

These are things Fabian that you stumbled across whilst contributing to Core right? This isn’t you using the Core wallet and thinking “I don’t know how this works. I need to research this”?

Initially, when I saw it the first time, that was the case. I was confused by it and I wanted to know why there is this weird number but I didn’t have the tools to analyze it. I accepted the incomplete information that I found on Bitcointalk and went on. At some point I realized that what I am doing for CoinStats is the same stuff that I need to know to analyze what this exact number is. It is a different view on the same statistic. I realized I could actually write an article because I already have all the knowledge and I am working with the code. I can just adapt it a little bit in order to get these exact numbers. That is what motivated me to actually write a blog post. It is not something that I do very often.

I recall seeing some discussion on Twitter where Matt Corallo was talking with one OG about the time he intentionally tried to take one less satoshi in his block reward as a miner in the early days.

With BIP 30 you have this messy solution where overriding blocks are hardcoded into the codebase. Is it possible at this point to actually fork back to an earlier block in 2010? Or do these rules and other rules essentially add a checkpoint at some point after that?

That’s what BIP 34 does. I don’t really remember how exactly that works. There are measures here but I am not exactly sure how it would play out if we went back that far. There are a lot of comments above the code.

It says we will enforce BIP 30 every time unless the height is one of these particular heights and the hash is one of these two particular blocks. If you do a re-org as far back as you like you won’t be able to do anything apart from those two blocks as violations of it.

This rule applies to every block apart from these two blocks.
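
The shape of that rule is roughly as follows. This is a sketch rather than Bitcoin Core’s actual code: the real check also pins the exact hashes of the two historical blocks, at heights 91842 and 91880, whose coinbases duplicated earlier ones.

```rust
// Sketch of the BIP 30 exemption logic described above, not Bitcoin Core's
// actual implementation. The exact block hashes are omitted here.

fn enforce_bip30(height: u64, hash_matches_known_exception: bool) -> bool {
    let is_exception = (height == 91_842 || height == 91_880) && hash_matches_known_exception;
    // Enforce "no duplicate unspent coinbase txids" for every block except
    // those two specific historical ones.
    !is_exception
}

fn main() {
    assert!(enforce_bip30(700_000, false));
    assert!(!enforce_bip30(91_842, true));
    // A re-org producing a *different* block at height 91842 would still be
    // checked, because its hash would not match the pinned exception.
    assert!(enforce_bip30(91_842, false));
    println!("ok");
}
```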

PTLCs (lightning-dev mailing list)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002647.html

This is the mailing list post from Nadav Kohen summarizing a lot of the different ideas about PTLCs.

Nadav is keen on working on PTLCs now despite not having Schnorr. There is this discussion about whether it is worth it in the case of discreet log contracts where you have one version that doesn’t use adaptor signatures and you have the original one that uses some kind of Lightning construction where you have a penalty transaction. The adaptor signature one gets rid of the penalty. You have an oracle who is releasing a different secret key depending on the outcome of the bets. It is Liverpool versus Manchester. The oracle releases one secret key based on Liverpool winning, another one on Manchester winning and another one if they tie. It is really simple to do a bet on this kind of thing if you have the adaptor based one. What you do is you both put money into a joint output, if you don’t have Schnorr you use OP_CHECKMULTISIG. Then you put adaptor signatures on each of these outcomes: tie, one team wins, the other team wins. The point, I call it the encryption key, for the signature for each of those outcomes is the oracle’s point. If the oracle releases one secret key what it means is you can decrypt your adaptor signature, get the real signature and put that on the chain. Then you get the transaction which pays out to whoever gets money in that situation. Or if they tie both get money back for example. This is a really simple protocol. The one without adaptor signatures is pretty complicated. I was just pointing out in this mailing list post that it is far preferable to have the adaptor one in this case. Even though the ECDSA adaptor signatures are not as elegant as Schnorr it would probably be easier to do that now than do what the original discreet log contract paper suggests.
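
The algebra being relied on can be illustrated with a toy sketch using plain modular arithmetic over a made-up group order. This is not real cryptography; it only shows the relationship that the adaptor is the signature offset by the oracle’s secret for that outcome, so revealing the secret lets the counterparty complete the signature.

```rust
// Toy illustration of adaptor-signature completion in a DLC, using plain
// modular arithmetic over a tiny made-up "group order". Not real crypto:
// adaptor = signature - oracle_secret, so signature = adaptor + oracle_secret.

const ORDER: u64 = 7919; // toy prime standing in for the curve order

fn add(a: u64, b: u64) -> u64 { (a + b) % ORDER }
fn sub(a: u64, b: u64) -> u64 { (a + ORDER - b) % ORDER }

fn main() {
    // One oracle secret per outcome (home win, away win, tie); the oracle will
    // reveal exactly one of them depending on the result.
    let oracle_secrets = [1234u64, 4321, 2222];

    // One payout transaction, and hence one signature, per outcome. I only
    // hand my counterparty the adaptors: each signature minus that outcome's
    // oracle secret.
    let my_signatures = [555u64, 666, 777];
    let adaptors: Vec<u64> = my_signatures
        .iter()
        .zip(oracle_secrets.iter())
        .map(|(s, t)| sub(*s, *t))
        .collect();

    // The match ends and the oracle reveals the secret for outcome 0.
    let revealed = oracle_secrets[0];

    // My counterparty completes that outcome's adaptor into the real signature
    // and can broadcast the corresponding payout transaction.
    let completed = add(adaptors[0], revealed);
    assert_eq!(completed, my_signatures[0]);
    println!("completed signature: {}", completed);
}
```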

Rust DLC stuff was worked on at the Lightning Hack Sprint?

Jonas, Nadav and the rest of the team worked on doing a PTLC. The latest one they did, they tried to do a Lightning discreet log contract. You are trying to modify rust-lightning to do something it wasn’t meant to do yet. I don’t think much was completed other than learning how rust-lightning works. It looks like it will go somewhere. The rust-lightning Slack has a DLC channel where some subsequent discussions are going on.

There was a question on IRC yesterday about a specific mailing list post on one party ECDSA. Are there different schemes on how to do adaptor like signatures with one party ECDSA?

The short story is that Andrew Poelstra invented the adaptor signature. Then Pedro Moreno-Sanchez invented the ECDSA one. When he explained it he only explained it with a two party protocol. What I did is say we can simplify this to a single signer one. This doesn’t give you all the benefits but it makes it far simpler and much more practical for use in Bitcoin today. That was that mailing list post. It seems solid to me. There might be some minor things that might change but from a security perspective it passes all the things that I would look for. I implemented it myself and I made several mistakes. When I looked at whether Jonas Nick had made the same mistakes, he didn’t. He managed to do all that in a hackathon, very impressive.

On the scalability issues of onboarding millions of LN clients (lightning-dev mailing list)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002678.html

Antoine Riard was asking the question of scalability, what happens with BIP 157 and the incentive for people to run a full node versus SPV or miner consensus takeover.

I think the best summary of my thoughts on it, in much better words than I could put it, is John Newbery’s answer. These are valid concerns that Antoine is raising. On the other hand they are not specific to BIP 157. They are general concerns for light clients. BIP 157 is a better light client than we had before with bloom filters. As long as we accept that we will have light clients we should do BIP 157.

This conversation happened on both the bitcoin-dev mailing list and lightning-dev mailing list.

This sums up my feelings on it. There were a lot of messages but I would definitely recommend reading John’s one.

The Luke Dashjr post was referring back to some of Nicolas Dorier’s arguments. He would rather people use an explorer wallet in his parlance, Samourai wallet or whatever, calling back to the provider as opposed to a SPV wallet. John’s comment addresses that at least somewhat. It is not specific to BIP 157.

The argument is really about whether we want to make light clients not too good in order to disincentivize people from using them, or not. It also comes down to how we believe the future will play out. That’s why I would recommend people to read Nicolas Dorier’s article and look at the arguments. It is important that we still think about these things. Maybe I am on the side of being a little more optimistic about the future. That more people will run their own full nodes still, or enough people. I think that is also because everyone who values their privacy will want to run a full node still. It is not like people will forget about that because there is a good light client. I honestly believe that we will have more full nodes. It is not like full nodes will stay at the same number because we have really good light clients.

To reflect some of the views I saw elsewhere in the thread. Richard Myers is working on mesh networking and people trying to operate using Bitcoin who can’t run their own full node. If they are in a more developing world country and don’t have the same level of internet or accessibility of a home node they can call back to. Even there there are people working on different ideas. One guy in Africa just posted about how they are trying to use Raspiblitz as a set up with solar panels. That is a pretty cool idea as well.

This is the PGP model. When PGP first came out it was deliberately hard to use so you would have to read documentation so you wouldn’t use it insecurely. That was a terrible idea. It turns out that users who are not prepared to invest time, you scare off all these users. The idea that we should make it hard for them so they do the right thing, if you make it hard for them they will go and do something else. There is a whole gradation here. At the moment a full node is pretty heavy but this doesn’t necessarily have to be so. A full node is something that builds up its own UTXO set. At some stage it does need to download all the blocks. It doesn’t need to do so to start with. You could have a node that over time builds it up and becomes a full node for example, gradually catches up. Things like Blockstream’s satellite potentially help with that. Adding more points on the spectrum is helpful. This idea that if they are not doing things we like we should serve them badly hasn’t worked already. People just use random Electrum peers instead. If we’ve got a better way of doing it we should do it. I’ve always felt we should channel the Bitcoin block headers at least over the Lightning Network. The more sources of block headers we have the less chance you’ve got of being eclipsed and sybil attacked.

I recall some people were talking about the idea of fork detection. Maybe you might be not able to run your own full node and you are running some kind of SPV or BIP 157 light client. It would know that there is a fork so now I should be more careful instead of merely trusting whatever miner or node has served me that filter.

There are certainly more things we can do. The truth is that people are running light clients today. I don’t think that is going to stop. If you really want to stop them work really hard on making a better full client than we have today.

I also liked Laolu’s post too. You can leverage existing infrastructure to serve blocks. There was a discussion around how important economic weight is in the Bitcoin network and how easy it would be to try to take over consensus and change to their fork of 22 million Bitcoin or whatever.

This was what happened with SegWit2x, aggressively going after the SPV wallets. They were like “We are going to carry all the SPV wallets with us”. It is a legitimate threat. There is an argument that if you have significant economic weight you should be running a full node, that is definitely true. But everyone with their little mobile phone with ten dollars on it is maybe not so important.

One of the arguments that came up in this thread is if you make it so that the full nodes have to keep serving the blocks for the light clients that might increasingly put more pressure on the full nodes to serve the filters. That might create this weird downward spiral incentive to not run their full nodes because of the increasing demands on them.

Serving filters is less intensive than serving blocks. We’ve already got that. We are already there.

If you don’t want to serve the filters you don’t have to serve them. It is a command line option, it is off by default.

How easy is it to run your full node and pair it with your own phone? It looks like we are looking for an enterprising entrepreneur to create a full node that is also a home router, a modem and a Pi hole. Does anyone have any thoughts on what could be done to make that easier?

Shouldn’t we have protocol level Bitcoin authentication and encryption first so we can guarantee that we are connecting to our own node? That seems like the obvious thing.

I am maybe one of the few people who doesn’t buy the economic weight argument about these light clients. The thinking goes that if a large proportion of the economy are not verifying their own transactions the miners could just change the rules. Create Bitcoin Miner’s Vision with 22 million Bitcoin and everyone will just go along with it. Even if a large amount of economic weight is with that I fail to see how people could just go along with it. It is going to be noticed very quickly. Light clients would have to switch. It would be a bad scenario if a lot of infrastructure is running on light clients. But I don’t think it is going to change the definition of what is Bitcoin. I can’t see it happening. Of course if no one is running full nodes then yeah. As long as it is easy to run a full node, you can run it at home on your Raspberry Pi. Some people decide not to because of their phone or they don’t have the right hardware or internet connection. I can’t see how this could change what is Bitcoin in any scenario. What Bitcoin is will always be what bitcoind full nodes think is Bitcoin. If you have some other coin on some other fork of it, it is not going to be Bitcoin and it is not going to be worth anything.

That was literally what SegWit2x was about. That was literally what they tried to do. They said “We are taking Bitcoin in this direction and we’re taking the light clients with us because we’ll have the longer chain.” It is not a theoretical, sounds-unlikely scenario, that was literally what the plan was. The idea was that it would force enough people to flip and that would be where the economic majority was. It would lever the small number of people running full nodes into switching to follow that chain as well because that was going to be where everyone was. Your useless old Bitcoins weren’t going to be worth anything. This is not some theoretic argument, this was literally what happened. Saying this is not going to happen again because reasons is probably not going to carry weight for at least ten years until people have forgotten about that one.

But would it work? If there were more light clients is it more likely to work? I don’t see it.

It would absolutely work. All the light clients would follow it. Then your exchange or service doesn’t work anymore if you are not following the same chain as all the light clients talking to you. Follow the light clients if they are the economic majority that is where you have to go.

It becomes a question of economics then. I don’t see it actually playing out that way. The reason is the people who decide what is Bitcoin is not really the owners of Bitcoin or the people in the P2P network. It is more like the people who have US dollars who want to buy Bitcoin. They are the ones who decide what it is at the end of the day. The exchange is going to offer them two different ticker symbols for these two different Bitcoins and they are going to decide which one is the real one. They are not going to decide based on which one the light clients are following. I don’t see how having light clients follow you is going to help you too much in deciding what is the real Bitcoin.

If Bitcoin was used for real daily economic activity which it isn’t today then you’ve got two choices. This change happens, the miners push this through and the light clients go one way. Say you’re relying on Bitcoin income and people sending you Bitcoin. You have two choices. All the light clients don’t work for your customers or you change, follow the miners and go with the economic majority. You may rail against it, “This is not real Bitcoin”, it doesn’t matter. People on the margins will go with the majority. If the economic majority goes one way you will go that way because otherwise you will suffer. At the moment we have this luxury that most people are not living day to day on Bitcoin. They could sit it out and wait to see what wins. But if there is real monetary flow with Bitcoin then that isn’t necessarily the case and you can place time pressure on people. The economic majority is significant then. The time it would take to get a response, to get all the SPV nodes to flip across or reject whatever blocks could be significant. It could be long enough that people will cave. Given we’ve seen serious actors try this once and a significant amount of mining power threaten to try this I think it is a real scenario.

Let’s say in that scenario SPV wallet developers think how can I let users select which chain they wish to be on. At that point the SPV wallet developers would have to quickly update their code to help their users select the correct chain, not this fork 22 million Bitcoin chain. Theoretically it is possible but it is not so easy to quickly put in place to help them fight the adversarial change.

Imagine the chaos that would ensue. The people who haven’t upgraded can’t send money to people who have upgraded. They are also under pressure to get their software working. The easiest way to get their software working is to follow everyone else.

I see the argument and I think you’ve put it better than I’ve ever heard it before. I still think there is “Ok I have to make a decision about what I am going to do.” I can follow this chain but if the people who have US dollars who are investing in Bitcoin, big players, don’t think that chain has Bitcoin on it. I do see the economic majority argument but just not from economic majority of holders of Bitcoin.

You are simplifying the argument. There are different constituents. Even though I agree that one set of constituents are the whales that hold Bitcoin or are buying lots of Bitcoin, if they are going to lose money because all the other constituents in the ecosystem say that chain isn’t Bitcoin then it doesn’t matter what they say. They either lose money or they go with the ecosystem majority. It is not as simple as saying one constituent has all the power. This is definitely not the case. There are four or five different constituents.

They all have one thing in common. They are bidding for Bitcoin with US dollars or some other currency or goods. People who have Bitcoin do not get to decide what the real Bitcoin is because they are going to have both Bitcoin and the forked coin. They don’t get to decide which one is the real one. The guys who decide are the people who purchase it on exchanges. They are going to say which is the real one, we have a social consensus on which one is the real one. I see there could be an attack on it but I don’t see how light clients really are going to be the key to the malicious attack winning.

This is where I agree with you. I think it is not light client or full node. It is light client or custodial. Custodial are just as bad as light clients if not worse. Then you are fully trusting the exchange’s full node. You also only need one exchange full node to serve billions of users. If we can move people from custodial to light client I see that as an improvement. That is the comparison people don’t often make.

This seems counter to the Nicolas Dorier argument. He would rather you use a custodial wallet because that is just bad for your risk because you are not holding your own keys. That is resisting a miner takeover.

I would think that would make the economic weight argument even worse. Because then they actually have the Bitcoin, you have nothing. If you have a SPV wallet you ostensibly have the keys and you can do something else with them.

If you want to discuss the bad things you can do having the majority of clients controlled by a custodial entity is arguably even worse than the SPV mode.

What Nicolas says is that he would rather use a custodial wallet himself if that means the whole ecosystem doesn’t use light clients.

That is the PGP disease. Let’s make it really hard to use them so everyone is forced into a worse solution. It is a terrible idea.

What are some of the ideas that would make it easier for people to run their own full node? You mentioned making it easier for people to call back to their own node. Right now it is difficult.

The friends and family idea. That computer geek you know runs a full node for you which is second best to running your own. The idea of a node that slowly works its way backwards. It has a UTXO snapshot that it has to trust originally. Then it slowly works its way backwards and downloads blocks for however long it takes. Eventually it gets double ticked. It is like “I have actually verified. This is indeed the genuine UTXO set.” A few megabytes every ten minutes gets lost in the noise these days.

That is related to the James O’Beirne assumeutxo idea.

And UTXO set hashes that have been proposed in the past are pretty easy to validate. You can take your full node and you can check that this is the hash you should have at this block. You can spread that pretty widely to people who trust you and check that is something you’ve got as well. Creating a social consensus allows you to bootstrap really fast. I can grab that UTXO set from anywhere and slowly my node will work its way backwards. With some block compression and other tricks we can do that. The other issue is of course that there is a lot of load. If you serve an archived full node and serve old blocks you get slammed pretty hard with people grabbing blocks. I had a half serious proposal for adopt a block a while ago. Every light client would hold 16 blocks. They would advertise that and so you would be able to grab 16 blocks from here and 16 blocks from there. Rather than hammering these full nodes even light clients would hold a small number of blocks for you. You could do this thing where you would slowly wind your way backwards and validate your UTXO set. I think the effort is better put into making it easier to run a full node. If you don’t like light clients move the needle and introduce things that are closer to what you want in a full client. Work on that rather than trying to make a giant crater so that no one can have a light client. That is short term and long term a bad idea.
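
As a concrete illustration of the socially distributed UTXO set hash idea, here is a minimal sketch, assuming you collect hashes out of band from people you trust. The gettxoutsetinfo RPC is real Bitcoin Core; the trusted hash table and its entries are placeholders for illustration only.

```python
# Minimal sketch: compare your node's UTXO set hash against hashes published
# out of band by people you trust. gettxoutsetinfo is a real Bitcoin Core RPC;
# the TRUSTED table and its values are placeholders.
import json
import subprocess

TRUSTED = {
    # height -> set of hex digests collected from sources you trust
    634000: {"<digest from friend A>", "<digest from friend B>"},
}

def utxo_hash_matches():
    info = json.loads(subprocess.check_output(["bitcoin-cli", "gettxoutsetinfo"]))
    digests = TRUSTED.get(info["height"], set())
    # Field name used by Bitcoin Core releases around this time.
    return info.get("hash_serialized_2") in digests

if __name__ == "__main__":
    print("UTXO set hash matches a trusted source:", utxo_hash_matches())
```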

Will Casarin made a comment around how he had set up his Bitcoin node with his phone to call back to his own node. It is more like enthusiast level or developer level to use your own full node and have your phone paired with it. To have your own Electrum Rust Server and pair your laptop back to it. How can that be made easy for the Uncle Jim, friends and family node so he can set it up so they call back to his node.

If you stop encouraging people to experiment with this stuff you can inadvertently stop entrepreneurs from figuring out cool new ways to have your phone paired with your full node and make it easy. You can stop people from getting into Bitcoin because that option doesn’t exist. That lowers demand, that lowers the price, and the price lowers the hashrate. You can have your pure Bitcoin network with only full nodes but it is at a lower price and a lower hash rate. Is your security better? It is definitely up for debate.

On Neutrino, the way I read that was it made it easier for me as a local enthusiast to just effectively serve SPV. What doesn’t get brought up too much is the counterargument that if you make it easier to be an SPV you are going to get a lot more honest SPVs out there, and that will sway the economic majority. Instead of there being blockchain.info as the only party serving SPV filters, and they own the money, if I’m an SPV and others are SPVs there will be a lot more of the network who are acting honestly and needing SPV filters.

The problem with SPV is that it trusts the person who serves you.

But my node isn’t going to serve you rubbish forks.

I have been running an Electrum server for many years. Often I see people at conferences telling me they are using my node. They are trusting me. I say “Don’t trust me” but it has happened this way. At least they know “this guy is running a node and I have to trust him.” It is not some random node on the internet.

If you trust a certain full node or you connect to a certain full node and you are getting light client filters from it then you are fine.

I am still a random person for them. Maybe they saw me once or twice at a conference. I always tell them “Run your own node.”

This is where you need to have a ladder. It is much better that people are going “I am connecting to this guy’s node. He is serving me filters” than none of the options apart from a full node being there and therefore “I am just going to trust Coinbase or Kraken to deal with it. I am not going to worry about how my balance is being verified.” The situation that you describe is superior to custodial. And it is a long journey. As long as people understand what they are doing you can see in 5, 10 years they are like “I don’t want to trust this guy’s node anymore. I don’t want to have filters served from this guy’s node. I am going to run my own node.” You have given them a stepping stone towards running a full node that wouldn’t have been there if you had ditched all the superior light client options.

RFC: Rust code integration into Bitcoin Core

https://github.com/bitcoin/bitcoin/issues/17090

Last month we spoke about this idea.

This has been a topic that has floated around for a while, a Rust integration in Bitcoin Core. It first started in the middle of last year when Cory Fields and Jeremy Rubin hacked together some initial Rust integration and had a proof of concept as to how you might integrate a simple Rust module into the Core codebase. There was an initial PR that has a lot of conceptual level discussion in it. I would definitely recommend that anyone interested in Rust integration bootstrap by reading through this discussion. There is Cory’s presentation he gave at the MIT conference (2019). That summarizes a lot of the thinking on why a language like Rust might be something good for Bitcoin. When we are having a lot of these discussions a lot of the thinking is five or ten years down the line, thinking about where we want to be at that point, as opposed to saying we should rewrite everything in Rust right now. Cory brings this up as well. You can look at a lot of other big security critical open source projects like Firefox. Mozilla is one of the orgs that kicked off a lot of this research into safer languages. Web browsers are exactly the kind of thing you want to build using safer languages. Maybe Rust makes sense for us. There would be downsides to a blanket switch to using Rust next week. There are a lot of long term contributors working on Bitcoin Core who may not be interested in or as fluent in Rust as they are in C or C++. You can’t necessarily just switch to a new language and expect everyone else to pick that up and be happy with using it going forward. We have thousands and thousands of lines of code and tests that have worked for near on a decade. There is not necessarily a good reason to rewrite for the sake of rewriting it. Maybe better languages come along. Particularly not when some of the benefits that Rust might present are not necessarily applicable to those parts of the codebase. Since the original discussion around the end of last year Matt ended up opening a few PRs. He put forward a PR serving headers over DNS using Rust, and a parallel P2P client in Rust PR. He wrote another networking stack or peer-to-peer stack in Rust that would run alongside the C++ implementation. If the C++ implementation crashed for some reason maybe you could fail over to the Rust implementation. Trying out things like that. Maybe you want to run the Rust implementation and fail over to the C++ implementation. There is lots of code there. I think it is still worth discussing but you can imagine how the discussion ended. It got to the point where a lot of our PRs and discussions end up: there’s a flurry of comments and opinions for a week or two and then it dies down and everyone moves on to discussing and arguing the next interesting thing to come up. It is probably one of the things that needs a champion to pick it up and really push the use case. Obviously Matt has tried to do that and it hasn’t worked. I don’t know what needs to be done differently or not. It is one of these open for discussion topics for the repo. Personally I think it is really interesting. I have played around with writing some Rust. I like writing it. It seems to be where big software projects are going. That includes things like operating systems for desktop and mobile, and web browsers. Ideally, in five or ten years’ time, if we manage to fix a lot of the other things we need to fix up and trim down in the Core codebase, at that point if some or all of it has been transitioned into Rust I wouldn’t be against it.
The path to that is definitely unclear at the moment.

Pathway wise is this even possible to have a parallel Rust thing going at the same time? Is that potentially going to run into problems where the Rust version disagrees with the C++ version?

You have already got Bitcoin clients written in Rust. You’ve got rust-bitcoin, which Andrew (Poelstra) and Matt (Corallo) work on, which still has a big warning at the top of the README saying “Don’t use this in production for consensus critical applications.”

I have never seen a rust-bitcoin node listed on a site like bitnodes.io as a node that is following the consensus rules and that you can connect to. So it appears people are following that warning.

I am not sure who may or may not be running them. Maybe Square Crypto are running some rust-bitcoin nodes. Maybe Chaincode, when Matt was there, had a few running. I’m not entirely sure. It looks like rust-bitcoin has some extra features that Core doesn’t.

I have never seen a rust-bitcoin node anywhere.

I have never seen one on a node explorer.

Coindance or the Clark Moody site. That is all I’m familiar with.

What is the right way to introduce Rust? Would it be a pull request where it is just the integration and the conversation is focused on whether Rust should be introduced into Core or not? Or should it be included in a feature pull request where the feature in some way shows off how Rust can be used and what the pros of Rust are? Then you would be discussing the feature and Rust at the same time.

Splitting the conceptual discussion and the actual implementation is tricky. As soon as you turn up with an implementation the conversation is always likely to diverge back to conceptual again. I think we have gone in circles enough with the conceptual stuff. At this point we may have to open enough PRs and throw enough Rust code at the wall until something sticks that we can get some agreement on. Even if we can’t get it into the actual Core codebase, if we start pulling it into some of our build systems and tooling, that is another way to inject it into or get in front of a lot of the developers working on our codebase. We had some horrible C tools that were used as part of the Mac OS build process. I have played around with rewriting one of those in Rust. It works but there is very little interest in the build system, let alone operating system specific components of it, so I thought that might not be the best way forward for some Rust integration. Maybe writing some sort of test harness or some other test component in Rust. Something that is very easily optional to begin with as well. As soon as you integrate Rust code into Core then it has to become part of the full reproducible build process as well. This makes it more tricky. If you can jam it into an additional test feature that might be a way to get some in. Regardless of what you do, as soon as you open a PR that has some sort of Rust code in it there are going to be some people who are going to turn up and say they are not interested regardless of what it does. Be prepared to put up with that. If it is something that you are interested in I am happy to talk about that and discuss that and help you to figure out the best path to get it done.

Rust is definitely promising. I don’t know but reading through the conversation I think there is consensus people find the language interesting and find the properties that it brings promising. To steelman the opposing argument it is not only that a lot of contributors can’t write or review Rust code, it is also how much of this is a bet on Rust as a language. There have been so many languages that have sprung up in the past and showed promise but the maintainers of that language or the path it went down meant it fizzled out and it didn’t become the really useful language that most people are using. At what point do you have to make a bet on an early language and an early ecosystem and whether Core should be making that bet?

That is a good thing to think about. I think Rust is like 11 or 12 years old. There is one argument to be made that it is not stable yet. Maybe that is something that will improve over the next 2, 3, 4 years. Maybe we shouldn’t be the software project that makes bets on “new” languages. At the same time there are a lot of big companies making that same bet. I dare say Microsoft with Windows is one of them looking at using Rust. Mozilla obviously started Rust and has a very long history of using it in production for many many years now in security critical contexts. I wish I had more examples than that but I don’t off the top of my head. The long term contributors not necessarily knowing Rust is obviously a good argument. But at the same time the consensus critical code is going to be the absolute last thing that we rush to rewrite in Rust. The stuff that you would be rewriting in Rust is probably going to be very mundane to begin with. I would also like to think that if we do pick up a language like Rust maybe we pick up more contributors. We have seen a few PRs opened by people who aren’t necessarily actively contributing to Core all the time, but they may very well contribute more if they could contribute in a language like Rust, which would be great. There is no end of arguments from either side to integrate or not.

I really like Rust. It is going from strength to strength and it has got definite merit. But you have got an existing base of expertise, people who have tooled up in a certain language. Some of that is self selected but whether that is true or not they exist. Now you are asking people to learn two languages. If you want to broadly read the codebase you have to know two languages and that is a significant burden. Perhaps in five years’ time when Rust is perhaps an obvious successor to C++ then that intersection… Arguing that we will get more contributors for Rust is probably not true. You might get some different contributors but C++ is pretty well known now. There is the sexy argument: “Work on this, it is in Rust.” Unfortunately that will become less compelling too over time, as Rust gets bigger and more projects are using it and there are other cool things you could be doing with Rust anyway. It would be better to do stuff in Rust as far as security goes. I am cautiously hopeful that the size of bitcoind is going down not up. Stuff should be farmed out to the wallet etc with better RPCs and better library calls so it is not this monolithic project anymore. The argument that we are going to be introducing all these security holes if we don’t use Rust is probably weaker. As you said the last thing that will change is the consensus code. That said I like projects that stick to two languages. It is really common to have projects where you have to know multiple languages to deal with them. Working around the edges and reworking some of the tooling in Rust, building up those skills slowly, is definitely the way forward. It will take longer than you want and hope but it is a pretty clear direction to head in. But heading in that direction, it is in terms of a decade that we will be talking about finally rewriting the core in Rust. It is not going to happen overnight.

I definitely share your optimism for the size of bitcoind going down over time as well. Even if we couldn’t do Rust, if all we could do is slash so much out of that repository I would be very happy.

At least you don’t have the problem that your project is commonly known as c-bitcoin.

Is there a slippery slope argument where if you start introducing a language even right at the periphery that it is hard to ever get it out. It is the wrong word but it is kind of like a cancer. Once it is in there it starts going further and further in. If you are going to start introducing it even at the periphery you need to be really bought in and really confident. There must have been examples with Linux where people wanted to introduce other languages than C and it didn’t happen as far as I know.

It didn’t happen. You have a base of developers who are comfortable with a language and you are not going to move to anything else. The only way you get Linux in Rust is to rewrite everything.

I thought there were people introducing some Rust things into the Linux kernel.

We went through the whole wave of people adding modules in C++ as well. But Linux has an advantage. If Linus (Torvalds) decided he really liked Rust and it was the way forward he could drag everyone else with him. It is the benevolent dictator model. I can’t think of an open source project that has switched implementation languages and survived. I can’t think of many that have tried to switch implementation languages. The answer is generally if you are going to rewrite it is usually a different group that come along and rewrite it.

What about making a fresh one starting from scratch in Rust?

There is rust-bitcoin.

It is just missing a community of maintainers.

And people using it as well.

If you are going to introduce things into bitcoind you could do the piecemeal approach where you take certain things and shove Rust in there. Wouldn’t it be cool to have a whole working thing in Rust and then take bits out as you want them into the bitcoind project?

The idea of having an independent implementation of Bitcoin has been pretty roundly rejected by the core developers.

Bitcoin Core is not modular right now so I think there would be a lot more steps before that would be possible.

You would need libconsensus to be a real thing.

libconsensus is another good thing to argue about if we have endless time.

We haven’t talked much about the promise of Rust. Rust is really exciting. Definitely look into Rust. It is just whether there is a potential roadmap to get it into Core and whether that is a good thing or not is the question.

If we had managed to have a Core dev meetup earlier this year, we didn’t because it got cancelled, I’m sure that would have been one of the things very much discussed in person. We’ll wait and see.

Dual funding in c-lightning

https://github.com/ElementsProject/lightning/pull/3418

Lisa and I are doing daily pair programming four days a week working on the infrastructure required for dual funding. We have gone through the spec, we are happy with the way the spec works, but we have to implement it. The thing with dual funding is that instead of creating a funding transaction unilaterally, where you just get some information from the peer about what their output should look like etc, you negotiate. I throw in some UTXOs, you throw in some UTXOs, I throw in some outputs, you throw in some outputs. You are effectively doing a coinjoin between the two of you. That is a huge ball of wax. Coinjoin is still a bleeding edge technology in Bitcoin land. We use libwally, which has PSBT support now, but it turns out to be really hard; we hit some nasty corner cases and I just filed a bug report on it. PSBT is set up for “We’ve got this transaction, let’s just get the signatures now.” But actually building a transaction where you contribute some parts and I need to modify my PSBT to add and delete parts and put them in the right order is something that the library doesn’t handle well. Even just building up the infrastructure so you can do this is really interesting. We have got a PR that plumbs PSBTs through c-lightning now as a first class citizen which is the beginning of this. The PSBT format is an obvious consequence of the fact that increasingly in Bitcoin we don’t put all the stuff onchain. You have to have metadata associated with things. Some of that is obvious: with pay-to-pubkey-hash you need to know what the real pubkey is, and that doesn’t appear. There are more subtle things, like needing to know the input amount to create a signature in SegWit. That is not part of the transaction as such, it is kind of implied. Plumbing all that through is step zero in getting dual funding to work. We will get onto the dual funding part, which is the fun bit, but we have spent a week on c-lightning infrastructure to get that going. What we are doing is for 0.9, the next release which Christian (Decker) will be captaining, we are trying to get as much of the infrastructure in as possible. That monster PR was this massive number of commits, this giant unreviewable thing. We said “Let’s break it up into the infrastructure part and PSBT support”. Once we’ve got all that done we will have an implementation that will be labelled experimental. With c-lightning you can enable experimental features. It will turn on all these cool, not spec finalized things that will only work with other things that have been modified to work with it. One of those will be dual funding. We are hoping that this will give us some active research that will feed back into the spec process and it will go around like that. Dual funding has been a long time coming. Lisa doesn’t really want to work on dual funding, she wants to work on the advertisement mechanism where you can advertise that you are willing to dual fund. She has to get through this first. That’s the carrot. We are definitely hoping that it will make the next release in experimental form and then we can push on the spec. The spec rules are you have to have two independent implementations in order to get it in. Ours will be one and then we will need one of the others to step up and validate that our spec is implementable and that they are happy with it. Then we get to the stage where it gets in the spec. It is a long road but this is important. It does give us a pay-to-endpoint (P2EP) like setup.
Not only does it give us both the ability to add inputs and outputs, but also the ability to add whatever inputs and outputs you find relevant. You can use it to do consolidation and things like that, which will really throw off chain analysis because there is a channel coming out of this as well.

Would you be trying to do that equal output thing as well as part of that channel open? Or is it just a batched transaction?

When we have Schnorr signatures of course you won’t be able to distinguish the channel opens. Then it starts to look completely random. At the moment you can tell that that is obviously a channel close even if it is a mutually agreed close. If it is 2-of-2 it is almost certainly a channel. In the longer term you will then get this coinjoin ability. In the initial implementation we made sure it is possible to negotiate multiple channel opens at once and produce this huge batch. You could also be using it for consolidation. Someone opens a channel with you, while you are here and making this transaction may as well throw this input and output in. We’ll do that as well. You can get pretty aggressive batching with this stuff.

It requires interoperability with some other Lightning client like lnd or eclair or whoever. Presumably the idea then is c-lightning would create a PSBT with one of the other clients to jointly construct a transaction so they can make the channel together.

We are not using generic PSBT across the wire. We are using a cut down add input, add output kind of thing. It is a subset of PSBT. We will use that internally. That also gives us a trivial ability for us to improve our hardware wallet support. Once you are supporting PSBTs then it is trivial to plug it into hardware wallets as well. That is pretty exciting. We are using dual funding to motivate us to do PSBTs.

Have you looked at how lnd have implemented it yet? Is that informing how you implement it in c-lightning?

They haven’t implemented dual funding.

They have implemented PSBT support.

PSBT support itself is actually pretty easy to do. You can already do PSBT today in c-lightning as a plugin. It wasn’t in the core. You would build a PSBT based on the information the core would give you. That works. You have been able to use hardware wallets with c-lightning since before the last lnd release. It wasn’t very polished. This will bring it to a level of polish where we can recommend people use it. Rather than telling people they need to string these pieces together to get it to work. The PSBT itself, we are using libwally. We import libwally, pretty easy.

Would PSBT become part of the Lightning spec at that point?

No. We let you add arbitrary outputs but we don’t need the full PSBT for that. You just tell it the output. As long as you have got the bits that we need that opens the channel and our outputs go where we expect we let you put whatever stuff you want in the funding transaction. If there is something wrong with it it won’t get mined. That is always true. You could put a valid input and then double spend that input. We always need to handle the case where the opening transaction doesn’t work for some reason. We might as well let you use anything. Just give us the output script, we don’t need the full PSBT flexibility. There is some caution with allowing arbitrary PSBTs. Other than generally having a large attack surface it would be a requirement that you understand the PSBT format and we are not ready to go that far. It is quite easy. “Here’s an output, here’s an output number” and then you can delete the number. The negotiation is quite complicated. “Add this output. Add this input. Add this output. Flip that input. Flip that output.” As long as at the end you’ve got all the stuff. That turns out to be important. If you are negotiating with multiple people you have to make sure everyone sees things in the same order. You have an index for each one and you specify that it will be in ascending order. That is required because if you are routing an output I might need to move my output out the way and put yours in there because that is what you expect. We all have to agree. It turns out to be quite a tricky problem. The actual specification of the inputs and outputs themselves is trivial so we haven’t use PSBT for that. The whole industry has been moving to PSBT because it solves a real problem. That problem is that there is metadata that you need to deal with a transaction that is not actually inside the transaction format that hits the blockchain. This is good, keep the minimum amount of stuff on the blockchain. It does create this higher hurdle for what we need to deal with. The lack of standardization has slowed down development. It is another thing you need to be aware of when you are developing a wallet or hardware wallet. PSBT really does fill that gap well. I expect to see in the future more people dealing with PSBTs. A Bitcoin tx is almost like an artifact that comes out of that. Everyone starts talking PSBTs and not talking transactions at all.
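
To make the metadata point concrete, here is a rough sketch of the kind of per-input and per-output data a BIP 174 PSBT carries alongside the unsigned transaction. The keys loosely mirror BIP 174 roles, the values are placeholders, and this is a conceptual model rather than a parser or serializer.

```python
# Rough sketch of what a PSBT carries beyond the raw transaction bytes.
# Keys loosely mirror BIP 174 roles; the values are placeholders.
psbt = {
    "unsigned_tx": "<raw transaction with empty scriptSigs/witnesses>",
    "inputs": [
        {
            # A signer needs the amount being spent to produce a SegWit signature.
            "witness_utxo": {"amount_sat": 100_000, "script_pubkey": "<scriptPubKey>"},
            "witness_script": "<e.g. the 2-of-2 script behind a funding input, if any>",
            "bip32_derivation": {"<pubkey>": "m/84'/0'/0'/0/3"},
            "partial_sigs": {},   # filled in independently by each signer
        }
    ],
    "outputs": [
        {"bip32_derivation": {"<pubkey>": "m/84'/0'/0'/1/7"}},
    ],
}
```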

In my recent episode with Lisa (Neigut) we were talking about the script descriptors in Bitcoin Core. Would that be something that comes into Lightning wallet management as well, so that you would have an output descriptor to say “This is the onchain wallet of my Lightning wallet”?

We would love to get out of the wallet game. PSBT to some extent lets us do that. PSBT gives us an interoperable layer so we can get rid of our internal wallet altogether and you can use whatever wallet you want. I think that is where we are headed. The main advantage of having normal wallets understand a little bit of Lightning is that you could theoretically import your Lightning seed and get them to scrape and dredge up your funds which would be cool. Then they need to understand some more templated script outputs. We haven’t seen demand for that yet. It is certainly something that we have talked about in the past. A salvage operation. If you constrain the CSV delay for example it gets a lot easier. Unfortunately for many of the outputs that you care about you can’t generate all from a seed, some of the information has to come from the peer. Unless you talk with the peer again you don’t know enough to even figure out what the script actually is. Some of them you can get, some of them are more difficult.

I read Lisa’s initial mailing list post where she wrote down how it works. This is a protocol design question. Why is it add input, add output, and not just one message which adds a bunch of inputs and outputs, and then the other guy sends you a message with his inputs and outputs and you go from there?

It has to be stateful anyway because you have to have multiple rounds. If you want to let them negotiate with multiple parties they are not going to know everything upfront. I send to Alice and Bob “Here’s things I want to add.” Then Alice sends me stuff. I have to mirror that and send it to Bob. We have to have multiple rounds. If we are going to have multiple rounds keep the protocol as simple as possible. Add, remove, delete. We don’t care about byte efficiency, there is not much efficiency to be gained. A simple protocol to just have “Add input, add input, add input” rather than “Add inputs” with a number n. It simplifies the protocol.

I would have thought it would be harder to engineer if you are modifying your database and then you get another output coming in. Now maybe you have to do locking. I guess you have to do that anyway?

The protocol ends up being an alternating protocol where you say “I have nothing further to add and I’m finished.” You end up ping ponging back and forth. It came out of this research on figuring out how to coordinate multiple of these at once. Not that we are planning to implement that, but we wanted to make sure it was possible to do. That does add some complexity to the protocol. The rule is you don’t check until the end. Obviously they need to have enough inputs to fund the channel that they’ve promised and stuff like that. But they might need to change an input, remove a UTXO and use a different one. Then they would delete one and add another. That is completely legal even though at some point it could be a transaction with zero inputs. It is only when it is finalized that you do all the checks. It is not that hard as long as it is carefully specified. Says he who hasn’t implemented it yet. That’s one of the reasons we implement it, to see if it is a dumb idea.
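
A toy model of that alternating add/remove negotiation might look like the sketch below. The message names, fields and serial id ordering rule are simplified stand-ins rather than the spec’s wire format, and as described above no validity checks run until both sides are finished.

```python
# Toy model of the alternating "add/remove input/output" negotiation.
# Message names and fields are illustrative, not the spec's wire format.

class CollaborativeTx:
    def __init__(self):
        self.inputs = {}    # serial_id -> prevout
        self.outputs = {}   # serial_id -> (script, amount)

    def apply(self, msg):
        kind = msg["type"]
        if kind == "add_input":
            self.inputs[msg["serial_id"]] = msg["prevout"]
        elif kind == "remove_input":
            del self.inputs[msg["serial_id"]]
        elif kind == "add_output":
            self.outputs[msg["serial_id"]] = (msg["script"], msg["amount"])
        elif kind == "remove_output":
            del self.outputs[msg["serial_id"]]
        # Deliberately no checks here: mid-negotiation states may be unfundable.

    def finalize(self):
        # Both peers sort by serial id so everyone agrees on the final ordering.
        ins = [self.inputs[k] for k in sorted(self.inputs)]
        outs = [self.outputs[k] for k in sorted(self.outputs)]
        assert ins and outs, "a finalized transaction needs inputs and outputs"
        return ins, outs

tx = CollaborativeTx()
tx.apply({"type": "add_input", "serial_id": 2, "prevout": "alice_utxo:0"})
tx.apply({"type": "add_output", "serial_id": 4, "script": "funding_2of2", "amount": 500_000})
tx.apply({"type": "add_input", "serial_id": 1, "prevout": "bob_utxo:1"})
print(tx.finalize())
```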

Bitcoin Core MacOS functional tests failing intermittently

https://github.com/bitcoin/bitcoin/issues/18794

I am not entirely sure why this is on the list, it is not that interesting. I think this is a case of us reenabling some functional tests that we had disabled in Travis a while back. They were still failing so we turned them back off again. The failures were Python socket OS type errors that no one could really recreate locally. They just happened on Travis intermittently. It was a pain to have them fail now and again for no specific reason.

Two elements. One, an update on the state of functional tests in Core. The other thing is there seem to be a few problems with Mac OS. Things like fuzzing are difficult to get up and running on Mac OS. I just wondered what problems there were with functional tests on Mac OS as well.

I think we have got the functional tests fairly nailed down on Mac OS now. Especially over the past 18-24 months Mac OS has turned into a horrible operating system to use for a lot of the stuff we are doing. It comes pre-installed with lots of really old versions of tools like make 3.8. It has got Apple clang by default, which is a version of some upstream LLVM clang with a bunch of Apple stuff hacked in, and we have spent a lot of time working around that in our build system. There were issues with the fuzzers as well. I patched some of those; that was linking issues. The alternative is to throw away Apple clang and use a proper LLVM clang 10 or whatever is the latest release. It works now. I don’t think our docs are entirely correct. There are a few command line flags, there is a flag in there telling people to disable the assembly optimizations. I don’t think that is required for Mac OS specifically for any reason. That will work, but if you want to do a lot of fuzzing or have something that continuously runs a lot of the functional and unit tests for Core, spin up a Linux instance somewhere, preferably a recent Ubuntu 20 or Debian 10/11, and run them on there instead. On the CI more generally, we have obviously used Travis for a long time with sometimes good success, other times worse success. I think Marco (Falke) got to the point where he got fed up with Travis and has really genericized a lot of our CI infrastructure so you can run it wherever you want. I know he runs it on Cirrus and something else as well regularly. He has a btc_nightly repo with scripts for doing that that you can fork and use as well. There are continual issues with Travis with certain builds running out of disk space. We reported this 4 or 5 months ago and we still haven’t got answers about why it happens. Maybe we will migrate to something else. Anytime it comes up, do we pay for something else and where are we going to migrate to? We don’t want to end up on something worse than what we currently have.

Modern soft fork activation

https://twitter.com/LukeDashjr/status/1260598347322265600?s=20

This one was on soft fork deployment.

We discussed this last month. AJ and Matt Corallo put up mailing list posts on soft fork activation. We knew that Luke Dashjr and others have strong views. This was asking what his view is. I think we are going to need a new BIP, which would be BIP 8 updated to combine BIP 9 and BIP 148, but I haven’t looked into how that compares to what Matt and AJ were discussing on the mailing list.

What was discussed on the mailing list at the start of the year was, as Luke says, BIP 9 plus BIP 149. BIP 149 was going to be the SegWit activation: the original one expires in November, we’ll give it a couple of months and then we’ll do a new activation in January 2018, I guess, that will last a year. There will be signaling, but at the end of that year, whether the signaling works or not, it will just activate. It divides it up into two periods. You’ve got the first period which is regular BIP 9, and the second period is essentially the same thing again but at the end it succeeds no matter what. The difference between that and BIP 148 is what miners have to do. What actually happened was before the 12 months of SegWit’s initial signaling ended we said “For this particular period starting around August miners have to signal for SegWit. If they don’t, this hopefully economic majority of full nodes will drop their blocks so that they will be losing money.” Because they are all signaling for it and we’ve got this long timeframe it will activate no matter what. It will all be great. The difference is mostly that BIP 148 got it working in a shorter timeframe because we didn’t have to start it again in January, but it also forced the miners to have this panic of “We might be losing money. How do we coordinate?” That was BIP 91, which allowed them to coordinate. The difference is that BIP 148 we know has worked at least once. BIP 149 is similar to how stuff was activated in the early days, like P2SH, but we don’t know 100 percent that it will work, though it is less risk for miners at least.
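
The crux of the difference is what happens when the signaling period runs out. Here is a minimal sketch of that end-of-period decision, using an illustrative lockinontimeout flag in the BIP 8 spirit; the state names and threshold are simplified and are not the exact BIP state machines.

```python
# Simplified end-of-period decision for a version-bits style deployment.
# Names, states and the threshold are illustrative, not the exact BIPs.
THRESHOLD = 0.95   # fraction of blocks in the period that must signal

def next_state(state, signaling_fraction, past_timeout, lockinontimeout):
    if state == "STARTED":
        if signaling_fraction >= THRESHOLD:
            return "LOCKED_IN"                # miners signaled; activates soon after
        if past_timeout:
            # BIP 9 style: the deployment fails quietly.
            # BIP 8 / BIP 148 style: it locks in anyway, enforced by nodes.
            return "LOCKED_IN" if lockinontimeout else "FAILED"
        return "STARTED"
    if state == "LOCKED_IN":
        return "ACTIVE"
    return state

# BIP 9 behaviour at timeout with weak signaling:
print(next_state("STARTED", 0.40, True, lockinontimeout=False))  # FAILED
# BIP 8 style behaviour in the same situation:
print(next_state("STARTED", 0.40, True, lockinontimeout=True))   # LOCKED_IN
```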

The contrasting approaches are how quickly it gets pushed through and how much opportunity miners are given to flag that they are happy with the change?

In both cases miners can flag they are happy with it pretty quickly. It is just how quickly everyone else can respond if miners aren’t happy with it for stupid reasons.

If miners dragged their heels or they don’t want to proceed with it one method is more forcing?

Yes. I don’t think there is any reason we can’t be prepared to do both. Set it up so that we’ll have this one year and this other year and at the end of two years it will definitely be activated as long as it seems sensible. If it is taking too long we can have the emergency and do the exact same thing we did with BIP 148 and force signaling earlier. They are not necessarily incompatible ideas either.

AltNet, a pluggable framework for alternative transports

https://github.com/bitcoin/bitcoin/pull/18988

We have this one on AltNet from Antoine Riard.

There were 2 PRs he opened all at once. One is this AltNet framework and there is this other watchdog PR. One of Marco’s comments was that we can have these drivers but he hasn’t thought about all the trade-offs. It seems beneficial if add-ons can be attached and removed during runtime as well as developed independently of the Bitcoin Core development process. I definitely agree that if we are going to be building these drivers that has to happen outside of the Bitcoin Core repository. As far as loading and unloading plugins at runtime goes, I am definitely not a fan of that idea.

Are you not a fan of it at runtime as in while the thing is running or not a fan of it even as part of the configure option at runtime?

If it was a configure option so you had a line in bitcoin.conf “add driver XYZ” and then you start and you’re running, sure. But he actually means you run for a day and then you go I’ll unload this driver and reload this driver, add a new driver while you’re still running…

A bitcoin.conf option makes sense but a RPC call to load and unload seems complicated?

Yes it seems like an attack surface. What are these drivers going to look like? Are you loading some random shared libraries? We definitely don’t want to go down the route of are these drivers authenticated in some way? Have we built them reproducibly and someone has signed off that they are ok to load? That sounds like all sorts of things that we don’t want to be managing or have to do in our codebase.

I also briefly skimmed over it when he released it. The way I understood the watchdog PR was that it is looking at the network and giving alerts on things that it sees that might hint at something fishy going on. My thought on that was that in practice I don’t really know how that is going to work out. If we have two systems that are independent and one says there is a problem and the other system doesn’t say… I would prefer us to have one source of truth. This watchdog, there could be a way where this gives alerts and they are not really that important. People start ignoring the alerts like the version warnings in the blocks. Everyone ignores them. That is a slippery slope to go down. I would have to look at the code and conceptually what the plan is for it to play together with the actual network, and what it says is fishy and what is not fishy.

What benefits do you see from this? Would it enable different methods of communication? Radio and mesh networking and so on?

I would have to think about it more and how it is going to be implemented. How does this AltNet play together with the main actual net. I don’t have a model in my head right now how that would work. I would need to think about it.

secp256kfun

https://github.com/LLFourn/secp256kfun

I have released this library called secp256kfun. It is for doing experimental cryptography stuff using Rust and having it work on Bitcoin. It doesn’t use Pieter Wuille’s library. It uses one written by Parity to do Ethereum stuff. That is more low level and gives me the things I needed to get the result I wanted. It is a mid level library really. It gives you nice, clean APIs to do things. It catches errors. For example the zero element one: you have the point at infinity, which is often an invalid public key but is a valid group element. It can know at compile time whether a point could be the point at infinity or is a valid public key, and can give you a compile time error if you try to pass something that could be the point at infinity into a function. Also it has an interesting way of dealing with variable time and constant time. When you are running operations on secret data it is really important that they run in constant time, so that by measuring the execution time you can’t figure out anything about the secret input to the function. With this library I mark things as secret or public, the values themselves get passed into functions, and depending on whether they are marked as secret or public it uses a variable time or constant time algorithm to do the operation. This is all decided at compile time, there is no runtime cost. It is experimental, I think it is cool. I’d like people to check it out if they are doing some Bitcoin cryptography research.

This is for playing around as a toy implementation and can be used for educational reasons too. A stepping stone to understanding what is going on in the actual secp256k1 library.

I think that is a strong point for it. There is an Examples folder and I implemented BIP 340 in very few lines of code. It really helps you get an idea of how Schnorr signatures work or ECDSA signatures work because you aren’t forced to deal with these low level details. But at the same time you are not missing anything. You are not hacking around anything. You are doing it the way you are meant to do it. As a teaching tool of how this cryptography works, it is pretty good. And for doing research quickly. We have been talking about Rust a lot. As a researcher I don’t want to go down into C and implement cryptography using libsecp, this C library. I am not very good at C. I do know Rust pretty well and there is a lot less work that goes into an implementation in Rust. Especially if you are willing to cut corners and are not paranoid about security. Just for a demonstration or getting benchmarks for your paper or getting a basic implementation to demonstrate your idea. That is what it is there for.
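
To give a flavour of how compact such an example can be, here is a toy Schnorr-style sign and verify over secp256k1 in Python. It deliberately skips the BIP 340 details (tagged hashes, x-only keys, even-y rules) and is not constant time, so it is strictly for illustration.

```python
# Toy Schnorr-style sign/verify over secp256k1. NOT BIP 340 compliant and
# not constant time; purely to show the shape of the scheme.
import hashlib, secrets

P = 2**256 - 2**32 - 977                       # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p, q):
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None   # point at infinity
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], -1, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def mul(k, p=G):
    r = None
    while k:
        if k & 1: r = add(r, p)
        p = add(p, p); k >>= 1
    return r

def h(*parts):
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

def sign(x, msg):
    k = secrets.randbelow(N - 1) + 1
    R, Pub = mul(k), mul(x)
    e = h(R[0].to_bytes(32, "big"), Pub[0].to_bytes(32, "big"), msg)
    return R, (k + e * x) % N                   # s = k + e*x

def verify(Pub, msg, sig):
    R, s = sig
    e = h(R[0].to_bytes(32, "big"), Pub[0].to_bytes(32, "big"), msg)
    return mul(s) == add(R, mul(e, Pub))        # s*G == R + e*P

x = secrets.randbelow(N - 1) + 1
assert verify(mul(x), b"hello", sign(x, b"hello"))
```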


Socratic Seminar

Date: June 23, 2020

Transcript By: Michael Folkson

Category: Meetup

Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Last month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/

Google Doc of the resources discussed: https://docs.google.com/document/d/1TKVOScS0Ms52Vwb33n4cwCzQIkY1kjBj9UioKkkR9Xo/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Succinct Atomic Swap (Ruben Somsen)

https://gist.github.com/RubenSomsen/8853a66a64825716f51b409be528355f#file-succinctatomicswap-svg

Succinct Atomic Swaps are an alternative way of doing atomic swaps. They can be more or less useful in certain cases depending on what you want to do. I will try to take you through how it all works. These are the slides; maybe some of you have seen a video of them already. If you have any questions just interject and I’ll answer them. I will try to pause here and there so you can ask questions. Why am I spending a bunch of time looking at protocols like this? At a high level I think Bitcoin is a very important technology that needs to grow in ways that don’t really hurt decentralization. There are not many ways to do that. You can increase the block size but that is not something that doesn’t hurt decentralization, so that is off the table for the most part. The question becomes what we can do with this blockchain resource such that you do more with it without it becoming less decentralized. My focus on that has led me to research a bunch of topics. My statechains work is part of that: you have this trade-off of using a federation, so decentralization is not fully preserved, but you try to make the best trade-off possible where you do rely on a third party but as little as possible. Succinct Atomic Swaps fall in that same category of me looking at how we can do better. My example is going to be between Bitcoin and Litecoin. I think some people get confused and are like “This is an altcoin thing. It helps altcoins.” I want to mention ahead of time that atomic swaps are very general. You can do them within the Lightning Network even. You can do them between Liquid and Bitcoin etc. It happens to be something that is blockchain agnostic. You can use it with any blockchain you want but it is very obviously useful within Bitcoin. Also for privacy swaps where I have a Bitcoin, you have a Bitcoin and then we swap them in such a way that it is not noticeable onchain that we did so. That can give a huge privacy boost.

The general atomic swap protocol is actually four transactions. You have a preparation phase on one chain and a preparation phase on the other chain. Then you have the second transaction where you actually do the swap. You reveal some secret then the swap goes through on the other side as well. This is four transactions. What I want to show here is how to do it in only two transactions. There are two tricks to this. One side of it is that we have an asymmetric swap where one side is more complex while the other side is very simple. The other side of it is a Lightning type channel where you have to watch the blockchain for a certain transaction. That cuts the protocol down from 3 to 2 transactions. The second part can actually also be done in a traditional atomic swap but the nice thing here is because it is asymmetric you only watch the blockchain for one of the two sides. As I’ll point out again later the protocol itself is already set up in such a way that you can make an non-expiring channel without adding any extra overhead. Any overhead is already there because of the way the protocol works. It is elegant to do it within this protocol.

I am going to describe what the setup is. Later on I will have a more complex slide that really shows the actual in depth protocol and all the onchain stuff. This is a high level overview to give you a better idea of what we are doing here. The first setup is let’s say on the Bitcoin side. Alice wants to lock up some Bitcoin with Bob and she doesn’t do so yet but she creates the transaction that can go to the blockchain later on. From that point on, Alice and Bob have to agree ahead of time how to spend this UTXO that will get onto the blockchain at some future time. The first thing they do is they create a transaction where you send it to Alice and Bob again with a timelock of 1 day. From that point on you send it to yet another timelocked transaction but this one is a 2 day timelock, 1 day after the middle transaction goes on the blockchain this transaction becomes valid. This is a refund transaction where just the money goes back to Alice. If this whole thing went to the blockchain it just meant Alice got her money back. Alice locks her money up with Bob and then she gets it back. There is a * as you might have noticed. The * is an adaptor signature where if Alice sends this transaction to the blockchain reclaiming her money she reveals a secret which we call AS, Alice’s secret, and Bob then learns this. There is a reason why we would want to learn that which we will get to. There is one more transaction which shows that if Alice does absolutely nothing Bob gets the money. The reason for that is to really make sure Alice after a couple days reveals this secret. With this current setup Bob knows that he will either learn this secret or he will get the money. Obviously Alice not being crazy will broadcast this to the Bitcoin blockchain. Just to clarify what we are doing here is not yet a swap, this is the setup. The setup means that if worst comes to worst the parties stop responding, stop communicating with each other, what happens? In that case you want it to revert back to normal. What we are doing here is we are reverting it back to normal by allowing Alice to just get her money back. At this point Alice is ready to send this Alice and Bob (A+B) transaction to the blockchain because she has a guarantee that she will get her money back assuming she pays attention and is faster than that Bob transaction at the end. Now that this transaction is on the blockchain Bob is basically guaranteed to learn Alice’s secret. Because of that Bob just broadcasts this transaction on say the Litecoin blockchain where he locks up the money with Alice’s secret and Bob’s secret. If everything stops working out Alice is forced to send this transaction to the blockchain, the transaction that gets her money back and reveals Alice’s secret. Once Bob learns Alice’s secret then he also gets his refund on the Litecoin blockchain because he already knows BS and know he also learns AS. He has the complete secret.

The next step is going to be doing the actual swap. What we want here is the opposite to happen. Instead of AS being revealed and Alice getting her money back we want Bob to get the money and BS to get revealed to Alice. How do we do that? We create another transaction to Bob and that transaction if Bob sends to the blockchain will reveal BS. Because this one has no timelocks it happens before everything else we have created thus far. This enables the swap. Now if this transaction goes to the blockchain, BS is revealed, Alice already knowing Alice’s secret and now also knowing Bob’s secret gains control over the transaction on the Litecoin side. Bob if he has sent this to the blockchain, now he has the money on the Bitcoin side. This is a three transaction protocol. The main point here is that this already better than the original atomic swap. With the original atomic swap you would have 4 transactions. Here on the bottom side, nothing happens. It is just one single transaction and the key is split into two parts. Either Bob gets it or Alice gets it because they give one of the parts to each other. The next step would be how to turn this into a 2 transaction variation. What we do is we don’t broadcast this transaction at the top. We give it to Bob but Bob doesn’t broadcast it and instead Bob just gives Bob’s secret to Alice. Bob knows he could broadcast it and get the money, he is guaranteed to receive it but he doesn’t do so and he just gives the secret to Alice. Alice now has control over the Litecoin transaction and Bob if he sends that transaction to the blockchain would also get control over the Bitcoin transaction. However instead of doing that Alice gives Alice’s key to Bob. Now the swap is basically complete in two transactions. They both learn the secret of the other person on the other side of the chain. This would literally be a 2 transaction swap. But there is still this little issue of this transaction existing where Alice gets her money back. This timelocked transaction still exists. What this means is that even though the protocol has completed in 2 transactions there is still a need for Bob to watch the blockchain to see if Alice tries to get her money back. The way the timelocks are set up, particularly with this relative timelock at the end there, what this means is that Alice can send this middle transaction A+B (1 day lock) to the blockchain but in doing so Bob will notice that the transaction goes to the blockchain. Since Bob has Alice’s key he can spend it before the third transaction becomes valid. Bob will always be able to react just like a Lightning channel where one party tries to close the channel in a way that is not supposed to happen. The other party has to respond to it. It requires Bob to be online in order for this to be a true 2 transaction setup.
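
The mechanic that makes the two transaction version work is that the simple side’s coin is controlled by a key built from both secrets, so whoever ends up knowing both AS and BS controls it, and the cooperative path is simply handing the missing secret to the other party. Below is a toy sketch of that share arithmetic, assuming an additive split; the real protocol dresses this up with adaptor signatures and MuSig, which are not modelled here.

```python
# Toy illustration of the key handover: the simple side of the swap is
# controlled by a key combining Alice's secret (AS) and Bob's secret (BS).
# Assumes an additive split; adaptor signatures and MuSig are not modelled.
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

AS = secrets.randbelow(N - 1) + 1   # known only to Alice at setup time
BS = secrets.randbelow(N - 1) + 1   # known only to Bob at setup time

# The key controlling the Litecoin-side output. Its public key (AS + BS)*G
# looks like any ordinary single-key output on chain.
litecoin_key = (AS + BS) % N

# Cooperative completion: Bob hands BS to Alice, Alice hands her Bitcoin-side
# key to Bob. Alice can now reconstruct the full Litecoin-side key.
alice_reconstructed = (AS + BS) % N
assert alice_reconstructed == litecoin_key

# Non-cooperative fallback: if Alice broadcasts her refund, the adaptor
# signature reveals AS to Bob, and Bob (who already knows BS) reconstructs
# the same key to reclaim the Litecoin side.
bob_reconstructed = (BS + AS) % N
assert bob_reconstructed == litecoin_key
```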

To go through the negatives, there is an online requirement for Bob, not for Alice. As soon as the swap is complete Alice is done but Bob has to pay attention. If he doesn’t Alice could try something funny. There is also some introduced state. We’ve got these secrets that are being swapped. If you forget the secret or if you lose the information that was given to you you will no longer have access to your money. That is different to how a regular Bitcoin wallet works where you have a seed, you back it up and when you lose your phone or something you get your backup. That is not the case here. Every time you do a swap you have some extra secrets and you have to back up those secrets. The positives are that this all works today. You can do this with ECDSA. Lloyd’s work, I am going to call this 1P-ECDSA but I’m not sure if Lloyd agrees. You can do a very simple adaptor signature if you have a literal single key that is owned by a single owner. Using that we can do adaptor signatures on ECDSA today. It only requires 2 transactions as you already know. It is scriptless or at least it is mainly scriptless. There is one way to add another transaction to it that isn’t completely scriptless in order to make it a little bit cheaper to cover the worst case scenario where Alice locks up her money and Bob does nothing. If you want it to be completely scriptless it can be. It is asymmetric. That has some advantages. You could think of this as one blockchain doing all the work and the other blockchain doing literally nothing. This allows for two things. The first thing is that you can pick your chain. This is the cheaper chain, maybe some altcoin chain, and this is the more expensive chain like the Bitcoin chain. On the Bitcoin chain you do the simple version of the swap. On the altcoin chain you do the complex version. If something goes wrong then the cheaper blockchain will bear the brunt. It is without any timelocks. That is really nice because that means that there is no timelock requirement on one of these two blockchains. I think Monero is one example that doesn’t have timelocks. It is very compatible with even non-blockchain protocols where you have some kind of key that has ownership or you can transfer ownership. This can be very useful for Payswap or other privacy protocols and people are talking about these on the mailing list today. How to do atomic swaps for privacy. Hopefully Succinct Atomic Swaps can be part of that. A way to do it more efficiently. One thing I still need to point out. We can make use of MuSig on ECDSA on the Litecoin side of things. The problem with ECDSA is that you can’t do multisig signing, or at least not without 2P-ECDSA which is a more complex protocol, in order to sign with a single key. In this case we are not really signing anything. We just have this one transaction on the blockchain. There are two secrets. Part of one of these secrets gets revealed to the other party but at no time do two people need to collaborate to sign something. There are no transactions that are spending it. We can use MuSig here and replace this key with M in this case and have it be the MuSig of AS and BS. On the blockchain this bottom transaction looks like a very basic transaction like if you were making a payment from a single key. It would be indistinguishable without any advanced 2P-ECDSA or Schnorr. We can do this today.

What exactly are the requirements for the alternative coin? It doesn’t need to have timelocks, does it need Script?

You just need some kind of key transfer mechanism, a signature scheme where you can send money from one key holder to another key holder, that is sufficient. You don’t need anything else. I can barely think of anything that it wouldn’t be compatible with.

You could even have M as the key to your server. An SSH key, you could swap ownership of that. It doesn’t even have to be a blockchain or an asset.

That’s right. It is weird that you could do those things.

What are some of the practical applications that someone might use it for? Instead of Bosworth’s submarine swaps? Can you give some examples?

MuSig makes one of these transactions indistinguishable from any other regular transaction. Another interesting observation for use cases is that this MuSig side, the simpler side of the two in the swap, can be funded by a third party. Normally the problem is that you need to create some kind of transactions that occur after this transaction goes onchain but because this is just a single transaction you don’t have to worry about what the txid is or something. You just need to create that one single transaction. You can imagine doing some kind of swap where I create a transaction where somebody pays me but that pays into a swap. Then I can swap. As a merchant you can receive money and then you can immediately swap with it. That is something that is not possible today without either using this Succinct Atomic Swap protocol or in a way that doesn’t preserve privacy. You have some advanced scripts that you make use of and then it is very obvious that you did a swap. That is one unique case.

It might give people more privacy when they are paying a merchant. Instead they are paying a Succinct Atomic Swap. Is that one example?

Yes that would be one example of something that is possible with this.

It looks like a normal address but the atomic swap is this joint key address from somewhere else. You are getting that money straight into your Lightning and some other market maker is getting the actual Bitcoin onchain.

That is right. You can do that with Succinct Atomic Swaps very easily without giving up any privacy. You can already do it but if you did it today it would be very obvious that you are doing it.

You said timelocks weren’t needed but at least one of the chains should support timelocks?

That’s right. Timelocks are not needed on one side of the swap but on the other side of the swap it is absolutely necessary.

How does it differ from Chris Belcher’s CoinSwap? They are similar in quite a lot of ways. How are they different?

It is basically the same thing. What Chris Belcher is working on is using atomic swaps to maximize privacy. The atomic swap itself wasn’t really novel. This is a drop-in replacement for regular atomic swaps, so it is orthogonal. Chris Belcher can choose to use this protocol depending on what he ends up with. One of the things that this protocol doesn’t do well is multiparty swaps. If you want to do something like Alice pays Bob providing Bob pays Carol providing Carol pays Dave, that seems impossible or at the very least difficult with this protocol. It depends on what Chris ends up doing, whether or not a Succinct Atomic Swap makes sense. If you just do a swap between Party A and Party B it seems Succinct Atomic Swaps are more efficient. There are no downsides to doing it. There might be some trade-offs that are hard to argue about because, with the asymmetry, it might be better or worse in terms of who takes the risk. With a swap you always have a risk where you do the setup and then one of the parties disappears and one party might end up paying more. With the asymmetry the side that has all the complexity bears all the cost if things go wrong. You can imagine a protocol where all the complexity is on the client side. There is a server, a swap server for privacy, where you suggest a swap to them and they give you an equal amount of some other UTXO. Assuming the server has some kind of reputation where you know the server isn’t going to back out of the trade, they don’t have any risk. The client has to suggest the swap and if the client disappears it is on the client to bear all the cost. It depends on what you are doing. If you want to do it the opposite way, if you want the swap to end up on the opposite side, then the server needs to open the channel and now you have a DoS vector. It depends. In certain cases it is just more efficient, that is the bottom line.

You mentioned the altcoin side only needed to be a private key. Could this be used then to buy an encryption key to decrypt some data that you might be selling or something like that? Does it literally just need a private key? Those guys at Suredbits were very interested in PTLCs because they were trying to sell encrypted data with keys to decrypt it.

I think there is no need for this specific protocol there because the main thing here is that either Alice gets the full key or Bob gets the full key. In the case of encryption one person already knows the key and the other person needs to get it. I think that goes more into zero knowledge protocols where you have the key that is the encryption of something you are interested in and you have to prove that if they learn the key they can decrypt the thing that interests them. Greg Maxwell wrote an article about it.

It was about selling sudoku puzzles.

You encrypt some data and then you prove in zero knowledge that the encrypted data is a solution to what you are looking for. Through Bitcoin you buy the key that allows you to decrypt the data. I happen to know that at Blockstream they are working on a MuSig related protocol that also would be helpful for that. A hint to what is coming in the future. I don’t think they have published that paper yet. That will be interesting.

One point I skipped over which is interesting. Because this protocol is simple to set up on one side without any timelocks whatsoever you can use this to buy coinbase UTXOs. You can imagine one person wanting to have some coinbase coins. Let’s say I want to buy 1 coinbase Bitcoin. A miner has to mine this UTXO and create a transaction output that has 1 Bitcoin in it. They do so with the AS + BS setup using MuSig so it looks like a single key. Before they start mining it we first do the setup. Now the miner starts mining and eventually they will mine a coinbase transaction with this AS + BS output in it and they complete the swap. With a coinbase transaction you can’t spend it until 100 blocks have passed. But here you are not really spending it, you never have to send anything out to the blockchain afterwards, so you can change the ownership of the coinbase output before 100 blocks have passed. You can do a swap into a coinbase transaction. That is cool. Regarding swapping in and out of Lightning in a single transaction, it might be possible, maybe not. You can imagine on Lightning having a route from Alice to Bob to Carol. You have some kind of onchain transaction where it is Alice’s secret and Carol’s secret. Depending on whether the money goes from Alice to Carol, Alice learns Carol’s secret or Carol learns Alice’s secret. This is not how Lightning works today because normally what you do is reveal a single secret. In this case what you do is reveal a secret if the payment goes through and reveal a different secret if the payment doesn’t go through. I’m not sure if that is compatible with routing. If it is then it might be an interesting way to do swaps in and out of Lightning that are more efficient than submarine swaps. If I’m not mistaken submarine swaps require two onchain transactions every time you do one.

You are not sure if you can do it on Lightning? If it is just A to B you definitely can. It is just the hop that is in question?

I think that is right. If you are interested in reading more of the details you can go to tiny.cc/atomicswap. I also have some links to some of my other work. Statechains, PoW fraud proofs, blind merged mining, perpetual one-way peg. These last two I wanted to present at the Bitcoin 2020 conference in San Francisco. Because of Corona that didn’t happen. I have a full presentation that I am hoping to present at a conference but maybe I should present it online. That might be something we can go over at some other meetup as well.

Bitcoin Optech 98

https://bitcoinops.org/en/newsletters/2020/05/20/

We had Bitcoin Optech 98 but one of the key items on there was Succinct Atomic Swaps so we will skip over that one and go to the next one.

CoinSwap

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html

It is looking at how we can use atomic swaps in such a way that we increase privacy. It goes through different setups you can create. One of them would be on one side you give somebody 1 Bitcoin and on the other side you give somebody 3 UTXOs, 0.1, 0.2 and 0.7. You can do these fan out swaps and you can make swaps depend on other swaps. You can do swaps in a circle where Alice gets Bob’s money, Bob gets Carol’s money, Carol gets Alice’s money and things like that.

An onchain Lightning payment?

That is what atomic swaps are in essence. It is an onchain version of what you are doing offchain on the Lightning Network. Generally speaking that is what I have wondered about atomic swaps, where are they more useful than Lightning? Currently with things like Coinjoin there are definitely some issues. For instance you have these change outputs that you then cannot use anymore, or at least you can’t use them in combination with the outputs that did get coinjoined. Maybe you can take the change outputs and swap those to get privacy in your change as well. Very generally speaking you are paying onchain fees for your privacy. With the Lightning Network your payments are private but you are not mixing up your coins. I guess that would be the use case. It is not fully efficient in terms of onchain space but that is what you pay for in order to get privacy.

There is a section around the differences as well. One of them is liquidity because of the payment size on the Lightning Network and because it is onchain.

Chris Belcher is focused on privacy and predominantly onchain stuff. He does discuss the differences with Lightning here but the use case he is most interested in is “I have a transaction history behind my coin and I want to swap it with, say, Stephan’s”. If Chainalysis comes along they are like “These are Stephan’s coins, I’m tracing them through the network” but actually Stephan and I swapped our coin histories. That’s the crux of the idea. He does talk about the danger of you and me swapping where Stephan is actually the adversary or Chainalysis.

It is kind of like a dust attack.

If I swapped my history with you Stephan and you are malicious then I am telling you what my coins are which isn’t great for privacy. He talks about routing CoinSwaps where you have a group of people and as long as they are not all colluding it is safe to do the CoinSwap.

He layers on the fidelity bond idea as well to try to make it harder for someone to do the sybil thing. A cool idea, definitely going to take a lot of work to make this happen.

Do they all have to be the same amount like Coinjoin?

I think it can be split up. You could have the same amount but from different chunks.

That is right. You can split it up. You can do Payswap where you are swapping and paying at the same time. Your change output is what is getting swapped back to you. Now the amounts are not exactly the same anymore. Those are all methods to make it a little less obvious. If you literally swap 1 Bitcoin for somebody else’s 1 Bitcoin, first you would have to wonder if a swap is happening or not. Is someone just paying 1 Bitcoin to somebody? The second thing would be who could this person have been swapping with? That depends on how long the timelock is and how many 1 Bitcoin transactions have occurred. Every 1 Bitcoin transaction that has occurred in that period is the anonymity set, as opposed to Coinjoin where your anonymity set is literally just everybody within that specific transaction. That has some potential to have a larger anonymity set. The other thing that is interesting is that the anonymity set is not limited to Bitcoin itself. If you do a cross chain swap it means that every transaction on every blockchain could’ve been the potential source of your swap. If a lot of people do these cross chain swaps that makes the anonymity set even larger and makes it more complex to figure out.
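As a back-of-the-envelope illustration of that anonymity-set argument, with entirely made-up chain data: any payment of a similar amount confirmed inside the timelock window is a candidate counterpart, unlike a Coinjoin where the set is just that one transaction’s participants.

```python
# Toy anonymity-set calculation for a swap. The "observed" payments below are
# invented example data, not real chain data.

observed_payments = [
    # (block_height, amount_btc)
    (640000, 1.0), (640001, 0.37), (640003, 1.0), (640010, 2.5),
    (640012, 1.0), (640050, 1.0), (640200, 1.0),
]

def anonymity_set(swap_amount, swap_height, timelock_blocks, tolerance=0.0):
    # Every payment of (roughly) the same amount inside the timelock window
    # could have been the swap, so they all count as candidates.
    window = range(swap_height, swap_height + timelock_blocks + 1)
    return [p for p in observed_payments
            if p[0] in window and abs(p[1] - swap_amount) <= tolerance]

# A 1 BTC swap with a roughly one-day (~144 block) timelock starting at 640000:
candidates = anonymity_set(1.0, 640000, 144)
print(f"{len(candidates)} indistinguishable 1 BTC payments in the window")
```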

That is definitely a big use case of atomic swaps. Say you want to use zero knowledge proofs on Zcash or use Monero to get some privacy with your Bitcoin and then get it back into Bitcoin. Does that bring strong benefits? Or use a sidechain with confidential transactions on it. This seems like a useful use case.

There was a Chainalysis presentation where they talked about things like that. If you really do it properly then maybe but a lot of these anonymity coins are not quite as anonymous as you think. If you do the really obvious thing where you go in and go back out and somebody saw you go in and out it is also not anonymous. If you are doing swaps it is maybe better to try to set up the swaps with Coinjoins and CoinSwaps so you don’t rely on going to this other place and swapping it round. I’m not sure to what degree that is useful, maybe it is.

The other thing with that kind of approach is timing analysis. It depends on how it is done but it can be unmixed if people are able to look at what is coming in and coming out.

I suppose the challenge is finding that person who is happy to swap say Monero, Zcash or Liquid for your Bitcoin. You could be leaking privacy finding that counterparty.

I suppose there is going to be a market for that. If you are willing to pay for it there will be a market. If the market is not very liquid you are going to have to pay more. I guess you are concerned that in your search for your swap partner you might be leaking information. That is a tricky one. You have definitely got to be careful.

Custody Protocols Using Bitcoin Vaults

https://arxiv.org/abs/2005.11776

Last month we spoke about this vault idea. This was a paper on vaults and covenants. As I understand it, the idea is that if you are being stolen from, the vault limits your loss and you have a watchtower that spends everything into a safe secondary address. If you are an exchange you might want to secure your coins in a vault this way.

We had a Socratic on this in London a few weeks back. It was very interesting. You talked about what people are trying to achieve but everyone has very different scenarios, different number of parties, whether they want to get access to their coins quickly, there are so many different dynamics in play. Also these designs are vastly improved with something like Jeremy Rubin’s CHECKTEMPLATEVERIFY which obviously isn’t in Bitcoin now. Bryan (Bishop) was saying he has built a vault design using that and Kevin (Loaec) was like “I’m not spending anytime on that until it is in Bitcoin. It might not ever get in Bitcoin.”

The trick is that you want to signal what you are going to do with the funds so that onchain you can see this is what is going to happen and you can respond to it if that is not what is supposed to happen. That is the general way you can look at these vaults. You are signaling the money is about to move and then if it is not what you intended you can claw it back.

Bryan’s idea was if you wanted to send money you’d send it in drips. If one drip went to a destination that was not your intended destination then you would stop any future transfers. Instead of losing your whole stash you would just lose that first installment of the funds you were sending.
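A rough sketch of that drip policy as described, with a hypothetical whitelisted address and a simulated theft; it is an illustration of the idea only, not Bryan Bishop’s actual design.

```python
# Drip-style vault withdrawal: funds leave in small installments, and the
# watcher halts the schedule as soon as one installment pays an unexpected
# address, so only that one installment is at risk.

WHITELISTED_DESTINATIONS = {"bc1q-intended-destination"}  # hypothetical address

def drain_vault(total_amount, drip_size, observed_destination_for):
    sent = 0.0
    while sent < total_amount:
        installment = min(drip_size, total_amount - sent)
        destination = observed_destination_for(installment)
        if destination not in WHITELISTED_DESTINATIONS:
            # Trigger the clawback path; only this installment is lost.
            return {"sent": sent, "lost": installment, "halted": True}
        sent += installment
    return {"sent": sent, "lost": 0.0, "halted": False}

# Simulate a theft detected on the third installment.
outcomes = iter(["bc1q-intended-destination"] * 2 + ["bc1q-attacker"])
print(drain_vault(1.0, 0.1, lambda _amt: next(outcomes)))
```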

Elichai Turkel response to the Rust discussion at last month’s Socratic Seminar

https://gist.github.com/elichai/673b031fb235655cd254397eb8d04233

We had a discussion last month at the Socratic about whether Rust should ever get into Core and, if so, when. One of the conversations we had last month was that we didn’t see any rust-bitcoin nodes on the network. Elichai is saying in this that the Rust guys don’t think that is a good idea and they don’t want any rust-bitcoin nodes on the network. My view was if you are going to be introducing this new language, which is a big deal for Bitcoin Core, perhaps there should be some live Rust nodes on the network to work out kinks and any problems with using Rust in a full node implementation.

There aren’t many live running non-Bitcoin Core nodes are there? There is libbitcoin and stuff like that…

A very small percentage. There is Bcoin and roasbeef’s btcd and libbitcoin. They have a few nodes on the network.

Bitcoin Knots too. Some people run that. The Wasabi guys run Knots?

Luke Dashjr’s full node. It is a very small percentage. I don’t think it is bad for the network. It would be a problem if one of them was massively used and was starting to rival Bitcoin Core as the full node that everyone is using.

That wouldn’t be a problem either. You want to run the node that everybody is running. You want to be on the network that everyone else is on. If you run some alternative version of Bitcoin then you risk getting forked off the network. If everybody is running this alternate version that is not Bitcoin Core then that is what Bitcoin became. It is like firing Core in a sense. I don’t think that is ever going to happen. You are putting more trust in the consensus of some alternate fork. That is the problem. There is such a huge incentive to be where everybody else is that it makes no sense to run an alternate client. You can say “I am going to run Bitcoin Core and I am going to run rust-bitcoin or something” but if you do that you are doubling your validation cost. It is like doubling the block size or something. I guess it is just CPU, you can save on bandwidth costs at least. That is not great either in terms of validation cost. It is a natural effect that everybody picks what everybody else is picking.

There is an interesting comment that Elichai makes that rust-bitcoin is used in production by many users just not for Bitcoin Core. It is used on other things. Lots of us are running electrs (Electrum server). That is a good example.

I didn’t realize electrs was using rust-bitcoin.

I have never run rust-bitcoin in my life but I do run electrs.

That runs on Bitcoin Core too? You connect electrs to Core? I’m not sure, I’ve never used it.

Elichai’s point is that it is using the rust-bitcoin library but there is no current full node implementation using rust-bitcoin.

There is a full node implementation in Rust written by Parity. The Ethereum people wrote a full node Bitcoin implementation in Rust. It exists. I don’t know if anyone runs it, it looks like it isn’t maintained.

rust-bitcoin is a library. Maybe someone has made a little node using rust-bitcoin against the recommendations or for an experiment.

Jameson Lopp did some tests on alternate implementations. The last one I saw, Parity was completely stalled and not able to sync up.

It has several advantages. It can connect to a Bitcoin Cash node at the same time.

How exciting.

It does say rust-bitcoin supports consensus code, fully validating blockchain data but you shouldn’t do it.

I don’t know if anyone actually uses it like that. The recommendation is against that.

This library is “simply unable to implement the same rules as Core”

It doesn’t have the same rules. In Core there are all these rules for signatures, it doesn’t do that I don’t think. It uses secp256k1, the C library, whatever is in there it has. But I don’t think it uses these very Bitcoin specific things that were put in there as part of consensus. You need the Bitcoin consensus library.

The line you are pointing to is just a warning saying “Don’t run alternate implementations”.

Fanquake blog post on Subtleties and Security

https://blog.bitmex.com/subtleties-and-security/

I don’t think fanquake is on the call. He had a really great post.

Let’s add it to next month’s in case he attends next month. It would be good to get him to go through it.

I need to get him on the podcast at some point.

London Socratic on BIP-Schnorr (BIP 340)

https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr/

We did a Socratic last week on BIP-Schnorr and Tim Ruffing did a presentation the following day on multisig and threshold sig schemes with Schnorr. It was a really interesting conversation on the history of Schnorr, how the idea evolved, what the risks are today of implementing it. Pieter had some comments on how he was more worried about people building things on top and the problems that might ensue with things people build rather than the actual Schnorr implementation. We talked a little bit about libsecp256kfun too. The video is up, the transcript is up. Tim said it was a shame you weren’t on the call Lloyd because we could’ve discussed hash function requirements for Schnorr signatures. You had that academic poster we looked at a couple of months ago on hash function requirements for Taproot and MuSig.

Delay querying DNS seeds in Bitcoin Core (PR 16939)

https://github.com/bitcoin/bitcoin/pull/16939

When we start a Bitcoin node up from scratch it obviously has to have some way of working out who all the other Bitcoin nodes are so it can talk to them and share transactions and blocks. The original way it used to work back in Satoshi’s day was that there was an IRC channel that all the clients would log on to and talk to each other over IRC. That was obviously the best way of doing it but IRC servers didn’t like that and thought it was a botnet. The modern way we have is a few developers run what are called DNS seeds. You can do a DNS query and get a list of a bunch of peers on the network that are running bitcoind and connect to them via that. In case the DNS doesn’t work there are also some fixed seeds that get shipped hardcoded into bitcoind that it will use if it can’t find anything else. Once you are on the network you start querying for what other peers are on the network. You keep a record of that so you don’t have to do this every time. But the basic way the code worked before this patch was that it would start up your node and then it would try connecting to a peer. Most of the time that peer has disappeared in the meantime. It would sit there trying to connect for maybe ten seconds, five seconds or something like that. It would fail and then it would try another one, that is another 5-10 seconds gone. By the time you have tried 5 peers that is like 30 seconds gone. At which point it says “I don’t know any good peers. I’ll query DNS.” This effectively meant that every time you start your Bitcoin node, even if it knows tens of thousands of other peers to try, it will still do a DNS query, which kind of gives away the fact that you are running a Bitcoin node if nothing else did. Once you have been on the network for a while you have got a list of tens of thousands of other peers. If you have a list of that many peers, at least try a few for five minutes before querying the DNS seeds to get more. So that’s pretty much the summary.
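A simplified sketch of that startup logic, with made-up thresholds rather than Bitcoin Core’s actual constants: only fall back to the DNS seeds after spending a few minutes on peers we already know about.

```python
# Simplified sketch (not Bitcoin Core's actual code) of the behaviour change
# described above: if the node already knows plenty of peer addresses, spend
# a few minutes trying them before falling back to a DNS seed query.

import random
import time

KNOWN_PEER_THRESHOLD = 1000      # illustrative numbers, not Core's constants
TRY_KNOWN_PEERS_FOR = 5 * 60     # seconds to spend on known peers first
CONNECT_TIMEOUT = 5              # seconds per connection attempt

def try_connect(addr):
    """Stand-in for a real TCP/P2P handshake; most stale addresses fail."""
    time.sleep(0)                # placeholder for the 5-10 second wait
    return random.random() < 0.05

def bootstrap(known_peers, query_dns_seeds):
    # Old behaviour: a handful of failed attempts (~30s) and we hit DNS anyway.
    # New behaviour: keep trying known peers for a few minutes first.
    if len(known_peers) >= KNOWN_PEER_THRESHOLD:
        deadline = time.monotonic() + TRY_KNOWN_PEERS_FOR
        while time.monotonic() < deadline:
            if try_connect(random.choice(known_peers)):
                return "connected via previously known peer"
    # Only now do we reveal ourselves to (and depend on) the DNS seeds.
    return query_dns_seeds()

peers = [f"192.0.2.{i}" for i in range(1, 255)] * 8   # pretend addrman contents
print(bootstrap(peers, lambda: "fell back to DNS seeds"))
```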

It is to use those DNS seeds less at a high level?

It is to use them less as long as there’s no apparent reason why you need them.

How much of a worry is it that you are relying on these DNS seeds when you first boot up your node? At an extreme level if you are trusting them to give you IP addresses and they give you IP addresses that they own then it doesn’t really matter what you are doing with the initial block download and all the verification because you have been gamed right from the beginning.

The nodes are all run essentially by well known devs. In that kind of sense they are a trusted source of information. Or at least hopefully not malicious. When you start up bitcoind at first it will query all the nodes. As long as one of them is giving you good data that should mean you get onto the real network. Anytime after that you will query in blocks of three. As long as you don’t have three of them that are all co-conspiring to be evil and you don’t happen to pick those three by bad luck then you are probably good. It is a centralized thing so you are much better off if you get onto the P2P network via a friend who has been on it for years or some other way like that. That would be a -connect argument with your friend’s IP address.

DNS seed policy

https://github.com/bitcoin/bitcoin/blob/master/doc/dnsseed-policy.md

I think there needs to be some edits to that DNS seed policy because it wasn’t clear without going into the documentation or the code that you are spreading that trust amongst multiple seeds in all cases. That’s important especially as we get more DNS seed operators, Wiz has just become a new DNS seed operator. As we add more then it is going to be harder to monitor that one or two don’t turn malicious so it is important to be getting IP addresses from as many seeds as possible to reduce that risk. Do you agree?

I’m not sure if there is a sweet spot. If you open it for all of them it is maybe too broad.

If you have so many seeds that you can’t check any of them then it is worse than having a few seeds where you can check that they are behaving reasonably well. I agree there is probably some sweet spot. I’m pretty sure at least one of the seeds is hidden behind Cloudflare. That means that if Cloudflare decides to be malicious with bitcoind stuff then they have got that avenue to be malicious. Presumably everyone is going via their ISP or Google or whoever’s DNS, that is an attack point as well potentially.

There is also a privacy concern coming up because normally a seed can’t see who is asking it. There is an extension in DNS where the seed can see a part of the client IP address, the network part of where the query is coming from.

Hypothetically if there was a malicious seed who only gave you bad peers. Wouldn’t you periodically come across other peers on the network? Or would you be contained into this malicious network?

If the peers in your database are all malicious then those malicious peers will only tell you about other malicious peers. How will you ever find one that is a good one? You will have asked three DNS seeds for peers in the first place. You’d need all of those to have given you bad peers and you’d have to not accidentally query DNS again. It is an attack vector but probably one where you have to be somewhat specifically targeted to get attacked that way.

It relies on you not using the -connect flag to specifically connect to a known peer that you know.

I wouldn’t think that too many people would do that. If you are a beginner running a full node for the first time you are not going to know to do that. At the moment all of those seeds are serious Bitcoin people you really would hope would never turn malicious. As you expand that group of DNS seeds then I suppose it is always a risk. That is why you would need to be calling multiple of them.

It is a longer term risk. Maybe you could argue have we seen that kind of attack in the wild yet? How much effort should be put into protecting against this thing?

One way to address this danger is to go out of band and check a block explorer to confirm that you do have the latest block. But this is doing stuff outside of the Bitcoin Core software, outside of the protocol. Again, beginners probably aren’t ever going to do that. It is more of a risk for beginners than for someone who knows what they are doing and can do some basic sanity checks once they have caught up with the blockchain tip.

It would be hard to maintain that for a long time. As soon as someone sends a transaction even on the normal network, the counterparty is not going to get that transaction because they are on a different network. It would require a lot of things to all be malicious for anyone to be successfully fooled by it.

Once you are on that fake network you are basically owned. You are owned by that attacker. That attacker can do anything. You could send a transaction out and the attacker could say “This transaction is in a block” with no proof of work on a fake chain.

What if you tried to receive some Bitcoin?

Matt Corallo did a PR a while ago that was one of his “this is how we could use Rust to improve bitcoind” ideas. This was a block downloader that downloads block headers over HTTP instead of the P2P network. It is a centralized way of checking that the headers you are getting over the P2P network are actually what other people seem to see as the best, most-work chain. If you were eclipsed and being fed bad blocks they would almost certainly have less work than the real chain. Via these HTTP-served headers, which can be secured by HTTPS in a centralized manner, you would at least get an alert that you’ve been eclipsed and you don’t know what is going on. I think he has had a couple of other ideas like that. There is a PR from Antoine Riard that generalizes this a bit and is not all in Rust.

The AltNet PR?

Yeah that one.

That would be a cool idea to have alternative checks of these things. Maybe you would check your block height or some other thing in some other way. That would tip you off that something is wrong.
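The cross-check could look something like the sketch below, where the HTTPS endpoint and the local tip values are placeholders rather than anything from Matt Corallo’s PR or a real service.

```python
# Hedged sketch of an out-of-band eclipse alarm: fetch the current tip from an
# HTTPS source and compare it with what your own node believes. The URL and
# the local-tip values are placeholders, not any real service's API.

import json
from urllib.request import urlopen

HEADER_SOURCE = "https://example.com/bitcoin/tip"    # hypothetical endpoint

def fetch_remote_tip():
    with urlopen(HEADER_SOURCE, timeout=10) as response:
        return json.load(response)                   # e.g. {"height": ..., "hash": ...}

def eclipse_check(local_height, local_hash):
    remote = fetch_remote_tip()
    if remote["height"] > local_height + 6:
        return "warning: the outside world is far ahead of us, possible eclipse"
    if remote["height"] == local_height and remote["hash"] != local_hash:
        return "warning: same height but different tip, possible eclipse or fork"
    return "tip roughly agrees with the out-of-band source"

# Example call with placeholder values for the node's own tip:
# print(eclipse_check(local_height=640000, local_hash="000000..."))
```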

If you are running a node that you expect to be visible to everyone else you can look it up on Bitnodes.

You are only on that if you are listening.

If you’ve got an open port.

I don’t think it is capturing all the Tor only nodes as well. If you are going to Tor only then you might not appear.

Let’s say there are long term Core developers who are running tonnes of nodes. Is there any way of monitoring the behavior of those DNS seeds to ensure that they don’t go malicious? It is certainly hard. But could you try to detect that they are putting you on a fake chain and then flag that the DNS seed has gone malicious?

You could make every node or DNS seed put a digital signature on anything they send you. If they send you BS you could go on Reddit and complain “This guy sent me this with this public key.”

I don’t think you need a digital signature to go on Reddit and complain.

That would be flagging after the fact. My question was how do you flag it before a victim gets screwed by a malicious DNS seed.

If your software can determine this guy is malicious and you have the signature you could forward the signature to everyone else saying “This guy sent me this bogus thing. Don’t use that guy.” It is really easy to change your identity on the network anyway so I don’t know if it makes sense.

You can only forward things to everyone else if everyone else is on the P2P network. This is trying to get them on the P2P network in the first place. I think that falls down there.

Because the idea is you are already in the malicious world and you are not connected to the good people.

There is also another problem: if your DNS resolver, the DNS server you have set in your configuration, is malicious then it could answer with the wrong nodes for every seed.

secp256kfun library

https://github.com/LLFourn/secp256kfun

My library is there. It is self explanatory. If you are interested in making experimental cryptography in Rust check it out.

How are you building it? Are you taking Pieter’s libsecp library and translating the core parts into Rust? Where are you starting from?

The individuals from Parity who wrote the alternative Rust Bitcoin node also wrote a pure Rust translation of it. The translation work was not done by me. If they hadn’t done it I would never have done it. It is not guaranteed to have fidelity with the original C code so it is a risky choice. For experimentation it is decent enough. That is what I am using. On top of that I have so far only implemented BIP 340, ECDSA and the adaptor signatures for those. That is all that is there on top of the base secp library.

Have you found any problems with their code that you have had to fix in terms of security stuff?

With their code? One interesting thing, I did find a tiny issue with the real secp library while doing it. I found some logic in the ECDSA signature code which was actually incorrect. I made an issue and they fixed it. One of the interesting features of this library is that it marks elliptic curve points and scalars as zero or non-zero: this is a scalar known not to be zero. That is important in various ways. Sometimes zero is an illegal state for a public key to be in. A public key can never be zero. What I found was that the signing code for libsecp256k1 allowed zero in a certain circumstance where it should not have, and also disallowed zero in a circumstance where it should not have. The logic of my library, when I was implementing ECDSA, showed me there is an error here. You can’t do it like that because this is meant to not be zero but it is actually zero. The compiler told me this is wrong and I said “This is wrong. How did they do it in libsecp?”
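The marker idea can be illustrated roughly like this; secp256kfun enforces it with Rust types at compile time, whereas this Python analogue can only check at runtime (or via a static type checker), so treat it as an analogy rather than the library’s API.

```python
# Sketch of the "marked as non-zero" idea described above, transplanted into
# Python. In secp256kfun the compiler rejects passing a possibly-zero scalar
# where a non-zero one is required; here we can only check at runtime.

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

class Scalar:
    def __init__(self, value):
        self.value = value % N

class NonZeroScalar(Scalar):
    def __init__(self, value):
        super().__init__(value)
        if self.value == 0:
            raise ValueError("scalar is zero")

def secret_key_to_public_point(sk: NonZeroScalar):
    # A secret key of zero would give the point at infinity, which is not a
    # valid public key, so the function only accepts the non-zero type.
    if not isinstance(sk, NonZeroScalar):
        raise TypeError("a secret key must be provably non-zero")
    return f"point for scalar {hex(sk.value)}"

print(secret_key_to_public_point(NonZeroScalar(42)))
# secret_key_to_public_point(Scalar(0))  # rejected: zero can never be a key
```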

Elichai said in the London Socratic they had a bunch of mistakes, the Parity implementation. One of them was they “replaced a bitwise AND with a logic AND which made constant time operations non-constant time.” He seems to think there were loads of mistakes.

They had some mistakes for sure. There could be more. I can see some things where they have cut corners and not done it properly. There has been a huge amount of work put into the C library by Pieter Wuille, Elichai and the rest of the squad, and a lot of it has been done towards constant time operations and protecting against different kinds of side channel attacks. That work is nowhere near as good in that Parity library. The one he is talking about has been fixed obviously but there could be other problems with it. I didn’t find any problems with it myself, but there have been problems found with it and there could be more. At least in my usage of it, when you multiply a curve point by a scalar it produces the right curve point, which is pretty much all I need.

The alternative is to build it from scratch, if it was really bad code. It sounds like it is not too bad.

It is a copy-paste port without too much thought going into it. It is literally the same code, as when they replaced a bitwise AND with a logical AND. Obviously it is not the same thing; you cannot literally translate C into Rust. There were some corner cases where that doesn’t make sense and you really have to put some thought into how it should be done, and they made some mistakes there. I am very glad it exists. I thank the Parity people, the Ethereum people, for creating that. Without that I wouldn’t be able to build that library. I am very happy with how the library is done. Don’t use this to make a wallet or a node, that goes without saying. It is good for messing around and building proof of concept protocols. I am really excited about all the protocols coming into Bitcoin, Layer 2 protocols etc. This is a good way to have some code to do some testing on those ideas.

Generalized Bitcoin compatible channels

https://eprint.iacr.org/2020/476.pdf

Maybe we can talk about this paper next time. I am interested in people’s opinions on the paper. It is a way of getting symmetric state into Lightning. When you have Lightning nowadays you have two parties that both have a different transaction that represents the state for them. This paper is a trick to get it to symmetric state without eltoo. Eltoo has symmetric state. This proposal also has symmetric state. It uses adaptor signatures. It is a nice trick to do that. I would be interested in people’s opinions on this paper next time.

Is there a short blog post version of that paper or mailing list post?

I don’t think so. It was by some academics. I don’t think they have made a post about it. I can give the rough summary. Currently, with different transactions representing each person’s state, the transaction is the way to identify who put it down. If transaction A goes down that means Alice put the transaction on the blockchain. If transaction B goes down it means Bob put the transaction on the blockchain. The A transaction has a path for punishing Alice if it is the revoked state. The B transaction has a path for punishing Bob if it is the revoked state. Instead, in this paper, Alice and Bob both have the same transaction but they have different adaptor signatures on the inputs of that transaction. When Alice or Bob posts it, it is the same transaction with the same ID but they reveal a different secret to the other party. That secret allows the revocation to come in in a symmetric way, with symmetric transactions rather than asymmetric transactions. That’s the summary. They give a fuller academic explanation in the paper. It is a little bit tough but think about that idea and I would be interested to hear what people think about it.
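The property being relied on, that publishing the completed signature necessarily hands a secret to whoever holds the pre-signature, comes down to simple arithmetic. A minimal sketch of just that extraction step, leaving out the actual curve operations and signature verification:

```python
# Minimal sketch (extraction arithmetic only, no real signing or verification)
# of why posting a transaction completed from an adaptor "pre-signature"
# leaks a secret to anyone who already holds the pre-signature: the final
# scalar differs from it by exactly that secret.

import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

pre_signature = secrets.randbelow(N)      # s', handed to the counterparty off-chain
revocation_secret = secrets.randbelow(N)  # t, known only to the publisher

# Publishing the transaction requires the completed signature s = s' + t.
published_signature = (pre_signature + revocation_secret) % N

# The counterparty, who already has s', extracts t the moment s hits the chain.
extracted = (published_signature - pre_signature) % N
assert extracted == revocation_secret
print("secret revealed by publication:", hex(extracted))
```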

Meetup

Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Last month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/

Google Doc of the resources discussed: https://docs.google.com/document/d/1TKVOScS0Ms52Vwb33n4cwCzQIkY1kjBj9UioKkkR9Xo/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Succinct Atomic Swap (Ruben Somsen)

https://gist.github.com/RubenSomsen/8853a66a64825716f51b409be528355f#file-succinctatomicswap-svg

Succinct Atomic Swaps are an alternative way of doing atomic swaps. They can be useful in certain cases or less useful in others, depending on what you want to do. I will try to take you through how it all works. These are the slides that maybe some of you have seen a video of already. If you have any questions just interject and I’ll answer them. I will try to pause here and there so you can ask questions. Why am I spending a bunch of time looking at protocols like this? At a high level I think Bitcoin is a very important technology that needs to grow in ways that don’t really hurt decentralization. There are not many ways to do that. You can increase the block size but that hurts decentralization, so that is off the table for the most part. The question becomes what can we do with this blockchain resource in such a way that you do more with it without it becoming less decentralized. My focus on that has led me to research a bunch of topics. My statechains work is part of that: you have the trade-off of using a federation, so decentralization is not fully preserved, but you try to make the best trade-off possible where you do rely on a third party but as little as possible. Succinct Atomic Swaps fall in that same category of me looking at how we can do better. My example is going to be between Bitcoin and Litecoin. I think some people get confused and are like “This is an altcoin thing. It helps altcoins.” I want to mention ahead of time that atomic swaps are very general. You can do them within the Lightning Network even. You can do them between Liquid and Bitcoin etc. It happens to be something that is blockchain agnostic. You can use it with any blockchain you want but it is very obviously useful within Bitcoin. Also for privacy swaps where I have a Bitcoin, you have a Bitcoin and then we swap them in such a way that it is not noticeable onchain that we did so. That can give a huge privacy boost.

The general atomic swap protocol is actually four transactions. You have a preparation phase on one chain and a preparation phase on the other chain. Then you have the second transaction on each side where you actually do the swap. You reveal some secret and then the swap goes through on the other side as well. This is four transactions. What I want to show here is how to do it in only two transactions. There are two tricks to this. One side of it is that we have an asymmetric swap where one side is more complex while the other side is very simple. The other side of it is a Lightning type channel where you have to watch the blockchain for a certain transaction. That cuts the protocol down from 3 to 2 transactions. The second part can actually also be done in a traditional atomic swap but the nice thing here is that because it is asymmetric you only watch the blockchain on one of the two sides. As I’ll point out again later, the protocol itself is already set up in such a way that you can make a non-expiring channel without adding any extra overhead. Any overhead is already there because of the way the protocol works. It is elegant to do it within this protocol.

I am going to describe what the setup is. Later on I will have a more complex slide that really shows the actual in-depth protocol and all the onchain stuff. This is a high level overview to give you a better idea of what we are doing here. The first setup is, let’s say, on the Bitcoin side. Alice wants to lock up some Bitcoin with Bob. She doesn’t do so yet but she creates the transaction that can go to the blockchain later on. From that point on, Alice and Bob have to agree ahead of time how to spend this UTXO that will get onto the blockchain at some future time. The first thing they do is create a transaction that sends it to Alice and Bob again with a timelock of 1 day. From that point they send it to yet another timelocked transaction, but this one has a 2 day timelock: 1 day after the middle transaction goes on the blockchain, this transaction becomes valid. This is a refund transaction where the money just goes back to Alice. If this whole chain went to the blockchain it just means Alice got her money back. Alice locks her money up with Bob and then she gets it back. There is a * as you might have noticed. The * is an adaptor signature: if Alice sends this transaction to the blockchain reclaiming her money she reveals a secret which we call AS, Alice’s secret, and Bob then learns it. There is a reason why we would want to learn that which we will get to. There is one more transaction which says that if Alice does absolutely nothing, Bob gets the money. The reason for that is to really make sure Alice reveals this secret after a couple of days. With this current setup Bob knows that he will either learn this secret or he will get the money. Obviously Alice, not being crazy, will broadcast this to the Bitcoin blockchain. Just to clarify, what we are doing here is not yet a swap, this is the setup. The setup means that if worst comes to worst, the parties stop responding, stop communicating with each other, what happens? In that case you want it to revert back to normal. What we are doing here is reverting it back to normal by allowing Alice to just get her money back. At this point Alice is ready to send this Alice and Bob (A+B) transaction to the blockchain because she has a guarantee that she will get her money back, assuming she pays attention and is faster than that Bob transaction at the end. Now that this transaction is on the blockchain Bob is basically guaranteed to learn Alice’s secret. Because of that Bob just broadcasts this transaction on, say, the Litecoin blockchain where he locks up the money with Alice’s secret and Bob’s secret. If everything stops working out Alice is forced to send this transaction to the blockchain, the transaction that gets her money back and reveals Alice’s secret. Once Bob learns Alice’s secret then he also gets his refund on the Litecoin blockchain because he already knows BS and now he also learns AS. He has the complete secret.

The next step is going to be doing the actual swap. What we want here is the opposite to happen. Instead of AS being revealed and Alice getting her money back we want Bob to get the money and BS to get revealed to Alice. How do we do that? We create another transaction to Bob and that transaction, if Bob sends it to the blockchain, will reveal BS. Because this one has no timelocks it happens before everything else we have created thus far. This enables the swap. Now if this transaction goes to the blockchain, BS is revealed, and Alice, already knowing Alice’s secret and now also knowing Bob’s secret, gains control over the transaction on the Litecoin side. Bob, if he has sent this to the blockchain, now has the money on the Bitcoin side. This is a three transaction protocol. The main point here is that this is already better than the original atomic swap. With the original atomic swap you would have 4 transactions. Here on the bottom side, nothing happens. It is just one single transaction and the key is split into two parts. Either Bob gets it or Alice gets it because they give one of the parts to each other. The next step would be how to turn this into a 2 transaction variation. What we do is we don’t broadcast this transaction at the top. We give it to Bob but Bob doesn’t broadcast it and instead Bob just gives Bob’s secret to Alice. Bob knows he could broadcast it and get the money, he is guaranteed to receive it, but he doesn’t do so and he just gives the secret to Alice. Alice now has control over the Litecoin transaction and Bob, if he sends that transaction to the blockchain, would also get control over the Bitcoin transaction. However instead of doing that Alice gives Alice’s key to Bob. Now the swap is basically complete in two transactions. They both learn the secret of the other person on the other side of the chain. This would literally be a 2 transaction swap. But there is still this little issue of this transaction existing where Alice gets her money back. This timelocked transaction still exists. What this means is that even though the protocol has completed in 2 transactions there is still a need for Bob to watch the blockchain to see if Alice tries to get her money back. The way the timelocks are set up, particularly with this relative timelock at the end there, what this means is that Alice can send this middle transaction A+B (1 day lock) to the blockchain but in doing so Bob will notice that the transaction goes to the blockchain. Since Bob has Alice’s key he can spend it before the third transaction becomes valid. Bob will always be able to react, just like a Lightning channel where one party tries to close the channel in a way that is not supposed to happen. The other party has to respond to it. It requires Bob to be online in order for this to be a true 2 transaction setup.
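A minimal toy model of the information flow just described, assuming only that each secret is revealed exactly as in the walkthrough; it models who learns what, not real transactions or adaptor signatures.

```python
# Toy model (my own sketch, not the protocol's actual code) of who learns which
# secret. The Litecoin side is a single output to the joint key AS+BS, so it is
# controlled by whoever ends up knowing both secrets.

alice_knows = {"AS"}
bob_knows   = {"BS"}
JOINT = {"AS", "BS"}

def refund_path():
    """Alice broadcasts her timelocked refund; the adaptor signature reveals AS."""
    bob_knows.add("AS")          # Bob watches the chain and extracts AS
    # Alice has her BTC back; Bob, knowing AS and BS, reclaims the LTC output.
    return ("Alice refunded on BTC", bob_knows >= JOINT)

def swap_path():
    """Bob claims the BTC (or simply hands BS over); either way Alice learns BS."""
    alice_knows.add("BS")
    # Bob has the BTC; Alice, knowing AS and BS, now owns the LTC output.
    return ("Bob paid on BTC", alice_knows >= JOINT)

print(swap_path())   # ('Bob paid on BTC', True); refund_path() is the fallback
```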

To go through the negatives, there is an online requirement for Bob, not for Alice. As soon as the swap is complete Alice is done but Bob has to pay attention. If he doesn’t Alice could try funny. There is an introduction state. We’ve got these secrets that are being swapped. If you forget the secret or if you lose the information that was given to you you will no longer have access to your money. That is different to how a regular Bitcoin wallet works where you have a seed, you back it up and when you lose your phone or something you get your backup. That is not the case here. Every time you do a swap you have some extra secrets and you have to backup those secrets. The positives are that this all works today. You can do this with ECDSA. Lloyd’s work, I am going to call this 1P-ECDSA but I’m not sure if Lloyd agrees. You can do a very simple adaptor signature if you have a literal single key that is owned by a single owner. Using that we can do adaptor signatures on ECDSA today. It only requires 2 transactions as you already know. It is scriptless or at least it is mainly scriptless. There is one way to add another transaction to it that isn’t completely scriptless in order to make it a little bit cheaper to cover the worst case scenario where Alice locks up her money and Bob does nothing. If you want it to be completely scriptless it can be. It is asymmetric. That has some advantages. You could think of this as one blockchain doing all the work and the other blockchain literally do nothing. This allows for two things. The first thing is that you can pick your chain. This is the cheaper chain, maybe some altcoin chain and this is the more expensive chain like the Bitcoin chain. On the Bitcoin chain you do the simple version of the swap. On the altcoin chain you do the complex version. If something goes wrong then the cheaper blockchain will bear the brunt. It is without any timelocks. That is really nice because that means that there is no timelock requirement on one of these two blockchains. I think Monero is one example that doesn’t have timelocks. It is very compatible with even non-blockchain protocols where you have some kind of key that has ownership or you can transfer ownership. This can be very useful for Payswap or other privacy protocols and people are talking about these on the mailing list today. How to do atomic swaps for privacy. Hopefully Succinct Atomic Swaps can be part of that. A way to do it more efficiently. One thing I still need to point out. We can make use of MuSig on ECDSA on the Litecoin side of things. The problem with ECDSA is that you can’t do multisig signing or at least without 2P-ECDSA which is a more complex protocol in order to signing with a single key. In this case we are not really signing anything. We just have this one transaction on the blockchain. There are two secrets. Part of one of these secrets gets revealed to the other party but at no time do two people need to collaborate to sign something. There are no transactions that are spending it. We can use MuSig here and replace this key with M in this case and have it be the MuSig of AS and BS. On the blockchain this bottom transaction looks like a very basic transaction like if you were making a payment from a single key. It would be indistinguishable without any advanced 2P-ECDSA or Schnorr. We can do this today.

What exactly are the requirements for the alternative coin? It doesn’t need to have timelocks, does it need Script?

You just need some kind of key transfer mechanism, a signature scheme where you can send money from one key holder to another key holder, that is sufficient. You don’t need anything else. I can barely think of anything that it wouldn’t be compatible with.

You could even have M as the key to your sever. A SSH key, you could swap ownership for that. It doesn’t even have to be a blockchain or an asset.

That’s right. It is weird that you could do those things.

What are some of the practical applications that someone might use it for? Instead of Bosworth’s submarine swaps? Can you give some examples?

MuSig makes one of these transactions indistinguishable from any other regular transaction. Another interesting observation for use cases is that this MuSig side, the simpler side of the two in the swap, can be funded by a third party. Normally the problem is that you need to create some kind of transactions that occur after this transaction goes onchain but because this is just a single transaction you don’t have to worry about what the txid is or something. You just need to create that one single transaction. You can imagine doing some kind of swap where I create a transaction where somebody pays me but that pays into a swap. Then I can swap. As a merchant you can receive money and then you can immediately swap with it. That is something that is not possible today without either using this Succinct Atomic Swap protocol or in a way that doesn’t preserve privacy. You have some advanced scripts that you make use of and then it is very obvious that you did a swap. That is one unique case.

It might give people more privacy when they are paying a merchant. Instead they are paying a Succinct Atomic Swap. Is that one example?

Yes that would be one example of something that is possible with this.

It looks like a normal address but the atomic swap is this joint key address from somewhere else. You are getting that money straight into your Lightning and some other market maker is getting the actual Bitcoin onchain.

That is right. You can do that with Succinct Atomic Swaps very easily without revealing any privacy. You can already do it but if you did it today it would be very obvious you are doing it.

You said timelocks weren’t needed but at least one of the chains should support timelocks?

That’s right. Timelocks are not needed on one side of the swap but on the other side of the swap it is absolutely necessary.

How does it differ from Chris Belcher’s CoinSwap? They are similar in quite a lot of ways. How are they different?

It is the same thing basically. What Chris Belcher is working on is using atomic swaps to maximize privacy. The atomic swap itself wasn’t really novel. This is a drop-in replacement for regular atomic swaps. It is orthogonal. Chris Belcher can choose to use this protocol depending on what he ends up with. One of the things that this protocol doesn’t do well is multiparty swaps. If you want to do something like Alice pays Bob providing Bob pays Carol providing Carol pays Dave, that seems impossible or is at the very least difficult with this protocol. It depends on what Chris ends up doing, whether or not a Succinct Atomic Swap makes sense. If you just do a swap between Party A and Party B it seems Succinct Atomic Swaps are more efficient. There are no downsides to doing it. They might be some trade-offs that are hard to argue about but because of the asymmetry it might be better or worse in terms of who takes the risk. With a swap you always have a risk where you do the setup and then one of the parties disappears and one party might end up paying more. With the asymmetry the side that has all the complexity bears all the cost if things go wrong. You can imagine a protocol where all the complexity is at the client side. There is a server, a swap server for privacy, where you suggest a swap to them and they give you an equal amount of some other UTXO. Assuming the server has some kind of reputation where you know the server isn’t going to back out of the trade, they don’t have any risk. The client has to suggest the swap and if the client disappears it is on the client to bear all the cost. It depends on what you are doing. If you want to do it the opposite way, if you want the swap end up on the opposite side then the server needs to open the channel and now you have a DOS vector. It depends. In certain cases it is just more efficient, that is the bottom line.

You mentioned the altcoin side only needed to be a private key. Could this be used then to buy an encryption key to decrypt some data that you might be selling or something like that? Does it literally just need a private key? Those guys at Suredbits were very interested in PTLCs because they were trying to sell encrypted data with keys to decrypt it.

I think there is no need for this specific protocol there because the main thing here is that you have this either Alice gets the full key or Bob gets the full key. In the case of encryption one person already knows the key and the other person needs to get it. I think that goes more into zero knowledge protocols where you have the key that is the encryption of something you are interested in and you have to prove that if they learn the key they can decrypt the thing that interests them. Greg Maxwell wrote an article about it.

It was about selling sudoku puzzles.

You encrypt some data and then you prove in zero knowledge that the encrypted data is a solution to what you are looking for. Through Bitcoin you buy the key that allows you to decrypt the data. I happen to know that at Blockstream they are working on a MuSig related protocol that also would be helpful for that. A hint to what is coming in the future. I don’t think they have published that paper yet. That will be interesting.

One point I skipped over which is interesting. Because this protocol is simple to set up on one side without any timelocks whatsoever you can use this to buy coinbase UTXOs. You can imagine one person wanting to have some coinbase coins. Let’s say I want to buy 1 coinbase Bitcoin. A miner has to mine this UTXO and create a transaction output that has 1 Bitcoin in it. They do so with the AS + BS setup using MuSig so it looks like a single key. Before they start mining it first we do the setup. Now the miner starts mining and eventually they will mine a coinbase transaction with this AS + BS transaction in it and they complete the swap. With a coinbase transaction you can’t spend it until 100 blocks have passed. Now you are not really spending it, you never have to send it out to the blockchain afterwards so you can change the ownership of the coinbase transaction before 100 blocks have passed. You can do a swap into a coinbase transaction. That is cool. Regarding swapping in and out of Lightning in a single transaction it might be possible, maybe not. You can imagine on Lightning having a route from Alice to Bob to Carol. You have some kind of onchain transaction where it is Alice’s secret and Carol’s secret. Depending on whether the money goes from Alice to Carol, Alice learns Carol’s secret or Carol learns Alice’s secret. This is not how Lightning works today because normally what you do is reveal a single transaction. In this case what you do is reveal a secret if the payment goes through and reveal a different secret if the payment doesn’t go through. I’m not sure if that is compatible with routing. If it is then it might be an interesting way to do swaps in and out of Lightning that are more efficient than submarine swaps. If I’m not mistaken submarine swaps require two onchain transactions every time you do it.

You are not sure if you can do it on Lightning? If it is just A to B you definitely can. It is just the hop that is in question?

I think that is right. If you are interested in reading more of the details you can go to tiny.cc/atomicswap. I also have some links to some of my other work. Statechains, PoW fraud proofs, blind merged mining, perpetual one-way peg. These last two I wanted to present at the Bitcoin 2020 conference in San Francisco. Because of Corona that didn’t happen. I have a full presentation that I am hoping to present at a conference but maybe I should present it online. That might be something we can go over at some other meetup as well.

Bitcoin Optech 98

https://bitcoinops.org/en/newsletters/2020/05/20/

We had Bitcoin Optech 98 but one of the key items on there was Succinct Atomic Swaps so we will skip over that one and go to the next one.

CoinSwap

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html

It is looking at how we can use atomic swaps in such a way that we increase privacy. It goes through different setups you can create. One of them would be on one side you give somebody 1 Bitcoin and on the other side you give somebody 3 UTXOs, 0.1, 0.2 and 0.7. You can do these fan out swaps and you can make swaps depend on other swaps. You can do swaps in a circle where Alice gets Bob’s money, Bob gets Carol’s money, Carol gets Alice’s money and things like that.

An onchain Lightning payment?

That is what atomic swaps are in essence. It is an onchain version of what you are doing offchain on the Lightning Network. Generally speaking that is what I have wondered about atomic swaps, where are they more useful than Lightning? Currently with things like Coinjoin there are definitely some issues there. For instance you have these change outputs that you then cannot use anymore or you can’t use them in combination with the outputs that did get coinjoined. Maybe you can take the change outputs and swap those to get privacy in your change as well. Very generally speaking you are paying onchain fees for your privacy. With the Lightning Network your payments are private but you are not mixing up your coins. I guess that would be the use case. It is not fully efficient in terms of onchain space but that is what you pay for in order to get privacy.

There is a section around the differences as well. One of them is liquidity because of the payment size on the Lightning Network and because it is onchain.

Chris Belcher is focused on privacy and predominantly onchain stuff. He does discuss the differences between Lightning here but the use case he is most interested in is “I have a transaction history behind my coin and I want to swap it with, say, Stephan’s”. If someone like Chainalysis comes along they are like “These are Stephan’s coins, I’m tracing them through the network” but actually Stephan and I swapped our coin history. That’s the crux of the idea. He does talk about the danger of you and me swapping when Stephan is actually the adversary or Chainalysis.

It is kind of like a dust attack.

If I swapped my history with you Stephan and you are malicious then I am telling you what my coins are which isn’t great for privacy. He talks about routing CoinSwaps where you have a group of people and as long as they are not all colluding it is safe to do the CoinSwap.

He layers on the fidelity bond idea as well to try to make it harder for someone to do the sybil thing. A cool idea, definitely going to take a lot of work to make this happen.

Do they all have to be the same amount like Coinjoin?

I think it can be split up. You could have the same amount but from different chunks.

That is right. You can split it up. You can do Payswap where you are swapping and paying at the same time. Your change output is what is getting swapped back to you. Now the amounts are not exactly the same anymore. Those are all methods to make it a little less obvious. If you literally swap 1 Bitcoin for somebody else’s 1 Bitcoin, first you would have to wonder if a swap is happening or not. Is someone just paying 1 Bitcoin to somebody? The second thing would be who could this person have been swapping with? That depends on how long the timelock is and how many 1 Bitcoin transactions have occurred. Every 1 Bitcoin transaction that has occurred in that period is the anonymity set, as opposed to Coinjoin where your anonymity set is literally just everybody within that specific transaction. That has some potential to have a larger anonymity set. The other thing that is interesting is that the anonymity set is not limited to Bitcoin itself. If you do a cross chain swap it means that every transaction on every blockchain could’ve been the potential source of your swap. If a lot of people do these cross chain swaps that makes the anonymity set even larger and makes it more complex to figure out.

That is definitely a big use case of atomic swaps. Say you want to use zero knowledge proofs on Zcash or use Monero to get some privacy with your Bitcoin and then get it back into Bitcoin. Does that bring strong benefits? Or use a sidechain with confidential transactions on it. This seems like a useful use case.

There was a Chainalysis presentation where they talked about things like that. If you really do it properly then maybe but a lot of these anonymity coins are not quite as anonymous as you think. If you do the really obvious thing where you go in and go back out and somebody saw you go in and out it is also not anonymous. If you are doing swaps it is maybe better to try to set up the swaps with Coinjoins and CoinSwaps so you don’t rely on going to this other place and swapping it round. I’m not sure to what degree that is useful, maybe it is.

The other thing with that kind of approach is timing analysis. It depends on how it is done but it can be unmixed if people are able to look at what is coming in and coming out.

I suppose the challenge is finding that person who is happy to swap say Monero, Zcash or Liquid for your Bitcoin. You could be leaking privacy finding that counterparty.

I suppose there is going to be a market for that. If you are willing to pay for it there will be a market. If the market is not very liquid you are going to have to pay more. I guess you are concerned that in your search for your swap partner you might be leaking information. That is a tricky one. You have definitely got to be careful.

Custody Protocols Using Bitcoin Vaults

https://arxiv.org/abs/2005.11776

Last month we spoke about this vault idea. This was a paper on vaults and covenants. As I understand it, the idea is that if you are being stolen from out of this vault you limit your loss and you have a watchtower that spends it all into a safe secondary address. If you are an exchange you might want to secure your coins in the vault this way.

We had a Socratic on this in London a few weeks back. It was very interesting. You talked about what people are trying to achieve but everyone has very different scenarios: a different number of parties, whether they want to get access to their coins quickly, there are so many different dynamics in play. Also these designs are vastly improved with something like Jeremy Rubin’s CHECKTEMPLATEVERIFY which obviously isn’t in Bitcoin now. Bryan (Bishop) was saying he has built a vault design using that and Kevin (Loaec) was like “I’m not spending any time on that until it is in Bitcoin. It might not ever get in Bitcoin.”

The trick is that you want to signal what you are going to do with the funds so that onchain you can see this is what is going to happen and you can respond to it if that is not what is supposed to happen. That is the general way you can look at these vaults. You are signaling the money is about to move and then if it is not what you intended you can claw it back.

Bryan’s idea was if you wanted to send money you’d send it in drips. If one drip went to a destination that was not your intended destination then you would stop any future transfers. Instead of losing your whole stash you would just lose that first installment of the funds you were sending.

Elichai Turkel response to the Rust discussion at last month’s Socratic Seminar

https://gist.github.com/elichai/673b031fb235655cd254397eb8d04233

We had a discussion last month at the Socratic whether Rust should ever get into Core and if so when. One of the conversations we had last month was that we didn’t see any rust-bitcoin nodes on the network. Elichai is saying in this that the Rust guys don’t think that is a good idea and they don’t want any rust-bitcoin nodes on the network. My view was if you are going to be introducing this new language which is a big deal for Bitcoin Core perhaps there should be some live Rust nodes on the network to work out kinks and any problems with using Rust on a full node implementation.

There aren’t many live running non-Bitcoin Core nodes are there? There is libbitcoin and stuff like that…

A very small percentage. There is Bcoin and roasbeef’s btcd and libbitcoin. They have a few nodes on the network.

Bitcoin Knots too. Some people run that. The Wasabi guys run Knots?

Luke Dashjr’s full node. It is a very small percentage. I don’t think it is bad for the network. It would be a problem if one of them was massively used and was starting to rival Bitcoin Core as the full node that everyone is using.

That wouldn’t be a problem either. You want to run the node that everybody is running. You want to be on the network that everyone else is on. If you run some alternative version of Bitcoin then you risk getting forked off the network. If everybody is running this alternate version that is not Bitcoin Core then that is what Bitcoin becomes. It is like firing Core in a sense. I don’t think that is ever going to happen. You are putting more trust in the consensus of some alternate fork. That is the problem. There is such a huge incentive to be where everybody else is that it makes no sense to run an alternate client. You can say “I am going to run Bitcoin Core and I am going to run rust-bitcoin or something” but if you do that you are doubling your validation cost. It is like doubling the block size or something. I guess it is just CPU, you can save on bandwidth costs at least. That is not great either in terms of validation cost. It is a natural effect that everybody picks what everybody else is picking.

There is an interesting comment that Elichai makes that rust-bitcoin is used in production by many users just not for Bitcoin Core. It is used on other things. Lots of us are running electrs (Electrum server). That is a good example.

I didn’t realize electrs was using rust-bitcoin.

I have never run rust-bitcoin in my life but I do run electrs.

That runs on Bitcoin Core too? You connect electrs to Core? I’m not sure, I’ve never used it.

Elichai’s point is that it is using the rust-bitcoin library but there is no current full node implementation using rust-bitcoin.

There is a full node implementation in Rust written by Parity. The Ethereum people wrote a full node Bitcoin implementation in Rust. It exists. I don’t know if everyone runs it, it looks like it isn’t maintained.

rust-bitcoin is a library. Maybe someone has made a little node using rust-bitcoin against the recommendations or for an experiment.

Jameson Lopp did some tests on alternate implementations. The last one I saw, Parity was completely stalled and not able to sync up.

It has several advantages. It can connect to a Bitcoin Cash node at the same time.

How exciting.

It does say rust-bitcoin supports consensus code, fully validating blockchain data but you shouldn’t do it.

I don’t know if anyone actually uses it like that. The recommendation is against that.

This library is “simply unable to implement the same rules as Core”

It doesn’t have the same rules. In Core there are all these rules for signatures, it doesn’t do that I don’t think. It uses secp256k1, the C library, whatever is in there it has. But I don’t think it uses these very Bitcoin specific things that were put in there as part of consensus. You need the Bitcoin consensus library.

The line you are pointing to is just a warning saying “Don’t run alternate implementations”.

Fanquake blog post on Subtleties and Security

https://blog.bitmex.com/subtleties-and-security/

I don’t think fanquake is on the call. He had a really great post.

Let’s add it to next month’s in case he attends next month. It would be good to get him to go through it.

I need to get him on the podcast at some point.

London Socratic on BIP-Schnorr (BIP 340)

https://diyhpl.us/wiki/transcripts/london-bitcoin-devs/2020-06-16-socratic-seminar-bip-schnorr/

We did a Socratic last week on BIP-Schnorr and Tim Ruffing did a presentation the following day on multisig and threshold sig schemes with Schnorr. It was a really interesting conversation on the history of Schnorr, how the idea evolved, what the risks are today of implementing it. Pieter had some comments on how he was more worried about people building things on top and the problems that might ensue with things people build rather than the actual Schnorr implementation. We talked a little bit about secp256kfun too. The video is up, the transcript is up. Tim said it was a shame you weren’t on the call Lloyd because we could’ve discussed hash function requirements for Schnorr signatures. You had that academic poster we looked at a couple of months ago on hash function requirements for Taproot and MuSig.

Delay querying DNS seeds in Bitcoin Core (PR 16939)

https://github.com/bitcoin/bitcoin/pull/16939

When we start a Bitcoin node up from scratch it obviously has to have some way of working out who all the other Bitcoin nodes are so it can talk to them and share transactions and blocks. The original way that it used to work back in Satoshi’s day was that there was an IRC channel that all the clients would log on to and talk to each other over IRC. That was obviously the best way of doing it but IRC servers didn’t like that and thought it was a botnet. The modern way we have is that a few developers run what are called DNS seeds. You can do a DNS query and get a list of a bunch of peers on the network that are running bitcoind and connect to them via that. In case the DNS doesn’t work there are also some fixed seeds that get shipped hardcoded into bitcoind that it will use if it can’t find anything else. Once you are on the network then you start querying for what other peers are on the network. You keep a record of that so you don’t have to do this every time. But the basic way the code worked before this patch was that it would start up your node and then it would try connecting to a peer. Most of the time that peer has disappeared in the meantime. It would sit there trying to connect for maybe ten seconds, five seconds or something like that. It would fail and then it would try another one, that is another 5-10 seconds gone. By the time you have tried 5 peers that is like 30 seconds gone. At which point it says “You don’t know any good peers. I’ll query DNS.” This effectively meant that every time you start your Bitcoin node, even if it knows tens of thousands of other peers to try, it will still do a DNS query which kind of gives away the fact that you are running a Bitcoin node if nothing else did. Once you have been on the network for a while you have got a list of tens of thousands of other peers. If you have a list of that many peers at least try a few for five minutes before trying to query the DNS to get more stuff. So that’s pretty much the summary.

It is to use those DNS seeds less at a high level?

It is to use them less as long as there’s no apparent reason why you need them.
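A rough sketch of the startup behaviour being described, not Bitcoin Core’s actual code: try peers you already know about for a while and only fall back to a DNS seed if none of them answer. The peer addresses, timings and seed hostname below are made up for illustration:

```python
import random
import socket
import time

# Hypothetical values, for illustration only.
DNS_SEEDS = ["seed.example.org"]                                  # placeholder seed hostname
KNOWN_PEERS = [("203.0.113.5", 8333), ("198.51.100.7", 8333)]     # remembered from a previous run
CONNECT_TIMEOUT = 5                                               # seconds per connection attempt
TRY_KNOWN_PEERS_FOR = 5 * 60                                      # try our own list for ~5 minutes first

def try_connect(addr):
    """Attempt a TCP connection to a peer; return True on success."""
    try:
        with socket.create_connection(addr, timeout=CONNECT_TIMEOUT):
            return True
    except OSError:
        return False

def find_peer():
    deadline = time.time() + TRY_KNOWN_PEERS_FOR
    # First, try peers we already know about from earlier sessions.
    while time.time() < deadline and KNOWN_PEERS:
        addr = random.choice(KNOWN_PEERS)
        if try_connect(addr):
            return addr
    # Only now fall back to a DNS seed. This query is what advertises that
    # a node is starting up, so it is avoided when it isn't needed.
    for seed in DNS_SEEDS:
        for info in socket.getaddrinfo(seed, 8333, proto=socket.IPPROTO_TCP):
            addr = info[4][:2]
            if try_connect(addr):
                return addr
    return None
```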

How much of a worry is it that you are relying on these DNS seeds when you first boot up your node? At an extreme level if you are trusting them to give you IP addresses and they give you IP addresses that they own then it doesn’t really matter what you are doing with the initial block download and all the verification because you have been gamed right from the beginning.

The nodes are all run essentially by well known devs. In that kind of sense they are a trusted source of information. Or at least hopefully not malicious. When you start up bitcoind at first it will query all the nodes. As long as one of them is giving you good data that should mean you get onto the real network. Anytime after that you will query in blocks of three. As long as you don’t have three of them that are all co-conspiring to be evil and you don’t happen to pick those three by bad luck then you are probably good. It is a centralized thing so you are much better off if you get onto the P2P network via a friend who has been on it for years or some other way like that. That would be a -connect argument with your friend’s IP address.

DNS seed policy

https://github.com/bitcoin/bitcoin/blob/master/doc/dnsseed-policy.md

I think there needs to be some edits to that DNS seed policy because it wasn’t clear without going into the documentation or the code that you are spreading that trust amongst multiple seeds in all cases. That’s important especially as we get more DNS seed operators, Wiz has just become a new DNS seed operator. As we add more then it is going to be harder to monitor that one or two don’t turn malicious so it is important to be getting IP addresses from as many seeds as possible to reduce that risk. Do you agree?

I’m not sure if there is a sweet spot. If you open it for all of them it is maybe too broad.

If you have so many seeds that you can’t check any of them then it is worse than having a few seeds where you can check that they are behaving reasonably well. I agree there is probably some sweet spot. I’m pretty sure at least one of the seeds is hidden behind Cloudflare. That means that if Cloudflare decides to be malicious with bitcoind stuff then they have got that avenue to be malicious. Presumably everyone is going via their ISP or Google or whoever’s DNS, that is an attack point as well potentially.

There is also a privacy concern coming up because normally as a seed you can’t see who is asking you. There is an extension in DNS where you can see a part of the client IP address, the network part, so you can see where the query is coming from.

Hypothetically if there was a malicious seed who only gave you bad peers. Wouldn’t you periodically come across other peers on the network? Or would you be contained into this malicious network?

If the peers in your database are all malicious then those malicious peers will only tell you about other malicious peers. How will you ever find one that is a good one? You will have asked three DNS seeds for peers in the first place. You’d need all of those to have given you bad peers in the first place and you’d have to not accidentally query DNS again. It is an attack vector but it is probably one that you have to be somewhat specifically targeted to get attacked that way.

It relies on you not using the -connect flag to specifically connect to a known peer that you know.

I wouldn’t think that too many people would do that. If you are a beginner running a full node for the first time you are not going to know to do that. At the moment all of those seeds are serious Bitcoin people you really would hope would never turn malicious. As you expand that group of DNS seeds then I suppose it is always a risk. That is why you would need to be calling multiple of them.

It is a longer term risk. Maybe you could argue have we seen that kind of attack in the wild yet? How much effort should be put into protecting against this thing?

One way to address this danger is to go out of band and check a block explorer that you do have the latest block. But this is doing stuff outside of Bitcoin Core software, outside of the protocol. Again beginners just aren’t probably ever going to do that. It is more of a beginner type question than someone who knows what they are doing and can do some basic sanity checks once they have caught up with the blockchain tip.

It would be hard to maintain that for a long time. As soon as someone sends a transaction even on the normal network, the counterparty is not going to get that transaction because they are on a different network. It would require a lot of things to all be malicious for anyone to be successfully fooled by it.

Once you are on that fake network you are basically owned. You are owned by that attacker. That attacker can do anything. You could send a transaction out and the attacker could say “This transaction is in a block” with no proof of work on a fake chain.

What if you tried to receive some Bitcoin?

Matt Corallo did a PR a while ago that was one of his “this is how we could use Rust to improve bitcoind” ideas. This was having a block downloader that downloads block headers over HTTP instead of the P2P network. It is a centralized way of checking that the headers you are getting over the P2P network are actually what other people seem to see as the best, most work chain. If you were eclipsed and being fed bad blocks those would almost certainly have less work than the real chain. You would then be able to find out via these HTTP served headers, which can be secured by HTTPS in a centralized manner, and at least get an alert that you’ve been eclipsed and you don’t know what is going on. I think he has had a couple of other ideas like that. There is a PR from Antoine Riard that generalizes it a bit and is not all in Rust.

The AltNet PR?

Yeah that one.

That would be a cool idea to have alternative checks of these things. Maybe you would check your block height or some other thing in some other way. That would tip you off that something is wrong.
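A minimal sketch of that kind of out-of-band check: compare the local node’s tip (via the real getbestblockhash RPC) against some independent HTTPS source and warn on a mismatch. The HTTPS endpoint here is a hypothetical placeholder, not Matt Corallo’s actual design:

```python
import subprocess
import urllib.request

def local_tip():
    """Ask the local node for its best block hash via bitcoin-cli."""
    out = subprocess.run(["bitcoin-cli", "getbestblockhash"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def external_tip():
    """Fetch a tip hash from an independent HTTPS source. The URL is a
    placeholder; any explorer or headers-over-HTTP service you trust enough
    to act as an alarm bell would do."""
    url = "https://block-source.example/api/tip-hash"  # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    if local_tip() != external_tip():
        # A single mismatch may just mean the two sources are a block apart,
        # but a persistent mismatch suggests you may be eclipsed.
        print("WARNING: local tip differs from the out-of-band source")
```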

If you are running a node that you expect to be visible to everyone else you can look it up on Bitnodes.

You are only on that if you are listening.

If you’ve got an open port.

I don’t think it is capturing all the Tor only nodes as well. If you are going to Tor only then you might not appear.

Let’s say there are long term core developers who are running tonnes of nodes. Is there any way of monitoring the behavior of those DNS seeds to ensure that they don’t go malicious? It is certainly hard. But trying to detect that they are trying to put you on a fake chain and then flagging that the DNS seed has gone malicious.

You could make every node or DNS seed put a digital signature on anything they send you. If they send you BS you could go on Reddit and complain “This guy sent me this with this public key.”

I don’t think you need a digital signature to go on Reddit and complain.

That would be flagging after the fact. My question was how do you flag it before a victim gets screwed by a malicious DNS seed.

If your software can determine this guy is malicious and you have the signature you could forward the signature to everyone else saying “This guy sent me this bogus thing. Don’t use that guy.” It is really easy to change your identity on the network anyway so I don’t know if it makes sense.

You can only forward things to everyone else if everyone else is on the P2P network. This is trying to get them on the P2P network in the first place. I think that falls down there.

Because the idea is you are already in the malicious world and you are not connected to the good people.

There is also another problem: if your DNS resolver, the DNS server you have set in your configuration, is malicious then it could answer with the wrong nodes for every seed.

secp256kfun library

https://github.com/LLFourn/secp256kfun

My library is there. It is self explanatory. If you are interested in making experimental cryptography in Rust check it out.

How are you building it? Are you taking Pieter’s libsecp library and translating the core parts into Rust? Where are you starting from?

The individuals from Parity who wrote the alternative Rust Bitcoin node also wrote a pure Rust translation of it. The translation work was not done by me. If they hadn’t done it I would never have done it. It is not guaranteed to have fidelity with the original C code so it is a risky choice. For experimentation it is decent enough. That is what I am using. On top of that I have so far only implemented BIP 340, ECDSA and the adaptor signatures for those. That is all that is there on top of the base secp library.

Have you found any problems with their code that you have had to fix in terms of security stuff?

With their code? One interesting thing, I did find a tiny issue with the real secp library while doing it. I found some logic in the ECDSA signature code which was actually incorrect. I made an issue and they fixed it. One of the interesting features about this library is that it marks elliptic curve points as zero or non-zero. This is a scalar known not to be zero. That is important in various ways. Sometimes zero is an illegal state for a public key to be in. A public key can never be zero. What I found was that the signing code for libsecp256k1 allowed zero in a certain circumstance where it should not have. And also disallowed zero in a circumstance where it should not have. The logic of my library, when I was implementing ECDSA, showed me there is an error here. You can’t do it like that because this is meant to not be zero but it is actually zero. The compiler told me this is wrong and I said “This is wrong. How did they do it in libsecp?”
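The zero/non-zero bookkeeping being described can be mimicked even in a toy way: wrap scalars in a type that refuses to hold zero, so code that assumes “this can never be zero” fails at construction instead of silently producing an invalid key. This Python sketch is only an illustration of the idea, not secp256kfun’s actual API:

```python
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

class NonZeroScalar:
    """A scalar mod N that is guaranteed non-zero by construction."""
    def __init__(self, value):
        value %= N
        if value == 0:
            # A secret key (the x in x*G for a public key) must not be zero.
            raise ValueError("scalar is zero; not a valid secret key")
        self.value = value

# Fine: an ordinary secret key.
sk = NonZeroScalar(0x1234)

# Caught: arithmetic that happens to land on zero is rejected here,
# instead of flowing silently into signing code that assumed non-zero.
try:
    NonZeroScalar(N)   # N mod N == 0
except ValueError as e:
    print("rejected:", e)
```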

Elichai said in the London Socratic they had a bunch of mistakes, the Parity implementation. One of them was they “replaced a bitwise AND with a logic AND which made constant time operations non-constant time.” He seems to think there were loads of mistakes.

They had some mistakes for sure. There could be more. I can see some things where they have cut corners and not done it properly. A huge amount of work has gone into the C library by Pieter Wuille, Elichai and the rest of the squad, and a lot of it has been done towards constant time and different kinds of side channel attacks. That work is nowhere near as good in that Parity library. The one he is talking about has been fixed obviously but there could be other problems with it. I didn’t find any problems with it myself. There have been problems found with it and there could be more. At least in my usage of it, when you multiply a curve point by a scalar it produces the right curve point which is pretty much all I need.

The alternative is to build it from scratch. If it was really bad code. It sounds like it is not too bad.

It is a copy-paste without too much thought going into it. It is literally the same code, like when they replaced a bitwise AND with a logical AND. Obviously it is not the same thing. You cannot literally translate C into Rust. There were some corner cases where that doesn’t make sense and you really have to put some thought into how that should be done. They made some mistakes on that. I am very glad it exists. I thank the Parity people, the Ethereum people for creating that. Without that I wouldn’t be able to build that library. I am very happy with how the library is done. Don’t use this to make a wallet or a node. That goes without saying. It is good for messing around and building proof of concept protocols. I am really excited about all the protocols coming into Bitcoin, Layer 2 protocols etc. This is a good way to have some code to do some testing on those ideas.
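On the “bitwise AND replaced with a logical AND” mistake mentioned above, the semantic difference is easy to demonstrate: a logical and short-circuits, so whether the second operand is evaluated at all depends on the first, while a bitwise AND always evaluates both sides. In C that data-dependent behaviour is exactly what breaks constant-time code; the Python below only shows the short-circuiting, it is not constant-time code itself:

```python
def check(x):
    # Print so we can see whether this operand was evaluated at all.
    print("  evaluated check(%d)" % x)
    return x != 0

a, b = 0, 1

print("logical and (short-circuits):")
_ = check(a) and check(b)   # check(b) never runs because check(a) is False

print("bitwise & (both sides always evaluated):")
_ = check(a) & check(b)     # both calls run regardless of the first result
```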

Generalized Bitcoin compatible channels

https://eprint.iacr.org/2020/476.pdf

Maybe we can talk about this paper next time. I am interested in people’s opinions on the paper. It is a way of getting symmetric state into Lightning. When you have Lightning nowadays you have two parties that both have a different transaction that represents the state for them. This paper is a trick to get it to symmetric state without eltoo. Eltoo has symmetric state. This proposal also has symmetric state. It uses adaptor signatures. It is a nice trick to do that. I would be interested in people’s opinions on this paper next time.

Is there a short blog post version of that paper or mailing list post?

I don’t think so. It was by some academics. I don’t think they have made a post about it. I can give the rough summary. Instead of having different transactions representing each person’s state, a transaction is the way to identify who put the transaction down. If transaction A goes down that means Alice put the transaction on the blockchain. If transaction B goes down it means Bob put the transaction on the blockchain. The A transaction has a path for punishing Alice if it is the revoked state. The B transaction has a path for punishing Bob if it is the revoked state. Instead Alice and Bob both have the same transaction but they have different adaptor signatures on the inputs of that transaction. When Alice or Bob post it it is the same transaction with the same ID but they reveal a different secret to the other party. That secret allows for the revocation to come in in a symmetric way with symmetric transactions rather than asymmetric transactions. That’s the summary. They give a more full academic explanation in there. It is a little bit tough but think about that idea and I would be interested to hear what people think about it.
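The adaptor signature trick being described relies on a standard property: the signature that actually gets broadcast differs from the pre-shared adaptor version by exactly the secret, so the other party recovers the secret by subtraction once the transaction hits the chain. A toy Python sketch of that scalar relationship (the elliptic curve points and the actual Schnorr equations are left out):

```python
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Alice's revocation-related secret, known only to her until she broadcasts.
t = secrets.randbelow(N - 1) + 1

# The "real" signature scalar Alice needs to publish the transaction.
s = secrets.randbelow(N - 1) + 1

# What Alice gave Bob in advance is the adaptor version: s offset by t.
s_adaptor = (s - t) % N

# When Alice broadcasts the transaction the full s becomes public, and Bob
# recovers the secret by simple subtraction, which then lets him act on the
# revocation if the state was old.
recovered_t = (s - s_adaptor) % N
assert recovered_t == t
```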


Socratic Seminar

Date: July 21, 2020

Transcript By: Michael Folkson

Tags: Taproot

Category: Meetup

Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Last month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar/

Google Doc of the resources discussed: https://docs.google.com/document/d/1Aw_llsP8xSipp7l6JqjSpaqw5qN1vXRqhOyeulqmXcg/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Intro

For anyone who is new, welcome. This is the Bitcoin Sydney Socratic. In normal times we do two meetups per month. One is an in person Bitcoin Sydney meetup for those of you in Sydney you are welcome to join us. We don’t know exactly when our next in person one will be. This is the other one that we do every month. Obviously the timezone works for people in Asia and Europe as well and obviously Australia, New Zealand people. We share the list, here is our list.

Subtleties and Security (Michael Ford, BitMEX research)

https://blog.bitmex.com/subtleties-and-security/

This was a summary of a bunch of the work done on Core over the past five or six months. It was mainly focused on build systems and trying to improve the security of our binaries and our build processes. A lot of stuff when it comes to build systems can be very finicky. There are a lot of peculiarities. It is easy to miss a flag or not turn something on and have that affect something else, two or three hops down the line and you don’t even realize it. This was a summary of some of those changes. Some of them were very simple but had existed in the codebase for quite a long time. The first one here was where our security and symbol checks had been skipping over bitcoind. Essentially once we run our Gitian builds to produce releases we have some scripts that perform some security checks and then symbol-check is essentially checking the binary dependencies looking for certain symbols. If we are building with stack protection it might check the symbols in the binary to see whether there is a stack protector symbol in there somewhere. But essentially since these scripts have been introduced they had never been run against bitcoind because of a quirk in our Makefile. If you open the PR 17857 you’ll see that the fix here was essentially deleting one character or two characters if you include the whitespace. When these were being run because of the use of the < character there, bin_PROGRAMS is essentially a list of binaries, they get passed into the script but because of the way the arguments end up being passed when you use this < character the first argument would be skipped over and the checks wouldn’t get run against it. That always happened with bitcoind. However this isn’t as severe as it sounds because bitcoind is a subset of bitcoin-qt and the checks were always being run against that. As far as we are aware there is nothing malicious that we could’ve missed in this case.
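A small illustration of how a list of programs passed after a < loses its first element: the shell consumes “< bitcoind” as a stdin redirection, so the checking script never sees the first binary as an argument. The file names below are just for illustration:

```python
# check_args.py -- print what the script actually receives as arguments.
import sys

print("argv:", sys.argv[1:])

# Invoked the way the old Makefile effectively did (the redirected file must
# exist for the shell to open it):
#   python3 check_args.py < bitcoind bitcoin-cli bitcoin-tx
# the shell treats "< bitcoind" as a stdin redirection, so the script sees
#   argv: ['bitcoin-cli', 'bitcoin-tx']
# and bitcoind is silently never checked. Dropping the "<" fixes it:
#   python3 check_args.py bitcoind bitcoin-cli bitcoin-tx
#   argv: ['bitcoind', 'bitcoin-cli', 'bitcoin-tx']
```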

The next one is also another subtle one. There have been longstanding issues with ASLR and Windows binaries. It has come up in our repository a few times before in various mailing lists. I can’t remember why I was looking at this exactly. I think I was going to add some tests for it. Then I noticed that when I added the test one of our binaries was actually failing the test which was unexpected because we thought everything was fine. It turned out it was the bitcoin-cli binary. It came down to that binary wasn’t exporting any symbols. At build time the linker would strip away a certain part of the binary that is actually required on Windows if you want ASLR at runtime. This didn’t happen in any of the other Windows binaries because they are all exporting libsecp256k1 symbols. The CLI binary doesn’t do that, it doesn’t use any libsecp256k1 symbols. The fix here was again another simple one, basically just adding a directive to the linker to export the main symbol out of the CLI binary. Then at link time it wouldn’t strip away this certain part that it needed for ASLR at runtime. That was a fun one to track down.

The sysctl stuff is kind of boring and the effects weren’t that interesting. This was a follow on to some of the work that was done after removing the OpenSSL RNG. There was a new module added to our RNG that collected entropy from the environment. One of the ways it collected entropy was via this sys call. It would go and look at stuff like the number of CPUs you have or how much RAM you have or what files are in certain directories, lots of random things. However in our build system we had some checks that would try to figure out, for the host you were building for, Windows or MacOS or BSD, whether that system call was available. The way we were checking for it was failing on MacOS. Then obviously at build time we wouldn’t compile in the code that would make use of those calls. There was no real impact because this code had never been released. It was fixed before it made it into the 0.20 release that only just came out. This was another case of something subtle where it looks like this is working everywhere and the detection is ok. It turns out the way we were detecting it on MacOS was slightly different to other BSDs and it was failing.

I’ll go quickly through this last one. This is a case of Apple’s documentation claiming that the linker would do a certain thing if you passed a flag. This flag was BINDATLOAD. According to Apple’s documentation if you link with this flag it would instruct the linker to set a certain bit in the binary header and then at runtime the loader would look at the binary header, see that bit and would modify its behavior to bind all of the binary symbols at load rather than lazily when they are first used. However when you actually pass this flag to the linker and build a binary it doesn’t set this bit in the header which is obviously contradictory to the documentation. It was unclear whether this was a bug in Apple’s linker or whether we were misunderstanding something. I eventually sent a few emails back and forward to this guy Nick Kledzik who is one of the people that works on the linker and loader at Apple. He came back with a clarification about how the behavior is meant to work and that we could disregard this flag and this header to all intents and purposes. However we can’t check whether it is working by looking for the bit, we have to look at some other output when you run a different tool on the binary. This was just a case of the documentation being incorrect and thus when you are looking at this sort of stuff you can’t even necessarily trust the documentation to instruct you on what to do.

Are you excited about Apple moving over to ARM and the stuff you are going to have deal with?

Not particularly excited. We’ll see how significant it is and what we may or may not have to do or work around in six months or maybe a year. In general it is not entirely clear even what the future is for us releasing MacOS binaries because it is essentially getting harder and harder for us to be able to build and distribute binaries in the completely reproducible, trustless manner that we’d like to. Apple is introducing these notarization requirements where we would have to build our binary and then send it to Apple servers so they could notarize it. Then they would send it back to us and that’s the only way we could distribute a binary out to end users that would run on their Macs. Obviously doing that sort of stuff is tricky for us especially if it requires more official things like having an organization. I guess we already do that in some part because we have code signing certificates but I think the direction that Apple is going is making it harder and harder for us to continue or want to continue releasing Mac OS binaries. I know some of the developers are getting less enthused about having to deal with all of Apple’s changes and distribution requirements. But in general the ARM stuff is interesting. I haven’t looked at it a whole heap.

The concern with notarizing is that you are assuming some kind of liability by signing off a binary and sending it to Apple? What is the actual concern there?

One of the concerns is also that it is essentially a privacy leak. When the end users actually run those binaries on their machines the OS will essentially ping Apple’s servers with information asking it whether these binaries are ok to run. This is another thing we don’t necessarily want. I think the other problem may be due to reproducibility. It depends how Apple modifies the binaries when they do the notarizing, whether we can incorporate that into our reproducible build process. I know for a fact that we would never distribute binaries that could not be reproducibly built by anyone else that wants to run the same binary. Those are two concerns. There might be more. There are a few threads on GitHub with related discussion as well.

Part of the reproducibility concern is that the end user is not able to go through all that same notarizing process through Apple? Or they wouldn’t need to go through Apple to reproduce it, it is just they wouldn’t be able to do it in the same way that the Bitcoin Core developers do it?

At the moment we release our binaries, the source code is available, anyone can follow the same process we do and get an exactly identical binary to the one that we release. However if as part of this requirement we’d build that binary but then have to send it off to Apple they would maybe modify it in some way, give it back to us and we would have to incorporate something that they gave back to us then end users could try to go through the same process but it wouldn’t really work.

On the blog post, there are four bugs that you go through in the blog post. They all don’t seem to be massively severe to put it mildly but they are all things that you expect to be caught. The first one isn’t severe because as you say bitcoind is a subset of bitcoin-qt. There was no problem with that one but you still expect that to be caught. The second one, this was data positioning inside of process address space. This is a concern around a private key being stored in a deterministic address space or a predictable address space. If an attacker had access to your machine they could extract the private key?

ASLR is one runtime security feature. The purpose of it is to try to make it harder for attackers if they were able to say exploit a binary in some way to exploit it further. The addresses or the location of certain data in the address space of the binary is randomized at load. It was working fine for all the rest of binaries, it was only bitcoin-cli. I guess in general we want all of our hardening techniques to apply equally across all of our binaries. There are ways to detect this after build. You can look for whether certain bits have been set in binary headers or other tooling output. We were looking for those and had been for some time. They were all set to what they were meant to be set to but it turned out that the runtime requirements were slightly different to what we understood them to be. Even though we were already testing for certain things that didn’t catch this issue.

The concern is that it is generating a private key using the bitcoin-cli. Apart from privacy the only thing you’ve really got to worry about are the private keys in terms of an attacker accessing your machine. Everything else is pretty public in terms of blockchain data, blocks etc.

Sure.

The address space randomization only makes it harder to exploit something. It is not that it is avoiding it. It is not that black or white.

The other two: entropy problems was the third one. The fourth one, my understanding from the blog post was they just haven’t updated the documentation even now. They have told you that it works a certain way but that’s not the way it is outlined in the documentation, is that right?

Yeah. The man page for Apple’s linker still says that if you pass this flag the linker will set some bit in the program header but it does not do that.

Have you thought about chasing them? What is the process of trying to get Apple to update their documentation?

I conversed with the guy that works on the linker at Apple. I assume that if they were concerned about it he would probably tell someone to update it.

It is certainly not your responsibility. So basically these four things, they are not hugely severe but they are all things that you wouldn’t expect to be happening at this point in the maturity of Core certainly when you are interacting with things like Apple. And something could pop up in future that is similar that could be really severe.

The one counterpoint I’d make is that a lot of this stuff can change over time. Three or four years ago that BINDATLOAD flag may have worked exactly as is written in the man page. It just happens as of two years ago it suddenly didn’t work that way. You could have some assumptions or tests for behavior that worked at a point in time that then may not work because something beneath you is changing.

Generalized Bitcoin-Compatible Channels

https://eprint.iacr.org/2020/476.pdf

https://suredbits.com/generalized-bitcoin-channels/

This is a cool idea. It is about taking the way Lightning works today which is with asymmetric state, each party has their own commitment transaction which is different to the other party’s commitment transaction with a different transaction ID and different script. Essentially the Lightning mechanism uses that to identify who posted the transaction on the blockchain. If it is an old transaction, the transaction has been revoked, it allows that person to be punished. The idea here is to keep the Lightning Network transactions symmetric. Both parties have the same set of transactions that they could post on the blockchain but what is different is that when they post them they have different signatures. The system identifies who posted a transaction and who to punish by the signature on the transaction rather than the transaction itself. This is a nice idea for several reasons. In my opinion the main motivator is that it is much simpler. If you are implementing it you have fewer objects to deal with and less complexity which can lead to software bugs and so on. The next benefit is that the transactions end up being smaller because there is less state to keep around in the transactions and a less complicated update and settlement process. The place where it shines is in the punishment branch. In this system if you put down an old state, what first happens is we figure out that this state is old. You have a relative timelock where the other party has some time to use the signature that you put down that identifies you as the perpetrator. The other party is able to extract a secret key from that because it is an adaptor signature. They extract the secret key and they can use that secret key to punish you if that has been revoked already. If you have a bunch of HTLCs on a transaction that has been revoked it is much simpler to settle and take all the money. With the punishment one you may have to do a single transaction per output. Per HTLC that is inflight at that revoked state you may have to do an individual transaction for each of them whereas for this one you figure out whether it is punishable or not and if it isn’t you carry on to settling all the rest of the outputs onchain if they are not revoked. I am really keen on this idea. I am looking at maybe attempting to build it starting next month but I am looking for feedback from anyone on anything about what they think about this paper. People who are more familiar with Lightning than me and the actual implementation.

Why is it that you normally in Lightning need to have one transaction per HTLC you are settling?

It is a good question. When you push the transaction onchain the HTLCs may already have been revoked. When you put a revoked transaction onchain the HTLC outputs then have three paths. One is redeem, one is refund and one is it has been revoked. Depending on who put it down you have another transaction that spends from the HTLC output and that determines whether it goes into the redeem with a secret or the refund path. But if I put it down and I know the secret I can use the secret right away and put it into another state on another transaction. If you want to redeem all these things you may have to make an individual transaction for each of those other outputs that I put down. They are about to change it because of this anchor outputs idea. You never have a unilateral spend.

I think one of the things with the way it works at the moment is if you have got a HTLC then there is either the timeout and refund path or the reveal the secret and get paid path. But if you are publishing a transaction near the timeout then you want to be able to say “I know that this might be revoked but I want to guarantee that unless someone has been cheating that I get my money and that we don’t have this delay while we figure out if someone is cheating and that puts us over the time limit so now I can’t get paid.” I think that is why there are the two layers there. The term that is used is layered commitments. I think that is going to be the challenge to have work with this approach. The problem with not doing it that way is that the overall timeout of the HTLC has to decrease at each step by the time you are going to allow somebody to detect cheating at each step. If you are saying “I don’t want to have to detect cheating every five minutes, I want to be able to check it every two hours” the difference between the overall HTLC timeout on each step ends up being the two hours for one guy, the two days for another guy, the five minutes for another guy. That makes the overall timeouts of the HTLCs really long.
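The arithmetic behind that concern is just additive: each hop needs its own window to notice cheating before it has to act, and those windows stack into the total time the HTLC can stay locked up. A toy example with made-up numbers (roughly five minutes, two hours and two days at ten minutes per block):

```python
# Per-hop cheat-detection windows, in blocks (illustrative values only).
per_hop_delta = [1, 12, 288]

# The final hop's expiry plus every intermediate hop's safety margin
# determines how long the sender's funds can end up locked.
final_expiry_delta = 40
total_lock_time = final_expiry_delta + sum(per_hop_delta)
print("worst-case HTLC lock-up (blocks):", total_lock_time)  # 341
```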

This is one of the issues I really wanted to know more about. It has obviously been discussed because it is the same way eltoo works?

Yeah eltoo doesn’t work that way. There is a post of getting it to work despite that with the ANYPREVOUT stuff. (Also a historical discussion on the mailing list)

You can actually circumvent this problem in eltoo?

In eltoo with ANYPREVOUT as long as ANYPREVOUT doesn’t also commit to the value of the transactions. This was discussed in around 2016 to come up with the way the current stuff works.

I was thinking that this seems like quite a big problem. But it seemed to also be a problem with the original eltoo paper. I looked at the original paper and I couldn’t find anything talking about it.

As far as I can see it is totally a problem with eltoo and it hasn’t been discussed at any point.

On the previous answer to the question, it is like a worst case analysis. This transaction could go on there, I don’t think it necessarily has to be there but each of those transactions needs another revocation thing in it. I think that is the way they get their numbers from. But they could be wrong and I could be wrong.

At least the general idea is that there is a second transaction that needs to occur before the whole HTLC revocation process is played out. Because of that secondary transaction you get this problem of not being able to spend multiple revoked HTLCs at the same time.

Yes and it doesn’t help you if it is a valid state. It seems to be the same if it is a valid state. You still have to go through all the transactions. It is not an optimization there, it is an optimization saying “We can exit early on the punishment.” This is the question. Is it actually worth doing that double layer of first punish and then settle. Right now the way Lightning does it is punish and settle at the same time to avoid this layering of timelocks. Is that still the right way to go? You still use the new mechanism that they’ve provided in the paper but don’t do their double stage revocation. That is the thing I am wrestling with.

Doesn’t this all go out the window with eltoo because there is no punishment?

I think it does go out the window with eltoo.

Going back to that original post on generalized channels, how Lightning works now is that I have a commitment transaction with a timelock on my output and the counterparty has a timelock on the output on their commitment transaction. They are not symmetric. I am struggling to understand how those transactions are identical in this design.

They are identical because the commitment transactions are the same.

Are they adding a timelock that is unnecessary to one of the outputs or are they taking away the timelock that were there on each party’s output?

It is necessary but it can be circumvented in eltoo apparently. Each commitment transaction is the same for both parties and then the output on that can be either revoked, give all the money to the person who was not the perpetrator or if it is valid it spends to the real commitment transaction or the state transaction you could call it. The commitment transaction is very boring and bland. It just has an output which works like this: when the guy puts it down he reveals his adaptor signature secret, and he has already revealed the revocation key over the communication channel in Lightning, so if the state was revoked the other guy who is not the perpetrator can just take all the money from the output. He knows both the private keys of the OP_CHECKMULTISIG so he can take the money. But if he doesn’t, if it is a valid state then on a relative timelock it spends to a state transaction that has all the HTLC outputs on it and the balance outputs of the two parties. That is how they manage to keep it symmetric.

Flood & Loot (Harris, Zohar)

https://arxiv.org/abs/2006.08513

This was published mid June and then revised early July. To outline the attack, the idea is you are an attacker, you want to steal money from people, you set up your channels. The idea is that you would have the target node and the source node. You set up your channels and you push through as many possible payments as you can based on how many HTLCs you can do and then you are waiting for the timelock. Because all those transactions are happening at once you can spring the trap and steal from everyone because everyone is trying to claim at the same time. Every HTLC has its own output as opposed to the other approach where it is one for the whole channel.

An attacker tries to create as many channels as possible with victim nodes and the attacker also creates a node that is able to receive funds. What the attacker now does is try to create as many HTLCs as possible, payments that are targeted to the receiving node. That is also a node that is controlled by the attacker. He uses the channels that he just created with all the victim nodes. The target node waits until all the HTLCs have reached the target node and then it sends back all the secrets to the victim nodes that need those secrets to claim the transactions from the attacker. The target node is left without any open HTLCs but at the same time the source node that created all the transactions doesn’t respond to the secrets from the victim nodes. The source node is turned off. As the timeout approaches for all the HTLCs all the victim nodes will try to broadcast their transaction at the same time because they want to close the channel. Some of those broadcasts will fail to enter the Bitcoin blockchain because of congestion and the attacker will then wait for the expiration of those HTLCs and, by using the replace-by-fee policy to raise the fee of its own transactions to a higher one, can then claim the victims’ transactions and steal money that way. It is a kind of a denial of service attack because you try to create a situation where suddenly a lot of people try to close channels at the same time. If you read the paper you will see that with the current block weight you need a minimum of 85 channels that close at the same time to be able to steal some funds. The amount of funds you can steal increases if you have more channels. This is all based on simulations. Apparently it starts at 85 channels and if you are able to close down more than 85 channels at the same time you are able to steal substantially more funds.

The crucial point I missed there is that you stop responding. Every other node out there thinks “I need to go to chain and I have no other way to deal with it.” The concern would be that people might not want to put funds into Lightning because you are now risking money. Previously a lot of people might have thought if other people open channels to me that is not a risk but really there is a risk to that especially once you start routing through that channel. Probably worthwhile talking about the mitigations as well. Some of the Lightning implementations are limiting the number of HTLCs. The other one is anchor outputs and reputation though obviously there is a concern around reintroducing layers of trust there. Any comments on mitigations?

I still haven’t got my head around how all this replace-by-fee stuff works together in this attack.

Replace-by-fee is used in this attack by the attacker at the moment the victim tries to broadcast the current state of the channel and close the channel. If you wait until the HTLC has expired that broadcast may still not have confirmed, so the HTLC output is still unspent. You need to use a higher fee as an attacker to be able to be the first to claim the funds.
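A rough sketch of the replacement condition the attacker is exploiting: a replacement only gets accepted if it pays more in absolute fees than what it replaces, plus enough extra to cover its own relay. This is a simplification of the BIP 125 rules (the other replacement rules are ignored), not actual mempool code:

```python
def replacement_acceptable(old_fee_sat, new_fee_sat, new_vsize,
                           incremental_relay_feerate=1.0):
    """Simplified BIP 125-style check: the replacement must pay more than the
    original in absolute fees, plus at least incremental_relay_feerate sat/vB
    for its own size."""
    return new_fee_sat >= old_fee_sat + incremental_relay_feerate * new_vsize

# The victim's pre-signed transaction is stuck at whatever fee was agreed
# when the channel state was signed; the attacker simply outbids it.
victim_fee = 500          # sats, fixed at signing time (illustrative)
attacker_tx_vsize = 200   # vbytes (illustrative)
attacker_fee = 1000       # sats (illustrative)
print(replacement_acceptable(victim_fee, attacker_fee, attacker_tx_vsize))  # True
```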

The other crucial point they mention in the paper is that if you are the innocent victim node all you have is the pre-signed commitment transaction from back when the attacker was talking to you. You can’t now re-sign that because you can’t re-sign it on your own.

One of the mitigations as always in these papers is the reputation based behavior.

That’s not a good pathway to go down because you don’t want the whole network to become permissioned and reinsert all the same permissioning that we are trying to get away from.

There is already some reputation based behavior in some of the clients. c-lightning is a little bit more hardcore and rightly so. I think lnd already has a small local database on the client itself keeping reputation…

The BOS scoring.

Does anyone know what the problem is with reducing the number of unresolved HTLCs? Would that be a big deal?

I think that would mean less functionality because now there is less routing that can happen through the network.

I don’t think it is a limiting factor right now but in the future if Lightning started to grow exponentially it would limit the number of transactions.

It just adds more restrictions to the types of payments that can go through Lightning. If you are allowing loads of really tiny HTLCs and as a routing node you are collecting tonnes of really small HTLCs you are helping the network by routing loads of different types of payments including small ones. The problem is when you go onchain with so many HTLCs obviously the fee is so much higher because you are taking so many HTLCs onchain. The number of bytes are so much higher than it would be if you were just going to chain with a few HTLCs.

Chicago BitDevs transcript on various Lightning Network attacks

https://diyhpl.us/wiki/transcripts/chicago-bitdevs/2020-07-08-socratic-seminar/

They went through a bunch of Lightning Network attacks including Flood & Loot. There was this Bitcoin Optech link talking about the different types of attacks on Lightning. Someone said “you either really want to know your stuff with Lightning or you want some sort of trusted relationship with your direct peers” at this stage. Another attack that they went through was stealing HTLCs inflight which at least in my eyes was a more concerning one and something I hadn’t really thought about before. It has always been obvious that fees are going to be a problem if you are forced to go back onchain and onchain fees are really high. Right from the beginning it has been obvious that that is a problem. But it hadn’t really clicked with me, this stealing HTLCs inflight where you are an attacker who has one node earlier on in the route and one node later on in the route, whether you withhold revealing the preimage or play tricks with some of the intermediary nodes.

That is similar to the wormhole attack, the idea is that you jump through and skip everyone in the middle.

That only allows you to steal the transaction fee right?

The wormhole attack is just the fee that you are stealing.

I think it is an optimization not an attack. It is a way of optimizing the route adhoc and getting paid for it in my opinion. Everyone still gets the money.

No you are stealing the fee.

You steal some fees because you skipped some nodes in the route. If you are delivering a parcel and instead of passing onto the next delivery man you just deliver it straight to the guy yourself. You get the fees in the middle, it sounds good to me. I don’t know why it is an attack.

You are tricking those nodes into locking up their capital when they are honestly seeking to route a payment. Thinking that they are providing a route that is needed on the network and then tricking them when they are acting honestly to lose out on the fees that they could’ve perhaps got elsewhere.

Maybe I misunderstood the attack. You can lock up money with that attack?

You lock up money anyway because you need to lock up the money for the HTLC until the payment comes through.

Now I get it, I never understood what the attack was. The attack is like a denial of service attack. You lock up their money, you’re skipping them and stealing their fee.

They are performing the service and they are not getting the fee in return.

To some extent I agree that you get paid for offering a better route. It is a funny way of looking at it. The only cost the victim incurs is the cost of having those funds locked up and not getting the fee. It is not really stealing.

They could be routing another payment that is actually needed and getting the fee for that. They are locking up capital and they are not receiving fees that they could be receiving elsewhere.

Opportunity cost.

Following the Blockchain.com feerate recommendations (0xB10C)

https://b10c.me/mempool-observations/3-blockchaincom-recommendations/

I thought this was a really cool post but I can’t remember all the specifics of it. For a long time people were saying “Are Blockchain lying about how many transactions they do? Are they really one third of the network in terms of transactions?” According to this article that is a fair estimate based on the fingerprinting.

I guess the one thing to point out is how shocking it is that these transactions can be fingerprinted so easily. One third of transactions, and it is obvious where they are coming from. That in itself is quite shocking.

What’s so good about this Blockchain wallet? Why does everybody use it? I have never used it myself.

It is like a first mover advantage. They were so early that so many newbies used them and never moved off to another wallet. They have blocked up a lot of things by not having SegWit etc.

Part of it was also they were one of the first easy wallets in general and they were also a web wallet. You go to a website and it is really easy to sign up. All the newbies did it. People who have money with them, their value went up quite considerably because it was very early on in Bitcoin’s life, that is what caused them to have a lot of money on there. I find it surprising that those users also create a lot of onchain transactions at the same time. That part is a little confusing to me. I am not surprised that there is a lot of capital locked up with them.

They are non-custodial? It is like a web wallet client side wallet thing?

Yeah but the security of that is questionable. Now they have an app as well. They didn’t in the past. It was the web wallet mainly that got them a lot of popularity and was relatively easy to use.

That is scary.

They even had a poor implementation of Coinjoin at some point. It was completely traceable. They were not a bad company at that time in the sense that they were trying to do these things.

BIP 118 and SIGHASH_ANYPREVOUT

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018038.html

Is anyone not familiar with NOINPUT/ANYPREVOUT as of a year ago, basic concept?

Personally I know what they are trying to do. I’m not familiar with the exact differences between NOINPUT and ANYPREVOUT and I haven’t really followed the recent stuff on how it has changed. I know ideas are being merged from the two.

There is the motivation in the BIP now that is probably good context to read. The basic idea is you want to spend some Bitcoin somewhere then you obviously have to sign that or else everyone could spend it because it wouldn’t need a signature. When you create any of the signatures by any current way with SegWit or with legacy pre-SegWit stuff or with pay-to-script-hash or with Taproot as it is proposed so far without ANYPREVOUT, you are committing to the txid of the transaction that you are trying to spend. The txid is made up from hashing the scriptPubKeys and the amounts and everything else. At the point where you have made a signature you then have to commit to exactly how the funds got into whatever transaction is going to end up with onchain. What would be nifty to be able to do is to come up with a way of spending transactions where you don’t know quite how they got there, you just know that they got there somehow or other and they are going to be yours. If you are writing a will you want to be able to say “All my money goes to my children and it is not going to matter if the money ended up in this bank or a different bank or if it is gold buried on your property or whatever.” The executor can take your will and its signature and say “This is sufficient proof for whoever cares” and execute your will and testament that way. What we’d like to do is have something similar for Bitcoin smart contract stuff where you can say “No matter how someone tries to cheat me on my Lightning channel I want to just send this signature to these watchtowers who will pay more attention to the blockchain than I can be bothered doing and will send these funds directly to me if there is any cheating that happens. It doesn’t matter how that cheating happens.” That was the original use case of NOINPUT when Joseph Poon I think proposed it originally for Lightning. Then the whole eltoo thing is based on “If there is ever some state published I want to be able to update that to a new state and it doesn’t matter if it is an old state from 5 minutes ago or from a year ago or however long. No matter which of those gets published I want to be able to do a single signature and end up with the current state.” That is the more current motivation for it. The idea here is that when you are doing a signature you are no longer signing the transaction ID that is going to be spent because you don’t know that yet, instead you will sign some sort of description of what the transaction is going to be. For ANYPREVOUT that includes things like the scriptPubKey that is going to be spent or the actual script after it has been unhashed and unTaproot’d. And maybe the value and other stuff like that. It is still signing that it is your funds, it is just not signing that it is this particular set of your funds. That has different risks. For the will example that means you are going to be able to take all the money from all the bank accounts not just the one bank account that you were thinking of when you wrote the will. In the same way with ANYPREVOUT it means that that single signature can possibly spend funds from one UTXO or from many UTXOs that all satisfy the same conditions. It requires a bit more care to use this sort of signature but if you are doing complicated smart contract stuff like eltoo then you need to take that sort of care anyway. It seems like a reasonably good fit.
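
To make that concrete, here is a rough sketch of the difference in Python. It is only an illustration of which fields the signature message might or might not commit to, following the description above; the field names and hashing are made up and are not the actual BIP 341/118 message construction.

```python
import hashlib

def sighash_fields(spend, anyprevout=False, anyprevout_anyscript=False):
    """Illustrative only: choose which pieces of a spend the signature message
    commits to. Not the real BIP 341/118 message construction."""
    fields = {
        "version": spend["tx_version"],
        "locktime": spend["tx_locktime"],
        "outputs": tuple(spend["outputs"]),   # where the money is going
    }
    if not anyprevout:
        # Today's signatures commit to the exact coin being spent, so they are
        # only valid for one specific UTXO.
        fields["prevout"] = (spend["prev_txid"], spend["prev_vout"])
    if not anyprevout_anyscript:
        # Per the discussion above, an ANYPREVOUT signature would still commit
        # to the script being satisfied (and possibly the amount), while
        # ANYPREVOUTANYSCRIPT would drop even that.
        fields["script"] = spend["script"]
        fields["amount"] = spend["amount"]
    return hashlib.sha256(repr(fields).encode()).hexdigest()

example = {
    "tx_version": 2, "tx_locktime": 0,
    "outputs": [("settlement_address", 90_000)],
    "prev_txid": "aa" * 32, "prev_vout": 0,
    "script": "eltoo update script", "amount": 100_000,
}
# Changing which UTXO funds the spend changes the normal sighash but not the
# ANYPREVOUT-style one:
normal = sighash_fields(example)
example["prev_txid"] = "bb" * 32
assert normal != sighash_fields(example)
assert sighash_fields(example, anyprevout=True) == \
    sighash_fields({**example, "prev_txid": "cc" * 32}, anyprevout=True)
```

The last assertion is the property eltoo and watchtowers want: one signature that stays valid no matter which particular UTXO the state ends up spending from.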

There was also a discussion around script path and key path?

The way Taproot works is that you can have a key path or many script paths for a single scriptPubKey that goes into a UTXO. I say “You can spend this bit of Bitcoin with this key just by doing a signature or by following this script or this different script or this other script.” The way we’ve set Taproot, when you define a new SegWit version the key path is pretty much set in stone from when you define it. Say we activate Taproot tomorrow you can’t say “Right now these particular signatures are valid but in the future some different signatures will be valid. We don’t know what those are yet so in the meantime just accept any signature.” because that would obviously be completely insecure. The only way we can accept signatures that we haven’t defined yet and that we are going to define in future in a soft fork friendly way is to put it in the script path or to set up new SegWit versions. ANYPREVOUT doesn’t want to use up an extra SegWit version, it wants to be as minimal as possible so it goes in the script path which has a different leading byte for the pubkey that gets put in the script. There is a special code so that you can just reuse the main pubkey that you use for Taproot without having to repeat it too much. By using that special class of pubkeys or specially tagged pubkeys that means you have opted in to allowing ANYPREVOUT on that script. It also means that if you don’t want to opt in to ANYPREVOUT you can do stuff as normal with Taproot and no ANYPREVOUT signature will ever be valid for those coins that you have used. You don’t have to worry about the extra risks that ANYPREVOUT may introduce.

Is my understanding correct that these discussions with Christian (Decker) on NOINPUT means that these proposals are being merged and getting the best from both?

There is not a real conflict between the proposals apart from the naming. The NOINPUT original proposal was we want to have this functionality and we’ll do it when the next version of SegWit comes along. The next version of SegWit is hopefully Taproot. The original way it was proposed meant that eltoo would’ve had to have been a CHECKMULTISIG script path rather than key aggregation Schnorr key path potentially. It is not technically any worse than what the original concept was by having to go through the script path. It is just the next progression now that we have some idea what the next version of SegWit should look like.

There was a small change to sighash in Taproot wasn’t there to facilitate a new sighash flag in a future soft fork?

The fact that we have allowed for unknown public keys. The public keys that Taproot understands are 32 bytes. Instead of starting with 02 or 03 it is just the 32 bytes that follow that and that’s it. When we put them in script we have just said that stuff that doesn’t match this default we will accept it full stop. You can use any signatures so don’t put in a pubkey of that sort until it is defined how it should behave. These pubkeys are going to be 33 bytes with the first byte being 01 and that is going to limit how they can be signed but 33 bytes with the first byte 77 can be something entirely different with a different sighash or a different elliptic curve. It could be a different size of elliptic curve. It could be 384 bits instead of 256. Or whatever else turns out to be a good idea eventually. There is also the specification for how stuff gets hashed. There has been a fair bit of effort put in to making sure that we won’t accidentally have hash collisions between hashing for one sort of message or a different sort of message. Hopefully that should be good.
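
A tiny sketch of the pubkey-type dispatch being described; the classification follows the discussion above (32-byte keys are the defined Taproot keys, 33 bytes starting with 0x01 would be the ANYPREVOUT-tagged kind, anything else is an unknown type reserved for future soft forks) and is not the exact BIP text.

```python
def classify_pubkey(pubkey: bytes) -> str:
    """Illustrative classification of pubkey types inside a tapscript,
    following the discussion above rather than the exact BIP wording."""
    if len(pubkey) == 32:
        return "bip340"        # defined today: x-only Taproot key
    if len(pubkey) == 33 and pubkey[0] == 0x01:
        return "anyprevout"    # proposed: opted in to ANYPREVOUT sighashes
    return "unknown"           # any signature passes; reserved for future soft forks

assert classify_pubkey(bytes(32)) == "bip340"
assert classify_pubkey(b"\x01" + bytes(32)) == "anyprevout"
assert classify_pubkey(b"\x77" + bytes(32)) == "unknown"
```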

It is certainly going to be a strong contender for a future soft fork assuming we get Taproot. Maybe a soft fork with Jeremy Rubin’s CHECKTEMPLATEVERIFY.

The code isn’t updated for the latest tweak to the ANYPREVOUT BIP that doesn’t commit to the value with ANYPREVOUTANYSCRIPT any more. It is still missing some tests but assuming the code can be written and the tests can be made to pass then I don’t think there needs to be any special delay to get it activated. In theory we should be able to activate multiple forks at once with BIP 9 and BIP 8 and whatever else.

There could be signaling in different ways?

There are at least 13 version bits that can all be used at the same time if necessary. We could have bit 5 signaling for Taproot and 6 signaling for ANYPREVOUT should Taproot activate. Then 7 signaling for CHECKTEMPLATEVERIFY. Then 8 signaling for CHECKBLOCKATHEIGHT via the annex. I forget what my next favorite one was.
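
As an illustration of parallel signaling, a single block’s nVersion can carry several deployment bits at once under BIP 9 style signaling. The bit assignments below are the hypothetical ones from the comment above, not real parameters.

```python
VERSIONBITS_TOP_BITS = 0x20000000  # top three bits 001 mark BIP 9 style signaling

deployments = {"taproot": 5, "anyprevout": 6, "ctv": 7, "checkblockatheight": 8}

def signals(nversion: int, bit: int) -> bool:
    """True if this block version signals for the given deployment bit."""
    return (nversion & 0xE0000000) == VERSIONBITS_TOP_BITS and bool(nversion & (1 << bit))

# A block signaling for taproot and anyprevout but not the others:
nversion = VERSIONBITS_TOP_BITS | (1 << 5) | (1 << 6)
assert signals(nversion, deployments["taproot"])
assert signals(nversion, deployments["anyprevout"])
assert not signals(nversion, deployments["ctv"])
```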

What is CHECKBLOCKATHEIGHT? I haven’t heard of this one.

You put the tail of the hash of a block at a particular height in the annex. Your transaction is not valid if the tail of that block at that height doesn’t match it.

It is to connect a transaction to a specific block?

Yes. If the blockchain gets re-organized your transaction is not valid anymore. Or if there is a huge hard fork or whatever you can make your transaction valid on one side but not on the other. As long as the hard fork still respects this rule of course.
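
A minimal sketch of that rule, assuming the annex carries a height and the trailing bytes of the expected block hash; the encoding and the get_block_hash callback are hypothetical.

```python
def check_block_at_height(height: int, hash_tail: bytes, get_block_hash) -> bool:
    """Hypothetical validity rule: the transaction is only valid if the active
    chain's block hash at `height` ends with the committed tail. A reorg that
    replaces that block (or a fork with a different history) makes the
    transaction invalid again."""
    block_hash = get_block_hash(height)
    if block_hash is None:          # the chain is not that long yet
        return False
    return block_hash.endswith(hash_tail)

# Toy chain with a single known block hash:
chain = {650_000: bytes.fromhex("00" * 28 + "deadbeef")}
assert check_block_at_height(650_000, bytes.fromhex("deadbeef"), chain.get)
assert not check_block_at_height(650_000, bytes.fromhex("cafebabe"), chain.get)
```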

I think Luke Dashjr wrote a BIP on that a while back which was implemented in a very different way.

It was via an extra opcode.

Is that optional, the annex thing? Is it optional or mandatory?

I’m not quite sure what you are asking. The annex is something that you can optionally add to a transaction but once it is added to the transaction every signature commits to it. Once you’ve added it if you take it away or change it in any way then the signatures become invalid.

Are you using the annex as part of this proposal?

Which proposal?

ANYPREVOUT.

No ANYPREVOUT still commits to the annex. This was talking about the CHECKBLOCKATHEIGHT idea.

Let’s say you have eltoo all spec’d up and designed. Is there a way given that we have Taproot happen first to change Lightning channels to eltoo channels without going onchain. I couldn’t see how to do it since you have to commit to a new public key type. Is there a way to do it?

The problem is that if either party had kept any of the previous penalty Lightning revoked commitment states then they could still publish those and those wouldn’t be replaceable via an eltoo thing. I think you’d still want to go onchain and bump the pubkey or the UTXO or whatever. That is pretty much the same problem with if you’ve got eltoo and you want to go from having 5 people in the channel to having 6 people in the channel.

Bob McElrath wrote a paper (further discussion on Twitter) advocating for enabling SIGHASH_ANYPREVOUT for ECDSA so he could do covenant signatures where it looks like you are paying to a pubkey but in reality the private key of it is not known. A specific signature for it is known in such a way that you can spend it.

Can you explain how the pubkey recovery covenant signature stuff works?

The general idea is that you can create a pubkey in such a way that you don’t know the private key but you do know a single signature. You create a signature beforehand so you know the transaction that you wanted to create and you reverse the ECDSA math in such a way that you come up with a single pubkey that is valid for that specific signature. Now if somebody sends money to that pubkey you can only spend it in one way because you only have the signature, you don’t have the private key. The reason this doesn’t work today for ECDSA is because you are committing to the txid of the output that you are spending. There is a circular reference there where you want somebody to pay to a pubkey and then you want to spend from that pubkey with a signature but the signature has to contain the txid which contains that pubkey. It is a circle there. In theory if you had ECDSA plus ANYPREVOUT you could do this because now you are no longer signing the txid. I would be able to give you a pubkey, you would send money to it and then provably I would only be able to spend it in one way. It would act very similarly to OP_CTV with the one upside being that the anonymity set is better because it just looks like a regular pubkey instead of a very obvious OP_CTV transaction. It seems to me like it is cleaner to just put everything on Schnorr.
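
For reference, the algebra being reversed is standard ECDSA public key recovery: given a message hash z and a signature (r, s), with R a curve point whose x coordinate is r (there are a couple of candidate points, so this is a sketch of the idea rather than a full specification), the public key that makes the signature verify is

```latex
Q = r^{-1} (sR - zG)
```

Fixing the signature and the intended spending transaction first and solving for Q gives an output that can only ever be spent by that one pre-agreed transaction, which is why the scheme needs signatures that do not commit to the funding txid.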

That’s not compatible with Schnorr because we have chosen to have the Schnorr signature commit to the pubkey. Even if the txid signature doesn’t the Schnorr part of it does. That’s the dependency on ECDSA. If we didn’t do that or if we put in some way round it then it would work fine for Schnorr as well. The other thing about it is if you are just looking at it at the academic maths level then it looks just like a normal signature. When you look at it from the Bitcoin level you’d have to opt in to being able to spend it this way somehow. The fact that you have opted in would then distinguish it from all the other transactions. Maybe if we did some big upgrade, with Taproot we are doing a big upgrade where we are hopefully going to have everyone using Taproot because it is better in a bunch of ways. Everybody is using Taproot and most people are using the key path so everyone is going to be doing the same thing there. If we go to SegWit version 2, 3, 4 and there is a big reason for everyone to switch to that then maybe we can make ANYPREVOUT the default there and then everyone is doing it and it is indistinguishable. I think we are a long way away from being able to make it a default. There are potential risks of doing ANYPREVOUT compared to how we do things now. If those risks are real and cause actual problems if people aren’t ridiculously careful then we don’t want to inflict that on everyone using Bitcoin and have problems result from it. But maybe it will turn out that there aren’t really any risks that aren’t trivially mitigable. Maybe it is a good default some time in the future.

I totally overlooked that it would still be obvious that you are opting into ANYPREVOUT even with ECDSA. That makes it pointless in that sense.

It is still interesting in an academic sense to think about what should be the default for everyone to do in future that gives you big anonymity sets and lots of power and flexibility. But it doesn’t seem practical in the short term. The way Taproot activation is going the short term seems kind of long.

Thoughts on soft fork activation (AJ Towns)

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018043.html

This was building off Matt Corallo’s idea but with slight differences. Mandatory activation is disabled in Bitcoin Core unless you manually do something to enable it. One of Matt Corallo’s concerns was that the Core developers would be making a decision for the users when maybe the users need to actively opt in to it. The counter view is that the Bitcoin Core developers are meant to reflect the view of the users. If the users don’t like it they can just not run that code and not upgrade.

There are a billion points under this topic that you could probably talk about forever. Obviously talking about things forever is how we are going at the moment. We don’t want to have Bitcoin be run by a handful of developers that just dictate what is going on. Then we have reinvented the central bank control board or whatever. One of the problems if you do that is that then everyone who is trying to dictate where Bitcoin goes in future starts putting pressure on those people. That gets pretty uncomfortable if you are one of those people and you don’t want that sort of political pressure. The ideal would be that developers think about the code and try to understand the technical trade-offs and what is going to happen if people do something. Somehow giving that as an option to the wider Bitcoin marketplace, community, industry, however you want to describe it. The 1MB soft fork where the block size got limited, Satoshi quietly committed some code to activate it, released the code seven days later and then made the code not have any activation parameters another seven days after that. That was when Bitcoin was around 0.0002 cents per Bitcoin. Maybe it is fine with that sort of market cap. But that doesn’t seem like the way you’d want to go today. Since then it has transitioned off. There has been a flag day activation or two that had a few months notice. Then there has been the version number voting which has taken a month or a year for the two of those that happened. Then we switched onto BIP 9 which at least in theory lets us do multiple activations at once and have activations that don’t end up succeeding which is nice. Then SegWit went kind of crazy and so we want to have something a little bit more advanced than that too. SegWit had a whole lot of pressure for the people who were deeply involved at the time. That is something we would not like to repeat. Conversely we have taken a lot more time with Taproot than any of the activations have in the past too. It might be a case of the pendulum swinging a bit too far the other way. There is a bunch of different approaches on how to deal with that. If you read Harding’s post it is mostly couched in terms of BIP 8 which I am calling the simplest possible approach. That is a pretty good place to start to think about these things. BIP 8 is about saying “We’ll accept miners signaling for a year or however long and then at the end of the year assuming everyone is running a client that has lock in on timeout, however that bit ends up getting set, we’ll stop accepting miners not signaling. The only valid chain at that point will have whatever we’re activating activated.” Matt’s modern soft fork activation is two of those steps combined. The first one has that bit set to false and then there is a little bit of a gap. Then there is a much longer one where it is set to TRUE where it will activate at the end. There are some differences in how they get signaled but those are details that don’t ultimately matter that much. The decreasing threshold one is basically the same thing as Matt’s again except that instead of having 95 percent of blocks in a retarget period have to signal, that gradually decreases to 50 percent until the end of the time period. If you manage to convince 65 percent of miners to signal that gives you an incremental speed up in how fast it activates. At least that way there is some kind of game theory incentive for people to signal even if it is clear that it is not going to get to 95 percent.
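
A toy sketch of the decreasing threshold idea: the required share of signaling blocks starts at 95 percent and slides down to 50 percent by the end of the deployment. The linear schedule and period count here are assumptions for illustration, not the actual proposal’s parameters.

```python
def required_share(period_index: int, total_periods: int,
                   start: float = 0.95, end: float = 0.50) -> float:
    """Fraction of blocks in a retarget period that must signal for lock in
    during that period (illustrative linear schedule)."""
    if total_periods <= 1:
        return end
    progress = period_index / (total_periods - 1)
    return start - (start - end) * progress

# Over a roughly one year deployment (26 retarget periods), 65% of miners
# signaling becomes sufficient part way through rather than only at the end.
thresholds = [round(required_share(i, 26), 3) for i in range(26)]
first_ok = next(i for i, t in enumerate(thresholds) if t <= 0.65)
print(f"65% signaling would be enough from period {first_ok} onwards")
```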

There is a bit of concern and I think Greg Maxwell voiced it on Reddit, that maybe if we are talking about two year activation it is going to be demotivating for the people working on this because it is going to be such a long time period. There is a point to be made for too long also not being good.

I am happy either way. Trying to go super fast and dealing with the problems of that doesn’t really bother me. Taking a really long time is kind of annoying but I don’t find it that problematic either. It has already taken a long time. The benefit of taking a really long time is that if we get the pretty clear consensus after 6 or 12 months that I expect we would have, then spending two and a half years or however long getting people to upgrade to new software makes it pretty certain that every single node and every single business is going to be running the new software. At that point there is no chance of having any major “Lock up your funds. There has been a 7 block chain split and we don’t know what is going to happen. You can’t make transactions” moment where the whole Bitcoin economy has to stop and deliberately react. Even if, like with SegWit, it turns out there is no need to stop and react when we do things fast, we can’t be quite 100 percent sure of that in advance. That means we’ve got to have all the press people try to make it clear that there is a reason to start paying attention, and forcing everyone using the currency to pay attention is kind of the opposite of what we want. I want this to be stable; Bitcoin being something that people can rely on without having to constantly think about what the next parameter change is going to be is kind of the point. I think 99.9 percent of nodes have upgraded some time in the last four years going by Luke’s stats from the other day. Four years definitely seems completely long enough and two and a half years seems like plenty of time to me too. But it could well be that given even a bit of notice 3 months is plenty of time. I don’t see how there is any way of knowing what timeframe is perfectly safe and what timeframe isn’t without trying it. Obviously if you get it wrong when you try it out that is going to be a pain. But maybe that pain is something you have to accept as a growing pain.

I think that is a good argument that you don’t really want people to think about upgrading and ideally want that to be a natural consequence and everyone be upgraded without feeling pressure.

The ideal thing is for the miners to do the upgrading because they are the ones who are getting paid on a daily basis to keep Bitcoin running. If it is upgrading the pool software from 7 pools or something then that shouldn’t be hard. In theory unless there is someone specifically hurt by Taproot or someone is trying to stop Bitcoin from upgrading because they are short Bitcoin and long altcoins or something all of this discussion shouldn’t really matter. But we don’t know that none of those things are happening.

There is a concern there that you wouldn’t want miners to activate it if the majority of the users haven’t also activated it. There is this theoretical chance that users are not upgraded, miners are upgraded, it activates and then miners could theoretically revert it and steal. There is no way of getting around users having to upgrade I would assume.

If miners upgrade and no users do and then miners revert it is not really a problem because none of the users would’ve used the new features because they haven’t upgraded. If some of the users have upgraded and the miners have activated then there might be a problem because those users won’t be following the most work chain that everyone else is following now. They could get scammed more cheaply on that shorter chain that they are following.

Why would they be following the shorter chain in that case?

They won’t consider it the shorter chain because they don’t see the invalid chain that has got more work applied to it. They will be following the shorter chain because it is activated. This is assuming that the miners just stop following the rules, not that they re-org the chain to some different point at which it hadn’t activated and stop signaling.

That’s a good point that users are not only opting in by running the software but also accepting payments with the new soft fork.

As soon as they upgrade their software to the stuff that checks the activation back in history they will consider it activated on all the blocks more or less. It doesn’t matter if they upgrade after it has activated but that will still catch the miners cheating. They don’t need to have upgraded in advance.

On the question of defaults, people present different ideas here. The code would already be in Bitcoin Core and it would just be set to activate at a certain time versus the other idea which is more like the user has to actively opt in to it.

There are three parts to making Taproot live on Bitcoin. One is merging all the code. At the point where we merge all the code that lets us do tests on it, but unless there is a bug it doesn’t affect any of the live transactions. It doesn’t enforce the rules, it doesn’t do anything on Bitcoin itself. The second step is we add activation parameters. At that point anyone who can compile Bitcoin can almost certainly change two lines in the source and recompile to get to the point where it is not activated or there are some different parameters. There is still the option to say “Screw the Bitcoin Core developers, they are being stupid. We need to do something different.” If everyone gets consensus on that then things will mostly work ok. The third step is that once the activation parameters are in, whatever the conditions of activation are have to actually happen, whether that is a timeout or miner signaling or whatever.

It sounds like you are a bit depressed at the state of the discussion going around in circles.

I wouldn’t say depressed, I’d say cynical.

For me it seemed obvious after all the chaos of SegWit there was going to have to be this discussion that was likely to be longwinded. Maybe we should’ve tried to kick off this discussion earlier and maybe there is a lesson for future soft forks to get that activation discussion started earlier.

Ideally if we get something that works for Taproot and cleanly deals with whatever problems there are we can just reuse it in future. I think it would be fair to say that at least some of us have specifically delayed discussing activation stuff because we expected it to be a horrible, unfun discussion and conflict and tedious debate rather than fun coding kind of thing. But it is obviously something that we have to go through. We need to get an answer to the question and I don’t see an obvious way of doing that without discussion. Dictating stuff is the opposite of what I want.

The hope is that after this brainstorming phase people will start coalescing around particular ones. At the moment the number of proposals just keeps increasing. You would hope once we have that brainstorming phase then people will start coalescing around two or three options. Then we’ll be able to narrow it down. I think I am optimistic. I can see why you’re a little cynical. I think it was inevitable it was going to take a long time.

Just take a straw poll on the mailing list and be like “Everyone says aye. Which one? Here’s your three choices, off you go” (Joke) What is needed? Do we need enough people to coalesce around one particular method and then everyone says “This is the one we are going with”? What do you guys think is needed?

I think that is a great question.

I think that the work that David Harding and Aaron van Wirdum are doing is really valuable. We need to have some structure and as I said I think this is the brainstorming phase. There is going to be loads of different proposals. Some of them are going to be very similar and people won’t really care about eventually ditching some of the proposals that are very similar to other ones. The only thing I am worried about is if people have really conflicting views on what happened with SegWit. “This proposal should definitely happen because otherwise SegWit wouldn’t have happened.” Or disagreements on what the history of SegWit activation was. I think that is the only thing I might be worried about. I think everyone wants Taproot so I don’t think people are going to want to continue with these crazy discussions for a really long time delaying Taproot.

If you want to get more time wasted then there is an IRC channel ##taproot-activation.

As a relative newcomer to the space, I thought that the whole Taproot process was handled magnificently. The open source design and discussion about it was so much better than any other software project in general that I have seen. That was really good and that seems to have built a lot of good support and goodwill amongst everyone. Although people may enjoy discussing all the different ways of activating it it feels like it will get activated with any one of those processes.

One thing I might add is that we have still got a few technical things to merge to get Taproot in. There are the updates to libsecp to get Schnorr. One of the things that Greg mentioned on the ##taproot-activation channel is that the libsecp stuff would really like more devs doing review of code even if you are not a high level crypto dev. Making sure the code makes sense, making sure the comments make sense, being able to understand C/C++ code and making sure there aren’t obvious mistakes, not including the complicated crypto stuff that has probably already had lots of thought put into it. Making sure APIs make sense and are usable. Adding review there is always good. The Taproot merge and the wtxid relay stuff, both of those are pretty deep Bitcoin things but are worth a look if you want to look into Bitcoin and have a reasonable grasp of C++. Hopefully we are getting a bit closer to a signet merge, which is hopefully a more reliable test network than testnet. There should be a post to the mailing list about some updates for that in the next few weeks. I am hoping that we can get Taproot on signet so that we can start playing around doing things like Lightning clients running against signet to try out new features and doing development and interaction on that as well sometime soon.

Is the first blocker getting that Schnorr PR merged into libsecp? That code is obviously replicated in the Bitcoin Core Taproot PR. But is that the first blocker? Get the libsecp PR merged and then the rest of the Taproot Core PR.

I’m not sure I would call it a blocker so much as the next step.

The Taproot PR used to be quite a bit bigger than it is now. It also included an interim libsecp update that had non Schnorr related changes. We yanked that out, got that merged in. There were a few other refactoring type changes that were also part of the Taproot PR that I think now have all been merged. Now in the Bitcoin Core repo the Taproot PR is basically just a merge of the Schnorr PR into libsecp and then Taproot stuff. We are not going to pull that code into the Core repo obviously until it is merged into the upstream libsecp repo. That is where the review needs to go.

Thank you everyone for joining. We’ll do a similar thing in about a month’s time. We will make up a list and feel free to add in things to chat about. I’ll put the Meetup page up on normal channels.

Meetup

Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Last month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-06-23-socratic-seminar/

Google Doc of the resources discussed: https://docs.google.com/document/d/1Aw_llsP8xSipp7l6JqjSpaqw5qN1vXRqhOyeulqmXcg/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Intro

For anyone who is new, welcome. This is the Bitcoin Sydney Socratic. In normal times we do two meetups per month. One is an in person Bitcoin Sydney meetup for those of you in Sydney you are welcome to join us. We don’t know exactly when our next in person one will be. This is the other one that we do every month. Obviously the timezone works for people in Asia and Europe as well and obviously Australia, New Zealand people. We share the list, here is our list.

Subtleties and Security (Michael Ford, BitMEX research)

https://blog.bitmex.com/subtleties-and-security/

This was a summary of a bunch of the work done on Core over the past five or six months. It was mainly focused on build systems and trying to improve the security of our binaries and our build processes. A lot of stuff when it comes to build systems can be very finicky. There are a lot of peculiarities. It is easy to miss a flag or not turn something on and have that affect something else two or three hops down the line and you don’t even realize it. This was a summary of some of those changes. Some of them were very simple but had existed in the codebase for quite a long time. The first one here is where our security and symbol checks had been skipping over bitcoind. Essentially once we run our Gitian builds to produce releases we have some scripts that perform some security checks, and then symbol-check is essentially checking the binary dependencies looking for certain symbols. If we are building with stack protection it might check the symbols in the binary to see whether there is a stack protector symbol in there somewhere. But essentially since these scripts were introduced they had never been run against bitcoind because of a quirk in our Makefile. If you open PR 17857 you’ll see that the fix here was essentially deleting one character, or two characters if you include the whitespace. When these checks were being run, because of the use of the < character there (bin_PROGRAMS is essentially a list of binaries that gets passed into the script), the first argument would be skipped over and the checks wouldn’t get run against it. That always happened with bitcoind. However this isn’t as severe as it sounds because bitcoind is a subset of bitcoin-qt and the checks were always being run against that. As far as we are aware there is nothing malicious that we could’ve missed in this case.
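
One way to see the effect of that stray < is with a stand-in checking script: the shell consumes the redirection, so the first name in the list never shows up in the script’s argument vector. This is an illustration, not the real symbol-check.py invocation.

```python
#!/usr/bin/env python3
# check.py: stand-in for a security/symbol checking script that is handed a
# list of binaries to inspect on the command line.
import sys

for binary in sys.argv[1:]:
    print(f"checking {binary}")

# $ python3 check.py bitcoind bitcoin-cli bitcoin-tx
#     -> checks all three binaries
# $ python3 check.py < bitcoind bitcoin-cli bitcoin-tx
#     -> the shell opens "bitcoind" as stdin instead of passing it as an
#        argument, so only bitcoin-cli and bitcoin-tx get checked
```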

The next one is also another subtle one. There have been longstanding issues with ASLR and Windows binaries. It has come up in our repository a few times before in various mailing lists. I can’t remember why I was looking at this exactly. I think I was going to add some tests for it. Then I noticed that when I added the test one of our binaries was actually failing the test which was unexpected because we thought everything was fine. It turned out it was the bitcoin-cli binary. It came down to that binary wasn’t exporting any symbols. At build time the linker would strip away a certain part of the binary that is actually required on Windows if you want ASLR at runtime. This didn’t happen in any of the other Windows binaries because they are all exporting libsecp256k1 symbols. The CLI binary doesn’t do that, it doesn’t use any libsecp256k1 symbols. The fix here was again another simple one, basically just adding a directive to the linker to export the main symbol out of the CLI binary. Then at link time it wouldn’t strip away this certain part that it needed for ASLR at runtime. That was a fun one to track down.

The sysctl stuff is kind of boring and the effects weren’t that interesting. This was a follow on to some of the work that was done after removing the OpenSSL RNG. There was a new module added to our RNG that collected entropy from the environment. One of the ways it collected entropy was via this sysctl call. It would go and look at stuff like the number of CPUs you have or how much RAM you have or what files are in certain directories, lots of random things. However in our build system we had some checks that would look at the host you were building for, Windows or MacOS or BSD, and run these checks to see if that system call was available. The way we were checking for it was failing on MacOS. Then obviously at build time we wouldn’t compile in the code that would make use of those calls. There was no real impact because this code had never been released. It was fixed before it made it into the 0.20 release that only just came out. This was another case of something subtle where it looks like this is working everywhere and the detection is ok. It turns out that on MacOS the way we were detecting it was slightly different to the other BSDs and it was failing.

I’ll go quickly through this last one. This is a case of Apple’s documentation claiming that the linker would do a certain thing if you passed a flag. This flag was BINDATLOAD. According to Apple’s documentation if you link with this flag it would instruct the linker to set a certain bit in the binary header and then at runtime the loader would look at the binary header, see that bit and would modify its behavior to bind all of the binary symbols at load rather than lazily when they are first used. However when you actually pass this flag to the linker and build a binary it doesn’t set this bit in the header which is obviously contradictory to the documentation. It was unclear whether this was a bug in Apple’s linker or whether we were misunderstanding something. I eventually sent a few emails back and forward to this guy Nick Kledzik who is one of the people that works on the linker and loader at Apple. He came back with a clarification about how the behavior is meant to work and that we could disregard this flag and this header to all intents and purposes. However we can’t check whether it is working by looking for the bit, we have to look at some other output when you run a different tool on the binary. This was just a case of the documentation being incorrect and thus when you are looking at this sort of stuff you can’t even necessarily trust the documentation to instruct you on what to do.

Are you excited about Apple moving over to ARM and the stuff you are going to have deal with?

Not particularly excited. We’ll see how significant it is and what we may or may not have to do or work around in six months or maybe a year. In general it is not entirely clear even what the future is for us releasing MacOS binaries because it is essentially getting harder and harder for us to be able to build and distribute binaries in the completely reproducible, trustless manner that we’d like to. Apple is introducing these notarization requirements where we would have to build our binary and then send it to Apple servers so they could notarize it. Then they would send it back to us and that’s the only way we could distribute a binary out to end users that would run on their Macs. Obviously doing that sort of stuff is tricky for us especially if it requires more official things like having an organization. I guess we already do that in some part because we have code signing certificates but I think the direction that Apple is going is making it harder and harder for us to continue or want to continue releasing Mac OS binaries. I know some of the developers are getting less enthused about having to deal with all of Apple’s changes and distribution requirements. But in general the ARM stuff is interesting. I haven’t looked at it a whole heap.

The concern with notarizing is that you are assuming some kind of liability by signing off a binary and sending it to Apple? What is the actual concern there?

One of the concerns is also that it is essentially a privacy leak. When the end users actually run those binaries on their machines the OS will essentially ping Apple’s servers with information asking it whether these binaries are ok to run. This is another thing we don’t necessarily want. I think the other problem may be due to reproducibility. It depends how Apple modifies the binaries when they do the notarizing, whether we can incorporate that into our reproducible build process. I know for a fact that we would never distribute binaries that could not be reproducibly built by anyone else that wants to run the same binary. Those are two concerns. There might be more. There are a few threads on GitHub with related discussion as well.

Part of the reproducibility concern is that the end user is not able to go through all that same notarizing process through Apple? Or they wouldn’t need to go through Apple to reproduce it, it is just they wouldn’t be able to do it in the same way that the Bitcoin Core developers do it?

At the moment we release our binaries, the source code is available, anyone can follow the same process we do and get an exactly identical binary to the one that we release. However if as part of this requirement we’d build that binary but then have to send it off to Apple they would maybe modify it in some way, give it back to us and we would have to incorporate something that they gave back to us then end users could try to go through the same process but it wouldn’t really work.

On the blog post, there are four bugs that you go through in the blog post. They all don’t seem to be massively severe to put it mildly but they are all things that you expect to be caught. The first one isn’t severe because as you say bitcoind is a subset of bitcoin-qt. There was no problem with that one but you still expect that to be caught. The second one, this was data positioning inside of process address space. This is a concern around a private key being stored in a deterministic address space or a predictable address space. If an attacker had access to your machine they could extract the private key?

ASLR is one runtime security feature. The purpose of it is to try to make it harder for attackers if they were able to say exploit a binary in some way to exploit it further. The addresses or the location of certain data in the address space of the binary is randomized at load. It was working fine for all the rest of binaries, it was only bitcoin-cli. I guess in general we want all of our hardening techniques to apply equally across all of our binaries. There are ways to detect this after build. You can look for whether certain bits have been set in binary headers or other tooling output. We were looking for those and had been for some time. They were all set to what they were meant to be set to but it turned out that the runtime requirements were slightly different to what we understood them to be. Even though we were already testing for certain things that didn’t catch this issue.

The concern is that it is generating a private key using the bitcoin-cli. Apart from privacy the only thing you’ve really got to worry about are the private keys in terms of an attacker accessing your machine. Everything else is pretty public in terms of blockchain data, blocks etc.

Sure.

The address space randomization only makes it harder to exploit something. It is not that it is avoiding it. It is not that black or white.

The other two, entropy problems was the third one. The fourth one, my understanding from the blog post was that they just haven’t updated the documentation even now. They have told you that it works a certain way but that’s not the way it is outlined in the documentation, is that right?

Yeah. The man page for Apple’s linker still says that if you pass this flag the linker will set some bit in the program header but it does not do that.

Have you thought about chasing them? What is the process of trying to get Apple to update their documentation?

I conversed with the guy that works on the linker at Apple. I assume that if they were concerned about it he would probably tell someone to update it.

It is certainly not your responsibility. So basically these four things, they are not hugely severe but they are all things that you wouldn’t expect to be happening at this point in the maturity of Core certainly when you are interacting with things like Apple. And something could pop up in future that is similar that could be really severe.

The one counterpoint I’d make is that a lot of this stuff can change over time. Three or four years ago that BINDATLOAD flag may have worked exactly as is written in the man page. It just happens as of two years ago it suddenly didn’t work that way. You could have some assumptions or tests for behavior that worked at a point in time that then may not work because something beneath you is changing.

Generalized Bitcoin-Compatible Channels

https://eprint.iacr.org/2020/476.pdf

https://suredbits.com/generalized-bitcoin-channels/

This is a cool idea. It is about taking the way Lightning works today which is with asymmetric state: each party has their own commitment transaction which is different to the other party’s, with a different transaction ID and different script. Essentially the Lightning mechanism uses that to identify who posted the transaction on the blockchain. If it is an old transaction, one that has been revoked, it allows that person to be punished. The idea here is to keep the Lightning Network transactions symmetric. Both parties have the same set of transactions that they could post on the blockchain but what is different is that when they post them they have different signatures. The system identifies who posted a transaction and who to punish by the signature on the transaction rather than the transaction itself. This is a nice idea for several reasons. In my opinion the main motivator is that it is much simpler. If you are implementing it you have fewer objects to deal with and less complexity which can lead to software bugs and so on. The next benefit is that the transactions end up being smaller because there is less state to keep around in the transactions and a less complicated update and settlement process. The place where it shines is in the punishment branch. In this system if you put down an old state, what first happens is we figure out that this state is old. You have a relative timelock where the other party has some time to use the signature that you put down that identifies you as the perpetrator. The other party is able to extract a secret key from that because it is an adaptor signature. They extract the secret key and they can use that secret key to punish you if that state has been revoked already. If you have a bunch of HTLCs on a transaction that has been revoked it is much simpler to settle and take all the money. With the current punishment mechanism you may have to do a single transaction per output. Per HTLC that is inflight at that revoked state you may have to do an individual transaction for each of them, whereas for this one you figure out whether it is punishable or not and if it isn’t you carry on to settling all the rest of the outputs onchain if they are not revoked. I am really keen on this idea. I am looking at maybe attempting to build it starting next month but I am looking for feedback from anyone about what they think about this paper, especially people who are more familiar with Lightning than me and the actual implementation.
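
The punishment path leans on a standard adaptor signature property: publishing the completed signature leaks the hidden secret to anyone holding the pre-signature. A toy sketch of just the scalar arithmetic (Schnorr-style, ignoring the actual curve operations and challenge computation):

```python
# secp256k1 group order (the modulus the signature scalars live in)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def complete(pre_sig: int, secret: int) -> int:
    """Turn an adaptor pre-signature into a broadcastable signature scalar."""
    return (pre_sig + secret) % N

def extract(final_sig: int, pre_sig: int) -> int:
    """What the counterparty does when a revoked state hits the chain:
    recover the secret from the broadcast signature and the pre-signature
    they already hold, then use it to claim the punishment output."""
    return (final_sig - pre_sig) % N

secret = 0x1234
pre_sig = 0xABCDEF
assert extract(complete(pre_sig, secret), pre_sig) == secret
```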

Why is it that you normally in Lightning need to have one transaction per HTLC you are settling?

It is a good question. When you push the transaction onchain the HTLCs may already have been revoked. When you put a revoked transaction onchain the HTLC outputs then have three paths. One is redeem, one is refund and one is that it has been revoked. Depending on who put it down you have another transaction that spends from the HTLC output and that determines whether it goes into the redeem path with a secret or the refund path. But if I put it down and I know the secret I can use the secret right away and put it into another state on another transaction. If you want to redeem all these things you may have to make an individual transaction for each of those other outputs that I put down. They are about to change it because of this anchor outputs idea. You never have a unilateral spend.

I think one of the things with the way it works at the moment is if you have got a HTLC then there is either the timeout and refund path or the reveal the secret and get paid path. But if you are publishing a transaction near the timeout then you want to be able to say “I know that this might be revoked but I want to guarantee that unless someone has been cheating that I get my money and that we don’t have this delay while we figure out if someone is cheating and that puts us over the time limit so now I can’t get paid.” I think that is why there are the two layers there. The term that is used is layered commitments. I think that is going to be the challenge to have work with this approach. The problem with not doing it that way is that the overall timeout of the HTLC has to decrease at each step by the time you are going to allow somebody to detect cheating at each step. If you are saying “I don’t want to have to detect cheating every five minutes, I want to be able to check it every two hours” the difference between the overall HTLC timeout on each step ends up being the two hours for one guy, the two days for another guy, the five minutes for another guy. That makes the overall timeouts of the HTLCs really long.
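
The timelock inflation being described is just a sum: each hop adds the reaction window it wants onto the HTLC expiry, so the overall timeout grows with every hop. The windows below are made-up block counts roughly matching the examples in the comment above.

```python
# Reaction windows each hop wants before its outgoing HTLC times out,
# expressed in blocks (made-up figures for illustration only).
hop_windows = {
    "hop that checks every five minutes": 1,
    "hop that checks every two hours": 12,
    "hop that checks every two days": 288,
}

total = sum(hop_windows.values())
print(f"the sender's HTLC must outlive the final hop's deadline by about "
      f"{total} blocks (~{total * 10 // 60} hours)")
```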

This is one of the issues I really wanted to know more about. It has obviously been discussed because it is the same way eltoo works?

Yeah eltoo doesn’t work that way. There is a post about getting it to work despite that with the ANYPREVOUT stuff. (Also a historical discussion on the mailing list)

You can actually circumvent this problem in eltoo?

In eltoo with ANYPREVOUT as long as ANYPREVOUT doesn’t also commit to the value of the transactions. This was discussed in around 2016 to come up with the way the current stuff works.

I was thinking that this seems like quite a big problem. But it seemed to also be a problem with the original eltoo paper. I looked at the original paper and I couldn’t find anything talking about it.

As far as I can see it is totally a problem with eltoo and it hasn’t been discussed at any point.

On the previous answer to the question, it is like a worst case analysis. This transaction could go on there, I don’t think it necessarily has to be there but each of those transactions needs another revocation thing in it. I think that is where they get their numbers from. But they could be wrong and I could be wrong.

At least the general idea is that there is a second transaction that needs to occur before the whole HTLC revocation process is played out. Because of that secondary transaction you get this problem of not being able to spend multiple revoked HTLCs at the same time.

Yes and it doesn’t help you if it is a valid state. It seems to be the same if it is a valid state. You still have to go through all the transactions. It is not an optimization there, it is an optimization saying “We can exit early on the punishment.” This is the question. Is it actually worth doing that double layer of first punish and then settle. Right now the way Lightning does it is punish and settle at the same time to avoid this layering of timelocks. Is that still the right way to go? You still use the new mechanism that they’ve provided in the paper but don’t do their double stage revocation. That is the thing I am wrestling with.

Doesn’t this all go out the window with eltoo because there is no punishment?

I think it does go out the window with eltoo.

Going back to that original post on generalized channels, how Lightning works now is that I have a commitment transaction with a timelock on my output and the counterparty has a timelock on the output on their commitment transaction. They are not symmetric. I am struggling to understand how those transactions are identical in this design.

They are identical because the commitment transactions are the same.

Are they adding a timelock that is unnecessary to one of the outputs or are they taking away the timelock that were there on each party’s output?

It is necessary but it can be circumvented in eltoo apparently. Each commitment transaction is the same for both parties and then the output on that can either be revoked, giving all the money to the person who was not the perpetrator, or if it is valid it spends to the real commitment transaction, or the state transaction you could call it. The commitment transaction is very boring and bland. It just has an output where, when the guy puts it down he reveals his adaptor signature secret, and if he has also already revealed the revocation key over the communication channel in Lightning then the other guy who is not the perpetrator can just take all the money from the output. He knows both of the private keys of the OP_CHECKMULTISIG so he can take the money. But if he doesn’t, if it is a valid state, then after a relative timelock it spends to a state transaction that has all the HTLC outputs on it and the balance outputs of the two parties. That is how they manage to keep it symmetric.

Flood & Loot (Harris, Zohar)

https://arxiv.org/abs/2006.08513

This was published mid June and then revised early July. To outline the attack, the idea is you are an attacker, you want to steal money from people, you set up your channels. The idea is that you would have the target node and the source node. You set up your channels and you push through as many possible payments as you can based on how many HTLCs you can do and then you are waiting for the timelock. Because all those transactions are happening at once you can spring the trap and steal from everyone because everyone is trying to claim at the same time. Every HTLC has its own output as opposed to the other approach where it is one for the whole channel.

An attacker tries to create as many channels as possible with victim nodes and the attacker also creates a node that is able to receive funds. What the attacker now does is try to create as many HTLCs as possible, payments that are targeted to the receiving node, which is also a node controlled by the attacker. He uses the channels that he just created with all the victim nodes. The target node waits until all the HTLCs have reached it and then it sends back all the secrets to the victim nodes that need those secrets to claim the payments from the attacker. The target node is left without any open HTLCs but at the same time the source node that created all the payments doesn’t respond to the secrets from the victim nodes. The source node is turned off. As the timeout approaches for all the HTLCs all the victim nodes will try to broadcast their transactions at the same time because they want to close their channels. Some of those broadcasts will fail to enter the Bitcoin blockchain because of congestion. The attacker will then wait for the expiration of those HTLCs and, by using the replace-by-fee policy to raise the fee of his own transactions, claim the victims’ funds and steal money that way. It is a kind of denial of service attack because you try to create a situation where suddenly a lot of people try to close channels at the same time. If you read the paper you will see that with the current block weight you need a minimum of 85 channels that close at the same time to be able to steal some funds. The amount of funds you can steal increases if you have more channels. This is all based on simulations. Apparently it starts at 85 channels and if you are able to close down more than 85 channels at the same time you are able to steal substantially more funds.
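
A back-of-envelope sketch of the congestion argument; every number here (blocks of headroom before the HTLCs expire, weight of a forced close with its HTLC claims, how full blocks already are) is an assumption for illustration and not taken from the paper’s simulations.

```python
MAX_BLOCK_WEIGHT = 4_000_000

def closes_that_confirm(blocks_before_timeout: int, close_weight: int,
                        other_traffic_per_block: int) -> int:
    """Roughly how many victim force-closes can confirm before the HTLCs
    expire, given limited leftover block space (illustrative only)."""
    spare_per_block = MAX_BLOCK_WEIGHT - other_traffic_per_block
    return (spare_per_block * blocks_before_timeout) // close_weight

# e.g. 10 blocks of headroom, ~50,000 weight units per HTLC-laden close and
# blocks that are already mostly full of other traffic:
fit = closes_that_confirm(10, 50_000, 3_900_000)
print(fit, "victim closes fit; any victims beyond that risk their HTLCs expiring")
```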

The crucial point I missed there is that you stop responding. Every other node out there thinks “I need to go to chain and I have no other way to deal with it.” The concern would be that people might not want to put funds into Lightning because you are now risking money. Previously a lot of people might have thought if other people open channels to me that is not a risk but really there is a risk to that especially once you start routing through that channel. Probably worthwhile talking about the mitigations as well. Some of the Lightning implementations are limiting the number of HTLCs. The other one is anchor outputs and reputation though obviously there is a concern around reintroducing layers of trust there. Any comments on mitigations?

I still haven’t got my head around how all this replace-by-fee stuff works together in this attack.

The replace-by-fee is used in this attack by the attacker at the moment the victim tries to broadcast the current state of the channel and close the channel. If you wait until the HTLC has expired then that output is still unspent in the UTXO set. You need to use a higher fee as an attacker to be able to be the first to claim the funds.

The other crucial point they mention in the paper is that if you are the innocent victim node all you have is the pre-signed commitment transaction from back when the attacker was talking to you. You can’t now re-sign that because you can’t re-sign it on your own.

One of the mitigations as always in these papers is the reputation based behavior.

That’s not a good pathway to go down because you don’t want the whole network to become permissioned and reinsert all the same permissioning that we are trying to get away from.

There is already some reputation based behavior in some of the clients. c-lightning is a little bit more hardcore and rightly so. I think lnd already has a small local database on the client itself keeping reputation…

The BOS scoring.

Does anyone know what the problem is with reducing the number of unresolved HTLCs? Would that be a big deal?

I think that would mean less functionality because now there is less routing that can happen through the network.

I don’t think it is a limiting factor right now but in the future if Lightning started to grow exponentially it would limit the number of transactions.

It just adds more restrictions to the types of payments that can go through Lightning. If you are allowing loads of really tiny HTLCs and as a routing node you are collecting tonnes of really small HTLCs you are helping the network by routing loads of different types of payments including small ones. The problem is when you go onchain with so many HTLCs obviously the fee is so much higher because you are taking so many HTLCs onchain. The number of bytes is so much higher than it would be if you were just going to chain with a few HTLCs.

Chicago BitDevs transcript on various Lightning Network attacks

https://diyhpl.us/wiki/transcripts/chicago-bitdevs/2020-07-08-socratic-seminar/

They went through a bunch of Lightning Network attacks including Flood & Loot. There was this Bitcoin Optech link talking about the different types of attacks on Lightning. Someone said “you either really want to know your stuff with Lightning or you want some sort of trusted relationship with your direct peers” at this stage. Another attack that they went through was stealing HTLCs inflight which at least in my eyes was a more concerning one and something I hadn’t really thought about before. It has always been obvious that fees are going to be a problem if you are forced to go back onchain and onchain fees are really high. Right from the beginning that has been obvious that it is a problem. But it hadn’t really clicked with me, this stealing HTLCs inflight if you are an attacker who has one node earlier on in the route and one node later on in the route, whether you withhold revealing the preimage or play tricks with some of the intermediary nodes.

That is similar to the wormhole attack, the idea is that you jump through and skip everyone in the middle.

That only allows you to steal the transaction fee right?

The wormhole attack is just fee that you are stealing.

I think it is an optimization not an attack. It is a way of optimizing the route ad hoc and getting paid for it in my opinion. Everyone still gets the money.

No you are stealing the fee.

You steal some fees because you skipped some nodes in the route. If you are delivering a parcel and instead of passing onto the next delivery man you just deliver it straight to the guy yourself. You get the fees in the middle, it sounds good to me. I don’t know why it is an attack.

You are tricking those nodes into locking up their capital when they are honestly seeking to route a payment. Thinking that they are providing a route that is needed on the network and then tricking them when they are acting honestly to lose out on the fees that they could’ve perhaps got elsewhere.

Maybe I misunderstood the attack. You can lock up money with that attack?

You lock up money anyway because you need to lock up the money for the HTLC until the payment comes through.

Now I get it, I never understood what the attack was. The attack is like a denial of service attack. You lock up their money, you’re skipping them and stealing their fee.

They are performing the service and they are not getting the fee in return.

To some extent I agree that you get paid for offering a better route. It is a funny way of looking at it. The only cost the victim incurs is the cost of having those funds locked up and not getting the fee. It is not really stealing.

They could be routing another payment that is actually needed and getting the fee for that. They are locking up capital and they are not receiving fees that they could be receiving elsewhere.

Opportunity cost.

Following the Blockchain.com feerate recommendations (0xB10C)

https://b10c.me/mempool-observations/3-blockchaincom-recommendations/

I thought this was a really cool post but I can’t remember all the specifics of it. For a long time people were saying “Are Blockchain lying about how many transactions they do? Are they really one third of the network in terms of transactions?” According to this article that is a fair estimate based on the fingerprinting.

I guess the one thing to point out is how shocking that these transactions can be fingerprinted so easily. One third of transactions, it is obvious where they are coming from. That in itself is quite shocking.

What’s so good about this Blockchain wallet? Why does everybody use it? I have never used it myself.

It is like a first mover advantage. They were so early that so many newbies used them and never moved off to another wallet. They have blocked up a lot of things by not having SegWit etc.

Part of it was also they were one of the first easy wallets in general and they were also a web wallet. You go to a website and it is really easy to sign up. All the newbies did it. People who have money with them, their value went up quite considerably because it was very early on in Bitcoin’s life, that is what caused them to have a lot of money on there. I find it surprising that those users also create a lot of onchain transactions at the same time. That part is a little confusing to me. I am not surprised that there is a lot of capital locked up with them.

They are non-custodial? It is like a web wallet client side wallet thing?

Yeah but the security of that is questionable. Now they have an app as well. They didn’t in the past. It was the web wallet mainly that got them a lot of popularity and was relatively easy to use.

That is scary.

They even had a poor implementation of Coinjoin at some point. It was completely traceable. They were not a bad company at that time in the sense that they were trying to do these things.

BIP 118 and SIGHASH_ANYPREVOUT

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018038.html

Is anyone not familiar with NOINPUT/ANYPREVOUT as of a year ago, basic concept?

Personally I know what they are trying to do. I’m not familiar with the exact differences between NOINPUT and ANYPREVOUT and I haven’t really followed the recent stuff on how it has changed. I know ideas are being merged from the two.

There is the motivation in the BIP now that is probably good context to read. The basic idea is if you want to spend some Bitcoin somewhere then you obviously have to sign that or else everyone could spend it because it wouldn’t need a signature. When you create any of the signatures by any current way with SegWit or with legacy pre-SegWit stuff or with pay-to-script-hash or with Taproot as it is proposed so far without ANYPREVOUT, you are committing to the txid of the transaction that you are trying to spend. The txid is made up from hashing the scriptPubKeys and the amounts and everything else. At the point where you have made a signature you then have to commit to exactly how the funds got into whatever transaction is going to end up onchain. What would be nifty to be able to do is to come up with a way of spending transactions where you don’t know quite how they got there, you just know that they got there somehow or other and they are going to be yours. If you are writing a will you want to be able to say “All my money goes to my children and it is not going to matter if the money ended up in this bank or a different bank or if it is gold buried on your property or whatever.” The executor can take your will and its signature and say “This is sufficient proof for whoever cares” and execute your will and testament that way. What we’d like to do is have something similar for Bitcoin smart contract stuff where you can say “No matter how someone tries to cheat me on my Lightning channel I want to just send this signature to these watchtowers who will pay more attention to the blockchain than I can be bothered doing and will send these funds directly to me if there is any cheating that happens. It doesn’t matter how that cheating happens.” That was the original use case of NOINPUT when Joseph Poon I think proposed it originally for Lightning. Then the whole eltoo thing is based on “If there is ever some state published I want to be able to update that to a new state and it doesn’t matter if it is an old state from 5 minutes ago or from a year ago or however long. No matter which of those gets published I want to be able to do a single signature and end up with the current state.” That is the more current motivation for it. The idea here is that when you are doing a signature you are no longer signing the transaction ID that is going to be spent because you don’t know that yet, instead you will sign some sort of description of what the transaction is going to be. For ANYPREVOUT that includes things like the scriptPubKey that is going to be spent or the actual script after it has been unhashed and unTaproot’d. And maybe the value and other stuff like that. It is still signing that it is your funds, it is just not signing that it is this particular set of your funds. That has different risks. For the will example that means you are going to be able to take all the money from all the bank accounts not just the one bank account that you were thinking of when you wrote the will. In the same way with ANYPREVOUT it means that that single signature can possibly spend funds from one UTXO or from many UTXOs that all satisfy the same conditions. It requires a bit more care to use this sort of signature but if you are doing complicated smart contract stuff like eltoo then you need to take that sort of care anyway. It seems like a reasonably good fit.
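
As a rough illustration of the difference being described, the following sketch contrasts what the signed message might commit to under the default sighash versus ANYPREVOUT and ANYPREVOUTANYSCRIPT. The field set and serialization are simplified stand-ins, not the actual BIP 341/118 hashing.

```python
# Simplified sketch of what the signed message commits to under the default
# sighash versus ANYPREVOUT / ANYPREVOUTANYSCRIPT. The fields and their
# serialization here are illustrative, not the real BIP 341/118 encoding.
import hashlib

def sighash_default(prevout_txid: bytes, prevout_index: int,
                    script: bytes, amount: int, outputs: bytes) -> bytes:
    # Default: commits to exactly which UTXO is being spent.
    msg = b"".join([prevout_txid, prevout_index.to_bytes(4, "little"),
                    script, amount.to_bytes(8, "little"), outputs])
    return hashlib.sha256(msg).digest()

def sighash_anyprevout(script: bytes, amount: int, outputs: bytes) -> bytes:
    # ANYPREVOUT: drops the txid and index but still commits to the script
    # and amount, so one signature can spend any UTXO matching that script.
    msg = b"".join([script, amount.to_bytes(8, "little"), outputs])
    return hashlib.sha256(msg).digest()

def sighash_anyprevout_anyscript(outputs: bytes) -> bytes:
    # ANYPREVOUTANYSCRIPT: drops the script too (and, per the tweak mentioned
    # later in this discussion, no longer commits to the amount either).
    return hashlib.sha256(outputs).digest()

# Same spend from two different prevouts: the default digests differ,
# while the ANYPREVOUT digest does not depend on the prevout at all.
script, amount, outs = b"\x51", 100_000, b"illustrative outputs"
assert sighash_default(b"\x01" * 32, 0, script, amount, outs) != \
       sighash_default(b"\x02" * 32, 0, script, amount, outs)
```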

There was also a discussion around script path and key path?

The way Taproot works is that you can have a key path or many script paths for a single scriptPubKey that goes into a UTXO. I say “You can spend this bit of Bitcoin with this key just by doing a signature or by following this script or this different script or this other script.” The way we’ve set up Taproot, when you define a new SegWit version the key path is pretty much set in stone from when you define it. Say we activate Taproot tomorrow, you can’t say “Right now these particular signatures are valid but in the future some different signatures will be valid. We don’t know what those are yet so in the meantime just accept any signature.” because that would obviously be completely insecure. The only way we can accept signatures that we haven’t defined yet and that we are going to define in future in a soft fork friendly way is to put it in the script path or to set up new SegWit versions. ANYPREVOUT doesn’t want to use up an extra SegWit version, it wants to be as minimal as possible so it goes in the script path which has a different leading byte for the pubkey that gets put in the script. There is a special code so that you can just reuse the main pubkey that you use for Taproot without having to repeat it too much. By using that special class of pubkeys or specially tagged pubkeys that means you have opted in to allowing ANYPREVOUT on that script. It also means that if you don’t want to opt in to ANYPREVOUT you can do stuff as normal with Taproot and no ANYPREVOUT signature will ever be valid for those coins that you have used. You don’t have to worry about the extra risks that ANYPREVOUT may introduce.

Is my understanding correct that these discussions with Christian (Decker) on NOINPUT means that these proposals are being merged and getting the best from both?

There is not a real conflict between the proposals apart from the naming. The NOINPUT original proposal was we want to have this functionality and we’ll do it when the next version of SegWit comes along. The next version of SegWit is hopefully Taproot. The original way it was proposed meant that eltoo would’ve had to have been a CHECKMULTISIG script path rather than key aggregation Schnorr key path potentially. It is not technically any worse than what the original concept was by having to go through the script path. It is just the next progression now that we have some idea what the next version of SegWit should look like.

There was a small change to sighash in Taproot wasn’t there to facilitate a new sighash flag in a future soft fork?

The fact that we have allowed for unknown public keys. The public keys that Taproot understands are 32 bytes. Instead of starting with 02 or 03 it is just the 32 bytes that follow that and that’s it. When we put them in script we have just said that stuff that doesn’t match this default we will accept, full stop. Any signature will be accepted for those, so don’t put in a pubkey of that sort until it is defined how it should behave. These pubkeys are going to be 33 bytes with the first byte being 01 and that is going to limit how they can be signed but 33 bytes with the first byte 77 can be something entirely different with a different sighash or a different elliptic curve. It could be a different size of elliptic curve. It could be 384 bits instead of 256. Or whatever else turns out to be a good idea eventually. There is also the specification for how stuff gets hashed. There has been a fair bit of effort put in to making sure that we won’t accidentally have hash collisions between hashing for one sort of message and a different sort of message. Hopefully that should be good.
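
A toy classification of tapscript public key types along the lines just described; the exact consensus behavior lives in BIP 341/342 and the ANYPREVOUT proposal, not in this sketch, and the 33-byte 0x01 prefix is the proposed ANYPREVOUT tagging mentioned above.

```python
# Illustrative sketch of the pubkey-type classification just described: a
# tapscript pubkey's length and leading byte determine how it is treated.

def classify_tapscript_pubkey(pubkey: bytes) -> str:
    if len(pubkey) == 32:
        # Defined today: a BIP 340 x-only key with normal Schnorr signature rules.
        return "bip340-key"
    if len(pubkey) == 33 and pubkey[0] == 0x01:
        # Reserved by the ANYPREVOUT proposal: opts these coins in to
        # SIGHASH_ANYPREVOUT-style signatures.
        return "anyprevout-key (proposed)"
    # Anything else is an "unknown public key type": signatures against it
    # are accepted for now, so a future soft fork can define its meaning.
    return "unknown-key-type (upgradeable)"
```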

It is certainly going to be a strong contender for a future soft fork assuming we get Taproot. Maybe a soft fork with Jeremy Rubin’s CHECKTEMPLATEVERIFY.

The code isn’t updated for the latest tweak to the ANYPREVOUT BIP that doesn’t commit to the value with ANYPREVOUTANYSCRIPT any more. It is still missing some tests but assuming the code can be written and the tests can be made to pass then I don’t think there needs to be any special delay to get it activated. In theory we should be able to activate multiple forks at once with BIP 9 and BIP 8 and whatever else.

There could be signaling in different ways?

There are at least 13 version bits that can all be used at the same time if necessary. We could have bit 5 signaling for Taproot and 6 signaling for ANYPREVOUT should Taproot activate. Then 7 signaling for CHECKTEMPLATEVERIFY. Then 8 signaling for CHECKBLOCKATHEIGHT via the annex. I forget what my next favorite one was.
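
A small sketch of how parallel signaling could look with BIP 9 style version bits. The bit assignments are just the hypothetical ones mentioned above, not real deployment parameters.

```python
# Hedged sketch of several deployments signaling in parallel via BIP 9-style
# version bits. Bit numbers below mirror the hypothetical assignments in the
# discussion above and are not real deployment parameters.

TOP_BITS = 0x20000000  # BIP 9 requires the top three bits of nVersion to be 001

DEPLOYMENT_BITS = {
    "taproot": 5,
    "anyprevout": 6,
    "checktemplateverify": 7,
    "checkblockatheight": 8,
}

def signals(block_version: int, deployment: str) -> bool:
    """True if a block's nVersion signals readiness for the deployment."""
    bit = DEPLOYMENT_BITS[deployment]
    return (block_version & 0xE0000000) == TOP_BITS and bool((block_version >> bit) & 1)

# Example: a block signaling both Taproot and ANYPREVOUT but nothing else.
v = TOP_BITS | (1 << 5) | (1 << 6)
assert signals(v, "taproot") and signals(v, "anyprevout")
assert not signals(v, "checktemplateverify")
```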

What is CHECKBLOCKATHEIGHT? I haven’t heard of this one.

You put the tail of the hash of a block at a particular height in the annex. Your transaction is not valid if the tail of that block at that height doesn’t match it.

It is to connect a transaction to a specific block?

Yes. If the blockchain gets re-organized your transaction is not valid anymore. Or if there is a huge hard fork or whatever you can make your transaction valid on one side but not on the other. As long as the hard fork still respects this rule of course.
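
A toy sketch of the validation rule being described: the transaction commits to the tail of the hash of the block at a given height and is only valid on a chain where that matches. The function and the way the commitment is carried are invented for illustration; no such consensus rule exists today.

```python
# Toy model of the CHECKBLOCKATHEIGHT idea discussed above: the annex (or, in
# Luke Dashjr's older BIP, an opcode) commits to the tail of a block hash at a
# given height, and the transaction is only valid on a chain where it matches.

def checkblockatheight_valid(chain_block_hashes: list[bytes],
                             committed_height: int,
                             committed_tail: bytes) -> bool:
    """chain_block_hashes is the list of block hashes indexed by height."""
    if committed_height >= len(chain_block_hashes):
        return False  # the chain has not reached that height yet
    actual = chain_block_hashes[committed_height]
    # Compare only the committed tail so the commitment can stay small.
    return actual[-len(committed_tail):] == committed_tail
```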

I think Luke Dashjr wrote a BIP on that a while back which was implemented in a very different way.

It was via an extra opcode.

Is that optional, the annex thing? Is it optional or mandatory?

I’m not quite sure what you are asking. The annex is something that you can optionally add to a transaction but once it is added to the transaction every signature commits to it. Once you’ve added it if you take it away or change it in any way then the signatures become invalid.

Are you using the annex as part of this proposal?

Which proposal?

ANYPREVOUT.

No ANYPREVOUT still commits to the annex. This was talking about the CHECKBLOCKATHEIGHT idea.

Let’s say you have eltoo all spec’d up and designed. Is there a way, given that Taproot happens first, to change Lightning channels to eltoo channels without going onchain? I couldn’t see how to do it since you have to commit to a new public key type. Is there a way to do it?

The problem is that if either party had kept any of the previous penalty Lightning revoked commitment states then they could still publish those and those wouldn’t be replaceable via an eltoo thing. I think you’d still want to go onchain and bump the pubkey or the UTXO or whatever. That is pretty much the same problem with if you’ve got eltoo and you want to go from having 5 people in the channel to having 6 people in the channel.

Bob McElrath wrote a paper (further discussion on Twitter) advocating for enabling SIGHASH_ANYPREVOUT for ECDSA so he could do covenant signatures where it looks like you are paying to a pubkey but in reality the private key of it is not known. A specific signature for it is known in such a way that you can spend it.

Can you explain how the pubkey recovery covenant signature stuff works?

The general idea is that you can create a pubkey in such a way that you don’t know the private key but you do know a single signature. You create a signature beforehand so you know the transaction that you wanted to create and you reverse the ECDSA math in such a way that you come up with a single pubkey that is valid for that specific signature. Now if somebody sends money to that pubkey you can only spend it in one way because you only have the signature, you don’t have the private key. The reason this doesn’t work today for ECDSA is because you are committing to the txid of the output that you are spending. There is a circular reference there where you want somebody to pay to a pubkey and then you want to spend from that pubkey with a signature but the signature has to contain the txid which contains that pubkey. It is a circle there. In theory if you had ECDSA plus ANYPREVOUT you could do this because now you are no longer signing the txid. I would be able to give you a pubkey, you would send money to it and then provably I would only be able to spend it in one way. It would act very similarly to OP_CTV with the one upside being that the anonymity set is better because it just looks like a regular pubkey instead of a very obvious OP_CTV transaction. It seems to me like it is cleaner to just put everything on Schnorr.
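
For reference, here is a hedged sketch of the textbook ECDSA math behind that trick; this is standard public key recovery, not any particular implementation.

```latex
% ECDSA verification of a signature (r, s) on message hash z against a
% public key Q checks that r equals the x-coordinate of
%   (z s^{-1}) G + (r s^{-1}) Q.
% Solving for Q, where R is a curve point whose x-coordinate is r:
\[
  Q = r^{-1}\,(sR - zG)
\]
% The covenant trick runs this backwards: fix the intended spending
% transaction (hence z) and a signature (r, s) first, then derive Q from
% them. Funds paid to Q can only be spent with that one pre-chosen signature,
% since nobody knows a private key for Q. The circularity today is that z
% commits to the txid, which commits to Q itself; an ANYPREVOUT-style sighash
% removes the txid from z and breaks the circle.
```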

That’s not compatible with Schnorr because we have chosen to have the Schnorr signature commit to the pubkey. Even if the signed message doesn’t commit to the pubkey via the txid, the Schnorr part of it does. That’s why it depends on ECDSA. If we didn’t do that or if we put in some way round it then it would work fine for Schnorr as well. The other thing about it is if you are just looking at it at the academic maths level then it looks just like a normal signature. When you look at it from the Bitcoin level you’d have to opt in to being able to spend it this way somehow. The fact that you have opted in would then distinguish it from all the other transactions. Maybe if we did some big upgrade, with Taproot we are doing a big upgrade where we are hopefully going to have everyone using Taproot because it is better in a bunch of ways. Everybody is using Taproot and most people are using the key path so everyone is going to be doing the same thing there. If we go to SegWit version 2, 3, 4 and there is a big reason for everyone to switch to that then maybe we can make ANYPREVOUT the default there and then everyone is doing it and it is indistinguishable. I think we are a long way away from being able to make it a default. There are potential risks of doing ANYPREVOUT compared to how we do things now. If those risks are real and cause actual problems if people aren’t ridiculously careful then we don’t want to inflict that on everyone using Bitcoin and have problems result from it. But maybe it will turn out that there aren’t really any risks that aren’t trivially mitigable. Maybe it is a good default some time in the future.

I totally overlooked that it would still be obvious that you are opting into ANYPREVOUT even with ECDSA. That makes it pointless in that sense.

It is still interesting in an academic sense to think about what should be the default for everyone to do in future that gives you big anonymity sets and lots of power and flexibility. But it doesn’t seem practical in the short term. The way Taproot activation is going the short term seems kind of long.

Thoughts on soft fork activation (AJ Towns)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-July/018043.html

This was building off Matt Corallo’s idea but with slight differences. Mandatory activation would be disabled in Bitcoin Core unless you manually did something to enable it. One of Matt Corallo’s concerns was that the Core developers could end up making a decision for the users when maybe the users need to actively opt in to it. The counter view is that the Bitcoin Core developers are meant to reflect the view of the users. If the users don’t like it they can just not run that code and not upgrade.

There are a billion points under this topic that you could probably talk about forever. Obviously talking about things forever is how we are going at the moment. We don’t want to have Bitcoin be run by a handful of developers that just dictate what is going on. Then we have reinvented the central bank control board or whatever. One of the problems if you do that is that then everyone who is trying to dictate where Bitcoin goes in future starts putting pressure on those people. That gets pretty uncomfortable if you are one of those people and you don’t want that sort of political pressure. The ideal would be that developers think about the code and try to understand the technical trade-offs and what is going to happen if people do something. Somehow giving that as an option to the wider Bitcoin marketplace, community, industry, however you want to describe it. The 1MB soft fork where the block size got limited, Satoshi quietly committed some code to activate it, released the code seven days later and then made the code not have any activation parameters another seven days after that. That was when Bitcoin was around 0.0002 cents per Bitcoin. Maybe it is fine with that sort of market cap. But that doesn’t seem like the way you’d want to go today. Since then it has transitioned off. There has been a flag day activation or two that had a few months notice. Then there has been the version number voting which has taken a month or a year for the two of those that happened. Then we switched onto BIP 9 which at least in theory lets us do multiple activations at once and have activations that don’t end up succeeding which is nice. Then SegWit went kind of crazy and so we want to have something a little bit more advanced than that too. SegWit had a whole lot of pressure for the people who were deeply involved at the time. That is something we would not like to repeat. Conversely we have taken a lot more time with Taproot than any of the activations have in the past too. It might be a case of the pendulum swinging a bit too far the other way. There is a bunch of different approaches on how to deal with that. If you read Harding’s post it is mostly couched in terms of BIP 8 which I am calling the simplest possible approach. That is a pretty good place to start to think about these things. BIP 8 is about saying “We’ll accept miners signaling for a year or however long and then at the end of the year assuming everyone is running a client that has lock in on timeout, however that bit ends up getting set, we’ll stop accepting miners not signaling. The only valid chain at that point will have whatever we’re activating activated.” Matt’s modern soft fork activation is two of those steps combined. The first one has that bit set to false and then there is a little bit of a gap. Then there is a much longer one where it is set to TRUE where it will activate at the end. There are some differences in how they get signaled but those are details that don’t ultimately matter that much. The decreasing threshold one is basically the same thing as Matt’s again except that instead of having 95 percent of blocks in a retarget period have to signal, that gradually decreases to 50 percent until the end of the time period. If you manage to convince 65 percent of miners to signal that gives you an incremental speed up in how fast it activates. At least that way there is some kind of game theory incentive for people to signal even if it is clear that it is not going to get to 95 percent.
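
To illustrate the decreasing-threshold variant mentioned at the end there, here is a minimal sketch; the linear decay schedule and the period count are assumptions for illustration, not any concrete proposal’s parameters.

```python
# Illustrative sketch of a "decreasing threshold" activation: the share of
# blocks in a retarget period that must signal starts at 95% and declines
# toward 50% by the end of the deployment window. Linear decay and the
# number of periods are assumptions made for this example.

START_THRESHOLD = 0.95
END_THRESHOLD = 0.50
TOTAL_PERIODS = 26          # e.g. roughly a year of 2016-block periods

def required_threshold(period_index: int) -> float:
    frac = min(period_index / (TOTAL_PERIODS - 1), 1.0)
    return START_THRESHOLD - frac * (START_THRESHOLD - END_THRESHOLD)

def locks_in(signaling_blocks: int, period_index: int, period_size: int = 2016) -> bool:
    return signaling_blocks / period_size >= required_threshold(period_index)

# Example: 65% signaling is not enough early on but suffices late in the
# window, which gives miners an incentive to signal even below 95%.
assert not locks_in(int(0.65 * 2016), period_index=2)
assert locks_in(int(0.65 * 2016), period_index=20)
```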

There is a bit of concern and I think Greg Maxwell voiced it on Reddit, that maybe if we are talking about two year activation it is going to be demotivating for the people working on this because it is going to be such a long time period. There is a point to be made for too long also not being good.

I am happy either way. Trying to go super fast and dealing with the problems of that doesn’t really bother me. Taking a really long time is kind of annoying but I don’t find it that problematic either. It has already taken a long time. The benefit of taking a really long time is that we can be pretty confident that if we get pretty clear consensus to do it after 6 or 12 months, which I expect we would have, then spending 2 and a half years or however long getting people to upgrade to new software will make it pretty certain that every single node and every single business is going to be running the new software. At that point there is no chance of having any major “Lock up your funds. There has been a 7 block chain split and we don’t know what is going to happen. You can’t make transactions” moment where the whole Bitcoin economy has to stop and deliberately react. Even if, like with SegWit, it turns out there is no need to stop and react, if we do things fast we can’t be quite 100 percent sure of that in advance. That means that we’ve got to have all the press people trying to make it clear that there is a reason to start paying attention, and forcing everyone using the currency to pay attention is kind of the opposite of what we want. I want this to be stable; Bitcoin being something that people can rely on without having to constantly think about what the next parameter change is going to be is kind of the point. I think 99.9 percent of nodes have upgraded some time in the last four years going by Luke’s stats from the other day. Four years definitely seems completely long enough and two and a half years seems like plenty of time to me too. But it could well be that given even a bit of notice 3 months is plenty of time. I don’t see how there is any way of knowing what timeframe is perfectly safe and what timeframe isn’t without trying it. Obviously if you get it wrong when you try it out that is going to be a pain. But maybe that pain is something you have to accept as a growing pain.

I think that is a good argument that you don’t really want people to think about upgrading and ideally want that to be a natural consequence and everyone be upgraded without feeling pressure.

The ideal thing is for the miners to do the upgrading because they are the ones who are getting paid on a daily basis to keep Bitcoin running. If it is upgrading the pool software from 7 pools or something then that shouldn’t be hard. In theory unless there is someone specifically hurt by Taproot or someone is trying to stop Bitcoin from upgrading because they are short Bitcoin and long altcoins or something all of this discussion shouldn’t really matter. But we don’t know that none of those things are happening.

There is a concern there that you wouldn’t want miners to activate it if the majority of the users haven’t also activated it. There is this theoretical chance that users are not upgraded, miners are upgraded, it activates and then miners could theoretically revert it and steal. There is no way of getting around users having to upgrade I would assume.

If miners upgrade and no users do and then miners revert it is not really a problem because none of the users would’ve used the new features because they haven’t upgraded. If some of the users have upgraded and the miners have activated then there might be a problem because those users won’t be following the most work chain that everyone else is following now. They could get scammed more cheaply on that shorter chain that they are following.

Why would they be following the shorter chain in that case?

They won’t consider it the shorter chain because they don’t see the invalid chain that has got more work applied to it. They will be following the shorter chain because it is activated. This is assuming that the miners just stop following the rules, not that they re-org the chain to some different point at which it hadn’t activated and stop signaling.

That’s a good point that users are not only opting in by running the software but also accepting payments with the new soft fork.

As soon as they upgrade their software to the stuff that checks the activation back in history they will consider it activated on all the blocks more or less. It doesn’t matter if they upgrade after it has activated but that will still catch the miners cheating. They don’t need to have upgraded in advance.

On the question of defaults, people present different ideas here. The code would already be in Bitcoin Core and it would just be set to activate at a certain time versus the other idea which is more like the user has to actively opt in to it.

There are three parts to making Taproot live on Bitcoin. One is merging all the code. At the point where we merge all the code then that lets us do tests on it but unless there is a bug it doesn’t affect any of the live transactions. It doesn’t enforce the rules, it doesn’t do anything on Bitcoin itself. The second step is we add activation parameters. At that point anyone who can compile Bitcoin can almost certainly change two lines in the source and recompile to get to the point where it is not activated or there are some different parameters. There is still the option to say “Screw the Bitcoin Core developers, they are being stupid. We need to do something different.” If everyone gets consensus on that then things will mostly work ok. Once the activation parameters are in then whatever the conditions of activation are have to actually happen, whether that is a timeout or miner signaling or whatever.

It sounds like you are a bit depressed at the state of the discussion going around in circles.

I wouldn’t say depressed, I’d say cynical.

For me it seemed obvious after all the chaos of SegWit there was going to have to be this discussion that was likely to be longwinded. Maybe we should’ve tried to kick off this discussion earlier and maybe there is a lesson for future soft forks to get that activation discussion started earlier.

Ideally if we get something that works for Taproot and cleanly deals with whatever problems there are we can just reuse it in future. I think it would be fair to say that at least some of us have specifically delayed discussing activation stuff because we expected it to be a horrible, unfun discussion and conflict and tedious debate rather than fun coding kind of thing. But it is obviously something that we have to go through. We need to get an answer to the question and I don’t see an obvious way of doing that without discussion. Dictating stuff is the opposite of what I want.

The hope is that after this brainstorming phase people will start coalescing around particular ones. At the moment the number of proposals just keeps increasing. You would hope once we have that brainstorming phase then people will start coalescing around two or three options. Then we’ll be able to narrow it down. I think I am optimistic. I can see why you’re a little cynical. I think it was inevitable it was going to take a long time.

Just take a straw poll on the mailing list and be like “Everyone says aye. Which one? Here’s your three choices, off you go” (Joke) What is needed? Do we need enough people to coalesce around one particular method and then everyone says “This is the one we are going with”? What do you guys think is needed?

I think that is a great question.

I think that the work that David Harding and Aaron van Wirdum are doing is really valuable. We need to have some structure and as I said I think this is the brainstorming phase. There is going to be loads of different proposals. Some of them are going to be very similar and people won’t really care about eventually ditching some of the proposals that are very similar to other ones. The only thing I am worried about is if people have really conflicting views on what happened with SegWit. “This proposal should definitely happen because otherwise SegWit wouldn’t have happened.” Or disagreements on what the history of SegWit activation was. I think that is the only thing I might be worried about. I think everyone wants Taproot so I don’t think people are going to want to continue with these crazy discussions for a really long time delaying Taproot.

If you want to get more time wasted then there is a IRC channel ##taproot-activation.

As a relative newcomer to the space, I thought that the whole Taproot process was handled magnificently. The open source design and discussion about it was so much better than any other software project in general that I have seen. That was really good and that seems to have built a lot of good support and goodwill amongst everyone. Although people may enjoy discussing all the different ways of activating it it feels like it will get activated with any one of those processes.

One thing I might add is that we have still got a few technical things to merge to get Taproot in. There are the updates to libsecp to get Schnorr. One of the things that Greg mentioned on the ##taproot-activation channel is that the libsecp stuff would really like more devs doing review of code even if you are not a high level crypto dev. Making sure the code makes sense, making sure the comments make sense, being able to understand C/C++ code and making sure that there aren’t obvious mistakes, not including the complicated crypto stuff that has probably already had lots of thought put into it. Making sure APIs make sense and are usable. Adding review there is always good. The Taproot merge and the wtxid relay stuff, both of those are pretty deep Bitcoin things but are worth a look if you want to look into Bitcoin and have a reasonable grasp of C++. Hopefully we are getting a bit closer to a signet merge which is hopefully a more reliable test network than testnet. There should be a post to the mailing list about some updates for that in the next few weeks. I am hoping that we can get Taproot on signet so that we can start playing around doing things like Lightning clients running against signet to try out new features and doing development and interaction on that as well sometime soon.

Is the first blocker getting that Schnorr PR merged into libsecp? That code is obviously replicated in the Bitcoin Core Taproot PR. But is that the first blocker? Get the libsecp PR merged and then the rest of the Taproot Core PR.

I’m not sure I would call it a blocker so much as the next step.

The Taproot PR used to be quite a bit bigger than it is now. It also included an interim libsecp update that had non Schnorr related changes. We yanked that out, got that merged in. There were a few other refactoring type changes that were also part of the Taproot PR that I think now have all been merged. Now in the Bitcoin Core repo the Taproot PR is basically just a merge of the Schnorr PR into libsecp and then Taproot stuff. We are not going to pull that code into the Core repo obviously until it is merged into the upstream libsecp repo. That is where the review needs to go.

Thank you everyone for joining. We’ll do a similar thing in about a month’s time. We will make up a list and feel free to add in things to chat about. I’ll put the Meetup page up on normal channels.


Socratic Seminar

Date: August 25, 2020

Transcript By: Michael Folkson

Category: Meetup

Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Last month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar/

Google Doc of the resources discussed: https://docs.google.com/document/d/1rJxVznWaFHKe88s5GyrxOW-RFGTeD_GKdFzHNvhrq-c/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Statechains (Off-chain UTXO transfer via chains of signatures)

Slides: https://docs.google.com/presentation/d/1W4uKMJgYwb5Oxjo1HZu9sjl_DY9aJdXbIYOknrJ_oLg/

Ruben Somsen presentation on Bitcoin Magazine Technical Tuesday: https://www.youtube.com/watch?v=CKx6eULIC3A

Other resources on Statechains: https://twitter.com/SomsenRuben/status/1145738783192600576

This is a presentation on Statechains. If you have seen my previous presentations there is going to be some overlap. Hopefully this is a better explanation than what I did before. I try to make it clearer every time. There are two new things that you probably haven’t heard that are interesting improvements. That will be interesting for people who do know a lot about statechains. The general gist of it is it’s an offchain UTXO transfer via a chain of signatures.

My general motivation, the goal is to improve Bitcoin while preserving decentralization like most of my work. We will be covering an explanation of statechains, an easy to grasp one, the limitations and security of it and some recent developments that are interesting.

The 3 simple rules of a statechain. Basically the server signs blindly on behalf of the user. You have this server that just takes requests and then signs. It doesn’t really know what it signs. The user can transfer these signing rights. If I am the user I request a signature and then later I say “Now somebody else can sign instead of me”, some other key. All of these signatures are published in a chain, the statechain. This is all that happens server side. There is not a lot of complexity there. In fact these three simple rules are the full server implementation. For the next couple of slides we are going to assume this works and the server doesn’t misbehave which obviously is a problem but we will get back to that.

The example would be you have Alice and Alice currently is the owner of the statechain. She has the signing rights. She is allowed to tell the statechain what to sign. She says “Sign this message X and then transfer the signing rights over to Bob.” Bob becomes the owner of the signing rights on the statechain and then Bob does the same thing. Bob says “Sign this message Y and now transfer these signing rights to Carol.” This is really all the server does. Somewhat surprisingly this is enough to enable offchain UTXO transfer.

The general implication is you can gain control over a key and learn everything it ever signed through this method because all the signatures are public. Everything the statechain ever signed can be verified. They are public but blinded so you do need the blinding key to unblind them.

The offchain Bitcoin transfer is essentially if you control the key, that statechain key, if you have the signing rights and that key signed some kind of offchain Bitcoin transaction that transfers over some coins to you and there has never been any conflicting signature from that key that signs the coins elsewhere then effectively you have achieved an offchain Bitcoin transfer.

How does this look? First what you do is you start with Alice, who has the signing rights over the statechain key. Alice first creates an offchain transaction where she is guaranteed to get the Bitcoins back if she ever locks them up with the statechain. After that she has this offchain transaction that you see here, with an input and an output. The output is S, or A* after some kind of timelock. For a week or so the statechain is in control and after that Alice automatically gets control. After she has this guarantee she is ready to say “Now I will lock up my coins with the statechain because I have this offchain transaction where I get them back.” From this point Alice can transfer the coins to Bob by creating yet another offchain transaction where now Bob gets the coins and simultaneously the signing rights are transferred from Alice to Bob. We have one problem here which I need to get into. Now there are two offchain transactions and they are conflicting unless you have some kind of method that allows that second transaction to occur in spite of that first transaction existing.

There are two ways of preventing this. The simplest way of doing it is having a decreasing timelock. The coins are locked up with some kind of timelock. The top transaction might become valid in 30 days but then the transaction below it becomes valid in 29 days. That guarantees that the latest transaction is the one that can be sent to the blockchain first.
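
A minimal sketch of the decreasing-timelock bookkeeping just described, which also shows its main downside: each transfer uses up part of a fixed timelock budget. The numbers are illustrative.

```python
# Minimal sketch of the decreasing-timelock approach: each transfer pre-signs
# an offchain exit transaction with a shorter timelock than the previous
# owner's, so the most recent owner can always confirm first. Numbers are
# illustrative assumptions.

INITIAL_LOCK_DAYS = 30
LOCK_DECREMENT = 1

def next_offchain_timelock(previous_lock_days: int) -> int:
    new_lock = previous_lock_days - LOCK_DECREMENT
    if new_lock <= 0:
        raise ValueError("No timelock budget left; the UTXO has to go onchain "
                         "and be re-locked before further transfers.")
    return new_lock

# Alice's exit tx is valid in 30 days, Bob's in 29, Carol's in 28, ...
locks = [INITIAL_LOCK_DAYS]
for _ in range(2):
    locks.append(next_offchain_timelock(locks[-1]))
print(locks)  # [30, 29, 28]
```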

This method has some downsides. The preferred method is using eltoo which requires a soft fork. But eltoo, I would say as a simple summary, is overriding transactions. I am sure people are aware of eltoo but because it is new technology that hasn’t been implemented yet I will have to go over it. Briefly how it works is you have some kind of onchain output and then there is a transaction that spends from it. You have an output that can either be spent by that key S, which in this case signifies the statechain, or after a week Alice gets it. They can create this second transaction because S is still in control until the timelock expires. If within a week you send State 2 you can overwrite State 1. You can do that again with State 3. Even if State 1 or 2 appear on the blockchain you can always send State 3 afterwards. One final important feature is that you don’t have to send all these states to the blockchain in order to get your money. You can skip states and go from State 0 to State 3.
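
A toy model of the override rule being described, ignoring timelocks, fees and signatures entirely; it only captures the ordering behavior.

```python
# Toy model of the eltoo behavior described above: any later state can spend
# the output of the funding transaction or of any earlier state, so whoever
# holds the latest state never has to publish the intermediate ones.

def can_replace(onchain_state: int, candidate_state: int) -> bool:
    """candidate_state may spend onchain_state's output iff it is newer."""
    return candidate_state > onchain_state

# State 1 hits the chain; the holder of State 3 spends it directly,
# skipping State 2. An older state can never override a newer one.
assert can_replace(onchain_state=1, candidate_state=3)
assert not can_replace(onchain_state=3, candidate_state=2)
```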

This eltoo method is how we solve it in statechains, at least the ideal setup. The limitations are that you are transferring entire UTXOs. You are not sending fractions, though you can open up Lightning channels and then you can send a fraction through Lightning on a statechain. There is no scripting which is an upside and downside. It means there is no complexity on the server because they are not verifying anything other than just a signature. What we can do is have all the scripts enforced on the Bitcoin side. You have this offchain transaction and this offchain transaction contains a script. You are just not sending it to the Bitcoin blockchain because you don’t have to. The ideal setup requires Schnorr.

Couldn’t you get the server to enforce some kind of script as well as the signature?

Absolutely. You can do so but it comes at the downside of adding complexity and because the preferable way of doing this is having a server sign blindly and so you can’t have any restrictions on what it is allowed to sign. You can’t have restrictions on something you can’t see. Maybe you can have some kind of zero knowledge proof on the blind signature or something that the server can verify. Maybe you can do something like that.

Couldn’t you just have the constraints out in the open? The server saying “You tell the server the next guy can have this money with these constraints.” When the guy comes with the correct credentials you can check the constraints as well. The server could do a timelock.

That is absolutely true. It is possible. The server already does a couple of things. You can add to that. I don’t think it is necessary and I think it breaks the simplicity of the model. So far I haven’t really seen that being desirable. But it is certainly something you could do. You could have a complex scripting language that the server is enforcing on top of what it is already doing. That is entirely possible. There is one thing that you do need from the server, ideally you need some kind of atomic swap guarantee. The method I came up utilizing scriptless scripts is slightly complex but kind of elegant. There are a couple of things you want the server to do but my design goals are to keep that at a minimum.

If you have blinded signatures you require client side validation. Every user that accepts a statechain payment, they need to go back and verify all the previous signatures. I will have an example later that makes clear how universal the statechain is without requiring any scripting. That doesn’t mean scripting couldn’t be added as an option but the nice thing is you can do a lot of things without the server actually needing to do anything.

The failure modes. How can this possibly fail? It is not perfect, there is a server involved. The server could disappear. It could break or catch fire. It could get compromised. While it is running maybe somebody tries to extract the private key or starts running some malicious software or something. Or the server could be actively malicious from the beginning. Out there as a honeypot waiting to take your coins. The first thing we can do to improve this is we take the server and we turn it into a federation. Pretty straightforward. In terms of implementation complexity now you have the federation that needs to communicate with each other, that is going to be complex. But very simple in terms of understanding how the model works. The federation is some kind of multisig, 7-of-10 or something like that. If the server disappears the nice thing about statechains specifically, this doesn’t apply to federated side chains, is it doesn’t matter. If they disappear you have this offchain transaction that you can send to the blockchain and you get your Bitcoin back. This is really nice and improves the security by quite a bit. The second is if the server gets compromised or the federation gets compromised there is this transitory key. So far my example has been just one key but there is a key that is also with the user. There is a trick that improves this. If the server or the federation is actively malicious sorry for your loss. Your coins are gone. This is not a trustless system. There is still a federation involved. Compare this to federated sidechains. What you have in federated sidechains, the federated sidechain is generally like 7-of-10 multisig or something along those lines. 70 percent needs to sign off on something but that also means 30 percent can refuse to sign something. They can refuse to sign a peg out where you get your coins back. In a federated sidechain model your security model is that if over 30 percent of the federation doesn’t like you and doesn’t want to give you your coins you are not getting your coins. What we’ve seen with Liquid in particular, they had this bug where there was a timeout which was supposed to be some kind of security measure for if the federation disappears. The coins go back to a 2-of-3 at Blockstream. For a while you had this issue where the coins were under control of Blockstream in a 2-of-3 with all keys they controlled. This would be completely unnecessary in the statechain model. In the statechain model not only do you need the majority to sign in order to try to steal your coins but you can increase that majority from 70 percent, 80 percent, 90 percent whatever you want. Worst case scenario, what you are trying to protect against is if one of the federation members disappears, you don’t want to lose your coins. Now what happens is if too many federation members disappear you just go onchain. This is not ideal, you still want some kind of threshold probably. You could even do a 10-of-10 if you only care about security and don’t mind being forced onchain if one of them disappears.

The transitory key T is only known by the users. This statechain federation should not be aware of this key. The onchain key on which the coins are locked is going to be a combination of the statechain key S and the transitory key T. This means custody is shared with the pool of users in a sense. There is this new thing that the Commerceblock guys who are building an implementation of statechains right now came up with.

What I was showing you earlier is a Bitcoin transfer from Alice to Bob. In my example here the key is controlled by S, the statechain. Now we change it where it is S and T. The statechain key and the transitory key that is with the users. Then when a transfer occurs Alice has to give the transitory key to Bob.

The weakness here is similar to what it was when it was just S. If somehow somebody learns the private key of both S and T they could potentially take the coins.

Bob generates and shares a key K. He tells the private key to the statechain. Originally with S and T, which is S+T under Schnorr using MuSig, what we do is we take S and we subtract K and we take T and we add K. Now we still have the combined key of ST but the relative keys are different. Now Bob instead of remembering T, he now remembers T\’ which is T+K. The statechain only remembers S\’ which is S-K. What this means is that when the transitory key T transferred over from Alice to Bob, Alice only knows T, Alice doesn’t know T\’. So if the statechain forgets their knowledge about S and K it means that T becomes harmless. Even if Alice later hacks the statechain Alice can still not take the coins from Bob despite learning S\’.
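
A small sketch of that key update, using integer arithmetic modulo the curve order as a stand-in for the real secp256k1 / MuSig operations (which this is not); the toy values are invented.

```python
# Sketch of the key update just described: the statechain keeps S' = S - K and
# the new owner keeps T' = T + K, so the combined key is unchanged but the
# previous owner's share becomes useless. Plain modular arithmetic stands in
# for real elliptic curve / MuSig operations here.

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def transfer(s_share: int, t_share: int, k: int):
    s_new = (s_share - k) % N
    t_new = (t_share + k) % N
    # The combined key S + T is unchanged, so the onchain output stays valid.
    assert (s_new + t_new) % N == (s_share + t_share) % N
    return s_new, t_new

s, t, k = 11, 22, 7  # toy secrets
s_new, t_new = transfer(s, t, k)
# The previous owner's old share T no longer combines with the server's new
# share S' into the full key, so learning the old T (plus S') is harmless.
assert (s_new + t) % N != (s + t) % N
```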

This is an interesting claim. My original explanation of this was a little vague. What I call this is regulatory non-custodial. I wouldn’t say it is non-custodial in a strict sense because there is still a key and we are still relying on a federation. As I said with the security model there is a way the statechain can fail. But the statechain doesn’t know what it is signing, it only controls one of two keys and it can’t confiscate coins even if it learns the transitory key from a previous user because it changes the key share with every transfer. What this means is that honest behavior equals no custody. If the server is actively malicious they can take your coins. If they are actively doing the right thing a third party or a hacker can’t come to them and take their key and start being malicious. The only way in which malicious behavior can occur is if the server is actively taken over, actively being malicious while it is running. From a practical perspective that is quite secure. From a regulatory perspective I am hopeful that regulators would have a hard time saying that the statechain is a custodian. Whereas with something like a federated sidechain you have this multisig address and they can literally take the coins from you. That is a harder claim to make in court.

Moving over to Lightning, how does that work? You can use Lightning through the statechains. Instead of transferring the ownership of UTXO from Alice to Bob you would transfer it from Alice to Alice and Bob. It becomes a double key. Remember this is MuSig so AB is a single key. From that point on to run the Lightning channel you don’t need the statechain. It becomes irrelevant at that point, at least until you want to transfer the UTXO again. Now you can open the channel and you can be on the Lightning Network. Alice and Bob would have some kind of separate offchain outputs that they change the balance of while using the Lightning Network.

The synergy here is you can open, close and rebalance Lightning channels offchain. The issue with Lightning is that you have this throughput limitation where you can only send as many coins as there is liquidity available on the network. But you can send any amount you want. It is very flexible in that regard, they are divisible amounts. Statechains has the opposite problem. The throughput is infinite. You can open a 100 BTC statechain output and you can transfer it over to somebody else without requiring any channels or routing. But it is not divisible unless you go over and connect to the Lightning Network.

Here is a more complex example. It is very interesting. This is just a Lightning channel, no statechain involved right now. You have a Lightning channel with Alice and Bob. They both have 5 Bitcoin. They change their state and now Alice has 3 Bitcoin and Bob has 7 Bitcoin. Now Alice says “I have these 3 Bitcoin. Can I give these 3 Bitcoin to somebody else without interacting with Bob?” If they did so over the Lightning Network then Alice would have to give those 3 Bitcoin to Bob and Bob would have to give those 3 Bitcoin to Carol. Then the channel balance would be 0 and 10. What I am trying to do here is I’m trying to swap out Alice without interacting with Bob. If you interact with Bob it is simple. You can just re-open a statechain or transfer over the entire UTXO and then give control from Alice to Carol. That requires Bob’s permission. We can do this without Bob’s permission. This is kind of weird. You have a Lightning channel. Somebody you have the Lightning channel open with, this person can change identity, can become somebody else without your permission. Whether this is a good or a bad thing, maybe you want to know who you have your channel open with. I will leave that out in the open.

Alice’s key becomes the statechain key and the transitory key. What is A here becomes ST. We have ST and B and the final output is ST, the statechain and the transitory key owned by Alice. Alice needs some kind of guarantee that if the statechain goes away she gets her coins. There needs to be yet another transaction like this. You can combine those two transactions into a single output by adding some scripting but for simplicity we don’t do this. Now Alice hands over control of the statechain key to Carol. The latest state is then signed over to Carol. On the right hand side this is eltoo so the transaction takes precedence. Carol has the final offchain transaction that she can send to the blockchain to claim these Bitcoin. What is important is she has to go and check every signature that the statechain key ever made to make sure there is no conflicting transaction and the final state is her receiving these coins. But the final result is you can literally swap out of a contract you have with someone and somebody else can take your place. These things layer, these stack. Particularly because the statechain doesn’t need to be aware of anything. If you go back to this state the A key can be yet another statechain key with yet another transitory key. It is a statechain inside of a statechain but Bob does not even have to be aware of that.

I think DLCs are gaining a little bit in popularity. This is an interesting way of doing a DLC. Let’s say you have a DLC style bet. DLC means you have some kind of third party oracle. The third party oracle will hand out a signature based on an outcome and you utilize that signature to resolve a bet. You could for instance bet on the BTC/USD price at some time x, let’s say 1 month from now. Halfway through, Alice and Bob have their positions, and Alice can switch out her position and give it to Carol for instance. This would be without requiring any interaction with Bob. If Bob has to be online you have difficulty. This enables you to have a position in any asset. If you can bet on the BTC/USD price you can have a position that is equivalent to holding US dollars. Instead of having 1 Bitcoin you would have 10,000 dollars and by the end of the month you would receive 10,000 dollars of Bitcoin. Because you are engaged in this bet you have the equivalent of dollars that is going to be paid out in Bitcoin. What you can even do, assuming the person you are paying trusts the DLC bet, the oracle and the statechain, is give somebody a portion of this bet. From the 10,000 dollars you have in this bet you can give somebody 1000 dollars by co-owning the position. You have this whole offchain system where you have these derivatives that you can give people pieces of. It is non-interactive in the sense that it is non-interactive with Bob; it is interactive with the statechain.

Here I have got a bunch of short points I will go through quickly. You can add hardware security modules on the federation side, which makes things strictly more secure. The federation is limited; it would have to tamper with its own hardware. You could also put a hardware security module with your users: you have the transitory key be controlled by a hardware security module. This strictly only improves the security. If the hardware security module breaks you always have this offchain transaction, which would have to be stored outside of the HSM, so you can always get your coins. This solves two problems that traditionally exist with HSM key transfer, where you could potentially transfer a regular private key from one HSM to another HSM. The first is that if only one malicious HSM was in the chain of transfers it could take all the coins. The second is that if the HSM breaks your coins are gone, you have no backup; now you have this offchain backup.

Lightning channel factories: my example is Lightning with a single user but you could have Lightning with 10 users inside a single UTXO. You can swap out these users without interacting with any of the other users, just through interacting with the statechain, by having a statechain inside of a statechain.

Statechain history: you do need to make sure that the statechain does not keep double copies of a UTXO. It might pretend it has a single UTXO twice and then give one history to one person and one history to another person. As long as this UTXO does not get redeemed for a while the statechain could operate and cheat. We can get rid of that by having some kind of sparse Merkle tree where every UTXO is being recorded. This means that you have a proof that a UTXO only exists once inside of a single statechain (see the toy sketch after these points).

There is a watchtower requirement. Because you have these offchain transactions they become valid after a week or whatever you decide. You need to watch the blockchain and see if a prior user tries to claim the UTXO. If they do so you need to respond to it. The nice thing with eltoo is that the fees are entirely on the user trying to cheat. That makes the whole model a lot cleaner. But there is a potential downside: if somebody is willing to send the transaction and pay the fee, you have to respond to that and also pay the fee.

You want to pay fees to the statechain entity, to the federation. First I imagined that you’d open a Lightning channel with the federation and you would pay them through that. But the interesting thing about federations is you don’t really need some kind of onchain fee structure. You can do it out of band and that makes the whole system a lot simpler. In hindsight I think this is something that applies to Liquid too: if you have some kind of blockchain run by a federation, I think in that model you would prefer not to have the fees onchain. You would prefer to have them out of band, paid through Lightning or using Chaumian e-cash. With Chaumian e-cash you buy tokens from the server and you can redeem the tokens for services. I think that will be a pretty good model. This is something that Warren Togami pointed out to me. I am warming up to the idea now and I like it.

You could do RGB over statechains. RGB is a colored coin protocol. The problem is if you have non-fungible tokens you can’t transfer them over Lightning because they are non-fungible. You can’t have a channel from Alice to Bob to Carol because what is inside of the Alice and Bob channel is different from what is inside of the Bob and Carol channel, because they use non-fungible tokens. You can’t do routing, but with statechains you could transfer them and still redeem them onchain.

Atomic swaps: there needs to be some kind of method where the statechain allows you to swap UTXOs on the statechain with other statechain users. You could do this through Bitcoin scripting. That would be acceptable but the problem is if the swap breaks down you are forced onchain. Ideally you would not need Bitcoin script for this but it is serviceable if you do it that way.

Finally there is the Mercury statechain which is the implementation by Commerceblock. They are cool guys, it is an interesting implementation. They tried to do a MVP version of statechains that can work today. They don’t have eltoo, they don’t have Schnorr. You have the expiring timelocks. It is a little bit more iffy in terms of who is paying fees. You don’t have the ability to open Lightning channels but you do have this UTXO transfer. I think it makes a lot of sense in terms of wanting to get the technology out there today; you need to have these trade-offs. It is not the ideal model that I described here but if you are interested in what works today I would definitely check out their work and they have some great write ups. It is all open source.
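
To make that sparse Merkle tree point a little more concrete, here is a toy sketch. This is not from the presentation and not how a production statechain would do it: a real tree would be 256 levels deep rather than 8, and the leaf format is just an assumption. The idea it illustrates is that every UTXO id maps to exactly one leaf position, so one Merkle proof against the published root shows both who the statechain currently records as the owner and that the same UTXO cannot also be recorded somewhere else in the tree.

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

DEPTH = 8                      # toy depth; a real design would use 256
EMPTY_LEAF = sha(b"empty")

class ToyUtxoTree:
    """Toy sparse Merkle tree keyed by UTXO id."""
    def __init__(self):
        self.leaves = [EMPTY_LEAF] * (1 << DEPTH)

    def _slot(self, utxo_id: bytes) -> int:
        # Deterministic position: a UTXO can only ever live in one slot.
        # (With only 8 levels collisions are possible; a 256-level tree
        # keyed by the full hash would not have that problem.)
        return int.from_bytes(sha(utxo_id), "big") % (1 << DEPTH)

    def set_owner(self, utxo_id: bytes, owner: bytes) -> None:
        self.leaves[self._slot(utxo_id)] = sha(utxo_id + owner)

    def root(self) -> bytes:
        level = self.leaves[:]
        while len(level) > 1:
            level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def proof(self, utxo_id: bytes) -> list:
        """Sibling hashes from the UTXO's leaf up to the root."""
        idx, level, path = self._slot(utxo_id), self.leaves[:], []
        while len(level) > 1:
            path.append(level[idx ^ 1])
            level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            idx //= 2
        return path

def verify(root: bytes, utxo_id: bytes, owner: bytes, path: list) -> bool:
    idx = int.from_bytes(sha(utxo_id), "big") % (1 << DEPTH)
    node = sha(utxo_id + owner)
    for sibling in path:
        node = sha(node + sibling) if idx % 2 == 0 else sha(sibling + node)
        idx //= 2
    return node == root

tree = ToyUtxoTree()
tree.set_owner(b"utxo-1", b"alice-key")
assert verify(tree.root(), b"utxo-1", b"alice-key", tree.proof(b"utxo-1"))
```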

In short, offchain UTXO transfer. Including the ability to open Lightning channels. You can even use statechains within channels. Lightning, DLC or anything else and swap out users non-interactively. It is more secure than federated sidechains and it is “regulatory non-custodial”. There is a risk that the federation can take your coins if they are really actively malicious. Thank you for listening. If you are interested in my other work you can check out tiny.cc/atomicswap. There are a bunch of links there. You can email me, you can reach me on Twitter or you can ask questions right now.

With Lightning and statechains what would go on in the case where you are doing a Lightning channel over a statechain but then the statechain breaks down? Does that mean the Lightning channel would then have to go back to chain?

If the statechain stops functioning it doesn’t really matter. You don’t need the statechain to keep the Lightning channel open. The only thing you cannot do if the statechain breaks down is cut through the transactions. Theoretically you could keep the Lightning channel open and you could change the UTXO amounts. At the end of the day if you want to close your Lightning channel you can then close your statechain channel, and a cooperative statechain would save you 1 or 2 transactions. The worst case here is that the statechain disappears. You don’t have to close your channel, but once you want to, it will cost you one or two transactions extra compared to a fully cooperative close with the statechain.

What are the requirements to get statechains, the version you are talking about? Is it ANYPREVOUT and eltoo, and then we could have the statechains that you are talking about?

Schnorr and ANYPREVOUT. ANYPREVOUT enables eltoo. Eltoo is a method for Lightning that utilizes ANYPREVOUT.

With a sidechain like Liquid you could have a Lightning Network on the sidechain and then perhaps do an atomic swap between the Lightning Network on the main network and the Lightning Network on Liquid. But this is a grey area in that it is not really separate or distinct in the same way as a Lightning Network on Liquid would be. What does it look like when you close that Lightning channel and you’ve opened that Lightning channel on a statechain? Is it a process of not only closing the Lightning channel but also getting out of that statechain? There are two steps rather than just closing the channel.

There are two steps. You have this channel open here where Alice and Bob have some coins. Let’s say Alice spends all her coins, she gives all her coins to Bob. What they can do is go back to this state and now Alice and Bob transfer over the coins to Bob. It becomes a regular statechain full UTXO with one owner except there are two keys. If you are cooperative Alice will back out of her key and give Bob full control by doing yet another transfer here from Alice and Bob to Bob.

If you are closing a Lightning channel on a statechain you close the Lightning channel and then get out of the statechain? You wouldn’t be able to get out of the statechain into a normal Lightning channel that is onchain?

Yes you can but it may be less efficient. You have this offchain transaction with which you can always exit out of the statechain. You can send that onchain and once you do so you are out of the statechain and you are onchain with a Lightning channel. The ideal way of doing so is to cooperatively exit out of the statechain: with another signature from the statechain you exit out without it affecting your Lightning channel. Because of the way SIGHASH_ANYPREVOUT works, even if the output you are spending changes, as long as it is the same kind of output all your transactions that are building on top of it remain valid. You can close the statechain channel while keeping the Lightning channel open without any additional work.

If you had a long timelock, is that going to get in the way of you settling an inflight HTLC? It could be very problematic with the relative timelocks between each hop. You’d probably have to take into account how long it takes the statechain to get onchain?

I think that doesn’t matter. This setup where you have this eltoo transaction in the middle and then you have the Lightning channel. This is literally what eltoo looks like. Despite us using a statechain in the middle the actual transaction structure does not change. Any of the issues you are describing are issues with eltoo.

That’s exactly what we were talking about last month, this problem with eltoo. The time it takes to settle to the right state means you have to account for that in the timelocks between each hop. AJ Towns has a proposal to fix that with eltoo. I think the same applies here, it is an issue with eltoo. If you put the statechain on top of it, it exacerbates it.

I don’t think it exacerbates it but if the issue exists in eltoo it exists here. I would have to look into what AJ Towns is suggesting. I would assume that it wouldn’t really affect statechains.

You have got to get the HTLC onchain in order to redeem it with the secret before the other guy gets the refund. You have to have it onchain, “Here’s my secret.” But while I wait for the statechain the absolute timelock in Lightning gets pushed back and back.

With the channel counterparty changing, what would the motivation be to be transitioned into a Lightning channel that is on a statechain rather than just opening a channel normally onchain without being on a statechain? Fees? Do you get access to the Lightning Network with lower fees?

It makes more sense in different scenarios. For instance the Lightning channel factory where you have ten users and one of these users wants to swap out of it. Normally what you would need is all other 9 users to interact with you in order for the 10th person to swap out for somebody else. Now because you can do it non-interactively by communicating with the statechain you can do so without the permission of the other 9 users. It goes from interactive to non-interactive. The second example would be the DLC style bet thing where exiting in and out of a position is interesting. The price of that position changes over time. You could sell it midway at the price that it is. Normally you would be held hostage by whether or not your counterparty inside of the channel wants you to move out. They can prevent you from getting out. Here you don’t need their permission. You can move out and exit the position. It makes it a lot more practical to have these bets and very smoothly move in and out of them, sell these bets to other people without requiring all these people to be online. Those are the benefits. It is quite significant that you are able to do this without requiring the help of your counterparty.

In the case of a counterparty swapping into a Lightning channel the counterparty swapping in needs to trust the current state of the channel? In the case of Bob swapping out for Carol, Bob and Carol need to trust each other?

That’s a good point. Do you even want this? In Lightning Alice and Bob open a channel and what we are steering towards is that Alice and Bob kind of trust each other or at least think this person is not completely malicious. Now Alice can switch over and become Carol. Does Bob trust Carol? Maybe not. That’s certainly an issue. The funny thing is that it is not preventable. This just works. Once we have Schnorr and statechains people can start doing this on Lightning. Bob has no say over it. Bob can’t recognize a statechain key from a regular Alice key. It is going to be a thing. Is it good? Is it bad? I don’t know. I don’t necessarily think it is good. The trust assumptions are that for Carol to move in Carol needs to be willing to have a channel with Bob. Not trust Bob but be willing to have a channel with him. She needs to trust the statechain, and if you have some kind of DLC bet going on you also have to trust the oracle. There are a couple of things but they are all very separated so that is nice.

c-lightning 0.9.0

https://medium.com/blockstream/new-release-c-lightning-0-9-0-e83b2a374183

We rewrote our whole pay plugin. c-lightning is evolving into this core plus all these plugins that do different things. The pay command, which is important, is a plugin. You can do a lot of interesting stuff. The thing that broke the camel’s back is this multipart payment idea. You can split payments into multiple parts to try to get them all to the destination at the same time. As you can imagine there are an infinite number of ways you could do that. We didn’t want to put that in the core. Christian Decker did some research by probing the network and came up with a number of 10,000 sats. If you are trying to send 10,000 sats through most of the network you’ve got an 83 percent chance of it working. It declines pretty rapidly after that. The obvious thing with multipart payments is you try sending it, and if you get a channel saying “I don’t have capacity” you try splitting it in half. We are a bit more aggressive than that. If it starts out really big we try to divide it into 10,000 sat chunks and send those. This works way better in real life. It turns out that the release is a bit aggressive on that. 10,000 sats is roughly a dollar. If you try to send a 400 dollar payment it is pretty aggressive at splitting it. One of the cool things is that it splits it into uneven parts. That is really nice for obfuscating how much we are sending. People tend to ask for round amounts. They ask for 10,000 sats or something. If you see 10,003 sats go by you can pretty easily guess how many hops it has got left to go before it gets to its destination. We would overpay for exactly that reason. We would create a shadow route and add some sats. The person gets free sats at the end. Even so it is still pretty obvious because we don’t want to add too many sats. With splitting you can split onto rough boundaries and there is no real information if you only see one part of the payment. It was a complete rework of our internal pay plugin. Christian Decker was the release manager and he decided, with the agreement of the rest of us, that it should definitely go in the release. There were four release candidates because we found some bugs. It worked in the end, multipart payment worked. Any modern client will issue an invoice that has a bit in it to say “We accept multipart payments.” Multipart payments are pretty live on the network at the moment which is pretty nice. That was the big thing, a big rewrite for that.
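
As a rough illustration of that splitting idea, here is a sketch that breaks an amount into uneven parts around the ~10,000 sat figure mentioned above. It is not the actual c-lightning splitter; the jitter parameter and the exact loop are assumptions purely for illustration.

```python
import random

TARGET_MSAT = 10_000_000   # ~10,000 sats, expressed in millisatoshis

def split_payment(amount_msat: int, jitter: float = 0.25) -> list:
    """Split a payment into uneven parts of roughly TARGET_MSAT each."""
    parts, remaining = [], amount_msat
    while remaining > TARGET_MSAT * (1 + jitter):
        # Deliberately uneven chunk sizes so no single part reveals the total.
        chunk = int(TARGET_MSAT * random.uniform(1 - jitter, 1 + jitter))
        parts.append(chunk)
        remaining -= chunk
    parts.append(remaining)           # whatever is left rides in the last part
    assert sum(parts) == amount_msat
    return parts

# A "400 dollar" payment at 10,000 sats per dollar is roughly 4,000,000 sats:
print(len(split_payment(4_000_000_000)))   # roughly 400 parts
```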

There was also the coin movement stuff. This came out of Lisa (Neigut) looking at doing her tax in the US. You are supposed to declare every payment you make, incoming and outgoing, and theoretically all the fees that you charge for routing things. You should mark the value at the time you received them and stuff like that. Getting that information out of your node is kind of tricky. It is all there, but having one nice place where you can get a ledger style view, this amount moved in, this amount moved out and here’s why, was something that turned out to be pretty painful. She wrote this whole coin movements API. Everywhere in the code that we move coins, whether it is on Lightning or onchain, it gets accounted for. You can say “I paid this much in fees.” She has also got a plugin to go with that that stamps out all the payments. That’s still yet to be released because there are some issues with re-orgs and stuff that she wants to address. I am looking forward to that. Next tax time she will just be able to dump this out and hand it to her accountant with all the answers.

We did a whole internal rework. PSBTs, Partially Signed Bitcoin Transactions, are the new hotness. We previously had them in a couple of APIs. You could get a PSBT out and give a PSBT in. But we reengineered all the guts of c-lightning to use them all over the shop. That continues in the next release. We completely reengineered some things, moved them out to plugins and deprecated them, because it is all PSBTs internally. The old things that gave you transactions and stuff have all been replaced by new APIs using PSBTs. Get to be one of the cool kids. It makes life so much easier to deal with other wallets, hardware wallets, stuff like that. We are pretty much at the point where you can throw a PSBT at something and it will do the right thing. It will sign it, it will combine it with other things and stuff like that. It is particularly powerful for dual funding where you want to merge PSBTs. PSBTs have been great. That is not really visible in that release but that was a huge amount of work to rework everything. It was a pretty solid release, I’m pretty happy with that. We thought it was worth bumping the version number. I think the 0.9.0 release name we gave it was “Rat Poison Squared on Steroids”. It was named by the new contributor who contributed the most. That’s the c-lightning release.

PSBTs are ready to be used on Lightning? All the implementations are either using or thinking about using PSBTs? There are no niggly issues that need to be sorted out with PSBTs with the Lightning use case?

Everything is ready to go. I didn’t find any horrific bugs. We are dog fooding them a bit more, that is useful. There was some recent PSBT churn because of the issue with witness UTXOs and non-witness UTXOs, this issue of people worrying about double spending with hardware wallets. There was some churn in the PSBT spec recently. There is still a bit of movement in the ecosystem but generally it is pretty well designed. I expect as people roll out you generally find that you interoperate, it just works with everything which is pretty nice. For c-lightning we are in pretty good shape with PSBTs.

Announcing the lnprototest Alpha Release

lnprototest blog post: https://medium.com/blockstream/announcing-the-lnprototest-alpha-release-f43f46f2c05

lnprototest on GitHub: https://github.com/rustyrussell/lnprototest

Rusty presenting at Bitcoin Magazine Technical Tuesday on lnprototest: https://www.youtube.com/watch?v=oe1hQ7WaX4c

This started over 12 months ago. The idea was we should write some tests that take a Lightning node and feed it messages and check that it gives the correct responses according to the spec. It should be this test suite that goes with the spec. It seemed like a nice idea. It kind of worked reasonably well but it was really painful to write those tests. You’d do this and then “What will the commitment transaction look like? It is going to send the signatures…” As the spec evolved there were implementation differences which are perfectly legitimate. It means that you couldn’t simply go “It will send exactly this message.” It would send a valid signature but you can’t say exactly what it would look like. What we did find were two bugs with the original implementation. One was that c-lightning had stopped ignoring unknown odd packets, which was a dumb thing that we’d lost. Because implementations never send unknown packets to each other a test suite never hit it. You are supposed to ignore them and that code had somehow got factored out. The other one was the CVE of course. I was testing the opening path and I realized we weren’t doing some checks that we needed to do in c-lightning. I spoke to the other implementations and they were exposed to the same bug in similar ways. It was a spec bug. The spec should have said “You must check this” and it didn’t. Everyone fell in the same hole. That definitely convinced me that we needed something like this but the original one was kind of a proof of concept and pretty crappy. I sat down for a month and rewrote it from scratch. The result is lnprototest. It is a pure Python3 test system and some packages to interface with the spec that currently live in the c-lightning repository. You run lnprototest and it has these scripts and goes “I will send this” and you should send back this. It can keep state and does some quite sophisticated things. It has a whole heap of scaffolding to understand commitment transactions, anchor outputs and a whole heap of other things. Then you write these scripts that say “If I send this it should send this” or “If I send this instead…”. You create this DAG, a graph of possible things that could happen, and it runs through all of them and checks what happens. It has been really useful. It is really good for protocol development too, not just testing existing stuff. When you want to modify the spec you can write that half and run it against your own node. It almost inevitably finds bugs. Lisa (Neigut) has been using it for the dual funding testing. That protocol dev is really important. Both lnd and eclair are looking at integrating their stuff into lnprototest. You have to write a driver for lnprototest and I have no doubt that they will find bugs when they do it. It tests things that are really hard to test in real life. Things that don’t happen like sending unexpected packets at different times. There has been some really good interest in it and it is fantastic to see that taking off. Some good bug reports too. I spent yesterday fixing the README and fixing a few details. The documentation lied about how you’d get it to work. That is fixed now.
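
To give a flavour of what those “if I send this, they should send this” scripts look like, here is a heavily stripped-down sketch. This is not the real lnprototest API: the SendMsg, ExpectMsg and run_script names, and the node interface, are hypothetical stand-ins for the pattern described above.

```python
from dataclasses import dataclass

@dataclass
class SendMsg:
    msg_type: str              # e.g. "open_channel"

@dataclass
class ExpectMsg:
    msg_type: str              # e.g. "accept_channel"

def run_script(node, script) -> None:
    """node is any driver object exposing send(msg_type) and recv() -> msg_type."""
    for event in script:
        if isinstance(event, SendMsg):
            node.send(event.msg_type)
        elif isinstance(event, ExpectMsg):
            got = node.recv()
            assert got == event.msg_type, f"expected {event.msg_type}, got {got}"

# A toy opening-handshake script in the spirit of the description above:
opening_script = [
    SendMsg("init"), ExpectMsg("init"),
    SendMsg("open_channel"), ExpectMsg("accept_channel"),
]
```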

This testing suite allows people to develop a feature… would that help them check compatibility against another implementation for example?

Yes. It is Python, it is pretty easy to hack on. You can add things in pretty easily. You don’t have to worry about handling all the corner cases. You write your scripts and check that your implementation works. For example I used this to develop the anchor outputs stuff. I took the anchor spec, I implemented it in lnprototest first and then I implemented in c-lightning. The c-lightning one took a lot longer. It took me an afternoon in lnprototest. It took me several days in c-lightning. Once we had c-lightning working with the lnprototest side the first time I attached it to a lnd node in the field it just worked. It is definitely useful as a test suite but also for developing. When you want to add something to the protocol. It is a lot easier to hack it into lnprototest. “Here’s a new packet”, send it, see what happens. This is the response you should get. It is way easier than modifying a real implementation to do it. It is a really good way of playing with things.

Can you do one for the Bitcoin spec? (Joke)

With Bitcoin we don’t have a spec. The code is the spec. There are tonnes of unit tests and functional tests on Core. The test framework sets up a stripped down Bitcoin node so you can do testing between your node and this stripped down Python node. In this case the lnprototest is setting up a stripped down Lightning node that is coded up in Python and then you are interacting in a channel between your main c-lightning node or whatever implementation signs up to use lnprototest, with that stripped down Python lnprototest node?

The Python implementation isn’t even really a node. It understands how to construct a commitment transaction. It has got a helper to do that. “I send this and by the way my commitment transaction should now have a HTLC in it. Add a HTLC to my commitment transaction.” They send something and you go “Check that that is a valid signature on the commitment transaction.” It goes “Yes it is.” It has enough pieces to help you so you don’t have to figure things out by hand. It has a lot of stuff like it knows what a valid signature is rather than encoding the signature. What it is doing under the covers is it reaching into the implementation and grabbing the private keys out. It knows the private keys of the other side it is testing. That simplifies a whole pile of stuff. It can say “I know what the 13th commitment secret is going to give, I know what that is. I know what it should be.” It is a much simpler implementation to play with. Then you have these scripts that say “I should open a connection. It should reply with init. I should send init. I send open channel, it should say accept channel. Take those fields and produce a commitment transaction as agreed. Give me what the signature should be on the first one. I will send that across.” They send the reply and I go “Check that that is what I expect? Yes it is.” It has some helpers to construct these things as you go but you end up writing the test to say “Update the state. You should match what they send.” We have enough infrastructure to build commitment transactions and stuff like that. But we don’t have any logic in there to negotiate shutdown for example. There is none of that logic. That would be a script that says “If I offer this they should offer this” and stuff like that. There is a whole heap of scaffolding to help you with the base construction of the protocol. It does all the encrypted communication stuff. You just say “Send this message” and it worries about encrypting it as it needs to be on the wire, authenticating and all that stuff. You end up writing a whole heap of test cases that say “If I send this they should send this.” There are two parts. There is the scaffolding part that has the implementation bits we need to make it useful. Then there are all these tests that say “If I do this they’ll do this.”

Have you had to do much interoperability testing with some of the lesser known Lightning clients like Electrum or Nayuta? Maybe they could use this testing suite as well.

If you are trying to bring up a new implementation, bringing it to par with the others this is invaluable. It is a stepping stone. “I sent this but I was supposed to send this and I didn’t. What do we disagree on?” You might have found a bug in lnprototest. You know that at least one implementation passes lnprototest. Either there is a bug in c-lightning or I am doing something wrong. It is a much more controlled environment. We could test things like blockchain reorganizations to whatever depth. Stuff like that in canned tests is incredibly useful.

When you found that channel funding bug Rusty, you found that just by implementing it in Python rather than by a particular test failing in lnprototest?

I was writing the test and I went “I got that wrong and it still worked. Why did that work?” Then I went “Ohhh. That is bad.” I jumped on internal chat. “There is nowhere I’m missing that we check this that we are supposed to do this?” No it was a real bug. I immediately back channel pinged ACINQ and Lightning Labs, “I suspect you want to check if you are doing this as well.” I was writing the test, realized that I’d screwed up the test and it shouldn’t have worked but it did. The act of testing it was what drove me into this path.

If you are testing new features and you want to use lnprototest to test those new features you’d have to reimplement in Python?

Yes and no, it depends on what you are testing. You can tell lnprototest “Send this raw packet”, it doesn’t have to understand what it is doing. All the lnprototest message stuff is generated from the spec. You patch the spec, you run the generator thing and it generates the packets for you.

The spec is words, a lot of it is. How do you get code out?

The way the messages are implemented in the spec is they are machine readable as well as human readable. We’ve always done that on c-lightning. There is a script in the spec itself that gives you a nice CSV file, comma separated values file, describing all the packets in the spec. That feeds into the c-lightning implementation. In c-lightning we have something that turns that into C code. For lnprototest we turn those into Python packages. It reads those Python packages and generates all the types. If you are just adding a message you can edit the spec, rebuild and you’ll get your new message type. I have no idea what that message type is supposed to do but you could make lnprototest send that message type and expect whatever response. “If I send this you should send this.” Then test it against your node and see if it works. Obviously, with anchor outputs, if you offer anchor outputs and they offer anchor outputs then it changes the commitment structure in very well defined ways. Internally lnprototest knows how to build the commitments. I add a new flag and look through the diff of the spec to see what they changed: if anchor outputs here, if anchor outputs here… It was a couple of hours work maximum to get that working.
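
As a hedged sketch of that pipeline, the snippet below parses a simplified stand-in for the spec’s machine-readable CSV into Python message-type constants. The real extraction tooling emits more columns (field definitions and so on) than this; the type numbers shown are the ones defined in the BOLTs.

```python
import csv, io

# Simplified stand-in for the CSV the spec tooling produces.
SPEC_CSV = """\
msgtype,init,16
msgtype,open_channel,32
msgtype,accept_channel,33
"""

def load_msg_types(csv_text: str) -> dict:
    types = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if row and row[0] == "msgtype":
            _, name, number = row
            types[name] = int(number)
    return types

MSG_TYPES = load_msg_types(SPEC_CSV)
assert MSG_TYPES["open_channel"] == 32
```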

It is not a replacement for the cross implementation testing for new features. It is additional testing and assurance. You still want to test those full implementations. Otherwise if you are reimplementing it in Python you could just give it to Electrum and Electrum will have all the new features first. Because that is written in Python.

I looked at some of their code actually. Again they have got too much stuff. We need this thin amount of stuff to implement things, and a whole heap of stuff to parse messages generically etc. Their stuff has too much stuff in it. Like any implementation there is a whole heap of other stuff you have to worry about like onchain handling and timeouts. lnprototest doesn’t care because it is not dealing with real money. It is a whole other ball game. I think every implementation will end up using lnprototest at some stage which will make it much easier. The idea is eventually you’ll patch the spec. I’ve got this cool new thing, here is the spec patch, here is the new lnprototest test. You’ll run those together against your implementation. You’ll be 90 percent of the way there. At least you are compatible with lnprototest so the chance of you being compatible with each other is now greatly increased.

They just need a runner, the new implementations, if they want to use lnprototest?

We’ve got our DummyRunner that passes all the tests. It always gives you what you expect. They need their own runner. Take the c-lightning one, it fires up a bitcoind in regtest mode and fires up a c-lightning node. It is kind of dumb. You have to have c-lightning run in developer mode because it uses some weird hacks. We are slowly pulling those out. Ideally you’d be able to run it in any off the shelf implementation. You have to get your hands dirty a bit and write some Python for your implementation, that is true.

Dynamic Commitments: Upgrading Channels Without Onchain Transactions (Laolu Osuntokun)

https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html

We had this item here on upgrading channels without an onchain transaction. This is a roasbeef post. It is talking about how you could upgrade a channel. One example was around changing the static remote key.

At the moment you set up your channel and that is it. It is how you built it. We chose not to put any upgrading in the spec. Early on it was just “Close the channel and start again.” We’ve had two new channel types so far. One is the static remote key option which now as far as I know every implementation supports. The only time you will not get a static remote key channel is if you’ve got an old one that you opened beforehand. Anchor outputs is the new hotness that is coming through. You have now got three kinds of channels. What type of channel you get depends on what you both supported at the time you opened it. What would be cool is to be able to upgrade these things on the fly. You can always close and open again. We have this idea of splice that is not currently in the spec but it is on Lisa’s plate after she finishes dual funding. It is very similar. We can negotiate to spend the commitment transaction in some way that changes the channel. You might want to splice funds in. You might want to splice funds out. Have a new commitment that spends the commitment transaction and opens a new channel atomically. That is still an onchain transaction. What if you wanted to change it on the fly? For the first 100 commitments of this channel it was a vanilla one but after that point we both agreed that it would be option static remote key. We would use the modern style. That is perfectly possible. It is not as good from a code maintenance point of view. You still have to be able to handle those old channels because they could still drop an ancient commitment transaction on you. They drop commitment transaction 99, you need to be able to penalize that. You still need some code there to handle the old ones. But ideally at some point in the future if we have this dynamic upgrade we can insist everyone upgrade and then six months later we go “If anyone hasn’t upgraded their channels when you upgrade to the next version of c-lightning it will unilaterally close those old ones.” Then we can remove the code that does all that stuff. This is a nice simplification. This proposal here is the set of the messages that you would send to negotiate with your peer that you are both ready to upgrade this channel on the fly. It went through a couple of revisions based on feedback from the list. The consensus in the end was that we would block the channel: once you’ve started this process we are going to drain out all the updates. That is the same thing we do on shutdown already. When you shut down a channel any outstanding HTLCs have to be settled before it finally gets closed by mutual close. We would use the same kind of negotiation. If I want to upgrade the channel the other side would go “Great. We will upgrade the channel as soon as all the HTLCs are gone.” In the normal case this would be immediately but it could take a while. You only really have to worry about the case of upgrading empty channels. From that point onwards you’d be using new style not old style. This is definitely something people want. Static remote key is good because it is way nicer for backups. Because we used to rotate all the keys prior to static remote key, if you somehow lost your state you could forget how to spend your own output without your peer’s help. Static remote key changed that. You don’t need the peer. If a commitment transaction from the future appears on the blockchain you are kind of screwed because it means you have lost track of things.
At least now with static remote key you would be able to get your own money back without having to ask anyone. “I don’t know what the tweaking factor was for that commitment. Could you tell me?” Anchor outputs does even more. It makes it possible to lowball your fees and use the anchor outputs and child-pays-for-parent to push the commitment transaction into a block. That gets around the problem that we have at the moment which is that you have to put enough fees in your commitment transaction to pay for it later when you are going to use it. You have no idea when that is so it is an impossible problem. Anchor outputs provide a way, not a perfect way, to top up the fees afterwards. This means you can go lower on your fees which I think is good for everyone. It is worth bearing in mind that you only care about what those fees are like if you get a unilateral close. If it is a mutual close you negotiate fees at that point. But should somebody need to go onchain it is nice if they are paying less fees than they are at the moment. Everyone overbids on fees at the moment because you don’t know when a fee spike will happen. Knowing my luck it will be the moment when you want to close the channel. We go for a multiple of the current fee rate.
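
For intuition on that child-pays-for-parent top-up, here is a rough sketch of the arithmetic. The sizes and feerates are made-up example numbers, and real fee estimation has more moving parts than this.

```python
def cpfp_child_fee(parent_vsize: int, parent_fee: int,
                   child_vsize: int, target_feerate: float) -> int:
    """Fee (sats) the anchor-spending child must add so that the
    parent + child package reaches target_feerate (sats/vbyte)."""
    package_fee_needed = target_feerate * (parent_vsize + child_vsize)
    return max(0, round(package_fee_needed - parent_fee))

# Commitment tx of ~700 vbytes signed at a lowball 1 sat/vbyte, anchor spend
# of ~150 vbytes, and we now want the package mined at 20 sat/vbyte:
print(cpfp_child_fee(700, 700, 150, 20))   # -> 16300 sats
```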

On the anchor outputs, Lisa was talking about this fee concern because it makes it more costly. What is the thinking on that? Is it worth it?

There are two things that make it more costly. One is the scripts involved on a couple of the outputs are slightly bigger so it is a slightly bigger transaction. Each HTLC output is a little bit bigger. But the other thing is that the funder is also paying for these two 330 satoshi outputs, one for each side to use to push the transaction if necessary. It is 660 sats more expensive to start with. The flip side of that is you can lowball the fees. It is almost certainly worth it. We have implemented everything on c-lightning but it is currently behind a configuration option. It is our experimental config: if it breaks you get to keep both pieces. We haven’t turned that off for this release because we haven’t implemented child-pays-for-parent. So anchor outputs is all loss. It is just more expensive and it doesn’t help you in any way. As soon as we implement child-pays-for-parent it would be great to get it in there because you would probably come out ahead most of the time. Even though you are spending extra sats on the anchor outputs you won’t have to overbid on fees. This proposal here would let us then go “We see a path here to deprecating the non-anchor-outputs channels”. In the future everything would be anchor outputs. We do a release which has the upgradeability and then 12 months later we would start giving warnings. Then anchor outputs would become compulsory. If you happen to have any clients that haven’t upgraded and you haven’t spoken to them in months we would just close their channels. We could remove a whole heap of if statements in code and our test matrix. Having to test against the original channels and static remote key channels and anchor output channels multiplies your test matrix out. You can simplify things if you start pruning some old stuff out.

It might be a good way to clear out the zombie channels. I hear there is a lot of them out there.

It turns out there are. We recently had a change to the spec. You have to update your channels every two weeks. After two weeks you could be forgotten if you haven’t updated. The spec said either end has to update and we just changed it so that both ends have to update. The reason for that is that you get some node that has fallen off the network but its peer keeps refreshing the channel every two weeks, keeping it alive when we might as well forget it, it is game over. That spec change has gone through and the new release of c-lightning 0.9.1 will implement that. It is in the tree now. That will help clear out some stuff as well.

This upgrading channels, this works for anchor outputs because you are changing the commitment transaction. You’ve got an open channel, the funding transaction is onchain and you can update the commitment transactions. For other things, there are so many potential channel update proposals swirling around. There are generalized payment channels, using Schnorr MuSig, that would be a funding thing so you wouldn’t be able to use this for that, there is eltoo, PTLCs. I think most of those you wouldn’t be able to use this for because they either use the funding transaction or it is a completely different configuration.

If your funding output is still a 2-of-2 you are good. But if you have to change the funding transaction then you are going to be going onchain anyway. The way I think we will end up doing that in future is with a splice. The splice proposal says “I propose a splice.” We both get to go to town on proposing what we want to change about that commitment transaction. That is useful. You might have a preference. You might have a low watermark and a high watermark. If you get more than a certain amount in fees in the channel “Sometime this week I’d like to move some off to cold storage. I’d like to splice them out.” Then you have a high watermark where you are like “This is getting ridiculous. I really have to do this.” If I am at the low watermark and you go “I would like to splice” then I will jump on that train. “While we are there I would also like to splice.” Throw this input and this output in and whatever. Opportunistically do these things. The splice negotiation will also then be an opportunity to do an upgrade. In fact I think that we would make the splice an implicit upgrade almost. Why wouldn’t you at that point? If we are going to splice, if we are going to spend a transaction let’s also do an upgrade. This is on the assumption that every improvement is considered an upgrade. If we ever ended up with two really different species of channels that were useful for different things then you might end up with two completely different ones. That is the kind of complexity I am hoping to avoid. Splicing will be the upgrade mechanism for anything that has to change the funding transaction. The cool thing about splicing is you can forget all your previous revocation things after. Once the splicing transaction is buried deeply enough you can forget the old state because it can never happen now. Those old commitment transactions are completely dead. That is nice. That lets you transfer to eltoo where you have only got a single commitment transaction you have to remember in a very clean way anyway. Even if you could do it with 2-of-2 you would want to splice in I think for eltoo.

Let’s say hypothetically that Taproot comes in, then we get ANYPREVOUT. All the channels that currently exist, they would be upgraded to eltoo channels using this splice method?

Yes you would have to use splice to upgrade those. We will get Taproot first. Would you want to upgrade? Maybe because now you start to look like single sig. It is all single sig, it is all cool. You probably want to do it just for that. On the other hand if you are a public channel anyway there is little benefit to doing that onchain transaction just for that. Maybe you don’t bother? If you are going to splice anyway then let’s upgrade, let’s save ourselves some bytes. You probably wouldn’t do it just in order to upgrade. If we get ANYPREVOUT and you’ve got eltoo then there’s a convincing reason to upgrade. No more toxic waste for your old states.

People can just opportunistically wait until it is down to 1 sat per byte. They could wait for a cheap time and do it then.

Splicing is kind of cool because a channel doesn’t stop while you are splicing. After that, every commitment transaction is made against both the spliced one and the old one, until the spliced one is buried to a sufficient depth. Your channel doesn’t stop. You can lowball your splice and just wait for it. It is annoying if you change your mind later and now you really want to splice it in. You’d have to child-pays-for-parent. We are probably not going to do a multilayer splice and let you have this pile of potential changes to the channel piling up. You can keep using the channel while the splicing is happening. You can absolutely lowball on your fees.

What is likely to happen post Taproot is that unless you do this splicing you will keep your old channels open, and any new channels you open would use the latest MuSig, Taproot stuff?

That is happening with existing upgrades. Modern channels are all static remote key, old ones aren’t. We would end up with the same thing. We have to change the gossip protocol slightly because we nail the fact that there is a 2-of-2 in the gossip protocol. I want to rework some of the gossip protocol anyway. There will be a gossip v2 to match those. At some point in the far, far future we will deprecate old channels and splice them out or die. That would be a long way away. We will have two types of gossip messages, an old one and a new one for a long time.

Does the gossiping get complicated? I have got this channel on this version, this channel on this version and this channel on this version.

We have feature bits in the gossip so it is pretty easy for you to tell that. Most of this stuff, if Alice and Bob have a channel and we are using carrier pigeons or psychic waves or whatever to transfer funds it doesn’t matter to you. That’s between Alice and Bob. It is not externally visible. Some changes to channels are externally visible. If you are using PTLCs instead of HTLCs that is something you need to know about. But for much of the channel topology they are completely local. The gossiping doesn’t really need to know. Where the gossiping needs to know is we currently say “Here are the two keys” and you go and check that the transaction in the Bitcoin blockchain actually does pay to those keys. If you have a different style of funding transaction that will need to change. In the case of Schnorr there is only one key. “This is the key that it pays to” and yes I can tell that. You don’t need to tell it that, you just need to use that key to sign your message and it can verify it. You can literally pull the key out of the output which is really nice. The gossip messages get smaller which is a big win. Plus 32 byte pubkeys. We shave another byte off there. It is all nice round numbers. It is wonderful. I am hugely looking forward to it just to get rid of 33 byte pubkeys to be honest.

When does that discussion seriously kick off? Does Taproot have to be activated? I know you have got a thousand things that you could be working on.

We marked it out as out of bounds back in the Adelaide summit at the end of last year. This was a conscious decision. There is more than enough stuff on our front burner without going into Taproot. People like AJ Towns and ZmnSCPxj can think about the further possibilities. I am pushing off as far as possible. When it lands on my plate we will jump on it.

AJ and ZmnSCPxj are the brainstorming vision guys.

They will tell me what the answer is. That’s my plan of action. Ignore it until I am forced to.

BIP 8 - replacing FAILING with MUST_SIGNAL

https://github.com/bitcoin/bips/pull/950

I was going to avoid activation for two months but then Luke wrote on IRC “What is the latest update?” and I started to dig into it again. Luke’s perspective, and I think this is right, is that most people are leaning towards BIP 8 rather than Modern Soft Fork Activation (potentially 1 year). That is the majority view. Obviously this is not scientific in any way. This is gut feel from observing IRC conversations. Luke highlighted a couple of open PRs on BIP 8. One is this one that AJ opened, and there is another one that Jeremy opened. At some point, depending on how big a priority activation is, and there is still a lot of work to do in terms of review on Schnorr and Taproot, I think these PRs need to get looked at and reviewed.

That particular PR is waiting on updates from Luke as to what he thinks about it I guess. I sent around a private survey to a bunch of people on what their thoughts on Taproot activation timelines are. I am still waiting on a couple of responses back from that before making some of it public. I am hoping that will help with what the actual parameters should be. As to 1 year or 2 years or an initial delay of a couple of months or 6 months… What the exact parameters are, they are just numbers in the code that can be changed pretty easily, whereas the actual structure of the activation protocol, which is what these PRs are about, is a bit more complicated to decide. This particular change was mostly about getting back to the point where something gets locked in onchain. The way BIP 9 works is you’ve always got a retarget period, a couple of weeks, before there is any actual impact on what the rules onchain are. Whereas with the current BIP 8 in the BIPs repo that can happen from one block to the next; the rules that the next block has to validate according to change instantly. This is a little bit rough. That’s what that PR is about.

Meetup

Name: Socratic Seminar

Topic: Agenda in Google Doc below

Location: Bitcoin Sydney (online)

Video: No video posted online

Last month’s Sydney Socratic: https://diyhpl.us/wiki/transcripts/sydney-bitcoin-meetup/2020-07-21-socratic-seminar/

Google Doc of the resources discussed: https://docs.google.com/document/d/1rJxVznWaFHKe88s5GyrxOW-RFGTeD_GKdFzHNvhrq-c/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Statechains (Off-chain UTXO transfer via chains of signatures)

Slides: https://docs.google.com/presentation/d/1W4uKMJgYwb5Oxjo1HZu9sjl_DY9aJdXbIYOknrJ_oLg/

Ruben Somsen presentation on Bitcoin Magazine Technical Tuesday: https://www.youtube.com/watch?v=CKx6eULIC3A

Other resources on Statechains: https://twitter.com/SomsenRuben/status/1145738783192600576

This is a presentation on Statechains. If you have seen my previous presentations there is going to be some overlap. Hopefully this is a better explanation than what I did before. I try to make it clearer every time. There are two new things that you probably haven’t heard of that are interesting improvements. That will be interesting for people who do know a lot about statechains. The general gist of it is it’s an offchain UTXO transfer via a chain of signatures.

My general motivation, the goal is to improve Bitcoin while preserving decentralization like most of my work. We will be covering an explanation of statechains, an easy to grasp one, the limitations and security of it and some recent developments that are interesting.

The 3 simple rules of a statechain. Basically the server signs blindly on behalf of the user. You have this server that just takes requests and then signs. It doesn’t really know what it signs. The user can transfer these signing rights. If I am the user I request a signature and then later I say “Now somebody else can sign instead of me”, some other key. All of these signatures are published in a chain, the statechain. This is all that happens server side. There is not a lot of complexity there. In fact these three simple rules are the full server implementation. For the next couple of slides we are going to assume this works and the server doesn’t misbehave which obviously is a problem but we will get back to that.

The example would be you have Alice and Alice currently is the owner of the statechain. She has the signing rights. She is allowed to tell the statechain what to sign. She says “Sign this message X and then transfer the signing rights over to Bob.” Bob becomes the owner of the signing rights on the statechain and then Bob does the same thing. Bob says “Sign this message Y and now transfer these signing rights to Carol.” This is really all the server does. Somewhat surprisingly this is enough to enable offchain UTXO transfer.
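
As a minimal sketch of just those three rules and the Alice, Bob, Carol flow above (illustrative only: the blind signing scheme, request authentication and the federation are all elided, and blind_sign is a placeholder, not a real cryptographic primitive):

```python
import hashlib
from dataclasses import dataclass, field
from typing import List

def blind_sign(server_key: bytes, blinded_msg: bytes) -> bytes:
    """Placeholder for a blind (Schnorr) signature; NOT a real scheme."""
    return hashlib.sha256(server_key + blinded_msg).digest()

@dataclass
class Entry:
    owner: bytes        # key that held the signing rights at the time
    blinded_msg: bytes  # the server never sees the unblinded transaction
    signature: bytes

@dataclass
class StatechainServer:
    server_key: bytes
    owner: bytes                              # current holder of the signing rights
    chain: List[Entry] = field(default_factory=list)

    # Rule 1: sign blindly, but only at the request of the current owner.
    # Rule 3: every signature is appended to the public statechain.
    def request_signature(self, requester: bytes, blinded_msg: bytes) -> bytes:
        assert requester == self.owner, "not the current owner"  # in reality the request would be signed
        sig = blind_sign(self.server_key, blinded_msg)
        self.chain.append(Entry(requester, blinded_msg, sig))
        return sig

    # Rule 2: the current owner can hand the signing rights to another key.
    def transfer(self, requester: bytes, new_owner: bytes) -> None:
        assert requester == self.owner, "not the current owner"
        self.owner = new_owner

# The Alice -> Bob -> Carol flow from the example above:
server = StatechainServer(server_key=b"S", owner=b"A")
server.request_signature(b"A", b"blinded message X")
server.transfer(b"A", b"B")
server.request_signature(b"B", b"blinded message Y")
server.transfer(b"B", b"C")
assert server.owner == b"C" and len(server.chain) == 2
```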

The general implication is you can gain control over a key and learn everything it ever signed through this method because all the signatures are public. Everything the statechain ever signed can be verified. They are public but blinded so you do need the blinding key to unblind them.

The offchain Bitcoin transfer is essentially this: if you control the key, that statechain key, if you have the signing rights, and that key signed some kind of offchain Bitcoin transaction that transfers over some coins to you, and there has never been any conflicting signature from that key that signs the coins elsewhere, then effectively you have achieved an offchain Bitcoin transfer.

How does this look? First what you do is you start with Alice who controls the statechain key, who has the signing rights. Alice first creates an offchain transaction where she is guaranteed to get the Bitcoin back if she ever locks them up with the statechain. After that she has this offchain transaction that you see here, an input and an output. The output is S, or A* after some kind of timelock. For a week or so the statechain is in control and after a week Alice automatically gets control. After she has this guarantee she is ready to say “Now I will lock up my coins with the statechain because I have this offchain transaction where I get them back.” From this point Alice can transfer the coins to Bob by creating yet another offchain transaction where now Bob gets the coins and simultaneously the signing rights are transferred from Alice to Bob. We have one problem here which I need to get into. Now there are two offchain transactions and they are conflicting, unless you have some kind of method that allows that second transaction to occur in spite of the first transaction existing.

There are two ways of preventing this. The simplest way of doing it is having a decreasing timelock. The coins are locked up with some kind of timelock. The top transaction might become valid in 30 days but then the transaction below it becomes valid in 29 days. That guarantees that the last transaction is the one that will be able to be sent to the blockchain first.
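
A toy illustration of that decreasing timelock, assuming block-height locktimes and made-up parameters (30 days to start, one day less for each transfer):

```python
BLOCKS_PER_DAY = 144
INITIAL_LOCK_DAYS = 30
DECREMENT_DAYS = 1   # each transfer hands out a payout that matures a day earlier

def payout_locktime(funding_height: int, transfer_index: int) -> int:
    """Absolute locktime of the offchain payout given to the n-th owner.

    Each new owner's transaction becomes valid earlier than every previous
    owner's, so the latest owner can always get theirs confirmed first."""
    days = INITIAL_LOCK_DAYS - transfer_index * DECREMENT_DAYS
    assert days > 0, "timelocks exhausted: the UTXO has to settle onchain"
    return funding_height + days * BLOCKS_PER_DAY

# First owner: +4320 blocks, second owner: +4176 blocks, and so on.
print(payout_locktime(700_000, 0), payout_locktime(700_000, 1))
```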

This method has some downsides. The preferred method is using eltoo, which requires a soft fork. Eltoo, as a simple summary, is overriding transactions. I am sure people are aware of eltoo but because it is new technology that hasn’t been implemented yet I will have to go over it. Briefly, how it works is you have some kind of onchain output and then there is a transaction that spends from it. You have an output that can either be spent by that key S, which in this case signifies the statechain, or after a week Alice gets it. They can create this second transaction because S is still in control until the timelock expires. If within a week you send State 2 you can override State 1. You can do that again with State 3. Even if State 1 or 2 appear on the blockchain you can always send State 3 afterwards. One final important feature is that you don’t have to send all these states to the blockchain in order to get your money. You can skip states and go from State 0 to State 3.
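
The override rule itself is simple enough to sketch. The one-week settlement delay below is just the example figure used above, and this ignores the details of how the update and settlement transactions are actually constructed:

```python
SETTLEMENT_DELAY_BLOCKS = 1008   # roughly one week

def can_override(published_state: int, candidate_state: int,
                 blocks_since_published: int) -> bool:
    """Can candidate_state still replace published_state onchain?"""
    if blocks_since_published >= SETTLEMENT_DELAY_BLOCKS:
        return False                      # the published state has settled
    return candidate_state > published_state  # only newer states may override

assert can_override(1, 3, 10)        # state 3 overrides state 1, skipping state 2
assert not can_override(3, 2, 10)    # an old state can never override a newer one
```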

This eltoo method is how we solve it in statechains, at least in the ideal setup. The limitations are that you are transferring entire UTXOs. You are not sending fractions, though you can open up Lightning channels and then you can send a fraction through Lightning on a statechain. There is no scripting, which is an upside and a downside. It means there is no complexity on the server because it is not verifying anything other than just a signature. What we can do is have all the scripts enforced on the Bitcoin side. You have this offchain transaction and this offchain transaction contains a script. You are just not sending it to the Bitcoin blockchain because you don’t have to. The ideal setup requires Schnorr.

Couldn’t you get the server to enforce some kind of script as well as the signature?

Absolutely. You can do so but it comes at the downside of adding complexity, and because the preferable way of doing this is having the server sign blindly you can’t have any restrictions on what it is allowed to sign. You can’t have restrictions on something you can’t see. Maybe you can have some kind of zero knowledge proof on the blind signature or something that the server can verify. Maybe you can do something like that.

Couldn’t you just have the constraints out in the open? The server saying “You tell the server the next guy can have this money with these constraints.” When the guy comes with the correct credentials you can check the constraints as well. The server could do a timelock.

That is absolutely true. It is possible. The server already does a couple of things. You can add to that. I don’t think it is necessary and I think it breaks the simplicity of the model. So far I haven’t really seen that being desirable. But it is certainly something you could do. You could have a complex scripting language that the server is enforcing on top of what it is already doing. That is entirely possible. There is one thing that you do need from the server, ideally you need some kind of atomic swap guarantee. The method I came up with utilizing scriptless scripts is slightly complex but kind of elegant. There are a couple of things you want the server to do but my design goal is to keep that to a minimum.

If you have blinded signatures you require client side validation. Every user that accepts a statechain payment, they need to go back and verify all the previous signatures. I will have an example later that makes clear how universal the statechain is without requiring any scripting. That doesn’t mean scripting couldn’t be added as an option but the nice thing is you can do a lot of things without the server actually needing to do anything.

The failure modes. How can this possibly fail? It is not perfect, there is a server involved. The server could disappear. It could break or catch fire. It could get compromised. While it is running maybe somebody tries to extract the private key or starts running some malicious software or something. Or the server could be actively malicious from the beginning, out there as a honeypot waiting to take your coins. The first thing we can do to improve this is we take the server and we turn it into a federation. Pretty straightforward. In terms of implementation complexity now you have the federation that needs to communicate with each other, that is going to be complex. But very simple in terms of understanding how the model works. The federation is some kind of multisig, 7-of-10 or something like that. If the server disappears, the nice thing about statechains specifically, and this doesn’t apply to federated sidechains, is that it doesn’t matter. If they disappear you have this offchain transaction that you can send to the blockchain and you get your Bitcoin back. This is really nice and improves the security by quite a bit. The second case is if the server gets compromised or the federation gets compromised; there is this transitory key. So far my example has been just one key but there is a key that is also with the user. There is a trick that improves this. If the server or the federation is actively malicious, sorry for your loss. Your coins are gone. This is not a trustless system. There is still a federation involved. Compare this to federated sidechains. What you have in federated sidechains is generally like a 7-of-10 multisig or something along those lines. 70 percent needs to sign off on something but that also means 30 percent can refuse to sign something. They can refuse to sign a peg out where you get your coins back. In a federated sidechain model your security model is that if over 30 percent of the federation doesn’t like you and doesn’t want to give you your coins you are not getting your coins. What we’ve seen with Liquid in particular, they had this bug where there was a timeout which was supposed to be some kind of security measure for if the federation disappears. The coins go back to a 2-of-3 at Blockstream. For a while you had this issue where the coins were under control of Blockstream in a 2-of-3 with all keys they controlled. This would be completely unnecessary in the statechain model. In the statechain model not only do you need the majority to sign in order to try to steal your coins but you can increase that majority from 70 percent to 80 percent, 90 percent, whatever you want. Worst case scenario, what you are trying to protect against is if one of the federation members disappears, you don’t want to lose your coins. Now what happens is if too many federation members disappear you just go onchain. This is not ideal, you still want some kind of threshold probably. You could even do a 10-of-10 if you only care about security and don’t mind being forced onchain if one of them disappears.
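
Some back-of-the-envelope arithmetic for the k-of-n federation comparison above (the function and numbers are illustrative, not from any implementation): theft needs k colluders, while blocking a federated peg-out only needs enough refusers to make k signatures impossible. With a statechain the blocking case degrades to a forced onchain exit instead of lost coins.

```python
# Rough arithmetic (illustrative only) for a k-of-n federation: how many
# members must collude to sign a theft, and how many must refuse to sign
# before a user can no longer get a federated peg-out.

def federation_thresholds(k: int, n: int):
    steal = k               # k signatures are enough to move the coins
    block = n - k + 1       # once this many refuse, k signatures are impossible
    return steal, block

for k, n in [(7, 10), (9, 10), (10, 10)]:
    steal, block = federation_thresholds(k, n)
    print(f"{k}-of-{n}: theft needs {steal} colluders, "
          f"blocking needs {block} refusers")

# With statechains the blocking case is softer: if the federation stops
# cooperating you fall back to the pre-signed offchain exit transaction.
```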

The transitory key T is only known by the users. This statechain federation should not be aware of this key. The onchain key on which the coins are locked is going to be a combination of the statechain key S and the transitory key T. This means custody is shared with the pool of users in a sense. There is this new thing that the Commerceblock guys who are building an implementation of statechains right now came up with.

What I was showing you earlier is a Bitcoin transfer from Alice to Bob. In my example here the key is controlled by S, the statechain. Now we change it where it is S and T. The statechain key and the transitory key that is with the users. Then when a transfer occurs Alice has to give the transitory key to Bob.

The weakness here is similar to what it was when it was just S. If somehow somebody learns the private key of both S and T they could potentially take the coins.

Bob generates and shares a key K. He tells the private key to the statechain. Originally with S and T, which is S+T under Schnorr using MuSig, what we do is we take S and we subtract K and we take T and we add K. Now we still have the combined key of ST but the relative keys are different. Now Bob, instead of remembering T, remembers T’ which is T+K. The statechain only remembers S’ which is S-K. What this means is that when the transitory key T was transferred over from Alice to Bob, Alice only knows T, Alice doesn’t know T’. So if the statechain forgets their knowledge about S and K it means that T becomes harmless. Even if Alice later hacks the statechain Alice can still not take the coins from Bob despite learning S’.
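
A toy sketch of this key-share rotation using plain modular arithmetic in place of real keys; real MuSig aggregation is not a bare sum (it uses aggregation coefficients), so this only illustrates the bookkeeping: the combined key stays fixed while the shares rotate, making Alice's old share useless.

```python
# Toy sketch of the key-share rotation described above, using plain integers
# instead of real Schnorr/MuSig keys. The combined key S + T stays fixed while
# the shares rotate, so a previous owner's copy of T is useless once the
# statechain has switched to S' = S - K.

ORDER = 2**16 + 1   # stand-in for the curve group order (illustrative only)

def rotate(s_share: int, t_share: int, k: int):
    s_new = (s_share - k) % ORDER   # statechain keeps S' = S - K
    t_new = (t_share + k) % ORDER   # new owner keeps  T' = T + K
    return s_new, t_new

S, T = 12345, 6789                  # hypothetical shares before the transfer
K = 4242                            # tweak generated by Bob, told to the statechain
S2, T2 = rotate(S, T, K)

assert (S + T) % ORDER == (S2 + T2) % ORDER   # combined key unchanged
assert T != T2                                 # Alice's old share no longer matches
print("combined key preserved; the old share T is now harmless")
```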

This is an interesting claim. My original explanation of this was a little vague. What I call this is regulatory non-custodial. I wouldn’t say it is non-custodial in a strict sense because there is still a key and we are still relying on a federation. As I said with the security model there is a way the statechain can fail. But the statechain doesn’t know what it is signing, it only controls one of two keys and it can’t confiscate coins even if it learns the transitory key from a previous user because it changes the key share with every transfer. What this means is that honest behavior equals no custody. If the server is actively malicious they can take your coins. If they are actively doing the right thing a third party or a hacker can’t come to them and take their key and start being malicious. The only way in which malicious behavior can occur is if the server is actively taken over, actively being malicious while it is running. From a practical perspective that is quite secure. From a regulatory perspective I am hopeful that regulators would have a hard time saying that the statechain is a custodian. Whereas with something like a federated sidechain you have this multisig address and they can literally take the coins from you. That is a harder claim to make in court.

Moving over to Lightning, how does that work? You can use Lightning through the statechains. Instead of transferring the ownership of UTXO from Alice to Bob you would transfer it from Alice to Alice and Bob. It becomes a double key. Remember this is MuSig so AB is a single key. From that point on to run the Lightning channel you don’t need the statechain. It becomes irrelevant at that point, at least until you want to transfer the UTXO again. Now you can open the channel and you can be on the Lightning Network. Alice and Bob would have some kind of separate offchain outputs that they change the balance of while using the Lightning Network.

The synergy here is you can open, close and rebalance Lightning channels offchain. The issue with Lightning is that you have this throughput limitation where you can only send as many coins as there is liquidity available on the network. But you can send any amount you want. It is very flexible in that regard, they are divisible amounts. Statechains have the opposite problem. The throughput is infinite. You can open a 100 BTC statechain output and you can transfer it over to somebody else without requiring any channels or routing. But it is not divisible unless you go over and connect to the Lightning Network.

Here is a more complex example. It is very interesting. This is just a Lightning channel, no statechain involved right now. You have a Lightning channel with Alice and Bob. They both have 5 Bitcoin. They change their state and now Alice has 3 Bitcoin and Bob has 7 Bitcoin. Now Alice says “I have these 3 Bitcoin. Can I give these 3 Bitcoin to somebody else without interacting with Bob?” If they did so over the Lightning Network then Alice would have to give those 3 Bitcoin to Bob and Bob would have to give those 3 Bitcoin to Carol. Then the channel balance would be 0 and 10. What I am trying to do here is I’m trying to swap out Alice without interacting with Bob. If you interact with Bob it is simple. You can just re-open a statechain or transfer over the entire UTXO and then give control from Alice to Carol. That requires Bob’s permission. We can do this without Bob’s permission. This is kind of weird. You have a Lightning channel. Somebody you have the Lightning channel open with, this person can change identity, can become somebody else without your permission. Whether this is a good or a bad thing, maybe you want to know who you have your channel open with. I will leave that out in the open.

Alice’s key becomes the statechain key and the transitory key. What is A here becomes ST. We have ST and B and the final output is ST, the statechain and the transitory key owned by Alice. Alice needs some kind of guarantee that if the statechain goes away she gets her coins. There needs to be yet another transaction like this. You can combine those two transactions into a single output by adding some scripting but for simplicity we don’t do this. Now Alice hands over control of the statechain key to Carol. The latest state is then signed over to Carol. On the right hand side this is eltoo so the latest transaction takes precedence. Carol has the final offchain transaction that she can send to the blockchain to claim these Bitcoin. What is important is she has to go and check every signature that the statechain key ever made to make sure there is no conflicting transaction and the final state is her receiving these coins. But the final result is you can literally swap out of a contract you have with someone and somebody else can take your place. These things layer, they stack. Particularly because the statechain doesn’t need to be aware of anything. If you go back to this state the A key can be yet another statechain key with yet another transitory key. It is a statechain inside of a statechain but Bob does not even have to be aware of that.

I think DLCs are gaining a little bit in popularity. This is an interesting way of doing DLCs. Let’s say you have a DLC style bet. DLC means you have some kind of third party oracle. The third party oracle will hand out a signature based on an outcome and you utilize that signature to resolve a bet. You could for instance bet on the BTC/USD price at some time x, let’s say 1 month from now. Half way through, Alice and Bob have their position, Alice can switch out her position and give it to Carol for instance. This would be without requiring any interaction with Bob. If Bob has to be online you have difficulty. This enables you to have a position in any asset. If you can bet on the BTC/USD price you can have a position that is equivalent to holding US dollars. Instead of having 1 Bitcoin you would have 10,000 dollars and by the end of the month you would receive 10,000 dollars of Bitcoin. Because you are engaged in this bet you have the equivalent of dollars that is going to be paid out in Bitcoin. What you can even do, assuming the person you are paying trusts both the DLC bet, the oracle, and the statechain, is give somebody a portion of this bet. From the 10,000 dollars you have in this bet you can give somebody 1000 dollars by co-owning the position. You have this whole offchain system where you have these derivatives that you can give people pieces of. It is non-interactive in the sense that it is non-interactive with Bob, it is interactive with the statechain.
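
A small worked example of that dollar position, with entirely made-up numbers; a real DLC discretizes the outcomes the oracle can sign, but the payout idea is just dollars divided by the settlement price, capped by the collateral in the bet.

```python
# Simple worked example (made-up numbers) of the "dollar position" idea above:
# a DLC that pays out 10,000 USD worth of BTC at settlement, whatever the
# BTC/USD price turns out to be, capped by the collateral in the bet.

def dollar_position_payout(position_usd: float, settlement_price: float,
                           total_collateral_btc: float) -> float:
    payout_btc = position_usd / settlement_price
    return min(payout_btc, total_collateral_btc)

collateral = 2.0          # 2 BTC locked in the bet between Alice and Bob (assumed)
for price in (20_000, 40_000, 80_000):
    btc = dollar_position_payout(10_000, price, collateral)
    print(f"BTC/USD = {price}: the dollar side receives {btc:.4f} BTC (~10,000 USD)")
```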

Here I have got a bunch of short points I will go through quickly. You can add hardware security modules, both on the federation side, which makes things strictly more secure. The federation is limited, it would have to tamper with its own hardware. You could also put a hardware security module with your users. You get this transitory key to be controlled by a hardware security module. This strictly only improves the security. If the hardware security module breaks you always have this offchain transaction that would have to be stored outside of the HSM. You can always get your coins. This solves a problem that traditionally exists with HSM key transfer where you could potentially transfer a regular private key from one HSM to another HSM. But if only one HSM in the chain of transfers is malicious it can take all the coins. The second thing is that if the HSM breaks your coins are gone. You have no backup but now you have this offchain backup. Lightning channel factories, my example is Lightning with a single user but you could have Lightning with 10 users inside a single UTXO. You can swap out these users without interacting with any of the other users just through interacting with the statechain by having a statechain inside of a statechain. Statechain history, you do need to make sure that the statechain does not keep double copies of a UTXO. It might pretend it has a single UTXO twice and then it will give one history to one person and one history to another person. As long as this UTXO does not get redeemed for a while the statechain could operate and cheat. We can get rid of that by having some kind of sparse Merkle tree where every UTXO is being recorded. This means that you have a proof that a UTXO only exists once inside of a single statechain. There is a watchtower requirement. Because you have these offchain transactions they become valid after a week or whatever you decide. You need to watch the blockchain and see if a prior user tries to claim the UTXO. If they do so you need to respond to it. The nice thing with eltoo is that the fees are entirely on the user trying to cheat. That makes that whole model a lot cleaner. But that is a potential downside. If somebody is willing to send the transaction and pay the fee you have to respond to that and also pay the fee. You want to pay fees to the statechain entity, to the federation. First I imagined that you’d open a Lightning channel with the federation and you would pay them through that. But the interesting thing about federations is you don’t really need some kind of onchain fee structure. You can do it out of band and that makes the whole blockchain itself a lot easier. In hindsight I think this is something that Liquid… having some kind of blockchain on a federation, I think in that model you would prefer not to have the fees onchain. You would prefer to have them out of band, paid through Lightning or using Chaumian e-cash. Chaumian e-cash, you buy tokens from the server and you can redeem the tokens for services. I think that will be a pretty good model. This is something that Warren Togami pointed out to me. I am warming up to the idea now and I like it. You could do RGB over statechains. RGB is a colored coin protocol. The problem is if you have non-fungible tokens you can’t transfer them over Lightning because they are non-fungible.
You can’t have a channel from Alice to Bob to Carol because what is inside of the Alice, Bob channel is different from what is inside of the Bob, Carol channel because they use non-fungible tokens. You can’t do routing but with statechains you could transfer them and still redeem them onchain. Atomic swaps, there needs to be some kind of method where the statechain allows you to swap UTXOs on the statechain with other statechain users. You could do this through Bitcoin scripting. That would be acceptable but the problem is if the swap breaks down you are forced onchain. Ideally you would not need Bitcoin script for this but it is serviceable if you do it that way. Finally there is the Mercury statechain which is the implementation by Commerceblock. They are cool guys, it is an interesting implementation. They tried to do an MVP version of statechains that can work today. They don’t have eltoo, they don’t have Schnorr. You have the expiring timelocks. It is a little bit more iffy in terms of who is paying fees. You don’t have the ability to open Lightning channels but you do have this UTXO transfer. I think it makes a lot of sense in terms of wanting to get the technology out there today. You need to make these trade-offs. It is not the ideal model that I described here but if you are interested in what works today I would definitely check out their work and they have some great write ups. It is all open source.

In short, offchain UTXO transfer. Including the ability to open Lightning channels. You can even use statechains within channels. Lightning, DLC or anything else and swap out users non-interactively. It is more secure than federated sidechains and it is “regulatory non-custodial”. There is a risk that the federation can take your coins if they are really actively malicious. Thank you for listening. If you are interested in my other work you can check out tiny.cc/atomicswap. There are a bunch of links there. You can email me, you can reach me on Twitter or you can ask questions right now.

With Lightning and statechains what would go on in the case where you are doing a Lightning channel over a statechain but then the statechain breaks down? Does that mean the Lightning channel would then have to go back to chain?

If the statechain stops functioning it doesn’t really matter. You don’t need the statechain to keep the Lightning channel open. The only thing you cannot do if the statechain breaks down is cut through the transactions. Theoretically you could have the Lightning channel open. You could change the UTXO amounts. At the end of the day if you want to close your Lightning channel you can then close your statechain channel. It would save you 1 or 2 transactions. The worst case here is that the statechain disappears. Then once you want to close your channel, and you don’t have to, it will cost you one or two transactions extra compared to a fully cooperative close with the statechain.

What are the requirements to get statechains, the version you are talking about? Is it ANYPREVOUT and eltoo and then we could have the statechains that you are talking about?

Schnorr and ANYPREVOUT. ANYPREVOUT enables eltoo. Eltoo is a method for Lightning that utilizes ANYPREVOUT.

With a sidechain like Liquid you could have a Lightning Network on the sidechain and then perhaps do an atomic swap between the Lightning Network on the main network and the Lightning Network on Liquid. But this is a grey area in that it is not really separate or distinct in the same way as a Lightning Network on Liquid would be. What does it look like when you close that Lightning channel and you’ve opened that Lightning channel on a statechain? Is it a process of not only closing the Lightning channel but also getting out of that statechain? There are two steps rather than just closing the channel.

There are two steps. What you have is this channel open here where Alice and Bob have some coins. Let’s say Alice spends all her coins, she gives all her coins to Bob. What they can do is go back to this state and now Alice and Bob transfer over the coins to Bob. It becomes a regular statechain full UTXO with one owner except there are two keys. If you are cooperative Alice will back out of her key and give Bob full control by doing yet another transfer here from Alice and Bob to Bob.

If you are closing a Lightning channel on a statechain you close the Lightning channel and then get out of the statechain? You wouldn’t be able to get out of the statechain into a normal Lightning channel that is onchain?

Yes you can but it may be less efficient. You have this offchain transaction with which you can always exit out of the statechain. You can send that onchain and once you do so you are out of the statechain and you are onchain with a Lightning channel. The ideal way of doing so is to cooperatively exit out of the statechain, with another signature from the statechain you exit out while it does not affect your Lightning channel. Because of the way SIGHASH_ANYPREVOUT works even if the output changes as long as it is the same output, all your transactions that are building on top of it remain valid. You can close the statechain channel while keeping the Lightning channel open without any additional work.

If you had a long timelock that is going to get in the way of you settling an inflight HTLC. It could be very problematic, the relative timelocks between each hop. You’d probably have to take into account how long it takes the statechain to get onchain?

I think that doesn’t matter. This setup where you have this eltoo transaction in the middle and then you have the Lightning channel. This is literally what eltoo looks like. Despite us using a statechain in the middle the actual transaction structure does not change. Any of the issues you are describing are issues with eltoo.

That’s exactly what we were talking about last month, this problem with eltoo. The time it takes to settle to the right state means you have to account for that in the timelocks between each hop. AJ Towns has a proposal to fix that with eltoo. I think the same rules apply here, it is an issue with eltoo. If you put the statechain on top of it it exacerbates it.

I don’t think it exacerbates it but if the issue exists in eltoo it exists here. I would have to look into what AJ Towns is suggesting. I would assume that it wouldn’t really affect statechains.

You have got to get the HTLC onchain in order to redeem it with the secret before the other guy gets the refund. You have to have it onchain: “Here’s my secret.” But while I wait for the statechain the absolute timelock in Lightning gets pushed back and back.

With the channel counterparty changing, what would the motivation be to be transitioned into a Lightning channel that is on a statechain rather than just opening a channel normally onchain without being on a statechain? Fees? Do you get access to the Lightning Network with lower fees?

It makes more sense in different scenarios. For instance the Lightning channel factory where you have ten users and one of these users wants to swap out of it. Normally what you would need is all the other 9 users to interact with you in order for the 10th person to swap out for somebody else. Now because you can do it non-interactively by communicating with the statechain you can do so without the permission of the other 9 users. It goes from interactive to non-interactive. The second example would be the DLC style bet thing where moving in and out of a position is interesting. The price of that position changes over time. You could sell it midway at the price that it is. Normally you would be held hostage by whether or not your counterparty inside of the channel wants you to move out. They can prevent you from getting out. Here you don’t need their permission. You can move out and exit the position. It makes it a lot more practical to have these bets and very smoothly move in and out of them, sell these bets to other people without requiring all these people to be online. Those are the benefits. It is quite significant that you are able to do this without requiring the help of your counterparty.

In the case of a counterparty swapping into a Lightning channel the counterparty swapping in needs to trust the current state of the channel? In the case of Bob swapping out for Carol, Bob and Carol need to trust each other?

That’s a good point. Do you even want this? In Lightning Alice and Bob open a channel and what we are steering towards is that Alice and Bob kind of trust each other or at least think this person is not completely malicious. Now Alice can switch over and become Carol. Does Bob trust Carol? Maybe not. That’s certainly an issue. The funny thing is that it is not preventable. This just works. You can do this on Lightning; once we have Schnorr and statechains people can start doing this. Bob has no say over it. Bob can’t recognize a statechain key from a regular Alice key. It is going to be a thing. Is it good? Is it bad? I don’t know. I don’t necessarily think it is good. The trust assumptions are: for Carol to move in, Carol needs to trust Bob. Be willing to have a channel with Bob. Not trust Bob but be willing to have a channel with him. Trust the statechain and if you have some kind of DLC bet going on you also have to trust the oracle. There are a couple of things but they are all very separated so that is nice.

c-lightning 0.9.0

https://medium.com/blockstream/new-release-c-lightning-0-9-0-e83b2a374183

We rewrote our whole pay plugin. c-lightning is evolving into this core plus all these plugins that do different things. The pay command, which is important, is a plugin. You can do a lot of interesting stuff. The one that broke the camel’s back is this multipart payment idea. You can split payments into multiple parts to try to get them all to the destination at the same time. As you can imagine there are an infinite number of ways you could do that. We didn’t want to put that in the core. Christian Decker did some research by probing the network and came up with a number of 10,000 sats. If you are trying to send 10,000 sats through most of the network you’ve got an 83 percent chance of it working. It declines pretty rapidly after that. The obvious thing with multipart payments is you try sending it, if you get a channel saying “I don’t have capacity” you try splitting it in half. We are a bit more aggressive than that. If it starts out really big we try to divide it into 10,000 sat chunks and send those. This works way better in real life. It turns out that the release is a bit aggressive on that. 10,000 sats is roughly a dollar. If you try to send a 400 dollar payment it is pretty aggressive at splitting it. One of the cool things is that it splits it into uneven parts. That is really nice for obfuscating how much we are sending. People tend to ask for round amounts. They ask for 10,000 sats or something. If you see 10,003 sats go by you can pretty easily guess how many hops it has got left to go before it gets to its destination. We would overpay for exactly that reason. We would create a shadow route and add some sats. The person gets free sats at the end. Even so it is still pretty obvious because we don’t want to add too many sats. With splitting you can split onto rough boundaries and there is no real information if you only see one part of the payment. It was a complete rework of our internal pay plugin. Christian Decker was the release manager and he decided with the agreement of the rest of us that it should definitely go in the release. There were four release candidates because we found some bugs. It worked in the end, multipart payment worked. Any modern client will issue an invoice that has a bit in it to say “We accept multipart payments.” Multipart payments are pretty live on the network at the moment which is pretty nice. That was the big thing, a big rewrite for that.
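
A hedged sketch of the splitting idea, not the actual c-lightning pay plugin (which also does presplitting and adaptive halving): break a large payment into roughly 10,000 sat parts with some random jitter so the amounts are not round and leak less information.

```python
# Hedged sketch (not the real pay plugin) of splitting a large payment into
# roughly 10,000-sat parts, with random jitter so the parts are not round,
# information-leaking amounts.

import random

BASE_PART_SAT = 10_000   # size found to have a high success rate when probing

def split_payment(amount_sat: int, jitter: float = 0.25) -> list:
    parts = []
    remaining = amount_sat
    while remaining > BASE_PART_SAT * (1 + jitter):
        part = int(BASE_PART_SAT * random.uniform(1 - jitter, 1 + jitter))
        parts.append(part)
        remaining -= part
    parts.append(remaining)     # whatever is left rides in the final part
    return parts

parts = split_payment(4_000_000)     # roughly a 400 dollar payment
assert sum(parts) == 4_000_000
print(len(parts), "parts, e.g.:", parts[:5])
```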

There was also the coin movement stuff. This came out of Lisa (Neigut) looking at doing her tax in the US. You are supposed to declare every payment you make, incoming and outgoing, and theoretically all the fees that you charge for routing things. You should mark the value at the time you received them and stuff like that. Getting that information out of your node is kind of tricky. It is all there but having one nice place where you can get a ledger style view, this amount moved in, this amount moved out and here’s why, was something that turned out to be pretty painful. She wrote this whole coin movements API. Everywhere in the code that we move coins, whether it is on Lightning or onchain, it gets accounted for. You can say “I paid this much in fees.” She has also got a plugin to go with that that stamps out all the payments. That’s still yet to be released because there are some issues with re-orgs and stuff that she wants to address. I am looking forward to that. Her next tax time, she will just be able to dump this out and hand it to her accountant with all the answers.

We did a whole internal rework. PSBTs, Partially Signed Bitcoin Transactions, are the new hotness. We previously had them in a couple of APIs. You could get a PSBT out and give a PSBT in. But we reengineered all the guts of c-lightning to use them all over the shop. That continues in the next release. We completely reengineered some things, moved them out to plugins and deprecated them because it is all PSBTs internally. The old things that gave you transactions and stuff, they are replaced by new APIs using PSBTs. Get to be one of the cool kids. It makes life so much easier to deal with other wallets, hardware wallets, stuff like that. We are pretty much at the point where you can throw a PSBT at something and it will do the right thing. It will sign it, it will combine it with other things and stuff like that. It is particularly powerful for dual funding where you want to merge PSBTs. PSBTs have been great. That is not really visible in that release but that was a huge amount of work to rework everything. It was a pretty solid release, I’m pretty happy with that. We thought it was worth bumping the version number. I think the 0.9.0 release name we gave it was “Rat Poison Squared on Steroids”. It was named by the new contributor who contributed the most. That’s the c-lightning release.

PSBTs are ready to be used on Lightning? All the implementations are either using or thinking about using PSBTs? There are no niggly issues that need to be sorted out with PSBTs with the Lightning use case?

Everything is ready to go. I didn’t find any horrific bugs. We are dog fooding them a bit more, that is useful. There was some recent PSBT churn because of the issue with witness UTXOs and non-witness UTXOs, this issue of people worrying about double spending with hardware wallets. There was some churn in the PSBT spec recently. There is still a bit of movement in the ecosystem but generally it is pretty well designed. I expect as people roll out you generally find that you interoperate, it just works with everything which is pretty nice. For c-lightning we are in pretty good shape with PSBTs.

Announcing the lnprototest Alpha Release

lnprototest blog post: https://medium.com/blockstream/announcing-the-lnprototest-alpha-release-f43f46f2c05

lnprototest on GitHub: https://github.com/rustyrussell/lnprototest

Rusty presenting at Bitcoin Magazine Technical Tuesday on lnprototest: https://www.youtube.com/watch?v=oe1hQ7WaX4c

This started over 12 months ago. The idea was we should write some tests that take a Bitcoin node and feed it messages and check that it gives the correct responses according to the spec. It should be this test suite that goes with the spec. It seemed like a nice idea. It kind of worked reasonably well but it was really painful to write those tests. You’d do this and then “What will the commitment transaction look like? It is going to send the signatures …” As the spec evolved there were implementation differences which are perfectly legitimate. It means that you couldn’t simply go “It will send exactly this message.” It would send a valid signature but you can’t say exactly what it would look like. What we did find were two bugs with the original implementation. One was that c-lightning had stopped ignoring unknown odd packets which was a dumb thing that we’d lost. Because you never send unknown packets to each other a test suite never hit it. You are supposed to ignore them and that code had somehow got factored out. The other one was the CVE of course. I was testing the opening path and I realized we weren’t doing some checks that we needed to check in c-lightning. I spoke to the other implementations and they were exposed to the same bug in similar ways. It was a spec bug. The spec should have said “You must check this” and it didn’t. Everyone fell in the same hole. That definitely convinced me that we needed something like this but the original one was kind of a proof of concept and pretty crappy. I sat down for a month and rewrote it from scratch. The result is lnprototest. It is a pure Python3 test system and some packages to interface with the spec that currently live in the c-lightning repository. You run lnprototest and it has these scripts and goes “I will send this” and you will send back this. It can keep state and does some quite sophisticated things. It has a whole heap of scaffolding to understand commitment transactions, anchor outputs and a whole heap of other things. Then you write these scripts that say “If I send this it should send this” or “If I send this instead…”. You create this DAG, a graph of possible things that could happen and it runs through all of them and checks what happens. It has been really useful. It is really good for protocol development too, not just testing existing stuff. When you want to modify the spec you can write that half and run it against your own node. It almost inevitably finds bugs. Lisa (Neigut) has been using it for the dual funding testing. That protocol dev is really important. Both lnd and eclair are looking at integrating their stuff into lnprototest. You have to write a driver for lnprototest and I have no doubt that they will find bugs when they do it. It tests things that are really hard to test in real life. Things that don’t happen like sending unexpected packets at different times. There has been some really good interest in it and it is fantastic to see that taking off. Some good bug reports too. I spent yesterday fixing the README and fixing a few details. The documentation lied about how you’d get it to work. That is fixed now.

This testing suite allows people to develop a feature… would that help them check compatibility against another implementation for example?

Yes. It is Python, it is pretty easy to hack on. You can add things in pretty easily. You don’t have to worry about handling all the corner cases. You write your scripts and check that your implementation works. For example I used this to develop the anchor outputs stuff. I took the anchor spec, I implemented it in lnprototest first and then I implemented in c-lightning. The c-lightning one took a lot longer. It took me an afternoon in lnprototest. It took me several days in c-lightning. Once we had c-lightning working with the lnprototest side the first time I attached it to a lnd node in the field it just worked. It is definitely useful as a test suite but also for developing. When you want to add something to the protocol. It is a lot easier to hack it into lnprototest. “Here’s a new packet”, send it, see what happens. This is the response you should get. It is way easier than modifying a real implementation to do it. It is a really good way of playing with things.

Can you do one for the Bitcoin spec? (Joke)

With Bitcoin we don’t have a spec. The code is the spec. There are tonnes of unit tests and functional tests on Core. The test framework sets up a stripped down Bitcoin node so you can do testing between your node and this stripped down Python node. In this case the lnprototest is setting up a stripped down Lightning node that is coded up in Python and then you are interacting in a channel between your main c-lightning node or whatever implementation signs up to use lnprototest, with that stripped down Python lnprototest node?

The Python implementation isn’t even really a node. It understands how to construct a commitment transaction. It has got a helper to do that. “I send this and by the way my commitment transaction should now have an HTLC in it. Add an HTLC to my commitment transaction.” They send something and you go “Check that that is a valid signature on the commitment transaction.” It goes “Yes it is.” It has enough pieces to help you so you don’t have to figure things out by hand. It has a lot of stuff like it knows what a valid signature is rather than encoding the signature. What it is doing under the covers is reaching into the implementation and grabbing the private keys out. It knows the private keys of the other side it is testing. That simplifies a whole pile of stuff. It can say “I know what the 13th commitment secret is going to give, I know what that is. I know what it should be.” It is a much simpler implementation to play with. Then you have these scripts that say “I should open a connection. It should reply with init. I should send init. I send open channel, it should say accept channel. Take those fields and produce a commitment transaction as agreed. Give me what the signature should be on the first one. I will send that across.” They send the reply and I go “Check that that is what I expect? Yes it is.” It has some helpers to construct these things as you go but you end up writing the test to say “Update the state. You should match what they send.” We have enough infrastructure to build commitment transactions and stuff like that. But we don’t have any logic in there to negotiate shutdown for example. There is none of that logic. That would be a script that says “If I offer this they should offer this” and stuff like that. There is a whole heap of scaffolding to help you with the base construction of the protocol. It does all the encrypted communication stuff. You just say “Send this message” and it worries about encrypting it as it needs to be on the wire, authenticating and all that stuff. You end up writing a whole heap of test cases that say “If I send this they should send this.” There are two parts. There is the scaffolding part that has the implementation bits we need to make it useful. Then there are all these tests that say “If I do this they’ll do this.”
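
For a feel of what such a script looks like, here is a rough illustration modelled on lnprototest's event classes; the class names and signatures are from memory and may not match the current API exactly.

```python
# Rough illustration of the "script" style described above, modelled on
# lnprototest's event classes (Connect, ExpectMsg, Msg, Runner). Names and
# signatures are approximate, not a definitive reference to the current API.

from lnprototest import Connect, ExpectMsg, Msg, Runner

def test_init_exchange(runner: Runner) -> None:
    # "I open a connection, they must send init, I send init back."
    script = [
        Connect(connprivkey='03'),
        ExpectMsg('init'),
        Msg('init', globalfeatures='', features=''),
    ]
    runner.run(script)
```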

Have you had to do much interoperability testing with some of the lesser known Lightning clients like Electrum or Nayuta? Maybe they could use this testing suite as well.

If you are trying to bring up a new implementation, bringing it to par with the others this is invaluable. It is a stepping stone. “I sent this but I was supposed to send this and I didn’t. What do we disagree on?” You might have found a bug in lnprototest. You know that at least one implementation passes lnprototest. Either there is a bug in c-lightning or I am doing something wrong. It is a much more controlled environment. We could test things like blockchain reorganizations to whatever depth. Stuff like that in canned tests is incredibly useful.

When you found that channel funding bug Rusty, you found that just by implementing it in Python rather than a particular test failing in lnprototest?

I was writing the test and I went “I got that wrong and it still worked. Why did that work?” Then I went “Ohhh. That is bad.” I jumped on internal chat. “There is nowhere I’m missing that we check this that we are supposed to do this?” No it was a real bug. I immediately back channel pinged ACINQ and Lightning Labs, “I suspect you want to check if you are doing this as well.” I was writing the test, realized that I’d screwed up the test and it shouldn’t have worked but it did. The act of testing it was what drove me into this path.

If you are testing new features and you want to use lnprototest to test those new features you’d have to reimplement in Python?

Yes and no, it depends on what you are testing. You can tell lnprototest “Send this raw packet”, it doesn’t have to understand what it is doing. All the lnprototest message stuff is generated from the spec. You patch the spec, you run the generator thing and it generates the packets for you.

The spec is words, a lot of it is. How do you get code out?

The way the messages are implemented in the spec is they are machine readable as well as human readable. We’ve always done that on c-lightning. There is a script in the spec itself that gives you a nice CSV file, a comma separated values file, describing all the packets in the spec. That feeds into the c-lightning implementation. In c-lightning we have something that turns that into C code. For lnprototest we turn those into Python packages. It reads those Python packages and generates all the types. If you are just adding a message you can edit the spec, rebuild and you’ll get your new message type. I have no idea what that message type is supposed to do but you could make lnprototest send that message type and expect whatever response. “If I send this you should send this.” Then test it against your node and see if it works. Obviously, with anchor outputs, if you offer anchor outputs and they offer anchor outputs then it changes the commitment structure in very well defined ways. Internally lnprototest knows how to build the commitments. I add a new flag and look through the diff of the spec, what they changed. If anchor outputs here, if anchor outputs here… It was a couple of hours work maximum to get that working.
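
The extracted CSV looks roughly like the lines below (approximate; the real extractor output may differ slightly), and turning it into message definitions is a small parsing job:

```python
# Hedged sketch of the pipeline described above: the spec's machine-readable
# message definitions are extracted to CSV, then turned into code. The CSV
# content here is approximate and only meant to show the rough shape.

CSV = """\
msgtype,init,16
msgdata,init,gflen,u16,
msgdata,init,globalfeatures,byte,gflen
msgdata,init,flen,u16,
msgdata,init,features,byte,flen
"""

messages = {}
for line in CSV.splitlines():
    kind, name, *rest = line.split(',')
    if kind == 'msgtype':
        messages[name] = {'type': int(rest[0]), 'fields': []}
    elif kind == 'msgdata':
        fieldname, fieldtype = rest[0], rest[1]
        messages[name]['fields'].append((fieldname, fieldtype))

print(messages['init'])   # {'type': 16, 'fields': [('gflen', 'u16'), ...]}
```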

It is not a replacement for the cross implementation testing for new features. It is additional testing and assurance. You still want to test those full implementations. Otherwise if you are reimplementing it in Python you could just give it to Electrum and Electrum will have all the new features first. Because that is written in Python.

I looked at some of their code actually. Again they have got too much stuff. We need this thin amount of stuff to implement things. And a whole heap of stuff to parse messages generically etc. Their stuff has too much stuff in it. Like any implementation there is a whole heap of other stuff you have to worry about like onchain handling and timeouts. lnprototest doesn’t care because it is not dealing with real money. It is a whole other ball game. I think every implementation will end up using lnprototest at some stage which will make it much easier. The idea is eventually you’ll patch the spec. I’ve got this cool new thing, here is the spec patch, here is the new lnprototest test. You’ll run those together against your implementation. You’ll be 90 percent of the way there. At least you are compatible with lnprototest so the chance of you being compatible with each other is now greatly increased.

They just need a runner, the new implementations, if they want to use lnprototest?

We’ve got our DummyRunner that passes all the tests. It always gives you what you expect. They need their own runner. Take the c-lightning one, it fires up a bitcoind in regtest mode and fires up a c-lightning node. It is kind of dumb. You have to have c-lightning run in developer mode because it uses some weird hacks. We are slowly pulling those out. Ideally you’d be able to run it in any off the shelf implementation. You have to get your hands dirty a bit and write some Python for your implementation, that is true.

Dynamic Commitments: Upgrading Channels Without Onchain Transactions (Laolu Osuntokun)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-July/002763.html

We had this item here on upgrading channels without an onchain transaction. This is a roasbeef post. It is talking about how you could upgrade a channel. One example was around changing the static remote key.

At the moment you set up your channel and that is it. It is how you built it. We chose not to put in any upgrading in the spec. Early on it was just “Close the channel and start again.” We’ve had two new channel types so far. One is the static remote key option which now as far as I know every implementation supports. The only time you will not get a static remote key channel is if you’ve got an old one that you opened beforehand. Anchor outputs is the new hotness that is coming through. You have now got three kinds of channels. What type of channel you get depends on what you both supported at the time you opened it. What would be cool is to be able to upgrade these things on the fly. You can always close and open again. We have this idea of splice that is not currently in the spec but it is on Lisa’s plate after she finishes dual funding. It is very similar. We can negotiate to spend the commitment transaction in some way that changes the channel. You might want to splice funds in. You might want to splice funds out. Have a new commitment that spends the commitment transaction and opens a new channel atomically. That is still an onchain transaction. What if you wanted to change it on the fly? For the first 100 commitments of this channel it was a vanilla one but after that point we both agreed that it would be option static remote key. We would use the modern style. That is perfectly possible. It is not as good from a code maintenance point of view. You still have to be able to handle those old channels because they could still drop an ancient commitment transaction on you. They drop commitment transaction 99, you need to be able to penalize that. You still need some code there to handle the old ones. But ideally at some point in the future if we have this dynamic upgrade we can insist everyone upgrades and then six months later we go “If anyone hasn’t upgraded their channels when you upgrade to the next version of c-lightning it will unilaterally close those old ones.” Then we can remove the code that can do all that stuff. This is a nice simplification. This proposal here is the set of messages that you would send to negotiate with your peer that you are both ready to upgrade this channel on the fly. It went through a couple of revisions based on feedback from the list. The consensus in the end was that we would block the channel, once you’ve started this process we are going to drain out all the updates. That is the same thing we do on shutdown already. When you shutdown a channel any outstanding HTLCs have to be settled before it finally gets closed by mutual close. We would use the same kind of negotiation. If I want to upgrade the channel the other side would go “Great. We will upgrade the channel as soon as all the HTLCs are gone.” In the normal case this would be immediately but it could take a while. You only really have to worry about the case of upgrading empty channels. From that point onwards you’d be using new style not old style. This is definitely something people want. Static remote key is good because it is way nicer for backups. Because we used to rotate all the keys prior to static remote key, if you somehow lost your state you could forget how to spend your own output without your peer’s help. Static remote key changed that. You don’t need the peer. If a commitment transaction from the future appears on the blockchain you are kind of screwed because it means you have lost track of things.
At least now with static remote key you would be able to get your own money back without having to ask anyone. “I don’t know what the tweaking factor was for that commitment. Could you tell me?” Anchor outputs do even more. They make it possible to lowball your fees and use the anchor outputs and child-pays-for-parent to push the commitment transaction into a block. That gets around the problem that we have at the moment which is that you have to put enough fees in your commitment transaction to pay for it later when you are going to use it. You have no idea when that is so it is an impossible problem. Anchor outputs provide a way, not a perfect way, to top up the fees afterwards. This means you can go lower on your fees which I think is good for everyone. It is worth bearing in mind that you only care about what those fees are like if you get a unilateral close. If it is a mutual close you negotiate fees at that point. But should somebody need to go onchain it is nice if they are paying lower fees than they are at the moment. Everyone overbids on fees at the moment because you don’t know when a fee spike will happen. Knowing my luck it will be the moment when you want to close the channel. We go for a multiple of the current fee rate.
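
A back-of-the-envelope sketch of that child-pays-for-parent top-up, with made-up sizes: the commitment transaction is signed at a lowball feerate and the anchor-spending child pays whatever is needed to lift the whole package to the feerate of the day.

```python
# Back-of-the-envelope sketch of the child-pays-for-parent idea above: the
# commitment transaction can be signed with a lowball feerate, and when it has
# to go onchain a child spending the 330-sat anchor output tops the package up
# to the market rate. All sizes and rates here are made up for illustration.

def anchor_child_fee(parent_vsize: int, parent_fee: int,
                     child_vsize: int, target_feerate: float) -> int:
    """Fee the anchor-spending child must pay so the whole package hits target."""
    package_fee_needed = target_feerate * (parent_vsize + child_vsize)
    return max(0, int(package_fee_needed - parent_fee))

# Commitment tx signed long ago at ~1 sat/vB; fees have since spiked to 40 sat/vB.
child_fee = anchor_child_fee(parent_vsize=700, parent_fee=700,
                             child_vsize=150, target_feerate=40.0)
print(f"child must pay ~{child_fee} sats to lift the package to 40 sat/vB")
```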

On the anchor outputs, Lisa was talking about this fee concern because it makes it more costly. What is the thinking on that? Is it worth it?

There are two things that make it more costly. One is the scripts involved on a couple of the outputs are slightly bigger so it is a slightly bigger transaction. Each HTLC output is a little bit bigger. But the other thing is that the funder is also paying for these two 330 satoshi outputs, one for each side to use to push the transaction if necessary. It is 660 sats more expensive to start with. The flip side of that is you can lowball the fees. It is almost certainly worth it. We have implemented everything on c-lightning but it is currently behind a configuration option. Our experimental config, the “if it breaks you get to keep both pieces” configuration. We haven’t turned that on by default for this release because we haven’t implemented child-pays-for-parent. So anchor outputs are all loss. It is just more expensive and it doesn’t help you in any way. As soon as we implement child-pays-for-parent it would be great to get it in there because you would probably come out ahead most of the time. Even though you are spending extra sats in the anchor outputs you won’t have to overbid on fees. This proposal here would let us then go “We see a path here to deprecating the non-anchor outputs output”. In the future everything would be anchor outputs. We do a release which has the upgradeability and then 12 months later we would start giving warnings. Then anchor outputs would become compulsory. If you happen to have any clients that haven’t upgraded and you haven’t spoken to them in months we would just close their channels. We could remove a whole heap of if statements in code and our test matrix. Having to test against the original channels and static remote key channels and anchor output channels multiplies your test matrix out. You can simplify things if you start pruning some old stuff out.
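
A tiny worked comparison with assumed numbers (only the 2 x 330 sats figure comes from the discussion above): the anchors and slightly bigger scripts cost a little up front, but not having to overbid the commitment feerate usually more than pays for it.

```python
# Tiny worked example (numbers assumed) of the trade-off described above:
# anchors cost an extra 2 * 330 = 660 sats up front plus a slightly larger
# transaction, but let you stop overbidding the commitment feerate.

ANCHOR_COST_SAT = 2 * 330          # the two anchor outputs the funder pays for
extra_vsize = 40                   # rough extra weight from bigger scripts (assumed)
overbid_feerate = 20               # sat/vB you might pay today "just in case"
lowball_feerate = 5                # sat/vB you could sign at with anchors
commitment_vsize = 700             # assumed commitment transaction size

without_anchors = overbid_feerate * commitment_vsize
with_anchors = (lowball_feerate * (commitment_vsize + extra_vsize)
                + ANCHOR_COST_SAT)
print(f"overbid commitment: {without_anchors} sats, "
      f"lowball + anchors: {with_anchors} sats")
```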

It might be a good way to clear out the zombie channels. I hear there is a lot of them out there.

It turns out there are. We recently had a change to the spec. You have to update your channels every two weeks. After two weeks you could be forgotten if you haven’t updated. The spec said either end has to update and we just changed it so that both ends have to update. The reason for that is that you get some node that has fallen off the network. Every two weeks we’ll keep refreshing this node, keeping it alive when we might as well forget it, it is game over. That spec change has gone through and the new release of c-lightning 0.9.1 will implement that. It is in the tree now. That will help clear out some stuff as well.

This upgrading channels, this works for anchor outputs because you are changing the commitment transaction. You’ve got an open channel, the funding transaction is onchain and you can update the commitment transactions. For other things, there are so many potential channel update proposals swirling around. There are generalized payment channels using Schnorr MuSig, that would be a funding thing so you wouldn’t be able to use this for that; there is eltoo, there are PTLCs. I think for most of those you wouldn’t be able to use this because they either change the funding transaction or it is a completely different configuration.

If your funding output is still a 2-of-2 you are good. But if you have to change the funding transaction then you are going to be going onchain anyway. The way I think we will end up doing that in future is with a splice. The splice proposal says “I propose a splice.” We both get to go to town on proposing what we want to change about that commitment transaction. That is useful. You might have a preference. You might have a low watermark and a high watermark. If you get more than a certain amount in fees in the channel “Sometime this week I’d like to move some off to cold storage. I’d like to splice them out.” Then you have a high watermark where you are like “This is getting ridiculous. I really have to do this.” If I am at the low watermark and you go “I would like to splice” then I will jump on that train. “While we are there I would also like to splice.” Throw this input and this output in and whatever. Opportunistically do these things. The splice negotiation will also then be an opportunity to do an upgrade. In fact I think that we would make the splice an implicit upgrade almost. Why wouldn’t you at that point? If we are going to splice, if we are going to spend a transaction let’s also do an upgrade. This is on the assumption that every improvement is considered an upgrade. If we ever ended up with two really different species of channels that were useful for different things then you might end up with two completely different ones. That is the kind of complexity I am hoping to avoid. Splicing will be the upgrade mechanism for anything that has to change the funding transaction. The cool thing about splicing is you can forget all your previous revocation things after. Once the splicing transaction is buried deeply enough you can forget the old state because it can never happen now. Those old commitment transactions are completely dead. That is nice. That lets you transfer to eltoo where you have only got a single commitment transaction you have to remember in a very clean way anyway. Even if you could do it with 2-of-2 you would want to splice in I think for eltoo.

Let’s say hypothetically that Taproot comes in, then we get ANYPREVOUT. All the channels that currently exist, they would be upgraded to eltoo channels using this splice method?

Yes you would have to use splice to upgrade those. We will get Taproot first. Would you want to upgrade? Maybe because now you start to look like single sig. It is all single sig, it is all cool. You probably want to do it just for that. On the other hand if you are a public channel anyway there is little benefit to doing that onchain transaction just for that. Maybe you don’t bother? If you are going to splice anyway then let’s upgrade, let’s save ourselves some bytes. You probably wouldn’t do it just in order to upgrade. If we get ANYPREVOUT and you’ve got eltoo then there’s a convincing reason to upgrade. No more toxic waste for your old states.

People can just opportunistically wait until it is down to 1 sat per byte. They could wait for a cheap time and do it then.

Splicing is kind of cool because a channel doesn’t stop while you are splicing. After that you make every commitment transaction against both the spliced one and the old one, until the spliced one is buried to a sufficient depth. Your channel doesn’t stop. You can lowball your splice and just wait for it. It is annoying if you change your mind later and now you really want to splice it in. You’d have to child-pays-for-parent. We are probably not going to do a multilayer splice and let you have this pile of potential changes to the channel piling up. You can keep using the channel while the splicing is happening. You can absolutely lowball on your fees.

Is what is likely to happen post Taproot that unless you do this splicing you will keep your old channels open, and any new channels you open would use the latest MuSig, Taproot stuff?

That is happening with existing upgrades. Modern channels are all static remote key, old ones aren’t. We would end up with the same thing. We have to change the gossip protocol slightly because we nail the fact that there is a 2-of-2 in the gossip protocol. I want to rework some of the gossip protocol anyway. There will be a gossip v2 to match those. At some point in the far, far future we will deprecate old channels and splice them out or die. That would be a long way away. We will have two types of gossip messages, an old one and a new one for a long time.

Does the gossiping get complicated? I have got this channel on this version, this channel on this version and this channel on this version.

We have feature bits in the gossip so it is pretty easy for you to tell that. Most of this stuff, if Alice and Bob have a channel and we are using carrier pigeons or psychic waves or whatever to transfer funds it doesn’t matter to you. That’s between Alice and Bob. It is not externally visible. Some changes to channels are externally visible. If you are using PTLCs instead of HTLCs that is something you need to know about. But for much of the channel topology they are completely local. The gossiping doesn’t really need to know. Where the gossiping needs to know is we currently say “Here are the two keys” and you go and check that the transaction in the Bitcoin blockchain actually does pay to those keys. If you have a different style of funding transaction that will need to change. In the case of Schnorr there is only one key. “This is the key that it pays to” and yes I can tell that. You don’t need to tell it that, you just need to use that key to sign your message and it can verify it. You can literally pull the key out of the output which is really nice. The gossip messages get smaller which is a big win. Plus 32 byte pubkeys. We shave another byte off there. It is all nice round numbers. It is wonderful. I am hugely looking forward to it just to get rid of 33 byte pubkeys to be honest.

When does that discussion seriously kick off? Does Taproot have to be activated? I know you have got a thousand things that you could be working on.

We marked it as out of bounds back at the Adelaide summit at the end of last year. This was a conscious decision. There is more than enough stuff on our front burner without going into Taproot. People like AJ Towns and ZmnSCPxj can think about the further possibilities. I am pushing it off as far as possible. When it lands on my plate we will jump on it.

AJ and ZmnSCPxj are the brainstorming vision guys.

They will tell me what the answer is. That’s my plan of action. Ignore it until I am forced to.

BIP 8 - replacing FAILING with MUST_SIGNAL

https://github.com/bitcoin/bips/pull/950

I was going to avoid activation for two months but then Luke wrote on IRC “What is the latest update?” and I started to dig into it again. Luke’s perspective, and I think this is right, is that most people are leaning towards BIP 8 rather than Modern Soft Fork Activation (potentially 1 year). That is the majority view. Obviously this is not scientific in any way. This is gut feel from observing IRC conversations. Luke highlighted a couple of open PRs on BIP 8. One is this one that AJ opened; there is another one that Jeremy opened. At some point, depending on how big a priority activation is, there is still a lot of work to do in terms of review on Schnorr and Taproot, but I think these PRs need to get looked at and reviewed.

That particular PR is waiting on updates from Luke as to what he thinks about I guess. I sent around a private survey to a bunch of people on what their thoughts on Taproot activation timelines are. I am still waiting on a couple of responses back from that before making some of it public. I am hoping that will help with what the actual parameters should be. As to 1 year or 2 years or an initial delay of a couple of months or 6 months… What the exact parameters are, they are just numbers in the code that can be changed pretty easily whereas the actual structure of the activation protocol which is what these PRs are about is a bit more complicated to decide. This particular change was mostly about getting back to the point where something gets locked in onchain. The way BIP 9 works is you’ve always got a retarget period, a couple of weeks before there is any actual impact on what the rules onchain are. Whereas the current BIP 8 in the BIPs repo, that can happen from one block to the next, the rules that the next block has to validate according to change instantly. This is a little bit rough. That’s what that PR is about.

diff --git a/sydney-bitcoin-meetup/2021-02-23-socratic-seminar/index.html b/sydney-bitcoin-meetup/2021-02-23-socratic-seminar/index.html

Meetup

Topic: Agenda in Google Doc below

Video: No video posted online

Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

PoDLEs revisited (Lloyd Fournier)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html

We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal. I will go quickly on it because there are a lot of details in that post and they are not super relevant because my conclusion wipes it all away. I think we’ve discussed this before if you are a long time Sydney Socratic attendee, maybe the first or second meeting we had this topic come up when the dual funding proposal was first made by Lisa from Blockstream. This is a proposal to allow Lightning channels to be funded by two parties. The reason for doing that is that both parties have capacity on both sides. Both sides can make a payment through that channel right at the beginning of the channel. The difficulty with that is that it creates this opportunity for the person who is requesting to open the channel to say “I am going to use this UTXO”, wait for the other guy to say “I will dual fund this with you with my UTXO” and then just leave. Once the attacker has learnt what UTXO you were going to use he now knows the UTXO from your wallet and then just aborts the protocol. You can imagine if you have a bunch of these nodes on the network that are offering dual funding the attacker goes to all of them at once and just gets a bunch of information about which node owns which UTXO on the blockchain and then leaves, does it again in an hour or something. We want to have a way to prevent this attack, prevent leaking the UTXOs of every Lightning node that offers this dual funding. We can guess that with dual funding, probably your node at home is not offering that, maybe it is but you would have to enable it and you would have to carefully think about what that meant. But certainly it is a profitable thing to do because one of the businesses in Lightning is these services like Bitrefill where you pay for capacity. If anyone at home with their money could offer capacity in some way to dual fund it might become a popular thing and it may offer a big attack surface. One very intuitive proposal you might think of is as soon as this happens to you, you broadcast the UTXO that the attacker proposed and you tell everyone “This guy is a bad UTXO”. You shouldn’t open channels with this guy because he is just going to learn your UTXO and abort. Maybe that isn’t such a great idea because what if it was just an accident? Now you’ve sent this guy’s UTXO around to everyone saying “He is about to open a Lightning channel”. Maybe not the end of the world but the proposal from Lisa is to do a bit better using a trick from Joinmarket which is this proof of discrete logarithm equality, or as we’ve called it, PoDLE. What this does is create an image of your public key and UTXO against a different base point. It is fully determined by your secret key but it cannot be linked to your public key. It is like another public key that is determined by your public key but cannot be linked to it unless you have a proof that links the two. What you do is instead of broadcasting a UTXO you broadcast this unlinkable image. No one can link it to the onchain UTXO but if that attacker connects to them they’ll be able to link it.
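
To make the commitment structure a little more concrete, here is a toy sketch of the PoDLE idea: the same secret key is committed against a second base point so the commitment can be recognised without revealing which UTXO it belongs to. A small multiplicative group stands in for secp256k1 and the names are illustrative, so this is an outline of the algebra rather than the actual Joinmarket or dual funding code.

```python
# Toy PoDLE (proof of discrete-log equivalence) sketch; NOT the real spec.
import hashlib, secrets

p = 2**127 - 1                 # toy modulus (far too small for real use)
q = p - 1                      # group order for exponent arithmetic
g, j = 3, 7                    # the usual base "G" and a second base "J"

def H(*vals):
    data = b"".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x  = secrets.randbelow(q)      # private key of the UTXO being committed
P  = pow(g, x, p)              # the normal public key, visible on chain
P2 = pow(j, x, p)              # the "image" against J: unlinkable to P on its own

# Schnorr-style proof that log_g(P) == log_j(P2), i.e. both commit to the same x
k  = secrets.randbelow(q)
Kg, Kj = pow(g, k, p), pow(j, k, p)
e  = H(Kg, Kj, P, P2)
s  = (k + e * x) % q

# Verifier checks both bases against the same challenge
assert pow(g, s, p) == Kg * pow(P, e, p) % p
assert pow(j, s, p) == Kj * pow(P2, e, p) % p
print("PoDLE verifies: P2 commits to the same key as P")
```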

This is suboptimal. The drawbacks are pretty clear. You have this new broadcast message, you don’t want to add broadcast messages to the Lightning Network without a lot of careful thought. This is the main drawback. I made a post trying to avoid this because I happened to be studying this kind of proof, I came across it and thought “Let’s think about this.” I came up with some refinements of other proposals and thought “Maybe we can avoid this by doing this”. In the end it turns out none of them are good ideas. The reason none of them are a good idea is that you can do this, you can broadcast to the network after something bad happens but then the attacker can just do it in parallel. They can do it all at the same time to all the nodes and get the UTXOs anyway. That is the main thing, you don’t protect against parallel attacks with this. I talked with Rusty a bit and we figured out that to protect against parallel attacks, the only proposal that can be modified is the PoDLE one to protect against parallel attacks. With the PoDLE one what you can do is broadcast it immediately, rather than waiting until something bad happens. You can imagine that is going to create a lot of gossip on the Lightning Network. Every Lightning channel that gets opened will get this broadcast. This makes the proposal even more sketchy. But it would actually work, we would actually protect against parallel attacks.

Having said all that, the main conclusion I had from all of this is that we shouldn’t do any of these proposals because of what we talked about in the last Socratic. In the last Socratic we had the authors of this paper “Cross Layer Deanonymization Methods in the Lightning Protocol” and it showed that the UTXO privacy of Lightning is really not so great. You can use the onchain heuristics that you normally use from chain analysis and combine them with Lightning heuristics like the gossip messages about channel openings and channel IDs. You can combine those two and you can figure out who opened the channel and therefore the change outputs of the channels, who owns them. You can figure out essentially what UTXOs a node had because they used that UTXO to open the channel. Now they’ve got this change UTXO, now you’ve got a UTXO in your wallet. These heuristics help you identify which nodes own which UTXOs. Although this attack helps you figure out which node owned a UTXO before they’ve used it, these heuristics if you wait long enough and you watch the chain, you’ll be able to figure out that information most of the time anyway. That’s the main conclusion. All these complicated proposals, they are trying to solve this protocol problem where you send the guy information and they leave the protocol. This narrow thinking led to these different ideas but if you take a step back and you realize what information already leaks out in the Lightning Network, the heuristics you can already use and the chain analysis companies will actually use, it feeds into what they already do, they will be able to figure out that information already. Without having to create these special active UTXO probing attacks.

Let’s say I’m a chain surveillance company. I could run the parallel version of that attack to try to figure out what everyone’s UTXOs are and then combine that with my knowledge of the transaction graph and my knowledge of gossip data such that I could then try to form a more complete picture of who is paying who and where the coins are going and that kind of thing. It would make it more difficult for them if you did have this. The other point I would bring up is that I recall from that paper, the “Cross Layer Deanonymization Methods in the Lightning Protocol” paper, that they weren’t able to deanonymize all Lightning payments. It could be that just having dual funded channels helps in some sense break the heuristics that they are already relying on.

Dual funding may help the situation, that is a really good point. The question is whether leaving this attack surface open is advantageous to them actually. I think the jury is out. It is not all the time they get the heuristic right but my main conjecture is when they do get it right it is usually against these nodes that are churning. They are closing channels, they are opening new channels. I think that the nodes offering to open dual funding channels will be exactly these nodes. They will be these nodes that are online, you can connect to them, you can get some funds to lock in a channel and they will charge a bit of money from you. Once that channel is over they’ll quickly put it into another channel without doing any mixing or any tricks and these are the nodes where the heuristic just works all the time. The heuristic is not perfect but I think the heuristic really gets these nodes that will be doing this dual funding. The fact that we can scam these dual funding nodes and get their UTXOs from them, it is probably not worth it, these nodes probably don’t care that much because they are always creating channels with those funds and broadcasting that to the network. Public organizations, people active on the Lightning Network, they are not super concerned about that. I think it is not worth adding the complexity of these cryptographic solutions at this time. We can just get this dual funding specification done, Rusty agrees that it is pretty easy to add whatever trick we have in our bag that is best at the time later on.

WIP: Dual Funding (v2 Channel Establishment protocol): https://github.com/lightningnetwork/lightning-rfc/pull/524

BOLT 02: opt-in dual funding PR: https://github.com/lightningnetwork/lightning-rfc/pull/184

Is there an overlapping protocol here where there is some gain by still doing the PoDLE proposal or is there a bigger problem that engulfs the smaller problem such that this is completely pointless?

It is not completely pointless. The heuristics chain analysis companies could use combined with Lightning information give you a pretty good idea, if you watch the chain and you listen on the Lightning Network, on what UTXOs these active nodes own. They are always recycled from previous channel openings and closings. With PoDLE, they can still do the attack, it is just they can only do it to one guy once and then they have to move the UTXO to a different thing. The PoDLE thing is not perfect either but it would provide some guarantees. The question is whether it is worth having a guarantee of this kind of privacy to a small level when with other heuristics you are losing the privacy. It is not completely useless but we can’t see that it provides enough benefit for the complexity you have to add to actually realize this cryptographic proof with Lightning. It would be the most complicated cryptography you would have on Lightning so far and also open another gossip message. You have to then rate limit people if they are spamming this message. You can’t really tell which message you should keep and which you shouldn’t. It is a can of worms that I think would best be avoided unless we know that this is actually really useful against chain analysis people. That is my conclusion.

A limited gain but to get that limited gain the complexity is not worth it.

Yeah, I think it is likely you may never ever see this attack. It is possible you will but it is likely you may never see it done systemically and routinely by these major companies.

It sounds like it is only useful when you are deploying new funds on a Lightning node. When you are recycling funds they are already known because they were part of public channels that have been advertised.

Correct. You would learn new funds before they are used but they are eventually going to be used in a Lightning channel and you’re eventually going to figure that out. Then you know the funds in the future, you just have to wait. You don’t get the information as early as you would but you will get that information eventually.

Lightning dice (AJ Towns)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002937.html

Slides: https://www.dropbox.com/s/xborgrl1cofyads/AJ%20Towns-%20Lightning%20Dice.pdf

StackExchange question: https://bitcoin.stackexchange.com/questions/4609/how-can-a-wager-with-satoshidice-be-proven-to-be-fair

Satoshi Dice

A similar idea to Satoshi Dice, anyone not heard of Satoshi Dice? Satoshi Dice was way back in the day. Erik Voorhees made a whole bunch of money off it. The idea was you send some coins to an address and if Satoshi Dice thinks you won then you get more coins back and if it thinks you didn’t win then you don’t get any coins back or you get far less. It was really simple. The big idea about it was that it was a trust but verify thing. You’d have to trust it for a day but after the day you could verify that it wasn’t cheating you. I think if you do a web search you will find some comments on Reddit that at least once upon a time it went down the next day and didn’t pay out. People figured that was one way of it cheating. Those transactions, you win money and you just have to send something. If there were no fees then why not? There were millions and millions of transactions. It got to the point at times where you couldn’t get real transactions through for all the Satoshi Dice spam. The 50 percent double your money or lose your money thing has 3 million transactions on the blockchain at the moment.

Do this with Lightning?

An idea I’ve kept thinking about is how we could do this with L2 because L2 lets you do, in theory, similar stuff and it takes it all offchain so you don’t have to worry about fees or spam. The thing I’d really like to be able to do one day is have something like if you have shares as an Ethereum smart contract or something, ideally you’d like to be able to do share trading over Layer 2 instead of Layer 1. One of my things is I’d like to figure out a way of doing that sanely. That’s where my goal is for this eventually.

PTLCs

The background for it is that instead of HTLCs which Lightning does at the moment we will use PTLCs because we can do math on PTLCs. With hash timelocked contracts you reveal a hash preimage and, because of the way hashes work, you stop there. With point timelocked contracts you can do ECC math, elliptic curve cryptography, and at least get addition, maybe a little bit of multiplication so that you can actually do slightly more complicated things with it. One of the things you can do with it is make a partial Schnorr signature. You create your Schnorr signature, you reveal half of it to the other guy which doesn’t actually tell them anything useful. Then once you’ve revealed a point preimage they can take that signature and it is a valid signature at that point.
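
As a rough illustration of why you can do math on point locks but not hash locks, here is a toy comparison. Plain modular arithmetic stands in for secp256k1 and nothing here is actual Lightning code.

```python
# Toy illustration of why PTLCs compose and HTLCs don't.
import hashlib, secrets

p, g = 2**127 - 1, 3
q = p - 1

def point(x):
    # "x*G" in elliptic-curve notation, written as g^x in this toy group
    return pow(g, x, p)

a, b = secrets.randbelow(q), secrets.randbelow(q)

# HTLC world: all you can do with a hash is check equality once the preimage is revealed.
commitment = hashlib.sha256(str(a).encode()).hexdigest()
assert hashlib.sha256(str(a).encode()).hexdigest() == commitment

# PTLC world: commitments add. From point(a) and point(b) anyone can compute the
# commitment to a+b without learning a or b, which is what lets secrets be
# combined across hops or folded into partial Schnorr signatures.
assert point((a + b) % q) == point(a) * point(b) % p
print("point commitments compose; hash commitments do not")
```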

Trust

This is a “trust but verify” model, it is not a “don’t trust, verify” model. You do have to trust for a little while. Your funds could get instantly stolen but at least you will know after the fact that they have definitely been stolen, I don’t need to deal with this guy again. For something like Satoshi Dice that might be reasonable because you are only gambling 5 cents and trying to win a couple of dollars. For other things it might not be reasonable. That’s the security model. By giving up on trustlessness there we can say that we don’t need to lock up funds. We are going to trust them anyway so locking up funds for 5 minutes while we resolve the protocol is no big deal. If we don’t lock up funds that means you can’t get your funds locked away from you while the protocol fails or whatever. That’s the trade-off there. Whether the risk of getting funds immediately stolen is worth the benefit of not having to have funds locked up for maybe 30 days until a timelock runs out.

The wager

To be a bit more specific about the wager here. The idea is you are having a bet on something completely random that has got no skill involved whatsoever. It has just got odds of winning and you agree on a payout if you win or not. In particular pick some numbers, add them together, if the result is less than whatever condition then you win, if it is greater then you lose. I’m using 256 bit numbers here rather than just numbers between 1 and 1000 because that way when you create a point on them it is random. If you just had the numbers 1 to 1000 then someone trying to hack you could try every single one of those and then figure out from the point you’ve got exactly which number you picked.
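
A minimal sketch of that wager condition, using the 48 percent odds that come up later in the discussion as an illustrative figure:

```python
# Sketch of the bet itself: two uniform 256-bit numbers, summed mod 2^256,
# win if the result lands under a threshold encoding the agreed odds.
import secrets

THRESHOLD = (2**256 * 48) // 100   # 48% odds over the space of 256-bit sums

b = secrets.randbits(256)          # Bob's secret number, committed as Pb = b*G
c = secrets.randbits(256)          # Carol's secret number, committed as Pc = c*G

# 256-bit numbers (rather than 1..1000) stop anyone brute-forcing the committed points.
bob_wins = (b + c) % 2**256 < THRESHOLD
print("Bob wins" if bob_wins else "Carol wins")
```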

Protocol design

The goal is to take all of those ideas and then get a protocol where we are not doing anything onchain so we can communicate over the web like HTTPS or something and we can send PTLC payments. We don’t want to use any more primitives than that and we want to make sure that it is a safe protocol. If someone stops playing, stops participating in the protocol that doesn’t let them cheat.

Steps

That’s the specifics of it. The key concepts are you connect to the website, you decide that you want to make a bet of some sort, you pick the random number (b) and you calculate the corresponding point (Pb=b*G) and that lets you get started. You send the public parts of those over to the casino. The casino decides to accept the bet. If they don’t want to accept the bet because they’ve run out of funds or they are under regulatory pressure or something you just stop there and no one is any worse off. Carol picks her number (c), calculates the corresponding point (Pc=c*G) and she signs the message “Carol (C) agrees to the bet with Bob (B) with conditions (Pb)/(Pc)” giving a signature (R1,s1). A signature on this message is later going to be what you use to say that the casino cheated you out of your money. That generates a signature (R1,s1) and Carol is going to send a partial signature (R1,s1-c) and her point (Pc) to the person who made the bet. The way the math works is if you get those two numbers and you add c to the second one you get back to the original signature. If you don’t get c you can’t get anything out of it because as far as you know c could be any number out of 0 to 2^256.

Q - If there is a payment failure or if the channel is exhausted at the precise moment the person is trying to claim their payment? It cannot go through unless there is capacity?

A - If a payment doesn’t go through for whatever reason that’s the same as they didn’t pay you. If you are at a shop and the bank declines your card then you didn’t pay for the goods. For the purposes of this I am assuming that Lightning works really well. Payments go through instantly, they get cancelled instantly, there isn’t much of an incentive for anyone to hold up payments in this protocol. You don’t want to have the payments get stuck halfway or something as per normal.

Q - You wouldn’t lose money. Your protocol is secure even in the case of stuff like that happening.

A - Yeah.

At this point no money has changed hands at all. You’ve just gone on the website and exchanged some information. The next thing is that Bob needs to check that this (R1,s1-c) is actually the correct partial signature which is doing some elliptic curve math.

(s1-c)*G = R1+H(R1,C,m)*C - Pc
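
The following is a minimal sketch of that check and of the later completion step once c is revealed, assuming a toy multiplicative group in place of secp256k1; the hash construction, message text and variable names are illustrative rather than the exact scheme from the mailing list post.

```python
# Toy adaptor-signature sketch: Carol's partial signature (R1, s1-c) and Bob's check.
import hashlib, secrets

p, g = 2**127 - 1, 3
q = p - 1                                   # exponents live mod the group order

def H(*vals):
    data = b"".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q); C = pow(g, x, p)      # Carol's signing key and pubkey C
c = secrets.randbelow(q); Pc = pow(g, c, p)     # Carol's secret number and point Pc
m = "Carol (C) agrees to the bet with Bob (B) with conditions Pb/Pc"

# Carol: ordinary Schnorr signature (R1, s1), then the partial signature s1 - c
k = secrets.randbelow(q); R1 = pow(g, k, p)
e = H(R1, C, m)
s1 = (k + e * x) % q
partial = (s1 - c) % q                          # what Carol actually sends

# Bob: check (s1-c)*G == R1 + H(R1,C,m)*C - Pc, written multiplicatively here
assert pow(g, partial, p) == R1 * pow(C, e, p) * pow(Pc, p - 2, p) % p

# Later, paying the PTLC reveals c; adding it back yields the valid signature
s_full = (partial + c) % q
assert pow(g, s_full, p) == R1 * pow(C, e, p) % p
print("partial signature verifies and completes once c is revealed")
```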

Assuming that is alright then Bob makes the bet. He pays this wager to get this value c over Lightning as a PTLC. He bets his 5 cents in return for Carol’s number.

Q - What does Bob verify here? Does he verify that Carol has chosen a random number and it was Carol that has chosen that number?

A - Carol is sending Pc and (R1,s1-c) across. The Pc is the point committing to the random number and the signature is the signature but adjusted by the random number. What Bob is verifying here is if he gets the random number that corresponds to the point and then applies that to the signature he’ll actually get the signature for what he thinks he is going to get back. At the moment this signature on its own doesn’t mean anything because he doesn’t know what c is. Because he does know what Pc is he can bring these numbers up to the elliptic curve thing and do all the math because all the elliptic curve points are public.

Once he’s got the preimage for Pc he knows Carol’s number and he can calculate the signature (R1,s1-c+c). At that point he can figure out whether he has won or lost. If he’s won he can provide the original message and the numbers b and c and prove to everyone that Carol owes him money.

Like I said he works out if he won or not. If he did win he is going to sign a message saying “Carol (C) has paid me (B) my winnings for our bet with conditions (Pb/Pc)” so that he can eventually give this signature to Carol when she’s paid him and everyone can be square. In the future if Bob tries claiming that he’s been cheated Carol will say “I’ve got this signature from Bob and he says I paid the money so we’re definitely square.” He gets the signature (R2,s2). He sends the signature nonce (R2) to Carol and he sends his number b. At this point Carol can evaluate b and c and tell that Bob has actually won. He sends a Lightning invoice for his winnings and then payment of that Lightning invoice will reveal s2 which will give Carol her signature. Carol needs to check that the signature stuff is right, that Bob did actually win and pay the invoice so everyone is square.

(Carol checks that b*G = Pb, and that R2 together with the PTLC will form a valid signature)

Does it work?

A couple of caveats with that. There’s the possibility that Bob has chosen a winning number, he gets Carol’s c and then he just finishes. At this point he has a signature that says Carol owes him money but he has never told Carol anything, so Carol doesn’t know that she owes him any money. If you are going to be publishing public proofs of “Carol has cheated, she owes me money, she hasn’t paid so she can’t provide the signature from me because I haven’t given anything at all to say she’s paid so she’s a cheater” then you need some way to deal with that. If you are dealing with a court where people go in person then at that point the mediator can sort that out. Otherwise the mailing list post about this has some ideas for how you could use Lightning to verify a public proof of things which then has further complications. The other thing is what if Carol does an exit scam. She has got a bunch of bets, she owes half of them winnings and then she doesn’t bother paying and doesn’t do anything more ever again. She doesn’t care if her reputation is ruined.

Does this generalize?

That’s the bad side. The good side is that absolutely none of this is onchain. It is all purely Lightning transactions, it is not even complicated ones. The stuff you do before you call the Lightning client is a bit complicated but the Lightning stuff itself is just PTLCs. The other nice thing is that all the complicated conditions on the bets are not even getting passed to Lightning. They are not onchain, the Lightning client doesn’t have to know anything about them. You can generalize that just by changing a Satoshi Dice website, Javascript, Android app, whatever. You could make that both more complicated, instead of just a dice roll you could have different levels of pay-offs, like a scratch it or something, or you could have payouts dependent on some oracle, sports betting etc. In general I think you can have pay outs of any sort of nature as long as they only end up being payouts in Bitcoin and not payouts in some other asset like cash or stocks or something because you can’t send those over Lightning in the first place. I think in theory because no funds are locked at any point as long as you trust that the casino is not going to do an exit scam on you you can extend this for bets that go over a longer period of time as well.

Implementable

It is pretty close to implementable. The Suredbits guys have a PTLC proof of concept that works on testnet I think and should work on mainnet. If you are doing PTLCs with Taproot then at least theoretically you can do them on Signet already. That’s my theory.

Q - At the beginning you talked about how this may be related to other applications other than shares. On the mailing list you talked about how it might be used for prediction markets as well. I don’t get that bit, how does that relate?

A - A prediction market says you win 5 dollars if something happens and you lose 2 dollars if it doesn’t happen. The “it happens” is some condition that, at least with an oracle, you can verify programmatically in some scripting language, and you can define whatever scripting language you want for this. With a prediction market you want the predictions to last over a moderate period of time. You’ve got the exit scam risk but otherwise I think that all works as a trusted but verifiable prediction market. The prediction market is holding all the money until the prediction gets resolved, you’ve got that risk still. But you are still able to do everything over Lightning and you’ve got this level of proof that they’re behaving properly.

Q - There was already this idea of paying for signatures but this is taking that a bit further. You pay for a signature and you can claim that against someone under conditions that are in the signature. The clever thing is you’ve got cryptographic commitments to the lottery numbers you’ve chosen in there. That’s the breakthrough. You can take that claim against the guy and if they don’t fulfill that then you publish it. That is really what the breakthrough is.

A - Shares are pretty hard because we can’t do Lightning over anything but Bitcoin. Or at least there are serious problems with doing Lightning over anything but Bitcoin at the moment. You could maybe do “if the share price is such and such then you get paid such and such in Bitcoin”, which you could then use to buy the shares afterwards, that sort of option. That devolves everything to a prediction market. The biggest problem with trading shares and options is locking them in. If you want to trade shares over Lightning then every hop in your Lightning Network has to own some shares that they can move from one channel to the other. This is probably completely impossible to make work. I think this might be a step forward to making that a little bit more reasonable because it gives you some of the benefits of decentralizing stuff in the casino model. I’m not sure how that works, I’m just hopeful that maybe one day it does.

Q - With the shares, the problem you just described as to finding a way to trade shares over Lightning, couldn’t you do an atomic swap? You are transacting Bitcoin like you said but then it automatically uses an atomic swap that swaps it into an asset in another blockchain.

A - In theory that sounds good. I think it has the same problem with multi-asset Lightning which is that because of the timelock… If you are trying to pay 100 US dollars to someone and you’ve got Australian dollars, you spend 130 Australian dollars to someone, that gets forwarded as 129 Australian dollars, 128 Australian dollars, then gets converted. The problem is that your Lightning stuff has the timelock of a few days. So if the price is changing over a few days that’s giving the guy who is doing the conversion a free option to cancel your transaction or send 100 dollars and take 5 dollars profit because the Australian dollar has improved. I don’t see how you reduce the timelock enough to make that work. If you could do that then yeah you’re probably right. If you have your Lightning channel with your shares against a broker and the broker has got a lot of shares that get distributed amongst their clients and they want to transfer that across to some other broker, they are just moving money to settle the difference. The end broker reconverts that back into shares and you solve this multi-asset problem then I think that would be pretty cool.

Q - That is a problem a lot of people right now in the normal market are betting on as well, on just having the little window of time to play with. I don’t know if it is an actual problem.

A - The whole two day window of shares getting settled was why Robinhood had to cancel the GameStop things.

Q - Did you ever use Satoshi Dice in the early days? What were people doing?

A - I wasn’t around in the early days.

Q - Were people literally making really tiny bets just on random numbers? The equivalent of a few cents?

A - It was on Reddit or Bitcointalk. People were like “I made all this money on Satoshi Dice”. That is what I saw people do.

A - The news articles were reckoning that 400 million dollars in Bitcoin back in 2012, 2013 days had gone through it.

A - As far as I remember more than a million Bitcoin went through the most popular address on Satoshi Dice, a million Bitcoin. It was filling up the blocks.

Q - It was purely betting on random numbers? Is this a possible thing to kick off use of Lightning? Could we all make tiny bets of a few satoshis on random numbers? What is to stop that from happening?

A - Let’s do it, let’s figure it out. That’s the coolest thing about AJ’s thing, it lets us figure out the answer to the question without more complicated stuff. We can actually build this now.

Q - How do you build in a house edge? You have a certain probability and that’s how the house gets their edge?

A - The idea is that the casino makes an offering, the 48 percent tells you which set of numbers win, and the payout is part of the message that says “I made this bet with Carol, I am owed 50 satoshis because I bet 25 satoshis on the conditions Pb/Pc, and then I will tell you what b and c ended up being so you can see that I actually did win with the 48 percent odds.”
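
Worked out with those illustrative numbers (25 satoshis staked, 50 paid on a win, 48 percent odds), the edge is simply the gap between the expected return and the stake:

```python
# Illustrative figures only: a 48% chance of turning a 25 satoshi stake into 50.
stake = 25          # satoshis wagered
payout = 50         # satoshis returned on a win
p_win = 0.48        # advertised odds

expected_return = p_win * payout            # 24 satoshis back per 25 staked
house_edge = 1 - expected_return / stake    # 0.04, i.e. a 4% edge for the casino
print(expected_return, house_edge)
```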

Q - Another thing I recall from those days is they would keep doubling up and doubling up and doubling up.

A - That’s martingale bets.

Q - Yeah. What if at the moment you were trying to do the payment there was literally not enough in the route even using MPP, whatever. There just wasn’t enough to route to that person. Is that person left in the lurch? They have technically won but they aren’t able to claim their payment. Would they need to try to get more inbound capacity and then claim it? How would that work?

A - If they don’t have inbound capacity then they are bit screwed. If you are making a bet that is big enough that is worth doing onchain then you could do it onchain. But if you don’t have any inbound capacity and you don’t have dual funded channels then nobody has any inbound capacity when they start out with Lightning, this is all screwed in that case. If you are making a bet where you are winning 1 percent of the time and you get 95 times what you bet then maybe you send the money on Lightning and you get paid on the blockchain.

Q - One of the reasons Satoshi Dice was successful was because fees were basically zero. What’s the latest state in terms of fees on Lightning? For historical purposes it is great that everyone doesn’t have to store these transactions because it is on Lightning but you are still potentially having to store lots of tiny HTLCs until you close the channel. Maybe the really tiny bets don’t make sense on Lightning.

A - All of these PTLCs can get resolved immediately. You send your wager over and you are trying to get c back, there’s no need to keep that open for any length of time because Carol already knows the c that she decided. She has to look that up in a database, send it through, collect the funds, funds go in the channel and it is done. The other thing about this is if you are doing everything on Lightning then apart from the casino’s edge all the money is going in from a dozen people and then getting paid back out to six people so the casino’s channels should stay relatively balanced.

Q - Transaction fees are currently really low on Lightning. I read an article a few weeks ago, if you look at what transaction fees are currently charged on Lightning then the only conclusion is that nobody is making money on transaction fees. It is not economically viable at the moment. It is just a bunch of hackers keeping up the Lightning Network, nobody is making money off it.

A - Alex Bosworth sounds like he’s making money on it if you read his Twitter stream.

Q - I think he is but he is probably the outlier. That said, a Bitfinex guy, he said they have done 12,000 Lightning payments in and out of Bitfinex. They are one of the leading Bitcoin exchanges who supported Lightning very early on compared to others. Potentially as this bull market heats up you’ll see more and more people getting on Lightning. We are seeing promising signs.

A - The longer you keep channels open the higher chance you are making some profit off them. Obviously you need to rationalize your fees. Regarding a casino like this if you were a gambler you would open a direct channel and play for free.

Q - Have you seen lightningspin.com?

A - That’s been around for a while?

A - I think it was Rui Gomes from OpenNode who created it. I think he sold it off to somebody else when he went to go work on OpenNode.

Q - My hypothesis is fees are going to go crazy, if we are in a bull market, towards the end of the bull market fees are going to go crazy onchain and then people are going to be forced to use Lightning. Then I expect the fees will go up on Lightning and then there may be money to be made for Alex Bosworth and the expert routers.

A - If fees go up on Lightning at least you can always make more capacity on Lightning.

A - To have a channel, a routing node pointing towards a use case like this which is a balanced use case, you can afford to lower your fees because the traffic will be much higher. Whereas if it is a trading peer, like for example on Bitfinex everyone is just depositing, they are not withdrawing on Lightning, so the traffic is unidirectional and you need to increase your fees a lot to make back the cost of rebalancing or opening a new channel. Same is true for the Loop node, a submarine swap service, where everyone is just swapping their money out to onchain rather than doing the other way. I think it is more affordable to have low fees towards this kind of service.

Taproot activation

Taproot activation meeting 2: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html

Let’s talk about Taproot activation so we can actually have this thing working on mainnet.

So after 2017 and the whole UASF thing there is kind of a split community where there are people who thought 2017 occurred in a way that UASF meant SegWit was activated and UASF was the crowning glory and then you’ve got the other half of the community that’s like UASF was totally reckless, it should never have been attempted and the only reason why Bitcoin didn’t die was because a bunch of smart, conservative people had discussions with miners and made sure it didn’t split the network and kill Bitcoin etc. I think people are in those camps and it is informing how strongly they feel about this lockinontimeout discussion. We have a revised BIP 8 and within that there is a parameter called lockinontimeout. Some people think that should be set as true as the default which would basically mean that Taproot definitely activates after a year. If miners fail to activate it will activate at the end of the year regardless. And then lockinontimeout being false would mean if miners fail to activate then it wouldn’t activate after a year. It is only a clause if miners fail to activate. Given that miners have said that they support it, there doesn’t seem to be any controversy, you would expect it to activate before that lockinontimeout parameter is even relevant. Yet people have very strong views on whether it should be set to true or false. I think a lot of those arguments are kind of symmetrical. Some will say “You are totally reckless proposing some people run lockinontimeout=true because Core should release lockinontimeout=false and if you release something with true then you can split the network.” But the same argument applies the other way. If Core released true anybody running software with false is potentially risking a chain split on the network. It is hard because you have strong views on both sides. For a one off soft fork I lean towards doing true just because I think it is cleaner, you don’t need any unorganized UASFs because you definitely know it is going to activate after a year. But Greg Maxwell and David Harding had this argument where developers shouldn’t be releasing something that is definitely going to activate. That puts too much power into their hands and bad precedent etc. They could in future push a soft fork that isn’t widely popular amongst the community and it would end up activating because developers want it to activate. I don’t know why people are getting so passionate about it and so rigid in their views. My preference would be true but I’m also happy with false. Luke Dashjr is very strong on true being set and Matt Corallo is very strong on false being set. I think AJ wavers between the two. I don’t know how it is going to resolve. My expectation is that Core will release false if it releases anything and then there will be a community version that releases true. I think that is what will happen because a number of Core contributors are against setting lockinontimeout (LOT) to true in Core. So either Core releases nothing and we try to activate this with non-Core releases, community releases, or Core releases default lot=false and then there’s potentially a community release with lot=true that some people can choose to run. That is how I think it will end up. I am just trying to make sure that people don’t start shouting and swearing at each other. I think that has been my primary job so far.

I think that was a fair summary at least from what I understood.

How familiar are people with what BIP 8 actually does? Has anyone looked at BIP 8.

There is a picture. The idea is that we define a new deployment, we say “We want to deploy Taproot.” The green section (ACTIVE) is when the Taproot rules apply. We start off in the DEFINED section, we eventually reach start_height whatever that parameter is set to. We seem to be looking at late July for that. It goes into STARTED and at that point miners, or rather blocks, can signal to enable activation or not. The threshold we’re thinking about is 90 percent of blocks in a retarget period, 2 weeks, 2016 blocks. If 90 percent of blocks in a retarget period signal it then we go into LOCKED_IN, we spend another 2 weeks sitting in LOCKED_IN so people can go “Oh my g*d, this is coming in two weeks” and actually upgrade and not just talk about it. At the end of the two weeks it is ACTIVE. If we’ve got this lockinontimeout thing set and we get almost up to the timeoutheight then we reach a MUST_SIGNAL phase. During that MUST_SIGNAL phase there has to be 90 percent of blocks signaling. At the end of that we do the same LOCKED_IN, ACTIVE thing. If we don’t have lockinontimeout set and we get to the timeout then it just fails and we don’t have Taproot active. The whole having to do things rather than just making sure everyone is happy with everything kind of sucks. People get upset about that either way. The ideal path is this. We go through STARTED, we skip the MUST_SIGNAL, we get enough signaling to go into LOCKED_IN naturally and we go to ACTIVE. Odds on, touch wood whatever, that’s what will happen. The MUST_SIGNAL is there as a safety valve so that we don’t have to do all the BIP 148 stuff at the very last minute like we did last time.
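
A rough sketch of that state machine as described, evaluated once per retarget period with the parameters mentioned (2016-block periods, 90 percent threshold). This simplifies BIP 8 and is not Bitcoin Core’s actual implementation.

```python
# Simplified BIP 8 state transitions, checked at retarget period boundaries.
PERIOD = 2016
THRESHOLD = int(PERIOD * 0.9)        # 90% of blocks in a period must signal

def next_state(state, height, signalling, start_height, timeout_height,
               lockinontimeout):
    if state == "DEFINED":
        return "STARTED" if height >= start_height else "DEFINED"
    if state == "STARTED":
        if signalling >= THRESHOLD:
            return "LOCKED_IN"
        if height + PERIOD >= timeout_height:
            # last period before the timeout: force signalling or give up
            return "MUST_SIGNAL" if lockinontimeout else "FAILED"
        return "STARTED"
    if state == "MUST_SIGNAL":
        # non-signalling blocks are invalid during this period, so it locks in
        return "LOCKED_IN"
    if state == "LOCKED_IN":
        return "ACTIVE"              # rules take effect one period later
    return state                     # ACTIVE and FAILED are terminal
```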

Just to clarify you don’t get into that MUST_SIGNAL phase if you’ve set lot to false. That MUST_SIGNAL phase is only accessible if lockinontimeout has been set to true in your software.

Importantly it is upgradeable. If the majority of people have lockinontimeout set to true it drags everyone else in because, by forcing all the miners to signal, anyone who was just going “It hasn’t timed out yet” sees that they’re all signaling. It has this ability that you could start with lockinontimeout being false and decide later “No that’s it. We are going to go hard, we’re going to turn it to true.” Those who set it to false will also just follow along assuming a majority has set true.

If the chain with the most proof of work on the network has enough signaling then that will satisfy both lockinontimeout as true and lockinontimeout as false nodes. They will all end up in the same state. They will all be enforcing Taproot rules. The only difference is if somehow there’s only a minority of people with lockinontimeout as true, they don’t get the most work chain, they’ve got maybe a 20 percent hash power fork so it is a much shorter chain, it is much slower, it activates Taproot and the longer 80 percent chain doesn’t then you’ve got a network split like the BCH thing. Taproot is on the wrong side of it for some reason and everyone is unhappy.

There is a valid chain split risk if say half the network is running lot=true and half the network is running lot=false because some people will be enforcing that MUST_SIGNAL phase and other people won’t. You could potentially have a chain split if we get to that scenario. We only get to that scenario if miners have failed to activate for a year.

Even in that scenario miners have to also mine both chains. In that case the miners would also have to be split. If the miners were unanimous then you would get one chain stopped and the other chain still going. You’d need everyone to split. If one large group agree and have a super majority it is fairly straightforward what happens. If there is a huge amount of contention it is all bad but I think that statement is generally true in Bitcoin anyway.

If the upgrade works like you said what’s the argument against doing that? It is a soft window. Give them a chance and after a little while, how is that not the best option?

That’s a good question. This is a topic of some debate. For me, you can’t back out. Once you’ve set lockinontimeout to true in a deployment that’s it, you are committed. You can’t later on go “That was a mistake, wait, hold up everyone.” At the moment with the old style lockinontimeout=false, if we found a bug or it was a terrible idea and we should not do this, we could kill the activation and tell the miners “Please do not activate. Nobody set lockinontimeout = true, we’ll let this one die in a year’s time and we’ll do the real one later.” That is an unlikely scenario. But in a meta sense I think it is really important because with this way everyone gets their say. The developers go “We have proposed it, we’ve created it, we’ve set up the conditions but we are not going to be the ones who dictate that it happens.” If the network rejects it, the network rejects it. Then the miners get their shot, they get a chance to activate it. And then the users get their shot, they get a chance to say “F*** you, the miners are being obstructive. We really believe this should be activated so we are going to put lockinontimeout to true.” If the economic majority of Bitcoiners do that then the miners can mine the other chain if they want to but it won’t be worth anything. The thing I like about lockinontimeout=false is it gives everyone a chance to have their say. If things go well and we all get consensus then we know that everyone has bought into this. You can never truly know you have consensus in Bitcoin. We believe that miners are enforcing SegWit today, we believe that nodes are enforcing SegWit but the only way to find out is to actually mine a block that violates some of the rules. It is an expensive test and even then it is only a statistical test. But for all you know everyone stopped enforcing it last week and you just haven’t been told. You can never truly know you are in consensus so we have to have this mechanism that is slow and gradual. Everyone signals and says “Yes” so we are all fairly convinced that we are in this together. That applies to old soft forks, it applies to new soft forks. Importantly I think this is going to be pretty uncontroversial and go through whatever we do but I like the idea in future that we have this situation where everyone gets their chance to have their say so that we are all confident that we are all in agreement. We’ve not only got to all be in agreement, we’ve all got to know we’re in agreement. Because you can never really know what the network is doing. You can never really know what software people are running. I do like the idea of roughly three groups, the nodes, the developers and the miners, all getting an opportunity to sign off on this. At the end of the day the economic users of the network can impose whatever rules they want. They can override everyone. If they set rules and the miners don’t like it the miners just mine something that isn’t Bitcoin. They always eventually have the override but as far as possible it is good if we have this broad consensus. And I don’t want a perception that any one group is in control, I don’t want a perception that developers can define what Bitcoin is and when we can upgrade. Developers don’t want this because if they are perceived to be a point of control on the network then they are going to have the s*** lobbied out of them over every single issue and upgrade that anyone could ever want.
If it is decided that actually the devs set lockinontimeout=true and it is going to activate in a year come hell or high-water then we’ve removed an important check and balance on future development decisions. I don’t think developers want that because the next thing that happens is that you start lobbying developers for every possible change you might want. As we see Bitcoin become more serious that is exactly the kind of gamesmanship that we expect to see. By making it very clear that the devs do not activate things, they get to step away from this decision making process. They always have a huge influence of course and there are day to day decisions that they make that do have a huge influence but on something as key as this I think it is really important that it be very clear that developers are not in control and bribing or threatening the developers does not give you control of the Bitcoin network. For their own sake they do not want to be in the driver’s seat on these things.

Was there any discussion around this idea of first releasing code that has lot=false and then changing it at a later date or is it seen as that it should be a community driven thing if somebody in the community wanted to release a version of Bitcoin that has lot=true?

We are headed to that by default because Luke (Dashjr) has already said he is going to do lockinontimeout=true. There are a number of people who have suggested a hidden option, an undocumented option, that would be in the initial release to say I want lockinontimeout=true. That makes it easier if we decide to go that path. Users don’t have to upgrade or go for some dodgy patch from somewhere. They have already got it, they just need to change one config option. It makes it as simple as possible to put power in their hands. That’s my personal approach. I think that having that option there, default off, but users should be able to turn it on without having other consequences. You could do it by upgrading for example. What else are you going to get with the upgrade? Changing a single option is an isolated decision that I think users can make. A third party release is pretty bad because I’ve already decided I know how to validate the Bitcoin Core releases and I trust those developers enough to do that. I’ve now got this other release. That’s a whole rigamarole. Who released it? Do I trust them? Who has vetted the patches? All those kinds of things. I just think it is cleaner for the developer to make an explicit user choice.

Let me try to do a good job of summarizing Luke’s arguments because I think he has some very good arguments in response to Rusty’s arguments. One is that miners are only signaling, they are not voting for the change. Everything should be preloaded before the activation mechanism. We should have all the discussions, all the stakeholders should discuss whether they want Taproot and if there is any substantial opposition and people don’t want Taproot then we shouldn’t even attempt the activation. The only reason we are even discussing activation is because there is broad consensus amongst the community across all the different constituents, users, miners, developers etc that this is a change that we all want. That would be one challenge I’m sure Luke would say if he was here.

Note that by that measure SegWit would never have activated. Miners had huge opposition to it. They did not express it. You could say “We have this broad consensus” and the developers believe they have, and I think they’ve done a great job of doing so but let’s verify that, not trust it. The devs can individually be convinced but they can’t convince me for example, there is no transferable proof. So by getting as many groups to buy in and signal that, signaling of course is different from actually doing it. Everyone can lie but at least it is better than nothing. In the past we have seen that that argument is wrong.

Why didn’t the miners express their dissatisfaction with SegWit? What was in their minds?

Because they were secretly violating a patent and they weren’t going to tell anyone.

ASIC Boost? That is a funny situation, I guess that’s a special case.

It is not that they were violating the patent at that point, though they are now because the patent has actually been granted as of August. What they were doing was a competitive advantage that they didn’t want to tell anyone else it was a competitive advantage because then they’d do it too and it wouldn’t be a competitive advantage. You can’t explain something and then lose the point of doing it in the first place.

As we understand it the mining suppliers were strong arming the miners. At this point we have a concrete argument that they didn’t express anything and suddenly boom. And the things they did express weren’t genuine and were dismissed. Consensus is hard, I think it failed in that case.

With the New York Agreement, SegWit2x, they wanted a block size increase. There were very few people who were unhappy with SegWit, they just wanted an additional block size increase on top of SegWit.

That is not clear. That seems to have been a fig leaf for Bitmain. They were very unhappy with SegWit.

For ASIC Boost, yeah.

It was a huge threat to their margins.

I suppose there were different players wanting different things.

That’s right. SegWit2x seems to have been a negotiation to try to get more players onboard with their thing or a delaying tactic. It is hard to say. At the end of the day, agreements and blog posts are weak signaling. I do like to have signaling that is closer to Bitcoin itself.

Another Luke argument would be yes we don’t want developers making all the decisions and perhaps there is even that argument where we should do things differently if there is even a perception that developers have the power to do that…

I don’t think Luke would argue that. I think Luke is like the developers are right.

Without Luke being here we can only guess what he would say. I would guess he would say the defense against running something that the community doesn’t want or the user doesn’t want is the community or user not running the software that Core releases. If Core released a contentious change there would be a massive community uprising and there would be a different release to run, released by a community group of developers. That Core release would never get activated. Similarly miners don’t have to run Core software. This “We need miners to pull the final switch” is kind of passing the buck. Even though we’ve got this community consensus, even though we’ve included miners in this process to get consensus on the change we still want miners to pull that final switch because we are worried about this perception that developers have too much power. That is potentially just passing the buck. We’ve already come to consensus, so developers should just push the superior change that is cleaner, that doesn’t need any uncoordinated UASFs on top, and we can all be done within a year. Everyone has got clarity, everyone knows that it will activate within a year and the ultimate defense is just don’t run what Core releases if you are that angry about it.

It is not that easy though. You might step forward and then the debate explodes and then you decide you have to upgrade twice, you’ve got to downgrade. Those who don’t now are screwed because they are locked into something. There is a huge amount of inertia, it is a blurry line. In this case, personally I feel really happy with Taproot. I feel happy with the review, I feel happy that everything has gone through and all these things. But I do feel that we are using this as hopefully something that we will use for future activations. Not all of them may be quite as clear. I think this idea that devs have done all this due diligence so you don’t need to is a little bit awkward. The reason we let the miners signal is because they are easy to count, not because they are particularly special. Everything else is very much a qualitative view. They are so easy to count in a distributed manner. But they are not the true power in the network, nor are the devs, as you say the users are, the users can choose what to run. But the devs have a huge amount of influence, defaults have a massive amount of influence. Unless you feel so strongly that you are prepared to pay someone to write something else or that you can do it yourself, there is a significant bar to running anything but Core. Not a bar for Luke who is a developer himself but for everyone else it is a big deal. If you have a significant amount of discontent and I feel that if you don’t give people a way to express that discontent you seem to have consensus until you really need it and then you find out you didn’t. I feel it is better to give people the option to signal and express things because then I feel more confident that actually we did have a vast majority of people feel this way. We gave them an opportunity to say no and they didn’t take it. Not like “All they had to do is patch their software and install this dodgy patch from some crazy Bitcoin Cash dude who decided to create a patch for them.” They could have resisted, they could have opposed it. Since they didn’t do that clearly we have consensus. No. We should make it as easy as possible for people to express themselves. I think that’s important. I think in this case it won’t be necessary this time but I’ll feel much better knowing we have that option because at the end of the day users do have that option, they just have to be driven pretty hard before it would happen.

It is miner signaling rather than user signaling. The way you are talking it is almost as if users are pulling that final switch. But it is miners pulling that final switch and users don’t necessarily want to trust miners any more than they potentially want to trust developers.

No. On the surface it seems like miners control the signaling in the same way it seems like miners control Bitcoin. But miners are more exposed to forks. If there are multiple options miners have to decide at every moment what they are mining. There is opportunity cost in making a mistake. You or I sitting here as HODLers can sit out a fork for a lot longer than a miner can. That’s why the economic weight at the end of the day is controlled by the actual users, the people who run nodes, the people who actually use their Bitcoin, doing economic activity with Bitcoin. They have huge leverage over the miners because the miners literally have costs and they are burning them all the time. They have to decide. They are a lot easier to pressure than you would think. They are terrified of the idea of being on the wrong side of a fork in a way that I don’t care. If I’m on the wrong side of a fork I can just upgrade my software and I haven’t moved my coins, it is all good.

So why is it important that we get miners to do that final switch pulling or that final readiness signaling if they can be pressured?

I thought it was more about the devs not putting in a change unilaterally. I thought that was the argument.

The reason we care about miners is not just that they are easy to count, it is also that if we’ve got a huge amount of hash power supporting something and actually validating, not building blocks on something that doesn’t validate, then the likelihood of getting a chain of 2 or 3 or 6 blocks that are invalid is extremely low. You don’t get the 6 blocks are confirmed, oops there is a double spend in it, oops it all gets re-orged out and someone has lost money. That is the reason why we’ve got a 90 percent threshold and why our preferred way of getting things done is by having miners actually upgrade, actually signal that they’ve upgraded and that we’ve reached this threshold, with everyone being happy and not fake signaling or whatever else.
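
As a rough back-of-the-envelope illustration of that point, assuming the 90 percent figure above and ignoring variance and strategic mining, this is only a sketch:

# If 90% of hash power enforces the new rules, the chance that the next k
# blocks are all mined by non-enforcing miners (so a k-deep invalid chain
# could appear confirmed) is roughly 0.1 ** k.
non_enforcing_share = 0.10
for k in (2, 3, 6):
    print(f"{k} consecutive non-enforcing blocks: ~{non_enforcing_share ** k:.0e}")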

That is an issue too. There is a trust issue because people can signal and it is almost completely detached from what they are actually doing. But also of course if 99 percent of the nodes are upgraded then nobody gives a f*** what the miners do because you won’t even see it. You only have a problem in the case where you haven’t got enough nodes upgraded and you haven’t got enough miners upgraded, where it kind of falls apart. If all the miners are upgraded you’re good, or if all the nodes are upgraded. Technically if 51 percent of the mining power is upgraded you are good, whereas the vast majority of nodes have to be upgraded so that anyone who wasn’t isn’t partitioned off. It is much harder to count that. Ideally they are both upgraded and it is all good but it does come back to the miners being easier to count at that point.

Yeah, with only 51 percent of the hash power upgraded you can still have really long re-orgs if the 49 percent get lucky of course.

Absolutely. The miners are really on the sharp end of this. The miners do not want to be on the wrong side of that, more than anybody else. If your node is on the wrong side of that then transactions that you make during that time may be vulnerable, but for miners they are definitely vulnerable if they are on the wrong side. Whereas a node that is on the wrong side, you just upgrade it. You upgrade it to whatever fork wins and you haven’t been hurt in any way. While Bitcoin is not used for compulsory economic activity, I think that will still be true. And it will always be true for the miners, the miners will always be subject to this whereas economic nodes at this stage I believe are much more resistant to chain forks because they can just stop in a way that miners can’t really.

Once people are running nodes with lockinontimeout set to true or false is there a way to query? Will we be able to query someone’s node to see what they’ve got it set to?

I personally believe that it should also change the default user agent for exactly that reason. The only way to find out would be to actually do the fork. People are presumably doing it in order to signal “Yes I want lockinontimeout=true” so I do believe that the default user agent string should change in that case so we can run some stats and have some signaling happening. But I haven’t seen any discussion on that.

That would be solved, assuming I’m right about the likely scenario where Core releases false and a community version releases true; you would be able to see the version that people are running when you connect to them, when you do that peer handshake.

I like to think it would come out as a patch to Core anyway. Hopefully it would just do both: change the default user agent when you turn it on.

In a scenario that miners don’t activate in say the first 6 months you think Core will release a new version with lot=true even though they released lot=false 6 months ago?

I’d like a hidden option in Core which is always there from day one, which is lot=true or taproot-bip8-lot-true or something, that would change lockinontimeout to true and also change the default user agent string so you can tell people are running it. That’s good because by the time 6 months comes almost everyone has upgraded anyway, they’ve just got to change their config. They don’t have to upgrade again or do anything else. They don’t have to find a new supplier. They can just use Bitcoin Core the way they always have and turn it on. But yes, I know Luke’s node software Knots will almost certainly run lot=true.

I was surprised to see him say a couple of days ago that he hadn’t made a decision on what Knots would do but that might have changed by now.

https://luke.dashjr.org/programs/bitcoin/files/charts/historical-pretaproot2.html

That’s Luke’s numbers on adoption after release of various versions of Bitcoin. That top line that is super sharp with number go up is the latest release with the Taproot code included but not the activation stuff. That going up really fast is a pretty good sign.

It is generally perceived that what happened with SegWit was bad but I’m not understanding fully why it is so bad. When I think back we tried to do the soft fork, it didn’t work out and I presume that during this time people figured out it was because of this patent thing. That was seen as not a reasonable reason to oppose SegWit and it just had to be done by users in the end. You had this conflict on the technical level where we had a very good technical solution but it changes the game for miners in a way that they aren’t going to support it. It is a difficult one but it is one that had to be done and that period where you didn’t have an automatic lock-in, you waited for the signaling, gave time for this information to come out and then make the final doomsday decision to do the UASF. It seems like although it is messy maybe it wasn’t so bad.

Messy in Bitcoin is bad. We won at chicken, they conceded and that’s great. But that doesn’t mean anyone wants to go through it again, because if it had come to a real fight between the UASF and various miner interests it would have at least been a huge stutter for the use of Bitcoin. With the chain split and uncertainty of what is going on every exchange would have had to shut, everything would have had to wait it out while the battle roared. Whichever way it went it would have been a war. Even though we won, it was much better to go the way it did. Nobody wants to trigger that again. Nobody wants to step that close to the edge I think. Even though it worked out there are two things. One is everyone wants to make it easier to do a UASF next time, which is really what BIP 8 is all about if you do it with lockinontimeout=false by default. It makes it very easy for users to do that. I am arguing it should be even easier.

With SegWit, it was enforced by a UASF after it floundered, which is similar to what we are doing here. Isn’t it the same thing? What’s the difference?

A whole heap of people upgraded, threatened and signaled and there was huge uncertainty. Miners, remember, are terrified of forks, they do not want to be on the losing side of this. A hundred percent of them started signaling. We never found out if anyone was running that code. I know personally I was running that code but we never found out. We never had a test because literally 100 percent of miners signaled. There were no blocks that did not signal, which is unheard of, and it means there is probably too much miner concentration in the world; if you can get 100 percent of miners to agree on anything, something is wrong. It was ridiculous. They literally all caved and signaled it because they did not want to be mining invalid blocks. But we never knew what the percentages were. And on the economic nodes, was it just Luke and three friends or was it actually major exchanges who were going to go “SegWit or bust”?

I don’t think we had major exchanges saying that but I think we had a couple of companies, maybe Bitrefill or someone like that, supporting SegWit and running BIP 148. Whether they did or not or just said they would…

I don’t think anyone wants to go through that again.

The timeline for SegWit was signaling started in mid November 2016. We started the whole UASF discussion in I think February or March and that was about the same time that the ASIC Boost stuff came out. I don’t have the date that BIP 148 got finalized but I think it was April with the August deadline, 3 months or whatever. Then BIP 91, where the miners coordinated to actually do the 100 percent signaling was over two weeks beforehand in the second half of July. That BIP didn’t come about in the first place until the start of June. It was all very rushed especially compared to all the time we’ve spent on Taproot. Avoiding the rush alone would be a huge improvement.

The argument of course for doing the UASF with SegWit is that without it we potentially would have never had SegWit. We wouldn’t even be in a position to be discussing Taproot because SegWit would have never activated.

We did BIP 148 in order to try to get the SegWit BIP that was already progressing to activate. That would have timed out at the end of November 2017. But there was another proposal, BIP 149, which was let’s redeploy SegWit with a UASF from Day 0 as soon as the current deployment failed. It would have started in December or January 2017-2018. A lot of people who don’t like the lockinontimeout from Day 1 did like that proposal and did want to do something similar. It is just more delay basically.

I can only speak for myself but I was supportive of the idea of “We will force the miners.” They have not shown good faith. They do not have a valid reason for blocking consensus on this. I think that was a pretty common feeling. We will just do something. But there is a bridge between doing it when we have to, which absolutely we have to be ready to do, and doing it all the time by default which I think is a step too far.

\ No newline at end of file diff --git a/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/index.html b/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/index.html index 9f64773f0a..ca72bbc999 100644 --- a/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/index.html +++ b/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/index.html @@ -11,7 +11,7 @@ < Sydney Bitcoin Meetup < Sydney Socratic Seminar

Sydney Socratic Seminar

Date: June 1, 2021

Transcript By: Michael Folkson

Tags: Research, Covenants, Op checktemplateverify, Taproot

Category: Meetup

Topic: Agenda in Google Doc below

Video: No video posted online

Google Doc of the resources discussed: https://docs.google.com/document/d/1E9mzB7fmzPxZ74WZg0PsJfLwjpVZ7OClmRdGQQFlzoY/

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Bitcoin Problems (Lloyd Fournier)

https://bitcoin-problems.github.io/

This is Bitcoin Problems, it is a Jekyll website I put on GitHub. I am very bad at making things look nice but it is functional at least. What the idea is is that there are lots of open research problems that often don’t get written down in a particular place but they end up on the mailing list and stuff like that. Sometimes they are just things in my head and other people’s heads that you talk about on Slack or IRC or something and they don’t really get put down properly. But on the other hand we have a lot of really clever people in academia or in research labs who are looking for topics to research on. For example we have Lukas (Aumayr) here today who made a brilliant presentation last time on Blitz (Secure Multi-Hop Payments Without Two-Phase Commits). Now he’s done that he is going to be looking for other things to do. There are lots of people like him so I thought that at the very least we should have a website where we write down open research problems that someone could pick up and see if they can get to the bottom of it for us. Rather than it being these developers and independent mailing list readers and researchers having all the fun. Maybe we can get some other people in there to do it as well. I created this website and started writing down problems that are in my head that I thought other people should look at but maybe aren’t being paid much attention. I’ll give you some examples to start with.

This one (PTLC Cycle Jamming) here was my test one that I wrote to see if I could put an image here. This is one where you have these jamming attacks that we’ve talked about before. Someone makes a Lightning payment but does not redeem the payment and so the funds get locked along the payment path until the expiration of the timelocks. This is a kind of griefing attack. You can enhance this attack with the point timelocked contracts (PTLCs). If we upgrade to point timelocked contracts not only do we enhance privacy, we enhance privacy so much that it creates a new kind of jamming attack. Instead of just locking up funds along a single path that goes to different nodes, you lock up funds along a path that actually repeats through the same hop several times. This is a way of focusing an attack on a particular hop. The people who are forwarding the payments in a circular way through their hop multiple times can’t tell it is the same payment. At each hop it is randomized and it doesn’t look like the same payment as it was before. This is why it gives you privacy and why it gives you this attack.

Another one is removing cross-layer links, which is heavily related to this paper we discussed with Pedro (Moreno-Sanchez) and his co-authors which is about cross-layer deanonymization methods in the Lightning protocol. How you can link onchain stuff to offchain stuff through these cross-layer links that occur in the protocol. Can we have Lightning Network without these links?

This is what I’ve been doing and I am wondering what your thoughts are. I don’t know exactly where I’m going to go with it. My dream would be that each problem is really, really well specified and has a maintainer. I’ve tried to say there is a maintainer for each problem. It is open, has categories, has a discussion issue. You can click on the issue and discuss it in an informal manner. Then what I would really like is to have a BTCPay Server where you can pay to have these problems solved. Or at least pay into a pot so that if someone publishes a paper or puts some work into solving it they can maybe get the money at the end, if it is sufficient of course. The problems right now are probably not specified well enough to actually decide when that would get paid out. In the long run that’s where I’d like to go with it.

On the channel jamming for PTLCs, I haven’t heard of that before. I was under the impression that PTLCs were a strict improvement on HTLCs. This is an example where they aren’t and we need to solve this.

I think so. I don’t know if everyone else agrees that this attack is much worse than the original attack. It doesn’t let you lock up more funds but it lets you focus it on a particular hop. Is that much worse? I think it is. As an attacker you can lock up lots of funds, but in a channel with 1 BTC capacity in one direction, in order to jam that capacity up you need to spend 1 BTC doing that and lock up your own BTC. You can do that 20 times with 20 different 1 BTC channels. But with this attack you can lock it up with much less, 0.1 or even less BTC. That seems like a problem, you don’t want that. That might bring down the threshold to actually start doing the attacks. Of course these attacks don’t exist right now. Maybe the reason is people don’t care enough, probably that’s the main reason. The next reason why people aren’t doing them is maybe it is not super effective to lock up your own funds to lock up other people’s funds, unless you focus it on a particular person that you want to take out. That’s why the site exists: no one is thinking about this problem, let me write it down and see what people think.
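
A quick worked example of that amplification, with made-up numbers purely for illustration:

# Cycling one payment repeatedly through the same hop multiplies how much of
# that hop's capacity gets locked per BTC the attacker commits (fees ignored).
victim_channel_capacity_btc = 1.0     # capacity in one direction of the hop under attack
attacker_funds_btc = 0.1              # what the attacker actually locks up
loops_through_victim_hop = 10         # the route cycles back through the same hop

capacity_locked_btc = attacker_funds_btc * loops_through_victim_hop
print(capacity_locked_btc)            # 1.0: the whole channel jammed with only 0.1 BTC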

I think this attack was also possible before with HTLCs. I remember a paper where they used circular routes as well. Maybe clients now don’t allow for circular routes anymore? I don’t know, this exact method was also used with HTLCs as well.

If you can find that mentioned you should send that to me or make a pull request and put that there. In theory you can easily protect against the circular routes just because the HTLC preimage is the same on each hop. The privacy leak allows you to identify it.

To be clear, the payment hash would be the same and because the node can detect it is the same payment hash this guy might be trying to spoof or channel jam me. I am now going to stop accepting incoming HTLCs that have this payment hash?

Exactly. There is no reason it should ever cycle through you with the same payment hash.
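
A minimal sketch of that defence, with hypothetical names and in Python rather than any real Lightning implementation; the point is only that a repeated payment hash is detectable, whereas a re-randomized PTLC point at each hop gives no shared identifier to key this check on:

class ForwardingNode:
    def __init__(self):
        self.in_flight_hashes = set()

    def accept_htlc(self, payment_hash: bytes) -> bool:
        if payment_hash in self.in_flight_hashes:
            return False          # same hash cycling back through us: reject
        self.in_flight_hashes.add(payment_hash)
        return True

    def settle(self, payment_hash: bytes) -> None:
        self.in_flight_hashes.discard(payment_hash)

node = ForwardingNode()
h = b"\x11" * 32
assert node.accept_htlc(h) is True    # first pass through this hop
assert node.accept_htlc(h) is False   # the cycle is detectable with HTLCs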

So would this be a strong enough reason to not do PTLCs until this is solved or attempted to be solved?

You could do the PTLCs and just not do the randomization bit of it. There are other ways you can leverage PTLCs. I don’t know if I would say I would not do it but it is definitely worth the Lightning developers considering this problem before switching I would say. It is the operating nodes that have to pay the cost for this protocol change. Of course they can just not accept that if they don’t want to, it can be an optional thing.

Are you planning to attend Antoine Riard’s workshops on Lightning problems? I think they are coming up next month.

This is the particular one about fees. He writes a lot about Lightning problems but he is also doing a workshop on this fee bumping, what should be the rules to evict transactions from the mempool? I have a Bitcoin problem as a PR right now on that topic as I’m trying to teach myself and he has done a nice review for me. I need to go through that. Eventually there will be a Bitcoin problem on solving that. If we solve this problem out of these meetings then it can be the first Bitcoin problem to be solved.

OP_CAT and Schnorr tricks (Andrew Poelstra)

Part 1: https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-i.html

Part 2: https://www.wpsoftware.net/andrew/blog/cat-and-schnorr-tricks-ii.html

Next up we have something that has been on the list for us for several months, we just never get to it because we have very interesting speakers. It is about the implications of adding OP_CAT. OP_CAT is a proposed opcode, I think it used to exist but it was quietly removed in 2010. This is a blog post about the immense implications of adding this opcode and how it does much more than you would have thought. Just taking two bits of data and sticking them together is what it does. It takes the top two elements on the stack, puts them together and leaves the result on the stack. Ruben sent me this idea on Telegram years ago, maybe at the beginning of the pandemic. He was like “If you have OP_CAT we can do covenants, we can do CHECKTEMPLATEVERIFY potentially.” We have this opcode proposed as BIP 119, called CHECKTEMPLATEVERIFY, which is being considered seriously. The claim of this blog post, and what Ruben was saying at the time, is that if you just have OP_CAT you can do this. How on earth could you get CHECKTEMPLATEVERIFY out of OP_CAT? What is CHECKTEMPLATEVERIFY first of all? CHECKTEMPLATEVERIFY checks the structure, or in the simplest case the hash, of the transaction that is spending this output. I’ve got an output, I’ve got some money, I can put an opcode that says “It can only be spent by this exact transaction in this branch”. It doesn’t matter about the signature, you may check the signature as well, but it can only be spent by this transaction. It says CHECKTEMPLATEVERIFY and there’s a hash and it checks that the hash of the spending transaction is this hash, if I’m remembering correctly exactly how it works. The reason you may be able to do this without that particular opcode is because you already have a thing which takes as input the transaction that is spending this output. You have CHECKSIG, a signature checking opcode which checks the signature on a digest of the transaction that is spending the current output. When you sign a transaction, you take the transaction, you hash it into the signature and you check the signature in the output. You have the transaction as an input to this process of verifying the signature. What the trick is… This is what a Schnorr signature looks like:

s = k + xe

e is the hash of some things and the transaction digest. That is in the e. Then you’ve got your private key and you’ve got this random k thing. The approach for this cheating way Andrew uses here to get CHECKTEMPLATEVERIFY is to fix those other two things to 1. k and x, fix them to 1, then s is just 1+e, 1 plus the hash of the transaction roughly. In order to check that it is spending to a particular transaction, we can do that because we’ve fixed the private key to 1 and we’ve fixed this k value to 1. If the person who wants to spend puts s on the witness stack we can check s is a valid signature under the public key corresponding to 1. If we check that and we check its value, we check s is this value and we check that it is a valid signature, we’ve checked that the transaction spending it is the one we wanted. It is a really odd way of checking it but it certainly works. Instead of having these variables of public key and this k value we fix them all and now we are only checking the message. Rather than checking a signature on the message we are only checking what the message is effectively, by removing these variants. What does OP_CAT have to do with this? OP_CAT allows us, funnily enough, to do this 1+e. The difficulty of doing this before, Ruben (Somsen) said you could do this with ECDSA and I was like “But wait, you can’t do the 1. You can’t add 1, it is impossible to add 1 to something without a special opcode to do the 1+e modulo the curve order.” What Andrew has figured out, obvious in retrospect, is that if you want to add 1 to something you can just add the right byte. If the number ends with the byte 01 let’s say and you put the byte 02 onto the end instead of the 01, it is the same as adding 1. This uses OP_CAT to do arithmetic. Then once you’ve got the right s value he uses OP_CAT to concatenate the s value onto the point corresponding to 1, which is G, and checks the signature against it. That’s really difficult to get your head around. I was confused about it, I had to ask Ruben what was going on here. Eventually I got to the bottom of it. You can see these Gs. This is the public key corresponding to 1. They are being concatenated onto each other and then CHECKSIG, you check it against G and the signature G.s. Normally it is R and s, the signature. We are setting R as G because that corresponds to k. That’s the trick of this thing. It is not something I would have ever come up with but what that gets you is covenants without this particular CTV opcode. OP_CAT gets you that now. It is not in a particularly nice way, it is quite messy in several respects. One thing is you have to find your s value that ends in a particular byte pattern. You can’t just choose any s value, you have to keep going through them until you find one that ends in 01. Then you put 2 CAT on the end of it because 2 is 1+1. That is what he is going for here. That is the trick.
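
A minimal Python sketch of the collapsed equation being described, with BIP 340’s tagged-hash details simplified away; it only illustrates that fixing k = x = 1 turns the signature into a commitment to the message, i.e. the spending transaction digest:

import hashlib

# secp256k1 group order n and the x coordinate of the generator G
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G_x = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798

def challenge(r_bytes, p_bytes, msg):
    # e = H(R || P || m), simplified; real BIP 340 uses a tagged hash
    return int.from_bytes(hashlib.sha256(r_bytes + p_bytes + msg).digest(), "big") % n

# With x = k = 1 both the nonce point R and the pubkey P are just G, so the
# only free input to the signature is the message: the spending tx digest.
R = P = G_x.to_bytes(32, "big")
tx_digest = hashlib.sha256(b"the exact spending transaction").digest()

e = challenge(R, P, tx_digest)
s = (1 + 1 * e) % n   # s = k + x*e with k = x = 1, so s = 1 + e

# A script that pins this s and runs CHECKSIG against pubkey G is really
# checking "the spending transaction hashes to tx_digest", which is the
# CHECKTEMPLATEVERIFY-style behaviour described above.
assert s == (1 + e) % n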

Then given this trick, this fundamental trick he’s got, he claims in his next post that he can do vaults. You’ve probably heard of this idea of vaults which is an example of a dynamic covenant. It is more complicated than just OP_CHECKTEMPLATEVERIFY. A vault is when you take your money out and you send it to a particular address but it doesn’t go straight there, it waits some time. Then eventually it gets there after a timeout. It gives you time to take the money back if you didn’t really want it to go there. Instead of sending a payment that goes straight there, it takes time and then comes back if you sign something saying “I cancel that payment”. I can’t remember the details of this now, but he’s done that just using this trick and using OP_CAT to hash bits of the transaction. Instead of having a fixed transaction it is a dynamically constructed transaction using bits of the witness stack and using OP_SHA256 to actually do the hash and get the transaction digest on the actual stack from data that was passed in. It is much more dynamic than just a fixed transaction that can spend this output. It is a transaction with this particular structure, it can be any transaction with this particular structure. We only care about these bits of it. He manages to construct that rather remarkably using this trick. To top that off, he also says “In my next blog post I’ll show you how to do ANYPREVOUT just with this trick”. You wouldn’t even need ANYPREVOUT, maybe. Or you could do something like eltoo just with this trick. I haven’t figured out how he’s going to do that. Since Andrew is such a prolific researcher I thought it was worth pointing this out. We have to wait for the next blog post in the series to figure out how this magic gets turbocharged to achieve what he claims there. That is what we know so far.

Did you say you tweak the signature nonce until you get a signature that starts with a certain byte?

Yeah he wants 01.

You want 01 at the start of your signature?

At the end. He wants the s value which is 32 bytes to end in 01. Then when he puts it on the witness stack he is going to get rid of the 01, it is going to be 31 bytes there.

Why does OP_CAT need 01? Why can’t you concatenate anything? Why does it have to remove 01?

The reason is he needs addition. He needs to check that the signature is this hash plus 1. This e is going to be on the stack but he needs to complete the addition (1+e). This is the difficult thing. He needs to check that this s is equal to this 1+e. I get lost in the logic of it as well but that is what is happening. He needs to do this addition. We need this plus operator so he’s taking something that he knows ends in 01, the person who is doing it has found something that ends in 01, and adds 1 to it by concatenating (doing a 2 CAT) and checks the signature. Hopefully I’m putting you in the right direction at least to read the blog post. It ends in 01, he removes the 01 byte, adds 2 using the CAT operator, a 02 byte on the end, and that’s the result of the addition in almost all cases unless you have a modular reduction, which you are unlikely to have. Then the addition is done and now you know the OP_CHECKSIG is going to be checking what you want it to check. That’s the answer, the best one I’ll give.
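
A toy Python check of that byte trick, assuming the grinding has produced an s whose last byte is 01 so there is no carry or modular reduction:

s = (123456789 << 8) | 0x01          # some value whose last byte happens to be 0x01
s_bytes = s.to_bytes(32, "big")
assert s_bytes[-1] == 0x01

prefix = s_bytes[:-1]                # the 31-byte prefix the spender would provide
s_plus_one = prefix + b"\x02"        # the OP_CAT step: prefix || 0x02

assert int.from_bytes(s_plus_one, "big") == s + 1   # concatenating 02 added 1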

What do you think? This doesn’t sound like a proper replacement for CHECKTEMPLATEVERIFY, it is more like a trick?

I don’t know what he’s going for with this. It is very interesting but especially with this SHA256(“BIP340”), you have to concatenate that into the whole thing as well if you are doing the dynamic thing. You put BIP 340 in the message, it is not just the transaction hash. There is stuff before it, you have to concatenate other things before anyway. You have to do all this concatenation. I’m not explaining it very well now and I have read the blog post. Imagine trying to get the developer community to understand this, that’s the difficulty. Even if it works it is going to be a little bit inefficient.

You need 256 tries on average to find the correct s.

Yeah, writing that code is disappointing. The guy who has to write this loop where you keep trying until it ends in 01 before you can do the covenant.

This might speed up the acceptance of the BIP proposal for CHECKTEMPLATEVERIFY because you could argue that it is already possible right now in a weird way.

We don’t have OP_CAT. If we have OP_CAT then you’re having this anyway. Those who are fans of OP_CAT now have cause to accept that you are going to have covenants. That is a big breakthrough, that’s important.

I think that’s important. OP_CAT is not as controversial as OP_CHECKTEMPLATEVERIFY. OP_CAT, all you do is concatenate two strings. But OP_CHECKTEMPLATEVERIFY, how should this be used, how should this look? Maybe we will see OP_CAT in Bitcoin now.

OP_CAT is now more controversial because it makes the functionality of OP_CHECKTEMPLATEVERIFY possible. You sign up for more than you originally intended.

I guess so. A good point he makes in the blog post, this whole thing about covenants being dangerous or whatever is a bit of nonsense. It doesn’t make any sense and I agree with him. They are fun and they don’t hurt anyone.

One thing about OP_CAT is that it is already available in Liquid or Elements. You can do this sort of hackery in that environment and actually create wallets that use covenants and stuff without needing to finish the design of CTV or anything similar.

That’s right. In this blog post, what I assume has happened is that he’s so excited about this idea that he’s gone off and started implementing it in Elements. We haven’t gotten the blog posts after that. I’m hoping he’ll appear with eltoo on Liquid with OP_CAT.

There is no Taproot yet on Liquid.

You wouldn’t need it.

I thought someone was saying that Taproot was on Liquid already.

I haven’t seen that. I think they are still working on it. It is at least not yet live.

Alternatively Taproot is apparently already available on Chia.

Chia is an altcoin for anyone who doesn’t know what Chia is.

I have seen people complaining about it. It is destroying the SSDs on AWS or something.

We made GPUs expensive, now we are making SSDs expensive.

There is going to have to be an interesting conversation for the next soft fork. From what I see, ANYPREVOUT seems an absolute guarantee but then for covenants and vaults, Poelstra is talking about using OP_CAT here, there’s CHECKTEMPLATEVERIFY. Is the next soft fork just going to be a bunch of opcodes? We throw it out there and say “See what you can do with this kind of stuff”? Or is the next soft fork going to be “You can do everything with ANYPREVOUT so we are not going to do OP_CTV, we are not going to do OP_CAT”? I have no idea which is superior at this point. Do we have all the opcodes or do we just have one or two that allow us to do the majority of the stuff we want to do?

With ANYPREVOUT you can do everything that you can do with CTV and OP_CAT?

You can do vaults with ANYPREVOUT.

I can’t remember what ANYPREVOUT can do on its own versus what it can do with OP_CAT. OP_CAT is so simple that if you are making these weird tricks that you can do anything with, you assume that OP_CAT or something equivalent is available.

One thing I remember about why OP_CAT was maybe controversial is that you can keep CATing things together and make a really, really big stack element.

The way CAT works in Liquid is it’s limited to some power of 2 bytes, 64 or 256 bytes, so you can’t CAT together anything longer than that.

Seems like a simple solution.

Designing Bitcoin Smart Contracts with Sapio (Jeremy Rubin)

https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-April/018759.html

I’ve got this blog post, this is being released by Judica which is a research lab / development team started by Jeremy Rubin. Their first big release is this Sapio language, this Sapio compiler for the language. It is a smart contract compiler but smart contract doesn’t necessarily mean one transaction. It is a contract that could be expressed through many transaction trees. What’s the angle here? I have taken a look at this blog post. What he has got here is a really basic public key contract which means these funds can be taken by making a signature under somebody’s public key.

class PayToPublicKey(Contract):
     class Fields:
         key: PubKey
 
diff --git a/sydney-bitcoin-meetup/2021-07-06-socratic-seminar/index.html b/sydney-bitcoin-meetup/2021-07-06-socratic-seminar/index.html
index d281698f2c..1d26180221 100644
--- a/sydney-bitcoin-meetup/2021-07-06-socratic-seminar/index.html
+++ b/sydney-bitcoin-meetup/2021-07-06-socratic-seminar/index.html
@@ -5,11 +5,11 @@
 Video: No video posted online
 The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.
 Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-07
-First IRC workshop on L2 onchain support: https://lists.">
\ No newline at end of file
diff --git a/sydney-bitcoin-meetup/2021-11-02-socratic-seminar/index.html b/sydney-bitcoin-meetup/2021-11-02-socratic-seminar/index.html
index 94dd1f0a3e..364bd73031 100644
--- a/sydney-bitcoin-meetup/2021-11-02-socratic-seminar/index.html
+++ b/sydney-bitcoin-meetup/2021-11-02-socratic-seminar/index.html
@@ -5,7 +5,7 @@
 Video: No video posted online
 The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.
 Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-11
-Package Mempool Accept and Package RBF: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html
+Package Mempool Accept and Package RBF: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html
 With illustrations: https://gist.">
\ No newline at end of file +Meetup

Name: Socratic Seminar

Topic: Package relay

Location: Bitcoin Sydney (online)

Video: No video posted online

The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch.

Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-11

Package Mempool Accept and Package RBF: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html

With illustrations: https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a

Intro

I think the best thing for me to do is go through Gloria’s mailing list post. We’ll go through the package relay acceptance rules and look at how they differ from the existing BIP 125 rules, which we had a whole Socratic about a few months ago with Antoine Riard. Gloria can interject whenever she feels like talking about anything in particular. We are going to go through this excellent mailing list post that Gloria made with all these awesome pictures. We’ll talk about how the package relay rules may differ from BIP 125 replace-by-fee rules. These are the existing rules you apply when you get a transaction to decide whether you are going to include it in your mempool and whether you are going to pass it on to other nodes. With packages, which are multiple transactions relayed together, you need variations in these rules. They are really carefully thought out and explained here. A really valuable post. Let’s start going through them.

Existing package rules

https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a#existing-package-rules

As we already talked about, Bitcoin Core already has these packages as data structures but they are just not sent over the network. Is that correct?

Yeah they only exist within your mempool. We don’t think about packages until they are already in our mempool. We want to be able to validate them and relay them.

The reason packages already exist, one of the reasons would be child-pays-for-parent?

Yeah exactly. A small correction, BIP 125 is just replace-by-fee rules which is one of many mempool validation rules.

They are the rules you use to decide whether to replace if there is a conflict between the transactions coming in.

Right.

So the existing package rules, there is this MAX_PACKAGE_COUNT. This is the hard limit on the number of transactions you can have in a package. You have also got MAX_PACKAGE_SIZE. BIP 125 has a rule about not evicting more than 100 transactions; MAX_PACKAGE_SIZE is another constraint, on how big sets of descendants can be. In our existing mempool logic if you are adding an individual transaction you are not allowed to create chains of more than 25, an ancestor and descendant limit. A transaction with all of its ancestors or all of its descendants cannot exceed a count of 25 or 101 kilo virtual bytes. This naturally informs these package limits. If you have a package that is bigger than this it is going to exceed the mempool limits. It is a very natural bound.
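
(For illustration, a minimal Python sketch of those static limits. This is not Bitcoin Core code; the transaction fields and helper name are made up, only the 25 / 101 kvB numbers come from the discussion above.)

MAX_PACKAGE_COUNT = 25          # mirrors the 25-transaction ancestor/descendant limit
MAX_PACKAGE_SIZE_VB = 101_000   # mirrors the 101 kvB ancestor/descendant size limit

def check_static_package_limits(package):
    # package: list of objects with a .vsize attribute (hypothetical shape)
    if len(package) > MAX_PACKAGE_COUNT:
        return False, "package-too-many-transactions"
    if sum(tx.vsize for tx in package) > MAX_PACKAGE_SIZE_VB:
        return False, "package-too-large"
    return True, ""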

Is it possible to put the logic together? With the RBF rule of not evicting more than 100, is it possible to still have 99 evicted with MAX_PACKAGE_COUNT at 25, or is 25 the real limit?

You can be conflicting with 100 different transactions that are independent in the mempool.

They are orthogonal basically.

Yeah.

To clarify, is this when you try to replace multiple transactions? Can you walk me through an example as to when the 100 transaction limit can occur?

Let’s say you are trying to validate a transaction and it has 100 inputs. Each of those inputs is also spent by a transaction in your mempool. But those transactions in your mempool are not related.

You have multiple conflicting transactions? Or are you replacing multiple transactions with a single transaction?

Yes. I think people get confused because they think of there only being one original transaction. They are like “It is not going to have more than 25 descendants so how can you hit 100?” But you can be conflicting with many independent transactions.

One transaction can spend 10 different inputs. Each of those inputs can have a package of 25 associated with it.

This is what the rules look like and how they would affect relay. The basic one (1A), you’ve got 24 descendants, then you try a package with 2 more in it; that doesn’t work. 13 with a package with 2 more in it is fine (1B).

Basically package validation is hard because you are trying to prevent DoS attacks. If you have even just a package of 2 transactions the number of possibilities for interconnectivity with the mempool just gets more and more complex the more transactions you have. The reason why we have ancestor or descendant limits, also known as package limits confusingly, within the mempool is so we can limit the potential CPU time we are spending spinning around trying to find everything that we need to evict or everything that we need to update in case we get a block or we get a conflicting transaction, a replacement or whatever. But because it is really complex with packages we have to do one of two things. Either we very intensively compute all the different relationships before we admit it to our mempool, which can take a long time, or we use heuristics such as the one in 21800 where we assume that the package is very highly connected. But we have to be careful about the way that we construct these heuristics because if it is too loose or too imperfect then we might accidentally create a pinning vector. That’s what this section is about.

“The union of their descendants and the ancestor is considered”. What does that mean?

When we are looking at one transaction it is very easy to calculate what the ancestors in the mempool are. But with packages they might be interconnected, they might share ancestors. So what we do is we take the union of every transaction in the package’s ancestors and that needs to meet the ancestor count. This works for a package of several parents and one child because the union of their ancestors is the ancestor count of the child in the package.

The fact that the packages only include a child and its parents, that’s the rule. Is that already in the logic or is that part of the change here?

That’s part of the proposed change. This section is just existing rules that were added in previous PRs earlier this year.

So the union just means you get every single descendant. When you are looking at a package does the package have to have all the ancestors in it, or if some of the ancestors are in the mempool, is that ok?

Yeah this is mempool ancestors.

Some of them can be omitted from the package if they are already in your mempool, that is normal.

Yeah that’s normal.

Which ones are the interesting ones here? In 2A, there is this ancestor with two branches each with 12 in it. There is a package with A and B as the nth child of those branches but A and B aren’t connected to each other at all. Can a package have transactions that are totally unrelated to each other?

Not in this proposal. But when I wrote the code for ancestor/descendant limits, since arbitrary packages were allowed I needed to create a heuristic that would work for arbitrary packages and never underestimate the number of ancestors/descendants. It is supposed to work for arbitrary packages but the heuristic calculates the exact amount for the packages I am trying to restrict us to in this proposal, which is child-with-parents.

The plan is only to allow these child with parents to be broadcast?

Yeah.
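
(Going back to the union heuristic mentioned earlier, here is a rough Python sketch, with made-up data shapes, of checking the ancestor count for a child-with-parents package against the usual limit of 25.)

def package_ancestor_count_ok(package_txids, mempool_ancestors, limit=25):
    # package_txids: txids of the child plus its parents
    # mempool_ancestors: dict mapping txid -> set of in-mempool ancestor txids
    # Take the union of every package member's mempool ancestors together with
    # the package members themselves and compare against the ancestor limit.
    union = set(package_txids)
    for txid in package_txids:
        union |= mempool_ancestors.get(txid, set())
    return len(union) <= limit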

The third row is probably more indicative of what we would actually see. In 3A there are two parents, the child is in pink, there is only one layer of parents. There are not parents of parents in the package?

Yes. That’s a restriction, that’s intentional.

Proposed changes

https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a#existing-package-rules

Here are the proposed changes. The packages are multi-parent, one child, two generations. They may contain transactions that are already in the mempool and they can also refer to transactions that are already in your mempool. This picture demonstrates things that stick to that rule. One interesting one is D: although there are only parents of P4 in D, P2 is a parent of the parents. There are parents of parents, it is just that they all must be parents of the child.

Right.

In the previous diagram C what do the dots imply?

It is just shorthand for 1 through 24. It means there is P1, P2, P3 etc all the way to P24.

At each individual layer of those parents there are another 24 of them. The second dot implies another parent.

They are also arrows. P25 is spending all of them.

Ok, understood.

https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a#fee-related-checks-use-package-feerate

Without conflicts you need to check that the transactions have a good enough fee, the min relay tx fee which is usually 1 sat per vbyte. But now we have packages, and one of the points of package relay is that this check may not apply to transactions individually but rather to the whole package. The point of this is Lightning, where you want to do child-pays-for-parent: you don’t have to include fees on each transaction, rather you can bump the fee exclusively with the fees of another transaction, a child. You can have a transaction with zero fee that is relayed as long as it is part of a package where the total fee rate of the package is above the min relay tx fee. Am I right there?

Yes.
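
(A sketch of that check, with assumed field names and not taken from the implementation: de-duplicate anything already in the mempool, then apply the minimum relay feerate to what remains as a group rather than per transaction.)

MIN_RELAY_FEERATE = 1.0  # sat/vB, the usual default

def package_meets_min_relay_fee(package, mempool_txids):
    # Ignore transactions already in the mempool; their fees were counted before.
    new_txs = [tx for tx in package if tx.txid not in mempool_txids]
    total_fee = sum(tx.fee for tx in new_txs)      # satoshis
    total_vsize = sum(tx.vsize for tx in new_txs)  # vbytes
    return total_fee >= MIN_RELAY_FEERATE * total_vsize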

The section is rationale for using the package fee rate after deduplication. A lot of the thinking around the package fee rate is you want to make sure that your policy is incentive compatible. For example if you had a package where the child pays lower fees than the parents then you want your behavior to reflect what would be most economically rational. And also you don’t want to allow the child to be paying for things twice. It should be able to boost the fee rate but you don’t want people to be able to cheat. F and G are the examples where if you use the package fee rate before deduplication you might end up using the same fees to pay for more than one thing.

This notation here means this is replacing this one. P1 is replacing M1 in F and P2 is replacing M2 in F. The question we are considering here is should this package in the light orange replace these light blue ones. If these two were not in the mempool this would be a fine package to include. But these are in the mempool, so should we be able to remove them and replace them with P3 that has a nice juicy 600 sat fee? The answer is no. It adds 200 sats but it is adding 300 virtual bytes. The rule is you need at least 1 sat per vbyte, the min relay tx fee, on top of the fees of the ones that are already there in order to replace them. Even if the fee is a little bit better it is not good enough. It has to be 1 sat per vbyte better and it is not here.

Exactly.

In G1 we are fine. We have 100 sats and 100 virtual bytes and that is fine. There is 1 sat per byte more so we evict it and add P1.

G2 is an interesting one. When you get a package you remove the thing from the package that is already in the mempool. What is the rationale for that?

You are essentially de-duplicating. If it is already in your mempool you don’t need to be checking it again and you shouldn’t use its fees again.

That’s the key point. After de-duplicating here you can see that this adds 200 vbytes and 100 sats. The point is although this package includes P1 we can just include P1 without P3 and P2. If P1 is already in our mempool we can say “Let’s just include that”.

You definitely shouldn’t include its fees. For example here if you included P1 then you would think “I have enough fees to replace M2”. It would look like you are adding 300 sats and 300 vbytes which would be enough. But you already used M1’s fees to pay for the replacement of that other one in diagram G1. You used those fees already so you shouldn’t be trying to use them again.

Let’s say M1 didn’t exist. Let’s consider G2 in isolation. MP1 is P1, it is the same one. It is irrelevant whether MP1 is in the mempool already, you just want to consider it in isolation. In other words replacing M2 only depends on P2 and P3. Is that the right way to think about it?

Yes. If MP1 isn’t already in the mempool you need it in order to validate P3.

The fees of MP1 do not matter in the case of considering whether to add this package of P3 and P2.

If it is not already in your mempool then it is fine to use its fees. But if it is already in your mempool then you definitely shouldn’t because it already used its fees to pay for something.

That’s interesting. Isn’t the point of it to have the best transactions in the mempool? Regardless of what is already in your mempool shouldn’t you just choose the transaction set that is the best? Or are you saying that rule is not as important as keeping packages together? Packages should be excluded or included as a whole.

The goal is to have the most incentive compatible transactions. This is later in the document but you will consider P1 individually and you’ll do that before you try to validate as a package. You’ll pick whatever is best. You can pick just one package or some subset of it if that turns out to be more economical.

You are going to add things individually and then when something doesn’t get added as an individual then you start to consider the package at that point?

Exactly.

What you are talking about here only applies after you’ve done the individual adding. You are not lowering the quality of the mempool in any way.

No never.

If I receive a package and there is some zero fee transaction, do I skip that, and then after adding each transaction one by one do I check the zero fee transactions with respect to the whole package?

You have a package and it has transactions A, B and C. A has a zero fee?

G2 is a pretty good example, a good structure. You would try adding P1 first, that’s fine. Then you would try adding P2, it says “That is not going to work”. You can’t add P3 because you don’t have P2. Then you start considering P2 and P3 as a group.
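
(In other words the flow is roughly the following, sketched in Python with hypothetical mempool helpers: try each transaction on its own first and only fall back to package validation for whatever is left over.)

def submit_package(package, mempool):
    # package is sorted parents before child
    accepted, leftover = [], []
    for tx in package:
        if mempool.try_accept_single(tx):    # hypothetical helper
            accepted.append(tx)
        else:
            leftover.append(tx)              # e.g. a zero fee parent, or a child missing it
    if leftover and mempool.try_accept_as_package(leftover):  # hypothetical helper
        accepted.extend(leftover)
    return accepted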

Package RBF

https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a#package-rbf

This is package RBF which is the modification of the BIP 125 rules. The first one, does this use the same signaling mechanism as BIP 125?

Yes.

Just before we go into RBF. With the mempool it is either in the mempool or it is not. There is no intermediate state. Let’s say a package successfully got into the mempool and then the mempool became aware of another transaction so that the package was no longer the most incentive compatible option. Does it then ditch that package? Or should it perhaps hold it in an intermediate state in case things change again?

If you accept the package to your mempool and then you see a conflicting package?

Yeah. Or a transaction that conflicts with the package but it is offering more fees than the package in entirety.

Then you’ll validate that new package the same way you validated the original package. If it pays significantly more fees then it can replace whatever transactions it conflicts with in the mempool.

Is there an edge case here sometimes where the mempool ditches some transactions, perhaps a whole package but then things change again with the introduction of a new transaction added to that package but you’ve thrown away the initial package?

We want to keep our mempool in a consistent state. We don’t want conflicting transactions in our mempool.

To me it seems like you might want a second mempool while things are still in flux, where you dump things that you don’t think should be in the mempool, but then if things change with the introduction of a new transaction perhaps you bring them back. Obviously once you’ve ditched something from the mempool, if you want to get it back you have to go and ask your peers for it.

Not necessarily. We have various caches for transactions that we’ve evicted from our mempool.

We keep them around for a decent amount of time?

For reconstructing compact blocks for example.

So the intermediate state is a cache.

A little bit but it is not part of the mempool. That’s why we have rules that say “If you want to replace a transaction in the mempool it has to be a significantly higher fee”. We are going to keep the best one.

You would never go to that cache to get things out of it? To put it back in the mempool?

No. You could end up re-requesting things.

When would you re-request something?

In this scenario you evicted something and then later you get a really high fee child of it, for example. You already threw it out of your mempool and you are like “Missing inputs. I need this transaction.” You need to re-request it from your peers.

You could use the cache instead of re-requesting it?

You could theoretically.

Ok so the signaling. I guess this means that everything that is RBF’ed has the signaling in it. It is set with the sequence number. Here is a new rule, New Unconfirmed Inputs (Rule 2). A package may include new unconfirmed inputs but the ancestor fee rate of the child must be at least as high as the ancestor fee rates of every transaction being replaced. This is contrary to BIP 125 which states “The replacement transaction may only include an unconfirmed input if the input was included in one of the original transactions”. The original BIP 125 was written mostly for wallets that want to bump their fee. When you bump a fee you are going to spend the same inputs, you may add another one, but you are mostly going to spend the same inputs or lower the value of one of your outputs, usually your change output, to get more fees. This package rule is more sophisticated because we are not dealing with just simple wallets anymore, we are dealing with Layer 2 protocols which need to use child-pays-for-parent. I guess that’s the main motivation.

Yes. Not to knock RBF as it was originally implemented but they were constrained by what the mempool data structure was able to provide them at the time. We have a better mempool now so we are able to have more intelligent rules around RBF.

What does ancestor fee rate mean? We have this structure, this package that has children and parents. The child has a fee rate, and including its parents you can figure out the ancestor fee rate of the child. You have to look at every single thing you are evicting, every mempool transaction that a parent is double spending, and see whether the child’s ancestor fee rate is higher than the ancestor fee rate of the one that was in the mempool. I don’t know if that is a coherent way of saying it.

Your replacement needs to be a better candidate for mining.

That sounds better. What do you have to consider to figure out that fee rate? Is it a fixed value per package or is it more it is different value when you are comparing it to the transaction you are maybe evicting?

An ancestor score is the total modified fees of this transaction and all of its unconfirmed ancestors divided by the total virtual size of that transaction and all of its unconfirmed ancestors. Essentially you just need to go look at the mempool and calculate all of its ancestors. Get the sum total, divide it and you compare that to the ancestor score of each mempool entry that you are trying to replace. Luckily now that we have more information in our mempool data structure that information is actually cached in each mempool entry.
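
(A sketch of the calculation with assumed names, not the cached implementation: the ancestor score is fees over size including all unconfirmed ancestors, and the Rule 2 check compares the child’s score against the score of everything being replaced.)

def ancestor_score(tx, mempool):
    # sat/vB of tx together with all of its unconfirmed (in-mempool) ancestors
    ancestors = mempool.unconfirmed_ancestors(tx)            # hypothetical helper
    total_fee = tx.modified_fee + sum(a.modified_fee for a in ancestors)
    total_vsize = tx.vsize + sum(a.vsize for a in ancestors)
    return total_fee / total_vsize

def rule2_ok(child, replaced_txs, mempool):
    child_score = ancestor_score(child, mempool)
    return all(child_score >= ancestor_score(r, mempool) for r in replaced_txs)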

Is it every unconfirmed ancestor or every new one that you are adding?

Every unconfirmed ancestor.

The rationale makes sense. The purpose of BIP 125 Rule 2 is to ensure that the replacement transaction has a higher ancestor score than the original ones. Example H shows that a new unconfirmed input can lower the ancestor score of the replacement transaction. If we only consider P1 by itself it looks better than that one. But this one needs M2 to be in the mempool whereas before this one would be low priority in the mempool, this one is high priority. P1 shouldn’t get in there without considering the fact that it is attached to M2.

I’ll leave the more complicated examples here. It did take me quite a long time to get to the bottom of them.

It is better with pictures though I hope.

So much better. I wouldn’t even attempt to do it without the pictures.

From an engineering perspective the reviewers are probably really happy to have these pictures.

The next one is the Absolute Fee Rule (Rule 3) which also exists in BIP 125. The package must increase the absolute fee of the mempool i.e. the total fees of the package must be higher than the absolute fees of the mempool transactions it replaces. The difference from BIP 125 Rule 3 is the bonus you now get: you can have a transaction with a zero absolute fee and it can still get into a block because the rule is applied to the package as a whole.

Feerate (Rule 4), we already went through that a bit originally. The package must pay for its own bandwidth. You need a 1 sat per vbyte improvement over whatever you are replacing. It must be higher than the replaced transactions by at least that much.

Total Number of Replaced Transactions (Rule 5) states the package cannot replace more than 100 mempool transactions.
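
(Putting Rules 3, 4 and 5 together, a rough sketch; the names are assumed and the incremental relay feerate is taken as the usual 1 sat/vB default.)

INCREMENTAL_RELAY_FEERATE = 1.0  # sat/vB

def package_rbf_fee_rules_ok(package_fee, package_vsize, replaced_txs):
    # package_fee / package_vsize: totals for the de-duplicated package (sats, vB)
    # replaced_txs: the mempool transactions (and their descendants) being evicted
    if len(replaced_txs) > 100:                                  # Rule 5
        return False
    replaced_fee = sum(tx.fee for tx in replaced_txs)
    if package_fee <= replaced_fee:                              # Rule 3: higher absolute fee
        return False
    # Rule 4: the additional fees must pay for the package's own bandwidth
    if package_fee - replaced_fee < INCREMENTAL_RELAY_FEERATE * package_vsize:
        return False
    return True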

The final bit of this is to talk about why you add individual submission, we’ve gone through that a little bit.

Q&A

One thing that did change, Rule 2, “The replacement transaction may only include an unconfirmed input if that input was included in one of the original transactions”. Wasn’t there also a rule where BIP 125 had to have everything confirmed? In Bitcoin Core PR 6871 there is a comment in main.cpp “We don’t want to accept replacements that require low feerate junk to be mined first. Ideally we’d keep track of the ancestor feerates and make the decision based on that but for now requiring all new inputs to be confirmed works.”

That is rule 2, if there are any additional inputs they need to be confirmed.

Or they already need to be in the transaction.

Yes.

Does that rule apply to non-packages going forward if you were to implement package relay? This proposal is not yet about the peer-to-peer layer…

I have a confession. I’ll admit this is also bundling in my desire to change our RBF logic. Essentially, as I said, when RBF was first implemented they had a lot of limitations. Rule 2 is an ugly hack or a very bad heuristic. I want to change that but there are always a lot of people nitpicking about what to do about BIP 125. You never get anywhere. I was like “We can bundle this in with package RBF and since these rules are better we can gradually introduce them for individual transaction RBF”. That’s my ulterior motive.

We have to do this rule. When you are writing a wallet you have to go “Is it confirmed? I can’t use that thing in the mempool, it is already there.” It is different logic to when you are normally constructing a transaction. You have to have this context of “am I doing an RBF?” It is just bad. That may not apply initially, is that what you are saying?

I have a PR open to do this for individual but it has been thousands of words of arguing about disclosing to the mailing list and BIP 125. It is an ugly fight.

I agree with you changing that because it sucks to have that in the wallet. Can I hack my new transactions to be in a package through the RPC? Can we make sure that I get my original transaction into a package so that the package rules apply instead and we can delete that code?

I would imagine that package relay will be mostly a peer-to-peer thing and not really need to be considered in wallets. Ideally you and your peers, you are sending fee rate filters to each other. If you can see that your peer is going to package validate in order to accept this low fee parent, high fee child then you construct a package for them. You just let them know “Hey make sure you package validate this because it is below your fee filter but the package fee rate meets your fee filter”. I don’t think there’s a hack you can do where you always submit packages rather than individual transactions because the concept of package relay is really more node to node than something that clients or wallets should need to think about. You should just broadcast your transaction and it just works. The P2P layer understands how to relay transactions. That’s my design philosophy for this. The whole thing is wallets shouldn’t need to care about all this stuff.

Shouldn’t they need to care in the case of when they are doing a child-pays-for-parent in a very specific way? Is the broadcast RPC API going to allow you to broadcast your zero sat commitment transaction if you are doing Lightning and the follow-up child-pays-for-parent fee paying transaction?

There’s that. The wallet shouldn’t be telling the node what to relay on the P2P network.

Not that but you can broadcast multiple transactions at the same time.

Yes of course. There would be a client interface where you submit multiple transactions together and let your mempool handle that.

Couldn’t you make the argument that if your node knows the other node is not going to accept this by the old RBF rules they can just say “You need to use the package thing”. I don’t know if that would make sense. There’s no hack really so let’s fix it.

Just fix it.

Zero satoshi outputs in packages

For my spacechain proposal one of the things I would like to see is the possibility of spending zero satoshi outputs. It looks like maybe with package relay there might be a possibility to do that. The problem with zero satoshi outputs is the dust limit. If you don’t abide by the dust limit then you might create an output that is uneconomical to spend, so the relay rules currently guarantee that zero satoshi outputs are not accepted. But if you have a package then you can construct it in such a way that you can guarantee that the zero satoshi output is being spent within the same package. You could argue that it is ugly because of this; it is essentially a hack to use an output as an anchor to do child-pays-for-parent. My thinking is, if you create a transaction that has zero satoshi outputs and that transaction itself has either a zero fee or at least a lower fee rate than the child that is paying for it, then if you send that out as a package it will never be the case that the child gets evicted and the parent stays in the mempool. That is what you don’t want. You don’t want the child to be evicted and the zero satoshi output still being in the mempool. If the fee rate of the child is higher then I think you guarantee that either they both get evicted or neither gets evicted.

I am wondering why zero satoshi outputs? Why not just a normal output? Essentially you are trying to create artificial relationships between transactions? I don’t think that zero satoshi outputs are the best way to achieve that. You can do what exists now where you spend an output with a normal amount.

The issue that I have here is that the person who is creating the output with the dust limit is not the person that is going to get that money. I have this transaction structure where one person has to create a tonne of these outputs ahead of time and they are never going to be able to reclaim that money. If they can create zero satoshi outputs it means it doesn’t cost them anything. I would have to go into more detail to explain why that is the case but it is the spacechain design where one person pre-signs a bunch of transactions with a bunch of, ideally, zero satoshi outputs. If instead they have to create dust limit outputs, it is one transaction per block, you have to create years’ worth of these transactions, and it ends up being hundreds of thousands of dollars just to create these outputs. That is what I am trying to avoid here. I do agree that it is like an anchor for trying to bump the fee of a specific transaction. It is a bit of a hack, I would agree with you there. But it does seem to fit very neatly within the package relay structure at the very least.

I think that if it costs someone nothing to create all of these outputs then it also costs an attacker nothing to create all of these outputs. I would argue that if someone wants to take up block space and ask the network to relay their transactions they should be paying.

I agree with that. What I am saying is that the child will have to pay for it. The network should not relay it unless there is a child that is spending a zero satoshi output and paying for its fees. You no longer have this issue, the whole reason for the dust limit is to pay for when it gets spent. If you spend it as a package then you don’t have that issue. The child has to pay for it.

It is still imposing a cost on the network right? It has to propagate this output with no fee? It is imposing a cost on the network even if it isn’t getting into people’s mempools.

Any amount of data that enters the blockchain is a cost for people.

It is only being paid for if that child comes. If that child doesn’t come then it hasn’t been paid for.

If the child doesn’t come it doesn’t enter the mempool and it doesn’t enter a block. It is a zero fee transaction with a zero satoshi per vbyte fee rate and it has a zero satoshi output. This will never enter the mempool, this will never enter the blockchain unless some miner for some reason mines it. They can always do that but it is not going to be relayed. The only way you can get this transaction to enter the Bitcoin blockchain is to create a child that spends from the zero satoshi output and send it as a package; the package as a whole needs to be accepted according to all the rules we’ve just discussed.

I don’t understand why it isn’t just one transaction. You are creating a dummy output that you are going to spend immediately, why don’t you just make it one transaction?

We are going to have to go into the whole spacechain design to explain it properly. There is one person who ahead of time creates a bunch of these transactions, that person is then out of the game. You are not supposed to cooperate with them anymore. The person who creates the initial transaction with the zero satoshi output is different to the person who is going to bump it. They are not going to be able to communicate with each other or cooperate with each other. It is supposed to be non-interactive.

I think the reason we want a dust limit is to prevent UTXO bloat. That’s why we have the standardness rule. There is absolutely no way that you can guarantee that that child will be created. You said it would only be included in a block if the child is there. We have no control over that. The miner chooses the transactions. As a node operator with a mempool I personally don’t want to be relaying transactions that create zero satoshi outputs. I don’t want to contribute to bloating the UTXO set. You can’t get around that rule, the miner chooses, the miner has discretion over what transactions go into the block.

I fully agree with that. The point here is that you are not creating a zero satoshi output, you are spending it right away in the same block. Otherwise the miner should not accept it.

You are not enforcing that.

The miner can always choose to ignore any rules. A miner can always create a zero satoshi output if it wants.

Of course it can. The point I’m making is I don’t want to relay those transactions to miners. That’s my choice as a node operator, I choose to enforce the standardness rules. When you are talking about packages all you are talking about is what goes into the mempool. There is no control over what the miners eventually choose to include in a block.

The miner will only receive it as a package. There wouldn’t be a scenario where the miner received that transaction with a zero satoshi output. You wouldn’t consider it if it wasn’t a package.

I think a lot of people who develop applications on top of Bitcoin have this misconception about the P2P network: “My application will force the P2P layer to do this.” It doesn’t work that way.

The bytes of that second transaction could get dropped. There is no guarantee that both transactions reach the miner. The first transaction could reach the miner and the second one not.

What you are looking for is a consensus enforced rule where the child transaction must be mined with the parent transaction.

No, that is what I am specifically trying to avoid. You make it in such a way that it is never economical to mine the parent without the child. That doesn’t mean it cannot be mined, a miner can do anything. The goal that I am trying to reach is that it is always a better fee rate to consider the parent with the child as opposed to just the parent.

You are not listening to what people are saying. The first point is yes, it is highly likely that both transactions will reach the miner, but there is the possibility that the second transaction gets dropped, in which case there will be a zero satoshi output that is sent across the network and reaches the miner, which is what we want to avoid. The second point is yes, a miner can create a zero satoshi output but we don’t want to be relaying that across the network. If a miner wants to create a transaction and include it in a block they are free to do that. What we don’t want to do is relay something across the network so that a miner considers including it in a block.

You have the child and you have the parent. The parent has the zero satoshi output. If you were to send that over the network to someone else then it should be rejected. Nobody should accept that transaction. But if it comes with a child that spends the zero satoshi output then it should be accepted.

You are assuming that they definitely 100 percent always arrive at exactly the same time. Nothing can guarantee that they arrive at exactly the same time. One transaction could be received milliseconds after the first one.

You are waiting for the entire package.

From what I understand your application wishes to be able to create zero satoshi outputs, which is something our policy does not relay. You are saying that our policy should create an exception to this rule in packages to allow your application to be able to create and spend zero satoshi outputs. You are saying this is reasonable because in incentive compatible cases the zero satoshi output will never appear in the UTXO set.

To clarify, if I am wrong about that then I agree this should not happen.

I understand what you are saying could be reasonable. But I don’t think I would ever want to create the possibility of relaying transactions that could add zero satoshi outputs to the UTXO set.

If there is a scenario here where a zero satoshi output gets created and gets mined and doesn’t get spent in the same block then I agree we should not do this. My argument is that we can make it such that that will never happen by creating the incentives correctly.

Just because something is incentive incompatible doesn’t mean that it will never happen. The only way to ensure that it will never happen is if you made it a consensus rule.

Now we are talking again about the fact that theoretically miners can mine a zero satoshi output if they wanted because it is not a consensus rule to not do that.

You are asking us to change our policy which I wouldn’t do.

If the policy ended up creating zero satoshi outputs, yes. But I am saying that we can do this in such a way that the policy doesn’t create zero satoshi outputs.

It is incentive incompatible to create zero satoshi outputs. You can’t guarantee.

You are asking the network to take a probabilistic cost at the very least.

Zero satoshi outputs are consensus valid but they are non-standard so all Bitcoin Core nodes on the Bitcoin network will not relay a transaction that creates a zero satoshi output because it is bad practice and bloats the UTXO set. Since it is not a consensus rule it is fine but you are asking us to make that standard and allow relaying transactions that create zero satoshi outputs. You are saying that they are never going to end up in the UTXO set but that is not a consensus rule so it is not guaranteed. Therefore we are not going to remove this policy.

You could have zero satoshi outputs in the UTXO set with or without the policy. The question is do you have more than you otherwise would have?

I’m saying you should relay zero satoshi outputs only if they are relayed in a way that they are immediately being spent. Can you explain why making this change makes it more likely that a zero satoshi output gets created?

Right now if you want to create a zero satoshi output and have it be mined you have to submit it directly to miners. In this scenario you can create it and submit this transaction to any Bitcoin Core node, you attach this child, that makes it incentive compatible to include the child, but that doesn’t mean miners will necessarily include the child. You are increasing the number of ways that someone can introduce a zero satoshi output, it is bad practice. People shouldn’t be using Bitcoin Core this way.

The conversation continued afterwards.

I believe your point (or at least one of them) was that the transactions could still arrive separately so then the parent might arrive first and now there’s a zero satoshi output transaction in the mempool that might get mined. Is that accurate?

Right. Currently it would not get relayed and would not make its way into nodes’ mempools. To ensure that desired behavior is not relaxed in a future package world you need to guarantee that the transactions are received instantaneously and verified together immediately. These guarantees are impossible. Hence you are arguing for a relaxation even though in the ideal cases (with guarantees) it wouldn’t be a relaxation.

In reality there can be a delay between receiving 2 transactions in a package. And it takes time to process/validate both transactions.

So at the very least you are arguing for relaying a zero satoshi output transaction and not discarding it immediately in case the child comes soon after. That is a relaxation from current mempool behavior (and introduces complexity).

The whole idea behind package relay is that you do evaluate the entire package as a whole. For instance you could have a parent that pays zero sats per vbyte and a child that pays for itself and the parent. If what you said was true then the same issue would apply here. The zero sat per vbyte transaction could get mined even though it is not economical. But what happens instead is that the package as a whole is evaluated and the entire thing enters the mempool instead of one at a time. Note I am not currently talking about zero satoshi outputs yet.

I think the idea is that even in a future package world every transaction will still pay the minimum relay fee. So a transaction will be relayed and hence can be assessed both individually or as part of a package by a mempool.

“The parent(s) in the package can have zero fees but be paid for by the child”

I don’t think that should be the case then, and I see the equivalence you’re making. You’re saying zero fee is fine so a zero satoshi output should also be fine.

Essentially yes (assuming we can ensure that zero satoshi outputs don’t get mined without being immediately spent).

Should definitely be min relay fee but I don’t know why that is zero.

Note I am still of the opinion that the zero satoshi per byte transaction won’t enter the mempool, it seems we disagree there.

If you are asking the network to relay it it should meet the min relay fee in my opinion.

The idea is that the package as a whole meets the min relay fee.

Seems too relaxed to me.

You’re basically treating the package as a single transaction.

But the transaction could be assessed individually ignoring the package. Otherwise you’re forced to hold onto it waiting for the child to arrive.

If you assess it individually then it simply fails to enter the mempool because the zero satoshi per byte transaction doesn’t meet the relay fee rules.

So you need to guarantee that the child gets there before the parent. Which is impossible.

You’ll just have to wait for the full package to arrive and evaluate that. It is no different than waiting for a full transaction to arrive before you can evaluate it.

Parent arrives, rejected by mempool. Child arrives, mempool requests parent from peers (who might not have it either as they rejected it too).

If the child arrives alone it should just be rejected, it has no parent.

So you have to guarantee they arrive at exactly the same time. Which is impossible.

It is not impossible. It is just a package. Peer A says to Peer B “I have a package for you.” Peer B receives it and when it fully arrives it starts evaluating it.

Otherwise it would be a DoS vector. Sending transactions which have no parents.

Much better to have parent pay min relay fee and potentially make it into the mempool regardless of any package it may be a part of.

Isn’t that how Bitcoin works today without package relay?

We are getting into internet packet reliability now, which I assume neither of us is an expert on. I would guess the larger the package the more likely it doesn’t arrive all in one piece at exactly the same time. 2 transactions would be less reliable than 1. 10 would be even worse.

I’m not an expert but you generally send a hash of what you intend to send so you can know whether the package fully arrived or not.

So the package just gets treated as a very large transaction. And a package of 10 transactions gets treated as a monster transaction.

There are limits of course but essentially yes. And there are also situations where the package does get only half accepted, mainly when the feerate of a parent is higher than the parent and child combined.

We already have limits on the size of the transaction. We are essentially extending those limits by a multiple of whatever the max package size is. Sounds like caching is going to be the biggest challenge here then. We hope that the transaction size limit we currently have is unnecessarily restrictive. Because if it isn’t we’re just about to multiply it by the max package size.

I believe the package cannot be bigger than the maximum transaction size (100kb?).

So your question is if zero fee transactions are fine why not zero satoshi outputs too. We are already relaxing the min relay fee requirement when in a package, why not also the non-zero output requirement (when in a package)? You’re getting the relay of the transaction for free, why not the relay of an extra zero satoshi output?

There are a few differences that do matter. I actually came up with a scenario that could be problematic which kills my own idea. Parent P1 (0 sats per byte) has 2 outputs, one is zero satoshis. Child C1 spends the zero satoshi output for a combined fee rate of 1 satoshi per vbyte and they enter the mempool as a package. Child C2 spends the other output of P1 with a really high feerate and enters the mempool. Fees rise and child C1 falls out of the mempool leaving the zero satoshi output unspent.

Let’s say P1 and C1 have a combined feerate of 5 satoshis per vbyte and enter the mempool first. Then later C2 enters the mempool. Fees rise to 20 satoshis per vbyte and C1 leaves the mempool. Now P1 and C2 get mined and the zero satoshi output remains unspent. You could maybe come up with complicated rules to prevent this but superficially it seems like it’ll get too complicated.

This enlarges the UTXO set permanently? Standard argument against dust.

Yeah. Under no circumstances do we want the zero satoshi output to get mined without being spent. You could come up with a soft fork for that but at that point there are probably better things you can do instead (e.g. the soft fork could be that the zero satoshi output is only spendable in the same block, otherwise it becomes unspendable and doesn’t enter the UTXO set).

So I guess this shouldn’t be done, but for different reasons than discussed before? Though in a similar ballpark. It sneaks into a block in this edge case.

Right, seems like it. Unless there is a straightforward fix for the problem I just outlined I don’t think it should be done.

\ No newline at end of file diff --git a/sydney-bitcoin-meetup/index.xml b/sydney-bitcoin-meetup/index.xml index d3199188f6..4ae8a78aac 100644 --- a/sydney-bitcoin-meetup/index.xml +++ b/sydney-bitcoin-meetup/index.xml @@ -13,14 +13,14 @@ Location: Bitcoin Sydney (online) Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-11 -Package Mempool Accept and Package RBF: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html +Package Mempool Accept and Package RBF: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html With illustrations: https://gist.Sydney Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2021-07-06-socratic-seminar/Tue, 06 Jul 2021 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2021-07-06-socratic-seminar/Name: Socratic Seminar Topic: Fee bumping and layer 2 protocols Location: Bitcoin Sydney (online) Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-07 -First IRC workshop on L2 onchain support: https://lists.Sydney Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Tue, 01 Jun 2021 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Topic: Agenda in Google Doc below +First IRC workshop on L2 onchain support: https://gnusha.Sydney Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Tue, 01 Jun 2021 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Topic: Agenda in Google Doc below Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1E9mzB7fmzPxZ74WZg0PsJfLwjpVZ7OClmRdGQQFlzoY/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. @@ -29,7 +29,7 @@ This is Bitcoin Problems, it is a Jekyll website I put on GitHub.< Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. 
-PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2020-08-25-socratic-seminar/Tue, 25 Aug 2020 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2020-08-25-socratic-seminar/Name: Socratic Seminar Topic: Agenda in Google Doc below Location: Bitcoin Sydney (online) diff --git a/tags/adaptor-signatures/index.xml b/tags/adaptor-signatures/index.xml index e5ff34457d..1c26ffffbc 100644 --- a/tags/adaptor-signatures/index.xml +++ b/tags/adaptor-signatures/index.xml @@ -27,7 +27,7 @@ My name is Jonas. I work at Blockstream as an engineer. I am going to talk about I want to introduce the blind Schnorr signature in a few moments. Schnorr signature My assumption is not that you will completely understand Schnorr signatures, but maybe you will at least agree that if you can understand Schnorr signatures then the step to blind Schnorr signatures is not a big step.Cross Curve Atomic Swapshttps://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/Mon, 05 Mar 2018 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2018-03/2018-03-05-cross-curve-atomic-swaps/https://twitter.com/kanzure/status/971827042223345664 Draft of an upcoming scriptless scripts paper. This was at the beginning of 2017. But now an entire year has gone by. -post-schnorr lightning transactions https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html +post-schnorr lightning transactions https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001031.html An adaptor signature.. if you have different generators, then the two secrets to reveal, you just give someone both of them, plus a proof of a discrete log, and then you say learn the secret to one that gets the reveal to be the same.Schnorr Signatures For Bitcoin: Challenges and Opportunitieshttps://btctranscripts.com/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/Wed, 31 Jan 2018 00:00:00 +0000https://btctranscripts.com/blockchain-protocol-analysis-security-engineering/2018/schnorr-signatures-for-bitcoin-challenges-opportunities/slides: https://prezi.com/bihvorormznd/schnorr-signatures-for-bitcoin/ https://twitter.com/kanzure/status/958776403977220098 https://blockstream.com/2018/02/15/schnorr-signatures-bpase.html diff --git a/tags/c-lightning/index.xml b/tags/c-lightning/index.xml index d0f869b70d..459f50704b 100644 --- a/tags/c-lightning/index.xml +++ b/tags/c-lightning/index.xml @@ -44,7 +44,7 @@ Topic: Various topics Location: Jitsi online Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. 
-Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://lists.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics +Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://gnusha.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics Location: Jitsi online Video: No video posted online Agenda: https://hackmd.io/@cdecker/Sy-9vZIQt diff --git a/tags/channel-factories/index.xml b/tags/channel-factories/index.xml index 3bb008741a..94cc5eac09 100644 --- a/tags/channel-factories/index.xml +++ b/tags/channel-factories/index.xml @@ -1,5 +1,5 @@ Channel factories on ₿itcoin Transcriptshttps://btctranscripts.com/tags/channel-factories/Recent content in Channel factories on ₿itcoin TranscriptsHugo -- gohugo.ioenFri, 07 Jun 2019 00:00:00 +0000Blind statechains: UTXO transfer with a blind signing serverhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html overview: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 statechains paper: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf previous transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/tags/consensus-cleanup/index.xml b/tags/consensus-cleanup/index.xml index bf9498772a..49d45dbd4b 100644 --- a/tags/consensus-cleanup/index.xml +++ b/tags/consensus-cleanup/index.xml @@ -3,7 +3,7 @@ How good are the mitigations? Improvements to mitigations in the last 5 years? Anything else to fix? The talk is a summary of https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710 . -Time warp What is it? 
Off by one in the retargeting period 2015 blocks instead of 2016 Impact Spam (since difficulty is 1 and block times are what restricts tx) UXTO set growth for the same reason 40 days to kill the chain Empowers 51% attacker Political games (users individually incentivized short-term to benefit from more block space, miners individually incentivized short-term to benefit of more subsidy) Minority miners not incentivized to try but it doesn’t cost anything Original mitigation is good Mandating new restrictions on the timestamp of the first block of a retarget period in relation to last blocks timestamp Merkle tree attacks w/64 byte txs Fake SPV inclusion &lt;visual merkle tree diagram illustrating issue&gt; Years ago the attack required more work than proof of work, so was less of a concern, not so now Arbitrary confs, less work Simple mitigation Require the coinbase transaction too, as all transactions on the same level of the merkle tree Block malleability Separate but similar attack Fork nodes Simple mitigation Dont cache context-less checks BIP’s original Mitigation Forbid &lt;=64 byte transactions No need to disable &lt;64 bytes transactions, since 64 is the issue Concern about existing, unbroadcasted 64 byte transaction?Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html +Time warp What is it? Off by one in the retargeting period 2015 blocks instead of 2016 Impact Spam (since difficulty is 1 and block times are what restricts tx) UXTO set growth for the same reason 40 days to kill the chain Empowers 51% attacker Political games (users individually incentivized short-term to benefit from more block space, miners individually incentivized short-term to benefit of more subsidy) Minority miners not incentivized to try but it doesn’t cost anything Original mitigation is good Mandating new restrictions on the timestamp of the first block of a retarget period in relation to last blocks timestamp Merkle tree attacks w/64 byte txs Fake SPV inclusion &lt;visual merkle tree diagram illustrating issue&gt; Years ago the attack required more work than proof of work, so was less of a concern, not so now Arbitrary confs, less work Simple mitigation Require the coinbase transaction too, as all transactions on the same level of the merkle tree Block malleability Separate but similar attack Fork nodes Simple mitigation Dont cache context-less checks BIP’s original Mitigation Forbid &lt;=64 byte transactions No need to disable &lt;64 bytes transactions, since 64 is the issue Concern about existing, unbroadcasted 64 byte transaction?Great Consensus Cleanuphttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-great-consensus-cleanup/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016714.html https://twitter.com/kanzure/status/1136591286012698626 Introduction There&rsquo;s not much new to talk about. Unclear about CODESEPARATOR. You want to make it a consensus rule that transactions can&rsquo;t be larger than 100 kb. No reactions to that? Alright. Fine, we&rsquo;re doing it. Let&rsquo;s do it. Does everyone know what this proposal is? 
Validation time for any block&ndash; we were lazy about fixing this. Segwit was a first step to fixing this, by giving people a way to do this in a more efficient way. \ No newline at end of file diff --git a/tags/dual-funding/index.xml b/tags/dual-funding/index.xml index 7899b667ac..0389cc6633 100644 --- a/tags/dual-funding/index.xml +++ b/tags/dual-funding/index.xml @@ -3,7 +3,7 @@ We&rsquo;re talking about lightning channels. A lightning channel is a UTXO Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/Tue, 19 May 2020 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2020-05-19-socratic-seminar/Topic: Agenda in Google Doc below Location: Bitcoin Sydney (online) Video: No video posted online diff --git a/tags/eltoo/index.xml b/tags/eltoo/index.xml index 1afbac3d19..48a3d93eb8 100644 --- a/tags/eltoo/index.xml +++ b/tags/eltoo/index.xml @@ -8,7 +8,7 @@ Slides: https://residency.chaincode.com/presentations/lightning/Eltoo.pdf Eltoo white paper: https://blockstream.com/eltoo.pdf Bitcoin Magazine article: https://bitcoinmagazine.com/articles/noinput-class-bitcoin-soft-fork-simplify-lightning Intro Who has never heard about eltoo? It is my pet project and I am pretty proud of it. I will try to keep this short. I was told that you all have seen my presentation about the evolution of update protocols. I will probably go pretty quickly. I still want to give everybody the opportunity to ask questions if they have any.Blind statechains: UTXO transfer with a blind signing serverhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html overview: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 statechains paper: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf previous transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/tags/fee-management/index.xml b/tags/fee-management/index.xml index 144c257c8a..0ec8096b5b 100644 --- a/tags/fee-management/index.xml +++ b/tags/fee-management/index.xml @@ -14,7 +14,7 @@ I am Marshall. I run Firehash. CTO Cryptsy also. 
Some people have higher cost, s The problem The problem is variable transaction fees. The transaction fees shot up. There&rsquo;s a lot of fluctuation. To understand why this is a problem, let&rsquo;s go back to the basics. Block size limit We can have big blocks or small blocks. Each one has advantages and disadvantages. One of the benefits of small blocks is that it&rsquo;s easier to run a node.Fee Management (Lightning Network)https://btctranscripts.com/chaincode-labs/chaincode-residency/2019-06-25-fabrice-drouin-fee-management/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-residency/2019-06-25-fabrice-drouin-fee-management/Location: Chaincode Labs Lightning Residency 2019 Fabrice: So I&rsquo;m going to talk about fee management in Lightning which has been surprisingly one of the biggest operational issues we had when we launched the Lightning Network. So again the idea of Lightning is you have transactions that are not published but publishable, and in our case the problem is what does exactly publishable mean. So it means the UTXO that you’re spending is still spendable.Fee marketshttps://btctranscripts.com/scalingbitcoin/montreal-2015/peter-r/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/montreal-2015/peter-r/((Note that there is a more accurate transcript from Peter-R himself below.)) -Miners have another job as well. Miners are commodity producers, they produce something that the world has never seen. They produce block space, which is room for transactional debt. Let&rsquo;s explore what the field of economics tells us. We&rsquo;ll plot the total number of bytes per block. On the vertical we will plot the unit cost of the commodity, or the price of 1 transaction worth of blockspace.Optimizing Fee Estimation Via Mempool Statehttps://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/I am a Bitcoin Core developer and I work at DG Lab. Today I would like to talk about fees. There&rsquo;s this weird gap between&ndash; there are two things going on. People complain about high fees. But people are confused about why Bitcoin Core is giving high fees but if you set fees manually you can get a much lower fee and get a transaction mined pretty fast. I started to look into this in detail and did simulations and a bunch of stuff.Redesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html +Miners have another job as well. Miners are commodity producers, they produce something that the world has never seen. They produce block space, which is room for transactional debt. Let&rsquo;s explore what the field of economics tells us. We&rsquo;ll plot the total number of bytes per block. On the vertical we will plot the unit cost of the commodity, or the price of 1 transaction worth of blockspace.Optimizing Fee Estimation Via Mempool Statehttps://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/optimizing-fee-estimation-via-mempool-state/I am a Bitcoin Core developer and I work at DG Lab. 
Today I would like to talk about fees. There&rsquo;s this weird gap between&ndash; there are two things going on. People complain about high fees. But people are confused about why Bitcoin Core is giving high fees but if you set fees manually you can get a much lower fee and get a transaction mined pretty fast. I started to look into this in detail and did simulations and a bunch of stuff.Redesigning Bitcoin Fee Markethttps://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/Mon, 01 Jan 0001 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/stanford-2017/redesigning-bitcoin-fee-market/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/015093.html https://www.reddit.com/r/Bitcoin/comments/72qi2r/redesigning_bitcoins_fee_market_a_new_paper_by/ paper: https://arxiv.org/abs/1709.08881 He will be exploring alternative auction markets. diff --git a/tags/hardware-wallet/index.xml b/tags/hardware-wallet/index.xml index b52bc17cc5..c3858d426b 100644 --- a/tags/hardware-wallet/index.xml +++ b/tags/hardware-wallet/index.xml @@ -35,7 +35,7 @@ Mike Schmidt is going to talk about some optech newsletters that he has been con see also: Extracting seeds from hardware wallets The future of hardware wallets coredev.tech 2019 hardware wallets discussion Background A bit more than a year ago, I went through Jimmy Song&rsquo;s Programming Blockchain class. That&rsquo;s where I met M where he was the teaching assistant. Basically, you write a python bitcoin library from scratch. The API for this library and the classes and fnuctions that Jimmy uses is very easy to read and understand.Hardware Walletshttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-hardware-wallets/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-hardware-wallets/https://twitter.com/kanzure/status/1136924010955104257 How much should Bitcoin Core do, and how much should other libraries do? Andrew Chow wrote the wonderful HWI tool. Right now we have a pull request to support external signers. The HWI script can talk to most major hardware wallets because it has all the drivers built in now, and it can get keys from it, and sign arbitrary transactions. That&rsquo;s roughly what it does. It&rsquo;s kind of manual, though.Hardware Wallets (History of Attacks)https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Wed, 01 May 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Slides: https://www.dropbox.com/s/64s3mtmt3efijxo/Stepan%20Snigirev%20on%20hardware%20wallet%20attacks.pdf -Pieter Wuille on anti covert channel signing techniques: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html +Pieter Wuille on anti covert channel signing techniques: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html Introduction This talk is the second in the series after my previous talk in London a few months ago at the Advancing Bitcoin conference. There I was talking mostly about general attacks on hardware, more from the theoretical perspective. I didn’t say anything bad hardware wallets that exist and I didn’t specify anything on the bad side. 
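The "weird gap" described in the fee estimation excerpt above is the motivation for estimating from current mempool state rather than only from historical confirmation data. A minimal, hypothetical sketch: sort the mempool by feerate, accumulate vsize up to one block's worth, and quote the feerate at that depth. The names and the 1,000,000-vbyte block budget are assumptions, and it ignores ancestor packages, incoming transaction flow, and RBF:

```python
from dataclasses import dataclass

MAX_BLOCK_VSIZE = 1_000_000  # vbytes that roughly fit in one block

@dataclass
class MempoolTx:
    feerate: float  # sat/vB
    vsize: int      # vbytes

def feerate_to_confirm_next_block(mempool: list[MempoolTx]) -> float:
    """Walk the mempool from highest to lowest feerate and return the feerate
    at the point where one block's worth of vsize is filled. Paying just above
    this should (naively) get into the next block."""
    cumulative = 0
    for tx in sorted(mempool, key=lambda t: t.feerate, reverse=True):
        cumulative += tx.vsize
        if cumulative >= MAX_BLOCK_VSIZE:
            return tx.feerate
    return 1.0  # the whole mempool fits in one block: minimum feerate suffices

# Toy mempool: a little high-feerate traffic, lots of low-feerate backlog.
mempool = [MempoolTx(50.0, 200)] * 1000 + [MempoolTx(2.0, 200)] * 10000
print(feerate_to_confirm_next_block(mempool), "sat/vB")
```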
Here I feel a bit more free as it is a meetup not a conference so I can say bad things about everyone.Are Hardware Wallets Secure Enough?https://btctranscripts.com/andreas-antonopoulos/2019-02-01-andreas-antonopoulos-hardware-wallet-security/Fri, 01 Feb 2019 00:00:00 +0000https://btctranscripts.com/andreas-antonopoulos/2019-02-01-andreas-antonopoulos-hardware-wallet-security/Q - Hi Andreas. I store my crypto on a Ledger. Listening to Trace Mayer this week has me concerned that this is not safe enough. Trace says you need Bitcoin Core for network validation, Armory for managing the private keys and a Glacier protocol for standard operating procedures and a Purism laptop for hardware. What is the gold standard for storing crypto for non-technical people? Is a hardware wallet good enough?Wallet Security, Key Management & Hardware Security Modules (HSMs)https://btctranscripts.com/edgedevplusplus/2018/wallet-security/Thu, 04 Oct 2018 00:00:00 +0000https://btctranscripts.com/edgedevplusplus/2018/wallet-security/https://twitter.com/kanzure/status/1049813559750746112 Intro Alright, I’m now going to talk about bitcoin wallet security. And I was asked to talk about key management and hardware security modules and a bunch of other topics all in one talk. This will be a bit broader than some of the other talks because this is an important subject about how do you actually store bitcoin and then some of the developments around the actual storage of bitcoin in a secure way.Bitcoin Core and hardware walletshttps://btctranscripts.com/london-bitcoin-devs/2018-09-19-sjors-provoost-core-hardware-wallet/Wed, 19 Sep 2018 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2018-09-19-sjors-provoost-core-hardware-wallet/Topic: Using Bitcoin Core with hardware wallets Location: London Bitcoin Devs diff --git a/tags/lightning/index.xml b/tags/lightning/index.xml index ea4a3d6555..e0e7906405 100644 --- a/tags/lightning/index.xml +++ b/tags/lightning/index.xml @@ -63,7 +63,7 @@ We have Steve Lee in the office this week. Murch: 00:00:04 He is the head at Spiral. He&rsquo;s done a lot of open source development. He talks to a bunch of people. He&rsquo;s also the PM for the LDK team. So we&rsquo;re going to talk Lightning, the challenges that are open with Lightning, and maybe a little bit about other projects in the space.Lightning Panelhttps://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2022/lightning-panel/Tue, 05 Jul 2022 00:00:00 +0000https://btctranscripts.com/mit-bitcoin-expo/mit-bitcoin-expo-2022/lightning-panel/The discussion primarily revolved around the Lightning Network, a scaling solution for Bitcoin designed to enable faster, decentralized transactions. Rene Pickhardt and Lisa Neigut shared their insights, highlighting Lightning&rsquo;s potential as a peer-to-peer electronic cash system and a future payment infrastructure. They emphasized its efficiency for frequent transactions between trusted parties but noted challenges in its current infrastructure, such as the need for continuous online operation and the risk of losing funds if a node is compromised. The panelists discussed the scalability of the network, indicating that while millions could use it self-sovereignly, larger-scale adoption would likely involve centralized service providers. 
The conversation also touched on the impact of Taproot on privacy and channel efficiency, and the technical intricacies of maintaining state and preventing fraud within the network.Minisketch and Lightning gossiphttps://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Tue, 07 Jun 2022 00:00:00 +0000https://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Location: Bitcoin++ Slides: https://endothermic.dev/presentations/magical-minisketch -Rusty Russell on using Minisketch for Lightning gossip: https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html +Rusty Russell on using Minisketch for Lightning gossip: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html Minisketch library: https://github.com/sipa/minisketch Bitcoin Core PR review club on Minisketch (3 sessions): https://bitcoincore.reviews/minisketch-26 @@ -168,7 +168,7 @@ Topic: Various topics Location: Jitsi online Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://lists.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics +Dust HTLC exposure (Lisa Neigut) Antoine Riard email to the Lightning dev mailing list: https://gnusha.c-lightning developer callhttps://btctranscripts.com/c-lightning/2021-09-20-developer-call/Mon, 20 Sep 2021 00:00:00 +0000https://btctranscripts.com/c-lightning/2021-09-20-developer-call/Topic: Various topics Location: Jitsi online Video: No video posted online Agenda: https://hackmd.io/@cdecker/Sy-9vZIQt @@ -194,7 +194,7 @@ Location: Bitcoin Sydney (online) Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-07 -First IRC workshop on L2 onchain support: https://lists.Improving the Lightning Networkhttps://btctranscripts.com/wasabi/podcast/2021-05-30-improving-lightning/Sun, 30 May 2021 00:00:00 +0000https://btctranscripts.com/wasabi/podcast/2021-05-30-improving-lightning/How Rusty got into Lightning Max Hillebrand (MH): So Rusty, I am very happy that you joined me for this conversation today. You have been a pioneer in Lightning Network. 
To start off this conversation I am curious, 6 years ago before you got into the Lightning Network what was your understanding of Bitcoin back then and where did you see some of the big problems that needed to be solved at that point?Lightning Development Kithttps://btctranscripts.com/chaincode-labs/chaincode-podcast/2021-05-12-matt-corallo-ldk/Wed, 12 May 2021 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-podcast/2021-05-12-matt-corallo-ldk/Matt Corallo presentation at Advancing Bitcoin 2019: https://btctranscripts.com/advancing-bitcoin/2019/2019-02-07-matt-corallo-rust-lightning/ +First IRC workshop on L2 onchain support: https://gnusha.Improving the Lightning Networkhttps://btctranscripts.com/wasabi/podcast/2021-05-30-improving-lightning/Sun, 30 May 2021 00:00:00 +0000https://btctranscripts.com/wasabi/podcast/2021-05-30-improving-lightning/How Rusty got into Lightning Max Hillebrand (MH): So Rusty, I am very happy that you joined me for this conversation today. You have been a pioneer in Lightning Network. To start off this conversation I am curious, 6 years ago before you got into the Lightning Network what was your understanding of Bitcoin back then and where did you see some of the big problems that needed to be solved at that point?Lightning Development Kithttps://btctranscripts.com/chaincode-labs/chaincode-podcast/2021-05-12-matt-corallo-ldk/Wed, 12 May 2021 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-podcast/2021-05-12-matt-corallo-ldk/Matt Corallo presentation at Advancing Bitcoin 2019: https://btctranscripts.com/advancing-bitcoin/2019/2019-02-07-matt-corallo-rust-lightning/ rust-lightning repo: https://github.com/rust-bitcoin/rust-lightning Intro Adam Jonas (AJ): Welcome back to the office Matt, glad to have you back on the podcast. Matt Corallo (MC): Thank you. @@ -217,7 +217,7 @@ Yeah, last year it was like, I think it’s been maybe half a year or more since Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Watchtowers and BOLT 13https://btctranscripts.com/lightning-hack-day/2020-05-24-sergi-delgado-watchtowers/Sun, 24 May 2020 00:00:00 +0000https://btctranscripts.com/lightning-hack-day/2020-05-24-sergi-delgado-watchtowers/Location: Potzblitz (online) Slides: https://srgi.me/resources/slides/Potzblitz!2020-Watchtowers.pdf The Eye of Satoshi code: https://github.com/talaia-labs/python-teos diff --git a/tags/mast/index.xml b/tags/mast/index.xml index df0be0a501..fc1f48fb47 100644 --- a/tags/mast/index.xml +++ b/tags/mast/index.xml @@ -10,6 +10,6 @@ See also http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2017-09-07-merk MAST stuff You could directly merkleize scripts if you switch from IF, IFNOT, ELSE with IFJUMP that has the number of bytes. 
With graftroot and taproot, you never to do any scripts (which were a hack to get things started). But we&rsquo;re doing validation and computation. You take every single path it has; so instead, it becomes &hellip; certain condition, or certain not conditions&hellip; You take every possible ifs, you use this, you say it&rsquo;s one of these, then you specify which one, and you show it and everyone else can validate this.Merkleized Abstract Syntax Treeshttps://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/Thu, 07 Sep 2017 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2017-09/2017-09-07-merkleized-abstract-syntax-trees/https://twitter.com/kanzure/status/907075529534328832 -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014932.html I am going to talk about the scheme I posted to the mailing list yesterday which is to implement MAST (merkleized abstract syntax trees) in bitcoin in a minimally invasive way as possible. It&rsquo;s broken into two major consensus features that together gives us MAST. I&rsquo;ll start with the last BIP. This is tail-call evaluation. Can we generalize P2SH to give us more general capabilities, than just a single redeem script. \ No newline at end of file diff --git a/tags/mining/index.xml b/tags/mining/index.xml index cf222fc16d..724e8fb9b1 100644 --- a/tags/mining/index.xml +++ b/tags/mining/index.xml @@ -24,7 +24,7 @@ https://twitter.com/kanzure/status/1171331418716278785 notes from slides: https://docs.google.com/document/d/1ETKx8qfml2GOn_CBXhe9IZzjSv9VnXLGYfQb3nD3N4w/edit?usp=sharing Introduction Good morning everyone. My task is to talk about the challenges we faced while we were implementing a replacement for the cgminer software. We&rsquo;re doing it in rust. Essentially, I would like to cover a little bit of the history and to give some credit to ck for his hard work. cgminer cgminer used to be a CPU miner that has been functionally removed a long time ago.Mining Firmware Securityhttps://btctranscripts.com/edgedevplusplus/2019/mining-firmware-security/Tue, 10 Sep 2019 00:00:00 +0000https://btctranscripts.com/edgedevplusplus/2019/mining-firmware-security/slides: https://docs.google.com/presentation/d/1apJRD1BwskElWP0Yb1C_tXmGYA_vkx9rjS8VTDW_Z3A/edit?usp=sharingThe State of Bitcoin Mininghttps://btctranscripts.com/magicalcryptoconference/2019/the-state-of-bitcoin-mining/Sun, 12 May 2019 00:00:00 +0000https://btctranscripts.com/magicalcryptoconference/2019/the-state-of-bitcoin-mining/Topic: The State of Bitcoin Mining: No Good, The Bad, and The Ugly -So I&rsquo;m going to talk a little bit about kind of the state of mining and the set of protocols that are used for mining today, a tiny bit about the history. 
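To make the MAST discussion above concrete: the output commits only to a merkle root over the possible spending scripts, and the spender reveals one script plus its branch. A toy sketch using plain SHA256 (not BIP 341's tagged hashes; the script contents are made up):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Toy binary merkle tree over script hashes, duplicating the last node on
    # odd-sized levels; only the shape of the construction, not a spec.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes linking leaves[index] to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Hypothetical spending conditions; the output commits only to the root.
scripts = [b"<key A> CHECKSIG", b"<key B> CHECKSIG",
           b"2 <A> <B> 2 CHECKMULTISIG", b"<timeout> CLTV DROP <key C> CHECKSIG"]
root = merkle_root(scripts)

# To spend via script 2, reveal just that script plus its merkle branch.
assert verify(scripts[2], 2, merkle_proof(scripts, 2), root)
```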
And then I&rsquo;m going to talk very briefly about some of the solutions that are being proposed, whether it&rsquo;s BetterHash, whether it&rsquo;s Braidpool design, rebooting P2Pool, and some of the other stuff, only kind of briefly at the end.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html +So I&rsquo;m going to talk a little bit about kind of the state of mining and the set of protocols that are used for mining today, a tiny bit about the history. And then I&rsquo;m going to talk very briefly about some of the solutions that are being proposed, whether it&rsquo;s BetterHash, whether it&rsquo;s Braidpool design, rebooting P2Pool, and some of the other stuff, only kind of briefly at the end.Better Hashing through BetterHashhttps://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Tue, 05 Feb 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-02-05-matt-corallo-betterhash/Announcement of BetterHash on the mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-June/016077.html Draft BIP: https://github.com/TheBlueMatt/bips/blob/betterhash/bip-XXXX.mediawiki Intro I am going to talk about BetterHash this evening. If you are coming to Advancing Bitcoin don’t worry I am talking about something completely different. You are not going to get duplicated content. That talk should be interesting as well though admittedly I haven’t written it yet. We’ll find out. BetterHash is a project that unfortunately has some naming collisions so it might get renamed at some point, I’ve been working on for about a year to redo mining and the way it works in Bitcoin.Handling Reorgs & Forkshttps://btctranscripts.com/edgedevplusplus/2018/reorgs/Fri, 05 Oct 2018 00:00:00 +0000https://btctranscripts.com/edgedevplusplus/2018/reorgs/https://twitter.com/kanzure/status/1052344700554960896 Introduction Good morning, my name is Bryan. I&rsquo;m going to be talking about reorganizations and forks in the bitcoin blockchain. First, I want to define what a reorganization is and what a fork is. diff --git a/tags/multisignature/index.xml b/tags/multisignature/index.xml index 38b4a45604..0fff983c8b 100644 --- a/tags/multisignature/index.xml +++ b/tags/multisignature/index.xml @@ -1,5 +1,5 @@ Scriptless multisignatures on ₿itcoin Transcriptshttps://btctranscripts.com/tags/multisignature/Recent content in Scriptless multisignatures on ₿itcoin TranscriptsHugo -- gohugo.ioenSat, 06 Oct 2018 00:00:00 +0000Instantiating (Scriptless) 2P-ECDSA: Fungible 2-of-2 MultiSigs for Today's Bitcoinhttps://btctranscripts.com/scalingbitcoin/tokyo-2018/scriptless-ecdsa/Sat, 06 Oct 2018 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/tokyo-2018/scriptless-ecdsa/https://twitter.com/kanzure/status/1048483254087573504 -maybe https://eprint.iacr.org/2018/472.pdf and https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html +maybe https://eprint.iacr.org/2018/472.pdf and https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001221.html Introduction Alright. Thank you very much. Thank you Pedro, that was a great segue into what I&rsquo;m talking about. 
He has been doing work on formalizing multi-hop locks. I want to also talk about what changes might be necessary to deploy this on the lightning network. History For what it&rsquo;s worth, these dates are rough. Andrew Poelstra started working on this and released something in 2016 for a Schnorr-based scriptless script model.Multi-Hop Locks for Secure, Privacy-Preserving and Interoperable Payment-Channel Networkshttps://btctranscripts.com/scalingbitcoin/tokyo-2018/multi-hop-locks/Sat, 06 Oct 2018 00:00:00 +0000https://btctranscripts.com/scalingbitcoin/tokyo-2018/multi-hop-locks/Giulio Malavolta (Friedrich-Alexander-University Erlangen-Nuernberg), Pedro Moreno-Sanchez (Purdue University), Clara Schneidewind (Vienna University of Technology), Aniket Kate (Purdue University) and Matteo Maffei (Vienna University of Technology) https://eprint.iacr.org/2018/472.pdf diff --git a/tags/p2c/index.xml b/tags/p2c/index.xml index 48e621224c..e1ef2ca9e5 100644 --- a/tags/p2c/index.xml +++ b/tags/p2c/index.xml @@ -1,5 +1,5 @@ Pay-to-Contract (P2C) protocols on ₿itcoin Transcriptshttps://btctranscripts.com/tags/p2c/Recent content in Pay-to-Contract (P2C) protocols on ₿itcoin TranscriptsHugo -- gohugo.ioenThu, 06 Jun 2019 00:00:00 +0000Taproothttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ previously: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/tags/p2p/index.xml b/tags/p2p/index.xml index f2c7e16614..bd35cadd3e 100644 --- a/tags/p2p/index.xml +++ b/tags/p2p/index.xml @@ -18,7 +18,7 @@ We&rsquo;re going to talk a little bit about bip324. This is a BIP that has &lt;bitcoin-otc.com&gt; continues to be the longest operating PGP web-of-trust using public key infrastructure. Rumplepay might be able to bootstrap a web-of-trust over time. Stealth addresses and silent payments Here&rsquo;s something controversial. 
Say you keep an in-memory map of all addresses that have already been used.Minisketch and Lightning gossiphttps://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Tue, 07 Jun 2022 00:00:00 +0000https://btctranscripts.com/bitcoinplusplus/2022/2022-06-07-alex-myers-minisketch-lightning-gossip/Location: Bitcoin++ Slides: https://endothermic.dev/presentations/magical-minisketch -Rusty Russell on using Minisketch for Lightning gossip: https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html +Rusty Russell on using Minisketch for Lightning gossip: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001741.html Minisketch library: https://github.com/sipa/minisketch Bitcoin Core PR review club on Minisketch (3 sessions): https://bitcoincore.reviews/minisketch-26 diff --git a/tags/package-relay/index.xml b/tags/package-relay/index.xml index 29bfa80bb4..7559bace3b 100644 --- a/tags/package-relay/index.xml +++ b/tags/package-relay/index.xml @@ -12,5 +12,5 @@ Location: Bitcoin Sydney (online) Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-11 -Package Mempool Accept and Package RBF: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html +Package Mempool Accept and Package RBF: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019464.html With illustrations: https://gist.Chaincode Decoded: Mempoolhttps://btctranscripts.com/chaincode-labs/chaincode-podcast/chaincode-decoded-mempool/Mon, 26 Apr 2021 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-podcast/chaincode-decoded-mempool/The Chaincode Decoded segment returns and we jump into the deep end of the mempool. \ No newline at end of file diff --git a/tags/privacy-problems/index.xml b/tags/privacy-problems/index.xml index 58d916e752..d61d45701c 100644 --- a/tags/privacy-problems/index.xml +++ b/tags/privacy-problems/index.xml @@ -5,7 +5,7 @@ I&rsquo;m here to talk to you about privacy. This is a big topic. It&rsq Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. 
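For context on the Minisketch/Lightning gossip items above: the goal is set reconciliation, where peers exchange a small fixed-size sketch from which the symmetric difference of their announcement sets can be recovered, instead of flooding every message. The toy below only handles a single missing element (a plain XOR of IDs), which is the degenerate case; real minisketch keeps several BCH-style syndromes so that up to a chosen capacity of differences can be decoded:

```python
from functools import reduce

def sketch(ids: set[int]) -> int:
    # Capacity-1 "sketch": XOR of all element IDs. Real minisketch keeps several
    # polynomial syndromes so up to `capacity` differences can be recovered.
    return reduce(lambda a, b: a ^ b, ids, 0)

alice = {0xAAAA, 0xBBBB, 0xCCCC, 0xDDDD}   # Alice has one extra announcement
bob   = {0xAAAA, 0xBBBB, 0xCCCC}

# Each side transmits only its tiny fixed-size sketch, not the whole set.
combined = sketch(alice) ^ sketch(bob)

# With exactly one element in the symmetric difference, the combined sketch
# *is* that element's ID, so Bob can request just the missing announcement.
assert combined == 0xDDDD
print(hex(combined))
```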
-PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.ANYPREVOUT, MPP, Mitigating Lightning Attackshttps://btctranscripts.com/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics/Thu, 13 Aug 2020 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2020-08-13-christian-decker-lightning-topics/Transcript completed by: Stephan Livera Edited by: Michael Folkson Latest ANYPREVOUT update ANYPREVOUT BIP (BIP 118): https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki Stephan Livera (SL): Christian welcome back to the show. diff --git a/tags/psbt/index.xml b/tags/psbt/index.xml index 9837be8e2c..e5975e3f3c 100644 --- a/tags/psbt/index.xml +++ b/tags/psbt/index.xml @@ -3,7 +3,7 @@ We are complete. Maybe we can start. So good morning. This is Optech Newsletter Location: LA BitDevs (online) CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199 Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860 -Greg Sanders Bitcoin dev mailing list post in April 2017: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html +Greg Sanders Bitcoin dev mailing list post in April 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html The vulnerability The way Bitcoin transactions are encoded in the software is there is a list of coins essentially and then there is a list of destinations and the amount being sent to each destination. The destinations do not include the fee. Nothing in the transaction tells you what the fee of the transaction is.Magical Bitcoinhttps://btctranscripts.com/la-bitdevs/2020-05-21-alekos-filini-magical-bitcoin/Thu, 21 May 2020 00:00:00 +0000https://btctranscripts.com/la-bitdevs/2020-05-21-alekos-filini-magical-bitcoin/Magical Bitcoin site: https://magicalbitcoin.org/ Magical Bitcoin wallet GitHub: https://github.com/MagicalBitcoin/magical-bitcoin-wallet sipa’s Miniscript site: http://bitcoin.sipa.be/miniscript/ diff --git a/tags/ptlc/index.xml b/tags/ptlc/index.xml index 4a0f38a1fd..6557522a34 100644 --- a/tags/ptlc/index.xml +++ b/tags/ptlc/index.xml @@ -3,7 +3,7 @@ Introduction My name&rsquo;s Adam Gibson, also known as waxwing on the inter Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. 
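Regarding the segwit/PSBT vulnerability excerpt above ("nothing in the transaction tells you what the fee is"): the fee is implicit, the difference between the input amounts being spent and the output amounts, so a signer that is lied to about input amounts can be tricked into signing away a huge fee. Toy numbers, purely illustrative:

```python
def implicit_fee(input_amounts: list[int], output_amounts: list[int]) -> int:
    # The fee never appears in the transaction; it is whatever is left over.
    return sum(input_amounts) - sum(output_amounts)

# What the signer believes it is spending (amounts supplied by the host).
claimed_inputs = [50_000, 30_000]          # sats
outputs        = [70_000, 5_000]           # sats
print("fee the signer thinks it pays:", implicit_fee(claimed_inputs, outputs))

# If the host misreports one input's amount, the fee actually paid is far larger.
real_inputs = [50_000, 300_000]
print("fee actually paid:", implicit_fee(real_inputs, outputs))
```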
-PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Payment Pointshttps://btctranscripts.com/chaincode-labs/chaincode-podcast/payment-points/Mon, 30 Mar 2020 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-podcast/payment-points/Nadav Kohen: 00:00:00 Right now in the Lightning Network, if I were to make a payment every single hop along that route, they would know that they&rsquo;re on the same route, because every single HTLC uses the same hash. It&rsquo;s a bad privacy leak. It&rsquo;s actually a much worse privacy leak now that we have multi-path payments, because every single path along your multi-path payment uses the same hash as well.A Schnorr-Taproot’ed Lightninghttps://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning/Thu, 06 Feb 2020 00:00:00 +0000https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-antoine-riard-taproot-lightning/Slides: https://www.dropbox.com/s/9vs54e9bqf317u0/Schnorr-Taproot%27ed-LN.pdf Intro Today Schnorr and Taproot for Lightning, it is a really exciting topic. diff --git a/tags/quantum-resistance/index.xml b/tags/quantum-resistance/index.xml index b1832df764..62256d4e3a 100644 --- a/tags/quantum-resistance/index.xml +++ b/tags/quantum-resistance/index.xml @@ -1,5 +1,5 @@ Quantum resistance on ₿itcoin Transcriptshttps://btctranscripts.com/tags/quantum-resistance/Recent content in Quantum resistance on ₿itcoin TranscriptsHugo -- gohugo.ioenThu, 06 Jun 2019 00:00:00 +0000Taproothttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/Thu, 06 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-06-taproot/https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki -https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html +https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-May/016914.html https://bitcoinmagazine.com/articles/taproot-coming-what-it-and-how-it-will-benefit-bitcoin/ previously: http://diyhpl.us/wiki/transcripts/bitcoin-core-dev-tech/2018-03-06-taproot-graftroot-etc/ https://twitter.com/kanzure/status/1136616356827283456 diff --git a/tags/research/index.xml b/tags/research/index.xml index 35093d8763..12ea0b83c1 100644 --- a/tags/research/index.xml +++ b/tags/research/index.xml @@ -8,7 +8,7 @@ Location: Bitcoin Sydney (online) Video: No video posted online The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. 
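On the HTLC hash-correlation issue Nadav Kohen describes above: because every hop locks to the same payment hash, colluding hops can link themselves to one payment. The point-based (PTLC) idea decorrelates hops by tweaking the secret at every hop. The sketch below uses plain modular integers as a stand-in for curve points and adaptor signatures, so it only illustrates the decorrelation, not the actual cryptography:

```python
import hashlib, random

N = 2**127 - 1            # toy modulus; integers stand in for curve points

def lock(x: int) -> str:
    """Toy 'lock' identifier a forwarding node would see."""
    return hashlib.sha256(x.to_bytes(16, "big")).hexdigest()[:16]

preimage = random.randrange(N)
hops = 3

# HTLCs today: every hop locks to the same hash, so hops correlate trivially.
htlc_locks = [lock(preimage)] * hops

# PTLC-style decorrelation: the sender adds a fresh blinding term per hop, so
# each hop sees a different lock value; only the sender knows how they relate.
blindings = [random.randrange(N) for _ in range(hops)]
ptlc_locks = []
s = preimage
for r in blindings:
    s = (s + r) % N
    ptlc_locks.append(lock(s))

print("HTLC locks:", htlc_locks)          # identical at every hop
print("PTLC locks:", ptlc_locks)          # all different
assert len(set(htlc_locks)) == 1 and len(set(ptlc_locks)) == hops
```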
Agenda: https://github.com/bitcoin-sydney/socratic/blob/master/README.md#2021-07 -First IRC workshop on L2 onchain support: https://lists.Sydney Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Tue, 01 Jun 2021 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Topic: Agenda in Google Doc below +First IRC workshop on L2 onchain support: https://gnusha.Sydney Socratic Seminarhttps://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Tue, 01 Jun 2021 00:00:00 +0000https://btctranscripts.com/sydney-bitcoin-meetup/2021-06-01-socratic-seminar/Topic: Agenda in Google Doc below Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1E9mzB7fmzPxZ74WZg0PsJfLwjpVZ7OClmRdGQQFlzoY/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. @@ -17,7 +17,7 @@ This is Bitcoin Problems, it is a Jekyll website I put on GitHub.< Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. -PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Socratic Seminarhttps://btctranscripts.com/chicago-bitdevs/2020-08-12-socratic-seminar/Wed, 12 Aug 2020 00:00:00 +0000https://btctranscripts.com/chicago-bitdevs/2020-08-12-socratic-seminar/Topic: Agenda below Video: No video posted online BitDevs Solo Socratic 4 agenda: https://bitdevs.org/2020-07-31-solo-socratic-4 diff --git a/tags/security-problems/index.xml b/tags/security-problems/index.xml index 464caac840..ed42e5b897 100644 --- a/tags/security-problems/index.xml +++ b/tags/security-problems/index.xml @@ -32,7 +32,7 @@ see also: Extracting seeds from hardware wallets The future of hardware wallets coredev.tech 2019 hardware wallets discussion Background A bit more than a year ago, I went through Jimmy Song&rsquo;s Programming Blockchain class. That&rsquo;s where I met M where he was the teaching assistant. Basically, you write a python bitcoin library from scratch. The API for this library and the classes and fnuctions that Jimmy uses is very easy to read and understand.Attack Vectors of Lightning Networkhttps://btctranscripts.com/chaincode-labs/chaincode-residency/2019-06-25-fabrice-drouin-attack-vectors-of-lightning-network/Tue, 25 Jun 2019 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-residency/2019-06-25-fabrice-drouin-attack-vectors-of-lightning-network/Location: Chaincode Residency – Summer 2019 Introduction All right so I&rsquo;m going to introduce really quickly attack vectors on Lightning. 
I focus on first what you can do with the Lightning protocol, but I will mostly speak about how attacks will probably happen in real life. It&rsquo;s probably not going to be direct attacks on the protocol itself. Denial of Service So the basic attacks you can have when you&rsquo;re running lightning nodes are denial of service attacks basically.Hardware Wallets (History of Attacks)https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Wed, 01 May 2019 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2019-05-01-stepan-snigirev-hardware-wallet-attacks/Slides: https://www.dropbox.com/s/64s3mtmt3efijxo/Stepan%20Snigirev%20on%20hardware%20wallet%20attacks.pdf -Pieter Wuille on anti covert channel signing techniques: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html +Pieter Wuille on anti covert channel signing techniques: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-March/017667.html Introduction This talk is the second in the series after my previous talk in London a few months ago at the Advancing Bitcoin conference. There I was talking mostly about general attacks on hardware, more from the theoretical perspective. I didn’t say anything bad hardware wallets that exist and I didn’t specify anything on the bad side. Here I feel a bit more free as it is a meetup not a conference so I can say bad things about everyone.Topological Analysis of the Lightning Networkhttps://btctranscripts.com/breaking-bitcoin/2019/lightning-network-topological-analysis/Tue, 15 Jan 2019 00:00:00 +0000https://btctranscripts.com/breaking-bitcoin/2019/lightning-network-topological-analysis/Topological analysis of the lightning network paper: https://arxiv.org/abs/1901.04972 https://github.com/seresistvanandras/LNTopology diff --git a/tags/segwit/index.xml b/tags/segwit/index.xml index 295d209d1a..77d7db219f 100644 --- a/tags/segwit/index.xml +++ b/tags/segwit/index.xml @@ -21,7 +21,7 @@ And we&rsquo;re back!Segwit, PSBT Vulnerab Location: LA BitDevs (online) CVE: https://nvd.nist.gov/vuln/detail/CVE-2020-14199 Trezor blog post on the vulnerability: https://blog.trezor.io/latest-firmware-updates-correct-possible-segwit-transaction-vulnerability-266df0d2860 -Greg Sanders Bitcoin dev mailing list post in April 2017: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html +Greg Sanders Bitcoin dev mailing list post in April 2017: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014843.html The vulnerability The way Bitcoin transactions are encoded in the software is there is a list of coins essentially and then there is a list of destinations and the amount being sent to each destination. The destinations do not include the fee. Nothing in the transaction tells you what the fee of the transaction is.</description></item><item><title>Advanced Segwithttps://btctranscripts.com/chaincode-labs/chaincode-residency/2019-06-18-james-obeirne-advanced-segwit/Tue, 18 Jun 2019 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-residency/2019-06-18-james-obeirne-advanced-segwit/Location: Chaincode Labs 2019 Residency Slides: https://residency.chaincode.com/presentations/bitcoin/Advanced_segwit.pdf James O&rsquo;Beirne: To sort of vamp off of Jonas&rsquo;s preamble, I&rsquo;m not that smart. I&rsquo;m a decent programmer, but compared to a lot of people who work on Bitcoin, I barely know what I&rsquo;m doing. 
I kind of consider myself like a carpenter, a digital carpenter equivalent. I&rsquo;m a steady hand. I can get things done. I&rsquo;m fairly patient, which i s key when it comes to Bitcoin Core development because, trust me, you&rsquo;re gonna be doing a lot of waiting around and a lot of rebasing.Bitcoin Optechhttps://btctranscripts.com/bitcoin-core-dev-tech/2018-10/2018-10-09-bitcoin-optech/Tue, 09 Oct 2018 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2018-10/2018-10-09-bitcoin-optech/https://twitter.com/kanzure/status/1049527415767101440 diff --git a/tags/signet/index.xml b/tags/signet/index.xml index 5e58e9bb6f..3f63f862e3 100644 --- a/tags/signet/index.xml +++ b/tags/signet/index.xml @@ -22,11 +22,11 @@ Intro Michael Folkson (MF): This is a Socratic Seminar organized by London BitDe Let’s prepare mkdir workspace cd workspace git clone https://github.com/bitcoin/bitcoin.git cd bitcoin git remote add kallewoof https://github.com/kallewoof/bitcoin.git git fetch kallewoof git checkout signet ./autogen.sh ./configure -C --disable-bench --disable-test --without-gui make -j5 When you try to run the configure part you are going to have some problems if you don’t have the dependencies. If you don’t have the dependencies Google your OS and “Bitcoin build”. If you have Windows you’re out of luck.Signet Integrationhttps://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Thu, 06 Feb 2020 00:00:00 +0000https://btctranscripts.com/advancing-bitcoin/2020/2020-02-06-kalle-alm-signet-integration/Slides: https://www.dropbox.com/s/6fqwhx7ugr3ppsg/Signet%20Integration%20V2.pdf BIP 325: https://github.com/bitcoin/bips/blob/master/bip-0325.mediawiki Signet on Bitcoin Wiki: https://en.bitcoin.it/wiki/Signet -Bitcoin dev mailing list: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Bitcoin dev mailing list: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html Bitcoin Core PR 16411 (closed): https://github.com/bitcoin/bitcoin/pull/16411 Bitcoin Core PR 18267 (open): https://github.com/bitcoin/bitcoin/pull/18267 Intro I am going to talk about Signet. Do you guys know what Signet is? A few people know. I will explain it briefly. I have an elevator pitch, I have three actually depending on the height of the elevator. Basically Signet is testnet except all the broken parts are removed.Signet annd its uses for developmenthttps://btctranscripts.com/edgedevplusplus/2019/signet/Tue, 10 Sep 2019 00:00:00 +0000https://btctranscripts.com/edgedevplusplus/2019/signet/https://twitter.com/kanzure/status/1171310731100381184 https://explorer.bc-2.jp/ -Introduction I was going to talk about signet yesterday but people had some delay downloading docker images. How many of you have signet right now? How many think you have signet right now? How many downloaded something yesterday? How many docker users? And how many people have compiled it themselves? Okay. I think we have like 10 people. The people that compiled it yourself, I think you&rsquo;re going to be able to do this.Signethttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html +Introduction I was going to talk about signet yesterday but people had some delay downloading docker images. How many of you have signet right now? 
How many think you have signet right now? How many downloaded something yesterday? How many docker users? And how many people have compiled it themselves? Okay. I think we have like 10 people. The people that compiled it yourself, I think you&rsquo;re going to be able to do this.Signethttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-signet/https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016734.html https://twitter.com/kanzure/status/1136980462524608512 Introduction I am going to talk a little bit about signet. Does anyone not know what signet is? The idea is to have a signature of the block or the previous block. The idea is that testnet is horribly broken for testing things, especially testing things for long-term. You have large reorgs on testnet. What about testnet with a less broken difficulty adjustment? Testnet is for miner testing really. \ No newline at end of file diff --git a/tags/soft-fork-activation/index.xml b/tags/soft-fork-activation/index.xml index 1793e6ae89..a3331b037e 100644 --- a/tags/soft-fork-activation/index.xml +++ b/tags/soft-fork-activation/index.xml @@ -30,9 +30,9 @@ That&rsquo;s right. Aaron: 00:01:57 Segregated Witness, which was the previous soft fork, well, was the last soft fork. We&rsquo;re working towards a Taproot soft fork now. Sjors: 00:02:06 -It&rsquo;s the last soft fork we know of.How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html -T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html -F7 argument for LOT=false: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html +It&rsquo;s the last soft fork we know of.How Bitcoin UASF went down, Taproot LOT=true, Speedy Trial, Small Blockshttps://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Wed, 17 Mar 2021 00:00:00 +0000https://btctranscripts.com/stephan-livera-podcast/2021-03-17-luke-dashjr-taproot-activation/Luke Dashjr arguments against LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018498.html +T1-T6 and F1-F6 arguments for LOT=true and LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html +F7 argument for LOT=false: https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018415.html Transcript by: Stephan Livera Edited by: Michael Folkson Intro Stephan Livera (SL): Luke, welcome to the show. Luke Dashjr (LD): Thanks. @@ -44,7 +44,7 @@ SL: So, Luke for listeners who are unfamiliar, maybe you could take a minute and Video: No video posted online Google Doc of the resources discussed: https://docs.google.com/document/d/1VAP4LNjHHVLy9RJwpQ8LqXEUw79z5TTZxr9du_Z0-9A/ The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. 
If you were a participant and would like your comments to be attributed please get in touch. -PoDLEs revisited (Lloyd Fournier) https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html +PoDLEs revisited (Lloyd Fournier) https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-January/002929.html We’ll start with me talking about my research into UTXO probing attacks on Lightning in the dual funding proposal.Socratic Seminar - Signethttps://btctranscripts.com/london-bitcoin-devs/2020-08-19-socratic-seminar-signet/Wed, 19 Aug 2020 00:00:00 +0000https://btctranscripts.com/london-bitcoin-devs/2020-08-19-socratic-seminar-signet/Pastebin of the resources discussed: https://pastebin.com/rAcXX9Tn The conversation has been anonymized by default to protect the identities of the participants. Those who have given permission for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Intro Michael Folkson (MF): This is a Socratic Seminar organized by London BitDevs. We have a few in the past. We had a couple on BIP-Schnorr and BIP-Taproot that were really good with Pieter Wuille, Russell O’Connor and various other people joined those.How to Activate a New Soft Forkhttps://btctranscripts.com/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/Mon, 03 Aug 2020 00:00:00 +0000https://btctranscripts.com/bitcoin-magazine/2020-08-03-eric-lombrozo-luke-dashjr-taproot-activation/Location: Bitcoin Magazine (online) diff --git a/tags/statechains/index.xml b/tags/statechains/index.xml index 8a453fe143..7e2dfd5e92 100644 --- a/tags/statechains/index.xml +++ b/tags/statechains/index.xml @@ -1,5 +1,5 @@ Statechains on ₿itcoin Transcriptshttps://btctranscripts.com/tags/statechains/Recent content in Statechains on ₿itcoin TranscriptsHugo -- gohugo.ioenSat, 29 Apr 2023 00:00:00 +0000Building Trustless Bridgeshttps://btctranscripts.com/bitcoinplusplus/layer-2/building-trustless-bridges/Sat, 29 Apr 2023 00:00:00 +0000https://btctranscripts.com/bitcoinplusplus/layer-2/building-trustless-bridges/Many mechanisms have been created that enable users to lock BTC on the mainchain, transfer a claim on the BTC in some offchain system, and then later redeem the claim and take ownership of the underlying BTC. Colloquially known as &ldquo;bridges&rdquo;, these mechanisms offer a diverse range of security and usability properties for users to choose from depending on their risk tolerance and cost preferences. 
This talk will give an overview of the different types of BTC bridges that exist, how they work, and how they can be improved.Blind statechains: UTXO transfer with a blind signing serverhttps://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/Fri, 07 Jun 2019 00:00:00 +0000https://btctranscripts.com/bitcoin-core-dev-tech/2019-06/2019-06-07-statechains/https://twitter.com/kanzure/status/1136992734953299970 -&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html +&ldquo;Formalizing Blind Statechains as a minimalistic blind signing server&rdquo; https://gnusha.org/url/https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-June/017005.html overview: https://medium.com/@RubenSomsen/statechains-non-custodial-off-chain-bitcoin-transfer-1ae4845a4a39 statechains paper: https://github.com/RubenSomsen/rubensomsen.github.io/blob/master/img/statechains.pdf previous transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/tokyo-2018/statechains/ diff --git a/tags/taproot/index.xml b/tags/taproot/index.xml index c276e21656..ef48c9f378 100644 --- a/tags/taproot/index.xml +++ b/tags/taproot/index.xml @@ -17,9 +17,9 @@ Google Doc of the resources discussed: https://docs.google.com/document/d/1E9mzB The conversation has been anonymized by default to protect the identities of the participants. Those who have expressed a preference for their comments to be attributed are attributed. If you were a participant and would like your comments to be attributed please get in touch. Bitcoin Problems (Lloyd Fournier) https://bitcoin-problems.github.io/ This is Bitcoin Problems, it is a Jekyll website I put on GitHub.Taproot Activation Update: Speedy Trial And The LOT=True Clienthttps://btctranscripts.com/bitcoin-explained/taproot-activation-update/Fri, 23 Apr 2021 00:00:00 +0000https://btctranscripts.com/bitcoin-explained/taproot-activation-update/<p>In this episode of &ldquo;The Van Wirdum Sjorsnado&rdquo; hosts Aaron van Wirdum and Sjors Provoost discussed the final implementation details of Speedy Trial, the Taproot activation mechanism included in Bitcoin Core 0.21.1. Van Wirdum and Provoost also compared Speedy Trial to the alternative BIP 8 LOT=true activation client.</p> -<p>After more than a year of deliberation, the Bitcoin Core project has merged Speedy Trial as the (first) activation mechanism for the Taproot protocol upgrade. Although van Wirdum and Provoost had already covered Taproot, the different possible activation mechanisms and Speedy Trial specifically in previous episodes, in this episode they laid out the final implementation details of Speedy Trial.</p>Chaincode Decoded: Bech32mhttps://btctranscripts.com/chaincode-labs/chaincode-podcast/chaincode-decoded-bech32m/Fri, 16 Apr 2021 00:00:00 +0000https://btctranscripts.com/chaincode-labs/chaincode-podcast/chaincode-decoded-bech32m/This revisits a segment we call Chaincode Decoded. 
Matt Corallo

Date: February 11, 2021

Transcript By: Michael Folkson

Tags: Taproot

Category: Podcast

Media: https://anchor.fm/tales-from-the-crypt/episodes/228-UASFs--BIP-148--BIP-91--and-Taproot-Activation-with-Matt-Corallo-eq7cif

Topic: UASFs, BIP 148, BIP 91 and Taproot Activation

Location: Tales from the Crypt podcast

Intro

Marty Bent (MB): Sitting down with a man who needs no introduction on this podcast. I think you have been on four times already. I think this is number five Matt. You are worried about the future of Bitcoin. What the hell is going on? You reached out to me last week, you’re scaring the s*** out of me. Why are you worried?

Matt Corallo (MC): First of all thanks for having me. I think the Bitcoin community broadly right now is selectively misremembering events that happened only a few years ago now and drawing conclusions from them that are not entirely justified. We can get into it in depth but I think the block size wars had an effect on a lot of people. There were two sides that both had a story to tell. One side got largely pushed out of Bitcoin, a lot of people on the “losing side” have moved on from Bitcoin and are doing cryptocurrency exchanges or just not working in cryptocurrency anymore. That left a side that gets to write their own history. I think there are some facts that have been left out and more importantly it impacts the way people see Bitcoin’s future and how Bitcoin should exist going forward. And that worries me, I don’t really care if the history got rewritten, I care if it results in a Bitcoin that is fundamentally less safe.

The history of SegWit and SegWit2x

MB: This all revolves around the user activated soft fork and that mechanism to activate certain upgrades, whether it be SegWit or potentially Schnorr and Taproot on the horizon. Let’s jump into the fine details of what went on, how a user activated soft fork works and why you believe it should be the nuclear option, is that the correct term?

MC: Yeah maybe. If we wanted to get into it we have to go back and revisit the history. We have to go back and remember exactly what happened and more importantly what didn’t happen. For most of your listeners who were around a few years ago or who have since learned the history of SegWit2x and the activation of SegWit, it bears revisiting and ensuring that everyone is on the same page, that there is a more complete history that people are aware of. As hopefully most of your listeners know, with SegWit there was a large debate within the Bitcoin community broadly, exchanges, large industry players, miners, users, developers, a number of different crowds around how Bitcoin should scale. Whether the block size should just be increased ad infinitum, whether the block size should completely be untouched. It all focused on the block size largely, there were a number of technical issues with just blindly increasing the block size which after some time led to the proposal of SegWit. SegWit is kind of a nifty little trick to increase the block size somewhat while also solving a number of technical issues that prevented increasing the block size previously. There were potential denial of service issues with just blindly increasing the block size that SegWit solved. So it was this nifty little trick that came from Pieter Wuille and Luke Dashjr, designed for Bitcoin and proposed as a standalone soft fork. That is the beginning of the story. Over the next year after that there was an all-out Bitcoin civil war that had already been building but it reached a fever pitch throughout the course of the one year activation timeline. SegWit and several soft forks prior to it generally had a theory of soft fork activation which went something like this. Developers work on the soft fork, design it and propose it to the community broadly in the form of both a BIP and code and that code eventually hopefully takes the form of a Bitcoin Core release. This in effect is a formal proposal to the community for a new soft fork. Then hopefully the community adopts it, upgrades to it, is happy with it by the time it is out there, hopefully it has really strong consensus behind it. And as a final step, because with soft forks you want to make sure that the network stays together on one chain, you have miners signal when they are ready and when they are in fact enforcing rules. If you don’t have that you could have a chain split. If you have a majority or a good chunk of miners who aren’t enforcing the rules, the chain can go off in one direction that is invalid to a number of nodes and suddenly you are going to have two chains depending on who you ask and whether nodes have upgraded or whatnot. That is a mess, you don’t want that. It is not fun, you end up with double spend potential for some people who might have forgotten to upgrade a node or whatever. No one wants that. So ideally you have miners say “Yes I’m ready. I’ve upgraded.” Once you reach some threshold, in the case of SegWit and a number of other proposals it has been 95 percent, the change locks in and activates and everything goes forward. SegWit took place over the course of a year, it came out as a proposal and a Bitcoin Core release. At the end of a year it would have timed out and presumably some other activation method would have been tried.
SegWit kind of languished and, depending on who you ask, exactly why varies: miners had some dubious incentives around some optimizations they were using that were patented and they weren’t very upfront about it, or other reasons to block it. There was a contingent within the community that might have been happy with SegWit but viewed it as insufficient, that the block size needed to be increased hugely and SegWit represented some small multiple, somewhere between 1.5x and 2x, which is still a pretty substantial increase but they wanted a significantly higher multiple. SegWit, which might have been fine on its own, was not sufficient and thus simply shouldn’t happen. Long story short, this kind of miner readiness signaling that was intended to just prevent the network from splitting into a million pieces was abused to prevent SegWit from activating.
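
To make the miner readiness signaling described here concrete, below is a minimal sketch of BIP 9-style version-bits lock-in, assuming the SegWit-era parameters of a 2016-block window and a 95 percent threshold. It is illustrative only and is not Bitcoin Core’s actual state machine.

```rust
// Minimal sketch of BIP 9-style version-bits lock-in (illustrative only,
// not Bitcoin Core's actual implementation).

#[derive(Debug, Clone, Copy, PartialEq)]
enum DeploymentState {
    Started,  // signaling period is open
    LockedIn, // threshold reached in a full retarget window
    Active,   // new rules enforced from the following window
    Failed,   // timeout passed without reaching the threshold
}

const WINDOW: usize = 2016;    // blocks per retarget period
const THRESHOLD: usize = 1916; // ~95% of 2016, the SegWit-era threshold

// `signals` records, for one complete retarget window, whether each block
// set the deployment's version bit; `timed_out` says whether the deployment's
// end time passed before this window finished.
fn next_state(current: DeploymentState, signals: &[bool], timed_out: bool) -> DeploymentState {
    match current {
        DeploymentState::Started => {
            let count = signals.iter().filter(|s| **s).count();
            if signals.len() == WINDOW && count >= THRESHOLD {
                DeploymentState::LockedIn
            } else if timed_out {
                DeploymentState::Failed
            } else {
                DeploymentState::Started
            }
        }
        // One full window after lock-in the new rules become active.
        DeploymentState::LockedIn => DeploymentState::Active,
        other => other,
    }
}
```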

MB: This is version bit signaling via BIP 8 correct?

MC: BIP 9 I think. BIP 8 is BIP 9 with a few fixes in it. Then during this time period a number of companies got together in secret and decided they were going to decide Bitcoin’s consensus for us all, signed what became the New York Agreement led by DCG and declared unilaterally that Bitcoin was going to activate SegWit and in addition a hard fork which further doubles the block size on top of the SegWit increase. SegWit is already a 1.5x to 2x, and then we are going to double it again and get something between 2x and 4x. This was going to activate on some, I forget the exact timeline but it was something completely nonsensical. It was like “We are going to upgrade the entire Bitcoin network and everyone is going to upgrade their node in like 3 months. Let’s go.” The increase block size more thing, whatever, let’s do it all in 3 months is obviously brain dead. But more importantly SegWit2x represented a small group of people coming together and deciding Bitcoin’s rules. Fundamentally my issue with anything around consensus changes is that we as a community need to have a higher bar. We need to have a very high bar for how changes get made and if in the end you have some small group of people who are allowed to just blindly upgrade the rules, decide in a private meeting, invite only, what the rules are then Bitcoin has failed. This private meeting is going to decide that you have to do KYC or a private meeting is going to decide that it is ok if not everyone can run a node and it can be prohibitively expensive to run a node. There are a number of things that would be unacceptable but would be trivial to do if you are able to change the rules by small committee. That was the New York Agreement, SegWit2x. The community was broadly up in arms, the developer community was universally up in arms, there were no objections within the developer community to condemning SegWit2x and the New York Agreement. So the community is up in arms, there is this group of industry players who are saying “We are just going to do it, it doesn’t matter what you say Bitcoin users, it is just going to happen, suck it up.” Because SegWit2x was a hard fork it was fundamentally going to be a different token. If you had sufficient hash power on both sides you could always have two tokens, you could always trade the old one for the new one and vice versa. And so of course several exchanges just listed both and listed a futures market. Bitfinex listed a futures market that was relatively deep and BitMex, it is only a futures market but they declared that the futures were going to be only non hard forked Bitcoin as it exists today, not the SegWit2x Bitcoin. The futures market told a very different story from the confidence of the business community. SegWit2x tokens traded something around 20, 30 percent, sometimes it was 10 percent of Bitcoin. You could see this both on Bitfinex’s direct trading of the two tokens but you could also see it in the discount on the futures on BitMex. You had billion dollar daily volumes on BitMex showing exactly what these hard fork tokens were worth. It was a pretty good indicator that the market was not a fan of the SegWit2x idea. Now you have this very vocal Bitcoin community on Twitter, the developer community, you have a very confident business community, several key businesses, not every cryptocurrency business but several key businesses in the cryptocurrency space. Coinbase, it was led largely by BitGo, a number of others, a number of large mining pools. 
They are both declaring they are going ahead with their vision and this would have resulted in two separate currencies fighting over who gets the term Bitcoin and as you might imagine that would be a mess and that would destroy the value of Bitcoin for basically everyone.

MB: If they were successful in writing good code, we came to find that they were off by one block and it wouldn’t have successfully hard forked.

MC: Their software didn’t actually function. This is what happens when the entire developer community thinks this is a terrible idea, they can’t hire someone with a deep understanding of the codebase because they all don’t want to work for you. You have these two factions that are screaming they are going ahead and both won’t admit it publicly but very well aware that if they both go ahead the whole thing catches fire and any legitimacy that Bitcoin had in the financial world is gone because now you have two different people claiming to be Bitcoin. Why would anyone care? A number of critics who have always suggested that Bitcoin has no value because anyone can create a new cryptocurrency and thus there is no scarcity, a poor argument, but it would have actually been true had this come to pass. I think a big reason why it didn’t come to pass is because everyone understood that their entire value would be gone. That’s a pretty strong incentive if nothing else. So how do two competing factions who both have very different visions and both think the other is acting in bad faith come together and protect the value of Bitcoin and prevent it from splintering into two different things but presumably later having more issues. The very loud Twitter community, there were a few folks who were very “Screw big business” and blah blah blah, but notably only one or two folks who ever contributed materially to Bitcoin Core decided that they were going to do what was then termed a user activated soft fork. Before BIP 148 and UASF we would have just termed it a flag day soft fork but basically it is just a change where on a certain day it activates, period. You have some software, the software says “On this date the consensus rules are going to change to be X, whatever X is.” In this case X was “If your block does not signal readiness for SegWit and isn’t contributing to turning SegWit on we are just going to ignore you and consider the block invalid.” That was one group, it was very vocal. So now you have two proposed changes in addition to original Bitcoin. You have the soft fork that is led by a bunch of users on Twitter who are saying “We are going to fork off any blocks that don’t signal for SegWit and we are going to go off on our own way and have our own chain.” Then you have this business community that is saying “We are going to go hard fork and have our own chain.” Then you have the people in the middle who are still just running the software that they downloaded two weeks ago and are a little confused maybe. Now suddenly you have Bitcoin careering towards three different tokens. What happens next depends on who you ask. I think this is really the key issue.
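
The BIP 148 rule paraphrased above, “if your block does not signal readiness for SegWit we consider it invalid”, boils down to one extra validity check after a flag day. A rough sketch follows, with hypothetical field and constant names; real signaling also requires the BIP 9 version prefix, which is omitted here.

```rust
// Rough sketch of a BIP 148-style flag-day rule (illustrative only;
// names and structure here are hypothetical, not Bitcoin Core's).

const FLAG_DAY: u32 = 1_501_545_600;       // 2017-08-01 00:00 UTC
const SEGWIT_TIMEOUT: u32 = 1_510_704_000; // 2017-11-15, the original BIP 141 timeout
const SEGWIT_BIT: u32 = 1;                 // version bit used for SegWit signaling

struct BlockHeader {
    version: i32,
    median_time_past: u32,
}

// During the mandatory signaling window a block that does not signal the
// SegWit bit is simply treated as invalid by nodes enforcing BIP 148.
fn bip148_block_valid(header: &BlockHeader) -> bool {
    let in_mandatory_window =
        header.median_time_past >= FLAG_DAY && header.median_time_past < SEGWIT_TIMEOUT;
    let signals_segwit = (header.version as u32 >> SEGWIT_BIT) & 1 == 1;
    !in_mandatory_window || signals_segwit
}
```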

MB: Can we take a step back before we get to the key issue? Did the Twitter users propose the user activated soft fork? Where does Shaolin Fry play into this?

MC: There was one anonymous developer who wrote BIP 148, I don’t believe their actual name or identity has ever been revealed, there is various speculation about it. They posted the BIP and posted some code for it and provided binaries. A bunch of people downloaded it, the alias Shaolin Fry had never contributed to Bitcoin Core, it was never used for anything else, that alias existed only to post what became BIP 148 and this code. They obviously had a relatively active Twitter account but that is where this code came from. So if you ask someone who was staunchly in favor of BIP 148 and UASF, their view is that UASF activated, BIP 148 was enforced and on the day of BIP 148 no blocks did not signal for SegWit and thus no blocks were ever forked off and the consensus rules of BIP 148 still remained on the same chain with the rest of Bitcoin. That soft fork was enforced and SegWit activated and happily ever after. It is obviously a little wrong because to my knowledge no pool or materially sized transactor ever ran that code. A lot of Twitter users were running it, there was a campaign to get people to spin up UASF enforcing nodes but just spinning up a node as we know doesn’t have much impact on anything. Otherwise Chainalysis would be in charge of the consensus rules because they certainly have a lot of nodes. What really matters is people using a node to transact saying “I will only accept your transaction in Bitcoin if my node thinks that the rules of the network have been enforced and that transaction meets all the rules of the network including being included in a block that meets the rules of the network.” Some individuals presumably transacted around that code but no large businesses, no people accepting large volume payments, no mining pools and presumably not even very much in terms of total transaction volume of individuals ran on it. I certainly never ran it and I transact some on Bitcoin Core. So what really did happen and why was it that all the blocks suddenly started signaling for SegWit on that day? That is where it gets complicated and it is kind of unclear. There is some indication that the very confident Twitter users who were saying “We are running UASF whether you like it or not” scared several large players in the mining pool space, specifically there were rumors going round that Jihan (Wu) was scared by it. It is not unreasonable right, you have people who are very ideological and believe very strongly in Bitcoin and are telling NYA to go screw themselves. Saying that they are going to run the software and they are going to fork Bitcoin into another coin whether you get onboard or not. It is absolutely the case that both those users and everyone loses out if Bitcoin forks. You have two tokens and in every possible way it is a mess. There was a proposal which several New York Agreement signatories, SegWit2x supporting businesses and mining pools got behind called I believe BIP 91, I may be wrong.

MB: I remember BIP 148, BIP 90 and BIP 91 being crucial around this time.

MC: There was BIP 91 which said “What we are going to do is we are going to do SegWit2x, you are going to signal readiness for SegWit by the date of the UASF activation timeline and that signaling is going to signal support for both SegWit and a hard fork. The SegWit2x people should go ahead and signal readiness for SegWit and we should activate SegWit that way.” If you asked some large businesses that is probably what they would have told you at the time they were signaling.

MB: There were sites tracking the percentage to it.

MC: Bitmain exerted a lot of influence on their customers and basically the entire Bitcoin industry in China because a large part of it was mining related. Basically they would tell people “If you get onboard and do what we say and use our pool or use a pool that we bless then you’ll get the next generation hardware sooner and for a cheaper price.” Basically once Bitmain came around, they were the big blocker, once they came around all the other dominos fell within a week. That’s largely what happened. The “UASF activated and we won. We showed them via the free market thing” it is a little disingenuous. Clearly the free market and the very large volumes of futures trading and this fork token trading on Bitfinex showed very clearly that the SegWit2x New York Agreement token was dead in the water. It had 10 percent of the value of Bitcoin or 20 or 30 depending on where you looked. It was a small fraction of the value of Bitcoin and it was dead. There was never any futures trading for what would have happened had the UASF token gone off in one direction and classic Bitcoin had gone off in the other direction which was totally a possibility. The UASF thing, the code for it was released very, very quickly because of the time crunch basically, SegWit was about to timeout and they wanted to make sure that it activated before SegWit timed out. They released it and it was going to turn on I think within a month or two or maybe even a few weeks. And so no one spent the time to figure it out on paper and give the markets the voice. No one had the time to audit the code so no one was materially risking their business on it or accepting really large volume payments on it or switching out to it. There were a number of issues in the code depending on which case you looked at. One Bitcoin node forking off from the rest of the network and finding other peers to connect to that are on its fork is not a solved problem in the codebase at all. Hopefully it should never happen. Or it definitely wasn’t at the time. It is very unclear whether you could say that this UASF, BIP 148 movement ever activated. You could absolutely say it had an impact and it probably if you believe rumors, who knows, had some impact on Jihan and Bitmain and actually scared some people into allowing SegWit to activate in the signaling. But had they decided not to do that it basically had no hash rate behind it. So somebody needed to step up, if it was going to be a thing, and invest a very large sum of money to buy miners, offer existing miners to buy their UASF tokens had Bitcoin forked. It is unclear that there was this kind of money behind it. You see a lot of people running around saying “Yeah UASF is great, it works, we’ve done it before. It activated and it is how we activated SegWit.” The story is 10x more complicated than that. At the end of the day the only thing you can say for certain is Bitcoin users, Bitcoin mining pools, Bitcoin miners, Bitcoin businesses all understood that if Bitcoin forked into multiple tokens that would be very bad for their investment. Then they took action to make sure that didn’t happen. You can’t really say anything more than that. You can’t say that one thing was the thing that activated anymore than you can say that another thing was the thing that activated SegWit. There were several things involved.

MB: I am pulling up the newsletter that I wrote on Tuesday July 18th 2017 and one of the headlines in that newsletter is “BIP 91 to the rescue, hopefully.” This is the way I understood it in the middle of July to bring some clarity. Funnily enough this is the same one in which John McAfee said he is going to eat his d***. I have that in the newsletter too. There is the possibility of locking in SegWit via BIP 141 which required 95 percent miner consensus. At this point 95 percent miner consensus was not reached via BIP 141. Our last hope of activating the protocol upgrade without a network split may be BIP 91 which is authored by James Hilliard. The BIP makes it so BIP 148 and SegWit2x are compatible. In short if 80 percent of the miners represented by collective hash power signal for BIP 91 SegWit will be activated and all blocks that are not signaling for SegWit activation will be rejected from the network. That is the way I described it.

MC: Right, BIP 91 basically reduced the threshold from 95 percent to 80 percent and then also made a non-technical statement that this signaling would also include signaling for a hard fork. That hard fork never materialized, there were no developers interested in writing code for that and even if someone had written code for it, it probably would have got laughed out of the room. That part was basically ignored after the fact.
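
Sketching the mechanics being described (the numbers come from BIP 91, everything else is hypothetical): BIP 91 signaled on its own bit over shorter 336-block windows with an 80 percent threshold, and once active it rejected blocks that failed to signal the original SegWit bit, which is what made it compatible with both BIP 141’s 95 percent threshold and BIP 148.

```rust
// Illustrative sketch of the BIP 91 mechanics described above (not real code).

const BIP91_WINDOW: usize = 336;
const BIP91_THRESHOLD: usize = 269; // 80% of 336 blocks, rounded up

// Did one full window of bit-4 signals lock BIP 91 in?
fn bip91_locked_in(bit4_signals: &[bool]) -> bool {
    bit4_signals.len() == BIP91_WINDOW
        && bit4_signals.iter().filter(|s| **s).count() >= BIP91_THRESHOLD
}

// Once BIP 91 is active, a block must signal the SegWit bit (bit 1)
// to be accepted, which pushes BIP 141 signaling past its own threshold.
fn bip91_block_acceptable(bip91_active: bool, signals_segwit_bit1: bool) -> bool {
    !bip91_active || signals_segwit_bit1
}
```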

MB: It is cool to have this historical journal. So James Hilliard, BIP 91 was “Alright guys, let’s not split, let’s compromise here.”

MC: It is terrible for all of us if we split. Here is a thing that we can do where both sides get to claim they won. The UASF people get to scream about how UASF locked in on this date. The miners and large pool operators get to talk about how they successfully activated SegWit2x and at the end of the day SegWit will be activated in a way that is compatible with every node that is currently sitting on the network. We can all go about our lives and have one Bitcoin.

MB: The miners were crucial in this, you needed 80 percent of miner signaling.

MC: Right, with all of these really compressed timelines the UASF BIP 148 crowd was suggesting they could activate something on a really compressed timeline but you can’t get any material amount of nodes upgraded. The only thing you can really do is get miners to enforce these rules. As long as they are enforcing them they are the rules of the network and we’ll get nodes to upgrade over time to enforce them.

MB: It was a hairy situation back then if you weren’t around.

MC: It was so hairy. There was definitely some hair loss due to the stress. It is like when you compare a President’s photo from when they start their term to when they end. These days not so much, but they used to start without gray hair and always end with gray hair, looking about 20 years older even though it had only been 8. Here it was over the course of one year.

MB: Obama’s was pretty drastic.

MC: Obama looked about 20 years older.

MB: I am loving that we are rehashing this, jogging memories and going back through all the newsletters now. This was a daily update. BIP 91, shoutout to James Hilliard for pushing this forward. It arguably prevented a network split. Even myself personally, in the years that have come to pass, I have somehow whitewashed my own memory and thought that BIP 148 was the driving factor for actually activating it. It was more influence or pressure.

MC: It was definitely a very successful pressure campaign. If you want to argue that that kind of thing activates you really have to make a market analysis and have some evidence that the other token is going to be completely worthless and thus our flag day activation and our splitting off is going to be the only thing with value and the only thing with hash power and the only thing that exchanges care about. That just didn’t exist then. If you want to assign blame for what really succeeded and what really activated SegWit you would have to have visibility into how things would have gone had the various splits happened. And we just don’t. There are a lot of things that came together and a lot of things that had an impact. It is impossible to say what did it.

MB: Now I’m refreshing my memory around this, the consensus right after was both sides can say they won. Everybody can be happy. Why are you worried about this now?

Taproot activation

MC: Rehashing old history is fun, well actually for most of us it is miserable because that was a pretty stressful time. But it is important I think to take away the right conclusion because in discussion of Taproot, there aren’t as many voices in it as SegWit, it is pretty straightforward.

MB: Not even close to as controversial.

MC: It is very, very well designed. Too well designed sometimes. A lot of effort has gone into it. I think importantly Optech has done some good work on reaching out to businesses and chatting with businesses about the design, making sure these large industry players who are using Bitcoin are aware of it and know about it, and also a lot of the community. The core consensus bits of Taproot are great, no one has anything bad to say basically. Cool, let’s do it. The community has taken that a little bit to heart and is like “Cool, let’s go. Let’s do it, let’s turn it on.” The how to turn it on is sadly maybe also just as important as the actual design. At the end of the day you have to write consensus level code, code that describes how Bitcoin functions as a part of the activation method. The activation method of a soft fork is a consensus change in of itself. It describes a consensus change that activates this much larger consensus change but it describes a consensus change. So it needs just as much care as anything else. To wax a little more philosophically Bitcoin in my view, most Bitcoiners and a lot of people have invested in Bitcoin because it represents this conservative system that you can depend upon, that exists and provides you hopefully censorship resistant value transfer and value storage, payments whatever in a way that you can depend on. If you are using it in some specific way today you are going to be able to use it in that specific way in 5 years, in 10 years. The consensus system that is Bitcoin, the technical features available to you in Bitcoin continue to exist in roughly the same way or at least a compatible way as they do today in the long run. And your use case for Bitcoin, there are a million use cases for Bitcoin, whether you are hedging against the Fed printing too much money and you think hyperinflation is happening or whether you are in Venezuela and hyperinflation is happening or whether you are in Nigeria and you want to accept donations because you want to protest against the government being sketchy or you are in Russia, whatever, there are a million reasons to care about Bitcoin and to have a use for it. If you have something like that Bitcoin should continue to serve you. You should be able to invest in and use Bitcoin and invest time into building Bitcoin based solutions feeling safe and secure that it will continue to exist as it exists today. No one is going to come in and change it out from under you, decide to change the cap because that will help certain users but hurt certain other users. Or no one is going to come and out from under you change materially the fact that there are transaction fees because you invested a tonne of money into buying a mining farm. At the end of the day because there are so many different legitimate perfectly valid use cases for Bitcoin we as a community have to make sure that changes happen in such a way that Bitcoin continues to work for all of those use cases. That has always been my theory on soft forks. I wrote a whole long blog post on February 28th 2017. It is the last blog post I wrote.

MB: I like you writing code instead of blog posts, it is better time spent.

MC: Probably. I have this whole long thing where I wax philosophically about there being different use cases of Bitcoin. That is important and that is what is so cool about Bitcoin. I have my interests in Bitcoin and you have your interests in Bitcoin and they are fairly different. We care about Bitcoin for fairly different reasons, obviously it overlaps a lot but certainly I care about Bitcoin for many different reasons than a lot of other Bitcoiners. There are some people down the street here in New York City who are trading Bitcoin who are Bitcoiners. They care about using Bitcoin as a hedge against the Fed or whatever monetary policy or whatever crazy market things. They are still Bitcoiners even though they are never on Bitcoin Twitter but they have a use for Bitcoin and we should make sure that exists and that use case is still supported. If we take this as an important key value proposition of Bitcoin, that there are so many value propositions and that we care about all of them, we have to be very, very careful about not just about the consensus changes we make, Taproot is very carefully done and most importantly if you don’t want to use it there is not really any reason to care about it. It is not going to negatively impact you. Maybe fees will drop a little bit but probably not a lot. There is no reason to worry about if you aren’t going to use it. But also and more importantly the activation method sets the stage for how we activate soft forks. That activation methods themselves describe a set of rules technically and in code but also socially of what bar we hold changes to. Because Taproot is this thing that has been going for a while, it has been cooking forever, it is super carefully designed and it is really well implemented. There’s code and it is great and it is ready to go. There are a lot of people who are like “Let’s just do it. Last time we did this UASF thing.” They view history through some rose colored glasses maybe. Let’s just do it, we’ve got this thing, let’s just turn it on, the last UASF came on and activated in a few weeks, let’s just do it. We’ve got this. I think that does Bitcoin a disservice. I think if our approach to changes is that we as a very small community on Twitter or whatever just ship code that turns on a soft fork whether the network is fully upgraded to it, whether we’ve actually made sure that we’ve talked to as broad a community as we can, that doesn’t set up Bitcoin for a win. It doesn’t set Bitcoin up to be this conservative careful thing that changes only when people aren’t going to be hugely negatively impacted by the change. Most importantly, flag day activations in general and UASF is another term for it, a more cool term now, they don’t have this backout. “Here’s a release of Bitcoin Core with the change in it. We’ve shipped it and the change is going to happen in a year, six months whatever and that’s it.” That doesn’t send a good image of Bitcoin to anyone. No one wants a Bitcoin where a release of Bitcoin Core decides the consensus rules in of itself. Developers shouldn’t be deciding these things. Sure it is the case that Bitcoin Core could release software and people could look at it and say “I’m not going to run that because there is some soft fork I’m not signing up for.” Certainly in the case of Taproot that is probably not going to happen because the change is great, why would we not upgrade to it? But that’s also the way the community learns how changes should be made. 
You have this Taproot thing, it is great, it is in this release of Bitcoin Core, it has got a flag day activation, a release of Bitcoin Core happens and in a year Taproot turns on and that is it. How can someone who is just passively observing Bitcoin, maybe they care about Bitcoin but they probably don’t have the time to actively participate in the conversations on Twitter or Clubhouse or Reddit or what have you, how can they take away any message from that other than Bitcoin Core decides consensus and maybe a small group of users? It is a perfectly reasonable conclusion based on just looking at that. There are many ways to solve it, don’t get me wrong. Flag days aren’t inherently evil, it is not that we can never do flag days. But it needs a resolution in some way. It needs to be very clear that so much work has been done. This passive observer who maybe in the future will be a Bitcoiner who is learning about how changes to Bitcoin get made shouldn’t be bombarded with people on Twitter saying “Yes we’re activating SegWit, f*** miners.” I’ve seen a lot of people recently on Twitter and wherever talking about how basically “UASF is good because it beats miners over the head and tells them they have no control.” It is not useful. Miners are in fact a part of the Bitcoin community, also pools. No more so than anyone else, no more so than any other large business that is made up of a bunch of people who bet their careers on Bitcoin. They should be treated as valued parts of the community, no more or less so than anyone else. Certainly if you have, as was the case with SegWit, one large company or one large pool that is probably acting in bad faith and trying to block a consensus change that was otherwise fairly broadly agreed to, they should be ignored and overruled. But that shouldn’t be our first go-to, “let’s just do this flag day”. Also flag days have a lot of other technical risk. When we were talking about UASF and BIP 148, especially because of the timeline, if you don’t have really broad node upgrades both on the mining pool level and just across the network, most importantly at the transactor level, people who are making transactions, enforcing these new rules by the time the flag day happens, you can very easily end up in cases where you have forks. Maybe light clients or maybe people who forgot to upgrade or one of their nodes didn’t upgrade or whatever may get double spent. You don’t want to risk that on the Bitcoin network. It doesn’t make sense to risk that on the Bitcoin network unless you really really have to. Unless there is really this case of “Here is a mining pool that is acting clearly in bad faith and they need to be overruled.” It is very clear to all involved, it is blown up into this whole big drama across the Bitcoin community and thus everyone is painfully aware of where all the sides are and how different people feel. In that case fine, whatever. Otherwise you risk what we saw in very related circumstances, not quite the same as a flag day, it wasn’t a flag day it was actually a BIP 9 activation, but similar circumstances to a soft fork activation, where there was a fork. There was a several-block fork of invalid blocks that light clients would have accepted payments on, and various mining pools extended this invalid chain.

MB: That was March 2013 right?

MC: It was a ways before SegWit. I don’t think it was that far back. March 2013 was a different bug. It was only a year or two before SegWit.

MB: March 2013 was a flag day soft fork that caused a chain split.

MC: That was a separate issue. There was one a little later. In any case you can have these flag days, but flag days require a much higher level of making sure everyone is upgraded than these miner readiness signaling soft forks, where you take advantage of the fact that soft forks really activate when everyone on the network has upgraded; but that doesn’t mean you can’t derisk the possibility of forks. We should certainly as much as we can derisk the possibility that there exist two chains, that some people who are running light clients might be on one chain and get double spent or what have you. A great way to do that is to say “We are going to make sure that almost all the hash power is ready and running new code.” Version bits is ok for that but it represents that, it represents miners indicating that they are ready and running the code. For flag days there is technical risk, and I think there is really high social risk to the culture around Bitcoin changes and holding ourselves to a high bar. Some of it is technical but a lot of it is the discourse, and the discourse around doing a UASF or a flag day soft fork activation for Taproot is in my view really negative. People are saying we are just going to do it and we’re going to activate it and that’s just how it’s going to be. That to me sounds like New York Agreement, SegWit2x. It is a broader set of people, it was planned in public instead of in private, sure and that is great, that is an improvement, but it is still this discourse that sets Bitcoin up for not being this community that takes itself very seriously and is very careful.

MB: I can certainly see that. For this particular situation with Taproot, Schnorr, people have been eagerly awaiting this and I guess it is just trying to find that balance between urgency and complacency.

MC: It is totally understandable. It has been forever.

MB: In many conversations with you and others precedents matter and we should be thinking about precedents that are set. Do these nuclear options even need to be put on the table yet?

MC: I am really afraid of this normalization of them. Yes there exist options where we can just go to a full on market referendum and say “Here are two different softwares. They are going to split into two different chains at this date and we think the market is going to strongly prefer to stay cohesive because that is how everyone maximizes their value. Here’s the two chains, everyone figure it out and let the market decide.” We can do that, that is an option and that is basically what happened with SegWit2x and what might have happened with UASF and BIP 148 had the two sides not come together in a more technical way. It is such a mess. It is a mess for everyone. No one holding Bitcoin going into the SegWit activation deadline and the UASF activation timeline was super comfortable. I went into the office at I think 11pm, midnight, right before it was going to activate and planned on staying up all night just to make sure something didn’t catch fire. Every exchange had people doing that, everyone who was meaningfully transacting paused transactions. We shouldn’t have to do that. There are technical reasons for it and there are social reasons for it. But those technical reasons, we can derisk those technical reasons so why are we putting these things on the table?

MB: So let’s describe the table right now for Taproot. What is it looking like?

MC: I have taken some time off from some of those discussions because I got a little frustrated. My understanding is that it has come around, the folks working on it, there is the broad Twitter universe of people who are regularly commenting and there are some public discussions that are a little more structured that some people have put together, I think those public discussions that are more structured have come around to the idea that there is some risk in doing these flag day activations. I think the latest terminology you might see is BIP 8(true) which is just a flag day, there are some other technical features but it is largely a flag day activation. There are some technical risks with this and most importantly there is no one against this thing. Mining pools have even started signaling that they’re in favor of it and they’re ready for it so why are we having these debates? Let’s just do something more akin to BIP 9 but with some of the technical nuance issues fixed, which you might now see referred to as BIP 8(false). Especially because it is pretty clear there are not going to be any mining pools that try to play shenanigans to try to block it for their own personal reasons, I think people have come round to that as probably the approach. I still remain very worried that we are normalizing these things. We are normalizing a discourse around changes that is “We are just going to do it.” We are normalizing an activation around changes where “Look it is in the same BIP that described the last activation method to describe how we do a flag day activation that not only forces a consensus change but forces miners to indicate preference for this consensus change.” To slap them in the face while we are at it. These designs aside, people are legitimately frustrated with miners still, years after 2017 and the SegWit mess, as they should be. People should be frustrated with miners for that, or mining pools specifically.
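
For reference, the BIP 8(true) versus BIP 8(false) distinction comes down to what happens when the signaling period runs out: with lockinontimeout set to true the deployment locks in regardless of miner signaling (effectively a flag day), while with false it simply fails. Below is a simplified sketch of that one branch; the real BIP 8 state machine has more states and rules than this.

```rust
// Simplified sketch of the BIP 8 lockinontimeout branch (illustrative only).

#[derive(Debug, Clone, Copy, PartialEq)]
enum Bip8State {
    LockedIn,   // threshold met; rules will activate
    MustSignal, // BIP 8(true) only: final period forces signaling, then locks in
    Failed,     // BIP 8(false): timeout reached without the threshold
}

fn state_at_timeout(threshold_met: bool, lockinontimeout: bool) -> Bip8State {
    if threshold_met {
        Bip8State::LockedIn
    } else if lockinontimeout {
        // BIP 8(true): the deployment is going to lock in anyway,
        // which is what makes it effectively a flag day.
        Bip8State::MustSignal
    } else {
        // BIP 8(false): without the threshold the deployment just expires.
        Bip8State::Failed
    }
}
```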

MB: Jihan is gone. F2Pool had some f***ery with switching BIP 9 off and on and confusing people with their signaling but outside of that the only hostile miner that I can recall was Bitmain being driven by Jihan. He’s gone.

MC: He’s gotten pushed out because he bet the company on Bcash and that didn’t go very well. It should be focused on healing and doing the best and most careful design. I think it is also important to talk up and realize how much work has gone on in the background that I think is critical for consensus changes and should be critical for consensus changes in the future that hasn’t been sufficiently talked up or publicized. There were a number of meetings driven by the Bitcoin Optech folks between large businesses just talking about all the technical details of Taproot. Really getting into the nitty gritty and saying “Here is the proposed change. You are a big user of Bitcoin. What do you think? Are you happy with this? Are there some issues with it that maybe impact you in some way? Can you integrate this? Are you excited about this? Is this interesting?” All these kinds of things. Obviously it has been publicly decided at this point for years, it has been maybe a bit of a slower process for various reasons but these things happened in a way that Bitcoin has really grown up because of work by a few people in the Optech group. I think that should be the face of Bitcoin consensus changes, not the debates over specific activation methods and people saying “We need to force miners to signal at the end of a year because that is how we get this quicker.” The face really needs to be look at all the work some of these developers put in, people at Chaincode and the Optech group and Bitcoin Core contributors funded at this point by various groups, all the work they’ve put in to making sure this is carefully designed and talking to everyone they could and making sure no one in the community is going to be negatively impacted by this and it is going to really help some stuff like Lightning and some other systems and then going ahead with it. That needs to be the face of consensus changes.

MB: To put forth the user activated soft fork argument for this particular case. Again I don’t think it is going to get to that point, I don’t think it needs to, I don’t think it should because of precedence, focusing on the scars from SegWit2x, that battle, there does seem to be a bit of apprehension from a large part of the developer community to put forth an activation path. People are like “Who is going to put something forward?” The UASF crowd has been “If no one is going to do it we can do it and this is the way we think it is best to do it because we want this.” There is not a rush to get it. It is sort of a rush to get it but do we have to put this ball into motion?

MC: It has been years. Nothing is really happening, what is going on? Not so much a rush as it would be nice to get it eventually. I honestly can’t fault that argument. There are some folks who worked on Taproot who don’t have any desire to work on activation mechanisms because of basically SegWit, the mess of the activation debate there and because activation mechanisms in Bitcoin, there is no good answer. The right answer is we find a way to have changes only activate once it is clear that no one in the community is going to be negatively impacted. The changes must not be blocked by someone who is saying they are going to be negatively impacted when they are not. There is no perfect technical solution to that. There is not a way to capture that so it is ultimately a little subjective. There are some folks that don’t want to work on that kind of stuff. There are folks who do want to work on that kind of stuff. I think that conversation got started maybe only, I think I kickstarted it but I immediately stopped working on it.

MB: I feel like it was this time last year.

MC: Let me check my email. I sent an email on January 10th of last year describing what I think are good technical requirements to have for an activation method and kickstarted a little bit of a discussion. I got some responses, not from many people. That discussion only started around then and I think for a while there were only a few people involved in it and then more recently more developers have got involved in those discussions, Anthony Towns and a few other people leading the charge on that. It has been a slow process and doubly so because there is a lot of acrimony. Over time no one has really wanted to talk about SegWit and that mess because it was a painful time for a lot of people. I think over time different people have started remembering it slightly differently. It has been 3 years, that is a normal thing for human brains to do. And so I think once that conversation got started I quickly realized, and I think a few other people realized, that people are on some very different pages in terms of how we should think about soft forks. It has taken time but it sounds like there is a little more agreement, a little more “There’s debate over whether we should do flag days and it looks like it won’t even be an issue so maybe we should just do a normal 95 percent miner readiness signaling thing. We’ll do that and if it doesn’t work out we can revisit it and do a flag day, fine, whatever.” Not many people are going to say that is bad. They are just going to say they’d rather do a flag day maybe. There are some legitimate complaints about precedent on the other side saying that miner readiness signaling should not be viewed as a vote for or against a consensus change. Consensus changes aren’t decided by miners and there is an issue with setting a precedent that miners decide consensus changes based on this signaling mechanism. That is a very valid point. I think these two opposite points about the different options on the table, flag day UASF or not, both have interesting points about precedent and drawbacks of the other side. Thus it has been a long conversation of hashing it out. That is to be expected.

MB: It is good to see people meeting on IRC. There is another meeting next Wednesday on IRC about it. I’m pulling up Michael Folkson’s notes from the first IRC meeting. Overwhelming consensus that 1 year is the correct timeout period, unanimous support for BIP 8 except for Luke Dashjr (see notes for exact wording), no decision on start time but 2 months was done for SegWit and that didn’t seem too objectionable. It seems like good conversation from Michael’s notes.

MC: I just have Michael’s notes as well. It seems like people are coming round to just doing the miner readiness signaling thing.

Comparing the SegWit and Taproot soft fork changes

MB: It is important to think about this precedent and it is why I love having you on because you are one of the most adversarial thinkers I’ve met in this space. I do think people have very high time preference about this stuff. I want Schnorr and Taproot as well but again precedents matter. We throw the fact that precedents matter at other projects that we like to critique. It would be a bit hypocritical if we didn’t exercise this type of caution and patience for a massive upgrade. Do you think this is bigger than SegWit?

MC: In terms of code, no. In terms of cryptographic changes, yeah. It is also a big upgrade in terms of new things you can do with it, debatable, you could argue either way. It is big but it is also not going to make or break Bitcoin. It is big, it is going to improve privacy on Lightning, it is going to improve the ability for people to use multisig significantly, especially larger multisig policies, it is going to over time improve privacy onchain. All of these things are important and cool and awesome but also not going to make or break Bitcoin. But in my view how the community views changes and how the change process happens is going to make or break Bitcoin.

Precedent of soft fork activation mechanisms

MB: I can agree there. There is a slippery slope maybe. You effectively do a flag day soft fork this time around but then it is jammed down people’s throats at some point in the future.

MC: I think the pro UASF argument would be that largely SegWit didn’t have any material detractors who would be harmed by SegWit directly and thus UASF was ok that it got jammed through in a month or two. But also UASF was itself a consensus change on its own. Still years later people are running round today holding on to a consensus change that was incredibly rushed, it was about as rushed as, if not more rushed than, SegWit2x and the New York Agreement. “Here’s code and here’s binaries. Run it and we are going to create our own chain and fork off in 3 weeks” or 2 months or whatever it was. That is also this really heavy risk and people are holding that up as a gold standard for how we should think about doing activations in Bitcoin. And that worries me.

MB: I appreciate that you are worried. Somebody has got to worry about this s***. I think sanity is going to prevail here, hopefully patience prevails.

MC: The good news is that the market isn’t always right but it is true that people in Bitcoin generally are at least vaguely aware that if they screw Bitcoin up then the market is going to punish them eventually and that there are certain things about Bitcoin that must not go wrong in order for its value to remain. That above all else people understand because a lot of people have millions and millions invested in Bitcoin, whether their business or personally.

MB: 1.5 billion for Tesla. The severity and the importance of making sure this network survives in perpetuity are so grave that caution should not be thrown to the wind. Precedent should be very seriously considered and you should lower your time preference. But that being said I am very optimistic about what has been going on recently in these IRC meetings.

Consensus forming

MC: Yeah it seems like there is a bit more consensus forming which is really good.

MB: Give a shout out to the pools, to Poolin, I think that may have given confidence for some in the developer community to begin having these types of conversations and sticking their necks out with specific activation paths. Obviously users on Twitter and other platforms signaling that they want to use some of the functionality that will be provided by this upgrade.

MC: I would love to see it more from the business community as well but all that stuff is really great to see. As at least during the SegWit debacle, there are 3 or 4 main groups: you’ve got the business community, you’ve got pools and miners, you’ve got individual users and you’ve got developers. The more we can get all 4 of those groups to be vocal that a change is acceptable and meets their requirements (maybe it is useful to them, maybe it is not, but at least it is acceptable and not a negative to them) the better off we will be for future changes. Even if this change is minor and a lot of people don’t really need to care about it, having them say “I looked at that change and I don’t care about it” is a great outcome.

MB: At the end of the day they don’t have to use it if they don’t want to because it is all backward compatible?

MC: Right. The change was made very carefully to make sure it is not risking other people’s coins with crazy new cryptographic assumptions. It is not going to massively change the market dynamics of Bitcoin or the fee market or whatever. It is a pretty straight up and down, backwards compatible, if you use it you use it, if you don’t you don’t care kind of change. It is important that that is stated as broadly as it can be. That is the gold standard for Bitcoin changes and that is how Bitcoin changes should look.

MB: Bitcoin is beautiful. It feels weird, maybe people will be affected by this conversation and that will have a little effect on whether or not this gets activated.

MC: Hopefully.

Taproot code in Bitcoin Core

MB: The Taproot code has officially been merged into Core, version 0.21.0. What is the technical thing that happens that takes it from being merged to being activated and used? The code is in the codebase technically.

MC: You can have code but there is no way to reach the code on mainnet. That code can’t be used on mainnet. The code is there but for all intents and purposes it is not part of the Bitcoin consensus, at least not the mainnet Bitcoin consensus protocol. Once there is an activation method, once there is a way to reach that code in mainnet Bitcoin, that code is now merged and activated, whatever you want to call it.

MB: That was one of the questions I had in my mind for the last year. It is merged but you can’t use it. I am pumped that people can use it on Signet. Maybe businesses can mess around with some proof of concepts to be convinced to or not to support this.

MC: People can use it yeah. That is a reason to merge it first. Historically in Bitcoin Core it has always been that you merge the bulk of the code changes in a major release; on testnet, signet or whatever, on your own regtest networks, you can test it out, play with it, use it in a live system. Then the actual activation parameters, in essence the formal proposal of this fork from the Bitcoin Core developer community, take the form of their own minor release version that does nothing else, maybe it fixes a few minor bugs, but in large part it is just a proposal and it just contains those activation parameters to make a cohesive soft fork change. “Here is a series of consensus changes in code that can be reached on mainnet in a version of Bitcoin Core. You can run this version or you can stay on a previous version, there are few code changes, there aren’t a lot of features you’re missing out on by running this new version. Run this version and now you are on the hopefully new soft fork, the activation rules will kick in.”
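
A loose sketch of what “merged but unreachable on mainnet” can look like in practice. The structure and names below are hypothetical, not Bitcoin Core’s actual code; the point is only that the new validation logic sits behind per-network deployment parameters, and the activation release is what fills those parameters in.

```rust
// Hypothetical illustration of merged-but-unreachable consensus code.
// This is not Bitcoin Core's real structure, just the shape of the idea.

struct DeploymentParams {
    bit: u8,
    start_time: u32,
    timeout: u32,
}

struct ChainParams {
    // `None` means there is no way to reach the new rules on this network yet.
    taproot_deployment: Option<DeploymentParams>,
}

// The new validation code only runs if the network defines deployment
// parameters for it and the deployment has actually reached Active.
fn taproot_rules_enforced(params: &ChainParams, deployment_active: bool) -> bool {
    params.taproot_deployment.is_some() && deployment_active
}
```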

LDK and rust-lightning

MB: I learned a lot today, I always learn a lot when I speak to you Matt. How are things going with the LDK?

MC: Things are going great. I have been working, for those of you who are unaware, for a few years now building out a Lightning library. Not yet another Lightning node, lnd and c-lightning and whatever work great, a lot of people use them, a lot of people are really happy with them. They are robust, they are well tested, they have got a lot of users. But what there isn’t is a library that lets you integrate a Lightning node into your project really tightly. Instead of “I’m going to spawn lnd, it is going to run, it is going to download the chain itself and have its own onchain wallet” it is “Here’s a library, you integrate it, you plug in the chain, you plug in onchain keys and onchain funds to create channels and then you have a Lightning node that is in your application, it is not some separate thing that you’re controlling or talking to.” I started that maybe a year and a half ago now, Square hired a bunch of us at Square Crypto and after a bunch of back and forth this was the project we decided to work on. I wasn’t actually the one who proposed it but I think we all saw a lot of value for this potentially especially in the domain of existing Bitcoin wallets that haven’t integrated Lightning or existing Bitcoin applications that have wallets that haven’t integrated Lightning and hopefully will in the future but building a Lightning node from scratch is a lot of work. Making sure it handles every possible edge case is a lot of work. We’ve got this thing that lets you integrate Lightning into your application. I think everyone on the Square Crypto team looked at that and was like “This could have a big impact on Bitcoin down the road.” We got started working on it, the new folks were getting spun up on how all the bits and pieces worked and then Covid hit and all of a sudden we weren’t in the office together. It slowed things down a little bit but we have spent a year really digging into what was there and cleaning up. The core of the code was pretty good, works pretty well, it is super well tested, the actual APIs that it exposed were fine but could use some work. We spent the last year really cleaning up those APIs and making it really easy to integrate. Instead of just having an API where you have to download the blockchain yourself we have options. You can take it and say “I want you to download the blockchain and here is where you get it from. I am going to do my own onchain wallet.” Or vice versa you can say “You do that work”, a bunch of different sample batteries, sample implementations of things. It is now not entirely a batteries not included Lightning node, it has some batteries in it. We’ve cleaned up that a lot. We have spent a tonne of time on building language bindings so you can use it in different languages. It is written in Rust, that’s great because we can compile it for hardware wallets or you can run it in the browser but only if you call it from Rust. We actually had to build our entirely own language bindings generation utility, there was nothing out there as far as we were able to find that in any way is able to take an existing library that has complicated object oriented semantics and just spit out language bindings in a bunch of different code. 
All the language binding stuff that exists is really all about “You have one simple function and you just want to stub it out into a different language so that is faster.” Not like “You have this whole library that has a bunch of different things and a bunch of different interactions and you want that in language bindings.” We had to build top to bottom our own language bindings generation stuff so we’ve got that. We’ve got good C, C++ bindings if you want to use it at a very low level. We’ve got some Java bindings that people can use. We’ve got samples of using it in Swift and we are getting there on the Javascript end. You will be able to call it directly from Javascript in a web browser. If you ask “Why run a Lightning node in a web browser?” I’m not sure why you would but you can. It is this really nice cross platform thing. So we’ve spent the last year building out language bindings and we are really rounding the corner now. We have what is a rather cohesive product, it is lightningdevkit.org if you are interested, join our Slack, reach out to us and we are happy to work closely with people. We are an open source team hired to build this out so we are happy to help people out any way we can in terms of integrating and getting Lightning in as many places as we can. It is going great and we really turned the corner on having a cohesive thing that people can use now. We are really happy with it.
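
For readers who want a concrete picture of the "plug in your own pieces" design described above, here is a minimal Rust sketch of that shape. The trait and struct names are invented purely for illustration and are not the real LDK/rust-lightning API; the point is only that the application supplies its own chain source and key handling, and the Lightning logic lives inside the application as a library rather than as a separate daemon.

```rust
// Hypothetical sketch of a "bring your own batteries" Lightning library.
// These names are illustrative only, not the real LDK/rust-lightning API.

/// The application supplies its own view of the chain, however it syncs it.
trait ChainSource {
    fn best_block_height(&self) -> u32;
}

/// The application supplies its own on-chain key handling.
trait KeySource {
    fn sign(&self, msg: &[u8]) -> Vec<u8>;
}

/// The Lightning node is just a value inside the application, built from those pieces.
struct LightningNode<C: ChainSource, K: KeySource> {
    chain: C,
    keys: K,
}

impl<C: ChainSource, K: KeySource> LightningNode<C, K> {
    fn open_channel(&self, funding_msg: &[u8]) -> Vec<u8> {
        // A real node would construct and broadcast a funding transaction;
        // the point here is only that the caller's own keys and chain view drive it.
        println!("funding at chain height {}", self.chain.best_block_height());
        self.keys.sign(funding_msg)
    }
}

// Dummy implementations standing in for the application's wallet and chain backend.
struct FixedHeight(u32);
impl ChainSource for FixedHeight {
    fn best_block_height(&self) -> u32 { self.0 }
}

struct EchoKeys;
impl KeySource for EchoKeys {
    fn sign(&self, msg: &[u8]) -> Vec<u8> { msg.to_vec() } // placeholder "signature"
}

fn main() {
    let node = LightningNode { chain: FixedHeight(700_000), keys: EchoKeys };
    let sig = node.open_channel(b"channel_open");
    println!("signed {} bytes", sig.len());
}
```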

MB: I am pumped to see that come out. We had you, Val and Matt Odell in that episode describing what you guys were embarking on, attempting to attack this LDK project, an incredible update. I have been learning a lot from… on Clubhouse. The way he can describe the potential for the Lightning Network and break down Taproot, why that’s important, gets into cryptography, point time locked contracts something I am very excited about, he explained that for people as well.

MC: Big privacy improvement.

MB: Rendezvous routing with PTLCs seems like a game changer.

MC: There are going to be some interesting trade-offs coming up in terms of how people do routing, privacy trade-offs in Lightning. If I am honest it does worry me a little bit because existing academic research already shows that Lightning today, the privacy against middle nodes inferring who is sending money where is pretty weak and they have a lot of visibility into what is going on. Having Taproot and Schnorr specifically for point time locked contracts is going to improve that, hopefully significantly, but there are a lot of other structural issues. Then for a lot of wallets, especially the mobile wallets, calculating a route right now can oftentimes take a little bit of time and also importantly if you send a payment along a route and it fails halfway there then you have to calculate another route and send the payment again. That can take some latency and nobody wants payment latency. Downloading the graph can take a little bit of time especially if you are on a mobile phone, you open the app and can’t send for a minute while you download the graph so you can calculate a route. All these UX paper cuts. The way to solve them is to take a privacy trade-off so different routing methods, having a server calculate a route or whatever. There is a lot of really exciting research there and really exciting ways to do routing and rendezvous is really awesome. Rendezvous is less of a trade-off than some of the other work. Rendezvous is great because you can have payee privacy from the payer. You have hidden service privacy, like Tor, where the payer doesn’t necessarily know where the money is ending up which is really huge. But you also have to have that in a context where your routing algorithm considers privacy really carefully and you’re not leaning too much on middle nodes to do a lot of your work for you. It will be interesting to see where all that ends up and whether people can build a good enough UX on top of really privacy optimized Lightning routing.

MB: I’m bullish on the Lightning Network, I know I’m a fanatic but I think people are sleeping on it.

MC: People are sleeping on it in part because it is not integrated anywhere. I think a lot of the Lightning folks across different parts of the Lightning ecosystem are really excited. This year they are looking towards more exchange integration and institutional integration of Lightning which hopefully is going to take some volume of transactions off the blockchain itself, give people some instant transacting ability and then hopefully that also translates a little bit into mobile wallets. It shouldn’t be the case that you ever download a Bitcoin wallet that doesn’t support Lightning. That is where we are at today. You download a Bitcoin wallet and it probably doesn’t support Lightning and that sucks. Or if it does, there are a few wallets that are non-custodial Bitcoin wallets but then custodial Lightning, because the only good way to integrate Lightning today is custodial. That shouldn’t be the case.

MB: Even better if you can abstract so you don’t even know you are using Lightning versus main chain.

MC: Right. Nor should a user ever be aware of Lightning, they should just know that their payment cleared instantly. That is what you should have. I am really bullish on it, that’s why I’m working on it full time but also I’m really bullish on integrations into existing Bitcoin wallets and new Bitcoin wallets that support it. Today a lot of Bitcoin stuff is really tightly integrated with the application level. People aren’t just downloading and running Bitcoin Core and using Bitcoin Core’s wallet to power their big exchange. Some people use Bitcoin Core but most people don’t use its wallet. Those are the options for Lightning today. You can run c-lightning, it has a great plugin system, you can do a tonne of hooking and editing it but it is still downloading and running a binary. A little less so for lnd. lnd has you download Bitcoin Core and you get the RPC API, that is what you have. It is great for many use cases, a lot of people use Bitcoin Core very successfully as a wallet on their server, whatever and same for lnd. But it is just not going to get us there in terms of mobile wallets everywhere and it is not going to get us there… I am really bullish on it, I hope we can have a really positive impact too.

MB: Thank you for all the work you do, thank you for reaching out. Like I told you, you don’t have to say thank you for having you on, you have an open invite in perpetuity until I croak or you croak or until this podcast croaks.

MC: Next time I’ll make you bribe me with beer.

MB: Yes we need to do that. I’ll drive up for that. Anything you want to end this on? Any particular note or message?

MC: I am super pumped for Bitcoin, I am super excited for it, I only worry about it because I love it.

MB: I love you Matt, thank you for coming back on, it was a great conversation, it was a great history lesson, great perspective, I hope you enjoy the rest of your night. If there is anything else you ever want to talk about in the future just let me know.

Media: https://anchor.fm/tales-from-the-crypt/episodes/228-UASFs--BIP-148--BIP-91--and-Taproot-Activation-with-Matt-Corallo-eq7cif

Topic: UASFs, BIP 148, BIP 91 and Taproot Activation

Location: Tales from the Crypt podcast

Intro

Marty Bent (MB): Sitting down with a man who needs no introduction on this podcast. I think you have been on four times already. I think this is number five Matt. You are worried about the future of Bitcoin. What the hell is going on? You reached out to me last week, scaring the s*** out of me. Why are you worried?

Matt Corallo (MC): First of all thanks for having me. I think the Bitcoin community broadly right now is selectively misremembering events that happened only a few years ago now and drawing conclusions from them that are not entirely justified. We can get into it in depth but I think the block size wars had an effect on a lot of people. There were two sides that both had a story to tell. One side got largely pushed out of Bitcoin, a lot of people on the “losing side” have moved on from Bitcoin and are doing cryptocurrency exchanges or just not working in cryptocurrency anymore. That left a side that gets to write their own history. I think there are some facts that have been left out and more importantly it impacts the way people see Bitcoin’s future and how Bitcoin should exist going forward. And that worries me, I don’t really care if the history got rewritten, I care if it results in a Bitcoin that is fundamentally less safe.

The history of SegWit and SegWit2x

MB: This all revolves around the user activated soft fork and that mechanism to activate certain upgrades, whether it be SegWit or potentially Schnorr and Taproot on the horizon. Let’s jump into the fine details of what went on, how a user activated soft fork works and why you believe it should be the nuclear option, is that the correct term?

MC: Yeah maybe. If we wanted to get into it we have to go back and revisit the history. We have to go back and remember exactly what happened and more importantly what didn’t happen. For most of your listeners who were around a few years ago or who have since learned the history of SegWit2x and the activation of SegWit, it bears revisiting and ensuring that everyone is on the same page and there is a more complete history that people are aware of. As hopefully most of your listeners know, with SegWit there was a large debate within the Bitcoin community broadly, exchanges, large industry players, miners, users, developers, a number of different crowds, around how Bitcoin should scale. Whether the block size should just be increased ad infinitum, whether the block size should completely be untouched. It all focused on the block size largely, there were a number of technical issues with just blindly increasing the block size which after some time led to the proposal of SegWit. SegWit is kind of a nifty little trick to increase the block size somewhat while also solving a number of technical issues that prevented increasing the block size previously. There were potential denial of service issues with just blindly increasing the block size that SegWit solved. So it was this nifty little trick that came from Pieter Wuille and Luke Dashjr, who designed it for Bitcoin and proposed it as a standalone soft fork. That is the beginning of the story. Over the next year after that there was all-out Bitcoin civil war that had already been building but it reached a fever pitch throughout the course of the one year activation timeline. SegWit and several soft forks prior to it generally had a theory of soft fork activation which went something like this. Developers work on the soft fork, design it and propose it to the community broadly in the form of both a BIP and code and that code eventually hopefully takes the form of a Bitcoin Core release. This in effect is a formal proposal to the community for a new soft fork. Then hopefully the community adopts it, upgrades to it, is happy with it by the time it is out there, hopefully it has really strong consensus behind it. And as a final step, because with soft forks you want to make sure that the network stays together on one chain, you have miners signal when they are ready and when they are in fact enforcing rules. If you don’t have that you could have a chain split. If you have a majority or a good chunk of miners who aren’t enforcing the rules, the chain can go off in one direction that is invalid to a number of nodes and suddenly you are going to have two chains depending on who you ask and whether nodes have upgraded or whatnot. That is a mess, you don’t want that. It is not fun, you end up with double spend potential for some people who might have forgotten to upgrade a node or whatever. No one wants that. So ideally you have miners say “Yes I’m ready. I’ve upgraded.” Once you reach some threshold, in the case of SegWit and a number of other proposals it has been 95 percent, the change locks in and activates and everything goes forward. SegWit took place over the course of a year, it came out as a proposal and a Bitcoin Core release. At the end of a year it would have timed out and presumably some other activation method would have been tried.
SegWit kind of languished and it depends on who you ask exactly why; miners had some dubious incentives around some optimizations they were using that were patented and they weren’t very upfront about it, and there were other reasons to block it. There was a contingent within the community that might have been happy with SegWit but viewed it as insufficient, that the block size needed to be increased hugely and SegWit represented some small multiple, somewhere between 1.5x and 2x, which is still a pretty substantial increase but they wanted a significantly higher multiple. SegWit, which might have been fine on its own, was not sufficient and thus simply shouldn’t happen. Long story short, this kind of miner readiness signaling that was intended to just prevent the network from splitting into a million pieces was abused to prevent SegWit from activating.
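
The miner readiness signaling described here is roughly the BIP 9 version bits mechanism: within a 2016-block retarget period, if at least 95 percent of blocks (1916 of 2016 on mainnet) signal the deployment's bit, the change locks in. A simplified sketch of just that counting step, ignoring the full state machine (DEFINED, STARTED, LOCKED_IN, ACTIVE, FAILED) and the start/timeout times, might look like this:

```rust
// Rough sketch of BIP 9 style lock-in counting over one 2016-block retarget period.
// Real nodes track a full state machine with start and timeout times; this only
// shows the 95 percent threshold idea.

const PERIOD: usize = 2016;
const THRESHOLD: usize = 1916; // 95% of 2016, rounded up

/// Does a block's version signal readiness for the deployment on `bit`?
fn signals(block_version: u32, bit: u8) -> bool {
    // The top three version bits must be 001 for version-bits signaling to count.
    (block_version & 0xE000_0000) == 0x2000_0000 && ((block_version >> bit) & 1) == 1
}

/// Given the versions of the blocks in one retarget period, has the deployment locked in?
fn locked_in(period_versions: &[u32], bit: u8) -> bool {
    assert_eq!(period_versions.len(), PERIOD);
    period_versions.iter().filter(|&&v| signals(v, bit)).count() >= THRESHOLD
}

fn main() {
    // 1950 of 2016 blocks signaling on bit 1 (the bit SegWit used) is enough to lock in.
    let mut versions = vec![0x2000_0002u32; 1950];
    versions.extend(vec![0x2000_0000u32; PERIOD - 1950]);
    println!("locked in: {}", locked_in(&versions, 1));
}
```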

MB: This is version bit signaling via BIP 8 correct?

MC: BIP 9 I think. BIP 8 is BIP 9 with a few fixes in it. Then during this time period a number of companies got together in secret and decided they were going to decide Bitcoin’s consensus for us all, signed what became the New York Agreement led by DCG and declared unilaterally that Bitcoin was going to activate SegWit and in addition a hard fork which further doubles the block size on top of the SegWit increase. SegWit is already a 1.5x to 2x, and then we are going to double it again and get something between 2x and 4x. This was going to activate on some, I forget the exact timeline but it was something completely nonsensical. It was like “We are going to upgrade the entire Bitcoin network and everyone is going to upgrade their node in like 3 months. Let’s go.” The increase block size more thing, whatever, let’s do it all in 3 months is obviously brain dead. But more importantly SegWit2x represented a small group of people coming together and deciding Bitcoin’s rules. Fundamentally my issue with anything around consensus changes is that we as a community need to have a higher bar. We need to have a very high bar for how changes get made and if in the end you have some small group of people who are allowed to just blindly upgrade the rules, decide in a private meeting, invite only, what the rules are then Bitcoin has failed. This private meeting is going to decide that you have to do KYC or a private meeting is going to decide that it is ok if not everyone can run a node and it can be prohibitively expensive to run a node. There are a number of things that would be unacceptable but would be trivial to do if you are able to change the rules by small committee. That was the New York Agreement, SegWit2x. The community was broadly up in arms, the developer community was universally up in arms, there were no objections within the developer community to condemning SegWit2x and the New York Agreement. So the community is up in arms, there is this group of industry players who are saying “We are just going to do it, it doesn’t matter what you say Bitcoin users, it is just going to happen, suck it up.” Because SegWit2x was a hard fork it was fundamentally going to be a different token. If you had sufficient hash power on both sides you could always have two tokens, you could always trade the old one for the new one and vice versa. And so of course several exchanges just listed both and listed a futures market. Bitfinex listed a futures market that was relatively deep and BitMex, it is only a futures market but they declared that the futures were going to be only non hard forked Bitcoin as it exists today, not the SegWit2x Bitcoin. The futures market told a very different story from the confidence of the business community. SegWit2x tokens traded something around 20, 30 percent, sometimes it was 10 percent of Bitcoin. You could see this both on Bitfinex’s direct trading of the two tokens but you could also see it in the discount on the futures on BitMex. You had billion dollar daily volumes on BitMex showing exactly what these hard fork tokens were worth. It was a pretty good indicator that the market was not a fan of the SegWit2x idea. Now you have this very vocal Bitcoin community on Twitter, the developer community, you have a very confident business community, several key businesses, not every cryptocurrency business but several key businesses in the cryptocurrency space. Coinbase, it was led largely by BitGo, a number of others, a number of large mining pools. 
They are both declaring they are going ahead with their vision and this would have resulted in two separate currencies fighting over who gets the term Bitcoin and as you might imagine that would be a mess and that would destroy the value of Bitcoin for basically everyone.

MB: And they weren’t even successful in writing good code; we came to find that they were off by one block and it wouldn’t have successfully hard forked.

MC: Their software didn’t actually function. This is what happens when the entire developer community thinks this is a terrible idea, they can’t hire someone with a deep understanding of the codebase because they all don’t want to work for you. You have these two factions that are screaming they are going ahead and both won’t admit it publicly but very well aware that if they both go ahead the whole thing catches fire and any legitimacy that Bitcoin had in the financial world is gone because now you have two different people claiming to be Bitcoin. Why would anyone care? A number of critics who have always suggested that Bitcoin has no value because anyone can create a new cryptocurrency and thus there is no scarcity, a poor argument, but it would have actually been true had this come to pass. I think a big reason why it didn’t come to pass is because everyone understood that their entire value would be gone. That’s a pretty strong incentive if nothing else. So how do two competing factions who both have very different visions and both think the other is acting in bad faith come together and protect the value of Bitcoin and prevent it from splintering into two different things but presumably later having more issues. The very loud Twitter community, there were a few folks who were very “Screw big business” and blah blah blah, but notably only one or two folks who ever contributed materially to Bitcoin Core decided that they were going to do what was then termed a user activated soft fork. Before BIP 148 and UASF we would have just termed it a flag day soft fork but basically it is just a change where on a certain day it activates, period. You have some software, the software says “On this date the consensus rules are going to change to be X, whatever X is.” In this case X was “If your block does not signal readiness for SegWit and isn’t contributing to turning SegWit on we are just going to ignore you and consider the block invalid.” That was one group, it was very vocal. So now you have two proposed changes in addition to original Bitcoin. You have the soft fork that is led by a bunch of users on Twitter who are saying “We are going to fork off any blocks that don’t signal for SegWit and we are going to go off on our own way and have our own chain.” Then you have this business community that is saying “We are going to go hard fork and have our own chain.” Then you have the people in the middle who are still just running the software that they downloaded two weeks ago and are a little confused maybe. Now suddenly you have Bitcoin careering towards three different tokens. What happens next depends on who you ask. I think this is really the key issue.
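
The BIP 148 rule paraphrased here can be sketched as a single extra validity check: once the flag day has passed, a node enforcing it rejects any block that does not signal the SegWit bit. The sketch below is deliberately simplified (the real rule used median-time-past and only applied during a bounded window), and the specific timestamps are illustrative.

```rust
// Minimal sketch of the BIP 148 style rule: after the flag day, a node running it
// treats any block that does not signal the SegWit bit as invalid. Simplified;
// the real BIP 148 window and edge cases are more involved.

struct BlockHeader {
    version: u32,
    time: u32, // Unix timestamp (median-time-past in the real rule)
}

const SEGWIT_BIT: u8 = 1;
const FLAG_DAY: u32 = 1_501_545_600; // 1 August 2017 00:00 UTC

fn signals_segwit(version: u32) -> bool {
    (version & 0xE000_0000) == 0x2000_0000 && ((version >> SEGWIT_BIT) & 1) == 1
}

/// Would a BIP 148 enforcing node accept this header?
fn bip148_valid(header: &BlockHeader) -> bool {
    if header.time < FLAG_DAY {
        return true; // rule only applies once the flag day has passed
    }
    signals_segwit(header.version)
}

fn main() {
    let good = BlockHeader { version: 0x2000_0002, time: 1_502_000_000 };
    let bad = BlockHeader { version: 0x2000_0000, time: 1_502_000_000 };
    println!("signaling block accepted: {}", bip148_valid(&good));     // true
    println!("non-signaling block accepted: {}", bip148_valid(&bad));  // false
}
```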

MB: Can we take a step back before we get to the key issue? Did the Twitter users propose the user activated soft fork? Where does Shaolin Fry play into this?

MC: There was one anonymous developer who wrote BIP 148, I don’t believe their actual name or identity has ever been revealed, there is various speculation about it. They posted the BIP and posted some code for it and provided binaries. A bunch of people downloaded it, the alias Shaolin Fry had never contributed to Bitcoin Core, it was never used for anything else, that alias existed only to post what became BIP 148 and this code. They obviously had a relatively active Twitter account but that is where this code came from. So if you ask someone who was staunchly in favor of BIP 148 and UASF, their view is that UASF activated, BIP 148 was enforced and on the day of BIP 148 no blocks did not signal for SegWit and thus no blocks were ever forked off and the consensus rules of BIP 148 still remained on the same chain with the rest of Bitcoin. That soft fork was enforced and SegWit activated and happily ever after. It is obviously a little wrong because to my knowledge no pool or materially sized transactor ever ran that code. A lot of Twitter users were running it, there was a campaign to get people to spin up UASF enforcing nodes but just spinning up a node as we know doesn’t have much impact on anything. Otherwise Chainalysis would be in charge of the consensus rules because they certainly have a lot of nodes. What really matters is people using a node to transact saying “I will only accept your transaction in Bitcoin if my node thinks that the rules of the network have been enforced and that transaction meets all the rules of the network including being included in a block that meets the rules of the network.” Some individuals presumably transacted around that code but no large businesses, no people accepting large volume payments, no mining pools and presumably not even very much in terms of total transaction volume of individuals ran on it. I certainly never ran it and I transact some on Bitcoin Core. So what really did happen and why was it that all the blocks suddenly started signaling for SegWit on that day? That is where it gets complicated and it is kind of unclear. There is some indication that the very confident Twitter users who were saying “We are running UASF whether you like it or not” scared several large players in the mining pool space, specifically there were rumors going round that Jihan (Wu) was scared by it. It is not unreasonable right, you have people who are very ideological and believe very strongly in Bitcoin and are telling NYA to go screw themselves. Saying that they are going to run the software and they are going to fork Bitcoin into another coin whether you get onboard or not. It is absolutely the case that both those users and everyone loses out if Bitcoin forks. You have two tokens and in every possible way it is a mess. There was a proposal which several New York Agreement signatories, SegWit2x supporting businesses and mining pools got behind called I believe BIP 91, I may be wrong.

MB: I remember BIP 148, BIP 90 and BIP 91 being crucial around this time.

MC: There was BIP 91 which said “What we are going to do is we are going to do SegWit2x, you are going to signal readiness for SegWit by the date of the UASF activation timeline and that signaling is going to signal support for both SegWit and a hard fork. The SegWit2x people should go ahead and signal readiness for SegWit and we should activate SegWit that way.” If you asked some large businesses that is probably what they would have told you at the time they were signaling.

MB: There were sites tracking the percentage to it.

MC: Bitmain exerted a lot of influence on their customers and basically the entire Bitcoin industry in China because a large part of it was mining related. Basically they would tell people “If you get onboard and do what we say and use our pool or use a pool that we bless then you’ll get the next generation hardware sooner and for a cheaper price.” Basically once Bitmain came around, they were the big blocker, once they came around all the other dominoes fell within a week. That’s largely what happened. The “UASF activated and we won, we showed them via the free market” thing is a little disingenuous. Clearly the free market and the very large volumes of futures trading and this fork token trading on Bitfinex showed very clearly that the SegWit2x New York Agreement token was dead in the water. It had 10 percent of the value of Bitcoin or 20 or 30 depending on where you looked. It was a small fraction of the value of Bitcoin and it was dead. There was never any futures trading for what would have happened had the UASF token gone off in one direction and classic Bitcoin had gone off in the other direction, which was totally a possibility. The UASF thing, the code for it was released very, very quickly because of the time crunch basically, SegWit was about to time out and they wanted to make sure that it activated before SegWit timed out. They released it and it was going to turn on I think within a month or two or maybe even a few weeks. And so no one spent the time to figure it out on paper and give the markets a voice. No one had the time to audit the code so no one was materially risking their business on it or accepting really large volume payments on it or switching out to it. There were a number of issues in the code depending on which case you looked at. One Bitcoin node forking off from the rest of the network and finding other peers to connect to that are on its fork is not a solved problem in the codebase at all, or it definitely wasn’t at the time. Hopefully it should never happen. It is very unclear whether you could say that this UASF, BIP 148 movement ever activated. You could absolutely say it had an impact and it probably, if you believe rumors, who knows, had some impact on Jihan and Bitmain and actually scared some people into allowing SegWit to activate in the signaling. But had they decided not to do that it basically had no hash rate behind it. So somebody needed to step up, if it was going to be a thing, and invest a very large sum of money to buy miners, or offer to buy existing miners’ UASF tokens had Bitcoin forked. It is unclear that there was this kind of money behind it. You see a lot of people running around saying “Yeah UASF is great, it works, we’ve done it before. It activated and it is how we activated SegWit.” The story is 10x more complicated than that. At the end of the day the only thing you can say for certain is Bitcoin users, Bitcoin mining pools, Bitcoin miners, Bitcoin businesses all understood that if Bitcoin forked into multiple tokens that would be very bad for their investment. Then they took action to make sure that didn’t happen. You can’t really say anything more than that. You can’t say that one thing was the thing that activated any more than you can say that another thing was the thing that activated SegWit. There were several things involved.

MB: I am pulling up the newsletter that I wrote on Tuesday July 18th 2017 and one of the headlines in that newsletter is “BIP 91 to the rescue, hopefully.” This is the way I understood it in the middle of July to bring some clarity. Funnily enough this is the same one in which John McAfee said he is going to eat his d***. I have that in the newsletter too. There is the possibility of locking in SegWit via BIP 141 which required 95 percent miner consensus. At this point 95 percent miner consensus was not reached via BIP 141. Our last hope of activating the protocol upgrade without a network split may be BIP 91 which is authored by James Hilliard. The BIP makes it so BIP 148 and SegWit2x are compatible. In short if 80 percent of the miners represented by collective hash power signal for BIP 91 SegWit will be activated and all blocks that are not signaling for SegWit activation will be rejected from the network. That is the way I described it.

MC: Right, BIP 91 basically reduced the threshold from 95 percent to 80 percent and then also made a non technical statement that this signaling would also include signaling for a hard fork. That hard fork never materialized, there were no developers interested in writing code for that and even if someone had written code for it it probably would have got laughed out the room. That part was basically ignored after the fact.
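
The arithmetic behind the two thresholds being compared, assuming the usual parameters (a 2016-block retarget period at 95 percent for BIP 141/BIP 9 and, as best I recall BIP 91's parameters, a 336-block window at 80 percent), works out as follows:

```rust
// Back-of-the-envelope for the thresholds discussed above. BIP 141/BIP 9 needed
// 95 percent of a 2016-block retarget period; BIP 91 (as I recall its parameters)
// counted signaling over 336-block windows with an 80 percent threshold, after
// which blocks not signaling the SegWit bit were rejected.

fn blocks_needed(window: u32, percent: u32) -> u32 {
    // Round up so the count is at least the given percentage of the window.
    (window * percent + 99) / 100
}

fn main() {
    println!("BIP 141 lock-in: {} of 2016 blocks", blocks_needed(2016, 95)); // 1916 (ceil of 1915.2)
    println!("BIP 91 lock-in:  {} of 336 blocks", blocks_needed(336, 80));   // 269 (ceil of 268.8)
}
```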

MB: It is cool to have this historical journal. So James Hilliard, BIP 91 was “Alright guys, let’s not split, let’s compromise here.”

MC: It is terrible for all of us if we split. Here is a thing that we can do where both sides get to claim they won. The UASF people get to scream about how UASF locked in on this date. The miners and large pool operators get to talk about how they successfully activated SegWit2x and at the end of the day SegWit will be activated in a way that is compatible with every node that is currently sitting on the network. We can all go about our lives and have one Bitcoin.

MB: The miners were crucial in this, you needed 80 percent of miner signaling.

MC: Right, with all of these really compressed timelines the UASF BIP 148 crowd was suggesting they could activate something on a really compressed timeline but you can’t get any material amount of nodes upgraded. The only thing you can really do is get miners to enforce these rules. As long as they are enforcing them they are the rules of the network and we’ll get nodes to upgrade over time to enforce them.

MB: It was a hairy situation back then if you weren’t around.

MC: It was so hairy. There was definitely some hair loss due to some stress. It was like when you compare a President’s photo when they start their term and when they end. These days not so much, but they used to start without gray hair and then they always ended with gray hair and they always looked about 20 years older even though it had only been 8. For us it was over the course of one year.

MB: Obama’s was pretty drastic.

MC: Obama looked about 20 years older.

MB: I am loving that we are rehashing this, jogging memories and going back through all the newsletters now. This was a daily update. BIP 91, shoutout to James Hilliard for pushing this forward. Arguably prevented a network split. Even myself personally, the years that have come to pass, I have somehow whitewashed my own memory and thought that BIP 148 was the driving factor for actually activating it. It was more influence or pressure.

MC: It was definitely a very successful pressure campaign. If you want to argue that that kind of thing activates you really have to make a market analysis and have some evidence that the other token is going to be completely worthless and thus our flag day activation and our splitting off is going to be the only thing with value and the only thing with hash power and the only thing that exchanges care about. That just didn’t exist then. If you want to assign blame for what really succeeded and what really activated SegWit you would have to have visibility into how things would have gone had the various splits happened. And we just don’t. There are a lot of things that came together and a lot of things that had an impact. It is impossible to say what did it.

MB: Now I’m refreshing my memory around this, the consensus right after was both sides can say they won. Everybody can be happy. Why are you worried about this now?

Taproot activation

MC: Rehashing old history is fun, well actually for most of us it is miserable because that was a pretty stressful time. But it is important I think to take away the right conclusion because in the discussion of Taproot there aren’t as many voices as there were for SegWit; it is pretty straightforward.

MB: Not even close to as controversial.

MC: It is very, very well designed. Too well designed sometimes. A lot of effort has gone into it. I think importantly Optech has done some good work on reaching out to businesses and chatting with businesses about the design, making sure these large industry players who are using Bitcoin are aware of it and know about it, and also a lot of the community. The core consensus bits of Taproot are great, no one has anything bad to say basically. Cool, let’s do it. The community has taken that a little bit to heart and is like “Cool, let’s go. Let’s do it, let’s turn it on.” The how to turn it on is sadly maybe also just as important as the actual design. At the end of the day you have to write consensus level code, code that describes how Bitcoin functions as a part of the activation method. The activation method of a soft fork is a consensus change in of itself. It describes a consensus change that activates this much larger consensus change but it describes a consensus change. So it needs just as much care as anything else. To wax a little more philosophically Bitcoin in my view, most Bitcoiners and a lot of people have invested in Bitcoin because it represents this conservative system that you can depend upon, that exists and provides you hopefully censorship resistant value transfer and value storage, payments whatever in a way that you can depend on. If you are using it in some specific way today you are going to be able to use it in that specific way in 5 years, in 10 years. The consensus system that is Bitcoin, the technical features available to you in Bitcoin continue to exist in roughly the same way or at least a compatible way as they do today in the long run. And your use case for Bitcoin, there are a million use cases for Bitcoin, whether you are hedging against the Fed printing too much money and you think hyperinflation is happening or whether you are in Venezuela and hyperinflation is happening or whether you are in Nigeria and you want to accept donations because you want to protest against the government being sketchy or you are in Russia, whatever, there are a million reasons to care about Bitcoin and to have a use for it. If you have something like that Bitcoin should continue to serve you. You should be able to invest in and use Bitcoin and invest time into building Bitcoin based solutions feeling safe and secure that it will continue to exist as it exists today. No one is going to come in and change it out from under you, decide to change the cap because that will help certain users but hurt certain other users. Or no one is going to come and out from under you change materially the fact that there are transaction fees because you invested a tonne of money into buying a mining farm. At the end of the day because there are so many different legitimate perfectly valid use cases for Bitcoin we as a community have to make sure that changes happen in such a way that Bitcoin continues to work for all of those use cases. That has always been my theory on soft forks. I wrote a whole long blog post on February 28th 2017. It is the last blog post I wrote.

MB: I like you writing code instead of blog posts, it is better time spent.

MC: Probably. I have this whole long thing where I wax philosophically about there being different use cases of Bitcoin. That is important and that is what is so cool about Bitcoin. I have my interests in Bitcoin and you have your interests in Bitcoin and they are fairly different. We care about Bitcoin for fairly different reasons, obviously it overlaps a lot but certainly I care about Bitcoin for many different reasons than a lot of other Bitcoiners. There are some people down the street here in New York City who are trading Bitcoin who are Bitcoiners. They care about using Bitcoin as a hedge against the Fed or whatever monetary policy or whatever crazy market things. They are still Bitcoiners even though they are never on Bitcoin Twitter but they have a use for Bitcoin and we should make sure that exists and that use case is still supported. If we take this as an important key value proposition of Bitcoin, that there are so many value propositions and that we care about all of them, we have to be very, very careful not just about the consensus changes we make. Taproot is very carefully done and most importantly if you don’t want to use it there is not really any reason to care about it. It is not going to negatively impact you. Maybe fees will drop a little bit but probably not a lot. There is no reason to worry about it if you aren’t going to use it. But also and more importantly the activation method sets the stage for how we activate soft forks. The activation methods themselves describe a set of rules, technically and in code but also socially, of what bar we hold changes to. Because Taproot is this thing that has been going for a while, it has been cooking forever, it is super carefully designed and it is really well implemented. There’s code and it is great and it is ready to go. There are a lot of people who are like “Let’s just do it. Last time we did this UASF thing.” They view history through some rose colored glasses maybe. Let’s just do it, we’ve got this thing, let’s just turn it on, the last UASF came on and activated in a few weeks, let’s just do it. We’ve got this. I think that does Bitcoin a disservice. I think if our approach to changes is that we as a very small community on Twitter or whatever just ship code that turns on a soft fork whether or not the network is fully upgraded to it, whether or not we’ve actually made sure that we’ve talked to as broad a community as we can, that doesn’t set up Bitcoin for a win. It doesn’t set Bitcoin up to be this conservative careful thing that changes only when people aren’t going to be hugely negatively impacted by the change. Most importantly, flag day activations in general, and UASF is another term for it, a more cool term now, don’t have this backout. “Here’s a release of Bitcoin Core with the change in it. We’ve shipped it and the change is going to happen in a year, six months whatever and that’s it.” That doesn’t send a good image of Bitcoin to anyone. No one wants a Bitcoin where a release of Bitcoin Core decides the consensus rules in and of itself. Developers shouldn’t be deciding these things. Sure it is the case that Bitcoin Core could release software and people could look at it and say “I’m not going to run that because there is some soft fork I’m not signing up for.” Certainly in the case of Taproot that is probably not going to happen because the change is great, why would we not upgrade to it? But that’s also the way the community learns how changes should be made.
You have this Taproot thing, it is great, it is in this release of Bitcoin Core, it has got a flag day activation, a release of Bitcoin Core happens and in a year Taproot turns on and that is it. How can someone who is just passively observing Bitcoin, maybe they care about Bitcoin but they probably don’t have the time to actively participate in the conversations on Twitter or Clubhouse or Reddit or what have you, how can they take away a message from that anything other than Bitcoin Core decides consensus and maybe a small group of users? It is a perfectly reasonable conclusion based on just looking at that. There are many ways to solve it, don’t get me wrong. Flag days aren’t inherently evil, it is not that we can never do flag days. But it needs a resolution in some way. It needs to be very clear that so much work has been done. This passive observer who maybe in the future will be a Bitcoiner, who is learning about how changes to Bitcoin get made, shouldn’t be bombarded with people on Twitter saying “Yes we’re activating SegWit, f*** miners.” I’ve seen a lot of people recently on Twitter and wherever talking about how basically UASF is good because it beats miners over the head and tells them they have no control. It is not useful. Miners are in fact a part of the Bitcoin community, also pools. No more so than anyone else, no more so than any other large business that is made up of a bunch of people who bet their careers on Bitcoin. They should be treated as valued parts of the community no more or less so than anyone else. Certainly if you have, as was the case with SegWit, one large company or one large pool that is probably acting in bad faith and trying to block a consensus change that was otherwise fairly broadly agreed to, they should be ignored and overruled. But that shouldn’t be our first go-to of “let’s just do this flag day.” Also flag days have a lot of other technical risk. When we were talking about UASF and BIP 148, especially because of the timeline, if you don’t have really broad node upgrades enforcing these new rules by the time the flag day happens, both on the mining pool level and just across the network, most importantly at the transactor level, people who are making transactions, you can very easily end up in cases where you have forks. Maybe light clients or maybe people who forgot to upgrade or one of their nodes didn’t upgrade or whatever may get double spent. You don’t want to risk that on the Bitcoin network. It doesn’t make sense to risk that on the Bitcoin network unless you really really have to. Unless there is really this case of “Here is a mining pool that is acting clearly in bad faith and they need to be overruled.” It is very clear to all involved, it is blown up into this whole big drama across the Bitcoin community and thus everyone is painfully aware of where all the sides are and how different people feel. In that case fine, whatever. Otherwise you risk forks, and we saw this in very related circumstances, not quite the same as a flag day, it wasn’t a flag day it was actually a BIP 9 activation, but similar circumstances to a soft fork activation where there was a fork. There was a several-block fork of invalid blocks that light clients would have accepted payments on, and various mining pools extended this invalid chain.

MB: That was March 2013 right?

MC: That was a ways before SegWit. I don’t think it was that far back. March 2013 was a different bug. It was only a year or two before SegWit.

MB: March 2013 was a flag day soft fork that caused a chain split.

MC: That was a separate issue. There was one a little later. In any case you can have these flag days, but flag days require a much higher level of making sure everyone is upgraded than these miner readiness signaling soft forks. Soft forks really activate when everyone on the network has upgraded, but that doesn’t mean you can’t derisk the possibility of forks. We should certainly, as much as we can, derisk the possibility that there exist two chains, that some people who are running light clients might be on one chain and get double spent or what have you. A great way to do that is to say “We are going to make sure that almost all the hash power is ready and running new code.” Version bits is ok for that because it represents exactly that, it represents miners indicating that they are ready and running the code. For flag days there is technical risk, and I think there is really high social risk to the culture around Bitcoin changes and holding ourselves to a high bar. Some of it is technical but a lot of it is the discourse, and the discourse around doing a UASF or a flag day soft fork activation for Taproot is in my view really negative. People are saying we are just going to do it and we’re going to activate it and that’s just how it’s going to be. That to me sounds like New York Agreement, SegWit2x. It is a broader set of people, it was planned in public instead of in private, sure and that is great, that is an improvement, but it is still this discourse that sets Bitcoin up for not being this community that takes itself very seriously and is very careful.

MB: I can certainly see that. For this particular situation with Taproot, Schnorr, people have been eagerly awaiting this and I guess it is just trying to find that balance between urgency and complacency.

MC: It is totally understandable. It has been forever.

MB: In many conversations with you and others precedents matter and we should be thinking about precedents that are set. Do these nuclear options even need to be put on the table yet?

MC: I am really afraid of this normalization of them. Yes there exist options where we can just go to a full-on market referendum and say “Here are two different pieces of software. They are going to split into two different chains at this date and we think the market is going to strongly prefer to stay cohesive because that is how everyone maximizes their value. Here are the two chains, everyone figure it out and let the market decide.” We can do that, that is an option and that is basically what happened with SegWit2x and what might have happened with UASF and BIP 148 had the two sides not come together in a more technical way. It is such a mess. It is a mess for everyone. No one holding Bitcoin going into the SegWit activation deadline and the UASF activation timeline was super comfortable. I went into the office at I think 11pm, midnight, right before it was going to activate and planned on staying up all night just to make sure something didn’t catch fire. Every exchange had people doing that, everyone who was meaningfully transacting paused transactions. We shouldn’t have to do that. There are technical reasons for it and there are social reasons for it. But those technical reasons, we can derisk those technical reasons so why are we putting these things on the table?

MB: So let’s describe the table right now for Taproot. What is it looking like?

MC: I have taken some time off from some of those discussions because I got a little frustrated. My understanding is that it has come around; the folks working on it, there is the broad Twitter universe of people who are regularly commenting and there are some public discussions that are a little more structured that some people have put together, I think those public discussions that are more structured have come around to the idea that there is some risk in doing these flag day activations. I think the latest terminology you might see is BIP 8(true) which is just a flag day, there are some other technical features but it is largely a flag day activation. There are some technical risks with this and most importantly there is no one against this thing. Mining pools have even started signaling that they’re in favor of it and they’re ready for it so why are we having these debates? Let’s just do something more akin to BIP 9 but with some of the technical nuance issues fixed, which you might now see referred to as BIP 8(false). Especially because it is pretty clear there are not going to be any mining pools that try to play shenanigans to try to block it for their own personal reasons, I think people have come round to that as probably the approach. I still remain very worried that we are normalizing these things. We are normalizing a discourse around changes that is “We are just going to do it.” We are normalizing an activation around changes where “Look, it is in the same BIP that described the last activation method to describe how we do a flag day activation that not only forces a consensus change but forces miners to indicate preference for this consensus change.” To slap them in the face while we are at it. These designs, people are legitimately frustrated with miners still after years since 2017, since the SegWit mess, as they should be. People should be frustrated with miners for that, or mining pools specifically.
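
The BIP 8(true) versus BIP 8(false) distinction mentioned here comes down to the lockinontimeout parameter: with it set to true the deployment effectively becomes a flag day, locking in at the timeout even without the miner signaling threshold being met (after a mandatory signaling period); with it set to false it simply fails at the timeout, much like BIP 9. A deliberately simplified sketch of only that end-of-timeout branch, with the rest of the state machine omitted:

```rust
// Simplified sketch of the end-state difference between BIP 8(true) and BIP 8(false).
// The real BIP 8 state machine has more states (DEFINED, STARTED, MUST_SIGNAL,
// LOCKED_IN, ACTIVE, FAILED) and works on block heights; this only captures what
// happens when the timeout is reached without the signaling threshold being met.

#[derive(Debug, PartialEq)]
enum Outcome {
    LockedIn, // the soft fork will activate
    Failed,   // the deployment is abandoned
}

fn at_timeout(threshold_met: bool, lockinontimeout: bool) -> Outcome {
    if threshold_met {
        Outcome::LockedIn
    } else if lockinontimeout {
        // BIP 8(true): in effect a flag day; lock in regardless of signaling,
        // after a period in which blocks are required to signal.
        Outcome::LockedIn
    } else {
        // BIP 8(false): behaves like BIP 9, the deployment just times out.
        Outcome::Failed
    }
}

fn main() {
    println!("BIP 8(false), threshold not met: {:?}", at_timeout(false, false)); // Failed
    println!("BIP 8(true),  threshold not met: {:?}", at_timeout(false, true));  // LockedIn
}
```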

MB: Jihan is gone. F2Pool had some f***ery with switching BIP 9 off and on and confusing people with their signaling but outside of that the only hostile miner that I can recall was Bitmain being driven by Jihan. He’s gone.

MC: He’s gotten pushed out because he bet the company on Bcash and that didn’t go very well. It should be focused on healing and doing the best and most careful design. I think it is also important to talk up and realize how much work has gone on in the background that I think is critical for consensus changes and should be critical for consensus changes in the future that hasn’t been sufficiently talked up or publicized. There were a number of meetings driven by the Bitcoin Optech folks between large businesses just talking about all the technical details of Taproot. Really getting into the nitty gritty and saying “Here is the proposed change. You are a big user of Bitcoin. What do you think? Are you happy with this? Are there some issues with it that maybe impact you in some way? Can you integrate this? Are you excited about this? Is this interesting?” All these kinds of things. Obviously it has been publicly decided at this point for years, it has been maybe a bit of a slower process for various reasons but these things happened in a way that Bitcoin has really grown up because of work by a few people in the Optech group. I think that should be the face of Bitcoin consensus changes, not the debates over specific activation methods and people saying “We need to force miners to signal at the end of a year because that is how we get this quicker.” The face really needs to be look at all the work some of these developers put in, people at Chaincode and the Optech group and Bitcoin Core contributors funded at this point by various groups, all the work they’ve put in to making sure this is carefully designed and talking to everyone they could and making sure no one in the community is going to be negatively impacted by this and it is going to really help some stuff like Lightning and some other systems and then going ahead with it. That needs to be the face of consensus changes.

MB: To put forth the user activated soft fork argument for this particular case. Again I don’t think it is going to get to that point, I don’t think it needs to, I don’t think it should because of precedent, focusing on the scars from SegWit2x, that battle, there does seem to be a bit of apprehension from a large part of the developer community about putting forth an activation path. People are like “Who is going to put something forward?” The UASF crowd has been “If no one is going to do it we can do it and this is the way we think it is best to do it because we want this.” There is not a rush to get it. It is sort of a rush to get it but do we have to put this ball into motion?

MC: It has been years. Nothing is really happening, what is going on? Not so much a rush as it would be nice to get it eventually. I honestly can’t fault that argument. There are some folks who worked on Taproot who don’t have any desire to work on activation mechanisms because of basically SegWit, the mess of the activation debate there and because activation mechanisms in Bitcoin, there is no good answer. The right answer is we find a way to have changes only activate once it is clear that no one in the community is going to be negatively impacted. The changes must not be blocked by someone who is saying they are going to be negatively impacted when they are not. There is no perfect technical solution to that. There is not a way to capture that so it is ultimately a little subjective. There are some folks that don’t want to work on that kind of stuff. There are folks who do want to work on that kind of stuff. I think that conversation got started maybe only, I think I kickstarted it but I immediately stopped working on it.

MB: I feel like it was this time last year.

MC: Let me check my email. I sent an email on January 10th of last year describing what I think are good technical requirements to have for an activation method and kickstarted a little bit of a discussion. I got some responses, not from many people. That discussion only started around then and I think for a while there were only a few people involved in it and then more recently more developers have got involved in those discussions, Anthony Towns and a few other people leading the charge on that. It has been a slow process and doubly so because there is a lot of acrimony. Over time no one has really wanted to talk about SegWit and that mess because it was a painful time for a lot of people. I think over time different people have started remembering it slightly differently. It has been 3 years, that is a normal thing for human brains to do. And so I think once that conversation got started I quickly realized, and I think a few other people realized, that people are on some very different pages in terms of how we should think about soft forks. It has taken time but it sounds like there is a little more agreement, a little more “There’s debate over whether we should do flag days and it looks like it won’t even be an issue so maybe we should just do a normal 95 percent miner readiness signaling thing. We’ll do that and if it doesn’t work out we can revisit it and do a flag day, fine, whatever.” Not many people are going to say that is bad. They are just going to say they’d rather do a flag day maybe. There are some legitimate complaints about precedent on the other side saying that miner readiness signaling should not be viewed as a vote for or against a consensus change. Consensus changes aren’t decided by miners and there is an issue with setting a precedent that miners decide consensus changes based on this signaling mechanism. That is a very valid point. I think these two opposite positions about the different options on the table, flag day UASF or not, both have interesting points about precedent and drawbacks of the other side. Thus it has been a long conversation of hashing it out. That is to be expected.

MB: It is good to see people meeting on IRC. There is another meeting next Wednesday on IRC about it. I’m pulling up Michael Folkson’s notes from the first IRC meeting. Overwhelming consensus that 1 year is the correct timeout period, unanimous support for BIP 8 except for Luke Dashjr (see notes for exact wording), no decision on start time but 2 months was done for SegWit and that didn’t seem too objectionable. It seems like good conversation from Michael’s notes.

MC: I just have Michael’s notes as well. It seems like people are coming round to just doing the miner readiness signaling thing.

Comparing the SegWit and Taproot soft fork changes

MB: It is important to think about this precedent and it is why I love having you on because you are one of the most adversarial thinkers I’ve met in this space. I do think people have very high time preference about this stuff. I want Schnorr and Taproot as well but again precedents matter. We throw the fact that precedents matter at other projects that we like to critique. It would be a bit hypocritical if we didn’t exercise this type of caution and patience for a massive upgrade. Do you think this is bigger than SegWit?

MC: In terms of code, no. In terms of cryptographic changes, yeah. It is also a big upgrade in terms of new things you can do with it, debatable, you could argue either way. It is big but it is also not going to make or break Bitcoin. It is big, it is going to improve privacy on Lightning, it is going to improve the ability for people to use multisig significantly, especially larger multisig policies, it is going to over time improve privacy onchain. All of these things are important and cool and awesome but also not going to make or break Bitcoin. But in my view how the community views changes and how the change process happens is going to make or break Bitcoin.

Precedent of soft fork activation mechanisms

MB: I can agree there. There is a slippery slope maybe. You effectively do a flag day soft fork this time around but then it is jammed down people’s throats at some point in the future.

MC: I think the pro UASF argument would be that largely SegWit didn’t have any material detractors who would be harmed by SegWit directly and thus it was ok that UASF got jammed through in a month or two. But also UASF was itself a consensus change on its own. Still years later people are running round today holding on to a consensus change that was incredibly rushed, it was about as rushed as, if not more rushed than, SegWit2x and the New York Agreement. “Here’s code and here’s binaries. Run it and we are going to create our own chain and fork off in 3 weeks” or 2 months or whatever it was. That is also this really heavy risk and people are holding that up as a gold standard for how we should think about doing activations in Bitcoin. And that worries me.

MB: I appreciate that you are worried. Somebody has got to worry about this s***. I think sanity is going to prevail here, hopefully patience prevails.

MC: The good news is that the market isn’t always right but it is true that people in Bitcoin generally are at least vaguely aware that if they screw Bitcoin up then the market is going to punish them eventually and that there are certain things about Bitcoin that must not go wrong in order for its value to remain. That above all else people understand because a lot of people have millions and millions invested in Bitcoin, whether their business or personally.

MB: 1.5 billion for Tesla. The severity and the importance of making sure this network survives in perpetuity are so grave that caution should not be thrown to the wind. Precedent should be very seriously considered and you should lower your time preference. But that being said I am very optimistic about what has been going on recently in these IRC meetings.

Consensus forming

MC: Yeah it seems like there is a bit more consensus forming which is really good.

MB: Give a shout out to the pools, to Poolin, I think that may have given confidence for some in the developer community to begin having these types of conversations and sticking their necks out with specific activation paths. Obviously users on Twitter and other platforms signaling that they want to use some of the functionality that will be provided by this upgrade.

MC: I would love to see it more from the business community as well but all that stuff is really great to see. At least during the SegWit debacle there were 3 or 4 main groups: you’ve got the business community, you’ve got pools and miners, you’ve got individual users and you’ve got developers. The more we can get all 4 of those groups to be vocal that a change is acceptable and meets their requirements, maybe it is useful to them, maybe it is not, but at least it is acceptable and not a negative to them, the better off we will be for future changes. Even if this change is minor and a lot of people don’t really need to care about it, having them say “I looked at that change and I don’t care about it” is a great outcome.

MB: At the end of the day they don’t have to use it if they don’t want to because it is all backward compatible?

MC: Right. The change was made very carefully to make sure it is not risking other people’s coins with crazy new cryptographic assumptions. It is not going to massively change the market dynamics of Bitcoin or the fee market or whatever. It is a pretty straight up and down, backwards compatible, if you use it you use it, if you don’t you don’t care kind of change. It is important that that is stated as broadly as it can be. That is the gold standard for Bitcoin changes and that is how Bitcoin changes should look.

MB: Bitcoin is beautiful. It feels weird, maybe people will be affected by this conversation and that will have a little effect on whether or not this gets activated.

MC: Hopefully.

Taproot code in Bitcoin Core

MB: The Taproot code has officially been merged into Core, version 0.21.0. What is the technical thing that happens that takes it from being merged to being activated and used? The code is in the codebase technically.

MC: You can have code but there is no way to reach that code on mainnet; it can’t be used on mainnet. Until there is an activation method, the code is there but for all intents and purposes it is not part of the Bitcoin consensus protocol, at least not the mainnet Bitcoin consensus protocol. Once there is an activation method, once there is a way to reach that code on mainnet Bitcoin, that code is now merged and activated, whatever you want to call it.

MB: That was one of the questions I had in my mind for the last year. It is merged but you can’t use it. I am pumped that people can use it on Signet. Maybe businesses can mess around with some proof of concepts to be convinced to or not to support this.

MC: People can use it yeah. That is a reason to merge it first. Historically in Bitcoin Core it has always been you merge the bulk of the code changes in a major release, on testnet, signet or whatever, on your own regtest networks, you can test it out, play with it, use it in a live system and then the actual activation parameters, in essence the formal proposal of this fork from the Bitcoin Core developer community takes the form of its own minor release version that does nothing, maybe it fixes a few minor bugs, but in large part it is just a proposal and it just contains those activation parameters to make a cohesive soft fork change. “Here is a series of consensus changes in code that can be reached on mainnet in a version of Bitcoin Core. You can run this version or you can stay on a previous version, there are little code changes, there isn’t a lot of features you’re missing out on by running this new version. Run this version and now you are on the hopefully new soft fork, the activation rules will kick in.”
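
The “activation parameters” described above boil down to a handful of numbers: a version bit for miners to signal with, a start point, a timeout and a signaling threshold. The sketch below shows BIP 9/BIP 8 style signal counting over one 2016-block window in Python; it is not Bitcoin Core’s actual implementation, and the heights and threshold are illustrative placeholders, not the parameters that eventually shipped.

```python
from dataclasses import dataclass

PERIOD = 2016  # blocks per signaling window, as in BIP 9 / BIP 8

@dataclass
class Deployment:
    bit: int             # which version bit miners set to signal readiness
    start_height: int    # first height at which signaling counts (illustrative)
    timeout_height: int  # height after which the deployment times out (illustrative)
    threshold: int       # signaling blocks needed per window, e.g. 90% of 2016

def signals(block_version: int, bit: int) -> bool:
    """A block signals if its version uses the 001 top bits and has the deployment bit set."""
    return (block_version >> 29) == 0b001 and bool(block_version & (1 << bit))

def window_locks_in(versions: list[int], dep: Deployment, window_start: int) -> bool:
    """Count signaling blocks in one 2016-block window and compare to the threshold."""
    if window_start < dep.start_height or window_start >= dep.timeout_height:
        return False
    return sum(signals(v, dep.bit) for v in versions) >= dep.threshold

# Toy usage: a window where 1900 of 2016 blocks signal on bit 2.
taproot_like = Deployment(bit=2, start_height=681408, timeout_height=760032, threshold=1815)
toy_versions = [0x20000004] * 1900 + [0x20000000] * 116
print(window_locks_in(toy_versions, taproot_like, window_start=693504))  # True
```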

LDK and rust-lightning

MB: I learned a lot today, I always learn a lot when I speak to you Matt. How are things going with the LDK?

MC: Things are going great. I have been working, for those of you who are unaware, for a few years now building out a Lightning library. Not yet another Lightning node, lnd and c-lightning and whatever work great, a lot of people use them, a lot of people are really happy with them. They are robust, they are well tested, they have got a lot of users. But what there isn’t is a library that lets you integrate a Lightning node into your project really tightly. Instead of “I’m going to spawn lnd, it is going to run, it is going to download the chain itself and have its own onchain wallet” it is “Here’s a library, you integrate it, you plug in the chain, you plug in onchain keys and onchain funds to create channels and then you have a Lightning node that is in your application, it is not some separate thing that you’re controlling or talking to.” I started that maybe a year and a half ago now, Square hired a bunch of us at Square Crypto and after a bunch of back and forth this was the project we decided to work on. I wasn’t actually the one who proposed it but I think we all saw a lot of value for this potentially especially in the domain of existing Bitcoin wallets that haven’t integrated Lightning or existing Bitcoin applications that have wallets that haven’t integrated Lightning and hopefully will in the future but building a Lightning node from scratch is a lot of work. Making sure it handles every possible edge case is a lot of work. We’ve got this thing that lets you integrate Lightning into your application. I think everyone on the Square Crypto team looked at that and was like “This could have a big impact on Bitcoin down the road.” We got started working on it, the new folks were getting spun up on how all the bits and pieces worked and then Covid hit and all of a sudden we weren’t in the office together. It slowed things down a little bit but we have spent a year really digging into what was there and cleaning up. The core of the code was pretty good, works pretty well, it is super well tested, the actual APIs that it exposed were fine but could use some work. We spent the last year really cleaning up those APIs and making it really easy to integrate. Instead of just having an API where you have to download the blockchain yourself we have options. You can take it and say “I want you to download the blockchain and here is where you get it from. I am going to do my own onchain wallet.” Or vice versa you can say “You do that work”, a bunch of different sample batteries, sample implementations of things. It is now not entirely a batteries not included Lightning node, it has some batteries in it. We’ve cleaned up that a lot. We have spent a tonne of time on building language bindings so you can use it in different languages. It is written in Rust, that’s great because we can compile it for hardware wallets or you can run it in the browser but only if you call it from Rust. We actually had to build our entirely own language bindings generation utility, there was nothing out there as far as we were able to find that in any way is able to take an existing library that has complicated object oriented semantics and just spit out language bindings in a bunch of different code. 
All the language binding stuff that exists is really all about “You have one simple function and you just want to stub it out into a different language so that is faster.” Not like “You have this whole library that has a bunch of different things and a bunch of different interactions and you want that in language bindings.” We had to build top to bottom our own language bindings generation stuff so we’ve got that. We’ve got good C, C++ bindings if you want to use it at a very low level. We’ve got some Java bindings that people can use. We’ve got samples of using it in Swift and we are getting there on the Javascript end. You will be able to call it directly from Javascript in a web browser. If you ask “Why run a Lightning node in a web browser?” I’m not sure why you would but you can. It is this really nice cross platform thing. So we’ve spent the last year building out language bindings and we are really rounding the corner now. We have what is a rather cohesive product, it is lightningdevkit.org if you are interested, join our Slack, reach out to us and we are happy to work closely with people. We are an open source team hired to build this out so we are happy to help people out any way we can in terms of integrating and getting Lightning in as many places as we can. It is going great and we really turned the corner on having a cohesive thing that people can use now. We are really happy with it.
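
To illustrate the “library, not daemon” design described above, here is a toy Python sketch of what plugging your own chain source and key source into a Lightning library could look like. The interfaces and names are hypothetical and exist only for illustration; they are not LDK’s real API, which is a Rust library built around traits that the host application implements.

```python
from abc import ABC, abstractmethod

class ChainSource(ABC):
    """The host application feeds blocks in; the library never syncs the chain itself."""
    @abstractmethod
    def register_listener(self, on_block) -> None: ...

class KeySource(ABC):
    """The host application controls the on-chain keys and funds used to open channels."""
    @abstractmethod
    def get_funding_utxo(self, amount_sat: int) -> dict: ...

class LightningNode:
    """Hypothetical library entry point: everything external is injected by the caller."""
    def __init__(self, chain: ChainSource, keys: KeySource):
        self.keys = keys
        self.channels = []
        chain.register_listener(self._on_block)

    def open_channel(self, peer: str, amount_sat: int) -> None:
        self.channels.append((peer, self.keys.get_funding_utxo(amount_sat)))

    def _on_block(self, height: int) -> None:
        pass  # react to confirmations, expiring HTLCs, etc.

# Minimal host-side implementations, just to show the wiring.
class DummyChain(ChainSource):
    def __init__(self):
        self.listeners = []
    def register_listener(self, on_block) -> None:
        self.listeners.append(on_block)

class DummyKeys(KeySource):
    def get_funding_utxo(self, amount_sat: int) -> dict:
        return {"txid": "aa" * 32, "vout": 0, "value": amount_sat}

node = LightningNode(DummyChain(), DummyKeys())
node.open_channel("peer@host:9735", 1_000_000)
print(node.channels)
```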

MB: I am pumped to see that come out. We had you, Val and Matt Odell in that episode describing what you guys were embarking on, attempting to attack this LDK project, an incredible update. I have been learning a lot from… on Clubhouse. The way he can describe the potential for the Lightning Network and break down Taproot, why that’s important, gets into cryptography, point time locked contracts something I am very excited about, he explained that for people as well.

MC: Big privacy improvement.

MB: Rendezvous routing with PTLCs seems like a game changer.

MC: There are going to be some interesting trade-offs coming up in terms of how people do routing, privacy trade-offs in Lightning. If I am honest it does worry me a little bit because existing academic research already shows that Lightning today, the privacy against middle nodes inferring who is sending money where is pretty weak and they have a lot of visibility into what is going on. Having Taproot and Schnorr specifically for point time locked contracts is going to improve that, hopefully significantly but there are a lot of other structural issues. Then a lot of especially the mobile wallets, oftentimes calculating a route right now can take a little bit of time and also importantly if you send a route and then it fails halfway there then you have to calculate another route and you have to send it again, the payment. That can take some latency and nobody wants payments latency. Downloading the graph can take a little bit of time especially if you are on a mobile phone, you open the app and can’t send for a minute while you download the graph so you can calculate a route. All these UX paper cuts. The way to solve them is to take a privacy trade-off so different routing methods, having a server calculate a route or whatever. There is a lot of really exciting research there and really exciting ways to do routing and rendezvous is really awesome. Rendezvous is less of a trade-off than some of the other work. Rendezvous is great because you can have payee privacy from the payer. You have hidden service privacy, like Tor, where the payer doesn’t necessarily know where the money is ending up which is really huge. But you also have to have that in a context where your routing algorithm considers privacy really carefully and you’re not leaning too much on middle nodes to do a lot of your work for you. It will be interesting to see where all that ends up and whether people can build a good enough UX on top of really privacy optimized Lightning routing.

MB: I’m bullish on the Lightning Network, I know I’m a fanatic but I think people are sleeping on it.

MC: People are sleeping on it in part because it is not integrated anywhere. I think a lot of the Lightning folks across different parts of the Lightning ecosystem are really excited. This year looking towards more exchange integration and institutional integration of Lightning which hopefully is going to take some volume of transactions off the blockchain itself, give people some instant transacting ability and then hopefully that also translates a little bit into mobile wallets. It shouldn’t be the case that you ever download a Bitcoin wallet that doesn’t support Lightning. That is where we are at today. You download a Bitcoin wallet and it probably doesn’t support Lightning and that sucks. Or if it does there are a few wallets, non-custodial Bitcoin wallets but then custodial Lightning because the only good way to integrate Lightning today is custodial. That shouldn’t be the case.

MB: Even better if you can abstract so you don’t even know you are using Lightning versus main chain.

MC: Right. Nor should a user ever be aware of Lightning, they should just know that their payment cleared instantly. That is what you should have. I am really bullish on it, that’s why I’m working on it full time but also I’m really bullish on integrations into existing Bitcoin wallets and new Bitcoin wallets that support it. Today a lot of Bitcoin stuff is really tightly integrated with the application level. People aren’t just downloading and running Bitcoin Core and using Bitcoin Core’s wallet to power their big exchange. Some people use Bitcoin Core but most people don’t use its wallet. Those are the options for Lightning today. You can run c-lightning, it has a great plugin system, you can do a tonne of hooking and editing it but it is still downloading and running a binary. A little less so for lnd. lnd has you download Bitcoin Core and you get the RPC API, that is what you have. It is great for many use cases, a lot of people use Bitcoin Core very successfully as a wallet on their server, whatever and same for lnd. But it is just not going to get us there in terms of mobile wallets everywhere and it is not going to get us there… I am really bullish on it, I hope we can have a really positive impact too.

MB: Thank you for all the work you do, thank you for reaching out. Like I told you, you don’t have to say thank you for having you on, you have an open invite in perpetuity until I croak or you croak or until this podcast croaks.

MC: Next time I’ll make you bribe me with beer.

MB: Yes we need to do that. I’ll drive up for that. Anything you want to end this on? Any particular note or message?

MC: I am super pumped for Bitcoin, I am super excited for it, I only worry about it because I love it.

MB: I love you Matt, thank you for coming back on, it was a great conversation, it was a great history lesson, great perspective, I hope you enjoy the rest of your night. If there is anything else you ever want to talk about in the future just let me know.

\ No newline at end of file diff --git a/wasabi/research-club/2020-06-15-coinswap/index.html b/wasabi/research-club/2020-06-15-coinswap/index.html index e483ac092d..b160f9c56c 100644 --- a/wasabi/research-club/2020-06-15-coinswap/index.html +++ b/wasabi/research-club/2020-06-15-coinswap/index.html @@ -11,4 +11,4 @@ < CoinSwaps

CoinSwaps

Date: June 15, 2020

Transcript By: Michael Folkson

Tags: Coinswap, Research

Category: Meetup

Media: -https://www.youtube.com/watch?v=Pqz7_Eqw9jM

Intro (Aviv Milner)

Today we are talking about CoinSwaps, massively improving Bitcoin privacy and fungibility. A lot of excitement about CoinSwaps so hopefully we can touch on some interesting things.

CoinSwaps (Belcher 2020)

https://gist.github.com/chris-belcher/9144bd57a91c194e332fb5ca371d0964

This is a 2020 ongoing GitHub research paper that Chris Belcher has been working on. You can view the original file here. It is heavily based on the 2013 work by Greg Maxwell.

Wasabi Research Club

Just a reminder of what we have been doing. Wasabi Research Club is a weekly meetup that tries to focus on interesting philosophical papers, math papers, privacy papers around Bitcoin. We cover different topics. You can see here we have covered a lot of different topics in the last few months. We went on a hiatus because I was mostly the one organizing these things. It became a lot less formal in April and May with casual conversations. Most recently we are very happy to say that Wabisabi, the outcome of all of this discussion and work, we have this new protocol draft that has just been finished by a lot of people on this call. That is very exciting. Now we are getting back into the swing of things with regular discussions on papers. Last week we talked about CoinSwaps as a broad idea. What are CoinSwaps? Why do we want them? What are some ways we could use CoinSwaps? Today we are looking at a specific protocol for CoinSwaps which is Chris Belcher’s 2020 paper. Find out about what we are doing on our GitHub.

CoinSwaps (Belcher 2020)

https://gist.github.com/chris-belcher/9144bd57a91c194e332fb5ca371d0964

Like any good privacy paper the CoinSwap paper begins with prose. “Imagine a future where a user Alice has bitcoins and wants to send them with maximal privacy so she creates a special kind of transaction. For anyone looking at the blockchain her transaction appears completely normal with her coins seemingly going from address A to address B. But in reality her coins end up in address Z which is entirely unconnected to either A or B.” The big thing that Belcher is talking about is the concept of covertness.

Property - Covertness

What is covertness? We talked about this last week. We say a protocol is covert if a passive bystander is not able to differentiate a user of the protocol from a regular everyday user who is not engaged in the protocol. For Bitcoin this means not revealing any sort of smart contract behavior in the address format or the transaction format. If we care about covertness we have to admit that Coinjoins are not covert. What do we mean by that?

When we look at a Coinjoin we can say individuals participating in a coinjoin are anonymous between themselves. There is a probabilistic uncertainty on which blinded output belongs to which participant. There is no question to the passive observer that what is going on before them is a Coinjoin. These three individuals are engaged in some sort of privacy enhancing technique. In that way a ZeroLink Coinjoin is not covert. We know people who use Coinjoin and who don’t use Coinjoin and they are very distinct.

Covertness

Why is covertness so critical? This is brought up immediately in the paper. Belcher says “… imagine another user Carol who isn’t too bothered by privacy and sends her bitcoin using a regular wallet which exists today.” It doesn’t consider privacy at all. Because Carol’s transaction looks exactly like Alice’s and Alice does care about privacy, Alice is doing these covert CoinSwaps, Carol’s transaction is now possibly Alice’s transaction. You can’t be certain that Carol herself is not engaged in some privacy protocol because it looks just like Alice. Carol’s privacy is improved even though she didn’t change her behavior and perhaps has never even heard of this software. She doesn’t care at all about CoinSwaps but because Alice is engaged in these covert privacy protocols she is actually shielding Carol as well. This is a huge property. Coinjoin in a sense does offer this property in a very small amount. If Coinjoin participants are engaged with average everyday users, those average everyday users are getting obfuscated coins. When they give those obfuscated coins to other users, obfuscated coins are everywhere.

What do we mean by covertness in light of CoinSwaps? If we have Alice in orange and Bob in yellow, when they are doing a CoinSwap between each other they are each sending their coins to a special address that looks like your average plain pay-to-public-key-hash address or pay-to-witness-public-key-hash address. From that address they are switching ownership from one to the other. The critical thing here is that on the blockchain these two graphs, Alice’s coin going to this brown address and then to this yellow address, are completely unconnected. There is no merging of the graphs on these two sides. They are completely separate graphs. No one knows that this is happening. The end result is that whatever Bob’s history is, Alice now inherits it. If Bob is a British individual who likes to buy shoes online, his history now goes over to Alice who is then going to behave totally differently. Any forensic company trying to cluster those behaviors and say “This graph looks like Bob” will be met by completely different user behavior because actually it is Alice. This is totally covert.

CoinSwap, Belcher

Here is Belcher talking about CoinSwaps at the very beginning. You can see that he outlines it very clearly here. At the bottom is the same chart that was on a previous slide. The critical thing to understand is that CoinSwaps are non-custodial. Alice is not giving her coin up to Bob hoping that Bob doesn’t steal that coin. They are using hash timelocked contracts. As soon as Bob claims Alice’s coin, Alice is able to claim Bob’s coin. This is not very different from the Lightning Network. This is heavily based on the 2013 protocol by Gregory Maxwell. Back then Maxwell had to use many more transactions and they weren’t covert at all. If something fishy was going on, the transaction looked different from an average transaction. This is not the case with Belcher’s work.
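
The atomicity comes from both sides locking coins to the same hash: claiming one side publishes the preimage, which is exactly what the other side needs to claim. Here is a minimal Python sketch of that hash-preimage mechanism; the real protocol uses Bitcoin scripts with timelocked refund paths rather than objects like these.

```python
import hashlib
import secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class HashLockedCoin:
    """A coin claimable by whoever presents the preimage of `hashlock` (timeouts omitted)."""
    def __init__(self, amount_btc: float, hashlock: bytes):
        self.amount_btc = amount_btc
        self.hashlock = hashlock
        self.revealed_preimage = None

    def claim(self, preimage: bytes) -> bool:
        if sha256(preimage) == self.hashlock:
            # Claiming on chain necessarily publishes the preimage for everyone to see.
            self.revealed_preimage = preimage
            return True
        return False

# Bob knows a secret preimage; both sides lock coins to the same hash.
secret = secrets.token_bytes(32)
h = sha256(secret)
alices_coin = HashLockedCoin(1.0, h)  # locked so Bob can claim it with the preimage
bobs_coin = HashLockedCoin(1.0, h)    # locked so Alice can claim it with the same preimage

assert alices_coin.claim(secret)                       # Bob claims Alice's coin...
assert bobs_coin.claim(alices_coin.revealed_preimage)  # ...so Alice can now claim Bob's.
```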

ECDSA-2P

How is Belcher making it covert? If Maxwell couldn’t make it covert how is Belcher doing that? He is using a clever trick called ECDSA-2P. This is two party elliptic curve digital signature algorithm. It is essentially a 2-of-2 multisignature address that looks the same as a regular single signature address. Any of these address types can be used with two party ECDSA. You can use it with pay-to-public-key-hash (P2PKH), you can use it with pay-to-script-hash wrapped pay-to-witness-public-key-hash (P2SH - P2WPKH) or you can use it with the new bech32 pay-to-witness-public-key-hash (P2WPKH). There is a big point that Belcher makes here. This shows a lot of where he comes from. I haven’t said a lot about Chris Belcher because everyone in the chat knows about him. He is the privacy wiki maintainer. He is the Electrum Personal Server maintainer and a Joinmarket maintainer. He is probably one of the top three people in this space working on privacy. You’ll notice something very smart here. He notes that “Schnorr signatures with MuSig provide a much easier way to create invisible 2-of-2 multisig but it is not as suitable for CoinSwap.” Why isn’t a more secure, simpler protocol that all the new hype is talking about the way to go? The answer is simple. Because not enough people are using that address type. What Belcher really wants is for users of CoinSwap to look like the overwhelming majority of average people on the network. He wants people to look like everyday old school P2PKH or the newer P2SH wrapped P2WPKH. He doesn’t want to use a new format that is used by a small number of people.
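
The key idea is that the two parties hold multiplicative shares of one ordinary-looking private key, so the public key that appears on chain is indistinguishable from any single-sig key. A toy Python sketch of the key-level relationship is below; real 2P-ECDSA (for example Lindell’s protocol) signs via a two-party computation so that neither side ever learns the full key, and that signing machinery is omitted here.

```python
import secrets

# Order of the secp256k1 group (a public curve parameter).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Each party independently generates its private share.
x1 = secrets.randbelow(N - 1) + 1   # Alice's share, never leaves her machine
x2 = secrets.randbelow(N - 1) + 1   # Bob's share, never leaves his machine

# The effective private key behind the swap address is the product of the shares.
# On chain only the corresponding public key x*G appears, which looks like any
# ordinary single-key output; nothing reveals that two parties are behind it.
x = (x1 * x2) % N

# Neither share alone determines x; combining them in either order gives the same key.
assert x == (x2 * x1) % N
print(hex(x))
```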

Covertness

The property of covertness that is important here is the one that I have highlighted here in green. It is the special addresses. We need a 2-of-2 multisig but we don’t want to reveal that it is a 2-of-2 multisig because 2-of-2 multisigs make up a tiny minority of both Bitcoin amounts and Bitcoin addresses on the network. What we would like to do is look like any wallet, blockchain.com wallet, Wasabi wallet, Mycelium wallet, Coinomi wallet, every other common wallet. None of those wallets are offering 2-of-2 multisig. That is quite rare. He solved this with 2 party ECDSA.

Amount Correlation?

The second thing we have to ask is how are we going to avoid amount correlation. The problem you have is if an adversary starts tracking an address of AliceA at Point A they could unmix the CoinSwap easily by searching the entire blockchain for other transactions with amounts close to the amount that Alice is using. If it is 15 Bitcoin then that forensics company is going to watch for other addresses that are doing transactions with exactly 15 Bitcoin. This would lead them to address AliceB. We can beat this amount correlation attack by creating multi transaction CoinSwaps. What does that look like?

At the top Alice is going to swap with Bob. She is going to have her 30 Bitcoin. She is going to swap it over to Bob who is going to receive the 30 Bitcoin. Bob is not going to give Alice a single UTXO. Instead Bob is going to give Alice a cluster of UTXOs that amount to 30 Bitcoin. As you can see here, now we are introducing not a single CoinSwap but a triple CoinSwap, a multicoin CoinSwap. The net result is that it is much harder for anyone on the outside to find those UTXOs that amount to 30, especially if Alice herself doesn’t have a single UTXO but several UTXOs that she is exchanging for other UTXOs. As it turns out, with this protocol there are no limitations on either merging UTXOs in a CoinSwap or branching UTXOs in a CoinSwap. You really don’t know if Alice is getting 1 UTXO, 2 UTXOs or 12 UTXOs. That is how Belcher proposes that we prevent amount correlation.
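
As a rough sketch of the multi-transaction idea, the snippet below splits a swap amount into several randomized pieces so that no single on-chain output matches the original amount. It is only an illustration; the actual proposal has makers pick from UTXOs they really own and accounts for fees.

```python
import random

SATS_PER_BTC = 100_000_000

def split_amount(total_sats: int, parts: int) -> list[int]:
    """Split total_sats into `parts` random positive pieces that sum to the total."""
    cuts = sorted(random.sample(range(1, total_sats), parts - 1))
    edges = [0] + cuts + [total_sats]
    return [b - a for a, b in zip(edges, edges[1:])]

# Alice swaps 30 BTC; instead of one matching 30 BTC output, Bob hands back
# several unrelated-looking UTXOs that only together sum to 30 BTC.
pieces = split_amount(30 * SATS_PER_BTC, 3)
print(pieces, sum(pieces) == 30 * SATS_PER_BTC)
```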

Property - Trustlessness

Let’s talk about trustlessness. We talked about covertness, let’s talk about trustlessness. A protocol is trustless if it jeopardizes neither the security nor the privacy of the users’ Bitcoin and their Bitcoin history to any single party. The Coinjoins we’ve talked about in the past, in particular ZeroLink Chaumian blind Coinjoins, are trustless because the users do not give the central coordinator any additional information about their addresses. There are other protocols that don’t do this blinding and those of course are not trustless. With the ZeroLink Coinjoin they are trustless. ZeroLink Coinjoins rely on the users having to sign the transactions. If a user believes they are being screwed over and are not getting their amount of Bitcoin back, they simply don’t sign and there is no risk of the user losing Bitcoin. CoinSwaps are not trustless by default. It is not the security of the Bitcoin that is of concern, it is the privacy of the Bitcoin. Whenever you do a CoinSwap you are engaging with one other individual. How do you know that that other individual isn’t Chainalysis or some forensics company? How do you know that individual isn’t going to comply or collude with a bad actor? Clearly you don’t know if the person you are collaborating with is going to rat you out because they know your history of coins. How does Belcher deal with this problem?

Routing CoinSwaps

He introduces this idea of routing CoinSwaps. This diagram here isn’t very accurate, there is a more accurate diagram in a second but the idea is sort of there. Here what you can imagine is three participants are doing CoinSwaps between each other. First Alice is doing it with Bob and then with Bob’s coin Alice is doing it with Carol and then finally with Carol’s coin she receives that back. The idea here is that coins are bouncing around from multiple participants such that one participant can know a particular link but in order for Alice’s coin to be deanonymized all of the participants in this CoinSwap must collude together to deanonymize her. If this is starting to sound very similar it is because this sounds almost exactly like Lightning or Tor. The fact that we are onion routing, we are doing this multiple times with multiple participants so every participant is just one link to the final destination means this is very secure.

This is a better diagram. You can imagine that on the left is Alice’s coin and she is switching it for the brown coin. Now she has that brown coin she switches it for a blue coin. She has a blue coin and she switches it for a green coin. Finally she switches it for one last coin and that is the coin she is deciding to spend from now on. The important thing is that all of these links need to be broken in order for her privacy to be broken. Brown, blue and green must collude together to break her privacy.

If Alice wants to avoid trusting Bob or Carol or Dan, she can trust no single one of them by essentially spreading her trust across all of them: as long as they don’t all collude together, she is pretty much ok.
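
The trust model of routed swaps can be stated as an “AND of all hops”: Alice’s old and new coins only get linked if every intermediate party colludes. A tiny Python sketch, assuming for illustration that each hop colludes independently with some probability:

```python
import random

def route_deanonymized(hops_collude: list[bool]) -> bool:
    """Alice's history links up only if *every* intermediate party colludes."""
    return all(hops_collude)

# If each hop independently colludes with probability p, a k-hop route is
# fully compromised with probability p**k (0.3**3 = 2.7% here).
p, k, trials = 0.3, 3, 100_000
hits = sum(route_deanonymized([random.random() < p for _ in range(k)]) for _ in range(trials))
print(hits / trials, p ** k)
```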

Multi-Transaction and Routing

Then we have to combine these two concepts together. In the paper Belcher has this drawing here where Alice has her multiple coins, she is swapping them with Bob’s multiple coins. Then she has Bob’s multiple coins and she is swapping it with Charlie. Then she has Charlie’s coins and she is swapping it with Dennis. And so forth until she is ready to spend those coins. The graph is completely broken and Alice has not trusted Bob, Charlie or Dennis. The amount correlation is very hard to perform.

Here is a slightly better diagram I think than the one Belcher gave. Just because he was constrained. You have these CoinSwaps for different coins of the same total amounts. These CoinSwaps continue indefinitely. Belcher does make a point that most people will trust one swap. I think that is an interesting point. Most users will trust one or two swaps and won’t need to do many swaps. I don’t know if it is a particularly sound thing to suggest because it does present problems with motivated actors performing a sybil attack though he addresses that later on.

Here is another diagram. He also talks about what happens if Alice has a single UTXO of 15 Bitcoin. Alice will need to do a branching CoinSwap where she doesn’t swap for a single coin, she swaps for 6 of Bob’s coins. Those coins are then swapped for Charlie’s coins and Dennis’ coins. They are swapped at different times with different participants. Not all 15 coins are going in the same path, they are actually branching out to different individuals altogether. It makes timing and amount correlation very hard. At the end Alice ends with a bunch of coins from Edward and Fred.

If Alice has 2 large coins and she needs to consolidate them but she doesn’t want the history of those 2 coins to be merged, she can again do this same thing by switching them for Bob’s and Charlie’s coins and then finally merging those coins together because Bob’s and Charlie’s histories are less critical than Alice’s history.

Breaking change output and wallet fingerprinting heuristics

Now Belcher is talking about breaking change output and wallet fingerprinting heuristics. The first thing we should talk about is breaking the change output. Equal output Coinjoins easily leak change addresses unless they are swept with no change. We know that any equal output Coinjoin has this problem where there is some change, and that change is what we call toxic. This isn’t a problem with CoinSwap because for any amount of Bitcoin you have you can find a maker, an individual who has Bitcoin and is willing to match your amount in exchange for some fee. CoinSwap doesn’t have that change output heuristic. This brings us to the next point which is the wallet fingerprinting heuristic.

Fingerprinting Heuristics (randomness)

I put in brackets the concept of randomness. BTCPay Server has an interesting transaction output ordering. When you use the BTCPay Server and you send money from within BTCPay Server what it will do is randomly organize the inputs. Typically when you look at businesses that get a lot of inputs or coins into their business, when they want to spend they will order by amount, by reverse ordering, high to low or low to high, or they will order by the time the Bitcoin has arrived in the wallet. But BTCPay Server didn’t do that. They decided to do it randomly. They took all their inputs and it is random, it is not by time and it is not by amount. You might think “This is brilliant. Use randomness and now BTCPay Server adds some privacy for the merchants that use that service.” Unfortunately that is not the case, it is the opposite. Very few people use randomness. The result is that randomness is a fingerprint. Belcher pointed this out. He says “we can break this heuristic by having makers randomly with some probability send their change to an address they’ve used before.” He is talking here about a different heuristic, addresses being reused are likely payment addresses. For example I want to pay you money and I am likely to reuse your address. But change addresses are always brand new. He is saying why don’t we just have a small probability of change addresses being reused on purpose? Again some people might say “Shouldn’t we never reuse an address?” but what Belcher is saying is totally correct. We should be behaving like the crowd so that forensic companies cannot say anything probabilistically or statistically about our particular protocol. Wasabi did a very similar thing I think for RBF a few months ago. We found out that around 7 percent of bech32 spends have RBF enabled. We made it so that Wasabi wallets have 7 percent of the time transactions with RBF. It is not random, it is looking similar to what other people are doing. Another great heuristic he attacks is the script type heuristic. He says why don’t we purposefully make the address format look like the address format of the rest of the network. This is quite clever though there are downsides to not using the most efficient address format because that results in higher fees.
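
The “behave like the crowd” idea amounts to matching observed base rates rather than maximizing randomness. Here is a small sketch of how a wallet might pick those flags, using the 7 percent RBF figure mentioned above; the change-address reuse rate is a made-up placeholder, not a number from the talk:

```python
import random

# Observed base rates the wallet wants to imitate. The RBF rate is the 7% figure
# mentioned above; the change-reuse rate is an invented placeholder.
RBF_SIGNAL_RATE = 0.07
CHANGE_REUSE_RATE = 0.02

def pick_tx_traits(previous_change_addresses: list[str], fresh_address: str) -> dict:
    """Choose per-transaction traits so the wallet blends in with the wider network."""
    signal_rbf = random.random() < RBF_SIGNAL_RATE
    reuse_change = bool(previous_change_addresses) and random.random() < CHANGE_REUSE_RATE
    change_address = random.choice(previous_change_addresses) if reuse_change else fresh_address
    return {"rbf": signal_rbf, "change_address": change_address}

print(pick_tx_traits(["bc1qold..."], "bc1qnew..."))
```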

Additional Topics

There were a lot of other things to talk about in this paper. It likely merits a part 2. I was hoping Belcher would be here to answer some questions. Let’s leave it there for now.

Questions/Comments

Q - What should be the end goal of Bitcoin privacy, the end user experience? Could you describe it? I could describe what the end user experience could be. Very small Joinmarket 2-of-2 Coinjoins so that every transaction is a Coinjoin. Large Coinjoins provide a lot of privacy but it would take 20, 40 or 60 minutes before you can spend your coins. What would it look like for CoinSwaps?

A - I think with privacy or anonymity networks you are trying to smell user behavior and figure out where users are. I don’t see one solution fits all where everything is solved. You still have privacy problems. Let’s say you are a merchant. You are getting all these payments from people that are doing these CoinSwaps. The graph is really obfuscated. Now you have all these payments, what do you do? I think for large businesses and users Coinjoin is still the most effective because it crushes the graph. It makes the graph incredibly hard to unravel.

Q - I am not familiar with the different wallet fingerprints. What kind of fingerprints?

A - I can talk about that in depth. I fingerprint a lot of wallets. I can look at the blockchain with pretty good accuracy and figure out which user is using which wallet. It is everything. When I see a transaction in the blockchain what kind of address format? Is the change address a BIP 69? Is it indexed steganographically or is it randomly indexed? That gives something away. What is the locktime, is it zero or is it a recent block? What is the nSequence? What is the version number of that transaction? Is the change address reused like blockchain.com? That is an easy one. Sometimes I notice that a wallet will pay to… outputs. Only a couple of wallets offer that advanced feature. I immediately know what is going on. The transaction fee, what kind of number is it? Is it a whole number or is it a partial number? When I look at the blockchain is it based on Electrum’s fee distribution or Bitcoin Core’s fee distribution? Does the user get to manually pick or are there three base options? These are things that are pretty quickly revealed. There are probably 20-30 qualities of a transaction and the problem with having a lot of qualities is that if you repeat qualities you become a snowflake which means you become unique. No other wallet is going to have the same 20 qualities that you do. For that reason someone like me, I have a pretty easy time deciphering transactions to which wallet based on nothing but one transaction.

Q - If everything else fails the transaction graph gives the final clue of what happened here.

(Wallet fingerprinting was also discussed at the London BitDevs Socratic Seminar on Payjoin.)
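
To make the “snowflake” point concrete, here is a toy sketch that reduces a transaction to a tuple of the kinds of qualities listed in the answer above and matches it against known wallet profiles. Real fingerprinting uses many more features and fuzzier matching, and the profiles below are invented for illustration:

```python
def fingerprint(tx: dict) -> tuple:
    """Reduce a transaction to a few of the qualities mentioned above."""
    return (
        tx["address_type"],              # e.g. "p2wpkh", "p2sh-p2wpkh"
        tx["locktime_is_recent_block"],  # anti-fee-sniping behaviour
        tx["signals_rbf"],
        tx["version"],
        tx["fee_rate_is_round"],
    )

# Invented example profiles; a real analyst would build these from observation.
PROFILES = {
    ("p2wpkh", True, True, 2, False): "wallet A",
    ("p2sh-p2wpkh", False, False, 1, True): "wallet B",
}

tx = {"address_type": "p2wpkh", "locktime_is_recent_block": True,
      "signals_rbf": True, "version": 2, "fee_rate_is_round": False}
# The more qualities you stack, the more unique the combination becomes.
print(PROFILES.get(fingerprint(tx), "unknown wallet"))  # -> "wallet A"
```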

\ No newline at end of file diff --git a/zh/adopting-bitcoin/2021/2021-11-16-gloria-zhao-transaction-relay-policy/index.html b/zh/adopting-bitcoin/2021/2021-11-16-gloria-zhao-transaction-relay-policy/index.html index 60193ffd93..669815a7ea 100644 --- a/zh/adopting-bitcoin/2021/2021-11-16-gloria-zhao-transaction-relay-policy/index.html +++ b/zh/adopting-bitcoin/2021/2021-11-16-gloria-zhao-transaction-relay-policy/index.html @@ -23,4 +23,4 @@ Gloria Zhao

日期: November 16, 2021

记录者: Michael Folkson

译者: Ajian

标签: Transaction relay policy

类别: Conference

Media: https://www.youtube.com/watch?v=fbWSQvJjKFs

主题:L2 开发者须知的交易转发规则

位置:Adopting Bitcoin

幻灯片:https://github.com/glozow/bitcoin-notes/blob/master/Tx%20Relay%20Policy%20for%20L2%20Devs%20-%20Adopting%20Bitcoin%202021.pdf

引言

哈咯我是 Gloria,我在 Brink 公司为 Bitcoin Core 开发。今天我准备聊聊 “交易池规则(mempool policy)” 以及为什么你不能理所当然觉得你的交易会被广播、你的手续费追加方法要如何才能生效。我们正在做的事情就是要解决这些难题。如果你发现交易池有点难懂、不稳定、不可预期,你不确定自己的开发应用时能否放心使用交易池这个接口,这个演讲就是为你准备的。我猜大部分内容都对你有用。希望今天我们能让交易池变得更容易理解。

我的目标是让大家尽可能理解为交易的传播设置的各种限制。而我尝试的视角是作为一名比特币协议开发者的视角 —— 协议开发者如何思考交易池规则和交易转发策略。我接下来要定义什么是 “交易池规则”、我们如何分析它、为什么我们需要特定的策略并定义 “钉死攻击(pinning)”。我会从一些已知的限制开始,可能会澄清一些关于攻击界面的误解。我还会提到一种对闪电通道的攻击。

我的目标是,在大家听完演讲之后,就能成为朋友,然后开始讨论如何在 Layer 1 和 Layer 2 之间开发通畅的桥梁。

任何人都能发送比特币支付

我想从头开始,确认我们对比特币的理解是一致的。我们想要比特币这样的系统,是为了让任何人都能把钱发给另一个人。这就是比特币的全部。

人们经常使用 “免准入(permissionless)” 和 “去中心化(decentralized)”、“免信任(trustless)” 这样的词来指称比特币。但可能一个更准确的框架是这样的。我们想要的第一个特性,是非常便宜就能运行一个节点。假如说 —— 我们举个例子哈 —— bitcoind 无法运行在 MacOS 上,或者它需要 32 GB 的内存、需要每月 2000 美元的代价才能运行一个全节点,那这对我们来说是不可接受的。我们希望它能运行在你的树莓派(Raspberry Pi)上,也就是很便宜的硬件就能运行。我们想要的另一个特性是 “抗审查性(censorship resistance)”,因为我们在尝试开发一个系统、帮助那些无法获得传统金融系统服务的人。如果我们开发出来的支付系统,很容易就能被某一些政府、富有的大银行、大公司任意审查他们不喜欢的交易,那我们就失败了。还有一种特性,显然,是安全性。在这个领域,我们不能依赖于某一些政府,由他们来确定 “这个节点在现实生活中就是这个人,他做了一些坏事,我们去起诉他”,我们没有这样的选择。我们希望每个人都能运行自己的节点,这是比特币的设计哲学中属于免信任性和去中心化的一部分。因此,安全也不是一个可选项,而是我们设计交易转发策略时候的首要考量。第四,我们不能强迫任何人运行比特币节点的一个新补丁,所以,当我们说某一个升级会优化用户的隐私性、但矿工的成本会变得更高、损失 50% 的收入或交易手续费会变得更高时,我们无法期待其他人会升级自己的节点。在讨论交易转发时,激励兼容也是一个始终需要考虑的东西。这就是我们思考这些事物的基本框架。

点对点的交易转发网络

希望这些对你来说并不意外,但是,要知道,在比特币中,我们需要在一个分布式的点对点网络中实现这些特性 —— 理想情况下,每个人都运行一个比特币节点、连接到比特币网络。这样的连接有许多形式:可能是你在自己的笔记本后台运行一个 bitcoind 程序、你安装了 Umbrel 或者 Raspiblitz 的树莓派小电脑,也可能是一个服务几千个智能手机钱包用户的大公司的服务器。所有这些都是一个比特币节点。它们本身非常不一样,但在点对点网络中,它们都是一个比特币节点。在你尝试给某人支付的时候,你要做的就是连接你的节点、提交你的交易、将交易广播给你的对等节点,希望这些对等节点会继续广播你的交易、最终将交易传播给某一个矿工。矿工会将交易打包到某个区块。

交易池

交易转发策略的真正关键的地方在于:每个参与交易转发的人都维护着一个交易池(mempool)。如果你想知道 “交易池” 的定义,我这里有一个:它是 “未确认的交易(unconfirmed transactions)” 的缓存(cache),并且是为了选出最为激励兼容(也即手续费率最高)的交易集合而专门优化的。这个优化方向不论对矿工还是非矿工都有意义,它帮助我们优雅地重组(re-org)、允许我们以更私密的方式设计交易转发策略,因为我们希望它比 “只懂 接受-转发” 更聪明一些。这是非常有用的,我在这个主题上写了许多东西

交易池规则

你可能听过这样一句话,“本就没有唯一的交易池(there’s no such thing as the mempool)”,这是对的。因为每个节点都维护着一个自己的交易池,而且每个节点都可以使用不同的软件。他们可以随心所欲配置其交易池规则,因为比特币是自由的,对吧?不同节点的交易池可能看起来大相径庭,你的一代树莓派可能无法为交易池分配那么大的内存,所以你的交易池的策略可能更加严格。相反,一个矿工可能维护着一个非常大的交易池,因为他们希望有一个很长的备用交易队列,在区块还未满载的时候就可以从中选择交易、打包到区块中。我们将交易池规则称为 “共识规则之上,节点各自设立的交易验证规则” —— 共识规则是所有未确认交易都必须遵守的规则,否则就一定无法进入任何交易池。这些是未确认的交易,跟已经进入区块的交易无关。限制哪些交易能进入自己的交易池是各个节点的特权。

作为协议开发者,我们专注于保护 bitcoind 的用户

我真正想强调的是,从一个比特币协议开发者的角度看,我们想做的是保护运行比特币节点的用户。我跟应用开发者有过许多争执,他们认为我们应该保护正在广播交易的用户 —— 在某些情况下,这两者是不一致的。在交易池验证中,我们可用的资源是非常受限的。一般来说,只有 300 MB 的内存,而且我们不希望(比如说)花半秒钟来验证一笔交易,这太费时了。再强调一次,即使我们希望支持诚实的用户、吹哨人、社会运动人士、在传统世界中被审查的人,我们也需要注意,从设计上来说,我们连接的是一个个对等节点,这些对等节点也可能是攻击者。我们分析的最常见的情形是耗尽 CPU 资源。如果要花费半秒才能验证一笔交易,那真的是太慢了,因为可能一秒钟就有几百笔交易进入交易池。我们的节点是开源的,所以,如果我们的交易池代码中有 bug,那么(比如说),用户就有可能因为使用某种脚本的交易而耗尽内存,我们需要预想到它会被利用。还有一件事情是,在挖矿的竞争环境中,某个矿工用垃圾填埋他人的交易池、或者让对手的交易池变得无用、阻止高手续费交易进入对手的交易池,有时候会给 TA 带来好处。或者,在他们想发动长程攻击(long range attack)的时候,他们可以一边发送交易,让每个人都必须花半秒钟来验证一笔交易,另一边广播新区块。不论如何,他们让整个网络停顿了半秒,这给了他们提前挖掘下一个区块的优势 —— 这可不是件小事。或者,他们可以启用别的一些攻击。

另一个我今天不会多谈的顾虑是隐私性,我主要会谈闪电通道中的对手方,但我们也应该意识到,具有相互冲突的交易的闪电通道对手方可能会在手续费追加竞赛(或者说钉死攻击)中来回拉扯。我们希望做到的事情是:只要诚实的用户广播了一些激励兼容的东西,就总是能够赢得竞赛;至少是大部分时候吧。当然,交易转发策略也是一个我们高度在乎隐私性的领域。如果你能够发现发出某一笔交易的最初节点,那么你就能将一个 IP 地址与一个钱包地址关联起来,不论你如何在链上隐藏信息,这笔交易都会被去匿名化。这是一个对隐私性非常敏感的领域。这方面的另一个例子是,也许发起交易是最便宜的分析网络拓扑图的方法。在优化隐私性的时候,我们一直在努力保证不要搬起石头砸自己的脚。我今天不准备过多讨论隐私性,但这确实是顾虑之一。

策略的光谱

我们可以将策略想象成一个光谱,一端是 “理想的” 完美的交易池,我们有无限的资源,可以在收到交易或者需要组装区块时冻结时间,因此我们有无限多的时间来验证交易、组装区块。在这种情况下,也许我们会接受所有在共识上有效的交易。而在光谱的另一端,是完全防御性的、保守的交易池规则:我们不验证任何东西,不花费任何验证资源在来自我们所不知道的人的任何东西上。不幸的是,从网络层面看,为了让这种策略可持续,我们将需要一个中心身份注册表,将节点与钱包地址对应起来。这时候我们就变成了一个银行。

这两个端点都没有达到我们想要的设计目标。我们想要的是某个在中间的东西。

DoS 抗性并不是全部

但是,如果(DoS 抗性)是唯一的顾虑,那又太简单了。DoS 抗性并不是我们考虑的唯一东西,这是一种常见的误解。我们回到上面说的光谱的两端。我们会看到,线性的方法并不是全部。即使我们拥有无限的资源,我们真的想要接受所有在共识上有效的交易吗?答案也是否定的。我能提供的最好的例子是软分叉。虽然现在所有在共识上有效的交易所构成的空间是这么大,但这个范围也有可能会缩小。比如在最近的 Taproot 激活中,我们注意到,挖出激活区块的 F2Pool 实际上并没有升级节点、部署 Taproot 的验证逻辑。问题在于,如果某人给他们发送了一笔无效的 Taproot 交易,那会怎么样?如果 F2Pool 的交易池接受了这笔交易,然后 F2Pool 将它打包到了一个区块中,那么 F2Pool、AntPool 和所有使用未升级节点的用户就会被分叉出去。我们将经历一次网络分裂,未升级的节点和实际上执行了 Taproot 验证规则的节点将被割裂。这种情况很危险,但只要 51% 的哈希率部署了 Taproot 验证规则,那我们就没问题;但这是一种非常糟糕的情况,是我们希望避免的。所以,假设所有矿工都运行了 0.13 版本以上的节点,那就执行了包含隔离见证的规则。隔离见证设置了 v0 见证脚本的语义,它会反激励所有使用 v1 见证脚本和传统脚本的交易。这就是为什么 F2Pool 不执行 Taproot 规则,也不接受 v1 见证脚本的交易进入他们的交易池、不将它们打包到区块中。这说明了何以线性视角并不完美。

光谱的另一端是完全防御型的策略。问题在于,这里是否有一些策略,是我们尝试保护节点免受 DoS 攻击但伤害了用户的?这就要回到我们前面说的,我跟应用开发者的讨论,他们认为有一些策略伤害了他们的用户。例子之一是 “交易后代(descendant)” 的数量限制。在 Bitcoin Core 的标准化规则中,为一笔交易及其后代设置的默认限制是,在交易池中,这样的一串交易整体不能超过 101 k vB。这是一个有意义的规则,(如果没有这样的限制)你可以想象某个矿工会广播一连串的交易,使得每个人的交易池都被一笔交易及其后代占满;他们只要发布一个与某一笔交易冲突的区块,所有人的交易池都必须清空。其他人要花费几秒来更新目录,然后得到一个空的交易池、没有任何东西能打包到区块中,而这个矿工就可以先人一步,开始挖掘下一个区块。所以说,这个规则可以保护我们的资源有限的交易池。但是,对闪电网络的开发者来说(可能你也注意到了),在提议为闪电通道中的承诺交易加上 “锚点输出(anchor outputs)” 时,其想法正是,让每一方都可以立即发送携带更高手续费的子交易,来为承诺交易追加手续费。这是锚点输出背后的想法,是个好主意。但是,他们也想到了:“要是其中一方发送了一笔体积为 100 k vB 的子交易,这就触发了交易池默认策略的限制,那另一方还怎么追加手续费呢?” 这就是人们所知的 “钉死攻击” 中的一种。钉死攻击是一种审查攻击,攻击者利用了交易池的策略来阻止某一笔交易进入节点的交易池,或者阻止已经在交易池中的一笔交易被挖出。在这种情况下,他们就是在阻止这笔承诺交易被挖出。

于是,这就有必要提到 CPFP carve out 例外规则了,它是被硬塞到交易池规则中的,就是为了解决这种对闪电用户的攻击。这个规则有点复杂:假设额外的后代交易最多只有两个祖先交易,而它自身又小于 10 k vB,等等,那么,它可以得到豁免,不会被交易池规则拒之门外。这导致了许许多多的测试 bug,而且是一种丑陋的补丁。我在这里并不是想抱怨它,我是想说,一般的比特币节点用户,如果完全不关心闪电网络的话,那就没有理由在自己的交易池规则中启用这一条 —— 它可能不是完全激励兼容的。我后面会回到这一点。

为能够追加手续费而设计激励兼容的策略

关于手续费追加(fee bumping),我想指出一些人们可能会喜欢的交易池规则案例。这些手续费追加方法全部都是由交易池规则实现的。

第一种是 “RBF(手续费替换)”,它来自于交易池的一种行为;当我们收到一笔新交易、跟已经在交易池中的某一笔交易相冲突时,我们不会直接拒绝这笔交易,而是检查这笔新交易是否支付了高得多的手续费,从而可以考虑替换掉交易池中的那笔旧交易。这对用户来说是好事,因为它是激励兼容的,它允许用户为自己的交易追加手续费。
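为了把 “支付高得多的手续费才能替换” 说得更具体,下面是一个示意性的检查(只近似 BIP 125 中与手续费相关的规则,数值与函数名均为示意,并非 Bitcoin Core 的完整实现):

```python
# 示意:替换交易大致需要满足两条与手续费相关的规则(近似 BIP 125 规则 3、4):
# 1)新交易的手续费率要高于被替换的交易;
# 2)新增的绝对手续费至少要覆盖按最低增量转发费率计算的新交易体积。
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB,默认的 -incrementalrelayfee 约为此值

def can_replace(old_fee, old_vsize, new_fee, new_vsize):
    pays_higher_feerate = (new_fee / new_vsize) > (old_fee / old_vsize)
    pays_for_extra_relay = new_fee >= old_fee + INCREMENTAL_RELAY_FEERATE * new_vsize
    return pays_higher_feerate and pays_for_extra_relay

print(can_replace(old_fee=1000, old_vsize=200, new_fee=1100, new_vsize=200))  # False
print(can_replace(old_fee=1000, old_vsize=200, new_fee=1500, new_vsize=200))  # True
```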

另一种方法则基于交易池知道祖先交易和后代交易所组成的一个交易包,在我们为区块打包交易的时候,我们会把后代交易的手续费率也考虑进去。这使得子交易可以为父交易支付手续费,也叫 “CPFP(子为父偿)”。而且,当我们要从交易池中驱逐交易的时候,我们也不会驱逐自身手续费率较低、但后代手续费率较高的交易。它也是激励兼容的,也能帮助用户。
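下面用一个小例子把 “把后代手续费率也考虑进去” 算出来(数字与函数名均为示意,并非 Bitcoin Core 的实现):

```python
# 示意:CPFP 之所以有效,是因为打包区块时按 “祖先/交易包费率” 排序,
# 即把父交易和子交易的手续费与体积合并计算。
def package_feerate(txs):
    """txs 为 [(手续费_sat, 体积_vB), ...],返回合并后的费率(sat/vB)。"""
    total_fee = sum(fee for fee, _ in txs)
    total_vsize = sum(vsize for _, vsize in txs)
    return total_fee / total_vsize

parent = (200, 200)   # 父交易:单独看只有 1 sat/vB,很难被打包
child = (5000, 150)   # 子交易:花费父交易的输出,并支付高额手续费
print(package_feerate([parent]))         # 1.0
print(package_feerate([parent, child]))  # 约 14.9,父交易因此更可能被选入区块
```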

交易池规则

总结一下我们迄今为止的理解。我们定义了 “交易池规则” 作为 “共识规则之上、节点为尚未得到确认的交易设置的验证规则”。我们也列举了一些我们需要交易池规则的理由。最显著的理由之一是 DoS 抗性。我们也尝试了设计一种激励兼容的交易接受逻辑,它让我们可以安全地更新网络的共识规则。我还应该提到第四种分类,网络最佳实践或者说标准化规则(例如粉尘限制),但我们今天没有时间了。那是我认为更复杂的一种东西。

我将这个演讲命名为 “交易转发策略”,是因为我们注意到了一个事实:比特币网络上的大部分节点都运行 Bitcoin Core。而在推特上的无差别调查显示,在交易池规则上,大部分人都使用默认设置。所以,当你要为交易池规则变更开启一个 PR 时,这有点令人畏惧,但也是一个好事,因为它让我们对一笔交易是否能传遍网络有一个预期,所以可以称为 “交易转发策略”。

策略可能显得随意、不透明,而且让交易转发变得不可预测

现在我准备讲讲 Bitcoin Core 的默认交易池规则中已知的问题。我已经讲过,许多策略似乎是随意的,比如粉尘限制。我感同身受,所以我做了这样一个表情包来表示同情。但根本上,作为一个应用开发者,你没有广播交易的稳定接口、可预期的接口。尤其是在闪电网络或者说链下合约中,我们依赖于广播交易、及时让交易得到确认的能力来保证对手无法偷走我们的钱;没有这样的接口,就很头疼。我注意到一些交易标准化规则,其中最大的一个就是粉尘限制(译者注:对一个 UTXO 的最小面额的限制),还有交易需要遵循的许多复杂而不透明的脚本规则。另一类是交易在一个交易池中的求值。我已经提到了一些非常小众的、在计算祖先、后代的数量限制时候的例外。我也提到了 BIP 125 RBF 规则让很多人感到痛苦,因为有时候使用 RBF 是非常昂贵的,甚至完全不划算。当然,最大的问题之一是,在交易量高涨的时候,你永远不知道接下来会发生什么,你也不可能总是为自己的交易追加手续费。我在底部放了最糟糕的一种情况,就是每个交易池都有不同的策略。我自己认为,另一种情况也可以说是很糟糕的,就是应用开发者无法很好地理解交易池规则。这也是我在这里演讲的理由。

承诺交易无法相互替换

现在,我准备介绍一种具体的闪电通道攻击,它基于一个事实:一个闪电通道内的承诺交易是无法相互替换的。在闪电通道中, 你有这些条件的组合,你可以提前商量手续费率,但这个商量发生在广播这笔交易的很久以前(当时你并不准备广播这笔交易)。而且你不能使用 RBF。如果你的对手尝试欺诈你,或者你要单方面关闭通道(因为对方不响应你),显然他们不会跟你签名一笔新的承诺交易。但也因为交易池一次只看一笔替换交易,而你的承诺交易将跟对手的承诺交易有相同的费率,所以它们不能相互替换。即使这里的 Tx2A 比 Tx1A 有更高的费率,也不能替换 Tx1B。

一个小提醒,我在这里并没有完全封装闪电交易的所有脚本逻辑,请原谅我,我没有加入撤销密钥等等的东西。就把它们想成简化版的闪电交易就好。

闪电攻击

我们这里介绍一种发生在 Alice、Bob 和 Mallory 之间的攻击。Alice 和 Bob 有一条通道,Bob 和 Mallory 有一条通道。Alice 准备通过 Bob 给 Mallory 支付。如果闪电网络工作顺利,那么,要么 Bob 和 Mallory 都可以得到支付,要么都得不到支付。这就是我们希望的情形。当 Alice 给 Bob 一个 HTLC,而 Bob 又给 Mallory 一个 HTLC 时,两条通道内都发生了一笔交易。所以两条通道内都出现了一对承诺交易。因为 LN-Penalty,结成一对的两笔承诺交易是不对称的,这样我们才能对广播交易的一方实施惩罚。所以说,它们是相互冲突的交易。Tx1A 和 Tx1B 是相互冲突的。Tx2B 和 Tx2M 也是相互冲突的。只有其中一笔可以得到广播、在链上得到确认。我们构造了这些交易,使得 Alice 给 Bob 支付,同时 Bob 也给 Mallory 支付。Mallory 可以揭示原像,然后拿走 Bob 所支付的资金;要么,她不揭晓原像,Bob 可以在 t5 之后拿回自己所支付的资金。另一方面,如果 Bob 拿到了 Mallory 的原像,他也可以向 Alice 揭晓并拿走 Alice 所支付的资金;要么,Alice 将在 t6 之后拿回自己的资金。Bob 在 Mallory 不支付和 Alice 赎回之间有一段小小的缓冲时间。但剧透一下,这次攻击的结果是 Bob 给 Mallory 支付,但 Alice 不会给 Bob 支付。对应的是,Mallory 广播交易,然后通过原像获得支付;而 Alice 也广播交易,然后拿回给 Bob 的支付。也就是说,在 t6 之后,Alice 获得返回的资金,而 Bob 无法让资金返回。

发生了什么事情呢?Mallory 在 t4 广播了她的承诺交易,并带上了一笔体积巨大的钉死交易。这两笔交易进入了每一个节点的交易池。这里就分两种情况了:或者,Bob 也有一个交易池,并且观察到了发生的所有事,他知道 Mallory 广播了这些交易;或者,他并不知情。

我们先分析第一种情况。我们需要 CPFP carve out 规则,但就像我前面说的,我并不知道为什么每个节点都会将 CPFP carve out 作为自己的交易池规则的一部分。在这种情况下,Bob 可以拿回自己的资金,他可以为承诺交易追加手续费并拿回自己的资金。但这种情况说明 CPFP carve out 对闪电网络的安全性很关键。

第二种情况,Bob 没有交易池。我认为,这是对闪电节点更合理的安全假设;对我来说,没有理由要求闪电节点需要一个比特币交易池或需要监控比特币交易池。Bob 没有监控某个交易池,所以到 t5 的时候,他说,“Mallory 没有揭晓原像,而且她也不理我,所以我准备单方面关闭通道,我要广播我的承诺交易,再追加一些手续费,然后我就能在 to_self_delay 之后拿回我的资金。”问题在于,因为 Mallory 的交易已经在交易池中了,所以即使 Bob 广播了自己的那一笔,也没用,他会疑惑 “为什么我的交易无法得到确认?”他查看区块链,看不到自己的交易得到确认,于是非常迷茫。这是因为 Mallory 已经先一步广播了交易,而他不能替换掉 Mallory 的交易。这个问题的解决方案是 “交易包转发”。这时候,交易包转发就变成了闪电网络安全性的重要部分。

不论在哪种情况中,t5 过去了,但 Bob 还是不能成功拿回自己的资金。到了 t6,Alice 拿回了自己的资金,但 Bob 还没有。假定 Alice 和 Mallory 实际上是同一个人,她成功地从 Bob 处偷到了这个 HTLC 的价值。希望大家都搞明白了这个例子。

声明一下,如果承诺交易商讨了手续费率,使其可以在 t4、t5 时立即在下一个区块得到确认,那这种情况是很容易避免的。这就引向了我希望给 L2 和应用开发者的建议。

我们做朋友吧?

我的建议是暂时倾向于高估手续费,而不是低估手续费。这些钉死攻击只有在目标交易游荡在交易池底部的时候才能奏效。另一个建议是,永远不要依赖于零确认交易,因为你无法控制。你的交易池里的东西,跟矿工交易池里的东西不一定相同;甚至于网络中其他所有节点的交易池里都有一笔跟你的交易池中的交易相冲突的交易。所以,如果你无法完全控制那笔交易,就不应该依赖它(假设它还没有得到确认的话)。还有一个建议是,Bitcoin Core 有一个非常棒的 RPC 叫做 testmempoolaccept 。它会同时测试共识规则和默认的 Bitcoin Core 标准化规则;因为网络中的大部分节点都使用这些策略,所以这是很好的测试。尤其是你的应用依赖于手续费追加方法的话,我建议你们使用不同的 mempool 内容做不同的测试,甚至是相互冲突的内容、一批节点,来测试你的假设。每次放出新版本的时候都测试一下,甚至每当软件库的 Master 分支有更新的时候就测试一下。我不想给你太大的压力,但这就是我的建议,这样可以确保你的应用(在交易传播时)不会依赖于不靠谱的假设。
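下面是调用 testmempoolaccept 的一个示意性例子(RPC 地址、端口与凭据均为假设,请替换成你自己的 regtest/测试节点配置):

```python
# 示意:通过 JSON-RPC 调用 testmempoolaccept,检查一笔原始交易
# (较新版本的 Bitcoin Core 也支持一组交易)能否通过本节点的交易池规则。
import json
import requests

RPC_URL = "http://127.0.0.1:18443"     # 假设是一个 regtest 节点
RPC_AUTH = ("rpcuser", "rpcpassword")  # 假设的 RPC 凭据

def test_mempool_accept(raw_tx_hexes):
    payload = {
        "jsonrpc": "1.0",
        "id": "policy-check",
        "method": "testmempoolaccept",
        "params": [raw_tx_hexes],
    }
    resp = requests.post(RPC_URL, auth=RPC_AUTH, data=json.dumps(payload))
    resp.raise_for_status()
    # 每个元素包含 txid、allowed,被拒绝时还有 reject-reason 等字段
    return resp.json()["result"]

# 用法示例:result = test_mempool_accept(["<原始交易的十六进制>"])
```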

我也希望应用开发者能说出你的不满,不论是对我们的标准交易池规则,还是对测试用例(不足以覆盖你的应用场景)。请出面抱怨,讲出来。希望这个演讲可以说服你,虽然我们因为某些原因施加了许多限制,但我们也希望支持闪电应用。请给发送到 bitcoin-dev 邮件组列表中的提议发送反馈。

另一方面,我认为我们都同意,闪电交易也是比特币交易,所以至少在 Bitcoin Core 里面,我们的重要工作是为当前的交易转发策略形成说明书,并提供一个稳定的测试接口,来确保交易可以通过策略的检查。我们也在为当前的交易池规则开发多种多样的优化和简化。当前,我们已经同意,不应该不在 bitcoin-dev 里面讨论就限制交易池规则。但是,如果你们都不看 bitcoin-dev、或者看了也不给我们反馈,那就什么都没有用。让我们一起,让 L2 应用获得隐私性、可扩展和所有其它有用的特性,且不必牺牲安全性。

问答

问:使用 testmempoolaccept 的时候,它是硬编码了默认的交易池规则,还是它会基于我的节点正在使用的交易池规则?

答:如果你在自己的节点上调用 testmempoolaccept,它就会使用你的节点的策略。但假如你启用了一些 regtest 节点,或者你自己编译了 bitcoind,那应该是默认的策略。

问:所以说,如果不是默认策略,那就是我正在使用的策略。如果我想在我的节点测试其它策略的话,我可以变更我的策略,然后它就会传导到 testmempoolaccept

答:是的。

问:交易包转发策略有没有可能使我们能够安全地移除 CPFP carve out 规则,而不会伤害现有的闪电通道和其它应用的用户?

答:非常好的问题。我认为可以,但我还没有通盘思考过这个问题。我直觉认为可以。

问:一个关于监控交易池的闪电节点的问题。你说,假设一个闪电节点总能监控交易池是不合理的,因为它没必要。但如果你在转发 HTLC 的话,有人可能会把承诺交易发到链上,你可能会在那里看到原像。你必须接受使用相同哈希值的 HTLC。为什么我们应该等待交易得到确认,而不是在它还在交易池中的时候就采取行动呢?你不应该假设自己总能在交易池中看到它,因为确实可能收不到,但如果你能够在交易池中看到它,你就可以更快地行动。

答:我是这么理解你的问题的。在许多这样的攻击中,如果诚实的一方的监控了交易池,他们是能够缓解攻击、作出响应的。我认为这就是拥有交易池的区别,也是为什么闪电节点需要交易池来保护自己的交易。所以,一方面来说,我同意你说的。拥有交易池会带来好处。但我不认为每个人都希望闪电网络的安全性依赖于交易池。

问:而且,享受这种好处需要能够监控整个网络的交易池,这是不可能做到的。

答:是的。

问:你跟 Lisa 的 “death to the mempool” 是怎么回事?

答:我很开心能在线下讨论这个事。Lisa 和我讨论了我们有分歧的地方。她在自己的 bitcoin-dev 帖子里点名了我。人们以为我们是一致的,但实际上我们有分歧 —— 我认为这就是大家觉得困惑的地方。

问:她的意思是,交易池是不可依赖的,所以我们应该完全抛开它?

答:如他们所说,今天的交易池规则是不完美的。对于应用开发者来说,这是一个非常不稳定的接口。我已经听到许多闪电网络开发者说 “对我们来说,一个好得多的接口就是让我们公司跟某个矿工变成合作伙伴,从而让我们总是能把交易发送给他们”。比如说,所有的 LND 节点或者所有的 rust-lightning 节点都总是可以把交易发送给 F2Pool,或者别的矿池。这就解决了让我们头疼的因素,确保你的交易总是标准的、总能够通过策略检查。我理解为什么这对闪电网络开发者来说更舒服,但我感觉,它只是将 “我们需要交易池来让闪电通道变得安全” 换成了 “我们需要跟某个矿工的合作关系,才能让闪电通道变得安全”。我不认为它具有抗审查性、具有隐私性,我也不认为这是我们想要的比特币和闪电网络的发展方向。这是我个人的意见,也是为什么今天我要讲这个话题,因为我理解大家的不满,但我认为我们可以合作,创造出可靠的接口。

darosior:最终来说,这跟手续费追加方法很像,你总是有一些捷径,但这些捷径通常都会推动中心化。失去了抗审查性,我们才会意识到它的重要性。这就是为什么我们应该在弄丢它之前慎重考虑。

问:(丢失)

答:我无法提供一个具体的路线图,但我可以告诉你已经实现的东西,以及还没有实现的东西。所有的交易池验证代码都已经实现了。但还未全部合并。这部分也已经写好了。下一部分是定义 P2P 协议,这个我们每天都在推进。写出规范、形成 BIP、实现它,在审核之后合并。这是已经实现的和还没有实现的东西。你可以自己估计一下我们还需要多长时间。

问:你一直在开发交易包转发策略,你学到了什么让你惊喜的东西吗?比如说意料之外的、比你想象中更难或者更简单的东西?

答:我的惊喜在于它非常有趣。每天醒来就开始思考图论(graph theory)、激励兼容和 DoS 抗性是我的幸运。真的非常有趣。我从没想过它这么有趣。次一等的惊讶在于,并不是每个人都对交易池规则感兴趣。

问:你能不能为可能引入、不能明显看出的 DDoS 攻击界面举个例子?在我们开发交易包转发策略的时候。

答:举个例子,节点的 UTXO 集大约是几 GB。一部分数据存在硬盘上,而且我们使用分层的缓存。我们希望尽可能不让网络上的对等节点影响我们的缓存。但这会影响区块验证的性能。比如说,基于我们对一笔交易的体积限制,我们可以预期 UTXO 缓存系统会出现混乱。我们不希望交易的体积太大,因为我们希望一次性验证 25 笔交易,或者 2 笔交易,乃至 n 笔交易。这就是一种 DoS 攻击的例子。同样的道理,如果一个交易包对比单笔交易,其签名数量是呈指数级增长的,我们并不希望需要投入的验证资源也呈指数级增长。

问:闪电通道是一个非常特别的例子,它的交易包有两笔交易。这是为了先解决这个问题,然后再考虑一般的情形吗?

答:是的。单个子交易和单个父交易,对比单个子交易和多个父交易。我认为,它基本上是一种权衡,在实现的复杂性和它对 L1 及 L2 协议的有用性中取舍。基于我们知道的事实,大量的批处理手续费追加在比特币链上发生,多个父交易和单个子交易的组合是非常有用的。然后,我们对比 “单个子交易、单个父交易” 和 “单个子交易、多个父交易” 时,会发现后者的复杂性没有那么糟糕。所以两相权衡之下,单个子交易、多个父交易的功能更好。

darosior:将交易池规则的用场限制成为闪电网络提供帮助,在我看来是很奇怪的。我部分同意的对 CPFP carve out 的批评是,为多个父交易追加手续费、CPFP,对其它应用也是非常有用的。

问:我同意你的说法,交易池是一个非常有趣的研究领域。纵观整个区块链领域,许多东西都因为交易池规则而崩溃,这不一定发生在比特币上,也可能发生在其它区块链上。你有没有花时间研究过其它链尝试过的方法,失败的和成功到可以引入比特币交易池中的方法?

答:没有,我从没这样做过。并不是因为我不觉得别的地方也有好想法。我不是唯一一个思考过交易池的人,比特币也不是唯一一个关注交易池的地方。我只是注意到,我们在交易池规则中遇到的大量问题,都根源于一个事实:比特币采用了 UTXO 模式。它很棒,因为我们不会遭遇无法执行交易排序的问题;父交易一定要在子交易之前得到确认,这很清楚。但这也意味着,在我们创建的交易池中,75% 的交易池动态内存使用,都分配给了元数据,而不是交易本身。我们需要跟踪交易包的整体,而不是把它们当作一个又一个的原子。

问:你说的是在交易包转发的世界中,还是在 Bitcoin Core 里面?

答:在 Bitcoin Core 里面。

问:现在?

答:是的,75% 的数字,总是能吓到很多人。我认为,这在很大程度上是因为我们使用了 UTXO 模型。我不认为这是个坏事,只是一个我们必须处理而别人不必处理的东西。就像我说的,这也带来了别人无法获得的一些好处。

问:我在哪里可以找到更多关于交易包转发的资料?

答:Brink 最近放出了一档播客,叫做 “The Bitcoin Development Podcast”。我们已经录了 5 期,都是关于交易池规则、钉死攻击、交易包转发和挖矿优化的。我推荐你去听听看:brink.dev/podcast

问:你可以讲讲粉尘限制和交易池规则吗?有没有计划为交易池规则降低粉尘限制?我们应该有粉尘限制吗?还是不应该设置呢?是否不同客户端对粉尘限制有不同的想法?我认为这会影响闪电节点在有分歧的时候的关闭通道的操作。

答:解释一下,“粉尘输出” 就是 “不经济的输出”。假设你有一个价值 20 聪的输出,即使当前网络的手续费率是 1 聪/vB,你也无法从花费它中得到任何好处,因为用来证明你可以花费这个输出的数据就将要求你支付 20 聪。在比特币的交易池规则中,我们会拒绝接受任何产生粉尘输出的交易。这不仅仅是为了把关,也是因为,如果在全局的 UTXO 集中产生了粉尘输出,没有任何人愿意花费它,但网络中的每个人都必须保存它。因为可扩展性本质上受限于 UTXO 集的规模,所以 Bitcoin Core 的开发者武断地选择了一个限制:我们不允许粉尘通过 Bitcoin Core 的节点来传播。但它就是我在演讲中提到的最佳习惯的一个例子:粉尘交易在共识上是有效的,但它不属于某一种的 DoS 保护措施,但在某种程度上它又是一种 DoS:你是在要求整个网络为你保存粉尘输出。这也是一个能够体现应用开发者和协议开发者的内在张力的例子(在交易池规则上)。举个例子,在闪电通道中,如果你有一个输出,比如是一个正在服役的 HTLC 输出,或者是表示通道余额的输出,它的价值太低了,成了一个粉尘,那么这笔交易是不会被广播的。所以,我认为,在闪电通道中,如果你有一个输出会成为粉尘,那你应该把它变成手续费。这被认为是最佳的做法。但这样是不便利的,因为必须在实现闪电通道的软件中增加额外的东西。当然,我一直在尝试共情,我也理解为什么这让人难受。但设置粉尘限额是有理由的。这里面是有矛盾的。我并不是说这些东西永远不会改变,我只是把理由说出来。
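把上面的算术写出来大概是这样(仅为示意,思路与 Bitcoin Core 的粉尘判断一致,具体常量与函数请以源码中的 GetDustThreshold 为准):

```python
# 示意:如果一个输出的面额低于 “创建它 + 日后花费它” 所需的手续费,
# 它就是不经济的 “粉尘”。按默认的 -dustrelayfee(约 3 sat/vB)计算:
DUST_RELAY_FEERATE = 3  # sat/vB

def dust_threshold(output_size_vb, spend_input_size_vb, feerate=DUST_RELAY_FEERATE):
    """返回低于该值即视为粉尘的面额(聪)。"""
    return (output_size_vb + spend_input_size_vb) * feerate

print(dust_threshold(34, 148))  # P2PKH 输出:约 546 聪
print(dust_threshold(31, 67))   # P2WPKH 输出:约 294 聪
```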
